What to expect from AutoML software

Automated machine learning (AutoML) takes a higher-level approach to machine learning than most practitioners are used to, so we've gathered a handful of guidelines on what to expect when running AutoML software such as TPOT.

AutoML algorithms aren't intended to run for only a few minutes

Of course, you can run TPOT for only a few minutes and it will find a reasonably good pipeline for your dataset. However, if you don't run TPOT for very long, it may not find the best pipeline possible for your dataset. Often it is worthwhile to run multiple instances of TPOT in parallel for a long time (hours to days) to allow TPOT to thoroughly search the pipeline space for your dataset.

AutoML algorithms can take a long time to finish their search

AutoML algorithms aren't as simple as fitting one model on the dataset; they are considering multiple machine learning algorithms (random forests, linear models, SVMs, etc.) in a pipeline with multiple preprocessing steps (missing value imputation, scaling, PCA, feature selection, etc.), the hyperparameters for all of the models and preprocessing steps, as well as multiple ways to ensemble or stack the algorithms within the pipeline.

As such, TPOT will take a while to run on larger datasets, but it's important to realize why. With the default TPOT settings (100 generations with 100 population size), TPOT will evaluate 10,000 pipeline configurations before finishing. To put this number into context, think about a grid search of 10,000 hyperparameter combinations for a machine learning algorithm and how long that grid search will take. That is 10,000 model configurations to evaluate with 10-fold cross-validation, which means that roughly 100,000 models are fit and evaluated on the training data in one grid search. That's a time-consuming procedure, even for simpler models like decision trees.

Typical TPOT runs will take hours to days to finish (unless it's a small dataset), but you can always interrupt the run partway through and see the best results so far. TPOT also provides a warm_start parameter that lets you restart a TPOT run from where it left off.
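
For example, a minimal sketch of how warm_start might be used (X_train and y_train are assumed to be an existing training set):

from tpot import TPOTClassifier

# warm_start=True tells TPOT to reuse its evolved population when fit()
# is called again, rather than restarting the search from scratch.
pipeline_optimizer = TPOTClassifier(generations=5, population_size=20,
                                    verbosity=2, warm_start=True)

pipeline_optimizer.fit(X_train, y_train)  # initial (possibly interrupted) run
pipeline_optimizer.fit(X_train, y_train)  # continues from the previous population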

AutoML algorithms can recommend different solutions for the same dataset

If you're working with a reasonably complex dataset or run TPOT for a short amount of time, different TPOT runs may result in different pipeline recommendations. TPOT's optimization algorithm is stochastic in nature, which means that it uses randomness (in part) to search the possible pipeline space. When two TPOT runs recommend different pipelines, this means either that the runs didn't converge due to lack of time or that multiple pipelines perform more or less the same on your dataset.

This is actually an advantage over fixed grid search techniques: TPOT is meant to be an assistant that gives you ideas on how to solve a particular machine learning problem by exploring pipeline configurations that you might have never considered, then leaves the fine-tuning to more constrained parameter tuning techniques such as grid search.
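
For instance, once TPOT has suggested a promising model, you could hand the fine-tuning over to a narrow grid search. This is only an illustrative sketch: the random forest and parameter ranges below are hypothetical stand-ins for whatever TPOT recommended, and X_train and y_train are assumed to be an existing training set.

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Narrow grid around a hypothetical TPOT recommendation.
param_grid = {
    'n_estimators': [50, 100, 200],
    'max_features': ['sqrt', 'log2'],
    'min_samples_leaf': [1, 2, 4],
}

grid_search = GridSearchCV(RandomForestClassifier(), param_grid, cv=5)
grid_search.fit(X_train, y_train)
print(grid_search.best_params_)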

TPOT with code

We've taken care to design the TPOT interface to be as similar as possible to scikit-learn.

TPOT can be imported just like any regular Python module. To import TPOT, type:

from tpot import TPOTClassifier

then create an instance of TPOT as follows:

pipeline_optimizer = TPOTClassifier()

It's also possible to use TPOT for regression problems with the TPOTRegressor class. Other than the class name, a TPOTRegressor is used the same way as a TPOTClassifier. You can read more about the TPOTClassifier and TPOTRegressor classes in the API documentation.
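
For example, a minimal TPOTRegressor sketch (using scikit-learn's diabetes dataset purely for illustration) might look like:

from tpot import TPOTRegressor
from sklearn.datasets import load_diabetes
from sklearn.model_selection import train_test_split

# Load a small regression dataset and split it into training and testing sets.
diabetes = load_diabetes()
X_train, X_test, y_train, y_test = train_test_split(diabetes.data, diabetes.target,
                                                    train_size=0.75, test_size=0.25)

# TPOTRegressor is configured and used exactly like TPOTClassifier.
pipeline_optimizer = TPOTRegressor(generations=5, population_size=20, cv=5,
                                   random_state=42, verbosity=2)
pipeline_optimizer.fit(X_train, y_train)
print(pipeline_optimizer.score(X_test, y_test))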

Some example code with custom TPOT parameters might look like:

pipeline_optimizer = TPOTClassifier(generations=5, population_size=20, cv=5,
                                    random_state=42, verbosity=2)

Now TPOT is ready to optimize a pipeline for you. You can tell TPOT to optimize a pipeline based on a data set with the fit function:

pipeline_optimizer.fit(X_train, y_train)

The fit function takes in a training data set and uses k-fold cross-validation when evaluating pipelines. It then initializes the genetic programming algorithm to find the best pipeline based on its average k-fold cross-validation score.
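
Conceptually, the fitness TPOT assigns to each candidate pipeline resembles the sketch below (a simplification of the idea, not TPOT's actual internals; the candidate pipeline shown is a hypothetical stand-in):

from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# A hypothetical candidate pipeline, standing in for one TPOT might propose.
candidate_pipeline = make_pipeline(StandardScaler(), LogisticRegression())

# The candidate's fitness is roughly its mean k-fold cross-validation
# score on the training data.
scores = cross_val_score(candidate_pipeline, X_train, y_train, cv=5)
print(scores.mean())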

You can then proceed to evaluate the final pipeline on the testing set with the score function:

print(pipeline_optimizer.score(X_test, y_test))

Finally, you can tell TPOT to export the corresponding Python code for the optimized pipeline to a text file with the export function:

pipeline_optimizer.export('tpot_exported_pipeline.py')

Once this code finishes running, tpot_exported_pipeline.py will contain the Python code for the optimized pipeline.
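
The exact contents depend on the pipeline TPOT found, but an exported file might look roughly like the sketch below. The chosen model, its hyperparameters, the placeholder file path, and the 'target' column name are all hypothetical illustrations, not guaranteed output:

import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Load the data the same way it was provided to TPOT (placeholder path and column name).
tpot_data = pd.read_csv('PATH/TO/DATA/FILE', dtype=np.float64)
features = tpot_data.drop('target', axis=1).values
training_features, testing_features, training_target, testing_target = \
    train_test_split(features, tpot_data['target'].values, random_state=42)

# A hypothetical pipeline that a TPOT run might have settled on.
exported_pipeline = KNeighborsClassifier(n_neighbors=6, weights="distance")
exported_pipeline.fit(training_features, training_target)
results = exported_pipeline.predict(testing_features)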

Below is a full example script using TPOT to optimize a pipeline, score it, and export the best pipeline to a file.

from tpot import TPOTClassifier
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(digits.data, digits.target,
                                                    train_size=0.75, test_size=0.25)

pipeline_optimizer = TPOTClassifier(generations=5, population_size=20, cv=5,
                                    random_state=42, verbosity=2)
pipeline_optimizer.fit(X_train, y_train)
print(pipeline_optimizer.score(X_test, y_test))
pipeline_optimizer.export('tpot_exported_pipeline.py')

Check our examples to see TPOT applied to some specific data sets.

TPOT on the command line

To use TPOT via the command line, enter the following command with a path to the data file:

tpot /path_to/data_file.csv

An example command-line call to TPOT may look like:

tpot data/mnist.csv -is , -target class -o tpot_exported_pipeline.py -g 5 -p 20 -cv 5 -s 42 -v 2

TPOT offers several arguments that can be provided at the command line. To see brief descriptions of these arguments, enter the following command:

tpot --help

Detailed descriptions of the command-line arguments are below.

Argument Parameter Valid values Effect
-is INPUT_SEPARATOR Any string Character used to separate columns in the input file.
-target TARGET_NAME Any string Name of the target column in the input file.
-mode TPOT_MODE ['classification', 'regression'] Whether TPOT is being used for a supervised classification or regression problem.
-o OUTPUT_FILE String path to a file File to export the code for the final optimized pipeline.
-g GENERATIONS Any positive integer Number of iterations to run the pipeline optimization process. Generally, TPOT will work better when you give it more generations (and therefore time) to optimize the pipeline.

TPOT will evaluate POPULATION_SIZE + GENERATIONS x OFFSPRING_SIZE pipelines in total.
-p POPULATION_SIZE Any positive integer Number of individuals to retain in the GP population every generation. Generally, TPOT will work better when you give it more individuals (and therefore time) to optimize the pipeline.

TPOT will evaluate POPULATION_SIZE + GENERATIONS x OFFSPRING_SIZE pipelines in total.
-os OFFSPRING_SIZE Any positive integer Number of offspring to produce in each GP generation.

By default, OFFSPRING_SIZE = POPULATION_SIZE.
-mr MUTATION_RATE [0.0, 1.0] GP mutation rate in the range [0.0, 1.0]. This tells the GP algorithm how many pipelines to apply random changes to every generation.

We recommend using the default parameter unless you understand how the mutation rate affects GP algorithms.
-xr CROSSOVER_RATE [0.0, 1.0] GP crossover rate in the range [0.0, 1.0]. This tells the GP algorithm how many pipelines to "breed" every generation.

We recommend using the default parameter unless you understand how the crossover rate affects GP algorithms.
-scoring SCORING_FN 'accuracy', 'adjusted_rand_score', 'average_precision', 'balanced_accuracy', 'f1', 'f1_macro', 'f1_micro', 'f1_samples', 'f1_weighted', 'neg_log_loss', 'neg_mean_absolute_error', 'neg_mean_squared_error', 'neg_median_absolute_error', 'precision', 'precision_macro', 'precision_micro', 'precision_samples', 'precision_weighted', 'r2', 'recall', 'recall_macro', 'recall_micro', 'recall_samples', 'recall_weighted', 'roc_auc' Function used to evaluate the quality of a given pipeline for the problem. By default, accuracy is used for classification and mean squared error (MSE) is used for regression.

TPOT assumes that any function with "error" or "loss" in the name is meant to be minimized, whereas any other functions will be maximized.

See the section on scoring functions for more details.
-cv CV Any integer >1 Number of folds to evaluate each pipeline over in k-fold cross-validation during the TPOT optimization process.
-sub SUBSAMPLE (0.0, 1.0] Subsample ratio of the training instances. Setting it to 0.5 means that TPOT randomly uses half of the training samples during the pipeline optimization process.
-njobs NUM_JOBS Any positive integer or -1 Number of CPUs for evaluating pipelines in parallel during the TPOT optimization process.

Assigning this to -1 will use as many cores as available on the computer.
-maxtime MAX_TIME_MINS Any positive integer How many minutes TPOT has to optimize the pipeline.

If provided, this setting will override the "generations" parameter and allow TPOT to run until it runs out of time.
-maxeval MAX_EVAL_MINS Any positive integer How many minutes TPOT has to evaluate a single pipeline.

Setting this parameter to higher values will allow TPOT to consider more complex pipelines but will also allow TPOT to run longer.
-s RANDOM_STATE Any positive integer Random number generator seed for reproducibility.

Set this seed if you want your TPOT run to be reproducible with the same seed and data set in the future.
-config CONFIG_FILE File path or string A path to a configuration file for customizing the operators and parameters that TPOT uses in the optimization process.

See the built-in configurations section for the list of configurations included with TPOT, and the custom configuration section for more information and examples of how to create your own TPOT configurations.
-v VERBOSITY {0, 1, 2, 3} How much information TPOT communicates while it is running.

0 = none, 1 = minimal, 2 = high, 3 = all.

A setting of 2 or higher will add a progress bar during the optimization procedure.
--no-update-check Flag indicating whether the TPOT version checker should be disabled.
--version Show TPOT's version number and exit.
--help Show TPOT's help documentation and exit.

Scoring functions

TPOT makes use of sklearn.model_selection.cross_val_score for evaluating pipelines, and as such offers the same support for scoring functions. There are two ways to make use of scoring functions with TPOT:

  1. You can pass in a string to the scoring parameter from the list above. Any other strings will cause TPOT to throw an exception.

  2. You can pass a function with the signature scorer(y_true, y_pred), where y_true are the true target values and y_pred are the predicted target values from an estimator. To do this, you should implement your own function. See the example below for further explanation.

from tpot import TPOTClassifier
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(digits.data, digits.target,
                                                    train_size=0.75, test_size=0.25)

def my_custom_accuracy(y_true, y_pred):
    return float(sum(y_pred == y_true)) / len(y_true)

tpot = TPOTClassifier(generations=5, population_size=20, verbosity=2,
                      scoring=my_custom_accuracy)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))
tpot.export('tpot_mnist_pipeline.py')

Built-in TPOT configurations

TPOT comes with a handful of default operators and parameter configurations that we believe work well for optimizing machine learning pipelines. Below is a list of the current built-in configurations that come with TPOT.

Default TPOT: TPOT will search over a broad range of preprocessors, feature constructors, feature selectors, models, and parameters to find a series of operators that minimize the error of the model predictions. Some of these operators are complex and may take a long time to run, especially on larger datasets.

Note: This is the default configuration for TPOT. To use this configuration, use the default value (None) for the config_dict parameter.

Operators: Classification, Regression

TPOT light: TPOT will search over a restricted range of preprocessors, feature constructors, feature selectors, models, and parameters to find a series of operators that minimize the error of the model predictions. Only simpler and fast-running operators will be used in these pipelines, so TPOT light is useful for finding quick and simple pipelines for a classification or regression problem.

This configuration works for both the TPOTClassifier and TPOTRegressor.

Operators: Classification, Regression

TPOT MDR: TPOT will search over a series of feature selectors and Multifactor Dimensionality Reduction models to find a series of operators that maximize prediction accuracy. The TPOT MDR configuration is specialized for genome-wide association studies (GWAS), and is described in detail online.

Note that TPOT MDR may be slow to run because the feature selection routines are computationally expensive, especially on large datasets.

Operators: Classification, Regression

To use any of these configurations, simply pass the string name of the configuration to the config_dict parameter (or -config on the command line). For example, to use the "TPOT light" configuration:

from tpot import TPOTClassifier
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(digits.data, digits.target,
                                                    train_size=0.75, test_size=0.25)

tpot = TPOTClassifier(generations=5, population_size=20, verbosity=2,
                      config_dict='TPOT light')
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))
tpot.export('tpot_mnist_pipeline.py')

Customizing TPOT's operators and parameters

Beyond the default configurations that come with TPOT, in some cases it is useful to limit the algorithms and parameters that TPOT considers. For that reason, we allow users to provide TPOT with a custom configuration for its operators and parameters.

The custom TPOT configuration must be in nested dictionary format, where the first level key is the path and name of the operator (e.g., sklearn.naive_bayes.MultinomialNB) and the second level key is the corresponding parameter name for that operator (e.g., fit_prior). The second level key should point to a list of parameter values for that parameter, e.g., 'fit_prior': [True, False].

For a simple example, the configuration could be:

tpot_config = {
    'sklearn.naive_bayes.GaussianNB': {
    },

    'sklearn.naive_bayes.BernoulliNB': {
        'alpha': [1e-3, 1e-2, 1e-1, 1., 10., 100.],
        'fit_prior': [True, False]
    },

    'sklearn.naive_bayes.MultinomialNB': {
        'alpha': [1e-3, 1e-2, 1e-1, 1., 10., 100.],
        'fit_prior': [True, False]
    }
}

in which case TPOT would only consider pipelines containing GaussianNB, BernoulliNB, and MultinomialNB, and would tune those algorithms' parameters within the ranges provided. This dictionary can be passed directly within the code to the TPOTClassifier/TPOTRegressor config_dict parameter, described above. For example:

from tpot import TPOTClassifier
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(digits.data, digits.target,
                                                    train_size=0.75, test_size=0.25)

tpot_config = {
    'sklearn.naive_bayes.GaussianNB': {
    },

    'sklearn.naive_bayes.BernoulliNB': {
        'alpha': [1e-3, 1e-2, 1e-1, 1., 10., 100.],
        'fit_prior': [True, False]
    },

    'sklearn.naive_bayes.MultinomialNB': {
        'alpha': [1e-3, 1e-2, 1e-1, 1., 10., 100.],
        'fit_prior': [True, False]
    }
}

tpot = TPOTClassifier(generations=5, population_size=20, verbosity=2,
                      config_dict=tpot_config)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))
tpot.export('tpot_mnist_pipeline.py')

Command-line users must create a separate .py file with the custom configuration and provide the path to the file to the tpot call. For example, if the simple example configuration above is saved in tpot_classifier_config.py, that configuration could be used on the command line with the command:

tpot data/mnist.csv -is , -target class -config tpot_classifier_config.py -g 5 -p 20 -v 2 -o tpot_exported_pipeline.py

When using the command-line interface, the configuration file specified in the -config parameter must name its custom TPOT configuration tpot_config. Otherwise, TPOT will not be able to locate the configuration dictionary.
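
For example, tpot_classifier_config.py could contain nothing more than the custom dictionary shown above, assigned to a variable named tpot_config:

# Contents of tpot_classifier_config.py
tpot_config = {
    'sklearn.naive_bayes.GaussianNB': {
    },

    'sklearn.naive_bayes.BernoulliNB': {
        'alpha': [1e-3, 1e-2, 1e-1, 1., 10., 100.],
        'fit_prior': [True, False]
    },

    'sklearn.naive_bayes.MultinomialNB': {
        'alpha': [1e-3, 1e-2, 1e-1, 1., 10., 100.],
        'fit_prior': [True, False]
    }
}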

For more detailed examples of how to customize TPOT's operator configuration, see the default configurations for classification and regression in TPOT's source code.

Note that you must have all of the corresponding packages for the operators installed on your computer; otherwise, TPOT will not be able to use them. For example, if XGBoost is not installed on your computer, then TPOT will simply neither import nor use XGBoost in the pipelines it considers.