TPOT now detects whether there are missing values in your dataset and replaces them with the median value of the column.
TPOT now allows you to set a `group` parameter in the `fit` function so you can use the GroupKFold cross-validation strategy.
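Group-aware splitting keeps all samples from the same group on the same side of every split. Here is a minimal plain-Python sketch of that idea — a conceptual illustration only, not TPOT's or scikit-learn's implementation (TPOT's actual keyword name may differ):

```python
def group_folds(groups, k=2):
    """Assign sample indices to k test folds so that no group is
    split across folds (the core idea behind GroupKFold)."""
    distinct = sorted(set(groups))
    fold_of_group = {g: i % k for i, g in enumerate(distinct)}
    folds = [[] for _ in range(k)]
    for idx, g in enumerate(groups):
        folds[fold_of_group[g]].append(idx)
    return folds

# Samples 0-1 share group "a", 2-3 share "b", and 4 is group "c":
print(group_folds(["a", "a", "b", "b", "c"], k=2))  # [[0, 1, 4], [2, 3]]
```

Because whole groups move together, no test fold ever contains samples whose group also appears in the training data for that fold.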
TPOT now allows you to set a subsample ratio of the training instances with the `subsample` parameter. For example, setting `subsample=0.5` tells TPOT to create a fixed subsample of half of the training data for the pipeline optimization process. This parameter can be useful for speeding up the pipeline optimization process, but may give less accurate performance estimates from cross-validation.
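The key property is that the subsample is drawn once and reused, so every candidate pipeline is compared on the same data. A plain-Python sketch of that behavior (conceptual only, not TPOT's internal code):

```python
import random

def fixed_subsample(X, y, subsample=0.5, seed=0):
    """Draw one fixed random subsample of the training data; every
    candidate pipeline is then evaluated on the same subset."""
    n_keep = int(len(X) * subsample)
    rng = random.Random(seed)
    idx = sorted(rng.sample(range(len(X)), n_keep))
    return [X[i] for i in idx], [y[i] for i in idx]

X = [[i] for i in range(100)]
y = [i % 2 for i in range(100)]
X_sub, y_sub = fixed_subsample(X, y, subsample=0.5)
print(len(X_sub))  # 50
```

Fixing the seed makes the subsample identical across calls, which is what keeps pipeline comparisons fair during optimization.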
TPOT now has more built-in configurations, including TPOT MDR and TPOT light, for both classification and regression problems.
`TPOTRegressor` now exposes useful internal attributes, including `evaluated_individuals_`. These attributes are described in the API documentation.
Oh, TPOT now has thorough API documentation. Check it out!
Fixed a reproducibility issue where setting `random_seed` didn't necessarily result in the same results every time. This bug was present since TPOT v0.7.
Refined input checking in TPOT.
Removed code that was not compliant with Python 2.
TPOT now has multiprocessing support, allowing you to use multiple processes in parallel to accelerate the pipeline optimization process with the `n_jobs` parameter.
TPOT now allows you to customize the operators and parameters considered during the optimization process, which can be accomplished with the new `config_dict` parameter. The format of this customized dictionary can be found in the online documentation, along with a list of built-in configurations.
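A small custom configuration, following the shape used by TPOT's built-in configurations (consult the online documentation for the authoritative format): each key is an operator's full import path, and each value maps a parameter name to the candidate values TPOT may choose from.

```python
# An empty dict as the value means "use the operator's defaults".
tpot_config = {
    'sklearn.linear_model.LogisticRegression': {
        'C': [1e-2, 1e-1, 1.0, 10.0],
        'penalty': ['l1', 'l2'],
    },
    'sklearn.naive_bayes.GaussianNB': {},
}

# Hypothetical usage (requires TPOT installed):
# tpot = TPOTClassifier(config_dict=tpot_config)
```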
TPOT now allows you to specify a time limit for evaluating a single pipeline (default limit is 5 minutes) in the optimization process with the `max_eval_time_mins` parameter, so TPOT won't spend hours evaluating overly complex pipelines.
We tweaked TPOT's underlying evolutionary optimization algorithm to work even better, including using the mu+lambda algorithm. This algorithm gives you more control over how many pipelines are generated every iteration with the `offspring_size` parameter.
Refined the default operators and parameters in TPOT, so TPOT 0.7 should work even better than 0.6.
TPOT now supports sample weights in the fitness function if some of your samples are more important to classify correctly than others. The sample weights option works the same as in scikit-learn, e.g., `tpot.fit(x_train, y_train, sample_weights=sample_weights)`.
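Conceptually, a sample-weighted fitness scores each prediction in proportion to its weight, the same way scikit-learn's weighted scorers do. A plain-Python sketch of weighted accuracy (illustrative only, not TPOT's fitness code):

```python
def weighted_accuracy(y_true, y_pred, sample_weights):
    """Accuracy where each sample counts proportionally to its weight."""
    hit = sum(w for t, p, w in zip(y_true, y_pred, sample_weights) if t == p)
    return hit / sum(sample_weights)

# Misclassifying the heavily weighted third sample is costly:
print(weighted_accuracy([1, 0, 1, 1], [1, 0, 0, 1], [1, 1, 5, 1]))  # 0.375
```

With uniform weights this reduces to ordinary accuracy (3/4 here); the weight of 5 drags the score down to 3/8.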
The default scoring metric in TPOT has been changed from balanced accuracy to accuracy, the same default metric for classification algorithms in scikit-learn. Balanced accuracy can still be used by setting `scoring='balanced_accuracy'` when creating a TPOT instance.
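The two metrics diverge on imbalanced data, which is why the choice of default matters. A plain-Python sketch (balanced accuracy is the mean of per-class recall, so each class counts equally):

```python
def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recall, so each class counts equally."""
    recalls = []
    for c in sorted(set(y_true)):
        idx = [i for i, t in enumerate(y_true) if t == c]
        recalls.append(sum(y_pred[i] == c for i in idx) / len(idx))
    return sum(recalls) / len(recalls)

# A majority-class predictor on a 9:1 imbalanced dataset:
y_true = [0] * 9 + [1]
y_pred = [0] * 10
print(accuracy(y_true, y_pred))           # 0.9
print(balanced_accuracy(y_true, y_pred))  # 0.5
```

The degenerate predictor looks strong under accuracy (0.9) but balanced accuracy exposes that it never finds the minority class (0.5, chance level for two classes).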
TPOT now supports regression problems! We have created two separate `TPOTClassifier` and `TPOTRegressor` classes to support classification and regression problems, respectively. The command-line interface also supports this feature through the `-mode` parameter.
TPOT now allows you to specify a time limit for the optimization process with the `max_time_mins` parameter, so you no longer need to guess how long TPOT will take to recommend a pipeline to you.
Added a new operator that performs feature selection using ExtraTrees feature importance scores.
XGBoost has been added as an optional dependency to TPOT. If you have XGBoost installed, TPOT will automatically detect your installation and use the `XGBoostRegressor` in its pipelines.
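Detecting an optional dependency without importing it can be done with the standard library; this is a generic sketch of the pattern, not TPOT's actual detection code:

```python
import importlib.util

def has_xgboost():
    """Return True if the optional xgboost package is importable,
    without actually importing it."""
    return importlib.util.find_spec("xgboost") is not None

print(has_xgboost())
```

Using `find_spec` avoids paying the import cost (and avoids import-time side effects) when you only need to know whether the package is present.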
TPOT now offers a verbosity level of 3 ("science mode"), which outputs the entire Pareto front instead of only the current best score. This feature may be useful for users looking to make a trade-off between pipeline complexity and score.
- Major refactor: Each operator is defined in a separate class file. Hooray for easier-to-maintain code!
- TPOT now exports directly to scikit-learn Pipelines instead of hacky code.
- Internal representation of individuals now uses scikit-learn pipelines.
- Parameters for each operator have been optimized so TPOT spends less time exploring useless parameters.
- We have removed pandas as a dependency and instead use numpy matrices to store the data.
- TPOT now uses k-fold cross-validation when evaluating pipelines, with a default k = 3. This k parameter can be tuned when creating a new TPOT instance.
- Improved scoring function support: Even though TPOT uses balanced accuracy by default, you can now have TPOT use any of the scoring functions that `cross_val_score` supports.
- Added the scikit-learn Normalizer preprocessor.
- Minor text fixes.
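The k-fold evaluation mentioned above can be sketched in a few lines of plain Python — a conceptual illustration of contiguous folds only (scikit-learn's splitters handle shuffling and stratification with far more care):

```python
def kfold_indices(n_samples, k=3):
    """Yield (train, test) index lists for k contiguous folds."""
    base, extra = divmod(n_samples, k)
    start = 0
    for i in range(k):
        size = base + (1 if i < extra else 0)
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n_samples))
        yield train, test
        start += size

folds = list(kfold_indices(9, k=3))
print([test for _, test in folds])  # [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
```

Each sample lands in exactly one test fold, so averaging the k scores uses every sample for validation exactly once.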
In TPOT 0.4, we've made some major changes to the internals of TPOT and added some convenience functions. We've summarized the changes below.
- Added new sklearn models and preprocessors
- Added operator that inserts virtual features for the count of features with values of zero
- Reworked parameterization of TPOT operators
- Reduced parameter search space with information from a scikit-learn benchmark
- TPOT no longer generates arbitrary parameter values, but uses a fixed parameter set instead
- Removed XGBoost as a dependency
- Too many users were having install issues with XGBoost
- Replaced with scikit-learn's GradientBoostingClassifier
- Improved descriptiveness of TPOT command line parameter documentation
- Removed min/max/avg details during `fit()` when verbosity > 1
- Replaced with tqdm progress bar
- Added tqdm as a dependency
- Added a `get_params()` function so TPOT can operate in scikit-learn's `cross_val_score` and related functions
- We revised the internal optimization process of TPOT to make it more efficient, particularly with regard to the model parameters that TPOT optimizes over.
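The zero-count operator listed above can be sketched in a couple of lines — a conceptual sketch of the idea, not TPOT's implementation:

```python
def add_zero_count(row):
    """Append a virtual feature holding the number of zero-valued
    features in the row."""
    return list(row) + [sum(1 for v in row if v == 0)]

print(add_zero_count([0, 3, 0, 7]))  # [0, 3, 0, 7, 2]
```

Sparsity counts like this can give downstream models a cheap, informative summary feature without changing any of the original columns.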
TPOT now has the ability to export the optimized pipelines to sklearn code.
Logistic regression, SVM, and k-nearest neighbors classifiers were added as pipeline operators. Previously, TPOT only included decision tree and random forest classifiers.
TPOT can now use arbitrary scoring functions for the optimization process.
TPOT now performs multi-objective Pareto optimization to balance model complexity (i.e., # of pipeline operators) and the score of the pipeline.
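Pareto optimization amounts to keeping every pipeline that no other pipeline beats on both objectives at once. A plain-Python sketch over (number of operators, score) pairs — conceptual only, not TPOT's actual selection code:

```python
def pareto_front(pipelines):
    """Keep (n_operators, score) pairs that no other pair dominates.
    'Dominated' means another pipeline is at least as simple and at
    least as accurate, and strictly better on one of the two."""
    front = []
    for n_ops, score in pipelines:
        dominated = any(
            o <= n_ops and s >= score and (o < n_ops or s > score)
            for o, s in pipelines
        )
        if not dominated:
            front.append((n_ops, score))
    return front

candidates = [(1, 0.80), (2, 0.85), (3, 0.85), (4, 0.90)]
print(pareto_front(candidates))  # [(1, 0.8), (2, 0.85), (4, 0.9)]
```

The 3-operator pipeline is dropped because the 2-operator pipeline matches its score with less complexity; the rest form the trade-off curve the user can choose from.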
First public release of TPOT.
Optimizes pipelines with decision trees and random forest classifiers as the model, and uses a handful of feature preprocessors.