* config in result
* value can be float
* pytorch notebook example
* docker, pre-commit
* max_failure (#192); early_stop
* extend starting_points (#196)
Co-authored-by: Chi Wang (MSR) <wang.chi@microsoft.com>
Co-authored-by: Qingyun Wu <qw2ky@virginia.edu>
* increase test coverage
* use define by run only when needed
* warmstart bs
* classification -> binary, multi
* warm start with evaluated rewards (sketch below)
* data transformer; resource attr for gs
* BlendSearchTuner bug fix and unittest
* bug fix
* docstr and import
* task type
* remove catboost training dir
* close #48
* bs for hierarchical space; close #85
* retrain for hierarchical space
* clean ml (#180)
Co-authored-by: Qingyun Wu <qxw5138@psu.edu>
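A minimal sketch of the warm-start entries above: BlendSearch accepts previously evaluated configurations together with their observed metric values. The search space, configs, and rewards here are illustrative only.

```python
from flaml import BlendSearch, tune

# illustrative space; the configs/rewards below are made up
space = {
    "learning_rate": tune.loguniform(1e-4, 1e-1),
    "num_leaves": tune.randint(4, 64),
}
algo = BlendSearch(
    metric="val_loss",
    mode="min",
    space=space,
    # configs evaluated in an earlier run...
    points_to_evaluate=[
        {"learning_rate": 0.01, "num_leaves": 16},
        {"learning_rate": 0.05, "num_leaves": 32},
    ],
    # ...and the metric values observed for them, in the same order
    evaluated_rewards=[0.23, 0.31],
)
```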
* support ranking task
* examples
* cv shuffle
* cleaner forecast api and implementation
* period constraints
* delete groups after fit
* non-hashable value out of signature
* parallel trials
* add random in _search_parallel
* fix bug in retraining
* check memory constraint before training
* retrain_full
* log custom metric (example below)
* retraining budget check
* sample size check before retrain
* remove 'time2eval' from result
* report 'total_search_time' in result
* rename total_search_time to wall_clock_time
* rename train_loss boolean to log_training_metric
* set default train_loss to None
* exclude oom result
* log retrained model
* no subsample
* doc str
* notebook
* predicted value is NaN for sarimax
* version
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
Co-authored-by: Qingyun Wu <qxw5138@psu.edu>
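The "log custom metric" entry above refers to user-defined metrics whose extra values are recorded per trial. A hedged sketch, assuming the two-value return convention (a loss to minimize plus a dict to log); the exact callback signature has varied across versions.

```python
from sklearn.metrics import log_loss

def custom_metric(X_val, y_val, estimator, labels,
                  X_train, y_train, weight_val=None, weight_train=None,
                  *args, **kwargs):
    """First return value is minimized; the dict is logged with the trial."""
    y_pred = estimator.predict_proba(X_val)
    val_loss = log_loss(y_val, y_pred, labels=labels)
    return val_loss, {"val_loss": val_loss}

# usage: automl.fit(X_train, y_train, task="classification", metric=custom_metric)
```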
* added 'forecast' task with estimators ['fbprophet', 'arima', 'sarimax'] (sketch below)
* update setup.py
* add TimeSeriesSplit to 'regression' and 'classification' task
* add 'time' split_type for 'classification' and 'regression' task
Signed-off-by: Kevin Chen <chenkevin.8787@gmail.com>
* feature importance
* variable name
* Update test/test_split.py
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
* Update test/test_forecast.py
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
* prophet installation fails on Windows
* upload flaml_forecast.ipynb
Signed-off-by: Kevin Chen <chenkevin.8787@gmail.com>
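A sketch of the new 'forecast' task, patterned loosely after this PR's tests; the argument shapes are assumptions and shifted in later releases.

```python
import numpy as np
import pandas as pd
from flaml import AutoML

# timestamps as X, the series to forecast as y (illustrative data)
X_train = pd.date_range("2017-01-01", periods=60, freq="M")
y_train = pd.Series(np.random.random(60))

automl = AutoML()
automl.fit(
    X_train=X_train,
    y_train=y_train,
    task="forecast",                                  # task added here
    period=12,                                        # forecast horizon
    estimator_list=["fbprophet", "arima", "sarimax"],
    time_budget=60,
)
# the related 'time' split_type orders rows chronologically for
# 'regression'/'classification', e.g. automl.fit(..., split_type="time")
```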
* remove extra comma
* exclusive bound
* log file name
* add cost to space
* dataset_format
* add load_openml_dataset test
* docstr
* revise test format
* simplify restore
* order categories
* openml server exception in test
* process space
* add warning
* log format
* reduce n_cpu
* nested space
* hierarchical search space for CFO
* non hierarchical for bs
* unflatten hierarchical config
* connection error
* random sample
* config signature
* check ray version
* preprocess numpy array
* catboost preprocess
* time budget
* seed, verbose, hpo_method
* test cfocat
* shallow copy in flatten_dict
* prevent lgbm model duplication
* match estimator name
* quantize and log (see the example below)
* test qloguniform and qrandint
* test qlograndint
* thread.running
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
Co-authored-by: Qingyun Wu <qingyunwu@Qingyuns-MacBook-Pro-2.local>
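The quantized and log-scaled domains exercised by the tests above come from flaml.tune, which mirrors ray.tune's sampling API; the parameter names below are illustrative.

```python
from flaml import tune

space = {
    "batch_size": tune.qrandint(8, 128, q=8),               # ints in steps of 8
    "learning_rate": tune.qloguniform(1e-4, 1e-1, q=1e-4),  # log scale, quantized
    "hidden_units": tune.qlograndint(16, 1024, q=2),        # log-scale ints, step 2
}
```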
* subspace in flow2
* search space and trainable from AutoML
* experimental features: multivariate TPE, grouping, add_evaluated_points (sketch below)
* test experimental features
* readme
* define by run
* set time_budget_s for bs
Co-authored-by: liususan091219 <Xqq630517>
* version
* acl
* test define_by_run_func
* size
* constraints
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
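The experimental features above map onto Optuna sampler options that FLAML wires up internally; a minimal Optuna-level sketch, assuming a recent Optuna with `FloatDistribution`.

```python
import optuna

sampler = optuna.samplers.TPESampler(
    multivariate=True,  # model parameters jointly rather than independently
    group=True,         # split parameters into independently modeled groups
)
study = optuna.create_study(sampler=sampler)

# add_evaluated_points: seed the study with an already-evaluated config
study.add_trial(
    optuna.trial.create_trial(
        params={"x": 0.5},
        distributions={"x": optuna.distributions.FloatDistribution(0, 1)},
        value=0.25,
    )
)
study.optimize(lambda t: (t.suggest_float("x", 0, 1) - 0.5) ** 2, n_trials=10)
```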
* add starting point in fit
* add estimator best config
* add test
* add doc string
* when there are multiple points_to_evaluate in CFO, use the best one to start local search; after that, use the low-cost partial config as the starting point; then remove the points whose performance is worse than the converged one, and start local search from the remaining points ordered by their performance (see the sketch below)
Co-authored-by: Qingyun Wu <qingyunwu@Qingyuns-MacBook-Pro-2.local>
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
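A sketch of the behavior described above on a made-up objective: CFO evaluates the given points, seeds local search from the best, and falls back to the low-cost config.

```python
from flaml import CFO, tune

def objective(config):
    tune.report(loss=(config["x"] - 3) ** 2 + (config["y"] + 1) ** 2)

algo = CFO(
    metric="loss",
    mode="min",
    space={"x": tune.uniform(-10, 10), "y": tune.uniform(-10, 10)},
    # multiple start candidates; the best one seeds local search first
    points_to_evaluate=[{"x": 1.0, "y": 1.0}, {"x": 5.0, "y": 5.0}],
    low_cost_partial_config={"x": 0.0},
)
analysis = tune.run(objective, search_alg=algo, num_samples=50)
```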
* pickle the AutoML object
* get best model per estimator (example below)
* test deberta
* stateless API
* prevent divide by zero
* test roberta
* BlendSearchTuner
* delta time
* reindex columns when dropping int-indexed columns
* test drop columns and small training data
* param set for ensemble builder
* fillna on copy
Co-authored-by: Chi Wang (MSR) <chiw@microsoft.com>
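A short sketch of the pickling and per-estimator entries above, assuming an already fitted `automl` instance.

```python
import pickle

# the whole AutoML object is picklable after fit
with open("automl.pkl", "wb") as f:
    pickle.dump(automl, f)
with open("automl.pkl", "rb") as f:
    automl = pickle.load(f)

# best model found for one specific estimator
lgbm_model = automl.best_model_for_estimator("lgbm")
```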
* Update automl.py
* Pass verbose-1 to tune
Passing verbose-1 to tune ensures that for verbose=1 tune is silenced (no INFO prints), for verbose=2 we see the INFO prints, and for verbose=3 we get DEBUG level in tune, as intended (see the sketch after this block). This is due to: https://github.com/microsoft/FLAML/blob/main/flaml/tune/tune.py#L227
* pickle the AutoML object
* get best model per estimator
* test deberta
* stateless API
* Add Gitter badge (#41)
* prevent divide by zero
* test roberta
* BlendSearchTuner
Co-authored-by: Chi Wang (MSR) <chiw@microsoft.com>
Co-authored-by: The Gitter Badger <badger@gitter.im>
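A sketch of the resulting verbosity mapping on a toy dataset.

```python
from sklearn.datasets import load_iris
from flaml import AutoML

X, y = load_iris(return_X_y=True)
automl = AutoML()
# verbose is forwarded to flaml.tune as verbose - 1:
#   verbose=1 -> tune silenced, verbose=2 -> tune INFO, verbose=3 -> tune DEBUG
automl.fit(X, y, task="classification", time_budget=10, verbose=2)
```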
* xgboost notebook
* finetuning notebook
* finetuning test
* experimental nni support
* support nested search space
* log file name
* record training_iteration
* eps
* reset times
* std set to default step size if 0
* v0.2.2
* separate the HPO part into the module flaml.tune
* enhanced implementation of FLOW^2, CFO and BlendSearch
* support parallel tuning using ray tune (sketch below)
* add support for sample_weight and generic fit arguments
* enable mlflow logging
Co-authored-by: Chi Wang (MSR) <chiw@microsoft.com>
Co-authored-by: qingyun-wu <qw2ky@virginia.edu>
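A sketch of the new flaml.tune module from this release, with ray-based parallelism and mlflow logging as described in the notes above; the objective is made up, and `ray`/`mlflow` are assumed installed.

```python
import mlflow
from flaml import tune

def evaluate(config):
    tune.report(loss=(config["x"] - 3) ** 2)

with mlflow.start_run():  # trials go to the active mlflow run (per the note above)
    analysis = tune.run(
        evaluate,
        config={"x": tune.uniform(-10, 10)},
        metric="loss",
        mode="min",
        num_samples=20,
        use_ray=True,  # parallel tuning via ray tune
    )
print(analysis.best_config)
```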
* set default logging level to INFO
* remove unnecessary import
* API future compatibility
* add test for customized learner (sketch after this block)
* test dependency
Co-authored-by: Chi Wang (MSR) <chiw@microsoft.com>
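The customized-learner test above follows the pattern from the FLAML docs; a sketch with an illustrative learner name and search space.

```python
from sklearn.ensemble import RandomForestClassifier
from flaml import AutoML, tune
from flaml.model import SKLearnEstimator

class MyRandomForest(SKLearnEstimator):
    """Illustrative custom learner wrapping an sklearn estimator."""

    def __init__(self, task="binary", **config):
        super().__init__(task, **config)
        self.estimator_class = RandomForestClassifier

    @classmethod
    def search_space(cls, data_size, task):
        # each entry gives a tuning domain plus init values used by CFO
        return {
            "n_estimators": {
                "domain": tune.lograndint(lower=4, upper=256),
                "init_value": 4,
                "low_cost_init_value": 4,
            },
        }

automl = AutoML()
automl.add_learner(learner_name="my_rf", learner_class=MyRandomForest)
# automl.fit(X, y, task="classification", estimator_list=["my_rf"])
```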