* notebook test
* add ipykernel, remove except
* only create dir if not empty
* Stop sequential tuning when result is None
* fix reproducibility of global search
* save gs seed
* use dict.get to avoid KeyError
* test
* add bs restore test
* use default metric when not provided
* update documentation
* remove print
* period
* remove bs restore test
* Update website/docs/Use-Cases/Task-Oriented-AutoML.md
* handle non-flaml scheduler in flaml.tune
* revise time budget
* Update website/docs/Use-Cases/Tune-User-Defined-Function.md
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
* Update website/docs/Use-Cases/Tune-User-Defined-Function.md
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
* Update flaml/tune/tune.py
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
* add docstr
* remove random seed
* StopIteration
* StopIteration format
* format
* Update flaml/tune/tune.py
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
* revise docstr
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
* fix a bug when using Ray & update Ray on AML
When using with_parameters(), the config argument must be the first argument of the trainable function (see the sketch after this block).
* make training function runnable standalone
* update tune function
* pass incumbent result to the training function
* Update test/tune/test_record_incumbent.py
* Update flaml/searcher/search_thread.py
* Update flaml/searcher/blendsearch.py
* Update flaml/tune/tune.py
* add constant variable
Co-authored-by: 张少坤 <zhangshaokun@fuzhi.ai>
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
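A minimal sketch of the with_parameters() constraint noted above, using Ray Tune; train_model, large_data, and the toy objective are illustrative, not part of this change:

```python
from ray import tune

# The config dict must be the FIRST parameter; objects passed through
# tune.with_parameters() arrive as keyword arguments after it.
def train_model(config, data):
    loss = (config["lr"] - 0.1) ** 2  # `data` would be used in real training
    tune.report(loss=loss)

large_data = list(range(10000))  # stand-in for a large, reusable object
trainable = tune.with_parameters(train_model, data=large_data)
tune.run(trainable, config={"lr": tune.uniform(0.001, 1.0)}, num_samples=4)
```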
* warning -> info for low cost partial config
#195, #110
* when n_estimators < 0, use trained_estimator's
* log debug info
* test random seed
* remove "objective"; avoid ZeroDivisionError
* hp config to estimator params
* check type of searcher
* default n_jobs
* try import
* Update searchalgo_auto.py
* CLASSIFICATION
* auto_augment flag
* min_sample_size
* make catboost optional
* config in result
* value can be float
* pytorch notebook example
* docker, pre-commit
* max_failure (#192); early_stop
* extend starting_points (#196); see the sketch below
Co-authored-by: Chi Wang (MSR) <wang.chi@microsoft.com>
Co-authored-by: Qingyun Wu <qw2ky@virginia.edu>
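A hedged sketch of passing starting_points to AutoML.fit(); the estimator choice and config values are made up, and whether partial configs are accepted may vary by version:

```python
from flaml import AutoML
from sklearn.datasets import load_iris

X_train, y_train = load_iris(return_X_y=True)
automl = AutoML()
# starting_points maps an estimator name to a config (or, after this
# change, a list of configs) to warm-start its search from; the values
# below are illustrative only.
automl.fit(
    X_train=X_train,
    y_train=y_train,
    task="classification",
    time_budget=10,
    estimator_list=["lgbm"],
    starting_points={"lgbm": {"n_estimators": 4, "num_leaves": 4}},
)
```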
* increase test coverage
* use define-by-run only when needed
* warmstart bs
* classification -> binary, multi
* warm start with evaluated rewards (see the sketch after this block)
* data transformer; resource attr for gs
* BlendSearchTuner bug fix and unittest
* bug fix
* docstr and import
* task type
* remove extra comma
* exclusive bound
* log file name
* add cost to space
* dataset_format
* add load_openml_dataset test
* docstr
* revise test format
* simplify restore
* order categories
* openml server exception in test
* process space
* add warning
* log format
* reduce n_cpu
* nested space
* hierarchical search space for CFO
* non-hierarchical for bs
* unflatten hierarchical config
* connection error
* random sample
* config signature
* check ray version
* preprocess numpy array
* catboost preprocess
* time budget
* seed, verbose, hpo_method
* test cfocat
* shallow copy in flatten_dict
prevent lgbm model duplication
* match estimator name
* quantize and log
* test qloguniform and qrandint
* test qlograndint
* thread.running
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
Co-authored-by: Qingyun Wu <qingyunwu@Qingyuns-MacBook-Pro-2.local>
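A minimal sketch of warm starting with evaluated rewards via flaml.tune; the objective and reward values are made up:

```python
from flaml import tune

def toy_objective(config):
    return {"score": (config["x"] - 5) ** 2}

# The first two points are not re-evaluated: their already-known metric
# values (4 and 4 here) warm-start the searcher instead.
analysis = tune.run(
    toy_objective,
    config={"x": tune.randint(lower=1, upper=10)},
    points_to_evaluate=[{"x": 3}, {"x": 7}],
    evaluated_rewards=[4, 4],
    metric="score",
    mode="min",
    num_samples=10,
)
print(analysis.best_config)
```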
* add starting point in fit
* add estimator best config
* add test
* add doc string
* when there are multiple points_to_evaluate in CFO, use the best one to start local search; after that, use the low-cost partial config as the start point; then remove the points whose performance is worse than the converged one, and start local search from the remaining points, ordered by their performance (see the sketch below).
Co-authored-by: Qingyun Wu <qingyunwu@Qingyuns-MacBook-Pro-2.local>
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
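A sketch of that behavior with CFO in flaml.tune; the objective, points, and budget are illustrative only:

```python
from flaml import CFO, tune

def obj(config):
    # toy objective, lower is better
    return {"loss": (config["x"] - 50) ** 2 + config["y"]}

# CFO starts local search from the best of points_to_evaluate, then from
# the low-cost partial config; points that beat the converged result are
# later used as additional starts, ordered by their performance.
analysis = tune.run(
    obj,
    config={
        "x": tune.randint(lower=1, upper=100),
        "y": tune.randint(lower=1, upper=100),
    },
    search_alg=CFO(
        low_cost_partial_config={"y": 1},
        points_to_evaluate=[{"x": 10, "y": 20}, {"x": 70, "y": 5}],
    ),
    metric="loss",
    mode="min",
    num_samples=60,
)
print(analysis.best_config)
```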