* fix a bug when using ray & update ray on aml
When using with_parameters(), the config argument must be the first argument in the trainable function.
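A minimal sketch of this pattern with the Ray Tune API of that period (`tune.with_parameters`, `tune.report`); the toy trainable and data below are made up for illustration:

```python
from ray import tune

def train_model(config, data=None):
    # config must be the first parameter of the trainable; extra objects
    # such as the training data are injected by tune.with_parameters.
    score = sum(data) * config["lr"]  # toy stand-in for actual training
    tune.report(score=score)

trainable = tune.with_parameters(train_model, data=[1, 2, 3])
tune.run(trainable, config={"lr": tune.loguniform(1e-4, 1e-1)}, num_samples=4)
```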
* make training function runnable standalone
* warning -> info for low cost partial config
#195, #110
* when n_estimators < 0, use the trained_estimator's n_estimators
* log debug info
* test random seed
* remove "objective"; avoid ZeroDivisionError
* hp config to estimator params
* check type of searcher
* default n_jobs
* try import
* Update searchalgo_auto.py
* CLASSIFICATION
* auto_augment flag
* min_sample_size
* make catboost optional
* remove extra comma
* exclusive bound
* log file name
* add cost to space
* dataset_format
* add load_openml_dataset test
* docstr
* revise test format
* simplify restore
* order categories
* openml server exception in test
* process space
* add warning
* log format
* reduce n_cpu
* nested space
* hierarchical search space for CFO
* non-hierarchical search space for BlendSearch
* unflatten hierarchical config
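As a rough illustration of a hierarchical (nested) search space in flaml.tune, assuming the choice-over-dicts form; the names and ranges here are hypothetical:

```python
from flaml import tune

# Nested search space: the sampled value of "model" is itself a dict,
# so the inner hyperparameters depend on the chosen branch.
config = {
    "model": tune.choice([
        {"name": "lgbm", "n_estimators": tune.randint(4, 1000)},
        {"name": "rf", "max_leaves": tune.randint(4, 100)},
    ]),
    "learning_rate": tune.loguniform(1e-3, 1.0),
}
```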
* connection error
* random sample
* config signature
* check ray version
* preprocess numpy array
* catboost preprocess
* time budget
* seed, verbose, hpo_method
* test cfocat
* shallow copy in flatten_dict
prevent lgbm model duplication
* match estimator name
* quantize and log
* test qloguniform and qrandint
* test qlograndint
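For reference, a sketch of the quantized domains being tested, assuming the usual (lower, upper, q) signatures; the parameter names and bounds are illustrative:

```python
from flaml import tune

space = {
    # quantized domains: sampled values are rounded to multiples of q
    "learning_rate": tune.qloguniform(1e-4, 1e-1, q=1e-4),
    "n_estimators": tune.qrandint(10, 1000, q=10),
    "min_child_samples": tune.qlograndint(2, 256, q=2),
}
```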
* thread.running
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
Co-authored-by: Qingyun Wu <qingyunwu@Qingyuns-MacBook-Pro-2.local>
* v0.2.2
separate the HPO part into the module flaml.tune
enhanced implementation of FLOW^2, CFO and BlendSearch
support parallel tuning using ray tune
add support for sample_weight and generic fit arguments
enable mlflow logging
Co-authored-by: Chi Wang (MSR) <chiw@microsoft.com>
Co-authored-by: qingyun-wu <qw2ky@virginia.edu>
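A minimal usage sketch of the separated flaml.tune module, assuming a tune.run interface with metric/mode/num_samples/time_budget_s; the objective here is a toy function standing in for a real training run:

```python
from flaml import tune

def evaluate_config(config):
    # toy objective standing in for model training and evaluation
    score = (config["x"] - 3) ** 2 + config["y"]
    return {"score": score}

analysis = tune.run(
    evaluate_config,
    config={
        "x": tune.uniform(0, 10),
        "y": tune.randint(1, 10),
    },
    metric="score",
    mode="min",
    num_samples=20,
    time_budget_s=10,
)
print(analysis.best_config)
```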