* changed the signature of automl.predict and automl.predict_proba to accept X
* XGBoostEstimator
* changed the signature of Prophet predict to accept X
* changed the signature of ARIMA predict to accept X
* changed the signature of TS_SKLearn_Regressor predict to accept X
* add sklearn regressors as learners for the ts_forecast task (sketch below)
* add direct forecasting strategy
- warnings and errors for duplicate rows and missing values
- add preprocess for sklearn time series forecast
- update automl.py
- update test/test_forecast.py
* update model.py and test_forecast.py for cv eval_method
* add "hcrystalball" dependency in setup.py
* update automl.py
- add _validate_ts_data function for abstraction
- include xgb_limitdepth as a learner
* update model.py
- update search space for sklearn ts regressors
* update automl.py and test_forecast.py for numpy array inputs
* add documentation to model.py
* add documentation for removing the catboost regressor
* update automl.py
- _validate_ts_data() function
Signed-off-by: Kevin Chen <chenkevin.8787@gmail.com>
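A minimal sketch of the reworked ts_forecast call path described above, on synthetic data; the learner names xgboost, rf, and extra_tree are assumed to be among the sklearn-based ts learners this change registers:

```python
import numpy as np
import pandas as pd
from flaml import AutoML

# synthetic univariate series: time column first, target as a Series
X = pd.DataFrame({"ds": pd.date_range("2021-01-01", periods=300, freq="D")})
y = pd.Series(np.sin(np.arange(300) / 10))

automl = AutoML()
automl.fit(
    X_train=X[:-12],
    y_train=y[:-12],
    task="ts_forecast",
    period=12,             # forecast horizon
    eval_method="cv",      # the cv eval_method referenced above
    estimator_list=["xgboost", "rf", "extra_tree"],  # assumed learner names
    time_budget=30,
)
pred = automl.predict(X[-12:])  # predict now takes X (the future timestamps)
```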
* query logged runs
* mlflow log when using ray
* key check for newer versions of ray (#363)
* catch ImportError
* log and load AutoML model
* retrain if necessary when ensemble fails
* add support for customized splitters
* use the split_type param for passing generators
* use a single API for customized splitters and add a test (sketch below)
* when task == TS_FORECAST, always set shuffle=False
* update docstring
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
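A minimal sketch of the customized-splitter API described above, assuming split_type accepts a scikit-learn splitter instance and group labels go through the groups argument; the data is synthetic:

```python
import numpy as np
from sklearn.model_selection import GroupKFold
from flaml import AutoML

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = rng.integers(0, 2, 200)
groups = rng.integers(0, 10, 200)  # 10 groups for the splitter

automl = AutoML()
automl.fit(
    X, y,
    task="classification",
    eval_method="cv",
    split_type=GroupKFold(n_splits=5),  # splitter object via the single API
    groups=groups,
    time_budget=30,
)
```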
* fix checkpoint naming and trial id in non-ray mode; fix a bug in test mode; delete all checkpoints in non-ray mode
* finished testing checkpoint naming, checkpoint deletion, ray mode, and max_iter=1
* add predict_proba; address PR #293's comments (sketch below)
* closes #293, #291
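A minimal sketch of the added predict_proba on a toy classification task (the iris dataset is used for illustration only):

```python
from sklearn.datasets import load_iris
from flaml import AutoML

X, y = load_iris(return_X_y=True)
automl = AutoML()
automl.fit(X, y, task="classification", time_budget=10)
proba = automl.predict_proba(X)  # class probabilities, shape (n_samples, n_classes)
```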
* make AutoML inherit sklearn.base.BaseEstimator so that it can be wrapped in sklearn.multioutput.MultiOutputRegressor for multi-output regression (sketch below)
* moved and simplified the preprocessing code in AutoML.predict() into _preprocess()
if save_best_model_per_estimator is False and retrain_final is True, unfit the model after evaluation in HPO.
retrain if using ray.
update ITER_HP in config after a trial is finished.
change prophet logging level.
example and notebook update.
allow settings to be passed to the AutoML constructor ("Are you planning to add multi-output-regression capability to FLAML" #192, "Is multi-tasking allowed?" #277): the automl settings can be passed to the constructor instead of requiring a derived class.
remove model_history.
checkpoint bug fix.
* model_history now means save_best_model_per_estimator
* ITER_HP
* example update
* prophet logging level
* comment update in forecast notebook
* print format improvement
* allow settings to be passed to AutoML constructor
* checkpoint bug fix
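A minimal sketch of the multi-output wrapping these entries enable, assuming constructor settings such as task and time_budget are accepted as described above; the data is synthetic:

```python
from sklearn.datasets import make_regression
from sklearn.multioutput import MultiOutputRegressor
from flaml import AutoML

X, y = make_regression(n_samples=200, n_features=10, n_targets=3, random_state=0)
# constructor settings make AutoML behave like a plain sklearn estimator,
# so MultiOutputRegressor can clone it per target
model = MultiOutputRegressor(AutoML(task="regression", time_budget=10))
model.fit(X, y)
pred = model.predict(X)  # shape (200, 3)
```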
* time limit for autohf regression test
* skip slow test on macos
* cleanup before del
* limit time and memory
* separate tests
* lrl1 can't be limited by limit_resource
* free memory when possible
* passthrough=False when ensemble fails; retrain when trained_estimator is None
* use callback for resource limit
* handle lower versions of xgboost with no callback support
* free mem ratio
* reduce verbosity
* retrain_final when max_iter==1
* remove trained_estimator from result
* model_history
* wheel
* retrain time as best_config_train_time
* ci: libomp version for xgboost on macos
* limit_resource not working on Windows
* test pickle load (sketch below)
* mute forecaster
* notebook update
* check hard
* preventive callback
* add use_ray
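A minimal sketch covering "test pickle load" and "add use_ray", assuming ray is installed (drop use_ray otherwise); the dataset is synthetic:

```python
import pickle
from sklearn.datasets import make_regression
from flaml import AutoML

X, y = make_regression(n_samples=500, n_features=10, random_state=0)
automl = AutoML()
automl.fit(X, y, task="regression", time_budget=30, use_ray=True)

# round-trip the fitted AutoML object through pickle
with open("automl.pkl", "wb") as f:
    pickle.dump(automl, f)
with open("automl.pkl", "rb") as f:
    loaded = pickle.load(f)
print(loaded.predict(X[:5]))
```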
* Integrate multivariate time series forecasting; now supports continuous and categorical variables (sketch below)
- update data.py to transform time series data
- update search space
- update documentations to reflect changes
- update test_forecast.py
- rename 'forecast' task to 'ts_forecast' task
* update automl.py and test_forecast.py
* update forecast notebook
* update README.md and setup.py
* update ml.py and test_forecast.py
- make "ds" and "y" constant variables
* replace constants with constant variables
* bump version to 0.7.0
* update setup.py
- support 'forecast' and 'ts_forecast'
* update automl.py and data.py
- support 'forecast' and 'ts_forecast' tasks
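A minimal sketch of multivariate ts_forecast with one continuous and one categorical regressor; the column names and synthetic data are illustrative only:

```python
import numpy as np
import pandas as pd
from flaml import AutoML

rng = np.random.default_rng(0)
n = 120
df = pd.DataFrame({
    "ds": pd.date_range("2021-01-01", periods=n, freq="D"),  # time column
    "temp": rng.normal(20, 5, n),        # continuous regressor
    "weekend": rng.integers(0, 2, n),    # categorical regressor
})
df["y"] = 10 + 0.3 * df["temp"] + rng.normal(0, 1, n)

automl = AutoML()
automl.fit(dataframe=df[:-12], label="y", task="ts_forecast",
           period=12, time_budget=30)
# predict on the future frame: time column plus the known regressors
pred = automl.predict(df[-12:].drop(columns="y"))
```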
* warning -> info for low cost partial config (sketch below)
#195, #110
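For context, low_cost_partial_config is the hint that the (now info-level) message refers to; a minimal sketch with flaml.tune on a toy objective, where the search space and objective are illustrative only:

```python
from flaml import tune

def evaluate(config):
    # toy objective; in real tuning, larger n_estimators is more expensive
    return {"score": -(config["n_estimators"] - 40) ** 2}

analysis = tune.run(
    evaluate,
    config={"n_estimators": tune.lograndint(4, 1000)},
    low_cost_partial_config={"n_estimators": 4},  # cheap end of the space
    metric="score",
    mode="max",
    num_samples=20,
)
print(analysis.best_config)
```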
* when n_estimators < 0, use the trained_estimator's n_estimators
* log debug info
* test random seed
* remove "objective"; avoid ZeroDivisionError
* hp config to estimator params
* check type of searcher
* default n_jobs
* try import
* Update searchalgo_auto.py
* CLASSIFICATION
* auto_augment flag
* min_sample_size
* make catboost optional
* update config if n_estimators is modified
* prediction as int
* handle the case n_estimators <= 0
* if trained and no budget to train more, return the trained model
* split_type=group for classification & regression
* set converge flag when no trial can be sampled
* require custom_metric to return a dict for logging (sketch below)
closes #218
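A minimal sketch of a custom_metric that satisfies the new contract: the first return value is the loss to minimize, the second is a dict of metrics to log; the trailing *args/**kwargs hedge against signature differences across versions:

```python
from sklearn.datasets import make_regression
from sklearn.metrics import mean_squared_error
from flaml import AutoML

def custom_metric(X_val, y_val, estimator, labels,
                  X_train, y_train, weight_val=None, weight_train=None,
                  *args, **kwargs):
    val_loss = mean_squared_error(y_val, estimator.predict(X_val),
                                  sample_weight=weight_val)
    train_loss = mean_squared_error(y_train, estimator.predict(X_train),
                                    sample_weight=weight_train)
    # first value is minimized; the dict is logged per the entry above
    return val_loss, {"val_loss": val_loss, "train_loss": train_loss}

X, y = make_regression(n_samples=200, n_features=10, random_state=0)
automl = AutoML()
automl.fit(X, y, task="regression", metric=custom_metric, time_budget=10)
```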
* estimate time budget needed
* log info per iteration