{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"Copyright (c) 2020-2021 Microsoft Corporation. All rights reserved. \n",
"\n",
"Licensed under the MIT License.\n",
"\n",
"# Tune LightGBM with FLAML Library\n",
"\n",
"\n",
"## 1. Introduction\n",
"\n",
"FLAML is a Python library (https://github.com/microsoft/FLAML) designed to automatically produce accurate machine learning models \n",
"with low computational cost. It is fast and cheap. The simple and lightweight design makes it easy \n",
"to use and extend, such as adding new learners. FLAML can \n",
"- serve as an economical AutoML engine,\n",
"- be used as a fast hyperparameter tuning tool, or \n",
"- be embedded in self-tuning software that requires low latency & resource in repetitive\n",
" tuning tasks.\n",
"\n",
"In this notebook, we demonstrate how to use FLAML library to tune hyperparameters of LightGBM with a regression example.\n",
"\n",
"FLAML requires `Python>=3.6`. To run this notebook example, please install flaml with the `notebook` option:\n",
"```bash\n",
"pip install flaml[notebook]\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!pip install flaml[notebook];"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"## 2. Regression Example\n",
"### Load data and preprocess\n",
"\n",
"Download [houses dataset](https://www.openml.org/d/537) from OpenML. The task is to predict median price of the house in the region based on demographic composition and a state of housing market in the region."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"slideshow": {
"slide_type": "subslide"
},
"tags": []
},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"load dataset from ./openml_ds537.pkl\nDataset name: houses\nX_train.shape: (15480, 8), y_train.shape: (15480,);\nX_test.shape: (5160, 8), y_test.shape: (5160,)\n"
]
}
],
"source": [
"from flaml.data import load_openml_dataset\n",
"X_train, X_test, y_train, y_test = load_openml_dataset(dataset_id=537, data_dir='./')"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### Run FLAML\n",
"In the FLAML automl run configuration, users can specify the task type, time budget, error metric, learner list, whether to subsample, resampling strategy type, and so on. All these arguments have default values which will be used if users do not provide them. "
]
},
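{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"''' a minimal sketch relying on the defaults (not run here): besides the data, only the task needs to be given '''\n",
"# from flaml import AutoML\n",
"# AutoML().fit(X_train=X_train, y_train=y_train, task='regression')"
]
},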
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"outputs": [],
"source": [
"''' import AutoML class from flaml package '''\n",
"from flaml import AutoML\n",
"automl = AutoML()"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"outputs": [],
"source": [
"settings = {\n",
" \"time_budget\": 120, # total running time in seconds\n",
" \"metric\": 'r2', # primary metrics for regression can be chosen from: ['mae','mse','r2']\n",
" \"estimator_list\": ['lgbm'], # list of ML learners; we tune lightgbm in this example\n",
" \"task\": 'regression', # task type \n",
" \"log_file_name\": 'houses_experiment.log', # flaml log file\n",
"}"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"slideshow": {
"slide_type": "slide"
},
"tags": []
},
"outputs": [
{
"output_type": "stream",
"name": "stderr",
"text": [
"[flaml.automl: 04-07 09:44:04] {890} INFO - Evaluation method: cv\n",
"[flaml.automl: 04-07 09:44:04] {606} INFO - Using RepeatedKFold\n",
"[flaml.automl: 04-07 09:44:04] {911} INFO - Minimizing error metric: 1-r2\n",
"[flaml.automl: 04-07 09:44:04] {929} INFO - List of ML learners in AutoML Run: ['lgbm']\n",
"[flaml.automl: 04-07 09:44:04] {993} INFO - iteration 0, current learner lgbm\n",
"[flaml.automl: 04-07 09:44:05] {1141} INFO - at 0.4s,\tbest lgbm's error=0.7383,\tbest lgbm's error=0.7383\n",
"[flaml.automl: 04-07 09:44:05] {993} INFO - iteration 1, current learner lgbm\n",
"[flaml.automl: 04-07 09:44:05] {1141} INFO - at 0.5s,\tbest lgbm's error=0.7383,\tbest lgbm's error=0.7383\n",
"[flaml.automl: 04-07 09:44:05] {993} INFO - iteration 2, current learner lgbm\n",
"[flaml.automl: 04-07 09:44:05] {1141} INFO - at 0.8s,\tbest lgbm's error=0.3888,\tbest lgbm's error=0.3888\n",
"[flaml.automl: 04-07 09:44:05] {993} INFO - iteration 3, current learner lgbm\n",
"[flaml.automl: 04-07 09:44:05] {1141} INFO - at 0.9s,\tbest lgbm's error=0.3888,\tbest lgbm's error=0.3888\n",
"[flaml.automl: 04-07 09:44:05] {993} INFO - iteration 4, current learner lgbm\n",
"[flaml.automl: 04-07 09:44:05] {1141} INFO - at 1.3s,\tbest lgbm's error=0.2657,\tbest lgbm's error=0.2657\n",
"[flaml.automl: 04-07 09:44:05] {993} INFO - iteration 5, current learner lgbm\n",
"[flaml.automl: 04-07 09:44:06] {1141} INFO - at 1.7s,\tbest lgbm's error=0.2256,\tbest lgbm's error=0.2256\n",
"[flaml.automl: 04-07 09:44:06] {993} INFO - iteration 6, current learner lgbm\n",
"[flaml.automl: 04-07 09:44:06] {1141} INFO - at 1.9s,\tbest lgbm's error=0.2256,\tbest lgbm's error=0.2256\n",
"[flaml.automl: 04-07 09:44:06] {993} INFO - iteration 7, current learner lgbm\n",
"[flaml.automl: 04-07 09:44:06] {1141} INFO - at 2.3s,\tbest lgbm's error=0.2256,\tbest lgbm's error=0.2256\n",
"[flaml.automl: 04-07 09:44:06] {993} INFO - iteration 8, current learner lgbm\n",
"[flaml.automl: 04-07 09:44:07] {1141} INFO - at 2.5s,\tbest lgbm's error=0.2256,\tbest lgbm's error=0.2256\n",
"[flaml.automl: 04-07 09:44:07] {993} INFO - iteration 9, current learner lgbm\n",
"[flaml.automl: 04-07 09:44:07] {1141} INFO - at 2.8s,\tbest lgbm's error=0.2256,\tbest lgbm's error=0.2256\n",
"[flaml.automl: 04-07 09:44:07] {993} INFO - iteration 10, current learner lgbm\n",
"[flaml.automl: 04-07 09:44:07] {1141} INFO - at 3.0s,\tbest lgbm's error=0.2256,\tbest lgbm's error=0.2256\n",
"[flaml.automl: 04-07 09:44:07] {993} INFO - iteration 11, current learner lgbm\n",
"[flaml.automl: 04-07 09:44:08] {1141} INFO - at 3.6s,\tbest lgbm's error=0.2099,\tbest lgbm's error=0.2099\n",
"[flaml.automl: 04-07 09:44:08] {993} INFO - iteration 12, current learner lgbm\n",
"[flaml.automl: 04-07 09:44:09] {1141} INFO - at 4.7s,\tbest lgbm's error=0.2099,\tbest lgbm's error=0.2099\n",
"[flaml.automl: 04-07 09:44:09] {993} INFO - iteration 13, current learner lgbm\n",
"[flaml.automl: 04-07 09:44:09] {1141} INFO - at 4.9s,\tbest lgbm's error=0.2099,\tbest lgbm's error=0.2099\n",
"[flaml.automl: 04-07 09:44:09] {993} INFO - iteration 14, current learner lgbm\n",
"[flaml.automl: 04-07 09:44:10] {1141} INFO - at 6.3s,\tbest lgbm's error=0.1644,\tbest lgbm's error=0.1644\n",
"[flaml.automl: 04-07 09:44:10] {993} INFO - iteration 15, current learner lgbm\n",
"[flaml.automl: 04-07 09:44:11] {1141} INFO - at 7.0s,\tbest lgbm's error=0.1644,\tbest lgbm's error=0.1644\n",
"[flaml.automl: 04-07 09:44:11] {993} INFO - iteration 16, current learner lgbm\n",
"[flaml.automl: 04-07 09:44:19] {1141} INFO - at 14.9s,\tbest lgbm's error=0.1644,\tbest lgbm's error=0.1644\n",
"[flaml.automl: 04-07 09:44:19] {993} INFO - iteration 17, current learner lgbm\n",
"[flaml.automl: 04-07 09:44:20] {1141} INFO - at 16.0s,\tbest lgbm's error=0.1644,\tbest lgbm's error=0.1644\n",
"[flaml.automl: 04-07 09:44:20] {993} INFO - iteration 18, current learner lgbm\n",
"[flaml.automl: 04-07 09:44:25] {1141} INFO - at 20.4s,\tbest lgbm's error=0.1644,\tbest lgbm's error=0.1644\n",
"[flaml.automl: 04-07 09:44:25] {993} INFO - iteration 19, current learner lgbm\n",
"[flaml.automl: 04-07 09:44:25] {1141} INFO - at 20.9s,\tbest lgbm's error=0.1644,\tbest lgbm's error=0.1644\n",
"[flaml.automl: 04-07 09:44:25] {993} INFO - iteration 20, current learner lgbm\n",
"[flaml.automl: 04-07 09:44:26] {1141} INFO - at 21.9s,\tbest lgbm's error=0.1644,\tbest lgbm's error=0.1644\n",
"[flaml.automl: 04-07 09:44:26] {993} INFO - iteration 21, current learner lgbm\n",
"[flaml.automl: 04-07 09:44:26] {1141} INFO - at 22.4s,\tbest lgbm's error=0.1644,\tbest lgbm's error=0.1644\n",
"[flaml.automl: 04-07 09:44:26] {993} INFO - iteration 22, current learner lgbm\n",
"[flaml.automl: 04-07 09:44:31] {1141} INFO - at 26.9s,\tbest lgbm's error=0.1644,\tbest lgbm's error=0.1644\n",
"[flaml.automl: 04-07 09:44:31] {993} INFO - iteration 23, current learner lgbm\n",
"[flaml.automl: 04-07 09:44:33] {1141} INFO - at 28.4s,\tbest lgbm's error=0.1644,\tbest lgbm's error=0.1644\n",
"[flaml.automl: 04-07 09:44:33] {993} INFO - iteration 24, current learner lgbm\n",
"[flaml.automl: 04-07 09:44:34] {1141} INFO - at 29.7s,\tbest lgbm's error=0.1644,\tbest lgbm's error=0.1644\n",
"[flaml.automl: 04-07 09:44:34] {993} INFO - iteration 25, current learner lgbm\n",
"[flaml.automl: 04-07 09:44:35] {1141} INFO - at 30.8s,\tbest lgbm's error=0.1644,\tbest lgbm's error=0.1644\n",
"[flaml.automl: 04-07 09:44:35] {993} INFO - iteration 26, current learner lgbm\n",
"[flaml.automl: 04-07 09:44:37] {1141} INFO - at 32.5s,\tbest lgbm's error=0.1644,\tbest lgbm's error=0.1644\n",
"[flaml.automl: 04-07 09:44:37] {993} INFO - iteration 27, current learner lgbm\n",
"[flaml.automl: 04-07 09:44:37] {1141} INFO - at 33.2s,\tbest lgbm's error=0.1644,\tbest lgbm's error=0.1644\n",
"[flaml.automl: 04-07 09:44:37] {993} INFO - iteration 28, current learner lgbm\n",
"[flaml.automl: 04-07 09:44:39] {1141} INFO - at 34.8s,\tbest lgbm's error=0.1644,\tbest lgbm's error=0.1644\n",
"[flaml.automl: 04-07 09:44:39] {993} INFO - iteration 29, current learner lgbm\n",
"[flaml.automl: 04-07 09:44:44] {1141} INFO - at 39.6s,\tbest lgbm's error=0.1604,\tbest lgbm's error=0.1604\n",
"[flaml.automl: 04-07 09:44:44] {993} INFO - iteration 30, current learner lgbm\n",
"[flaml.automl: 04-07 09:44:44] {1141} INFO - at 40.2s,\tbest lgbm's error=0.1604,\tbest lgbm's error=0.1604\n",
"[flaml.automl: 04-07 09:44:44] {993} INFO - iteration 31, current learner lgbm\n",
"[flaml.automl: 04-07 09:44:58] {1141} INFO - at 54.3s,\tbest lgbm's error=0.1604,\tbest lgbm's error=0.1604\n",
"[flaml.automl: 04-07 09:44:58] {993} INFO - iteration 32, current learner lgbm\n",
"[flaml.automl: 04-07 09:45:02] {1141} INFO - at 58.0s,\tbest lgbm's error=0.1604,\tbest lgbm's error=0.1604\n",
"[flaml.automl: 04-07 09:45:02] {993} INFO - iteration 33, current learner lgbm\n",
"[flaml.automl: 04-07 09:45:03] {1141} INFO - at 58.6s,\tbest lgbm's error=0.1604,\tbest lgbm's error=0.1604\n",
"[flaml.automl: 04-07 09:45:03] {993} INFO - iteration 34, current learner lgbm\n",
"[flaml.automl: 04-07 09:45:28] {1141} INFO - at 83.5s,\tbest lgbm's error=0.1604,\tbest lgbm's error=0.1604\n",
"[flaml.automl: 04-07 09:45:28] {993} INFO - iteration 35, current learner lgbm\n",
"[flaml.automl: 04-07 09:45:34] {1141} INFO - at 89.9s,\tbest lgbm's error=0.1604,\tbest lgbm's error=0.1604\n",
"[flaml.automl: 04-07 09:45:34] {993} INFO - iteration 36, current learner lgbm\n",
"[flaml.automl: 04-07 09:45:37] {1141} INFO - at 92.4s,\tbest lgbm's error=0.1604,\tbest lgbm's error=0.1604\n",
"[flaml.automl: 04-07 09:45:37] {993} INFO - iteration 37, current learner lgbm\n",
"[flaml.automl: 04-07 09:45:40] {1141} INFO - at 95.7s,\tbest lgbm's error=0.1604,\tbest lgbm's error=0.1604\n",
"[flaml.automl: 04-07 09:45:40] {993} INFO - iteration 38, current learner lgbm\n",
"[flaml.automl: 04-07 09:45:42] {1141} INFO - at 97.5s,\tbest lgbm's error=0.1604,\tbest lgbm's error=0.1604\n",
"[flaml.automl: 04-07 09:45:42] {993} INFO - iteration 39, current learner lgbm\n",
"[flaml.automl: 04-07 09:45:52] {1141} INFO - at 107.4s,\tbest lgbm's error=0.1604,\tbest lgbm's error=0.1604\n",
"[flaml.automl: 04-07 09:45:52] {993} INFO - iteration 40, current learner lgbm\n",
"[flaml.automl: 04-07 09:45:56] {1141} INFO - at 111.4s,\tbest lgbm's error=0.1604,\tbest lgbm's error=0.1604\n",
"[flaml.automl: 04-07 09:45:56] {993} INFO - iteration 41, current learner lgbm\n",
"[flaml.automl: 04-07 09:45:58] {1141} INFO - at 113.6s,\tbest lgbm's error=0.1604,\tbest lgbm's error=0.1604\n",
"[flaml.automl: 04-07 09:45:58] {993} INFO - iteration 42, current learner lgbm\n",
"[flaml.automl: 04-07 09:46:03] {1141} INFO - at 118.5s,\tbest lgbm's error=0.1604,\tbest lgbm's error=0.1604\n",
"[flaml.automl: 04-07 09:46:03] {1187} INFO - selected model: LGBMRegressor(colsample_bytree=0.7586723794764185,\n",
" learning_rate=0.10418050364992694, max_bin=127,\n",
" min_child_samples=21, n_estimators=95, num_leaves=254,\n",
" objective='regression', reg_alpha=0.09228337080759572,\n",
" reg_lambda=0.46673178167010676, subsample=0.9097941662911945)\n",
"[flaml.automl: 04-07 09:46:03] {944} INFO - fit succeeded\n"
]
}
],
"source": [
"'''The main flaml automl API'''\n",
"automl.fit(X_train=X_train, y_train=y_train, **settings)"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### Best model and metric"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"slideshow": {
"slide_type": "slide"
},
"tags": []
},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"Best hyperparmeter config: {'n_estimators': 95.0, 'num_leaves': 254.0, 'min_child_samples': 21.0, 'learning_rate': 0.10418050364992694, 'subsample': 0.9097941662911945, 'log_max_bin': 7.0, 'colsample_bytree': 0.7586723794764185, 'reg_alpha': 0.09228337080759572, 'reg_lambda': 0.46673178167010676}\nBest r2 on validation data: 0.8396\nTraining duration of best run: 4.812 s\n"
]
}
],
"source": [
"''' retrieve best config'''\n",
"print('Best hyperparmeter config:', automl.best_config)\n",
"print('Best r2 on validation data: {0:.4g}'.format(1-automl.best_loss))\n",
"print('Training duration of best run: {0:.4g} s'.format(automl.best_config_train_time))"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"LGBMRegressor(colsample_bytree=0.7586723794764185,\n",
" learning_rate=0.10418050364992694, max_bin=127,\n",
" min_child_samples=21, n_estimators=95, num_leaves=254,\n",
" objective='regression', reg_alpha=0.09228337080759572,\n",
" reg_lambda=0.46673178167010676, subsample=0.9097941662911945)"
]
},
"metadata": {},
"execution_count": 7
}
],
"source": [
"automl.model"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"outputs": [],
"source": [
"''' pickle and save the automl object '''\n",
"import pickle\n",
"with open('automl.pkl', 'wb') as f:\n",
" pickle.dump(automl, f, pickle.HIGHEST_PROTOCOL)"
]
},
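{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"''' a minimal sketch of restoring the pickled automl object (assumes automl.pkl was written by the previous cell) '''\n",
"with open('automl.pkl', 'rb') as f:\n",
"    automl_restored = pickle.load(f)"
]
},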
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"slideshow": {
"slide_type": "slide"
},
"tags": []
},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"Predicted labels [150367.25556214 263353.37798151 136897.76625025 ... 190606.68038356\n 237816.02972335 263063.11183796]\nTrue labels [136900. 241300. 200700. ... 160900. 227300. 265600.]\n"
]
}
],
"source": [
"''' compute predictions of testing dataset ''' \n",
"y_pred = automl.predict(X_test)\n",
"print('Predicted labels', y_pred)\n",
"print('True labels', y_test)"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {
"slideshow": {
"slide_type": "slide"
},
"tags": []
},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"r2 = 0.8500929784828137\nmse = 1981546944.5284543\nmae = 29485.579651356835\n"
]
}
],
"source": [
"''' compute different metric values on testing dataset'''\n",
"from flaml.ml import sklearn_metric_loss_score\n",
"print('r2', '=', 1 - sklearn_metric_loss_score('r2', y_pred, y_test))\n",
"print('mse', '=', sklearn_metric_loss_score('mse', y_pred, y_test))\n",
"print('mae', '=', sklearn_metric_loss_score('mae', y_pred, y_test))"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {
"slideshow": {
"slide_type": "subslide"
},
"tags": []
},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"{'Current Learner': 'lgbm', 'Current Sample': 15480, 'Current Hyper-parameters': {'n_estimators': 4, 'num_leaves': 4, 'min_child_samples': 20, 'learning_rate': 0.1, 'subsample': 1.0, 'log_max_bin': 8, 'colsample_bytree': 1.0, 'reg_alpha': 0.0009765625, 'reg_lambda': 1.0}, 'Best Learner': 'lgbm', 'Best Hyper-parameters': {'n_estimators': 4, 'num_leaves': 4, 'min_child_samples': 20, 'learning_rate': 0.1, 'subsample': 1.0, 'log_max_bin': 8, 'colsample_bytree': 1.0, 'reg_alpha': 0.0009765625, 'reg_lambda': 1.0}}\n{'Current Learner': 'lgbm', 'Current Sample': 15480, 'Current Hyper-parameters': {'n_estimators': 4.0, 'num_leaves': 4.0, 'min_child_samples': 25.0, 'learning_rate': 1.0, 'subsample': 0.8513627344387318, 'log_max_bin': 10.0, 'colsample_bytree': 0.9684145930669938, 'reg_alpha': 0.001831177697321707, 'reg_lambda': 0.2790165919053839}, 'Best Learner': 'lgbm', 'Best Hyper-parameters': {'n_estimators': 4.0, 'num_leaves': 4.0, 'min_child_samples': 25.0, 'learning_rate': 1.0, 'subsample': 0.8513627344387318, 'log_max_bin': 10.0, 'colsample_bytree': 0.9684145930669938, 'reg_alpha': 0.001831177697321707, 'reg_lambda': 0.2790165919053839}}\n{'Current Learner': 'lgbm', 'Current Sample': 15480, 'Current Hyper-parameters': {'n_estimators': 20.0, 'num_leaves': 4.0, 'min_child_samples': 48.0, 'learning_rate': 1.0, 'subsample': 0.9814787163243813, 'log_max_bin': 10.0, 'colsample_bytree': 0.9534346594834143, 'reg_alpha': 0.002208534076096185, 'reg_lambda': 0.5460627024738886}, 'Best Learner': 'lgbm', 'Best Hyper-parameters': {'n_estimators': 20.0, 'num_leaves': 4.0, 'min_child_samples': 48.0, 'learning_rate': 1.0, 'subsample': 0.9814787163243813, 'log_max_bin': 10.0, 'colsample_bytree': 0.9534346594834143, 'reg_alpha': 0.002208534076096185, 'reg_lambda': 0.5460627024738886}}\n{'Current Learner': 'lgbm', 'Current Sample': 15480, 'Current Hyper-parameters': {'n_estimators': 11.0, 'num_leaves': 15.0, 'min_child_samples': 42.0, 'learning_rate': 0.4743416464891248, 'subsample': 0.9233328006239466, 'log_max_bin': 10.0, 'colsample_bytree': 1.0, 'reg_alpha': 0.034996420228767956, 'reg_lambda': 0.6169079461473814}, 'Best Learner': 'lgbm', 'Best Hyper-parameters': {'n_estimators': 11.0, 'num_leaves': 15.0, 'min_child_samples': 42.0, 'learning_rate': 0.4743416464891248, 'subsample': 0.9233328006239466, 'log_max_bin': 10.0, 'colsample_bytree': 1.0, 'reg_alpha': 0.034996420228767956, 'reg_lambda': 0.6169079461473814}}\n{'Current Learner': 'lgbm', 'Current Sample': 15480, 'Current Hyper-parameters': {'n_estimators': 22.0, 'num_leaves': 44.0, 'min_child_samples': 33.0, 'learning_rate': 0.7277554644304967, 'subsample': 0.8890322269681047, 'log_max_bin': 9.0, 'colsample_bytree': 0.8917187085424868, 'reg_alpha': 0.3477637978466495, 'reg_lambda': 0.24655709710146537}, 'Best Learner': 'lgbm', 'Best Hyper-parameters': {'n_estimators': 22.0, 'num_leaves': 44.0, 'min_child_samples': 33.0, 'learning_rate': 0.7277554644304967, 'subsample': 0.8890322269681047, 'log_max_bin': 9.0, 'colsample_bytree': 0.8917187085424868, 'reg_alpha': 0.3477637978466495, 'reg_lambda': 0.24655709710146537}}\n{'Current Learner': 'lgbm', 'Current Sample': 15480, 'Current Hyper-parameters': {'n_estimators': 60.0, 'num_leaves': 72.0, 'min_child_samples': 37.0, 'learning_rate': 0.23811059538783155, 'subsample': 1.0, 'log_max_bin': 8.0, 'colsample_bytree': 0.9162072323824675, 'reg_alpha': 0.7017839907881602, 'reg_lambda': 0.23027329389914142}, 'Best Learner': 'lgbm', 'Best Hyper-parameters': {'n_estimators': 60.0, 'num_leaves': 72.0, 
'min_child_samples': 37.0, 'learning_rate': 0.23811059538783155, 'subsample': 1.0, 'log_max_bin': 8.0, 'colsample_bytree': 0.9162072323824675, 'reg_alpha': 0.7017839907881602, 'reg_lambda': 0.23027329389914142}}\n{'Current Learner': 'lgbm', 'Current Sample': 15480, 'Current Hyper-parameters': {'n_estimators': 95.0, 'num_leaves': 254.0, 'min_child_samples': 21.0, 'learning_rate': 0.10418050364992694, 'subsample': 0.9097941662911945, 'log_max_bin': 7.0, 'colsample_bytree': 0.7586723794764185, 'reg_alpha': 0.09228337080759572, 'reg_lambda': 0.4667317"
]
}
],
"source": [
"from flaml.data import get_output_from_log\n",
"time_history, best_valid_loss_history, valid_loss_history, config_history, train_loss_history = \\\n",
" get_output_from_log(filename=settings['log_file_name'], time_budget=60)\n",
"\n",
"for config in config_history:\n",
" print(config)"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"outputs": [
{
"output_type": "display_data",
"data": {
"text/plain": "<Figure size 432x288 with 1 Axes>",
2021-04-08 09:29:55 -07:00
"image/svg+xml": "<?xml version=\"1.0\" encoding=\"utf-8\" standalone=\"no\"?>\n<!DOCTYPE svg PUBLIC \"-//W3C//DTD SVG 1.1//EN\"\n \"http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd\">\n<!-- Created with matplotlib (https://matplotlib.org/) -->\n<svg height=\"277.314375pt\" version=\"1.1\" viewBox=\"0 0 385.78125 277.314375\" width=\"385.78125pt\" xmlns=\"http://www.w3.org/2000/svg\" xmlns:xlink=\"http://www.w3.org/1999/xlink\">\n <defs>\n <style type=\"text/css\">\n*{stroke-linecap:butt;stroke-linejoin:round;}\n </style>\n </defs>\n <g id=\"figure_1\">\n <g id=\"patch_1\">\n <path d=\"M 0 277.314375 \nL 385.78125 277.314375 \nL 385.78125 0 \nL 0 0 \nz\n\" style=\"fill:none;\"/>\n </g>\n <g id=\"axes_1\">\n <g id=\"patch_2\">\n <path d=\"M 43.78125 239.758125 \nL 378.58125 239.758125 \nL 378.58125 22.318125 \nL 43.78125 22.318125 \nz\n\" style=\"fill:#ffffff;\"/>\n </g>\n <g id=\"PathCollection_1\">\n <defs>\n <path d=\"M 0 3 \nC 0.795609 3 1.55874 2.683901 2.12132 2.12132 \nC 2.683901 1.55874 3 0.795609 3 0 \nC 3 -0.795609 2.683901 -1.55874 2.12132 -2.12132 \nC 1.55874 -2.683901 0.795609 -3 0 -3 \nC -0.795609 -3 -1.55874 -2.683901 -2.12132 -2.12132 \nC -2.683901 -1.55874 -3 -0.795609 -3 0 \nC -3 0.795609 -2.683901 1.55874 -2.12132 2.12132 \nC -1.55874 2.683901 -0.795609 3 0 3 \nz\n\" id=\"m559b741c5f\" style=\"stroke:#1f77b4;\"/>\n </defs>\n <g clip-path=\"url(#p95a63bce45)\">\n <use style=\"fill:#1f77b4;stroke:#1f77b4;\" x=\"58.999432\" xlink:href=\"#m559b741c5f\" y=\"229.874489\"/>\n <use style=\"fill:#1f77b4;stroke:#1f77b4;\" x=\"61.865949\" xlink:href=\"#m559b741c5f\" y=\"110.31857\"/>\n <use style=\"fill:#1f77b4;stroke:#1f77b4;\" x=\"65.983189\" xlink:href=\"#m559b741c5f\" y=\"68.231069\"/>\n <use style=\"fill:#1f77b4;stroke:#1f77b4;\" x=\"69.28761\" xlink:href=\"#m559b741c5f\" y=\"54.493868\"/>\n <use style=\"fill:#1f77b4;stroke:#1f77b4;\" x=\"84.233916\" xlink:href=\"#m559b741c5f\" y=\"49.119324\"/>\n <use style=\"fill:#1f77b4;stroke:#1f77b4;\" x=\"104.740952\" xlink:href=\"#m559b741c5f\" y=\"33.574395\"/>\n <use style=\"fill:#1f77b4;stroke:#1f77b4;\" x=\"363.363068\" xlink:href=\"#m559b741c5f\" y=\"32.201761\"/>\n </g>\n </g>\n <g id=\"matplotlib.axis_1\">\n <g id=\"xtick_1\">\n <g id=\"line2d_1\">\n <defs>\n <path d=\"M 0 0 \nL 0 3.5 \n\" id=\"m4516ca0ef1\" style=\"stroke:#000000;stroke-width:0.8;\"/>\n </defs>\n <g>\n <use style=\"stroke:#000000;stroke-width:0.8;\" x=\"56.014357\" xlink:href=\"#m4516ca0ef1\" y=\"239.758125\"/>\n </g>\n </g>\n <g id=\"text_1\">\n <!-- 0 -->\n <defs>\n <path d=\"M 31.78125 66.40625 \nQ 24.171875 66.40625 20.328125 58.90625 \nQ 16.5 51.421875 16.5 36.375 \nQ 16.5 21.390625 20.328125 13.890625 \nQ 24.171875 6.390625 31.78125 6.390625 \nQ 39.453125 6.390625 43.28125 13.890625 \nQ 47.125 21.390625 47.125 36.375 \nQ 47.125 51.421875 43.28125 58.90625 \nQ 39.453125 66.40625 31.78125 66.40625 \nz\nM 31.78125 74.21875 \nQ 44.046875 74.21875 50.515625 64.515625 \nQ 56.984375 54.828125 56.984375 36.375 \nQ 56.984375 17.96875 50.515625 8.265625 \nQ 44.046875 -1.421875 31.78125 -1.421875 \nQ 19.53125 -1.421875 13.0625 8.265625 \nQ 6.59375 17.96875 6.59375 36.375 \nQ 6.59375 54.828125 13.0625 64.515625 \nQ 19.53125 74.21875 31.78125 74.21875 \nz\n\" id=\"DejaVuSans-48\"/>\n </defs>\n <g transform=\"translate(52.833107 254.356562)scale(0.1 -0.1)\">\n <use xlink:href=\"#DejaVuSans-48\"/>\n </g>\n </g>\n </g>\n <g id=\"xtick_2\">\n <g id=\"line2d_2\">\n <g>\n <use style=\"stroke:#000000;stroke-width:0.8;\" x=\"94.84625\" xlink:href=\"#m4516ca0ef1\" 
y=\"239.758125\"/>\n </g>\n </g>\n <g id=\"text_2\">\n <!-- 5 -->\n <defs>\n <path d=\"M 10.796875 72.90625 \nL 49.515625 72.90625 \nL 49.515625 64.59375 \nL 19.828125 64.59375 \nL 19.828125 46.734375 \nQ 21.96875 47.46875 24.109375 47.828125 \nQ 26.265625 48.1875 28.421875 48.1875 \nQ 40.625 48.1875 47.75 41.5 \nQ 54.890625 34.8125 54.890625 2
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAYIAAAEWCAYAAABrDZDcAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4yLjAsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy8GearUAAAcoUlEQVR4nO3de5gdVZnv8e+PJiFBLgHTMqSTkCAxGi4mGGHwgsCACagkCDKBeeYojkYdYRxwookCMnA4ozLiwedEOcBBLsMdQwgYiSi3EZAkECAXDIaIJB2UcAkEbMntPX9Udahsdu/e3enae3fX7/M8++ldq1ZVvV1J73evtapWKSIwM7Pi2qHeAZiZWX05EZiZFZwTgZlZwTkRmJkVnBOBmVnBORGYmRWcE4FZBZI+Kml5veMwy5MTgTUsSc9KOrqeMUTEf0fE6Lz2L2mCpAckrZe0VtL9ko7P63hm5TgRWKFJaqrjsU8CbgGuAYYCewHnAp/qxr4kyX/P1i3+j2O9jqQdJE2X9IyklyTdLGnPzPpbJP1J0qvpt+39M+uukvQTSXMlvQEcmbY8/k3Sk+k2N0kakNY/QtLqzPYd1k3Xf0PS85LWSPqCpJC0X5nfQcDFwAURcUVEvBoRWyLi/oj4YlrnPEn/ldlmRLq/HdPl+yRdKOlB4C/ANEkLS45zpqQ56fudJP2npOck/VnSpZIGbuc/h/UBTgTWG50BTAY+BgwBXgFmZtb/AhgFvAt4DLiuZPtTgQuBXYHfpGUnAxOBkcBBwOcqHL9sXUkTgbOAo4H9gCMq7GM0MAy4tUKdavwjMJXkd7kUGC1pVGb9qcD16fvvAu8BxqbxtZC0QKzgnAisN/oy8O2IWB0RbwLnASe1f1OOiCsjYn1m3fsl7Z7Z/vaIeDD9Bv7XtOxHEbEmIl4G7iD5sOxIR3VPBn4aEUsj4i/psTvyzvTn89X+0h24Kj3epoh4FbgdOAUgTQjvBeakLZCpwJkR8XJErAf+FzBlO49vfYATgfVG+wC3SVonaR3wFLAZ2EtSk6Tvpt1GrwHPptsMzmy/qsw+/5R5/xdglwrH76jukJJ9lztOu5fSn3tXqFON0mNcT5oISFoDs9Ok1AzsDDyaOW93peVWcE4E1hutAo6NiEGZ14CIaCX58JtE0j2zOzAi3UaZ7fOacvd5kkHfdsMq1F1O8nucWKHOGyQf3u3+pkyd0t/lbqBZ0liShNDeLfQi0Absnzlnu0dEpYRnBeFEYI2un6QBmdeOJH3hF0raB0BSs6RJaf1dgTdJvnHvTNL9USs3A6dJep+knYFzOqoYyfzvZwHnSDpN0m7pIPhHJF2WVnscOFzS8LRra0ZnAUTERpIrkS4C9iRJDETEFuBy4IeS3gUgqUXShG7/ttZnOBFYo5tL8k22/XUecAkwB/ilpPXAb4FD0/rXAH8EWoFl6bqaiIhfAD8C7gVWZI79Zgf1bwX+Hvg8sAb4M/A/Sfr5iYi7gZuAJ4FHgTurDOV6khbRLRGxKVP+zfa40m6zX5EMWlvByQ+mMcuHpPcBS4CdSj6QzRqKWwRmPUjSCen1+nsA3wPucBKwRudEYNazvgS8ADxDciXTV+objlnn3DVkZlZwbhGYmRXcjvUOoKsGDx4cI0aMqHcYZma9yqOPPvpiRJS9gbDXJYIRI0awcOHCziuamdlWkv7Y0Tp3DZmZFZwTgZlZwTkRmJkVnBOBmVnBORGYmRVcr7tqyMysaGYvauWiectZs66NIYMGMm3CaCaPa+mx/TsRmJk1sNmLWpkxazFtGzcD0LqujRmzFgP0WDJwImhgeX8LMLPGd9G85VuTQLu2jZu5aN5yJ4K+rhbfAsys8a1Z19al8u5wImhQHX0L+MatT3LD/OfqFJWZ1Vq/ph3YsHnL28qHDBrYY8fwVUMNqqNsX+4/hJn1XcP2HMgO2rZsYL8mpk3ouYfLuUXQoIYMGkhrmWTQMmggN33psDpEZGb14quGCmrahNHbjBFAz38LMLPeYfK4llzHBp0IGlT7P/o3bn2SDZu30OKrhswsJ04EDWzyuJatA8PuDjKzvHiw2Mys4JwIzMwKzonAzKzgnAjMzAou10QgaaKk5ZJWSJpeZv1wSfdKWiTpSUnH5RmPmZm9XW6JQFITMBM4FhgDnCJpTEm1s4GbI2IcMAX4cV7xmJlZeXm2CA4BVkTEyojYANwITCqpE8Bu6fvdgTU5xmNmZmXkeR9BC7Aqs7waOLSkznnALyWdAbwDOLrcjiRNBaYCDB8+vMcD7SmeNtrMeqN6DxafAlwVEUOB44BrJb0tpoi4LCLGR8T45ubmmgdZjfZpo1vXtRG8NW307EWt9Q7NzKyiPFsErcCwzPLQtCzrn4CJABHxsKQBwGDghRzjykVe00Yve/41xuy9W+cVzcy6Kc8WwQJglKSRkvqTDAbPKanzHPB3AJLeBwwA1uYYU27ymjZ6zN67MWmsu5fMLD+5tQgiYpOk04F5QBNwZUQslXQ+sDAi5gBfBy6XdCbJwPHnIiLyiilPnjbazHqrXCedi4i5wNySsnMz75cBH84zhlrxtNFm1lt59tES3b3yx9NGm1lv5USQsb0PjPe00WbWGzkRZPTElT++ysfMept630fQUHriyh9f5WNmvY1bBBm+8sfMisgtgoxpE0YzsF/TNmW+8sfM+jq3CDJ85Y+ZFZETQQlf+WNmReOuITOzgnMiMDMrOCcCM7OCcyIwMys4JwIzs4Ir/FVD5SaZMzMrkkK3CDp6vOSLr79Z79DMzGqm0Imgo0nmVq59o04RmZnVXqETQUeTzAV44jgzK4xCJ4IhgwaWLW8ZNJBTDx1e42jMzOqj0InAk8yZmRX8qiFPMmdmVvBEAJ5kzsys0F1DZmbmRGBmVnhOBGZmBedEYGZWcLkmAkkTJS2XtELS9DLrfyjp8fT1tKR1ecZjZmZvl9tVQ5KagJnAMcBqYIGkORGxrL1ORJyZqX8GMC6veMzMrLw8WwSHACsiYmVEbABuBCZVqH8KcEOO8ZiZWRl5JoIWYFVmeXVa9jaS9gFGAvd0sH6qpIWSFq5du7bHAzUzK7JGGSyeAtwaEZvLrYyIyyJifESMb25urnFoZmZ9W56JoBUYllkempaVMwV3C5mZ1UWeiWABMErSSEn9ST7s55RWkvReYA/g4RxjMTOzDuSWCCJiE3A6MA94Crg5IpZKOl/S8ZmqU4AbIyLyisXMzDqW66RzETEXmFtSdm7J8nl5xmBmZpU1ymCxmZnViROBmVnBORGYmRWcE4GZWcE5EZiZFZwTgZlZwTkRmJkVnBOBmVnBORGYmRWcE4GZWcE5EZiZFZwTgZlZwTkRmJkVnBOBmVnBORGYmRWcE4GZWcFVTASSdpP07jLlB+UXkpmZ1VKHiUDSycDvgJ9JWirpg5nVV+UdmJmZ1UalFsG3gA9ExFjgNOBaSSek65R7ZGZmVhOVnlncFBHPA0TEfElHAndKGgb4QfNmZn1EpRbB+uz4QJoUjgAmAfvnHJeZmdVIp
RbBVyjpAoqI9ZImAifnGlWOZi9q5aJ5y1mzro0hgwYybcLoeodkZlZXHbYIIuIJ4A+S7i0p3xgR1+UeWQ5mL2plxqzFtK5rI4DWdW3MmLWYF19/s96hmZnVTcXLRyNiM7BF0u41iidXF81bTtvGzduUtW3czMq1b9QpIjOz+qvUNdTudWCxpLuBrZ+YEfEvuUWVkzXr2sqWBzBpbEttgzEzaxDV3Fk8CzgHeAB4NPPqlKSJkpZLWiFpegd1Tpa0LL1X4fpqA++OIYMGli1vGTSQUw8dnuehzcwaVqctgoi4ujs7ltQEzASOAVYDCyTNiYhlmTqjgBnAhyPiFUnv6s6xqjVtwmhmzFq8TffQwH5NHjA2s0LLc66hQ4AVEbEyIjYAN5Jcepr1RWBmRLwCEBEv5BgPk8e18B+fPpD+Tcmv3TJoIP/x6QOZPM7dQmZWXNWMEXRXC7Aqs7waOLSkznsAJD0INAHnRcRdpTuSNBWYCjB8+PZ14Uwe18IN858D4KYvHbZd+zIz6wvqPfvojsA
2021-02-22 22:10:41 -08:00
},
"metadata": {
"needs_background": "light"
}
}
],
"source": [
"import matplotlib.pyplot as plt\n",
"import numpy as np\n",
"\n",
"plt.title('Learning Curve')\n",
"plt.xlabel('Wall Clock Time (s)')\n",
"plt.ylabel('Validation r2')\n",
"plt.scatter(time_history, 1 - np.array(valid_loss_history))\n",
"plt.step(time_history, 1 - np.array(best_valid_loss_history), where='post')\n",
"plt.show()"
]
},
{
"source": [
"## 3. Comparison with alternatives\n",
"\n",
"### FLAML's accuracy"
],
"cell_type": "markdown",
"metadata": {}
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"flaml r2 = 0.8500929784828137\n"
]
}
],
"source": [
"print('flaml r2', '=', 1 - sklearn_metric_loss_score('r2', y_pred, y_test))"
]
},
{
"source": [
"### Default LightGBM"
],
"cell_type": "markdown",
"metadata": {}
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [],
"source": [
"from lightgbm import LGBMRegressor\n",
"lgbm = LGBMRegressor()"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"LGBMRegressor()"
]
},
"metadata": {},
"execution_count": 15
}
],
"source": [
"lgbm.fit(X_train, y_train)"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"default lgbm r2 = 0.8296179648694404\n"
]
}
],
"source": [
"y_pred = lgbm.predict(X_test)\n",
"from flaml.ml import sklearn_metric_loss_score\n",
"print('default lgbm r2', '=', 1 - sklearn_metric_loss_score('r2', y_pred, y_test))"
]
},
{
"source": [
"### Optuna LightGBM Tuner"
],
"cell_type": "markdown",
"metadata": {}
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [],
"source": [
"# !pip install optuna==2.5.0;"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.model_selection import train_test_split\n",
"train_x, val_x, train_y, val_y = train_test_split(X_train, y_train, test_size=0.1)\n",
"import optuna.integration.lightgbm as lgb\n",
"dtrain = lgb.Dataset(train_x, label=train_y)\n",
"dval = lgb.Dataset(val_x, label=val_y)\n",
"params = {\n",
" \"objective\": \"regression\",\n",
" \"metric\": \"regression\",\n",
" \"verbosity\": -1,\n",
"}\n"
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {
"tags": [
"outputPrepend"
]
},
"outputs": [
{
"output_type": "stream",
"name": "stderr",
"text": [
"ture_fraction': 0.6}. Best is trial 1 with value: 2168792363.2716327.\u001b[0m\n",
"feature_fraction, val_score: 2168792363.271633: 43%|####2 | 3/7 [00:06<00:08, 2.09s/it]\u001b[32m[I 2021-04-07 09:46:14,689]\u001b[0m Trial 2 finished with value: 2203882864.83228 and parameters: {'feature_fraction': 0.5}. Best is trial 1 with value: 2168792363.2716327.\u001b[0m\n",
"feature_fraction, val_score: 2141150566.925444: 57%|#####7 | 4/7 [00:08<00:06, 2.08s/it]\u001b[32m[I 2021-04-07 09:46:16,760]\u001b[0m Trial 3 finished with value: 2141150566.9254436 and parameters: {'feature_fraction': 0.8}. Best is trial 3 with value: 2141150566.9254436.\u001b[0m\n",
"feature_fraction, val_score: 2141150566.925444: 71%|#######1 | 5/7 [00:10<00:04, 2.18s/it]\u001b[32m[I 2021-04-07 09:46:19,171]\u001b[0m Trial 4 finished with value: 2222173233.287535 and parameters: {'feature_fraction': 1.0}. Best is trial 3 with value: 2141150566.9254436.\u001b[0m\n",
"feature_fraction, val_score: 2141150566.925444: 86%|########5 | 6/7 [00:12<00:02, 2.11s/it]\u001b[32m[I 2021-04-07 09:46:21,120]\u001b[0m Trial 5 finished with value: 2434969459.590528 and parameters: {'feature_fraction': 0.4}. Best is trial 3 with value: 2141150566.9254436.\u001b[0m\n",
"feature_fraction, val_score: 2141150566.925444: 100%|##########| 7/7 [00:14<00:00, 2.06s/it]\u001b[32m[I 2021-04-07 09:46:23,051]\u001b[0m Trial 6 finished with value: 2141150566.9254436 and parameters: {'feature_fraction': 0.7}. Best is trial 3 with value: 2141150566.9254436.\u001b[0m\n",
"feature_fraction, val_score: 2141150566.925444: 100%|##########| 7/7 [00:14<00:00, 2.06s/it]\n",
"num_leaves, val_score: 2141150566.925444: 5%|5 | 1/20 [00:04<01:29, 4.69s/it]\u001b[32m[I 2021-04-07 09:46:27,748]\u001b[0m Trial 7 finished with value: 2221392827.567352 and parameters: {'num_leaves': 102}. Best is trial 7 with value: 2221392827.567352.\u001b[0m\n",
"num_leaves, val_score: 2141150566.925444: 10%|# | 2/20 [00:11<01:34, 5.28s/it]\u001b[32m[I 2021-04-07 09:46:34,392]\u001b[0m Trial 8 finished with value: 2209090245.9009995 and parameters: {'num_leaves': 153}. Best is trial 8 with value: 2209090245.9009995.\u001b[0m\n",
"num_leaves, val_score: 2141150566.925444: 15%|#5 | 3/20 [00:21<01:54, 6.71s/it]\u001b[32m[I 2021-04-07 09:46:44,453]\u001b[0m Trial 9 finished with value: 2213438622.2234197 and parameters: {'num_leaves': 207}. Best is trial 8 with value: 2209090245.9009995.\u001b[0m\n",
"num_leaves, val_score: 2141150566.925444: 20%|## | 4/20 [00:22<01:21, 5.11s/it]\u001b[32m[I 2021-04-07 09:46:45,816]\u001b[0m Trial 10 finished with value: 2260840758.3015094 and parameters: {'num_leaves': 11}. Best is trial 8 with value: 2209090245.9009995.\u001b[0m\n",
"num_leaves, val_score: 2141150566.925444: 25%|##5 | 5/20 [00:24<01:02, 4.19s/it]\u001b[32m[I 2021-04-07 09:46:47,877]\u001b[0m Trial 11 finished with value: 2189414110.8748965 and parameters: {'num_leaves': 28}. Best is trial 11 with value: 2189414110.8748965.\u001b[0m\n",
"num_leaves, val_score: 2141150566.925444: 30%|### | 6/20 [00:35<01:26, 6.15s/it]\u001b[32m[I 2021-04-07 09:46:58,592]\u001b[0m Trial 12 finished with value: 2205695162.3326616 and parameters: {'num_leaves': 250}. Best is trial 11 with value: 2189414110.8748965.\u001b[0m\n",
"num_leaves, val_score: 2141150566.925444: 35%|###5 | 7/20 [00:40<01:15, 5.78s/it]\u001b[32m[I 2021-04-07 09:47:03,503]\u001b[0m Trial 13 finished with value: 2249106235.2104974 and parameters: {'num_leaves': 87}. Best is trial 11 with value: 2189414110.8748965.\u001b[0m\n",
"num_leaves, val_score: 2141150566.925444: 40%|#### | 8/20 [00:57<01:51, 9.28s/it]\u001b[32m[I 2021-04-07 09:47:20,942]\u001b[0m Trial 14 finished with value: 2204001547.9940004 and parameters: {'num_leaves': 256}. Best is trial 11 with value: 2189414110.8748965.\u001b[0m\n",
"num_leaves, val_score: 2141150566.925444: 45%|####5 | 9/20 [01:09<01:48, 9.83s/it]\u001b[32m[I 2021-04-07 09:47:32,063]\u001b[0m Trial 15 finished with value: 2204043662.7487397 and parameters: {'num_leaves': 180}. Best is trial 11 with value: 2189414110.8748965.\u001b[0m\n",
"num_leaves, val_score: 2141150566.925444: 50%|##### | 10/20 [01:13<01:23, 8.34s/it]\u001b[32m[I 2021-04-07 09:47:36,944]\u001b[0m Trial 16 finished with value: 2185138465.178819 and parameters: {'num_leaves': 50}. Best is trial 16 with value: 2185138465.178819.\u001b[0m\n",
"num_leaves, val_score: 2141150566.925444: 55%|#####5 | 11/20 [01:31<01:40, 11.12s/it]\u001b[32m[I 2021-04-07 09:47:54,525]\u001b[0m Trial 17 finished with value: 2218934177.762569 and parameters: {'num_leaves': 217}. Best is trial 16 with value: 2185138465.178819.\u001b[0m\n",
"num_leaves, val_score: 2141150566.925444: 60%|###### | 12/20 [01:39<01:20, 10.04s/it]\u001b[32m[I 2021-04-07 09:48:02,060]\u001b[0m Trial 18 finished with value: 2178018049.391758 and parameters: {'num_leaves': 126}. Best is trial 18 with value: 2178018049.391758.\u001b[0m\n",
"num_leaves, val_score: 2141150566.925444: 65%|######5 | 13/20 [01:42<00:57, 8.21s/it]\u001b[32m[I 2021-04-07 09:48:05,995]\u001b[0m Trial 19 finished with value: 2174930961.807095 and parameters: {'num_leaves': 67}. Best is trial 19 with value: 2174930961.807095.\u001b[0m\n",
"num_leaves, val_score: 2141150566.925444: 70%|####### | 14/20 [01:51<00:49, 8.28s/it]\u001b[32m[I 2021-04-07 09:48:14,424]\u001b[0m Trial 20 finished with value: 2219052218.0844493 and parameters: {'num_leaves': 167}. Best is trial 19 with value: 2174930961.807095.\u001b[0m\n",
"num_leaves, val_score: 2141150566.925444: 75%|#######5 | 15/20 [01:52<00:31, 6.27s/it]\u001b[32m[I 2021-04-07 09:48:16,027]\u001b[0m Trial 21 finished with value: 2327836585.7967525 and parameters: {'num_leaves': 8}. Best is trial 19 with value: 2174930961.807095.\u001b[0m\n",
"num_leaves, val_score: 2141150566.925444: 80%|######## | 16/20 [01:59<00:25, 6.43s/it]\u001b[32m[I 2021-04-07 09:48:22,821]\u001b[0m Trial 22 finished with value: 2218546943.8393993 and parameters: {'num_leaves': 131}. Best is trial 19 with value: 2174930961.807095.\u001b[0m\n",
"num_leaves, val_score: 2141150566.925444: 85%|########5 | 17/20 [02:10<00:23, 7.83s/it]\u001b[32m[I 2021-04-07 09:48:33,908]\u001b[0m Trial 23 finished with value: 2198057734.031422 and parameters: {'num_leaves': 223}. Best is trial 19 with value: 2174930961.807095.\u001b[0m\n",
"num_leaves, val_score: 2141150566.925444: 90%|######### | 18/20 [02:13<00:12, 6.18s/it]\u001b[32m[I 2021-04-07 09:48:36,256]\u001b[0m Trial 24 finished with value: 2213091258.3774385 and parameters: {'num_leaves': 40}. Best is trial 19 with value: 2174930961.807095.\u001b[0m\n",
"num_leaves, val_score: 2141150566.925444: 95%|#########5| 19/20 [02:21<00:06, 6.84s/it]\u001b[32m[I 2021-04-07 09:48:44,615]\u001b[0m Trial 25 finished with value: 2174165556.463721 and parameters: {'num_leaves': 187}. Best is trial 25 with value: 2174165556.463721.\u001b[0m\n",
"num_leaves, val_score: 2141150566.925444: 100%|##########| 20/20 [02:26<00:00, 6.36s/it]\u001b[32m[I 2021-04-07 09:48:49,851]\u001b[0m Trial 26 finished with value: 2234043799.4520364 and parameters: {'num_leaves': 107}. Best is trial 25 with value: 2174165556.463721.\u001b[0m\n",
"num_leaves, val_score: 2141150566.925444: 100%|##########| 20/20 [02:26<00:00, 7.34s/it]\n",
"bagging, val_score: 2141150566.925444: 10%|# | 1/10 [00:03<00:33, 3.74s/it]\u001b[32m[I 2021-04-07 09:48:53,597]\u001b[0m Trial 27 finished with value: 2194976294.3425474 and parameters: {'bagging_fraction': 0.9381257201628649, 'bagging_freq': 2}. Best is trial 27 with value: 2194976294.3425474.\u001b[0m\n",
"bagging, val_score: 2141150566.925444: 20%|## | 2/10 [00:07<00:29, 3.69s/it]\u001b[32m[I 2021-04-07 09:48:57,185]\u001b[0m Trial 28 finished with value: 2342329430.8664627 and parameters: {'bagging_fraction': 0.41898866342782093, 'bagging_freq': 7}. Best is trial 27 with value: 2194976294.3425474.\u001b[0m\n",
"bagging, val_score: 2141150566.925444: 30%|### | 3/10 [00:11<00:27, 3.97s/it]\u001b[32m[I 2021-04-07 09:49:01,816]\u001b[0m Trial 29 finished with value: 2328041384.6089053 and parameters: {'bagging_fraction': 0.42240147371851844, 'bagging_freq': 7}. Best is trial 27 with value: 2194976294.3425474.\u001b[0m\n",
"bagging, val_score: 2141150566.925444: 40%|#### | 4/10 [00:15<00:23, 3.89s/it]\u001b[32m[I 2021-04-07 09:49:05,524]\u001b[0m Trial 30 finished with value: 2220585084.5260315 and parameters: {'bagging_fraction': 0.7602049748396973, 'bagging_freq': 1}. Best is trial 27 with value: 2194976294.3425474.\u001b[0m\n",
"bagging, val_score: 2141150566.925444: 50%|##### | 5/10 [00:18<00:18, 3.68s/it]\u001b[32m[I 2021-04-07 09:49:08,687]\u001b[0m Trial 31 finished with value: 2151535025.6906157 and parameters: {'bagging_fraction': 0.9886357335328686, 'bagging_freq': 4}. Best is trial 31 with value: 2151535025.6906157.\u001b[0m\n",
"bagging, val_score: 2141150566.925444: 60%|###### | 6/10 [00:22<00:14, 3.66s/it]\u001b[32m[I 2021-04-07 09:49:12,307]\u001b[0m Trial 32 finished with value: 2157453394.751742 and parameters: {'bagging_fraction': 0.948059097995945, 'bagging_freq': 4}. Best is trial 31 with value: 2151535025.6906157.\u001b[0m\n",
"bagging, val_score: 2141150566.925444: 70%|####### | 7/10 [00:25<00:10, 3.62s/it]\u001b[32m[I 2021-04-07 09:49:15,835]\u001b[0m Trial 33 finished with value: 2195134563.4246325 and parameters: {'bagging_fraction': 0.9992531270613643, 'bagging_freq': 4}. Best is trial 31 with value: 2151535025.6906157.\u001b[0m\n",
"bagging, val_score: 2141150566.925444: 80%|######## | 8/10 [00:28<00:06, 3.39s/it]\u001b[32m[I 2021-04-07 09:49:18,679]\u001b[0m Trial 34 finished with value: 2201514253.966678 and parameters: {'bagging_fraction': 0.8603945993074557, 'bagging_freq': 4}. Best is trial 31 with value: 2151535025.6906157.\u001b[0m\n",
"bagging, val_score: 2141150566.925444: 90%|######### | 9/10 [00:31<00:03, 3.32s/it]\u001b[32m[I 2021-04-07 09:49:21,839]\u001b[0m Trial 35 finished with value: 2177340359.8120227 and parameters: {'bagging_fraction': 0.9944809344338572, 'bagging_freq': 5}. Best is trial 31 with value: 2151535025.6906157.\u001b[0m\n",
"bagging, val_score: 2141150566.925444: 100%|##########| 10/10 [00:35<00:00, 3.43s/it]\u001b[32m[I 2021-04-07 09:49:25,521]\u001b[0m Trial 36 finished with value: 2256532597.998516 and parameters: {'bagging_fraction': 0.6149278740592209, 'bagging_freq': 3}. Best is trial 31 with value: 2151535025.6906157.\u001b[0m\n",
"bagging, val_score: 2141150566.925444: 100%|##########| 10/10 [00:35<00:00, 3.57s/it]\n",
"feature_fraction_stage2, val_score: 2141150566.925444: 17%|#6 | 1/6 [00:02<00:10, 2.15s/it]\u001b[32m[I 2021-04-07 09:49:27,677]\u001b[0m Trial 37 finished with value: 2196528967.9181175 and parameters: {'feature_fraction': 0.88}. Best is trial 37 with value: 2196528967.9181175.\u001b[0m\n",
"feature_fraction_stage2, val_score: 2141150566.925444: 33%|###3 | 2/6 [00:04<00:09, 2.33s/it]\u001b[32m[I 2021-04-07 09:49:30,418]\u001b[0m Trial 38 finished with value: 2196528967.9181175 and parameters: {'feature_fraction': 0.8160000000000001}. Best is trial 37 with value: 2196528967.9181175.\u001b[0m\n",
"feature_fraction_stage2, val_score: 2141150566.925444: 50%|##### | 3/6 [00:07<00:07, 2.39s/it]\u001b[32m[I 2021-04-07 09:49:32,970]\u001b[0m Trial 39 finished with value: 2141150566.9254436 and parameters: {'feature_fraction': 0.7520000000000001}. Best is trial 39 with value: 2141150566.9254436.\u001b[0m\n",
"feature_fraction_stage2, val_score: 2141150566.925444: 67%|######6 | 4/6 [00:10<00:05, 2.64s/it]\u001b[32m[I 2021-04-07 09:49:36,199]\u001b[0m Trial 40 finished with value: 2141150566.9254436 and parameters: {'feature_fraction': 0.784}. Best is trial 39 with value: 2141150566.9254436.\u001b[0m\n",
"feature_fraction_stage2, val_score: 2141150566.925444: 83%|########3 | 5/6 [00:14<00:02, 2.97s/it]\u001b[32m[I 2021-04-07 09:49:39,926]\u001b[0m Trial 41 finished with value: 2196528967.9181175 and parameters: {'feature_fraction': 0.8480000000000001}. Best is trial 39 with value: 2141150566.9254436.\u001b[0m\n",
"feature_fraction_stage2, val_score: 2141150566.925444: 100%|##########| 6/6 [00:17<00:00, 3.09s/it]\u001b[32m[I 2021-04-07 09:49:43,305]\u001b[0m Trial 42 finished with value: 2141150566.9254436 and parameters: {'feature_fraction': 0.7200000000000001}. Best is trial 39 with value: 2141150566.9254436.\u001b[0m\n",
"feature_fraction_stage2, val_score: 2141150566.925444: 100%|##########| 6/6 [00:17<00:00, 2.96s/it]\n",
"regularization_factors, val_score: 2141150384.773846: 5%|5 | 1/20 [00:02<00:44, 2.33s/it]\u001b[32m[I 2021-04-07 09:49:45,641]\u001b[0m Trial 43 finished with value: 2141150384.7738457 and parameters: {'lambda_l1': 0.06294194806191455, 'lambda_l2': 4.3876112262829564e-05}. Best is trial 43 with value: 2141150384.7738457.\u001b[0m\n",
"regularization_factors, val_score: 2115227274.261475: 10%|# | 2/20 [00:04<00:41, 2.28s/it]\u001b[32m[I 2021-04-07 09:49:47,814]\u001b[0m Trial 44 finished with value: 2115227274.2614753 and parameters: {'lambda_l1': 0.10310301230335357, 'lambda_l2': 8.873754009714573e-05}. Best is trial 44 with value: 2115227274.2614753.\u001b[0m\n",
"regularization_factors, val_score: 2115227274.261475: 15%|#5 | 3/20 [00:06<00:39, 2.31s/it]\u001b[32m[I 2021-04-07 09:49:50,193]\u001b[0m Trial 45 finished with value: 2141150410.2318819 and parameters: {'lambda_l1': 0.1156997936219081, 'lambda_l2': 3.3172871939656863e-05}. Best is trial 44 with value: 2115227274.2614753.\u001b[0m\n",
"regularization_factors, val_score: 2115227274.261475: 20%|## | 4/20 [00:09<00:39, 2.47s/it]\u001b[32m[I 2021-04-07 09:49:53,027]\u001b[0m Trial 46 finished with value: 2141150309.358196 and parameters: {'lambda_l1': 0.13173318624572986, 'lambda_l2': 5.888295892154888e-05}. Best is trial 44 with value: 2115227274.2614753.\u001b[0m\n",
"regularization_factors, val_score: 2115227274.261475: 25%|##5 | 5/20 [00:12<00:36, 2.43s/it]\u001b[32m[I 2021-04-07 09:49:55,374]\u001b[0m Trial 47 finished with value: 2141150340.1057699 and parameters: {'lambda_l1': 0.14472391805658655, 'lambda_l2': 4.9688537888110276e-05}. Best is trial 44 with value: 2115227274.2614753.\u001b[0m\n",
"regularization_factors, val_score: 2115227274.261475: 30%|### | 6/20 [00:14<00:33, 2.39s/it]\u001b[32m[I 2021-04-07 09:49:57,660]\u001b[0m Trial 48 finished with value: 2141150335.5146017 and parameters: {'lambda_l1': 0.076040157489334, 'lambda_l2': 5.6173432118995905e-05}. Best is trial 44 with value: 2115227274.2614753.\u001b[0m\n",
"regularization_factors, val_score: 2115227274.261475: 35%|###5 | 7/20 [00:16<00:30, 2.34s/it]\u001b[32m[I 2021-04-07 09:49:59,876]\u001b[0m Trial 49 finished with value: 2141150302.209143 and parameters: {'lambda_l1': 0.09052718358387855, 'lambda_l2': 6.390283061599122e-05}. Best is trial 44 with value: 2115227274.2614753.\u001b[0m\n",
"regularization_factors, val_score: 2115227274.261475: 40%|#### | 8/20 [00:20<00:32, 2.74s/it]\u001b[32m[I 2021-04-07 09:50:03,559]\u001b[0m Trial 50 finished with value: 2141150251.1968682 and parameters: {'lambda_l1': 0.10237603744220825, 'lambda_l2': 7.654698252019428e-05}. Best is trial 44 with value: 2115227274.2614753.\u001b[0m\n",
"regularization_factors, val_score: 2115227258.465188: 45%|####5 | 9/20 [00:22<00:28, 2.56s/it]\u001b[32m[I 2021-04-07 09:50:05,683]\u001b[0m Trial 51 finished with value: 2115227258.4651875 and parameters: {'lambda_l1': 0.16517523834461412, 'lambda_l2': 8.822808793625208e-05}. Best is trial 51 with value: 2115227258.4651875.\u001b[0m\n",
"regularization_factors, val_score: 2115227258.465188: 50%|##### | 10/20 [00:24<00:24, 2.41s/it]\u001b[32m[I 2021-04-07 09:50:07,755]\u001b[0m Trial 52 finished with value: 2115227271.294053 and parameters: {'lambda_l1': 0.10300814643324889, 'lambda_l2': 8.986477964053952e-05}. Best is trial 51 with value: 2115227258.4651875.\u001b[0m\n",
"regularization_factors, val_score: 2115227066.560425: 55%|#####5 | 11/20 [00:26<00:20, 2.29s/it]\u001b[32m[I 2021-04-07 09:50:09,772]\u001b[0m Trial 53 finished with value: 2115227066.5604246 and parameters: {'lambda_l1': 0.1641168724158022, 'lambda_l2': 0.0001526989159908344}. Best is trial 53 with value: 2115227066.5604246.\u001b[0m\n",
"regularization_factors, val_score: 2115227066.560425: 60%|###### | 12/20 [00:28<00:17, 2.22s/it]\u001b[32m[I 2021-04-07 09:50:11,832]\u001b[0m Trial 54 finished with value: 2146215291.9992926 and parameters: {'lambda_l1': 0.15939243928354838, 'lambda_l2': 0.0006817203686132486}. Best is trial 53 with value: 2115227066.5604246.\u001b[0m\n",
"regularization_factors, val_score: 2115227066.560425: 65%|######5 | 13/20 [00:30<00:15, 2.26s/it]\u001b[32m[I 2021-04-07 09:50:14,173]\u001b[0m Trial 55 finished with value: 2124575175.9074316 and parameters: {'lambda_l1': 1.590701206686507, 'lambda_l2': 0.0004836750671216936}. Best is trial 53 with value: 2115227066.5604246.\u001b[0m\n",
"regularization_factors, val_score: 2115227066.560425: 70%|####### | 14/20 [00:33<00:13, 2.25s/it]\u001b[32m[I 2021-04-07 09:50:16,402]\u001b[0m Trial 56 finished with value: 2215145015.078994 and parameters: {'lambda_l1': 6.558085959732524, 'lambda_l2': 0.036794303002768765}. Best is trial 53 with value: 2115227066.5604246.\u001b[0m\n",
"regularization_factors, val_score: 2115227066.560425: 75%|#######5 | 15/20 [00:35<00:11, 2.24s/it]\u001b[32m[I 2021-04-07 09:50:18,616]\u001b[0m Trial 57 finished with value: 2128795697.287768 and parameters: {'lambda_l1': 1.9780457030427747, 'lambda_l2': 0.0025698103247060802}. Best is trial 53 with value: 2115227066.5604246.\u001b[0m\n",
"regularization_factors, val_score: 2115227066.560425: 80%|######## | 16/20 [00:37<00:09, 2.29s/it]\u001b[32m[I 2021-04-07 09:50:21,014]\u001b[0m Trial 58 finished with value: 2142481678.8166425 and parameters: {'lambda_l1': 5.446960909510068, 'lambda_l2': 0.0049375987390305265}. Best is trial 53 with value: 2115227066.5604246.\u001b[0m\n",
"regularization_factors, val_score: 2115227066.560425: 85%|########5 | 17/20 [00:39<00:06, 2.23s/it]\u001b[32m[I 2021-04-07 09:50:23,116]\u001b[0m Trial 59 finished with value: 2141150565.1464455 and parameters: {'lambda_l1': 4.560996301544332e-07, 'lambda_l2': 6.012037841700167e-07}. Best is trial 53 with value: 2115227066.5604246.\u001b[0m\n",
"regularization_factors, val_score: 2115227066.560425: 90%|######### | 18/20 [00:42<00:04, 2.24s/it]\u001b[32m[I 2021-04-07 09:50:25,376]\u001b[0m Trial 60 finished with value: 2131740079.3508956 and parameters: {'lambda_l1': 3.2803363921216047, 'lambda_l2': 0.0020327218351278574}. Best is trial 53 with value: 2115227066.5604246.\u001b[0m\n",
"regularization_factors, val_score: 2115227066.560425: 95%|#########5| 19/20 [00:44<00:02, 2.27s/it]\u001b[32m[I 2021-04-07 09:50:27,726]\u001b[0m Trial 61 finished with value: 2117851257.56334 and parameters: {'lambda_l1': 3.720228982707488, 'lambda_l2': 0.0018619995969515544}. Best is trial 53 with value: 2115227066.5604246.\u001b[0m\n",
"regularization_factors, val_score: 2115227066.560425: 100%|##########| 20/20 [00:46<00:00, 2.24s/it]\u001b[32m[I 2021-04-07 09:50:29,895]\u001b[0m Trial 62 finished with value: 2129019536.0433643 and parameters: {'lambda_l1': 2.024171578537482, 'lambda_l2': 0.000987105837490513}. Best is trial 53 with value: 2115227066.5604246.\u001b[0m\n",
"regularization_factors, val_score: 2115227066.560425: 100%|##########| 20/20 [00:46<00:00, 2.33s/it]\n",
"min_data_in_leaf, val_score: 2115227066.560425: 20%|## | 1/5 [00:02<00:09, 2.48s/it]\u001b[32m[I 2021-04-07 09:50:32,380]\u001b[0m Trial 63 finished with value: 2164909720.933062 and parameters: {'min_child_samples': 25}. Best is trial 63 with value: 2164909720.933062.\u001b[0m\n",
"min_data_in_leaf, val_score: 2115227066.560425: 40%|#### | 2/5 [00:04<00:07, 2.45s/it]\u001b[32m[I 2021-04-07 09:50:34,769]\u001b[0m Trial 64 finished with value: 2216971696.818525 and parameters: {'min_child_samples': 50}. Best is trial 63 with value: 2164909720.933062.\u001b[0m\n",
"min_data_in_leaf, val_score: 2115227066.560425: 60%|###### | 3/5 [00:07<00:05, 2.61s/it]\u001b[32m[I 2021-04-07 09:50:37,755]\u001b[0m Trial 65 finished with value: 2254180298.9772987 and parameters: {'min_child_samples': 100}. Best is trial 63 with value: 2164909720.933062.\u001b[0m\n",
"min_data_in_leaf, val_score: 2115227066.560425: 80%|######## | 4/5 [00:09<00:02, 2.46s/it]\u001b[32m[I 2021-04-07 09:50:39,872]\u001b[0m Trial 66 finished with value: 2166633996.5077305 and parameters: {'min_child_samples': 10}. Best is trial 63 with value: 2164909720.933062.\u001b[0m\n",
"min_data_in_leaf, val_score: 2115227066.560425: 100%|##########| 5/5 [00:12<00:00, 2.34s/it]\u001b[32m[I 2021-04-07 09:50:41,932]\u001b[0m Trial 67 finished with value: 2147448653.8826163 and parameters: {'min_child_samples': 5}. Best is trial 67 with value: 2147448653.8826163.\u001b[0m\n",
"min_data_in_leaf, val_score: 2115227066.560425: 100%|##########| 5/5 [00:12<00:00, 2.41s/it]CPU times: user 4min 15s, sys: 12.3 s, total: 4min 27s\n",
"Wall time: 4min 33s\n",
"\n"
]
}
],
"source": [
"%%time\n",
"model = lgb.train(params, dtrain, valid_sets=[dtrain, dval], verbose_eval=10000) \n"
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"Optuna LightGBM Tuner r2 = 0.8454837791695002\n"
]
}
],
"source": [
"y_pred = model.predict(X_test)\n",
"from flaml.ml import sklearn_metric_loss_score\n",
"print('Optuna LightGBM Tuner r2', '=', 1 - sklearn_metric_loss_score('r2', y_pred, y_test))"
]
}
],
"metadata": {
"kernelspec": {
"name": "python3",
"display_name": "Python 3.8.0 64-bit ('blend': conda)",
"metadata": {
"interpreter": {
"hash": "0cfea3304185a9579d09e0953576b57c8581e46e6ebc6dfeb681bc5a511f7544"
}
}
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.0"
}
},
"nbformat": 4,
"nbformat_minor": 2
}