{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"Copyright (c) 2020-2021 Microsoft Corporation. All rights reserved. \n",
"\n",
"Licensed under the MIT License.\n",
"\n",
"# Tune XGBoost with FLAML Library\n",
"\n",
"\n",
"## 1. Introduction\n",
"\n",
"FLAML is a Python library (https://github.com/microsoft/FLAML) designed to automatically produce accurate machine learning models \n",
"with low computational cost. It is fast and cheap. The simple and lightweight design makes it easy \n",
"to use and extend, such as adding new learners. FLAML can \n",
"- serve as an economical AutoML engine,\n",
"- be used as a fast hyperparameter tuning tool, or \n",
"- be embedded in self-tuning software that requires low latency & resource in repetitive\n",
" tuning tasks.\n",
"\n",
"In this notebook, we demonstrate how to use FLAML library to tune hyperparameters of XGBoost with a regression example.\n",
"\n",
"FLAML requires `Python>=3.6`. To run this notebook example, please install flaml with the `notebook` option:\n",
"```bash\n",
"pip install flaml[notebook]\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!pip install flaml[notebook];"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"## 2. Regression Example\n",
"### Load data and preprocess\n",
"\n",
"Download [houses dataset](https://www.openml.org/d/537) from OpenML. The task is to predict median price of the house in the region based on demographic composition and a state of housing market in the region."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"slideshow": {
"slide_type": "subslide"
},
"tags": []
},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"load dataset from ./openml_ds537.pkl\nDataset name: houses\nX_train.shape: (15480, 8), y_train.shape: (15480,);\nX_test.shape: (5160, 8), y_test.shape: (5160,)\n"
]
}
],
"source": [
"from flaml.data import load_openml_dataset\n",
"X_train, X_test, y_train, y_test = load_openml_dataset(dataset_id = 537, data_dir = './')"
]
},
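{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before tuning, it can help to take a quick look at the data. The cell below is an optional sketch; it assumes `load_openml_dataset` returns pandas objects (a DataFrame for `X_train` and a Series for `y_train`)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"''' optional: quick look at the loaded data (assumes pandas objects) '''\n",
"print(X_train.head())       # first few rows of the 8 demographic / housing-market features\n",
"print(y_train.describe())   # summary statistics of the target (median house price)"
]
},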
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### Run FLAML\n",
"In the FLAML automl run configuration, users can specify the task type, time budget, error metric, learner list, whether to subsample, resampling strategy type, and so on. All these arguments have default values which will be used if users do not provide them. "
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"outputs": [],
"source": [
"''' import AutoML class from flaml package '''\n",
"from flaml import AutoML\n",
"automl = AutoML()"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"outputs": [],
"source": [
"settings = {\n",
" \"time_budget\": 60, # total running time in seconds\n",
" \"metric\": 'r2', # primary metrics for regression can be chosen from: ['mae','mse','r2']\n",
" \"estimator_list\": ['xgboost'], # list of ML learners; we tune xgboost in this example\n",
" \"task\": 'regression', # task type \n",
" \"log_file_name\": 'houses_experiment.log', # flaml log file\n",
"}"
]
},
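{
"cell_type": "markdown",
"metadata": {},
"source": [
"Beyond the basics above, the search can be configured further, e.g. whether to subsample the training data and which resampling strategy to use. The next cell is an illustrative sketch only and is not used in this run; the `sample`, `eval_method` and `n_splits` argument names are assumptions based on the FLAML documentation."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"''' illustrative sketch of a fuller configuration (not used in this run; argument names assumed from the FLAML docs) '''\n",
"settings_full = {\n",
"    \"time_budget\": 60,             # total running time in seconds\n",
"    \"metric\": 'r2',                # error metric to optimize\n",
"    \"estimator_list\": ['xgboost'], # list of ML learners\n",
"    \"task\": 'regression',          # task type\n",
"    \"sample\": True,                # whether to subsample the training data during search\n",
"    \"eval_method\": 'cv',           # resampling strategy: 'auto', 'cv', or 'holdout'\n",
"    \"n_splits\": 5,                 # number of folds when eval_method is 'cv'\n",
"    \"log_file_name\": 'houses_experiment.log',  # flaml log file\n",
"}"
]
},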
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"slideshow": {
"slide_type": "slide"
},
"tags": []
},
"outputs": [
{
"output_type": "stream",
"name": "stderr",
"text": [
"[flaml.automl: 02-23 14:54:34] {853} INFO - Evaluation method: cv\n",
"INFO - Evaluation method: cv\n",
"[flaml.automl: 02-23 14:54:34] {577} INFO - Using RepeatedKFold\n",
"INFO - Using RepeatedKFold\n",
"[flaml.automl: 02-23 14:54:34] {874} INFO - Minimizing error metric: 1-r2\n",
"INFO - Minimizing error metric: 1-r2\n",
"[flaml.automl: 02-23 14:54:34] {894} INFO - List of ML learners in AutoML Run: ['xgboost']\n",
"INFO - List of ML learners in AutoML Run: ['xgboost']\n",
"[flaml.automl: 02-23 14:54:34] {953} INFO - iteration 0 current learner xgboost\n",
"INFO - iteration 0 current learner xgboost\n",
"[flaml.automl: 02-23 14:54:35] {1107} INFO - at 1.3s,\tbest xgboost's error=2.1267,\tbest xgboost's error=2.1267\n",
"INFO - at 1.3s,\tbest xgboost's error=2.1267,\tbest xgboost's error=2.1267\n",
"[flaml.automl: 02-23 14:54:35] {953} INFO - iteration 1 current learner xgboost\n",
"INFO - iteration 1 current learner xgboost\n",
"[flaml.automl: 02-23 14:54:35] {1107} INFO - at 1.4s,\tbest xgboost's error=2.1267,\tbest xgboost's error=2.1267\n",
"INFO - at 1.4s,\tbest xgboost's error=2.1267,\tbest xgboost's error=2.1267\n",
"[flaml.automl: 02-23 14:54:35] {953} INFO - iteration 2 current learner xgboost\n",
"INFO - iteration 2 current learner xgboost\n",
"[flaml.automl: 02-23 14:54:36] {1107} INFO - at 1.5s,\tbest xgboost's error=0.4565,\tbest xgboost's error=0.4565\n",
"INFO - at 1.5s,\tbest xgboost's error=0.4565,\tbest xgboost's error=0.4565\n",
"[flaml.automl: 02-23 14:54:36] {953} INFO - iteration 3 current learner xgboost\n",
"INFO - iteration 3 current learner xgboost\n",
"[flaml.automl: 02-23 14:54:36] {1107} INFO - at 1.6s,\tbest xgboost's error=0.4565,\tbest xgboost's error=0.4565\n",
"INFO - at 1.6s,\tbest xgboost's error=0.4565,\tbest xgboost's error=0.4565\n",
"[flaml.automl: 02-23 14:54:36] {953} INFO - iteration 4 current learner xgboost\n",
"INFO - iteration 4 current learner xgboost\n",
"[flaml.automl: 02-23 14:54:36] {1107} INFO - at 1.9s,\tbest xgboost's error=0.2697,\tbest xgboost's error=0.2697\n",
"INFO - at 1.9s,\tbest xgboost's error=0.2697,\tbest xgboost's error=0.2697\n",
"[flaml.automl: 02-23 14:54:36] {953} INFO - iteration 5 current learner xgboost\n",
"INFO - iteration 5 current learner xgboost\n",
"[flaml.automl: 02-23 14:54:36] {1107} INFO - at 2.1s,\tbest xgboost's error=0.2278,\tbest xgboost's error=0.2278\n",
"INFO - at 2.1s,\tbest xgboost's error=0.2278,\tbest xgboost's error=0.2278\n",
"[flaml.automl: 02-23 14:54:36] {953} INFO - iteration 6 current learner xgboost\n",
"INFO - iteration 6 current learner xgboost\n",
"[flaml.automl: 02-23 14:54:36] {1107} INFO - at 2.2s,\tbest xgboost's error=0.2278,\tbest xgboost's error=0.2278\n",
"INFO - at 2.2s,\tbest xgboost's error=0.2278,\tbest xgboost's error=0.2278\n",
"[flaml.automl: 02-23 14:54:36] {953} INFO - iteration 7 current learner xgboost\n",
"INFO - iteration 7 current learner xgboost\n",
"[flaml.automl: 02-23 14:54:36] {1107} INFO - at 2.5s,\tbest xgboost's error=0.2228,\tbest xgboost's error=0.2228\n",
"INFO - at 2.5s,\tbest xgboost's error=0.2228,\tbest xgboost's error=0.2228\n",
"[flaml.automl: 02-23 14:54:36] {953} INFO - iteration 8 current learner xgboost\n",
"INFO - iteration 8 current learner xgboost\n",
"[flaml.automl: 02-23 14:54:37] {1107} INFO - at 2.6s,\tbest xgboost's error=0.2228,\tbest xgboost's error=0.2228\n",
"INFO - at 2.6s,\tbest xgboost's error=0.2228,\tbest xgboost's error=0.2228\n",
"[flaml.automl: 02-23 14:54:37] {953} INFO - iteration 9 current learner xgboost\n",
"INFO - iteration 9 current learner xgboost\n",
"[flaml.automl: 02-23 14:54:37] {1107} INFO - at 2.8s,\tbest xgboost's error=0.2228,\tbest xgboost's error=0.2228\n",
"INFO - at 2.8s,\tbest xgboost's error=0.2228,\tbest xgboost's error=0.2228\n",
"[flaml.automl: 02-23 14:54:37] {953} INFO - iteration 10 current learner xgboost\n",
"INFO - iteration 10 current learner xgboost\n",
"[flaml.automl: 02-23 14:54:37] {1107} INFO - at 3.0s,\tbest xgboost's error=0.2228,\tbest xgboost's error=0.2228\n",
"INFO - at 3.0s,\tbest xgboost's error=0.2228,\tbest xgboost's error=0.2228\n",
"[flaml.automl: 02-23 14:54:37] {953} INFO - iteration 11 current learner xgboost\n",
"INFO - iteration 11 current learner xgboost\n",
"[flaml.automl: 02-23 14:54:38] {1107} INFO - at 3.6s,\tbest xgboost's error=0.2228,\tbest xgboost's error=0.2228\n",
"INFO - at 3.6s,\tbest xgboost's error=0.2228,\tbest xgboost's error=0.2228\n",
"[flaml.automl: 02-23 14:54:38] {953} INFO - iteration 12 current learner xgboost\n",
"INFO - iteration 12 current learner xgboost\n",
"[flaml.automl: 02-23 14:54:38] {1107} INFO - at 4.1s,\tbest xgboost's error=0.2228,\tbest xgboost's error=0.2228\n",
"INFO - at 4.1s,\tbest xgboost's error=0.2228,\tbest xgboost's error=0.2228\n",
"[flaml.automl: 02-23 14:54:38] {953} INFO - iteration 13 current learner xgboost\n",
"INFO - iteration 13 current learner xgboost\n",
"[flaml.automl: 02-23 14:54:38] {1107} INFO - at 4.2s,\tbest xgboost's error=0.2228,\tbest xgboost's error=0.2228\n",
"INFO - at 4.2s,\tbest xgboost's error=0.2228,\tbest xgboost's error=0.2228\n",
"[flaml.automl: 02-23 14:54:38] {953} INFO - iteration 14 current learner xgboost\n",
"INFO - iteration 14 current learner xgboost\n",
"[flaml.automl: 02-23 14:54:39] {1107} INFO - at 4.9s,\tbest xgboost's error=0.1814,\tbest xgboost's error=0.1814\n",
"INFO - at 4.9s,\tbest xgboost's error=0.1814,\tbest xgboost's error=0.1814\n",
"[flaml.automl: 02-23 14:54:39] {953} INFO - iteration 15 current learner xgboost\n",
"INFO - iteration 15 current learner xgboost\n",
"[flaml.automl: 02-23 14:54:39] {1107} INFO - at 5.2s,\tbest xgboost's error=0.1814,\tbest xgboost's error=0.1814\n",
"INFO - at 5.2s,\tbest xgboost's error=0.1814,\tbest xgboost's error=0.1814\n",
"[flaml.automl: 02-23 14:54:39] {953} INFO - iteration 16 current learner xgboost\n",
"INFO - iteration 16 current learner xgboost\n",
"[flaml.automl: 02-23 14:54:46] {1107} INFO - at 12.3s,\tbest xgboost's error=0.1813,\tbest xgboost's error=0.1813\n",
"INFO - at 12.3s,\tbest xgboost's error=0.1813,\tbest xgboost's error=0.1813\n",
"[flaml.automl: 02-23 14:54:46] {953} INFO - iteration 17 current learner xgboost\n",
"INFO - iteration 17 current learner xgboost\n",
"[flaml.automl: 02-23 14:54:51] {1107} INFO - at 17.5s,\tbest xgboost's error=0.1642,\tbest xgboost's error=0.1642\n",
"INFO - at 17.5s,\tbest xgboost's error=0.1642,\tbest xgboost's error=0.1642\n",
"[flaml.automl: 02-23 14:54:51] {953} INFO - iteration 18 current learner xgboost\n",
"INFO - iteration 18 current learner xgboost\n",
"[flaml.automl: 02-23 14:55:04] {1107} INFO - at 30.4s,\tbest xgboost's error=0.1642,\tbest xgboost's error=0.1642\n",
"INFO - at 30.4s,\tbest xgboost's error=0.1642,\tbest xgboost's error=0.1642\n",
"[flaml.automl: 02-23 14:55:04] {953} INFO - iteration 19 current learner xgboost\n",
"INFO - iteration 19 current learner xgboost\n",
"[flaml.automl: 02-23 14:55:06] {1107} INFO - at 32.1s,\tbest xgboost's error=0.1642,\tbest xgboost's error=0.1642\n",
"INFO - at 32.1s,\tbest xgboost's error=0.1642,\tbest xgboost's error=0.1642\n",
"[flaml.automl: 02-23 14:55:06] {953} INFO - iteration 20 current learner xgboost\n",
"INFO - iteration 20 current learner xgboost\n",
"[flaml.automl: 02-23 14:55:10] {1107} INFO - at 35.7s,\tbest xgboost's error=0.1642,\tbest xgboost's error=0.1642\n",
"INFO - at 35.7s,\tbest xgboost's error=0.1642,\tbest xgboost's error=0.1642\n",
"[flaml.automl: 02-23 14:55:10] {953} INFO - iteration 21 current learner xgboost\n",
"INFO - iteration 21 current learner xgboost\n",
"[flaml.automl: 02-23 14:55:11] {1107} INFO - at 36.7s,\tbest xgboost's error=0.1642,\tbest xgboost's error=0.1642\n",
"INFO - at 36.7s,\tbest xgboost's error=0.1642,\tbest xgboost's error=0.1642\n",
"[flaml.automl: 02-23 14:55:11] {953} INFO - iteration 22 current learner xgboost\n",
"INFO - iteration 22 current learner xgboost\n",
"[flaml.automl: 02-23 14:55:34] {1107} INFO - at 59.7s,\tbest xgboost's error=0.1601,\tbest xgboost's error=0.1601\n",
"INFO - at 59.7s,\tbest xgboost's error=0.1601,\tbest xgboost's error=0.1601\n",
"[flaml.automl: 02-23 14:55:34] {1148} INFO - selected model: <xgboost.core.Booster object at 0x00000234901A00C8>\n",
"INFO - selected model: <xgboost.core.Booster object at 0x00000234901A00C8>\n",
"[flaml.automl: 02-23 14:55:34] {908} INFO - fit succeeded\n",
"INFO - fit succeeded\n"
]
}
],
"source": [
"'''The main flaml automl API'''\n",
"automl.fit(X_train = X_train, y_train = y_train, **settings)"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### Best model and metric"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"slideshow": {
"slide_type": "slide"
},
"tags": []
},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"Best hyperparmeter config: {'n_estimators': 317.0, 'max_leaves': 216.0, 'min_child_weight': 20.0, 'learning_rate': 0.03224270129795047, 'subsample': 1.0, 'colsample_bylevel': 1.0, 'colsample_bytree': 0.9439217658719203, 'reg_alpha': 1.0302456093366526e-07, 'reg_lambda': 1.0}\nBest r2 on validation data: 0.8399\nTraining duration of best run: 23.02 s\n"
]
}
],
"source": [
"''' retrieve best config'''\n",
"print('Best hyperparmeter config:', automl.best_config)\n",
"print('Best r2 on validation data: {0:.4g}'.format(1-automl.best_loss))\n",
"print('Training duration of best run: {0:.4g} s'.format(automl.best_config_train_time))"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"<xgboost.core.Booster at 0x234901a00c8>"
]
},
"metadata": {},
"execution_count": 6
}
],
"source": [
"automl.model"
]
},
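{
"cell_type": "markdown",
"metadata": {},
"source": [
"The selected model shown above is an `xgboost.core.Booster`, so its feature importances can be read off with XGBoost's standard `get_score` API. This is a minimal sketch; it assumes `automl.model` is the booster object, as the previous output indicates."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"''' optional sketch: feature importance of the selected booster '''\n",
"importance = automl.model.get_score(importance_type='gain')  # dict: feature name -> average gain\n",
"for feature, gain in sorted(importance.items(), key=lambda kv: kv[1], reverse=True):\n",
"    print(feature, gain)"
]
},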
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"outputs": [],
"source": [
"''' pickle and save the automl object '''\n",
"import pickle\n",
"with open('automl.pkl', 'wb') as f:\n",
" pickle.dump(automl, f, pickle.HIGHEST_PROTOCOL)"
]
},
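{
"cell_type": "markdown",
"metadata": {},
"source": [
"The saved object can be restored later and used for prediction without re-running the search. A minimal sketch, assuming 'automl.pkl' was written by the previous cell:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"''' load the pickled automl object back (sketch) '''\n",
"import pickle\n",
"with open('automl.pkl', 'rb') as f:\n",
"    automl_loaded = pickle.load(f)\n",
"# the restored object can be used like the original, e.g. automl_loaded.predict(X_test)"
]
},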
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"slideshow": {
"slide_type": "slide"
},
"tags": []
},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"Predicted labels [144760.3 259431.67 158536.47 ... 172862.03 235544.52 272218.38]\nTrue labels [136900. 241300. 200700. ... 160900. 227300. 265600.]\n"
]
}
],
"source": [
"''' compute predictions of testing dataset ''' \n",
"y_pred = automl.predict(X_test)\n",
"print('Predicted labels', y_pred)\n",
"print('True labels', y_test)"
]
},
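{
"cell_type": "markdown",
"metadata": {},
"source": [
"For a visual sanity check, a scatter plot of predicted against true prices complements the printed arrays. A minimal sketch using matplotlib:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"''' optional sketch: predicted vs. true prices on the test set '''\n",
"import matplotlib.pyplot as plt\n",
"plt.scatter(y_test, y_pred, s=2)\n",
"plt.xlabel('True price')\n",
"plt.ylabel('Predicted price')\n",
"plt.title('FLAML-tuned XGBoost on the test set')\n",
"plt.show()"
]
},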
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"slideshow": {
"slide_type": "slide"
},
"tags": []
},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"r2 = 0.8381301879550651\nmse = 2139677169.3730364\nmae = 30461.545708424175\n"
]
}
],
"source": [
"''' compute different metric values on testing dataset'''\n",
"from flaml.ml import sklearn_metric_loss_score\n",
"print('r2', '=', 1 - sklearn_metric_loss_score('r2', y_pred, y_test))\n",
"print('mse', '=', sklearn_metric_loss_score('mse', y_pred, y_test))\n",
"print('mae', '=', sklearn_metric_loss_score('mae', y_pred, y_test))"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {
"slideshow": {
"slide_type": "subslide"
},
"tags": []
},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"{'Current Learner': 'xgboost', 'Current Sample': 15480, 'Current Hyper-parameters': {'n_estimators': 4, 'max_leaves': 4, 'min_child_weight': 20.0, 'learning_rate': 0.1, 'subsample': 1.0, 'colsample_bylevel': 1.0, 'colsample_bytree': 1.0, 'reg_alpha': 1e-10, 'reg_lambda': 1.0}, 'Best Learner': 'xgboost', 'Best Hyper-parameters': {'n_estimators': 4, 'max_leaves': 4, 'min_child_weight': 20.0, 'learning_rate': 0.1, 'subsample': 1.0, 'colsample_bylevel': 1.0, 'colsample_bytree': 1.0, 'reg_alpha': 1e-10, 'reg_lambda': 1.0}}\n{'Current Learner': 'xgboost', 'Current Sample': 15480, 'Current Hyper-parameters': {'n_estimators': 4.0, 'max_leaves': 4.0, 'min_child_weight': 20.0, 'learning_rate': 0.46335414315327306, 'subsample': 0.9339389930838808, 'colsample_bylevel': 1.0, 'colsample_bytree': 0.9904286645657556, 'reg_alpha': 2.841147337412889e-10, 'reg_lambda': 0.12000833497054482}, 'Best Learner': 'xgboost', 'Best Hyper-parameters': {'n_estimators': 4.0, 'max_leaves': 4.0, 'min_child_weight': 20.0, 'learning_rate': 0.46335414315327306, 'subsample': 0.9339389930838808, 'colsample_bylevel': 1.0, 'colsample_bytree': 0.9904286645657556, 'reg_alpha': 2.841147337412889e-10, 'reg_lambda': 0.12000833497054482}}\n{'Current Learner': 'xgboost', 'Current Sample': 15480, 'Current Hyper-parameters': {'n_estimators': 20.0, 'max_leaves': 4.0, 'min_child_weight': 20.0, 'learning_rate': 1.0, 'subsample': 0.9917683183663918, 'colsample_bylevel': 1.0, 'colsample_bytree': 0.9858892907525497, 'reg_alpha': 3.8783982645515837e-10, 'reg_lambda': 0.36607431863072826}, 'Best Learner': 'xgboost', 'Best Hyper-parameters': {'n_estimators': 20.0, 'max_leaves': 4.0, 'min_child_weight': 20.0, 'learning_rate': 1.0, 'subsample': 0.9917683183663918, 'colsample_bylevel': 1.0, 'colsample_bytree': 0.9858892907525497, 'reg_alpha': 3.8783982645515837e-10, 'reg_lambda': 0.36607431863072826}}\n{'Current Learner': 'xgboost', 'Current Sample': 15480, 'Current Hyper-parameters': {'n_estimators': 11.0, 'max_leaves': 15.0, 'min_child_weight': 14.947587304572773, 'learning_rate': 0.6092558236172073, 'subsample': 0.9659256891661986, 'colsample_bylevel': 1.0, 'colsample_bytree': 1.0, 'reg_alpha': 3.816590663384559e-08, 'reg_lambda': 0.4482946615262561}, 'Best Learner': 'xgboost', 'Best Hyper-parameters': {'n_estimators': 11.0, 'max_leaves': 15.0, 'min_child_weight': 14.947587304572773, 'learning_rate': 0.6092558236172073, 'subsample': 0.9659256891661986, 'colsample_bylevel': 1.0, 'colsample_bytree': 1.0, 'reg_alpha': 3.816590663384559e-08, 'reg_lambda': 0.4482946615262561}}\n{'Current Learner': 'xgboost', 'Current Sample': 15480, 'Current Hyper-parameters': {'n_estimators': 7.0, 'max_leaves': 41.0, 'min_child_weight': 20.0, 'learning_rate': 0.8834537640176922, 'subsample': 1.0, 'colsample_bylevel': 1.0, 'colsample_bytree': 0.9837052481490312, 'reg_alpha': 4.482246955743696e-08, 'reg_lambda': 0.028657570201141073}, 'Best Learner': 'xgboost', 'Best Hyper-parameters': {'n_estimators': 7.0, 'max_leaves': 41.0, 'min_child_weight': 20.0, 'learning_rate': 0.8834537640176922, 'subsample': 1.0, 'colsample_bylevel': 1.0, 'colsample_bytree': 0.9837052481490312, 'reg_alpha': 4.482246955743696e-08, 'reg_lambda': 0.028657570201141073}}\n{'Current Learner': 'xgboost', 'Current Sample': 15480, 'Current Hyper-parameters': {'n_estimators': 19.0, 'max_leaves': 67.0, 'min_child_weight': 20.0, 'learning_rate': 0.42055174154191877, 'subsample': 1.0, 'colsample_bylevel': 0.9680465326601881, 'colsample_bytree': 0.9911260129490254, 'reg_alpha': 1.4386508989317825e-07, 
'reg_lambda': 0.025583032704844726}, 'Best Learner': 'xgboost', 'Best Hyper-parameters': {'n_estimators': 19.0, 'max_leaves': 67.0, 'min_child_weight': 20.0, 'learning_rate': 0.42055174154191877, 'subsample': 1.0, 'colsample_bylevel': 0.9680465326601881, 'colsample_bytree': 0.9911260129490254, 'reg_alpha': 1.4386508989317825e-07, 'reg_lambda': 0.025583032704844726}}\n{'Current Learner': 'xgboost', 'Current Sample': 15480, 'Current Hyper-parameters': {'n_estimators': 59.0, 'max_leaves': 361.0, 'min_child_weight': 20.0, 'learning_rate
]
}
],
"source": [
"from flaml.data import get_output_from_log\n",
"time_history, best_valid_loss_history, valid_loss_history, config_history, train_loss_history = \\\n",
" get_output_from_log(filename = settings['log_file_name'], time_budget = 60)\n",
"\n",
"for config in config_history:\n",
" print(config)"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"outputs": [
{
"output_type": "display_data",
"data": {
"text/plain": "<Figure size 432x288 with 1 Axes>",
"image/svg+xml": "<?xml version=\"1.0\" encoding=\"utf-8\" standalone=\"no\"?>\r\n<!DOCTYPE svg PUBLIC \"-//W3C//DTD SVG 1.1//EN\"\r\n \"http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd\">\r\n<!-- Created with matplotlib (https://matplotlib.org/) -->\r\n<svg height=\"277.314375pt\" version=\"1.1\" viewBox=\"0 0 400.523437 277.314375\" width=\"400.523437pt\" xmlns=\"http://www.w3.org/2000/svg\" xmlns:xlink=\"http://www.w3.org/1999/xlink\">\r\n <defs>\r\n <style type=\"text/css\">\r\n*{stroke-linecap:butt;stroke-linejoin:round;}\r\n </style>\r\n </defs>\r\n <g id=\"figure_1\">\r\n <g id=\"patch_1\">\r\n <path d=\"M 0 277.314375 \r\nL 400.523437 277.314375 \r\nL 400.523437 0 \r\nL 0 0 \r\nz\r\n\" style=\"fill:none;\"/>\r\n </g>\r\n <g id=\"axes_1\">\r\n <g id=\"patch_2\">\r\n <path d=\"M 58.523438 239.758125 \r\nL 393.323438 239.758125 \r\nL 393.323438 22.318125 \r\nL 58.523438 22.318125 \r\nz\r\n\" style=\"fill:#ffffff;\"/>\r\n </g>\r\n <g id=\"PathCollection_1\">\r\n <defs>\r\n <path d=\"M 0 3 \r\nC 0.795609 3 1.55874 2.683901 2.12132 2.12132 \r\nC 2.683901 1.55874 3 0.795609 3 0 \r\nC 3 -0.795609 2.683901 -1.55874 2.12132 -2.12132 \r\nC 1.55874 -2.683901 0.795609 -3 0 -3 \r\nC -0.795609 -3 -1.55874 -2.683901 -2.12132 -2.12132 \r\nC -2.683901 -1.55874 -3 -0.795609 -3 0 \r\nC -3 0.795609 -2.683901 1.55874 -2.12132 2.12132 \r\nC -1.55874 2.683901 -0.795609 3 0 3 \r\nz\r\n\" id=\"m2937806fa7\" style=\"stroke:#1f77b4;\"/>\r\n </defs>\r\n <g clip-path=\"url(#pae0c1da17c)\">\r\n <use style=\"fill:#1f77b4;stroke:#1f77b4;\" x=\"73.741619\" xlink:href=\"#m2937806fa7\" y=\"229.874489\"/>\r\n <use style=\"fill:#1f77b4;stroke:#1f77b4;\" x=\"74.887343\" xlink:href=\"#m2937806fa7\" y=\"61.994745\"/>\r\n <use style=\"fill:#1f77b4;stroke:#1f77b4;\" x=\"76.783009\" xlink:href=\"#m2937806fa7\" y=\"43.216967\"/>\r\n <use style=\"fill:#1f77b4;stroke:#1f77b4;\" x=\"78.012066\" xlink:href=\"#m2937806fa7\" y=\"39.006531\"/>\r\n <use style=\"fill:#1f77b4;stroke:#1f77b4;\" x=\"79.845248\" xlink:href=\"#m2937806fa7\" y=\"38.504869\"/>\r\n <use style=\"fill:#1f77b4;stroke:#1f77b4;\" x=\"92.418551\" xlink:href=\"#m2937806fa7\" y=\"34.345291\"/>\r\n <use style=\"fill:#1f77b4;stroke:#1f77b4;\" x=\"130.815098\" xlink:href=\"#m2937806fa7\" y=\"34.331016\"/>\r\n <use style=\"fill:#1f77b4;stroke:#1f77b4;\" x=\"158.058717\" xlink:href=\"#m2937806fa7\" y=\"32.60987\"/>\r\n <use style=\"fill:#1f77b4;stroke:#1f77b4;\" x=\"378.105256\" xlink:href=\"#m2937806fa7\" y=\"32.201761\"/>\r\n </g>\r\n </g>\r\n <g id=\"matplotlib.axis_1\">\r\n <g id=\"xtick_1\">\r\n <g id=\"line2d_1\">\r\n <defs>\r\n <path d=\"M 0 0 \r\nL 0 3.5 \r\n\" id=\"m143fd363c3\" style=\"stroke:#000000;stroke-width:0.8;\"/>\r\n </defs>\r\n <g>\r\n <use style=\"stroke:#000000;stroke-width:0.8;\" x=\"66.961277\" xlink:href=\"#m143fd363c3\" y=\"239.758125\"/>\r\n </g>\r\n </g>\r\n <g id=\"text_1\">\r\n <!-- 0 -->\r\n <defs>\r\n <path d=\"M 31.78125 66.40625 \r\nQ 24.171875 66.40625 20.328125 58.90625 \r\nQ 16.5 51.421875 16.5 36.375 \r\nQ 16.5 21.390625 20.328125 13.890625 \r\nQ 24.171875 6.390625 31.78125 6.390625 \r\nQ 39.453125 6.390625 43.28125 13.890625 \r\nQ 47.125 21.390625 47.125 36.375 \r\nQ 47.125 51.421875 43.28125 58.90625 \r\nQ 39.453125 66.40625 31.78125 66.40625 \r\nz\r\nM 31.78125 74.21875 \r\nQ 44.046875 74.21875 50.515625 64.515625 \r\nQ 56.984375 54.828125 56.984375 36.375 \r\nQ 56.984375 17.96875 50.515625 8.265625 \r\nQ 44.046875 -1.421875 31.78125 -1.421875 \r\nQ 19.53125 -1.421875 13.0625 8.265625 \r\nQ 6.59375 17.96875 6.59375 36.375 \r\nQ 
6.59375 54.828125 13.0625 64.515625 \r\nQ 19.53125 74.21875 31.78125 74.21875 \r\nz\r\n\" id=\"DejaVuSans-48\"/>\r\n </defs>\r\n <g transform=\"translate(63.780027 254.356562)scale(0.1 -0.1)\">\r\n <use xlink:href=\"#DejaVuSans-48\"/>\r\n </g>\r\n </g>\r\n </g>\r\n <g id=\"xtick_2\">\r\n <g id=\"line2d_2\">\r\n <g>\r\n <use style=\"stroke:#000000;stroke-
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAZAAAAEWCAYAAABIVsEJAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4yLjAsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy8GearUAAAfq0lEQVR4nO3df7xVdZ3v8dfbIySaiiYaoogmkVgJddLMpsQk0cnAMtMar9dukY3OVE4W1lSW1xknbnVrMr3oWNZopiZIRqJiaZmmIMoPkSQ1hUOCP1BSEoHP/WN9N222e++zzzpn/zrn/Xw89mOv9V3ftdbnC/vsz17ftdZ3KSIwMzPrqe2aHYCZmbUnJxAzM8vFCcTMzHJxAjEzs1ycQMzMLBcnEDMzy8UJxKxOJP2dpOXNjsOsXpxArF+S9Jiko5sZQ0T8JiLG1Gv7ko6RdIek9ZLWSrpd0vvrtT+zUk4gZjlJ6mjivk8ErgV+BOwD7AV8BTg+x7Ykyd8F1mP+0NiAImk7SdMk/VHS05KukbR70fJrJf1Z0nPp1/3BRct+KOliSXMkvQBMSEc6n5O0KK3zU0k7pPpHSlpZtH7Fumn55yWtltQl6eOSQtKBZdog4FvA+RFxWUQ8FxFbIuL2iPhEqnOepP8uWmdU2t72af7Xki6QdCfwIvBFSfNL9vNZSbPT9Ksk/R9Jj0t6UtIlkob08r/D2pwTiA00/wxMAd4N7A08C1xUtPyXwGhgT+A+4MqS9T8CXADsDPw2lZ0ETAL2B94M/M8q+y9bV9Ik4GzgaODAFF8lY4B9geuq1KnFqcBUsrb8JzBG0uii5R8BrkrT/wG8HhiX4htBdsRjA5gTiA00nwS+FBErI+Il4DzgxMIv84i4PCLWFy07RNKuRevfEBF3pl/8f01l342Iroh4Bvg52ZdsJZXqngT8ICKWRsSLwNeqbOM16X11za0u74dpf5si4jngBuAUgJRI3gDMTkc8nwA+GxHPRMR64N+Ak3u5f2tzTiA20OwHzJS0TtI6YBmwGdhLUoekC1P31vPAY2mdPYrWf6LMNv9cNP0i8Ooq+69Ud++SbZfbT8HT6X14lTq1KN3HVaQEQnb0MSsls2HAjsCCon+3m1K5DWBOIDbQPAEcGxFDi147RMQqsi/NyWTdSLsCo9I6Klq/XsNXryY7GV6wb5W6y8na8cEqdV4g+9IveG2ZOqVtuRnYQ9I4skRS6L56CtgAHFz0b7ZrRFRLlDYAOIFYfzZI0g5Fr+2BS4ALJO0HIGmYpMmp/s7AS2S/8Hck66ZplGuA0yUdJGlHqpxfiOwZDGcDX5Z0uqRd0sUB75Q0I1W7H3iXpJGpC+7c7gKIiE1k51WmA7sDt6TyLcClwLcl7QkgaYSkY3K31voFJxDrz+aQ/XIuvM4DvgPMBm6WtB64Gzgs1f8R8CdgFfBgWtYQEfFL4LvAr4AVwF1p0UsV6l8HfBj4GNAFPAn8b7LzGETELcBPgUXAAuDGGkO5iuwI7NqUUAq+kOK6O3Xv3Up2Mt8GMPmBUmatR9JBwBLgVSVf5GYtw0cgZi1C0gmSBkvajeyy2Z87eVgrcwIxax2fBNYCfyS7MuxTzQ3HrDp3YZmZWS4+AjEzs1y2b3YAjbTHHnvEqFGjmh2GmVlbWbBgwVMR8YobRwdUAhk1ahTz58/vvqKZmW0l6U/lyt2FZWZmuTiBmJlZLk4gZmaWixOImZnl4gRiZma5DKirsMzMBppZC1cxfe5yutZtYO+hQzjnmDFMGT+iT7btBGINVc8Ps5lta9bCVZx7/WI2vLwZgFXrNnDu9YsB+uTvzgmkgVr1y7NRcdX7w2xm25o+d/nWv7eCDS9vZvrc5U4g7aRVvzwbGVelD/Pnr1vET+55vE/3ZWbZ33M5XRXKe8oJpEFa9ctz4ePr2Lh5yzZl9Yqr0oe5dP9m1jcGd2xX9u9r76FD+mT7TiANUinjN/vLs9L+6xFXpQ/ziKFD+OknD+/z/ZkNdKU9DABDBnVwzjF98zBJJ5AG2XvokLK/wJv95XnEhbc1LK56f5jNbFuFbmhfhdUiCiecV63bQIfE5ghG1PCfcs4xY1ryy7ORcdX7w2xmrzRl/Ii6/Y05gfRA6S/ozelhXLWceC6Uf/66RWzcvKWmpNMIjf5Sr+eH2cwaq6lPJJQ0CfgO0AFcFhEXliw/B/homt0eOAgYFhHPSHoMWE/26M9NEdHZ3f46OzujN8O5V+ruKRjcsR3jRw6tuo0HVz/P2OG7uM/fzNqGpAXlvmObdgQiqQO4CJgIrATulTQ7Ih4s1ImI6cD0VP944LMR8UzRZiZExFONirm7S99qOfE8dvguTB7nX+Bm1v6a2YV1KLAiIh4BkHQ1MBl4sEL9U4CfNCi2sobuOIhnX3y54vJmnxA3M2ukZg6mOAJ4omh+ZSp7BUk7ApOAnxUVB3CzpAWSplbaiaSpkuZLmr927drcwc5auIq//HVTxeWtcELczKyRmnkEojJllU7IHA/cWdJ9dUREdEnaE7hF0kMRcccrNhgxA5gB2TmQvMFOn7ucl7eUX71VToibmTVSMxPISmDfovl9gK4KdU+mpPsqIrrS+xpJM8m6xF6RQPpKpfMfAu6cdlS9dmtm1rKa2YV1LzBa0v6SBpMlidmllSTtCrwbuKGobCdJOxemgfcCS+oZbKVb//tqSAAzs3bTtAQSEZuAs4C5wDLgmohYKukMSWcUVT0BuDkiXigq2wv4raQHgHuAX0TETfWM95xjxjBkUMc2ZT7vYWYDWVPvA2m03t4HMmvhqpa7EdDMrN5a7j6QdjRl/IitI9T6cl0zG+j8THQzM8vFCcTMzHJxAjEzs1ycQMzMLBcnEDMzy8UJxMzMcnECMTOzXJxAzMwsFycQMzPLxQnEzMxycQIxM7NcPBZWDWYtXMX0ucvpWreBQR3bse/uHsLdzMxHIN2YtXAV516/mFXrNhDAxs1bePSpF5i1cFWzQzMzayonkG5Mn7ucDS9v3qZsS2TlZmYDmRNINyo9yrZSuZnZQNHUBCJpkqTlklZImlZm+ZGSnpN0f3p9pdZ1+4ofZWtmVl7TEoikDuAi4FhgLHCKpLFlqv4mIsal19d7uG6v+VG2ZmblNfMI5FBgRUQ8EhEbgauByQ1Yt0emjB/Bv3/gTQzuyP6pRgwdwr9/4E1+lK2ZDXjNvIx3BPBE0fxK4LAy9Q6X9ADQBXwuIpb2YF0kTQWmAowcOTJXoH6UrZnZKzXzCERlyqJk/j5gv4g4BPhPYFYP1s0KI2ZERGdEdA4bNix3sGZmtq1mJpCVwL5F8/uQHWVsFRHPR8Rf0vQcYJCkPWpZ18zM6quZCeReYLSk/SUNBk4GZhdXkPRaSUrTh5LF+3Qt65qZWX017RxIRGySdBYwF+gALo+IpZLOSMsvAU4EPiVpE7ABODkiAii7blMaYmY2QDV1LKzULTWnpOySounvAd+rdV0zM2sc34luZma5OIGYmVkuTiBmZpaLE4iZmeXiBGJmZrk4gZiZWS5OI
GZmlosTiJmZ5eIEYmZmuTiBmJlZLk4gZmaWixOImZnl4gRiZma5OIGYmVkuTiBmZpaLE4iZmeXS1AQiaZKk5ZJWSJpWZvlHJS1Kr99JOqRo2WOSFku6X9L8xkZuZmZNeyKhpA7gImAisBK4V9LsiHiwqNqjwLsj4llJxwIzgMOKlk+IiKcaFrSZmW3VzCOQQ4EVEfFIRGwErgYmF1eIiN9FxLNp9m5gnwbHaGZmFTQzgYwAniiaX5nKKvlfwC+L5gO4WdICSVPrEJ+ZmVXRtC4sQGXKomxFaQJZAnlnUfEREdElaU/gFkkPRcQdZdadCkwFGDlyZO+jNjMzoLlHICuBfYvm9wG6SitJejNwGTA5Ip4ulEdEV3pfA8wk6xJ7hYiYERGdEdE5bNiwPgzfzGxga2YCuRcYLWl/SYOBk4HZxRUkjQSuB06NiD8Ule8kaefCNPBeYEnDIjczs+Z1YUXEJklnAXOBDuDyiFgq6Yy0/BLgK8BrgO9LAtgUEZ3AXsDMVLY9cFVE3NSEZpiZDVj
},
"metadata": {
"needs_background": "light"
}
}
],
"source": [
"import matplotlib.pyplot as plt\n",
"import numpy as np\n",
"\n",
"plt.title('Learning Curve')\n",
"plt.xlabel('Wall Clock Time (s)')\n",
"plt.ylabel('Validation r2')\n",
"plt.scatter(time_history, 1-np.array(valid_loss_history))\n",
"plt.step(time_history, 1-np.array(best_valid_loss_history), where='post')\n",
"plt.show()"
]
},
{
"source": [
"## 3. Comparison with untuned XGBoost\n",
"\n",
"### FLAML's accuracy"
],
"cell_type": "markdown",
"metadata": {}
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"flaml (60s) r2 = 0.8381301879550651\n"
]
}
],
"source": [
"print('flaml (60s) r2', '=', 1 - sklearn_metric_loss_score('r2', y_pred, y_test))"
]
},
{
"source": [
"### Default XGBoost"
],
"cell_type": "markdown",
"metadata": {}
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [],
"source": [
"from xgboost import XGBRegressor\n",
"xgb = XGBRegressor()"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"XGBRegressor(base_score=0.5, booster='gbtree', colsample_bylevel=1,\n",
" colsample_bynode=1, colsample_bytree=1, gamma=0, gpu_id=-1,\n",
" importance_type='gain', interaction_constraints='',\n",
" learning_rate=0.300000012, max_delta_step=0, max_depth=6,\n",
" min_child_weight=1, missing=nan, monotone_constraints='()',\n",
" n_estimators=100, n_jobs=8, num_parallel_tree=1, random_state=0,\n",
" reg_alpha=0, reg_lambda=1, scale_pos_weight=1, subsample=1,\n",
" tree_method='exact', validate_parameters=1, verbosity=None)"
]
},
"metadata": {},
"execution_count": 14
}
],
"source": [
"xgb.fit(X_train, y_train)"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"default xgboost r2 = 0.8265451174596482\n"
]
}
],
"source": [
"y_pred = xgb.predict(X_test)\n",
"from flaml.ml import sklearn_metric_loss_score\n",
"print('default xgboost r2', '=', 1 - sklearn_metric_loss_score('r2', y_pred, y_test))"
]
}
],
"metadata": {
"kernelspec": {
"name": "python3",
"display_name": "Python 3.7.7 64-bit ('flaml': conda)",
"metadata": {
"interpreter": {
"hash": "bfcd9a6a9254a5e160761a1fd7a9e444f011592c6770d9f4180dde058a9df5dd"
}
}
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.7-final"
}
},
"nbformat": 4,
"nbformat_minor": 2
}