{ "cells": [ { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "Copyright (c) Microsoft Corporation. All rights reserved. \n", "\n", "Licensed under the MIT License.\n", "\n", "# FineTuning NLP Models with FLAML Library\n", "\n", "\n", "## 1. Introduction\n", "\n", "FLAML is a Python library (https://github.com/microsoft/FLAML) designed to automatically produce accurate machine learning models \n", "with low computational cost. It is fast and economical. The simple and lightweight design makes it easy to use and extend, such as adding new learners. FLAML can \n", "- serve as an economical AutoML engine,\n", "- be used as a fast hyperparameter tuning tool, or \n", "- be embedded in self-tuning software that requires low latency & resource in repetitive\n", " tuning tasks.\n", "\n", "In this notebook, we demonstrate how to use the FLAML library to fine tune an NLP language model with hyperparameter search. We have tested this notebook on a server with 4 NVidia V100 GPU (32GB) and 400GB CPU Ram.\n", "\n", "FLAML requires `Python>=3.7`. To run this notebook example, please install flaml with the `nlp,ray,notebook` and `blendsearch` option:\n", "```bash\n", "pip install flaml[nlp,ray,notebook,blendsearch];\n", "```" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\u001b[33mWARNING: Ignoring invalid distribution -andas (/home/xliu127/.local/lib/python3.8/site-packages)\u001b[0m\u001b[33m\n", "\u001b[0m\u001b[33mWARNING: Ignoring invalid distribution -andas (/home/xliu127/.local/lib/python3.8/site-packages)\u001b[0m\u001b[33m\n", "\u001b[0mRequirement already satisfied: flaml[blendsearch,nlp,notebook,ray] in /data/xliu127/projects/hyperopt/FLAML (1.0.7)\n", "Requirement already satisfied: NumPy>=1.17.0rc1 in /home/xliu127/.local/lib/python3.8/site-packages (from flaml[blendsearch,nlp,notebook,ray]) (1.21.2)\n", "Requirement already satisfied: lightgbm>=2.3.1 in /home/xliu127/.local/lib/python3.8/site-packages (from flaml[blendsearch,nlp,notebook,ray]) (3.2.1)\n", "Requirement already satisfied: xgboost<=1.3.3,>=0.90 in /data/installation/anaconda3/envs/tmp/lib/python3.8/site-packages (from flaml[blendsearch,nlp,notebook,ray]) (1.3.3)\n", "Requirement already satisfied: scipy>=1.4.1 in /home/xliu127/.local/lib/python3.8/site-packages (from flaml[blendsearch,nlp,notebook,ray]) (1.6.2)\n", "Requirement already satisfied: pandas>=1.1.4 in /data/installation/anaconda3/envs/tmp/lib/python3.8/site-packages (from flaml[blendsearch,nlp,notebook,ray]) (1.4.3)\n", "Requirement already satisfied: scikit-learn>=0.24 in /home/xliu127/.local/lib/python3.8/site-packages (from flaml[blendsearch,nlp,notebook,ray]) (0.24.1)\n", "Requirement already satisfied: openml==0.10.2 in /data/installation/anaconda3/envs/tmp/lib/python3.8/site-packages (from flaml[blendsearch,nlp,notebook,ray]) (0.10.2)\n", "Requirement already satisfied: jupyter in /data/installation/anaconda3/envs/tmp/lib/python3.8/site-packages (from flaml[blendsearch,nlp,notebook,ray]) (1.0.0)\n", "Requirement already satisfied: matplotlib in /home/xliu127/.local/lib/python3.8/site-packages (from flaml[blendsearch,nlp,notebook,ray]) (3.4.3)\n", "Requirement already satisfied: rgf-python in /data/installation/anaconda3/envs/tmp/lib/python3.8/site-packages (from flaml[blendsearch,nlp,notebook,ray]) (3.12.0)\n", "Requirement already satisfied: catboost>=0.26 in /data/installation/anaconda3/envs/tmp/lib/python3.8/site-packages 
"Note: you may need to restart the kernel to use updated packages.\n" ] } ], "source": [ "%pip install flaml[nlp,ray,notebook,blendsearch]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's run some examples. \n", "\n", "Note: throughout this notebook, you may see a few ModuleNotFoundErrors. As long as the cell executes successfully, you can ignore those errors." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 2. Sentiment Classification Example\n", "### Load data and preprocess\n", "\n", "The Stanford Sentiment Treebank (SST-2) is a dataset for sentiment classification. First, let's load its train, validation, and test splits into pandas DataFrames:" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Reusing dataset glue (/home/xliu127/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad)\n", "Reusing dataset glue (/home/xliu127/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad)\n", "Reusing dataset glue (/home/xliu127/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad)\n" ] } ], "source": [ "from datasets import load_dataset\n", "\n", "train_dataset = load_dataset(\"glue\", \"sst2\", split=\"train\").to_pandas()\n", "dev_dataset = load_dataset(\"glue\", \"sst2\", split=\"validation\").to_pandas()\n", "test_dataset = load_dataset(\"glue\", \"sst2\", split=\"test\").to_pandas()" ] },
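 { "cell_type": "markdown", "metadata": {}, "source": [ "As a quick optional sanity check, we can print the number of examples in each split; the split sizes (67,349 / 872 / 1,821 for train/validation/test) also appear in the tuning logs below:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# optional: number of examples in each split\n", "print(len(train_dataset), len(dev_dataset), len(test_dataset))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Take a look at the first 5 examples of this dataset:" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "data": { "text/html": [ "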
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
sentencelabelidx
0hide new secretions from the parental units00
1contains no wit , only labored gags01
2that loves its characters and communicates som...12
3remains utterly satisfied to remain the same t...03
4on the worst revenge-of-the-nerds clichés the ...04
\n", "
" ], "text/plain": [ " sentence label idx\n", "0 hide new secretions from the parental units 0 0\n", "1 contains no wit , only labored gags 0 1\n", "2 that loves its characters and communicates som... 1 2\n", "3 remains utterly satisfied to remain the same t... 0 3\n", "4 on the worst revenge-of-the-nerds clichés the ... 0 4" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "train_dataset.head(5)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Separate the data into X and y:" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "custom_sent_keys = [\"sentence\"] # specify the column names of the input sentences\n", "label_key = \"label\" # specify the column name of the label\n", "\n", "X_train, y_train = train_dataset[custom_sent_keys], train_dataset[label_key]\n", "X_val, y_val = dev_dataset[custom_sent_keys], dev_dataset[label_key]\n", "X_test = test_dataset[custom_sent_keys]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Run FLAML" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "/data/installation/anaconda3/envs/tmp/lib/python3.8/site-packages/xgboost/compat.py:31: FutureWarning: pandas.Int64Index is deprecated and will be removed from pandas in a future version. Use pandas.Index with the appropriate dtype instead.\n", " from pandas import MultiIndex, Int64Index\n", "2022-08-13 16:52:12,092\tINFO services.py:1470 -- View the Ray dashboard at \u001b[1m\u001b[32mhttp://127.0.0.1:8265\u001b[39m\u001b[22m\n" ] } ], "source": [ "''' import AutoML class from flaml package '''\n", "from flaml import AutoML\n", "automl = AutoML()\n", "\n", "import ray\n", "if not ray.is_initialized():\n", " ray.init() " ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "TIME_BUDGET=1800\n", "automl_settings = {\n", " \"time_budget\": TIME_BUDGET, # setting the time budget\n", " \"task\": \"seq-classification\", # setting the task as seq-classification\n", " \"fit_kwargs_by_estimator\": {\n", " \"transformer\": {\n", " \"output_dir\": \"data/output/\", # setting the output directory\n", " \"model_path\": \"google/electra-small-discriminator\", # if model_path is not set, the default model is facebook/muppet-roberta-base: https://huggingface.co/facebook/muppet-roberta-base\n", " }\n", " },\n", " \"gpu_per_trial\": 1, # set to 0 if no GPU is available\n", " \"log_file_name\": \"seqclass.log\", # set the file to save the log for HPO\n", " \"log_type\": \"all\", # the log type for trials: \"all\" if logging all the trials, \"better\" if only keeping the better trials\n", " \"use_ray\": {\"local_dir\": \"data/output/\"}, # set whether to use Ray\n", " \"n_concurrent_trials\": 4,\n", " \"keep_search_state\": True, # keeping the search state\n", "}" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "data": { "text/html": [ "== Status ==
Current time: 2022-08-13 17:22:28 (running for 00:30:10.04)
Memory usage on this node: 25.3/376.6 GiB
Using FIFO scheduling algorithm.
Resources requested: 0/96 CPUs, 0/4 GPUs, 0.0/249.66 GiB heap, 0.0/110.99 GiB objects (0.0/1.0 accelerator_type:V100)
Current best trial: ac41a40a with val_loss=0.07798165137614677 and parameters={'learning_rate': 3.864623804361677e-05, 'num_train_epochs': 3, 'per_device_train_batch_size': 32, 'seed': 24, 'global_max_steps': 9223372036854775807, 'learner': 'transformer', 'FLAML_sample_size': 67349}
Result logdir: /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-08-13_16-52-17
Number of trials: 49/1000000 (49 TERMINATED)

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "\u001b[2m\u001b[36m(train pid=49559)\u001b[0m {'loss': 0.1493, 'learning_rate': 9.269681649947959e-06, 'epoch': 2.26}\n", "\u001b[2m\u001b[36m(train pid=50106)\u001b[0m {'eval_loss': 0.3011920750141144, 'eval_automl_metric': 0.09633027522935778, 'eval_runtime': 6.9816, 'eval_samples_per_second': 124.899, 'eval_steps_per_second': 124.899, 'epoch': 2.0}\n", "\u001b[2m\u001b[36m(train pid=50265)\u001b[0m {'loss': 0.1551, 'learning_rate': 3.2693767566133266e-05, 'epoch': 1.9}\n", "\u001b[2m\u001b[36m(train pid=50443)\u001b[0m {'loss': 0.2014, 'learning_rate': 1.2879756568772649e-05, 'epoch': 1.43}\n", "\u001b[2m\u001b[36m(train pid=50106)\u001b[0m {'loss': 0.1524, 'learning_rate': 7.4163905463114075e-06, 'epoch': 2.14}\n", "\u001b[2m\u001b[36m(train pid=50265)\u001b[0m {'eval_loss': 0.3561520278453827, 'eval_automl_metric': 0.11238532110091748, 'eval_runtime': 7.1398, 'eval_samples_per_second': 122.133, 'eval_steps_per_second': 122.133, 'epoch': 2.0}\n", "\u001b[2m\u001b[36m(train pid=49559)\u001b[0m {'loss': 0.1492, 'learning_rate': 7.788901833662341e-06, 'epoch': 2.38}\n", "\u001b[2m\u001b[36m(train pid=50265)\u001b[0m {'loss': 0.1292, 'learning_rate': 2.5632478674959775e-05, 'epoch': 2.14}\n", "\u001b[2m\u001b[36m(train pid=50443)\u001b[0m {'loss': 0.1936, 'learning_rate': 1.169158714360912e-05, 'epoch': 1.66}\n", "\u001b[2m\u001b[36m(train pid=50106)\u001b[0m {'loss': 0.155, 'learning_rate': 5.373307751184297e-06, 'epoch': 2.38}\n", "\u001b[2m\u001b[36m(train pid=49559)\u001b[0m {'loss': 0.1405, 'learning_rate': 6.308122017376726e-06, 'epoch': 2.49}\n", "\u001b[2m\u001b[36m(train pid=50265)\u001b[0m {'loss': 0.1078, 'learning_rate': 1.8571189783786284e-05, 'epoch': 2.38}\n", "\u001b[2m\u001b[36m(train pid=50443)\u001b[0m {'loss': 0.1975, 'learning_rate': 1.0503417718445592e-05, 'epoch': 1.9}\n", "\u001b[2m\u001b[36m(train pid=49559)\u001b[0m {'loss': 0.1215, 'learning_rate': 4.827342201091109e-06, 'epoch': 2.61}\n", "\u001b[2m\u001b[36m(train pid=50106)\u001b[0m {'loss': 0.1437, 'learning_rate': 3.330224956057188e-06, 'epoch': 2.61}\n", "\u001b[2m\u001b[36m(train pid=50443)\u001b[0m {'loss': 0.1698, 'learning_rate': 9.315248293282063e-06, 'epoch': 2.14}\n", "\u001b[2m\u001b[36m(train pid=50265)\u001b[0m {'loss': 0.1083, 'learning_rate': 1.1509900892612791e-05, 'epoch': 2.61}\n", "\u001b[2m\u001b[36m(train pid=49559)\u001b[0m {'loss': 0.1355, 'learning_rate': 3.346562384805493e-06, 'epoch': 2.73}\n", "\u001b[2m\u001b[36m(train pid=50106)\u001b[0m {'loss': 0.1428, 'learning_rate': 1.287142160930079e-06, 'epoch': 2.85}\n", "\u001b[2m\u001b[36m(train pid=50265)\u001b[0m {'loss': 0.106, 'learning_rate': 4.4486120014393e-06, 'epoch': 2.85}\n", "\u001b[2m\u001b[36m(train pid=50443)\u001b[0m {'loss': 0.1661, 'learning_rate': 8.127078868118535e-06, 'epoch': 2.38}\n", "\u001b[2m\u001b[36m(train pid=50106)\u001b[0m {'eval_loss': 0.2953557074069977, 'eval_automl_metric': 0.0905963302752294, 'eval_runtime': 7.0221, 'eval_samples_per_second': 124.18, 'eval_steps_per_second': 124.18, 'epoch': 3.0}\n", "\u001b[2m\u001b[36m(train pid=49559)\u001b[0m {'loss': 0.1445, 'learning_rate': 1.8657825685198765e-06, 'epoch': 2.85}\n", "\u001b[2m\u001b[36m(train pid=50106)\u001b[0m {'train_runtime': 416.8987, 'train_samples_per_second': 484.643, 'train_steps_per_second': 15.148, 'train_loss': 0.2114269960332464, 'epoch': 3.0}\n" ] }, { "name": "stderr", "output_type": "stream", "text": 
[ "\u001b[2m\u001b[36m(train pid=50106)\u001b[0m ***** Running Prediction *****\n", "\u001b[2m\u001b[36m(train pid=50106)\u001b[0m Num examples = 872\n", "\u001b[2m\u001b[36m(train pid=50106)\u001b[0m Batch size = 1\n", "\u001b[2m\u001b[36m(train pid=50106)\u001b[0m Didn't find file /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-08-13_16-52-17/train_80bc1972_47_FLAML_sample_size=67349,global_max_steps=9223372036854775807,learner=transformer,learning_rate=0.0000,num_train__2022-08-13_17-17-39/checkpoint-6315/added_tokens.json. We won't load it.\n", "\u001b[2m\u001b[36m(train pid=50106)\u001b[0m loading file /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-08-13_16-52-17/train_80bc1972_47_FLAML_sample_size=67349,global_max_steps=9223372036854775807,learner=transformer,learning_rate=0.0000,num_train__2022-08-13_17-17-39/checkpoint-6315/vocab.txt\n", "\u001b[2m\u001b[36m(train pid=50106)\u001b[0m loading file /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-08-13_16-52-17/train_80bc1972_47_FLAML_sample_size=67349,global_max_steps=9223372036854775807,learner=transformer,learning_rate=0.0000,num_train__2022-08-13_17-17-39/checkpoint-6315/tokenizer.json\n", "\u001b[2m\u001b[36m(train pid=50106)\u001b[0m loading file None\n", "\u001b[2m\u001b[36m(train pid=50106)\u001b[0m loading file /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-08-13_16-52-17/train_80bc1972_47_FLAML_sample_size=67349,global_max_steps=9223372036854775807,learner=transformer,learning_rate=0.0000,num_train__2022-08-13_17-17-39/checkpoint-6315/special_tokens_map.json\n", "\u001b[2m\u001b[36m(train pid=50106)\u001b[0m loading file /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-08-13_16-52-17/train_80bc1972_47_FLAML_sample_size=67349,global_max_steps=9223372036854775807,learner=transformer,learning_rate=0.0000,num_train__2022-08-13_17-17-39/checkpoint-6315/tokenizer_config.json\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "\u001b[2m\u001b[36m(train pid=50265)\u001b[0m {'eval_loss': 0.3420582413673401, 'eval_automl_metric': 0.09633027522935778, 'eval_runtime': 8.4313, 'eval_samples_per_second': 103.424, 'eval_steps_per_second': 103.424, 'epoch': 3.0}\n", "\u001b[2m\u001b[36m(train pid=50265)\u001b[0m {'train_runtime': 404.0512, 'train_samples_per_second': 500.053, 'train_steps_per_second': 15.629, 'train_loss': 0.18374049291663574, 'epoch': 3.0}\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "\u001b[2m\u001b[36m(train pid=50265)\u001b[0m ***** Running Prediction *****\n", "\u001b[2m\u001b[36m(train pid=50265)\u001b[0m Num examples = 872\n", "\u001b[2m\u001b[36m(train pid=50265)\u001b[0m Batch size = 1\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "\u001b[2m\u001b[36m(train pid=50443)\u001b[0m {'loss': 0.1614, 'learning_rate': 6.938909442955006e-06, 'epoch': 2.61}\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "\u001b[2m\u001b[36m(train pid=50265)\u001b[0m Didn't find file /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-08-13_16-52-17/train_918542ec_48_FLAML_sample_size=67349,global_max_steps=9223372036854775807,learner=transformer,learning_rate=0.0001,num_train__2022-08-13_17-18-07/checkpoint-6315/added_tokens.json. 
We won't load it.\n", "\u001b[2m\u001b[36m(train pid=50265)\u001b[0m loading file /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-08-13_16-52-17/train_918542ec_48_FLAML_sample_size=67349,global_max_steps=9223372036854775807,learner=transformer,learning_rate=0.0001,num_train__2022-08-13_17-18-07/checkpoint-6315/vocab.txt\n", "\u001b[2m\u001b[36m(train pid=50265)\u001b[0m loading file /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-08-13_16-52-17/train_918542ec_48_FLAML_sample_size=67349,global_max_steps=9223372036854775807,learner=transformer,learning_rate=0.0001,num_train__2022-08-13_17-18-07/checkpoint-6315/tokenizer.json\n", "\u001b[2m\u001b[36m(train pid=50265)\u001b[0m loading file None\n", "\u001b[2m\u001b[36m(train pid=50265)\u001b[0m loading file /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-08-13_16-52-17/train_918542ec_48_FLAML_sample_size=67349,global_max_steps=9223372036854775807,learner=transformer,learning_rate=0.0001,num_train__2022-08-13_17-18-07/checkpoint-6315/special_tokens_map.json\n", "\u001b[2m\u001b[36m(train pid=50265)\u001b[0m loading file /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-08-13_16-52-17/train_918542ec_48_FLAML_sample_size=67349,global_max_steps=9223372036854775807,learner=transformer,learning_rate=0.0001,num_train__2022-08-13_17-18-07/checkpoint-6315/tokenizer_config.json\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "\u001b[2m\u001b[36m(train pid=49559)\u001b[0m {'loss': 0.136, 'learning_rate': 3.8500275223426025e-07, 'epoch': 2.97}\n", "\u001b[2m\u001b[36m(train pid=49559)\u001b[0m {'eval_loss': 0.3623480498790741, 'eval_automl_metric': 0.08600917431192656, 'eval_runtime': 6.9268, 'eval_samples_per_second': 125.888, 'eval_steps_per_second': 125.888, 'epoch': 3.0}\n", "\u001b[2m\u001b[36m(train pid=49559)\u001b[0m {'train_runtime': 794.5838, 'train_samples_per_second': 254.28, 'train_steps_per_second': 15.895, 'train_loss': 0.20292378529045002, 'epoch': 3.0}\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "\u001b[2m\u001b[36m(train pid=49559)\u001b[0m ***** Running Prediction *****\n", "\u001b[2m\u001b[36m(train pid=49559)\u001b[0m Num examples = 872\n", "\u001b[2m\u001b[36m(train pid=49559)\u001b[0m Batch size = 1\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "\u001b[2m\u001b[36m(train pid=50443)\u001b[0m {'loss': 0.1528, 'learning_rate': 5.750740017791478e-06, 'epoch': 2.85}\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "\u001b[2m\u001b[36m(train pid=49559)\u001b[0m Didn't find file /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-08-13_16-52-17/train_bbde8ec8_45_FLAML_sample_size=67349,global_max_steps=9223372036854775807,learner=transformer,learning_rate=0.0000,num_train__2022-08-13_17-12-09/checkpoint-12630/added_tokens.json. 
We won't load it.\n", "\u001b[2m\u001b[36m(train pid=49559)\u001b[0m loading file /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-08-13_16-52-17/train_bbde8ec8_45_FLAML_sample_size=67349,global_max_steps=9223372036854775807,learner=transformer,learning_rate=0.0000,num_train__2022-08-13_17-12-09/checkpoint-12630/vocab.txt\n", "\u001b[2m\u001b[36m(train pid=49559)\u001b[0m loading file /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-08-13_16-52-17/train_bbde8ec8_45_FLAML_sample_size=67349,global_max_steps=9223372036854775807,learner=transformer,learning_rate=0.0000,num_train__2022-08-13_17-12-09/checkpoint-12630/tokenizer.json\n", "\u001b[2m\u001b[36m(train pid=49559)\u001b[0m loading file None\n", "\u001b[2m\u001b[36m(train pid=49559)\u001b[0m loading file /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-08-13_16-52-17/train_bbde8ec8_45_FLAML_sample_size=67349,global_max_steps=9223372036854775807,learner=transformer,learning_rate=0.0000,num_train__2022-08-13_17-12-09/checkpoint-12630/special_tokens_map.json\n", "\u001b[2m\u001b[36m(train pid=49559)\u001b[0m loading file /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-08-13_16-52-17/train_bbde8ec8_45_FLAML_sample_size=67349,global_max_steps=9223372036854775807,learner=transformer,learning_rate=0.0000,num_train__2022-08-13_17-12-09/checkpoint-12630/tokenizer_config.json\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "\u001b[2m\u001b[36m(train pid=50443)\u001b[0m {'eval_loss': 0.29546257853507996, 'eval_automl_metric': 0.08830275229357798, 'eval_runtime': 7.0421, 'eval_samples_per_second': 123.827, 'eval_steps_per_second': 123.827, 'epoch': 3.0}\n", "\u001b[2m\u001b[36m(train pid=50443)\u001b[0m {'loss': 0.1462, 'learning_rate': 4.56257059262795e-06, 'epoch': 3.09}\n", "\u001b[2m\u001b[36m(train pid=50443)\u001b[0m {'loss': 0.1474, 'learning_rate': 3.374401167464421e-06, 'epoch': 3.33}\n", "\u001b[2m\u001b[36m(train pid=50443)\u001b[0m {'loss': 0.134, 'learning_rate': 2.1862317423008924e-06, 'epoch': 3.56}\n", "\u001b[2m\u001b[36m(train pid=50443)\u001b[0m {'loss': 0.138, 'learning_rate': 9.98062317137364e-07, 'epoch': 3.8}\n", "\u001b[2m\u001b[36m(train pid=50443)\u001b[0m {'eval_loss': 0.32222986221313477, 'eval_automl_metric': 0.09518348623853212, 'eval_runtime': 7.0713, 'eval_samples_per_second': 123.315, 'eval_steps_per_second': 123.315, 'epoch': 4.0}\n", "\u001b[2m\u001b[36m(train pid=50443)\u001b[0m {'train_runtime': 539.1601, 'train_samples_per_second': 499.659, 'train_steps_per_second': 15.617, 'train_loss': 0.2059766846428008, 'epoch': 4.0}\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "\u001b[2m\u001b[36m(train pid=50443)\u001b[0m ***** Running Prediction *****\n", "\u001b[2m\u001b[36m(train pid=50443)\u001b[0m Num examples = 872\n", "\u001b[2m\u001b[36m(train pid=50443)\u001b[0m Batch size = 1\n", "\u001b[2m\u001b[36m(train pid=50443)\u001b[0m Didn't find file /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-08-13_16-52-17/train_b7e08512_49_FLAML_sample_size=67349,global_max_steps=9223372036854775807,learner=transformer,learning_rate=0.0000,num_train__2022-08-13_17-19-11/checkpoint-6315/added_tokens.json. 
We won't load it.\n", "\u001b[2m\u001b[36m(train pid=50443)\u001b[0m loading file /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-08-13_16-52-17/train_b7e08512_49_FLAML_sample_size=67349,global_max_steps=9223372036854775807,learner=transformer,learning_rate=0.0000,num_train__2022-08-13_17-19-11/checkpoint-6315/vocab.txt\n", "\u001b[2m\u001b[36m(train pid=50443)\u001b[0m loading file /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-08-13_16-52-17/train_b7e08512_49_FLAML_sample_size=67349,global_max_steps=9223372036854775807,learner=transformer,learning_rate=0.0000,num_train__2022-08-13_17-19-11/checkpoint-6315/tokenizer.json\n", "\u001b[2m\u001b[36m(train pid=50443)\u001b[0m loading file None\n", "\u001b[2m\u001b[36m(train pid=50443)\u001b[0m loading file /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-08-13_16-52-17/train_b7e08512_49_FLAML_sample_size=67349,global_max_steps=9223372036854775807,learner=transformer,learning_rate=0.0000,num_train__2022-08-13_17-19-11/checkpoint-6315/special_tokens_map.json\n", "\u001b[2m\u001b[36m(train pid=50443)\u001b[0m loading file /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-08-13_16-52-17/train_b7e08512_49_FLAML_sample_size=67349,global_max_steps=9223372036854775807,learner=transformer,learning_rate=0.0000,num_train__2022-08-13_17-19-11/checkpoint-6315/tokenizer_config.json\n", "2022-08-13 17:28:47,144\tINFO tune.py:747 -- Total run time: 2189.40 seconds (1803.73 seconds for the tuning loop).\n", "[flaml.automl: 08-13 17:28:52] {3322} INFO - selected model: None\n", "/data/installation/anaconda3/envs/tmp/lib/python3.8/site-packages/transformers/optimization.py:306: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning\n", " warnings.warn(\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "{'loss': 0.417, 'learning_rate': 3.55863617139559e-05, 'epoch': 0.24}\n", "{'loss': 0.2872, 'learning_rate': 3.252648538429503e-05, 'epoch': 0.48}\n", "{'loss': 0.2452, 'learning_rate': 2.9466609054634162e-05, 'epoch': 0.71}\n", "{'loss': 0.2204, 'learning_rate': 2.6406732724973297e-05, 'epoch': 0.95}\n", "{'loss': 0.1823, 'learning_rate': 2.334685639531243e-05, 'epoch': 1.19}\n", "{'loss': 0.1743, 'learning_rate': 2.0286980065651558e-05, 'epoch': 1.43}\n", "{'loss': 0.1662, 'learning_rate': 1.722710373599069e-05, 'epoch': 1.66}\n", "{'loss': 0.1674, 'learning_rate': 1.416722740632982e-05, 'epoch': 1.9}\n", "{'loss': 0.1422, 'learning_rate': 1.1107351076668952e-05, 'epoch': 2.14}\n", "{'loss': 0.1271, 'learning_rate': 8.047474747008085e-06, 'epoch': 2.38}\n", "{'loss': 0.123, 'learning_rate': 4.987598417347216e-06, 'epoch': 2.61}\n", "{'loss': 0.1209, 'learning_rate': 1.927722087686347e-06, 'epoch': 2.85}\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "[flaml.automl: 08-13 17:36:12] {3465} INFO - retrain transformer for 440.2s\n", "[flaml.automl: 08-13 17:36:12] {3472} INFO - retrained model: None\n", "[flaml.automl: 08-13 17:36:12] {2749} INFO - fit succeeded\n", "[flaml.automl: 08-13 17:36:12] {2750} INFO - Time taken to find the best model: 1610.794395685196\n", "[flaml.automl: 08-13 17:36:12] {2761} WARNING - Time taken to find the best model is 89% of the provided time budget and not all estimators' hyperparameter search converged. 
Consider increasing the time budget.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "{'train_runtime': 415.3339, 'train_samples_per_second': 486.469, 'train_steps_per_second': 15.205, 'train_loss': 0.194211708848097, 'epoch': 3.0}\n" ] } ], "source": [ "'''the main FLAML AutoML API'''\n", "automl.fit(X_train=X_train, y_train=y_train, X_val=X_val, y_val=y_val, **automl_settings)" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "The best loss by FLAML: 0.07798165137614677\n" ] } ], "source": [ "print(\"The best loss by FLAML: {}\".format(automl.best_loss))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Best model and metric" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Best hyperparameter config: {'learning_rate': 1.4736175808553141e-05, 'num_train_epochs': 7.623375372739029, 'per_device_train_batch_size': 16, 'warmup_ratio': 0.21605876280261357, 'weight_decay': 0.11938244526496489, 'adam_epsilon': 7.353322403647365e-07, 'seed': 42, 'global_max_steps': 1878, 'learner': 'transformer'}\n", "Best accuracy on validation data: 0.9404\n", "Training duration of best run: 157.7 s\n" ] } ], "source": [ "'''retrieve the best config and best learner'''\n", "print('Best hyperparameter config:', automl.best_config)\n", "print('Best accuracy on validation data: {0:.4g}'.format(1 - automl.best_loss))\n", "print('Training duration of best run: {0:.4g} s'.format(automl.best_config_train_time))" ] },
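 { "cell_type": "markdown", "metadata": {}, "source": [ "As a sanity check, the validation accuracy can also be recomputed directly. The next cell is a minimal sketch using scikit-learn's `accuracy_score` (scikit-learn is already a FLAML dependency); it assumes the fitted `automl` object from above, and the result should be close to `1 - automl.best_loss`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.metrics import accuracy_score\n", "\n", "# predict on the validation split with the best model found by FLAML\n", "y_val_pred = automl.predict(X_val)\n", "print('Validation accuracy: {0:.4g}'.format(accuracy_score(y_val, y_val_pred)))" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "/data/installation/anaconda3/envs/tmp/lib/python3.8/site-packages/transformers/optimization.py:306: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version.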
Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning\n", " warnings.warn(\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "{'loss': 0.528, 'learning_rate': 8.898933352349567e-06, 'epoch': 1.0}\n", "{'eval_loss': 0.2549280524253845, 'eval_automl_metric': 0.08600917431192656, 'eval_runtime': 1.0003, 'eval_samples_per_second': 871.751, 'eval_steps_per_second': 54.984, 'epoch': 1.0}\n", "{'loss': 0.2278, 'learning_rate': 1.3880017803076292e-05, 'epoch': 2.0}\n", "{'eval_loss': 0.24966619908809662, 'eval_automl_metric': 0.06766055045871555, 'eval_runtime': 1.0201, 'eval_samples_per_second': 854.778, 'eval_steps_per_second': 53.914, 'epoch': 2.0}\n", "{'loss': 0.1455, 'learning_rate': 1.1410179501562432e-05, 'epoch': 3.0}\n", "{'eval_loss': 0.23046882450580597, 'eval_automl_metric': 0.059633027522935755, 'eval_runtime': 1.0097, 'eval_samples_per_second': 863.6, 'eval_steps_per_second': 54.47, 'epoch': 3.0}\n", "{'eval_loss': 0.23046882450580597, 'eval_automl_metric': 0.059633027522935755, 'eval_runtime': 0.9726, 'eval_samples_per_second': 896.568, 'eval_steps_per_second': 56.55, 'epoch': 3.0}\n", "{'train_runtime': 146.7879, 'train_samples_per_second': 519.346, 'train_steps_per_second': 32.462, 'train_loss': 0.30043953600021217, 'epoch': 3.0}\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "Using amp half precision backend\n", "***** Running Prediction *****\n", " Num examples = 872\n", " Batch size = 64\n" ] } ], "source": [ "import pickle\n", "automl.pickle(\"automl.pkl\")\n", "\n", "with open(\"automl.pkl\", \"rb\") as f:\n", " automl = pickle.load(f)" ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "scrolled": true }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Using amp half precision backend\n", "***** Running Prediction *****\n", " Num examples = 1821\n", " Batch size = 4\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Predicted labels [0 0 1 ... 
1 1 1]\n" ] } ], "source": [ "'''compute predictions of testing dataset''' \n", "y_pred = automl.predict(X_test, **{\"per_device_eval_batch_size\": 1})\n", "print('Predicted labels', y_pred)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Log history" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "{'Current Learner': 'transformer', 'Current Sample': 10000, 'Current Hyper-parameters': {'learning_rate': 1.0000000000000003e-05, 'num_train_epochs': 1.0, 'per_device_train_batch_size': 32, 'warmup_ratio': 0.0, 'weight_decay': 0.0, 'adam_epsilon': 1e-06, 'seed': 42, 'global_max_steps': 313, 'learner': 'transformer'}, 'Best Learner': 'transformer', 'Best Hyper-parameters': {'learning_rate': 1.0000000000000003e-05, 'num_train_epochs': 1.0, 'per_device_train_batch_size': 32, 'warmup_ratio': 0.0, 'weight_decay': 0.0, 'adam_epsilon': 1e-06, 'seed': 42, 'global_max_steps': 313, 'learner': 'transformer'}}\n", "{'Current Learner': 'transformer', 'Current Sample': 10000, 'Current Hyper-parameters': {'learning_rate': 1.4211966209891772e-05, 'num_train_epochs': 0.22667066737595704, 'per_device_train_batch_size': 4, 'warmup_ratio': 0.0638407085008166, 'weight_decay': 0.24365576482793252, 'adam_epsilon': 1.2017005181798623e-08, 'seed': 44, 'global_max_steps': 567, 'learner': 'transformer'}, 'Best Learner': 'transformer', 'Best Hyper-parameters': {'learning_rate': 1.0000000000000003e-05, 'num_train_epochs': 1.0, 'per_device_train_batch_size': 32, 'warmup_ratio': 0.0, 'weight_decay': 0.0, 'adam_epsilon': 1e-06, 'seed': 42, 'global_max_steps': 313, 'learner': 'transformer'}}\n", "{'Current Learner': 'transformer', 'Current Sample': 10000, 'Current Hyper-parameters': {'learning_rate': 0.00010969207855501012, 'num_train_epochs': 0.5341197058945968, 'per_device_train_batch_size': 32, 'warmup_ratio': 0.00727467370977386, 'weight_decay': 0.2998042286577564, 'adam_epsilon': 2.5359832189101376e-08, 'seed': 44, 'global_max_steps': 168, 'learner': 'transformer'}, 'Best Learner': 'transformer', 'Best Hyper-parameters': {'learning_rate': 0.00010969207855501012, 'num_train_epochs': 0.5341197058945968, 'per_device_train_batch_size': 32, 'warmup_ratio': 0.00727467370977386, 'weight_decay': 0.2998042286577564, 'adam_epsilon': 2.5359832189101376e-08, 'seed': 44, 'global_max_steps': 168, 'learner': 'transformer'}}\n", "{'Current Learner': 'transformer', 'Current Sample': 10000, 'Current Hyper-parameters': {'learning_rate': 6.105850155006292e-06, 'num_train_epochs': 0.5203392488419986, 'per_device_train_batch_size': 32, 'warmup_ratio': 0.02982185620209511, 'weight_decay': 0.06967105125024514, 'adam_epsilon': 2.8808753274240916e-07, 'seed': 42, 'global_max_steps': 163, 'learner': 'transformer'}, 'Best Learner': 'transformer', 'Best Hyper-parameters': {'learning_rate': 0.00010969207855501012, 'num_train_epochs': 0.5341197058945968, 'per_device_train_batch_size': 32, 'warmup_ratio': 0.00727467370977386, 'weight_decay': 0.2998042286577564, 'adam_epsilon': 2.5359832189101376e-08, 'seed': 44, 'global_max_steps': 168, 'learner': 'transformer'}}\n", "{'Current Learner': 'transformer', 'Current Sample': 10000, 'Current Hyper-parameters': {'learning_rate': 9.029571768192494e-05, 'num_train_epochs': 0.23081935882872334, 'per_device_train_batch_size': 16, 'warmup_ratio': 0.05927331754619176, 'weight_decay': 0.06990187648448716, 'adam_epsilon': 2.6275569358808137e-08, 'seed': 41, 'global_max_steps': 145, 'learner': 'transformer'}, 'Best 
Learner': 'transformer', 'Best Hyper-parameters': {'learning_rate': 0.00010969207855501012, 'num_train_epochs': 0.5341197058945968, 'per_device_train_batch_size': 32, 'warmup_ratio': 0.00727467370977386, 'weight_decay': 0.2998042286577564, 'adam_epsilon': 2.5359832189101376e-08, 'seed': 44, 'global_max_steps': 168, 'learner': 'transformer'}}\n", "{'Current Learner': 'transformer', 'Current Sample': 10000, 'Current Hyper-parameters': {'learning_rate': 0.00010208514406839324, 'num_train_epochs': 0.24916036398153807, 'per_device_train_batch_size': 32, 'warmup_ratio': 0.0, 'weight_decay': 0.28313267937361647, 'adam_epsilon': 1.5042842082223077e-08, 'seed': 44, 'global_max_steps': 78, 'learner': 'transformer'}, 'Best Learner': 'transformer', 'Best Hyper-parameters': {'learning_rate': 0.00010969207855501012, 'num_train_epochs': 0.5341197058945968, 'per_device_train_batch_size': 32, 'warmup_ratio': 0.00727467370977386, 'weight_decay': 0.2998042286577564, 'adam_epsilon': 2.5359832189101376e-08, 'seed': 44, 'global_max_steps': 168, 'learner': 'transformer'}}\n", "{'Current Learner': 'transformer', 'Current Sample': 10000, 'Current Hyper-parameters': {'learning_rate': 0.00011786584823407102, 'num_train_epochs': 1.1449809097488275, 'per_device_train_batch_size': 32, 'warmup_ratio': 0.015398649752204965, 'weight_decay': 0.3, 'adam_epsilon': 4.275263179285732e-08, 'seed': 43, 'global_max_steps': 313, 'learner': 'transformer'}, 'Best Learner': 'transformer', 'Best Hyper-parameters': {'learning_rate': 0.00011786584823407102, 'num_train_epochs': 1.1449809097488275, 'per_device_train_batch_size': 32, 'warmup_ratio': 0.015398649752204965, 'weight_decay': 0.3, 'adam_epsilon': 4.275263179285732e-08, 'seed': 43, 'global_max_steps': 313, 'learner': 'transformer'}}\n", "{'Current Learner': 'transformer', 'Current Sample': 10000, 'Current Hyper-parameters': {'learning_rate': 3.59933640287326e-05, 'num_train_epochs': 1.7937390219164033, 'per_device_train_batch_size': 32, 'warmup_ratio': 0.016745580745240057, 'weight_decay': 0.2892279950527897, 'adam_epsilon': 6.514301011066509e-08, 'seed': 44, 'global_max_steps': 313, 'learner': 'transformer'}, 'Best Learner': 'transformer', 'Best Hyper-parameters': {'learning_rate': 3.59933640287326e-05, 'num_train_epochs': 1.7937390219164033, 'per_device_train_batch_size': 32, 'warmup_ratio': 0.016745580745240057, 'weight_decay': 0.2892279950527897, 'adam_epsilon': 6.514301011066509e-08, 'seed': 44, 'global_max_steps': 313, 'learner': 'transformer'}}\n", "{'Current Learner': 'transformer', 'Current Sample': 10000, 'Current Hyper-parameters': {'learning_rate': 1.4736175808553141e-05, 'num_train_epochs': 7.623375372739029, 'per_device_train_batch_size': 16, 'warmup_ratio': 0.21605876280261357, 'weight_decay': 0.11938244526496489, 'adam_epsilon': 7.353322403647365e-07, 'seed': 42, 'global_max_steps': 1878, 'learner': 'transformer'}, 'Best Learner': 'transformer', 'Best Hyper-parameters': {'learning_rate': 1.4736175808553141e-05, 'num_train_epochs': 7.623375372739029, 'per_device_train_batch_size': 16, 'warmup_ratio': 0.21605876280261357, 'weight_decay': 0.11938244526496489, 'adam_epsilon': 7.353322403647365e-07, 'seed': 42, 'global_max_steps': 1878, 'learner': 'transformer'}}\n", "{'Current Learner': 'transformer', 'Current Sample': 10000, 'Current Hyper-parameters': {'learning_rate': 0.0001178658482340711, 'num_train_epochs': 1.1449809097488262, 'per_device_train_batch_size': 32, 'warmup_ratio': 0.015398649752204969, 'weight_decay': 0.3, 'adam_epsilon': 4.275263179285729e-08, 
'seed': 43, 'global_max_steps': 313, 'learner': 'transformer'}, 'Best Learner': 'transformer', 'Best Hyper-parameters': {'learning_rate': 1.4736175808553141e-05, 'num_train_epochs': 7.623375372739029, 'per_device_train_batch_size': 16, 'warmup_ratio': 0.21605876280261357, 'weight_decay': 0.11938244526496489, 'adam_epsilon': 7.353322403647365e-07, 'seed': 42, 'global_max_steps': 1878, 'learner': 'transformer'}}\n" ] } ], "source": [ "from flaml.data import get_output_from_log\n", "time_history, best_valid_loss_history, valid_loss_history, config_history, metric_history = \\\n", " get_output_from_log(filename=automl_settings['log_file_name'], time_budget=3000)\n", "for config in config_history:\n", " print(config)" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "10\n" ] }, { "data": { "image/png": "iVBORw0KGgoAAAANSUhEUgAAAYgAAAEWCAYAAAB8LwAVAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjUuMSwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/YYfK9AAAACXBIWXMAAAsTAAALEwEAmpwYAAAiUElEQVR4nO3de7xVdZ3/8ddbQqHxAgg1CnKxkNRqxHZaWXkphfxVojkN9qtxbCamKf1VTs7gdHP050TZ5WePcXKwn2M2maFjRGWRhVrjJTmEglAYoilHSxApUwKBz/yxvhsW23X2WQfOvp3zfj4e+8Fe3/Vde332Pof9Oet7W4oIzMzMau3V6gDMzKw9OUGYmVkhJwgzMyvkBGFmZoWcIMzMrJAThJmZFXKCMNsNkt4gaVWr4zBrJCcI6ziSHpb05lbGEBE/jYgpjXp9SdMk/UTS05LWSbpd0tsbdT6zIk4QZgUkDWnhuc8EbgCuBcYBLwY+CbxtN15Lkvz/3HaLf3FswJC0l6TZkh6U9KSkeZJG5fbfIOk3kn6X/jo/MrfvGklflnSzpGeAE9OVykclLUvHfFPSsFT/BElrc8f3WDft/wdJj0t6TNLfSApJLy14DwK+AFwSEV+JiN9FxPaIuD0i3pfqXCTpP3PHTEyv94K0fZukSyXdATwLXCCpq+Y8H5G0ID3fR9LnJD0i6beSrpQ0fA9/HDYAOEHYQHIeMAM4HjgYeAq4Irf/+8Bk4EXAz4Gv1xz/LuBSYD/gv1PZO4HpwCTglcBf1Tl/YV1J04HzgTcDLwVOqPMaU4BDgBvr1CnjPcAssvdyJTBF0uTc/ncB16Xnc4DDgKNSfGPJrlhskHOCsIHk/cDHImJtRGwGLgLOrP5lHRFXR8TTuX1/JumA3PHfjog70l/sf0xlX4qIxyJiA/Adsi/RnvRU953Af0TEioh4Np27Jwemfx8v95Z7dE0639aI+B3wbeAsgJQoXgYsSFcss4CPRMSGiHga+Bdg5h6e3wYAJwgbSCYA35K0UdJG4BfANuDFkoZImpOan34PPJyOGZ07/tGC1/xN7vmzwL51zt9T3YNrXrvoPFVPpn8PqlOnjNpzXEdKEGRXD/NTshoDvBBYkvvcfpDKbZBzgrCB5FHgLRExIvcYFhHdZF+Kp5E18xwATEzHKHd8o5Y2fpyss7nqkDp1V5G9j3fUqfMM2Zd61Z8W1Kl9L7cAYyQdRZYoqs1L64FNwJG5z+yAiKiXCG2QcIKwTjVU0rDc4wVkbe2XSpoAIGmMpNNS/f2AzWR/ob+QrBmlWeYB50g6XNILgU/0VDGy9ffPBz4h6RxJ+6fO99dLmpuq3Qu8UdL41ER2YW8BRMRzZCOjLgNGkSUMImI7cBXwRUkvApA0VtK03X2zNnA4QVinupnsL9/q4yLgcmAB8ENJTwN3A8em+tcCvwa6gZVpX1NExPeBLwG3Aqtz597cQ/0bgb8A3gs8BvwW+L9k/QhExC3AN4FlwBLguyVDuY7sCuqGiNiaK//Halyp+e1HZJ3lNsjJNwwyay5JhwP3A/vUfFGbtRVfQZg1gaTT03yDkcBngO84OVi7c4Iwa46/BZ4AHiQbWfV3rQ3HrHduYjIzs0K+gjAzs0IvaHUA/WX06NExceLEVodhZtZRlixZsj4iCidGNjRBpDVoLgeGAF+JiDk1+ycAV5PN2twAvDsi1qZ924DlqeojEVF3qeOJEyfS1dVVr4qZmdWQ9Oue9jUsQaTlkq8ATgbWAoslLYiIlblqnwOujYivSjoJ+DTZImMAmyLiqEbFZ2Zm9TWyD+IYYHVErImILcD1ZEsd5B0BLErPby3Yb2ZmLdLIBDGWXRcMW5vK8u4DzkjPTwf2k1RdzXKYpC5Jd0uaUXQCSbNSna5169b1Y+hmZtbqUUwfBY6XtJRsDf9usjHiABMiokK2yNr/k/SS2oMjYm5EVCKiMmaMF580M+tPjeyk7mbXVSvHpbIdIuIx0hWEpH2Bd0TExrSvO/27RtJtwFSySUZmZtYEjUwQi4HJkiaRJYaZZFcDO0gaDWxIK0peSDaiibQcwbMRsTnVOQ74bANjNTPrGPOXdnPZwlU8tnETB48YzgXTpjBjam0L/p5rWBNTWmfmXGAh2Y1b5kXECkkXS6oOWT0BWCXpAbIbs1+ayg8HuiTdR9Z5Padm9JOZ2aA0f2k3F960nO6Nmwige+MmLrxpOfOXdvd6bF8NmKU2KpVKeB6EmQ10x81ZRPfGTc8rHztiOHfMPqnPrydpServfZ4BM5PazAauZjWpdILHCpJDvfI90epRTGZmdTWzSaUTHDxieJ/K94SvIMysrV22cBWbntu2S9mm57bxDzcu4xv3PNKiqFpn2NC92EuwPdc7MHzoEC6Y1v83AXSCMLO21lPTyZZt25scSXsYve8+ADy6YRNbtm1nbAOb3JwgzKytHTxieI+dst/829e2IKLBw30QZtbWLpg2heFDh+xS1qgmFduVryDMGsQjb/pH9TP7hxuXNbxJpdM0+nfMCcKsAaojb6qdq9WRN4C/2HbDjKljd3RIu1kp04zfMScIswbwyJv+t/Lx33PEQfu3Ooy20dPv2GULV/VbgnAfhFkDeORN/zvioP057ShffVU1Y8KcryDMGsAjb6zRevod688Jc76
CMGsAj7yxRmvG75ivIMwawCNvrNGqv0sexWTWgTzyxhptxtSxDf2jw01MZmZWyFcQbc6TrcysVZwg2pgnW5lZKzlBtLFOmmy1/g+bd6wuufeQvThk1PAdq04OZp7cZZ3MCaKNdcpkq/V/2MxD65/ZsT79lm3beWj9MwCDPkl4cpd1MieINtYpk62Om7Nol5uXQHYzkz8+t72t4jSzvvEopjbWKZOtmnmPXDNrHieINjZj6lg+fcYr2HtI9mMaO2I4nz7jFW3XQd3Me+SaWfM0NEFImi5plaTVkmYX7J8g6ceSlkm6TdK43L6zJf0qPc5uZJztbMbUsUwdP4JjJ43ijtkntV1ygM650jGzvmlYH4SkIcAVwMnAWmCxpAURsTJX7XPAtRHxVUknAZ8G3iNpFPApoAIEsCQd+1R/x+l5BnuuGVP+zaz5GtlJfQywOiLWAEi6HjgNyCeII4Dz0/Nbgfnp+TTglojYkI69BZgOfKM/A/Q8g/7T6Cn/ZtZ8jUwQY4FHc9trgWNr6twHnAFcDpwO7CfpwB6Ofd63j6RZwCyA8ePH9znATpln4LH0ZtYKre6k/ihwvKSlwPFAN7Ct/iE7RcTciKhERGXMmDF9PnmnzDPwWHoza4VGXkF0A4fktselsh0i4jGyKwgk7Qu8IyI2SuoGTqg59rb+DrBT5hmYmbVCI68gFgOTJU2StDcwE1iQryBptKRqDBcCV6fnC4FTJI2UNBI4JZX1K4++MTPrWcMSRERsBc4l+2L/BTAvIlZIuljS21O1E4BVkh4AXgxcmo7dAFxClmQWAxdXO6z7U3WewdgRwxHtO8/AzKwVFBG91+oAlUolurq6Wh2GmVlHkbQkIipF+1rdSW1mZm3Ki/XV8MQ5M7OME0SOJ86Zme3kJqacnibOXbZwVYsiMjNrHSeIHC9bbWa2kxNEjpetNjPbyQkixxPn+s/8pd0cN2cRk2Z/j+PmLGL+0u7eDzKztuJO6hwvW90/3NlvNjA4QdTwstV7rl5nvz9bs87hJibrd+7sNxsYnCCs37mz32xgcIKwfufOfrOBwX0Q1u/c2W82MDhBWEO4s9+s87mJyczMCjlBmJlZIScIMzMr5ARhZmaFnCDMzKyQE4SZmRXqNUFIOrAZgZiZWXspcwVxt6QbJJ0qSQ2PyMzM2kKZBHEYMBd4D/ArSf8i6bAyLy5puqRVklZLml2wf7ykWyUtlbRM0qmpfKKkTZLuTY8r+/KmzMxsz/U6kzoiArgFuEXSicB/Ah+QdB8wOyLuKjpO0hDgCuBkYC2wWNKCiFiZq/ZxYF5EfFnSEcDNwMS078GIOGr33paZme2pXhNE6oN4N9kVxG+B84AFwFHADcCkHg49BlgdEWvS61wPnAbkE0QA+6fnBwCP9fkdmJlZQ5RpYrqL7Et8RkT8r4i4KSK2RkQXUK/pZyzwaG57bSrLuwh4t6S1ZFcP5+X2TUpNT7dLekPRCSTNktQlqWvdunUl3oqZmZVVZrG+KamZ6Xki4jN7eP6zgGsi4vOSXgt8TdLLgceB8RHxpKRXAfMlHRkRv685/1yy/hEqlUphjGZmtnvKXEH8UNKI6oakkZIWljiuGzgktz0uleX9NTAPIPVlDANGR8TmiHgylS8BHiTrLDczsyYpkyDGRMTG6kZEPAW8qMRxi4HJkiZJ2huYSdZ3kfcI8CYASYeTJYh1ksakTm4kHQpMBtaUOKeZmfWTMk1M2ySNj4hHACRNIOtcrisitko6F1gIDAGujogVki4GuiJiAfD3wFWSPpJe868iIiS9EbhY0nPAduD9EbFht96hmZntFvXQvbCzgjSdrJ3/dkDAG4BZEVGmmalpKpVKdHV1tToMM7OOImlJRFSK9pWZB/EDSUcDr0lFH46I9f0ZoJmZtZ+ytxzdBjxB1kdwhCQi4ieNC8vMzFqtzES5vwE+RDYK6V6yK4m7gJMaGpmZmbVUmVFMHwJeDfw6Ik4EpgIbGxmUmZm1XpkE8ceI+COApH0i4pfAlMaGZWZmrVamD2Jtmig3n2zBvqeAXzcyKDMza70yo5hOT08vknQr2aJ6P2hoVGZm1nJ1E0SazbwiIl4GEBG3NyUqMzNrubp9EBGxDVglaXyT4jEzszZRpg9iJLBC0j3AM9XCiHh7w6IyM7OWK5MgPtHwKMzMrO2U6aR2v4OZ2SBUZib10+xcvXVvYCjwTETs3/NRZmbW6cpcQexXfS5JZPeVfk3PR5iZ2UBQZib1DpGZD0xrTDhmZtYuyjQxnZHb3AuoAH9sWERmZtYWyoxielvu+VbgYbJmJjMzG8DK9EGc04xAzMysvfTaByHpq2mxvur2SElXNzQqMzNruTKd1K+MiI3VjYh4iuyeEGZmNoCVSRB7SRpZ3ZA0ivK3KjUzsw5VJkF8HrhL0iWSLgHuBD5b5sUlTZe0StJqSbML9o+XdKukpZKWSTo1t+/CdNwqSR5Wa2bWZGU6qa+V1MXOe1CfERErezsuLRV+BXAysBZYLGlBzbEfB+ZFxJclHQHcDExMz2cCRwIHAz+SdFhaXdbMzJqgTCf1a4BHI+JfI+Jfye4wd2yJ1z4GWB0RayJiC3A9zx8eG0B1yY4DgMfS89OA6yNic0Q8BKxOr2dmZk1Sponpy8Afctt/SGW9GQs8mttem8ryLgLeLWkt2dXDeX04FkmzJHVJ6lq3bl2JkMzMrKwyCUIRUV2sj4jYTv91Up8FXBMR44BTga9JKr38R0TMjYhKRFTGjBnTTyGZmRmUSxBrJP0fSUPT40PAmhLHdQOH5LbHpbK8vwbmAUTEXcAwYHTJY83MrIHKJIj3A68j+4JeCxwLvK/EcYuByZImSdqbrNN5QU2dR4A3AUg6nCxBrEv1ZkraR9IkYDJwT4lzmplZPykziukJsi93ACQNB94K3NDLcVslnQssBIYAV0fECkkXA10RsQD4e+AqSR8h67D+q9SctULSPGAl2fpPH/QIJjOz5lKue6HnStmQ1WlkfQYnA/8dEWc2OLY+qVQq0dXV1eowzMw6iqQlEVEp2lf3CkLS8cC7yDqQ7wGOAw6NiGf7PUozM2srPSaINPT0EbIhrR+NiKclPeTkYGY2ONTrpL6RbBbzXwBvk/Qn7Lw3tZmZDXA9JoiI+DAwiWwtphOAVcAYSe+UtG9TojMzs5apO8w13YP61oiYRZYsziJbBuPhJsRmZmYtVHpGdEQ8B3wX+G4a6mpmZgNY6WUt8iJiU38HYmZm7WW3EoSZmQ18ThBmZlao1z4ISYcBFwAT8vUj4qQeDzIzs45XppP6BuBK4CrA6yGZmQ0SZRLE1ogoc4MgMzMbQMr0QXxH0gckHSRpVPXR8MjMzKylylxBnJ3+vSBXFsCh/R+OmZm1izL3g5jUjEDMzKy9lBnFNBT4O+CNqeg24N/TzGozMxugyjQxfRkYCvxb2n5PKvubRgVlZmatVyZBvDoi/iy3vUjSfY0KyMzM2kOZUUzbJL2kuiHpUDwfwsxswCtzBXEBcKukNYDIZlSf09CozMys5cqMYvqxpMnAlFS0KiI2NzYsMzNrtX
r3pD4pIhZJOqNm10slERE3NTg2MzNroXpXEMcDi4C3FewLoNcEIWk6cDkwBPhKRMyp2f9F4MS0+ULgRRExIu3bBixP+x6JiLf3dj4zM+s/PSaIiPhUenpxRDyU3yep18lzkoYAVwAnA2uBxZIWRMTK3Dk+kqt/HjA19xKbIuKoMm/CzMz6X5lRTP9VUHZjieOOAVZHxJqI2AJcT3Y/656cBXyjxOuamVkT1OuDeBlwJHBATT/E/sCwEq89Fng0t70WOLaHc00AJpE1aVUNk9QFbAXmRMT8guNmAbMAxo8fXyIkMzMrq14fxBTgrcAIdu2HeBp4Xz/HMRO4MSLy8ysmRER3mnexSNLyiHgwf1BEzAXmAlQqlejnmMzMBrV6fRDfBr4t6bURcdduvHY3cEhue1wqKzIT+GDN+bvTv2sk3UbWP/Hg8w81M7NGKDNRbqmkD5I1N+1oWoqI9/Zy3GJgcurQ7iZLAu+qrZSaskYCd+XKRgLPRsRmSaOB44DPlojVzMz6SZlO6q8BfwpMA24nuxJ4ureDImIrcC6wEPgFMC8iVki6WFJ+yOpM4PqIyDcRHQ50pTWfbiXrg1iJmZk1jXb9Xi6oIC2NiKmSlkXEK9Py3z+NiNc0J8RyKpVKdHV1tToMM7OOImlJRFSK9pW5gqje92GjpJcDBwAv6q/gzMysPZXpg5ib+gQ+ASwA9gU+2dCozMys5cos1veV9PR2fB9qM7NBo95EufPrHRgRX+j/cMzMrF3Uu4LYL/07BXg1WfMSZJPm7mlkUGZm1nr1Jsr9M4CknwBHR8TTafsi4HtNic7MzFqmzCimFwNbcttbUpmZmQ1gZUYxXQvcI+lbaXsGcE2jAjIzs/ZQZhTTpZK+D7whFZ0TEUsbG5aZmbVavVFM+0fE7yWNAh5Oj+q+URGxofHhmZlZq9S7griObLnvJWS3GK1S2vacCDOzAazeKKa3pn97vb2omZkNPPWamI6ud2BE/Lz/wzEzs3ZRr4np83X2BXBSP8diZmZtpF4T04nNDMTMzNpLmXkQpGW+j2DXO8pd26igzMys9XpNEJI+BZxAliBuBt4C/DfZBDozMxugyiy1cSbwJuA3EXEO8GdkNw0yM7MBrEyC2BQR24GtkvYHngAOaWxYZmbWamX6ILokjQCuIps09wfgrkYGZWZmrVdvHsQVwHUR8YFUdKWkHwD7R8SypkRnZmYtU+8K4gHgc5IOAuYB3/AifWZmg0ePfRARcXlEvBY4HngSuFrSLyV9StJhZV5c0nRJqyStljS7YP8XJd2bHg9I2pjbd7akX6XH2X1/a2ZmticUEb3XqlaWpgJXA6+MiCG91B1CdhVyMrAWWAycFREre6h/HjA1It6bVpDtAipks7aXAK+KiKd6Ol+lUomurq7S78XMzEDSkoioFO3rdRSTpBdIepukrwPfB1YBZ5Q47zHA6ohYExFbgOuB0+rUPwv4Rno+DbglIjakpHALML3EOc3MrJ/U66Q+mexL+1TgHrIv+FkR8UzJ1x4LPJrbXgsc28O5JgCTgEV1jh1bcNwsYBbA+PHjS4ZlZmZl1LuCuBC4Ezg8It4eEdf1ITn01UzgxojY1peDImJuRFQiojJmzJgGhWZmNjjVW6xvT1dr7WbXCXXjUlmRmcAHa449oebY2/YwHjMz64MyM6l312JgsqRJkvYmSwILaitJehkwkl0n3y0ETpE0UtJI4JRUZmZmTVJqNdfdERFbJZ1L9sU+BLg6IlZIuhjoiohqspgJXB+54VQRsUHSJWRJBuBi3wPbzKy5+jTMtZ15mKuZWd/t0TBXMzMbnJwgzMyskBOEmZkVcoIwM7NCThBmZlbICcLMzAo5QZiZWSEnCDMzK+QEYWZmhZwgzMyskBOEmZkVcoIwM7NCThBmZlbICcLMzAo5QZiZWSEnCDMzK+QEYWZmhZwgzMyskBOEmZkVcoIwM7NCThBmZlaooQlC0nRJqyStljS7hzrvlLRS0gpJ1+XKt0m6Nz0WNDJOMzN7vhc06oUlDQGuAE4G1gKLJS2IiJW5OpOBC4HjIuIpSS/KvcSmiDiqUfGZmVl9jbyCOAZYHRFrImILcD1wWk2d9wFXRMRTABHxRAPjMTOzPmhkghgLPJrbXpvK8g4DDpN0h6S7JU3P7RsmqSuVzyg6gaRZqU7XunXr+jV4M7PBrmFNTH04/2TgBGAc8BNJr4iIjcCEiOiWdCiwSNLyiHgwf3BEzAXmAlQqlWhq5GZmA1wjryC6gUNy2+NSWd5aYEFEPBcRDwEPkCUMIqI7/bsGuA2Y2sBYzcysRiMTxGJgsqRJkvYGZgK1o5Hmk109IGk0WZPTGkkjJe2TKz8OWImZmTVNw5qYImKrpHOBhcAQ4OqIWCHpYqArIhakfadIWglsAy6IiCclvQ74d0nbyZLYnPzoJzMzazxFDIym+0qlEl1dXa0Ow8yso0haEhGVon2eSW1mZoWcIMzMrJAThJmZFXKCMDOzQk4QZmZWyAnCzMwKOUGYmVkhJwgzMyvkBGFmZoWcIMzMrJAThJmZFWr1/SBsD8xf2s1lC1fx2MZNHDxiOBdMm8KMqbX3ZDIz2z1OEB1q/tJuLrxpOZue2wZA98ZNXHjTcgAnCTPrF25i6lCXLVy1IzlUbXpuG5ctXNWiiMxsoHGC6FCPbdzUp3Izs75yguhQB48Y3qdyM7O+coLoUBdMm8LwoUN2KRs+dAgXTJvSoojMbKBxJ3WHqnZEexSTmTWKE0QHmzF1rBOCmTWMm5jMzKyQE4SZmRVygjAzs0JOEGZmVsgJwszMCikiWh1Dv5C0Dvh1iaqjgfUNDqeRHH9rOf7Wcvz9b0JEjCnaMWASRFmSuiKi0uo4dpfjby3H31qOv7ncxGRmZoWcIMzMrNBgTBBzWx3AHnL8reX4W8vxN9Gg64MwM7NyBuMVhJmZleAEYWZmhQZ8gpD0sKTlku6V1JXKRkm6RdKv0r8jWx1nlaSrJT0h6f5cWWG8ynxJ0mpJyyQd3brId8RaFP9FkrrTz+BeSafm9l2Y4l8laVprot4RyyGSbpW0UtIKSR9K5R3x+deJvyM+/xTPMEn3SLovvYd/TuWTJP0sxfpNSXun8n3S9uq0f2Kbxn+NpIdyP4OjUnlb/Q49T0QM6AfwMDC6puyzwOz0fDbwmVbHmYvtjcDRwP29xQucCnwfEPAa4GdtGv9FwEcL6h4B3AfsA0wCHgSGtDD2g4Cj0/P9gAdSjB3x+deJvyM+/xSTgH3T86HAz9JnOw+YmcqvBP4uPf8AcGV6PhP4ZpvGfw1wZkH9tvodqn0M+CuIHpwGfDU9/yowo3Wh7CoifgJsqCnuKd7TgGsjczcwQtJBTQm0Bz3E35PTgOsjYnNEPASsBo5pWHC9iIjHI+Ln6fnTwC+AsXTI518n/p601ecPkD7LP6TNoekRwEnAjam89mdQ/dncCLxJkpoT7fPVib8nbfU7VGswJIgAfihpiaRZqezFEfF4ev4b4MWtCa20nuIdCzyaq7eW+l8IrXRuuoS+Otek17bxp6aKqWR/AXbc518TP
3TQ5y9piKR7gSeAW8iubDZGxNZUJR/njveQ9v8OOLCpAdeojT8iqj+DS9PP4IuS9kllbfkzqBoMCeL1EXE08Bbgg5LemN8Z2XVex4z17bR4ky8DLwGOAh4HPt/SaHohaV/gv4APR8Tv8/s64fMviL+jPv+I2BYRRwHjyK5oXtbaiPqmNn5JLwcuJHsfrwZGAf/YugjLG/AJIiK6079PAN8i+4X7bfUyLv37ROsiLKWneLuBQ3L1xqWythIRv03/abYDV7GzGaPt4pc0lOzL9esRcVMq7pjPvyj+Tvr88yJiI3Ar8FqyppfqLZLzce54D2n/AcCTzY20WC7+6an5LyJiM/AfdMjPYEAnCEl/Imm/6nPgFOB+YAFwdqp2NvDt1kRYWk/xLgD+Mo2EeA3wu1xTSNuoaVM9nexnAFn8M9NIlEnAZOCeZsdXldqu/z/wi4j4Qm5XR3z+PcXfKZ8/gKQxkkak58OBk8n6Um4FzkzVan8G1Z/NmcCidJXXEj3E/8vcHxgi6z/J/wza5nfoeVrdS97IB3Ao2SiN+4AVwMdS+YHAj4FfAT8CRrU61lzM3yBrBniOrD3yr3uKl2zkwxVkbbTLgUqbxv+1FN8ysv8QB+XqfyzFvwp4S4tjfz1Z89Ey4N70OLVTPv868XfE55/ieSWwNMV6P/DJVH4oWfJaDdwA7JPKh6Xt1Wn/oW0a/6L0M7gf+E92jnRqq9+h2oeX2jAzs0IDuonJzMx2nxOEmZkVcoIwM7NCThBmZlbICcLMzAo5QVhHSMsTfDi3vVDSV3Lbn5d0fp3jr5F0Znp+m6Tn3The0lBJc5St2vpzSXdJekva97Ck0bsR947z9rD/irS650pJm3KrfZ4p6ebqmPr+JOkgSd+ts39vST/JTUyzQcoJwjrFHcDrACTtBYwGjsztfx1w5x6e4xKyFVFfHtnyLDPIVkVtmIj4YGTLMpwKPBgRR6XHjRFxamSzcfvb+WQzqnuKaQvZvI+/aMC5rYM4QVinuJNsyQXIEsP9wNOSRqaFzw4Hfi7pk5IWS7pf0tyyK3tKeiHwPuC8yJZDILIlKuYV1D0/vf79NVc1f5kWY7tP0tcKjrskXVEMKRnTw5JGS5oo6Zfp2AckfV3SmyXdka52jkn1/0TZYnz3SFoq6bQeXvodwA/SMUem+vem2CenOvOB/10mThu4fAlpHSEiHpO0VdJ4squFu8hWvXwt2QqeyyNii6R/jYiLAdKX9FuB75Q4xUuBR6Jmcb5akl4FnAMcSzYL9meSbge2AB8HXhcR6yWNqjnuMrKrkXNi92anvhT4c+C9wGLgXWQzp98O/BPZ1c7HyJaaeG9qmrpH0o8i4plcHJOAp6pJEHg/cHlEfF3ZTXiqyet+soXlbBDzFYR1kjvJkkM1QdyV274j1TlR2Z3FlpPdQ+DIohfaA68HvhURz0S27v9NwBvSuW6IiPUAEZG/J8YngAMi4v27mRwAHoqI5ZEtuLcC+HF6reXAxFTnFGC2sqWmbyNbhmJ8zescBKzLbd8F/JOkfwQmRMSmFP82YIvSWmY2ODlBWCep9kO8guwv3LvJriBeB9wpaRjwb2R37noFWTv7sJKvvRoYL2n/fo86+4v/VbVXFX20Ofd8e257OztbAgS8I9ePMT4iflHzOpvIfSYRcR3ZVcgm4GZJJ+Xq7gP8cQ9itg7nBGGd5E6yJqMNkS1fvQEYQZYk7mTnF996ZfdE6HH0UK2IeJZsJdTLtfN+x2Mk/XlN1Z8CMyS9UNkKwaenskXAn0s6MB2bTwY/AOYA32vwX+QLgfOq/S6SphbUeYCdVxxIOhRYExFfIlsh9ZWp/EBgfUQ818B4rc05QVgnWU42eunumrLfRcT6NOLnKrKri4Vkf7n3xcfJml9WSrof+C5Qe8Ogn5PdX/gesru1fSUilkbECuBS4HZJ9wFfqDnuhhTbgrQMdCNcQnaLy2WSVqTtXaT+iAclvTQVvRO4PzVLvRy4NpWfCHyvQXFah/BqrmaDjKTTgVdFxMfr1LkJmB0RDzQvMms3HsVkNshExLeqTWFFUhPbfCcH8xWEmZkVch+EmZkVcoIwM7NCThBmZlbICcLMzAo5QZiZWaH/AVC/n89BGYiaAAAAAElFTkSuQmCC", "text/plain": [ "
" ] }, "metadata": { "needs_background": "light" }, "output_type": "display_data" } ], "source": [ "import matplotlib.pyplot as plt\n", "import numpy as np\n", "\n", "plt.title('Learning Curve')\n", "plt.xlabel('Wall Clock Time (s)')\n", "plt.ylabel('Validation Accuracy')\n", "print(len(valid_loss_history))\n", "plt.scatter(time_history, 1 - np.array(valid_loss_history))\n", "plt.step(time_history, 1 - np.array(best_valid_loss_history), where='post')\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 3. Spooky-author-identification example" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "data": { "text/html": [ "== Status ==
Current time: 2022-07-22 07:20:52 (running for 00:30:13.14)
Memory usage on this node: 19.0/376.6 GiB
Using FIFO scheduling algorithm.
Resources requested: 0/4 CPUs, 0/4 GPUs, 0.0/252.65 GiB heap, 0.0/112.27 GiB objects (0.0/1.0 accelerator_type:V100)
Current best trial: 504afb96 with val_loss=0.11011235955056176 and parameters={'learning_rate': 4.567279255636039e-05, 'num_train_epochs': 5, 'per_device_train_batch_size': 32, 'seed': 37, 'global_max_steps': 9223372036854775807, 'learner': 'transformer'}
Result logdir: /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-07-22_06-50-38
Number of trials: 12/1000000 (12 TERMINATED)

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "\u001b[2m\u001b[36m(train pid=78542)\u001b[0m {'train_runtime': 675.882, 'train_samples_per_second': 108.628, 'train_steps_per_second': 6.791, 'train_loss': 0.15587145374491324, 'epoch': 5.0}\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "\u001b[2m\u001b[36m(train pid=78542)\u001b[0m The following columns in the test set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: __index_level_0__. If __index_level_0__ are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.\n", "\u001b[2m\u001b[36m(train pid=78542)\u001b[0m ***** Running Prediction *****\n", "\u001b[2m\u001b[36m(train pid=78542)\u001b[0m Num examples = 4895\n", "\u001b[2m\u001b[36m(train pid=78542)\u001b[0m Batch size = 1\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "\u001b[2m\u001b[36m(train pid=78793)\u001b[0m {'eval_loss': 0.47973549365997314, 'eval_automl_metric': 0.12134831460674156, 'eval_runtime': 42.4421, 'eval_samples_per_second': 115.334, 'eval_steps_per_second': 115.334, 'epoch': 4.0}\n", "\u001b[2m\u001b[36m(train pid=78793)\u001b[0m {'train_runtime': 482.2078, 'train_samples_per_second': 121.806, 'train_steps_per_second': 1.908, 'train_loss': 0.23729169679724652, 'epoch': 4.0}\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "\u001b[2m\u001b[36m(train pid=78793)\u001b[0m The following columns in the test set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: __index_level_0__. If __index_level_0__ are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.\n", "\u001b[2m\u001b[36m(train pid=78793)\u001b[0m ***** Running Prediction *****\n", "\u001b[2m\u001b[36m(train pid=78793)\u001b[0m Num examples = 4895\n", "\u001b[2m\u001b[36m(train pid=78793)\u001b[0m Batch size = 1\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "\u001b[2m\u001b[36m(train pid=78225)\u001b[0m {'loss': 0.1844, 'learning_rate': 2.7020242630660653e-06, 'epoch': 3.0}\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "\u001b[2m\u001b[36m(train pid=78542)\u001b[0m Didn't find file /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-07-22_06-50-38/train_e041e7d6_10_global_max_steps=9223372036854775807,learner=transformer,learning_rate=0.0001,num_train_epochs=5,per_device_trai_2022-07-22_07-09-19/checkpoint-4590/added_tokens.json. 
We won't load it.\n", "\u001b[2m\u001b[36m(train pid=78542)\u001b[0m loading file /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-07-22_06-50-38/train_e041e7d6_10_global_max_steps=9223372036854775807,learner=transformer,learning_rate=0.0001,num_train_epochs=5,per_device_trai_2022-07-22_07-09-19/checkpoint-4590/vocab.txt\n", "\u001b[2m\u001b[36m(train pid=78542)\u001b[0m loading file /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-07-22_06-50-38/train_e041e7d6_10_global_max_steps=9223372036854775807,learner=transformer,learning_rate=0.0001,num_train_epochs=5,per_device_trai_2022-07-22_07-09-19/checkpoint-4590/tokenizer.json\n", "\u001b[2m\u001b[36m(train pid=78542)\u001b[0m loading file None\n", "\u001b[2m\u001b[36m(train pid=78542)\u001b[0m loading file /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-07-22_06-50-38/train_e041e7d6_10_global_max_steps=9223372036854775807,learner=transformer,learning_rate=0.0001,num_train_epochs=5,per_device_trai_2022-07-22_07-09-19/checkpoint-4590/special_tokens_map.json\n", "\u001b[2m\u001b[36m(train pid=78542)\u001b[0m loading file /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-07-22_06-50-38/train_e041e7d6_10_global_max_steps=9223372036854775807,learner=transformer,learning_rate=0.0001,num_train_epochs=5,per_device_trai_2022-07-22_07-09-19/checkpoint-4590/tokenizer_config.json\n", "\u001b[2m\u001b[36m(train pid=78793)\u001b[0m Didn't find file /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-07-22_06-50-38/train_567b6878_11_global_max_steps=9223372036854775807,learner=transformer,learning_rate=0.0000,num_train_epochs=4,per_device_trai_2022-07-22_07-12-37/checkpoint-920/added_tokens.json. We won't load it.\n", "\u001b[2m\u001b[36m(train pid=78793)\u001b[0m loading file /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-07-22_06-50-38/train_567b6878_11_global_max_steps=9223372036854775807,learner=transformer,learning_rate=0.0000,num_train_epochs=4,per_device_trai_2022-07-22_07-12-37/checkpoint-920/vocab.txt\n", "\u001b[2m\u001b[36m(train pid=78793)\u001b[0m loading file /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-07-22_06-50-38/train_567b6878_11_global_max_steps=9223372036854775807,learner=transformer,learning_rate=0.0000,num_train_epochs=4,per_device_trai_2022-07-22_07-12-37/checkpoint-920/tokenizer.json\n", "\u001b[2m\u001b[36m(train pid=78793)\u001b[0m loading file None\n", "\u001b[2m\u001b[36m(train pid=78793)\u001b[0m loading file /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-07-22_06-50-38/train_567b6878_11_global_max_steps=9223372036854775807,learner=transformer,learning_rate=0.0000,num_train_epochs=4,per_device_trai_2022-07-22_07-12-37/checkpoint-920/special_tokens_map.json\n", "\u001b[2m\u001b[36m(train pid=78793)\u001b[0m loading file /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-07-22_06-50-38/train_567b6878_11_global_max_steps=9223372036854775807,learner=transformer,learning_rate=0.0000,num_train_epochs=4,per_device_trai_2022-07-22_07-12-37/checkpoint-920/tokenizer_config.json\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "\u001b[2m\u001b[36m(train pid=78225)\u001b[0m {'eval_loss': 0.7813186049461365, 'eval_automl_metric': 0.13564862104187947, 'eval_runtime': 38.0461, 'eval_samples_per_second': 128.66, 'eval_steps_per_second': 128.66, 'epoch': 3.0}\n", "\u001b[2m\u001b[36m(train pid=78225)\u001b[0m {'loss': 0.0829, 
'learning_rate': 2.3353000145500413e-06, 'epoch': 3.13}\n", "\u001b[2m\u001b[36m(train pid=79025)\u001b[0m {'eval_loss': 0.5877022743225098, 'eval_automl_metric': 0.11848825331971402, 'eval_runtime': 40.6459, 'eval_samples_per_second': 120.43, 'eval_steps_per_second': 120.43, 'epoch': 4.0}\n", "\u001b[2m\u001b[36m(train pid=78225)\u001b[0m {'loss': 0.0714, 'learning_rate': 1.9685757660340173e-06, 'epoch': 3.27}\n", "\u001b[2m\u001b[36m(train pid=79025)\u001b[0m {'loss': 0.0237, 'learning_rate': 3.91430499829805e-06, 'epoch': 4.36}\n", "\u001b[2m\u001b[36m(train pid=78225)\u001b[0m {'loss': 0.0584, 'learning_rate': 1.6018515175179931e-06, 'epoch': 3.41}\n", "\u001b[2m\u001b[36m(train pid=78225)\u001b[0m {'loss': 0.0836, 'learning_rate': 1.235127269001969e-06, 'epoch': 3.54}\n", "\u001b[2m\u001b[36m(train pid=78225)\u001b[0m {'loss': 0.0335, 'learning_rate': 8.684030204859451e-07, 'epoch': 3.68}\n", "\u001b[2m\u001b[36m(train pid=79025)\u001b[0m {'eval_loss': 0.6322397589683533, 'eval_automl_metric': 0.11664964249233911, 'eval_runtime': 39.775, 'eval_samples_per_second': 123.067, 'eval_steps_per_second': 123.067, 'epoch': 5.0}\n", "\u001b[2m\u001b[36m(train pid=79025)\u001b[0m {'train_runtime': 607.6324, 'train_samples_per_second': 120.83, 'train_steps_per_second': 3.777, 'train_loss': 0.17213530145699163, 'epoch': 5.0}\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "\u001b[2m\u001b[36m(train pid=79025)\u001b[0m The following columns in the test set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: __index_level_0__. If __index_level_0__ are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.\n", "\u001b[2m\u001b[36m(train pid=79025)\u001b[0m ***** Running Prediction *****\n", "\u001b[2m\u001b[36m(train pid=79025)\u001b[0m Num examples = 4895\n", "\u001b[2m\u001b[36m(train pid=79025)\u001b[0m Batch size = 1\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "\u001b[2m\u001b[36m(train pid=78225)\u001b[0m {'loss': 0.0562, 'learning_rate': 5.01678771969921e-07, 'epoch': 3.81}\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "\u001b[2m\u001b[36m(train pid=79025)\u001b[0m Didn't find file /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-07-22_06-50-38/train_9a34cd4a_12_global_max_steps=9223372036854775807,learner=transformer,learning_rate=0.0000,num_train_epochs=5,per_device_trai_2022-07-22_07-14-32/checkpoint-2295/added_tokens.json. 
We won't load it.\n", "\u001b[2m\u001b[36m(train pid=79025)\u001b[0m loading file /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-07-22_06-50-38/train_9a34cd4a_12_global_max_steps=9223372036854775807,learner=transformer,learning_rate=0.0000,num_train_epochs=5,per_device_trai_2022-07-22_07-14-32/checkpoint-2295/vocab.txt\n", "\u001b[2m\u001b[36m(train pid=79025)\u001b[0m loading file /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-07-22_06-50-38/train_9a34cd4a_12_global_max_steps=9223372036854775807,learner=transformer,learning_rate=0.0000,num_train_epochs=5,per_device_trai_2022-07-22_07-14-32/checkpoint-2295/tokenizer.json\n", "\u001b[2m\u001b[36m(train pid=79025)\u001b[0m loading file None\n", "\u001b[2m\u001b[36m(train pid=79025)\u001b[0m loading file /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-07-22_06-50-38/train_9a34cd4a_12_global_max_steps=9223372036854775807,learner=transformer,learning_rate=0.0000,num_train_epochs=5,per_device_trai_2022-07-22_07-14-32/checkpoint-2295/special_tokens_map.json\n", "\u001b[2m\u001b[36m(train pid=79025)\u001b[0m loading file /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-07-22_06-50-38/train_9a34cd4a_12_global_max_steps=9223372036854775807,learner=transformer,learning_rate=0.0000,num_train_epochs=5,per_device_trai_2022-07-22_07-14-32/checkpoint-2295/tokenizer_config.json\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "\u001b[2m\u001b[36m(train pid=78225)\u001b[0m {'loss': 0.0527, 'learning_rate': 1.3495452345389687e-07, 'epoch': 3.95}\n", "\u001b[2m\u001b[36m(train pid=78225)\u001b[0m {'eval_loss': 0.8104404211044312, 'eval_automl_metric': 0.12625127681307458, 'eval_runtime': 37.4885, 'eval_samples_per_second': 130.573, 'eval_steps_per_second': 130.573, 'epoch': 4.0}\n", "\u001b[2m\u001b[36m(train pid=78225)\u001b[0m {'train_runtime': 1051.1951, 'train_samples_per_second': 55.875, 'train_steps_per_second': 13.969, 'train_loss': 0.2927411694969607, 'epoch': 4.0}\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "\u001b[2m\u001b[36m(train pid=78225)\u001b[0m The following columns in the test set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: __index_level_0__. If __index_level_0__ are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.\n", "\u001b[2m\u001b[36m(train pid=78225)\u001b[0m ***** Running Prediction *****\n", "\u001b[2m\u001b[36m(train pid=78225)\u001b[0m Num examples = 4895\n", "\u001b[2m\u001b[36m(train pid=78225)\u001b[0m Batch size = 1\n", "\u001b[2m\u001b[36m(train pid=78225)\u001b[0m Didn't find file /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-07-22_06-50-38/train_d0e0b7d6_9_global_max_steps=9223372036854775807,learner=transformer,learning_rate=0.0000,num_train_epochs=4,per_device_train_2022-07-22_07-08-52/checkpoint-14684/added_tokens.json. 
We won't load it.\n", "\u001b[2m\u001b[36m(train pid=78225)\u001b[0m loading file /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-07-22_06-50-38/train_d0e0b7d6_9_global_max_steps=9223372036854775807,learner=transformer,learning_rate=0.0000,num_train_epochs=4,per_device_train_2022-07-22_07-08-52/checkpoint-14684/vocab.txt\n", "\u001b[2m\u001b[36m(train pid=78225)\u001b[0m loading file /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-07-22_06-50-38/train_d0e0b7d6_9_global_max_steps=9223372036854775807,learner=transformer,learning_rate=0.0000,num_train_epochs=4,per_device_train_2022-07-22_07-08-52/checkpoint-14684/tokenizer.json\n", "\u001b[2m\u001b[36m(train pid=78225)\u001b[0m loading file None\n", "\u001b[2m\u001b[36m(train pid=78225)\u001b[0m loading file /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-07-22_06-50-38/train_d0e0b7d6_9_global_max_steps=9223372036854775807,learner=transformer,learning_rate=0.0000,num_train_epochs=4,per_device_train_2022-07-22_07-08-52/checkpoint-14684/special_tokens_map.json\n", "\u001b[2m\u001b[36m(train pid=78225)\u001b[0m loading file /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-07-22_06-50-38/train_d0e0b7d6_9_global_max_steps=9223372036854775807,learner=transformer,learning_rate=0.0000,num_train_epochs=4,per_device_train_2022-07-22_07-08-52/checkpoint-14684/tokenizer_config.json\n", "2022-07-22 07:27:20,126\tINFO tune.py:747 -- Total run time: 2201.85 seconds (1801.97 seconds for the tuning loop).\n", "[flaml.automl: 07-22 07:27:25] {3314} INFO - selected model: None\n", "/data/installation/anaconda3/envs/tmp/lib/python3.8/site-packages/transformers/optimization.py:306: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. 
Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning\n", " warnings.warn(\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "{'loss': 0.2317, 'learning_rate': 5.95732076822092e-06, 'epoch': 4.35}\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "[flaml.automl: 07-22 07:35:19] {3457} INFO - retrain transformer for 474.4s\n", "[flaml.automl: 07-22 07:35:19] {3464} INFO - retrained model: None\n", "[flaml.automl: 07-22 07:35:19] {2742} INFO - fit succeeded\n", "[flaml.automl: 07-22 07:35:19] {2743} INFO - Time taken to find the best model: 1118.247492313385\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "{'train_runtime': 463.5873, 'train_samples_per_second': 158.374, 'train_steps_per_second': 1.24, 'train_loss': 0.20362980179164722, 'epoch': 5.0}\n" ] } ], "source": [ "from flaml import AutoML\n", "import ray\n", "import pandas as pd\n", "from sklearn.model_selection import train_test_split\n", "ray.init(num_cpus=4, num_gpus=4, ignore_reinit_error=True)\n", "\n", "df = pd.read_csv('/data/xliu127/projects/hyperopt/FLAML/data/spooky-author-identification.csv')\n", "X, y = df.drop('author', axis=1), df['author']\n", "\n", "X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=123)\n", "\n", "\n", "print(len(X_train), len(X_val))\n", "automl_model = AutoML()\n", "\n", "automl_settings = {\n", " \"time_budget\": 1800,  # total tuning time budget in seconds\n", " \"task\": \"seq-classification\", \n", " \"fit_kwargs_by_estimator\": {\n", " \"transformer\": {\n", " \"output_dir\": \"data/output/\", \n", " \"model_path\": \"bert-base-uncased\", \n", " }\n", " },\n", " \"metric\": \"accuracy\",\n", " \"gpu_per_trial\": 1, \n", " \"log_file_name\": \"spooky_bert.log\", \n", " \"log_type\": \"all\", \n", " \"use_ray\": {\"local_dir\": \"data/output/\"}, # tune with Ray; the dict is forwarded to Ray Tune (local_dir sets where trial results are saved)\n", " \"n_concurrent_trials\": 4,\n", " \"keep_search_state\": True, # keep the search state in the AutoML object after fit() finishes\n", "}\n", "\n", "automl_model.fit(X_train=X_train, y_train=y_train, X_val=X_val, y_val=y_val, **automl_settings)" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "the best loss for spooky author identification: 0.11133810010214507\n" ] } ], "source": [ "print(\"the best loss for spooky author identification: {}\".format(automl_model.best_loss))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/html": [ "== Status ==
Current time: 2022-07-21 21:21:15 (running for 00:30:10.30)
Memory usage on this node: 20.5/376.6 GiB
Using FIFO scheduling algorithm.
Resources requested: 0/4 CPUs, 0/4 GPUs, 0.0/252.62 GiB heap, 0.0/112.26 GiB objects (0.0/1.0 accelerator_type:V100)
Current best trial: 84d3be85 with val_loss=0.12951991828396325 and parameters={'learning_rate': 4.486769916716146e-05, 'num_train_epochs': 4, 'per_device_train_batch_size': 8, 'seed': 28, 'global_max_steps': 9223372036854775807, 'learner': 'transformer'}
Result logdir: /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-07-21_20-51-05
Number of trials: 12/1000000 (12 TERMINATED)

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "\u001b[2m\u001b[36m(train pid=50245)\u001b[0m {'eval_loss': 0.7418951392173767, 'eval_automl_metric': 0.1284984678243105, 'eval_runtime': 37.3935, 'eval_samples_per_second': 130.905, 'eval_steps_per_second': 130.905, 'epoch': 4.0}\n", "\u001b[2m\u001b[36m(train pid=50245)\u001b[0m {'train_runtime': 565.7729, 'train_samples_per_second': 103.816, 'train_steps_per_second': 6.49, 'train_loss': 0.2802804773409642, 'epoch': 4.0}\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "\u001b[2m\u001b[36m(train pid=50245)\u001b[0m The following columns in the test set don't have a corresponding argument in `RobertaForSequenceClassification.forward` and have been ignored: __index_level_0__. If __index_level_0__ are not expected by `RobertaForSequenceClassification.forward`, you can safely ignore this message.\n", "\u001b[2m\u001b[36m(train pid=50245)\u001b[0m ***** Running Prediction *****\n", "\u001b[2m\u001b[36m(train pid=50245)\u001b[0m Num examples = 4895\n", "\u001b[2m\u001b[36m(train pid=50245)\u001b[0m Batch size = 1\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "\u001b[2m\u001b[36m(train pid=50412)\u001b[0m {'eval_loss': 1.0893423557281494, 'eval_automl_metric': 0.6024514811031665, 'eval_runtime': 39.7178, 'eval_samples_per_second': 123.245, 'eval_steps_per_second': 123.245, 'epoch': 3.0}\n", "\u001b[2m\u001b[36m(train pid=50658)\u001b[0m {'loss': 0.2369, 'learning_rate': 1.4090340380281214e-05, 'epoch': 2.72}\n", "\u001b[2m\u001b[36m(train pid=50412)\u001b[0m {'train_runtime': 566.9953, 'train_samples_per_second': 77.694, 'train_steps_per_second': 9.714, 'train_loss': 1.0928592581461545, 'epoch': 3.0}\n", "\u001b[2m\u001b[36m(train pid=49988)\u001b[0m {'eval_loss': 1.092341661453247, 'eval_automl_metric': 0.6024514811031665, 'eval_runtime': 38.0057, 'eval_samples_per_second': 128.797, 'eval_steps_per_second': 128.797, 'epoch': 3.0}\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "\u001b[2m\u001b[36m(train pid=50412)\u001b[0m The following columns in the test set don't have a corresponding argument in `RobertaForSequenceClassification.forward` and have been ignored: __index_level_0__. If __index_level_0__ are not expected by `RobertaForSequenceClassification.forward`, you can safely ignore this message.\n", "\u001b[2m\u001b[36m(train pid=50412)\u001b[0m ***** Running Prediction *****\n", "\u001b[2m\u001b[36m(train pid=50412)\u001b[0m Num examples = 4895\n", "\u001b[2m\u001b[36m(train pid=50412)\u001b[0m Batch size = 1\n", "\u001b[2m\u001b[36m(train pid=50245)\u001b[0m Didn't find file /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-07-21_20-51-05/train_60247332_10_global_max_steps=9223372036854775807,learner=transformer,learning_rate=0.0000,num_train_epochs=4,per_device_trai_2022-07-21_21-11-36/checkpoint-3672/added_tokens.json. 
We won't load it.\n", "\u001b[2m\u001b[36m(train pid=50245)\u001b[0m loading file /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-07-21_20-51-05/train_60247332_10_global_max_steps=9223372036854775807,learner=transformer,learning_rate=0.0000,num_train_epochs=4,per_device_trai_2022-07-21_21-11-36/checkpoint-3672/vocab.json\n", "\u001b[2m\u001b[36m(train pid=50245)\u001b[0m loading file /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-07-21_20-51-05/train_60247332_10_global_max_steps=9223372036854775807,learner=transformer,learning_rate=0.0000,num_train_epochs=4,per_device_trai_2022-07-21_21-11-36/checkpoint-3672/merges.txt\n", "\u001b[2m\u001b[36m(train pid=50245)\u001b[0m loading file /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-07-21_20-51-05/train_60247332_10_global_max_steps=9223372036854775807,learner=transformer,learning_rate=0.0000,num_train_epochs=4,per_device_trai_2022-07-21_21-11-36/checkpoint-3672/tokenizer.json\n", "\u001b[2m\u001b[36m(train pid=50245)\u001b[0m loading file None\n", "\u001b[2m\u001b[36m(train pid=50245)\u001b[0m loading file /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-07-21_20-51-05/train_60247332_10_global_max_steps=9223372036854775807,learner=transformer,learning_rate=0.0000,num_train_epochs=4,per_device_trai_2022-07-21_21-11-36/checkpoint-3672/special_tokens_map.json\n", "\u001b[2m\u001b[36m(train pid=50245)\u001b[0m loading file /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-07-21_20-51-05/train_60247332_10_global_max_steps=9223372036854775807,learner=transformer,learning_rate=0.0000,num_train_epochs=4,per_device_trai_2022-07-21_21-11-36/checkpoint-3672/tokenizer_config.json\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "\u001b[2m\u001b[36m(train pid=49988)\u001b[0m {'loss': 1.0896, 'learning_rate': 1.5104688589428795e-05, 'epoch': 3.13}\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "\u001b[2m\u001b[36m(train pid=50412)\u001b[0m Didn't find file /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-07-21_20-51-05/train_6861ba34_11_global_max_steps=9223372036854775807,learner=transformer,learning_rate=0.0001,num_train_epochs=3,per_device_trai_2022-07-21_21-11-51/checkpoint-3672/added_tokens.json. 
We won't load it.\n", "\u001b[2m\u001b[36m(train pid=50412)\u001b[0m loading file /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-07-21_20-51-05/train_6861ba34_11_global_max_steps=9223372036854775807,learner=transformer,learning_rate=0.0001,num_train_epochs=3,per_device_trai_2022-07-21_21-11-51/checkpoint-3672/vocab.json\n", "\u001b[2m\u001b[36m(train pid=50412)\u001b[0m loading file /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-07-21_20-51-05/train_6861ba34_11_global_max_steps=9223372036854775807,learner=transformer,learning_rate=0.0001,num_train_epochs=3,per_device_trai_2022-07-21_21-11-51/checkpoint-3672/merges.txt\n", "\u001b[2m\u001b[36m(train pid=50412)\u001b[0m loading file /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-07-21_20-51-05/train_6861ba34_11_global_max_steps=9223372036854775807,learner=transformer,learning_rate=0.0001,num_train_epochs=3,per_device_trai_2022-07-21_21-11-51/checkpoint-3672/tokenizer.json\n", "\u001b[2m\u001b[36m(train pid=50412)\u001b[0m loading file None\n", "\u001b[2m\u001b[36m(train pid=50412)\u001b[0m loading file /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-07-21_20-51-05/train_6861ba34_11_global_max_steps=9223372036854775807,learner=transformer,learning_rate=0.0001,num_train_epochs=3,per_device_trai_2022-07-21_21-11-51/checkpoint-3672/special_tokens_map.json\n", "\u001b[2m\u001b[36m(train pid=50412)\u001b[0m loading file /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-07-21_20-51-05/train_6861ba34_11_global_max_steps=9223372036854775807,learner=transformer,learning_rate=0.0001,num_train_epochs=3,per_device_trai_2022-07-21_21-11-51/checkpoint-3672/tokenizer_config.json\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "\u001b[2m\u001b[36m(train pid=50658)\u001b[0m {'loss': 0.2195, 'learning_rate': 1.2404892966371977e-05, 'epoch': 3.0}\n", "\u001b[2m\u001b[36m(train pid=49988)\u001b[0m {'loss': 1.0907, 'learning_rate': 1.2732721160184323e-05, 'epoch': 3.27}\n", "\u001b[2m\u001b[36m(train pid=50658)\u001b[0m {'loss': 0.1252, 'learning_rate': 1.0719445552462741e-05, 'epoch': 3.27}\n", "\u001b[2m\u001b[36m(train pid=49988)\u001b[0m {'loss': 1.0926, 'learning_rate': 1.0360753730939852e-05, 'epoch': 3.41}\n", "\u001b[2m\u001b[36m(train pid=50658)\u001b[0m {'loss': 0.1093, 'learning_rate': 9.033998138553504e-06, 'epoch': 3.54}\n", "\u001b[2m\u001b[36m(train pid=49988)\u001b[0m {'loss': 1.0908, 'learning_rate': 7.988786301695379e-06, 'epoch': 3.54}\n", "\u001b[2m\u001b[36m(train pid=50658)\u001b[0m {'loss': 0.1166, 'learning_rate': 7.348550724644269e-06, 'epoch': 3.81}\n", "\u001b[2m\u001b[36m(train pid=49988)\u001b[0m {'loss': 1.0899, 'learning_rate': 5.616818872450909e-06, 'epoch': 3.68}\n", "\u001b[2m\u001b[36m(train pid=49988)\u001b[0m {'loss': 1.0923, 'learning_rate': 3.244851443206437e-06, 'epoch': 3.81}\n", "\u001b[2m\u001b[36m(train pid=50658)\u001b[0m {'eval_loss': 0.7831101417541504, 'eval_automl_metric': 0.13462717058222673, 'eval_runtime': 37.9679, 'eval_samples_per_second': 128.925, 'eval_steps_per_second': 128.925, 'epoch': 4.0}\n", "\u001b[2m\u001b[36m(train pid=49988)\u001b[0m {'loss': 1.0862, 'learning_rate': 8.728840139619655e-07, 'epoch': 3.95}\n", "\u001b[2m\u001b[36m(train pid=50658)\u001b[0m {'loss': 0.1164, 'learning_rate': 5.663103310735033e-06, 'epoch': 4.08}\n", "\u001b[2m\u001b[36m(train pid=49988)\u001b[0m {'eval_loss': 1.0893481969833374, 'eval_automl_metric': 0.6024514811031665, 'eval_runtime': 36.2865, 
'eval_samples_per_second': 134.899, 'eval_steps_per_second': 134.899, 'epoch': 4.0}\n", "\u001b[2m\u001b[36m(train pid=49988)\u001b[0m {'train_runtime': 1069.9104, 'train_samples_per_second': 54.898, 'train_steps_per_second': 13.725, 'train_loss': 1.0960205283875493, 'epoch': 4.0}\n", "\u001b[2m\u001b[36m(train pid=49988)\u001b[0m \n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "\u001b[2m\u001b[36m(train pid=49988)\u001b[0m The following columns in the test set don't have a corresponding argument in `RobertaForSequenceClassification.forward` and have been ignored: __index_level_0__. If __index_level_0__ are not expected by `RobertaForSequenceClassification.forward`, you can safely ignore this message.\n", "\u001b[2m\u001b[36m(train pid=49988)\u001b[0m ***** Running Prediction *****\n", "\u001b[2m\u001b[36m(train pid=49988)\u001b[0m Num examples = 4895\n", "\u001b[2m\u001b[36m(train pid=49988)\u001b[0m Batch size = 1\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "\u001b[2m\u001b[36m(train pid=50658)\u001b[0m {'loss': 0.0542, 'learning_rate': 3.977655896825797e-06, 'epoch': 4.36}\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "\u001b[2m\u001b[36m(train pid=49988)\u001b[0m Didn't find file /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-07-21_20-51-05/train_ebe7d3ee_9_global_max_steps=9223372036854775807,learner=transformer,learning_rate=0.0001,num_train_epochs=4,per_device_train_2022-07-21_21-08-22/checkpoint-11013/added_tokens.json. We won't load it.\n", "\u001b[2m\u001b[36m(train pid=49988)\u001b[0m loading file /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-07-21_20-51-05/train_ebe7d3ee_9_global_max_steps=9223372036854775807,learner=transformer,learning_rate=0.0001,num_train_epochs=4,per_device_train_2022-07-21_21-08-22/checkpoint-11013/vocab.json\n", "\u001b[2m\u001b[36m(train pid=49988)\u001b[0m loading file /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-07-21_20-51-05/train_ebe7d3ee_9_global_max_steps=9223372036854775807,learner=transformer,learning_rate=0.0001,num_train_epochs=4,per_device_train_2022-07-21_21-08-22/checkpoint-11013/merges.txt\n", "\u001b[2m\u001b[36m(train pid=49988)\u001b[0m loading file /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-07-21_20-51-05/train_ebe7d3ee_9_global_max_steps=9223372036854775807,learner=transformer,learning_rate=0.0001,num_train_epochs=4,per_device_train_2022-07-21_21-08-22/checkpoint-11013/tokenizer.json\n", "\u001b[2m\u001b[36m(train pid=49988)\u001b[0m loading file None\n", "\u001b[2m\u001b[36m(train pid=49988)\u001b[0m loading file /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-07-21_20-51-05/train_ebe7d3ee_9_global_max_steps=9223372036854775807,learner=transformer,learning_rate=0.0001,num_train_epochs=4,per_device_train_2022-07-21_21-08-22/checkpoint-11013/special_tokens_map.json\n", "\u001b[2m\u001b[36m(train pid=49988)\u001b[0m loading file /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-07-21_20-51-05/train_ebe7d3ee_9_global_max_steps=9223372036854775807,learner=transformer,learning_rate=0.0001,num_train_epochs=4,per_device_train_2022-07-21_21-08-22/checkpoint-11013/tokenizer_config.json\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "\u001b[2m\u001b[36m(train pid=50658)\u001b[0m {'loss': 0.0618, 'learning_rate': 2.2922084829165607e-06, 'epoch': 4.63}\n", "\u001b[2m\u001b[36m(train pid=50658)\u001b[0m {'loss': 0.0494, 'learning_rate': 
6.06761069007325e-07, 'epoch': 4.9}\n", "\u001b[2m\u001b[36m(train pid=50658)\u001b[0m {'eval_loss': 0.88468998670578, 'eval_automl_metric': 0.12972420837589382, 'eval_runtime': 37.9519, 'eval_samples_per_second': 128.979, 'eval_steps_per_second': 128.979, 'epoch': 5.0}\n", "\u001b[2m\u001b[36m(train pid=50658)\u001b[0m {'train_runtime': 873.0679, 'train_samples_per_second': 84.094, 'train_steps_per_second': 10.515, 'train_loss': 0.27977710040306475, 'epoch': 5.0}\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "\u001b[2m\u001b[36m(train pid=50658)\u001b[0m The following columns in the test set don't have a corresponding argument in `RobertaForSequenceClassification.forward` and have been ignored: __index_level_0__. If __index_level_0__ are not expected by `RobertaForSequenceClassification.forward`, you can safely ignore this message.\n", "\u001b[2m\u001b[36m(train pid=50658)\u001b[0m ***** Running Prediction *****\n", "\u001b[2m\u001b[36m(train pid=50658)\u001b[0m Num examples = 4895\n", "\u001b[2m\u001b[36m(train pid=50658)\u001b[0m Batch size = 1\n", "\u001b[2m\u001b[36m(train pid=50658)\u001b[0m Didn't find file /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-07-21_20-51-05/train_bd71ed64_12_global_max_steps=9223372036854775807,learner=transformer,learning_rate=0.0000,num_train_epochs=5,per_device_trai_2022-07-21_21-14-13/checkpoint-9180/added_tokens.json. We won't load it.\n", "\u001b[2m\u001b[36m(train pid=50658)\u001b[0m loading file /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-07-21_20-51-05/train_bd71ed64_12_global_max_steps=9223372036854775807,learner=transformer,learning_rate=0.0000,num_train_epochs=5,per_device_trai_2022-07-21_21-14-13/checkpoint-9180/vocab.json\n", "\u001b[2m\u001b[36m(train pid=50658)\u001b[0m loading file /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-07-21_20-51-05/train_bd71ed64_12_global_max_steps=9223372036854775807,learner=transformer,learning_rate=0.0000,num_train_epochs=5,per_device_trai_2022-07-21_21-14-13/checkpoint-9180/merges.txt\n", "\u001b[2m\u001b[36m(train pid=50658)\u001b[0m loading file /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-07-21_20-51-05/train_bd71ed64_12_global_max_steps=9223372036854775807,learner=transformer,learning_rate=0.0000,num_train_epochs=5,per_device_trai_2022-07-21_21-14-13/checkpoint-9180/tokenizer.json\n", "\u001b[2m\u001b[36m(train pid=50658)\u001b[0m loading file None\n", "\u001b[2m\u001b[36m(train pid=50658)\u001b[0m loading file /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-07-21_20-51-05/train_bd71ed64_12_global_max_steps=9223372036854775807,learner=transformer,learning_rate=0.0000,num_train_epochs=5,per_device_trai_2022-07-21_21-14-13/checkpoint-9180/special_tokens_map.json\n", "\u001b[2m\u001b[36m(train pid=50658)\u001b[0m loading file /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-07-21_20-51-05/train_bd71ed64_12_global_max_steps=9223372036854775807,learner=transformer,learning_rate=0.0000,num_train_epochs=5,per_device_trai_2022-07-21_21-14-13/checkpoint-9180/tokenizer_config.json\n", "2022-07-21 21:29:43,228\tINFO tune.py:747 -- Total run time: 2317.81 seconds (1801.93 seconds for the tuning loop).\n", "[flaml.automl: 07-21 21:29:46] {3314} INFO - selected model: None\n", "/data/installation/anaconda3/envs/tmp/lib/python3.8/site-packages/transformers/optimization.py:306: FutureWarning: This implementation of AdamW is deprecated and will be removed in a 
future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning\n", " warnings.warn(\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "{'loss': 0.5742, 'learning_rate': 3.264882684494973e-05, 'epoch': 1.09}\n" ] } ], "source": [ "automl_settings[\"fit_kwargs_by_estimator\"][\"transformer\"][\"model_path\"] = \"roberta-base\"\n", "automl_settings[\"log_file_name\"] = \"spooky_roberta.log\"\n", "automl_model.fit(X_train=X_train, y_train=y_train,X_val=X_val, y_val=y_val, **automl_settings)" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "8\n", "8\n" ] }, { "data": { "image/png": "iVBORw0KGgoAAAANSUhEUgAAAXQAAAD4CAYAAAD8Zh1EAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjQuMywgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/MnkTPAAAACXBIWXMAAAsTAAALEwEAmpwYAAAaA0lEQVR4nO3dfXRV9Z3v8ffXyEOKCEpShQRNdDEMj0Jv5KG0V1uHgq2AWBfC1XtLO63tdaF36QwtXLuQyXWqlVntHbvoVG+rtLN8gKEUgzorVdB22TKaAMpDMBIeCgl2jCj4FCQk3/vH2Qkn6Ulyguecfc7O57VWVvb+7c05v192+GSf3/7t3zZ3R0REct85YVdARERSQ4EuIhIRCnQRkYhQoIuIRIQCXUQkIs4N640LCgq8pKQkrLcXEclJ27Zte9vdCxNtCy3QS0pKqK6uDuvtRURykpn9qatt6nIREYmIpALdzGabWa2Z1ZnZsgTbLzWzzWa208xeNLPi1FdVRES602Ogm1kesBq4FhgLLDKzsZ12+yfgV+4+ESgH7kt1RUVEpHvJnKFPAerc/YC7nwKeBOZ12mcssCVYfiHBdhERSbNkAr0IOBK3Xh+UxXsNuCFYng8MNrNhnV/IzG41s2ozq25sbDyb+oqISBdSdVH074GrzGwHcBXQALR03sndH3b3MncvKyxMOOpGRLLQxh0NzLh/C6XLnmHG/VvYuKMh7CpJAskMW2wARsatFwdl7dz9KMEZupmdB3zV3Y+nqI4iEqKNOxpYvmEXTc2xc7SG400s37ALgOsnd/6wLmFKJtCrgFFmVkosyBcC/y1+BzMrAN5x91ZgOfBIqisqIuFYVVnbHuZtmppb+O76nTzxyuGQapXbxo44n3vmjEv56/bY5eLup4ElQCWwF1jn7nvMrNzM5ga7XQ3UmtkbwEXAP6a8piISiqPHmxKWn2ppzXBNpCdJ3Snq7s8Cz3YqWxG3vB5Yn9qqiUg2GDE0n4YEoV40NJ+1354eQo2kK7pTVES6tXTWaPL75XUoy++Xx9JZo0OqkXQltLlcRCQ3tF34/O76nZxqaaVoaD5LZ43WBdEspEAXkR5dP7mo/QKoulmyl7pcJPftXAc/Hg8rh8a+71wXdo1EQqEzdMltO9fBpjugObhod+JIbB1g4oLw6pVjNu5oYFVlLUePNzFCXSo5y9w9lDcuKytzzYcun9iPx8dCvLO8AVB8Zebrk4Pe/uBjDrz9Ia1xWXCOGZcVDKLgvAHtZXvePAHAuOFDMl7HrHfxBLj2/oy8lZltc/eyRNt0hi657UR94vKWjzNbjxx2+J2POoQ5QKs7+xs/4D/fP9le9tGpFj7VP6/zP5csokCX3DakOPEZ+pCR8PVnMl+fHPTVZc/Q1ef0qUUXdlifN6mIcVMvSX+l5Kwo0CW3XbOiYx86QL/8WLkkRTcORYdGuUhum7gA5jwY6zOH2Jn5nAd1QbQXdONQdOgMXXLfxAWw7ZexZXWz9JpuHIoOBbqI6MahiFCgZ4DG+IpIJijQ00wPB5CM2rkONpfHhnMOKY5dHNb1hD5DgZ5mejhAZqw4FrvppfyhrSHXJEQfNsKxI9D6tdj6W8DaI/B8BQzq+ZGPLx98h6mlF/a4n8TJsj+gGuWSZno4gGTMu4egtdPvVWtrrDwJU0svZN4kfWpMWtu0EyeOAH5m2okQ5xLSGXqaaYxvhjwaux197df78M905bWQl+gWIYNvH890baJvc3nH+x8gtr65PLSzdJ2hp1mPY3w1U6CkypDi3pXLJ9PVtBNdlWeAAj3Nrp9cxH03TKB/XuxHXTQ0n/tumBC7IJqFH9kkh12zInaXbDzdNZs+WfgHVF0uGdDlGN+uPrI9teTMjTKSnD/vis1415e1fczPoot0kZaF004o0MOkmQJT5+IJMOHGsGsRvokLFOCZkoV/QBXoYdJMgSK5Lcv+gKoPPUzq8xSRFFKgh0kzBYpICqnLJWyaKVBEUiS3ztA1ZltEpEu5c4aup7uLiHQrdwI9E2O2P3wL3v1TbNhg3gC44FIY9OmUvHTb5FFtt6h3oDHUIhkT5emscyfQ0z1m+8O34FgdeOuZ1z1WF1tOUah3SWOoRTIi6tNZ506gp3vM9o/HnwnzNt4a+xSQgtdvm9a1T08eJRKyrqazXlVZG4lAz52Loukes52FE+2ISGp1NZ11V+W5JqlAN7PZZlZrZnVmtizB9kvM7AUz22FmO83syymvaduY7SEjAUv9mO0snGhHRFJrxND8XpWn2sYdDcy4fwuly55hxv1b2LijIaWv32OXi5nlAauBmUA9UGVmFe5eE7fb94F17v4vZjYWeBYoSWlNIb232WbhRDsiklpLZ43u0IcOnaazTqNM9N8n04c+Bahz9wMAZvYkMA+ID3QHzg+WhwBHU1K7TMrCiXZEMiHKoz46a2tXGO3NRP99MoFeBMRfjawHpnbaZyXwWzO7HRgE/E2iFzKzW4FbAS655JLe1jX9smyiHZF0i/qoj0Sun1wUStsy0X+fqouii4A17l4MfBn4VzP7i9d294fdvczdywoLe35orYikV3dnjZJamei/TybQG4CRcevFQVm8vwXWAbj7VmAgUJCKCopI+kR91Ec26fFxlCmQTKBXAaPMrNTM+gMLgYpO+xwGrgEwszHEAr0xZbUUkbQIe9RHX9L2OMqiofkYnR5HmSI99qG7+2kzWwJUAnnAI+6+x8zKgWp3rwD+Dvh/ZnYnsQuki9090ePHRSSLhDnqoy9Kd/99UneKuvuzxIYixpetiFuuAWaktmo
ikm5hjvqQ1MudW/9FJC3CGvUhqZc7t/6LiEi3FOgiIhGhQBcRiQgFuohIRCjQRUQiQoEuIhIRCnQRkYhQoIuIRIQCXUQkIhToIiIRoUAXEYkIBbqISEQo0EVEIkKBLiISEQp0EZGI6FPzoW/c0aCJ/EUksvpMoG/c0dDhUVsNx5tYvmEXgEJdJNvsXAeby+FEPQwphmtWwMQFYdcq6/WZQF9VWdvhuYkATc0tfHf9Tp545XDa37/mzfcYO/z8tL+PSM7buQ423QHNTbH1E0di66BQ70Gf6UM/erwpYfmpltaMvP/Y4eczb5I+CYj0aHP5mTBv09wUK5du9Zkz9BFD82lIEOpFQ/NZ++3pIdRIRBI6Ud+7cmnXZ87Ql84aTX6/vA5l+f3yWDprdEg1EpGEhhT3rlza9ZlAv35yEffdMIGiofkYsTPz+26YoAuiItnmmhXQL79jWb/8WLl0q890uUAs1BXgIlmu7cKnRrn0Wp8KdBHJERMXKMDPQp/pchERiToFuohIRCjQRUQiQoEuIhIRCnQRkYhQoIuIRERSgW5ms82s1szqzGxZgu0/NrNXg683zOx4ymsqIiLd6nEcupnlAauBmUA9UGVmFe5e07aPu98Zt//twOQ01FVERLqRzBn6FKDO3Q+4+yngSWBeN/svAp5IReVERCR5yQR6EXAkbr0+KPsLZnYpUAps6WL7rWZWbWbVjY2Nva2riIh0I9UXRRcC6929JdFGd3/Y3cvcvaywsDDFby0i0rclE+gNwMi49eKgLJGFqLtFRCQUyQR6FTDKzErNrD+x0K7ovJOZ/TVwAbA1tVUUEZFk9Bjo7n4aWAJUAnuBde6+x8zKzWxu3K4LgSfd3dNTVRER6U5S0+e6+7PAs53KVnRaX5m6aomISG/pTlERkYhQoIuIRIQCXUQkIhToIiIRoUAXEYkIBbqISEQo0EVEIkKBLiISEUndWCS9t3FHA6sqazl6vIkRQ/NZOms0109OOEmliEhKKNDTYOOOBpZv2EVTc2zSyYbjTSzfsAtAoS4iaaMulzRYVVnbHuZtmppbWFVZG1KNRKQvUKCnwdHjTb0qFxFJBQV6GowYmt+rchGRVFCgp8HSWaPJ75fXoSy/Xx5LZ40OqUYi0hfoomgatF341CgXEckkBXqaXD+5SAEuIhmlLhcRkYhQoIuIRIQCXUQkIhToIiIRoUAXEYkIBbqISEQo0EVEIkKBLiISEQp0EZGIUKCLiESEAl1EJCIU6CIiEaFAFxGJCAW6iEhEKNBFRCIiqUA3s9lmVmtmdWa2rIt9FphZjZntMbPHU1tNERHpSY8PuDCzPGA1MBOoB6rMrMLda+L2GQUsB2a4+7tm9ul0VVhERBJL5gx9ClDn7gfc/RTwJDCv0z7fAla7+7sA7v5WaqspIiI9SSbQi4Ajcev1QVm8vwL+ysz+YGb/YWazU1VBERFJTqqeKXouMAq4GigGfm9mE9z9ePxOZnYrcCvAJZdckqK3FhERSO4MvQEYGbdeHJTFqwcq3L3Z3Q8CbxAL+A7c/WF3L3P3ssLCwrOts4iIJJBMoFcBo8ys1Mz6AwuBik77bCR2do6ZFRDrgjmQumqKiEhPegx0dz8NLAEqgb3AOnffY2blZjY32K0SOGZmNcALwFJ3P5auSouIyF8ydw/ljcvKyry6ujqU9xYRyVVmts3dyxJt052iIiIRoUAXEYkIBbqISEQo0EVEIkKBLiISEQp0EZGIUKCLiESEAl1EJCIU6CIiEaFAFxGJCAW6iEhEKNBFRCJCgS4iEhEKdBGRiFCgi4hEhAJdRCQiFOgiIhGhQBcRiQgFuohIRCjQRUQiQoEuIhIRCnQRkYhQoIuIRIQCXUQkIhToIiIRoUAXEYkIBbqISEQo0EVEIkKBLiISEQp0EZGIUKCLiESEAl1EJCKSCnQzm21mtWZWZ2bLEmxfbGaNZvZq8PXN1FdVRES6c25PO5hZHrAamAnUA1VmVuHuNZ12XevuS9JQRxERSUIyZ+hTgDp3P+Dup4AngXnprZaIiPRWMoFeBByJW68Pyjr7qpntNLP1ZjYy0QuZ2a1mVm1m1Y2NjWdRXRER6UqqLopuAkrcfSLwHPDLRDu5+8PuXubuZYWFhSl6axERgeQCvQGIP+MuDsraufsxd/84WP058F9SUz0REUlWMoFeBYwys1Iz6w8sBCridzCz4XGrc4G9qauiiIgko8dRLu5+2syWAJVAHvCIu+8xs3Kg2t0rgDvMbC5wGngHWJzGOouISALm7qG8cVlZmVdXV4fy3iIiucrMtrl7WaJtulNURCQiFOgiIhGhQBcRiQgFuohIRCjQRUQiQoEuIhIRCnQRkYjo8caiTGpubqa+vp6TJ0+GXZWsM3DgQIqLi+nXr1/YVRGRLJVVgV5fX8/gwYMpKSnBzMKuTtZwd44dO0Z9fT2lpaVhV0dEslRWdbmcPHmSYcOGKcw7MTOGDRumTy4i0q2sCnRAYd4F/VxEpCdZF+giInJ2FOidHDp0iPHjx5/1v9+4cSM1NZ0ftyoikn4K9BQ6ffq0Al1EQpNVo1zi/cOmPdQcfS+lrzl2xPncM2dcj/udPn2am2++me3btzNu3Dh+9atfsXfvXu666y4++OADCgoKWLNmDcOHD+fqq69m0qRJvPTSS8yfP5+Kigp+97vfce+99/LrX/+ayy+/PKVtEBHpStYGephqa2v5xS9+wYwZM/jGN77B6tWr+c1vfsNTTz1FYWEha9eu5e677+aRRx4B4NSpU7TN7b5v3z6uu+46brzxxjCbICJ9UNYGejJn0ukycuRIZsyYAcAtt9zCD37wA3bv3s3MmTMBaGlpYfjwM0/du+mmm0Kpp4hIvKwN9DB1HiI4ePBgxo0bx9atWxPuP2jQoExUS0SkW7oomsDhw4fbw/vxxx9n2rRpNDY2tpc1NzezZ8+ehP928ODBvP/++xmrq4hIGwV6AqNHj2b16tWMGTOGd999l9tvv53169fzve99jyuuuIJJkybxxz/+MeG/XbhwIatWrWLy5Mns378/wzUXkb4sqx4SvXfvXsaMGRNKfXKBfj4ioodEi4j0AQp0EZGIUKCLiESEAl1EJCIU6CIiEaFAFxGJCAX6WXjxxRe57rrrPtFrrFmzhqNHj6aoRiIiCvRuuTutra0pf92WlhYFuoikXPbO5fLvy+DPu1L7mhdPgGvv73aXQ4cOMWvWLKZOncq2bduYMmUKVVVVmBnf//732yfieu+99/jKV75CXV0dX/jCF/jpT3/KOeecw29/+1vuuecePv74Yy6//HIeffRRzjvvPEpKSrjpppt47rnnuOuuu6iurubmm28mPz+frVu3smrVKjZt2kRTUxOf/exneeihh/TYORHpFZ2hJ7Bv3z5uu+02ysvLqa+v57XXXuP5559n6dKlvPnmmwC88sor/OQnP6Gmpob9+/ezYcMG3n77be69916ef/55tm/fTllZGT/60Y/aX3fYsGFs376dW265hbKyMh577DFeffVV8vPzWbJkCVVVVezevZumpiaefvrpsJovIj
kqqTN0M5sN/DOQB/zc3ROe5prZV4H1wJXuXp1on6T1cCadTpdeeinTpk3jzjvvZNGiReTl5XHRRRdx1VVXUVVVxfnnn8+UKVO47LLLAFi0aBEvvfQSAwcOpKampn3q3VOnTjF9+vT21+1umt0XXniBBx54gI8++oh33nmHcePGMWfOnPQ2VEQipcdAN7M8YDUwE6gHqsyswt1rOu03GPhfwMvpqGgmJTMdbufuEDPD3Zk5cyZPPPFEr1735MmT3HbbbVRXVzNy5EhWrlzJyZMne19xkajauQ42l8OJehhSDNesgIkLwq5V1kmmy2UKUOfuB9z9FPAkMC/Bfv8H+CEQmST6/Oc/z9q1a2lpaaGxsZHf//73TJkyBYh1uRw8eJDW1lbWrl3L5z73OaZNm8Yf/vAH6urqAPjwww954403Er52/DS7beFdUFDABx98wPr16zPQOpEcsXMdbLoDThwBPPZ90x2xcukgmUAvAo7ErdcHZe3M7DPASHd/prsXMrNbzazazKobGxt7XdlMmz9/PhMnTuSKK67gi1/8Ig888AAXX3wxAFdeeSVLlixhzJgxlJaWMn/+fAoLC1mzZg2LFi1i4sSJTJ8+nddffz3hay9evJjvfOc7TJo0iQEDBvCtb32L8ePHM2vWLK688spMNlMku20uh+amjmXNTbFy6aDH6XPN7EZgtrt/M1j/78BUd18SrJ8DbAEWu/shM3sR+Pue+tA1fW7v6ecjfdLKoUCinDJYeTyzdckCn3T63AZgZNx6cVDWZjAwHnjRzA4B04AKM0v4hiIivTKkuHflfVgygV4FjDKzUjPrDywEKto2uvsJdy9w9xJ3LwH+A5j7iUe5iIhA7AJov/yOZf3yY+XSQY+B7u6ngSVAJbAXWOfue8ys3MzmprpCYT1BKdvp5yJ91sQFMOdBGDISsNj3OQ9qlEsCWfUIuoMHDzJ48GCGDRumuyTjuDvHjh3j/fffp7S0NOzqiEiIuutDz6pb/4uLi6mvrycXRsBk2sCBAykuVp+hiHQtqwK9X79+OgMVETlLmstFRCQiFOgiIhGhQBcRiYjQRrmYWSPwp1DePHUKgLfDrkSKRa1NUWsPRK9Nak/vXOruhYk2hBboUWBm1V0NH8pVUWtT1NoD0WuT2pM66nIREYkIBbqISEQo0D+Zh8OuQBpErU1Raw9Er01qT4qoD11EJCJ0hi4iEhEKdBGRiFCg98DM8sxsh5k9HayXmtnLZlZnZmuDOeIxswHBel2wvSTUinfBzIaa2Xoze93M9prZdDO70MyeM7N9wfcLgn3NzB4M2rQzeNRgVjGzO81sj5ntNrMnzGxgrh0jM3vEzN4ys91xZb0+Jmb2tWD/fWb2tTDaEtQjUXtWBb9zO83sN2Y2NG7b8qA9tWY2K658dlBWZ2bLMtyMDhK1KW7b35mZm1lBsB7eMXJ3fXXzBdwFPA48HayvAxYGyz8D/mewfBvws2B5IbA27Lp30Z5fAt8MlvsDQ4EHgGVB2TLgh8Hyl4F/B4zYk6heDrv+ndpSBBwE8uOOzeJcO0bAfwU+A+yOK+vVMQEuBA4E3y8Ili/IovZ8CTg3WP5hXHvGAq8BA4BSYD+QF3ztBy4Lfk9fA8Zm0zEKykcSe1bEn4CCsI9R6L/M2fxF7HF7m4EvAk8HB+jtuF/M6UBlsFwJTA+Wzw32s7Db0Kk9Q4IAtE7ltcDwYHk4UBssPwQsSrRfNnxx5gHmFwY/86eBWbl4jICSTgHYq2MCLAIeiivvsF/Y7em0bT7wWLC8HFget60yOGbtxy3RftnSJmA9cAVwKC7QQztG6nLp3v8Fvgu0BuvDgOMee4oTQD2xUIEz4UKw/USwfzYpBRqBR4NupJ+b2SDgInd/M9jnz8BFwXJ7mwLx7Q2duzcA/wQcBt4k9jPfRm4foza9PSZZfaw6+QaxM1jI4faY2Tygwd1f67QptDYp0LtgZtcBb7n7trDrkkLnEvvY+C/uPhn4kNjH+XYeO3XIibGsQb/yPGJ/qEYAg4DZoVYqDXLpmPTEzO4GTgOPhV2XT8LMPgX8byCrHmyqQO/aDGCumR0CniTW7fLPwFAza3swSDHQECw3EOtPI9g+BDiWyQonoR6od/eXg/X1xAL+P81sOEDw/a1ge3ubAvHtzQZ/Axx090Z3bwY2EDtuuXyM2vT2mGT7scLMFgPXATcHf6Qgd9tzObETideCjCgGtpvZxYTYJgV6F9x9ubsXu3sJsQtoW9z9ZuAF4MZgt68BTwXLFcE6wfYtcb+0WcHd/wwcMbPRQdE1QA0d6965Tf8juGo/DTgR1w2QDQ4D08zsU2ZmnGlPzh6jOL09JpXAl8zsguCTy5eCsqxgZrOJdV/OdfeP4jZVAAuDEUilwCjgFaAKGBWMWOpP7P9gRabr3RV33+Xun3b3kiAj6oHPBP/HwjtGYV5kyJUv4GrOjHK5jNgvXB3wb8CAoHxgsF4XbL8s7Hp30ZZJQDWwE9hI7Gr7MGIXf/cBzwMXBvsasJrYaINdQFnY9U/Qnn8AXgd2A/9KbLRETh0j4Ali1wCaiQXD357NMSHWN10XfH09y9pTR6z/+NXg62dx+98dtKcWuDau/MvAG8G2u7PtGHXafogzF0VDO0a69V9EJCLU5SIiEhEKdBGRiFCgi4hEhAJdRCQiFOgiIhGhQBcRiQgFuohIRPx/cNXA2tMFSnEAAAAASUVORK5CYII=\n", "text/plain": [ "
" ] }, "metadata": { "needs_background": "light" }, "output_type": "display_data" } ], "source": [ "from flaml.data import get_output_from_log\n", "import matplotlib.pyplot as plt\n", "import numpy as np\n", "\n", "for each_file_name in ['bert', 'roberta']:\n", " time_history, best_valid_loss_history, valid_loss_history, config_history, metric_history = \\\n", " get_output_from_log(filename='spooky_' + each_file_name + '.log', time_budget=3000)\n", " print(len(valid_loss_history))\n", " plt.scatter(time_history, 1 - np.array(valid_loss_history))\n", " plt.step(time_history, 1 - np.array(best_valid_loss_history), where='post')\n", "\n", "plt.legend(['bert', 'roberta'])\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 4. Other Tasks" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Besides sequence classification, FLAML currently also supports four other tasks (more tasks are to be supported, which can be found on FLAML's documentation website https://microsoft.github.io/FLAML/docs/Examples/AutoML-NLP):\n", "\n", "- sequence regression: predicting a float number from the input sequence, e.g., predicting the rating of a hotel review based on the text content;\n", "- token classification: predicting the label of each token in a sequence, e.g., named entity recognition;\n", "- multiple choice: predicting the best second half of a sentence that comes next to the first part of a sentence based on common sensen reasoning. An example is seen below;\n", "- (abstractive) summarization: generating the textual summarization of an input paragraph;\n", "\n", "For each task, you only have to change the \"Load data and preprocess\" with the corresponding data loading process. For example:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 4.1 Multiple Choice Example" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Multiple choice is a task of predicting the best second half of a sentence that follows the first half based on common sense reasoning. An example of multiple-choice classification problem is:\n", "\n", "On stage, a woman takes a seat at the piano. She\n", "a) sits on a bench as her sister plays with the doll.\n", "b) smiles with someone as the music plays.\n", "c) is in the crowd, watching the dancers.\n", "d) *nervously sets her fingers on the keys*." 
] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "No config specified, defaulting to: swag/regular\n", "Reusing dataset swag (/home/xliu127/.cache/huggingface/datasets/swag/regular/0.0.0/9640de08cdba6a1469ed3834fcab4b8ad8e38caf5d1ba5e7436d8b1fd067ad4c)\n", "No config specified, defaulting to: swag/regular\n", "Reusing dataset swag (/home/xliu127/.cache/huggingface/datasets/swag/regular/0.0.0/9640de08cdba6a1469ed3834fcab4b8ad8e38caf5d1ba5e7436d8b1fd067ad4c)\n", "No config specified, defaulting to: swag/regular\n", "Reusing dataset swag (/home/xliu127/.cache/huggingface/datasets/swag/regular/0.0.0/9640de08cdba6a1469ed3834fcab4b8ad8e38caf5d1ba5e7436d8b1fd067ad4c)\n" ] } ], "source": [ "from datasets import load_dataset\n", "\n", "train_dataset = load_dataset(\"swag\", split=\"train\").to_pandas().iloc[:10000]\n", "dev_dataset = load_dataset(\"swag\", split=\"validation\").to_pandas().iloc[:10000]\n", "test_dataset = load_dataset(\"swag\", split=\"test\").to_pandas()\n", "\n", "custom_sent_keys = [\n", " \"sent1\",\n", " \"sent2\",\n", " \"ending0\",\n", " \"ending1\",\n", " \"ending2\",\n", " \"ending3\",\n", " \"gold-source\",\n", " \"video-id\",\n", " \"startphrase\",\n", " \"fold-ind\",\n", " ] # specify the column names of the input sentences\n", "label_key = \"label\" # specify the column name of the label\n", "\n", "X_train, y_train = train_dataset[custom_sent_keys], train_dataset[label_key]\n", "X_val, y_val = dev_dataset[custom_sent_keys], dev_dataset[label_key]\n", "X_test = test_dataset[custom_sent_keys]" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'Members of the procession walk down the street holding small horn brass instruments.'" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "train_dataset.iloc[0][\"sent1\"]" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [ { "data": { "text/html": [ "== Status ==
Current time: 2022-03-19 14:39:29 (running for 00:08:29.94)
Memory usage on this node: 33.0/376.6 GiB
Using FIFO scheduling algorithm.
Resources requested: 0/96 CPUs, 0/4 GPUs, 0.0/250.17 GiB heap, 0.0/111.21 GiB objects (0.0/1.0 accelerator_type:V100)
Current best trial: de45e672 with val_loss=0.18300000000000005 and parameters={'learning_rate': 6.104513714676502e-06, 'num_train_epochs': 2.3743291981165893, 'per_device_train_batch_size': 8, 'warmup_ratio': 0.23610846764298543, 'weight_decay': 0.20205904544254147, 'adam_epsilon': 5.752964074991208e-08, 'seed': 41, 'global_max_steps': 9223372036854775807, 'learner': 'transformer'}
Result logdir: /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-03-19_14-30-59
Number of trials: 10/1000000 (10 TERMINATED)

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "\u001b[2m\u001b[36m(train pid=86157)\u001b[0m {'eval_loss': 0.6315866112709045, 'eval_automl_metric': 0.18779999999999997, 'eval_runtime': 15.4883, 'eval_samples_per_second': 645.648, 'eval_steps_per_second': 40.353, 'epoch': 1.66}\n", "\u001b[2m\u001b[36m(train pid=86157)\u001b[0m {'train_runtime': 190.7625, 'train_samples_per_second': 87.254, 'train_steps_per_second': 10.909, 'train_loss': 0.5091343906738046, 'epoch': 1.66}\n", "\u001b[2m\u001b[36m(train pid=86249)\u001b[0m {'eval_loss': 1.2118068933486938, 'eval_automl_metric': 0.2015, 'eval_runtime': 15.2585, 'eval_samples_per_second': 655.374, 'eval_steps_per_second': 40.961, 'epoch': 2.87}\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "\u001b[2m\u001b[36m(train pid=86157)\u001b[0m Using amp half precision backend\n", "\u001b[2m\u001b[36m(train pid=86157)\u001b[0m The following columns in the test set don't have a corresponding argument in `RobertaForMultipleChoice.forward` and have been ignored: ending3, ending1, video-id, sent1, ending0, sent2, fold-ind, ending2, startphrase, gold-source. If ending3, ending1, video-id, sent1, ending0, sent2, fold-ind, ending2, startphrase, gold-source are not expected by `RobertaForMultipleChoice.forward`, you can safely ignore this message.\n", "\u001b[2m\u001b[36m(train pid=86157)\u001b[0m ***** Running Prediction *****\n", "\u001b[2m\u001b[36m(train pid=86157)\u001b[0m Num examples = 10000\n", "\u001b[2m\u001b[36m(train pid=86157)\u001b[0m Batch size = 16\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "\u001b[2m\u001b[36m(train pid=86249)\u001b[0m {'eval_loss': 1.2118068933486938, 'eval_automl_metric': 0.2015, 'eval_runtime': 15.1369, 'eval_samples_per_second': 660.639, 'eval_steps_per_second': 41.29, 'epoch': 2.87}\n", "\u001b[2m\u001b[36m(train pid=86249)\u001b[0m {'train_runtime': 546.3809, 'train_samples_per_second': 156.658, 'train_steps_per_second': 39.165, 'train_loss': 0.5030154804349909, 'epoch': 2.87}\n", "\u001b[2m\u001b[36m(train pid=86195)\u001b[0m {'loss': 0.4854, 'learning_rate': 1.3592147782116173e-06, 'epoch': 2.0}\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "\u001b[2m\u001b[36m(train pid=86249)\u001b[0m Using amp half precision backend\n", "\u001b[2m\u001b[36m(train pid=86249)\u001b[0m The following columns in the test set don't have a corresponding argument in `RobertaForMultipleChoice.forward` and have been ignored: fold-ind, sent2, gold-source, ending1, startphrase, sent1, ending0, video-id, ending2, ending3. If fold-ind, sent2, gold-source, ending1, startphrase, sent1, ending0, video-id, ending2, ending3 are not expected by `RobertaForMultipleChoice.forward`, you can safely ignore this message.\n", "\u001b[2m\u001b[36m(train pid=86249)\u001b[0m ***** Running Prediction *****\n", "\u001b[2m\u001b[36m(train pid=86249)\u001b[0m Num examples = 10000\n", "\u001b[2m\u001b[36m(train pid=86249)\u001b[0m Batch size = 16\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "\u001b[2m\u001b[36m(train pid=86195)\u001b[0m {'eval_loss': 0.49709731340408325, 'eval_automl_metric': 0.17600000000000005, 'eval_runtime': 15.4983, 'eval_samples_per_second': 645.232, 'eval_steps_per_second': 40.327, 'epoch': 2.0}\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "2022-03-19 14:41:56,719\tWARNING ray_trial_executor.py:146 -- Skipping cleanup - trainable.stop did not return in time. 
Consider making `stop` a faster operation.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "\u001b[2m\u001b[36m(train pid=86195)\u001b[0m {'eval_loss': 0.5254333019256592, 'eval_automl_metric': 0.17800000000000005, 'eval_runtime': 15.45, 'eval_samples_per_second': 647.251, 'eval_steps_per_second': 40.453, 'epoch': 3.0}\n", "\u001b[2m\u001b[36m(train pid=86195)\u001b[0m {'loss': 0.3989, 'learning_rate': 3.8051750127352887e-07, 'epoch': 3.0}\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "2022-03-19 14:42:56,729\tWARNING ray_trial_executor.py:146 -- Skipping cleanup - trainable.stop did not return in time. Consider making `stop` a faster operation.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "\u001b[2m\u001b[36m(train pid=86195)\u001b[0m {'eval_loss': 0.5254867076873779, 'eval_automl_metric': 0.17789999999999995, 'eval_runtime': 15.424, 'eval_samples_per_second': 648.341, 'eval_steps_per_second': 40.521, 'epoch': 3.0}\n", "\u001b[2m\u001b[36m(train pid=86195)\u001b[0m {'eval_loss': 0.5332269072532654, 'eval_automl_metric': 0.17830000000000001, 'eval_runtime': 15.4452, 'eval_samples_per_second': 647.45, 'eval_steps_per_second': 40.466, 'epoch': 3.39}\n", "\u001b[2m\u001b[36m(train pid=86195)\u001b[0m {'train_runtime': 382.2827, 'train_samples_per_second': 88.597, 'train_steps_per_second': 11.076, 'train_loss': 0.5299136270370808, 'epoch': 3.39}\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "2022-03-19 14:43:56,739\tWARNING ray_trial_executor.py:146 -- Skipping cleanup - trainable.stop did not return in time. Consider making `stop` a faster operation.\n", "\u001b[2m\u001b[36m(train pid=86195)\u001b[0m Using amp half precision backend\n", "\u001b[2m\u001b[36m(train pid=86195)\u001b[0m The following columns in the test set don't have a corresponding argument in `RobertaForMultipleChoice.forward` and have been ignored: ending2, sent1, ending0, sent2, ending3, video-id, gold-source, ending1, startphrase, fold-ind. If ending2, sent1, ending0, sent2, ending3, video-id, gold-source, ending1, startphrase, fold-ind are not expected by `RobertaForMultipleChoice.forward`, you can safely ignore this message.\n", "\u001b[2m\u001b[36m(train pid=86195)\u001b[0m ***** Running Prediction *****\n", "\u001b[2m\u001b[36m(train pid=86195)\u001b[0m Num examples = 10000\n", "\u001b[2m\u001b[36m(train pid=86195)\u001b[0m Batch size = 16\n", "2022-03-19 14:44:14,271\tINFO tune.py:639 -- Total run time: 795.18 seconds (504.18 seconds for the tuning loop).\n", "[flaml.automl: 03-19 14:44:19] {2837} INFO - selected model: None\n", "/data/installation/anaconda3/envs/tmp/lib/python3.8/site-packages/transformers/optimization.py:306: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. 
Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning\n", " warnings.warn(\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "{'loss': 0.6603, 'learning_rate': 4.631567529441369e-06, 'epoch': 1.0}\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "[flaml.automl: 03-19 14:46:08] {2947} INFO - retrain transformer for 109.2s\n", "[flaml.automl: 03-19 14:46:08] {2954} INFO - retrained model: None\n", "[flaml.automl: 03-19 14:46:08] {2283} INFO - fit succeeded\n", "[flaml.automl: 03-19 14:46:08] {2284} INFO - Time taken to find the best model: 319.927033662796\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "{'train_runtime': 96.899, 'train_samples_per_second': 245.031, 'train_steps_per_second': 30.63, 'train_loss': 0.6602518278346073, 'epoch': 1.0}\n" ] } ], "source": [ "''' import AutoML class from flaml package '''\n", "from flaml import AutoML\n", "automl = AutoML()\n", "\n", "import ray\n", "if not ray.is_initialized():\n", " ray.init()\n", "\n", "automl_settings = {\n", " \"time_budget\": 500, # setting the time budget\n", " \"task\": \"multichoice-classification\", # setting the task as multichoice-classification\n", " \"fit_kwargs_by_estimator\": { # if model_path is not set, the default model is facebook/muppet-roberta-base: https://huggingface.co/facebook/muppet-roberta-base\n", " \"transformer\": {\n", " \"output_dir\": \"data/output/\", # setting the output directory\n", " \"per_device_eval_batch_size\": 16, # the batch size for validation (inference)\n", " }\n", " },\n", " \"gpu_per_trial\": 1, # set to 0 if no GPU is available\n", " \"log_file_name\": \"seqclass.log\", # set the file to save the log for HPO\n", " \"log_type\": \"all\", # the log type for trials: \"all\" if logging all the trials, \"better\" if only keeping the better trials\n", " \"use_ray\": {\"local_dir\": \"data/output/\"}, # set whether to use Ray\n", " \"n_concurrent_trials\": 4\n", "}\n", "\n", "'''The main flaml automl API'''\n", "automl.fit(X_train=X_train, y_train=y_train, X_val=X_val, y_val=y_val, **automl_settings)" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "{'Current Learner': 'transformer', 'Current Sample': 10000, 'Current Hyper-parameters': {'learning_rate': 0.00021956991427751982, 'num_train_epochs': 0.3549576494055084, 'per_device_train_batch_size': 8, 'warmup_ratio': 0.07425273520338253, 'weight_decay': 0.03879221030529465, 'adam_epsilon': 3.7880482987985576e-08, 'seed': 43, 'global_max_steps': 444, 'learner': 'transformer'}, 'Best Learner': 'transformer', 'Best Hyper-parameters': {'learning_rate': 0.00021956991427751982, 'num_train_epochs': 0.3549576494055084, 'per_device_train_batch_size': 8, 'warmup_ratio': 0.07425273520338253, 'weight_decay': 0.03879221030529465, 'adam_epsilon': 3.7880482987985576e-08, 'seed': 43, 'global_max_steps': 444, 'learner': 'transformer'}}\n", "{'Current Learner': 'transformer', 'Current Sample': 10000, 'Current Hyper-parameters': {'learning_rate': 1.0000000000000003e-05, 'num_train_epochs': 1.0, 'per_device_train_batch_size': 32, 'warmup_ratio': 0.0, 'weight_decay': 0.0, 'adam_epsilon': 1e-06, 'seed': 
42, 'global_max_steps': 313, 'learner': 'transformer'}}\n", "{'Current Learner': 'transformer', 'Current Sample': 10000, 'Current Hyper-parameters': {'learning_rate': 1.3241899893349513e-06, 'num_train_epochs': 0.4379128434860086, 'per_device_train_batch_size': 16, 'warmup_ratio': 0.257055208282222, 'weight_decay': 0.012652183020312091, 'adam_epsilon': 1.0189125195705357e-07, 'seed': 43, 'global_max_steps': 274, 'learner': 'transformer'}, 'Best Learner': 'transformer', 'Best Hyper-parameters': {'learning_rate': 1.0000000000000003e-05, 'num_train_epochs': 1.0, 'per_device_train_batch_size': 32, 'warmup_ratio': 0.0, 'weight_decay': 0.0, 'adam_epsilon': 1e-06, 'seed': 42, 'global_max_steps': 313, 'learner': 'transformer'}}\n", "{'Current Learner': 'transformer', 'Current Sample': 10000, 'Current Hyper-parameters': {'learning_rate': 0.0002562922748967212, 'num_train_epochs': 0.1802995999606059, 'per_device_train_batch_size': 4, 'warmup_ratio': 0.1809477882684876, 'weight_decay': 0.10305626005953175, 'adam_epsilon': 5.536776887412208e-08, 'seed': 42, 'global_max_steps': 451, 'learner': 'transformer'}, 'Best Learner': 'transformer', 'Best Hyper-parameters': {'learning_rate': 1.0000000000000003e-05, 'num_train_epochs': 1.0, 'per_device_train_batch_size': 32, 'warmup_ratio': 0.0, 'weight_decay': 0.0, 'adam_epsilon': 1e-06, 'seed': 42, 'global_max_steps': 313, 'learner': 'transformer'}}\n", "{'Current Learner': 'transformer', 'Current Sample': 10000, 'Current Hyper-parameters': {'learning_rate': 6.104513714676502e-06, 'num_train_epochs': 2.3743291981165893, 'per_device_train_batch_size': 8, 'warmup_ratio': 0.23610846764298543, 'weight_decay': 0.20205904544254147, 'adam_epsilon': 5.752964074991208e-08, 'seed': 41, 'global_max_steps': 1251, 'learner': 'transformer'}, 'Best Learner': 'transformer', 'Best Hyper-parameters': {'learning_rate': 6.104513714676502e-06, 'num_train_epochs': 2.3743291981165893, 'per_device_train_batch_size': 8, 'warmup_ratio': 0.23610846764298543, 'weight_decay': 0.20205904544254147, 'adam_epsilon': 5.752964074991208e-08, 'seed': 41, 'global_max_steps': 1251, 'learner': 'transformer'}}\n", "{'Current Learner': 'transformer', 'Current Sample': 10000, 'Current Hyper-parameters': {'learning_rate': 9.306519250357542e-06, 'num_train_epochs': 0.4664878701006166, 'per_device_train_batch_size': 32, 'warmup_ratio': 0.0, 'weight_decay': 0.0, 'adam_epsilon': 5.931759315303309e-07, 'seed': 43, 'global_max_steps': 147, 'learner': 'transformer'}, 'Best Learner': 'transformer', 'Best Hyper-parameters': {'learning_rate': 6.104513714676502e-06, 'num_train_epochs': 2.3743291981165893, 'per_device_train_batch_size': 8, 'warmup_ratio': 0.23610846764298543, 'weight_decay': 0.20205904544254147, 'adam_epsilon': 5.752964074991208e-08, 'seed': 41, 'global_max_steps': 1251, 'learner': 'transformer'}}\n", "6\n" ] }, { "data": { "image/png": 
"iVBORw0KGgoAAAANSUhEUgAAAYIAAAEWCAYAAABrDZDcAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjUuMSwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/YYfK9AAAACXBIWXMAAAsTAAALEwEAmpwYAAAd8klEQVR4nO3df9wVZZ3/8dfbWxQqFY27VhGEEinNEiN/9GMzy8XcFDIzbLc126S2NMuiZEtjbd0yV3vUY9l8oF/zxzdDZc3IKHLTrBQF/IlgGCkqaIo/UFIUwc/+MdfB8XjuwwBnzrnve97Px+M8mLnmOjOfM8x9Pmeua+YaRQRmZlZdW3U6ADMz6ywnAjOzinMiMDOrOCcCM7OKcyIwM6s4JwIzs4pzIjBrQtJ7JC3pdBxmZXIisF5L0jJJH+hkDBHx+4gYXdb6JY2T9DtJqyWtlHS9pCPK2p5ZI04EVmmSujq47aOAK4CLgV2B1wOnAYdvxrokyX/Ptll84FifI2krSadI+rOkxyVdLmmn3PIrJP1F0lPp1/ZeuWUXSvqhpNmSngHel848viLpzvSeyyQNTPUPkrQ89/4e66blX5X0sKSHJH1aUkjavcFnEHAO8K2IOD8inoqIFyPi+og4PtWZKun/594zIq1v6zT/W0lnSLoBeBaYLGlB3Xa+JGlWmt5W0n9KekDSI5LOlTRoC/87rB9wIrC+6ERgAvBeYBfgSWBabvkvgVHA64BbgR/Xvf/jwBnAdsAfUtnRwKHASOCtwCebbL9hXUmHAicDHwB2Bw5qso7RwDBgZpM6RXwCmET2Wc4FRksalVv+ceDSNP0dYA9gnxTfULIzEKs4JwLriz4LfD0ilkfE88BU4KjaL+WIuCAiVueWvU3SDrn3/ywibki/wJ9LZT+IiIci4gng52Rflj3pqe7RwI8iYlFEPJu23ZPXpn8fLvaRe3Rh2t66iHgK+BlwDEBKCG8CZqUzkEnAlyLiiYhYDfwHMHELt2/9gBOB9UW7AT+VtErSKuBuYD3wekldkr6Tmo2eBpal9wzJvf/BBuv8S276WeA1TbbfU91d6tbdaDs1j6d/d25Sp4j6bVxKSgRkZwNXpaTUDbwKuCW3336Vyq3inAisL3oQ+GBEDM69BkbECrIvv/FkzTM7ACPSe5R7f1lD7j5M1ulbM6xJ3SVkn+MjTeo8Q/blXfM3DerUf5ZrgG5J+5AlhFqz0GPAGmCv3D7bISKaJTyrCCcC6+0GSBqYe21N1hZ+hqTdACR1Sxqf6m8HPE/2i/tVZM0f7XI5cJykN0t6FXBqTxUjG//9ZOBUScdJ2j51gr9b0vRU7XbgbyUNT01bUzYWQES8QHYl0lnATmSJgYh4ETgP+J6k1wFIGipp3OZ+WOs/nAist5tN9ku29poKfB+YBfxa0mrgJmD/VP9i4H5gBbA4LWuLiPgl8APgOmBpbtvP91B/JvAx4FPAQ8AjwL+TtfMTEdcAlwF3ArcAVxcM5VKyM6IrImJdrvxrtbhSs9n/knVaW8XJD6YxK4ekNwN3AdvWfSGb9So+IzBrIUkfTtfr7wicCfzcScB6OycCs9b6DPAo8GeyK5n+pbPhmG2cm4bMzCrOZwRmZhW3dacD2FRDhgyJESNGdDoMM7M+5ZZbbnksIhreQNjnEsGIESNYsGDBxiuamdkGku7vaZmbhszMKs6JwMys4pwIzMwqzonAzKzinAjMzCquz101ZGa2Oa66bQVnzVnCQ6vWsMvgQUweN5oJY4Z2OqxewYmgBXyAmfVuV922gilXLmTNC+sBWLFqDVOuXAjgv1WcCLaYDzCz3u+sOUs2/I3WrHlhPV+deSc/mfdAh6LadHvusj3fPHyvlq/XiWAL9ZcDzKw/W7FqTcPytetfbHMkvZMTwRZ6yAeYWa+3TddWDf8mhw4exGWfObADEW2aWvPzvPue4NeLHml587MTwRbaZfCghr82+soBZlYF9U24AIMGdDF5XO9/QFs7mp9LvXxU0qGSlkhaKumUBsuHS7pO0m2S7pR0WJnxlGHyuNEMGtD1srK+coCZVcWEMUP59pF7M3TwIET2Q+3bR+7dJ/rxemp+PmvOkpZto7QzAkldwDTgEGA5MF/SrIhYnKv2DeDyiPihpD3Jnk87oqyYylA7kL46807Wrn+Rob5qyKxXmjBmaJ/8u+yp+bmn8s1RZtPQfsDSiLgXQNIMYDzZA8VrAtg+Te9A9gDvPmfCmKEbOobdHGRmrdRT8/Mugwe1bBtlNg0NBR7MzS9PZXlTgX+UtJzsbODERiuSNEnSAkkLVq5cWUasZma9Ujuanzs9xMQxwIURsStwGHCJpFfEFBHTI2JsRIzt7m74XAUzs36pHf0bZTYNrQCG5eZ3TWV5/wwcChARcyUNBIaQPfzbzMwov3+jzDOC+cAoSSMlbQNMBGbV1XkAeD+ApDcDAwG3/ZiZtVFpiSAi1gEnAHOAu8muDlok6XRJR6RqXwaOl3QH8BPgkxERZcVkZmavVOoNZRExm6wTOF92Wm56MfCuMmMwazcPQmh9je8sNmshD0JofVGnrxoy61facReoWas5EZi1UDvuAjVrNScCsxbq6W7PVt4FatZqTgRmLeRBCK0vcmexWQvVOoR91ZD1JU4EZi3WV0e5tOpy05CZWcU5EZiZVZwTgZlZxTkRmJlVnBOBmVnFORGYmVWcE4GZWcU5EZiZVZwTgZlZxTkRmJlVnBOBmVnFORGYmVWcE4GZWcU5EZiZVZwTgZlZxTkRmJlVnBOBmVnFORGYmVWcE4GZWcWVmggkHSppiaSlkk5psPx7km5Pr3skrSozHjMze6XSHl4vqQuYBhwCLAfmS5oVEYtrdSLiS7n6JwJjyorHzMwaK/OMYD9gaUTcGxFrgRnA+Cb1jwF+UmI8ZmbWQJmJYCjwYG5+eSp7BUm7ASOBa3tYPknSAkkLVq5c2fJAzcyqrLd0Fk8EZkbE+kYLI2J6RIyNiLHd3d1tDs3MrH8rMxGsAIbl5ndNZY1MxM1CZmYdUWYimA+MkjRS0jZkX/az6itJehOwIzC3xFjMzKwHpSWCiFgHnADMAe4GLo+IRZJOl3RErupEYEZERFmxmJlZz0q7fBQgImYDs+vKTqubn1pmDGZm1lxv6Sw2M7MOcSIwM6s4JwIzs4pzIjAzq7iNJgJJr21HIGZm1hlFzghuknSFpMMkqfSIzMysrYokgj2A6cAngD9J+g9Je5QblpmZtctGE0FkromIY4DjgWOBeZKul3Rg6RGamVmpNnpDWeoj+EeyM4JHgBPJhorYB7iCbNRQMzPro4rcWTwXuASYEBHLc+ULJJ1bTlhmZtYuRRLB6J7GAYqIM1scj5mZtVmRzuJfSxpcm5G0o6Q55YVkZmbtVCQRdEfEqtpMRDwJvK60iMzMrK2KJIL1kobXZtJjJT1ktJlZP1Gkj+DrwB8kXQ8IeA8wqdSozMysbTaaCCLiV5L2BQ5IRV+MiMfKDcvMzNql6INp1gOPAgOBPSUREb8rLywzM2uXIjeUfRo4iezh87eTnRnMBQ4uNTIzM2
uLIp3FJwHvAO6PiPcBY4BVZQZlZmbtUyQRPBcRzwFI2jYi/giMLjcsMzNrlyJ9BMvTDWVXAddIehK4v8ygzMysfYpcNfThNDlV0nXADsCvSo3KzMzapmkikNQFLIqINwFExPVticrMzNqmaR9BRKwHluTvLDYzs/6lSB/BjsAiSfOAZ2qFEXFEaVGZmVnbFEkEp27uyiUdCnwf6ALOj4jvNKhzNDCVbPyiOyLi45u7PTMz23RFOos3q18g9S9MAw4BlgPzJc2KiMW5OqOAKcC7IuJJSR7V1MyszTZ6H4Gk1ZKeTq/nJK2X9HSBde8HLI2IeyNiLTADGF9X53hgWhramoh4dFM/gJmZbZkiZwTb1aYliezL/ICe37HBUODB3PxyYP+6Onuk9d5A1nw0NSJecWmqpEmkEU+HD3e/tZlZKxW5s3iDyFwFjGvR9rcGRgEHAccA5+Wfhpbb7vSIGBsRY7u7u1u0aTMzg2KDzh2Zm90KGAs8V2DdK4BhufldU1necuDmiHgBuE/SPWSJYX6B9ZuZWQsUuWro8Nz0OmAZr2zrb2Q+MErSSLIEMBGovyLoKrIzgR9JGkLWVHRvgXWbmVmLFOkjOG5zVhwR6ySdAMwha/+/ICIWSTodWBARs9Kyv5O0mOyZB5Mj4vHN2Z6ZmW2eIk1DFwEn1R5gL2lH4OyI+NTG3hsRs4HZdWWn5aYDODm9zMysA4p0Fr+1lgQA0qWeY0qLyMzM2qpIItgqnQUAIGknij/i0szMerkiX+hnA3MlXZHmPwqcUV5IZmbWTkU6iy+WtICXnlF8ZH6YCDMz69uKdBYfQPZMgv9K89tL2j8ibi49OjMzK12RPoIfAn/Nzf81lZmZWT9QJBEoXeYJQES8iDuLzcz6jSKJ4F5JX5A0IL1Ownf/mpn1G0USwWeBd5INE1EbQfT4MoMyM7P2KXLV0KNk4wQBIGkQ8CHgih7fZGZmfUahYagldUk6TNIlwH3Ax8oNy8zM2qXpGYGk95KNGHoYMA94F/CGiHi2DbGZmVkb9JgIJC0HHiC7VPQrEbFa0n1OAmZm/UuzpqGZwC5kzUCHS3o1EE3qm5lZH9RjIoiILwIjycYaOghYAnRLOlrSa9oSnZmZla5pZ3F6RvF1ETGJLCkcQ/Z0smVtiM3MzNqg8B3C6bnCVwNXp0tIzcysHyh0+Wi9iFjT6kDMzKwzNisRmJlZ/+FEYGZWcUWeR7AHMBnYLV8/Ig7u8U1mZtZnFOksvgI4FzgPWF9uOGZm1m5FEsG6iPCDaMzM+qkifQQ/l/Q5STtL2qn2Kj0yMzNriyJnBMemfyfnygJ4Q+vDMTOzdivyPIKR7QjEzMw6Y6NNQ+nxlF+QNDO9TpA0oMjKJR0qaYmkpZJOabD8k5JWSro9vT69OR/CzMw2X5GmoR8CA4D/TvOfSGVNv7QldQHTgEPIHnE5X9KsiFhcV/WyiDhhk6I2M7OWKZII3hERb8vNXyvpjgLv2w9YGhH3AkiaQTZgXX0iMDOzDipy1dB6SW+szUh6A8XuJxgKPJibX57K6n1E0p2p2WlYoxVJmiRpgaQFK1euLLBpMzMrqkgimAxcJ+m3kq4HrgW+3KLt/xwYERFvBa4BLmpUKSKmR8TYiBjb3d3dok2bmRkUu2roN5JGAaNT0ZKIeL7AulcA+V/4u6ay/Lofz82eD3y3wHrNzKyFmj2z+OCIuFbSkXWLdpdERFy5kXXPB0ZJGkmWACYCH6/bxs4R8XCaPQK4e9PCNzOzLdXsjOC9ZM1AhzdYFkDTRBAR6ySdAMwBuoALImKRpNOBBRExC/iCpCOAdcATwCc3/SOYmdmW6DERRMQ30+TpEXFffln6lb9RETEbmF1XdlpuegowpXC0ZmbWckU6i/+nQdnMVgdiZmad0ayP4E3AXsAOdf0E2wMDyw7MzMzao1kfwWjgQ8BgXt5PsBo4vsSYzMysjZr1EfwM+JmkAyNibhtjMjOzNioyxMRtkj5P1ky0oUkoIj5VWlRmZtY2RTqLLwH+BhgHXE92Y9jqMoMyM7P2KZIIdo+IU4FnIuIi4O+B/csNy8zM2qVIIngh/btK0luAHYDXlReSmZm1U5E+gumSdgROBWYBrwFOa/4WMzPrK4oMOnd+mrweP6fYzKzfaXZD2cnN3hgR57Q+HDMza7dmZwTbpX9HA+8gaxaC7OayeWUGZWZm7dPshrJ/A5D0O2DfiFid5qcCv2hLdGZmVroiVw29Hlibm1+byszMrB8octXQxcA8ST9N8xOAC8sKyMzM2qvIVUNnSPol8J5UdFxE3FZuWGZm1i7NrhraPiKelrQTsCy9ast2iognyg/PzMzK1uyM4FKyYahvIXs0ZY3SvO8pMDPrB5pdNfSh9G+hx1KamVnf1KxpaN9mb4yIW1sfjpmZtVuzpqGzmywL4OAWx2JmZh3QrGnofe0MxMzMOqPIfQSk4af35OVPKLu4rKDMzKx9NpoIJH0TOIgsEcwGPgj8gexGMzMz6+OKDDFxFPB+4C8RcRzwNrKH05iZWT9QJBGsiYgXgXWStgceBYYVWbmkQyUtkbRU0ilN6n1EUkgaWyxsMzNrlSJ9BAskDQbOI7u57K/A3I29SVIXMA04BFgOzJc0KyIW19XbDjgJuHnTQjczs1bo8YxA0jRJ74qIz0XEqog4l+xL/djURLQx+wFLI+LeiFgLzADGN6j3LeBM4LnNiN/MzLZQs6ahe4D/lLRM0ncljYmIZRFxZ8F1DwUezM0vT2UbpJvWhkVE0+cbSJokaYGkBStXriy4eTMzK6LHRBAR34+IA4H3Ao8DF0j6o6RvStpjSzcsaSvgHODLG6sbEdMjYmxEjO3u7t7STZuZWc5GO4sj4v6IODMixgDHkD2P4O4C617ByzuVd01lNdsBbwF+K2kZcAAwyx3GZmbttdFEIGlrSYdL+jHwS2AJcGSBdc8HRkkaKWkbYCIvPfeYiHgqIoZExIiIGAHcBBwREQs254OYmdnmaTbo3CFkZwCHkT2sfgYwKSKeKbLiiFgn6QRgDtAFXBARiySdDiyIiFnN12BmZu3Q7PLRKWTPJPhyRDy5OSuPiNlkdyPny07roe5Bm7MNMzPbMs0GnfPoomZmFVDkzmIzM+vHnAjMzCrOicDMrOKcCMzMKs6JwMys4pwIzMwqzonAzKzinAjMzCrOicDMrOKcCMzMKs6JwMys4pwIzMwqzonAzKzinAjMzCrOicDMrOKcCMzMKs6JwMys4pwIzMwqzonAzKzinAjMzCrOicDMrOKcCMzMKs6JwMys4pwIzMwqzonAzKziSk0Ekg6VtETSUkmnNFj+WUkLJd0u6Q+S9iwzHjMze6XSEoGkLmAa8EFgT+CYBl/0l0bE3hGxD/Bd4Jyy4jEzs8bKPCPYD1gaEfdGxFpgBjA+XyEins7NvhqIEuMxM7MGti5x3UOBB3Pzy4H96ytJ+jxwMrANcHCjFUmaBEwCGD58eMsDNTOrso53FkfEtIh4I/A14Bs91JkeEWMjYmx3d3d7AzQz6+fKTAQrgGG5+V1TWU9mABNKjMfMz
BooMxHMB0ZJGilpG2AiMCtfQdKo3OzfA38qMR4zM2ugtD6CiFgn6QRgDtAFXBARiySdDiyIiFnACZI+ALwAPAkcW1Y8ZmbWWJmdxUTEbGB2XdlpuemTyty+mZltXMc7i83MrLOcCMzMKs6JwMys4pwIzMwqzonAzKzinAjMzCrOicDMrOKcCMzMKs6JwMys4kq9s7i3uOq2FZw1ZwkPrVrDLoMHMXncaCaMGdrpsMzMeoV+nwiuum0FU65cyJoX1gOwYtUaply5EMDJwCzxj6Vq6/eJ4Kw5SzYkgZo1L6znqzPv5CfzHmjZdhY//DR77rx9y9Zn1i7+sWT9vo/goVVrGpavXf9iS7ez587bM34f/9FY39PTj6Wz5izpUETWbv3+jGCXwYNY0SAZDB08iMs+c2AHIjLrXXr6sdRTufU//f6MYPK40Qwa0PWyskEDupg8bnSHIjLrXXYZPGiTyq3/6feJYMKYoXz7yL0ZOngQIjsT+PaRe7vt0yzxjyXr901DkCUDf/GbNVb72/BVQ9VViURgZs35x1K19fumITMza86JwMys4pwIzMwqzonAzKzinAjMzCpOEdHpGDaJpJXA/Z2OYwsNAR7rdBC9gPdDxvsh4/1Q7j7YLSK6Gy3oc4mgP5C0ICLGdjqOTvN+yHg/ZLwfOrcP3DRkZlZxTgRmZhXnRNAZ0zsdQC/h/ZDxfsh4P3RoH7iPwMys4nxGYGZWcU4EZmYV50RQAkkXSHpU0l25sqmSVki6Pb0Oyy2bImmppCWSxnUm6taSNEzSdZIWS1ok6aRUvpOkayT9Kf27YyqXpB+k/XCnpH07+wlao8l+qNrxMFDSPEl3pP3wb6l8pKSb0+e9TNI2qXzbNL80LR/R0Q/QIk32w4WS7ssdD/uk8vb8XUSEXy1+AX8L7AvclSubCnylQd09gTuAbYGRwJ+Brk5/hhbsg52BfdP0dsA96bN+FzgllZ8CnJmmDwN+CQg4ALi505+h5P1QteNBwGvS9ADg5vT/fDkwMZWfC/xLmv4ccG6anghc1unPUPJ+uBA4qkH9tvxd+IygBBHxO+CJgtXHAzMi4vmIuA9YCuxXWnBtEhEPR8StaXo1cDcwlOzzXpSqXQRMSNPjgYsjcxMwWNLO7Y269Zrsh5701+MhIuKvaXZAegVwMDAzldcfD7XjZCbwfklqT7TlabIfetKWvwsngvY6IZ3eXVBrEiH7UngwV2c5zb8o+px0Wj+G7NfP6yPi4bToL8Dr03TV9gNU7HiQ1CXpduBR4Bqys51VEbEuVcl/1g37IS1/CnhtWwMuSf1+iIja8XBGOh6+J2nbVNaW48GJoH1+CLwR2Ad4GDi7o9G0iaTXAP8DfDEins4vi+zctxLXLzfYD5U7HiJifUTsA+xKdpbzps5G1Bn1+0HSW4ApZPvjHcBOwNfaGZMTQZtExCPpAHgROI+XTvdXAMNyVXdNZX2epAFkX34/jogrU/EjtVPb9O+jqbxS+6GKx0NNRKwCrgMOJGvqqD0yN/9ZN+yHtHwH4PH2Rlqu3H44NDUhRkQ8D/yINh8PTgRtUteu92GgdkXRLGBiukpiJDAKmNfu+Fottef+P+DuiDgnt2gWcGyaPhb4Wa78n9JVEgcAT+WakPqsnvZDBY+HbkmD0/Qg4BCy/pLrgKNStfrjoXacHAVcm84g+7Qe9sMfcz+ORNZPkj8eSv+78MPrSyDpJ8BBwBBJy4FvAgelS8ICWAZ8BiAiFkm6HFgMrAM+HxHrOxB2q70L+ASwMLWHAvwr8B3gckn/TDac+NFp2WyyKySWAs8Cx7U12vL0tB+OqdjxsDNwkaQush+gl0fE1ZIWAzMk/TtwG1nSJP17iaSlZBdeTOxE0CXoaT9cK6mb7Oqg24HPpvpt+bvwEBNmZhXnpiEzs4pzIjAzqzgnAjOzinMiMDOrOCcCM7OKcyKwXiXdXv/F3PwcSefn5s+WdHKT918o6ag0/VtJr3gQuKQBkr6jbATUWyXNlfTBtGyZpCGbEfeG7fawfFoaVXKxpDW5USaPkjS7dm15K0naWdLVTZZvI+l3uRu6rKKcCKy3uQF4J4CkrYAhwF655e8EbtzCbXyL7Hrut0TEvmQ38Gy3hetsKiI+n4YVOAz4c0Tsk14zI+KwdJdpq51MdtdyTzGtBX4DfKyEbVsf4kRgvc2NZEMPQJYA7gJWS9oxDcT1ZuBWSadJmi/pLknTi45MKelVwPHAiel2/tpwD5c3qHtyWv9ddWcp/5QGB7tD0iUN3vetdIbQVTCmZZKGSBoh6Y/pvfdI+rGkD0i6IZ297Jfqv1rZQHXzJN0maXwPq/4I8Kv0nr1S/dtT7KNSnauAfygSp/VfPiW0XiUiHpK0TtJwsl//c8lGWzyQbATKhRGxVtJ/RcTpAOnL+EPAzwtsYnfggfoB8OpJejvZXZz7k93tebOk64G1wDeAd0bEY5J2qnvfWWRnF8dt5pAIuwMfBT4FzAc+DrwbOILsjuQJwNfJhlz4VGpSmifpfyPimVwcI4Ena8mO7E7V70fEj5U9/KWWpO4iG+jMKsxnBNYb3UiWBGqJYG5u/oZU533Knly1kGxM+70arWgLvBv4aUQ8k8aPvxJ4T9rWFRHxGEBE5J87cSqwQ0R8dgvGxbkvIhamwegWAb9J61oIjEh1/g44JQ1Z8VtgIDC8bj07Aytz83OBf5X0NWC3iFiT4l8PrJVUatOY9W5OBNYb1foJ9ib7xXoT2RnBO4EbJQ0E/pvsiU57k7WDDyy47qXAcEnbtzzq7Bf82+vPEjbR87npF3PzL/LSGbyAj+T6GYZHxN1161lDbp9ExKVkZxVrgNmSDs7V3RZ4bgtitj7OicB6oxvJmnqeSEM1PwEMJksGN/LSF9xjysb57/FqnXoR8SzZgGbf10vPx+2W9NG6qr8HJkh6laRXk40Q+nvgWuCjkl6b3pv/0v8V2aB6vyj5F/Yc4MRav4ikMQ3q3MNLZxBIegNwb0T8gGyEz7em8tcCj0XECyXGa72cE4H1RgvJrha6qa7sqYh4LF1hcx7Z2cIcsl/im+IbZM0miyXdBVwN1D8051ay58jOI3ui2PkRcVtELALOAK6XdAdwTt37rkixzVI2zHAZvkX2iMM7JS1K8y+T+gv+LGn3VHQ0cFdqTnoLcHEqfx/wi5LitD7Co4+a9VOSPgy8PSK+0aTOlcApEXFP+yKz3sZXDZn1UxHx01oTViOpaewqJwHzGYGZWcW5j8DMrOKcCMzMKs6JwMys4pwIzMwqzonAzKzi/g/E4r9dHt+9nQAAAABJRU5ErkJggg==", "text/plain": [ "
" ] }, "metadata": { "needs_background": "light" }, "output_type": "display_data" } ], "source": [ "from flaml.data import get_output_from_log\n", "time_history, best_valid_loss_history, valid_loss_history, config_history, metric_history = \\\n", " get_output_from_log(filename=automl_settings['log_file_name'], time_budget=3000)\n", "for config in config_history:\n", " print(config)\n", "\n", "import matplotlib.pyplot as plt\n", "import numpy as np\n", "plt.title('Learning Curve')\n", "plt.xlabel('Wall Clock Time (s)')\n", "plt.ylabel('Validation Accuracy')\n", "print(len(valid_loss_history))\n", "plt.scatter(time_history, 1 - np.array(valid_loss_history))\n", "plt.step(time_history, 1 - np.array(best_valid_loss_history), where='post')\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 4.2 Text Summarization Example" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The text summarization task summarizes a long text into a short sentence. For example:\n", "\n", "- Document: Army explosives experts were called out to deal with a suspect package at the offices on the Newtownards Road on Friday night. Roads were sealed off and traffic diverted as a controlled explosion was carried out. The premises, used by East Belfast MP Naomi Long, have been targeted a number of times. Most recently, petrol bomb attacks were carried out on the offices on consecutive nights in April and May. The attacks began following a Belfast City Council vote in December 2012 restricting the flying of the union flag at the City Hall. Condemning the latest hoax, Alliance MLA Chris Lyttle said: \"It is a serious incident for the local area, it causes serious disruption, it puts people's lives at risk, it can prevent emergency services reaching the area. \"Ultimately we need people with information to share that with the police in order for them to do their job and bring these people to justice.\n", "\n", "- Summary: A suspicious package left outside an Alliance Party office in east Belfast has been declared a hoax.\n", "\n", "In this example, we use FLAML to perform *abstractive summarization* using the t5-small language model, i.e., the summary is generated word-by-word. 
" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Using custom data configuration default\n", "Reusing dataset xsum (/home/xliu127/.cache/huggingface/datasets/xsum/default/1.2.0/32c23220eadddb1149b16ed2e9430a05293768cfffbdfd151058697d4c11f934)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "204045\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "Using custom data configuration default\n", "Reusing dataset xsum (/home/xliu127/.cache/huggingface/datasets/xsum/default/1.2.0/32c23220eadddb1149b16ed2e9430a05293768cfffbdfd151058697d4c11f934)\n", "Using custom data configuration default\n", "Reusing dataset xsum (/home/xliu127/.cache/huggingface/datasets/xsum/default/1.2.0/32c23220eadddb1149b16ed2e9430a05293768cfffbdfd151058697d4c11f934)\n" ] } ], "source": [ "from datasets import load_dataset\n", "\n", "train_dataset = load_dataset(\"xsum\", split=\"train\").to_pandas()\n", "print(len(train_dataset))\n", "dev_dataset = load_dataset(\"xsum\", split=\"validation\").to_pandas()\n", "test_dataset = load_dataset(\"xsum\", split=\"test\").to_pandas()\n", "\n", "custom_sent_keys = [\"document\"] # specify the column names of the input sentences\n", "label_key = \"summary\" # specify the column name of the label \n", "\n", "X_train, y_train = train_dataset[custom_sent_keys], train_dataset[label_key]\n", "X_val, y_val = dev_dataset[custom_sent_keys], dev_dataset[label_key]\n", "X_test = test_dataset[custom_sent_keys]" ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [ { "data": { "text/html": [ "== Status ==
Current time: 2022-03-19 14:55:00 (running for 00:08:31.38)
Memory usage on this node: 23.1/376.6 GiB
Using FIFO scheduling algorithm.
Resources requested: 0/96 CPUs, 0/4 GPUs, 0.0/250.17 GiB heap, 0.0/111.21 GiB objects (0.0/1.0 accelerator_type:V100)
Current best trial: 08b6571c with val_loss=0.8569452656271894 and parameters={'learning_rate': 1.0000000000000003e-05, 'num_train_epochs': 1.0, 'per_device_train_batch_size': 32, 'warmup_ratio': 0.0, 'weight_decay': 0.0, 'adam_epsilon': 1e-06, 'seed': 42, 'global_max_steps': 9223372036854775807, 'learner': 'transformer', 'FLAML_sample_size': 10000}
Result logdir: /data/xliu127/projects/hyperopt/FLAML/notebook/data/output/train_2022-03-19_14-46-29
Number of trials: 8/1000000 (8 TERMINATED)

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stderr", "output_type": "stream", "text": [ "\u001b[2m\u001b[36m(train pid=86232)\u001b[0m /data/installation/anaconda3/envs/tmp/lib/python3.8/site-packages/transformers/optimization.py:306: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning\n", "\u001b[2m\u001b[36m(train pid=86232)\u001b[0m warnings.warn(\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "\u001b[2m\u001b[36m(train pid=86232)\u001b[0m {'loss': 8.7635, 'learning_rate': 1.2308416834153697e-05, 'epoch': 0.11}\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "\u001b[2m\u001b[36m(train pid=86184)\u001b[0m /data/installation/anaconda3/envs/tmp/lib/python3.8/site-packages/transformers/optimization.py:306: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning\n", "\u001b[2m\u001b[36m(train pid=86184)\u001b[0m warnings.warn(\n", "\u001b[2m\u001b[36m(train pid=86225)\u001b[0m /data/installation/anaconda3/envs/tmp/lib/python3.8/site-packages/transformers/optimization.py:306: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning\n", "\u001b[2m\u001b[36m(train pid=86225)\u001b[0m warnings.warn(\n", "\u001b[2m\u001b[36m(train pid=86160)\u001b[0m /data/installation/anaconda3/envs/tmp/lib/python3.8/site-packages/transformers/optimization.py:306: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning\n", "\u001b[2m\u001b[36m(train pid=86160)\u001b[0m warnings.warn(\n", "2022-03-19 14:56:00,679\tWARNING ray_trial_executor.py:146 -- Skipping cleanup - trainable.stop did not return in time. Consider making `stop` a faster operation.\n", "\u001b[2m\u001b[36m(train pid=86232)\u001b[0m [nltk_data] Downloading package punkt to /home/xliu127/nltk_data...\n", "\u001b[2m\u001b[36m(train pid=86232)\u001b[0m [nltk_data] Package punkt is already up-to-date!\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "\u001b[2m\u001b[36m(train pid=86232)\u001b[0m {'eval_loss': 6.893245697021484, 'eval_automl_metric': 0.8537338408275918, 'eval_runtime': 102.2734, 'eval_samples_per_second': 110.801, 'eval_steps_per_second': 6.932, 'epoch': 0.11}\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "2022-03-19 14:57:00,687\tWARNING ray_trial_executor.py:146 -- Skipping cleanup - trainable.stop did not return in time. 
Consider making `stop` a faster operation.\n", "\u001b[2m\u001b[36m(train pid=86184)\u001b[0m [nltk_data] Downloading package punkt to /home/xliu127/nltk_data...\n", "\u001b[2m\u001b[36m(train pid=86184)\u001b[0m [nltk_data] Package punkt is already up-to-date!\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "\u001b[2m\u001b[36m(train pid=86184)\u001b[0m {'eval_loss': 7.381210803985596, 'eval_automl_metric': 0.8475751825208984, 'eval_runtime': 107.4032, 'eval_samples_per_second': 105.509, 'eval_steps_per_second': 6.601, 'epoch': 0.16}\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "\u001b[2m\u001b[36m(train pid=86160)\u001b[0m [nltk_data] Downloading package punkt to /home/xliu127/nltk_data...\n", "\u001b[2m\u001b[36m(train pid=86160)\u001b[0m [nltk_data] Package punkt is already up-to-date!\n", "\u001b[2m\u001b[36m(train pid=86225)\u001b[0m [nltk_data] Downloading package punkt to /home/xliu127/nltk_data...\n", "\u001b[2m\u001b[36m(train pid=86225)\u001b[0m [nltk_data] Package punkt is already up-to-date!\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "\u001b[2m\u001b[36m(train pid=86160)\u001b[0m {'eval_loss': 10.150897979736328, 'eval_automl_metric': 0.8566791839938478, 'eval_runtime': 108.2143, 'eval_samples_per_second': 104.718, 'eval_steps_per_second': 6.552, 'epoch': 0.36}\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "2022-03-19 14:58:00,697\tWARNING ray_trial_executor.py:146 -- Skipping cleanup - trainable.stop did not return in time. Consider making `stop` a faster operation.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "\u001b[2m\u001b[36m(train pid=86225)\u001b[0m {'eval_loss': 11.665904998779297, 'eval_automl_metric': 0.858011676038827, 'eval_runtime': 109.4667, 'eval_samples_per_second': 103.52, 'eval_steps_per_second': 6.477, 'epoch': 0.38}\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "\u001b[2m\u001b[36m(train pid=86232)\u001b[0m [nltk_data] Downloading package punkt to /home/xliu127/nltk_data...\n", "\u001b[2m\u001b[36m(train pid=86232)\u001b[0m [nltk_data] Package punkt is already up-to-date!\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "\u001b[2m\u001b[36m(train pid=86232)\u001b[0m {'eval_loss': 6.893245697021484, 'eval_automl_metric': 0.8537338408275918, 'eval_runtime': 110.7246, 'eval_samples_per_second': 102.344, 'eval_steps_per_second': 6.403, 'epoch': 0.11}\n", "\u001b[2m\u001b[36m(train pid=86232)\u001b[0m {'train_runtime': 220.8946, 'train_samples_per_second': 4.648, 'train_steps_per_second': 0.149, 'train_loss': 8.763471198804451, 'epoch': 0.11}\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "2022-03-19 14:59:00,706\tWARNING ray_trial_executor.py:146 -- Skipping cleanup - trainable.stop did not return in time. 
Consider making `stop` a faster operation.\n", "\u001b[2m\u001b[36m(train pid=86232)\u001b[0m Using amp half precision backend\n", "\u001b[2m\u001b[36m(train pid=86232)\u001b[0m ***** Running Prediction *****\n", "\u001b[2m\u001b[36m(train pid=86232)\u001b[0m Num examples = 11332\n", "\u001b[2m\u001b[36m(train pid=86232)\u001b[0m Batch size = 16\n", "\u001b[2m\u001b[36m(train pid=86184)\u001b[0m [nltk_data] Downloading package punkt to /home/xliu127/nltk_data...\n", "\u001b[2m\u001b[36m(train pid=86184)\u001b[0m [nltk_data] Package punkt is already up-to-date!\n", "\u001b[2m\u001b[36m(train pid=86160)\u001b[0m [nltk_data] Downloading package punkt to /home/xliu127/nltk_data...\n", "\u001b[2m\u001b[36m(train pid=86160)\u001b[0m [nltk_data] Package punkt is already up-to-date!\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "\u001b[2m\u001b[36m(train pid=86184)\u001b[0m {'eval_loss': 7.381210803985596, 'eval_automl_metric': 0.8475751825208984, 'eval_runtime': 109.1975, 'eval_samples_per_second': 103.775, 'eval_steps_per_second': 6.493, 'epoch': 0.16}\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "\u001b[2m\u001b[36m(train pid=86225)\u001b[0m [nltk_data] Downloading package punkt to /home/xliu127/nltk_data...\n", "\u001b[2m\u001b[36m(train pid=86225)\u001b[0m [nltk_data] Package punkt is already up-to-date!\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "\u001b[2m\u001b[36m(train pid=86184)\u001b[0m {'train_runtime': 232.9303, 'train_samples_per_second': 10.067, 'train_steps_per_second': 1.262, 'train_loss': 9.880440506280637, 'epoch': 0.16}\n", "\u001b[2m\u001b[36m(train pid=86160)\u001b[0m {'eval_loss': 10.150897979736328, 'eval_automl_metric': 0.8566791839938478, 'eval_runtime': 108.3182, 'eval_samples_per_second': 104.618, 'eval_steps_per_second': 6.546, 'epoch': 0.36}\n", "\u001b[2m\u001b[36m(train pid=86160)\u001b[0m {'train_runtime': 232.4568, 'train_samples_per_second': 92.218, 'train_steps_per_second': 2.887, 'train_loss': 11.215172903878349, 'epoch': 0.36}\n", "\u001b[2m\u001b[36m(train pid=86225)\u001b[0m {'eval_loss': 11.665904998779297, 'eval_automl_metric': 0.858011676038827, 'eval_runtime': 110.526, 'eval_samples_per_second': 102.528, 'eval_steps_per_second': 6.415, 'epoch': 0.38}\n", "\u001b[2m\u001b[36m(train pid=86225)\u001b[0m {'train_runtime': 236.6253, 'train_samples_per_second': 19.714, 'train_steps_per_second': 0.621, 'train_loss': 11.549961930614407, 'epoch': 0.38}\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "2022-03-19 15:00:00,942\tWARNING ray_trial_executor.py:146 -- Skipping cleanup - trainable.stop did not return in time. 
Consider making `stop` a faster operation.\n", "\u001b[2m\u001b[36m(train pid=86184)\u001b[0m Using amp half precision backend\n", "\u001b[2m\u001b[36m(train pid=86184)\u001b[0m ***** Running Prediction *****\n", "\u001b[2m\u001b[36m(train pid=86184)\u001b[0m Num examples = 11332\n", "\u001b[2m\u001b[36m(train pid=86184)\u001b[0m Batch size = 16\n", "\u001b[2m\u001b[36m(train pid=86160)\u001b[0m Using amp half precision backend\n", "\u001b[2m\u001b[36m(train pid=86160)\u001b[0m ***** Running Prediction *****\n", "\u001b[2m\u001b[36m(train pid=86160)\u001b[0m Num examples = 11332\n", "\u001b[2m\u001b[36m(train pid=86160)\u001b[0m Batch size = 16\n", "\u001b[2m\u001b[36m(train pid=86225)\u001b[0m Using amp half precision backend\n", "\u001b[2m\u001b[36m(train pid=86225)\u001b[0m ***** Running Prediction *****\n", "\u001b[2m\u001b[36m(train pid=86225)\u001b[0m Num examples = 11332\n", "\u001b[2m\u001b[36m(train pid=86225)\u001b[0m Batch size = 16\n", "2022-03-19 15:01:00,948\tWARNING ray_trial_executor.py:146 -- Skipping cleanup - trainable.stop did not return in time. Consider making `stop` a faster operation.\n", "2022-03-19 15:02:20,150\tINFO tune.py:639 -- Total run time: 950.87 seconds (500.36 seconds for the tuning loop).\n", "[flaml.automl: 03-19 15:02:25] {2837} INFO - selected model: None\n", "/data/installation/anaconda3/envs/tmp/lib/python3.8/site-packages/transformers/optimization.py:306: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning\n", " warnings.warn(\n", "[flaml.automl: 03-19 15:14:54] {2947} INFO - retrain transformer for 748.2s\n", "[flaml.automl: 03-19 15:14:54] {2954} INFO - retrained model: None\n", "[flaml.automl: 03-19 15:14:54] {2283} INFO - fit succeeded\n", "[flaml.automl: 03-19 15:14:54] {2284} INFO - Time taken to find the best model: 472.3055913448334\n", "[flaml.automl: 03-19 15:14:54] {2295} WARNING - Time taken to find the best model is 94% of the provided time budget and not all estimators' hyperparameter search converged. 
Consider increasing the time budget.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "{'train_runtime': 14.6848, 'train_samples_per_second': 13894.959, 'train_steps_per_second': 434.258, 'train_loss': 10.199760437011719, 'epoch': 0.02}\n" ] } ], "source": [ "''' import AutoML class from flaml package '''\n", "from flaml import AutoML\n", "automl = AutoML()\n", "\n", "import ray\n", "if not ray.is_initialized():\n", " ray.init()\n", "\n", "automl_settings = {\n", " \"time_budget\": 500, # setting the time budget\n", " \"task\": \"summarization\", # setting the task as summarization\n", " \"fit_kwargs_by_estimator\": { # if model_path is not set, the default model is t5-small: https://huggingface.co/t5-small\n", " \"transformer\": {\n", " \"output_dir\": \"data/output/\", # setting the output directory\n", " \"model_path\": \"t5-small\",\n", " \"per_device_eval_batch_size\": 16, # the batch size for validation (inference)\n", " }\n", " },\n", " \"gpu_per_trial\": 1, # set to 0 if no GPU is available\n", " \"log_file_name\": \"seqclass.log\", # set the file to save the log for HPO\n", " \"log_type\": \"all\", # the log type for trials: \"all\" if logging all the trials, \"better\" if only keeping the better trials\n", " \"use_ray\": {\"local_dir\": \"data/output/\"}, # set whether to use Ray\n", " \"metric\": \"rouge1\",\n", " \"n_concurrent_trials\": 4, # \"sample\": False # if the time budget is sufficient (e.g., longer than one trial's running time), you can also set sample to False so that each trial searches on the full training data\n", "}\n", "\n", "'''The main flaml automl API'''\n", "automl.fit(X_train=X_train, y_train=y_train, X_val=X_val, y_val=y_val, **automl_settings)" ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "{'Current Learner': 'transformer', 'Current Sample': 10000, 'Current Hyper-parameters': {'learning_rate': 3.6439277745413994e-06, 'num_train_epochs': 0.454119690781029, 'per_device_train_batch_size': 32, 'warmup_ratio': 0.04654549348562217, 'weight_decay': 0.06669806327326033, 'adam_epsilon': 2.5833461668835812e-08, 'seed': 42, 'global_max_steps': 125, 'learner': 'transformer', 'FLAML_sample_size': 10000}, 'Best Learner': 'transformer', 'Best Hyper-parameters': {'learning_rate': 3.6439277745413994e-06, 'num_train_epochs': 0.454119690781029, 'per_device_train_batch_size': 32, 'warmup_ratio': 0.04654549348562217, 'weight_decay': 0.06669806327326033, 'adam_epsilon': 2.5833461668835812e-08, 'seed': 42, 'global_max_steps': 125, 'learner': 'transformer', 'FLAML_sample_size': 10000}}\n", "{'Current Learner': 'transformer', 'Current Sample': 10000, 'Current Hyper-parameters': {'learning_rate': 1.0000000000000003e-05, 'num_train_epochs': 1.0, 'per_device_train_batch_size': 32, 'warmup_ratio': 0.0, 'weight_decay': 0.0, 'adam_epsilon': 1e-06, 'seed': 42, 'global_max_steps': 112, 'learner': 'transformer', 'FLAML_sample_size': 10000}, 'Best Learner': 'transformer', 'Best Hyper-parameters': {'learning_rate': 1.0000000000000003e-05, 'num_train_epochs': 1.0, 'per_device_train_batch_size': 32, 'warmup_ratio': 0.0, 'weight_decay': 0.0, 'adam_epsilon': 1e-06, 'seed': 42, 'global_max_steps': 112, 'learner': 'transformer', 'FLAML_sample_size': 10000}}\n", "{'Current Learner': 'transformer', 'Current Sample': 10000, 'Current Hyper-parameters': {'learning_rate': 3.4236378229097798e-06, 'num_train_epochs': 8.919336644807531, 'per_device_train_batch_size': 4, 'warmup_ratio': 0.022492820063166875, 'weight_decay': 0.27013721375576616, 'adam_epsilon': 6.366959214432801e-08, 
'seed': 43, 'global_max_steps': 180, 'learner': 'transformer', 'FLAML_sample_size': 10000}, 'Best Learner': 'transformer', 'Best Hyper-parameters': {'learning_rate': 1.0000000000000003e-05, 'num_train_epochs': 1.0, 'per_device_train_batch_size': 32, 'warmup_ratio': 0.0, 'weight_decay': 0.0, 'adam_epsilon': 1e-06, 'seed': 42, 'global_max_steps': 112, 'learner': 'transformer', 'FLAML_sample_size': 10000}}\n", "{'Current Learner': 'transformer', 'Current Sample': 10000, 'Current Hyper-parameters': {'learning_rate': 2.83823390666728e-06, 'num_train_epochs': 1.6667827812145841, 'per_device_train_batch_size': 16, 'warmup_ratio': 0.04013366246992448, 'weight_decay': 0.2945152447208819, 'adam_epsilon': 4.694476379503266e-08, 'seed': 43, 'global_max_steps': 163, 'learner': 'transformer', 'FLAML_sample_size': 10000}, 'Best Learner': 'transformer', 'Best Hyper-parameters': {'learning_rate': 1.0000000000000003e-05, 'num_train_epochs': 1.0, 'per_device_train_batch_size': 32, 'warmup_ratio': 0.0, 'weight_decay': 0.0, 'adam_epsilon': 1e-06, 'seed': 42, 'global_max_steps': 112, 'learner': 'transformer', 'FLAML_sample_size': 10000}}\n", "4\n" ] }, { "data": { "image/png": "iVBORw0KGgoAAAANSUhEUgAAAZMAAAEWCAYAAACjYXoKAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjUuMSwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/YYfK9AAAACXBIWXMAAAsTAAALEwEAmpwYAAAcdUlEQVR4nO3de7gddX3v8feHcAtFiEqkEEBQKApVQQIUKx5vPUBVwAoKeNpGq6gU2+ojFmq1FsoRS2nVSlX0WGorClKM2EajvSlF0UTAQKDRSFNIQm1QuWkkBL7nj5mNi83eO2tn9tqX5P16nvUw85vfzHz3hL0++zez1kyqCkmSuthmqguQJM18hokkqTPDRJLUmWEiSerMMJEkdWaYSJI6M0ykCZTk6CQrproOabIZJtpiJFmV5MVTWUNVXVNVBw5q+0mOSfLVJPclWZfkK0mOH9T+pH4ZJtI4JJk1hfs+CfgM8AlgL2B34F3AyzZjW0ni778mjP8zaYuXZJskZyf5XpIfJLkiyRN6ln8myX8nuaf9q//gnmWXJvlQkkVJfgy8oB0BvS3Jsnady5Ps2PZ/fpLVPeuP2rdd/vYkdyZZm+R1SSrJ/iP8DAH+HDivqj5WVfdU1cNV9ZWqen3b591J/q5nnX3b7W3bzv9bkvOTXAv8BDgrydJh+3lLkqvb6R2S/FmS25N8P8mHk8zu+M+hLZRhoq3Bm4ETgf8F7An8CLi4Z/kXgAOAJwHXA58ctv5pwPnA44B/b9teCRwL7Ac8E1gwxv5H7JvkWOCtwIuB/YHnj7GNA4G9gSvH6NOPXwdOp/lZPgwcmOSAnuWnAZe10xcAvwAc0tY3j2YkJD2GYaKtwRuBd1TV6qp6AHg3cNLQX+xV9fGquq9n2bOS7Nqz/ueq6tp2JPDTtu0DVbW2qn4IfJ7mDXc0o/V9JfDXVbW8qn7S7ns0T2z/e2d/P/KoLm33t7Gq7gE+B5wK0IbK04Cr25HQ6cBbquqHVXUf8H+BUzruX1sow0RbgycDn01yd5K7gVuBh4Ddk8xKckF7CuxeYFW7zm49698xwjb/u2f6J8DOY+x/tL57Dtv2SPsZ8oP2v3uM0acfw/dxGW2Y0IxKFrbBNhfYCfhWz3H7YtsuPYZhoq3BHcBxVTWn57VjVa2heQM9geZU067Avu066Vl/ULfWvpPmQvqQvcfou4Lm53jFGH1+TBMAQ35+hD7Df5YvA3OTHEITKkOnuO4C1gMH9xyzXatqrNDUVsww0ZZmuyQ79ry2pbk2cH6SJwMkmZvkhLb/44AHaP7y34nmVM5kuQJ4TZKnJ9kJeOdoHat5VsRbgXcmeU2SXdoPFjw3ySVttxuB5yXZpz1Nd86mCqiqB2k+IXYh8ASacKGqHgY+CvxFkicBJJmX5JjN/WG1ZTNMtKVZRPMX9dDr3cD7gauBLyW5D7gOOLLt/wngv4A1wC3tsklRVV8APgD8K7CyZ98PjNL/SuBVwGuBtcD3gT+hue5BVX0ZuBxYBnwL+Ic+S7mMZmT2mara2NP++0N1tacA/4nmgwDSY8SHY0nTQ5KnAzcDOwx7U5emPUcm0hRK8vL2+xyPB94LfN4g0UxkmEhT6w3A/wDfo/mE2Zumthxp83iaS5LUmSMTSVJn2051AZNht912q3333Xeqy5CkGWO33XZj8eLFi6vq2H76bxVhsu+++7J06dJNd5QkPSLJbpvu1fA0lySpM8NEktSZYSJJ6swwkSR1ZphIkjrbKj7NJUnjsfCGNVy4eAVr717PnnNmc9YxB3LiofOmuqxpzTCRpB4Lb1jDOVfdxPoHHwJgzd3rOeeqmwAMlDEYJpLU48LFKx4JkiHrH3yIt1+5jE998/YpqmrzHLTnLvzRyw6elH0N9JpJkmOTrEiyMsnZIyx/XpLrk2xMctIIy3dJsjrJB3vavpjk20mWJ/lwklmD/BkkbV3W3r1+xPYNDz08yZXMLAMbmbRv8hcDvwKsBpYkubqqbunpdjuwAHjbKJs5D/jqsLZXVtW9SQJcCZwMfHoia5e09dpzzmzWjBAo8+bM5vI3HDUFFc0MgxyZHAGsrKrbqmoDzRv+Cb0dqmpVVS0DHhP5SQ4Ddge+NGyde9vJbYHtGdzzuSVthc465kBmb/foEx6zt5vFWcf4kMmxDDJM5gF39Myvbts2Kck2wEWMMmJJspjmGRD30YxORupzepKlSZauW7duPHVL2oqdeOg83vNrz2D7Wc3b47w5s3nPrz3Di++bMF2/Z3IGsKiqVo+0sKqOAfYAdgBeOEqfS6pqflXNnzt37uAqlbTFOfHQeRy6zxyO3O8JXHv2Cw2SPgzy01xrgL175vdq2/pxFHB0kjOAnYHtk9xfVY9cxK+qnyb5HM2p
...[base64-encoded PNG data omitted: 'Learning Curve' plot with Rouge 1 on the y-axis and Wall Clock Time (s) on the x-axis]", "text/plain": [ "
" ] }, "metadata": { "needs_background": "light" }, "output_type": "display_data" } ], "source": [ "\n", "from flaml.data import get_output_from_log\n", "time_history, best_valid_loss_history, valid_loss_history, config_history, metric_history = \\\n", " get_output_from_log(filename=automl_settings['log_file_name'], time_budget=3000)\n", "for config in config_history:\n", " print(config)\n", "\n", "import matplotlib.pyplot as plt\n", "import numpy as np\n", "plt.title('Learning Curve')\n", "plt.xlabel('Wall Clock Time (s)')\n", "plt.ylabel('Rouge 1')\n", "print(len(valid_loss_history))\n", "plt.scatter(time_history, 1 - np.array(valid_loss_history))\n", "plt.step(time_history, 1 - np.array(best_valid_loss_history), where='post')\n", "plt.show()" ] } ], "metadata": { "interpreter": { "hash": "e9d36fc5b7c3dd4177ff1b60184dd696c0acc18150a44682abca4d769811bd46" }, "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.0" } }, "nbformat": 4, "nbformat_minor": 2 }