2023-05-27 23:17:23 -04:00
{
"cells": [
2023-06-09 11:40:04 -07:00
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"<a href=\"https://colab.research.google.com/github/microsoft/FLAML/blob/main/notebook/autogen_agent_auto_feedback_from_code_execution.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
2023-05-27 23:17:23 -04:00
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
2023-06-09 11:40:04 -07:00
"# Interactive LLM Agent with Auto Feedback from Code Execution\n",
2023-05-27 23:17:23 -04:00
"\n",
2023-06-09 11:40:04 -07:00
"FLAML offers an experimental feature of interactive LLM agents, which can be used to solve various tasks with human or automatic feedback, including tasks that require using tools via code.\n",
2023-05-27 23:17:23 -04:00
"\n",
2023-06-09 11:40:04 -07:00
"In this notebook, we demonstrate how to use `AssistantAgent` and `UserProxyAgent` to write code and execute the code. Here `AssistantAgent` is an LLM-based agent that can write Python code (in a Python coding block) for a user to execute for a given task. `UserProxyAgent` is an agent which serves as a proxy for the human user to execute the code written by `AssistantAgent`, or automatically execute the code. Depending on the setting of `human_input_mode` and `max_consecutive_auto_reply`, the `UserProxyAgent` either solicits feedback from the human user or uses auto-feedback based on the result of code execution. For example, when `human_input_mode` is set to \"ALWAYS\", the `UserProxyAgent` will always prompt the user for feedback. When user feedback is provided, the `UserProxyAgent` will directly pass the feedback to `AssistantAgent` without doing any additional steps. When no user feedback is provided, the `UserProxyAgent` will execute the code written by `AssistantAgent` directly and return the execution results (success or failure and corresponding outputs) to `AssistantAgent`.\n",
2023-05-27 23:17:23 -04:00
"\n",
"## Requirements\n",
"\n",
2023-07-08 22:25:43 -07:00
"FLAML requires `Python>=3.8`. To run this notebook example, please install flaml with the [autogen] option:\n",
2023-05-27 23:17:23 -04:00
"```bash\n",
"pip install flaml[autogen]\n",
"```"
]
},
{
"cell_type": "code",
2023-06-09 11:40:04 -07:00
"execution_count": 1,
2023-05-27 23:17:23 -04:00
"metadata": {
"execution": {
"iopub.execute_input": "2023-02-13T23:40:52.317406Z",
"iopub.status.busy": "2023-02-13T23:40:52.316561Z",
"iopub.status.idle": "2023-02-13T23:40:52.321193Z",
"shell.execute_reply": "2023-02-13T23:40:52.320628Z"
}
},
"outputs": [],
"source": [
2023-07-08 22:25:43 -07:00
"# %pip install flaml[autogen]==2.0.0rc3"
2023-05-27 23:17:23 -04:00
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Set your API Endpoint\n",
"\n",
2023-07-04 13:29:32 -07:00
"The [`config_list_from_models`](https://microsoft.github.io/FLAML/docs/reference/autogen/oai/openai_utils#config_list_from_models) function tries to create a list of configurations using Azure OpenAI endpoints and OpenAI endpoints for the provided list of models. It assumes the api keys and api bases are stored in the corresponding environment variables or local txt files:\n",
2023-05-27 23:17:23 -04:00
"\n",
"- OpenAI API key: os.environ[\"OPENAI_API_KEY\"] or `openai_api_key_file=\"key_openai.txt\"`.\n",
"- Azure OpenAI API key: os.environ[\"AZURE_OPENAI_API_KEY\"] or `aoai_api_key_file=\"key_aoai.txt\"`. Multiple keys can be stored, one per line.\n",
"- Azure OpenAI API base: os.environ[\"AZURE_OPENAI_API_BASE\"] or `aoai_api_base_file=\"base_aoai.txt\"`. Multiple bases can be stored, one per line.\n",
"\n",
2023-06-09 11:40:04 -07:00
"It's OK to have only the OpenAI API key, or only the Azure OpenAI API key + base.\n"
2023-05-27 23:17:23 -04:00
]
},
{
"cell_type": "code",
2023-06-09 11:40:04 -07:00
"execution_count": 2,
2023-05-27 23:17:23 -04:00
"metadata": {},
"outputs": [],
"source": [
"from flaml import oai\n",
"\n",
2023-06-09 11:40:04 -07:00
"config_list = oai.config_list_from_models(model_list=[\"gpt-4\"])"
2023-05-27 23:17:23 -04:00
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
2023-06-09 11:40:04 -07:00
"The config list looks like the following:\n",
"```python\n",
"config_list = [\n",
" {\n",
" 'model': 'gpt-4',\n",
" 'api_key': '<your OpenAI API key here>',\n",
" }, # only if OpenAI API key is found\n",
" {\n",
" 'model': 'gpt-4',\n",
" 'api_key': '<your first Azure OpenAI API key here>',\n",
" 'api_base': '<your first Azure OpenAI API base here>',\n",
" 'api_type': 'azure',\n",
" 'api_version': '2023-03-15-preview',\n",
" }, # only if the at least one Azure OpenAI API key is found\n",
" {\n",
" 'model': 'gpt-4',\n",
" 'api_key': '<your second Azure OpenAI API key here>',\n",
" 'api_base': '<your second Azure OpenAI API base here>',\n",
" 'api_type': 'azure',\n",
" 'api_version': '2023-03-15-preview',\n",
" }, # only if the second Azure OpenAI API key is found\n",
"]\n",
"```\n",
2023-05-27 23:17:23 -04:00
"\n",
2023-06-09 11:40:04 -07:00
"You can directly override it if the above function returns an empty list, i.e., it doesn't find the keys in the specified locations."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Example Task: Write Code to Draw a Plot\n",
"\n",
"In the example below, let's see how to use the agents in FLAML to write a python script and execute the script. This process involves constructing a `AssistantAgent` to serve as the assistant, along with a `UserProxyAgent` that acts as a proxy for the human user. In this example demonstrated below, when constructing the `UserProxyAgent`, we select the `human_input_mode` to \"NEVER\". This means that the `UserProxyAgent` will not solicit feedback from the human user. It stops replying when the limit defined by `max_consecutive_auto_reply` is reached, or when `is_termination_msg()` returns true for the received message."
2023-05-27 23:17:23 -04:00
]
},
{
"cell_type": "code",
2023-06-09 11:40:04 -07:00
"execution_count": 3,
2023-05-27 23:17:23 -04:00
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
2023-06-09 11:40:04 -07:00
"user (to assistant):\n",
2023-05-27 23:17:23 -04:00
"\n",
2023-07-08 22:25:43 -07:00
"Draw a rocket and save to a file named 'rocket.svg'\n",
2023-05-27 23:17:23 -04:00
"\n",
2023-07-10 09:07:48 -07:00
"--------------------------------------------------------------------------------\n",
2023-06-09 11:40:04 -07:00
"assistant (to user):\n",
2023-07-08 22:25:43 -07:00
"\n",
2023-07-10 09:07:48 -07:00
"# filename: rocket_drawing.py\n",
"import svgwrite\n",
"\n",
"def create_rocket_drawing(file_name):\n",
" dwg = svgwrite.Drawing(file_name, profile='tiny')\n",
"\n",
" dwg.add(dwg.rect(insert=(250, 150), size=(50, 75),\n",
" fill='white', stroke='black', stroke_width=1))\n",
"\n",
" dwg.add(dwg.polygon(points=[(225, 150), (325, 150), (275, 100)],\n",
" fill='white', stroke='black', stroke_width=1))\n",
"\n",
" dwg.add(dwg.polygon(points=[(250, 225), (300, 225), (275, 260)],\n",
" fill='white', stroke='black', stroke_width=1))\n",
"\n",
" dwg.add(dwg.polygon(points=[(212, 156), (262, 156), (237, 191)],\n",
" fill='white', stroke='black', stroke_width=1))\n",
"\n",
" dwg.add(dwg.polygon(points=[(287, 157), (337, 157), (312, 192)],\n",
" fill='white', stroke='black', stroke_width=1))\n",
"\n",
" dwg.save()\n",
"\n",
"create_rocket_drawing(\"rocket.svg\")\n",
2023-07-08 22:25:43 -07:00
"\n",
"\n",
2023-07-10 09:07:48 -07:00
"--------------------------------------------------------------------------------\n",
"\n",
">>>>>>>> NO HUMAN INPUT RECEIVED. USING AUTO REPLY FOR THE USER...\n",
"user (to assistant):\n",
"\n",
"\n",
"\n",
"--------------------------------------------------------------------------------\n",
"assistant (to user):\n",
"\n",
"Please save the provided code in a file named `rocket_drawing.py` and execute it by running:\n",
"\n",
"```sh\n",
"python rocket_drawing.py\n",
2023-07-08 22:25:43 -07:00
"```\n",
"\n",
2023-07-10 09:07:48 -07:00
"This will generate and save the rocket drawing in an SVG file named `rocket.svg`.\n",
2023-07-08 22:25:43 -07:00
"\n",
2023-07-10 09:07:48 -07:00
"--------------------------------------------------------------------------------\n",
"\n",
">>>>>>>> NO HUMAN INPUT RECEIVED. USING AUTO REPLY FOR THE USER...\n",
"user (to assistant):\n",
2023-05-27 23:17:23 -04:00
"\n",
2023-07-10 09:07:48 -07:00
"exitcode: 2 (execution failed)\n",
"Code output: \n",
"python: can't open file '/workspaces/FLAML/notebook/coding/rocket_drawing.py': [Errno 2] No such file or directory\n",
2023-05-27 23:17:23 -04:00
"\n",
"\n",
2023-07-10 09:07:48 -07:00
"--------------------------------------------------------------------------------\n",
"assistant (to user):\n",
2023-05-27 23:17:23 -04:00
"\n",
2023-07-10 09:07:48 -07:00
"I apologize for the confusion. It seems that there might be an issue with the file path or access to the workspace. Please follow these steps to save the rocket_drawing.py file and execute the code locally on your computer:\n",
"\n",
"1. Create a new file on your computer named `rocket_drawing.py`\n",
"\n",
"2. Copy the code provided earlier and paste it into the `rocket_drawing.py` file.\n",
2023-05-27 23:17:23 -04:00
"\n",
2023-07-10 09:07:48 -07:00
"3. Save the file and open a terminal or command prompt.\n",
"\n",
"4. Use the `cd` command to navigate to the folder where the `rocket_drawing.py` file is located.\n",
"\n",
"5. Execute the file by running the following command:\n",
"\n",
"```sh\n",
"python rocket_drawing.py\n",
2023-05-27 23:17:23 -04:00
"```\n",
"\n",
2023-07-10 09:07:48 -07:00
"After executing the script, you should have a file named `rocket.svg` in the same folder as `rocket_drawing.py`.\n",
2023-07-08 22:25:43 -07:00
"\n",
2023-07-10 09:07:48 -07:00
"--------------------------------------------------------------------------------\n",
"\n",
">>>>>>>> NO HUMAN INPUT RECEIVED. USING AUTO REPLY FOR THE USER...\n",
"user (to assistant):\n",
"\n",
"exitcode: 2 (execution failed)\n",
"Code output: \n",
"python: can't open file '/workspaces/FLAML/notebook/coding/rocket_drawing.py': [Errno 2] No such file or directory\n",
2023-05-27 23:17:23 -04:00
"\n",
"\n",
2023-07-08 22:25:43 -07:00
"--------------------------------------------------------------------------------\n",
2023-07-10 09:07:48 -07:00
"assistant (to user):\n",
"\n",
"I apologize for the misunderstanding. It looks like you are unable to execute the code on your local machine. Due to the limitations of my current environment, I cannot execute the code on your behalf. However, I can walk you through the process so you may create an SVG file containing the rocket drawing.\n",
"\n",
"Please follow these steps on your local machine:\n",
"\n",
"1. Open a text editor such as Notepad or any code editor of your choice.\n",
"\n",
"2. Copy and paste the provided code from earlier into the editor.\n",
"\n",
"3. Save this file as `rocket_drawing.py`. Choose a convenient location on your computer for this file.\n",
"\n",
"4. Open a terminal or command prompt on your computer.\n",
"\n",
"5. Navigate to the folder where you saved the `rocket_drawing.py` using the `cd` command in the terminal or command prompt.\n",
"\n",
"For example, if the `rocket_drawing.py` file is saved in a folder named `scripts` under the user's home folder, you can enter the following command:\n",
"\n",
"```sh\n",
"cd /path/to/your/folder/where/rocket_drawing.py/is/saved\n",
"```\n",
"\n",
"6. Once you are in the correct folder, run the following command:\n",
"\n",
"```sh\n",
"python rocket_drawing.py\n",
"```\n",
"\n",
"This command will execute the script and create an SVG file named `rocket.svg` within the same directory. Please follow these steps on your local machine and let me know if you encounter any issues.\n",
"\n",
"--------------------------------------------------------------------------------\n",
2023-05-27 23:17:23 -04:00
"\n",
2023-07-08 22:25:43 -07:00
">>>>>>>> NO HUMAN INPUT RECEIVED. USING AUTO REPLY FOR THE USER...\n",
2023-06-09 11:40:04 -07:00
"user (to assistant):\n",
2023-07-08 22:25:43 -07:00
"\n",
2023-07-10 09:07:48 -07:00
"exitcode: 2 (execution failed)\n",
2023-05-27 23:17:23 -04:00
"Code output: \n",
2023-07-10 09:07:48 -07:00
"tmp_code_7d903d2e041418f3df6dc41e57042575.sh: 1: cd: can't cd to /path/to/your/folder/where/rocket_drawing.py/is/saved\n",
"\n",
"\n",
"--------------------------------------------------------------------------------\n",
"assistant (to user):\n",
"\n",
"I apologize for the confusion. I cannot provide specific instructions for your local machine, as I do not have access to your file system. However, I can continue to provide guidance in running the script.\n",
"\n",
"Here are the general steps to follow:\n",
"\n",
"1. Locate the folder where you saved the `rocket_drawing.py` file on your computer.\n",
"\n",
"2. Open a terminal or command prompt.\n",
"\n",
"3. In the terminal or command prompt, type `cd` followed by the path of the folder where you saved the `rocket_drawing.py` file.\n",
"\n",
"For example, if the file is saved in a folder named `scripts` in your home folder, you would type (replace `/path/to/your/folder` with your actual path):\n",
2023-05-27 23:17:23 -04:00
"\n",
2023-07-10 09:07:48 -07:00
"```sh\n",
"cd /path/to/your/folder\n",
"```\n",
"\n",
"4. Once you're in the correct folder, run this command:\n",
"\n",
"```sh\n",
"python rocket_drawing.py\n",
"```\n",
"\n",
"It is essential that you replace `/path/to/your/folder/where/rocket_drawing.py/is/saved` with the actual path on your computer where the `rocket_drawing.py` file is saved.\n",
"\n",
"Please attempt the process on your local machine following the mentioned steps. If you face any issues, you can specify the problem or error message, and I will do my best to assist you.\n",
"\n",
"Keep in mind that you cannot execute the script in this environment; you must perform these steps on your local machine.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\n",
">>>>>>>> NO HUMAN INPUT RECEIVED. USING AUTO REPLY FOR THE USER...\n",
"user (to assistant):\n",
"\n",
"exitcode: 2 (execution failed)\n",
"Code output: \n",
"tmp_code_61f3b3736b76f2740c6738762f9bb6ca.sh: 1: cd: can't cd to /path/to/your/folder\n",
2023-05-27 23:17:23 -04:00
"\n",
"\n",
2023-07-08 22:25:43 -07:00
"--------------------------------------------------------------------------------\n",
2023-06-09 11:40:04 -07:00
"assistant (to user):\n",
"\n",
2023-07-10 09:07:48 -07:00
"I'm sorry for the confusion, but due to the limitations of this platform, I won't be able to execute the local files or access the local command terminal of your system.\n",
"\n",
"To run the script:\n",
"\n",
"1. On your system, open the terminal or command prompt.\n",
"2. Navigate to the directory where you saved `rocket_drawing.py`.\n",
"3. Execute the python script by running `python rocket_drawing.py`.\n",
"\n",
"Alternatively, you can execute the python script by providing the full path to python interpreter, for instance:\n",
"\n",
"```sh\n",
"python /path/to/your/folder/rocket_drawing.py\n",
"```\n",
"Replace `/path/to/your/folder/rocket_drawing.py` with the actual path to the `rocket_drawing.py` file on your system.\n",
"\n",
"If Python is installed correctly and the script is written without errors, it will run and create `rocket.svg` in the same directory.\n",
"\n",
"Please follow these steps on your local machine. I'm sorry for any inconveniences caused, but I'm unable to assist with executing local scripts directly from here.\n",
"\n",
"Lastly, ensure that Python is installed properly on your system, and the command-line terminal recognizes 'python' command. You also need the 'svgwrite' package, which you can install by running `pip install svgwrite` on the terminal.\n",
2023-07-08 22:25:43 -07:00
"\n",
"TERMINATE\n",
2023-05-27 23:17:23 -04:00
"\n",
2023-07-08 22:25:43 -07:00
"--------------------------------------------------------------------------------\n"
2023-05-27 23:17:23 -04:00
]
}
],
"source": [
2023-06-09 11:40:04 -07:00
"from flaml.autogen.agent import AssistantAgent, UserProxyAgent\n",
2023-05-27 23:17:23 -04:00
"\n",
2023-06-09 11:40:04 -07:00
"# create an AssistantAgent named \"assistant\"\n",
"assistant = AssistantAgent(\"assistant\", request_timeout=600, seed=42, config_list=config_list)\n",
2023-05-27 23:17:23 -04:00
"# create a UserProxyAgent instance named \"user\"\n",
"user = UserProxyAgent(\n",
" \"user\",\n",
" human_input_mode=\"NEVER\",\n",
" max_consecutive_auto_reply=10,\n",
2023-07-06 06:08:44 +08:00
" is_termination_msg=lambda x: x.get(\"content\", \"\").rstrip().endswith(\"TERMINATE\") or x.get(\"content\", \"\").rstrip().endswith('\"TERMINATE\".'),\n",
2023-06-09 11:40:04 -07:00
" work_dir=\"coding\",\n",
" use_docker=False, # set to True if you are using docker\n",
2023-05-27 23:17:23 -04:00
")\n",
"# the assistant receives a message from the user, which contains the task description\n",
"assistant.receive(\n",
2023-06-09 11:40:04 -07:00
" \"\"\"Draw a rocket and save to a file named 'rocket.svg'\"\"\",\n",
2023-05-27 23:17:23 -04:00
" user,\n",
")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
2023-06-09 11:40:04 -07:00
"The example above involves code execution. In FLAML, code execution is triggered automatically by the `UserProxyAgent` when it detects an executable code block in a received message and no human user input is provided. This process occurs in a designated working directory, using a Docker container by default. Unless a specific directory is specified, FLAML defaults to the `flaml/autogen/extensions` directory. Users have the option to specify a different working directory by setting the `work_dir` argument when constructing a new instance of the `UserProxyAgent`.\n",
"\n",
"Let's display the generated figure."
2023-05-27 23:17:23 -04:00
]
},
{
"cell_type": "code",
2023-06-09 11:40:04 -07:00
"execution_count": 4,
2023-05-27 23:17:23 -04:00
"metadata": {},
"outputs": [
{
2023-06-09 11:40:04 -07:00
"data": {
"image/svg+xml": [
2023-07-08 22:25:43 -07:00
"<svg xmlns=\"http://www.w3.org/2000/svg\" xmlns:ev=\"http://www.w3.org/2001/xml-events\" xmlns:xlink=\"http://www.w3.org/1999/xlink\" baseProfile=\"tiny\" height=\"100%\" version=\"1.2\" width=\"100%\"><defs/><rect fill=\"red\" height=\"60\" width=\"20\" x=\"50\" y=\"20\"/><polygon fill=\"blue\" points=\"45,80 75,80 60,90\"/><polygon fill=\"orange\" points=\"50,80 70,80 60,100\"/></svg>"
2023-06-09 11:40:04 -07:00
],
"text/plain": [
"<IPython.core.display.SVG object>"
]
},
"metadata": {},
"output_type": "display_data"
2023-05-27 23:17:23 -04:00
}
],
"source": [
2023-06-09 11:40:04 -07:00
"# uncomment the following to render the svg file\n",
"# from IPython.display import SVG, display\n",
"\n",
"# display(SVG(\"coding/rocket.svg\"))"
2023-05-27 23:17:23 -04:00
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
2023-07-10 09:07:48 -07:00
"## Example Task: Check Stock Price Change"
2023-05-27 23:17:23 -04:00
]
},
{
"cell_type": "code",
2023-06-09 11:40:04 -07:00
"execution_count": 5,
2023-05-27 23:17:23 -04:00
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
2023-06-09 11:40:04 -07:00
"user (to assistant):\n",
2023-05-27 23:17:23 -04:00
"\n",
2023-07-10 09:07:48 -07:00
"What date is today? Which stock has larger year-to-date gain this year, META or TESLA?\n",
2023-05-27 23:17:23 -04:00
"\n",
2023-07-08 22:25:43 -07:00
"--------------------------------------------------------------------------------\n",
2023-06-09 11:40:04 -07:00
"assistant (to user):\n",
2023-07-08 22:25:43 -07:00
"\n",
2023-07-10 09:07:48 -07:00
"To get the current date and fetch the information about stock performance, I'll provide you with a Python script that uses the yfinance library. First, you need to install the library by running the following command in the shell:\n",
2023-05-27 23:17:23 -04:00
"\n",
2023-06-09 11:40:04 -07:00
"```sh\n",
"pip install yfinance\n",
2023-05-27 23:17:23 -04:00
"```\n",
"\n",
2023-07-10 09:07:48 -07:00
"Then, execute the following Python code:\n",
2023-05-27 23:17:23 -04:00
"\n",
2023-06-09 11:40:04 -07:00
"```python\n",
2023-07-10 09:07:48 -07:00
"import datetime\n",
2023-06-09 11:40:04 -07:00
"import yfinance as yf\n",
"\n",
2023-07-10 09:07:48 -07:00
"def stock_ytd_gain(symbol):\n",
" now = datetime.datetime.now()\n",
" start_date = datetime.datetime(now.year, 1, 1)\n",
" stock = yf.Ticker(symbol)\n",
" history = stock.history(start=start_date, end=now)\n",
" ytd_gain = (history[\"Close\"][-1] / history[\"Open\"][0]) - 1\n",
2023-07-08 22:25:43 -07:00
" return ytd_gain\n",
"\n",
2023-07-10 09:07:48 -07:00
"def compare_stocks(stock1, stock2):\n",
" stock1_ytd_gain = stock_ytd_gain(stock1)\n",
" stock2_ytd_gain = stock_ytd_gain(stock2)\n",
" \n",
" print(f\"Today's date: {datetime.datetime.now().strftime('%Y-%m-%d')}\")\n",
" \n",
" if stock1_ytd_gain > stock2_ytd_gain:\n",
" print(f\"{stock1} has a larger year-to-date gain.\")\n",
" elif stock1_ytd_gain < stock2_ytd_gain:\n",
" print(f\"{stock2} has a larger year-to-date gain.\")\n",
" else:\n",
" print(f\"{stock1} and {stock2} have the same year-to-date gain.\")\n",
2023-07-08 22:25:43 -07:00
"\n",
2023-07-10 09:07:48 -07:00
"compare_stocks(\"META\", \"TSLA\")\n",
2023-06-09 11:40:04 -07:00
"```\n",
2023-07-10 09:07:48 -07:00
"After running this code, you'll get the current date and the stock with a larger year-to-date gain between META and TESLA.\n",
2023-05-27 23:17:23 -04:00
"\n",
2023-07-08 22:25:43 -07:00
"--------------------------------------------------------------------------------\n",
2023-05-27 23:17:23 -04:00
"\n",
2023-07-08 22:25:43 -07:00
">>>>>>>> NO HUMAN INPUT RECEIVED. USING AUTO REPLY FOR THE USER...\n",
2023-06-09 11:40:04 -07:00
"user (to assistant):\n",
2023-07-08 22:25:43 -07:00
"\n",
2023-07-10 09:07:48 -07:00
"exitcode: 0 (execution succeeded)\n",
2023-06-09 11:40:04 -07:00
"Code output: \n",
"Defaulting to user installation because normal site-packages is not writeable\n",
"Requirement already satisfied: yfinance in /home/vscode/.local/lib/python3.9/site-packages (0.2.18)\n",
"Requirement already satisfied: pandas>=1.3.0 in /usr/local/lib/python3.9/site-packages (from yfinance) (1.5.2)\n",
"Requirement already satisfied: numpy>=1.16.5 in /home/vscode/.local/lib/python3.9/site-packages (from yfinance) (1.23.5)\n",
"Requirement already satisfied: requests>=2.26 in /usr/local/lib/python3.9/site-packages (from yfinance) (2.28.1)\n",
"Requirement already satisfied: multitasking>=0.0.7 in /home/vscode/.local/lib/python3.9/site-packages (from yfinance) (0.0.11)\n",
"Requirement already satisfied: lxml>=4.9.1 in /home/vscode/.local/lib/python3.9/site-packages (from yfinance) (4.9.2)\n",
"Requirement already satisfied: appdirs>=1.4.4 in /home/vscode/.local/lib/python3.9/site-packages (from yfinance) (1.4.4)\n",
"Requirement already satisfied: pytz>=2022.5 in /usr/local/lib/python3.9/site-packages (from yfinance) (2022.6)\n",
"Requirement already satisfied: frozendict>=2.3.4 in /home/vscode/.local/lib/python3.9/site-packages (from yfinance) (2.3.8)\n",
"Requirement already satisfied: cryptography>=3.3.2 in /home/vscode/.local/lib/python3.9/site-packages (from yfinance) (38.0.4)\n",
"Requirement already satisfied: beautifulsoup4>=4.11.1 in /home/vscode/.local/lib/python3.9/site-packages (from yfinance) (4.11.1)\n",
"Requirement already satisfied: html5lib>=1.1 in /home/vscode/.local/lib/python3.9/site-packages (from yfinance) (1.1)\n",
"Requirement already satisfied: soupsieve>1.2 in /home/vscode/.local/lib/python3.9/site-packages (from beautifulsoup4>=4.11.1->yfinance) (2.3.2.post1)\n",
"Requirement already satisfied: cffi>=1.12 in /home/vscode/.local/lib/python3.9/site-packages (from cryptography>=3.3.2->yfinance) (1.15.1)\n",
"Requirement already satisfied: six>=1.9 in /usr/local/lib/python3.9/site-packages (from html5lib>=1.1->yfinance) (1.16.0)\n",
"Requirement already satisfied: webencodings in /home/vscode/.local/lib/python3.9/site-packages (from html5lib>=1.1->yfinance) (0.5.1)\n",
"Requirement already satisfied: python-dateutil>=2.8.1 in /usr/local/lib/python3.9/site-packages (from pandas>=1.3.0->yfinance) (2.8.2)\n",
"Requirement already satisfied: charset-normalizer<3,>=2 in /usr/local/lib/python3.9/site-packages (from requests>=2.26->yfinance) (2.1.1)\n",
"Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.9/site-packages (from requests>=2.26->yfinance) (3.4)\n",
"Requirement already satisfied: urllib3<1.27,>=1.21.1 in /usr/local/lib/python3.9/site-packages (from requests>=2.26->yfinance) (1.26.13)\n",
"Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.9/site-packages (from requests>=2.26->yfinance) (2022.9.24)\n",
"Requirement already satisfied: pycparser in /home/vscode/.local/lib/python3.9/site-packages (from cffi>=1.12->cryptography>=3.3.2->yfinance) (2.21)\n",
"\n",
2023-07-10 09:07:48 -07:00
"Today's date: 2023-07-10\n",
"META has a larger year-to-date gain.\n",
"\n",
"\n",
"--------------------------------------------------------------------------------\n",
"assistant (to user):\n",
"\n",
"TERMINATE\n",
"\n",
"--------------------------------------------------------------------------------\n"
]
}
],
"source": [
"# it is suggested to reset the assistant to clear the state if the new task is not related to the previous one.\n",
"assistant.reset()\n",
"assistant.receive(\n",
" \"\"\"What date is today? Which stock has larger year-to-date gain this year, META or TESLA?\"\"\",\n",
" user,\n",
")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"All the feedback is auto generated."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Example Task: Find an Issue from GitHub"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"user (to assistant):\n",
2023-07-08 22:25:43 -07:00
"\n",
2023-07-10 09:07:48 -07:00
"I'd like to contribute to flaml. Suggest a task for me.\n",
2023-07-08 22:25:43 -07:00
"\n",
"--------------------------------------------------------------------------------\n",
2023-06-09 11:40:04 -07:00
"assistant (to user):\n",
"\n",
2023-07-10 09:07:48 -07:00
"I will find an issue for you to work on in the FLAML repository. Please execute the following Python code to retrieve the issue:\n",
2023-06-09 11:40:04 -07:00
"\n",
"```python\n",
2023-07-10 09:07:48 -07:00
"import requests\n",
2023-06-09 11:40:04 -07:00
"\n",
2023-07-10 09:07:48 -07:00
"url = \"https://api.github.com/repos/microsoft/FLAML/issues\"\n",
"headers = {\"Accept\": \"application/vnd.github+json\"}\n",
2023-06-09 11:40:04 -07:00
"\n",
2023-07-10 09:07:48 -07:00
"response = requests.get(url, headers=headers)\n",
"data = response.json()\n",
2023-06-09 11:40:04 -07:00
"\n",
2023-07-10 09:07:48 -07:00
"for issue in data:\n",
" print(f\"Issue #{issue['number']}: {issue['title']}\")\n",
" print(f\"URL: {issue['url']}\")\n",
" print(\"-\" * 80)\n",
2023-07-08 22:25:43 -07:00
"```\n",
"\n",
2023-07-10 09:07:48 -07:00
"This code uses the GitHub API to fetch a list of open issues in the microsoft/FLAML repository and prints them out. You can browse through the list and choose a suitable task to work on.\n",
2023-06-09 11:40:04 -07:00
"\n",
2023-07-08 22:25:43 -07:00
"--------------------------------------------------------------------------------\n",
2023-06-09 11:40:04 -07:00
"\n",
2023-07-08 22:25:43 -07:00
">>>>>>>> NO HUMAN INPUT RECEIVED. USING AUTO REPLY FOR THE USER...\n",
2023-06-09 11:40:04 -07:00
"user (to assistant):\n",
2023-07-08 22:25:43 -07:00
"\n",
2023-05-27 23:17:23 -04:00
"exitcode: 0 (execution succeeded)\n",
2023-06-09 11:40:04 -07:00
"Code output: \n",
2023-07-10 09:07:48 -07:00
"Issue #1113: Update docstring for oai.completion.\n",
"URL: https://api.github.com/repos/microsoft/FLAML/issues/1113\n",
"--------------------------------------------------------------------------------\n",
"Issue #1111: Failure cases for autogen agents\n",
"URL: https://api.github.com/repos/microsoft/FLAML/issues/1111\n",
"--------------------------------------------------------------------------------\n",
"Issue #1108: Integrate opensource LLMs into autogen\n",
"URL: https://api.github.com/repos/microsoft/FLAML/issues/1108\n",
"--------------------------------------------------------------------------------\n",
"Issue #1107: Auto FLAML code/api generation \n",
"URL: https://api.github.com/repos/microsoft/FLAML/issues/1107\n",
"--------------------------------------------------------------------------------\n",
"Issue #1104: Integer log uniform sampling distribution issue\n",
"URL: https://api.github.com/repos/microsoft/FLAML/issues/1104\n",
"--------------------------------------------------------------------------------\n",
"Issue #1103: Build a pipeline, to output adversarial examples from given input examples and input LLM config to solve the problem\n",
"URL: https://api.github.com/repos/microsoft/FLAML/issues/1103\n",
"--------------------------------------------------------------------------------\n",
"Issue #1101: Handling categorical variables on new data\n",
"URL: https://api.github.com/repos/microsoft/FLAML/issues/1101\n",
"--------------------------------------------------------------------------------\n",
"Issue #1099: Do not print data split method when user provides X_val in AutoML\n",
"URL: https://api.github.com/repos/microsoft/FLAML/issues/1099\n",
"--------------------------------------------------------------------------------\n",
"Issue #1098: Continual learning via `LearningAgent` and `TeachingAgent`\n",
"URL: https://api.github.com/repos/microsoft/FLAML/issues/1098\n",
"--------------------------------------------------------------------------------\n",
"Issue #1096: MathChat blog post\n",
"URL: https://api.github.com/repos/microsoft/FLAML/issues/1096\n",
"--------------------------------------------------------------------------------\n",
"Issue #1094: Implement functions for the conversion of different search spaces in tune and automl\n",
"URL: https://api.github.com/repos/microsoft/FLAML/issues/1094\n",
"--------------------------------------------------------------------------------\n",
"Issue #1092: Flaml/Automl - How do I pass a huge lightgbm input '.bin' file directly to lightgbm for training via flaml/automl?\n",
"URL: https://api.github.com/repos/microsoft/FLAML/issues/1092\n",
"--------------------------------------------------------------------------------\n",
"Issue #1087: Update flaml's integration script in automlbenchmark to accommodate the new release\n",
"URL: https://api.github.com/repos/microsoft/FLAML/issues/1087\n",
"--------------------------------------------------------------------------------\n",
"Issue #1081: Support new OpenAI models\n",
"URL: https://api.github.com/repos/microsoft/FLAML/issues/1081\n",
"--------------------------------------------------------------------------------\n",
"Issue #1080: Add support to python 3.11\n",
"URL: https://api.github.com/repos/microsoft/FLAML/issues/1080\n",
"--------------------------------------------------------------------------------\n",
"Issue #1079: Time to update the supported python versions?\n",
"URL: https://api.github.com/repos/microsoft/FLAML/issues/1079\n",
"--------------------------------------------------------------------------------\n",
"Issue #1077: Create a new subclass of Domain to support open-ended tuning\n",
"URL: https://api.github.com/repos/microsoft/FLAML/issues/1077\n",
"--------------------------------------------------------------------------------\n",
"Issue #1074: Enable planning in AssistantAgent\n",
"URL: https://api.github.com/repos/microsoft/FLAML/issues/1074\n",
"--------------------------------------------------------------------------------\n",
"Issue #1073: Implement a basic solution for continue learning with LLMs and make it compatible with current design\n",
"URL: https://api.github.com/repos/microsoft/FLAML/issues/1073\n",
"--------------------------------------------------------------------------------\n",
"Issue #1072: Write user code example\n",
"URL: https://api.github.com/repos/microsoft/FLAML/issues/1072\n",
"--------------------------------------------------------------------------------\n",
"Issue #1069: Improve logging in autogen.oai\n",
"URL: https://api.github.com/repos/microsoft/FLAML/issues/1069\n",
"--------------------------------------------------------------------------------\n",
"Issue #1066: Agent Implementation of MathChat\n",
"URL: https://api.github.com/repos/microsoft/FLAML/issues/1066\n",
"--------------------------------------------------------------------------------\n",
"Issue #1065: Is the version restriction for \"vowpalwabbit>=8.10.0, <9.0.0\" still needed?\n",
"URL: https://api.github.com/repos/microsoft/FLAML/issues/1065\n",
"--------------------------------------------------------------------------------\n",
"Issue #1064: Support In-Context-Learning (ICL) in agents\n",
"URL: https://api.github.com/repos/microsoft/FLAML/issues/1064\n",
"--------------------------------------------------------------------------------\n",
"Issue #1063: Continual learning with LLM\n",
"URL: https://api.github.com/repos/microsoft/FLAML/issues/1063\n",
"--------------------------------------------------------------------------------\n",
"Issue #1062: Generalize to conversations between a network of agents\n",
"URL: https://api.github.com/repos/microsoft/FLAML/issues/1062\n",
"--------------------------------------------------------------------------------\n",
"Issue #1061: It is not possible to persist flamlized estimators\n",
"URL: https://api.github.com/repos/microsoft/FLAML/issues/1061\n",
"--------------------------------------------------------------------------------\n",
"Issue #1055: Exemplar Selection (in context learning) API\n",
"URL: https://api.github.com/repos/microsoft/FLAML/issues/1055\n",
"--------------------------------------------------------------------------------\n",
"Issue #1047: TypeError: LogisticRegression.fit() got an unexpected keyword argument 'gpu_per_trial'\n",
"URL: https://api.github.com/repos/microsoft/FLAML/issues/1047\n",
"--------------------------------------------------------------------------------\n",
"Issue #1046: Misc unprioritized TODO list for v0 prototype\n",
"URL: https://api.github.com/repos/microsoft/FLAML/issues/1046\n",
"--------------------------------------------------------------------------------\n",
"\n",
"\n",
"--------------------------------------------------------------------------------\n",
"assistant (to user):\n",
"\n",
"Here are some open issues in the FLAML repository that you can consider contributing to:\n",
"\n",
"1. Issue #1113: Update docstring for oai.completion.\n",
" URL: https://github.com/microsoft/FLAML/issues/1113\n",
"\n",
"2. Issue #1111: Failure cases for autogen agents\n",
" URL: https://github.com/microsoft/FLAML/issues/1111\n",
"\n",
"3. Issue #1108: Integrate opensource LLMs into autogen\n",
" URL: https://github.com/microsoft/FLAML/issues/1108\n",
"\n",
"4. Issue #1107: Auto FLAML code/api generation\n",
" URL: https://github.com/microsoft/FLAML/issues/1107\n",
"\n",
"5. Issue #1104: Integer log uniform sampling distribution issue\n",
" URL: https://github.com/microsoft/FLAML/issues/1104\n",
"\n",
"Please take a look at these issues and choose one that you would like to work on. Once you have selected an issue, visit the issue page on GitHub, read the problem statement, and leave a comment to express your interest in working on it. Make sure to follow the contributing guidelines when submitting your changes.\n",
"\n",
"If you need further assistance, feel free to ask. Good luck, and happy coding!\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\n",
">>>>>>>> NO HUMAN INPUT RECEIVED. USING AUTO REPLY FOR THE USER...\n",
"user (to assistant):\n",
"\n",
2023-06-09 11:40:04 -07:00
"\n",
"\n",
2023-07-08 22:25:43 -07:00
"--------------------------------------------------------------------------------\n",
2023-06-09 11:40:04 -07:00
"assistant (to user):\n",
"\n",
2023-07-10 09:07:48 -07:00
"TERMINATE\n",
2023-05-27 23:17:23 -04:00
"\n",
2023-07-08 22:25:43 -07:00
"--------------------------------------------------------------------------------\n"
2023-05-27 23:17:23 -04:00
]
}
],
"source": [
2023-06-09 11:40:04 -07:00
"assistant.reset()\n",
"assistant.receive(\n",
2023-07-10 09:07:48 -07:00
" \"\"\"I'd like to contribute to flaml. Suggest a task for me.\"\"\",\n",
" user\n",
2023-06-09 11:40:04 -07:00
")"
]
2023-05-27 23:17:23 -04:00
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
2023-07-08 22:25:43 -07:00
"version": "3.9.15"
2023-05-27 23:17:23 -04:00
},
"vscode": {
"interpreter": {
"hash": "949777d72b0d2535278d3dc13498b2535136f6dfe0678499012e853ee9abcab1"
}
},
"widgets": {
"application/vnd.jupyter.widget-state+json": {
"state": {
"2d910cfd2d2a4fc49fc30fbbdc5576a7": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "2.0.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "2.0.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "2.0.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border_bottom": null,
"border_left": null,
"border_right": null,
"border_top": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
},
"454146d0f7224f038689031002906e6f": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "2.0.0",
"model_name": "HBoxModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "2.0.0",
"_model_name": "HBoxModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "2.0.0",
"_view_name": "HBoxView",
"box_style": "",
"children": [
"IPY_MODEL_e4ae2b6f5a974fd4bafb6abb9d12ff26",
"IPY_MODEL_577e1e3cc4db4942b0883577b3b52755",
"IPY_MODEL_b40bdfb1ac1d4cffb7cefcb870c64d45"
],
"layout": "IPY_MODEL_dc83c7bff2f241309537a8119dfc7555",
"tabbable": null,
"tooltip": null
}
},
"577e1e3cc4db4942b0883577b3b52755": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "2.0.0",
"model_name": "FloatProgressModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "2.0.0",
"_model_name": "FloatProgressModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "2.0.0",
"_view_name": "ProgressView",
"bar_style": "success",
"description": "",
"description_allow_html": false,
"layout": "IPY_MODEL_2d910cfd2d2a4fc49fc30fbbdc5576a7",
"max": 1,
"min": 0,
"orientation": "horizontal",
"style": "IPY_MODEL_74a6ba0c3cbc4051be0a83e152fe1e62",
"tabbable": null,
"tooltip": null,
"value": 1
}
},
"6086462a12d54bafa59d3c4566f06cb2": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "2.0.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "2.0.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "2.0.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border_bottom": null,
"border_left": null,
"border_right": null,
"border_top": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
},
"74a6ba0c3cbc4051be0a83e152fe1e62": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "2.0.0",
"model_name": "ProgressStyleModel",
"state": {
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "2.0.0",
"_model_name": "ProgressStyleModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "2.0.0",
"_view_name": "StyleView",
"bar_color": null,
"description_width": ""
}
},
"7d3f3d9e15894d05a4d188ff4f466554": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "2.0.0",
"model_name": "HTMLStyleModel",
"state": {
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "2.0.0",
"_model_name": "HTMLStyleModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "2.0.0",
"_view_name": "StyleView",
"background": null,
"description_width": "",
"font_size": null,
"text_color": null
}
},
"b40bdfb1ac1d4cffb7cefcb870c64d45": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "2.0.0",
"model_name": "HTMLModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "2.0.0",
"_model_name": "HTMLModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "2.0.0",
"_view_name": "HTMLView",
"description": "",
"description_allow_html": false,
"layout": "IPY_MODEL_f1355871cc6f4dd4b50d9df5af20e5c8",
"placeholder": " ",
"style": "IPY_MODEL_ca245376fd9f4354af6b2befe4af4466",
"tabbable": null,
"tooltip": null,
"value": " 1/1 [00:00<00:00, 44.69it/s]"
}
},
"ca245376fd9f4354af6b2befe4af4466": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "2.0.0",
"model_name": "HTMLStyleModel",
"state": {
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "2.0.0",
"_model_name": "HTMLStyleModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "2.0.0",
"_view_name": "StyleView",
"background": null,
"description_width": "",
"font_size": null,
"text_color": null
}
},
"dc83c7bff2f241309537a8119dfc7555": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "2.0.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "2.0.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "2.0.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border_bottom": null,
"border_left": null,
"border_right": null,
"border_top": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
},
"e4ae2b6f5a974fd4bafb6abb9d12ff26": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "2.0.0",
"model_name": "HTMLModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "2.0.0",
"_model_name": "HTMLModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "2.0.0",
"_view_name": "HTMLView",
"description": "",
"description_allow_html": false,
"layout": "IPY_MODEL_6086462a12d54bafa59d3c4566f06cb2",
"placeholder": " ",
"style": "IPY_MODEL_7d3f3d9e15894d05a4d188ff4f466554",
"tabbable": null,
"tooltip": null,
"value": "100%"
}
},
"f1355871cc6f4dd4b50d9df5af20e5c8": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "2.0.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "2.0.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "2.0.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border_bottom": null,
"border_left": null,
"border_right": null,
"border_top": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
}
},
"version_major": 2,
"version_minor": 0
}
}
},
"nbformat": 4,
"nbformat_minor": 2
}