{
"cells": [
{
"cell_type": "markdown",
"id": "2c75da30",
"metadata": {},
"source": [
"# Agent Chat with Multimodal Models\n",
"\n",
"We use **LLaVA** as an example for the multimodal feature. More information about LLaVA can be found in their [GitHub page](https://github.com/haotian-liu/LLaVA)\n",
"\n",
"\n",
"This notebook contains the following information and examples:\n",
"\n",
"1. Install [LLaVA package](#install)\n",
"2. Setup LLaVA Model\n",
" - Option 1: Use [API calls from `Replicate`](#replicate)\n",
" - Option 2: Setup [LLaVA locally (requires GPU)](#local)\n",
"2. Application 1: [Image Chat](#app-1)\n",
"3. Application 2: [Figure Creator](#app-2)"
]
},
{
"cell_type": "code",
"execution_count": 37,
"id": "b1ffe2ab",
"metadata": {},
"outputs": [],
"source": [
"# We use this variable to control where you want to host LLaVA, locally or remotely?\n",
"# More details in the two setup options below.\n",
"LLAVA_MODE = \"remote\" # Either \"local\" or \"remote\"\n",
"assert LLAVA_MODE in [\"local\", \"remote\"]"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "2ec49aeb",
"metadata": {},
"outputs": [],
"source": [
"# we will override the following variables later.\n",
"MODEL_NAME = \"\" \n",
"SEP = \"###\""
]
},
{
"cell_type": "markdown",
"id": "d64154f0",
"metadata": {},
"source": [
"<a id=\"install\"></a>\n",
"## Install the LLaVA library\n",
"\n",
"Please follow the LLaVA GitHub [page](https://github.com/haotian-liu/LLaVA/) to install LLaVA.\n",
"\n",
"\n",
"#### Download the package\n",
"```bash\n",
"git clone https://github.com/haotian-liu/LLaVA.git\n",
"cd LLaVA\n",
"```\n",
"\n",
"#### Install the inference package\n",
"```bash\n",
"conda create -n llava python=3.10 -y\n",
"conda activate llava\n",
"pip install --upgrade pip # enable PEP 660 support\n",
"pip install -e .\n",
"```\n",
"\n",
"### Don't forget AutoGen in the new environment\n",
"```bash\n",
"pip install pyautogen\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "67d45964",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[2023-10-20 12:47:04,159] [INFO] [real_accelerator.py:110:get_accelerator] Setting ds_accelerator to cuda (auto detect)\n"
]
}
],
"source": [
"import requests\n",
"import json\n",
"from llava.conversation import default_conversation as conv\n",
"from llava.conversation import Conversation\n",
"\n",
2023-11-03 21:01:49 -07:00
"from typing import Dict, List, Optional, Tuple, Union\n",
2023-10-23 22:26:41 -07:00
"\n",
"import autogen\n",
2023-11-03 21:01:49 -07:00
"from autogen import AssistantAgent, Agent, ConversableAgent, OpenAIWrapper\n",
"from termcolor import colored"
2023-10-23 22:26:41 -07:00
]
},
{
"cell_type": "markdown",
"id": "acc4703b",
"metadata": {},
"source": [
"<a id=\"replicate\"></a>\n",
"## (Option 1, preferred) Use API Calls from Replicate [Remote]\n",
"We can also use [Replicate](https://replicate.com/yorickvp/llava-13b/api) to use LLaVA directly, which will host the model for you.\n",
"\n",
"1. Run `pip install replicate` to install the package\n",
"2. You need to get an API key from Replicate from your [account setting page](https://replicate.com/account/api-tokens)\n",
"3. Next, copy your API token and authenticate by setting it as an environment variable:\n",
" `export REPLICATE_API_TOKEN=<paste-your-token-here>` \n",
"4. You need to enter your credit card information for Replicate 🥲\n",
" "
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "f650bf3d",
"metadata": {},
"outputs": [],
"source": [
"# pip install replicate\n",
"# import os\n",
"## alternatively, you can put your API key here for the environment variable.\n",
"# os.environ[\"REPLICATE_API_TOKEN\"] = \"r8_xyz your api key goes here~\""
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "267ffd78",
"metadata": {},
"outputs": [],
"source": [
"if LLAVA_MODE == \"remote\":\n",
" import replicate"
]
},
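  {
   "cell_type": "markdown",
   "id": "a1f3c2d4",
   "metadata": {},
   "source": [
    "A quick, optional sanity check: verify that the `REPLICATE_API_TOKEN` environment variable is set before making any remote call, so a missing token fails with a clear message."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b2e4d6f8",
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "\n",
    "if LLAVA_MODE == \"remote\":\n",
    "    # Fail early here instead of with an opaque auth error inside replicate.run\n",
    "    assert os.environ.get(\"REPLICATE_API_TOKEN\"), \\\n",
    "        \"Please set the REPLICATE_API_TOKEN environment variable (see the steps above).\""
   ]
  },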
{
"cell_type": "markdown",
"id": "1805e4bd",
"metadata": {},
"source": [
"<a id=\"local\"></a>\n",
"## [Option 2] Setup LLaVA Locally\n",
"\n",
"\n",
"Some helpful packages and dependencies:\n",
"```bash\n",
"conda install -c nvidia cuda-toolkit\n",
"```\n",
"\n",
"\n",
"### Launch\n",
"\n",
"In one terminal, start the controller first:\n",
"```bash\n",
"python -m llava.serve.controller --host 0.0.0.0 --port 10000\n",
"```\n",
"\n",
"\n",
"Then, in another terminal, start the worker, which will load the model to the GPU:\n",
"```bash\n",
"python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --model-path liuhaotian/llava-v1.5-13b\n",
"``"
]
},
{
"cell_type": "markdown",
"id": "9c29925f",
"metadata": {},
"source": [
"**Note: make sure the environment of this notebook also installed the llava package from `pip install -e .`**"
]
},
{
"cell_type": "code",
"execution_count": 32,
"id": "93bf7915",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'models': ['llava-v1.5-13b']}\n",
"Model Name: llava-v1.5-13b\n"
]
}
],
"source": [
"# Run this code block only if you want to run LlaVA locally\n",
"if LLAVA_MODE == \"local\":\n",
" # Setup some global constants for convenience\n",
" # Note: make sure the addresses below are consistent with your setup in LLaVA \n",
" CONTROLLER_ADDR = \"http://0.0.0.0:10000\"\n",
" SEP = conv.sep\n",
" ret = requests.post(CONTROLLER_ADDR + \"/list_models\")\n",
" print(ret.json())\n",
" MODEL_NAME = ret.json()[\"models\"][0]\n",
" print(\"Model Name:\", MODEL_NAME)"
]
},
{
"cell_type": "markdown",
"id": "307852dd",
"metadata": {},
"source": [
"# Multimodal Functions\n",
"\n",
"The Multimodal Functions library provides a set of utilities to manage and process multimodal data, focusing on textual and image components. The library allows you to format prompts, extract image paths, and handle image data in various formats.\n",
"\n",
"## Functions\n",
"\n",
"\n",
"### `get_image_data`\n",
"\n",
"This function retrieves the content of an image specified by a file path or URL and optionally converts it to base64 format. It can handle both web-hosted images and locally stored files.\n",
"\n",
"\n",
"### `lmm_formater`\n",
"\n",
"This function formats a user-provided prompt containing `<img ...>` tags, replacing these tags with `<image>` or numbered versions like `<image 1>`, `<image 2>`, etc., and extracts the image locations. It returns a tuple containing the new formatted prompt and a list of image data."
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "4bf7f549",
"metadata": {},
"outputs": [],
"source": [
"import base64\n",
"import re\n",
"from io import BytesIO\n",
"\n",
"from PIL import Image\n",
"\n",
"import re\n",
"\n",
"\n",
"def get_image_data(image_file, use_b64=True):\n",
" if image_file.startswith('http://') or image_file.startswith('https://'):\n",
" response = requests.get(image_file)\n",
" content = response.content\n",
" elif re.match(r\"data:image/(?:png|jpeg);base64,\", image_file):\n",
" return re.sub(r\"data:image/(?:png|jpeg);base64,\", \"\", image_file)\n",
" else:\n",
" image = Image.open(image_file).convert('RGB')\n",
" buffered = BytesIO()\n",
" image.save(buffered, format=\"PNG\")\n",
" content = buffered.getvalue()\n",
" \n",
" if use_b64:\n",
" return base64.b64encode(content).decode('utf-8')\n",
" else:\n",
" return content\n",
"\n",
"def lmm_formater(prompt: str, order_image_tokens: bool = False) -> Tuple[str, List[str]]:\n",
" \"\"\"\n",
" Formats the input prompt by replacing image tags and returns the new prompt along with image locations.\n",
" \n",
" Parameters:\n",
" - prompt (str): The input string that may contain image tags like <img ...>.\n",
" - order_image_tokens (bool, optional): Whether to order the image tokens with numbers. \n",
" It will be useful for GPT-4V. Defaults to False.\n",
" \n",
" Returns:\n",
" - Tuple[str, List[str]]: A tuple containing the formatted string and a list of images (loaded in b64 format).\n",
" \"\"\"\n",
" \n",
" # Initialize variables\n",
" new_prompt = prompt\n",
" image_locations = []\n",
" images = []\n",
" image_count = 0\n",
" \n",
" # Regular expression pattern for matching <img ...> tags\n",
" img_tag_pattern = re.compile(r'<img ([^>]+)>')\n",
" \n",
" # Find all image tags\n",
" for match in img_tag_pattern.finditer(prompt):\n",
" image_location = match.group(1)\n",
" \n",
" try: \n",
" img_data = get_image_data(image_location)\n",
" except:\n",
" # Remove the token\n",
" print(f\"Warning! Unable to load image from {image_location}\")\n",
" new_prompt = new_prompt.replace(match.group(0), \"\", 1)\n",
" continue\n",
" \n",
" image_locations.append(image_location)\n",
" images.append(img_data)\n",
" \n",
" # Increment the image count and replace the tag in the prompt\n",
" new_token = f'<image {image_count}>' if order_image_tokens else \"<image>\"\n",
"\n",
" new_prompt = new_prompt.replace(match.group(0), new_token, 1)\n",
" image_count += 1\n",
" \n",
" return new_prompt, images\n",
"\n",
"\n",
"\n",
"def gpt4v_formatter(prompt: str) -> List[Union[str, dict]]:\n",
" \"\"\"\n",
" Formats the input prompt by replacing image tags and returns a list of text and images.\n",
" \n",
" Parameters:\n",
" - prompt (str): The input string that may contain image tags like <img ...>.\n",
"\n",
" Returns:\n",
" - List[Union[str, dict]]: A list of alternating text and image dictionary items.\n",
" \"\"\"\n",
" output = []\n",
" last_index = 0\n",
" image_count = 0\n",
" \n",
" # Regular expression pattern for matching <img ...> tags\n",
" img_tag_pattern = re.compile(r'<img ([^>]+)>')\n",
" \n",
" # Find all image tags\n",
" for match in img_tag_pattern.finditer(prompt):\n",
" image_location = match.group(1)\n",
" \n",
" try:\n",
" img_data = get_image_data(image_location)\n",
" except:\n",
" # Warning and skip this token\n",
" print(f\"Warning! Unable to load image from {image_location}\")\n",
" continue\n",
"\n",
" # Add text before this image tag to output list\n",
" output.append(prompt[last_index:match.start()])\n",
" \n",
" # Add image data to output list\n",
" output.append({\"image\": img_data})\n",
" \n",
" last_index = match.end()\n",
" image_count += 1\n",
"\n",
" # Add remaining text to output list\n",
" output.append(prompt[last_index:])\n",
" \n",
" return output\n",
"\n",
"\n",
"def extract_img_paths(paragraph: str) -> list:\n",
" \"\"\"\n",
" Extract image paths (URLs or local paths) from a text paragraph.\n",
" \n",
" Parameters:\n",
" paragraph (str): The input text paragraph.\n",
" \n",
" Returns:\n",
" list: A list of extracted image paths.\n",
" \"\"\"\n",
" # Regular expression to match image URLs and file paths\n",
" img_path_pattern = re.compile(r'\\b(?:http[s]?://\\S+\\.(?:jpg|jpeg|png|gif|bmp)|\\S+\\.(?:jpg|jpeg|png|gif|bmp))\\b', \n",
" re.IGNORECASE)\n",
" \n",
" # Find all matches in the paragraph\n",
" img_paths = re.findall(img_path_pattern, paragraph)\n",
" return img_paths\n",
"\n",
"\n",
"def _to_pil(data):\n",
" return Image.open(BytesIO(base64.b64decode(data)))\n",
"\n",
"\n",
"\n",
"def llava_call_binary(prompt: str, images: list, \n",
" model_name:str = MODEL_NAME, \n",
" max_new_tokens:int=1000, temperature: float=0.5, seed: int = 1):\n",
" # TODO 1: add caching around the LLaVA call to save compute and cost\n",
" # TODO 2: add `seed` to ensure reproducibility. The seed is not working now.\n",
" if LLAVA_MODE == \"local\":\n",
" headers = {\"User-Agent\": \"LLaVA Client\"}\n",
" pload = {\n",
" \"model\": model_name,\n",
" \"prompt\": prompt,\n",
" \"max_new_tokens\": max_new_tokens,\n",
" \"temperature\": temperature,\n",
" \"stop\": SEP,\n",
" \"images\": images,\n",
" }\n",
"\n",
" response = requests.post(CONTROLLER_ADDR + \"/worker_generate_stream\", headers=headers,\n",
" json=pload, stream=False)\n",
"\n",
" for chunk in response.iter_lines(chunk_size=8192, decode_unicode=False, delimiter=b\"\\0\"):\n",
" if chunk:\n",
" data = json.loads(chunk.decode(\"utf-8\"))\n",
" output = data[\"text\"].split(SEP)[-1]\n",
" elif LLAVA_MODE == \"remote\":\n",
" # The Replicate version of the model only support 1 image for now.\n",
" img = 'data:image/jpeg;base64,' + images[0]\n",
" response = replicate.run(\n",
" \"yorickvp/llava-13b:2facb4a474a0462c15041b78b1ad70952ea46b5ec6ad29583c0b29dbd4249591\",\n",
" input={\"image\": img, \"prompt\": prompt.replace(\"<image>\", \" \"), \"seed\": seed}\n",
" )\n",
" # The yorickvp/llava-13b model can stream output as it's running.\n",
" # The predict method returns an iterator, and you can iterate over that output.\n",
" output = \"\"\n",
" for item in response:\n",
" # https://replicate.com/yorickvp/llava-13b/versions/2facb4a474a0462c15041b78b1ad70952ea46b5ec6ad29583c0b29dbd4249591/api#output-schema\n",
" output += item\n",
" \n",
" # Remove the prompt and the space.\n",
" output = output.replace(prompt, \"\").strip().rstrip()\n",
" return output\n",
" \n",
"\n",
"def llava_call(prompt:str, model_name: str=MODEL_NAME, images: list=[], \n",
" max_new_tokens:int=1000, temperature: float=0.5, seed: int = 1) -> str:\n",
" \"\"\"\n",
" Makes a call to the LLaVA service to generate text based on a given prompt and optionally provided images.\n",
"\n",
" Args:\n",
" - prompt (str): The input text for the model. Any image paths or placeholders in the text should be replaced with \"<image>\".\n",
" - model_name (str, optional): The name of the model to use for the text generation. Defaults to the global constant MODEL_NAME.\n",
" - images (list, optional): A list of image paths or URLs. If not provided, they will be extracted from the prompt.\n",
" If provided, they will be appended to the prompt with the \"<image>\" placeholder.\n",
" - max_new_tokens (int, optional): Maximum number of new tokens to generate. Defaults to 1000.\n",
" - temperature (float, optional): temperature for the model. Defaults to 0.5.\n",
"\n",
" Returns:\n",
" - str: Generated text from the model.\n",
"\n",
" Raises:\n",
" - AssertionError: If the number of \"<image>\" tokens in the prompt and the number of provided images do not match.\n",
" - RunTimeError: If any of the provided images is empty.\n",
"\n",
" Notes:\n",
" - The function uses global constants: CONTROLLER_ADDR and SEP.\n",
" - Any image paths or URLs in the prompt are automatically replaced with the \"<image>\" token.\n",
" - If more images are provided than there are \"<image>\" tokens in the prompt, the extra tokens are appended to the end of the prompt.\n",
" \"\"\"\n",
"\n",
" if len(images) == 0:\n",
" prompt, images = lmm_formater(prompt, order_image_tokens=False)\n",
" else:\n",
" # Append the <image> token if missing\n",
" assert prompt.count(\"<image>\") <= len(images), \"the number \"\n",
" \"of image token in prompt and in the images list should be the same!\"\n",
" num_token_missing = len(images) - prompt.count(\"<image>\")\n",
" prompt += \" <image> \" * num_token_missing\n",
" images = [get_image_data(x) for x in images]\n",
" \n",
" for im in images:\n",
" if len(im) == 0:\n",
" raise RunTimeError(\"An image is empty!\")\n",
"\n",
" return llava_call_binary(prompt, images, \n",
" model_name, \n",
" max_new_tokens, temperature, seed)\n"
]
},
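  {
   "cell_type": "markdown",
   "id": "c3d5e7f9",
   "metadata": {},
   "source": [
    "A minimal usage sketch of the helpers above (the sample file names and URL are illustrative): `extract_img_paths` pulls image paths out of free text with no network access, while `lmm_formater` swaps `<img ...>` tags for `<image>` tokens and loads each image as a base64 string, so it needs network access for remote URLs."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d4e6f8a0",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Pure string processing: no network access needed.\n",
    "print(extract_img_paths(\"See cat.png and https://example.com/dog.jpg for details.\"))\n",
    "# -> ['cat.png', 'https://example.com/dog.jpg']\n",
    "\n",
    "# Fetches the (illustrative) URL below, so it needs network access.\n",
    "new_prompt, imgs = lmm_formater(\n",
    "    \"Describe <img https://github.com/haotian-liu/LLaVA/raw/main/images/llava_logo.png>\")\n",
    "print(new_prompt)        # 'Describe <image>'\n",
    "print(len(imgs[0]) > 0)  # the image, base64-encoded"
   ]
  },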
{
"cell_type": "markdown",
"id": "4123df2c",
"metadata": {},
"source": [
"Here is the image that we are going to use.\n",
"\n",
""
]
},
{
"cell_type": "markdown",
"id": "05ed5a35",
"metadata": {},
"source": [
"We can call llava by providing the prompt and images separately.\n"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "ec31ca74",
"metadata": {
"scrolled": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The image features a small, orange, and black toy animal, possibly a stuffed dog or a toy horse, with flames coming out of its back. The toy is sitting on a table, and it appears to be a unique and creative design. The toy is wearing glasses, adding a touch of whimsy to its appearance. The overall scene is quite eye-catching and playful.\n"
]
}
],
"source": [
"out = llava_call(\"Describe this image: <image>\", \n",
" images=[\"https://github.com/haotian-liu/LLaVA/raw/main/images/llava_logo.png\"])\n",
"print(out)"
]
},
{
"cell_type": "markdown",
"id": "6619dc30",
"metadata": {},
"source": [
"Or, we can also call LLaVA with only prompt, with images embedded in the prompt with the <img xxx> format\n"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "12a7db5a",
"metadata": {
"scrolled": true
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"A red toy with flames and glasses on it.\n"
]
}
],
"source": [
"out = llava_call(\"Describe this image in one sentence: <img https://github.com/haotian-liu/LLaVA/raw/main/images/llava_logo.png>\")\n",
"print(out)"
]
},
{
"cell_type": "markdown",
"id": "7e4faf59",
"metadata": {},
"source": [
"<a id=\"app-1\"></a>\n",
"## Application 1: Image Chat\n",
"\n",
"In this section, we present a straightforward dual-agent architecture to enable user to chat with a multimodal agent.\n",
"\n",
"\n",
"First, we show this image and ask a question.\n",
""
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "286938aa",
"metadata": {},
"outputs": [],
"source": [
"\n",
"config_list_gpt4 = autogen.config_list_from_json(\n",
" \"OAI_CONFIG_LIST\",\n",
" filter_dict={\n",
" \"model\": [\"gpt-4\", \"gpt-4-0314\", \"gpt4\", \"gpt-4-32k\", \"gpt-4-32k-0314\", \"gpt-4-32k-v0314\"],\n",
" },\n",
")\n",
"\n",
"llm_config = {\"config_list\": config_list_gpt4, \"seed\": 42}\n",
"\n",
"DEFAULT_LMM_SYS_MSG = \"\"\"You are a helpful AI assistant.\n",
"You can also view images, where the \"<image i>\" represent the i-th image you received.\"\"\"\n",
"\n",
"class MultimodalConversableAgent(ConversableAgent):\n",
" def __init__(\n",
" self,\n",
" name: str,\n",
" system_message: Optional[Tuple[str, List]] = DEFAULT_LMM_SYS_MSG,\n",
" is_termination_msg=None,\n",
" *args,\n",
" **kwargs,\n",
" ):\n",
" \"\"\"\n",
" Args:\n",
" name (str): agent name.\n",
" system_message (str): system message for the ChatCompletion inference.\n",
" Please override this attribute if you want to reprogram the agent.\n",
" **kwargs (dict): Please refer to other kwargs in\n",
" [ConversableAgent](conversable_agent#__init__).\n",
" \"\"\"\n",
" super().__init__(\n",
" name,\n",
" system_message,\n",
" is_termination_msg=is_termination_msg,\n",
" *args,\n",
" **kwargs,\n",
" )\n",
" \n",
" self.update_system_message(system_message)\n",
" self._is_termination_msg = (\n",
" is_termination_msg if is_termination_msg is not None else (lambda x: x.get(\"content\")[-1] == \"TERMINATE\")\n",
" )\n",
" \n",
" @property\n",
" def system_message(self) -> List:\n",
" \"\"\"Return the system message.\"\"\"\n",
" return self._oai_system_message[0][\"content\"]\n",
"\n",
" def update_system_message(self, system_message: str):\n",
" \"\"\"Update the system message.\n",
"\n",
" Args:\n",
" system_message (str): system message for the ChatCompletion inference.\n",
" \"\"\"\n",
" self._oai_system_message[0][\"content\"] = self._message_to_dict(system_message)[\"content\"]\n",
" self._oai_system_message[0][\"role\"] = \"system\"\n",
" \n",
" @staticmethod\n",
" def _message_to_dict(message: Union[Dict, List, str]):\n",
" \"\"\"Convert a message to a dictionary.\n",
"\n",
" The message can be a string or a dictionary. The string will be put in the \"content\" field of the new dictionary.\n",
" \"\"\"\n",
" if isinstance(message, str):\n",
" return {\"content\": gpt4v_formatter(message)}\n",
" if isinstance(message, list):\n",
" return {\"content\": message}\n",
" else:\n",
" return message\n",
" \n",
" def _content_str(self, content: List) -> str:\n",
" rst = \"\"\n",
" for item in content:\n",
" if isinstance(item, str):\n",
" rst += item\n",
" else:\n",
" assert isinstance(item, dict) and \"image\" in item, (\"Wrong content format.\")\n",
" rst += \"<image>\"\n",
" return rst\n",
" \n",
" def _print_received_message(self, message: Union[Dict, str], sender: Agent):\n",
" # print the message received\n",
" print(colored(sender.name, \"yellow\"), \"(to\", f\"{self.name}):\\n\", flush=True)\n",
" if message.get(\"role\") == \"function\":\n",
" func_print = f\"***** Response from calling function \\\"{message['name']}\\\" *****\"\n",
" print(colored(func_print, \"green\"), flush=True)\n",
" print(self._content_str(message[\"content\"]), flush=True)\n",
" print(colored(\"*\" * len(func_print), \"green\"), flush=True)\n",
" else:\n",
" content = message.get(\"content\")\n",
" if content is not None:\n",
" if \"context\" in message:\n",
    "                content = OpenAIWrapper.instantiate(\n",
" content,\n",
" message[\"context\"],\n",
" self.llm_config and self.llm_config.get(\"allow_format_str_template\", False),\n",
" )\n",
" print(self._content_str(content), flush=True)\n",
" if \"function_call\" in message:\n",
" func_print = f\"***** Suggested function Call: {message['function_call'].get('name', '(No function name found)')} *****\"\n",
" print(colored(func_print, \"green\"), flush=True)\n",
" print(\n",
" \"Arguments: \\n\",\n",
" message[\"function_call\"].get(\"arguments\", \"(No arguments found)\"),\n",
" flush=True,\n",
" sep=\"\",\n",
" )\n",
" print(colored(\"*\" * len(func_print), \"green\"), flush=True)\n",
" print(\"\\n\", \"-\" * 80, flush=True, sep=\"\")\n",
" # TODO: we may want to udpate `generate_code_execution_reply` or `extract_code` for the \"content\" type change.\n",
" \n",
"\n",
"DEFAULT_LLAVA_SYS_MSG = \"You are an AI agent and you can view images.\"\n",
"class LLaVAAgent(MultimodalConversableAgent):\n",
" def __init__(\n",
" self,\n",
" name: str,\n",
" system_message: Optional[Tuple[str, List]] = DEFAULT_LLAVA_SYS_MSG,\n",
" *args,\n",
" **kwargs,\n",
" ):\n",
" \"\"\"\n",
" Args:\n",
" name (str): agent name.\n",
" system_message (str): system message for the ChatCompletion inference.\n",
" Please override this attribute if you want to reprogram the agent.\n",
" **kwargs (dict): Please refer to other kwargs in\n",
" [ConversableAgent](conversable_agent#__init__).\n",
" \"\"\"\n",
" super().__init__(\n",
" name,\n",
" system_message=system_message,\n",
" *args,\n",
" **kwargs,\n",
" )\n",
" self.register_reply([Agent, None], reply_func=LLaVAAgent._image_reply, position=0)\n",
"\n",
" def _image_reply(\n",
" self,\n",
" messages=None,\n",
" sender=None, config=None\n",
" ):\n",
" # Note: we did not use \"llm_config\" yet.\n",
" # TODO 1: make the LLaVA API design compatible with llm_config\n",
" \n",
" if all((messages is None, sender is None)):\n",
" error_msg = f\"Either {messages=} or {sender=} must be provided.\"\n",
" logger.error(error_msg)\n",
" raise AssertionError(error_msg)\n",
"\n",
" if messages is None:\n",
" messages = self._oai_messages[sender]\n",
"\n",
" # The formats for LLaVA and GPT are different. So, we manually handle them here.\n",
" # TODO: format the images from the history accordingly.\n",
" images = []\n",
" prompt = self._content_str(self.system_message) + \"\\n\"\n",
" for msg in messages:\n",
" role = \"Human\" if msg[\"role\"] == \"user\" else \"Assistant\"\n",
" images += [d[\"image\"] for d in msg[\"content\"] if isinstance(d, dict)]\n",
" content_prompt = self._content_str(msg[\"content\"])\n",
" prompt += f\"{SEP}{role}: {content_prompt}\\n\"\n",
" prompt += \"\\n\" + SEP + \"Assistant: \"\n",
" print(colored(prompt, \"blue\"))\n",
" \n",
" out = \"\"\n",
" retry = 10\n",
" while len(out) == 0 and retry > 0:\n",
" # image names will be inferred automatically from llava_call\n",
" out = llava_call_binary(prompt=prompt, images=images, temperature=0, max_new_tokens=2000)\n",
" retry -= 1\n",
" \n",
" assert out != \"\", \"Empty response from LLaVA.\"\n",
" \n",
" \n",
" return True, out"
]
},
{
"cell_type": "markdown",
"id": "e3d5580e",
"metadata": {},
"source": [
"Within the user proxy agent, we can decide to activate the human input mode or not (for here, we use human_input_mode=\"NEVER\" for conciseness). This allows you to interact with LLaVA in a multi-round dialogue, enabling you to provide feedback as the conversation unfolds."
]
},
{
"cell_type": "code",
"execution_count": 47,
"id": "67157629",
"metadata": {
"scrolled": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[33mUser_proxy\u001b[0m (to image-explainer):\n",
"\n",
"What's the breed of this dog? \n",
"<image>.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[34mYou are an AI agent and you can view images.\n",
"###Human: What's the breed of this dog? \n",
"<image>.\n",
"\n",
"###Assistant: \u001b[0m\n",
"\u001b[33mimage-explainer\u001b[0m (to User_proxy):\n",
"\n",
"The dog in the image is a poodle.\n",
"\n",
"--------------------------------------------------------------------------------\n"
]
}
],
"source": [
"image_agent = LLaVAAgent(\n",
" name=\"image-explainer\",\n",
" max_consecutive_auto_reply=0\n",
")\n",
"\n",
"user_proxy = autogen.UserProxyAgent(\n",
" name=\"User_proxy\",\n",
" system_message=\"A human admin.\",\n",
" code_execution_config={\n",
" \"last_n_messages\": 3,\n",
" \"work_dir\": \"groupchat\"\n",
" },\n",
" human_input_mode=\"NEVER\", # Try between ALWAYS or NEVER\n",
"# llm_config=llm_config,\n",
" max_consecutive_auto_reply=0,\n",
")\n",
"\n",
"# Ask the question with an image\n",
"user_proxy.initiate_chat(image_agent, \n",
" message=\"\"\"What's the breed of this dog? \n",
"<img https://th.bing.com/th/id/R.422068ce8af4e15b0634fe2540adea7a?rik=y4OcXBE%2fqutDOw&pid=ImgRaw&r=0>.\"\"\")"
]
},
{
"cell_type": "markdown",
"id": "3f60521d",
"metadata": {},
"source": [
"Now, input another image, and ask a followup question.\n",
"\n",
""
]
},
{
"cell_type": "code",
"execution_count": 49,
"id": "73a2b234",
"metadata": {
"scrolled": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[33mUser_proxy\u001b[0m (to image-explainer):\n",
"\n",
"How about these breeds? \n",
"<image>\n",
"\n",
"Among the breeds, which one barks less?\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[34mYou are an AI agent and you can view images.\n",
"###Human: What's the breed of this dog? \n",
"<image>.\n",
"###Assistant: The dog in the image is a poodle.\n",
"###Human: How about these breeds? <image> and <image>\n",
"Among all the breeds, which one barks less?\n",
"###Assistant: The breeds of the dog in the image are a poodle and a terrier. Among the two, the poodle is known to bark less.\n",
"###Human: How about these breeds? \n",
"<image>\n",
"\n",
"Among the breeds, which one barks less?\n",
"\n",
"###Assistant: \u001b[0m\n",
"\u001b[33mimage-explainer\u001b[0m (to User_proxy):\n",
"\n",
"Among the breeds, the poodle is known to bark less.\n",
"\n",
"--------------------------------------------------------------------------------\n"
]
}
],
"source": [
"# Ask the question with an image\n",
"user_proxy.send(message=\"\"\"How about these breeds? \n",
"<img https://th.bing.com/th/id/OIP.29Mi2kJmcHHyQVGe_0NG7QHaEo?pid=ImgDet&rs=1>\n",
"\n",
"Among the breeds, which one barks less?\"\"\", \n",
" recipient=image_agent)"
]
},
{
"cell_type": "markdown",
"id": "0c40d0eb",
"metadata": {},
"source": [
"<a id=\"app-2\"></a>\n",
"## Application 2: Figure Creator\n",
"\n",
"Here, we define a `FigureCreator` agent, which contains three child agents: commander, coder, and critics.\n",
"\n",
"- Commander: interacts with users, runs code, and coordinates the flow between the coder and critics.\n",
"- Coder: writes code for visualization.\n",
"- Critics: LLaVA-based agent that provides comments and feedback on the generated image."
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "e8eca993",
"metadata": {},
"outputs": [],
"source": [
"class FigureCreator(AssistantAgent):\n",
"\n",
" def __init__(self, n_iters=2, **kwargs):\n",
" \"\"\"\n",
" Initializes a FigureCreator instance.\n",
" \n",
" This agent facilitates the creation of visualizations through a collaborative effort among its child agents: commander, coder, and critics.\n",
" \n",
" Parameters:\n",
" - n_iters (int, optional): The number of \"improvement\" iterations to run. Defaults to 2.\n",
" - **kwargs: keyword arguments for the parent AssistantAgent.\n",
" \"\"\"\n",
" super().__init__(**kwargs)\n",
" self.register_reply([Agent, None],\n",
" reply_func=FigureCreator._reply_user,\n",
" position=0)\n",
" self._n_iters = n_iters\n",
"\n",
" def _reply_user(self, messages=None, sender=None, config=None):\n",
" if all((messages is None, sender is None)):\n",
" error_msg = f\"Either {messages=} or {sender=} must be provided.\"\n",
" logger.error(error_msg)\n",
" raise AssertionError(error_msg)\n",
"\n",
" if messages is None:\n",
" messages = self._oai_messages[sender]\n",
"\n",
" user_question = messages[-1][\"content\"]\n",
"\n",
" ### Define the agents\n",
" commander = AssistantAgent(\n",
" name=\"Commander\",\n",
" human_input_mode=\"NEVER\",\n",
" max_consecutive_auto_reply=10,\n",
" system_message=\n",
" \"Help me run the code, and tell other agents it is in the <img result.jpg> file location.\",\n",
" is_termination_msg=lambda x: x.get(\"content\", \"\").rstrip().endswith(\n",
" \"TERMINATE\"),\n",
" code_execution_config={\n",
" \"last_n_messages\": 3,\n",
" \"work_dir\": \".\",\n",
" \"use_docker\": False\n",
" },\n",
" llm_config=self.llm_config,\n",
" )\n",
"\n",
" critics = LLaVAAgent(\n",
" name=\"Critics\",\n",
" system_message=\n",
" \"Criticize the input figure. How to replot the figure so it will be better? Find bugs and issues for the figure. If you think the figures is good enough, then simply say NO_ISSUES\",\n",
" llm_config=self.llm_config,\n",
" human_input_mode=\"NEVER\",\n",
" max_consecutive_auto_reply=0,\n",
" # use_docker=False,\n",
" )\n",
"\n",
" coder = AssistantAgent(\n",
" name=\"Coder\",\n",
" llm_config=self.llm_config,\n",
" )\n",
"\n",
" coder.update_system_message(\n",
" coder.system_message +\n",
" \"ALWAYS save the figure in `result.jpg` file. Tell other agents it is in the <img result.jpg> file location.\"\n",
" )\n",
"\n",
" # Data flow begins\n",
" commander.initiate_chat(coder, message=user_question)\n",
" img = Image.open(\"result.jpg\")\n",
" plt.imshow(img)\n",
" plt.axis('off') # Hide the axes\n",
" plt.show()\n",
" \n",
" for i in range(self._n_iters):\n",
" commander.send(message=\"Improve <img result.jpg>\",\n",
" recipient=critics,\n",
" request_reply=True)\n",
" \n",
" feedback = commander._oai_messages[critics][-1][\"content\"]\n",
" if feedback.find(\"NO_ISSUES\") >= 0:\n",
" break\n",
" commander.send(\n",
" message=\"Here is the feedback to your figure. Please improve! Save the result to `result.jpg`\\n\"\n",
" + feedback,\n",
" recipient=coder,\n",
" request_reply=True)\n",
" img = Image.open(\"result.jpg\")\n",
" plt.imshow(img)\n",
" plt.axis('off') # Hide the axes\n",
" plt.show()\n",
" \n",
" return True, \"result.jpg\""
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "977b9017",
"metadata": {
"scrolled": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[33mUser\u001b[0m (to Figure Creator~):\n",
"\n",
"\n",
"Plot a figure by using the data from:\n",
"https://raw.githubusercontent.com/vega/vega/main/docs/data/seattle-weather.csv\n",
"\n",
"I want to show both temperature high and low.\n",
"\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mCommander\u001b[0m (to Coder):\n",
"\n",
"\n",
"Plot a figure by using the data from:\n",
"https://raw.githubusercontent.com/vega/vega/main/docs/data/seattle-weather.csv\n",
"\n",
"I want to show both temperature high and low.\n",
"\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mCoder\u001b[0m (to Commander):\n",
"\n",
"To plot the figure using the data from the provided URL, we'll first download the data, then use the pandas library to read the CSV data and finally, use the matplotlib library to plot the temperature high and low.\n",
"\n",
"Step 1: Download the CSV file\n",
"Step 2: Read the CSV file using pandas\n",
"Step 3: Plot the temperature high and low using matplotlib\n",
"\n",
"Please execute the following code:\n",
"\n",
"```python\n",
"import pandas as pd\n",
"import matplotlib.pyplot as plt\n",
"import urllib.request\n",
"\n",
"# Download the CSV file from the URL\n",
"url = \"https://raw.githubusercontent.com/vega/vega/main/docs/data/seattle-weather.csv\"\n",
"urllib.request.urlretrieve(url, \"seattle-weather.csv\")\n",
"\n",
"# Read the CSV file using pandas\n",
"data = pd.read_csv(\"seattle-weather.csv\")\n",
"\n",
"# Plot the temperature high and low using matplotlib\n",
"plt.plot(data[\"date\"], data[\"temp_max\"], label=\"Temperature High\")\n",
"plt.plot(data[\"date\"], data[\"temp_min\"], label=\"Temperature Low\")\n",
"plt.xlabel(\"Date\")\n",
"plt.ylabel(\"Temperature\")\n",
"plt.title(\"Seattle Weather - Temperature High and Low\")\n",
"plt.legend()\n",
"plt.savefig(\"result.jpg\")\n",
"plt.show()\n",
"```\n",
"\n",
"After executing the code, you should see the desired plot with temperature high and low. The figure will be saved as `result.jpg`.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[31m\n",
">>>>>>>> EXECUTING CODE BLOCK 0 (inferred language is python)...\u001b[0m\n",
"\u001b[33mCommander\u001b[0m (to Coder):\n",
"\n",
"exitcode: 0 (execution succeeded)\n",
"Code output: \n",
"Figure(640x480)\n",
"\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mCoder\u001b[0m (to Commander):\n",
"\n",
"Great! The code execution succeeded, and the figure has been plotted using the data provided. The figure is saved in the `result.jpg` file. Please check the file for the plotted figure showing both temperature high and low.\n",
"\n",
"TERMINATE\n",
"\n",
"--------------------------------------------------------------------------------\n"
]
},
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAgAAAAGFCAYAAACL7UsMAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjcuMSwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/bCgiHAAAACXBIWXMAAA9hAAAPYQGoP6dpAAEAAElEQVR4nOy9d7wdRf3//5yZ3dNuzU0PJJAQAqGHKkXEQhMQLPizIKKigljAXgDbx4KoHwRFQL8qCAgIKiAgoKh8AGlBCCKEEEIIpLdbzzm7OzO/P2Zn795LQIMCuTf7yuPknrpldmfe7fV+v4W11lKgQIECBQoU2KwgX+kDKFCgQIECBQq8/CgUgAIFChQoUGAzRKEAFChQoECBApshCgWgQIECBQoU2AxRKAAFChQoUKDAZohCAShQoECBAgU2QxQKQIECBQoUKLAZolAAChQoUKBAgc0QhQJQoECBAgUKbIYoFIACBQoUKFBgM0ShABQoUKBAgQKbIQoFoECBAgUKFNgMUSgABQoUKFCgwGaIQgEoUKBAgQIFNkMUCkCBAgUKFCiwGaJQAAoUKFCgQIHNEIUCUKBAgQIFCmyGKBSAAgUKFChQYDNEoQAUKFCgQIECmyEKBaBAgQIFChTYDFEoAAUKFChQoMBmiEIBKFCgQIECBTZDFApAgQIFChQosBmiUAAKFChQoECBzRCFAlCgQIECBQpshigUgAIFChQoUGAzRKEAFChQoECBApshCgWgQIECBQoU2AxRKAAFChQoUKDAZohCAShQoECBAgU2QxQKQIECBQoUKLAZolAAChQoUKBAgc0QwSt9AAX+O7DWAiCEyJ7n3/ef5T8XQmz0PowxSCmHvGetzd6z1g7ZbpIkSCmRUg753gvtw3/PP89vTwhBHMcEwdBbN0kSwjBEa42UcqPPLb//F/ps+LG8HHi+cdBaZ9fUGJM9f7mO67+B/Hj7e+vlOH5/n/hx9WOslMo+98fn71l/XPmxzv8+/x3/2w3NjZfr+vi554/LH7d/7e+hl2vMC2x6KDwAowjWWrTWxHGM1hpjTPYXIIqibFHyjxezD3CLS34b+b/5hzFmyHH5Y3i+bfvvRlE0ZBE2xmCMyfbrz8mfXxzHNBqNbBtJkmz0ueWPwx+3328cx9mx+X2+mPHb2OPQWj9HSEZRlO3ff5a/ziMNG7pnXmr4cbTWEscxAHEc02w2s+OI4zi7/vljyh9j/v7On0P+3k2ShCRJXrbrE8cxSZJkx+Dfy3/mj8tam82bApsfCgVgFMFPaqVUZsl47T6O4+y9/0R4KaWy/RhjaDab2T7ygtG/zh9XEATZMTwf8hbKcIvJWzFKKaIootFoIKWk2WxSqVQIw5AoirLj3FgkSZItlN669hZcEARIKV8WwZ+HPwav1EkpCcMw+8wLFaXUiLPivKDMW+DDPR0vFYIgIAzDIVZ/EAQEQZDdY/49fw82Gg2AzKPln4O7Fvnn/uHni5+TL1bx3hgopbJ71h+Pv4/DMCQIgsw7IISgVCq9pMdTYNNFoQCMIlhrCcMwE/gwuDAFQYDWmnq9/qKEI5AJG2stlUoFIUS2yHj3rRdEfmHPLzJ+4XshL0DeaslbJ0IIenp6CIKAOI6pVCqUSiWMMVQqlcwC8++9mPBG/njzC7dXdgYGBjJF4OWC9zx4oe/P05+fv64AjUZjRHoB8u73vFLzUu/T37PD96e1JkmSIa59Ywzlcplms5l9J38t8nMjv32vkAOZR+HlODfvAdNa02w2KZVKQ7wUSqkhXowCmycKDsAoQd5S9pN8xYoV3H///Tz55JP09/czadIktttuO/bdd98XtQ8v+LxL8+GHH2bWrFmMGTMms4wfeughJk2axMSJEzMhetVVVzFp0iQOOOCAIRbshmCtJQgC6vU6AL/85S/Zaaed2Hfffeno6KC7u5vf/e53NJtN3v/+92fKxr333stTTz3FkUceSaVSeQ5H4N/F008/TV9fHzvuuGO2eD/44IMsWLCAww8/nHK5nL3/UisCeWvx7rvv5rbbbkNKmSlXHlprDjzwQPbff/+XVTn5b8ALYK01999/P1tttRXjx49/yffrhaSUkquvvpoxY8Zw0EEHZfdsT08PN910E9tvvz277bYby5Yt4/rrr+fd7343UkriOKZaraK1HuJx816NefPmMX/+fA499FDa2tqew795KeEVj+uvv54xY8aw//77DwmZCSFYu3Ytv/vd73jd617HlClTXvR8KTCyMbJWiwLPi7xVKIRg9erVfOc73+Hcc8+lr68PIQRz587lsssue1EWMgxan2EYsmTJEr71rW8xf/78bBHs7+/nrLPO4t57783c8UmS8Itf/IIHH3wwE5p5K3848gLOGMNf/vIXfv/732eW15IlS/jZz37Gd77zHVavXo21loGBAa6++mpuv/32zAvwYvkN1113Heedd15GNBRC8Oijj3L55ZfT29sLQBiGL4ug9Z4IIQTlcpmuri7Gjx9Po9Hghz/8IcuXL6ezs5POzk5qtdqL9uy8kvBCOIoizjjjDO68886X5Tz82CqluPzyy7npppsolUoopTKvypVXXsm8efNQSlEul+ns7CQIgiEx/uHkVn//zp07l1/96lf09/dngv/l8h55JfuKK67gtttuy8Ia3sMlpaS7u5vzzz+fxYsXFyGAzRiF2jfK4AXnAw88wP/93/9x2WWXMXXq1IzI5IW1X5TWrFnD+vXrsdbS3t7O2LFjM/dgb28v69atI45jyuUyY8aMoaWlhSRJWLFiBfPnz+eZZ56hq6uLlpYW1q1bx5IlS1i4cCGPPPII7e3tbLHFFmitM4WgXC5jrWXp0qUMDAwghGDChAm0trZmC5Rn9CdJwv77789NN93EihUrmDp1KvPmzWOLLbag0Wjw97//ncMPPxytNbfffjvve9/7qNVq1Ot1Vq1aRbPZJAxDxo8fT0tLS2b1rV69moGBAaSUVKtVxo0bh1KKvr4+nnjiCR577DEWLVqEEIKpU6dmi3xPT0+miIwdO5bOzs5M8Wo2m6xatYp6vT5knwDr1q2jt7eXtrY2uru7kVKy1VZbDblueYVluKWotWa33XZjhx12QErJsmXLuPLKK3nLW97Cq1/9agDWr1/PwoULAWhpaWH8+PGEYYgxhhUrVlAqlUiShN7eXsrlMpMmTSJJElauXInWmo6ODsaOHQu4EM2yZctobW2lr6+PJEno6Oigq6trCAdk9erV9PX1IaWks7OTrq6uLFa+dOlSxowZQ09PD3Ecs9VWW1Gv11m7di1RFBGGIR0dHXR2dmKMYenSpSxatIinn36axx57jPb2dsaNG8fKlSvp6uqiVqsB0N/fz8qVK5k+fTrWWpYsWUKtVkNrTU9PDxMmTMjGed26dWitaWtrY9y4cZnwzsfovas+z0Pwil+ecNrS0sIee+wxRElYvXo169evRylFV1cX/f39tLS00N7enpHwenp6Ml5KV1cXHR0dQ66vP578fWmMoVqtMnHixMwy7+npoaenh2q1Sk9PD8YYxo0bR
3t7O+CE/vr161m7di1JktDZ2ZnxfjynwSv9cRw/5xiG/200GqxcuZIoiiiVSowfP55arUYURaxYsYIJEyZQKpWo1+usWLGCMWPGZNdyyZIljB07ltbW1iH3dIFND4UCMIqQT6PyE27BggVMmjSJjo6OLJ7sra577rmHn/zkJ6xatQqtNa2trZx88snsv//+NJtNrrzySm6//fYsZr/tttvy0Y9+lEqlwu9//3tWrlzJD3/4Q8aMGcOhhx7Kk08+yaJFi7jiiiu466672HHHHfnqV7+aHYuPoV566aX87ne/IwgC+vv72XXXXTnxxBOZNWsWQohMcLW0tLDDDjvw05/+lO7ubiZMmMDChQvZaaed6OrqYt68eRxyyCEsXbqUer3OzJkziaKICy+8kNtuu41SqUSz2WTOnDmcfPLJ2e/PO+88li1bloUi3v72t3PUUUfxyCOPcPvtt7NixQo+97nP0dbWxumnn06z2WT58uVceOGFrF+/niVLlrDjjjty5pln0tXVRXd3N5dccgl//vO
"text/plain": [
"<Figure size 640x480 with 1 Axes>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[33mCommander\u001b[0m (to Critics):\n",
"\n",
"Improve <image>\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[34mCriticize the input figure. How to replot the figure so it will be better? Find bugs and issues for the figure. If you think the figures is good enough, then simply say NO_ISSUES\n",
"###Human: Improve <image>\n",
"\n",
"###Assistant: \u001b[0m\n",
"\u001b[33mCritics\u001b[0m (to Commander):\n",
"\n",
"The input figure shows a graph with three different colored lines, representing temperature high, temperature low, and temperature average. The graph is labeled \"Seattle Weather - Temperature High and Low.\" However, the graph is not well-organized, and the lines are not clearly distinguishable.\n",
"\n",
"To improve the figure, one could use a more visually appealing color scheme for the lines, such as different shades of blue, green, and orange. Additionally, the labels for the temperature high, temperature low, and temperature average could be placed above or below the respective lines to make them more easily readable. The graph could also benefit from a clear title and axis labels to provide more context and information about the data being displayed.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mCommander\u001b[0m (to Coder):\n",
"\n",
"Here is the feedback to your figure. Please improve! Save the result to `result.jpg`\n",
"The input figure shows a graph with three different colored lines, representing temperature high, temperature low, and temperature average. The graph is labeled \"Seattle Weather - Temperature High and Low.\" However, the graph is not well-organized, and the lines are not clearly distinguishable.\n",
"\n",
"To improve the figure, one could use a more visually appealing color scheme for the lines, such as different shades of blue, green, and orange. Additionally, the labels for the temperature high, temperature low, and temperature average could be placed above or below the respective lines to make them more easily readable. The graph could also benefit from a clear title and axis labels to provide more context and information about the data being displayed.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mCoder\u001b[0m (to Commander):\n",
"\n",
"I appreciate your feedback. I will make the following improvements based on the feedback provided for the figure:\n",
"\n",
"1. Use a better color scheme for the lines, such as different shades of blue, green, and orange.\n",
"2. Add a label for temperature average.\n",
"3. Place the labels for temperature high, temperature low, and temperature average above or below the respective lines.\n",
"4. Improve the title and axis labels for better readability and context.\n",
"\n",
"Please execute the following code to generate the improved figure:\n",
"\n",
"```python\n",
"import pandas as pd\n",
"import matplotlib.pyplot as plt\n",
"import urllib.request\n",
"\n",
"# Download the CSV file from the URL\n",
"url = \"https://raw.githubusercontent.com/vega/vega/main/docs/data/seattle-weather.csv\"\n",
"urllib.request.urlretrieve(url, \"seattle-weather.csv\")\n",
"\n",
"# Read the CSV file using pandas\n",
"data = pd.read_csv(\"seattle-weather.csv\")\n",
"\n",
"# Calculate temperature average\n",
"data[\"temp_avg\"] = (data[\"temp_max\"] + data[\"temp_min\"]) / 2\n",
"\n",
"# Plot the temperature high, low, and average using matplotlib with better colors\n",
"plt.plot(data[\"date\"], data[\"temp_max\"], color=\"darkorange\", label=\"Temperature High\")\n",
"plt.plot(data[\"date\"], data[\"temp_min\"], color=\"dodgerblue\", label=\"Temperature Low\")\n",
"plt.plot(data[\"date\"], data[\"temp_avg\"], color=\"mediumseagreen\", label=\"Temperature Average\")\n",
"\n",
"# Improve the title and axis labels\n",
"plt.xlabel(\"Date\", fontsize=12)\n",
"plt.ylabel(\"Temperature\", fontsize=12)\n",
"plt.title(\"Seattle Weather - Temperatures (High, Low, and Average)\", fontsize=14)\n",
"\n",
"# Plot the legend\n",
"plt.legend(fontsize=10, loc='upper right')\n",
"\n",
"# Save the improved figure as 'result.jpg'\n",
"plt.savefig(\"result.jpg\", dpi=100)\n",
"\n",
"# Show the figure\n",
"plt.show()\n",
"```\n",
"\n",
"After executing the code, you should see the improved plot with better colors, labels, and readability. The figure will be saved as `result.jpg`.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[31m\n",
">>>>>>>> EXECUTING CODE BLOCK 0 (inferred language is python)...\u001b[0m\n",
"\u001b[33mCommander\u001b[0m (to Coder):\n",
"\n",
"exitcode: 0 (execution succeeded)\n",
"Code output: \n",
"Figure(640x480)\n",
"\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mCoder\u001b[0m (to Commander):\n",
"\n",
"Great! The code execution succeeded, and the improved figure has been plotted using the updated colors, labels, and readability. The figure is saved in the `result.jpg` file. Please check the file for the updated plotted figure showing temperature high, low, and average.\n",
"\n",
"TERMINATE\n",
"\n",
"--------------------------------------------------------------------------------\n"
]
},
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAgAAAAGFCAYAAACL7UsMAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjcuMSwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/bCgiHAAAACXBIWXMAAA9hAAAPYQGoP6dpAAEAAElEQVR4nOydd5xV1bn3v2uX06bBDL036dKxgqjYxR7lRsWu0ViwRY2xJLmaRGNMNCYGrrHGBqhRUbAgKKKiiCiiICi9DWX6Kbus9f5xztrsM2Bucl8TUfbv85k5bZe1V3n68yyhlFJEiBAhQoQIEfYoGN92AyJEiBAhQoQI/3lEAkCECBEiRIiwByISACJEiBAhQoQ9EJEAECFChAgRIuyBiASACBEiRIgQYQ9EJABEiBAhQoQIeyAiASBChAgRIkTYAxEJABEiRIgQIcIeiEgAiBAhQoQIEfZARAJAhAgRIkSIsAciEgAiRIgQIUKEPRCRABAhQoQIESLsgYgEgAgRIkSIEGEPRCQARIgQIUKECHsgIgEgQoQIESJE2AMRCQARIkSIECHCHohIAIgQIUKECBH2QEQCQIQIESJEiLAHIhIAIkSIECFChD0QkQAQIUKECBEi7IGIBIAIESJEiBBhD0QkAESIECFChAh7ICIBIEKECBEiRNgDEQkAESJEiBAhwh6ISACIECFChAgR9kBEAkCECBEiRIiwByISACJEiBAhQoQ9EJEAECFChAgRIuyBiASACBEiRIgQYQ9EJABEiBAhQoQIeyAiASBChAgRIkTYAxEJABEiRIgQIcIeCOvbbsD/D6SUSCkxTROlFIaRl2eUUsFnpRSe52GaJkKIovOVUgC7/F4IUfS7Pib8Xfi9lDI4TkoZ3L/5ec3b1/ze4WN1O/S99Dm+7wfPpu9nGEbwqr/3PA/L2vUQ62P+Ud/qe+n+3VVf/SM0v0fzayilitqs77Ortn3dfZv31a7ur7/XYxR+xvB9w/cJj2n42uH5po9r3ubm4xW+F1A0T8PXD7+G54dSKhhzPc7hebGr/gn3S/M+31WfNn/G5vNL99PX9XXz6+j267bqa+5q7HVbwus03PfhPg1fa1dt0fM1jF21VUpZNJ676qvmz6Xv2fx7IFhrzefBP8KuxiI8brrP9diHn7f5/GreT7tqa/N77up64Xm1qznv+z6WZe1E78LrLNzeMP3QxwJF7RNCBOfoV8/zsG27qB3N18mu5n+Y9uoxaT7Pw+eH2xOmD83nqRAiWBPN+UJzGvR1a675uPwrtPTfhe+8BUAz+OYLR0/WTCYTHCOlxPf9YND1OXrS+L4ffNbwfR+lFLlcDtd1g2P0b5Bf/K7rFl1Tv2YyGRzHKWIAGp7nBdeXUhZdX7c3/DxKqaJjpJTBcboP9J/jOEX9Eu6v8Kt+Dn1/fc9sNht8H35t3s9aCGvel8BOz61fc7ncTufpdjTvXz0u4THZFfR5juME99Ln6PN1fzuOQy6XK2q/bqs+J5vNFvWZ7nvdD57nBcfp9+F+1M+l+zF8r3D79DPra4Tv2XxM9He6T5rPe/2bvpaek83nZ5hphftVf6+v3by9etx2NafC46T7M9wP4d/DYxweZ93H2WwWKWUwRuFn0dfSY9n8c7hfYcd81H3peR6O4xTdEyh6tvAa0/Pif0P4vs37pzl0/4T7TY9zeEzD6zvcl3p8w+s9fMzXtTf8vLuaP+Hr6L7S99LHhp9R95m+npSyaOzCc04/n4a+Xphu6T7J5XJFaz+dThc9KxC0L9z34bbpVz339e/h+zUfq/C80fNEX1/Tja87Pty28BoJrz29JnYnfGcFAKUU6XQawzBwXZdMJkNtbS1NTU00NTUFAxaPx7FtG8uycF03kOIaGxvxfT+Q4rQUqqEni2maZLNZLMvC9322bt0a3Nv3fRoaGshkMsHiCxPziRMn8tvf/jYgTlri1EzBNM0iDUi3DQik5rq6uuBe2WwW0zSxLIumpqaAQFqWFRzveV7wfSwWC84NI5vNFhFo/fxa2whLsJZlBQzT933q6+uD6zz11FOcdtpprFy5MpBmwxpJWCNyXTdol5bsYcei0G3QWoDuf/3d/yYtay3Otm1836exsZFMJkM6nSaTyQSCWGNjY3DPsCZp23aR9K4/A0XzJKzN6OOAYG7p9urvtAaydetWLMsKNA19TX2OaZrkcrlA8NBjoeeIJoKagITnikZY+3McB8MwAgHFsqygP8PMUX/WY+T7PrZtY5pmkTCk26qfTc8Jx3GK1pBGLBYLxqS51UQT+3C79bOuX7+eq666ig8++CDoO30vyDNqwzCCa1dXVwfr1DAMLr74Yq677rrgWMgL4boN+lxtaTBNk23btnHUUUfx0ksvBb/rZ9Hr/usIt2bmer7osW4u0GnouaMtc7rvw+3Ra0XPF72+9fjosZFSkk6ng3HSc/EfCSzhdarvqZm2nhd6nMP9pOeHXtN1dXVB3+gxmT9/Pqeeeirr168PaK6+vu4fLVDpOZrJZGhqairSjPVzpNNpTNMkHo8XWSkymUzQN+H5o/tWzzPdx+G5qedAWKvX46t/02Ot16SeO6Zp4jhOkXKg+1rfN0xX9NpobGzkZz/7GU8//TRSSjKZzNeOz38a31kBAPKEadKkSRx55JEMGjSI0aNHM3bsWK6++mpqamqIxWJFDCUejwcDn0qlipimZphhE5KewLZtk8lkuPXWWznzzDPJZDIkEolgwvzsZz/jyiuvpK6uLmDQSinWrl3Ll19+GRAoyE+URCKBbdtFmh3sWJRaojUMg4ULF9KjRw/mzZtHIpEINII//OEPVFZWct999wXavmmavP766/Tr148XX3wRKSWJRCIwh2lGm0gkAsFItyGXy5FOpwOGn0wmcV0Xx3H47LPPGDFiBPPnz6e0tDQQuKSUfP7558AODSB8Pf1MkF+M4bboPrZtm1gsVqSZhJmyJnLNhZjm0ARKKcX69ev5yU9+wuDBgxk8eDB77703bdq0oXfv3gwbNowhQ4YwYsQInnrqqSJmrpQKiGt43oRNk9qiFLaSWJYVEEMt5KXT6UAYAWjdunUR0w8LSvq9HhcNTURc16WsrAwgmNO6T2EHMw5rOFpAGDNmDD/72c+KTKdhZqznjRbM9HGayOo5FLYwCCGIxWLEYrEiAUgLGPq6YSaviawmtOG5FxaYXn/9ddavX0+bNm1wXZdVq1Zx3HHHcd111wX3hTxT//jjjzn++OOZPHlyMCaJRIJUKoUQgng8ju/7RefoMdXEWZualyxZwsaNGwOtVzOJ8PPuCoZhBPM3l8vx/vvvc8wxx/DII4987VzVa0M/u2Y6mpnEYrGg7Xr+1NbWEovFgn7V9CmZTBYJZfp5vg5akQibrzUt04qFZqhh65Ruq+6TFi1aBOtdz5uamhoWLFhQpCnr5wt/1oLAKaecQteuXXn00UdxHCcQdDRdKCkpKRoL/VsqlQJ2rPl0Oh0IGmEBW99PIyyUhS0K4WM0DdDfxePxQPjSPMK27YCWhs/Tgp1pmnieRywWQwhBi
xYtGDJkCL///e9Zv359Eb37tvGdjQGQUjJlyhTuuecejj76aH7yk5+glKK6uppVq1axZs2agOjqwdCTKKyBaWIUJoSa8OlBlVJSUlKCaZpUV1cHhFcTxaampuD62WyWeDyO4zjBOZrQhM/TCBPr8GTUE7lVq1aUl5fz1Vdfsf/++wdCzCuvvELbtm1ZtmwZ27dvp23btkgpee+994jFYowYMaLo2nrhaQIcvr+W1vWCTyQSpNNpkskkuVyuSEvQBCa8CMLStNaEwwQsTADClgF9vn7Vz26aZsCAXNclkUh8bSyDvmZ44bZu3ZrzzjuPU089FcMwWL58Offeey8HHXQQJ598cnCfvfbaK2hLWBDUVp+wVqYJlLZeKKUCzUSPl2ZymkiFzcx6vun+0FYl3X/NtTc9RmHBMXyd8NzUhEYTS92epqYmSkpKgnkavpaeB5ooai1WCzTh59Tt1uOhmbdeM/oPdvh89RoK+4wTiUSRBqjnpG5bOp3mb3/7G8cddxzt27cP2pHJZALBSN8/mUw
"text/plain": [
"<Figure size 640x480 with 1 Axes>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[33mCommander\u001b[0m (to Critics):\n",
"\n",
"Improve <image>\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[34mCriticize the input figure. How to replot the figure so it will be better? Find bugs and issues for the figure. If you think the figures is good enough, then simply say NO_ISSUES\n",
"###Human: Improve <image>\n",
"###Assistant: The input figure shows a graph with three different colored lines, representing temperature high, temperature low, and temperature average. The graph is labeled \"Seattle Weather - Temperature High and Low.\" However, the graph is not well-organized, and the lines are not clearly distinguishable.\n",
"\n",
"To improve the figure, one could use a more visually appealing color scheme for the lines, such as different shades of blue, green, and orange. Additionally, the labels for the temperature high, temperature low, and temperature average could be placed above or below the respective lines to make them more easily readable. The graph could also benefit from a clear title and axis labels to provide more context and information about the data being displayed.\n",
"###Human: Improve <image>\n",
"\n",
"###Assistant: \u001b[0m\n",
"\u001b[33mCritics\u001b[0m (to Commander):\n",
"\n",
"To improve the figure, one could use a more visually appealing color scheme for the lines, such as different shades of blue, green, and orange. Additionally, the labels for the temperature high, temperature low, and temperature average could be placed above or below the respective lines to make them more easily readable. The graph could also benefit from a clear title and axis labels to provide more context and information about the data being displayed.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mCommander\u001b[0m (to Coder):\n",
"\n",
"Here is the feedback to your figure. Please improve! Save the result to `result.jpg`\n",
"To improve the figure, one could use a more visually appealing color scheme for the lines, such as different shades of blue, green, and orange. Additionally, the labels for the temperature high, temperature low, and temperature average could be placed above or below the respective lines to make them more easily readable. The graph could also benefit from a clear title and axis labels to provide more context and information about the data being displayed.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mCoder\u001b[0m (to Commander):\n",
"\n",
"Thank you for the feedback. I misunderstood the part about placing the labels above or below the respective lines. I will implement the necessary changes and generate the figure again. Here's the plan:\n",
"\n",
"1. Improve the line style for better readability by using different line styles.\n",
"2. Annotate the points on the graph with their respective labels (high, low, and average) for better readability.\n",
"3. Keep the colors, title, and axis labels from the previous improvement.\n",
"\n",
"Please execute the following code to generate the updated figure:\n",
"\n",
"```python\n",
"import pandas as pd\n",
"import matplotlib.pyplot as plt\n",
"import urllib.request\n",
"import numpy as np\n",
"\n",
"# Download the CSV file from the URL\n",
"url = \"https://raw.githubusercontent.com/vega/vega/main/docs/data/seattle-weather.csv\"\n",
"urllib.request.urlretrieve(url, \"seattle-weather.csv\")\n",
"\n",
"# Read the CSV file using pandas\n",
"data = pd.read_csv(\"seattle-weather.csv\")\n",
"\n",
"# Calculate temperature average\n",
"data[\"temp_avg\"] = (data[\"temp_max\"] + data[\"temp_min\"]) / 2\n",
"\n",
"# Plot the temperature high, low, and average using matplotlib with better colors\n",
"plt.plot(data[\"date\"], data[\"temp_max\"], color=\"darkorange\", label=\"Temperature High\", linestyle='--')\n",
"plt.plot(data[\"date\"], data[\"temp_min\"], color=\"dodgerblue\", label=\"Temperature Low\", linestyle='-.')\n",
"plt.plot(data[\"date\"], data[\"temp_avg\"], color=\"mediumseagreen\", label=\"Temperature Average\", linestyle='-')\n",
"\n",
"# Improve the title and axis labels\n",
"plt.xlabel(\"Date\", fontsize=12)\n",
"plt.ylabel(\"Temperature\", fontsize=12)\n",
"plt.title(\"Seattle Weather - Temperatures (High, Low, and Average)\", fontsize=14)\n",
"\n",
"# Add labels for some data points\n",
"num_labels = 5\n",
"label_indices = np.linspace(0, len(data) - 1, num_labels, dtype=int)\n",
"\n",
"for i in label_indices:\n",
" plt.text(data.loc[i, \"date\"], data.loc[i, \"temp_max\"], \"High\", fontsize=8, verticalalignment=\"bottom\", horizontalalignment=\"left\")\n",
" plt.text(data.loc[i, \"date\"], data.loc[i, \"temp_min\"], \"Low\", fontsize=8, verticalalignment=\"top\", horizontalalignment=\"left\")\n",
" plt.text(data.loc[i, \"date\"], data.loc[i, \"temp_avg\"], \"Avg\", fontsize=8, verticalalignment=\"baseline\", horizontalalignment=\"left\")\n",
"\n",
"# Plot the legend\n",
"plt.legend(fontsize=10, loc='upper right')\n",
"\n",
"# Save the improved figure as 'result.jpg'\n",
"plt.savefig(\"result.jpg\", dpi=100)\n",
"\n",
"# Show the figure\n",
"plt.show()\n",
"```\n",
"\n",
"After executing the code, you should see the updated plot that addresses the concerns with labels and better readability. The figure will be saved as `result.jpg`.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[31m\n",
">>>>>>>> EXECUTING CODE BLOCK 0 (inferred language is python)...\u001b[0m\n",
"\u001b[33mCommander\u001b[0m (to Coder):\n",
"\n",
"exitcode: 0 (execution succeeded)\n",
"Code output: \n",
"Figure(640x480)\n",
"\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mCoder\u001b[0m (to Commander):\n",
"\n",
"Great! The code execution succeeded, and the updated figure has been plotted with the improved line styles and annotations for a better visualization. The figure is saved in the `result.jpg` file. Please check the file for the updated plotted figure showing temperature high, low, and average with better readability.\n",
"\n",
"TERMINATE\n",
"\n",
"--------------------------------------------------------------------------------\n"
]
},
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAgAAAAGFCAYAAACL7UsMAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjcuMSwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/bCgiHAAAACXBIWXMAAA9hAAAPYQGoP6dpAAEAAElEQVR4nOydd5xV1bn3v2uX06bPwNDL0JsUASuIgl3shasRY0FjYkGNUWMsSa6aqGkarwZDrEGlqFFRsCAooqJIEUEQkN4GmGHKabut949z1mbPgLnlNRFh//zgzJyzy6pPf54lpJSSECFChAgRIsRBBe27bkCIECFChAgR4t+PUAAIESJEiBAhDkKEAkCIECFChAhxECIUAEKECBEiRIiDEKEAECJEiBAhQhyECAWAECFChAgR4iBEKACECBEiRIgQByFCASBEiBAhQoQ4CBEKACFChAgRIsRBiFAACBEiRIgQIQ5ChAJAiBAhQoQIcRAiFABChAgRIkSIgxChABAiRIgQIUIchAgFgBAhQoQIEeIgRCgAhAgRIkSIEAchQgEgRIgQIUKEOAgRCgAhQoQIESLEQYhQAAgRIkSIECEOQoQCQIgQIUKECHEQIhQAQoQIESJEiIMQoQAQIkSIECFCHIQIBYAQIUKECBHiIEQoAIQIESJEiBAHIUIBIESIECFChDgIEQoAIUKECBEixEGIUAAIESJEiBAhDkKEAkCIECFChAhxECIUAEKECBEiRIiDEKEAECJEiBAhQhyECAWAECFChAgR4iBEKACECBEiRIgQByFCASBEiBAhQoQ4CGF81w34/4HneXieh67rSCnRtJw8I6X0/5ZS4jgOuq4jhGhyv5QSYJ+fCyGafK+uCX4W/N3zPP86z/P89ze/r3n7mr87eK1qh3qXusd1Xb9v6n2apvk/1eeO42AY+55idc0/G1v1LjW++xqrf4bm72j+DCllkzar9+yrbd/03uZjta/3q8/VHAX7GHxv8D3BOQ0+O7je1HXN29x8voLvApqs0+Dzgz+D60NK6c+5mufgutjX+ATHpfmY72tMm/ex+fpS4/RNY938Oar9qq3qmfuae9WW4D4Njn1wTIPP2ldb1HoNYl9t9TyvyXzua6ya90u9s/nngL/Xmq+Df4Z9zUVw3tSYq7kP9rf5+mo+Tvtqa/N37ut5wXW1rzXvui6GYexF74L7LNjeIP1Q1wJN2ieE8O9RPx3HwTTNJu1ovk/2tf6DtFfNSfN1Hrw/2J4gfWi+ToUQ/p5ozhea06Bv2nPN5+V/Q0v/VfjeWwAUg2++cdRiTafT/jWe5+G6rj/p6h61aFzX9f9WcF0XKSXZbBbbtv1r1HeQ2/y2bTd5pvqZTqexLKsJA1BwHMd/vud5TZ6v2hvsj5SyyTWe5/nXqTFQ/yzLajIuwfEK/lT9UO9X78xkMv7nwZ/Nx1kJYc3HEtir3+pnNpvd6z7Vjubjq+YlOCf7grrPsiz/Xeoedb8ab8uyyGazTdqv2qruyWQyTcZMjb0aB8dx/OvU78FxVP1S4xh8V7B9qs/qGcF3Np8T9Zkak+brXn2nnqXWZPP1GWRawXFVn6tnN2+vmrd9rangPKnxDI5D8PvgHAfnWY1xJpPB8zx/joJ9Uc9Sc9n87+C4wp71qMbScRwsy2ryTqBJ34J7TK2L/w7B9zYfn+ZQ4xMcNzXPwTkN7u/gWKr5De734DXf1N5gf/e1foLPUWOl3qWuDfZRjZl6nud5TeYuuOZU/xTU84J0S41JNpttsvdTqVSTvgJ++4JjH2yb+qnWvvo++L7mcxVcN2qdqOcruvFN1wfbFtwjwb2n9sT+hO+tACClJJVKoWkatm2TTqfZvXs3yWSSZDLpT1g0GsU0TQzDwLZtX4prbGzEdV1filNSqIJaLLquk8lkMAwD13XZuXOn/27XdWloaCCdTvubL0jMx48fz4MPPugTJyVxKqag63oTDUi1DfCl5rq6Ov9dmUwGXdcxDINkMukTSMMw/Osdx/E/j0Qi/r1BZDKZJgRa9V9pG0EJ1jAMn2G6rkt9fb3/nBdeeIELLriAtWvX+tJsUCMJakS2bfvtUpI97NkUqg1KC1Djrz7776RlpcWZponrujQ2NpJOp0mlUqTTaV8Qa2xs9N8Z1CRN02wivau/gSbrJKjNqOsAf22p9qrPlAayc+dODMPwNQ31THWPrutks1lf8FBzodaIIoKKgATXikJQ+7MsC03TfAHFMAx/PIPMUf2t5sh1XUzTRNf1JsKQaqvqm1oTlmU12UMKkUjEn5PmVhNF7IPtVn3dvHkzN954I59++qk/dupdkGPUmqb5z66urvb3qaZpXH311dxyyy3+tZATwlUb1L3K0qDrOrt27eLkk0/m9ddf979XfVH7/psIt2Lmar2ouW4u0CmotaMsc2rsg+1Re0WtF7W/1fyoufE8j1Qq5c+TWov/TGAJ7lP1TsW01bpQ8xwcJ7U+1J6uq6vzx0bNyfz58zn//PPZvHmzT3PV89X4KIFKrdF0Ok0ymWyiGat+pFIpdF0nGo02sVKk02l/bILrR42tWmdqjINrU62BoFav5ld9p+Za7Um1dnRdx7KsJsqBGmv13iBdUXujsbGRX/ziF0yePBnP80in0984P/9ufG8FAMgRpgkTJnDSSSfRv39/hg8fzqhRo7jpppuora0lEok0YSjRaNSf+EQi0YRpKoYZNCGpBWyaJul0mrvvvpuLL76YdDpNLBbzF8wvfvELbrjhBurq6nwGLaVk48aNrFmzxidQkFsosVgM0zSbaHawZ1MqiVbTNBYuXEiXLl2YN28esVjM1wj+9Kc/UV5eziOPPOJr+7qu884779C7d29ee+01PM8jFov55jDFaGOxmC8YqTZks1lSqZTP8OPxOLZtY1kWy5cvZ8iQIcyfP5/CwkJf4PI8jy+//BLYowEEn6f6BLnNGGyLGmPTNIlEIk00kyBTVkSuuRDTHIpASSnZvHkzP/vZzxgwYAADBgzgkEMOobKykh49enDooYcycOBAhgwZwgsvvNCEmUspfeIaXDdB06SyKAWtJIZh+MRQCXmpVMoXRgBatmzZhOkHBSX1u5oXBUVEbNumqKgIwF/TakxhDzMOajhKQBgxYgS/+MUvmphOg8xYrRslmKnrFJFVayhoYRBCEIlEiEQiTQQgJWCo5waZvCKyitAG115QYHrnnXfYvHkzlZWV2LbNunXrOP3007nlllv890KOqS9ZsoQzzjiDxx9/3J+TWCxGIpFACEE0GsV13Sb3qDlVxFmZmpctW8bWrVt9rVcxiWB/9wVN0/z1m81m+eSTTzj11FN5+umnv3Gtqr2h+q6YjmImkUjEb7taP7t37yYSifjjquhTPB5vIpSp/nwTlCIRNF8rWqYUC8VQg9Yp1VY1JqWlpf5+V+umtraWBQsWNNGUVf+CfytB4Nxzz6VTp04888wzWJblCzqKLhQUFDSZC/VdIpEA9uz5VCrlCxpBAVu9TyEolAUtCsFrFA1Qn0WjUV/4UjzCNE2flgbvU4Kdrus4j
kMkEkEIQWlpKQMHDuSPf/wjmzdvbkLvvmt8b2MAPM9jypQpPPTQQ5xyyin87Gc/Q0pJdXU169atY8OGDT7RVZOhFlFQA1PEKEgIFeFTk+p5HgUFBei6TnV1tU94FVFMJpP+8zOZDNFoFMuy/HsUoQnepxAk1sHFqBZyixYtKC4u5uuvv+bII4/0hZg333yTVq1asXLlSmpqamjVqhWe5/Hxxx8TiUQYMmRIk2erjacIcPD9SlpXGz4Wi5FKpYjH42Sz2SZagiIwwU0QlKaVJhwkYEECELQMqPvVT9V3Xdd9BmTbNrFY7BtjGdQzgxu3ZcuWXH755Zx//vlomsaqVat4+OGHOeaYYzjnnHP893Tv3t1vS1AQVFafoFamCJSyXkgpfc1EzZdicopIBc3Mar2p8VBWJTV+zbU3NUdBwTH4nODaVIRGEUvVnmQySUFBgb9Og89S60ARRaXFKoEm2E/VbjUfinmrPaP+wR6fr9pDQZ9xLBZrogGqNanalkql+Pvf/87pp59OmzZt/Hak02l
"text/plain": [
"<Figure size 640x480 with 1 Axes>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[33mFigure Creator~\u001b[0m (to User):\n",
"\n",
"result.jpg\n",
"\n",
"--------------------------------------------------------------------------------\n"
]
}
],
"source": [
"import matplotlib.pyplot as plt\n",
"import time\n",
"\n",
"creator = FigureCreator(\n",
" name=\"Figure Creator~\",\n",
" llm_config=llm_config\n",
" \n",
")\n",
"\n",
"user_proxy = autogen.UserProxyAgent(\n",
" name=\"User\",\n",
" human_input_mode=\"NEVER\",\n",
" llm_config=llm_config,\n",
" max_consecutive_auto_reply=0\n",
")\n",
"\n",
"user_proxy.initiate_chat(creator, message=\"\"\"\n",
"Plot a figure by using the data from:\n",
"https://raw.githubusercontent.com/vega/vega/main/docs/data/seattle-weather.csv\n",
"\n",
"I want to show both temperature high and low.\n",
"\"\"\")\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f0a58827",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
}
},
"nbformat": 4,
"nbformat_minor": 5
}