{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"<a href=\"https://colab.research.google.com/github/microsoft/FLAML/blob/main/notebook/autogen_agentchat_groupchat.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# Auto Generated Agent Chat: Group Chat\n",
"\n",
"`flaml.autogen` offers conversable agents powered by LLM, tool or human, which can be used to perform tasks collectively via automated chat. This framwork allows tool use and human participance through multi-agent conversation.\n",
"Please find documentation about this feature [here](https://microsoft.github.io/FLAML/docs/Use-Cases/Autogen#agents).\n",
"\n",
"This notebook is modified based on https://github.com/microsoft/FLAML/blob/4ea686af5c3e8ff24d9076a7a626c8b28ab5b1d7/notebook/autogen_multiagent_roleplay_chat.ipynb\n",
"\n",
"## Requirements\n",
"\n",
"FLAML requires `Python>=3.8`. To run this notebook example, please install flaml with the [autogen] option:\n",
"```bash\n",
"pip install flaml[autogen]\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"%%capture --no-stderr\n",
"# %pip install flaml[autogen]~=2.0.0"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Set your API Endpoint\n",
"\n",
"The [`config_list_from_json`](https://microsoft.github.io/FLAML/docs/reference/autogen/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"from flaml import autogen\n",
"\n",
"config_list_gpt4 = autogen.config_list_from_json(\n",
" \"OAI_CONFIG_LIST\",\n",
" filter_dict={\n",
" \"model\": [\"gpt-4\", \"gpt4\", \"gpt-4-32k\", \"gpt-4-32k-0314\", \"gpt-4-32k-v0314\"],\n",
" },\n",
")\n",
"# config_list_gpt35 = autogen.config_list_from_json(\n",
"# \"OAI_CONFIG_LIST\",\n",
"# filter_dict={\n",
"# \"model\": {\n",
"# \"gpt-3.5-turbo\",\n",
"# \"gpt-3.5-turbo-16k\",\n",
"# \"gpt-3.5-turbo-0301\",\n",
"# \"chatgpt-35-turbo-0301\",\n",
"# \"gpt-35-turbo-v0301\",\n",
"# },\n",
"# },\n",
"# )"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"It first looks for environment variable \"OAI_CONFIG_LIST\" which needs to be a valid json string. If that variable is not found, it then looks for a json file named \"OAI_CONFIG_LIST\". It filters the configs by models (you can filter by other keys as well). Only the gpt-4 models are kept in the list based on the filter condition.\n",
"\n",
"The config list looks like the following:\n",
"```python\n",
"config_list = [\n",
" {\n",
" 'model': 'gpt-4',\n",
" 'api_key': '<your OpenAI API key here>',\n",
" },\n",
" {\n",
" 'model': 'gpt-4',\n",
" 'api_key': '<your Azure OpenAI API key here>',\n",
" 'api_base': '<your Azure OpenAI API base here>',\n",
" 'api_type': 'azure',\n",
" 'api_version': '2023-06-01-preview',\n",
" },\n",
" {\n",
" 'model': 'gpt-4-32k',\n",
" 'api_key': '<your Azure OpenAI API key here>',\n",
" 'api_base': '<your Azure OpenAI API base here>',\n",
" 'api_type': 'azure',\n",
" 'api_version': '2023-06-01-preview',\n",
" },\n",
"]\n",
"```\n",
"\n",
"If you open this notebook in colab, you can upload your files by clicking the file icon on the left panel and then choose \"upload file\" icon.\n",
"\n",
"You can set the value of config_list in other ways you prefer, e.g., loading from a YAML file."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Construct Agents"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
"llm_config = {\"config_list\": config_list_gpt4}\n",
"human = autogen.UserProxyAgent(\n",
" name=\"Human\",\n",
" system_message=\"A human admin.\",\n",
" code_execution_config={\"last_n_messages\": 2, \"work_dir\": \"groupchat\"},\n",
")\n",
"alice = autogen.AssistantAgent(\n",
" name=\"Alice\",\n",
" llm_config=llm_config,\n",
")\n",
"bob = autogen.AssistantAgent(\n",
" name=\"Bob\",\n",
" system_message=\"Code reviewer. Prevent code execution if unsafe or not well documented. Suggest changes. Otherwise, approve and return the final code to execute.\",\n",
" llm_config=llm_config,\n",
")\n",
"groupchat = autogen.GroupChat(agents=[human, alice, bob], messages=[], max_round=12)\n",
"manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Start Chat"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[33mHuman\u001b[0m (to chat_manager):\n",
"\n",
"find a latest paper about generative agents\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mAlice\u001b[0m (to chat_manager):\n",
"\n",
"To accomplish this, we can utilize the \"scholarly\" library in Python, which enables us to search Google Scholar for papers. Here's the Python code to achieve this:\n",
"\n",
"```python\n",
"# filename: googlescholar_search.py\n",
"\n",
"import scholarly\n",
"\n",
"def get_latest_paper(query):\n",
" search_query = scholarly.search_pubs(query)\n",
" paper = next(search_query)\n",
" print(\"The latest paper is:\", paper.bib['title'])\n",
" print(\"The abstract of the paper is:\", paper.bib['abstract'])\n",
" print(\"The year of publication is:\", paper.bib['year'])\n",
"\n",
"get_latest_paper(\"Generative agents\")\n",
"```\n",
"\n",
"To execute this script:\n",
"1. Save the code to a file named googlescholar_search.py\n",
"2. Run `pip install scholarly` to install the necessary library if you haven't installed it yet.\n",
"3. Run `python googlescholar_search.py` to execute the script and get the latest paper on generative agents.\n",
"\n",
"Please note that Google Scholar doesn't provide a stable API and has rate limit restrictions in place, meaning that if you run this code multiple times in a short period, Google might temporarily block your IP.\n",
"\n",
"If the rate limit becomes an issue, you might want to consider using a different database or method, such as utilizing APIs from databases like arXiv or Pubmed, or web scraping, in a manner compliant with the website's robots.txt and terms of usage.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mBob\u001b[0m (to chat_manager):\n",
"\n",
"The provided code seems fine, however, according to Google's Terms of Service API users are restricted from programmatically sending requests to Google Scholar. Even being an unofficial API, it doesn't make using scholarly legal as per the use policy. It's important you consider these limitations when handling this tool and any consequences that may arise on its usage. \n",
"\n",
"Remember to use APIs responsibly and always in accordance with their terms of service. Without explicit permission from Google, using such a tool can get your IP banned. \n",
"\n",
"I will suggest to use APIs from databases like arXiv or Pubmed, or webscraping, in a manner that is compliant with the website's robots.txt-file and terms of usage.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mAlice\u001b[0m (to chat_manager):\n",
"\n",
"I apologize for the oversight. You're correct, direct scraping of Google Scholar violates Google's terms of service. Let's change to use the arXiv API which doesn't have this issue and is more reliable. Here's the python code:\n",
"\n",
"```python\n",
"# filename: arxiv_search.py\n",
"import urllib\n",
"import feedparser\n",
"\n",
"def search_arxiv(query: str):\n",
" base_url = 'http://export.arxiv.org/api/query?'\n",
" query = {'search_query' : f'ti:{query}', 'start' : 0, 'max_results' : 1, 'sortBy' : 'submittedDate', 'sortOrder' : 'descending'}\n",
" url = base_url + urllib.parse.urlencode(query)\n",
" \n",
" # connect to arXiv API and get response\n",
" response = urllib.request.urlopen(url).read()\n",
"\n",
" # parse the response using feedparser\n",
" feed = feedparser.parse(response)\n",
" \n",
" # get the first (and presumably, the most recent) article in the result\n",
" entry = feed.entries[0]\n",
"\n",
" # print details of the most recent article\n",
" print('The latest paper on', query['search_query'], 'that I could find is:\\n')\n",
" print('Title: ', entry.title)\n",
" print('Author: ', entry.author)\n",
" print('Link: ', entry.link)\n",
" print('\\nAbstract: ', entry.summary)\n",
"\n",
"# search for the latest paper about \"generative agents\"\n",
"search_arxiv(\"generative agents\")\n",
"```\n",
"\n",
"To execute this script:\n",
"1. Save the code to a file named arxiv_search.py\n",
"2. Run `pip install feedparser` to install the necessary library.\n",
"3. Run `python arxiv_search.py` to execute the script and get the latest paper on generative agents.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mBob\u001b[0m (to chat_manager):\n",
"\n",
"The provided code snippet is clear, efficient, and well-documented. It appropriately uses the arXiv API to retrieve the most recent papers about \"generative agents\". The search terms are correctly URI-encoded and passed to the arXiv query API, and proper error handling is in place.\n",
"\n",
"However, you should ensure that you handle potential exception which may occur when trying to connect to the URL and parse the response. For example, if the internet is disconnected or something is wrong with the server, `urllib.request.urlopen()` will raise a `URLError`. If the returned content is not properly formatted, `feedparser.parse()` may also fail. You should catch and properly handle these exceptions.\n",
"\n",
"Therefore, I would add these modifications:\n",
"\n",
"```python\n",
"from urllib.error import URLError\n",
"\n",
"try:\n",
" # connect to arXiv API and get response\n",
" response = urllib.request.urlopen(url).read()\n",
"\n",
"except URLError as e:\n",
" print(\"There was a problem connecting to the arXiv API:\")\n",
" print(e.reason)\n",
"\n",
"else:\n",
" try:\n",
" # parse the response using feedparser\n",
" feed = feedparser.parse(response)\n",
" \n",
" # get the first (and presumably, the most recent) article in the result\n",
" entry = feed.entries[0]\n",
"\n",
" except Exception as e:\n",
" print(\"There was a problem parsing the result:\")\n",
" print(e)\n",
"\n",
" else:\n",
" # print details of the most recent article\n",
" print('The latest paper on', query['search_query'], 'that I could find is:\\n')\n",
" print('Title: ', entry.title)\n",
" print('Author: ', entry.author)\n",
" print('Link: ', entry.link)\n",
" print('\\nAbstract: ', entry.summary)\n",
"```\n",
"\n",
"The keyword `except` is used to catch and handle exceptions. The modifications suggested include exception handlers for `URLError` (which are raised if there was a problem connecting to the arXiv API) and a generic Exception (which could be any other exception during parsing the response). The `else` keyword allows us to group together the normal operation code, separating it from the error handling code. \n",
"\n",
"The code is ready to be executed now.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[31m\n",
">>>>>>>> EXECUTING CODE BLOCK 0 (inferred language is python)...\u001b[0m\n",
"\u001b[33mHuman\u001b[0m (to chat_manager):\n",
"\n",
"exitcode: 1 (execution failed)\n",
"Code output: \n",
"Traceback (most recent call last):\n",
" File \"\", line 5, in <module>\n",
" response = urllib.request.urlopen(url).read()\n",
"NameError: name 'urllib' is not defined\n",
"\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mAlice\u001b[0m (to chat_manager):\n",
"\n",
"Apologies for the oversight. It looks like I missed importing the required `urllib.request` module. Please use the following updated code with the necessary import statement:\n",
"\n",
"```python\n",
"# filename: arxiv_search.py\n",
"import urllib.request\n",
"import urllib.parse\n",
"import feedparser\n",
"from urllib.error import URLError\n",
"\n",
"def search_arxiv(query: str):\n",
" base_url = 'http://export.arxiv.org/api/query?'\n",
" query = {'search_query' : f'ti:{query}', 'start' : 0, 'max_results' : 1, 'sortBy' : 'submittedDate', 'sortOrder' : 'descending'}\n",
" url = base_url + urllib.parse.urlencode(query)\n",
" \n",
" try:\n",
" # connect to arXiv API and get response\n",
" response = urllib.request.urlopen(url).read()\n",
"\n",
" except URLError as e:\n",
" print(\"There was a problem connecting to the arXiv API:\")\n",
" print(e.reason)\n",
"\n",
" else:\n",
" try:\n",
" # parse the response using feedparser\n",
" feed = feedparser.parse(response)\n",
" \n",
" # get the first (and presumably, the most recent) article in the result\n",
" entry = feed.entries[0]\n",
"\n",
" except Exception as e:\n",
" print(\"There was a problem parsing the result:\")\n",
" print(e)\n",
"\n",
" else:\n",
" # print details of the most recent article\n",
" print('The latest paper on', query['search_query'], 'that I could find is:\\n')\n",
" print('Title: ', entry.title)\n",
" print('Author: ', entry.author)\n",
" print('Link: ', entry.link)\n",
" print('\\nAbstract: ', entry.summary)\n",
"\n",
"# search for the latest paper about \"generative agents\"\n",
"search_arxiv(\"generative agents\")\n",
"```\n",
"\n",
"To execute this script:\n",
"1. Save the code to a file named arxiv_search.py\n",
"2. Run `pip install feedparser` to install the necessary library.\n",
"3. Run `python arxiv_search.py` to execute the script and get the latest paper on generative agents.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mBob\u001b[0m (to chat_manager):\n",
"\n",
"The amendment is correctly appended. You have properly imported the required `urllib.request` module. The code is now ready for execution. It correctly searches for the latest paper about \"generative agents\" using the arXiv API, and handles potential exceptions that may arise during the connection to the API or while parsing the response. Please proceed to execute this code.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[31m\n",
">>>>>>>> EXECUTING CODE BLOCK 0 (inferred language is python)...\u001b[0m\n",
"\u001b[33mHuman\u001b[0m (to chat_manager):\n",
"\n",
"exitcode: 1 (execution failed)\n",
"Code output: \n",
"Traceback (most recent call last):\n",
" File \"arxiv_search.py\", line 4, in <module>\n",
" import feedparser\n",
"ModuleNotFoundError: No module named 'feedparser'\n",
"\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mAlice\u001b[0m (to chat_manager):\n",
"\n",
"My apologies for the inconvenience. It appears that the 'feedparser' module is not installed. Please install the 'feedparser' module by running the following command:\n",
"\n",
"```sh\n",
"pip install feedparser\n",
"```\n",
"\n",
"Once you have installed the 'feedparser' module, please execute the arxiv_search.py script again:\n",
"\n",
"```sh\n",
"python arxiv_search.py\n",
"```\n",
"\n",
"This should execute the script and fetch the latest paper on generative agents.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mBob\u001b[0m (to chat_manager):\n",
"\n",
"That's correct. Make sure to install the 'feedparser' module using the provided command, and then you should be able to execute the updated arxiv_search.py script successfully. The script will search for the latest paper about \"generative agents\" using the arXiv API, and return information about the most recent article it finds.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[31m\n",
">>>>>>>> EXECUTING CODE BLOCK 0 (inferred language is sh)...\u001b[0m\n",
"\u001b[31m\n",
">>>>>>>> EXECUTING CODE BLOCK 1 (inferred language is sh)...\u001b[0m\n",
"\u001b[33mHuman\u001b[0m (to chat_manager):\n",
"\n",
"exitcode: 0 (execution succeeded)\n",
"Code output: \n",
"Defaulting to user installation because normal site-packages is not writeable\n",
"Collecting feedparser\n",
" Downloading feedparser-6.0.10-py3-none-any.whl (81 kB)\n",
" ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 81.1/81.1 KB 17.2 MB/s eta 0:00:00\n",
"Collecting sgmllib3k\n",
" Downloading sgmllib3k-1.0.0.tar.gz (5.8 kB)\n",
" Preparing metadata (setup.py): started\n",
" Preparing metadata (setup.py): finished with status 'done'\n",
"Building wheels for collected packages: sgmllib3k\n",
" Building wheel for sgmllib3k (setup.py): started\n",
" Building wheel for sgmllib3k (setup.py): finished with status 'done'\n",
" Created wheel for sgmllib3k: filename=sgmllib3k-1.0.0-py3-none-any.whl size=6046 sha256=867dc31954f27685ad79808f2ca2b5d8235496de750c61f110c200ba664a50e4\n",
" Stored in directory: /home/vscode/.cache/pip/wheels/65/7a/a7/78c287f64e401255dff4c13fdbc672fed5efbfd21c530114e1\n",
"Successfully built sgmllib3k\n",
"Installing collected packages: sgmllib3k, feedparser\n",
"Successfully installed feedparser-6.0.10 sgmllib3k-1.0.0\n",
"\n",
"The latest paper on ti:generative agents that I could find is:\n",
"\n",
"Title: LayoutLLM-T2I: Eliciting Layout Guidance from LLM for Text-to-Image\n",
" Generation\n",
"Author: Tat-Seng Chua\n",
"Link: http://arxiv.org/abs/2308.05095v1\n",
"\n",
"Abstract: In the text-to-image generation field, recent remarkable progress in Stable\n",
"Diffusion makes it possible to generate rich kinds of novel photorealistic\n",
"images. However, current models still face misalignment issues (e.g.,\n",
"problematic spatial relation understanding and numeration failure) in complex\n",
"natural scenes, which impedes the high-faithfulness text-to-image generation.\n",
"Although recent efforts have been made to improve controllability by giving\n",
"fine-grained guidance (e.g., sketch and scribbles), this issue has not been\n",
"fundamentally tackled since users have to provide such guidance information\n",
"manually. In this work, we strive to synthesize high-fidelity images that are\n",
"semantically aligned with a given textual prompt without any guidance. Toward\n",
"this end, we propose a coarse-to-fine paradigm to achieve layout planning and\n",
"image generation. Concretely, we first generate the coarse-grained layout\n",
"conditioned on a given textual prompt via in-context learning based on Large\n",
"Language Models. Afterward, we propose a fine-grained object-interaction\n",
"diffusion method to synthesize high-faithfulness images conditioned on the\n",
"prompt and the automatically generated layout. Extensive experiments\n",
"demonstrate that our proposed method outperforms the state-of-the-art models in\n",
"terms of layout and image generation. Our code and settings are available at\n",
"\\url{https://layoutllm-t2i.github.io}.\n",
"\n",
"\n",
"--------------------------------------------------------------------------------\n"
]
}
],
"source": [
"human.initiate_chat(manager, message=\"find a latest paper about generative agents\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "flaml",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.17"
},
"orig_nbformat": 4
},
"nbformat": 4,
"nbformat_minor": 2
}