autogen/notebook/autogen_agentchat_groupchat.ipynb

{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"<a href=\"https://colab.research.google.com/github/microsoft/FLAML/blob/main/notebook/autogen_agentchat_groupchat.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# Auto Generated Agent Chat: Group Chat\n",
"\n",
"Adapted from https://github.com/microsoft/FLAML/blob/4ea686af5c3e8ff24d9076a7a626c8b28ab5b1d7/notebook/autogen_multiagent_roleplay_chat.ipynb\n",
"\n",
"## Requirements\n",
"\n",
"FLAML requires `Python>=3.8`. To run this notebook example, please install flaml with the [autogen] option:\n",
"```bash\n",
"pip install flaml[autogen]\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"%%capture --no-stderr\n",
"# %pip install flaml[autogen]~=2.0.0rc5"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Set your API Endpoint\n",
"\n",
"The [`config_list_from_json`](https://microsoft.github.io/FLAML/docs/reference/autogen/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"from flaml import autogen\n",
"\n",
"config_list_gpt4 = autogen.config_list_from_json(\n",
" \"OAI_CONFIG_LIST\",\n",
" filter_dict={\n",
" \"model\": [\"gpt-4\", \"gpt4\", \"gpt-4-32k\", \"gpt-4-32k-0314\"],\n",
" },\n",
")\n",
"# config_list_gpt35 = autogen.config_list_from_json(\n",
"# \"OAI_CONFIG_LIST\",\n",
"# filter_dict={\n",
"# \"model\": {\n",
"# \"gpt-3.5-turbo\",\n",
"# \"gpt-3.5-turbo-16k\",\n",
"# \"gpt-3.5-turbo-0301\",\n",
"# \"chatgpt-35-turbo-0301\",\n",
"# \"gpt-35-turbo-v0301\",\n",
"# },\n",
"# },\n",
"# )"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"It first looks for the environment variable \"OAI_CONFIG_LIST\", which needs to be a valid JSON string. If that variable is not found, it then looks for a JSON file named \"OAI_CONFIG_LIST\". It filters the configs by model (you can filter by other keys as well); only the gpt-4 models are kept in the list based on the filter condition.\n",
"\n",
"The config list looks like the following:\n",
"```python\n",
"config_list = [\n",
" {\n",
" 'model': 'gpt-4',\n",
" 'api_key': '<your OpenAI API key here>',\n",
" },\n",
" {\n",
" 'model': 'gpt-4',\n",
" 'api_key': '<your Azure OpenAI API key here>',\n",
" 'api_base': '<your Azure OpenAI API base here>',\n",
" 'api_type': 'azure',\n",
" 'api_version': '2023-06-01-preview',\n",
" },\n",
" {\n",
" 'model': 'gpt-4-32k',\n",
" 'api_key': '<your Azure OpenAI API key here>',\n",
" 'api_base': '<your Azure OpenAI API base here>',\n",
" 'api_type': 'azure',\n",
" 'api_version': '2023-06-01-preview',\n",
" },\n",
"]\n",
"```\n",
"\n",
"If you open this notebook in Colab, you can upload your files by clicking the file icon on the left panel and then choosing the \"upload file\" icon.\n",
"\n",
"You can set the value of config_list in other ways if you prefer, e.g., by loading it from a YAML file."
]
},
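{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"As a minimal sketch of the YAML option mentioned above (assuming a hypothetical `OAI_CONFIG_LIST.yaml` file containing a list of entries with the same fields as the JSON example, and that PyYAML is installed), the config list could be loaded like this:\n",
"```python\n",
"import yaml\n",
"\n",
"# Load a list of config dicts from a YAML file (hypothetical filename)\n",
"with open(\"OAI_CONFIG_LIST.yaml\") as f:\n",
"    config_list = yaml.safe_load(f)\n",
"```"
]
},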
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Construct Agents"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"llm_config = {\"config_list\": config_list_gpt4}\n",
"human = autogen.UserProxyAgent(\n",
" name=\"Human\",\n",
" system_message=\"A human admin.\",\n",
")\n",
"alice = autogen.AssistantAgent(\n",
" name=\"Alice\",\n",
" llm_config=llm_config,\n",
")\n",
"bob = autogen.AssistantAgent(\n",
" name=\"Bob\",\n",
" system_message=\"Code reviewer. Prevent code execution if unsafe or not well documented. Suggest changes. Otherwise, approve and return the final code to execute.\",\n",
" llm_config=llm_config,\n",
")\n",
"groupchat = autogen.GroupChat(agents=[human, alice, bob], messages=[], max_round=4)\n",
"manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)"
]
},
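{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"In the transcript below, the `Human` proxy auto-replies by executing the approved code block. As a sketch only (defaults may differ across FLAML versions, and these exact settings are an assumption, not taken from this notebook), code execution on the proxy agent can be configured explicitly via `code_execution_config`:\n",
"```python\n",
"human = autogen.UserProxyAgent(\n",
"    name=\"Human\",\n",
"    system_message=\"A human admin.\",\n",
"    # hypothetical explicit settings: run code blocks in a local\n",
"    # \"groupchat\" working directory instead of a Docker container\n",
"    code_execution_config={\"work_dir\": \"groupchat\", \"use_docker\": False},\n",
")\n",
"```"
]
},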
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Start Chat"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[33mHuman\u001b[0m (to chat_manager):\n",
"\n",
"find a latest paper about generative agents\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mchat_manager\u001b[0m (to Alice):\n",
"\n",
"find a latest paper about generative agents\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mchat_manager\u001b[0m (to Bob):\n",
"\n",
"find a latest paper about generative agents\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mAlice\u001b[0m (to chat_manager):\n",
"\n",
"As an AI, I am unable to browse or search the web, download or read a file directly. But I can provide you with a Python script to scrape Google Scholar for the latest papers on generative agents.\n",
"\n",
"Make sure that you have the BeautifulSoup and requests libraries installed. If not, you can install them using the pip command:\n",
"\n",
"```bash\n",
"pip install beautifulsoup4 requests\n",
"```\n",
"\n",
"Then you can use this Python script to fetch and print the title of the latest paper:\n",
"\n",
"Python code:\n",
"```python\n",
"import requests\n",
"from bs4 import BeautifulSoup\n",
"\n",
"# Send HTTP request to Google Scholar with the query \"generative agents\"\n",
"res = requests.get('https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=generative+agents&btnG=')\n",
"\n",
"# Parse the HTML content of the page\n",
"soup = BeautifulSoup(res.text, 'html.parser')\n",
"\n",
"# Find the first result (which is the latest) and print its title\n",
"title = soup.find('h3', {'class': 'gs_rt'}).a.text\n",
"print(f\"The title of the latest paper about 'generative agents' is:\\n{title}\")\n",
"```\n",
"Please note that scraping platforms like Google Scholar may not always yield consistent results and is not always advised as it could violate the terms of service. Please use this code responsibly.\n",
"\n",
"If you are affiliated with a university or an organization that gives you access to paid scientific repositories (like IEEE, Springer, Elsevier), it's best to use those platforms as they provide more specific and legal access to scientific papers.\n",
"\n",
"Alternatively, databases like PubMed or arXiv.org provide free access to a large number of scientific papers - you might want to check them out for latest research papers on your topic of interest.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mchat_manager\u001b[0m (to Human):\n",
"\n",
"As an AI, I am unable to browse or search the web, download or read a file directly. But I can provide you with a Python script to scrape Google Scholar for the latest papers on generative agents.\n",
"\n",
"Make sure that you have the BeautifulSoup and requests libraries installed. If not, you can install them using the pip command:\n",
"\n",
"```bash\n",
"pip install beautifulsoup4 requests\n",
"```\n",
"\n",
"Then you can use this Python script to fetch and print the title of the latest paper:\n",
"\n",
"Python code:\n",
"```python\n",
"import requests\n",
"from bs4 import BeautifulSoup\n",
"\n",
"# Send HTTP request to Google Scholar with the query \"generative agents\"\n",
"res = requests.get('https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=generative+agents&btnG=')\n",
"\n",
"# Parse the HTML content of the page\n",
"soup = BeautifulSoup(res.text, 'html.parser')\n",
"\n",
"# Find the first result (which is the latest) and print its title\n",
"title = soup.find('h3', {'class': 'gs_rt'}).a.text\n",
"print(f\"The title of the latest paper about 'generative agents' is:\\n{title}\")\n",
"```\n",
"Please note that scraping platforms like Google Scholar may not always yield consistent results and is not always advised as it could violate the terms of service. Please use this code responsibly.\n",
"\n",
"If you are affiliated with a university or an organization that gives you access to paid scientific repositories (like IEEE, Springer, Elsevier), it's best to use those platforms as they provide more specific and legal access to scientific papers.\n",
"\n",
"Alternatively, databases like PubMed or arXiv.org provide free access to a large number of scientific papers - you might want to check them out for latest research papers on your topic of interest.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mchat_manager\u001b[0m (to Bob):\n",
"\n",
"As an AI, I am unable to browse or search the web, download or read a file directly. But I can provide you with a Python script to scrape Google Scholar for the latest papers on generative agents.\n",
"\n",
"Make sure that you have the BeautifulSoup and requests libraries installed. If not, you can install them using the pip command:\n",
"\n",
"```bash\n",
"pip install beautifulsoup4 requests\n",
"```\n",
"\n",
"Then you can use this Python script to fetch and print the title of the latest paper:\n",
"\n",
"Python code:\n",
"```python\n",
"import requests\n",
"from bs4 import BeautifulSoup\n",
"\n",
"# Send HTTP request to Google Scholar with the query \"generative agents\"\n",
"res = requests.get('https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=generative+agents&btnG=')\n",
"\n",
"# Parse the HTML content of the page\n",
"soup = BeautifulSoup(res.text, 'html.parser')\n",
"\n",
"# Find the first result (which is the latest) and print its title\n",
"title = soup.find('h3', {'class': 'gs_rt'}).a.text\n",
"print(f\"The title of the latest paper about 'generative agents' is:\\n{title}\")\n",
"```\n",
"Please note that scraping platforms like Google Scholar may not always yield consistent results and is not always advised as it could violate the terms of service. Please use this code responsibly.\n",
"\n",
"If you are affiliated with a university or an organization that gives you access to paid scientific repositories (like IEEE, Springer, Elsevier), it's best to use those platforms as they provide more specific and legal access to scientific papers.\n",
"\n",
"Alternatively, databases like PubMed or arXiv.org provide free access to a large number of scientific papers - you might want to check them out for latest research papers on your topic of interest.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mBob\u001b[0m (to chat_manager):\n",
"\n",
"Your code as it stands can throw an exception and result in an error if the HTTP request fails or if no search results are found. Also, the use of 'beautifulsoup4' and 'requests' should be well-documented.\n",
"\n",
"Here is the more secure and documented code:\n",
"\n",
"```python\n",
"import requests\n",
"from bs4 import BeautifulSoup\n",
"\n",
"# Function that uses requests.get to fetch an URL's content\n",
"def get_url_content(url):\n",
" try:\n",
" response = requests.get(url)\n",
" response.raise_for_status()\n",
" return response.text\n",
" except (requests.RequestException, ValueError) as error:\n",
" print(f'Google scholar cannot be accessed because of: {error}')\n",
" return None\n",
"\n",
"# Function to find the title of the latest paper about \"generative agents\"\n",
"def find_latest_paper(url):\n",
" html = get_url_content(url)\n",
" if html:\n",
" # Parse the HTML content of the page\n",
" soup = BeautifulSoup(html, 'html.parser')\n",
" # Find the first result (which is the latest one)\n",
" result = soup.find('h3', {'class': 'gs_rt'})\n",
" \n",
" # If result found, print its title; Otherwise, print paper not found\n",
" if result:\n",
" title = result.a.text\n",
" print(f\"The title of the latest paper about 'generative agents' is:\\n{title}\")\n",
" else:\n",
" print(\"No papers about 'generative agents' found.\")\n",
" else:\n",
" print(\"No internet or Google scholar is down.\")\n",
"\n",
"# URL of Google scholar with a search query \"generative agents\"\n",
"google_scholar_url = 'https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=generative+agents&btnG='\n",
"\n",
"find_latest_paper(google_scholar_url)\n",
"```\n",
"\n",
"Always use this script carefully because web-scraping isn't always reliable or legal on all web pages. Always ensure you have express permission or that the website's terms and conditions don't forbid this kind of usage.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mchat_manager\u001b[0m (to Human):\n",
"\n",
"Your code as it stands can throw an exception and result in an error if the HTTP request fails or if no search results are found. Also, the use of 'beautifulsoup4' and 'requests' should be well-documented.\n",
"\n",
"Here is the more secure and documented code:\n",
"\n",
"```python\n",
"import requests\n",
"from bs4 import BeautifulSoup\n",
"\n",
"# Function that uses requests.get to fetch an URL's content\n",
"def get_url_content(url):\n",
" try:\n",
" response = requests.get(url)\n",
" response.raise_for_status()\n",
" return response.text\n",
" except (requests.RequestException, ValueError) as error:\n",
" print(f'Google scholar cannot be accessed because of: {error}')\n",
" return None\n",
"\n",
"# Function to find the title of the latest paper about \"generative agents\"\n",
"def find_latest_paper(url):\n",
" html = get_url_content(url)\n",
" if html:\n",
" # Parse the HTML content of the page\n",
" soup = BeautifulSoup(html, 'html.parser')\n",
" # Find the first result (which is the latest one)\n",
" result = soup.find('h3', {'class': 'gs_rt'})\n",
" \n",
" # If result found, print its title; Otherwise, print paper not found\n",
" if result:\n",
" title = result.a.text\n",
" print(f\"The title of the latest paper about 'generative agents' is:\\n{title}\")\n",
" else:\n",
" print(\"No papers about 'generative agents' found.\")\n",
" else:\n",
" print(\"No internet or Google scholar is down.\")\n",
"\n",
"# URL of Google scholar with a search query \"generative agents\"\n",
"google_scholar_url = 'https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=generative+agents&btnG='\n",
"\n",
"find_latest_paper(google_scholar_url)\n",
"```\n",
"\n",
"Always use this script carefully because web-scraping isn't always reliable or legal on all web pages. Always ensure you have express permission or that the website's terms and conditions don't forbid this kind of usage.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mchat_manager\u001b[0m (to Alice):\n",
"\n",
"Your code as it stands can throw an exception and result in an error if the HTTP request fails or if no search results are found. Also, the use of 'beautifulsoup4' and 'requests' should be well-documented.\n",
"\n",
"Here is the more secure and documented code:\n",
"\n",
"```python\n",
"import requests\n",
"from bs4 import BeautifulSoup\n",
"\n",
"# Function that uses requests.get to fetch an URL's content\n",
"def get_url_content(url):\n",
" try:\n",
" response = requests.get(url)\n",
" response.raise_for_status()\n",
" return response.text\n",
" except (requests.RequestException, ValueError) as error:\n",
" print(f'Google scholar cannot be accessed because of: {error}')\n",
" return None\n",
"\n",
"# Function to find the title of the latest paper about \"generative agents\"\n",
"def find_latest_paper(url):\n",
" html = get_url_content(url)\n",
" if html:\n",
" # Parse the HTML content of the page\n",
" soup = BeautifulSoup(html, 'html.parser')\n",
" # Find the first result (which is the latest one)\n",
" result = soup.find('h3', {'class': 'gs_rt'})\n",
" \n",
" # If result found, print its title; Otherwise, print paper not found\n",
" if result:\n",
" title = result.a.text\n",
" print(f\"The title of the latest paper about 'generative agents' is:\\n{title}\")\n",
" else:\n",
" print(\"No papers about 'generative agents' found.\")\n",
" else:\n",
" print(\"No internet or Google scholar is down.\")\n",
"\n",
"# URL of Google scholar with a search query \"generative agents\"\n",
"google_scholar_url = 'https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=generative+agents&btnG='\n",
"\n",
"find_latest_paper(google_scholar_url)\n",
"```\n",
"\n",
"Always use this script carefully because web-scraping isn't always reliable or legal on all web pages. Always ensure you have express permission or that the website's terms and conditions don't forbid this kind of usage.\n",
"\n",
"--------------------------------------------------------------------------------\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[31m\n",
">>>>>>>> NO HUMAN INPUT RECEIVED.\u001b[0m\n",
"\u001b[31m\n",
">>>>>>>> USING AUTO REPLY...\u001b[0m\n",
"\u001b[31m\n",
">>>>>>>> EXECUTING CODE BLOCK 0 (inferred language is python)...\u001b[0m\n",
"\u001b[33mHuman\u001b[0m (to chat_manager):\n",
"\n",
"exitcode: 0 (execution succeeded)\n",
"Code output: \n",
"The title of the latest paper about 'generative agents' is:\n",
"Generative agents for player decision modeling in games\n",
"\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mchat_manager\u001b[0m (to Alice):\n",
"\n",
"exitcode: 0 (execution succeeded)\n",
"Code output: \n",
"The title of the latest paper about 'generative agents' is:\n",
"Generative agents for player decision modeling in games\n",
"\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mchat_manager\u001b[0m (to Bob):\n",
"\n",
"exitcode: 0 (execution succeeded)\n",
"Code output: \n",
"The title of the latest paper about 'generative agents' is:\n",
"Generative agents for player decision modeling in games\n",
"\n",
"\n",
"--------------------------------------------------------------------------------\n"
]
}
],
"source": [
"human.initiate_chat(manager, message=\"find a latest paper about generative agents\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "flaml",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.17"
},
"orig_nbformat": 4
},
"nbformat": 4,
"nbformat_minor": 2
}