autogen/notebook/autogen_agent_function_call.ipynb
Chi Wang 3e7aac6e8b
unify auto_reply; bug fix in UserProxyAgent; reorg agent hierarchy (#1142)

{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "ae1f50ec",
"metadata": {},
"source": [
"<a href=\"https://colab.research.google.com/github/microsoft/FLAML/blob/main/notebook/autogen_agent_function_call.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "9a71fa36",
"metadata": {},
"source": [
"# Interative LLM Agent with Function Calls\n",
"\n",
"FLAML offers an experimental feature of interactive LLM agents, which can be used to solve various tasks with human or automatic feedback, including tasks that require using tools via code. Please find documentation about this feature [here](https://microsoft.github.io/FLAML/docs/Use-Cases/Auto-Generation#agents-experimental).\n",
"\n",
"In this notebook, we demonstrate how to use `AssistantAgent` and `UserProxyAgent` to make function calls with the new feature of OpenAI models (in model version 0613). A specified prompt and function configs need to be passed to `AssistantAgent` to initialize the agent. The corresponding functions need to be passed to `UserProxyAgent`, which will be responsible for executing any function calls made by `AssistantAgent`. Besides this requirement of matching descriptions with functions, we recommend checking the system prompt to make sure the instructions align with the function call descriptions.\n",
"\n",
"## Requirements\n",
"\n",
"FLAML requires `Python>=3.8`. To run this notebook example, please install flaml with the [mathchat] option since we will import functions from `MathUserProxyAgent`:\n",
"```bash\n",
"pip install flaml[mathchat]\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "2b803c17",
"metadata": {},
"outputs": [],
"source": [
"# %pip install flaml[mathchat]~=2.0.0rc4"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "5ebd2397",
"metadata": {},
"source": [
"## Set your API Endpoint\n",
"\n",
"The [`config_list_from_models`](https://microsoft.github.io/FLAML/docs/reference/autogen/oai/openai_utils#config_list_from_models) function tries to create a list of configurations using Azure OpenAI endpoints and OpenAI endpoints for the provided list of models. It assumes the api keys and api bases are stored in the corresponding environment variables or local txt files:\n",
"\n",
"- OpenAI API key: os.environ[\"OPENAI_API_KEY\"] or `openai_api_key_file=\"key_openai.txt\"`.\n",
"- Azure OpenAI API key: os.environ[\"AZURE_OPENAI_API_KEY\"] or `aoai_api_key_file=\"key_aoai.txt\"`. Multiple keys can be stored, one per line.\n",
"- Azure OpenAI API base: os.environ[\"AZURE_OPENAI_API_BASE\"] or `aoai_api_base_file=\"base_aoai.txt\"`. Multiple bases can be stored, one per line.\n",
"\n",
"It's OK to have only the OpenAI API key, or only the Azure OpenAI API key + base.\n",
"If you open this notebook in google colab, you can upload your files by click the file icon on the left panel and then choose \"upload file\" icon.\n",
"\n",
"The following code excludes Azure OpenAI endpoints from the config list because they don't support functions yet. Remove the `exclude` argument after they do."
]
},
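{
"attachments": {},
"cell_type": "markdown",
"id": "3f1c2a9b",
"metadata": {},
"source": [
"Optionally, before creating the config list, you can set the OpenAI key via an environment variable instead of a local `key_openai.txt` file (handy in Colab). The cell below is just a sketch; replace the placeholder with your own key."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7b2d9c4e",
"metadata": {},
"outputs": [],
"source": [
"# optional: provide the OpenAI API key via an environment variable instead of key_openai.txt\n",
"# import os\n",
"# os.environ[\"OPENAI_API_KEY\"] = \"<your OpenAI API key here>\""
]
},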
{
"cell_type": "code",
"execution_count": 3,
"id": "dca301a4",
"metadata": {},
"outputs": [],
"source": [
"from flaml import oai\n",
"\n",
"config_list = oai.config_list_from_models(model_list=[\"gpt-4\", \"gpt-3.5-turbo\", \"gpt-3.5-turbo-16k\"], exclude=\"aoai\")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "92fde41f",
"metadata": {},
"source": [
"The config list looks like the following:\n",
"```python\n",
"config_list = [\n",
" {\n",
" 'model': 'gpt-4',\n",
" 'api_key': '<your OpenAI API key here>',\n",
" }, # OpenAI API endpoint for gpt-4\n",
" {\n",
" 'model': 'gpt-3.5-turbo',\n",
" 'api_key': '<your OpenAI API key here>',\n",
" }, # OpenAI API endpoint for gpt-3.5-turbo\n",
" {\n",
" 'model': 'gpt-3.5-turbo-16k',\n",
" 'api_key': '<your OpenAI API key here>',\n",
" }, # OpenAI API endpoint for gpt-3.5-turbo-16k\n",
"]\n",
"```\n"
]
},
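{
"attachments": {},
"cell_type": "markdown",
"id": "5a6e8d2c",
"metadata": {},
"source": [
"Alternatively, assuming your flaml version also provides [`config_list_from_json`](https://microsoft.github.io/FLAML/docs/reference/autogen/oai/openai_utils), you can keep a list in this format in a JSON file or an environment variable and load it directly. The name `OAI_CONFIG_LIST` below is only an example."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9c3b7f15",
"metadata": {},
"outputs": [],
"source": [
"# optional alternative (a sketch): load a config list of the same format\n",
"# from a JSON file or from an environment variable named OAI_CONFIG_LIST\n",
"# config_list = oai.config_list_from_json(\"OAI_CONFIG_LIST\")"
]
},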
{
"attachments": {},
"cell_type": "markdown",
"id": "2b9526e7",
"metadata": {},
"source": [
"## Making Function Calls\n",
"\n",
"In this example, we demonstrate function call execution with `AssistantAgent` and `UserProxyAgent`. With the default system prompt of `AssistantAgent`, we allow the LLM assistant to perform tasks with code, and the `UserProxyAgent` would extract code blocks from the LLM response and execute them. With the new \"function_call\" feature, we define a new function using the pre-defined `execute_code` from `UserProxyAgent` and specify the description of the function in the OpenAI config. \n",
"\n",
"Then, the model has two paths to execute code:\n",
"1. Put the code blocks in the response. `UserProxyAgent` will extract and execute the code through `execute_code` method in the class.\n",
"2. As we put a function description to OpenAI config and register a function `exec_code` in `UserProxyAgent`, the model can also make function calls (will be put in `function_call` field of the API reply). `UserProxyAgent` will execute the function call through the registered `exec_code` method."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "9fb85afb",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"user (to chatbot):\n",
"\n",
"Draw a rocket and save to a file named 'rocket.svg'\n",
"\n",
"--------------------------------------------------------------------------------\n",
"chatbot (to user):\n",
"\n",
"You can use the `svgwrite` library for Python to draw images into SVG format. Before we draw the rocket, you need to install the library. Use the code below to install it.\n",
"\n",
"```sh\n",
"pip install svgwrite\n",
"```\n",
"\n",
"After installing the library, here is the python code you can use to draw a rocket and save it to a file named 'rocket.svg':\n",
"\n",
"```python\n",
"# filename: draw_rocket.py\n",
"\n",
"import svgwrite\n",
"\n",
"def draw_rocket():\n",
" dwg = svgwrite.Drawing('rocket.svg', profile='tiny')\n",
"\n",
" # Draw rocket body\n",
" dwg.add(dwg.rect((50, 20), (20, 40), fill='grey'))\n",
"\n",
" # Draw top of rocket\n",
" dwg.add(dwg.polygon(points=[(50, 20), (60, 0), (70, 20)], fill='red'))\n",
"\n",
" # Draw bottom of rocket\n",
" dwg.add(dwg.polygon(points=[(50, 60), (60, 80), (70, 60)], fill='red'))\n",
"\n",
" # Draw rocket window\n",
" dwg.add(dwg.circle(center=(60, 40), r=5, fill='blue'))\n",
"\n",
" dwg.save()\n",
"\n",
"draw_rocket()\n",
"```\n",
"You can run this code using Python by calling `python draw_rocket.py`. After running this script, you will have a file named `rocket.svg` in your current directory. The SVG picture represents a simple rocket with a gray body, red top and bottom, and a blue window. \n",
"\n",
"Please replace the `draw_rocket.py` with your actual python filename when you execute the script.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\n",
">>>>>>>> NO HUMAN INPUT RECEIVED. USING AUTO REPLY FOR THE USER...\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"user (to chatbot):\n",
"\n",
"exitcode: 0 (execution succeeded)\n",
"Code output: \n",
"Collecting svgwrite\n",
" Downloading svgwrite-1.4.3-py3-none-any.whl (67 kB)\n",
" ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 67.1/67.1 kB 3.2 MB/s eta 0:00:00\n",
"Installing collected packages: svgwrite\n",
"Successfully installed svgwrite-1.4.3\n",
"WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv\n",
"\n",
"\n",
"\n",
"--------------------------------------------------------------------------------\n",
"chatbot (to user):\n",
"\n",
"***** Suggested function Call: execute_code *****\n",
"Arguments: \n",
"{\n",
" \"code_type\": \"python\",\n",
" \"code\": \"import svgwrite\\n\\ndef draw_rocket():\\n dwg = svgwrite.Drawing('rocket.svg', profile='tiny')\\n\\n # Draw rocket body\\n dwg.add(dwg.rect((50, 20), (20, 40), fill='grey'))\\n\\n # Draw top of rocket\\n dwg.add(dwg.polygon(points=[(50, 20), (60, 0), (70, 20)], fill='red'))\\n\\n # Draw bottom of rocket\\n dwg.add(dwg.polygon(points=[(50, 60), (60, 80), (70, 60)], fill='red'))\\n\\n # Draw rocket window\\n dwg.add(dwg.circle(center=(60, 40), r=5, fill='blue'))\\n\\n dwg.save()\\n\\ndraw_rocket()\"\n",
"}\n",
"*************************************************\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\n",
">>>>>>>> NO HUMAN INPUT RECEIVED. USING AUTO REPLY FOR THE USER...\n",
"user (to chatbot):\n",
"\n",
"***** Response from calling function \"execute_code\" *****\n",
"(0, '\\n')\n",
"*********************************************************\n",
"\n",
"--------------------------------------------------------------------------------\n",
"chatbot (to user):\n",
"\n",
"Great! The code to draw the rocket and save it to 'rocket.svg' should have successfully executed. \n",
"\n",
"You should now be able to find the file 'rocket.svg' in your current directory and open it with an application that can handle SVG files (a web browser, for instance).\n",
"\n",
"Is there anything else you need help with? If not, please reply 'TERMINATE'.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\n",
">>>>>>>> NO HUMAN INPUT RECEIVED. USING AUTO REPLY FOR THE USER...\n",
"user (to chatbot):\n",
"\n",
"\n",
"\n",
"--------------------------------------------------------------------------------\n",
"chatbot (to user):\n",
"\n",
"TERMINATE\n",
"\n",
"--------------------------------------------------------------------------------\n"
]
}
],
"source": [
"from flaml.autogen.agent import AssistantAgent, UserProxyAgent\n",
"\n",
"oai_config = {\n",
" \"functions\": [\n",
" {\n",
" \"name\": \"execute_code\",\n",
" \"description\": \"Receive a python code or shell script and return the execution result.\",\n",
" \"parameters\": {\n",
" \"type\": \"object\",\n",
" \"properties\": {\n",
" \"code_type\": {\n",
" \"type\": \"string\",\n",
" \"description\": \"Code type, 'python' or 'sh'.\",\n",
" },\n",
" \"code\": {\n",
" \"type\": \"string\",\n",
" \"description\": \"Valid Python code to execute.\",\n",
" }\n",
" },\n",
" \"required\": [\"code_type\", \"code\"],\n",
" },\n",
" }\n",
" ],\n",
" \"config_list\": config_list,\n",
"}\n",
"chatbot = AssistantAgent(\"chatbot\", oai_config=oai_config)\n",
"\n",
"# create a UserProxyAgent instance named \"user\"\n",
"user = UserProxyAgent(\n",
" \"user\",\n",
" human_input_mode=\"NEVER\",\n",
" code_execution_config={\"work_dir\": \"coding\"},\n",
")\n",
"\n",
"# define an `execute_code` function according to the function desription\n",
"def exec_code(code_type, code):\n",
" # here we reuse the method in the user proxy agent\n",
" # in general, this is not necessary\n",
" return user.execute_code_blocks([(code_type, code)])\n",
"\n",
"# register the `execute_code` function\n",
"user.register_function(function_map={\"execute_code\": exec_code})\n",
"\n",
"# start the conversation\n",
"user.initiate_chat(\n",
" chatbot,\n",
" message=\"Draw a rocket and save to a file named 'rocket.svg'\",\n",
")\n"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "42cee331",
"metadata": {},
"outputs": [
{
"data": {
"image/svg+xml": [
"<svg xmlns=\"http://www.w3.org/2000/svg\" xmlns:ev=\"http://www.w3.org/2001/xml-events\" xmlns:xlink=\"http://www.w3.org/1999/xlink\" baseProfile=\"tiny\" height=\"100%\" version=\"1.2\" width=\"100%\"><defs/><rect fill=\"grey\" height=\"40\" width=\"20\" x=\"50\" y=\"20\"/><polygon fill=\"red\" points=\"50,20 60,0 70,20\"/><polygon fill=\"red\" points=\"50,60 60,80 70,60\"/><circle cx=\"60\" cy=\"40\" fill=\"blue\" r=\"5\"/></svg>"
],
"text/plain": [
"<IPython.core.display.SVG object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"# uncomment the following to render the svg file\n",
"# from IPython.display import SVG, display\n",
"\n",
"# display(SVG(\"coding/rocket.svg\"))"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "e9531d55",
"metadata": {},
"source": [
"## Another example with Wolfram Alpha API\n",
"\n",
"We give another example of query Wolfram Alpha API to solve math problem. We use the predefined function `MathUserProxyAgent().execute_one_wolfram_query` as the function to be called."
]
},
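{
"attachments": {},
"cell_type": "markdown",
"id": "d2f4a8b7",
"metadata": {},
"source": [
"The example cell that follows reads the App ID from a local `wolfram.txt` file when the `WOLFRAM_ALPHA_APPID` environment variable is not set. Alternatively, you can set the variable yourself first, as in this optional sketch (the placeholder value is for you to replace)."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e7c1b3d9",
"metadata": {},
"outputs": [],
"source": [
"# optional: set your Wolfram Alpha App ID directly instead of using wolfram.txt\n",
"# import os\n",
"# os.environ[\"WOLFRAM_ALPHA_APPID\"] = \"<your Wolfram Alpha App ID here>\""
]
},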
{
"cell_type": "code",
"execution_count": 5,
"id": "4a917492",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"user (to chatbot):\n",
"\n",
"Problem: Find all $x$ that satisfy the inequality $(2x+10)(x+3)<(3x+9)(x+8)$. Express your answer in interval notation.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"chatbot (to user):\n",
"\n",
"***** Suggested function Call: query_wolfram *****\n",
"Arguments: \n",
"{\n",
" \"query\": \"solve (2x+10)(x+3) < (3x+9)(x+8) for x\"\n",
"}\n",
"**************************************************\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\n",
">>>>>>>> NO HUMAN INPUT RECEIVED. USING AUTO REPLY FOR THE USER...\n",
"user (to chatbot):\n",
"\n",
"***** Response from calling function \"query_wolfram\" *****\n",
"('Assumption: solve (2 x + 10) (x + 3)<(3 x + 9) (x + 8) for x \\nAnswer: ans 0: x<-14\\nans 1: x>-3\\n', True)\n",
"**********************************************************\n",
"\n",
"--------------------------------------------------------------------------------\n",
"chatbot (to user):\n",
"\n",
"The solution to the inequality $(2x+10)(x+3)<(3x+9)(x+8)$ is $x \\in (-\\infty, -14) \\cup (-3, +\\infty)$.\n",
"\n",
"To express in interval notation, the answer is $(-\\infty, -14) \\cup (-3, \\infty)$. \n",
"\n",
"TERMINATE.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\n",
">>>>>>>> NO HUMAN INPUT RECEIVED. USING AUTO REPLY FOR THE USER...\n",
"user (to chatbot):\n",
"\n",
"\n",
"\n",
"--------------------------------------------------------------------------------\n",
"chatbot (to user):\n",
"\n",
"TERMINATE\n",
"\n",
"--------------------------------------------------------------------------------\n"
]
}
],
"source": [
"import os\n",
"from flaml.autogen.agent import AssistantAgent, UserProxyAgent\n",
"from flaml.autogen.agent.math_user_proxy_agent import MathUserProxyAgent\n",
"\n",
"# you need to provide a wolfram alpha appid to run this example\n",
"if not os.environ.get(\"WOLFRAM_ALPHA_APPID\"):\n",
" os.environ[\"WOLFRAM_ALPHA_APPID\"] = open(\"wolfram.txt\").read().strip()\n",
"\n",
"\n",
"sys_prompt = \"\"\"You are an advanced AI with the capability to solve complex math problems.\n",
"Wolfram alpha is provided as an external service to help you solve math problems.\n",
"\n",
"When the user gives a math problem, please use the most efficient way to solve the problem.\n",
"You are encouraged to use Wolfram alpha whenever it is possible during the solving process. For example, simplications, calculations, equation solving, etc.\n",
"However, if the operation requires little computation (very simple calculations, etc), you can also solve it directly.\n",
"Reply \"TERMINATE\" in the end when everything is done.\n",
"\"\"\"\n",
"oai_config = {\n",
" \"model\": \"gpt-4-0613\",\n",
" \"functions\": [\n",
" {\n",
" \"name\": \"query_wolfram\",\n",
" \"description\": \"Return the API query result from the Wolfram Alpha. the ruturn is a tuple of (result, is_success).\",\n",
" \"parameters\": {\n",
" \"type\": \"object\",\n",
" \"properties\": {\n",
" \"query\": {\n",
" \"type\": \"string\",\n",
" \"description\": \"The Wolfram Alpha code to be executed.\",\n",
" }\n",
" },\n",
" \"required\": [\"query\"],\n",
" },\n",
" }\n",
" ],\n",
" \"config_list\": config_list,\n",
"}\n",
"chatbot = AssistantAgent(\"chatbot\", system_message=sys_prompt, oai_config=oai_config)\n",
"\n",
"# the key in `function_map` should match the function name in \"functions\" above\n",
"# we register a class instance method directly\n",
"user = UserProxyAgent(\n",
" \"user\",\n",
" max_consecutive_auto_reply=2,\n",
" human_input_mode=\"NEVER\",\n",
" function_map={\"query_wolfram\": MathUserProxyAgent().execute_one_wolfram_query},\n",
")\n",
"\n",
"# start the conversation\n",
"user.initiate_chat(\n",
" chatbot,\n",
" message=\"Problem: Find all $x$ that satisfy the inequality $(2x+10)(x+3)<(3x+9)(x+8)$. Express your answer in interval notation.\",\n",
")\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "flaml_dev",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.15"
}
},
"nbformat": 4,
"nbformat_minor": 5
}