mirror of https://github.com/microsoft/autogen.git
synced 2025-08-21 23:22:05 +00:00

Add tool_agent_caller_loop and group chat notebook. (#405)

* Add tool_agent_caller_loop and group chat notebook.
* Fix types
* fix ref

Co-authored-by: Jack Gerrits <jackgerrits@users.noreply.github.com>

This commit is contained in:
parent c8f6f3bb38
commit 12cf331e71
@@ -116,7 +116,7 @@ implementation of the contracts determines how agents handle messages.
 The behavior contract is sometimes referred to as the message protocol.
 It is the developer's responsibility to implement the behavior contract.
 Multi-agent patterns are design patterns that emerge from behavior contracts
-(see [Multi-Agent Design Patterns](../getting-started/multi-agent-design-patterns.ipynb)).
+(see [Multi-Agent Design Patterns](../getting-started/multi-agent-design-patterns.md)).
 
 ### An Example Application
 
399  python/docs/src/getting-started/group-chat.ipynb  Normal file
File diff suppressed because one or more lines are too long
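The suppressed notebook covers the group chat pattern named in the commit message. Its core idea (multiple agents taking turns over a shared transcript) can be sketched without any framework; the sketch below is a plain-Python illustration with hypothetical agent names, not the notebook's actual API:

```python
from typing import Callable, List

# An agent is any callable that reads the transcript and returns a new message.
Agent = Callable[[List[str]], str]


def writer(transcript: List[str]) -> str:
    # Stand-in for an LLM-backed writer agent.
    return "draft: a haiku about autumn"


def editor(transcript: List[str]) -> str:
    # Stand-in for an LLM-backed editor agent; reacts to the last message.
    return f"edited ({transcript[-1]})"


def group_chat(agents: List[Agent], task: str, rounds: int = 1) -> List[str]:
    # All participants share one transcript; speakers take turns (round-robin).
    transcript = [task]
    for _ in range(rounds):
        for agent in agents:
            transcript.append(agent(transcript))
    return transcript
```

Each agent only sees the shared transcript, so the "pattern" lives entirely in the turn order and the message list, not in the agents themselves.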
@@ -0,0 +1,18 @@
+# Multi-Agent Design Patterns
+
+Agents can work together in a variety of ways to solve problems.
+Research works like [AutoGen](https://aka.ms/autogen-paper),
+[MetaGPT](https://arxiv.org/abs/2308.00352)
+and [ChatDev](https://arxiv.org/abs/2307.07924) have shown
+multi-agent systems outperforming single-agent systems at complex tasks
+like software development.
+
+A multi-agent design pattern is a structure that emerges from message protocols:
+it describes how agents interact with each other to solve problems.
+For example, the [tool-equipped agent](./tools.ipynb#tool-equipped-agent) in
+the previous section employs a design pattern called ReAct,
+which involves an agent interacting with tools.
+
+You can implement any multi-agent design pattern using AGNext agents.
+In the next two sections, we will discuss two common design patterns:
+group chat for task decomposition, and reflection for robustness.
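The added page's claim that a pattern "emerges from message protocols" can be sketched in plain Python. The sketch below is illustrative only (hypothetical names, not the AGNext API): the two agents are coupled solely through the message types they exchange.

```python
import asyncio
from dataclasses import dataclass


# The "message protocol": the set of message types the agents agree on.
@dataclass
class Task:
    content: str


@dataclass
class Result:
    content: str


class Worker:
    async def handle(self, task: Task) -> Result:
        # A real agent would call a model or a tool here.
        return Result(content=f"done: {task.content}")


class Coordinator:
    def __init__(self, worker: Worker) -> None:
        self._worker = worker

    async def run(self, request: str) -> str:
        # The pattern (here: simple delegation) emerges from which
        # messages are sent to whom, and in what order.
        result = await self._worker.handle(Task(content=request))
        return result.content


async def main() -> str:
    return await Coordinator(Worker()).run("write a haiku")
```

Swapping the message flow (e.g. broadcasting `Task` to several workers) changes the pattern without changing the agents' internals.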
@ -4,30 +4,7 @@
|
|||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"# Multi-Agent Design Patterns\n",
|
"# Reflection\n",
|
||||||
"\n",
|
|
||||||
"Agents can work together in a variety of ways to solve problems.\n",
|
|
||||||
"Research works like [AutoGen](https://aka.ms/autogen-paper),\n",
|
|
||||||
"[MetaGPT](https://arxiv.org/abs/2308.00352)\n",
|
|
||||||
"and [ChatDev](https://arxiv.org/abs/2307.07924) have shown\n",
|
|
||||||
"multi-agent systems out-performing single agent systems at complex tasks\n",
|
|
||||||
"like software development.\n",
|
|
||||||
"\n",
|
|
||||||
"A multi-agent design pattern is a structure that emerges from message protocols:\n",
|
|
||||||
"it describes how agents interact with each other to solve problems.\n",
|
|
||||||
"For example, the [tool-equiped agent](./tools.ipynb#tool-equipped-agent) in\n",
|
|
||||||
"the previous section employs a design pattern called ReAct,\n",
|
|
||||||
"which involves an agent interacting with tools.\n",
|
|
||||||
"\n",
|
|
||||||
"You can implement any multi-agent design pattern using AGNext agents.\n",
|
|
||||||
"In this section, we use the reflection pattern as an example."
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "markdown",
|
|
||||||
"metadata": {},
|
|
||||||
"source": [
|
|
||||||
"## Reflection\n",
|
|
||||||
"\n",
|
"\n",
|
||||||
"Reflection is a design pattern where an LLM generation is followed by a reflection,\n",
|
"Reflection is a design pattern where an LLM generation is followed by a reflection,\n",
|
||||||
"which in itself is another LLM generation conditioned on the output of the first one.\n",
|
"which in itself is another LLM generation conditioned on the output of the first one.\n",
|
||||||
@ -50,7 +27,7 @@
|
|||||||
"will generate a code snippet, and the reviewer agent will generate a critique\n",
|
"will generate a code snippet, and the reviewer agent will generate a critique\n",
|
||||||
"of the code snippet.\n",
|
"of the code snippet.\n",
|
||||||
"\n",
|
"\n",
|
||||||
"### Message Protocol\n",
|
"## Message Protocol\n",
|
||||||
"\n",
|
"\n",
|
||||||
"Before we define the agents, we need to first define the message protocol for the agents."
|
"Before we define the agents, we need to first define the message protocol for the agents."
|
||||||
]
|
]
|
||||||
@ -107,7 +84,7 @@
|
|||||||
"\n",
|
"\n",
|
||||||
"\n",
|
"\n",
|
||||||
"\n",
|
"\n",
|
||||||
"### Agents\n",
|
"## Agents\n",
|
||||||
"\n",
|
"\n",
|
||||||
"Now, let's define the agents for the reflection design pattern."
|
"Now, let's define the agents for the reflection design pattern."
|
||||||
]
|
]
|
||||||
@ -376,7 +353,7 @@
|
|||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"### Logging\n",
|
"## Logging\n",
|
||||||
"\n",
|
"\n",
|
||||||
"Turn on logging to see the messages exchanged between the agents."
|
"Turn on logging to see the messages exchanged between the agents."
|
||||||
]
|
]
|
||||||
@ -397,7 +374,7 @@
|
|||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"### Running the Design Pattern\n",
|
"## Running the Design Pattern\n",
|
||||||
"\n",
|
"\n",
|
||||||
"Let's test the design pattern with a coding task."
|
"Let's test the design pattern with a coding task."
|
||||||
]
|
]
|
@@ -1,324 +1,320 @@
 {
  "cells": [
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "# Tools\n",
     "\n",
     "Tools are code that can be executed by an agent to perform actions. A tool\n",
     "can be a simple function such as a calculator, or an API call to a third-party service\n",
     "such as stock price lookup and weather forecast.\n",
     "In the context of AI agents, tools are designed to be executed by agents in\n",
     "response to model-generated function calls.\n",
     "\n",
     "AGNext provides the {py:mod}`agnext.components.tools` module with a suite of built-in\n",
     "tools and utilities for creating and running custom tools."
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "## Built-in Tools\n",
     "\n",
     "One of the built-in tools is the {py:class}`agnext.components.tools.PythonCodeExecutionTool`,\n",
     "which allows agents to execute Python code snippets.\n",
     "\n",
     "Here is how you create the tool and use it."
    ]
   },
   {
    "cell_type": "code",
-   "execution_count": null,
+   "execution_count": 1,
    "metadata": {},
-   "outputs": [],
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "Hello, world!\n",
+      "\n"
+     ]
+    }
+   ],
    "source": [
     "from agnext.components.code_executor import LocalCommandLineCodeExecutor\n",
     "from agnext.components.tools import PythonCodeExecutionTool\n",
     "from agnext.core import CancellationToken\n",
     "\n",
     "# Create the tool.\n",
     "code_executor = LocalCommandLineCodeExecutor()\n",
     "code_execution_tool = PythonCodeExecutionTool(code_executor)\n",
     "cancellation_token = CancellationToken()\n",
     "\n",
     "# Use the tool directly without an agent.\n",
     "code = \"print('Hello, world!')\"\n",
     "result = await code_execution_tool.run_json({\"code\": code}, cancellation_token)\n",
     "print(code_execution_tool.return_value_as_string(result))"
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "The {py:class}`~agnext.components.code_executor.LocalCommandLineCodeExecutor`\n",
     "class is a built-in code executor that runs Python code snippets in a subprocess\n",
     "in the local command line environment.\n",
     "The {py:class}`~agnext.components.tools.PythonCodeExecutionTool` class wraps the code executor\n",
     "and provides a simple interface to execute Python code snippets.\n",
     "\n",
     "Other built-in tools will be added in the future."
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "## Custom Function Tools\n",
     "\n",
     "A tool can also be a simple Python function that performs a specific action.\n",
     "To create a custom function tool, you just need to create a Python function\n",
     "and use the {py:class}`agnext.components.tools.FunctionTool` class to wrap it.\n",
     "\n",
     "For example, a simple tool to obtain the stock price of a company might look like this:"
    ]
   },
   {
    "cell_type": "code",
-   "execution_count": 5,
+   "execution_count": 2,
    "metadata": {},
    "outputs": [
     {
      "name": "stdout",
      "output_type": "stream",
      "text": [
-      "138.75280591295171\n"
+      "194.71306528148511\n"
      ]
     }
    ],
    "source": [
     "import random\n",
     "\n",
     "from agnext.components.tools import FunctionTool\n",
     "from agnext.core import CancellationToken\n",
     "from typing_extensions import Annotated\n",
     "\n",
     "\n",
     "async def get_stock_price(ticker: str, date: Annotated[str, \"Date in YYYY/MM/DD\"]) -> float:\n",
     "    # Returns a random stock price for demonstration purposes.\n",
     "    return random.uniform(10, 200)\n",
     "\n",
     "\n",
     "# Create a function tool.\n",
     "stock_price_tool = FunctionTool(get_stock_price, description=\"Get the stock price.\")\n",
     "\n",
     "# Run the tool.\n",
     "cancellation_token = CancellationToken()\n",
     "result = await stock_price_tool.run_json({\"ticker\": \"AAPL\", \"date\": \"2021/01/01\"}, cancellation_token)\n",
     "\n",
     "# Print the result.\n",
     "print(stock_price_tool.return_value_as_string(result))"
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "## Tool-Equipped Agent\n",
     "\n",
     "To use tools with an agent, you can use {py:class}`agnext.components.tool_agent.ToolAgent`,\n",
     "by using it in a composition pattern.\n",
     "Here is an example tool-use agent that uses {py:class}`~agnext.components.tool_agent.ToolAgent`\n",
     "as an inner agent for executing tools."
    ]
   },
   {
    "cell_type": "code",
-   "execution_count": 6,
+   "execution_count": 3,
    "metadata": {},
    "outputs": [],
    "source": [
-    "import asyncio\n",
     "from dataclasses import dataclass\n",
     "from typing import List\n",
     "\n",
     "from agnext.application import SingleThreadedAgentRuntime\n",
-    "from agnext.components import FunctionCall, RoutedAgent, message_handler\n",
+    "from agnext.components import RoutedAgent, message_handler\n",
     "from agnext.components.models import (\n",
-    "    AssistantMessage,\n",
     "    ChatCompletionClient,\n",
-    "    FunctionExecutionResult,\n",
-    "    FunctionExecutionResultMessage,\n",
     "    LLMMessage,\n",
     "    OpenAIChatCompletionClient,\n",
     "    SystemMessage,\n",
     "    UserMessage,\n",
     ")\n",
-    "from agnext.components.tool_agent import ToolAgent, ToolException\n",
+    "from agnext.components.tool_agent import ToolAgent, tool_agent_caller_loop\n",
     "from agnext.components.tools import FunctionTool, Tool, ToolSchema\n",
     "from agnext.core import AgentId, AgentInstantiationContext, MessageContext\n",
     "\n",
     "\n",
     "@dataclass\n",
     "class Message:\n",
     "    content: str\n",
     "\n",
     "\n",
     "class ToolUseAgent(RoutedAgent):\n",
     "    def __init__(self, model_client: ChatCompletionClient, tool_schema: List[ToolSchema], tool_agent: AgentId) -> None:\n",
     "        super().__init__(\"An agent with tools\")\n",
     "        self._system_messages: List[LLMMessage] = [SystemMessage(\"You are a helpful AI assistant.\")]\n",
     "        self._model_client = model_client\n",
     "        self._tool_schema = tool_schema\n",
     "        self._tool_agent = tool_agent\n",
     "\n",
     "    @message_handler\n",
     "    async def handle_user_message(self, message: Message, ctx: MessageContext) -> Message:\n",
     "        # Create a session of messages.\n",
     "        session: List[LLMMessage] = [UserMessage(content=message.content, source=\"user\")]\n",
-    "        # Get a response from the model.\n",
-    "        response = await self._model_client.create(\n",
-    "            self._system_messages + session, tools=self._tool_schema, cancellation_token=cancellation_token\n",
-    "        )\n",
-    "        # Add the response to the session.\n",
-    "        session.append(AssistantMessage(content=response.content, source=\"assistant\"))\n",
-    "\n",
-    "        # Keep iterating until the model stops generating tool calls.\n",
-    "        while isinstance(response.content, list) and all(isinstance(item, FunctionCall) for item in response.content):\n",
-    "            # Execute functions called by the model by sending messages to itself.\n",
-    "            results: List[FunctionExecutionResult | BaseException] = await asyncio.gather(\n",
-    "                *[self.send_message(call, self._tool_agent) for call in response.content],\n",
-    "                return_exceptions=True,\n",
-    "            )\n",
-    "            # Combine the results into a single response and handle exceptions.\n",
-    "            function_results: List[FunctionExecutionResult] = []\n",
-    "            for result in results:\n",
-    "                if isinstance(result, FunctionExecutionResult):\n",
-    "                    function_results.append(result)\n",
-    "                elif isinstance(result, ToolException):\n",
-    "                    function_results.append(FunctionExecutionResult(content=f\"Error: {result}\", call_id=result.call_id))\n",
-    "                elif isinstance(result, BaseException):\n",
-    "                    raise result  # Unexpected exception.\n",
-    "            session.append(FunctionExecutionResultMessage(content=function_results))\n",
-    "            # Query the model again with the new response.\n",
-    "            response = await self._model_client.create(\n",
-    "                self._system_messages + session, tools=self._tool_schema, cancellation_token=cancellation_token\n",
-    "            )\n",
-    "            session.append(AssistantMessage(content=response.content, source=self.metadata[\"type\"]))\n",
-    "\n",
-    "        # Return the final response.\n",
-    "        assert isinstance(response.content, str)\n",
-    "        return Message(content=response.content)"
+    "        # Run the caller loop to handle tool calls.\n",
+    "        messages = await tool_agent_caller_loop(\n",
+    "            self,\n",
+    "            tool_agent_id=self._tool_agent,\n",
+    "            model_client=self._model_client,\n",
+    "            input_messages=session,\n",
+    "            tool_schema=self._tool_schema,\n",
+    "            cancellation_token=ctx.cancellation_token,\n",
+    "        )\n",
+    "        # Return the final response.\n",
+    "        assert isinstance(messages[-1].content, str)\n",
+    "        return Message(content=messages[-1].content)"
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "The `ToolUseAgent` class is a bit involved, however,\n",
-    "the core idea can be described using a simple control flow graph:\n",
+    "The `ToolUseAgent` class uses a convenience function {py:meth}`agnext.components.tool_agent.tool_agent_caller_loop`, \n",
+    "to handle the interaction between the model and the tool agent.\n",
+    "The core idea can be described using a simple control flow graph:\n",
     "\n",
     "\n",
     "\n",
     "The `ToolUseAgent`'s `handle_user_message` handler handles messages from the user,\n",
     "and determines whether the model has generated a tool call.\n",
     "If the model has generated tool calls, then the handler sends a function call\n",
     "message to the {py:class}`~agnext.components.tool_agent.ToolAgent` agent\n",
     "to execute the tools,\n",
     "and then queries the model again with the results of the tool calls.\n",
     "This process continues until the model stops generating tool calls,\n",
     "at which point the final response is returned to the user.\n",
     "\n",
     "By having the tool execution logic in a separate agent,\n",
     "we expose the model-tool interactions to the agent runtime as messages, so the tool executions\n",
     "can be observed externally and intercepted if necessary.\n",
     "\n",
     "To run the agent, we need to create a runtime and register the agent."
    ]
   },
   {
    "cell_type": "code",
-   "execution_count": 7,
+   "execution_count": 4,
    "metadata": {},
-   "outputs": [],
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "AgentType(type='tool_use_agent')"
+      ]
+     },
+     "execution_count": 4,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
    "source": [
     "# Create a runtime.\n",
     "runtime = SingleThreadedAgentRuntime()\n",
     "# Create the tools.\n",
     "tools: List[Tool] = [FunctionTool(get_stock_price, description=\"Get the stock price.\")]\n",
     "# Register the agents.\n",
     "await runtime.register(\n",
-    "    \"tool-executor-agent\",\n",
+    "    \"tool_executor_agent\",\n",
     "    lambda: ToolAgent(\n",
     "        description=\"Tool Executor Agent\",\n",
     "        tools=tools,\n",
     "    ),\n",
     ")\n",
     "await runtime.register(\n",
-    "    \"tool-use-agent\",\n",
+    "    \"tool_use_agent\",\n",
     "    lambda: ToolUseAgent(\n",
     "        OpenAIChatCompletionClient(model=\"gpt-4o-mini\"),\n",
     "        tool_schema=[tool.schema for tool in tools],\n",
-    "        tool_agent=AgentId(\"tool-executor-agent\", AgentInstantiationContext.current_agent_id().key),\n",
+    "        tool_agent=AgentId(\"tool_executor_agent\", AgentInstantiationContext.current_agent_id().key),\n",
     "    ),\n",
     ")"
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "This example uses the {py:class}`agnext.components.models.OpenAIChatCompletionClient`,\n",
     "for Azure OpenAI and other clients, see [Model Clients](./model-clients.ipynb).\n",
     "Let's test the agent with a question about stock price."
    ]
   },
   {
    "cell_type": "code",
-   "execution_count": 8,
+   "execution_count": 5,
    "metadata": {},
    "outputs": [
     {
      "name": "stdout",
      "output_type": "stream",
      "text": [
-      "The stock price of NVDA on June 1, 2024, is approximately $49.28.\n"
+      "The stock price of NVIDIA (NVDA) on June 1, 2024, was approximately $148.86.\n"
      ]
     }
    ],
    "source": [
     "# Start processing messages.\n",
     "runtime.start()\n",
     "# Send a direct message to the tool agent.\n",
-    "tool_use_agent = AgentId(\"tool-use-agent\", \"default\")\n",
+    "tool_use_agent = AgentId(\"tool_use_agent\", \"default\")\n",
     "response = await runtime.send_message(Message(\"What is the stock price of NVDA on 2024/06/01?\"), tool_use_agent)\n",
     "print(response.content)\n",
     "# Stop processing messages.\n",
     "await runtime.stop()"
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "See [samples](https://github.com/microsoft/agnext/tree/main/python/samples#tool-use-examples)\n",
     "for more examples of using tools with agents, including how to use\n",
     "broadcast communication model for tool execution, and how to intercept tool\n",
     "execution for human-in-the-loop approval."
    ]
   }
  ],
  "metadata": {
   "kernelspec": {
    "display_name": "agnext",
    "language": "python",
    "name": "python3"
   },
   "language_info": {
    "codemirror_mode": {
     "name": "ipython",
     "version": 3
    },
    "file_extension": ".py",
    "mimetype": "text/x-python",
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
    "version": "3.11.9"
   }
  },
  "nbformat": 4,
  "nbformat_minor": 2
 }
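The contract that both the old hand-written loop and the new `tool_agent_caller_loop` implement (keep executing tool calls until the model returns plain text) can be seen in isolation. The sketch below is framework-free; the mock model and tool executor are illustrative stand-ins, not AGNext classes:

```python
from dataclasses import dataclass
from typing import List, Union


@dataclass
class FunctionCall:
    name: str
    arguments: str


class MockModel:
    """Returns one round of tool calls, then a final text answer."""

    def __init__(self) -> None:
        self._called = False

    def create(self, messages: List[str]) -> Union[str, List[FunctionCall]]:
        if not self._called:
            self._called = True
            return [FunctionCall(name="get_stock_price", arguments='{"ticker": "NVDA"}')]
        return "NVDA closed at 123.45."


def execute_tool(call: FunctionCall) -> str:
    # Stand-in for sending the call to a tool agent and awaiting the result.
    return "123.45"


def caller_loop(model: MockModel, user_message: str) -> str:
    session: List[str] = [user_message]
    response = model.create(session)
    # Keep iterating until the model stops generating tool calls.
    while isinstance(response, list) and all(isinstance(c, FunctionCall) for c in response):
        session.extend(execute_tool(call) for call in response)
        response = model.create(session)
    return response
```

The real loop differs mainly in that tool calls are dispatched concurrently to a separate `ToolAgent`, so they surface as observable runtime messages.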
@@ -30,6 +30,8 @@ To learn about the core concepts of AGNext, read the `overview <core-concepts/ov
    getting-started/model-clients
    getting-started/tools
    getting-started/multi-agent-design-patterns
+   getting-started/group-chat
+   getting-started/reflection
 
 .. toctree::
    :caption: Guides
@@ -1,3 +1,4 @@
+from ._caller_loop import tool_agent_caller_loop
 from ._tool_agent import (
     InvalidToolArgumentsException,
     ToolAgent,
@@ -12,4 +13,5 @@ __all__ = [
     "ToolNotFoundException",
     "InvalidToolArgumentsException",
     "ToolExecutionException",
+    "tool_agent_caller_loop",
 ]
|
77
python/src/agnext/components/tool_agent/_caller_loop.py
Normal file
@ -0,0 +1,77 @@
import asyncio
from typing import List

from ...components import FunctionCall
from ...core import AgentId, AgentRuntime, BaseAgent, CancellationToken
from ..models import (
    AssistantMessage,
    ChatCompletionClient,
    FunctionExecutionResult,
    FunctionExecutionResultMessage,
    LLMMessage,
)
from ..tools import Tool, ToolSchema
from ._tool_agent import ToolException


async def tool_agent_caller_loop(
    caller: BaseAgent | AgentRuntime,
    tool_agent_id: AgentId,
    model_client: ChatCompletionClient,
    input_messages: List[LLMMessage],
    tool_schema: List[ToolSchema] | List[Tool],
    cancellation_token: CancellationToken | None = None,
    caller_source: str = "assistant",
) -> List[LLMMessage]:
    """Start a caller loop for a tool agent. This function sends messages to the tool agent
    and the model client in an alternating fashion until the model client stops generating tool calls.

    Args:
        caller (BaseAgent | AgentRuntime): The agent or runtime used to send messages to the tool agent.
        tool_agent_id (AgentId): The Agent ID of the tool agent.
        model_client (ChatCompletionClient): The model client to use for the model API.
        input_messages (List[LLMMessage]): The list of input messages.
        tool_schema (List[Tool | ToolSchema]): The list of tools that the model can use.
        cancellation_token (CancellationToken | None): A token to cancel in-flight requests. Defaults to None.
        caller_source (str): The source attached to generated assistant messages. Defaults to "assistant".

    Returns:
        List[LLMMessage]: The list of output messages created in the caller loop.
    """
    generated_messages: List[LLMMessage] = []

    # Get a response from the model.
    response = await model_client.create(input_messages, tools=tool_schema, cancellation_token=cancellation_token)
    # Add the response to the generated messages.
    generated_messages.append(AssistantMessage(content=response.content, source=caller_source))

    # Keep iterating until the model stops generating tool calls.
    while isinstance(response.content, list) and all(isinstance(item, FunctionCall) for item in response.content):
        # Execute functions called by the model by sending messages to the tool agent.
        results: List[FunctionExecutionResult | BaseException] = await asyncio.gather(
            *[
                caller.send_message(
                    message=call,
                    recipient=tool_agent_id,
                    cancellation_token=cancellation_token,
                )
                for call in response.content
            ],
            return_exceptions=True,
        )
        # Combine the results into a single response and handle exceptions.
        function_results: List[FunctionExecutionResult] = []
        for result in results:
            if isinstance(result, FunctionExecutionResult):
                function_results.append(result)
            elif isinstance(result, ToolException):
                function_results.append(FunctionExecutionResult(content=f"Error: {result}", call_id=result.call_id))
            elif isinstance(result, BaseException):
                raise result  # Unexpected exception.
        generated_messages.append(FunctionExecutionResultMessage(content=function_results))
        # Query the model again with the new response.
        response = await model_client.create(
            input_messages + generated_messages, tools=tool_schema, cancellation_token=cancellation_token
        )
        generated_messages.append(AssistantMessage(content=response.content, source=caller_source))

    # Return the generated messages.
    return generated_messages
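The control flow of `tool_agent_caller_loop` (call the model, execute any requested tool calls, feed the results back, and repeat until the model returns plain text) can be sketched without the agnext runtime. The `fake_model`, `ToolCall`, and `Reply` names below are illustrative stand-ins, not part of the library:

```python
import asyncio
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class ToolCall:
    name: str
    argument: str


@dataclass
class Reply:
    # Either a list of tool calls or a final text answer.
    tool_calls: List[ToolCall] = field(default_factory=list)
    text: str = ""


async def fake_model(history: List[str]) -> Reply:
    # First turn: request a tool call; once a result is present, answer.
    if not any(h.startswith("result:") for h in history):
        return Reply(tool_calls=[ToolCall(name="echo", argument="hello")])
    return Reply(text="done")


async def run_tool(call: ToolCall, tools: Dict[str, Callable[[str], str]]) -> str:
    return tools[call.name](call.argument)


async def caller_loop(task: str, tools: Dict[str, Callable[[str], str]]) -> List[str]:
    history: List[str] = [task]
    reply = await fake_model(history)
    while reply.tool_calls:
        # Execute all requested tool calls concurrently.
        results = await asyncio.gather(*(run_tool(c, tools) for c in reply.tool_calls))
        history.extend(f"result: {r}" for r in results)
        reply = await fake_model(history)
    history.append(reply.text)
    return history


history = asyncio.run(caller_loop("say hello", {"echo": lambda s: s}))
print(history)  # ['say hello', 'result: hello', 'done']
```

In the real implementation the history entries are typed `LLMMessage` objects and the tool execution is a message send to the tool agent, but the loop shape is the same.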
@ -1,19 +1,34 @@
import asyncio
import json
from typing import Any, AsyncGenerator, List

import pytest
from openai.resources.chat.completions import AsyncCompletions
from openai.types.chat.chat_completion import ChatCompletion, Choice
from openai.types.chat.chat_completion_chunk import ChatCompletionChunk
from openai.types.chat.chat_completion_message import ChatCompletionMessage
from openai.types.chat.chat_completion_message_tool_call import ChatCompletionMessageToolCall, Function
from openai.types.completion_usage import CompletionUsage

from agnext.application import SingleThreadedAgentRuntime
from agnext.components import FunctionCall
from agnext.components.models import FunctionExecutionResult
from agnext.components.tool_agent import (
    InvalidToolArgumentsException,
    ToolAgent,
    ToolExecutionException,
    ToolNotFoundException,
    tool_agent_caller_loop,
)
from agnext.components.tools import FunctionTool, Tool
from agnext.core import CancellationToken, AgentId
from agnext.components.models import (
    AssistantMessage,
    FunctionExecutionResult,
    FunctionExecutionResultMessage,
    OpenAIChatCompletionClient,
    UserMessage,
)
from agnext.components.tools import FunctionTool
from agnext.core import CancellationToken
from agnext.core import AgentId


def _pass_function(input: str) -> str:
@ -29,6 +44,60 @@ async def _async_sleep_function(input: str) -> str:
    return "pass"


class _MockChatCompletion:
    def __init__(self, model: str = "gpt-4o") -> None:
        self._saved_chat_completions: List[ChatCompletion] = [
            ChatCompletion(
                id="id1",
                choices=[
                    Choice(
                        finish_reason="tool_calls",
                        index=0,
                        message=ChatCompletionMessage(
                            content=None,
                            tool_calls=[
                                ChatCompletionMessageToolCall(
                                    id="1",
                                    type="function",
                                    function=Function(
                                        name="pass",
                                        arguments=json.dumps({"input": "pass"}),
                                    ),
                                )
                            ],
                            role="assistant",
                        ),
                    )
                ],
                created=0,
                model=model,
                object="chat.completion",
                usage=CompletionUsage(prompt_tokens=0, completion_tokens=0, total_tokens=0),
            ),
            ChatCompletion(
                id="id2",
                choices=[
                    Choice(
                        finish_reason="stop", index=0, message=ChatCompletionMessage(content="Hello", role="assistant")
                    )
                ],
                created=0,
                model=model,
                object="chat.completion",
                usage=CompletionUsage(prompt_tokens=0, completion_tokens=0, total_tokens=0),
            ),
        ]
        self._curr_index = 0

    async def mock_create(
        self, *args: Any, **kwargs: Any
    ) -> ChatCompletion | AsyncGenerator[ChatCompletionChunk, None]:
        await asyncio.sleep(0.1)
        completion = self._saved_chat_completions[self._curr_index]
        self._curr_index += 1
        return completion


@pytest.mark.asyncio
async def test_tool_agent() -> None:
    runtime = SingleThreadedAgentRuntime()
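The `_MockChatCompletion` pattern (patch the client's `create` with a bound method that returns canned responses in order) works for any async client, not just the OpenAI SDK. A minimal standalone sketch, where `Client` and `CannedCompletions` are invented names for illustration:

```python
import asyncio
from typing import Any, List


class Client:
    """Stand-in for a real API client whose create() would hit the network."""

    async def create(self, *args: Any, **kwargs: Any) -> str:
        raise RuntimeError("no network in tests")


class CannedCompletions:
    """Returns queued responses in order, one per call."""

    def __init__(self, responses: List[str]) -> None:
        self._responses = responses
        self._index = 0

    async def mock_create(self, *args: Any, **kwargs: Any) -> str:
        response = self._responses[self._index]
        self._index += 1
        return response


mock = CannedCompletions(["tool_calls", "final answer"])
# Equivalent to monkeypatch.setattr(AsyncCompletions, "create", mock.mock_create),
# but without pytest's automatic undo at test teardown.
setattr(Client, "create", mock.mock_create)


async def main() -> List[str]:
    client = Client()
    return [await client.create("hi"), await client.create("hi")]


outputs = asyncio.run(main())
print(outputs)  # ['tool_calls', 'final answer']
```

Because `mock_create` is a bound method, patching it onto the class leaves the call signature seen by callers unchanged; pytest's `monkeypatch` adds the cleanup step on top of this same mechanism.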
@ -74,3 +143,33 @@ async def test_tool_agent() -> None:
    await result_future

    await runtime.stop()


@pytest.mark.asyncio
async def test_caller_loop(monkeypatch: pytest.MonkeyPatch) -> None:
    mock = _MockChatCompletion(model="gpt-4o-2024-05-13")
    monkeypatch.setattr(AsyncCompletions, "create", mock.mock_create)
    client = OpenAIChatCompletionClient(model="gpt-4o-2024-05-13", api_key="api_key")
    tools: List[Tool] = [FunctionTool(_pass_function, name="pass", description="Pass function")]
    runtime = SingleThreadedAgentRuntime()
    await runtime.register(
        "tool_agent",
        lambda: ToolAgent(
            description="Tool agent",
            tools=tools,
        ),
    )
    agent = AgentId("tool_agent", "default")
    runtime.start()
    messages = await tool_agent_caller_loop(
        runtime,
        agent,
        client,
        [UserMessage(content="Hello", source="user")],
        tool_schema=tools,
    )
    assert len(messages) == 3
    assert isinstance(messages[0], AssistantMessage)
    assert isinstance(messages[1], FunctionExecutionResultMessage)
    assert isinstance(messages[2], AssistantMessage)
    await runtime.stop()
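The test above exercises the happy path; when a tool raises, `tool_agent_caller_loop` converts the failure into an error result rather than aborting the whole batch. The underlying `asyncio.gather(..., return_exceptions=True)` pattern can be sketched on its own (the tool names here are arbitrary):

```python
import asyncio
from typing import List


async def run_tool(name: str) -> str:
    if name == "bad":
        raise ValueError(f"tool {name} failed")
    return f"{name}: ok"


async def gather_results(names: List[str]) -> List[str]:
    results = await asyncio.gather(
        *(run_tool(n) for n in names),
        return_exceptions=True,  # exceptions come back as values, not raised
    )
    outputs: List[str] = []
    for result in results:
        if isinstance(result, BaseException):
            outputs.append(f"Error: {result}")  # turn failures into error results
        else:
            outputs.append(result)
    return outputs


outputs = asyncio.run(gather_results(["pass", "bad", "slow"]))
print(outputs)  # ['pass: ok', 'Error: tool bad failed', 'slow: ok']
```

Results keep the order of the submitted calls, which is what lets the caller loop pair each `FunctionExecutionResult` with the `call_id` of the request that produced it.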