Mirror of https://github.com/microsoft/autogen.git (synced 2025-12-29 16:09:07 +00:00)
Add tutorial index page; improve installation pages; improve Core tutorial to mention how to use AgentChat agent in Core. (#4950)
This commit is contained in: parent 318820e5ed, commit 903305e810
@ -31,19 +31,19 @@ How to install AgentChat
Build your first agent
:::

:::{grid-item-card} {fas}`graduation-cap;pst-color-primary` Tutorial
:link: ./tutorial/models.html
:::{grid-item-card} {fas}`school;pst-color-primary` Tutorial
:link: ./tutorial/index.html

Step-by-step guide to using AgentChat, learn about agents, teams, and more
:::

:::{grid-item-card} {fas}`book;pst-color-primary` Selector Group Chat
:::{grid-item-card} {fas}`sitemap;pst-color-primary` Selector Group Chat
:link: ./selector-group-chat.html

Multi-agent coordination through a shared context and centralized, customizable selector
:::

:::{grid-item-card} {fas}`book;pst-color-primary` Swarm
:::{grid-item-card} {fas}`dove;pst-color-primary` Swarm
:link: ./swarm.html

Multi-agent coordination through a shared context and localized, tool-based selector
@ -82,6 +82,7 @@ migration-guide
:hidden:
:caption: Tutorial

tutorial/index
tutorial/models
tutorial/messages
tutorial/agents

@ -38,7 +38,7 @@ deactivate
Create and activate:

```bash
conda create -n autogen python=3.10
conda create -n autogen python=3.12
conda activate autogen
```

@ -77,15 +77,8 @@ extensions:
pip install "autogen-ext[openai]==0.4.0.dev13"
```

## Install Docker for Code Execution
If you are using Azure OpenAI with AAD authentication, you need to install the following:

We recommend using Docker for code execution.
To install Docker, follow the instructions for your operating system on the [Docker website](https://docs.docker.com/get-docker/).

A simple example of how to use Docker for code execution is shown below:

<!-- ```{include} stocksnippet.md
``` -->
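Since the snippet include above is commented out, here is a minimal, self-contained sketch of running a code block in a Docker container with `DockerCommandLineCodeExecutor`. It assumes the 0.4 module layout (`autogen_ext.code_executors.docker` and `autogen_core.code_executor`) and a running Docker daemon:

```python
import asyncio
from pathlib import Path

from autogen_core import CancellationToken
from autogen_core.code_executor import CodeBlock
from autogen_ext.code_executors.docker import DockerCommandLineCodeExecutor


async def main() -> None:
    work_dir = Path("coding")
    work_dir.mkdir(exist_ok=True)
    # The async context manager starts the container (pulling the default
    # Python image if needed) and cleans it up on exit.
    async with DockerCommandLineCodeExecutor(work_dir=work_dir) as executor:
        result = await executor.execute_code_blocks(
            code_blocks=[CodeBlock(language="python", code="print('Hello from Docker!')")],
            cancellation_token=CancellationToken(),
        )
        print(result.output)


asyncio.run(main())
```

If the sketch runs as expected, it prints `Hello from Docker!` captured from inside the container.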
To learn more about agents that execute code, see the [agents tutorial](./tutorial/agents.ipynb).
```bash
pip install "autogen-ext[azure]==0.4.0.dev13"
```

@ -0,0 +1,72 @@
---
myst:
  html_meta:
    "description lang=en": |
      Tutorial for AgentChat, a high-level API for AutoGen
---

# Introduction

This tutorial provides a step-by-step guide to using AgentChat.
Make sure you have first followed the [installation instructions](../installation.md)
to prepare your environment.

If you get stuck at any point, feel free to ask for help on
[GitHub Discussions](https://github.com/microsoft/autogen/discussions)
or [Discord](https://aka.ms/autogen-discord).
```{note}
If you are coming from AutoGen v0.2, please read the [migration guide](../migration-guide.md).
```

::::{grid} 2 2 2 2
:gutter: 3

:::{grid-item-card} {fas}`brain;pst-color-primary` Models
:link: ./models.html

How to use LLM model clients
:::

:::{grid-item-card} {fas}`envelope;pst-color-primary` Messages
:link: ./messages.html

Understand the message types
:::

:::{grid-item-card} {fas}`robot;pst-color-primary` Agents
:link: ./agents.html

Work with AgentChat agents and get started with {py:class}`~autogen_agentchat.agents.AssistantAgent`
:::

:::{grid-item-card} {fas}`sitemap;pst-color-primary` Teams
:link: ./teams.html

Work with teams of agents and get started with {py:class}`~autogen_agentchat.teams.RoundRobinGroupChat`.
:::

:::{grid-item-card} {fas}`person-chalkboard;pst-color-primary` Human-in-the-Loop
:link: ./human-in-the-loop.html

Best practices for providing feedback to a team
:::

:::{grid-item-card} {fas}`circle-stop;pst-color-primary` Termination
:link: ./termination.html

Control a team using termination conditions
:::

:::{grid-item-card} {fas}`code;pst-color-primary` Custom Agents
:link: ./custom-agents.html

Create your own agents
:::

:::{grid-item-card} {fas}`database;pst-color-primary` Managing State
:link: ./state.html

Save and load agents and teams for persistent sessions
:::
::::
@ -7,9 +7,8 @@
"# Agent and Agent Runtime\n",
"\n",
"In this and the following section, we focus on the core concepts of AutoGen:\n",
"agents, agent runtime, messages, and communication.\n",
"You will not find any AI models or tools here, just the foundational\n",
"building blocks for building multi-agent applications.\n",
"agents, agent runtime, messages, and communication -- \n",
"the foundational building blocks for multi-agent applications.\n",
"\n",
"```{note}\n",
"The Core API is designed to be unopinionated and flexible. So at times, you\n",
@ -25,22 +24,31 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"An agent in AutoGen is an entity defined by the base class {py:class}`autogen_core.Agent`.\n",
"It has a unique identifier of the type {py:class}`autogen_core.AgentId`,\n",
"a metadata dictionary of the type {py:class}`autogen_core.AgentMetadata`,\n",
"An agent in AutoGen is an entity defined by the base interface {py:class}`~autogen_core.Agent`.\n",
"It has a unique identifier of the type {py:class}`~autogen_core.AgentId`,\n",
"and a metadata dictionary of the type {py:class}`~autogen_core.AgentMetadata`.\n",
"\n",
"and method for handling messages {py:meth}`autogen_core.BaseAgent.on_message_impl`. In most cases, you can subclass your agents from higher level class {py:class}`autogen_core.RoutedAgent` which enables you to route messages to corresponding message handler specified with {py:meth}`autogen_core.message_handler` decorator and proper type hint for the `message` variable.\n",
"In most cases, you can subclass your agents from the higher-level class {py:class}`~autogen_core.RoutedAgent`, which enables you to route messages to the corresponding message handler specified with the {py:meth}`~autogen_core.message_handler` decorator and a proper type hint for the `message` variable.\n",
"An agent runtime is the execution environment for agents in AutoGen.\n",
"\n",
"Similar to the runtime environment of a programming language,\n",
"an agent runtime provides the necessary infrastructure to facilitate communication\n",
"between agents, manage agent lifecycles, enforce security boundaries, and support monitoring and\n",
"debugging.\n",
"\n",
"For local development, developers can use {py:class}`~autogen_core.SingleThreadedAgentRuntime`,\n",
"which can be embedded in a Python application.\n",
"\n",
"```{note}\n",
"Agents are not directly instantiated and managed by application code.\n",
"Instead, they are created by the runtime when needed and managed by the runtime.\n",
"\n",
"If you are already familiar with [AgentChat](../../agentchat-user-guide/index.md),\n",
"it is important to note that AgentChat's agents such as\n",
"{py:class}`~autogen_agentchat.agents.AssistantAgent` are created by the application\n",
"and thus not directly managed by the runtime. To use an AgentChat agent in Core,\n",
"you need to create a wrapper Core agent that delegates messages to the AgentChat agent\n",
"and let the runtime manage the wrapper agent.\n",
"```"
]
},
@ -59,7 +67,7 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 11,
"metadata": {},
"outputs": [],
"source": [
@ -79,7 +87,7 @@
"\n",
"    @message_handler\n",
"    async def handle_my_message_type(self, message: MyMessageType, ctx: MessageContext) -> None:\n",
"        print(f\"Received message: {message.content}\")  # type: ignore"
"        print(f\"{self.id.type} received message: {message.content}\")"
]
},
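For context, the code cell this hunk touches defines a simple message type and a routed agent. A minimal sketch of the full cell (assuming the flattened `autogen_core` 0.4 imports, not the verbatim notebook source) is:

```python
from dataclasses import dataclass

from autogen_core import MessageContext, RoutedAgent, message_handler


@dataclass
class MyMessageType:
    content: str


class MyAgent(RoutedAgent):
    def __init__(self) -> None:
        super().__init__("MyAgent")

    @message_handler
    async def handle_my_message_type(self, message: MyMessageType, ctx: MessageContext) -> None:
        # The handler is routed by the type annotation on `message`.
        print(f"{self.id.type} received message: {message.content}")
```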
{
@ -90,6 +98,55 @@
"See the next section on [message and communication](./message-and-communication.ipynb)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Using an AgentChat Agent\n",
"\n",
"If you have an [AgentChat](../../agentchat-user-guide/index.md) agent and want to use it in the Core API, you can create\n",
"a wrapper {py:class}`~autogen_core.RoutedAgent` that delegates messages to the AgentChat agent.\n",
"The following example shows how to create a wrapper agent for the {py:class}`~autogen_agentchat.agents.AssistantAgent`\n",
"in AgentChat."
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [],
"source": [
"from autogen_agentchat.agents import AssistantAgent\n",
"from autogen_agentchat.messages import TextMessage\n",
"from autogen_ext.models.openai import OpenAIChatCompletionClient\n",
"\n",
"\n",
"class MyAssistant(RoutedAgent):\n",
"    def __init__(self, name: str) -> None:\n",
"        super().__init__(name)\n",
"        model_client = OpenAIChatCompletionClient(model=\"gpt-4o\")\n",
"        self._delegate = AssistantAgent(name, model_client=model_client)\n",
"\n",
"    @message_handler\n",
"    async def handle_my_message_type(self, message: MyMessageType, ctx: MessageContext) -> None:\n",
"        print(f\"{self.id.type} received message: {message.content}\")\n",
"        response = await self._delegate.on_messages(\n",
"            [TextMessage(content=message.content, source=\"user\")], ctx.cancellation_token\n",
"        )\n",
"        print(f\"{self.id.type} responded: {response.chat_message.content}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For details on how to use the model client, see the [Model Client](./model-clients.ipynb) section.\n",
"\n",
"Since the Core API is unopinionated,\n",
"you are not required to use the AgentChat API to use the Core API.\n",
"You can implement your own agents or use another agent framework."
]
},
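As a quick, hedged taste of the model client used by `MyAssistant` above (a sketch that assumes an `OPENAI_API_KEY` in the environment; the linked Model Client section is the authoritative reference):

```python
from autogen_core.models import UserMessage
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def ask_model() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o")
    # create() sends the chat messages to the model and returns a single result.
    result = await model_client.create([UserMessage(content="What is AutoGen?", source="user")])
    print(result.content)
```

Call `await ask_model()` from an async context (for example, a notebook cell) to try it.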
{
"cell_type": "markdown",
"metadata": {},
@ -106,7 +163,7 @@
"when they are needed.\n",
"\n",
"Agent type ({py:class}`~autogen_core.AgentType`) is not the same as the agent class. In this example,\n",
"the agent type is `AgentType(\"my_agent\")` and the agent class is the Python class `MyAgent`.\n",
"the agent type is `AgentType(\"my_agent\")` or `AgentType(\"my_assistant\")` and the agent class is the Python class `MyAgent` or `MyAssistantAgent`.\n",
"The factory function is expected to return an instance of the agent class \n",
"on which the {py:meth}`~autogen_core.BaseAgent.register` class method is invoked.\n",
"Read [Agent Identity and Lifecycles](../core-concepts/agent-identity-and-lifecycle.md)\n",
@ -119,23 +176,23 @@
"can be used to create different instances of the same agent class.\n",
"```\n",
"\n",
"To register an agent type with the \n",
"To register our agent types with the \n",
"{py:class}`~autogen_core.SingleThreadedAgentRuntime`,\n",
"the following code can be used:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 13,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AgentType(type='my_agent')"
"AgentType(type='my_assistant')"
]
},
"execution_count": 2,
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
@ -144,7 +201,8 @@
"from autogen_core import SingleThreadedAgentRuntime\n",
"\n",
"runtime = SingleThreadedAgentRuntime()\n",
"await MyAgent.register(runtime, \"my_agent\", lambda: MyAgent())"
"await MyAgent.register(runtime, \"my_agent\", lambda: MyAgent())\n",
"await MyAssistant.register(runtime, \"my_assistant\", lambda: MyAssistant(\"my_assistant\"))"
]
},
{
@ -159,21 +217,23 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 14,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Received message: Hello, World!\n"
"my_agent received message: Hello, World!\n",
"my_assistant received message: Hello, World!\n",
"my_assistant responded: Hello! How can I assist you today?\n"
]
}
],
"source": [
"agent_id = AgentId(\"my_agent\", \"default\")\n",
"runtime.start()  # Start processing messages in the background.\n",
"await runtime.send_message(MyMessageType(\"Hello, World!\"), agent_id)\n",
"await runtime.send_message(MyMessageType(\"Hello, World!\"), AgentId(\"my_agent\", \"default\"))\n",
"await runtime.send_message(MyMessageType(\"Hello, World!\"), AgentId(\"my_assistant\", \"default\"))\n",
"await runtime.stop()  # Stop processing messages in the background."
]
},
@ -203,7 +263,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 15,
"metadata": {},
"outputs": [],
"source": [
@ -228,7 +288,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 16,
"metadata": {},
"outputs": [],
"source": [
@ -246,7 +306,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 17,
"metadata": {},
"outputs": [],
"source": [
@ -1,5 +1,53 @@
# Installation

## Create a Virtual Environment (optional)
When installing AutoGen Core locally, we recommend using a virtual environment for the installation. This will ensure that its dependencies are isolated from the rest of your system.
``````{tab-set}

`````{tab-item} venv

Create and activate:

```bash
python3 -m venv .venv
source .venv/bin/activate
```

To deactivate later, run:

```bash
deactivate
```

`````

`````{tab-item} conda

[Install Conda](https://docs.conda.io/projects/conda/en/stable/user-guide/install/index.html) if you have not already.

Create and activate:

```bash
conda create -n autogen python=3.12
conda activate autogen
```

To deactivate later, run:

```bash
conda deactivate
```

`````

``````

## Install using pip

Install the `autogen-core` package using pip:
@ -12,3 +60,26 @@ pip install "autogen-core==0.4.0.dev13"
```{note}
Python 3.10 or later is required.
```

## Install OpenAI for Model Client

To use the OpenAI and Azure OpenAI models, you need to install the following
extensions:

```bash
pip install "autogen-ext[openai]==0.4.0.dev13"
```

If you are using Azure OpenAI with AAD authentication, you need to install the following:

```bash
pip install "autogen-ext[azure]==0.4.0.dev13"
```
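With the `azure` extra installed, a hedged sketch of an AAD-authenticated client looks like the following; the deployment name, endpoint, and API version are placeholders, and `AzureOpenAIChatCompletionClient` is assumed to be available from `autogen_ext.models.openai`:

```python
from autogen_ext.models.openai import AzureOpenAIChatCompletionClient
from azure.identity import DefaultAzureCredential, get_bearer_token_provider

# azure-identity is expected to be pulled in by the `azure` extra above.
token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)

az_model_client = AzureOpenAIChatCompletionClient(
    azure_deployment="{your-azure-deployment}",  # placeholder
    model="gpt-4o",
    api_version="2024-06-01",
    azure_endpoint="https://{your-endpoint}.openai.azure.com/",  # placeholder
    azure_ad_token_provider=token_provider,
)
```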
## Install Docker for Code Execution (Optional)

We recommend installing Docker so that you can use {py:class}`~autogen_ext.code_executors.docker.DockerCommandLineCodeExecutor` to execute model-generated code.
To install Docker, follow the instructions for your operating system on the [Docker website](https://docs.docker.com/get-docker/).

To learn more about code execution, see [Command Line Code Executors](./framework/command-line-code-executors.ipynb)
and [Code Execution](./design-patterns/code-execution-groupchat.ipynb).