{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Introduction to AutoGen\n",
    "\n",
    "Welcome! AutoGen is an open-source framework that leverages multiple _agents_ to enable complex workflows. This tutorial introduces basic concepts and building blocks of AutoGen."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Why AutoGen?\n",
    "\n",
    "> _The whole is greater than the sum of its parts._<br/>\n",
    "> -**Aristotle**\n",
    "\n",
    "While there are many definitions of agents, in AutoGen, an agent is an entity that can send messages, receive messages, and generate a reply using models, tools, human input, or a mixture of them.\n",
    "This abstraction not only allows agents to model real-world and abstract entities, such as people and algorithms, but also simplifies the implementation of complex workflows as collaboration among agents.\n",
    "\n",
    "Further, AutoGen is extensible and composable: you can extend a simple agent with customizable components, and you can combine such agents into workflows that power more sophisticated agents, resulting in implementations that are modular and easy to maintain.\n",
    "\n",
    "Most importantly, AutoGen is developed by a vibrant community of researchers\n",
    "and engineers. It incorporates the latest research in multi-agent systems\n",
    "and has been used in many real-world applications, including agent platforms,\n",
    "advertising, AI employees, blog/article writing, blockchain, calculating areas burned by wildfires,\n",
    "customer support, cybersecurity, data analytics, debate, education, finance, gaming, legal consultation,\n",
    "research, robotics, sales/marketing, social simulation, software engineering,\n",
    "software security, supply chain, t-shirt design, training data generation, YouTube services, and more."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Installation\n",
    "\n",
    "The simplest way to install AutoGen is from pip: `pip install pyautogen`. Find more options in [Installation](/docs/installation/)."
   ]
  },
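  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check of the installation (assuming the installed package exposes a `__version__` attribute, as recent `pyautogen` releases do), you can import it and print the version:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sanity check: import the package and print its version.\n",
    "# Assumes pyautogen is installed and exposes __version__.\n",
    "import autogen\n",
    "\n",
    "print(autogen.__version__)"
   ]
  },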
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Agents\n",
    "\n",
    "In AutoGen, an agent is an entity that can send and receive messages to and from\n",
    "other agents in its environment. An agent can be powered by models (such as a large language model\n",
    "like GPT-4), code executors (such as an IPython kernel), human input, or a combination of these\n",
    "and other pluggable and customizable components.\n",
    "\n",
    "An example of such agents is the built-in `ConversableAgent`, which supports the following components:\n",
    "\n",
    "1. A list of LLMs\n",
    "2. A code executor\n",
    "3. A function and tool executor\n",
    "4. A component for keeping a human in the loop\n",
    "\n",
    "You can switch each component on or off and customize it to suit the needs of\n",
    "your application. You can even add additional components to extend the agent's capabilities."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "LLMs, for example, enable agents to converse in natural languages and transform between structured and unstructured text.\n",
    "The following example shows a `ConversableAgent` with a GPT-4 LLM switched on and other\n",
    "components switched off:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "from autogen import ConversableAgent\n",
    "\n",
    "agent = ConversableAgent(\n",
    "    \"chatbot\",\n",
    "    llm_config={\"config_list\": [{\"model\": \"gpt-4\", \"api_key\": os.environ.get(\"OPENAI_API_KEY\")}]},\n",
    "    code_execution_config=False,  # Turn off code execution, by default it is off.\n",
    "    function_map=None,  # No registered functions, by default it is None.\n",
    "    human_input_mode=\"NEVER\",  # Never ask for human input.\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `llm_config` argument contains a list of configurations for the LLMs.\n",
    "See [LLM Configuration](/docs/topics/llm_configuration) for more details."
   ]
  },
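  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a rough sketch (the second model name below is only a placeholder, not a recommendation), a `config_list` can hold several entries; entries are typically tried in order, which can serve as a simple fallback setup:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# A sketch of an llm_config whose config_list holds more than one entry.\n",
    "# Entries are typically tried in order; the second model name is a placeholder.\n",
    "multi_config_agent = ConversableAgent(\n",
    "    \"multi_config_chatbot\",\n",
    "    llm_config={\n",
    "        \"config_list\": [\n",
    "            {\"model\": \"gpt-4\", \"api_key\": os.environ.get(\"OPENAI_API_KEY\")},\n",
    "            {\"model\": \"gpt-3.5-turbo\", \"api_key\": os.environ.get(\"OPENAI_API_KEY\")},\n",
    "        ]\n",
    "    },\n",
    "    code_execution_config=False,\n",
    "    function_map=None,\n",
    "    human_input_mode=\"NEVER\",\n",
    ")"
   ]
  },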
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You can ask this agent to generate a response to a question using the `generate_reply` method:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Sure, here's one for you:\n",
      "\n",
      "Why don't scientists trust atoms? \n",
      "\n",
      "Because they make up everything!\n"
     ]
    }
   ],
   "source": [
    "reply = agent.generate_reply(messages=[{\"content\": \"Tell me a joke.\", \"role\": \"user\"}])\n",
    "print(reply)"
   ]
  },
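  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Because `generate_reply` takes a list of messages, you can also pass a longer history in a single call. The sketch below assumes, as in the call above, that the agent does not remember messages passed to `generate_reply`, so the caller supplies the full history each time:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Supply a multi-turn history explicitly; the agent replies to the last user\n",
    "# message in the context of the earlier ones.\n",
    "history = [\n",
    "    {\"content\": \"Tell me a joke.\", \"role\": \"user\"},\n",
    "    {\"content\": reply, \"role\": \"assistant\"},\n",
    "    {\"content\": \"Now explain why that joke is funny.\", \"role\": \"user\"},\n",
    "]\n",
    "followup = agent.generate_reply(messages=history)\n",
    "print(followup)"
   ]
  },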
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Roles and Conversations\n",
    "\n",
    "In AutoGen, you can assign roles to agents and have them participate in conversations or chat with each other. A conversation is a sequence of messages exchanged between agents. You can then use these conversations to make progress on a task. In the example below, we assign different roles to two agents by setting their\n",
    "`system_message`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "cathy = ConversableAgent(\n",
    "    \"cathy\",\n",
    "    system_message=\"Your name is Cathy and you are a part of a duo of comedians.\",\n",
    "    llm_config={\"config_list\": [{\"model\": \"gpt-4\", \"temperature\": 0.9, \"api_key\": os.environ.get(\"OPENAI_API_KEY\")}]},\n",
    "    human_input_mode=\"NEVER\",  # Never ask for human input.\n",
    ")\n",
    "\n",
    "joe = ConversableAgent(\n",
    "    \"joe\",\n",
    "    system_message=\"Your name is Joe and you are a part of a duo of comedians.\",\n",
    "    llm_config={\"config_list\": [{\"model\": \"gpt-4\", \"temperature\": 0.7, \"api_key\": os.environ.get(\"OPENAI_API_KEY\")}]},\n",
    "    human_input_mode=\"NEVER\",  # Never ask for human input.\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now that we have two comedian agents, we can ask them to start a comedy show.\n",
    "This can be done using the `initiate_chat` method.\n",
    "We set `max_turns` to 2 to keep the conversation short."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\u001b[33mjoe\u001b[0m (to cathy):\n",
      "\n",
      "Tell me a joke.\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33mcathy\u001b[0m (to joe):\n",
      "\n",
      "Sure, here's a classic one for you:\n",
      "\n",
      "Why don't scientists trust atoms?\n",
      "\n",
      "Because they make up everything!\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33mjoe\u001b[0m (to cathy):\n",
      "\n",
      "That's a great one, Joe! Here's my turn:\n",
      "\n",
      "Why don't some fish play piano?\n",
      "\n",
      "Because you can't tuna fish!\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33mcathy\u001b[0m (to joe):\n",
      "\n",
      "Haha, good one, Cathy! I have another:\n",
      "\n",
      "Why was the math book sad?\n",
      "\n",
      "Because it had too many problems!\n",
      "\n",
      "--------------------------------------------------------------------------------\n"
     ]
    }
   ],
   "source": [
    "result = joe.initiate_chat(cathy, message=\"Tell me a joke.\", max_turns=2)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The comedians are bouncing off each other!"
   ]
  },
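  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`initiate_chat` also returns a result object. As a sketch (assuming the returned object exposes `chat_history`, `summary`, and `cost` attributes, as the `ChatResult` in recent `pyautogen` releases does), you can inspect the exchanged messages programmatically:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Inspect the chat result; attribute names assume a recent pyautogen ChatResult.\n",
    "print(len(result.chat_history), \"messages exchanged\")\n",
    "print(result.summary)  # By default, the last message of the conversation.\n",
    "print(result.cost)  # Token usage and estimated cost, when available."
   ]
  },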
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Summary\n",
    "\n",
    "In this chapter, we introduced the concepts of agents, roles, and conversations in AutoGen.\n",
    "For simplicity, we only used LLMs and created fully autonomous agents (`human_input_mode` was set to `NEVER`).\n",
    "In the next chapter,\n",
    "we will show how you can control when to _terminate_ a conversation between autonomous agents."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "autogen",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.13"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}