import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

# Getting Started

AutoGen is a framework that enables development of LLM applications using
multiple agents that can converse with each other to solve tasks. AutoGen agents
are customizable, conversable, and seamlessly allow human participation. They
can operate in various modes that employ combinations of LLMs, human inputs, and
tools.

![AutoGen Overview](/img/autogen_agentchat.png)

### Main Features

- AutoGen enables building next-gen LLM applications based on [multi-agent
  conversations](/docs/Use-Cases/agent_chat) with minimal effort. It simplifies
  the orchestration, automation, and optimization of a complex LLM workflow. It
  maximizes the performance of LLMs and overcomes their weaknesses.
- It supports [diverse conversation
  patterns](/docs/Use-Cases/agent_chat#supporting-diverse-conversation-patterns)
  for complex workflows. With customizable and conversable agents, developers can
  use AutoGen to build a wide range of conversation patterns concerning
  conversation autonomy, the number of agents, and agent conversation topology
  (see the sketch after this list).
- It provides a collection of working systems with different complexities. These
  systems span a [wide range of
  applications](/docs/Use-Cases/agent_chat#diverse-applications-implemented-with-autogen)
  from various domains, demonstrating how AutoGen can easily support diverse
  conversation patterns.

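
As a sketch of what a multi-agent topology can look like, the snippet below wires three agents into a group chat with AutoGen's `GroupChat` and `GroupChatManager` (the agent names, task message, and round limit are illustrative, not from this page):

```python
import os

from autogen import AssistantAgent, GroupChat, GroupChatManager, UserProxyAgent

llm_config = {
    "config_list": [{"model": "gpt-4", "api_key": os.environ["OPENAI_API_KEY"]}],
}

# Two specialist assistants plus a user proxy that relays human input
planner = AssistantAgent("planner", llm_config=llm_config)
coder = AssistantAgent("coder", llm_config=llm_config)
user_proxy = UserProxyAgent("user_proxy", code_execution_config=False)

# The manager selects the next speaker on each round of the group chat
group_chat = GroupChat(agents=[user_proxy, planner, coder], messages=[], max_round=8)
manager = GroupChatManager(groupchat=group_chat, llm_config=llm_config)

user_proxy.initiate_chat(manager, message="Design and implement a word-count CLI.")
```
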
AutoGen is powered by collaborative [research studies](/docs/Research) from
Microsoft, Penn State University, and University of Washington.

### Quickstart

```sh
pip install pyautogen
```

<Tabs>
<TabItem value="local" label="Local execution" default>

:::warning
When asked, be sure to check the generated code before continuing to ensure it is safe to run.
:::

```python
from autogen import AssistantAgent, UserProxyAgent
from autogen.coding import LocalCommandLineCodeExecutor

import os
from pathlib import Path

llm_config = {
    "config_list": [{"model": "gpt-4", "api_key": os.environ["OPENAI_API_KEY"]}],
}

# Directory where generated code is written and executed
work_dir = Path("coding")
work_dir.mkdir(exist_ok=True)

# The assistant writes code; the user proxy executes it on the local command line
assistant = AssistantAgent("assistant", llm_config=llm_config)

code_executor = LocalCommandLineCodeExecutor(work_dir=work_dir)
user_proxy = UserProxyAgent(
    "user_proxy", code_execution_config={"executor": code_executor}
)

# Start the chat
user_proxy.initiate_chat(
    assistant,
    message="Plot a chart of NVDA and TESLA stock price change YTD.",
)
```
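
When this runs, the assistant replies with generated code and the user proxy executes it in `coding/`. Because `UserProxyAgent` defaults to `human_input_mode="ALWAYS"`, you are prompted before each execution, which is when the warning above applies.
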

</TabItem>
<TabItem value="docker" label="Docker execution">

This tab assumes Docker is installed and running on your machine.

```python
from autogen import AssistantAgent, UserProxyAgent
from autogen.coding import DockerCommandLineCodeExecutor

import os
from pathlib import Path

llm_config = {
    "config_list": [{"model": "gpt-4", "api_key": os.environ["OPENAI_API_KEY"]}],
}

work_dir = Path("coding")
work_dir.mkdir(exist_ok=True)

# The context manager starts a Docker container for code execution
# and tears it down when the block exits
with DockerCommandLineCodeExecutor(work_dir=work_dir) as code_executor:
    assistant = AssistantAgent("assistant", llm_config=llm_config)
    user_proxy = UserProxyAgent(
        "user_proxy", code_execution_config={"executor": code_executor}
    )

    # Start the chat
    user_proxy.initiate_chat(
        assistant,
        message="Plot a chart of NVDA and TESLA stock price change YTD. Save the plot to a file called plot.png",
    )
```

Open `coding/plot.png` to see the generated plot.

</TabItem>
</Tabs>

:::tip
Learn more about configuring LLMs for agents [here](/docs/topics/llm_configuration).
:::
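
For instance, `config_list` can hold several endpoints rather than a single hard-coded key. Here is a minimal sketch using the `config_list_from_json` helper (the `OAI_CONFIG_LIST` file name follows AutoGen's convention; the filter is illustrative):

```python
import autogen

# Load endpoint configurations from the OAI_CONFIG_LIST env variable or file,
# keeping only gpt-4 entries
config_list = autogen.config_list_from_json(
    "OAI_CONFIG_LIST",
    filter_dict={"model": ["gpt-4"]},
)
llm_config = {"config_list": config_list}
```
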
#### Multi-Agent Conversation Framework

AutoGen enables next-gen LLM applications with a generic multi-agent conversation framework. It offers customizable and conversable agents that integrate LLMs, tools, and humans.

By automating chat among multiple capable agents, one can easily make them collectively perform tasks autonomously or with human feedback, including tasks that require using tools via code, as in this two-agent [example](https://github.com/microsoft/autogen/blob/main/test/twoagent.py).

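
Condensed, that example comes down to a two-agent loop along these lines (see the linked file for the authoritative version):

```python
from autogen import AssistantAgent, UserProxyAgent, config_list_from_json

# Load LLM inference endpoints from an env variable or a file
config_list = config_list_from_json(env_or_file="OAI_CONFIG_LIST")

assistant = AssistantAgent("assistant", llm_config={"config_list": config_list})
user_proxy = UserProxyAgent("user_proxy", code_execution_config={"work_dir": "coding"})

# The user proxy runs the assistant's code and feeds the results back
user_proxy.initiate_chat(assistant, message="Plot a chart of NVDA and TESLA stock price change YTD.")
```
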
The figure below shows an example conversation flow with AutoGen.

![Agent Chat Example](/img/chat_example.png)

### Where to Go Next?

* Go through the [tutorial](/docs/tutorial/introduction) to learn more about the core concepts in AutoGen
* Read the examples and guides in the [notebooks section](/docs/notebooks)
* Understand the use cases for [multi-agent conversation](/docs/Use-Cases/agent_chat) and [enhanced LLM inference](/docs/Use-Cases/enhanced_inference)
* Read the [API](/docs/reference/agentchat/conversable_agent/) docs
* Learn about [research](/docs/Research) around AutoGen
* Chat on [Discord](https://discord.gg/pAbnFJrkgZ)
* Follow on [Twitter](https://twitter.com/pyautogen)

If you like our project, please give it a [star](https://github.com/microsoft/autogen/stargazers) on GitHub. If you are interested in contributing, please read [Contributor's Guide](/docs/Contribute).

<iframe src="https://ghbtns.com/github-btn.html?user=microsoft&repo=autogen&type=star&count=true&size=large" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>