import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

# Getting Started
AutoGen is a framework that enables development of LLM applications using
multiple agents that can converse with each other to solve tasks. AutoGen agents
are customizable, conversable, and seamlessly allow human participation. They
can operate in various modes that employ combinations of LLMs, human inputs, and
tools.
![AutoGen Overview](/img/autogen_agentchat.png)
### Main Features
- AutoGen enables building next-gen LLM applications based on [multi-agent
conversations](/docs/Use-Cases/agent_chat) with minimal effort. It simplifies
the orchestration, automation, and optimization of a complex LLM workflow. It
maximizes the performance of LLMs and overcomes their weaknesses.
- It supports [diverse conversation
patterns](/docs/Use-Cases/agent_chat#supporting-diverse-conversation-patterns)
for complex workflows. With customizable and conversable agents, developers can
use AutoGen to build a wide range of conversation patterns concerning
conversation autonomy, the number of agents, and agent conversation topology.
- It provides a collection of working systems spanning a [wide range of
applications](/docs/Use-Cases/agent_chat#diverse-applications-implemented-with-autogen)
from various domains and levels of complexity, demonstrating how AutoGen can
easily support diverse conversation patterns.

AutoGen is powered by collaborative [research studies](/docs/Research) from
Microsoft, Penn State University, and University of Washington.

### Quickstart
```sh
pip install pyautogen
```
<Tabs>
<TabItem value="local" label="Local execution" default>
:::warning
When asked, be sure to check the generated code before continuing to ensure it is safe to run.
:::
```python
from autogen import AssistantAgent, UserProxyAgent
from autogen.coding import LocalCommandLineCodeExecutor
import os
from pathlib import Path

llm_config = {
    "config_list": [{"model": "gpt-4", "api_key": os.environ["OPENAI_API_KEY"]}],
}

work_dir = Path("coding")
work_dir.mkdir(exist_ok=True)

assistant = AssistantAgent("assistant", llm_config=llm_config)
code_executor = LocalCommandLineCodeExecutor(work_dir=work_dir)
user_proxy = UserProxyAgent(
    "user_proxy", code_execution_config={"executor": code_executor}
)

# Start the chat
user_proxy.initiate_chat(
    assistant,
    message="Plot a chart of NVDA and TESLA stock price change YTD.",
)
```
</TabItem>
<TabItem value="docker" label="Docker execution">
```python
from autogen import AssistantAgent, UserProxyAgent
from autogen.coding import DockerCommandLineCodeExecutor
import os
from pathlib import Path

llm_config = {
    "config_list": [{"model": "gpt-4", "api_key": os.environ["OPENAI_API_KEY"]}],
}

work_dir = Path("coding")
work_dir.mkdir(exist_ok=True)

with DockerCommandLineCodeExecutor(work_dir=work_dir) as code_executor:
    assistant = AssistantAgent("assistant", llm_config=llm_config)
    user_proxy = UserProxyAgent(
        "user_proxy", code_execution_config={"executor": code_executor}
    )

    # Start the chat
    user_proxy.initiate_chat(
        assistant,
        message="Plot a chart of NVDA and TESLA stock price change YTD. Save the plot to a file called plot.png",
    )
```
Open `coding/plot.png` to see the generated plot.
</TabItem>
</Tabs>
:::tip
Learn more about configuring LLMs for agents [here](/docs/topics/llm_configuration).
:::
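
Rather than reading the API key from an environment variable inline, a common pattern is to keep model configurations in a JSON file and load them with `autogen.config_list_from_json`. The snippet below is a minimal sketch of this approach; the filename `OAI_CONFIG_LIST` and the filter are illustrative, not required:

```python
import autogen

# Load a list of model configurations from a JSON file (or an environment
# variable) named OAI_CONFIG_LIST; the name here follows the common convention.
config_list = autogen.config_list_from_json(
    "OAI_CONFIG_LIST",
    filter_dict={"model": ["gpt-4"]},  # keep only the gpt-4 entries
)

llm_config = {"config_list": config_list}
```

Each entry in the file is typically a JSON object with at least a `model` and an `api_key` field.
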
#### Multi-Agent Conversation Framework

AutoGen enables next-gen LLM applications with a generic multi-agent conversation framework. It offers customizable and conversable agents that integrate LLMs, tools, and humans.

By automating chat among multiple capable agents, one can easily make them collectively perform tasks autonomously or with human feedback, including tasks that require using tools via code. For a minimal working script, see [this two-agent example](https://github.com/microsoft/autogen/blob/main/test/twoagent.py).
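
Beyond writing and executing code, agents can also call developer-provided Python functions as tools. The snippet below is a rough sketch of one way to register a tool (assuming a recent `pyautogen` release that exports `register_function`); the `get_weather` function, its behavior, and the agent setup are illustrative only:

```python
from typing import Annotated

from autogen import AssistantAgent, UserProxyAgent, register_function

assistant = AssistantAgent("assistant", llm_config=llm_config)
user_proxy = UserProxyAgent(
    "user_proxy",
    human_input_mode="NEVER",
    code_execution_config=False,
    is_termination_msg=lambda msg: "TERMINATE" in (msg.get("content") or ""),
)


# A toy tool; the assistant suggests calls to it and the user proxy executes them.
def get_weather(city: Annotated[str, "Name of the city"]) -> str:
    return f"The weather in {city} is sunny."  # placeholder implementation


register_function(
    get_weather,
    caller=assistant,     # agent that may propose tool calls
    executor=user_proxy,  # agent that runs the tool and returns the result
    name="get_weather",
    description="Get the current weather for a city.",
)

user_proxy.initiate_chat(assistant, message="What is the weather in Seattle?")
```
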
The figure below shows an example conversation flow with AutoGen.
![Agent Chat Example](/img/chat_example.png)
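
Conversations are also not limited to two agents. As a minimal sketch of a group chat (the agent roles, system messages, and task below are assumptions for illustration, reusing the `llm_config` defined earlier), a `GroupChatManager` can coordinate several agents taking turns:

```python
from pathlib import Path

from autogen import AssistantAgent, GroupChat, GroupChatManager, UserProxyAgent
from autogen.coding import LocalCommandLineCodeExecutor

work_dir = Path("coding")
work_dir.mkdir(exist_ok=True)

# Illustrative roles; adapt the system messages to your own task.
planner = AssistantAgent(
    "planner", system_message="Break the task into small steps.", llm_config=llm_config
)
coder = AssistantAgent(
    "coder", system_message="Write Python code to carry out each step.", llm_config=llm_config
)
user_proxy = UserProxyAgent(
    "user_proxy",
    human_input_mode="TERMINATE",  # only ask the human before ending the chat
    code_execution_config={"executor": LocalCommandLineCodeExecutor(work_dir=work_dir)},
)

# The group chat holds the shared message history; the manager picks the next speaker.
groupchat = GroupChat(agents=[user_proxy, planner, coder], messages=[], max_round=12)
manager = GroupChatManager(groupchat=groupchat, llm_config=llm_config)

user_proxy.initiate_chat(manager, message="Plot a chart of NVDA and TESLA stock price change YTD.")
```

By default the manager uses the LLM to choose which agent speaks next in each round, up to `max_round` rounds.
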
### Where to Go Next?
* Go through the [tutorial](/docs/tutorial/introduction) to learn more about the core concepts in AutoGen
* Read the examples and guides in the [notebooks section](/docs/notebooks)
* Understand the use cases for [multi-agent conversation](/docs/Use-Cases/agent_chat) and [enhanced LLM inference](/docs/Use-Cases/enhanced_inference)
* Read the [API](/docs/reference/agentchat/conversable_agent/) docs
* Learn about [research](/docs/Research) around AutoGen
* Chat on [Discord](https://discord.gg/pAbnFJrkgZ)
* Follow on [Twitter](https://twitter.com/pyautogen)

If you like our project, please give it a [star](https://github.com/microsoft/autogen/stargazers) on GitHub. If you are interested in contributing, please read the [Contributor's Guide](/docs/Contribute).
<iframe src="https://ghbtns.com/github-btn.html?user=microsoft&amp;repo=autogen&amp;type=star&amp;count=true&amp;size=large" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>