Update docs for new executors (#2119)

* Update docs for new executors

* Update website/docs/FAQ.mdx

Co-authored-by: gagb <gagb@users.noreply.github.com>

* Update website/docs/FAQ.mdx

Co-authored-by: gagb <gagb@users.noreply.github.com>

* Update website/docs/installation/Installation.mdx

Co-authored-by: gagb <gagb@users.noreply.github.com>

* Update website/docs/installation/Installation.mdx

Co-authored-by: gagb <gagb@users.noreply.github.com>

---------

Co-authored-by: gagb <gagb@users.noreply.github.com>
Eric Zhu 2024-03-22 21:19:54 -07:00 committed by GitHub
parent 01afc9bbe7
commit 3dfa305acb
3 changed files with 91 additions and 70 deletions

website/docs/FAQ.mdx

@ -71,31 +71,46 @@ in the system message. This line is in the default system message of the `Assist
If the `# filename` still doesn't appear in the suggested code, consider adding explicit instructions such as "save the code to disk" in the initial user message in `initiate_chat`.
The `AssistantAgent` doesn't save all the code by default, because there are cases in which one would just like to finish a task without saving the code.
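For example, a minimal sketch of nudging the assistant in the initial message (the `assistant` and `user_proxy` names here are illustrative and stand in for agents you have already constructed):
```python
# Illustrative sketch: ask explicitly for the code to be saved,
# so the assistant includes a `# filename` comment in its code block.
user_proxy.initiate_chat(
    assistant,
    message="Plot a chart of stock price change YTD and save the code to disk.",
)
```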
## Legacy code executor
:::note
The new code executors offer more choices of execution backend.
Read more about [code executors](/docs/tutorial/code-executors).
:::

The legacy code executor is used by specifying the `code_execution_config` in the agent's constructor.
```python
from autogen import UserProxyAgent

user_proxy = UserProxyAgent(
    name="user_proxy",
    code_execution_config={"work_dir": "_output", "use_docker": "python:3"},
)
```

In this example, the `code_execution_config` specifies that the code will be
executed in a docker container with the image `python:3`.
By default, the image name is `python:3-slim` if not specified.
The `work_dir` specifies the directory where the code will be executed.
If you have problems with agents running `pip install` or get errors similar to
`Error while fetching server API version: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))`,
you can choose **'python:3'** as the image, as shown in the code example above, and
that should solve the problem.
By default, the legacy code executor runs code in a docker container. If you want to run code locally
(not recommended) then `use_docker` can be set to `False` in `code_execution_config`
for each code-execution agent, or set `AUTOGEN_USE_DOCKER` to `False` as an
environment variable.
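For instance, a minimal sketch of a legacy-style configuration that runs code locally (not recommended; the agent name is illustrative):
```python
from autogen import UserProxyAgent

# Illustrative sketch: legacy executor running code locally instead of in docker.
user_proxy = UserProxyAgent(
    name="user_proxy",
    code_execution_config={"work_dir": "_output", "use_docker": False},
)
```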
You can also develop your AutoGen application in a docker container.
For example, when developing in [GitHub Codespaces](https://codespaces.new/microsoft/autogen?quickstart=1),
AutoGen runs in a docker container.
If you are not developing in GitHub Codespaces,
follow the instructions [here](installation/Docker.md#option-1-install-and-run-autogen-in-docker)
to install and run AutoGen in docker.
## Agents keep thanking each other when using `gpt-3.5-turbo`
When using `gpt-3.5-turbo` you may often encounter agents going into a "gratitude loop", meaning when they complete a task they will begin congratulating and thanking each other in a continuous loop. This is a limitation in the performance of `gpt-3.5-turbo`, in contrast to `gpt-4` which has no problem remembering instructions. This can hinder the experimentation experience when trying to test out your own use case with cheaper models.
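One hedged workaround (it does not fix the model's behavior, it only stops the loop) is to bound the conversation explicitly, for example by capping automatic replies; the cap below is an arbitrary illustrative value:
```python
from autogen import UserProxyAgent

# Sketch: limit consecutive auto-replies so a "thank you" loop terminates.
user_proxy = UserProxyAgent(
    name="user_proxy",
    max_consecutive_auto_reply=5,  # arbitrary illustrative cap
)
```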

website/docs/installation/Installation.mdx

@ -78,35 +78,29 @@ pip install pyautogen
:::
## Code execution with Docker (default)
We recommend using Docker for code execution.
To install Docker, follow the instructions for your operating system on the [Docker website](https://docs.docker.com/get-docker/).
A simple example of how to use Docker for code execution is shown below:
```python
from pathlib import Path
from autogen import UserProxyAgent
from autogen.coding import DockerCommandLineCodeExecutor

work_dir = Path("coding")
work_dir.mkdir(exist_ok=True)

with DockerCommandLineCodeExecutor(work_dir=work_dir) as code_executor:
    user_proxy = UserProxyAgent(
        name="user_proxy",
        code_execution_config={"executor": code_executor},
    )
```
To learn more about code executors, see the [code executors tutorial](/docs/tutorial/code-executors).
You might have seen a different way of defining the executors without creating the
executor object; that style is covered in the FAQ under [legacy code executor](/docs/FAQ#legacy-code-executor).
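You can also sanity-check an executor outside of any agent; a minimal sketch, assuming the `CodeBlock` helper from the code executors tutorial:
```python
from pathlib import Path
from autogen.coding import CodeBlock, DockerCommandLineCodeExecutor

work_dir = Path("coding")
work_dir.mkdir(exist_ok=True)

# Sketch: execute one code block directly in the docker executor and inspect the result.
with DockerCommandLineCodeExecutor(work_dir=work_dir) as executor:
    result = executor.execute_code_blocks(
        code_blocks=[CodeBlock(language="python", code="print('Hello from docker')")]
    )
    print(result.exit_code, result.output)
```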


@ -21,11 +21,9 @@ set TOGETHER_API_KEY=YourTogetherAIKeyHere
Create your LLM configuration with the [model you want](https://docs.together.ai/docs/inference-models).
```python
import os

config_list = [
    {
        # Available together.ai model strings:
        # https://docs.together.ai/docs/inference-models
@ -33,62 +31,76 @@ llm_config={
"api_key": os.environ['TOGETHER_API_KEY'],
"base_url": "https://api.together.xyz/v1"
}
],
"cache_seed": 42
}
]
```
## Construct Agents
```python
from pathlib import Path
from autogen import AssistantAgent, UserProxyAgent
from autogen.coding import LocalCommandLineCodeExecutor

work_dir = Path("groupchat")
work_dir.mkdir(exist_ok=True)

# Create local command line code executor.
code_executor = LocalCommandLineCodeExecutor(work_dir=work_dir)

# User Proxy will execute code and finish the chat upon typing 'exit'
user_proxy = UserProxyAgent(
    name="UserProxy",
    system_message="A human admin",
    code_execution_config={
        "last_n_messages": 2,
        "executor": code_executor,
    },
    human_input_mode="TERMINATE",
    is_termination_msg=lambda x: "TERMINATE" in x.get("content"),
)
# Python Coder agent
coder = AssistantAgent(
    name="softwareCoder",
    description="Software Coder, writes Python code as required and reiterates with feedback from the Code Reviewer.",
    system_message="You are a senior Python developer, a specialist in writing succinct Python functions.",
    llm_config={"config_list": config_list},
)
# Code Reviewer agent
reviewer = AssistantAgent(
    name="codeReviewer",
    description="Code Reviewer, reviews written code for correctness, efficiency, and security. Asks the Software Coder to address issues.",
    system_message="You are a Code Reviewer, experienced in checking code for correctness, efficiency, and security. Review and provide feedback to the Software Coder until you are satisfied, then return the word TERMINATE",
    is_termination_msg=lambda x: "TERMINATE" in x.get("content"),
    llm_config={"config_list": config_list},
)
```
## Establish the group chat
```python
from autogen import GroupChat, GroupChatManager
# Establish the Group Chat and disallow a speaker being selected consecutively
groupchat = GroupChat(agents=[user_proxy, coder, reviewer], messages=[], max_round=12, allow_repeat_speaker=False)
# Manages the group of multiple agents
manager = GroupChatManager(groupchat=groupchat, llm_config={"config_list": config_list})
```
## Start Chat
```python
from autogen.cache import Cache

# Cache LLM responses.
with Cache.disk() as cache:
    # Start the chat with a request to write a function
    user_proxy.initiate_chat(
        manager,
        message="Write a Python function for the Fibonacci sequence, the function will have one parameter for the number in the sequence, which the function will return the Fibonacci number for.",
        cache=cache,
    )
# type exit to terminate the chat
```