set use_docker to default to True (#1147)

* set use_docker to default to true

* black formatting

* centralize checking and add env variable option

* set docker env flag for contrib tests

* set docker env flag for contrib tests

* better error message and cleanup

* disable explicit docker tests

* docker is installed so can't check for that in test

* pr comments and fix test

* rename and fix function descriptions

* documentation

* update notebooks so that they can be run with change in default

* add unit tests for new code

* cache and restore env var

* skip on windows because docker is running in the CI but there are problems connecting the volume

* update documentation

* move header

* update contrib tests
This commit is contained in:
olgavrou 2024-01-18 19:03:49 +02:00 committed by GitHub
parent 22e36cbb10
commit a911d1c2ec
36 changed files with 547 additions and 106 deletions

View File

@@ -40,6 +40,12 @@ jobs:
pip install -e .
python -c "import autogen"
pip install -e. pytest mock
- name: Set AUTOGEN_USE_DOCKER based on OS
shell: bash
run: |
if [[ ${{ matrix.os }} != ubuntu-latest ]]; then
echo "AUTOGEN_USE_DOCKER=False" >> $GITHUB_ENV
fi
- name: Test with pytest
if: matrix.python-version != '3.10'
run: |

View File

@@ -45,6 +45,12 @@ jobs:
- name: Install packages and dependencies for RetrieveChat
run: |
pip install -e .[retrievechat]
- name: Set AUTOGEN_USE_DOCKER based on OS
shell: bash
run: |
if [[ ${{ matrix.os }} != ubuntu-latest ]]; then
echo "AUTOGEN_USE_DOCKER=False" >> $GITHUB_ENV
fi
- name: Test RetrieveChat
run: |
pytest test/test_retrieve_utils.py test/agentchat/contrib/test_retrievechat.py test/agentchat/contrib/test_qdrant_retrievechat.py --skip-openai
@@ -81,6 +87,12 @@ jobs:
- name: Install packages and dependencies for Compression
run: |
pip install -e .
- name: Set AUTOGEN_USE_DOCKER based on OS
shell: bash
run: |
if [[ ${{ matrix.os }} != ubuntu-latest ]]; then
echo "AUTOGEN_USE_DOCKER=False" >> $GITHUB_ENV
fi
- name: Test Compression
if: matrix.python-version != '3.10' # diversify the python versions
run: |
@@ -118,6 +130,12 @@ jobs:
- name: Install packages and dependencies for GPTAssistantAgent
run: |
pip install -e .
- name: Set AUTOGEN_USE_DOCKER based on OS
shell: bash
run: |
if [[ ${{ matrix.os }} != ubuntu-latest ]]; then
echo "AUTOGEN_USE_DOCKER=False" >> $GITHUB_ENV
fi
- name: Test GPTAssistantAgent
if: matrix.python-version != '3.11' # diversify the python versions
run: |
@@ -155,6 +173,12 @@ jobs:
- name: Install packages and dependencies for Teachability
run: |
pip install -e .[teachable]
- name: Set AUTOGEN_USE_DOCKER based on OS
shell: bash
run: |
if [[ ${{ matrix.os }} != ubuntu-latest ]]; then
echo "AUTOGEN_USE_DOCKER=False" >> $GITHUB_ENV
fi
- name: Test TeachableAgent
if: matrix.python-version != '3.9' # diversify the python versions
run: |
@@ -192,6 +216,12 @@ jobs:
- name: Install packages and dependencies for LMM
run: |
pip install -e .[lmm]
- name: Set AUTOGEN_USE_DOCKER based on OS
shell: bash
run: |
if [[ ${{ matrix.os }} != ubuntu-latest ]]; then
echo "AUTOGEN_USE_DOCKER=False" >> $GITHUB_ENV
fi
- name: Test LMM and LLaVA
run: |
pytest test/agentchat/contrib/test_img_utils.py test/agentchat/contrib/test_lmm.py test/agentchat/contrib/test_llava.py --skip-openai

View File

@@ -65,7 +65,7 @@ The easiest way to start playing is
## [Installation](https://microsoft.github.io/autogen/docs/Installation)
### Option 1. Install and Run AutoGen in Docker
Find detailed instructions for users [here](https://microsoft.github.io/autogen/docs/Installation#option-1-install-and-run-autogen-in-docker), and for developers [here](https://microsoft.github.io/autogen/docs/Contribute#docker).
Find detailed instructions for users [here](https://microsoft.github.io/autogen/docs/Installation#option-1-install-and-run-autogen-in-docker), and for developers [here](https://microsoft.github.io/autogen/docs/Contribute#docker-for-development).
### Option 2. Install AutoGen Locally
@@ -86,7 +86,7 @@ Find more options in [Installation](https://microsoft.github.io/autogen/docs/Ins
<!-- Each of the [`notebook examples`](https://github.com/microsoft/autogen/tree/main/notebook) may require a specific option to be installed. -->
Even if you are installing AutoGen locally out of docker, we recommend performing [code execution](https://microsoft.github.io/autogen/docs/FAQ/#code-execution) in docker. Find more instructions [here](https://microsoft.github.io/autogen/docs/Installation#docker).
Even if you are installing and running AutoGen locally outside of docker, the recommendation and default behavior of agents is to perform [code execution](https://microsoft.github.io/autogen/docs/FAQ/#code-execution) in docker. Find more instructions and how to change the default behaviour [here](https://microsoft.github.io/autogen/docs/Installation#code-execution-with-docker-(default)).
For LLM inference configurations, check the [FAQs](https://microsoft.github.io/autogen/docs/FAQ#set-your-api-endpoints).
@@ -111,7 +111,7 @@ from autogen import AssistantAgent, UserProxyAgent, config_list_from_json
config_list = config_list_from_json(env_or_file="OAI_CONFIG_LIST")
# You can also set config_list directly as a list, for example, config_list = [{'model': 'gpt-4', 'api_key': '<your OpenAI API key here>'},]
assistant = AssistantAgent("assistant", llm_config={"config_list": config_list})
user_proxy = UserProxyAgent("user_proxy", code_execution_config={"work_dir": "coding"})
user_proxy = UserProxyAgent("user_proxy", code_execution_config={"work_dir": "coding", "use_docker": False}) # IMPORTANT: set to True to run code in docker, recommended
user_proxy.initiate_chat(assistant, message="Plot a chart of NVDA and TESLA stock price change YTD.")
# This initiates an automated chat between the two agents to solve the task
```
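
The precedence this change introduces can be sketched as follows. This is an illustrative, self-contained mirror of the documented behavior, not the library's actual API; the helper name `resolve_use_docker` is hypothetical. An explicit `use_docker` entry in `code_execution_config` wins, otherwise the `AUTOGEN_USE_DOCKER` environment variable is consulted, otherwise docker is the default:

```python
import os

def resolve_use_docker(code_execution_config):
    """Hypothetical mirror of the new precedence: an explicit `use_docker`
    entry wins; otherwise the AUTOGEN_USE_DOCKER environment variable is
    consulted; otherwise docker is used by default."""
    explicit = code_execution_config.get("use_docker")
    if explicit is not None:
        return explicit
    env = os.environ.get("AUTOGEN_USE_DOCKER", "True").lower()
    if env in {"1", "true", "yes", "t"}:
        return True
    if env in {"0", "false", "no", "f"}:
        return False
    raise ValueError(f"Invalid value for AUTOGEN_USE_DOCKER: {env}")

# The explicit config value always wins over the environment variable.
os.environ["AUTOGEN_USE_DOCKER"] = "False"
print(resolve_use_docker({"work_dir": "coding", "use_docker": True}))  # True
print(resolve_use_docker({"work_dir": "coding"}))  # False
```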

View File

@@ -9,7 +9,18 @@ from collections import defaultdict
from typing import Any, Awaitable, Callable, Dict, List, Literal, Optional, Tuple, Type, TypeVar, Union
from .. import OpenAIWrapper
from ..code_utils import DEFAULT_MODEL, UNKNOWN, content_str, execute_code, extract_code, infer_lang
from ..code_utils import (
DEFAULT_MODEL,
UNKNOWN,
content_str,
check_can_use_docker_or_throw,
decide_use_docker,
execute_code,
extract_code,
infer_lang,
)
from ..function_utils import get_function_schema, load_basemodels_if_needed, serialize_to_str
from .agent import Agent
from .._pydantic import model_dump
@@ -89,11 +100,10 @@ class ConversableAgent(Agent):
The default working directory is the "extensions" directory under
"path_to_autogen".
- use_docker (Optional, list, str or bool): The docker image to use for code execution.
Default is True, which means the code will be executed in a docker container. A default list of images will be used.
If a list or a str of image name(s) is provided, the code will be executed in a docker container
with the first image successfully pulled.
If None, False or empty, the code will be executed in the current environment.
Default is True when the docker python package is installed.
When set to True, a default list will be used.
If False, the code will be executed in the current environment.
We strongly recommend using docker for code execution.
- timeout (Optional, int): The maximum execution time in seconds.
- last_n_messages (Experimental, Optional, int or str): The number of messages to look back for code execution. Default to 1. If set to 'auto', it will scan backwards through all messages arriving since the agent last spoke (typically this is the last time execution was attempted).
@@ -128,6 +138,13 @@ class ConversableAgent(Agent):
self._code_execution_config: Union[Dict, Literal[False]] = (
{} if code_execution_config is None else code_execution_config
)
if isinstance(self._code_execution_config, dict):
use_docker = self._code_execution_config.get("use_docker", None)
use_docker = decide_use_docker(use_docker)
check_can_use_docker_or_throw(use_docker)
self._code_execution_config["use_docker"] = use_docker
self.human_input_mode = human_input_mode
self._max_consecutive_auto_reply = (
max_consecutive_auto_reply if max_consecutive_auto_reply is not None else self.MAX_CONSECUTIVE_AUTO_REPLY

View File

@@ -62,12 +62,11 @@ class UserProxyAgent(ConversableAgent):
The default working directory is the "extensions" directory under
"path_to_autogen".
- use_docker (Optional, list, str or bool): The docker image to use for code execution.
Default is True, which means the code will be executed in a docker container. A default list of images will be used.
If a list or a str of image name(s) is provided, the code will be executed in a docker container
with the first image successfully pulled.
If None, False or empty, the code will be executed in the current environment.
Default is True, which will be converted into a list.
If the code is executed in the current environment,
the code must be trusted.
If False, the code will be executed in the current environment.
We strongly recommend using docker for code execution.
- timeout (Optional, int): The maximum execution time in seconds.
- last_n_messages (Experimental, Optional, int): The number of messages to look back for code execution. Default to 1.
default_auto_reply (str or dict or None): the default auto reply message when no code execution or llm based reply is generated.

View File

@@ -17,6 +17,7 @@ try:
except ImportError:
docker = None
SENTINEL = object()
DEFAULT_MODEL = "gpt-4"
FAST_MODEL = "gpt-3.5-turbo"
# Regular expression for finding a code block
@@ -225,6 +226,70 @@ def _cmd(lang):
raise NotImplementedError(f"{lang} not recognized in code execution")
def is_docker_running():
"""Check if docker is running.
Returns:
bool: True if docker is running; False otherwise.
"""
if docker is None:
return False
try:
client = docker.from_env()
client.ping()
return True
except docker.errors.DockerException:
return False
def in_docker_container():
"""Check if the code is running in a docker container.
Returns:
bool: True if the code is running in a docker container; False otherwise.
"""
return os.path.exists("/.dockerenv")
def decide_use_docker(use_docker) -> bool:
if use_docker is None:
env_var_use_docker = os.environ.get("AUTOGEN_USE_DOCKER", "True")
truthy_values = {"1", "true", "yes", "t"}
falsy_values = {"0", "false", "no", "f"}
# Convert the value to lowercase for case-insensitive comparison
env_var_use_docker_lower = env_var_use_docker.lower()
# Determine the boolean value based on the environment variable
if env_var_use_docker_lower in truthy_values:
use_docker = True
elif env_var_use_docker_lower in falsy_values:
use_docker = False
elif env_var_use_docker_lower == "none": # Special case for 'None' as a string
use_docker = None
else:
# Raise an error for any unrecognized value
raise ValueError(
f'Invalid value for AUTOGEN_USE_DOCKER: {env_var_use_docker}. Please set AUTOGEN_USE_DOCKER to "1/True/yes", "0/False/no", or "None".'
)
return use_docker
def check_can_use_docker_or_throw(use_docker) -> None:
if use_docker is not None:
inside_docker = in_docker_container()
docker_installed_and_running = is_docker_running()
if use_docker and not inside_docker and not docker_installed_and_running:
raise RuntimeError(
"Code execution is set to be run in docker (default behaviour) but docker is not running.\n"
"The options available are:\n"
"- Make sure docker is running (advised approach for code execution)\n"
'- Set "use_docker": False in code_execution_config\n'
'- Set AUTOGEN_USE_DOCKER to "0/False/no" in your environment variables'
)
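
The accepted values of `AUTOGEN_USE_DOCKER` can be exercised directly. The sketch below is a self-contained copy of the `decide_use_docker` helper added above, with a small demo of how each environment value is interpreted:

```python
import os

def decide_use_docker(use_docker):
    # Copied from the new helper in code_utils: the environment is only
    # consulted when the caller did not pass an explicit value.
    if use_docker is None:
        env_var_use_docker = os.environ.get("AUTOGEN_USE_DOCKER", "True")
        truthy_values = {"1", "true", "yes", "t"}
        falsy_values = {"0", "false", "no", "f"}
        env_var_use_docker_lower = env_var_use_docker.lower()
        if env_var_use_docker_lower in truthy_values:
            use_docker = True
        elif env_var_use_docker_lower in falsy_values:
            use_docker = False
        elif env_var_use_docker_lower == "none":  # 'None' as a string
            use_docker = None
        else:
            raise ValueError(
                f"Invalid value for AUTOGEN_USE_DOCKER: {env_var_use_docker}."
            )
    return use_docker

# Case-insensitive parsing of the environment variable.
for value in ["yes", "0", "None"]:
    os.environ["AUTOGEN_USE_DOCKER"] = value
    print(value, "->", decide_use_docker(None))
```

Note that an explicit argument (e.g. `decide_use_docker(False)`) bypasses the environment entirely, which is why agents that set `use_docker` in `code_execution_config` are unaffected by the variable.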
def _sanitize_filename_for_docker_tag(filename: str) -> str:
"""Convert a filename to a valid docker tag.
See https://docs.docker.com/engine/reference/commandline/tag/ for valid tag
@@ -253,7 +318,7 @@ def execute_code(
timeout: Optional[int] = None,
filename: Optional[str] = None,
work_dir: Optional[str] = None,
use_docker: Optional[Union[List[str], str, bool]] = None,
use_docker: Union[List[str], str, bool] = SENTINEL,
lang: Optional[str] = "python",
) -> Tuple[int, str, str]:
"""Execute code in a docker container.
@@ -273,15 +338,15 @@
If None, a default working directory will be used.
The default working directory is the "extensions" directory under
"path_to_autogen".
use_docker (Optional, list, str or bool): The docker image to use for code execution.
use_docker (list, str or bool): The docker image to use for code execution.
Default is True, which means the code will be executed in a docker container. A default list of images will be used.
If a list or a str of image name(s) is provided, the code will be executed in a docker container
with the first image successfully pulled.
If None, False or empty, the code will be executed in the current environment.
Default is None, which will be converted into an empty list when docker package is available.
If False, the code will be executed in the current environment.
Expected behaviour:
- If `use_docker` is explicitly set to True and the docker package is available, the code will run in a Docker container.
- If `use_docker` is explicitly set to True but the Docker package is missing, an error will be raised.
- If `use_docker` is not set (i.e., left default to None) and the Docker package is not available, a warning will be displayed, but the code will run natively.
- If `use_docker` is not set (i.e. left default to True) or is explicitly set to True and the docker package is available, the code will run in a Docker container.
- If `use_docker` is not set (i.e. left default to True) or is explicitly set to True but the Docker package is missing or docker isn't running, an error will be raised.
- If `use_docker` is explicitly set to False, the code will run natively.
If the code is executed in the current environment,
the code must be trusted.
lang (Optional, str): The language of the code. Default is "python".
@@ -296,23 +361,13 @@
logger.error(error_msg)
raise AssertionError(error_msg)
if use_docker and docker is None:
error_msg = "Cannot use docker because the python docker package is not available."
logger.error(error_msg)
raise AssertionError(error_msg)
running_inside_docker = in_docker_container()
docker_running = is_docker_running()
# Warn if use_docker was unspecified (or None), and cannot be provided (the default).
# In this case the current behavior is to fall back to run natively, but this behavior
# is subject to change.
if use_docker is None:
if docker is None:
use_docker = False
logger.warning(
"execute_code was called without specifying a value for use_docker. Since the python docker package is not available, code will be run natively. Note: this fallback behavior is subject to change"
)
else:
# Default to true
use_docker = True
# SENTINEL is used to indicate that the user did not explicitly set the argument
if use_docker is SENTINEL:
use_docker = decide_use_docker(use_docker=None)
check_can_use_docker_or_throw(use_docker)
timeout = timeout or DEFAULT_TIMEOUT
original_filename = filename
@@ -324,15 +379,16 @@
filename = f"tmp_code_{code_hash}.{'py' if lang.startswith('python') else lang}"
if work_dir is None:
work_dir = WORKING_DIR
filepath = os.path.join(work_dir, filename)
file_dir = os.path.dirname(filepath)
os.makedirs(file_dir, exist_ok=True)
if code is not None:
with open(filepath, "w", encoding="utf-8") as fout:
fout.write(code)
# check if already running in a docker container
in_docker_container = os.path.exists("/.dockerenv")
if not use_docker or in_docker_container:
if not use_docker or running_inside_docker:
# already running in a docker container
cmd = [
sys.executable if lang.startswith("python") else _cmd(lang),
@@ -376,7 +432,13 @@
return result.returncode, logs, None
# create a docker client
if use_docker and not docker_running:
raise RuntimeError(
"Docker package is missing or docker is not running. Please make sure docker is running or set use_docker=False."
)
client = docker.from_env()
image_list = (
["python:3-slim", "python:3", "python:3-windowsservercore"]
if use_docker is True
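
The `SENTINEL` default used in `execute_code` above is worth a note: a module-level `object()` lets the function distinguish "argument omitted" from an explicit `None` or `False`, which a plain `use_docker=None` default cannot. A minimal sketch of the pattern, with a hypothetical `execute` function standing in for `execute_code`:

```python
_SENTINEL = object()  # unique marker; no caller-supplied value can equal it

def execute(code, use_docker=_SENTINEL):
    # Only when the caller omitted the argument do we fall back to the
    # environment-driven default (decide_use_docker in the real code,
    # simplified here to True); an explicit None/False/True is respected.
    if use_docker is _SENTINEL:
        use_docker = True  # stand-in for decide_use_docker(use_docker=None)
    return use_docker

print(execute("x = 1"))                     # True  (default applied)
print(execute("x = 1", use_docker=False))   # False (explicit opt-out)
print(execute("x = 1", use_docker=None))    # None  (explicit, not defaulted)
```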

View File

@@ -348,7 +348,7 @@
" is_termination_msg=lambda x: x.get(\"content\", \"\").rstrip().endswith(\"TERMINATE\"),\n",
" code_execution_config={\n",
" \"work_dir\": \"coding\",\n",
" \"use_docker\": False, # set to True or image name like \"python:3\" to use docker\n",
" \"use_docker\": False, # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.\n",
" },\n",
")\n",
"# the assistant receives a message from the user_proxy, which contains the task description\n",

View File

@@ -561,7 +561,9 @@
"mathproxyagent = MathUserProxyAgent(\n",
" name=\"mathproxyagent\",\n",
" human_input_mode=\"NEVER\",\n",
" code_execution_config={\"use_docker\": False},\n",
" code_execution_config={\n",
" \"use_docker\": False\n",
" }, # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.\n",
" max_consecutive_auto_reply=5,\n",
")\n",
"math_problem = (\n",
@@ -835,7 +837,10 @@
" is_termination_msg=lambda x: x.get(\"content\", \"\") and x.get(\"content\", \"\").rstrip().endswith(\"TERMINATE\"),\n",
" human_input_mode=\"NEVER\",\n",
" max_consecutive_auto_reply=10,\n",
" code_execution_config={\"work_dir\": \"coding\"},\n",
" code_execution_config={\n",
" \"work_dir\": \"coding\",\n",
" \"use_docker\": False,\n",
" }, # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.\n",
")\n",
"\n",
"\n",
@@ -1259,7 +1264,10 @@
" max_consecutive_auto_reply=10,\n",
" is_termination_msg=lambda x: x.get(\"content\", \"\").rstrip().endswith(\"TERMINATE\")\n",
" or x.get(\"content\", \"\").rstrip().endswith(\"TERMINATE.\"),\n",
" code_execution_config={\"work_dir\": \"web\"},\n",
" code_execution_config={\n",
" \"work_dir\": \"web\",\n",
" \"use_docker\": False,\n",
" }, # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.\n",
" system_message=\"\"\"Reply TERMINATE if the task has been solved at full satisfaction.\n",
"Otherwise, reply CONTINUE, or the reason why the task is not solved yet.\"\"\",\n",
")\n",

View File

@@ -211,7 +211,10 @@
" is_termination_msg=lambda x: x.get(\"content\", \"\") and x.get(\"content\", \"\").rstrip().endswith(\"TERMINATE\"),\n",
" human_input_mode=\"NEVER\",\n",
" max_consecutive_auto_reply=10,\n",
" code_execution_config={\"work_dir\": \"coding\"},\n",
" code_execution_config={\n",
" \"work_dir\": \"coding\",\n",
" \"use_docker\": False,\n",
" }, # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.\n",
")\n",
"\n",
"\n",

View File

@@ -128,7 +128,11 @@
"user_proxy = autogen.UserProxyAgent(\n",
" name=\"User_proxy\",\n",
" system_message=\"A human admin.\",\n",
" code_execution_config={\"last_n_messages\": 2, \"work_dir\": \"groupchat\"},\n",
" code_execution_config={\n",
" \"last_n_messages\": 2,\n",
" \"work_dir\": \"groupchat\",\n",
" \"use_docker\": False,\n",
" }, # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.\n",
" human_input_mode=\"TERMINATE\",\n",
")\n",
"coder = autogen.AssistantAgent(\n",

View File

@@ -146,7 +146,11 @@
" name=\"Executor\",\n",
" system_message=\"Executor. Execute the code written by the engineer and report the result.\",\n",
" human_input_mode=\"NEVER\",\n",
" code_execution_config={\"last_n_messages\": 3, \"work_dir\": \"paper\"},\n",
" code_execution_config={\n",
" \"last_n_messages\": 3,\n",
" \"work_dir\": \"paper\",\n",
" \"use_docker\": False,\n",
" }, # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.\n",
")\n",
"critic = autogen.AssistantAgent(\n",
" name=\"Critic\",\n",

View File

@@ -141,7 +141,11 @@
"user_proxy = autogen.UserProxyAgent(\n",
" name=\"User_proxy\",\n",
" system_message=\"A human admin.\",\n",
" code_execution_config={\"last_n_messages\": 3, \"work_dir\": \"groupchat\"},\n",
" code_execution_config={\n",
" \"last_n_messages\": 3,\n",
" \"work_dir\": \"groupchat\",\n",
" \"use_docker\": False,\n",
" }, # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.\n",
" human_input_mode=\"NEVER\",\n",
")\n",
"coder = autogen.AssistantAgent(\n",

View File

@@ -178,7 +178,10 @@
"user_proxy = UserProxyAgent(\n",
" \"user\",\n",
" human_input_mode=\"TERMINATE\",\n",
" code_execution_config={\"work_dir\": \"coding\"},\n",
" code_execution_config={\n",
" \"work_dir\": \"coding\",\n",
" \"use_docker\": False,\n",
" }, # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.\n",
" is_termination_msg=lambda msg: \"TERMINATE\" in msg.get(\"content\"),\n",
")\n",
"user_proxy.initiate_chat(guidance_agent, message=\"Plot and save a chart of nvidia and tsla stock price change YTD.\")"

View File

@@ -134,6 +134,9 @@
" name=\"user_proxy\",\n",
" human_input_mode=\"ALWAYS\",\n",
" is_termination_msg=lambda x: x.get(\"content\", \"\").rstrip().endswith(\"TERMINATE\"),\n",
" code_execution_config={\n",
" \"use_docker\": False\n",
" }, # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.\n",
")"
]
},

View File

@@ -312,7 +312,10 @@
" is_termination_msg=lambda x: x.get(\"content\", \"\") and x.get(\"content\", \"\").rstrip().endswith(\"TERMINATE\"),\n",
" human_input_mode=\"NEVER\",\n",
" max_consecutive_auto_reply=10,\n",
" code_execution_config={\"work_dir\": \"coding\"},\n",
" code_execution_config={\n",
" \"work_dir\": \"coding\",\n",
" \"use_docker\": False,\n",
" }, # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.\n",
")\n",
"\n",
"# Register the tool and start the conversation\n",
@@ -659,7 +662,10 @@
" is_termination_msg=lambda x: x.get(\"content\", \"\") and x.get(\"content\", \"\").rstrip().endswith(\"TERMINATE\"),\n",
" human_input_mode=\"NEVER\",\n",
" max_consecutive_auto_reply=10,\n",
" code_execution_config={\"work_dir\": \"coding\"},\n",
" code_execution_config={\n",
" \"work_dir\": \"coding\",\n",
" \"use_docker\": False,\n",
" }, # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.\n",
")\n",
"\n",
"print(function_map)\n",

View File

@@ -130,6 +130,9 @@
" system_message=\"A human admin.\",\n",
" human_input_mode=\"NEVER\", # Try between ALWAYS or NEVER\n",
" max_consecutive_auto_reply=0,\n",
" code_execution_config={\n",
" \"use_docker\": False\n",
" }, # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.\n",
")\n",
"\n",
"# Ask the question with an image\n",
@@ -662,7 +665,9 @@
"\n",
"creator = FigureCreator(name=\"Figure Creator~\", llm_config=gpt4_llm_config)\n",
"\n",
"user_proxy = autogen.UserProxyAgent(name=\"User\", human_input_mode=\"NEVER\", max_consecutive_auto_reply=0)\n",
"user_proxy = autogen.UserProxyAgent(\n",
" name=\"User\", human_input_mode=\"NEVER\", max_consecutive_auto_reply=0, code_execution_config={\"use_docker\": False}\n",
")\n",
"\n",
"user_proxy.initiate_chat(\n",
" creator,\n",
@@ -770,6 +775,9 @@
" system_message=\"Ask both image explainer 1 and 2 for their description.\",\n",
" human_input_mode=\"TERMINATE\", # Try between ALWAYS, NEVER, and TERMINATE\n",
" max_consecutive_auto_reply=10,\n",
" code_execution_config={\n",
" \"use_docker\": False\n",
" }, # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.\n",
")\n",
"\n",
"# We set max_round to 5\n",

View File

@@ -283,7 +283,11 @@
"user_proxy = autogen.UserProxyAgent(\n",
" name=\"User_proxy\",\n",
" system_message=\"A human admin.\",\n",
" code_execution_config={\"last_n_messages\": 3, \"work_dir\": \"groupchat\"},\n",
" code_execution_config={\n",
" \"last_n_messages\": 3,\n",
" \"work_dir\": \"groupchat\",\n",
" \"use_docker\": False,\n",
" }, # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.\n",
" human_input_mode=\"NEVER\", # Try between ALWAYS or NEVER\n",
" max_consecutive_auto_reply=0,\n",
")\n",
@@ -412,7 +416,11 @@
" max_consecutive_auto_reply=10,\n",
" system_message=\"Help me run the code, and tell other agents it is in the <img result.jpg> file location.\",\n",
" is_termination_msg=lambda x: x.get(\"content\", \"\").rstrip().endswith(\"TERMINATE\"),\n",
" code_execution_config={\"last_n_messages\": 3, \"work_dir\": \".\", \"use_docker\": False},\n",
" code_execution_config={\n",
" \"last_n_messages\": 3,\n",
" \"work_dir\": \".\",\n",
" \"use_docker\": False,\n",
" }, # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.\n",
" llm_config=self.llm_config,\n",
" )\n",
"\n",
@@ -823,7 +831,9 @@
"\n",
"creator = FigureCreator(name=\"Figure Creator~\", llm_config=gpt4_llm_config)\n",
"\n",
"user_proxy = autogen.UserProxyAgent(name=\"User\", human_input_mode=\"NEVER\", max_consecutive_auto_reply=0)\n",
"user_proxy = autogen.UserProxyAgent(\n",
" name=\"User\", human_input_mode=\"NEVER\", max_consecutive_auto_reply=0, code_execution_config={\"use_docker\": False}\n",
") # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.\n",
"\n",
"user_proxy.initiate_chat(\n",
" creator,\n",

View File

@@ -327,7 +327,7 @@
" is_termination_msg=lambda x: x.get(\"content\", \"\").rstrip().endswith(\"TERMINATE\"),\n",
" code_execution_config={\n",
" \"work_dir\": \"coding\",\n",
" \"use_docker\": False, # set to True or image name like \"python:3\" to use docker\n",
" \"use_docker\": False, # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.\n",
" },\n",
" llm_config=llm_config,\n",
" system_message=\"\"\"Reply TERMINATE if the task has been solved at full satisfaction.\n",
@@ -768,7 +768,7 @@
" is_termination_msg=lambda x: x.get(\"content\", \"\").rstrip().endswith(\"TERMINATE\"),\n",
" code_execution_config={\n",
" \"work_dir\": \"coding\",\n",
" \"use_docker\": False, # set to True or image name like \"python:3\" to use docker\n",
" \"use_docker\": False, # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.\n",
" },\n",
")\n",
"# the assistant receives a message from the user_proxy, which contains the task description\n",

View File

@@ -233,7 +233,10 @@
"source": [
"user_proxy = UserProxyAgent(\n",
" name=\"user_proxy\",\n",
" code_execution_config={\"work_dir\": \"coding\"},\n",
" code_execution_config={\n",
" \"work_dir\": \"coding\",\n",
" \"use_docker\": False,\n",
" }, # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.\n",
" is_termination_msg=lambda msg: \"TERMINATE\" in msg[\"content\"],\n",
" human_input_mode=\"NEVER\",\n",
" max_consecutive_auto_reply=1,\n",

View File

@@ -92,7 +92,11 @@
"user_proxy = autogen.UserProxyAgent(\n",
" name=\"User_proxy\",\n",
" system_message=\"A human admin.\",\n",
" code_execution_config={\"last_n_messages\": 2, \"work_dir\": \"groupchat\"},\n",
" code_execution_config={\n",
" \"last_n_messages\": 2,\n",
" \"work_dir\": \"groupchat\",\n",
" \"use_docker\": False,\n",
" }, # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.\n",
" human_input_mode=\"TERMINATE\",\n",
")\n",
"\n",

View File

@@ -96,7 +96,10 @@
"\n",
"user_proxy = UserProxyAgent(\n",
" name=\"user_proxy\",\n",
" code_execution_config={\"work_dir\": \"coding\"},\n",
" code_execution_config={\n",
" \"work_dir\": \"coding\",\n",
" \"use_docker\": False,\n",
" }, # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.\n",
" is_termination_msg=lambda msg: \"TERMINATE\" in msg[\"content\"],\n",
" human_input_mode=\"NEVER\",\n",
" max_consecutive_auto_reply=1,\n",

View File

@@ -127,7 +127,7 @@
" is_termination_msg=lambda msg: \"TERMINATE\" in msg[\"content\"],\n",
" code_execution_config={\n",
" \"work_dir\": \"coding\",\n",
" \"use_docker\": False, # set to True or image name like \"python:3\" to use docker\n",
" \"use_docker\": False, # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.\n",
" },\n",
" human_input_mode=\"NEVER\",\n",
")\n",

View File

@@ -128,6 +128,9 @@
" name=\"planner_user\",\n",
" max_consecutive_auto_reply=0, # terminate without auto-reply\n",
" human_input_mode=\"NEVER\",\n",
" code_execution_config={\n",
" \"use_docker\": False\n",
" }, # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.\n",
")\n",
"\n",
"\n",
@@ -184,7 +187,10 @@
" human_input_mode=\"TERMINATE\",\n",
" max_consecutive_auto_reply=10,\n",
" # is_termination_msg=lambda x: \"content\" in x and x[\"content\"] is not None and x[\"content\"].rstrip().endswith(\"TERMINATE\"),\n",
" code_execution_config={\"work_dir\": \"planning\"},\n",
" code_execution_config={\n",
" \"work_dir\": \"planning\",\n",
" \"use_docker\": False,\n",
" }, # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.\n",
" function_map={\"ask_planner\": ask_planner},\n",
")"
]

View File

@@ -175,6 +175,9 @@
" human_input_mode=\"NEVER\",\n",
" is_termination_msg=lambda x: True if \"TERMINATE\" in x.get(\"content\") else False,\n",
" max_consecutive_auto_reply=0,\n",
" code_execution_config={\n",
" \"use_docker\": False\n",
" }, # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.\n",
")"
]
},

View File

@@ -330,7 +330,10 @@
"source": [
"user_proxy = UserProxyAgent(\n",
" name=\"user_proxy\",\n",
" code_execution_config={\"work_dir\": \"coding\"},\n",
" code_execution_config={\n",
" \"work_dir\": \"coding\",\n",
" \"use_docker\": False,\n",
" }, # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.\n",
" is_termination_msg=lambda msg: \"TERMINATE\" in msg[\"content\"],\n",
" human_input_mode=\"NEVER\",\n",
" max_consecutive_auto_reply=0,\n",
@@ -587,7 +590,7 @@
" is_termination_msg=lambda msg: \"TERMINATE\" in msg[\"content\"],\n",
" code_execution_config={\n",
" \"work_dir\": \"coding\",\n",
" \"use_docker\": False, # set to True or image name like \"python:3\" to use docker\n",
" \"use_docker\": False, # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.\n",
" },\n",
" human_input_mode=\"NEVER\",\n",
" max_consecutive_auto_reply=0,\n",

View File

@@ -128,7 +128,10 @@
" expert = autogen.UserProxyAgent(\n",
" name=\"expert\",\n",
" human_input_mode=\"ALWAYS\",\n",
" code_execution_config={\"work_dir\": \"expert\"},\n",
" code_execution_config={\n",
" \"work_dir\": \"expert\",\n",
" \"use_docker\": False,\n",
" }, # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.\n",
" )\n",
"\n",
" expert.initiate_chat(assistant_for_expert, message=message)\n",
@ -187,7 +190,10 @@
" name=\"student\",\n",
" human_input_mode=\"TERMINATE\",\n",
" max_consecutive_auto_reply=10,\n",
" code_execution_config={\"work_dir\": \"student\"},\n",
" code_execution_config={\n",
" \"work_dir\": \"student\",\n",
" \"use_docker\": False,\n",
" }, # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.\n",
" function_map={\"ask_expert\": ask_expert},\n",
")"
]

View File

@ -359,7 +359,10 @@
" is_termination_msg=lambda x: x.get(\"content\", \"\") and x.get(\"content\", \"\").rstrip().endswith(\"TERMINATE\"),\n",
" human_input_mode=\"NEVER\",\n",
" max_consecutive_auto_reply=10,\n",
" code_execution_config={\"work_dir\": \"coding_2\"},\n",
" code_execution_config={\n",
" \"work_dir\": \"coding_2\",\n",
" \"use_docker\": False,\n",
" }, # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.\n",
")\n",
"\n",
"user_proxy.register_function(\n",

View File

@ -146,7 +146,10 @@
" human_input_mode=\"TERMINATE\",\n",
" max_consecutive_auto_reply=10,\n",
" is_termination_msg=lambda x: x.get(\"content\", \"\").rstrip().endswith(\"TERMINATE\"),\n",
" code_execution_config={\"work_dir\": \"web\"},\n",
" code_execution_config={\n",
" \"work_dir\": \"web\",\n",
" \"use_docker\": False,\n",
" }, # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.\n",
" llm_config=llm_config,\n",
" system_message=\"\"\"Reply TERMINATE if the task has been solved at full satisfaction.\n",
"Otherwise, reply CONTINUE, or the reason why the task is not solved yet.\"\"\",\n",

View File

@ -134,6 +134,9 @@
" name=\"critic_user\",\n",
" max_consecutive_auto_reply=0, # terminate without auto-reply\n",
" human_input_mode=\"NEVER\",\n",
" code_execution_config={\n",
" \"use_docker\": False\n",
" }, # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.\n",
")\n",
"\n",
"\n",
@ -414,6 +417,9 @@
" name=\"quantifier_user\",\n",
" max_consecutive_auto_reply=0, # terminate without auto-reply\n",
" human_input_mode=\"NEVER\",\n",
" code_execution_config={\n",
" \"use_docker\": False\n",
" }, # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.\n",
")\n",
"\n",
"dictionary_for_eval = open(criteria_file, \"r\").read()"

View File

@ -21,6 +21,7 @@ install_requires = [
"python-dotenv",
"tiktoken",
"pydantic>=1.10,<3", # could be both V1 and V2
"docker",
]
setuptools.setup(
@ -33,10 +34,6 @@ setuptools.setup(
long_description_content_type="text/markdown",
url="https://github.com/microsoft/autogen",
packages=setuptools.find_packages(include=["autogen*"], exclude=["test"]),
# package_data={
# "autogen.default": ["*/*.json"],
# },
# include_package_data=True,
install_requires=install_requires,
extras_require={
"test": [

View File

@ -0,0 +1,118 @@
from autogen import UserProxyAgent
import pytest
from conftest import skip_openai
import os
from autogen.code_utils import (
    is_docker_running,
    in_docker_container,
)

try:
    import openai
except ImportError:
    skip = True
else:
    skip = False or skip_openai


def get_current_autogen_env_var():
    return os.environ.get("AUTOGEN_USE_DOCKER", None)


def restore_autogen_env_var(current_env_value):
    if current_env_value is None:
        del os.environ["AUTOGEN_USE_DOCKER"]
    else:
        os.environ["AUTOGEN_USE_DOCKER"] = current_env_value


def docker_running():
    return is_docker_running() or in_docker_container()


@pytest.mark.skipif(skip, reason="openai not installed")
def test_agent_setup_with_code_execution_off():
    user_proxy = UserProxyAgent(
        name="test_agent",
        human_input_mode="NEVER",
        code_execution_config=False,
    )
    assert user_proxy._code_execution_config is False


@pytest.mark.skipif(skip, reason="openai not installed")
def test_agent_setup_with_use_docker_false():
    user_proxy = UserProxyAgent(
        name="test_agent",
        human_input_mode="NEVER",
        code_execution_config={"use_docker": False},
    )
    assert user_proxy._code_execution_config["use_docker"] is False


@pytest.mark.skipif(skip, reason="openai not installed")
def test_agent_setup_with_env_variable_false_and_docker_running():
    current_env_value = get_current_autogen_env_var()
    os.environ["AUTOGEN_USE_DOCKER"] = "False"
    user_proxy = UserProxyAgent(
        name="test_agent",
        human_input_mode="NEVER",
    )
    assert user_proxy._code_execution_config["use_docker"] is False
    restore_autogen_env_var(current_env_value)


@pytest.mark.skipif(skip or (not docker_running()), reason="openai not installed OR docker not running")
def test_agent_setup_with_default_and_docker_running():
    user_proxy = UserProxyAgent(
        name="test_agent",
        human_input_mode="NEVER",
    )
    assert user_proxy._code_execution_config["use_docker"] is True


@pytest.mark.skipif(skip or (docker_running()), reason="openai not installed OR docker running")
def test_raises_error_agent_setup_with_default_and_docker_not_running():
    with pytest.raises(RuntimeError):
        UserProxyAgent(
            name="test_agent",
            human_input_mode="NEVER",
        )


@pytest.mark.skipif(skip or (docker_running()), reason="openai not installed OR docker running")
def test_raises_error_agent_setup_with_env_variable_true_and_docker_not_running():
    current_env_value = get_current_autogen_env_var()
    os.environ["AUTOGEN_USE_DOCKER"] = "True"
    with pytest.raises(RuntimeError):
        UserProxyAgent(
            name="test_agent",
            human_input_mode="NEVER",
        )
    restore_autogen_env_var(current_env_value)


@pytest.mark.skipif(skip or (not docker_running()), reason="openai not installed OR docker not running")
def test_agent_setup_with_env_variable_true_and_docker_running():
    current_env_value = get_current_autogen_env_var()
    os.environ["AUTOGEN_USE_DOCKER"] = "True"
    user_proxy = UserProxyAgent(
        name="test_agent",
        human_input_mode="NEVER",
    )
    assert user_proxy._code_execution_config["use_docker"] is True
    restore_autogen_env_var(current_env_value)
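For context, the two docker detection helpers imported at the top of this test file are not shown in this diff. A plausible sketch (the real implementations in `autogen.code_utils` may differ; the `/.dockerenv` heuristic matches the check used elsewhere in this commit's tests):

```python
import os


def in_docker_container():
    # Heuristic: docker creates /.dockerenv inside containers (the same
    # check appears in the code_utils tests in this commit).
    return os.path.exists("/.dockerenv")


def is_docker_running():
    # Returns False when the `docker` python package is missing or the
    # docker daemon is unreachable.
    try:
        import docker

        return docker.from_env().ping()
    except Exception:
        return False
```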

View File

@ -16,6 +16,10 @@ from autogen.code_utils import (
    improve_code,
    improve_function,
    infer_lang,
    is_docker_running,
    in_docker_container,
    decide_use_docker,
    check_can_use_docker_or_throw,
)
KEY_LOC = "notebook"
@ -309,6 +313,10 @@ def scrape(url):
assert len(codeblocks) == 1 and codeblocks[0] == ("", "source setup.sh")
# skip if os is windows
@pytest.mark.skipif(
    sys.platform in ["win32"] or (not is_docker_running() and not in_docker_container()), reason="docker is not running"
)
def test_execute_code(use_docker=None):
    try:
        import docker
@ -350,14 +358,18 @@ def test_execute_code(use_docker=None):
    assert isinstance(image, str) or docker is None or os.path.exists("/.dockerenv") or use_docker is False
@pytest.mark.skipif(docker_package_installed is False, reason="docker package not installed")
@pytest.mark.skipif(
    sys.platform in ["win32"] or (not is_docker_running() and not in_docker_container()), reason="docker is not running"
)
def test_execute_code_with_custom_filename_on_docker():
    exit_code, msg, image = execute_code("print('hello world')", filename="tmp/codetest.py", use_docker=True)
    assert exit_code == 0 and msg == "hello world\n", msg
    assert image == "python:tmp_codetest.py"


@pytest.mark.skipif(docker_package_installed is False, reason="docker package not installed")
@pytest.mark.skipif(
    sys.platform in ["win32"] or (not is_docker_running() and not in_docker_container()), reason="docker is not running"
)
def test_execute_code_with_misformed_filename_on_docker():
    exit_code, msg, image = execute_code(
        "print('hello world')", filename="tmp/codetest.py (some extra information)", use_docker=True
@ -382,6 +394,89 @@ def test_execute_code_no_docker():
    assert image is None
def get_current_autogen_env_var():
    return os.environ.get("AUTOGEN_USE_DOCKER", None)


def restore_autogen_env_var(current_env_value):
    if current_env_value is None:
        del os.environ["AUTOGEN_USE_DOCKER"]
    else:
        os.environ["AUTOGEN_USE_DOCKER"] = current_env_value


def test_decide_use_docker_truthy_values():
    current_env_value = get_current_autogen_env_var()
    for truthy_value in ["1", "true", "yes", "t"]:
        os.environ["AUTOGEN_USE_DOCKER"] = truthy_value
        assert decide_use_docker(None) is True
    restore_autogen_env_var(current_env_value)


def test_decide_use_docker_falsy_values():
    current_env_value = get_current_autogen_env_var()
    for falsy_value in ["0", "false", "no", "f"]:
        os.environ["AUTOGEN_USE_DOCKER"] = falsy_value
        assert decide_use_docker(None) is False
    restore_autogen_env_var(current_env_value)


def test_decide_use_docker():
    current_env_value = get_current_autogen_env_var()
    os.environ["AUTOGEN_USE_DOCKER"] = "none"
    assert decide_use_docker(None) is None
    os.environ["AUTOGEN_USE_DOCKER"] = "invalid"
    with pytest.raises(ValueError):
        decide_use_docker(None)
    restore_autogen_env_var(current_env_value)


def test_decide_use_docker_with_env_var():
    current_env_value = get_current_autogen_env_var()
    os.environ["AUTOGEN_USE_DOCKER"] = "false"
    assert decide_use_docker(None) is False
    os.environ["AUTOGEN_USE_DOCKER"] = "true"
    assert decide_use_docker(None) is True
    os.environ["AUTOGEN_USE_DOCKER"] = "none"
    assert decide_use_docker(None) is None
    os.environ["AUTOGEN_USE_DOCKER"] = "invalid"
    with pytest.raises(ValueError):
        decide_use_docker(None)
    restore_autogen_env_var(current_env_value)


def test_decide_use_docker_with_env_var_and_argument():
    current_env_value = get_current_autogen_env_var()
    os.environ["AUTOGEN_USE_DOCKER"] = "false"
    assert decide_use_docker(True) is True
    os.environ["AUTOGEN_USE_DOCKER"] = "true"
    assert decide_use_docker(False) is False
    os.environ["AUTOGEN_USE_DOCKER"] = "none"
    assert decide_use_docker(True) is True
    os.environ["AUTOGEN_USE_DOCKER"] = "invalid"
    assert decide_use_docker(True) is True
    restore_autogen_env_var(current_env_value)


def test_can_use_docker_or_throw():
    check_can_use_docker_or_throw(None)
    if not is_docker_running() and not in_docker_container():
        check_can_use_docker_or_throw(False)
    if not is_docker_running() and not in_docker_container():
        with pytest.raises(RuntimeError):
            check_can_use_docker_or_throw(True)


def _test_improve():
    try:
        import openai

View File

@ -5,5 +5,7 @@ from autogen import AssistantAgent, UserProxyAgent, config_list_from_json
# and OAI_CONFIG_LIST_sample
config_list = config_list_from_json(env_or_file="OAI_CONFIG_LIST")
assistant = AssistantAgent("assistant", llm_config={"config_list": config_list})
user_proxy = UserProxyAgent("user_proxy", code_execution_config={"work_dir": "coding"})
user_proxy = UserProxyAgent(
"user_proxy", code_execution_config={"work_dir": "coding", "use_docker": False}
)  # IMPORTANT: set use_docker=True to run the generated code in docker; recommended for safety
user_proxy.initiate_chat(assistant, message="Plot a chart of NVDA and TESLA stock price change YTD.")

View File

@ -1,5 +1,23 @@
# Frequently Asked Questions
- [Set your API endpoints](#set-your-api-endpoints)
- [Use the constructed configuration list in agents](#use-the-constructed-configuration-list-in-agents)
- [Unexpected keyword argument 'base_url'](#unexpected-keyword-argument-base_url)
- [Can I use non-OpenAI models?](#can-i-use-non-openai-models)
- [Handle Rate Limit Error and Timeout Error](#handle-rate-limit-error-and-timeout-error)
- [How to continue a finished conversation](#how-to-continue-a-finished-conversation)
- [How do we decide what LLM is used for each agent? How many agents can be used? How do we decide how many agents in the group?](#how-do-we-decide-what-llm-is-used-for-each-agent-how-many-agents-can-be-used-how-do-we-decide-how-many-agents-in-the-group)
- [Why is code not saved as file?](#why-is-code-not-saved-as-file)
- [Code execution](#code-execution)
- [Enable Python 3 docker image](#enable-python-3-docker-image)
- [Agents keep thanking each other when using `gpt-3.5-turbo`](#agents-keep-thanking-each-other-when-using-gpt-35-turbo)
- [ChromaDB fails in codespaces because of old version of sqlite3](#chromadb-fails-in-codespaces-because-of-old-version-of-sqlite3)
- [How to register a reply function](#how-to-register-a-reply-function)
- [How to get last message?](#how-to-get-last-message)
- [How to get each agent message?](#how-to-get-each-agent-message)
- [When using autogen docker, is it always necessary to reinstall modules?](#when-using-autogen-docker-is-it-always-necessary-to-reinstall-modules)
- [Agents are throwing due to docker not running, how can I resolve this?](#agents-are-throwing-due-to-docker-not-running-how-can-i-resolve-this)
## Set your API endpoints
There are multiple ways to construct configurations for LLM inference in the `oai` utilities:
@ -72,7 +90,7 @@ The `AssistantAgent` doesn't save all the code by default, because there are cas
We strongly recommend using docker to execute code. There are two ways to use docker:
1. Run AutoGen in a docker container. For example, when developing in [GitHub codespace](https://codespaces.new/microsoft/autogen?quickstart=1), AutoGen runs in a docker container. If you are not developing in GitHub codespace, follow instructions [here](Installation.md#option-1-install-and-run-autogen-in-docker) to install and run AutoGen in docker.
2. Run AutoGen outside of a docker, while performing code execution with a docker container. For this option, set up docker and make sure the python package `docker` is installed. When not installed and `use_docker` is omitted in `code_execution_config`, the code will be executed locally (this behavior is subject to change in future).
2. Run AutoGen outside of docker, while performing code execution with a docker container. For this option, make sure docker is up and running. If you want to run the code locally (not recommended), set `use_docker` to `False` in `code_execution_config` for each code-execution agent, or set the `AUTOGEN_USE_DOCKER` environment variable to `False`.
### Enable Python 3 docker image
@ -173,3 +191,11 @@ Please refer to https://microsoft.github.io/autogen/docs/reference/agentchat/con
The "use_docker" arg in an agent's code_execution_config will be set to the name of the image containing the change after execution, when the conversation finishes.
You can save that image name. For a new conversation, you can set "use_docker" to the saved name of the image to start execution there.
## Agents are throwing due to docker not running, how can I resolve this?
When running AutoGen locally, the default for code-executing agents is to attempt code execution inside a docker container. If docker is not running, the agents will throw an error. To resolve this, you have the following options:
- **Recommended**: Make sure docker is up and running.
- If you want to run the code locally, set `use_docker` to `False` in `code_execution_config` for each code-execution agent.
- If you want to run the code locally for all code-execution agents, set the `AUTOGEN_USE_DOCKER` environment variable to `False`.
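As a rough sketch of how the `AUTOGEN_USE_DOCKER` environment variable could be interpreted (this mirrors the behavior exercised by the `decide_use_docker` unit tests added in this commit; the real implementation lives in `autogen.code_utils` and may differ):

```python
import os


def decide_use_docker(use_docker):
    """Sketch: an explicit use_docker argument always wins over the
    environment variable; otherwise AUTOGEN_USE_DOCKER is parsed."""
    if use_docker is not None:
        return use_docker
    env_value = os.environ.get("AUTOGEN_USE_DOCKER")
    if env_value is None:
        return True  # default in this commit: use docker
    env_value = env_value.lower()
    if env_value in ("1", "true", "yes", "t"):
        return True
    if env_value in ("0", "false", "no", "f"):
        return False
    if env_value == "none":
        return None
    raise ValueError(f"Invalid value for AUTOGEN_USE_DOCKER: {env_value}")


os.environ["AUTOGEN_USE_DOCKER"] = "false"
print(decide_use_docker(None))  # -> False (from the env var)
print(decide_use_docker(True))  # -> True (explicit argument overrides)
```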

View File

@ -32,7 +32,7 @@ from autogen import AssistantAgent, UserProxyAgent, config_list_from_json
# and OAI_CONFIG_LIST_sample.json
config_list = config_list_from_json(env_or_file="OAI_CONFIG_LIST")
assistant = AssistantAgent("assistant", llm_config={"config_list": config_list})
user_proxy = UserProxyAgent("user_proxy", code_execution_config={"work_dir": "coding"})
user_proxy = UserProxyAgent("user_proxy", code_execution_config={"work_dir": "coding", "use_docker": False})  # IMPORTANT: set use_docker=True to run the generated code in docker; recommended for safety
user_proxy.initiate_chat(assistant, message="Plot a chart of NVDA and TESLA stock price change YTD.")
# This initiates an automated chat between the two agents to solve the task
```

View File

@ -94,7 +94,9 @@ docker run -it -p {WorkstationPortNum}:{DockerPortNum} -v {WorkStation_Dir}:{Doc
When installing AutoGen locally, we recommend using a virtual environment for the installation. This will ensure that the dependencies for AutoGen are isolated from the rest of your system.
### Option a: venv
### Set up a virtual environment
#### Option a: venv
You can create a virtual environment with `venv` as below:
@ -109,7 +111,7 @@ The following command will deactivate the current `venv` environment:
deactivate
```
### Option b: conda
#### Option b: conda
Another option is with `Conda`. You can install it by following [this doc](https://docs.conda.io/projects/conda/en/stable/user-guide/install/index.html),
and then create a virtual environment as below:
@ -125,7 +127,7 @@ The following command will deactivate the current `conda` environment:
conda deactivate
```
### Option c: poetry
#### Option c: poetry
Another option is with `poetry`, which is a dependency manager for Python.
@ -149,7 +151,7 @@ exit
Now, you're ready to install AutoGen in the virtual environment you've just created.
## Python
### Python requirements
AutoGen requires **Python version >= 3.8, < 3.13**. It can be installed from pip:
@ -159,11 +161,27 @@ pip install pyautogen
`pyautogen<0.2` requires `openai<1`. Starting from pyautogen v0.2, `openai>=1` is required.
<!--
or conda:
### Code execution with Docker (default)
Even if you install AutoGen locally, we highly recommend using Docker for [code execution](FAQ.md#code-execution).
By default, code-execution agents perform code execution inside a docker container.
**To turn this off**: if you want to run the code locally (not recommended), set `use_docker` to `False` in `code_execution_config` for each code-execution agent, or set the `AUTOGEN_USE_DOCKER` environment variable to `False`.
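A minimal sketch of the two opt-out mechanisms (the agent construction is commented out because it requires a configured LLM; `coding` is just an example working directory):

```python
import os

# Option 1: per-agent opt-out via code_execution_config
code_execution_config = {"work_dir": "coding", "use_docker": False}
# user_proxy = autogen.UserProxyAgent("user_proxy", code_execution_config=code_execution_config)

# Option 2: global opt-out via environment variable, set before any agent is created
os.environ["AUTOGEN_USE_DOCKER"] = "False"
```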
You might want to override the default docker image used for code execution. To do that, set the `use_docker` key of the `code_execution_config` property to the name of the image. For example:
```python
user_proxy = autogen.UserProxyAgent(
    name="agent",
    human_input_mode="TERMINATE",
    max_consecutive_auto_reply=10,
    code_execution_config={"work_dir": "_output", "use_docker": "python:3"},
    llm_config=llm_config,
    system_message="""Reply TERMINATE if the task has been solved at full satisfaction.
Otherwise, reply CONTINUE, or the reason why the task is not solved yet.""",
)
```
conda install pyautogen -c conda-forge
``` -->
### Migration guide to v0.2
@ -187,32 +205,10 @@ Inference parameter tuning can be done via [`flaml.tune`](https://microsoft.gith
- autogen uses local disk cache to guarantee the exactly same output is produced for the same input and when cache is hit, no openai api call will be made.
- openai's `seed` is a best-effort deterministic sampling with no guarantee of determinism. When using openai's `seed` with `cache_seed` set to None, even for the same input, an openai api call will be made and there is no guarantee for getting exactly the same output.
## Other Installation Options
### Optional Dependencies
- #### Docker
Even if you install AutoGen locally, we highly recommend using Docker for [code execution](FAQ.md#enable-python-3-docker-image).
To use docker for code execution, you also need to install the python package `docker`:
```bash
pip install docker
```
You might want to override the default docker image used for code execution. To do that set `use_docker` key of `code_execution_config` property to the name of the image. E.g.:
```python
user_proxy = autogen.UserProxyAgent(
    name="agent",
    human_input_mode="TERMINATE",
    max_consecutive_auto_reply=10,
    code_execution_config={"work_dir": "_output", "use_docker": "python:3"},
    llm_config=llm_config,
    system_message="""Reply TERMINATE if the task has been solved at full satisfaction.
Otherwise, reply CONTINUE, or the reason why the task is not solved yet.""",
)
```
- #### blendsearch
`pyautogen<0.2` offers a cost-effective hyperparameter optimization technique [EcoOptiGen](https://arxiv.org/abs/2303.04673) for tuning Large Language Models. Please install with the [blendsearch] option to use it.