# Installation

## Setup Virtual Environment

When not using a docker container, we recommend using a virtual environment to install AutoGen. This ensures that AutoGen's dependencies are isolated from the rest of your system.

### Option 1: venv

You can create a virtual environment with `venv` as below:

```bash
python3 -m venv pyautogen
source pyautogen/bin/activate
```

The following command will deactivate the current `venv` environment:

```bash
deactivate
```

### Option 2: conda

Another option is `conda`, which is better at resolving dependency conflicts than pip. You can install it by following [this doc](https://docs.conda.io/projects/conda/en/stable/user-guide/install/index.html), and then create a virtual environment as below:

```bash
conda create -n pyautogen python=3.10  # python 3.10 is recommended as it's stable and not too old
conda activate pyautogen
```

The following command will deactivate the current `conda` environment:

```bash
conda deactivate
```

### Option 3: poetry

Another option is [Poetry](https://python-poetry.org/docs/), a tool for dependency management and packaging in Python. It allows you to declare the libraries your project depends on and will manage (install/update) them for you. Poetry offers a lockfile to ensure repeatable installs and can build your project for distribution. You can install it by following [this doc](https://python-poetry.org/docs/#installation), and then create a virtual environment as below:

```bash
poetry init
poetry shell
poetry add pyautogen
```

The following command will deactivate the current `poetry` environment:

```bash
exit
```

Now, you're ready to install AutoGen in the virtual environment you've just created.

## Python

AutoGen requires **Python version >= 3.8, < 3.12**. It can be installed from pip:

```bash
pip install pyautogen
```

`pyautogen<0.2` requires `openai<1`. Starting from pyautogen v0.2, `openai>=1` is required.

### Migration guide to v0.2

openai v1 is a total rewrite of the library with many breaking changes. For example, inference now requires instantiating a client instead of calling a global class method. Therefore, some changes are required for users of `pyautogen<0.2`.

- `api_base` -> `base_url`, `request_timeout` -> `timeout` in `llm_config` and `config_list`. `max_retry_period` and `retry_wait_time` are deprecated. `max_retries` can be set for each client.
- MathChat is unsupported until it is tested in a future release.
- `autogen.Completion` and `autogen.ChatCompletion` are deprecated. The essential functionalities are moved to `autogen.OpenAIWrapper`:

  ```python
  from autogen import OpenAIWrapper

  client = OpenAIWrapper(config_list=config_list)
  response = client.create(messages=[{"role": "user", "content": "2+2="}])
  print(client.extract_text_or_completion_object(response))
  ```

- Inference parameter tuning and inference logging features are currently unavailable in `OpenAIWrapper`. Logging will be added in a future release. Inference parameter tuning can be done via [`flaml.tune`](https://microsoft.github.io/FLAML/docs/Use-Cases/Tune-User-Defined-Function).
- `seed` in autogen is renamed to `cache_seed` to accommodate the newly added `seed` param in the openai chat completion api. `use_cache` is removed as a kwarg in `OpenAIWrapper.create()`; caching is now decided automatically by `cache_seed` (`int | None`). The difference between autogen's `cache_seed` and openai's `seed`, illustrated in the sketch after this list, is that:
  - autogen uses a local disk cache to guarantee that exactly the same output is produced for the same input; when the cache is hit, no openai api call is made.
  - openai's `seed` is best-effort deterministic sampling with no guarantee of determinism. When using openai's `seed` with `cache_seed` set to `None`, an openai api call is made even for the same input, and there is no guarantee of getting exactly the same output.
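As a minimal sketch of the caching behavior (assuming the same `config_list` as in the snippet above; passing `cache_seed` per call to `OpenAIWrapper.create()` follows the v0.2 API):

```python
from autogen import OpenAIWrapper

client = OpenAIWrapper(config_list=config_list)

# With an integer cache_seed, responses are cached on local disk: repeating the
# same request returns the cached output and makes no openai api call.
response = client.create(messages=[{"role": "user", "content": "2+2="}], cache_seed=41)

# With cache_seed=None, caching is disabled: every call reaches the openai api,
# and identical inputs may produce different outputs even if openai's `seed` is set.
response = client.create(messages=[{"role": "user", "content": "2+2="}], cache_seed=None)
```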
### Optional Dependencies

- #### docker

For the best user experience and seamless code execution, we highly recommend using Docker with AutoGen. Docker is a containerization platform that simplifies the setup and execution of your code. Developing in a docker container, such as a GitHub Codespace, also makes development convenient.

When running AutoGen outside a docker container, to use docker for code execution you also need to install the python package `docker`:

```bash
pip install docker
```

- #### blendsearch

`pyautogen<0.2` offers a cost-effective hyperparameter optimization technique, [EcoOptiGen](https://arxiv.org/abs/2303.04673), for tuning Large Language Models. Please install with the [blendsearch] option to use it.

```bash
pip install "pyautogen[blendsearch]<0.2"
```

Example notebooks:

[Optimize for Code Generation](https://github.com/microsoft/autogen/blob/main/notebook/oai_completion.ipynb)

[Optimize for Math](https://github.com/microsoft/autogen/blob/main/notebook/oai_chatgpt_gpt4.ipynb)

- #### retrievechat

`pyautogen` supports retrieval-augmented generation tasks such as question answering and code generation with RAG agents. Please install with the [retrievechat] option to use it.

```bash
pip install "pyautogen[retrievechat]"
```

RetrieveChat can handle various types of documents. By default, it can process plain text and PDF files, including formats such as 'txt', 'json', 'csv', 'tsv', 'md', 'html', 'htm', 'rtf', 'rst', 'jsonl', 'log', 'xml', 'yaml', 'yml' and 'pdf'. If you install [unstructured](https://unstructured-io.github.io/unstructured/installation/full_installation.html) (`pip install "unstructured[all-docs]"`), additional document types such as 'docx', 'doc', 'odt', 'pptx', 'ppt', 'xlsx', 'eml', 'msg', 'epub' will also be supported. You can find a list of all supported document types by using `autogen.retrieve_utils.TEXT_FORMATS`, as in the sketch below.
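For example, to check which formats your installation supports:

```python
from autogen.retrieve_utils import TEXT_FORMATS

# Prints the file extensions RetrieveChat can ingest, e.g. ['txt', 'json', 'csv', ...];
# the list grows if `unstructured` is installed.
print(TEXT_FORMATS)
```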
Example notebooks:

[Automated Code Generation and Question Answering with Retrieval Augmented Agents](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_RetrieveChat.ipynb)

[Group Chat with Retrieval Augmented Generation (with 5 group member agents and 1 manager agent)](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_groupchat_RAG.ipynb)

[Automated Code Generation and Question Answering with Qdrant based Retrieval Augmented Agents](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_qdrant_RetrieveChat.ipynb)

- #### TeachableAgent

To use TeachableAgent, please install AutoGen with the [teachable] option.

```bash
pip install "pyautogen[teachable]"
```

Example notebook: [Chatting with TeachableAgent](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_teachability.ipynb)

- #### Large Multimodal Model (LMM) Agents

We offer the Multimodal Conversable Agent and the LLaVA Agent. Please install with the [lmm] option to use them.

```bash
pip install "pyautogen[lmm]"
```

Example notebook: [LLaVA Agent](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_lmm_llava.ipynb)

- #### mathchat

`pyautogen<0.2` offers an experimental agent for math problem solving. Please install with the [mathchat] option to use it.

```bash
pip install "pyautogen[mathchat]<0.2"
```

Example notebook: [Using MathChat to Solve Math Problems](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_MathChat.ipynb)
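The linked notebook drives MathChat roughly as below. This is a sketch only, for `pyautogen<0.2`; the exact imports and signatures should be taken from the notebook, and the `OAI_CONFIG_LIST` file is an assumed configuration source.

```python
import autogen
from autogen.agentchat.contrib.math_user_proxy_agent import MathUserProxyAgent

# Assumption: model endpoints are configured in an OAI_CONFIG_LIST file, as in the notebook.
config_list = autogen.config_list_from_json("OAI_CONFIG_LIST")

assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config={"config_list": config_list},
)
mathproxyagent = MathUserProxyAgent(
    name="mathproxyagent",
    human_input_mode="NEVER",
)

# The math proxy agent wraps the problem in a MathChat prompt and starts the conversation.
mathproxyagent.initiate_chat(assistant, problem="Find all x that satisfy 2x + 3 = 7.")
```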