# Optional Dependencies

## LLM Caching

To use LLM caching with Redis, you need to install the Python package with the option `redis`:

```bash
pip install "pyautogen[redis]"
```

See [LLM Caching](Use-Cases/agent_chat.md#llm-caching) for details.

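As a rough sketch of what this looks like in practice, the example below wraps an agent chat in a Redis-backed cache using the `Cache.redis` context manager described in the LLM Caching guide. The Redis URL is a placeholder for your own server, and `llm_config` is assumed to be defined as in the later examples on this page.

```python
from autogen import AssistantAgent, UserProxyAgent, Cache

assistant = AssistantAgent(name="assistant", llm_config=llm_config)
user_proxy = UserProxyAgent(name="user_proxy", human_input_mode="NEVER", code_execution_config=False)

# Completions from this chat are cached in Redis instead of on local disk.
with Cache.redis(redis_url="redis://localhost:6379/0") as cache:
    user_proxy.initiate_chat(assistant, message="What is 123 multiplied by 456?", cache=cache)
```
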
## Docker

Even if you install AutoGen locally, we highly recommend using Docker for [code execution](FAQ.md#enable-python-3-docker-image).

To use Docker for code execution, you also need to install the Python package `docker`:

```bash
pip install docker
```

You might want to override the default Docker image used for code execution. To do that, set the `use_docker` key of the `code_execution_config` property to the name of the image. For example:

```python
user_proxy = autogen.UserProxyAgent(
    name="agent",
    human_input_mode="TERMINATE",
    max_consecutive_auto_reply=10,
    # Run generated code inside the python:3 image instead of the default one.
    code_execution_config={"work_dir": "_output", "use_docker": "python:3"},
    llm_config=llm_config,
    system_message="""Reply TERMINATE if the task has been solved at full satisfaction.
Otherwise, reply CONTINUE, or the reason why the task is not solved yet.""",
)
```

## blendsearch

`pyautogen<0.2` offers a cost-effective hyperparameter optimization technique [EcoOptiGen](https://arxiv.org/abs/2303.04673) for tuning Large Language Models. Please install with the [blendsearch] option to use it.

```bash
pip install "pyautogen[blendsearch]<0.2"
```
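
For orientation, the sketch below follows the tuning pattern used in the notebooks linked after it: you supply tuning data and an evaluation function, and `autogen.Completion.tune` searches over inference hyperparameters under a cost budget. The data, prompt, and evaluation function here are placeholders; see the notebooks for complete, working configurations.

```python
import autogen  # requires pyautogen<0.2

config_list = autogen.config_list_from_json("OAI_CONFIG_LIST")

# Placeholder tuning data and evaluation function; replace with your own task.
tune_data = [{"problem": "What is 2 + 2?", "solution": "4"}]

def eval_func(responses, **data):
    # Count a response as successful if it contains the reference solution.
    return {"success": any(data["solution"] in r for r in responses)}

config, analysis = autogen.Completion.tune(
    data=tune_data,
    metric="success",
    mode="max",
    eval_func=eval_func,
    prompt="{problem}",
    config_list=config_list,
    allow_format_str_template=True,
    inference_budget=0.05,   # target average inference cost per instance (USD)
    optimization_budget=1,   # total budget allowed for the tuning itself (USD)
    num_samples=-1,          # -1: let the optimization budget decide how many configs to try
)
print(config)  # the best inference hyperparameters found
```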

Example notebooks:

[Optimize for Code Generation](https://github.com/microsoft/autogen/blob/main/notebook/oai_completion.ipynb)

[Optimize for Math](https://github.com/microsoft/autogen/blob/main/notebook/oai_chatgpt_gpt4.ipynb)

## retrievechat

`pyautogen` supports retrieval-augmented generation tasks such as question answering and code generation with RAG agents. Please install with the [retrievechat] option to use it.

```bash
pip install "pyautogen[retrievechat]"
```
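
For orientation, the sketch below shows the typical pairing of a `RetrieveAssistantAgent` with a `RetrieveUserProxyAgent`, as in the notebooks linked at the end of this section. The `docs_path`, question, and `llm_config` are placeholders, and `retrieve_config` accepts many more options than shown here.

```python
from autogen.agentchat.contrib.retrieve_assistant_agent import RetrieveAssistantAgent
from autogen.agentchat.contrib.retrieve_user_proxy_agent import RetrieveUserProxyAgent

assistant = RetrieveAssistantAgent(name="assistant", llm_config=llm_config)

ragproxyagent = RetrieveUserProxyAgent(
    name="ragproxyagent",
    human_input_mode="NEVER",
    retrieve_config={
        "task": "qa",
        "docs_path": "path/to/your/docs",  # placeholder: file(s) or directory to index
    },
)

# The proxy agent retrieves relevant chunks and prepends them to the question.
ragproxyagent.initiate_chat(assistant, problem="What does this codebase do?")
```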

RetrieveChat can handle various types of documents. By default, it can process
plain text and PDF files, including formats such as 'txt', 'json', 'csv', 'tsv',
'md', 'html', 'htm', 'rtf', 'rst', 'jsonl', 'log', 'xml', 'yaml', 'yml' and 'pdf'.
If you install [unstructured](https://unstructured-io.github.io/unstructured/installation/full_installation.html)
(`pip install "unstructured[all-docs]"`), additional document types such as 'docx',
'doc', 'odt', 'pptx', 'ppt', 'xlsx', 'eml', 'msg', 'epub' will also be supported.

You can find a list of all supported document types by using `autogen.retrieve_utils.TEXT_FORMATS`.
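
For example:

```python
from autogen.retrieve_utils import TEXT_FORMATS

print(TEXT_FORMATS)  # e.g. ['txt', 'json', 'csv', 'tsv', 'md', ...]
```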

Example notebooks:

[Automated Code Generation and Question Answering with Retrieval Augmented Agents](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_RetrieveChat.ipynb)

[Group Chat with Retrieval Augmented Generation (with 5 group member agents and 1 manager agent)](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_groupchat_RAG.ipynb)

[Automated Code Generation and Question Answering with Qdrant based Retrieval Augmented Agents](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_qdrant_RetrieveChat.ipynb)

## Teachability

To use Teachability, please install AutoGen with the [teachable] option.

```bash
pip install "pyautogen[teachable]"
```
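
As a minimal sketch, assuming the `TeachableAgent` class from `autogen.agentchat.contrib.teachable_agent` used in the linked notebook (the configuration values and message below are illustrative):

```python
import autogen
from autogen.agentchat.contrib.teachable_agent import TeachableAgent

teachable_agent = TeachableAgent(
    name="teachableagent",
    llm_config=llm_config,
    teach_config={
        "reset_db": False,  # keep memories from previous sessions
        "path_to_db_dir": "./tmp/teachable_agent_db",  # where learned facts are stored
    },
)

user = autogen.UserProxyAgent(
    name="user",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=0,
    code_execution_config=False,
)
user.initiate_chat(teachable_agent, message="My favorite color is teal. Please remember that.")
teachable_agent.learn_from_user_feedback()  # persist anything new learned in this chat
```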

Example notebook: [Chatting with a teachable agent](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_teachability.ipynb)

## Large Multimodal Model (LMM) Agents

We offer the Multimodal Conversable Agent and the LLaVA Agent. Please install with the [lmm] option to use them.

```bash
pip install "pyautogen[lmm]"
```
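
As a minimal sketch of the Multimodal Conversable Agent, the snippet below sends a message containing an inline image tag; `config_list_4v` is assumed to hold the configuration for a vision-capable model, and the image URL is a placeholder:

```python
import autogen
from autogen.agentchat.contrib.multimodal_conversable_agent import MultimodalConversableAgent

image_agent = MultimodalConversableAgent(
    name="image-explainer",
    llm_config={"config_list": config_list_4v, "max_tokens": 300},  # vision-capable model config
)

user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=0,
    code_execution_config=False,
)

# Images are referenced inline with an <img ...> tag in the message text.
user_proxy.initiate_chat(
    image_agent,
    message="What is shown in this image? <img https://example.com/some-image.png>",
)
```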

Example notebooks:

[LLaVA Agent](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_lmm_llava.ipynb)

## mathchat

`pyautogen<0.2` offers an experimental agent for math problem solving. Please install with the [mathchat] option to use it.

```bash
pip install "pyautogen[mathchat]<0.2"
```
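
As a brief sketch, MathChat pairs a standard `AssistantAgent` with a `MathUserProxyAgent` from `autogen.agentchat.contrib.math_user_proxy_agent`, as in the notebook below; the problem text is a placeholder and `llm_config` is assumed to be defined as in the earlier examples:

```python
import autogen
from autogen.agentchat.contrib.math_user_proxy_agent import MathUserProxyAgent

assistant = autogen.AssistantAgent(name="assistant", llm_config=llm_config)

# The math proxy agent runs the queries (e.g. Python) proposed by the assistant.
mathproxyagent = MathUserProxyAgent(
    name="mathproxyagent",
    human_input_mode="NEVER",
    code_execution_config={"use_docker": False},
)

mathproxyagent.initiate_chat(assistant, problem="Find all x that satisfy 2x + 5 < 11.")
```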

Example notebooks:

[Using MathChat to Solve Math Problems](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_MathChat.ipynb)