# Distributed Group Chat
This example runs a gRPC server using `WorkerAgentRuntimeHost` and instantiates three distributed runtimes using `WorkerAgentRuntime`. These runtimes connect to the gRPC server as hosts and facilitate a round-robin distributed group chat. This example leverages the Azure OpenAI Service to implement writer and editor LLM agents. The agents are instructed to provide concise answers, as the primary goal of this example is to showcase the distributed runtime rather than the quality of agent responses.
## Setup

### Setup Python Environment

1. Create a virtual environment as instructed in the README.
2. Run `uv pip install chainlit` in the same virtual environment.
### General Configuration

In the `config.yaml` file, you can configure the `client_config` section to connect the code to the Azure OpenAI Service.
### Authentication

The recommended method for authentication is through Azure Active Directory (AAD), as explained in Model Clients - Azure AI. This example works both with the AAD approach (recommended) and by providing the `api_key` in the `config.yaml` file.
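For reference, the `client_config` section typically looks something like the following sketch. The field names here are illustrative assumptions; check the `config.yaml` shipped with the example for the exact schema:

```yaml
client_config:
  model: gpt-4o                                        # deployed model name (assumed)
  azure_endpoint: https://<your-resource>.openai.azure.com
  azure_deployment: <your-deployment-name>
  api_version: 2024-06-01
  # api_key: <only needed when not using AAD authentication>
```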
## Run

### Run Through Scripts

The `run.sh` file provides commands to run the host and agents using tmux. The steps for this approach are:

1. Install tmux.
2. Activate the Python environment: `source .venv/bin/activate`.
3. Run the bash script: `./run.sh`.
Here is a screen recording of the execution:
Note: Some `asyncio.sleep` calls have been added to the example code to make the `./run.sh` execution look sequential and visually easy to follow. In practice, these lines are not necessary.
### Run Individual Files

If you prefer to run the Python files individually, follow these steps. Note that each step must be run in a different terminal process, and the virtual environment should first be activated using `source .venv/bin/activate`.
1. `python run_host.py`: Starts the host and listens for agent connections.
2. `python run_editor.py`: Starts the editor agent and connects it to the host.
3. `python run_writer.py`: Starts the writer agent and connects it to the host.
4. `chainlit run run_group_chat_manager.py --port 8001`: Runs the Chainlit app, which starts the group chat manager agent and sends the initial message to begin the conversation. Port 8001 is used because the default port 8000 is already taken by the host (assuming all agents run on the same machine).
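As a rough pseudocode sketch of what `run_host.py` does (the exact `autogen_core` method names and signatures are assumptions; consult the example source for the real API):

```
# Pseudocode sketch of run_host.py; exact autogen_core signatures may differ.
import asyncio
from autogen_core.application import WorkerAgentRuntimeHost

async def main():
    # The host's gRPC server listens on port 8000, which is why the
    # Chainlit app above is run on port 8001 instead.
    host = WorkerAgentRuntimeHost(address="localhost:8000")
    host.start()                  # start the gRPC server in the background
    await host.stop_when_signal() # keep serving until interrupted

asyncio.run(main())
```

The agent scripts are similar, except they create a `WorkerAgentRuntime` pointed at the same address and register their agent with it.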
## What's Going On?
The general flow of this example is as follows:

1. The Group Chat Manager, on behalf of the `User`, sends a `RequestToSpeak` request to the `writer_agent`.
2. The `writer_agent` writes a short sentence into the group chat topic.
3. The `editor_agent` receives the message in the group chat topic and updates its memory.
4. The Group Chat Manager, which receives the writer's message in the group chat topic at the same time, sends a `RequestToSpeak` message to the next participant, the `editor_agent`.
5. The `editor_agent` sends its feedback to the group chat topic.
6. The `writer_agent` receives the feedback and updates its memory.
7. The Group Chat Manager receives the message at the same time and repeats the loop from step 1.
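The round-robin flow above can be sketched as plain Python, independent of the distributed runtime. The classes and method names below are illustrative stand-ins, not the AutoGen API; the sketch only shows how `RequestToSpeak` messages and the shared group chat topic interact:

```python
# Illustrative sketch of the round-robin group chat flow (not the AutoGen API).

class Agent:
    def __init__(self, name):
        self.name = name
        self.memory = []  # messages this agent has seen on the group chat topic

    def on_group_chat_message(self, sender, message):
        # Subscribers update their memory with messages from other agents.
        if sender != self.name:
            self.memory.append((sender, message))

    def on_request_to_speak(self, publish):
        # Respond to RequestToSpeak by publishing into the group chat topic.
        publish(self.name, f"message from {self.name}")


class GroupChatManager:
    def __init__(self, participants):
        self.participants = participants  # round-robin speaking order
        self.turn = 0
        self.transcript = []              # the manager also sees every message

    def publish(self, sender, message):
        # The group chat topic broadcasts to every subscriber, manager included.
        self.transcript.append((sender, message))
        for agent in self.participants:
            agent.on_group_chat_message(sender, message)

    def next_speaker(self):
        # Send RequestToSpeak to the next participant in round-robin order.
        speaker = self.participants[self.turn % len(self.participants)]
        self.turn += 1
        speaker.on_request_to_speak(self.publish)


writer = Agent("writer_agent")
editor = Agent("editor_agent")
manager = GroupChatManager([writer, editor])

for _ in range(4):  # two full writer -> editor rounds
    manager.next_speaker()
```

After two rounds, the manager's transcript alternates writer/editor, and each agent's memory holds the two messages published by the other agent.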
Here is an illustration of the system developed in this example:
```mermaid
graph TD;
    subgraph Host
        A1[GRPC Server]
        wt[Writer Topic]
        et[Editor Topic]
        gct[Group Chat Topic]
    end
    subgraph Distributed Writer Runtime
        writer_agent[<img src="./public/avatars/writer.png" width="50"/> Writer Agent] --> A1
        wt -.->|2 - Subscription| writer_agent
        gct -.->|4 - Subscription| writer_agent
        writer_agent -.->|3 - Publish: Group Chat Message| gct
    end
    subgraph Distributed Editor Runtime
        editor_agent[<img src="./public/avatars/editor.png" width="50"/> Editor Agent] --> A1
        et -.->|6 - Subscription| editor_agent
        gct -.->|4 - Subscription| editor_agent
        editor_agent -.->|7 - Publish: Group Chat Message| gct
    end
    subgraph Distributed Group Chat Manager Runtime
        group_chat_manager[<img src="./public/avatars/group_chat_manager.png" width="50"/> Group Chat Manager Agent] --> A1
        gct -.->|4 - Subscription| group_chat_manager
        group_chat_manager -.->|1 - Request To Speak| wt
        group_chat_manager -.->|5 - Request To Speak| et
    end
    style wt fill:#beb2c3,color:#000
    style et fill:#beb2c3,color:#000
    style gct fill:#beb2c3,color:#000
    style writer_agent fill:#b7c4d7,color:#000
    style editor_agent fill:#b7c4d7,color:#000
    style group_chat_manager fill:#b7c4d7,color:#000
```
TODO:
- Properly handle chat restarts. Currently, restarting the chat fails because the group chat manager is reported as already registered.
- Send Chainlit messages from within each agent. (Currently, only the manager sends messages to the group chat topic.)
- Add streaming to the UI like this example, but AutoGen's OpenAI client does not support streaming yet.