
This PR refactored the `AgentEvent` and `ChatMessage` union types into abstract base classes. This allows user-defined message types that subclass one of the base classes to be used in AgentChat. To support a unified interface for working with the messages, the base classes added abstract methods for:

- Converting content to a string
- Converting content to a `UserMessage` for the model client
- Converting content for rendering in the console
- Dumping into a dictionary
- Loading and creating a new instance from a dictionary

This way, all agents such as `AssistantAgent` and `SocietyOfMindAgent` can use the unified interface to work with any built-in or user-defined message type.

This PR also introduces a new message type for AgentChat, `StructuredMessage` (Resolves #5131), which is a generic type that requires a user-specified content type. You can create a `StructuredMessage` as follows:

```python
from typing import List

from pydantic import BaseModel

from autogen_agentchat.messages import StructuredMessage


class MessageType(BaseModel):
    data: str
    references: List[str]


message = StructuredMessage[MessageType](
    content=MessageType(data="data", references=["a", "b"]),
    source="user",
)

# message.content is of type `MessageType`.
```

This PR addresses the receiving side of this message type. Producing this message type from `AssistantAgent` is continued in #5934.

Added unit tests to verify this message type works with agents and teams.
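To make the unified interface concrete, here is a minimal, self-contained sketch of the shape described above: an abstract base class with string-conversion and dump/load methods, and a user-defined subclass. The class and method names (`BaseMessageSketch`, `NoteMessage`, `to_text`, `dump`, `load`) are illustrative stand-ins, not AgentChat's exact API, and the model-client and console-rendering methods are elided for brevity.

```python
# Illustrative sketch of the unified message interface described above.
# Names are hypothetical stand-ins, not AgentChat's actual API.
from abc import ABC, abstractmethod
from dataclasses import asdict, dataclass
from typing import Any, Dict


class BaseMessageSketch(ABC):
    """Subset of the interface: to-string plus dict dump/load."""

    @abstractmethod
    def to_text(self) -> str:
        """Convert content to a string."""

    @abstractmethod
    def dump(self) -> Dict[str, Any]:
        """Dump the message into a dictionary."""

    @classmethod
    @abstractmethod
    def load(cls, data: Dict[str, Any]) -> "BaseMessageSketch":
        """Create a new instance from a dictionary."""


@dataclass
class NoteMessage(BaseMessageSketch):
    """A user-defined message type implementing the interface."""

    text: str
    source: str

    def to_text(self) -> str:
        return self.text

    def dump(self) -> Dict[str, Any]:
        return asdict(self)

    @classmethod
    def load(cls, data: Dict[str, Any]) -> "NoteMessage":
        return cls(**data)


msg = NoteMessage(text="hello", source="user")
restored = NoteMessage.load(msg.dump())
print(restored == msg)  # round-trips through the dictionary interface
```

An agent that only calls the base-class methods can then handle `NoteMessage` without knowing its concrete type, which is the point of the refactor.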
# AgentChat Chess Game

This is a simple chess game that you can play with an AI agent.
## Setup

Install the `chess` package with the following command:

```shell
pip install "chess"
```
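The `chess` package (python-chess) provides the board state and move handling the game builds on. A quick sketch of its basic API, in case you want to inspect the board directly (this is plain python-chess usage, not code from the sample itself):

```python
import chess

board = chess.Board()   # standard starting position
board.push_san("e4")    # play moves in standard algebraic notation
board.push_san("e5")
print(board.turn == chess.WHITE)  # True: white to move again
print(board.is_game_over())       # False: game still in progress
```

The AI agent's moves are validated against `board.legal_moves` the same way.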
To use OpenAI models or models hosted on OpenAI-compatible API endpoints,
you need to install the `autogen-ext[openai]` package. You can install it with the following command:

```shell
pip install "autogen-ext[openai]"
# pip install "autogen-ext[openai,azure]" for Azure OpenAI models
```
Create a new file named `model_config.yaml` in the same directory as the script
to configure the model you want to use.
For example, to use the `gpt-4o` model from OpenAI, you can use the following configuration:
```yaml
provider: autogen_ext.models.openai.OpenAIChatCompletionClient
config:
  model: gpt-4o
  api_key: REPLACE_WITH_YOUR_API_KEY # or omit if the OPENAI_API_KEY environment variable is set
```
To use the `o3-mini-2025-01-31` model from OpenAI, you can use the following configuration:

```yaml
provider: autogen_ext.models.openai.OpenAIChatCompletionClient
config:
  model: o3-mini-2025-01-31
  api_key: REPLACE_WITH_YOUR_API_KEY # or omit if the OPENAI_API_KEY environment variable is set
```
To use a locally hosted DeepSeek-R1:8b model with Ollama through its OpenAI-compatible endpoint, you can use the following configuration:
```yaml
provider: autogen_ext.models.openai.OpenAIChatCompletionClient
config:
  model: deepseek-r1:8b
  base_url: http://localhost:11434/v1
  api_key: ollama
  model_info:
    function_calling: false
    json_output: false
    vision: false
    family: r1
```
For more information on how to configure the model and use other providers, please refer to the Models documentation.
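For reference, the gpt-4o YAML above parses into a plain nested mapping. The sketch below shows that parsed structure and a pre-flight sanity check a script could run before constructing a model client. The dictionary shape follows the YAML; the `validate` helper and the `"YOUR_API_KEY"` placeholder are hypothetical illustrations, not part of the sample.

```python
# The parsed form of the gpt-4o configuration shown above.
model_config = {
    "provider": "autogen_ext.models.openai.OpenAIChatCompletionClient",
    "config": {
        "model": "gpt-4o",
        "api_key": "YOUR_API_KEY",  # placeholder; omit if OPENAI_API_KEY is set
    },
}


def validate(cfg: dict) -> None:
    # Hypothetical pre-flight check, not part of the sample script.
    assert "provider" in cfg, "missing top-level 'provider'"
    assert "model" in cfg.get("config", {}), "missing 'config.model'"


validate(model_config)
print("model_config.yaml structure looks valid")
```

A script would typically obtain this mapping by reading `model_config.yaml` with a YAML parser rather than hard-coding it as shown here.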
## Run

Run the following command to start the game:

```shell
python main.py
```
By default, the game will use a random agent to play against the AI agent.
You can enable human vs AI mode by setting the `--human` flag:

```shell
python main.py --human
```