
# AgentChat Chess Game
This is a simple chess game that you can play with an AI agent.
## Setup
Install the `chess` package with the following command:

```bash
pip install "chess"
```
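The `chess` package on PyPI is python-chess, which the sample uses for the board and move generation. As a quick sanity check that the installation worked (an illustrative sketch, not part of the sample itself):

```python
import chess

# Create a standard starting position and play 1. e4.
board = chess.Board()
board.push_san("e4")

# After 1. e4 it is Black to move, with 20 legal replies
# (16 pawn moves and 4 knight moves).
print(board.turn == chess.BLACK)
print(len(list(board.legal_moves)))
```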
To use OpenAI models or models hosted on OpenAI-compatible API endpoints,
you need to install the `autogen-ext[openai]` package. You can install it with the following command:

```bash
pip install "autogen-ext[openai]"
# pip install "autogen-ext[openai,azure]" for Azure OpenAI models
```
To run this sample, you will need to install the following packages:

```bash
pip install -U autogen-agentchat pyyaml
```
Create a new file named `model_config.yaml` in the same directory as the script
to configure the model you want to use.
For example, to use the `gpt-4o` model from OpenAI, you can use the following configuration:

```yaml
provider: autogen_ext.models.openai.OpenAIChatCompletionClient
config:
  model: gpt-4o
  api_key: REPLACE_WITH_YOUR_API_KEY  # skip this field if the OPENAI_API_KEY environment variable is set
```
To use the `o3-mini-2025-01-31` model from OpenAI, you can use the following configuration:

```yaml
provider: autogen_ext.models.openai.OpenAIChatCompletionClient
config:
  model: o3-mini-2025-01-31
  api_key: REPLACE_WITH_YOUR_API_KEY  # skip this field if the OPENAI_API_KEY environment variable is set
```
To use a locally hosted DeepSeek-R1:8b model served by Ollama through its OpenAI-compatible endpoint, you can use the following configuration:
```yaml
provider: autogen_ext.models.openai.OpenAIChatCompletionClient
config:
  model: deepseek-r1:8b
  base_url: http://localhost:11434/v1
  api_key: ollama
  model_info:
    function_calling: false
    json_output: false
    vision: false
    family: r1
```
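If you want to confirm your `model_config.yaml` parses as intended, you can load it with pyyaml (installed above) and inspect the fields. This is an illustrative check that mirrors the Ollama configuration shown above, not part of the sample itself:

```python
import yaml

# A config matching the Ollama example above, inlined here so the
# snippet is self-contained; normally you would read model_config.yaml.
config_text = """\
provider: autogen_ext.models.openai.OpenAIChatCompletionClient
config:
  model: deepseek-r1:8b
  base_url: http://localhost:11434/v1
  api_key: ollama
  model_info:
    function_calling: false
    json_output: false
    vision: false
    family: r1
"""
config = yaml.safe_load(config_text)

# YAML booleans like `false` are parsed into Python bools.
print(config["provider"])
print(config["config"]["model_info"]["function_calling"])
```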
For more information on how to configure the model and use other providers, please refer to the Models documentation.
## Run
Run the following command to start the game:

```bash
python main.py
```
By default, the game will use a random agent to play against the AI agent.
You can enable human vs AI mode by setting the `--human` flag:

```bash
python main.py --human
```
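For reference, a boolean switch like `--human` is typically wired up with argparse's `store_true` action. This is a hypothetical sketch of the argument handling, not the actual code in `main.py`:

```python
import argparse

# Hypothetical sketch of the --human flag; the real script's
# argument handling may differ.
parser = argparse.ArgumentParser(description="AgentChat chess game")
parser.add_argument(
    "--human",
    action="store_true",
    help="play against the AI agent yourself instead of using a random agent",
)

args = parser.parse_args([])  # no flag: random agent vs AI
print(args.human)

args = parser.parse_args(["--human"])  # flag set: human vs AI
print(args.human)
```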