Benchmarking Agents

This directory provides the ability to benchmark agents (e.g., built using AutoGen) using AgBench. Use the instructions below to prepare your environment for benchmarking. Once done, proceed to the relevant benchmark directory (e.g., benchmarks/GAIA) for further scenario-specific instructions.

Setup on WSL

  1. Install Docker Desktop. A restart is required after installation. Then open Docker Desktop and, under Settings > Resources > WSL Integration, enable integration with additional distros (e.g., Ubuntu).
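
     To confirm that the WSL integration is active, you can run a quick sanity check from inside the Ubuntu distro (docker info is a standard Docker CLI command):

    # Run inside the Ubuntu WSL distro: if the integration is enabled,
    # this prints details about the Docker Desktop engine.
    docker info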

  2. Clone autogen and export AUTOGEN_REPO_BASE. This environment variable enables the Docker containers to use the correct version of the agents.

    git clone git@github.com:microsoft/autogen.git
    export AUTOGEN_REPO_BASE=<path_to_autogen>
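
     Note that exported variables last only for the current shell session. To persist AUTOGEN_REPO_BASE across sessions, you can append it to your shell profile (a sketch assuming bash; substitute the actual path to your clone):

    # Persist the variable across future shell sessions (bash assumed).
    echo 'export AUTOGEN_REPO_BASE=<path_to_autogen>' >> ~/.bashrc
    source ~/.bashrc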
    
  3. Install agbench. AgBench is currently a tool in the AutoGen repo.

    cd autogen/python/packages/agbench
    pip install -e .
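
     To verify the installation, confirm that the agbench command-line tool is now available:

    # Confirm the agbench CLI was installed into the current environment.
    agbench --help

Once setup is complete, a typical session uses agbench's run and tabulate subcommands from within a benchmark directory. As a rough sketch (the task and results paths below are placeholders; each benchmark's README gives the exact invocation):

    # From a benchmark directory, run the tasks and summarize the results.
    cd benchmarks/GAIA
    agbench run Tasks/<task_file>.jsonl
    agbench tabulate Results/<results_dir>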