
# HumanEval Benchmark
This scenario implements a modified version of the HumanEval benchmark. Compared to the original benchmark, there are two key differences here:
- A chat model rather than a completion model is used.
- The agents get pass/fail feedback about their implementations, and can keep trying until they succeed or run out of tokens or turns (see the sketch after this list).
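
Purely for intuition, here is a minimal Python sketch of that kind of feedback loop. The `ask_model` callable is a placeholder for whatever chat-model client you use, and the test execution is simplified; this is not agbench's actual implementation of the scenario.

```python
# Minimal sketch only: `ask_model` stands in for your chat-model client and
# the test execution is simplified; this is NOT agbench's implementation.
from typing import Callable

MAX_TURNS = 5


def solve_with_feedback(prompt: str, unit_tests: str, ask_model: Callable[[str], str]) -> bool:
    """Ask for code, run the tests, and feed failures back until the tests
    pass or the turn budget is exhausted."""
    feedback = ""
    for _ in range(MAX_TURNS):
        candidate = ask_model(prompt + feedback)
        try:
            namespace: dict = {}
            # Run the proposed implementation followed by the unit tests.
            exec(candidate + "\n" + unit_tests, namespace)
            return True  # no assertion failed: pass
        except Exception as err:
            feedback = f"\nYour previous attempt failed with: {err!r}. Please fix it and try again."
    return False  # out of turns
```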
## Running the tasks
Navigate to HumanEval:

```
cd benchmarks/HumanEval
```
Update `config.yaml` to point to your model host, as appropriate. The default configuration points to `gpt-4o`.
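
If you want to double-check what a run will use, the optional snippet below simply loads and echoes the configuration back (it requires PyYAML and is not an agbench command; the exact keys depend on your config file, so nothing specific is assumed here).

```python
# Optional sanity check (requires PyYAML: `pip install pyyaml`); this just
# prints the configuration so you can confirm the model host/name.
import yaml

with open("config.yaml") as f:
    config = yaml.safe_load(f)

print(yaml.safe_dump(config, sort_keys=False))
```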
Now initialize the tasks:

```
python Scripts/init_tasks.py
```
Note: This will attempt to download HumanEval. Then run `Scripts/init_tasks.py` again.
Once the script completes, you should now see a folder in your current directory called `Tasks` that contains one JSONL file per template in `Templates`.
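
To peek at what was generated (optional), you can read the first record of one of those JSONL files. The exact fields depend on the template, so the snippet below only prints the keys rather than assuming a particular schema.

```python
# Peek at the first task record; field names depend on the template, so we
# only print the keys rather than assuming a schema.
import json

with open("Tasks/human_eval_AgentChat.jsonl") as f:
    first_task = json.loads(f.readline())

print(sorted(first_task.keys()))
```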
Now, to run a specific subset of HumanEval, use:

```
agbench run Tasks/human_eval_AgentChat.jsonl
```
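
The command above runs every task in that file. If you only want a quick smoke test first, one plain-Python option (ordinary file slicing, not an agbench feature; the subset file name is made up here) is to copy the first few tasks into a smaller JSONL:

```python
# Copy the first 5 tasks into a smaller JSONL for a quick smoke test.
# The subset file name is arbitrary; this is plain file slicing, not an
# agbench feature.
with open("Tasks/human_eval_AgentChat.jsonl") as src, open(
    "Tasks/human_eval_AgentChat_subset.jsonl", "w"
) as dst:
    for i, line in enumerate(src):
        if i >= 5:
            break
        dst.write(line)
```

You can then pass the subset file to `agbench run` in the same way as the full file.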
You should see the command line print the raw logs showing the agents in action. To see a summary of the results (e.g., task completion rates), run the following in a new terminal:
```
agbench tabulate Results/human_eval_AgentChat
```
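
The tabulate output includes a per-task success/time table and summary statistics (successes, failures, missing, total, average success rate, average time, total time). Purely to illustrate that arithmetic, here is a self-contained pandas sketch over toy data; the column names and values are illustrative only, not agbench's actual log schema.

```python
import pandas as pd

# Toy per-trial results; column names mirror the tabulate display but are
# illustrative only, not agbench's actual log schema.
df = pd.DataFrame(
    {
        "Trial 0 Success": [True, False, True, None],
        "Trial 0 Time": [3, 15, 2, None],
    }
)

success = df["Trial 0 Success"]
successes = int(success.eq(True).sum())
failures = int(success.eq(False).sum())
missing = int(success.isna().sum())
total = len(df)

print(f"Successes: {successes}, Failures: {failures}, Missing: {missing}, Total: {total}")
print(f"Average success rate: {successes / total:.2f}")
print(f"Average time: {df['Trial 0 Time'].mean():.2f}")
print(f"Total time: {df['Trial 0 Time'].sum():.0f}")
```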
## References
Evaluating Large Language Models Trained on Code<br/>
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, Wojciech Zaremba<br/>
https://arxiv.org/abs/2107.03374