* Initial commit of the autogen testbed environment.
* Fixed some typos in the Testbed README.md
* Added stricter termination logic to the two_agent scenario, and switched the logo task from finding AutoGen's logo to finding Microsoft's (it's easier)
* Added documentation to testbed code in preparation for PR
* Added a variation of HumanEval to the Testbed. It is also a reasonable example of how to integrate other benchmarks.
* Removed ChatCompletion.start_logging and related features. Added an explicit TERMINATE output to HumanEval to save 1 turn in each conversation.
* Added metrics utils script for HumanEval
* Updated the requirements in the README.
* Added documentation for HumanEval csv schemas
* Standardized how the OAI_CONFIG_LIST is handled (see the loading sketch after this list).
* Removed dot-slash from 'includes' path for cross-platform compatibility
* Missed a file.
* Updated readme to include known-working versions.
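
For reference, the standardized OAI_CONFIG_LIST handling loads model configs from an environment variable or a JSON file of the same name. A minimal sketch, assuming pyautogen's config_list_from_json helper (the filter_dict values are illustrative):

```python
import autogen

# Load model configs from the OAI_CONFIG_LIST env var, or from a file
# named OAI_CONFIG_LIST if the env var is not set.
config_list = autogen.config_list_from_json(
    env_or_file="OAI_CONFIG_LIST",
    # Optionally keep only the models the scenario should use.
    filter_dict={"model": ["gpt-4", "gpt-3.5-turbo"]},
)
```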
* Added async support to get_human_input (see the sketch after this list)
* Adjusted code to fix a code-formatting test failure
* Adjusted test_async_get_human_input.py to run the test asynchronously
* Adjusted test_async_get_human_input.py to fix a pre-commit check error
* Adjusted test_async_get_human_input.py to fix a pre-commit check error (v2)
* Removed an unnecessary register_reply
* Adjusted the test to use an asyncio call
* Reverted the test to not use asyncio
* Added async support for running group chat
* Allowed ConversableAgent to use async functions to generate replies
* Added a test for async execution
---------
Co-authored-by: Qingyun Wu <qingyun.wu@psu.edu>
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
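
The async support above surfaces as a_-prefixed agent methods. A minimal sketch of overriding the human-input hook and running a chat on the event loop, assuming a_get_human_input and a_initiate_chat behave as this change set describes:

```python
import asyncio

import autogen

class AsyncUserProxy(autogen.UserProxyAgent):
    async def a_get_human_input(self, prompt: str) -> str:
        # Run the blocking input() in a worker thread so the
        # event loop stays responsive while waiting on the user.
        return await asyncio.to_thread(input, prompt)

async def main():
    config_list = autogen.config_list_from_json("OAI_CONFIG_LIST")
    assistant = autogen.AssistantAgent("assistant", llm_config={"config_list": config_list})
    user = AsyncUserProxy("user", human_input_mode="ALWAYS", code_execution_config=False)
    await user.a_initiate_chat(assistant, message="Summarize today's plan.")

asyncio.run(main())
```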
* Update FAQ with workaround for Issue #251
* Update website/docs/FAQ.md
* Update website/docs/FAQ.md
---------
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
* Update Installation.md
Replace autogen with pyautogen in the env setup to avoid confusion
Related issue: #211
* Update Installation.md
Add deactivation instructions
* Update website/docs/Installation.md
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
---------
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
* UPDATE - FAQ section in documentation
* FIX - formatting test failure
* FIX - added disclaimer
* pre-commit
* Update website/docs/FAQ.md
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
* Update website/docs/FAQ.md
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
* Update website/docs/FAQ.md
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
* UPDATE - notebook and FAQ information for config_list_from_models (see the sketch after this list)
---------
Co-authored-by: Ward <award40@LAMU0CLP74YXVX6.uhc.com>
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
Co-authored-by: Li Jiang <bnujli@gmail.com>
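
For context, config_list_from_models crosses stored API keys with a list of model names to produce one config entry per model. A minimal sketch, assuming the key file layout used in the FAQ (key_openai.txt holding the OpenAI key):

```python
import autogen

# One config entry per model, all sharing the key read from key_openai.txt.
config_list = autogen.config_list_from_models(
    key_file_path=".",
    openai_api_key_file="key_openai.txt",
    model_list=["gpt-4", "gpt-3.5-turbo"],
)
```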
* Adds jupyter as a vscode extension and fixes validation errors in vscode (see https://containers.dev/supporting#visual-studio-code)
* Trim trailing whitespace
* Add newline to end of file
---------
Co-authored-by: Li Jiang <bnujli@gmail.com>
Co-authored-by: Li Jiang <lijiang1@microsoft.com>
Co-authored-by: Victor Dibia <chuvidi2003@gmail.com>
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
* LMM notebook
* Use "register_reply" instead.
* Loop to check for a non-empty LLaVA response
* Run notebook
* Make the llava_call function more flexible
* Include API for LLaVA from Replicate
* LLaVA data format update x2
1. prompt formatter function
2. conversation format with SEP
* Coding example added
* Rename "ImageAgent" -> "LLaVAAgent"
* Docstring and comments updates
* Debug notebook: Remote LLaVA tested
* Example 1: remove system message
* MultimodalConversableAgent added
* Add gpt4v_formatter
* LLaVA: update example 1
* LLaVA: Add link to "Table of Content"
* Updating Examples to follow new categorical structure. #273
Addressing the remaining task for #273, I have copied the changes from /Usecases over to /Examples to follow the new categorical structure for example notebooks.
* Add the new example notebook
---------
Co-authored-by: Qingyun Wu <qingyun.wu@psu.edu>
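
The register_reply mechanism mentioned above hooks a custom reply function into an agent's reply stack. A minimal sketch, with a hypothetical call_llava_backend helper standing in for the notebook's actual LLaVA call:

```python
from autogen import Agent, ConversableAgent

def call_llava_backend(prompt: str) -> str:
    # Hypothetical stand-in for the real LLaVA endpoint call.
    return f"[LLaVA response to: {prompt}]"

def llava_reply(recipient, messages=None, sender=None, config=None):
    # Forward the latest message to the LLaVA backend, looping
    # until a non-empty response arrives.
    prompt = messages[-1]["content"]
    response = ""
    while not response:
        response = call_llava_backend(prompt)
    return True, response  # True marks this reply as final

llava_agent = ConversableAgent("llava_agent", llm_config=False)
# Fire the custom function for messages from any sender.
llava_agent.register_reply([Agent, None], llava_reply)
```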
* Initial commit.
* Disable LLM response caching.
* Add teachability option to setup.py
* Modify test to use OAI_CONFIG_LIST as suggested in the docs.
* Expand unit test.
* Complete unit test.
* Add filter_dict
* details
* AnalysisAgent
* details
* More documentation and debug output.
* Support retrieval of any number of relevant memos, including zero.
* More robust analysis separator.
* cleanup
* teach_config (see the sketch after this list)
* refactoring
* For robustness, allow more flexibility on memo storage and retrieval.
* de-dupe the retrieved memos.
* Simplify AnalysisAgent. The unit tests now pass with gpt-3.5
* comments
* Add a verbosity level to control analyzer messages.
* refactoring
* comments
* Persist memory on disk.
* cleanup
* Use markdown to format retrieved memos.
* Use markdown in TextAnalyzerAgent
* Add another verbosity level.
* clean up logging
* notebook
* minor edits
* cleanup
* linter fixes
* Skip tests that fail to import openai
* Address reviewer feedback.
* lint
* refactoring
* Improve wording
* Improve code coverage.
* lint
* Use llm_config to control caching.
* lowercase notebook name
* Sort out the parameters passed through to ConversableAgent, and supply full docstrings for the others.
* lint
* Allow TextAnalyzerAgent to be given a different llm_config than TeachableAgent.
* documentation
* Modifications to run openai workflow.
* Test on just python 3.10.
Replaced agent with teachable_agent, as recommended.
* Test on python 3.9 instead of 3.10.
* Remove space from name -> teachableagent
---------
Co-authored-by: Li Jiang <bnujli@gmail.com>
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
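
Putting the pieces above together: TeachableAgent (installable via the teachable extra added to setup.py) takes an llm_config plus a teach_config. A minimal sketch, assuming the contrib import path, the teach_config keys this change set mentions (verbosity, disk persistence), and a learn_from_user_feedback persistence call; treat these names as assumptions:

```python
import autogen
from autogen.agentchat.contrib.teachable_agent import TeachableAgent

config_list = autogen.config_list_from_json("OAI_CONFIG_LIST")

teachable_agent = TeachableAgent(
    name="teachableagent",  # no space in the name, per the final commit
    llm_config={"config_list": config_list},  # caching is also controlled via llm_config
    teach_config={
        "verbosity": 0,        # analyzer message verbosity level
        "reset_db": False,     # keep memos persisted on disk across runs
        "path_to_db_dir": "./tmp/teachable_agent_db",
    },
)

user = autogen.UserProxyAgent("user", human_input_mode="NEVER", max_consecutive_auto_reply=0)
user.initiate_chat(teachable_agent, message="Remember that my project is named Alpha.")
teachable_agent.learn_from_user_feedback()  # store new memos to disk
```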