AutoGen - Automated Multi Agent Chat
AutoGen offers conversable agents powered by LLMs, tools, or humans, which can perform tasks collectively via automated chat. The framework supports tool use and human participation through multi-agent conversation. Please find documentation about this feature here.
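As a quick orientation before the notebook links, here is a minimal sketch of an automated two-agent chat. It assumes the `pyautogen` package is installed and that an `OAI_CONFIG_LIST` file holds your model credentials; the model filter and task message are placeholders, not part of any specific notebook.

```python
import autogen

# Load LLM settings from OAI_CONFIG_LIST; the model filter is a placeholder.
config_list = autogen.config_list_from_json(
    "OAI_CONFIG_LIST",
    filter_dict={"model": ["gpt-4"]},
)

# An assistant agent powered by an LLM.
assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config={"config_list": config_list},
)

# A user proxy agent that can execute the code the assistant writes.
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    code_execution_config={"work_dir": "coding", "use_docker": False},
)

# Kick off the automated chat with a task message (illustrative only).
user_proxy.initiate_chat(
    assistant,
    message="Plot a chart of NVDA and TSLA stock price change YTD.",
)
```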
Links to notebook examples:
- Automated Task Solving with Code Generation, Execution & Debugging
- Auto Code Generation, Execution, Debugging and Human Feedback
- Solve Tasks Requiring Web Info
- Use Provided Tools as Functions
- Automated Task Solving with Coding & Planning Agents
- Automated Task Solving with GPT-4 + Multiple Human Users
- Automated Chess Game Playing & Chitchatting by GPT-4 Agents
- Automated Task Solving by Group Chat (with 3 group member agents and 1 manager agent; a minimal group chat sketch appears after this list)
- Automated Data Visualization by Group Chat (with 3 group member agents and 1 manager agent)
- Automated Complex Task Solving by Group Chat (with 6 group member agents and 1 manager agent)
- Automated Continual Learning from New Data
- Teach Agents New Skills & Reuse via Automated Chat
- Teach Agents New Facts, User Preferences and Skills Beyond Coding
- Automated Code Generation and Question Answering with Retrieval Augmented Agents
- Group Chat with Retrieval Augmented Generation (with 5 group member agents and 1 manager agent)
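Several of the notebooks above use a group chat with member agents coordinated by a manager. The sketch below shows the basic wiring, again assuming `pyautogen` and an `OAI_CONFIG_LIST` file; the agent names, system message, and task are illustrative and not taken from any particular notebook.

```python
import autogen

config_list = autogen.config_list_from_json("OAI_CONFIG_LIST")
llm_config = {"config_list": config_list}

# Three group members plus a manager, mirroring the group chat notebooks above.
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    code_execution_config={"work_dir": "groupchat", "use_docker": False},
)
coder = autogen.AssistantAgent(name="coder", llm_config=llm_config)
critic = autogen.AssistantAgent(
    name="critic",
    system_message="Review the coder's work and suggest improvements.",
    llm_config=llm_config,
)

# The GroupChat holds the members; the manager decides who speaks next.
groupchat = autogen.GroupChat(agents=[user_proxy, coder, critic], messages=[], max_round=12)
manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)

# The user proxy starts the conversation through the manager (task is a placeholder).
user_proxy.initiate_chat(manager, message="Download NVDA stock data and plot the YTD change.")
```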