* Refactor GPTAssistantAgent constructor to handle
instructions and overwrite_instructions flag
- Ensure that `system_message` is always consistent with `instructions`
- Ensure provided instructions are always used
- Add option to permanently modify the instructions of the assistant
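A minimal usage sketch of the constructor behavior described above (illustrative only; the config list source and assistant id are placeholders):

```python
from autogen import config_list_from_json
from autogen.agentchat.contrib.gpt_assistant_agent import GPTAssistantAgent

config_list = config_list_from_json("OAI_CONFIG_LIST")

# If `assistant_id` points to an existing assistant, the provided instructions
# are used for this session; with overwrite_instructions=True they are also
# written back to the assistant permanently.
assistant = GPTAssistantAgent(
    name="gpt_assistant",
    instructions="You are a helpful coding assistant.",
    llm_config={
        "config_list": config_list,
        "assistant_id": None,  # placeholder: set this to reuse an existing assistant
    },
    overwrite_instructions=True,
)
```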
* Improve default behavior
* Add a test; add method to delete assistant
* Add a new test for overwriting instructions
* Add test case for when no instructions are given for existing assistant
* Add pytest markers to test_gpt_assistant.py
* add test in workflow
* update
* fix test_client_stream
* comment out test_hierarchy_
* Add basic gptassistant notebook
- also improve logging in gpt assistant
* Update notebook/agentchat_oai_assistant_twoagents_basic.ipynb
Co-authored-by: Qingyun Wu <qingyun.wu@psu.edu>
---------
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
Co-authored-by: kevin666aa <yrwu000627@gmail.com>
Co-authored-by: Qingyun Wu <qingyun.wu@psu.edu>
* Notebook showing how to use select speaker to control conversation flow.
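A generic group-chat setup sketch for context (this does not reproduce the notebook; agent names and the task message are made up):

```python
import autogen

config_list = autogen.config_list_from_json("OAI_CONFIG_LIST")
llm_config = {"config_list": config_list}

planner = autogen.AssistantAgent("planner", llm_config=llm_config)
engineer = autogen.AssistantAgent("engineer", llm_config=llm_config)
user_proxy = autogen.UserProxyAgent(
    "user_proxy", human_input_mode="NEVER", code_execution_config=False
)

groupchat = autogen.GroupChat(agents=[user_proxy, planner, engineer], messages=[], max_round=8)
manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)

# The group chat manager picks the next speaker at each turn; the notebook shows
# how to steer that selection to control the conversation flow.
user_proxy.initiate_chat(manager, message="Plan and implement a small script.")
```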
* pytest associated with notebook.
* Added llm_config to assistant and user proxy agent, and clarified why we set use_cache to false, as requested in the review.
* Added a @pytest.mark.skipif decorator, as in other tests, to run it only on one Python version (3.10)
* Fixed config warning.
* Removed llm_config from UserProxyAgent
* Fixed minor typos.
* Reran outputs
* Removed llm_config from user_proxy_agent
* Colab Badge link updated.
* pre-commit formatting changes.
* Fixed base_url
---------
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
* LMM Code added
* LLaVA notebook update
* Test cases and Notebook modified for OpenAI v1
* Move LMM into contrib
To resolve test and deployment issues.
In the future, we can install pillow by default and move the LMM agents back into agentchat
* LMM test setup update
* try...except... clause for LMM tests
* disable patch for llava agent test
To resolve the dependency issue for the build
* Add LMM Blog
* Change docstring for LMM agents
* Docstring update patch
* llava: insert reply at position 1 now
so that it can still handle human_input_mode
and max_consecutive_auto_reply
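Reply functions are tried in list order, so the position matters; a sketch of registering a custom reply at position 1 (illustrative, not the LLaVA code):

```python
from autogen import Agent, ConversableAgent

def custom_reply(recipient, messages=None, sender=None, config=None):
    # Placeholder reply function.
    return True, "custom response"

agent = ConversableAgent("example", llm_config=False)
# position=1 keeps the built-in termination / human-input check (which sits at
# the front of the reply list) ahead of this custom reply.
agent.register_reply([Agent, None], custom_reply, position=1)
```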
* Resolve comments
Fix typos, blogs, and yml; also add OpenAIWrapper
* Signature typo fix for LMM agent: system_message
* Update LMM "content" from latest OpenAI release
Reference https://platform.openai.com/docs/guides/vision
* update LMM test according to latest OpenAI release
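For reference, the multimodal message format from the linked vision guide that the LMM "content" now follows:

```python
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "What is in this image?"},
        {
            "type": "image_url",
            # either an https URL or a base64 data URI that includes the mime type
            "image_url": {"url": "data:image/png;base64,<BASE64_DATA>"},
        },
    ],
}
```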
* Fully support GPT-4V now
1. Add a notebook for GPT-4V. LLaVA notebook also updated.
2. img_utils updated
3. GPT-4V formatter now returns a base64 image with its mime type
4. Infer the mime type directly from the base64 image content when loading
without a suffix (see the sketch after this list)
5. Test cases modified according to all the related changes.
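As an illustration of item 4, a sketch of inferring the mime type from the decoded image bytes (magic numbers); this is not the actual img_utils code:

```python
import base64

def infer_image_mime_type(b64_image: str) -> str:
    """Guess an image mime type from base64 content by checking magic bytes."""
    header = base64.b64decode(b64_image[:32])
    if header.startswith(b"\x89PNG"):
        return "image/png"
    if header.startswith(b"\xff\xd8\xff"):
        return "image/jpeg"
    if header.startswith((b"GIF87a", b"GIF89a")):
        return "image/gif"
    return "image/jpeg"  # fallback assumption
```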
* GPT-4V link updated in blog
---------
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
* UPDATE - FAQ section in documentation
* FIX - formatting test failure
* FIX - added disclaimer
* pre-commit
* Update website/docs/FAQ.md
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
* Update website/docs/FAQ.md
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
* Update website/docs/FAQ.md
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
* UPDATE - notebook and FAQ information for config_list_from_models
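For reference, config_list_from_models builds one config entry per model, reading the API key from the environment or key files (the models shown are just examples):

```python
import autogen

# Produces one config entry per listed model, using the OpenAI/Azure keys
# found in the environment or the default key files.
config_list = autogen.config_list_from_models(
    model_list=["gpt-4", "gpt-3.5-turbo"],
)
```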
---------
Co-authored-by: Ward <award40@LAMU0CLP74YXVX6.uhc.com>
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
Co-authored-by: Li Jiang <bnujli@gmail.com>
* LMM notebook
* Use "register_reply" instead.
* Loop to check for a non-empty LLaVA response
* Run notebook
* Make the llava_call function more flexible
* Include API for LLaVA from Replicate
* LLaVA data format update x2
1. prompt formatter function
2. conversation format with SEP
* Coding example added
* Rename "ImageAgent" -> "LLaVAAgent"
* Docstring and comments updates
* Debug notebook: Remote LLaVA tested
* Example 1: remove system message
* MultimodalConversableAgent added
* Add gpt4v_formatter
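A small usage sketch of the multimodal agent and the `<img ...>` tag handled by gpt4v_formatter (URL and model name are placeholders):

```python
import autogen
from autogen.agentchat.contrib.multimodal_conversable_agent import MultimodalConversableAgent

config_list = autogen.config_list_from_json(
    "OAI_CONFIG_LIST", filter_dict={"model": ["gpt-4-vision-preview"]}
)

image_agent = MultimodalConversableAgent(
    name="image-explainer",
    llm_config={"config_list": config_list, "max_tokens": 300},
)
user_proxy = autogen.UserProxyAgent(
    "user_proxy", human_input_mode="NEVER", code_execution_config=False
)

# The <img ...> tag is converted by gpt4v_formatter into the OpenAI vision content list.
user_proxy.initiate_chat(
    image_agent,
    message="What is shown here? <img https://example.com/picture.png>",
)
```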
* LLaVA: update example 1
* LLaVA: Add link to "Table of Content"
* Initial commit.
* Disable LLM response caching.
* Add teachability option to setup.py
* Modify test to use OAI_CONFIG_LIST as suggested in the docs.
* Expand unit test.
* Complete unit test.
* Add filter_dict
* details
* AnalysisAgent
* details
* More documentation and debug output.
* Support retrieval of any number of relevant memos, including zero.
* More robust analysis separator.
* cleanup
* teach_config
* refactoring
* For robustness, allow more flexibility on memo storage and retrieval.
* de-dupe the retrieved memos.
* Simplify AnalysisAgent. The unit tests now pass with gpt-3.5
* comments
* Add a verbosity level to control analyzer messages.
* refactoring
* comments
* Persist memory on disk.
* cleanup
* Use markdown to format retrieved memos.
* Use markdown in TextAnalyzerAgent
* Add another verbosity level.
* clean up logging
* notebook
* minor edits
* cleanup
* linter fixes
* Skip tests that fail to import openai
* Address reviewer feedback.
* lint
* refactoring
* Improve wording
* Improve code coverage.
* lint
* Use llm_config to control caching.
* lowercase notebook name
* Sort out the parameters passed through to ConversableAgent, and supply full docstrings for the others.
* lint
* Allow TextAnalyzerAgent to be given a different llm_config than TeachableAgent.
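A construction sketch for the teachability feature; the teach_config keys are assumptions based on the bullets above, not an exhaustive or authoritative list:

```python
import autogen
from autogen.agentchat.contrib.teachable_agent import TeachableAgent

config_list = autogen.config_list_from_json("OAI_CONFIG_LIST")

teachable_agent = TeachableAgent(
    name="teachableagent",
    llm_config={"config_list": config_list},
    teach_config={
        "verbosity": 0,                                # analyzer message verbosity
        "reset_db": False,                             # keep memos persisted on disk
        "path_to_db_dir": "./tmp/teachable_agent_db",  # where memos are stored
        "recall_threshold": 1.5,                       # looser -> more memos retrieved
    },
)

user = autogen.UserProxyAgent("user", human_input_mode="NEVER", code_execution_config=False)
user.initiate_chat(teachable_agent, message="Remember that my preferred language is Python.")
teachable_agent.learn_from_user_feedback()  # store what was learned as memos
```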
* documentation
* Modifications to run openai workflow.
* Test on just python 3.10.
Replace agent with teachable_agent as recommended.
* Test on python 3.9 instead of 3.10.
* Remove space from name -> teachableagent
---------
Co-authored-by: Li Jiang <bnujli@gmail.com>
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
* Add group chat and retrieve agent example
* Fix link and models
* Support calling RAG in a group chat, without initializing the chat with RAG
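One way this pattern can be wired: register a retrieval function for the group chat's agents instead of starting the chat from the RAG agent (a sketch; retrieve_content and its wiring are assumptions, not the notebook verbatim):

```python
import autogen
from autogen.agentchat.contrib.retrieve_user_proxy_agent import RetrieveUserProxyAgent

config_list = autogen.config_list_from_json("OAI_CONFIG_LIST")

# RAG agent used only for retrieval; it does not initiate the group chat.
ragproxyagent = RetrieveUserProxyAgent(
    name="ragproxyagent",
    human_input_mode="NEVER",
    retrieve_config={"task": "qa", "docs_path": "./docs"},
)

def retrieve_content(message: str, n_results: int = 3) -> str:
    """Hypothetical wrapper: fetch context relevant to `message` via the RAG agent."""
    ragproxyagent.retrieve_docs(message, n_results=n_results)
    # Combining the retrieved context with the message is omitted in this sketch.
    return message

llm_config = {
    "config_list": config_list,
    "functions": [
        {
            "name": "retrieve_content",
            "description": "Retrieve content for question answering.",
            "parameters": {
                "type": "object",
                "properties": {"message": {"type": "string", "description": "The question to look up."}},
                "required": ["message"],
            },
        }
    ],
}

assistant = autogen.AssistantAgent("assistant", llm_config=llm_config)
user_proxy = autogen.UserProxyAgent(
    "user_proxy", human_input_mode="NEVER", code_execution_config=False
)
user_proxy.register_function(function_map={"retrieve_content": retrieve_content})
```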
* Fix n_results logic
* Update notebook
* Fix format
* Improve wording
* Update variable name
* Revert to main
* Update function call
* Update keys
* Update contents
* Update contents
* FORMATTING
* UPDATE - OAI __init__.py
* ruff
* ADD - notebook covering oai API configuration options and their different purposes
* ADD - openai util updates so that the function assumes the same environment variable name for all models; also added functionality for extra API configurations such as api_base
* ADD - updates to config_list_from_dotenv and tests for openai_utils; update example notebook
* UPDATE - added working config_list_from_dotenv() with passing tests, and updated notebook
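An illustrative call (the .env path, models, and key names are placeholders):

```python
import autogen

# Each model maps to the environment variable holding its API key; values are
# loaded from the given .env file if not already present in the environment.
config_list = autogen.config_list_from_dotenv(
    dotenv_file_path=".env",
    model_api_key_map={
        "gpt-4": "OPENAI_API_KEY",
        "gpt-3.5-turbo": "OPENAI_API_KEY",
    },
    filter_dict={"model": ["gpt-3.5-turbo"]},
)
```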
* UPDATE - code and tests to potentially get around the Windows build permission error, using a different method of producing temporary files
---------
Co-authored-by: Ward <award40@LAMU0CLP74YXVX6.uhc.com>
* UPDATE - Updated retrieve_utils.py to be able to parse text from PDF files
* UNDO - change to recursive condition
* UPDATE - updated agentchat_RetrieveChat.ipynb to clarify which file types are accepted to be in the docs path
* ADD - missing import
* UPDATE - setup.py to have PyPDF2 in retrievechat
* RE-ADD - urls
* ADD - tests for retrieve utils, and removed deprecated PyPDF2
* Update agentchat_RetrieveChat.ipynb
* Update retrieve_utils.py
Fix format
* Update retrieve_utils.py
Replace print with logger
* UPDATE - added more specific exception to PDF decryption try/catch
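A sketch of this kind of extraction with a narrow decryption try/except (illustrative, using pypdf; not the retrieve_utils code verbatim):

```python
import logging
from pypdf import PdfReader
from pypdf.errors import FileNotDecryptedError

logger = logging.getLogger(__name__)

def extract_text_from_pdf(file_path: str) -> str:
    """Extract text from a PDF file, skipping files that cannot be decrypted."""
    text = ""
    reader = PdfReader(file_path)
    if reader.is_encrypted:
        try:
            reader.decrypt("")  # try an empty password
        except FileNotDecryptedError as e:
            logger.warning("Could not decrypt %s: %s", file_path, e)
            return text
    for page in reader.pages:
        text += page.extract_text()
    return text  # return at function level, not inside the loop
```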
* FIX - typo, return statement at wrong indentation in extract_text_from_pdf
---------
Co-authored-by: Ward <award40@LAMU0CLP74YXVX6.uhc.com>
Co-authored-by: Li Jiang <bnujli@gmail.com>
* Improves clarity and fixes punctuation in README and Multi-agent documentation
* fix broken colab link to agentchat_groupchat_research.ipynb (others are fine)
* fix typos, improve readability
* fix bug for windows
* fix bug for windows
* clearer example
* link to example
* add test
* format
* comment
* fix assertion error
* fix test error and links
---------
Co-authored-by: Chi Wang (MSR) <chiw@microsoft.com>