* add function decorator to conversable agent
* polishing
* polishing
* added function decorator to the notebook with async function calls
* added support for return type hints and JSON encoding of the returned value if needed
* polishing
* polishing
* refactored async case
* Python 3.8 support added
* polishing
* polishing
* missing docs added
* refactoring and changes as requested
* getLogger
* documentation added
* test fix
* test fix
* added testing of agentchat_function_call_currency_calculator.ipynb to test_notebook.py
* added support for Pydantic parameters in function decorator
* polishing
* Update website/docs/Use-Cases/agent_chat.md
Co-authored-by: Li Jiang <bnujli@gmail.com>
* Update website/docs/Use-Cases/agent_chat.md
Co-authored-by: Li Jiang <bnujli@gmail.com>
* fixes problem with logprob parameter in openai.types.chat.chat_completion.Choice added by openai version 1.5.0
* get 100% code coverage on code added
* updated docs
* default values added to JSON schema
* added serialization using json.dump() for values that are not str or BaseModel
* added an upper limit on the openai version because of breaking changes in 1.5.0
* added line-by-line comments in docs to explain the process
* polishing
---------
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
Co-authored-by: Li Jiang <bnujli@gmail.com>
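Below is a minimal sketch of the decorator-based function registration these commits describe, loosely modeled on the currency-calculator notebook; the agent names, config, and exchange rate are placeholders, and `typing.Annotated` needs Python 3.9+ (use `typing_extensions` on 3.8):

```python
from typing import Annotated, Literal

from pydantic import BaseModel, Field

import autogen

# placeholder config; point it at your own OAI_CONFIG_LIST
llm_config = {"config_list": autogen.config_list_from_json("OAI_CONFIG_LIST")}

assistant = autogen.AssistantAgent(name="chatbot", llm_config=llm_config)
user_proxy = autogen.UserProxyAgent(name="user_proxy", human_input_mode="NEVER", code_execution_config=False)


class Currency(BaseModel):
    amount: Annotated[float, Field(description="Amount of currency", ge=0)]
    currency: Annotated[Literal["USD", "EUR"], Field(description="Currency symbol")]


@user_proxy.register_for_execution()  # executed on the user proxy side
@assistant.register_for_llm(description="Currency exchange calculator.")  # advertised to the LLM
def currency_calculator(
    base: Annotated[Currency, "Base amount and currency"],
    quote_currency: Annotated[Literal["USD", "EUR"], "Quote currency"] = "USD",
) -> Currency:  # the return type hint drives JSON encoding of the result
    rate = 1.1 if (base.currency, quote_currency) == ("EUR", "USD") else 1 / 1.1
    return Currency(amount=rate * base.amount, currency=quote_currency)


user_proxy.initiate_chat(assistant, message="How much is 123.45 EUR in USD?")
```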
* Add examples of constrained generation
* Add guidance based coder that always returns valid code blocks
* Add guidance based agent that always generates a valid json
* Add link to guidance nb
* Add example nb for async funcs
* Add a notebook based test for async function calls
* Update nb
* Update nb
* Remove duplicate code
* Rename func for consistency
* Fix bug
* Add intro text for cmd cell 4
* Add a short comment on await
* Update agentchat_function_call_async.ipynb
Minor typo
* Add link to nb
---------
Co-authored-by: Joshua Kim <joshkyh@users.noreply.github.com>
Co-authored-by: Qingyun Wu <qingyun.wu@psu.edu>
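A sketch of how an async tool might be registered and driven through the async chat entry point; the toy tool is illustrative, and `a_initiate_chat` is assumed to be the async counterpart of `initiate_chat`:

```python
import asyncio

import autogen

llm_config = {"config_list": autogen.config_list_from_json("OAI_CONFIG_LIST")}  # placeholder

assistant = autogen.AssistantAgent(name="assistant", llm_config=llm_config)
user_proxy = autogen.UserProxyAgent(name="user_proxy", human_input_mode="NEVER", code_execution_config=False)


@user_proxy.register_for_execution()
@assistant.register_for_llm(description="Count how many prime numbers are below n.")
async def count_primes(n: int) -> int:
    await asyncio.sleep(0)  # stand-in for real async work (I/O, remote APIs, ...)
    return sum(all(i % d for d in range(2, i)) for i in range(2, n))


async def main() -> None:
    # awaiting the async chat entry point lets the coroutine tool run on the same event loop
    await user_proxy.a_initiate_chat(assistant, message="How many primes are below 100?")


asyncio.run(main())
```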
* add agentchat_video_transcript_translate.ipynb
* finish the agentchat_video_transcript_translate.ipynb notebook
* modify the recognize_transcript_from_video function
* run the script and add the output to the notebook
* implement the notebook
* add the link to the video clip
* rename the file and add the version requirement for each package
* add the new notebook path
* add the notebook path to Example.md
* add the new notebook path to the new example.md
* add the instructions for FFmpeg and video download
* Update Examples.md
* Update Examples.md
* Update Examples.md
* Update Examples.md
* Delete notebook/agentchat_video_transcript_translate.ipynb
* Update Examples.md and add the link
---------
Co-authored-by: silver233jpg <60947716+silver233jpg@users.noreply.github.com>
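A rough sketch of the pipeline the notebook describes: extract audio with FFmpeg, transcribe with openai-whisper, then hand the transcript to an agent for translation. File names and model size are placeholders, and the notebook's own `recognize_transcript_from_video` may differ in detail:

```python
import subprocess

import whisper  # openai-whisper

# 1. extract the audio track with FFmpeg (file names are placeholders)
subprocess.run(["ffmpeg", "-i", "video.mp4", "-vn", "audio.mp3"], check=True)

# 2. transcribe the audio; "base" is the smallest reasonable Whisper model
model = whisper.load_model("base")
result = model.transcribe("audio.mp3")
transcript = result["text"]

# 3. `transcript` can now be sent to an AssistantAgent with a
#    "translate the following transcript into <language>" style prompt
print(transcript[:200])
```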
* Change "content" type in Conversable Agent
* content and system_message support str and List
Update for all other agents
* Content_str now also takes None as input
* Group Chat now works with LMM too
* Style: newline for import in Conversable Agent
* Add test for groupchat + LMM
* Resolve comments
1. Undo AssistantAgent changes
2. Modify the asserts and raises in `content_str` function and update
test accordingly.
* Undo AssistantAgent
* Update comments and add assertion for LMM
* Typo fix in docstring for content_str
* Remove "None" from conversable_agent.py
* Lint message to dict in multimodal_conversable_agent.py
* Address lint issues
* linting
* Move lmm test into contrib test
* Resolve 2 comments
* Move img_utils into contrib folder
* Resolve img_utils path issues
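A small sketch of the widened `content_str` behavior described above; the import path and the exact placeholder used for image parts may differ between versions:

```python
from autogen.code_utils import content_str  # import path may differ between versions

print(content_str("plain text"))  # a plain string passes through unchanged
print(content_str(None))          # None is now accepted and yields an empty string
print(content_str([               # multimodal content: text parts are kept,
    {"type": "text", "text": "Describe the image."},
    {"type": "image_url", "image_url": {"url": "https://example.com/cat.png"}},
]))                               # ...while image parts become a placeholder tag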
* Enable defining new functions after agent creation
* Add notebook for function inception example
* format
* 1. fix bug 2. support remove function
* 1. fix bug 2. support remove function
* 1. add example doc 2. change test file 3. change ipynb title
* Update website/docs/Examples.md
---------
Co-authored-by: Li Jiang <bnujli@gmail.com>
Co-authored-by: skzhang1 <shaokunzhang529@gmail.com>
Co-authored-by: Shaokun Zhang <shaokunzhang529@gmail.com>
Co-authored-by: Qingyun Wu <qingyun.wu@psu.edu>
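A hedged sketch of adding (and later removing) a function after the agents already exist, using `update_function_signature` / `register_function`; the tool, its JSON schema, and the agent setup are illustrative only:

```python
import autogen

llm_config = {"config_list": autogen.config_list_from_json("OAI_CONFIG_LIST")}  # placeholder

assistant = autogen.AssistantAgent(name="assistant", llm_config=llm_config)
user_proxy = autogen.UserProxyAgent(name="user_proxy", human_input_mode="NEVER", code_execution_config=False)


def add_numbers(a: int, b: int) -> str:
    return str(a + b)


# advertise the function to the LLM after the agent has been created
assistant.update_function_signature(
    {
        "name": "add_numbers",
        "description": "Add two integers.",
        "parameters": {
            "type": "object",
            "properties": {"a": {"type": "integer"}, "b": {"type": "integer"}},
            "required": ["a", "b"],
        },
    },
    is_remove=False,
)
# and make it executable on the user proxy side
user_proxy.register_function(function_map={"add_numbers": add_numbers})

# "support remove function": passing the name with is_remove=True takes it away again
assistant.update_function_signature("add_numbers", is_remove=True)
```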
* Update chat_with_teachable_agent.py to v2.
* Update agentchat_teachability.ipynb to v2.
* Add test of teachability accuracy.
* Update installation instructions.
* Add to contrib tests.
* pre-commit fixes
* Apply reviewer suggestions to test workflows.
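An illustrative sketch of the teachable-agent flow these commits update; the `teach_config` keys and cleanup calls follow the contrib notebook of that release and may differ in later versions:

```python
import autogen
from autogen.agentchat.contrib.teachable_agent import TeachableAgent

llm_config = {"config_list": autogen.config_list_from_json("OAI_CONFIG_LIST")}  # placeholder

teachable_agent = TeachableAgent(
    name="teachableagent",
    llm_config=llm_config,
    teach_config={
        "reset_db": False,                             # keep memos from earlier sessions
        "path_to_db_dir": "./tmp/teachable_agent_db",  # where the memo store lives
    },
)
user = autogen.UserProxyAgent(
    name="user", human_input_mode="NEVER", max_consecutive_auto_reply=0, code_execution_config=False
)

user.initiate_chat(teachable_agent, message="My team's standup is at 9:30; please remember that.")
teachable_agent.learn_from_user_feedback()  # persist anything teachable from this chat
teachable_agent.close_db()
```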
* Update agentchat_oai_assistant_retrieval.ipynb
our -> your to reduce confusion
* Update notebook/agentchat_oai_assistant_retrieval.ipynb
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
---------
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
* add agenteval-notebook for math problems and the blog post about it
* update gitignore
* updates to notebook
* adding folder for the logs
* adding math problems logs
* adding folder for alfworld logs
* added limitations and future work to blog post
* minor edits blog post
* adding changes
* reorg
* modify the main notebook
* modification of the main notebook
* remove wrong notebook
* uploading new notebook
* update agenteval notebook
* change the sample
* Update agenteval_cq_math.ipynb
* adding final changes to notebook
* updated framework picture
* Update index.mdx
* Update index.md
* Add files via upload
* updates to notebook
* revise the blog
* revise the blog
* update the agent img
* revise the blog
* revise the blog
* Excluded model logs from the main branch; they can be found in the agenteval branch
* Fixed pre-commit formatting.
* Update website/blog/2023-11-11-AgentEval/index.mdx
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
* update gitignore
* update index.mdx
* update authors.yml by adding Negar and Julia
* remove md file
* remove md file
* update gitignore
* update authors file
* pre-commit checks
* pre-commit checks on authors.yml
* pre-commit checks on authors.yml
* update index.mdx
* update authors.yml by adding Negar and Julia
* updated the blog-post version 1
* updated the blog-post: TL;DR is ready
* updated the blog-post: first part of introduction is ready
* updated figures: typos on fig 1, changed terminology on fig 2
* updated the Framework part
* fixed rendering issues
* upload zip file instead of single samples
* update prealgebra.zip
* update
* upload
* update z
* update naming
* update zip
* update the agenteval notebook
* update the notebook - removing unnecessary logs
* updated fig 1 and references to it
* updated fig 1
* incorporated PR comments
* merged agenteval branch
* final changes to the blog
* updated taxonomy
* update notebook
* minor changes to the blog
* Fixed formatting
* Update the link in agenteval_cq_math.ipynb
* update the blog and link in notebook
* Update index.mdx
* change folder name
* Changes to be committed:
modified: OAI_CONFIG_LIST_sample.txt
* add sample OAI file
* fix the url link to colab and typos
* fix the url link to colab and typos
* add authors
* update profile pic
* update authors
* fixing the problem in test_groupchat.py
* update the title to lower case
* reverting changes in setup.py
* rerun pre-commit
---------
Co-authored-by: Negar Arabzadeh <ngr.arabzadeh@gmail.com>
Co-authored-by: Julia Kiseleva <jukisele@microsoft.com>
Co-authored-by: afourney <adamfo@microsoft.com>
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
Co-authored-by: Qingyun Wu <qingyun.wu@psu.edu>
* Add custom text types and recursive
* Add custom text types and recursive
* Fix format
* Update qdrant, Add pdf to unstructured
* Use unstructured as the default text extractor if installed
* Add tests for unstructured
* Update tests env for unstructured
* Fix error if last message is a function call, issue #569
* Remove csv, md and tsv from UNSTRUCTURED_FORMATS
* Update docstring of docs_path
* Update test for get_files_from_dir
* Update docstring of custom_text_types
* Fix missing search_string in update_context
* Add custom_text_types to notebook example
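A sketch of the new retrieval options (`recursive`, `custom_text_types`, and the unstructured-backed extraction) in `retrieve_config`; paths and extensions are placeholders:

```python
import autogen
from autogen.agentchat.contrib.retrieve_assistant_agent import RetrieveAssistantAgent
from autogen.agentchat.contrib.retrieve_user_proxy_agent import RetrieveUserProxyAgent

llm_config = {"config_list": autogen.config_list_from_json("OAI_CONFIG_LIST")}  # placeholder

assistant = RetrieveAssistantAgent(name="assistant", llm_config=llm_config)
ragproxyagent = RetrieveUserProxyAgent(
    name="ragproxyagent",
    human_input_mode="NEVER",
    retrieve_config={
        "task": "qa",
        "docs_path": "./docs",         # placeholder path
        "recursive": True,             # walk sub-directories of docs_path
        "custom_text_types": ["mdx"],  # extra extensions to treat as plain text
        # if `unstructured` is installed, it is used as the default extractor
        # for formats such as pdf and docx
    },
)

ragproxyagent.initiate_chat(assistant, problem="What do the docs say about installation?")
```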
* Refactor GPTAssistantAgent constructor to handle
instructions and overwrite_instructions flag
- Ensure that `system_message` is always consistent with `instructions`
- Ensure provided instructions are always used
- Add option to permanently modify the instructions of the assistant
* Improve default behavior
* Add a test; add method to delete assistant
* Add a new test for overwriting instructions
* Add test case for when no instructions are given for existing assistant
* Add pytest markers to test_gpt_assistant.py
* add test in workflow
* update
* fix test_client_stream
* comment out test_hierarchy_
* Add basic gptassistant notebook
- also improve logging in gpt assistant
* Update notebook/agentchat_oai_assistant_twoagents_basic.ipynb
Co-authored-by: Qingyun Wu <qingyun.wu@psu.edu>
---------
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
Co-authored-by: kevin666aa <yrwu000627@gmail.com>
Co-authored-by: Qingyun Wu <qingyun.wu@psu.edu>
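A sketch of the refactored constructor behavior: `instructions` versus an existing assistant's server-side instructions, the `overwrite_instructions` flag, and the new `delete_assistant` helper. Config details are placeholders:

```python
import autogen
from autogen.agentchat.contrib.gpt_assistant_agent import GPTAssistantAgent

config_list = autogen.config_list_from_json("OAI_CONFIG_LIST")  # placeholder

# If llm_config carries an existing "assistant_id", the agent connects to that assistant
# and only pushes `instructions` to the server when overwrite_instructions=True;
# otherwise a new assistant is created with the given instructions.
gpt_assistant = GPTAssistantAgent(
    name="assistant",
    instructions="You are a helpful assistant.",
    llm_config={"config_list": config_list},  # add "assistant_id": "..." to reuse one
    overwrite_instructions=False,
)

user_proxy = autogen.UserProxyAgent(
    name="user_proxy", human_input_mode="NEVER", max_consecutive_auto_reply=1, code_execution_config=False
)
user_proxy.initiate_chat(gpt_assistant, message="Say hello in three languages.")

gpt_assistant.delete_assistant()  # clean up the assistant created for this run
```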
* Notebook showing how to use select speaker to control conversation flow.
* pytest associated with notebook.
* Added llm_config to assistant and user proxy agent, and clarified why we set use_cache to false, as requested in the review.
* Added a @pytest.mark.skipif decorator, like other tests, to run it only on one Python version (3.10)
* Fixed config warning.
* Removed llm_config from UserProxyAgent
* Fixed minor typos.
* Reran outputs
* Removed llm_config from user_proxy_agent
* Colab Badge link updated.
* pre-commit formatting changes.
* Fixed base_url
---------
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
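One way to pin down who speaks next in a group chat; the notebook may control flow differently, and the `speaker_selection_method` values shown here are assumed from the GroupChat options of that release:

```python
import autogen

llm_config = {"config_list": autogen.config_list_from_json("OAI_CONFIG_LIST")}  # placeholder

planner = autogen.AssistantAgent(name="Planner", llm_config=llm_config)
engineer = autogen.AssistantAgent(name="Engineer", llm_config=llm_config)
user_proxy = autogen.UserProxyAgent(
    name="User_proxy", human_input_mode="NEVER", code_execution_config=False
)

groupchat = autogen.GroupChat(
    agents=[user_proxy, planner, engineer],
    messages=[],
    max_round=6,
    speaker_selection_method="round_robin",  # "auto", "manual", and "random" are the other options
)
manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)

user_proxy.initiate_chat(manager, message="Outline and then draft a short CLI todo app.")
```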
* LMM Code added
* LLaVA notebook update
* Test cases and Notebook modified for OpenAI v1
* Move LMM into contrib
To resolve test and deployment issues.
In the future, we can install pillow by default and then move the LMM agents back into agentchat.
* LMM test setup update
* try...except... clause for LMM tests
* disable patch for llava agent test
To resolve dependencies issue for build
* Add LMM Blog
* Change docstring for LMM agents
* Docstring update patch
* llava: insert reply at position 1 now
So, it can still handle human_input_mode
and max_consecutive_reply
* Resolve comments
Fixing: typos, blogs, yml, and add OpenAIWrapper
* Signature typo fix for LMM agent: system_message
* Update LMM "content" from latest OpenAI release
Reference https://platform.openai.com/docs/guides/vision
* update LMM test according to latest OpenAI release
* Fully support GPT-4V now
1. Add a notebook for GPT-4V. LLava notebook also updated.
2. img_utils updated
3. GPT-4V formatter now returns base64 images with mime type
4. Infer mime type directly from b64 image content (while loading
without suffix)
5. Test cases modified according to all the related changes.
* GPT-4V link updated in blog
---------
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
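A sketch of driving the multimodal agent with GPT-4V-style image content; the model name, image URL, and `<img ...>` tag handling are taken from the LMM notebooks and should be treated as assumptions:

```python
import autogen
from autogen.agentchat.contrib.multimodal_conversable_agent import MultimodalConversableAgent

# a GPT-4V-capable config; the model name and filter are placeholders
config_list_4v = autogen.config_list_from_json(
    "OAI_CONFIG_LIST", filter_dict={"model": ["gpt-4-vision-preview"]}
)

image_agent = MultimodalConversableAgent(
    name="image-explainer",
    llm_config={"config_list": config_list_4v, "max_tokens": 300},
)
user_proxy = autogen.UserProxyAgent(
    name="user", human_input_mode="NEVER", max_consecutive_auto_reply=0, code_execution_config=False
)

# the <img ...> tag is expanded by the formatter into image content
# (a base64 data URL with the inferred mime type, or the URL itself)
user_proxy.initiate_chat(
    image_agent,
    message="What is shown in this picture? <img https://example.com/photo.png>",
)
```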
* UPDATE - FAQ section in documentation
* FIX - formatting test failure
* FIX - added disclaimer
* pre-commit
* Update website/docs/FAQ.md
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
* Update website/docs/FAQ.md
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
* Update website/docs/FAQ.md
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
* UPDATE - notebook and FAQ information for config_list_from_models
---------
Co-authored-by: Ward <award40@LAMU0CLP74YXVX6.uhc.com>
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
Co-authored-by: Li Jiang <bnujli@gmail.com>
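For reference, a minimal, hedged example of `config_list_from_models`, which builds one config entry per model from keys found in the environment or key files:

```python
import autogen

# builds one config entry per listed model, using the OpenAI key found in the
# environment (OPENAI_API_KEY) or in the key files under key_file_path
config_list = autogen.config_list_from_models(
    key_file_path=".",
    model_list=["gpt-4", "gpt-3.5-turbo"],
)
```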
* LMM notebook
* Use "register_reply" instead.
* Loop check LLaVA non-empty response
* Run notebook
* Make the llava_call function more flexible
* Include API for LLaVA from Replicate
* LLaVA data format update x2
1. prompt formatter function
2. conversation format with SEP
* Coding example added
* Rename "ImageAgent" -> "LLaVAAgent"
* Docstring and comments updates
* Debug notebook: Remote LLaVA tested
* Example 1: remove system message
* MultimodalConversableAgent added
* Add gpt4v_formatter
* LLaVA: update example 1
* LLaVA: Add link to "Table of Content"
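Finally, an illustrative LLaVAAgent setup; the endpoint, model name, and llm_config keys are placeholders for a local or hosted (e.g. Replicate) LLaVA deployment:

```python
import autogen
from autogen.agentchat.contrib.llava_agent import LLaVAAgent

# endpoint, model name, and llm_config keys are placeholders: point them at a
# local LLaVA server or a hosted (e.g. Replicate) deployment
llava_config_list = [
    {"model": "llava-v1.5-13b", "api_key": "None", "base_url": "http://localhost:8000"}
]

image_agent = LLaVAAgent(
    name="image-explainer",
    llm_config={"config_list": llava_config_list, "temperature": 0.5, "max_new_tokens": 500},
    max_consecutive_auto_reply=10,
)
user_proxy = autogen.UserProxyAgent(
    name="user", human_input_mode="NEVER", max_consecutive_auto_reply=0, code_execution_config=False
)

user_proxy.initiate_chat(
    image_agent,
    message="Describe this image: <img https://example.com/figure.png>",
)
```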