* set use_docker to default to true
* black formatting
* centralize checking and add env variable option
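  A minimal sketch of what the centralized check could look like, assuming an `AUTOGEN_USE_DOCKER` environment variable; the helper name and exact precedence rules here are illustrative, not the library's actual API:
  ```python
  import os

  def resolve_use_docker(explicit: bool | None = None) -> bool:
      """Decide whether code execution should run in Docker.

      Assumed precedence: explicit argument > AUTOGEN_USE_DOCKER env var >
      default True (the new default set above).
      """
      if explicit is not None:
          return explicit
      env = os.environ.get("AUTOGEN_USE_DOCKER")
      if env is not None:
          return env.strip().lower() in ("1", "true", "yes")
      return True
  ```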
* set docker env flag for contrib tests
* better error message and cleanup
* disable explicit docker tests
* docker is installed in CI, so the test can't check for its absence
* address PR comments and fix test
* rename and fix function descriptions
* documentation
* update notebooks so that they can be run with change in default
* add unit tests for new code
* cache and restore env var
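  The cache-and-restore pattern here is the standard one; a sketch with an assumed flag name:
  ```python
  import os

  def test_flag_round_trip():
      saved = os.environ.get("AUTOGEN_USE_DOCKER")  # cache the current value
      try:
          os.environ["AUTOGEN_USE_DOCKER"] = "False"
          # ... exercise the code path that reads the flag ...
      finally:
          if saved is None:  # restore the original environment
              os.environ.pop("AUTOGEN_USE_DOCKER", None)
          else:
              os.environ["AUTOGEN_USE_DOCKER"] = saved
  ```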
* skip on Windows: Docker runs in CI, but there are problems mounting the volume
* update documentation
* move header
* update contrib tests
* fixed spelling and minor errors, and reformatted using black
* polishing
* added codespell to pre-commit hooks, fixed a number of spelling errors and a few minor bugs in the code
* update autogen library version in notebooks
* LMM Code added
* LLaVA notebook update
* Test cases and Notebook modified for OpenAI v1
* Move LMM into contrib
To resolve test and deployment issues. In the future, we can install pillow by default and move the LMM agents back into agentchat.
* LMM test setup update
* try...except... clause for LMM tests
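  A sketch of the guard pattern, assuming the contrib module path; the exact import being guarded may differ:
  ```python
  import pytest

  try:
      from autogen.agentchat.contrib.multimodal_conversable_agent import (
          MultimodalConversableAgent,
      )
      skip_lmm = False
  except ImportError:  # optional LMM deps (e.g. pillow) not installed
      skip_lmm = True

  @pytest.mark.skipif(skip_lmm, reason="LMM dependencies are not installed")
  def test_multimodal_agent():
      ...
  ```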
* disable patch for llava agent test
To resolve a dependency issue in the build
* Add LMM Blog
* Change docstring for LMM agents
* Docstring update patch
* llava: insert reply at position 1 now
So it can still handle human_input_mode and max_consecutive_auto_reply
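  In AutoGen, reply functions are tried in list order; a hedged sketch of the registration (the reply-function name is illustrative):
  ```python
  from autogen import Agent

  # Inserting at position 1 keeps the built-in
  # check_termination_and_human_reply at position 0, so human_input_mode
  # and max_consecutive_auto_reply are still honored first.
  llava_agent.register_reply(
      [Agent, None],          # trigger for any sender
      generate_llava_reply,   # illustrative name for the custom reply function
      position=1,
  )
  ```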
* Resolve comments
Fix typos, blog posts, and yml files, and add OpenAIWrapper
* Signature typo fix for LMM agent: system_message
* Update LMM "content" from latest OpenAI release
Reference https://platform.openai.com/docs/guides/vision
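  Per the linked guide, a vision message's "content" becomes a list of typed parts rather than a plain string:
  ```python
  message = {
      "role": "user",
      "content": [
          {"type": "text", "text": "What's in this image?"},
          {
              "type": "image_url",
              # either an https:// URL or a base64 data URI
              "image_url": {"url": "data:image/png;base64,<BASE64_DATA>"},
          },
      ],
  }
  ```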
* update LMM test according to latest OpenAI release
* Fully support GPT-4V now
1. Add a notebook for GPT-4V. The LLaVA notebook is also updated.
2. img_utils updated
3. The GPT-4V formatter now returns base64 images with a MIME type
4. Infer the MIME type directly from base64 image content (when loading without a file suffix), as sketched below
5. Test cases modified according to all the related changes.
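  A sketch of point 4, inferring the MIME type from the decoded image's magic bytes; the function names are illustrative:
  ```python
  import base64

  _MAGIC_BYTES = {
      b"\x89PNG\r\n\x1a\n": "image/png",
      b"\xff\xd8\xff": "image/jpeg",
      b"GIF87a": "image/gif",
      b"GIF89a": "image/gif",
  }

  def infer_mime_type(b64_image: str) -> str:
      """Guess the MIME type from the first bytes of a base64-encoded image."""
      header = base64.b64decode(b64_image)[:8]
      for magic, mime in _MAGIC_BYTES.items():
          if header.startswith(magic):
              return mime
      return "image/png"  # fall back to a sensible default

  def to_data_uri(b64_image: str) -> str:
      # point 3: the formatter returns a base64 image tagged with its MIME type
      return f"data:{infer_mime_type(b64_image)};base64,{b64_image}"
  ```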
* GPT-4V link updated in blog
---------
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
* LMM notebook
* Use "register_reply" instead.
* Loop until LLaVA returns a non-empty response
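  A sketch of the retry loop, with `llava_call` standing in for the actual client function:
  ```python
  import time

  def llava_call_with_retry(prompt: str, max_retries: int = 3) -> str:
      for _ in range(max_retries):
          response = llava_call(prompt)  # assumed client function
          if response.strip():           # accept only non-empty replies
              return response
          time.sleep(1)                  # brief pause before retrying
      raise RuntimeError("LLaVA returned an empty response after retries")
  ```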
* Run notebook
* Make the llava_call function more flexible
* Include API for LLaVA from Replicate
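  A hedged sketch of the Replicate path; the model version hash is a placeholder, and `REPLICATE_API_TOKEN` must be set in the environment:
  ```python
  import replicate  # pip install replicate

  output = replicate.run(
      "yorickvp/llava-13b:<version-hash>",  # placeholder model version
      input={
          "image": open("figure.png", "rb"),
          "prompt": "Describe this image.",
      },
  )
  print("".join(output))  # this model streams tokens, so join the iterator
  ```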
* LLaVA data format update x2
1. prompt formatter function
2. conversation format with SEP (see the sketch below)
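  A sketch of both points under assumed names: format each turn with a speaker tag and join the turns with a separator token:
  ```python
  SEP = "###"  # assumed separator token between turns

  def format_llava_prompt(messages: list[dict]) -> str:
      """Join chat turns into a single LLaVA prompt string."""
      turns = []
      for msg in messages:
          speaker = "Human" if msg["role"] == "user" else "Assistant"
          turns.append(f"{speaker}: {msg['content']}")
      # trailing "Assistant:" cues the model to produce the next reply
      return f" {SEP} ".join(turns) + f" {SEP} Assistant:"
  ```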
* Coding example added
* Rename "ImageAgent" -> "LLaVAAgent"
* Docstring and comments updates
* Debug notebook: Remote LLaVA tested
* Example 1: remove system message
* MultimodalConversableAgent added
* Add gpt4v_formatter
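  A sketch of what such a formatter might do, splitting a prompt with inline `<img ...>` tags into the typed content list shown earlier (the regex and behavior are assumptions, not the function's actual contract):
  ```python
  import re

  def gpt4v_formatter(prompt: str) -> list[dict]:
      """Convert '<img URL-or-data-URI>' tags into image_url content parts."""
      content, last = [], 0
      for match in re.finditer(r"<img ([^>]+)>", prompt):
          if prompt[last:match.start()].strip():
              content.append({"type": "text", "text": prompt[last:match.start()]})
          content.append({"type": "image_url", "image_url": {"url": match.group(1)}})
          last = match.end()
      if prompt[last:].strip():
          content.append({"type": "text", "text": prompt[last:]})
      return content
  ```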
* LLaVA: update example 1
* LLaVA: Add link to "Table of Contents"