This pull request integrates the `llama-cpp` library into the
`autogen-ext` package. The main changes are updating the project
dependencies, adding a new module for the
`LlamaCppChatCompletionClient`, and implementing the client itself.
### Project Dependencies:
*
[`python/packages/autogen-ext/pyproject.toml`](diffhunk://#diff-095119d4420ff09059557bd25681211d1772c2be0fbe0ff2d551a3726eff1b4bR34-R38):
Added `llama-cpp-python` as a new dependency under the `llama-cpp`
section.
### New Module:
*
[`python/packages/autogen-ext/src/autogen_ext/models/llama_cpp/__init__.py`](diffhunk://#diff-42ae3ba17d51ca917634c4ea3c5969cf930297c288a783f8d9c126f2accef71dR1-R8):
Introduced the `LlamaCppChatCompletionClient` class and handled import
errors with a descriptive message for missing dependencies.
### Implementation of `LlamaCppChatCompletionClient`:
*
`python/packages/autogen-ext/src/autogen_ext/models/llama_cpp/_llama_cpp_completion_client.py`:
- Added the `LlamaCppChatCompletionClient` class with methods to
initialize the client, create chat completions, detect and execute
tools, and handle streaming responses.
- Included detailed logging for debugging purposes and implemented
methods to count tokens, track usage, and provide model information and
chat capabilities.
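
For reviewers, a minimal usage sketch of the new client. The `model_path` parameter and the model file name are assumptions; see the module docstring for the exact constructor arguments.

```python
from autogen_core.models import UserMessage
from autogen_ext.models.llama_cpp import LlamaCppChatCompletionClient

# Assumes a local GGUF model file; extra kwargs are forwarded to llama-cpp-python.
client = LlamaCppChatCompletionClient(model_path="path/to/model.gguf")
result = await client.create([UserMessage(content="Hello!", source="user")])
print(result.content)
```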
## Checks
- [x] I've included any doc changes needed for
https://microsoft.github.io/autogen/. See
https://microsoft.github.io/autogen/docs/Contribute#documentation to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
---------
Co-authored-by: aribornstein <x@x.com>
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
Co-authored-by: Ryan Sweet <rysweet@microsoft.com>
_(EXPERIMENTAL, RESEARCH IN PROGRESS)_
In 2023 AutoGen introduced [Teachable
Agents](https://microsoft.github.io/autogen/0.2/blog/2023/10/26/TeachableAgent/)
that users could teach new facts, preferences, and skills. But teachable
agents were limited in several ways: they could only be
`ConversableAgent` subclasses; they couldn't learn a new skill unless
the user stated (in a single turn) both the task and how to solve it;
and they couldn't learn on their own. **Task-Centric Memory** overcomes
these limitations, allowing users to teach arbitrary agents (or teams)
more flexibly and reliably, and enabling agents to learn from their own
trial-and-error experiences.
This PR is large and complex. All of the files are new, and most of the
added components depend on each other to run at all. But the review
process can be accelerated if approached in the following order.
1. Start with the [Task-Centric Memory
README](https://github.com/microsoft/autogen/tree/agentic_memory/python/packages/autogen-ext/src/autogen_ext/task_centric_memory).
   1. Install the memory extension locally, since it won't be on PyPI
      until it's merged. From the `agentic_memory` branch, in the
      `python/packages` directory:
      - `pip install -e autogen-agentchat`
      - `pip install -e autogen-ext[openai]`
      - `pip install -e autogen-ext[task-centric-memory]`
   2. Run the Quickstart sample code (a sketch follows this list), then
      immediately open the `./pagelogs/quick/0 Call Tree.html` file in a
      browser to view the work in progress.
   3. Click through the web page links to see the details.
2. Continue through the rest of the main README to get a high-level
   overview of the architecture.
3. Read through the [code samples
   README](https://github.com/microsoft/autogen/tree/agentic_memory/python/samples/task_centric_memory),
   running each of the code samples while viewing their page logs.
4. Skim through the code samples, along with their corresponding YAML
   config files:
   1. `chat_with_teachable_agent.py`
   2. `eval_retrieval.py`
   3. `eval_teachability.py`
   4. `eval_learning_from_demonstration.py`
   5. `eval_self_teaching.py`
5. Read `task_centric_memory_controller.py`, referring back to the
   previously generated page logs as needed. This is the most important
   and complex file in the PR.
6. Read the remaining core files:
   1. `_task_centric_memory_bank.py`
   2. `_string_similarity_map.py`
   3. `_prompter.py`
7. Read the supporting files in the `utils` dir:
   1. `teachability.py`
   2. `apprentice.py`
   3. `grader.py`
   4. `page_logger.py`
   5. `_functions.py`
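
As a preview of the Quickstart mentioned in step 1.2, here is a hedged sketch of the teaching loop. The module path and method names are assumptions drawn from the README on the PR branch; consult it for the exact API.

```python
import asyncio

from autogen_ext.models.openai import OpenAIChatCompletionClient
# Module path is an assumption based on the PR branch layout.
from autogen_ext.task_centric_memory import MemoryController
from autogen_ext.task_centric_memory.utils import PageLogger


async def main() -> None:
    client = OpenAIChatCompletionClient(model="gpt-4o")
    # Writes the page log that step 1.2 opens in a browser.
    logger = PageLogger(config={"level": "DEBUG", "path": "./pagelogs/quick"})
    memory_controller = MemoryController(reset=True, client=client, logger=logger)

    # Teach a fact, then retrieve it for a related task.
    await memory_controller.add_memo(task="What color do I like?", insight="Deep blue is my favorite color")
    memos = await memory_controller.retrieve_relevant_memos(task="What's my favorite color?")
    for memo in memos:
        print(memo.insight)


asyncio.run(main())
```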
## Why are these changes needed?
Shows an example of how to use the `Memory` interface to implement a
just-in-time vector memory based on ChromaDB.
```python
import os
from pathlib import Path

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.ui import Console
from autogen_core.memory import MemoryContent, MemoryMimeType
from autogen_ext.memory.chromadb import ChromaDBVectorMemory, PersistentChromaDBVectorMemoryConfig
from autogen_ext.models.openai import OpenAIChatCompletionClient


# Example tool for the agent to call.
async def get_weather(city: str, units: str = "metric") -> str:
    return f"The weather in {city} is 23 °C and Sunny."


# Initialize ChromaDB memory with custom config.
# An HttpChromaDBVectorMemoryConfig is also supported for connecting to a remote ChromaDB server.
chroma_user_memory = ChromaDBVectorMemory(
    config=PersistentChromaDBVectorMemoryConfig(
        collection_name="preferences",
        persistence_path=os.path.join(str(Path.home()), ".chromadb_autogen"),
        k=2,  # Return top k results
        score_threshold=0.4,  # Minimum similarity score
    )
)

# Add user preferences to memory.
await chroma_user_memory.add(
    MemoryContent(
        content="The weather should be in metric units",
        mime_type=MemoryMimeType.TEXT,
        metadata={"category": "preferences", "type": "units"},
    )
)
await chroma_user_memory.add(
    MemoryContent(
        content="Meal recipe must be vegan",
        mime_type=MemoryMimeType.TEXT,
        metadata={"category": "preferences", "type": "dietary"},
    )
)

# Create an assistant agent with ChromaDB memory.
assistant_agent = AssistantAgent(
    name="assistant_agent",
    model_client=OpenAIChatCompletionClient(
        model="gpt-4o",
    ),
    tools=[get_weather],
    memory=[chroma_user_memory],
)

stream = assistant_agent.run_stream(task="What is the weather in New York?")
await Console(stream)
await chroma_user_memory.close()
```
```txt
---------- user ----------
What is the weather in New York?
---------- assistant_agent ----------
[MemoryContent(content='The weather should be in metric units', mime_type='MemoryMimeType.TEXT', metadata={'category': 'preferences', 'mime_type': 'MemoryMimeType.TEXT', 'type': 'units', 'score': 0.4342913043162201, 'id': '8a8d683c-5866-41e1-ac17-08c4fda6da86'}), MemoryContent(content='The weather should be in metric units', mime_type='MemoryMimeType.TEXT', metadata={'category': 'preferences', 'mime_type': 'MemoryMimeType.TEXT', 'type': 'units', 'score': 0.4342913043162201, 'id': 'f27af42c-cb63-46f0-b26b-ffcc09955ca1'})]
---------- assistant_agent ----------
[FunctionCall(id='call_a8U3YEj2dxA065vyzdfXDtNf', arguments='{"city":"New York","units":"metric"}', name='get_weather')]
---------- assistant_agent ----------
[FunctionExecutionResult(content='The weather in New York is 23 °C and Sunny.', call_id='call_a8U3YEj2dxA065vyzdfXDtNf', is_error=False)]
---------- assistant_agent ----------
The weather in New York is 23 °C and Sunny.
```
Note that the `MemoryContent` objects in the memory query events carry
useful metadata, such as the score and id of the retrieved memories.
## Checks
- [ ] I've included any doc changes needed for
https://microsoft.github.io/autogen/. See
https://microsoft.github.io/autogen/docs/Contribute#documentation to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
@ekzhu should likely be assigned as reviewer
## Why are these changes needed?
These changes address the bug reported in #5663. Prevents TypeError from
being thrown at inference time by ollama AsyncClient when `host` (and
other) kwargs are passed to autogen OllamaChatCompletionClient
constructor.
It also adds `ollama` as a named optional extra so that the ollama
requirements can be installed alongside autogen-ext (e.g. `pip install
autogen-ext[ollama]`).
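
For illustration, a minimal sketch of the constructor usage this fixes; the model name is an assumption, and `host` is forwarded to ollama's `AsyncClient`.

```python
from autogen_ext.models.ollama import OllamaChatCompletionClient

# Previously, extra kwargs like host raised a TypeError at inference time;
# they are now passed through to the underlying ollama AsyncClient.
client = OllamaChatCompletionClient(
    model="llama3.2",  # assumed model name
    host="http://localhost:11434",
)
```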
@ekzhu, I will need some help or guidance to ensure that the associated
test (which requires ollama and tiktoken as dependencies of the
OllamaChatCompletionClient) can run successfully in autogen's test
execution environment.
I have also left the "I've made sure all auto checks have passed" check
below unchecked, as this PR is coming from my fork. (UPDATE: auto checks
appear to have passed after opening the PR, so I have checked the box
below.)
## Related issue number
Intended to close #5663
## Checks
- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
---------
Co-authored-by: Ryan Stewart <ryanstewart@Ryans-MacBook-Pro.local>
Co-authored-by: Jack Gerrits <jackgerrits@users.noreply.github.com>
Co-authored-by: peterychang <49209570+peterychang@users.noreply.github.com>
This PR improves documentation on custom agents.
- Shows an example of how to create a custom agent that directly uses a
model client; in this case, a `GeminiAssistantAgent` that directly uses
the Gemini SDK model client (see the sketch after this list).
- Shows that the custom agent can be easily added to any AgentChat team.
- Shows how the same custom agent can be made declarative by inheriting
the `Component` interface and implementing the required methods.
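
A hedged sketch of the custom-agent shape described above, eliding the actual Gemini SDK call; the exact message base types vary across AgentChat versions.

```python
from typing import Sequence

from autogen_agentchat.agents import BaseChatAgent
from autogen_agentchat.base import Response
from autogen_agentchat.messages import ChatMessage, TextMessage
from autogen_core import CancellationToken


class GeminiAssistantAgent(BaseChatAgent):
    """Sketch of a custom agent that calls the Gemini SDK directly."""

    def __init__(self, name: str) -> None:
        super().__init__(name, description="An agent that answers using the Gemini SDK.")

    @property
    def produced_message_types(self) -> Sequence[type[ChatMessage]]:
        return (TextMessage,)

    async def on_messages(
        self, messages: Sequence[ChatMessage], cancellation_token: CancellationToken
    ) -> Response:
        # Placeholder: call the Gemini SDK here with the chat history.
        reply = "..."  # text returned by the Gemini model
        return Response(chat_message=TextMessage(content=reply, source=self.name))

    async def on_reset(self, cancellation_token: CancellationToken) -> None:
        pass  # Clear any per-conversation state.
```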
Closes #5450
## Why are these changes needed?
These changes are needed because currently there's no generic way to add
`tools` to AutoGen Studio workflows using the existing DSL and schema
other than inline Python.
This API will be quite verbose and lacks a discovery mechanism, but it
unlocks a lot of programmatic use cases.
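
For context, a hedged sketch of what a declarative tool entry might look like under the component config schema; the provider and field names are illustrative assumptions, not the exact schema this PR adds.

```python
# Illustrative only: a FunctionTool expressed declaratively rather than as inline Python.
tool_config = {
    "provider": "autogen_core.tools.FunctionTool",
    "component_type": "tool",
    "config": {
        "name": "get_weather",
        "description": "Fetch the weather for a city.",
        "source_code": "def get_weather(city: str) -> str:\n    return f'Weather in {city}: sunny'",
    },
}
```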
## Related issue number
https://github.com/microsoft/autogen/issues/5170
Co-authored-by: Victor Dibia <victordibia@microsoft.com>
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
* Rebase to latest main branch
* Moved _azure module to azure
* Validate `extra_create_args` and JSON response
* Added support for GitHub Models
* Added `normalize_name` and `assert_valid_name`
* Added Tests for AzureAIChatCompletionClient
* WIP: Azure AI Client
* Added: object-level usage data
* Added: doc string
* Added: check existing response_format value
* Added: _validate_config and _create_client
* lint
* merge dependencies
* add tests for img and function calling
* support actual tests through env vars
* address mypy errors
* doc example fix
* fmt
* fix doc fmt
* Update python/packages/autogen-ext/src/autogen_ext/models/azure/_azure_ai_client.py
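
For reviewers, a hedged sketch of the new client's intended usage; the endpoint, model, and `model_info` fields are illustrative assumptions based on the commits above.

```python
from autogen_core.models import UserMessage
from autogen_ext.models.azure import AzureAIChatCompletionClient
from azure.core.credentials import AzureKeyCredential

# The endpoint shown is the GitHub Models endpoint mentioned above; all values are illustrative.
client = AzureAIChatCompletionClient(
    model="Phi-4",
    endpoint="https://models.inference.ai.azure.com",
    credential=AzureKeyCredential("<api-key-or-github-token>"),
    model_info={
        "json_output": False,
        "function_calling": False,
        "vision": False,
        "family": "unknown",
    },
)
result = await client.create([UserMessage(content="What is the capital of France?", source="user")])
print(result.content)
```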
---------
Co-authored-by: Rohan Thacker <thackerrohan4@gmail.com>
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
Co-authored-by: Leonardo Pinheiro <lpinheiro@microsoft.com>
* use caching to run tests and report coverage
* fix test step dep name
* try to fix cov fname
* add working dir to mv step
* update artifact download
* fmt
* reduce concurrency on ext test
---------
Co-authored-by: Leonardo Pinheiro <lpinheiro@microsoft.com>
* Add ChatCompletionCache along with AbstractStore for caching completions
* Addressing comments
* Improve interface for cachestore
* Improve documentation & revert protocol
* Make cache store typed, and improve docs
* remove unnecessary casts
* Doc update to include model context usage
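
A hedged sketch of the wrapper's intended use, assuming the cache wraps any `ChatCompletionClient` and that the store argument may be omitted for an in-memory default.

```python
from autogen_core.models import UserMessage
from autogen_ext.models.cache import ChatCompletionCache
from autogen_ext.models.openai import OpenAIChatCompletionClient

# Wrap a model client; identical create() calls are answered from the cache store.
base_client = OpenAIChatCompletionClient(model="gpt-4o")
cached_client = ChatCompletionCache(base_client)  # a typed CacheStore may also be supplied

first = await cached_client.create([UserMessage(content="Hello!", source="user")])
second = await cached_client.create([UserMessage(content="Hello!", source="user")])
assert second.cached  # the second result is marked as served from cache
```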
* add langchain tools
* update langchain tool wrapper api doc
* update
* update
* format
* add langchain experimental dev dep
* type
* Fix type
* Fix some types in langchain adapter
* type ignores
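
A hedged sketch of the adapter added here; the wrapped tool is an arbitrary example from `langchain_community`.

```python
from autogen_ext.tools.langchain import LangChainToolAdapter
from langchain_community.tools.file_management.read import ReadFileTool

# Wrap a LangChain tool so it can be passed to an AgentChat agent's tools list.
read_file_tool = LangChainToolAdapter(ReadFileTool())
```

The wrapped tool can then be passed to an agent's `tools` list like any native tool.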
* Fix definition of workspace package, remove uv pin
* add --all-packages
* pin docs uv versions for older project structure
* try old version to verify CI
* Use workflow target
* change syntax
* change check
* try with var in matrix
* add all packages to workspace
* remove project table
* Add MagenticOne API
* Add CodeExecutorAgent to MagenticOne for enhanced task execution
* Refactor MagenticOne class to inherit from MagenticOneGroupChat and streamline initialization
* Enhance MagenticOne class documentation with detailed usage examples and initialization instructions
* Refactor MagenticOne module structure and update import paths
* Remove unused imports
* Add documentation for MagenticOne module and remove redundant initialization comments
* Enhance MagenticOne class with human-in-the-loop mode and update examples
* Update MagenticOne class documentation with safety precautions and architecture details
* Run poe format
* Add blog post reference to MagenticOne class documentation
* change default of websurfer use_ocr to false because of refusals
* Refactor MagenticOne class to use ChatCompletionClient instead of OpenAIChatCompletionClient
* Add client capability validation to MagenticOne initialization
* Poe format
* Refactor imports in MagenticOne class for clarity and organization
* Add stacklevel parameter to warning in client capability validation
* Update README to recommend using Magentic-One API for improved integration
* Add create_args property to OpenAIChatCompletionClient for better access to initialization arguments
* Enhance client capability validation in MagenticOne to ensure compatibility with OpenAI GPT-4o model
* Refactor client capability validation in MagenticOne for improved clarity
* Update magentic_one.py
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
* Remove create_args property from OpenAIChatCompletionClient and update validation logic in MagenticOne to directly access _create_args
* Refactor documentation in MagenticOne for improved readability and consistency
* Refactor client capability validation in MagenticOne to remove unnecessary model check for GPT-4o
* Add MagenticOne CLI (#4788)
* Add MagenticOne CLI script for task execution with OpenAI GPT-4o integration
* Fix argument parsing in MagenticOne CLI to require a single task input
* Add docstring to main function in MagenticOne CLI for improved usage clarity
* Fix example usage in docstring of MagenticOne CLI for correct argument order
* Refactor argument parsing in MagenticOne CLI for improved clarity and consistency
* Add type hints to run_task function in MagenticOne CLI
* Add type hint for main function in MagenticOne CLI
* Remove type ignore from main function call in MagenticOne CLI
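
Putting the pieces above together, a hedged usage sketch based on the examples added in this PR; the model choice is illustrative, and `hil_mode=True` enables the human-in-the-loop mode mentioned above.

```python
import asyncio

from autogen_agentchat.ui import Console
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_ext.teams.magentic_one import MagenticOne


async def example_usage() -> None:
    client = OpenAIChatCompletionClient(model="gpt-4o")
    m1 = MagenticOne(client=client)  # pass hil_mode=True for human-in-the-loop
    result = await Console(m1.run_stream(task="Write a Python script to fetch data from an API."))
    print(result)


asyncio.run(example_usage())
```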
---------
Co-authored-by: Hussein Mozannar <hmozannar@microsoft.com>
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>