The current `StreamableHttpServerParams` has `timedelta` values that are not JSON serializable (`config.dump_component().model_dump_json()`). This makes it unusable in UIs like AGS that expect configs to be serializable to JSON.
```python
class StreamableHttpServerParams(BaseModel):
"""Parameters for connecting to an MCP server over Streamable HTTP."""
type: Literal["StreamableHttpServerParams"] = "StreamableHttpServerParams"
url: str # The endpoint URL.
headers: dict[str, Any] | None = None # Optional headers to include in requests.
timeout: timedelta = timedelta(seconds=30) # HTTP timeout for regular operations.
sse_read_timeout: timedelta = timedelta(seconds=60 * 5) # Timeout for SSE read operations.
terminate_on_close: bool = True
```
This PR uses `float` (seconds) for timeouts and casts to `timedelta` as needed.
```python
class StreamableHttpServerParams(BaseModel):
"""Parameters for connecting to an MCP server over Streamable HTTP."""
type: Literal["StreamableHttpServerParams"] = "StreamableHttpServerParams"
url: str # The endpoint URL.
headers: dict[str, Any] | None = None # Optional headers to include in requests.
timeout: float = 30.0 # HTTP timeout for regular operations in seconds.
sse_read_timeout: float = 300.0 # Timeout for SSE read operations in seconds.
terminate_on_close: bool = True
```
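For illustration, a minimal sketch of the casting mentioned above (the import path for `StreamableHttpServerParams` is assumed to be `autogen_ext.tools.mcp`, and the endpoint URL is hypothetical): the float seconds are turned back into `timedelta` only where the MCP client requires one.
```python
from datetime import timedelta

from autogen_ext.tools.mcp import StreamableHttpServerParams  # assumed import path

params = StreamableHttpServerParams(url="http://localhost:8000/mcp")  # hypothetical endpoint
http_timeout = timedelta(seconds=params.timeout)           # 0:00:30
read_timeout = timedelta(seconds=params.sse_read_timeout)  # 0:05:00
```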
## Why are these changes needed?
## Related issue number
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
## Why are these changes needed?
These changes are needed to expand AutoGen's memory capabilities with a
robust, production-ready integration with Mem0.ai.
This PR adds a new memory component for AutoGen that integrates with
Mem0.ai, providing a robust memory solution that supports both cloud and
local backends. The Mem0Memory class enables agents to store and
retrieve information persistently across conversation sessions.
## Key Features
- Seamless integration with Mem0.ai memory system
- Support for both cloud-based and local storage backends
- Robust error handling with detailed logging
- Full implementation of AutoGen's Memory interface
- Context updating for enhanced agent conversations
- Configurable search parameters for memory retrieval
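A minimal usage sketch through AutoGen's `Memory` interface; the `Mem0Memory` constructor arguments shown here (`user_id`, `is_cloud`) are assumptions for illustration, while the `add`/`query` calls follow the `autogen_core.memory` protocol.
```python
from autogen_core.memory import MemoryContent, MemoryMimeType
from autogen_ext.memory.mem0 import Mem0Memory


async def remember_and_recall() -> None:
    memory = Mem0Memory(user_id="demo-user", is_cloud=True)  # argument names are assumptions
    await memory.add(MemoryContent(content="The user prefers metric units.", mime_type=MemoryMimeType.TEXT))
    results = await memory.query("What units does the user prefer?")
    for item in results.results:
        print(item.content)
    await memory.close()
```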
## Related issue number
## Checks
- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
---------
Co-authored-by: Victor Dibia <victordibia@microsoft.com>
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
Co-authored-by: Ricky Loynd <riloynd@microsoft.com>
## Why are these changes needed?
This PR adds support for configurable embedding functions in
ChromaDBVectorMemory, addressing the need for users to customize how
embeddings are generated for vector similarity search. Currently,
ChromaDB memory is limited to default embedding functions, which
restricts flexibility for different use cases that may require specific
embedding models or custom embedding logic.
The implementation allows users to:
- Use different SentenceTransformer models for domain-specific
embeddings
- Integrate with OpenAI's embedding API for consistent embedding
generation
- Define custom embedding functions for specialized requirements
- Maintain backward compatibility with existing default behavior
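A hedged configuration sketch; `ChromaDBVectorMemory` and `PersistentChromaDBVectorMemoryConfig` are the existing types, while the embedding-function config class and field names below are assumptions used only to illustrate the shape of the feature.
```python
from autogen_ext.memory.chromadb import (
    ChromaDBVectorMemory,
    PersistentChromaDBVectorMemoryConfig,
    SentenceTransformerEmbeddingFunctionConfig,  # assumed class name
)

memory = ChromaDBVectorMemory(
    config=PersistentChromaDBVectorMemoryConfig(
        collection_name="docs",
        persistence_path="./chroma_db",
        # Assumed field name: select a domain-specific SentenceTransformer model.
        embedding_function_config=SentenceTransformerEmbeddingFunctionConfig(model_name="all-MiniLM-L6-v2"),
    )
)
```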
## Related issue number
Closes #6267
## Checks
- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests corresponding to the changes introduced in this
PR.
- [x] I've made sure all auto checks have passed.
---------
Co-authored-by: Victor Dibia <victordibia@microsoft.com>
Co-authored-by: Victor Dibia <victor.dibia@gmail.com>
Add OTel GenAI traces:
- `create_agent`
- `invoke_agent`
- `execute_tool`
Introduces context manager helpers to create these traces. The helpers
also serve as instrumentation points for other instrumentation
libraries.
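For orientation, an illustrative context-manager helper in the spirit of the ones introduced here (the helper name and attributes are assumptions, not the actual API); it relies only on the standard `opentelemetry` tracing surface.
```python
from contextlib import contextmanager

from opentelemetry import trace

tracer = trace.get_tracer(__name__)


@contextmanager
def trace_invoke_agent(agent_name: str):
    # Span naming loosely follows the GenAI semantic conventions.
    with tracer.start_as_current_span(f"invoke_agent {agent_name}") as span:
        span.set_attribute("gen_ai.agent.name", agent_name)
        yield span


# Usage:
# with trace_invoke_agent("assistant"):
#     ...  # run the agent
```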
Resolves #6644
## Why are these changes needed?
The MCP Python SDK has supported a new transport protocol named `Streamable HTTP` since
[v1.8.0](https://github.com/modelcontextprotocol/python-sdk/releases/tag/v1.8.0),
released last month. It reportedly supersedes the SSE transport, so AutoGen
should support it as soon as possible.
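A possible way to connect over the new transport, assuming the float-based `StreamableHttpServerParams` from the change above and that `McpWorkbench` accepts it like the existing stdio/SSE parameter types (the endpoint URL is hypothetical).
```python
from autogen_ext.tools.mcp import McpWorkbench, StreamableHttpServerParams

params = StreamableHttpServerParams(
    url="http://localhost:8000/mcp",  # hypothetical endpoint
    timeout=30.0,
    sse_read_timeout=300.0,
)


async def list_mcp_tools() -> None:
    async with McpWorkbench(params) as workbench:
        tools = await workbench.list_tools()
        print([tool["name"] for tool in tools])
```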
## Related issue number
https://github.com/microsoft/autogen/discussions/6517
## Checks
- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
---------
Co-authored-by: Victor Dibia <victordibia@microsoft.com>
Co-authored-by: Victor Dibia <victor.dibia@gmail.com>
## Why are these changes needed?
This PR introduces a new `OpenAIAgent` implementation that uses the
[OpenAI Response
API](https://platform.openai.com/docs/guides/responses-vs-chat-completions)
as its backend. The OpenAI Assistant API will be deprecated in 2026, and
the Response API is its successor. This change ensures our codebase is
future-proof and aligned with OpenAI’s latest platform direction.
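For orientation, a minimal call against the Response API that the new agent builds on, using the official `openai` library directly (the model name is an arbitrary example, not part of this PR).
```python
from openai import AsyncOpenAI


async def ask(question: str) -> str:
    client = AsyncOpenAI()  # reads OPENAI_API_KEY from the environment
    response = await client.responses.create(model="gpt-4o-mini", input=question)
    return response.output_text
```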
### Motivation
- **Deprecation Notice:** The OpenAI Assistant API will be deprecated in
2026.
- **Future-Proofing:** The Response API is the recommended replacement
and offers improved capabilities for stateful, multi-turn, and
tool-augmented conversations.
- **AgentChat Compatibility:** The new agent is designed to conform to
the behavior and expectations of `AssistantAgent` in AgentChat, but is
implemented directly on top of the OpenAI Response API.
### Key Changes
- **New Agent:** Adds `OpenAIAgent`, a stateful agent that interacts
with the OpenAI Response API.
- **Stateful Design:** The agent maintains conversation state, tool
usage, and other metadata as required by the Response API.
- **AssistantAgent Parity:** The new agent matches the interface and
behavior of `AssistantAgent` in AgentChat, ensuring a smooth migration
path.
- **Direct OpenAI Integration:** Uses the official `openai` Python
library for all API interactions.
- **Extensible:** Designed to support future enhancements, such as
advanced tool use, function calling, and multi-modal capabilities.
### Migration Path
- Existing users of the Assistant API should migrate to the new
`OpenAIAgent` to ensure long-term compatibility.
- Documentation and examples will be updated to reflect the new agent
and its usage patterns.
### References
- [OpenAI: Responses vs. Chat
Completions](https://platform.openai.com/docs/guides/responses-vs-chat-completions)
- [OpenAI Deprecation
Notice](https://platform.openai.com/docs/guides/responses-vs-chat-completions#deprecation-timeline)
---
## Related issue number
Closes #6032
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [X] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [X] I've made sure all auto checks have passed.
Co-authored-by: Griffin Bassman <griffinbassman@gmail.com>
## Why are these changes needed?
Enables usage statistics for streaming responses by default.
There is a similar bug in the AzureAI client. Theoretically adding the
parameter
```
model_extras={"stream_options": {"include_usage": True}}
```
should fix the problem, but I'm currently unable to test that workflow.
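A short sketch of what this enables, assuming `create_stream` yields text chunks followed by a final `CreateResult` that now carries usage by default.
```python
from autogen_core.models import CreateResult, UserMessage
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def stream_with_usage() -> None:
    client = OpenAIChatCompletionClient(model="gpt-4o-mini")
    async for item in client.create_stream([UserMessage(content="Hello!", source="user")]):
        if isinstance(item, CreateResult):
            print(item.usage)  # prompt/completion token counts, populated without extra arguments
        else:
            print(item, end="")
```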
## Related issue number
Closes https://github.com/microsoft/autogen/issues/6548
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
There have been updates to the Azure AI Agent (Foundry) SDK (`azure-ai-project`). This PR updates AutoGen's `AzureAIAgent`, which wraps the Azure AI agent. Some of the changes:
- Update docstring samples to use `endpoint` (instead of the connection string used previously).
- Update imports and arguments, e.g., from `azure.ai.agents`.
- Add a guide in the ext docs showing a Bing Search Grounding tool example.
<img width="1423" alt="image"
src="https://github.com/user-attachments/assets/0b7c8fa6-8aa5-4c20-831b-b525ac8243b7"
/>
## Why are these changes needed?
## Related issue number
Closes #6601
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
## Why are these changes needed?
## Related issue number
Resolves https://github.com/microsoft/autogen/issues/6584
## Checks
- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
## Why are these changes needed?
I added `created_at` to both the `BaseChatMessage` and `BaseAgentEvent` classes to store the time these Pydantic model instances are generated. Users can then use `created_at` to build a customized external state-persistence layer for their use case.
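A small sketch of how `created_at` can be used once it is populated automatically, e.g. to order messages before persisting them.
```python
from autogen_agentchat.messages import TextMessage

messages = [
    TextMessage(content="first", source="user"),
    TextMessage(content="second", source="assistant"),
]
for message in sorted(messages, key=lambda m: m.created_at):
    print(message.created_at.isoformat(), message.source, message.content)
```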
## Related issue number
https://github.com/microsoft/autogen/discussions/6169#discussioncomment-13151540
## Checks
- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
---------
Co-authored-by: Jack Gerrits <jackgerrits@users.noreply.github.com>
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
## Why are these changes needed?
The `LocalCommandLineCodeExecutor` creates temporary files for each code execution, which can accumulate over time and clutter the filesystem, especially when a temporary working directory is not used. These changes introduce an option to automatically delete temporary files after execution, helping to prevent filesystem debris, reduce disk usage, and ensure cleaner runtime environments in long-running or repeated execution scenarios.
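A hedged sketch of enabling the new behavior; the flag name used here (`cleanup_temp_files`) is an assumption for illustration, the point being that temporary script files are removed after each run.
```python
from autogen_ext.code_executors.local import LocalCommandLineCodeExecutor

executor = LocalCommandLineCodeExecutor(
    work_dir="./coding",
    cleanup_temp_files=True,  # assumed parameter name
)
```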
## Related issue number
Closes #4380
## Checks
- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
## Why are these changes needed?
## Related issue number
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
Currently, when an error occurs while executing code in the Docker Jupyter executor, it returns only the error output.
This PR updates the handling of error output to include outputs from previous code blocks that executed successfully.
Test it with this script:
```python
from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_ext.code_executors.docker_jupyter import DockerJupyterCodeExecutor, DockerJupyterServer
from autogen_ext.tools.code_execution import PythonCodeExecutionTool
from autogen_agentchat.ui import Console
from autogen_core.code_executor import CodeBlock
from autogen_core import CancellationToken
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.conditions import TextMessageTermination
# Download the dataset from https://www.kaggle.com/datasets/nelgiriyewithana/top-spotify-songs-2023
# and place it the coding directory as `spotify-2023.csv`.
bind_dir = "./coding"
# Use a custom docker image with the Jupyter kernel gateway and data science libraries installed.
# Custom docker image: ds-kernel-gateway:latest -- you need to build this image yourself.
# Dockerfile:
# FROM quay.io/jupyter/docker-stacks-foundation:latest
#
# # ensure that 'mamba' and 'fix-permissions' are on the PATH
# SHELL ["/bin/bash", "-o", "pipefail", "-c"]
#
# # Switch to the default notebook user
# USER ${NB_UID}
#
# # Install data-science packages + kernel gateway
# RUN mamba install --quiet --yes \
# numpy \
# pandas \
# scipy \
# matplotlib \
# scikit-learn \
# seaborn \
# jupyter_kernel_gateway \
# ipykernel \
# && mamba clean --all -f -y \
# && fix-permissions "${CONDA_DIR}" \
# && fix-permissions "/home/${NB_USER}"
#
# # Allow you to set a token at runtime (or leave blank for no auth)
# ENV TOKEN=""
#
# # Launch the Kernel Gateway, listening on all interfaces,
# # with the HTTP endpoint for listing kernels enabled
# CMD ["python", "-m", "jupyter", "kernelgateway", \
# "--KernelGatewayApp.ip=0.0.0.0", \
# "--KernelGatewayApp.port=8888", \
# # "--KernelGatewayApp.auth_token=${TOKEN}", \
# "--JupyterApp.answer_yes=true", \
# "--JupyterWebsocketPersonality.list_kernels=true"]
#
# EXPOSE 8888
#
# WORKDIR "${HOME}"
async def main():
    model = OpenAIChatCompletionClient(model="gpt-4.1")
    async with DockerJupyterServer(
        custom_image_name="ds-kernel-gateway:latest",
        bind_dir=bind_dir,
    ) as server:
        async with DockerJupyterCodeExecutor(jupyter_server=server) as code_executor:
            await code_executor.execute_code_blocks(
                [
                    CodeBlock(
                        code="import pandas as pd\ndf = pd.read_csv('/workspace/spotify-2023.csv', encoding='latin-1')",
                        language="python",
                    ),
                ],
                cancellation_token=CancellationToken(),
            )
            tool = PythonCodeExecutionTool(
                executor=code_executor,
            )
            assistant = AssistantAgent(
                "assistant",
                model_client=model,
                system_message="You have access to a Jupyter kernel. Do not write all code at once. Write one code block, observe the output, and then write the next code block.",
                tools=[tool],
            )
            team = RoundRobinGroupChat(
                [assistant],
                termination_condition=TextMessageTermination(source="assistant"),
            )
            task = "Datafile has been loaded as variable `df`. First preview dataset. Then answer the following question: What is the highest streamed artist in the dataset?"
            await Console(team.run_stream(task=task))


if __name__ == "__main__":
    import asyncio

    asyncio.run(main())
```
You can see the file encoding error gets recovered and the agent
successfully executes the query in the end.
## Why are these changes needed?
Allows implicit AWS credential resolution when using `AnthropicBedrockChatCompletionClient`, for the case where you have already logged into AWS with SSO and credentials are set as environment variables.
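A sketch under the assumption that `bedrock_info` can now be omitted so the client falls back to the AWS credentials already present in the environment (e.g. after an SSO login).
```python
from autogen_core.models import ModelInfo
from autogen_ext.models.anthropic import AnthropicBedrockChatCompletionClient

client = AnthropicBedrockChatCompletionClient(
    model="anthropic.claude-3-5-sonnet-20240620-v1:0",
    model_info=ModelInfo(
        vision=False, function_calling=True, json_output=False, family="unknown", structured_output=True
    ),
    # No explicit bedrock_info: credentials are resolved from the environment.
)
```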
## Related issue number
Closes #6560
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
Co-authored-by: Jack Gerrits <jackgerrits@users.noreply.github.com>
Fix for `LLMCallEvent` failing to log the "tools" passed to `BaseOpenAIChatCompletionClient` in `autogen_ext.models.openai._openai_client`.
This bug made it hard to inspect why a certain tool was or was not selected by the LLM, since the list of tools available to the LLM was not present in the logs.
## Why are these changes needed?
Added "tools" to the LLMCallEvent to log tools available to the LLM as
these were being missed causing difficulties during debugging LLM tool
calls.
## Related issue number
https://github.com/microsoft/autogen/issues/6531
## Checks
- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
---------
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
## Why are these changes needed?
To add the latest support for using Llama API offerings with AutoGen
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
---------
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
## Why are these changes needed?
Simplified the Azure AI Search tool and fixed bugs in the code.
## Related issue number
"Closes #6430 "
## Checks
- [X] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [X] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [X] I've made sure all auto checks have passed.
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
## Why are these changes needed?
Fix: Mistral could not receive the `name` field, so this adds a model transformer for Mistral.
## Related issue number
Closes #6147
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
## Why are these changes needed?
The exceptions thrown by MCP server tools weren't being serialized
properly - the user would see `[{}, {}, ... {}]` instead of an actual
error/exception message.
## Related issue number
Fixes #6481
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
---------
Signed-off-by: Peter Jausovec <peter.jausovec@solo.io>
Co-authored-by: Victor Dibia <victordibia@microsoft.com>
Co-authored-by: Victor Dibia <victor.dibia@gmail.com>
## Why are these changes needed?
Nice-to-have functionality.
## Related issue number
Closes #6060
## Checks
- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
---------
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
## Why are these changes needed?
As-is: Deleting `McpWorkbench` does not close the `McpSession`.
To-be: Deleting `McpWorkbench` now properly closes the `McpSession`.
## Related issue number
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
Some fixes for the `AnthropicBedrockChatCompletionClient`:
- Ensure `AnthropicBedrockChatCompletionClient` is exported and can be imported.
- Update the `BedrockInfo` keys serialization: the client argument can be a string (similar to the API key), but the exported config should be a `Secret`.
- Replace `AnthropicBedrock` with `AsyncAnthropicBedrock`: the client should be async to work with the AG stack and the `BaseAnthropicClient` it inherits from.
- Improve the `AnthropicBedrockChatCompletionClient` docstring to use the correct client arguments rather than the serialized dict format.

Example:
```python
import asyncio

from autogen_core.models import ModelInfo, UserMessage
from autogen_ext.models.anthropic import AnthropicBedrockChatCompletionClient, BedrockInfo


async def main():
    anthropic_client = AnthropicBedrockChatCompletionClient(
        model="anthropic.claude-3-5-sonnet-20240620-v1:0",
        temperature=0.1,
        model_info=ModelInfo(
            vision=False, function_calling=True, json_output=False, family="unknown", structured_output=True
        ),
        bedrock_info=BedrockInfo(
            aws_access_key="<aws_access_key>",
            aws_secret_key="<aws_secret_key>",
            aws_session_token="<aws_session_token>",
            aws_region="<aws_region>",
        ),
    )
    result = await anthropic_client.create([UserMessage(content="What is the capital of France?", source="user")])
    print(result)


asyncio.run(main())
```
## Why are these changes needed?
## Related issue number
Closes #6483
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
---------
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
Add GPT-4o search models to the list of default models:
```python
"gpt-4o-mini-search-preview-2025-03-11": {
"vision": False,
"function_calling": True,
"json_output": True,
"family": ModelFamily.GPT_4O,
"structured_output": True,
"multiple_system_messages": True,
},
"gpt-4o-search-preview-2025-03-11": {
"vision": False,
"function_calling": True,
"json_output": True,
"family": ModelFamily.GPT_4O,
"structured_output": True,
"multiple_system_messages": True,
},
```
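With the entries above registered, the client can be created by model name alone; a short usage sketch:
```python
from autogen_ext.models.openai import OpenAIChatCompletionClient

# model_info no longer needs to be passed explicitly for these models.
client = OpenAIChatCompletionClient(model="gpt-4o-search-preview-2025-03-11")
```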
## Why are these changes needed?
## Related issue number
Closes #6491
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
## Why are these changes needed?
Our `McpWorkbench` required the `properties` field in tool schemas, but some of mongodb-lens's tools do not have it. This PR treats a `None` `properties` as `{}`.
Also, `McpWorkbench` currently has no stop routine when it is not used via `async with McpWorkbench(params) as workbench:` and is lazily initialized, so simply deleting it could show an error; this PR adds a `__del__` method to handle that.
## Related issue number
Closes #6425
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
## Why are these changes needed?
Multimodal messages have their content filled by a different routine; however, the current `_set_empty_to_whitespace` fills it as well, which caused an error.
After checking `multimodal_user_transformer_funcs`, I found that in this routine the content must not be empty.
This PR removes `_set_empty_to_whitespace` for multimodal messages.
## Related issue number
Closes #6439
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
## Why are these changes needed?
The current autogen-ext test suite is too slow, so this PR identifies slow test cases and speeds them up:
- [Initialize the Docker executor at module scope: 180s -> 140s](a3cf70bcf8)
- [Reuse the executor in some tests: 140s -> 120s](ca15938afa)
- [Remove unnecessary Docker startups: 120s -> 110s](61247611e0)
## Related issue number
Part of #6376
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
---------
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
Adding support for Bing grounding citations to the AzureAIAgent.
## Why are these changes needed?
## Related issue number
## Checks
- [X] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [X] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [X] I've made sure all auto checks have passed.
---------
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
Co-authored-by: Dheeraj Bandaru <BandaruDheeraj@users.noreply.github.com>
## Why are these changes needed?
Anthropic models are supported by AWS Bedrock. With these changes, a `ChatCompletionClient` can be created for Anthropic Bedrock models. This enables the user to:
- Add any Anthropic model and version from AWS Bedrock
- Use `ChatCompletionClient` for Bedrock Anthropic models
## Related issue number
Closes #5226
---------
Co-authored-by: harini.narasimhan <harini.narasimhan@eagleview.com>
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
## Why are these changes needed?
*Problem*
Previously, in `DockerCommandLineCodeExecutor`, cancellation tasks were
added directly to `self._cancellation_tasks` using
`asyncio.create_task()`:
```python
self._cancellation_tasks.append(asyncio.create_task(self._kill_running_command(command)))
```
This caused issues when cancellation tasks were created from multiple
event loops, leading to loop mismatch errors during executor shutdown.
*Solution*
This PR fixes the issue by introducing a dedicated internal event loop
for managing cancellation tasks.
Cancellation tasks are now scheduled in a fixed event loop using
`asyncio.run_coroutine_threadsafe()`:
```python
future: ConcurrentFuture[None] = asyncio.run_coroutine_threadsafe(
self._kill_running_command(command), self._loop
)
self._cancellation_futures.append(future)
```
*Additional Changes*
- Added detailed logging for easier debugging.
- Ensured clean shutdown of the internal event loop and associated
thread.
*Note*
This change ensures that all cancellation tasks are handled consistently
in a single loop, preventing cross-loop conflicts and improving executor
stability in multi-threaded environments.
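A generic illustration of the pattern (not the executor's actual code): one background event loop owns all cancellation work, and other threads hand coroutines to it via `run_coroutine_threadsafe`.
```python
import asyncio
import threading
from concurrent.futures import Future as ConcurrentFuture

# A dedicated loop running in a daemon thread.
loop = asyncio.new_event_loop()
threading.Thread(target=loop.run_forever, daemon=True).start()


async def kill_running_command(command: str) -> None:
    print(f"cancelling: {command}")


# Any thread can schedule work onto the fixed loop and wait on the future.
future: ConcurrentFuture[None] = asyncio.run_coroutine_threadsafe(kill_running_command("sleep 100"), loop)
future.result(timeout=5)
```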
## Related issue number
Closes #6395
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
---------
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
## Why are these changes needed?
- Add `return_value_as_string` for formatting the result from an MCP tool
## Related issue number
- Opened Issue on #6368
## Checks
- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
---------
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
## Why are these changes needed?
> The pytest tests test_local_executor_with_custom_venv and
test_local_executor_with_custom_venv_in_local_relative_path located in
packages/autogen-ext/tests/code_executors/test_commandline_code_executor.py
fail when run on macOS (aarch64) using a Python interpreter managed by
uv (following the project's recommended development setup).
>
> The failure occurs during the creation of a nested virtual environment
using Python's standard venv.EnvBuilder. Specifically, the attempt to
run ensurepip inside the newly created venv fails immediately with a
SIGABRT signal. The root cause appears to be a dynamic library loading
error (dyld error) where the Python executable inside the newly created
venv cannot find its required libpythonX.Y.dylib shared library.
So, in the macOS + uv case, that test is skipped. A uv-venv test case is also added.
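A hedged sketch of the kind of guard this adds; how a uv-managed interpreter is detected here (a `uv` segment in `sys.prefix`) is an assumption for illustration.
```python
import sys

import pytest


@pytest.mark.skipif(
    sys.platform == "darwin" and "uv" in sys.prefix,  # assumed detection of a uv-managed interpreter
    reason="Nested venv creation aborts (dyld/libpython error) under uv-managed Python on macOS.",
)
def test_local_executor_with_custom_venv() -> None:
    ...
```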
## Related issue number
Closes #6341
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
This PR introduces `Workbench`.
A workbench provides a group of tools that share the same resource and state. For example, `McpWorkbench` provides the underlying tools on the MCP server. A workbench allows tools to be managed together and abstracts away the lifecycle of individual tools under a single entity. This makes it possible to create agents with stateful tools from a serializable configuration (component configs), and it also supports dynamic tools, whose set may change after each execution.
Here is how a workbench may be used with AssistantAgent (not included in
this PR):
```python
workbench = McpWorkbench(server_params)
agent = AssistantAgent("assistant", tools=workbench)
result = await agent.run(task="do task...")
```
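A sketch of the workbench surface itself, assuming it exposes `list_tools()` and `call_tool()` as the shared entry points for its managed tools (the return shapes shown here are assumptions).
```python
async def inspect_and_call(workbench) -> None:
    tools = await workbench.list_tools()
    print("available tools:", [tool["name"] for tool in tools])
    # Call the first tool with empty arguments; the result type depends on the workbench.
    result = await workbench.call_tool(tools[0]["name"], arguments={})
    print(result)
```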
TODOs:
1. In a subsequent PR, update `AssistantAgent` to use workbench as an
alternative in the `tools` parameter. Use `StaticWorkbench` to manage
individual tools.
2. In another PR, add documentation on workbench.
---------
Co-authored-by: EeS <chiyoung.song@motov.co.kr>
Co-authored-by: Minh Đăng <74671798+perfogic@users.noreply.github.com>
## Why are these changes needed?
| Package | Test time before (s) | Test time after (s) |
|---------|----------------------|---------------------|
| autogen-studio | 1.64 | 1.64 |
| autogen-core | 6.03 | 6.17 |
| autogen-ext | 387.15 | 373.40 |
| autogen-agentchat | 54.20 | 20.67 |
## Related issue number
Related #6361
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
- Added support for the Azure AI Agent. The new agent is named `AzureAIAgent`.
- The agent supports the Bing search, file search, and Azure search tools.
- Added a Jupyter notebook to demonstrate the usage of the `AzureAIAgent`.
## What's missing?
- `AzureAIAgent` supports only text message responses.
- Parallel execution for the custom functions.
## Related issue number
[5545](https://github.com/microsoft/autogen/issues/5545#event-16626859772)
---------
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
The `DockerCommandLineCodeExecutor` doesn't currently offer GPU support. By simply using `DeviceRequest` from the Docker Python API, these changes expose GPUs to the Docker container and provide the ability to execute CUDA-accelerated code within AutoGen.
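An illustrative sketch: `DeviceRequest` comes from the Docker SDK, while the executor parameter name (`device_requests`) is assumed here for illustration.
```python
from docker.types import DeviceRequest

from autogen_ext.code_executors.docker import DockerCommandLineCodeExecutor

executor = DockerCommandLineCodeExecutor(
    image="nvidia/cuda:12.4.1-runtime-ubuntu22.04",
    device_requests=[DeviceRequest(count=-1, capabilities=[["gpu"]])],  # expose all GPUs
)
```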
## Related issue number
Closes: #6302
## Checks
- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
---------
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
## Why are these changes needed?
`convert_tools` failed if Optional args were used in tools (the `type`
field doesn't exist in that case and `anyOf` must be used).
This uses the `anyOf` field to pick the first non-null type to use.
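A minimal sketch of the approach (not the library's exact code): when an Optional parameter's schema has no top-level `type`, pick the first non-null entry from `anyOf`.
```python
from typing import Any, Dict


def resolve_type(prop_schema: Dict[str, Any]) -> str:
    if "type" in prop_schema:
        return prop_schema["type"]
    for candidate in prop_schema.get("anyOf", []):
        if candidate.get("type") not in (None, "null"):
            return candidate["type"]
    return "string"  # fallback when nothing usable is present


print(resolve_type({"anyOf": [{"type": "integer"}, {"type": "null"}]}))  # -> "integer"
```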
## Related issue number
Fixes #6323
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
---------
Signed-off-by: Peter Jausovec <peter.jausovec@solo.io>
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>