316 Commits

Author SHA1 Message Date
Eric Zhu
e85eea88f2
Update OpenAIAssistantAgent doc (#6870) 2025-07-28 01:52:10 -07:00
Eric Zhu
7d627f45ca
Bring back OpenAIAssistantAgent (#6867) 2025-07-28 01:29:06 -07:00
Eric Zhu
b33b688991
Update installation guide in _openai_assistant_agent.py (#6863) 2025-07-27 17:59:11 -07:00
Victor Dibia
f00d1eef9e
upgrade graphrag sample to v2.3+ (#6744)
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2025-07-27 02:04:40 -07:00
Copilot
5f1c69d049
Add include_name_in_message parameter to make name field optional in OpenAI messages (#6845)
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: ekzhu <320302+ekzhu@users.noreply.github.com>
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2025-07-27 00:39:39 -07:00
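
A minimal sketch of how the new flag might be used, assuming it is exposed as a constructor argument on `OpenAIChatCompletionClient` (only the parameter name comes from the title above; the model id is a placeholder):

```python
# Hedged sketch: dropping the optional "name" field from outgoing OpenAI messages.
# `include_name_in_message` is taken from the commit title; everything else is illustrative.
import asyncio

from autogen_core.models import UserMessage
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main() -> None:
    # Some OpenAI-compatible servers reject the optional "name" field; disable it here.
    client = OpenAIChatCompletionClient(model="gpt-4o", include_name_in_message=False)
    result = await client.create([UserMessage(content="Hello!", source="user")])
    print(result.content)


asyncio.run(main())
```
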
Justin Cechmanek
3b139c6539
Adds Redis Memory extension class (#6743) 2025-07-25 21:54:22 -07:00
Saverio Murgia
e26bb1c850
fix: use correct format when adding memory to mem0 (#6831)
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2025-07-23 22:46:59 +00:00
Copilot
7c536a8c95
Fix OpenAI UnprocessableEntityError when AssistantAgent makes multiple tool calls (#6799)
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: ekzhu <320302+ekzhu@users.noreply.github.com>
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2025-07-20 00:30:42 -07:00
Eric Zhu
ae024e262d
Deprecating OpenAI assistant agent. Apply version-conditioned import for OpenAI version < 1.83 (#6827) 2025-07-17 16:37:20 -07:00
Tyler Payne
413d8f158c
Expand MCP Workbench to support more MCP Client features (#6785)
Co-authored-by: Tyler Payne <tylerpayne@microsoft.com>
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
Co-authored-by: Victor Dibia <victordibia@microsoft.com>
2025-07-17 08:11:08 -07:00
lo5twind
aa131bb7f7
feat: add timeout for http tools (#6818)
Co-authored-by: Victor Dibia <victordibia@microsoft.com>
2025-07-16 13:06:06 -07:00
Victor Dibia
d094a3fd47
Upgrade_mcp_version (#6814)
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
2025-07-16 09:26:55 -07:00
Tejas Dharani
25d732876e
Feat/OpenAI agent builtin tools 6657 (#6671)
Co-authored-by: Victor Dibia <victordibia@microsoft.com>
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2025-07-10 10:29:20 -07:00
Eric Zhu
af85eb26c2
Remove duckduckgo search tools and agents (#6783) 2025-07-09 10:32:23 -07:00
Ryan Sousa
203323fb3e
feat: add qwen2.5vl support (#6650)
Co-authored-by: Victor Dibia <victordibia@microsoft.com>
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2025-07-07 23:13:02 +00:00
Sean Goedecke
02e6574566
Update GitHub Models url to the new url (#6759)
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2025-07-07 15:49:41 -07:00
Eric Zhu
4a3634d6da
Move docs from python/packages/autogen-core to python/docs (#6757) 2025-07-06 21:31:36 -07:00
Eric Zhu
89841b6aaf
Add script to automatically generate API documentation and remove hard-coded RST files; fix API docs (#6755) 2025-07-06 18:38:39 -07:00
VARAD SRIVASTAVA
13f9a73519
Added DuckDuckGo Search Tool and Agent in AutoGen Extensions (#6682)
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2025-07-06 15:59:55 -07:00
Copilot
0bd99ee516
Add tool name and description override functionality to Workbench implementations (#6690)
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: ekzhu <320302+ekzhu@users.noreply.github.com>
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2025-07-06 13:39:05 -07:00
Z1m4-blu3
aa0d835f4d
Fix function calling support for Llama3.3 (#6750)
## Why are these changes needed?
This PR fixes incorrect model metadata for llama3.3.
The `function_calling` capability was previously set to False, but
[llama3.3 supports function calling](https://www.llama.com/docs/model-cards-and-prompt-formats/llama3_3/)
with [ollama](https://ollama.com/library/llama3.3).

Co-authored-by: mh <>
2025-07-06 03:42:59 +00:00
Copilot
c150f85044
Add tool_choice parameter to ChatCompletionClient create and create_stream methods (#6697)
## Summary

Implements the `tool_choice` parameter for the `ChatCompletionClient`
interface as requested in #6696. This allows users to restrict which
tools the model can choose from when multiple tools are available.

## Changes

### Core Interface
- Core Interface: Added `tool_choice: Tool | Literal["auto", "required",
"none"] = "auto"` parameter to `ChatCompletionClient.create()` and
`create_stream()` methods
- Model Implementations: Updated client implementations to support the
new parameter; for now, only the following model clients are supported:
  - OpenAI
  - Anthropic
  - Azure AI
  - Ollama
- `LlamaCppChatCompletionClient` is currently not supported.

### Features
- "auto" (default): Let the model choose whether to use tools; when no tools are provided, it has no effect.
- "required": Force the model to use at least one tool.
- "none": Disable tool usage completely.
- Tool object: Force the model to use a specific tool.
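
A minimal usage sketch under these semantics (the tool and model id are illustrative; only the `tool_choice` argument comes from this change):

```python
# Hedged sketch: forcing a specific tool via tool_choice.
import asyncio

from autogen_core.models import UserMessage
from autogen_core.tools import FunctionTool
from autogen_ext.models.openai import OpenAIChatCompletionClient


def get_weather(city: str) -> str:
    """Return a canned weather report for the given city."""
    return f"The weather in {city} is sunny."


async def main() -> None:
    client = OpenAIChatCompletionClient(model="gpt-4o")  # assumes OPENAI_API_KEY is set
    weather_tool = FunctionTool(get_weather, description="Look up the weather for a city.")
    # tool_choice accepts "auto" (default), "required", "none", or a specific Tool instance.
    result = await client.create(
        [UserMessage(content="What's the weather in Paris?", source="user")],
        tools=[weather_tool],
        tool_choice=weather_tool,  # force the model to call get_weather
    )
    print(result.content)  # expected: a list of FunctionCall entries for get_weather


asyncio.run(main())
```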

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: ekzhu <320302+ekzhu@users.noreply.github.com>
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2025-06-30 14:15:28 +09:00
David Schmidt
3436ec2ca1
Add support for Gemini 2.5 flash stable (#6692)
As Gemini 2.5 Flash was released as stable, the model info should be
updated accordingly.

See https://ai.google.dev/gemini-api/docs/models?hl=de#gemini-2.5-flash

## Related issue number

No issue

---------

Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2025-06-28 01:57:36 +00:00
jeongsu-an
b9c01d0bc1
fix: enable function_calling for o1-2024-12-17 (#6725) 2025-06-27 19:56:27 +00:00
Tejas Dharani
11b7743b7d
Fix completion tokens none issue 6352 (#6665) 2025-06-26 23:26:27 +00:00
Victor Dibia
1183962a59
fix serialization issue in streamablehttp mcp tools (#6721)

The current `StreamableHttpServerParams` has timedelta values that are
not JSON serializable (`config.dump_component().model_dump_json()`).
This makes it unusable in UIs like AGS that expect configs to be
serializable to JSON:

```python
class StreamableHttpServerParams(BaseModel):
    """Parameters for connecting to an MCP server over Streamable HTTP."""

    type: Literal["StreamableHttpServerParams"] = "StreamableHttpServerParams"

    url: str  # The endpoint URL.
    headers: dict[str, Any] | None = None  # Optional headers to include in requests.
    timeout: timedelta = timedelta(seconds=30)  # HTTP timeout for regular operations.
    sse_read_timeout: timedelta = timedelta(seconds=60 * 5)  # Timeout for SSE read operations.
    terminate_on_close: bool = True
```

This PR uses floats for the timeouts and casts them to timedelta as needed:

```python
class StreamableHttpServerParams(BaseModel):
    """Parameters for connecting to an MCP server over Streamable HTTP."""

    type: Literal["StreamableHttpServerParams"] = "StreamableHttpServerParams"

    url: str  # The endpoint URL.
    headers: dict[str, Any] | None = None  # Optional headers to include in requests.
    timeout: float = 30.0  # HTTP timeout for regular operations in seconds.
    sse_read_timeout: float = 300.0  # Timeout for SSE read operations in seconds.
    terminate_on_close: bool = True
```
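
With plain floats, the exported config round-trips through JSON; a small sketch (the endpoint URL is a placeholder):

```python
from autogen_ext.tools.mcp import McpWorkbench, StreamableHttpServerParams

params = StreamableHttpServerParams(url="http://localhost:8931/mcp", timeout=30.0, sse_read_timeout=300.0)
workbench = McpWorkbench(server_params=params)
# No timedelta fields remain, so serializing the component config to JSON no longer fails.
config_json = workbench.dump_component().model_dump_json()
print(config_json)
```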

2025-06-25 12:28:09 -07:00
alpha-xone
89927ca436
Add mem0 Memory Implementation (#6510)

## Why are these changes needed?

These changes are needed to expand AutoGen's memory capabilities with a
robust, production-ready integration with Mem0.ai.

This PR adds a new memory component for AutoGen that integrates with
Mem0.ai, providing a robust memory solution that supports both cloud and
local backends. The Mem0Memory class enables agents to store and
retrieve information persistently across conversation sessions.

## Key Features

- Seamless integration with Mem0.ai memory system
- Support for both cloud-based and local storage backends
- Robust error handling with detailed logging
- Full implementation of AutoGen's Memory interface
- Context updating for enhanced agent conversations
- Configurable search parameters for memory retrieval
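
A minimal sketch of the intended usage, assuming the class lives under `autogen_ext.memory.mem0` and follows AutoGen's `Memory` interface (the constructor argument shown is an assumption):

```python
# Hedged sketch: storing and retrieving a fact with Mem0Memory.
import asyncio

from autogen_core.memory import MemoryContent, MemoryMimeType
from autogen_ext.memory.mem0 import Mem0Memory  # module path assumed


async def main() -> None:
    memory = Mem0Memory(is_cloud=True)  # assumed flag selecting the Mem0.ai cloud backend
    await memory.add(MemoryContent(content="The user prefers metric units.", mime_type=MemoryMimeType.TEXT))
    results = await memory.query("Which units does the user prefer?")
    for item in results.results:
        print(item.content)
    await memory.close()


asyncio.run(main())
```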

---------

Co-authored-by: Victor Dibia <victordibia@microsoft.com>
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
Co-authored-by: Ricky Loynd <riloynd@microsoft.com>
2025-06-16 15:39:02 -07:00
Griffin Bassman
f101469e29
update: openai response api (#6622)
Co-authored-by: Victor Dibia <victordibia@microsoft.com>
2025-06-16 10:30:57 -07:00
Tejas Dharani
67ebeeda0e
Feature/chromadb embedding functions #6267 (#6648)
## Why are these changes needed?

This PR adds support for configurable embedding functions in
ChromaDBVectorMemory, addressing the need for users to customize how
embeddings are generated for vector similarity search. Currently,
ChromaDB memory is limited to default embedding functions, which
restricts flexibility for different use cases that may require specific
embedding models or custom embedding logic.

The implementation allows users to:
- Use different SentenceTransformer models for domain-specific
embeddings
- Integrate with OpenAI's embedding API for consistent embedding
generation
- Define custom embedding functions for specialized requirements
- Maintain backward compatibility with existing default behavior
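
A sketch of what the configuration might look like; the embedding-function config class and field names below are assumptions based on this description, not verified against the final API:

```python
# Hedged sketch: selecting a SentenceTransformer model for ChromaDBVectorMemory.
from autogen_ext.memory.chromadb import (
    ChromaDBVectorMemory,
    PersistentChromaDBVectorMemoryConfig,
    SentenceTransformerEmbeddingFunctionConfig,  # name assumed
)

memory = ChromaDBVectorMemory(
    config=PersistentChromaDBVectorMemoryConfig(
        collection_name="notes",
        persistence_path="./chroma_db",
        # Use a domain-specific SentenceTransformer model instead of the default.
        embedding_function_config=SentenceTransformerEmbeddingFunctionConfig(
            model_name="all-MiniLM-L6-v2"
        ),
    )
)
```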

## Related issue number

Closes #6267

---------

Co-authored-by: Victor Dibia <victordibia@microsoft.com>
Co-authored-by: Victor Dibia <victor.dibia@gmail.com>
2025-06-13 09:06:15 -07:00
Eric Zhu
e14fb8fc09
OTel GenAI Traces for Agent and Tool (#6653)
Add OTel GenAI traces:
- `create_agent`
- `invoke_agent`
- `execute_tool`

Introduces context manager helpers to create these traces. The helpers
also serve as instrumentation points for other instrumentation
libraries.

Resolves #6644
2025-06-12 15:07:47 -07:00
peterychang
8a2582c541
SK KernelFunction from ToolSchemas (#6637)
## Why are these changes needed?

Only a subset of the available tools will be sent to SK.

## Related issue number

resolves https://github.com/microsoft/autogen/issues/6582

2025-06-06 10:15:56 -04:00
Sungjun.Kim
9065c6f37b
feat: Support the Streamable HTTP transport for MCP (#6615)

## Why are these changes needed?

The MCP Python SDK has supported a new transport protocol named
`Streamable HTTP` since
[v1.8.0](https://github.com/modelcontextprotocol/python-sdk/releases/tag/v1.8.0),
released last month. It supersedes the SSE transport, so AutoGen
has to support it as soon as possible.
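
A minimal connection sketch, assuming the new transport is exposed through `StreamableHttpServerParams` and consumed by `McpWorkbench` (the URL is a placeholder):

```python
import asyncio

from autogen_ext.tools.mcp import McpWorkbench, StreamableHttpServerParams


async def main() -> None:
    params = StreamableHttpServerParams(url="http://localhost:8931/mcp")
    workbench = McpWorkbench(server_params=params)
    await workbench.start()
    tools = await workbench.list_tools()
    print([tool["name"] for tool in tools])
    await workbench.stop()


asyncio.run(main())
```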

## Related issue number

https://github.com/microsoft/autogen/discussions/6517

---------

Co-authored-by: Victor Dibia <victordibia@microsoft.com>
Co-authored-by: Victor Dibia <victor.dibia@gmail.com>
2025-06-03 13:36:16 -07:00
peterychang
1858799fa6
Parse backtick-enclosed json (#6607)
## Why are these changes needed?

Some models enclose JSON in markdown code blocks.

## Related issue number

resolves https://github.com/microsoft/autogen/issues/6599, #6547

---------

Co-authored-by: Victor Dibia <victordibia@microsoft.com>
2025-06-03 18:22:01 +00:00
Jay Prakash Thakur
d1d664b67e
Feature: Add OpenAIAgent backed by OpenAI Response API (#6418)
## Why are these changes needed?

This PR introduces a new `OpenAIAgent` implementation that uses the
[OpenAI Response
API](https://platform.openai.com/docs/guides/responses-vs-chat-completions)
as its backend. The OpenAI Assistant API will be deprecated in 2026, and
the Response API is its successor. This change ensures our codebase is
future-proof and aligned with OpenAI’s latest platform direction.

### Motivation
- **Deprecation Notice:** The OpenAI Assistant API will be deprecated in
2026.
- **Future-Proofing:** The Response API is the recommended replacement
and offers improved capabilities for stateful, multi-turn, and
tool-augmented conversations.
- **AgentChat Compatibility:** The new agent is designed to conform to
the behavior and expectations of `AssistantAgent` in AgentChat, but is
implemented directly on top of the OpenAI Response API.

### Key Changes
- **New Agent:** Adds `OpenAIAgent`, a stateful agent that interacts
with the OpenAI Response API.
- **Stateful Design:** The agent maintains conversation state, tool
usage, and other metadata as required by the Response API.
- **AssistantAgent Parity:** The new agent matches the interface and
behavior of `AssistantAgent` in AgentChat, ensuring a smooth migration
path.
- **Direct OpenAI Integration:** Uses the official `openai` Python
library for all API interactions.
- **Extensible:** Designed to support future enhancements, such as
advanced tool use, function calling, and multi-modal capabilities.
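
A rough construction sketch, assuming the agent takes an `openai.AsyncOpenAI` client plus a model and instructions (the constructor argument names here are assumptions, not taken from the final API):

```python
import asyncio

from openai import AsyncOpenAI

from autogen_agentchat.ui import Console
from autogen_ext.agents.openai import OpenAIAgent  # module path assumed


async def main() -> None:
    agent = OpenAIAgent(
        name="assistant",
        description="An agent backed by the OpenAI Response API.",
        client=AsyncOpenAI(),  # assumes OPENAI_API_KEY is set
        model="gpt-4.1",  # placeholder model id
        instructions="You are a helpful assistant.",
    )
    await Console(agent.run_stream(task="Summarize the Response API in one sentence."))


asyncio.run(main())
```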

### Migration Path
- Existing users of the Assistant API should migrate to the new
`OpenAIAgent` to ensure long-term compatibility.
- Documentation and examples will be updated to reflect the new agent
and its usage patterns.

### References
- [OpenAI: Responses vs. Chat
Completions](https://platform.openai.com/docs/guides/responses-vs-chat-completions)
- [OpenAI Deprecation
Notice](https://platform.openai.com/docs/guides/responses-vs-chat-completions#deprecation-timeline)
---

## Related issue number

Closes #6032 

Co-authored-by: Griffin Bassman <griffinbassman@gmail.com>
2025-06-03 13:42:27 -04:00
peterychang
03394a42c0
Default usage statistics for streaming responses (#6578)
## Why are these changes needed?

Enables usage statistics for streaming responses by default.

There is a similar bug in the AzureAI client. Theoretically adding the
parameter
```
model_extras={"stream_options": {"include_usage": True}}
```
should fix the problem, but I'm currently unable to test that workflow
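
A small sketch of reading usage from a streamed response once it is on by default (the model id is a placeholder):

```python
import asyncio

from autogen_core.models import UserMessage
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main() -> None:
    client = OpenAIChatCompletionClient(model="gpt-4o")
    async for chunk in client.create_stream([UserMessage(content="Say hi.", source="user")]):
        if isinstance(chunk, str):
            print(chunk, end="")
        else:
            # The final item is a CreateResult; with usage enabled by default it carries token counts.
            print("\n", chunk.usage)


asyncio.run(main())
```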

## Related issue number

closes https://github.com/microsoft/autogen/issues/6548

2025-05-28 14:32:04 -04:00
Victor Dibia
9bbcfa03ac
feat: [draft] update version of azureaiagent (#6581)

There have been updates to the Azure AI Agent (Foundry) SDK
(azure-ai-project). This PR updates the AutoGen `AzureAIAgent`, which
wraps the Azure AI agent. Some of the changes:

- Update docstring samples to use `endpoint` (instead of the connection string used previously)
- Update imports and arguments, e.g., from `azure.ai.agents`, etc.
- Add a guide in the ext docs showing a Bing Search Grounding tool example.

## Related issue number

Closes #6601

2025-05-27 10:52:47 -07:00
Sungjun.Kim
b8d02c9a20
feat: Add missing Anthropic models (Claude Sonnet 4, Claude Opus 4) (#6585)

## Related issue number

resolved https://github.com/microsoft/autogen/issues/6584

2025-05-23 12:15:12 -07:00
Björn Holtvogt
f7a45feca1
Add option to auto-delete temporary files in LocalCommandLineCodeExecutor (#6556)

## Why are these changes needed?

The `LocalCommandLineCodeExecutor` creates temporary files for each code
execution, which can accumulate over time and clutter the filesystem -
especially when a temporary working directory is not used. These changes
introduce an option to automatically delete temporary files after
execution, helping to prevent filesystem debris, reduce disk usage, and
ensure cleaner runtime environments in long-running or repeated
execution scenarios.
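
A sketch of how the option might be switched on; the keyword argument name used below is hypothetical, not confirmed by this entry:

```python
import asyncio

from autogen_core import CancellationToken
from autogen_core.code_executor import CodeBlock
from autogen_ext.code_executors.local import LocalCommandLineCodeExecutor


async def main() -> None:
    # NOTE: `cleanup_temp_files` is a hypothetical name for the new auto-delete option.
    executor = LocalCommandLineCodeExecutor(work_dir="coding", cleanup_temp_files=True)
    result = await executor.execute_code_blocks(
        [CodeBlock(code="print('hello')", language="python")],
        cancellation_token=CancellationToken(),
    )
    print(result.output)


asyncio.run(main())
```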

## Related issue number

Closes #4380

2025-05-21 21:47:47 +02:00
Damien Menigaux
c6c8693b2c
Add gemini 2.5 flash compatibility (#6574)
2025-05-21 19:38:03 +00:00
Eric Zhu
1578cd955f
Include all output to error output in docker jupyter code executor (#6572)
Currently, when an error occurs while executing code in the Docker Jupyter
executor, it returns only the error output.

This PR updates the handling of error output to include outputs from
previous code blocks that have been successfully executed.

Test it with this script:

```python
from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_ext.code_executors.docker_jupyter import DockerJupyterCodeExecutor, DockerJupyterServer
from autogen_ext.tools.code_execution import PythonCodeExecutionTool
from autogen_agentchat.ui import Console
from autogen_core.code_executor import CodeBlock
from autogen_core import CancellationToken
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.conditions import TextMessageTermination

# Download the dataset from https://www.kaggle.com/datasets/nelgiriyewithana/top-spotify-songs-2023
# and place it in the coding directory as `spotify-2023.csv`.
bind_dir = "./coding"

# Use a custom docker image with the Jupyter kernel gateway and data science libraries installed.
# Custom docker image: ds-kernel-gateway:latest -- you need to build this image yourself.
# Dockerfile:
# FROM quay.io/jupyter/docker-stacks-foundation:latest
# 
# # ensure that 'mamba' and 'fix-permissions' are on the PATH
# SHELL ["/bin/bash", "-o", "pipefail", "-c"]
# 
# # Switch to the default notebook user
# USER ${NB_UID}
# 
# # Install data-science packages + kernel gateway
# RUN mamba install --quiet --yes \
#     numpy \
#     pandas \
#     scipy \
#     matplotlib \
#     scikit-learn \
#     seaborn \
#     jupyter_kernel_gateway \
#     ipykernel \
#     && mamba clean --all -f -y \
#     && fix-permissions "${CONDA_DIR}" \
#     && fix-permissions "/home/${NB_USER}"
# 
# # Allow you to set a token at runtime (or leave blank for no auth)
# ENV TOKEN=""
# 
# # Launch the Kernel Gateway, listening on all interfaces,
# # with the HTTP endpoint for listing kernels enabled
# CMD ["python", "-m", "jupyter", "kernelgateway", \
#     "--KernelGatewayApp.ip=0.0.0.0", \
#     "--KernelGatewayApp.port=8888", \
#     # "--KernelGatewayApp.auth_token=${TOKEN}", \
#     "--JupyterApp.answer_yes=true", \
#     "--JupyterWebsocketPersonality.list_kernels=true"]
# 
# EXPOSE 8888
# 
# WORKDIR "${HOME}"

async def main():
    model = OpenAIChatCompletionClient(model="gpt-4.1")
    async with DockerJupyterServer(
        custom_image_name="ds-kernel-gateway:latest", 
        bind_dir=bind_dir,
    ) as server:
        async with DockerJupyterCodeExecutor(jupyter_server=server) as code_executor:
            await code_executor.execute_code_blocks([
                CodeBlock(code="import pandas as pd\ndf = pd.read_csv('/workspace/spotify-2023.csv', encoding='latin-1')", language="python"),
            ],
                cancellation_token=CancellationToken(),
            )
            tool = PythonCodeExecutionTool(
                executor=code_executor,
            )
            assistant = AssistantAgent(
                "assistant",
                model_client=model,
                system_message="You have access to a Jupyter kernel. Do not write all code at once. Write one code block, observe the output, and then write the next code block.",
                tools=[tool],
            )
            team = RoundRobinGroupChat(
                [assistant],
                termination_condition=TextMessageTermination(source="assistant"),
            )
            task = f"Datafile has been loaded as variable `df`. First preview dataset. Then answer the following question: What is the highest streamed artist in the dataset?"
            await Console(team.run_stream(task=task))

if __name__ == "__main__":
    import asyncio
    asyncio.run(main())
```

You can see the file encoding error gets recovered and the agent
successfully executes the query in the end.
2025-05-21 17:27:46 +00:00
GeorgiosEfstathiadis
113aca0b81
Allow implicit aws credential setting for AnthropicBedrockChatCompletionClient (#6561)

## Why are these changes needed?

Allows implicit AWS credential setting when using
AnthropicBedrockChatCompletionClient in an instance where you have
already logged into AWS with SSO and credentials are set as environment
variables.
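
A sketch of constructing the client without explicit credentials, assuming the standard AWS environment variables are already exported (the model id and `model_info` values are illustrative):

```python
from autogen_core.models import ModelInfo
from autogen_ext.models.anthropic import AnthropicBedrockChatCompletionClient

client = AnthropicBedrockChatCompletionClient(
    model="anthropic.claude-3-5-sonnet-20240620-v1:0",
    model_info=ModelInfo(
        vision=False, function_calling=True, json_output=False, family="unknown", structured_output=True
    ),
    # No bedrock_info: the underlying AsyncAnthropicBedrock client resolves
    # AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY / AWS_SESSION_TOKEN / AWS_REGION from the environment.
)
```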

## Related issue number

Closes #6560 

Co-authored-by: Jack Gerrits <jackgerrits@users.noreply.github.com>
2025-05-21 12:49:03 -04:00
Afzal Pawaskar
446da624ac
Fix missing tools in logs (#6532)
Fix for `LLMCallEvent` failing to log "tools" passed to
`autogen_ext.models.openai._openai_client.BaseOpenAIChatCompletionClient`.

This bug makes it hard to inspect why a certain tool was or was not
selected by the LLM, as the list of tools available to the LLM is not
present in the logs.

## Why are these changes needed?

Added "tools" to the `LLMCallEvent` to log the tools available to the LLM,
as these were being missed, causing difficulties when debugging LLM tool
calls.

## Related issue number

https://github.com/microsoft/autogen/issues/6531

---------

Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2025-05-15 17:12:19 -07:00
Chester Hu
9d297318b5
Add Llama API OAI compatible endpoint support (#6442)
## Why are these changes needed?

To add the latest support for using Llama API offerings with AutoGen.
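
The entry gives no API details, so the sketch below just shows the generic OpenAI-compatible-client pattern; the base URL and model id are placeholders, not taken from this change:

```python
from autogen_core.models import ModelInfo
from autogen_ext.models.openai import OpenAIChatCompletionClient

client = OpenAIChatCompletionClient(
    model="Llama-4-Maverick-17B-128E-Instruct-FP8",  # placeholder model id
    base_url="https://api.llama.com/compat/v1/",  # placeholder endpoint
    api_key="YOUR_LLAMA_API_KEY",
    model_info=ModelInfo(
        vision=True, function_calling=True, json_output=True, family="unknown", structured_output=True
    ),
)
```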


---------

Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2025-05-15 16:57:27 -07:00
Miroslav Pokrovskii
aa22b622d0
feat: add qwen3 support (#6528)
## Why are these changes needed?

Add ollama qwen 3 support
2025-05-14 09:52:13 -07:00
Jay Prakash Thakur
87cf4f07dd
Simplify Azure Ai Search Tool (#6511)
## Why are these changes needed?

Simplified the Azure AI Search tool and fixed bugs in the code.


## Related issue number

Closes #6430

Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2025-05-13 13:42:11 -07:00
EeS
978cbd2e89
FIX/mistral could not receive name field (#6503)
## Why are these changes needed?
Mistral could not receive the name field, so this adds a model transformer for Mistral.

## Related issue number
Closes #6147

Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2025-05-12 19:32:14 -07:00
Peter Jausovec
867194a907
fixes the issues where exceptions from MCP server tools aren't serialized properly (#6482)

## Why are these changes needed?
The exceptions thrown by MCP server tools weren't being serialized
properly - the user would see `[{}, {}, ... {}]` instead of an actual
error/exception message.

## Related issue number

Fixes #6481 

---------

Signed-off-by: Peter Jausovec <peter.jausovec@solo.io>
Co-authored-by: Victor Dibia <victordibia@microsoft.com>
Co-authored-by: Victor Dibia <victor.dibia@gmail.com>
2025-05-12 15:01:08 -07:00
peterychang
9118f9b998
Add ability to register Agent instances (#6131)

## Why are these changes needed?

Nice-to-have functionality.

## Related issue number

Closes #6060 

---------

Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2025-05-12 15:34:48 +00:00
EeS
c26d894c34
fix/mcp_session_auto_close_when_Mcpworkbench_deleted (#6497)

## Why are these changes needed?
As-is: Deleting `McpWorkbench` does not close the `McpSession`.  
To-be: Deleting `McpWorkbench` now properly closes the `McpSession`.

2025-05-12 13:31:01 +00:00
Victor Dibia
6427c07f5c
Fix AnthropicBedrockChatCompletionClient import error (#6489)

Some fixes to the `AnthropicBedrockChatCompletionClient`:

- Ensure `AnthropicBedrockChatCompletionClient` is exported and can be imported.
- Update the `BedrockInfo` keys serialization - the client argument can be a string (similar to the API key here) but the exported config should be a Secret.
- Replace `AnthropicBedrock` with `AsyncAnthropicBedrock`: the client should be async to work with the AG stack and the `BaseAnthropicClient` it inherits from.
- Improve the `AnthropicBedrockChatCompletionClient` docstring to use the correct client arguments rather than the serialized dict format.

Expected usage:

```python
import asyncio

from autogen_ext.models.anthropic import AnthropicBedrockChatCompletionClient, BedrockInfo
from autogen_core.models import UserMessage, ModelInfo


async def main():
    anthropic_client = AnthropicBedrockChatCompletionClient(
        model="anthropic.claude-3-5-sonnet-20240620-v1:0",
        temperature=0.1,
        model_info=ModelInfo(vision=False, function_calling=True,
                             json_output=False, family="unknown", structured_output=True),
        bedrock_info=BedrockInfo(
            aws_access_key="<aws_access_key>",
            aws_secret_key="<aws_secret_key>",
            aws_session_token="<aws_session_token>",
            aws_region="<aws_region>",
        ),
    )
    # type: ignore
    result = await anthropic_client.create([UserMessage(content="What is the capital of France?", source="user")])
    print(result)

asyncio.run(main())
```

## Related issue number


Closes #6483 

---------

Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2025-05-10 09:12:02 -07:00