328 Commits

Author SHA1 Message Date
Björn Holtvogt
f7a45feca1
Add option to auto-delete temporary files in LocalCommandLineCodeExecutor (#6556)

## Why are these changes needed?


The `LocalCommandLineCodeExecutor` creates temporary files for each code
execution, which can accumulate over time and clutter the filesystem -
especially when a temporary working directory is not used. These changes
introduce an option to automatically delete temporary files after
execution, helping to prevent file system debris, reduce disk usage, and
ensure cleaner runtime environments in long-running or repeated
execution scenarios.
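
Below is a hedged usage sketch of the new option; the constructor flag name `cleanup_temp_files` is an assumption about how it is exposed, so check the executor's docstring for the exact parameter.

```python
# Hedged sketch: the `cleanup_temp_files` flag name is an assumption, not taken from this PR.
import asyncio

from autogen_core import CancellationToken
from autogen_core.code_executor import CodeBlock
from autogen_ext.code_executors.local import LocalCommandLineCodeExecutor


async def run_once() -> None:
    executor = LocalCommandLineCodeExecutor(
        work_dir="coding",
        cleanup_temp_files=True,  # assumed flag: delete generated temp files after execution
    )
    result = await executor.execute_code_blocks(
        [CodeBlock(code="print('hello')", language="python")],
        cancellation_token=CancellationToken(),
    )
    print(result.output)


asyncio.run(run_once())
```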

## Related issue number

Closes #4380

## Checks

- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
2025-05-21 21:47:47 +02:00
Damien Menigaux
c6c8693b2c
Add gemini 2.5 flash compatibility (#6574)

## Why are these changes needed?


## Related issue number


## Checks

- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
2025-05-21 19:38:03 +00:00
Eric Zhu
1578cd955f
Include all output to error output in docker jupyter code executor (#6572)
Currently, when an error occurs while executing code in the Docker Jupyter
executor, only the error output is returned.

This PR updates the handling of error output to include outputs from
previous code blocks that have been successfully executed.

Test it with this script:

```python
from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_ext.code_executors.docker_jupyter import DockerJupyterCodeExecutor, DockerJupyterServer
from autogen_ext.tools.code_execution import PythonCodeExecutionTool
from autogen_agentchat.ui import Console
from autogen_core.code_executor import CodeBlock
from autogen_core import CancellationToken
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.conditions import TextMessageTermination

# Download the dataset from https://www.kaggle.com/datasets/nelgiriyewithana/top-spotify-songs-2023
# and place it in the coding directory as `spotify-2023.csv`.
bind_dir = "./coding"

# Use a custom docker image with the Jupyter kernel gateway and data science libraries installed.
# Custom docker image: ds-kernel-gateway:latest -- you need to build this image yourself.
# Dockerfile:
# FROM quay.io/jupyter/docker-stacks-foundation:latest
# 
# # ensure that 'mamba' and 'fix-permissions' are on the PATH
# SHELL ["/bin/bash", "-o", "pipefail", "-c"]
# 
# # Switch to the default notebook user
# USER ${NB_UID}
# 
# # Install data-science packages + kernel gateway
# RUN mamba install --quiet --yes \
#     numpy \
#     pandas \
#     scipy \
#     matplotlib \
#     scikit-learn \
#     seaborn \
#     jupyter_kernel_gateway \
#     ipykernel \
#     && mamba clean --all -f -y \
#     && fix-permissions "${CONDA_DIR}" \
#     && fix-permissions "/home/${NB_USER}"
# 
# # Allow you to set a token at runtime (or leave blank for no auth)
# ENV TOKEN=""
# 
# # Launch the Kernel Gateway, listening on all interfaces,
# # with the HTTP endpoint for listing kernels enabled
# CMD ["python", "-m", "jupyter", "kernelgateway", \
#     "--KernelGatewayApp.ip=0.0.0.0", \
#     "--KernelGatewayApp.port=8888", \
#     # "--KernelGatewayApp.auth_token=${TOKEN}", \
#     "--JupyterApp.answer_yes=true", \
#     "--JupyterWebsocketPersonality.list_kernels=true"]
# 
# EXPOSE 8888
# 
# WORKDIR "${HOME}"

async def main():
    model = OpenAIChatCompletionClient(model="gpt-4.1")
    async with DockerJupyterServer(
        custom_image_name="ds-kernel-gateway:latest", 
        bind_dir=bind_dir,
    ) as server:
        async with DockerJupyterCodeExecutor(jupyter_server=server) as code_executor:
            await code_executor.execute_code_blocks([
                CodeBlock(code="import pandas as pd\ndf = pd.read_csv('/workspace/spotify-2023.csv', encoding='latin-1')", language="python"),
            ],
                cancellation_token=CancellationToken(),
            )
            tool = PythonCodeExecutionTool(
                executor=code_executor,
            )
            assistant = AssistantAgent(
                "assistant",
                model_client=model,
                system_message="You have access to a Jupyter kernel. Do not write all code at once. Write one code block, observe the output, and then write the next code block.",
                tools=[tool],
            )
            team = RoundRobinGroupChat(
                [assistant],
                termination_condition=TextMessageTermination(source="assistant"),
            )
            task = f"Datafile has been loaded as variable `df`. First preview dataset. Then answer the following question: What is the highest streamed artist in the dataset?"
            await Console(team.run_stream(task=task))

if __name__ == "__main__":
    import asyncio
    asyncio.run(main())
```

You can see the file encoding error gets recovered and the agent
successfully executes the query in the end.
2025-05-21 17:27:46 +00:00
GeorgiosEfstathiadis
113aca0b81
Allow implicit aws credential setting for AnthropicBedrockChatCompletionClient (#6561)

## Why are these changes needed?

Allows implicit AWS credential setting when using
`AnthropicBedrockChatCompletionClient` in cases where you have already
logged into AWS with SSO and credentials are available as environment
variables.
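
A minimal sketch of what this enables (not code from the PR): constructing the client without `bedrock_info`, assuming credentials are resolved implicitly from the environment or SSO session; the model id is illustrative.

```python
# Hedged sketch: assumes bedrock_info may now be omitted so the default AWS credential
# chain (env vars / SSO session) is used. The model id is illustrative.
import asyncio

from autogen_core.models import ModelInfo, UserMessage
from autogen_ext.models.anthropic import AnthropicBedrockChatCompletionClient


async def main() -> None:
    client = AnthropicBedrockChatCompletionClient(
        model="anthropic.claude-3-5-sonnet-20240620-v1:0",
        model_info=ModelInfo(
            vision=False, function_calling=True, json_output=False,
            family="unknown", structured_output=True,
        ),
        # No bedrock_info: credentials come implicitly from the environment.
    )
    result = await client.create([UserMessage(content="Hello!", source="user")])
    print(result)


asyncio.run(main())
```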

## Related issue number

Closes #6560 

## Checks

- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.

Co-authored-by: Jack Gerrits <jackgerrits@users.noreply.github.com>
2025-05-21 12:49:03 -04:00
Afzal Pawaskar
446da624ac
Fix missing tools in logs (#6532)
Fix for `LLMCallEvent` failing to log the "tools" passed to
`BaseOpenAIChatCompletionClient` in
`autogen_ext.models.openai._openai_client`.

This bug made it hard to inspect why a certain tool was or was not
selected by the LLM, since the list of tools available to the LLM was not
present in the logs.

## Why are these changes needed?

Added "tools" to the `LLMCallEvent` to log the tools available to the LLM;
these were previously missing, which made debugging LLM tool calls
difficult.
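
A small sketch (not from the PR) of how the enriched event can be observed: attach a handler to AutoGen's event logger, and the `LLMCallEvent` emitted by the OpenAI client will now carry the tools it was given.

```python
# Hedged sketch: enable INFO-level event logging so LLMCallEvent records (which now
# include the "tools" offered to the LLM) show up in the logs.
import logging

from autogen_core import EVENT_LOGGER_NAME

logging.basicConfig(level=logging.INFO)
logging.getLogger(EVENT_LOGGER_NAME).setLevel(logging.INFO)

# Any subsequent model_client.create(..., tools=[...]) call logs an LLMCallEvent
# whose payload now lists the tools that were offered to the LLM.
```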

## Related issue number

Closes #6531

## Checks

- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.

---------

Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2025-05-15 17:12:19 -07:00
Chester Hu
9d297318b5
Add Llama API OAI compatible endpoint support (#6442)
## Why are these changes needed?

To add the latest support for using Llama API offerings with AutoGen
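
A hedged sketch of what this enables, using the OpenAI-compatible client; the endpoint URL, model name, and environment variable below are placeholders, not values taken from this PR.

```python
# Hedged sketch: placeholder base_url, model id, and env var; adjust to the actual
# Llama API offering you use.
import os

from autogen_core.models import ModelInfo
from autogen_ext.models.openai import OpenAIChatCompletionClient

client = OpenAIChatCompletionClient(
    model="Llama-4-Maverick-17B-128E-Instruct-FP8",  # placeholder model id
    base_url="https://api.llama.com/compat/v1/",     # placeholder OpenAI-compatible endpoint
    api_key=os.environ["LLAMA_API_KEY"],             # placeholder env var
    model_info=ModelInfo(
        vision=False, function_calling=True, json_output=True,
        family="unknown", structured_output=True,
    ),
)
```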


## Checks

- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.

---------

Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2025-05-15 16:57:27 -07:00
Miroslav Pokrovskii
aa22b622d0
feat: add qwen3 support (#6528)
## Why are these changes needed?

Add Ollama Qwen 3 support.
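
A minimal sketch of what this enables, assuming a local Ollama server with the `qwen3` model already pulled:

```python
# Hedged sketch: assumes Ollama is running locally and `ollama pull qwen3` has been done.
import asyncio

from autogen_core.models import UserMessage
from autogen_ext.models.ollama import OllamaChatCompletionClient


async def main() -> None:
    client = OllamaChatCompletionClient(model="qwen3")
    result = await client.create([UserMessage(content="Hello!", source="user")])
    print(result)


asyncio.run(main())
```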
2025-05-14 09:52:13 -07:00
Jay Prakash Thakur
87cf4f07dd
Simplify Azure Ai Search Tool (#6511)
## Why are these changes needed?

Simplified the azure ai search tool and fixed bugs in the code


## Related issue number

Closes #6430

## Checks

- [X] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [X] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [X] I've made sure all auto checks have passed.

Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2025-05-13 13:42:11 -07:00
EeS
978cbd2e89
FIX/mistral could not receive name field (#6503)
## Why are these changes needed?
Mistral could not receive the `name` field, so a model transformer for
Mistral is added.

## Related issue number
Closes #6147

## Checks

- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.

Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2025-05-12 19:32:14 -07:00
Eric Zhu
177211b5b4
Update version 0.5.7 (#6518) 2025-05-12 17:19:32 -07:00
Peter Jausovec
867194a907
fixes the issues where exceptions from MCP server tools aren't serialized properly (#6482)

## Why are these changes needed?
The exceptions thrown by MCP server tools weren't being serialized
properly - the user would see `[{}, {}, ... {}]` instead of an actual
error/exception message.

## Related issue number

Fixes #6481 

## Checks

- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.

---------

Signed-off-by: Peter Jausovec <peter.jausovec@solo.io>
Co-authored-by: Victor Dibia <victordibia@microsoft.com>
Co-authored-by: Victor Dibia <victor.dibia@gmail.com>
2025-05-12 15:01:08 -07:00
peterychang
9118f9b998
Add ability to register Agent instances (#6131)

## Why are these changes needed?

Nice to have functionality

## Related issue number

Closes #6060 

## Checks

- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.

---------

Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2025-05-12 15:34:48 +00:00
EeS
c26d894c34
fix/mcp_session_auto_close_when_Mcpworkbench_deleted (#6497)

## Why are these changes needed?
As-is: Deleting `McpWorkbench` does not close the `McpSession`.  
To-be: Deleting `McpWorkbench` now properly closes the `McpSession`.
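
A hedged sketch of the lifecycle this touches (method names assume the workbench start/stop interface); explicit shutdown remains the clearest option, and with this change the MCP session is also closed when the workbench object is simply dropped.

```python
# Hedged sketch: assumes McpWorkbench exposes start()/stop()/list_tools().
import asyncio

from autogen_ext.tools.mcp import McpWorkbench, StdioServerParams


async def main() -> None:
    workbench = McpWorkbench(StdioServerParams(command="npx", args=["@playwright/mcp@latest"]))
    try:
        await workbench.start()
        print(len(await workbench.list_tools()))
    finally:
        await workbench.stop()  # explicit close; deleting the workbench now also closes the session


asyncio.run(main())
```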


## Related issue number


## Checks

- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
2025-05-12 13:31:01 +00:00
Victor Dibia
6427c07f5c
Fix AnthropicBedrockChatCompletionClient import error (#6489)

Some fixes with the AnthropicBedrockChatCompletionClient:

- Ensure `AnthropicBedrockChatCompletionClient` is exported and can be imported.
- Update the `BedrockInfo` keys serialization: the client argument can be a string (similar to the API key), but the exported config should be a `Secret`.
- Replace `AnthropicBedrock` with `AsyncAnthropicBedrock`: the client should be async to work with the AutoGen stack and the `BaseAnthropicClient` it inherits from.
- Improve the `AnthropicBedrockChatCompletionClient` docstring to use the correct client arguments rather than the serialized dict format.

Expected usage:

```python
from autogen_ext.models.anthropic import AnthropicBedrockChatCompletionClient, BedrockInfo
from autogen_core.models import UserMessage, ModelInfo


async def main():
    anthropic_client = AnthropicBedrockChatCompletionClient(
        model="anthropic.claude-3-5-sonnet-20240620-v1:0",
        temperature=0.1,
        model_info=ModelInfo(vision=False, function_calling=True,
                             json_output=False, family="unknown", structured_output=True),
        bedrock_info=BedrockInfo(
            aws_access_key="<aws_access_key>",
            aws_secret_key="<aws_secret_key>",
            aws_session_token="<aws_session_token>",
            aws_region="<aws_region>",
        ),
    )
    # type: ignore
    result = await anthropic_client.create([UserMessage(content="What is the capital of France?", source="user")])
    print(result)

if __name__ == "__main__":
    import asyncio

    asyncio.run(main())
```


## Why are these changes needed?


## Related issue number


Closes #6483 

## Checks

- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.

---------

Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2025-05-10 09:12:02 -07:00
peterychang
3db7a29403
improve Otel tracing (#6499)

## Why are these changes needed?

Will the changes made in
https://github.com/microsoft/autogen/pull/5853/files and this PR need to
be ported to the worker_runtime as well?
Resolves https://github.com/microsoft/autogen/issues/5894

## Related issue number

https://github.com/microsoft/autogen/issues/5894

## Checks

- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
2025-05-09 16:24:30 -04:00
Victor Dibia
1ab24417b8
Add gpt 4o search (#6492)

Add gpt-4o search models to the list of default models

```python
"gpt-4o-mini-search-preview-2025-03-11": {
        "vision": False,
        "function_calling": True,
        "json_output": True,
        "family": ModelFamily.GPT_4O,
        "structured_output": True,
        "multiple_system_messages": True,
    },
"gpt-4o-search-preview-2025-03-11": {
        "vision": False,
        "function_calling": True,
        "json_output": True,
        "family": ModelFamily.GPT_4O,
        "structured_output": True,
        "multiple_system_messages": True,
    },
```


## Why are these changes needed?


## Related issue number


Closes #6491

## Checks

- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
2025-05-08 18:01:38 -07:00
EeS
880a225cb2
FIX/McpWorkbench_errors_properties_and_grace_shutdown (#6444)
## Why are these changes needed?
Our `McpWorkbench` required `properties` in tool schemas, but some of
mongodb-lens's tools do not have it; this change treats a `None`
`properties` value as `{}`.
`McpWorkbench` also had no stop routine when used without
`async with McpWorkbench(params) as workbench:` (it is lazily initialized),
so a `__del__` method is added for graceful shutdown; without it, an error
could be shown.

## Related issue number

Closes #6425

## Checks

- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
2025-05-02 15:23:54 -07:00
EeS
6fc4f53212
FIX: MultiModalMessage in gemini with openai sdk error occurred (#6440)
## Why are these changes needed?

Multimodal messages fill the content through a separate routine, but the
current `_set_empty_to_whitespace` was also being applied to them, causing
an error.

Checking `multimodal_user_transformer_funcs` shows that in this routine the
content must not be empty, so `_set_empty_to_whitespace` is no longer
applied to multimodal messages.

## Related issue number

Closes #6439 

## Checks

- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.

Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2025-05-01 09:27:31 -07:00
Eric Zhu
8efa1f10a9
update autogen version 0.5.6 (#6433) 2025-04-29 16:18:36 -07:00
EeS
b0c13a476b
test_docker_commandline_code_executor.py: 161.66s -> 108.07s (#6429)
## Why are these changes needed?
The autogen-ext test suite is currently too slow, so this PR finds slow
test cases and speeds them up.

[init docker executor function to module
180s->140s](a3cf70bcf8)
[reuse executor at some tests
140s->120s](ca15938afa)
[Remove unnecessary start of docker
120s->110s](61247611e0)

## Related issue number

Part of #6376

## Checks

- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.

---------

Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2025-04-29 14:18:51 -07:00
Abdo Talema
881cd6a75c
Bing grounding citations (#6370)
Adding support for Bing grounding citations to the AzureAIAgent.


## Why are these changes needed?


## Related issue number


## Checks

- [X] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [X] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [X] I've made sure all auto checks have passed.

---------

Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
Co-authored-by: Dheeraj Bandaru <BandaruDheeraj@users.noreply.github.com>
2025-04-28 13:09:13 -07:00
Harini N
a91006cdc2
Adding bedrock chat completion for anthropic models (#6170)
## Why are these changes needed?

Anthropic models are supported by AWS Bedrock. With these changes, a
`ChatCompletionClient` can be created for Anthropic Bedrock models. This
enables the user to do the following:
- Add any Anthropic model and version from AWS Bedrock
- Use `ChatCompletionClient` for Bedrock Anthropic models

## Related issue number
Closes #5226

---------

Co-authored-by: harini.narasimhan <harini.narasimhan@eagleview.com>
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2025-04-28 11:56:46 -07:00
EeS
516a211954
[FIX] DockerCommandLineCodeExecutor multi event loop aware (#6402)
## Why are these changes needed?
*Problem*

Previously, in `DockerCommandLineCodeExecutor`, cancellation tasks were
added directly to `self._cancellation_tasks` using
`asyncio.create_task()`:
```python
self._cancellation_tasks.append(asyncio.create_task(self._kill_running_command(command)))
```
This caused issues when cancellation tasks were created from multiple
event loops, leading to loop mismatch errors during executor shutdown.

*Solution*
This PR fixes the issue by introducing a dedicated internal event loop
for managing cancellation tasks.
Cancellation tasks are now scheduled in a fixed event loop using
`asyncio.run_coroutine_threadsafe()`:
```python
future: ConcurrentFuture[None] = asyncio.run_coroutine_threadsafe(
    self._kill_running_command(command), self._loop
)
self._cancellation_futures.append(future)
```

*Additional Changes*
- Added detailed logging for easier debugging.
- Ensured clean shutdown of the internal event loop and associated
thread.


*Note*
This change ensures that all cancellation tasks are handled consistently
in a single loop, preventing cross-loop conflicts and improving executor
stability in multi-threaded environments.

## Related issue number

Closes #6395 

## Checks

- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.

---------

Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2025-04-28 10:30:19 -07:00
Eric Zhu
63c791d342
Add more mcp workbench examples to MCP API doc (#6403) 2025-04-25 17:45:24 -07:00
Minh Đăng
519a04d5fc
Update: implement return_value_as_string for McpToolAdapter (#6380)
## Why are these changes needed?
- Add `return_value_as_string` for formatting the result from the MCP tool

## Related issue number
- Opened Issue on #6368 

## Checks
- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.

---------

Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2025-04-25 15:41:02 -07:00
Eric Zhu
70784eaeda
Update version to 0.5.5 (#6397) 2025-04-25 14:22:03 -07:00
EeS
0c9fd64d6e
TEST: skip when macos+uv and adding uv venv tests (#6387)
## Why are these changes needed?

> The pytest tests test_local_executor_with_custom_venv and
test_local_executor_with_custom_venv_in_local_relative_path located in
packages/autogen-ext/tests/code_executors/test_commandline_code_executor.py
fail when run on macOS (aarch64) using a Python interpreter managed by
uv (following the project's recommended development setup).
> 
> The failure occurs during the creation of a nested virtual environment
using Python's standard venv.EnvBuilder. Specifically, the attempt to
run ensurepip inside the newly created venv fails immediately with a
SIGABRT signal. The root cause appears to be a dynamic library loading
error (dyld error) where the Python executable inside the newly created
venv cannot find its required libpythonX.Y.dylib shared library.

So, those tests are skipped in the macOS + uv case, and a uv-venv test
case is added.
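
A hedged sketch of the skip pattern (not the exact code from the PR; the uv-detection heuristic below is an assumption):

```python
# Hedged sketch: the uv-detection heuristic is an assumption, not the PR's exact check.
import os
import sys

import pytest


def _uv_managed_python() -> bool:
    return "uv" in os.environ.get("VIRTUAL_ENV", "").lower() or "UV" in os.environ


@pytest.mark.skipif(
    sys.platform == "darwin" and _uv_managed_python(),
    reason="venv.EnvBuilder/ensurepip aborts (dyld libpython error) on macOS with uv-managed Python",
)
def test_local_executor_with_custom_venv() -> None:
    ...
```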

## Related issue number

Closes #6341

## Checks

- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.

Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2025-04-24 14:34:38 -07:00
Eric Zhu
cbd8745f2b
Add guide for workbench and mcp & bug fixes for create_mcp_server_session (#6392)
Add user guide for workbench and mcp

Also fixed a bug in `create_mcp_server_session` that included "type" in
the parameters. Resolves #6392
2025-04-24 14:19:09 -07:00
Eric Zhu
f059262b6d
Remove name field from OpenAI Assistant Message (#6388)
Resolves #3247
2025-04-24 13:11:40 -07:00
Eric Zhu
8fcba01704
Introduce workbench (#6340)
This PR introduces `Workbench`.

A workbench provides a group of tools that share the same resource and
state. For example, `McpWorkbench` provides the underlying tools on the
MCP server. A workbench allows tools to be managed together and abstracts
away the lifecycle of individual tools under a single entity. This makes
it possible to create agents with stateful tools from serializable
configuration (component configs), and it also supports dynamic tools:
tools that may change after each execution.

Here is how a workbench may be used with AssistantAgent (not included in
this PR):

```python
workbench = McpWorkbench(server_params)
agent = AssistantAgent("assistant", tools=workbench)
result = await agent.run(task="do task...")
```
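
As a complement to the agent example above, here is a hedged sketch of driving a workbench directly; it assumes the workbench interface exposes `list_tools()` and `call_tool()` as described, and the server params and arguments are illustrative.

```python
# Hedged sketch: assumes list_tools()/call_tool() on the workbench interface.
import asyncio

from autogen_ext.tools.mcp import McpWorkbench, StdioServerParams


async def main() -> None:
    params = StdioServerParams(command="npx", args=["@playwright/mcp@latest"])
    async with McpWorkbench(params) as workbench:  # tools share one MCP session and state
        tools = await workbench.list_tools()
        print([t["name"] for t in tools])
        result = await workbench.call_tool(tools[0]["name"], arguments={})  # arguments are illustrative
        print(result)


asyncio.run(main())
```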


TODOs:
1. In a subsequent PR, update `AssistantAgent` to use workbench as an
alternative in the `tools` parameter. Use `StaticWorkbench` to manage
individual tools.
2. In another PR, add documentation on workbench.

---------

Co-authored-by: EeS <chiyoung.song@motov.co.kr>
Co-authored-by: Minh Đăng <74671798+perfogic@users.noreply.github.com>
2025-04-24 10:37:41 -07:00
EeS
a283d268df
TEST/change gpt4, gpt4o series to gpt4.1nano (#6375)
## Why are these changes needed?

| Package | Test time-Origin (Sec) | Test time-Edited (Sec) |
|---------|------------------------|------------------------|
| autogen-studio | 1.64 | 1.64 |
| autogen-core | 6.03 | 6.17 |
| autogen-ext | 387.15 | 373.40 |
| autogen-agentchat | 54.20 | 20.67 |


## Related issue number

Related #6361 

## Checks

- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
2025-04-23 17:51:25 +00:00
Abdo Talema
8a9729214f
Add azure ai agent (#6191)
- Added support for the Azure AI Agent. The new agent is named AzureAIAgent.
- The agent supports Bing search, file search, and Azure search tools.
- Added a Jupyter notebook to demonstrate the usage of the AzureAIAgent.

## What's missing?
- AzureAIAgent supports only text message responses
- Parallel execution for the custom functions.



## Related issue number


[5545](https://github.com/microsoft/autogen/issues/5545#event-16626859772)

---------

Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2025-04-21 21:51:22 -07:00
Henry Miller
00153155e3
Added support for exposing GPUs to docker code executor (#6339)
The DockerCommandLineCodeExecutor doesn't currently offer GPU support.
By simply using DeviceRequest from the docker python API, these changes
expose GPUs to the docker container and provide the ability to execute
CUDA-accelerated code within autogen.
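
A hedged sketch of exposing GPUs via docker's `DeviceRequest`; the `device_requests` parameter name is an assumption about how this PR surfaces the option, and the image is illustrative.

```python
# Hedged sketch: `device_requests` is an assumed parameter name; the image is illustrative.
from docker.types import DeviceRequest

from autogen_ext.code_executors.docker import DockerCommandLineCodeExecutor

executor = DockerCommandLineCodeExecutor(
    image="nvidia/cuda:12.4.1-runtime-ubuntu22.04",
    device_requests=[DeviceRequest(count=-1, capabilities=[["gpu"]])],  # expose all GPUs
)
```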

## Related issue number

Closes: #6302 

## Checks

- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.

---------

Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2025-04-22 01:17:06 +00:00
Peter Jausovec
d051da52c3
fix: ollama fails when tools use optional args (#6343)
## Why are these changes needed?
`convert_tools` failed if Optional args were used in tools (the `type`
field doesn't exist in that case and `anyOf` must be used).

This uses the `anyOf` field to pick the first non-null type to use.  
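
A minimal sketch of the failing shape: a tool parameter typed `Optional[...]` yields a JSON schema entry with `anyOf` rather than a plain `type`, which `convert_tools` now handles by picking the first non-null type.

```python
# Hedged sketch: a tool whose optional parameter produces an `anyOf` schema entry.
from typing import Optional

from autogen_core.tools import FunctionTool


def lookup(city: str, max_results: Optional[int] = None) -> str:
    """Look up a city; max_results is optional."""
    return f"{city}: {max_results}"


tool = FunctionTool(lookup, description="Look up a city.")
# Passing [tool] to an Ollama client previously failed in convert_tools; it now
# resolves the parameter type from the first non-null entry in anyOf.
```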

## Related issue number

Fixes #6323

## Checks

- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.

---------

Signed-off-by: Peter Jausovec <peter.jausovec@solo.io>
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2025-04-22 00:06:46 +00:00
Eric Zhu
71363a30ec
Add experimental notice to canvas (#6349) 2025-04-21 21:31:12 +00:00
Peter Jausovec
4d3e47a0f1
fix: ensure serialized messages are passed to LLMStreamStartEvent (#6344)
## Why are these changes needed?


I was getting the following exception when doing tool calls with
anthropic - the exception was coming from the `__str__` in
`LLMStreamStartEvent`.

```
('Object of type ToolUseBlock is not JSON serializable',)
```

The issue is that when creating the `LLMStreamStartEvent` in
`create_stream`, the messages weren't being serialized first.
## Related issue number

Signed-off-by: Peter Jausovec <peter.jausovec@solo.io>
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2025-04-21 11:43:21 -07:00
EeS
1de07ab293
Generalize Continuous SystemMessage merging via model_info["multiple_system_messages"] instead of startswith("gemini-") (#6345)
The current implementation of consecutive `SystemMessage` merging
applies only to models where `model_info.family` starts with
`"gemini-"`.

Since PR #6327 introduced the `multiple_system_messages` field in
`model_info`, we can now generalize this logic by checking whether the
field is explicitly set to `False`.

This change replaces the hardcoded family check with a conditional that
merges consecutive `SystemMessage` blocks whenever
`multiple_system_messages` is set to `False`.
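
A short sketch of the flag that now drives the merging (values other than `multiple_system_messages` are illustrative):

```python
# Hedged sketch: a model declared as not supporting multiple system messages, so
# consecutive SystemMessage blocks get merged regardless of model family.
from autogen_core.models import ModelInfo

model_info = ModelInfo(
    vision=False,
    function_calling=True,
    json_output=True,
    family="unknown",
    structured_output=True,
    multiple_system_messages=False,  # triggers merging of consecutive SystemMessage blocks
)
```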

Test cases that previously depended on the `"gemini"` model family have
been updated to reflect this configuration flag, and renamed accordingly
for clarity.

In addition, for consistency across conditional logic, a follow-up PR is
planned to refactor the Claude-specific transformation condition
(currently implemented via `create_args.get("model",
"unknown").startswith("claude-")`)
to instead use the existing `is_claude()`.

Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2025-04-21 11:30:35 -07:00
Leonardo Pinheiro
99aac24dd3
Agentchat canvas (#6215)

## Why are these changes needed?

This is an initial exploration of what could be a solution for #6214 .
It implements a simple text canvas using difflib and also a memory
component and a tool component for interacting with the canvas. Still in
early testing but would love feedback on the design.

## Related issue number


## Checks

- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.

---------

Co-authored-by: Leonardo Pinheiro <lpinheiro@microsoft.com>
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2025-04-21 18:03:29 +10:00
Eric Zhu
183dbb8dd1
Update version 0.5.4 (#6334) 2025-04-18 17:19:07 -07:00
Jay Prakash Thakur
e49ee48908
Bugfix: Azure AI Search Tool - fix query type (#6331) 2025-04-18 05:50:49 +00:00
EeS
b13264ac60
FEAT: adding multiple_system_message on model_info (#6327)
## Why are these changes needed?
`SocietyOfMindAgent` uses multiple system messages; however, many
clients/models do not support this.

## Related issue number
Related #6290

---------

Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2025-04-17 22:39:47 -07:00
masquerlin
4a5dd9eec9
Bring Docker Jupyter support from version 0.2 to version 0.4 (#6231)
This PR introduces a safer and more controllable execution environment
for LLM code execution in version 0.4 by enabling the use of Jupyter
inside a container. This enhancement addresses security concerns and
provides a more robust execution context. In particular, it allows:

- Isolation of code execution via containerized Jupyter environments.
- Persistent memory of variables and their values throughout the conversation.
- Memory of code execution results to support more advanced reasoning and follow-up tasks.

These improvements help build a more interactive and stateful LLM-agent
programming experience, especially for iterative code generation and
debugging scenarios.

## Related issue number

Open #6153

## Checks

- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.

---------

Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2025-04-17 21:25:57 +00:00
Jay Prakash Thakur
bb792b0734
Fix: Azure AI Search Tool Client Lifetime Management (#6316)
## Why are these changes needed?
This PR fixes a bug where the underlying Azure `SearchClient` was being
closed prematurely due to the use of `async with client:` inside the
tool's run method. This caused users to encounter "HTTP transport has
already been closed" errors.

## Related issue number

Closes #6308

## Checks

- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [X] I've made sure all auto checks have passed.

---------

Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2025-04-16 19:39:27 -07:00
Eric Zhu
629fb86e96
Add GPT4.1, o4-mini and o3 (#6314) 2025-04-17 01:10:14 +00:00
Eric Zhu
27b834f296
Make shared session possible for MCP tool (#6312)
Resolves #6232, #6198

This PR introduces an optional parameter `session` to `mcp_server_tools`
to support reuse of the same session.

```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.conditions import TextMentionTermination
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.ui import Console
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_ext.tools.mcp import StdioServerParams, create_mcp_server_session, mcp_server_tools


async def main() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o", parallel_tool_calls=False)  # type: ignore
    params = StdioServerParams(
        command="npx",
        args=["@playwright/mcp@latest"],
        read_timeout_seconds=60,
    )
    async with create_mcp_server_session(params) as session:
        await session.initialize()
        tools = await mcp_server_tools(server_params=params, session=session)
        print(f"Tools: {[tool.name for tool in tools]}")

        agent = AssistantAgent(
            name="Assistant",
            model_client=model_client,
            tools=tools,  # type: ignore
        )

        termination = TextMentionTermination("TERMINATE")
        team = RoundRobinGroupChat([agent], termination_condition=termination)
        await Console(
            team.run_stream(
                task="Go to https://ekzhu.com/, visit the first link in the page, then tell me about the linked page."
            )
        )


asyncio.run(main())
``` 

Based on discussion in this thread: #6284, we will consider
serialization and deserialization of MCP server tools when used in this
manner in a separate issue.

This PR also replaces the `json_schema_to_pydantic` dependency with
built-in utils.
2025-04-16 17:43:28 -07:00
Eric Zhu
8bd162f8fc
Update version to 0.5.3 (#6310) 2025-04-16 11:02:25 -07:00
cheng-tan
88dda88f53
Pin opentelemetry-proto version (#6305)
## Description
This PR pins opentelemetry-proto version to >=1.28.0, which uses
protobuf > 5.0, < 6.0 to generate protobuf files.

## Related issue number
Closes #6304
2025-04-15 09:04:01 -07:00
Sungjun.Kim
71a4eaedf9
Bump up json-schema-to-pydantic from v0.2.3 to v0.2.4 (#6300)
---------

Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2025-04-14 21:50:49 -07:00
Eric Zhu
3500170be1
update version 0.5.2 (#6296)
Update version
2025-04-14 18:03:44 -07:00
Ricky Loynd
92df415edf
Expose TCM TypedDict classes for apps to use (#6269)

An app can pass untyped dicts to set configuration options of various
Task-Centric Memory classes. But tools like pyright can complain about
the loose typing. This PR exposes 4 TypedDict classes that apps can
optionally use.


## Related issue number


## Checks

- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
2025-04-10 15:55:21 -07:00