3673 Commits

Author SHA1 Message Date
Eric Zhu
087ce0f4eb
Unpin uv version to use the latest version (#6713) 2025-06-26 06:28:08 +00:00
Victor Dibia
1183962a59
fix serialization issue in streamablehttp mcp tools (#6721)

The current `StreamableHttpServerParams` has `timedelta` values that are
not JSON serializable via `config.dump_component().model_dump_json()`.
This makes it unusable in UIs like AGS that expect configs to be
serializable to JSON.

```python
class StreamableHttpServerParams(BaseModel):
    """Parameters for connecting to an MCP server over Streamable HTTP."""

    type: Literal["StreamableHttpServerParams"] = "StreamableHttpServerParams"

    url: str  # The endpoint URL.
    headers: dict[str, Any] | None = None  # Optional headers to include in requests.
    timeout: timedelta = timedelta(seconds=30)  # HTTP timeout for regular operations.
    sse_read_timeout: timedelta = timedelta(seconds=60 * 5)  # Timeout for SSE read operations.
    terminate_on_close: bool = True
```

This PR uses floats for timeouts and casts them to timedelta as needed.

```python
class StreamableHttpServerParams(BaseModel):
    """Parameters for connecting to an MCP server over Streamable HTTP."""

    type: Literal["StreamableHttpServerParams"] = "StreamableHttpServerParams"

    url: str  # The endpoint URL.
    headers: dict[str, Any] | None = None  # Optional headers to include in requests.
    timeout: float = 30.0  # HTTP timeout for regular operations in seconds.
    sse_read_timeout: float = 300.0  # Timeout for SSE read operations in seconds.
    terminate_on_close: bool = True
```
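
A minimal standalone sketch (mirroring the updated class above) showing the float-based config serializing to JSON without a custom encoder:

```python
from typing import Any, Literal

from pydantic import BaseModel


# Stand-in for the updated params class shown above.
class StreamableHttpServerParams(BaseModel):
    type: Literal["StreamableHttpServerParams"] = "StreamableHttpServerParams"
    url: str
    headers: dict[str, Any] | None = None
    timeout: float = 30.0  # seconds
    sse_read_timeout: float = 300.0  # seconds
    terminate_on_close: bool = True


params = StreamableHttpServerParams(url="http://localhost:8000/mcp")
# With plain floats the config round-trips through JSON cleanly.
print(params.model_dump_json())
```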

## Checks

- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
2025-06-25 12:28:09 -07:00
Zen
9b8dc8d707
add activation group for workflow with multiple cycles (#6711)
## Why are these changes needed?
1. Problem
When the GraphFlowManager encounters cycles, it tracks the remaining
indegree counts for each node's activation. However, this tracking
mechanism has a flaw when dealing with cycles. When a node first enters
a cycle, the GraphFlowManager evaluates all remaining incoming edges,
including those that loop back to the origin node. If the activation
prerequisites are not satisfied at that moment, the workflow eventually
finishes prematurely because the _remaining counter never reaches zero,
preventing the select_speaker() method from selecting any agents for
execution.
2. Solution
Change the activation map to two layers to distinguish the remaining
counts inside a cycle from those outside it. Add an activation group and
policy property to each edge, compute the remaining map when the
GraphFlowManager is initialized, and check the remaining map per
activation group so that loop-back edges are not counted. A sketch of
the kind of cyclic flow this addresses is shown below.
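
For illustration, a hedged sketch of the kind of cyclic flow this change targets, built with the public `DiGraphBuilder`/`GraphFlow` API used elsewhere in this log; the activation-group bookkeeping itself is internal to `GraphFlowManager` and is not shown.

```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.conditions import MaxMessageTermination
from autogen_agentchat.teams import DiGraphBuilder, GraphFlow
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main():
    model_client = OpenAIChatCompletionClient(model="gpt-4.1-nano")
    writer = AssistantAgent("writer", model_client=model_client, system_message="Write or revise a short answer.")
    reviewer = AssistantAgent(
        "reviewer",
        model_client=model_client,
        system_message="Reply 'REVISE' if the answer needs work, otherwise reply 'APPROVE'.",
    )
    editor = AssistantAgent("editor", model_client=model_client, system_message="Polish the approved answer.")

    builder = DiGraphBuilder()
    builder.add_node(writer).add_node(reviewer).add_node(editor)
    builder.add_edge(writer, reviewer)
    # Loop-back edge forms a cycle: reviewer -> writer when a revision is requested.
    builder.add_edge(reviewer, writer, condition=lambda msg: "REVISE" in msg.to_model_text())
    builder.add_edge(reviewer, editor, condition=lambda msg: "REVISE" not in msg.to_model_text())

    team = GraphFlow(
        participants=[writer, reviewer, editor],
        graph=builder.build(),
        termination_condition=MaxMessageTermination(10),
    )
    async for event in team.run_stream(task="Explain what a DAG is in one paragraph."):
        print(event)


asyncio.run(main())
```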

## Related issue number

#6710

## Checks

- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
2025-06-25 12:20:04 +08:00
Eitan Yarmush
c5b893d3f8
add env var to disable runtime tracing (#6681)
A PR was recently merged to enable GenAI semantic-convention tracing;
however, when using component loading it is not currently possible to
disable the runtime tracing.

---------

Signed-off-by: Eitan Yarmush <eitan.yarmush@solo.io>
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2025-06-20 20:26:14 +08:00
alpha-xone
89927ca436
Add mem0 Memory Implementation (#6510)

## Why are these changes needed?

These changes are needed to expand AutoGen's memory capabilities with a
robust, production-ready integration with Mem0.ai.

This PR adds a new memory component for AutoGen that integrates with
Mem0.ai, providing a robust memory solution that supports both cloud and
local backends. The Mem0Memory class enables agents to store and
retrieve information persistently across conversation sessions.

## Key Features

- Seamless integration with Mem0.ai memory system
- Support for both cloud-based and local storage backends
- Robust error handling with detailed logging
- Full implementation of AutoGen's Memory interface
- Context updating for enhanced agent conversations
- Configurable search parameters for memory retrieval
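
As a rough orientation, a minimal sketch of using the new memory through AutoGen's `Memory` interface; the `autogen_ext.memory.mem0` import path and the no-argument constructor are assumptions, and cloud vs. local backend options are configured via constructor arguments per the description above.

```python
import asyncio

from autogen_core.memory import MemoryContent, MemoryMimeType
from autogen_ext.memory.mem0 import Mem0Memory  # import path assumed from the package layout


async def main():
    # Defaults assumed here; cloud or local backends are selected via constructor options.
    memory = Mem0Memory()
    await memory.add(MemoryContent(content="The user prefers metric units.", mime_type=MemoryMimeType.TEXT))
    results = await memory.query("What units does the user prefer?")
    for item in results.results:
        print(item.content)


asyncio.run(main())
```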

## Checks

- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.

---------

Co-authored-by: Victor Dibia <victordibia@microsoft.com>
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
Co-authored-by: Ricky Loynd <riloynd@microsoft.com>
2025-06-16 15:39:02 -07:00
Griffin Bassman
f101469e29
update: openai response api (#6622)
Co-authored-by: Victor Dibia <victordibia@microsoft.com>
2025-06-16 10:30:57 -07:00
Zen
cd15c0853c
fix: fix self-loop in workflow (#6677) 2025-06-15 23:00:14 -07:00
Victor Dibia
8c1236dd9e
fix: fix devcontainer issue with AGS (#6675)

fix devcontainer issue with AGS

## Related issue number

Closes #5715

## Checks

- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
2025-06-14 12:45:34 -07:00
Roy Shani
6e7415ecad
docs: Memory and RAG: add missing backtick for class reference (#6656)

## Why are these changes needed?
Update `Memory and RAG` doc to include missing backticks for class
references.

## Checks

- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
<img width="386" alt="image"
src="https://github.com/user-attachments/assets/16004b28-8fe9-476f-949f-ab4c7dcc9d56"
/>

Co-authored-by: Victor Dibia <victor.dibia@gmail.com>
2025-06-13 09:33:42 -07:00
Tejas Dharani
67ebeeda0e
Feature/chromadb embedding functions #6267 (#6648)
## Why are these changes needed?

This PR adds support for configurable embedding functions in
ChromaDBVectorMemory, addressing the need for users to customize how
embeddings are generated for vector similarity search. Currently,
ChromaDB memory is limited to default embedding functions, which
restricts flexibility for different use cases that may require specific
embedding models or custom embedding logic.

The implementation allows users to:
- Use different SentenceTransformer models for domain-specific
embeddings
- Integrate with OpenAI's embedding API for consistent embedding
generation
- Define custom embedding functions for specialized requirements
- Maintain backward compatibility with existing default behavior
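
For illustration, a hedged sketch of configuring a custom SentenceTransformer embedding function; the config class names and the `embedding_function_config` field are taken from this PR's description and may differ slightly from the released API.

```python
from autogen_ext.memory.chromadb import (
    ChromaDBVectorMemory,
    PersistentChromaDBVectorMemoryConfig,
    SentenceTransformerEmbeddingFunctionConfig,  # name assumed from the PR description
)

memory = ChromaDBVectorMemory(
    config=PersistentChromaDBVectorMemoryConfig(
        collection_name="notes",
        persistence_path="./chroma_db",
        # Swap in a domain-specific SentenceTransformer model; the default behavior
        # is unchanged when no embedding function config is given.
        embedding_function_config=SentenceTransformerEmbeddingFunctionConfig(model_name="all-MiniLM-L6-v2"),
    )
)
```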

## Related issue number

Closes #6267

## Checks

- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests corresponding to the changes introduced in this
PR.
- [x] I've made sure all auto checks have passed.

---------

Co-authored-by: Victor Dibia <victordibia@microsoft.com>
Co-authored-by: Victor Dibia <victor.dibia@gmail.com>
2025-06-13 09:06:15 -07:00
Eric Zhu
150ea0192d
Use yaml safe_load instead of load (#6672) 2025-06-12 15:18:33 -07:00
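
The difference in one snippet: `safe_load` only builds plain Python objects, whereas a permissive loader can construct arbitrary objects from tagged YAML in untrusted input.

```python
import yaml

text = "model: gpt-4.1\ntemperature: 0.2\n"

config = yaml.safe_load(text)  # builds only plain dicts, lists, and scalars
print(config["model"])

# By contrast, yaml.load with an unsafe loader can instantiate arbitrary Python
# objects from specially tagged YAML, which is why safe_load is preferred here.
```
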
Eric Zhu
e14fb8fc09
OTel GenAI Traces for Agent and Tool (#6653)
Add OTel GenAI traces:
- `create_agent`
- `invoke_agent`
- `execute_tool`

Introduces context manager helpers to create these traces. The helpers
also serve as instrumentation points for other instrumentation
libraries.
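
For orientation, a minimal OpenTelemetry sketch of what an `execute_tool` span can look like; the attribute names follow the (still-evolving) GenAI semantic conventions, and the helper context managers this PR adds are not spelled out here.

```python
from opentelemetry import trace

tracer = trace.get_tracer("autogen-tracing-example")

# Illustrative span for a tool execution; attribute names follow the GenAI semantic conventions.
with tracer.start_as_current_span("execute_tool calculator") as span:
    span.set_attribute("gen_ai.operation.name", "execute_tool")
    span.set_attribute("gen_ai.tool.name", "calculator")
    result = 2 + 2  # the real tool call would run here
    print(result)
```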

Resolves #6644
2025-06-12 15:07:47 -07:00
Aaron (Aron) Roh
892492f1d9
docs: fix shell command with escaped brackets in pip install (#6464)
Fix the installation command in
`python/samples/agentchat_chainlit/README.md` by properly escaping or
quoting package names with square brackets to prevent shell
interpretation errors in zsh and other shells.

## Why are these changes needed?

[AS-IS]

![image](https://github.com/user-attachments/assets/d6c8c95a-eb45-4206-a147-cd74aa6e949f)

[After fixed]

![image](https://github.com/user-attachments/assets/f2ffea4b-f993-4888-8559-c19e2b5c6eea)


## Checks

- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.

Co-authored-by: Victor Dibia <victordibia@microsoft.com>
2025-06-09 17:24:39 +00:00
peterychang
8a2582c541
SK KernelFunction from ToolSchemas (#6637)
## Why are these changes needed?

Only a subset of the available tools will be sent to SK.

## Related issue number

resolves https://github.com/microsoft/autogen/issues/6582

## Checks

- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
2025-06-06 10:15:56 -04:00
Eric Zhu
348bcb17a8
Update version to 0.6.1 (#6631) python-v0.6.1 2025-06-04 22:47:36 -07:00
Eric Zhu
c99aa7416d
Fix graph validation logic and add tests (#6630)
Follow up to #6629
2025-06-04 22:05:16 -07:00
Eric Zhu
1b32eb660d
Add list of function calls and results in ToolCallSummaryMessage (#6626)
To address the comment here:
https://github.com/microsoft/autogen/issues/6542#issuecomment-2922465639
2025-06-04 21:44:03 -07:00
Eric Zhu
4358dfd5c3
Fix bug in GraphFlow cycle check (#6629)
Resolve #6628
2025-06-04 21:35:27 -07:00
Eric Zhu
796e349cc3
update website to v0.6.0 (#6625) 2025-06-04 17:19:58 -07:00
Eric Zhu
16e1943c05
Update version to 0.6.0 (#6624) python-v0.6.0 2025-06-04 16:50:19 -07:00
Bobur Hakimiy
76f0a9762e
fix typo in the doc distributed-agent-runtime.ipynb (#6614) 2025-06-04 22:51:26 +00:00
Eric Zhu
b31b4e508d
Add callable condition for GraphFlow edges (#6623)
This PR adds callable as an option to specify conditional edges in
GraphFlow.

```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.conditions import MaxMessageTermination
from autogen_agentchat.teams import DiGraphBuilder, GraphFlow
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main():
    # Initialize agents with OpenAI model clients.
    model_client = OpenAIChatCompletionClient(model="gpt-4.1-nano")
    agent_a = AssistantAgent(
        "A",
        model_client=model_client,
        system_message="Detect if the input is in Chinese. If it is, say 'yes', else say 'no', and nothing else.",
    )
    agent_b = AssistantAgent("B", model_client=model_client, system_message="Translate input to English.")
    agent_c = AssistantAgent("C", model_client=model_client, system_message="Translate input to Chinese.")

    # Create a directed graph with conditional branching flow A -> B ("yes"), A -> C (otherwise).
    builder = DiGraphBuilder()
    builder.add_node(agent_a).add_node(agent_b).add_node(agent_c)
    # Create conditions as callables that check the message content.
    builder.add_edge(agent_a, agent_b, condition=lambda msg: "yes" in msg.to_model_text())
    builder.add_edge(agent_a, agent_c, condition=lambda msg: "yes" not in msg.to_model_text())
    graph = builder.build()

    # Create a GraphFlow team with the directed graph.
    team = GraphFlow(
        participants=[agent_a, agent_b, agent_c],
        graph=graph,
        termination_condition=MaxMessageTermination(5),
    )

    # Run the team and print the events.
    async for event in team.run_stream(task="AutoGen is a framework for building AI agents."):
        print(event)


asyncio.run(main())
```

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: ekzhu <320302+ekzhu@users.noreply.github.com>
2025-06-04 22:43:26 +00:00
Sungjun.Kim
9065c6f37b
feat: Support the Streamable HTTP transport for MCP (#6615)

## Why are these changes needed?

The MCP Python SDK has supported a new transport protocol named
`Streamable HTTP` since
[v1.8.0](https://github.com/modelcontextprotocol/python-sdk/releases/tag/v1.8.0)
last month. It supersedes the SSE transport, so AutoGen has to support
it as soon as possible.
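
A minimal sketch of using the new transport from AutoGen, assuming `StreamableHttpServerParams` and the existing `mcp_server_tools` helper are both exposed from `autogen_ext.tools.mcp`:

```python
import asyncio

from autogen_ext.tools.mcp import StreamableHttpServerParams, mcp_server_tools


async def main():
    # Connect to an MCP server exposed over Streamable HTTP (the URL is a placeholder).
    params = StreamableHttpServerParams(url="http://localhost:8000/mcp")
    tools = await mcp_server_tools(params)
    print([tool.name for tool in tools])


asyncio.run(main())
```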

## Related issue number

https://github.com/microsoft/autogen/discussions/6517

## Checks

- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.

---------

Co-authored-by: Victor Dibia <victordibia@microsoft.com>
Co-authored-by: Victor Dibia <victor.dibia@gmail.com>
2025-06-03 13:36:16 -07:00
peterychang
1858799fa6
Parse backtick-enclosed json (#6607)
## Why are these changes needed?

Some models enclose json in markdown code blocks
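
A standalone sketch of the idea (not the repository's exact parsing code): strip an optional markdown code fence before handing the payload to `json.loads`.

```python
import json
import re


def parse_model_json(text: str) -> dict:
    """Parse JSON that may be wrapped in a fenced markdown code block."""
    match = re.search(r"```(?:json)?\s*(.*?)\s*```", text, re.DOTALL)
    payload = match.group(1) if match else text
    return json.loads(payload)


print(parse_model_json('```json\n{"answer": 42}\n```'))
```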

## Related issue number

resolves https://github.com/microsoft/autogen/issues/6599, #6547

## Checks

- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.

---------

Co-authored-by: Victor Dibia <victordibia@microsoft.com>
2025-06-03 18:22:01 +00:00
Jay Prakash Thakur
d1d664b67e
Feature: Add OpenAIAgent backed by OpenAI Response API (#6418)
## Why are these changes needed?

This PR introduces a new `OpenAIAgent` implementation that uses the
[OpenAI Response
API](https://platform.openai.com/docs/guides/responses-vs-chat-completions)
as its backend. The OpenAI Assistant API will be deprecated in 2026, and
the Response API is its successor. This change ensures our codebase is
future-proof and aligned with OpenAI’s latest platform direction.

### Motivation
- **Deprecation Notice:** The OpenAI Assistant API will be deprecated in
2026.
- **Future-Proofing:** The Response API is the recommended replacement
and offers improved capabilities for stateful, multi-turn, and
tool-augmented conversations.
- **AgentChat Compatibility:** The new agent is designed to conform to
the behavior and expectations of `AssistantAgent` in AgentChat, but is
implemented directly on top of the OpenAI Response API.

### Key Changes
- **New Agent:** Adds `OpenAIAgent`, a stateful agent that interacts
with the OpenAI Response API.
- **Stateful Design:** The agent maintains conversation state, tool
usage, and other metadata as required by the Response API.
- **AssistantAgent Parity:** The new agent matches the interface and
behavior of `AssistantAgent` in AgentChat, ensuring a smooth migration
path.
- **Direct OpenAI Integration:** Uses the official `openai` Python
library for all API interactions.
- **Extensible:** Designed to support future enhancements, such as
advanced tool use, function calling, and multi-modal capabilities.
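
A hedged usage sketch; the import path and constructor arguments below are assumptions based on this description rather than the confirmed signature.

```python
import asyncio

from openai import AsyncOpenAI

from autogen_ext.agents.openai import OpenAIAgent  # import path assumed


async def main():
    client = AsyncOpenAI()  # reads OPENAI_API_KEY from the environment
    # Constructor arguments below are illustrative assumptions, not the confirmed signature.
    agent = OpenAIAgent(
        name="assistant",
        description="Agent backed by the OpenAI Response API.",
        client=client,
        model="gpt-4.1-mini",
        instructions="You are a helpful assistant.",
    )
    result = await agent.run(task="Summarize the Response API in one sentence.")
    print(result.messages[-1].to_text())


asyncio.run(main())
```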

### Migration Path
- Existing users of the Assistant API should migrate to the new
`OpenAIAgent` to ensure long-term compatibility.
- Documentation and examples will be updated to reflect the new agent
and its usage patterns.
### References
- [OpenAI: Responses vs. Chat
Completions](https://platform.openai.com/docs/guides/responses-vs-chat-completions)
- [OpenAI Deprecation
Notice](https://platform.openai.com/docs/guides/responses-vs-chat-completions#deprecation-timeline)
---

## Related issue number

Closes #6032 

## Checks

- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [X] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [X] I've made sure all auto checks have passed.

Co-authored-by: Griffin Bassman <griffinbassman@gmail.com>
2025-06-03 13:42:27 -04:00
Eric Zhu
b0c800255a
Use structured output for m1 orchestrator (#6540)
Use structured output when available in m1 orchestrator

Co-authored-by: Victor Dibia <victordibia@microsoft.com>
2025-06-02 12:11:24 -07:00
Griffin Bassman
cd49d71f2a
note: note selector_func is not serializable (#6609)
Resolves #6519

---------

Co-authored-by: Victor Dibia <victor.dibia@gmail.com>
2025-06-02 09:11:28 -07:00
Griffin Bassman
c683175120
feat: support multiple workbenches in assistant agent (#6529)
resolves: #6456
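
A minimal sketch, assuming the `workbench` parameter of `AssistantAgent` now accepts a list as the title suggests; the MCP server commands are placeholders.

```python
from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_ext.tools.mcp import McpWorkbench, StdioServerParams

model_client = OpenAIChatCompletionClient(model="gpt-4.1-nano")
fetch_workbench = McpWorkbench(StdioServerParams(command="uvx", args=["mcp-server-fetch"]))
time_workbench = McpWorkbench(StdioServerParams(command="uvx", args=["mcp-server-time"]))

agent = AssistantAgent(
    "assistant",
    model_client=model_client,
    workbench=[fetch_workbench, time_workbench],  # multiple workbenches instead of a single one
)
```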
2025-05-29 10:36:14 -04:00
Victor Dibia
6cadc7dc17
feat: bump ags version, minor fixes (#6603)

Update autogenstudio version. 

## Related issue number

Closes #6580

## Checks

- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
2025-05-28 15:49:28 -07:00
Abhijeetsingh Meena
955f4f9b9f
Add support for specifying the languages to parse from the CodeExecutorAgent response (#6592)
## Why are these changes needed?

The `CodeExecutorAgent` can generate code blocks in various programming
languages, some of which may not be supported by the executor
environment. Adding support for specifying languages to be parsed helps
users ignore unnecessary code blocks, preventing potential execution
errors.
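
A sketch of the intended usage; the `supported_languages` parameter name is taken from this description and may differ from the final API.

```python
from autogen_agentchat.agents import CodeExecutorAgent
from autogen_ext.code_executors.local import LocalCommandLineCodeExecutor

executor = LocalCommandLineCodeExecutor(work_dir="coding")
agent = CodeExecutorAgent(
    "code_executor",
    code_executor=executor,
    # Parse and execute only Python blocks; other languages in the response are ignored.
    supported_languages=["python"],
)
```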

## Related issue number
Closes #6471

## Checks

- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.

---------

Signed-off-by: Abhijeetsingh Meena <abhijeet040403@gmail.com>
Co-authored-by: Victor Dibia <victordibia@microsoft.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-05-28 14:03:46 -07:00
peterychang
03394a42c0
Default usage statistics for streaming responses (#6578)
## Why are these changes needed?

Enables usage statistics for streaming responses by default.

There is a similar bug in the AzureAI client. Theoretically adding the
parameter
```
model_extras={"stream_options": {"include_usage": True}}
```
should fix the problem, but I'm currently unable to test that workflow.

## Related issue number

closes https://github.com/microsoft/autogen/issues/6548

## Checks

- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
2025-05-28 14:32:04 -04:00
Victor Dibia
9bbcfa03ac
feat: [draft] update version of azureaiagent (#6581)

There have been updates to the Azure AI Agent (Foundry) SDK
(azure-ai-projects). This PR updates the AutoGen `AzureAIAgent`, which
wraps the Azure AI agent. Some of the changes:

- Update docstring samples to use `endpoint` (instead of the connection
string used previously)
- Update imports and arguments, e.g., from `azure.ai.agents`
- Add a guide in the ext docs showing a Bing Search grounding tool example.
<img width="1423" alt="image"
src="https://github.com/user-attachments/assets/0b7c8fa6-8aa5-4c20-831b-b525ac8243b7"
/>


## Related issue number

Closes #6601

## Checks

- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
2025-05-27 10:52:47 -07:00
Alex
53d384236c
Missing UserMessage import (#6583)

## Why are these changes needed?

The code block fails to execute without the import
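
For reference, the missing import and a minimal use of it (module path as used throughout the docs):

```python
from autogen_core.models import UserMessage

message = UserMessage(content="What is AutoGen?", source="user")
```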

## Related issue number

N/A

## Checks

- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.

Co-authored-by: Victor Dibia <victordibia@microsoft.com>
2025-05-23 12:50:59 -07:00
Sungjun.Kim
b8d02c9a20
feat: Add missing Anthropic models (Claude Sonnet 4, Claude Opus 4) (#6585)

## Related issue number

resolved https://github.com/microsoft/autogen/issues/6584

## Checks

- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
2025-05-23 12:15:12 -07:00
Sungjun.Kim
db125fbd2d
Add created_at to BaseChatMessage and BaseAgentEvent (#6557)
## Why are these changes needed?

I added a `created_at` field to both the BaseChatMessage and
BaseAgentEvent classes that stores the time these Pydantic model
instances are generated. Users can then rely on `created_at` to build a
customized external state-persistence layer for their use case.
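
A small sketch of the effect, assuming `created_at` is populated automatically at construction time:

```python
from autogen_agentchat.messages import TextMessage

msg = TextMessage(content="Hello", source="user")
# created_at is set when the message is instantiated, so an external persistence
# layer can order or filter messages by it.
print(msg.created_at.isoformat())
```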

## Related issue number


https://github.com/microsoft/autogen/discussions/6169#discussioncomment-13151540

## Checks

- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.

---------

Co-authored-by: Jack Gerrits <jackgerrits@users.noreply.github.com>
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2025-05-22 22:29:24 -07:00
peterychang
726e0be110
Add/fix windows install instructions (#6579)
## Why are these changes needed?

Install instructions for Windows are missing or incorrect

## Related issue number

closes https://github.com/microsoft/autogen/issues/6577

## Checks

- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.

Co-authored-by: Victor Dibia <victordibia@microsoft.com>
2025-05-22 19:36:56 +00:00
peterychang
0a81100f72
remove superfluous underline in the docs (#6573)
## Why are these changes needed?

unnecessary underline shown in the docs
before:
<img width="157" alt="image"
src="https://github.com/user-attachments/assets/37503e10-6b8a-48ee-ae63-d9a12fe3beb5"
/>

after:
<img width="151" alt="image"
src="https://github.com/user-attachments/assets/ea6c1851-3640-4f64-b8ff-91dcc11a6379"
/>


## Related issue number

closes https://github.com/microsoft/autogen/issues/6564

## Checks

- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
2025-05-21 21:02:39 +00:00
Björn Holtvogt
f7a45feca1
Add option to auto-delete temporary files in LocalCommandLineCodeExecutor (#6556)

## Why are these changes needed?

The `LocalCommandLineCodeExecutor` creates temporary files for each code
execution, which can accumulate over time and clutter the filesystem -
especially when a temporary working directory is not used. These changes
introduce an option to automatically delete temporary files after
execution, helping to prevent file system debris, reduce disk usage, and
ensure cleaner runtime environments in long-running or repeated
execution scenarios.
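
A sketch of how the option might be enabled; the `cleanup_temp_files` flag name is an assumption based on this description, not a confirmed parameter.

```python
from autogen_ext.code_executors.local import LocalCommandLineCodeExecutor

executor = LocalCommandLineCodeExecutor(
    work_dir="coding",
    cleanup_temp_files=True,  # hypothetical flag: delete generated temporary files after each run
)
```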

## Related issue number

Closes #4380

## Checks

- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
2025-05-21 21:47:47 +02:00
Damien Menigaux
c6c8693b2c
Add gemini 2.5 flash compatibility (#6574)

## Checks

- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
2025-05-21 19:38:03 +00:00
Eric Zhu
1578cd955f
Include all output to error output in docker jupyter code executor (#6572)
Currently, when an error occurs while executing code in the Docker
Jupyter executor, only the error output is returned.

This PR updates the handling of error output to also include the outputs
from previous code blocks that were executed successfully.

Test it with this script:

```python
from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_ext.code_executors.docker_jupyter import DockerJupyterCodeExecutor, DockerJupyterServer
from autogen_ext.tools.code_execution import PythonCodeExecutionTool
from autogen_agentchat.ui import Console
from autogen_core.code_executor import CodeBlock
from autogen_core import CancellationToken
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.conditions import TextMessageTermination

# Download the dataset from https://www.kaggle.com/datasets/nelgiriyewithana/top-spotify-songs-2023
# and place it the coding directory as `spotify-2023.csv`.
bind_dir = "./coding"

# Use a custom docker image with the Jupyter kernel gateway and data science libraries installed.
# Custom docker image: ds-kernel-gateway:latest -- you need to build this image yourself.
# Dockerfile:
# FROM quay.io/jupyter/docker-stacks-foundation:latest
# 
# # ensure that 'mamba' and 'fix-permissions' are on the PATH
# SHELL ["/bin/bash", "-o", "pipefail", "-c"]
# 
# # Switch to the default notebook user
# USER ${NB_UID}
# 
# # Install data-science packages + kernel gateway
# RUN mamba install --quiet --yes \
#     numpy \
#     pandas \
#     scipy \
#     matplotlib \
#     scikit-learn \
#     seaborn \
#     jupyter_kernel_gateway \
#     ipykernel \
#     && mamba clean --all -f -y \
#     && fix-permissions "${CONDA_DIR}" \
#     && fix-permissions "/home/${NB_USER}"
# 
# # Allow you to set a token at runtime (or leave blank for no auth)
# ENV TOKEN=""
# 
# # Launch the Kernel Gateway, listening on all interfaces,
# # with the HTTP endpoint for listing kernels enabled
# CMD ["python", "-m", "jupyter", "kernelgateway", \
#     "--KernelGatewayApp.ip=0.0.0.0", \
#     "--KernelGatewayApp.port=8888", \
#     # "--KernelGatewayApp.auth_token=${TOKEN}", \
#     "--JupyterApp.answer_yes=true", \
#     "--JupyterWebsocketPersonality.list_kernels=true"]
# 
# EXPOSE 8888
# 
# WORKDIR "${HOME}"

async def main():
    model = OpenAIChatCompletionClient(model="gpt-4.1")
    async with DockerJupyterServer(
        custom_image_name="ds-kernel-gateway:latest", 
        bind_dir=bind_dir,
    ) as server:
        async with DockerJupyterCodeExecutor(jupyter_server=server) as code_executor:
            await code_executor.execute_code_blocks([
                CodeBlock(code="import pandas as pd\ndf = pd.read_csv('/workspace/spotify-2023.csv', encoding='latin-1')", language="python"),
            ],
                cancellation_token=CancellationToken(),
            )
            tool = PythonCodeExecutionTool(
                executor=code_executor,
            )
            assistant = AssistantAgent(
                "assistant",
                model_client=model,
                system_message="You have access to a Jupyter kernel. Do not write all code at once. Write one code block, observe the output, and then write the next code block.",
                tools=[tool],
            )
            team = RoundRobinGroupChat(
                [assistant],
                termination_condition=TextMessageTermination(source="assistant"),
            )
            task = f"Datafile has been loaded as variable `df`. First preview dataset. Then answer the following question: What is the highest streamed artist in the dataset?"
            await Console(team.run_stream(task=task))

if __name__ == "__main__":
    import asyncio
    asyncio.run(main())
```

You can see the file encoding error gets recovered and the agent
successfully executes the query in the end.
2025-05-21 17:27:46 +00:00
GeorgiosEfstathiadis
113aca0b81
Allow implicit aws credential setting for AnthropicBedrockChatCompletionClient (#6561)

## Why are these changes needed?

Allows implicit AWS credential setting when using
AnthropicBedrockChatCompletionClient in an instance where you have
already logged into AWS with SSO and credentials are set as environment
variables.

## Related issue number

Closes #6560 

## Checks

- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.

Co-authored-by: Jack Gerrits <jackgerrits@users.noreply.github.com>
2025-05-21 12:49:03 -04:00
Ethan Yang
bd3b97a71f
Update state.ipynb, fix a grammar error (#6448)
Fix a grammar error, change "your" to "you".

## Checks

- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
2025-05-21 15:33:56 +00:00
SU_G
ec45dca291
fix: Prevent Async Event Loop from Running Indefinitely (#6530)

## Why are these changes needed?

## Prevent Async Event Loop from Running Indefinitely

### Description

This pull request addresses a bug in the `async send_message` function
in
python/packages/autogen-core/src/autogen_core/_single_threaded_agent_runtime.py,
where messages were being queued for recipients that were not
recognized. The current implementation sets an exception on the future
object when the recipient is not found but continues to enqueue the
message, potentially leading to inconsistent states.

### Changes Made

- Added a return statement immediately after setting the exception when
the recipient is not found. This ensures that the function exits early,
preventing further processing of the message and avoiding unnecessary
operations.
- This fix also addresses an issue where the asynchronous event loop
could potentially continue running indefinitely without terminating, due
to the future not being properly handled when an unknown recipient is
encountered.
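
A standalone sketch of the shape of the fix (not the runtime's actual code): fail the future and return before the message ever reaches the queue.

```python
import asyncio


async def send_message(recipient, message, known_recipients, queue: asyncio.Queue):
    future = asyncio.get_running_loop().create_future()
    if recipient not in known_recipients:
        future.set_exception(LookupError(f"Unknown recipient: {recipient}"))
        return future  # early return: do not enqueue the message for an unknown recipient
    queue.put_nowait((recipient, message, future))
    return future
```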

### Impact

This fix prevents messages from being sent to unknown recipients. It
also ensures that the event loop can terminate correctly without being
stuck in an indefinite state.

### Testing

Ensure that the function correctly handles cases where the recipient is
not recognized by returning the exception without enqueuing the message,
and verify that the event loop terminates as expected.


## Checks

- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.

Co-authored-by: Wanfeng Ge (葛万峰) <wf.ge@trip.com>
Co-authored-by: Jack Gerrits <jackgerrits@users.noreply.github.com>
2025-05-20 20:30:38 +00:00
Eric Zhu
f0b73441b6
Enable concurrent execution of agents in GraphFlow (#6545)
Support concurrent execution in `GraphFlow`:
- Updated `BaseGroupChatManager.select_speaker` to return either a
single speaker name or a list of speaker names, and added logic to
check for currently activated speakers and only proceed to select the
next speakers when all activated speakers have finished.
- Updated existing teams (e.g., `SelectorGroupChat`) to the new
signature, while still returning a single speaker in their
implementations.
- Updated `GraphFlow` to support multiple selected speakers.
- Refactored `GraphFlow` to use a queue and `update_message_thread`,
reducing the dictionary gymnastics.

Example: a fan out graph:

```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.teams import DiGraphBuilder, GraphFlow
from autogen_ext.models.openai import OpenAIChatCompletionClient

async def main():
    # Initialize agents with OpenAI model clients.
    model_client = OpenAIChatCompletionClient(model="gpt-4.1-nano")
    agent_a = AssistantAgent("A", model_client=model_client, system_message="You are a helpful assistant.")
    agent_b = AssistantAgent("B", model_client=model_client, system_message="Translate input to Chinese.")
    agent_c = AssistantAgent("C", model_client=model_client, system_message="Translate input to Japanese.")

    # Create a directed graph with fan-out flow A -> (B, C).
    builder = DiGraphBuilder()
    builder.add_node(agent_a).add_node(agent_b).add_node(agent_c)
    builder.add_edge(agent_a, agent_b).add_edge(agent_a, agent_c)
    graph = builder.build()

    # Create a GraphFlow team with the directed graph.
    team = GraphFlow(
        participants=[agent_a, agent_b, agent_c],
        graph=graph,
    )

    # Run the team and print the events.
    async for event in team.run_stream(task="Write a short story about a cat."):
        print(event)


asyncio.run(main())
```

Resolves:
#6541 
#6533
2025-05-19 21:47:55 +00:00
Justin Trantham
2eadef440e
Update README.md (#6506)
Was unable to get this to work without changing to
HumanInputMode.ALWAYS; with the Azure OpenAI model, the IDE would not
compile.


## Why are these changes needed?

Unable to compile until making this change.

## Related issue number


## Checks

- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
2025-05-19 21:08:55 +00:00
Zhenyu
8c5dcabf87
fix: CodeExecutorAgent prompt misuse (#6559)

## Why are these changes needed?

**Summary of Change:**
The instruction regarding code block format ("Python code should be
provided in python code blocks, and sh shell scripts should be provided
in sh code blocks for execution") will be moved from
`DEFAULT_AGENT_DESCRIPTION` to `DEFAULT_SYSTEM_MESSAGE`.

**Problem Solved:**
Ensure that the `model_client` receives the correct instructions for
generating properly formatted code blocks. Previously, the instruction
was only included in the agent's description and not passed to the
model_client, leading to potential issues in code generation. By moving
it to `DEFAULT_SYSTEM_MESSAGE`, the `model_client` will now accurately
format code blocks, improving the reliability of code generation.

## Related issue number

Closes #6558 

## Checks

- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
2025-05-19 20:49:22 +00:00
Stephen Toub
fffa61f639
Update to stable Microsoft.Extensions.AI release (#6552)

## Why are these changes needed?

Moves to the stable 9.5.0 release instead of a preview (for the core
Microsoft.Extensions.AI.Abstractions and Microsoft.Extensions.AI
packages).

## Checks

- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
2025-05-19 16:23:56 -04:00
Afzal Pawaskar
446da624ac
Fix missing tools in logs (#6532)
Fix for LLMCallEvent failing to log the "tools" passed to
BaseOpenAIChatCompletionClient in
autogen_ext.models.openai._openai_client.
This bug makes it hard to inspect why a certain tool was or was not
selected by the LLM, because the list of tools available to the LLM is
not present in the logs.

## Why are these changes needed?

Added "tools" to the LLMCallEvent to log tools available to the LLM as
these were being missed causing difficulties during debugging LLM tool
calls.

## Related issue number

https://github.com/microsoft/autogen/issues/6531

## Checks

- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.

---------

Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2025-05-15 17:12:19 -07:00
Chester Hu
9d297318b5
Add Llama API OAI compatible endpoint support (#6442)
## Why are these changes needed?

To add the latest support for using Llama API offerings with AutoGen.


## Checks

- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.

---------

Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2025-05-15 16:57:27 -07:00
ChrisBlaa
1eb7f93366
add tool_call_summary_msg_format_fct and test (#6460)
## Why are these changes needed?

This change introduces support for dynamic formatting of tool call
summary messages by allowing a user-defined
`tool_call_summary_format_fct`. Instead of relying solely on a static
string template, this function enables runtime generation of summary
messages based on the specific tool call and its result. This provides
greater flexibility and cleaner integration without introducing any
breaking changes.

### My Use Case / Problem

In my use case, I needed concise summaries for successful tool calls and
detailed messages for failures. The existing static summary string
didn't allow conditional formatting, which led to overly verbose success
messages or inconsistent failure outputs. This change allows customizing
summaries per result type, solving that limitation cleanly.
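
A sketch of such a formatter; the keyword argument name follows the `tool_call_summary_format_fct` mentioned above and may differ in the released API.

```python
from autogen_core import FunctionCall
from autogen_core.models import FunctionExecutionResult


def summarize_tool_call(call: FunctionCall, result: FunctionExecutionResult) -> str:
    # Concise summary on success, detailed message on failure.
    if result.is_error:
        return f"Tool {call.name} failed with arguments {call.arguments}: {result.content}"
    return f"Tool {call.name} succeeded."


# agent = AssistantAgent(..., tool_call_summary_format_fct=summarize_tool_call)
```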

## Related issue number

Closes #6426

## Checks

- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.

---------

Co-authored-by: Chris Wieczorek <Chris.Wieczorek@iav.de>
Co-authored-by: EeS <chiyoung.song@motov.co.kr>
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
Co-authored-by: Mehrsa Golestaneh <mehrsa.golestaneh@gmail.com>
Co-authored-by: Mehrsa Golestaneh <mgolestaneh@microsoft.com>
Co-authored-by: Zhenyu <81767213+Dormiveglia-elf@users.noreply.github.com>
2025-05-14 10:02:34 -07:00