## Problem
When using GraphFlow with a termination condition, the second task
execution would immediately terminate without running any agents. The
first task would run successfully, but subsequent tasks would skip all
agents and go directly to the stop agent.
This was demonstrated by the following issue:
```python
# First task runs correctly
result1 = await team.run(task="First task") # ✅ Works fine
# Second task fails immediately
result2 = await team.run(task="Second task") # ❌ Only user + stop messages
```
## Root Cause
The `GraphFlowManager` was not resetting its execution state when
termination occurred. After the first task completed:
1. The `_ready` queue was empty (all nodes had been processed)
2. The `_remaining` and `_enqueued_any` tracking structures remained in
"completed" state
3. The `_message_thread` retained history from the previous task
This left the graph in a "completed" state, causing subsequent tasks to
immediately trigger the stop agent instead of executing the workflow.
## Solution
Added an override of the `_apply_termination_condition` method in
`GraphFlowManager` to automatically reset the graph execution state when
termination occurs:
```python
async def _apply_termination_condition(
    self, delta: Sequence[BaseAgentEvent | BaseChatMessage], increment_turn_count: bool = False
) -> bool:
    # Call the base implementation first.
    terminated = await super()._apply_termination_condition(delta, increment_turn_count)
    # If terminated, reset the graph execution state and message thread for the next task.
    if terminated:
        self._remaining = {target: Counter(groups) for target, groups in self._graph.get_remaining_map().items()}
        self._enqueued_any = {n: {g: False for g in self._enqueued_any[n]} for n in self._enqueued_any}
        self._ready = deque([n for n in self._graph.get_start_nodes()])
        # Clear the message thread to start fresh for the next task.
        self._message_thread.clear()
    return terminated
```
This ensures that when a task completes (termination condition is met),
the graph is automatically reset to its initial state ready for the next
task.
## Testing
Added a comprehensive test case
`test_digraph_group_chat_multiple_task_execution` that validates the following (a simplified sketch of the scenario appears after the list):
- Multiple tasks can be run sequentially without explicit reset calls
- All agents are executed the expected number of times
- Both tasks produce the correct number of messages
- The fix works with various termination conditions
(`MaxMessageTermination`, `TextMentionTermination`)
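A simplified, self-contained sketch of that multi-task scenario (not the actual test; `ReplayChatCompletionClient` stands in for a real model client, and the assertion is illustrative):
```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.conditions import MaxMessageTermination
from autogen_agentchat.teams import DiGraphBuilder, GraphFlow
from autogen_ext.models.replay import ReplayChatCompletionClient


async def main() -> None:
    # Canned responses so the sketch runs without a real model endpoint.
    model_client = ReplayChatCompletionClient(["reply 1", "reply 2", "reply 3", "reply 4"])
    agent_a = AssistantAgent("A", model_client=model_client)
    agent_b = AssistantAgent("B", model_client=model_client)

    builder = DiGraphBuilder()
    builder.add_node(agent_a).add_node(agent_b)
    builder.add_edge(agent_a, agent_b)
    team = GraphFlow(
        participants=[agent_a, agent_b],
        graph=builder.build(),
        termination_condition=MaxMessageTermination(10),
    )

    # Two sequential runs with no explicit reset in between.
    result1 = await team.run(task="First task")
    result2 = await team.run(task="Second task")
    # With the fix, the second run executes all agents again instead of
    # terminating immediately, so both runs produce the same number of messages.
    assert len(result1.messages) == len(result2.messages)


asyncio.run(main())
```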
## Result
GraphFlow now works like SelectorGroupChat where multiple tasks can be
run sequentially without explicit resets between them:
```python
# Both tasks now work correctly
result1 = await team.run(task="First task") # ✅ 5 messages, all agents called
result2 = await team.run(task="Second task") # ✅ 5 messages, all agents called again
```
Fixes #6746.
---------
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: ekzhu <320302+ekzhu@users.noreply.github.com>
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
## Why are these changes needed?
1. Problem

When the `GraphFlowManager` encounters cycles, it tracks the remaining
indegree counts used to activate each node. This tracking mechanism has a
flaw for cycles: when a node first enters a cycle, the `GraphFlowManager`
evaluates all of its remaining incoming edges, including those that loop
back to the origin node. If the activation prerequisites cannot be
satisfied at that moment, the `_remaining` counter never reaches zero, so
`select_speaker()` cannot select any agents for execution and the
workflow ends prematurely.
2. Solution

Change the activation map to two layers to distinguish the remaining
edges inside a cycle from those outside it. Add activation group and
activation policy properties to each edge, compute the remaining map when
the `GraphFlowManager` is initialized, and check the remaining map
against the activation group so that loop-back edges are not counted, as
sketched below.
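A hedged sketch of the kind of cyclic graph this concerns and of the per-edge activation grouping described above. The `activation_group` and `activation_condition` parameter names follow this description and are assumptions about the API, not a reference; here B receives one edge from outside the cycle (A -> B) and one loop-back edge from inside it (C -> B):
```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.conditions import MaxMessageTermination
from autogen_agentchat.teams import DiGraphBuilder, GraphFlow
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main():
    model_client = OpenAIChatCompletionClient(model="gpt-4.1-nano")
    agent_a = AssistantAgent("A", model_client=model_client, system_message="Kick off the task.")
    agent_b = AssistantAgent("B", model_client=model_client, system_message="Draft an answer.")
    agent_c = AssistantAgent(
        "C", model_client=model_client, system_message="Review the draft; say 'retry' to request another round."
    )

    builder = DiGraphBuilder()
    builder.add_node(agent_a).add_node(agent_b).add_node(agent_c)
    builder.add_edge(agent_a, agent_b)  # Edge into the cycle from outside.
    builder.add_edge(agent_b, agent_c)
    # Loop-back edge C -> B. Keeping it in its own activation group means B's
    # first activation waits only on the edge from A, not on this edge.
    builder.add_edge(
        agent_c,
        agent_b,
        condition=lambda msg: "retry" in msg.to_model_text(),
        activation_group="loop_back",
        activation_condition="any",
    )
    graph = builder.build()

    team = GraphFlow(
        participants=[agent_a, agent_b, agent_c],
        graph=graph,
        termination_condition=MaxMessageTermination(10),
    )
    async for event in team.run_stream(task="Write a limerick about graphs."):
        print(event)


asyncio.run(main())
```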
## Related issue number
#6710
## Checks
- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
This PR adds callables as an option for specifying conditional edges in
GraphFlow.
```python
import asyncio
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.conditions import MaxMessageTermination
from autogen_agentchat.teams import DiGraphBuilder, GraphFlow
from autogen_ext.models.openai import OpenAIChatCompletionClient
async def main():
    # Initialize agents with OpenAI model clients.
    model_client = OpenAIChatCompletionClient(model="gpt-4.1-nano")
    agent_a = AssistantAgent(
        "A",
        model_client=model_client,
        system_message="Detect if the input is in Chinese. If it is, say 'yes', else say 'no', and nothing else.",
    )
    agent_b = AssistantAgent("B", model_client=model_client, system_message="Translate input to English.")
    agent_c = AssistantAgent("C", model_client=model_client, system_message="Translate input to Chinese.")

    # Create a directed graph with conditional branching flow A -> B ("yes"), A -> C (otherwise).
    builder = DiGraphBuilder()
    builder.add_node(agent_a).add_node(agent_b).add_node(agent_c)
    # Create conditions as callables that check the message content.
    builder.add_edge(agent_a, agent_b, condition=lambda msg: "yes" in msg.to_model_text())
    builder.add_edge(agent_a, agent_c, condition=lambda msg: "yes" not in msg.to_model_text())
    graph = builder.build()

    # Create a GraphFlow team with the directed graph.
    team = GraphFlow(
        participants=[agent_a, agent_b, agent_c],
        graph=graph,
        termination_condition=MaxMessageTermination(5),
    )

    # Run the team and print the events.
    async for event in team.run_stream(task="AutoGen is a framework for building AI agents."):
        print(event)

asyncio.run(main())
```
---------
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: ekzhu <320302+ekzhu@users.noreply.github.com>
## Why are these changes needed?
I added a `created_at` field to both the `BaseChatMessage` and
`BaseAgentEvent` classes that stores the time these Pydantic model
instances are created. Users can then use `created_at` to build a
customized external state-persistence layer for their use case.
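For example, a minimal sketch of reading the new field (assuming it is populated automatically when a message is constructed):
```python
from autogen_agentchat.messages import TextMessage

# created_at is set when the Pydantic model instance is created.
msg = TextMessage(source="user", content="Hello")
print(msg.created_at.isoformat())

# An external persistence layer can then order or key records by created_at,
# e.g. sorted(messages, key=lambda m: m.created_at).
```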
## Related issue number
https://github.com/microsoft/autogen/discussions/6169#discussioncomment-13151540
## Checks
- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
---------
Co-authored-by: Jack Gerrits <jackgerrits@users.noreply.github.com>
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
Support concurrent execution in `GraphFlow`:
- Updated `BaseGroupChatManager.select_speaker` to return a union of a
single speaker name string or a list of speaker name strings, and added
logic to check the currently activated speakers and only select the next
speakers once all activated speakers have finished.
- Updated existing teams (e.g., `SelectorGroupChat`) with the new
signature, while still returning a single speaker in their
implementations.
- Updated `GraphFlow` to support multiple speakers selected.
- Refactored `GraphFlow` to reduce dictionary gymnastics by using a queue
and updating via `update_message_thread`.
Example: a fan-out graph:
```python
import asyncio
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.teams import DiGraphBuilder, GraphFlow
from autogen_ext.models.openai import OpenAIChatCompletionClient
async def main():
    # Initialize agents with OpenAI model clients.
    model_client = OpenAIChatCompletionClient(model="gpt-4.1-nano")
    agent_a = AssistantAgent("A", model_client=model_client, system_message="You are a helpful assistant.")
    agent_b = AssistantAgent("B", model_client=model_client, system_message="Translate input to Chinese.")
    agent_c = AssistantAgent("C", model_client=model_client, system_message="Translate input to Japanese.")

    # Create a directed graph with fan-out flow A -> (B, C).
    builder = DiGraphBuilder()
    builder.add_node(agent_a).add_node(agent_b).add_node(agent_c)
    builder.add_edge(agent_a, agent_b).add_edge(agent_a, agent_c)
    graph = builder.build()

    # Create a GraphFlow team with the directed graph.
    team = GraphFlow(
        participants=[agent_a, agent_b, agent_c],
        graph=graph,
    )

    # Run the team and print the events.
    async for event in team.run_stream(task="Write a short story about a cat."):
        print(event)

asyncio.run(main())
```
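In the fan-out above, B and C are both selected after A speaks, so their translations are produced concurrently, and the next selection happens only after all activated speakers have finished, per the updated `select_speaker` contract described above.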
Resolves:
#6541, #6533
## Why are these changes needed?
❗ Before

Previously, `GraphFlow.__init__()` modified `inner_chats` and
`termination_condition` for internal execution logic (e.g., constructing
`_StopAgent` or composing `OrTerminationCondition`). However, these
modified values were also used during `dump_component()`, so the
serialized config no longer matched the original inputs.

As a result:
1. `dump_component()` → `load_component()` → `dump_component()` produced
non-idempotent configs.
2. Internal-only constructs like `_StopAgent` were mistakenly serialized,
even though they should exist only at runtime.

✅ After

This patch changes the behavior to:
- Store the original `inner_chats` and `termination_condition` as-is at
initialization.
- During `to_config()`, serialize only the original, unmodified versions.
- Avoid serializing `_StopAgent` or other dynamically built agents.
- Ensure deserialization (`from_config()`) produces a logically
equivalent object without additional nesting or duplication.

This ensures that:
- The `GraphFlow.dump_component()` → `load_component()` round trip
produces consistent, minimal configs.
- Internal execution logic and serialized component structure are
properly separated.
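A minimal sketch of the round-trip property this enables (it assumes an already-constructed GraphFlow `team`):
```python
from autogen_agentchat.teams import GraphFlow

# Serialize, reload, and serialize again: the two configs should now match,
# with no _StopAgent or composed termination condition leaking into them.
config_1 = team.dump_component()
team_2 = GraphFlow.load_component(config_1)
config_2 = team_2.dump_component()
assert config_1 == config_2
```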
## Related issue number
Closes #6431
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
Closes #4623
### Add Directed Graph-based Group Chat Execution Engine
(`DiGraphGroupChat`)
This PR introduces a new graph-based execution framework for Autogen
agent teams, located under `autogen_agentchat/teams/_group_chat/_graph`.
**Key Features:**
- **`DiGraphGroupChat`**: A new group chat implementation that executes
agents based on a user-defined directed graph (DAG or cyclic with exit
conditions).
- **`AGGraphBuilder`**: A fluent builder API to programmatically
construct graphs.
- **`MessageFilterAgent`**: A wrapper to restrict which messages an agent
sees before invocation, supporting per-source and per-position filtering
(see the sketch below).
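A hedged sketch of the message-filtering wrapper mentioned above; the config class names (`MessageFilterConfig`, `PerSourceFilter`), the import path, and the pre-existing `agent_b` are assumptions for illustration and may differ from this PR's initial layout:
```python
from autogen_agentchat.agents import MessageFilterAgent, MessageFilterConfig, PerSourceFilter

# Wrap agent_b so that, before each invocation, it sees only the last
# message produced by agent "A" (per-source, per-position filtering).
filtered_b = MessageFilterAgent(
    name="B",
    wrapped_agent=agent_b,
    filter=MessageFilterConfig(per_source=[PerSourceFilter(source="A", position="last", count=1)]),
)
```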
**Capabilities:**
- Supports sequential, parallel, conditional, and cyclic workflows.
- Enables fine-grained control over both execution order and message
context.
- Compatible with existing Autogen agents and runtime interfaces.
**Tests:**
- Located in `autogen_agentchat/tests/test_group_chat_graph.py`
- Includes unit and integration tests covering:
- Graph validation
- Execution paths
- Conditional routing
- Loops with exit conditions
- Message filtering
Let me know if anything needs refactoring or if you'd like the
components split further.
---------
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
Co-authored-by: Leonardo Pinheiro <leosantospinheiro@gmail.com>