256 Commits

Author SHA1 Message Date
EeS
0cd3ff46fa
FIX: Anthropic and Gemini can take multiple system messages (#6118)
The Anthropic SDK cannot take multiple system messages, but some autogen agents (e.g. SocietyOfMindAgent) produce multiple system messages.

Gemini via the OpenAI SDK does not raise an error, but multiple system messages do not work either: only the last one takes effect.

So this change simply merges multiple system messages into one in these cases, as sketched below.
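
A minimal sketch of the merging behaviour, as a hypothetical helper (the real change lives inside the Anthropic and Gemini-via-OpenAI client paths):

```python
from autogen_core.models import LLMMessage, SystemMessage


def merge_system_messages(messages: list[LLMMessage]) -> list[LLMMessage]:
    """Collapse all SystemMessage entries into a single leading one."""
    system_contents = [m.content for m in messages if isinstance(m, SystemMessage)]
    other_messages = [m for m in messages if not isinstance(m, SystemMessage)]
    if not system_contents:
        return other_messages
    return [SystemMessage(content="\n".join(system_contents)), *other_messages]
```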

## Related issue number
Closes #6116
Closes #6117


---------

Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2025-03-28 09:05:54 -07:00
Stuart Leeks
c24eba6ae1
Add suppress_result_output to ACADynamicSessionsCodeExecutor initializer (#6130)
When using the `ACADynamicSessionsCodeExecutor`, the output includes not only the stdout from the execution but also the `results` property from the call to dynamic sessions. In some situations, such as when the executed code saves a file, this is included in the result:

```console
Plot saved as 'results_by_date.png'
{'type': 'image', 'format': 'png', 'base64_data': 'iVBORw0KGgoAAAANSUhEUgAAA90AAAJOCAYAAACqS2TfAAAAOXRFWHRTb2Z0d2FyZQ...
```

In some situations, this additional output is not desirable:
- when displaying the code output to a user - in this case, the stdout
content is dwarfed by the base64 encoded file content
- when an LLM agent is going to evaluate the code output to determine
next steps - in this case, the base64 content will be included in the
message history sent to the LLM increasing the prompt token cost

To handle these cases, this PR adds a new (optional) argument to the `ACADynamicSessionsCodeExecutor` constructor that allows suppressing the result content (defaulting to `False` to preserve the current behaviour); see the usage sketch below.

(from #6042)
Closes #6042 
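
A usage sketch, assuming the parameter is named `suppress_result_output` as in the title (the endpoint value is a placeholder):

```python
from azure.identity import DefaultAzureCredential
from autogen_ext.code_executors.azure import ACADynamicSessionsCodeExecutor

executor = ACADynamicSessionsCodeExecutor(
    pool_management_endpoint="<your-pool-management-endpoint>",
    credential=DefaultAzureCredential(),
    suppress_result_output=True,  # drop the `results` payload, keep stdout
)
```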


Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2025-03-28 01:48:18 +00:00
EeS
2754eda611
FEAT: Add missing OpenAI-compatible models (GPT-4.5, Claude models) (#6120)
This PR adds missing model entries for OpenAI-compatible endpoints,
including gpt-4.5-turbo, gpt-4.5-turbo-preview, and claude-3.5-sonnet.
This improves coverage and avoids potential fallback or mismatch issues
when initializing clients.
2025-03-27 18:39:22 -07:00
Griffin Bassman
7487687cdc
[feat] token-limited message context (#6087) 2025-03-27 13:59:27 -07:00
Eric Zhu
29485ef85b
Fix MCP tool bug by dropping unset parameters from input (#6125)
Resolves #6096

Additionally: made sure MCP errors are formatted correctly, added unit tests for MCP servers, and upgraded the mcp version.
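
The core of the fix, as a hedged sketch (a hypothetical helper; the real change lives in the MCP tool adapter):

```python
from typing import Any


def drop_unset_arguments(arguments: dict[str, Any]) -> dict[str, Any]:
    """Drop arguments the model left unset so MCP servers don't receive explicit nulls."""
    return {name: value for name, value in arguments.items() if value is not None}
```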
2025-03-27 13:22:06 -07:00
Jay Prakash Thakur
b5ff7ee355
feat(ollama): Add thought field support and fix LLM control parameters (#6126) 2025-03-26 23:14:26 -07:00
Eric Zhu
025490a1bd
Use class hierarchy to organize AgentChat message types and introduce StructuredMessage type (#5998)
This PR refactored `AgentEvent` and `ChatMessage` union types to
abstract base classes. This allows for user-defined message types that
subclass one of the base classes to be used in AgentChat.

To support a unified interface for working with the messages, the base
classes added abstract methods for:
- Convert content to string
- Convert content to a `UserMessage` for model client
- Convert content for rendering in console.
- Dump into a dictionary
- Load and create a new instance from a dictionary

This way, all agents such as `AssistantAgent` and `SocietyOfMindAgent`
can utilize the unified interface to work with any built-in and
user-defined message type.

This PR also introduces a new message type, `StructuredMessage` for
AgentChat (Resolves #5131), which is a generic type that requires a
user-specified content type.

You can create a `StructuredMessage` as follows:

```python
from typing import List

from pydantic import BaseModel
from autogen_agentchat.messages import StructuredMessage


class MessageType(BaseModel):
    data: str
    references: List[str]


message = StructuredMessage[MessageType](content=MessageType(data="data", references=["a", "b"]), source="user")

# message.content is of type `MessageType`.
```

This PR addresses the receiving side of this message type. To produce this message type from `AssistantAgent`, the work continues in #5934.

Added unit tests to verify this message type works with agents and
teams.
2025-03-26 16:19:52 -07:00
Jack Gerrits
8a5ee3de6a
Add autogen user agent to azure openai requests (#6124) 2025-03-26 16:01:42 -07:00
Liu Jia
ce92926e78
add read timeout for create_mcp_server_session (#6080)
Closes #6031 

---------

Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2025-03-26 17:51:09 +00:00
y26s4824k264
0bec835d59
Emit <think> and </think> around reasoning chunks from model_extras in choices.delta
This makes the behavior of the hosted R1 model the same as a locally hosted R1 model.
Addresses: #5989
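
A hedged sketch of the wrapping logic (hypothetical shape; the real change reads reasoning content from `model_extras` on each streamed delta):

```python
from typing import Iterable, Iterator, Tuple


def wrap_reasoning_chunks(chunks: Iterable[Tuple[str, bool]]) -> Iterator[str]:
    """Yield streamed text, fencing contiguous reasoning chunks in <think> tags."""
    in_think = False
    for text, is_reasoning in chunks:
        if is_reasoning and not in_think:
            yield "<think>"
            in_think = True
        elif not is_reasoning and in_think:
            yield "</think>"
            in_think = False
        yield text
    if in_think:
        yield "</think>"
```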
---------

Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2025-03-25 16:17:53 -07:00
Victor Dibia
9a0588347a
add utf encoding in websurfer read file (#6094)

## Why are these changes needed?

Add UTF-8 encoding to file reading. Without it, the default system encoding is used; on Windows machines this can be an arbitrary local encoding, causing errors.

```python
import os

with open(
    os.path.join(os.path.abspath(os.path.dirname(__file__)), "page_script.js"), "rt", encoding="utf-8"
) as fh:
    page_script = fh.read()  # body completed for illustration; the fix is the explicit encoding
```


## Related issue number


Closes #6093


2025-03-25 09:01:27 -07:00
Jay Prakash Thakur
7047fb8b8d
Add support for thought field in AzureAIChatCompletionClient (#6062)
Support for the thought process in tool calls was added to `OpenAIChatCompletionClient`, allowing additional text produced by a model alongside tool calls to be preserved in the `thought` field of `CreateResult`. This PR extends the same functionality to `AzureAIChatCompletionClient` for consistency across model clients.
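
A hedged usage sketch (the helper and prompt are illustrative; `thought` on `CreateResult` is the field this PR preserves):

```python
from autogen_core.models import ChatCompletionClient, UserMessage


async def show_thought(client: ChatCompletionClient, tools: list) -> None:
    """Print any text the model produced alongside its tool calls."""
    result = await client.create(
        [UserMessage(content="What's the weather in Paris?", source="user")],
        tools=tools,
    )
    if isinstance(result.content, list):  # the model returned tool calls
        print("thought:", result.thought)
```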

#5650
Co-authored-by: Jay Prakash Thakur <jathakur@microsoft.com>
2025-03-24 17:33:10 -07:00
EeS
bca4d7e82f
FIX: Anthropic multimodal(Image) message for Anthropic >= 0.48 aware (#6054)
## Why are these changes needed?
This PR fixes a `TypeError: Cannot instantiate typing.Union` that occurs when using the `MultimodalWebSurfer` agent with Anthropic models. The error was caused by incorrectly using `typing.Union` as a class constructor instead of as a type hint within the `_anthropic_client.py` file. The fix uses `typing.Union` only within type hints and uses the correct `Base64ImageSourceParam` type. It also updates the `pyproject.toml` dependency.

## Related issue number
Closes #6035 

## Checks

- [ ] I've included any doc changes needed for <https://microsoft.github.io/autogen/>. See <https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes introduced in this PR.
- [x] I've made sure all auto checks have passed.

---------

Co-authored-by: Victor Dibia <victordibia@microsoft.com>
2025-03-22 00:46:55 -07:00
Hussein Mozannar
fef953e062
Fix bytes in markdown converter playwright (#6044)
Fix error:

TypeError: Input stream must be opened in bytes mode, not in text mode.

The Markdown converter takes a binary stream.
2025-03-20 11:53:53 -07:00
Eric Zhu
46add11ec7
Move start() and stop() as interface methods for CodeExecutor (#6040)
Resolves #6015
2025-03-20 10:00:52 -07:00
afourney
ecdb74b1ef
Limit what files and folders FileSurfer can access. (#6024)
Optionally limit what files and folders FileSurfer can access
(constraining it to a subtree of the FS).

This is not a replacement for Docker sandboxing, but can be used in
conjunction with sandboxing to help prevent FileSurfer from accessing
sensitive files.
2025-03-20 08:35:09 -07:00
EdwinInnovation
3498c3ccda
Fix issue #5946: changed code for ACASessionsExecutor _ensure_access_token to be https://dynamicsessions.io/.default (#6001)
## Why are these changes needed?

When I create an ACASessionsExecutor instance and execute some code, authentication fails with the default scope. It always returns:

"ClientAuthenticationError: Authentication failed: AADSTS70011: The provided request must include a 'scope' input parameter. The provided value for the input parameter 'scope' is not valid. The scope https://dynamicsessions.io/ is not valid. Trace ID: d75efa58-8be7-44ef-8839-aacfdc850600 Correlation ID: a8e4d859-92da-4fbe-a8e0-05116323ab55 Timestamp: 2025-03-14 14:15:09Z"

After changing the scope in `_ensure_access_token` to "https://dynamicsessions.io/.default" rather than "https://dynamicsessions.io/", it worked. The essence of the fix is sketched below.

## Related issue number

 issue #5946

## Checks

- [x] I've included any doc changes needed for <https://microsoft.github.io/autogen/>. See <https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes introduced in this PR.
- [x] I've made sure all auto checks have passed.

Co-authored-by: edwinwu <edwin@Edwin-MBA.local>
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2025-03-20 07:26:14 +00:00
Eric Zhu
9103359ef4
add cancellation support to docker executor (#6027)
Resolves #6013
2025-03-19 21:29:01 -07:00
Eric Zhu
69292e6ff4
Update minimum openai version to 1.66.5 as import path changed (#5996)
Resolves #5994

OpenAI moved `openai.types.beta.vector_store` to `openai.types.vector_store`.
https://github.com/openai/openai-python/compare/v1.65.5...v1.66.0

Also fixed unit tests and used a parameterized fixture to run all scenarios.
2025-03-19 05:20:04 +00:00
Eric Zhu
a8cef327f1
Support json schema for response format type in OpenAIChatCompletionClient (#5988)
Resolves #5982

This PR adds support for `json_schema` as a `response_format` type in `OpenAIChatCompletionClient`. This is necessary because it allows the client to be serialized along with the schema. If the user passes `response_format=SomeBaseModel`, the client cannot be serialized.

Usage:

```python
# Structured output response, with a pre-defined JSON schema.

OpenAIChatCompletionClient(
    ...,
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "name of the schema, must be an identifier.",
            "description": "description for the model.",
            # You can convert a Pydantic (v2) model to JSON schema
            # using the `model_json_schema()` method.
            "schema": "<the JSON schema itself>",
            # Whether to enable strict schema adherence when
            # generating the output. If set to true, the model will
            # always follow the exact schema defined in the
            # `schema` field. Only a subset of JSON Schema is
            # supported when `strict` is `true`.
            # To learn more, read
            # https://platform.openai.com/docs/guides/structured-outputs.
            "strict": False,  # or True
        },
    },
)
```
2025-03-18 03:14:42 +00:00
Federico Villa
09d8d344a2
Filter invalid parameters in Ollama client requests (#5983)
Remove unrecognized parameters in Ollama API calls.
---------

Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2025-03-17 21:09:26 +00:00
ZakWork
685142cf51
Fix R1 reasoning parser for openai client (#5961)
R1 reasoning tokens from the hosted R1 model were not parsed correctly by the OpenAI client.

Resolves #5941

---------

Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2025-03-17 10:09:41 -07:00
Eric Zhu
aba41d74d3
feat: add structured output to model clients (#5936) 2025-03-15 07:58:13 -07:00
Eric Zhu
9bde5ef911
Improve docs for model clients (#5952)
Address questions related to logging of model client calls and reduce
redundant docs.
2025-03-15 02:28:15 +00:00
Eric Zhu
5f9e37dc27
Upgrade llama cpp to 0.3.8 to fix windows related error (#5948)
Use the latest version of llama-cpp-python to ensure `uv sync --all-extras` doesn't fail on Windows.

reference:
https://github.com/microsoft/autogen/pull/5942#issuecomment-2724478534
2025-03-14 12:20:42 -07:00
Nissa Seru
0276aac8fb
Fix poe check on Windows (#5942)
`poe check` fails on main on Windows due to a combination of line-ending mismatches, Unix-specific commands, and Windows-specific `asyncio` behavior. This PR attempts to fix this (so that `poe check` on a freshly-pulled `main` passes on Windows 11).
2025-03-14 11:44:38 -07:00
Victor Dibia
b8b7a2db3a
Ensure SecretStr is cast to str on load for model clients (#5947)
Currently we have the SecretStr type for model clients to promote security best practices.

- When we dump_component, keys are serialized as SecretStr.
- When we load_component, the SecretStr type is passed to the client in the api_key field. This causes type problems, as the clients expect a string type.

This PR updates the from_config method for model clients to ensure we get the value from SecretStr, as sketched below.
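
A hedged sketch of the unwrapping in `from_config` (hypothetical helper):

```python
from pydantic import SecretStr


def unwrap_api_key(api_key: SecretStr | str | None) -> str | None:
    """Return the raw key string, unwrapping SecretStr when present."""
    if isinstance(api_key, SecretStr):
        return api_key.get_secret_value()
    return api_key
```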

Closes #5944
2025-03-14 10:15:21 -07:00
Eric Zhu
a4b6372813
Use SecretStr type for api key (#5939)
To prevent accidental export of API keys
2025-03-13 21:29:19 -07:00
afourney
84c622a4cc
Fixes an error that can occur when listing the contents of a directory. (#5938)
Fixes issues like the following trace:

```
packages/autogen_ext/agents/file_surfer/_markdown_file_browser.py", line 39, in __init__
    self.set_path(self._base_path)
  File "/home/hmozannar/webby/.venv/lib/python3.12/site-packages/autogen_ext/agents/file_surfer/_markdown_file_browser.py", line 67, in set_path
    self._open_path(path)
  File "/home/hmozannar/webby/.venv/lib/python3.12/site-packages/autogen_ext/agents/file_surfer/_markdown_file_browser.py", line 210, in _open_path
    io.StringIO(self._fetch_local_dir(path)), file_extension=".txt"
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/hmozannar/webby/.venv/lib/python3.12/site-packages/autogen_ext/agents/file_surfer/_markdown_file_browser.py", line 248, in _fetch_local_dir
    mtime = datetime.datetime.fromtimestamp(os.path.getmtime(full_path)).strftime("%Y-%m-%d %H:%M")
                                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "<frozen genericpath>", line 67, in getmtime
PermissionError: [Errno 13] Permission denied: '/home/hmozannar/webby/autogen-studio/frontend/readme.txt'
```
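
A hedged sketch of the guard (a hypothetical helper; the actual fix lives in `_fetch_local_dir`):

```python
import datetime
import os


def safe_mtime(full_path: str) -> str:
    """Format an entry's mtime, or return an empty string if it can't be read."""
    try:
        return datetime.datetime.fromtimestamp(os.path.getmtime(full_path)).strftime("%Y-%m-%d %H:%M")
    except OSError:  # includes PermissionError
        return ""
```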
2025-03-13 20:40:30 -07:00
Nissa Seru
6ae098fe49
bugfix: Workaround for pydantic/#7713 (#5893)
Use of `SKChatCompletionAdapter` reliably fails with "'MockValSer'
object cannot be converted to 'SchemaSerializer'"; can repro with this
example:
https://microsoft.github.io/autogen/stable/user-guide/core-user-guide/components/model-clients.html#semantic-kernel-adapter

This appears to be related to https://github.com/pydantic/pydantic/issues/7713; the commit uses the workaround from https://github.com/pydantic/pydantic/issues/7713#issuecomment-2604574418.

## Why are these changes needed?

This unblocks use of the Semantic Kernel integration by addressing the
above-referenced error, enabling the integration to perform as expected.

## Related issue number

N/A, see https://github.com/pydantic/pydantic/issues/7713 for context,
though.

## Checks

- [x] I've included any doc changes needed for <https://microsoft.github.io/autogen/>. See <https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to build and test documentation locally.
  - None needed; internal-only change.
- [ ] I've added tests (if relevant) corresponding to the changes introduced in this PR.
  - None added; this works on my machine, but I'm not clear on the root cause of the issue and have no strong opinion on whether this is the ideal long-term fix. Simply leaning towards PR'ing a tentative fix instead of raising an issue.
- [ ] I've made sure all auto checks have passed.
  - I am not familiar with these, but assume they will be run during CI.

---------

Co-authored-by: Leonardo Pinheiro <leosantospinheiro@gmail.com>
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2025-03-13 18:23:01 +00:00
afourney
aefa66a3ce
Update MarkItDown. (#5920)
Update FileSurfer and WebSurfer to use the latest MarkItDown package.
2025-03-12 21:17:25 -07:00
Eric Zhu
4d8b97eed1
Fix logging error with ollama client (#5917)
Resolves #5910

Co-authored-by: peterychang <49209570+peterychang@users.noreply.github.com>
2025-03-12 16:59:43 -04:00
Eric Zhu
bb8439c7bd
update version to v0.4.9 (#5903) 2025-03-11 19:35:22 -07:00
Eitan Yarmush
817f728d04
add LLMStreamStartEvent and LLMStreamEndEvent (#5890)
These changes are needed because there is currently no way to get
logging information about Streaming LLM requests/responses.

I decided to put the StreamStart event AFTER the first chunk so there
aren't false positives about connections/auth.
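
A hedged sketch of how a consumer can surface these events, assuming the standard `EVENT_LOGGER_NAME` logger autogen-core uses for structured events:

```python
import logging

from autogen_core import EVENT_LOGGER_NAME

# Stream start/end events are emitted as structured log records; attach a
# handler to see them alongside the other LLM call events.
logger = logging.getLogger(EVENT_LOGGER_NAME)
logger.setLevel(logging.INFO)
logger.addHandler(logging.StreamHandler())
```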

Closes #5730
---------

Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2025-03-11 15:02:46 -07:00
PythicCoder
6a3acc4548
Feature: Add LlamaCppChatCompletionClient and llama-cpp (#5326)
This pull request introduces the integration of the `llama-cpp` library
into the `autogen-ext` package, with significant changes to the project
dependencies and the implementation of a new chat completion client. The
most important changes include updating the project dependencies, adding
a new module for the `LlamaCppChatCompletionClient`, and implementing
the client with various functionalities.

### Project Dependencies:

* `python/packages/autogen-ext/pyproject.toml`: Added `llama-cpp-python` as a new dependency under the `llama-cpp` section.

### New Module:

* `python/packages/autogen-ext/src/autogen_ext/models/llama_cpp/__init__.py`: Introduced the `LlamaCppChatCompletionClient` class and handled import errors with a descriptive message for missing dependencies.

### Implementation of `LlamaCppChatCompletionClient`:

* `python/packages/autogen-ext/src/autogen_ext/models/llama_cpp/_llama_cpp_completion_client.py`:
  * Added the `LlamaCppChatCompletionClient` class with methods to initialize the client, create chat completions, detect and execute tools, and handle streaming responses.
  * Included detailed logging for debugging purposes and implemented methods to count tokens, track usage, and provide model information.
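
A hedged usage sketch (assumes a locally downloaded GGUF model; constructor keywords follow llama-cpp-python conventions and may differ):

```python
from autogen_ext.models.llama_cpp import LlamaCppChatCompletionClient

# Point the client at a local GGUF model file.
client = LlamaCppChatCompletionClient(model_path="/path/to/model.gguf")
```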

## Checks

- [x] I've included any doc changes needed for https://microsoft.github.io/autogen/. See https://microsoft.github.io/autogen/docs/Contribute#documentation to build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes introduced in this PR.
- [x] I've made sure all auto checks have passed.

---------

Co-authored-by: aribornstein <x@x.com>
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
Co-authored-by: Ryan Sweet <rysweet@microsoft.com>
2025-03-10 16:53:53 -07:00
Leonardo Pinheiro
a1858efac9
feat: update local code executor to support powershell (#5884)
Adds PowerShell support to the local code executor.
Closes #5518
2025-03-10 14:00:14 -07:00
Hussein Mozannar
7d17b22925
Add an optional base path to FileSurfer (#5886)
This pull request introduces a new feature to the `FileSurfer` agent and
`MarkdownFileBrowser` by adding support for specifying a base path for
file browsing.

* `python/packages/autogen-ext/src/autogen_ext/agents/file_surfer/_file_surfer.py`:
  * Added a `base_path` parameter to the `FileSurfer` class and its initialization method, with a default value of the current working directory (`os.getcwd()`).
  * Updated the `MarkdownFileBrowser` initialization within `FileSurfer` to use the `base_path` parameter.
* `python/packages/autogen-ext/src/autogen_ext/agents/file_surfer/_markdown_file_browser.py`:
  * Added a `base_path` parameter to the `MarkdownFileBrowser` class and its initialization method, with a default value of the current working directory (`os.getcwd()`).
  * Updated `MarkdownFileBrowser` to use `base_path` for setting the initial path and returning the current page path.
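
A hedged usage sketch (the replay client keeps the example self-contained; any model client works, and the directory is a placeholder):

```python
from autogen_ext.agents.file_surfer import FileSurfer
from autogen_ext.models.replay import ReplayChatCompletionClient

# Constrain browsing to a specific directory tree via the new parameter.
file_surfer = FileSurfer(
    "file_surfer",
    model_client=ReplayChatCompletionClient(["done"]),
    base_path="/data/reports",
)
```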
2025-03-09 20:33:18 -07:00
Victor Dibia
134a8c71ef
Add anthropic docs (#5882)

## Why are these changes needed?

Add Anthropic docs:

- Add API docs
- Add sample code + usage in the AgentChat user guide


## Related issue number


Closes #5856 

2025-03-08 19:35:28 -08:00
Eric Zhu
740afe5b61
Add ToolCallEvent and log it from all builtin tools (#5859)
Resolves #5745

Also made sure to log LLMCallEvent from all builtin model clients, and
added unit test for coverage.

---------

Co-authored-by: Ryan Sweet <rysweet@microsoft.com>
Co-authored-by: Victor Dibia <victordibia@microsoft.com>
2025-03-07 16:04:45 -08:00
afourney
8f737de0e1
Add client close (#5871)
Fixes #4821 by adding a `close()` method to all clients.

Additionally:
* The m1 CLI is updated to close the client before exiting.
* The PlaywrightController is updated to suppress some other unrelated chatty warnings (e.g., produced by markitdown when encountering conversions that require external utilities).
2025-03-07 14:10:06 -08:00
afourney
5685bd1888
Update markitdown requirements to >= 0.0.1, while still in the 0.0.x range (#5864) 2025-03-06 21:33:09 -08:00
Eric Zhu
ea89a84c30
fix: remove max_tokens from az ai client create call when stream=True (#5860) 2025-03-06 17:18:37 -08:00
Eric Zhu
7e5c1154cf
Support for external agent runtime in AgentChat (#5843)
Resolves #4075

1. Introduce custom runtime parameter for all AgentChat teams
(RoundRobinGroupChat, SelectorGroupChat, etc.). This is done by making
sure each team's topics are isolated from other teams, and decoupling
state from agent identities. Also, I removed the closure agent from the
BaseGroupChat and use the group chat manager agent to relay messages to
the output message queue.
2. Added unit tests to test scenarios with custom runtimes using a pytest fixture.
3. Refactored existing unit tests to use ReplayChatCompletionClient, with a few improvements to the client.
4. Fixed a one-liner bug in AssistantAgent that caused a deserialized agent to have handoffs.

How to use it:

```python
import asyncio
from autogen_core import SingleThreadedAgentRuntime
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.conditions import TextMentionTermination
from autogen_ext.models.replay import ReplayChatCompletionClient

async def main() -> None:
    # Create a runtime
    runtime = SingleThreadedAgentRuntime()
    runtime.start()

    # Create a model client.
    model_client = ReplayChatCompletionClient(
        ["1", "2", "3", "4", "5", "6", "7", "8", "9", "10"],
    )

    # Create agents
    agent1 = AssistantAgent("assistant1", model_client=model_client, system_message="You are a helpful assistant.")
    agent2 = AssistantAgent("assistant2", model_client=model_client, system_message="You are a helpful assistant.")

    # Create a termination condition
    termination_condition = TextMentionTermination("10", sources=["assistant1", "assistant2"])

    # Create a team
    team = RoundRobinGroupChat([agent1, agent2], runtime=runtime, termination_condition=termination_condition)

    # Run the team
    stream = team.run_stream(task="Count to 10.")
    async for message in stream:
        print(message)
    
    # Save the state.
    state = await team.save_state()

    # Load the state to an existing team.
    await team.load_state(state)

    # Run the team again
    model_client.reset()
    stream = team.run_stream(task="Count to 10.")
    async for message in stream:
        print(message)

    # Create a new team, with the same agent names.
    agent3 = AssistantAgent("assistant1", model_client=model_client, system_message="You are a helpful assistant.")
    agent4 = AssistantAgent("assistant2", model_client=model_client, system_message="You are a helpful assistant.")
    new_team = RoundRobinGroupChat([agent3, agent4], runtime=runtime, termination_condition=termination_condition)

    # Load the state to the new team.
    await new_team.load_state(state)

    # Run the new team
    model_client.reset()
    new_stream = new_team.run_stream(task="Count to 10.")
    async for message in new_stream:
        print(message)
    
    # Stop the runtime
    await runtime.stop()

asyncio.run(main())
```

TODOs as future PRs:
1. Documentation.
2. How to handle errors in a custom runtime when an agent raises an exception?

---------

Co-authored-by: Ryan Sweet <rysweet@microsoft.com>
2025-03-06 10:32:52 -08:00
Leonardo Pinheiro
9d235d2585
fix: add plugin to kernel (#5830)
The line that adds the plugin to the kernel was accidentally removed, which caused SK to be unable to invoke tools.
2025-03-05 04:37:43 +00:00
Ricky Loynd
97536af7a3
Task-Centric Memory (#5227)
_(EXPERIMENTAL, RESEARCH IN PROGRESS)_

In 2023 AutoGen introduced [Teachable
Agents](https://microsoft.github.io/autogen/0.2/blog/2023/10/26/TeachableAgent/)
that users could teach new facts, preferences and skills. But teachable
agents were limited in several ways: They could only be
`ConversableAgent` subclasses, they couldn't learn a new skill unless
the user stated (in a single turn) both the task and how to solve it,
and they couldn't learn on their own. **Task-Centric Memory** overcomes
these limitations, allowing users to teach arbitrary agents (or teams)
more flexibly and reliably, and enabling agents to learn from their own
trial-and-error experiences.

This PR is large and complex. All of the files are new, and most of the
added components depend on the others to run at all. But the review
process can be accelerated if approached in the following order.
1. Start with the [Task-Centric Memory README](https://github.com/microsoft/autogen/tree/agentic_memory/python/packages/autogen-ext/src/autogen_ext/task_centric_memory).
2. Install the memory extension locally, since it won't be in PyPI until it's merged. In the `agentic_memory` branch, from the `python/packages` directory:
    - `pip install -e autogen-agentchat`
    - `pip install -e autogen-ext[openai]`
    - `pip install -e autogen-ext[task-centric-memory]`
3. Run the Quickstart sample code, then immediately open the `./pagelogs/quick/0 Call Tree.html` file in a browser to view the work in progress. Click through the web page links to see the details.
4. Continue through the rest of the main README to get a high-level overview of the architecture.
5. Read through the [code samples README](https://github.com/microsoft/autogen/tree/agentic_memory/python/samples/task_centric_memory), running each of the code samples while viewing their page logs.
6. Skim through the code samples, along with their corresponding yaml config files:
    1. `chat_with_teachable_agent.py`
    2. `eval_retrieval.py`
    3. `eval_teachability.py`
    4. `eval_learning_from_demonstration.py`
    5. `eval_self_teaching.py`
7. Read `task_centric_memory_controller.py`, referring back to the previously generated page logs as needed. This is the most important and complex file in the PR.
8. Read the remaining core files:
    1. `_task_centric_memory_bank.py`
    2. `_string_similarity_map.py`
    3. `_prompter.py`
9. Read the supporting files in the utils dir:
    1. `teachability.py`
    2. `apprentice.py`
    3. `grader.py`
    4. `page_logger.py`
    5. `_functions.py`
2025-03-04 09:56:49 -08:00
Eric Zhu
4858676bdd
Add examples for custom model context in AssistantAgent and ChatCompletionContext (#5810)
Resolves #5777
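
One pattern the examples cover, as a hedged sketch (any `ChatCompletionContext` subclass can be substituted for the buffered one; the replay client keeps it self-contained):

```python
from autogen_agentchat.agents import AssistantAgent
from autogen_core.model_context import BufferedChatCompletionContext
from autogen_ext.models.replay import ReplayChatCompletionClient

# Keep only the last 5 messages in the model's view of the conversation.
agent = AssistantAgent(
    "assistant",
    model_client=ReplayChatCompletionClient(["ok"]),
    model_context=BufferedChatCompletionContext(buffer_size=5),
)
```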
2025-03-03 22:19:59 -08:00
Leonardo Pinheiro
906b09e451
fix: Update SKChatCompletionAdapter message conversion (#5749)

## Why are these changes needed?


The PR introduces two changes.

The first change is adding a `name` attribute to `FunctionExecutionResult`. The motivation is that Semantic Kernel requires it for its function result interface, and it seemed like an easy modification, as `FunctionExecutionResult` is always created in the context of a `FunctionCall`, which contains the name. I'm unsure if there was a motivation to keep it out, but this change makes it easier to trace which tool a result refers to and also increases API compatibility with SK.

The second change is an update to how messages are mapped from AutoGen to Semantic Kernel, which includes an update/fix in the processing of function results.

## Related issue number


Related to #5675, but won't fix the underlying issue of Anthropic requiring tools during AssistantAgent reflection.


---------

Co-authored-by: Leonardo Pinheiro <lpinheiro@microsoft.com>
2025-03-03 23:05:54 +00:00
Peter Jausovec
a785cd90f9
add stream_options to openai model (#5788)
`stream_options` is not part of the model classes, so it won't get serialized when calling `dump_component`. Adding it to the model allows us to store the stream options when the component is serialized.
---------

Signed-off-by: Peter Jausovec <peter.jausovec@solo.io>
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2025-03-03 21:58:05 +00:00
peterychang
8c9961ecba
add options to ollama client (#5805)
Necessary to configure the Ollama client.

## Related issue number

#5597 

Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2025-03-03 13:24:14 -08:00
Victor Dibia
b8b13935c9
Make FileSurfer and CodeExecAgent Declarative (#5765)

## Why are these changes needed?

Make FileSurfer and CodeExecAgent declarative. These agent presets are used as part of Magentic-One, and making them declarative is a precursor to their use in AGS. A sketch of the resulting round-trip follows.
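
A hedged sketch of what declarative support enables (assumes `OPENAI_API_KEY` is set; the model name is a placeholder):

```python
from autogen_ext.agents.file_surfer import FileSurfer
from autogen_ext.models.openai import OpenAIChatCompletionClient

# Build an agent, serialize it to a component config, and rebuild it from that config.
agent = FileSurfer("file_surfer", model_client=OpenAIChatCompletionClient(model="gpt-4o"))
config = agent.dump_component()
restored = FileSurfer.load_component(config)
```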


## Related issue number

Closes #5607

2025-03-01 15:46:30 +00:00