The token-limited model context is currently broken because it imports
from extensions.
This fix removes the imports and updates the model context
implementation to use the model client directly.
In the future, the model client's token counting should cache results
from the model API to provide accurate counts.
The Anthropic SDK does not accept multiple system messages, but some
AutoGen agents (e.g. `SocietyOfMindAgent`) produce multiple system
messages. Gemini used through the OpenAI SDK does not raise an error,
but it also does not handle multiple system messages correctly (only
the last one takes effect).
This change simply merges multiple system messages into one in these
cases.
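A minimal sketch of the merging idea (not the client's actual implementation), using the `autogen_core.models` message types:

```python
from autogen_core.models import LLMMessage, SystemMessage


def merge_system_messages(messages: list[LLMMessage]) -> list[LLMMessage]:
    """Collapse all system messages into a single leading system message."""
    system_contents = [m.content for m in messages if isinstance(m, SystemMessage)]
    other_messages = [m for m in messages if not isinstance(m, SystemMessage)]
    if not system_contents:
        return other_messages
    merged = SystemMessage(content="\n".join(system_contents))
    return [merged, *other_messages]
```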
## Related issue number
Closes #6116, Closes #6117
---------
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
When using the `ACADynamicSessionsCodeExecutor`, the output includes
not only the stdout from the execution but also the `results` property
from the call to Dynamic Sessions. When the executed code saves a
file, that file is included in the result:
```console
Plot saved as 'results_by_date.png'
{'type': 'image', 'format': 'png', 'base64_data': 'iVBORw0KGgoAAAANSUhEUgAAA90AAAJOCAYAAACqS2TfAAAAOXRFWHRTb2Z0d2FyZQ...
```
In some situations, this additional output is not desirable:
- when displaying the code output to a user - in this case, the stdout
content is dwarfed by the base64 encoded file content
- when an LLM agent is going to evaluate the code output to determine
next steps - in this case, the base64 content will be included in the
message history sent to the LLM, increasing the prompt token cost
To handle these cases, this PR adds a new optional argument to the
`ACADynamicSessionsCodeExecutor` constructor that allows suppressing
the result content. It defaults to `False` to preserve the current
behaviour; a usage sketch follows.
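A minimal usage sketch. The flag name `suppress_result_output` and the other constructor arguments shown here are assumptions; check the constructor signature for the actual names:

```python
from azure.identity import DefaultAzureCredential

from autogen_ext.code_executors.azure import ACADynamicSessionsCodeExecutor

executor = ACADynamicSessionsCodeExecutor(
    pool_management_endpoint="<your-pool-management-endpoint>",
    credential=DefaultAzureCredential(),
    # Assumed flag name: drop the `results` payload (e.g. base64-encoded files)
    # from the execution output and keep only stdout.
    suppress_result_output=True,
)
```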
(from #6042)
Closes #6042
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
This PR adds missing model entries for OpenAI-compatible endpoints,
including gpt-4.5-turbo, gpt-4.5-turbo-preview, and claude-3.5-sonnet.
This improves coverage and avoids potential fallback or mismatch issues
when initializing clients.
This PR refactored `AgentEvent` and `ChatMessage` union types to
abstract base classes. This allows for user-defined message types that
subclass one of the base classes to be used in AgentChat.
To support a unified interface for working with messages, the base
classes add abstract methods to:
- Convert content to a string
- Convert content to a `UserMessage` for the model client
- Convert content for rendering in the console
- Dump into a dictionary
- Load and create a new instance from a dictionary
This way, all agents such as `AssistantAgent` and `SocietyOfMindAgent`
can utilize the unified interface to work with any built-in and
user-defined message type.
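As a rough illustration of the unified interface (the method names below are assumptions based on the capabilities listed above; see the base class definitions for the actual signatures):

```python
from autogen_agentchat.messages import TextMessage

message = TextMessage(content="Hello!", source="user")

text = message.to_text()                 # plain-text rendering
model_msg = message.to_model_message()   # `UserMessage` for a model client
data = message.dump()                    # dictionary representation
restored = TextMessage.load(data)        # recreate the message from the dictionary
```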
This PR also introduces a new message type, `StructuredMessage`, for
AgentChat (Resolves #5131), which is a generic type that requires a
user-specified content type.
You can create a `StructuredMessage` as follows:
```python
from typing import List

from pydantic import BaseModel

from autogen_agentchat.messages import StructuredMessage


class MessageType(BaseModel):
    data: str
    references: List[str]


message = StructuredMessage[MessageType](content=MessageType(data="data", references=["a", "b"]), source="user")
# message.content is of type `MessageType`.
```
This PR addresses the receiving side of this message type. Producing
this message type from `AssistantAgent` continues in #5934.
Added unit tests to verify this message type works with agents and
teams.
This makes the behavior of the hosted R1 model the same as the locally
hosted R1 model.
Addresses: #5989
---------
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
Takes the output of the handoff tool and uses it to create the
`HandoffMessage`.
[Discussion is
here](https://github.com/microsoft/autogen/discussions/6067#discussion-8117177).
This allows agents to carry specific instructions when performing
handoff operations.
---------
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
## Why are these changes needed?
Add UTF-8 encoding to file reading.
Without this, the default system encoding is used. On Windows machines
this can be an arbitrary local encoding, causing errors.
```python
import os

with open(
    os.path.join(os.path.abspath(os.path.dirname(__file__)), "page_script.js"), "rt", encoding="utf-8"
) as fh:
    page_script = fh.read()  # variable name illustrative; the fix is the explicit encoding
```
## Related issue number
Closes #6093
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
Added support for the thought process in tool calls for
`OpenAIChatCompletionClient`, allowing additional text produced by a
model alongside tool calls to be preserved in the `thought` field of
`CreateResult`. This PR extends the same functionality to
`AzureAIChatCompletionClient` for consistency across model clients.
#5650
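A minimal sketch of reading the `thought` field; the helper function and prompt are illustrative, and `client` is assumed to be an already-configured `AzureAIChatCompletionClient`:

```python
from autogen_core.models import UserMessage


async def show_thought(client, tools) -> None:
    # `tools` is a list of tool schemas passed along with the request.
    result = await client.create(
        [UserMessage(content="What's the weather in Seattle?", source="user")],
        tools=tools,
    )
    if result.thought is not None:
        # Free-form text the model produced alongside its tool calls.
        print("Model reasoning:", result.thought)
    print("Result content:", result.content)
```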
Co-authored-by: Jay Prakash Thakur <jathakur@microsoft.com>
This PR introduces a `metadata` field in `AssistantAgentConfig`,
allowing applications to assign and track identity information for
agents.
The `metadata` field is a `Dict[str, str]` and is included in the
configuration for proper serialization.
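A brief sketch, assuming the field is exposed as a `metadata` constructor argument on `AssistantAgent` (the argument name and values here are illustrative):

```python
from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient

model_client = OpenAIChatCompletionClient(model="gpt-4o")

agent = AssistantAgent(
    name="helper",
    model_client=model_client,
    metadata={"team": "support", "owner": "platform"},  # illustrative identity info
)

# The metadata travels with the serialized configuration.
config = agent.dump_component()
```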
---------
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
## Why are these changes needed?
This PR fixes a `TypeError: Cannot instantiate typing.Union` that occurs
when using the `MultimodalWebSurfer` agent with Anthropic models. The
error was caused by the incorrect usage of `typing.Union` as a class
constructor instead of a type hint within the `_anthropic_client.py`
file. The code was attempting to instantiate `typing.Union`, which is
not allowed. The fix correctly uses `typing.Union` within type hints,
and uses the correct `Base64ImageSourceParam` type. It also updates the
`pyproject.toml` dependency.
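For illustration, the kind of misuse that triggers the error and the corrected form (simplified, not the actual client code):

```python
from typing import Union

# Incorrect: typing.Union is not a class and cannot be instantiated.
# block = Union("text", "image")  # raises TypeError: Cannot instantiate typing.Union

# Correct: Union only appears in type hints.
def describe(block: Union[str, bytes]) -> str:
    return block.decode("utf-8") if isinstance(block, bytes) else block
```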
## Related issue number
Closes #6035
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
---------
Co-authored-by: Victor Dibia <victordibia@microsoft.com>
## Why are these changes needed?
- Adds tracing docs page for AgentChat with Jaeger example
- [x] Runtime tracing: example code where tracing is done with the
SingleThreaded Runtime, logging all events (a minimal sketch appears
after this list)
- [x] Custom event tracing: example code logging messages returned from
`team.run_stream()`
- [ ] LLM span tracing: depends on
https://github.com/microsoft/autogen/issues/5895
- [ ] [TBD] Distributed tracing
See
[tracing.ipynb](bdb6ac5315/python/packages/autogen-core/docs/src/user-guide/agentchat-user-guide/tracing.ipynb).
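A minimal runtime-tracing sketch, assuming a local Jaeger instance with the OTLP gRPC endpoint on the default port (the exporter setup, endpoint, and service name are assumptions):

```python
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

from autogen_core import SingleThreadedAgentRuntime

# Export spans to a local Jaeger collector over OTLP gRPC.
tracer_provider = TracerProvider(resource=Resource.create({"service.name": "agentchat-demo"}))
tracer_provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4317")))
trace.set_tracer_provider(tracer_provider)

# The runtime records its events as spans on the provided tracer.
runtime = SingleThreadedAgentRuntime(tracer_provider=tracer_provider)
```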
## Related issue number
#5992
## Open Questions
@ekzhu
- What is the recommended way to directly log custom events like
LLMCallEvents and ToolCallEvents? LogEvent handlers in user code that
become traced spans?
- Currently tool calls and their args are already logged (not sure
where this is done), but LLM call events are not. Should we include
samples on this?
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
---------
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
## Why are these changes needed?
Fix the accessibility issue where the screen reader doesn't announce
the theme when it changes.
## Related issue number
#5631 (13) (31) (59)
---------
Co-authored-by: peterychang <49209570+peterychang@users.noreply.github.com>
## Why are these changes needed?
## Related issue number
## Checks
- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
Signed-off-by: zhanluxianshen <zhanluxianshen@163.com>
Co-authored-by: Victor Dibia <victordibia@microsoft.com>
This PR allows docker-out-of-docker scenarios to be run in agbench
(e.g., agent teams that rely on the `DockerCommandLineExecutor`).
This is becoming increasingly important for benchmarking and testing,
since the behaviors of local executors can diverge in important ways.
## Why are these changes needed?
If a user tabs to a code block's copy button and presses Enter, the
screen reader doesn't announce "Copied". This PR fixes this bug.
## Related issue number
#5631 (8)
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
## Why are these changes needed?
Fixes screen reader issue (53). A special thanks to @sjay8 for
starting the work on this task.
## Related issue number
https://github.com/microsoft/autogen/issues/5631
This pull request introduces a new linting feature to the benchmark
configuration in the `agbench` package. The main changes include adding
a new command to the CLI, implementing the linter functionality, and
integrating it with the existing codebase.
### New Linting Feature:
* `python/packages/agbench/src/agbench/cli.py`: Added the `lint_cli`
import and integrated the new "lint" command into the `main` function.
### Linter Implementation:
* `python/packages/agbench/src/agbench/linter/__init__.py`: Added the
necessary imports to initialize the linter module.
* `python/packages/agbench/src/agbench/linter/_base.py`: Defined core
classes such as `Document`, `Code`, `CodeExample`, `CodedDocument`, and
the `BaseQualitativeCoder` protocol (a rough sketch appears below).
* `python/packages/agbench/src/agbench/linter/cli.py`: Implemented the
`lint_cli` function, which loads log files, codes them, and prints the
results.
* `python/packages/agbench/src/agbench/linter/coders/oai_coder.py`:
Implemented the `OAIQualitativeCoder` class to interact with OpenAI for
coding documents and caching results.
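A rough sketch of how the core linter types might fit together; the field names and signatures below are assumptions for illustration, not the actual definitions in `_base.py`:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Protocol


@dataclass
class Document:
    name: str
    text: str


@dataclass
class Code:
    name: str          # short label for the issue category
    description: str   # what the category means


@dataclass
class CodeExample:
    code: Code
    excerpt: str       # the log snippet that triggered the code


@dataclass
class CodedDocument:
    document: Document
    examples: List[CodeExample] = field(default_factory=list)


class BaseQualitativeCoder(Protocol):
    def code_document(self, doc: Document) -> Optional[CodedDocument]:
        """Assign qualitative codes to a log document."""
        ...
```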
Example usage:
<img width="997" alt="image"
src="https://github.com/user-attachments/assets/6718688e-9917-4a43-a2f1-1105b030528d"
/>
<img width="999" alt="image"
src="https://github.com/user-attachments/assets/7fcb9c43-70f2-4fe7-ae29-5ad6a4ef2a16"
/>
> If you are in VSCode Terminal, you can click on the links in the
terminal output to jump to the exact error.
---------
Co-authored-by: afourney <adamfo@microsoft.com>
<!-- Thank you for your contribution! Please review
https://microsoft.github.io/autogen/docs/Contribute before opening a
pull request. -->
<!-- Please add a reviewer to the assignee section when you create a PR.
If you don't have the access to it, we will shortly find a reviewer and
assign them to your PR. -->
## Why are these changes needed?
Fixes Screen Reader issue (58)
## Related issue number
https://github.com/microsoft/autogen/issues/5631
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
---------
Co-authored-by: Jack Gerrits <jackgerrits@users.noreply.github.com>
Optionally limit what files and folders FileSurfer can access
(constraining it to a subtree of the filesystem).
This is not a replacement for Docker sandboxing, but can be used in
conjunction with sandboxing to help prevent FileSurfer from accessing
sensitive files.
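A brief usage sketch, assuming the restriction is exposed as a `base_path` constructor argument (check the `FileSurfer` signature for the actual name):

```python
from autogen_ext.agents.file_surfer import FileSurfer
from autogen_ext.models.openai import OpenAIChatCompletionClient

model_client = OpenAIChatCompletionClient(model="gpt-4o")

file_surfer = FileSurfer(
    name="file_surfer",
    model_client=model_client,
    # Assumed argument name: restrict file access to this subtree.
    base_path="/home/user/project/docs",
)
```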
## Why are these changes needed?
When I create an `ACASessionsExecutor` instance and execute some code,
authentication with the default scope does not work. It always returns:
"ClientAuthenticationError: Authentication failed: AADSTS70011: The
provided request must include a 'scope' input parameter. The provided
value for the input parameter 'scope' is not valid. The scope
https://dynamicsessions.io/ is not valid. Trace ID:
d75efa58-8be7-44ef-8839-aacfdc850600 Correlation ID:
a8e4d859-92da-4fbe-a8e0-05116323ab55 Timestamp: 2025-03-14 14:15:09Z"
After changing the scope in `_ensure_access_token` to
"https://dynamicsessions.io/.default" rather than
"https://dynamicsessions.io/", it worked.
## Related issue number
issue #5946
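For reference, the essence of the scope change described above, sketched with `azure.identity`:

```python
from azure.identity import DefaultAzureCredential

credential = DefaultAzureCredential()

# Before: the bare resource URI is not a valid scope and triggers AADSTS70011.
# token = credential.get_token("https://dynamicsessions.io/")

# After: appending ".default" makes it a valid scope.
token = credential.get_token("https://dynamicsessions.io/.default")
```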
## Checks
- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
Co-authored-by: edwinwu <edwin@Edwin-MBA.local>
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
## Why are these changes needed?
## Related issue number
## Checks
- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
Signed-off-by: zhanluxianshen <zhanluxianshen@163.com>
Co-authored-by: Jack Gerrits <jackgerrits@users.noreply.github.com>
## Why are these changes needed?
Resolves #5953
## Related issue number
#5953
## Checks
- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
I have run all [common
tasks](https://github.com/microsoft/autogen/blob/main/python/README.md#common-tasks)
and got the errors below, which I think are due to no OpenAI API key
being set in my environment variables. Can we ignore them, or do I need
to provide one?
```
=================================================== short test summary info ===================================================
ERROR tests/test_db_manager.py::TestDatabaseOperations::test_basic_entity_creation - openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_...
ERROR tests/test_db_manager.py::TestDatabaseOperations::test_upsert_operations - openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_...
ERROR tests/test_db_manager.py::TestDatabaseOperations::test_delete_operations - openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_...
ERROR tests/test_team_manager.py::TestTeamManager::test_load_from_file - openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_...
ERROR tests/test_team_manager.py::TestTeamManager::test_load_from_directory - openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_...
ERROR tests/test_team_manager.py::TestTeamManager::test_create_team - openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_...
ERROR tests/test_team_manager.py::TestTeamManager::test_run_stream - openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_...
=========================================== 3 passed, 5 warnings, 7 errors in 9.07s ===========================================
```
Co-authored-by: Leonardo Pinheiro <leosantospinheiro@gmail.com>
Resolves #5982
This PR adds support for `json_schema` as a `response_format` type in
`OpenAIChatCompletionClient`. This is necessary because it allows the
client to be serialized along with the schema. If a user passes
`response_format=SomeBaseModel`, the client cannot be serialized.
Usage:
```python
# Structured output response, with a pre-defined JSON schema.
OpenAIChatCompletionClient(
    ...,
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "name of the schema, must be an identifier.",
            "description": "description for the model.",
            # You can convert a Pydantic (v2) model to JSON schema
            # using the `model_json_schema()` method.
            "schema": "<the JSON schema itself>",
            # Whether to enable strict schema adherence when
            # generating the output. If set to true, the model will
            # always follow the exact schema defined in the
            # `schema` field. Only a subset of JSON Schema is
            # supported when `strict` is `true`.
            # To learn more, read
            # https://platform.openai.com/docs/guides/structured-outputs.
            "strict": False,  # or True
        },
    },
)
```
## Summary of Changes
- Added `candidate_func` to `SelectorGroupChat` to narrow down the pool
of candidate speakers.
- Introduced a test in `tests/test_group_chat_endpoint.py` to validate
its functionality.
- Updated the selector group chat user guide with an example
demonstrating `candidate_func`.
## Why are these changes needed?
- These changes add a new parameter, `candidate_func`, to
`SelectorGroupChat` that helps users narrow down the set of agents for
speaker selection, allowing the next speaker to be selected
automatically from a smaller pool of agents (a brief sketch follows).
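A minimal sketch of a `candidate_func`; the exact signature is an assumption here (it receives the conversation history and returns the names of eligible speakers), and the agent names are illustrative:

```python
from typing import List, Sequence

from autogen_agentchat.messages import BaseAgentEvent, BaseChatMessage


def candidate_func(messages: Sequence[BaseAgentEvent | BaseChatMessage]) -> List[str]:
    # After the planner speaks, only allow the worker agents to be selected next.
    if messages and messages[-1].source == "planner":
        return ["researcher", "writer"]
    # Otherwise all participants are candidates.
    return ["planner", "researcher", "writer"]
```

The function would then be passed as `candidate_func=candidate_func` when constructing the `SelectorGroupChat`.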
## Related issue number
Closes #5828
## Checks
- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
---------
Signed-off-by: Abhijeetsingh Meena <abhijeet040403@gmail.com>
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
R1 reasoning tokens from the hosted R1 model were not parsed correctly
by the OpenAI client.
Resolves #5941
---------
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>