mirror of
https://github.com/microsoft/autogen.git
synced 2025-07-22 16:31:47 +00:00
10 Commits
6427c07f5c
Fix AnthropicBedrockChatCompletionClient import error (#6489)
Some fixes to the AnthropicBedrockChatCompletionClient:

- Ensure `AnthropicBedrockChatCompletionClient` is exported and can be imported.
- Update the `BedrockInfo` keys serialization: the client argument can be a plain string (similar to the API key), but the exported config should be a `Secret`.
- Replace `AnthropicBedrock` with `AsyncAnthropicBedrock`: the client should be async to work with the AutoGen stack and the `BaseAnthropicClient` it inherits from.
- Improve the `AnthropicBedrockChatCompletionClient` docstring to use the correct client arguments rather than the serialized dict format.

Expected usage:

```python
from autogen_ext.models.anthropic import AnthropicBedrockChatCompletionClient, BedrockInfo
from autogen_core.models import UserMessage, ModelInfo


async def main():
    anthropic_client = AnthropicBedrockChatCompletionClient(
        model="anthropic.claude-3-5-sonnet-20240620-v1:0",
        temperature=0.1,
        model_info=ModelInfo(
            vision=False,
            function_calling=True,
            json_output=False,
            family="unknown",
            structured_output=True,
        ),
        bedrock_info=BedrockInfo(
            aws_access_key="<aws_access_key>",
            aws_secret_key="<aws_secret_key>",
            aws_session_token="<aws_session_token>",
            aws_region="<aws_region>",
        ),
    )  # type: ignore
    result = await anthropic_client.create(
        [UserMessage(content="What is the capital of France?", source="user")]
    )
    print(result)


await main()
```

## Related issue number

Closes #6483

## Checks

- [ ] I've included any doc changes needed for <https://microsoft.github.io/autogen/>. See <https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes introduced in this PR.
- [ ] I've made sure all auto checks have passed.

---------

Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
bd572cc112
Ensure message sent to LLMCallEvent for Anthropic is serializable (#6135)
Messages sent as part of `LLMCallEvent` for Anthropic were not fully serializable. The example below shows `TextBlock` and `ToolUseBlock` instances inside the content of messages; these throw downstream errors in apps like AGS (or event sinks) that expect serializable dicts inside the `LLMCallEvent`.

```
[
  {'role': 'user', 'content': 'What is the weather in New York?'},
  {'role': 'assistant', 'content': [TextBlock(citations=None, text='I can help you find the weather in New York. Let me check that for you.', type='text'), ToolUseBlock(id='toolu_016W8g55GejYGBzRRrcsnt7M', input={'city': 'New York'}, name='get_weather', type='tool_use')]},
  {'role': 'user', 'content': [{'type': 'tool_result', 'tool_use_id': 'toolu_016W8g55GejYGBzRRrcsnt7M', 'content': 'The weather in New York is 73 degrees and Sunny.'}]}
]
```

This PR first serializes the content of Anthropic messages before they are passed to `LLMCallEvent`:

```
[
  {'role': 'user', 'content': 'What is the weather in New York?'},
  {'role': 'assistant', 'content': [{'citations': None, 'text': 'I can help you find the weather in New York. Let me check that for you.', 'type': 'text'}, {'id': 'toolu_016W8g55GejYGBzRRrcsnt7M', 'input': {'city': 'New York'}, 'name': 'get_weather', 'type': 'tool_use'}]},
  {'role': 'user', 'content': [{'type': 'tool_result', 'tool_use_id': 'toolu_016W8g55GejYGBzRRrcsnt7M', 'content': 'The weather in New York is 73 degrees and Sunny.'}]}
]
```
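For intuition, a minimal sketch of the kind of conversion involved, assuming the Anthropic SDK's pydantic content-block types; the helper name is illustrative, not the PR's actual code:

```python
from typing import Any, List, Union

from anthropic.types import TextBlock, ToolUseBlock  # pydantic models in the Anthropic SDK


def serialize_content(content: Union[str, List[Any]]) -> Union[str, List[Any]]:
    """Convert SDK content blocks to plain dicts so event payloads stay JSON-serializable."""
    if isinstance(content, str):
        return content
    return [
        block.model_dump() if isinstance(block, (TextBlock, ToolUseBlock)) else block
        for block in content
    ]
```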
9de16d5f70
Fix/anthropic could not end with trailing whitespace at assistant content (#6168)
## Why are these changes needed?

This PR fixes a `400 - invalid_request_error` that occurs when using Anthropic models and the **final message is from the assistant and ends with trailing whitespace**.

Example error:

```
Error code: 400 - {'error': {'code': 'invalid_request_error', 'message': 'messages: final assistant content cannot end with trailing whitespace', ...}}
```

To unblock ongoing internal usage, this patch introduces an **ad-hoc fix** that strips trailing whitespace if the model is Anthropic and the last message is from the assistant.

## Related issue number

Ad-hoc fix for the issue discussed in https://github.com/microsoft/autogen/issues/6167; follow-up structural proposal in https://github.com/microsoft/autogen/issues/6167#issuecomment-2768592840
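A minimal sketch of the workaround described above, assuming an Anthropic-style message list of role/content dicts; the helper name is illustrative, not the PR's actual code:

```python
from typing import Any, Dict, List


def strip_trailing_assistant_whitespace(messages: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
    """Anthropic rejects final assistant content ending in whitespace, so strip it."""
    if messages and messages[-1].get("role") == "assistant":
        content = messages[-1].get("content")
        if isinstance(content, str):
            messages[-1]["content"] = content.rstrip()
    return messages
```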
fbdd89b46b
[BugFix][Refactor] Modular Transformer Pipeline and Fix Gemini/Anthropic Empty Content Handling (#6063)
## Why are these changes needed?

This change addresses a compatibility issue when using Google Gemini models with AutoGen. Specifically, Gemini returns a 400 INVALID_ARGUMENT error when a request contains an empty "text" parameter. The root cause is that Gemini does not accept empty string values (e.g., "") as valid inputs in the history of the conversation. To fix this, if the content field is falsy (e.g., None, "", etc.), it is explicitly replaced with a single whitespace (" "), which prevents the Gemini model from rejecting the request.

- **Gemini API compatibility:** Gemini models reject empty assistant messages (e.g., `""`), causing runtime errors. This PR ensures such messages are safely replaced with whitespace where appropriate.
- **Avoiding regressions:** Applying the empty-content workaround **only to Gemini**, and **only to valid message types**, avoids breaking OpenAI or other models.
- **Reducing duplication:** Previously, message transformation logic was scattered and repeated across different message types and models. Modularizing this pipeline removes that redundancy.
- **Improved maintainability:** With future model variants likely to introduce more constraints, this modular structure makes it easier to adapt transformations without writing ad-hoc code each time.
- **Testing for correctness:** The new structure is verified with tests, ensuring the bug fix is effective and non-intrusive.

## Summary

This PR introduces a **modular transformer pipeline** for message conversion and **fixes a Gemini-specific bug** related to empty assistant message content.

### Key Changes

- **[Refactor]** Extracted message transformation logic into a unified pipeline to:
  - Reduce code duplication
  - Improve maintainability
  - Simplify debugging and extension for future model-specific logic
- **[BugFix]** Gemini models do not accept empty assistant message content.
  - Introduced the `_set_empty_to_whitespace` transformer to replace empty strings with `" "` only where needed
  - Applied it **only** to `"text"` and `"thought"` message types, not to `"tools"`, to avoid serialization errors
- **Improved structure for model-specific handling**
  - Transformer functions are now grouped and conditionally applied based on message type and model family
  - This design makes it easier to support future models or combinations (e.g., Gemini + R1)
- **Test coverage added**
  - Added dedicated tests to verify that empty assistant content causes errors for Gemini
  - Ensured the fix resolves the issue without affecting OpenAI models

---

## Motivation

Originally, Gemini-compatible endpoints would fail when receiving assistant messages with empty content (`""`). This issue required special handling without introducing brittle, ad-hoc patches. In addressing this, I also saw an opportunity to **modularize** the message transformation logic across models. This improves clarity, avoids duplication, and simplifies future adaptations (e.g., different constraints across model families).

---

## 📘 AutoGen Modular Message Transformer: Design & Usage Guide

This document introduces the **new modular transformer system** used in AutoGen for converting `LLMMessage` instances to SDK-specific message formats (e.g., OpenAI-style `ChatCompletionMessageParam`). The design improves **reusability, extensibility**, and **maintainability** across different model families.

---

### 🚀 Overview

Instead of scattering model-specific message conversion logic across the codebase, the new design introduces:

- Modular transformer **functions** for each message type
- Per-model **transformer maps** (e.g., for OpenAI-compatible models)
- Optional **conditional transformers** for multimodal/text hybrid models
- Clear separation between **message adaptation logic** and the **SDK-specific builder** (e.g., `ChatCompletionUserMessageParam`)

---

### 🧱 1. Define Transform Functions

Each transformer function takes:

- `LLMMessage`: a structured AutoGen message
- `context: dict`: metadata passed through the builder pipeline

And returns:

- A dictionary of keyword arguments for the target message constructor (e.g., `{"content": ..., "name": ..., "role": ...}`)

```python
def _set_thought_as_content_gemini(message: LLMMessage, context: Dict[str, Any]) -> Dict[str, str | None]:
    assert isinstance(message, AssistantMessage)
    return {"content": message.thought or " "}
```

---

### 🪢 2. Compose Transformer Pipelines

Multiple transformer functions are composed into a pipeline using `build_transformer_func()`:

```python
base_user_transformer_funcs: List[Callable[[LLMMessage, Dict[str, Any]], Dict[str, Any]]] = [
    _assert_valid_name,
    _set_name,
    _set_role("user"),
]

user_transformer = build_transformer_func(
    funcs=base_user_transformer_funcs,
    message_param_func=ChatCompletionUserMessageParam,
)
```

- The `message_param_func` is the actual constructor for the target message class (usually from the SDK).
- The pipeline is **ordered**: each function adds or overrides keys in the builder kwargs.
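For intuition, a minimal sketch of what a composition helper like `build_transformer_func` could look like; this is an illustrative reconstruction under the semantics described above, not the code merged in this PR:

```python
from typing import Any, Callable, Dict, List

# Illustrative aliases; the PR defines its own LLMMessage and transformer types.
LLMMessage = Any
TransformerFunc = Callable[[LLMMessage, Dict[str, Any]], Dict[str, Any]]


def build_transformer_func(
    funcs: List[TransformerFunc],
    message_param_func: Callable[..., Any],
) -> Callable[[LLMMessage, Dict[str, Any]], Any]:
    """Compose transformer functions into a single message builder.

    The pipeline is ordered: each function adds or overrides keys in the
    kwargs passed to the SDK message constructor.
    """

    def transformer(message: LLMMessage, context: Dict[str, Any]) -> Any:
        kwargs: Dict[str, Any] = {}
        for func in funcs:
            kwargs.update(func(message, context))
        return message_param_func(**kwargs)

    return transformer
```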
---

### 🗂️ 3. Register Transformer Map

Each model family maintains a `TransformerMap`, which maps `LLMMessage` types to transformers:

```python
__BASE_TRANSFORMER_MAP: TransformerMap = {
    SystemMessage: system_transformer,
    UserMessage: user_transformer,
    AssistantMessage: assistant_transformer,
}

register_transformer("openai", model_name_or_family, __BASE_TRANSFORMER_MAP)
```

- `"openai"` is currently required (as only the OpenAI-compatible format is supported for now).
- Registration ensures AutoGen knows how to transform each message type for that model.

---

### 🔁 4. Conditional Transformers (Optional)

When message construction depends on runtime conditions (e.g., `"text"` vs. `"multimodal"`), use:

```python
conditional_transformer = build_conditional_transformer_func(
    funcs_map=user_transformer_funcs_claude,
    message_param_func_map=user_transformer_constructors,
    condition_func=user_condition,
)
```

Where:

- `funcs_map`: maps condition label → list of transformer functions

```python
user_transformer_funcs_claude = {
    "text": text_transformers + [_set_empty_to_whitespace],
    "multimodal": multimodal_transformers + [_set_empty_to_whitespace],
}
```

- `message_param_func_map`: maps condition label → message builder

```python
user_transformer_constructors = {
    "text": ChatCompletionUserMessageParam,
    "multimodal": ChatCompletionUserMessageParam,
}
```

- `condition_func`: determines which transformer to apply at runtime

```python
def user_condition(message: LLMMessage, context: Dict[str, Any]) -> str:
    if isinstance(message.content, str):
        return "text"
    return "multimodal"
```

---

### 🧪 Example Flow

```python
llm_message = AssistantMessage(name="a", thought="let's go")
model_family = "openai"
model_name = "claude-3-opus"

transformer = get_transformer(model_family, model_name, type(llm_message))
sdk_message = transformer(llm_message, context={})
```

---

### 🎯 Design Benefits

| Feature | Benefit |
|---------|---------|
| 🧱 Function-based modular design | Easy to compose and test |
| 🧩 Per-model registry | Clean separation across model families |
| ⚖️ Conditional support | Allows multimodal / dynamic adaptation |
| 🔄 Reuse-friendly | Shared logic (e.g., `_set_name`) is DRY |
| 📦 SDK-specific | Keeps message adaptation aligned to the builder interface |

---

### 🔮 Future Direction

- Support more SDKs and formats by introducing new `message_param_func`s
- Global registry integration (currently `"openai"`-scoped)
- Class-based transformer variant if complexity grows

---

## Related issue number

Closes #5762

## Checks

- [ ] I've included any doc changes needed for <https://microsoft.github.io/autogen/>. See <https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes introduced in this PR.
- [x] I've made sure all auto checks have passed.

---------

Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
0cd3ff46fa
FIX: Anthropic and Gemini could take multiple system messages (#6118)
The Anthropic SDK cannot take multiple system messages, yet some AutoGen agents (e.g., SocietyOfMindAgent) produce multiple system messages. Gemini via the OpenAI SDK does not raise an error, but multiple system messages do not work either (only the last one takes effect). So I made a simple change: merge multiple system messages into one in these cases.

## Related issue number

Closes #6116
Closes #6117

---------

Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
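A minimal sketch of the merge described above, assuming OpenAI-style role/content dicts; the helper name is illustrative, not the PR's actual code:

```python
from typing import Any, Dict, List


def merge_system_messages(messages: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
    """Collapse all system messages into a single leading system message."""
    system_parts = [m["content"] for m in messages if m["role"] == "system"]
    others = [m for m in messages if m["role"] != "system"]
    if not system_parts:
        return others
    return [{"role": "system", "content": "\n".join(system_parts)}] + others
```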
a4b6372813
Use SecretStr type for api key (#5939)
To prevent accidental export of API keys.
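For context, a minimal sketch of the pattern with pydantic's `SecretStr` (the config class here is illustrative):

```python
from pydantic import BaseModel, SecretStr


class ClientConfig(BaseModel):
    model: str
    api_key: SecretStr  # masked in reprs and serialized output


config = ClientConfig(model="claude-3-7-sonnet-20250219", api_key=SecretStr("sk-..."))
print(config.model_dump_json())           # api_key is rendered as "**********"
print(config.api_key.get_secret_value())  # reading the raw key is an explicit opt-in
```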
817f728d04
add LLMStreamStartEvent and LLMStreamEndEvent (#5890)
These changes are needed because there is currently no way to get logging information about streaming LLM requests/responses. I decided to emit the StreamStart event AFTER the first chunk so there aren't false positives about connections/auth.

Closes #5730

---------

Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
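An illustrative sketch of that ordering, assuming a chunk iterator from the model SDK; the event payloads and logger name are illustrative, not the exact `LLMStreamStartEvent`/`LLMStreamEndEvent` API:

```python
import logging
from typing import Any, AsyncIterator

logger = logging.getLogger("autogen_core.events")


async def stream_with_events(chunks: AsyncIterator[str], messages: Any) -> AsyncIterator[str]:
    started = False
    parts: list[str] = []
    async for chunk in chunks:
        if not started:
            # Emit the start event only after the first chunk arrives, so a failed
            # connection or bad credentials never produce a spurious "start".
            logger.info({"type": "LLMStreamStart", "messages": messages})
            started = True
        parts.append(chunk)
        yield chunk
    logger.info({"type": "LLMStreamEnd", "response": "".join(parts)})
```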
740afe5b61
Add ToolCallEvent and log it from all builtin tools (#5859)
Resolves #5745

Also made sure to log LLMCallEvent from all builtin model clients, and added a unit test for coverage.

---------

Co-authored-by: Ryan Sweet <rysweet@microsoft.com>
Co-authored-by: Victor Dibia <victordibia@microsoft.com>
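An illustrative sketch of the logging pattern, assuming a tool with a `run_json`-style interface; the event shape and helper name are illustrative, not the exact `ToolCallEvent` definition:

```python
import logging
from typing import Any, Dict

logger = logging.getLogger("autogen_core.events")


async def run_tool_logged(tool: Any, arguments: Dict[str, Any]) -> Any:
    """Execute a tool and log a tool-call event with its arguments and result."""
    result = await tool.run_json(arguments)  # hypothetical tool interface
    logger.info({"type": "ToolCall", "tool": tool.name, "arguments": arguments, "result": str(result)})
    return result
```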
906b09e451
fix: Update SKChatCompletionAdapter message conversion (#5749)
## Why are these changes needed?

The PR introduces two changes.

The first change is adding a name attribute to `FunctionExecutionResult`. The motivation is that Semantic Kernel requires it for their function result interface, and it seemed like an easy modification as `FunctionExecutionResult` is always created in the context of a `FunctionCall`, which will contain the name. I'm unsure if there was a motivation to keep it out, but this change makes it easier to trace which tool the result refers to and also increases API compatibility with SK.

The second change is an update to how messages are mapped from AutoGen to Semantic Kernel, which includes an update/fix in the processing of function results.

## Related issue number

Related to #5675, but won't fix the underlying issue of Anthropic requiring tools during AssistantAgent reflection.

## Checks

- [ ] I've included any doc changes needed for <https://microsoft.github.io/autogen/>. See <https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes introduced in this PR.
- [ ] I've made sure all auto checks have passed.

---------

Co-authored-by: Leonardo Pinheiro <lpinheiro@microsoft.com>
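An illustrative sketch of the first change, assuming autogen-core's `FunctionCall`/`FunctionExecutionResult` models; the construction site in the PR differs, but the `name` is taken from the originating call:

```python
from autogen_core import FunctionCall
from autogen_core.models import FunctionExecutionResult

call = FunctionCall(id="call_1", name="get_weather", arguments='{"city": "New York"}')
result = FunctionExecutionResult(
    call_id=call.id,
    content="The weather in New York is 73 degrees and Sunny.",
    name=call.name,  # new attribute: ties the result back to the tool that produced it
    is_error=False,
)
```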
05fc763b8a
add anthropic native support (#5695)
<!-- Thank you for your contribution! Please review https://microsoft.github.io/autogen/docs/Contribute before opening a pull request. --> Claude 3.7 just came out. Its a pretty capable model and it would be great to support it in Autogen. This will could augment the already excellent support we have for Anthropic via the SKAdapters in the following ways - Based on the ChatCompletion API similar to the ollama and openai client - Configurable/serializable (can be dumped) .. this means it can be used easily in AGS. ## What is Supported (video below shows the client being used in autogen studio) https://github.com/user-attachments/assets/8fb7c17c-9f9c-4525-aa9c-f256aad0f40b - streaming - tool callign / function calling - drop in integration with assistant agent. - multimodal support ```python from dotenv import load_dotenv import os load_dotenv() from autogen_agentchat.agents import AssistantAgent from autogen_agentchat.ui import Console from autogen_ext.models.openai import OpenAIChatCompletionClient from autogen_ext.models.anthropic import AnthropicChatCompletionClient model_client = AnthropicChatCompletionClient( model="claude-3-7-sonnet-20250219" ) async def get_weather(city: str) -> str: """Get the weather for a given city.""" return f"The weather in {city} is 73 degrees and Sunny." agent = AssistantAgent( name="weather_agent", model_client=model_client, tools=[get_weather], system_message="You are a helpful assistant.", # model_client_stream=True, ) # Run the agent and stream the messages to the console. async def main() -> None: await Console(agent.run_stream(task="What is the weather in New York?")) await main() ``` result ``` messages = [ UserMessage(content="Write a very short story about a dragon.", source="user"), ] # Create a stream. stream = model_client.create_stream(messages=messages) # Iterate over the stream and print the responses. print("Streamed responses:") async for response in stream: # type: ignore if isinstance(response, str): # A partial response is a string. print(response, flush=True, end="") else: # The last response is a CreateResult object with the complete message. print("\n\n------------\n") print("The complete response:", flush=True) print(response.content, flush=True) print("\n\n------------\n") print("The token usage was:", flush=True) print(response.usage, flush=True) ``` <!-- Please add a reviewer to the assignee section when you create a PR. If you don't have the access to it, we will shortly find a reviewer and assign them to your PR. --> ## Why are these changes needed? <!-- Please give a short summary of the change and the problem this solves. --> ## Related issue number <!-- For example: "Closes #1234" --> Closes #5205 Closes #5708 ## Checks - [ ] I've included any doc changes needed for <https://microsoft.github.io/autogen/>. See <https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to build and test documentation locally. - [ ] I've added tests (if relevant) corresponding to the changes introduced in this PR. - [ ] I've made sure all auto checks have passed. cc @rohanthacker |