26 Commits

Author SHA1 Message Date
Eric Zhu
615882ccc8 Use SecretStr type for api key (#5939)
To prevent accidental export of API keys
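A minimal sketch of the idea, assuming a pydantic-based client config (the field layout here is illustrative, not the actual client config):

```python
from pydantic import BaseModel, SecretStr


class ClientConfig(BaseModel):
    # Hypothetical config model for illustration only.
    api_key: SecretStr


config = ClientConfig(api_key=SecretStr("sk-..."))
print(config)                             # api_key=SecretStr('**********') -- masked in repr
print(config.model_dump())                # still masked when dumped
print(config.api_key.get_secret_value())  # raw key only on explicit request
```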
2025-03-13 21:48:09 -07:00
Eitan Yarmush
817f728d04
add LLMStreamStartEvent and LLMStreamEndEvent (#5890)
These changes are needed because there is currently no way to get logging information about streaming LLM requests/responses.

I decided to put the StreamStart event AFTER the first chunk so there
aren't false positives about connections/auth.
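
A hedged sketch of consuming these events, assuming they are emitted on the standard event logger alongside LLMCallEvent (the import path is an assumption):

```python
import logging

from autogen_core import EVENT_LOGGER_NAME
from autogen_core.logging import LLMStreamEndEvent, LLMStreamStartEvent  # assumed location


class StreamEventHandler(logging.Handler):
    """Print when a streaming LLM call starts (after the first chunk) and ends."""

    def emit(self, record: logging.LogRecord) -> None:
        if isinstance(record.msg, LLMStreamStartEvent):
            print("stream started:", record.msg)
        elif isinstance(record.msg, LLMStreamEndEvent):
            print("stream ended:", record.msg)


logger = logging.getLogger(EVENT_LOGGER_NAME)
logger.setLevel(logging.INFO)
logger.addHandler(StreamEventHandler())
```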

Closes #5730
---------

Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2025-03-11 15:02:46 -07:00
Eric Zhu
740afe5b61
Add ToolCallEvent and log it from all builtin tools (#5859)
Resolves #5745

Also made sure to log LLMCallEvent from all builtin model clients, and added a unit test for coverage.
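
A short, hedged sketch of collecting these logs, reusing the same event-logger pattern (the ToolCallEvent import path is an assumption):

```python
import logging
from typing import List

from autogen_core import EVENT_LOGGER_NAME
from autogen_core.logging import ToolCallEvent  # assumed location

collected: List[ToolCallEvent] = []


class ToolCallCollector(logging.Handler):
    """Collect ToolCallEvent records emitted by the builtin tools."""

    def emit(self, record: logging.LogRecord) -> None:
        if isinstance(record.msg, ToolCallEvent):
            collected.append(record.msg)


logging.getLogger(EVENT_LOGGER_NAME).addHandler(ToolCallCollector())
```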

---------

Co-authored-by: Ryan Sweet <rysweet@microsoft.com>
Co-authored-by: Victor Dibia <victordibia@microsoft.com>
2025-03-07 16:04:45 -08:00
Leonardo Pinheiro
906b09e451
fix: Update SKChatCompletionAdapter message conversion (#5749)

## Why are these changes needed?

The PR introduces two changes.

The first change is adding a `name` attribute to `FunctionExecutionResult`. The motivation is that Semantic Kernel requires it for its function result interface, and it seemed like an easy modification since `FunctionExecutionResult` is always created in the context of a `FunctionCall`, which contains the name. I'm unsure whether there was a reason to keep it out, but this change makes it easier to trace which tool the result refers to and also increases API compatibility with SK.

The second change is an update to how messages are mapped from AutoGen to Semantic Kernel, which includes an update/fix in the processing of function results.
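
A minimal sketch of the first change, assuming the new field is simply populated from the originating call (exact field defaults may differ):

```python
from autogen_core import FunctionCall
from autogen_core.models import FunctionExecutionResult

call = FunctionCall(id="call_1", name="get_current_time", arguments='{"city": "Paris"}')

# The result now carries the tool name from the call that produced it,
# which makes tracing easier and matches SK's function result interface.
result = FunctionExecutionResult(
    call_id=call.id,
    name=call.name,
    content="The current time in Paris is 1:10.",
    is_error=False,
)
```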

## Related issue number

Related to #5675 but won't fix the underlying issue of Anthropic requiring tools during AssistantAgent reflection.

---------

Co-authored-by: Leonardo Pinheiro <lpinheiro@microsoft.com>
2025-03-03 23:05:54 +00:00
Eric Zhu
9fd8eefc55
fix: Structured output with tool calls for OpenAIChatCompletionClient (#5671)
Resolves: #5568

Also, refactored some unit tests.

Integration tests against OpenAI endpoint passed:
https://github.com/microsoft/autogen/actions/runs/13484492096

Co-authored-by: Jack Gerrits <jackgerrits@users.noreply.github.com>
2025-02-24 14:18:46 +00:00
Eric Zhu
7784f44ea6
feat: Add thought process handling in tool calls and expose ThoughtEvent through stream in AgentChat (#5500)
Resolves #5192

Test

```python
import asyncio
import os
from random import randint
from typing import List
from autogen_core.tools import BaseTool, FunctionTool
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.ui import Console

async def get_current_time(city: str) -> str:
    return f"The current time in {city} is {randint(0, 23)}:{randint(0, 59)}."

tools: List[BaseTool] = [
    FunctionTool(
        get_current_time,
        name="get_current_time",
        description="Get current time for a city.",
    ),
]

model_client = OpenAIChatCompletionClient(
    model="anthropic/claude-3.5-haiku-20241022",
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
    model_info={
        "family": "claude-3.5-haiku",
        "function_calling": True,
        "vision": False,
        "json_output": False,
    }
)

agent = AssistantAgent(
    name="Agent",
    model_client=model_client,
    tools=tools,
    system_message="You are an assistant with some tools that can be used to answer some questions",
)

async def main() -> None:
    await Console(agent.run_stream(task="What is current time of Paris and Toronto?"))

asyncio.run(main())
```

```
---------- user ----------
What is current time of Paris and Toronto?
---------- Agent ----------
I'll help you find the current time for Paris and Toronto by using the get_current_time function for each city.
---------- Agent ----------
[FunctionCall(id='toolu_01NwP3fNAwcYKn1x656Dq9xW', arguments='{"city": "Paris"}', name='get_current_time'), FunctionCall(id='toolu_018d4cWSy3TxXhjgmLYFrfRt', arguments='{"city": "Toronto"}', name='get_current_time')]
---------- Agent ----------
[FunctionExecutionResult(content='The current time in Paris is 1:10.', call_id='toolu_01NwP3fNAwcYKn1x656Dq9xW', is_error=False), FunctionExecutionResult(content='The current time in Toronto is 7:28.', call_id='toolu_018d4cWSy3TxXhjgmLYFrfRt', is_error=False)]
---------- Agent ----------
The current time in Paris is 1:10.
The current time in Toronto is 7:28.
```

---------

Co-authored-by: Jack Gerrits <jackgerrits@users.noreply.github.com>
2025-02-21 13:58:32 -08:00
Eric Zhu
ec314c586c
feat: Add strict mode support to BaseTool, ToolSchema and FunctionTool (#5507)
Resolves #4447

The `openai` client's structured output support is provided through its beta client, which requires the function JSON schema to be strict when in structured output mode.

Reference:
https://platform.openai.com/docs/guides/function-calling#strict-mode
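
A hedged sketch of opting a tool into strict mode (the `strict` keyword is assumed from this PR; everything else is illustrative):

```python
from autogen_core.tools import FunctionTool


async def get_weather(city: str) -> str:
    return f"Sunny in {city}."


# With strict=True the tool advertises an OpenAI "strict" JSON schema
# (all properties required, no additional properties), which the beta
# structured-output client expects.
weather_tool = FunctionTool(
    get_weather,
    name="get_weather",
    description="Get the weather for a city.",
    strict=True,  # assumed keyword added by this PR
)
```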
2025-02-13 19:44:55 +00:00
wistuba
7a772a2fcd
feat: add indicator for tool failure to FunctionExecutionResult (#5428)
Some LLMs receive an explicit signal about tool-use failures.

Closes #5273

Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2025-02-09 21:57:50 -08:00
Eric Zhu
9a028acf9f
feat: enhance Gemini model support in OpenAI client and tests (#5461) 2025-02-09 10:12:59 -08:00
afourney
0b659de36d
Mitigates #5401 by optionally prepending names to messages. (#5448)

Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2025-02-08 07:04:24 +00:00
Eric Zhu
569bc19769
feat: add gemini model families, enhance group chat selection for Gemini model and add tests (#5334)
Resolves #5322
2025-02-03 18:32:35 +00:00
Eric Zhu
f656ff1e01
feat: Support R1 reasoning text in model create result; enhance API docs (#5262)
Resolves #5255 

---------

Co-authored-by: afourney <adamfo@microsoft.com>
2025-01-30 11:03:54 -08:00
Eric Zhu
44db2cc1fb
fix: handle non-string function arguments in tool calls and add corresponding warnings (#5260) 2025-01-30 16:49:22 +00:00
Eric Zhu
b441d5b43a
fix: Enhance OpenAI client to handle additional stop reasons and improve tool call validation in tests to address empty tool_calls list. (#5223)
Resolves #5222
2025-01-27 21:16:47 +00:00
Eric Zhu
da1c2bf12e
fix: use tool_calls field to detect tool calls in OpenAI client; add integration tests for OpenAI and Gemini (#5122)
* fix: use tool_calls field to detect tool calls in OpenAI client

* Add unit tests for tool calling; and integration tests for openai and gemini
2025-01-21 09:06:19 -05:00
Eric Zhu
af420a83e2
fix: ensure proper handling of structured output in OpenAI client and improve test coverage for structured output (#5116) 2025-01-20 20:54:39 +00:00
Jack Gerrits
cb1633b501
feat!: Add support for model family specification (#4856)
* Add support for model family specification

* spelling mistake

* lint, etc

* fixes
2024-12-30 15:09:21 -05:00
Sachin Joglekar
3b4dd6e050
Support custom models with OpenAI client (#4808) 2024-12-24 13:04:23 -08:00
Leonardo Pinheiro
253fe216fd
Add models.openai and tools.langchain namespaces (#4601)
* add models.openai namespace

* refactor tools namespace

* update lock file

* revert pyproject changes

* update docs and add cast

* update ext models doc ref

* increase underline

* add reply models namespace

* update imports

* fix test

* linting

* fix missing conflicts

* revert pydantic changes

* rename to replay

* replay

* fix reply

* Fix test

* formatting

* example

---------

Co-authored-by: Leonardo Pinheiro <lpinheiro@microsoft.com>
Co-authored-by: Jack Gerrits <jack@jackgerrits.com>
Co-authored-by: Jack Gerrits <jackgerrits@users.noreply.github.com>
2024-12-09 19:18:09 -08:00
Jack Gerrits
218e84fd8e
Migrate remaining components (#4626) 2024-12-09 18:39:07 -08:00
Jack Gerrits
87011ae01b
Migrate model context and models modules out of components (#4613)
* Move model context out of components

* move models out of components

* rename docs file
2024-12-09 10:00:08 -08:00
Jack Gerrits
3022369eeb
Flatten core base and components (#4513)
* Flatten core base and components

* remove extra files

* dont export from deprecated locations

* format

* fmt
2024-12-03 17:00:44 -08:00
Jack Gerrits
b2ae4d1203
Add warnings for deprecated azure oai config changes (#4317)
* Add warnings for deprecated azure oai config changes

* Update docs and usages, simplify capabilities
2024-11-25 09:34:52 -08:00
Leonardo Pinheiro
ac53961bc8
Delete autogen-ext refactor deprecations (#4305)
* delete files and update dependencies

* add explicit config exports

* ignore mypy error on nb

---------

Co-authored-by: Leonardo Pinheiro <lpinheiro@microsoft.com>
Co-authored-by: Jack Gerrits <jackgerrits@users.noreply.github.com>
2024-11-22 11:29:39 -05:00
Anthony Uphof
87bd1de396
Fix: provide valid Prompt and Completion Token usage counts from create_stream (#3972)
* Fix: `create_stream` to return valid usage token counts
* documentation
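
A hedged sketch of reading the now-valid counts from the final `CreateResult` yielded by `create_stream` (import paths follow current `autogen_core.models`, which is an assumption for this older commit):

```python
from autogen_core.models import ChatCompletionClient, CreateResult, UserMessage


async def stream_with_usage(client: ChatCompletionClient) -> None:
    # create_stream yields string chunks followed by a final CreateResult.
    async for item in client.create_stream([UserMessage(content="Hello", source="user")]):
        if isinstance(item, CreateResult):
            # With this fix the final result carries real token counts.
            print("prompt tokens:", item.usage.prompt_tokens)
            print("completion tokens:", item.usage.completion_tokens)
        else:
            print(item, end="", flush=True)
```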

---------

Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2024-10-29 16:20:03 -07:00
Leonardo Pinheiro
38f62e1609
migrate models (#3848)
* migrate models

* Update python/packages/autogen-agentchat/src/autogen_agentchat/agents/_tool_use_assistant_agent.py

Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>

* refactor missing imports

* ignore type check errors

* Update python/packages/autogen-ext/src/autogen_ext/models/_openai/_model_info.py

Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>

* update packages index page

---------

Co-authored-by: Leonardo Pinheiro <lpinheiro@microsoft.com>
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2024-10-22 11:40:41 -04:00