Resolves #5192

Test:
```python
import asyncio
import os
from random import randint
from typing import List

from autogen_core.tools import BaseTool, FunctionTool
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.ui import Console


async def get_current_time(city: str) -> str:
    return f"The current time in {city} is {randint(0, 23)}:{randint(0, 59)}."


tools: List[BaseTool] = [
    FunctionTool(
        get_current_time,
        name="get_current_time",
        description="Get current time for a city.",
    ),
]

model_client = OpenAIChatCompletionClient(
    model="anthropic/claude-3.5-haiku-20241022",
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
    model_info={
        "family": "claude-3.5-haiku",
        "function_calling": True,
        "vision": False,
        "json_output": False,
    },
)

agent = AssistantAgent(
    name="Agent",
    model_client=model_client,
    tools=tools,
    system_message="You are an assistant with some tools that can be used to answer some questions",
)


async def main() -> None:
    await Console(agent.run_stream(task="What is current time of Paris and Toronto?"))


asyncio.run(main())
```
```
---------- user ----------
What is current time of Paris and Toronto?
---------- Agent ----------
I'll help you find the current time for Paris and Toronto by using the get_current_time function for each city.
---------- Agent ----------
[FunctionCall(id='toolu_01NwP3fNAwcYKn1x656Dq9xW', arguments='{"city": "Paris"}', name='get_current_time'), FunctionCall(id='toolu_018d4cWSy3TxXhjgmLYFrfRt', arguments='{"city": "Toronto"}', name='get_current_time')]
---------- Agent ----------
[FunctionExecutionResult(content='The current time in Paris is 1:10.', call_id='toolu_01NwP3fNAwcYKn1x656Dq9xW', is_error=False), FunctionExecutionResult(content='The current time in Toronto is 7:28.', call_id='toolu_018d4cWSy3TxXhjgmLYFrfRt', is_error=False)]
---------- Agent ----------
The current time in Paris is 1:10.
The current time in Toronto is 7:28.
```
---------
Co-authored-by: Jack Gerrits <jackgerrits@users.noreply.github.com>
## Why are these changes needed?
Currently, the way to accomplish RAG behavior with AgentChat, specifically with assistant agents, is through the memory interface; however, there is no way to configure it via the declarative API.
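For illustration, a rough sketch of what declarative memory configuration could look like, assuming `ListMemory` from `autogen_core.memory` and the `dump_component`/`load_component` interface on `AssistantAgent` (the specifics are assumptions, not the exact API added by this change):

```python
# Rough sketch only: assumes ListMemory and AssistantAgent both implement the
# declarative Component interface (dump_component / load_component).
from autogen_core.memory import ListMemory
from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient

memory = ListMemory()
agent = AssistantAgent(
    name="assistant",
    model_client=OpenAIChatCompletionClient(model="gpt-4o"),
    memory=[memory],
)

# Round-trip the agent (including its memory) through the declarative config.
config = agent.dump_component()
print(config.model_dump_json(indent=2))
restored = AssistantAgent.load_component(config)
```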
## Related issue number
## Checks
- [ ] I've included any doc changes needed for
https://microsoft.github.io/autogen/. See
https://microsoft.github.io/autogen/docs/Contribute#documentation to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
---------
Co-authored-by: Victor Dibia <chuvidi2003@gmail.com>
Allow AssistantAgent to drop images when not equipped with a multi-modal model.
Adds a corresponding utility function, which can be used in autogen-ext and teams, to accomplish the same.
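As a rough illustration of the behavior (the helper below is hypothetical, not the utility added here): a text-only model client can be protected by stripping image parts out of a `MultiModalMessage` before it reaches the model.

```python
# Hypothetical helper, for illustration only: keep the text parts of a
# multi-modal message and drop the images so a text-only model can consume it.
from autogen_agentchat.messages import MultiModalMessage, TextMessage

def drop_images(message: MultiModalMessage) -> TextMessage:
    text_parts = [part for part in message.content if isinstance(part, str)]
    return TextMessage(
        content="\n".join(text_parts) or "[image removed]",
        source=message.source,
    )
```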
Resolves #3983
* introduce `model_client_stream` parameter in `AssistantAgent` to
enable token-level streaming output (a usage sketch follows this list).
* introduce `ModelClientStreamingChunkEvent` as a type of `AgentEvent`
to pass the streaming chunks to the application via `run_stream` and
`on_messages_stream`. Note that this does not affect the inner messages
list in the final `Response` or `TaskResult`.
* handle this new message type in `Console`.
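A minimal consumption sketch, assuming the flag and event type above (model name and task are placeholders):

```python
# Minimal sketch: stream token chunks from an AssistantAgent via run_stream.
import asyncio
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.messages import ModelClientStreamingChunkEvent
from autogen_ext.models.openai import OpenAIChatCompletionClient

async def main() -> None:
    agent = AssistantAgent(
        name="assistant",
        model_client=OpenAIChatCompletionClient(model="gpt-4o"),
        model_client_stream=True,  # emit ModelClientStreamingChunkEvent chunks
    )
    async for event in agent.run_stream(task="Write a haiku about autumn."):
        if isinstance(event, ModelClientStreamingChunkEvent):
            print(event.content, end="", flush=True)

asyncio.run(main())
```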
## Why are these changes needed?
Make AssistantAgent and Handoff use BaseTool.
This ensures that they can be made declarative and serialized.
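For illustration, a sketch of the resulting surface (the `dump_component` call is an assumption about the serialization path, and the tool/handoff names are made up):

```python
# Sketch: tools and handoffs are both BaseTool-backed, so the agent
# configuration can be serialized to a declarative spec.
from autogen_core.tools import FunctionTool
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.base import Handoff
from autogen_ext.models.openai import OpenAIChatCompletionClient

def lookup_order(order_id: str) -> str:
    return f"Order {order_id} is in transit."

agent = AssistantAgent(
    name="support",
    model_client=OpenAIChatCompletionClient(model="gpt-4o"),
    tools=[FunctionTool(lookup_order, description="Look up an order by id.")],
    handoffs=[Handoff(target="human_agent", description="Escalate to a human.")],
)
config = agent.dump_component()  # possible because tools/handoffs are BaseTool-based
```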
## Related issue number
## Checks
- [ ] I've included any doc changes needed for
https://microsoft.github.io/autogen/. See
https://microsoft.github.io/autogen/docs/Contribute#documentation to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
* Pass context between AssistantAgent for handoffs
* Add parallel tool call test
---------
Co-authored-by: Jack Gerrits <jackgerrits@users.noreply.github.com>
* v1, make assistant agent declarative
* make head tail context declarative
* update and formatting
* update assistant, format updates
* make websurfer declarative
* update formatting
* move declarative docs to advanced section
* remove tools until implemented
* minor updates to termination conditions
* update docs
* initial base memory impl (a usage sketch follows this list)
* update, add example with chromadb
* include mimetype consideration
* add transform method
* update to address feedback, will update after 4681 is merged
* update memory impl
* remove chroma db, typing fixes
* format, add test
* update uv lock
* update docs
* format updates
* update notebook
* add MemoryQueryEvent message, yield message for observability.
* minor fixes, make score optional/none
* Update python/packages/autogen-agentchat/src/autogen_agentchat/agents/_assistant_agent.py
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
* update tests to improve cov
* refactor, move memory to core.
* format fixes
* format updates
* format updates
* fix azure notebook import, other fixes
* update notebook, support str query in Memory protocol
* update test
* update cells
* add specific extensible return types to memory query and update_context
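A rough usage sketch of the resulting Memory protocol, assuming the `ListMemory` implementation and that `query` accepts a plain string (per the notebook update above); model and task are placeholders:

```python
import asyncio
from autogen_core.memory import ListMemory, MemoryContent, MemoryMimeType
from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient

async def main() -> None:
    memory = ListMemory()
    await memory.add(
        MemoryContent(content="The user prefers metric units.", mime_type=MemoryMimeType.TEXT)
    )

    # Direct query with a plain string.
    result = await memory.query("unit preference")
    print(result.results)

    # Attached to an agent, retrieved content is injected into the model
    # context and surfaced as a MemoryQueryEvent for observability.
    agent = AssistantAgent(
        name="assistant",
        model_client=OpenAIChatCompletionClient(model="gpt-4o"),
        memory=[memory],
    )
    await agent.run(task="How far is 10 miles?")

asyncio.run(main())
```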
---------
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
* feat: add support for list of messages as team task input (a usage sketch follows this list)
* Update society of mind agent to use the list input task
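A minimal sketch of the list-of-messages task input, using placeholder content:

```python
import asyncio
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.conditions import MaxMessageTermination
from autogen_agentchat.messages import TextMessage
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_ext.models.openai import OpenAIChatCompletionClient

async def main() -> None:
    agent = AssistantAgent(name="writer", model_client=OpenAIChatCompletionClient(model="gpt-4o"))
    team = RoundRobinGroupChat([agent], termination_condition=MaxMessageTermination(3))
    # The task can be a list of messages instead of a single string.
    result = await team.run(
        task=[
            TextMessage(content="Here is the outline of the report.", source="user"),
            TextMessage(content="Please draft the introduction section.", source="user"),
        ]
    )
    print(result.messages[-1].content)

asyncio.run(main())
```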
---------
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Ryan Sweet <rysweet@microsoft.com>
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
1. convert dataclass types to pydantic basemodel
2. add save_state and load_state for ChatAgent (a usage sketch follows this list)
3. state types for AgentChat
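A brief usage sketch of the state surface (model name and tasks are placeholders):

```python
import asyncio
from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient

async def main() -> None:
    agent = AssistantAgent(name="assistant", model_client=OpenAIChatCompletionClient(model="gpt-4o"))
    await agent.run(task="Remember that my favorite color is teal.")

    # The state is a pydantic-backed mapping, so it serializes to JSON cleanly.
    state = await agent.save_state()

    fresh = AssistantAgent(name="assistant", model_client=OpenAIChatCompletionClient(model="gpt-4o"))
    await fresh.load_state(state)  # restores the prior conversation history
    await fresh.run(task="What is my favorite color?")

asyncio.run(main())
```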
---------
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
* AgentChat pause and resume a task (a usage sketch follows this list)
Resolves #3859
* Add
* Update usage
* Update usage
* WIP to address stateful group chat
* Refactor group chat to add reset and flags for running
* documentation
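A hedged sketch of the pause/resume flow, assuming `ExternalTermination` and the convention that calling `run()` again without a task resumes the previous one:

```python
import asyncio
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.conditions import ExternalTermination, MaxMessageTermination
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_ext.models.openai import OpenAIChatCompletionClient

async def main() -> None:
    agent = AssistantAgent(name="writer", model_client=OpenAIChatCompletionClient(model="gpt-4o"))
    pause = ExternalTermination()
    team = RoundRobinGroupChat([agent], termination_condition=pause | MaxMessageTermination(10))

    # Start the task, then pause it from outside the run loop.
    run = asyncio.create_task(team.run(task="Write a short story, one paragraph at a time."))
    await asyncio.sleep(5)
    pause.set()  # request a stop at the next message boundary
    await run

    # Resume: calling run() again without a task continues the same task.
    await team.run()

asyncio.run(main())
```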