mirror of https://github.com/microsoft/autogen.git
reply func renaming
This commit is contained in:
parent 409fec0dfb
commit 9233bf077f
@@ -26,7 +26,7 @@ We have designed a generic `ConversableAgent` class for Agents that are capable
- The `UserProxyAgent` is conceptually a proxy agent for humans, soliciting human input as the agent's reply at each interaction turn by default and also having the capability to execute code and call functions. The `UserProxyAgent` triggers code execution automatically when it detects an executable code block in the received message and no human user input is provided. Code execution can be disabled by setting `code_execution_config` to False. LLM-based response is disabled by default. It can be enabled by setting `llm_config` to a dict corresponding to the [inference](/docs/Use-Cases/enhanced_inference) configuration. When `llm_config` is set to a dict, `UserProxyAgent` can generate replies using an LLM when code execution is not performed.
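For reference, a minimal sketch (not part of this commit) of how the `code_execution_config` and `llm_config` options described above can be combined on a `UserProxyAgent`; the model name, working directory, and environment variable below are assumptions:

```python
import os
from autogen import UserProxyAgent

# Code execution enabled (the default behavior described above): the agent runs
# any executable code block it receives when no human input is provided.
user_proxy = UserProxyAgent(
    name="user_proxy",
    human_input_mode="ALWAYS",
    code_execution_config={"work_dir": "coding", "use_docker": False},
)

# Code execution disabled, LLM-based replies enabled via llm_config instead.
user_proxy_llm = UserProxyAgent(
    name="user_proxy_llm",
    human_input_mode="NEVER",
    code_execution_config=False,
    llm_config={"config_list": [{"model": "gpt-4", "api_key": os.environ.get("OPENAI_API_KEY")}]},
)
```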
The auto-reply capability of `ConversableAgent` allows for more autonomous multi-agent communication while retaining the possibility of human intervention.
-One can also easily extend it by registering auto_reply functions with the `register_reply()` method.
+One can also easily extend it by registering reply functions with the `register_reply()` method.
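As an illustration of the renamed API, a custom reply function can be registered as sketched below; the trigger and the reply function are examples, not taken from this commit:

```python
from autogen import ConversableAgent

def print_and_pass_on(recipient, messages=None, sender=None, config=None):
    """Log the last received message, then let the remaining reply functions run."""
    print(f"{recipient.name} received: {messages[-1]['content']}")
    # Returning (False, None) means this reply is not final, so the agent
    # continues with its other registered reply functions (LLM, code execution, ...).
    return False, None

agent = ConversableAgent(name="agent", llm_config=False)
# Trigger on messages from any ConversableAgent (or when there is no sender);
# position=0 puts the function at the front of the reply-function list, so it runs first.
agent.register_reply([ConversableAgent, None], reply_func=print_and_pass_on, position=0)
```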
In the following code, we create an `AssistantAgent` named "assistant" to serve as the assistant and a `UserProxyAgent` named "user_proxy" to serve as a proxy for the human user. We will later employ these two agents to solve a task.
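The code referred to above is not shown in this hunk; a minimal sketch of that setup, assuming pyautogen with an `OAI_CONFIG_LIST` file and an illustrative task prompt, looks like:

```python
import autogen

# Load model configurations from the OAI_CONFIG_LIST env var or file.
config_list = autogen.config_list_from_json("OAI_CONFIG_LIST")

assistant = autogen.AssistantAgent(name="assistant", llm_config={"config_list": config_list})
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=10,
    code_execution_config={"work_dir": "coding", "use_docker": False},
)

# The user proxy starts the chat; the assistant replies (e.g. with code),
# and the user proxy auto-replies by executing the code and reporting results.
user_proxy.initiate_chat(assistant, message="Plot a chart of NVDA and TESLA stock price change YTD.")
```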