
AutoGen for .NET
There are two sets of packages here:
- AutoGen.*: the older packages derived from AutoGen 0.2 for .NET. These will gradually be deprecated and ported into the new packages.
- Microsoft.AutoGen.*: the new packages for .NET that use the event-driven model. These APIs are not yet stable and are subject to change.
To get started with the new packages, please see the samples and in particular the Hello sample.
You can install both new and old packages from the following feeds:
Note
Nightly build is available at:
First, follow the installation guide to install the AutoGen packages.
Then you can start with the following code snippet to create a conversable agent and chat with it.
```csharp
using AutoGen;
using AutoGen.OpenAI;

var openAIKey = Environment.GetEnvironmentVariable("OPENAI_API_KEY") ?? throw new Exception("Please set OPENAI_API_KEY environment variable.");
var gpt35Config = new OpenAIConfig(openAIKey, "gpt-3.5-turbo");

var assistantAgent = new AssistantAgent(
    name: "assistant",
    systemMessage: "You are an assistant that helps the user to do some tasks.",
    llmConfig: new ConversableAgentConfig
    {
        Temperature = 0,
        ConfigList = [gpt35Config],
    })
    .RegisterPrintMessage(); // register a hook to print messages nicely to the console

// set human input mode to ALWAYS so that the user always provides input
var userProxyAgent = new UserProxyAgent(
    name: "user",
    humanInputMode: ConversableAgent.HumanInputMode.ALWAYS)
    .RegisterPrintMessage();

// start the conversation
await userProxyAgent.InitiateChatAsync(
    receiver: assistantAgent,
    message: "Hey assistant, please do me a favor.",
    maxRound: 10);
```
Samples
You can find more examples under the sample project.
Functionality
- ConversableAgent
  - function call
  - code execution (dotnet only, powered by dotnet-interactive)
- Agent communication
  - Two-agent chat
  - Group chat
- Enhanced LLM Inferences
- Exclusive for dotnet
  - Source generator for type-safe function definition generation (see the sketch below)
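As a rough illustration of the function-call and source-generator features, here is a minimal sketch. It assumes the `Function` attribute from the `AutoGen.Core` namespace and a hypothetical `WeatherFunctions` class; the exact generated member names may vary between versions, so treat this as an outline rather than the definitive API.

```csharp
using System.Threading.Tasks;
using AutoGen.Core;

// The class is declared partial so the AutoGen source generator can emit the
// type-safe function contract and wrapper members alongside this method.
public partial class WeatherFunctions
{
    /// <summary>
    /// Get a weather report for a city.
    /// </summary>
    /// <param name="city">The city name.</param>
    [Function]
    public async Task<string> GetWeather(string city)
    {
        // A real implementation would call a weather service; this is a placeholder.
        return await Task.FromResult($"The weather in {city} is sunny.");
    }
}
```

The generated contract can then be supplied to an agent's function-call configuration; see the sample project for complete, up-to-date usage.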