"# Auto Generated Agent Chat: Hierarchy flow using select_speaker\n",
"\n",
"AutoGen offers conversable agents powered by LLMs, tools, or humans, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation.\n",
"Please find documentation about this feature [here](https://microsoft.github.io/autogen/docs/Use-Cases/agent_chat).\n",
"\n",
"This notebook demonstrates how to restrict information flow among agents. Suppose we have the following setup:\n",
"- By limiting which team members can talk to one another, we keep each team focused on its own discussion.\n",
"- Information flow from Team A to Team B goes only through the team leaders (the X1 agents). This is more efficient because agent B2 does not have to see the discussion within Team A.\n",
"\n",
"\n",
"## Requirements\n",
"\n",
"AutoGen requires `Python>=3.8`. To run this notebook example, please install:\n",
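"\n",
"The AutoGen framework is published on PyPI as `pyautogen`:\n",
"\n",
"```bash\n",
"pip install pyautogen\n",
"```\n",
"\n",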
"The [`config_list_from_json`](https://microsoft.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a JSON file.\n",
"It first looks for the environment variable \"OAI_CONFIG_LIST\", which needs to be a valid JSON string. If that variable is not found, it then looks for a JSON file named \"OAI_CONFIG_LIST\". It filters the configs by models (you can filter by other keys as well). Only the gpt-4 models are kept in the list based on the filter condition.\n",
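"\n",
"For example, the filtered load described above might look like this (assuming the \"OAI_CONFIG_LIST\" environment variable or file exists):\n",
"\n",
"```python\n",
"import autogen\n",
"\n",
"# Load the configurations and keep only the gpt-4 family of models.\n",
"config_list_gpt4 = autogen.config_list_from_json(\n",
"    \"OAI_CONFIG_LIST\",\n",
"    filter_dict={\"model\": [\"gpt-4\", \"gpt-4-32k\"]},\n",
")\n",
"```\n",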
"\n",
"The config list looks like the following:\n",
"```python\n",
"config_list = [\n",
" {\n",
" 'model': 'gpt-4',\n",
" 'api_key': '<your OpenAI API key here>',\n",
" },\n",
" {\n",
" 'model': 'gpt-4',\n",
" 'api_key': '<your Azure OpenAI API key here>',\n",
" 'base_url': '<your Azure OpenAI API base here>',\n",
" 'api_type': 'azure',\n",
" 'api_version': '2023-06-01-preview',\n",
" },\n",
" {\n",
" 'model': 'gpt-4-32k',\n",
" 'api_key': '<your Azure OpenAI API key here>',\n",
" 'base_url': '<your Azure OpenAI API base here>',\n",
" 'api_type': 'azure',\n",
" 'api_version': '2023-06-01-preview',\n",
" },\n",
"]\n",
"```\n",
"\n",
"If you open this notebook in Colab, you can upload your files by clicking the file icon on the left panel and then clicking the \"upload file\" icon.\n",
"\n",
"You can set the value of config_list in other ways if you prefer, e.g., loading from a YAML file."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Extending GroupChat\n",
"\n",
"\n",
"- Custom Speaker Selection Logic: The `CustomGroupChat` class allows us to define our own logic for selecting the next speaker in the group chat. The base `GroupChat` class has a default logic that may not be suitable for all scenarios.\n",
"- Content-Driven Speaker Selection: This custom class lets us select the next speaker based on the content of the last message, like \"NEXT: A2\" or \"TERMINATE\". The base `GroupChat` class does not have this capability.\n",
"- Team-Based Logic: The custom class enables team-based logic for speaker selection. It allows the next speaker to be chosen from the same team as the last speaker or from a pool of team leaders, which is something the base `GroupChat` class does not offer.\n",
"- Previous Speaker Exclusion: The `CustomGroupChat` class includes logic to prevent the last speaker and the previous speaker from being selected again immediately, which adds more dynamism to the conversation.\n",
"- Flexibility: Extending the base `GroupChat` class preserves its existing functionality and methods while adding features specific to our needs. This makes the code more modular and easier to maintain.\n",
"- Special Cases Handling: The custom class also handles special cases, like terminating the chat or transitioning to the `User_proxy`, directly within its `select_speaker` method.\n",
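"\n",
"The selection rules above can be sketched roughly as follows. This is a simplified outline under stated assumptions, not the notebook's full implementation; it assumes the base `GroupChat` exposes `agents`, `messages`, and `agent_by_name` (which AutoGen's `GroupChat` class provides):\n",
"\n",
"```python\n",
"import random\n",
"import re\n",
"\n",
"class CustomGroupChat(GroupChat):\n",
"    def select_speaker(self, last_speaker, selector):\n",
"        last_msg = self.messages[-1]['content'] if self.messages else ''\n",
"\n",
"        # Special case: hand control to User_proxy so it can end the chat.\n",
"        if 'TERMINATE' in last_msg:\n",
"            return self.agent_by_name('User_proxy')\n",
"\n",
"        # Content-driven selection: honor an explicit 'NEXT: <name>' suggestion,\n",
"        # unless it names the last speaker again.\n",
"        match = re.search(r'NEXT:\\s*(\\w+)', last_msg)\n",
"        if match and match.group(1) != last_speaker.name:\n",
"            return self.agent_by_name(match.group(1))\n",
"\n",
"        # Team-based fallback: stay within the last speaker's team (same leading\n",
"        # letter in the name), excluding the last and the previous speakers.\n",
"        prev = self.messages[-2].get('name') if len(self.messages) > 1 else None\n",
"        team = [a for a in self.agents if a.name[0] == last_speaker.name[0]]\n",
"        candidates = [a for a in team if a.name not in (last_speaker.name, prev)]\n",
"        return random.choice(candidates) if candidates else last_speaker\n",
"```\n",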
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import autogen\n",
"from autogen import AssistantAgent\n",
"\n",
"agents_A = [\n",
" AssistantAgent(name='A1', \n",
" system_message=\"You are a team leader A1, your team consists of A2, A3. You can talk to the other team leader B1, whose team member is B2.\",\n",
" llm_config=llm_config),\n",
" AssistantAgent(name='A2', \n",
" system_message=\"You are team member A2, you know the secret value of x but not y; x = 9. Tell the others x to cooperate.\",\n",
" llm_config=llm_config),\n",
" AssistantAgent(name='A3', \n",
" system_message=\"You are team member A3, you know the secret value of y but not x; y = 5. Tell the others y to cooperate.\",\n",
" llm_config=llm_config)\n",
"]\n",
"\n",
"agents_B = [\n",
" AssistantAgent(name='B1', \n",
" system_message=\"You are a team leader B1, your team consists of B2. You can talk to the other team leader A1, whose team members are A2 and A3. Use NEXT: A1 to suggest talking to A1.\",\n",
" llm_config=llm_config),\n",
" AssistantAgent(name='B2', \n",
" system_message=\"You are team member B2. Your task is to find out the value of x and y and compute the product. Once you have the answer, state the answer and then append a new line containing TERMINATE.\",\n",
" llm_config=llm_config)\n",
"]\n",
"\n",
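"# Assumed helper (defined here as a sketch for completeness): a message is treated\n",
"# as terminating when its content ends with the word TERMINATE, as agent B2 is\n",
"# instructed to do.\n",
"is_termination_msg = lambda x: isinstance(x.get('content'), str) and x['content'].rstrip().endswith('TERMINATE')\n",
"\n",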
"# Terminates the conversation when TERMINATE is detected.\n",
"user_proxy = autogen.UserProxyAgent(\n",
" name=\"User_proxy\",\n",
" system_message=\"Terminator admin.\",\n",
" code_execution_config=False,\n",
" is_termination_msg=is_termination_msg,\n",
" human_input_mode=\"NEVER\")\n",
"\n",
"list_of_agents = agents_A + agents_B\n",
"list_of_agents.append(user_proxy)\n",
"\n",
"# Create CustomGroupChat\n",
"group_chat = CustomGroupChat(\n",
" agents=list_of_agents, # Include all agents\n",
" messages=['Everyone cooperate and help agent B2 in his task. Team A has A1, A2, A3. Team B has B1, B2. Only members of the same team can talk to one another. Only team leaders (names ending with 1) can talk amongst themselves. You must use \"NEXT: B1\" to suggest talking to B1 for example; You can suggest only one person, you cannot suggest yourself or the previous speaker; You may also choose not to suggest anyone.'],\n",
"llm_config = {\"config_list\": config_list_gpt4, \"cache_seed\": None}  # cache_seed is None so we can observe whether the communication pattern differs when the group chat is rerun.\n",