{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Cohere\n",
"\n",
"[Cohere](https://cohere.com/) is a cloud-based platform serving their own LLMs, in particular the Command family of models.\n",
"\n",
"Cohere's API differs from OpenAI's, which is the native API used by AutoGen, so to use Cohere's LLMs with AutoGen you need to use this client class.\n",
"\n",
"You will need a Cohere account and an API key. [See their website for further details](https://cohere.com/)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Features\n",
"\n",
"When using this client class, AutoGen's messages are automatically tailored to accommodate the specific requirements of Cohere's API.\n",
"\n",
"Additionally, this client class provides support for function/tool calling and will track token usage and cost correctly as per Cohere's API costs (as of July 2024).\n",
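"\n",
"As a quick illustration of the cost tracking, the cost recorded for a chat can be inspected afterwards. This is a minimal sketch, assuming `chat_result` is the `ChatResult` returned by `initiate_chat` (shown later in this notebook):\n",
"\n",
"```python\n",
"# chat_result.cost holds the usage and cost summary tracked by the client\n",
"print(chat_result.cost)\n",
"```\n",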
"\n",
"## Getting started\n",
"\n",
"First, you need to install the `pyautogen` package with Cohere support to use AutoGen with Cohere's API.\n",
"\n",
"``` bash\n",
"pip install pyautogen[cohere]\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Cohere provides a number of models to use; several are included in the sample configuration below. See the full list of [models here](https://docs.cohere.com/docs/models).\n",
"\n",
"See the sample `OAI_CONFIG_LIST` below showing how the Cohere client class is used by specifying the `api_type` as `cohere`.\n",
"\n",
"```python\n",
"[\n",
"    {\n",
"        \"model\": \"gpt-35-turbo\",\n",
"        \"api_key\": \"your OpenAI Key goes here\",\n",
"    },\n",
"    {\n",
"        \"model\": \"gpt-4-vision-preview\",\n",
"        \"api_key\": \"your OpenAI Key goes here\",\n",
"    },\n",
"    {\n",
"        \"model\": \"dalle\",\n",
"        \"api_key\": \"your OpenAI Key goes here\",\n",
"    },\n",
"    {\n",
"        \"model\": \"command-r-plus\",\n",
"        \"api_key\": \"your Cohere API Key goes here\",\n",
"        \"api_type\": \"cohere\"\n",
"    },\n",
"    {\n",
"        \"model\": \"command-r\",\n",
"        \"api_key\": \"your Cohere API Key goes here\",\n",
"        \"api_type\": \"cohere\"\n",
"    },\n",
"    {\n",
"        \"model\": \"command\",\n",
"        \"api_key\": \"your Cohere API Key goes here\",\n",
"        \"api_type\": \"cohere\"\n",
"    }\n",
"]\n",
"```\n",
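"\n",
"If your `OAI_CONFIG_LIST` mixes providers as above, you can filter it down to just the Cohere entries with AutoGen's standard helper. A sketch, assuming the configuration is stored in a file or environment variable named `OAI_CONFIG_LIST`:\n",
"\n",
"```python\n",
"import autogen\n",
"\n",
"# Keep only the entries whose api_type is 'cohere'\n",
"config_list = autogen.config_list_from_json(\n",
"    env_or_file=\"OAI_CONFIG_LIST\",\n",
"    filter_dict={\"api_type\": [\"cohere\"]},\n",
")\n",
"```"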
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an alternative to the `api_key` key and value in the config, you can set the environment variable `COHERE_API_KEY` to your Cohere key.\n",
"\n",
"Linux/Mac:\n",
"``` bash\n",
"export COHERE_API_KEY=\"your_cohere_api_key_here\"\n",
"```\n",
"\n",
"Windows:\n",
"``` bash\n",
"set COHERE_API_KEY=your_cohere_api_key_here\n",
"```\n",
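"\n",
"Before running the examples below, you can quickly confirm the variable is visible to Python (a small optional check):\n",
"\n",
"```python\n",
"import os\n",
"\n",
"# Fails fast if the key has not been set in this environment\n",
"assert os.environ.get(\"COHERE_API_KEY\"), \"COHERE_API_KEY is not set\"\n",
"```\n",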
"\n",
"## API parameters\n",
"\n",
"The following parameters can be added to your config for the Cohere API. See [this link](https://docs.cohere.com/reference/chat) for further information on them and their default values.\n",
"\n",
"- temperature (number > 0)\n",
"- p (number 0.01..0.99)\n",
"- k (number 0..500)\n",
"- max_tokens (null, integer >= 0)\n",
"- seed (null, integer)\n",
"- frequency_penalty (number 0..1)\n",
"- presence_penalty (number 0..1)\n",
"- client_name (null, string)\n",
"\n",
"Example:\n",
"```python\n",
"[\n",
"    {\n",
"        \"model\": \"command-r\",\n",
"        \"api_key\": \"your Cohere API Key goes here\",\n",
"        \"api_type\": \"cohere\",\n",
"        \"client_name\": \"autogen-cohere\",\n",
"        \"temperature\": 0.5,\n",
"        \"p\": 0.2,\n",
"        \"k\": 100,\n",
"        \"max_tokens\": 2048,\n",
"        \"seed\": 42,\n",
"        \"frequency_penalty\": 0.5,\n",
"        \"presence_penalty\": 0.2\n",
"    }\n",
"]\n",
"```\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Two-Agent Coding Example\n",
"\n",
"In this example, we run a two-agent chat with an AssistantAgent (primarily a coding agent) to generate code that counts the number of prime numbers between 1 and 10,000, which will then be executed.\n",
"\n",
"We'll use Cohere's Command R model, which is suitable for coding."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"config_list = [\n",
"    {\n",
"        # Let's choose the Command-R model\n",
"        \"model\": \"command-r\",\n",
"        # Provide your Cohere API key here or put it in the COHERE_API_KEY environment variable.\n",
"        \"api_key\": os.environ.get(\"COHERE_API_KEY\"),\n",
"        # We specify the API type as 'cohere' so it uses the Cohere client class\n",
"        \"api_type\": \"cohere\",\n",
"    }\n",
"]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Importantly, we have tweaked the system message so that the model doesn't return the termination keyword (which we've changed to FINISH) together with the code block."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/usr/local/lib/python3.11/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n",
"  from .autonotebook import tqdm as notebook_tqdm\n"
]
}
],
"source": [
"from pathlib import Path\n",
"\n",
"from autogen import AssistantAgent, UserProxyAgent\n",
"from autogen.coding import LocalCommandLineCodeExecutor\n",
"\n",
"# Setting up the code executor\n",
"workdir = Path(\"coding\")\n",
"workdir.mkdir(exist_ok=True)\n",
"code_executor = LocalCommandLineCodeExecutor(work_dir=workdir)\n",
"\n",
"# Setting up the agents\n",
"\n",
"# The UserProxyAgent will execute the code that the AssistantAgent provides\n",
"user_proxy_agent = UserProxyAgent(\n",
"    name=\"User\",\n",
"    code_execution_config={\"executor\": code_executor},\n",
"    is_termination_msg=lambda msg: \"FINISH\" in (msg.get(\"content\") or \"\"),\n",
")\n",
"\n",
"system_message = \"\"\"You are a helpful AI assistant who writes code and the user executes it.\n",
"Solve tasks using your coding and language skills.\n",
"In the following cases, suggest python code (in a python coding block) for the user to execute.\n",
"Solve the task step by step if you need to. If a plan is not provided, explain your plan first. Be clear which step uses code, and which step uses your language skill.\n",
"When using code, you must indicate the script type in the code block. The user cannot provide any other feedback or perform any other action beyond executing the code you suggest. The user can't modify your code. So do not suggest incomplete code which requires users to modify. Don't use a code block if it's not intended to be executed by the user.\n",
"Don't include multiple code blocks in one response. Do not ask users to copy and paste the result. Instead, use 'print' function for the output when relevant. Check the execution result returned by the user.\n",
"If the result indicates there is an error, fix the error and output the code again. Suggest the full code instead of partial code or code changes. If the error can't be fixed or if the task is not solved even after the code is executed successfully, analyze the problem, revisit your assumption, collect additional info you need, and think of a different approach to try.\n",
"When you find an answer, verify the answer carefully. Include verifiable evidence in your response if possible.\n",
"IMPORTANT: Wait for the user to execute your code and then you can reply with the word \"FINISH\". DO NOT OUTPUT \"FINISH\" after your code block.\"\"\"\n",
"\n",
"# The AssistantAgent, using Cohere's model, will take the coding request and return code\n",
"assistant_agent = AssistantAgent(\n",
"    name=\"Cohere Assistant\",\n",
"    system_message=system_message,\n",
"    llm_config={\"config_list\": config_list},\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[33mUser\u001b[0m (to Cohere Assistant):\n",
"\n",
"Provide code to count the number of prime numbers from 1 to 10000.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mCohere Assistant\u001b[0m (to User):\n",
"\n",
"Here's the code to count the number of prime numbers from 1 to 10,000:\n",
"```python\n",
"# Prime Number Counter\n",
"count = 0\n",
"for num in range(2, 10001):\n",
"    if num > 1:\n",
"        for div in range(2, num):\n",
"            if (num % div) == 0:\n",
"                break\n",
"        else:\n",
"            count += 1\n",
"print(count)\n",
"```\n",
"\n",
"My plan is to use two nested loops. The outer loop iterates through numbers from 2 to 10,000. The inner loop checks if there's any divisor for the current number in the range from 2 to the number itself. If there's no such divisor, the number is prime and the counter is incremented.\n",
"\n",
"Please execute the code and let me know the output.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[31m\n",
">>>>>>>> NO HUMAN INPUT RECEIVED.\u001b[0m\n",
"\u001b[31m\n",
">>>>>>>> USING AUTO REPLY...\u001b[0m\n",
"\u001b[31m\n",
">>>>>>>> EXECUTING CODE BLOCK (inferred language is python)...\u001b[0m\n",
"\u001b[33mUser\u001b[0m (to Cohere Assistant):\n",
"\n",
"exitcode: 0 (execution succeeded)\n",
"Code output: 1229\n",
"\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mCohere Assistant\u001b[0m (to User):\n",
"\n",
"That's correct! The code you executed successfully found 1229 prime numbers within the specified range.\n",
"\n",
"FINISH.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[31m\n",
">>>>>>>> NO HUMAN INPUT RECEIVED.\u001b[0m\n"
]
}
],
"source": [
"# Start the chat, with the UserProxyAgent asking the AssistantAgent the message\n",
"chat_result = user_proxy_agent.initiate_chat(\n",
"    assistant_agent,\n",
"    message=\"Provide code to count the number of prime numbers from 1 to 10000.\",\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Tool Call Example\n",
"\n",
"In this example, instead of writing code, we will show how Cohere's Command R+ model can perform parallel tool calling, where it recommends calling more than one tool at a time.\n",
"\n",
"We'll use a simple travel agent assistant program where we have a couple of tools for weather and currency conversion.\n",
"\n",
"We start by importing libraries and setting up our configuration to use Command R+ and the `cohere` client class."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"import os\n",
"from typing import Literal\n",
"\n",
"from typing_extensions import Annotated\n",
"\n",
"import autogen\n",
"\n",
"config_list = [\n",
"    {\"api_type\": \"cohere\", \"model\": \"command-r-plus\", \"api_key\": os.getenv(\"COHERE_API_KEY\"), \"cache_seed\": None}\n",
"]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Create our two agents."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"# Create the agent for tool calling\n",
"chatbot = autogen.AssistantAgent(\n",
"    name=\"chatbot\",\n",
"    system_message=\"\"\"For currency exchange and weather forecasting tasks,\n",
"    only use the functions you have been provided with.\n",
"    Output 'HAVE FUN!' when an answer has been provided.\"\"\",\n",
"    llm_config={\"config_list\": config_list},\n",
")\n",
"\n",
"# Note that we have changed the termination string to be \"HAVE FUN!\"\n",
"user_proxy = autogen.UserProxyAgent(\n",
"    name=\"user_proxy\",\n",
"    is_termination_msg=lambda x: x.get(\"content\", \"\") and \"HAVE FUN!\" in x.get(\"content\", \"\"),\n",
"    human_input_mode=\"NEVER\",\n",
"    max_consecutive_auto_reply=1,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Create the two functions, annotating them so that their descriptions can be passed through to the LLM.\n",
"\n",
"We associate them with the agents using `register_for_execution` for the user_proxy, so it can execute the function, and `register_for_llm` for the chatbot (powered by the LLM), so it can pass the function definitions to the LLM."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"# Currency Exchange function\n",
"\n",
"CurrencySymbol = Literal[\"USD\", \"EUR\"]\n",
"\n",
"# Define our function that we expect to call\n",
"\n",
"\n",
"def exchange_rate(base_currency: CurrencySymbol, quote_currency: CurrencySymbol) -> float:\n",
"    if base_currency == quote_currency:\n",
"        return 1.0\n",
"    elif base_currency == \"USD\" and quote_currency == \"EUR\":\n",
"        return 1 / 1.1\n",
"    elif base_currency == \"EUR\" and quote_currency == \"USD\":\n",
"        return 1.1\n",
"    else:\n",
"        raise ValueError(f\"Unknown currencies {base_currency}, {quote_currency}\")\n",
"\n",
"\n",
"# Register the function with the agent\n",
"\n",
"\n",
"@user_proxy.register_for_execution()\n",
"@chatbot.register_for_llm(description=\"Currency exchange calculator.\")\n",
"def currency_calculator(\n",
"    base_amount: Annotated[float, \"Amount of currency in base_currency\"],\n",
"    base_currency: Annotated[CurrencySymbol, \"Base currency\"] = \"USD\",\n",
"    quote_currency: Annotated[CurrencySymbol, \"Quote currency\"] = \"EUR\",\n",
") -> str:\n",
"    quote_amount = exchange_rate(base_currency, quote_currency) * base_amount\n",
"    return f\"{format(quote_amount, '.2f')} {quote_currency}\"\n",
"\n",
"\n",
"# Weather function\n",
"\n",
"\n",
"# Example function to make available to model\n",
"def get_current_weather(location, unit=\"fahrenheit\"):\n",
"    \"\"\"Get the weather for some location\"\"\"\n",
"    if \"chicago\" in location.lower():\n",
"        return json.dumps({\"location\": \"Chicago\", \"temperature\": \"13\", \"unit\": unit})\n",
"    elif \"san francisco\" in location.lower():\n",
"        return json.dumps({\"location\": \"San Francisco\", \"temperature\": \"55\", \"unit\": unit})\n",
"    elif \"new york\" in location.lower():\n",
"        return json.dumps({\"location\": \"New York\", \"temperature\": \"11\", \"unit\": unit})\n",
"    else:\n",
"        return json.dumps({\"location\": location, \"temperature\": \"unknown\"})\n",
"\n",
"\n",
"# Register the function with the agent\n",
"\n",
"\n",
"@user_proxy.register_for_execution()\n",
"@chatbot.register_for_llm(description=\"Weather forecast for US cities.\")\n",
"def weather_forecast(\n",
"    location: Annotated[str, \"City name\"],\n",
") -> str:\n",
"    weather_details = get_current_weather(location=location)\n",
"    weather = json.loads(weather_details)\n",
"    return f\"{weather['location']} will be {weather['temperature']} degrees {weather['unit']}\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We pass through our customer's message and run the chat.\n",
"\n",
"Finally, we ask the LLM to summarise the chat and print that out."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[33muser_proxy\u001b[0m (to chatbot):\n",
"\n",
"What's the weather in New York and can you tell me how much is 123.45 EUR in USD so I can spend it on my holiday? Throw a few holiday tips in as well.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mchatbot\u001b[0m (to user_proxy):\n",
"\n",
"I will use the weather_forecast function to find out the weather in New York, and the currency_calculator function to convert 123.45 EUR to USD. I will then search for 'holiday tips' to find some extra information to include in my answer.\n",
"\u001b[32m***** Suggested tool call (45212): weather_forecast *****\u001b[0m\n",
"Arguments: \n",
"{\"location\": \"New York\"}\n",
"\u001b[32m*********************************************************\u001b[0m\n",
"\u001b[32m***** Suggested tool call (16564): currency_calculator *****\u001b[0m\n",
"Arguments: \n",
"{\"base_amount\": 123.45, \"base_currency\": \"EUR\", \"quote_currency\": \"USD\"}\n",
"\u001b[32m************************************************************\u001b[0m\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[35m\n",
">>>>>>>> EXECUTING FUNCTION weather_forecast...\u001b[0m\n",
"\u001b[35m\n",
">>>>>>>> EXECUTING FUNCTION currency_calculator...\u001b[0m\n",
"\u001b[33muser_proxy\u001b[0m (to chatbot):\n",
"\n",
"\u001b[33muser_proxy\u001b[0m (to chatbot):\n",
"\n",
"\u001b[32m***** Response from calling tool (45212) *****\u001b[0m\n",
"New York will be 11 degrees fahrenheit\n",
"\u001b[32m**********************************************\u001b[0m\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33muser_proxy\u001b[0m (to chatbot):\n",
"\n",
"\u001b[32m***** Response from calling tool (16564) *****\u001b[0m\n",
"135.80 USD\n",
"\u001b[32m**********************************************\u001b[0m\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mchatbot\u001b[0m (to user_proxy):\n",
"\n",
"The weather in New York is 11 degrees Fahrenheit. \n",
"\n",
"€123.45 is worth $135.80. \n",
"\n",
"Here are some holiday tips:\n",
"- Make sure to pack layers for the cold weather\n",
"- Try the local cuisine, New York is famous for its pizza\n",
"- Visit Central Park and take in the views from the top of the Rockefeller Centre\n",
"\n",
"HAVE FUN!\n",
"\n",
"--------------------------------------------------------------------------------\n",
"LLM SUMMARY: The weather in New York is 11 degrees Fahrenheit. 123.45 EUR is worth 135.80 USD. Holiday tips: make sure to pack warm clothes and have a great time!\n"
]
}
],
"source": [
"# start the conversation\n",
"res = user_proxy.initiate_chat(\n",
"    chatbot,\n",
"    message=\"What's the weather in New York and can you tell me how much is 123.45 EUR in USD so I can spend it on my holiday? Throw a few holiday tips in as well.\",\n",
"    summary_method=\"reflection_with_llm\",\n",
")\n",
"\n",
"print(f\"LLM SUMMARY: {res.summary['content']}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can see that Command R+ recommended calling both tools and passed through the right parameters. The `user_proxy` executed the tool calls, and the results were passed back to Command R+ to interpret and respond to. Finally, Command R+ was asked to summarise the whole conversation.\n",
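"\n",
"The token usage and cost that the client tracked during this chat can also be printed. This is a sketch using AutoGen's `gather_usage_summary` helper; the exact structure of the returned summary may vary between `pyautogen` versions:\n",
"\n",
"```python\n",
"# Aggregate token usage and cost across both agents in this conversation\n",
"usage = autogen.gather_usage_summary([user_proxy, chatbot])\n",
"print(usage)\n",
"```"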
]
}
],
"metadata": {
"kernelspec": {
"display_name": "autogen",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
}