{
|
|
"cells": [
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"# Ollama\n",
|
|
"\n",
|
|
"[Ollama](https://ollama.com/) is a local inference engine that enables you to run open-weight LLMs in your environment. It has native support for a large number of models such as Google's Gemma, Meta's Llama 2/3/3.1, Microsoft's Phi 3, Mistral.AI's Mistral/Mixtral, and Cohere's Command R models.\n",
|
|
"\n",
|
|
"Note: Previously, to use Ollama with AutoGen you required LiteLLM. Now it can be used directly and supports tool calling."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"## Features\n",
|
|
"\n",
|
|
"When using this Ollama client class, messages are tailored to accommodate the specific requirements of Ollama's API and this includes message role sequences, support for function/tool calling, and token usage.\n",
|
|
"\n",
|
|
"## Installing Ollama\n",
|
|
"\n",
|
|
"For Mac and Windows, [download Ollama](https://ollama.com/download).\n",
|
|
"\n",
|
|
"For Linux:\n",
|
|
"\n",
|
|
"```bash\n",
|
|
"curl -fsSL https://ollama.com/install.sh | sh\n",
|
|
"```\n",
|
|
"\n",
|
|
"## Downloading models for Ollama\n",
|
|
"\n",
|
|
"Ollama has a library of models to choose from, see them [here](https://ollama.com/library).\n",
|
|
"\n",
|
|
"Before you can use a model, you need to download it (using the name of the model from the library):\n",
|
|
"\n",
|
|
"```bash\n",
|
|
"ollama pull llama3.1\n",
|
|
"```\n",
|
|
"\n",
|
|
"To view the models you have downloaded and can use:\n",
|
|
"\n",
|
|
"```bash\n",
|
|
"ollama list\n",
|
|
"```\n",
|
|
"\n",
|
|
"## Getting started with AutoGen and Ollama\n",
|
|
"\n",
|
|
"When installing AutoGen, you need to install the `pyautogen` package with the Ollama library.\n",
|
|
"\n",
|
|
"``` bash\n",
|
|
"pip install pyautogen[ollama]\n",
|
|
"```"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"See the sample `OAI_CONFIG_LIST` below showing how the Ollama client class is used by specifying the `api_type` as `ollama`.\n",
|
|
"\n",
|
|
"```python\n",
|
|
"[\n",
|
|
" {\n",
|
|
" \"model\": \"llama3.1\",\n",
|
|
" \"api_type\": \"ollama\"\n",
|
|
" },\n",
|
|
" {\n",
|
|
" \"model\": \"llama3.1:8b-instruct-q6_K\",\n",
|
|
" \"api_type\": \"ollama\"\n",
|
|
" },\n",
|
|
" {\n",
|
|
" \"model\": \"mistral-nemo\",\n",
|
|
" \"api_type\": \"ollama\"\n",
|
|
" }\n",
|
|
"]\n",
|
|
"```\n",
|
|
"\n",
|
|
"If you need to specify the URL for your Ollama install, use the `client_host` key in your config as per the below example:\n",
|
|
"\n",
|
|
"```python\n",
|
|
"[\n",
|
|
" {\n",
|
|
" \"model\": \"llama3.1\",\n",
|
|
" \"api_type\": \"ollama\",\n",
|
|
" \"client_host\": \"http://192.168.0.1:11434\"\n",
|
|
" }\n",
|
|
"]\n",
|
|
"```"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"## API parameters\n",
|
|
"\n",
|
|
"The following Ollama parameters can be added to your config. See [this link](https://github.com/ollama/ollama/blob/main/docs/api.md#parameters) for further information on them.\n",
|
|
"\n",
|
|
"- num_predict (integer): -1 is infinite, -2 is fill context, 128 is default\n",
|
|
"- repeat_penalty (float)\n",
|
|
"- seed (integer)\n",
|
|
"- stream (boolean)\n",
|
|
"- temperature (float)\n",
|
|
"- top_k (int)\n",
|
|
"- top_p (float)\n",
|
|
"\n",
|
|
"Example:\n",
|
|
"```python\n",
|
|
"[\n",
|
|
" {\n",
|
|
" \"model\": \"llama3.1:instruct\",\n",
|
|
" \"api_type\": \"ollama\",\n",
|
|
" \"num_predict\": -1,\n",
|
|
" \"repeat_penalty\": 1.1,\n",
|
|
" \"seed\": 42,\n",
|
|
" \"stream\": False,\n",
|
|
" \"temperature\": 1,\n",
|
|
" \"top_k\": 50,\n",
|
|
" \"top_p\": 0.8\n",
|
|
" }\n",
|
|
"]\n",
|
|
"```"
|
|
]
|
|
},
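{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a minimal sketch (the parameter values are illustrative and the agent name is arbitrary), such a configuration can be passed to an agent through its `llm_config`:\n",
"\n",
"```python\n",
"import autogen\n",
"\n",
"# Illustrative Ollama configuration with inference parameters set\n",
"ollama_llm_config = {\n",
"    \"config_list\": [\n",
"        {\n",
"            \"model\": \"llama3.1:instruct\",\n",
"            \"api_type\": \"ollama\",\n",
"            \"num_predict\": -1,\n",
"            \"repeat_penalty\": 1.1,\n",
"            \"seed\": 42,\n",
"            \"stream\": False,\n",
"            \"temperature\": 1,\n",
"            \"top_k\": 50,\n",
"            \"top_p\": 0.8,\n",
"        }\n",
"    ]\n",
"}\n",
"\n",
"# Create a generic ConversableAgent that uses this configuration\n",
"agent = autogen.ConversableAgent(name=\"ollama_agent\", llm_config=ollama_llm_config)\n",
"```"
]
},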
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"## Two-Agent Coding Example\n",
|
|
"\n",
|
|
"In this example, we run a two-agent chat with an AssistantAgent (primarily a coding agent) to generate code to count the number of prime numbers between 1 and 10,000 and then it will be executed.\n",
|
|
"\n",
|
|
"We'll use Meta's Llama 3.1 model which is suitable for coding.\n",
|
|
"\n",
|
|
"In this example we will specify the URL for the Ollama installation using `client_host`."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 1,
|
|
"metadata": {},
|
|
"outputs": [],
|
|
"source": [
|
|
"config_list = [\n",
|
|
" {\n",
|
|
" # Let's choose the Meta's Llama 3.1 model (model names must match Ollama exactly)\n",
|
|
" \"model\": \"llama3.1:8b\",\n",
|
|
" # We specify the API Type as 'ollama' so it uses the Ollama client class\n",
|
|
" \"api_type\": \"ollama\",\n",
|
|
" \"stream\": False,\n",
|
|
" \"client_host\": \"http://192.168.0.1:11434\",\n",
|
|
" }\n",
|
|
"]"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"Importantly, we have tweaked the system message so that the model doesn't return the termination keyword, which we've changed to FINISH, with the code block."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 2,
|
|
"metadata": {},
|
|
"outputs": [
|
|
{
|
|
"name": "stderr",
|
|
"output_type": "stream",
|
|
"text": [
|
|
"/usr/local/lib/python3.11/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n",
|
|
" from .autonotebook import tqdm as notebook_tqdm\n"
|
|
]
|
|
}
|
|
],
|
|
"source": [
|
|
"from pathlib import Path\n",
|
|
"\n",
|
|
"from autogen import AssistantAgent, UserProxyAgent\n",
|
|
"from autogen.coding import LocalCommandLineCodeExecutor\n",
|
|
"\n",
|
|
"# Setting up the code executor\n",
|
|
"workdir = Path(\"coding\")\n",
|
|
"workdir.mkdir(exist_ok=True)\n",
|
|
"code_executor = LocalCommandLineCodeExecutor(work_dir=workdir)\n",
|
|
"\n",
|
|
"# Setting up the agents\n",
|
|
"\n",
|
|
"# The UserProxyAgent will execute the code that the AssistantAgent provides\n",
|
|
"user_proxy_agent = UserProxyAgent(\n",
|
|
" name=\"User\",\n",
|
|
" code_execution_config={\"executor\": code_executor},\n",
|
|
" is_termination_msg=lambda msg: \"FINISH\" in msg.get(\"content\"),\n",
|
|
")\n",
|
|
"\n",
|
|
"system_message = \"\"\"You are a helpful AI assistant who writes code and the user\n",
|
|
"executes it. Solve tasks using your python coding skills.\n",
|
|
"In the following cases, suggest python code (in a python coding block) for the\n",
|
|
"user to execute. When using code, you must indicate the script type in the code block.\n",
|
|
"You only need to create one working sample.\n",
|
|
"Do not suggest incomplete code which requires users to modify it.\n",
|
|
"Don't use a code block if it's not intended to be executed by the user. Don't\n",
|
|
"include multiple code blocks in one response. Do not ask users to copy and\n",
|
|
"paste the result. Instead, use 'print' function for the output when relevant.\n",
|
|
"Check the execution result returned by the user.\n",
|
|
"\n",
|
|
"If the result indicates there is an error, fix the error.\n",
|
|
"\n",
|
|
"IMPORTANT: If it has executed successfully, ONLY output 'FINISH'.\"\"\"\n",
|
|
"\n",
|
|
"# The AssistantAgent, using the Ollama config, will take the coding request and return code\n",
|
|
"assistant_agent = AssistantAgent(\n",
|
|
" name=\"Ollama Assistant\",\n",
|
|
" system_message=system_message,\n",
|
|
" llm_config={\"config_list\": config_list},\n",
|
|
")"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"We can now start the chat."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 3,
|
|
"metadata": {},
|
|
"outputs": [
|
|
{
|
|
"name": "stdout",
|
|
"output_type": "stream",
|
|
"text": [
|
|
"\u001b[33mUser\u001b[0m (to Ollama Assistant):\n",
|
|
"\n",
|
|
"Provide code to count the number of prime numbers from 1 to 10000.\n",
|
|
"\n",
|
|
"--------------------------------------------------------------------------------\n",
|
|
"\u001b[33mOllama Assistant\u001b[0m (to User):\n",
|
|
"\n",
|
|
"```python\n",
|
|
"def is_prime(n):\n",
|
|
" if n <= 1:\n",
|
|
" return False\n",
|
|
" for i in range(2, int(n**0.5) + 1):\n",
|
|
" if n % i == 0:\n",
|
|
" return False\n",
|
|
" return True\n",
|
|
"\n",
|
|
"count = sum(is_prime(i) for i in range(1, 10001))\n",
|
|
"print(count)\n",
|
|
"```\n",
|
|
"\n",
|
|
"Please execute this code. I will wait for the result.\n",
|
|
"\n",
|
|
"--------------------------------------------------------------------------------\n",
|
|
"\u001b[31m\n",
|
|
">>>>>>>> NO HUMAN INPUT RECEIVED.\u001b[0m\n",
|
|
"\u001b[31m\n",
|
|
">>>>>>>> USING AUTO REPLY...\u001b[0m\n",
|
|
"\u001b[31m\n",
|
|
">>>>>>>> EXECUTING CODE BLOCK (inferred language is python)...\u001b[0m\n",
|
|
"\u001b[33mUser\u001b[0m (to Ollama Assistant):\n",
|
|
"\n",
|
|
"exitcode: 0 (execution succeeded)\n",
|
|
"Code output: 1229\n",
|
|
"\n",
|
|
"\n",
|
|
"--------------------------------------------------------------------------------\n",
|
|
"\u001b[33mOllama Assistant\u001b[0m (to User):\n",
|
|
"\n",
|
|
"FINISH\n",
|
|
"\n",
|
|
"--------------------------------------------------------------------------------\n",
|
|
"\u001b[31m\n",
|
|
">>>>>>>> NO HUMAN INPUT RECEIVED.\u001b[0m\n"
|
|
]
|
|
}
|
|
],
|
|
"source": [
|
|
"# Start the chat, with the UserProxyAgent asking the AssistantAgent the message\n",
|
|
"chat_result = user_proxy_agent.initiate_chat(\n",
|
|
" assistant_agent,\n",
|
|
" message=\"Provide code to count the number of prime numbers from 1 to 10000.\",\n",
|
|
")"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"## Tool Calling - Native vs Manual\n",
|
|
"\n",
|
|
"Ollama supports native tool calling (Ollama v0.3.1 library onward). If you install AutoGen with `pip install pyautogen[ollama]` you will be able to use native tool calling.\n",
|
|
"\n",
|
|
"The parameter `native_tool_calls` in your configuration allows you to specify if you want to use Ollama's native tool calling (default) or manual tool calling.\n",
|
|
"\n",
|
|
"```python\n",
|
|
"[\n",
|
|
" {\n",
|
|
" \"model\": \"llama3.1\",\n",
|
|
" \"api_type\": \"ollama\",\n",
|
|
" \"client_host\": \"http://192.168.0.1:11434\",\n",
|
|
" \"native_tool_calls\": True # Use Ollama's native tool calling, False for manual\n",
|
|
" }\n",
|
|
"]\n",
|
|
"```\n",
|
|
"\n",
|
|
"Native tool calling only works with certain models and an exception will be thrown if you try to use it with an unsupported model.\n",
|
|
"\n",
|
|
"Manual tool calling allows you to use tool calling with any Ollama model. It incorporates guided tool calling messages into the prompt that guide the LLM through the process of selecting a tool and then evaluating the result of the tool. As to be expected, the ability to follow instructions and return formatted JSON is highly dependent on the model.\n",
|
|
"\n",
|
|
"You can tailor the manual tool calling messages by adding these parameters to your configuration:\n",
|
|
"\n",
|
|
"- `manual_tool_call_instruction`\n",
|
|
"- `manual_tool_call_step1`\n",
|
|
"- `manual_tool_call_step2`\n",
|
|
"\n",
|
|
"To use manual tool calling set `native_tool_calls` to `False`."
|
|
]
|
|
},
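{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a minimal sketch, a configuration using manual tool calling might look like this (the `manual_tool_call_*` parameters are optional; if omitted, the built-in prompts are used):\n",
"\n",
"```python\n",
"[\n",
"    {\n",
"        \"model\": \"llama3.1\",\n",
"        \"api_type\": \"ollama\",\n",
"        \"client_host\": \"http://192.168.0.1:11434\",\n",
"        \"native_tool_calls\": False  # Use manual tool calling\n",
"    }\n",
"]\n",
"```"
]
},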
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"## Reducing repetitive tool calls\n",
|
|
"\n",
|
|
"By incorporating tools into a conversation, LLMs can often continually recommend them to be called, even after they've been called and a result returned. This can lead to a never ending cycle of tool calls.\n",
|
|
"\n",
|
|
"To remove the chance of an LLM recommending a tool call, an additional parameter called `hide_tools` can be used to specify when tools are hidden from the LLM. The string values for the parameter are:\n",
|
|
"\n",
|
|
"- 'never': tools are never hidden\n",
|
|
"- 'if_all_run': tools are hidden if all tools have been called\n",
|
|
"- 'if_any_run': tools are hidden if any tool has been called\n",
|
|
"\n",
|
|
"This can be used with native or manual tool calling, an example of a configuration is shown below.\n",
|
|
"\n",
|
|
"```python\n",
|
|
"[\n",
|
|
" {\n",
|
|
" \"model\": \"llama3.1\",\n",
|
|
" \"api_type\": \"ollama\",\n",
|
|
" \"client_host\": \"http://192.168.0.1:11434\",\n",
|
|
" \"native_tool_calls\": True,\n",
|
|
" \"hide_tools\": \"if_any_run\" # Hide tools once any tool has been called\n",
|
|
" }\n",
|
|
"]\n",
|
|
"```"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"## Tool Call Example\n",
|
|
"\n",
|
|
"In this example, instead of writing code, we will have an agent assist with some trip planning using multiple tool calling.\n",
|
|
"\n",
|
|
"Again, we'll use Meta's versatile Llama 3.1.\n",
|
|
"\n",
|
|
"Native Ollama tool calling will be used and we'll utilise the `hide_tools` parameter to hide the tools once all have been called."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 4,
|
|
"metadata": {},
|
|
"outputs": [],
|
|
"source": [
|
|
"import json\n",
|
|
"from typing import Literal\n",
|
|
"\n",
|
|
"from typing_extensions import Annotated\n",
|
|
"\n",
|
|
"import autogen\n",
|
|
"\n",
|
|
"config_list = [\n",
|
|
" {\n",
|
|
" # Let's choose the Meta's Llama 3.1 model (model names must match Ollama exactly)\n",
|
|
" \"model\": \"llama3.1:8b\",\n",
|
|
" \"api_type\": \"ollama\",\n",
|
|
" \"stream\": False,\n",
|
|
" \"client_host\": \"http://192.168.0.1:11434\",\n",
|
|
" \"hide_tools\": \"if_any_run\",\n",
|
|
" }\n",
|
|
"]"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"We'll create our agents. Importantly, we're using native Ollama tool calling and to help guide it we add the JSON to the system_message so that the number fields aren't wrapped in quotes (becoming strings)."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 6,
|
|
"metadata": {},
|
|
"outputs": [],
|
|
"source": [
|
|
"# Create the agent for tool calling\n",
|
|
"chatbot = autogen.AssistantAgent(\n",
|
|
" name=\"chatbot\",\n",
|
|
" system_message=\"\"\"For currency exchange and weather forecasting tasks,\n",
|
|
" only use the functions you have been provided with.\n",
|
|
" Example of the return JSON is:\n",
|
|
" {\n",
|
|
" \"parameter_1_name\": 100.00,\n",
|
|
" \"parameter_2_name\": \"ABC\",\n",
|
|
" \"parameter_3_name\": \"DEF\",\n",
|
|
" }.\n",
|
|
" Another example of the return JSON is:\n",
|
|
" {\n",
|
|
" \"parameter_1_name\": \"GHI\",\n",
|
|
" \"parameter_2_name\": \"ABC\",\n",
|
|
" \"parameter_3_name\": \"DEF\",\n",
|
|
" \"parameter_4_name\": 123.00,\n",
|
|
" }.\n",
|
|
" Output 'HAVE FUN!' when an answer has been provided.\"\"\",\n",
|
|
" llm_config={\"config_list\": config_list},\n",
|
|
")\n",
|
|
"\n",
|
|
"# Note that we have changed the termination string to be \"HAVE FUN!\"\n",
|
|
"user_proxy = autogen.UserProxyAgent(\n",
|
|
" name=\"user_proxy\",\n",
|
|
" is_termination_msg=lambda x: x.get(\"content\", \"\") and \"HAVE FUN!\" in x.get(\"content\", \"\"),\n",
|
|
" human_input_mode=\"NEVER\",\n",
|
|
" max_consecutive_auto_reply=1,\n",
|
|
")"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"Create and register our functions (tools). See the [tutorial chapter on tool use](/docs/tutorial/tool-use) \n",
|
|
"for more information."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 7,
|
|
"metadata": {},
|
|
"outputs": [],
|
|
"source": [
|
|
"# Currency Exchange function\n",
|
|
"\n",
|
|
"CurrencySymbol = Literal[\"USD\", \"EUR\"]\n",
|
|
"\n",
|
|
"# Define our function that we expect to call\n",
|
|
"\n",
|
|
"\n",
|
|
"def exchange_rate(base_currency: CurrencySymbol, quote_currency: CurrencySymbol) -> float:\n",
|
|
" if base_currency == quote_currency:\n",
|
|
" return 1.0\n",
|
|
" elif base_currency == \"USD\" and quote_currency == \"EUR\":\n",
|
|
" return 1 / 1.1\n",
|
|
" elif base_currency == \"EUR\" and quote_currency == \"USD\":\n",
|
|
" return 1.1\n",
|
|
" else:\n",
|
|
" raise ValueError(f\"Unknown currencies {base_currency}, {quote_currency}\")\n",
|
|
"\n",
|
|
"\n",
|
|
"# Register the function with the agent\n",
|
|
"\n",
|
|
"\n",
|
|
"@user_proxy.register_for_execution()\n",
|
|
"@chatbot.register_for_llm(description=\"Currency exchange calculator.\")\n",
|
|
"def currency_calculator(\n",
|
|
" base_amount: Annotated[\n",
|
|
" float,\n",
|
|
" \"Amount of currency in base_currency. Type is float, not string, return value should be a number only, e.g. 987.65.\",\n",
|
|
" ],\n",
|
|
" base_currency: Annotated[CurrencySymbol, \"Base currency\"] = \"USD\",\n",
|
|
" quote_currency: Annotated[CurrencySymbol, \"Quote currency\"] = \"EUR\",\n",
|
|
") -> str:\n",
|
|
" quote_amount = exchange_rate(base_currency, quote_currency) * base_amount\n",
|
|
" return f\"{format(quote_amount, '.2f')} {quote_currency}\"\n",
|
|
"\n",
|
|
"\n",
|
|
"# Weather function\n",
|
|
"\n",
|
|
"\n",
|
|
"# Example function to make available to model\n",
|
|
"def get_current_weather(location, unit=\"fahrenheit\"):\n",
|
|
" \"\"\"Get the weather for some location\"\"\"\n",
|
|
" if \"chicago\" in location.lower():\n",
|
|
" return json.dumps({\"location\": \"Chicago\", \"temperature\": \"13\", \"unit\": unit})\n",
|
|
" elif \"san francisco\" in location.lower():\n",
|
|
" return json.dumps({\"location\": \"San Francisco\", \"temperature\": \"55\", \"unit\": unit})\n",
|
|
" elif \"new york\" in location.lower():\n",
|
|
" return json.dumps({\"location\": \"New York\", \"temperature\": \"11\", \"unit\": unit})\n",
|
|
" else:\n",
|
|
" return json.dumps({\"location\": location, \"temperature\": \"unknown\"})\n",
|
|
"\n",
|
|
"\n",
|
|
"# Register the function with the agent\n",
|
|
"\n",
|
|
"\n",
|
|
"@user_proxy.register_for_execution()\n",
|
|
"@chatbot.register_for_llm(description=\"Weather forecast for US cities.\")\n",
|
|
"def weather_forecast(\n",
|
|
" location: Annotated[str, \"City name\"],\n",
|
|
") -> str:\n",
|
|
" weather_details = get_current_weather(location=location)\n",
|
|
" weather = json.loads(weather_details)\n",
|
|
" return f\"{weather['location']} will be {weather['temperature']} degrees {weather['unit']}\""
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"And run it!"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 8,
|
|
"metadata": {},
|
|
"outputs": [
|
|
{
|
|
"name": "stdout",
|
|
"output_type": "stream",
|
|
"text": [
|
|
"\u001b[33muser_proxy\u001b[0m (to chatbot):\n",
|
|
"\n",
|
|
"What's the weather in New York and can you tell me how much is 123.45 EUR in USD so I can spend it on my holiday? Throw a few holiday tips in as well.\n",
|
|
"\n",
|
|
"--------------------------------------------------------------------------------\n",
|
|
"\u001b[33mchatbot\u001b[0m (to user_proxy):\n",
|
|
"\n",
|
|
"\n",
|
|
"\u001b[32m***** Suggested tool call (ollama_func_4506): weather_forecast *****\u001b[0m\n",
|
|
"Arguments: \n",
|
|
"{\"location\": \"New York\"}\n",
|
|
"\u001b[32m********************************************************************\u001b[0m\n",
|
|
"\u001b[32m***** Suggested tool call (ollama_func_4507): currency_calculator *****\u001b[0m\n",
|
|
"Arguments: \n",
|
|
"{\"base_amount\": 123.45, \"base_currency\": \"EUR\", \"quote_currency\": \"USD\"}\n",
|
|
"\u001b[32m***********************************************************************\u001b[0m\n",
|
|
"\n",
|
|
"--------------------------------------------------------------------------------\n",
|
|
"\u001b[35m\n",
|
|
">>>>>>>> EXECUTING FUNCTION weather_forecast...\u001b[0m\n",
|
|
"\u001b[35m\n",
|
|
">>>>>>>> EXECUTING FUNCTION currency_calculator...\u001b[0m\n",
|
|
"\u001b[33muser_proxy\u001b[0m (to chatbot):\n",
|
|
"\n",
|
|
"\u001b[33muser_proxy\u001b[0m (to chatbot):\n",
|
|
"\n",
|
|
"\u001b[32m***** Response from calling tool (ollama_func_4506) *****\u001b[0m\n",
|
|
"New York will be 11 degrees fahrenheit\n",
|
|
"\u001b[32m*********************************************************\u001b[0m\n",
|
|
"\n",
|
|
"--------------------------------------------------------------------------------\n",
|
|
"\u001b[33muser_proxy\u001b[0m (to chatbot):\n",
|
|
"\n",
|
|
"\u001b[32m***** Response from calling tool (ollama_func_4507) *****\u001b[0m\n",
|
|
"135.80 USD\n",
|
|
"\u001b[32m*********************************************************\u001b[0m\n",
|
|
"\n",
|
|
"--------------------------------------------------------------------------------\n",
|
|
"\u001b[33mchatbot\u001b[0m (to user_proxy):\n",
|
|
"\n",
|
|
"Based on the results, it seems that:\n",
|
|
"\n",
|
|
"* The weather forecast for New York is expected to be around 11 degrees Fahrenheit.\n",
|
|
"* The exchange rate for EUR to USD is currently 1 EUR = 1.3580 USD, so 123.45 EUR is equivalent to approximately 135.80 USD.\n",
|
|
"\n",
|
|
"As a bonus, here are some holiday tips in New York:\n",
|
|
"\n",
|
|
"* Be sure to try a classic New York-style hot dog from a street cart or a diner.\n",
|
|
"* Explore the iconic Central Park and take a stroll through the High Line for some great views of the city.\n",
|
|
"* Catch a Broadway show or a concert at one of the many world-class venues in the city.\n",
|
|
"\n",
|
|
"And... HAVE FUN!\n",
|
|
"\n",
|
|
"--------------------------------------------------------------------------------\n",
|
|
"LLM SUMMARY: The weather forecast for New York is expected to be around 11 degrees Fahrenheit.\n",
|
|
"123.45 EUR is equivalent to approximately 135.80 USD.\n",
|
|
"Try a classic New York-style hot dog, explore Central Park and the High Line, and catch a Broadway show or concert during your visit.\n"
|
|
]
|
|
}
|
|
],
|
|
"source": [
|
|
"# start the conversation\n",
|
|
"res = user_proxy.initiate_chat(\n",
|
|
" chatbot,\n",
|
|
" message=\"What's the weather in New York and can you tell me how much is 123.45 EUR in USD so I can spend it on my holiday? Throw a few holiday tips in as well.\",\n",
|
|
" summary_method=\"reflection_with_llm\",\n",
|
|
")\n",
|
|
"\n",
|
|
"print(f\"LLM SUMMARY: {res.summary['content']}\")"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"Great, we can see that Llama 3.1 has helped choose the right functions, their parameters, and then summarised them for us."
|
|
]
|
|
}
|
|
],
|
|
"metadata": {
|
|
"kernelspec": {
|
|
"display_name": "Python 3",
|
|
"language": "python",
|
|
"name": "python3"
|
|
},
|
|
"language_info": {
|
|
"codemirror_mode": {
|
|
"name": "ipython",
|
|
"version": 3
|
|
},
|
|
"file_extension": ".py",
|
|
"mimetype": "text/x-python",
|
|
"name": "python",
|
|
"nbconvert_exporter": "python",
|
|
"pygments_lexer": "ipython3",
|
|
"version": "3.11.9"
|
|
}
|
|
},
|
|
"nbformat": 4,
|
|
"nbformat_minor": 2
|
|
}
|