"Note: We recommend using a virtual environment for your stack, see [this article](https://microsoft.github.io/autogen/docs/installation/#create-a-virtual-environment-optional) for guidance.\n",
"\n",
"## Installing LiteLLM\n",
"\n",
"Install LiteLLM with the proxy server functionality:\n",
"\n",
"```bash\n",
"pip install 'litellm[proxy]'\n",
"```\n",
"\n",
"Note: If using Windows, run LiteLLM and Ollama within a [WSL2](https://learn.microsoft.com/en-us/windows/wsl/install).\n",
"\n",
"````mdx-code-block\n",
":::tip\n",
"For custom LiteLLM installation instructions, see their [GitHub repository](https://github.com/BerriAI/litellm).\n",
":::\n",
"````\n",
"\n",
"## Installing Ollama\n",
"\n",
"For Mac and Windows, [download Ollama](https://ollama.com/download).\n",
" Thank you for using LiteLLM! - Krrish & Ishaan\n",
"\n",
"\n",
"\n",
"Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new\n",
"\n",
"\n",
"INFO: Application startup complete.\n",
"INFO: Uvicorn running on http://0.0.0.0:4000 (Press CTRL+C to quit)\n",
"````\n",
"\n",
"This will run the proxy server and it will be available at 'http://0.0.0.0:4000/'."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Using LiteLLM+Ollama with AutoGen\n",
"\n",
"Now that we have the URL for the LiteLLM proxy server, you can use it within AutoGen\n",
"in the same way as OpenAI or cloud-based proxy servers.\n",
"\n",
"As you are running this proxy server locally, no API key is required. Additionally, as\n",
"the model is being set when running the\n",
"LiteLLM command, no model name needs to be configured in AutoGen. However, ```model```\n",
"and ```api_key``` are mandatory fields for configurations within AutoGen so we put dummy\n",
"values in them, as per the example below.\n",
"\n",
"An additional setting for the configuration is `price`, which can be used to set the pricing of tokens. As we're running it locally, we'll put our costs as zero. Using this setting will also avoid a prompt being shown when price can't be determined."
"The sky appears blue because of a phenomenon called scattering. When sunlight enters Earth's atmosphere, it encounters tiny molecules of gases such as nitrogen (N2) and oxygen (O2). These molecules scatter the light in all directions, but they scatter shorter (blue) wavelengths more than longer (red) wavelengths.\n",
"\n",
"This is known as Rayleigh scattering, named after the British physicist Lord Rayleigh, who first described the phenomenon in the late 19th century. As a result of this scattering, the blue light is distributed throughout the atmosphere, giving the sky its blue appearance.\n",
"\n",
"Additionally, when sunlight passes through more dense atmospheric particles like water vapor, pollutants, and dust, it can also be scattered or absorbed, which affects the color we see. For example, during sunrise and sunset, the light has to travel longer distances through the atmosphere, which scatters the shorter wavelengths even more, making the sky appear more red.\n",
"\n",
"So, there you have it! The blue sky is a result of the combination of sunlight, atmospheric gases, and the scattering of light.\n",
"\n",
"How's that? Do you have any other questions or would you like to explore more topics?\n",
"As I mentioned earlier, the color we see in the sky can be affected by the amount and type of particles in the atmosphere. When the sunlight has to travel longer distances through the air, like during sunrise and sunset, it encounters more atmospheric particles that scatter the shorter blue wavelengths even more than the longer red wavelengths.\n",
"\n",
"This is known as Mie scattering, named after the German physicist Gustav Mie. The larger particles, such as water droplets, pollen, and dust, are responsible for this type of scattering. They scatter the shorter blue wavelengths more efficiently than the longer red wavelengths, which is why we often see more red or orange hues during these times.\n",
"\n",
"Additionally, during sunrise and sunset, the sun's rays have to travel through a thicker layer of atmosphere, which contains more particles like water vapor, pollutants, and aerosols. These particles can absorb or scatter certain wavelengths of light, making them appear redder or more orange.\n",
"\n",
"The combination of Mie scattering and absorption by atmospheric particles can create the warm, golden hues we often see during sunrise and sunset. It's a beautiful reminder that the color of our sky is not just a result of the sun itself but also the complex interactions between sunlight, atmosphere, and particles!\n",
"\n",
"Would you like to explore more about the Earth's atmosphere or perhaps learn about other fascinating topics?\n",
"# Let the assistant start the conversation. It will end when the user types exit.\n",
"res = assistant.initiate_chat(user_proxy, message=\"How can I help you today?\")\n",
"\n",
"print(assistant)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Example with Function Calling\n",
"Function calling (aka Tool calling) is a feature of OpenAI's API that AutoGen, LiteLLM, and Ollama support.\n",
"\n",
"Below is an example of using function calling with LiteLLM and Ollama. Based on this [currency conversion](https://github.com/microsoft/autogen/blob/501f8d22726e687c55052682c20c97ce62f018ac/notebook/agentchat_function_call_currency_calculator.ipynb) notebook.\n",
"\n",
"LiteLLM is loaded in the same way as the previous example and we'll continue to use Meta's Llama3 model as it is good at constructing the\n",
"function calling message required.\n",
"\n",
"**Note:** LiteLLM version 1.41.27, or later, is required (to support function calling natively using Ollama).\n",
"\n",
"In your terminal:\n",
"\n",
"```bash\n",
"litellm --model ollama/llama3\n",
"```\n",
"\n",
"Then we run our program with function calling."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/usr/local/lib/python3.11/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n",
" from .autonotebook import tqdm as notebook_tqdm\n"
]
}
],
"source": [
"from typing import Literal\n",
"\n",
"from typing_extensions import Annotated\n",
"\n",
"import autogen\n",
"\n",
"local_llm_config = {\n",
" \"config_list\": [\n",
" {\n",
" \"model\": \"NotRequired\", # Loaded with LiteLLM command\n",
" \"api_key\": \"NotRequired\", # Not needed\n",
" \"base_url\": \"http://0.0.0.0:4000\", # Your LiteLLM URL\n",
" \"price\": [0, 0], # Put in price per 1K tokens [prompt, response] as free!\n",
" }\n",
" ],\n",
" \"cache_seed\": None, # Turns off caching, useful for testing different models\n",
"}"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"# Create the agent and include examples of the function calling JSON in the prompt\n",
"We can see that the currency conversion function was called with the correct values and a result was generated.\n",
"\n",
"````mdx-code-block\n",
":::tip\n",
"Once functions are included in the conversation it is possible, using LiteLLM and Ollama, that the model may continue to recommend tool calls (as shown above). This is an area of active development and a native Ollama client for AutoGen is planned for a future release.\n",