{
|
||
"cells": [
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"slideshow": {
|
||
"slide_type": "slide"
|
||
}
|
||
},
|
||
"source": [
|
||
"# Use AutoGen with Gemini via VertexAI\n",
|
||
"\n",
|
||
"This notebook demonstrates how to use Autogen with Gemini via Vertex AI, which enables enhanced authentication method that also supports enterprise requirements using service accounts or even a personal Google cloud account.\n",
|
||
"\n",
|
||
"## Requirements\n",
|
||
"\n",
|
||
"Install AutoGen with Gemini features:\n",
|
||
"```bash\n",
|
||
"pip install pyautogen[gemini]\n",
|
||
"```\n",
|
||
"\n",
|
||
"### Install other Dependencies of this Notebook\n",
|
||
"```bash\n",
|
||
"pip install chromadb markdownify pypdf\n",
|
||
"```\n",
|
||
"\n",
|
||
"### Google Cloud Account\n",
|
||
"To use VertexAI a Google Cloud account is needed. If you do not have one yet, just sign up for a free trial [here](https://cloud.google.com).\n",
|
||
"\n",
|
||
"Login to your account at [console.cloud.google.com](https://console.cloud.google.com)\n",
|
||
"\n",
|
||
"In the next step we create a Google Cloud project, which is needed for VertexAI. The official guide for creating a project is available is [here](https://developers.google.com/workspace/guides/create-project). \n",
|
||
"\n",
|
||
"We will name our project Autogen-with-Gemini.\n",
|
||
"\n",
|
||
"### Enable Google Cloud APIs\n",
|
||
"\n",
|
||
"If you wish to use Gemini with your personal account, then creating a Google Cloud account is enough. However, if a service account is needed, then a few extra steps are needed.\n",
|
||
"\n",
|
||
"#### Enable API for Gemini\n",
|
||
" * For enabling Gemini for Google Cloud search for \"api\" and select Enabled APIs & services. \n",
|
||
" * Then click ENABLE APIS AND SERVICES. \n",
|
||
" * Search for Gemini, and select Gemini for Google Cloud. <br/> A direct link will look like this for our autogen-with-gemini project:\n",
|
||
"https://console.cloud.google.com/apis/library/cloudaicompanion.googleapis.com?project=autogen-with-gemini&supportedpurview=project\n",
|
||
"* Click ENABLE for Gemini for Google Cloud.\n",
|
||
"\n",
|
||
"### Enable API for Vertex AI\n",
|
||
"* For enabling Vertex AI for Google Cloud search for \"api\" and select Enabled APIs & services. \n",
|
||
"* Then click ENABLE APIS AND SERVICES. \n",
|
||
"* Search for Vertex AI, and select Vertex AI API. <br/> A direct link for our autogen-with-gemini will be: https://console.cloud.google.com/apis/library/aiplatform.googleapis.com?project=autogen-with-gemini\n",
|
||
"* Click ENABLE Vertex AI API for Google Cloud.\n",
|
||
"\n",
|
||
"### Create a Service Account\n",
|
||
"\n",
|
||
"You can find an overview of service accounts [can be found in the cloud console](https://console.cloud.google.com/iam-admin/serviceaccounts)\n",
|
||
"\n",
|
||
"Detailed guide: https://cloud.google.com/iam/docs/service-accounts-create\n",
|
||
"\n",
|
||
"A service account can be created within the scope of a project, so a project needs to be selected.\n",
|
||
"\n",
|
||
"<div>\n",
|
||
"<img src=\"https://github.com/microsoft/autogen/blob/main/website/static/img/create_gcp_svc.png?raw=true\" width=\"1000\" />\n",
|
||
"</div>\n",
|
||
"\n",
|
||
"Next we assign the [Vertex AI User](https://cloud.google.com/vertex-ai/docs/general/access-control#aiplatform.user) for the service account. This can be done in the [Google Cloud console](https://console.cloud.google.com/iam-admin/iam?project=autogen-with-gemini) in our `autogen-with-gemini` project.<br/>\n",
|
||
"Alternatively, we can also grant the [Vertex AI User](https://cloud.google.com/vertex-ai/docs/general/access-control#aiplatform.user) role by running a command using the gcloud CLI, for example in [Cloud Shell](https://shell.cloud.google.com/cloudshell):\n",
|
||
"```bash\n",
|
||
"gcloud projects add-iam-policy-binding autogen-with-gemini \\\n",
|
||
" --member=serviceAccount:autogen@autogen-with-gemini.iam.gserviceaccount.com --role roles/aiplatform.user\n",
|
||
"```\n",
|
||
"\n",
|
||
"* Under IAM & Admin > Service Account select the newly created service accounts, and click the option \"Manage keys\" among the items. \n",
|
||
"* From the \"ADD KEY\" dropdown select \"Create new key\" and select the JSON format and click CREATE.\n",
|
||
" * The new key will be downloaded automatically. \n",
|
||
"* You can then upload the service account key file to the from where you will be running autogen. \n",
|
||
" * Please consider restricting the permissions on the key file. For example, you could run `chmod 600 autogen-with-gemini-service-account-key.json` if your keyfile is called autogen-with-gemini-service-account-key.json."
|
||
]
|
||
},
|
||
{
|
||
"attachments": {},
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"### Configure Authentication\n",
|
||
"\n",
|
||
"Authentication happens using standard [Google Cloud authentication methods](https://cloud.google.com/docs/authentication), <br/> which means\n",
|
||
"that either an already active session can be reused, or by specifying the Google application credentials of a service account. <br/><br/>\n",
|
||
"Additionally, AutoGen also supports authentication using `Credentials` objects in Python with the [google-auth library](https://google-auth.readthedocs.io/), which enables even more flexibility.<br/>\n",
|
||
"For example, we can even use impersonated credentials.\n",
|
||
"\n",
|
||
"#### <a id='use_svc_keyfile'></a>Use Service Account Keyfile\n",
|
||
"\n",
|
||
"The Google Cloud service account can be specified by setting the `GOOGLE_APPLICATION_CREDENTIALS` environment variable to the path to the JSON key file of the service account. <br/>\n",
|
||
"\n",
|
||
"We could even just directly set the environment variable, or we can add the `\"google_application_credentials\"` key with the respective value for our model in the OAI_CONFIG_LIST.\n",
|
||
"\n",
|
||
"#### Use the Google Default Credentials\n",
|
||
"\n",
|
||
"If you are using [Cloud Shell](https://shell.cloud.google.com/cloudshell) or [Cloud Shell editor](https://shell.cloud.google.com/cloudshell/editor) in Google Cloud, <br/> then you are already authenticated. If you have the Google Cloud SDK installed locally, <br/> then you can login by running `gcloud auth application-default login` in the command line. \n",
|
||
"\n",
|
||
"Detailed instructions for installing the Google Cloud SDK can be found [here](https://cloud.google.com/sdk/docs/install).\n",
|
||
"\n",
|
||
"#### Authentication with the Google Auth Library for Python\n",
|
||
"\n",
|
||
"The google-auth library supports a wide range of authentication scenarios, and you can simply pass a previously created `Credentials` object to the `llm_config`.<br/>\n",
|
||
"The [official documentation](https://google-auth.readthedocs.io/) of the Python package provides a detailed overview of the supported methods and usage examples.<br/>\n",
|
||
"If you are already authenticated, like in [Cloud Shell](https://shell.cloud.google.com/cloudshell), or after running the `gcloud auth application-default login` command in a CLI, then the `google.auth.default()` Python method will automatically return your currently active credentials."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"## Example Config List\n",
|
||
"The config could look like the following (change `project_id` and `google_application_credentials`):\n",
|
||
"```python\n",
|
||
"config_list = [\n",
|
||
" {\n",
|
||
" \"model\": \"gemini-pro\",\n",
|
||
" \"api_type\": \"google\",\n",
|
||
" \"project_id\": \"autogen-with-gemini\",\n",
|
||
" \"location\": \"us-west1\"\n",
|
||
" },\n",
|
||
" {\n",
|
||
" \"model\": \"gemini-1.5-pro-001\",\n",
|
||
" \"api_type\": \"google\",\n",
|
||
" \"project_id\": \"autogen-with-gemini\",\n",
|
||
" \"location\": \"us-west1\"\n",
|
||
" },\n",
|
||
" {\n",
|
||
" \"model\": \"gemini-1.5-pro\",\n",
|
||
" \"api_type\": \"google\",\n",
|
||
" \"project_id\": \"autogen-with-gemini\",\n",
|
||
" \"location\": \"us-west1\",\n",
|
||
" \"google_application_credentials\": \"autogen-with-gemini-service-account-key.json\"\n",
|
||
" },\n",
|
||
" {\n",
|
||
" \"model\": \"gemini-pro-vision\",\n",
|
||
" \"api_type\": \"google\",\n",
|
||
" \"project_id\": \"autogen-with-gemini\",\n",
|
||
" \"location\": \"us-west1\"\n",
|
||
" }\n",
|
||
"]\n",
|
||
"```\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"\n",
|
||
"## Configure Safety Settings for VertexAI\n",
|
||
"Configuring safety settings for VertexAI is slightly different, as we have to use the speicialized safety setting object types instead of plain strings"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 1,
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"from vertexai.generative_models import HarmBlockThreshold, HarmCategory\n",
|
||
"\n",
|
||
"safety_settings = {\n",
|
||
" HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_ONLY_HIGH,\n",
|
||
" HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_ONLY_HIGH,\n",
|
||
" HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_ONLY_HIGH,\n",
|
||
" HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_ONLY_HIGH,\n",
|
||
"}"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 2,
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"import os\n",
|
||
"from typing import Any, Callable, Dict, List, Optional, Tuple, Type, Union\n",
|
||
"\n",
|
||
"import chromadb\n",
|
||
"from PIL import Image\n",
|
||
"from termcolor import colored\n",
|
||
"\n",
|
||
"import autogen\n",
|
||
"from autogen import Agent, AssistantAgent, ConversableAgent, UserProxyAgent\n",
|
||
"from autogen.agentchat.contrib.img_utils import _to_pil, get_image_data\n",
|
||
"from autogen.agentchat.contrib.multimodal_conversable_agent import MultimodalConversableAgent\n",
|
||
"from autogen.agentchat.contrib.retrieve_user_proxy_agent import RetrieveUserProxyAgent\n",
|
||
"from autogen.code_utils import DEFAULT_MODEL, UNKNOWN, content_str, execute_code, extract_code, infer_lang"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 3,
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"config_list_gemini = autogen.config_list_from_json(\n",
|
||
" \"OAI_CONFIG_LIST\",\n",
|
||
" filter_dict={\n",
|
||
" \"model\": [\"gemini-1.5-pro\"],\n",
|
||
" },\n",
|
||
")\n",
|
||
"\n",
|
||
"config_list_gemini_vision = autogen.config_list_from_json(\n",
|
||
" \"OAI_CONFIG_LIST\",\n",
|
||
" filter_dict={\n",
|
||
" \"model\": [\"gemini-pro-vision\"],\n",
|
||
" },\n",
|
||
")\n",
|
||
"\n",
|
||
"for config_list in [config_list_gemini, config_list_gemini_vision]:\n",
|
||
" for config_list_item in config_list:\n",
|
||
" config_list_item[\"safety_settings\"] = safety_settings\n",
|
||
"\n",
|
||
"seed = 25 # for caching"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 4,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"\u001b[33muser_proxy\u001b[0m (to assistant):\n",
|
||
"\n",
|
||
"\n",
|
||
" Compute the integral of the function f(x)=x^2 on the interval 0 to 1 using a Python script,\n",
|
||
" which returns the value of the definite integral\n",
|
||
"\n",
|
||
"--------------------------------------------------------------------------------\n",
|
||
"\u001b[33massistant\u001b[0m (to user_proxy):\n",
|
||
"\n",
|
||
"Plan:\n",
|
||
"1. (code) Use Python's `scipy.integrate.quad` function to compute the integral. \n",
|
||
"\n",
|
||
"```python\n",
|
||
"# filename: integral.py\n",
|
||
"from scipy.integrate import quad\n",
|
||
"\n",
|
||
"def f(x):\n",
|
||
" return x**2\n",
|
||
"\n",
|
||
"result, error = quad(f, 0, 1)\n",
|
||
"\n",
|
||
"print(f\"The definite integral of x^2 from 0 to 1 is: {result}\")\n",
|
||
"```\n",
|
||
"\n",
|
||
"Let me know when you have executed this code. \n",
|
||
"\n",
|
||
"\n",
|
||
"--------------------------------------------------------------------------------\n",
|
||
"\u001b[31m\n",
|
||
">>>>>>>> EXECUTING CODE BLOCK 0 (inferred language is python)...\u001b[0m\n",
|
||
"\u001b[33muser_proxy\u001b[0m (to assistant):\n",
|
||
"\n",
|
||
"exitcode: 0 (execution succeeded)\n",
|
||
"Code output: \n",
|
||
"The definite integral of x^2 from 0 to 1 is: 0.33333333333333337\n",
|
||
"\n",
|
||
"\n",
|
||
"--------------------------------------------------------------------------------\n",
|
||
"\u001b[33massistant\u001b[0m (to user_proxy):\n",
|
||
"\n",
|
||
"The script executed successfully and returned the definite integral's value as approximately 0.33333333333333337. \n",
|
||
"\n",
|
||
"This aligns with the analytical solution. The indefinite integral of x^2 is (x^3)/3. Evaluating this from 0 to 1 gives us (1^3)/3 - (0^3)/3 = 1/3 = 0.33333...\n",
|
||
"\n",
|
||
"Therefore, the script successfully computed the integral of x^2 from 0 to 1.\n",
|
||
"\n",
|
||
"TERMINATE\n",
|
||
"\n",
|
||
"\n",
|
||
"--------------------------------------------------------------------------------\n"
|
||
]
|
||
}
|
||
],
|
||
"source": [
|
||
"assistant = AssistantAgent(\n",
|
||
" \"assistant\", llm_config={\"config_list\": config_list_gemini, \"seed\": seed}, max_consecutive_auto_reply=3\n",
|
||
")\n",
|
||
"\n",
|
||
"user_proxy = UserProxyAgent(\n",
|
||
" \"user_proxy\",\n",
|
||
" code_execution_config={\"work_dir\": \"coding\", \"use_docker\": False},\n",
|
||
" human_input_mode=\"NEVER\",\n",
|
||
" is_termination_msg=lambda x: content_str(x.get(\"content\")).find(\"TERMINATE\") >= 0,\n",
|
||
")\n",
|
||
"\n",
|
||
"result = user_proxy.initiate_chat(\n",
|
||
" assistant,\n",
|
||
" message=\"\"\"\n",
|
||
" Compute the integral of the function f(x)=x^2 on the interval 0 to 1 using a Python script,\n",
|
||
" which returns the value of the definite integral\"\"\",\n",
|
||
")"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"## Example with Gemini Multimodal\n",
|
||
"Authentication is the same for vision models as for the text based Gemini models. <br/>\n",
|
||
"In this example an object of type `Credentials` will be supplied in order to authenticate.<br/>\n",
|
||
"Here, we will use the google application default credentials, so make sure to run the following commands if you are not yet authenticated:\n",
|
||
"```bash\n",
|
||
"export GOOGLE_APPLICATION_CREDENTIALS=autogen-with-gemini-service-account-key.json\n",
|
||
"gcloud auth application-default login\n",
|
||
"gcloud config set project autogen-with-gemini\n",
|
||
"```\n",
|
||
"The `GOOGLE_APPLICATION_CREDENTIALS` environment variable is a path to our service account JSON keyfile, as described in the [Use Service Account Keyfile](#use_svc_keyfile) section above.<br/>\n",
|
||
"We also need to set the Google cloud project, which is `autogen-with-gemini` in this example.<br/><br/>\n",
|
||
"\n",
|
||
"Note, we could also run `gcloud auth application-default login` to use our personal Google account instead of a service account.\n",
|
||
"In this case we need to run the following commands:\n",
|
||
"```bash\n",
|
||
"gcloud gcloud auth application-default login\n",
|
||
"gcloud config set project autogen-with-gemini\n",
|
||
"```"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 5,
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"import google.auth\n",
|
||
"\n",
|
||
"scopes = [\"https://www.googleapis.com/auth/cloud-platform\"]\n",
|
||
"\n",
|
||
"credentials, project_id = google.auth.default(scopes)\n",
|
||
"\n",
|
||
"gemini_vision_config = [\n",
|
||
" {\n",
|
||
" \"model\": \"gemini-pro-vision\",\n",
|
||
" \"api_type\": \"google\",\n",
|
||
" \"project_id\": project_id,\n",
|
||
" \"credentials\": credentials,\n",
|
||
" \"location\": \"us-west1\",\n",
|
||
" \"safety_settings\": safety_settings,\n",
|
||
" }\n",
|
||
"]"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 6,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"\u001b[33muser_proxy\u001b[0m (to Gemini Vision):\n",
|
||
"\n",
|
||
"Describe what is in this image?\n",
|
||
"<image>.\n",
|
||
"\n",
|
||
"--------------------------------------------------------------------------------\n",
|
||
"\u001b[31m\n",
|
||
">>>>>>>> USING AUTO REPLY...\u001b[0m\n",
|
||
"\u001b[33mGemini Vision\u001b[0m (to user_proxy):\n",
|
||
"\n",
|
||
" The image describes a conversational agent that is able to have a conversation with a human user. The agent can be customized to the user's preferences. The conversation can be in form of a joint chat or hierarchical chat.\n",
|
||
"\n",
|
||
"--------------------------------------------------------------------------------\n"
|
||
]
|
||
},
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"ChatResult(chat_id=None, chat_history=[{'content': 'Describe what is in this image?\\n<img https://github.com/microsoft/autogen/blob/main/website/static/img/autogen_agentchat.png?raw=true>.', 'role': 'assistant'}, {'content': \" The image describes a conversational agent that is able to have a conversation with a human user. The agent can be customized to the user's preferences. The conversation can be in form of a joint chat or hierarchical chat.\", 'role': 'user'}], summary=\" The image describes a conversational agent that is able to have a conversation with a human user. The agent can be customized to the user's preferences. The conversation can be in form of a joint chat or hierarchical chat.\", cost={'usage_including_cached_inference': {'total_cost': 0.0001995, 'gemini-pro-vision': {'cost': 0.0001995, 'prompt_tokens': 267, 'completion_tokens': 44, 'total_tokens': 311}}, 'usage_excluding_cached_inference': {'total_cost': 0}}, human_input=[])"
|
||
]
|
||
},
|
||
"execution_count": 6,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"image_agent = MultimodalConversableAgent(\n",
|
||
" \"Gemini Vision\", llm_config={\"config_list\": gemini_vision_config, \"seed\": seed}, max_consecutive_auto_reply=1\n",
|
||
")\n",
|
||
"\n",
|
||
"user_proxy = UserProxyAgent(\"user_proxy\", human_input_mode=\"NEVER\", max_consecutive_auto_reply=0)\n",
|
||
"\n",
|
||
"user_proxy.initiate_chat(\n",
|
||
" image_agent,\n",
|
||
" message=\"\"\"Describe what is in this image?\n",
|
||
"<img https://github.com/microsoft/autogen/blob/main/website/static/img/autogen_agentchat.png?raw=true>.\"\"\",\n",
|
||
")"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"# Use Gemini via the OpenAI Library in Autogen\n",
|
||
"Using Gemini via the OpenAI library is also possible once you are already authenticated. <br/>\n",
|
||
"Run `gcloud auth application-default login` to set up application default credentials locally for the example below.<br/>\n",
|
||
"Also set the Google cloud project on the CLI if you have not done so far: <br/>\n",
|
||
"```bash\n",
|
||
"gcloud config set project autogen-with-gemini\n",
|
||
"```\n",
|
||
"The prerequisites are essentially the same as in the example above.<br/>\n",
|
||
"\n",
|
||
"You can read more on the topic in the [official Google docs](https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/call-gemini-using-openai-library).\n",
|
||
"<br/> A list of currently supported models can also be found in the [docs](https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/call-gemini-using-openai-library#supported_models)\n",
|
||
"<br/>\n",
|
||
"<br/>\n",
|
||
"Note, that you will need to refresh your token regularly, by default every 1 hour."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 7,
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"import google.auth\n",
|
||
"\n",
|
||
"scopes = [\"https://www.googleapis.com/auth/cloud-platform\"]\n",
|
||
"creds, project = google.auth.default(scopes)\n",
|
||
"auth_req = google.auth.transport.requests.Request()\n",
|
||
"creds.refresh(auth_req)\n",
|
||
"location = \"us-west1\"\n",
|
||
"prompt_price_per_1k = (\n",
|
||
" 0.000125 # For more up-to-date prices see https://cloud.google.com/vertex-ai/generative-ai/pricing\n",
|
||
")\n",
|
||
"completion_token_price_per_1k = (\n",
|
||
" 0.000375 # For more up-to-date prices see https://cloud.google.com/vertex-ai/generative-ai/pricing\n",
|
||
")\n",
|
||
"\n",
|
||
"openai_gemini_config = [\n",
|
||
" {\n",
|
||
" \"model\": \"google/gemini-1.5-pro-001\",\n",
|
||
" \"api_type\": \"openai\",\n",
|
||
" \"base_url\": f\"https://{location}-aiplatform.googleapis.com/v1beta1/projects/{project}/locations/{location}/endpoints/openapi\",\n",
|
||
" \"api_key\": creds.token,\n",
|
||
" \"price\": [prompt_price_per_1k, completion_token_price_per_1k],\n",
|
||
" }\n",
|
||
"]"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 8,
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"\u001b[33muser_proxy\u001b[0m (to assistant):\n",
|
||
"\n",
|
||
"\n",
|
||
" Compute the integral of the function f(x)=x^3 on the interval 0 to 10 using a Python script,\n",
|
||
" which returns the value of the definite integral.\n",
|
||
"\n",
|
||
"--------------------------------------------------------------------------------\n"
|
||
]
|
||
},
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"\u001b[33massistant\u001b[0m (to user_proxy):\n",
|
||
"\n",
|
||
"```python\n",
|
||
"# filename: integral.py\n",
|
||
"def integrate_x_cubed(a, b):\n",
|
||
" \"\"\"\n",
|
||
" This function calculates the definite integral of x^3 from a to b.\n",
|
||
"\n",
|
||
" Args:\n",
|
||
" a: The lower limit of integration.\n",
|
||
" b: The upper limit of integration.\n",
|
||
"\n",
|
||
" Returns:\n",
|
||
" The value of the definite integral.\n",
|
||
" \"\"\"\n",
|
||
" return (b**4 - a**4) / 4\n",
|
||
"\n",
|
||
"# Calculate the integral of x^3 from 0 to 10\n",
|
||
"result = integrate_x_cubed(0, 10)\n",
|
||
"\n",
|
||
"# Print the result\n",
|
||
"print(result)\n",
|
||
"```\n",
|
||
"\n",
|
||
"This script defines a function `integrate_x_cubed` that takes the lower and upper limits of integration as arguments and returns the definite integral of x^3 using the power rule of integration. The script then calls this function with the limits 0 and 10 and prints the result.\n",
|
||
"\n",
|
||
"Execute the script `python integral.py`, you should get the result: `2500.0`.\n",
|
||
"\n",
|
||
"TERMINATE\n",
|
||
"\n",
|
||
"\n",
|
||
"--------------------------------------------------------------------------------\n"
|
||
]
|
||
}
|
||
],
|
||
"source": [
|
||
"assistant = AssistantAgent(\"assistant\", llm_config={\"config_list\": openai_gemini_config}, max_consecutive_auto_reply=3)\n",
|
||
"\n",
|
||
"user_proxy = UserProxyAgent(\n",
|
||
" \"user_proxy\",\n",
|
||
" code_execution_config={\"work_dir\": \"coding\", \"use_docker\": False},\n",
|
||
" human_input_mode=\"NEVER\",\n",
|
||
" is_termination_msg=lambda x: content_str(x.get(\"content\")).find(\"TERMINATE\") >= 0,\n",
|
||
")\n",
|
||
"\n",
|
||
"result = user_proxy.initiate_chat(\n",
|
||
" assistant,\n",
|
||
" message=\"\"\"\n",
|
||
" Compute the integral of the function f(x)=x^3 on the interval 0 to 10 using a Python script,\n",
|
||
" which returns the value of the definite integral.\"\"\",\n",
|
||
")"
|
||
]
|
||
}
|
||
],
|
||
"metadata": {
|
||
"front_matter": {
|
||
"description": "Using Gemini with AutoGen via VertexAI",
|
||
"tags": [
|
||
"gemini",
|
||
"vertexai"
|
||
]
|
||
},
|
||
"kernelspec": {
|
||
"display_name": "Python 3",
|
||
"language": "python",
|
||
"name": "python3"
|
||
},
|
||
"language_info": {
|
||
"codemirror_mode": {
|
||
"name": "ipython",
|
||
"version": 3
|
||
},
|
||
"file_extension": ".py",
|
||
"mimetype": "text/x-python",
|
||
"name": "python",
|
||
"nbconvert_exporter": "python",
|
||
"pygments_lexer": "ipython3",
|
||
"version": "3.10.14"
|
||
},
|
||
"vscode": {
|
||
"interpreter": {
|
||
"hash": "949777d72b0d2535278d3dc13498b2535136f6dfe0678499012e853ee9abcab1"
|
||
}
|
||
}
|
||
},
|
||
"nbformat": 4,
|
||
"nbformat_minor": 4
|
||
}
|