{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"# Anthropic Claude\n",
"\n",
"In this notebook, we demonstrate how a to use Anthropic Claude model for AgentChat.\n",
"\n",
"## Requirements\n",
"To use Anthropic Claude with AutoGen, you first need to install the `pyautogen` and `anthropic` packages.\n",
"\n",
"To try out the function call feature of Claude model, you need to install `anthropic>=0.23.1`.\n"
2024-04-03 18:21:08 -04:00
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# !pip install pyautogen\n",
"!pip install \"anthropic>=0.23.1\""
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [],
"source": [
"import inspect\n",
"import json\n",
"from typing import Any, Dict, List, Union\n",
"\n",
"from anthropic import Anthropic\n",
"from anthropic import __version__ as anthropic_version\n",
"from anthropic.types import Completion, Message\n",
"from openai.types.chat.chat_completion import ChatCompletionMessage\n",
"from typing_extensions import Annotated\n",
"\n",
"import autogen\n",
"from autogen import AssistantAgent, UserProxyAgent\n",
"\n",
"TOOL_ENABLED = anthropic_version >= \"0.23.1\"\n",
"if TOOL_ENABLED:\n",
" from anthropic.types.beta.tools import ToolsBetaMessage\n",
"else:\n",
" ToolsBetaMessage = object"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create Anthropic Model Client following ModelClient Protocol\n",
"\n",
"We will implement our Anthropic client adhere to the `ModelClient` protocol and response structure which is defined in client.py and shown below.\n",
"\n",
"\n",
"```python\n",
"class ModelClient(Protocol):\n",
" \"\"\"\n",
" A client class must implement the following methods:\n",
" - create must return a response object that implements the ModelClientResponseProtocol\n",
" - cost must return the cost of the response\n",
" - get_usage must return a dict with the following keys:\n",
" - prompt_tokens\n",
" - completion_tokens\n",
" - total_tokens\n",
" - cost\n",
" - model\n",
"\n",
" This class is used to create a client that can be used by OpenAIWrapper.\n",
" The response returned from create must adhere to the ModelClientResponseProtocol but can be extended however needed.\n",
" The message_retrieval method must be implemented to return a list of str or a list of messages from the response.\n",
" \"\"\"\n",
"\n",
" RESPONSE_USAGE_KEYS = [\"prompt_tokens\", \"completion_tokens\", \"total_tokens\", \"cost\", \"model\"]\n",
"\n",
" class ModelClientResponseProtocol(Protocol):\n",
" class Choice(Protocol):\n",
" class Message(Protocol):\n",
" content: Optional[str]\n",
"\n",
" message: Message\n",
"\n",
" choices: List[Choice]\n",
" model: str\n",
"\n",
" def create(self, params) -> ModelClientResponseProtocol:\n",
" ...\n",
"\n",
" def message_retrieval(\n",
" self, response: ModelClientResponseProtocol\n",
" ) -> Union[List[str], List[ModelClient.ModelClientResponseProtocol.Choice.Message]]:\n",
" \"\"\"\n",
" Retrieve and return a list of strings or a list of Choice.Message from the response.\n",
"\n",
" NOTE: if a list of Choice.Message is returned, it currently needs to contain the fields of OpenAI's ChatCompletion Message object,\n",
" since that is expected for function or tool calling in the rest of the codebase at the moment, unless a custom agent is being used.\n",
" \"\"\"\n",
" ...\n",
"\n",
" def cost(self, response: ModelClientResponseProtocol) -> float:\n",
" ...\n",
"\n",
" @staticmethod\n",
" def get_usage(response: ModelClientResponseProtocol) -> Dict:\n",
" \"\"\"Return usage summary of the response using RESPONSE_USAGE_KEYS.\"\"\"\n",
" ...\n",
"```\n"
]
},
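{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before diving into the Anthropic-specific implementation, here is a minimal sketch of a client that satisfies the protocol. The `EchoModelClient` name and its echo behavior are purely illustrative and not part of AutoGen:\n",
"\n",
"```python\n",
"from types import SimpleNamespace\n",
"\n",
"\n",
"class EchoModelClient:\n",
"    \"\"\"Toy client that echoes the last message back; for illustration only.\"\"\"\n",
"\n",
"    def __init__(self, config, **kwargs):\n",
"        self.model = config[\"model\"]\n",
"\n",
"    def create(self, params):\n",
"        # Build an object with the shape required by ModelClientResponseProtocol.\n",
"        last = params[\"messages\"][-1][\"content\"]\n",
"        message = SimpleNamespace(content=f\"echo: {last}\", function_call=None)\n",
"        return SimpleNamespace(choices=[SimpleNamespace(message=message)], model=self.model)\n",
"\n",
"    def message_retrieval(self, response):\n",
"        # Return one string per choice.\n",
"        return [choice.message.content for choice in response.choices]\n",
"\n",
"    def cost(self, response) -> float:\n",
"        return 0.0\n",
"\n",
"    @staticmethod\n",
"    def get_usage(response):\n",
"        # Must provide every key in RESPONSE_USAGE_KEYS.\n",
"        return {\"prompt_tokens\": 0, \"completion_tokens\": 0, \"total_tokens\": 0, \"cost\": 0, \"model\": response.model}\n",
"```\n"
]
},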
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Implementation of AnthropicClient\n",
"\n",
"You can find the introduction to Claude-3-Opus model [here](https://docs.anthropic.com/claude/docs/intro-to-claude). \n",
"\n",
"Since anthropic provides their Python SDK with similar structure as OpenAI's, we will following the implementation from `autogen.oai.client.OpenAIClient`.\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [],
"source": [
"class AnthropicClient:\n",
" def __init__(self, config: Dict[str, Any]):\n",
" self._config = config\n",
" self.model = config[\"model\"]\n",
" anthropic_kwargs = set(inspect.getfullargspec(Anthropic.__init__).kwonlyargs)\n",
" filter_dict = {k: v for k, v in config.items() if k in anthropic_kwargs}\n",
" self._client = Anthropic(**filter_dict)\n",
"\n",
" self._last_tooluse_status = {}\n",
"\n",
" def message_retrieval(\n",
" self, response: Union[Message, ToolsBetaMessage]\n",
" ) -> Union[List[str], List[ChatCompletionMessage]]:\n",
" \"\"\"Retrieve the messages from the response.\"\"\"\n",
" messages = response.content\n",
" if len(messages) == 0:\n",
" return [None]\n",
" res = []\n",
" if TOOL_ENABLED:\n",
" for choice in messages:\n",
" if choice.type == \"tool_use\":\n",
" res.insert(0, self.response_to_openai_message(choice))\n",
" self._last_tooluse_status[\"tool_use\"] = choice.model_dump()\n",
" else:\n",
" res.append(choice.text)\n",
" self._last_tooluse_status[\"think\"] = choice.text\n",
"\n",
" return res\n",
"\n",
" else:\n",
" return [ # type: ignore [return-value]\n",
" choice.text if choice.message.function_call is not None else choice.message.content # type: ignore [union-attr]\n",
" for choice in messages\n",
" ]\n",
"\n",
" def create(self, params: Dict[str, Any]) -> Completion:\n",
" \"\"\"Create a completion for a given config.\n",
"\n",
" Args:\n",
" params: The params for the completion.\n",
"\n",
" Returns:\n",
" The completion.\n",
" \"\"\"\n",
" if \"tools\" in params:\n",
" converted_functions = self.convert_tools_to_functions(params[\"tools\"])\n",
" params[\"functions\"] = params.get(\"functions\", []) + converted_functions\n",
"\n",
" raw_contents = params[\"messages\"]\n",
" processed_messages = []\n",
" for message in raw_contents:\n",
"\n",
" if message[\"role\"] == \"system\":\n",
" params[\"system\"] = message[\"content\"]\n",
" elif message[\"role\"] == \"function\":\n",
" processed_messages.append(self.return_function_call_result(message[\"content\"]))\n",
" elif \"function_call\" in message:\n",
" processed_messages.append(self.restore_last_tooluse_status())\n",
" elif message[\"content\"] == \"\":\n",
" # I'm not sure how to elegantly terminate the conversation, please give me some advice about this.\n",
" message[\"content\"] = \"I'm done. Please send TERMINATE\"\n",
" processed_messages.append(message)\n",
" else:\n",
" processed_messages.append(message)\n",
"\n",
" params[\"messages\"] = processed_messages\n",
"\n",
" if TOOL_ENABLED and \"functions\" in params:\n",
" completions: Completion = self._client.beta.tools.messages\n",
" else:\n",
" completions: Completion = self._client.messages # type: ignore [attr-defined]\n",
"\n",
" # Not yet support stream\n",
" params = params.copy()\n",
" params[\"stream\"] = False\n",
" params.pop(\"model_client_cls\")\n",
" params[\"max_tokens\"] = params.get(\"max_tokens\", 4096)\n",
" if \"functions\" in params:\n",
" tools_configs = params.pop(\"functions\")\n",
" tools_configs = [self.openai_func_to_anthropic(tool) for tool in tools_configs]\n",
" params[\"tools\"] = tools_configs\n",
" response = completions.create(**params)\n",
"\n",
" return response\n",
"\n",
" def cost(self, response: Completion) -> float:\n",
" \"\"\"Calculate the cost of the response.\"\"\"\n",
" total = 0.0\n",
" tokens = {\n",
" \"input\": response.usage.input_tokens if response.usage is not None else 0,\n",
" \"output\": response.usage.output_tokens if response.usage is not None else 0,\n",
" }\n",
" price_per_million = {\n",
" \"input\": 15,\n",
" \"output\": 75,\n",
" }\n",
" for key, value in tokens.items():\n",
" total += value * price_per_million[key] / 1_000_000\n",
"\n",
" return total\n",
"\n",
" def response_to_openai_message(self, response) -> ChatCompletionMessage:\n",
" dict_response = response.model_dump()\n",
" return ChatCompletionMessage(\n",
" content=None,\n",
" role=\"assistant\",\n",
" function_call={\"name\": dict_response[\"name\"], \"arguments\": json.dumps(dict_response[\"input\"])},\n",
" )\n",
"\n",
" def restore_last_tooluse_status(self) -> Dict:\n",
" cached_content = []\n",
" if \"think\" in self._last_tooluse_status:\n",
" cached_content.append({\"type\": \"text\", \"text\": self._last_tooluse_status[\"think\"]})\n",
" cached_content.append(self._last_tooluse_status[\"tool_use\"])\n",
" res = {\"role\": \"assistant\", \"content\": cached_content}\n",
" return res\n",
"\n",
" def return_function_call_result(self, result: str) -> Dict:\n",
" return {\n",
" \"role\": \"user\",\n",
" \"content\": [\n",
" {\n",
" \"type\": \"tool_result\",\n",
" \"tool_use_id\": self._last_tooluse_status[\"tool_use\"][\"id\"],\n",
" \"content\": result,\n",
" }\n",
" ],\n",
" }\n",
"\n",
" @staticmethod\n",
" def openai_func_to_anthropic(openai_func: dict) -> dict:\n",
" res = openai_func.copy()\n",
" res[\"input_schema\"] = res.pop(\"parameters\")\n",
" return res\n",
"\n",
" @staticmethod\n",
" def get_usage(response: Completion) -> Dict:\n",
" return {\n",
" \"prompt_tokens\": response.usage.input_tokens if response.usage is not None else 0,\n",
" \"completion_tokens\": response.usage.output_tokens if response.usage is not None else 0,\n",
" \"total_tokens\": (\n",
" response.usage.input_tokens + response.usage.output_tokens if response.usage is not None else 0\n",
" ),\n",
" \"cost\": response.cost if hasattr(response, \"cost\") else 0,\n",
" \"model\": response.model,\n",
" }\n",
"\n",
" @staticmethod\n",
" def convert_tools_to_functions(tools: List) -> List:\n",
" functions = []\n",
" for tool in tools:\n",
" if tool.get(\"type\") == \"function\" and \"function\" in tool:\n",
" functions.append(tool[\"function\"])\n",
"\n",
" return functions"
]
},
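{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check of the schema conversion, here is what `openai_func_to_anthropic` does to a typical OpenAI-style function definition (the `get_weather` spec below is illustrative): the `parameters` key is simply renamed to `input_schema`, and everything else is kept as-is.\n",
"\n",
"```python\n",
"openai_func = {\n",
"    \"name\": \"get_weather\",\n",
"    \"description\": \"Get the current weather in a given location.\",\n",
"    \"parameters\": {\n",
"        \"type\": \"object\",\n",
"        \"properties\": {\"location\": {\"type\": \"string\"}},\n",
"        \"required\": [\"location\"],\n",
"    },\n",
"}\n",
"\n",
"anthropic_tool = AnthropicClient.openai_func_to_anthropic(openai_func)\n",
"# {'name': 'get_weather',\n",
"#  'description': 'Get the current weather in a given location.',\n",
"#  'input_schema': {'type': 'object',\n",
"#   'properties': {'location': {'type': 'string'}},\n",
"#   'required': ['location']}}\n",
"```\n"
]
},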
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Set the config for the Anthropic API\n",
"\n",
"You can add any parameters that are needed for the custom model loading in the same configuration list.\n",
"\n",
"It is important to add the `model_client_cls` field and set it to a string that corresponds to the class name: `\"CustomModelClient\"`."
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"config_list_claude = [\n",
" {\n",
" # Choose your model name.\n",
" \"model\": \"claude-3-sonnet-20240229\",\n",
" # You need to provide your API key here.\n",
" \"api_key\": os.getenv(\"ANTHROPIC_API_KEY\"),\n",
" \"base_url\": \"https://api.anthropic.com\",\n",
" \"api_type\": \"anthropic\",\n",
" \"model_client_cls\": \"AnthropicClient\",\n",
" }\n",
"]"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Construct Agents\n",
"\n",
"Construct a simple conversation between a user proxy and a ConversableAgent based on the Claude-3 model.\n"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[autogen.oai.client: 04-08 22:15:59] {419} INFO - Detected custom model client in config: AnthropicClient, model client can not be used until register_model_client is called.\n"
]
}
],
"source": [
"assistant = AssistantAgent(\n",
" \"assistant\",\n",
" llm_config={\n",
" \"config_list\": config_list_claude,\n",
" },\n",
")\n",
"\n",
"user_proxy = UserProxyAgent(\n",
" \"user_proxy\",\n",
" human_input_mode=\"NEVER\",\n",
" code_execution_config={\n",
" \"work_dir\": \"coding\",\n",
" \"use_docker\": False,\n",
" },\n",
" is_termination_msg=lambda x: x.get(\"content\", \"\") and x.get(\"content\", \"\").rstrip().endswith(\"TERMINATE\"),\n",
" max_consecutive_auto_reply=1,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Function Call in Latest Anthropic API \n",
"Anthropic just announced that tool use is now in public beta in the Anthropic API. To use this feature, please install `anthropic>=0.23.1`."
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[autogen.oai.client: 04-08 22:15:59] {419} INFO - Detected custom model client in config: AnthropicClient, model client can not be used until register_model_client is called.\n"
]
}
],
"source": [
"@user_proxy.register_for_execution()\n",
"@assistant.register_for_llm(name=\"get_weather\", description=\"Get the current weather in a given location.\")\n",
"def preprocess(location: Annotated[str, \"The city and state, e.g. Toronto, ON.\"]) -> str:\n",
" return \"Absolutely cloudy and rainy\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Register the custom client class to the assistant agent"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [],
"source": [
"assistant.register_model_client(model_client_cls=AnthropicClient)"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"user_proxy (to assistant):\n",
"\n",
"What's the weather in Toronto?\n",
"\n",
"--------------------------------------------------------------------------------\n",
"assistant (to user_proxy):\n",
"\n",
"***** Suggested function call: get_weather *****\n",
"Arguments: \n",
"{\"location\": \"Toronto, ON\"}\n",
"************************************************\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\n",
">>>>>>>> EXECUTING FUNCTION get_weather...\n",
"user_proxy (to assistant):\n",
"\n",
"***** Response from calling function (get_weather) *****\n",
"Absolutely cloudy and rainy\n",
"********************************************************\n",
"\n",
"--------------------------------------------------------------------------------\n",
"assistant (to user_proxy):\n",
"\n",
"The tool returned that the current weather in Toronto, ON is absolutely cloudy and rainy.\n",
"\n",
"--------------------------------------------------------------------------------\n"
]
},
{
"data": {
"text/plain": [
"ChatResult(chat_id=None, chat_history=[{'content': \"What's the weather in Toronto?\", 'role': 'assistant'}, {'function_call': {'arguments': '{\"location\": \"Toronto, ON\"}', 'name': 'get_weather'}, 'content': None, 'role': 'assistant'}, {'content': 'Absolutely cloudy and rainy', 'name': 'get_weather', 'role': 'function'}, {'content': 'The tool returned that the current weather in Toronto, ON is absolutely cloudy and rainy.', 'role': 'user'}], summary='The tool returned that the current weather in Toronto, ON is absolutely cloudy and rainy.', cost=({'total_cost': 0.030494999999999998, 'claude-3-sonnet-20240229': {'cost': 0.030494999999999998, 'prompt_tokens': 1533, 'completion_tokens': 100, 'total_tokens': 1633}}, {'total_cost': 0}), human_input=[])"
]
},
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"user_proxy.initiate_chat(\n",
" assistant,\n",
" message=\"What's the weather in Toronto?\",\n",
")"
]
}
],
"metadata": {
"front_matter": {
"description": "Define and load a custom model",
"tags": [
"custom model"
]
},
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.7"
},
"vscode": {
"interpreter": {
"hash": "949777d72b0d2535278d3dc13498b2535136f6dfe0678499012e853ee9abcab1"
}
},
"widgets": {
"application/vnd.jupyter.widget-state+json": {
"state": {
"2d910cfd2d2a4fc49fc30fbbdc5576a7": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "2.0.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "2.0.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "2.0.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border_bottom": null,
"border_left": null,
"border_right": null,
"border_top": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
},
"454146d0f7224f038689031002906e6f": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "2.0.0",
"model_name": "HBoxModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "2.0.0",
"_model_name": "HBoxModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "2.0.0",
"_view_name": "HBoxView",
"box_style": "",
"children": [
"IPY_MODEL_e4ae2b6f5a974fd4bafb6abb9d12ff26",
"IPY_MODEL_577e1e3cc4db4942b0883577b3b52755",
"IPY_MODEL_b40bdfb1ac1d4cffb7cefcb870c64d45"
],
"layout": "IPY_MODEL_dc83c7bff2f241309537a8119dfc7555",
"tabbable": null,
"tooltip": null
}
},
"577e1e3cc4db4942b0883577b3b52755": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "2.0.0",
"model_name": "FloatProgressModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "2.0.0",
"_model_name": "FloatProgressModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "2.0.0",
"_view_name": "ProgressView",
"bar_style": "success",
"description": "",
"description_allow_html": false,
"layout": "IPY_MODEL_2d910cfd2d2a4fc49fc30fbbdc5576a7",
"max": 1,
"min": 0,
"orientation": "horizontal",
"style": "IPY_MODEL_74a6ba0c3cbc4051be0a83e152fe1e62",
"tabbable": null,
"tooltip": null,
"value": 1
}
},
"6086462a12d54bafa59d3c4566f06cb2": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "2.0.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "2.0.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "2.0.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border_bottom": null,
"border_left": null,
"border_right": null,
"border_top": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
},
"74a6ba0c3cbc4051be0a83e152fe1e62": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "2.0.0",
"model_name": "ProgressStyleModel",
"state": {
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "2.0.0",
"_model_name": "ProgressStyleModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "2.0.0",
"_view_name": "StyleView",
"bar_color": null,
"description_width": ""
}
},
"7d3f3d9e15894d05a4d188ff4f466554": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "2.0.0",
"model_name": "HTMLStyleModel",
"state": {
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "2.0.0",
"_model_name": "HTMLStyleModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "2.0.0",
"_view_name": "StyleView",
"background": null,
"description_width": "",
"font_size": null,
"text_color": null
}
},
"b40bdfb1ac1d4cffb7cefcb870c64d45": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "2.0.0",
"model_name": "HTMLModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "2.0.0",
"_model_name": "HTMLModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "2.0.0",
"_view_name": "HTMLView",
"description": "",
"description_allow_html": false,
"layout": "IPY_MODEL_f1355871cc6f4dd4b50d9df5af20e5c8",
"placeholder": " ",
"style": "IPY_MODEL_ca245376fd9f4354af6b2befe4af4466",
"tabbable": null,
"tooltip": null,
"value": " 1/1 [00:00<00:00, 44.69it/s]"
}
},
"ca245376fd9f4354af6b2befe4af4466": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "2.0.0",
"model_name": "HTMLStyleModel",
"state": {
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "2.0.0",
"_model_name": "HTMLStyleModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "2.0.0",
"_view_name": "StyleView",
"background": null,
"description_width": "",
"font_size": null,
"text_color": null
}
},
"dc83c7bff2f241309537a8119dfc7555": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "2.0.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "2.0.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "2.0.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border_bottom": null,
"border_left": null,
"border_right": null,
"border_top": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
},
"e4ae2b6f5a974fd4bafb6abb9d12ff26": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "2.0.0",
"model_name": "HTMLModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "2.0.0",
"_model_name": "HTMLModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "2.0.0",
"_view_name": "HTMLView",
"description": "",
"description_allow_html": false,
"layout": "IPY_MODEL_6086462a12d54bafa59d3c4566f06cb2",
"placeholder": " ",
"style": "IPY_MODEL_7d3f3d9e15894d05a4d188ff4f466554",
"tabbable": null,
"tooltip": null,
"value": "100%"
}
},
"f1355871cc6f4dd4b50d9df5af20e5c8": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "2.0.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "2.0.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "2.0.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border_bottom": null,
"border_left": null,
"border_right": null,
"border_top": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
}
},
"version_major": 2,
"version_minor": 0
}
}
},
"nbformat": 4,
"nbformat_minor": 2
}