{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Memory and RAG\n",
"\n",
"There are several use cases where it is valuable to maintain a _store_ of useful facts that can be intelligently added to the context of the agent just before a specific step. The typical use case here is a RAG pattern where a query is used to retrieve relevant information from a database that is then added to the agent's context.\n",
"\n",
"AgentChat provides a {py:class}`~autogen_core.memory.Memory` protocol that can be extended to provide this functionality. The key methods are `query`, `update_context`, `add`, `clear`, and `close`.\n",
"\n",
"- `add`: add new entries to the memory store\n",
"- `query`: retrieve relevant information from the memory store\n",
"- `update_context`: mutate an agent's internal `model_context` by adding the retrieved information (used in the {py:class}`~autogen_agentchat.agents.AssistantAgent` class)\n",
"- `clear`: clear all entries from the memory store\n",
"- `close`: clean up any resources used by the memory store\n",
"\n",
"## ListMemory Example\n",
"\n",
"{py:class}`~autogen_core.memory.ListMemory` is provided as an example implementation of the {py:class}`~autogen_core.memory.Memory` protocol. It is a simple list-based memory implementation that maintains memories in chronological order, appending the most recent memories to the model's context. The implementation is designed to be straightforward and predictable, making it easy to understand and debug.\n",
"\n",
"In the following example, we will use ListMemory to maintain a memory bank of user preferences and demonstrate how it can be used to provide consistent context for agent responses over time."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"from autogen_agentchat.agents import AssistantAgent\n",
"from autogen_agentchat.ui import Console\n",
"from autogen_core.memory import ListMemory, MemoryContent, MemoryMimeType\n",
"from autogen_ext.models.openai import OpenAIChatCompletionClient"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"# Initialize user memory\n",
"user_memory = ListMemory()\n",
"\n",
"# Add user preferences to memory\n",
"await user_memory.add(MemoryContent(content=\"The weather should be in metric units\", mime_type=MemoryMimeType.TEXT))\n",
"\n",
"await user_memory.add(MemoryContent(content=\"Meal recipe must be vegan\", mime_type=MemoryMimeType.TEXT))\n",
"\n",
"\n",
"async def get_weather(city: str, units: str = \"imperial\") -> str:\n",
"    if units == \"imperial\":\n",
"        return f\"The weather in {city} is 73 °F and Sunny.\"\n",
"    elif units == \"metric\":\n",
"        return f\"The weather in {city} is 23 °C and Sunny.\"\n",
"    else:\n",
"        return f\"Sorry, I don't know the weather in {city}.\"\n",
"\n",
"\n",
"assistant_agent = AssistantAgent(\n",
"    name=\"assistant_agent\",\n",
"    model_client=OpenAIChatCompletionClient(\n",
"        model=\"gpt-4o-2024-08-06\",\n",
"    ),\n",
"    tools=[get_weather],\n",
"    memory=[user_memory],\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"---------- TextMessage (user) ----------\n",
"What is the weather in New York?\n",
"---------- MemoryQueryEvent (assistant_agent) ----------\n",
"[MemoryContent(content='The weather should be in metric units', mime_type=<MemoryMimeType.TEXT: 'text/plain'>, metadata=None), MemoryContent(content='Meal recipe must be vegan', mime_type=<MemoryMimeType.TEXT: 'text/plain'>, metadata=None)]\n",
"---------- ToolCallRequestEvent (assistant_agent) ----------\n",
"[FunctionCall(id='call_apWw5JOedVvqsPfXWV7c5Uiw', arguments='{\"city\":\"New York\",\"units\":\"metric\"}', name='get_weather')]\n",
"---------- ToolCallExecutionEvent (assistant_agent) ----------\n",
"[FunctionExecutionResult(content='The weather in New York is 23 °C and Sunny.', name='get_weather', call_id='call_apWw5JOedVvqsPfXWV7c5Uiw', is_error=False)]\n",
"---------- ToolCallSummaryMessage (assistant_agent) ----------\n",
"The weather in New York is 23 °C and Sunny.\n"
]
},
{
"data": {
"text/plain": [
"TaskResult(messages=[TextMessage(source='user', models_usage=None, metadata={}, created_at=datetime.datetime(2025, 6, 12, 17, 46, 33, 492791, tzinfo=datetime.timezone.utc), content='What is the weather in New York?', type='TextMessage'), MemoryQueryEvent(source='assistant_agent', models_usage=None, metadata={}, created_at=datetime.datetime(2025, 6, 12, 17, 46, 33, 494162, tzinfo=datetime.timezone.utc), content=[MemoryContent(content='The weather should be in metric units', mime_type=<MemoryMimeType.TEXT: 'text/plain'>, metadata=None), MemoryContent(content='Meal recipe must be vegan', mime_type=<MemoryMimeType.TEXT: 'text/plain'>, metadata=None)], type='MemoryQueryEvent'), ToolCallRequestEvent(source='assistant_agent', models_usage=RequestUsage(prompt_tokens=123, completion_tokens=19), metadata={}, created_at=datetime.datetime(2025, 6, 12, 17, 46, 34, 892272, tzinfo=datetime.timezone.utc), content=[FunctionCall(id='call_apWw5JOedVvqsPfXWV7c5Uiw', arguments='{\"city\":\"New York\",\"units\":\"metric\"}', name='get_weather')], type='ToolCallRequestEvent'), ToolCallExecutionEvent(source='assistant_agent', models_usage=None, metadata={}, created_at=datetime.datetime(2025, 6, 12, 17, 46, 34, 894081, tzinfo=datetime.timezone.utc), content=[FunctionExecutionResult(content='The weather in New York is 23 °C and Sunny.', name='get_weather', call_id='call_apWw5JOedVvqsPfXWV7c5Uiw', is_error=False)], type='ToolCallExecutionEvent'), ToolCallSummaryMessage(source='assistant_agent', models_usage=None, metadata={}, created_at=datetime.datetime(2025, 6, 12, 17, 46, 34, 895054, tzinfo=datetime.timezone.utc), content='The weather in New York is 23 °C and Sunny.', type='ToolCallSummaryMessage', tool_calls=[FunctionCall(id='call_apWw5JOedVvqsPfXWV7c5Uiw', arguments='{\"city\":\"New York\",\"units\":\"metric\"}', name='get_weather')], results=[FunctionExecutionResult(content='The weather in New York is 23 °C and Sunny.', name='get_weather', call_id='call_apWw5JOedVvqsPfXWV7c5Uiw', is_error=False)])], stop_reason=None)"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Run the agent with a task.\n",
"stream = assistant_agent.run_stream(task=\"What is the weather in New York?\")\n",
"await Console(stream)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can inspect the `assistant_agent`'s `model_context` and confirm that it has been updated with the retrieved memory entries. The `transform` method is used to format the retrieved memory entries into a string that can be used by the agent. In this case, we simply concatenate the content of each memory entry into a single string."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[UserMessage(content='What is the weather in New York?', source='user', type='UserMessage'),\n",
" SystemMessage(content='\\nRelevant memory content (in chronological order):\\n1. The weather should be in metric units\\n2. Meal recipe must be vegan\\n', type='SystemMessage'),\n",
" AssistantMessage(content=[FunctionCall(id='call_apWw5JOedVvqsPfXWV7c5Uiw', arguments='{\"city\":\"New York\",\"units\":\"metric\"}', name='get_weather')], thought=None, source='assistant_agent', type='AssistantMessage'),\n",
" FunctionExecutionResultMessage(content=[FunctionExecutionResult(content='The weather in New York is 23 °C and Sunny.', name='get_weather', call_id='call_apWw5JOedVvqsPfXWV7c5Uiw', is_error=False)], type='FunctionExecutionResultMessage')]"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"await assistant_agent._model_context.get_messages()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We see above that the weather is returned in Celsius, as stated in the user preferences.\n",
"\n",
"Similarly, if we ask a separate question about generating a meal plan, the agent retrieves relevant information from the memory store and provides a personalized (vegan) response."
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"---------- TextMessage (user) ----------\n",
"Write brief meal recipe with broth\n",
"---------- MemoryQueryEvent (assistant_agent) ----------\n",
"[MemoryContent(content='The weather should be in metric units', mime_type=<MemoryMimeType.TEXT: 'text/plain'>, metadata=None), MemoryContent(content='Meal recipe must be vegan', mime_type=<MemoryMimeType.TEXT: 'text/plain'>, metadata=None)]\n",
"---------- TextMessage (assistant_agent) ----------\n",
"Here's another vegan broth-based recipe:\n",
"\n",
"**Vegan Miso Soup**\n",
"\n",
"**Ingredients:**\n",
"- 4 cups vegetable broth\n",
"- 3 tablespoons white miso paste\n",
"- 1 block firm tofu, cubed\n",
"- 1 cup mushrooms, sliced (shiitake or any variety you prefer)\n",
"- 2 green onions, chopped\n",
"- 1 tablespoon soy sauce (optional)\n",
"- 1/2 cup seaweed (such as wakame)\n",
"- 1 tablespoon sesame oil\n",
"- 1 tablespoon grated ginger\n",
"- Salt to taste\n",
"\n",
"**Instructions:**\n",
"1. In a pot, heat the sesame oil over medium heat.\n",
"2. Add the grated ginger and sauté for about a minute until fragrant.\n",
"3. Pour in the vegetable broth and bring it to a simmer.\n",
"4. Add the miso paste, stirring until fully dissolved.\n",
"5. Add the tofu cubes, mushrooms, and seaweed to the broth and cook for about 5 minutes.\n",
"6. Stir in soy sauce if using, and add salt to taste.\n",
"7. Garnish with chopped green onions before serving.\n",
"\n",
"Enjoy your delicious and nutritious vegan miso soup! TERMINATE\n"
]
},
{
"data": {
"text/plain": [
"TaskResult(messages=[TextMessage(source='user', models_usage=None, metadata={}, created_at=datetime.datetime(2025, 6, 12, 17, 47, 19, 247083, tzinfo=datetime.timezone.utc), content='Write brief meal recipe with broth', type='TextMessage'), MemoryQueryEvent(source='assistant_agent', models_usage=None, metadata={}, created_at=datetime.datetime(2025, 6, 12, 17, 47, 19, 248736, tzinfo=datetime.timezone.utc), content=[MemoryContent(content='The weather should be in metric units', mime_type=<MemoryMimeType.TEXT: 'text/plain'>, metadata=None), MemoryContent(content='Meal recipe must be vegan', mime_type=<MemoryMimeType.TEXT: 'text/plain'>, metadata=None)], type='MemoryQueryEvent'), TextMessage(source='assistant_agent', models_usage=RequestUsage(prompt_tokens=528, completion_tokens=233), metadata={}, created_at=datetime.datetime(2025, 6, 12, 17, 47, 26, 130554, tzinfo=datetime.timezone.utc), content=\"Here's another vegan broth-based recipe:\\n\\n**Vegan Miso Soup**\\n\\n**Ingredients:**\\n- 4 cups vegetable broth\\n- 3 tablespoons white miso paste\\n- 1 block firm tofu, cubed\\n- 1 cup mushrooms, sliced (shiitake or any variety you prefer)\\n- 2 green onions, chopped\\n- 1 tablespoon soy sauce (optional)\\n- 1/2 cup seaweed (such as wakame)\\n- 1 tablespoon sesame oil\\n- 1 tablespoon grated ginger\\n- Salt to taste\\n\\n**Instructions:**\\n1. In a pot, heat the sesame oil over medium heat.\\n2. Add the grated ginger and sauté for about a minute until fragrant.\\n3. Pour in the vegetable broth and bring it to a simmer.\\n4. Add the miso paste, stirring until fully dissolved.\\n5. Add the tofu cubes, mushrooms, and seaweed to the broth and cook for about 5 minutes.\\n6. Stir in soy sauce if using, and add salt to taste.\\n7. Garnish with chopped green onions before serving.\\n\\nEnjoy your delicious and nutritious vegan miso soup! TERMINATE\", type='TextMessage')], stop_reason=None)"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"stream = assistant_agent.run_stream(task=\"Write brief meal recipe with broth\")\n",
"await Console(stream)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Custom Memory Stores (Vector DBs, etc.)\n",
"\n",
"You can build on the `Memory` protocol to implement more complex memory stores. For example, you could implement a custom memory store that uses a vector database to store and retrieve information, or a memory store that uses a machine learning model to generate personalized responses based on the user's preferences, and so on.\n",
"\n",
"Specifically, you will need to implement the `add`, `query` and `update_context` methods to provide the desired functionality and pass the memory store to your agent. A skeleton of such a store is sketched below.\n",
"\n",
"Currently, the following example memory stores are available as part of the {py:class}`~autogen_ext` extensions package.\n",
"\n",
"- `autogen_ext.memory.chromadb.ChromaDBVectorMemory`: A memory store that uses a vector database to store and retrieve information.\n",
"\n",
"- `autogen_ext.memory.chromadb.SentenceTransformerEmbeddingFunctionConfig`: A configuration class for the SentenceTransformer embedding function used by the `ChromaDBVectorMemory` store. Note that other embedding functions such as `autogen_ext.memory.openai.OpenAIEmbeddingFunctionConfig` can also be used with the `ChromaDBVectorMemory` store.\n",
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"---------- TextMessage (user) ----------\n",
"What is the weather in New York?\n",
"---------- MemoryQueryEvent (assistant_agent) ----------\n",
"[MemoryContent(content='The weather should be in metric units', mime_type='MemoryMimeType.TEXT', metadata={'type': 'units', 'mime_type': 'MemoryMimeType.TEXT', 'category': 'preferences', 'score': 0.4342840313911438, 'id': 'd7ed6e42-0bf5-4ee8-b5b5-fbe06f583477'})]\n",
"---------- ToolCallRequestEvent (assistant_agent) ----------\n",
"[FunctionCall(id='call_ufpz7LGcn19ZroowyEraj9bd', arguments='{\"city\":\"New York\",\"units\":\"metric\"}', name='get_weather')]\n",
"---------- ToolCallExecutionEvent (assistant_agent) ----------\n",
"[FunctionExecutionResult(content='The weather in New York is 23 °C and Sunny.', name='get_weather', call_id='call_ufpz7LGcn19ZroowyEraj9bd', is_error=False)]\n",
"---------- ToolCallSummaryMessage (assistant_agent) ----------\n",
"The weather in New York is 23 °C and Sunny.\n"
]
}
],
"source": [
"import tempfile\n",
"\n",
"from autogen_agentchat.agents import AssistantAgent\n",
"from autogen_agentchat.ui import Console\n",
"from autogen_core.memory import MemoryContent, MemoryMimeType\n",
"from autogen_ext.memory.chromadb import (\n",
"    ChromaDBVectorMemory,\n",
"    PersistentChromaDBVectorMemoryConfig,\n",
"    SentenceTransformerEmbeddingFunctionConfig,\n",
")\n",
"from autogen_ext.models.openai import OpenAIChatCompletionClient\n",
"\n",
"# Use a temporary directory for ChromaDB persistence\n",
"with tempfile.TemporaryDirectory() as tmpdir:\n",
"    chroma_user_memory = ChromaDBVectorMemory(\n",
"        config=PersistentChromaDBVectorMemoryConfig(\n",
"            collection_name=\"preferences\",\n",
"            persistence_path=tmpdir,  # Use the temp directory here\n",
"            k=2,  # Return top k results\n",
"            score_threshold=0.4,  # Minimum similarity score\n",
"            embedding_function_config=SentenceTransformerEmbeddingFunctionConfig(\n",
"                model_name=\"all-MiniLM-L6-v2\"  # Use default model for testing\n",
"            ),\n",
"        )\n",
"    )\n",
"    # Add user preferences to memory\n",
"    await chroma_user_memory.add(\n",
"        MemoryContent(\n",
"            content=\"The weather should be in metric units\",\n",
"            mime_type=MemoryMimeType.TEXT,\n",
"            metadata={\"category\": \"preferences\", \"type\": \"units\"},\n",
"        )\n",
"    )\n",
"\n",
"    await chroma_user_memory.add(\n",
"        MemoryContent(\n",
"            content=\"Meal recipe must be vegan\",\n",
"            mime_type=MemoryMimeType.TEXT,\n",
"            metadata={\"category\": \"preferences\", \"type\": \"dietary\"},\n",
"        )\n",
"    )\n",
"\n",
"    model_client = OpenAIChatCompletionClient(\n",
"        model=\"gpt-4o\",\n",
"    )\n",
"\n",
"    # Create assistant agent with ChromaDB memory\n",
"    assistant_agent = AssistantAgent(\n",
"        name=\"assistant_agent\",\n",
"        model_client=model_client,\n",
"        tools=[get_weather],\n",
"        memory=[chroma_user_memory],\n",
"    )\n",
"\n",
"    stream = assistant_agent.run_stream(task=\"What is the weather in New York?\")\n",
"    await Console(stream)\n",
"\n",
"    await model_client.close()\n",
"    await chroma_user_memory.close()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that you can also serialize the `ChromaDBVectorMemory` and save it to disk."
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'{\"provider\":\"autogen_ext.memory.chromadb.ChromaDBVectorMemory\",\"component_type\":\"memory\",\"version\":1,\"component_version\":1,\"description\":\"Store and retrieve memory using vector similarity search powered by ChromaDB.\",\"label\":\"ChromaDBVectorMemory\",\"config\":{\"client_type\":\"persistent\",\"collection_name\":\"preferences\",\"distance_metric\":\"cosine\",\"k\":2,\"score_threshold\":0.4,\"allow_reset\":false,\"tenant\":\"default_tenant\",\"database\":\"default_database\",\"embedding_function_config\":{\"function_type\":\"sentence_transformer\",\"model_name\":\"all-MiniLM-L6-v2\"},\"persistence_path\":\"/var/folders/wg/hgs_dt8n5lbd3gx3pq7k6lym0000gn/T/tmp9qcaqchy\"}}'"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chroma_user_memory.dump_component().model_dump_json()"
]
},
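{
"cell_type": "markdown",
"metadata": {},
"source": [
"The serialized component can later be restored. A sketch using the component loading API (assumes the persisted ChromaDB directory still exists):\n",
"\n",
"```python\n",
"from autogen_ext.memory.chromadb import ChromaDBVectorMemory\n",
"\n",
"memory_config = chroma_user_memory.dump_component()\n",
"restored_memory = ChromaDBVectorMemory.load_component(memory_config)\n",
"```"
]
},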
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## RAG Agent: Putting It All Together\n",
"\n",
"The RAG (Retrieval-Augmented Generation) pattern, which is common in building AI systems, encompasses two distinct phases:\n",
"\n",
"1. **Indexing**: Loading documents, chunking them, and storing them in a vector database\n",
"2. **Retrieval**: Finding and using relevant chunks during conversation runtime\n",
"\n",
"In our previous examples, we manually added items to memory and passed them to our agents. In practice, the indexing process is usually automated and based on much larger document sources like product documentation, internal files, or knowledge bases.\n",
"\n",
"> Note: The quality of a RAG system is dependent on the quality of the chunking and retrieval process (models, embeddings, etc.). You may need to experiment with more advanced chunking and retrieval models to get the best results.\n",
"\n",
"### Building a Simple RAG Agent\n",
"\n",
"To begin, let's create a simple document indexer that we will use to load documents, chunk them, and store them in a `ChromaDBVectorMemory` memory store."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import re\n",
"from typing import List\n",
"\n",
"import aiofiles\n",
"import aiohttp\n",
"from autogen_core.memory import Memory, MemoryContent, MemoryMimeType\n",
"\n",
"\n",
"class SimpleDocumentIndexer:\n",
"    \"\"\"Basic document indexer for AutoGen Memory.\"\"\"\n",
"\n",
"    def __init__(self, memory: Memory, chunk_size: int = 1500) -> None:\n",
"        self.memory = memory\n",
"        self.chunk_size = chunk_size\n",
"\n",
"    async def _fetch_content(self, source: str) -> str:\n",
"        \"\"\"Fetch content from a URL or a local file.\"\"\"\n",
"        if source.startswith((\"http://\", \"https://\")):\n",
"            async with aiohttp.ClientSession() as session:\n",
"                async with session.get(source) as response:\n",
"                    return await response.text()\n",
"        else:\n",
"            async with aiofiles.open(source, \"r\", encoding=\"utf-8\") as f:\n",
"                return await f.read()\n",
"\n",
"    def _strip_html(self, text: str) -> str:\n",
"        \"\"\"Remove HTML tags and normalize whitespace.\"\"\"\n",
"        text = re.sub(r\"<[^>]*>\", \" \", text)\n",
"        text = re.sub(r\"\\s+\", \" \", text)\n",
"        return text.strip()\n",
"\n",
"    def _split_text(self, text: str) -> List[str]:\n",
"        \"\"\"Split text into fixed-size chunks.\"\"\"\n",
"        chunks: List[str] = []\n",
"        # Just split text into fixed-size chunks\n",
"        for i in range(0, len(text), self.chunk_size):\n",
"            chunk = text[i : i + self.chunk_size]\n",
"            chunks.append(chunk.strip())\n",
"        return chunks\n",
"\n",
"    async def index_documents(self, sources: List[str]) -> int:\n",
"        \"\"\"Index documents into memory, returning the number of chunks stored.\"\"\"\n",
"        total_chunks = 0\n",
"\n",
"        for source in sources:\n",
"            try:\n",
"                content = await self._fetch_content(source)\n",
"\n",
"                # Strip HTML if content appears to be HTML\n",
"                if \"<\" in content and \">\" in content:\n",
"                    content = self._strip_html(content)\n",
"\n",
"                chunks = self._split_text(content)\n",
"\n",
"                for i, chunk in enumerate(chunks):\n",
"                    await self.memory.add(\n",
"                        MemoryContent(\n",
"                            content=chunk, mime_type=MemoryMimeType.TEXT, metadata={\"source\": source, \"chunk_index\": i}\n",
"                        )\n",
"                    )\n",
"\n",
"                total_chunks += len(chunks)\n",
"\n",
"            except Exception as e:\n",
"                print(f\"Error indexing {source}: {str(e)}\")\n",
"\n",
"        return total_chunks"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's use our indexer with ChromaDBVectorMemory to build a complete RAG agent:\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Indexed 72 chunks from 4 AutoGen documents\n"
]
}
],
"source": [
"import os\n",
"from pathlib import Path\n",
"\n",
"from autogen_agentchat.agents import AssistantAgent\n",
"from autogen_agentchat.ui import Console\n",
"from autogen_ext.memory.chromadb import ChromaDBVectorMemory, PersistentChromaDBVectorMemoryConfig\n",
"from autogen_ext.models.openai import OpenAIChatCompletionClient\n",
"\n",
"# Initialize vector memory\n",
"rag_memory = ChromaDBVectorMemory(\n",
"    config=PersistentChromaDBVectorMemoryConfig(\n",
"        collection_name=\"autogen_docs\",\n",
"        persistence_path=os.path.join(str(Path.home()), \".chromadb_autogen\"),\n",
"        k=3,  # Return top 3 results\n",
"        score_threshold=0.4,  # Minimum similarity score\n",
"    )\n",
")\n",
"\n",
"await rag_memory.clear()  # Clear existing memory\n",
"\n",
"\n",
"# Index AutoGen documentation\n",
"async def index_autogen_docs() -> None:\n",
"    indexer = SimpleDocumentIndexer(memory=rag_memory)\n",
"    sources = [\n",
"        \"https://raw.githubusercontent.com/microsoft/autogen/main/README.md\",\n",
"        \"https://microsoft.github.io/autogen/dev/user-guide/agentchat-user-guide/tutorial/agents.html\",\n",
"        \"https://microsoft.github.io/autogen/dev/user-guide/agentchat-user-guide/tutorial/teams.html\",\n",
"        \"https://microsoft.github.io/autogen/dev/user-guide/agentchat-user-guide/tutorial/termination.html\",\n",
"    ]\n",
"    chunks: int = await indexer.index_documents(sources)\n",
"    print(f\"Indexed {chunks} chunks from {len(sources)} AutoGen documents\")\n",
"\n",
"\n",
"await index_autogen_docs()"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"---------- user ----------\n",
"What is AgentChat?\n",
"Query results: results=[MemoryContent(content='ng OpenAI\\'s GPT-4o model. See [other supported models](https://microsoft.github.io/autogen/stable/user-guide/agentchat-user-guide/tutorial/models.html). ```python import asyncio from autogen_agentchat.agents import AssistantAgent from autogen_ext.models.openai import OpenAIChatCompletionClient async def main() -> None: model_client = OpenAIChatCompletionClient(model=\"gpt-4o\") agent = AssistantAgent(\"assistant\", model_client=model_client) print(await agent.run(task=\"Say \\'Hello World!\\'\")) await model_client.close() asyncio.run(main()) ``` ### Web Browsing Agent Team Create a group chat team with a web surfer agent and a user proxy agent for web browsing tasks. You need to install [playwright](https://playwright.dev/python/docs/library). ```python # pip install -U autogen-agentchat autogen-ext[openai,web-surfer] # playwright install import asyncio from autogen_agentchat.agents import UserProxyAgent from autogen_agentchat.conditions import TextMentionTermination from autogen_agentchat.teams import RoundRobinGroupChat from autogen_agentchat.ui import Console from autogen_ext.models.openai import OpenAIChatCompletionClient from autogen_ext.agents.web_surfer import MultimodalWebSurfer async def main() -> None: model_client = OpenAIChatCompletionClient(model=\"gpt-4o\") # The web surfer will open a Chromium browser window to perform web browsing tasks. web_surfer = MultimodalWebSurfer(\"web_surfer\", model_client, headless=False, animate_actions=True) # The user proxy agent is used to ge', mime_type='MemoryMimeType.TEXT', metadata={'chunk_index': 1, 'mime_type': 'MemoryMimeType.TEXT', 'source': 'https://raw.githubusercontent.com/microsoft/autogen/main/README.md', 'score': 0.48810458183288574, 'id': '16088e03-0153-4da3-9dec-643b39c549f5'}), MemoryContent(content='els_usage=None content='AutoGen is a programming framework for building multi-agent applications.' type='ToolCallSummaryMessage' The call to the on_messages() method returns a Response that contains the agent’s final response in the chat_message attribute, as well as a list of inner messages in the inner_messages attribute, which stores the agent’s “thought process” that led to the final response. Note It is important to note that on_messages() will update the internal state of the agent – it will add the messages to the agent’s history. So you should call this method with new messages. You should not repeatedly call this method with the same messages or the complete history. Note Unlike in v0.2 AgentChat, the tools are executed by the same agent directly within the same call to on_messages() . By default, the agent will return the result of the tool call as the final response. You can also call the run() method, which is a convenience method that calls on_messages() . It follows the same interface as Teams and returns a TaskResult object. Multi-Modal Input # The AssistantAgent can handle multi-modal input by providing the input as a MultiModalMessage . from io import BytesIO import PIL import requests from autogen_agentchat.messages import MultiModalMessage from autogen_core import Image # Create a multi-modal message with random image and text. pil_image = PIL . Image . open ( BytesIO ( requests . get ( "https://picsum.photos/300/200" ) . 
content )', mime_type='MemoryMimeType.TEXT', metadata={'chunk_index': 3, 'mime_type': 'MemoryMimeType.TEXT', 'source': 'https://microsoft.github.io/autogen/dev/user-guide/agentchat-user-guide/tutorial/agents.html', 'score': 0.4665141701698303, 'id': '3d603b62-7cab-4f74-b671-586fe36306f2'}), MemoryContent(content='AgentChat Termination Termination # In the previous section, we explored how to define agents, and organize them into teams that can solve tasks. However, a run can go on forever, and in many cases, we need to know when to stop them. This is the role of the termination condition. AgentChat supports several termination condition by providing a base TerminationCondition class and several implementations that inherit from it. A termination condition is a callable that takes a sequence of BaseAgentEvent or BaseChatMessage objects since the last time the condition was called , and returns a StopMessage if the conversation should be terminated, or None otherwise. Once a termination condition has been reached, it must be reset by calling reset() before it can be used again. Some important things to note about termination conditions: They are stateful but reset automatically after each run ( run() or run_stream() ) is finished. They can be combined using the AND and OR operators. Note For group chat teams (i.e., RoundRobinGroupChat , SelectorGroupChat , and Swarm ), the termination condition is called after each agent responds. While a response may contain multiple inner messages, the team calls its termination condition just once for all the messages from a single response. So the condition is called with the “delta sequence” of messages since the last time it was called. Built-In Termination Conditions: MaxMessageTermination : Stops after a specified number of messages have been produced,', mime_type='MemoryMimeType.TEXT', metadata={'chunk_index': 1, 'mime_type': 'MemoryMimeType.TEXT', 'source': 'https://microsoft.github.io/autogen/dev/user-guide/agentchat-user-guide/tutorial/termination.html', 'score': 0.461774212772051, 'id': '699ef490-d108-4cd3-b629-c1198d6b78ba'})]\n",
|
||
"---------- rag_assistant ----------\n",
|
||
"[MemoryContent(content='ng OpenAI\\'s GPT-4o model. See [other supported models](https://microsoft.github.io/autogen/stable/user-guide/agentchat-user-guide/tutorial/models.html). ```python import asyncio from autogen_agentchat.agents import AssistantAgent from autogen_ext.models.openai import OpenAIChatCompletionClient async def main() -> None: model_client = OpenAIChatCompletionClient(model=\"gpt-4o\") agent = AssistantAgent(\"assistant\", model_client=model_client) print(await agent.run(task=\"Say \\'Hello World!\\'\")) await model_client.close() asyncio.run(main()) ``` ### Web Browsing Agent Team Create a group chat team with a web surfer agent and a user proxy agent for web browsing tasks. You need to install [playwright](https://playwright.dev/python/docs/library). ```python # pip install -U autogen-agentchat autogen-ext[openai,web-surfer] # playwright install import asyncio from autogen_agentchat.agents import UserProxyAgent from autogen_agentchat.conditions import TextMentionTermination from autogen_agentchat.teams import RoundRobinGroupChat from autogen_agentchat.ui import Console from autogen_ext.models.openai import OpenAIChatCompletionClient from autogen_ext.agents.web_surfer import MultimodalWebSurfer async def main() -> None: model_client = OpenAIChatCompletionClient(model=\"gpt-4o\") # The web surfer will open a Chromium browser window to perform web browsing tasks. web_surfer = MultimodalWebSurfer(\"web_surfer\", model_client, headless=False, animate_actions=True) # The user proxy agent is used to ge', mime_type='MemoryMimeType.TEXT', metadata={'chunk_index': 1, 'mime_type': 'MemoryMimeType.TEXT', 'source': 'https://raw.githubusercontent.com/microsoft/autogen/main/README.md', 'score': 0.48810458183288574, 'id': '16088e03-0153-4da3-9dec-643b39c549f5'}), MemoryContent(content='els_usage=None content='AutoGen is a programming framework for building multi-agent applications.' type='ToolCallSummaryMessage' The call to the on_messages() method returns a Response that contains the agent’s final response in the chat_message attribute, as well as a list of inner messages in the inner_messages attribute, which stores the agent’s “thought process” that led to the final response. Note It is important to note that on_messages() will update the internal state of the agent – it will add the messages to the agent’s history. So you should call this method with new messages. You should not repeatedly call this method with the same messages or the complete history. Note Unlike in v0.2 AgentChat, the tools are executed by the same agent directly within the same call to on_messages() . By default, the agent will return the result of the tool call as the final response. You can also call the run() method, which is a convenience method that calls on_messages() . It follows the same interface as Teams and returns a TaskResult object. Multi-Modal Input # The AssistantAgent can handle multi-modal input by providing the input as a MultiModalMessage . from io import BytesIO import PIL import requests from autogen_agentchat.messages import MultiModalMessage from autogen_core import Image # Create a multi-modal message with random image and text. pil_image = PIL . Image . open ( BytesIO ( requests . get ( "https://picsum.photos/300/200" ) . 
content )', mime_type='MemoryMimeType.TEXT', metadata={'chunk_index': 3, 'mime_type': 'MemoryMimeType.TEXT', 'source': 'https://microsoft.github.io/autogen/dev/user-guide/agentchat-user-guide/tutorial/agents.html', 'score': 0.4665141701698303, 'id': '3d603b62-7cab-4f74-b671-586fe36306f2'}), MemoryContent(content='AgentChat Termination Termination # In the previous section, we explored how to define agents, and organize them into teams that can solve tasks. However, a run can go on forever, and in many cases, we need to know when to stop them. This is the role of the termination condition. AgentChat supports several termination condition by providing a base TerminationCondition class and several implementations that inherit from it. A termination condition is a callable that takes a sequenceBaseChatMessageent or BaseChatMessage objects since the last time the condition was called , and returns a StopMessage if the conversation should be terminated, or None otherwise. Once a termination condition has been reached, it must be reset by calling reset() before it can be used again. Some important things to note about termination conditions: They are stateful but reset automatically after each run ( run() or run_stream() ) is finished. They can be combined using the AND and OR operators. Note For group chat teams (i.e., RoundRobinGroupChat , SelectorGroupChat , and Swarm ), the termination condition is called after each agent responds. While a response may contain multiple inner messages, the team calls its termination condition just once for all the messages from a single response. So the condition is called with the “delta sequence” of messages since the last time it was called. Built-In Termination Conditions: MaxMessageTermination : Stops after a specified number of messages have been produced,', mime_type='MemoryMimeType.TEXT', metadata={'chunk_index': 1, 'mime_type': 'MemoryMimeType.TEXT', 'source': 'https://microsoft.github.io/autogen/dev/user-guide/agentchat-user-guide/tutorial/termination.html', 'score': 0.461774212772051, 'id': '699ef490-d108-4cd3-b629-c1198d6b78ba'})]\n",
|
||
"---------- rag_assistant ----------\n",
|
||
"AgentChat is part of the AutoGen framework, a programming environment for building multi-agent applications. In AgentChat, agents can interact with each other and with users to perform various tasks, including web browsing and engaging in dialogue. It utilizes models from OpenAI for chat completions and supports multi-modal input, which means agents can handle inputs that include both text and images. Additionally, AgentChat provides mechanisms to define termination conditions to control when a conversation or task should be concluded, ensuring that the agent interactions are efficient and goal-oriented. TERMINATE\n"
]
}
],
"source": [
"# Create our RAG assistant agent\n",
"rag_assistant = AssistantAgent(\n",
"    name=\"rag_assistant\", model_client=OpenAIChatCompletionClient(model=\"gpt-4o\"), memory=[rag_memory]\n",
")\n",
"\n",
"# Ask questions about AutoGen\n",
"stream = rag_assistant.run_stream(task=\"What is AgentChat?\")\n",
"await Console(stream)\n",
"\n",
"# Remember to close the memory when done\n",
"await rag_memory.close()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This implementation provides a RAG agent that can answer questions based on AutoGen documentation. When a question is asked, the Memory system retrieves relevant chunks and adds them to the context, enabling the assistant to generate informed responses.\n",
"\n",
"For production systems, you might want to:\n",
"1. Implement more sophisticated chunking strategies\n",
"2. Add metadata filtering capabilities\n",
"3. Customize the retrieval scoring\n",
"4. Optimize embedding models for your specific domain\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Mem0Memory Example\n",
"\n",
"`autogen_ext.memory.mem0.Mem0Memory` provides integration with `Mem0.ai`'s memory system. It supports both cloud-based and local backends, offering advanced memory capabilities for agents. The implementation handles proper retrieval and context updating, making it suitable for production environments.\n",
"\n",
"In the following example, we'll demonstrate how to use `Mem0Memory` to maintain persistent memories across conversations:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from autogen_agentchat.agents import AssistantAgent\n",
"from autogen_agentchat.ui import Console\n",
"from autogen_core.memory import MemoryContent, MemoryMimeType\n",
"from autogen_ext.memory.mem0 import Mem0Memory\n",
"from autogen_ext.models.openai import OpenAIChatCompletionClient\n",
"\n",
"# Initialize Mem0 cloud memory (requires API key)\n",
"# For local deployment, use is_cloud=False with appropriate config\n",
"mem0_memory = Mem0Memory(\n",
"    is_cloud=True,\n",
"    limit=5,  # Maximum number of memories to retrieve\n",
")\n",
"\n",
"# Add user preferences to memory\n",
"await mem0_memory.add(\n",
"    MemoryContent(\n",
"        content=\"The weather should be in metric units\",\n",
"        mime_type=MemoryMimeType.TEXT,\n",
"        metadata={\"category\": \"preferences\", \"type\": \"units\"},\n",
"    )\n",
")\n",
"\n",
"await mem0_memory.add(\n",
"    MemoryContent(\n",
"        content=\"Meal recipe must be vegan\",\n",
"        mime_type=MemoryMimeType.TEXT,\n",
"        metadata={\"category\": \"preferences\", \"type\": \"dietary\"},\n",
"    )\n",
")\n",
"\n",
"# Create assistant with mem0 memory\n",
"assistant_agent = AssistantAgent(\n",
"    name=\"assistant_agent\",\n",
"    model_client=OpenAIChatCompletionClient(\n",
"        model=\"gpt-4o-2024-08-06\",\n",
"    ),\n",
"    tools=[get_weather],\n",
"    memory=[mem0_memory],\n",
")\n",
"\n",
"# Ask about stored dietary preferences\n",
"stream = assistant_agent.run_stream(task=\"What are my dietary preferences?\")\n",
"await Console(stream)"
]
},
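{
"cell_type": "markdown",
"metadata": {},
"source": [
"For a local (self-hosted) deployment you would pass `is_cloud=False` together with a Mem0-style configuration dictionary. A sketch, assuming a local Qdrant vector store (the exact `config` keys follow Mem0's configuration format and should be verified against the Mem0 documentation):\n",
"\n",
"```python\n",
"local_mem0_memory = Mem0Memory(\n",
"    is_cloud=False,\n",
"    config={  # keys below are assumptions; check the Mem0 docs\n",
"        \"vector_store\": {\n",
"            \"provider\": \"qdrant\",\n",
"            \"config\": {\"host\": \"localhost\", \"port\": 6333},\n",
"        }\n",
"    },\n",
"    limit=5,\n",
")\n",
"```"
]
},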
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The example above demonstrates how `Mem0Memory` can be used with an assistant agent. The memory integration ensures that:\n",
"\n",
"1. All agent interactions are stored in Mem0 for future reference\n",
"2. Relevant memories (like user preferences) are automatically retrieved and added to the context\n",
"3. The agent can maintain consistent behavior based on stored memories\n",
"\n",
"`Mem0Memory` is particularly useful for:\n",
"- Long-running agent deployments that need persistent memory\n",
"- Applications requiring enhanced privacy controls\n",
"- Teams wanting unified memory management across agents\n",
"- Use cases needing advanced memory filtering and analytics\n",
"\n",
"Just like `ChromaDBVectorMemory`, you can serialize `Mem0Memory` configurations:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Serialize the memory configuration\n",
"config_json = mem0_memory.dump_component().model_dump_json()\n",
"print(f\"Memory config JSON: {config_json[:100]}...\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.13"
}
},
"nbformat": 4,
"nbformat_minor": 2
}