{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Visualizing the knowledge graph with `yfiles-jupyter-graphs`\n",
"\n",
"This notebook is a partial copy of [local_search.ipynb](../../local_search.ipynb) that shows how to use `yfiles-jupyter-graphs` to add interactive graph visualizations of the parquet files and how to visualize the result context of `graphrag` queries (see at the end of this notebook)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Copyright (c) 2024 Microsoft Corporation.\n",
"# Licensed under the MIT License."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"import pandas as pd\n",
"import tiktoken\n",
"\n",
"from graphrag.query.context_builder.entity_extraction import EntityVectorStoreKey\n",
"from graphrag.query.indexer_adapters import (\n",
" read_indexer_covariates,\n",
" read_indexer_entities,\n",
" read_indexer_relationships,\n",
" read_indexer_reports,\n",
" read_indexer_text_units,\n",
")\n",
"from graphrag.query.llm.oai.chat_openai import ChatOpenAI\n",
"from graphrag.query.llm.oai.embedding import OpenAIEmbedding\n",
"from graphrag.query.llm.oai.typing import OpenaiApiType\n",
"from graphrag.query.structured_search.local_search.mixed_context import (\n",
" LocalSearchMixedContext,\n",
")\n",
"from graphrag.query.structured_search.local_search.search import LocalSearch\n",
"from graphrag.vector_stores.lancedb import LanceDBVectorStore"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Local Search Example\n",
"\n",
"Local search method generates answers by combining relevant data from the AI-extracted knowledge-graph with text chunks of the raw documents. This method is suitable for questions that require an understanding of specific entities mentioned in the documents (e.g. What are the healing properties of chamomile?)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Load text units and graph data tables as context for local search\n",
"\n",
"- In this test we first load indexing outputs from parquet files to dataframes, then convert these dataframes into collections of data objects aligning with the knowledge model."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Load tables to dataframes"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"INPUT_DIR = \"../../inputs/operation dulce\"\n",
"LANCEDB_URI = f\"{INPUT_DIR}/lancedb\"\n",
"\n",
"COMMUNITY_REPORT_TABLE = \"community_reports\"\n",
"COMMUNITY_TABLE = \"communities\"\n",
"ENTITY_TABLE = \"entities\"\n",
"RELATIONSHIP_TABLE = \"relationships\"\n",
"COVARIATE_TABLE = \"covariates\"\n",
"TEXT_UNIT_TABLE = \"text_units\"\n",
"COMMUNITY_LEVEL = 2"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Read entities"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# read nodes table to get community and degree data\n",
"entity_df = pd.read_parquet(f\"{INPUT_DIR}/{ENTITY_TABLE}.parquet\")\n",
"community_df = pd.read_parquet(f\"{INPUT_DIR}/{COMMUNITY_TABLE}.parquet\")"
]
},
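{
"cell_type": "markdown",
"metadata": {},
"source": [
"Optionally preview the loaded dataframe to see which columns are available for the visualization mappings used later in this notebook:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"entity_df.head()"
]
},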
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Read relationships"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"relationship_df = pd.read_parquet(f\"{INPUT_DIR}/{RELATIONSHIP_TABLE}.parquet\")\n",
"relationships = read_indexer_relationships(relationship_df)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Visualizing nodes and relationships with `yfiles-jupyter-graphs`\n",
"\n",
"`yfiles-jupyter-graphs` is a graph visualization extension that provides interactive and customizable visualizations for structured node and relationship data.\n",
"\n",
"In this case, we use it to provide an interactive visualization for the knowledge graph of the [local_search.ipynb](../../local_search.ipynb) sample by passing node and relationship lists converted from the given parquet files. The requirements for the input data is an `id` attribute for the nodes and `start`/`end` properties for the relationships that correspond to the node ids. Additional attributes can be added in the `properties` of each node/relationship dict:"
]
},
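{
"cell_type": "markdown",
"metadata": {},
"source": [
"For illustration, a minimal hand-written input (with hypothetical node titles) could look like this:\n",
"\n",
"```python\n",
"nodes = [\n",
"    {\"id\": \"ALPHA\", \"properties\": {\"title\": \"ALPHA\", \"community\": 0}},\n",
"    {\"id\": \"BETA\", \"properties\": {\"title\": \"BETA\", \"community\": 1}},\n",
"]\n",
"edges = [\n",
"    {\"start\": \"ALPHA\", \"end\": \"BETA\", \"properties\": {\"weight\": 1.0}},\n",
"]\n",
"```\n",
"\n",
"The next cell builds these lists from the `entity_df` and `relationship_df` dataframes instead:"
]
},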
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install yfiles_jupyter_graphs --quiet\n",
"from yfiles_jupyter_graphs import GraphWidget\n",
"\n",
"\n",
"# converts the entities dataframe to a list of dicts for yfiles-jupyter-graphs\n",
"def convert_entities_to_dicts(df):\n",
" \"\"\"Convert the entities dataframe to a list of dicts for yfiles-jupyter-graphs.\"\"\"\n",
" nodes_dict = {}\n",
" for _, row in df.iterrows():\n",
" # Create a dictionary for each row and collect unique nodes\n",
" node_id = row[\"title\"]\n",
" if node_id not in nodes_dict:\n",
" nodes_dict[node_id] = {\n",
" \"id\": node_id,\n",
" \"properties\": row.to_dict(),\n",
" }\n",
" return list(nodes_dict.values())\n",
"\n",
"\n",
"# converts the relationships dataframe to a list of dicts for yfiles-jupyter-graphs\n",
"def convert_relationships_to_dicts(df):\n",
" \"\"\"Convert the relationships dataframe to a list of dicts for yfiles-jupyter-graphs.\"\"\"\n",
" relationships = []\n",
" for _, row in df.iterrows():\n",
" # Create a dictionary for each row\n",
" relationships.append({\n",
" \"start\": row[\"source\"],\n",
" \"end\": row[\"target\"],\n",
" \"properties\": row.to_dict(),\n",
" })\n",
" return relationships\n",
"\n",
"\n",
"w = GraphWidget()\n",
"w.directed = True\n",
"w.nodes = convert_entities_to_dicts(entity_df)\n",
"w.edges = convert_relationships_to_dicts(relationship_df)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Configure data-driven visualization\n",
"\n",
"The additional properties can be used to configure the visualization for different use cases."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# show title on the node\n",
"w.node_label_mapping = \"title\"\n",
"\n",
"\n",
"# map community to a color\n",
"def community_to_color(community):\n",
" \"\"\"Map a community to a color.\"\"\"\n",
" colors = [\n",
" \"crimson\",\n",
" \"darkorange\",\n",
" \"indigo\",\n",
" \"cornflowerblue\",\n",
" \"cyan\",\n",
" \"teal\",\n",
" \"green\",\n",
" ]\n",
" return (\n",
" colors[int(community) % len(colors)] if community is not None else \"lightgray\"\n",
" )\n",
"\n",
"\n",
"def edge_to_source_community(edge):\n",
" \"\"\"Get the community of the source node of an edge.\"\"\"\n",
" source_node = next(\n",
" (entry for entry in w.nodes if entry[\"properties\"][\"title\"] == edge[\"start\"]),\n",
" None,\n",
" )\n",
" source_node_community = source_node[\"properties\"][\"community\"]\n",
" return source_node_community if source_node_community is not None else None\n",
"\n",
"\n",
"w.node_color_mapping = lambda node: community_to_color(node[\"properties\"][\"community\"])\n",
"w.edge_color_mapping = lambda edge: community_to_color(edge_to_source_community(edge))\n",
"# map size data to a reasonable factor\n",
"w.node_scale_factor_mapping = lambda node: 0.5 + node[\"properties\"][\"size\"] * 1.5 / 20\n",
"# use weight for edge thickness\n",
"w.edge_thickness_factor_mapping = \"weight\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Automatic layouts\n",
"\n",
"The widget provides different automatic layouts that serve different purposes: `Circular`, `Hierarchic`, `Organic (interactiv or static)`, `Orthogonal`, `Radial`, `Tree`, `Geo-spatial`.\n",
"\n",
"For the knowledge graph, this sample uses the `Circular` layout, though `Hierarchic` or `Organic` are also suitable choices."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Use the circular layout for this visualization. For larger graphs, the default organic layout is often preferrable.\n",
"w.circular_layout()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Display the graph"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"display(w)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Visualizing the result context of `graphrag` queries\n",
"\n",
"The result context of `graphrag` queries allow to inspect the context graph of the request. This data can similarly be visualized as graph with `yfiles-jupyter-graphs`.\n",
"\n",
"## Making the request\n",
"\n",
"The following cell recreates the sample queries from [local_search.ipynb](../../local_search.ipynb)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# setup (see also ../../local_search.ipynb)\n",
"entities = read_indexer_entities(entity_df, community_df, COMMUNITY_LEVEL)\n",
"\n",
"description_embedding_store = LanceDBVectorStore(\n",
" collection_name=\"default-entity-description\",\n",
")\n",
"description_embedding_store.connect(db_uri=LANCEDB_URI)\n",
"covariate_df = pd.read_parquet(f\"{INPUT_DIR}/{COVARIATE_TABLE}.parquet\")\n",
"claims = read_indexer_covariates(covariate_df)\n",
"covariates = {\"claims\": claims}\n",
"report_df = pd.read_parquet(f\"{INPUT_DIR}/{COMMUNITY_REPORT_TABLE}.parquet\")\n",
"reports = read_indexer_reports(report_df, community_df, COMMUNITY_LEVEL)\n",
"text_unit_df = pd.read_parquet(f\"{INPUT_DIR}/{TEXT_UNIT_TABLE}.parquet\")\n",
"text_units = read_indexer_text_units(text_unit_df)\n",
"\n",
"api_key = os.environ[\"GRAPHRAG_API_KEY\"]\n",
"llm_model = os.environ[\"GRAPHRAG_LLM_MODEL\"]\n",
"embedding_model = os.environ[\"GRAPHRAG_EMBEDDING_MODEL\"]\n",
"\n",
"llm = ChatOpenAI(\n",
" api_key=api_key,\n",
" model=llm_model,\n",
" api_type=OpenaiApiType.OpenAI, # OpenaiApiType.OpenAI or OpenaiApiType.AzureOpenAI\n",
" max_retries=20,\n",
")\n",
"\n",
"token_encoder = tiktoken.get_encoding(\"cl100k_base\")\n",
"\n",
"text_embedder = OpenAIEmbedding(\n",
" api_key=api_key,\n",
" api_base=None,\n",
" api_type=OpenaiApiType.OpenAI,\n",
" model=embedding_model,\n",
" deployment_name=embedding_model,\n",
" max_retries=20,\n",
")\n",
"\n",
"context_builder = LocalSearchMixedContext(\n",
" community_reports=reports,\n",
" text_units=text_units,\n",
" entities=entities,\n",
" relationships=relationships,\n",
" covariates=covariates,\n",
" entity_text_embeddings=description_embedding_store,\n",
" embedding_vectorstore_key=EntityVectorStoreKey.ID, # if the vectorstore uses entity title as ids, set this to EntityVectorStoreKey.TITLE\n",
" text_embedder=text_embedder,\n",
" token_encoder=token_encoder,\n",
")\n",
"\n",
"local_context_params = {\n",
" \"text_unit_prop\": 0.5,\n",
" \"community_prop\": 0.1,\n",
" \"conversation_history_max_turns\": 5,\n",
" \"conversation_history_user_turns_only\": True,\n",
" \"top_k_mapped_entities\": 10,\n",
" \"top_k_relationships\": 10,\n",
" \"include_entity_rank\": True,\n",
" \"include_relationship_weight\": True,\n",
" \"include_community_rank\": False,\n",
" \"return_candidate_context\": False,\n",
" \"embedding_vectorstore_key\": EntityVectorStoreKey.ID, # set this to EntityVectorStoreKey.TITLE if the vectorstore uses entity title as ids\n",
" \"max_tokens\": 12_000, # change this based on the token limit you have on your model (if you are using a model with 8k limit, a good setting could be 5000)\n",
"}\n",
"\n",
"llm_params = {\n",
" \"max_tokens\": 2_000, # change this based on the token limit you have on your model (if you are using a model with 8k limit, a good setting could be 1000=1500)\n",
" \"temperature\": 0.0,\n",
"}\n",
"\n",
"search_engine = LocalSearch(\n",
" llm=llm,\n",
" context_builder=context_builder,\n",
" token_encoder=token_encoder,\n",
" llm_params=llm_params,\n",
" context_builder_params=local_context_params,\n",
" response_type=\"multiple paragraphs\", # free form text describing the response type and format, can be anything, e.g. prioritized list, single paragraph, multiple paragraphs, multiple-page report\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Run local search on sample queries"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"result = await search_engine.search(\"Tell me about Agent Mercer\")\n",
"print(result.response)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"question = \"Tell me about Dr. Jordan Hayes\"\n",
"result = await search_engine.search(question)\n",
"print(result.response)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Inspecting the context data used to generate the response"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"result.context_data[\"entities\"].head()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"result.context_data[\"relationships\"].head()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Visualizing the result context as graph"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"\"\"\"\n",
"Helper function to visualize the result context with `yfiles-jupyter-graphs`.\n",
"\n",
"The dataframes are converted into supported nodes and relationships lists and then passed to yfiles-jupyter-graphs.\n",
"Additionally, some values are mapped to visualization properties.\n",
"\"\"\"\n",
"\n",
"\n",
"def show_graph(result):\n",
" \"\"\"Visualize the result context with yfiles-jupyter-graphs.\"\"\"\n",
" from yfiles_jupyter_graphs import GraphWidget\n",
"\n",
" if (\n",
" \"entities\" not in result.context_data\n",
" or \"relationships\" not in result.context_data\n",
" ):\n",
" msg = \"The passed results do not contain 'entities' or 'relationships'\"\n",
" raise ValueError(msg)\n",
"\n",
" # converts the entities dataframe to a list of dicts for yfiles-jupyter-graphs\n",
" def convert_entities_to_dicts(df):\n",
" \"\"\"Convert the entities dataframe to a list of dicts for yfiles-jupyter-graphs.\"\"\"\n",
" nodes_dict = {}\n",
" for _, row in df.iterrows():\n",
" # Create a dictionary for each row and collect unique nodes\n",
" node_id = row[\"entity\"]\n",
" if node_id not in nodes_dict:\n",
" nodes_dict[node_id] = {\n",
" \"id\": node_id,\n",
" \"properties\": row.to_dict(),\n",
" }\n",
" return list(nodes_dict.values())\n",
"\n",
" # converts the relationships dataframe to a list of dicts for yfiles-jupyter-graphs\n",
" def convert_relationships_to_dicts(df):\n",
" \"\"\"Convert the relationships dataframe to a list of dicts for yfiles-jupyter-graphs.\"\"\"\n",
" relationships = []\n",
" for _, row in df.iterrows():\n",
" # Create a dictionary for each row\n",
" relationships.append({\n",
" \"start\": row[\"source\"],\n",
" \"end\": row[\"target\"],\n",
" \"properties\": row.to_dict(),\n",
" })\n",
" return relationships\n",
"\n",
" w = GraphWidget()\n",
" # use the converted data to visualize the graph\n",
" w.nodes = convert_entities_to_dicts(result.context_data[\"entities\"])\n",
" w.edges = convert_relationships_to_dicts(result.context_data[\"relationships\"])\n",
" w.directed = True\n",
" # show title on the node\n",
" w.node_label_mapping = \"entity\"\n",
" # use weight for edge thickness\n",
" w.edge_thickness_factor_mapping = \"weight\"\n",
" display(w)\n",
"\n",
"\n",
"show_graph(result)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.0"
}
},
"nbformat": 4,
"nbformat_minor": 4
}