diff --git a/README.md b/README.md
index de8a2f7c..b17d46fd 100644
--- a/README.md
+++ b/README.md
@@ -1,6 +1,5 @@
# GraphRAG
-👉 [Use the GraphRAG Accelerator solution](https://github.com/Azure-Samples/graphrag-accelerator)
👉 [Microsoft Research Blog Post](https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/)
👉 [Read the docs](https://microsoft.github.io/graphrag)
👉 [GraphRAG Arxiv](https://arxiv.org/pdf/2404.16130)
@@ -28,7 +27,7 @@ To learn more about GraphRAG and how it can be used to enhance your LLM's abilit
## Quickstart
-To get started with the GraphRAG system we recommend trying the [Solution Accelerator](https://github.com/Azure-Samples/graphrag-accelerator) package. This provides a user-friendly end-to-end experience with Azure resources.
+To get started with the GraphRAG system, we recommend trying the [command line quickstart](https://microsoft.github.io/graphrag/get_started/).
## Repository Guidance
diff --git a/breaking-changes.md b/breaking-changes.md
index 36b98028..4c0505d6 100644
--- a/breaking-changes.md
+++ b/breaking-changes.md
@@ -12,6 +12,12 @@ There are five surface areas that may be impacted on any given release. They are
> TL;DR: Always run `graphrag init --path [path] --force` between minor version bumps to ensure you have the latest config format. Run the provided migration notebook between major version bumps if you want to avoid re-indexing prior datasets. Note that this will overwrite your configuration and prompts, so backup if necessary.
+# v2
+
+Run the [migration notebook](./docs/examples_notebooks/index_migration_to_v2.ipynb) to convert older tables to the v2 format.
+
+The v2 release renamed all of our index tables so that each table is named simply for the items it contains. The previous naming was a leftover requirement of our use of DataShaper, which is no longer necessary.
+
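+As an illustration of the rename, the `create_final_*` prefix is dropped so each table is named for its contents (a partial, assumed mapping; verify against the migration notebook and your own output folder):
+
+```
+create_final_entities.parquet          -> entities.parquet
+create_final_relationships.parquet     -> relationships.parquet
+create_final_communities.parquet       -> communities.parquet
+create_final_community_reports.parquet -> community_reports.parquet
+create_final_text_units.parquet        -> text_units.parquet
+create_final_documents.parquet         -> documents.parquet
+```
+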
# v1
Run the [migration notebook](./docs/examples_notebooks/index_migration_to_v1.ipynb) to convert older tables to the v1 format.
@@ -27,7 +33,7 @@ All of the breaking changes listed below are accounted for in the four steps abo
- Alignment of fields from `create_final_entities` (such as name -> title) with `create_final_nodes`, and removal of redundant content across these tables
- Rename of `document.raw_content` to `document.text`
- Rename of `entity.name` to `entity.title`
- - Rename `rank` to `combined_degree` in `create_final_relationships` and removal of `source_degree` and `target_degree`fields
+ - Rename `rank` to `combined_degree` in `create_final_relationships` and removal of `source_degree` and `target_degree` fields
- Fixed community tables to use a proper UUID for the `id` field, and retain `community` and `human_readable_id` for the short IDs
- Removal of all embeddings columns from parquet files in favor of direct vector store writes
diff --git a/docs/config/env_vars.md b/docs/config/env_vars.md
index 26617cf7..0c2eb6ab 100644
--- a/docs/config/env_vars.md
+++ b/docs/config/env_vars.md
@@ -4,7 +4,7 @@ As of version 1.3, GraphRAG no longer supports a full complement of pre-built en
The only standard environment variable we expect, and include in the default settings.yml, is `GRAPHRAG_API_KEY`. If you are already using a number of the previous GRAPHRAG_* environment variables, you can insert them with template syntax into settings.yml and they will be adopted.
-> **The environment variables below are documented as an aid for migration, but they WILL NOT be read unless you use template syntax in your settings.yml.**
+> **The environment variables below are documented as an aid for migration, but they WILL NOT be read unless you use template syntax in your settings.yml. We also WILL NOT be updating this page as the main config object changes.**
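+
+For example, a previously-used variable such as `GRAPHRAG_LLM_MODEL` can still be adopted by referencing it with template syntax in settings.yml (field placement here is illustrative; adjust to your own config):
+
+```yaml
+models:
+  default_chat_model:
+    api_key: ${GRAPHRAG_API_KEY}
+    model: ${GRAPHRAG_LLM_MODEL}
+```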
---
diff --git a/docs/config/yaml.md b/docs/config/yaml.md
index 6e578b19..6c61ffe8 100644
--- a/docs/config/yaml.md
+++ b/docs/config/yaml.md
@@ -40,7 +40,7 @@ models:
#### Fields
- `api_key` **str** - The OpenAI API key to use.
-- `auth_type` **api_key|managed_identity** - Indicate how you want to authenticate requests.
+- `auth_type` **api_key|azure_managed_identity** - Indicate how you want to authenticate requests.
- `type` **openai_chat|azure_openai_chat|openai_embedding|azure_openai_embedding|mock_chat|mock_embeddings** - The type of LLM to use.
- `model` **str** - The model name.
- `encoding_model` **str** - The text encoding model to use. Default is to use the encoding model aligned with the language model (i.e., it is retrieved from tiktoken if unset).
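+
+A minimal sketch of a model entry using these fields (values are illustrative, assuming a standard OpenAI chat model; `default_chat_model` is a conventional key name):
+
+```yaml
+models:
+  default_chat_model:
+    type: openai_chat
+    auth_type: api_key
+    api_key: ${GRAPHRAG_API_KEY}
+    model: gpt-4o
+```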
@@ -73,16 +73,18 @@ models:
### input
-Our pipeline can ingest .csv, .txt, or .json data from an input folder. See the [inputs page](../index/inputs.md) for more details and examples.
+Our pipeline can ingest .csv, .txt, or .json data from an input location. See the [inputs page](../index/inputs.md) for more details and examples.
#### Fields
-- `type` **file|blob** - The input type to use. Default=`file`
+- `storage` **StorageConfig**
+ - `type` **file|blob|cosmosdb** - The storage type to use. Default=`file`
+ - `base_dir` **str** - The base directory to read input from, relative to the root.
+ - `connection_string` **str** - (blob/cosmosdb only) The Azure Storage connection string.
+ - `container_name` **str** - (blob/cosmosdb only) The Azure Storage container name.
+ - `storage_account_blob_url` **str** - (blob only) The storage account blob URL to use.
+ - `cosmosdb_account_blob_url` **str** - (cosmosdb only) The CosmosDB account blob URL to use.
- `file_type` **text|csv|json** - The type of input data to load. Default is `text`
-- `base_dir` **str** - The base directory to read input from, relative to the root.
-- `connection_string` **str** - (blob only) The Azure Storage connection string.
-- `storage_account_blob_url` **str** - The storage account blob URL to use.
-- `container_name` **str** - (blob only) The Azure Storage container name.
- `encoding` **str** - The encoding of the input file. Default is `utf-8`
- `file_pattern` **str** - A regex to match input files. Default is `.*\.csv$`, `.*\.txt$`, or `.*\.json$` depending on the specified `file_type`, but you can customize it if needed.
- `file_filter` **dict** - Key/value pairs to filter. Default is None.
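+
+A minimal sketch of the new nested input configuration with file-based defaults (values are illustrative):
+
+```yaml
+input:
+  storage:
+    type: file
+    base_dir: "input"
+  file_type: text
+  encoding: utf-8
+```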
diff --git a/docs/get_started.md b/docs/get_started.md
index 5fb8427f..895e8941 100644
--- a/docs/get_started.md
+++ b/docs/get_started.md
@@ -6,7 +6,6 @@
To get started with the GraphRAG system, you have a few options:
-👉 [Use the GraphRAG Accelerator solution](https://github.com/Azure-Samples/graphrag-accelerator)
👉 [Install from pypi](https://pypi.org/project/graphrag/).
👉 [Use it from source](developing.md)
diff --git a/docs/index.md b/docs/index.md
index f3cb76d2..be97b43d 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -1,7 +1,6 @@
# Welcome to GraphRAG
👉 [Microsoft Research Blog Post](https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/)
-👉 [GraphRAG Accelerator](https://github.com/Azure-Samples/graphrag-accelerator)
👉 [GraphRAG Arxiv](https://arxiv.org/pdf/2404.16130)
@@ -16,10 +15,6 @@ approaches using plain text snippets. The GraphRAG process involves extracting a
To learn more about GraphRAG and how it can be used to enhance your language model's ability to reason about your private data, please visit the [Microsoft Research Blog Post](https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/).
-## Solution Accelerator 🚀
-
-To quickstart the GraphRAG system we recommend trying the [Solution Accelerator](https://github.com/Azure-Samples/graphrag-accelerator) package. This provides a user-friendly end-to-end experience with Azure resources.
-
## Get Started with GraphRAG 🚀
To start using GraphRAG, check out the [_Get Started_](get_started.md) guide.
diff --git a/docs/index/byog.md b/docs/index/byog.md
index afcc2d58..781b9224 100644
--- a/docs/index/byog.md
+++ b/docs/index/byog.md
@@ -52,7 +52,7 @@ workflows: [create_communities, create_community_reports, generate_text_embeddin
### FastGraphRAG
-[FastGraphRAG](./methods.md#fastgraphrag) uses text_units for the community reports instead of the entity and relationship descriptions. If your graph is sourced in such a way that it does not have descriptions, this might be a useful alternative. In this case, you would update your workflows list to include the text variant:
+[FastGraphRAG](./methods.md#fastgraphrag) uses text_units for the community reports instead of the entity and relationship descriptions. If your graph is sourced in such a way that it does not have descriptions, this might be a useful alternative. In this case, you would update your workflows list to include the text variant of the community reports workflow:
```yaml
workflows: [create_communities, create_community_reports_text, generate_text_embeddings]
@@ -65,7 +65,6 @@ This method requires that your entities and relationships tables have valid link
Putting it all together:
-- `input`: GraphRAG does require an input document set, even if you don't need us to process it. You can create an input folder and drop a dummy.txt document in there to work around this.
- `output`: Create an output folder and put your entities and relationships (and optionally text_units) parquet files in it.
- Update your config as noted above to only run the workflows subset you need.
- Run `graphrag index --root