</button>
</div>
<h2>CLI Arguments</h2>
<ul>
<li><code>--data &lt;path-to-data&gt;</code> - Folder containing the <code>.parquet</code> output files produced by running the Indexer.</li>
<li><code>--community_level &lt;community-level&gt;</code> - Community level in the Leiden community hierarchy from which to load the community reports; a higher value means reports on smaller communities are used. Default: <code>2</code></li>
<li><code>--response_type &lt;response-type&gt;</code> - Free-form text describing the desired response type and format; can be anything, e.g. <code>Multiple Paragraphs</code>, <code>Single Paragraph</code>, <code>Single Sentence</code>, <code>List of 3-7 Points</code>, <code>Single Page</code>, <code>Multi-Page Report</code>. Default: <code>Multiple Paragraphs</code>.</li>
<li><code>--method &lt;"local"|"global"&gt;</code> - Method used to answer the query, either <code>local</code> or <code>global</code>. For more information, see the <a href="overview.md">Overview</a>.</li>
</ul>
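<p>Putting the arguments above together, a query run might look like the following. This is an illustrative invocation only: the output path, query text, and the <code>python -m graphrag.query</code> entry point are assumptions to be checked against your installation.</p>

```shell
# Hypothetical example: run a global-search query against indexer output.
python -m graphrag.query \
  --data ./output/artifacts \
  --community_level 2 \
  --response_type "Multiple Paragraphs" \
  --method global \
  "What are the main themes in the corpus?"
```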
<h2>Env Variables</h2>
<p>Required environment variables to execute:</p>
<ul>
<li><code>GRAPHRAG_API_KEY</code> - API key for executing the model; falls back to <code>OPENAI_API_KEY</code> if not provided.</li>
<li><code>GRAPHRAG_LLM_MODEL</code> - Model to use for Chat Completions.</li>
<li><code>GRAPHRAG_EMBEDDING_MODEL</code> - Model to use for Embeddings.</li>
</ul>
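<p>For example, the required variables can be set in the shell before running a query. The key and model names below are placeholders, not recommendations:</p>

```shell
# Illustrative values; substitute your own key and model names.
export GRAPHRAG_API_KEY="sk-..."
export GRAPHRAG_LLM_MODEL="gpt-4-turbo-preview"
export GRAPHRAG_EMBEDDING_MODEL="text-embedding-3-small"
```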
<p>You can further customize the execution by providing these environment variables:</p>
<ul>
<li><code>GRAPHRAG_LLM_API_BASE</code> - The API base URL for the chat model. Default: <code>None</code></li>
<li><code>GRAPHRAG_LLM_TYPE</code> - The LLM operation type. Either <code>openai_chat</code> or <code>azure_openai_chat</code>. Default: <code>openai_chat</code></li>
<li><code>GRAPHRAG_LLM_MAX_RETRIES</code> - The maximum number of retries to attempt when a request fails. Default: <code>20</code></li>
<li><code>GRAPHRAG_EMBEDDING_API_BASE</code> - The API base URL for the embedding model. Default: <code>None</code></li>
<li><code>GRAPHRAG_EMBEDDING_TYPE</code> - The embedding client to use. Either <code>openai_embedding</code> or <code>azure_openai_embedding</code>. Default: <code>openai_embedding</code></li>
<li><code>GRAPHRAG_EMBEDDING_MAX_RETRIES</code> - The maximum number of retries to attempt when a request fails. Default: <code>20</code></li>
<li><code>GRAPHRAG_LOCAL_SEARCH_TEXT_UNIT_PROP</code> - Proportion of context window dedicated to related text units. Default: <code>0.5</code></li>
<li><code>GRAPHRAG_LOCAL_SEARCH_COMMUNITY_PROP</code> - Proportion of context window dedicated to community reports. Default: <code>0.1</code></li>
<li><code>GRAPHRAG_LOCAL_SEARCH_CONVERSATION_HISTORY_MAX_TURNS</code> - Maximum number of turns to include in the conversation history. Default: <code>5</code></li>
<li><code>GRAPHRAG_LOCAL_SEARCH_TOP_K_ENTITIES</code> - Number of related entities to retrieve from the entity description embedding store. Default: <code>10</code></li>
<li><code>GRAPHRAG_LOCAL_SEARCH_TOP_K_RELATIONSHIPS</code> - Number of out-of-network relationships to pull into the context window. Default: <code>10</code></li>
<li><code>GRAPHRAG_LOCAL_SEARCH_MAX_TOKENS</code> - Change this based on the token limit of your model (for a model with an 8k limit, a good setting could be 5000). Default: <code>12000</code></li>
<li><code>GRAPHRAG_LOCAL_SEARCH_LLM_MAX_TOKENS</code> - Change this based on the token limit of your model (for a model with an 8k limit, a good setting could be 1000-1500). Default: <code>2000</code></li>
<li><code>GRAPHRAG_GLOBAL_SEARCH_MAX_TOKENS</code> - Change this based on the token limit of your model (for a model with an 8k limit, a good setting could be 5000). Default: <code>12000</code></li>
<li><code>GRAPHRAG_GLOBAL_SEARCH_DATA_MAX_TOKENS</code> - Change this based on the token limit of your model (for a model with an 8k limit, a good setting could be 5000). Default: <code>12000</code></li>
<li><code>GRAPHRAG_GLOBAL_SEARCH_REDUCE_MAX_TOKENS</code> - Change this based on the token limit of your model (for a model with an 8k limit, a good setting could be 1000-1500). Default: <code>2000</code></li>
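<p>The local-search proportion settings can be read as a token-budget split. The sketch below is illustrative arithmetic only; it assumes the proportions are applied directly to <code>GRAPHRAG_LOCAL_SEARCH_MAX_TOKENS</code>, which may not match the library's exact accounting:</p>

```python
# Illustrative token-budget split for local search, using the defaults above.
# (Assumed semantics, not the library's exact accounting.)
max_tokens = 12000       # GRAPHRAG_LOCAL_SEARCH_MAX_TOKENS
text_unit_prop = 0.5     # GRAPHRAG_LOCAL_SEARCH_TEXT_UNIT_PROP
community_prop = 0.1     # GRAPHRAG_LOCAL_SEARCH_COMMUNITY_PROP

text_unit_budget = int(max_tokens * text_unit_prop)   # tokens for related text units
community_budget = int(max_tokens * community_prop)   # tokens for community reports
remainder = max_tokens - text_unit_budget - community_budget  # entities, relationships, history
print(text_unit_budget, community_budget, remainder)  # 6000 1200 4800
```

Lowering <code>GRAPHRAG_LOCAL_SEARCH_MAX_TOKENS</code> for a smaller-context model shrinks all of these budgets proportionally.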