diff --git a/lightrag/api/README-zh.md b/lightrag/api/README-zh.md
index 4f37bc80..840f8384 100644
--- a/lightrag/api/README-zh.md
+++ b/lightrag/api/README-zh.md
@@ -41,7 +41,9 @@ LightRAG 需要同时集成 LLM(大型语言模型)和嵌入模型以有效
 * openai 或 openai 兼容
 * azure_openai
 
-建议使用环境变量来配置 LightRAG 服务器。项目根目录中有一个名为 `env.example` 的示例环境变量文件。请将此文件复制到启动目录并重命名为 `.env`。之后,您可以在 `.env` 文件中修改与 LLM 和嵌入模型相关的参数。需要注意的是,LightRAG 服务器每次启动时都会将 `.env` 中的环境变量加载到系统环境变量中。由于 LightRAG 服务器会优先使用系统环境变量中的设置,如果您在通过命令行启动 LightRAG 服务器后修改了 `.env` 文件,则需要执行 `source .env` 使新设置生效。
+建议使用环境变量来配置 LightRAG 服务器。项目根目录中有一个名为 `env.example` 的示例环境变量文件。请将此文件复制到启动目录并重命名为 `.env`。之后,您可以在 `.env` 文件中修改与 LLM 和嵌入模型相关的参数。需要注意的是,LightRAG 服务器每次启动时都会将 `.env` 中的环境变量加载到系统环境变量中。**LightRAG 服务器会优先使用系统环境变量中的设置**。
+
+> 由于安装了 Python 扩展的 VS Code 可能会在集成终端中自动加载 .env 文件,请在每次修改 .env 文件后打开新的终端会话。
 
 以下是 LLM 和嵌入模型的一些常见设置示例:
diff --git a/lightrag/api/README.md b/lightrag/api/README.md
index 0004bd34..4d3ec309 100644
--- a/lightrag/api/README.md
+++ b/lightrag/api/README.md
@@ -41,7 +41,9 @@ LightRAG necessitates the integration of both an LLM (Large Language Model) and
 * openai or openai compatible
 * azure_openai
 
-It is recommended to use environment variables to configure the LightRAG Server. There is an example environment variable file named `env.example` in the root directory of the project. Please copy this file to the startup directory and rename it to `.env`. After that, you can modify the parameters related to the LLM and Embedding models in the `.env` file. It is important to note that the LightRAG Server will load the environment variables from `.env` into the system environment variables each time it starts. Since the LightRAG Server will prioritize the settings in the system environment variables, if you modify the `.env` file after starting the LightRAG Server via the command line, you need to execute `source .env` to make the new settings take effect.
+It is recommended to use environment variables to configure the LightRAG Server. There is an example environment variable file named `env.example` in the root directory of the project. Please copy this file to the startup directory and rename it to `.env`. After that, you can modify the parameters related to the LLM and Embedding models in the `.env` file. It is important to note that the LightRAG Server loads the environment variables from `.env` into the system environment variables each time it starts. **The LightRAG Server prioritizes settings already present in the system environment variables over those in the `.env` file.**
+
+> Since VS Code with the Python extension may automatically load the .env file in the integrated terminal, please open a new terminal session after each modification to the .env file.
 
 Here are some examples of common settings for LLM and Embedding models:
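
For context on the precedence behavior described in the added lines, here is a minimal sketch of why values already present in the system environment win over `.env`. It assumes the server loads `.env` via python-dotenv, whose default `load_dotenv(override=False)` does not overwrite variables that are already set; the variable name `LLM_MODEL` is used purely for illustration.

```python
# Minimal sketch of .env precedence (assumes python-dotenv semantics:
# load_dotenv() with the default override=False does NOT overwrite
# variables that already exist in the system environment).
import os
from dotenv import load_dotenv

# Pretend the shell (or a stale VS Code integrated terminal) already exported a value.
os.environ["LLM_MODEL"] = "value-from-system-env"  # illustrative variable name

# Suppose .env contains: LLM_MODEL=value-from-dotenv
load_dotenv(".env")

# The pre-existing system value wins, so edits to .env alone are not picked up
# until the variable is unset or a fresh terminal session is started.
print(os.getenv("LLM_MODEL"))  # -> "value-from-system-env"
```

This is the reason the added note recommends opening a new terminal session after each modification to the `.env` file.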