Refine LLM settings in env sample file
commit 57884f2fb8 (parent 460bc3a6aa)
@@ -45,17 +45,21 @@
# MAX_EMBED_TOKENS=8192

### LLM Configuration (use a valid host; for local services installed with Docker, you can use host.docker.internal)
LLM_BINDING=ollama
LLM_MODEL=mistral-nemo:latest
LLM_BINDING_API_KEY=your_api_key
### Ollama example
LLM_BINDING=ollama
LLM_BINDING_HOST=http://localhost:11434
### OpenAI-compatible example
# LLM_BINDING=openai
# LLM_MODEL=gpt-4o
# LLM_BINDING_HOST=https://api.openai.com/v1
# LLM_BINDING_API_KEY=your_api_key
### lollms example
# LLM_BINDING=lollms
# LLM_MODEL=mistral-nemo:latest
# LLM_BINDING_HOST=http://localhost:9600
# LLM_BINDING_API_KEY=your_api_key

### Embedding Configuration (use a valid host; for local services installed with Docker, you can use host.docker.internal)
EMBEDDING_MODEL=bge-m3:latest
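All of these settings are plain environment variables, so any process can resolve them at startup. Below is a minimal, illustrative Python sketch of how such settings might be read; the `load_llm_settings` helper and its defaults are assumptions for illustration, not LightRAG's actual loader.

```python
import os

# Illustrative only: the variable names match the env sample above, but this
# helper and its defaults are hypothetical, not LightRAG's actual loader.
def load_llm_settings() -> dict:
    return {
        "binding": os.getenv("LLM_BINDING", "ollama"),
        "model": os.getenv("LLM_MODEL", "mistral-nemo:latest"),
        "host": os.getenv("LLM_BINDING_HOST", "http://localhost:11434"),
        # May be None for local bindings such as Ollama, which need no key.
        "api_key": os.getenv("LLM_BINDING_API_KEY"),
    }

if __name__ == "__main__":
    print(load_llm_settings())
```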
@@ -45,7 +45,7 @@ EMBEDDING_BINDING_HOST=http://localhost:11434
LLM_BINDING_HOST=http://localhost:9600
EMBEDDING_BINDING_HOST=http://localhost:9600

# for OpenAI, OpenAI-compatible, or Azure OpenAI backends
LLM_BINDING_HOST=https://api.openai.com/v1
EMBEDDING_BINDING_HOST=http://localhost:9600
```
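Because LLM_BINDING_HOST points at an OpenAI-style /v1 endpoint in this case, any OpenAI-compatible client can reuse the same variables. A minimal sketch, assuming the official openai Python package (this is not LightRAG's internal client):

```python
import os
from openai import OpenAI  # assumes the official `openai` package is installed

# Point an OpenAI-compatible client at the configured binding host; the same
# code works for api.openai.com and for compatible self-hosted gateways.
client = OpenAI(
    base_url=os.getenv("LLM_BINDING_HOST", "https://api.openai.com/v1"),
    api_key=os.getenv("LLM_BINDING_API_KEY", "your_api_key"),
)

response = client.chat.completions.create(
    model=os.getenv("LLM_MODEL", "gpt-4o"),
    messages=[{"role": "user", "content": "ping"}],
)
print(response.choices[0].message.content)
```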
|
||||
@@ -502,4 +502,3 @@ A query prefix in the query string determines which LightRAG query mode is used
For example, the chat message "/mix 唐僧有几个徒弟" ("How many disciples does Tang Seng have?") triggers a mix-mode query in LightRAG. A chat message without a query prefix triggers a hybrid-mode query by default.

"/bypass" is not a LightRAG query mode; it tells the API Server to pass the query directly to the underlying LLM, together with the chat history, so the user can have the LLM answer questions based on the chat history. If you are using Open WebUI as the front end, you can simply switch the model to a normal LLM instead of using the /bypass prefix.