# Ollama
<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/infiniflow/ragflow/assets/12318111/2019e7ee-1e8a-412e-9349-11bbf702e549" width="130"/>
</div>

[Ollama](https://github.com/ollama/ollama) enables one-click deployment of local LLMs.
## Install
- [Ollama on Linux](https://github.com/ollama/ollama/blob/main/docs/linux.md)
- [Ollama Windows Preview](https://github.com/ollama/ollama/blob/main/docs/windows.md)
- [Docker](https://hub.docker.com/r/ollama/ollama)
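
For example, a minimal install sketch based on the linked docs, either natively on Linux or via the official Docker image (the volume and container names below are just conventions; adjust to your setup):

```bash
# Linux: install Ollama with the official install script
$ curl -fsSL https://ollama.com/install.sh | sh

# Or run the official Docker image, persisting downloaded models in a
# named volume and exposing the default API port 11434
$ docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```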
## Launch Ollama
Decide which LLM you want to deploy ([here is a list of supported LLMs](https://ollama.com/library)), say, **mistral**:
```bash
$ ollama run mistral
```
Or, if you are running Ollama in Docker:
```bash
$ docker exec -it ollama ollama run mistral
```
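
Before moving on, it can help to confirm that the Ollama service is reachable and the model was pulled. A quick check, assuming Ollama listens on its default port 11434:

```bash
# List the models available to the local Ollama instance
$ ollama list

# Or hit the HTTP API directly; a JSON list of local models confirms
# the service is up
$ curl http://localhost:11434/api/tags
```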
## Use Ollama in RAGFlow
- Go to 'Settings > Model Providers > Models to be added > Ollama'.
<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/infiniflow/ragflow/assets/12318111/2019e7ee-1e8a-412e-9349-11bbf702e549" width="130"/>
</div>

> Base URL: Enter the base URL where the Ollama service is accessible, e.g., `http://<your-ollama-endpoint-domain>:11434`.
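
Note that if RAGFlow itself runs in Docker on the same host, `localhost` in the base URL resolves to the RAGFlow container rather than your machine, so Ollama may appear unreachable even though it is running. A quick reachability check (the container name `ragflow-server` is an assumption; substitute your own):

```bash
# From the host: confirm Ollama answers on its default port
$ curl http://localhost:11434/api/tags

# From inside the RAGFlow container (name assumed: ragflow-server).
# host.docker.internal resolves to the host on Docker Desktop; on Linux,
# start the container with --add-host=host.docker.internal:host-gateway
$ docker exec -it ragflow-server curl http://host.docker.internal:11434/api/tags
```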
- Use Ollama Models.
<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/infiniflow/ragflow/assets/12318111/2019e7ee-1e8a-412e-9349-11bbf702e549" width="130"/>
</div>