From 017f73d50cb941d34d3785cdbf71959219f8b464 Mon Sep 17 00:00:00 2001
From: rasbt
Date: Thu, 6 Jun 2024 21:03:40 -0500
Subject: [PATCH] update ollama instructions

---
 .../llm-instruction-eval-ollama.ipynb         | 17 +++++++++++++----
 1 file changed, 13 insertions(+), 4 deletions(-)

diff --git a/ch07/03_model-evaluation/llm-instruction-eval-ollama.ipynb b/ch07/03_model-evaluation/llm-instruction-eval-ollama.ipynb
index 59d7459..40213a4 100644
--- a/ch07/03_model-evaluation/llm-instruction-eval-ollama.ipynb
+++ b/ch07/03_model-evaluation/llm-instruction-eval-ollama.ipynb
@@ -100,15 +100,22 @@
    "id": "9558a522-650d-401a-84fc-9fd7b1f39da7",
    "metadata": {},
    "source": [
-    "- Now let's test if ollama is set up correctly\n",
-    "- For this, click on the ollama application you downloaded; if it prompts you to install the command line usage, say \"yes\"\n",
-    "- Next, on the command line, execute the following command to try out the 8 billion parameters Llama 3 model (the model, which takes up 4.7 GB of storage space, will be automatically downloaded the first time you execute this command)\n",
+    "- For macOS and Windows users, click on the ollama application you downloaded; if it prompts you to install the command line tool, say \"yes\"\n",
+    "- Linux users can use the installation command provided on the ollama website\n",
+    "\n",
+    "- In general, before we can use ollama from the command line, we have to either start the ollama application or run `ollama serve` in a separate terminal\n",
+    "\n",
+    "\n",
+    "\n",
+    "\n",
+    "- With the ollama application or `ollama serve` running, execute the following command in a different terminal to try out the 8-billion-parameter Llama 3 model (the model, which takes up 4.7 GB of storage space, will be downloaded automatically the first time you execute this command)\n",
     "\n",
     "```bash\n",
     "# 8B model\n",
     "ollama run llama3\n",
     "```\n",
     "\n",
+    "\n",
     "The output looks like as follows:\n",
     "\n",
     "```\n",
@@ -165,7 +172,9 @@
    "metadata": {},
    "source": [
     "- Now, an alternative way to interact with the model is via its REST API in Python via the following function\n",
-    "- First, in your terminal, start a local ollama server via `ollama serve` (after executing the code in this notebook, you can later stop this session by simply closing the terminal)\n",
+    "- Before you run the next cells in this notebook, make sure that ollama is still running, as described above, via\n",
+    "  - `ollama serve` in a terminal\n",
+    "  - the ollama application\n",
     "- Next, run the following code cell to query the model"
    ]
   },
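The "following function" referenced in the second hunk is defined later in the notebook and is not part of this diff. For context, here is a minimal sketch of how such a helper can query ollama's REST API, assuming the server started by `ollama serve` (or the ollama application) is listening on its default address `http://localhost:11434` and using the `/api/chat` endpoint; the function name `query_model` and its defaults are illustrative, not taken from the notebook:

```python
import json
import urllib.request


def query_model(prompt, model="llama3", url="http://localhost:11434/api/chat"):
    """Send a prompt to a locally running ollama server and return the reply."""
    # Payload for ollama's /api/chat endpoint; "stream": False requests
    # one complete JSON response instead of a stream of partial chunks.
    data = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }
    request = urllib.request.Request(
        url,
        data=json.dumps(data).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        response_data = json.loads(response.read().decode("utf-8"))
    # The chat endpoint returns the generated text under message.content
    return response_data["message"]["content"]


# Example usage (requires the ollama application or `ollama serve` running):
# print(query_model("What do llamas eat?"))
```

Using only the standard library keeps the example free of extra dependencies; with `"stream": False`, ollama returns a single JSON object, so no line-by-line stream handling is needed.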