diff --git a/ch07/01_main-chapter-code/ch07.ipynb b/ch07/01_main-chapter-code/ch07.ipynb
index c27b8ef..c368ba1 100644
--- a/ch07/01_main-chapter-code/ch07.ipynb
+++ b/ch07/01_main-chapter-code/ch07.ipynb
@@ -1462,7 +1462,7 @@
    "id": "8c68eda7-e02e-4caa-846b-ca6dbd396ca2"
   },
   "source": [
-    "- However, instead of loading the smallest 124 million parameter model, we load the medium version with 355 parameters since the 124 million model is too small for achieving qualitatively reasonable results via instruction finetuning"
+    "- However, instead of loading the smallest 124 million parameter model, we load the medium version with 355 million parameters since the 124 million model is too small for achieving qualitatively reasonable results via instruction finetuning"
   ]
  },
  {
@@ -2202,7 +2202,7 @@
   "source": [
    "- In this section, we automate the response evaluation of the finetuned LLM using another, larger LLM\n",
    "- In particular, we use an instruction-finetuned 8 billion parameter Llama 3 model by Meta AI that can be run locally via ollama ([https://ollama.com](https://ollama.com))\n",
-    "- (Alternatively, if you prefer using a more capable LLM like OpenAI's GPT-4 via the ChatGPT API, please see the [../03_model-evaluation/llm-instruction-eval-ollama.ipynb](../03_model-evaluation/llm-instruction-eval-ollama.ipynb) notebook)"
+    "- (Alternatively, if you prefer using a more capable LLM like OpenAI's GPT-4 via the ChatGPT API, please see the [llm-instruction-eval-openai.ipynb](../03_model-evaluation/llm-instruction-eval-openai.ipynb) notebook)"
   ]
  },
  {