Fix some typos in ch07.ipynb (#224)

* Fixed some typos in ch06.ipynb

* Fix some typos in ch07.ipynb
Jinge Wang 2024-06-19 19:14:25 +08:00 committed by GitHub
parent 1ba1bb4c77
commit 8d43b4bfea


@@ -1462,7 +1462,7 @@
"id": "8c68eda7-e02e-4caa-846b-ca6dbd396ca2"
},
"source": [
"- However, instead of loading the smallest 124 million parameter model, we load the medium version with 355 parameters since the 124 million model is too small for achieving qualitatively reasonable results via instruction finetuning"
"- However, instead of loading the smallest 124 million parameter model, we load the medium version with 355 million parameters since the 124 million model is too small for achieving qualitatively reasonable results via instruction finetuning"
]
},
{
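The corrected cell describes loading the 355-million-parameter GPT-2 medium checkpoint instead of the 124-million-parameter one. As a minimal sketch of that model swap, illustrative rather than the notebook's own loading code, the equivalent checkpoint can be fetched via Hugging Face transformers, where `gpt2-medium` is the 355M variant:

```python
# Hypothetical sketch (not the notebook's own code): loading GPT-2 medium
# (~355M parameters) instead of the smallest GPT-2 model (~124M parameters)
# via the Hugging Face transformers library.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model_name = "gpt2-medium"  # the 355M-parameter checkpoint; "gpt2" is the 124M one
tokenizer = GPT2Tokenizer.from_pretrained(model_name)
model = GPT2LMHeadModel.from_pretrained(model_name)

# Sanity check: the parameter count should come out to roughly 355 million
print(sum(p.numel() for p in model.parameters()))
```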
@@ -2202,7 +2202,7 @@
"source": [
"- In this section, we automate the response evaluation of the finetuned LLM using another, larger LLM\n",
"- In particular, we use an instruction-finetuned 8 billion parameter Llama 3 model by Meta AI that can be run locally via ollama ([https://ollama.com](https://ollama.com))\n",
"- (Alternatively, if you prefer using a more capable LLM like OpenAI's GPT-4 via the ChatGPT API, please see the [../03_model-evaluation/llm-instruction-eval-ollama.ipynb](../03_model-evaluation/llm-instruction-eval-ollama.ipynb) notebook)"
"- (Alternatively, if you prefer using a more capable LLM like OpenAI's GPT-4 via the ChatGPT API, please see the [llm-instruction-eval-openai.ipynb](../03_model-evaluation/llm-instruction-eval-openai.ipynb) notebook)"
]
},
{
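The second hunk concerns the cell that evaluates the finetuned model's responses with a locally served Llama 3 8B model via ollama. A minimal sketch of such a query, assuming an ollama server running on its default port 11434 with the `llama3` model already pulled (e.g. via `ollama run llama3`); the `query_ollama` helper name and the scoring prompt are illustrative, not taken from the notebook:

```python
# Minimal sketch: scoring a model response with a local ollama server
# (https://ollama.com) through its REST chat endpoint. Assumes the server
# is listening on the default port and the "llama3" model is available.
import json
import urllib.request

def query_ollama(prompt, model="llama3", url="http://localhost:11434/api/chat"):
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # request the full reply as a single JSON object
    }
    request = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        # The non-streaming response carries the reply under message.content
        return json.loads(response.read())["message"]["content"]

# Illustrative evaluation prompt; the actual notebook's prompt may differ
score = query_ollama(
    "Score the following model response against the given instruction "
    "on a scale from 0 to 100. Respond with the integer only.\n"
    "Instruction: Rewrite the sentence in passive voice.\n"
    "Response: The sentence was rewritten in passive voice."
)
print(score)
```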