mirror of
https://github.com/rasbt/LLMs-from-scratch.git
synced 2025-11-01 18:30:00 +00:00
Fix some typos in ch07.ipynb (#224)
* Fixed some typos in ch06.ipynb
* Fix some typos in ch07.ipynb
This commit is contained in:
parent f4c8bb024c
commit 605ec00a2a
@@ -1462,7 +1462,7 @@
    "id": "8c68eda7-e02e-4caa-846b-ca6dbd396ca2"
   },
   "source": [
-   "- However, instead of loading the smallest 124 million parameter model, we load the medium version with 355 parameters since the 124 million model is too small for achieving qualitatively reasonable results via instruction finetuning"
+   "- However, instead of loading the smallest 124 million parameter model, we load the medium version with 355 million parameters since the 124 million model is too small for achieving qualitatively reasonable results via instruction finetuning"
   ]
  },
  {
@@ -2202,7 +2202,7 @@
   "source": [
    "- In this section, we automate the response evaluation of the finetuned LLM using another, larger LLM\n",
    "- In particular, we use an instruction-finetuned 8 billion parameter Llama 3 model by Meta AI that can be run locally via ollama ([https://ollama.com](https://ollama.com))\n",
-   "- (Alternatively, if you prefer using a more capable LLM like OpenAI's GPT-4 via the ChatGPT API, please see the [../03_model-evaluation/llm-instruction-eval-ollama.ipynb](../03_model-evaluation/llm-instruction-eval-ollama.ipynb) notebook)"
+   "- (Alternatively, if you prefer using a more capable LLM like OpenAI's GPT-4 via the ChatGPT API, please see the [llm-instruction-eval-openai.ipynb](../03_model-evaluation/llm-instruction-eval-openai.ipynb) notebook)"
   ]
  },
  {