diff --git a/ch07/01_main-chapter-code/ch07.ipynb b/ch07/01_main-chapter-code/ch07.ipynb
index 7c9d57e..892c8b0 100644
--- a/ch07/01_main-chapter-code/ch07.ipynb
+++ b/ch07/01_main-chapter-code/ch07.ipynb
@@ -2231,7 +2231,7 @@
},
"source": [
"- In this section, we automate the response evaluation of the finetuned LLM using another, larger LLM\n",
- "- In particular, we use an instruction-finetuned 8 billion parameter Llama 3 model by Meta AI that can be run locally via ollama ([https://ollama.com](https://ollama.com))\n",
+ "- In particular, we use an instruction-finetuned 8-billion-parameter Llama 3 model by Meta AI that can be run locally via ollama ([https://ollama.com](https://ollama.com))\n",
"- (Alternatively, if you prefer using a more capable LLM like GPT-4 via the OpenAI API, please see the [llm-instruction-eval-openai.ipynb](../03_model-evaluation/llm-instruction-eval-openai.ipynb) notebook)"
]
},
@@ -2263,7 +2263,7 @@
"
\n",
"\n",
"\n",
- "- With the ollama application or `ollama serve` running in a different terminal, on the command line, execute the following command to try out the 8 billion parameters Llama 3 model (the model, which takes up 4.7 GB of storage space, will be automatically downloaded the first time you execute this command)\n",
+ "- With the ollama application or `ollama serve` running in a different terminal, on the command line, execute the following command to try out the 8-billion-parameter Llama 3 model (the model, which takes up 4.7 GB of storage space, will be automatically downloaded the first time you execute this command)\n",
"\n",
"```bash\n",
"# 8B model\n",
@@ -2287,11 +2287,11 @@
"success\n",
"```\n",
"\n",
- "- Note that `llama3` refers to the instruction finetuned 8 billion Llama 3 model\n",
+ "- Note that `llama3` refers to the instruction finetuned 8-billion-parameter Llama 3 model\n",
"\n",
"- Using ollama with the `\"llama3\"` model (a 8B parameter model) requires 16 GB of RAM; if this is not supported by your machine, you can try the smaller model, such as the 3.8B parameter phi-3 model by setting `model = \"phi-3\"`, which only requires 8 GB of RAM\n",
"\n",
- "- Alternatively, you can also use the larger 70 billion parameters Llama 3 model, if your machine supports it, by replacing `llama3` with `llama3:70b`\n",
+ "- Alternatively, you can also use the larger 70-billion-parameter Llama 3 model, if your machine supports it, by replacing `llama3` with `llama3:70b`\n",
"\n",
"- After the download has been completed, you will see a command line prompt that allows you to chat with the model\n",
"\n",
diff --git a/ch07/03_model-evaluation/llm-instruction-eval-ollama.ipynb b/ch07/03_model-evaluation/llm-instruction-eval-ollama.ipynb
index e38e771..cc9673f 100644
--- a/ch07/03_model-evaluation/llm-instruction-eval-ollama.ipynb
+++ b/ch07/03_model-evaluation/llm-instruction-eval-ollama.ipynb
@@ -35,7 +35,7 @@
"id": "a128651b-f326-4232-a994-42f38b7ed520",
"metadata": {},
"source": [
- "- This notebook uses an 8 billion parameter Llama 3 model through ollama to evaluate responses of instruction finetuned LLMs based on a dataset in JSON format that includes the generated model responses, for example:\n",
+ "- This notebook uses an 8-billion-parameter Llama 3 model through ollama to evaluate responses of instruction finetuned LLMs based on a dataset in JSON format that includes the generated model responses, for example:\n",
"\n",
"\n",
"\n",
@@ -108,7 +108,7 @@
"
\n",
"\n",
"\n",
- "- With the ollama application or `ollama serve` running, in a different terminal, on the command line, execute the following command to try out the 8 billion parameters Llama 3 model (the model, which takes up 4.7 GB of storage space, will be automatically downloaded the first time you execute this command)\n",
+ "- With the ollama application or `ollama serve` running, in a different terminal, on the command line, execute the following command to try out the 8-billion-parameter Llama 3 model (the model, which takes up 4.7 GB of storage space, will be automatically downloaded the first time you execute this command)\n",
"\n",
"```bash\n",
"# 8B model\n",
@@ -132,9 +132,9 @@
"success \n",
"```\n",
"\n",
- "- Note that `llama3` refers to the instruction finetuned 8 billion Llama 3 model\n",
+ "- Note that `llama3` refers to the instruction finetuned 8-billion-parameter Llama 3 model\n",
"\n",
- "- Alternatively, you can also use the larger 70 billion parameters Llama 3 model, if your machine supports it, by replacing `llama3` with `llama3:70b`\n",
+ "- Alternatively, you can also use the larger 70-billion-parameter Llama 3 model, if your machine supports it, by replacing `llama3` with `llama3:70b`\n",
"\n",
"- After the download has been completed, you will see a command line prompt that allows you to chat with the model\n",
"\n",
@@ -640,7 +640,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.11.4"
+ "version": "3.10.6"
}
},
"nbformat": 4,
diff --git a/ch07/04_preference-tuning-with-dpo/create-preference-data-ollama.ipynb b/ch07/04_preference-tuning-with-dpo/create-preference-data-ollama.ipynb
index 4b7303b..41c948e 100644
--- a/ch07/04_preference-tuning-with-dpo/create-preference-data-ollama.ipynb
+++ b/ch07/04_preference-tuning-with-dpo/create-preference-data-ollama.ipynb
@@ -41,7 +41,7 @@
" 2. We use the instruction-finetuned LLM to generate multiple responses and have LLMs rank them based on given preference criteria\n",
" 3. We use an LLM to generate preferred and dispreferred responses given certain preference criteria\n",
"- In this notebook, we consider approach 3\n",
- "- This notebook uses a 70 billion parameters Llama 3.1-Instruct model through ollama to generate preference labels for an instruction dataset\n",
+ "- This notebook uses a 70-billion-parameter Llama 3.1-Instruct model through ollama to generate preference labels for an instruction dataset\n",
"- The expected format of the instruction dataset is as follows:\n",
"\n",
"\n",
@@ -162,7 +162,7 @@
"
\n",
"\n",
"\n",
- "- With the ollama application or `ollama serve` running, in a different terminal, on the command line, execute the following command to try out the 70 billion-parameters Llama 3.1 model \n",
+ "- With the ollama application or `ollama serve` running, in a different terminal, on the command line, execute the following command to try out the 70-billion-parameter Llama 3.1 model \n",
"\n",
"```bash\n",
"# 70B model\n",
@@ -186,9 +186,9 @@
"success\n",
"```\n",
"\n",
- "- Note that `llama3.1:70b` refers to the instruction finetuned 70 billion Llama 3.1 model\n",
+ "- Note that `llama3.1:70b` refers to the instruction finetuned 70-billion-parameter Llama 3.1 model\n",
"\n",
- "- Alternatively, you can also use the smaller, more resource-effiicent 8 billion-parameters Llama 3.1 model, by replacing `llama3.1:70b` with `llama3.1`\n",
+ "- Alternatively, you can also use the smaller, more resource-effiicent 8-billion-parameters Llama 3.1 model, by replacing `llama3.1:70b` with `llama3.1`\n",
"\n",
"- After the download has been completed, you will see a command line prompt that allows you to chat with the model\n",
"\n",
diff --git a/ch07/05_dataset-generation/llama3-ollama.ipynb b/ch07/05_dataset-generation/llama3-ollama.ipynb
index 0387ae7..812edf2 100644
--- a/ch07/05_dataset-generation/llama3-ollama.ipynb
+++ b/ch07/05_dataset-generation/llama3-ollama.ipynb
@@ -35,7 +35,7 @@
"id": "a128651b-f326-4232-a994-42f38b7ed520",
"metadata": {},
"source": [
- "- This notebook uses an 8 billion parameter Llama 3 model through ollama to generate a synthetic dataset using the \"hack\" proposed in the \"Magpie: Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing\" paper ([https://arxiv.org/abs/2406.08464](https://arxiv.org/abs/2406.08464))\n",
+ "- This notebook uses an 8-billion-parameter Llama 3 model through ollama to generate a synthetic dataset using the \"hack\" proposed in the \"Magpie: Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing\" paper ([https://arxiv.org/abs/2406.08464](https://arxiv.org/abs/2406.08464))\n",
"\n",
"- The generated dataset will be an instruction dataset with \"instruction\" and \"output\" field similar to what can be found in Alpaca:\n",
"\n",
@@ -109,7 +109,7 @@
"
\n",
"\n",
"\n",
- "- With the ollama application or `ollama serve` running, in a different terminal, on the command line, execute the following command to try out the 8 billion parameters Llama 3 model (the model, which takes up 4.7 GB of storage space, will be automatically downloaded the first time you execute this command)\n",
+ "- With the ollama application or `ollama serve` running, in a different terminal, on the command line, execute the following command to try out the 8-billion-parameter Llama 3 model (the model, which takes up 4.7 GB of storage space, will be automatically downloaded the first time you execute this command)\n",
"\n",
"```bash\n",
"# 8B model\n",
@@ -133,9 +133,9 @@
"success \n",
"```\n",
"\n",
- "- Note that `llama3` refers to the instruction finetuned 8 billion Llama 3 model\n",
+ "- Note that `llama3` refers to the instruction finetuned 8-billion-parameter Llama 3 model\n",
"\n",
- "- Alternatively, you can also use the larger 70 billion parameters Llama 3 model, if your machine supports it, by replacing `llama3` with `llama3:70b`\n",
+ "- Alternatively, you can also use the larger 70-billion-parameter Llama 3 model, if your machine supports it, by replacing `llama3` with `llama3:70b`\n",
"\n",
"- After the download has been completed, you will see a command line prompt that allows you to chat with the model\n",
"\n",
@@ -498,7 +498,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.11.4"
+ "version": "3.10.6"
}
},
"nbformat": 4,