"Supplementary code for the <a href=\"http://mng.bz/orYv\">Build a Large Language Model From Scratch</a> book by <a href=\"https://sebastianraschka.com\">Sebastian Raschka</a><br>\n",
" \"instruction\": \"Identify the correct spelling of the following word.\",\n",
" \"input\": \"Ocassion\",\n",
" \"output\": \"The correct spelling is 'Occasion.'\"\n",
"}\n",
"```\n",
"\n",
"In the main chapter, we formatted it according to the Alpaca-style prompt template:\n",
"\n",
"```\n",
"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n",
"\n",
"### Instruction:\n",
"Identify the correct spelling of the following word.\n",
"\n",
"### Input:\n",
"Occassion\n",
"\n",
"### Response:\n",
"The correct spelling is 'Occasion.'\n",
"```\n",
"\n",
"In this exercise, we now use the Phi-3 prompt template instead, which formats the data entry as follows:\n",
"\n",
"```\n",
"<user>\n",
"Identify the correct spelling of the following word: 'Occasion'\n",
"\n",
"<assistant>\n",
"The correct spelling is 'Occasion'.\n",
"```\n",
"\n",
"Note that this prompt template is substantially shorter, which reduces the runtime and hardware requirements for finetuning the LLM and generating text since the input prompts are shorter.\n",
"To make this change, we update the `format_input` function as follows:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "f99baa1e-c24c-417f-89d0-13e6d061ea6a",
"metadata": {},
"outputs": [],
"source": [
"def format_input(entry):\n",
" instruction_text = (\n",
" f\"<|user|>\\n{entry['instruction']}\"\n",
" )\n",
"\n",
" input_text = f\"\\n{entry['input']}\" if entry[\"input\"] else \"\"\n",
"\n",
" return instruction_text + input_text"
]
},
{
"cell_type": "markdown",
"id": "e4ba538f-64b9-495d-847b-d9f1d324bc50",
"metadata": {},
"source": [
"Let's make sure that it works as intended by applying it to two input samples, one with and one without content in the `'input'` field:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "877a57e2-535f-4363-b32a-a093edd951b8",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"<|user|>\n",
"Identify the correct spelling of the following word.\n",
"Ocassion\n",
"\n",
"<|user|>\n",
"What is an antonym of 'complicated'?\n"
]
}
],
"source": [
"sample_data = [\n",
" {'instruction': 'Identify the correct spelling of the following word.', 'input': 'Ocassion', 'output': \"The correct spelling is 'Occasion.'\"}, \n",
" {'instruction': \"What is an antonym of 'complicated'?\", 'input': '', 'output': \"An antonym of 'complicated' is 'simple'.\"}\n",
"]\n",
"\n",
"print(format_input(sample_data[0]))\n",
"print()\n",
"print(format_input(sample_data[1]))"
]
},
{
"cell_type": "markdown",
"id": "fa2a6704-6c61-4a09-b8f5-ffc5a77d6aa3",
"metadata": {},
"source": [
"Next, we also update the `InstructionDataset` class to use the <|assistant|> prompt template for the response:"
"For your convenience, the exercise solution is implemented in the [exercise_experiments.py](exercise_experiments.py) script, which you can run as follows:"
"Ep 1 (Step 000000): Train loss 2.633, Val loss 2.622\n",
"...\n",
"Ep 2 (Step 000230): Train loss 0.424, Val loss 0.928\n",
"<|user|> Convert the active sentence to passive: 'The chef cooks the meal every day.' <|assistant|>: The meal is prepared every day by the chef....\n",
"Note that on an Nvidia L4 GPU, the code above, using the Phi-3 prompt template, takes 1.5 min to run. In comparison, the Alpaca-style template takes 1.80 minutes to run. So, the Phi-3 template is approximately 17% faster since it results in shorter model inputs. \n",
"\n",
"Let's take a look at some of the responses to make sure they have been formatted correctly:\n",
" \"instruction\": \"Rewrite the sentence using a simile.\",\n",
" \"input\": \"The car is very fast.\",\n",
" \"output\": \"The car is as fast as lightning.\",\n",
" \"model_response\": \"The car is as fast as a cheetah.\"\n",
" },\n",
" {\n",
" \"instruction\": \"What type of cloud is typically associated with thunderstorms?\",\n",
" \"input\": \"\",\n",
" \"output\": \"The type of cloud typically associated with thunderstorms is cumulonimbus.\",\n",
" \"model_response\": \"The type of cloud associated with thunderstorms is a cumulus cloud.\"\n",
" },\n",
" {\n",
" \"instruction\": \"Name the author of 'Pride and Prejudice'.\",\n",
" \"input\": \"\",\n",
" \"output\": \"Jane Austen.\",\n",
" \"model_response\": \"The author of 'Pride and Prejudice' is Jane Austen.\"\n",
" },\n",
"```\n",
"\n",
"We can evaluate the performance using the Ollama Llama 3 method, which is for your convenience, also implemented in the `python exercise_experiments.py` script, which we can run as follows:\n",
"The score is close to 50, which is in the same ballpark as the score we previously achieved with the Alpaca-style prompts.\n",
"\n",
"There is no inherent advantage or rationale why the Phi prompt-style should be better, but it can be more concise and efficient, except for the caveat mentioned in the *Tip* section below."
]
},
{
"cell_type": "markdown",
"id": "156bc574-3f3e-4479-8f58-c8c8c472416e",
"metadata": {},
"source": [
"#### Tip: Considering special tokens"
]
},
{
"cell_type": "markdown",
"id": "65cacf90-21c2-48f2-8f21-5c0c86749ff2",
"metadata": {},
"source": [
"- Note that the Phi-3 prompt template contains special tokens such as `<|user|>` and `<|assistant|>`, which can be suboptimal for the GPT-2 tokenizer\n",
"- While the GPT-2 tokenizer recognizes `<|endoftext|>` as a special token (encoded into token ID 50256), it is inefficient at handling other special tokens, such as the aforementioned ones\n",
"- For instance, `<|user|>` is encoded into 5 individual token IDs (27, 91, 7220, 91, 29), which is very inefficient\n",
"- We could add `<|user|>` as a new special token in `tiktoken` via the `allowed_special` argument, but please keep in mind that the GPT-2 vocabulary would not be able to handle it without additional modification\n",
"- If you are curious about how a tokenizer and LLM can be extended to handle special tokens, please see the [extend-tiktoken.ipynb](../../ch05/09_extending-tokenizers/extend-tiktoken.ipynb) bonus materials (note that this is not required here but is just an interesting/bonus consideration for curious readers)\n",
"- Furthermore, we can hypothesize that models that support these special tokens of a prompt template via their vocabulary may perform more efficiently and better overall"
"## Exercise 7.2: Instruction and input masking\n",
"\n",
"To mask out the instructions as shown in the following figure, we need to make slight modifications to the `InstructionDataset` class and `custom_collate_fn`.\n",
" input_text = f\"\\n\\n### Input:\\n{entry['input']}\" if entry[\"input\"] else \"\"\n",
"\n",
" return instruction_text + input_text"
]
},
{
"cell_type": "markdown",
"id": "83658c09-af8a-425a-b940-eb1f06e43c0b",
"metadata": {},
"source": [
"We can modify the `InstructionDataset` class to collect the lengths of the instructions, which we will use in the collate function to locate the instruction content positions in the targets when we code the collate function, as follows:"
"Next, we update the `custom_collate_fn` where each `batch` is now a tuple containing `(instruction_length, item)` instead of just `item` due to the changes in the `InstructionDataset` dataset. In addition, we now mask the corresponding instruction tokens in the target ID list."
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "f815e6fc-8e54-4105-aecd-d4c6e890ff9d",
"metadata": {},
"outputs": [],
"source": [
"def custom_collate_fn(\n",
" batch,\n",
" pad_token_id=50256,\n",
" ignore_index=-100,\n",
" allowed_max_length=None,\n",
" device=\"cpu\"\n",
"):\n",
" # Find the longest sequence in the batch\n",
" batch_max_length = max(len(item)+1 for instruction_length, item in batch) # New: batch is now a tuple\n",
"\n",
" # Pad and prepare inputs and targets\n",
" inputs_lst, targets_lst = [], []\n",
"\n",
" for instruction_length, item in batch: # New: batch is now a tuple\n",
" {'instruction': \"What is an antonym of 'complicated'?\", 'input': '', 'output': \"An antonym of 'complicated' is 'simple'.\"},\n",
" {'instruction': 'Sort the following list in alphabetical order.', 'input': 'Zebra, Elephant, Crocodile', 'output': 'Crocodile, Elephant, Zebra'},\n",
" {'instruction': 'Arrange the given numbers in descending order.', 'input': '5, 12, 8, 3, 15', 'output': '15, 12, 8, 5, 3.'}\n",
"As shown above, the non-masked target tokens exclude the `\"Instruction\"` and `\"Input\"` fields, as intended. Now, we can run the modified code to see how well the LLM performs when finetuned using this masking strategy.\n",
"\n",
"For your convenience, you can use the `exercise_experiments.py` code to run a comparison as follows:"
"As we can see based on the scores, the instruction masking does perform slightly worse, which is consistent with the observation in the \"Instruction Tuning With Loss Over Instructions\" paper (https://arxiv.org/abs/2405.14394)"
]
},
{
"cell_type": "markdown",
"id": "94a0f758-29da-44ee-b7af-32473b3c086e",
"metadata": {},
"source": [
" \n",
"## Exercise 7.3: Finetuning on the original Alpaca dataset"
]
},
{
"cell_type": "markdown",
"id": "68df7616-679f-4e53-954d-6e7cf2e2ef55",
"metadata": {},
"source": [
"To finetune the model on the original Stanford Alpaca dataset ([https://github.com/tatsu-lab/stanford_alpaca](https://github.com/tatsu-lab/stanford_alpaca)), you just need to change the file URL from\n",
"Note that the dataset contains 52k entries (50x more than in chapter 7), and the entries are longer than the ones we worked with in chapter 7.\n",
"Thus, it's highly recommended that the training be run on a GPU.\n",
"\n",
"If you encounter out-of-memory errors, consider reducing the batch size from 8 to 4, 2, or 1. In addition to lowering the batch size, you may also want to consider lowering the `allowed_max_length` from 1024 to 512 or 256."
]
},
{
"cell_type": "markdown",
"id": "d94c9621-2c3f-4551-b5b8-87cd96e38c9c",
"metadata": {},
"source": [
"For your convenience, you can use the `exercise_experiments.py` code to finetune the model on the 52k Alpaca dataset with a batch size of 4 and an `allowed_max_length` of 512 as follows:"
" \"instruction\": \"Edit the following sentence to increase readability: \\\"He made a huge effort and was so successful.\\\"\",\n",
" \"input\": \"\",\n",
" \"output\": \"He exerted a tremendous effort, and thus enjoyed great success.\",\n",
" \"model_response\": \"He put in an immense effort and was rewarded with success.\"\n",
" },\n",
" {\n",
" \"instruction\": \"Rewrite the following sentence to make it more concise: \\\"I was displeased with the result of the experiment that I conducted.\\\"\",\n",
" \"input\": \"\",\n",
" \"output\": \"I was unhappy with my experiment's outcome.\",\n",
" \"model_response\": \"I was displeased with the results of the experiment.\"\n",
" },\n",
" {\n",
" \"instruction\": \"How can we build a more efficient GPT model?\",\n",
" \"input\": \"\",\n",
" \"output\": \"We can build a more efficient GPT model by optimizing the architecture of the model, using smaller model sizes and training with fewer parameters. We can also leverage techniques such as knowledge distillation, transfer learning, dynamic sparsity and hybrid computing to further improve the efficiency of the model.\",\n",
" \"model_response\": \"Building a more efficient GPT model requires careful planning and optimization. First, it is important to identify the target language and the context in which the model is used. Then, it is important to select the appropriate model architecture, such as backpropagation, hyperparameters, and hyperparameters. Finally, it is important to select the appropriate model weights and optimizers, such as backpropagation, hyperparameters, and hyperparameters.\"\n",
" },\n",
"]"
]
},
{
"cell_type": "markdown",
"id": "840e2076-f2e6-44a2-86fd-f191f9674267",
"metadata": {},
"source": [
"Finally, we can evaluate the finetuned LLM using the [ollama_evaluate.py](ollama_evaluate.py) utility function:\n",
"The score is slightly lower than the score we obtained on the dataset we used in this chapter. However, note that the Alpaca test set contains more diverse and partly more challenging instructions than the dataset we used in the main chapter."
"Note that on an Nvidia L4 GPU, the code above, using LoRA, takes 1.30 min to run. In comparison, the baseline takes 1.80 minutes to run. So, LoRA is approximately 28% faster.\n",
"We can evaluate the performance using the Ollama Llama 3 method, which is for your convenience, also implemented in the `python exercise_experiments.py` script, which we can run as follows:\n",