diff --git a/appendix-E/01_main-chapter-code/appendix-E.ipynb b/appendix-E/01_main-chapter-code/appendix-E.ipynb
index d9da9ca..d905ad7 100644
--- a/appendix-E/01_main-chapter-code/appendix-E.ipynb
+++ b/appendix-E/01_main-chapter-code/appendix-E.ipynb
@@ -1,1423 +1,1517 @@
{
- "cells": [
- {
- "cell_type": "markdown",
- "id": "c024bfa4-1a7a-4751-b5a1-827225a3478b",
- "metadata": {
- "id": "c024bfa4-1a7a-4751-b5a1-827225a3478b"
- },
- "source": [
- "\n",
- "Supplementary code for \"Build a Large Language Model From Scratch\": https://www.manning.com/books/build-a-large-language-model-from-scratch by Sebastian Raschka
\n",
- "Code repository: https://github.com/rasbt/LLMs-from-scratch\n",
- ""
- ]
- },
- {
- "cell_type": "markdown",
- "id": "58b8c870-fb72-490e-8916-d8129bd5d1ff",
- "metadata": {},
- "source": [
- "# Appendix E: Parameter-efficient Finetuning with LoRA"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 1,
- "id": "5b7e01c2-1c84-4f2a-bb51-2e0b74abda90",
- "metadata": {
- "colab": {
- "base_uri": "https://localhost:8080/"
- },
- "id": "5b7e01c2-1c84-4f2a-bb51-2e0b74abda90",
- "outputId": "9495f150-9d79-4910-d6e7-6c0d9aae4a41"
- },
- "outputs": [
+ "cells": [
{
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "matplotlib version: 3.7.2\n",
- "numpy version: 1.25.2\n",
- "tiktoken version: 0.5.1\n",
- "torch version: 2.2.2\n",
- "tensorflow version: 2.15.0\n",
- "pandas version: 2.0.3\n"
- ]
- }
- ],
- "source": [
- "from importlib.metadata import version\n",
- "\n",
- "pkgs = [\"matplotlib\",\n",
- " \"numpy\",\n",
- " \"tiktoken\",\n",
- " \"torch\",\n",
- " \"tensorflow\", # For OpenAI's pretrained weights\n",
- " \"pandas\" # Dataset loading\n",
- " ]\n",
- "for p in pkgs:\n",
- " print(f\"{p} version: {version(p)}\")"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "21532056-0ef4-4c98-82c7-e91f61c6485e",
- "metadata": {},
- "source": [
- "## E.1 Introduction to LoRA"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "66edc999-3d91-4a1c-a157-9d056392e8d8",
- "metadata": {},
- "source": [
- "- No code in this section\n",
- "- Low-rank adaptation (LoRA) is a machine learning technique that modifies a pretrained model to better suit a specific, often smaller, dataset by adjusting only a small, low-rank subset of the model's parameters\n",
- "- This approach is important because it allows for efficient finetuning of large models on task-specific data, significantly reducing the computational cost and time required for finetuning"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "5bb75b5d-d59c-4948-821a-1594a5883dc1",
- "metadata": {},
- "source": [
- "- Suppose we have a large weight matrix $W$ for a given layer\n",
- "- During backpropagation, we learn a $\\Delta W$ matrix, which contains information on how much we want to update the original weights to minimize the loss function during training\n",
- "- In regular training and finetuning, the weight update is defined as follows:\n",
- "\n",
- "$$W_{\\text{updated}} = W + \\Delta W$$\n",
- "\n",
- "- The LoRA method proposed by [Hu et al.](https://arxiv.org/abs/2106.09685) offers a more efficient alternative to computing the weight updates $\\Delta W$ by learning an approximation of it, $\\Delta W \\approx AB$.\n",
- "- In other words, in LoRA, we have the following, where $A$ and $B$ are two small weight matrices:\n",
- "\n",
- "$$W_{\\text{updated}} = W + AB$$\n",
- "\n",
- "- The figure below illustrates these formulas for full finetuning and LoRA side by side"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "a8a7419d-cae9-4525-bb44-1641f6ef4f3b",
- "metadata": {},
- "source": [
- "
"
- ]
- },
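-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "- As a quick, illustrative sanity check of why the $AB$ factorization helps, the cell below compares the size of a full $\\Delta W$ update with a rank-8 $AB$ factorization for a hypothetical $768 \\times 768$ layer (the layer size and rank are assumptions chosen only for this example)"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "# Illustrative back-of-the-envelope calculation; the 768x768 layer size\n",
-    "# and rank 8 are assumptions chosen only for this example\n",
-    "in_dim, out_dim, rank = 768, 768, 8\n",
-    "\n",
-    "full_update = in_dim * out_dim  # Delta W stores one value per weight\n",
-    "lora_update = in_dim * rank + rank * out_dim  # A (768x8) plus B (8x768)\n",
-    "\n",
-    "print(f\"Full finetuning update: {full_update:,} values\")\n",
-    "print(f\"LoRA (r={rank}) update: {lora_update:,} values\")"
-   ]
-  },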
- {
- "cell_type": "markdown",
- "id": "4edd43c9-8ec5-48e6-b3fc-5fb3c16037cc",
- "metadata": {},
- "source": [
- "- If you paid close attention, the full finetuning and LoRA depictions in the figure above look slightly different from the formulas I have shown earlier\n",
- "- That's due to the distributive law of matrix multiplication: we don't have to add the weights with the updated weights but can keep them separate\n",
- "- For instance, if $x$ is the input data, then we can write the following for regular finetuning:\n",
- "\n",
- "$$x (W+\\Delta W) = x W + x \\Delta W$$\n",
- "\n",
- "- Similarly, we can write the following for LoRA:\n",
- "\n",
- "$$x (W+A B) = x W + x A B$$\n",
- "\n",
- "- The fact that we can keep the LoRA weight matrices separate makes LoRA especially attractive\n",
- "- In practice, this means that we don't have to modify the weights of the pretrained model at all, as we can apply the LoRA matrices on the fly\n",
- "- After setting up the dataset and loading the model, we will implement LoRA in the code to make these concepts less abstract"
- ]
- },
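-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "- The short cell below verifies the equivalence $x (W+A B) = x W + x A B$ numerically on small random tensors (the tensor shapes are arbitrary and chosen only for illustration)"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "import torch\n",
-    "\n",
-    "# Numerical check of x(W + AB) = xW + xAB; the shapes are arbitrary\n",
-    "torch.manual_seed(123)\n",
-    "x = torch.randn(1, 6)\n",
-    "W = torch.randn(6, 4)\n",
-    "A = torch.randn(6, 2)\n",
-    "B = torch.randn(2, 4)\n",
-    "\n",
-    "print(torch.allclose(x @ (W + A @ B), x @ W + x @ A @ B, atol=1e-6))"
-   ]
-  },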
- {
- "cell_type": "markdown",
- "id": "8c7017a2-32aa-4002-a2f3-12aac293ccdf",
- "metadata": {
- "id": "8c7017a2-32aa-4002-a2f3-12aac293ccdf"
- },
- "source": [
- "## E.2 Preparing the dataset"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "669c64df-4431-4d27-834d-2bb38a01fc02",
- "metadata": {},
- "source": [
- "- This section repeats the code from chapter 6 to load and prepare the dataset\n",
- "- Instead of repeating this code, one could open and run the chapter 6 notebook and then insert the LoRA code from section E.4 there\n",
- "- (The LoRA code was originally the last section of chapter 6 but was moved to the appendix due to the length of chapter 6)\n",
- "- In a similar fashion, we could also apply LoRA to the models in chapter 7 for instruction finetuning"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 2,
- "id": "def7c09b-af9c-4216-90ce-5e67aed1065c",
- "metadata": {
- "colab": {
- "base_uri": "https://localhost:8080/"
- },
- "id": "def7c09b-af9c-4216-90ce-5e67aed1065c",
- "outputId": "424e4423-f623-443c-ab9e-656f9e867559"
- },
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "sms_spam_collection/SMSSpamCollection.tsv already exists. Skipping download and extraction.\n"
- ]
- }
- ],
- "source": [
- "from pathlib import Path\n",
- "import pandas as pd\n",
- "from previous_chapters import (\n",
- " download_and_unzip_spam_data,\n",
- " create_balanced_dataset,\n",
- " random_split\n",
- ")\n",
- "\n",
- "\n",
- "url = \"https://archive.ics.uci.edu/static/public/228/sms+spam+collection.zip\"\n",
- "zip_path = \"sms_spam_collection.zip\"\n",
- "extracted_path = \"sms_spam_collection\"\n",
- "data_file_path = Path(extracted_path) / \"SMSSpamCollection.tsv\"\n",
- "\n",
- "download_and_unzip_spam_data(url, zip_path, extracted_path, data_file_path)\n",
- "\n",
- "df = pd.read_csv(data_file_path, sep=\"\\t\", header=None, names=[\"Label\", \"Text\"])\n",
- "balanced_df = create_balanced_dataset(df)\n",
- "balanced_df[\"Label\"] = balanced_df[\"Label\"].map({\"ham\": 0, \"spam\": 1})\n",
- "\n",
- "train_df, validation_df, test_df = random_split(balanced_df, 0.7, 0.1)\n",
- "train_df.to_csv(\"train.csv\", index=None)\n",
- "validation_df.to_csv(\"validation.csv\", index=None)\n",
- "test_df.to_csv(\"test.csv\", index=None)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 3,
- "id": "74c3c463-8763-4cc0-9320-41c7eaad8ab7",
- "metadata": {
- "colab": {
- "base_uri": "https://localhost:8080/"
- },
- "id": "74c3c463-8763-4cc0-9320-41c7eaad8ab7",
- "outputId": "b5b48439-32c8-4b37-cca2-c9dc8fa86563"
- },
- "outputs": [],
- "source": [
- "import torch\n",
- "from torch.utils.data import Dataset\n",
- "import tiktoken\n",
- "from previous_chapters import SpamDataset\n",
- "\n",
- "\n",
- "tokenizer = tiktoken.get_encoding(\"gpt2\")\n",
- "train_dataset = SpamDataset(\"train.csv\", max_length=None, tokenizer=tokenizer)\n",
- "val_dataset = SpamDataset(\"validation.csv\", max_length=train_dataset.max_length, tokenizer=tokenizer)\n",
- "test_dataset = SpamDataset(\"test.csv\", max_length=train_dataset.max_length, tokenizer=tokenizer)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 4,
- "id": "8681adc0-6f02-4e75-b01a-a6ab75d05542",
- "metadata": {
- "colab": {
- "base_uri": "https://localhost:8080/"
- },
- "id": "8681adc0-6f02-4e75-b01a-a6ab75d05542",
- "outputId": "3266c410-4fdb-4a8c-a142-7f707e2525ab"
- },
- "outputs": [],
- "source": [
- "from torch.utils.data import DataLoader\n",
- "\n",
- "num_workers = 0\n",
- "batch_size = 8\n",
- "\n",
- "torch.manual_seed(123)\n",
- "\n",
- "train_loader = DataLoader(\n",
- " dataset=train_dataset,\n",
- " batch_size=batch_size,\n",
- " shuffle=True,\n",
- " num_workers=num_workers,\n",
- " drop_last=True,\n",
- ")\n",
- "\n",
- "val_loader = DataLoader(\n",
- " dataset=val_dataset,\n",
- " batch_size=batch_size,\n",
- " num_workers=num_workers,\n",
- " drop_last=False,\n",
- ")\n",
- "\n",
- "test_loader = DataLoader(\n",
- " dataset=test_dataset,\n",
- " batch_size=batch_size,\n",
- " num_workers=num_workers,\n",
- " drop_last=False,\n",
- ")"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "ab7335db-e0bb-4e27-80c5-eea11e593a57",
- "metadata": {},
- "source": [
- "- As a verification step, we iterate through the data loaders and check that the batches contain 8 training examples each, where each training example consists of 120 tokens"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 5,
- "id": "4dee6882-4c3a-4964-af15-fa31f86ad047",
- "metadata": {},
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "Train loader:\n",
- "Input batch dimensions: torch.Size([8, 120])\n",
- "Label batch dimensions torch.Size([8])\n"
- ]
- }
- ],
- "source": [
- "print(\"Train loader:\")\n",
- "for input_batch, target_batch in train_loader:\n",
- " pass\n",
- "\n",
- "print(\"Input batch dimensions:\", input_batch.shape)\n",
- "print(\"Label batch dimensions\", target_batch.shape)"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "5cdd7947-7039-49bf-8a5e-c0a2f4281ca1",
- "metadata": {},
- "source": [
- "- Lastly, let's print the total number of batches in each dataset"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 6,
- "id": "IZfw-TYD2zTj",
- "metadata": {
- "colab": {
- "base_uri": "https://localhost:8080/"
- },
- "id": "IZfw-TYD2zTj",
- "outputId": "6934bbf2-9797-4fbe-d26b-1a246e18c2fb"
- },
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "130 training batches\n",
- "19 validation batches\n",
- "38 test batches\n"
- ]
- }
- ],
- "source": [
- "print(f\"{len(train_loader)} training batches\")\n",
- "print(f\"{len(val_loader)} validation batches\")\n",
- "print(f\"{len(test_loader)} test batches\")"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "dec9aa4a-ffd2-4d9f-a835-cce1059fe604",
- "metadata": {},
- "source": [
- "## E.3 Initializing the model"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "f36ebdaf-810e-46a2-9ad9-e017a04051b1",
- "metadata": {},
- "source": [
- "- This section repeats the code from chapter 6 to load and prepare the model"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 7,
- "id": "02b3a506-3879-4258-82b5-93a5b6bafa74",
- "metadata": {},
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "File already exists and is up-to-date: gpt2/124M/checkpoint\n",
- "File already exists and is up-to-date: gpt2/124M/encoder.json\n",
- "File already exists and is up-to-date: gpt2/124M/hparams.json\n",
- "File already exists and is up-to-date: gpt2/124M/model.ckpt.data-00000-of-00001\n",
- "File already exists and is up-to-date: gpt2/124M/model.ckpt.index\n",
- "File already exists and is up-to-date: gpt2/124M/model.ckpt.meta\n",
- "File already exists and is up-to-date: gpt2/124M/vocab.bpe\n"
- ]
- }
- ],
- "source": [
- "from gpt_download import download_and_load_gpt2\n",
- "from previous_chapters import GPTModel, load_weights_into_gpt\n",
- "\n",
- "\n",
- "CHOOSE_MODEL = \"gpt2-small (124M)\"\n",
- "INPUT_PROMPT = \"Every effort moves\"\n",
- "\n",
- "BASE_CONFIG = {\n",
- " \"vocab_size\": 50257, # Vocabulary size\n",
- " \"context_length\": 1024, # Context length\n",
- " \"drop_rate\": 0.0, # Dropout rate\n",
- " \"qkv_bias\": True # Query-key-value bias\n",
- "}\n",
- "\n",
- "model_configs = {\n",
- " \"gpt2-small (124M)\": {\"emb_dim\": 768, \"n_layers\": 12, \"n_heads\": 12},\n",
- " \"gpt2-medium (355M)\": {\"emb_dim\": 1024, \"n_layers\": 24, \"n_heads\": 16},\n",
- " \"gpt2-large (774M)\": {\"emb_dim\": 1280, \"n_layers\": 36, \"n_heads\": 20},\n",
- " \"gpt2-xl (1558M)\": {\"emb_dim\": 1600, \"n_layers\": 48, \"n_heads\": 25},\n",
- "}\n",
- "\n",
- "BASE_CONFIG.update(model_configs[CHOOSE_MODEL])\n",
- "\n",
- "model_size = CHOOSE_MODEL.split(\" \")[-1].lstrip(\"(\").rstrip(\")\")\n",
- "settings, params = download_and_load_gpt2(model_size=model_size, models_dir=\"gpt2\")\n",
- "\n",
- "model = GPTModel(BASE_CONFIG)\n",
- "load_weights_into_gpt(model, params)\n",
- "model.eval();"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "252614cd-7ce6-4908-83e6-3761f519904e",
- "metadata": {},
- "source": [
- "- To ensure that the model was loaded corrected, let's double-check that it generates coherent text"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 8,
- "id": "8b6ce20c-0700-4783-8be0-4cf17c200a7f",
- "metadata": {},
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "Every effort moves you forward.\n",
- "\n",
- "The first step is to understand the importance of your work\n"
- ]
- }
- ],
- "source": [
- "from previous_chapters import (\n",
- " generate_text_simple,\n",
- " text_to_token_ids,\n",
- " token_ids_to_text\n",
- ")\n",
- "\n",
- "\n",
- "text_1 = \"Every effort moves you\"\n",
- "\n",
- "token_ids = generate_text_simple(\n",
- " model=model,\n",
- " idx=text_to_token_ids(text_1, tokenizer),\n",
- " max_new_tokens=15,\n",
- " context_size=BASE_CONFIG[\"context_length\"]\n",
- ")\n",
- "\n",
- "print(token_ids_to_text(token_ids, tokenizer))"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "8174b31b-1ab5-4115-b01c-245369da5af3",
- "metadata": {},
- "source": [
- "- Then, we prepare the model for classification finetuning similar to chapter 6, where we replace the output layer"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 9,
- "id": "e255ce91-d73a-4854-90a4-95804928eb16",
- "metadata": {},
- "outputs": [],
- "source": [
- "torch.manual_seed(123)\n",
- "\n",
- "num_classes = 2\n",
- "model.out_head = torch.nn.Linear(in_features=768, out_features=num_classes)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 10,
- "id": "02e6f057-1383-4ece-8444-0a88e71ac75d",
- "metadata": {},
- "outputs": [],
- "source": [
- "device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n",
- "model.to(device); # no assignment model = model.to(device) necessary for nn.Module classes"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "8e951cd6-5e42-44d2-b21f-895cb61004fe",
- "metadata": {},
- "source": [
- "- Lastly, let's calculate the initial classification accuracy of the non-finetuned model (we expect this to be around 50%, which means that the model is not able to distinguish between spam and non-spam messages yet reliably)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 11,
- "id": "fc7dd72c-73a2-4881-ade0-0a9605f1ab8c",
- "metadata": {},
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "Training accuracy: 46.25%\n",
- "Validation accuracy: 45.00%\n",
- "Test accuracy: 48.75%\n"
- ]
- }
- ],
- "source": [
- "from previous_chapters import calc_accuracy_loader\n",
- "\n",
- "\n",
- "torch.manual_seed(123)\n",
- "train_accuracy = calc_accuracy_loader(train_loader, model, device, num_batches=10)\n",
- "val_accuracy = calc_accuracy_loader(val_loader, model, device, num_batches=10)\n",
- "test_accuracy = calc_accuracy_loader(test_loader, model, device, num_batches=10)\n",
- "\n",
- "print(f\"Training accuracy: {train_accuracy*100:.2f}%\")\n",
- "print(f\"Validation accuracy: {val_accuracy*100:.2f}%\")\n",
- "print(f\"Test accuracy: {test_accuracy*100:.2f}%\")"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "398a1ec9-e2a1-43d6-bf9f-12ee54b46a7b",
- "metadata": {
- "id": "398a1ec9-e2a1-43d6-bf9f-12ee54b46a7b"
- },
- "source": [
- "## E.4 Parameter-efficient finetuning with LoRA"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "652a4a82-61ef-4d0a-9858-8988e844f12c",
- "metadata": {},
- "source": [
- "- We begin by initializing a LoRALayer that creates the matrices $A$ and $B$, along with the `alpha` scaling hyperparameter and the `rank` ($r$) hyperparameters\n",
- "- This layer can accept an input and compute the corresponding output, as illustrated in the figure below\n",
- "\n",
- "
\n",
- "\n",
- "In code, this LoRA layer depicted in the figure above looks like as follows"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 12,
- "id": "2ds9ywjMwvIW",
- "metadata": {
- "id": "2ds9ywjMwvIW"
- },
- "outputs": [],
- "source": [
- "class LoRALayer(torch.nn.Module):\n",
- " def __init__(self, in_dim, out_dim, rank, alpha):\n",
- " super().__init__()\n",
- " std_dev = 1 / torch.sqrt(torch.tensor(rank).float())\n",
- " self.A = torch.nn.Parameter(torch.randn(in_dim, rank) * std_dev)\n",
- " self.B = torch.nn.Parameter(torch.zeros(rank, out_dim))\n",
- " self.alpha = alpha\n",
- "\n",
- " def forward(self, x):\n",
- " x = self.alpha * (x @ self.A @ self.B)\n",
- " return x"
- ]
- },
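-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "- As a quick illustration of how the rank affects the size of the `LoRALayer` just defined, the cell below counts its trainable parameters for a few ranks (the $768 \\times 768$ layer shape is an assumption chosen to match the GPT-2 small embedding dimension)"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "# Illustrative: the number of LoRA parameters grows linearly with the rank;\n",
-    "# the 768x768 layer shape is an assumption for this example\n",
-    "for r in (2, 8, 32):\n",
-    "    layer = LoRALayer(in_dim=768, out_dim=768, rank=r, alpha=8)\n",
-    "    n_params = sum(p.numel() for p in layer.parameters())\n",
-    "    print(f\"rank={r:2d}: {n_params:,} trainable parameters\")"
-   ]
-  },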
- {
- "cell_type": "markdown",
- "id": "ad21faa8-0614-4257-93cd-68952193e14a",
- "metadata": {},
- "source": [
- "- In the code above, `rank` is a hyperparameter that controls the inner dimension of the matrices $A$ and $B$\n",
- "- In other words, this parameter controls the number of additional parameters introduced by LoRA and is a key factor in determining the balance between model adaptability and parameter efficiency\n",
- "- The second hyperparameter, alpha, is a scaling hyperparameter applied to the output of the low-rank adaptation\n",
- "- It essentially controls the extent to which the adapted layer's output is allowed to influence the original output of the layer being adapted\n",
- "- This can be seen as a way to regulate the impact of the low-rank adaptation on the layer's output\n",
- "- So far, the `LoRALayer` class we implemented above allows us to transform the layer inputs $x$\n",
- "- However, in LoRA, we are usually interested in replacing existing `Linear` layers so that the weight update is applied to the existing pretrained weights, as shown in the figure below\n",
- "\n",
- "
"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "3e6d5da0-dfce-4808-b89b-29ff333f563f",
- "metadata": {},
- "source": [
- "- To incorporate the original `Linear` layer weights as shown in the figure above, we implement a `LinearWithLoRA` layer below that uses the previously implemented LoRALayer and can be used to replace existing `Linear` layers in a neural network, for example, the self-attention module or feed forward modules in an LLM"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 13,
- "id": "127d3a64-8359-4b21-b056-78d58cc75fe8",
- "metadata": {},
- "outputs": [],
- "source": [
- "class LinearWithLoRA(torch.nn.Module):\n",
- " def __init__(self, linear, rank, alpha):\n",
- " super().__init__()\n",
- " self.linear = linear\n",
- " self.lora = LoRALayer(\n",
- " linear.in_features, linear.out_features, rank, alpha\n",
- " )\n",
- "\n",
- " def forward(self, x):\n",
- " return self.linear(x) + self.lora(x)"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "e1145a90-35ff-462c-820b-15483fa5b051",
- "metadata": {},
- "source": [
- "- Note that since we initialize the weight matrix $B$ (`self.B` in `LoRALayer`) with zero values in the LoRA layer, the matrix multiplication between $A$ and $B$ results in a matrix consisting of 0's and doesn't affect the original weights (since adding 0 to the original weights does not modify them)"
- ]
- },
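-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "- The cell below spot-checks this: right after initialization, a `LinearWithLoRA` layer produces exactly the same outputs as the `Linear` layer it wraps (the layer sizes are arbitrary and chosen only for illustration)"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "# With B zero-initialized, the LoRA update contributes nothing at first,\n",
-    "# so the wrapped layer matches the original Linear layer exactly\n",
-    "torch.manual_seed(123)\n",
-    "linear = torch.nn.Linear(10, 5)\n",
-    "layer = LinearWithLoRA(linear, rank=4, alpha=8)\n",
-    "\n",
-    "x = torch.rand(2, 10)\n",
-    "print(torch.allclose(layer(x), linear(x)))"
-   ]
-  },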
- {
- "cell_type": "markdown",
- "id": "e98a6d36-7bc9-434c-a7f1-533f26aff06d",
- "metadata": {
- "id": "4D21Jk7Vw3nG"
- },
- "source": [
- "- To try LoRA on the GPT model we defined earlier, we define a `replace_linear_with_lora` function to replace all `Linear` layers in the model with the new `LinearWithLoRA` layers\n",
- "\n",
- "
"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 14,
- "id": "WlQZ8ygqzN_g",
- "metadata": {
- "id": "WlQZ8ygqzN_g"
- },
- "outputs": [],
- "source": [
- "def replace_linear_with_lora(model, rank, alpha):\n",
- " for name, module in model.named_children():\n",
- " if isinstance(module, torch.nn.Linear):\n",
- " # Replace the Linear layer with LinearWithLoRA\n",
- " setattr(model, name, LinearWithLoRA(module, rank, alpha))\n",
- " else:\n",
- " # Recursively apply the same function to child modules\n",
- " replace_linear_with_lora(module, rank, alpha)"
- ]
- },
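-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "- To see what this function does before applying it to the full GPT model, the cell below runs it on a small toy network (the toy architecture is an assumption chosen only for illustration)"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "# Illustrative: apply replace_linear_with_lora to a small toy network\n",
-    "toy_model = torch.nn.Sequential(\n",
-    "    torch.nn.Linear(4, 8),\n",
-    "    torch.nn.ReLU(),\n",
-    "    torch.nn.Linear(8, 2),\n",
-    ")\n",
-    "\n",
-    "replace_linear_with_lora(toy_model, rank=2, alpha=4)\n",
-    "print(toy_model)  # both Linear layers are now wrapped in LinearWithLoRA"
-   ]
-  },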
- {
- "cell_type": "markdown",
- "id": "8c172164-cdde-4489-b7d7-aaed9cc2f5f2",
- "metadata": {},
- "source": [
- "- We then freeze the original model parameter and use the `replace_linear_with_lora` to replace the said `Linear` layers using the code below\n",
- "- This will replace the `Linear` layers in the LLM with `LinearWithLoRA` layers"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 15,
- "id": "dbe15350-4da9-4829-9d23-98bbd3d0b1a1",
- "metadata": {},
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "Total trainable parameters before: 124,441,346\n",
- "Total trainable parameters after: 0\n"
- ]
- }
- ],
- "source": [
- "total_params = sum(p.numel() for p in model.parameters() if p.requires_grad)\n",
- "print(f\"Total trainable parameters before: {total_params:,}\")\n",
- "\n",
- "for param in model.parameters():\n",
- " param.requires_grad = False\n",
- "\n",
- "total_params = sum(p.numel() for p in model.parameters() if p.requires_grad)\n",
- "print(f\"Total trainable parameters after: {total_params:,}\")"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 16,
- "id": "mLk_fPq0yz_u",
- "metadata": {
- "colab": {
- "base_uri": "https://localhost:8080/"
- },
- "id": "mLk_fPq0yz_u",
- "outputId": "7ba89607-ca75-4718-e8dc-9cdc44c3e410"
- },
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "Total trainable LoRA parameters: 1,333,264\n"
- ]
- }
- ],
- "source": [
- "replace_linear_with_lora(model, rank=8, alpha=8)\n",
- "\n",
- "total_params = sum(p.numel() for p in model.parameters() if p.requires_grad)\n",
- "print(f\"Total trainable LoRA parameters: {total_params:,}\")"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "b8b6819e-ef7a-4f0d-841a-1b467496bef9",
- "metadata": {},
- "source": [
- "- As we can see, we reduced the number of trainable parameters by almost 100x when using LoRA\n",
- "- Let's now double-check whether the layers have been modified as intended by printing the model architecture"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 18,
- "id": "1711be61-bb2c-466f-9b5b-24f4aa5ccd9c",
- "metadata": {},
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "GPTModel(\n",
- " (tok_emb): Embedding(50257, 768)\n",
- " (pos_emb): Embedding(1024, 768)\n",
- " (drop_emb): Dropout(p=0.0, inplace=False)\n",
- " (trf_blocks): Sequential(\n",
- " (0): TransformerBlock(\n",
- " (att): MultiHeadAttention(\n",
- " (W_query): LinearWithLoRA(\n",
- " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " (W_key): LinearWithLoRA(\n",
- " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " (W_value): LinearWithLoRA(\n",
- " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " (out_proj): LinearWithLoRA(\n",
- " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " (dropout): Dropout(p=0.0, inplace=False)\n",
- " )\n",
- " (ff): FeedForward(\n",
- " (layers): Sequential(\n",
- " (0): LinearWithLoRA(\n",
- " (linear): Linear(in_features=768, out_features=3072, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " (1): GELU()\n",
- " (2): LinearWithLoRA(\n",
- " (linear): Linear(in_features=3072, out_features=768, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " )\n",
- " )\n",
- " (norm1): LayerNorm()\n",
- " (norm2): LayerNorm()\n",
- " (drop_resid): Dropout(p=0.0, inplace=False)\n",
- " )\n",
- " (1): TransformerBlock(\n",
- " (att): MultiHeadAttention(\n",
- " (W_query): LinearWithLoRA(\n",
- " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " (W_key): LinearWithLoRA(\n",
- " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " (W_value): LinearWithLoRA(\n",
- " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " (out_proj): LinearWithLoRA(\n",
- " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " (dropout): Dropout(p=0.0, inplace=False)\n",
- " )\n",
- " (ff): FeedForward(\n",
- " (layers): Sequential(\n",
- " (0): LinearWithLoRA(\n",
- " (linear): Linear(in_features=768, out_features=3072, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " (1): GELU()\n",
- " (2): LinearWithLoRA(\n",
- " (linear): Linear(in_features=3072, out_features=768, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " )\n",
- " )\n",
- " (norm1): LayerNorm()\n",
- " (norm2): LayerNorm()\n",
- " (drop_resid): Dropout(p=0.0, inplace=False)\n",
- " )\n",
- " (2): TransformerBlock(\n",
- " (att): MultiHeadAttention(\n",
- " (W_query): LinearWithLoRA(\n",
- " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " (W_key): LinearWithLoRA(\n",
- " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " (W_value): LinearWithLoRA(\n",
- " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " (out_proj): LinearWithLoRA(\n",
- " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " (dropout): Dropout(p=0.0, inplace=False)\n",
- " )\n",
- " (ff): FeedForward(\n",
- " (layers): Sequential(\n",
- " (0): LinearWithLoRA(\n",
- " (linear): Linear(in_features=768, out_features=3072, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " (1): GELU()\n",
- " (2): LinearWithLoRA(\n",
- " (linear): Linear(in_features=3072, out_features=768, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " )\n",
- " )\n",
- " (norm1): LayerNorm()\n",
- " (norm2): LayerNorm()\n",
- " (drop_resid): Dropout(p=0.0, inplace=False)\n",
- " )\n",
- " (3): TransformerBlock(\n",
- " (att): MultiHeadAttention(\n",
- " (W_query): LinearWithLoRA(\n",
- " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " (W_key): LinearWithLoRA(\n",
- " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " (W_value): LinearWithLoRA(\n",
- " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " (out_proj): LinearWithLoRA(\n",
- " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " (dropout): Dropout(p=0.0, inplace=False)\n",
- " )\n",
- " (ff): FeedForward(\n",
- " (layers): Sequential(\n",
- " (0): LinearWithLoRA(\n",
- " (linear): Linear(in_features=768, out_features=3072, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " (1): GELU()\n",
- " (2): LinearWithLoRA(\n",
- " (linear): Linear(in_features=3072, out_features=768, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " )\n",
- " )\n",
- " (norm1): LayerNorm()\n",
- " (norm2): LayerNorm()\n",
- " (drop_resid): Dropout(p=0.0, inplace=False)\n",
- " )\n",
- " (4): TransformerBlock(\n",
- " (att): MultiHeadAttention(\n",
- " (W_query): LinearWithLoRA(\n",
- " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " (W_key): LinearWithLoRA(\n",
- " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " (W_value): LinearWithLoRA(\n",
- " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " (out_proj): LinearWithLoRA(\n",
- " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " (dropout): Dropout(p=0.0, inplace=False)\n",
- " )\n",
- " (ff): FeedForward(\n",
- " (layers): Sequential(\n",
- " (0): LinearWithLoRA(\n",
- " (linear): Linear(in_features=768, out_features=3072, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " (1): GELU()\n",
- " (2): LinearWithLoRA(\n",
- " (linear): Linear(in_features=3072, out_features=768, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " )\n",
- " )\n",
- " (norm1): LayerNorm()\n",
- " (norm2): LayerNorm()\n",
- " (drop_resid): Dropout(p=0.0, inplace=False)\n",
- " )\n",
- " (5): TransformerBlock(\n",
- " (att): MultiHeadAttention(\n",
- " (W_query): LinearWithLoRA(\n",
- " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " (W_key): LinearWithLoRA(\n",
- " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " (W_value): LinearWithLoRA(\n",
- " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " (out_proj): LinearWithLoRA(\n",
- " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " (dropout): Dropout(p=0.0, inplace=False)\n",
- " )\n",
- " (ff): FeedForward(\n",
- " (layers): Sequential(\n",
- " (0): LinearWithLoRA(\n",
- " (linear): Linear(in_features=768, out_features=3072, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " (1): GELU()\n",
- " (2): LinearWithLoRA(\n",
- " (linear): Linear(in_features=3072, out_features=768, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " )\n",
- " )\n",
- " (norm1): LayerNorm()\n",
- " (norm2): LayerNorm()\n",
- " (drop_resid): Dropout(p=0.0, inplace=False)\n",
- " )\n",
- " (6): TransformerBlock(\n",
- " (att): MultiHeadAttention(\n",
- " (W_query): LinearWithLoRA(\n",
- " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " (W_key): LinearWithLoRA(\n",
- " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " (W_value): LinearWithLoRA(\n",
- " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " (out_proj): LinearWithLoRA(\n",
- " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " (dropout): Dropout(p=0.0, inplace=False)\n",
- " )\n",
- " (ff): FeedForward(\n",
- " (layers): Sequential(\n",
- " (0): LinearWithLoRA(\n",
- " (linear): Linear(in_features=768, out_features=3072, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " (1): GELU()\n",
- " (2): LinearWithLoRA(\n",
- " (linear): Linear(in_features=3072, out_features=768, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " )\n",
- " )\n",
- " (norm1): LayerNorm()\n",
- " (norm2): LayerNorm()\n",
- " (drop_resid): Dropout(p=0.0, inplace=False)\n",
- " )\n",
- " (7): TransformerBlock(\n",
- " (att): MultiHeadAttention(\n",
- " (W_query): LinearWithLoRA(\n",
- " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " (W_key): LinearWithLoRA(\n",
- " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " (W_value): LinearWithLoRA(\n",
- " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " (out_proj): LinearWithLoRA(\n",
- " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " (dropout): Dropout(p=0.0, inplace=False)\n",
- " )\n",
- " (ff): FeedForward(\n",
- " (layers): Sequential(\n",
- " (0): LinearWithLoRA(\n",
- " (linear): Linear(in_features=768, out_features=3072, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " (1): GELU()\n",
- " (2): LinearWithLoRA(\n",
- " (linear): Linear(in_features=3072, out_features=768, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " )\n",
- " )\n",
- " (norm1): LayerNorm()\n",
- " (norm2): LayerNorm()\n",
- " (drop_resid): Dropout(p=0.0, inplace=False)\n",
- " )\n",
- " (8): TransformerBlock(\n",
- " (att): MultiHeadAttention(\n",
- " (W_query): LinearWithLoRA(\n",
- " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " (W_key): LinearWithLoRA(\n",
- " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " (W_value): LinearWithLoRA(\n",
- " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " (out_proj): LinearWithLoRA(\n",
- " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " (dropout): Dropout(p=0.0, inplace=False)\n",
- " )\n",
- " (ff): FeedForward(\n",
- " (layers): Sequential(\n",
- " (0): LinearWithLoRA(\n",
- " (linear): Linear(in_features=768, out_features=3072, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " (1): GELU()\n",
- " (2): LinearWithLoRA(\n",
- " (linear): Linear(in_features=3072, out_features=768, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " )\n",
- " )\n",
- " (norm1): LayerNorm()\n",
- " (norm2): LayerNorm()\n",
- " (drop_resid): Dropout(p=0.0, inplace=False)\n",
- " )\n",
- " (9): TransformerBlock(\n",
- " (att): MultiHeadAttention(\n",
- " (W_query): LinearWithLoRA(\n",
- " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " (W_key): LinearWithLoRA(\n",
- " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " (W_value): LinearWithLoRA(\n",
- " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " (out_proj): LinearWithLoRA(\n",
- " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " (dropout): Dropout(p=0.0, inplace=False)\n",
- " )\n",
- " (ff): FeedForward(\n",
- " (layers): Sequential(\n",
- " (0): LinearWithLoRA(\n",
- " (linear): Linear(in_features=768, out_features=3072, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " (1): GELU()\n",
- " (2): LinearWithLoRA(\n",
- " (linear): Linear(in_features=3072, out_features=768, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " )\n",
- " )\n",
- " (norm1): LayerNorm()\n",
- " (norm2): LayerNorm()\n",
- " (drop_resid): Dropout(p=0.0, inplace=False)\n",
- " )\n",
- " (10): TransformerBlock(\n",
- " (att): MultiHeadAttention(\n",
- " (W_query): LinearWithLoRA(\n",
- " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " (W_key): LinearWithLoRA(\n",
- " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " (W_value): LinearWithLoRA(\n",
- " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " (out_proj): LinearWithLoRA(\n",
- " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " (dropout): Dropout(p=0.0, inplace=False)\n",
- " )\n",
- " (ff): FeedForward(\n",
- " (layers): Sequential(\n",
- " (0): LinearWithLoRA(\n",
- " (linear): Linear(in_features=768, out_features=3072, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " (1): GELU()\n",
- " (2): LinearWithLoRA(\n",
- " (linear): Linear(in_features=3072, out_features=768, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " )\n",
- " )\n",
- " (norm1): LayerNorm()\n",
- " (norm2): LayerNorm()\n",
- " (drop_resid): Dropout(p=0.0, inplace=False)\n",
- " )\n",
- " (11): TransformerBlock(\n",
- " (att): MultiHeadAttention(\n",
- " (W_query): LinearWithLoRA(\n",
- " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " (W_key): LinearWithLoRA(\n",
- " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " (W_value): LinearWithLoRA(\n",
- " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " (out_proj): LinearWithLoRA(\n",
- " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " (dropout): Dropout(p=0.0, inplace=False)\n",
- " )\n",
- " (ff): FeedForward(\n",
- " (layers): Sequential(\n",
- " (0): LinearWithLoRA(\n",
- " (linear): Linear(in_features=768, out_features=3072, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " (1): GELU()\n",
- " (2): LinearWithLoRA(\n",
- " (linear): Linear(in_features=3072, out_features=768, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- " )\n",
- " )\n",
- " (norm1): LayerNorm()\n",
- " (norm2): LayerNorm()\n",
- " (drop_resid): Dropout(p=0.0, inplace=False)\n",
- " )\n",
- " )\n",
- " (final_norm): LayerNorm()\n",
- " (out_head): LinearWithLoRA(\n",
- " (linear): Linear(in_features=768, out_features=2, bias=True)\n",
- " (lora): LoRALayer()\n",
- " )\n",
- ")\n"
- ]
- }
- ],
- "source": [
- "device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n",
- "model.to(device)\n",
- "\n",
- "print(model)"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "c4bbc9d7-65ec-4675-bab8-2e56eb0cfb55",
- "metadata": {},
- "source": [
- "- Based on the model architecture above, we can see that the model now contains our new `LinearWithLoRA` layers\n",
- "- Also, since we initialized matrix $B$ with 0's, we expect the initial model performance to be unchanged compared to before"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 19,
- "id": "DAlrb_I00VEU",
- "metadata": {
- "colab": {
- "base_uri": "https://localhost:8080/"
- },
- "id": "DAlrb_I00VEU",
- "outputId": "3dae5ff0-316d-408e-c8dc-2b8c60f9b994"
- },
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "Training accuracy: 46.25%\n",
- "Validation accuracy: 45.00%\n",
- "Test accuracy: 48.75%\n"
- ]
- }
- ],
- "source": [
- "torch.manual_seed(123)\n",
- "train_accuracy = calc_accuracy_loader(train_loader, model, device, num_batches=10)\n",
- "val_accuracy = calc_accuracy_loader(val_loader, model, device, num_batches=10)\n",
- "test_accuracy = calc_accuracy_loader(test_loader, model, device, num_batches=10)\n",
- "\n",
- "print(f\"Training accuracy: {train_accuracy*100:.2f}%\")\n",
- "print(f\"Validation accuracy: {val_accuracy*100:.2f}%\")\n",
- "print(f\"Test accuracy: {test_accuracy*100:.2f}%\")"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "13735b3e-f0c3-4dba-ae3d-4141b2878101",
- "metadata": {},
- "source": [
- "- Let's now get to the interesting part and finetune the model by reusing the training function from chapter 6\n",
- "- The training takes about 15 minutes on a M3 MacBook Air laptop computer and less than half a minute on a V100 or A100 GPU"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 20,
- "id": "wCParRvr0eff",
- "metadata": {
- "colab": {
- "base_uri": "https://localhost:8080/"
- },
- "id": "wCParRvr0eff",
- "outputId": "b86fd5f4-1527-4549-e0b0-9dff37836f0a"
- },
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "Ep 1 (Step 000000): Train loss 2.849, Val loss 2.565\n",
- "Ep 1 (Step 000050): Train loss 0.515, Val loss 0.465\n",
- "Ep 1 (Step 000100): Train loss 0.191, Val loss 0.423\n",
- "Training accuracy: 97.50% | Validation accuracy: 97.50%\n",
- "Ep 2 (Step 000150): Train loss 0.170, Val loss 0.072\n",
- "Ep 2 (Step 000200): Train loss 0.014, Val loss 0.087\n",
- "Ep 2 (Step 000250): Train loss 0.027, Val loss 0.197\n",
- "Training accuracy: 100.00% | Validation accuracy: 92.50%\n",
- "Ep 3 (Step 000300): Train loss 0.014, Val loss 0.321\n",
- "Ep 3 (Step 000350): Train loss 0.015, Val loss 0.146\n",
- "Training accuracy: 100.00% | Validation accuracy: 97.50%\n",
- "Ep 4 (Step 000400): Train loss 0.008, Val loss 0.103\n",
- "Ep 4 (Step 000450): Train loss 0.010, Val loss 0.178\n",
- "Ep 4 (Step 000500): Train loss 0.097, Val loss 0.056\n",
- "Training accuracy: 100.00% | Validation accuracy: 97.50%\n",
- "Ep 5 (Step 000550): Train loss 0.032, Val loss 0.091\n",
- "Ep 5 (Step 000600): Train loss 0.002, Val loss 0.058\n",
- "Training accuracy: 100.00% | Validation accuracy: 100.00%\n",
- "Ep 6 (Step 000650): Train loss 0.001, Val loss 0.009\n",
- "Ep 6 (Step 000700): Train loss 0.001, Val loss 0.039\n",
- "Ep 6 (Step 000750): Train loss 0.000, Val loss 0.038\n",
- "Training accuracy: 100.00% | Validation accuracy: 95.00%\n",
- "Training completed in 13.70 minutes.\n"
- ]
- }
- ],
- "source": [
- "import time\n",
- "from previous_chapters import train_classifier_simple\n",
- "\n",
- "\n",
- "start_time = time.time()\n",
- "\n",
- "torch.manual_seed(123)\n",
- "\n",
- "optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5, weight_decay=0.1)\n",
- "\n",
- "num_epochs = 6\n",
- "train_losses, val_losses, train_accs, val_accs, examples_seen = train_classifier_simple(\n",
- " model, train_loader, val_loader, optimizer, device,\n",
- " num_epochs=num_epochs, eval_freq=50, eval_iter=5,\n",
- " tokenizer=tokenizer\n",
- ")\n",
- "\n",
- "end_time = time.time()\n",
- "execution_time_minutes = (end_time - start_time) / 60\n",
- "print(f\"Training completed in {execution_time_minutes:.2f} minutes.\")"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "d0c89e82-3aa8-44c6-b046-0b16200b8e6c",
- "metadata": {},
- "source": [
- "- Finally, let's evaluate the model"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 21,
- "id": "bawWGijA0iF3",
- "metadata": {
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 307
- },
- "id": "bawWGijA0iF3",
- "outputId": "4b05b245-ffac-4d36-881b-8306a4da6b75"
- },
- "outputs": [
- {
- "data": {
- "image/png": "iVBORw0KGgoAAAANSUhEUgAAAdwAAAEiCAYAAABTO2OcAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjcuMiwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy8pXeV/AAAACXBIWXMAAA9hAAAPYQGoP6dpAABPd0lEQVR4nO3dd3hUVfrA8e9Mkpn03ikBJECAEEJdRBAlUlRWsMCyqEFRFw0iIoqsCog/DXYsLAoq6FpiAxcVqVIU6b2E0AKhpICQSurM+f1xk0mGACaQzKS8n+e5T+aWufc9Icw759xzz9EppRRCCCGEqFV6ewcghBBCNAaScIUQQggbkIQrhBBC2IAkXCGEEMIGJOEKIYQQNiAJVwghhLABSbhCCCGEDUjCFUIIIWxAEq4QQghhA5JwhWiE+vXrx4QJE+wdhhCNiiRcIa7C6NGj0el0lZZBgwbZOzQhRB3laO8AhKivBg0axPz58622GY1GO0UjhKjrpIYrxFUyGo0EBwdbLT4+PgCsWbMGg8HAb7/9Zjn+tddeIzAwkPT0dACWLl3KDTfcgLe3N35+ftx+++0cOXLEcvyxY8fQ6XR888039OnTBxcXF7p3787BgwfZsmUL3bp1w93dncGDB3PmzBnL+0aPHs3QoUN58cUXCQgIwNPTk7Fjx1JUVHTZshQWFjJp0iSaNGmCm5sbPXv2ZM2aNZb9x48fZ8iQIfj4+ODm5kaHDh1YsmTJZc/3n//8h/DwcJydnQkKCuLuu++27DObzcTHx9OyZUtcXFyIioriu+++s3r/3r17GTx4MO7u7gQFBXHfffdx9uxZy/5+/foxfvx4nnnmGXx9fQkODmb69OmXjUeIukASrhC1oOwe6X333UdWVhY7duzghRde4KOPPiIoKAiAvLw8Jk6cyNatW1m1ahV6vZ5hw4ZhNputzjVt2jSef/55tm/fjqOjI//85z955plneOedd/jtt984fPgwU6dOtXrPqlWrSExMZM2aNXz11VcsXLiQF1988bLxjhs3jg0bNpCQkMDu3bu55557GDRoEIcOHQIgLi6OwsJC1q1bx549e3j11Vdxd3e/5Lm2bt3K+PHjmTFjBklJSSxdupS+ffta9sfHx/PZZ5/xwQcfsG/fPp588knuvfde1q5dC0BmZiY333wz0dHRbN26laVLl5Kens7w4cOtrvPpp5/i5ubGpk2beO2115gxYwYrVqyo4r+QEHaghBDVFhsbqxwcHJSbm5vV8vLLL1uOKSwsVJ07d1bDhw9X7du3Vw8//PAVz3nmzBkFqD179iillEpOTlaA+uijjyzHfPXVVwpQq1atsmyLj49Xbdu2tYrN19dX5eXlWbbNmTNHubu7K5PJpJRS6sYbb1RPPPGEUkqp48ePKwcHB3Xq1CmrePr376+mTJmilFIqMjJSTZ8+vUq/m++//155enqq7OzsSvsKCgqUq6ur+uOPP6y2jxkzRo0cOVIppdRLL72kBgwYYLX/xIkTClBJSUmW+G+44QarY7p3764mT55cpRiFsAe5hyvEVbrpppuYM2eO1TZfX1/La4PBwBdffEGnTp0ICwvj7bfftjr20KFDTJ06lU2bNnH27FlLzTYlJYWOHTtajuvUqZPldVntODIy0mpbRkaG1bmjoqJwdXW1rPfq1Yvc3FxOnDhBWFiY1bF79uzBZDLRpk0bq+2FhYX4+fkBMH78eB599FGWL19OTEwMd911l1VcFd1yyy2EhYXRqlUrBg0axKBBgxg2bBiurq4cPnyYCxcucMstt1i9p6ioiOjoaAB27drF6tWrL1mDPnLkiCXOi68fEhJS6fcgRF0iCVeIq+Tm5kbr1q2veMwff/wBwLlz5zh37hxubm6WfUOGDCEsLIx58+YRGhqK2WymY8eOle61Ojk5WV7rdLpLbru4Gbo6cnNzcXBwYNu2bTg4OFjtK0t6Dz30EAMHDuTnn39m+fLlxMfH8+abb/L4449XOp+Hhwfbt29nzZo1LF++nKlTpzJ9+nS2bNlCbm4uAD///DNNmjSxel9Zh7Pc3FyGDBnCq6++WuncISEhltcVfwdw7b8HIWqbJFwhasmRI0d48sknmTdvHl9//TWxsbGsXLkSvV7Pn3/+SVJSEvPmzaNPnz4A/P777zV27V27dpGfn4+LiwsAGzduxN3dnWbNmlU6Njo6GpPJREZGhiWWS2nWrBljx45l7NixTJkyhXnz5l0y4QI4OjoSExNDTEwM06ZNw9vbm19//ZVbbrkFo9FISkoKN9544yXf26VLF77//ntatGiBo6N8RImGQ/6ahbhKhYWFpKWlWW1zdHTE398fk8nEvffey8CBA3nggQcYNGgQkZGRvPnmmzz99NP4+Pjg5+fH3LlzCQkJISUlhWeffbbGYisqKmLMmDE8//zzHDt2jGnTpjFu3Dj0+sr9JNu0acOoUaO4//77efPNN4mOjubMmTOsWrWKTp06cdtttzFhwgQGDx5MmzZtOH/+PKtXryYiIuKS1/7pp584evQoffv2xcfHhyVLlmA2m2nbti0eHh5MmjSJJ598ErPZzA033EBWVhbr16/H09OT2NhY4uLimDdvHiNHjrT0Qj58+DAJCQl89NFHlWrhQtQXknCFuEpLly61auIEaNu2LQcOHODll1/m+PHj/PTTT4DWFDp37lxGjhzJgAEDiIqKIiEhgfHjx9OxY0fatm3Lu+++S79+/Woktv79+xMeHk7fvn0pLCxk5MiRV3xsZv78+fzf//0fTz31FKdOncLf35+//e1v3H777QCYTCbi4uI4efIknp6eDBo0qNI96TLe3t4sXLiQ6dOnU1BQQHh4OF999RUdOnQA4KWXXiIgIID4+HiOHj2Kt7c3Xbp04d///jcAoaGhrF+/nsmTJzNgwAAKCwsJCwtj0KBBl/zCIER9oVNKKXsHIYSoOaNHjyYzM5MffvjB3qEIISqQr4tCCCGEDUjCFUIIIWxAmpSFEEIIG5AarhBCCGEDknCFEEIIG5CEK4QQQtiAJNxSs2fPpkWLFjg7O9OzZ082b95s75CqZN26dQwZMoTQ0FB0Ol2lR0GUUkydOpWQkBBcXFyIiYmxzABT5ty5c4waNQpPT0+8vb0ZM2aMZQi+Mrt376ZPnz44OzvTrFkzXnvttdou2mXFx8fTvXt3PDw8CAwMZOjQoSQlJVkdU1BQQFxcHH5+fri7u3PXXXdZpsUrk5KSwm233YarqyuBgYE8/fTTlJSUWB2zZs0aunTpgtFopHXr1ixYsKC2i3dJc+bMoVOnTnh6euLp6UmvXr345ZdfLPsbWnkvNnPmTHQ6HRMmTLBsa4hlnj59Ojqdzmpp166dZX9DLDPAqVOnuPfee/Hz88PFxYXIyEi2bt1q2d9gPsfsOXNCXZGQkKAMBoP65JNP1L59+9TDDz+svL29VXp6ur1D+0tLlixRzz33nFq4cKEC1KJFi6z2z5w5U3l5eakffvhB7dq1S/39739XLVu2VPn5+ZZjBg0apKKiotTGjRvVb7/9plq3bm2
ZuUUppbKyslRQUJAaNWqU2rt3r/rqq6+Ui4uL+vDDD21VTCsDBw5U8+fPV3v37lU7d+5Ut956q2revLnKzc21HDN27FjVrFkztWrVKrV161b1t7/9TV1//fWW/SUlJapjx44qJiZG7dixQy1ZskT5+/tbZsdRSqmjR48qV1dXNXHiRLV//3713nvvKQcHB7V06VKbllcppRYvXqx+/vlndfDgQZWUlKT+/e9/KycnJ7V3794GWd6KNm/erFq0aKE6depkmeFIqYZZ5mnTpqkOHTqo1NRUy3LmzBnL/oZY5nPnzqmwsDA1evRotWnTJnX06FG1bNkydfjwYcsxDeVzTBKuUqpHjx4qLi7Osm4ymVRoaKiKj4+3Y1TVd3HCNZvNKjg4WL3++uuWbZmZmcpoNKqvvvpKKaXU/v37FaC2bNliOeaXX35ROp3OMl3bf/7zH+Xj46MKCwstx0yePNlqSjh7ysjIUIBau3atUkoro5OTk/r2228txyQmJipAbdiwQSmlfVHR6/UqLS3NcsycOXOUp6enpZzPPPOM6tChg9W1RowYoQYOHFjbRaoSHx8f9dFHHzXo8ubk5Kjw8HC1YsUKqykFG2qZp02bpqKioi65r6GWefLkyZWmWqyoIX2ONfom5aKiIrZt20ZMTIxlm16vJyYmhg0bNtgxsmuXnJxMWlqaVdm8vLzo2bOnpWwbNmzA29ubbt26WY6JiYlBr9ezadMmyzF9+/bFYDBYjhk4cCBJSUmcP3/eRqW5vKysLKB8arxt27ZRXFxsVe527drRvHlzq3JHRkZaprsDrUzZ2dns27fPckzFc5QdY++/C5PJREJCAnl5efTq1atBlzcuLo7bbrutUlwNucyHDh0iNDSUVq1aMWrUKFJSUoCGW+bFixfTrVs37rnnHgIDA4mOjmbevHmW/Q3pc6zRJ9yzZ89iMpms/kBBm2P04oHp65uy+K9UtrS0NAIDA632Ozo64uvra3XMpc5R8Rr2YjabmTBhAr1797bMIZuWlobBYMDb29vq2IvL/Vdlutwx2dnZ5Ofn10ZxrmjPnj24u7tjNBoZO3YsixYton379g22vAkJCWzfvp34+PhK+xpqmXv27MmCBQtYunQpc+bMITk5mT59+pCTk9Ngy3z06FHmzJlDeHg4y5Yt49FHH2X8+PF8+umnVnE3hM8xmbxA1GtxcXHs3bu3Rqe2q6vatm3Lzp07ycrK4rvvviM2Npa1a9faO6xaceLECZ544glWrFiBs7OzvcOxmcGDB1ted+rUiZ49exIWFsY333xjmWqxoTGbzXTr1o1XXnkF0KaL3Lt3Lx988AGxsbF2jq5mNfoarr+/Pw4ODpV6+qWnpxMcHGynqGpGWfxXKltwcDAZGRlW+0tKSjh37pzVMZc6R8Vr2MO4ceP46aefWL16NU2bNrVsDw4OpqioiMzMTKvjLy73X5Xpcsd4enra5cPPYDDQunVrunbtSnx8PFFRUbzzzjsNsrzbtm0jIyODLl264OjoiKOjI2vXruXdd9/F0dGRoKCgBlfmS/H29qZNmzYcPny4Qf47gzaTVvv27a22RUREWJrSG9LnWKNPuAaDga5du7Jq1SrLNrPZzKpVq+jVq5cdI7t2LVu2JDg42Kps2dnZbNq0yVK2Xr16kZmZybZt2yzH/Prrr5jNZnr27Gk5Zt26dRQXF1uOWbFiBW3btsXHx8dGpSmnlGLcuHEsWrSIX3/9lZYtW1rt79q1K05OTlblTkpKIiUlxarce/bssfpPumLFCjw9PS3/+Xv16mV1jrJj6srfhdlsprCwsEGWt3///uzZs4edO3dalm7dujFq1CjL64ZW5kvJzc3lyJEjhISENMh/Z4DevXtXeqzv4MGDhIWFAQ3sc8xm3bPqsISEBGU0GtWCBQvU/v371SOPPKK8vb2tevrVVTk5OWrHjh1qx44dClBvvfWW2rFjhzp+/LhSSutO7+3trf73v/+p3bt3qzvuuOOS3emjo6PVpk2b1O+//67Cw8OtutNnZmaqoKAgdd9996m9e/eqhIQE5erqarfHgh599FHl5eWl1qxZY/X4xIULFyzHjB07VjVv3lz9+uuvauvWrapXr16qV69elv1lj08MGDBA7dy5Uy1dulQFBARc8vGJp59+WiUmJqrZs2fb7fGJZ599Vq1du1YlJyer3bt3q2effVbpdDq1fPnyBlneS6nYS1mphlnmp556Sq1Zs0YlJyer9evXq5iYGOXv768yMjKUUg2zzJs3b1aOjo7q5ZdfVocOHVJffPGFcnV1VZ9//rnlmIbyOSYJt9R7772nmjdvrgwGg+rRo4fauHGjvUOqktWrVyug0hIbG6uU0rrUv/DCCyooKEgZjUbVv39/lZSUZHWOP//8U40cOVK5u7srT09P9cADD6icnByrY3bt2qVuuOEGZTQaVZMmTdTMmTNtVcRKLlVeQM2fP99yTH5+vnrssceUj4+PcnV1VcOGDVOpqalW5zl27JgaPHiwcnFxUf7+/uqpp55SxcXFVsesXr1ade7cWRkMBtWqVSura9jSgw8+qMLCwpTBYFABAQGqf//+lmSrVMMr76VcnHAbYplHjBihQkJClMFgUE2aNFEjRoyweh61IZZZKaV+/PFH1bFjR2U0GlW7du3U3LlzrfY3lM8xmS1ICCGEsIFGfw9XCCGEsAVJuEIIIYQNSMIVQgghbEASrhBCCGEDknCFEEIIG5CEK4QQQtiAJNxShYWFTJ8+ncLCQnuHYjNS5sZBytw4SJnrPnkOt1R2djZeXl5kZWXh6elp73BsQsosZW6opMxS5rpIarhCCCGEDUjCFUIIIWygXs+HW1JSwo4dOwgKCkKvv7bvDjk5OQCcOnWK7OzsmgivzpMyS5kbKimzlNmWzGYz6enpREdH4+h4+bRar+/hbtmyhR49etg7DCGEEILNmzfTvXv3y+6v1zXcoKAgQCtkSEiInaMRQgjRGKWmptKjRw9LTrqcep1wy5qRQ0JCaNq0qZ2jEUII0Zj91a1N6TQlhBBC2IAkXCGEEMIGJOEKIYQQNlCv7+EKIcSVmEwmiouL7R2GqOecnJxwcHC45vNIwgWUUuxPzeZAag63dQrB2enaf7FCCPtRSpGWlkZmZqa9QxENhLe3N8HBweh0uqs+hyTcUvd+tInzF4ppE+RBZFMve4cjhLgGZck2MDAQV1fXa/qQFI2bUooLFy6QkZEBcE2PoErCBXQ6He2CPdlw9E8SU7Ml4QpRj5lMJkuy9fPzs3c4ogFwcXEBICMjg8DAwKtuXpZOU6UiQrSZJhLTGseQaEI0VGX3bF1dXe0ciWhIyv6erqVPgCTcUhEhHgAkpkrCFaIhkGZkUZNq4u9JEm4pSw03NYd6PLy0EEKIOkoSbqnWge446HVk5ReTmlVg73CEEOKatWjRglmzZlX5+DVr1qDT6Wq9d/eCBQvw9vau1WvURZJwSzk7OXBdgBsAB+Q+rhDChnQ63RWX6dOnX9V5t2zZwiOPPFLl46+//npSU1Px8pKOo7VBeilXEBHiycH0XBJTc7
i53ZVnfRBCiJqSmppqef31118zdepUkpKSLNvc3d0tr5VSmEymK867WiYgIKBacRgMBoKDg6v1HlF1UsOtoOw+7n7pOCWEsKHg4GDL4uXlhU6ns6wfOHAADw8PfvnlF7p27YrRaOT333/nyJEj3HHHHQQFBeHu7k737t1ZuXKl1XkvblLW6XR89NFHDBs2DFdXV8LDw1m8eLFl/8VNymVNv8uWLSMiIgJ3d3cGDRpk9QWhpKSE8ePH4+3tjZ+fH5MnTyY2NpahQ4dW63cwZ84crrvuOgwGA23btuW///2vZZ9SiunTp9O8eXOMRiOhoaGMHz/esv8///kP4eHhODs7ExQUxN13312ta9uKJNwKyjtOScIVoiFRSnGhqMTmS012wHz22WeZOXMmiYmJdOrUidzcXG699VZWrVrFjh07GDRoEEOGDCElJeWK53nxxRcZPnw4u3fv5tZbb2XUqFGcO3fussdfuHCBN954g//+97+sW7eOlJQUJk2aZNn/6quv8sUXXzB//nzWr19PdnY2P/zwQ7XKtmjRIp544gmeeuop9u7dy7/+9S8eeOABVq9eDcD333/P22+/zYcffsihQ4f44YcfiIyMBGDr1q2MHz+eGTNmkJSUxNKlS+nbt2+1rm8r0qRcQUSw9mjQsbN55BeZcDHIEI9CNAT5xSbaT11m8+vunzEQV0PNfMzOmDGDW265xbLu6+tLVFSUZf2ll15i0aJFLF68mHHjxl32PKNHj2bkyJEAvPLKK7z77rts3ryZQYMGXfL44uJiPvjgA6677joAxo0bx4wZMyz733vvPaZMmcKwYcMAeP/991myZEm1yvbGG28wevRoHnvsMQAmTpzIxo0beeONN7jppptISUkhODiYmJgYnJycaN68OT169AAgJSUFNzc3br/9djw8PAgLCyM6Orpa17cVqeFWEOBhxM/NgFnBwfQce4cjhBAW3bp1s1rPzc1l0qRJRERE4O3tjbu7O4mJiX9Zw+3UqZPltZubG56enpZhCy/F1dXVkmxBG9qw7PisrCzS09MtyQ/AwcGBrl27VqtsiYmJ9O7d22pb7969SUxMBOCee+4hPz+fVq1a8fDDD7No0SJKSkoAuOWWWwgLC6NVq1bcd999fPHFF1y4cKFa17cVqeFWoNPpiAjx5PfDZ0lMzSaqmbe9QxJC1AAXJwf2zxhol+vWFDc3N6v1SZMmsWLFCt544w1at26Ni4sLd999N0VFRVc8j5OTk9W6TqfDbDZX63hbj1XQrFkzkpKSWLlyJStWrOCxxx7j9ddfZ+3atXh4eLB9+3bWrFnD8uXLmTp1KtOnT2fLli117tEjqeFeREacEqLh0el0uBocbb7U5mhX69evZ/To0QwbNozIyEiCg4M5duxYrV3vUry8vAgKCmLLli2WbSaTie3bt1frPBEREaxfv95q2/r162nfvr1l3cXFhSFDhvDuu++yZs0aNmzYwJ49ewBwdHQkJiaG1157jd27d3Ps2DF+/fXXayhZ7ZAa7kXaBZePOCWEEHVVeHg4CxcuZMiQIeh0Ol544YUr1lRry+OPP058fDytW7emXbt2vPfee5w/f75aXzaefvpphg8fTnR0NDExMfz4448sXLjQ0ut6wYIFmEwmevbsiaurK59//jkuLi6EhYXx008/cfToUfr27YuPjw9LlizBbDbTtm3b2iryVZOEe5GKkxgopWQ8ViFEnfTWW2/x4IMPcv311+Pv78/kyZPJzrZ9y9zkyZNJS0vj/vvvx8HBgUceeYSBAwdWa0adoUOH8s477/DGG2/wxBNP0LJlS+bPn0+/fv0AbS7amTNnMnHiREwmE5GRkfz444/4+fnh7e3NwoULmT59OgUFBYSHh/PVV1/RoUOHWirx1dOpejxw8MmTJ2nWrBknTpygadOmNXLOohIzHaYtpdik+H3yTTT1kRlHhKhPCgoKSE5OpmXLljg7O9s7nEbHbDYTERHB8OHDeemll+wdTo250t9VVXOR1HDLpO+Hk1swtOzDdQHuHEjLITE1RxKuEEJcwfHjx1m+fDk33ngjhYWFvP/++yQnJ/PPf/7T3qHVOdJpqsyKqfDjeDi8ivYyAIYQQlSJXq9nwYIFdO/end69e7Nnzx5WrlxJRESEvUOrc6SGWyY0Gg6vgNM7aRdyM+yQhCuEEH+lWbNmlXoYi0uTGm6Z0NKRSU7vsHScOpAmPZWFEELUDEm4ZcoS7plEIvy1iv+xP/O4UFRix6CEEEI0FJJwy3iGgHswKDP+OQcJ8DCilNRyhRBC1AxJuBVVaFZuFywjTgkhhKg5knArqpBwy3oqH5ARp4QQQtQASbgVXaLjlNRwhRBC1ARJuBWFdtZ+nj1Iez/tV3MgLQezud4OxiWEaET69evHhAkTLOstWrRg1qxZV3yPTqer9oTxtXmeK5k+fTqdO3eu1WvUJkm4FbkHgmdTQNGq5DAGBz25hSWcPJ9v78iEEA3YkCFDLjsB/G+//YZOp2P37t3VPu+WLVt45JFHrjU8K5dLeqmpqQwePLhGr9XQSMK9WGkt1zFtF60D3QFtIgMhhKgtY8aMYcWKFZw8ebLSvvnz59OtWzerieOrKiAgAFdX2wxPGxwcjNFotMm16itJuBeT+7hCCBu7/fbbCQgIYMGCBVbbc3Nz+fbbbxkzZgx//vknI0eOpEmTJri6uhIZGclXX311xfNe3KR86NAh+vbti7OzM+3bt2fFihWV3jN58mTatGmDq6srrVq14oUXXqC4uBjQpsl78cUX2bVrFzqdDp1OZ4n54iblPXv2cPPNN+Pi4oKfnx+PPPIIubm5lv2jR49m6NChvPHGG4SEhODn50dcXJzlWlVhNpuZMWMGTZs2xWg00rlzZ5YuXWrZX1RUxLhx4wgJCcHZ2ZmwsDDi4+MBUEoxffp0mjdvjtFoJDQ0lPHjx1f52ldDhna8WNNuEBwJvq2IcJJHg4RoUIryqv8eByM4lH5UmkrAVAg6PTi5XPm8BrcqX8LR0ZH777+fBQsW8Nxzz1mmBf32228xmUyMHDmS3NxcunbtyuTJk/H09OTnn3/mvvvu47rrrqNHjx5/eQ2z2cydd95JUFAQmzZtIisry+p+bxkPDw8WLFhAaGgoe/bs4eGHH8bDw4NnnnmGESNGsHfvXpYuXWqZq9bLy6vSOfLy8hg4cCC9evViy5YtZGRk8NBDDzFu3DirLxWrV68mJCSE1atXc/jwYUaMGEHnzp15+OGHq/R7e+edd3jzzTf58MMPiY6O5pNPPuHvf/87+/btIzw8nHfffZfFixfzzTff0Lx5c06cOMGJEycA+P7773n77bdJSEigQ4cOpKWlsWvXripd92pJwr1Yq34w9ncA2h8+C8hk9EI0GK+EVv899yyADsO01wd+hG9HQ9gN8MDP5cfMioQLf1q/b3pWtS7z4IMP8vrrr7N27VrLPLDz58/nrrvuwsvLCy8vLyZNmmQ5/vHHH2fZsmV88803VUq4K1eu5MCBAyxbtozQUO338Morr1S67/r8889bXrdo0YJJkyaRkJDAM
888g4uLC+7u7jg6OhIcHHzZa3355ZcUFBTw2Wef4eamffF4//33GTJkCK+++ipBQUEA+Pj48P777+Pg4EC7du247bbbWLVqVZUT7htvvMHkyZP5xz/+AcCrr77K6tWrmTVrFrNnzyYlJYXw8HBuuOEGdDodYWFhlvempKQQHBxMTEwMTk5ONG/evEq/x2shTcpX0K60STnl3AVyCqrezCGEENXVrl07rr/+ej755BMADh8+zG+//caYMWMAMJlMvPTSS0RGRuLr64u7uzvLli0jJSWlSudPTEykWbNmlmQL0KtXr0rHff311/Tu3Zvg4GDc3d15/vnnq3yNiteKioqyJFuA3r17YzabSUpKsmzr0KGD1UT1ISEhZGRkVOka2dnZnD59mt69e1tt7927N4mJiYDWbL1z507atm3L+PHjWb58ueW4e+65h/z8fFq1asXDDz/MokWLKCmp3aF8pYZ7OSVF+JJNkKeR9OxCDqbn0DXM195RCSGuxb9PV/89DhU6ArUbop1Dd1FdZcKea4ur1JgxY3j88ceZPXs28+fP57rrruPGG28E4PXXX+edd95h1qxZREZG4ubmxoQJEygqKqqRawNs2LCBUaNG8eKLLzJw4EC8vLxISEjgzTffrLFrVOTk5GS1rtPpMJvNNXb+Ll26kJyczC+//MLKlSsZPnw4MTExfPfddzRr1oykpCRWrlzJihUreOyxxywtDBfHVVOkhnspO7+E+Cbw81OWjlP7pVlZiPrP4Fb9xaFCvcTBUdtW8f7t5c57FYYPH45er+fLL7/ks88+48EHH7Tcz12/fj133HEH9957L1FRUbRq1YqDBw9W+dwRERGcOHGC1NRUy7aNGzdaHfPHH38QFhbGc889R7du3QgPD+f48ePWRTUYMJlMf3mtXbt2kZdXfm97/fr16PV62rZtW+WYr8TT05PQ0NBKUwOuX7+e9u3bWx03YsQI5s2bx9dff83333/PuXPnAHBxcWHIkCG8++67rFmzhg0bNrBnT818eboUqeFeimcTMBXBuSNEtPBkTdIZ6TglhKh17u7ujBgxgilTppCdnc3o0aMt+8LDw/nuu+/4448/8PHx4a233iI9Pd0quVxJTEwMbdq0ITY2ltdff53s7Gyee+45q2PCw8NJSUkhISGB7t278/PPP7No0SKrY1q0aEFycjI7d+6kadOmeHh4VHocaNSoUUybNo3Y2FimT5/OmTNnePzxx7nvvvss929rwtNPP820adO47rrr6Ny5M/Pnz2fnzp188cUXALz11luEhIQQHR2NXq/n22+/JTg4GG9vbxYsWIDJZKJnz564urry+eef4+LiYnWft6ZJDfdSmvWA8TvhX7/Jo0FCCJsaM2YM58+fZ+DAgVb3W59//nm6dOnCwIED6devH8HBwQwdOrTK59Xr9SxatIj8/Hx69OjBQw89xMsvv2x1zN///neefPJJxo0bR+fOnfnjjz944YUXrI656667GDRoEDfddBMBAQGXfDTJ1dWVZcuWce7cObp3787dd99N//79ef/996v3y/gL48ePZ+LEiTz11FNERkaydOlSFi9eTHh4OKD1uH7ttdfo1q0b3bt359ixYyxZsgS9Xo+3tzfz5s2jd+/edOrUiZUrV/Ljjz/i5+dXozFWpFNK1dtxC0+ePEmzZs04ceIETZs2rZVrHErP4Za31+FqcGDv9IHo9bpauY4QomYUFBSQnJxMy5YtcXZ2tnc4ooG40t9VVXOR1HD/Qkt/NwyOei4UmUg5d8He4QghhKinJOFezqlt8M39OC59mrZBMgCGEEKIayMJ93KKC2D//yBpKREhknCFEEJcG0m4lxPSCdBB9kk6+2iDXsijQUIIIa6WJNzLMXqAfxsAop2OAXBAZg0SQghxlSThXknpzEEtC7WHy0+ezydbhngUol6oyRGLhKiJvycZ+OJKQqNhdwLOZ3YT6tWT01kFHEjNoUdLGeJRiLrKYDCg1+s5ffo0AQEBGAwGy2hNQlSXUoqioiLOnDmDXq/HYDBc9bkk4V7JRXPjns4qIDE1WxKuEHWYXq+nZcuWpKamcvr0VYydLMQluLq60rx5c/T6q28YloR7JcGR2iDluWl0bVvAKuQ+rhD1gcFgoHnz5pSUlPzluL9C/BUHBwccHR2vuaXErgk3Pj6ehQsXcuDAAVxcXLj++ut59dVXa2xw62tmcIWACMjYR3en44CP9FQWop7Q6XQ4OTnV2swvQlSXXTtNrV27lri4ODZu3MiKFSsoLi5mwIABVjNM2F1ps/J1JYcASErLxmSut6NhCiGEsBO71nCXLl1qtb5gwQICAwPZtm0bffv2tVNUFwntDDs/xydzH85Of6Og2MyxP/O4LsDd3pEJIYSoR+rUY0FZWVkA+PpeulNSYWEh2dnZliUnxwbNu6FdANCd3kHbQC3JyohTQgghqqvOJFyz2cyECRPo3bs3HTt2vOQx8fHxeHl5WZaqzgN5TYI6gN4RLpzlb/4FAByQ+7hCCCGqqc4k3Li4OPbu3UtCQsJlj5kyZQpZWVmWZf/+/bUfmJMz3PIS3PMpzZtoc1NKDVcIIUR11YnHgsaNG8dPP/3EunXrrjiXoNFoxGg0Wtazs22U+Ho9BkB48jngmCRcIYQQ1WbXGq5SinHjxrFo0SJ+/fVXWrZsac9w/lK70lmDTmcVkHmhyM7RCCGEqE/smnDj4uL4/PPP+fLLL/Hw8CAtLY20tDTy8/PtGVZlpmI48iue2/5DEy9nAA6kyX1cIYQQVWfXhDtnzhyysrLo168fISEhluXrr7+2Z1iVKQVfjoAVU+kToD0jLM3KQgghqsOu93CVqicDSDgaoO1g0DvS1skZDhdLwhVCCFEtdaLTVL0w/DMAgvekwobtJMqjQUIIIaqhzjwWVF+0C/EE4GB6DiUmmW9TCCFE1UjCrQ6zmTB1GjeDjsISbYhHIYQQoiok4VaV2QxvtUM/uxv9/LXmZJk5SAghRFVJwq0qvR68wwDo634SkJ7KQgghqk4SbnWUTtUXqTsKSMIVQghRdZJwq6M04TYrSAJkEgMhhBBVJwm3OkoTrvu5/egxk5ZdwPk8GeJRCCHEX5OEWx3+4eDkhq44j97e5wFpVhZCCFE1knCrQ+8AIVEA3OypdZzaLwlXCCFEFUjCra7QzgB0djgGyCQGQgghqkYSbnWV3sdtUXgQkCZlIYQQVSMJt7pKE65XViIOmDiUnkuxDPEohBDiL0jCrS7f68Dggd5UQJQxjSKTmaNnZIhHIYQQVyYJt7r0est93P5epwBpVhZCCPHXJOFejdKE294lE4DENEm4QgghruyqEu6JEyc4efKkZX3z5s1MmDCBuXPn1lhgdVrvCfBsCqe7TASQuXGFEEL8patKuP/85z9ZvXo1AGlpadxyyy1s3ryZ5557jhkzZtRogHWSmz84exFROjeuNCkLIYT4K1eVcPfu3UuPHj0A+Oabb+jYsSN//PEHX3zxBQsWLKjJ+Oq0tkEe6HRwJqeQs7mF9g5HCCFEHXZVCbe4uBij0QjAypUr+fvf
/w5Au3btSE1Nrbno6rJtC3D78g4e8tgEyEQGQgghruyqEm6HDh344IMP+O2331ixYgWDBg0C4PTp0/j5+dVogHXW+WNw/Hf6Oh8GpFlZCCHElTlezZteffVVhg0bxuuvv05sbCxRUdr4wosXL7Y0NTd4HYaBXzhHTwVDRoEkXCGEEFd0VQm3X79+nD17luzsbHx8fCzbH3nkEVxdXWssuDotJApCogg1psPvW2USAyGEEFd0VU3K+fn5FBYWWpLt8ePHmTVrFklJSQQGBtZogHVdu2APAI6cyaWoRIZ4FEIIcWlXlXDvuOMOPvvsMwAyMzPp2bMnb775JkOHDmXOnDk1GmCdlnGApgc/I8Y5kWKT4siZXHtHJIQQoo66qoS7fft2+vTpA8B3331HUFAQx48f57PPPuPdd9+t0QDrtL3foVs6mVEuWk9luY8rhBDicq4q4V64cAEPD60pdfny5dx5553o9Xr+9re/cfz48RoNsE4L7QJAhDoCSMIVQghxeVeVcFu3bs0PP/zAiRMnWLZsGQMGDAAgIyMDT0/PGg2wTiudqi+wIBlnCmWIRyGEEJd1VQl36tSpTJo0iRYtWtCjRw969eoFaLXd6OjoGg2wTvMMAfdg9JhprzvOAZnEQAghxGVcVcK9++67SUlJYevWrSxbtsyyvX///rz99ts1Fly9UFrLjXI4ytncIjJyCuwckBBCiLroqqfnCw4OJjo6mtOnT1tmDurRowft2rWrseDqhdKE28s5BZCZg4QQQlzaVSVcs9nMjBkz8PLyIiwsjLCwMLy9vXnppZcwmxvZs6ilCTdSnwxIxykhhBCXdlUjTT333HN8/PHHzJw5k969ewPw+++/M336dAoKCnj55ZdrNMg6rXQy+uCiFNzI54AkXCGEEJdwVQn3008/5aOPPrLMEgTQqVMnmjRpwmOPPda4Eq57IHg2RZd9kg66YySmNq6RtoQQQlTNVTUpnzt37pL3atu1a8e5c+euOah6p7SWG6k/ypEzuRSWmOwbjxBCiDrnqhJuVFQU77//fqXt77//Pp06dbrmoOqd0vu4XZ2OUWJWHEqXIR6FEEJYu6om5ddee43bbruNlStXWp7B3bBhAydOnGDJkiU1GmC9UJpwox2OAVrHqY5NvOwYkBBCiLrmqmq4N954IwcPHmTYsGFkZmaSmZnJnXfeyb59+/jvf/9b0zHWfaHR0LQ7x32vR4eZA2nyaJAQQghrV1XDBQgNDa3UOWrXrl18/PHHzJ0795oDq1dcfeGhlaRsOYE6sVseDRJCCFHJVQ98ISqLCNHGkU5MzUYpZedohBBC1CWScGtQuI+O6/SnOX+hmPTsQnuHI4QQog6RhFtTTmzG+Y0wPje+DkCiTGQghBCigmrdw73zzjuvuD8zM/NaYqnf/NuAMuPiaMKFAhJTs7mprQyCIYQQQlOthOvldeVHXby8vLj//vuvKaB6y8UbJh3iy63Z5C9NkkkMhBBCWKlWwp0/f35txdEwuAcSEaK9lJ7KQgghKpJ7uDWsfWlP5aNncikoliEehRBCaCTh1qTMEwT+eD8/O7+AWSFDPAohhLCQhFuTXLzRHVpOB47gT5Y0KwshhLCwa8Jdt24dQ4YMITQ0FJ1Oxw8//GDPcK6d0UPrrQx01B9lvyRcIYQQpeyacPPy8oiKimL27Nn2DKNmlU5k0EmXLDVcIYQQFlc9lnJNGDx4MIMHD7ZnCDUvNBp2JxCpP8qnaTkopdDpdPaOSgghhJ3JPdyaVlbD1R8lK7+Y1KwCOwckhBCiLrBrDbe6CgsLKSwsH6M4J6cODi4RHAk6PUFkEsh5ElOzCfV2sXdUQggh7Kxe1XDj4+Px8vKyLO3bt7d3SJUZXCEgAtBquXIfVwghBNSzhDtlyhSysrIsy/79++0d0qWVNitH6o+SKJPRCyGEoJ4lXKPRiKenp2Xx8PCwd0iXFtoZgE46qeEKIYTQ2PUebm5uLocPH7asJycns3PnTnx9fWnevLkdI7tGoV0AiNQnc+xsLvlFJlwMDnYOSgghhD3ZtYa7detWoqOjiY7WmmAnTpxIdHQ0U6dOtWdY1y6oA+gd8ddlE6z+JCldmpWFEKKxs2sNt1+/fiil7BlC7XByhsD2kLZbu4+bmk3nZt72jkoIIYQd1avHguqVm/7N11tPsmGPJ8FyH1cIIRq9etVpql5pOxiniMFk4y6T0QshhJCEW5siSufGTUzLbphN50IIIapMEm4tap31B087fYtbQTonz+fbOxwhhBB2JAm3Fjmte5U4h0V01R/igAyAIYQQjZok3NoUcTubvAaTqnxlAAwhhGjkpJdyberzFLvVUbYvSSRIEq4QQjRqUsOtZZaOU5JwhRCiUZOEW8siAp1prztG5rkM8gpL7B2OEEIIO5GEW8v8Fg5nifHf9NXtkiEehRCiEZOEW9uCOgIyN64QQjR2knBrm2Vu3GRJuEII0YhJwq1tpQm3oy6ZpNOZ9o1FCCGE3UjCrW3+4ZgdXXHTFVKYfhCzWYZ4FEKIxkgSbm3TO0BoFACtiw/KEI9CCNFIScK1AX1oF0C7j7tf7uMKIUSjJAnXFkrv40pPZSGEaLwk4dpCacLtoDvGwdTzdg5GCCGEPUjCtQXfVpQ4ueOsKyb/9H57RyOEEMIOJOHagl6PCtY6TgXk7CenoNjOAQkhhLA1Sbg24tRM6zjVSXeUJJkbVwghGh1JuLYSGo0JPe66fOk4JYQQjZDMh2srbW/j3R6reWfdKULXHKFjEy+im/vYOyohhBA2IjVcW3FyZmTvdrTwc+V0VgHDP9zAJ78no5SMPCWEEI2BJFwbCvZy5sd/deaFFol4m84z46f9PPr5drKlE5UQQjR40qRsS0m/4LH2Ncakbadp77mM26hj6b40mp5czNPGHzD6NAH3IPAILv0ZAh5B4B6s/XT2Bp3O3qUQQghxFSTh2lJIZwhqDwY3Bl7fjW87+xH3xXZcck5iLDwG2ceu/H5H5/KEHNAW/v5e+b5T28HRCL6twMmlFgshhBDiakjCtSXPELhjtmW1sx/8PP4GZiSUcM+h9gTpznNzUzO3t9JjuJABOWmQmw45qVCQBSUFkHlcW4ouWJ970b/g7EG4fzG0utHGBRNCCPFXJOHambergTcfuIW5667jtWVJ/JSimJ3vxpx7u9ImyKP8wOL80uSbpi0OTtYncvXTFo+Q8m0Hl4NnKAR3tE1hhBBCXJZO1eNusidPnqRZs2acOHGCpk2b2juca7bl2DnGfbmd9OxCnJ30/N/QSO7uepXlSt8HH8UAOrjzQ4gYUqOxCiGE0FQ1F0kv5TqkewtflozvQ59wfwqKzUz6dheTv9tNQbGp+ifzCIGm3aE4D76+F9a+BvX3u5UQQtR7knDrGD93Iwse6MHEW9qg08HXW08wdPZ6jp7Jrd6JXH3h3oXQ41/a+uqX4dvRUJRX4zELIYT4a5Jw6yAHvY7x/cP5fExP/N0NHEjLYch7v/PT7tPVPJEj3PoaDHkH9E6w/wf4ZBBknqiVuIUQQlyeJNw6rHdrf5aM70OPlr7
kFZkY9+UOpv1vL4Ul1Wxi7joaYheDqz+k7YZ5N0HKplqJWVwk9wyUFJWv55+HP4/YLx4hhN1Iwq3jAj2d+fKhnjzW7zoAPt1wnHs+2MCJcxf+4p0XCbseHlkNQZGQdwYW3AY7Pq+FiIXFVyPhjdZw/Pfybds+hfe6wGdDIfFHMJXYLTwhhG1Jwq0HHB30PDOoHfNHd8fb1YndJ7O47d3fWLE/vXon8m4ODy6FiL+DuRj+FwdL/y0f+teqIAv2LYKfnwKzuXy7q6/2M3V3+bbM44AOjq7WOrPNioQ1MyE71aYhCyFsTx4LqmdOZeYz7svt7EjJBOCRvq14emBbnByq8d3JbIZ1r8GaeG39upvhn99UfrZXXJpScOYAHFquPet8YiOYS7+0PPQrNO2qvT5/XBv9yyPY+v3nj8G2BbD9v3DhrLZN5wDtboPuY6DljTKEpxD1SFVzkSTceqioxMzMXw7wyfpkALqG+fD+P6MJ8armkI77/weLxkK3B2Hgy7UQaQNSdAGO/QYHl8GhFZCVYr3fvw2ED9B+l37XVe2cJYVas/KWjyHlj/Ltfq2180SNLK8lCyHqLEm4jcDSvak8/e1ucgpL8HF1YtY/ormxTUD1TnL2EPi01Ho0A5hNoHeo+WDro/PHtBrsoeVasi0pKN/nYISWfSB8IITfAr4tr+1a6fth68ew62soytG2OTpDx7u0L0MuMneyEHWVJNxG4vifeTz2xXb2nc5Gp4NxN7VmQkwbHPRX0SRZUgSf3wltBkKvcY2vWdNsBn2FpvkP+mi9ust4NdNqseEDoGVfMLjWfAyFubDnG9jyCaTvAc8m8MTu8i9ESjW+fxch6riq5iIZS7meC/Nz4/tHr+eln/bzxaYU3vv1MFuPnWfGHR3wdTPgZnTE6KhHV5UP6X2LtJpc6i6tZuUZWvsFqAuK8rSm9eN/wBO7wOiubW93Gxg9tRpsm4EQ0K72k53RXWtO7voAnNyi9SgvS7amYvjgBu0e701TpNYrRD0jNdwG5H87TzFl4R4uFFk/p+uo1+FqcMDd6IibZXHAzeBo2eZqdMDdyYFuGd9R7BVGdrObcDNq+yu+1706CbwuyjqlfanIPw9/e1TbphS821lrQh6ZAG0Hl2+vS+VMWgpfjQC3AHhyPzga7B1R7SjIhoxE7d8jqIO21KV/ByEuIjXcRuiOzk3oEOrF5O93s+90FgXF2iMqJWZFdkEJ2QVVefwnqvTnDgB66BLJx8ge1cpyhENpAg9wNzKwYzB3RjchvOLMRnVJ7hktwSav037+eVjbbvSE7g9rtUedDga/rs22FBpd/t669iEfPkAbrvPCufJkazbDJwO1+8ldR2uPftUXphLt3yNjn3YPO32f9jrzog5pkffAXR/ZJ0YhapDUcBswk1mRV1RCXmHZYiKvsITcwpLS7abyfUUV9pWue1w4yVtZT2KkkOfMY/m+6G+XvVZkEy/u7NKEIVGh+LsbbVjKi+Sfh2Pry5Nsxn7r/To9hHTW7sH2nQTGOvpFoaoOr4TP79Je6/RaUm4doz2K5BGi/XQPqjuPfO39Xuvlnb4XzhwEU+Glj/MIBe9m2jPMMdPhb2O17VmntLmfG2s/A1EnSacpce0KsuD7h7ReuoD5hqfI6z2ZC8WK3MISDqTmsGjHKdYkZVBi1v6MHPQ6+rUJ4M4uTekfEYizkw16POeegT/e1RJs6i7goj/poEitBtiyrzbilrNX7cdkK6ZiSFqiPVqUvPYyB+nAzb88CXuGwu2zypNV2fPCbgE110PdVAzLn9dqrf/8Ggxu2vafnoStn5QfZ3CHwAgIbA9BHSGovfa67HGo4gLtGeey++rbPoUfx2szYT20svw8x9ZDSKf6/wVK1EuScEXNMJtg1Yuw/h1tve1t2vy6FT7Y/swt5Mddp1m04xS7TmZZtns4O3J7pxDu7NKUbmE+NXPftzgfTmwCZdYG7ADti8GrLbRtoD0T27JvaYK9Adz8rv269cHZQ7DzC+1nTpq25KaVD8pRxi0Qnj5Uvr7gdq1F4K6PIfJubdupbbDji/JaskcIeARpP118AQXnjpY2A5c2B7sFwJBZ5ed9vbXW6evhX6FJ6WAgh1dp5w7qoCVW7zDrnuF/JeskHFiiJeSyWPMz4bVWWg0/rFdpLf8WCGgrNWBhE5JwRc3a9TUsflxrAgxsDyO/Ap8WlQ47nJHDwu2n+GHHKU5nlT+32szXhWHRTbkzugkt/N2qft2SIm0YyrIa0o7PtSEpm/aAh1aUH7f2Ne154pZ9Ko/s1JiZzXDhT8hJLU3CqaBMWk/oMp8M0r7ExP4ILW7Qtm39RKuNXoreSasJV3wuGbTHpp7cW76+5SNwdNGaf938a7ZcFaXuhm/uh/PJF8XTHMJjKjzGVf53dz6viD2nsthzSvuC2NLfjVYBbrTwc6u5VhmltN993lkIbFcz5xR1kiRcUfNOboWEUVqtycUX7pqn1SaLC6AkX7tXWJrszLlnOfLHd2w+nssrJzqSV9pz+iGHn+nmeZ5wH0eaeYBBFWm11uJ87RzFBdavS/JhwMvQ6zEthvPHtU5C1/WHO96XGkxNKRtPu+wRpFPbtFG1sk+X15ZzUsuHogQtmQa2K62tlvYmbnWj7WMv8+cR7fbHoRVw7Her+8NmvYFTXl3Y5NCFhbnt+SPTB6j8t6PTQaiXC60C3LguwN2SiFsFuBPi7oS+4Lz2O8g7q9XeyxJq3hlte9PucP3j2skKsmFmM+31v1PLn9te9hzs/kbrpOfmr9XWXf1KF//Sn74V9vuBUzVHkRM2Jb2URc1r2k2bcSjhn3B6R3lnnTL9p0KfpwDQZ58k/I/JhHuEcufze1m+P43vt5/i1mOb6JJ/GPKrcd1T28pf+4TBxERJtDXN4aKPgiZdy5uBKyopgtx0rdXBO6xujUrmdx257g+zL3A4ic3SKDi0luCMdXQp3EpzztDs/EaasZG7gf2GMOI83qFjU2+MehPOpzeTn3WG7wu6cSozn1OZ+bQ/Op+2DjvxIxtnXTaQC7q/qJ+YTeUJ1+ihjRbm6AwFmeUJNycN8jK05UwVyxY+AEZ9W77+4xPg5Ap9ny6/3515AkxF2mApTs5V/70Jm6kTCXf27Nm8/vrrpKWlERUVxXvvvUePHj3sHZa4FM9QeOAX+HkS7E4AvaP2geLkotV4yrj6ar1l3QJwMThwR+cm3NG5Cdm//4ttR46wI7WAEzmKfAwUKAOORhc6twqhV9tmtA71R+fkop3TyVW7N1iRJFv7cTRovYfrgPwiE/tTs9h9Mos9J7PYfSqLI2dyKW+zCwPuA+7leu/z3Om+n56m7TTJ2k54eAdWj7pJO6wwF+L7A/DvZ5I5mg1Hz+TSbvNXRJ1JrHTd88qdc8qDs3hxTnlwTnmS7+SN3j0Qc2EbzOuO0NLfnVYBbjSffAInp4uelx40E3o/odWOL/ypPeZ14WyF9T8hr8Jrc7F1ZzBTiTb5BVi+4AJax8HNc7XXbgFa4vVqqi2eTcCrCXiWrnsE160vS2WUgqJcrXUABegq/H/XaV8kKg
74klfa4uLiW94XoOiC9sVDp6v8/orbdHqts6AN2b1J+euvv+b+++/ngw8+oGfPnsyaNYtvv/2WpKQkAgMDr/heaVK2s2scGGL/6WwW7TjJDztPcyanvPmvVYAbd0Y3YWh0E5r6VH/4xBKTmcKSssVEYXGF1yVmCoorb1MKjI56DI56jI56jE4O2k9HPUZHhwrbtfWyffV2AJB6prDExIHUHHafymL3iUz2nMriUEYuJnPlj69QL2cim3rRqak3kU28iGzihY9bhaRXmKs9Plb2xUEpbRhPoweM+G/5/eYTmyHrBLj6U+Lsy6kSD47mOnHkz0KOnMnj6Jlcks/mkZFzmUeb0HrtN/d1JczPFV83A94uBrxdnfB2dcLLxQkf19J1FwNerk54Ojta/00pBYU5Wse3sppsSSFs+lBL0jdPLW+d+OVZLRGXVKH5SOegdYBrdxvc+lr59oPLwT1A6zFeU4+SlRRqTfwFWdDxzvLtG2ZD8m+Qf0779yhbLu7kV1HEEBhRYR7v6aVPHEw6BO6l+eLnSbBl3l/H5d8Gxm2pfnkuod7cw+3Zsyfdu3fn/fffB8BsNtOsWTMef/xxnn322Su+VxJuw1BiMrP+yJ8s3H6SZfvSLAN2APRo6UsTb5dLJk5tvUISLU2yl/oQri0Gh8qJ2FCWsB3Ktmv7nBx06HQ6yv7LlUVZ9j+wfN16P5b9ynK8usS2MnqdDgcHHU56HQ56PY76i9YddDjqtaXiukPZNgd96bE6HB20Yy5ed9DpMCvt6qriz9LYFGj7VWmUCsxKi1d7XaEsKMxmKp2rsMRMYmoOe05lkpSWQ7Gp8r+rv7uRqKZepQnWi45NvAj0sG1zak5BMcln8zh6Jo+jZ7VEfPRMHsln88gvNv31CSpw0OvwcnHC28UJL1ftp4+rofR1ebL2djXg7eJkSdYezo7o9TrtF5p/XuvNnXUSsk9Z/8w6BTmny5Na1EgY9oH2uqQQ/q80aT19pPyLx+Z5cGp7aQ25idb5rGKCvNTS4U647Q3t/fnntacIAJ4/Uz5oy8JHYPfXl/5F6By0Gqjlj19pryOGwPDPyo+ThFt1RUVFuLq68t133zF06FDL9tjYWDIzM/nf//5ndXxhYSGFheXfJk+dOkX79u0l4TYgOQXFLN2bxsLtp9iY/CfX+td5pYToXKEmC9q0h4UlptKfl68h199uhvWbr5uByCZaYtV+ehPkaayzrQxmsyI9p4CjZ/I4ce4CmfnFZF4oJiu/iPN5xWTmF5Wua9urm5wr0uvA08XJMi+2pRG19IWudItOB3plwo8sgtQZ8nUuJOvDAPBWmbxe9DLeZDPC+QPQ6dCh4/8KZ9LXtLFa8fzq0Jvpxqe1FWXi44KnyNG585zxWXJ1bigFXU27CVHpZOvcyVYe2k+dO9l4UIhBu77Ougw6nVY2na60REppLcSATq9HBzhgQo8ZPTp0pffc9TrQU/ZaodcpQI+HpzefjO5e7d/3xepFp6mzZ89iMpkICgqy2h4UFMSBAwcqHR8fH8+LL75oq/CEHXg4O3FPt2bc060ZpzLzWbk/naISs1VNsWICdbZq4nWodJz+amZNugKlFCVmVVrDLq9VF12m5l0xgReVmCudryxZXPwBab1NZ7XPcshl3mtWWquByazFajIrii9aLzEpSsxmbd2kbb943VS6XmIqfY/ZXHouhVkpy4eeTqfVqss+DNHp0F/0wagv3Vn5+LIP1YvOBej1OloHutOpiVaDbeLtUmeT66Xo9TpCvFyqPE91QbHJknwzLxSRmV9M1oVizpe+LkvWmReKOX+hmKzS7ReKTJgVZF4ornJsp3BFu8cNZT0YT2HkVmaUbip/5Otj/Y1s1IURovuTEN2fOFNEJh5kKjcycSdTuZOFG5lKe52JO2eVJ+fzLljOcQuvaC8ulF/vJOFA+CWiMwMFl9he80Lzsm1ynTJ1otNUVU2ZMoWJEyda1stquKJhauLtQuz1LewdhhWdToeTgw4nBz3uxnr130fUcc5ODjg7ORDkWb0m8cISLVFnXSimxKyueLvhcvsq3s64+FYH9Kp0bHDpApfqxlG+4eJ9Fx9a8QvUxfvKbi2Uvy6NXFnfjim/DVFeLhRW2yrGbiljab8NW7LrJ4a/vz8ODg6kp6dbbU9PTyc4uPLgBUajEaOxvFdZdrZtv50IIURdY3R0INDDweb3rkX12Ta9X8RgMNC1a1dWrVpl2WY2m1m1ahW9evWyY2RCCCFEzbJ7m9jEiROJjY2lW7du9OjRg1mzZpGXl8cDDzxg79CEEEKIGmP3hDtixAjOnDnD1KlTSUtLo3PnzixdurRSRyohhBCiPrN7wgUYN24c48aNs3cYQgghRK2x6z1cIYQQorGoEzXcq2U2a881pqam2jkSIYQQjVVZDirLSZdTrxNu2eNEMtGBEEIIe0tPT6d58+aX3W/3sZSvRUlJCTt27CAoKAi9/tpax3Nycmjfvj379+/Hw8Pjr9/QwDTm8jfmskPjLr+UvXGWHWq2/GazmfT0dKKjo3F0vHw9tl4n3JqUnZ2Nl5cXWVlZeHp62jscm2vM5W/MZYfGXX4pe+MsO9in/NJpSgghhLABSbhCCCGEDUjCLWU0Gpk2bZrVWM2NSWMuf2MuOzTu8kvZG2fZwT7ll3u4QgghhA1IDVcIIYSwAUm4QgghhA1IwhVCCCFsQBJuqdmzZ9OiRQucnZ3p2bMnmzdvtndINrFu3TqGDBlCaGgoOp2OH374wd4h2Ux8fDzdu3fHw8ODwMBAhg4dSlJSkr3Dsok5c+bQqVMnPD098fT0pFevXvzyyy/2DssuZs6ciU6nY8KECfYOxSamT5+OTqezWtq1a2fvsGzm1KlT3Hvvvfj5+eHi4kJkZCRbt261ybUl4QJff/01EydOZNq0aWzfvp2oqCgGDhxIRkaGvUOrdXl5eURFRTF79mx7h2Jza9euJS4ujo0bN7JixQqKi4sZMGAAeXl59g6t1jVt2pSZM2eybds2tm7dys0338wdd9zBvn377B2aTW3ZsoUPP/yQTp062TsUm+rQoQOpqamW5ffff7d3SDZx/vx5evfujZOTE7/88gv79+/nzTffxMfHxzYBKKF69Oih4uLiLOsmk0mFhoaq+Ph4O0Zle4BatGiRvcOwm4yMDAWotWvX2jsUu/Dx8VEfffSRvcOwmZycHBUeHq5WrFihbrzxRvXEE0/YOySbmDZtmoqKirJ3GHYxefJkdcMNN9jt+o2+hltUVMS2bduIiYmxbNPr9cTExLBhwwY7RiZsLSsrCwBfX187R2JbJpOJhIQE8vLy6NWrl73DsZm4uDhuu+02q//7jcWhQ4cIDQ2lVatWjBo1ipSUFHuHZBOLFy+mW7du3HPPPQQGBhIdHc28efNsdv1Gn3DPnj2LyWQiKCjIantQUBBpaWl2ikrYmtlsZsKECfTu3ZuOHTvaOxyb2LNnD+7u7hiNRsaOHcuiRYto3769vcOyiYSEBLZv3058fLy9Q7G5nj17smDBApYuXcqcOXNITk6mT58+5OTk2Du0Wnf06
FHmzJlDeHg4y5Yt49FHH2X8+PF8+umnNrl+vZ6eT4iaEhcXx969exvNvSyAtm3bsnPnTrKysvjuu++IjY1l7dq1DT7pnjhxgieeeIIVK1bg7Oxs73BsbvDgwZbXnTp1omfPnoSFhfHNN98wZswYO0ZW+8xmM926deOVV14BIDo6mr179/LBBx8QGxtb69dv9DVcf39/HBwcLHPrlklPTyc4ONhOUQlbGjduHD/99BOrV6+madOm9g7HZgwGA61bt6Zr167Ex8cTFRXFO++8Y++wat22bdvIyMigS5cuODo64ujoyNq1a3n33XdxdHTEZDLZO0Sb8vb2pk2bNhw+fNjeodS6kJCQSl8oIyIibNak3ugTrsFgoGvXrqxatcqyzWw2s2rVqkZ1P6sxUkoxbtw4Fi1axK+//krLli3tHZJdmc1mCgsL7R1Grevfvz979uxh586dlqVbt26MGjWKnTt34uDgYO8QbSo3N5cjR44QEhJi71BqXe/evSs9+nfw4EHCwsJscn1pUgYmTpxIbGws3bp1o0ePHsyaNYu8vDweeOABe4dW63Jzc62+2SYnJ7Nz5058fX1p3ry5HSOrfXFxcXz55Zf873//w8PDw3LP3svLCxcXFztHV7umTJnC4MGDad68OTk5OXz55ZesWbOGZcuW2Tu0Wufh4VHpPr2bmxt+fn6N4v79pEmTGDJkCGFhYZw+fZpp06bh4ODAyJEj7R1arXvyySe5/vrreeWVVxg+fDibN29m7ty5zJ071zYB2K1/dB3z3nvvqebNmyuDwaB69OihNm7caO+QbGL16tUKqLTExsbaO7Rad6lyA2r+/Pn2Dq3WPfjggyosLEwZDAYVEBCg+vfvr5YvX27vsOymMT0WNGLECBUSEqIMBoNq0qSJGjFihDp8+LC9w7KZH3/8UXXs2FEZjUbVrl07NXfuXJtdW2YLEkIIIWyg0d/DFUIIIWxBEq4QQghhA5JwhRBCCBuQhCuEEELYgCRcIYQQwgYk4QohhBA2IAlXCCGEsAFJuEIIIYQNSMIVQlSJTqfjhx9+sHcYQtRbknCFqAdGjx6NTqertAwaNMjeoQkhqkgmLxCinhg0aBDz58+32mY0Gu0UjRCiuqSGK0Q9YTQaCQ4Otlp8fHwArbl3zpw5DB48GBcXF1q1asV3331n9f49e/Zw88034+Ligp+fH4888gi5ublWx3zyySd06NABo9FISEgI48aNs9p/9uxZhg0bhqurK+Hh4SxevNiy7/z584waNYqAgABcXFwIDw+v9AVBiMZMEq4QDcQLL7zAXXfdxa5duxg1ahT/+Mc/SExMBCAvL4+BAwfi4+PDli1b+Pbbb1m5cqVVQp0zZw5xcXE88sgj7Nmzh8WLF9O6dWura7z44osMHz6c3bt3c+uttzJq1CjOnTtnuf7+/fv55ZdfSExMZM6cOfj7+9vuFyBEXWezeYmEEFctNjZWOTg4KDc3N6vl5ZdfVkppUw2OHTvW6j09e/ZUjz76qFJKqblz5yofHx+Vm5tr2f/zzz8rvV6v0tLSlFJKhYaGqueee+6yMQDq+eeft6zn5uYqQP3yyy9KKaWGDBmiHnjggZopsBANkNzDFaKeuOmmm5gzZ47VNl9fX8vrXr16We3r1asXO3fuBCAxMZGoqCjc3Nws+3v37o3ZbCYpKQmdTsfp06fp37//FWPo1KmT5bWbmxuenp5kZGQA8Oijj3LXXXexfft2BgwYwNChQ7n++uuvqqxCNESScIWoJ9zc3Co18dYUFxeXKh3n5ORkta7T6TCbzQAMHjyY48ePs2TJElasWEH//v2Ji4vjjTfeqPF4haiP5B6uEA3Exo0bK61HREQAEBERwa5du8jLy7PsX79+PXq9nrZt2+Lh4UGLFi1YtWrVNcUQEBBAbGwsn3/+ObNmzWLu3LnXdD4hGhKp4QpRTxQWFpKWlma1zdHR0dIx6dtvv6Vbt27ccMMNfPHFF2zevJmPP/4YgFGjRjFt2jRiY2OZPn06Z86c4fHHH+e+++4jKCgIgOnTpzN27FgCAwMZPHgwOTk5rF+/nscff7xK8U2dOpWuXbvSoUMHCgsL+emnnywJXwghCVeIemPp0qWEhIRYbWvbti0HDhwAtB7ECQkJPPbYY4SEhPDVV1/Rvn17AFxdXVm2bBlPPPEE3bt3x9XVlbvuuou33nrLcq7Y2FgKCgp4++23mTRpEv7+/tx9991Vjs9gMDBlyhSOHTuGi4sLffr0ISEhoQZKLkTDoFNKKXsHIYS4NjqdjkWLFjF06FB7hyKEuAy5hyuEEELYgCRcIYQQwgbkHq4QDYDcGRKi7pMarhBCCGEDknCFEEIIG5CEK4QQQtiAJFwhhBDCBiThCiGEEDYgCVcIIYSwAUm4QgghhA1IwhVCCCFsQBKuEEIIYQP/DwJSReFGugSPAAAAAElFTkSuQmCC",
- "text/plain": [
- ""
+ "cell_type": "markdown",
+ "id": "c024bfa4-1a7a-4751-b5a1-827225a3478b",
+ "metadata": {
+ "id": "c024bfa4-1a7a-4751-b5a1-827225a3478b"
+ },
+ "source": [
+ "\n",
+ "Supplementary code for \"Build a Large Language Model From Scratch\": https://www.manning.com/books/build-a-large-language-model-from-scratch by Sebastian Raschka
\n",
+ "Code repository: https://github.com/rasbt/LLMs-from-scratch\n",
+ ""
]
- },
- "metadata": {},
- "output_type": "display_data"
- }
- ],
- "source": [
- "from previous_chapters import plot_values\n",
- "\n",
- "epochs_tensor = torch.linspace(0, num_epochs, len(train_losses))\n",
- "examples_seen_tensor = torch.linspace(0, examples_seen, len(train_losses))\n",
- "\n",
- "plot_values(epochs_tensor, examples_seen_tensor, train_losses, val_losses, label=\"loss\")"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "aa074723-e3f7-4f7e-a267-855531a037dc",
- "metadata": {},
- "source": [
- "- Note that we previously calculated the accuracy values on 5 batches only via the `eval_iter=5` setting; below, we calculate the accuracies on the full dataset"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 22,
- "id": "1D2awlEq0gZi",
- "metadata": {
- "colab": {
- "base_uri": "https://localhost:8080/"
},
- "id": "1D2awlEq0gZi",
- "outputId": "b482af19-5ebd-45b9-a9f0-99f621203ef9"
- },
- "outputs": [
{
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "Training accuracy: 100.00%\n",
- "Validation accuracy: 96.64%\n",
- "Test accuracy: 98.00%\n"
- ]
+ "cell_type": "markdown",
+ "id": "58b8c870-fb72-490e-8916-d8129bd5d1ff",
+ "metadata": {
+ "id": "58b8c870-fb72-490e-8916-d8129bd5d1ff"
+ },
+ "source": [
+ "# Appendix E: Parameter-efficient Finetuning with LoRA"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "id": "5b7e01c2-1c84-4f2a-bb51-2e0b74abda90",
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "5b7e01c2-1c84-4f2a-bb51-2e0b74abda90",
+ "outputId": "316166b4-027a-4756-e9b4-fe88ae75dd4f"
+ },
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "matplotlib version: 3.7.1\n",
+ "numpy version: 1.25.2\n",
+ "tiktoken version: 0.7.0\n",
+ "torch version: 2.2.1+cu121\n",
+ "tensorflow version: 2.15.0\n",
+ "pandas version: 2.2.2\n"
+ ]
+ }
+ ],
+ "source": [
+ "from importlib.metadata import version\n",
+ "\n",
+ "pkgs = [\"matplotlib\",\n",
+ " \"numpy\",\n",
+ " \"tiktoken\",\n",
+ " \"torch\",\n",
+ " \"tensorflow\", # For OpenAI's pretrained weights\n",
+ " \"pandas\" # Dataset loading\n",
+ " ]\n",
+ "for p in pkgs:\n",
+ " print(f\"{p} version: {version(p)}\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "21532056-0ef4-4c98-82c7-e91f61c6485e",
+ "metadata": {
+ "id": "21532056-0ef4-4c98-82c7-e91f61c6485e"
+ },
+ "source": [
+ "## E.1 Introduction to LoRA"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "66edc999-3d91-4a1c-a157-9d056392e8d8",
+ "metadata": {
+ "id": "66edc999-3d91-4a1c-a157-9d056392e8d8"
+ },
+ "source": [
+ "- No code in this section\n",
+ "- Low-rank adaptation (LoRA) is a machine learning technique that modifies a pretrained model to better suit a specific, often smaller, dataset by adjusting only a small, low-rank subset of the model's parameters\n",
+ "- This approach is important because it allows for efficient finetuning of large models on task-specific data, significantly reducing the computational cost and time required for finetuning"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "5bb75b5d-d59c-4948-821a-1594a5883dc1",
+ "metadata": {
+ "id": "5bb75b5d-d59c-4948-821a-1594a5883dc1"
+ },
+ "source": [
+ "- Suppose we have a large weight matrix $W$ for a given layer\n",
+ "- During backpropagation, we learn a $\\Delta W$ matrix, which contains information on how much we want to update the original weights to minimize the loss function during training\n",
+ "- In regular training and finetuning, the weight update is defined as follows:\n",
+ "\n",
+ "$$W_{\\text{updated}} = W + \\Delta W$$\n",
+ "\n",
+ "- The LoRA method proposed by [Hu et al.](https://arxiv.org/abs/2106.09685) offers a more efficient alternative to computing the weight updates $\\Delta W$ by learning an approximation of it, $\\Delta W \\approx AB$.\n",
+ "- In other words, in LoRA, we have the following, where $A$ and $B$ are two small weight matrices:\n",
+ "\n",
+ "$$W_{\\text{updated}} = W + AB$$\n",
+ "\n",
+ "- The figure below illustrates these formulas for full finetuning and LoRA side by side"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "a8a7419d-cae9-4525-bb44-1641f6ef4f3b",
+ "metadata": {
+ "id": "a8a7419d-cae9-4525-bb44-1641f6ef4f3b"
+ },
+ "source": [
+ "
"
+ ]
+ },
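+ {
+ "cell_type": "markdown",
+ "id": "lora-param-savings-note",
+ "metadata": {},
+ "source": [
+ "- To make the efficiency gain concrete: for a $768 \\times 768$ layer (the embedding dimension of the GPT-2 small model used later), a full $\\Delta W$ has $768 \\cdot 768 = 589{,}824$ entries, whereas a rank-16 update $AB$ needs only $768 \\cdot 16 + 16 \\cdot 768 = 24{,}576$ parameters, roughly 4% as many"
+ ]
+ },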
+ {
+ "cell_type": "markdown",
+ "id": "4edd43c9-8ec5-48e6-b3fc-5fb3c16037cc",
+ "metadata": {
+ "id": "4edd43c9-8ec5-48e6-b3fc-5fb3c16037cc"
+ },
+ "source": [
+ "- If you paid close attention, the full finetuning and LoRA depictions in the figure above look slightly different from the formulas I have shown earlier\n",
+ "- That's due to the distributive law of matrix multiplication: we don't have to add the weights with the updated weights but can keep them separate\n",
+ "- For instance, if $x$ is the input data, then we can write the following for regular finetuning:\n",
+ "\n",
+ "$$x (W+\\Delta W) = x W + x \\Delta W$$\n",
+ "\n",
+ "- Similarly, we can write the following for LoRA:\n",
+ "\n",
+ "$$x (W+A B) = x W + x A B$$\n",
+ "\n",
+ "- The fact that we can keep the LoRA weight matrices separate makes LoRA especially attractive\n",
+ "- In practice, this means that we don't have to modify the weights of the pretrained model at all, as we can apply the LoRA matrices on the fly\n",
+ "- After setting up the dataset and loading the model, we will implement LoRA in the code to make these concepts less abstract"
+ ]
+ },
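+ {
+ "cell_type": "markdown",
+ "id": "distributive-check-note",
+ "metadata": {},
+ "source": [
+ "- As a minimal numerical sanity check of the distributive law above (a small sketch with arbitrary dimensions, not required for the rest of the appendix), we can confirm that both formulations produce the same result"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "distributive-check-code",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Verify that x (W + AB) equals x W + x A B for random matrices\n",
+ "import torch\n",
+ "\n",
+ "torch.manual_seed(123)\n",
+ "x = torch.randn(1, 6) # input\n",
+ "W = torch.randn(6, 4) # pretrained weight matrix\n",
+ "A = torch.randn(6, 2) # LoRA matrix A (rank 2)\n",
+ "B = torch.randn(2, 4) # LoRA matrix B\n",
+ "\n",
+ "print(torch.allclose(x @ (W + A @ B), x @ W + x @ A @ B))"
+ ]
+ },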
+ {
+ "cell_type": "markdown",
+ "id": "8c7017a2-32aa-4002-a2f3-12aac293ccdf",
+ "metadata": {
+ "id": "8c7017a2-32aa-4002-a2f3-12aac293ccdf"
+ },
+ "source": [
+ "## E.2 Preparing the dataset"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "669c64df-4431-4d27-834d-2bb38a01fc02",
+ "metadata": {
+ "id": "669c64df-4431-4d27-834d-2bb38a01fc02"
+ },
+ "source": [
+ "- This section repeats the code from chapter 6 to load and prepare the dataset\n",
+ "- Instead of repeating this code, one could open and run the chapter 6 notebook and then insert the LoRA code from section E.4 there\n",
+ "- (The LoRA code was originally the last section of chapter 6 but was moved to the appendix due to the length of chapter 6)\n",
+ "- In a similar fashion, we could also apply LoRA to the models in chapter 7 for instruction finetuning"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "id": "def7c09b-af9c-4216-90ce-5e67aed1065c",
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "def7c09b-af9c-4216-90ce-5e67aed1065c",
+ "outputId": "a67a7afe-b401-4463-c731-87025d20f72d"
+ },
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "sms_spam_collection/SMSSpamCollection.tsv already exists. Skipping download and extraction.\n"
+ ]
+ }
+ ],
+ "source": [
+ "from pathlib import Path\n",
+ "import pandas as pd\n",
+ "from previous_chapters import (\n",
+ " download_and_unzip_spam_data,\n",
+ " create_balanced_dataset,\n",
+ " random_split\n",
+ ")\n",
+ "\n",
+ "\n",
+ "url = \"https://archive.ics.uci.edu/static/public/228/sms+spam+collection.zip\"\n",
+ "zip_path = \"sms_spam_collection.zip\"\n",
+ "extracted_path = \"sms_spam_collection\"\n",
+ "data_file_path = Path(extracted_path) / \"SMSSpamCollection.tsv\"\n",
+ "\n",
+ "download_and_unzip_spam_data(url, zip_path, extracted_path, data_file_path)\n",
+ "\n",
+ "df = pd.read_csv(data_file_path, sep=\"\\t\", header=None, names=[\"Label\", \"Text\"])\n",
+ "balanced_df = create_balanced_dataset(df)\n",
+ "balanced_df[\"Label\"] = balanced_df[\"Label\"].map({\"ham\": 0, \"spam\": 1})\n",
+ "\n",
+ "train_df, validation_df, test_df = random_split(balanced_df, 0.7, 0.1)\n",
+ "train_df.to_csv(\"train.csv\", index=None)\n",
+ "validation_df.to_csv(\"validation.csv\", index=None)\n",
+ "test_df.to_csv(\"test.csv\", index=None)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "id": "74c3c463-8763-4cc0-9320-41c7eaad8ab7",
+ "metadata": {
+ "id": "74c3c463-8763-4cc0-9320-41c7eaad8ab7"
+ },
+ "outputs": [],
+ "source": [
+ "import torch\n",
+ "from torch.utils.data import Dataset\n",
+ "import tiktoken\n",
+ "from previous_chapters import SpamDataset\n",
+ "\n",
+ "\n",
+ "tokenizer = tiktoken.get_encoding(\"gpt2\")\n",
+ "train_dataset = SpamDataset(\"train.csv\", max_length=None, tokenizer=tokenizer)\n",
+ "val_dataset = SpamDataset(\"validation.csv\", max_length=train_dataset.max_length, tokenizer=tokenizer)\n",
+ "test_dataset = SpamDataset(\"test.csv\", max_length=train_dataset.max_length, tokenizer=tokenizer)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "id": "8681adc0-6f02-4e75-b01a-a6ab75d05542",
+ "metadata": {
+ "id": "8681adc0-6f02-4e75-b01a-a6ab75d05542"
+ },
+ "outputs": [],
+ "source": [
+ "from torch.utils.data import DataLoader\n",
+ "\n",
+ "num_workers = 0\n",
+ "batch_size = 8\n",
+ "\n",
+ "torch.manual_seed(123)\n",
+ "\n",
+ "train_loader = DataLoader(\n",
+ " dataset=train_dataset,\n",
+ " batch_size=batch_size,\n",
+ " shuffle=True,\n",
+ " num_workers=num_workers,\n",
+ " drop_last=True,\n",
+ ")\n",
+ "\n",
+ "val_loader = DataLoader(\n",
+ " dataset=val_dataset,\n",
+ " batch_size=batch_size,\n",
+ " num_workers=num_workers,\n",
+ " drop_last=False,\n",
+ ")\n",
+ "\n",
+ "test_loader = DataLoader(\n",
+ " dataset=test_dataset,\n",
+ " batch_size=batch_size,\n",
+ " num_workers=num_workers,\n",
+ " drop_last=False,\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "ab7335db-e0bb-4e27-80c5-eea11e593a57",
+ "metadata": {
+ "id": "ab7335db-e0bb-4e27-80c5-eea11e593a57"
+ },
+ "source": [
+ "- As a verification step, we iterate through the data loaders and check that the batches contain 8 training examples each, where each training example consists of 120 tokens"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "id": "4dee6882-4c3a-4964-af15-fa31f86ad047",
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "4dee6882-4c3a-4964-af15-fa31f86ad047",
+ "outputId": "2ae34de1-dd01-4f99-d2c8-ba4dca400754"
+ },
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "Train loader:\n",
+ "Input batch dimensions: torch.Size([8, 120])\n",
+ "Label batch dimensions torch.Size([8])\n"
+ ]
+ }
+ ],
+ "source": [
+ "print(\"Train loader:\")\n",
+ "for input_batch, target_batch in train_loader:\n",
+ " pass\n",
+ "\n",
+ "print(\"Input batch dimensions:\", input_batch.shape)\n",
+ "print(\"Label batch dimensions\", target_batch.shape)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "5cdd7947-7039-49bf-8a5e-c0a2f4281ca1",
+ "metadata": {
+ "id": "5cdd7947-7039-49bf-8a5e-c0a2f4281ca1"
+ },
+ "source": [
+ "- Lastly, let's print the total number of batches in each dataset"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "id": "IZfw-TYD2zTj",
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "IZfw-TYD2zTj",
+ "outputId": "4d19ed61-cf7a-4ec4-b822-c847dd1c5d77"
+ },
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "130 training batches\n",
+ "19 validation batches\n",
+ "38 test batches\n"
+ ]
+ }
+ ],
+ "source": [
+ "print(f\"{len(train_loader)} training batches\")\n",
+ "print(f\"{len(val_loader)} validation batches\")\n",
+ "print(f\"{len(test_loader)} test batches\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "dec9aa4a-ffd2-4d9f-a835-cce1059fe604",
+ "metadata": {
+ "id": "dec9aa4a-ffd2-4d9f-a835-cce1059fe604"
+ },
+ "source": [
+ "## E.3 Initializing the model"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "f36ebdaf-810e-46a2-9ad9-e017a04051b1",
+ "metadata": {
+ "id": "f36ebdaf-810e-46a2-9ad9-e017a04051b1"
+ },
+ "source": [
+ "- This section repeats the code from chapter 6 to load and prepare the model"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 7,
+ "id": "02b3a506-3879-4258-82b5-93a5b6bafa74",
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "02b3a506-3879-4258-82b5-93a5b6bafa74",
+ "outputId": "b8c9b125-bb52-45d3-8071-fa5054dbf5a9"
+ },
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stderr",
+ "text": [
+ "2024-05-20 00:06:21.369837: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\n",
+ "2024-05-20 00:06:21.369891: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\n",
+ "2024-05-20 00:06:21.371329: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\n",
+ "2024-05-20 00:06:21.380176: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\n",
+ "To enable the following instructions: AVX2 AVX512F FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.\n",
+ "2024-05-20 00:06:22.621156: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT\n"
+ ]
+ },
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "File already exists and is up-to-date: gpt2/124M/checkpoint\n",
+ "File already exists and is up-to-date: gpt2/124M/encoder.json\n",
+ "File already exists and is up-to-date: gpt2/124M/hparams.json\n",
+ "File already exists and is up-to-date: gpt2/124M/model.ckpt.data-00000-of-00001\n",
+ "File already exists and is up-to-date: gpt2/124M/model.ckpt.index\n",
+ "File already exists and is up-to-date: gpt2/124M/model.ckpt.meta\n",
+ "File already exists and is up-to-date: gpt2/124M/vocab.bpe\n"
+ ]
+ }
+ ],
+ "source": [
+ "from gpt_download import download_and_load_gpt2\n",
+ "from previous_chapters import GPTModel, load_weights_into_gpt\n",
+ "\n",
+ "\n",
+ "CHOOSE_MODEL = \"gpt2-small (124M)\"\n",
+ "INPUT_PROMPT = \"Every effort moves\"\n",
+ "\n",
+ "BASE_CONFIG = {\n",
+ " \"vocab_size\": 50257, # Vocabulary size\n",
+ " \"context_length\": 1024, # Context length\n",
+ " \"drop_rate\": 0.0, # Dropout rate\n",
+ " \"qkv_bias\": True # Query-key-value bias\n",
+ "}\n",
+ "\n",
+ "model_configs = {\n",
+ " \"gpt2-small (124M)\": {\"emb_dim\": 768, \"n_layers\": 12, \"n_heads\": 12},\n",
+ " \"gpt2-medium (355M)\": {\"emb_dim\": 1024, \"n_layers\": 24, \"n_heads\": 16},\n",
+ " \"gpt2-large (774M)\": {\"emb_dim\": 1280, \"n_layers\": 36, \"n_heads\": 20},\n",
+ " \"gpt2-xl (1558M)\": {\"emb_dim\": 1600, \"n_layers\": 48, \"n_heads\": 25},\n",
+ "}\n",
+ "\n",
+ "BASE_CONFIG.update(model_configs[CHOOSE_MODEL])\n",
+ "\n",
+ "model_size = CHOOSE_MODEL.split(\" \")[-1].lstrip(\"(\").rstrip(\")\")\n",
+ "settings, params = download_and_load_gpt2(model_size=model_size, models_dir=\"gpt2\")\n",
+ "\n",
+ "model = GPTModel(BASE_CONFIG)\n",
+ "load_weights_into_gpt(model, params)\n",
+ "model.eval();"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "252614cd-7ce6-4908-83e6-3761f519904e",
+ "metadata": {
+ "id": "252614cd-7ce6-4908-83e6-3761f519904e"
+ },
+ "source": [
+ "- To ensure that the model was loaded corrected, let's double-check that it generates coherent text"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 8,
+ "id": "8b6ce20c-0700-4783-8be0-4cf17c200a7f",
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "8b6ce20c-0700-4783-8be0-4cf17c200a7f",
+ "outputId": "28ccbca5-8de9-41a0-c093-da00fcbaa91c"
+ },
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "Every effort moves you forward.\n",
+ "\n",
+ "The first step is to understand the importance of your work\n"
+ ]
+ }
+ ],
+ "source": [
+ "from previous_chapters import (\n",
+ " generate_text_simple,\n",
+ " text_to_token_ids,\n",
+ " token_ids_to_text\n",
+ ")\n",
+ "\n",
+ "\n",
+ "text_1 = \"Every effort moves you\"\n",
+ "\n",
+ "token_ids = generate_text_simple(\n",
+ " model=model,\n",
+ " idx=text_to_token_ids(text_1, tokenizer),\n",
+ " max_new_tokens=15,\n",
+ " context_size=BASE_CONFIG[\"context_length\"]\n",
+ ")\n",
+ "\n",
+ "print(token_ids_to_text(token_ids, tokenizer))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "8174b31b-1ab5-4115-b01c-245369da5af3",
+ "metadata": {
+ "id": "8174b31b-1ab5-4115-b01c-245369da5af3"
+ },
+ "source": [
+ "- Then, we prepare the model for classification finetuning similar to chapter 6, where we replace the output layer"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 9,
+ "id": "e255ce91-d73a-4854-90a4-95804928eb16",
+ "metadata": {
+ "id": "e255ce91-d73a-4854-90a4-95804928eb16"
+ },
+ "outputs": [],
+ "source": [
+ "torch.manual_seed(123)\n",
+ "\n",
+ "num_classes = 2\n",
+ "model.out_head = torch.nn.Linear(in_features=768, out_features=num_classes)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 10,
+ "id": "02e6f057-1383-4ece-8444-0a88e71ac75d",
+ "metadata": {
+ "id": "02e6f057-1383-4ece-8444-0a88e71ac75d"
+ },
+ "outputs": [],
+ "source": [
+ "device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n",
+ "model.to(device); # no assignment model = model.to(device) necessary for nn.Module classes"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "8e951cd6-5e42-44d2-b21f-895cb61004fe",
+ "metadata": {
+ "id": "8e951cd6-5e42-44d2-b21f-895cb61004fe"
+ },
+ "source": [
+ "- Lastly, let's calculate the initial classification accuracy of the non-finetuned model (we expect this to be around 50%, which means that the model is not able to distinguish between spam and non-spam messages yet reliably)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 11,
+ "id": "fc7dd72c-73a2-4881-ade0-0a9605f1ab8c",
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "fc7dd72c-73a2-4881-ade0-0a9605f1ab8c",
+ "outputId": "74848515-5a49-4125-fecb-9f4bac23f812"
+ },
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "Training accuracy: 46.25%\n",
+ "Validation accuracy: 45.00%\n",
+ "Test accuracy: 48.75%\n"
+ ]
+ }
+ ],
+ "source": [
+ "from previous_chapters import calc_accuracy_loader\n",
+ "\n",
+ "\n",
+ "torch.manual_seed(123)\n",
+ "train_accuracy = calc_accuracy_loader(train_loader, model, device, num_batches=10)\n",
+ "val_accuracy = calc_accuracy_loader(val_loader, model, device, num_batches=10)\n",
+ "test_accuracy = calc_accuracy_loader(test_loader, model, device, num_batches=10)\n",
+ "\n",
+ "print(f\"Training accuracy: {train_accuracy*100:.2f}%\")\n",
+ "print(f\"Validation accuracy: {val_accuracy*100:.2f}%\")\n",
+ "print(f\"Test accuracy: {test_accuracy*100:.2f}%\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "398a1ec9-e2a1-43d6-bf9f-12ee54b46a7b",
+ "metadata": {
+ "id": "398a1ec9-e2a1-43d6-bf9f-12ee54b46a7b"
+ },
+ "source": [
+ "## E.4 Parameter-efficient finetuning with LoRA"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "652a4a82-61ef-4d0a-9858-8988e844f12c",
+ "metadata": {
+ "id": "652a4a82-61ef-4d0a-9858-8988e844f12c"
+ },
+ "source": [
+ "- We begin by initializing a LoRALayer that creates the matrices $A$ and $B$, along with the `alpha` scaling hyperparameter and the `rank` ($r$) hyperparameters\n",
+ "- This layer can accept an input and compute the corresponding output, as illustrated in the figure below\n",
+ "\n",
+ "
\n",
+ "\n",
+ "In code, this LoRA layer depicted in the figure above looks like as follows"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 12,
+ "id": "2ds9ywjMwvIW",
+ "metadata": {
+ "id": "2ds9ywjMwvIW"
+ },
+ "outputs": [],
+ "source": [
+ "import math\n",
+ "\n",
+ "class LoRALayer(torch.nn.Module):\n",
+ " def __init__(self, in_dim, out_dim, rank, alpha):\n",
+ " super().__init__()\n",
+ " self.A = torch.nn.Parameter(torch.empty(in_dim, rank))\n",
+ " torch.nn.init.kaiming_uniform_(self.A, a=math.sqrt(5))\n",
+ " self.B = torch.nn.Parameter(torch.zeros(rank, out_dim))\n",
+ " self.alpha = alpha\n",
+ "\n",
+ " def forward(self, x):\n",
+ " x = self.alpha * (x @ self.A @ self.B)\n",
+ " return x"
+ ]
+ },
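+ {
+ "cell_type": "markdown",
+ "id": "loralayer-demo-note",
+ "metadata": {},
+ "source": [
+ "- As a quick usage sketch (with illustrative dimensions): a fresh `LoRALayer` preserves the feature dimension, and its output is all zeros at initialization because $B$ starts out as a zero matrix"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "loralayer-demo-code",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Illustrative check of the LoRALayer defined above\n",
+ "torch.manual_seed(123)\n",
+ "demo_layer = LoRALayer(in_dim=768, out_dim=768, rank=16, alpha=16)\n",
+ "demo_x = torch.randn(2, 768) # (batch_size, emb_dim)\n",
+ "\n",
+ "print(demo_layer(demo_x).shape) # torch.Size([2, 768])\n",
+ "print(demo_layer(demo_x).abs().sum().item()) # 0.0, because B is zero-initialized"
+ ]
+ },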
+ {
+ "cell_type": "markdown",
+ "id": "ad21faa8-0614-4257-93cd-68952193e14a",
+ "metadata": {
+ "id": "ad21faa8-0614-4257-93cd-68952193e14a"
+ },
+ "source": [
+ "- In the code above, `rank` is a hyperparameter that controls the inner dimension of the matrices $A$ and $B$\n",
+ "- In other words, this parameter controls the number of additional parameters introduced by LoRA and is a key factor in determining the balance between model adaptability and parameter efficiency\n",
+ "- The second hyperparameter, alpha, is a scaling hyperparameter applied to the output of the low-rank adaptation\n",
+ "- It essentially controls the extent to which the adapted layer's output is allowed to influence the original output of the layer being adapted\n",
+ "- This can be seen as a way to regulate the impact of the low-rank adaptation on the layer's output\n",
+ "- So far, the `LoRALayer` class we implemented above allows us to transform the layer inputs $x$\n",
+ "- However, in LoRA, we are usually interested in replacing existing `Linear` layers so that the weight update is applied to the existing pretrained weights, as shown in the figure below\n",
+ "\n",
+ "
"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "3e6d5da0-dfce-4808-b89b-29ff333f563f",
+ "metadata": {
+ "id": "3e6d5da0-dfce-4808-b89b-29ff333f563f"
+ },
+ "source": [
+ "- To incorporate the original `Linear` layer weights as shown in the figure above, we implement a `LinearWithLoRA` layer below that uses the previously implemented LoRALayer and can be used to replace existing `Linear` layers in a neural network, for example, the self-attention module or feed forward modules in an LLM"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 13,
+ "id": "127d3a64-8359-4b21-b056-78d58cc75fe8",
+ "metadata": {
+ "id": "127d3a64-8359-4b21-b056-78d58cc75fe8"
+ },
+ "outputs": [],
+ "source": [
+ "class LinearWithLoRA(torch.nn.Module):\n",
+ " def __init__(self, linear, rank, alpha):\n",
+ " super().__init__()\n",
+ " self.linear = linear\n",
+ " self.lora = LoRALayer(\n",
+ " linear.in_features, linear.out_features, rank, alpha\n",
+ " )\n",
+ "\n",
+ " def forward(self, x):\n",
+ " return self.linear(x) + self.lora(x)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "e1145a90-35ff-462c-820b-15483fa5b051",
+ "metadata": {
+ "id": "e1145a90-35ff-462c-820b-15483fa5b051"
+ },
+ "source": [
+ "- Note that since we initialize the weight matrix $B$ (`self.B` in `LoRALayer`) with zero values in the LoRA layer, the matrix multiplication between $A$ and $B$ results in a matrix consisting of 0's and doesn't affect the original weights (since adding 0 to the original weights does not modify them)"
+ ]
+ },
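+ {
+ "cell_type": "markdown",
+ "id": "linearwithlora-check-note",
+ "metadata": {},
+ "source": [
+ "- A small sketch (with arbitrary toy dimensions) confirms this: a freshly wrapped layer produces exactly the same outputs as the original `Linear` layer"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "linearwithlora-check-code",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# The LoRA branch contributes zero before training, so both layers agree\n",
+ "torch.manual_seed(123)\n",
+ "base_linear = torch.nn.Linear(4, 3)\n",
+ "wrapped_linear = LinearWithLoRA(base_linear, rank=2, alpha=4)\n",
+ "x_demo = torch.randn(1, 4)\n",
+ "\n",
+ "print(torch.allclose(base_linear(x_demo), wrapped_linear(x_demo)))"
+ ]
+ },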
+ {
+ "cell_type": "markdown",
+ "id": "e98a6d36-7bc9-434c-a7f1-533f26aff06d",
+ "metadata": {
+ "id": "e98a6d36-7bc9-434c-a7f1-533f26aff06d"
+ },
+ "source": [
+ "- To try LoRA on the GPT model we defined earlier, we define a `replace_linear_with_lora` function to replace all `Linear` layers in the model with the new `LinearWithLoRA` layers\n",
+ "\n",
+ "
"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 14,
+ "id": "WlQZ8ygqzN_g",
+ "metadata": {
+ "id": "WlQZ8ygqzN_g"
+ },
+ "outputs": [],
+ "source": [
+ "def replace_linear_with_lora(model, rank, alpha):\n",
+ " for name, module in model.named_children():\n",
+ " if isinstance(module, torch.nn.Linear):\n",
+ " # Replace the Linear layer with LinearWithLoRA\n",
+ " setattr(model, name, LinearWithLoRA(module, rank, alpha))\n",
+ " else:\n",
+ " # Recursively apply the same function to child modules\n",
+ " replace_linear_with_lora(module, rank, alpha)"
+ ]
+ },
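+ {
+ "cell_type": "markdown",
+ "id": "replace-lora-demo-note",
+ "metadata": {},
+ "source": [
+ "- Before applying the function to the GPT model, here is a minimal sketch on a small toy network, showing that the replacement also reaches `Linear` layers nested inside child modules"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "replace-lora-demo-code",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Toy network for illustration; both Linear layers get wrapped recursively\n",
+ "toy_model = torch.nn.Sequential(\n",
+ "    torch.nn.Linear(4, 8),\n",
+ "    torch.nn.ReLU(),\n",
+ "    torch.nn.Sequential(torch.nn.Linear(8, 2))\n",
+ ")\n",
+ "replace_linear_with_lora(toy_model, rank=2, alpha=4)\n",
+ "print(toy_model)"
+ ]
+ },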
+ {
+ "cell_type": "markdown",
+ "id": "8c172164-cdde-4489-b7d7-aaed9cc2f5f2",
+ "metadata": {
+ "id": "8c172164-cdde-4489-b7d7-aaed9cc2f5f2"
+ },
+ "source": [
+ "- We then freeze the original model parameter and use the `replace_linear_with_lora` to replace the said `Linear` layers using the code below\n",
+ "- This will replace the `Linear` layers in the LLM with `LinearWithLoRA` layers"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 15,
+ "id": "dbe15350-4da9-4829-9d23-98bbd3d0b1a1",
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "dbe15350-4da9-4829-9d23-98bbd3d0b1a1",
+ "outputId": "fd4c208f-854a-4701-d9d3-9d73af733364"
+ },
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "Total trainable parameters before: 124,441,346\n",
+ "Total trainable parameters after: 0\n"
+ ]
+ }
+ ],
+ "source": [
+ "total_params = sum(p.numel() for p in model.parameters() if p.requires_grad)\n",
+ "print(f\"Total trainable parameters before: {total_params:,}\")\n",
+ "\n",
+ "for param in model.parameters():\n",
+ " param.requires_grad = False\n",
+ "\n",
+ "total_params = sum(p.numel() for p in model.parameters() if p.requires_grad)\n",
+ "print(f\"Total trainable parameters after: {total_params:,}\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 16,
+ "id": "mLk_fPq0yz_u",
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "mLk_fPq0yz_u",
+ "outputId": "0a93b8fc-05d7-4ace-ee47-e2fc6bdd7d75"
+ },
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "Total trainable LoRA parameters: 2,666,528\n"
+ ]
+ }
+ ],
+ "source": [
+ "replace_linear_with_lora(model, rank=16, alpha=16)\n",
+ "\n",
+ "total_params = sum(p.numel() for p in model.parameters() if p.requires_grad)\n",
+ "print(f\"Total trainable LoRA parameters: {total_params:,}\")"
+ ]
+ },
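+ {
+ "cell_type": "markdown",
+ "id": "lora-param-arithmetic",
+ "metadata": {},
+ "source": [
+ "- The 2,666,528 figure can be verified by hand: each `LinearWithLoRA` layer adds $r \\cdot (d_{\\text{in}} + d_{\\text{out}})$ trainable parameters\n",
+ "- With $r=16$, each transformer block contributes $4 \\cdot 16 \\cdot (768 + 768) = 98{,}304$ parameters for the four attention projections plus $16 \\cdot (768 + 3072) + 16 \\cdot (3072 + 768) = 122{,}880$ for the two feed forward layers, i.e., $221{,}184$ per block\n",
+ "- Across 12 blocks plus the classification head, this gives $12 \\cdot 221{,}184 + 16 \\cdot (768 + 2) = 2{,}666{,}528$, matching the printed value"
+ ]
+ },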
+ {
+ "cell_type": "markdown",
+ "id": "b8b6819e-ef7a-4f0d-841a-1b467496bef9",
+ "metadata": {
+ "id": "b8b6819e-ef7a-4f0d-841a-1b467496bef9"
+ },
+ "source": [
+ "- As we can see, we reduced the number of trainable parameters by almost 100x when using LoRA\n",
+ "- Let's now double-check whether the layers have been modified as intended by printing the model architecture"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 17,
+ "id": "1711be61-bb2c-466f-9b5b-24f4aa5ccd9c",
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "1711be61-bb2c-466f-9b5b-24f4aa5ccd9c",
+ "outputId": "acff8eca-3775-45a2-b62d-032a986ef037"
+ },
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "GPTModel(\n",
+ " (tok_emb): Embedding(50257, 768)\n",
+ " (pos_emb): Embedding(1024, 768)\n",
+ " (drop_emb): Dropout(p=0.0, inplace=False)\n",
+ " (trf_blocks): Sequential(\n",
+ " (0): TransformerBlock(\n",
+ " (att): MultiHeadAttention(\n",
+ " (W_query): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " (W_key): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " (W_value): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " (out_proj): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " (dropout): Dropout(p=0.0, inplace=False)\n",
+ " )\n",
+ " (ff): FeedForward(\n",
+ " (layers): Sequential(\n",
+ " (0): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=768, out_features=3072, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " (1): GELU()\n",
+ " (2): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=3072, out_features=768, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " )\n",
+ " )\n",
+ " (norm1): LayerNorm()\n",
+ " (norm2): LayerNorm()\n",
+ " (drop_resid): Dropout(p=0.0, inplace=False)\n",
+ " )\n",
+ " (1): TransformerBlock(\n",
+ " (att): MultiHeadAttention(\n",
+ " (W_query): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " (W_key): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " (W_value): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " (out_proj): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " (dropout): Dropout(p=0.0, inplace=False)\n",
+ " )\n",
+ " (ff): FeedForward(\n",
+ " (layers): Sequential(\n",
+ " (0): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=768, out_features=3072, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " (1): GELU()\n",
+ " (2): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=3072, out_features=768, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " )\n",
+ " )\n",
+ " (norm1): LayerNorm()\n",
+ " (norm2): LayerNorm()\n",
+ " (drop_resid): Dropout(p=0.0, inplace=False)\n",
+ " )\n",
+ " (2): TransformerBlock(\n",
+ " (att): MultiHeadAttention(\n",
+ " (W_query): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " (W_key): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " (W_value): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " (out_proj): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " (dropout): Dropout(p=0.0, inplace=False)\n",
+ " )\n",
+ " (ff): FeedForward(\n",
+ " (layers): Sequential(\n",
+ " (0): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=768, out_features=3072, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " (1): GELU()\n",
+ " (2): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=3072, out_features=768, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " )\n",
+ " )\n",
+ " (norm1): LayerNorm()\n",
+ " (norm2): LayerNorm()\n",
+ " (drop_resid): Dropout(p=0.0, inplace=False)\n",
+ " )\n",
+ " (3): TransformerBlock(\n",
+ " (att): MultiHeadAttention(\n",
+ " (W_query): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " (W_key): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " (W_value): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " (out_proj): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " (dropout): Dropout(p=0.0, inplace=False)\n",
+ " )\n",
+ " (ff): FeedForward(\n",
+ " (layers): Sequential(\n",
+ " (0): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=768, out_features=3072, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " (1): GELU()\n",
+ " (2): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=3072, out_features=768, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " )\n",
+ " )\n",
+ " (norm1): LayerNorm()\n",
+ " (norm2): LayerNorm()\n",
+ " (drop_resid): Dropout(p=0.0, inplace=False)\n",
+ " )\n",
+ " (4): TransformerBlock(\n",
+ " (att): MultiHeadAttention(\n",
+ " (W_query): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " (W_key): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " (W_value): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " (out_proj): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " (dropout): Dropout(p=0.0, inplace=False)\n",
+ " )\n",
+ " (ff): FeedForward(\n",
+ " (layers): Sequential(\n",
+ " (0): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=768, out_features=3072, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " (1): GELU()\n",
+ " (2): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=3072, out_features=768, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " )\n",
+ " )\n",
+ " (norm1): LayerNorm()\n",
+ " (norm2): LayerNorm()\n",
+ " (drop_resid): Dropout(p=0.0, inplace=False)\n",
+ " )\n",
+ " (5): TransformerBlock(\n",
+ " (att): MultiHeadAttention(\n",
+ " (W_query): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " (W_key): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " (W_value): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " (out_proj): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " (dropout): Dropout(p=0.0, inplace=False)\n",
+ " )\n",
+ " (ff): FeedForward(\n",
+ " (layers): Sequential(\n",
+ " (0): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=768, out_features=3072, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " (1): GELU()\n",
+ " (2): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=3072, out_features=768, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " )\n",
+ " )\n",
+ " (norm1): LayerNorm()\n",
+ " (norm2): LayerNorm()\n",
+ " (drop_resid): Dropout(p=0.0, inplace=False)\n",
+ " )\n",
+ " (6): TransformerBlock(\n",
+ " (att): MultiHeadAttention(\n",
+ " (W_query): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " (W_key): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " (W_value): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " (out_proj): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " (dropout): Dropout(p=0.0, inplace=False)\n",
+ " )\n",
+ " (ff): FeedForward(\n",
+ " (layers): Sequential(\n",
+ " (0): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=768, out_features=3072, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " (1): GELU()\n",
+ " (2): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=3072, out_features=768, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " )\n",
+ " )\n",
+ " (norm1): LayerNorm()\n",
+ " (norm2): LayerNorm()\n",
+ " (drop_resid): Dropout(p=0.0, inplace=False)\n",
+ " )\n",
+ " (7): TransformerBlock(\n",
+ " (att): MultiHeadAttention(\n",
+ " (W_query): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " (W_key): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " (W_value): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " (out_proj): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " (dropout): Dropout(p=0.0, inplace=False)\n",
+ " )\n",
+ " (ff): FeedForward(\n",
+ " (layers): Sequential(\n",
+ " (0): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=768, out_features=3072, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " (1): GELU()\n",
+ " (2): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=3072, out_features=768, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " )\n",
+ " )\n",
+ " (norm1): LayerNorm()\n",
+ " (norm2): LayerNorm()\n",
+ " (drop_resid): Dropout(p=0.0, inplace=False)\n",
+ " )\n",
+ " (8): TransformerBlock(\n",
+ " (att): MultiHeadAttention(\n",
+ " (W_query): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " (W_key): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " (W_value): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " (out_proj): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " (dropout): Dropout(p=0.0, inplace=False)\n",
+ " )\n",
+ " (ff): FeedForward(\n",
+ " (layers): Sequential(\n",
+ " (0): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=768, out_features=3072, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " (1): GELU()\n",
+ " (2): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=3072, out_features=768, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " )\n",
+ " )\n",
+ " (norm1): LayerNorm()\n",
+ " (norm2): LayerNorm()\n",
+ " (drop_resid): Dropout(p=0.0, inplace=False)\n",
+ " )\n",
+ " (9): TransformerBlock(\n",
+ " (att): MultiHeadAttention(\n",
+ " (W_query): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " (W_key): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " (W_value): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " (out_proj): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " (dropout): Dropout(p=0.0, inplace=False)\n",
+ " )\n",
+ " (ff): FeedForward(\n",
+ " (layers): Sequential(\n",
+ " (0): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=768, out_features=3072, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " (1): GELU()\n",
+ " (2): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=3072, out_features=768, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " )\n",
+ " )\n",
+ " (norm1): LayerNorm()\n",
+ " (norm2): LayerNorm()\n",
+ " (drop_resid): Dropout(p=0.0, inplace=False)\n",
+ " )\n",
+ " (10): TransformerBlock(\n",
+ " (att): MultiHeadAttention(\n",
+ " (W_query): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " (W_key): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " (W_value): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " (out_proj): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " (dropout): Dropout(p=0.0, inplace=False)\n",
+ " )\n",
+ " (ff): FeedForward(\n",
+ " (layers): Sequential(\n",
+ " (0): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=768, out_features=3072, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " (1): GELU()\n",
+ " (2): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=3072, out_features=768, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " )\n",
+ " )\n",
+ " (norm1): LayerNorm()\n",
+ " (norm2): LayerNorm()\n",
+ " (drop_resid): Dropout(p=0.0, inplace=False)\n",
+ " )\n",
+ " (11): TransformerBlock(\n",
+ " (att): MultiHeadAttention(\n",
+ " (W_query): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " (W_key): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " (W_value): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " (out_proj): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=768, out_features=768, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " (dropout): Dropout(p=0.0, inplace=False)\n",
+ " )\n",
+ " (ff): FeedForward(\n",
+ " (layers): Sequential(\n",
+ " (0): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=768, out_features=3072, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " (1): GELU()\n",
+ " (2): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=3072, out_features=768, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ " )\n",
+ " )\n",
+ " (norm1): LayerNorm()\n",
+ " (norm2): LayerNorm()\n",
+ " (drop_resid): Dropout(p=0.0, inplace=False)\n",
+ " )\n",
+ " )\n",
+ " (final_norm): LayerNorm()\n",
+ " (out_head): LinearWithLoRA(\n",
+ " (linear): Linear(in_features=768, out_features=2, bias=True)\n",
+ " (lora): LoRALayer()\n",
+ " )\n",
+ ")\n"
+ ]
+ }
+ ],
+ "source": [
+ "device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n",
+ "model.to(device)\n",
+ "\n",
+ "print(model)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "c4bbc9d7-65ec-4675-bab8-2e56eb0cfb55",
+ "metadata": {
+ "id": "c4bbc9d7-65ec-4675-bab8-2e56eb0cfb55"
+ },
+ "source": [
+ "- Based on the model architecture above, we can see that the model now contains our new `LinearWithLoRA` layers\n",
+ "- Also, since we initialized matrix $B$ with 0's, we expect the initial model performance to be unchanged compared to before"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 18,
+ "id": "DAlrb_I00VEU",
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "DAlrb_I00VEU",
+ "outputId": "3da44ac4-230b-4358-d996-30b63f0d962a"
+ },
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "Training accuracy: 46.25%\n",
+ "Validation accuracy: 45.00%\n",
+ "Test accuracy: 48.75%\n"
+ ]
+ }
+ ],
+ "source": [
+ "torch.manual_seed(123)\n",
+ "train_accuracy = calc_accuracy_loader(train_loader, model, device, num_batches=10)\n",
+ "val_accuracy = calc_accuracy_loader(val_loader, model, device, num_batches=10)\n",
+ "test_accuracy = calc_accuracy_loader(test_loader, model, device, num_batches=10)\n",
+ "\n",
+ "print(f\"Training accuracy: {train_accuracy*100:.2f}%\")\n",
+ "print(f\"Validation accuracy: {val_accuracy*100:.2f}%\")\n",
+ "print(f\"Test accuracy: {test_accuracy*100:.2f}%\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "13735b3e-f0c3-4dba-ae3d-4141b2878101",
+ "metadata": {
+ "id": "13735b3e-f0c3-4dba-ae3d-4141b2878101"
+ },
+ "source": [
+ "- Let's now get to the interesting part and finetune the model by reusing the training function from chapter 6\n",
+ "- The training takes about 15 minutes on a M3 MacBook Air laptop computer and less than half a minute on a V100 or A100 GPU"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 19,
+ "id": "wCParRvr0eff",
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "wCParRvr0eff",
+ "outputId": "ce910a9c-ee89-48bb-bfa6-49c6aee1e450"
+ },
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "Ep 1 (Step 000000): Train loss 3.820, Val loss 3.462\n",
+ "Ep 1 (Step 000050): Train loss 0.396, Val loss 0.364\n",
+ "Ep 1 (Step 000100): Train loss 0.111, Val loss 0.229\n",
+ "Training accuracy: 97.50% | Validation accuracy: 95.00%\n",
+ "Ep 2 (Step 000150): Train loss 0.135, Val loss 0.073\n",
+ "Ep 2 (Step 000200): Train loss 0.007, Val loss 0.053\n",
+ "Ep 2 (Step 000250): Train loss 0.021, Val loss 0.180\n",
+ "Training accuracy: 97.50% | Validation accuracy: 97.50%\n",
+ "Ep 3 (Step 000300): Train loss 0.103, Val loss 0.065\n",
+ "Ep 3 (Step 000350): Train loss 0.059, Val loss 0.167\n",
+ "Training accuracy: 100.00% | Validation accuracy: 100.00%\n",
+ "Ep 4 (Step 000400): Train loss 0.006, Val loss 0.118\n",
+ "Ep 4 (Step 000450): Train loss 0.004, Val loss 0.179\n",
+ "Ep 4 (Step 000500): Train loss 0.001, Val loss 0.060\n",
+ "Training accuracy: 97.50% | Validation accuracy: 92.50%\n",
+ "Ep 5 (Step 000550): Train loss 0.021, Val loss 0.128\n",
+ "Ep 5 (Step 000600): Train loss 0.051, Val loss 0.051\n",
+ "Training accuracy: 100.00% | Validation accuracy: 97.50%\n",
+ "Training completed in 0.83 minutes.\n"
+ ]
+ }
+ ],
+ "source": [
+ "import time\n",
+ "from previous_chapters import train_classifier_simple\n",
+ "\n",
+ "\n",
+ "start_time = time.time()\n",
+ "\n",
+ "torch.manual_seed(123)\n",
+ "\n",
+ "optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5, weight_decay=0.1)\n",
+ "\n",
+ "num_epochs = 5\n",
+ "train_losses, val_losses, train_accs, val_accs, examples_seen = train_classifier_simple(\n",
+ " model, train_loader, val_loader, optimizer, device,\n",
+ " num_epochs=num_epochs, eval_freq=50, eval_iter=5,\n",
+ " tokenizer=tokenizer\n",
+ ")\n",
+ "\n",
+ "end_time = time.time()\n",
+ "execution_time_minutes = (end_time - start_time) / 60\n",
+ "print(f\"Training completed in {execution_time_minutes:.2f} minutes.\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "d0c89e82-3aa8-44c6-b046-0b16200b8e6c",
+ "metadata": {
+ "id": "d0c89e82-3aa8-44c6-b046-0b16200b8e6c"
+ },
+ "source": [
+ "- Finally, let's evaluate the model"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 20,
+ "id": "bawWGijA0iF3",
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 308
+ },
+ "id": "bawWGijA0iF3",
+ "outputId": "af70782a-d605-4376-fa6c-d33b38979cfa"
+ },
+ "outputs": [
+ {
+ "output_type": "display_data",
+ "data": {
+ "text/plain": [
+ ""
+ ],
+ "image/png": "iVBORw0KGgoAAAANSUhEUgAAAeoAAAEiCAYAAAA21pHjAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjcuMSwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/bCgiHAAAACXBIWXMAAA9hAAAPYQGoP6dpAABPtUlEQVR4nO3deVzUdf7A8dfMwAz3ISKHAl6IJ2KKhppZUmLlplub67qFrdWvQs3MMrdSrG21204r23TbLSkrzS3T1DzKNE8UL7wFlcOLGwaY+fz+GBgYT0BgBnw/H4/vY77H5/v9vufjyPv7+V4fjVJKIYQQQgiHpLV3AEIIIYS4PEnUQgghhAOTRC2EEEI4MEnUQgghhAOTRC2EEEI4MEnUQgghhAOTRC2EEEI4MEnUQgghhAOTRC2EEEI4MEnUQggbgwcPZtKkSfYOQwhRQRK1EPVs7NixaDSai4a4uDh7hyaEaIKc7B2AEM1RXFwc8+fPt5lnMBjsFI0QoimTFrUQDcBgMBAYGGgz+Pr6ArB27Vr0ej2//PKLtfyrr75Kq1atyMrKAmD58uUMHDgQHx8f/Pz8uOuuuzh8+LC1/LFjx9BoNHz11VfcdNNNuLq6Eh0dzYEDB9iyZQt9+vTBw8ODYcOGcfr0aet6Y8eOZcSIEcycORN/f3+8vLx49NFHKS0tvex3MRqNTJkyhdatW+Pu7k6/fv1Yu3atdfnx48cZPnw4vr6+uLu7061bN5YtW3bZ7X3wwQeEh4fj4uJCQEAA9957r3WZ2Wxm1qxZtGvXDldXV3r27MnXX39ts/7u3bsZNmwYHh4eBAQEcP/993PmzBnr8sGDBzNx4kSeeeYZWrRoQWBgIImJiZeNRwhHJ4laiEZWeQ34/vvvJzc3lx07dvDCCy/wySefEBAQAEBhYSGTJ09m69atrF69Gq1Wy8iRIzGbzTbbmjFjBs8//zzbt2/HycmJv/zlLzzzzDO8/fbb/PLLLxw6dIjp06fbrLN69Wr27dvH2rVrWbhwId9++y0zZ868bLzjx49n48aNJCUlsWvXLv70pz8RFxfHwYMHAUhISMBoNLJ+/XpSUlJ45ZVX8PDwuOS2tm7dysSJE3nxxRdJTU1l+fLlDBo0yLp81qxZfPbZZ3z44Yfs2bOHJ598kr/+9a+sW7cOgJycHG699VZ69erF1q1bWb58OVlZWdx33302+/n3v/+Nu7s7v//+O6+++iovvvgiK1eurOG/kBAORgkh6lV8fLzS6XTK3d3dZnj55ZetZYxGo4qKilL33Xef6tq1q3r44YevuM3Tp08rQKWkpCillDp69KgC1CeffGIts3DhQgWo1atXW+fNmjVLRURE2MTWokULVVhYaJ03d+5c5eHhoUwmk1JKqZtvvlk98cQTSimljh8/rnQ6nTp58qRNPEOGDFHTpk1TSinVo0cPlZiYWKO6+eabb5SXl5fKy8u7aFlJSYlyc3NTv/32m838cePGqdGjRyullHrppZfU7bffbrM8PT1dASo1NdUa/8CBA23KREdHq6lTp9YoRiEcjVyjFqIB3HLLLcydO9dmXosWLazjer2ezz//nMjISMLCwnjrrbdsyh48eJDp06fz+++/c+bMGWtLOi0tje7du1vLRUZGWscrW+M9evSwmZednW2z7Z49e+Lm5madjomJoaCggPT0dMLCwmzKpqSkYDKZ6NSpk818o9GIn58fABMnTuSxxx7jp59+IjY2lnvuuccmrupuu+02wsLCaN++PXFxccTFxTFy5Ejc3Nw4dOgQRUVF3HbbbTbrlJaW0qtXLwB27tzJmjVrLtliP3z4sDXOC/cfFBR0UT0I0VRIohaiAbi7u9OxY8crlvntt98AOHfuHOfOncPd3d26bPjw4YSFhTFv3jyCg4Mxm8107979omvJzs7O1nGNRnPJeReeLq+NgoICdDod27ZtQ6fT2SyrTJYPPfQQQ4cO5YcffuCnn35i1qxZvPHGG0yYMOGi7Xl6erJ9+3bWrl3LTz/9xPTp00lMTGTLli0UFBQA8MMPP9C6dWub9SpvxCsoKGD48OG88sorF207KCjIOl69DuDa60EIe5JELYQdHD58mCeffJJ58+bx5ZdfEh8fz6pVq9BqtZw9e5bU1FTmzZvHTTfdBMCvv/5ab/veuXMnxcXFuLq6ArBp0yY8PDwICQm5qGyvXr0wmUxkZ2dbY7mUkJAQHn30UR599FGmTZvGvHnzLpmoAZycnIiNjSU2NpYZM2bg4+PDzz//zG233YbBYCAtLY2bb775kuvecMMNfPPNN7Rt2xYnJ/nzJa4P8ksXogEYjUYyMzNt5jk5OdGyZUtMJhN//etfGTp0KA8++CBxcXH06NGDN954g6effhpfX1/8/Pz4+OOPCQoKIi0tjWeffbbeYistLWXcuHE8//zzHDt2jBkzZjB+/Hi02ovvLe3UqRNjxozhgQce4I033qBXr16cPn2a1atXExkZyZ133smkSZMYNmwYnTp14vz586xZs4YuXbpcct/ff/89R44cYdCgQfj6+rJs2TLMZjMRERF4enoyZcoUnnzyScxmMwMHDiQ3N5cNGzbg5eVFfHw8CQkJzJs3j9GjR1vv6j506BBJSUl88sknF7X6hWgOJFEL0QCWL19ucyoWICIigv379/Pyyy9z/Phxvv/+e8Byyvbjjz9m9OjR3H777fTs2ZOkpCQmTpxI9+7diYiI4J133mHw4MH1EtuQIUMIDw9n0KBBGI1GRo8efcXHl+bPn88//vEPnnrqKU6ePEnLli258cYbueuuuwAwmUwkJCRw4sQJvLy8iIuLu+iaeyUfHx++/fZbEhMTKSkpITw8nIULF9KtWzcAXnrpJfz9/Zk1axZHjhzBx8eHG264gb///e8ABAcHs2HDBqZOncrtt9+O0WgkLCyMuLi4Sx5oCNEcaJRSyt5BCCEax9ixY8nJyWHJkiX2DkUIUUNyCCqEEEI4MEnUQgghhAOTU99CCCGEA5MWtRBCCOHAJFELIYQQDkwStRBCCOHAJFFXeP/992nbti0uLi7069ePzZs32zukBrd+/XqGDx9OcHAwGo3mokd2lFJMnz6doKAgXF1diY2NtfaYVOncuXOMGTMGLy8vfHx8GDdunPVVkJV27drFTTfdhIuLCyEhIbz66qsN/dXq3axZs4iOjsbT05NWrVoxYsQIUlNTbcqUlJSQkJCAn58fHh4e3HPPPdZuKyulpaVx55134ubmRqtWrXj66acpLy+3KbN27VpuuOEGDAYDHTt2ZMGCBQ399erd3LlziYyMxMvLCy8vL2JiYvjxxx+ty6WuLm/27NloNBomTZpknSf1VSUxMRGNRmMzdO7c2bq8WdaVXbsEcRBJSUlKr9erTz/9VO3Zs0c9/PDDysfHR2VlZdk7tAa1bNky9dxzz6lvv/1WAWrx4sU2y2fPnq28vb3VkiVL1M6dO9Uf/vAH1a5dO1VcXGwtExcXp3r27Kk2bdqkfvnlF9WxY0drT0dKKZWbm6sCAgLUmDFj1O7du9XChQu
Vq6ur+uijjxrra9aLoUOHqvnz56vdu3er5ORkdccdd6jQ0FBVUFBgLfPoo4+qkJAQtXr1arV161Z14403qv79+1uXl5eXq+7du6vY2Fi1Y8cOtWzZMtWyZUtrL1RKKXXkyBHl5uamJk+erPbu3aveffddpdPp1PLlyxv1+16rpUuXqh9++EEdOHBApaamqr///e/K2dlZ7d69WykldXU5mzdvVm3btlWRkZHWHsyUkvqqbsaMGapbt24qIyPDOpw+fdq6vDnWlSRqpVTfvn1VQkKCddpkMqng4GA1a9YsO0bVuC5M1GazWQUGBqrXXnvNOi8nJ0cZDAa1cOFCpZRSe/fuVYDasmWLtcyPP/6oNBqNtVvEDz74QPn6+iqj0WgtM3XqVJuuF5ui7OxsBah169YppSx14+zsrBYtWmQts2/fPgWojRs3KqUsB0ZarVZlZmZay8ydO1d5eXlZ6+eZZ55R3bp1s9nXqFGj1NChQxv6KzU4X19f9cknn0hdXUZ+fr4KDw9XK1eutOlqVOrL1owZM1TPnj0vuay51tV1f+q7tLSUbdu2ERsba52n1WqJjY1l48aNdozMvo4ePUpmZqZNvXh7e9OvXz9rvWzcuBEfHx/69OljLRMbG4tWq+X333+3lhk0aBB6vd5aZujQoaSmpnL+/PlG+jb1Lzc3F6jqunLbtm2UlZXZ1Ffnzp0JDQ21qa8ePXpYu6MES13k5eWxZ88ea5nq26gs05R/iyaTiaSkJAoLC4mJiZG6uoyEhATuvPPOi76T1NfFDh48SHBwMO3bt2fMmDGkpaUBzbeurvtEfebMGUwmk80/Glj68b2wU4XrSeV3v1K9ZGZm0qpVK5vlTk5OtGjRwqbMpbZRfR9NjdlsZtKkSQwYMMDaN3RmZiZ6vR4fHx+bshfW19Xq4nJl8vLyKC4uboiv02BSUlLw8PDAYDDw6KOPsnjxYrp27Sp1dQlJSUls376dWbNmXbRM6stWv379WLBgAcuXL2fu3LkcPXqUm266ifz8/GZbV9IphxC1lJCQwO7du+u168nmKCIiguTkZHJzc/n666+Jj49n3bp19g7L4aSnp/PEE0+wcuVKXFxc7B2Owxs2bJh1PDIykn79+hEWFsZXX31l7bq1ubnuW9QtW7ZEp9NddFdgVlYWgYGBdorK/iq/+5XqJTAwkOzsbJvl5eXlnDt3zqbMpbZRfR9Nyfjx4/n+++9Zs2YNbdq0sc4PDAyktLSUnJwcm/IX1tfV6uJyZby8vJrcHyG9Xk/Hjh3p3bs3s2bNomfPnrz99ttSVxfYtm0b2dnZ3HDDDTg5OeHk5MS6det45513cHJyIiAgQOrrCnx8fOjUqROHDh1qtr+t6z5R6/V6evfuzerVq63zzGYzq1evJiYmxo6R2Ve7du0IDAy0qZe8vDx+//13a73ExMSQk5PDtm3brGV+/vlnzGYz/fr1s5ZZv349ZWVl1jIrV64kIiICX1/fRvo2104pxfjx41m8eDE///wz7dq1s1neu3dvnJ2dbeorNTWVtLQ0m/pKSUmxObhZuXIlXl5edO3a1Vqm+jYqyzSH36LZbMZoNEpdXWDIkCGkpKSQnJxsHfr06cOYMWOs41Jfl1dQUMDhw4cJCgpqvr8tu9zC5mCSkpKUwWBQCxYsUHv37lWPPPKI8vHxsbkrsDnKz89XO3bsUDt27FCAevPNN9WOHTvU8ePHlVKWx7N8fHzUd999p3bt2qXuvvvuSz6e1atXL/X777+rX3/9VYWHh9s8npWTk6MCAgLU/fffr3bv3q2SkpKUm5tbk3s867HHHlPe3t5q7dq1No+FFBUVWcs8+uijKjQ0VP38889q69atKiYmRsXExFiXVz4Wcvvtt6vk5GS1fPly5e/vf8nHQp5++mm1b98+9f777zfJR2ieffZZtW7dOnX06FG1a9cu9eyzzyqNRqN++uknpZTU1dVUv+tbKamv6p566im1du1adfToUbVhwwYVGxurWrZsqbKzs5VSzbOuJFFXePfdd1VoaKjS6/Wqb9++atOmTfYOqcGtWbNGARcN8fHxSinLI1ovvPCCCggIUAaDQQ0ZMkSlpqbabOPs2bNq9OjRysPDQ3l5eakHH3xQ5efn25TZuXOnGjhwoDIYDKp169Zq9uzZjfUV682l6glQ8+fPt5YpLi5Wjz/+uPL19VVubm5q5MiRKiMjw2Y7x44dU8OGDVOurq6qZcuW6qmnnlJlZWU2ZdasWaOioqKUXq9X7du3t9lHU/G3v/1NhYWFKb1er/z9/dWQIUOsSVopqauruTBRS31VGTVqlAoKClJ6vV61bt1ajRo1Sh06dMi6vDnWlfSeJYQQQjiw6/4atRBCCOHIJFELIYQQDkwStRBCCOHAJFELIYQQDkwStRBCCOHAJFELIYQQDkwSdTVGo5HExESMRqO9Q3F4Ule1I/VVc1JXtSP1VXNNta4c5jnq2bNnM23aNJ544gnmzJljlxjy8vLw9vYmNzcXLy8vu8TQVEhd1Y7UV81JXdWO1FfNNdW6cogW9ZYtW/joo4+IjIy0dyhCCCGEQ7F7oi4oKGDMmDHMmzevSXXSIIQQQjQGu/dHnZCQwJ133klsbCz/+Mc/arVueXk5O3bsICAgAK322o858vPzATh58iR5eXnXvL3mTOqqdqS+ak7qqnakvmrOkerKbDaTlZVFr169cHK6ciq2a6JOSkpi+/btbNmypUbljUajzU0A27Zt49Zbb633uCq7OhNXJ3VVO1JfNSd1VTtSXzXnSHW1efNmoqOjr1jGbok6PT2dJ554gpUrV+Li4lKjdWbNmsXMmTMvmr9582aCgoLqO0QhhBCiQWRkZNC3b18CAgKuWtZud30vWbKEkSNHotPprPNMJhMajQatVovRaLRZBhe3qE+ePEnXrl1JT0+nTZs2jRa7EEIIcS1OnDhBSEhIjfKX3VrUQ4YMISUlxWbegw8+SOfOnZk6depFSRrAYDBgMBis0/a+xiCEEEI0NLslak9PT7p3724zz93dHT8/v4vmCyGEENcruz+eJYQQQojLs/vjWdWtXbvW3iEIIa5zJpOJsrIye4chmjhnZ+dLXsKtC4dK1PZUaCxnZ3oO5WbFoE7+9g5HCNHIlFJkZmaSk5Nj71BEM+Hj40NgYCAajeaatiOJusLq/dlMXLiDyDbekqiFuA5VJulWrVrh5uZ2zX9cxfVLKUVRURHZ2dkA1/z4sCTqCr1CfADYl5FHSZkJF+f6OWUhhHB8JpPJmqT9/PzsHY5oBlxdXQHIzs6mVatW13QaXG4mq9DG1xU/dz1lJsWeU/LYlxDXk8pr0m5ubnaORDQnlb+na73nQRJ1BY1GQ69QHwB2pJ23bzBCCLuQ092iPtXX70kSdTVRFae/k9Nz7BqHEEIIUUkSdTVRIZZuNiVRCyGuZ23btmXOnDk1Lr927Vo0Gk2D3zG/YMECfHx8GnQfjkgSdTWRId5oNHDifDFnCoxXX0EIIexIo9FccUhMTKzTdrds2cIjjzxS4/L9+/cnIyMDb2/vOu1PXJnc9V2Nl4
szHfw9OJRdQHJaDrFdr96riRBC2EtGRoZ1/Msvv2T69OmkpqZa53l4eFjHlVKYTKar9n0M4O9fu0dU9Xo9gYGBtVpH1Jy0qC8g16mFEE1FYGCgdfD29kaj0Vin9+/fj6enJz/++CO9e/fGYDDw66+/cvjwYe6++24CAgLw8PAgOjqaVatW2Wz3wlPfGo2GTz75hJEjR+Lm5kZ4eDhLly61Lr/w1HflKeoVK1bQpUsXPDw8iIuLszmwKC8vZ+LEifj4+ODn58fUqVOJj49nxIgRtaqDuXPn0qFDB/R6PREREfznP/+xLlNKkZiYSGhoKAaDgeDgYCZOnGhd/sEHHxAeHo6LiwsBAQHce++9tdp3Y5FEfQFJ1EIIqHhpRWm5XYb67H342WefZfbs2ezbt4/IyEgKCgq44447WL16NTt27CAuLo7hw4eTlpZ2xe3MnDmT++67j127dnHHHXcwZswYzp07d9nyRUVFvP766/znP/9h/fr1pKWlMWXKFOvyV155hc8//5z58+ezYcMG8vLyWLJkSa2+2+LFi3niiSd46qmn2L17N//3f//Hgw8+yJo1awD45ptveOutt/joo484ePAgS5YsoUePHgBs3bqViRMn8uKLL5Kamsry5csZNGhQrfbfWOTU9wUqE/XO9BzMZoVWK49rCHE9Ki4z0XX6Crvse++LQ3HT18+f5xdffJHbbrvNOt2iRQt69uxpnX7ppZdYvHgxS5cuZfz48ZfdztixYxk9ejQA//znP3nnnXfYvHkzcXFxlyxfVlbGhx9+SIcOHQAYP348L774onX5u+++y7Rp0xg5ciQA7733HsuWLavVd3v99dcZO3Ysjz/+OACTJ09m06ZNvP7669xyyy2kpaURGBhIbGwszs7OhIaG0rdvXwDS0tJwd3fnrrvuwtPTk7CwMHr16lWr/TcWaVFfoHOgJy7OWvKN5Rw+XWDvcIQQ4pr06dPHZrqgoIApU6bQpUsXfHx88PDwYN++fVdtUUdGRlrH3d3d8fLysr4i81Lc3NysSRosr9GsLJ+bm0tWVpY1aQLodDp69+5dq++2b98+BgwYYDNvwIAB7Nu3D4A//elPFBcX0759ex5++GEWL15MeXk5ALfddhthYWG0b9+e+++/n88//5yioqJa7b+xSIv6Ak46LZGtfdh87Bw70nMID/C0d0hCCDtwddax98Whdtt3fXF3d7eZnjJlCitXruT111+nY8eOuLq6cu+991JaWnrF7Tg7O9tMazQazGZzrcrX5yn9mggJCSE1NZVVq1axcuVKHn/8cV577TXWrVuHp6cn27dvZ+3atfz0009Mnz6dxMREtmzZ4nCPgEmL+hKiKt5QJtephbh+aTQa3PROdhka8g1pGzZsYOzYsYwcOZIePXoQGBjIsWPHGmx/l+Lt7U1AQABbtmyxzjOZTGzfvr1W2+nSpQsbNmywmbdhwwa6du1qnXZ1dWX48OG88847rF27lo0bN5KSkgKAk5MTsbGxvPrqq+zatYtjx47x888/X8M3axjSor4E6w1laTl2jUMIIepbeHg43377LcOHD0ej0fDCCy9csWXcUCZMmMCsWbPo2LEjnTt35t133+X8+fO1Okh5+umnue++++jVqxexsbH873//49tvv7Xexb5gwQJMJhP9+vXDzc2N//73v7i6uhIWFsb333/PkSNHGDRoEL6+vixbtgyz2UxERERDfeU6k0R9CZWJOjUrn+JSE6566UlLCNE8vPnmm/ztb3+jf//+tGzZkqlTp5KX1/gdEU2dOpXMzEweeOABdDodjzzyCEOHDq1VL1MjRozg7bff5vXXX+eJJ56gXbt2zJ8/n8GDBwOW/qBnz57N5MmTMZlM9OjRg//973/4+fnh4+PDt99+S2JiIiUlJYSHh7Nw4UK6devWQN+47jSqsS8a1KMTJ04QEhJCeno6bdq0ubaNlRvh+G9w9hAq+iH6/XM12flGvvq/GPq2a1E/AQshHFJJSQlHjx6lXbt2uLi42Duc65LZbKZLly7cd999vPTSS/YOp15c6XdVm/wl16grFefAf0bAsqfRlORWe55aetISQoj6dvz4cebNm8eBAwdISUnhscce4+jRo/zlL3+xd2gORxJ1Jc8AaNEeUJC+mV6h0kGHEEI0FK1Wy4IFC4iOjmbAgAGkpKSwatUqunTpYu/QHI5co64utD+cOwJpvxHVzvI83w65oUwIIepdSEjIRXdsi0uTFnV1YTGWz+MbiWzjjVYDGbklZOWV2DcuIYQQ1y1J1NWFViTqU9tx15bTqeJlJ9KqFkIIYS+SqKtr0R7cW4GpFE5ukw46hBBC2J0k6uo0mqrT32m/yZ3fQggh7E4S9YVC+1s+j2+0vko05UQuJnOTfdxcCCFEEyaJ+kKVLer0zYS3dMNdr6Ow1MTB7Hz7xiWEEOK6JIn6QgHdweAFpfnoTu8hso0PIO/9FkI0X4MHD2bSpEnW6bZt2zJnzpwrrqPRaFiyZMk177u+tnMliYmJREVFNeg+GpIk6gtpdRBS0UdqtdPfcue3EMLRDB8+nLi4uEsu++WXX9BoNOzatavW292yZQuPPPLItYZn43LJMiMjg2HDhtXrvpobSdSXEnqpG8py7BaOEEJcyrhx41i5ciUnTpy4aNn8+fPp06cPkZGRtd6uv78/bm5u9RHiVQUGBmIwGBplX02VJOpLaXcztL8FwgbQqyJRH8jOp8BYbt+4hBCimrvuugt/f38WLFhgM7+goIBFixYxbtw4zp49y+jRo2ndujVubm706NGDhQsXXnG7F576PnjwIIMGDcLFxYWuXbuycuXKi9aZOnUqnTp1ws3Njfbt2/PCCy9QVlYGWLqbnDlzJjt37kSj0aDRaKwxX3jqOyUlhVtvvRVXV1f8/Px45JFHKCgosC4fO3YsI0aM4PXXXycoKAg/Pz8SEhKs+6oJs9nMiy++SJs2bTAYDERFRbF8+XLr8tLSUsaPH09QUBAuLi6EhYUxa9YsAJRSJCYmEhoaisFgIDg4mIkTJ9Z433UhrxC9lJBoeGAJAK2AYG8XTuWWsOtEDv07tLRraEKIRlZaWPt1dAbQVfx5NZWDyQgaLTi7Xn27evca78bJyYkHHniABQsW8Nxzz1n7cl60aBEmk4nRo0dTUFBA7969mTp1Kl5eXvzwww/cf//9dOjQgb59+151H2azmT/+8Y8EBATw+++/k5uba3M9u5KnpycLFiwgODiYlJQUHn74YTw9PXnmmWcYNWoUu3fvZvny5da+or29vS/aRmFhIUOHDiUmJoYtW7aQnZ3NQw89xPjx420ORtasWUNQUBBr1qzh0KFDjBo1iqioKB5++OEa1dvbb7/NG2+8wUcffUSvXr349NNP+cMf/sCePXsIDw/nnXfeYenSpXz11VeEhoaSnp5Oeno6AN988w1vvfUWSUlJdOvWjczMTHbu3Fmj/daVJOoaiAr14VRKJsnpkqiFuO78M7j26/xpAXQbaRnf/z9YNBbCBsKDP1SVmdMDis5evG5ibq129be//Y3XXnuNdevWWfthn
j9/Pvfccw/e3t54e3szZcoUa/kJEyawYsUKvvrqqxol6lWrVrF//35WrFhBcLClLv75z39edF35+eeft463bduWKVOmkJSUxDPPPIOrqyseHh44OTkRGBh42X198cUXlJSU8Nlnn+Hubjlgee+99xg+fDivvPIKAQEBAPj6+vLee++h0+no3Lkzd955J6tXr65xon799deZOnUqf/7znwF45ZVXWLNmDXPmzOH9998nLS2N8PBwBg4ciEajISwszLpuWloagYGBxMbG4uzsTGhoaI3q8VrIqe8rKciGk9urrlPLDWVCCAfTuXNn+vfvz6effgrAoUOH+OWXXxg3bhwAJpOJl156iR49etCiRQs8PDxYsWIFaWlpNdr+vn37CAkJsSZpgJiYmIvKffnllwwYMIDAwEA8PDx4/vnna7yP6vvq2bOnNUkDDBgwALPZTGpqqnVet27d0Ol01umgoCCys7NrtI+8vDxOnTrFgAEDbOYPGDCAffv2AZbT68nJyURERDBx4kR++ukna7k//elPFBcX0759ex5++GEWL15MeXnDXha1a4t67ty5zJ07l2PHjgGWyp8+fbpj3AF4dD38ezj4tiPqDz8DlhvKlFLW00tCiOvA30/Vfh1dtZujOg+3bENzQbtoUsq1xVXNuHHjmDBhAu+//z7z58+nQ4cO3HzzzQC89tprvP3228yZM4cePXrg7u7OpEmTKC0trbf9b9y4kTFjxjBz5kyGDh2Kt7c3SUlJvPHGG/W2j+qcnZ1tpjUaDWazud62f8MNN3D06FF+/PFHVq1axX333UdsbCxff/01ISEhpKamsmrVKlauXMnjjz9uPaNxYVz1xa4t6jZt2jB79my2bdvG1q1bufXWW7n77rvZs2ePPcOyCOoJGh04u9HD3wmdVkN2vpGMXOlJS4jrit699oOuWhtI52SZV/369JW2Wwf33XcfWq2WL774gs8++4y//e1v1gbFhg0buPvuu/nrX/9Kz549ad++PQcOHKjxtrt06UJ6ejoZGRnWeZs2bbIp89tvvxEWFsZzzz1Hnz59CA8P5/jx47ZfV6/HZDJddV87d+6ksLDq+v2GDRvQarVERETUOOYr8fLyIjg4+KIuNjds2EDXrl1tyo0aNYp58+bx5Zdf8s0333Du3DkAXF1dGT58OO+88w5r165l48aNpKTU34HXhezaoh4+fLjN9Msvv8zcuXPZtGkT3bp1s1NUFVy8YeoxcPHCFegc6MmeU3nsSMsh2Mf1amsLIUSj8fDwYNSoUUybNo28vDzGjh1rXRYeHs7XX3/Nb7/9hq+vL2+++SZZWVk2SelKYmNj6dSpE/Hx8bz22mvk5eXx3HPP2ZQJDw8nLS2NpKQkoqOj+eGHH1i8eLFNmbZt23L06FGSk5Np06YNnp6eFz2WNWbMGGbMmEF8fDyJiYmcPn2aCRMmcP/991uvT9eHp59+mhkzZtChQweioqKYP38+ycnJfP755wC8+eabBAUF0atXL7RaLYsWLSIwMBAfHx8WLFiAyWSiX79+uLm58d///hdXV1eb69j1zWGuUZtMJpKSkigsLLzk9Q8Ao9FIXl6edcjPb+DXerp4WUelgw4hhCMbN24c58+fZ+jQoTbXk59//nluuOEGhg4dyuDBgwkMDGTEiBE13q5Wq2Xx4sUUFxfTt29fHnroIV5++WWbMn/4wx948sknGT9+PFFRUfz222+88MILNmXuuece4uLiuOWWW/D397/kI2Jubm6sWLGCc+fOER0dzb333suQIUN47733alcZVzFx4kQmT57MU089RY8ePVi+fDlLly4lPDwcsNzB/uqrr9KnTx+io6M5duwYy5YtQ6vV4uPjw7x58xgwYACRkZGsWrWK//3vf/j5+dVrjNVplFJ27W0iJSWFmJgYSkpK8PDw4IsvvuCOO+64ZNnExERmzpx50fz09HTatGnTcEGayli0I5Onv95FdFtfFj3av+H2JYRodCUlJRw9epR27drh4uJi73BEM3Gl39WJEycICQmpUf6ye4s6IiKC5ORkfv/9dx577DHi4+PZu3fvJctOmzaN3Nxc63C5cvWmrBg+HQazQ+kdYLnek3IylzJT/d20IIQQQlyJ3Z+j1uv1dOzYEYDevXuzZcsW3n77bT766KOLyhoMBptrGnl5eQ0bnLMr5GdAWRFti/bg6eJEfkk5qZn5dG998cP6QgghRH2ze4v6QmazGaPRaO8wqoRZTnNr0zfSs7InLXnvtxBCiEZi10Q9bdo01q9fz7Fjx0hJSWHatGmsXbuWMWPG2DMsW5UddBzfKB10CCGEaHR2PfWdnZ3NAw88QEZGBt7e3kRGRrJixQpuu+02e4Zlq6JFzant9O5neSxLErUQQojGYtdE/a9//cueu6+ZFu3BvRUUZnOD01EADmUXkFtchrdrw7yFRghhH/X5dish6uv3ZPebyRyeRgNhMbD3O7yztxDS4gbSzxWz60QON4X72zs6IUQ90Ov1aLVaTp06hb+/P3q9Xl4VLOpMKUVpaSmnT59Gq9Wi1+uvaXuSqGsitD/s/Q7SNhIVMoT0c8Ukp0miFqK50Gq1tGvXjoyMDE6dqsO7vYW4BDc3N0JDQ9Fqr+12MEnUNRFWcUNZ+mZ6DfTkfzvlOrUQzY1eryc0NJTy8vKrvpNaiKvR6XQ4OTnVy5kZSdQ1EdAd9J5gzONGd8uL6aUnLSGaH41Gg7Ozc4P1giREXTjcc9QOSauDEEvH4OElKTjrNJwtLOXE+WI7ByaEEKK5k0RdUxWnv51PbKJLkKWzjh1y+lsIIUQDk0RdU6EVz1Of3EGvyhefpOXYLRwhhBDXB0nUNdW6Nzz4I4zfQlSoDwA7pMtLIYQQDUxuJqspZxfrW8qiQnwB2HMqj9JyM3onOd4RQgjRMCTD1EFbPzd83JwpLTezL6OBe/ASQghxXZNEXRv5WbDsaTQLR0tPWkIIIRqFJOracDLA5nlw4Ef6B5QDkqiFEEI0LLlGXRuuPjDkBWjRns4Ewy/nJVELIYRoUJKoa+umpwCILCwF9nD0TCE5RaX4uF3bS9eFEEKIS5FT33Xk666nXUt3QE5/CyGEaDiSqGtLKTj2K6x7jRuDLSckdsiLT4QQQjQQSdS1pdHAd+NhzT8Y4n4MkBa1EEKIhiOJui4qXnzSw7wXgJ0nLD1pCSGEEPVNEnVdhFo66PA/tw29k5acojKOnS2yc1BCCCGaI0nUdVHRotae2k5UkAsAyfLebyGEEA1AEnVdtGgP7q3AVMow31OA9KQlhBCiYUiirguNBkJvBOBGpwOA3FAmhBCiYUiirquK099tC3cCsDcjj5Iykz0jEkII0QxJoq6rihvKXDK34e+mo8yk2Cs9aQkhhKhnkqjrKrAH6D3RGPO4M8ByI5m8+EQIIUR9k0RdV1odhPQF4BbXQ4BcpxZCCFH/JFFfizDL6e+u5XsAeURLCCFE/ZNEfS1CLTeU+Z3dBijSzxVztsBo35iEEEI0K5Kor0Xr3tDuZrS94+nc0gDI6W8hhBD1SxL1tXB2gfilcOvzdA9r
BUiiFkIIUb/qlKjT09M5ceKEdXrz5s1MmjSJjz/+uN4Ca2qiQnwASdRCCCHqV50S9V/+8hfWrFkDQGZmJrfddhubN2/mueee48UXX6zXAJuEonMM1OwCLInabJaetIQQQtSPOiXq3bt307ev5dGkr776iu7du/Pbb7/x+eefs2DBgvqMz/EZC+C1jrT98a+0cc4jv6ScI2cK7B2VEEKIZqJOibqsrAyDwXLz1KpVq/jDH/4AQOfOncnIyKjxdmbNmkV0dDSenp60atWKESNGkJqaWpeQ7MfgAa26QstODGhVCsiLT4QQQtSfOiXqbt268eGHH/LLL7+wcuVK4uLiADh16hR+fn413s66detISEhg06ZNrFy5krKyMm6//XYKCwvrEpb9PLQKxm/Bq300INephRBC1B+nuqz0yiuvMHLkSF577TXi4+Pp2bMnAEuXLrWeEq+J5cuX20wvWLCAVq1asW3bNgYNGlSX0OzD2dIndVSIL3BUErUQQoh6U6dEPXjwYM6cOUNeXh6+vr7W+Y888ghubm51DiY3NxeAFi1a1Hkb9hTV2h0nytmfmU9xqQlXvc7eIQkhhGji6nTqu7i4GKPRaE3Sx48fZ86cOaSmptKqVas6BWI2m5k0aRIDBgyge/fulyxjNBrJy8uzDvn5+XXaV4NY8jjBH0Zwp3sqJrNi96lce0ckhBCiGahTor777rv57LPPAMjJyaFfv3688cYbjBgxgrlz59YpkISEBHbv3k1SUtJly8yaNQtvb2/r0LVr1zrtq6FoyooY6nkEgGS5oUwIIUQ9qFOi3r59OzfddBMAX3/9NQEBARw/fpzPPvuMd955p9bbGz9+PN9//z1r1qyhTZs2ly03bdo0cnNzrcPevXvrEn7DCL0RgCi1D5AbyoQQQtSPOl2jLioqwtPTE4CffvqJP/7xj2i1Wm688UaOHz9e4+0opZgwYQKLFy9m7dq1tGvX7orlDQaD9bEwgLy8vLqE3zAqOugIzN+LgVJ2pElPWkIIIa5dnVrUHTt2ZMmSJaSnp7NixQpuv/12ALKzs/Hy8qrxdhISEvjvf//LF198gaenJ5mZmWRmZlJcXFyXsOzLrwO4+6M1lxKpPcKp3BKy80rsHZUQQogmrk6Jevr06UyZMoW2bdvSt29fYmIs/TL/9NNP9OrVq8bbmTt3Lrm5uQwePJigoCDr8OWXX9YlLPvSaCDUUg/DPI8BsENOfwshhLhGdTr1fe+99zJw4EAyMjKsz1ADDBkyhJEjR9Z4O0o1s3dih/WHfUvp73wAiCM5PYeh3QLtHZUQQogmrE6JGiAwMJDAwEBrL1pt2rSp1ctOmqWKFnX7kt1oMcud30IIIa5ZnU59m81mXnzxRby9vQkLCyMsLAwfHx9eeuklzGZzfcfYdAT2AL0n+vICOmvS2HUiB5P0pCWEEOIa1KlF/dxzz/Gvf/2L2bNnM2DAAAB+/fVXEhMTKSkp4eWXX67XIJsMrQ5C+sLh1QxwPsDe0rYcyi4gItDT3pEJIYRoouqUqP/973/zySefWHvNAoiMjKR169Y8/vjj12+iBgiLgcOrudXtMPNKITn9vCRqIYQQdVanU9/nzp2jc+fOF83v3Lkz586du+agmrSK69Q9THsBJV1eCiGEuCZ1StQ9e/bkvffeu2j+e++9R2Rk5DUH1aS17g1aZ9zKcwnknLyhTAghxDWp06nvV199lTvvvJNVq1ZZn6HeuHEj6enpLFu2rF4DbHKcXWHcT5x2CSPztU1kZ+VTaCzH3VDnG+yFEEJcx+rUor755ps5cOAAI0eOJCcnh5ycHP74xz+yZ88e/vOf/9R3jE1P6xsI8PMjyNsFs4JdJ6QnLSGEEHVT52ZecHDwRTeN7dy5k3/96198/PHH1xxYc9Ar1IeMlEyS03OI6eBn73CEEEI0QXVqUYurUApWPEdi5nj8ySE5XTroEEIIUTeSqBuCRgNH1tIqfy99tKlyQ5kQQog6kzucGspNkyktK2frIsXpPCMZucUEebvaOyohhBBNTK0S9R//+McrLs/JybmWWJqX7vegB/zX/8LpjDx2pOUQ1EMStRBCiNqpVaL29va+6vIHHnjgmgJqbqJCfdibkUdyeg539AiydzhCCCGamFol6vnz5zdUHM3TqWT+bFzCLo0fyWkt7B2NEEKIJkhuJmtIv39E5P63GKrbSsrJXMpN13HPYkIIIepEEnVDCrO8tS1Gl0pxmYnUrHw7BySEEKKpkUTdkEL7AxCpOYSeMnlMSwghRK1Jom5Ifh3A3R89ZfTQHCFZetISQghRS5KoG5JGY+32sq+8+EQIIUQdSKJuaGGW09/R2v0cOl1AXkmZnQMSQgjRlEiibmihNwIQrTuARpnZlS49aQkhhKg5SdQNLaAH6D3wpIgITbp00CGEEKJWJFE3NJ0ThPQFLKe/5Tq1EEKI2pBE3RgqHtOqvKFMKWXngIQQQjQVkqgbQ8WLT6K1+zlTYOTE+WI7BySEEKKpkETdGFr3Bq0zAZocQjXZcvpbCCFEjUmibgzOrnDD/fzS6q+UKSdJ1EIIIWqsVr1niWtw11uc3n6CjLSd7EiTO7+FEELUjLSoG1FUiA8Au0/lUVouPWkJIYS4OknUjaidRzl3uOzGpTyP/Zl59g5HCCFEEyCJuhFp/n0XH/BPBmj3yHVqIYQQNSKJujGF3EiOS2tLl5fSk5YQQogasGuiXr9+PcOHDyc4OBiNRsOSJUvsGU7Di5vFjpFr+c48UFrUQgghasSuibqwsJCePXvy/vvv2zOMxqNzJqqNDwBHzhSSWyQ9aQkhhLgyuz6eNWzYMIYNG2bPEBqdr7ue9i0MZJ7LJflEDjd38rd3SEIIIRyYXKNubL+9yw8lD/CY01J5nloIIcRVNakXnhiNRoxGo3U6Pz/fjtHUkcELV3MhfbX7mSvXqYUQQlxFk2pRz5o1C29vb+vQtWtXe4dUe2GWnrSiNIfZm3ZaetISQghxRU0qUU+bNo3c3FzrsHfvXnuHVHt+HVHu/hg0ZYSUpHL8bJG9IxJCCOHAmlSiNhgMeHl5WQdPT097h1R7Gg2a0BuBqv6phRBCiMuxa6IuKCggOTmZ5ORkAI4ePUpycjJpaWn2DKvhhVpOf0dr90uiFkIIcUV2TdRbt26lV69e9OrVC4DJkyfTq1cvpk+fbs+wGl5YDAB9tAfYmXbWzsEIIYRwZHa963vw4MHX581UAT0wO7vjVVaIKWMvxvIBGJx09o5KCCGEA2pS16ibDZ0TmtB+AESxlz2npCctIYQQlyaJ2k40Fdep+2pTpYMOIYQQlyWJ2l4qrlNHa/eTLG8oE0IIcRmSqO2ldW/MWmcCNDlkp+23dzRCCCEclCRqe3F2xRzUC5PS4JV3gLMFxquvI4QQ4rojidqOnP74ISM8v+AnczQ7T+TYOxwhhBAOSBK1Pfl1oFNoawC5oUwIIcQlSaK2s6hQHwB2yBvKhBBCXIIkajsbkruYb/Qz8E1fidl8Hb78RQghxBVJorazgLI0emsPElWewpEzhfYORwghhIOx6ytEBei
ixvDuAW8WZrfFKz2Hjq087B2SEEIIByItantr05u8iD9xipYkp8uLT4QQQtiSRO0AokJ8AaTLSyGEEBeRU98OoLdXDuN0yzib5UNJWX9cnKUnLSGEEBbSonYAAWc384LzfxmtXcXuk7n2DkcIIYQDkUTtADRhlp60ojSH2XU8287RCCGEcCSSqB2BX0eKnFtg0JRx/uDv9o5GCCGEA5FE7Qg0GooDowFwy9xs52CEEEI4EknUDsI9fCAAEcbdZOeX2DkaIYQQjkIStYNw6WBJ1H20B9h+7KydoxFCCOEoJFE7isBIjFpXvDRFfJD0P8Z/sZ1fDp6W938LIcR1Tp6jdhQ6J8qDozGcWM9s3Qf8vvdnlu4JY757J3pGD+De6Ha09nG1d5RCCCEamSRqB+Le8244sZ6u2uN01R4HwGzU0H31v5jz81FuCvcnISSNG0I8cA7tC24t7ByxEEKIhiaJ2pFEPwSh/SEjGTJ3Y8rcTW5uDpGuwWw6co71B06TcOw1nLX7WdJuOl3jHqFTgCecPQxpmyCwO/h3BieDvb+JEEKIeiKJ2tEEdLUMgA5oASQBx84UsmhbOid/D+FgeT5z97uRum89USE+vNByHb33vWJZX+sELTtBQDcI6G5J3gE9wDPAXt9ICCHENZBE3US0benO00M7Ux77FesOnCZsSzqH92eTnJ7Dv08WUu7clR5O6biZ8iF7r2VIWVS1AXf/asm7BwRFQavOdvs+QgghakYSdRPjpNMypEsAQ7oEcDrfyLfbT/DlFndGnekPRkUQ57jVN4sRQeeIdDqB4ew+OHsICk/DkbWWASBsADy4rGrD2xaATxiExoCzix2+mRBCiEuRRN2E+Xsa+L+bO/DIoPZsPX6eL7ek88MuJz4/78fn58FJq2FIl1aMvqUlA71O43R6D2Tuhqw9ENKvakPGAvjfJEDB04erEvXx38BUCsG9wMXbHl9RCCGue5KomwGNRkN02xZEt23BjOFd+X5XBklb0tmZnsOKPVms2JNFgJeBP/WO4b4b7yPUz812A8Z86DIcCrLBvWXV/F/egEOrAI3lunfr3tD6BmjTB1p1Ayd9o35PIYS4HmmUUk32jRonTpwgJCSE9PR02rRpY+9wHE5qZj5fbkln8Y4TnC8qs86Pae/Hn/uGMLRb4JX7vv7+SUuizkm7eJnOAEGRFcm7jyWBt2gPGk0DfJPrgFKQewKy91XcY7AP8jPAt63lIKllJ/CPAN8we0faeMpLITcdNFpw9QGDN2ib4TualIKSXCg6C4VnoOiM5VJV4ZmqeeZy8Oto+Q34R1jGneW9Ck1ZbfKXJOrrgLHcxMq9WXy5JZ1fD52h8l/cy8WJEb1ac1+fELq3vsKp7YLTcGo7nNgKJ7dZhpKci8u5+MCwV6HnKMu0UpK4LydrLxz7pSopZ+8DY96V12ndGx7+uWp64weWMyCd4sDFq2HjbSjGAjh/FM4drfg8UjWeewKUuVphjeV7jvgQOt9hmXViG+z4zHKDZPRDVUWPb7QkMlcfy+/S4NV4SV4py/+PwrOWhNsyvOpMVdom2DzPkmhvmVZV/iV/MJdddpMX08CIDyDqL5bJgtNw/hj4d5LLVE1EbfKXnPq+DhicdNwVGcxdkcGcOF/E19tOsGjrCU7mFPPZxuN8tvE43YK9+HN0CH+Iao23q7PtBjz8odNQywCgFOrsYUzpWzGd2Ib21DacsnejKcnhWLELZ4+fw1hmxuPYcjpu/ycnguPY1mkSxjITJeVmSspMGC/8LDNjLDdRUvHp7aonItCDiEAvIgI8ae/vjrOuibamkhdCxk4YMBG8gi3z9n8Pa162LVf5aF2rLpbBM9jyx/fMAcsNgYE9qsqaymDlC5aW1pN7qxL1js8tz+G37GRJEH7hln3a+4Ap96Tlnge9e1WSLTfCrDbAFdoKTq6W2MuKLOVKcm0vuWTvtdwIGX67baL+7z1QVlg1rdFaEpiLT1Xyrv7p6gvhQ6uehDAWWFqzbi3A4GmZpxSkbazW6j1z6fGis5Z/l0p/WgDdRlrG8zNh99cQcmNVotZoLIncWADufuDW0jLt1tIy7e5vKXfmAJxOtQwlOeDVumofh1bCkseg7U0w9vuq+Ts+B59Qy/sV3Fva/3dQU0pB8XnL5biCLMtn0VnL79wzyPKb9gxqugeotSSJ+jrTxteNSbGdmHBrOBsOneHLrems3JPFnlN5vPDdHv7xwz76tfcDqEigFyfVymRqVp7AYGAwzpTTWZPG4SXlFLERgKedVhDpdJJt+48wbXcKAM6Us0ifyF5zW5JVB3aaO3BQtcF8idfOr9qXZR131mno4O9BpwBPIgI9iaj4bO3jilZr5z8+5UY4c7CiZbwHyoph2CtVyze+D1kp0P7mqkTdJhoi7qxKyq26WlpZNb3uX1oIvf4K549XbRPgwHLYt9S2rN7Dsu3KU+gtwy2fLdpfdIe/Uoq84nIy8orJyCnBWG7C08UZLxdnPF2cKgZn9E7V/r3MZsg/ZWkJnztS1UK+8TEIvdFS5vhv8O1DlqcNKhO1k8GSbMoKwbedJZ4WFZ++7SzjHgGW5FJuhOKcigRV7fsG9oDB0yyXCCqZyi3JqSTHsk55saVlXnzeMpy/TJ16ta5K1EfWwJd/hTZ94aGVVWU+GwEm45X+ZaoYvMDNz3KQUCmoJ9z+suU7VvfErpr/2ytlaakbqiUpU6klcflXe+TSmA/fPV417eoLLSMsrW7/zlXjXm0a/5JC5m44exACI8Gvg2Ve+mb48ZmK5JxdszMMeg+YmGxpTAAcWGE5uG070PI4ajMhp74F5wtLWbzjJF9uSSc1K79O2zA4aXFx1uHirMXgZPn00RrpxiFK9T5kuYVjcNbRsewgTx59xGbdMp0rZ726ct43kny/SApaRpFW5kNqdiGpmXkcyCqgwFh+yf2663V0qpa4Kz/9PBrg7Wxmk+WPgPV09V7LKeyzh0CZqsrpDPD3U6CrOA7+9S3Lqcmo0bat4oaw/wfLH7wzBy0tsHNHbGOr/nXQkusSzEavO/hcfw8ZuSVk5RahL83lPLYtFWfKaa05TVtNFqGaLDrosmmvyyZUk0WQOQs9F/9R3dx5KicjHsDLxZmAooO02/oS5uAbULe9iIfeyXKAVVpoaWU3pLKSqqR94Wfx+arxGx+D4CjLOilfw5LHLQdXY6q9j+DTYZbWsnv1Vm+1z+rj9nhDoNkE2or7TvJOWZ7mOJNqOaC73JkLZ7eqeyBadoJe99fuBUllxVXJtSALCrNtW8IF2ZYzDeO3VR0QfBUPe5dA3Ctw46OWeemb4V+32W7bxcdysObRynJ2oyTXclYiLwOMuZaDoBfOVH3nRQ/Cnm9h6D8hJsEy7+R2WDgavIIsZ6k8A6vGq38avBr1jEOTu0b9/vvv89prr5GZmUnPnj1599136du371XXk0Rdv5RS7DqRy96MPPQ6S+KtTMAGZy0uTraJ2FCx3OCkRVPTH3hJLhxZV3Wt+9QOKC24uJxGa0l4TnqUTs/JsVs4cLaU/Zn5tE95m9CczXxovJ2l5Z
YWW1tNBhOdFlOqnCjFGZ2zAU8Pd7w9PPD18sDP25OWPp4YDK6WP6A6veWzw61VN+XknrT80fYIqLqmmLUXNr5neaTtdKqldXYpBm/LG+UqW8e97m/U59GVUuQWl5GRW0JGbrHlM6eE7Jx8zOeO4Jp7BJ+iY4Spk3TQnKKD5hRemiIA3ikfwZvl9wEQzBl+c5nIaXyI9/kMV4Mz+SVl/Cf3QQK4fPerZUrHCdWSNBXA8YrhV3N3UlXoJctrNOBhcLK21L1cnPFytbTWK6c9XZzwcnXG29UZH1dnvN2c8XXT4+PmjKuzrua/uWthKgOd89XLObqy4qqDt9OpcHp/xSWVwxe3XCdsr2rlblsAh3+GHn+yPBkClsT30/NVifhq91ZUevpw1f+rda/B4dXQ528QafntUZJrOfPi0ari/6D/lQ92SgstMVQ/O7HpQzi+AaLHQfvBlnn7/mc5O3I1zu4Vibvi1Ppdb1UdROakg0ZDqYs/GifnerkM16QS9ZdffskDDzzAhx9+SL9+/ZgzZw6LFi0iNTWVVq1aXXFdSdTNgNlk+QNystqNall7bK/xAUw/f9HRuCnuFY60G8P+zHyKDqxj1J5Ha737w/HbCQ1tZ/mPt+xp2PwxDHoabn3eUuDENvjk1qoVnFwsLY9W3aqScqsuDXodWClFTpElCWfmFXMqp4TM3BJO5RaTmWsZz8gtobjs0i3nC7Vw1xPoaaCzZzHd9JkYWrTBLSiCQG8X2uXvIGjJvZbTouM3V630aRxk7ET5tqXMuy0lHqHku4WQ69KGs/rWZGv9ySu1nDbPLykjv6ScvAs+80vKyCsup9RkvnxwNaTXafF2syRwXze9ddzHzRmfimTu42r59HZ1xtddj4+rM276RkrwTYWpzHKZ4kzFte+zh+Du96taqN88ZHnDYWwiDHzSMu/EVvhkiO12dIaqlq91qJh2rxgPjmrQswxlJjNFpSZKykwUl5ooLjNRVGqitCgP7dlDaAoy0RVk4FSUiaEoC5eSbNyN2XiUnsbVZHsm0YSW4d7fUlAGxWUmXip9nTjNRmaUxdPl7in8ue+lD0Bro0kl6n79+hEdHc17770HgNlsJiQkhAkTJvDss89ecV1J1M1U5anKcqPl2pup1PZ606kdljuCA7pVHU3npFtOpZUbKSst4VxeAbn5BeQWFFJYWEhRcTGqvAQ95RgoQ68pR08ZD5Q+i1HnTgd/D6Zov6B//gpOdn0Yw6BJGJy1lJcU4LZ1LiUtIij0iaDYLYQyNJSbFOUmM2Xmik+Totxstsw32y4rNynKKpddZh3bcTPlZkVRaTlZeUYycospKatZcvNz1xPo7UKQtwtB3q4240HeLgR6u1z5kTyA0iLLNdDqj4KV5FluqqqHJFdSZqqWvMvJK66WyCuTe3FVks8tLiOnqIyc4jJyikopM9X9T5azToN3RQL3dXO2jl8pyXu7OaPTaKxfXYPGpho0Gsu8qnGsBwOaynlN9eAg7Xc4sQXaDbI8jgmWlu+hVRWJ2JKMzc6elJoVxnIzpeVmSk1mjGUmSk0V0xWDsdq0tWy5bbnKMsZy23VLTWZrAi4pq0rEJRXzys11/124UkKA5jyBmvMEcA4vTRH/Md1uXf6B8xxu025jfNkEbrxzLA8OaHetNdt0EnVpaSlubm58/fXXjBgxwjo/Pj6enJwcvvvuO5vyRqMRo7HqRo6TJ0/StWtXSdSiRs4WGEnNyudAZj6pWfnsz7SMF5bWrCVqb37ueoJ8XAj0ciXYx+WiJBzgVYMk3MQppSgqNVmTdm5FAj9fVEpOUWVSt4xbknvVeH205OtDZTK3jGuqjVckfOsBge1BgFZTrbwGtBUHDxqqxqk4iNBqqg4oKg8cLAcMlvmX3Fa1/WmqLddULC83K5ukWj2RXkuSrG9aDbjpnXBx1uGm1+HqrMNFr8PVWYub3sky7azDVa+1lnN1rlruoq+c1uGqr/bppMHVWYObi6HRT33b9a7vM2fOYDKZCAiwvXEhICCA/fv3X1R+1qxZzJw5s7HCE82Mn4eB/h4G+neoevuaUooT54s5UJm4s/JJzczn8OkCzMryGlZnnRYnnQYnrRZnnQYnnQZn7YXztNayOq3GMq+ijHPFMied1jq/cjuXWr9y+wZnLQFeLgR7u9LKy9Dsk3BNaDQa3A1OuBucaO1T8xd+KKUoKTNbE3pOsW2Sz71EYs8pLuV8URml5fWb4JWqdlvXRe0kx0l410LvpMWg01o+nSyf1kFXOa5Dr7P8zg26Sy2vmjY46zDotBckUS2uzk5VybQioTrrNE33DMZlNKnHs6ZNm8bkyZOt05UtaiHqSqPRENLCjZAWbgzpIl2BNlcajcbyB13vSnAtEjxYXhhkrsjVCmWTaJVS1cYBZSlTOa0qyljWrZynKjdms27ltqvKKps8rhSYK/anqn+qqm2brdMVn9XGq9a17MFcubxivrliQ+pS+1Kg02psk6eT5cbSC5Nqc0yU9mbXRN2yZUt0Oh1ZWVk287OysggMDLyovMFgwGCouhkhL6+GdxsKIUQdGZzkTIawL7u+6kmv19O7d29Wr15tnWc2m1m9ejUxMTF2jEwIIYRwDHY/9T158mTi4+Pp06cPffv2Zc6cORQWFvLggw/aOzQhhBDC7uyeqEeNGsXp06eZPn06mZmZREVFsXz58otuMBNCCCGuR3ZP1ADjx49n/Pjx9g5DCCGEcDhNtDsiIYQQ4vrgEC3qujJXPDORkZFh50iEEEKImqvMW5V57EqadKKufKyrJh14CCGEEI4mKyuL0NArvzvc7u/6vhbl5eXs2LGDgIAAtPXQn2p+fj5du3Zl7969eHp61kOE1wept7qTuqsbqbe6k7qrm/quN7PZTFZWFr169cLJ6cpt5iadqOtbXl4e3t7e5Obm4uXldfUVBCD1di2k7upG6q3upO7qxp71JjeTCSGEEA5MErUQQgjhwCRRV2MwGJgxY4bN+8TF1Um91Z3UXd1IvdWd1F3d2LPe5Bq1EEII4cCkRS2EEEI4MEnUQgghhAOTRC2EEEI4MEnUFd5//33atm2Li4sL/fr1Y/PmzfYOyeGtX7+e4cOHExwcjEajYcmSJfYOqUmYNWsW0dHReHp60qpVK0aMGEFqaqq9w2oS5s6dS2RkJF5eXnh5eRETE8OPP/5o77CanNmzZ6PRaJg0aZK9Q3F4iYmJaDQam6Fz586NGoMkauDLL79k8uTJzJgxg+3bt9OzZ0+GDh1Kdna2vUNzaIWFhfTs2ZP333/f3qE0KevWrSMhIYFNmzaxcuVKysrKuP322yksLLR3aA6vTZs2zJ49m23btrF161ZuvfVW7r77bvbs2WPv0JqMLVu28NFHHxEZGWnvUJqMbt26kZGRYR1+/fXXxg1ACdW3b1+VkJBgnTaZTCo4OFjNmjXLjlE1LYBavHixvcNokrKzsxWg1q1bZ+9QmiRfX1/1ySef2DuMJiE/P1+Fh4erlStXqptvvlk98cQT9g7J4c2YMUP17NnTrjFc9y3q0tJStm3bRmxsrHWeVqslNjaWj
Rs32jEycb3Izc0FoEWLFnaOpGkxmUwkJSVRWFhITEyMvcNpEhISErjzzjtt/t6Jqzt48CDBwcG0b9+eMWPGkJaW1qj7b9K9Z9WHM2fOYDKZCAgIsJkfEBDA/v377RSVuF6YzWYmTZrEgAED6N69u73DaRJSUlKIiYmhpKQEDw8PFi9eTNeuXe0dlsNLSkpi+/btbNmyxd6hNCn9+vVjwYIFREREkJGRwcyZM7npppvYvXt3o3Vqct0naiHsKSEhgd27dzf+Na8mLCIiguTkZHJzc/n666+Jj49n3bp1kqyvID09nSeeeIKVK1fi4uJi73CalGHDhlnHIyMj6devH2FhYXz11VeMGzeuUWK47hN1y5Yt0el01r6tK2VlZREYGGinqMT1YPz48Xz//fesX7+eNm3a2DucJkOv19OxY0cAevfuzZYtW3j77bf56KOP7ByZ49q2bRvZ2dnccMMN1nkmk4n169fz3nvvYTQa0el0doyw6fDx8aFTp04cOnSo0fZ53V+j1uv19O7dm9WrV1vnmc1mVq9eLde9RINQSjF+/HgWL17Mzz//TLt27ewdUpNmNpsxGo32DsOhDRkyhJSUFJKTk61Dnz59GDNmDMnJyZKka6GgoIDDhw8TFBTUaPu87lvUAJMnTyY+Pp4+ffrQt29f5syZQ2FhIQ8++KC9Q3NoBQUFNkeVR48eJTk5mRYtWhAaGmrHyBxbQkICX3zxBd999x2enp5kZmYC4O3tjaurq52jc2zTpk1j2LBhhIaGkp+fzxdffMHatWtZsWKFvUNzaJ6enhfdA+Hu7o6fn5/cG3EVU6ZMYfjw4YSFhXHq1ClmzJiBTqdj9OjRjRaDJGpg1KhRnD59munTp5OZmUlUVBTLly+/6AYzYWvr1q3ccsst1unJkycDEB8fz4IFC+wUleObO3cuAIMHD7aZP3/+fMaOHdv4ATUh2dnZPPDAA2RkZODt7U1kZCQrVqzgtttus3doopk6ceIEo0eP5uzZs/j7+zNw4EA2bdqEv79/o8UgvWcJIYQQDuy6v0YthBBCODJJ1EIIIYQDk0QthBBCODBJ1EIIIYQDk0QthBBCODBJ1EIIIYQDk0QthBBCODBJ1EIIIYQDk0QthLhmGo2GJUuW2DsMIZolSdRCNHFjx45Fo9FcNMTFxdk7NCFEPZB3fQvRDMTFxTF//nybeQaDwU7RCCHqk7SohWgGDAYDgYGBNoOvry9gOS09d+5chg0bhqurK+3bt+frr7+2WT8lJYVbb70VV1dX/Pz8eOSRRygoKLAp8+mnn9KtWzcMBgNBQUGMHz/eZvmZM2cYOXIkbm5uhIeHs3TpUuuy8+fPM2bMGPz9/XF1dSU8PPyiAwshxKVJohbiOvDCCy9wzz33sHPnTsaMGcOf//xn9u3bB0BhYSFDhw7F19eXLVu2sGjRIlatWmWTiOfOnUtCQgKPPPIIKSkpLF26lI4dO9rsY+bMmdx3333s2rWLO+64gzFjxnDu3Dnr/vfu3cuPP/7Ivn37mDt3Li1btmy8ChCiKVNCiCYtPj5e6XQ65e7ubjO8/PLLSimlAPXoo4/arNOvXz/12GOPKaWU+vjjj5Wvr68qKCiwLv/hhx+UVqtVmZmZSimlgoOD1XPPPXfZGAD1/PPPW6cLCgoUoH788UellFLDhw9XDz74YP18YSGuM3KNWohm4JZbbrH2c12pRYsW1vGYmBibZTExMSQnJwOwb98+evbsibu7u3X5gAEDMJvNpKamotFoOHXqFEOGDLliDJGRkdZxd3d3vLy8yM7OBuCxxx7jnnvuYfv27dx+++2MGDGC/v371+m7CnG9kUQtRDPg7u5+0ano+uLq6lqjcs7OzjbTGo0Gs9kMwLBhwzh+/DjLli1j5cqVDBkyhISEBF5//fV6j1eI5kauUQtxHdi0adNF0126dAGgS5cu7Ny5k8LCQuvyDRs2oNVqiYiIwNPTk7Zt27J69eprisHf35/4+Hj++9//MmfOHD7++ONr2p4Q1wtpUQvRDBiNRjIzM23mOTk5WW/YWrRoEX369GHgwIF8/vnnbN68mX/9618AjBkzhhkzZhAfH09iYiKnT59mwoQJ3H///QQEBACQmJjIo48+SqtWrRg2bBj5+fls2LCBCRMm1Ci+6dOn07t3b7p164bRaOT777+3HigIIa5MErUQzcDy5csJCgqymRcREcH+/fsByx3ZSUlJPP744wQFBbFw4UK6du0KgJubGytWrOCJJ54gOjoaNzc37rnnHt58803rtuLj4ykpKeGtt95iypQptGzZknvvvbfG8en1eqZNm8axY8dwdXXlpptuIikpqR6+uRDNn0YppewdhBCi4Wg0GhYvXsyIESPsHYoQog7kGrUQQgjhwCRRCyGEEA5MrlEL0czJ1S0hmjZpUQshhBAOTBK1EEII4cAkUQshhBAOTBK1EEII4cAkUQshhBAOTBK1EEII4cAkUQshhBAOTBK1EEII4cAkUQshhBAO7P8BjwYPyAUrLWAAAAAASUVORK5CYII=\n"
+ },
+ "metadata": {}
+ }
+ ],
+ "source": [
+ "from previous_chapters import plot_values\n",
+ "\n",
+ "epochs_tensor = torch.linspace(0, num_epochs, len(train_losses))\n",
+ "examples_seen_tensor = torch.linspace(0, examples_seen, len(train_losses))\n",
+ "\n",
+ "plot_values(epochs_tensor, examples_seen_tensor, train_losses, val_losses, label=\"loss\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "aa074723-e3f7-4f7e-a267-855531a037dc",
+ "metadata": {
+ "id": "aa074723-e3f7-4f7e-a267-855531a037dc"
+ },
+ "source": [
+ "- Note that we previously calculated the accuracy values on 5 batches only via the `eval_iter=5` setting; below, we calculate the accuracies on the full dataset"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 21,
+ "id": "1D2awlEq0gZi",
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "1D2awlEq0gZi",
+ "outputId": "d603eda1-d912-43eb-ec9c-af6a622510a0"
+ },
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "Training accuracy: 100.00%\n",
+ "Validation accuracy: 97.32%\n",
+ "Test accuracy: 97.33%\n"
+ ]
+ }
+ ],
+ "source": [
+ "from previous_chapters import calc_accuracy_loader\n",
+ "\n",
+ "train_accuracy = calc_accuracy_loader(train_loader, model, device)\n",
+ "val_accuracy = calc_accuracy_loader(val_loader, model, device)\n",
+ "test_accuracy = calc_accuracy_loader(test_loader, model, device)\n",
+ "\n",
+ "print(f\"Training accuracy: {train_accuracy*100:.2f}%\")\n",
+ "print(f\"Validation accuracy: {val_accuracy*100:.2f}%\")\n",
+ "print(f\"Test accuracy: {test_accuracy*100:.2f}%\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "1f87f5e6-339e-4fcf-900b-6d845d3c713d",
+ "metadata": {
+ "id": "1f87f5e6-339e-4fcf-900b-6d845d3c713d"
+ },
+ "source": [
+ "- As we can see based on the relatively high accuracy values above, the LoRA finetuning was successful"
+ ]
+ }
+ ],
+ "metadata": {
+ "accelerator": "GPU",
+ "colab": {
+ "gpuType": "V100",
+ "provenance": []
+ },
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.10.11"
}
- ],
- "source": [
- "from previous_chapters import calc_accuracy_loader\n",
- "\n",
- "train_accuracy = calc_accuracy_loader(train_loader, model, device)\n",
- "val_accuracy = calc_accuracy_loader(val_loader, model, device)\n",
- "test_accuracy = calc_accuracy_loader(test_loader, model, device)\n",
- "\n",
- "print(f\"Training accuracy: {train_accuracy*100:.2f}%\")\n",
- "print(f\"Validation accuracy: {val_accuracy*100:.2f}%\")\n",
- "print(f\"Test accuracy: {test_accuracy*100:.2f}%\")"
- ]
},
- {
- "cell_type": "markdown",
- "id": "1f87f5e6-339e-4fcf-900b-6d845d3c713d",
- "metadata": {},
- "source": [
- "- As we can see based on the relatively high accuracy values above, the LoRA finetuning was successful"
- ]
- }
- ],
- "metadata": {
- "accelerator": "GPU",
- "colab": {
- "gpuType": "V100",
- "provenance": []
- },
- "kernelspec": {
- "display_name": "Python 3 (ipykernel)",
- "language": "python",
- "name": "python3"
- },
- "language_info": {
- "codemirror_mode": {
- "name": "ipython",
- "version": 3
- },
- "file_extension": ".py",
- "mimetype": "text/x-python",
- "name": "python",
- "nbconvert_exporter": "python",
- "pygments_lexer": "ipython3",
- "version": "3.10.11"
- }
- },
- "nbformat": 4,
- "nbformat_minor": 5
-}
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
\ No newline at end of file
diff --git a/ch06/02_bonus_additional-experiments/README.md b/ch06/02_bonus_additional-experiments/README.md
index 7e011f6..61f6e07 100644
--- a/ch06/02_bonus_additional-experiments/README.md
+++ b/ch06/02_bonus_additional-experiments/README.md
@@ -19,7 +19,7 @@ For example,
| 6 | gpt2-large (774M) | pretrained | last | last_block | longest train ex. (120) | 99.52% | 98.66% | 96.67% | 1.50 min | A100 |
| 7 | gpt2-xl (1558M) | pretrained | last | last_block | longest train ex. (120) | 99.81% | 99.33% | 98.33% | 2.83 min | A100 |
| 8 | gpt2-small (124M) | random | last | all | longest train ex. (120) | 100% | 96.64% | 93.67% | 0.69 min | A100 |
-| 9 | gpt2-small (124M) | pretrained | last | LoRA | longest train ex. (120) | 99.52% | 97.99% | 97.67% | 0.75 min | A100 |
+| 9 | gpt2-small (124M) | pretrained | last | LoRA | longest train ex. (120) | 100.00% | 97.32% | 96.67% | 0.75 min | A100 |
| 10 | gpt2-small (124M) | pretrained | last | last_block | context length (1024) | 83.08% | 87.92% | 78.33% | 2.46 min | A100 |
| 11 | gpt2-small (124M) | pretrained | last | last_block | variable: no padding (batch size 1) | 100.00% | 98.66% | 98.00% | 1.75 min | A100 |
| 12 | gpt2-small (124M) | pretrained | last | last_block | variable: no padding (batch size 8) | 99.33% | 98.66% | 98.33% | 1.70 min | A100 |
@@ -41,7 +41,7 @@ You can use the following code to reproduce the experiments:
- Row 6: `python additional-experiments.py --model_size "gpt2-large (774M)"`
- Row 7: `python additional-experiments.py --model_size "gpt2-xl (1558M)"`
- Row 8: `python additional-experiments.py --weights random --trainable_layers all`
-- Row 9: `python additional-experiments.py --trainable_layers lora --lora_rank 16 --lora_alpha 8`
+- Row 9: `python additional-experiments.py --trainable_layers lora --lora_rank 16 --lora_alpha 16`
- Row 10: `python additional-experiments.py --context_length "model_context_length"`
- Row 11: `python additional-experiments.py --no_padding --batch_size 1`
- Row 12: `python additional-experiments.py --no_padding --batch_size 1 --accumulation_steps 8`
@@ -59,7 +59,7 @@ I've kept the LLM and dataset small on purpose, so you can run the training on a
3. **Training All Layers vs. Last Transformer Block (Row 1 vs. 4)**: Training all layers shows a modest improvement of ~2% over just training the last transformer block, but it requires almost three times longer in terms of training duration.
 4. **Using Larger Pretrained Models (Row 1 vs 5, and Row 1 vs. 6 and 7)**: Employing a 3x larger pretrained model leads to worse results. However, using a 5x larger model improves performance compared to the initial model, as was anticipated. Similarly, the 12x larger model improves the predictive performance even further. (The medium model was perhaps not well pretrained, or the particular finetuning configuration may not work as well for this model.)
 5. **Using a Model with Random Weights vs. Pretrained Weights (Row 1 vs. 8)**: Utilizing a model with random weights yields results that are only slightly worse (by 1.3%) than using pretrained weights.
-6. **Using LoRA (Low-Rank Adaptation) vs Training All Layers (Row 9 vs. 4)**: Keeping the model frozen and adding trainable LoRA layers (see [Appendix E](../../appendix-E/01_main-chapter-code/appendix-E.ipynb) for details) is a viable alternative to training all model parameters and even improves the performance by 1% point. As it can be seen by the 1% lower gap between the training and validation accuracy when using LoRA, this is likely due to less overfitting. Moreover, using LoRA is also slightly faster because fewer parameters have to be updated.
+6. **Using LoRA (Low-Rank Adaptation) vs Training All Layers (Row 9 vs. 4)**: Keeping the model frozen and adding trainable LoRA layers (see [Appendix E](../../appendix-E/01_main-chapter-code/appendix-E.ipynb) for details) is a viable alternative to training all model parameters and even improves the performance by 1 percentage point. As can be seen from the ~1% smaller gap between the training and validation accuracy when using LoRA, this is likely due to less overfitting. Moreover, using LoRA is also slightly faster because fewer parameters have to be updated.
 7. **Padding Input to Full Context Length vs. Longest Training Example (Row 1 vs. 10)**: Padding the input to the full supported context length results in significantly worse performance.
8. **Padding vs no padding (Row 1 vs. 11 and 12)**: The `--no_padding` option disables the padding in the dataset, which requires training the model with a batch size of 1 since the inputs have variable lengths. This results in a better test accuracy but takes longer to train. In row 12, we additionally enable gradient accumulation with 8 steps to achieve the same batch size as in the other experiments, which helps reduce overfitting and slightly boost the test set accuracy.
 9. **Disabling the causal attention mask (Row 1 vs. 13)**: Disables the causal attention mask used in the multi-head attention module. This means all tokens can attend to all other tokens. The model accuracy is slightly improved compared to the GPT model with the causal mask.
diff --git a/ch06/02_bonus_additional-experiments/additional-experiments.py b/ch06/02_bonus_additional-experiments/additional-experiments.py
index a3dd719..8228778 100644
--- a/ch06/02_bonus_additional-experiments/additional-experiments.py
+++ b/ch06/02_bonus_additional-experiments/additional-experiments.py
@@ -4,6 +4,7 @@
# Code: https://github.com/rasbt/LLMs-from-scratch
import argparse
+import math
import os
from pathlib import Path
import time
@@ -23,8 +24,8 @@ from previous_chapters import GPTModel, load_weights_into_gpt
class LoRALayer(torch.nn.Module):
def __init__(self, in_dim, out_dim, rank, alpha):
super().__init__()
- std_dev = 1 / torch.sqrt(torch.tensor(rank).float())
- self.A = torch.nn.Parameter(torch.randn(in_dim, rank) * std_dev)
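+        # Initialize A the same way torch.nn.Linear initializes its weight
+        # matrix (Kaiming-uniform with a=math.sqrt(5)); since B below starts
+        # at zero, the LoRA update AB is zero at the beginning of training.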
+ self.A = torch.nn.Parameter(torch.empty(in_dim, rank))
+ torch.nn.init.kaiming_uniform_(self.A, a=math.sqrt(5))
self.B = torch.nn.Parameter(torch.zeros(rank, out_dim))
self.alpha = alpha