{
"cells": [
{
"cell_type": "markdown",
"id": "9dec0dfb-3d60-41d0-a63a-b010dce67e32",
"metadata": {},
"source": [
"<table style=\"width:100%\">\n",
"<tr>\n",
"<td style=\"vertical-align:middle; text-align:left;\">\n",
"<font size=\"2\">\n",
"Supplementary code for the <a href=\"http://mng.bz/orYv\">Build a Large Language Model From Scratch</a> book by <a href=\"https://sebastianraschka.com\">Sebastian Raschka</a><br>\n",
"<br>Code repository: <a href=\"https://github.com/rasbt/LLMs-from-scratch\">https://github.com/rasbt/LLMs-from-scratch</a>\n",
"</font>\n",
"</td>\n",
"<td style=\"vertical-align:middle; text-align:left;\">\n",
"<a href=\"http://mng.bz/orYv\"><img src=\"https://sebastianraschka.com/images/LLMs-from-scratch-images/cover-small.webp\" width=\"100px\"></a>\n",
"</td>\n",
"</tr>\n",
"</table>"
]
},
{
"cell_type": "markdown",
"id": "5e475425-8300-43f2-a5e8-6b5d2de59925",
"metadata": {},
"source": [
"# Byte Pair Encoding (BPE) Tokenizer From Scratch"
]
},
{
"cell_type": "markdown",
"id": "a1bfc3f3-8ec1-4fd3-b378-d9a3d7807a54",
"metadata": {},
"source": [
"- This is a standalone notebook implementing the popular byte pair encoding (BPE) tokenization algorithm, which is used in models like GPT-2 to GPT-4, Llama 3, etc., from scratch for educational purposes\n",
"- For more details about the purpose of tokenization, please refer to [Chapter 2](https://github.com/rasbt/LLMs-from-scratch/blob/main/ch02/01_main-chapter-code/ch02.ipynb); this code here is bonus material explaining the BPE algorithm\n",
"- The original BPE tokenizer that OpenAI implemented for training the original GPT models can be found [here](https://github.com/openai/gpt-2/blob/master/src/encoder.py)\n",
"- The BPE algorithm was originally described in 1994: \"[A New Algorithm for Data Compression](http://www.pennelynn.com/Documents/CUJ/HTML/94HTML/19940045.HTM)\" by Philip Gage\n",
"- Most projects, including Llama 3, nowadays use OpenAI's open-source [tiktoken library](https://github.com/openai/tiktoken) due to its computational performance; it allows loading pretrained GPT-2 and GPT-4 tokenizers, for example (the Llama 3 models were trained using the GPT-4 tokenizer as well)\n",
"- The difference between the implementations above and my implementation in this notebook, besides it being is that it also includes a function for training the tokenizer (for educational purposes)\n",
"- There's also an implementation called [minBPE](https://github.com/karpathy/minbpe) with training support, which is maybe more performant (my implementation here is focused on educational purposes); in contrast to `minbpe` my implementation additionally allows loading the original OpenAI tokenizer vocabulary and BPE \"merges\" (additionally, Hugging Face tokenizers are also capable of training and loading various tokenizers; see [this GitHub discussion](https://github.com/rasbt/LLMs-from-scratch/discussions/485) by a reader who trained a BPE tokenizer on the Nepali language for more info)"
]
},
{
"cell_type": "markdown",
"id": "f62336db-f45c-4894-9167-7583095dbdf1",
"metadata": {},
"source": [
"&nbsp;\n",
"# 1. The main idea behind byte pair encoding (BPE)"
]
},
{
"cell_type": "markdown",
"id": "cd3f1231-bd42-41b5-a017-974b8c660a44",
"metadata": {},
"source": [
"- The main idea in BPE is to convert text into an integer representation (token IDs) for LLM training (see [Chapter 2](https://github.com/rasbt/LLMs-from-scratch/blob/main/ch02/01_main-chapter-code/ch02.ipynb))\n",
"\n",
"<img src=\"https://sebastianraschka.com/images/LLMs-from-scratch-images/bonus/bpe-from-scratch/bpe-overview.webp\" width=\"600px\">"
]
},
{
"cell_type": "markdown",
"id": "760c625d-26a1-4896-98a2-0fdcd1591256",
"metadata": {},
"source": [
"&nbsp;\n",
"## 1.1 Bits and bytes"
]
},
{
"cell_type": "markdown",
"id": "d4ddaa35-0ed7-4012-827e-911de11c266c",
"metadata": {},
"source": [
"- Before getting to the BPE algorithm, let's introduce the notion of bytes\n",
"- Consider converting text into a byte array (BPE stands for \"byte\" pair encoding after all):"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "8c9bc9e4-120f-4bac-8fa6-6523c568d12e",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"bytearray(b'This is some text')\n"
]
}
],
"source": [
"text = \"This is some text\"\n",
"byte_ary = bytearray(text, \"utf-8\")\n",
"print(byte_ary)"
]
},
{
"cell_type": "markdown",
"id": "dbd92a2a-9d74-4dc7-bb53-ac33d6cf2fab",
"metadata": {},
"source": [
"- When we call `list()` on a `bytearray` object, each byte is treated as an individual element, and the result is a list of integers corresponding to the byte values:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "6c586945-d459-4f9a-855d-bf73438ef0e3",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[84, 104, 105, 115, 32, 105, 115, 32, 115, 111, 109, 101, 32, 116, 101, 120, 116]\n"
]
}
],
"source": [
"ids = list(byte_ary)\n",
"print(ids)"
]
},
{
"cell_type": "markdown",
"id": "71efea37-f4c3-4cb8-bfa5-9299175faf9a",
"metadata": {},
"source": [
"- This would be a valid way to convert text into a token ID representation that we need for the embedding layer of an LLM\n",
"- However, the downside of this approach is that it is creating one ID for each character (that's a lot of IDs for a short text!)\n",
"- I.e., this means for a 17-character input text, we have to use 17 token IDs as input to the LLM:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "0d5b61d9-79a0-48b4-9b3e-64ab595c5b01",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Number of characters: 17\n",
"Number of token IDs: 17\n"
]
}
],
"source": [
"print(\"Number of characters:\", len(text))\n",
"print(\"Number of token IDs:\", len(ids))"
]
},
{
"cell_type": "markdown",
"id": "68cc833a-c0d4-4d46-9180-c0042fd6addc",
"metadata": {},
"source": [
"- If you have worked with LLMs before, you may know that the BPE tokenizers have a vocabulary where we have a token ID for whole words or subwords instead of each character\n",
"- For example, the GPT-2 tokenizer tokenizes the same text (\"This is some text\") into only 4 instead of 17 tokens: `1212, 318, 617, 2420`\n",
"- You can double-check this using the interactive [tiktoken app](https://tiktokenizer.vercel.app/?model=gpt2) or the [tiktoken library](https://github.com/openai/tiktoken):\n",
"\n",
"<img src=\"https://sebastianraschka.com/images/LLMs-from-scratch-images/bonus/bpe-from-scratch/tiktokenizer.webp\" width=\"600px\">\n",
"\n",
"```python\n",
"import tiktoken\n",
"\n",
"gpt2_tokenizer = tiktoken.get_encoding(\"gpt2\")\n",
"gpt2_tokenizer.encode(\"This is some text\")\n",
"# prints [1212, 318, 617, 2420]\n",
"```"
]
},
{
"cell_type": "markdown",
"id": "425b99de-cbfc-441c-8b3e-296a5dd7bb27",
"metadata": {},
"source": [
"- Since a byte consists of 8 bits, there are 2<sup>8</sup> = 256 possible values that a single byte can represent, ranging from 0 to 255\n",
"- You can confirm this by executing the code `bytearray(range(0, 257))`, which will warn you that `ValueError: byte must be in range(0, 256)`)\n",
"- A BPE tokenizer usually uses these 256 values as its first 256 single-character tokens; one could visually check this by running the following code:\n",
"\n",
"```python\n",
"import tiktoken\n",
"gpt2_tokenizer = tiktoken.get_encoding(\"gpt2\")\n",
"\n",
"for i in range(300):\n",
" decoded = gpt2_tokenizer.decode([i])\n",
" print(f\"{i}: {decoded}\")\n",
"\"\"\"\n",
"prints:\n",
"0: !\n",
"1: \"\n",
"2: #\n",
"...\n",
"255: <20> # <---- single character tokens up to here\n",
"256: t\n",
"257: a\n",
"...\n",
"298: ent\n",
"299: n\n",
"\"\"\"\n",
"```"
]
},
{
"cell_type": "markdown",
"id": "97ff0207-7f8e-44fa-9381-2a4bd83daab3",
"metadata": {},
"source": [
"- Above, note that entries 256 and 257 are not single-character values but double-character values (a whitespace + a letter), which is a little shortcoming of the original GPT-2 BPE Tokenizer (this has been improved in the GPT-4 tokenizer)"
]
},
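{
"cell_type": "markdown",
"id": "verify-byte-token-boundary",
"metadata": {},
"source": [
"- If you have tiktoken installed, you can verify this boundary directly; a small check (the expected strings in the comments are taken from the printout above):\n",
"\n",
"```python\n",
"import tiktoken\n",
"\n",
"gpt2_tokenizer = tiktoken.get_encoding(\"gpt2\")\n",
"print(repr(gpt2_tokenizer.decode([256])))  # ' t'  (whitespace + letter)\n",
"print(repr(gpt2_tokenizer.decode([257])))  # ' a'\n",
"```"
]
},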
{
"cell_type": "markdown",
"id": "8241c23a-d487-488d-bded-cdf054e24920",
"metadata": {},
"source": [
"&nbsp;\n",
"## 1.2 Building the vocabulary"
]
},
{
"cell_type": "markdown",
"id": "d7c2ceb7-0b3f-4a62-8dcc-07810cd8886e",
"metadata": {},
"source": [
"- The goal of the BPE tokenization algorithm is to build a vocabulary of commonly occurring subwords like `298: ent` (which can be found in *entangle, entertain, enter, entrance, entity, ...*, for example), or even complete words like \n",
"\n",
"```\n",
"318: is\n",
"617: some\n",
"1212: This\n",
"2420: text\n",
"```"
]
},
{
"cell_type": "markdown",
"id": "8c0d4420-a4c7-4813-916a-06f4f46bc3f0",
"metadata": {},
"source": [
"- The BPE algorithm was originally described in 1994: \"[A New Algorithm for Data Compression](http://www.pennelynn.com/Documents/CUJ/HTML/94HTML/19940045.HTM)\" by Philip Gage\n",
"- Before we get to the actual code implementation, the form that is used for LLM tokenizers today can be summarized as described in the following sections."
]
},
{
"cell_type": "markdown",
"id": "ebc71db9-b070-48c4-8412-81f45b308ab3",
"metadata": {},
"source": [
"&nbsp;\n",
"## 1.3 BPE algorithm outline\n",
"\n",
"**1. Identify frequent pairs**\n",
"- In each iteration, scan the text to find the most commonly occurring pair of bytes (or characters)\n",
"\n",
"**2. Replace and record**\n",
"\n",
"- Replace that pair with a new placeholder ID (one not already in use, e.g., if we start with 0...255, the first placeholder would be 256)\n",
"- Record this mapping in a lookup table\n",
"- The size of the lookup table is a hyperparameter, also called \"vocabulary size\" (for GPT-2, that's\n",
"50,257)\n",
"\n",
"**3. Repeat until no gains**\n",
"\n",
"- Keep repeating steps 1 and 2, continually merging the most frequent pairs\n",
"- Stop when no further compression is possible (e.g., no pair occurs more than once)\n",
"\n",
"**Decompression (decoding)**\n",
"\n",
"- To restore the original text, reverse the process by substituting each ID with its corresponding pair, using the lookup table\n",
"\n"
]
},
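{
"cell_type": "markdown",
"id": "bpe-merge-loop-sketch",
"metadata": {},
"source": [
"- Before walking through a concrete example, here is a minimal sketch of steps 1 and 2 operating directly on raw bytes (a simplified illustration only; the helper names `find_most_frequent_pair` and `merge_pair` are placeholders, not the implementation we build in section 2):\n",
"\n",
"```python\n",
"from collections import Counter\n",
"\n",
"def find_most_frequent_pair(ids):\n",
"    pairs = Counter(zip(ids, ids[1:]))\n",
"    return pairs.most_common(1)[0][0] if pairs else None\n",
"\n",
"def merge_pair(ids, pair, new_id):\n",
"    out, i = [], 0\n",
"    while i < len(ids):\n",
"        if i < len(ids) - 1 and (ids[i], ids[i + 1]) == pair:\n",
"            out.append(new_id)  # replace the pair with the new placeholder ID\n",
"            i += 2\n",
"        else:\n",
"            out.append(ids[i])\n",
"            i += 1\n",
"    return out\n",
"\n",
"ids = list(\"the cat in the hat\".encode(\"utf-8\"))  # start from raw byte values (0-255)\n",
"merges = {}\n",
"for new_id in range(256, 259):  # learn 3 merges for illustration\n",
"    pair = find_most_frequent_pair(ids)\n",
"    if pair is None:\n",
"        break\n",
"    ids = merge_pair(ids, pair, new_id)\n",
"    merges[pair] = new_id\n",
"\n",
"print(merges)\n",
"# {(116, 104): 256, (256, 101): 257, (257, 32): 258}, i.e., \"th\", \"the\", \"the \"\n",
"# (ties between equally frequent pairs are broken by first occurrence)\n",
"```"
]
},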
{
"cell_type": "markdown",
"id": "e9f5ac9a-3528-4186-9468-8420c7b2ac00",
"metadata": {},
"source": [
"&nbsp;\n",
"## 1.4 BPE algorithm example\n",
"\n",
"### 1.4.1 Concrete example of the encoding part (steps 1 & 2 in section 1.3)\n",
"\n",
"- Suppose we have the text (training dataset) `the cat in the hat` from which we want to build the vocabulary for a BPE tokenizer\n",
"\n",
"**Iteration 1**\n",
"\n",
"1. Identify frequent pairs\n",
" - In this text, \"th\" appears twice (at the beginning and before the second \"e\")\n",
"\n",
"2. Replace and record\n",
" - replace \"th\" with a new token ID that is not already in use, e.g., 256\n",
" - the new text is: `<256>e cat in <256>e hat`\n",
" - the new vocabulary is\n",
"\n",
"```\n",
" 0: ...\n",
" ...\n",
" 256: \"th\"\n",
"```\n",
"\n",
"**Iteration 2**\n",
"\n",
"1. **Identify frequent pairs** \n",
" - In the text `<256>e cat in <256>e hat`, the pair `<256>e` appears twice\n",
"\n",
"2. **Replace and record** \n",
" - replace `<256>e` with a new token ID that is not already in use, for example, `257`. \n",
" - The new text is:\n",
" ```\n",
" <257> cat in <257> hat\n",
" ```\n",
" - The updated vocabulary is:\n",
" ```\n",
" 0: ...\n",
" ...\n",
" 256: \"th\"\n",
" 257: \"<256>e\"\n",
" ```\n",
"\n",
"**Iteration 3**\n",
"\n",
"1. **Identify frequent pairs** \n",
" - In the text `<257> cat in <257> hat`, the pair `<257> ` appears twice (once at the beginning and once before “hat”).\n",
"\n",
"2. **Replace and record** \n",
" - replace `<257> ` with a new token ID that is not already in use, for example, `258`. \n",
" - the new text is:\n",
" ```\n",
" <258>cat in <258>hat\n",
" ```\n",
" - The updated vocabulary is:\n",
" ```\n",
" 0: ...\n",
" ...\n",
" 256: \"th\"\n",
" 257: \"<256>e\"\n",
" 258: \"<257> \"\n",
" ```\n",
" \n",
"- and so forth\n",
"\n",
"&nbsp;\n",
"### 1.4.2 Concrete example of the decoding part (step 3 in section 1.3)\n",
"\n",
"- To restore the original text, we reverse the process by substituting each token ID with its corresponding pair in the reverse order they were introduced\n",
"- Start with the final compressed text: `<258>cat in <258>hat`\n",
"- Substitute `<258>` → `<257> `: `<257> cat in <257> hat` \n",
"- Substitute `<257>` → `<256>e`: `<256>e cat in <256>e hat`\n",
"- Substitute `<256>` → \"th\": `the cat in the hat`"
]
},
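{
"cell_type": "markdown",
"id": "bpe-decode-sketch",
"metadata": {},
"source": [
"- As a tiny sketch of this reversal, reusing the three hypothetical merges (IDs 256-258) from the example above:\n",
"\n",
"```python\n",
"merges = {(116, 104): 256, (256, 101): 257, (257, 32): 258}  # \"th\", \"the\", \"the \"\n",
"\n",
"def decode_bytes(ids, merges):\n",
"    # substitute each merged ID with its pair, in reverse order of creation\n",
"    for pair, new_id in sorted(merges.items(), key=lambda kv: kv[1], reverse=True):\n",
"        expanded = []\n",
"        for i in ids:\n",
"            expanded.extend(pair if i == new_id else [i])\n",
"        ids = expanded\n",
"    return bytes(ids).decode(\"utf-8\")\n",
"\n",
"compressed = [258, 99, 97, 116, 32, 105, 110, 32, 258, 104, 97, 116]  # <258>cat in <258>hat\n",
"print(decode_bytes(compressed, merges))\n",
"# prints: the cat in the hat\n",
"```"
]
},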
{
"cell_type": "markdown",
"id": "a2324948-ddd0-45d1-8ba8-e8eda9fc6677",
"metadata": {},
"source": [
"&nbsp;\n",
"## 2. A simple BPE implementation"
]
},
{
"cell_type": "markdown",
"id": "429ca709-40d7-4e3d-bf3e-4f5687a2e19b",
"metadata": {},
"source": [
"- Below is an implementation of this algorithm described above as a Python class that mimics the `tiktoken` Python user interface\n",
"- Note that the encoding part above describes the original training step via `train()`; however, the `encode()` method works similarly (although it looks a bit more complicated because of the special token handling):\n",
"\n",
"1. Split the input text into individual bytes\n",
"2. Repeatedly find & replace (merge) adjacent tokens (pairs) when they match any pair in the learned BPE merges (from highest to lowest \"rank,\" i.e., in the order they were learned)\n",
"3. Continue merging until no more merges can be applied\n",
"4. The final list of token IDs is the encoded output"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "3e4a15ec-2667-4f56-b7c1-34e8071b621d",
"metadata": {},
"outputs": [],
"source": [
"from collections import Counter, deque\n",
"from functools import lru_cache\n",
"import json\n",
"\n",
"\n",
"class BPETokenizerSimple:\n",
" def __init__(self):\n",
" # Maps token_id to token_str (e.g., {11246: \"some\"})\n",
" self.vocab = {}\n",
" # Maps token_str to token_id (e.g., {\"some\": 11246})\n",
" self.inverse_vocab = {}\n",
" # Dictionary of BPE merges: {(token_id1, token_id2): merged_token_id}\n",
" self.bpe_merges = {}\n",
"\n",
" # For the official OpenAI GPT-2 merges, use a rank dict:\n",
" # of form {(string_A, string_B): rank}, where lower rank = higher priority\n",
" self.bpe_ranks = {}\n",
"\n",
" def train(self, text, vocab_size, allowed_special={\"<|endoftext|>\"}):\n",
" \"\"\"\n",
" Train the BPE tokenizer from scratch.\n",
"\n",
" Args:\n",
" text (str): The training text.\n",
" vocab_size (int): The desired vocabulary size.\n",
" allowed_special (set): A set of special tokens to include.\n",
" \"\"\"\n",
"\n",
" # Preprocess: Replace spaces with \"Ġ\"\n",
" # Note that Ġ is a particularity of the GPT-2 BPE implementation\n",
" # E.g., \"Hello world\" might be tokenized as [\"Hello\", \"Ġworld\"]\n",
" # (GPT-4 BPE would tokenize it as [\"Hello\", \" world\"])\n",
" processed_text = []\n",
" for i, char in enumerate(text):\n",
" if char == \" \" and i != 0:\n",
" processed_text.append(\"Ġ\")\n",
" if char != \" \":\n",
" processed_text.append(char)\n",
" processed_text = \"\".join(processed_text)\n",
"\n",
" # Initialize vocab with unique characters, including \"Ġ\" if present\n",
" # Start with the first 256 ASCII characters\n",
" unique_chars = [chr(i) for i in range(256)]\n",
" unique_chars.extend(\n",
" char for char in sorted(set(processed_text))\n",
" if char not in unique_chars\n",
" )\n",
" if \"Ġ\" not in unique_chars:\n",
" unique_chars.append(\"Ġ\")\n",
"\n",
" self.vocab = {i: char for i, char in enumerate(unique_chars)}\n",
" self.inverse_vocab = {char: i for i, char in self.vocab.items()}\n",
"\n",
" # Add allowed special tokens\n",
" if allowed_special:\n",
" for token in allowed_special:\n",
" if token not in self.inverse_vocab:\n",
" new_id = len(self.vocab)\n",
" self.vocab[new_id] = token\n",
" self.inverse_vocab[token] = new_id\n",
"\n",
" # Tokenize the processed_text into token IDs\n",
" token_ids = [self.inverse_vocab[char] for char in processed_text]\n",
"\n",
" # BPE steps 1-3: Repeatedly find and replace frequent pairs\n",
" for new_id in range(len(self.vocab), vocab_size):\n",
" pair_id = self.find_freq_pair(token_ids, mode=\"most\")\n",
" if pair_id is None:\n",
" break\n",
" token_ids = self.replace_pair(token_ids, pair_id, new_id)\n",
" self.bpe_merges[pair_id] = new_id\n",
"\n",
" # Build the vocabulary with merged tokens\n",
" for (p0, p1), new_id in self.bpe_merges.items():\n",
" merged_token = self.vocab[p0] + self.vocab[p1]\n",
" self.vocab[new_id] = merged_token\n",
" self.inverse_vocab[merged_token] = new_id\n",
"\n",
" def load_vocab_and_merges_from_openai(self, vocab_path, bpe_merges_path):\n",
" \"\"\"\n",
" Load pre-trained vocabulary and BPE merges from OpenAI's GPT-2 files.\n",
"\n",
" Args:\n",
" vocab_path (str): Path to the vocab file (GPT-2 calls it 'encoder.json').\n",
" bpe_merges_path (str): Path to the bpe_merges file (GPT-2 calls it 'vocab.bpe').\n",
" \"\"\"\n",
" # Load vocabulary\n",
" with open(vocab_path, \"r\", encoding=\"utf-8\") as file:\n",
" loaded_vocab = json.load(file)\n",
" # Convert loaded vocabulary to correct format\n",
" self.vocab = {int(v): k for k, v in loaded_vocab.items()}\n",
" self.inverse_vocab = {k: int(v) for k, v in loaded_vocab.items()}\n",
"\n",
" # Handle newline character without adding a new token\n",
" if \"\\n\" not in self.inverse_vocab:\n",
" # Use an existing token ID as a placeholder for '\\n'\n",
" # Preferentially use \"<|endoftext|>\" if available\n",
" fallback_token = next((token for token in [\"<|endoftext|>\", \"Ġ\", \"\"] if token in self.inverse_vocab), None)\n",
" if fallback_token is not None:\n",
" newline_token_id = self.inverse_vocab[fallback_token]\n",
" else:\n",
" # If no fallback token is available, raise an error\n",
" raise KeyError(\"No suitable token found in vocabulary to map '\\\\n'.\")\n",
"\n",
" self.inverse_vocab[\"\\n\"] = newline_token_id\n",
" self.vocab[newline_token_id] = \"\\n\"\n",
"\n",
" # Load GPT-2 merges and store them with an assigned \"rank\"\n",
" self.bpe_ranks = {} # reset ranks\n",
" with open(bpe_merges_path, \"r\", encoding=\"utf-8\") as file:\n",
" lines = file.readlines()\n",
" if lines and lines[0].startswith(\"#\"):\n",
" lines = lines[1:]\n",
"\n",
" rank = 0\n",
" for line in lines:\n",
" pair = tuple(line.strip().split())\n",
" if len(pair) == 2:\n",
" token1, token2 = pair\n",
" # If token1 or token2 not in vocab, skip\n",
" if token1 in self.inverse_vocab and token2 in self.inverse_vocab:\n",
" self.bpe_ranks[(token1, token2)] = rank\n",
" rank += 1\n",
" else:\n",
" print(f\"Skipping pair {pair} as one token is not in the vocabulary.\")\n",
"\n",
" def encode(self, text, allowed_special=None):\n",
" \"\"\"\n",
" Encode the input text into a list of token IDs, with tiktoken-style handling of special tokens.\n",
" \n",
" Args:\n",
" text (str): The input text to encode.\n",
" allowed_special (set or None): Special tokens to allow passthrough. If None, special handling is disabled.\n",
" \n",
" Returns:\n",
" List of token IDs.\n",
" \"\"\"\n",
" import re\n",
" \n",
" token_ids = []\n",
" \n",
" # If special token handling is enabled\n",
" if allowed_special is not None and len(allowed_special) > 0:\n",
" # Build regex to match allowed special tokens\n",
" special_pattern = (\n",
" \"(\" + \"|\".join(re.escape(tok) for tok in sorted(allowed_special, key=len, reverse=True)) + \")\"\n",
" )\n",
" \n",
" last_index = 0\n",
" for match in re.finditer(special_pattern, text):\n",
" prefix = text[last_index:match.start()]\n",
" token_ids.extend(self.encode(prefix, allowed_special=None)) # Encode prefix without special handling\n",
" \n",
" special_token = match.group(0)\n",
" if special_token in self.inverse_vocab:\n",
" token_ids.append(self.inverse_vocab[special_token])\n",
" else:\n",
" raise ValueError(f\"Special token {special_token} not found in vocabulary.\")\n",
" last_index = match.end()\n",
" \n",
" text = text[last_index:] # Remaining part to process normally\n",
" \n",
" # Check if any disallowed special tokens are in the remainder\n",
" disallowed = [\n",
" tok for tok in self.inverse_vocab\n",
" if tok.startswith(\"<|\") and tok.endswith(\"|>\") and tok in text and tok not in allowed_special\n",
" ]\n",
" if disallowed:\n",
" raise ValueError(f\"Disallowed special tokens encountered in text: {disallowed}\")\n",
" \n",
" # If no special tokens, or remaining text after special token split:\n",
" tokens = []\n",
" lines = text.split(\"\\n\")\n",
" for i, line in enumerate(lines):\n",
" if i > 0:\n",
" tokens.append(\"\\n\")\n",
" words = line.split()\n",
" for j, word in enumerate(words):\n",
" if j == 0 and i > 0:\n",
" tokens.append(\"Ġ\" + word)\n",
" elif j == 0:\n",
" tokens.append(word)\n",
" else:\n",
" tokens.append(\"Ġ\" + word)\n",
" \n",
" for token in tokens:\n",
" if token in self.inverse_vocab:\n",
" token_ids.append(self.inverse_vocab[token])\n",
" else:\n",
" token_ids.extend(self.tokenize_with_bpe(token))\n",
" \n",
" return token_ids\n",
"\n",
" def tokenize_with_bpe(self, token):\n",
" \"\"\"\n",
" Tokenize a single token using BPE merges.\n",
"\n",
" Args:\n",
" token (str): The token to tokenize.\n",
"\n",
" Returns:\n",
" List[int]: The list of token IDs after applying BPE.\n",
" \"\"\"\n",
" # Tokenize the token into individual characters (as initial token IDs)\n",
" token_ids = [self.inverse_vocab.get(char, None) for char in token]\n",
" if None in token_ids:\n",
" missing_chars = [char for char, tid in zip(token, token_ids) if tid is None]\n",
" raise ValueError(f\"Characters not found in vocab: {missing_chars}\")\n",
"\n",
" # If we haven't loaded OpenAI's GPT-2 merges, use my approach\n",
" if not self.bpe_ranks:\n",
" can_merge = True\n",
" while can_merge and len(token_ids) > 1:\n",
" can_merge = False\n",
" new_tokens = []\n",
" i = 0\n",
" while i < len(token_ids) - 1:\n",
" pair = (token_ids[i], token_ids[i + 1])\n",
" if pair in self.bpe_merges:\n",
" merged_token_id = self.bpe_merges[pair]\n",
" new_tokens.append(merged_token_id)\n",
" # Uncomment for educational purposes:\n",
" # print(f\"Merged pair {pair} -> {merged_token_id} ('{self.vocab[merged_token_id]}')\")\n",
" i += 2 # Skip the next token as it's merged\n",
" can_merge = True\n",
" else:\n",
" new_tokens.append(token_ids[i])\n",
" i += 1\n",
" if i < len(token_ids):\n",
" new_tokens.append(token_ids[i])\n",
" token_ids = new_tokens\n",
" return token_ids\n",
"\n",
" # Otherwise, do GPT-2-style merging with the ranks:\n",
" # 1) Convert token_ids back to string \"symbols\" for each ID\n",
" symbols = [self.vocab[id_num] for id_num in token_ids]\n",
"\n",
" # Repeatedly merge all occurrences of the lowest-rank pair\n",
" while True:\n",
" # Collect all adjacent pairs\n",
" pairs = set(zip(symbols, symbols[1:]))\n",
" if not pairs:\n",
" break\n",
"\n",
" # Find the pair with the best (lowest) rank\n",
" min_rank = float(\"inf\")\n",
" bigram = None\n",
" for p in pairs:\n",
" r = self.bpe_ranks.get(p, float(\"inf\"))\n",
" if r < min_rank:\n",
" min_rank = r\n",
" bigram = p\n",
"\n",
" # If no valid ranked pair is present, we're done\n",
" if bigram is None or bigram not in self.bpe_ranks:\n",
" break\n",
"\n",
" # Merge all occurrences of that pair\n",
" first, second = bigram\n",
" new_symbols = []\n",
" i = 0\n",
" while i < len(symbols):\n",
" # If we see (first, second) at position i, merge them\n",
" if i < len(symbols) - 1 and symbols[i] == first and symbols[i+1] == second:\n",
" new_symbols.append(first + second) # merged symbol\n",
" i += 2\n",
" else:\n",
" new_symbols.append(symbols[i])\n",
" i += 1\n",
" symbols = new_symbols\n",
"\n",
" if len(symbols) == 1:\n",
" break\n",
"\n",
" # Finally, convert merged symbols back to IDs\n",
" merged_ids = [self.inverse_vocab[sym] for sym in symbols]\n",
" return merged_ids\n",
"\n",
" def decode(self, token_ids):\n",
" \"\"\"\n",
" Decode a list of token IDs back into a string.\n",
"\n",
" Args:\n",
" token_ids (List[int]): The list of token IDs to decode.\n",
"\n",
" Returns:\n",
" str: The decoded string.\n",
" \"\"\"\n",
" decoded_string = \"\"\n",
" for i, token_id in enumerate(token_ids):\n",
" if token_id not in self.vocab:\n",
" raise ValueError(f\"Token ID {token_id} not found in vocab.\")\n",
" token = self.vocab[token_id]\n",
" if token == \"\\n\":\n",
" if decoded_string and not decoded_string.endswith(\" \"):\n",
" decoded_string += \" \" # Add space if not present before a newline\n",
" decoded_string += token\n",
" elif token.startswith(\"Ġ\"):\n",
" decoded_string += \" \" + token[1:]\n",
" else:\n",
" decoded_string += token\n",
" return decoded_string\n",
"\n",
" def save_vocab_and_merges(self, vocab_path, bpe_merges_path):\n",
" \"\"\"\n",
" Save the vocabulary and BPE merges to JSON files.\n",
"\n",
" Args:\n",
" vocab_path (str): Path to save the vocabulary.\n",
" bpe_merges_path (str): Path to save the BPE merges.\n",
" \"\"\"\n",
" # Save vocabulary\n",
" with open(vocab_path, \"w\", encoding=\"utf-8\") as file:\n",
" json.dump(self.vocab, file, ensure_ascii=False, indent=2)\n",
"\n",
" # Save BPE merges as a list of dictionaries\n",
" with open(bpe_merges_path, \"w\", encoding=\"utf-8\") as file:\n",
" merges_list = [{\"pair\": list(pair), \"new_id\": new_id}\n",
" for pair, new_id in self.bpe_merges.items()]\n",
" json.dump(merges_list, file, ensure_ascii=False, indent=2)\n",
"\n",
" def load_vocab_and_merges(self, vocab_path, bpe_merges_path):\n",
" \"\"\"\n",
" Load the vocabulary and BPE merges from JSON files.\n",
"\n",
" Args:\n",
" vocab_path (str): Path to the vocabulary file.\n",
" bpe_merges_path (str): Path to the BPE merges file.\n",
" \"\"\"\n",
" # Load vocabulary\n",
" with open(vocab_path, \"r\", encoding=\"utf-8\") as file:\n",
" loaded_vocab = json.load(file)\n",
" self.vocab = {int(k): v for k, v in loaded_vocab.items()}\n",
" self.inverse_vocab = {v: int(k) for k, v in loaded_vocab.items()}\n",
"\n",
" # Load BPE merges\n",
" with open(bpe_merges_path, \"r\", encoding=\"utf-8\") as file:\n",
" merges_list = json.load(file)\n",
" for merge in merges_list:\n",
" pair = tuple(merge[\"pair\"])\n",
" new_id = merge[\"new_id\"]\n",
" self.bpe_merges[pair] = new_id\n",
"\n",
" @lru_cache(maxsize=None)\n",
" def get_special_token_id(self, token):\n",
" return self.inverse_vocab.get(token, None)\n",
"\n",
" @staticmethod\n",
" def find_freq_pair(token_ids, mode=\"most\"):\n",
" pairs = Counter(zip(token_ids, token_ids[1:]))\n",
"\n",
" if not pairs:\n",
" return None\n",
"\n",
" if mode == \"most\":\n",
" return max(pairs.items(), key=lambda x: x[1])[0]\n",
" elif mode == \"least\":\n",
" return min(pairs.items(), key=lambda x: x[1])[0]\n",
" else:\n",
" raise ValueError(\"Invalid mode. Choose 'most' or 'least'.\")\n",
"\n",
" @staticmethod\n",
" def replace_pair(token_ids, pair_id, new_id):\n",
" dq = deque(token_ids)\n",
" replaced = []\n",
"\n",
" while dq:\n",
" current = dq.popleft()\n",
" if dq and (current, dq[0]) == pair_id:\n",
" replaced.append(new_id)\n",
" # Remove the 2nd token of the pair, 1st was already removed\n",
" dq.popleft()\n",
" else:\n",
" replaced.append(current)\n",
"\n",
" return replaced"
]
},
{
"cell_type": "markdown",
"id": "46db7310-79c7-4ee0-b5fa-d760c6e1aa67",
"metadata": {},
"source": [
"- There is a lot of code in the `BPETokenizerSimple` class above, and discussing it in detail is out of scope for this notebook, but the next section offers a short overview of the usage to understand the class methods a bit better"
]
},
{
"cell_type": "markdown",
"id": "8ffe1836-eed4-40dc-860b-2d23074d067e",
"metadata": {},
"source": [
"## 3. BPE implementation walkthrough"
]
},
{
"cell_type": "markdown",
"id": "3c7c996c-fd34-484f-a877-13d977214cf7",
"metadata": {},
"source": [
"- In practice, I highly recommend using [tiktoken](https://github.com/openai/tiktoken) as my implementation above focuses on readability and educational purposes, not on performance\n",
"- However, the usage is more or less similar to tiktoken, except that tiktoken does not have a training method\n",
"- Let's see how my `BPETokenizerSimple` Python code above works by looking at some examples below (a detailed code discussion is out of scope for this notebook)"
]
},
{
"cell_type": "markdown",
"id": "e82acaf6-7ed5-4d3b-81c0-ae4d3559d2c7",
"metadata": {},
"source": [
"### 3.1 Training, encoding, and decoding"
]
},
{
"cell_type": "markdown",
"id": "962bf037-903e-4555-b09c-206e1a410278",
"metadata": {},
"source": [
"- First, let's consider some sample text as our training dataset:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "51872c08-e01b-40c3-a8a0-e8d6a773e3df",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"the-verdict.txt already exists in ./the-verdict.txt\n"
]
}
],
"source": [
"import os\n",
"import urllib.request\n",
"\n",
"def download_file_if_absent(url, filename, search_dirs):\n",
" for directory in search_dirs:\n",
" file_path = os.path.join(directory, filename)\n",
" if os.path.exists(file_path):\n",
" print(f\"{filename} already exists in {file_path}\")\n",
" return file_path\n",
"\n",
" target_path = os.path.join(search_dirs[0], filename)\n",
" try:\n",
" with urllib.request.urlopen(url) as response, open(target_path, \"wb\") as out_file:\n",
" out_file.write(response.read())\n",
" print(f\"Downloaded {filename} to {target_path}\")\n",
" except Exception as e:\n",
" print(f\"Failed to download {filename}. Error: {e}\")\n",
" return target_path\n",
"\n",
"verdict_path = download_file_if_absent(\n",
" url=(\n",
" \"https://raw.githubusercontent.com/rasbt/\"\n",
" \"LLMs-from-scratch/main/ch02/01_main-chapter-code/\"\n",
" \"the-verdict.txt\"\n",
" ),\n",
" filename=\"the-verdict.txt\",\n",
" search_dirs=\".\"\n",
")\n",
"\n",
"with open(verdict_path, \"r\", encoding=\"utf-8\") as f: # added ../01_main-chapter-code/\n",
" text = f.read()"
]
},
{
"cell_type": "markdown",
"id": "04d1b6ac-71d3-4817-956a-9bc7e463a84a",
"metadata": {},
"source": [
"- Next, let's initialize and train the BPE tokenizer with a vocabulary size of 1,000\n",
"- Note that the vocabulary size is already 256 by default due to the byte values discussed earlier, so we are only \"learning\" 744 vocabulary entries (if we consider the `<|endoftext|>` special token and the `Ġ` whitespace token; so, that's 742 to be precise)\n",
"- For comparison, the GPT-2 vocabulary is 50,257 tokens, the GPT-4 vocabulary is 100,256 tokens (`cl100k_base` in tiktoken), and GPT-4o uses 199,997 tokens (`o200k_base` in tiktoken); they have all much bigger training sets compared to our simple example text above"
]
},
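{
"cell_type": "markdown",
"id": "tiktoken-vocab-size-check",
"metadata": {},
"source": [
"- If you have tiktoken installed, you can look up these vocabulary sizes via the `n_vocab` attribute; note that `n_vocab` also counts special tokens (and, for the newer encodings, reserved IDs), so the numbers can be slightly larger than the regular token counts listed above:\n",
"\n",
"```python\n",
"import tiktoken\n",
"\n",
"for name in (\"gpt2\", \"cl100k_base\", \"o200k_base\"):\n",
"    enc = tiktoken.get_encoding(name)\n",
"    print(f\"{name}: {enc.n_vocab}\")\n",
"# gpt2: 50257\n",
"# ...\n",
"```"
]
},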
{
"cell_type": "code",
"execution_count": 6,
"id": "027348fd-d52f-4396-93dd-38eed142df9b",
"metadata": {},
"outputs": [],
"source": [
"tokenizer = BPETokenizerSimple()\n",
"tokenizer.train(text, vocab_size=1000, allowed_special={\"<|endoftext|>\"})"
]
},
{
"cell_type": "markdown",
"id": "2474ff05-5629-4f13-9e03-a47b1e713850",
"metadata": {},
"source": [
"- You may want to inspect the vocabulary contents (but note it will create a long list)"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "f705a283-355e-4460-b940-06bbc2ae4e61",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"1000\n"
]
}
],
"source": [
"# print(tokenizer.vocab)\n",
"print(len(tokenizer.vocab))"
]
},
{
"cell_type": "markdown",
"id": "36c9da0f-8a18-41cd-91ea-9ccc2bb5febb",
"metadata": {},
"source": [
"- This vocabulary is created by merging 742 times (`= 1000 - len(range(0, 256)) - len(special_tokens) - \"Ġ\" = 1000 - 256 - 1 - 1 = 742`)"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "3da42d1c-f75c-4ba7-a6c5-4cb8543d4a44",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"742\n"
]
}
],
"source": [
"print(len(tokenizer.bpe_merges))"
]
},
{
"cell_type": "markdown",
"id": "5dac69c9-8413-482a-8148-6b2afbf1fb89",
"metadata": {},
"source": [
"- This means that the first 256 entries are single-character tokens"
]
},
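{
"cell_type": "markdown",
"id": "single-char-token-check",
"metadata": {},
"source": [
"- We can quickly verify this with a small sanity check on the trained `tokenizer` object from above:\n",
"\n",
"```python\n",
"print(all(len(tokenizer.vocab[token_id]) == 1 for token_id in range(256)))\n",
"# prints True\n",
"```"
]
},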
{
"cell_type": "markdown",
"id": "451a4108-7c8b-4b98-9c67-d622e9cdf250",
"metadata": {},
"source": [
"- Next, let's use the created merges via the `encode` method to encode some text:"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "e1db5cce-e015-412b-ad56-060b8b638078",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[424, 256, 654, 531, 302, 311, 256, 296, 97, 465, 121, 595, 841, 116, 287, 466, 256, 326, 972, 46]\n"
]
}
],
"source": [
"input_text = \"Jack embraced beauty through art and life.\"\n",
"token_ids = tokenizer.encode(input_text)\n",
"print(token_ids)"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "78249752-38d7-47b9-b259-912bcc093dc4",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[424, 256, 654, 531, 302, 311, 256, 296, 97, 465, 121, 595, 841, 116, 287, 466, 256, 326, 972, 46, 60, 124, 271, 683, 102, 116, 461, 116, 124, 62]\n"
]
}
],
"source": [
"input_text = \"Jack embraced beauty through art and life.<|endoftext|> \"\n",
"token_ids = tokenizer.encode(input_text)\n",
"print(token_ids)"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "0331d37d-49a3-44f7-9aa9-9834e0938741",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[424, 256, 654, 531, 302, 311, 256, 296, 97, 465, 121, 595, 841, 116, 287, 466, 256, 326, 972, 46, 257]\n"
]
}
],
"source": [
"input_text = \"Jack embraced beauty through art and life.<|endoftext|> \"\n",
"token_ids = tokenizer.encode(input_text, allowed_special={\"<|endoftext|>\"})\n",
"print(token_ids)"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "1ed1b344-f7d4-4e9e-ac34-2a04b5c5b7a8",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Number of characters: 56\n",
"Number of token IDs: 21\n"
]
}
],
"source": [
"print(\"Number of characters:\", len(input_text))\n",
"print(\"Number of token IDs:\", len(token_ids))"
]
},
{
"cell_type": "markdown",
"id": "50c1cfb9-402a-4e1e-9678-0b7547406248",
"metadata": {},
"source": [
"- From the lengths above, we can see that a 42-character sentence was encoded into 20 token IDs, effectively cutting the input length roughly in half compared to a character-byte-based encoding"
]
},
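{
"cell_type": "markdown",
"id": "compression-ratio-check",
"metadata": {},
"source": [
"- As a quick sanity check of this claim, we can compute the compression ratio from the `input_text` and `token_ids` defined above:\n",
"\n",
"```python\n",
"print(f\"Compression ratio: {len(input_text) / len(token_ids):.2f}\")\n",
"# prints 2.67 (56 characters / 21 token IDs)\n",
"```"
]
},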
{
"cell_type": "markdown",
"id": "252693ee-e806-4dac-ab76-2c69086360f4",
"metadata": {},
"source": [
"- Note that the vocabulary itself is used in the `decode()` method, which allows us to map the token IDs back into text:"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "da0e1faf-1933-43d9-b681-916c282a8f86",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[424, 256, 654, 531, 302, 311, 256, 296, 97, 465, 121, 595, 841, 116, 287, 466, 256, 326, 972, 46, 257]\n"
]
}
],
"source": [
"print(token_ids)"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "8b690e83-5d6b-409a-804e-321c287c24a4",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Jack embraced beauty through art and life.<|endoftext|>\n"
]
}
],
"source": [
"print(tokenizer.decode(token_ids))"
]
},
{
"cell_type": "markdown",
"id": "adea5d09-e5ef-4721-994b-b9b25662fa0a",
"metadata": {},
"source": [
"- Iterating over each token ID can give us a better understanding of how the token IDs are decoded via the vocabulary:"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "2b9e6289-92cb-4d88-b3c8-e836d7c8095f",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"424 -> Jack\n",
"256 -> \n",
"654 -> em\n",
"531 -> br\n",
"302 -> ac\n",
"311 -> ed\n",
"256 -> \n",
"296 -> be\n",
"97 -> a\n",
"465 -> ut\n",
"121 -> y\n",
"595 -> through\n",
"841 -> ar\n",
"116 -> t\n",
"287 -> a\n",
"466 -> nd\n",
"256 -> \n",
"326 -> li\n",
"972 -> fe\n",
"46 -> .\n",
"257 -> <|endoftext|>\n"
]
}
],
"source": [
"for token_id in token_ids:\n",
" print(f\"{token_id} -> {tokenizer.decode([token_id])}\")"
]
},
{
"cell_type": "markdown",
"id": "5ea41c6c-5538-4fd5-8b5f-195960853b71",
"metadata": {},
"source": [
"- As we can see, most token IDs represent 2-character subwords; that's because the training data text is very short with not that many repetitive words, and because we used a relatively small vocabulary size"
]
},
{
"cell_type": "markdown",
"id": "600055a3-7ec8-4abf-b88a-c4186fb71463",
"metadata": {},
"source": [
"- As a summary, calling `decode(encode())` should be able to reproduce arbitrary input texts:"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "c7056cb1-a9a3-4cf6-8364-29fb493ae240",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'This is some text.'"
]
},
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"tokenizer.decode(\n",
" tokenizer.encode(\"This is some text.\")\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "37bc6753-8f35-4ec7-b23e-df4a12103cb4",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'This is some text with \\n newline characters.'"
]
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"tokenizer.decode(\n",
" tokenizer.encode(\"This is some text with \\n newline characters.\")\n",
")"
]
},
{
"cell_type": "markdown",
"id": "a63b42bb-55bc-4c9d-b859-457a28b76302",
"metadata": {},
"source": [
"### 3.2 Saving and loading the tokenizer"
]
},
{
"cell_type": "markdown",
"id": "86210925-06dc-4e8c-87bd-821569cd7142",
"metadata": {},
"source": [
"- Next, let's look at how we can save the trained tokenizer for reuse later:"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "955181cb-0910-4c6a-9c22-d8292a3ec1fc",
"metadata": {},
"outputs": [],
"source": [
"# Save trained tokenizer\n",
"tokenizer.save_vocab_and_merges(vocab_path=\"vocab.json\", bpe_merges_path=\"bpe_merges.txt\")"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "6e5ccfe7-ac67-42f3-b727-87886a8867f1",
"metadata": {},
"outputs": [],
"source": [
"# Load tokenizer\n",
"tokenizer2 = BPETokenizerSimple()\n",
"tokenizer2.load_vocab_and_merges(vocab_path=\"vocab.json\", bpe_merges_path=\"bpe_merges.txt\")"
]
},
{
"cell_type": "markdown",
"id": "e7f9bcc2-3b27-4473-b75e-4f289d52a7cc",
"metadata": {},
"source": [
"- The loaded tokenizer should be able to produce the same results as before:"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "00d9bf8f-756f-48bf-81b8-b890e2c2ef13",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Jack embraced beauty through art and life.<|endoftext|>\n"
]
}
],
"source": [
"print(tokenizer2.decode(token_ids))"
]
},
{
"cell_type": "code",
"execution_count": 21,
"id": "e7addb64-2892-4e1c-85dd-4f5152740099",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'This is some text with \\n newline characters.'"
]
},
"execution_count": 21,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"tokenizer2.decode(\n",
" tokenizer2.encode(\"This is some text with \\n newline characters.\")\n",
")"
]
},
{
"cell_type": "markdown",
"id": "b24d10b2-1ab8-44ee-b51a-14248e30d662",
"metadata": {},
"source": [
"&nbsp;\n",
"### 3.3 Loading the original GPT-2 BPE tokenizer from OpenAI"
]
},
{
"cell_type": "markdown",
"id": "df07e031-9495-4af1-929f-3f16cbde82a5",
"metadata": {},
"source": [
"- Finally, let's load OpenAI's GPT-2 tokenizer files"
]
},
{
"cell_type": "code",
"execution_count": 22,
"id": "b45b4366-2c2b-4309-9a14-febf3add8512",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"vocab.bpe already exists in ../02_bonus_bytepair-encoder/gpt2_model/vocab.bpe\n",
"encoder.json already exists in ../02_bonus_bytepair-encoder/gpt2_model/encoder.json\n"
]
}
],
"source": [
"# Download files if not already present in this directory\n",
"\n",
"# Define the directories to search and the files to download\n",
"search_directories = [\".\", \"../02_bonus_bytepair-encoder/gpt2_model/\"]\n",
"\n",
"files_to_download = {\n",
" \"https://openaipublic.blob.core.windows.net/gpt-2/models/124M/vocab.bpe\": \"vocab.bpe\",\n",
" \"https://openaipublic.blob.core.windows.net/gpt-2/models/124M/encoder.json\": \"encoder.json\"\n",
"}\n",
"\n",
"# Ensure directories exist and download files if needed\n",
"paths = {}\n",
"for url, filename in files_to_download.items():\n",
" paths[filename] = download_file_if_absent(url, filename, search_directories)"
]
},
{
"cell_type": "markdown",
"id": "3fe260a0-1d5f-4bbd-9934-5117052764d1",
"metadata": {},
"source": [
"- Next, we load the files via the `load_vocab_and_merges_from_openai` method:"
]
},
{
"cell_type": "code",
"execution_count": 23,
"id": "74306e6c-47d3-45a3-9e0f-93f7303ef601",
"metadata": {},
"outputs": [],
"source": [
"tokenizer_gpt2 = BPETokenizerSimple()\n",
"tokenizer_gpt2.load_vocab_and_merges_from_openai(\n",
" vocab_path=paths[\"encoder.json\"], bpe_merges_path=paths[\"vocab.bpe\"]\n",
")"
]
},
{
"cell_type": "markdown",
"id": "e1d012ce-9e87-47d7-8a1b-b6d6294d76c0",
"metadata": {},
"source": [
"- The vocabulary size should be `50257` as we can confirm via the code below:"
]
},
{
"cell_type": "code",
"execution_count": 24,
"id": "2bb722b4-dbf5-4a0c-9120-efda3293f132",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"50257"
]
},
"execution_count": 24,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"len(tokenizer_gpt2.vocab)"
]
},
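{
"cell_type": "markdown",
"id": "endoftext-id-check",
"metadata": {},
"source": [
"- The last entry, ID 50256, is the `<|endoftext|>` special token:\n",
"\n",
"```python\n",
"print(tokenizer_gpt2.vocab[50256])\n",
"# prints: <|endoftext|>\n",
"```"
]
},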
{
"cell_type": "markdown",
"id": "7ea44b45-f524-44b5-a53a-f6d7f483fc19",
"metadata": {},
"source": [
"- We can now use the GPT-2 tokenizer via our `BPETokenizerSimple` object:"
]
},
{
"cell_type": "code",
"execution_count": 25,
"id": "e4866de7-fb32-4dd6-a878-469ec734641c",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[1212, 318, 617, 2420]\n"
]
}
],
"source": [
"input_text = \"This is some text\"\n",
"token_ids = tokenizer_gpt2.encode(input_text)\n",
"print(token_ids)"
]
},
{
"cell_type": "code",
"execution_count": 26,
"id": "3da8d9b2-af55-4b09-95d7-fabd983e919e",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"This is some text\n"
]
}
],
"source": [
"print(tokenizer_gpt2.decode(token_ids))"
]
},
{
"cell_type": "markdown",
"id": "b3b1e2dc-f69b-4533-87ef-549e6fb9b5a0",
"metadata": {},
"source": [
"- You can double-check that this produces the correct tokens using the interactive [tiktoken app](https://tiktokenizer.vercel.app/?model=gpt2) or the [tiktoken library](https://github.com/openai/tiktoken):\n",
"\n",
"<img src=\"https://sebastianraschka.com/images/LLMs-from-scratch-images/bonus/bpe-from-scratch/tiktokenizer.webp\" width=\"600px\">\n",
"\n",
"```python\n",
"import tiktoken\n",
"\n",
"gpt2_tokenizer = tiktoken.get_encoding(\"gpt2\")\n",
"gpt2_tokenizer.encode(\"This is some text\")\n",
"# prints [1212, 318, 617, 2420]\n",
"```\n"
]
},
{
"cell_type": "markdown",
"id": "3558af04-483c-4f6b-88f5-a534f37316cd",
"metadata": {},
"source": [
"&nbsp;\n",
"# 4. Conclusion"
]
},
{
"cell_type": "markdown",
"id": "410ed0e6-ad06-4bb3-bb39-6b8110c1caa4",
"metadata": {},
"source": [
"- That's it! That's how BPE works in a nutshell, complete with a training method for creating new tokenizers or loading the GPT-2 tokenizer vocabular and merges from the original OpenAI GPT-2 model\n",
"- I hope you found this brief tutorial useful for educational purposes; if you have any questions, please feel free to open a new Discussion [here](https://github.com/rasbt/LLMs-from-scratch/discussions/categories/q-a)\n",
"- For a performance comparison with other tokenizer implementations, please see [this notebook](https://github.com/rasbt/LLMs-from-scratch/blob/main/ch02/02_bonus_bytepair-encoder/compare-bpe-tiktoken.ipynb)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.16"
}
},
"nbformat": 4,
"nbformat_minor": 5
}