fix typos & inconsistent texts (#269)

Co-authored-by: TRAN <you@example.com>
Thanh Tran 2024-07-17 21:34:51 +09:00 committed by GitHub
parent ee1d4730ba
commit a2bb045984
3 changed files with 3 additions and 3 deletions


@@ -705,7 +705,7 @@
" - `[BOS]` (beginning of sequence) marks the beginning of text\n",
" - `[EOS]` (end of sequence) marks where the text ends (this is usually used to concatenate multiple unrelated texts, e.g., two different Wikipedia articles or two different books, and so on)\n",
" - `[PAD]` (padding) if we train LLMs with a batch size greater than 1 (we may include multiple texts with different lengths; with the padding token we pad the shorter texts to the longest length so that all texts have an equal length)\n",
"- `[UNK]` to represent works that are not included in the vocabulary\n",
"- `[UNK]` to represent words that are not included in the vocabulary\n",
"\n",
"- Note that GPT-2 does not need any of these tokens mentioned above but only uses an `<|endoftext|>` token to reduce complexity\n",
"- The `<|endoftext|>` is analogous to the `[EOS]` token mentioned above\n",


@@ -1180,7 +1180,7 @@
"- In the original GPT-2 paper, the researchers applied weight tying, which means that they reused the token embedding layer (`tok_emb`) as the output layer, which means setting `self.out_head.weight = self.tok_emb.weight`\n",
"- The token embedding layer projects the 50,257-dimensional one-hot encoded input tokens to a 768-dimensional embedding representation\n",
"- The output layer projects 768-dimensional embeddings back into a 50,257-dimensional representation so that we can convert these back into words (more about that in the next section)\n",
"- So, the embedding and output layer have the same number of weight parameters, as we can see based on the shape of their weight matrices: the next chapter\n",
"- So, the embedding and output layer have the same number of weight parameters, as we can see based on the shape of their weight matrices\n",
"- However, a quick note about its size: we previously referred to it as a 124M parameter model; we can double check this number as follows:"
]
},
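
A short sketch of the weight-tying idea described in the hunk above, using plain PyTorch layers with the dimensions from the text; the notebook's `GPTModel` class and its config are not shown in this diff, so the example stands alone and is only illustrative:

```python
import torch.nn as nn

vocab_size, emb_dim = 50257, 768

# Token embedding (input side) and output head (LM head), as in the text
tok_emb = nn.Embedding(vocab_size, emb_dim)            # weight shape: (50257, 768)
out_head = nn.Linear(emb_dim, vocab_size, bias=False)  # weight shape: (50257, 768)

print(tok_emb.weight.shape)   # torch.Size([50257, 768])
print(out_head.weight.shape)  # torch.Size([50257, 768])

# Weight tying: reuse the token-embedding matrix as the output layer's weights
out_head.weight = tok_emb.weight

# Each matrix holds 50,257 * 768 = 38,597,376 parameters, so tying the weights
# removes that many parameters from the total; the usual way to double check a
# model's parameter count is:
# total_params = sum(p.numel() for p in model.parameters())
```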


@@ -19,7 +19,7 @@ python python_environment_check.py
<img src="https://sebastianraschka.com/images/LLMs-from-scratch-images/setup/02_installing-python-libraries/check_1.jpg" width="600px">
It's also recommended to check the versions in JupyterLab by running the `jupyter_environment_check.ipynb` in this directory, which should ideally give you the same results as above.
It's also recommended to check the versions in JupyterLab by running the `python_environment_check.ipynb` in this directory, which should ideally give you the same results as above.
<img src="https://sebastianraschka.com/images/LLMs-from-scratch-images/setup/02_installing-python-libraries/check_2.jpg" width="500px">