diff --git a/ch02/01_main-chapter-code/ch02.ipynb b/ch02/01_main-chapter-code/ch02.ipynb
index ccec674..9bc801c 100644
--- a/ch02/01_main-chapter-code/ch02.ipynb
+++ b/ch02/01_main-chapter-code/ch02.ipynb
@@ -16,6 +16,14 @@
     "## 2.1 Understanding word embeddings"
    ]
   },
+  {
+   "cell_type": "markdown",
+   "id": "0b6816ae-e927-43a9-b4dd-e47a9b0e1cf6",
+   "metadata": {},
+   "source": [
+    "- No code in this section"
+   ]
+  },
   {
    "cell_type": "markdown",
    "id": "eddbb984-8d23-40c5-bbfa-c3c379e7eec3",
@@ -1549,6 +1557,40 @@
     "input_embeddings = token_embeddings + pos_embeddings\n",
     "print(input_embeddings.shape)"
    ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "63230f2e-258f-4497-9e2e-8deee4530364",
+   "metadata": {},
+   "source": [
+    "# Summary and takeaways"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "8b3293a6-45a5-47cd-aa00-b23e3ca0a73f",
+   "metadata": {},
+   "source": [
+    "**See the [./dataloader.ipynb](./dataloader.ipynb) code notebook**, which is a concise version of the data loader that we implemented in this chapter and will need for training the GPT model in upcoming chapters.\n",
+    "\n",
+    "**See [./exercise-solutions.ipynb](./exercise-solutions.ipynb) for the exercise solutions.**"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "ce133c96-2ec5-400d-a103-3875d1da6f31",
+   "metadata": {},
+   "outputs": [],
+   "source": []
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "8352810e-461d-4c33-9c78-3648983a9777",
+   "metadata": {},
+   "outputs": [],
+   "source": []
   }
  ],
  "metadata": {
@@ -1567,7 +1609,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.11.4"
+   "version": "3.10.12"
   }
  },
  "nbformat": 4,