From 8ad50a33155453459524d39dbce70b8ac59be76c Mon Sep 17 00:00:00 2001
From: rasbt
Date: Mon, 9 Sep 2024 06:31:28 -0500
Subject: [PATCH] update gpt-2 paper link

---
 ch04/01_main-chapter-code/ch04.ipynb | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/ch04/01_main-chapter-code/ch04.ipynb b/ch04/01_main-chapter-code/ch04.ipynb
index 03a575b..4a4c1da 100644
--- a/ch04/01_main-chapter-code/ch04.ipynb
+++ b/ch04/01_main-chapter-code/ch04.ipynb
@@ -106,7 +106,7 @@
    "source": [
     "- In previous chapters, we used small embedding dimensions for token inputs and outputs for ease of illustration, ensuring they fit on a single page\n",
     "- In this chapter, we consider embedding and model sizes akin to a small GPT-2 model\n",
-    "- We'll specifically code the architecture of the smallest GPT-2 model (124 million parameters), as outlined in Radford et al.'s [Language Models are Unsupervised Multitask Learners](https://www.semanticscholar.org/paper/Language-Models-are-Unsupervised-Multitask-Learners-Radford-Wu/9405cc0d6169988371b2755e573cc28650d14dfe) (note that the initial report lists it as 117M parameters, but this was later corrected in the model weight repository)\n",
+    "- We'll specifically code the architecture of the smallest GPT-2 model (124 million parameters), as outlined in Radford et al.'s [Language Models are Unsupervised Multitask Learners](https://scholar.google.com/citations?view_op=view_citation&hl=en&user=dOad5HoAAAAJ&citation_for_view=dOad5HoAAAAJ:YsMSGLbcyi4C) (note that the initial report lists it as 117M parameters, but this was later corrected in the model weight repository)\n",
     "- Chapter 6 will show how to load pretrained weights into our implementation, which will be compatible with model sizes of 345, 762, and 1542 million parameters"
    ]
   },
@@ -1271,7 +1271,7 @@
    "id": "309a3be4-c20a-4657-b4e0-77c97510b47c",
    "metadata": {},
    "source": [
-    "- Exercise: you can try the following other configurations, which are referenced in the [GPT-2 paper](https://www.semanticscholar.org/paper/Language-Models-are-Unsupervised-Multitask-Learners-Radford-Wu/9405cc0d6169988371b2755e573cc28650d14dfe), as well.\n",
+    "- Exercise: you can try the following other configurations, which are referenced in the [GPT-2 paper](https://scholar.google.com/citations?view_op=view_citation&hl=en&user=dOad5HoAAAAJ&citation_for_view=dOad5HoAAAAJ:YsMSGLbcyi4C), as well.\n",
     "\n",
     "    - **GPT2-small** (the 124M configuration we already implemented):\n",
     "    - \"emb_dim\" = 768\n",
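
For reference, here is a minimal Python sketch (not part of the patch above) of the configuration dictionaries the changed notebook cells refer to. The dictionary and key names (`GPT_CONFIG_124M`, `model_configs`, `context_length`, `drop_rate`, `qkv_bias`) are illustrative assumptions; only `"emb_dim" = 768` and the 124/345/762/1542 million-parameter sizes appear in the diff context itself, while the layer, head, and embedding counts follow the GPT-2 paper.

```python
# Hypothetical sketch: a configuration for the smallest GPT-2 model (124M parameters)
# as discussed in the notebook cell patched above. Names are illustrative assumptions.
GPT_CONFIG_124M = {
    "vocab_size": 50257,     # GPT-2 BPE vocabulary size
    "context_length": 1024,  # maximum number of input tokens
    "emb_dim": 768,          # embedding dimension (the value shown in the diff)
    "n_heads": 12,           # number of attention heads
    "n_layers": 12,          # number of transformer blocks
    "drop_rate": 0.1,        # dropout rate
    "qkv_bias": False,       # bias in query/key/value projections
}

# The larger configurations mentioned in the exercise differ only in these fields
# (values taken from the GPT-2 paper's model-size table):
model_configs = {
    "gpt2-small (124M)":   {"emb_dim": 768,  "n_layers": 12, "n_heads": 12},
    "gpt2-medium (345M)":  {"emb_dim": 1024, "n_layers": 24, "n_heads": 16},
    "gpt2-large (762M)":   {"emb_dim": 1280, "n_layers": 36, "n_heads": 20},
    "gpt2-xl (1542M)":     {"emb_dim": 1600, "n_layers": 48, "n_heads": 25},
}
```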