(If you downloaded the code bundle from the Manning website, please consider visiting the official code repository on GitHub at [https://github.com/rasbt/LLMs-from-scratch](https://github.com/rasbt/LLMs-from-scratch).)
In [*Build a Large Language Model (from Scratch)*](http://mng.bz/orYv), you'll discover how LLMs work from the inside out. In this book, I'll guide you step by step through creating your own LLM, explaining each stage with clear text, diagrams, and examples.
The approach this book takes to training and developing your own small-but-functional model for educational purposes mirrors the one used to create large-scale foundation models such as those behind ChatGPT.
Please note that the `Readme.md` file is a Markdown (`.md`) file. If you have downloaded this code bundle from the Manning website and are viewing it on your local computer, I recommend using a Markdown editor or previewer for proper viewing. If you haven't installed a Markdown editor yet, [MarkText](https://www.marktext.cc) is a good free option.
Alternatively, you can view this and other files on GitHub at [https://github.com/rasbt/LLMs-from-scratch](https://github.com/rasbt/LLMs-from-scratch).
| Ch 2: Working with Text Data | All chapter code: [ch02.ipynb](ch02/01_main-chapter-code/ch02.ipynb)<br/>Chapter takeaway: [dataloader.ipynb](ch02/01_main-chapter-code/dataloader.ipynb)<br/>[exercise-solutions.ipynb](ch02/01_main-chapter-code/exercise-solutions.ipynb) | [./ch02](./ch02) |
| Appendix A: Introduction to PyTorch | Code up to GPU training: [code-part1.ipynb](appendix-A/03_main-chapter-code/code-part1.ipynb)<br/>GPU training sections: [code-part2.ipynb](appendix-A/03_main-chapter-code/code-part2.ipynb)<br/>Multi-GPU training script: [DDP-script.py](appendix-A/03_main-chapter-code/DDP-script.py)<br/>[exercise-solutions.ipynb](https://github.com/rasbt/LLMs-from-scratch/blob/main/appendix-A/03_main-chapter-code/exercise-solutions.ipynb) | [./appendix-A](./appendix-A) |