# Chapter 4: Implementing a GPT Model from Scratch to Generate Text
## Main Chapter Code
- [01_main-chapter-code](01_main-chapter-code) contains the main chapter code.
## Bonus Materials
- [02_performance-analysis](02_performance-analysis) contains optional code analyzing the performance of the GPT model(s) implemented in the main chapter
- [03_kv-cache](03_kv-cache) implements a KV cache to speed up the text generation during inference
- [ch05/07_gpt_to_llama](../ch05/07_gpt_to_llama) contains a step-by-step guide for converting a GPT architecture implementation to Llama 3.2 and for loading pretrained weights from Meta AI (it may be interesting to look at alternative architectures after completing chapter 4, but you can also save that for after reading chapter 5)
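To illustrate the idea behind the KV cache mentioned above, here is a minimal, self-contained sketch (not the code from the bonus folder): during autoregressive generation, each new token's key and value vectors are appended to a cache, so attention over the prefix does not have to recompute them at every step. The `KVCache` class and `attend` helper below are hypothetical names used only for this illustration.

```python
import math

def attend(query, keys, values):
    # Scaled dot-product attention for a single query vector over cached keys/values
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]  # numerically stable softmax
    total = sum(exps)
    weights = [e / total for e in exps]
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(d)]

class KVCache:
    # Toy single-head cache: stores one key/value pair per generated token
    def __init__(self):
        self.keys, self.values = [], []

    def step(self, key, value, query):
        # Append this step's key/value, then attend over the full cached prefix;
        # without a cache, keys/values for all previous tokens would be recomputed here
        self.keys.append(key)
        self.values.append(value)
        return attend(query, self.keys, self.values)

cache = KVCache()
out1 = cache.step([1.0, 0.0], [0.5, 0.5], [1.0, 0.0])
out2 = cache.step([0.0, 1.0], [0.2, 0.8], [0.0, 1.0])
print(len(cache.keys))  # cache grows by exactly one entry per generated token
```

With only one cached entry, the softmax weight is 1.0, so the first output equals the first value vector; later steps blend all cached values. The real implementation in [03_kv-cache](03_kv-cache) applies this per attention head inside the GPT model.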
In the video below, I provide a code-along session that covers some of the chapter contents as supplementary material.
<br>
<br>
[Chapter 4 video tutorial](https://www.youtube.com/watch?v=YSAkgEarBGE)