# Chapter 3: Coding Attention Mechanisms
## Main Chapter Code
- [01_main-chapter-code](01_main-chapter-code) contains the main chapter code.
## Bonus Materials
- [02_bonus_efficient-multihead-attention](02_bonus_efficient-multihead-attention) implements and compares several variants of multi-head attention (a sketch of one common variant follows below)

- [03_understanding-buffers](03_understanding-buffers) explains the idea behind PyTorch buffers, which are used to implement the causal attention mechanism in chapter 3 (see the buffer sketch after the list)
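
For orientation, here is a minimal sketch of one common multi-head attention variant: a fused projection per Q/K/V whose output is reshaped into per-head chunks, rather than looping over separate single-head modules. The class and parameter names below (`MultiHeadAttention`, `num_heads`, `head_dim`) are illustrative, not the exact code in the bonus folder:

```python
import torch
import torch.nn as nn

class MultiHeadAttention(nn.Module):
    """Fused-projection variant: one projection per Q/K/V, split into heads."""
    def __init__(self, d_in, d_out, num_heads):
        super().__init__()
        assert d_out % num_heads == 0, "d_out must be divisible by num_heads"
        self.num_heads = num_heads
        self.head_dim = d_out // num_heads
        self.W_query = nn.Linear(d_in, d_out, bias=False)
        self.W_key = nn.Linear(d_in, d_out, bias=False)
        self.W_value = nn.Linear(d_in, d_out, bias=False)

    def forward(self, x):
        b, num_tokens, _ = x.shape
        # Project once, then split the output dim into (num_heads, head_dim)
        q = self.W_query(x).view(b, num_tokens, self.num_heads, self.head_dim).transpose(1, 2)
        k = self.W_key(x).view(b, num_tokens, self.num_heads, self.head_dim).transpose(1, 2)
        v = self.W_value(x).view(b, num_tokens, self.num_heads, self.head_dim).transpose(1, 2)
        # Scaled dot-product attention per head
        scores = q @ k.transpose(2, 3) / self.head_dim ** 0.5
        weights = torch.softmax(scores, dim=-1)
        # Merge the heads back into a single embedding dimension
        return (weights @ v).transpose(1, 2).contiguous().view(b, num_tokens, -1)

x = torch.randn(2, 6, 16)  # (batch, tokens, d_in)
mha = MultiHeadAttention(d_in=16, d_out=16, num_heads=4)
print(mha(x).shape)        # torch.Size([2, 6, 16])
```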
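
And a minimal sketch of the buffer idea: a causal mask registered via `register_buffer` is saved in the module's `state_dict` and moved along with `.to(device)` calls, yet is not a trainable parameter. The class below illustrates the concept and is not the chapter's exact implementation:

```python
import torch
import torch.nn as nn

class CausalAttention(nn.Module):
    """Single-head causal self-attention using a registered buffer for the mask."""
    def __init__(self, d_in, d_out, context_length, dropout=0.0):
        super().__init__()
        self.W_query = nn.Linear(d_in, d_out, bias=False)
        self.W_key = nn.Linear(d_in, d_out, bias=False)
        self.W_value = nn.Linear(d_in, d_out, bias=False)
        self.dropout = nn.Dropout(dropout)
        # Buffer: persisted and device-tracked, but excluded from gradients
        self.register_buffer(
            "mask", torch.triu(torch.ones(context_length, context_length), diagonal=1)
        )

    def forward(self, x):
        b, num_tokens, _ = x.shape
        queries, keys, values = self.W_query(x), self.W_key(x), self.W_value(x)
        attn_scores = queries @ keys.transpose(1, 2)
        # Mask out positions above the diagonal so tokens cannot attend ahead
        attn_scores.masked_fill_(
            self.mask.bool()[:num_tokens, :num_tokens], float("-inf")
        )
        attn_weights = torch.softmax(attn_scores / keys.shape[-1] ** 0.5, dim=-1)
        return self.dropout(attn_weights) @ values
```

Because the mask lives in a buffer rather than a plain attribute, calling `model.to("cuda")` moves it to the GPU automatically, avoiding device-mismatch errors.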