# Chapter 3: Coding Attention Mechanisms
 
## Main Chapter Code
- [01_main-chapter-code](01_main-chapter-code) contains the main chapter code.
 
## Bonus Materials
- [02_bonus_efficient-multihead-attention](02_bonus_efficient-multihead-attention) implements and compares several implementation variants of multi-head attention.
- [03_understanding-buffers](03_understanding-buffers) explains the idea behind PyTorch buffers, which are used to implement the causal attention mechanism in Chapter 3.

In the video below, I provide a code-along session that covers some of the chapter contents as supplementary material.
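As a quick illustration of the buffer idea mentioned above, the following is a minimal sketch (not the chapter's exact implementation; names and dimensions are illustrative) of a causal attention module that stores its mask via `register_buffer`, so the mask moves with the module across devices and is saved in the `state_dict` without being a trainable parameter:

```python
import torch
import torch.nn as nn


class CausalAttention(nn.Module):
    # Minimal sketch of causal self-attention using a registered buffer
    # for the mask; hyperparameter names here are illustrative.
    def __init__(self, d_in, d_out, context_length):
        super().__init__()
        self.W_query = nn.Linear(d_in, d_out, bias=False)
        self.W_key = nn.Linear(d_in, d_out, bias=False)
        self.W_value = nn.Linear(d_in, d_out, bias=False)
        # A buffer is non-trainable state: it follows .to(device)/.cuda()
        # calls and appears in the state_dict, but gets no gradients.
        self.register_buffer(
            "mask",
            torch.triu(torch.ones(context_length, context_length), diagonal=1),
        )

    def forward(self, x):
        b, num_tokens, _ = x.shape
        queries = self.W_query(x)
        keys = self.W_key(x)
        values = self.W_value(x)
        attn_scores = queries @ keys.transpose(1, 2)
        # Mask out future positions so each token attends only to itself
        # and earlier tokens.
        attn_scores.masked_fill_(
            self.mask.bool()[:num_tokens, :num_tokens], -torch.inf
        )
        attn_weights = torch.softmax(
            attn_scores / keys.shape[-1] ** 0.5, dim=-1
        )
        return attn_weights @ values
```

Because the mask is a buffer rather than a plain tensor attribute, calling `model.to("cuda")` moves it along with the weights, avoiding device-mismatch errors during training.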
<br>
<br>
[![Link to the video](https://img.youtube.com/vi/-Ll8DtpNtvk/0.jpg)](https://www.youtube.com/watch?v=-Ll8DtpNtvk)