Chapter 3: Coding Attention Mechanisms
Main Chapter Code
- 01_main-chapter-code contains the main chapter code.
Bonus Materials
- 02_bonus_efficient-multihead-attention implements and compares several implementation variants of multi-head attention
- 03_understanding-buffers explains the idea behind PyTorch buffers, which are used to implement the causal attention mechanism in Chapter 3 (see the sketch after this list)
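
To illustrate the buffer idea mentioned above, here is a minimal, self-contained sketch of a single-head causal self-attention module that registers its mask via `register_buffer`. The class and argument names (`CausalSelfAttention`, `context_length`, `qkv_bias`) are illustrative rather than quoted from the chapter code; the point is that a registered buffer moves with the module across devices and is stored in the `state_dict` without becoming a trainable parameter.

```python
import torch
import torch.nn as nn


class CausalSelfAttention(nn.Module):
    """Minimal single-head causal self-attention (illustrative sketch)."""

    def __init__(self, d_in, d_out, context_length, dropout=0.0, qkv_bias=False):
        super().__init__()
        self.W_query = nn.Linear(d_in, d_out, bias=qkv_bias)
        self.W_key = nn.Linear(d_in, d_out, bias=qkv_bias)
        self.W_value = nn.Linear(d_in, d_out, bias=qkv_bias)
        self.dropout = nn.Dropout(dropout)
        # Registered as a buffer (not a parameter): it follows .to(device)
        # and is saved in the state_dict, but receives no gradients.
        # True above the diagonal marks "future" positions to be masked out.
        self.register_buffer(
            "mask",
            torch.triu(
                torch.ones(context_length, context_length, dtype=torch.bool),
                diagonal=1,
            ),
        )

    def forward(self, x):
        b, num_tokens, _ = x.shape
        queries = self.W_query(x)
        keys = self.W_key(x)
        values = self.W_value(x)

        attn_scores = queries @ keys.transpose(1, 2)
        # Prevent attention to future tokens before the softmax
        attn_scores.masked_fill_(self.mask[:num_tokens, :num_tokens], -torch.inf)
        attn_weights = torch.softmax(attn_scores / keys.shape[-1] ** 0.5, dim=-1)
        attn_weights = self.dropout(attn_weights)
        return attn_weights @ values


# Usage sketch
torch.manual_seed(123)
x = torch.randn(2, 6, 16)                      # (batch, tokens, d_in)
attn = CausalSelfAttention(d_in=16, d_out=32, context_length=6)
print(attn(x).shape)                           # torch.Size([2, 6, 32])
print("mask" in dict(attn.named_buffers()))    # True: the mask is a buffer
```

Because the mask depends only on the context length and is never learned, storing it as a buffer (rather than recreating it in every forward pass or registering it as a parameter) keeps it on the correct device automatically while excluding it from the optimizer.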
In the video below, I provide a code-along session that covers some of the chapter contents as supplementary material.
