# More Efficient Multi-Head Attention Implementations
- mha-implementations.ipynb contains and compares different implementations of multi-head attention
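For a flavor of what the notebook compares, below is a minimal sketch of two such variants, assuming a causal (GPT-style) setup. The class names `MHAFromScratch` and `MHAFused` are illustrative, not the notebook's: one implementation materializes the attention-score matrix explicitly, the other delegates to PyTorch's fused `torch.nn.functional.scaled_dot_product_attention` kernel.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MHAFromScratch(nn.Module):
    """Baseline sketch: scores, causal mask, and softmax computed explicitly."""

    def __init__(self, d_in, d_out, num_heads, qkv_bias=False):
        super().__init__()
        assert d_out % num_heads == 0, "d_out must be divisible by num_heads"
        self.num_heads = num_heads
        self.head_dim = d_out // num_heads
        self.qkv = nn.Linear(d_in, 3 * d_out, bias=qkv_bias)  # fused Q/K/V projection
        self.proj = nn.Linear(d_out, d_out)

    def _split_heads(self, x):
        # (b, n, 3*d_out) -> three tensors of shape (b, num_heads, n, head_dim)
        b, n, _ = x.shape
        qkv = self.qkv(x).view(b, n, 3, self.num_heads, self.head_dim)
        return qkv.permute(2, 0, 3, 1, 4)

    def forward(self, x):
        b, n, _ = x.shape
        q, k, v = self._split_heads(x)
        # Explicit attention: scaled scores -> causal mask -> softmax -> weighted sum
        scores = q @ k.transpose(-2, -1) / self.head_dim**0.5
        mask = torch.triu(torch.ones(n, n, dtype=torch.bool, device=x.device), 1)
        scores = scores.masked_fill(mask, float("-inf"))
        ctx = torch.softmax(scores, dim=-1) @ v  # (b, num_heads, n, head_dim)
        return self.proj(ctx.transpose(1, 2).reshape(b, n, -1))


class MHAFused(MHAFromScratch):
    """Same layout, but one fused call replaces the mask/softmax/matmul chain."""

    def forward(self, x):
        b, n, _ = x.shape
        q, k, v = self._split_heads(x)
        ctx = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        return self.proj(ctx.transpose(1, 2).reshape(b, n, -1))


x = torch.randn(2, 128, 256)
print(MHAFromScratch(256, 256, num_heads=4)(x).shape)  # torch.Size([2, 128, 256])
print(MHAFused(256, 256, num_heads=4)(x).shape)        # torch.Size([2, 128, 256])
```

The fused call can dispatch to optimized kernels such as FlashAttention on supported hardware and avoids materializing the full attention-weight matrix in eager mode, which is typically where the performance differences in such comparisons come from.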
## Summary
The figures below summarize the performance benchmarks (lower is better).
### Forward pass only

*(benchmark figure: forward-pass runtimes across implementations)*

### Forward and backward pass

*(benchmark figure: forward- and backward-pass runtimes across implementations)*
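
The measurements themselves live in the notebook; as a rough illustration of how such numbers can be collected, here is a hedged sketch of a timing loop (the `benchmark` helper is hypothetical, not the notebook's harness). GPU kernels launch asynchronously, so the sketch synchronizes CUDA before reading the clock, and it warms up first so one-time costs do not skew the average.

```python
import time

import torch


def benchmark(model, x, n_iters=100, backward=False):
    """Average wall-clock milliseconds per iteration (hypothetical helper)."""
    # Warm-up so one-time costs (allocations, kernel selection) are excluded
    for _ in range(10):
        out = model(x)
        if backward:
            out.sum().backward()
    if x.device.type == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(n_iters):
        out = model(x)
        if backward:
            out.sum().backward()
    if x.device.type == "cuda":
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / n_iters * 1e3


# Reusing the MHAFused sketch from above as the model under test
mha = MHAFused(256, 256, num_heads=4)
x = torch.randn(8, 1024, 256)
print(f"forward only:       {benchmark(mha, x):.2f} ms/iter")
print(f"forward + backward: {benchmark(mha, x, backward=True):.2f} ms/iter")
```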