
More Efficient Multi-Head Attention Implementations
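The benchmarks are implemented in mha-implementations.ipynb. As a rough illustration of the kind of variant being compared, the sketch below shows a causal multi-head attention module that fuses the query/key/value projections into one linear layer and delegates the attention computation to PyTorch's scaled_dot_product_attention, which can dispatch to FlashAttention-style fused kernels on supported hardware. This is a minimal sketch, not the notebook's exact code; the class name EfficientMHA and all hyperparameters are illustrative.

```python
import torch
import torch.nn as nn


class EfficientMHA(nn.Module):
    """Hypothetical sketch: fused QKV projection + PyTorch's fused attention kernel."""

    def __init__(self, d_in, d_out, num_heads, dropout=0.0, qkv_bias=False):
        super().__init__()
        assert d_out % num_heads == 0, "d_out must be divisible by num_heads"
        self.num_heads = num_heads
        self.head_dim = d_out // num_heads
        self.d_out = d_out
        self.dropout = dropout
        # One fused projection instead of separate W_query, W_key, W_value matrices
        self.qkv = nn.Linear(d_in, 3 * d_out, bias=qkv_bias)
        self.out_proj = nn.Linear(d_out, d_out)

    def forward(self, x):
        b, num_tokens, _ = x.shape
        # (b, num_tokens, 3*d_out) -> (3, b, num_heads, num_tokens, head_dim)
        qkv = self.qkv(x)
        qkv = qkv.view(b, num_tokens, 3, self.num_heads, self.head_dim)
        queries, keys, values = qkv.permute(2, 0, 3, 1, 4)
        # Fused attention kernel; is_causal=True applies the causal mask internally,
        # so no explicit mask tensor needs to be materialized
        context = nn.functional.scaled_dot_product_attention(
            queries, keys, values,
            dropout_p=self.dropout if self.training else 0.0,
            is_causal=True,
        )
        # (b, num_heads, num_tokens, head_dim) -> (b, num_tokens, d_out)
        context = context.transpose(1, 2).reshape(b, num_tokens, self.d_out)
        return self.out_proj(context)
```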

Summary

The figures below summarize the performance benchmarks (lower is better).

Forward pass only

[Figure: forward-pass-only benchmark results]
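A hedged sketch of how a forward-only timing might be taken, continuing the EfficientMHA sketch above. The tensor shapes, iteration counts, and timing approach are illustrative assumptions, not the notebook's exact setup:

```python
import time

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = EfficientMHA(d_in=768, d_out=768, num_heads=12).to(device)  # illustrative sizes
x = torch.randn(8, 1024, 768, device=device)  # (batch, seq_len, d_in)


def time_forward(model, x, n_iters=100, n_warmup=10):
    with torch.no_grad():  # forward only: skip building the autograd graph
        for _ in range(n_warmup):  # warm up so lazy init/caching don't skew timing
            model(x)
        if device.type == "cuda":
            torch.cuda.synchronize()  # GPU kernels run async; wait before timing
        start = time.perf_counter()
        for _ in range(n_iters):
            model(x)
        if device.type == "cuda":
            torch.cuda.synchronize()
    return (time.perf_counter() - start) / n_iters


print(f"forward: {time_forward(model, x) * 1e3:.2f} ms/iter")
```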

Forward and backward pass

[Figure: forward-and-backward benchmark results]
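For the combined measurement, a sketch along the same lines, reusing model, x, and device from the previous sketch. Summing the output into a scalar proxy loss so that backward() can be called is an assumption for illustration:

```python
def time_forward_backward(model, x, n_iters=100, n_warmup=10):
    for _ in range(n_warmup):
        model(x).sum().backward()  # scalar proxy loss just to drive the backward pass
        model.zero_grad(set_to_none=True)
    if device.type == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(n_iters):
        model(x).sum().backward()
        model.zero_grad(set_to_none=True)  # keep gradients from accumulating across iters
    if device.type == "cuda":
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / n_iters


print(f"forward+backward: {time_forward_backward(model, x) * 1e3:.2f} ms/iter")
```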

Forward and backward pass after compilation

[Figure: compiled forward-and-backward benchmark results]
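The compiled variant can be obtained with torch.compile (PyTorch 2.x), which traces the model and generates fused kernels on the first calls; this makes the warm-up iterations in the timing helpers above matter even more. A sketch reusing the helper from the previous section:

```python
compiled_model = torch.compile(model)  # PyTorch 2.x; JIT-compiles on first call
print(f"compiled forward+backward: "
      f"{time_forward_backward(compiled_model, x) * 1e3:.2f} ms/iter")
```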