
Converting GPT to Llama

This folder contains code for converting the GPT implementation from chapters 4 and 5 to Meta AI's Llama architecture, in the following recommended reading order:
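One well-known difference between the two architectures is the normalization layer: GPT-2 uses LayerNorm, whereas Llama uses RMSNorm, which rescales by the root mean square of the activations and omits mean-centering and the bias term. The sketch below is purely illustrative (plain Python, not the folder's actual PyTorch code); the function name and `eps` default are assumptions for the example.

```python
import math

def rmsnorm(x, weight, eps=1e-5):
    # Illustrative sketch of RMSNorm (not this repository's implementation):
    # divide each element by the root mean square of the vector, then apply
    # a learned per-element scale. Unlike LayerNorm, there is no mean
    # subtraction and no bias term.
    rms = math.sqrt(sum(v * v for v in x) / len(x) + eps)
    return [w * v / rms for w, v in zip(weight, x)]

# Example: with unit weights, the normalized vector has RMS close to 1.
normalized = rmsnorm([3.0, 4.0], [1.0, 1.0])
```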