17 Commits

Author SHA1 Message Date
casinca
e700c66b7a removed old args in GQA class (#674) 2025-06-17 13:09:53 -05:00
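The GQA changes here and in #441 below hinge on one invariant: the number of query heads must be divisible by the number of key/value groups (hence the `assert num_heads % num_kv_groups == 0` comment added in #441). A minimal sketch of that mechanism, with illustrative shapes rather than the repository's actual class:

```python
import torch

# Hypothetical sizes for illustration: 8 query heads share 2 key/value heads
num_heads, num_kv_groups = 8, 2
assert num_heads % num_kv_groups == 0  # each KV head serves a group of query heads
group_size = num_heads // num_kv_groups

b, num_tokens, head_dim = 1, 6, 16
queries = torch.randn(b, num_heads, num_tokens, head_dim)
keys = torch.randn(b, num_kv_groups, num_tokens, head_dim)
values = torch.randn(b, num_kv_groups, num_tokens, head_dim)

# Expand K/V so every query head in a group attends over the same shared KV head
keys = keys.repeat_interleave(group_size, dim=1)    # (b, num_heads, num_tokens, head_dim)
values = values.repeat_interleave(group_size, dim=1)

attn = torch.softmax(queries @ keys.transpose(2, 3) / head_dim**0.5, dim=-1)
context = attn @ values  # (1, 8, 6, 16); causal mask omitted for brevity
```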
Daniel Kleine
479b0e2aa9 fixed gqa qkv code comments (#660) 2025-06-13 08:21:28 -05:00
Sebastian Raschka
a3c4c33347 Reduce Llama 3 RoPE memory requirements (#658)
* Llama3 from scratch improvements

* Fix Llama 3 expensive RoPE memory issue

* updates

* update package

* benchmark

* remove unused rescale_theta
2025-06-12 11:08:02 -05:00
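A rough sketch of the kind of memory fix #658 describes, under the assumption that it boils down to computing the RoPE cos/sin tables once and sharing them across all transformer blocks instead of storing one copy per layer (function name and constants below are illustrative, not the repository's exact code):

```python
import torch

def compute_rope_params(head_dim, theta_base=500_000.0, context_length=8192):
    # Standard RoPE angle table; theta_base mirrors Llama 3's published default
    inv_freq = 1.0 / theta_base ** (torch.arange(0, head_dim, 2).float() / head_dim)
    angles = torch.arange(context_length).float()[:, None] * inv_freq[None, :]
    angles = torch.cat([angles, angles], dim=-1)  # (context_length, head_dim)
    return torch.cos(angles), torch.sin(angles)

# Compute the tables once at model init ...
cos, sin = compute_rope_params(head_dim=128)

# ... and hand the *same* tensors to every block, so memory holds one
# (context_length, head_dim) table instead of n_layers copies of it.
shared = [(cos, sin) for _ in range(32)]
assert all(c.data_ptr() == cos.data_ptr() for c, s in shared)
```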
Sebastian Raschka
c43d7ef663 reformat nbs (#602) 2025-04-05 16:18:27 -05:00
Sebastian Raschka
7114ccd10d Add PyPI package (#576)
* Add PyPI package

* fixes
2025-03-23 19:28:49 -05:00
Sebastian Raschka
5016499d1d Uv workflow improvements (#531)
* Uv workflow improvements

* linter improvements

* pyproject.toml fixes

* windows fixes

* win32 fix
2025-02-16 13:16:51 -06:00
casinca
57fdd94358 [minor] typo & comments (#441)
* typo & comment

- safe -> save
- commenting code: batch_size, seq_len = in_idx.shape

* comment

- adding # NEW for assert num_heads % num_kv_groups == 0

* update memory wording

---------

Co-authored-by: rasbt <mail@sebastianraschka.com>
2024-11-18 19:52:42 +09:00
Daniel Kleine
2b24a7ef30 minor fixes: Llama 3.2 standalone (#420)
* minor fixes

* reformat rope base as float

---------

Co-authored-by: rasbt <mail@sebastianraschka.com>
2024-10-25 21:08:06 -05:00
Sebastian Raschka
75ede3e340 RoPE theta rescaling (#419)
* rope fixes

* update

* cleanup
2024-10-25 15:27:23 -05:00
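The theta rescaling in #419 is presumably the Llama 3.1-style frequency smoothing for long contexts; a sketch under that assumption, using Llama 3.1's published constants (scale factor 8, low/high frequency factors 1 and 4, original context length 8192):

```python
import math
import torch

def rescale_rope_freqs(inv_freq, factor=8, low_freq_factor=1,
                       high_freq_factor=4, original_context_length=8192):
    wavelen = 2 * math.pi / inv_freq
    low_freq_wavelen = original_context_length / low_freq_factor
    high_freq_wavelen = original_context_length / high_freq_factor

    # Blend weight for the medium-frequency band
    smooth = (original_context_length / wavelen - low_freq_factor) / (
        high_freq_factor - low_freq_factor
    )
    smoothed = (1 - smooth) * (inv_freq / factor) + smooth * inv_freq

    # Low frequencies are slowed by `factor`, high frequencies kept as-is,
    # and the band in between is interpolated smoothly
    out = torch.where(wavelen > low_freq_wavelen, inv_freq / factor, inv_freq)
    medium = (wavelen <= low_freq_wavelen) & (wavelen >= high_freq_wavelen)
    return torch.where(medium, smoothed, out)
```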
Daniel Kleine
0ed1e0d099 fixed typos (#414)
* fixed typos

* fixed formatting

* Update ch03/02_bonus_efficient-multihead-attention/mha-implementations.ipynb

* del weights after load into model

---------

Co-authored-by: Sebastian Raschka <mail@sebastianraschka.com>
2024-10-24 18:23:53 -05:00
Daniel Kleine
8b60460319 Updated Llama 2 to 3 paths (#413)
* llama 2 and 3 path fixes

* updated llama 3, 3.1 and 3.2 paths

* updated .gitignore

* Typo fix

---------

Co-authored-by: Sebastian Raschka <mail@sebastianraschka.com>
2024-10-24 07:40:08 -05:00
Sebastian Raschka
f8bdfe12e1 RoPE updates (#412)
* RoPE updates

* Apply suggestions from code review

* updates
2024-10-23 18:07:49 -05:00
Sebastian Raschka
9726ca6546 RoPE increase (#407) 2024-10-21 19:58:38 -05:00
Sebastian Raschka
06604f4b84 Introduce buffers to improve Llama 3.2 efficiency (#389)
* Introduce buffers to improve Llama 3.2 efficiency

* update
2024-10-06 12:49:04 -05:00
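The buffer change in #389 most likely registers non-trainable tensors (causal mask, RoPE tables) with `register_buffer`, so they follow `model.to(device)` and are built once rather than on every forward pass; a minimal sketch, not the repository's exact module:

```python
import torch
import torch.nn as nn

class BlockWithMaskBuffer(nn.Module):
    def __init__(self, context_length):
        super().__init__()
        mask = torch.triu(torch.ones(context_length, context_length), diagonal=1).bool()
        # Created once; moves with .to(device)/.cuda(); persistent=False keeps it
        # out of the state_dict so checkpoints stay small
        self.register_buffer("mask", mask, persistent=False)

    def forward(self, attn_scores):
        n = attn_scores.shape[-1]
        return attn_scores.masked_fill(self.mask[:n, :n], -torch.inf)
```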
Daniel Kleine
4f9775d91c fixed Llama 2 to 3.2 NBs (#388)
* updated requirements

* fixes llama2 to llama3

* fixed llama 3.2 standalone

* fixed typo

* fixed rope formula

* Update requirements-extra.txt

* Update ch05/07_gpt_to_llama/converting-llama2-to-llama3.ipynb

* Update ch05/07_gpt_to_llama/standalone-llama32.ipynb

---------

Co-authored-by: Sebastian Raschka <mail@sebastianraschka.com>
2024-10-06 09:56:55 -05:00
Sebastian Raschka
81053ccadd Add a note about weight tying in Llama 3.2 (#386) 2024-10-05 09:20:54 -05:00
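Weight tying, as noted in #386, means Llama 3.2 reuses the token-embedding matrix as the output projection instead of learning a separate head; a minimal sketch with illustrative dimensions:

```python
import torch.nn as nn

vocab_size, emb_dim = 128_256, 2048  # illustrative Llama 3.2 1B-style sizes
tok_emb = nn.Embedding(vocab_size, emb_dim)
out_head = nn.Linear(emb_dim, vocab_size, bias=False)

# Tie the weights: both layers now share one (vocab_size, emb_dim) matrix,
# saving vocab_size * emb_dim parameters
out_head.weight = tok_emb.weight
```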
Sebastian Raschka
6f86c78763 Implement Llama 3.2 (#383) 2024-10-05 07:30:47 -05:00