43 Commits

Author SHA1 Message Date
Sebastian Raschka
a5ea296259 Use more recent sentencepiece tokenizer API (#696) 2025-06-22 13:52:30 -05:00
Sebastian Raschka
0b15a00574 Qwen3 KV cache (#688) 2025-06-21 17:34:39 -05:00
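The two KV-cache commits above (#688, #685) add key/value caching to the Qwen3 and Llama 3 implementations. As a framework-free sketch of the underlying idea (class and variable names here are illustrative, not the repository's actual code):

```python
# KV-cache sketch: during autoregressive decoding, the keys and values
# projected for earlier tokens are stored and reused, so each new step
# only computes K/V for the newest token.

class KVCache:
    def __init__(self):
        self.keys = []     # one cached entry per decoded token
        self.values = []

    def update(self, k, v):
        self.keys.append(k)
        self.values.append(v)
        return self.keys, self.values   # full history for attention

cache = KVCache()
for token_kv in [(1.0, 0.5), (0.2, 0.3), (0.7, 0.1)]:
    keys, values = cache.update(*token_kv)
# attention at the final step sees all three cached entries
```

Without a cache, every decoding step would recompute K/V for the entire prefix, making generation quadratic in sequence length overall.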
Sebastian Raschka
9d62ca0598 Llama 3 KV Cache (#685)
* Llama 3 KV Cache

* skip expensive tests on Gh actions

* Update __init__.py
2025-06-21 10:55:20 -05:00
casinca
e700c66b7a removed old args in GQA class (#674) 2025-06-17 13:09:53 -05:00
Daniel Kleine
479b0e2aa9 fixed gqa qkv code comments (#660) 2025-06-13 08:21:28 -05:00
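Commits #674 and #660 touch the grouped-query attention (GQA) class. The core of GQA is that `num_heads` query heads share a smaller number of key/value groups; a minimal sketch of the head-to-group mapping (the numbers are illustrative):

```python
# GQA sketch: query heads are partitioned into groups that each share
# one key/value head; this is why the code asserts
# num_heads % num_kv_groups == 0.
num_heads, num_kv_groups = 8, 2
assert num_heads % num_kv_groups == 0
group_size = num_heads // num_kv_groups          # query heads per kv head

# Query head q attends using key/value head q // group_size.
kv_head_for_query = [q // group_size for q in range(num_heads)]
# → [0, 0, 0, 0, 1, 1, 1, 1]
```

Because only `num_kv_groups` key/value heads exist (instead of `num_heads`), the KV cache shrinks by the same factor.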
Sebastian Raschka
a3c4c33347 Reduce Llama 3 RoPE memory requirements (#658)
* Llama3 from scratch improvements

* Fix Llama 3 expensive RoPE memory issue

* updates

* update package

* benchmark

* remove unused rescale_theta
2025-06-12 11:08:02 -05:00
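Commit #658 reduces RoPE memory by avoiding redundant copies of the precomputed rotation terms. A hedged sketch of what gets precomputed (function name and shapes are illustrative, not the repository's actual code):

```python
import math

# RoPE sketch: the rotation angles depend only on position and head
# dimension, so they can be computed once and shared across layers;
# keeping a separate copy per layer is what inflates memory.
def precompute_rope_angles(head_dim, context_length, theta_base=10_000.0):
    inv_freq = [theta_base ** (-2.0 * i / head_dim)
                for i in range(head_dim // 2)]
    # angles[pos][i]: rotation angle at position pos for frequency pair i
    return [[pos * f for f in inv_freq] for pos in range(context_length)]

angles = precompute_rope_angles(head_dim=8, context_length=4)
cos_sin = [[(math.cos(a), math.sin(a)) for a in row] for row in angles]
```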
Sebastian Raschka
3eca919a52 Llama3 from scratch improvements (#621)
* Llama3 from scratch improvements

* restore
2025-04-16 18:08:26 -05:00
Sebastian Raschka
97a199e40b Disable mask saving as weight in Llama 3 model (#604)
* Disable mask saving as weight

* update pixi

* update pixi
2025-04-06 09:33:36 -05:00
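Commit #604 stops the causal mask from being serialized with the model weights. A common way to do this in PyTorch, shown here as a hedged sketch (the stub class below is not the repository's actual module), is a non-persistent buffer:

```python
import torch
import torch.nn as nn

class AttentionStub(nn.Module):
    def __init__(self, context_length):
        super().__init__()
        mask = torch.triu(torch.ones(context_length, context_length),
                          diagonal=1).bool()
        # persistent=False: the mask still moves with .to(device),
        # but it is excluded from state_dict, so it is no longer
        # saved (or expected) as a model weight.
        self.register_buffer("mask", mask, persistent=False)

m = AttentionStub(context_length=4)
assert "mask" not in m.state_dict()
```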
Sebastian Raschka
c43d7ef663 reformat nbs (#602) 2025-04-05 16:18:27 -05:00
Sebastian Raschka
396e96ab07 Fix Llama language typo in bonus materials (#597) 2025-04-02 21:41:36 -05:00
Sebastian Raschka
4128a91c1d Add Llama 3.2 to pkg (#591)
* Add Llama 3.2 to pkg

* remove redundant attributes

* update tests

* updates

* fix link
2025-03-31 18:59:47 -05:00
casinca
d7c316533a removing unused RoPE parameters (#590)
* removing unused RoPE parameters

* remove redundant context_length in GQA

---------

Co-authored-by: Sebastian Raschka <mail@sebastianraschka.com>
2025-03-31 17:10:39 -05:00
Sebastian Raschka
4e3b752e5e Memory optimized Llama (#588)
* Memory optimized Llama

* re-add login
2025-03-30 15:18:12 -05:00
Sebastian Raschka
7114ccd10d Add PyPI package (#576)
* Add PyPI package

* fixes
2025-03-23 19:28:49 -05:00
Sebastian Raschka
5016499d1d Uv workflow improvements (#531)
* Uv workflow improvements

* linter improvements

* pyproject.toml fixes

* windows fixes

* win32 fix
2025-02-16 13:16:51 -06:00
Sebastian Raschka
992f3068d1 Auto download DPO dataset if not already available in path (#479)
* Auto download DPO dataset if not already available in path

* update tests to account for latest HF transformers release in unit tests

* pep 8
2025-01-12 12:27:28 -06:00
casinca
57fdd94358 [minor] typo & comments (#441)
* typo & comment

- safe -> save
- commenting code: batch_size, seq_len = in_idx.shape

* comment

- adding # NEW for assert num_heads % num_kv_groups == 0

* update memory wording

---------

Co-authored-by: rasbt <mail@sebastianraschka.com>
2024-11-18 19:52:42 +09:00
Daniel Kleine
7e6f8ce020 updated RoPE statement (#423)
* updated RoPE statement

* updated .gitignore

* Update ch05/07_gpt_to_llama/converting-gpt-to-llama2.ipynb

---------

Co-authored-by: Sebastian Raschka <mail@sebastianraschka.com>
2024-10-30 08:00:08 -05:00
ROHAN WINSOR
e85d154522 Fix argument name in LlamaTokenizer constructor (#421)
This PR addresses an oversight in the LlamaTokenizer class by changing the constructor argument from filepath to tokenizer_file.
2024-10-29 18:01:36 -05:00
Daniel Kleine
2b24a7ef30 minor fixes: Llama 3.2 standalone (#420)
* minor fixes

* reformat rope base as float

---------

Co-authored-by: rasbt <mail@sebastianraschka.com>
2024-10-25 21:08:06 -05:00
Sebastian Raschka
75ede3e340 RoPE theta rescaling (#419)
* rope fixes

* update

* cleanup
2024-10-25 15:27:23 -05:00
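Commits #419 and #407 adjust RoPE theta/frequency scaling for longer contexts. Llama 3.1-style rescaling is commonly described as keeping high-frequency components, dividing low-frequency ones by a scale factor, and interpolating in between; the sketch below uses widely cited default parameters as assumptions, not values taken from this repository:

```python
import math

# Sketch of Llama 3.1-style RoPE frequency rescaling (parameter
# defaults are assumptions): high frequencies are kept, low
# frequencies are slowed down by `factor`, and the band in between
# is smoothly interpolated.
def rescale_freqs(freqs, factor=8.0, low_freq_factor=1.0,
                  high_freq_factor=4.0, original_context_length=8192):
    low_wavelen = original_context_length / low_freq_factor
    high_wavelen = original_context_length / high_freq_factor
    out = []
    for f in freqs:
        wavelen = 2 * math.pi / f
        if wavelen < high_wavelen:            # high frequency: keep
            out.append(f)
        elif wavelen > low_wavelen:           # low frequency: scale down
            out.append(f / factor)
        else:                                 # smooth transition band
            smooth = ((original_context_length / wavelen - low_freq_factor)
                      / (high_freq_factor - low_freq_factor))
            out.append((1 - smooth) * f / factor + smooth * f)
    return out

freqs = [1.0, 0.001]          # one high-, one low-frequency component
rescaled = rescale_freqs(freqs)
```

The high-frequency component passes through unchanged, while the low-frequency one ends up between `f / factor` and `f`.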
Daniel Kleine
0ed1e0d099 fixed typos (#414)
* fixed typos

* fixed formatting

* Update ch03/02_bonus_efficient-multihead-attention/mha-implementations.ipynb

* del weights after load into model

---------

Co-authored-by: Sebastian Raschka <mail@sebastianraschka.com>
2024-10-24 18:23:53 -05:00
Daniel Kleine
8b60460319 Updated Llama 2 to 3 paths (#413)
* llama 2 and 3 path fixes

* updated llama 3, 3.1 and 3.2 paths

* updated .gitignore

* Typo fix

---------

Co-authored-by: Sebastian Raschka <mail@sebastianraschka.com>
2024-10-24 07:40:08 -05:00
Sebastian Raschka
632d7772b2 Update test-requirements-extra.txt 2024-10-23 19:19:58 -05:00
Sebastian Raschka
f8bdfe12e1 RoPE updates (#412)
* RoPE updates

* Apply suggestions from code review

* updates
2024-10-23 18:07:49 -05:00
Sebastian Raschka
6dd3fbd79d Update tests.py 2024-10-23 07:48:33 -05:00
Sebastian Raschka
9726ca6546 RoPE increase (#407) 2024-10-21 19:58:38 -05:00
Sebastian Raschka
37db3f0913 Add Llama 3.2 RoPE to CI (#391)
* add Llama 3.2 RoPE to CI

* update
2024-10-08 08:28:34 -05:00
Sebastian Raschka
06604f4b84 Introduce buffers to improve Llama 3.2 efficiency (#389)
* Introduce buffers to improve Llama 3.2 efficiency

* update
2024-10-06 12:49:04 -05:00
Daniel Kleine
4f9775d91c fixed Llama 2 to 3.2 NBs (#388)
* updated requirements

* fixes llama2 to llama3

* fixed llama 3.2 standalone

* fixed typo

* fixed rope formula

* Update requirements-extra.txt

* Update ch05/07_gpt_to_llama/converting-llama2-to-llama3.ipynb

* Update ch05/07_gpt_to_llama/converting-llama2-to-llama3.ipynb

* Update ch05/07_gpt_to_llama/standalone-llama32.ipynb

---------

Co-authored-by: Sebastian Raschka <mail@sebastianraschka.com>
2024-10-06 09:56:55 -05:00
Sebastian Raschka
81053ccadd Add a note about weight tying in Llama 3.2 (#386) 2024-10-05 09:20:54 -05:00
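The weight-tying note in #386 refers to Llama 3.2 reusing the token-embedding matrix as the output (unembedding) projection instead of allocating a second matrix. A framework-free sketch of the idea:

```python
# Weight-tying sketch: the output head holds a reference to the same
# embedding matrix, not a copy, so the two layers share parameters
# and the parameter count for this pair is halved.
embedding_matrix = [[0.1, 0.2], [0.3, 0.4]]    # vocab_size=2, dim=2

class OutputHead:
    def __init__(self, shared_weight):
        self.weight = shared_weight            # same object, not a copy

head = OutputHead(embedding_matrix)
embedding_matrix[0][0] = 0.9                   # update the embedding...
assert head.weight[0][0] == 0.9                # ...and the head sees it
```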
Sebastian Raschka
8d6b25785d Llama 3.2 requirements file 2024-10-05 07:32:43 -05:00
Sebastian Raschka
6f86c78763 Implement Llama 3.2 (#383) 2024-10-05 07:30:47 -05:00
Sebastian Raschka
d313f61c86 Cos-sin fix in Llama 2 bonus notebook (#381) 2024-10-03 20:45:40 -05:00
Sebastian Raschka
feb0647c79 Improve rope settings for llama3 (#380) 2024-10-03 08:29:54 -05:00
rasbt
2ae4ad15ba add section numbers 2024-09-30 08:42:22 -05:00
Sebastian Raschka
b8497c1bf5 Add llama2 unit tests (#372)
* add llama2 unit tests

* update

* updates

* update file path

* update requirements file

* rmsnorm test

* update
2024-09-25 19:40:36 -05:00
rasbt
a23fca84d5 improve formatting 2024-09-24 18:49:17 -05:00
Daniel Kleine
4541177063 ch05/07 gpt_to_llama text improvements (#369)
* fixed typo

* fixed RMSnorm formula

* fixed SwiGLU formula

* temperature=0 for untrained model for reproducibility

* added extra info hf token
2024-09-24 18:45:49 -05:00
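Commit #369 fixes the RMSNorm and SwiGLU formulas in the notebook text. For reference, hedged plain-Python sketches of both (scalar lists instead of tensors; the eps value and omitted weight projections are illustrative simplifications):

```python
import math

# RMSNorm: x_i * g_i / sqrt(mean(x^2) + eps) — unlike LayerNorm,
# no mean subtraction, only rescaling by the root mean square.
def rmsnorm(x, gamma, eps=1e-5):
    rms = math.sqrt(sum(v * v for v in x) / len(x) + eps)
    return [g * v / rms for g, v in zip(gamma, x)]

# SwiGLU feed-forward gate: SiLU(W1 x) elementwise-times (W3 x),
# which is then projected by W2 (projections omitted here).
def silu(v):
    return v / (1 + math.exp(-v))      # SiLU(v) = v * sigmoid(v)

def swiglu_gate(a, b):                  # a = W1 x, b = W3 x
    return [silu(u) * w for u, w in zip(a, b)]

out = rmsnorm([1.0, -1.0], gamma=[1.0, 1.0])
```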
rasbt
941629d2c7 add json import 2024-09-23 09:12:35 -05:00
rasbt
835832a0f9 move access token to config.json 2024-09-23 08:56:16 -05:00
rasbt
5e6c7230ac add llama3 comparison 2024-09-23 08:17:10 -05:00
Sebastian Raschka
c38b003aa9 GPT to Llama (#368)
* GPT to Llama

* fix urls
2024-09-23 07:34:06 -05:00