43 Commits

Author SHA1 Message Date
Sebastian Raschka
7ca7c47e4a
Make quote style consistent (#891) 2025-10-21 19:42:33 -05:00
Sebastian Raschka
7bd263144e
Switch from urllib to requests to improve reliability (#867)
* Switch from urllib to requests to improve reliability

* Keep ruff linter-specific

* update

* update

* update
2025-10-07 15:22:59 -05:00
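
A minimal sketch of what a `requests`-based download helper might look like, with streaming and explicit HTTP error handling; the function name, timeout, and chunk size are assumptions and not taken from the PR:

```python
import requests

# Hypothetical download helper using requests instead of urllib.
def download_file(url, out_path, timeout=30):
    response = requests.get(url, stream=True, timeout=timeout)
    response.raise_for_status()              # fail early on HTTP errors
    with open(out_path, "wb") as f:
        for chunk in response.iter_content(chunk_size=8192):
            f.write(chunk)
```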
Sebastian Raschka
8552565bda
Add missing comma in imports in README (#865) 2025-10-06 16:03:04 -05:00
Sebastian Raschka
9bc827ea7e
Numerically stable generate on mps (#849)
* Numerically stable generate on mps

* add file
2025-09-26 22:42:44 -05:00
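
The commit title names numerically stable generation on MPS; a hedged sketch of one common approach is to do the softmax over the last-token logits in float32 before sampling (the function signature and temperature default are assumptions, not necessarily the repo's exact change):

```python
import torch

# Hypothetical sketch: compute the sampling softmax in float32, which tends
# to be more stable on lower-precision backends such as MPS.
def sample_next_token(logits, temperature=1.0):
    logits = logits[:, -1, :].to(torch.float32)      # last-token logits
    probs = torch.softmax(logits / temperature, dim=-1)
    return torch.multinomial(probs, num_samples=1)
```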
Sebastian Raschka
f492c949d3
Requirements update (#851)
* Requirements update

* Code change to trigger workers

* update
2025-09-26 22:19:57 -05:00
Sebastian Raschka
e742d8af2c
Improve MoE implementation (#841) 2025-09-22 15:21:06 -05:00
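
For context on what a routed mixture-of-experts block involves, here is a minimal top-k MoE sketch in PyTorch; the layer sizes, expert count, and `top_k` value are illustrative and not taken from the repo's implementation:

```python
import torch
import torch.nn as nn

class SimpleMoE(nn.Module):
    # Minimal sketch of a top-k routed mixture-of-experts feed-forward block.
    def __init__(self, emb_dim=768, hidden_dim=3072, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(emb_dim, num_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(emb_dim, hidden_dim), nn.GELU(),
                          nn.Linear(hidden_dim, emb_dim))
            for _ in range(num_experts)
        )

    def forward(self, x):                      # x: (batch, seq, emb_dim)
        scores = self.gate(x)                  # (batch, seq, num_experts)
        weights, idx = torch.topk(scores, self.top_k, dim=-1)
        weights = torch.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[..., k] == e        # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[..., k][mask].unsqueeze(-1) * expert(x[mask])
        return out
```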
Sebastian Raschka
2aa8e8130d
Note about RoPE usage (#839)
* Note about devcontainer root usage

* Add note about RoPE implementation
2025-09-20 16:25:58 +00:00
casinca
42c130623b
Qwen3Tokenizer fix for Qwen3 Base models and generation mismatch with HF (#828)
* prevent `self.apply_chat_template` being applied for base Qwen models

* - added no chat template comparison in `test_chat_wrap_and_equivalence`
- removed duplicate comparison

* Revert "- added no chat template comparison in `test_chat_wrap_and_equivalence`"

This reverts commit 3a5ee8cfa19aa7e4874cd5f35171098be760b05f.

* Revert "prevent `self.apply_chat_template` being applied for base Qwen models"

This reverts commit df504397a8957886c6d6d808615545e37ceffcad.

* copied `download_file` in `utils` from https://github.com/rasbt/reasoning-from-scratch/blob/main/reasoning_from_scratch/utils.py

* added copy of test `def test_tokenizer_equivalence()` from `reasoning-from-scratch` in `test_qwen3.py`

* removed duplicate code fragment in `test_chat_wrap_and_equivalence`

* use apply_chat_template

* add toggle for instruct model

* Update tokenizer usage

---------

Co-authored-by: rasbt <mail@sebastianraschka.com>
2025-09-17 08:14:11 -05:00
Sebastian Raschka
b6cd0a312f
More efficient angles computation in RoPE (#830) 2025-09-16 03:23:33 +00:00
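
A small sketch of computing the RoPE angles with a single `torch.outer` call, which is one way to make the angle computation more efficient; `head_dim`, the theta base, and the context length are illustrative values:

```python
import torch

# Hypothetical sketch of the RoPE angle precomputation via an outer product.
def compute_rope_angles(head_dim=64, theta_base=10_000, context_length=4096):
    inv_freq = 1.0 / theta_base ** (torch.arange(0, head_dim, 2).float() / head_dim)
    positions = torch.arange(context_length).float()
    angles = torch.outer(positions, inv_freq)   # (context_length, head_dim // 2)
    return torch.cos(angles), torch.sin(angles)
```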
Sebastian Raschka
8add26cbe9
Improve weight tying handling (#826)
* Improve weight tying handling

* fix
2025-09-14 15:46:48 -05:00
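
Weight tying means the output projection reuses the token-embedding matrix instead of holding its own copy; a minimal sketch (class and parameter names are illustrative):

```python
import torch.nn as nn

# Hypothetical sketch of weight tying: output head and embedding share weights.
class TiedLM(nn.Module):
    def __init__(self, vocab_size=50_257, emb_dim=768):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, emb_dim)
        self.out_head = nn.Linear(emb_dim, vocab_size, bias=False)
        self.out_head.weight = self.tok_emb.weight   # share the same parameter

    def forward(self, x):
        return self.out_head(self.tok_emb(x))
```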
Sebastian Raschka
8f3e5b024d
Add LoRA scaling (#823) 2025-09-14 11:57:55 -05:00
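
LoRA scaling usually refers to multiplying the low-rank update by alpha / rank; a minimal hedged sketch (rank, alpha, and the initialization are illustrative choices, not the repo's exact code):

```python
import torch
import torch.nn as nn

# Hypothetical LoRA wrapper: frozen base layer plus a scaled low-rank update.
class LoRALinear(nn.Module):
    def __init__(self, linear, rank=8, alpha=16):
        super().__init__()
        self.linear = linear                       # pretrained layer, kept frozen
        self.A = nn.Parameter(torch.randn(linear.in_features, rank) * 0.01)
        self.B = nn.Parameter(torch.zeros(rank, linear.out_features))
        self.scaling = alpha / rank                # the LoRA scaling factor

    def forward(self, x):
        return self.linear(x) + self.scaling * (x @ self.A @ self.B)
```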
Sebastian Raschka
32965e0edd
remove redundant next_cache (#817) 2025-09-11 15:16:08 -05:00
Sebastian Raschka
c7a4362ca4
Add defensive context trimming for multiturn (#815)
* Add defensive context trimming for multiturn

* add all mods
2025-09-09 20:19:00 -05:00
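
Defensive context trimming keeps a multi-turn prompt from overflowing the model's context window by dropping the oldest tokens; a hypothetical sketch (the context length, the reserve for new tokens, and the function name are assumptions):

```python
# Hypothetical sketch: trim the running token context before each generation
# step so prompt + new tokens never exceed the context window.
def trim_context(token_ids, context_length=8192, max_new_tokens=256):
    budget = context_length - max_new_tokens
    if len(token_ids) > budget:
        token_ids = token_ids[-budget:]   # keep only the most recent tokens
    return token_ids
```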
Sebastian Raschka
5ae41c402e
Fix code comment 2025-09-05 14:02:24 -05:00
Sebastian Raschka
9eee9296d9
Interactive qwen3 chat interface (#801)
* Interactive qwen3 chat interface

* update

* update

* update url
2025-09-01 20:50:25 -05:00
Sebastian Raschka
70edd53809
Improve RoPE (#799) 2025-08-31 11:46:36 -05:00
Sebastian Raschka
80d4732456
add HF equivalency tests for standalone nbs (#774)
* add HF equivalency tests for standalone nbs

* update

* update

* update

* update
2025-08-18 18:58:46 -05:00
Sebastian Raschka
e9c1c1da38
Fix qk_norm comment (#769) 2025-08-15 08:38:48 -05:00
Sebastian Raschka
b14325e56d
Qwen3 and Llama3 equivalency tests with HF transformers (#768)
* Qwen3 and Llama3 equivalency tests with HF transformers

* update
2025-08-14 18:36:07 -05:00
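
An equivalence test of this kind typically compares the from-scratch component against the Hugging Face reference; a hedged sketch of the tokenizer variant (the model id and the shape of `my_tokenizer` are assumptions, not necessarily what the tests use):

```python
from transformers import AutoTokenizer

def check_tokenizer_equivalence(my_tokenizer, text="Hello, world!"):
    # Hypothetical check: the from-scratch tokenizer should produce the same
    # ids as the Hugging Face reference tokenizer for the same input text.
    hf_tok = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B")
    assert my_tokenizer.encode(text) == hf_tok.encode(text)
```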
Sebastian Raschka
f92b40e4ab
Qwen3 Coder Flash & MoE from Scratch (#760)
* Qwen3 Coder Flash & MoE from Scratch

* update

* refinements

* updates

* update

* update

* update
2025-08-01 19:13:17 -05:00
Sebastian Raschka
a354555049
Batched KV Cache Inference for Qwen3 (#735) 2025-07-10 08:09:35 -05:00
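
At its core, KV-cache inference stores each layer's keys and values so earlier tokens are not re-encoded at every step; batching simply means the cached tensors carry a batch dimension. A minimal sketch (class name and shapes are illustrative):

```python
import torch

# Hypothetical per-layer KV cache: new keys/values are appended along the
# sequence dimension (batch, num_heads, seq_len, head_dim).
class KVCache:
    def __init__(self):
        self.k = None
        self.v = None

    def update(self, k_new, v_new):
        if self.k is None:
            self.k, self.v = k_new, v_new
        else:
            self.k = torch.cat([self.k, k_new], dim=2)
            self.v = torch.cat([self.v, v_new], dim=2)
        return self.k, self.v
```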
Sebastian Raschka
b8c8237251
Qwen3 tokenizer sanity checks (#730) 2025-07-09 13:52:35 -05:00
Sebastian Raschka
21c41721cc
Add more sophisticated Qwen3 tokenizer (#729) 2025-07-09 13:16:26 -05:00
Sebastian Raschka
3c9dc4807b
Simplify KV cache usage (#728)
* Simplify KV cache usage

* Swap mark text with ghostwriter
2025-07-08 12:56:55 -05:00
Sebastian Raschka
9cf64170ed
Update Qwen3 tokenizer test (#727)
* Update Qwen3 tokenizer test

* add tokenizers to dev dependencies

* add tokenizers to dev dependencies
2025-07-08 06:59:46 -05:00
Sebastian Raschka
0405b0c8e7
Handle other Qwen3 tokenizer settings (#716) 2025-06-30 17:49:51 -05:00
Sebastian Raschka
c4ec55edac
Support different Qwen3 sizes in pkg (#714) 2025-06-28 08:00:23 -05:00
Sebastian Raschka
81eda38d3b
Improve KV cache code for torch.compile (#705)
* Improve KV cache code for torch.compile

* cleanup

* cleanup
2025-06-23 18:08:49 -05:00
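
The commit is about making the KV-cache code friendlier to `torch.compile`; for orientation, a minimal sketch of compiling a toy module (this is not the repo's actual cache change, and the toy model is a placeholder):

```python
import torch
import torch.nn as nn

# Hypothetical sketch: wrap a module with torch.compile and run it once.
model = nn.Sequential(nn.Linear(8, 8), nn.GELU(), nn.Linear(8, 8))
compiled_model = torch.compile(model)

x = torch.randn(4, 8)
out = compiled_model(x)
```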
Sebastian Raschka
37b26c2e04
CPU compile performance for Qwen3 models (#704)
* Ch06 classifier function asserts

* Qwen3 cpu compilation perf
2025-06-23 11:06:10 -05:00
Sajjad Baloch
661a6e84ee
Fix: Typo in appendix_d.py comments. (#682)
* Fix: pkg/llms_from_scratch/appendix_d.py

* minor language typo fix

* fix 691

---------

Co-authored-by: PrinceSajjadHussain <PrinceSajjadHussain@users.noreply.github.com>
Co-authored-by: rasbt <mail@sebastianraschka.com>
2025-06-22 12:15:12 -05:00
Sebastian Raschka
0a2e8c39c4
Qwen3 KV cache (#688) 2025-06-21 17:34:39 -05:00
Daniel Kleine
14c054d36c
added pkg fixes (#676)
Co-authored-by: Sebastian Raschka <mail@sebastianraschka.com>
2025-06-21 16:07:50 -05:00
Sebastian Raschka
fdc3e1b701
Add GPT-2 KV cache to pkg (#687) 2025-06-21 12:29:04 -05:00
Sebastian Raschka
3be0f3202a
Llama 3 KV Cache (#685)
* Llama 3 KV Cache

* skip expensive tests on Gh actions

* Update __init__.py
2025-06-21 10:55:20 -05:00
Sebastian Raschka
e719bd86ad
Qwen3 From Scratch (#678)
* Qwen3 From Scratch

* rev other file

* upd

* upd

* upd

* url fixes
2025-06-19 18:44:38 -05:00
Daniel Kleine
c2cfb47b1a
fixed gqa qkv code comments (#660) 2025-06-13 08:21:28 -05:00
Sebastian Raschka
c4cde1c21b
Reduce Llama 3 RoPE memory requirements (#658)
* Llama3 from scratch improvements

* Fix Llama 3 expensive RoPE memory issue

* updates

* update package

* benchmark

* remove unused rescale_theta
2025-06-12 11:08:02 -05:00
Sebastian Raschka
43e25a5165
Llama3Fast (#593)
* Llama3Fast

* Update pkg/llms_from_scratch/tests/test_llama3.py
2025-04-01 12:56:11 -05:00
Sebastian Raschka
aedad7efc3
Add Llama 3.2 to pkg (#591)
* Add Llama 3.2 to pkg

* remove redundant attributes

* update tests

* updates

* updates

* updates

* fix link

* fix link
2025-03-31 18:59:47 -05:00
Sebastian Raschka
3f93d73d6d
Alt weight loading code via PyTorch (#585)
* Alt weight loading code via PyTorch

* commit additional files
2025-03-27 20:10:23 -05:00
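
Loading weights via PyTorch generally comes down to saving and restoring a state dict; a hypothetical sketch with a toy model standing in for the book's GPT model (the file name and model are placeholders):

```python
import torch
import torch.nn as nn

# Toy stand-in model; the real code would construct the book's GPT model.
model = nn.Sequential(nn.Linear(8, 8))
torch.save(model.state_dict(), "model.pth")

# Reload the weights from the plain PyTorch state-dict file.
state_dict = torch.load("model.pth", map_location="cpu", weights_only=True)
model.load_state_dict(state_dict)
model.eval()
```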
Sebastian Raschka
ffd4035144
Add GPTModelFast (#584)
* Add GPTModelFast

* update
2025-03-27 14:00:25 -05:00
Sebastian Raschka
feb1e9a83d
Add readme (#577) 2025-03-23 19:35:12 -05:00
Sebastian Raschka
c21bfe4a23
Add PyPI package (#576)
* Add PyPI package

* fixes

* fixes
2025-03-23 19:28:49 -05:00
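
With a PyPI package, the code can be installed and imported instead of copied out of the notebooks; a hypothetical usage sketch (the package name, module path, and config values are assumptions based on the pkg/llms_from_scratch directory, not verified against the published package):

```python
# pip install llms_from_scratch          # assumed package name
from llms_from_scratch.ch04 import GPTModel   # module path is an assumption

GPT_CONFIG_124M = {
    "vocab_size": 50257, "context_length": 1024, "emb_dim": 768,
    "n_heads": 12, "n_layers": 12, "drop_rate": 0.1, "qkv_bias": False,
}
model = GPTModel(GPT_CONFIG_124M)
```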