793 Commits

Author SHA1 Message Date
Sebastian Raschka
a08d7aaa84
Uv workflow improvements (#531)
* Uv workflow improvements

* linter improvements

* pyproject.toml fixes

* windows fixes

* win32 fix
2025-02-16 13:16:51 -06:00
rasbt
29353c74d8
reduce redundancies 2025-02-15 21:31:22 -06:00
Sebastian Raschka
fc11940ace
Add performance comparison 2025-02-15 21:16:36 -06:00
rasbt
7e335f8af0
formatting 2025-02-15 21:05:58 -06:00
rasbt
61ca01c7c5
cosmetics 2025-02-15 20:53:26 -06:00
Sebastian Raschka
3e3dc3c5dc
Native uv docs (#530)
* Replace pip with the more modern uv

* uv tests

* Native uv docs

* resolve merge conflicts
2025-02-15 20:35:23 -06:00
Sebastian Raschka
e9c4dac3ad
Update README.md 2025-02-15 13:17:43 -06:00
Sebastian Raschka
88fd849b88
Switch from pip to uv (#529)
* Replace pip with the more modern uv

* uv tests

* update yaml

* update flake8

* update windows commands

* fix windows test

* windows fix
2025-02-15 13:13:13 -06:00
Sebastian Raschka
074a6efb33
Update link to vocab size increase (#526)
* Update link to vocab size increase

* Update ch05/10_llm-training-speed/README.md
2025-02-14 08:03:01 -06:00
Sebastian Raschka
908dd2f71e
PyTorch tips for better training performance (#525)
* PyTorch tips for better training performance

* formatting

* pep 8
2025-02-12 16:10:34 -06:00
Sebastian Raschka
3c29b67cd0
Add torchrun bonus code (#524) 2025-02-11 17:01:09 -06:00
Victor Skvortsov
f90bec7dfb
Comment that DDP-script.py does not work with more than 2 GPUs (#523) 2025-02-11 13:23:49 -06:00
Sebastian Raschka
a6cc574605
Upgrade to NumPy 2.0 (#520)
* Upgrade to NumPy 2.0

* bump pytorch

* update

* update packages
2025-02-09 06:21:58 -06:00
Sebastian Raschka
68e2efe1c9
Mention small discrepancy due to Dropout non-reproducibility in PyTorch (#519)
* Mention small discrepancy due to Dropout non-reproducibility in PyTorch

* bump pytorch version
2025-02-06 14:59:52 -06:00
Daniel Kleine
bd8f7522cb
fixed indentation and enumeration for nvct (#518) 2025-02-06 08:17:12 -06:00
Sebastian Raschka
2dc46bedc6
Fix typo in Ch02 comments (#516) 2025-02-04 20:16:07 -06:00
Sebastian Raschka
8cfa52bf1d
More pythonic way to find the longest sequence (#512)
* More pythonic way to find the longest sequence

* pep8 fix
2025-02-01 10:22:47 -06:00
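Aside: the gist of #512 can be sketched as follows. This is a hypothetical example (variable names are illustrative, not taken from the actual diff) of replacing a manual running-maximum loop with Python's built-in `max()`:

```python
# Hypothetical illustration of the "more pythonic" pattern from the commit title.
# encoded_texts stands in for the chapter's tokenized dataset.
encoded_texts = [[1, 2, 3], [4, 5], [6, 7, 8, 9], [0]]

# Manual loop: track the running maximum length explicitly.
longest = 0
for item in encoded_texts:
    if len(item) > longest:
        longest = len(item)

# More pythonic: let max() over a generator expression do the bookkeeping.
longest_pythonic = max(len(item) for item in encoded_texts)

assert longest == longest_pythonic == 4
```

The generator-expression form is shorter and harder to get wrong than the explicit loop.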
Sebastian Raschka
0e14c76dee
Test PyTorch nightly releases (#509) 2025-01-30 12:45:48 -06:00
Sebastian Raschka
25ea71e713
Alternative weight loading via .safetensors (#507) 2025-01-29 08:15:29 -06:00
Sebastian Raschka
9daa7e7511
Fix default argument in ex 7.2 (#506) 2025-01-25 10:46:48 -06:00
Sebastian Raschka
fd8d77a79d
A few cosmetic updates (#504) 2025-01-23 09:38:55 -06:00
Sebastian Raschka
0911e71497
Test for PyTorch 2.6 release candidate (#500)
* Test for PyTorch 2.6 release candidate

* update

* remove extra added file
2025-01-22 18:37:48 -06:00
Sebastian Raschka
a22d612be6
Bonus material: extending tokenizers (#496)
* Bonus material: extending tokenizers

* small wording update
2025-01-22 09:26:54 -06:00
Daniel Kleine
dce46038da
add GPT2TokenizerFast to BPE comparison (#498)
* added HF BPE Fast

* update benchmarks

* add note about performance

* revert accidental changes

---------

Co-authored-by: rasbt <mail@sebastianraschka.com>
2025-01-22 09:26:44 -06:00
Austin Welch
0f35e370ed
fix: preserve newline tokens in BPE encoder (#495)
* fix: preserve newline tokens in BPE encoder

* further fixes

* more fixes

---------

Co-authored-by: rasbt <mail@sebastianraschka.com>
2025-01-21 12:47:15 -06:00
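Aside: the essence of preserving newline tokens can be sketched with a simplified, hedged example (the real BPE encoder's pre-tokenization is more involved than this):

```python
import re

text = "Hello\nworld\n"

# A naive whitespace split loses the newline characters entirely.
naive = text.split()  # ['Hello', 'world']

# Splitting on a capturing group keeps each "\n" as its own token,
# so the encoder can still map newlines to token ids.
preserved = [t for t in re.split(r"(\n)", text) if t]
```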
Daniel Kleine
60acb94894
BPE: fixed typo (#492)
* fixed typo

* use rel path if exists

* mod gitignore and use existing vocab files

---------

Co-authored-by: rasbt <mail@sebastianraschka.com>
2025-01-20 20:49:53 -06:00
Sebastian Raschka
0d4967eda6
Implementing the BPE Tokenizer from Scratch (#487) 2025-01-17 12:22:00 -06:00
rvaneijk
2fef2116a6
04_optional-aws-sagemaker-notebook (#451)
* 04_optional-aws-sagemaker-notebook

* Update setup/04_optional-aws-sagemaker-notebook/cloudformation-template.yml

* Update README.md

---------

Co-authored-by: Sebastian Raschka <mail@sebastianraschka.com>
2025-01-17 10:07:10 -06:00
Sebastian Raschka
126adb7663
Include mathematical breakdown for exercise solution 4.1 (#483) 2025-01-14 19:23:00 -06:00
Henry Shi
b3150eebd8
Print out embeddings for more illustrative learning (#481)
* print out embeddings for illustrative learning

* suggestion: print embedding contents

---------

Co-authored-by: rasbt <mail@sebastianraschka.com>
2025-01-13 14:44:06 -06:00
rasbt
b524afe3da
fix reward margins plot label in dpo nb 2025-01-12 14:04:05 -06:00
Sebastian Raschka
4bfbcd069d
Auto download DPO dataset if not already available in path (#479)
* Auto download DPO dataset if not already available in path

* update tests to account for latest HF transformers release in unit tests

* pep 8
2025-01-12 12:27:28 -06:00
Sebastian Raschka
a48f9c7fe2
adds no-grad context for reference model to DPO (#473) 2025-01-07 20:49:01 -06:00
Sebastian Raschka
2d7ca7ee4b
fix ch07 unit test (#470) 2025-01-05 17:40:57 -06:00
Sebastian Raschka
701090815e
Add backup URL for gpt2 weights (#469)
* Add backup URL for gpt2 weights

* newline
2025-01-05 11:28:09 -06:00
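Aside: a backup-URL scheme like the one #469 describes usually amounts to trying mirrors in order. A minimal, hypothetical sketch (`fetch_with_backup` is an illustrative name, not the repo's code):

```python
def fetch_with_backup(urls, fetch):
    # Try each URL in order and return the first successful result;
    # re-raise the last error if every mirror fails.
    last_err = None
    for url in urls:
        try:
            return fetch(url)
        except OSError as err:
            last_err = err
    raise last_err if last_err else ValueError("no URLs provided")
```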
QS
9b95557ba2
typo fixed (#468)
* typo fixed

* only update plot

---------

Co-authored-by: rasbt <mail@sebastianraschka.com>
2025-01-05 09:17:13 -06:00
Tao Qian
cec445f146
Minor readability improvement in dataloader.ipynb (#461)
* Minor readability improvement in dataloader.ipynb

- The tokenizer and encoded_text variables at the root level are unused.
- The default params for create_dataloader_v1 are confusing, especially for the default batch_size 4, which happens to be the same as the max_length.

* readability improvements

---------

Co-authored-by: rasbt <mail@sebastianraschka.com>
2025-01-04 11:26:10 -06:00
Sebastian Raschka
1b635f760e
fix misplaced parenthesis and update license (#466) 2025-01-04 11:14:08 -06:00
casinca
bb31de8999
[minor] typo & comments (#441)
* typo & comment

- safe -> save
- commenting code: batch_size, seq_len = in_idx.shape

* comment

- adding # NEW for assert num_heads % num_kv_groups == 0

* update memory wording

---------

Co-authored-by: rasbt <mail@sebastianraschka.com>
2024-11-18 19:52:42 +09:00
Daniel Kleine
e95c898545
Fixed command for row 16 additional experiment (#439)
* fixed command for row 16 experiment

* Update README.md

---------

Co-authored-by: Sebastian Raschka <mail@sebastianraschka.com>
2024-11-17 06:50:00 +09:00
Sebastian Raschka
ccade77bf4
Add flexible padding bonus experiment (#438)
* Add flexible padding bonus experiment

* fix links
2024-11-15 08:51:01 +09:00
Sebastian Raschka
f6281ab91b
Add utility to prevent double execution of certain cells (#437) 2024-11-14 19:56:49 +09:00
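Aside: a cell guard like the one #437 mentions can be approximated with a seen-set. Purely illustrative (`run_once` and its tag scheme are assumptions, not the notebook's actual helper):

```python
executed = set()

def run_once(tag, fn):
    # Hypothetical guard: run fn only the first time this tag is seen,
    # e.g. to keep re-running a notebook cell from mutating state twice.
    if tag in executed:
        print(f"Skipping re-execution of {tag}")
        return
    executed.add(tag)
    fn()
```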
Sebastian Raschka
f61c008c5d
Add missing device transfer in gpt_generate.py (#436) 2024-11-14 19:12:53 +09:00
Sebastian Raschka
27a6a7e64a
Add chapter names 2024-11-08 08:39:34 -06:00
Sebastian Raschka
f4ed263847
Add "What's next" section (#432)
* Add What's next section

* Delete appendix-D/01_main-chapter-code/appendix-D-Copy2.ipynb

* Delete ch03/01_main-chapter-code/ch03-Copy1.ipynb

* Delete appendix-D/01_main-chapter-code/appendix-D-Copy1.ipynb

* Update ch07.ipynb

* Update ch07.ipynb
2024-11-07 20:12:59 -06:00
rasbt
1183fd7837
add dropout scaling note 2024-11-06 05:52:47 -06:00
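Aside: the scaling note likely concerns inverted dropout, where surviving activations are scaled by 1/(1-p) during training so their expected value matches inference. A sketch with a fixed mask for illustration (real dropout samples the mask randomly):

```python
p = 0.5
x = [1.0, 2.0, 3.0, 4.0]

# Fixed mask for illustration; in practice each entry is kept
# independently with probability (1 - p).
mask = [1, 0, 1, 0]

# Inverted dropout: scale kept activations by 1/(1 - p) at training time,
# so no rescaling is needed at inference.
dropped = [xi * m / (1 - p) for xi, m in zip(x, mask)]

assert dropped == [2.0, 0.0, 6.0, 0.0]
```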
casinca
9ce0be333b
potential little fixes appendix-D4.ipynb (#427)
* Update appendix-D.ipynb

- lr missing argument for passing peak_lr to the optimizer
- filling 1 step gap for gradient clipping

* adjustments

---------

Co-authored-by: rasbt <mail@sebastianraschka.com>
2024-11-03 12:12:58 -06:00
Sebastian Raschka
ba3137fa2c
Update CITATION.cff 2024-11-01 21:32:17 -05:00
Sebastian Raschka
734f36aac1
Update CITATION.cff 2024-11-01 21:29:22 -05:00
rasbt
7553e87af0
Add citation file 2024-11-01 21:21:25 -05:00