mirror of https://github.com/rasbt/LLMs-from-scratch.git
Update link to vocab size increase (#526)
* Update link to vocab size increase
* Update ch05/10_llm-training-speed/README.md
parent 6370898ce6
commit e818be42e1
@@ -169,7 +169,7 @@ After:
### 9. Using a nicer vocab size value
- This is a tip suggested to me by my former colleague Carlos Moccholi, who mentioned that this tip comes from Andrej Karpathy (I suspect it's from the [nanoGPT](https://github.com/karpathy/nanoGPT/blob/93a43d9a5c22450bbf06e78da2cb6eeef084b717/model.py#L111) repository)
- Here, we slightly increase the vocabulary size from 50,257 to 50,304, which is the nearest multiple of 64. This tip was suggested to me by my former colleague Carlos Mocholi, who mentioned that it originally came from Andrej Karpathy (likely from [this post](https://x.com/karpathy/status/1621578354024677377)). Karpathy's recommendation is based on [NVIDIA's guidelines on tensor shapes](https://docs.nvidia.com/deeplearning/performance/mixed-precision-training/index.html#tensor-core-shape), where batch sizes and linear layer dimensions are commonly chosen as multiples of certain values.
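
  As a quick illustration of the rounding described above, here is a minimal sketch (not taken from the repository; `pad_vocab_size` and the `GPT_CONFIG` dict are hypothetical names) that pads a vocabulary size up to the nearest multiple of 64:

  ```python
  # Minimal sketch: round a vocabulary size up to the nearest multiple of 64
  # so the embedding and output-projection dimensions are Tensor Core friendly.
  # `pad_vocab_size` and `GPT_CONFIG` are illustrative names, not the book's code.

  def pad_vocab_size(vocab_size: int, multiple: int = 64) -> int:
      """Round vocab_size up to the nearest multiple of `multiple`."""
      return ((vocab_size + multiple - 1) // multiple) * multiple

  GPT_CONFIG = {
      "vocab_size": pad_vocab_size(50_257),  # 50,257 -> 50,304
      # ... remaining model hyperparameters ...
  }

  print(GPT_CONFIG["vocab_size"])  # 50304
  ```

  Only the model's embedding and output layers grow by the padded amount; the tokenizer is unchanged, so the extra 47 token IDs are simply never produced.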
Before:
- `Step tok/sec: 112046`
@@ -204,4 +204,4 @@ Before (single GPU):
After (4 GPUs):
- `Step tok/sec: 419259`
- `Reserved memory: 22.7969 GB`
- `Reserved memory: 22.7969 GB`