Mirror of https://github.com/rasbt/LLMs-from-scratch.git (synced 2025-12-25 06:02:07 +00:00)

Commit 756ff780de: experiments with largest model
Parent commit: 2df81f59d3
@@ -17,8 +17,9 @@ For example,
 | 4 | gpt2-small (124M) | pretrained | last | all | longest train ex. (120) | V100 | 0.94 min | 99.62% | 96.64% | 96.67% |
 | 5 | gpt2-medium (355M) | pretrained | last | last_block | longest train ex. (120) | V100 | 0.91 min | 87.50% | 91.28% | 84.67% |
 | 6 | gpt2-large (774M) | pretrained | last | last_block | longest train ex. (120) | V100 | 1.91 min | 99.52% | 98.66% | 96.67% |
-| 7 | gpt2-small (124M) | random | last | all | longest train ex. (120) | V100 | 0.93 min | 100% | 96.64% | 93.67% |
-| 8 | gpt2-small (124M) | pretrained | last | last_block | context length (1024) | V100 | 3.24 min | 83.08% | 87.92% | 78.33% |
+| 7 | gpt2-xl (1558M) | pretrained | last | last_block | longest train ex. (120) | V100 | 3.84 min | 99.81% | 99.33% | 98.33% |
+| 8 | gpt2-small (124M) | random | last | all | longest train ex. (120) | V100 | 0.93 min | 100% | 96.64% | 93.67% |
+| 9 | gpt2-small (124M) | pretrained | last | last_block | context length (1024) | V100 | 3.24 min | 83.08% | 87.92% | 78.33% |
@@ -32,8 +33,9 @@ You can use the following code to reproduce the experiments:
 - Row 4: `python additional-experiments.py --trainable_layers all`
 - Row 5: `python additional-experiments.py --model_size "gpt2-medium (355M)"`
 - Row 6: `python additional-experiments.py --model_size "gpt2-large (774M)"`
-- Row 7: `python additional-experiments.py --weights random --trainable_layers all`
-- Row 8: `python additional-experiments.py --context_length "model_context_length"`
+- Row 7: `python additional-experiments.py --model_size "gpt2-xl (1558M)"`
+- Row 8: `python additional-experiments.py --weights random --trainable_layers all`
+- Row 9: `python additional-experiments.py --context_length "model_context_length"`
 
 I've kept the LLM and dataset small on purpose, so you can run the training on a regular laptop like a MacBook Air M3 in about 15 minutes in case you don't have access to a GPU.
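As a side note (not part of the diff): the Row 4-9 commands above can also be run back to back from a small driver script. The following is only a minimal sketch; it assumes `additional-experiments.py` sits in the current working directory and accepts the flags exactly as listed.

```python
import subprocess

# One entry per Row 4-9 command from the README excerpt above.
experiments = [
    ["--trainable_layers", "all"],                         # Row 4
    ["--model_size", "gpt2-medium (355M)"],                # Row 5
    ["--model_size", "gpt2-large (774M)"],                 # Row 6
    ["--model_size", "gpt2-xl (1558M)"],                   # Row 7
    ["--weights", "random", "--trainable_layers", "all"],  # Row 8
    ["--context_length", "model_context_length"],          # Row 9
]

for extra_args in experiments:
    # Run each configuration sequentially and stop on the first failure.
    subprocess.run(["python", "additional-experiments.py", *extra_args], check=True)
```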
@@ -47,8 +49,8 @@ I've kept the LLM and dataset small on purpose, so you can run the training on a
 
 3. **Training All Layers vs. Last Transformer Block (Row 1 vs. 4)**: Training all layers shows a modest improvement of ~2% over just training the last transformer block, but it requires almost three times longer in terms of training duration.
 
-4. **Using Larger Pretrained Models (Row 1 vs 5, and Row 1 vs. 6)**: Employing a 3x larger pretrained model leads to worse results. However, using a 5x larger model improves performance compared to the initial model, as was anticipated. (The medium model was perhaps not well pretrained or the particular finetuning configuration works not as well for this model.)
+4. **Using Larger Pretrained Models (Row 1 vs. 5, and Row 1 vs. 6 and 7)**: Employing a 3x larger pretrained model leads to worse results. However, using a 5x larger model improves performance compared to the initial model, as was anticipated, and the 12x larger model improves the predictive performance even further. (The medium model was perhaps not pretrained as well, or the particular finetuning configuration does not work as well for this model.)
 
-5. **Using a Model with Random Weights vs. Pretrained Weights (Row 1 vs. 7)**: Utilizing a model with random weights yields results that are only slightly worse by 1.3% compared to using pretrained weights.
+5. **Using a Model with Random Weights vs. Pretrained Weights (Row 1 vs. 8)**: Utilizing a model with random weights yields results that are only slightly worse (by 1.3%) than using pretrained weights.
 
-6. **Padding Input to Full Context Length vs. Longest Training Example (Row 1 vs. 8)**: Padding the input to the full supported context length results is significantly worse.
+6. **Padding Input to Full Context Length vs. Longest Training Example (Row 1 vs. 9)**: Padding the input to the full supported context length gives significantly worse results.
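To make point 3 above concrete: the `all` and `last_block` settings differ only in which parameters are left with `requires_grad=True` before finetuning. The sketch below is illustrative rather than the script's actual code; it assumes a model that exposes `trf_blocks`, `final_norm`, and `out_head` attributes, as in the book's `GPTModel`.

```python
def set_trainable_layers(model, trainable_layers="last_block"):
    # Hypothetical helper: freeze everything first ...
    for param in model.parameters():
        param.requires_grad = False

    if trainable_layers == "all":
        # ... then unfreeze every weight (slower, ~2% better per point 3 above)
        for param in model.parameters():
            param.requires_grad = True
    elif trainable_layers == "last_block":
        # ... or unfreeze only the last transformer block and the final LayerNorm
        for param in model.trf_blocks[-1].parameters():
            param.requires_grad = True
        for param in model.final_norm.parameters():
            param.requires_grad = True

    # The classification head is trained in either case.
    for param in model.out_head.parameters():
        param.requires_grad = True
```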
@@ -315,7 +315,7 @@ if __name__ == "__main__":
     elif args.model_size == "gpt2-large (774M)":
         in_features = 1280
     elif args.model_size == "gpt2-xl (1558M)":
-        in_features = 1280
+        in_features = 1600
     else:
         raise ValueError("Invalid --model_size argument")
@@ -259,7 +259,7 @@ if __name__ == "__main__":
     elif args.model_size == "gpt2-large (774M)":
         in_features = 1280
     elif args.model_size == "gpt2-xl (1558M)":
-        in_features = 1280
+        in_features = 1600
     else:
         raise ValueError("Invalid --model_size argument")
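Both code hunks apply the same one-line fix: under the `gpt2-xl (1558M)` branch, the classification head's `in_features` is changed from 1280 (the GPT-2 Large hidden size) to 1600, GPT-2 XL's hidden size. A dictionary lookup is an alternative way to express this mapping; the helper below is only a sketch, not the repository's code.

```python
def head_in_features(model_size: str) -> int:
    # Hidden (embedding) sizes of the public GPT-2 checkpoints; the
    # classification head's in_features has to match the chosen model.
    emb_dim = {
        "gpt2-small (124M)": 768,
        "gpt2-medium (355M)": 1024,
        "gpt2-large (774M)": 1280,
        "gpt2-xl (1558M)": 1600,
    }
    if model_size not in emb_dim:
        raise ValueError("Invalid --model_size argument")
    return emb_dim[model_size]
```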