Mirror of https://github.com/rasbt/LLMs-from-scratch.git (synced 2025-06-26 23:50:03 +00:00)
Llama 3 (#384)

* Implement Llama 3.2
* Add Llama 3.2 files
* Exclude IMDB link because the Stanford website seems down
Parent: 8553644440
Commit: 8a448a4410
.github/workflows/check-links.yml (vendored), 2 changes
@@ -29,6 +29,6 @@ jobs:
       - name: Check links
         run: |
-          pytest --check-links ./ --check-links-ignore "https://platform.openai.com/*" --check-links-ignore "https://openai.com/*" --check-links-ignore "https://arena.lmsys.org" --check-links-ignore "https://www.reddit.com/r/*"
+          pytest --check-links ./ --check-links-ignore "https://platform.openai.com/*" --check-links-ignore "https://openai.com/*" --check-links-ignore "https://arena.lmsys.org" --check-links-ignore "https://www.reddit.com/r/*" --check-links-ignore "https://ai.stanford.edu/~amaas/data/sentiment/"
           # pytest --check-links ./ --check-links-ignore "https://platform.openai.com/*" --check-links-ignore "https://arena.lmsys.org" --retries 2 --retry-delay 5
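The `--check-links-ignore` values above are glob-style URL patterns: `https://platform.openai.com/*` excludes every link under that host, while the Stanford entry excludes one exact URL. As a rough illustration of how such wildcard matching behaves (a sketch using Python's `fnmatch`, which follows similar shell-style semantics; this is not the plugin's actual implementation):

```python
from fnmatch import fnmatch

# Ignore patterns taken from the workflow change above
IGNORE_PATTERNS = [
    "https://platform.openai.com/*",
    "https://openai.com/*",
    "https://arena.lmsys.org",
    "https://www.reddit.com/r/*",
    "https://ai.stanford.edu/~amaas/data/sentiment/",
]

def is_ignored(url: str) -> bool:
    """Return True if the URL matches any ignore pattern."""
    return any(fnmatch(url, pattern) for pattern in IGNORE_PATTERNS)

print(is_ignored("https://platform.openai.com/docs"))                # True: wildcard match
print(is_ignored("https://ai.stanford.edu/~amaas/data/sentiment/"))  # True: exact match
print(is_ignored("https://github.com/rasbt/LLMs-from-scratch"))      # False: no pattern matches
```

Excluding the whole Stanford sentiment URL this way means the CI job stays green while the site is down, at the cost of no longer detecting if that link later breaks permanently.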
.gitignore (vendored), 3 changes
@@ -38,6 +38,9 @@ ch05/06_user_interface/gpt2
 ch05/07_gpt_to_llama/Llama-2-7b
 ch05/07_gpt_to_llama/Llama-2-7b-chat
 ch05/07_gpt_to_llama/.cache
+ch05/07_gpt_to_llama/llama3-files
+ch05/07_gpt_to_llama/llama31-files
+ch05/07_gpt_to_llama/llama32-files

 ch06/01_main-chapter-code/gpt2
 ch06/02_bonus_additional-experiments/gpt2
@@ -117,6 +117,7 @@ Several folders contain optional materials as a bonus for interested readers:
 - [Optimizing Hyperparameters for Pretraining](ch05/05_bonus_hparam_tuning)
 - [Building a User Interface to Interact With the Pretrained LLM](ch05/06_user_interface)
 - [Converting GPT to Llama](ch05/07_gpt_to_llama)
+- [Llama 3.2 From Scratch](ch05/07_gpt_to_llama/standalone-llama32.ipynb)
 - **Chapter 6:**
 - [Additional experiments finetuning different layers and using larger models](ch06/02_bonus_additional-experiments)
 - [Finetuning different models on 50k IMDB movie review dataset](ch06/03_bonus_imdb-classification)
@@ -11,4 +11,4 @@
 - [04_learning_rate_schedulers](04_learning_rate_schedulers) contains code implementing a more sophisticated training function including learning rate schedulers and gradient clipping
 - [05_bonus_hparam_tuning](05_bonus_hparam_tuning) contains an optional hyperparameter tuning script
 - [06_user_interface](06_user_interface) implements an interactive user interface to interact with the pretrained LLM
-- [07_gpt_to_llama](07_gpt_to_llama) contains a step-by-step guide for converting a GPT architecture implementation to Llama and loads pretrained weights from Meta AI
+- [07_gpt_to_llama](07_gpt_to_llama) contains a step-by-step guide for converting a GPT architecture implementation to Llama 3.2 and loads pretrained weights from Meta AI