
Chapter 7: Finetuning to Follow Instructions

Main Chapter Code

  • 01_main-chapter-code contains the main chapter code and exercise solutions

Bonus Materials

  • 02_dataset-utilities contains utility code that can be used for preparing an instruction dataset (see the prompt-formatting sketch after this list)
  • 03_model-evaluation contains utility code for evaluating instruction responses using a local Llama 3 model and the GPT-4 API (see the scoring sketch after this list)
  • 04_preference-tuning-with-dpo implements code for preference finetuning with Direct Preference Optimization (DPO) (see the loss sketch after this list)
  • 05_dataset-generation contains code to generate and improve synthetic datasets for instruction finetuning
  • 06_user_interface implements an interactive user interface for interacting with the pretrained LLM
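
Before tokenization, each instruction record is typically converted into a single prompt string. The following is a minimal sketch of one common approach, assuming Alpaca-style records with "instruction" and "input" fields; the helper name format_input and the exact template wording are illustrative, not necessarily the utilities' exact implementation.

    def format_input(entry):
        # Build an Alpaca-style prompt from a dataset record.
        # `entry` is assumed to be a dict with "instruction" and "input" keys.
        instruction_text = (
            "Below is an instruction that describes a task. "
            "Write a response that appropriately completes the request."
            f"\n\n### Instruction:\n{entry['instruction']}"
        )
        # The optional "input" field supplies extra context for the task.
        input_text = f"\n\n### Input:\n{entry['input']}" if entry["input"] else ""
        return instruction_text + input_text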

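For automated evaluation, a judge model can grade each generated response against a reference answer. Below is a minimal sketch using the OpenAI Python client; the helper name score_response, the 0-to-100 scale, and the prompt wording are assumptions for illustration, not the repository's exact evaluation setup.

    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    def score_response(instruction, reference, model_response):
        # Ask the judge model for an integer grade; the prompt and scale
        # here are illustrative placeholders.
        prompt = (
            f"Given the instruction `{instruction}` "
            f"and the correct reference answer `{reference}`, "
            f"score the model response `{model_response}` "
            "on a scale from 0 to 100. Respond with the integer only."
        )
        reply = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}],
            temperature=0.0,  # deterministic grading
        )
        # Assumes the judge complies and returns a bare integer.
        return int(reply.choices[0].message.content)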

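DPO finetunes the model to prefer a "chosen" response over a "rejected" one while staying close to a frozen reference model. The function below is a minimal sketch of the standard DPO objective; it assumes the per-sequence log-probabilities have already been computed, and beta is the usual regularization strength controlling how far the policy may drift from the reference.

    import torch.nn.functional as F

    def dpo_loss(chosen_logprobs, rejected_logprobs,
                 ref_chosen_logprobs, ref_rejected_logprobs, beta=0.1):
        # Log-probability margins of the policy and the frozen reference model.
        pi_logratios = chosen_logprobs - rejected_logprobs
        ref_logratios = ref_chosen_logprobs - ref_rejected_logprobs
        # -log sigmoid(beta * margin difference): minimized when the policy
        # widens its chosen-vs-rejected margin relative to the reference.
        return -F.logsigmoid(beta * (pi_logratios - ref_logratios)).mean()
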
Link to the video