
Chapter 7: Finetuning to Follow Instructions

 

Main Chapter Code

 

Bonus Materials

  • 02_dataset-utilities contains utility code that can be used for preparing an instruction dataset
  • 03_model-evaluation contains utility code for evaluating instruction responses using a local Llama 3 model and the GPT-4 API
  • 04_preference-tuning-with-dpo implements code for preference finetuning with Direct Preference Optimization (DPO); a brief sketch of the DPO loss follows this list
  • 05_dataset-generation contains code to generate and improve synthetic datasets for instruction finetuning
  • 06_user_interface implements an interactive user interface to interact with the pretrained LLM
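
For orientation, the core of the DPO objective can be sketched as follows. This is a minimal illustration rather than the code in the bonus folder, and the function and argument names are hypothetical. The beta hyperparameter scales the implicit penalty for drifting away from the reference model, so increasing beta leads to less divergence between the finetuned model and the reference model.

```python
import torch.nn.functional as F

def dpo_loss(policy_chosen_logprobs, policy_rejected_logprobs,
             reference_chosen_logprobs, reference_rejected_logprobs,
             beta=0.1):
    # Each argument is a tensor of per-example sequence log-probabilities.
    # Log-ratios between the trainable policy and the frozen reference model:
    chosen_logratios = policy_chosen_logprobs - reference_chosen_logprobs
    rejected_logratios = policy_rejected_logprobs - reference_rejected_logprobs

    # DPO loss: -log sigmoid(beta * (chosen_logratio - rejected_logratio)).
    # A larger beta penalizes deviation from the reference model more strongly,
    # keeping the finetuned model closer to it.
    return -F.logsigmoid(beta * (chosen_logratios - rejected_logratios)).mean()
```

In practice, beta is commonly set to a small value such as 0.1, with larger values trading preference fit for closeness to the reference model.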


Link to the video