# Chapter 7: Finetuning to Follow Instructions

## Main Chapter Code

- [01_main-chapter-code](01_main-chapter-code) contains the main chapter code and exercise solutions

## Bonus Materials

- [02_dataset-utilities](02_dataset-utilities) contains utility code that can be used for preparing an instruction dataset
- [03_model-evaluation](03_model-evaluation) contains utility code for evaluating instruction responses using a local Llama 3 model and the GPT-4 API
- [04_preference-tuning-with-dpo](04_preference-tuning-with-dpo) implements code for preference finetuning with Direct Preference Optimization (DPO)
- [05_dataset-generation](05_dataset-generation) contains code to generate and improve synthetic datasets for instruction finetuning