# Chapter 7: Finetuning to Follow Instructions
## Main Chapter Code
- [01_main-chapter-code](01_main-chapter-code) contains the main chapter code and exercise solutions
## Bonus Materials
- [02_dataset-utilities](02_dataset-utilities) contains utility code for preparing an instruction dataset
- [03_model-evaluation](03_model-evaluation) contains utility code for evaluating instruction responses using a local Llama 3 model and the GPT-4 API
- [04_preference-tuning-with-dpo](04_preference-tuning-with-dpo) implements code for preference finetuning with Direct Preference Optimization (DPO)
- [05_dataset-generation](05_dataset-generation) contains code to generate and improve synthetic datasets for instruction finetuning
- [06_user_interface](06_user_interface) implements an interactive user interface for chatting with the pretrained LLM
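The DPO approach used in [04_preference-tuning-with-dpo](04_preference-tuning-with-dpo) trains on pairs of (chosen, rejected) responses by comparing the policy model's log-probabilities against a frozen reference model. The snippet below is a minimal illustrative sketch of the DPO loss for a single pair, not the repository's implementation; the function name, the toy log-probability values, and `beta=0.1` are all assumptions:

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss for a single (chosen, rejected) preference pair.

    Each argument is the summed log-probability of a complete response
    under either the trainable policy or the frozen reference model.
    """
    # How much more the policy favors each response than the reference does
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    logits = beta * (chosen_ratio - rejected_ratio)
    # -log(sigmoid(logits)), written to avoid overflow for large negative logits
    return -logits if logits < -30 else math.log1p(math.exp(-logits))

# Toy numbers: the policy favors the chosen response more than the reference does
loss = dpo_loss(-12.0, -15.0, -13.0, -14.0)  # small positive loss
```

The `beta` hyperparameter scales the implicit reward margin: larger values penalize deviations from the reference model more strongly, smaller values let the policy drift further toward the preference data.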