
Chapter 7: Finetuning to Follow Instructions

 

Main Chapter Code

  • 01_main-chapter-code contains the main chapter code and exercise solutions

 

Bonus Materials

  • 02_dataset-utilities contains utility code that can be used for preparing an instruction dataset
  • 03_model-evaluation contains utility code for evaluating instruction responses using a local Llama 3 model and the GPT-4 API
  • 04_preference-tuning-with-dpo implements code for preference finetuning with Direct Preference Optimization (DPO); a minimal sketch of the DPO loss appears after this list
  • 05_dataset-generation contains code to generate and improve synthetic datasets for instruction finetuning
  • 06_user_interface implements an interactive user interface for chatting with the pretrained LLM
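
For readers who want a quick sense of what the preference finetuning in 04_preference-tuning-with-dpo revolves around, here is a minimal sketch of the DPO loss. It is not the bonus material's implementation: the function name, argument names, and the beta value are illustrative, and it assumes the per-response log-probabilities (summed over the response tokens) have already been computed for both the trainable policy model and a frozen reference model.

```python
import torch.nn.functional as F

def dpo_loss(
    policy_chosen_logprobs,      # log-prob of the preferred response under the policy, shape (batch,)
    policy_rejected_logprobs,    # log-prob of the dispreferred response under the policy, shape (batch,)
    reference_chosen_logprobs,   # same quantities under the frozen reference model
    reference_rejected_logprobs,
    beta=0.1,                    # illustrative scaling factor; tuned in practice
):
    # Log-ratios between the trainable policy and the frozen reference model
    chosen_logratios = policy_chosen_logprobs - reference_chosen_logprobs
    rejected_logratios = policy_rejected_logprobs - reference_rejected_logprobs

    # DPO nudges the policy to assign relatively higher probability
    # to the chosen response than to the rejected one
    logits = beta * (chosen_logratios - rejected_logratios)
    return -F.logsigmoid(logits).mean()
```

In practice, the per-response log-probabilities are obtained by summing token-level log-softmax scores over the response portion of each sequence, with the reference model kept frozen throughout training.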