# Chapter 7: Finetuning to Follow Instructions
 
## Main Chapter Code
- [01_main-chapter-code](01_main-chapter-code) contains the main chapter code and exercise solutions
 
## Bonus Materials
- [02_dataset-utilities](02_dataset-utilities) contains utility code that can be used for preparing an instruction dataset
- [03_model-evaluation](03_model-evaluation) contains utility code for evaluating instruction responses using a local Llama 3 model and the GPT-4 API
- [04_preference-tuning-with-dpo](04_preference-tuning-with-dpo) implements code for preference finetuning with Direct Preference Optimization (DPO)
- [05_dataset-generation](05_dataset-generation) contains code to generate and improve synthetic datasets for instruction finetuning
- [06_user_interface](06_user_interface) implements an interactive user interface for interacting with the pretrained LLM
<br>
<br>
[![Link to the video](https://img.youtube.com/vi/4yNswvhPWCQ/0.jpg)](https://www.youtube.com/watch?v=4yNswvhPWCQ)