mirror of https://github.com/rasbt/LLMs-from-scratch.git
synced 2025-10-26 15:29:25 +00:00
	Add "What's next" section (#432)
* Add What's next section
* Delete appendix-D/01_main-chapter-code/appendix-D-Copy2.ipynb
* Delete ch03/01_main-chapter-code/ch03-Copy1.ipynb
* Delete appendix-D/01_main-chapter-code/appendix-D-Copy1.ipynb
* Update ch07.ipynb
* Update ch07.ipynb
This commit is contained in:

parent 5348565e0f
commit 74e04a9169
@@ -2743,6 +2743,23 @@
     "- The [./load-finetuned-model.ipynb](./load-finetuned-model.ipynb) notebook illustrates how to load the finetuned model in a new session\n",
     "- You can find the exercise solutions in [./exercise-solutions.ipynb](./exercise-solutions.ipynb)"
    ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "b9cc51ec-e06c-4470-b626-48401a037851",
+   "metadata": {},
+   "source": [
+    "## What's next?\n",
+    "\n",
+    "- Congrats on completing the book; in case you are looking for additional resources, I added several bonus sections to this GitHub repository that you might find interesting\n",
+    "- The complete list of bonus materials can be viewed in the main README's [Bonus Material](https://github.com/rasbt/LLMs-from-scratch?tab=readme-ov-file#bonus-material) section\n",
+    "- To highlight a few of my favorites:\n",
+    "  1. [Direct Preference Optimization (DPO) for LLM Alignment (From Scratch)](../04_preference-tuning-with-dpo/dpo-from-scratch.ipynb) implements a popular preference tuning mechanism to align the model from this chapter more closely with human preferences\n",
+    "  2. [Llama 3.2 From Scratch (A Standalone Notebook)](../../ch05/07_gpt_to_llama/standalone-llama32.ipynb), a from-scratch implementation of Meta AI's popular Llama 3.2, including loading the official pretrained weights; if you are up to some additional experiments, you can replace the `GPTModel` model in each of the chapters with the `Llama3Model` class (it should work as a 1:1 replacement)\n",
+    "  3. [Converting GPT to Llama](../../ch05/07_gpt_to_llama) contains code with step-by-step guides that explain the differences between GPT-2 and the various Llama models\n",
+    "  4. [Understanding the Difference Between Embedding Layers and Linear Layers](../../ch02/03_bonus_embedding-vs-matmul/embeddings-and-linear-layers.ipynb) is a conceptual explanation illustrating that the `Embedding` layer in PyTorch, which we use at the input stage of an LLM, is mathematically equivalent to a linear layer applied to one-hot encoded data\n",
+    "- Happy further reading!"
+   ]
   }
  ],
  "metadata": {
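Bonus item 4 in the newly added cell states that PyTorch's `Embedding` layer is mathematically equivalent to a linear layer applied to one-hot encoded inputs. The snippet below is a minimal sanity check of that claim; it is not part of this commit, and the sizes and variable names are illustrative:

```python
import torch

torch.manual_seed(123)

num_tokens, embed_dim = 5, 3         # illustrative vocabulary/embedding sizes
token_ids = torch.tensor([2, 0, 4])  # a small batch of token IDs

# Standard embedding lookup, as used at the input stage of an LLM
embedding = torch.nn.Embedding(num_tokens, embed_dim)

# A linear layer sharing the same weights (transposed, since
# nn.Linear stores its weight as (out_features, in_features))
linear = torch.nn.Linear(num_tokens, embed_dim, bias=False)
linear.weight = torch.nn.Parameter(embedding.weight.T)

# Applying the linear layer to one-hot encoded IDs selects the same rows
one_hot = torch.nn.functional.one_hot(token_ids, num_classes=num_tokens).float()
print(torch.allclose(embedding(token_ids), linear(one_hot)))  # True
```

In other words, the embedding lookup is just a more efficient way of selecting rows of the weight matrix, which is the point the linked bonus notebook works through in detail.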
Sebastian Raschka