Automated link checking (#117)

* Automated link checking

* Fix links in Jupyter Nbs
commit 55ebabf95c (parent 33b27368a3)
Author: Sebastian Raschka (committed by GitHub)
Date: 2024-04-12 19:08:34 -04:00
3 changed files with 19 additions and 24 deletions

File 1 of 3: GitHub Actions link-check workflow

@@ -1,4 +1,4 @@
-name: Check Markdown Links
+name: Check hyperlinks

 on:
   push:
@@ -9,27 +9,22 @@ on:
     - main

 jobs:
-  check-links:
+  test:
     runs-on: ubuntu-latest
     steps:
-    - name: Checkout Repository
-      uses: actions/checkout@v3
+    - uses: actions/checkout@v4

-    - name: Install Markdown Link Checker
-      run: npm install -g markdown-link-check
+    - name: Set up Python
+      uses: actions/setup-python@v5
+      with:
+        python-version: '3.10'

-    - name: Create config for markdown link checker
-      run: |
-        echo '{
-          "projectBaseUrl":"${{ github.workspace }}",
-          "ignorePatterns": [
-            {
-              "pattern": "^#"
-            }
-          ]
-        }' > $GITHUB_WORKSPACE/md_checker_config.json
+    - name: Install dependencies
+      run: |
+        python -m pip install --upgrade pip
+        pip install pytest pytest-check-links

-    - name: Find Markdown Files and Check Links
-      run: |
-        find . -name \*.md -print0 | xargs -0 -n1 markdown-link-check -c $GITHUB_WORKSPACE/md_checker_config.json
+    - name: Check links
+      run: |
+        pytest --check-links ./
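To reproduce the new CI check locally, the same two steps the workflow runs can be executed directly (a minimal sketch; pytest-check-links is the pytest plugin the workflow installs):

    # Install the same tools the workflow installs
    python -m pip install --upgrade pip
    pip install pytest pytest-check-links

    # Check links across the repository, as the CI job does
    pytest --check-links ./

One behavioral difference worth noting: the old markdown-link-check config skipped in-page anchor links via the "^#" ignore pattern, while the new job checks everything. If skipping is still needed, pytest-check-links accepts an ignore option (--check-links-ignore, a regular expression for links to skip), which this commit does not use.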

File 2 of 3: Jupyter notebook

@@ -1621,7 +1621,7 @@
    "id": "08218d9f-aa1a-4afb-a105-72ff96a54e73",
    "metadata": {},
    "source": [
-    "- **You may be interested in the bonus content comparing embedding layers with regular linear layers: [../02_bonus_efficient-multihead-attention](../02_bonus_efficient-multihead-attention)**"
+    "- **You may be interested in the bonus content comparing embedding layers with regular linear layers: [../03_bonus_embedding-vs-matmul](../03_bonus_embedding-vs-matmul)**"
    ]
   },
   {
@@ -1874,7 +1874,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.10.6"
+   "version": "3.10.10"
   }
  },
  "nbformat": 4,

File 3 of 3: Jupyter notebook (chapter 5 pretraining)

@@ -1164,7 +1164,7 @@
    "metadata": {},
    "source": [
     "- In this section, we finally implement the code for training the LLM\n",
-    "- We focus on a simple training function (if you are interested in augmenting this training function with more advanced techniques, such as learning rate warmup, cosine annealing, and gradient clipping, please refer to [Appendix D](../../appendix-D/03_main-chapter-code))\n",
+    "- We focus on a simple training function (if you are interested in augmenting this training function with more advanced techniques, such as learning rate warmup, cosine annealing, and gradient clipping, please refer to [Appendix D](../../appendix-D/01_main-chapter-code))\n",
     "\n",
     "<img src=\"https://sebastianraschka.com/images/LLMs-from-scratch-images/ch05_compressed/train-steps.webp\" width=300px>"
    ]
@@ -2028,7 +2028,7 @@
    "metadata": {},
    "source": [
     "- Previously, we only trained a small GPT-2 model using a very small short-story book for educational purposes\n",
-    "- Interested readers can also find a longer pretraining run on the complete Project Gutenberg book corpus in [../03_bonus_pretraining_on_gutenberg](03_bonus_pretraining_on_gutenberg)\n",
+    "- Interested readers can also find a longer pretraining run on the complete Project Gutenberg book corpus in [../03_bonus_pretraining_on_gutenberg](../03_bonus_pretraining_on_gutenberg)\n",
     "- Fortunately, we don't have to spend tens to hundreds of thousands of dollars to pretrain the model on a large pretraining corpus but can load the pretrained weights provided by OpenAI"
    ]
   },
@@ -2438,7 +2438,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.12.2"
+   "version": "3.10.10"
   }
  },
  "nbformat": 4,