mirror of
https://github.com/rasbt/LLMs-from-scratch.git
synced 2025-08-28 10:30:36 +00:00
fix typos, add codespell pre-commit hook (#264)
* fix typos, add codespell pre-commit hook

* Update .pre-commit-config.yaml

---------

Co-authored-by: Sebastian Raschka <mail@sebastianraschka.com>
This commit is contained in:
parent 3b79631672, commit 70cfced899
.pre-commit-config.yaml (new normal file, 17 lines)
@@ -0,0 +1,17 @@
+# A tool used by developers to identify spelling errors in text.
+# Readers may ignore this file.
+
+default_stages: [commit]
+
+repos:
+  - repo: https://github.com/codespell-project/codespell
+    rev: v2.3.0
+    hooks:
+      - id: codespell
+        name: codespell
+        description: Check for spelling errors in text.
+        entry: codespell
+        language: python
+        args:
+          - "-L ocassion,occassion,ot,te,tje"
+        files: \.txt$|\.md$|\.py|\.ipynb$
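As an aside, the two knobs in this hook are the `files` regex, which selects which paths get checked, and the `-L` list, which suppresses words that look like typos but are intentional (here, examples kept in the book's text). A minimal, hypothetical Python sketch of that filtering logic — not codespell itself, and with a made-up `KNOWN_TYPOS` table — looks like this:

```python
import re

# Hypothetical mini spell check illustrating the hook's two knobs.
# KNOWN_TYPOS is an invented stand-in for codespell's dictionary.
KNOWN_TYPOS = {"calld": "called", "useable": "usable",
               "lenght": "length", "intput": "input"}
IGNORE = {"ocassion", "occassion", "ot", "te", "tje"}  # the -L list
FILE_PATTERN = re.compile(r"\.txt$|\.md$|\.py|\.ipynb$")  # the files: filter

def check(path, text):
    """Return (typo, suggestion) pairs found in a checked file."""
    if not FILE_PATTERN.search(path):
        return []  # path not covered by the `files` filter
    hits = []
    for word in re.findall(r"[A-Za-z]+", text):
        if word in IGNORE:
            continue  # suppressed via -L
        if word in KNOWN_TYPOS:
            hits.append((word, KNOWN_TYPOS[word]))
    return hits
```

With the hook configured, running `pre-commit install` once in the repo makes codespell run automatically on each commit.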
@@ -317,7 +317,7 @@
    "id": "f78e346f-3b85-44e6-9feb-f01131381148"
   },
   "source": [
-   "- The implementation below uses PyTorch's [`scaled_dot_product_attention`](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html) function, which implements a memory-optimized version of self-attention calld [flash attention](https://arxiv.org/abs/2205.14135)"
+   "- The implementation below uses PyTorch's [`scaled_dot_product_attention`](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html) function, which implements a memory-optimized version of self-attention called [flash attention](https://arxiv.org/abs/2205.14135)"
   ]
  },
  {
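For context, the function named in that hunk computes softmax(QK^T / sqrt(d_k))V; flash attention produces the same result but in tiles, without materializing the full attention matrix. A plain-Python sketch of the dense computation (lists instead of tensors, for illustration only):

```python
import math

def softmax(row):
    m = max(row)  # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in row]
    total = sum(exps)
    return [e / total for e in exps]

def scaled_dot_product_attention(q, k, v):
    """softmax(Q K^T / sqrt(d_k)) V on nested lists."""
    d_k = len(q[0])
    # attention scores: one row per query, one column per key
    scores = [[sum(qi * ki for qi, ki in zip(qrow, krow)) / math.sqrt(d_k)
               for krow in k] for qrow in q]
    weights = [softmax(row) for row in scores]
    # each output row is a weighted sum of the value vectors
    return [[sum(w * vi for w, vi in zip(wrow, vcol)) for vcol in zip(*v)]
            for wrow in weights]
```

Each row of weights sums to 1, so the output is a convex combination of the value vectors; a query most similar to a given key weights that key's value most heavily.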
@@ -1043,7 +1043,7 @@
    "id": "dec7d03d-9ff3-4ca3-ad67-01b67c2f5457",
    "metadata": {},
    "source": [
-   "- We are almost there: now let's plug in the transformer block into the architecture we coded at the very beginning of this chapter so that we obtain a useable GPT architecture\n",
+   "- We are almost there: now let's plug in the transformer block into the architecture we coded at the very beginning of this chapter so that we obtain a usable GPT architecture\n",
    "- Note that the transformer block is repeated multiple times; in the case of the smallest 124M GPT-2 model, we repeat it 12 times:"
   ]
  },
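The "repeat it 12 times" in that hunk means the same block structure is applied in sequence, each repetition with its own weights. A toy sketch of the stacking pattern, with a placeholder block standing in for the real attention-plus-feed-forward layers:

```python
N_LAYERS = 12  # smallest 124M GPT-2 configuration

def make_block(layer_idx):
    # Placeholder: a real transformer block applies attention and an MLP
    # with residual connections; each layer_idx gets its own parameters.
    def block(x):
        return x + 1
    return block

blocks = [make_block(i) for i in range(N_LAYERS)]

def forward(x):
    # Pass the input through all 12 blocks in order
    for block in blocks:
        x = block(x)
    return x
```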
@@ -370,7 +370,7 @@ if __name__ == "__main__":
         action='store_true',
         default=False,
         help=(
-            "Disable padding, which means each example may have a different lenght."
+            "Disable padding, which means each example may have a different length."
             " This requires setting `--batch_size 1`."
         )
     )
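The help text in that hunk reflects a general constraint: examples in a batch must share one length, so unequal examples are either padded to the longest one or processed one at a time. A minimal collate sketch (hypothetical `PAD_ID` and function name, token ids as plain lists):

```python
PAD_ID = 0  # hypothetical padding token id

def collate(batch, disable_padding=False):
    """Pad a batch of token-id lists to a common length, or refuse to batch."""
    if disable_padding:
        # Without padding, lengths may differ, so only batch size 1 works
        assert len(batch) == 1, "without padding, use --batch_size 1"
        return batch
    longest = max(len(seq) for seq in batch)
    return [seq + [PAD_ID] * (longest - len(seq)) for seq in batch]
```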
@@ -166,7 +166,7 @@
    " return response.choices[0].message.content\n",
    "\n",
    "\n",
-   "# Prepare intput\n",
+   "# Prepare input\n",
    "sentence = \"I ate breakfast\"\n",
    "prompt = f\"Convert the following sentence to passive voice: '{sentence}'\"\n",
    "run_chatgpt(prompt, client)"