diff --git a/ch04/08_deltanet/README.md b/ch04/08_deltanet/README.md
index 8bf719c..ca533e0 100644
--- a/ch04/08_deltanet/README.md
+++ b/ch04/08_deltanet/README.md
@@ -4,7 +4,7 @@ Recently, [Qwen3-Next](https://qwen.ai/blog?id=4074cca80393150c248e508aa62983f9c
Both Qwen3-Next and Kimi Linear use a 3:1 ratio, meaning for every three transformer blocks employing the linear Gated DeltaNet variant, there’s one block that uses full attention, as shown in the figure below.
-
+
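
For intuition, a 3:1 schedule is just a repeating layer-type pattern. The sketch below is a hypothetical illustration (the helper name and layer-type strings are made up, not code from this repository):

```python
# Hypothetical sketch of a 3:1 hybrid layer schedule:
# three Gated DeltaNet blocks for every full-attention block.
def make_layer_schedule(num_layers, ratio=3):
    layer_types = []
    for layer_idx in range(num_layers):
        if (layer_idx + 1) % (ratio + 1) == 0:
            layer_types.append("full_attention")
        else:
            layer_types.append("gated_deltanet")
    return layer_types

print(make_layer_schedule(8))
# ['gated_deltanet', 'gated_deltanet', 'gated_deltanet', 'full_attention',
#  'gated_deltanet', 'gated_deltanet', 'gated_deltanet', 'full_attention']
```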
@@ -125,7 +125,7 @@ The delta rule part refers to computing the difference (delta, Δ) between new a
Gated DeltaNet has a gate similar to the gate in gated attention discussed earlier, except that it uses a SiLU activation instead of a logistic sigmoid, as illustrated below. (The SiLU choice likely improves gradient flow and stability compared to the standard sigmoid.)
-
+
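
As a minimal sketch of such an output gate (the tensor shapes and the `W_gate` projection are illustrative assumptions, not the repository's code), replacing the logistic sigmoid with SiLU looks like this:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
b, num_tokens, d = 2, 4, 8                    # toy batch, sequence, hidden sizes
hidden = torch.randn(b, num_tokens, d)        # e.g., the DeltaNet block output
W_gate = torch.nn.Linear(d, d)                # hypothetical gate projection

gate_sigmoid = torch.sigmoid(W_gate(hidden))  # classic gate, values in (0, 1)
gate_silu = F.silu(W_gate(hidden))            # SiLU gate: x * sigmoid(x)

gated_output = gate_silu * hidden             # elementwise gating of the output
print(gated_output.shape)                     # torch.Size([2, 4, 8])
```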
However, as shown in the figure above, the "gated" in Gated DeltaNet also refers to several additional gates:
@@ -271,7 +271,7 @@ context = context.reshape(b, num_tokens, self.d_out)
-
+
In Gated DeltaNet, there's no *n*-by-*n* attention matrix. Instead, the model processes tokens one by one. It keeps a running memory (a state) that gets updated as each new token comes in. This is implemented as follows, where `S` is the state that gets updated recurrently at each time step *t*.
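
As a rough, per-head sketch of such a recurrent update (toy shapes, sigmoid-parameterized decay `alpha` and write-strength `beta`; variable names are illustrative and not taken from the repository's implementation):

```python
import torch

torch.manual_seed(0)
num_tokens, d_k, d_v = 5, 4, 4                  # toy sizes for a single head
q = torch.randn(num_tokens, d_k)
k = torch.nn.functional.normalize(torch.randn(num_tokens, d_k), dim=-1)
v = torch.randn(num_tokens, d_v)
beta = torch.sigmoid(torch.randn(num_tokens))   # per-token write strength
alpha = torch.sigmoid(torch.randn(num_tokens))  # per-token decay gate

S = torch.zeros(d_k, d_v)   # the running memory (state), one matrix per head
outputs = []
for t in range(num_tokens):
    k_t, v_t, q_t = k[t], v[t], q[t]
    v_old = S.T @ k_t       # what the memory currently stores for key k_t
    # Delta-rule update: decay the old state, then overwrite the entry for k_t
    # with the difference (delta) between the new and the decayed old value.
    S = alpha[t] * S + beta[t] * torch.outer(k_t, v_t - alpha[t] * v_old)
    outputs.append(S.T @ q_t)   # read-out for the current query

out = torch.stack(outputs)      # (num_tokens, d_v); no n-by-n attention matrix
print(out.shape)
```

Because the state `S` has a fixed size per head, the memory needed for this mechanism stays constant with respect to context length.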
@@ -353,4 +353,4 @@ uv run plot_memory_estimates_gated_deltanet.py \
Note that the above computes the `head_dim` as `emb_dim / n_heads`, i.e., 2048 / 16 = 128.
-
+