Marek Połom f333d7fe7f
feat: Json elements to HTML converter (#3936)
## NOTE
`test_unstructured_ingest/expected-structured-output-html` contains all
test HTML fixtures. The original JSON files from which these HTML fixtures
are generated were taken from
`test_unstructured_ingest/expected-structured-output`.

<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8"/>
<meta content="width=device-width, initial-scale=1.0" name="viewport"/>
<title>
</title>
</head>
<body>
<p class="NarrativeText" id="7581b3e14a56c276896da707704c221e">
output values. These are concatenated and once again projected, resulting in the final values, as depicted in Figure 2.
</p>
<p class="NarrativeText" id="5f0b9e258d134a12434aaa080638e9de">
Multi-head attention allows the model to jointly attend to information from different representation subspaces at different positions. With a single attention head, averaging inhibits this.
</p>
<div class="Formula" id="2f5b0b2ffa8872dde498f34cd4af6bd9">
\mathrm{MultiHead}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_1, \ldots, \mathrm{head}_h)\, W^O \quad \text{where } \mathrm{head}_i = \mathrm{Attention}(Q W_i^Q,\, K W_i^K,\, V W_i^V)
</div>
<p class="NarrativeText" id="703f1d4e9204c8b7ea94191f87138425">
Where the projections are parameter matrices W_i^Q ∈ R^(d_model × d_k), W_i^K ∈ R^(d_model × d_k), W_i^V ∈ R^(d_model × d_v) and W^O ∈ R^(h·d_v × d_model).
</p>
<p class="NarrativeText" id="e3e4737377b1614b02426ccc77bdcfc3">
In this work we employ h = 8 parallel attention layers, or heads. For each of these we use d_k = d_v = d_model/h = 64. Due to the reduced dimension of each head, the total computational cost is similar to that of single-head attention with full dimensionality.
</p>
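<p>
A minimal NumPy sketch may help make the shapes in the multi-head attention formula above concrete. It is illustrative only: the helper names (attention, multi_head_attention), the random weights and the sequence length of 10 are assumptions, while h = 8, d_model = 512 and d_k = d_v = 64 come from the text.
</p>
<pre><code class="language-python">
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-1, -2) / np.sqrt(d_k)
    return softmax(scores) @ V

def multi_head_attention(X, W_Q, W_K, W_V, W_O, h=8):
    # X: (seq_len, d_model); W_Q, W_K, W_V hold h per-head projection matrices
    heads = []
    for i in range(h):
        Q, K, V = X @ W_Q[i], X @ W_K[i], X @ W_V[i]   # project into each subspace
        heads.append(attention(Q, K, V))               # (seq_len, d_v) per head
    return np.concatenate(heads, axis=-1) @ W_O        # concatenate, then project with W_O

d_model, h = 512, 8
d_k = d_v = d_model // h                               # 64, as stated in the text
rng = np.random.default_rng(0)
X = rng.normal(size=(10, d_model))                     # 10 positions
W_Q = [rng.normal(size=(d_model, d_k)) for _ in range(h)]
W_K = [rng.normal(size=(d_model, d_k)) for _ in range(h)]
W_V = [rng.normal(size=(d_model, d_v)) for _ in range(h)]
W_O = rng.normal(size=(h * d_v, d_model))
print(multi_head_attention(X, W_Q, W_K, W_V, W_O).shape)  # (10, 512)
</code></pre>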
<h1 class="Title" id="31e28cc49f5625cec5e262fbb4b7e5f0">
3.2.3 Applications of Attention in our Model
</h1>
<p class="NarrativeText" id="f84e983da98f26bd5c141846aeffd0aa">
The Transformer uses multi-head attention in three different ways:
</p>
<li class="ListItem" id="fd24bf7bf21b4aab2a36021f9ebb253b">
• In "encoder-decoder attention" layers, the queries come from the previous decoder layer, and the memory keys and values come from the output of the encoder. This allows every position in the decoder to attend over all positions in the input sequence. This mimics the typical encoder-decoder attention mechanisms in sequence-to-sequence models such as [38, 2, 9].
</li>
<li class="ListItem" id="77762865993fd26c55c87cb45d75cad8">
• The encoder contains self-attention layers. In a self-attention layer all of the keys, values and queries come from the same place, in this case, the output of the previous layer in the encoder. Each position in the encoder can attend to all positions in the previous layer of the encoder.
</li>
<li class="ListItem" id="41b9b9d2a4329a8f6075f4776403c2de">
• Similarly, self-attention layers in the decoder allow each position in the decoder to attend to all positions in the decoder up to and including that position. We need to prevent leftward information flow in the decoder to preserve the auto-regressive property. We implement this inside of scaled dot-product attention by masking out (setting to −∞) all values in the input of the softmax which correspond to illegal connections (a short sketch of this masking follows the list). See Figure 2.
</li>
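<p>
The masking in the last item above can be sketched in a few lines of NumPy. This is not the paper's reference code: the function name causal_masked_attention and the random single-head inputs are assumptions. The point it shows is that entries of the score matrix above the diagonal are set to −∞ before the softmax, so their attention weights become zero and no position can attend to a later one.
</p>
<pre><code class="language-python">
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def causal_masked_attention(Q, K, V):
    # Scaled dot-product attention with the decoder's causal mask
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                # (seq_len, seq_len)
    seq_len = scores.shape[0]
    # Illegal connections: position i attending to any later position j
    future = np.triu(np.ones((seq_len, seq_len), dtype=bool), k=1)
    scores = np.where(future, -np.inf, scores)     # set them to -inf before the softmax
    return softmax(scores) @ V

rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(5, 64))
print(causal_masked_attention(Q, K, V).shape)      # (5, 64)
</code></pre>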
<h1 class="Title" id="3b1f6da814e3826309b614d8b8dc9266">
3.3 Position-wise Feed-Forward Networks
</h1>
<p class="NarrativeText" id="46bb05e8d9c19147942fb75345ae3dbb">
In addition to attention sub-layers, each of the layers in our encoder and decoder contains a fully connected feed-forward network, which is applied to each position separately and identically. This consists of two linear transformations with a ReLU activation in between.
</p>
<div class="Formula" id="eda9b46d50730928c8437d6149e01a2b">
\mathrm{FFN}(x) = \max(0,\, x W_1 + b_1)\, W_2 + b_2 \qquad (2)
</div>
<p class="NarrativeText" id="43c1741dc91b5b67a03a726873df3be5">
While the linear transformations are the same across different positions, they use different parameters from layer to layer. Another way of describing this is as two convolutions with kernel size 1. The dimensionality of input and output is d_model = 512, and the inner layer has dimensionality d_ff = 2048.
</p>
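<p>
Equation (2) translates almost line for line into code. The NumPy sketch below uses the d_model = 512 and d_ff = 2048 quoted above; the function name position_wise_ffn and the random weights are illustrative assumptions.
</p>
<pre><code class="language-python">
import numpy as np

def position_wise_ffn(x, W1, b1, W2, b2):
    # Equation (2): FFN(x) = max(0, x W_1 + b_1) W_2 + b_2, applied to every position identically
    return np.maximum(0.0, x @ W1 + b1) @ W2 + b2

d_model, d_ff = 512, 2048                              # dimensions quoted in the text
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(d_model, d_ff)), np.zeros(d_ff)
W2, b2 = rng.normal(size=(d_ff, d_model)), np.zeros(d_model)
x = rng.normal(size=(10, d_model))                     # 10 positions
print(position_wise_ffn(x, W1, b1, W2, b2).shape)      # (10, 512)
</code></pre>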
<h1 class="Title" id="63fc763509dec0fa03ba8296e4b0616e">
3.4 Embeddings and Softmax
</h1>
<p class="NarrativeText" id="ab8cefbb53c308302ee0e3c0c7ecfd25">
Similarly to other sequence transduction models, we use learned embeddings to convert the input tokens and output tokens to vectors of dimension d_model. We also use the usual learned linear transformation and softmax function to convert the decoder output to predicted next-token probabilities. In our model, we share the same weight matrix between the two embedding layers and the pre-softmax linear transformation, similar to [30]. In the embedding layers, we multiply those weights by √d_model.
</p>
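<p>
The weight sharing and the √d_model scaling can be pictured with the short NumPy sketch below. The vocabulary size, the matrix name E and the helper functions are illustrative assumptions; what it shows is one matrix serving both as the embedding table (scaled by √d_model on lookup) and, transposed, as the pre-softmax linear transformation.
</p>
<pre><code class="language-python">
import numpy as np

d_model, vocab_size = 512, 1000                  # vocab_size is a placeholder
rng = np.random.default_rng(0)
E = rng.normal(size=(vocab_size, d_model))       # single shared weight matrix

def embed(token_ids):
    # Embedding lookup, multiplied by sqrt(d_model) as described in the text
    return E[token_ids] * np.sqrt(d_model)

def presoftmax_logits(decoder_output):
    # The pre-softmax linear transformation reuses the transposed embedding matrix
    return decoder_output @ E.T                  # (seq_len, vocab_size)

tokens = np.array([3, 1, 4, 1, 5])
print(embed(tokens).shape)                                      # (5, 512)
print(presoftmax_logits(rng.normal(size=(7, d_model))).shape)   # (7, 1000)
</code></pre>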
<div class="Footer" id="b45e24bb89196d4b50d76df531acfaf2">
5
</div>
</body>
</html>