Fixes the order of content-type detection strategies for byte-encoded JSONs.
Example:
```python
import io
import json

from unstructured.file_utils.filetype import detect_filetype

json_bytes = json.dumps([{"example": "data"}]).encode("utf-8")
file_buffer = io.BytesIO(json_bytes)
detect_filetype(file=file_buffer, metadata_file_path="filename.pdf")
```
- Before: PDF
- Now: JSON
This PR targets the most memory-expensive operation in partitioning PDFs and
images: deduplicating pdfminer elements. On large pages the number of
elements can exceed 10k, which generates multiple 10k x 10k double-precision
float matrices during deduplication, pushing peak memory usage close to 13GB.

This PR breaks the computation down by computing partial IOU: it computes the
IOU of blocks of 2000 elements against all elements at a time, reducing peak
memory usage by about 10x, to around 1.6GB.

The block size is configurable based on the user's preferred peak memory
usage; it is set via the env variable `UNST_MATMUL_MEMORY_CAP_IN_GB`.
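A minimal sketch of the block-wise idea, assuming `(x1, y1, x2, y2)` boxes in a `numpy` array; the function name, threshold, and greedy keep-rule are illustrative, not the actual implementation:
```python
import numpy as np

def deduplicate_blockwise(boxes: np.ndarray, iou_threshold: float = 0.9, block_size: int = 2000) -> np.ndarray:
    """Return a boolean keep-mask; a box is dropped when it duplicates an earlier kept box."""
    n = len(boxes)
    keep = np.ones(n, dtype=bool)
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    for start in range(0, n, block_size):
        end = min(start + block_size, n)
        block = boxes[start:end]
        # intermediate matrices are (block_size x n) instead of (n x n)
        x1 = np.maximum(block[:, None, 0], boxes[None, :, 0])
        y1 = np.maximum(block[:, None, 1], boxes[None, :, 1])
        x2 = np.minimum(block[:, None, 2], boxes[None, :, 2])
        y2 = np.minimum(block[:, None, 3], boxes[None, :, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        union = areas[start:end, None] + areas[None, :] - inter
        iou = inter / np.maximum(union, 1e-10)
        for offset in range(end - start):
            i = start + offset
            # drop box i if it overlaps an earlier box that is still kept
            if np.any((iou[offset, :i] > iou_threshold) & keep[:i]):
                keep[i] = False
    return keep
```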
The purpose of this PR is to enable registering new file types dynamically.
The PR enables this through two primary functions (a usage sketch follows the list):
1. `unstructured.file_utils.model.create_file_type`: registers a new
`FileType` enum member, which enables the rest of unstructured to understand
the new type of file.
2. `unstructured.file_utils.model.register_partitioner`: a decorator that
registers a partitioner function to run for a file type.
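A hedged usage sketch of the two functions above; the exact keyword arguments to `create_file_type` and the decorator form of `register_partitioner` are assumptions based on this description:
```python
from unstructured.file_utils.model import create_file_type, register_partitioner

# 1. register the new FileType so the rest of unstructured recognizes it
#    (the mime-type/extension keyword arguments here are assumptions)
FOO_FILE_TYPE = create_file_type("FOO", canonical_mime_type="application/x-foo", extensions=[".foo"])

# 2. register the partitioner to run for that file type (decorator form assumed)
@register_partitioner(FOO_FILE_TYPE)
def partition_foo(filename=None, file=None, **kwargs):
    ...
```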
---------
Co-authored-by: Roman Isecke <136338424+rbiseck3@users.noreply.github.com>
## NOTE
`test_unstructured_ingest/expected-structured-output-html` contains all
test HTML fixtures. Original JSON files, from which these HTML fixtures
are generated, were taken from
`test_unstructured_ingest/expected-structured-output`
This PR allows element types with CamelCase names to be extractable using the
`extract_image_block_types` parameter.
Before: specifying `extract_image_block_types=["NarrativeText"]` (or any
casing of `NarrativeText`) would raise a warning that it doesn't match any
available types, and no images would be extracted for this element type.
Now: specifying `extract_image_block_types=["NarrativeText"]` extracts images
for this element type.
## testing
```python
from unstructured.partition.auto import partition
f = "example-docs/pdf/embedded-images-tables.pdf"
elements = partition(f, strategy="hi_res", extract_image_block_types=["narrativetext"])
```
Without this PR no figures would be extracted. With this PR a local folder is
created containing images of the narrative-text elements, at paths like
`./figures/figure-1-1.jpg`.
---------
Co-authored-by: ryannikolaidis <1208590+ryannikolaidis@users.noreply.github.com>
This pull request fixes the scenario where a SpooledTemporaryFile is passed
to `detect_filetype`. In such cases a numeric value was assigned as `name`
(and it couldn't be overwritten, since SpooledTemporaryFile can't have fields
assigned 😩), so I added another scenario to our object factory in which we
parse this type of file.
For `BytesIO`, the `name` attribute is `None` as it should be, and other
metadata fields are leveraged for file-type recognition.
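Illustrative only, the scenario this fix addresses: a `SpooledTemporaryFile` (whose `name` is not a usable filename and cannot be reassigned) handed to file-type detection:
```python
import tempfile

from unstructured.file_utils.filetype import detect_filetype

with tempfile.SpooledTemporaryFile() as f:
    # SpooledTemporaryFile defaults to binary mode; its `name` is not a filename
    f.write(b'[{"example": "data"}]')
    f.seek(0)
    filetype = detect_filetype(file=f)
```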
This pull request adds the ability to configure multiple pdfminer parameters
(and can easily be extended to additional parameters). One of the parameters
overrides the default from the `LAParams` config class.
Example:
```python3
partition(
    filename=example_doc_path("pdf/layout-parser-paper-fast.pdf"),
    pdfminer_line_margin=1.123,
    pdfminer_char_margin=None,
    pdfminer_line_overlap=0.0123,
    pdfminer_word_margin=3.21,
)
# `pdfminer_mock` is a mock from the unit test that captures the pdfminer call;
# note that the `None`-valued char_margin is not forwarded
assert pdfminer_mock.call_args.kwargs == {
    "line_margin": 1.123,
    "line_overlap": 0.0123,
    "word_margin": 3.21,
}
```
---------
Co-authored-by: ryannikolaidis <1208590+ryannikolaidis@users.noreply.github.com>
Co-authored-by: plutasnyy <plutasnyy@users.noreply.github.com>
### Description
NDJSON files were being detected as JSON because they share the same
mime-type. This adds additional logic to skip mime-type based detection when
the extension is `.ndjson`.
This PR rewrites the logic in `unstructured_inference` that merges extracted
with inferred layout, using vectorized operations. The goals are to:
- vectorize the operation to improve memory and CPU efficiency (see the sketch after this list)
- apply the logic uniformly so that order is not a factor in the merging
results (the `unstructured_inference` version uses loops and modifies the
contents of the inner loop on the fly, so the order of the outer loop, which
is the order of extracted elements, becomes a factor)
- rewrite the loop into clear steps with clear rules
- set the stage for follow-up improvements
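A minimal, hypothetical sketch of the vectorized idea: decide for every extracted element which inferred element (if any) it belongs to in one broadcasted pass, so iteration order cannot affect the result. The function name, `(x1, y1, x2, y2)` box layout, and threshold are assumptions, not the actual `unstructured_inference` code:
```python
import numpy as np

def assign_extracted_to_inferred(extracted: np.ndarray, inferred: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """For each extracted box, return the index of the inferred box covering most of it, or -1."""
    x1 = np.maximum(extracted[:, None, 0], inferred[None, :, 0])
    y1 = np.maximum(extracted[:, None, 1], inferred[None, :, 1])
    x2 = np.minimum(extracted[:, None, 2], inferred[None, :, 2])
    y2 = np.minimum(extracted[:, None, 3], inferred[None, :, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    areas = (extracted[:, 2] - extracted[:, 0]) * (extracted[:, 3] - extracted[:, 1])
    coverage = inter / np.maximum(areas[:, None], 1e-10)  # fraction of each extracted box covered
    best = coverage.argmax(axis=1)
    best[coverage.max(axis=1) < threshold] = -1  # not part of any inferred element
    return best
```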
While this PR aims to reproduce the existing behavior as much as possible, it
is not an exact replica of the looped version. Because order is no longer a
factor, some extracted elements that used to not be considered part of a
larger inferred element (due to a sub-optimal processing order) are now
properly merged. This led to changes in one ingest test. For example, the
change shows that we now properly merge the section number with the section
title into the full title element.
## Test:
Since the goal of this refactor is to preserve as much existing behavior as
possible, we rely on existing tests. As mentioned above, the one file that
changed output during the ingest test is a net positive change.
---------
Co-authored-by: ryannikolaidis <1208590+ryannikolaidis@users.noreply.github.com>
Co-authored-by: badGarnet <badGarnet@users.noreply.github.com>
#### Summary
A recent security review showed that it was possible to partition
arbitrary local files in cases where the filetype supports an "include"
functionality that brings in the content of files external to the
partitioned file. This affects `rst` and `org` files.
#### Fix
This PR fixes the above issue by passing the parameter `sandbox=True` in
all cases where `pypandoc.convert_file` is called.
Note I also added the parameter to a call to this method in the ODT
code. I haven't investigated whether there was a security issue with ODT
files, but it seems better to use pandoc in sandbox mode given the
security issues we know about.
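For illustration, a call of the kind being hardened; the filename and formats here are placeholders, and the key part is `sandbox=True`, which keeps pandoc from reading files outside the document being converted:
```python
import pypandoc

# "include" directives in the source document can no longer pull in external local files
html = pypandoc.convert_file("example.rst", to="html", format="rst", sandbox=True)
```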
#### Testing
To verify that the tests that are added with this PR find the relevant
issue:
- Remove the `sandbox=True` text from
`unstructured/file_utils/file_conversion.py` line 17.
- Run the tests
`test_unstructured.partition.test_rst.test_rst_wont_include_external_files`
and
`test_unstructured.partition.test_org.test_org_wont_include_external_files`.
Both should fail due to the partitioning containing the word "wombat",
which only appears in a file external to the partitioned file.
- Add the parameter back in, and the tests pass.
This PR:
- Fixes removal of HTML tags that exist in `<td>` cells.
- The stripping function was in general problematic to implement in an easy
and straightforward way (you can't modify `descendants` in-place), so instead
of patching something in the table cell I added stripping everywhere in the
same consistent way. This is why some tests needed small edits removing one
whitespace in each tag. I believe this won't cause any problems for
downstream tasks.
Tested HTML:
```html
<table class="Table">
<tbody>
<tr>
<td colspan="2">
Some text
</td>
<td>
<input checked="" class="Checkbox" type="checkbox"/>
</td>
</tr>
</tbody>
</table>
```
Before & After
```html
'<table class="Table" id="..."> <tbody> <tr> <td colspan="2">Some text</td><td></td></tr></tbody></table>'
'<table class="Table" id="..."><tbody><tr><td colspan="2">Some text</td><td><input checked="" type="checkbox"/></td></tr></tbody></table>''
```
- There is a bug in deciding whether a page has tables before performing
table extraction: the logic checks whether the id associated with a
Table-type element is truthy.
- However, it should check whether the id is `None`, because the id can
legitimately be 0 (the first type of element on the page).
- The fix updates the logic (illustrated below).
- It also adds a unit test for this specific case.
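An illustration of the two checks (the variable name is made up for clarity):
```python
# id of the Table-type element on the page; 0 is a perfectly valid id
table_element_id = 0

# before: truthiness check -> 0 is falsy, so the page is wrongly treated as having no tables
page_has_tables = bool(table_element_id)           # False

# after: only None means "no Table element on this page"
page_has_tables = table_element_id is not None     # True
```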
I noticed that `make tidy` wasn't working in my development environment.
This happens if you, a developer, forget to follow the specific
instructions in `README.md` and install exactly the right versions of
the necessary tools, including a *quite old* version of Ruff. This
version will nonetheless warn you:
```
warning: `ruff <path>` is deprecated. Use `ruff check <path>` instead.
```
So this fixes that, in order to future-proof and avoid confusion!
This PR fixes a bug in `build_layout_elements_from_ocr_regions` where texts
were joined in incorrect order.
The bug was due to incorrect masking of the `ocr_regions` after some were
already selected as one of the final groups. The fix uses a simpler method:
the same indices that add the regions to the final groups are used to mask
them, so they are not considered again.
## Testing
This PR adds a unit test specifically aimed at this bug. Without the fix the
test would fail.
Additionally, any PDF file with repeated texts can potentially trigger this
bug. E.g., create a simple PDF using the test text
```python
"LayoutParser: \n\nA Unified Toolkit for Deep Learning Based Document Image\n\nLayoutParser for Deep Learning"
```
and partition it with `ocr_only` mode on the main branch: it hits this bug
and outputs text where the position of the second "LayoutParser" is
incorrect.
```python
[
    'LayoutParser:',
    'A Unified Toolkit for Deep Learning Based Document Image',
    'for Deep Learning LayoutParser',
]
```
E.g., one can now run:
```bash
# extracts base64 encoded image data for `Table` and `Image` elements
$ unstructured-get-json.sh --trace --verbose --images /t/docs/Captur-1317-5_ENG-p5.pdf
# also extracts `Title` elements (see screenshot)
$ IMAGE_BLOCK_TYPES='"title","table","image"' unstructured-get-json.sh --trace --verbose --images /t/docs/Captur-1317-5_ENG-p5.pdf
```
It was discovered during testing that "narrativetext" does not work,
probably due to camel casing of NarrativeText 😬

- **Add auto-download of NLTK data for the Python environment.** When a user
imports `tokenize`, it will automatically download the NLTK data (sketched below).
- Added an `AUTO_DOWNLOAD_NLTK` flag in `tokenize.py` to download
`NLTK_DATA`.
Fixes: #3815
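A rough sketch of that behavior; the flag name comes from this description, while the checked package (`punkt`) and the exact control flow are assumptions:
```python
import os

import nltk

AUTO_DOWNLOAD_NLTK = os.environ.get("AUTO_DOWNLOAD_NLTK", "true").lower() == "true"

def _ensure_nltk_data() -> None:
    """Download NLTK data on first use when the flag allows it."""
    if not AUTO_DOWNLOAD_NLTK:
        return
    try:
        nltk.data.find("tokenizers/punkt")
    except LookupError:
        nltk.download("punkt")
```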
Verified on my very large documents that it doesn't unnecessarily and
unsuccessfully "repair" them.
You may or may not wish to keep the version check in `patch_psparser`.
Since ~you're pinning the version of pdfminer.six and since it isn't
guaranteed that the bug in question will be fixed in the next
pdfminer.six release (but it is rather serious, so I should hope so),
then perhaps you just want to unconditionally patch it.~ it seems like
pinning of versions is only operative when running from Docker (good!)
so never mind! Keep that version check!
Also corrected an import so that if you do feel like using a newer
version of pdfminer.six, it won't break on you.
---------
Authored-by: David Huggins-Daines <dhdaines@logisphere.ca>
This PR refactors the data structures for `list[LayoutElement]` and
`list[TextRegion]` used in partitioning pdf/image files.
- The new data structure replaces a list of objects with one object that uses
`numpy` arrays to store the data (sketched below).
- This only affects internal partitioning steps; it doesn't change the input
or output signature of the `partition` function itself, i.e., `partition`
still returns `list[Element]`.
- Internally, `list[LayoutElement]` -> `LayoutElements`;
`list[TextRegion]` -> `TextRegions`.
- The current refactor stops before the clean-up of pdfminer elements inside
inferred layout elements -> the clean-up algorithm needs to be refactored
before the data structure refactor can move forward. So the current refactor
converts the array data structure back into a list data structure with an
`element_array.as_list()` call. This is the last step before turning
`list[LayoutElement]` into the `list[Element]` return value.
- A future PR will update this last step so that we build `list[Element]`
from the `LayoutElements` data structure instead.
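A hypothetical sketch of the shape of such a structure; the field names are illustrative, not the actual `LayoutElements` API:
```python
from dataclasses import dataclass

import numpy as np

@dataclass
class LayoutElementsSketch:
    element_coords: np.ndarray   # (n, 4) bounding boxes
    texts: np.ndarray            # (n,) object array of strings
    class_ids: np.ndarray        # (n,) integer element classes
    probs: np.ndarray            # (n,) detection confidences

    def slice(self, indices) -> "LayoutElementsSketch":
        """Vectorized filtering instead of looping over a list of objects."""
        return LayoutElementsSketch(
            self.element_coords[indices],
            self.texts[indices],
            self.class_ids[indices],
            self.probs[indices],
        )

    def as_list(self) -> list[tuple]:
        """Bridge back to a per-element list, as the current refactor does before clean-up."""
        return list(zip(self.element_coords, self.texts, self.class_ids, self.probs))
```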
The goal of this PR is to replace the data structure as much as possible
without changing the underlying logic. There are a few places where the
slicing or filtering logic was simple enough to be converted into vector data
structure operations; those are refactored to be vector based. As a result
there are some small improvements observed in the ingest tests, likely
because the vector operations cleaned up some previous inconsistencies in
data types and operations.
---------
Co-authored-by: ryannikolaidis <1208590+ryannikolaidis@users.noreply.github.com>
Co-authored-by: badGarnet <badGarnet@users.noreply.github.com>
This PR releases 0.16.5, which has the updates below:
- **Update `unstructured-inference`** to 0.8.6 in requirements, which removed
`layoutparser` dependency libs
- **Update `pdfminer-six` to 20240706**
This PR fixes a bug when using `partition` to partition an email with image
attachments with `hi_res` and table structure inference enabled -> the
partitioning of the image would encounter a value error: `got multiple values
for keyword argument 'infer_table_structure'`.
This is because when we pass `kwargs` into partitioning "other" types of
files in this
[block](50ea6fe7fc/unstructured/partition/auto.py (L270-L280)),
`infer_table_structure` is packaged into `partitioning_kwargs`. Then, for
email, at least when there are attachments that can be partitioned with
`hi_res`, we pass that dict of `kwargs` right back into the `partition` entry
point -> so when we get
[here](50ea6fe7fc/unstructured/partition/auto.py (L222-L235))
we are both specifying `infer_table_structure` explicitly and including it in
the `kwargs` variable.
The fix is to detect first whether `kwargs` already contains
`infer_table_structure`; if so, use that value and pop it from `kwargs`.
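A minimal sketch of the fix (`partition_attachment` is a made-up wrapper, not the actual code):
```python
from unstructured.partition.auto import partition

def partition_attachment(filename: str, infer_table_structure: bool = True, **kwargs):
    # prefer the value already present in kwargs and pop it, so partition() is not
    # called with `infer_table_structure` both explicitly and inside **kwargs
    infer_table_structure = kwargs.pop("infer_table_structure", infer_table_structure)
    return partition(filename=filename, infer_table_structure=infer_table_structure, **kwargs)
```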
---------
Co-authored-by: Kamil Plucinski <kamil.plucinski@deepsense.ai>
Co-authored-by: christinestraub <christinemstraub@gmail.com>
Co-authored-by: ryannikolaidis <1208590+ryannikolaidis@users.noreply.github.com>
Co-authored-by: christinestraub <christinestraub@users.noreply.github.com>
This change adds the ability to filter out characters predicted by Tesseract
with low confidence scores.
Some notes:
- I intentionally disabled it by default; I think some low score (like
0.9-0.95 for Tesseract) could be a safe choice though.
- I wanted to use character bboxes and combine them into a word bbox later.
However, a bug in Tesseract returns incorrect character bboxes in some
specific scenarios (unit tests caught it 🥳). More in a comment in the code.
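For illustration only, a word-level variant of the idea using `pytesseract`'s standard `image_to_data` output; the actual change is disabled by default and, per the note above, works on characters rather than words. Tesseract reports confidence on a 0-100 scale, so the 0.9-0.95 above corresponds to 90-95 here:
```python
import pandas as pd
import pytesseract

def words_above_confidence(image, min_conf: float = 90.0) -> pd.DataFrame:
    """Keep only OCR words whose Tesseract confidence is at least `min_conf`."""
    data = pytesseract.image_to_data(image, output_type=pytesseract.Output.DATAFRAME)
    data = data.dropna(subset=["text"])       # drop rows with no recognized text
    return data[data["conf"] >= min_conf]     # filter out low-confidence predictions
```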
This pull request adds NLTK data to the Docker image by pre-packaging
the data to ensure a more reliable and efficient deployment process, as
the required NLTK resources are readily available within the container.
**Current updated solution:**
- Dockerfile Update: Integrated NLTK data directly into the Docker image,
ensuring that the API can operate independently of external data sources. The
data is stored at /home/notebook-user/nltk_data.
- Environment Variable Setup: Configured the NLTK_PATH environment
variable, enabling Python scripts to automatically locate and use the
embedded NLTK data. This eliminates the need for manual configuration in
deployment environments.
- Code Cleanup: Removed outdated code in tokenize.py and related scripts
that previously downloaded NLTK data from S3. This streamlines the
codebase and removes unnecessary dependencies.
- Script Updates: Updated tokenize.py and test_tokenize.py to utilize
the NLTK_PATH variable, ensuring consistent access to the embedded data
across all environments.
- Dependency Elimination: Fully eliminated reliance on the S3 bucket for
NLTK data, mitigating risks from network failures or access changes.
- Improved System Reliability: By embedding assets within the Docker
image, the API now has a self-contained setup that ensures consistent
behavior regardless of deployment location.
- Updated the Dockerfile to copy the local NLTK data to the appropriate
directory within the container.
- Adjusted the application setup to verify the presence of NLTK assets
during the container build process.
**Summary**
Improve element-type mapping for Chinese text. Fixes a bug where Chinese text
would produce large numbers of false-positive `Title` elements.
Fixes #3084
---------
Co-authored-by: scanny <scanny@users.noreply.github.com>
Co-authored-by: ryannikolaidis <1208590+ryannikolaidis@users.noreply.github.com>
**Summary**
Fixes a bug where a CSV file with asserted content-type
`application/vnd.ms-excel` was incorrectly identified as an XLS file and
failed partitioning.
**Additional Context**
The `content_type` argument to partitioning is often authored by the
client system (e.g. Unstructured SDK) and is both unreliable and outside
the control of the user. In this case the `.csv -> XLS` mapping is
correct for certain purposes (Excel is often used to load and edit CSV
files) but not for partitioning, and the user has no readily available
way to override the mapping.
XLS files as well as seven other common binary file types can be
efficiently detected 100% of the time (at least 99.999%) using code we
already have in the file detector.
- Promote this direct-inspection strategy to be tried first.
- When DOC, DOCX, EPUB, ODT, PPT, PPTX, XLS, or XLSX is detected, use
that file-type.
- When one of those types is NOT detected, clear the asserted
`content_type` when it matches any of those types. This prevents the
problem seen in the bug where the asserted content type was used to
determine the file-type.
- The remaining strategies (asserted content-type, guessed MIME-type, and
filename-extension mapping) are tried, in that order, only when direct
inspection fails. This is largely the same as it was before.
- Fix #3781 while we were in the neighborhood.
- Fix #3596 as well, essentially an earlier report of #3781.
**Summary**
CVE-2024-11053 https://curl.se/docs/CVE-2024-11053.html (severity Low)
was published on Dec 11, 2024 and began failing CI builds on open-core
on Dec 13, 2024 when it appeared in `grype` apparently misclassified as
a critical vulnerability.
The severity reported on the CVE is "Low" so it should not fail builds.
Add a `.grype.yaml` file to ignore this CVE until grype is updated.
**Summary**
Prepare auto-partitioning for pluggable partitioners.
Move toward a uniform partitioner call signature in `auto/partition()`
such that a custom or override partitioner can be registered without
requiring code changes.
**Additional Context**
The central job of `auto/partition()` is to detect the file-type of the
given file and use that to dispatch partitioning to the corresponding
partitioner function e.g. `partition_pdf()` or `partition_docx()`.
In the existing code, each partitioner function is called with
parameters "hand-picked" from the available parameters passed to the
`partition()` function. This is unnecessary and couples those
partitioners tightly with the dispatch function. The desired state is
that all available arguments are passed as `kwargs` and the partitioner
function "self-selects" the arguments it will be sensitive to, applies
its own appropriate default values when the argument is omitted, and
simply ignores any arguments it doesn't use. Note that achieving this
requires no changes to partitioner functions because they already do
precisely this.
So the job is to pass all arguments (other than `filename` and `file`)
to the partitioner as `kwargs`. This will allow additional or alternate
partitioners to be registered at runtime and dispatched to, because as
long as they have the signature `partition_x(filename, file, **kwargs) ->
list[Element]` then they can be dispatched to without customization.
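A hedged sketch of what a partitioner looks like under this contract (`partition_foo` and `foo_mode` are made-up names, not part of the library):
```python
from typing import IO, Optional

from unstructured.documents.elements import Element, Text

def partition_foo(
    filename: Optional[str] = None,
    file: Optional[IO[bytes]] = None,
    **kwargs,
) -> list[Element]:
    # self-select only the arguments this partitioner is sensitive to, apply its own
    # default when they are omitted, and silently ignore everything else in kwargs
    foo_mode = kwargs.get("foo_mode", "fast")
    elements: list[Element] = [Text(text=f"partitioned with mode={foo_mode}")]
    return elements
```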