697 Commits

Author SHA1 Message Date
Pluto
5bb95b5841
Fix parsing table cells (#3904)
This PR:
- Fixes the removal of HTML tags that exist in `<td>` cells
- The stripping function was in general problematic to implement in an
easy and straightforward way (you can't modify `descendants` in-place),
so instead of patching something in the table cell handling, I added
stripping everywhere in the same consistent way. This is why some tests
needed small edits removing one whitespace in each tag. I believe this
won't cause any problems for downstream tasks.

Tested HTML:
```html
<table class="Table">
    <tbody>
        <tr>
            <td colspan="2">
                Some text                                        
            </td>
            <td>
                <input checked="" class="Checkbox" type="checkbox"/>
            </td>
        </tr>
    </tbody>
</table>
```
Before & After
```html
'<table class="Table" id="..."> <tbody> <tr> <td colspan="2">Some text</td><td></td></tr></tbody></table>'
'<table class="Table" id="..."><tbody><tr><td colspan="2">Some text</td><td><input checked="" type="checkbox"/></td></tr></tbody></table>'
```
2025-02-05 15:28:49 +00:00
Yao You
9d58b34ab4
Fix/fix table id checking logic (#3898)
- there is a bug in deciding whether a page has tables before performing
table extraction: the logic checks whether the id associated with a
Table-type element is truthy
- however, it should check whether the id is `None`, because the id can
be 0 (when the Table is the first element of its type on the page)
- the fix updates the logic
- adds a unit test for this specific case
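
A minimal sketch of the truthiness pitfall (hypothetical names, not the
library's actual code):
```python
# Hypothetical id for a page whose first element is a Table.
table_id = 0  # a valid id, but falsy

has_table_buggy = bool(table_id)        # buggy: treats id 0 as "no table"
has_table_fixed = table_id is not None  # fixed: only None means "no table"

assert not has_table_buggy and has_table_fixed
```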
2025-01-31 10:19:14 -08:00
Yao You
a9ff1e70b2
Fix/fix ocr region to elements bug (#3891)
This PR fixes a bug in `build_layout_elements_from_ocr_regions` where
texts are joined in incorrect order.

The bug is due to incorrect masking of the `ocr_regions` after some are
already selected into one of the final groups. The fix uses a simpler
method to mask the indices: the same indices that add the regions to the
final groups are used to mask them, so they are not considered again.
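
As a rough sketch of the masking idea (array names and shapes here are
assumptions, not the actual implementation):
```python
import numpy as np

# Five hypothetical OCR regions, all initially available for grouping.
ocr_regions = np.array(["r0", "r1", "r2", "r3", "r4"])
available = np.ones(len(ocr_regions), dtype=bool)

# Suppose regions 1 and 3 are selected into a final group this iteration.
group_indices = np.array([1, 3])

# Mask with the exact same indices used to build the group, so those
# regions are never considered again in later iterations.
available[group_indices] = False

print(ocr_regions[available])  # ['r0' 'r2' 'r4']
```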

## Testing

This PR adds a unit test specifically aimed at this bug. Without the
fix the test would fail.
Additionally, any PDF file with repeated text has the potential to
trigger this bug, e.g., create a simple PDF using the test text

```python
"LayoutParser: \n\nA Unified Toolkit for Deep Learning Based Document Image\n\nLayoutParser for Deep Learning"
```
and partitioning with `ocr_only` mode on the main branch would hit this
bug and output text where the position of the second "LayoutParser" is
incorrect.
```python
[
    'LayoutParser:', 
    'A Unified Toolkit for Deep Learning Based Document Image',
    'for Deep Learning LayoutParser',
]
```
2025-01-29 12:11:17 +00:00
fzowl
0fbdd4ea36
Refactoring VoyageAI integration (#3878)
Using VoyageAI's Python package directly, allowing more features than
are available through langchain
2025-01-28 21:45:40 +00:00
David Huggins-Daines
9e5ff225f6
fix: Correctly patch pdfminer to avoid unnecessarily and unsuccessfully repairing PDFs with long content streams, causing needless and endless OCR (#3822)
Fixes: #3815 

Verified on my very large documents that it doesn't unnecessarily and
unsuccessfully "repair" them.

You may or may not wish to keep the version check in `patch_psparser`.
Since ~you're pinning the version of pdfminer.six and since it isn't
guaranteed that the bug in question will be fixed in the next
pdfminer.six release (but it is rather serious, so I should hope so),
then perhaps you just want to unconditionally patch it.~ it seems like
pinning of versions is only operative when running from Docker (good!)
so never mind! Keep that version check!

Also corrected an import so that if you do feel like using a newer
version of pdfminer.six, it won't break on you.

---------

Authored-by: David Huggins-Daines <dhdaines@logisphere.ca>
2025-01-24 14:27:25 -06:00
Yao You
8f2a719873
Feat/refactor layoutelement textregion to vectorized data structure (#3881)
This PR refactors the data structure for `list[LayoutElement]` and
`list[TextRegion]` used in partition pdf/image files.

- new data structure replaces a list of objects with one object with
`numpy` array to store data
- this only affects partition internal steps and it doesn't change input
or output signature of `partition` function itself, i.e., `partition`
still returns `list[Element]`
- internally `list[LayoutElement]` -> `LayoutElements`;
`list[TextRegion]` -> `TextRegions`
- the current refactor stops before cleaning up pdfminer elements inside
inferred layout elements -> the clean-up algorithm needs to be
refactored before the data structure refactor can move forward. So the
current refactor converts the array data structure back into a list data
structure with an `element_array.as_list()` call. This is the last step
before turning `list[LayoutElement]` into `list[Element]` for the return
- a future PR will update this last step so that we build
`list[Element]` from the `LayoutElements` data structure instead.

The goal of this PR is to replace the data structure as much as possible
without changing the underlying logic. There are a few places where the
slicing or filtering logic was simple enough to be converted into vector
data structure operations; those are refactored to be vector based. As a
result, some small improvements are observed in the ingest tests, likely
because the vector operations cleaned up some previous inconsistency in
data types and operations.
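
A rough sketch of the struct-of-arrays idea (class name and fields are
assumptions, not the actual `LayoutElements` API):
```python
import numpy as np

class LayoutElementsSketch:
    """Hypothetical struct-of-arrays stand-in for a list of elements."""

    def __init__(self, coords: np.ndarray, texts: list):
        self.coords = coords  # shape (n, 4): x1, y1, x2, y2 per element
        self.texts = texts

    def filter_by_min_area(self, min_area: float) -> "LayoutElementsSketch":
        # Vectorized filtering instead of a Python loop over objects.
        widths = self.coords[:, 2] - self.coords[:, 0]
        heights = self.coords[:, 3] - self.coords[:, 1]
        keep = widths * heights >= min_area
        return LayoutElementsSketch(
            self.coords[keep], [t for t, k in zip(self.texts, keep) if k]
        )

    def as_list(self) -> list:
        # The PR's last step: convert back to a per-element list.
        return [(self.coords[i], self.texts[i]) for i in range(len(self.texts))]

elements = LayoutElementsSketch(
    np.array([[0, 0, 10, 10], [0, 0, 1, 1]], dtype=float), ["big", "tiny"]
)
print(elements.filter_by_min_area(50.0).as_list())  # keeps only "big"
```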

---------

Co-authored-by: ryannikolaidis <1208590+ryannikolaidis@users.noreply.github.com>
Co-authored-by: badGarnet <badGarnet@users.noreply.github.com>
2025-01-23 17:11:38 +00:00
Yao You
27cd53bd45
fix: fix multiple values for infer_table_structure (#3870)
This PR fixes a bug when using `partition` to partition an email with
image attachments with `hi_res` and table structure inference allowed ->
the partitioning of the image would encounter a value error: `got
multiple values for keyword argument 'infer_table_structure'`.

This is because, when we pass `kwargs` into partitioning "other" types
of files in this
[block](50ea6fe7fc/unstructured/partition/auto.py (L270-L280)),
`infer_table_structure` is packaged into `partitioning_kwargs`. Then,
for email, at least when there are attachments that can be partitioned
with `hi_res`, we pass that dict of `kwargs` right back into the
`partition` entry point -> so when we get
[here](50ea6fe7fc/unstructured/partition/auto.py (L222-L235))
we are both specifying `infer_table_structure` explicitly and have it in
the `kwargs` variable.

The fix is to first detect whether `kwargs` already contains
`infer_table_structure` and, if so, use that value and pop it from
`kwargs`.
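
A minimal sketch of the duplicate-keyword pitfall and the pop-based fix
(function names here are illustrative, not the library's internals):
```python
def partition_image(filename=None, infer_table_structure=False, **kwargs):
    return f"infer_table_structure={infer_table_structure}"

kwargs = {"infer_table_structure": True}

# Buggy call: TypeError: partition_image() got multiple values for
# keyword argument 'infer_table_structure'.
try:
    partition_image("a.png", infer_table_structure=True, **kwargs)
except TypeError as err:
    print(err)

# Fixed call: prefer the value already in kwargs and remove it before
# forwarding kwargs.
infer_table_structure = kwargs.pop("infer_table_structure", False)
print(partition_image("a.png", infer_table_structure=infer_table_structure, **kwargs))
```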

---------

Co-authored-by: Kamil Plucinski <kamil.plucinski@deepsense.ai>
Co-authored-by: christinestraub <christinemstraub@gmail.com>
Co-authored-by: ryannikolaidis <1208590+ryannikolaidis@users.noreply.github.com>
Co-authored-by: christinestraub <christinestraub@users.noreply.github.com>
2025-01-17 18:41:04 +00:00
Pluto
8685905bd1
Character confidence threshold (#3860)
This change adds the ability to filter out characters predicted by
Tesseract with low confidence scores.

Some notes:
- I intentionally disabled it by default; I think some low threshold
(like 0.9-0.95 for Tesseract) could be a safe choice, though. A rough
sketch of the filtering idea is shown below.
- I wanted to use character bboxes and combine them into a word bbox
later. However, a bug in Tesseract in some specific scenarios returns
incorrect character bboxes (unit tests caught it 🥳 ). More in a comment
in the code.
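
A minimal sketch of confidence-based filtering, assuming per-character
predictions with confidences normalized to 0-1 (not the PR's actual data
layout):
```python
# Hypothetical per-character predictions; Tesseract reports confidence
# on a 0-100 scale, normalized here to 0-1.
predictions = [
    {"char": "H", "confidence": 0.99},
    {"char": "i", "confidence": 0.97},
    {"char": "#", "confidence": 0.31},  # likely OCR noise
]

threshold = 0.0  # disabled by default, per the PR

text = "".join(p["char"] for p in predictions if p["confidence"] >= threshold)
print(text)  # "Hi#" with the default; "Hi" with threshold=0.9
```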
2025-01-13 13:12:46 +00:00
Christine Straub
8378c26035
Feat/contain nltk assets in docker image (#3853)
This pull request adds NLTK data to the Docker image by pre-packaging
the data to ensure a more reliable and efficient deployment process, as
the required NLTK resources are readily available within the container.

**Current updated solution:**
- Dockerfile Update: Integrated NLTK data directly into the Docker
image, ensuring that the API can operate independently of external
data sources. The data is stored at /home/notebook-user/nltk_data.
- Environment Variable Setup: Configured the NLTK_PATH environment
variable, enabling Python scripts to automatically locate and use the
embedded NLTK data. This eliminates the need for manual configuration in
deployment environments.
- Code Cleanup: Removed outdated code in tokenize.py and related scripts
that previously downloaded NLTK data from S3. This streamlines the
codebase and removes unnecessary dependencies.
- Script Updates: Updated tokenize.py and test_tokenize.py to utilize
the NLTK_PATH variable, ensuring consistent access to the embedded data
across all environments.
- Dependency Elimination: Fully eliminated reliance on the S3 bucket for
NLTK data, mitigating risks from network failures or access changes.
- Improved System Reliability: By embedding assets within the Docker
image, the API now has a self-contained setup that ensures consistent
behavior regardless of deployment location.
- Updated the Dockerfile to copy the local NLTK data to the appropriate
directory within the container.
- Adjusted the application setup to verify the presence of NLTK assets
during the container build process.
2025-01-08 22:00:13 +00:00
Roman Isecke
50ea6fe7fc
feat: add ndjson support (#3845)
### Description
Add ndjson file type support and treat it the same as json files.
2024-12-19 14:39:26 +00:00
Steve Canny
b3a2dd4755
fix: html incorrectly categorizing text (#3841)
Fixes #3666

---------

Co-authored-by: ryannikolaidis <1208590+ryannikolaidis@users.noreply.github.com>
Co-authored-by: scanny <scanny@users.noreply.github.com>
2024-12-18 18:46:54 +00:00
Steve Canny
9ece0b5ad2
fix: improve false-positive Title elements on Chinese text (#3836)
**Summary**
Improve element-type mapping for Chinese text. Fixes bug where Chinese
text would produce large numbers of false-positive `Title` elements.

Fixes #3084

---------

Co-authored-by: scanny <scanny@users.noreply.github.com>
Co-authored-by: ryannikolaidis <1208590+ryannikolaidis@users.noreply.github.com>
2024-12-18 01:16:42 +00:00
Steve Canny
b5ff79d8db
fix: refine filetype detection (#3828)
**Summary**
Fixes a bug where a CSV file with asserted content-type
`application/vnd.ms-excel` was incorrectly identified as an XLS file and
failed partitioning.

**Additional Context**
The `content_type` argument to partitioning is often authored by the
client system (e.g. Unstructured SDK) and is both unreliable and outside
the control of the user. In this case the `.csv -> XLS` mapping is
correct for certain purposes (Excel is often used to load and edit CSV
files) but not for partitioning, and the user has no readily available
way to override the mapping.

XLS files as well as seven other common binary file types can be
efficiently detected 100% of the time (at least 99.999%) using code we
already have in the file detector.

- Promote this direct-inspection strategy to be tried first.
- When DOC, DOCX, EPUB, ODT, PPT, PPTX, XLS, or XLSX is detected, use
that file-type.
- When one of those types is NOT detected, clear the asserted
`content_type` when it matches any of those types. This prevents the
problem seen in the bug where the asserted content type was used to
determine the file-type.
- The remaining content_type, guess MIME-type, and filename-extension
mapping strategies are tried, in that order, only when direct inspection
fails. This is largely the same as it was before.
- Fix #3781 while we were in the neighborhood.
- Fix #3596 as well, essentially an earlier report of #3781.
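
A hedged sketch of the detection ordering described above (all names,
the stub signature check, and the content-type map are illustrative,
not the library's actual internals):
```python
BINARY_TYPES = {"DOC", "DOCX", "EPUB", "ODT", "PPT", "PPTX", "XLS", "XLSX"}
CONTENT_TYPE_MAP = {"application/vnd.ms-excel": "XLS", "text/csv": "CSV"}

def inspect_signature(data: bytes):
    # Real detection reads magic bytes; this stub only knows OLE2 containers.
    return "XLS" if data.startswith(b"\xd0\xcf\x11\xe0") else None

def detect(data: bytes, asserted, filename: str):
    direct = inspect_signature(data)           # 1. direct inspection first
    if direct in BINARY_TYPES:
        return direct
    if CONTENT_TYPE_MAP.get(asserted) in BINARY_TYPES:
        asserted = None                        # 2. clear a wrong assertion
    if asserted in CONTENT_TYPE_MAP:           # 3. remaining strategies:
        return CONTENT_TYPE_MAP[asserted]      #    asserted content-type,
    return "CSV" if filename.endswith(".csv") else None  # then extension

# A CSV asserted as application/vnd.ms-excel is no longer misread as XLS:
print(detect(b"a,b\n1,2\n", "application/vnd.ms-excel", "data.csv"))  # CSV
```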
2024-12-17 00:56:21 +00:00
Steve Canny
10f0d54ac2
build: remove ruff version upper bound (#3829)
**Summary**
Remove pin on `ruff` linter and fix the handful of lint errors a newer
version catches.
2024-12-16 23:01:22 +00:00
Steve Canny
3b718ec89a
rfctr: prep for pluggable partitioners (#3806)
**Summary**
Prepare auto-partitioning for pluggable partitioners.

Move toward a uniform partitioner call signature in `auto/partition()`
such that a custom or override partitioner can be registered without
requiring code changes.

**Additional Context**
The central job of `auto/partition()` is to detect the file-type of the
given file and use that to dispatch partitioning to the corresponding
partitioner function e.g. `partition_pdf()` or `partition_docx()`.

In the existing code, each partitioner function is called with
parameters "hand-picked" from the available parameters passed to the
`partition()` function. This is unnecessary and couples those
partitioners tightly with the dispatch function. The desired state is
that all available arguments are passed as `kwargs` and the partitioner
function "self-selects" the arguments it will be sensitive to, applies
its own appropriate default values when the argument is omitted, and
simply ignores any arguments it doesn't use. Note that achieving this
requires no changes to partitioner functions because they already do
precisely this.

So the job is to pass all arguments (other than `filename` and `file`)
to the partitioner as `kwargs`. This will allow additional or alternate
partitioners to be registered at runtime and dispatched to, because as
long as they have the signature `partition_x(filename, file, **kwargs) ->
list[Element]` then they can be dispatched to without customization.
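
A minimal sketch of this dispatch pattern (hypothetical partitioner and
registry names, not the actual implementation):
```python
from typing import Callable

def partition_fake(filename=None, file=None, languages=None, **kwargs):
    # Self-selects `languages`, applies its own default, ignores the rest.
    languages = languages or ["eng"]
    return [f"partitioned {filename} with languages={languages}"]

PARTITIONERS: dict[str, Callable] = {"fake": partition_fake}

def partition(filename, detected_filetype, **kwargs):
    # Dispatch passes every argument through; no hand-picking needed.
    return PARTITIONERS[detected_filetype](filename=filename, **kwargs)

print(partition("doc.fake", "fake", languages=["deu"], unused_option=42))
```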
2024-12-10 20:44:34 +00:00
Magnus F
1e2da6df46
fix: ipv4 address regex (#3808)
I noticed the ipv4 regex is wrong (it only captures one- or two-digit
octets, e.g. `n.nn.n.nn`). Here's a correction and a bumped test for it.

If you wish I can break out the ipv4 test to its own case, so we don't
interfere with the existing `EMAIL_META_DATA_INPUT` ipv6 extraction
test.

Side note: The comment at `unstructured/nlp/patterns.py#95` includes a
bad ipv4 address example (last octet is wrongfully left-padded with a
zero). I left it as it is because I'm not sure if the intention is to
include "non-conventional" ipv4 addresses, like octal or hexadecimal
octets.
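
For reference, a sketch of a pattern that accepts one- to three-digit
octets (the repo's corrected regex may differ, e.g. it may also bound
each octet to 255):
```python
import re

IP_V4_SKETCH = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

assert IP_V4_SKETCH.search("host at 192.168.100.254")  # three-digit octets
assert IP_V4_SKETCH.search("host at 10.0.0.1")         # short octets still match
```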
2024-12-09 14:19:13 -08:00
Steve Canny
4379d883a3
chunk: relax table segregation during chunking (#3812)
**Summary**
Relax table-segregation rule applied during chunking such that a `Table`
and `Text`-subtype elements can be combined into a single chunk when the
chunking window allows.

**Additional Context**
Until now, `Table` elements have always been segregated during chunking,
i.e. a chunk that contained a table would never contain any other
element. In certain scenarios, especially when a large chunking window
of say 2000 characters is used, this behavior can reduce retrieval
effectiveness by isolating the table from surrounding context.

---------

Co-authored-by: ryannikolaidis <1208590+ryannikolaidis@users.noreply.github.com>
Co-authored-by: scanny <scanny@users.noreply.github.com>
2024-12-09 18:57:22 +00:00
Christine Straub
9076d56d9f fix: resolve merging conflict error 2024-12-07 19:40:11 -08:00
Tracy Shen
8c58bc57db
fix doctype parsing error (#3811)
- per [ticket](https://unstructured-ai.atlassian.net/browse/ML-551),
there is a bug in the `unstructured` lib under metrics/evaluate.py that
incorrectly retrieves the file extension before the conversion to a cct
file from paths like '*.pdf.txt' (see the screenshot below)
    - the current status is in the top example
    - the correct version is in the bottom example of the screenshot

![image](https://github.com/user-attachments/assets/6d82de85-3b54-4e77-a637-28a27fcb279d)

- in addition, I also observed that the returned doctypes are not
aligned: some return '.*' and some return without the dot
- therefore, I aligned them to output the same version, which is '.*'.
2024-12-06 23:55:01 +00:00
Christine Straub
7d06c120dc
Merge branch 'main' into ML-593/quote-standardization 2024-12-05 10:27:26 -08:00
Christine Straub
c0c3fd673f test: enhance quote standardization tests with additional Unicode scenarios 2024-12-04 13:02:07 -08:00
Christine Straub
c821f12d29 test: update string tests for consistent quote handling 2024-12-04 12:17:16 -08:00
Christine Straub
4e0f7cdbc0 Feat: enhance quote standardization with comprehensive Unicode coverage and update tests 2024-12-04 11:33:03 -08:00
Christine Straub
371cb7528d Feat: add quote standardization and update edit distance calculation 2024-12-03 21:21:39 -08:00
Nathan Van Gheem
0fb814db61
Use native nltk download (#3796)
This PR changes how we download NLTK data to use the native nltk
downloader.

We had moved to our own hosted NLTK dataset because of this CVE:
https://nvd.nist.gov/vuln/detail/CVE-2024-39705

Ref: https://github.com/Unstructured-IO/unstructured/pull/3361

Latest versions of NLTK have fixed this issue:
https://github.com/nltk/nltk/blob/develop/ChangeLog
2024-12-02 19:30:28 +00:00
Pluto
e48d79eca1
image alt support (#3797) 2024-11-26 16:20:23 +00:00
Yao You
3b9b01c502
Feat: weighted average table metrics (#3348)
This PR uses a weighted average (weighted by the number of actual
tables) instead of an unweighted average for table metrics.

- for pages with ground truth tables, the weight is proportional to the
number of ground truth tables on that page
- pages with no ground truth tables but with predicted tables (false
positives) are assigned one table's worth of weight for the whole page
when calculating the mean value of `table_level_acc` (see the sketch
after this list)
- pages with false positive tables do not contribute to table structural
or table content metrics
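
A rough sketch of the weighting, with hypothetical numbers and `numpy`
for the arithmetic:
```python
import numpy as np

# Illustrative per-page accuracies and ground-truth table counts.
page_acc = np.array([0.90, 0.50, 0.00])
gt_tables = np.array([2, 1, 0])  # last page has only a false-positive table

# False-positive-only pages count as one table's worth of weight.
weights = np.where(gt_tables > 0, gt_tables, 1)

weighted_mean = np.average(page_acc, weights=weights)
print(round(float(weighted_mean), 3))  # (0.9*2 + 0.5*1 + 0.0*1) / 4 = 0.575
```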

## Testing

This PR updates the existing test for evaluating table metrics:
- adds a second file with just 1 table vs. the existing file with 2
tables
- tests that the weighted average is written to the report
2024-11-20 17:14:57 +00:00
Pluto
85ecdab077
Add text as html to orig elements chunks (#3779)
This simplest solution doesn't drop HTML from metadata when merging
Elements from HTML input. We still need to address how to handle nested
elements, whether we want to have `LayoutElements` in the metadata of
Composite Elements, and a unit test showing the current behavior.
Note: metadata still contains `orig_elements`, which has all the
metadata.
2024-11-20 13:27:17 +00:00
Pluto
e1babf0660
Define default HTML to ontology mapping (#3784) 2024-11-20 13:01:28 +00:00
Pluto
ca27b8aa97
Set <table> to be ontology.Table not UncategorizedText (#3782) 2024-11-15 14:30:48 +00:00
Pluto
c2d17b1ca4
Fix extracting value from field (#3774) 2024-11-07 18:21:39 +00:00
Pluto
66d1e5a5cb
Add max recursion limit and fix to_text() method (#3773) 2024-11-07 15:08:16 +00:00
Christine Straub
df156ebe5a
feat: support pdf link extraction in hi_res strategy (#3753)
This PR aims to add support for link extraction in pdf `hi_res`
strategy. The `partition_pdf()` function now supports link extraction
when using the `hi_res` strategy, allowing users to extract hyperlinks
from PDF documents.

### Summary
- Added functionality to support link extraction in the hi_res flow
- Enhanced the word extraction functionality used for link extraction in
both `fast` and `hi_res` flows, resulting in more correct `start_index`
and `text` in the `links` metadata.
- Updated ingest fixture update workflow to not skip Astra DB source
test

### Testing
```python
elements = partition_pdf(
    filename="example-docs/pdf/embedded-link.pdf",
    strategy="hi_res"
)
assert len(elements[0].metadata.links) == 3
```

---------

Co-authored-by: ryannikolaidis <1208590+ryannikolaidis@users.noreply.github.com>
Co-authored-by: christinestraub <christinestraub@users.noreply.github.com>
Co-authored-by: cragwolfe <crag@unstructured.io>
2024-10-31 16:52:27 +00:00
Pluto
1953b8699f
Ml 415/merge inline elements (#3749) 2024-10-31 12:17:25 +00:00
Maksymilian Operlejn
eb1b294b73
ML-405/ML-427 - OntologyElement improvements (#3758)
- the "value" attribute from <input/> tag will be taken into account and
processed as "text" in ontology
- the tables will now be parsed without any ids and classes - we have
different reasons behind that, for example, embeddings with ids and
classes can lose some semantic value. Also, more tokens = more expensive
LLM call
-  cleaned to_html, created to_text for OntologyElement
2024-10-31 01:30:53 +00:00
Pluto
5a91f0cda9
Fix layout parsing (#3754) 2024-10-25 14:42:06 +00:00
Pluto
2417f8ed84
Fix when parent id is none for first element in v2 notion: (#3752) 2024-10-25 09:43:36 +00:00
Marianna
aa5935b357
Ml 384/whitespaces in cct (#3747)
This ticket ensures that the CCT metric will not be sensitive to
differences in whitespace (including newlines).
All whitespace characters in the strings are changed to a single space
`" "` in both GT and PRED before the metric is computed.
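
A minimal sketch of that normalization (illustrative, not the repo's
exact helper):
```python
import re

def normalize_whitespace(text: str) -> str:
    # Every whitespace character (including newlines) becomes a single space.
    return re.sub(r"\s", " ", text)

gt = "Hello\nworld"
pred = "Hello world"
assert normalize_whitespace(gt) == normalize_whitespace(pred)
```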

Additional changes in CHANGELOG due to auto-formatting.
2024-10-24 13:02:34 +00:00
Pawel Kmiecik
bdfcc14e3d
fix: fix partition_via_api retry mechanism when the default SDK's retry config is empty. (#3746) 2024-10-24 09:37:22 +00:00
Pluto
03a3ed8d3b
Add parsing HTML to unstructured elements (#3732)
> This is a POC change; not everything works correctly and the code
quality could be improved significantly

This ticket adds parsing HTML to unstructured elements and back. How
does it work?

HTML has a tree structure; Unstructured Elements are a list.
The HTML structure is traversed in DFS order, creating Elements and
adding them to the list, so the reading order from the HTML is
preserved. To be able to compose the tree again, all elements have IDs,
and metadata.parent_id is leveraged.

How is HTML preserved if there are 'layout' elements without text, or
deeply nested HTML that is just text from the point of view of an
Unstructured Element?
Each element is parsed back to HTML using the metadata.text_as_html
field. For layout elements only the html_tag is there; for long text
elements there is everything required to recreate the HTML - you can see
examples in the unit tests or the .json file I attached.

Pros of the solution:
- Nothing had to be changed in the element types

Cons:
- There are elements without text, which may be confusing (they could be
replaced by some special type)

The core transformation logic can be found in 2 functions in
`unstructured/documents/transformations.py`

Known bugs (they are minor):
- sometimes the html tag is changed incorrectly
- metadata.category_depth and metadata.page_number are not set
- page breaks are not added between pages

How to test. Generate HTML:
```python3
from pathlib import Path

from vlm_partitioner.src.partition import partition

if __name__ == "__main__":
    doc_dir = Path("out_dir")
    file_path = Path("example_doc.pdf")
    partition(str(file_path), provider="anthropic", output_dir=str(doc_dir))
```

Then parse to unstructured elements and back to html
```python3
from pathlib import Path

from unstructured.documents.html_utils import indent_html
from unstructured.documents.transformations import (
    parse_html_to_ontology,
    ontology_to_unstructured_elements,
    unstructured_elements_to_ontology,
)
from unstructured.staging.base import elements_to_json

if __name__ == "__main__":
    output_dir = Path("out_dir/")
    output_dir.mkdir(exist_ok=True, parents=True)

    doc_path = Path("out_dir/example_doc.html")

    html_content = doc_path.read_text()

    ontology = parse_html_to_ontology(html_content)
    unstructured_elements = ontology_to_unstructured_elements(ontology)

    elements_to_json(unstructured_elements, str(output_dir / f"{doc_path.stem}_unstr.json"))

    parsed_ontology = unstructured_elements_to_ontology(unstructured_elements)
    html_to_save = indent_html(parsed_ontology.to_html())

    Path(output_dir / f"{doc_path.stem}_parsed_unstr.html").write_text(html_to_save)
```

I attached example doc before and after running these scripts

[outputs.zip](https://github.com/user-attachments/files/17438673/outputs.zip)
2024-10-23 12:28:07 +00:00
Pawel Kmiecik
6bceac1749
feat: expose retry params in partition via api (#3724)
This PR:
- adds parameters to control the retry-mechanism behaviour for
`partition_via_api`:
```
    retries_initial_interval: Optional[int] = None,
    retries_max_interval: Optional[int] = None,
    retries_exponent: Optional[float] = None,
    retries_max_elapsed_time: Optional[int] = None,
    retries_connection_errors: Optional[bool] = None,
```
- adds tests that check these parameters are used and that the defaults apply
2024-10-22 14:43:28 +00:00
Yao You
a11ad22609
bump unstructured-inference (#3711)
This PR bumps `unstructured-inference` to `0.8.0`, which introduces a
vectorized data structure for layout elements and text regions.
This PR also cleans up a few places in CI that had repeated definitions
of env variables or were missing installation of testing dependencies in
the cache.

A few document ingest results are changed:
- two places for `biomed-api` (actually processed locally on the runner)
are due to very small changes in the numerical results of the bounding
box areas: one results in a duplicated page number/header and another
results in the deduplication of a word of a sentence that starts on a
new line (yes, the two cases go in opposite directions)
- the layout parser paper now outputs the code lines with the page
number inside the code box as list items

---------

Co-authored-by: ryannikolaidis <1208590+ryannikolaidis@users.noreply.github.com>
Co-authored-by: badGarnet <badGarnet@users.noreply.github.com>
Co-authored-by: christinestraub <christinemstraub@gmail.com>
2024-10-21 21:55:08 +00:00
Steve Canny
3240e3d17a
rfctr(pptx): minify HTML and table.text is cct (#3734)
**Summary**
Eliminate historical "idiosyncrasies" of `table.metadata.text_as_html`
HTML introduced by `partition_pptx()`. Produce minified `.text_as_html`
consistent with that formed by chunking.

**Additional Context**
- PPTX `.metadata.text_as_html` is minified (no extra whitespace or
thead, tbody, tfoot elements).
- `table.text` is clean-concatenated-text (CCT) of table.
- Last use of `tabulate` library is removed and that dependency is
removed from `base.in`.
2024-10-21 16:23:15 +00:00
Steve Canny
208c7edc52
rfctr(csv): minify HTML and table text is cct (#3733)
**Summary**
Eliminate historical "idiosyncrasies" of `table.metadata.text_as_html`
HTML introduced by `partition_csv()`. Produce minified `.text_as_html`
consistent with that formed by chunking.

**Additional Context**
- CSV `.metadata.text_as_html` is minified (no extra whitespace or
thead, tbody, tfoot elements).
- `table.text` is clean-concatenated-text (CCT) of table.

---------

Co-authored-by: ryannikolaidis <1208590+ryannikolaidis@users.noreply.github.com>
Co-authored-by: scanny <scanny@users.noreply.github.com>
2024-10-19 06:49:09 +00:00
Steve Canny
c85f29e6ca
fix(xlsx): XLSX emits std minified .text_as_html (#3558)
**Summary**
Eliminate historical "idiosyncrasies" of `table.metadata.text_as_html`
HTML introduced by `partition_xlsx()`. Produce minified `.text_as_html`
consistent with that formed by chunking.

**Additional Context**
- XLSX `.text_as_html` is minified (no extra whitespace or thead, tbody,
tfoot elements).
- `table.text` is clean-concatenated-text (CCT) of table.

---------

Co-authored-by: scanny <scanny@users.noreply.github.com>
2024-10-17 22:05:11 +00:00
Nathan Van Gheem
b092d45816
Remove unsupported chipper model (#3728)
The chipper model is no longer supported.
2024-10-17 17:40:45 +00:00
Steve Canny
1eceac26c8
rfctr(email): eml partitioner rewrite (#3694)
**Summary**
Initial attempts to incrementally refactor `partition_email()` into
shape to allow pluggable partitioning quickly became too complex for
ready code-review. Prepare a separate rewritten module and tests and swap
them out whole.

**Additional Context**
- Uses the modern stdlib `email` module to reliably accomplish several
manual decoding steps in the legacy code.
- Remove obsolete email-specific element-types which were replaced 18
months or so ago with email-specific metadata fields for things like Cc:
addresses, subject, etc.
- Remove accepting an email as `text: str` because MIME-email is
inherently a binary format which can and often does contain multiple and
contradictory character-encodings.
- Remove the `encoding` parameter as it is now unused. An email file is not
a text file and as such does not have a single overall encoding.
Character encoding is specified individually for each MIME-part within
the message and often varies from one part to another in the same
message.
- Remove the need for a caller to specify `attachment_partitioner`.
There is only one reasonable choice for this which is
`auto.partition()`, consistent with the same interface and operation in
`partition_msg()`.
- Fixes #3671 along the way by silently skipping attachments with a
file-type for which there is no partitioner.
- Substantially extend the test-suite to cover multiple
transport-encoding/charset combinations.

---------

Co-authored-by: ryannikolaidis <1208590+ryannikolaidis@users.noreply.github.com>
Co-authored-by: scanny <scanny@users.noreply.github.com>
2024-10-16 02:02:33 +00:00
Roman Isecke
9049e4e2be
feat/remove ingest code, use new dep for tests (#3595)
### Description
Alternative to https://github.com/Unstructured-IO/unstructured/pull/3572
but maintaining all ingest tests, running them by pulling in the latest
version of unstructured-ingest.

---------

Co-authored-by: ryannikolaidis <1208590+ryannikolaidis@users.noreply.github.com>
Co-authored-by: rbiseck3 <rbiseck3@users.noreply.github.com>
Co-authored-by: Christine Straub <christinemstraub@gmail.com>
Co-authored-by: christinestraub <christinestraub@users.noreply.github.com>
2024-10-15 10:01:34 -05:00
David Blore
ecf0267b85
fix: add language to OCRAgentGoogleVision constructor (#3696)
This PR addresses issue #3659 by adding an optional `language` parameter
to the `OCRAgentGoogleVision` class constructor.

This parameter serves as a "language hint" for the
`document_text_detection` method in the `ImageAnnotatorClient`. For more
information on language hints, refer to the [Google Cloud Vision
documentation](https://cloud.google.com/vision/docs/languages).


**Default Behavior**: 
The language parameter defaults to None, allowing Google Cloud Vision to
auto-detect the language, as recommended in their documentation.

**Purpose**: 
This change is necessary because the `OCRAgent`'s `get_instance` method
expects all `OCRAgent`s to include a language parameter in their
constructors.

**Context on Issue:**
When trying to parse a PDF with
`OCR_AGENT=unstructured.partition.utils.ocr_models.google_vision_ocr.OCRAgentGoogleVision`,
an error occurs in the `get_instance` method. The method expects a
`language` parameter, which the current `OCRAgentGoogleVision`
constructor does not support, leading to a positional argument error.
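
A minimal sketch of the constructor shape this implies (illustrative,
not the PR's actual code; the class name here is a hypothetical
stand-in):
```python
from typing import Optional

class OCRAgentGoogleVisionSketch:
    """Hypothetical stand-in for OCRAgentGoogleVision."""

    def __init__(self, language: Optional[str] = None):
        # None lets Google Cloud Vision auto-detect the language.
        self.language = language

    def image_context(self) -> Optional[dict]:
        # Would be passed as a language hint to document_text_detection.
        return {"language_hints": [self.language]} if self.language else None

agent = OCRAgentGoogleVisionSketch()  # satisfies get_instance(language=...)
print(agent.image_context())          # None -> auto-detect
print(OCRAgentGoogleVisionSketch("en").image_context())
```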

---------

Co-authored-by: Christine Straub <christinemstraub@gmail.com>
2024-10-14 05:35:05 +00:00
Steve Canny
2f496f867c
fix(auto): quick fix for auto test failing in CI (#3715)
Better fix to follow.
2024-10-10 18:44:00 +00:00