### Description
**Ingest destination connectors now support writing a raw list of
elements.** In addition to the default write method the ingest pipeline
uses to write the JSON content associated with ingest docs, each
destination connector can now also write a raw list of elements to the
desired downstream location without an ingest doc associated with it.
### Description
Currently, linting only covers the base `unstructured` directory, but we
have Python files throughout the repo. It makes sense for all of those
files to abide by the same linting rules, so the entire repo is now
inspected when the linters run. Along with that, `autoflake` was added
as a linter; it has useful benefits such as automatically removing
unused imports that would otherwise fail flake8 checks and require
manual intervention.
The only real relevant changes in this PR are in the `Makefile`,
`setup.cfg`, and `requirements/test.in`. The rest is the result of
running the linters.
### Summary
Some `OCR` elements with only spaces in the text have a full-page-width
bounding box, which causes the `xycut` sorting to not work as expected.
The logic that parses OCR results now removes any elements containing
only spaces (more than one space).
---------
Co-authored-by: ryannikolaidis <1208590+ryannikolaidis@users.noreply.github.com>
Co-authored-by: christinestraub <christinestraub@users.noreply.github.com>
The current code assumes the first line of csv and tsv files is a
header line. Most csv and tsv files don't have a header line, and even
for those that do, dropping it may not be the desired behavior.
Here is a snippet of code that demonstrates the current behavior and the
proposed fix:
```
import pandas as pd
from lxml.html.soupparser import fromstring as soupparser_fromstring
c1 = """
Stanley Cups,,
Team,Location,Stanley Cups
Blues,STL,1
Flyers,PHI,2
Maple Leafs,TOR,13
"""
f = "./test.csv"
with open(f, 'w') as ff:
    ff.write(c1)
print("Suggested Improvement Keep First Line")
table = pd.read_csv(f, header=None)
html_text = table.to_html(index=False, header=False, na_rep="")
text = soupparser_fromstring(html_text).text_content()
print(text)
print("\n\nOriginal Looses First Line")
table = pd.read_csv(f)
html_text = table.to_html(index=False, header=False, na_rep="")
text = soupparser_fromstring(html_text).text_content()
print(text)
```
---------
Co-authored-by: cragwolfe <crag@unstructured.io>
Co-authored-by: Yao You <theyaoyou@gmail.com>
Co-authored-by: Yao You <yao@unstructured.io>
**Executive Summary**
Adds a function to calculate the percent match between two element-type
frequency outputs from the `get_element_type_frequency` function.
**Technical Detail**
- The function takes two `Dict` inputs, both of which should be outputs
from `get_element_type_frequency`
- Implementors can define the weight (`category_depth_weight`) given to
pairs that match on `type` but differ in `category_depth`
- The function first loops through the output item list to find and
count exact matches, collecting the remaining values for both output
and source in new lists (of `dict` type). It then loops through the
source items that were not an exact match to find `type` matches, which
are weighted by the `category_depth_weight` factor defined earlier
(default 0.5)
**Output**
output
```
{
("Title", 0): 2,
("Title", 1): 1,
("NarrativeText", None): 3,
("UncategorizedText", None): 1,
}
```
source
```
{
("Title", 0): 1,
("Title", 1): 2,
("NarrativeText", None): 5,
}
```
With this output and source and a weight of 0.5, the percent match is
5.5 / 8: 5 exact matches plus 1 partial match weighted at 0.5.
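As a rough, self-contained sketch of the matching logic described above (this is illustrative only, not the library's actual implementation):
```python
from collections import Counter

def percent_match(output, source, category_depth_weight=0.5):
    """Sketch: exact (type, depth) matches count fully; type-only matches
    count at category_depth_weight. Denominator is the total source count."""
    exact = sum(min(n, source.get(key, 0)) for key, n in output.items())
    # Leftover counts per element type after exact matching.
    out_left, src_left = Counter(), Counter()
    for (etype, depth), n in output.items():
        out_left[etype] += n - min(n, source.get((etype, depth), 0))
    for (etype, depth), n in source.items():
        src_left[etype] += n - min(n, output.get((etype, depth), 0))
    partial = sum(min(out_left[etype], n) for etype, n in src_left.items())
    return (exact + category_depth_weight * partial) / sum(source.values())

output = {("Title", 0): 2, ("Title", 1): 1, ("NarrativeText", None): 3, ("UncategorizedText", None): 1}
source = {("Title", 0): 1, ("Title", 1): 2, ("NarrativeText", None): 5}
print(percent_match(output, source))  # 0.6875 == 5.5 / 8
```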
---------
Co-authored-by: shreyanid <42684285+shreyanid@users.noreply.github.com>
### Summary
Closes #1714.
Changes the default value for `languages` to `None` for elements that
don't have text or whose language can't be detected.
### Testing
```
from unstructured.partition.auto import partition
filename = "example-docs/handbook-1p.docx"
elements = partition(filename=filename, detect_language_per_element=True)
# PageBreak elements don't have text and will be collected here
none_langs = [element for element in elements if element.metadata.languages is None]
none_langs[0].text
```
---------
Co-authored-by: ryannikolaidis <1208590+ryannikolaidis@users.noreply.github.com>
Co-authored-by: Coniferish <Coniferish@users.noreply.github.com>
Co-authored-by: cragwolfe <crag@unstructured.io>
Fixes a recursion limit error that was being raised when partitioning
Excel documents of a certain size.
Previously we used a recursive method to find subtables within an Excel
sheet. However, this would run afoul of Python's recursion depth limit
when there was a contiguous block of more than 1000 cells within a
sheet. The function has been updated to use the NetworkX library, which
avoids Python recursion issues.
* Updated `_get_connected_components` to use `networkx` graph methods
(sketched below) rather than implementing our own algorithm for finding
contiguous groups of cells within a sheet.
* Added a test and example doc that replicates the `RecursionError`
prior to the change.
* Added `networkx` to `extra_xlsx` dependencies and `pip-compile`d.
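As a rough sketch of the graph-based approach (illustrative; the actual `_get_connected_components` implementation may differ):
```python
import networkx as nx

# Occupied (row, col) cells from a sheet; contiguous cells share an edge.
cells = {(0, 0), (0, 1), (1, 0), (5, 5), (5, 6)}
graph = nx.Graph()
graph.add_nodes_from(cells)
for row, col in cells:
    for neighbor in ((row + 1, col), (row, col + 1)):
        if neighbor in cells:
            graph.add_edge((row, col), neighbor)

# networkx computes components iteratively, so no RecursionError on large blocks.
print(list(nx.connected_components(graph)))
# e.g. [{(0, 0), (0, 1), (1, 0)}, {(5, 5), (5, 6)}]
```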
#### Testing:
The following run from a Python terminal should raise a `RecursionError`
on `main` and succeed on this branch:
```python
import sys
from unstructured.partition.xlsx import partition_xlsx
old_recursion_limit = sys.getrecursionlimit()
try:
    sys.setrecursionlimit(1000)
    filename = "example-docs/more-than-1k-cells.xlsx"
    partition_xlsx(filename=filename)
finally:
    sys.setrecursionlimit(old_recursion_limit)
```
Note: the recursion limit is different in different contexts. Checking
my own system, the default in a notebook seems to be 3000, but in a
terminal it's 1000. The documented Python default recursion limit is
1000.
The function `under_non_alpha_ratio` in
`unstructured.partition.text_type` was producing a divide-by-zero error.
After investigation, I found this could occur when the function was
passed a string of all spaces.
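A sketch of the failure mode and the guard, assuming logic along these lines (not necessarily the exact implementation):
```python
def under_non_alpha_ratio(text: str, threshold: float = 0.5) -> bool:
    # Only non-whitespace characters are counted, so a string of all spaces
    # yields total == 0 and, unguarded, a ZeroDivisionError.
    total = sum(1 for char in text if char.strip())
    if total == 0:
        return False  # the fix: bail out instead of dividing by zero
    alpha = sum(1 for char in text if char.strip() and char.isalpha())
    return alpha / total < threshold
```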
---------
Co-authored-by: cragwolfe <crag@unstructured.io>
The current file detection logic for csv in `file_utils/filetype.py`
does not consider all the lines when counting the number of commas; it
compares only against the first line, so the check always returns true:
```
lines = lines[: len(lines)] if len(lines) < 10 else lines[:10]
header_count = _count_commas(lines[0])
if any("," not in line for line in lines):
    return False
return all(_count_commas(line) == header_count for line in lines[:1])
```
Fixed the issue by considering all the lines except the first, as shown
below:
```
lines = lines[: len(lines)] if len(lines) < 10 else lines[:10]
header_count = _count_commas(lines[0])
if any("," not in line for line in lines):
    return False
return all(_count_commas(line) == header_count for line in lines[1:])
```
PR to support schema changes introduced from [PR
232](https://github.com/Unstructured-IO/unstructured-inference/pull/232)
in `unstructured-inference`.
Specifically what needs to be supported is:
* A change to the way `LayoutElement` from `unstructured-inference` is
structured: the class is no longer a subclass of `Rectangle`; instead
`LayoutElement` has a `bbox` property that captures the location
information and a `from_coords` method that allows construction of a
`LayoutElement` directly from coordinates (see the sketch below).
* Removal of `LocationlessLayoutElement` since chipper now exports
bounding boxes, and if we need to support elements without bounding
boxes, we can make the `bbox` property mentioned above optional.
* Getting hierarchy data directly from the inference elements rather
than in post-processing
* Don't try to reorder elements received from chipper v2, as they should
already be ordered.
#### Testing:
The following demonstrates that the new version of chipper is inferring
hierarchy.
```python
from unstructured.partition.pdf import partition_pdf
elements = partition_pdf("example-docs/layout-parser-paper-fast.pdf", strategy="hi_res", model_name="chipper")
children = [el for el in elements if el.metadata.parent_id is not None]
print(children)
```
Also verify that running the traditional `hi_res` gives different
results:
```python
from unstructured.partition.pdf import partition_pdf
elements = partition_pdf("example-docs/layout-parser-paper-fast.pdf", strategy="hi_res")
```
---------
Co-authored-by: Sebastian Laverde Alfonso <lavmlk20201@gmail.com>
Co-authored-by: ryannikolaidis <1208590+ryannikolaidis@users.noreply.github.com>
Co-authored-by: christinestraub <christinemstraub@gmail.com>
This PR:
- defines `rbac_data` as a `SourceMetadata` field,
- manages connections to an external API for obtaining RBAC data with
the `ConnectorRBAC` class,
- serializes RBAC data and saves it to disk,
- matches the `rbac_data` on disk to each `IngestDoc` using a common
field,
- forwards RBAC data to elements via the `partition()` function
To test the changes, run `examples/ingest/sharepoint/ingest.sh` with the
relevant RBAC & connector credentials.
---------
Co-authored-by: ryannikolaidis <1208590+ryannikolaidis@users.noreply.github.com>
Co-authored-by: ahmetmeleq <ahmetmeleq@users.noreply.github.com>
### Summary
In order to enable larger scale testing of the new text extraction
metrics, create a helper function to get the clean, concatenated text
(CCT) from partitioned elements.
### Test
Partition any file, then pass the resulting elements into the new
`elements_to_text` function. You can get the output as a string or
write it to a text file.
```
from unstructured.partition.auto import partition
from unstructured.staging.base import elements_to_text
elements = partition(filename="example-docs/chevron-page.pdf", strategy="hi_res")
elements_text = elements_to_text(elements, "output-text-file.txt")
print(elements_text)
```
Currently, when the `OpenAIEmbeddingEncoder` adds embeddings to elements
in `_add_embeddings_to_elements`, it overwrites each element's `to_dict`
method, mistakenly resulting in every element having identical values
except for the actual embedding value. This was due to the way it
leveraged a nested `new_to_dict` method to do the overwriting. Instead,
this updates the original definition of `Element` itself to accommodate
the `embeddings` field when available. This also adds a test to validate
that values are not duplicated.
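For context, here is a generic illustration of the closure pitfall that can produce this kind of value sharing (illustrative only, not the actual encoder code):
```python
class Element:
    def __init__(self, text):
        self.text = text

elements = [Element("a"), Element("b")]
embeddings = [[0.1], [0.2]]
for el, emb in zip(elements, embeddings):
    def new_to_dict():
        # Binds the *names* el/emb, not their values at this iteration.
        return {"text": el.text, "embeddings": emb}
    el.to_dict = new_to_dict

print([e.to_dict() for e in elements])
# Both dicts show the last iteration's values: {'text': 'b', 'embeddings': [0.2]}
```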
Currently adding the embedding flag to any unstructured-ingest call
results in this failure:
```
2023-10-11 22:42:14,177 MainProcess ERROR 'b8a98c5d963a9dd75847a8f110cbf7c9'
multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
  File "/Users/ryannikolaidis/.pyenv/versions/3.10.11/lib/python3.10/multiprocessing/pool.py", line 125, in worker
    result = (True, func(*args, **kwds))
  File "/Users/ryannikolaidis/.pyenv/versions/3.10.11/lib/python3.10/multiprocessing/pool.py", line 48, in mapstar
    return list(map(*args))
  File "/Users/ryannikolaidis/Development/unstructured/unstructured/unstructured/ingest/pipeline/copy.py", line 14, in run
    ingest_doc_json = self.pipeline_context.ingest_docs_map[doc_hash]
  File "<string>", line 2, in __getitem__
  File "/Users/ryannikolaidis/.pyenv/versions/3.10.11/lib/python3.10/multiprocessing/managers.py", line 833, in _callmethod
    raise convert_to_error(kind, result)
KeyError: 'b8a98c5d963a9dd75847a8f110cbf7c9'
"""
```
This is because the run method for the embedding node is not adding the
IngestDoc to the context map. This PR adds that logic and adds a test to
validate that the embeddings option works as expected.
NOTE: until https://github.com/Unstructured-IO/unstructured/pull/1719
goes in, the expected results include the duplicate-element bug;
however, this does at least prove that embeddings are generated and the
function doesn't error.
Each partitioner has a test like `test_partition_x_with_json()`. What
these do is serialize the elements produced by the partitioner to JSON,
then read them back in from JSON and compare the before and after
elements.
Because our element equality (`Element.__eq__()`) is shallow, this
doesn't tell us a lot, but if we take it one more step, like
`List[Element] -> JSON -> List[Element] -> JSON`, and then compare the
JSON, it gives us some confidence that the serialized elements can be
"re-hydrated" without losing any information.
This actually turned up a few problems, all in the
serialization/deserialization (serde) code that all elements share.
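A sketch of the round-trip comparison using the library's JSON serde helpers (the partitioner and example file are illustrative):
```python
from unstructured.partition.auto import partition
from unstructured.staging.base import elements_from_json, elements_to_json

elements = partition(filename="example-docs/fake-text.txt")
json_1 = elements_to_json(elements)
json_2 = elements_to_json(elements_from_json(text=json_1))
assert json_1 == json_2  # a lossless serde round-trip
```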
### Description
Set language to `None` by default. Update the ingest test to validate
using the local file used in the language unit tests.
---------
Co-authored-by: ryannikolaidis <1208590+ryannikolaidis@users.noreply.github.com>
Co-authored-by: rbiseck3 <rbiseck3@users.noreply.github.com>
### Description
Add a new parameter that maps to the `skip_infer_table_types` partition
arg. It applies to the partition config, which is set on all connectors.
---------
Co-authored-by: ryannikolaidis <1208590+ryannikolaidis@users.noreply.github.com>
Co-authored-by: rbiseck3 <rbiseck3@users.noreply.github.com>
The current implementation removes elements from the beginning of the
element list and duplicates the list items.
---------
Co-authored-by: Klaijan <klaijan@unstructured.io>
Co-authored-by: yuming <305248291@qq.com>
Co-authored-by: ryannikolaidis <1208590+ryannikolaidis@users.noreply.github.com>
Co-authored-by: yuming-long <yuming-long@users.noreply.github.com>
There is a built-in option to not send data by setting an env var,
`SCARF_NO_ANALYTICS=true`.
DoD:
- When importing or running the unstructured package, a GET call is made
to Scarf
- When the env variable is set to disable tracking, the call is not made
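A minimal sketch of opting out, assuming the env var is read when the package is imported:
```python
import os

os.environ["SCARF_NO_ANALYTICS"] = "true"  # must be set before the import
import unstructured  # no GET call is made to Scarf
```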
### Summary
Closes #1534 and #1535.
Detects document language using the `langdetect` package.
Creates new kwargs for the user to set the document language
(`languages`) or detect the language at the element level instead of the
default document level (`detect_language_per_element`).
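A hedged usage sketch of the new kwargs (the example file is illustrative):
```python
from unstructured.partition.auto import partition

# Specify the document language(s) explicitly...
elements = partition(filename="example-docs/fake-text.txt", languages=["eng"])
# ...or detect the language for each element instead of once per document:
elements = partition(filename="example-docs/fake-text.txt", detect_language_per_element=True)
```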
---------
Co-authored-by: shreyanid <42684285+shreyanid@users.noreply.github.com>
Co-authored-by: ryannikolaidis <1208590+ryannikolaidis@users.noreply.github.com>
Co-authored-by: Coniferish <Coniferish@users.noreply.github.com>
Co-authored-by: cragwolfe <crag@unstructured.io>
Co-authored-by: Austin Walker <austin@unstructured.io>
**Executive Summary**
Adds a function that returns the frequency of given element types and
depths.
---------
Co-authored-by: shreyanid <42684285+shreyanid@users.noreply.github.com>
Adds data source properties to git connectors:
- date_created
- date_modified
- version
- record_locator
These properties are instantiated when supported by the connector.
Separates the logic for fetching the file from the source from
`get_file`. Retrieves file metadata when any of the properties is
accessed.
Adds logic to check if file exists in the remote source. For connectors
that don't directly support it, adds exception handling to check any
issues while retrieving the file.
---------
Co-authored-by: ryannikolaidis <1208590+ryannikolaidis@users.noreply.github.com>
Co-authored-by: rvztz <rvztz@users.noreply.github.com>
### Summary
Missing text is a particularly important quality metric for the
Unstructured library because information from the document is not being
captured and is therefore not usable by downstream applications.
Adds a function to calculate the percent of text missing relative to the
source transcription. The function takes 2 text strings (output and
source) as input and returns the percentage of text missing as a decimal.
### Technical Details
- The 2 input strings are both assumed to already contain clean and
concatenated text (CCT)
- The implementation compares the bags of words (frequency counts for
each word present in the text) of each input text; a sketch follows
below
- Duplicated/extra text is not penalized
- The value is limited to the range [0, 1]
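A minimal sketch of the bag-of-words comparison described above (the real function's name and signature may differ):
```python
from collections import Counter

def percent_missing_text(output: str, source: str) -> float:
    output_words = Counter(output.split())
    source_words = Counter(source.split())
    total = sum(source_words.values())
    if total == 0:
        return 0.0
    # Count source words not covered by the output; extra/duplicated output
    # words are not penalized.
    missing = sum(n - min(n, output_words[w]) for w, n in source_words.items())
    return min(max(missing / total, 0.0), 1.0)

print(percent_missing_text("the cat sat", "the cat sat on the mat"))  # 0.5
```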
### Test
- Several edge cases are covered in the test function (missing text,
duplicated text, spaced-out words, etc.)
- Can test other cases or text inputs by calling the function with 2 CCT
strings as "output" and "source"
Addresses https://github.com/Unstructured-IO/unstructured/issues/1663
## Summary
To find the overlap between an element bbox and an annotation bbox, we
take the intersection of the two bboxes and divide it by the size of the
annotation bbox; this causes a zero division error when the size of the
annotation bbox is 0.
* This PR fixes the zero division error in the function
`check_annotations_within_element`
* It also fixes the error `TypeError: unsupported operand type(s) for -:
'float' and 'NoneType'` by no longer inserting empty words with a `None`
bbox into the list of words in the function
`get_word_bounding_box_from_element`
## Test
Reproduce with the code and document the user mentioned; there should be
no error:
```
from unstructured.partition.auto import partition
elements = partition(
    filename="./IZSAM8.2_221012.pdf",
    strategy="fast",
)
```
This PR adds the `bag_of_words` function to count the frequency of words
for evaluation.
**Testing**
```python
from unstructured.cleaners.core import bag_of_words

string = "The dog loved the cat, but the cat loved the cow."
print(bag_of_words(string))
```
---------
Co-authored-by: Mallori Harrell <mallori@Malloris-MacBook-Pro.local>
Co-authored-by: Klaijan <klaijan@unstructured.io>
Co-authored-by: Shreya Nidadavolu <shreyanid9@gmail.com>
Co-authored-by: shreyanid <42684285+shreyanid@users.noreply.github.com>
### Description
Adds a retry strategy to the Notion HTTP calls, leveraging a generic
backoff library with some tweaks to pass in values from the CLI.
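A rough sketch of the retry pattern using the `backoff` library (the actual exception types and parameters in the connector come from the CLI and may differ):
```python
import backoff
import requests

@backoff.on_exception(backoff.expo, requests.exceptions.RequestException, max_tries=5)
def get_notion_page(url: str) -> requests.Response:
    # Retries with exponential backoff on connection and HTTP errors.
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    return response
```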
We’re probably unfairly (to the test) making a large volume of new
connections and requests to test services when all of our ingest tests
run across the full Python test matrix and a lot of PRs are firing at
once. Let's limit the full matrix run to a select few, but still have
all ingest tests run on Python 3.10. This is done by checking the
version and skipping in ingest-test.sh.
Bonus: Bumps ingest test fixture workflow to use 3.10. This technically
shouldn't make a difference, but since we're making 3.10 the default of
the matrix strategy, it probably makes sense to use 3.10 for the ingest
fixture generation as well for consistency.
## Testing
- [example](https://github.com/Unstructured-IO/unstructured/actions/runs/6460319121/job/17537900978?pr=1687) running all tests in 3.10
- [example](https://github.com/Unstructured-IO/unstructured/actions/runs/6460319121/job/17537899999?pr=1687) skipping/running the expected tests in 3.8
This PR adds the `max_characters` (hard max) param to non-table element
chunking. Additionally updates the `num_characters` metadata to
`max_characters` to make it clearer which param we're referencing.
To test:
```
from unstructured.partition.html import partition_html

filename = "example-docs/example-10k-1p.html"
chunk_elements = partition_html(
    filename,
    chunking_strategy="by_title",
    combine_text_under_n_chars=0,
    new_after_n_chars=50,
    max_characters=100,
)
for chunk in chunk_elements:
    print(len(chunk.text))

# previously we were only respecting the "soft max" (default of 500) for elements other than tables
# now we should see that all the elements have text fields under 100 chars
```
---------
Co-authored-by: cragwolfe <crag@unstructured.io>
When running test-ingest test fixtures locally (but not in CI), keep
output .json's and other workdir artifacts around for the convenience of
debugging.
**Test Instructions**
Run
```
bash -x ./test_unstructured_ingest/test-ingest-azure.sh
```
and witness output .json's are visible. Yay! Now, to instead clean up
output .json's and the workdir, run:
```
UNSTRUCTURED_CLEANUP_DEV_FIXTURES=1 bash -x ./test_unstructured_ingest/test-ingest-azure.sh
```
and witness the files have been cleaned up. Yay!
**Executive Summary**
Adds a function to calculate the edit distance (Levenshtein distance)
between two strings. The function can return either:
1. a score (similarity = 1 - distance/source_len)
2. the distance (raw Levenshtein distance)
**Technical details**
- The `weights` param defaults to (2, 1, 1) for (insertion, deletion,
substitution), meaning that insertions (text present in the source but
missing from the output/target) are penalized more heavily relative to
the source (reference). In other words, missing extractions are
penalized higher.
- The function takes in 2 strings on the assumption that both are
already clean and concatenated (CCT); a scoring sketch follows below
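A minimal sketch of the weighted edit-distance score described above; the library's actual function name, signature, and normalization may differ:
```python
def weighted_edit_distance(output: str, source: str, weights=(2, 1, 1)) -> int:
    ins_w, del_w, sub_w = weights  # insertion, deletion, substitution costs
    m, n = len(output), len(source)
    # dp[i][j]: minimum cost to transform output[:i] into source[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        dp[i][0] = i * del_w  # delete extra output characters
    for j in range(1, n + 1):
        dp[0][j] = j * ins_w  # insert missing source characters
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if output[i - 1] == source[j - 1]:
                dp[i][j] = dp[i - 1][j - 1]
            else:
                dp[i][j] = min(
                    dp[i - 1][j] + del_w,      # delete from output
                    dp[i][j - 1] + ins_w,      # insert missing source char
                    dp[i - 1][j - 1] + sub_w,  # substitute
                )
    return dp[m][n]

def edit_distance_score(output: str, source: str) -> float:
    # similarity = 1 - distance / source_len, clamped at 0
    return max(1 - weighted_edit_distance(output, source) / len(source), 0.0)
```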
**Important Note!**
The test cases need to be updated to use CCT once the function is ready.
For now, they only test the "functionality" of edit distance, not edit
distance with CCT as intended.
---------
Co-authored-by: cragwolfe <crag@unstructured.io>