1418 Commits

Author SHA1 Message Date
cragwolfe
9ea3734fd0
fix: memory issue resolved for chipper v2 (#1772)
Co-authored-by: Austin Walker <austin@unstructured.io>
Co-authored-by: Austin Walker <awalk89@gmail.com>
0.10.24
2023-10-17 14:37:25 +00:00
Roman Isecke
aeaae5fd17
destination connector method elements input (#1674)
### Description
**Ingest destination connectors support writing a raw list of elements.** Along with the default write method the ingest pipeline uses to write the JSON content associated with ingest docs, each destination connector can now also write a raw list of elements to the desired downstream location without an ingest doc associated with it.
2023-10-17 12:47:59 +00:00
Roman Isecke
b265d8874b
refactoring linting (#1739)
### Description
Currently linting only takes place over the base unstructured directory, but we support Python files throughout the repo. It makes sense for all of those files to abide by the same linting rules, so the entire repo is now inspected when the linters run. Along with that, autoflake was added as a linter; it has added benefits such as removing unused imports that would otherwise break flake8 and require manual intervention.

The only real relevant changes in this PR are in the `Makefile`,
`setup.cfg`, and `requirements/test.in`. The rest is the result of
running the linters.
2023-10-17 12:45:12 +00:00
Christine Straub
237d04c896
feat: improve natural reading order by filtering OCR results (#1768)
### Summary
Some `OCR` elements whose text is only spaces have full-page-width bounding boxes, which causes the `xycut` sorting to not work as expected. The logic that parses OCR results now removes any elements whose text is only spaces (more than one space).
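As a rough illustration of the filtering idea (a minimal sketch with a hypothetical `OCRRegion` stand-in, not the library's actual OCR data structures):

```python
from typing import List


class OCRRegion:
    """Hypothetical stand-in for one OCR result region and its recognized text."""

    def __init__(self, text: str):
        self.text = text


def drop_whitespace_regions(regions: List[OCRRegion]) -> List[OCRRegion]:
    # Whitespace-only regions can span the full page width and confuse
    # xycut-based reading-order sorting, so keep only regions with real text.
    return [region for region in regions if region.text.strip()]


print(len(drop_whitespace_regions([OCRRegion("   "), OCRRegion("Intro")])))  # 1
```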

---------

Co-authored-by: ryannikolaidis <1208590+ryannikolaidis@users.noreply.github.com>
Co-authored-by: christinestraub <christinestraub@users.noreply.github.com>
2023-10-16 23:05:55 +00:00
Léa
89fa88f076
fix: stop csv and tsv dropping the first line of the file (#1530)
The current code assumes the first line of csv and tsv files is a
header line. Most csv and tsv files don't have a header line, and even
for those that do, dropping this line may not be the desired behavior.

Here is a snippet of code that demonstrates the current behavior and the
proposed fix:

```
import textwrap

import pandas as pd
from lxml.html.soupparser import fromstring as soupparser_fromstring

c1 = textwrap.dedent(
    """\
    Stanley Cups,,
    Team,Location,Stanley Cups
    Blues,STL,1
    Flyers,PHI,2
    Maple Leafs,TOR,13
    """
)

f = "./test.csv"
with open(f, "w") as ff:
    ff.write(c1)

print("Suggested Improvement: Keep First Line")
# header=None keeps the first line as data instead of treating it as a header
table = pd.read_csv(f, header=None)
html_text = table.to_html(index=False, header=False, na_rep="")
text = soupparser_fromstring(html_text).text_content()
print(text)

print("\n\nOriginal: Loses First Line")
# the default header handling silently drops the first line from the table body
table = pd.read_csv(f)
html_text = table.to_html(index=False, header=False, na_rep="")
text = soupparser_fromstring(html_text).text_content()
print(text)
```

---------

Co-authored-by: cragwolfe <crag@unstructured.io>
Co-authored-by: Yao You <theyaoyou@gmail.com>
Co-authored-by: Yao You <yao@unstructured.io>
2023-10-16 17:59:35 -05:00
Yuming Long
4907d1e2b5
Fix: ModuleNotFoundError for partition.utils.ocr_models (#1767)
### Summary

Fix https://github.com/Unstructured-IO/unstructured-api/issues/286, where the
`partition/utils/ocr_models` folder is not uploaded to PyPI since it's
missing an `__init__.py` file.
2023-10-16 19:47:09 +00:00
Klaijan
ba4c649cf0
feat: calculate element type percent match (#1723)
**Executive Summary**
Adds a function to calculate the percent match between two element type
frequency outputs from the `get_element_type_frequency` function.

**Technical Detail**
- The function takes two `Dict` inputs, both of which should be output from
`get_element_type_frequency`.
- Implementors can define the weight (`category_depth_weight`) they want to
give to elements that match on `type` but differ in `category_depth`.
- The function first loops through the output item list to find exact matches
and count the total, collecting the remaining values for both output and
source in new lists (of `dict` type). It then loops through the remaining
source items that were not exact matches to find `type` matches, which are
weighted by the `category_depth_weight` factor defined earlier (default 0.5).

**Output**
output
```
{
  ("Title", 0): 2,
  ("Title", 1): 1,
  ("NarrativeText", None): 3,
  ("UncategorizedText", None): 1,
}
```

source
```
{
  ("Title", 0): 1,
  ("Title", 1): 2,
  ("NarrativeText", None): 5,
}
```

With this output and source and a weight of 0.5, the % match will yield
5.5 / 8: 5 exact matches plus 1 partial match with 0.5 weight.
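The weighting above can be sketched as follows (a hypothetical re-implementation for illustration; the library's actual function may differ in name, signature, and edge-case handling):

```python
from typing import Dict, Optional, Tuple

ElementTypeFrequency = Dict[Tuple[str, Optional[int]], int]


def percent_match(
    output: ElementTypeFrequency,
    source: ElementTypeFrequency,
    category_depth_weight: float = 0.5,
) -> float:
    total_source = sum(source.values())
    if total_source == 0:
        return 0.0

    # 1) exact matches on (type, category_depth)
    exact = 0
    output_left: Dict[str, int] = {}
    source_left: Dict[str, int] = {}
    for key in set(output) | set(source):
        matched = min(output.get(key, 0), source.get(key, 0))
        exact += matched
        element_type = key[0]
        output_left[element_type] = output_left.get(element_type, 0) + output.get(key, 0) - matched
        source_left[element_type] = source_left.get(element_type, 0) + source.get(key, 0) - matched

    # 2) partial matches: same type, different category_depth, weighted
    partial = sum(
        min(remaining, output_left.get(element_type, 0))
        for element_type, remaining in source_left.items()
    )
    return (exact + category_depth_weight * partial) / total_source


output = {("Title", 0): 2, ("Title", 1): 1, ("NarrativeText", None): 3, ("UncategorizedText", None): 1}
source = {("Title", 0): 1, ("Title", 1): 2, ("NarrativeText", None): 5}
print(percent_match(output, source))  # 5.5 / 8 = 0.6875
```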

---------

Co-authored-by: shreyanid <42684285+shreyanid@users.noreply.github.com>
2023-10-16 17:57:28 +00:00
Roman Isecke
9c7ee8921a
roman/fsspec compression support (#1730)
### Description 
Opened to replace original PR:
[1443](https://github.com/Unstructured-IO/unstructured/pull/1443)
2023-10-16 14:26:30 +00:00
cragwolfe
282b8f700d
build: release unstructured==0.10.23 (#1762)
Cut the release.
0.10.23
2023-10-15 21:26:46 -07:00
John
6d7fe3ab02
fix: default to None for the languages metadata field (#1743)
### Summary
Closes #1714
Changes the default value for `languages` to `None` for elements that
don't have text or the language can't be detected.

### Testing
```
from unstructured.partition.auto import partition
filename = "example-docs/handbook-1p.docx"
elements = partition(filename=filename, detect_language_per_element=True)

# PageBreak elements don't have text and will be collected here
none_langs = [element for element in elements if element.metadata.languages is None]
none_langs[0].text
```

---------

Co-authored-by: ryannikolaidis <1208590+ryannikolaidis@users.noreply.github.com>
Co-authored-by: Coniferish <Coniferish@users.noreply.github.com>
Co-authored-by: cragwolfe <crag@unstructured.io>
2023-10-14 22:46:24 +00:00
Amanda Cameron
d0c84d605c
chore: updating table docs with file extensions (#1702)
gh issue: https://github.com/Unstructured-IO/unstructured/issues/1691

Adding filetype extensions from this
[list](f98d5e65ca/unstructured/file_utils/filetype.py (L154-L200))
where applicable.

---------

Co-authored-by: cragwolfe <crag@unstructured.io>
Co-authored-by: Crag Wolfe <crag@unstructuredai.io>
2023-10-14 14:14:52 -07:00
qued
cf31c9a2c4
fix: use nx to avoid recursion limit (#1761)
Fixes recursion limit error that was being raised when partitioning
Excel documents of a certain size.

Previously we used a recursive method to find subtables within an Excel
sheet. However, this would run afoul of Python's recursion depth limit
when there was a contiguous block of more than 1000 cells within a
sheet. The function has been updated to use the NetworkX library, which
avoids Python recursion issues.

* Updated `_get_connected_components` to use `networkx` graph methods
rather than implementing our own algorithm for finding contiguous groups
of cells within a sheet.
* Added a test and example doc that replicates the `RecursionError`
prior to the change.
*  Added `networkx` to `extra_xlsx` dependencies and `pip-compile`d.
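A minimal sketch of the NetworkX approach (illustrative only; the actual `_get_connected_components` implementation and its notion of adjacency may differ):

```python
import networkx as nx


def connected_cell_groups(cells):
    """Group occupied (row, col) cells into contiguous blocks using networkx."""
    cell_set = set(cells)
    graph = nx.Graph()
    graph.add_nodes_from(cell_set)
    # Connect each occupied cell to its occupied neighbors (including diagonals here).
    for row, col in cell_set:
        for drow in (-1, 0, 1):
            for dcol in (-1, 0, 1):
                neighbor = (row + drow, col + dcol)
                if neighbor != (row, col) and neighbor in cell_set:
                    graph.add_edge((row, col), neighbor)
    # connected_components is iterative, so no recursion depth limit applies.
    return list(nx.connected_components(graph))


print(connected_cell_groups([(0, 0), (0, 1), (5, 5)]))  # e.g. [{(0, 0), (0, 1)}, {(5, 5)}]
```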

#### Testing:
The following run from a Python terminal should raise a `RecursionError`
on `main` and succeed on this branch:
```python
import sys
from unstructured.partition.xlsx import partition_xlsx
old_recursion_limit = sys.getrecursionlimit()
try:
    sys.setrecursionlimit(1000)
    filename = "example-docs/more-than-1k-cells.xlsx"
    partition_xlsx(filename=filename)
finally:
    sys.setrecursionlimit(old_recursion_limit)

```
Note: the recursion limit is different in different contexts. Checking
my own system, the default in a notebook seems to be 3000, but in a
terminal it's 1000. The documented Python default recursion limit is
1000.
2023-10-14 19:38:21 +00:00
cragwolfe
3f32c6702a
feat: bump unstructured-inference=0.7.5 for faster chipper (#1756)
**Improved inference speed for Chipper V2** API requests with
'hi_res_model_name=chipper' now have ~2-3x faster responses.
2023-10-14 13:03:59 -07:00
Minwoo Byeon (Dylan)
3331c5c6c0
Remove the temporary files when the conversion is finished. (#1696)
Co-authored-by: cragwolfe <crag@unstructured.io>
Co-authored-by: Yao You <theyaoyou@gmail.com>
2023-10-13 18:51:44 -05:00
qued
95728ead0f
fix: zero divide in under_non_alpha_ratio (#1753)
The function `under_non_alpha_ratio` in
`unstructured.partition.text_type` was producing a divide-by-zero error.
After investigation I found this was a possibility when the function was
passed a string of all spaces.
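For illustration, a guard along these lines avoids the error when the text contains no countable characters (a sketch only; the library's actual implementation may differ):

```python
def under_non_alpha_ratio(text: str, threshold: float = 0.5) -> bool:
    # Consider only non-whitespace characters; a string of all spaces would
    # otherwise lead to dividing by zero when computing the ratio.
    total = len([char for char in text if char.strip()])
    if total == 0:
        return False
    alpha = len([char for char in text if char.strip() and char.isalpha()])
    return alpha / total < threshold


print(under_non_alpha_ratio("     "))  # False, instead of a ZeroDivisionError
```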

---------

Co-authored-by: cragwolfe <crag@unstructured.io>
2023-10-13 21:20:01 +00:00
M Bharat lal
21df17f7fa
fix: consider all the required lines instead of first line to detect file type as CSV (#1728)
The current file detection logic for CSV in `file_utils/filetype.py` does not
consider all of the lines when counting the number of commas; it considers
only the first line, so the check always returns true:

```
lines = lines[: len(lines)] if len(lines) < 10 else lines[:10]
header_count = _count_commas(lines[0])
if any("," not in line for line in lines):
    return False
return all(_count_commas(line) == header_count for line in lines[:1])
```

The issue is fixed by considering all the lines except the first line, as shown
below:

```
lines = lines[: len(lines)] if len(lines) < 10 else lines[:10]
header_count = _count_commas(lines[0])
if any("," not in line for line in lines):
    return False
return all(_count_commas(line) == header_count for line in lines[1:])
```
2023-10-13 13:36:05 -07:00
Christine Straub
ef391e1a3e
feat: less precision in json floats (#1718)
Closes #1340.
### Summary
- add functionality to limit precision when serializing to JSON
### Testing
```
from unstructured.partition.auto import partition
from unstructured.staging.base import elements_to_json

elements = partition("raw_doc.<extension>")  # placeholder: path to any example document
output_json = elements_to_json(elements)
print(output_json)
```
2023-10-13 11:06:36 -07:00
Austin Walker
ad1b93dbaa
chore: cut the 0.10.22 release (#1749) 0.10.22 2023-10-13 17:17:21 +00:00
ryannikolaidis
d9a0bd741a
fix: build test failures (#1748)
* Fix missing HF_TOKEN when running containerized test for the build
process
* Fix pytest args when running specific test

## Testing
Example run of the HF_TOKEN assigned for the containerized test in the
build process:
https://github.com/Unstructured-IO/unstructured/actions/runs/6504556437/job/17666669155

Example run of the pytest args working for the arm test (ran in a new
workflow for testing on push):
https://github.com/Unstructured-IO/unstructured/actions/runs/6504213010
2023-10-13 01:08:27 -07:00
Steve Canny
4b84d596c2
docx: add hyperlink metadata (#1746) 2023-10-13 06:26:14 +00:00
qued
8100f1e7e2
chore: process chipper hierarchy (#1634)
PR to support schema changes introduced from [PR
232](https://github.com/Unstructured-IO/unstructured-inference/pull/232)
in `unstructured-inference`.

Specifically what needs to be supported is:
* Change to the way `LayoutElement` from `unstructured-inference` is
structured, specifically that this class is no longer a subclass of
`Rectangle`, and instead `LayoutElement` has a `bbox` property that
captures the location information and a `from_coords` method that allows
construction of a `LayoutElement` directly from coordinates.
* Removal of `LocationlessLayoutElement` since chipper now exports
bounding boxes, and if we need to support elements without bounding
boxes, we can make the `bbox` property mentioned above optional.
* Getting hierarchy data directly from the inference elements rather
than in post-processing
* Don't try to reorder elements received from chipper v2, as they should
already be ordered.

#### Testing:

The following demonstrates that the new version of chipper is inferring
hierarchy.

```python
from unstructured.partition.pdf import partition_pdf
elements = partition_pdf("example-docs/layout-parser-paper-fast.pdf", strategy="hi_res", model_name="chipper")
children = [el for el in elements if el.metadata.parent_id is not None]
print(children)

```
Also verify that running the traditional `hi_res` gives different
results:
```python
from unstructured.partition.pdf import partition_pdf
elements = partition_pdf("example-docs/layout-parser-paper-fast.pdf", strategy="hi_res")

```

---------

Co-authored-by: Sebastian Laverde Alfonso <lavmlk20201@gmail.com>
Co-authored-by: ryannikolaidis <1208590+ryannikolaidis@users.noreply.github.com>
Co-authored-by: christinestraub <christinemstraub@gmail.com>
2023-10-13 01:28:46 +00:00
Ahmet Melek
94836cfad4
feat: add file-based access permissions for SharePoint ingest (#1628)
This PR:

- defines rbac_data as a SourceMetadata field,
- manages connections to an external API for obtaining rbac data with the
ConnectorRBAC class,
- serializes rbac data and saves it to disk,
- matches the rbac_data on disk to each IngestDoc, using a common field,
- forwards rbac data to Elements via the partition() function

To test the changes, run `examples/ingest/sharepoint/ingest.sh` with the
relevant rbac & connector credentials.

---------

Co-authored-by: ryannikolaidis <1208590+ryannikolaidis@users.noreply.github.com>
Co-authored-by: ahmetmeleq <ahmetmeleq@users.noreply.github.com>
2023-10-13 00:38:08 +00:00
shreyanid
3ec3673d34
feat: staging function to extract element text into one string (#1741)
### Summary
In order to enable larger scale testing of the new text extraction
metrics, create a helper function to get the clean, concatenated text
(CCT) from partitioned elements.

### Test
Partition any file, then pass the resulting elements into the new
`elements_to_text` function. You can get the output as a string or write it
to a text file.

```
from unstructured.partition.auto import partition
from unstructured.staging.base import elements_to_text

elements = partition(filename="example-docs/chevron-page.pdf", strategy="hi_res")
elements_text = elements_to_text(elements, "output-text-file.txt")
print(elements_text)
```
2023-10-12 23:59:16 +00:00
ryannikolaidis
40523061ca
fix: _add_embeddings_to_elements bug resulting in duplicated elements (#1719)
Currently when the OpenAIEmbeddingEncoder adds embeddings to Elements in
`_add_embeddings_to_elements` it overwrites each Element's `to_dict`
method, mistakenly resulting in each Element having identical values
with the exception of the actual embedding value. This was due to the
way it leverages a nested `new_to_dict` method to overwrite. Instead,
this updates the original definition of Element itself to accommodate
the `embeddings` field when available. This also adds a test to validate
that values are not duplicated.
2023-10-12 21:47:32 +00:00
Roman Isecke
ebf0722dcc
roman/ingest continue on error (#1736)
### Description
Add a flag to raise an error on failure, defaulting to only logging it and
continuing with the other docs.
2023-10-12 21:33:10 +00:00
ryannikolaidis
d22044a44c
fix: unstructured-ingest embedding KeyError (#1727)
Currently adding the embedding flag to any unstructured-ingest call
results in this failure:

```
2023-10-11 22:42:14,177 MainProcess ERROR    'b8a98c5d963a9dd75847a8f110cbf7c9'
multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
  File "/Users/ryannikolaidis/.pyenv/versions/3.10.11/lib/python3.10/multiprocessing/pool.py", line 125, in worker
    result = (True, func(*args, **kwds))
  File "/Users/ryannikolaidis/.pyenv/versions/3.10.11/lib/python3.10/multiprocessing/pool.py", line 48, in mapstar
    return list(map(*args))
  File "/Users/ryannikolaidis/Development/unstructured/unstructured/unstructured/ingest/pipeline/copy.py", line 14, in run
    ingest_doc_json = self.pipeline_context.ingest_docs_map[doc_hash]
  File "<string>", line 2, in __getitem__
  File "/Users/ryannikolaidis/.pyenv/versions/3.10.11/lib/python3.10/multiprocessing/managers.py", line 833, in _callmethod
    raise convert_to_error(kind, result)
KeyError: 'b8a98c5d963a9dd75847a8f110cbf7c9'
"""
```

This is because the run method for the embedding node is not adding the
IngestDoc to the context map. This PR adds that logic and adds a test to
validate that the embeddings option works as expected.

NOTE: until https://github.com/Unstructured-IO/unstructured/pull/1719
goes in, the expected results include the duplicate element bug, however
currently this does at least prove that embeddings are generated and the
function doesn't error.
2023-10-12 20:27:30 +00:00
Steve Canny
d726963e42
serde tests round-trip through JSON (#1681)
Each partitioner has a test like `test_partition_x_with_json()`. What
these do is serialize the elements produced by the partitioner to JSON,
then read them back in from JSON and compare the before and after
elements.

Because our element equality (`Element.__eq__()`) is shallow, this
doesn't tell us a lot, but if we take it one more step, like
`List[Element] -> JSON -> List[Element] -> JSON` and then compare the
JSON, it gives us some confidence that the serialized elements can be
"re-hydrated" without losing any information.

This actually showed up a few problems, all in the
serialization/deserialization (serde) code that all elements share.
2023-10-12 19:47:55 +00:00
Yuming Long
cb247d8cc4
doc: update comment for ingest test pdf-fast-reprocess (#1733) 2023-10-12 17:33:25 +00:00
Roman Isecke
22e568cf64
roman/bugfix fix default language ingest option (#1729)
### Description
Set language to None by default. Update ingest test to use local file
used in language unit tests to validate.

---------

Co-authored-by: ryannikolaidis <1208590+ryannikolaidis@users.noreply.github.com>
Co-authored-by: rbiseck3 <rbiseck3@users.noreply.github.com>
2023-10-12 17:31:23 +00:00
Roman Isecke
9b5d5e0f9e
roman/cli infer table arg (#1685)
### Description
Add new parameter to map to `skip_infer_table_types` partition arg.
Applies to partition config which is set on all connectors.

---------

Co-authored-by: ryannikolaidis <1208590+ryannikolaidis@users.noreply.github.com>
Co-authored-by: rbiseck3 <rbiseck3@users.noreply.github.com>
2023-10-12 16:14:53 +00:00
Roman Isecke
35852bba83
roman/bugfix ingest pipeline reformat dir (#1717)
Closes #1724 
### Description
Add file needed to make that dir discoverable
2023-10-12 15:44:30 +00:00
Trevor Bossert
9864086bc8
Drop patch version of python in Scarf anonymous analytics (#1725) 2023-10-12 01:36:47 +00:00
Trevor Bossert
569561e59b
Add more params for scarf (#1720)
Allows slicing on further metrics.
2023-10-12 00:09:19 +00:00
Inscore
8ab40c20c1
fix: correct PDF list item parsing (#1693)
The current implementation removes elements from the beginning of the
element list and duplicates the list items

---------

Co-authored-by: Klaijan <klaijan@unstructured.io>
Co-authored-by: yuming <305248291@qq.com>
Co-authored-by: ryannikolaidis <1208590+ryannikolaidis@users.noreply.github.com>
Co-authored-by: yuming-long <yuming-long@users.noreply.github.com>
2023-10-11 20:38:36 +00:00
Trevor Bossert
6acd06987b
Remove extra index url from docs (#1711)
It’s no longer required to specify the extra index URL, as we utilize a
different method of gathering anonymous install analytics.
2023-10-11 19:34:49 +00:00
Trevor Bossert
f0a63e2712
Add basic call to scarf to get anonymous analytics (#1705)
There is a built-in option to not send data by setting an env var,
SCARF_NO_ANALYTICS=true.

DoD:
- When importing or running the unstructured package, it will make a GET call
to Scarf
- When the env variable is set to not track, the call is not made
0.10.21
2023-10-11 09:15:36 -07:00
John
9500d04791
detect document language across all partitioners (#1627)
### Summary
Closes #1534 and #1535
Detects the document language using the `langdetect` package.
Creates new kwargs for the user to set the document language (`languages`)
or detect the language at the element level instead of the default
document level (`detect_language_per_element`).
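A possible usage sketch of the new kwargs (assuming the example doc path used elsewhere in this log):

```python
from unstructured.partition.auto import partition

# Set the document language explicitly...
elements = partition(filename="example-docs/handbook-1p.docx", languages=["eng"])
print(elements[0].metadata.languages)

# ...or detect the language per element instead of per document.
elements = partition(
    filename="example-docs/handbook-1p.docx",
    detect_language_per_element=True,
)
print([element.metadata.languages for element in elements[:5]])
```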

---------

Co-authored-by: shreyanid <42684285+shreyanid@users.noreply.github.com>
Co-authored-by: ryannikolaidis <1208590+ryannikolaidis@users.noreply.github.com>
Co-authored-by: Coniferish <Coniferish@users.noreply.github.com>
Co-authored-by: cragwolfe <crag@unstructured.io>
Co-authored-by: Austin Walker <austin@unstructured.io>
0.10.20
2023-10-11 01:47:56 +00:00
Klaijan
ee75ce25e2
feat: element type frequency (#1688)
**Executive Summary**

Add a function that returns the frequency of given element types and depth.
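A hypothetical re-implementation of the idea for illustration (the library's `get_element_type_frequency` may differ in signature and normalization):

```python
from collections import Counter
from typing import Dict, Optional, Tuple

from unstructured.partition.auto import partition


def element_type_frequency(elements) -> Dict[Tuple[str, Optional[int]], int]:
    # Count how many times each (element type, category_depth) pair appears.
    return dict(Counter((el.category, el.metadata.category_depth) for el in elements))


elements = partition(filename="example-docs/handbook-1p.docx")
print(element_type_frequency(elements))  # e.g. {("Title", 0): 2, ("NarrativeText", None): 3, ...}
```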

---------

Co-authored-by: shreyanid <42684285+shreyanid@users.noreply.github.com>
2023-10-11 00:36:44 +00:00
rvztz
7fd61e3a7f
Adds data source properties to git connectors (#1280)
Adds data source properties to git connectors:
-  date_created
-  date_modified
-  version
-  record_locator
These properties are instantiated when supported by the connector.

Separates the logic for fetching the file from the source from
`get_file`. Retrieves file metadata when any of the properties are
called.

Adds logic to check whether the file exists in the remote source. For
connectors that don't directly support this, adds exception handling to
check for any issues while retrieving the file.

---------

Co-authored-by: ryannikolaidis <1208590+ryannikolaidis@users.noreply.github.com>
Co-authored-by: rvztz <rvztz@users.noreply.github.com>
2023-10-10 22:56:56 +00:00
shreyanid
9d228c7ecb
feat: calculate metric for percent of text missing (#1701)
### Summary
Missing text is a particularly important metric of quality for the
Unstructured library because information from the document is not being
captured and therefore not usable by downstream applications.

Add function to calculate the percent of text missing relative to the
source transcription. Function takes 2 text strings (output and source)
as input, and returns the percentage of text missing as a decimal.

### Technical Details
- The 2 input strings are both assumed to already contain clean and
concatenated text (CCT)
- Implementation compares the bags of words (frequency counts for each
word present in the text) of each input text
- Duplicated/extra text is not penalized
- Value is limited to the range [0, 1]

### Test
- Several edge cases are covered in the test function (missing text,
duplicated text, spaced out words, etc).
- Can test other cases or text inputs by calling the function with 2 CCT
strings as "output" and "source"
2023-10-10 20:54:49 +00:00
Yuming Long
e597ec7a0f
Fix: skip empty annotation bbox (#1665)
Address: https://github.com/Unstructured-IO/unstructured/issues/1663
## Summary
While trying to find the overlap between an element bbox and an annotation
bbox, we take the intersection of the two bboxes and divide it by the size
of the annotation bbox; this causes a zero division error if the size of the
annotation bbox is 0.

* This PR fixes the zero division error in the function
`check_annotations_within_element`.
* It also fixes the error `TypeError: unsupported operand type(s) for -: 'float'
and 'NoneType'` by no longer inserting empty words with a None bbox into the
list of words in the function `get_word_bounding_box_from_element`.

## Test
Reproduce with the code and document the user mentioned; you should see no
error:
```
from unstructured.partition.auto import partition

elements = partition(
    filename="./IZSAM8.2_221012.pdf",
    strategy="fast",
)
```
2023-10-10 20:48:44 +00:00
Mallori Harrell
a5d7ae4611
Feat: Bag of words for testing metric (#1650)
This PR adds the `bag_of_words` function to count the frequency of words
for evaluation.

**Testing**
```Python
from unstructured.cleaners.core import bag_of_words
string = "The dog loved the cat, but the cat loved the cow."

print(bag_of_words(string))
```

---------

Co-authored-by: Mallori Harrell <mallori@Malloris-MacBook-Pro.local>
Co-authored-by: Klaijan <klaijan@unstructured.io>
Co-authored-by: Shreya Nidadavolu <shreyanid9@gmail.com>
Co-authored-by: shreyanid <42684285+shreyanid@users.noreply.github.com>
2023-10-10 18:46:01 +00:00
Roman Isecke
b38a6b3022
feat: add Notion connector retry strategy (#1492)
### Description
Adds a retry strategy to the Notion HTTP calls by leveraging a generic
backoff library, with some tweaks to pass in values from the CLI.
2023-10-10 17:41:18 +00:00
Klaijan
1d80beaaf2
fix: initialize uri before try-except (#1690)
Fix github issue
https://github.com/Unstructured-IO/unstructured/issues/1686
2023-10-10 17:29:10 +00:00
ryannikolaidis
3e101d3e4f
build(test): skip full python matrix for most ingest tests (#1687)
We’re probably unfairly (to the test services) making a large volume of new
connections and requests when all of our ingest tests run across the full
Python test matrix and a lot of PRs are firing at once. Let's limit the full
matrix run to a select few, but still have all ingest tests run on Python
3.10. This is done by checking the version and skipping in ingest-test.sh.

Bonus: bumps the ingest test fixture workflow to use 3.10. This technically
shouldn't make a difference, but since we're making 3.10 the default of
the matrix strategy, it probably makes sense to use 3.10 for the ingest
fixture generation as well for consistency.

## Testing
-
[example](https://github.com/Unstructured-IO/unstructured/actions/runs/6460319121/job/17537900978?pr=1687)
running all tests in 3.10
-
[example](https://github.com/Unstructured-IO/unstructured/actions/runs/6460319121/job/17537899999?pr=1687)
skipping/running the expected tests in 3.8
2023-10-10 16:39:34 +00:00
Dev Khant
f09b87da23
Doc : replace link upstream connectors with source connectors (#1683)
Fixes #1502

Here I have replaced `stream_connectors.html` with
`source_connectors.html`.
2023-10-09 21:37:51 -07:00
Amanda Cameron
f98d5e65ca
chore: adding max_characters to other element type chunking (#1673)
This PR adds the `max_characters` (hard max) param to non-table element
chunking. Additionally updates the `num_characters` metadata to
`max_characters` to make it clearer which param we're referencing.

To test:

```
from unstructured.partition.html import partition_html

filename = "example-docs/example-10k-1p.html"
chunk_elements = partition_html(
        filename,
        chunking_strategy="by_title",
        combine_text_under_n_chars=0,
        new_after_n_chars=50,
        max_characters=100,
    )

for chunk in chunk_elements:
     print(len(chunk.text))

# previously we were only respecting the "soft max" (default of 500) for elements other than tables
# now we should see that all the elements have text fields under 100 chars.
```

---------

Co-authored-by: cragwolfe <crag@unstructured.io>
2023-10-09 19:42:36 +00:00
David Potter
8b93217a33
build(test): exclude version metadata from google drive test (#1682) 2023-10-07 19:34:32 -07:00
cragwolfe
46cb1b642a
chore: don't cleanup ingest test outputs (non-CI) (#1680)
When running test-ingest test fixtures locally (but not in CI), keep
output .json files and other workdir artifacts around for the convenience of
debugging.

**Test Instructions**

Run 

    bash -x ./test_unstructured_ingest/test-ingest-azure.sh

and witness that the output .json files are visible. Yay! Now, to instead
clean up the output .json files and workdir, run:

    UNSTRUCTURED_CLEANUP_DEV_FIXTURES=1 bash -x ./test_unstructured_ingest/test-ingest-azure.sh

and witness that the files have been cleaned up. Yay!
2023-10-07 02:18:37 +00:00
Klaijan
33edbf84f5
feat: add calculate edit distance feature (#1656)
**Executive Summary**

Adds a function to calculate the edit distance (Levenshtein distance) between
two strings. The function can return either:
1. score (similarity = 1 - distance/source_len)
2. distance (raw Levenshtein distance)

**Technical details**
- The `weights` param defaults to (2, 1, 1) for (insertion, deletion,
substitution), meaning that we penalize insertions that need to be added to
the output (target) in comparison with the source (reference). In other
words, missing extractions are penalized higher. See the sketch after this
list.
- The function takes in 2 strings under the assumption that both strings are
already clean and concatenated (CCT).
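A sketch of the weighted distance and score (illustrative only; the library's function may define the operations relative to output/source differently):

```python
def weighted_edit_distance(output: str, source: str, weights=(2, 1, 1)) -> int:
    insert_cost, delete_cost, substitute_cost = weights
    # previous[col] holds the cost of transforming output[:row-1] into source[:col]
    previous = [col * insert_cost for col in range(len(source) + 1)]
    for row, out_char in enumerate(output, start=1):
        current = [row * delete_cost]
        for col, src_char in enumerate(source, start=1):
            if out_char == src_char:
                current.append(previous[col - 1])
            else:
                current.append(
                    min(
                        previous[col] + delete_cost,      # delete a char from the output
                        current[col - 1] + insert_cost,   # insert a missing source char
                        previous[col - 1] + substitute_cost,
                    )
                )
        previous = current
    return previous[-1]


def edit_distance_score(output: str, source: str) -> float:
    # similarity = 1 - distance / source_len, clamped to be non-negative
    if not source:
        return 0.0
    return max(1 - weighted_edit_distance(output, source) / len(source), 0.0)


print(edit_distance_score("hello wrld", "hello world"))  # 1 - 2/11 ≈ 0.82
```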

**Important Note!**
Test cases need to be updated to use CCT once the function is ready.
Currently only the "functionality" of edit distance is tested, not edit
distance with CCT as intended.

---------

Co-authored-by: cragwolfe <crag@unstructured.io>
2023-10-07 01:21:14 +00:00