Carries `skip_infer_table_types` through to `infer_table_structure` in the
partition flow. Table elements from PPT/X, DOC/X, etc. should now respect that
setting and not carry a `text_as_html` field when table inference is skipped
(see the sketch below).
Note: I've continued to exclude this variable from partitioners that go
through the HTML flow. If we've already got the HTML, it doesn't make sense to
carry the infer variable along, since we're not 'infer-ing' the HTML table in
those cases.
TODO:
✅ add unit tests
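A minimal sketch of the expected behavior, assuming a local DOCX with a table
(the path below is illustrative) and passing the file type explicitly in
`skip_infer_table_types`:
```python
from unstructured.partition.auto import partition

# With "docx" in skip_infer_table_types, table structure inference is skipped,
# so Table elements should come back without text_as_html.
elements = partition("example-docs/fake-table.docx", skip_infer_table_types=["docx"])
tables = [el for el in elements if el.category == "Table"]
assert all(el.metadata.text_as_html is None for el in tables)
```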
---------
Co-authored-by: ryannikolaidis <1208590+ryannikolaidis@users.noreply.github.com>
Co-authored-by: amanda103 <amanda103@users.noreply.github.com>
Summary: Added support for AWS Bedrock embeddings. Leverages
"amazon.titan-tg1-large" as the embedding model (a usage sketch follows the
test steps below).
Test
- find your AWS secret access key and key ID; make sure the account has
access to Bedrock's Titan embedding model
- follow the instructions in
d5e797cd44/docs/source/bricks/embedding.rst (bedrockembeddingencoder)
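A minimal usage sketch, assuming LangChain's `BedrockEmbeddings` is used under
the hood; the boto3 client setup, service name, and region are illustrative,
not the encoder's actual code:
```python
import boto3
from langchain.embeddings import BedrockEmbeddings

# Illustrative client setup; credentials normally come from your AWS config/env.
client = boto3.client("bedrock-runtime", region_name="us-west-2")
embeddings = BedrockEmbeddings(client=client, model_id="amazon.titan-tg1-large")

vector = embeddings.embed_query("Hello from unstructured!")
print(len(vector))
```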
---------
Co-authored-by: Ahmet Melek <39141206+ahmetmeleq@users.noreply.github.com>
Co-authored-by: Yao You <yao@unstructured.io>
Co-authored-by: Yao You <theyaoyou@gmail.com>
Co-authored-by: Ahmet Melek <ahmetmeleq@gmail.com>
### Description
* Update all existing connector docs to use new pipeline approach
### Additional changes:
* Some defaults were set for the runners to match those in the configs
to make them easier to handle, e.g. the biomed runner:
```python
max_retries: int = 5,
max_request_time: int = 45,
decay: float = 0.3,
```
### Description
Currently, linting only runs over the base `unstructured` directory, but we
have Python files throughout the repo. It makes sense for all of those files
to abide by the same linting rules, so the entire repo is now inspected when
the linters run. Along with that, autoflake was added as a linter; among its
benefits, it automatically removes unused imports that would otherwise break
flake8 and require manual intervention.
The only real relevant changes in this PR are in the `Makefile`,
`setup.cfg`, and `requirements/test.in`. The rest is the result of
running the linters.
This PR:
- defines `rbac_data` as a `SourceMetadata` field,
- manages connections to an external API for obtaining RBAC data with the
`ConnectorRBAC` class,
- serializes RBAC data and saves it to disk,
- matches the on-disk RBAC data to each `IngestDoc`, using a common field,
- forwards RBAC data to Elements via the `partition()` function (a sketch of
this flow follows the testing note below)
To test the changes, run `examples/ingest/sharepoint/ingest.sh` with the
relevant RBAC and connector credentials
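A minimal, hypothetical sketch of the flow above; apart from `rbac_data` and
`SourceMetadata`, the names and file layout here are illustrative, not the
connector's actual API:
```python
import json
from dataclasses import dataclass
from pathlib import Path
from typing import Optional


@dataclass
class SourceMetadata:
    # permissions payload fetched from the external RBAC API
    rbac_data: Optional[dict] = None


def save_rbac_data(rbac_data: dict, output_dir: Path) -> Path:
    """Serialize the fetched RBAC payload so later pipeline steps can look it up."""
    path = output_dir / "permissions.json"
    path.write_text(json.dumps(rbac_data))
    return path


def attach_rbac_data(source_metadata: SourceMetadata, permissions_path: Path, doc_id: str) -> None:
    """Match the on-disk RBAC data to a document via a shared id field."""
    permissions = json.loads(permissions_path.read_text())
    source_metadata.rbac_data = permissions.get(doc_id)
```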
---------
Co-authored-by: ryannikolaidis <1208590+ryannikolaidis@users.noreply.github.com>
Co-authored-by: ahmetmeleq <ahmetmeleq@users.noreply.github.com>
This PR adds the `max_characters` (hard max) param to non-table element
chunking. Additionally updates the `num_characters` metadata to
`max_characters` to make it clearer which param we're referencing.
To test:
```python
from unstructured.partition.html import partition_html

filename = "example-docs/example-10k-1p.html"
chunk_elements = partition_html(
    filename,
    chunking_strategy="by_title",
    combine_text_under_n_chars=0,
    new_after_n_chars=50,
    max_characters=100,
)

# Previously we were only respecting the "soft max" (default of 500) for elements
# other than tables; now we should see that all the elements have text fields
# under 100 chars.
for chunk in chunk_elements:
    print(len(chunk.text))
```
---------
Co-authored-by: cragwolfe <crag@unstructured.io>
* Updated Metadata page: add common and additional metadata fields by
document types and connectors
* Updated specific installation extra by document types and connectors
* Added embedding brick page in Sphinx TOC
* Fixed Sphinx warnings in new pages
### Description
Updating the python version of the example docs to show how to run the
same code that the CLI runs, but using python. Rather than copying the
same command that would be run via the terminal and using the subprocess
library to run it, this updates it to use the supported code exposed in
the inference directory.
For now only the wikipedia one has been updated to get some opinions on
this before updating all other connector docs.
Would close out
https://github.com/Unstructured-IO/unstructured/issues/1445
### Description
This PR is two-fold:
**Embeddings:**
* Embeddings incorporated into the sharepoint source connector, which
will now call out to OpenAI and create embeddings if the flag is passed
in and the api key provided.
**Writing vector content (embeddings) to Azure cognitive search index:**
* The schema for the index expected to exist in Azure has been updated to
include the vector field type, and a test script has been added to exercise
pushing the new embedding content produced by the SharePoint connector.
Some important notes about other changes in here:
* The embedding code had to be updated to patch the `to_dict` method on
elements so that `embeddings` is included in the dict output when it has been
set. The code originally added the embedding content to the elements, but it
was lost when `to_dict` was called to save the content as JSON.
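A minimal sketch of the idea behind that patch (not the PR's actual code):
make sure an element's `embeddings` attribute survives serialization.
```python
from unstructured.documents.elements import Element


def element_to_dict_with_embeddings(element: Element) -> dict:
    """Serialize an element, carrying the embeddings attribute into the dict output."""
    data = element.to_dict()
    embeddings = getattr(element, "embeddings", None)
    if embeddings is not None:
        data["embeddings"] = embeddings
    return data
```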
This updates the Docker image download URL to pass through the Scarf
gateway, which allows anonymous tracking of downloads.
Related to:
https://github.com/Unstructured-IO/unstructured#chart_with_upwards_trend-analytics
Testing:
docker pull
downloads.unstructured.io/unstructured-io/unstructured:latest
Result:
Image should download
### Description
New [Azure Cognitive
Search](https://azure.microsoft.com/en-us/products/ai-services/cognitive-search)
destination connector added. It reads each JSON element from the JSON files
created via partition and writes that content to an index.
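A minimal sketch of that write path using the `azure-search-documents` SDK;
the endpoint, index name, API key, and output path are placeholders, not the
connector's actual configuration:
```python
import json

from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

# Placeholder endpoint/index/key; the real values come from the connector config.
client = SearchClient(
    endpoint="https://<search-service>.search.windows.net",
    index_name="<index-name>",
    credential=AzureKeyCredential("<api-key>"),
)

# One of the JSON files produced by partition (path is illustrative).
with open("structured-output/example.json") as f:
    elements = json.load(f)

client.upload_documents(documents=elements)
```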
**Bonus bug fix:** Because of a recent change that bumped the default Python
version used in the repo from `3.8` to `3.10`, running `pip-compile` now
resolves against that version rather than the lowest version we support, which
is still `3.8`. This breaks the setup for those lower versions because some of
the versions pulled in by `pip-compile` exist for `3.10` but not `3.8`.
`pip-compile` was updated to run via a script that first checks the version of
Python being used, which helps guarantee that all pinned dependencies meet the
minimum Python version requirement.
Closes out https://github.com/Unstructured-IO/unstructured/issues/1466
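A minimal, hypothetical sketch of such a version guard (the repo's actual
script may differ): refuse to resolve dependencies on anything other than the
minimum supported interpreter.
```python
import subprocess
import sys

MIN_SUPPORTED = (3, 8)

# Refuse to run pip-compile unless we're on the minimum supported interpreter,
# so the pinned versions stay installable on Python 3.8.
if sys.version_info[:2] != MIN_SUPPORTED:
    sys.exit(
        f"pip-compile must be run with Python {MIN_SUPPORTED[0]}.{MIN_SUPPORTED[1]}; "
        f"found {sys.version_info.major}.{sys.version_info.minor}"
    )

subprocess.run(["pip-compile", *sys.argv[1:]], check=True)
```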
Closes https://github.com/Unstructured-IO/unstructured/issues/1319,
closes https://github.com/Unstructured-IO/unstructured/issues/1372
This module:
- implements EmbeddingEncoder classes which track embedding-related data
- implements an embed_documents method which receives a list of Elements,
obtains embeddings for the text within the Elements, updates the Elements
with an attribute named embeddings, and returns the updated Elements
- uses langchain to obtain the embeddings (sketched below)
-----
- The PR additionally fixes a JSON de-serialization issue on the
metadata fields.
To test the changes, run `examples/embed/example.py`
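A minimal sketch of the mechanism described above, using LangChain's
`OpenAIEmbeddings` directly rather than the module's encoder classes (the
model choice and API-key handling are illustrative):
```python
from langchain.embeddings import OpenAIEmbeddings
from unstructured.documents.elements import Text

# Illustrative: obtain a vector for each element's text and store it on the
# element as `embeddings`, as the module description above says the encoder does.
elements = [Text("hello"), Text("unstructured makes documents machine readable")]
embedder = OpenAIEmbeddings()  # reads OPENAI_API_KEY from the environment

vectors = embedder.embed_documents([el.text for el in elements])
for element, vector in zip(elements, vectors):
    element.embeddings = vector

print([len(el.embeddings) for el in elements])
```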
Reviewers: I recommend reviewing commit-by-commit or just looking at the
final version of `partition/docx.py` as View File.
This refactor solves a few problems but mostly lays the groundwork to
allow us to refine further aspects such as page-break detection,
list-item detection, and moving python-docx internals upstream to that
library so our work doesn't depend on that domain-knowledge.
This PR adds documentation of the models supported by the `Unstructured`
tool. The changes reflect the tool's capabilities, usage examples, and
the process for integrating custom models (a brief usage sketch follows the
docs-build steps below).
Sections:
- Detailed the basic usage of the `Unstructured` partition with the
model name.
- Provided a list of available models in the `Unstructured` partition.
- Added instructions on using non-default models via three distinct
methods.
- Explained leveraging models from the LayoutParser's model zoo with
`UnstructuredDetectronModel`.
- Guided users in integrating their custom object detection models using
the `UnstructuredObjectDetectionModel` class.
Tested the docs build with:
> cd docs
> pip install -r requirements.txt
> make html
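A brief usage sketch, assuming `partition_pdf` accepts a `model_name` argument
for the `hi_res` strategy as the new docs describe (the example file path is
illustrative):
```python
from unstructured.partition.pdf import partition_pdf

# Illustrative: select a non-default layout-detection model by name.
elements = partition_pdf(
    filename="example-docs/layout-parser-paper-fast.pdf",
    strategy="hi_res",
    model_name="yolox",
)
print({type(el).__name__ for el in elements})
```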
This PR does two things:
1. Adds a test case (and alters the sample doc) for rtf and epub files with
tables
2. Adds the `xls/x` file extensions to the `skip_infer_table_types` default list
---------
Co-authored-by: shreyanid <42684285+shreyanid@users.noreply.github.com>
### Description
Update all other connectors to use the new downstream architecture that
was recently introduced for the s3 connector.
Closes #1313 and #1311
This connector:
- takes a Jira Cloud URL, user email, and API token to authenticate into
Jira Cloud
- ingests:
- either all issues in all projects in a Jira Cloud Organization
- or
- issues in user specified projects, boards
- user specified issues
- processes this kind of data:
- text fields such as issue summary, description, and comments
- dropdown fields such as issue type, status, priority, assignee,
reporter, labels, and components
- other data such as issue id, issue key, project id, information on
subtasks
- notes down attachment URLs, however does not process attachments
- stores each downloaded issue in a txt file, in a predefined template
form (consisting of the data above)
- then processes each downloaded issue document into elements using the
unstructured library
- related to: https://github.com/Unstructured-IO/unstructured/issues/263
To test the changes, make the necessary setups and run the relevant
ingest test scripts.
---------
Co-authored-by: ryannikolaidis <1208590+ryannikolaidis@users.noreply.github.com>
Co-authored-by: ahmetmeleq <ahmetmeleq@users.noreply.github.com>
Updated:
- Added back supported document types for partitioning
- Added more tabs for python code in the API page
- Added a RAG section in Key Concepts
- Added a Common Use case section in overview
### Summary
Closes #1229. Updates `partition_xml` so that the element type is
inferred on each leaf node when `xml_keep_tags=False` instead of
delegating splitting and partitioning to `partition_text`. If
`xml_keep_tags=True`, the file is still treated like a text file and
partitioning is still delegated to `partition_text`.
Also adds the option to pass `text` as an input to `partition_xml` (see the
sketch at the end of this section).
### Testing
Create a `parrots.xml` file that looks like:
```xml
<xml><parrot><name>Conure</name><description>A conure is a very friendly bird.
Conures are feathery and like to dance.</description></parrot></xml>
```
Run:
```python
from unstructured.partition.xml import partition_xml
from unstructured.staging.base import convert_to_dict
elements = partition_xml(filename="parrots.xml")
convert_to_dict(elements)
```
On `main`, the output is the following. Notice how the `<name>` tag
incorrectly gets merged into `<description>` in the first element.
```python
[{'element_id': '7ae4074435df8dfcefcf24a4e6c52026',
  'metadata': {'file_directory': '/home/matt/tmp',
               'filename': 'parrots.xml',
               'filetype': 'application/xml',
               'last_modified': '2023-08-30T14:21:38'},
  'text': 'Conure A conure is a very friendly bird.',
  'type': 'NarrativeText'},
 {'element_id': '859ecb332da6961acd2fb6a0185d1549',
  'metadata': {'file_directory': '/home/matt/tmp',
               'filename': 'parrots.xml',
               'filetype': 'application/xml',
               'last_modified': '2023-08-30T14:21:38'},
  'text': 'Conures are feathery and like to dance.',
  'type': 'NarrativeText'}]
```
On the feature branch, the output is the following, and the tags are
correctly separated.
```python
[{'element_id': '5512218914e4eeacf71a9cd42c373710',
  'metadata': {'file_directory': '/home/matt/tmp',
               'filename': 'parrots.xml',
               'filetype': 'application/xml',
               'last_modified': '2023-08-30T14:21:38'},
  'text': 'Conure',
  'type': 'Title'},
 {'element_id': '113bf8d250c2b1a77c9c2caa4b812f85',
  'metadata': {'file_directory': '/home/matt/tmp',
               'filename': 'parrots.xml',
               'filetype': 'application/xml',
               'last_modified': '2023-08-30T14:21:38'},
  'text': 'A conure is a very friendly bird.\n'
          '\n'
          'Conures are feathery and like to dance.',
  'type': 'NarrativeText'}]
```
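A short sketch of the new `text` input mentioned above, passing the same XML
content as a string instead of a filename:
```python
from unstructured.partition.xml import partition_xml

text = (
    "<xml><parrot><name>Conure</name><description>A conure is a very friendly bird.\n"
    "Conures are feathery and like to dance.</description></parrot></xml>"
)
elements = partition_xml(text=text)
print([el.text for el in elements])
```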
### Summary
An initial pass on smart chunking for RAG applications. Breaks a
document into sections based on the presence of `Title` elements. Also
starts a new section under the following conditions:
- If metadata changes, indicating a change in section or page or a
switch to processing attachments. If `multipage_sections=True`, sections
can span pages. `multipage_sections` defaults to True.
- If the length of the section exceeds `new_after_n_chars` characters.
The default is `1500`. The chunking function does not split individual
elements, so it's possible for a section to exceed that threshold if an
individual element is over `new_after_n_chars` characters, which could
occur with a long `NarrativeText` element.
- Sections under `combine_under_n_chars` characters are combined. The
default is `500`.
### Testing
```python
from unstructured.partition.html import partition_html
from unstructured.chunking.title import chunk_by_title
url = "https://understandingwar.org/backgrounder/russian-offensive-campaign-assessment-august-27-2023-0"
elements = partition_html(url=url)
chunks = chunk_by_title(elements)
for chunk in chunks:
    print(chunk)
    print("\n\n" + "-" * 80)
    input()
```
### Summary
Closes #1018. Enables `partition_email` and `partition_msg` to detect if
an email has PGP encrypted content. Based on the specification in [RFC
2015](https://www.ietf.org/rfc/rfc2015.txt). The test emails are based
on the example email in the spec. If PGP encrypted content is detected, a
warning is emitted and an empty list of elements is returned.
### Testing
```python
from unstructured.partition.email import partition_email
filename = "example-docs/eml/fake-encrypted.eml"
partition_email(filename=filename)
```
```python
from unstructured.partition.msg import partition_msg
filename = "example-docs/fake-encrypted.msg"
partition_msg(filename=filename)
```
### Summary
Closes #1007. Adds a deprecation warning for the `file_filename` kwarg
to `partition`, `partition_via_api`, and `partition_multiple_via_api`.
Also catches a warning in `ebooklib` that we do not want to emit in
`unstructured`.
### Testing
```python
from unstructured.partition.auto import partition

filename = "example-docs/winter-sports.epub"

# Should not emit a warning
with open(filename, "rb") as f:
    elements = partition(file=f, metadata_filename="test.epub")

# Should be test.epub
elements[0].metadata.filename

# Should emit a warning
with open(filename, "rb") as f:
    elements = partition(file=f, file_filename="test.epub")

# Should be test.epub
elements[0].metadata.filename

# Should raise an error
with open(filename, "rb") as f:
    elements = partition(file=f, metadata_filename="test.epub", file_filename="test.epub")
```
### Description
Add delta table connector and test against a delta table generated via
delta.io and uploaded to s3. Shows an example of how to use the
connection options to leverage s3.
I was able to get this to work with s3 by passing in the access and
secret keys as storage options. Even though the s3 bucket being used is
public, it would not work without those.
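A minimal sketch of reading such a table with the `deltalake` library, passing
credentials as storage options (the bucket path, keys, and region are
placeholders):
```python
from deltalake import DeltaTable

# Even for a public bucket, the reader expects explicit credentials here.
table = DeltaTable(
    "s3://<bucket>/<path-to-delta-table>",
    storage_options={
        "AWS_ACCESS_KEY_ID": "<access-key-id>",
        "AWS_SECRET_ACCESS_KEY": "<secret-access-key>",
        "AWS_REGION": "us-east-2",
    },
)
print(table.to_pandas().head())
```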
---------
Co-authored-by: ryannikolaidis <1208590+ryannikolaidis@users.noreply.github.com>
Co-authored-by: rbiseck3 <rbiseck3@users.noreply.github.com>
Documentation Overhaul
- Added documentation hierarchy
- Added options for Bash vs Python for API & Upstream Connectors
- Added Introduction section (Overview, Key Concepts, Getting Started)
- Redid connectors section
- Installation is now broken up (needs further work)
* add the first version of airtable connector
* make imports inline to fail gracefully in case of a missing dependency
* parse tables as csv rather than plain text
* add relevant logic to be able to use --airtable-list-of-paths
* add script for creation of resources for testing, add test script (large) for testing with a large number of tables to validate scroll functionality, update test script (diff) based on the new settings
* fix ingest test names
* add scripts for the large table test
* remove large table test from diff test
* make base and table ids explicit
* add and remove comments
* use -ne instead of !=
* update code based on the recent ingest refactor, update changelog and version
* shellcheck fix
* update comments
* update check-num-rows-and-columns-output error message
Co-authored-by: ryannikolaidis <1208590+ryannikolaidis@users.noreply.github.com>
* update help comments
* update help comments
* update help comments
* update workflows to set auth tokens and to run make install
* add comments on create_scale_test_components
* separate component ids from the test script, add comments to document test component creation
* add LARGE_BASE test, implement LARGE_BASE component creation, replace component id
* shellcheck fixes
* shellcheck fixes
* update docs
* update comment
* bump version
* add wrongly deleted file
* sort columns before saving to process
* Update ingest test fixtures (#1098)
Co-authored-by: ahmetmeleq <ahmetmeleq@users.noreply.github.com>
---------
Co-authored-by: ryannikolaidis <1208590+ryannikolaidis@users.noreply.github.com>
Co-authored-by: ahmetmeleq <ahmetmeleq@users.noreply.github.com>
* feat: add func for checking on EmailAddress type
* feat: add EmailAddress type
* feat: add check for email type
* feat: add test for checking EmailAddress type
* feat: update existing example files with email
* feat: add new example files with email in the text
* fix: apply linter
* feat: update changelog file
* feat: add test for is_email_address function
* don't push
* fix: clean up code
* apply linter
* fix: clean up
* fix: remove file changes
* fix: remove not used files for email address test
* fix: remove not necessary tests
* clean up
* fix: apply linter
* fix: update CHANGELOG
* fix: change version
* fix: fix msg test
* fix: apply linter for tests
* fix: remove spaces
* fix: apply linter with longer line
* feat: update documentation
* fix: remove duplicates
* Update getting_started.rst
---------
Co-authored-by: Matt Robinson <mrobinson@unstructured.io>