mirror of
https://github.com/Unstructured-IO/unstructured.git
synced 2025-07-19 15:06:21 +00:00
11 Commits
252405c780
Dynamic ElementMetadata implementation (#2043)
### Executive Summary

The structure of element metadata is currently static, meaning only predefined fields can appear in the metadata. We would like the flexibility for end-users, at their own discretion, to define and use additional metadata fields that make sense for their particular use-case.

### Concepts

A key concept for dynamic metadata is the _known field_. A known field is one of those explicitly defined on `ElementMetadata`. Each of these has a type and can be specified when _constructing_ a new `ElementMetadata` instance. This is in contrast to an _end-user defined_ (or _ad-hoc_) metadata field, one not known at "compile" time and added at the discretion of an end-user to suit the purposes of their application. An ad-hoc field can only be added by _assignment_ on an already constructed instance.

### End-user ad-hoc metadata field behaviors

An ad-hoc field can be added to an `ElementMetadata` instance by assignment:

```python
>>> metadata = ElementMetadata()
>>> metadata.coefficient = 0.536
```

A field added in this way can be accessed by name:

```python
>>> metadata.coefficient
0.536
```

and that field will appear in the JSON/dict for that instance:

```python
>>> metadata = ElementMetadata()
>>> metadata.coefficient = 0.536
>>> metadata.to_dict()
{"coefficient": 0.536}
```

However, accessing a user-defined value that has _not_ been assigned on that instance raises `AttributeError`:

```python
>>> metadata.coeffcient  # -- misspelled "coefficient" --
AttributeError: 'ElementMetadata' object has no attribute 'coeffcient'
```

This makes "tagging" a metadata item with a value very convenient, but entails the proviso that if an end-user wants to add a metadata field to _some_ elements and not others (sparse population), AND they want to access that field by name on ANY element and receive `None` where it has not been assigned, they will need to use an expression like this:

```python
coefficient = metadata.coefficient if hasattr(metadata, "coefficient") else None
```

### Implementation Notes

- **Ad-hoc metadata fields** are discarded during consolidation (for chunking) because we don't have a consolidation strategy defined for them. We could consider using a default consolidation strategy like `FIRST`, or possibly allow a user to register a strategy (although that gets hairy in non-private and multiple-memory-space situations).
- Ad-hoc metadata fields **cannot start with an underscore**.
- We have no way to distinguish an ad-hoc field from any "noise" fields that might appear in a JSON/dict loaded using `.from_dict()`, so unlike the original implementation (which only loaded known fields), we rehydrate anything we find there.
- No real type-safety is possible on ad-hoc fields, but the type-checker does not complain because the type of all ad-hoc fields is `Any` (which is the best available behavior in my view).
- We may want to consider whether end-users should be able to add ad-hoc fields to "sub" metadata objects too, like `DataSourceMetadata` and conceivably `CoordinatesMetadata` (although I'm not immediately seeing a use-case for the latter).
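The behaviors above can be sketched with a minimal stand-in class. This is illustrative only, not the library's actual implementation; `_known_field_names` and its contents are hypothetical:

```python
from typing import Any


class ElementMetadata:
    """Illustrative stand-in showing known-field vs. ad-hoc-field behavior."""

    _known_field_names = frozenset({"filename", "page_number"})  # hypothetical subset

    def __init__(self, **known: Any) -> None:
        # only known fields may be specified at construction time
        for name, value in known.items():
            if name not in self._known_field_names:
                raise TypeError(f"__init__() got an unexpected keyword argument {name!r}")
            super().__setattr__(name, value)

    def __setattr__(self, name: str, value: Any) -> None:
        # ad-hoc fields cannot start with an underscore
        if name.startswith("_"):
            raise AttributeError(f"ad-hoc field name may not start with underscore: {name!r}")
        super().__setattr__(name, value)

    def to_dict(self) -> dict:
        # both known and ad-hoc fields appear in the dict/JSON form
        return dict(vars(self))
```

Accessing an unassigned ad-hoc field falls through to Python's default attribute lookup, which raises `AttributeError` exactly as described above.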
51d07b6434
fix: flaky chunk metadata (#1947)
**Executive Summary.** When the elements in a _section_ are combined into a _chunk_, the metadata in each of the elements is _consolidated_ into a single `ElementMetadata` instance. There are two main problems with the current implementation:

1. The current algorithm simply uses the metadata of the first element as the metadata for the chunk. This produces:
   - **empty chunk metadata** when the first element has no metadata, such as a `PageBreak("")`
   - **missing chunk metadata** when the first element contains only partial metadata, such as a `Header()` or `Footer()`
   - **misleading metadata** when the first element contains values applicable only to that element, such as `category_depth`, `coordinates` (bounding-box), `header_footer_type`, or `parent_id`
2. List metadata such as `emphasized_text_contents`, `emphasized_text_tags`, `link_texts`, and `link_urls` is only combined when it is unique within the combined list. These lists are "unzipped" pairs: for example, the first `link_texts` item corresponds to the first `link_urls` value. When an item is removed from one (because it matches a prior entry) but not the other (say, the same text "here" but a different URL), the positional correspondence is broken and downstream processing will at best be wrong, at worst raise an exception.

### Technical Discussion

Element metadata cannot be determined in the general case simply by sampling that of the first element. At the same time, a simple union of all values is also not sufficient. To effectively consolidate the current variety of metadata fields we need five distinct strategies, selecting which to apply to each field based on that field's provenance and other characteristics. The five strategies are:

- `FIRST` - Select the first non-`None` value across all the elements. Several fields are determined by the document source (`filename`, `file_directory`, etc.) and will not change within the output of a single partitioning run. They might not appear in every element, but they will be the same whenever they do appear. This strategy takes the first one that appears, if any, as a proxy for the value for the entire chunk.
- `LIST` - Consolidate the four list fields like `emphasized_text_contents` and `link_urls` by concatenating them in element order (no set semantics apply). All values from `elements[n]` appear before those from `elements[n+1]`, and existing order is preserved.
- `LIST_UNIQUE` - Combine only unique items across the (list) values of the elements, preserving the order in which a unique item first appeared.
- `REGEX` - Regex metadata has its own rules, including adjusting the `start` and `end` offset of each match based on its new position in the concatenated text.
- `DROP` - Not all metadata can or should appear in a chunk. For example, a chunk cannot be guaranteed to have a single `category_depth` or `parent_id`.

Other strategies such as `COORDINATES` could be added to consolidate the bounding box of the chunk from the coordinates of its elements, roughly `min(lefts)`, `max(rights)`, etc. Others could be `LAST`, `MAJORITY`, or `SUM`, depending on how metadata evolves.

The proposed strategy assignments are these:

- `attached_to_filename`: FIRST
- `category_depth`: DROP
- `coordinates`: DROP
- `data_source`: FIRST
- `detection_class_prob`: DROP  # -- ? confirm --
- `detection_origin`: DROP  # -- ? confirm --
- `emphasized_text_contents`: LIST
- `emphasized_text_tags`: LIST
- `file_directory`: FIRST
- `filename`: FIRST
- `filetype`: FIRST
- `header_footer_type`: DROP
- `image_path`: DROP
- `is_continuation`: DROP  # -- not expected, added by chunking, not before --
- `languages`: LIST_UNIQUE
- `last_modified`: FIRST
- `link_texts`: LIST
- `link_urls`: LIST
- `links`: DROP  # -- deprecated field --
- `max_characters`: DROP  # -- unused in code, probably remove from ElementMetadata --
- `page_name`: FIRST
- `page_number`: FIRST
- `parent_id`: DROP
- `regex_metadata`: REGEX
- `section`: FIRST  # -- section unconditionally breaks on new section --
- `sent_from`: FIRST
- `sent_to`: FIRST
- `subject`: FIRST
- `text_as_html`: DROP  # -- not expected, only occurs in TableSection --
- `url`: FIRST

**Assumptions:**

- Each .eml file is partitioned->chunked separately (not in batches), therefore `sent_from`, `sent_to`, and `subject` will not change within a section.

### Implementation

Implementation of this behavior requires two steps:

1. **Collect** all non-`None` values from all elements, each in a sequence by field-name. Fields not populated in any of the elements do not appear in the collection.

   ```python
   all_meta = {
       "filename": ["memo.docx", "memo.docx"],
       "link_texts": [["here", "here"], ["and here"]],
       "parent_id": ["f273a7cb", "808b4ced"],
   }
   ```

2. **Apply** the specified strategy to each item in the overall collection to produce the consolidated chunk meta (see implementation).

### Factoring

For the following reasons, the implementation of metadata consolidation is extracted from its current location in `chunk_by_title()` to a handful of collaborating methods in `_TextSection`:

- The current implementation of metadata consolidation "inline" in `chunk_by_title()` already has too many moving pieces to be understood without extended study. Adding strategies to it would make that worse.
- `_TextSection` is the only section type where metadata is consolidated (the other two types always have exactly one element, so already exactly one metadata).
- `_TextSection` is already the expert on all the information required to consolidate metadata, in particular the elements that make up the section and their text.

Some other problems were also fixed in that transition, such as mutation of elements during the consolidation process.

### Technical Risk: adding a new `ElementMetadata` field breaks metadata

If each metadata field requires a strategy assignment to be consolidated, and a developer adds a new `ElementMetadata` field without adding a corresponding strategy mapping, metadata consolidation could break or produce incorrect results. This risk can be mitigated in multiple ways:

1. Add a test that verifies a strategy is defined for each field (recommended).
2. Define a default strategy: either `DROP` or `FIRST` for scalar types, `LIST` for list types.
3. Raise an exception when an unknown metadata field is encountered.

This PR implements option 1, such that a developer will be notified before merge if they add a new metadata field but do not define a strategy for it.

### Other Considerations

- If end-users can in future add arbitrary metadata fields _before_ chunking, then we'll need to define metadata-consolidation behavior for such fields. Depending on how we implement user-defined metadata fields we might:
  - Require explicit definition of a new metadata field before use, perhaps with a method like `ElementMetadata.add_custom_field()` which requires a consolidation strategy to be defined (and/or has a default value).
  - Have a default strategy, perhaps `DROP` or `FIRST`, or `LIST` if the field is of type `list`.

### Further Context

Metadata is only consolidated for `TextSection` because the other two section types (`TableSection` and `NonTextSection`) can only contain a single element.
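The recommended mitigation, a test that fails when a field lacks a strategy, can be sketched as follows. The dataclass and the `CONSOLIDATION_STRATEGIES` mapping here are abbreviated stand-ins, not the library's actual definitions:

```python
import dataclasses
from typing import List, Optional


@dataclasses.dataclass
class ElementMetadata:  # abbreviated stand-in for the real dataclass
    filename: Optional[str] = None
    page_number: Optional[int] = None
    link_urls: Optional[List[str]] = None


# one strategy per field; a new field without an entry fails the test below
CONSOLIDATION_STRATEGIES = {
    "filename": "FIRST",
    "page_number": "FIRST",
    "link_urls": "LIST",
}


def test_every_metadata_field_has_a_consolidation_strategy():
    field_names = {f.name for f in dataclasses.fields(ElementMetadata)}
    assert field_names == set(CONSOLIDATION_STRATEGIES), (
        "add a consolidation strategy for each new ElementMetadata field"
    )
```

Because `dataclasses.fields()` enumerates the class definition itself, the test cannot drift out of sync with the metadata schema.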
---

## Further discussion on consolidation strategy by field

### document-static

These fields are very likely to be the same for all elements in a single document:

- `attached_to_filename`
- `data_source`
- `file_directory`
- `filename`
- `filetype`
- `last_modified`
- `sent_from`
- `sent_to`
- `subject`
- `url`

*Consolidation strategy:* `FIRST` - use the first one found, if any.

### section-static

These fields are very likely to be the same for all elements in a single section, which is the scope we really care about for metadata consolidation:

- `section` - an EPUB document-section unconditionally starts a new section.

*Consolidation strategy:* `FIRST` - use the first one found, if any.

### consolidated list-items

These `List` fields are consolidated by concatenating the lists from each element that has one:

- `emphasized_text_contents`
- `emphasized_text_tags`
- `link_texts`
- `link_urls`
- `regex_metadata` - special case, this one gets its indexes adjusted too.

*Consolidation strategy:* `LIST` - concatenate lists across elements.

### dynamic

These fields are likely to hold unique data for each element:

- `category_depth`
- `coordinates`
- `image_path`
- `parent_id`

*Consolidation strategy:*

- `DROP`, as likely misleading.
- A `COORDINATES` strategy could be added to compute the bounding box from all bounding boxes.
- Consider allowing them if they are all the same, perhaps with an `ALL` strategy.

### slow-changing

These fields are somewhere in between, likely to be common between multiple elements but varied within a document:

- `header_footer_type` - *strategy:* drop, as not consolidatable
- `languages` - *strategy:* take first occurrence
- `page_name` - *strategy:* take first occurrence
- `page_number` - *strategy:* take first occurrence; all will be the same when `multipage_sections` is `False`. Worst-case semantics are "this chunk began on this page".

### N/A

These field types do not figure in metadata consolidation:

- `detection_class_prob` - I'm thinking this is for debug and should not appear in chunks, but need confirmation.
- `detection_origin` - for debug only
- `is_continuation` - is _produced_ by chunking, never by partitioning (not in our code anyway).
- `links` (deprecated, probably should be dropped)
- `max_characters` - is unused as far as I can tell; unreferenced in source code. Should be removed from `ElementMetadata` as far as I can tell.
- `text_as_html` - only appears in a `Table` element, each of which appears in its own section, so needs no consolidation. Never appears in `TextSection`.

*Consolidation strategy:* `DROP` any that appear (several never will).
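The collect-then-apply mechanism described above can be sketched roughly as follows. The strategy table is abbreviated and the function is illustrative, not the library's implementation:

```python
from collections import defaultdict
from typing import Any, Dict, List

# abbreviated strategy table; unknown fields default to DROP here
STRATEGIES = {
    "filename": "FIRST",
    "link_texts": "LIST",
    "languages": "LIST_UNIQUE",
    "parent_id": "DROP",
}


def consolidate(metas: List[Dict[str, Any]]) -> Dict[str, Any]:
    # Step 1: collect all non-None values, a sequence per field-name
    collected: Dict[str, list] = defaultdict(list)
    for meta in metas:
        for field, value in meta.items():
            if value is not None:
                collected[field].append(value)

    # Step 2: apply each field's strategy to its collected values
    out: Dict[str, Any] = {}
    for field, values in collected.items():
        strategy = STRATEGIES.get(field, "DROP")
        if strategy == "FIRST":
            out[field] = values[0]
        elif strategy == "LIST":
            out[field] = [v for vs in values for v in vs]
        elif strategy == "LIST_UNIQUE":
            out[field] = list(dict.fromkeys(v for vs in values for v in vs))
        # DROP: omit the field from the chunk metadata entirely
    return out
```

Note that `LIST` keeps duplicates so that paired fields like `link_texts`/`link_urls` stay positionally aligned, while `LIST_UNIQUE` deduplicates in first-seen order.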
d9c2516364
fix: chunks break on regex-meta changes and regex-meta start/stop not adjusted (#1779)
**Executive Summary.** Introducing strict type-checking as preparation for adding the chunk-overlap feature revealed a type mismatch for regex-metadata between the chunking tests and the (authoritative) `ElementMetadata` definition. The implementation of the regex-metadata aspects of chunking passed the tests but did not produce the appropriate behaviors in production, where the actual data structure was different. This PR fixes two bugs:

1. **Over-chunking.** The presence of `regex_metadata` in an element was incorrectly being interpreted as a semantic boundary, leading to such elements being isolated in their own chunks.
2. **Discarded regex-metadata.** `regex_metadata` present on the second or later elements in a section (chunk) was discarded.

**Technical Summary**

The type of `ElementMetadata.regex_metadata` is `Dict[str, List[RegexMetadata]]`. `RegexMetadata` is a `TypedDict` like `{"text": "this matched", "start": 7, "end": 19}`. Multiple regexes can be specified, each with a name like "mail-stop", "version", etc. Each of those may produce its own set of matches, like:

```python
>>> element.regex_metadata
{
    "mail-stop": [{"text": "MS-107", "start": 18, "end": 24}],
    "version": [
        {"text": "current: v1.7.2", "start": 7, "end": 21},
        {"text": "supersedes: v1.7.0", "start": 22, "end": 40},
    ],
}
```

*Forensic analysis*

- The regex-metadata feature was added by Matt Robinson on 06/16/2023 (commit 4ea71683). The `regex_metadata` data structure is the same as when it was added.
- The chunk-by-title feature was added by Matt Robinson on 08/29/2023 (commit f6a745a7). The mistaken regex-metadata data structure in the tests is present in that commit. It looks to me like a mis-remembering of the regex-metadata data structure and insufficient type-checking rigor (type-checker strictness level set too low) to warn of the mistake.

**Over-chunking Behavior**

The over-chunking looked like this. Chunking three elements with regex metadata should combine them into a single chunk (a `CompositeElement` object), subject to maximum size rules (default 500 chars):

```python
elements: List[Element] = [
    Title(
        "Lorem Ipsum",
        metadata=ElementMetadata(
            regex_metadata={"ipsum": [RegexMetadata(text="Ipsum", start=6, end=11)]}
        ),
    ),
    Text(
        "Lorem ipsum dolor sit amet consectetur adipiscing elit.",
        metadata=ElementMetadata(
            regex_metadata={"dolor": [RegexMetadata(text="dolor", start=12, end=17)]}
        ),
    ),
    Text(
        "In rhoncus ipsum sed lectus porta volutpat.",
        metadata=ElementMetadata(
            regex_metadata={"ipsum": [RegexMetadata(text="ipsum", start=11, end=16)]}
        ),
    ),
]

chunks = chunk_by_title(elements)

assert chunks == [
    CompositeElement(
        "Lorem Ipsum\n\nLorem ipsum dolor sit amet consectetur adipiscing elit.\n\nIn rhoncus"
        " ipsum sed lectus porta volutpat."
    )
]
```

The observed behavior looked like this:

```python
chunks => [
    CompositeElement('Lorem Ipsum')
    CompositeElement('Lorem ipsum dolor sit amet consectetur adipiscing elit.')
    CompositeElement('In rhoncus ipsum sed lectus porta volutpat.')
]
```

The fix changed the approach from breaking on any metadata field not in a specified group (`regex_metadata` was missing from this group) to only breaking on specified fields (whitelisting instead of blacklisting). This avoids over-chunking every time we add a new metadata field, and is also simpler and easier to understand. This change in approach is discussed in more detail in #1790.
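The whitelisting idea can be sketched as a small predicate. The field names in `BOUNDARY_FIELDS` are hypothetical examples, not the actual whitelist used by the library:

```python
from typing import Any, Dict

# only these fields may trigger a semantic boundary (hypothetical whitelist);
# any other field, including regex_metadata, is ignored for boundary purposes
BOUNDARY_FIELDS = ("section", "page_number")


def starts_new_boundary(prev_meta: Dict[str, Any], meta: Dict[str, Any]) -> bool:
    """True when any whitelisted field changes between consecutive elements."""
    return any(prev_meta.get(f) != meta.get(f) for f in BOUNDARY_FIELDS)
```

With a blacklist, every newly added metadata field breaks chunks by default until someone remembers to exclude it; with a whitelist, new fields are boundary-neutral by default.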
**Dropping regex-metadata Behavior**

Chunking this section:

```python
elements: List[Element] = [
    Title(
        "Lorem Ipsum",
        metadata=ElementMetadata(
            regex_metadata={"ipsum": [RegexMetadata(text="Ipsum", start=6, end=11)]}
        ),
    ),
    Text(
        "Lorem ipsum dolor sit amet consectetur adipiscing elit.",
        metadata=ElementMetadata(
            regex_metadata={
                "dolor": [RegexMetadata(text="dolor", start=12, end=17)],
                "ipsum": [RegexMetadata(text="ipsum", start=6, end=11)],
            }
        ),
    ),
    Text(
        "In rhoncus ipsum sed lectus porta volutpat.",
        metadata=ElementMetadata(
            regex_metadata={"ipsum": [RegexMetadata(text="ipsum", start=11, end=16)]}
        ),
    ),
]
```

..should produce this `regex_metadata` on the single produced chunk:

```python
assert chunk == CompositeElement(
    "Lorem Ipsum\n\nLorem ipsum dolor sit amet consectetur adipiscing elit.\n\nIn rhoncus"
    " ipsum sed lectus porta volutpat."
)
assert chunk.metadata.regex_metadata == {
    "dolor": [RegexMetadata(text="dolor", start=25, end=30)],
    "ipsum": [
        RegexMetadata(text="Ipsum", start=6, end=11),
        RegexMetadata(text="ipsum", start=19, end=24),
        RegexMetadata(text="ipsum", start=81, end=86),
    ],
}
```

but instead produced this:

```python
regex_metadata == {"ipsum": [{"text": "Ipsum", "start": 6, "end": 11}]}
```

which is the regex-metadata from the first element only. The fix was to remove the consolidation-and-adjustment process from inside the "list-attribute-processing" loop (because regex-metadata is not a list) and process regex metadata separately.
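The offset-adjustment part of the fix can be sketched as follows. The function name and signature are illustrative; the key point is that each element's matches are shifted by the length of the text that precedes that element in the chunk, including the `"\n\n"` separators `chunk_by_title` inserts:

```python
from typing import Dict, List, Tuple, TypedDict


class RegexMetadata(TypedDict):
    text: str
    start: int
    end: int


def consolidate_regex_metadata(
    elements: List[Tuple[str, Dict[str, List[RegexMetadata]]]],
    separator: str = "\n\n",
) -> Dict[str, List[RegexMetadata]]:
    """Merge per-element regex matches, shifting offsets into chunk coordinates."""
    out: Dict[str, List[RegexMetadata]] = {}
    offset = 0
    for text, regex_meta in elements:
        for name, matches in regex_meta.items():
            out.setdefault(name, []).extend(
                RegexMetadata(text=m["text"], start=m["start"] + offset, end=m["end"] + offset)
                for m in matches
            )
        # the next element's text begins after this text plus the separator
        offset += len(text) + len(separator)
    return out
```

Running this over the three elements above reproduces the expected offsets in the assertion (e.g. `dolor` at 25-30 and the third `ipsum` at 81-86).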
f34c277bca
fix: add backwards compatibility to ElementMetadata (#1526)
Fixes https://github.com/Unstructured-IO/unstructured-api/issues/237

The problem: The `ElementMetadata` class was not able to ignore fields that it didn't know about. This surfaced in `partition_via_api` when the hosted API schema is newer than the local `unstructured` version. In `ElementMetadata.from_json()` we get errors such as `TypeError: __init__() got an unexpected keyword argument 'parent_id'`.

The fix: The `from_json` methods for these dataclasses should drop any unexpected fields before calling `__init__`.

To verify, this shouldn't throw an error:

```python
import json

from unstructured.staging.base import elements_from_json

test_api_result = json.dumps([
    {
        "type": "Title",
        "element_id": "2f7cc75f6467bba468022c4c2875335e",
        "metadata": {
            "filename": "layout-parser-paper.pdf",
            "filetype": "application/pdf",
            "page_number": 1,
            "new_field": "foo",
        },
        "text": "LayoutParser: A Unified Toolkit for Deep Learning Based Document Image Analysis",
    }
])

elements = elements_from_json(text=test_api_result)
print(elements)
```
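The field-dropping fix can be sketched against a stand-in dataclass. `dataclasses.fields()` gives the set of names the local version knows, so anything else is silently discarded before `__init__` runs (the class below is illustrative, not the library's real definition):

```python
import dataclasses
from typing import Optional


@dataclasses.dataclass
class ElementMetadata:  # illustrative stand-in for the real dataclass
    filename: Optional[str] = None
    filetype: Optional[str] = None
    page_number: Optional[int] = None

    @classmethod
    def from_dict(cls, raw: dict) -> "ElementMetadata":
        # drop any fields this (possibly older) version doesn't know about,
        # so a newer API schema can't raise TypeError in __init__
        known = {f.name for f in dataclasses.fields(cls)}
        return cls(**{k: v for k, v in raw.items() if k in known})
```

A `from_json` method would simply `json.loads` and delegate to this.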
fa5a3dbd81
feat: unique_element_ids kwarg for UUID elements (#1085)
* added kwarg for unique elements
* test for unique ids
* update docs
* changelog and version
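The intent of the kwarg can be sketched as below. The function name and the deterministic-hash scheme are illustrative assumptions; the library's actual id computation may differ:

```python
import hashlib
import uuid


def make_element_id(text: str, unique_element_ids: bool = False) -> str:
    """Random UUID when unique ids are requested, else a deterministic text hash.

    With hash-based ids, two elements with identical text get identical ids;
    unique_element_ids=True guarantees distinct ids via uuid4.
    """
    if unique_element_ids:
        return str(uuid.uuid4())
    return hashlib.sha256(text.encode()).hexdigest()[:32]
```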
ef5091f276
feat: added UUID option for element_id arg in element constructor (#1076)
* added UUID option for element_id arg in element constructor and updated unit tests
* updated CHANGELOG and bumped to dev2
24ebd0fa4e
chore: Move coordinate details from Element model to a metadata model (#827)
![]() |
db4c5dfdf7
|
feat: coordinate systems (#774)
Added the CoordinateSystem class for tracking the system in which coordinates are represented, and changing the system if desired.
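One way such a conversion can work is sketched below for systems that differ in scale and vertical orientation (e.g. pixel space with a top-left origin vs. PDF point space with a bottom-left origin). The class shape and `convert_to` method here are assumptions for illustration, not the library's actual API:

```python
from dataclasses import dataclass
from typing import Tuple


@dataclass
class CoordinateSystem:
    width: float
    height: float
    origin_top_left: bool  # True for e.g. pixel space, False for PDF point space

    def convert_to(self, other: "CoordinateSystem", x: float, y: float) -> Tuple[float, float]:
        """Convert a point in this system to the equivalent point in `other`."""
        # normalize to a bottom-left origin
        if self.origin_top_left:
            y = self.height - y
        # scale into the target system's dimensions
        new_x = (x / self.width) * other.width
        new_y = (y / self.height) * other.height
        # re-orient for the target system
        if other.origin_top_left:
            new_y = other.height - new_y
        return new_x, new_y
```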
5eb1466acc
Resolve various style issues to improve overall code quality (#282)
* Apply import sorting (`ruff . --select I --fix`)
* Remove unnecessary open mode parameter (`ruff . --select UP015 --fix`)
* Use f-string formatting rather than `.format`
* Remove extraneous parentheses; also use `""` instead of `str()`
* Resolve missing trailing commas (`ruff . --select COM --fix`)
* Rewrite `list()` and `dict()` calls using literals (`ruff . --select C4 --fix`)
* Add `()` to `pytest.fixture`, use tuples for parametrize, etc. (`ruff . --select PT --fix`)
* Simplify code: merge conditionals, context managers (`ruff . --select SIM --fix`)
* Import without unnecessary alias (`ruff . --select PLR0402 --fix`)
* Apply formatting via black
* Rewrite a ValueError somewhat (slightly unrelated to the rest of the PR)
* Apply formatting to tests via black
* Update expected exception message to match 0d81564
* Satisfy E501 line-too-long in a test
* Update changelog & version
* Add ruff to `make tidy` and test deps
* Run `make tidy`
* Update changelog & version
* Update changelog & version
* Add ruff to the `check` target. Doing so required me to also fix some non-auto-fixable issues. Two of them I fixed with a `noqa: SIM115`, but especially the one in `__init__` may need some attention. That said, that refactor is out of scope of this PR.
1d68bb2482
feat: apply method to apply cleaning bricks to elements (#102)
* add apply method to apply cleaners to elements
* bump version
* add check for string output
* documentation for the apply method
* change interface to `*cleaners`
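The `*cleaners` interface and the string-output check can be sketched like this. The `Text` class here is a minimal stand-in for the library's element class:

```python
from typing import Callable


class Text:  # minimal stand-in for the library's Text element
    def __init__(self, text: str) -> None:
        self.text = text

    def apply(self, *cleaners: Callable[[str], str]) -> None:
        """Apply each cleaning brick in order, replacing this element's text."""
        cleaned = self.text
        for cleaner in cleaners:
            cleaned = cleaner(cleaned)
        # the check added in this PR: cleaners must yield a string
        if not isinstance(cleaned, str):
            raise ValueError("Cleaner produced a non-string output.")
        self.text = cleaned
```

Usage is simply `element.apply(cleaner_a, cleaner_b)`, with cleaners composed left to right.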
5f40c78f25
Initial Release