7 Commits

Author SHA1 Message Date
Christine Straub
0eb461acc2
refactor: restructure PDF/Image example document organization (#3410)
This PR aims to improve the organization and readability of our example
documents used in unit tests, specifically focusing on PDF and image
files.

### Summary
- Created two new subdirectories in the `example-docs` folder:
  - `pdf/`: for all PDF example files
  - `img/`: for all image example files
- Moved relevant PDF files from `example-docs/` to `example-docs/pdf/`
- Moved relevant image files from `example-docs/` to `example-docs/img/`
- Updated file paths in affected unit & ingest tests to reflect the new directory structure (see the path sketch below)
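
For illustration, a test path update of the kind this reorganization requires (the specific file names below are placeholders, not files from the repo):

```python
from pathlib import Path

EXAMPLE_DOCS_DIR = Path("example-docs")

# Before: PDF and image fixtures sat at the top level of example-docs/
# old_pdf = EXAMPLE_DOCS_DIR / "sample.pdf"

# After: they live in dedicated subdirectories
pdf_path = EXAMPLE_DOCS_DIR / "pdf" / "sample.pdf"  # placeholder file name
img_path = EXAMPLE_DOCS_DIR / "img" / "sample.jpg"  # placeholder file name
```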

### Testing
All affected unit & ingest tests have been updated and verified to work with the new file structure.

### Notes
Other file types (e.g., office documents, HTML files) remain in the root
of `example-docs/` for now.

### Next Steps
Consider similar reorganization for other file types if this structure
proves to be beneficial.

---------

Co-authored-by: ryannikolaidis <1208590+ryannikolaidis@users.noreply.github.com>
Co-authored-by: christinestraub <christinestraub@users.noreply.github.com>
2024-07-18 22:21:32 +00:00
Pluto
5d89b41b1a
Fix not counting false negatives and false positives in table metrics (#3300)
This pull request fixes the table-counting metric for three cases:
- False negatives: when a table exists in the ground truth but none of the
predicted tables matches it, that table should count as 0 and the file
should not be skipped entirely (previously the result was np.NaN).
- False positives: when a predicted table does not match any ground truth
table, it should be counted as 0; previously it was skipped during
processing (matched_indices == -1).
- The file should be skipped entirely only when there are no tables in
either the ground truth or the prediction.

In short, the previous metric calculation did not account for object detection (OD) mistakes.
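
A minimal sketch of the corrected counting described above (function and variable names here are illustrative, not the library's actual code):

```python
from difflib import SequenceMatcher

import numpy as np


def similarity(pred, gt) -> float:
    """Placeholder cell-content similarity; the real table metric is more involved."""
    return SequenceMatcher(None, str(pred), str(gt)).ratio()


def table_detection_scores(ground_truth_tables, predicted_tables, matched_indices):
    """matched_indices[i] is the ground truth index matched to prediction i, or -1."""
    # Skip the file only when there are no tables on either side.
    if not ground_truth_tables and not predicted_tables:
        return np.nan

    scores = []
    matched_gt = set()
    for pred_idx, gt_idx in enumerate(matched_indices):
        if gt_idx == -1:
            scores.append(0.0)  # false positive: counted as 0 instead of being skipped
        else:
            matched_gt.add(gt_idx)
            scores.append(similarity(predicted_tables[pred_idx], ground_truth_tables[gt_idx]))

    # False negatives: ground truth tables that nothing matched also count as 0.
    scores.extend([0.0] * (len(ground_truth_tables) - len(matched_gt)))

    return float(np.mean(scores))
```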
2024-07-02 10:07:24 +00:00
Pawel Kmiecik
29e64eb281
feat: table evaluations for fixed html table generation (#3196)
Update to the evaluation script to handle correct HTML syntax for
tables.
See https://github.com/Unstructured-IO/unstructured-inference/pull/355
for details.

This change:
- modifies the transformation of HTML tables into the evaluation's internal
`cells` format
- fixes the indexing of the output (internal-format cells) when HTML cells
use spans (see the sketch below)
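
For intuition, a minimal sketch of span-aware index assignment of the kind this fix addresses (the real internal `cells` format and helper names may differ):

```python
def html_rows_to_cells(rows):
    """Convert parsed HTML table rows into flat cell records with explicit
    row/column indices, accounting for rowspan/colspan.

    `rows` is a list of rows, each a list of (text, rowspan, colspan) tuples,
    e.g. as produced by an HTML parser upstream.
    """
    cells = []
    occupied = set()  # (row, col) positions already claimed by an earlier span
    for row_idx, row in enumerate(rows):
        col_idx = 0
        for text, rowspan, colspan in row:
            # Skip grid positions covered by a rowspan from a previous row.
            while (row_idx, col_idx) in occupied:
                col_idx += 1
            cells.append({"row": row_idx, "col": col_idx,
                          "rowspan": rowspan, "colspan": colspan,
                          "content": text})
            for r in range(row_idx, row_idx + rowspan):
                for c in range(col_idx, col_idx + colspan):
                    occupied.add((r, c))
            col_idx += colspan
    return cells


# A 2x3 table where the first cell spans two rows:
# | A (rowspan=2) | B | C |
# |               | D | E |
rows = [[("A", 2, 1), ("B", 1, 1), ("C", 1, 1)],
        [("D", 1, 1), ("E", 1, 1)]]
print(html_rows_to_cells(rows))
```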
2024-06-14 09:03:27 +00:00
Pluto
4397dd6a10
Add calculation of table related metrics based on table_as_cells (#2898)
This pull request adds metrics that are calculated from `table_as_cells`
instead of `text_as_html`. This change is required for comprehensive metrics
calculation: previously, every predicted colspan or rowspan was treated as an
incorrect prediction, even when the prediction was correct.
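
For intuition, a toy cell-level comparison of the kind this change enables; the field names and the real `table_as_cells` schema may differ:

```python
# Hypothetical cell records; the real `table_as_cells` schema in the library may differ.
predicted = [
    {"row": 0, "col": 0, "rowspan": 2, "colspan": 1, "content": "Region"},
    {"row": 0, "col": 1, "rowspan": 1, "colspan": 1, "content": "Q1"},
]
ground_truth = [
    {"row": 0, "col": 0, "rowspan": 2, "colspan": 1, "content": "Region"},
    {"row": 0, "col": 1, "rowspan": 1, "colspan": 1, "content": "Q1"},
]


def cell_level_accuracy(predicted, ground_truth):
    """Fraction of ground truth cells reproduced exactly (position, span, and text)."""
    gt_by_position = {(c["row"], c["col"]): c for c in ground_truth}
    hits = sum(1 for cell in predicted
               if gt_by_position.get((cell["row"], cell["col"])) == cell)
    return hits / len(ground_truth) if ground_truth else float("nan")


# The correct rowspan on "Region" is credited instead of being penalized: 1.0
print(cell_level_accuracy(predicted, ground_truth))
```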

This change has to be merged after
https://github.com/Unstructured-IO/unstructured/pull/2892, which introduces
the `table_as_cells` field.
2024-05-07 13:57:38 +00:00
Yao You
911f9983c1
feat: redefine table level acc (#2620)
This PR redefines the `table_level_acc` metric as follows:
- for each predicted table, use the sequence matching ratio as its accuracy
- as a prerequisite for sequence matching, sort the table cells by row then
column for both the predicted and ground truth tables so they are ordered
the same way
- average the accuracies over all predicted tables
- any prediction without a matching ground truth table (false positive)
decreases the score
- a prediction that splits a ground truth table into smaller tables also
receives a low score, with perfectly equal splits scoring lowest

This new definition makes the metric a per-file value between 0 and 1. It
replaces the existing definition, where the metric was the number of
predicted tables with a match in the ground truth divided by the number of
ground truth tables. That metric actually gives higher values to predictions
that split tables and can exceed 1. The new definition favors predictions
that do not split ground truth tables.
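
A minimal sketch of the redefined metric under simplifying assumptions (cell keys and the matching step here are illustrative, not the library's implementation):

```python
from difflib import SequenceMatcher


def table_level_acc(predicted_tables, ground_truth_tables, matches):
    """Each table is a list of {"row", "col", "content"} cell dicts; matches[i] is
    the ground truth index matched to predicted table i, or None for a false positive."""

    def as_sequence(table):
        # Sort cells by row then column so both sides are ordered identically.
        return [c["content"] for c in sorted(table, key=lambda c: (c["row"], c["col"]))]

    scores = []
    for pred, gt_idx in zip(predicted_tables, matches):
        if gt_idx is None:
            scores.append(0.0)  # false positives pull the average down
        else:
            gt = as_sequence(ground_truth_tables[gt_idx])
            scores.append(SequenceMatcher(None, as_sequence(pred), gt).ratio())
    return sum(scores) / len(scores) if scores else 0.0  # a value in [0, 1] per file
```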
2024-03-08 17:00:57 +00:00
Pawel Kmiecik
e35306cfc7
fix: table evaluation metrics fix calculations when no tables found in predictions (#2619)
The current way table structure metrics are computed does not cover cases
where no tables are found in the predictions and all stats are empty.

This PR fixes this and adds some hardening tests for the table eval
processor.
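
For illustration, one way to guard the empty-prediction case (a sketch with illustrative names, not the processor's actual code):

```python
import math


def aggregate_table_scores(per_table_scores, ground_truth_count):
    """Aggregate per-table scores while covering the "no tables found" case."""
    if not per_table_scores:
        # Nothing predicted: 0.0 when tables were expected, NaN (skip) when the
        # ground truth has none either.
        return 0.0 if ground_truth_count > 0 else math.nan
    return sum(per_table_scores) / len(per_table_scores)


print(aggregate_table_scores([], ground_truth_count=2))  # 0.0 instead of empty stats
print(aggregate_table_scores([], ground_truth_count=0))  # nan: file is skipped
```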

---------

Co-authored-by: Yao You <theyaoyou@gmail.com>
2024-03-07 18:39:19 +00:00
Yao You
42f8cf1997
chore: add metric helper for table structure eval (#1877)
- add a helper to run inference over an image or PDF of a table and compare
it against a ground truth CSV file
- this metric generates a similarity score between 0 and 1, where 1 is a
perfect match and 0 is no match at all (see the usage sketch below)
- add example docs for testing
- NOTE: this metric is only relevant to table structure detection.
Therefore the input should be just the table area in an image/PDF file;
we are not evaluating table element detection with this metric
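
For intuition, a toy version of the comparison step (the actual helper also runs the table structure model over the image or PDF; names here are illustrative):

```python
import csv
from difflib import SequenceMatcher
from pathlib import Path


def table_structure_similarity(predicted_rows, ground_truth_csv: Path) -> float:
    """Compare predicted table cells against a ground truth CSV; returns a score
    in [0, 1], where 1 is a perfect match. Illustrative only: the repository's
    helper additionally runs inference over the table image/PDF first."""
    with ground_truth_csv.open(newline="") as f:
        gt_cells = [cell for row in csv.reader(f) for cell in row]
    pred_cells = [cell for row in predicted_rows for cell in row]
    return SequenceMatcher(None, pred_cells, gt_cells).ratio()
```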
2023-10-27 13:23:44 -05:00