unstructured/examples/weaviate/weaviate.ipynb
Yuming Long ce40cdc55f
Chore (refactor): support table extraction with pre-computed ocr data (#1801)
### Summary

Table OCR refactor: move the OCR step for the table model from the
inference repo to the unst repo.
* Before this PR, the table model extracted OCR tokens (text plus
bounding boxes) and filled the tokens into the table structure in the
inference repo. This meant we ran an additional OCR pass just for
tables.
* After this PR, we reuse the OCR data from the entire-page OCR and
pass those OCR tokens to the inference repo, which means we only run
OCR once for the entire document (a rough sketch of the hand-off
follows below).
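
As a rough illustration of the new flow, the core idea is to filter the
page-level OCR tokens down to the ones that fall inside a table region
and hand only those to the table model. A minimal sketch (the helper
name and the token format are assumptions for this sketch, not the
actual implementation):

```python
# Illustrative sketch only -- the names and token format here are
# hypothetical, not the actual unstructured / inference API.
from typing import TypedDict


class OCRToken(TypedDict):
    text: str
    bbox: tuple[float, float, float, float]  # (x1, y1, x2, y2) page coords


def tokens_in_region(
    page_tokens: list[OCRToken],
    table_bbox: tuple[float, float, float, float],
) -> list[OCRToken]:
    """Select page-level OCR tokens whose centers fall inside a table
    region, so the table model can reuse them instead of re-running OCR."""
    tx1, ty1, tx2, ty2 = table_bbox
    selected = []
    for token in page_tokens:
        x1, y1, x2, y2 = token["bbox"]
        cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
        if tx1 <= cx <= tx2 and ty1 <= cy <= ty2:
            selected.append(token)
    return selected
```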

**Tech details:**
* Combined the env variables `ENTIRE_PAGE_OCR` and `TABLE_OCR` into a
single `OCR_AGENT`; this means we use the same OCR agent for the entire
page and for tables, since we only run OCR once (see the usage sketch
after this list).
* Bumped the inference repo to `0.7.9`, which allows the table model in
inference to use pre-computed OCR data from the unst repo. For details,
see the companion
[PR](https://github.com/Unstructured-IO/unstructured-inference/pull/256).
* All notebook lint fixes were made with `make tidy`.
* This PR also fixes [this
issue](https://github.com/Unstructured-IO/unstructured/issues/1564);
I've added a test for it in
`test_pdf.py::test_partition_pdf_hi_table_extraction_with_languages`.
* Added the same image scaling logic [as the previous table
OCR](https://github.com/Unstructured-IO/unstructured-inference/blob/main/unstructured_inference/models/tables.py#L109C1-L113),
but the scaling is now applied to the entire image.
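
A minimal usage sketch of the consolidated setting (the accepted values
shown here, `"tesseract"` and `"paddle"`, are assumptions based on the
previous per-scope variables; check your version if they differ):

```python
import os

# Select one OCR agent for both entire-page and table OCR.
os.environ["OCR_AGENT"] = "tesseract"  # or "paddle"

from unstructured.partition.pdf import partition_pdf

# With the hi_res strategy and table extraction enabled, the single
# entire-page OCR pass now also feeds the table model.
elements = partition_pdf(
    filename="example.pdf",  # hypothetical input file
    strategy="hi_res",
    infer_table_structure=True,
)
```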

### Test
* Not much to test manually except that table extraction still works.
* But due to the change in scaling and the use of pre-computed OCR data
from the entire page, there are some slight (better) changes in the
table output. Here is a comparison of outputs I found from the same
test, `test_partition_image_with_table_extraction`:

Screenshot of the table in `layout-parser-paper-with-table.jpg`:
<img width="343" alt="expected"
src="https://github.com/Unstructured-IO/unstructured/assets/63475068/278d7665-d212-433d-9a05-872c4502725c">
before refactor:
<img width="709" alt="before"
src="https://github.com/Unstructured-IO/unstructured/assets/63475068/347fbc3b-f52b-45b5-97e9-6f633eaa0d5e">
after refactor:
<img width="705" alt="after"
src="https://github.com/Unstructured-IO/unstructured/assets/63475068/b3cbd809-cf67-4e75-945a-5cbd06b33b2d">

### TODO
(Added as a ticket.) There is still some cleanup to do in the inference
repo, since the unst repo now has duplicate logic, but we can keep it
there as a fallback plan. If we want to remove everything OCR-related
from inference, the following items are deprecated and can be removed:
*
[`get_tokens`](https://github.com/Unstructured-IO/unstructured-inference/blob/main/unstructured_inference/models/tables.py#L77)
(already noted in code)
* the `extract_tables` parameter in inference
*
[`interpret_table_block`](https://github.com/Unstructured-IO/unstructured-inference/blob/main/unstructured_inference/inference/layoutelement.py#L88)
*
[`load_agent`](https://github.com/Unstructured-IO/unstructured-inference/blob/main/unstructured_inference/models/tables.py#L197)
* the `TABLE_OCR` env variable

### Note
If we want to fall back to an additional table-specific OCR pass (we
may need this to use paddle for tables), we need to (see the
hypothetical sketch below):
* pass `infer_table_structure` to inference via the `extract_tables`
parameter
* stop passing `infer_table_structure` to `ocr.py`
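
For illustration only, a hypothetical sketch of that routing; every
name below is invented for the sketch and is not the actual code:

```python
# Hypothetical routing sketch -- all names here are invented.
def run_entire_page_ocr(page):  # stand-in for the ocr.py step
    return {"tokens": []}


def run_inference(page, extract_tables=False, ocr_tokens=None):  # stand-in
    return {"extract_tables": extract_tables, "ocr_tokens": ocr_tokens}


def process_page(page, infer_table_structure, table_ocr_fallback=False):
    if table_ocr_fallback:
        # Fallback: inference runs its own table OCR (e.g. with paddle),
        # so forward the flag as `extract_tables` and keep table handling
        # out of the entire-page OCR step.
        return run_inference(page, extract_tables=infer_table_structure)
    # Current flow: one entire-page OCR pass also feeds the table model.
    tokens = run_entire_page_ocr(page)["tokens"]
    return run_inference(page, ocr_tokens=tokens)
```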

---------

Co-authored-by: Yao You <yao@unstructured.io>
2023-10-21 00:24:23 +00:00

{
"cells": [
{
"cell_type": "markdown",
"id": "a3ce962e",
"metadata": {},
"source": [
"## Loading Data into Weaviate with `unstructured`\n",
"\n",
"This notebook shows a basic workflow for uploading document elements into Weaviate using the `unstructured` library. To get started with this notebook, first install the dependencies with `pip install -r requirements.txt` and start the Weaviate docker container with `docker-compose up`."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "5d9ffc17",
"metadata": {
"ExecuteTime": {
"end_time": "2023-08-09T22:54:56.713106Z",
"start_time": "2023-08-09T22:54:55.721284Z"
}
},
"outputs": [],
"source": [
"import json\n",
"\n",
"import tqdm\n",
"from unstructured.partition.pdf import partition_pdf\n",
"from unstructured.staging.weaviate import create_unstructured_weaviate_class, stage_for_weaviate\n",
"import weaviate\n",
"from weaviate.util import generate_uuid5"
]
},
{
"cell_type": "markdown",
"id": "673715e9",
"metadata": {},
"source": [
"The first step is to partition the document using the `unstructured` library. In the following example, we partition a PDF with `partition_pdf`. You can also partition over a dozen document types with the `partition` function."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "f9fc0cf9",
"metadata": {
"ExecuteTime": {
"end_time": "2023-08-09T22:54:58.584857Z",
"start_time": "2023-08-09T22:54:58.300351Z"
}
},
"outputs": [],
"source": [
"filename = \"../../example-docs/layout-parser-paper-fast.pdf\"\n",
"elements = partition_pdf(filename=filename, strategy=\"fast\")"
]
},
{
"cell_type": "markdown",
"id": "3ae76364",
"metadata": {},
"source": [
"Next, we'll create a schema for our Weaviate database using the `create_unstructured_weaviate_class` helper function from the `unstructured` library. The helper function generates a schema that includes all of the elements in the `ElementMetadata` object from `unstructured`. This includes information such as the filename and the page number of the document element. After specifying the schema, we create a connection to the database with the Weaviate client library and create the schema. You can change the name of the class by updating the `unstructured_class_name` variable."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "91057cb1",
"metadata": {
"ExecuteTime": {
"end_time": "2023-08-09T22:54:59.298547Z",
"start_time": "2023-08-09T22:54:59.296005Z"
}
},
"outputs": [],
"source": [
"unstructured_class_name = \"UnstructuredDocument\""
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "78e804bb",
"metadata": {
"ExecuteTime": {
"end_time": "2023-08-09T22:54:59.727082Z",
"start_time": "2023-08-09T22:54:59.722593Z"
}
},
"outputs": [],
"source": [
"unstructured_class = create_unstructured_weaviate_class(unstructured_class_name)\n",
"schema = {\"classes\": [unstructured_class]}"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "3e317a2d",
"metadata": {
"ExecuteTime": {
"end_time": "2023-08-09T22:55:01.606118Z",
"start_time": "2023-08-09T22:55:00.684623Z"
}
},
"outputs": [],
"source": [
"client = weaviate.Client(\"http://localhost:8080\")"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "0c508784",
"metadata": {
"ExecuteTime": {
"end_time": "2023-08-09T22:55:17.579418Z",
"start_time": "2023-08-09T22:55:17.039304Z"
}
},
"outputs": [],
"source": [
"client.schema.delete_all()\n",
"client.schema.create(schema)"
]
},
{
"cell_type": "markdown",
"id": "024ae133",
"metadata": {},
"source": [
"Next, we stage the elements for Weaviate using the `stage_for_weaviate` function and batch upload the results to Weaviate. `stage_for_weaviate` outputs a dictionary that conforms to the schema we created earlier. Once that data is stage, we can use the Weaviate client library to batch upload the results to Weaviate."
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "a7018bb1",
"metadata": {
"ExecuteTime": {
"end_time": "2023-08-09T22:55:21.595936Z",
"start_time": "2023-08-09T22:55:21.591105Z"
}
},
"outputs": [],
"source": [
"data_objects = stage_for_weaviate(elements)"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "af712d8e",
"metadata": {
"ExecuteTime": {
"end_time": "2023-08-09T22:55:23.590915Z",
"start_time": "2023-08-09T22:55:23.036903Z"
}
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"100%|██████████| 28/28 [00:00<00:00, 69.56it/s]\n"
]
}
],
"source": [
"with client.batch(batch_size=10) as batch:\n",
" for data_object in tqdm.tqdm(data_objects):\n",
" batch.add_data_object(\n",
" data_object,\n",
" unstructured_class_name,\n",
" uuid=generate_uuid5(data_object),\n",
" )"
]
},
{
"cell_type": "markdown",
"id": "dac10bf5",
"metadata": {},
"source": [
"Now that the documents are in Weaviate, we're able to run queries against Weaviate!"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "14098434",
"metadata": {
"ExecuteTime": {
"end_time": "2023-08-09T22:59:53.384425Z",
"start_time": "2023-08-09T22:59:53.202823Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{\n",
" \"data\": {\n",
" \"Get\": {\n",
" \"UnstructuredDocument\": [\n",
" {\n",
" \"_additional\": {\n",
" \"score\": \"0.23643185\"\n",
" },\n",
" \"text\": \"Deep Learning(DL)-based approaches are the state-of-the-art for a wide range of document image analysis (DIA) tasks including document image classi\\ufb01cation [11,\"\n",
" },\n",
" {\n",
" \"_additional\": {\n",
" \"score\": \"0.22914983\"\n",
" },\n",
" \"text\": \"LayoutParser: A Uni\\ufb01ed Toolkit for Deep Learning Based Document Image Analysis\"\n",
" }\n",
" ]\n",
" }\n",
" }\n",
"}\n"
]
}
],
"source": [
"response = (\n",
" client.query.get(\"UnstructuredDocument\", [\"text\", \"_additional {score}\"])\n",
" .with_bm25(query=\"document understanding\")\n",
" .with_limit(2)\n",
" .do()\n",
")\n",
"\n",
"print(json.dumps(response, indent=4))"
]
},
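{
"cell_type": "markdown",
"id": "b2a4a5c1",
"metadata": {},
"source": [
"As a further sketch (untested here), the schema generated by `create_unstructured_weaviate_class` includes metadata fields such as `filename`, so you can combine a query with a metadata filter using the client's `with_where` method. The property name and filter value below are assumptions based on the schema above:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c3b5d6e2",
"metadata": {},
"outputs": [],
"source": [
"# Sketch: restrict results to a single source file via metadata.\n",
"# Assumes the `filename` property exists in the generated schema.\n",
"response = (\n",
"    client.query.get(\"UnstructuredDocument\", [\"text\", \"filename\"])\n",
"    .with_where(\n",
"        {\n",
"            \"path\": [\"filename\"],\n",
"            \"operator\": \"Equal\",\n",
"            \"valueText\": \"layout-parser-paper-fast.pdf\",\n",
"        }\n",
"    )\n",
"    .with_limit(2)\n",
"    .do()\n",
")\n",
"\n",
"print(json.dumps(response, indent=4))"
]
}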
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.17"
}
},
"nbformat": 4,
"nbformat_minor": 5
}