## Conversion
### Convert a single document
To convert individual PDF documents, use `convert()`, for example:
```python
from docling.document_converter import DocumentConverter
source = "https://arxiv.org/pdf/2408.09869" # PDF path or URL
converter = DocumentConverter()
result = converter.convert(source)
print(result.document.export_to_markdown()) # output: "### Docling Technical Report[...]"
```
### CLI
You can also use Docling directly from your command line to convert individual files (local paths or URLs) or whole directories.
```console
docling https://arxiv.org/pdf/2206.01062
```
You can also use 🥚[SmolDocling](https://huggingface.co/ds4sd/SmolDocling-256M-preview) and other VLMs via the Docling CLI:
```bash
docling --pipeline vlm --vlm-model smoldocling https://arxiv.org/pdf/2206.01062
```
This will use MLX acceleration on supported Apple Silicon hardware.
To see all available options (export formats etc.), run `docling --help`. More details in the [CLI reference page](../reference/cli.md).
### Advanced options
#### Model prefetching and offline usage
By default, models are downloaded automatically upon first usage. If you would prefer
to explicitly prefetch them for offline use (e.g. in air-gapped environments), you can do
so as follows:
**Step 1: Prefetch the models**
Use the `docling-tools models download` utility:
```sh
$ docling-tools models download
Downloading layout model...
Downloading tableformer model...
Downloading picture classifier model...
Downloading code formula model...
Downloading easyocr models...
Models downloaded into $HOME/.cache/docling/models.
```
Alternatively, models can be programmatically downloaded using `docling.utils.model_downloader.download_models()`.
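A minimal sketch of the programmatic route is shown below; the target-directory parameter name (`output_dir`) is an assumption, so check the function signature in your installed version:

```python
from pathlib import Path

from docling.utils.model_downloader import download_models

# Prefetch the default models; without arguments they land in the Docling cache.
# An explicit target directory (parameter name assumed) is handy e.g. when baking
# the models into a container image for air-gapped deployments.
download_models(output_dir=Path("/local/path/to/models"))
```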
**Step 2: Use the prefetched models**
```python
from docling.datamodel.base_models import InputFormat
from docling.datamodel.pipeline_options import EasyOcrOptions, PdfPipelineOptions
from docling.document_converter import DocumentConverter, PdfFormatOption
artifacts_path = "/local/path/to/models"
pipeline_options = PdfPipelineOptions(artifacts_path=artifacts_path)
doc_converter = DocumentConverter(
format_options={
InputFormat.PDF: PdfFormatOption(pipeline_options=pipeline_options)
}
)
```
Or using the CLI:
```sh
docling --artifacts-path="/local/path/to/models" FILE
```
Or using the `DOCLING_ARTIFACTS_PATH` environment variable:
```sh
export DOCLING_ARTIFACTS_PATH="/local/path/to/models"
python my_docling_script.py
```
#### Using remote services
The main purpose of Docling is to run local models that do not share any user data with remote services.
Nevertheless, there are valid use cases for processing parts of the pipeline with remote services, for example invoking OCR engines from cloud vendors or using hosted LLMs.
Docling allows such models, but requires the user to explicitly opt in to communicating with external services.
```py
from docling.datamodel.base_models import InputFormat
from docling.datamodel.pipeline_options import PdfPipelineOptions
from docling.document_converter import DocumentConverter, PdfFormatOption
pipeline_options = PdfPipelineOptions(enable_remote_services=True)
doc_converter = DocumentConverter(
format_options={
InputFormat.PDF: PdfFormatOption(pipeline_options=pipeline_options)
}
)
```
If `enable_remote_services=True` is not set, the system will raise an `OperationNotAllowed()` exception.
_Note: This option is only related to the system sending user data to remote services. Control of pulling data (e.g. model weights) follows the logic described in [Model prefetching and offline usage](#model-prefetching-and-offline-usage)._
##### List of remote model services
The options in this list require the explicit opt-in `enable_remote_services=True` when processing documents.

- `PictureDescriptionApiOptions`: Using vision models via API calls.
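A sketch of how such a configuration could look is shown below; the endpoint URL and prompt are placeholders, and the exact option fields may differ across Docling versions:

```python
from docling.datamodel.base_models import InputFormat
from docling.datamodel.pipeline_options import (
    PdfPipelineOptions,
    PictureDescriptionApiOptions,
)
from docling.document_converter import DocumentConverter, PdfFormatOption

pipeline_options = PdfPipelineOptions(
    enable_remote_services=True,  # explicit opt-in for remote calls
    do_picture_description=True,
)
pipeline_options.picture_description_options = PictureDescriptionApiOptions(
    url="http://localhost:8000/v1/chat/completions",  # placeholder endpoint
    prompt="Describe the image in three sentences.",
    timeout=90,
)

doc_converter = DocumentConverter(
    format_options={
        InputFormat.PDF: PdfFormatOption(pipeline_options=pipeline_options)
    }
)
```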
#### Adjust pipeline features
The example file [custom_convert.py](../examples/custom_convert.py) contains multiple ways
one can adjust the conversion pipeline and features.
##### Control PDF table extraction options
You can control whether table structure recognition should map the recognized structure back to PDF cells (default) or use the text cells from the structure prediction itself.
This can improve output quality if you find that multiple columns in extracted tables are erroneously merged into one.
```python
from docling.datamodel.base_models import InputFormat
from docling.document_converter import DocumentConverter, PdfFormatOption
from docling.datamodel.pipeline_options import PdfPipelineOptions
pipeline_options = PdfPipelineOptions(do_table_structure=True)
pipeline_options.table_structure_options.do_cell_matching = False # uses text cells predicted from table structure model
doc_converter = DocumentConverter(
format_options={
InputFormat.PDF: PdfFormatOption(pipeline_options=pipeline_options)
}
)
```
Since Docling 1.16.0, you can control which TableFormer mode to use. Choose between `TableFormerMode.FAST` (faster but less accurate) and `TableFormerMode.ACCURATE` (default) for better quality on difficult table structures.
```python
from docling.datamodel.base_models import InputFormat
from docling.document_converter import DocumentConverter, PdfFormatOption
from docling.datamodel.pipeline_options import PdfPipelineOptions, TableFormerMode
pipeline_options = PdfPipelineOptions(do_table_structure=True)
pipeline_options.table_structure_options.mode = TableFormerMode.ACCURATE # use more accurate TableFormer model
doc_converter = DocumentConverter(
format_options={
InputFormat.PDF: PdfFormatOption(pipeline_options=pipeline_options)
}
)
```
#### Impose limits on the document size
You can limit the file size and the number of pages that are allowed to be processed per document:
```python
from pathlib import Path
from docling.document_converter import DocumentConverter
source = "https://arxiv.org/pdf/2408.09869"
converter = DocumentConverter()
result = converter.convert(source, max_num_pages=100, max_file_size=20971520)
```
#### Convert from binary PDF streams
You can convert PDFs from a binary stream instead of from the filesystem as follows:
```python
from io import BytesIO
from docling.datamodel.base_models import DocumentStream
from docling.document_converter import DocumentConverter
buf = BytesIO(your_binary_stream)
source = DocumentStream(name="my_doc.pdf", stream=buf)
converter = DocumentConverter()
result = converter.convert(source)
```
#### Limit resource usage
You can limit the number of CPU threads used by Docling by setting the environment variable `OMP_NUM_THREADS` accordingly. The default is 4 CPU threads.
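For example, to cap a single run at two threads:

```sh
OMP_NUM_THREADS=2 docling FILE
```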
#### Use specific backend converters
!!! note

    This section discusses directly invoking a [backend](../concepts/architecture.md),
    i.e. using a low-level API. This should only be done when necessary. For most cases,
    using a `DocumentConverter` (high-level API) as discussed in the sections above
    should suffice and is the recommended way.
By default, Docling will try to identify the document format and apply the appropriate conversion backend (see the list of [supported formats](supported_formats.md)).
You can restrict the `DocumentConverter` to a set of allowed document formats, as shown in the [Multi-format conversion](../examples/run_with_formats.py) example and in the sketch below.
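A minimal sketch, here restricting inputs to PDF and HTML:

```python
from docling.datamodel.base_models import InputFormat
from docling.document_converter import DocumentConverter

# Inputs in any other format will be rejected by the converter.
doc_converter = DocumentConverter(allowed_formats=[InputFormat.PDF, InputFormat.HTML])
```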
Alternatively, you can also use the specific backend that matches your document content. For instance, you can use `HTMLDocumentBackend` for HTML pages:
```python
import urllib.request
from io import BytesIO
from docling.backend.html_backend import HTMLDocumentBackend
from docling.datamodel.base_models import InputFormat
from docling.datamodel.document import InputDocument
url = "https://en.wikipedia.org/wiki/Duck"
text = urllib.request.urlopen(url).read()
in_doc = InputDocument(
path_or_stream=BytesIO(text),
format=InputFormat.HTML,
backend=HTMLDocumentBackend,
filename="duck.html",
)
backend = HTMLDocumentBackend(in_doc=in_doc, path_or_stream=BytesIO(text))
dl_doc = backend.convert()
print(dl_doc.export_to_markdown())
```
## Chunking
You can chunk a Docling document using a [chunker](../concepts/chunking.md), such as a
`HybridChunker`, as shown below (for more details check out
[this example](../examples/hybrid_chunking.ipynb)):
```python
from docling.document_converter import DocumentConverter
from docling.chunking import HybridChunker
conv_res = DocumentConverter().convert("https://arxiv.org/pdf/2206.01062")
doc = conv_res.document
chunker = HybridChunker(tokenizer="BAAI/bge-small-en-v1.5") # set tokenizer as needed
chunk_iter = chunker.chunk(doc)
```
An example chunk would look like this:
```python
print(list(chunk_iter)[11])
# {
# "text": "In this paper, we present the DocLayNet dataset. [...]",
# "meta": {
# "doc_items": [{
# "self_ref": "#/texts/28",
# "label": "text",
# "prov": [{
# "page_no": 2,
# "bbox": {"l": 53.29, "t": 287.14, "r": 295.56, "b": 212.37, ...},
# }], ...,
# }, ...],
# "headings": ["1 INTRODUCTION"],
# }
# }
```
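Since `chunk()` returns an iterator, a simple way to inspect all chunks is to loop over it (re-invoking the chunker here, as the iterator above is already consumed):

```python
for i, chunk in enumerate(chunker.chunk(doc)):
    # each chunk carries the text plus the provenance metadata shown above
    print(f"chunk {i}: {chunk.text[:60]}...")
```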