Create Enterprise Platform Documentation (#2486)

To test:
> cd docs && make html

Structures:
* Getting Started with Platform (User Account Management)
* Set Up workflow automation
* Job Scheduling
* Platform Source Connectors:
   * Azure Blob Storage
   * Amazon S3
   * Salesforce
   * Sharepoint
   * Google Cloud Storage
   * Google Drive
   * OneDrive
   * OpenSearch
   * Elasticsearch
   * SFTP Storage
* Platform Destination Connectors:
   * Amazon S3
   * Azure Cognitive Search
   * Chroma
   * Databricks
   * Elasticsearch
   * Google Cloud Storage
   * MongoDB
   * OpenSearch
   * Pinecone
   * PostgreSQL
   * Weaviate

---------

Co-authored-by: Matt Robinson <mrobinson@unstructured.io>
Co-authored-by: Matt Robinson <mrobinson@unstructuredai.io>
Ronny H 2024-03-06 11:16:08 -08:00 committed by GitHub
parent 9c1c41f493
commit 2afd347e6b
59 changed files with 1063 additions and 2 deletions

View File

@ -1,10 +1,11 @@
## 0.12.6-dev3
## 0.12.6-dev4
### Enhancements
* **Refactor `add_chunking_strategy` decorator to dispatch by name.** Add `chunk()` function to be used by the `add_chunking_strategy` decorator to dispatch chunking call based on a chunking-strategy name (that can be dynamic at runtime). This decouples chunking dispatch from only those chunkers known at "compile" time and enables runtime registration of custom chunkers.
### Features
* **Added Unstructured Platform Documentation.** The Unstructured Platform is currently in beta. The documentation provides how-to guides for setting up workflow automation, scheduling jobs, and configuring source and destination connectors.
### Fixes

View File

@ -13,6 +13,9 @@ Library Documentation
:doc:`api`
Access all the power of ``unstructured`` through the ``unstructured-api`` or learn to host it locally.
:doc:`platform`
Explore the enterprise-grade platform for enterprises and high-growth companies with large data volumes looking to automatically retrieve, transform, and stage their data for LLMs.
:doc:`core`
Learn more about the core partitioning, chunking, cleaning, and staging functionality within the
Unstructured library.
@ -42,6 +45,7 @@ Library Documentation
introduction
installing
api
platform
core
ingest/index
metadata

docs/source/platform.rst Normal file
View File

@ -0,0 +1,23 @@
Unstructured Platform
#####################
.. warning::
The Unstructured Platform is currently in beta. To join the waitlist, please fill out the form on our `Platform product page <https://unstructured.io/platform>`__.
Welcome to the Unstructured Platform User Guide. This guide provides comprehensive instructions and insights for users navigating the Unstructured Platform.
The Unstructured Platform stands out with its advanced features: it includes everything from our `commercial APIs <https://unstructured-io.github.io/unstructured/api.html>`__ and `open source library <https://github.com/Unstructured-IO/unstructured>`__, offers end-to-end continuous data hydration, and supports seamless integration with `data storage platforms <https://unstructured-io.github.io/unstructured/ingest/source_connectors.html>`__ and major `vector databases <https://unstructured-io.github.io/unstructured/ingest/destination_connectors.html>`__.
Its key functionalities are enhanced with `chunking strategies <https://unstructured-io.github.io/unstructured/core/chunking.html>`__, `embedding generation <https://unstructured-io.github.io/unstructured/core/embedding.html>`__ for RAG, and compatibility with multiple file types (text, images, PDFs, and many more). Additionally, it's designed for global reach with SOC 2 compliance and support for over 50 languages, ensuring a secure, versatile, and comprehensive data management solution.
Table of Contents
*****************
.. toctree::
:maxdepth: 2
platforms/workflow
platforms/job
platforms/source_platform
platforms/destination_platform

View File

@ -0,0 +1,26 @@
Platform Destination Connectors
===============================
Destination Connectors in the ``Unstructured Platform`` are designed to specify the endpoint for data processed within the platform. These connectors ensure that the transformed and analyzed data is securely and efficiently transferred to a storage system for future use, often to a vector database for tasks that involve high-speed retrieval and advanced data analytics operations.
.. figure:: imgs/02-Destination-Dashboard.png
:alt: destinations
Destinations Dashboard
**List of Destination Connectors**
.. toctree::
:maxdepth: 1
platform_destinations/amazon_s3_destination
platform_destinations/azure_cognitive_search
platform_destinations/chroma
platform_destinations/databricks
platform_destinations/elasticsearch_destination
platform_destinations/google_cloud_destination
platform_destinations/mongodb
platform_destinations/opensearch
platform_destinations/pinecone
platform_destinations/postgresql
platform_destinations/weaviate

(New screenshot image files added; binary content not shown in the diff.)

View File

@ -0,0 +1,71 @@
Job Scheduling
===============
Job Dashboard
--------------
The job dashboard provides a centralized view for managing and monitoring the execution of data processing tasks within your workflows.
.. image:: imgs/04a-Job-Dashboard.png
:alt: job dashboard
**Here is how to navigate and view job status:**
- The central panel lists all jobs with their associated workflow **Name**, **ID**, **Status**, and **Execution Start Time**.
- The **Status** column provides at-a-glance information:
- A ``New`` status indicates a job has been created but has yet to run.
- A ``Scheduled`` status shows that a job is set to run at a future date and time, as indicated in the 'Execution Start Time' column.
- A ``Partitioning`` status means that documents are currently being processed.
- A ``Finished`` status indicates the job has been completed.
- A ``Failed`` status indicates the job has encountered some errors.
Run a Job
----------
.. image:: imgs/04b-Create-Adhoc-Job.png
:alt: create adhoc job
**To run a workflow, follow these steps:**
1. Click on the "Jobs" tab in the side navigation menu and click the **Run Job** button to open the job configuration pop-up window.
2. From the **Select a Workflow or create a new one** dropdown menu, you can **select a workflow** that you have previously created.
3. Alternatively, you can **create a new workflow** by completing the following fields:
- ``Sources``: Specify the source connector for the job.
- ``Destination``: Determine the destination connector where the processed data will be sent.
- ``Strategy``: Select the processing strategy for the data.
- ``Settings``: Configure additional job settings.
4. After you click the Run button, the system will run the workflow immediately.
Monitor Jobs Activity Logs
----------------------------
.. image:: imgs/04c-Jobs-Detail-Status.png
:alt: job detail status
The Job Details page is a comprehensive section for monitoring the specific details of jobs executed within a particular workflow. To access this page, click the specific *Workflow* or *ID* on the Job Dashboard.
Here is the information provided by the Job Details page:
- **Job Summary**: At the top of the dashboard, you will see the following document counts:
- ``Documents``: Total number of documents included in the workflow.
- ``New``: number of new documents to be processed.
- ``Partitioning``: number of documents being processed.
- ``Finished``: number of documents that have been completed.
- ``Failed``: number of documents that failed to be processed.
- **Job Status and Execution Information**: The page provides a detailed log of the job's execution, including ``status``, ``expected execution time``, and ``Job ID`` for reference.
- **Activity Logs**: The activity logs display a timestamped sequence of events during the job's execution. This can include when new documents are found, when documents are processed, and any errors or messages related to the job.

View File

@ -0,0 +1,39 @@
Amazon S3
=========
This page describes how to store processed data in Amazon S3.
Prerequisites
--------------
- Amazon S3 Bucket Name
- AWS Access and Secret Keys (if not using anonymous access)
- Token (if required for temporary security credentials)
For more information, please refer to `Amazon S3 documentation <https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html>`__.
Step-by-Step Guide
-------------------
.. image:: imgs/Destination-S3.png
:alt: Destination Connector Amazon S3
1. **Access the Create Destination Page**. Navigate to the "Destinations" section within the platform's side navigation menu and click on "New Destination" to initiate the setup of a new destination for your processed data.
2. **Select Destination Type**. Select **Amazon S3** destination connector from the ``Type`` dropdown menu.
3. **Configure Destination Details**
- ``Name`` (*required*): Assign a descriptive name to the new destination connector.
- ``Bucket Name`` (*required*): Enter the name of your Amazon S3 bucket.
- ``AWS Access Key``: Input your AWS access key ID if your bucket is private.
- ``AWS Secret Key``: Enter your AWS secret access key corresponding to the access key ID.
- ``Token``: If required, provide the security token for temporary access.
- ``Endpoint URL``: Specify a custom URL if connecting to a non-AWS S3 service.
4. **Additional Settings**
- Check ``Anonymous`` if you are connecting without AWS credentials.
- Check ``Recursive`` if you want the platform to store data recursively into sub-folders within the bucket.
5. **Submit**. Review all the details entered to ensure accuracy. Click 'Submit' to finalize the creation of the Destination Connector. The newly completed Amazon S3 connector will be listed on the Destinations dashboard.
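If you want to sanity-check the bucket and credentials before creating the connector, a minimal sketch using ``boto3`` (not part of the platform; the bucket name and keys below are placeholders) might look like this:

.. code-block:: python

    import boto3
    from botocore.exceptions import ClientError

    # Placeholder credentials and bucket; leave the keys out for anonymous access.
    session = boto3.Session(
        aws_access_key_id="<aws-access-key>",
        aws_secret_access_key="<aws-secret-key>",
        aws_session_token=None,  # only needed for temporary credentials
    )
    s3 = session.client("s3")  # pass endpoint_url=... for a non-AWS S3 service

    try:
        s3.head_bucket(Bucket="<bucket-name>")
        print("Bucket is reachable with these credentials.")
    except ClientError as err:
        print(f"Check the bucket name or credentials: {err}")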

View File

@ -0,0 +1,37 @@
Azure Cognitive Search
======================
This page describes how to store processed data in Azure Cognitive Search.
Prerequisites
--------------
- API Key for Azure Cognitive Search
- Azure Cognitive Search Index and Endpoint
For more information, please refer to `Azure Cognitive Search documentation <https://docs.microsoft.com/en-us/azure/search/>`__.
.. warning::
Ensure that the index schema is compatible with the data you intend to write.
If you need guidance on structuring your schema, consult the `sample index schema <https://unstructured-io.github.io/unstructured/ingest/destination_connectors/azure_cognitive_search.html#sample-index-schema>`__ for reference.
Step-by-Step Guide
-------------------
.. image:: imgs/Destination-Azure-Cognitive-Search.png
:alt: Destination Connector Azure Cognitive Search
1. **Access the Create Destination Page**. Navigate to the "Destinations" section within the platform's side navigation menu and click on "New Destination" to initiate the setup of a new destination for your processed data.
2. **Select Destination Type**. Select **Azure Cognitive Search** destination connector from the ``Type`` dropdown menu.
3. **Configure Destination Details**
- ``Name`` (*required*): Assign a descriptive name to the new destination connector.
- ``Endpoint`` (*required*): Enter the endpoint URL for your Azure Cognitive Search service.
- ``API Key`` (*required*): Provide the API key for your Azure Cognitive Search service.
- ``Index Name`` (*required*): Input the name of the index where the data will be stored.
4. **Submit**. Review all the details entered to ensure accuracy. Click 'Submit' to finalize the creation of the Destination Connector. The newly completed Azure Cognitive Search connector will be listed on the Destinations dashboard.
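To confirm the endpoint, API key, and index before creating the connector, a minimal sketch using the ``azure-search-documents`` package (endpoint, key, and index name below are placeholders) could be:

.. code-block:: python

    from azure.core.credentials import AzureKeyCredential
    from azure.search.documents.indexes import SearchIndexClient

    # Placeholder service endpoint, API key, and index name.
    client = SearchIndexClient(
        endpoint="https://<service-name>.search.windows.net",
        credential=AzureKeyCredential("<api-key>"),
    )
    index = client.get_index("<index-name>")
    # Compare the field names against the sample index schema referenced above.
    print([field.name for field in index.fields])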

View File

@ -0,0 +1,36 @@
Chroma
======
This page describes how to store processed data in a Chroma instance.
Prerequisites
--------------
- ChromaDB Installation
For more information, please refer to `Chroma documentation <https://docs.trychroma.com/getting-started>`__.
Step-by-Step Guide
-------------------
.. image:: imgs/Destination-Chroma.png
:alt: Destination Connector Chroma
1. **Access the Create Destination Page**. Navigate to the "Destinations" section within the platform's side navigation menu and click on "New Destination" to initiate the setup of a new destination for your processed data.
2. **Select Destination Type**. Select **Chroma** destination connector from the ``Type`` dropdown menu.
3. **Configure Destination Details**
- ``Name`` (*required*): Assign a descriptive name to the new destination connector.
- ``Settings JSON``: Input the JSON string of settings to communicate with the Chroma server.
- ``Tenant``: Specify the tenant to use for this client.
- ``Database``: Enter the name of the database to use for this client.
- ``Host``: Provide the hostname of the Chroma server.
- ``Port``: Indicate the port of the Chroma server.
- Check ``SSL`` if an SSL connection is required.
- ``Headers JSON``: Enter a JSON string of headers to send to the Chroma server.
- ``Collection Name`` (*required*): Specify the name of the collection to write into.
- ``Batch Size`` (*required*): Define the number of records per batch to write into Chroma.
4. **Submit**. Review all the details entered to ensure accuracy. Click 'Submit' to finalize the creation of the Destination Connector. The newly completed Chroma connector will be listed on the Destinations dashboard.
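As a quick connectivity check before filling in the form, a minimal sketch with the ``chromadb`` client (host, port, and collection name below are placeholders) might be:

.. code-block:: python

    import chromadb

    # Placeholder host, port, and collection name for a reachable Chroma server.
    client = chromadb.HttpClient(host="chroma.example.com", port=8000, ssl=False)
    client.heartbeat()  # raises if the server cannot be reached
    collection = client.get_or_create_collection("platform-output")
    print(collection.count(), "records currently in the collection")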

View File

@ -0,0 +1,44 @@
Databricks
==========
This page describes how to store processed data in Databricks.
Prerequisites
--------------
- Host URL for Databricks workspace
- Account ID for Databricks
- Username and Password for Databricks authentication (if applicable)
- Personal Access Token for Databricks
- Cluster ID
- Catalog, Schema, and Volume within Databricks
For more information, please refer to `Databricks documentation <https://docs.databricks.com/>`__.
Step-by-Step Guide
-------------------
.. image:: imgs/Destination-Databricks.png
:alt: Destination Connector Databricks
1. **Access the Create Destination Page**. Navigate to the "Destinations" section within the platform's side navigation menu and click on "New Destination" to initiate the setup of a new destination for your processed data.
2. **Select Destination Type**. Select **Databricks** destination connector from the ``Type`` dropdown menu.
3. **Configure Destination Details**
- ``Name`` (*required*): Assign a descriptive name to the new destination connector.
- ``Host`` (*required*): Enter the Databricks workspace host URL.
- ``Account ID``: Specify the Databricks account ID.
- ``Username``: Provide the Databricks username.
- ``Password``: Enter the Databricks password.
- ``Token``: Input the Databricks personal access token.
- ``Cluster ID``: Indicate the Databricks cluster ID.
- ``Catalog`` (*required*): Name of the catalog in the Databricks Unity Catalog service.
- ``Schema``: Specify the schema associated with the volume.
- ``Volume`` (*required*): Name of the volume in the Unity Catalog.
- ``Volume Path``: Provide an optional path within the volume to which to write.
- Check ``Overwrite`` if existing data should be overwritten.
- ``Encoding``: Select the encoding applied to the data when written to the volume.
4. **Submit**. Review all the details entered to ensure accuracy. Click 'Submit' to finalize the creation of the Destination Connector. The newly completed Databricks connector will be listed on the Destinations dashboard.
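To verify that the host and personal access token are valid before configuring the connector, a minimal sketch with the ``databricks-sdk`` package (workspace host and token below are placeholders) could look like:

.. code-block:: python

    from databricks.sdk import WorkspaceClient

    # Placeholder workspace host and personal access token.
    w = WorkspaceClient(
        host="https://<workspace>.cloud.databricks.com",
        token="<personal-access-token>",
    )
    print("Authenticated as", w.current_user.me().user_name)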

View File

@ -0,0 +1,45 @@
Elasticsearch
=============
This page describes how to store processed data in an Elasticsearch cluster.
Prerequisites
--------------
- Elasticsearch Local Install or Cloud Service
- Index Name
- Username and Password for Elasticsearch access (if required)
- Cloud ID (if using Elastic Cloud)
- API Key and API Key ID for authentication (if required)
For more information, please refer to `Elasticsearch documentation <https://www.elastic.co/guide/en/elasticsearch/reference/current/index.html>`__.
.. warning::
Ensure that the index schema is compatible with the data you intend to write.
If you need guidance on structuring your schema, consult the `Vector Search Sample Mapping <https://unstructured-io.github.io/unstructured/ingest/destination_connectors/elasticsearch.html#vector-search-sample-mapping>`__ for reference.
Step-by-Step Guide
-------------------
.. image:: imgs/Destination-Elasticsearch.png
:alt: Destination Connector Elasticsearch
1. **Access the Create Destination Page**. Navigate to the "Destinations" section within the platform's side navigation menu and click on "New Destination" to initiate the setup of a new destination for your processed data.
2. **Select Destination Type**. Select **Elasticsearch** destination connector from the ``Type`` dropdown menu.
3. **Configure Destination Details**
- ``Name`` (*required*): Assign a descriptive name to the new destination connector.
- ``URL`` (*required*): Enter the URL of the Elasticsearch cluster.
- ``Batch Size``: Set the number of documents per batch to be uploaded.
- ``Index Name`` (*required*): Provide the name of the Elasticsearch index to store the data.
- ``Username``: Input the username for the Elasticsearch cluster if authentication is enabled.
- ``Password``: Enter the password associated with the username.
- ``Cloud ID``: Specify the Cloud ID if connecting to Elastic Cloud.
- ``API Key``: Provide the API Key for authentication if this method is used.
- ``API Key ID``: Enter the ID associated with the API Key.
- ``Bearer Auth``, ``CA Certs``, ``SSL Assert Fingerprint``: Provide these details if needed for a secure SSL connection.
4. **Submit**. Review all the details entered to ensure accuracy. Click 'Submit' to finalize the creation of the Destination Connector. The newly completed Elasticsearch connector will be listed on the Destinations dashboard.
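To check the cluster URL, credentials, and target index before saving the connector, a minimal sketch using the ``elasticsearch`` Python client (URL, credentials, and index name below are placeholders) might be:

.. code-block:: python

    from elasticsearch import Elasticsearch

    # Placeholder URL and credentials; use cloud_id / api_key instead for Elastic Cloud.
    es = Elasticsearch("https://es.example.com:9200", basic_auth=("elastic", "<password>"))
    print("Cluster version:", es.info()["version"]["number"])
    print("Index exists:", es.indices.exists(index="<index-name>"))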

View File

@ -0,0 +1,30 @@
Google Cloud Storage
====================
This page describes how to store processed data in Google Cloud Storage.
Prerequisites
--------------
- Google Cloud Storage Bucket URL
- Service Account Key for Google Cloud Storage
For more information, please refer to `Google Cloud Storage documentation <https://cloud.google.com/storage/docs>`__.
Step-by-Step Guide
-------------------
.. image:: imgs/Destination-Google-Cloud.png
:alt: Destination Connector Google Cloud Storage
1. **Access the Create Destination Page**. Navigate to the "Destinations" section within the platform's side navigation menu and click on "New Destination" to initiate the setup of a new destination for your processed data.
2. **Select Destination Type**. Select **Google Cloud Storage** destination connector from the ``Type`` dropdown menu.
3. **Configure Destination Details**
- ``Name`` (*required*): Assign a descriptive name to the new destination connector.
- ``Remote URL`` (*required*): Enter the Google Cloud Storage bucket URL where the data will be stored.
- ``Service Account Key`` (*required*): Provide the Service Account Key that has been granted access to the specified Google Cloud Storage bucket.
4. **Submit**. Review all the details entered to ensure accuracy. Click 'Submit' to finalize the creation of the Destination Connector. The newly completed Google Cloud Storage connector will be listed on the Destinations dashboard.
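To confirm the service account key can reach the bucket before creating the connector, a minimal sketch with the ``google-cloud-storage`` package (key file and bucket name below are placeholders):

.. code-block:: python

    from google.cloud import storage

    # Placeholder key file and bucket; the Remote URL field expects gs://<bucket>/<path>.
    client = storage.Client.from_service_account_json("<service-account-key>.json")
    bucket = client.bucket("<bucket-name>")
    print("Bucket reachable:", bucket.exists())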

(New screenshot image files added; binary content not shown in the diff.)

View File

@ -0,0 +1,31 @@
MongoDB
=======
This page describes how to store processed data in a MongoDB database.
Prerequisites
--------------
- MongoDB Local Install
- Database and Collection
For more information, please refer to `MongoDB documentation <https://docs.mongodb.com/>`__.
Step-by-Step Guide
-------------------
.. image:: imgs/Destination-MongoDB.png
:alt: Destination Connector MongoDB
1. **Access the Create Destination Page**. Navigate to the "Destinations" section within the platform's side navigation menu and click on "New Destination" to initiate the setup of a new destination for your processed data.
2. **Select Destination Type**. Select **MongoDB** destination connector from the ``Type`` dropdown menu.
3. **Configure Destination Details**
- ``Name`` (*required*): Assign a descriptive name to the new destination connector.
- ``URI`` (*required*): Enter the MongoDB connection URI.
- ``Database`` (*required*): Provide the name of the target MongoDB database.
- ``Collection``: Specify the name of the target MongoDB collection within the database.
4. **Submit**. Review all the details entered to ensure accuracy. Click 'Submit' to finalize the creation of the Destination Connector. The newly completed MongoDB connector will be listed on the Destinations dashboard.
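To verify the URI and database before saving the connector, a minimal sketch with ``pymongo`` (connection URI and database name below are placeholders):

.. code-block:: python

    from pymongo import MongoClient

    # Placeholder connection URI and database name.
    client = MongoClient("mongodb+srv://<user>:<password>@<cluster-host>")
    client.admin.command("ping")  # raises if the server is unreachable
    db = client["<database-name>"]
    print("Existing collections:", db.list_collection_names())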

View File

@ -0,0 +1,40 @@
OpenSearch
==========
This page describes how to store processed data in an OpenSearch cluster.
Prerequisites
--------------
- OpenSearch Hosts
- Index Name
- Username and Password (if required)
- SSL configuration (if required)
For more information, please refer to `OpenSearch documentation <https://opensearch.org/docs/latest/>`__.
.. warning::
Ensure that the index schema is compatible with the data you intend to write.
If you need guidance on structuring your schema, consult the `Vector Search Sample Mapping <https://unstructured-io.github.io/unstructured/ingest/destination_connectors/opensearch.html#vector-search-sample-mapping>`__ for reference.
Step-by-Step Guide
-------------------
.. image:: imgs/Destination-OpenSearch.png
:alt: Destination Connector OpenSearch
1. **Access the Create Destination Page**. Navigate to the "Destinations" section within the platform's side navigation menu and click on "New Destination" to initiate the setup of a new destination for your processed data.
2. **Select Destination Type**. Select **OpenSearch** destination connector from the ``Type`` dropdown menu.
3. **Configure Destination Details**
- ``Name`` (*required*): Assign a descriptive name to the new destination connector.
- ``Hosts`` (*required*): Enter the comma-delimited list of OpenSearch hosts.
- ``Index Name`` (*required*): Provide the name of the index where the data will be stored.
- ``Username``: Input the username for the OpenSearch cluster if authentication is enabled.
- ``Password``: Enter the password associated with the username.
- Check ``Use SSL for the connection`` if the OpenSearch cluster requires an SSL connection for security purposes.
4. **Submit**. Review all the details entered to ensure accuracy. Click 'Submit' to finalize the creation of the Destination Connector. The newly completed OpenSearch connector will be listed on the Destinations dashboard.
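To check the hosts, credentials, and target index before creating the connector, a minimal sketch with the ``opensearch-py`` client (hosts, credentials, and index name below are placeholders):

.. code-block:: python

    from opensearchpy import OpenSearch

    # Placeholder hosts and credentials; use_ssl mirrors the SSL checkbox above.
    client = OpenSearch(
        hosts=["https://search.example.com:9200"],
        http_auth=("admin", "<password>"),
        use_ssl=True,
    )
    print("Index exists:", client.indices.exists(index="<index-name>"))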

View File

@ -0,0 +1,35 @@
Pinecone
=========
This page describes how to store processed data in the Pinecone vector database.
Prerequisites
--------------
- Pinecone Account and API Key
- Pinecone Index
For more information, please refer to `Pinecone documentation <https://docs.pinecone.io/docs/quickstart>`__.
Step-by-Step Guide
-------------------
.. image:: imgs/Destination-Pinecone.png
:alt: Pinecone Destination Connector
1. **Access the Create Destination Page**. Navigate to the "Destinations" section within the platform's side navigation menu and click on "New Destination" to initiate the setup of a new destination for your processed data.
2. **Select Destination Type**. Select **Pinecone** destination connector from the ``Type`` dropdown menu.
3. **Configure Destination Details**
- ``Name`` (*required*): Assign a descriptive name to the new destination connector.
- ``Index Name`` (*required*): Enter the name of the index in the Pinecone database where the data will be stored.
- ``Environment`` (*required*): Enter the Pinecone environment in which the index instance is hosted.
- ``Batch Size`` (*required*): Define the number of records the platform will send in a single batch to the destination.
- ``API Key`` (*required*): Input the API key provided by Pinecone for secure access.
4. **Submit**. Review all the details entered to ensure accuracy. Click 'Submit' to finalize the creation of the Destination Connector. The newly completed connector will be listed on the Destinations dashboard.
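To confirm the API key, environment, and index before saving the connector, a minimal sketch using the classic (v2) ``pinecone-client`` API, which matches the Environment field above (key, environment, and index name below are placeholders):

.. code-block:: python

    import pinecone

    # Placeholder API key, environment, and index name (classic pod-based API).
    pinecone.init(api_key="<api-key>", environment="<environment>")
    print("Available indexes:", pinecone.list_indexes())
    print(pinecone.describe_index("<index-name>"))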

View File

@ -0,0 +1,39 @@
PostgreSQL
==========
This page describes how to store processed data in a PostgreSQL database.
Prerequisites
--------------
- PostgreSQL Server Hostname
- Database Name and Port Number
- Username and Password for Database Access
For more information, please refer to `PostgreSQL documentation <https://www.postgresql.org/docs/>`__.
.. warning::
Ensure that the index schema is compatible with the data you intend to write.
If you need guidance on structuring your schema, consult the `Sample Index Schema <https://unstructured-io.github.io/unstructured/ingest/destination_connectors/sql.html#sample-index-schema>`__ for reference.
Step-by-Step Guide
-------------------
.. image:: imgs/Destination-PostgreSQL.png
:alt: Destination Connector PostgreSQL
1. **Access the Create Destination Page**. Navigate to the "Destinations" section within the platform's side navigation menu and click on "New Destination" to initiate the setup of a new destination for your processed data.
2. **Select Destination Type**. Select **PostgreSQL** destination connector from the ``Type`` dropdown menu.
3. **Configure Destination Details**
- ``Name`` (*required*): Assign a descriptive name to the new destination connector.
- ``Host`` (*required*): Enter the hostname or IP address of the PostgreSQL server.
- ``Database`` (*required*): Provide the name of the PostgreSQL database.
- ``Port``: Specify the port number for the PostgreSQL server (default is 5432).
- ``Username``: Input the username for the PostgreSQL database access.
- ``Password``: Enter the password associated with the username.
4. **Submit**. Review all the details entered to ensure accuracy. Click 'Submit' to finalize the creation of the Destination Connector. The newly completed PostgreSQL connector will be listed on the Destinations dashboard.
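To verify the connection details before creating the connector, a minimal sketch with ``psycopg2`` (host, database, and credentials below are placeholders):

.. code-block:: python

    import psycopg2

    # Placeholder connection details; the port defaults to 5432.
    conn = psycopg2.connect(
        host="<host>",
        port=5432,
        dbname="<database>",
        user="<username>",
        password="<password>",
    )
    with conn.cursor() as cur:
        cur.execute("SELECT version();")
        print(cur.fetchone()[0])
    conn.close()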

View File

@ -0,0 +1,45 @@
Weaviate
========
This page describes how to store processed data in Weaviate.
Prerequisites
--------------
- Weaviate Local Install or Cloud Service
- Weaviate URL and Class Name
- Authentication Credentials (if required)
For more information, please refer to `Weaviate documentation <https://weaviate.io/developers/weaviate/current/>`__.
.. warning::
Ensure that the index schema is compatible with the data you intend to write.
If you need guidance on structuring your schema, consult the `Sample Index Schema <https://unstructured-io.github.io/unstructured/ingest/destination_connectors/weaviate.html#sample-index-schema>`__ for reference.
Step-by-Step Guide
-------------------
.. image:: imgs/Destination-Weaviate.png
:alt: Destination Connector Weaviate
1. **Access the Create Destination Page**. Navigate to the "Destinations" section within the platform's side navigation menu and click on "New Destination" to initiate the setup of a new destination for your processed data.
2. **Select Destination Type**. Select **Weaviate** destination connector from the ``Type`` dropdown menu.
3. **Configure Destination Details**
- ``Name`` (*required*): Assign a descriptive name to the new destination connector.
- ``Host URL`` (*required*): Enter the URL of the Weaviate instance.
- ``Class Name`` (*required*): Specify the class name within Weaviate where data will be stored.
- ``Batch Size`` (*required*): Define the number of records the platform will send in a single batch.
- ``Username``: Provide the username if authentication is required.
- ``Password``: Enter the password corresponding to the username.
- ``Access Token``, ``API Key``, ``Refresh Token``, ``Client Secret``: Provide these details if needed for the Weaviate authentication process.
- ``Scope``: Specify the scope if applicable for OAuth.
4. **Additional Settings**
- Check ``Anonymous`` if you are connecting without authentication.
5. **Submit**. Review all the details entered to ensure accuracy. Click 'Submit' to finalize the creation of the Destination Connector. The newly completed Weaviate connector will be listed on the Destinations dashboard.
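To check the host URL, credentials, and class before saving the connector, a minimal sketch with the v3 ``weaviate-client`` API (instance URL, API key, and class name below are placeholders):

.. code-block:: python

    import weaviate

    # Placeholder instance URL, API key, and class name (weaviate-client v3 API).
    client = weaviate.Client(
        url="https://<cluster>.weaviate.network",
        auth_client_secret=weaviate.AuthApiKey(api_key="<api-key>"),
    )
    print("Ready:", client.is_ready())
    print(client.schema.get("<ClassName>"))  # confirm the class and its schema exist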

View File

@ -0,0 +1,42 @@
Amazon S3
=========
This page describes how to ingest your documents from Amazon S3 buckets.
Prerequisites
--------------
- AWS Account and API Key
- S3 Bucket
- IAM User with S3 Access
For more information, please refer to `Amazon S3 documentation <https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html>`__.
Step-by-Step Guide
-------------------
.. image:: imgs/Source-AWS-S3.png
:alt: Source Connector Amazon S3
1. **Access the Create Source Page**. Navigate to the "Sources" section on the left sidebar and click the "New Source" button.
2. **Select Source Type**. Select **Amazon S3** from the ``Type`` dropdown menu.
3. **Configure Source Details to connect to the AWS Platform**
- ``Name`` (*required*): Enter a unique name for your source to identify it within the platform.
- ``Bucket Name`` (*required*): Provide the name of your Amazon S3 bucket.
- ``AWS Access Key``: Enter your AWS access key ID if your bucket is private. Leave blank if anonymous access is configured.
- ``AWS Secret Key``: Enter your AWS secret access key corresponding to the above access key ID.
- ``Token``: If required, enter the security token for temporary access.
- ``Endpoint URL``: Specify a custom URL if you connect to a non-AWS S3 bucket.
4. **Additional Settings**
- Check ``Anonymous`` if you are connecting to a bucket with public access and don't want to associate the connection with your account.
- Check ``Recursive`` if you want the platform to ingest data from sub-folders within the bucket.
5. **Submit**. After filling in the necessary information, click 'Submit' to create the Source Connector. The newly completed connector will be listed on the Sources dashboard.
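To confirm the bucket and credentials can list your documents before creating the connector, a minimal sketch with ``boto3`` (bucket name and keys below are placeholders; omit the keys for anonymous access):

.. code-block:: python

    import boto3

    # Placeholder bucket and credentials; drop the keys for public buckets.
    s3 = boto3.client(
        "s3",
        aws_access_key_id="<aws-access-key>",
        aws_secret_access_key="<aws-secret-key>",
    )
    response = s3.list_objects_v2(Bucket="<bucket-name>", MaxKeys=5)
    for obj in response.get("Contents", []):
        print(obj["Key"])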

View File

@ -0,0 +1,38 @@
Azure Blob Storage
==================
This page describes how to ingest your documents from Azure Blob Storage.
Prerequisites
--------------
- Azure Account
- Azure Blob Storage Container
For more information, please refer to `Azure Blob Storage documentation <https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blobs-introduction>`__.
Step-by-Step Guide
-------------------
.. image:: imgs/Source-Azure-Blob.png
:alt: Source Connector Azure Blob Storage
1. **Access the Create Source Page**. Navigate to the "Sources" section on the left sidebar and click the "New Source" button.
2. **Select Source Type**. Select **Azure Blob Storage** from the ``Type`` dropdown menu.
3. **Configure Source Details to connect to Azure Blob Storage**
- ``Name`` (*required*): Enter a unique name for the source connector.
- ``Remote URL`` (*required*): Enter the URL that points to the Azure Blob Storage container.
- ``Azure Account Name`` (*required*): Provide the name of your Azure storage account.
- ``Azure Account Key``: Enter your Azure storage account key for authentication.
- ``Azure Connection String``: If applicable, provide the Azure connection string that overrides all other connection parameters.
- ``SAS Token``: If using a shared access signature for authentication, provide the SAS token here.
4. **Additional Settings**
- Check ``Recursive`` if you want the platform to recursively ingest data from sub-folders within the container.
5. **Submit**. After filling in the necessary information, click 'Submit' to create the Source Connector. The newly completed connector will be listed on the Sources dashboard.
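To verify the account, container, and credentials before creating the connector, a minimal sketch with the ``azure-storage-blob`` package (account name, key, and container below are placeholders; a connection string or SAS token also works):

.. code-block:: python

    from azure.storage.blob import BlobServiceClient

    # Placeholder account, account key, and container name.
    service = BlobServiceClient(
        account_url="https://<account-name>.blob.core.windows.net",
        credential="<account-key>",
    )
    container = service.get_container_client("<container-name>")
    for blob in container.list_blobs():
        print(blob.name)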

View File

@ -0,0 +1,43 @@
Elasticsearch
=============
This page describes how to ingest your data from Elasticsearch.
Prerequisites
--------------
- Elasticsearch Local Install or Cloud Service
- Index Name
- Username and Password for Elasticsearch access (if required)
- Cloud ID (if using Elastic Cloud)
- API Key and API Key ID for authentication (if required)
For more information, please refer to `Elasticsearch documentation <https://www.elastic.co/guide/en/elasticsearch/reference/current/index.html>`__.
Step-by-Step Guide
-------------------
.. image:: imgs/Source-Elasticsearch.png
:alt: Source Connector Elasticsearch
1. **Access the Create Source Page**. Navigate to the "Sources" section on the left sidebar and click the "New Source" button.
2. **Select Source Type**. Select **Elasticsearch** from the ``Type`` dropdown menu.
3. **Configure Source Details to connect to Elasticsearch**
- ``Name`` (*required*): Enter a unique name for the Elasticsearch source connector.
- ``URL`` (*required*): Specify the Elasticsearch cluster URL.
- ``Batch Size`` (*required*): Set the number of records the platform will process in a single batch.
- ``Index Name`` (*required*): Provide the name of the index in the Elasticsearch cluster.
- ``Username``: Input the username for the Elasticsearch cluster if authentication is enabled.
- ``Password``: Enter the password associated with the username for the Elasticsearch cluster.
- ``Cloud ID``: If using Elastic Cloud, specify the Cloud ID.
- ``API Key``: Provide the API Key for authentication if this method is used.
- ``API Key ID``: Enter the ID associated with the API Key if needed.
- ``Bearer Auth``: Specify if bearer authentication is used (leave blank if not applicable).
- ``CA Certs``: Include the CA Certificates for SSL connection if required.
- ``SSL Assert Fingerprint``: Provide the SSL fingerprint for secure connection if necessary.
4. **Submit**. After filling in the necessary information, click 'Submit' to create the Source Connector. The newly completed Elasticsearch connector will be listed on the Sources dashboard.

View File

@ -0,0 +1,36 @@
Google Cloud Storage
====================
This page describes how to ingest your data from Google Cloud Storage.
Prerequisites
--------------
- Google Cloud Storage Bucket URL
- Service Account Key for Google Cloud Storage
For more information, please refer to `Google Cloud Storage documentation <https://cloud.google.com/storage/docs>`__.
Step-by-Step Guide
-------------------
.. image:: imgs/Source-Google-Cloud.png
:alt: Source Connector Google Cloud Storage
1. **Access the Create Source Page**. Navigate to the "Sources" section on the left sidebar and click the "New Source" button.
2. **Select Source Type**. Select **Google Cloud Storage** from the ``Type`` dropdown menu.
3. **Configure Source Details to connect to Google Cloud Storage**
- ``Name`` (*required*): Enter a unique name for the Google Cloud Storage source connector.
- ``Remote URL`` (*required*): Specify the gs:// URL pointing to your Google Cloud Storage bucket and path.
- ``Service Account Key`` (*required*): Enter the JSON content of a Google Service Account Key that has the necessary permissions to access the specified bucket.
4. **Additional Settings**
- Check ``Uncompress archives`` if the data to be ingested is compressed and requires uncompression.
- Check ``Recursive processing`` if you want the platform to ingest data recursively from sub-folders within the bucket.
5. **Submit**. After filling in the necessary information, click 'Submit' to create the Source Connector. The newly completed Google Cloud Storage connector will be listed on the Sources dashboard.

View File

@ -0,0 +1,37 @@
Google Drive
============
This page describes how to ingest your data from Google Drive.
Prerequisites
--------------
- Google Account
- Google Drive Folders and Files
- Service Account Key with permissions to access the Google Drive
For more information, please refer to `Google Drive API documentation <https://developers.google.com/drive/api/v3/about-sdk>`__.
Step-by-Step Guide
-------------------
.. image:: imgs/Source-Google-Drive.png
:alt: Source Connector Google Drive
1. **Access the Create Source Page**. Navigate to the "Sources" section on the left sidebar and click the "New Source" button.
2. **Select Source Type**. Select **Google Drive** from the ``Type`` dropdown menu.
3. **Configure Source Details to connect to Google Drive**
- ``Name`` (*required*): Enter a unique name for the Google Drive source connector.
- ``Drive ID`` (*required*): Input the Drive ID associated with the Google Drive you wish to connect.
- ``Service Account Key`` (*required*): Provide the Service Account Key that has been granted access to the specified Google Drive.
- ``Extension``: Specify the file extensions to be included in the ingestion process, if filtering is required.
4. **Additional Settings**
- Check ``Recursive`` if you want the platform to recursively ingest files from sub-folders within the specified Google Drive.
5. **Submit**. After filling in the necessary information, click 'Submit' to create the Source Connector. The newly completed Google Drive connector will be listed on the Sources dashboard.
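To confirm the service account can see the Drive before creating the connector, a minimal sketch with ``google-api-python-client`` and ``google-auth`` (key file and Drive/folder ID below are placeholders):

.. code-block:: python

    from google.oauth2 import service_account
    from googleapiclient.discovery import build

    # Placeholder key file and Drive/folder ID shared with the service account.
    creds = service_account.Credentials.from_service_account_file(
        "<service-account-key>.json",
        scopes=["https://www.googleapis.com/auth/drive.readonly"],
    )
    drive = build("drive", "v3", credentials=creds)
    results = drive.files().list(q="'<drive-or-folder-id>' in parents", pageSize=10).execute()
    for item in results.get("files", []):
        print(item["name"])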

(New screenshot image files added; binary content not shown in the diff.)

View File

@ -0,0 +1,43 @@
OneDrive Cloud Storage
======================
This page describes how to ingest your data from OneDrive.
Prerequisites
--------------
- Microsoft OneDrive Account
- Client ID and Client Credential with Azure AD
- Tenant ID for Azure AD
- Principle Name (usually Azure AD email)
- Path to the OneDrive folder to ingest from
For more information, please refer to `OneDrive API documentation <https://docs.microsoft.com/en-us/onedrive/developer/rest-api/>`__.
Step-by-Step Guide
-------------------
.. image:: imgs/Source-OneDrive.png
:alt: Source Connector OneDrive
1. **Access the Create Source Page**. Navigate to the "Sources" section on the left sidebar and click the "New Source" button.
2. **Select Source Type**. Select **OneDrive Cloud Storage** from the ``Type`` dropdown menu.
3. **Configure Source Details to connect to OneDrive**
- ``Name`` (*required*): Enter a unique name for the OneDrive source connector.
- ``Client ID`` (*required*): Input the Client ID associated with Azure AD.
- ``Client Credential`` (*required*): Enter the Client Credential associated with the Client ID.
- ``Tenant ID`` (*required*): Specify the Tenant ID associated with Azure AD.
- ``Authority URL``: Provide the URL for the authentication token provider for Microsoft apps.
- ``Principle Name`` (*required*): Input the Principle Name associated with Azure AD, usually your Azure AD email.
- ``Path``: Specify the path within OneDrive from which to start parsing files.
4. **Additional Settings**
- Check ``Recursive`` if you want the platform to recursively ingest files from sub-folders within the specified OneDrive path.
5. **Submit**. After filling in the necessary information, click 'Submit' to create the Source Connector. The newly completed OneDrive connector will be listed on the Sources dashboard.
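To verify the Client ID, Tenant ID, and Client Credential before creating the connector, a minimal sketch with ``msal`` that only acquires a Microsoft Graph token (all identifiers below are placeholders):

.. code-block:: python

    import msal

    # Placeholder app registration values from Azure AD.
    app = msal.ConfidentialClientApplication(
        client_id="<client-id>",
        authority="https://login.microsoftonline.com/<tenant-id>",
        client_credential="<client-credential>",
    )
    token = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])
    print("Token acquired:", "access_token" in token)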

View File

@ -0,0 +1,40 @@
OpenSearch
==========
This page describes how to ingest your data from OpenSearch.
Prerequisites
--------------
- OpenSearch Hosts
- Index Name
- Username and Password (if required)
- SSL configuration (if required)
For more information, please refer to `OpenSearch documentation <https://opensearch.org/docs/latest/>`__.
Step-by-Step Guide
-------------------
.. image:: imgs/Source-OpenSearch.png
:alt: Source Connector OpenSearch
1. **Access the Create Source Page**. Navigate to the "Sources" section on the left sidebar and click the "New Source" button.
2. **Select Source Type**. Select **OpenSearch** from the ``Type`` dropdown menu.
3. **Configure Source Details to connect to OpenSearch**
- ``Name`` (*required*): Enter a unique name for the OpenSearch source connector.
- ``Hosts`` (*required*): Specify the OpenSearch cluster hosts, including the protocol and port, separated by commas.
- ``Index Name`` (*required*): Provide the name of the index from which to start ingesting data.
- ``Username``: Input the username for the OpenSearch cluster if authentication is enabled.
- ``Password``: Enter the password associated with the username for the OpenSearch cluster.
- ``Fields``: List the specific fields to be ingested from the OpenSearch index, separated by commas.
4. **Additional Settings**
- Check ``Use SSL for the connection`` if the OpenSearch cluster requires an SSL connection for security purposes.
5. **Submit**. After filling in the necessary information, click 'Submit' to create the Source Connector. The newly completed OpenSearch connector will be listed on the Sources dashboard.

View File

@ -0,0 +1,36 @@
Salesforce
==========
This page describes how to ingest your data from Salesforce.
Prerequisites
--------------
- Salesforce Account
- Salesforce categories (objects) you wish to access
- Consumer Key and Private Key (PEM) from Salesforce connected app
For more information, please refer to `Salesforce API documentation <https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/>`__.
Step-by-Step Guide
-------------------
.. image:: imgs/Source-Salesforce.png
:alt: Source Connector Salesforce
1. **Access the Create Source Page**. Navigate to the "Sources" section on the left sidebar and click the "New Source" button.
2. **Select Source Type**. Select **Salesforce** from the ``Type`` dropdown menu.
3. **Configure Source Details to connect to Salesforce**
- ``Name`` (*required*): Enter a unique name for the Salesforce source connector.
- ``Username`` (*required*): Enter the Salesforce username that has access to the required Salesforce categories.
- ``Salesforce categories`` (*required*): Specify the Salesforce objects you wish to access, such as Account, Case, etc.
- ``Consumer Key`` (*required*): Provide the Consumer Key from the Salesforce connected app.
- ``Private Key (PEM)`` (*required*): Input the Private Key associated with the Consumer Key for the Salesforce connected app.
- *Note: PEM begins with ``-----BEGIN RSA PRIVATE KEY-----`` and ends with ``-----END RSA PRIVATE KEY-----``.*
4. **Submit**. After filling in the necessary information, click 'Submit' to create the Source Connector. The newly completed Salesforce connector will be listed on the Sources dashboard.
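To check the username, Consumer Key, and private key before creating the connector, a minimal sketch with the ``simple-salesforce`` package using the JWT bearer flow (username, key, and PEM path below are placeholders):

.. code-block:: python

    from simple_salesforce import Salesforce

    # Placeholder username, consumer key, and private key file for the connected app.
    sf = Salesforce(
        username="<user@example.com>",
        consumer_key="<consumer-key>",
        privatekey_file="<private-key>.pem",
    )
    result = sf.query("SELECT Id, Name FROM Account LIMIT 5")
    print(result["totalSize"], "accounts visible to this user")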

View File

@ -0,0 +1,37 @@
SFTP Storage
============
This page describes how to ingest your data from an SFTP server.
Prerequisites
--------------
- SFTP server URL, Username and Password
- Directory path to start the data ingestion from
For more information, please refer to `SFTP documentation <https://en.wikipedia.org/wiki/SSH_File_Transfer_Protocol>`__.
Step-by-Step Guide
-------------------
.. image:: imgs/Source-SFTP.png
:alt: Source Connector SFTP
1. **Access the Create Source Page**. Navigate to the "Sources" section on the left sidebar and click the "New Source" button.
2. **Select Source Type**. Select **SFTP Storage** from the ``Type`` dropdown menu.
3. **Configure Source Details to connect to SFTP**
- ``Name`` (*required*): Enter a unique name for the SFTP source connector.
- ``Remote URL`` (*required*): Specify the SFTP server URL with the full path (e.g., sftp://host:port/path/).
- ``SFTP username``: Input the username for logging into the SFTP server.
- ``Password`` (*required*): Enter the password associated with the SFTP username.
4. **Additional Settings**
- Check ``Uncompress`` if the files to be ingested are compressed and require decompression.
- Check ``Recursive`` if you want the platform to recursively ingest files from subdirectories.
5. **Submit**. After filling in the necessary information, click 'Submit' to create the Source Connector. The newly completed SFTP connector will be listed on the Sources dashboard.
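To verify the server, credentials, and directory path before creating the connector, a minimal sketch with ``paramiko`` (host, port, credentials, and path below are placeholders):

.. code-block:: python

    import paramiko

    # Placeholder host, port, credentials, and directory from the Remote URL.
    transport = paramiko.Transport(("<host>", 22))
    transport.connect(username="<username>", password="<password>")
    sftp = paramiko.SFTPClient.from_transport(transport)
    print(sftp.listdir("/<path>"))
    sftp.close()
    transport.close()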

View File

@ -0,0 +1,38 @@
Sharepoint
==========
This page describes how to ingest your documents from Sharepoint sites.
Prerequisites
--------------
- Sharepoint Site URL
- Client ID and Client Credential with access to the Sharepoint instance
For more information, please refer to `Sharepoint Online documentation <https://docs.microsoft.com/en-us/sharepoint/dev/>`__.
Step-by-Step Guide
-------------------
.. image:: imgs/Source-Sharepoint.png
:alt: Source Connector Sharepoint
1. **Access the Create Source Page**. Navigate to the "Sources" section on the left sidebar and click the "New Source" button.
2. **Select Source Type**. Select **Sharepoint** from the ``Type`` dropdown menu.
3. **Configure Source Details to connect to Sharepoint**
- ``Name`` (*required*): Enter a unique name for the Sharepoint source connector.
- ``Client ID`` (*required*): Input the Client ID provided by Sharepoint for app registration.
- ``Client Credential`` (*required*): Enter the Client Credential (secret or certificate) associated with the Client ID.
- ``Site URL`` (*required*): Provide the base URL of the Sharepoint site you wish to connect to.
- ``Path`` (*required*): Specify the path from which to start parsing files. Default is "Shared Documents".
4. **Additional Settings**
- Check ``Recursive`` if you want the platform to recursively ingest data from sub-folders within the specified path.
- Check ``Files Only`` if you want the platform to ingest files without considering the folder structure.
5. **Submit**. After filling in the necessary information, click 'Submit' to create the Source Connector. The newly completed Sharepoint connector will be listed on the Sources dashboard.
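To confirm the Client ID, Client Credential, and Site URL before creating the connector, a minimal sketch with the ``office365-rest-python-client`` package (site URL and app credentials below are placeholders):

.. code-block:: python

    from office365.runtime.auth.client_credential import ClientCredential
    from office365.sharepoint.client_context import ClientContext

    # Placeholder site URL and app registration credentials.
    ctx = ClientContext("https://<tenant>.sharepoint.com/sites/<site>").with_credentials(
        ClientCredential("<client-id>", "<client-secret>")
    )
    web = ctx.web.get().execute_query()
    print("Connected to site:", web.properties["Title"])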

View File

@ -0,0 +1,25 @@
Platform Source Connectors
==========================
Source connectors are essential components in data integration systems that establish a link between your files and the data ingestion process. They facilitate the batch processing of files, allowing for the systematic retrieval and ingestion of data stored in various file formats.
.. figure:: imgs/01-Sources-Dashboard.png
:alt: sources
Sources Dashboard
**List of Source Connectors**
.. toctree::
:maxdepth: 1
platform_sources/amazon_s3_source
platform_sources/azure_blob
platform_sources/elasticsearch_source
platform_sources/google_cloud_source
platform_sources/google_drive
platform_sources/onedrive
platform_sources/opensearch
platform_sources/salesforce
platform_sources/sftp
platform_sources/sharepoint

View File

@ -0,0 +1,100 @@
Workflows Automation
====================
Workflow Dashboard
------------------
A Workflow in the Unstructured Platform is a defined sequence of processes that automate the data handling from source to destination. It allows users to configure how and when data should be ingested, processed, and stored.
Workflows are crucial for establishing a systematic approach to managing data flows within the platform, ensuring consistency, efficiency, and adherence to specific data processing requirements.
.. image:: imgs/03b-Workflow-completed.png
:alt: workflow completed
.. note::
The "Show Ad Hoc Jobs" switch shows or hides one-time jobs that do not repeat on a schedule.
Set Up a Workflow Automation
----------------------------
.. image:: imgs/03a-Workflow-overview.png
:alt: workflow overview
The key components of a Workflow include:
- **Name**: Assign a unique identifier to the workflow for easy recognition and management.
- **Schedule Type**: Determine how frequently the workflow should run. While scheduling is available for regular automated execution daily or weekly, workflows can also be triggered on a one-off basis as needed.
- **Sources and Destination**: Select the data's origin (Source Connectors) and endpoint (Destination Connectors) for the workflow. You can configure multiple Source Connectors to aggregate data from various origins, and multiple Destination Connectors to replicate the processed data across several locations.
- **Strategy**: Choose a data processing strategy. See the alternative strategies you can use in the following section.
- **Options**: Tailor the workflow further with options to exclude certain elements, adjust connector settings (including page breaks or retaining XML tags), and decide whether to reprocess all documents. See the explanation of these options in the following section.
- **Chunking and Embedding Options**: Fine-tune how the data is segmented ('Chunker Type') and select the type of data encoding ('Encoder Type') if necessary. See the explanation of these options in the following section.
**Options to Fine-Tune Data Processing in the Workflows**
- **Strategy**: Choose a data processing strategy. The available options are:
- ``auto`` (default strategy): The “auto” strategy will choose the partitioning strategy based on document characteristics and the function kwargs.
- ``fast``: The “fast” strategy will leverage traditional NLP extraction techniques to pull all text elements quickly. The “fast” strategy is not recommended for image-based file types.
- ``hi_res``: The “hi_res” strategy will identify the document's layout using detectron2. The advantage of “hi_res” is that it uses the document layout to gain additional information about document elements. We recommend using this strategy if your use case is highly sensitive to correct classifications for document elements.
- ``ocr_only``: Leverage Optical Character Recognition to extract text from the image-based files.
- **Options**: Tailor the workflow further with the following options
- ``Elements to Exclude``: Select the `element types <https://unstructured-io.github.io/unstructured/introduction/overview.html#id1>`__ you want to exclude from document processing. This option is useful if you want to drop specific elements, such as Table or Image elements, from the output.
- Connector Settings:
- ``Include Page Breaks``: If checked, the output will include page breaks if the file type supports it. For more information about page breaks, check out the documentation `here <https://unstructured-io.github.io/unstructured/apis/api_parameters.html#include-page-breaks>`__.
- ``Infer Table Structure``: Check if you want to extract tables from PDFs or images.
- ``Keep XML Tags``: If checked, the output will retain the XML tags. This only applies to partition_xml. For more information about XML tags, check out the documentation `here <https://unstructured-io.github.io/unstructured/apis/api_parameters.html#xml-keep-tags>`__.
- ``Reprocess all documents``: If checked, the workflow will reprocess all previously processed documents.
- **Chunking Options**: Turn chunking of the processed data on or off (see the sketch after this list for how these strategies map to the open-source library). When turned on, users can select one of two chunking strategies:
- ``Chunk by Title``: When a "Title" element appears, it marks the start of a new section. The system will then finish the current chunk and begin a new one, even if the current chunk has space to include the "Title" element. For more information about chunk by title, please refer to the documentation `here <https://unstructured-io.github.io/unstructured/core/chunking.html#by-title-chunking-strategy>`__.
- ``Basic``: This strategy combines the sequential elements to optimize the size of each chunk while adhering to the predefined "max_characters" (hard maximum) and "new_after_n_chars" (soft maximum) settings. For more information about basic chunking, please refer to the documentation `here <https://unstructured-io.github.io/unstructured/core/chunking.html#basic-chunking-strategy>`__.
- **Embedding Options**: Turn embedding (vectorization) of the processed data on or off. When turned on, users can select one of two embedding providers:
- ``OpenAI``: Enter the *API Key* and select the model name from the dropdown menu. For more information about OpenAI embedding, please refer to the documentation `here <https://unstructured-io.github.io/unstructured/core/embedding.html#openaiembeddingencoder>`__.
- ``Bedrock``: Enter the *AWS Access Key*, *AWS Secret Key*, and *AWS Region* to connect to AWS Bedrock embedding models. For more information about AWS Bedrock embedding, please refer to the documentation `here <https://unstructured-io.github.io/unstructured/core/embedding.html#bedrockembeddingencoder>`__.
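The chunking options above mirror the chunking strategies in the open-source ``unstructured`` library, so you can preview locally how a document will be chunked before wiring it into a workflow. A minimal sketch (the file path is a placeholder; this uses the open-source library, not the platform itself):

.. code-block:: python

    from unstructured.chunking.basic import chunk_elements
    from unstructured.chunking.title import chunk_by_title
    from unstructured.partition.auto import partition

    # Placeholder file path; partition first, then chunk the resulting elements.
    elements = partition(filename="<your-document>.pdf", strategy="fast")

    # "Chunk by Title" corresponds to chunk_by_title; "Basic" corresponds to chunk_elements.
    chunks = chunk_by_title(elements, max_characters=1000, new_after_n_chars=800)
    # chunks = chunk_elements(elements, max_characters=1000, new_after_n_chars=800)
    print(len(chunks), "chunks produced")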
Managing Workflow Actions
--------------------------
For each of the workflows on the Workflow list page, the following actions are available under the ``Actions`` dropdown menu next to the respective workflow name:
- **Edit**: Modify the existing configuration of your workflow. This can include changing the source, destination, scheduling, and chunking strategies, among other settings.
- **Delete**: Remove the workflow from the platform. Use this action cautiously, as it will permanently delete the workflow and its configurations.
- **Run**: Manually initiate the workflow outside of its scheduled runs. This is particularly useful for testing or ad-hoc data processing needs.
.. image:: imgs/03c-Workflows-Actions.png
:alt: workflow actions
Monitoring Workflow Status
---------------------------
On the Workflows list page, each workflow's status is a quick visual indicator of its current state. A workflow can be in one of the following three states:
- **Active**: An 'Active' status means the workflow is enabled and will run as scheduled. It is ready to process data according to its configuration.
- **Pause**: If you need to temporarily halt the workflow without altering its configuration, you can pause it. This is useful when the source data is undergoing maintenance, or you're implementing changes that may affect the workflow's operation.
- **Archive**: Workflows that are no longer in use but need to be kept for record-keeping or compliance purposes can be archived. This status removes the workflow from active duty without deleting its setup.
.. image:: imgs/03d-Workflows-Status.png
:alt: workflow status

View File

@ -1 +1 @@
__version__ = "0.12.6-dev3" # pragma: no cover
__version__ = "0.12.6-dev4" # pragma: no cover