Docs: Adding Docs for Import/Export Database/Schema/Tables (#18131)

Co-authored-by: Prajwal Pandit
Prajwal214 2024-10-07 10:23:30 +05:30 committed by GitHub
parent 2c84442e39
commit bb02af26f0
65 changed files with 1100 additions and 63 deletions

View File

@ -579,6 +579,12 @@ site_menu:
url: /how-to-guides/data-discovery/details
- category: How-to Guides / Data Discovery / Add Complex Queries using Advanced Search
url: /how-to-guides/data-discovery/advanced
- category: How-to Guides / Data Discovery / Bulk Upload Data Assets
url: /how-to-guides/data-discovery/bulk-upload
- category: How-to Guides / Data Discovery / How to Bulk Import Data Asset
url: /how-to-guides/data-discovery/import
- category: How-to Guides / Data Discovery / How to Export Data Asset
url: /how-to-guides/data-discovery/export
- category: How-to Guides / Data Collaboration
url: /how-to-guides/data-collaboration

View File

@ -22,14 +22,14 @@ You can easily set up a database service for metadata extraction from Collate Sa
alt="Selecting Database Service"
caption="Selecting Database Service" /%}
4. **Enter the Connection Details** You can view the available documentation in the side panel for guidance. Also, refer to the connector [documentation](/connectors).
3. **Enter the Connection Details**. You can view the available documentation in the side panel for guidance. Also, refer to the connector [documentation](/connectors).
{% image
src="/images/v1.5/getting-started/configure-connector.png"
alt="Updating Connection Details"
caption="Updating Connection Details" /%}
5. **Allow the Collate SaaS IP**. In the Connection Details, you will see the IP Address unique to your cluster, You need to Allow the `IP` to Access the datasource.
4. **Allow the Collate SaaS IP**. In the Connection Details, you will see an IP address unique to your cluster. You need to allow this `IP` to access the data source.
{% note %}
@ -41,7 +41,7 @@ This step is required only for Collate SaaS. If you are using Hybrid SaaS, you w
alt="Collate SaaS IP"
caption="Collate SaaS IP" /%}
6. **Test the connection** to verify the status. The test connection will check if the Service is reachable from Collate.
5. **Test the connection** to verify the status. The test connection will check if the Service is reachable from Collate.
{% image
src="/images/v1.5/getting-started/test-connection.png"

View File

@ -0,0 +1,30 @@
---
title: Bulk Upload Data Assets
slug: /how-to-guides/data-discovery/bulk-upload
collate: true
---
# How to Bulk Upload Data Assets
Collate offers a Data Assets Bulk Upload feature, enabling users to efficiently upload multiple data assets via a CSV file. This functionality allows for the bulk import or update of database, schema, and table entities in a single operation, saving time and effort. Additionally, the inline editor provides a convenient way to validate and modify the data assets before finalizing the import process.
{% youtube videoId="CXxDdS6AifY" start="0:00" end="2:19" width="560px" height="315px" /%}
Both importing and exporting Data Assets from Collate are quick and easy!
{%inlineCallout
color="violet-70"
bold="Data Asset Import"
icon="MdArrowForward"
href="/how-to-guides/data-discovery/import"%}
Quickly import data assets as a CSV file.
{%/inlineCallout%}
{%inlineCallout
color="violet-70"
bold="Data Asset Export"
icon="MdArrowForward"
href="/how-to-guides/data-discovery/export"%}
Quickly export data assets as a CSV file.
{%/inlineCallout%}

View File

@ -0,0 +1,92 @@
---
title: Export Data Asset
slug: /how-to-guides/data-discovery/export
collate: true
---
# Export Data Asset
Exporting a Data Asset from Collate is simple. Below are the steps to bulk export various data assets, such as Database Services, Databases, Schemas, and Tables.
## How to Bulk Export a Database Service
1. Navigate to the Database Service you want to export by going to **Settings > Services > Database**.
2. For this example, we are exporting the `Snowflake` service.
3. Click on the **⋮** icon and select **Export** to download the Database Service CSV file.
{% image
src="/images/v1.5/how-to-guides/discovery/export1.png"
alt="Export Database Service CSV File"
caption="Export Database Service CSV File"
/%}
{% note %}
You can also export the Database Service using the API with the following endpoint:
`/api/v1/services/databaseServices/name/{name}/export`
Make sure to replace `{name}` with the Fully Qualified Name (FQN) of the Database Service.
{% /note %}
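For scripted exports, you can call the endpoint above directly. The following is a minimal Python sketch using `requests`; the host URL, token, and FQN are placeholders, and the same pattern applies to the Database, Schema, and Table export endpoints in the sections below.

```python
import requests

# Placeholders: adjust the host, token, and FQN for your instance.
HOST = "https://your-collate-instance/api"
TOKEN = "<your-jwt-token>"
service_fqn = "Snowflake"  # FQN of the Database Service

resp = requests.get(
    f"{HOST}/v1/services/databaseServices/name/{service_fqn}/export",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()

# The response body is the CSV payload; save it to edit and re-import later.
with open("snowflake_service_export.csv", "w") as f:
    f.write(resp.text)
```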
## How to Bulk Export a Database
1. In this example, we are exporting the `DEMO` database under **Snowflake**.
2. Click on the **⋮** icon and select **Export** to download the Database CSV file.
{% image
src="/images/v1.5/how-to-guides/discovery/export2.png"
alt="Export Database CSV File"
caption="Export Database CSV File"
/%}
{% note %}
You can also export the Database using the API with the following endpoint:
`/api/v1/databases/name/{name}/export`
Make sure to replace `{name}` with the Fully Qualified Name (FQN) of the Database.
{% /note %}
## How to Bulk Export a Database Schema
1. In this example, we are exporting the `JAFFLE_SHOP` schema under **Snowflake > DEMO**.
2. Click on the **⋮** icon and select **Export** to download the Database Schema CSV file.
{% image
src="/images/v1.5/how-to-guides/discovery/export3.png"
alt="Export Database Schema CSV File"
caption="Export Database Schema CSV File"
/%}
{% note %}
You can also export the Database Schema using the API with the following endpoint:
`/api/v1/databaseSchemas/name/{name}/export`
Make sure to replace `{name}` with the Fully Qualified Name (FQN) of the Database Schema.
{% /note %}
## How to Bulk Export a Table
1. In this example, we are exporting the `CUSTOMERS` table under **Snowflake > DEMO > JAFFLE_SHOP**.
2. Click on the **⋮** icon and select **Export** to download the Table CSV file.
{% image
src="/images/v1.5/how-to-guides/discovery/export4.png"
alt="Export Table CSV File"
caption="Export Table CSV File"
/%}
{% note %}
You can also export the Table using the API with the following endpoint:
`/api/v1/tables/name/{name}/export`
Make sure to replace `{name}` with the Fully Qualified Name (FQN) of the Table.
{% /note %}
{%inlineCallout
color="violet-70"
bold="Data Asset Import"
icon="MdArrowBack"
href="/how-to-guides/data-discovery/import"%}
Quickly import data assets as a CSV file.
{%/inlineCallout%}

View File

@ -0,0 +1,327 @@
---
title: Bulk Import Data Asset
slug: /how-to-guides/data-discovery/import
collate: true
---
# Import Data Asset
Importing a Data Asset into Collate is simple. Below are the steps to bulk import various data assets, such as Database Services, Databases, Schemas, and Tables.
## How to Bulk Import a Database Service
To import a Database Service into Collate:
1. Navigate to the Database Service you want to import by going to **Settings > Services > Database**.
2. For this example, we are importing the `Snowflake` service.
3. Click on the **⋮** icon and select **Import** to upload the Database Service CSV file.
{% image
src="/images/v1.5/how-to-guides/discovery/import1.png"
alt="Import a Database Service"
caption="Import a Database Service"
/%}
4. Upload/Drop the Database Service CSV file that you want to import. Alternatively, you can `export` an existing Database Service CSV as a template, make the necessary edits, and then upload the updated file.
Once you have the template, you can fill in the following details:
- **name** (required): This field contains the name of the database.
- **displayName**: This field holds the display name of the database.
- **description**: This field contains a detailed description or information about the database.
- **owner**: This field specifies the owner of the database.
- **tags**: This field contains the tags associated with the database.
- **glossaryTerms**: This field holds the glossary terms linked to the database.
- **tiers**: This field defines the tiers associated with the database service.
- **domain**: This field contains the domain assigned to the data asset.
{% image
src="/images/v1.5/how-to-guides/discovery/import2.png"
alt="Upload the Database Service CSV file"
caption="Upload the Database Service CSV file"
/%}
5. You can now preview the uploaded Database Service CSV file and add or modify data using the inline editor.
{% image
src="/images/v1.5/how-to-guides/discovery/import3.png"
alt="Preview of the Database Service"
caption="Preview of the Database Service"
/%}
6. Validate the updated Data Assets and confirm the changes. A success or failure message will then be displayed based on the outcome.
{% image
src="/images/v1.5/how-to-guides/discovery/import4.png"
alt="Validate the updated Data Assets"
caption="Validate the updated Data Assets"
/%}
7. The Database Service has been updated successfully, and you can now view the changes in the Database Service.
{% image
src="/images/v1.5/how-to-guides/discovery/import5.png"
alt="Database Service Import successful"
caption="Database Service Import successful"
/%}
{% note %}
You can also import the Database Service using the API with the following endpoint:
`/api/v1/services/databaseServices/name/{name}/import`
Make sure to replace `{name}` with the Fully Qualified Name (FQN) of the Database Service.
{% /note %}
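Scripted imports follow the same pattern. The sketch below assumes the endpoint accepts the CSV payload as the request body via `PUT` and supports a `dryRun` query parameter for validation; verify both assumptions against your instance's API reference.

```python
import requests

# Placeholders: adjust the host, token, and FQN for your instance.
HOST = "https://your-collate-instance/api"
TOKEN = "<your-jwt-token>"
service_fqn = "Snowflake"

with open("snowflake_service_export.csv") as f:
    csv_payload = f.read()

resp = requests.put(
    f"{HOST}/v1/services/databaseServices/name/{service_fqn}/import",
    params={"dryRun": "true"},  # assumed flag: validate first, "false" applies changes
    headers={"Authorization": f"Bearer {TOKEN}", "Content-Type": "text/plain"},
    data=csv_payload,
)
resp.raise_for_status()
print(resp.json())  # summary of rows validated or imported
```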
## How to Bulk Import a Database
To import a Database into Collate:
1. In this example, we are importing the `DEMO` database under **Snowflake**.
2. Click on the **⋮** icon and select **Import** to upload the Database CSV file.
{% image
src="/images/v1.5/how-to-guides/discovery/import6.png"
alt="Import a Database"
caption="Import a Database"
/%}
3. Upload/Drop the Database CSV file that you want to import. Alternatively, you can `export` an existing Database CSV as a template, make the necessary edits, and then upload the updated file.
Once you have the template, you can fill in the following details:
- **name** (required): This field contains the name of the database.
- **displayName**: This field holds the display name of the database.
- **description**: This field contains a detailed description or information about the database.
- **owner**: This field specifies the owner of the database.
- **tags**: This field contains the tags associated with the database.
- **glossaryTerms**: This field holds the glossary terms linked to the database.
- **tiers**: This field defines the tiers associated with the database.
- **sourceUrl**: This field contains the Source URL of the data asset. Example for the Snowflake database: `https://app.snowflake.com/<account>/#/data/databases/DEMO/`
- **retentionPeriod**: This field contains the retention period of the data asset. Period is expressed as a duration in ISO 8601 format in UTC. Example - `P23DT23H`.
- **domain**: This field contains the domain assigned to the data asset.
{% image
src="/images/v1.5/how-to-guides/discovery/import7.png"
alt="Upload the Database CSV file"
caption="Upload the Database CSV file"
/%}
4. You can now preview the uploaded Database CSV file and add or modify data using the inline editor.
{% image
src="/images/v1.5/how-to-guides/discovery/import8.png"
alt="Preview of the Database"
caption="Preview of the Database"
/%}
5. Validate the updated Data Assets and confirm the changes. A success or failure message will then be displayed based on the outcome.
{% image
src="/images/v1.5/how-to-guides/discovery/import9.png"
alt="Validate the updated Data Assets"
caption="Validate the updated Data Assets"
/%}
6. The Database has been updated successfully, and you can now view the changes in the Database.
{% image
src="/images/v1.5/how-to-guides/discovery/import10.png"
alt="Database Import successful"
caption="Database Import successful"
/%}
{% note %}
You can also import the Database using the API with the following endpoint:
`/api/v1/databases/name/{name}/import`
Make sure to replace `{name}` with the Fully Qualified Name (FQN) of the Database.
{% /note %}
## How to Bulk Import a Database Schema
To import a Database Schema into Collate:
1. In this example, we are importing the `JAFFLE_SHOP` schema under **Snowflake > DEMO**.
2. Click on the **⋮** icon and select **Import** to upload the Database Schema CSV file.
{% image
src="/images/v1.5/how-to-guides/discovery/import11.png"
alt="Import a Database Schema"
caption="Import a Database Schema"
/%}
3. Upload/Drop the Database Schema CSV file that you want to import. Alternatively, you can `export` an existing Database Schema CSV as a template, make the necessary edits, and then upload the updated file.
Once you have the template, you can fill in the following details:
- **name** (required): This field contains the name of the database schema.
- **displayName**: This field holds the display name of the database schema.
- **description**: This field contains a detailed description or information about the database schema.
- **owner**: This field specifies the owner of the database schema.
- **tags**: This field contains the tags associated with the database schema.
- **glossaryTerms**: This field holds the glossary terms linked to the database schema.
- **tiers**: This field defines the tiers associated with the database schema.
- **sourceUrl**: This field contains the Source URL of the data asset. Example for the Snowflake database schema: `https://app.snowflake.com/<account>/#/data/databases/DEMO/schemas/JAFFLE_SHOP`
- **retentionPeriod**: This field contains the retention period of the data asset. Period is expressed as a duration in ISO 8601 format in UTC. Example - `P23DT23H`.
{% image
src="/images/v1.5/how-to-guides/discovery/import12.png"
alt="Upload the Database Schema CSV file"
caption="Upload the Database Schema CSV file"
/%}
4. You can now preview the uploaded Database Schema CSV file and add or modify data using the inline editor.
{% image
src="/images/v1.5/how-to-guides/discovery/import13.png"
alt="Preview of the Database Schema"
caption="Preview of the Database Schema"
/%}
5. Validate the updated Data Assets and confirm the changes. A success or failure message will then be displayed based on the outcome.
{% image
src="/images/v1.5/how-to-guides/discovery/import14.png"
alt="Validate the updated Data Assets"
caption="Validate the updated Data Assets"
/%}
6. The Database Schema has been updated successfully, and you can now view the changes in the Database Schema.
{% image
src="/images/v1.5/how-to-guides/discovery/import15.png"
alt="Database Schema Import successful"
caption="Database Schema Import successful"
/%}
{% note %}
You can also import the Database Schema using the API with the following endpoint:
`/api/v1/databaseSchemas/name/{name}/import`
Make sure to replace `{name}` with the Fully Qualified Name (FQN) of the Database Schema.
{% /note %}
## How to Bulk Import a Table
To import a Table into Collate:
1. In this example, we are importing the `CUSTOMERS` table under **Snowflake > DEMO > JAFFLE_SHOP**.
2. Click on the **⋮** icon and select **Import** to upload the Table CSV file.
{% image
src="/images/v1.5/how-to-guides/discovery/import16.png"
alt="Import a Table"
caption="Import a Table"
/%}
3. Upload/Drop the Table CSV file that you want to import. Alternatively, you can `export` an existing table CSV as a template, make the necessary edits, and then upload the updated file.
Once you have the template, you can fill in the following details:
- **name**: This field contains the name of the table.
- **displayName**: This field holds the display name of the table.
- **description**: This field contains a detailed description or information about the table.
- **owner**: This field specifies the owner of the table.
- **tags**: This field contains the tags associated with the table.
- **glossaryTerms**: This field holds the glossary terms linked to the table.
- **tiers**: This field defines the tiers associated with the table.
- **sourceUrl**: This field contains the Source URL of the data asset. Example for the Snowflake table: `https://app.snowflake.com/<account>/#/data/databases/DEMO/schemas/JAFFLE_SHOP/table/CUSTOMERS`
- **retentionPeriod**: This field contains the retention period of the data asset. Period is expressed as a duration in ISO 8601 format in UTC. Example - `P23DT23H`.
- **column.fullyQualifiedName** (required): This field holds the fully qualified name of the column.
- **column.displayName**: This field holds the display name of the column, if different from the technical name.
- **column.description**: This field holds a detailed description or information about the column's purpose or content.
- **column.dataTypeDisplay**: This field holds the data type for display purposes.
- **column.dataType**: This field holds the data type of the column (e.g., `VARCHAR`, `INT`, `BOOLEAN`).
- **column.arrayDataType**: If the column is an array, this field will specify the data type of the array elements.
- **column.dataLength**: This field holds the length or size of the data.
- **column.tags**: This field holds the tags associated with the column, which help categorize it.
- **column.glossaryTerms**: This field holds the Glossary terms linked to the column to provide standardized definitions.
{% image
src="/images/v1.5/how-to-guides/discovery/import17.png"
alt="Upload the Table CSV file"
caption="Upload the Table CSV file"
/%}
4. You can now preview the uploaded Table CSV file and add or modify data using the inline editor.
{% image
src="/images/v1.5/how-to-guides/discovery/import18.png"
alt="Preview of the Table"
caption="Preview of the Table"
/%}
5. Validate the updated Data Assets and confirm the changes. A success or failure message will then be displayed based on the outcome.
{% image
src="/images/v1.5/how-to-guides/discovery/import19.png"
alt="Validate the updated Data Assets"
caption="Validate the updated Data Assets"
/%}
6. The Table has been updated successfully, and you can now view the changes in the Table.
{% image
src="/images/v1.5/how-to-guides/discovery/import20.png"
alt="Table Import successful"
caption="Table Import successful"
/%}
{% note %}
You can also import the Table using the API with the following endpoint:
`/api/v1/tables/name/{name}/import`
Make sure to replace `{name}` with the Fully Qualified Name (FQN) of the Table.
{% /note %}
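If you generate the import file programmatically, Python's `csv` module keeps the header aligned with the fields documented above. The header order and row values below are purely illustrative; an `export` of the actual table remains the authoritative template.

```python
import csv

# Header mirroring the documented table-import fields (order illustrative).
fields = [
    "name", "displayName", "description", "owner", "tags", "glossaryTerms",
    "tiers", "sourceUrl", "retentionPeriod", "column.fullyQualifiedName",
    "column.displayName", "column.description", "column.dataTypeDisplay",
    "column.dataType", "column.arrayDataType", "column.dataLength",
    "column.tags", "column.glossaryTerms",
]

with open("customers_import.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=fields)
    writer.writeheader()
    writer.writerow({
        "name": "CUSTOMERS",
        "description": "Customer master data",  # illustrative values only
        "column.fullyQualifiedName": "Snowflake.DEMO.JAFFLE_SHOP.CUSTOMERS.customer_id",
        "column.dataType": "NUMBER",
        "column.dataLength": "38",
    })
```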
{%inlineCallout
color="violet-70"
bold="Data Asset Export"
icon="MdArrowForward"
href="/how-to-guides/data-discovery/export"%}
Quickly export data assets as a CSV file.
{%/inlineCallout%}

View File

@ -599,6 +599,12 @@ site_menu:
url: /how-to-guides/data-discovery/details
- category: How-to Guides / Data Discovery / Add Complex Queries using Advanced Search
url: /how-to-guides/data-discovery/advanced
- category: How-to Guides / Data Discovery / Bulk Upload Data Assets
url: /how-to-guides/data-discovery/bulk-upload
- category: How-to Guides / Data Discovery / How to Bulk Import Data Asset
url: /how-to-guides/data-discovery/import
- category: How-to Guides / Data Discovery / How to Export Data Asset
url: /how-to-guides/data-discovery/export
- category: How-to Guides / Data Collaboration
url: /how-to-guides/data-collaboration

View File

@ -0,0 +1,57 @@
---
title: Collate SaaS
slug: /getting-started/day-1/collate-saas
collate: true
---
## Setting Up a Database Service for Metadata Extraction
You can easily set up a database service for metadata extraction from Collate SaaS in just a few minutes. For example, here's how to set up a connection using the `Snowflake` Connector:
1. Log in to your Collate SaaS instance, then navigate to **Settings > Services > Databases** and click on **Add New Service**.
{% image
src="/images/v1.6/getting-started/add-service.png"
alt="Adding Database Service"
caption="Adding Database Service" /%}
2. **Select the database type** you want to use. Enter details such as the name and description to identify the database. In this case, we are selecting `Snowflake`.
{% image
src="/images/v1.6/getting-started/select-service.png"
alt="Selecting Database Service"
caption="Selecting Database Service" /%}
3. **Enter the Connection Details**. You can view the available documentation in the side panel for guidance. Also, refer to the connector [documentation](/connectors).
{% image
src="/images/v1.6/getting-started/configure-connector.png"
alt="Updating Connection Details"
caption="Updating Connection Details" /%}
4. **Allow the Collate SaaS IP**. In the Connection Details, you will see an IP address unique to your cluster. You need to allow this `IP` to access the data source.
{% note %}
This step is required only for Collate SaaS. If you are using Hybrid SaaS, you will not see the IP address in the Service Connection details.
{% /note %}
{% image
src="/images/v1.6/getting-started/collate-saas-ip.png"
alt="Collate SaaS IP"
caption="Collate SaaS IP" /%}
5. **Test the connection** to verify the status. The test connection will check if the Service is reachable from Collate.
{% image
src="/images/v1.6/getting-started/test-connection.png"
alt="Verifying the Test Connection"
caption="Verifying the Test Connection" /%}
{%inlineCallout
color="violet-70"
bold="Explore Hybrid SaaS"
icon="MdArrowForward"
href="/getting-started/day-1/hybrid-saas"%}
You can read more about Hybrid SaaS.
{%/inlineCallout%}

View File

@ -4,7 +4,7 @@ slug: /getting-started/day-1/hybrid-saas/airflow
collate: true
---
{% partial file="/v1.5/deployment/external-ingestion.md" /%}
{% partial file="/v1.6/deployment/external-ingestion.md" /%}
# Run the ingestion from your Airflow
@ -102,7 +102,7 @@ the whole process.
The drawback here? You need to install some requirements, which is not always possible. Here you have two alternatives,
either you use the `PythonVirtualenvOperator`, or read below on how to run the ingestion with the `DockerOperator`.
{% partial file="/v1.5/deployment/run-connectors-class.md" /%}
{% partial file="/v1.6/deployment/run-connectors-class.md" /%}
## Docker Operator

View File

@ -234,7 +234,7 @@ Checking the credentials from the Airflow UI, we will see:
{% image
src="/images/v1.5/connectors/credentials/airflow-connection.png"
src="/images/v1.6/connectors/credentials/airflow-connection.png"
alt="Airflow Connection" /%}
#### Step 2 - Understanding the shape of a Connection

View File

@ -4,7 +4,7 @@ slug: /getting-started/day-1/hybrid-saas/gcs-composer
collate: true
---
{% partial file="/v1.5/deployment/external-ingestion.md" /%}
{% partial file="/v1.6/deployment/external-ingestion.md" /%}
# Run the ingestion from GCS Composer
@ -93,7 +93,7 @@ with DAG(
)
```
{% partial file="/v1.5/deployment/run-connectors-class.md" /%}
{% partial file="/v1.6/deployment/run-connectors-class.md" /%}
## Using the Kubernetes Pod Operator

View File

@ -4,7 +4,7 @@ slug: /getting-started/day-1/hybrid-saas/github-actions
collate: true
---
{% partial file="/v1.5/deployment/external-ingestion.md" /%}
{% partial file="/v1.6/deployment/external-ingestion.md" /%}
# Run the ingestion from GitHub Actions

View File

@ -16,7 +16,7 @@ There's two options on how to set up a data connector:
Any tool capable of running Python code can be used to configure the metadata extraction from your sources.
{% partial file="/v1.5/connectors/python-requirements.md" /%}
{% partial file="/v1.6/connectors/python-requirements.md" /%}
In this section we'll show you how the ingestion process works and how to test it from your laptop.

View File

@ -4,7 +4,7 @@ slug: /getting-started/day-1/hybrid-saas/mwaa
collate: true
---
{% partial file="/v1.5/deployment/external-ingestion.md" /%}
{% partial file="/v1.6/deployment/external-ingestion.md" /%}
# Run the ingestion from AWS MWAA
@ -92,7 +92,7 @@ with DAG(
Where you can update the YAML configuration and workflow classes accordingly. Further examples on how to
run the ingestion can be found on the documentation (e.g., [Snowflake](/connectors/database/snowflake)).
{% partial file="/v1.5/deployment/run-connectors-class.md" /%}
{% partial file="/v1.6/deployment/run-connectors-class.md" /%}
## Ingestion Workflows as an ECS Operator
@ -435,4 +435,4 @@ For Airflow providers, you will want to pull the provider versions from [the mat
Also note that the ingestion workflow function must be entirely self-contained as it will run by itself in the virtualenv. Any imports it needs, including the configuration, must exist within the function itself.
{% partial file="/v1.5/deployment/run-connectors-class.md" /%}
{% partial file="/v1.6/deployment/run-connectors-class.md" /%}

View File

@ -6,12 +6,14 @@ collate: true
# Getting Started: Day 1
Lets get started with your Collate service in five steps:
1. Set up a data connector
2. Ingest metadata
3. Invite users
4. Add roles
5. Create teams and add users
Get started with your Collate service in just a few simple steps:
1. Set up a Data Connector: Connect your data sources to begin collecting metadata.
2. Ingest Metadata: Run the metadata ingestion to gather and push data insights.
3. Invite Users: Add team members to collaborate and manage metadata together.
4. Explore the Features: Dive into Collate's rich feature set to unlock the full potential of your data.
**Ready to begin? Let's get started!**
## Requirements
@ -29,29 +31,52 @@ Connections to [custom data sources](/connectors/custom-connectors) can also be
There are two options for setting up a data connector:
1. **Run the connector in Collate SaaS**: In this scenario, you'll get an IP when you add the service. You need to give
access to this IP in your data sources.
2. **Run the connector in your infrastructure or laptop**: In this case, Collate won't be accessing the data, but rather
you'd control where and how the process is executed and Collate will only receive the output of the metadata extraction.
This is an interesting option for sources lying behind private networks or when external SaaS services are not allowed to
connect to your data sources. You can read more about how to extract metadata in these cases [here](/getting-started/day-1/hybrid-saas).
{% tilesContainer %}
{% tile
title="Run the connector in Collate SaaS"
description="Guide to start ingesting metadata seamlessly from your data sources."
link="/getting-started/day-1/collate-saas"
icon="discovery"
/%}
{% /tilesContainer %}
You can easily set up a database service in minutes to run the metadata extraction directly from Collate SaaS:
- Navigate to **Settings > Services > Databases**.
- Click on **Add New Service**.
- Select the database type you want. Enter the information, like name and description, to identify the database.
- Enter the Connection Details. You can view the documentation available in the side panel.
- Test the connection to verify the connection status.
2. **Run the connector in your infrastructure or laptop**: The hybrid model offers organizations the flexibility to run metadata ingestion components within their own infrastructure. This approach ensures that Collate's managed service doesn't require direct access to the underlying data. Instead, only the metadata is collected locally and securely transmitted to our SaaS platform, maintaining data privacy and security while still enabling robust metadata management. You can read more about how to extract metadata in these cases [here](/getting-started/day-1/hybrid-saas).
## Step 2: Ingest Metadata
Once the connector has been added, set up a [metadata ingestion pipeline](/how-to-guides/admin-guide/how-to-ingest-metadata)
to bring in the metadata into Collate at a regular schedule.
- Go to **Settings > Services > Databases** and click on the service you have added.
- Navigate to the Ingestion tab to **Add Metadata Ingestion**.
- Go to **Settings > Services > Databases** and click on the service you have added. Navigate to the Ingestion tab to **Add Metadata Ingestion**.
{% image
src="/images/v1.6/getting-started/add-ingestion.png"
alt="Adding Ingestion"
caption="Adding Ingestion" /%}
- Make any necessary configuration changes or filters for the ingestion, with documentation available in the side panel.
{% image
src="/images/v1.6/getting-started/ingestion-config.png"
alt="Configure Ingestion"
caption="Configure Ingestion" /%}
- Schedule the pipeline to ingest metadata regularly.
{% image
src="/images/v1.6/getting-started/schedule-ingesgtion.png"
alt="Schedule Ingestion"
caption="Schedule Ingestion" /%}
- Once scheduled, you can also set up additional ingestion pipelines to bring in lineage, profiler, or dbt information.
- Once the metadata ingestion has been completed, you can see the available data assets under **Explore** in the main menu.
{% image
src="/images/v1.6/getting-started/explore-tab.png"
alt="Ingested Data"
caption="Ingested Data under Explore Tab" /%}
- You can repeat these steps to ingest metadata from other data sources.
## Step 3: Invite Users
@ -60,53 +85,98 @@ Once the metadata is ingested into the platform, you can [invite users](/how-to-
to collaborate on the data and assign different roles.
- Navigate to **Settings > Team & User Management > Users**.
{% image
src="/images/v1.6/getting-started/users.png"
alt="Users Navigation"
caption="Users Navigation" /%}
- Click on **Add User**, and enter their email and other details to provide access to the platform.
{% image
src="/images/v1.6/getting-started/add-users.png"
alt="Adding New User"
height="750px"
caption="Adding New User" /%}
- You can organize users into different Teams, as well as assign them to different Roles.
- Users will inherit the access defined for their assigned Teams and Roles.
- Admin access can also be granted. Admins will have access to all settings and can invite other users.
{% image
src="/images/v1.6/getting-started/update-user.png"
alt="Users Profile"
caption="Users Profile" /%}
- New users will receive an email invitation to set up their account.
## Step 4: Add Roles and Policies
## Step 4: Explore Features of OpenMetadata
Add well-defined roles based on the users job description, such as Data Scientist or Data Steward.
Each role can be associated with certain policies, such as the Data Consumer Policy. These policies further comprise
fine-grained Rules to define access.
OpenMetadata provides a comprehensive solution for data teams to break down silos, securely share data assets across various sources, foster collaboration around trusted data, and establish a documentation-first data culture within the organization.
- Navigate to **Settings > Access Control** to define the Rules, Policies, and Roles.
- Refer to [this use case guide](/how-to-guides/admin-guide/roles-policies/use-cases) to understand the configuration for different circumstances.
- Start by creating a Policy. Define the rules for the policy.
- Then, create a Role and apply the related policies.
- Navigate to **Settings > Team & User Management** to assign roles to users or teams.
{% tilesContainer %}
{% tile
title="Data Discovery"
description="Discover the right data assets to make timely business decisions."
link="/how-to-guides/data-discovery"
icon="discovery"
/%}
{% tile
title="Data Collaboration"
description="Foster data team collaboration to enhance data understanding."
link="/how-to-guides/data-collaboration"
icon="collaboration"
/%}
{% tile
title="Data Quality & Observability"
description="Trust your data with quality tests & monitor the health of your data systems."
link="/how-to-guides/data-quality-observability"
icon="observability"
/%}
{% tile
title="Data Lineage"
description="Trace the path of data across tables, pipelines, and dashboards."
link="/how-to-guides/data-lineage"
icon="lineage"
/%}
{% tile
title="Data Insights"
description="Define KPIs and set goals to proactively hone the data culture of your company."
link="/how-to-guides/data-insights"
icon="discovery"
/%}
{% tile
title="Data Governance"
description="Enhance your data platform governance using OpenMetadata."
link="/how-to-guides/data-governance"
icon="governance"
/%}
{% /tilesContainer %}
For more detailed instructions, refer to the [Advanced Guide for Roles and Policies](/how-to-guides/admin-guide/roles-policies).
## Deep Dive into OpenMetadata: Guides for Admins and Data Users
## Step 5: Create Teams and Assign Users
Now that you have users added and roles defined, grant users access to the data assets they need. The easiest way to
manage this at scale is to create teams with the appropriate permissions, and to invite users to their assigned teams.
- Collate supports a hierarchical team structure with [multiple team types](/how-to-guides/admin-guide/teams-and-users/team-structure-openmetadata).
- The root team-type Organization supports other child teams and users within it.
- Business Units, Divisions, Departments, and Groups are the other team types in the hierarchy.
- Note: Only the team-type Organization and Groups can have users. Only the team-type Groups can own data assets.
Planning the [team hierarchy](/how-to-guides/admin-guide/teams-and-users/team-structure-openmetadata) can help save time
later, when creating the teams structure in **Settings > Team and User Management > Teams**. Continue to invite additional
users to onboard them to Collate, with their assigned teams and roles.
## Next Steps
You now have data sources loaded into Collate, and team structure set up. Continue to add more data sources to gain a
more complete view of your data estate, and invite users to foster broader collaboration. You can check out
the [advanced guide to roles and policies](/how-to-guides/admin-guide/roles-policies) to fine-tune role or team access to data.
{% tilesContainer %}
{% tile
title="Admin Guide"
description="Admin users can get started with OpenMetadata in just three quick and easy steps, and master the platform with the advanced guides."
link="/how-to-guides/admin-guide"
icon="administration"
/%}
{% tile
title="Guide for Data Users"
description="Get to know the basics of OpenMetadata and about the data assets that you can explore in the all-in-one platform."
link="/how-to-guides/guide-for-data-users"
icon="steward"
/%}
{% /tilesContainer %}
From here, you can further your understanding and management of your data with Collate:
- You can check out the [advanced guide to roles and policies](/how-to-guides/admin-guide/roles-policies) to fine-tune role or team access to data.
- Trace your data flow with [column-level lineage](/how-to-guides/data-lineage) graphs to understand where your data comes from, how it is used, and how it is managed.
- Build [no-code data quality tests](/how-to-guides/data-quality-observability/quality/tab) to ensure its technical and
business quality, and set up an [alert](/how-to-guides/data-quality-observability/observability) for any test case failures to be quickly notified of critical data issues.
- Write [Knowledge Center](/how-to-guides/data-collaboration/knowledge-center) articles associated with data assets to document key information for your team, such as technical details, business context, and best practices.
- Review the different [Data Insights Reports](/how-to-guides/data-insights/report) on Data Assets, App Analytics, KPIs, and [Cost Analysis](/how-to-guides/data-insights/cost-analysis) to understand the health, utilization, and costs of your data estate.
- Build no-code workflows with [Metadata Automations](https://www.youtube.com/watch?v=ug08aLUyTyE&ab_channel=OpenMetadata) to add attributes like owners, tiers, domains, descriptions, glossary terms, and more to data assets, as well as propagate them using column-level lineage for more automated data management.
You can also review additional [How-To Guides](/how-to-guides) on popular topics like data discovery, data quality, and data governance.

View File

@ -0,0 +1,30 @@
---
title: Bulk Upload Data Assets
slug: /how-to-guides/data-discovery/bulk-upload
collate: true
---
# How to Bulk Upload Data Assets
Collate offers a Data Assets Bulk Upload feature, enabling users to efficiently upload multiple data assets via a CSV file. This functionality allows for the bulk import or update of database, schema, and table entities in a single operation, saving time and effort. Additionally, the inline editor provides a convenient way to validate and modify the data assets before finalizing the import process.
{% youtube videoId="CXxDdS6AifY" start="0:00" end="2:19" width="560px" height="315px" /%}
Both importing and exporting Data Assets from Collate are quick and easy!
{%inlineCallout
color="violet-70"
bold="Data Asset Import"
icon="MdArrowForward"
href="/how-to-guides/data-discovery/import"%}
Quickly import data assets as a CSV file.
{%/inlineCallout%}
{%inlineCallout
color="violet-70"
bold="Data Asset Export"
icon="MdArrowForward"
href="/how-to-guides/data-discovery/export"%}
Quickly export data assets as a CSV file.
{%/inlineCallout%}

View File

@ -0,0 +1,92 @@
---
title: Export Data Asset
slug: /how-to-guides/data-discovery/export
collate: true
---
# Export Data Asset
Exporting a Data Asset from Collate is simple. Below are the steps to bulk export various data assets, such as Database Services, Databases, Schemas, and Tables.
## How to Bulk Export a Database Service
1. Navigate to the Database Service you want to export by going to **Settings > Services > Database**.
2. For this example, we are exporting the `Snowflake` service.
3. Click on the **⋮** icon and select **Export** to download the Database Service CSV file.
{% image
src="/images/v1.6/how-to-guides/discovery/export1.png"
alt="Export Database Service CSV File"
caption="Export Database Service CSV File"
/%}
{% note %}
You can also export the Database Service using the API with the following endpoint:
`/api/v1/services/databaseServices/name/{name}/export`
Make sure to replace `{name}` with the Fully Qualified Name (FQN) of the Database Service.
{% /note %}
## How to Bulk Export a Database
1. In this example, we are exporting the `DEMO` database under **Snowflake**.
2. Click on the **⋮** icon and select **Export** to download the Database CSV file.
{% image
src="/images/v1.6/how-to-guides/discovery/export2.png"
alt="Export Database CSV File"
caption="Export Database CSV File"
/%}
{% note %}
You can also export the Database using the API with the following endpoint:
`/api/v1/databases/name/{name}/export`
Make sure to replace `{name}` with the Fully Qualified Name (FQN) of the Database.
{% /note %}
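As with the Database Service, this endpoint can be scripted. A minimal `requests` sketch, with the host, token, and FQN as placeholders:

```python
import requests

# Placeholders: adjust the host, token, and FQN for your instance.
HOST = "https://your-collate-instance/api"
TOKEN = "<your-jwt-token>"
database_fqn = "Snowflake.DEMO"  # FQN of the Database

resp = requests.get(
    f"{HOST}/v1/databases/name/{database_fqn}/export",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()

# Save the CSV payload to edit and re-import later.
with open("demo_database_export.csv", "w") as f:
    f.write(resp.text)
```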
## How to Bulk Export a Database Schema
1. In this example, we are exporting the `JAFFLE_SHOP` schema under **Snowflake > DEMO**.
2. Click on the **⋮** icon and select **Export** to download the Database Schema CSV file.
{% image
src="/images/v1.6/how-to-guides/discovery/export3.png"
alt="Export Database Schema CSV File"
caption="Export Database Schema CSV File"
/%}
{% note %}
You can also export the Database Schema using the API with the following endpoint:
`/api/v1/databaseSchemas/name/{name}/export`
Make sure to replace `{name}` with the Fully Qualified Name (FQN) of the Database Schema.
{% /note %}
## How to Bulk Export a Table
1. In this example, we are exporting the `CUSTOMERS` table under **Snowflake > DEMO > JAFFLE_SHOP**.
2. Click on the **⋮** icon and select **Export** to download the Table CSV file.
{% image
src="/images/v1.6/how-to-guides/discovery/export4.png"
alt="Export Table CSV File"
caption="Export Table CSV File"
/%}
{% note %}
You can also export the Table using the API with the following endpoint:
`/api/v1/tables/name/{name}/export`
Make sure to replace `{name}` with the Fully Qualified Name (FQN) of the Table.
{% /note %}
{%inlineCallout
color="violet-70"
bold="Data Asset Import"
icon="MdArrowBack"
href="/how-to-guides/data-discovery/import"%}
Quickly import data assets as a CSV file.
{%/inlineCallout%}

View File

@ -0,0 +1,327 @@
---
title: Bulk Import Data Asset
slug: /how-to-guides/data-discovery/import
collate: true
---
# Import Data Asset
Importing a Data Asset into Collate is simple. Below are the steps to bulk import various data assets, such as Database Services, Databases, Schemas, and Tables.
## How to Bulk Import a Database Service
To import a Database Service into Collate:
1. Navigate to the Database Service you want to import by going to **Settings > Services > Database**.
2. For this example, we are importing the `Snowflake` service.
3. Click on the **⋮** icon and select **Import** to upload the Database Service CSV file.
{% image
src="/images/v1.6/how-to-guides/discovery/import1.png"
alt="Import a Database Service"
caption="Import a Database Service"
/%}
4. Upload/Drop the Database Service CSV file that you want to import. Alternatively, you can `export` an existing Database Service CSV as a template, make the necessary edits, and then upload the updated file.
Once you have the template, you can fill in the following details:
- **name** (required): This field contains the name of the database.
- **displayName**: This field holds the display name of the database.
- **description**: This field contains a detailed description or information about the database.
- **owner**: This field specifies the owner of the database.
- **tags**: This field contains the tags associated with the database.
- **glossaryTerms**: This field holds the glossary terms linked to the database.
- **tiers**: This field defines the tiers associated with the database service.
- **domain**: This field contains the domain assigned to the data asset.
{% image
src="/images/v1.6/how-to-guides/discovery/import2.png"
alt="Upload the Database Service CSV file"
caption="Upload the Database Service CSV file"
/%}
5. You can now preview the uploaded Database Service CSV file and add or modify data using the inline editor.
{% image
src="/images/v1.6/how-to-guides/discovery/import3.png"
alt="Preview of the Database Service"
caption="Preview of the Database Service"
/%}
6. Validate the updated Data Assets and confirm the changes. A success or failure message will then be displayed based on the outcome.
{% image
src="/images/v1.6/how-to-guides/discovery/import4.png"
alt="Validate the updated Data Assets"
caption="Validate the updated Data Assets"
/%}
7. The Database Service has been updated successfully, and you can now view the changes in the Database Service.
{% image
src="/images/v1.6/how-to-guides/discovery/import5.png"
alt="Database Service Import successful"
caption="Database Service Import successful"
/%}
{% note %}
You can also import the Database Service using the API with the following endpoint:
`/api/v1/services/databaseServices/name/{name}/import`
Make sure to replace `{name}` with the Fully Qualified Name (FQN) of the Database Service.
{% /note %}
## How to Bulk Import a Database
To import a Database into Collate:
1. In this example, we are importing the `DEMO` database under **Snowflake**.
2. Click on the **⋮** icon and select **Import** to upload the Database CSV file.
{% image
src="/images/v1.6/how-to-guides/discovery/import6.png"
alt="Import a Database"
caption="Import a Database"
/%}
3. Upload/Drop the Database CSV file that you want to import. Alternatively, you can `export` an existing Database CSV as a template, make the necessary edits, and then upload the updated file.
Once you have the template, you can fill in the following details:
- **name** (required): This field contains the name of the database.
- **displayName**: This field holds the display name of the database.
- **description**: This field contains a detailed description or information about the database.
- **owner**: This field specifies the owner of the database.
- **tags**: This field contains the tags associated with the database.
- **glossaryTerms**: This field holds the glossary terms linked to the database.
- **tiers**: This field defines the tiers associated with the database.
- **sourceUrl**: This field contains the Source URL of the data asset. Example for the Snowflake database: `https://app.snowflake.com/<account>/#/data/databases/DEMO/`
- **retentionPeriod**: This field contains the retention period of the data asset. Period is expressed as a duration in ISO 8601 format in UTC. Example - `P23DT23H`.
- **domain**: This field contains the domain assigned to the data asset.
{% image
src="/images/v1.6/how-to-guides/discovery/import7.png"
alt="Upload the Database CSV file"
caption="Upload the Database CSV file"
/%}
4. You can now preview the uploaded Database CSV file and add or modify data using the inline editor.
{% image
src="/images/v1.6/how-to-guides/discovery/import8.png"
alt="Preview of the Database"
caption="Preview of the Database"
/%}
5. Validate the updated Data Assets and confirm the changes. A success or failure message will then be displayed based on the outcome.
{% image
src="/images/v1.6/how-to-guides/discovery/import9.png"
alt="Validate the updated Data Assets"
caption="Validate the updated Data Assets"
/%}
6. The Database has been updated successfully, and you can now view the changes in the Database.
{% image
src="/images/v1.6/how-to-guides/discovery/import10.png"
alt="Database Import successful"
caption="Database Import successful"
/%}
{% note %}
You can also import the Database using the API with the following endpoint:
`/api/v1/databases/name/{name}/import`
Make sure to replace `{name}` with the Fully Qualified Name (FQN) of the Database.
{% /note %}
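The import endpoint can be scripted the same way. This sketch assumes a `PUT` with the CSV as the request body and a `dryRun` query parameter for validation; confirm both against your instance's API reference.

```python
import requests

# Placeholders: adjust the host, token, and FQN for your instance.
HOST = "https://your-collate-instance/api"
TOKEN = "<your-jwt-token>"
database_fqn = "Snowflake.DEMO"

with open("demo_database_export.csv") as f:
    csv_payload = f.read()

resp = requests.put(
    f"{HOST}/v1/databases/name/{database_fqn}/import",
    params={"dryRun": "true"},  # assumed flag; "false" applies the changes
    headers={"Authorization": f"Bearer {TOKEN}", "Content-Type": "text/plain"},
    data=csv_payload,
)
resp.raise_for_status()
print(resp.json())  # validation/import summary
```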
## How to Bulk Import a Database Schema
To import a Database Schema into Collate:
1. In this example, we are importing the `JAFFLE_SHOP` schema under **Snowflake > DEMO**.
2. Click on the **⋮** icon and select **Import** to upload the Database Schema CSV file.
{% image
src="/images/v1.6/how-to-guides/discovery/import11.png"
alt="Import a Database Schema"
caption="Import a Database Schema"
/%}
3. Upload/Drop the Database Schema CSV file that you want to import. Alternatively, you can `export` an existing Database Schema CSV as a template, make the necessary edits, and then upload the updated file.
Once you have the template, you can fill in the following details:
- **name** (required): This field contains the name of the database schema.
- **displayName**: This field holds the display name of the database schema.
- **description**: This field contains a detailed description or information about the database schema.
- **owner**: This field specifies the owner of the database schema.
- **tags**: This field contains the tags associated with the database schema.
- **glossaryTerms**: This field holds the glossary terms linked to the database schema.
- **tiers**: This field defines the tiers associated with the database schema.
- **sourceUrl**: This field contains the Source URL of the data asset. Example for the Snowflake database schema: `https://app.snowflake.com/<account>/#/data/databases/DEMO/schemas/JAFFLE_SHOP`
- **retentionPeriod**: This field contains the retention period of the data asset. Period is expressed as a duration in ISO 8601 format in UTC. Example - `P23DT23H`.
{% image
src="/images/v1.6/how-to-guides/discovery/import12.png"
alt="Upload the Database Schema CSV file"
caption="Upload the Database Schema CSV file"
/%}
4. You can now preview the uploaded Database Schema CSV file and add or modify data using the inline editor.
{% image
src="/images/v1.6/how-to-guides/discovery/import13.png"
alt="Preview of the Database Schema"
caption="Preview of the Database Schema"
/%}
5. Validate the updated Data Assets and confirm the changes. A success or failure message will then be displayed based on the outcome.
{% image
src="/images/v1.6/how-to-guides/discovery/import14.png"
alt="Validate the updated Data Assets"
caption="Validate the updated Data Assets"
/%}
6. The Database Schema has been updated successfully, and you can now view the changes in the Database Schema.
{% image
src="/images/v1.6/how-to-guides/discovery/import15.png"
alt="Database Schema Import successful"
caption="Database Schema Import successful"
/%}
{% note %}
You can also import the Database Schema using the API with the following endpoint:
`/api/v1/databaseSchemas/name/{name}/import`
Make sure to replace `{name}` with the Fully Qualified Name (FQN) of the Database Schema.
{% /note %}
## How to Bulk Import a Table
To import a Table into Collate:
1. In this example, we are importing the `CUSTOMERS` table under **Snowflake > DEMO > JAFFLE_SHOP**.
2. Click on the **⋮** icon and select **Import** to upload the Table CSV file.
{% image
src="/images/v1.6/how-to-guides/discovery/import16.png"
alt="Import a Table"
caption="Import a Table"
/%}
3. Upload/Drop the Table CSV file that you want to import. Alternatively, you can `export` an existing table CSV as a template, make the necessary edits, and then upload the updated file.
Once you have the template, you can fill in the following details:
- **name**: This field contains the name of the table.
- **displayName**: This field holds the display name of the table.
- **description**: This field contains a detailed description or information about the table.
- **owner**: This field specifies the owner of the table.
- **tags**: This field contains the tags associated with the table.
- **glossaryTerms**: This field holds the glossary terms linked to the table.
- **tiers**: This field defines the tiers associated with the table.
- **sourceUrl**: This field contains the Source URL of the data asset. Example for the Snowflake table: `https://app.snowflake.com/<account>/#/data/databases/DEMO/schemas/JAFFLE_SHOP/table/CUSTOMERS`
- **retentionPeriod**: This field contains the retention period of the data asset. Period is expressed as a duration in ISO 8601 format in UTC. Example - `P23DT23H`.
- **column.fullyQualifiedName** (required): This field holds the fully qualified name of the column.
- **column.displayName**: This field holds the display name of the column, if different from the technical name.
- **column.description**: This field holds a detailed description or information about the column's purpose or content.
- **column.dataTypeDisplay**: This field holds the data type for display purposes.
- **column.dataType**: This field holds the data type of the column (e.g., `VARCHAR`, `INT`, `BOOLEAN`).
- **column.arrayDataType**: If the column is an array, this field will specify the data type of the array elements.
- **column.dataLength**: This field holds the length or size of the data.
- **column.tags**: This field holds the tags associated with the column, which help categorize it.
- **column.glossaryTerms**: This field holds the Glossary terms linked to the column to provide standardized definitions.
{% image
src="/images/v1.6/how-to-guides/discovery/import17.png"
alt="Upload the Table CSV file"
caption="Upload the Table CSV file"
/%}
4. You can now preview the uploaded Table CSV file and add or modify data using the inline editor.
{% image
src="/images/v1.6/how-to-guides/discovery/import18.png"
alt="Preview of the Table"
caption="Preview of the Table"
/%}
5. Validate the updated Data Assets and confirm the changes. A success or failure message will then be displayed based on the outcome.
{% image
src="/images/v1.6/how-to-guides/discovery/import19.png"
alt="Validate the updated Data Assets"
caption="Validate the updated Data Assets"
/%}
6. The Table has been updated successfully, and you can now view the changes in the Table.
{% image
src="/images/v1.6/how-to-guides/discovery/import20.png"
alt="Table Import successful"
caption="Table Import successful"
/%}
{% note %}
You can also import the Table using the API with the following endpoint:
`/api/v1/tables/name/{name}/import`
Make sure to replace `{name}` with the Fully Qualified Name (FQN) of the Table.
{% /note %}
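Putting the two endpoints together, a common pattern is to export the table CSV, adjust a field, and re-import it. A hedged sketch with placeholder host, token, and FQN; the `dryRun` parameter is an assumption to verify against your API reference:

```python
import requests

# Placeholders: adjust the host, token, and FQN for your instance.
HOST = "https://your-collate-instance/api"
TOKEN = "<your-jwt-token>"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}
table_fqn = "Snowflake.DEMO.JAFFLE_SHOP.CUSTOMERS"

# 1. Export the current table metadata as CSV.
export = requests.get(f"{HOST}/v1/tables/name/{table_fqn}/export", headers=HEADERS)
export.raise_for_status()
csv_payload = export.text

# 2. Edit csv_payload here (e.g., fill in descriptions or tags) before re-importing.

# 3. Validate with a dry run first, then apply.
for dry_run in ("true", "false"):
    resp = requests.put(
        f"{HOST}/v1/tables/name/{table_fqn}/import",
        params={"dryRun": dry_run},
        headers={**HEADERS, "Content-Type": "text/plain"},
        data=csv_payload,
    )
    resp.raise_for_status()
```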
{%inlineCallout
color="violet-70"
bold="Data Asset Export"
icon="MdArrowForward"
href="/how-to-guides/data-discovery/export"%}
Quickly export data assets as a CSV file.
{%/inlineCallout%}

Binary files not shown: 48 new screenshot images added (24 each under the v1.5 and v1.6 image directories), ranging from 187 KiB to 426 KiB.