Sample Data Storage & External Profiler Docs (#13960)

* rename title from S3 to Storage config

* Sample Data Storage & External Profiler Docs

* Some changes

* Some changes

* Some changes

---------

Co-authored-by: Pere Miquel Brull <peremiquelbrull@gmail.com>
Mayur Singal 2023-11-14 13:29:32 +05:30 committed by GitHub
parent 0c31dc4641
commit a8bca817b4
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
34 changed files with 360 additions and 26 deletions

View File

@ -1,5 +1,5 @@
---
title: Lineage Ingestion
title: Auto PII Tagging
slug: /connectors/ingestion/auto_tagging
---

View File

@ -0,0 +1,151 @@
---
title: External Profiler Workflow
slug: /connectors/ingestion/workflows/profiler/external-workflow
---
# External Profiler Workflow
{% note %}
Note that this requires OpenMetadata 1.2.1 or higher.
{% /note %}
Consider a use case where you have a large database source with multiple databases and schemas maintained by
different teams within your organization. Depending on your use case, you have created multiple database services
within OpenMetadata by applying various filters on this large source. Now, instead of running a profiler pipeline for
each service, you want to run a **single profiler workflow for the entire source**, irrespective of the OpenMetadata
service an asset belongs to. This document will guide you through how to achieve this.
{% note %}
Note that running a single profiler workflow is only supported if you run the workflow **externally**, not from OpenMetadata.
{% /note %}
{% partial file="/v1.2/connectors/external-ingestion-deployment.md" /%}
## 1. Define the YAML Config
You will need to prepare a YAML file for the data profiler depending on the database source.
You can find the details of how to define a profiler YAML file for each connector [here](https://docs.open-metadata.org/v1.2.x/connectors/database).
For example, if the data source is Snowflake, the YAML file would look as follows:
```yaml
# snowflake_external_profiler.yaml
source:
  type: snowflake
  serviceConnection:
    config:
      type: Snowflake
      username: my_username
      password: my_password
      account: snow-account-name
      warehouse: COMPUTE_WH
  sourceConfig:
    config:
      type: Profiler
      generateSampleData: true
      # schemaFilterPattern:
      #   includes:
      #     # - .*mydatabase.*
      #     - .*default.*
      # tableFilterPattern:
      #   includes:
      #     # - ^cloudfront_logs11$
      #     - ^map_table$
      #     # - .*om_glue_test.*
processor:
  type: "orm-profiler"
  config: {}
  # tableConfig:
  #   - fullyQualifiedName: local_snowflake.mydatabase.mydschema.mytable
  #     sampleDataCount: 50
  # schemaConfig:
  #   - fullyQualifiedName: demo_snowflake.new_database.new_dschema
  #     sampleDataCount: 50
  #     profileSample: 1
  #     profileSampleType: ROWS
  #     sampleDataStorageConfig:
  #       bucketName: awsdatalake-testing
  #       prefix: data/sales/demo1
  #       overwriteData: false
  #       storageConfig:
  #         awsRegion: us-east-2
  # databaseConfig:
  #   - fullyQualifiedName: snowflake_prod.prod_db
  #     sampleDataCount: 50
  #     profileSample: 1
  #     profileSampleType: ROWS
  #     sampleDataStorageConfig:
  #       bucketName: awsdatalake-testing
  #       prefix: data/sales/demo1
  #       overwriteData: false
  #       storageConfig:
  #         awsRegion: us-east-2
sink:
  type: metadata-rest
  config: {}
workflowConfig:
  loggerLevel: DEBUG
  openMetadataServerConfig:
    hostPort: http://localhost:8585/api
    authProvider: openmetadata
    securityConfig:
      jwtToken: "your-jwt-token"
```
{% note %}
Note that we do **NOT pass the Service Name** in this YAML file, unlike in a typical profiler workflow.
{% /note %}
## 2. Run the Workflow
### Run the Workflow with the CLI
One option to run the workflow externally is to leverage the `metadata` CLI.
After saving the YAML config, run the following command:
```
metadata profile -c <path-to-yaml>
```
### Run the Workflow from Python using the SDK
If you'd rather have a Python script taking care of the execution, you can use:
```python
import yaml

from metadata.workflow.profiler import ProfilerWorkflow
from metadata.workflow.workflow_output_handler import print_status

# Specify your YAML configuration
CONFIG = """
source:
  ...
workflowConfig:
  openMetadataServerConfig:
    hostPort: 'http://localhost:8585/api'
    authProvider: openmetadata
    securityConfig:
      jwtToken: ...
"""


def run():
    workflow_config = yaml.safe_load(CONFIG)
    workflow = ProfilerWorkflow.create(workflow_config)
    workflow.execute()
    workflow.raise_from_status()
    print_status(workflow)
    workflow.stop()


if __name__ == "__main__":
    run()
```

View File

@ -0,0 +1,179 @@
---
title: External Storage for Sample Data
slug: /connectors/ingestion/workflows/profiler/external-sample-data
---
# External Storage for Sample Data
{% note %}
Note that this requires OpenMetadata 1.2.1 or higher.
{% /note %}
If you have enabled the `Generate Sample Data` flag in your profiler configuration, sample data will be ingested for
all the tables included in the profiler workflow when it runs. This data is randomly sampled from the table and by
default contains 50 rows, a count that is now configurable.
With OpenMetadata release 1.2.1, a new capability allows users to take advantage of this sample data by uploading
it to an S3 bucket in Parquet format. This means that the random sample, once generated, can be stored in a standardized,
columnar storage format, facilitating efficient and scalable data analysis.
To leverage this functionality, follow the steps below to upload sample data to an S3 bucket in Parquet
format as part of your profiling workflow.
## Configure the Sample Data Storage Credentials
To upload the sample data, you first need to configure your storage account credentials. There are multiple ways to do this.
### Storage Credentials at the Database Service
You can configure the Sample Data Storage Credentials at the Database Service level while creating a new service or editing the connection details of an existing Database Service.
You will provide the storage credential details in the Advanced Config section of the connection details form.
{% image
src="/images/v1.2/features/ingestion/workflows/profiler/sample-data-config-service.png"
alt="Database Service Storage Config"
caption="Database Service Storage Config"
/%}
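If you deploy the connection through YAML rather than the UI, the same storage settings can be supplied in the service connection config. Below is a minimal sketch reusing the Snowflake connection from the earlier example; the exact keys accepted under `sampleDataStorageConfig` are defined in the `connectionBasicType.json` schema, so treat this layout as an assumption and verify it against your OpenMetadata version.
```yaml
source:
  type: snowflake
  serviceConnection:
    config:
      type: Snowflake
      username: my_username
      password: my_password
      account: snow-account-name
      warehouse: COMPUTE_WH
      # Service-level sample data storage (layout assumed from the schema reference)
      sampleDataStorageConfig:
        bucketName: awsdatalake-testing
        prefix: data/sales/demo1
        overwriteData: false
        storageConfig:
          awsRegion: us-east-2
```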
### Storage Credentials at the Database
You can configure the Sample Data Storage Credentials at the Database level via the `Profiler Settings` option from the menu.
{% image
src="/images/v1.2/features/ingestion/workflows/profiler/sample-data-config-database-1.png"
alt="Database Storage Config - 1"
caption="Database Storage Config - 1"
/%}
{% image
src="/images/v1.2/features/ingestion/workflows/profiler/sample-data-config-database-2.png"
alt="Database Storage Config - 2"
caption="Database Storage Config - 2"
/%}
### Storage Credentials at the Database Schema
You can configure the Sample Data Storage Credentials at the Database Schema level via the `Profiler Settings` option from the menu.
{% image
src="/images/v1.2/features/ingestion/workflows/profiler/sample-data-config-schema-1.png"
alt="Database Schema Storage Config - 1"
caption="Database Schema Storage Config - 1"
/%}
{% image
src="/images/v1.2/features/ingestion/workflows/profiler/sample-data-config-schema-2.png"
alt="Database Schema Storage Config - 2"
caption="Database Schema Storage Config - 2"
/%}
### Configuration Details
- **Profile Sample Value**: Percentage of data or number of rows to use when sampling tables. By default, the profiler will run against the entire table.
- **Profile Sample Type**: The sample type can be set to either:
  - **Percentage**: this will use a percentage to sample the table (e.g. if the table has 100 rows and we set the sample percentage to 50%, the profiler will use 50 random rows to compute the metrics).
  - **Row Count**: this will use a number of rows to sample the table (e.g. if the table has 100 rows and we set the row count to 10, the profiler will use 10 random rows to compute the metrics).
- **Sample Data Rows Count**: Number of rows of sample data to be ingested if the generate sample data option is enabled.
{% note %}
The OpenMetadata UI will always show 50 or fewer rows of sample data. A *Sample Data Rows Count* higher than 50 only controls the number of rows stored in the Parquet file in object storage.
{% /note %}
- **Bucket Name**: A bucket name is a unique identifier used to organize and store data objects. It's similar to a folder name, but it's used for object storage rather than file storage.
- **Prefix**: The prefix of a data source refers to the first part of the data path that identifies the source or origin of the data. The generated sample data parquet file will be uploaded to this prefix path in your bucket.
- **Overwrite Sample Data**: If this flag is enabled, only one Parquet file will be generated per table to store the sample data. Otherwise, a new Parquet file will be generated for each day the profiler workflow runs. The sketch after this list shows how these settings map onto the external profiler YAML.
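These same settings appear in the commented `schemaConfig`/`databaseConfig` options of the external profiler YAML shown earlier. A minimal sketch for a single database, reusing those keys (values, including the `PERCENTAGE` sample type, are illustrative):
```yaml
processor:
  type: "orm-profiler"
  config:
    databaseConfig:
      - fullyQualifiedName: snowflake_prod.prod_db
        profileSample: 50                # Profile Sample Value
        profileSampleType: PERCENTAGE    # Profile Sample Type (PERCENTAGE or ROWS)
        sampleDataCount: 50              # Sample Data Rows Count
        sampleDataStorageConfig:
          bucketName: awsdatalake-testing   # Bucket Name
          prefix: data/sales/demo1          # Prefix
          overwriteData: false              # Overwrite Sample Data
          storageConfig:
            awsRegion: us-east-2
```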
#### Connection Details for AWS S3
- **AWS Access Key ID** & **AWS Secret Access Key**: When you interact with AWS, you specify your AWS security credentials to verify who you are and whether you have
permission to access the resources that you are requesting. AWS uses the security credentials to authenticate and
authorize your requests ([docs](https://docs.aws.amazon.com/IAM/latest/UserGuide/security-creds.html)).
Access keys consist of two parts: An **access key ID** (for example, `AKIAIOSFODNN7EXAMPLE`), and a **secret access key** (for example, `wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY`).
You must use both the access key ID and secret access key together to authenticate your requests.
You can find further information on how to manage your access keys [here](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html).
- **AWS Region**: Each AWS Region is a separate geographic area in which AWS clusters data centers ([docs](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.RegionsAndAvailabilityZones.html)).
As AWS can have instances in multiple regions, we need to know the region the service you want to reach belongs to.
Note that the AWS Region is the only required parameter when configuring a connection. When connecting to the
services programmatically, there are different ways in which we can extract and use the rest of AWS configurations.
You can find further information about configuring your credentials [here](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html#configuring-credentials).
- **AWS Session Token (optional)**: If you are using temporary credentials to access your services, you will need to provide the AWS Access Key ID
and AWS Secret Access Key. These temporary credentials also include an AWS Session Token.
You can find more information on [Using temporary credentials with AWS resources](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_use-resources.html).
- **Endpoint URL (optional)**: To connect programmatically to an AWS service, you use an endpoint. An *endpoint* is the URL of the
entry point for an AWS web service. The AWS SDKs and the AWS Command Line Interface (AWS CLI) automatically use the
default endpoint for each service in an AWS Region. But you can specify an alternate endpoint for your API requests.
Find more information on [AWS service endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html).
- **Profile Name**: A named profile is a collection of settings and credentials that you can apply to an AWS CLI command.
When you specify a profile to run a command, the settings and credentials are used to run that command.
Multiple named profiles can be stored in the config and credentials files.
You can set this field if you'd like to use a profile other than `default`.
Find here more information about [Named profiles for the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-profiles.html).
- **Assume Role Arn**: Typically, you use `AssumeRole` within your account or for cross-account access. In this field you'll set the
`ARN` (Amazon Resource Name) of the role in the other account.
A user who wants to access a role in a different account must also have permissions that are delegated from the account
administrator. The administrator must attach a policy that allows the user to call `AssumeRole` for the `ARN` of the role in the other account.
This is a required field if you'd like to `AssumeRole`.
Find more information on [AssumeRole](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html).
- **Assume Role Session Name**: An identifier for the assumed role session. Use the role session name to uniquely identify a session when the same role
is assumed by different principals or for different reasons.
By default, we'll use the name `OpenMetadataSession`.
Find more information about the [Role Session Name](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html#:~:text=An%20identifier%20for%20the%20assumed%20role%20session.).
- **Assume Role Source Identity**: The source identity specified by the principal that is calling the `AssumeRole` operation. You can use source identity
information in AWS CloudTrail logs to determine who took actions with a role.
Find more information about [Source Identity](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html#:~:text=Required%3A%20No-,SourceIdentity,-The%20source%20identity).
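As a rough sketch, the AWS options above live under the `storageConfig` block nested inside `sampleDataStorageConfig`. Apart from `awsRegion`, which appears in the earlier example, the key names below are assumptions based on the usual OpenMetadata AWS credentials layout; confirm them against the schema for your version.
```yaml
sampleDataStorageConfig:
  bucketName: awsdatalake-testing
  prefix: data/sales/demo1
  overwriteData: false
  storageConfig:
    awsAccessKeyId: AKIAIOSFODNN7EXAMPLE                            # assumed key name
    awsSecretAccessKey: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY    # assumed key name
    awsRegion: us-east-2
    # awsSessionToken: ...                                          # optional, for temporary credentials
    # endPointURL: https://s3.us-east-2.amazonaws.com               # optional alternate endpoint
    # profileName: default                                          # optional named profile
    # assumeRoleArn: arn:aws:iam::123456789012:role/om-profiler     # illustrative role ARN
    # assumeRoleSessionName: OpenMetadataSession
```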
#### OpenMetadata Storage Config
This option is useful when you want to skip uploading the sample data of a particular schema or database to object storage.
For example, consider a database with three schemas A, B & C. You have configured the S3 storage credentials
for your database; if there is no storage configuration at the database schema level, then by default the sample data of
the tables in all three schemas will be uploaded using that storage config.
If you do not wish to upload sample data for the tables in schema A, you can select the OpenMetadata Storage Config option for that schema.
### Order of Precedence
As described above, you can configure the storage configuration for uploading sample data at the Database Service, Database or Database Schema level. The order of precedence for selecting the storage configuration is:
```
Database Schema > Database > Database Service
```
This means that if you have configured the storage credentials at the database schema level, the sample data of all the tables
in that schema will be uploaded with that storage config. If that is not configured, preference is given to the database-level
storage options and, finally, to the database service storage options, as illustrated below.
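The following sketch, built from the commented `databaseConfig`/`schemaConfig` options in the earlier external workflow YAML, shows a schema-level config overriding the database-level one (FQNs and prefixes are illustrative):
```yaml
processor:
  type: "orm-profiler"
  config:
    databaseConfig:
      - fullyQualifiedName: snowflake_prod.prod_db
        sampleDataStorageConfig:
          bucketName: awsdatalake-testing
          prefix: data/sales/db-level       # used for schemas without their own config
          overwriteData: false
          storageConfig:
            awsRegion: us-east-2
    schemaConfig:
      - fullyQualifiedName: snowflake_prod.prod_db.sales   # illustrative schema FQN
        sampleDataStorageConfig:
          bucketName: awsdatalake-testing
          prefix: data/sales/schema-level   # takes precedence for tables in this schema
          overwriteData: false
          storageConfig:
            awsRegion: us-east-2
```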

View File

@ -556,6 +556,10 @@ site_menu:
url: /connectors/ingestion/workflows/profiler
- category: Connectors / Ingestion / Workflows / Profiler / Metrics
url: /connectors/ingestion/workflows/profiler/metrics
- category: Connectors / Ingestion / Workflows / Profiler / Sample Data
url: /connectors/ingestion/workflows/profiler/external-sample-data
- category: Connectors / Ingestion / Workflows / Profiler / External Workflow
url: /connectors/ingestion/workflows/profiler/external-workflow
- category: Connectors / Ingestion / Workflows / Data Quality
url: /connectors/ingestion/workflows/data-quality
- category: Connectors / Ingestion / Workflows / Data Quality / Tests

Binary file not shown.

After

Width:  |  Height:  |  Size: 803 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 1.1 MiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 758 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 1.0 MiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 1.1 MiB

View File

@ -87,7 +87,7 @@
"default": true
},
"sampleDataStorageConfig": {
"title": "S3 Config for Sample Data",
"title": "Storage Config for Sample Data",
"$ref": "../connectionBasicType.json#/definitions/sampleDataStorageConfig"
}
},

View File

@ -97,7 +97,7 @@
"$ref": "../connectionBasicType.json#/definitions/supportsDatabase"
},
"sampleDataStorageConfig": {
"title": "S3 Config for Sample Data",
"title": "Storage Config for Sample Data",
"$ref": "../connectionBasicType.json#/definitions/sampleDataStorageConfig"
}
},

View File

@ -102,7 +102,7 @@
"$ref": "../connectionBasicType.json#/definitions/supportsQueryComment"
},
"sampleDataStorageConfig": {
"title": "S3 Config for Sample Data",
"title": "Storage Config for Sample Data",
"$ref": "../connectionBasicType.json#/definitions/sampleDataStorageConfig"
}
},

View File

@ -108,7 +108,7 @@
"$ref": "../connectionBasicType.json#/definitions/supportsQueryComment"
},
"sampleDataStorageConfig": {
"title": "S3 Config for Sample Data",
"title": "Storage Config for Sample Data",
"$ref": "../connectionBasicType.json#/definitions/sampleDataStorageConfig"
}
},

View File

@ -104,7 +104,7 @@
"$ref": "../connectionBasicType.json#/definitions/supportsQueryComment"
},
"sampleDataStorageConfig": {
"title": "S3 Config for Sample Data",
"title": "Storage Config for Sample Data",
"$ref": "../connectionBasicType.json#/definitions/sampleDataStorageConfig"
}
},

View File

@ -80,7 +80,7 @@
"$ref": "../connectionBasicType.json#/definitions/supportsProfiler"
},
"sampleDataStorageConfig": {
"title": "S3 Config for Sample Data",
"title": "Storage Config for Sample Data",
"$ref": "../connectionBasicType.json#/definitions/sampleDataStorageConfig"
}
},

View File

@ -81,7 +81,7 @@
"$ref": "../connectionBasicType.json#/definitions/supportsQueryComment"
},
"sampleDataStorageConfig": {
"title": "S3 Config for Sample Data",
"title": "Storage Config for Sample Data",
"$ref": "../connectionBasicType.json#/definitions/sampleDataStorageConfig"
}
},

View File

@ -77,7 +77,7 @@
"$ref": "../connectionBasicType.json#/definitions/supportsDBTExtraction"
},
"sampleDataStorageConfig": {
"title": "S3 Config for Sample Data",
"title": "Storage Config for Sample Data",
"$ref": "../connectionBasicType.json#/definitions/sampleDataStorageConfig"
}
},

View File

@ -113,7 +113,7 @@
"$ref": "../connectionBasicType.json#/definitions/supportsQueryComment"
},
"sampleDataStorageConfig": {
"title": "S3 Config for Sample Data",
"title": "Storage Config for Sample Data",
"$ref": "../connectionBasicType.json#/definitions/sampleDataStorageConfig"
}
},

View File

@ -111,7 +111,7 @@
"$ref": "../connectionBasicType.json#/definitions/supportsProfiler"
},
"sampleDataStorageConfig": {
"title": "S3 Config for Sample Data",
"title": "Storage Config for Sample Data",
"$ref": "../connectionBasicType.json#/definitions/sampleDataStorageConfig"
}
},

View File

@ -100,7 +100,7 @@
"$ref": "../connectionBasicType.json#/definitions/supportsProfiler"
},
"sampleDataStorageConfig": {
"title": "S3 Config for Sample Data",
"title": "Storage Config for Sample Data",
"$ref": "../connectionBasicType.json#/definitions/sampleDataStorageConfig"
}
},

View File

@ -82,7 +82,7 @@
"$ref": "../connectionBasicType.json#/definitions/supportsQueryComment"
},
"sampleDataStorageConfig": {
"title": "S3 Config for Sample Data",
"title": "Storage Config for Sample Data",
"$ref": "../connectionBasicType.json#/definitions/sampleDataStorageConfig"
}
},

View File

@ -95,7 +95,7 @@
"$ref": "../connectionBasicType.json#/definitions/supportsLineageExtraction"
},
"sampleDataStorageConfig": {
"title": "S3 Config for Sample Data",
"title": "Storage Config for Sample Data",
"$ref": "../connectionBasicType.json#/definitions/sampleDataStorageConfig"
}
},

View File

@ -103,7 +103,7 @@
"$ref": "../connectionBasicType.json#/definitions/supportsQueryComment"
},
"sampleDataStorageConfig": {
"title": "S3 Config for Sample Data",
"title": "Storage Config for Sample Data",
"$ref": "../connectionBasicType.json#/definitions/sampleDataStorageConfig"
}
},

View File

@ -141,7 +141,7 @@
"$ref": "../connectionBasicType.json#/definitions/supportsQueryComment"
},
"sampleDataStorageConfig": {
"title": "S3 Config for Sample Data",
"title": "Storage Config for Sample Data",
"$ref": "../connectionBasicType.json#/definitions/sampleDataStorageConfig"
}
},

View File

@ -82,7 +82,7 @@
"$ref": "../connectionBasicType.json#/definitions/supportsQueryComment"
},
"sampleDataStorageConfig": {
"title": "S3 Config for Sample Data",
"title": "Storage Config for Sample Data",
"$ref": "../connectionBasicType.json#/definitions/sampleDataStorageConfig"
}
},

View File

@ -125,7 +125,7 @@
"$ref": "../connectionBasicType.json#/definitions/supportsQueryComment"
},
"sampleDataStorageConfig": {
"title": "S3 Config for Sample Data",
"title": "Storage Config for Sample Data",
"$ref": "../connectionBasicType.json#/definitions/sampleDataStorageConfig"
}
},

View File

@ -86,7 +86,7 @@
"$ref": "../connectionBasicType.json#/definitions/supportsQueryComment"
},
"sampleDataStorageConfig": {
"title": "S3 Config for Sample Data",
"title": "Storage Config for Sample Data",
"$ref": "../connectionBasicType.json#/definitions/sampleDataStorageConfig"
}
},

View File

@ -102,7 +102,7 @@
"$ref": "../connectionBasicType.json#/definitions/supportsQueryComment"
},
"sampleDataStorageConfig": {
"title": "S3 Config for Sample Data",
"title": "Storage Config for Sample Data",
"$ref": "../connectionBasicType.json#/definitions/sampleDataStorageConfig"
}
},

View File

@ -117,7 +117,7 @@
"$ref": "../connectionBasicType.json#/definitions/supportsQueryComment"
},
"sampleDataStorageConfig": {
"title": "S3 Config for Sample Data",
"title": "Storage Config for Sample Data",
"$ref": "../connectionBasicType.json#/definitions/sampleDataStorageConfig"
}
},

View File

@ -82,7 +82,7 @@
"$ref": "../connectionBasicType.json#/definitions/supportsQueryComment"
},
"sampleDataStorageConfig": {
"title": "S3 Config for Sample Data",
"title": "Storage Config for Sample Data",
"$ref": "../connectionBasicType.json#/definitions/sampleDataStorageConfig"
}
},

View File

@ -126,7 +126,7 @@
"$ref": "../connectionBasicType.json#/definitions/supportsQueryComment"
},
"sampleDataStorageConfig": {
"title": "S3 Config for Sample Data",
"title": "Storage Config for Sample Data",
"$ref": "../connectionBasicType.json#/definitions/sampleDataStorageConfig"
}
},

View File

@ -83,7 +83,7 @@
"$ref": "../connectionBasicType.json#/definitions/supportsQueryComment"
},
"sampleDataStorageConfig": {
"title": "S3 Config for Sample Data",
"title": "Storage Config for Sample Data",
"$ref": "../connectionBasicType.json#/definitions/sampleDataStorageConfig"
}
},

View File

@ -108,7 +108,7 @@
"$ref": "../connectionBasicType.json#/definitions/supportsQueryComment"
},
"sampleDataStorageConfig": {
"title": "S3 Config for Sample Data",
"title": "Storage Config for Sample Data",
"$ref": "../connectionBasicType.json#/definitions/sampleDataStorageConfig"
}
},

View File

@ -87,7 +87,7 @@
"$ref": "../connectionBasicType.json#/definitions/supportsQueryComment"
},
"sampleDataStorageConfig": {
"title": "S3 Config for Sample Data",
"title": "Storage Config for Sample Data",
"$ref": "../connectionBasicType.json#/definitions/sampleDataStorageConfig"
}
},