
---
title: Run DB2 Connector using the CLI
slug: /connectors/database/db2/cli
---

# Run DB2 using the metadata CLI
{% multiTablesWrapper %}
| Feature            | Status                       |
| :----------------- | :--------------------------- |
| Stage              | PROD                         |
| Metadata           | {% icon iconName="check" /%} |
| Query Usage        | {% icon iconName="cross" /%} |
| Data Profiler      | {% icon iconName="check" /%} |
| Data Quality       | {% icon iconName="check" /%} |
| Lineage            | Partially via Views          |
| DBT                | {% icon iconName="check" /%} |
| Supported Versions | --                           |

| Feature      | Status                       |
| :----------- | :--------------------------- |
| Lineage      | Partially via Views          |
| Table-level  | {% icon iconName="check" /%} |
| Column-level | {% icon iconName="check" /%} |
{% /multiTablesWrapper %}
In this section, we provide guides and references to use the DB2 connector.
Configure and schedule DB2 metadata and profiler workflows from the CLI:
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%} To deploy OpenMetadata, check the Deployment guides. {%/inlineCallout%}
To run the Ingestion via the UI, you'll need to use the OpenMetadata Ingestion Container, which comes shipped with custom Airflow plugins to handle the workflow deployment.
To create a new Db2 user, please follow the guidelines mentioned here.

The Db2 user must have the following permissions to ingest the metadata:
- `SELECT` privilege on `SYSCAT.SCHEMATA` to fetch the metadata of schemas.

```sql
-- Grant SELECT on SYSCAT.SCHEMATA for schema metadata
GRANT SELECT ON SYSCAT.SCHEMATA TO USER_NAME;
```

- `SELECT` privilege on `SYSCAT.TABLES` to fetch the metadata of tables.

```sql
-- Grant SELECT on SYSCAT.TABLES for table metadata
GRANT SELECT ON SYSCAT.TABLES TO USER_NAME;
```

- `SELECT` privilege on `SYSCAT.VIEWS` to fetch the metadata of views.

```sql
-- Grant SELECT on SYSCAT.VIEWS for view metadata
GRANT SELECT ON SYSCAT.VIEWS TO USER_NAME;
```
### Profiler & Data Quality

Executing the profiler workflow or data quality tests will require the user to have `SELECT` permission on the tables/schemas where the profiler/tests will be executed. More information on the profiler workflow setup can be found here and data quality tests here.
### Python Requirements
To run the DB2 ingestion, you will need to install:
```bash
pip3 install "openmetadata-ingestion[db2]"
```
## Metadata Ingestion
All connectors are defined as JSON Schemas. Here you can find the structure to create a connection to DB2.
In order to create and run a Metadata Ingestion workflow, we will follow the steps to create a YAML configuration able to connect to the source, process the Entities if needed, and reach the OpenMetadata server.
The workflow is modeled around the following JSON Schema.
### 1. Define the YAML Config
This is a sample config for DB2:
{% codePreview %}
{% codeInfoContainer %}
#### Source Configuration - Service Connection
{% codeInfo srNumber=1 %}
**username**: Specify the User to connect to DB2. It should have enough privileges to read all the metadata.
{% /codeInfo %}
{% codeInfo srNumber=2 %}
**password**: Password to connect to DB2.
{% /codeInfo %}
{% codeInfo srNumber=3 %}
**hostPort**: Enter the fully qualified hostname and port number for your DB2 deployment in the Host and Port field.
{% /codeInfo %}
{% codeInfo srNumber=4 %}
**database**: Database of the data source.
{% /codeInfo %}
#### Source Configuration - Source Config
{% codeInfo srNumber=7 %}
The `sourceConfig` is defined here:

**markDeletedTables**: To flag tables as soft-deleted if they are not present anymore in the source system.

**includeTables**: true or false, to ingest table data. Default is true.

**includeViews**: true or false, to ingest views definitions.

**databaseFilterPattern**, **schemaFilterPattern**, **tableFilterPattern**: Note that the filter supports regex as include or exclude. You can find examples here.
{% /codeInfo %}
#### Sink Configuration
{% codeInfo srNumber=8 %}
To send the metadata to OpenMetadata, it needs to be specified as `type: metadata-rest`.
{% /codeInfo %}
{% partial file="workflow-config.md" /%}
#### Advanced Configuration
{% codeInfo srNumber=5 %}
**Connection Options (Optional)**: Enter the details for any additional connection options that can be sent to DB2 during the connection. These details must be added as Key-Value pairs.
{% /codeInfo %}
{% codeInfo srNumber=6 %}
**Connection Arguments (Optional)**: Enter the details for any additional connection arguments such as security or protocol configs that can be sent to DB2 during the connection. These details must be added as Key-Value pairs.

- In case you are using Single-Sign-On (SSO) for authentication, add the `authenticator` details in the Connection Arguments as a Key-Value pair as follows: `"authenticator" : "sso_login_url"`
{% /codeInfo %}
{% /codeInfoContainer %}
{% codeBlock fileName="filename.yaml" %}
```yaml
source:
  type: db2
  serviceName: local_db2
  serviceConnection:
    config:
      type: Db2
      username: openmetadata_user
      password: openmetadata_password
      hostPort: localhost:50000
      # databaseSchema: schema
      # connectionOptions:
      #   key: value
      # connectionArguments:
      #   key: value
  sourceConfig:
    config:
      type: DatabaseMetadata
      markDeletedTables: true
      includeTables: true
      includeViews: true
      # includeTags: true
      # databaseFilterPattern:
      #   includes:
      #     - database1
      #     - database2
      #   excludes:
      #     - database3
      #     - database4
      # schemaFilterPattern:
      #   includes:
      #     - schema1
      #     - schema2
      #   excludes:
      #     - schema3
      #     - schema4
      # tableFilterPattern:
      #   includes:
      #     - users
      #     - type_test
      #   excludes:
      #     - table3
      #     - table4
sink:
  type: metadata-rest
  config: {}
```
{% partial file="workflow-config-yaml.md" /%}
{% /codeBlock %}
{% /codePreview %}
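As an illustration of the Advanced Configuration above, and only as a sketch: if your DB2 deployment sits behind SSO, the `connectionArguments` block inside `serviceConnection.config` could carry the authenticator entry as a Key-Value pair. The URL below is a placeholder for your identity provider's login URL:

```yaml
serviceConnection:
  config:
    type: Db2
    # username / password / hostPort as in the sample above
    connectionArguments:
      # Placeholder value: replace with your SSO login URL
      authenticator: "sso_login_url"
```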
### 2. Run with the CLI
First, we will need to save the YAML file. Afterward, and with all requirements installed, we can run:
```bash
metadata ingest -c <path-to-yaml>
```
Note that from connector to connector, this recipe will always be the same. By updating the YAML configuration, you will be able to extract metadata from different sources.
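The filter patterns in the sample above also accept regular expressions, not just literal names. As a hypothetical example (the `SALES_` and `_STAGING` naming is assumed, not from the source), a rule that ingests only sales schemas while skipping staging ones could look like:

```yaml
# Placeholder regex patterns: adapt them to your schema naming convention
schemaFilterPattern:
  includes:
    - ^SALES_.*
  excludes:
    - .*_STAGING$
```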
## Data Profiler
The Data Profiler workflow will be using the `orm-profiler` processor.

After running a Metadata Ingestion workflow, we can run the Data Profiler workflow. The `serviceName` will be the same as the one used in the Metadata Ingestion, so the ingestion bot can get the `serviceConnection` details from the server.
### 1. Define the YAML Config
This is a sample config for the profiler:
{% codePreview %}
{% codeInfoContainer %}
{% codeInfo srNumber=10 %}
#### Source Configuration - Source Config

You can find all the definitions and types for the `sourceConfig` here.

**generateSampleData**: Option to turn on/off generating sample data.
{% /codeInfo %}
{% codeInfo srNumber=11 %}
**profileSample**: Percentage of data or number of rows on which we want to execute the profiler and tests.
{% /codeInfo %}
{% codeInfo srNumber=12 %}
**threadCount**: Number of threads to use during metric computations.
{% /codeInfo %}
{% codeInfo srNumber=13 %}
**processPiiSensitive**: Optional configuration to automatically tag columns that might contain sensitive information.
{% /codeInfo %}
{% codeInfo srNumber=14 %}
**confidence**: Set the confidence value for which you want the column to be marked as PII.
{% /codeInfo %}
{% codeInfo srNumber=15 %}
**timeoutSeconds**: Profiler timeout in seconds.
{% /codeInfo %}
{% codeInfo srNumber=16 %}
**databaseFilterPattern**: Regex to only fetch databases that match the pattern.
{% /codeInfo %}
{% codeInfo srNumber=17 %}
**schemaFilterPattern**: Regex to only fetch schemas that match the pattern.
{% /codeInfo %}
{% codeInfo srNumber=18 %}
**tableFilterPattern**: Regex to only fetch tables that match the pattern.
{% /codeInfo %}
{% codeInfo srNumber=19 %}
#### Processor Configuration

Choose the `orm-profiler`. Its config can also be updated to define tests from the YAML itself instead of the UI:

**tableConfig**: `tableConfig` allows you to set up some configuration at the table level.
{% /codeInfo %}
{% codeInfo srNumber=20 %}
#### Sink Configuration

To send the metadata to OpenMetadata, it needs to be specified as `type: metadata-rest`.
{% /codeInfo %}
{% codeInfo srNumber=21 %}
#### Workflow Configuration

The main property here is the `openMetadataServerConfig`, where you can define the host and security provider of your OpenMetadata installation.
For a simple, local installation using our docker containers, this looks like:
{% /codeInfo %}
{% /codeInfoContainer %}
{% codeBlock fileName="filename.yaml" %}
```yaml
source:
  type: db2
  serviceName: local_db2
  sourceConfig:
    config:
      type: Profiler
      generateSampleData: true
      # profileSample: 85
      # threadCount: 5
      processPiiSensitive: false
      # confidence: 80
      # timeoutSeconds: 43200
      # databaseFilterPattern:
      #   includes:
      #     - database1
      #     - database2
      #   excludes:
      #     - database3
      #     - database4
      # schemaFilterPattern:
      #   includes:
      #     - schema1
      #     - schema2
      #   excludes:
      #     - schema3
      #     - schema4
      # tableFilterPattern:
      #   includes:
      #     - table1
      #     - table2
      #   excludes:
      #     - table3
      #     - table4
processor:
  type: orm-profiler
  config: {} # Remove braces if adding properties
  # tableConfig:
  #   - fullyQualifiedName: <table fqn>
  #     profileSample: <number between 0 and 99> # default will be 100 if omitted
  #     profileQuery: <query to use for sampling data for the profiler>
  #     columnConfig:
  #       excludeColumns:
  #         - <column name>
  #       includeColumns:
  #         - columnName: <column name>
  #         - metrics:
  #             - MEAN
  #             - MEDIAN
  #             - ...
  #     partitionConfig:
  #       enablePartitioning: <set to true to use partitioning>
  #       partitionColumnName: <partition column name. Must be a timestamp or datetime/date field type>
  #       partitionInterval: <partition interval>
  #       partitionIntervalUnit: <YEAR, MONTH, DAY, HOUR>
sink:
  type: metadata-rest
  config: {}
workflowConfig:
  # loggerLevel: DEBUG # DEBUG, INFO, WARN or ERROR
  openMetadataServerConfig:
    hostPort: <OpenMetadata host and port>
    authProvider: <OpenMetadata auth provider>
```
{% /codeBlock %}
{% /codePreview %}
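To make the commented `tableConfig` template above concrete, here is a minimal sketch of a processor block that profiles 50% of a single table and excludes one column from profiling. The fully qualified table name and the column name are placeholders, not real entities:

```yaml
processor:
  type: orm-profiler
  config:
    tableConfig:
      # Placeholder FQN: <service>.<database>.<schema>.<table>
      - fullyQualifiedName: local_db2.db.SALES.ORDERS
        profileSample: 50
        columnConfig:
          excludeColumns:
            - audit_ts # placeholder: column to skip during profiling
```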
- You can learn more about how to configure and run the Profiler Workflow to extract Profiler data and execute the Data Quality from here.
### 2. Run with the CLI
After saving the YAML config, we will run the command the same way we did for the metadata ingestion:
```bash
metadata profile -c <path-to-yaml>
```
Note how instead of running `ingest`, we are using the `profile` command to select the Profiler workflow.
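Before running, make sure the `workflowConfig` placeholders point at your OpenMetadata instance. For a simple local Docker deployment, this typically resolves to something like the sketch below; the JWT token is a placeholder for your ingestion bot's token:

```yaml
workflowConfig:
  openMetadataServerConfig:
    hostPort: "http://localhost:8585/api"
    authProvider: openmetadata
    securityConfig:
      # Placeholder: paste the JWT token of your ingestion bot here
      jwtToken: "<ingestion-bot-jwt-token>"
```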
## dbt Integration
{% tilesContainer %}
{% tile icon="mediation" title="dbt Integration" description="Learn more about how to ingest dbt models' definitions and their lineage." link="/connectors/ingestion/workflows/dbt" /%}
{% /tilesContainer %}
## Related
{% tilesContainer %}
{% tile title="Ingest with Airflow" description="Configure the ingestion using Airflow SDK" link="/connectors/database/db2/airflow" /%}
{% /tilesContainer %}