Add greenplum docs & fix couchbase docs (#13722)

Mayur Singal 2023-10-26 16:04:03 +05:30 committed by GitHub
parent 17421b75a6
commit e7f3218459
10 changed files with 985 additions and 2 deletions


@ -0,0 +1,142 @@
---
title: Greenplum
slug: /connectors/database/greenplum
---
# Greenplum
{% multiTablesWrapper %}
| Feature | Status |
| :----------------- | :--------------------------- |
| Stage | PROD |
| Metadata | {% icon iconName="check" /%} |
| Query Usage | {% icon iconName="cross" /%} |
| Data Profiler | {% icon iconName="check" /%} |
| Data Quality | {% icon iconName="check" /%} |
| Lineage | {% icon iconName="cross" /%} |
| DBT | {% icon iconName="check" /%} |
| Supported Versions | - |

| Feature | Status |
| :----------- | :--------------------------- |
| Lineage | {% icon iconName="cross" /%} |
| Table-level | {% icon iconName="cross" /%} |
| Column-level | {% icon iconName="cross" /%} |
{% /multiTablesWrapper %}
In this section, we provide guides and references to use the Greenplum connector.
Configure and schedule Greenplum metadata and profiler workflows from the OpenMetadata UI:
- [Requirements](#requirements)
- [Metadata Ingestion](#metadata-ingestion)
- [Query Usage](/connectors/ingestion/workflows/usage)
- [Data Profiler](/connectors/ingestion/workflows/profiler)
- [Data Quality](/connectors/ingestion/workflows/data-quality)
- [Lineage](/connectors/ingestion/lineage)
- [dbt Integration](/connectors/ingestion/workflows/dbt)
{% partial file="/v1.2/connectors/ingestion-modes-tiles.md" variables={yamlPath: "/connectors/database/greenplum/yaml"} /%}
## Requirements
## Metadata Ingestion
{% partial
file="/v1.2/connectors/metadata-ingestion-ui.md"
variables={
connector: "Greenplum",
selectServicePath: "/images/v1.2/connectors/greenplum/select-service.png",
addNewServicePath: "/images/v1.2/connectors/greenplum/add-new-service.png",
serviceConnectionPath: "/images/v1.2/connectors/greenplum/service-connection.png",
}
/%}
{% stepsContainer %}
{% extraContent parentTagName="stepsContainer" %}
#### Connection Details
- **Username**: Specify the User to connect to Greenplum. It should have enough privileges to read all the metadata.
- **Auth Type**: Basic Auth or IAM-based auth to connect to instances / cloud RDS.
- **Basic Auth**:
- **Password**: Password to connect to Greenplum
- **IAM Based Auth**:
- **AWS Access Key ID** & **AWS Secret Access Key**: When you interact with AWS, you specify your AWS security credentials to verify who you are and whether you have
permission to access the resources that you are requesting. AWS uses the security credentials to authenticate and
authorize your requests ([docs](https://docs.aws.amazon.com/IAM/latest/UserGuide/security-creds.html)).
Access keys consist of two parts: An **access key ID** (for example, `AKIAIOSFODNN7EXAMPLE`), and a **secret access key** (for example, `wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY`).
You must use both the access key ID and secret access key together to authenticate your requests.
You can find further information on how to manage your access keys [here](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html).
- **AWS Region**: Each AWS Region is a separate geographic area in which AWS clusters data centers ([docs](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.RegionsAndAvailabilityZones.html)).
As AWS can have instances in multiple regions, we need to know the region the service you want to reach belongs to.
Note that the AWS Region is the only required parameter when configuring a connection. When connecting to the
services programmatically, there are different ways in which we can extract and use the rest of AWS configurations.
You can find further information about configuring your credentials [here](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html#configuring-credentials).
- **AWS Session Token (optional)**: If you are using temporary credentials to access your services, you will need to provide the AWS Access Key ID
and AWS Secret Access Key, along with an AWS Session Token.
You can find more information on [Using temporary credentials with AWS resources](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_use-resources.html).
- **Endpoint URL (optional)**: To connect programmatically to an AWS service, you use an endpoint. An *endpoint* is the URL of the
entry point for an AWS web service. The AWS SDKs and the AWS Command Line Interface (AWS CLI) automatically use the
default endpoint for each service in an AWS Region. But you can specify an alternate endpoint for your API requests.
Find more information on [AWS service endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html).
- **Profile Name**: A named profile is a collection of settings and credentials that you can apply to an AWS CLI command.
When you specify a profile to run a command, the settings and credentials are used to run that command.
Multiple named profiles can be stored in the config and credentials files.
Fill in this field if you'd like to use a profile other than `default`.
Find more information about [Named profiles for the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-profiles.html).
- **Assume Role Arn**: Typically, you use `AssumeRole` within your account or for cross-account access. In this field you'll set the
`ARN` (Amazon Resource Name) of the role in the other account.
A user who wants to access a role in a different account must also have permissions that are delegated from the account
administrator. The administrator must attach a policy that allows the user to call `AssumeRole` for the `ARN` of the role in the other account.
This is a required field if you'd like to `AssumeRole`.
Find more information on [AssumeRole](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html).
- **Assume Role Session Name**: An identifier for the assumed role session. Use the role session name to uniquely identify a session when the same role
is assumed by different principals or for different reasons.
By default, we'll use the name `OpenMetadataSession`.
Find more information about the [Role Session Name](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html#:~:text=An%20identifier%20for%20the%20assumed%20role%20session.).
- **Assume Role Source Identity**: The source identity specified by the principal that is calling the `AssumeRole` operation. You can use source identity
information in AWS CloudTrail logs to determine who took actions with a role.
Find more information about [Source Identity](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html#:~:text=Required%3A%20No-,SourceIdentity,-The%20source%20identity).
- **Host and Port**: Enter the fully qualified hostname and port number for your Greenplum deployment in the Host and Port field.
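For reference, the fields above map to the `serviceConnection` section of the ingestion YAML. Here is a hedged, minimal sketch assuming Basic Auth, with placeholder values (the commented block shows the IAM-based alternative):

```yaml
serviceConnection:
  config:
    type: Greenplum
    username: <username>
    authType:
      password: <password>
    # authType:
    #   awsConfig:
    #     awsAccessKeyId: <access key id>
    #     awsSecretAccessKey: <secret access key>
    #     awsRegion: <aws region>
    hostPort: <host>:<port>
```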
{% partial file="/v1.2/connectors/database/advanced-configuration.md" /%}
{% /extraContent %}
{% partial file="/v1.2/connectors/test-connection.md" /%}
{% partial file="/v1.2/connectors/database/configure-ingestion.md" /%}
{% partial file="/v1.2/connectors/ingestion-schedule-and-deploy.md" /%}
{% /stepsContainer %}
{% partial file="/v1.2/connectors/troubleshooting.md" /%}
{% partial file="/v1.2/connectors/database/related.md" /%}


@ -0,0 +1,661 @@
---
title: Run the Greenplum Connector Externally
slug: /connectors/database/greenplum/yaml
---
# Run the Greenplum Connector Externally
{% multiTablesWrapper %}
| Feature | Status |
| :----------------- | :--------------------------- |
| Stage | PROD |
| Metadata | {% icon iconName="check" /%} |
| Query Usage | {% icon iconName="cross" /%} |
| Data Profiler | {% icon iconName="check" /%} |
| Data Quality | {% icon iconName="check" /%} |
| Lineage | {% icon iconName="cross" /%} |
| DBT | {% icon iconName="check" /%} |
| Supported Versions | - |

| Feature | Status |
| :----------- | :--------------------------- |
| Lineage | {% icon iconName="cross" /%} |
| Table-level | {% icon iconName="cross" /%} |
| Column-level | {% icon iconName="cross" /%} |
{% /multiTablesWrapper %}
In this section, we provide guides and references to use the Greenplum connector.
Configure and schedule Greenplum metadata and profiler workflows from the OpenMetadata UI:
- [Requirements](#requirements)
- [Metadata Ingestion](#metadata-ingestion)
- [Query Usage](/connectors/ingestion/workflows/usage)
- [Data Profiler](#data-profiler)
- [Lineage](#lineage)
- [dbt Integration](#dbt-integration)
{% partial file="/v1.2/connectors/external-ingestion-deployment.md" /%}
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{%/inlineCallout%}
### Python Requirements
To run the Greenplum ingestion, you will need to install:
```bash
pip3 install "openmetadata-ingestion[postgres]"
```
## Metadata Ingestion
All connectors are defined as JSON Schemas.
[Here](https://github.com/open-metadata/OpenMetadata/blob/main/openmetadata-spec/src/main/resources/json/schema/entity/services/connections/database/greenplumConnection.json)
you can find the structure to create a connection to Greenplum.
In order to create and run a Metadata Ingestion workflow, we will follow
the steps to create a YAML configuration able to connect to the source,
process the Entities if needed, and reach the OpenMetadata server.
The workflow is modeled around the following
[JSON Schema](https://github.com/open-metadata/OpenMetadata/blob/main/openmetadata-spec/src/main/resources/json/schema/metadataIngestion/workflow.json)
### 1. Define the YAML Config
This is a sample config for Greenplum:
{% codePreview %}
{% codeInfoContainer %}
#### Source Configuration - Service Connection
{% codeInfo srNumber=1 %}
**username**: Specify the User to connect to Greenplum. It should have enough privileges to read all the metadata.
{% /codeInfo %}
{% codeInfo srNumber=2 %}
**authType**: Choose between Basic Auth and IAM-based auth.
#### Basic Auth
**password**: Password to connect to Greenplum (used with the Basic Auth type).
{% /codeInfo %}
{% codeInfo srNumber=3 %}
#### IAM Based Auth
- **awsAccessKeyId** & **awsSecretAccessKey**: When you interact with AWS, you specify your AWS security credentials to verify who you are and whether you have
permission to access the resources that you are requesting. AWS uses the security credentials to authenticate and
authorize your requests ([docs](https://docs.aws.amazon.com/IAM/latest/UserGuide/security-creds.html)).
Access keys consist of two parts: An **access key ID** (for example, `AKIAIOSFODNN7EXAMPLE`), and a **secret access key** (for example, `wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY`).
You must use both the access key ID and secret access key together to authenticate your requests.
You can find further information on how to manage your access keys [here](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html).
**awsSessionToken**: If you are using temporary credentials to access your services, you will need to provide the AWS Access Key ID
and AWS Secret Access Key, along with an AWS Session Token.
**awsRegion**: Each AWS Region is a separate geographic area in which AWS clusters data centers ([docs](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.RegionsAndAvailabilityZones.html)).
As AWS can have instances in multiple regions, we need to know the region the service you want to reach belongs to.
Note that the AWS Region is the only required parameter when configuring a connection. When connecting to the
services programmatically, there are different ways in which we can extract and use the rest of AWS configurations.
You can find further information about configuring your credentials [here](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html#configuring-credentials).
**endPointURL**: To connect programmatically to an AWS service, you use an endpoint. An *endpoint* is the URL of the
entry point for an AWS web service. The AWS SDKs and the AWS Command Line Interface (AWS CLI) automatically use the
default endpoint for each service in an AWS Region. But you can specify an alternate endpoint for your API requests.
Find more information on [AWS service endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html).
**profileName**: A named profile is a collection of settings and credentials that you can apply to an AWS CLI command.
When you specify a profile to run a command, the settings and credentials are used to run that command.
Multiple named profiles can be stored in the config and credentials files.
Fill in this field if you'd like to use a profile other than `default`.
Find more information about [Named profiles for the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-profiles.html).
**assumeRoleArn**: Typically, you use `AssumeRole` within your account or for cross-account access. In this field you'll set the
`ARN` (Amazon Resource Name) of the role in the other account.
A user who wants to access a role in a different account must also have permissions that are delegated from the account
administrator. The administrator must attach a policy that allows the user to call `AssumeRole` for the `ARN` of the role in the other account.
This is a required field if you'd like to `AssumeRole`.
Find more information on [AssumeRole](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html).
**assumeRoleSessionName**: An identifier for the assumed role session. Use the role session name to uniquely identify a session when the same role
is assumed by different principals or for different reasons.
By default, we'll use the name `OpenMetadataSession`.
Find more information about the [Role Session Name](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html#:~:text=An%20identifier%20for%20the%20assumed%20role%20session.).
**assumeRoleSourceIdentity**: The source identity specified by the principal that is calling the `AssumeRole` operation. You can use source identity
information in AWS CloudTrail logs to determine who took actions with a role.
Find more information about [Source Identity](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html#:~:text=Required%3A%20No-,SourceIdentity,-The%20source%20identity).
{% /codeInfo %}
{% codeInfo srNumber=4 %}
**hostPort**: Enter the fully qualified hostname and port number for your Greenplum deployment in the Host and Port field.
{% /codeInfo %}
{% codeInfo srNumber=5 %}
**database**: Initial Greenplum database to connect to. If you want to ingest all databases, set ingestAllDatabases to true.
{% /codeInfo %}
{% codeInfo srNumber=6 %}
**ingestAllDatabases**: Ingest data from all databases in Greenplum. You can use databaseFilterPattern on top of this.
{% /codeInfo %}
#### Source Configuration - Source Config
{% codeInfo srNumber=9 %}
The `sourceConfig` is defined [here](https://github.com/open-metadata/OpenMetadata/blob/main/openmetadata-spec/src/main/resources/json/schema/metadataIngestion/databaseServiceMetadataPipeline.json):
**markDeletedTables**: To flag tables as soft-deleted if they are not present anymore in the source system.
**includeTables**: true or false, to ingest table data. Default is true.
**includeViews**: true or false, to ingest view definitions.
**databaseFilterPattern**, **schemaFilterPattern**, **tableFilterPattern**: Note that the filter supports regex as include or exclude. You can find examples [here](/connectors/ingestion/workflows/metadata/filter-patterns/database)
{% /codeInfo %}
#### Sink Configuration
{% codeInfo srNumber=10 %}
To send the metadata to OpenMetadata, it needs to be specified as `type: metadata-rest`.
{% /codeInfo %}
{% partial file="/v1.2/connectors/workflow-config.md" /%}
#### Advanced Configuration
{% codeInfo srNumber=7 %}
**Connection Options (Optional)**: Enter the details for any additional connection options that can be sent to Greenplum during the connection. These details must be added as Key-Value pairs.
{% /codeInfo %}
{% codeInfo srNumber=8 %}
**Connection Arguments (Optional)**: Enter the details for any additional connection arguments such as security or protocol configs that can be sent to Greenplum during the connection. These details must be added as Key-Value pairs.
- In case you are using Single-Sign-On (SSO) for authentication, add the `authenticator` details in the Connection Arguments as a Key-Value pair as follows: `"authenticator" : "sso_login_url"`
{% /codeInfo %}
{% /codeInfoContainer %}
{% codeBlock fileName="filename.yaml" %}
```yaml
source:
  type: greenplum
  serviceName: local_greenplum
  serviceConnection:
    config:
      type: Greenplum
```
```yaml {% srNumber=1 %}
      username: username
```
```yaml {% srNumber=2 %}
      authType:
        password: <password>
```
```yaml {% srNumber=3 %}
      authType:
        awsConfig:
          awsAccessKeyId: access key id
          awsSecretAccessKey: access secret key
          awsRegion: aws region name
```
```yaml {% srNumber=4 %}
      hostPort: localhost:5432
```
```yaml {% srNumber=5 %}
      database: database
```
```yaml {% srNumber=6 %}
      ingestAllDatabases: true
```
```yaml {% srNumber=7 %}
      # connectionOptions:
      #   key: value
```
```yaml {% srNumber=8 %}
      # connectionArguments:
      #   key: value
```
```yaml {% srNumber=9 %}
  sourceConfig:
    config:
      type: DatabaseMetadata
      markDeletedTables: true
      includeTables: true
      includeViews: true
      # includeTags: true
      # databaseFilterPattern:
      #   includes:
      #     - database1
      #     - database2
      #   excludes:
      #     - database3
      #     - database4
      # schemaFilterPattern:
      #   includes:
      #     - schema1
      #     - schema2
      #   excludes:
      #     - schema3
      #     - schema4
      # tableFilterPattern:
      #   includes:
      #     - users
      #     - type_test
      #   excludes:
      #     - table3
      #     - table4
```
```yaml {% srNumber=10 %}
sink:
  type: metadata-rest
  config: {}
```
{% partial file="/v1.2/connectors/workflow-config-yaml.md" /%}
{% /codeBlock %}
{% /codePreview %}
### 2. Run with the CLI
First, we will need to save the YAML file. Afterward, and with all requirements installed, we can run:
```bash
metadata ingest -c <path-to-yaml>
```
Note that from connector to connector, this recipe will always be the same. By updating the YAML configuration,
you will be able to extract metadata from different sources.
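For convenience, here is a hedged sketch assembling the snippets above into a single ingestion file. Credentials, hostnames, and the database name are placeholders; append the `workflowConfig` block described in the workflow configuration reference above:

```yaml
source:
  type: greenplum
  serviceName: local_greenplum
  serviceConnection:
    config:
      type: Greenplum
      username: <username>
      authType:
        password: <password>
      hostPort: localhost:5432
      database: <database>
  sourceConfig:
    config:
      type: DatabaseMetadata
      markDeletedTables: true
      includeTables: true
      includeViews: true
sink:
  type: metadata-rest
  config: {}
```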
## Data Profiler
The Data Profiler workflow uses the `orm-profiler` processor.
After running a Metadata Ingestion workflow, we can run the Data Profiler workflow.
The `serviceName` must be the same as the one used for Metadata Ingestion, so that the ingestion bot can get the `serviceConnection` details from the server.
### 1. Define the YAML Config
This is a sample config for the profiler:
{% codePreview %}
{% codeInfoContainer %}
{% codeInfo srNumber=17 %}
#### Source Configuration - Source Config
You can find all the definitions and types for the `sourceConfig` [here](https://github.com/open-metadata/OpenMetadata/blob/main/openmetadata-spec/src/main/resources/json/schema/metadataIngestion/databaseServiceProfilerPipeline.json).
**generateSampleData**: Option to turn on/off generating sample data.
{% /codeInfo %}
{% codeInfo srNumber=18 %}
**profileSample**: Percentage of data or number of rows on which we want to execute the profiler and tests.
{% /codeInfo %}
{% codeInfo srNumber=19 %}
**threadCount**: Number of threads to use during metric computations.
{% /codeInfo %}
{% codeInfo srNumber=20 %}
**processPiiSensitive**: Optional configuration to automatically tag columns that might contain sensitive information.
{% /codeInfo %}
{% codeInfo srNumber=21 %}
**confidence**: Set the confidence level above which a column will be flagged as PII sensitive.
{% /codeInfo %}
{% codeInfo srNumber=22 %}
**timeoutSeconds**: Profiler Timeout in Seconds
{% /codeInfo %}
{% codeInfo srNumber=23 %}
**databaseFilterPattern**: Regex to only fetch databases that match the pattern.
{% /codeInfo %}
{% codeInfo srNumber=24 %}
**schemaFilterPattern**: Regex to only fetch schemas that match the pattern.
{% /codeInfo %}
{% codeInfo srNumber=25 %}
**tableFilterPattern**: Regex to only fetch tables that match the pattern.
{% /codeInfo %}
{% codeInfo srNumber=26 %}
#### Processor Configuration
Choose the `orm-profiler`. Its config can also be updated to define tests from the YAML itself instead of the UI:
**tableConfig**: `tableConfig` allows you to set up some configuration at the table level.
{% /codeInfo %}
{% codeInfo srNumber=27 %}
#### Sink Configuration
To send the metadata to OpenMetadata, it needs to be specified as `type: metadata-rest`.
{% /codeInfo %}
{% codeInfo srNumber=28 %}
#### Workflow Configuration
The main property here is the `openMetadataServerConfig`, where you can define the host and security provider of your OpenMetadata installation.
For a simple, local installation using our docker containers, this looks like:
{% /codeInfo %}
{% /codeInfoContainer %}
{% codeBlock fileName="filename.yaml" %}
```yaml
source:
  type: greenplum
  serviceName: local_greenplum
  sourceConfig:
    config:
      type: Profiler
```
```yaml {% srNumber=17 %}
      generateSampleData: true
```
```yaml {% srNumber=18 %}
      # profileSample: 85
```
```yaml {% srNumber=19 %}
      # threadCount: 5
```
```yaml {% srNumber=20 %}
      processPiiSensitive: false
```
```yaml {% srNumber=21 %}
      # confidence: 80
```
```yaml {% srNumber=22 %}
      # timeoutSeconds: 43200
```
```yaml {% srNumber=23 %}
      # databaseFilterPattern:
      #   includes:
      #     - database1
      #     - database2
      #   excludes:
      #     - database3
      #     - database4
```
```yaml {% srNumber=24 %}
      # schemaFilterPattern:
      #   includes:
      #     - schema1
      #     - schema2
      #   excludes:
      #     - schema3
      #     - schema4
```
```yaml {% srNumber=25 %}
      # tableFilterPattern:
      #   includes:
      #     - table1
      #     - table2
      #   excludes:
      #     - table3
      #     - table4
```
```yaml {% srNumber=26 %}
processor:
  type: orm-profiler
  config: {}  # Remove braces if adding properties
  # tableConfig:
  #   - fullyQualifiedName: <table fqn>
  #     profileSample: <number between 0 and 99> # default will be 100 if omitted
  #     profileQuery: <query to use for sampling data for the profiler>
  #     columnConfig:
  #       excludeColumns:
  #         - <column name>
  #       includeColumns:
  #         - columnName: <column name>
  #         - metrics:
  #           - MEAN
  #           - MEDIAN
  #           - ...
  #     partitionConfig:
  #       enablePartitioning: <set to true to use partitioning>
  #       partitionColumnName: <partition column name. Must be a timestamp or datetime/date field type>
  #       partitionInterval: <partition interval>
  #       partitionIntervalUnit: <YEAR, MONTH, DAY, HOUR>
```
```yaml {% srNumber=27 %}
sink:
  type: metadata-rest
  config: {}
```
```yaml {% srNumber=28 %}
workflowConfig:
  # loggerLevel: DEBUG # DEBUG, INFO, WARN or ERROR
  openMetadataServerConfig:
    hostPort: <OpenMetadata host and port>
    authProvider: <OpenMetadata auth provider>
```
{% /codeBlock %}
{% /codePreview %}
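As mentioned above, for a simple local installation using the OpenMetadata Docker containers, the `workflowConfig` placeholders can be filled in roughly as follows. This is a hedged sketch: the JWT token is the ingestion-bot token from your own deployment, shown here only as a placeholder.

```yaml
workflowConfig:
  loggerLevel: INFO  # DEBUG, INFO, WARN or ERROR
  openMetadataServerConfig:
    hostPort: http://localhost:8585/api
    authProvider: openmetadata
    securityConfig:
      jwtToken: <ingestion-bot JWT token>
```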
- You can learn more about how to configure and run the Profiler Workflow to extract profiler data and execute data quality tests [here](/connectors/ingestion/workflows/profiler)
### 2. Prepare the Profiler DAG
Here, we follow a similar approach as with the metadata and usage pipelines, although we will use a different Workflow class:
{% codePreview %}
{% codeInfoContainer %}
{% codeInfo srNumber=30 %}
#### Import necessary modules
The `ProfilerWorkflow` class being imported is part of the OpenMetadata ingestion framework and defines the process of extracting profiler data.
Here we are also importing all the basic requirements to parse YAMLs, handle dates and build our DAG.
{% /codeInfo %}
{% codeInfo srNumber=31 %}
**Default arguments for all tasks in the Airflow DAG.**
- Default arguments dictionary contains default arguments for tasks in the DAG, including the owner's name, email address, number of retries, retry delay, and execution timeout.
{% /codeInfo %}
{% codeInfo srNumber=32 %}
- **config**: Specifies the config for the profiler that we prepared above.
{% /codeInfo %}
{% codeInfo srNumber=33 %}
- **metadata_ingestion_workflow()**: This code defines a function `metadata_ingestion_workflow()` that loads a YAML configuration, creates a `ProfilerWorkflow` object, executes the workflow, checks its status, prints the status to the console, and stops the workflow.
{% /codeInfo %}
{% codeInfo srNumber=34 %}
- **DAG**: Creates a DAG using the Airflow framework; tune the DAG configuration to whatever fits your requirements.
- For more Airflow DAGs creation details visit [here](https://airflow.apache.org/docs/apache-airflow/stable/core-concepts/dags.html#declaring-a-dag).
{% /codeInfo %}
{% /codeInfoContainer %}
{% codeBlock fileName="filename.py" %}
```python {% srNumber=30 %}
import yaml
from datetime import timedelta
from airflow import DAG
from metadata.workflow.profiler import ProfilerWorkflow
from metadata.workflow.workflow_output_handler import print_status
try:
    from airflow.operators.python import PythonOperator
except ModuleNotFoundError:
    from airflow.operators.python_operator import PythonOperator
from airflow.utils.dates import days_ago
```
```python {% srNumber=31 %}
default_args = {
"owner": "user_name",
"email_on_failure": False,
"retries": 3,
"retry_delay": timedelta(seconds=10),
"execution_timeout": timedelta(minutes=60),
}
```
```python {% srNumber=32 %}
config = """
<your YAML configuration>
"""
```
```python {% srNumber=33 %}
def metadata_ingestion_workflow():
    workflow_config = yaml.safe_load(config)
    workflow = ProfilerWorkflow.create(workflow_config)
    workflow.execute()
    workflow.raise_from_status()
    print_status(workflow)
    workflow.stop()
```
```python {% srNumber=34 %}
with DAG(
"profiler_example",
default_args=default_args,
description="An example DAG which runs a OpenMetadata ingestion workflow",
start_date=days_ago(1),
is_paused_upon_creation=False,
catchup=False,
) as dag:
ingest_task = PythonOperator(
task_id="profile_and_test_using_recipe",
python_callable=metadata_ingestion_workflow,
)
```
{% /codeBlock %}
{% /codePreview %}
## Lineage
You can learn more about how to ingest lineage [here](/connectors/ingestion/workflows/lineage).
## dbt Integration
You can learn more about how to ingest dbt models' definitions and their lineage [here](/connectors/ingestion/workflows/dbt).


@ -19,6 +19,7 @@ This is the supported list of connectors for Database Services:
- [Druid](/connectors/database/druid)
- [DynamoDB](/connectors/database/dynamodb)
- [Glue](/connectors/database/glue)
- [Greenplum](/connectors/database/greenplum)
- [Hive](/connectors/database/hive)
- [MariaDB](/connectors/database/mariadb)
- [MSSQL](/connectors/database/mssql)


@ -104,6 +104,7 @@ This is a sample config for Postgres:
**username**: Specify the User to connect to Postgres. It should have enough privileges to read all the metadata.
{% /codeInfo %}
{% codeInfo srNumber=2 %}
**authType**: Choose from basic auth and IAM based auth.
@ -179,6 +180,8 @@ information in AWS CloudTrail logs to determine who took actions with a role.
Find more information about [Source Identity](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html#:~:text=Required%3A%20No-,SourceIdentity,-The%20source%20identity).
{% /codeInfo %}
{% codeInfo srNumber=4 %}


@ -269,6 +269,10 @@ site_menu:
    url: /connectors/database/glue
  - category: Connectors / Database / Glue / Run Externally
    url: /connectors/database/glue/yaml
  - category: Connectors / Database / Greenplum
    url: /connectors/database/greenplum
  - category: Connectors / Database / Greenplum / Run Externally
    url: /connectors/database/greenplum/yaml
  - category: Connectors / Database / Hive
    url: /connectors/database/hive
  - category: Connectors / Database / Hive / Run Externally

Three binary image files added (80 KiB, 226 KiB, and 329 KiB); contents not shown.


@ -20,9 +20,11 @@ $$
$$section
### Hostport $(id="hostport")
This parameter specifies the hostname or endpoint used for the client connection to your Couchbase instance.
$$
$$section
### Bucket Name $(id="bucket")
In OpenMetadata, the Database Service hierarchy works as follows:
```
@ -31,4 +33,3 @@ Database Service > Bucket > Schema > Table
In the case of Couchbase, if you don't provide a bucket name, then by default it will ingest all available buckets.
$$


@ -0,0 +1,171 @@
# Greenplum
In this section, we provide guides and references to use the Greenplum connector.
## Requirements
### Profiler & Data Quality
Executing the profiler workflow or data quality tests will require the user to have `SELECT` permission on the tables/schemas where the profiler/tests will be executed. More information on the profiler workflow setup can be found [here](https://docs.open-metadata.org/connectors/ingestion/workflows/profiler) and data quality tests [here](https://docs.open-metadata.org/connectors/ingestion/workflows/data-quality).
You can find further information on the Greenplum connector in the [docs](https://docs.open-metadata.org/connectors/database/greenplum).
## Connection Details
$$section
### Connection Scheme $(id="scheme")
SQLAlchemy driver scheme options.
$$
$$section
### Username $(id="username")
Username to connect to Greenplum. This user should have privileges to read all the metadata in Greenplum.
$$
$$section
### Auth Config $(id="authType")
There are 2 types of auth configs:
- Basic Auth.
- IAM based Auth.
Users can authenticate the Greenplum instance with the auth type `Basic Authentication`, i.e. a password, **or** by using `IAM based Authentication` to connect to AWS related services.
$$
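As an illustration, here is a hedged sketch of the two `authType` shapes as they appear in the connection config. Field names follow the YAML guide above; values are placeholders, and the two alternatives are shown as separate YAML documents (use whichever applies):

```yaml
# Basic Auth
authType:
  password: <password>
---
# IAM based Auth
authType:
  awsConfig:
    awsAccessKeyId: <access key id>
    awsSecretAccessKey: <secret access key>
    awsRegion: <aws region>
```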
## Basic Auth
$$section
### Password $(id="password")
Password to connect to Greenplum.
$$
## IAM Auth Config
$$section
### AWS Access Key ID $(id="awsAccessKeyId")
When you interact with AWS, you specify your AWS security credentials to verify who you are and whether you have permission to access the resources that you are requesting. AWS uses the security credentials to authenticate and authorize your requests ([docs](https://docs.aws.amazon.com/IAM/latest/UserGuide/security-creds.html)).
Access keys consist of two parts:
1. An access key ID (for example, `AKIAIOSFODNN7EXAMPLE`),
2. And a secret access key (for example, `wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY`).
You must use both the access key ID and secret access key together to authenticate your requests.
You can find further information on how to manage your access keys [here](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html)
$$
$$section
### AWS Secret Access Key $(id="awsSecretAccessKey")
Secret access key (for example, `wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY`).
$$
$$section
### AWS Region $(id="awsRegion")
Each AWS Region is a separate geographic area in which AWS clusters data centers ([docs](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.RegionsAndAvailabilityZones.html)).
As AWS can have instances in multiple regions, we need to know the region the service you want to reach belongs to.
Note that the AWS Region is the only required parameter when configuring a connection. When connecting to the services programmatically, there are different ways in which we can extract and use the rest of AWS configurations. You can find further information about configuring your credentials [here](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html#configuring-credentials).
$$
$$section
### AWS Session Token $(id="awsSessionToken")
If you are using temporary credentials to access your services, you will need to provide the AWS Access Key ID and AWS Secret Access Key, along with an AWS Session Token.
You can find more information on [Using temporary credentials with AWS resources](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_use-resources.html).
$$
$$section
### Endpoint URL $(id="endPointURL")
To connect programmatically to an AWS service, you use an endpoint. An *endpoint* is the URL of the entry point for an AWS web service. The AWS SDKs and the AWS Command Line Interface (AWS CLI) automatically use the default endpoint for each service in an AWS Region. But you can specify an alternate endpoint for your API requests.
Find more information on [AWS service endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html).
$$
$$section
### Profile Name $(id="profileName")
A named profile is a collection of settings and credentials that you can apply to an AWS CLI command. When you specify a profile to run a command, the settings and credentials are used to run that command. Multiple named profiles can be stored in the config and credentials files.
Fill in this field if you'd like to use a profile other than `default`.
Find more information about [Named profiles for the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-profiles.html).
$$
$$section
### Assume Role ARN $(id="assumeRoleArn")
Typically, you use `AssumeRole` within your account or for cross-account access. In this field you'll set the `ARN` (Amazon Resource Name) of the role in the other account.
A user who wants to access a role in a different account must also have permissions that are delegated from the account administrator. The administrator must attach a policy that allows the user to call `AssumeRole` for the `ARN` of the role in the other account.
This is a required field if you'd like to `AssumeRole`.
Find more information on [AssumeRole](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html).
$$
$$section
### Assume Role Session Name $(id="assumeRoleSessionName")
An identifier for the assumed role session. Use the role session name to uniquely identify a session when the same role is assumed by different principals or for different reasons.
By default, we'll use the name `OpenMetadataSession`.
Find more information about the [Role Session Name](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html#:~:text=An%20identifier%20for%20the%20assumed%20role%20session.).
$$
$$section
### Assume Role Source Identity $(id="assumeRoleSourceIdentity")
The source identity specified by the principal that is calling the `AssumeRole` operation. You can use source identity information in AWS CloudTrail logs to determine who took actions with a role.
Find more information about [Source Identity](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html#:~:text=Required%3A%20No-,SourceIdentity,-The%20source%20identity).
$$
$$section
### Host and Port $(id="hostPort")
This parameter specifies the host and port of the Greenplum instance. This should be specified as a string in the format `hostname:port`. For example, you might set the hostPort parameter to `localhost:5432`.
If you are running the OpenMetadata ingestion in Docker and your services are hosted on the `localhost`, then use `host.docker.internal:5432` as the value.
$$
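For example (illustrative values only; the Docker form is needed when the ingestion container must reach a service running on the host machine):

```yaml
hostPort: localhost:5432
# hostPort: host.docker.internal:5432  # when running the ingestion inside Docker
```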
$$section
### Database $(id="database")
Initial Greenplum database to connect to. If you want to ingest all databases, set `ingestAllDatabases` to true.
$$
$$section
### SSL Mode $(id="sslMode")
SSL Mode to connect to the Greenplum database, e.g. `prefer`, `verify-ca`, `allow`, etc.
$$
$$note
If you are using `IAM auth`, select either `allow` (recommended) or another option based on your use case.
$$
$$section
### Ingest All Databases $(id="ingestAllDatabases")
If ticked, the workflow will be able to ingest all databases in the cluster. If not ticked, the workflow will only ingest tables from the database set above.
$$
$$section
### Connection Arguments $(id="connectionArguments")
Additional connection arguments such as security or protocol configs that can be sent to the service during the connection.
$$
$$section
### Connection Options $(id="connectionOptions")
Additional connection options to build the URL that can be sent to the service during the connection.
$$
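As a hedged example, both fields accept arbitrary key-value pairs. The `authenticator` entry below is the SSO example mentioned in the YAML guide above; the other keys are placeholders:

```yaml
connectionOptions:
  <key>: <value>
connectionArguments:
  authenticator: sso_login_url
```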