1. **SQL Connection**, where you will provide the username, password and host.
2. **HDB User Store** [connection](https://help.sap.com/docs/SAP_HANA_PLATFORM/b3ee5778bc2e4a089d3299b82ec762a7/dd95ac9dbb571014a7d7f0234d762fdb.html?version=2.0.05&locale=en-US).
Note that the HDB Store will need to be locally available to the instance running the ingestion process.
If you are unsure about this setting, you can run the ingestion process passing the usual SQL connection details.
##### SQL Connection
If using the SQL Connection, provide the following details:
{% codeInfo srNumber=1 %}
**hostPort**: Host and port of the SAP Hana service. This should be specified as a string in the format `hostname:port`. E.g., `localhost:39041`, `host.docker.internal:39041`.
{% /codeInfo %}
{% codeInfo srNumber=2 %}
**username**: Specify the User to connect to SAP Hana. It should have enough privileges to read all the metadata.
{% /codeInfo %}
{% codeInfo srNumber=3 %}
**password**: Password to connect to SAP Hana.
{% /codeInfo %}
{% codeInfo srNumber=4 %}
**database**: Optional parameter to connect to a specific database.
{% /codeInfo %}
{% codeInfo srNumber=5 %}
**databaseSchema**: databaseSchema of the data source. This is an optional parameter; use it if you would like to restrict the metadata reading to a single schema. When left blank, OpenMetadata Ingestion attempts to scan all the schemas.
{% /codeInfo %}
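For reference, these SQL connection details map to the `serviceConnection` section of the ingestion YAML. A minimal sketch, with placeholder credentials and assuming the field layout of the SapHana connection schema, could look like:

```yaml
source:
  type: sapHana
  serviceName: <service name>
  serviceConnection:
    config:
      type: SapHana
      connection:
        hostPort: localhost:39041
        username: <username>
        password: <password>
        # database: <database>        # optional
        # databaseSchema: <schema>    # optional
```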
##### HDB User Store
If you have a User Store configured, then:
{% codeInfo srNumber=6 %}
**userKey**: HDB Store User Key generated from the command `hdbuserstore SET <KEY> <host:port> <USERNAME> <PASSWORD>`.
{% /codeInfo %}
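Similarly, a minimal sketch of the User Store variant, assuming the key was registered with `hdbuserstore` on the machine running the ingestion:

```yaml
source:
  type: sapHana
  serviceName: <service name>
  serviceConnection:
    config:
      type: SapHana
      connection:
        userKey: <key>
```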
#### Source Configuration - Source Config
{% codeInfo srNumber=9 %}
The `sourceConfig` is defined [here](https://github.com/open-metadata/OpenMetadata/blob/main/openmetadata-spec/src/main/resources/json/schema/metadataIngestion/databaseServiceMetadataPipeline.json):
**markDeletedTables**: To flag tables as soft-deleted if they are not present anymore in the source system.
**includeTables**: true or false, to ingest table data. Default is true.
**includeViews**: true or false, to ingest view definitions.
**databaseFilterPattern**, **schemaFilterPattern**, **tableFilterPattern**: Note that the filter supports regex as include or exclude. You can find examples [here](/connectors/ingestion/workflows/metadata/filter-patterns/database).
{% /codeInfo %}
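As an illustration, these options sit under `sourceConfig.config` in the ingestion YAML; a minimal sketch with placeholder filter values might be:

```yaml
sourceConfig:
  config:
    type: DatabaseMetadata
    markDeletedTables: true
    includeTables: true
    includeViews: true
    # schemaFilterPattern:
    #   includes:
    #     - my_schema
```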
#### Sink Configuration
{% codeInfo srNumber=10 %}
To send the metadata to OpenMetadata, it needs to be specified as `type: metadata-rest`.
**Connection Options (Optional)**: Enter the details for any additional connection options that can be sent to SAP Hana during the connection. These details must be added as Key-Value pairs.
{% /codeInfo %}
{% codeInfo srNumber=8 %}
**Connection Arguments (Optional)**: Enter the details for any additional connection arguments such as security or protocol configs that can be sent to SAP Hana during the connection. These details must be added as Key-Value pairs.
- In case you are using Single-Sign-On (SSO) for authentication, add the `authenticator` details in the Connection Arguments as a Key-Value pair as follows: `"authenticator" : "sso_login_url"`
{% /codeInfo %}
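As an illustration only, the sink and the optional connection options/arguments could be expressed in the ingestion YAML roughly as sketched below; the keys under `connectionOptions` and `connectionArguments` are placeholders, not required settings.

```yaml
sink:
  type: metadata-rest
  config: {}
# Optional extras live under the service connection; the keys below are placeholders.
# serviceConnection:
#   config:
#     type: SapHana
#     connectionOptions:
#       key: value
#     connectionArguments:
#       authenticator: sso_login_url
```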
Create a Python file in your Airflow DAGs directory with the following contents:
{% codePreview %}
{% codeInfoContainer %}
{% codeInfo srNumber=10 %}
#### Import necessary modules
The `Workflow` class that is being imported is a part of a metadata ingestion framework, which defines a process of getting data from different sources and ingesting it into a central metadata repository.
Here we are also importing all the basic requirements to parse YAMLs, handle dates and build our DAG.
{% /codeInfo %}
{% codeInfo srNumber=11 %}
**Default arguments for all tasks in the Airflow DAG.**
- Default arguments dictionary contains default arguments for tasks in the DAG, including the owner's name, email address, number of retries, retry delay, and execution timeout.
{% /codeInfo %}
{% codeInfo srNumber=12 %}
- **config**: Specifies the config for the metadata ingestion as we prepared above.
{% /codeInfo %}
{% codeInfo srNumber=13 %}
- **metadata_ingestion_workflow()**: This code defines a function `metadata_ingestion_workflow()` that loads a YAML configuration, creates a `Workflow` object, executes the workflow, checks its status, prints the status to the console, and stops the workflow.
{% /codeInfo %}
{% codeInfo srNumber=14 %}
- **DAG**: creates a DAG using the Airflow framework, and tunes the DAG configuration to whatever fits your requirements.
- For more details on creating Airflow DAGs, visit [here](https://airflow.apache.org/docs/apache-airflow/stable/core-concepts/dags.html#declaring-a-dag).
{% /codeInfo %}
Note that from connector to connector, this recipe will always be the same.
By updating the `YAML configuration`, you will be able to extract metadata from different sources.
{% /codeInfoContainer %}
{% codeBlock fileName="filename.py" %}
```python {% srNumber=10 %}
import pathlib
import yaml
from datetime import timedelta
from airflow import DAG
from metadata.config.common import load_config_file
from metadata.ingestion.api.workflow import Workflow
from airflow.utils.dates import days_ago
try:
    from airflow.operators.python import PythonOperator
except ModuleNotFoundError:
    from airflow.operators.python_operator import PythonOperator
```
```python {% srNumber=11 %}
default_args = {
"owner": "user_name",
"email": ["username@org.com"],
"email_on_failure": False,
"retries": 3,
"retry_delay": timedelta(minutes=5),
"execution_timeout": timedelta(minutes=60)
}
```
```python {% srNumber=12 %}
config = """
<yourYAMLconfiguration>
"""
```
```python {% srNumber=13 %}
def metadata_ingestion_workflow():
    workflow_config = yaml.safe_load(config)
    workflow = Workflow.create(workflow_config)
    workflow.execute()
    workflow.raise_from_status()
    workflow.print_status()
    workflow.stop()
```
```python {% srNumber=14 %}
with DAG(
    "sample_data",
    default_args=default_args,
    description="An example DAG which runs an OpenMetadata ingestion workflow",
    start_date=days_ago(1),
    is_paused_upon_creation=False,
    schedule_interval='*/5 * * * *',
    catchup=False,
) as dag:
    ingest_task = PythonOperator(
        task_id="ingest_using_recipe",
        python_callable=metadata_ingestion_workflow,
    )
```
{% /codeBlock %}
{% /codePreview %}
## Data Profiler
The Data Profiler workflow uses the `orm-profiler` processor.
After running a Metadata Ingestion workflow, we can run the Data Profiler workflow.
The `serviceName` should be the same as the one used in the Metadata Ingestion workflow, so the ingestion bot can get the `serviceConnection` details from the server.
### 1. Define the YAML Config
This is a sample config for the profiler:
{% codePreview %}
{% codeInfoContainer %}
{% codeInfo srNumber=15 %}
#### Source Configuration - Source Config
You can find all the definitions and types for the `sourceConfig` [here](https://github.com/open-metadata/OpenMetadata/blob/main/openmetadata-spec/src/main/resources/json/schema/metadataIngestion/databaseServiceProfilerPipeline.json).
**generateSampleData**: Option to turn on/off generating sample data.
{% /codeInfo %}
{% codeInfo srNumber=16 %}
**profileSample**: Percentage of data or number of rows on which to run the profiler and tests.
{% /codeInfo %}
{% codeInfo srNumber=17 %}
**threadCount**: Number of threads to use during metric computations.
{% /codeInfo %}
{% codeInfo srNumber=18 %}
**processPiiSensitive**: Optional configuration to automatically tag columns that might contain sensitive information.
{% /codeInfo %}
{% codeInfo srNumber=19 %}
**confidence**: Set the confidence value above which you want a column to be tagged as PII.
{% /codeInfo %}
{% codeInfo srNumber=20 %}
**timeoutSeconds**: Profiler Timeout in Seconds
{% /codeInfo %}
{% codeInfo srNumber=21 %}
**databaseFilterPattern**: Regex to fetch only the databases that match the pattern.
{% /codeInfo %}
{% codeInfo srNumber=22 %}
**schemaFilterPattern**: Regex to fetch only the schemas that match the pattern.
{% /codeInfo %}
{% codeInfo srNumber=23 %}
**tableFilterPattern**: Regex to fetch only the tables that match the pattern.
{% /codeInfo %}
{% codeInfo srNumber=24 %}
#### Processor Configuration
Choose the `orm-profiler`. Its config can also be updated to define tests from the YAML itself instead of the UI:
**tableConfig**: `tableConfig` allows you to set up some configuration at the table level.
{% /codeInfo %}
{% codeInfo srNumber=25 %}
#### Sink Configuration
To send the metadata to OpenMetadata, it needs to be specified as `type: metadata-rest`.
{% /codeInfo %}
{% codeInfo srNumber=26 %}
#### Workflow Configuration
The main property here is the `openMetadataServerConfig`, where you can define the host and security provider of your OpenMetadata installation.
For a simple, local installation using our docker containers, this looks like:
{% /codeInfo %}
{% /codeInfoContainer %}
{% codeBlock fileName="filename.yaml" %}
```yaml
source:
  type: sapHana
  serviceName: <servicename>
  sourceConfig:
    config:
      type: Profiler
```
```yaml {% srNumber=15 %}
      generateSampleData: true
```
```yaml {% srNumber=16 %}
      # profileSample: 85
```
```yaml {% srNumber=17 %}
      # threadCount: 5
```
```yaml {% srNumber=18 %}
      processPiiSensitive: false
```
```yaml {% srNumber=19 %}
      # confidence: 80
```
```yaml {% srNumber=20 %}
      # timeoutSeconds: 43200
```
```yaml {% srNumber=21 %}
      # databaseFilterPattern:
      #   includes:
      #     - database1
      #     - database2
      #   excludes:
      #     - database3
      #     - database4
```
```yaml {% srNumber=22 %}
      # schemaFilterPattern:
      #   includes:
      #     - schema1
      #     - schema2
      #   excludes:
      #     - schema3
      #     - schema4
```
```yaml {% srNumber=23 %}
      # tableFilterPattern:
      #   includes:
      #     - table1
      #     - table2
      #   excludes:
      #     - table3
      #     - table4
```
```yaml {% srNumber=24 %}
processor:
  type: orm-profiler
  config: {}  # Remove braces if adding properties
  # tableConfig:
  #   - fullyQualifiedName: <tablefqn>
  #     profileSample: <numberbetween0and99> # default will be 100 if omitted
```
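```yaml {% srNumber=25 %}
# Send the profiler results back to OpenMetadata through the REST sink.
sink:
  type: metadata-rest
  config: {}
```
```yaml {% srNumber=26 %}
# Minimal sketch for a simple local Docker deployment; adjust hostPort and the
# security provider/token to match your own OpenMetadata installation.
workflowConfig:
  openMetadataServerConfig:
    hostPort: "http://localhost:8585/api"
    authProvider: openmetadata
    securityConfig:
      jwtToken: "{bot_jwt_token}"
```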
{% /codeBlock %}
{% /codePreview %}
- You can learn more about how to configure and run the Profiler Workflow to extract Profiler data and execute the Data Quality tests [here](/connectors/ingestion/workflows/profiler).
### 2. Prepare the Profiler DAG
Here, we follow a similar approach as with the metadata and usage pipelines, although we will use a different Workflow class:
{% codePreview %}
{% codeInfoContainer %}
{% codeInfo srNumber=27 %}
#### Import necessary modules
The `ProfilerWorkflow` class that is being imported is part of the metadata `orm_profiler` framework, which defines the process of extracting Profiler data.
Here we are also importing all the basic requirements to parse YAMLs, handle dates and build our DAG.
{% /codeInfo %}
{% codeInfo srNumber=28 %}
**Default arguments for all tasks in the Airflow DAG.**
- Default arguments dictionary contains default arguments for tasks in the DAG, including the owner's name, email address, number of retries, retry delay, and execution timeout.
{% /codeInfo %}
{% codeInfo srNumber=29 %}
- **config**: Specifies the config for the profiler as we prepared above.
{% /codeInfo %}
{% codeInfo srNumber=30 %}
- **metadata_ingestion_workflow()**: This code defines a function `metadata_ingestion_workflow()` that loads a YAML configuration, creates a `ProfilerWorkflow` object, executes the workflow, checks its status, prints the status to the console, and stops the workflow.
{% /codeInfo %}
{% codeInfo srNumber=31 %}
- **DAG**: creates a DAG using the Airflow framework, and tunes the DAG configuration to whatever fits your requirements.
- For more details on creating Airflow DAGs, visit [here](https://airflow.apache.org/docs/apache-airflow/stable/core-concepts/dags.html#declaring-a-dag).
{% /codeInfo %}
{% /codeInfoContainer %}
{% codeBlock fileName="filename.py" %}
```python {% srNumber=27 %}
import yaml
from datetime import timedelta
from airflow import DAG
from metadata.profiler.api.workflow import ProfilerWorkflow
try:
    from airflow.operators.python import PythonOperator
except ModuleNotFoundError:
    from airflow.operators.python_operator import PythonOperator