---
title: Run Oracle Connector using Airflow SDK
slug: /connectors/database/oracle/airflow
---

# Run Oracle using the Airflow SDK

In this section, we provide guides and references to use the Oracle connector.

Configure and schedule Oracle metadata and profiler workflows using the Airflow SDK:

- [Requirements](#requirements)
- [Metadata Ingestion](#metadata-ingestion)
- [Data Profiler](#data-profiler)
- [dbt Integration](#dbt-integration)

## Requirements

To deploy OpenMetadata, check the Deployment guides.

To run the Ingestion via the UI you'll need to use the OpenMetadata Ingestion Container, which comes shipped with custom Airflow plugins to handle the workflow deployment.

Note: To fetch metadata from Oracle, we use `python-oracledb`, which supports the 12c, 18c, 19c, and 21c versions.

### Python Requirements

To run the Oracle ingestion, you will need to install:

```bash
pip3 install "openmetadata-ingestion[oracle]"
```

## Metadata Ingestion

All connectors are defined as JSON Schemas. [Here](https://github.com/open-metadata/OpenMetadata/blob/main/openmetadata-spec/src/main/resources/json/schema/entity/services/connections/database/oracleConnection.json) you can find the structure to create a connection to Oracle.

In order to create and run a Metadata Ingestion workflow, we will follow the steps to create a YAML configuration able to connect to the source, process the Entities if needed, and reach the OpenMetadata server.

The workflow is modeled around the following [JSON Schema](https://github.com/open-metadata/OpenMetadata/blob/main/openmetadata-spec/src/main/resources/json/schema/metadataIngestion/workflow.json).

### 1. Define the YAML Config

This is a sample config for Oracle:

```yaml
source:
  type: oracle
  serviceName: local_oracle
  serviceConnection:
    config:
      type: Oracle
      hostPort: hostPort
      username: username
      password: password
      # The type can either be oracleServiceName or databaseSchema
      oracleConnectionType:
        oracleServiceName: serviceName
        # databaseSchema: schema
  sourceConfig:
    config:
      markDeletedTables: true
      includeTables: true
      includeViews: true
      # includeTags: true
      # databaseFilterPattern:
      #   includes:
      #     - database1
      #     - database2
      #   excludes:
      #     - database3
      #     - database4
      # schemaFilterPattern:
      #   includes:
      #     - schema1
      #     - schema2
      #   excludes:
      #     - schema3
      #     - schema4
      # tableFilterPattern:
      #   includes:
      #     - table1
      #     - table2
      #   excludes:
      #     - table3
      #     - table4
      # For dbt, choose one of Cloud, Local, HTTP, S3 or GCS configurations
      # dbtConfigSource:
      #   # For cloud
      #   dbtCloudAuthToken: token
      #   dbtCloudAccountId: ID
      #   # For Local
      #   dbtCatalogFilePath: path-to-catalog.json
      #   dbtManifestFilePath: path-to-manifest.json
      #   # For HTTP
      #   dbtCatalogHttpPath: http://path-to-catalog.json
      #   dbtManifestHttpPath: http://path-to-manifest.json
      #   # For S3
      #   dbtSecurityConfig:  # These are modeled after all AWS credentials
      #     awsAccessKeyId: KEY
      #     awsSecretAccessKey: SECRET
      #     awsRegion: us-east-2
      #   dbtPrefixConfig:
      #     dbtBucketName: bucket
      #     dbtObjectPrefix: "dbt/"
      #   # For GCS
      #   dbtSecurityConfig:  # These are modeled after all GCS credentials
      #     type: My Type
      #     projectId: project ID
      #     privateKeyId: us-east-2
      #     privateKey: |
      #       -----BEGIN PRIVATE KEY-----
      #       Super secret key
      #       -----END PRIVATE KEY-----
      #     clientEmail: client@mail.com
      #     clientId: 1234
      #     authUri: https://accounts.google.com/o/oauth2/auth (default)
      #     tokenUri: https://oauth2.googleapis.com/token (default)
      #     authProviderX509CertUrl: https://www.googleapis.com/oauth2/v1/certs (default)
      #     clientX509CertUrl: https://cert.url (URI)
      #   dbtPrefixConfig:
      #     dbtBucketName: bucket
      #     dbtObjectPrefix: "dbt/"
sink:
  type: metadata-rest
  config: {}
workflowConfig:
  # loggerLevel: DEBUG  # DEBUG, INFO, WARN or ERROR
  openMetadataServerConfig:
    hostPort:
    authProvider:
```

#### Source Configuration - Service Connection

- **username**: Specify the User to connect to Oracle. It should have enough privileges to read all the metadata.
- **password**: Password to connect to Oracle.
- **hostPort**: Enter the fully qualified hostname and port number for your Oracle deployment in the Host and Port field.
- **oracleConnectionType**:
  - **oracleServiceName**: The Oracle Service name is the TNS alias that you give when you remotely connect to your database, and this Service name is recorded in `tnsnames.ora`.
  - **databaseSchema**: The name of the database schema available in Oracle that you want to connect with.
- **Oracle instant client directory**: The directory pointing to where the `instantclient` binaries for Oracle are located. In the ingestion Docker image we provide them by default at `/instantclient`. If this parameter is informed (it is by default), we will run the [thick oracle client](https://python-oracledb.readthedocs.io/en/latest/user_guide/initialization.html#initializing-python-oracledb). We are shipping the binaries for AMD and ARM architectures from [here](https://www.oracle.com/database/technologies/instant-client/linux-x86-64-downloads.html) and [here](https://www.oracle.com/database/technologies/instant-client/linux-arm-aarch64-downloads.html) for instant client version 19.
- **Connection Options (Optional)**: Enter the details for any additional connection options that can be sent to Oracle during the connection. These details must be added as Key-Value pairs.
- **Connection Arguments (Optional)**: Enter the details for any additional connection arguments such as security or protocol configs that can be sent to Oracle during the connection. These details must be added as Key-Value pairs (see the sketch after this list).
  - In case you are using Single-Sign-On (SSO) for authentication, add the `authenticator` details in the Connection Arguments as a Key-Value pair as follows: `"authenticator" : "sso_login_url"`
  - In case you authenticate with SSO using an external browser popup, then add the `authenticator` details in the Connection Arguments as a Key-Value pair as follows: `"authenticator" : "externalbrowser"`
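For illustration, this is a minimal sketch of how such key-value pairs can be expressed in the YAML `serviceConnection` block through the `connectionOptions` and `connectionArguments` fields; the `some_option` key and its value are placeholders, not real Oracle settings.

```yaml
source:
  type: oracle
  serviceName: local_oracle
  serviceConnection:
    config:
      type: Oracle
      hostPort: hostPort
      username: username
      password: password
      oracleConnectionType:
        oracleServiceName: serviceName
      # Optional key-value pairs forwarded to Oracle when opening the connection
      connectionOptions:
        some_option: some_value        # placeholder option
      connectionArguments:
        authenticator: sso_login_url   # or externalbrowser for a browser popup
```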
#### Source Configuration - Source Config

The `sourceConfig` is defined [here](https://github.com/open-metadata/OpenMetadata/blob/main/openmetadata-spec/src/main/resources/json/schema/metadataIngestion/databaseServiceMetadataPipeline.json):

- `markDeletedTables`: To flag tables as soft-deleted if they are not present anymore in the source system.
- `includeTables`: true or false, to ingest table data. Default is true.
- `includeViews`: true or false, to ingest views definitions.
- `databaseFilterPattern`, `schemaFilterPattern`, `tableFilterPattern`: Note that they support regex as include or exclude. E.g.,

```yaml
tableFilterPattern:
  includes:
    - users
    - type_test
```

#### Sink Configuration

To send the metadata to OpenMetadata, it needs to be specified as `type: metadata-rest`.

#### Workflow Configuration

The main property here is the `openMetadataServerConfig`, where you can define the host and security provider of your OpenMetadata installation.
For a simple, local installation using our docker containers, this looks like:

```yaml
workflowConfig:
  openMetadataServerConfig:
    hostPort: 'http://localhost:8585/api'
    authProvider: openmetadata
    securityConfig:
      jwtToken: '{bot_jwt_token}'
```

We support different security providers. You can find their definitions [here](https://github.com/open-metadata/OpenMetadata/tree/main/openmetadata-spec/src/main/resources/json/schema/security/client).
You can find the different implementations of the ingestion below.

### OpenMetadata JWT Auth

```yaml
workflowConfig:
  openMetadataServerConfig:
    hostPort: 'http://localhost:8585/api'
    authProvider: openmetadata
    securityConfig:
      jwtToken: '{bot_jwt_token}'
```

### Auth0 SSO

```yaml
workflowConfig:
  openMetadataServerConfig:
    hostPort: 'http://localhost:8585/api'
    authProvider: auth0
    securityConfig:
      clientId: '{your_client_id}'
      secretKey: '{your_client_secret}'
      domain: '{your_domain}'
```

### Azure SSO

```yaml
workflowConfig:
  openMetadataServerConfig:
    hostPort: 'http://localhost:8585/api'
    authProvider: azure
    securityConfig:
      clientSecret: '{your_client_secret}'
      authority: '{your_authority_url}'
      clientId: '{your_client_id}'
      scopes:
        - your_scopes
```

### Custom OIDC SSO

```yaml
workflowConfig:
  openMetadataServerConfig:
    hostPort: 'http://localhost:8585/api'
    authProvider: custom-oidc
    securityConfig:
      clientId: '{your_client_id}'
      secretKey: '{your_client_secret}'
      domain: '{your_domain}'
```

### Google SSO

```yaml
workflowConfig:
  openMetadataServerConfig:
    hostPort: 'http://localhost:8585/api'
    authProvider: google
    securityConfig:
      secretKey: '{path-to-json-creds}'
```

### Okta SSO

```yaml
workflowConfig:
  openMetadataServerConfig:
    hostPort: http://localhost:8585/api
    authProvider: okta
    securityConfig:
      clientId: "{CLIENT_ID - SPA APP}"
      orgURL: "{ISSUER_URL}/v1/token"
      privateKey: "{public/private keypair}"
      email: "{email}"
      scopes:
        - token
```

### Amazon Cognito SSO

The ingestion can be configured by [Enabling JWT Tokens](https://docs.open-metadata.org/deployment/security/enable-jwt-tokens)

```yaml
workflowConfig:
  openMetadataServerConfig:
    hostPort: 'http://localhost:8585/api'
    authProvider: auth0
    securityConfig:
      clientId: '{your_client_id}'
      secretKey: '{your_client_secret}'
      domain: '{your_domain}'
```

### OneLogin SSO

Which uses Custom OIDC for the ingestion

```yaml
workflowConfig:
  openMetadataServerConfig:
    hostPort: 'http://localhost:8585/api'
    authProvider: custom-oidc
    securityConfig:
      clientId: '{your_client_id}'
      secretKey: '{your_client_secret}'
      domain: '{your_domain}'
```

### KeyCloak SSO

Which uses Custom OIDC for the ingestion

```yaml
workflowConfig:
  openMetadataServerConfig:
    hostPort: 'http://localhost:8585/api'
    authProvider: custom-oidc
    securityConfig:
      clientId: '{your_client_id}'
      secretKey: '{your_client_secret}'
      domain: '{your_domain}'
```

### 2. Prepare the Ingestion DAG

Create a Python file in your Airflow DAGs directory with the following contents:

```python
import pathlib
import yaml
from datetime import timedelta
from airflow import DAG

try:
    from airflow.operators.python import PythonOperator
except ModuleNotFoundError:
    from airflow.operators.python_operator import PythonOperator

from metadata.config.common import load_config_file
from metadata.ingestion.api.workflow import Workflow
from airflow.utils.dates import days_ago

default_args = {
    "owner": "user_name",
    "email": ["username@org.com"],
    "email_on_failure": False,
    "retries": 3,
    "retry_delay": timedelta(minutes=5),
    "execution_timeout": timedelta(minutes=60)
}

config = """
"""

def metadata_ingestion_workflow():
    workflow_config = yaml.safe_load(config)
    workflow = Workflow.create(workflow_config)
    workflow.execute()
    workflow.raise_from_status()
    workflow.print_status()
    workflow.stop()

with DAG(
    "sample_data",
    default_args=default_args,
    description="An example DAG which runs an OpenMetadata ingestion workflow",
    start_date=days_ago(1),
    is_paused_upon_creation=False,
    schedule_interval='*/5 * * * *',
    catchup=False,
) as dag:
    ingest_task = PythonOperator(
        task_id="ingest_using_recipe",
        python_callable=metadata_ingestion_workflow,
    )
```

Note that from connector to connector, this recipe will always be the same. By updating the YAML configuration, you will be able to extract metadata from different sources.
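As an illustration, the empty `config` variable in the DAG above is meant to hold the YAML defined in step 1. A minimal sketch, with the commented filter and dbt options omitted and placeholder host, credentials, and JWT token, could look like this:

```python
# Sketch of the `config` variable from the DAG above, embedding the step 1 YAML.
# hostPort, username, password, serviceName and the JWT token are placeholders.
config = """
source:
  type: oracle
  serviceName: local_oracle
  serviceConnection:
    config:
      type: Oracle
      hostPort: hostPort
      username: username
      password: password
      oracleConnectionType:
        oracleServiceName: serviceName
  sourceConfig:
    config:
      markDeletedTables: true
      includeTables: true
      includeViews: true
sink:
  type: metadata-rest
  config: {}
workflowConfig:
  openMetadataServerConfig:
    hostPort: 'http://localhost:8585/api'
    authProvider: openmetadata
    securityConfig:
      jwtToken: '{bot_jwt_token}'
"""
```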
## Data Profiler

The Data Profiler workflow will be using the `orm-profiler` processor. While the `serviceConnection` will still be the same to reach the source system, the `sourceConfig` will be updated from previous configurations.

### 1. Define the YAML Config

This is a sample config for the profiler:

```yaml
source:
  type: oracle
  serviceName: local_oracle
  serviceConnection:
    config:
      type: Oracle
      hostPort: hostPort
      username: username
      password: password
      # The type can either be oracleServiceName or databaseSchema
      oracleConnectionType:
        oracleServiceName: serviceName
        # databaseSchema: schema
  sourceConfig:
    config:
      type: Profiler
      # generateSampleData: true
      # profileSample: 85
      # threadCount: 5 (default)
      # databaseFilterPattern:
      #   includes:
      #     - database1
      #     - database2
      #   excludes:
      #     - database3
      #     - database4
      # schemaFilterPattern:
      #   includes:
      #     - schema1
      #     - schema2
      #   excludes:
      #     - schema3
      #     - schema4
      # tableFilterPattern:
      #   includes:
      #     - table1
      #     - table2
      #   excludes:
      #     - table3
      #     - table4
processor:
  type: orm-profiler
  config: {}  # Remove braces if adding properties
    # tableConfig:
    #   - fullyQualifiedName:
    #     profileSample: # default will be 100 if omitted
    #     profileQuery:
    #     columnConfig:
    #       excludeColumns:
    #         -
    #       includeColumns:
    #         - columnName:
    #         - metrics:
    #           - MEAN
    #           - MEDIAN
    #           - ...
sink:
  type: metadata-rest
  config: {}
workflowConfig:
  # loggerLevel: DEBUG  # DEBUG, INFO, WARN or ERROR
  openMetadataServerConfig:
    hostPort: ""
    authProvider: ""
```

#### Source Configuration

- You can find all the definitions and types for the `serviceConnection` [here](https://github.com/open-metadata/OpenMetadata/blob/main/openmetadata-spec/src/main/resources/json/schema/entity/services/connections/database/oracleConnection.json).
- The `sourceConfig` is defined [here](https://github.com/open-metadata/OpenMetadata/blob/main/openmetadata-spec/src/main/resources/json/schema/metadataIngestion/databaseServiceProfilerPipeline.json).

Note that the filter patterns support regex as includes or excludes. E.g.,

```yaml
tableFilterPattern:
  includes:
    - ".*users$"
```
#### Processor

Choose the `orm-profiler`. Its config can also be updated to define tests from the YAML itself instead of the UI:

```yaml
processor:
  type: orm-profiler
  config:
    tableConfig:
      - fullyQualifiedName:
        profileSample:
        partitionConfig:
          partitionField:
          partitionQueryDuration:
          partitionValues:
        profileQuery:
        columnConfig:
          excludeColumns:
            -
          includeColumns:
            - columnName:
            - metrics:
              - MEAN
              - MEDIAN
              - ...
```

`tableConfig` allows you to set up some configuration at the table level. All the properties are optional. `metrics` should be one of the metrics listed [here](https://docs.open-metadata.org/openmetadata/ingestion/workflows/profiler/metrics).
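For illustration only, a filled-in `tableConfig` entry following the template above might look like the sketch below; the table FQN (`<service>.<database>.<schema>.<table>`), column names, and sample percentage are hypothetical values, not defaults.

```yaml
processor:
  type: orm-profiler
  config:
    tableConfig:
      # Hypothetical fully qualified table name for a table of the local_oracle service
      - fullyQualifiedName: local_oracle.default.HR.EMPLOYEES
        profileSample: 50            # profile a 50% sample of the rows
        columnConfig:
          excludeColumns:
            - AUDIT_TIMESTAMP        # hypothetical column to skip entirely
          includeColumns:
            - columnName: SALARY     # hypothetical column to profile
            - metrics:
              - MEAN
              - MEDIAN
```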
#### Workflow Configuration

The same as the metadata ingestion.

### 2. Prepare the Profiler DAG

Here, we follow a similar approach as with the metadata and usage pipelines, although we will use a different Workflow class:

```python
import yaml
from datetime import timedelta
from airflow import DAG

try:
    from airflow.operators.python import PythonOperator
except ModuleNotFoundError:
    from airflow.operators.python_operator import PythonOperator

from airflow.utils.dates import days_ago

from metadata.orm_profiler.api.workflow import ProfilerWorkflow

default_args = {
    "owner": "user_name",
    "email_on_failure": False,
    "retries": 3,
    "retry_delay": timedelta(seconds=10),
    "execution_timeout": timedelta(minutes=60),
}

config = """
"""

def metadata_ingestion_workflow():
    workflow_config = yaml.safe_load(config)
    workflow = ProfilerWorkflow.create(workflow_config)
    workflow.execute()
    workflow.raise_from_status()
    workflow.print_status()
    workflow.stop()

with DAG(
    "profiler_example",
    default_args=default_args,
    description="An example DAG which runs an OpenMetadata ingestion workflow",
    start_date=days_ago(1),
    is_paused_upon_creation=False,
    catchup=False,
) as dag:
    ingest_task = PythonOperator(
        task_id="profile_and_test_using_recipe",
        python_callable=metadata_ingestion_workflow,
    )
```

## dbt Integration

You can learn more about how to ingest dbt models' definitions and their lineage [here](/connectors/ingestion/workflows/dbt).