---
title: Run DeltaLake Connector using the CLI
slug: /connectors/database/deltalake/cli
---

# Run Deltalake using the metadata CLI

<Table>

| Stage | Metadata | Query Usage | Data Profiler | Data Quality | Lineage | DBT | Supported Versions |
|:------:|:------:|:-----------:|:-------------:|:------------:|:-------:|:---:|:------------------:|
|  PROD  |   ✅   |      ❌      |       ❌       |       ❌      |    Partially via Views    |  ❌  |  --  |

</Table>

<Table>

| Lineage | Table-level | Column-level |
|:------:|:-----------:|:-------------:|
| Partially via Views | ✅ | ✅ |

</Table>

In this section, we provide guides and references to use the Deltalake connector.

Configure and schedule Deltalake metadata and profiler workflows from the CLI:
- [Requirements](#requirements)
- [Metadata Ingestion](#metadata-ingestion)
- [dbt Integration](#dbt-integration)

## Requirements

<InlineCallout color="violet-70" icon="description" bold="OpenMetadata 0.12.1 or later" href="/deployment">
To deploy OpenMetadata, check the <a href="/deployment">Deployment</a> guides.
</InlineCallout>

To run the Ingestion via the UI you'll need to use the OpenMetadata Ingestion Container, which comes shipped with
custom Airflow plugins to handle the workflow deployment.

### Python Requirements

To run the Deltalake ingestion, you will need to install:

```bash
pip3 install "openmetadata-ingestion[deltalake]"
```

## Metadata Ingestion

All connectors are defined as JSON Schemas.
[Here](https://github.com/open-metadata/OpenMetadata/blob/main/openmetadata-spec/src/main/resources/json/schema/entity/services/connections/database/deltaLakeConnection.json)
you can find the structure to create a connection to Deltalake.

In order to create and run a Metadata Ingestion workflow, we will follow
the steps to create a YAML configuration able to connect to the source,
process the Entities if needed, and reach the OpenMetadata server.

The workflow is modeled around the following
[JSON Schema](https://github.com/open-metadata/OpenMetadata/blob/main/openmetadata-spec/src/main/resources/json/schema/metadataIngestion/workflow.json).

### 1. Define the YAML Config

This is a sample config for Deltalake:

```yaml
source:
  type: deltalake
  serviceName: "<service name>"
  serviceConnection:
    config:
      type: DeltaLake
      metastoreConnection:
        # Pick only one of the three
        metastoreHostPort: "<metastore host port>"
        # metastoreDb: jdbc:mysql://localhost:3306/demo_hive
        # metastoreFilePath: "<path_to_metastore>/metastore_db"
      appName: MyApp
  sourceConfig:
    config:
      type: DatabaseMetadata
      markDeletedTables: true
      includeTables: true
      includeViews: true
      # includeTags: true
      # databaseFilterPattern:
      #   includes:
      #     - database1
      #     - database2
      #   excludes:
      #     - database3
      #     - database4
      # schemaFilterPattern:
      #   includes:
      #     - schema1
      #     - schema2
      #   excludes:
      #     - schema3
      #     - schema4
      # tableFilterPattern:
      #   includes:
      #     - table1
      #     - table2
      #   excludes:
      #     - table3
      #     - table4
sink:
  type: metadata-rest
  config: {}
workflowConfig:
  # loggerLevel: DEBUG  # DEBUG, INFO, WARN or ERROR
  openMetadataServerConfig:
    hostPort: "<OpenMetadata host and port>"
    authProvider: "<OpenMetadata auth provider>"

```

#### Source Configuration - Service Connection

- **Metastore Host Port**: Enter the host and port of the Hive Metastore Service to configure the Spark Session. One
  of `metastoreHostPort`, `metastoreDb`, or `metastoreFilePath` is required.
- **Metastore File Path**: Enter the file path to a local Metastore in case the Spark cluster is running locally. One
  of `metastoreHostPort`, `metastoreDb`, or `metastoreFilePath` is required.
- **Metastore DB**: The JDBC connection to the underlying Hive metastore DB. One
  of `metastoreHostPort`, `metastoreDb`, or `metastoreFilePath` is required.
- **appName (Optional)**: Enter the app name of the Spark session.
- **Connection Arguments (Optional)**: Key-value pairs that will be used to pass extra `config` elements to the Spark
  Session builder.

We are internally running with `pyspark` 3.X and `delta-lake` 2.0.0. This means that we need to consider the Spark
configuration options for 3.X.
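
If you need to pass additional Spark settings, the connection arguments are the place to do it, since every key-value pair is handed to the Spark Session builder. A minimal sketch, assuming the standard Delta Lake options for Spark 3.X (the `connectionArguments` field and the keys shown are illustrative and depend on your setup):

```yaml
  serviceConnection:
    config:
      type: DeltaLake
      metastoreConnection:
        metastoreHostPort: "<metastore host port>"
      appName: MyApp
      # Extra key-value pairs forwarded to the Spark Session builder
      connectionArguments:
        spark.sql.extensions: io.delta.sql.DeltaSparkSessionExtension
        spark.sql.catalog.spark_catalog: org.apache.spark.sql.delta.catalog.DeltaCatalog
```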

##### Metastore Host Port

When connecting to an external Metastore by passing the `Metastore Host Port` parameter, we will prepare the Spark Session with the configuration

```
.config("hive.metastore.uris", "thrift://{connection.metastoreHostPort}")
```

Then, we will use the `catalog` functions from the Spark Session to pick up the metadata exposed by the Hive Metastore.
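
As a reference, the parameter sits under `metastoreConnection` in the YAML config shown earlier; `localhost:9083` below is only an example value (9083 is the usual default port of the Hive Metastore thrift service):

```yaml
      metastoreConnection:
        # Host and port of the Thrift service exposed by the Hive Metastore
        metastoreHostPort: "localhost:9083"
```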

##### Metastore File Path

If instead we use a local file path that contains the metastore information (e.g., for local testing with the default `metastore_db` directory), we will set

```
.config("spark.driver.extraJavaOptions", "-Dderby.system.home={connection.metastoreFilePath}")
```

to update the `Derby` information. You can find more context on this in a great [SO thread](https://stackoverflow.com/questions/38377188/how-to-get-rid-of-derby-log-metastore-db-from-spark-shell).
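
In the YAML config shown earlier, this maps to the `metastoreFilePath` field; the path below is only a placeholder for wherever your local `metastore_db` directory lives:

```yaml
      metastoreConnection:
        # Local Derby-backed metastore directory created by Spark
        metastoreFilePath: "<path_to_metastore>/metastore_db"
```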

- You can find all supported configurations [here](https://spark.apache.org/docs/latest/configuration.html).
- If you need further information regarding the Hive metastore, you can find
  it [here](https://spark.apache.org/docs/3.0.0-preview/sql-data-sources-hive-tables.html), and in The Internals of
  Spark SQL [book](https://jaceklaskowski.gitbooks.io/mastering-spark-sql/content/spark-sql-hive-metastore.html).

#### Source Configuration - Source Config

The `sourceConfig` is defined [here](https://github.com/open-metadata/OpenMetadata/blob/main/openmetadata-spec/src/main/resources/json/schema/metadataIngestion/databaseServiceMetadataPipeline.json):

- `markDeletedTables`: To flag tables as soft-deleted if they are not present anymore in the source system.
- `includeTables`: true or false, to ingest table data. Default is true.
- `includeViews`: true or false, to ingest views definitions.
- `databaseFilterPattern`, `schemaFilterPattern`, `tableFilterPattern`: Note that they support regex as include or exclude. E.g.,

```yaml
tableFilterPattern:
  includes:
    - users
    - type_test
```
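
Since the entries are regular expressions, the same mechanism also works for excluding whole groups of assets. A small illustrative sketch (the schema names are made up):

```yaml
schemaFilterPattern:
  excludes:
    # Skip any schema whose name starts with tmp_ or staging_
    - tmp_.*
    - staging_.*
```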

#### Sink Configuration

To send the metadata to OpenMetadata, it needs to be specified as `type: metadata-rest`.
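
No extra options are needed for this connector, so the block stays exactly as in the sample above:

```yaml
sink:
  type: metadata-rest
  config: {}
```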

#### Workflow Configuration

The main property here is the `openMetadataServerConfig`, where you can define the host and security provider of your OpenMetadata installation.

For a simple, local installation using our Docker containers, this looks like:

```yaml
workflowConfig:
  openMetadataServerConfig:
    hostPort: 'http://localhost:8585/api'
    authProvider: openmetadata
    securityConfig:
      jwtToken: '{bot_jwt_token}'
```

We support different security providers. You can find their definitions [here](https://github.com/open-metadata/OpenMetadata/tree/main/openmetadata-spec/src/main/resources/json/schema/security/client).
You can find the different implementations of the ingestion below.

<Collapse title="Configure SSO in the Ingestion Workflows">

### Openmetadata JWT Auth

```yaml
workflowConfig:
  openMetadataServerConfig:
    hostPort: 'http://localhost:8585/api'
    authProvider: openmetadata
    securityConfig:
      jwtToken: '{bot_jwt_token}'
```

### Auth0 SSO

```yaml
workflowConfig:
  openMetadataServerConfig:
    hostPort: 'http://localhost:8585/api'
    authProvider: auth0
    securityConfig:
      clientId: '{your_client_id}'
      secretKey: '{your_client_secret}'
      domain: '{your_domain}'
```

### Azure SSO

```yaml
workflowConfig:
  openMetadataServerConfig:
    hostPort: 'http://localhost:8585/api'
    authProvider: azure
    securityConfig:
      clientSecret: '{your_client_secret}'
      authority: '{your_authority_url}'
      clientId: '{your_client_id}'
      scopes:
        - your_scopes
```

### Custom OIDC SSO

```yaml
workflowConfig:
  openMetadataServerConfig:
    hostPort: 'http://localhost:8585/api'
    authProvider: custom-oidc
    securityConfig:
      clientId: '{your_client_id}'
      secretKey: '{your_client_secret}'
      domain: '{your_domain}'
```

### Google SSO

```yaml
workflowConfig:
  openMetadataServerConfig:
    hostPort: 'http://localhost:8585/api'
    authProvider: google
    securityConfig:
      secretKey: '{path-to-json-creds}'
```

### Okta SSO

```yaml
workflowConfig:
  openMetadataServerConfig:
    hostPort: http://localhost:8585/api
    authProvider: okta
    securityConfig:
      clientId: "{CLIENT_ID - SPA APP}"
      orgURL: "{ISSUER_URL}/v1/token"
      privateKey: "{public/private keypair}"
      email: "{email}"
      scopes:
        - token
```

### Amazon Cognito SSO

The ingestion can be configured by [Enabling JWT Tokens](https://docs.open-metadata.org/deployment/security/enable-jwt-tokens).

```yaml
workflowConfig:
  openMetadataServerConfig:
    hostPort: 'http://localhost:8585/api'
    authProvider: auth0
    securityConfig:
      clientId: '{your_client_id}'
      secretKey: '{your_client_secret}'
      domain: '{your_domain}'
```

### OneLogin SSO

OneLogin uses Custom OIDC for the ingestion:

```yaml
workflowConfig:
  openMetadataServerConfig:
    hostPort: 'http://localhost:8585/api'
    authProvider: custom-oidc
    securityConfig:
      clientId: '{your_client_id}'
      secretKey: '{your_client_secret}'
      domain: '{your_domain}'
```

### KeyCloak SSO

KeyCloak uses Custom OIDC for the ingestion:

```yaml
workflowConfig:
  openMetadataServerConfig:
    hostPort: 'http://localhost:8585/api'
    authProvider: custom-oidc
    securityConfig:
      clientId: '{your_client_id}'
      secretKey: '{your_client_secret}'
      domain: '{your_domain}'
```

</Collapse>

### 2. Run with the CLI

First, we will need to save the YAML file. Afterward, and with all requirements installed, we can run:

```bash
metadata ingest -c <path-to-yaml>
```

Note that from connector to connector, this recipe will always be the same. By updating the YAML configuration,
you will be able to extract metadata from different sources.

## dbt Integration

You can learn more about how to ingest dbt models' definitions and their lineage [here](/connectors/ingestion/workflows/dbt).
