
---
title: Run Spline Connector using the CLI
slug: /connectors/pipeline/spline/cli
---

# Run Spline using the metadata CLI
In this section, we provide guides and references to use the Spline connector.
Configure and schedule Spline metadata workflows from the CLI:
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%} To deploy OpenMetadata, check the Deployment guides. {% /inlineCallout %}
To run the Ingestion via the UI, you'll need to use the OpenMetadata Ingestion Container, which comes shipped with custom Airflow plugins to handle the workflow deployment.
The Spline connector supports lineage for data sources of type `jdbc` or `dbfs`, i.e., it can extract lineage when the data source is either a JDBC connection or a Databricks (DBFS) instance.
{% note %}
Currently, we do not support data sources of type AWS S3 or any other cloud storage, which also means that lineage for external tables from Databricks will not be extracted.
{% /note %}
You can refer to this documentation on how to configure Databricks with Spline.
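As a rough sketch of what that setup involves (the bundle coordinates and versions below are placeholders; pick the Spline agent bundle matching your Spark and Scala versions), a Spark job is attached to Spline by registering the listener and pointing it at the Spline producer API:

```bash
# Sketch: attach the Spline agent so Spark reports lineage to the Spline
# server. Coordinates/version are placeholders to match your environment.
spark-submit \
  --packages za.co.absa.spline.agent.spark:spark-3.3-spline-agent-bundle_2.12:2.0.0 \
  --conf "spark.sql.queryExecutionListeners=za.co.absa.spline.harvester.listener.SplineQueryExecutionListener" \
  --conf "spark.spline.lineageDispatcher.http.producer.url=http://localhost:8080/producer" \
  your_job.py
```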
### Python Requirements
To run the Spline ingestion, you will need to install:

```bash
pip3 install "openmetadata-ingestion"
```
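Once installed, you can sanity-check that the `metadata` CLI is on your path:

```bash
# Print the installed CLI version to confirm the installation succeeded
metadata --version
```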
## Metadata Ingestion
All connectors are defined as JSON Schemas. Here you can find the structure to create a connection to Spline.
In order to create and run a Metadata Ingestion workflow, we will follow the steps to create a YAML configuration able to connect to the source, process the Entities if needed, and reach the OpenMetadata server.
The workflow is modeled around the following JSON Schema.
### 1. Define the YAML Config
This is a sample config for Spline:
{% codePreview %}
{% codeInfoContainer %}
#### Source Configuration - Service Connection
{% codeInfo srNumber=1 %}
**hostPort**: Spline REST Server API Host & Port. OpenMetadata uses the Spline REST Server APIs to extract the execution details from Spline and generate lineage. This should be specified as a URI string in the format `scheme://hostname:port`, e.g., `http://localhost:8080` or `http://host.docker.internal:8080`.

**uiHostPort**: Spline UI Host & Port, an optional field used to generate the redirection URL from OpenMetadata to the Spline Portal. This should be specified as a URI string in the format `scheme://hostname:port`, e.g., `http://localhost:9090` or `http://host.docker.internal:9090`.
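If you are unsure whether the configured `hostPort` is reachable from the ingestion host, a plain HTTP check is a quick sanity test (a sketch; adjust the URL to your deployment):

```bash
# Print only the HTTP status code returned by the Spline REST gateway;
# any HTTP response means the host and port are reachable
curl -sS -o /dev/null -w "%{http_code}\n" http://localhost:8080
```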
{% /codeInfo %}
#### Source Configuration - Source Config
{% codeInfo srNumber=2 %}
The `sourceConfig` is defined here:

- **dbServiceNames**: Database Service Names for the creation of lineage, if the source supports it.
- **includeTags**: Set the 'Include Tags' toggle to control whether to include tags as part of metadata ingestion.
- **markDeletedPipelines**: Set the 'Mark Deleted Pipelines' toggle to flag pipelines as soft-deleted if they are not present anymore in the source system.
- **pipelineFilterPattern** and **chartFilterPattern**: Note that `pipelineFilterPattern` and `chartFilterPattern` both support regex as include or exclude patterns; see the sketch below.
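For instance, a hypothetical filter that ingests only pipelines prefixed with `etl_` while skipping test pipelines could look like this (names are illustrative):

```yaml
# Illustrative only: filter values are regex, not glob patterns
pipelineFilterPattern:
  includes:
    - ^etl_.*
  excludes:
    - .*_test$
```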
{% /codeInfo %}
#### Sink Configuration
{% codeInfo srNumber=3 %}
To send the metadata to OpenMetadata, it needs to be specified as `type: metadata-rest`.
{% /codeInfo %}
{% partial file="workflow-config.md" /%}
{% /codeInfoContainer %}
{% codeBlock fileName="filename.yaml" %}
```yaml
source:
  type: spline
  serviceName: spline_source
  serviceConnection:
    config:
      type: Spline
      hostPort: http://localhost:8080
      uiHostPort: http://localhost:9090
  sourceConfig:
    config:
      type: PipelineMetadata
      # markDeletedPipelines: True
      # includeTags: True
      # includeLineage: true
      # dbServiceNames:
      #   - local_hive
      # pipelineFilterPattern:
      #   includes:
      #     - pipeline1
      #     - pipeline2
      #   excludes:
      #     - pipeline3
      #     - pipeline4
sink:
  type: metadata-rest
  config: {}
```
{% partial file="workflow-config-yaml.md" /%}
{% /codeBlock %}
{% /codePreview %}
### 2. Run with the CLI
First, we will need to save the YAML file. Afterward, and with all requirements installed, we can run:
```bash
metadata ingest -c <path-to-yaml>
```
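For example, if you saved the configuration above as `spline.yaml` (a placeholder filename), the invocation would be:

```bash
# spline.yaml stands in for wherever you saved the config above
metadata ingest -c spline.yaml
```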
Note that from connector to connector, this recipe will always be the same. By updating the YAML configuration, you will be able to extract metadata from different sources.