Documentation for DatabricksPipeline, Airbyte, Databricks (#11153)

* Fix: 10805 Documentation for DatabricksPipeline, Airbyte, Databricks

Milan Bariya 2023-04-20 10:22:13 +05:30 committed by GitHub
parent 91cd1491ee
commit cc71ba2bd0
12 changed files with 852 additions and 59 deletions


@@ -3,7 +3,7 @@ title: Run Airbyte Connector using Airflow SDK
slug: /connectors/pipeline/airbyte/airflow
---
# Run Airbyte using the metadata CLI
# Run Airbyte using the Airflow SDK
In this section, we provide guides and references to use the Airbyte connector.


@@ -0,0 +1,318 @@
---
title: Run Databricks Pipeline Connector using Airflow SDK
slug: /connectors/pipeline/databricks-pipeline/airflow
---
# Run Databricks Pipeline using the Airflow SDK
In this section, we provide guides and references to use the Databricks Pipeline connector.
Configure and schedule Databricks Pipeline metadata workflows using the Airflow SDK:
- [Requirements](#requirements)
- [Metadata Ingestion](#metadata-ingestion)
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{% /inlineCallout %}
To run the Ingestion via the UI you'll need to use the OpenMetadata Ingestion Container, which comes shipped with custom Airflow plugins to handle the workflow deployment.
### Python Requirements
To run the Databricks Pipeline ingestion, you will need to install:
```bash
pip3 install "openmetadata-ingestion[databricks]"
```
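If you want to verify the environment before writing any configuration, importing the ingestion entry point used later in this guide is a quick, optional sanity check. The snippet below is a minimal sketch and assumes the package installed correctly into the active Python environment.
```python
# Optional sanity check: confirm the openmetadata-ingestion package is importable
# from the environment that will run the workflow.
from metadata.ingestion.api.workflow import Workflow  # same class used in the DAG recipe below

print("openmetadata-ingestion is importable:", Workflow is not None)
```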
## Metadata Ingestion
All connectors are defined as JSON Schemas.
[Here](https://github.com/open-metadata/OpenMetadata/blob/main/openmetadata-spec/src/main/resources/json/schema/entity/services/connections/pipeline/databricksPipelineConnection.json)
you can find the structure to create a connection to Databricks Pipeline.
In order to create and run a Metadata Ingestion workflow, we will follow
the steps to create a YAML configuration able to connect to the source,
process the Entities if needed, and reach the OpenMetadata server.
The workflow is modeled around the following
[JSON Schema](https://github.com/open-metadata/OpenMetadata/blob/main/openmetadata-spec/src/main/resources/json/schema/metadataIngestion/workflow.json).
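If you are curious which top-level sections that schema expects, you can optionally fetch it and list its properties. The snippet below is only a convenience sketch; it assumes the `requests` package is available and uses the raw GitHub URL of the same file linked above.
```python
# Optional: list the top-level properties defined by the workflow JSON Schema.
import requests

SCHEMA_URL = (
    "https://raw.githubusercontent.com/open-metadata/OpenMetadata/main/"
    "openmetadata-spec/src/main/resources/json/schema/metadataIngestion/workflow.json"
)

schema = requests.get(SCHEMA_URL, timeout=10).json()
print(sorted(schema.get("properties", {}).keys()))  # expect entries such as source, sink, workflowConfig
```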
### 1. Define the YAML Config
This is a sample config for Databricks Pipeline:
{% codePreview %}
{% codeInfoContainer %}
#### Source Configuration - Service Connection
{% codeInfo srNumber=1 %}
**Host and Port**: Enter the fully qualified hostname and port number for your Databricks Pipeline deployment in the Host and Port field.
{% /codeInfo %}
{% codeInfo srNumber=2 %}
**Token**: Generated Token to connect to Databricks Pipeline.
{% /codeInfo %}
{% codeInfo srNumber=3 %}
**Connection Arguments (Optional)**: Enter any additional connection arguments, such as security or protocol configs, that should be sent to Databricks during the connection. These details must be added as key-value pairs.
- If you are using Single Sign-On (SSO) for authentication, add the `authenticator` details in the Connection Arguments as a key-value pair as follows: `"authenticator" : "sso_login_url"`
- If you authenticate with SSO using an external browser popup, add the `authenticator` details in the Connection Arguments as a key-value pair as follows: `"authenticator" : "externalbrowser"`
A short YAML rendering of these key-value pairs is sketched right after the code preview below.
**HTTP Path**: HTTP path of the Databricks compute resource (for example, a cluster or SQL warehouse).
{% /codeInfo %}
#### Source Configuration - Source Config
{% codeInfo srNumber=4 %}
The `sourceConfig` is defined [here](https://github.com/open-metadata/OpenMetadata/blob/main/openmetadata-spec/src/main/resources/json/schema/metadataIngestion/pipelineServiceMetadataPipeline.json):
**dbServiceNames**: Database Service Name for the creation of lineage, if the source supports it.
**includeTags**: Set the Include tags toggle to control whether or not to include tags as part of metadata ingestion.
**markDeletedPipelines**: Set the Mark Deleted Pipelines toggle to flag pipelines as soft-deleted if they are not present anymore in the source system.
**pipelineFilterPattern**: Note that `pipelineFilterPattern` supports regex to include or exclude pipelines.
{% /codeInfo %}
#### Sink Configuration
{% codeInfo srNumber=5 %}
To send the metadata to OpenMetadata, it needs to be specified as `type: metadata-rest`.
{% /codeInfo %}
#### Workflow Configuration
{% codeInfo srNumber=6 %}
The main property here is the `openMetadataServerConfig`, where you can define the host and security provider of your OpenMetadata installation.
For a simple, local installation using our docker containers, this looks like:
{% /codeInfo %}
{% /codeInfoContainer %}
{% codeBlock fileName="filename.yaml" %}
```yaml
source:
  type: databrickspipeline
  serviceName: local_databricks_pipeline
  serviceConnection:
    config:
      type: DatabricksPipeline
```
```yaml {% srNumber=1 %}
      hostPort: localhost:443
```
```yaml {% srNumber=2 %}
      token: <databricks token>
```
```yaml {% srNumber=3 %}
      connectionArguments:
        http_path: <http path of databricks cluster>
```
```yaml {% srNumber=4 %}
  sourceConfig:
    config:
      type: PipelineMetadata
      # markDeletedPipelines: True
      # includeTags: True
      # includeLineage: true
      # pipelineFilterPattern:
      #   includes:
      #     - pipeline1
      #     - pipeline2
      #   excludes:
      #     - pipeline3
      #     - pipeline4
```
```yaml {% srNumber=5 %}
sink:
  type: metadata-rest
  config: {}
```
```yaml {% srNumber=6 %}
workflowConfig:
  openMetadataServerConfig:
    hostPort: "http://localhost:8585/api"
    authProvider: openmetadata
    securityConfig:
      jwtToken: "{bot_jwt_token}"
```
{% /codeBlock %}
{% /codePreview %}
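As a worked example of the optional Connection Arguments described above, the sketch below renders the key-value pairs as the YAML fragment you would place under `serviceConnection.config`. The `http_path` value is a placeholder, and `externalbrowser` is the SSO authenticator value mentioned earlier; adjust both for your setup.
```python
# Illustrative only: build the optional connectionArguments mapping and print it
# as YAML so the indentation of the fragment is unambiguous.
import yaml

connection_arguments = {
    "http_path": "/sql/1.0/warehouses/xyz123",  # placeholder HTTP path
    "authenticator": "externalbrowser",         # SSO via an external browser popup
}

print(yaml.safe_dump({"connectionArguments": connection_arguments}, sort_keys=False))
```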
### Workflow Configs for Security Provider
We support different security providers. You can find their definitions [here](https://github.com/open-metadata/OpenMetadata/tree/main/openmetadata-spec/src/main/resources/json/schema/security/client).
## OpenMetadata JWT Auth
- JWT tokens allow your clients to authenticate against the OpenMetadata server. You can find more details on enabling JWT tokens [here](/deployment/security/enable-jwt-tokens).
```yaml
workflowConfig:
  openMetadataServerConfig:
    hostPort: "http://localhost:8585/api"
    authProvider: openmetadata
    securityConfig:
      jwtToken: "{bot_jwt_token}"
```
- Refer to the JWT Troubleshooting section [here](/deployment/security/jwt-troubleshooting) for any issues with your JWT configuration. If you need information on configuring ingestion with other security providers for your bots, you can follow [this doc](/deployment/security/workflow-config-auth).
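Since the JWT token is a credential, you may prefer not to hard-code it in the YAML. One common approach, sketched below under the assumption that the token is exported in an environment variable (the name `OM_JWT_TOKEN` is arbitrary), is to substitute it into the config string before parsing it.
```python
# Minimal sketch: inject the bot JWT token from an environment variable instead of
# hard-coding it in the YAML file. OM_JWT_TOKEN is an arbitrary name chosen here.
import os
import yaml

config_template = """
workflowConfig:
  openMetadataServerConfig:
    hostPort: "http://localhost:8585/api"
    authProvider: openmetadata
    securityConfig:
      jwtToken: "{jwt_token}"
"""

workflow_config = yaml.safe_load(config_template.format(jwt_token=os.environ["OM_JWT_TOKEN"]))
print(workflow_config["workflowConfig"]["openMetadataServerConfig"]["hostPort"])
```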
### 2. Prepare the Ingestion DAG
Create a Python file in your Airflow DAGs directory with the following contents:
{% codePreview %}
{% codeInfoContainer %}
{% codeInfo srNumber=7 %}
#### Import necessary modules
The `Workflow` class that is being imported is a part of a metadata ingestion framework, which defines a process of getting data from different sources and ingesting it into a central metadata repository.
Here we are also importing all the basic requirements to parse YAMLs, handle dates and build our DAG.
{% /codeInfo %}
{% codeInfo srNumber=8 %}
**Default arguments for all tasks in the Airflow DAG.**
- The `default_args` dictionary sets common task-level settings for the DAG, including the owner's name, email address, number of retries, retry delay, and execution timeout.
{% /codeInfo %}
{% codeInfo srNumber=9 %}
- **config**: The metadata ingestion configuration we prepared above, embedded as a YAML string.
{% /codeInfo %}
{% codeInfo srNumber=10 %}
- **metadata_ingestion_workflow()**: This code defines a function `metadata_ingestion_workflow()` that loads a YAML configuration, creates a `Workflow` object, executes the workflow, checks its status, prints the status to the console, and stops the workflow.
{% /codeInfo %}
{% codeInfo srNumber=11 %}
- **DAG**: Creates a DAG using the Airflow framework; tune the DAG configuration to whatever fits your requirements.
- For more details on creating Airflow DAGs, see the Airflow documentation [here](https://airflow.apache.org/docs/apache-airflow/stable/core-concepts/dags.html#declaring-a-dag).
{% /codeInfo %}
Note that from connector to connector, this recipe will always be the same.
By updating the `YAML configuration`, you will be able to extract metadata from different sources.
{% /codeInfoContainer %}
{% codeBlock fileName="filename.py" %}
```python {% srNumber=7 %}
import pathlib
import yaml
from datetime import timedelta
from airflow import DAG
from metadata.config.common import load_config_file
from metadata.ingestion.api.workflow import Workflow
from airflow.utils.dates import days_ago

try:
    from airflow.operators.python import PythonOperator
except ModuleNotFoundError:
    from airflow.operators.python_operator import PythonOperator
```
```python {% srNumber=8 %}
default_args = {
    "owner": "user_name",
    "email": ["username@org.com"],
    "email_on_failure": False,
    "retries": 3,
    "retry_delay": timedelta(minutes=5),
    "execution_timeout": timedelta(minutes=60)
}
```
```python {% srNumber=9 %}
config = """
<your YAML configuration>
"""
```
```python {% srNumber=10 %}
def metadata_ingestion_workflow():
    workflow_config = yaml.safe_load(config)
    workflow = Workflow.create(workflow_config)
    workflow.execute()
    workflow.raise_from_status()
    workflow.print_status()
    workflow.stop()
```
```python {% srNumber=11 %}
with DAG(
    "sample_data",
    default_args=default_args,
    description="An example DAG which runs an OpenMetadata ingestion workflow",
    start_date=days_ago(1),
    is_paused_upon_creation=False,
    schedule_interval='*/5 * * * *',
    catchup=False,
) as dag:
    ingest_task = PythonOperator(
        task_id="ingest_using_recipe",
        python_callable=metadata_ingestion_workflow,
    )
```
{% /codeBlock %}
{% /codePreview %}
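Before scheduling the DAG, it can be convenient to run the same ingestion once outside Airflow to catch configuration mistakes early. The sketch below simply reuses the `Workflow` calls from the recipe above; replace the placeholder with the YAML configuration you defined in step 1.
```python
# Minimal local run of the same ingestion, outside Airflow, to validate the config.
import yaml

from metadata.ingestion.api.workflow import Workflow

config = """
<your YAML configuration>
"""

if __name__ == "__main__":
    workflow_config = yaml.safe_load(config)
    workflow = Workflow.create(workflow_config)
    workflow.execute()
    workflow.raise_from_status()
    workflow.print_status()
    workflow.stop()
```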


@@ -0,0 +1,202 @@
---
title: Run Databricks Pipeline Connector using the CLI
slug: /connectors/pipeline/databricks-pipeline/cli
---
# Run Databricks Pipeline using the metadata CLI
In this section, we provide guides and references to use the Databricks Pipeline connector.
Configure and schedule Databricks Pipeline metadata workflows using the metadata CLI:
- [Requirements](#requirements)
- [Metadata Ingestion](#metadata-ingestion)
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{% /inlineCallout %}
To run the Ingestion via the UI you'll need to use the OpenMetadata Ingestion Container, which comes shipped with
custom Airflow plugins to handle the workflow deployment.
### Python Requirements
To run the Databricks Pipeline ingestion, you will need to install:
```bash
pip3 install "openmetadata-ingestion[databricks]"
```
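Optionally, you can confirm which version of the package ended up in your environment before running the ingestion; the small check below uses only the standard library.
```python
# Optional: print the installed openmetadata-ingestion version.
from importlib.metadata import version

print("openmetadata-ingestion", version("openmetadata-ingestion"))
```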
## Metadata Ingestion
All connectors are defined as JSON Schemas.
[Here](https://github.com/open-metadata/OpenMetadata/blob/main/openmetadata-spec/src/main/resources/json/schema/entity/services/connections/pipeline/databricksPipelineConnection.json)
you can find the structure to create a connection to Databricks Pipeline.
In order to create and run a Metadata Ingestion workflow, we will follow
the steps to create a YAML configuration able to connect to the source,
process the Entities if needed, and reach the OpenMetadata server.
The workflow is modeled around the following
[JSON Schema](https://github.com/open-metadata/OpenMetadata/blob/main/openmetadata-spec/src/main/resources/json/schema/metadataIngestion/workflow.json).
### 1. Define the YAML Config
This is a sample config for Databricks Pipeline:
{% codePreview %}
{% codeInfoContainer %}
#### Source Configuration - Service Connection
{% codeInfo srNumber=1 %}
**Host and Port**: Enter the fully qualified hostname and port number for your Databricks Pipeline deployment in the Host and Port field.
{% /codeInfo %}
{% codeInfo srNumber=2 %}
**Token**: Generated Token to connect to Databricks Pipeline.
{% /codeInfo %}
{% codeInfo srNumber=3 %}
**Connection Arguments (Optional)**: Enter any additional connection arguments, such as security or protocol configs, that should be sent to Databricks during the connection. These details must be added as key-value pairs.
- If you are using Single Sign-On (SSO) for authentication, add the `authenticator` details in the Connection Arguments as a key-value pair as follows: `"authenticator" : "sso_login_url"`
- If you authenticate with SSO using an external browser popup, add the `authenticator` details in the Connection Arguments as a key-value pair as follows: `"authenticator" : "externalbrowser"`
**HTTP Path**: HTTP path of the Databricks compute resource (for example, a cluster or SQL warehouse).
{% /codeInfo %}
#### Source Configuration - Source Config
{% codeInfo srNumber=4 %}
The `sourceConfig` is defined [here](https://github.com/open-metadata/OpenMetadata/blob/main/openmetadata-spec/src/main/resources/json/schema/metadataIngestion/pipelineServiceMetadataPipeline.json):
**dbServiceNames**: Database Service Name for the creation of lineage, if the source supports it.
**includeTags**: Set the Include tags toggle to control whether or not to include tags as part of metadata ingestion.
**markDeletedPipelines**: Set the Mark Deleted Pipelines toggle to flag pipelines as soft-deleted if they are not present anymore in the source system.
**pipelineFilterPattern**: Note that `pipelineFilterPattern` supports regex to include or exclude pipelines (a filled-in `sourceConfig` example is sketched after the code preview below).
{% /codeInfo %}
#### Sink Configuration
{% codeInfo srNumber=5 %}
To send the metadata to OpenMetadata, it needs to be specified as `type: metadata-rest`.
{% /codeInfo %}
#### Workflow Configuration
{% codeInfo srNumber=6 %}
The main property here is the `openMetadataServerConfig`, where you can define the host and security provider of your OpenMetadata installation.
For a simple, local installation using our docker containers, this looks like:
{% /codeInfo %}
{% /codeInfoContainer %}
{% codeBlock fileName="filename.yaml" %}
```yaml
source:
  type: databrickspipeline
  serviceName: local_databricks_pipeline
  serviceConnection:
    config:
      type: DatabricksPipeline
```
```yaml {% srNumber=1 %}
      hostPort: localhost:443
```
```yaml {% srNumber=2 %}
      token: <databricks token>
```
```yaml {% srNumber=3 %}
      connectionArguments:
        http_path: <http path of databricks cluster>
```
```yaml {% srNumber=4 %}
  sourceConfig:
    config:
      type: PipelineMetadata
      # markDeletedPipelines: True
      # includeTags: True
      # includeLineage: true
      # pipelineFilterPattern:
      #   includes:
      #     - pipeline1
      #     - pipeline2
      #   excludes:
      #     - pipeline3
      #     - pipeline4
```
```yaml {% srNumber=5 %}
sink:
  type: metadata-rest
  config: {}
```
```yaml {% srNumber=6 %}
workflowConfig:
  openMetadataServerConfig:
    hostPort: "http://localhost:8585/api"
    authProvider: openmetadata
    securityConfig:
      jwtToken: "{bot_jwt_token}"
```
{% /codeBlock %}
{% /codePreview %}
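As a worked example of the `sourceConfig` options described above (for instance `dbServiceNames` for lineage and the pipeline filter pattern), the sketch below prints a filled-in fragment as YAML so the nesting is unambiguous. The database service name and the regex are placeholders; the field names follow the `pipelineServiceMetadataPipeline` schema linked above.
```python
# Illustrative only: a sourceConfig fragment with lineage and filter options filled in.
import yaml

source_config = {
    "sourceConfig": {
        "config": {
            "type": "PipelineMetadata",
            "dbServiceNames": ["local_databricks_db"],  # placeholder database service name
            "includeLineage": True,
            "markDeletedPipelines": True,
            "pipelineFilterPattern": {"includes": ["sales_.*"]},  # placeholder regex
        }
    }
}

print(yaml.safe_dump(source_config, sort_keys=False))
```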
### Workflow Configs for Security Provider
We support different security providers. You can find their definitions [here](https://github.com/open-metadata/OpenMetadata/tree/main/openmetadata-spec/src/main/resources/json/schema/security/client).
## OpenMetadata JWT Auth
- JWT tokens allow your clients to authenticate against the OpenMetadata server. You can find more details on enabling JWT tokens [here](/deployment/security/enable-jwt-tokens).
```yaml
workflowConfig:
  openMetadataServerConfig:
    hostPort: "http://localhost:8585/api"
    authProvider: openmetadata
    securityConfig:
      jwtToken: "{bot_jwt_token}"
```
- Refer to the JWT Troubleshooting section [here](/deployment/security/jwt-troubleshooting) for any issues with your JWT configuration. If you need information on configuring ingestion with other security providers for your bots, you can follow [this doc](/deployment/security/workflow-config-auth).
### 2. Run with the CLI
First, we will need to save the YAML file. Afterward, and with all requirements installed, we can run:
```bash
metadata ingest -c <path-to-yaml>
```
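If the ingestion fails to start, a frequent cause is a malformed YAML file. An optional pre-flight check such as the sketch below (the default file name is an assumption; pass your own path) confirms the file parses and contains the top-level sections used in the sample config above.
```python
# Optional pre-flight check before running `metadata ingest -c <path-to-yaml>`.
import sys
import yaml

path = sys.argv[1] if len(sys.argv) > 1 else "databricks_pipeline.yaml"  # assumed file name

with open(path) as f:
    cfg = yaml.safe_load(f)

for section in ("source", "sink", "workflowConfig"):
    assert section in cfg, f"missing top-level section: {section}"

print(f"{path} parsed OK with sections: {sorted(cfg)}")
```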
Note that from connector to connector, this recipe will always be the same. By updating the YAML configuration,
you will be able to extract metadata from different sources.


@@ -0,0 +1,292 @@
---
title: Databricks Pipeline
slug: /connectors/pipeline/databricks-pipeline
---
# Databricks Pipeline
In this section, we provide guides and references to use the Databricks Pipeline connector.
Configure and schedule Databricks Pipeline metadata workflows from the OpenMetadata UI:
- [Requirements](#requirements)
- [Metadata Ingestion](#metadata-ingestion)
If you don't want to use the OpenMetadata Ingestion container to configure the workflows via the UI, then you can check the following docs to connect using Airflow SDK or with the CLI.
{% tilesContainer %}
{% tile
title="Ingest with Airflow"
description="Configure the ingestion using Airflow SDK"
link="/connectors/dashboard/databrickspipeline/airflow"
/ %}
{% tile
title="Ingest with the CLI"
description="Run a one-time ingestion using the metadata CLI"
link="/connectors/dashboard/databrickspipeline/cli"
/ %}
{% /tilesContainer %}
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{% /inlineCallout %}
To run the Ingestion via the UI you'll need to use the OpenMetadata Ingestion Container, which comes shipped with custom Airflow plugins to handle the workflow deployment.
## Metadata Ingestion
{% stepsContainer %}
{% step srNumber=1 %}
{% stepDescription title="1. Visit the Services Page" %}
The first step is ingesting the metadata from your sources. Under
Settings, you will find a Services page through which you can link an external
source system to OpenMetadata. Once a service is created, it can be used to
configure metadata, usage, and profiler workflows.
To visit the Services page, select Services from the Settings menu.
{% /stepDescription %}
{% stepVisualInfo %}
{% image
src="/images/v1.0.0/openmetadata/connectors/visit-services.png"
alt="Visit Services Page"
caption="Find Pipeline option on left panel of the settings page" /%}
{% /stepVisualInfo %}
{% /step %}
{% step srNumber=2 %}
{% stepDescription title="2. Create a New Service" %}
Click on the 'Add New Service' button to start the Service creation.
{% /stepDescription %}
{% stepVisualInfo %}
{% image
src="/images/v1.0.0/openmetadata/connectors/create-service.png"
alt="Create a new service"
caption="Add a new Service from the Dashboard Services page" /%}
{% /stepVisualInfo %}
{% /step %}
{% step srNumber=3 %}
{% stepDescription title="3. Select the Service Type" %}
Select Databricks Pipeline as the service type and click Next.
{% /stepDescription %}
{% stepVisualInfo %}
{% image
src="/images/v1.0.0/openmetadata/connectors/databrickspipeline/select-service.png"
alt="Select Service"
caption="Select your service from the list" /%}
{% /stepVisualInfo %}
{% /step %}
{% step srNumber=4 %}
{% stepDescription title="4. Name and Describe your Service" %}
Provide a name and description for your service as illustrated below.
#### Service Name
OpenMetadata uniquely identifies services by their Service Name. Provide
a name that distinguishes your deployment from other services, including
the other Databricks Pipeline services that you might be ingesting metadata
from.
{% /stepDescription %}
{% stepVisualInfo %}
{% image
src="/images/v1.0.0/openmetadata/connectors/databrickspipeline/add-new-service.png"
alt="Add New Service"
caption="Provide a Name and description for your Service" /%}
{% /stepVisualInfo %}
{% /step %}
{% step srNumber=5 %}
{% stepDescription title="5. Configure the Service Connection" %}
In this step, we will configure the connection settings required for
this connector. Please follow the instructions below to ensure that
you've configured the connector to read from your Databricks Pipeline service as
desired.
{% /stepDescription %}
{% stepVisualInfo %}
{% image
src="/images/v1.0.0/openmetadata/connectors/databrickspipeline/service-connection.png"
alt="Configure service connection"
caption="Configure the service connection by filling the form" /%}
{% /stepVisualInfo %}
{% /step %}
{% extraContent parentTagName="stepsContainer" %}
#### Connection Options
- **Host and Port**: Enter the fully qualified hostname and port number for your Databricks Pipeline deployment in the Host and Port field.
- **Token**: Token generated to connect to Databricks Pipeline (a small connectivity check using the host and token is sketched after this list).
- **HTTP Path**: HTTP path of the Databricks compute resource (for example, a cluster or SQL warehouse).
- **Connection Arguments (Optional)**: Enter the details for any additional connection arguments such as security or protocol configs that can be sent to Databricks during the connection. These details must be added as Key-Value pairs.
- In case you are using Single-Sign-On (SSO) for authentication, add the `authenticator` details in the Connection Arguments as a Key-Value pair as follows: `"authenticator" : "sso_login_url"`
- In case you authenticate with SSO using an external browser popup, then add the `authenticator` details in the Connection Arguments as a Key-Value pair as follows: `"authenticator" : "externalbrowser"`
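If you want to sanity-check the Host and Port and Token values before saving the service, a small request against the Databricks Jobs API usually suffices. The snippet below is an illustration only: it assumes the `requests` package is installed, that your workspace exposes the Jobs API 2.1, and that the token has permission to list jobs.
```python
# Illustration only: verify the host and token by listing jobs through the Databricks
# Jobs API 2.1. The host and token values below are placeholders.
import requests

host = "adb-xyz.azuredatabricks.net"  # placeholder workspace host
token = "<databricks token>"          # placeholder personal access token

response = requests.get(
    f"https://{host}/api/2.1/jobs/list",
    headers={"Authorization": f"Bearer {token}"},
    timeout=10,
)
response.raise_for_status()
print("Token accepted; jobs visible:", len(response.json().get("jobs", [])))
```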
{% /extraContent %}
{% step srNumber=6 %}
{% stepDescription title="6. Test the Connection" %}
Once the credentials have been added, click on `Test Connection` and Save
the changes.
{% /stepDescription %}
{% stepVisualInfo %}
{% image
src="/images/v1.0.0/openmetadata/connectors/test-connection.png"
alt="Test Connection"
caption="Test the connection and save the Service" /%}
{% /stepVisualInfo %}
{% /step %}
{% step srNumber=7 %}
{% stepDescription title="7. Configure Metadata Ingestion" %}
In this step we will configure the metadata ingestion pipeline.
Please follow the instructions below.
{% /stepDescription %}
{% stepVisualInfo %}
{% image
src="/images/v1.0.0/openmetadata/connectors/configure-metadata-ingestion-dashboard.png"
alt="Configure Metadata Ingestion"
caption="Configure Metadata Ingestion Page" /%}
{% /stepVisualInfo %}
{% /step %}
{% extraContent parentTagName="stepsContainer" %}
#### Metadata Ingestion Options
- **Name**: This field refers to the name of the ingestion pipeline; you can customize the name or use the generated one.
- **Pipeline Filter Pattern (Optional)**: Use pipeline filter patterns to control whether or not to include pipelines as part of metadata ingestion (see the short regex illustration after this list).
  - **Include**: Explicitly include pipelines by adding a list of comma-separated regular expressions to the Include field. OpenMetadata will include all pipelines with names matching one or more of the supplied regular expressions. All other pipelines will be excluded.
  - **Exclude**: Explicitly exclude pipelines by adding a list of comma-separated regular expressions to the Exclude field. OpenMetadata will exclude all pipelines with names matching one or more of the supplied regular expressions. All other pipelines will be included.
- **Include lineage (toggle)**: Set the Include lineage toggle to control whether or not to include lineage between pipelines and data sources as part of metadata ingestion.
- **Enable Debug Log (toggle)**: Set the Enable Debug Log toggle to set the default log level to debug; these logs can be viewed later in Airflow.
- **Mark Deleted Pipelines (toggle)**: Set the Mark Deleted Pipelines toggle to flag pipelines as soft-deleted if they are not present anymore in the source system.
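The Include and Exclude fields above are regular expressions matched against pipeline names. The short sketch below only illustrates the matching idea with Python's `re` module and placeholder names; the exact matching semantics are implemented by the ingestion framework itself.
```python
# Illustration of regex-based include filtering on pipeline names (placeholder values).
import re

include_patterns = ["sales_.*", "finance_daily"]
pipeline_names = ["sales_orders", "marketing_sync", "finance_daily"]

included = [
    name
    for name in pipeline_names
    if any(re.match(pattern, name) for pattern in include_patterns)
]
print(included)  # ['sales_orders', 'finance_daily']
```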
{% /extraContent %}
{% step srNumber=8 %}
{% stepDescription title="8. Schedule the Ingestion and Deploy" %}
Scheduling can be set up at an hourly, daily, or weekly cadence. The
timezone is in UTC. Select a Start Date to schedule for ingestion. It is
optional to add an End Date.
Review your configuration settings. If they match what you intended,
click Deploy to create the service and schedule metadata ingestion.
If something doesn't look right, click the Back button to return to the
appropriate step and change the settings as needed.
After configuring the workflow, you can click on Deploy to create the
pipeline.
{% /stepDescription %}
{% stepVisualInfo %}
{% image
src="/images/v1.0.0/openmetadata/connectors/schedule.png"
alt="Schedule the Workflow"
caption="Schedule the Ingestion Pipeline and Deploy" /%}
{% /stepVisualInfo %}
{% /step %}
{% step srNumber=9 %}
{% stepDescription title="9. View the Ingestion Pipeline" %}
Once the workflow has been successfully deployed, you can view the
Ingestion Pipeline running from the Service Page.
{% /stepDescription %}
{% stepVisualInfo %}
{% image
src="/images/v1.0.0/openmetadata/connectors/view-ingestion-pipeline.png"
alt="View Ingestion Pipeline"
caption="View the Ingestion Pipeline from the Service Page" /%}
{% /stepVisualInfo %}
{% /step %}
{% /stepsContainer %}
## Troubleshooting
### Workflow Deployment Error
If there were any errors during the workflow deployment process, the
Ingestion Pipeline Entity will still be created, but no workflow will be
present in the Ingestion container.
- You can then edit the Ingestion Pipeline and Deploy it again.
- From the Connection tab, you can also Edit the Service if needed.
{% image
src="/images/v1.0.0/openmetadata/connectors/workflow-deployment-error.png"
alt="Workflow Deployment Error"
caption="Edit and Deploy the Ingestion Pipeline" /%}


@@ -13,6 +13,7 @@ This is the supported list of connectors for Pipeline Services:
- [Fivetran](/connectors/pipeline/fivetran)
- [Dagster](/connectors/pipeline/dagster)
- [Domo Pipeline](/connectors/pipeline/domo-pipeline)
- [Databricks Pipeline](/connectors/pipeline/databricks-pipeline)
If you have a request for a new connector, don't hesitate to reach out in [Slack](https://slack.open-metadata.org/) or
open a [feature request](https://github.com/open-metadata/OpenMetadata/issues/new/choose) in our GitHub repo.


@@ -463,6 +463,12 @@ site_menu:
url: /connectors/pipeline/nifi/airflow
- category: Connectors / Pipeline / Nifi / CLI
url: /connectors/pipeline/nifi/cli
- category: Connectors / Pipeline / Databricks Pipeline
url: /connectors/pipeline/databricks-pipeline
- category: Connectors / Pipeline / Databricks Pipeline / Airflow
url: /connectors/pipeline/databricks-pipeline/airflow
- category: Connectors / Pipeline / Databricks Pipeline / CLI
url: /connectors/pipeline/databricks-pipeline/cli
- category: Connectors / Pipeline / Glue Pipeline
url: /connectors/pipeline/glue-pipeline
- category: Connectors / Pipeline / Glue Pipeline / Airflow

(Three binary image files added for the new connector screenshots; contents not shown. Sizes: 108 KiB, 140 KiB, 106 KiB.)


@@ -1,80 +1,66 @@
# Databricks
In this section, we provide guides and references to use the Databricks connector.
In this section, we provide guides and references to use the Databricks connector. You can view the full documentation for Databricks [here](https://docs.open-metadata.org/connectors/database/databricks).
# Requirements
<!-- to be updated -->
You can find further information on the Kafka connector in the [docs](https://docs.open-metadata.org/connectors/database/databricks).
You can find further information on the Databricks connector in the [docs](https://docs.open-metadata.org/connectors/database/databricks).
To learn more about the Databricks connection details (`hostPort`, `token`, `http_path`), see the [docs](https://docs.open-metadata.org/connectors/database/databricks/troubleshooting).
$$note
We have tested it with Databricks runtime version 11.3 LTS (the runtime version must be 9 or above).
$$
### Usage & Lineage
$$note
To get Query Usage and Lineage details, you need an Azure Databricks Premium account.
$$
## Connection Details
$$section
### Scheme $(id="scheme")
SQLAlchemy driver scheme options.
<!-- scheme to be updated -->
SQLAlchemy driver scheme options. If you are unsure about this setting, you can use the default value.
$$
$$section
### Host Port $(id="hostPort")
Host and port of the Databricks service.
<!-- hostPort to be updated -->
Host and port of the Databricks service. This should be specified as a string in the format 'hostname:port'.
**Example**: `adb-xyz.azuredatabricks.net:443`
$$
$$section
### Token $(id="token")
Generated Token to connect to Databricks.
<!-- token to be updated -->
**Example**: `dapw488e89a7176f7eb39bbc718617891564`
$$
$$section
### Http Path $(id="httpPath")
Databricks compute resources URL.
<!-- httpPath to be updated -->
**Example**: `/sql/1.0/warehouses/xyz123`
$$
$$section
### Catalog $(id="catalog")
Catalog of the data source(Example: hive_metastore). This is optional parameter, if you would like to restrict the metadata reading to a single catalog. When left blank, OpenMetadata Ingestion attempts to scan all the catalog.
<!-- catalog to be updated -->
Catalog of the data source. This is an optional parameter; if you would like to restrict the metadata reading to a single catalog, specify its name here. When left blank, OpenMetadata Ingestion attempts to scan all the catalogs.
**Example**: `hive_metastore`
$$
$$section
### Database Schema $(id="databaseSchema")
Database schema of the data source. This is an optional parameter; if you would like to restrict the metadata reading to a single schema, specify its name here. When left blank, OpenMetadata Ingestion attempts to scan all the schemas.
<!-- databaseSchema to be updated -->
**Example**: `default`
$$
$$section
### Connection Timeout $(id="connectionTimeout")
The maximum amount of time (in seconds) to wait for a successful connection to the data source. If the connection attempt takes longer than this timeout period, an error will be returned.
<!-- connectionTimeout to be updated -->
$$
$$section
### Connection Options $(id="connectionOptions")
Additional connection options to build the URL that can be sent to service during the connection.
<!-- connectionOptions to be updated -->
$$
$$section
### Connection Arguments $(id="connectionArguments")
Additional connection arguments such as security or protocol configs that can be sent to service during connection.
<!-- connectionArguments to be updated -->
$$
$$section
### Supports Database $(id="supportsDatabase")
The source service supports the database concept in its hierarchy.
<!-- supportsDatabase to be updated -->
$$


@@ -1,30 +1,23 @@
# Airbyte
In this section, we provide guides and references to use the Airbyte connector.
In this section, we provide guides and references to use the Airbyte connector. You can view the full documentation for Airbyte [here](https://docs.open-metadata.org/connectors/pipeline/airbyte).
# Requirements
<!-- to be updated -->
You can find further information on the Kafka connector in the [docs](https://docs.open-metadata.org/connectors/pipeline/airbyte).
You can find further information on the Airbyte connector in the [docs](https://docs.open-metadata.org/connectors/pipeline/airbyte).
## Connection Details
$$section
### Host Port $(id="hostPort")
Pipeline Service Management/UI URL.
<!-- hostPort to be updated -->
Pipeline Service Management/UI URL. This should be specified as a string in the format 'hostname:port'.
**Example**: `localhost:8000`, `host.docker.internal:8000`
$$
$$section
### Username $(id="username")
Username to connect to Airbyte.
<!-- username to be updated -->
$$
$$section
### Password $(id="password")
Password to connect to Airbyte.
<!-- password to be updated -->
$$


@@ -1,37 +1,32 @@
# DatabricksPipeline
In this section, we provide guides and references to use the DatabricksPipeline connector.
In this section, we provide guides and references to use the Databricks Pipeline connector. You can view the full documentation for DatabricksPipeline [here](https://docs.open-metadata.org/connectors/pipeline/databrickspipeline).
# Requirements
<!-- to be updated -->
You can find further information on the Kafka connector in the [docs](https://docs.open-metadata.org/connectors/pipeline/databrickspipeline).
You can find further information on the Databricks Pipeline connector in the [docs](https://docs.open-metadata.org/connectors/pipeline/databrickspipeline).
To learn more about the Databricks connection details (`hostPort`, `token`, `http_path`), see the [docs](https://docs.open-metadata.org/connectors/database/databricks/troubleshooting).
## Connection Details
$$section
### Host Port $(id="hostPort")
Host and port of the Databricks service.
<!-- hostPort to be updated -->
Host and port of the Databricks service. This should be specified as a string in the format 'hostname:port'.
**Example**: `adb-xyz.azuredatabricks.net:443`
$$
$$section
### Token $(id="token")
Generated Token to connect to Databricks.
<!-- token to be updated -->
Generated Token to connect to Databricks Pipeline.
**Example**: `dapw488e89a7176f7eb39bbc718617891564`
$$
$$section
### Http Path $(id="httpPath")
Databricks compute resources URL.
<!-- httpPath to be updated -->
**Example**: `/sql/1.0/warehouses/xyz123`
$$
$$section
### Connection Arguments $(id="connectionArguments")
Additional connection arguments such as security or protocol configs that can be sent to service during connection.
<!-- connectionArguments to be updated -->
$$