
---
title: Run the dbt Cloud Connector Externally
slug: /connectors/pipeline/dbtcloud/yaml
---

{% connectorDetailsHeader name="dbt Cloud" stage="PROD" platform="Collate" availableFeatures=["Pipelines", "Pipeline Status", "Tags"] unavailableFeatures=["Owners", "Lineage"] /%}

In this section, we provide guides and references to use the dbt Cloud connector.

Configure and schedule dbt Cloud metadata workflows externally:

{% partial file="/v1.5/connectors/external-ingestion-deployment.md" /%}

## Requirements

### Python Requirements

{% partial file="/v1.5/connectors/python-requirements.md" /%}

## Metadata Ingestion

All connectors are defined as JSON Schemas. Here you can find the structure to create a connection to dbt Cloud.

To create and run a Metadata Ingestion workflow, we will follow these steps to build a YAML configuration that connects to the source, processes the Entities if needed, and reaches the OpenMetadata server.

The workflow is modeled around the following JSON Schema.

### 1. Define the YAML Config

This is a sample config for dbt Cloud:

{% codePreview %}

{% codeInfoContainer %}

#### Source Configuration - Service Connection

{% codeInfo srNumber=1 %}

**host**: dbt Cloud Access URL, e.g. `https://abc12.us1.dbt.com`. Go to your dbt Cloud account settings and open the Access URLs section. Among the URLs listed there, use the Access URL as the host. For more information, refer to the dbt Cloud documentation.

{% /codeInfo %}

{% codeInfo srNumber=2 %}

**discoveryAPI**: dbt Cloud Discovery API URL, e.g. `https://abc12.metadata.us1.dbt.com/graphql`. In the same dbt Cloud account settings page where you found the Access URL, scroll down to the Discovery API URL. If the URL does not already end with `/graphql`, append it; the value must end with `/graphql`. Note that the Semantic Layer GraphQL API URL is different from the Discovery API URL.

{% /codeInfo %}

{% codeInfo srNumber=3 %}

**accountId**: The Account ID of your dbt Cloud project. Go to your dbt Cloud account settings; the Account ID is listed under Account information. This is a numeric value, but OpenMetadata parses it as a string.

{% /codeInfo %}

{% codeInfo srNumber=4 %}

**jobId**: Optional. The Job ID of the dbt Cloud job in your project to fetch metadata for. Look for the segment after `jobs` in the URL; for instance, in a URL like `https://cloud.getdbt.com/accounts/123/projects/87477/jobs/73659994`, the job ID is `73659994`. This is a numeric value, but OpenMetadata parses it as a string. If not passed, all jobs under the Account ID will be ingested.

{% /codeInfo %}

{% codeInfo srNumber=5 %}

**token**: The authentication token of your dbt Cloud API account. To generate an access token, follow the dbt Cloud documentation on API tokens. Make sure the token has the permissions required to run GraphQL queries against the Discovery API and to fetch job and run details.

{% /codeInfo %}

{% partial file="/v1.5/connectors/yaml/pipeline/source-config-def.md" /%}

{% partial file="/v1.5/connectors/yaml/ingestion-sink-def.md" /%}

{% partial file="/v1.5/connectors/yaml/workflow-config-def.md" /%}

{% /codeInfoContainer %}

{% codeBlock fileName="filename.yaml" %}

source:
  type: dbtcloud
  serviceName: dbtcloud_source
  serviceConnection:
    config:
      type: DBTCloud
      host: "https://account_prefix.account_region.dbt.com"
      discoveryAPI: "https://metadata.cloud.getdbt.com/graphql"
      accountId: "numeric_account_id"
      # jobId: "numeric_job_id"
      token: auth_token

{% partial file="/v1.5/connectors/yaml/pipeline/source-config.md" /%}

{% partial file="/v1.5/connectors/yaml/ingestion-sink.md" /%}

{% partial file="/v1.5/connectors/yaml/workflow-config.md" /%}

{% /codeBlock %}

{% /codePreview %}
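
Before scheduling the workflow, it can help to confirm that the `host`, `accountId`, `token`, and `discoveryAPI` values resolve and that the token carries the permissions described above. The sketch below is illustrative only: the URLs, account ID, and token are placeholders, and the endpoints and auth header schemes follow dbt Cloud's public Administrative and Discovery API documentation, so adjust them if your account differs.

```python
import requests

HOST = "https://abc12.us1.dbt.com"                            # host (placeholder)
DISCOVERY_API = "https://abc12.metadata.us1.dbt.com/graphql"  # discoveryAPI (placeholder)
ACCOUNT_ID = "123"                                            # accountId (placeholder)
TOKEN = "auth_token"                                          # token (placeholder)

# Administrative API: list jobs for the account.
# A 200 response validates host, accountId, and the token itself.
jobs = requests.get(
    f"{HOST}/api/v2/accounts/{ACCOUNT_ID}/jobs/",
    headers={"Authorization": f"Token {TOKEN}"},
    timeout=30,
)
print("Administrative API:", jobs.status_code)

# Discovery API: send a minimal GraphQL introspection query.
# A 200 response validates discoveryAPI and the token's metadata permissions.
meta = requests.post(
    DISCOVERY_API,
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"query": "{ __schema { queryType { name } } }"},
    timeout=30,
)
print("Discovery API:", meta.status_code)
```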

{% partial file="/v1.5/connectors/yaml/ingestion-cli.md" /%}