---
title: Run the Azure Data Factory Connector Externally
slug: /connectors/pipeline/datafactory/yaml
collate: true
---
{% connectorDetailsHeader name="Azure Data Factory" stage="PROD" platform="Collate" availableFeatures=["Pipelines", "Pipeline Status", "Lineage"] unavailableFeatures=["Owners", "Tags"] / %}
In this section, we provide guides and references to use the Azure Data Factory connector.
Configure and schedule Azure Data Factory metadata workflows from the CLI:
{% partial file="/v1.7/connectors/external-ingestion-deployment.md" /%}
## Requirements

### Data Factory Versions
The Ingestion framework uses Azure Data Factory APIs to connect to the Data Factory and fetch metadata.
You can find further information on the Azure Data Factory connector in the docs.
### Permissions
Ensure that the service principal or managed identity you're using has the necessary permissions on the Data Factory resource (the Reader, Contributor, or Data Factory Contributor role at minimum).
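For reference, here is a minimal Azure CLI sketch for assigning such a role to a service principal; the subscription, resource group, factory, and client IDs below are placeholders you must replace with your own values:

```bash
# Assign the Reader role to a service principal on a specific Data Factory.
# All <...> values are placeholders, not real identifiers.
az role assignment create \
  --assignee "<service-principal-client-id>" \
  --role "Reader" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.DataFactory/factories/<factory-name>"
```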
### Python Requirements
{% partial file="/v1.7/connectors/python-requirements.md" /%}
To run the Data Factory ingestion, you will need to install:

```bash
pip3 install "openmetadata-ingestion[datafactory]"
```
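To confirm the package installed correctly, you can check the version of the `metadata` CLI that ships with `openmetadata-ingestion` (assuming your Python scripts directory is on `PATH`):

```bash
# Prints the installed ingestion framework version
metadata --version
```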
## Metadata Ingestion
All connectors are defined as JSON Schemas. Here you can find the structure to create a connection to Data Factory.
To create and run a Metadata Ingestion workflow, we will follow the steps to create a YAML configuration that connects to the source, processes the Entities if needed, and reaches the OpenMetadata server.
The workflow is modeled around the following JSON Schema.
### 1. Define the YAML Config
This is a sample config for Data Factory:
{% codePreview %}
{% codeInfoContainer %}
#### Source Configuration - Service Connection
{% codeInfo srNumber=1 %}
**clientId**: To get the Client ID (also known as the application ID), follow these steps:
- Log into Microsoft Azure.
- Search for **App registrations** and select the **App registrations** link.
- Select the Azure AD app you're using for this connection.
- From the **Overview** section, copy the **Application (client) ID**.
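If you prefer the command line, a quick Azure CLI lookup is sketched below; the display name is a placeholder for your own app registration:

```bash
# Prints the Application (client) ID of app registrations matching the
# placeholder display name "my-openmetadata-app".
az ad app list --display-name "my-openmetadata-app" --query "[].appId" -o tsv
```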
{% /codeInfo %}
{% codeInfo srNumber=2 %}
**clientSecret**: To get the client secret, follow these steps:
- Log into Microsoft Azure.
- Search for **App registrations** and select the **App registrations** link.
- Select the Azure AD app you're using for this connection.
- Under **Manage**, select **Certificates & secrets**.
- Under **Client secrets**, select **New client secret**.
- In the **Add a client secret** pop-up window, provide a description for your application secret. Choose when the application should expire, and select **Add**.
- From the **Client secrets** section, copy the string in the **Value** column of the newly created application secret.
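Alternatively, a hedged Azure CLI sketch for creating a secret; `<app-id>` is a placeholder, and flag availability depends on your `az` version:

```bash
# Adds a new one-year client secret to the app registration and prints its
# value. --append keeps any existing credentials in place.
az ad app credential reset --id "<app-id>" --append --years 1 --query password -o tsv
```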
{% /codeInfo %}
{% codeInfo srNumber=3 %}
**tenantId**: To get the tenant ID, follow these steps:
- Log into Microsoft Azure.
- Search for **App registrations** and select the **App registrations** link.
- Select the Azure AD app you're using for this connection.
- From the **Overview** section, copy the **Directory (tenant) ID**.
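From the Azure CLI, the tenant ID of the account you are signed into can also be printed directly:

```bash
# Prints the Directory (tenant) ID for the current login
az account show --query tenantId -o tsv
```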
{% /codeInfo %}
{% codeInfo srNumber=4 %}
**accountName**: Here are the step-by-step instructions for finding the account name of an Azure Data Lake Storage account:
- Sign in to the Azure portal and navigate to the **Storage accounts** page.
- Find the Data Lake Storage account you want to access and click on its name.
- On the account overview page, locate the **Account name** field. This is the unique identifier for the Data Lake Storage account.
- You can use this account name to access and manage the resources associated with the account, such as creating and managing containers and directories.
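The same information is available from the Azure CLI:

```bash
# Lists the names of all storage accounts in the current subscription
az storage account list --query "[].name" -o tsv
```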
{% /codeInfo %}
{% codeInfo srNumber=5 %}
**subscription_id**: Your Azure subscription's unique identifier. In the Azure portal, navigate to **Subscriptions** > **Your Subscription** > **Overview**. You'll see the subscription ID listed there.
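Or, from the Azure CLI:

```bash
# Prints the ID of the currently selected subscription
az account show --query id -o tsv
```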
{% /codeInfo %}
{% codeInfo srNumber=6 %}
**resource_group_name**: This is the name of the resource group that contains your Data Factory instance. In the Azure portal, navigate to **Resource Groups**. Find your resource group and note the name.
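Or, from the Azure CLI:

```bash
# Lists resource group names in the current subscription
az group list --query "[].name" -o tsv
```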
{% /codeInfo %}
{% codeInfo srNumber=7 %}
**factory_name**: The name of your Data Factory instance. In the Azure portal, navigate to **Data Factories** and find your Data Factory. The name will be listed there.
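If you have several factories, the Azure CLI can list them together with their resource groups:

```bash
# Tabulates every Data Factory in the subscription with its resource group
az resource list --resource-type "Microsoft.DataFactory/factories" \
  --query "[].{name:name, resourceGroup:resourceGroup}" -o table
```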
{% /codeInfo %}
{% codeInfo srNumber=8 %}
**run_filter_days** (optional): The number of days back from the current date to look for pipeline runs; only runs within this window are fetched. Default: 7 days.
{% /codeInfo %}
{% partial file="/v1.7/connectors/yaml/pipeline/source-config-def.md" /%}
{% partial file="/v1.7/connectors/yaml/ingestion-sink-def.md" /%}
{% partial file="/v1.7/connectors/yaml/workflow-config-def.md" /%}
{% /codeInfoContainer %}
{% codeBlock fileName="filename.yaml" %}
source:
  type: datafactory
  serviceName: datafactory_source
  serviceConnection:
    config:
      type: DataFactory
      configSource:
        clientId: client_id
        clientSecret: client_secret
        tenantId: tenant_id
        accountName: account_name
      subscription_id: subscription_id
      resource_group_name: resource_group_name
      factory_name: factory_name
      run_filter_days: 7
{% partial file="/v1.7/connectors/yaml/pipeline/source-config.md" /%}
{% partial file="/v1.7/connectors/yaml/ingestion-sink.md" /%}
{% partial file="/v1.7/connectors/yaml/workflow-config.md" /%}
{% /codeBlock %}
{% /codePreview %}
{% partial file="/v1.7/connectors/yaml/ingestion-cli.md" /%}