Feat/update connectors docs (#19415)

This commit is contained in:
tarunpandey23 2025-01-17 13:25:06 +05:30 committed by GitHub
parent d0f65991f2
commit c636ccf68e
GPG Key ID: B5690EEEBB952194
12 changed files with 368 additions and 36 deletions

View File

@ -5,11 +5,37 @@
The `sourceConfig` is defined [here](https://github.com/open-metadata/OpenMetadata/blob/main/openmetadata-spec/src/main/resources/json/schema/metadataIngestion/dashboardServiceMetadataPipeline.json):
- **dbServiceNames**: Database Service Names for ingesting lineage if the source supports it.
- **dashboardFilterPattern**, **chartFilterPattern**, **dataModelFilterPattern**: Note that all of them support regex as include or exclude. E.g., "My dashboard, My dash.*, .*Dashboard".
- **projectFilterPattern**: Filter the dashboards, charts and data sources by projects. Note that all of them support regex as include or exclude. E.g., "My project, My proj.*, .*Project".
- **includeOwners**: Set the 'Include Owners' toggle to control whether to include owners in the ingested entity when the owner email matches a user stored in the OM server as part of metadata ingestion. If the ingested entity already exists and has an owner, the owner will not be overwritten.
- **includeTags**: Set the 'Include Tags' toggle to control whether to include tags in metadata ingestion.
- **includeDataModels**: Set the 'Include Data Models' toggle to control whether to include data models as part of metadata ingestion.
- **markDeletedDashboards**: Set the 'Mark Deleted Dashboards' toggle to flag dashboards as soft-deleted if they are no longer present in the source system.
- **Include Draft Dashboard (toggle)**: Set the 'Include Draft Dashboard' toggle to include draft dashboards. By default, draft dashboards are included.
- **dataModelFilterPattern**: Regex to include or exclude data models that match the pattern.
- **includeOwners**: Enabling this flag will replace the current owner with a new owner from the source during metadata ingestion, if the current owner is null. It is recommended to keep the flag enabled to obtain the owner information during the first metadata ingestion. `includeOwners` accepts a boolean value, either `true` or `false`.
- **markDeletedDashboards**: Optional configuration to soft delete dashboards in OpenMetadata if the source dashboards are deleted. When a dashboard is deleted, all associated entities of that dashboard, such as lineage, are deleted as well. `markDeletedDashboards` accepts a boolean value, either `true` or `false`.
- **markDeletedDataModels**: Optional configuration to soft delete data models in OpenMetadata if the source data models are deleted. When a data model is deleted, all associated entities of that data model, such as lineage, are deleted as well. `markDeletedDataModels` accepts a boolean value, either `true` or `false`.
- **includeTags**: Optional configuration to toggle the ingestion of tags. `includeTags` accepts a boolean value, either `true` or `false`.
- **includeDataModels**: Optional configuration to toggle the ingestion of data models. `includeDataModels` accepts a boolean value, either `true` or `false`.
- **includeDraftDashboard**: Optional configuration to include or exclude draft dashboards. By default, draft dashboards are included. `includeDraftDashboard` accepts a boolean value, either `true` or `false`.
- **overrideMetadata**: Set the 'Override Metadata' toggle to control whether the metadata fetched from the source overrides the existing metadata in the OpenMetadata server. If set to `true`, the fetched metadata overrides the existing metadata; if set to `false`, it does not. This applies to fields such as description, tags, owner, and displayName. `overrideMetadata` accepts a boolean value, either `true` or `false`.
- **overrideLineage**: Set the 'Override Lineage' toggle to control whether to override the existing lineage. `overrideLineage` accepts a boolean value, either `true` or `false`.
{% /codeInfo %}
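As a sketch, the filter patterns and toggles described above combine in a `sourceConfig` block like the following (the regex values are illustrative examples, not defaults):

```yaml
sourceConfig:
  config:
    type: DashboardMetadata
    dashboardFilterPattern:
      includes:
        - "Sales.*"     # ingest only dashboards whose name matches this regex
      excludes:
        - ".*Draft"     # skip dashboards whose name matches this regex
    includeOwners: true
    includeTags: true
    markDeletedDashboards: true
```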

View File

@ -27,4 +27,19 @@
# excludes:
# - project3
# - project4
# dataModelFilterPattern:
# includes:
# - dataModel1
# - dataModel2
# excludes:
# - dataModel3
# - dataModel4
# includeOwners: false # true
# markDeletedDashboards: true # false
# markDeletedDataModels: true # false
# includeTags: true # false
# includeDataModels: true # false
# includeDraftDashboard: true # false
# overrideMetadata: false # true
# overrideLineage: false # true
```

View File

@ -5,11 +5,37 @@
The `sourceConfig` is defined [here](https://github.com/open-metadata/OpenMetadata/blob/main/openmetadata-spec/src/main/resources/json/schema/metadataIngestion/dashboardServiceMetadataPipeline.json):
- **dbServiceNames**: Database Service Names for ingesting lineage if the source supports it.
- **dashboardFilterPattern**, **chartFilterPattern**, **dataModelFilterPattern**: Note that all of them support regex as include or exclude. E.g., "My dashboard, My dash.*, .*Dashboard".
- **projectFilterPattern**: Filter the dashboards, charts and data sources by projects. Note that all of them support regex as include or exclude. E.g., "My project, My proj.*, .*Project".
- **includeOwners**: Set the 'Include Owners' toggle to control whether to include owners in the ingested entity when the owner email matches a user stored in the OM server as part of metadata ingestion. If the ingested entity already exists and has an owner, the owner will not be overwritten.
- **includeTags**: Set the 'Include Tags' toggle to control whether to include tags in metadata ingestion.
- **includeDataModels**: Set the 'Include Data Models' toggle to control whether to include data models as part of metadata ingestion.
- **markDeletedDashboards**: Set the 'Mark Deleted Dashboards' toggle to flag dashboards as soft-deleted if they are no longer present in the source system.
- **Include Draft Dashboard (toggle)**: Set the 'Include Draft Dashboard' toggle to include draft dashboards. By default, draft dashboards are included.
- **dataModelFilterPattern**: Regex to include or exclude data models that match the pattern.
- **includeOwners**: Enabling this flag will replace the current owner with a new owner from the source during metadata ingestion, if the current owner is null. It is recommended to keep the flag enabled to obtain the owner information during the first metadata ingestion. `includeOwners` accepts a boolean value, either `true` or `false`.
- **markDeletedDashboards**: Optional configuration to soft delete dashboards in OpenMetadata if the source dashboards are deleted. When a dashboard is deleted, all associated entities of that dashboard, such as lineage, are deleted as well. `markDeletedDashboards` accepts a boolean value, either `true` or `false`.
- **markDeletedDataModels**: Optional configuration to soft delete data models in OpenMetadata if the source data models are deleted. When a data model is deleted, all associated entities of that data model, such as lineage, are deleted as well. `markDeletedDataModels` accepts a boolean value, either `true` or `false`.
- **includeTags**: Optional configuration to toggle the ingestion of tags. `includeTags` accepts a boolean value, either `true` or `false`.
- **includeDataModels**: Optional configuration to toggle the ingestion of data models. `includeDataModels` accepts a boolean value, either `true` or `false`.
- **includeDraftDashboard**: Optional configuration to include or exclude draft dashboards. By default, draft dashboards are included. `includeDraftDashboard` accepts a boolean value, either `true` or `false`.
- **overrideMetadata**: Set the 'Override Metadata' toggle to control whether the metadata fetched from the source overrides the existing metadata in the OpenMetadata server. If set to `true`, the fetched metadata overrides the existing metadata; if set to `false`, it does not. This applies to fields such as description, tags, owner, and displayName. `overrideMetadata` accepts a boolean value, either `true` or `false`.
- **overrideLineage**: Set the 'Override Lineage' toggle to control whether to override the existing lineage. `overrideLineage` accepts a boolean value, either `true` or `false`.
{% /codeInfo %}
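A minimal sketch of the boolean toggles above in a `sourceConfig` block (the values shown are examples, not defaults):

```yaml
sourceConfig:
  config:
    type: DashboardMetadata
    includeDataModels: true       # ingest data models alongside dashboards
    includeDraftDashboard: false  # skip draft dashboards
    overrideMetadata: false       # keep existing description, tags, owner, displayName
    overrideLineage: false        # keep existing lineage
```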

View File

@ -27,4 +27,19 @@
# excludes:
# - project3
# - project4
# dataModelFilterPattern:
# includes:
# - dataModel1
# - dataModel2
# excludes:
# - dataModel3
# - dataModel4
# includeOwners: false # true
# markDeletedDashboards: true # false
# markDeletedDataModels: true # false
# includeTags: true # false
# includeDataModels: true # false
# includeDraftDashboard: true # false
# overrideMetadata: false # true
# overrideLineage: false # true
```

View File

@ -98,6 +98,10 @@ Name of the mode workspace from where the metadata is to be fetched.
{% /codeInfo %}
{% codeInfo srNumber=5 %}
**filterQueryParam**: Filter query parameter for some of the Mode API calls.
{% /codeInfo %}
{% partial file="/v1.6/connectors/yaml/dashboard/source-config-def.md" /%}
{% partial file="/v1.6/connectors/yaml/ingestion-sink-def.md" /%}
@ -128,6 +132,9 @@ source:
```yaml {% srNumber=4 %}
workspace_name: workspace_name
```
```yaml {% srNumber=5 %}
# filterQueryParam: ""
```
{% partial file="/v1.6/connectors/yaml/dashboard/source-config.md" /%}

View File

@ -216,6 +216,14 @@ Refer to the section [here](/connectors/dashboard/powerbi#powerbi-admin-and-nona
{% /codeInfo %}
{% codeInfo srNumber=9 %}
**pbitFilesSource**: Source to get the .pbit files to extract lineage information. Select one of `local`, `azureConfig`, `gcsConfig`, `s3Config`.
- `pbitFileConfigType`: Determines the storage backend type (azure, gcs, or s3).
- `securityConfig`: Authentication credentials for accessing the storage backend.
- `prefixConfig`: Details of the location in the storage backend.
- `pbitFilesExtractDir`: Specifies the local directory where extracted .pbit files will be stored for processing.
{% /codeInfo %}
{% partial file="/v1.6/connectors/yaml/dashboard/source-config-def.md" /%}
{% partial file="/v1.6/connectors/yaml/ingestion-sink-def.md" /%}
@ -259,6 +267,79 @@ source:
```yaml {% srNumber=8 %}
# useAdminApis: true (default)
```
```yaml {% srNumber=9 %}
# Select one of local, azureConfig, gcsConfig, s3Config.
# For Azure
# pbitFilesSource:
# pbitFileConfigType: azure # Specify the storage type as Azure Blob Storage
# securityConfig:
# clientId: "" # Azure application Client ID
# clientSecret: "" # Azure application Client Secret
# tenantId: "" # Azure tenant ID
# accountName: "" # Azure storage account name
# vaultName: "" # Optional: Azure vault name for secrets management
# scopes: "" # Optional: OAuth scopes for Azure
# prefixConfig:
# bucketName: "" # Name of the Azure Blob Storage container
# objectPrefix: "" # Path prefix to locate files within the container
# pbitFilesExtractDir: /tmp/pbitFiles # Local directory for extracted files
# For gcsConfig
# GCP credentials configurations
# We support two ways of authenticating to GCP: via GCP Credentials Values or GCP Credentials Path.
# Option 1: Authenticate using GCP Credentials Values
# pbitFilesSource:
# pbitFileConfigType: gcs # Specify the storage type as Google Cloud Storage
# securityConfig:
# type: service_account # Authentication type
# projectId: "" # GCP project ID (can be single or multiple)
# privateKeyId: "" # Private Key ID from GCP service account
# privateKey: "" # Private Key from GCP service account
# clientEmail: "" # Service account email
# clientId: "" # Client ID
# authUri: "https://accounts.google.com/o/oauth2/auth" # OAuth URI
# authProviderX509CertUrl: "https://www.googleapis.com/oauth2/v1/certs"
# clientX509CertUrl: "" # Certificate URL
# prefixConfig:
# bucketName: "" # Name of the GCS bucket
# objectPrefix: "" # Path prefix to locate files within the bucket
# pbitFilesExtractDir: /tmp/pbitFiles # Local directory for extracted files
# Option 2: Authenticate using Raw Credential Values
# pbitFilesSource:
# pbitFileConfigType: gcs # Specify the storage type as Google Cloud Storage
# securityConfig:
# type: external_account # Authentication type
# externalType: "external_account" # External account authentication
# audience: "" # Audience for token validation
# subjectTokenType: "" # Type of subject token
# tokenURL: "" # URL to obtain the token
# credentialSource: {} # Raw JSON object with credential source details
# prefixConfig:
# bucketName: "" # Name of the GCS bucket
# objectPrefix: "" # Path prefix to locate files within the bucket
# pbitFilesExtractDir: /tmp/pbitFiles # Local directory for extracted files
# For s3Config
# pbitFilesSource:
# pbitFileConfigType: s3 # Specify the storage type as Amazon S3
# securityConfig:
# awsAccessKeyId: "" # AWS Access Key ID
# awsSecretAccessKey: "" # AWS Secret Access Key
# awsRegion: "" # AWS region for the bucket
# awsSessionToken: "" # Optional session token
# endPointURL: "" # Optional custom S3 endpoint URL
# profileName: "" # Optional AWS CLI profile name
# assumeRoleArn: "" # ARN of the role to assume (if required)
# assumeRoleSessionName: "" # Session name for assumed role
# assumeRoleSourceIdentity: "" # Source identity for assumed session
# prefixConfig:
# bucketName: "" # Name of the S3 bucket
# objectPrefix: "" # Path prefix to locate files within the bucket
# pbitFilesExtractDir: /tmp/pbitFiles # Local directory for extracted files
```
{% partial file="/v1.6/connectors/yaml/dashboard/source-config.md" /%}

View File

@ -117,6 +117,19 @@ This is a sample config for Tableau:
{% /codeInfo %}
{% codeInfo srNumber=12 %}
**verifySSL**: Client SSL verification. Make sure to configure the SSLConfig if enabled. Supported values are `no-ssl`, `ignore`, and `validate`.
{% /codeInfo %}
{% codeInfo srNumber=13 %}
**sslMode**: Mode of SSL. Default is `disable`. Supported modes are `disable`, `allow`, `prefer`, `require`, `verify-ca`, and `verify-full`.
{% /codeInfo %}
{% codeInfo srNumber=14 %}
**sslConfig**: Client SSL configuration.
{% /codeInfo %}
#### Source Configuration - Source Config
{% codeInfo srNumber=8 %}
@ -131,6 +144,7 @@ The `sourceConfig` is defined [here](https://github.com/open-metadata/OpenMetada
- **includeDataModels**: Set the 'Include Data Models' toggle to control whether to include data models as part of metadata ingestion.
- **markDeletedDashboards**: Set the 'Mark Deleted Dashboards' toggle to flag dashboards as soft-deleted if they are not present anymore in the source system.
{% /codeInfo %}
#### Sink Configuration
@ -141,6 +155,7 @@ To send the metadata to OpenMetadata, it needs to be specified as `type: metadat
{% /codeInfo %}
{% partial file="/v1.6/connectors/yaml/workflow-config-def.md" /%}
{% /codeInfoContainer %}
@ -183,6 +198,28 @@ source:
```yaml {% srNumber=11 %}
paginationLimit: pagination_limit
```
```yaml {% srNumber=12 %}
# verifySSL: no-ssl
```
```yaml {% srNumber=13 %}
# sslMode: disable
```
```yaml {% srNumber=14 %}
# sslConfig:
# caCertificate: |
# -----BEGIN CERTIFICATE-----
# sample certificate
# -----END CERTIFICATE-----
# sslCertificate: |
# -----BEGIN CERTIFICATE-----
# sample certificate
# -----END CERTIFICATE-----
# sslKey: |
# -----BEGIN PRIVATE KEY-----
# sample private key
# -----END PRIVATE KEY-----
```
{% partial file="/v1.6/connectors/yaml/dashboard/source-config.md" /%}

View File

@ -18,7 +18,7 @@ Configure and schedule Lightdash metadata and profiler workflows from the OpenMe
- [Requirements](#requirements)
- [Metadata Ingestion](#metadata-ingestion)
{% partial file="/v1.7/connectors/external-ingestion-deployment.md" /%}
{% partial file="/v1.6/connectors/external-ingestion-deployment.md" /%}
## Requirements
@ -26,7 +26,7 @@ To integrate Lightdash, ensure you are using OpenMetadata version 1.2.x or highe
### Python Requirements
{% partial file="/v1.7/connectors/python-requirements.md" /%}
{% partial file="/v1.6/connectors/python-requirements.md" /%}
To run the Lightdash ingestion, you will need to install:
@ -89,11 +89,11 @@ Ensure the specified port is open and accessible through network firewall settin
{% /codeInfo %}
{% partial file="/v1.7/connectors/yaml/dashboard/source-config-def.md" /%}
{% partial file="/v1.6/connectors/yaml/dashboard/source-config-def.md" /%}
{% partial file="/v1.7/connectors/yaml/ingestion-sink-def.md" /%}
{% partial file="/v1.6/connectors/yaml/ingestion-sink-def.md" /%}
{% partial file="/v1.7/connectors/yaml/workflow-config-def.md" /%}
{% partial file="/v1.6/connectors/yaml/workflow-config-def.md" /%}
{% /codeInfoContainer %}
@ -123,14 +123,14 @@ source:
proxyAuthentication: <ProxyAuthentication>
```
{% partial file="/v1.7/connectors/yaml/dashboard/source-config.md" /%}
{% partial file="/v1.6/connectors/yaml/dashboard/source-config.md" /%}
{% partial file="/v1.7/connectors/yaml/ingestion-sink.md" /%}
{% partial file="/v1.6/connectors/yaml/ingestion-sink.md" /%}
{% partial file="/v1.7/connectors/yaml/workflow-config.md" /%}
{% partial file="/v1.6/connectors/yaml/workflow-config.md" /%}
{% /codeBlock %}
{% /codePreview %}
{% partial file="/v1.7/connectors/yaml/ingestion-cli.md" /%}
{% partial file="/v1.6/connectors/yaml/ingestion-cli.md" /%}

View File

@ -18,7 +18,7 @@ Configure and schedule Mode metadata and profiler workflows from the OpenMetadat
- [Requirements](#requirements)
- [Metadata Ingestion](#metadata-ingestion)
{% partial file="/v1.7/connectors/external-ingestion-deployment.md" /%}
{% partial file="/v1.6/connectors/external-ingestion-deployment.md" /%}
## Requirements
@ -26,7 +26,7 @@ OpenMetadata relies on Mode's API, which is exclusive to members of the Mode Bus
### Python Requirements
{% partial file="/v1.7/connectors/python-requirements.md" /%}
{% partial file="/v1.6/connectors/python-requirements.md" /%}
To run the Mode ingestion, you will need to install:
@ -98,11 +98,15 @@ Name of the mode workspace from where the metadata is to be fetched.
{% /codeInfo %}
{% partial file="/v1.7/connectors/yaml/dashboard/source-config-def.md" /%}
{% codeInfo srNumber=5 %}
**filterQueryParam**: Filter query parameter for some of the Mode API calls.
{% /codeInfo %}
{% partial file="/v1.7/connectors/yaml/ingestion-sink-def.md" /%}
{% partial file="/v1.6/connectors/yaml/dashboard/source-config-def.md" /%}
{% partial file="/v1.7/connectors/yaml/workflow-config-def.md" /%}
{% partial file="/v1.6/connectors/yaml/ingestion-sink-def.md" /%}
{% partial file="/v1.6/connectors/yaml/workflow-config-def.md" /%}
{% /codeInfoContainer %}
@ -128,16 +132,19 @@ source:
```yaml {% srNumber=4 %}
workspace_name: workspace_name
```
```yaml {% srNumber=5 %}
# filterQueryParam: ""
```
{% partial file="/v1.7/connectors/yaml/dashboard/source-config.md" /%}
{% partial file="/v1.6/connectors/yaml/dashboard/source-config.md" /%}
{% partial file="/v1.7/connectors/yaml/ingestion-sink.md" /%}
{% partial file="/v1.6/connectors/yaml/ingestion-sink.md" /%}
{% partial file="/v1.7/connectors/yaml/workflow-config.md" /%}
{% partial file="/v1.6/connectors/yaml/workflow-config.md" /%}
{% /codeBlock %}
{% /codePreview %}
{% partial file="/v1.7/connectors/yaml/ingestion-cli.md" /%}
{% partial file="/v1.6/connectors/yaml/ingestion-cli.md" /%}

View File

@ -18,7 +18,7 @@ Configure and schedule PowerBI metadata and profiler workflows from the OpenMeta
- [Requirements](#requirements)
- [Metadata Ingestion](#metadata-ingestion)
{% partial file="/v1.7/connectors/external-ingestion-deployment.md" /%}
{% partial file="/v1.6/connectors/external-ingestion-deployment.md" /%}
## Requirements
@ -93,7 +93,7 @@ For reference here is a [thread](https://community.powerbi.com/t5/Service/Error-
### Python Requirements
{% partial file="/v1.7/connectors/python-requirements.md" /%}
{% partial file="/v1.6/connectors/python-requirements.md" /%}
To run the PowerBI ingestion, you will need to install:
@ -216,11 +216,19 @@ Refer to the section [here](/connectors/dashboard/powerbi#powerbi-admin-and-nona
{% /codeInfo %}
{% partial file="/v1.7/connectors/yaml/dashboard/source-config-def.md" /%}
{% codeInfo srNumber=9 %}
**pbitFilesSource**: Source to get the .pbit files to extract lineage information. Select one of `local`, `azureConfig`, `gcsConfig`, `s3Config`.
- `pbitFileConfigType`: Determines the storage backend type (azure, gcs, or s3).
- `securityConfig`: Authentication credentials for accessing the storage backend.
- `prefixConfig`: Details of the location in the storage backend.
- `pbitFilesExtractDir`: Specifies the local directory where extracted .pbit files will be stored for processing.
{% /codeInfo %}
{% partial file="/v1.7/connectors/yaml/ingestion-sink-def.md" /%}
{% partial file="/v1.6/connectors/yaml/dashboard/source-config-def.md" /%}
{% partial file="/v1.7/connectors/yaml/workflow-config-def.md" /%}
{% partial file="/v1.6/connectors/yaml/ingestion-sink-def.md" /%}
{% partial file="/v1.6/connectors/yaml/workflow-config-def.md" /%}
{% /codeInfoContainer %}
@ -259,15 +267,88 @@ source:
```yaml {% srNumber=8 %}
# useAdminApis: true (default)
```
```yaml {% srNumber=9 %}
{% partial file="/v1.7/connectors/yaml/dashboard/source-config.md" /%}
# Select one of local, azureConfig, gcsConfig, s3Config.
# For Azure
# pbitFilesSource:
# pbitFileConfigType: azure # Specify the storage type as Azure Blob Storage
# securityConfig:
# clientId: "" # Azure application Client ID
# clientSecret: "" # Azure application Client Secret
# tenantId: "" # Azure tenant ID
# accountName: "" # Azure storage account name
# vaultName: "" # Optional: Azure vault name for secrets management
# scopes: "" # Optional: OAuth scopes for Azure
# prefixConfig:
# bucketName: "" # Name of the Azure Blob Storage container
# objectPrefix: "" # Path prefix to locate files within the container
# pbitFilesExtractDir: /tmp/pbitFiles # Local directory for extracted files
{% partial file="/v1.7/connectors/yaml/ingestion-sink.md" /%}
# For gcsConfig
# GCP credentials configurations
# We support two ways of authenticating to GCP: via GCP Credentials Values or GCP Credentials Path.
{% partial file="/v1.7/connectors/yaml/workflow-config.md" /%}
# Option 1: Authenticate using GCP Credentials Values
# pbitFilesSource:
# pbitFileConfigType: gcs # Specify the storage type as Google Cloud Storage
# securityConfig:
# type: service_account # Authentication type
# projectId: "" # GCP project ID (can be single or multiple)
# privateKeyId: "" # Private Key ID from GCP service account
# privateKey: "" # Private Key from GCP service account
# clientEmail: "" # Service account email
# clientId: "" # Client ID
# authUri: "https://accounts.google.com/o/oauth2/auth" # OAuth URI
# authProviderX509CertUrl: "https://www.googleapis.com/oauth2/v1/certs"
# clientX509CertUrl: "" # Certificate URL
# prefixConfig:
# bucketName: "" # Name of the GCS bucket
# objectPrefix: "" # Path prefix to locate files within the bucket
# pbitFilesExtractDir: /tmp/pbitFiles # Local directory for extracted files
# Option 2: Authenticate using Raw Credential Values
# pbitFilesSource:
# pbitFileConfigType: gcs # Specify the storage type as Google Cloud Storage
# securityConfig:
# type: external_account # Authentication type
# externalType: "external_account" # External account authentication
# audience: "" # Audience for token validation
# subjectTokenType: "" # Type of subject token
# tokenURL: "" # URL to obtain the token
# credentialSource: {} # Raw JSON object with credential source details
# prefixConfig:
# bucketName: "" # Name of the GCS bucket
# objectPrefix: "" # Path prefix to locate files within the bucket
# pbitFilesExtractDir: /tmp/pbitFiles # Local directory for extracted files
# For s3Config
# pbitFilesSource:
# pbitFileConfigType: s3 # Specify the storage type as Amazon S3
# securityConfig:
# awsAccessKeyId: "" # AWS Access Key ID
# awsSecretAccessKey: "" # AWS Secret Access Key
# awsRegion: "" # AWS region for the bucket
# awsSessionToken: "" # Optional session token
# endPointURL: "" # Optional custom S3 endpoint URL
# profileName: "" # Optional AWS CLI profile name
# assumeRoleArn: "" # ARN of the role to assume (if required)
# assumeRoleSessionName: "" # Session name for assumed role
# assumeRoleSourceIdentity: "" # Source identity for assumed session
# prefixConfig:
# bucketName: "" # Name of the S3 bucket
# objectPrefix: "" # Path prefix to locate files within the bucket
# pbitFilesExtractDir: /tmp/pbitFiles # Local directory for extracted files
```
{% partial file="/v1.6/connectors/yaml/dashboard/source-config.md" /%}
{% partial file="/v1.6/connectors/yaml/ingestion-sink.md" /%}
{% partial file="/v1.6/connectors/yaml/workflow-config.md" /%}
{% /codeBlock %}
{% /codePreview %}
{% partial file="/v1.7/connectors/yaml/ingestion-cli.md" /%}
{% partial file="/v1.6/connectors/yaml/ingestion-cli.md" /%}

View File

@ -19,7 +19,7 @@ Configure and schedule Tableau metadata and profiler workflows from the OpenMeta
- [Metadata Ingestion](#metadata-ingestion)
- [Enable Security](#securing-tableau-connection-with-ssl-in-openmetadata)
{% partial file="/v1.7/connectors/external-ingestion-deployment.md" /%}
{% partial file="/v1.6/connectors/external-ingestion-deployment.md" /%}
## Requirements
@ -30,7 +30,7 @@ For more information on enabling the Tableau Metadata APIs follow the link [here
### Python Requirements
{% partial file="/v1.7/connectors/python-requirements.md" /%}
{% partial file="/v1.6/connectors/python-requirements.md" /%}
To run the Tableau ingestion, you will need to install:
@ -117,6 +117,19 @@ This is a sample config for Tableau:
{% /codeInfo %}
{% codeInfo srNumber=12 %}
**verifySSL**: Client SSL verification. Make sure to configure the SSLConfig if enabled. Supported values are `no-ssl`, `ignore`, and `validate`.
{% /codeInfo %}
{% codeInfo srNumber=13 %}
**sslMode**: Mode of SSL. Default is `disable`. Supported modes are `disable`, `allow`, `prefer`, `require`, `verify-ca`, and `verify-full`.
{% /codeInfo %}
{% codeInfo srNumber=14 %}
**sslConfig**: Client SSL configuration.
{% /codeInfo %}
#### Source Configuration - Source Config
{% codeInfo srNumber=8 %}
@ -131,6 +144,7 @@ The `sourceConfig` is defined [here](https://github.com/open-metadata/OpenMetada
- **includeDataModels**: Set the 'Include Data Models' toggle to control whether to include data models as part of metadata ingestion.
- **markDeletedDashboards**: Set the 'Mark Deleted Dashboards' toggle to flag dashboards as soft-deleted if they are not present anymore in the source system.
{% /codeInfo %}
#### Sink Configuration
@ -141,7 +155,8 @@ To send the metadata to OpenMetadata, it needs to be specified as `type: metadat
{% /codeInfo %}
{% partial file="/v1.7/connectors/yaml/workflow-config-def.md" /%}
{% partial file="/v1.6/connectors/yaml/workflow-config-def.md" /%}
{% /codeInfoContainer %}
@ -183,12 +198,34 @@ source:
```yaml {% srNumber=11 %}
paginationLimit: pagination_limit
```
```yaml {% srNumber=12 %}
# verifySSL: no-ssl
```
```yaml {% srNumber=13 %}
# sslMode: disable
```
```yaml {% srNumber=14 %}
# sslConfig:
# caCertificate: |
# -----BEGIN CERTIFICATE-----
# sample certificate
# -----END CERTIFICATE-----
# sslCertificate: |
# -----BEGIN CERTIFICATE-----
# sample certificate
# -----END CERTIFICATE-----
# sslKey: |
# -----BEGIN PRIVATE KEY-----
# sample private key
# -----END PRIVATE KEY-----
```
{% partial file="/v1.7/connectors/yaml/dashboard/source-config.md" /%}
{% partial file="/v1.7/connectors/yaml/ingestion-sink.md" /%}
{% partial file="/v1.6/connectors/yaml/dashboard/source-config.md" /%}
{% partial file="/v1.7/connectors/yaml/workflow-config.md" /%}
{% partial file="/v1.6/connectors/yaml/ingestion-sink.md" /%}
{% partial file="/v1.6/connectors/yaml/workflow-config.md" /%}
{% /codeBlock %}
@ -326,4 +363,4 @@ To establish secure connections between OpenMetadata and Tableau, in the `YAML`
sslKey: "/path/to/your/ssl_key"
```
{% partial file="/v1.7/connectors/yaml/ingestion-cli.md" /%}
{% partial file="/v1.6/connectors/yaml/ingestion-cli.md" /%}