Mirror of https://github.com/open-metadata/OpenMetadata.git, synced 2025-12-26 15:10:05 +00:00
chore(ui): add support for service documentation (part-1) (#10668)
* chore(ui): add support for service documentation md file
* sync local
* chore: add method for fetching markdown file
* chore(ui): add support for service documentation
* chore: move fields to connections
* chore: update logic to fetch requirements
* chore: right panel component for service
* fix: key prop is not present in the skeleton component
* chore: only fetch md files when required fields are present
* chore: use hook for fetching airflow status
* chore: refactor add service component
* chore: remove id prefix and id separator prop from form builder
* fix: fieldName issue on right panel
* fix: active Field name issue
* fix: unit test
* test: add unit test
* chore: handle edit service form
* chore: add fallback logic
* fix: cy test
* chore: update service doc md files/folder structure
* chore: push image example
* Athena docs
* Add glue docs
* Add hive related changes
* chore: take last field for fetching field doc
* add datalake
* Added connection information for oracle and redshift (english + french)
* fix: fallback logic
* Bigquery & Snowflake Requirements
* mysql and amundsen requirements (#10752)
* Revert removal of descriptions
* Add Doc For Mssql and Postgres
* Added powerbi conn md files
* Align requirements files
* Add Kafka and Redpanda
* refined powerbi docs
* Add Tableu requirements, move Athena and Glue fields, change footer some connectors
* Add missing connectors fields descriptions default
* re: datalake
* Add Tableau field descriptions
* fix: markdown styling
* chore: improve button styling
* chore: rename right panel to service right panel and move it to common
* fix: doc for select and list field, cy test
* fix: unit test
* fix: test connection service type issue
* Added powerbi docs link in req
* Add info on hive
* Remove unused markdowns
* Add req for datalake
* add hive requirements header
* Snowflake & Biguqery
* Update Mssql and Postgres
* mysql and amundsen requirements updated
* Update Mssql and Postgres
* added username
* chore: fix cy expression issue
* chore: reset active field state on step change
* fix: affix target container issue
* fix: unit test
* fix: cypress for postgres and glue
---------
Co-authored-by: Milan Bariya <52292922+MilanBariya@users.noreply.github.com>
Co-authored-by: Pere Miquel Brull <peremiquelbrull@gmail.com>
Co-authored-by: Ayush Shah <ayush@getcollate.io>
Co-authored-by: Teddy Crepineau <teddy.crepineau@gmail.com>
Co-authored-by: ulixius9 <mayursingal9@gmail.com>
Co-authored-by: NiharDoshi99 <51595473+NiharDoshi99@users.noreply.github.com>
Co-authored-by: Milan Bariya <milanbariya12@gmail.com>
Co-authored-by: Onkar Ravgan <onkar.10r@gmail.com>
Co-authored-by: Nahuel Verdugo Revigliono <nahuel@getcollate.io>
Co-authored-by: Nihar Doshi <nihardoshi16@gmail.com>
This commit is contained in: parent 28cc956c90, commit 0a92a897a1
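The commit message above mentions adding a method for fetching markdown files, taking the last field segment to locate the field doc, and adding fallback logic. A minimal sketch of that idea, assuming a hypothetical helper name (`fetchFieldDoc`) and a made-up `/connectors/<connector>/<fieldName>.md` path; this is not the actual OpenMetadata implementation:

```typescript
// Illustrative sketch only: the function name and file path are assumptions,
// not the real OpenMetadata code.
async function fetchFieldDoc(
  connector: string,
  activeFieldId: string // e.g. "root/awsConfig/awsAccessKeyId"
): Promise<string> {
  // Take the last segment of the field id to locate the markdown file.
  const fieldName = activeFieldId.split('/').pop() ?? '';

  try {
    // Hypothetical path: GET /connectors/<connector>/<fieldName>.md
    const response = await fetch(`/connectors/${connector}/${fieldName}.md`);
    if (!response.ok) {
      throw new Error(`No doc found for ${fieldName}`);
    }
    return await response.text();
  } catch {
    // Fallback: show a generic placeholder when no field-specific doc exists.
    return `<!-- ${fieldName} to be updated -->`;
  }
}
```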
@@ -59,7 +59,7 @@ custom Airflow plugins to handle the workflow deployment.
<Note>
Datalake connector supports extracting metadata from file types `JSON`, `CSV`, `TSV` & `Parquet`.
Datalake connector supports extracting metadata from file types `JSON`, `CSV`, `TSV`, `Parquet` & `Avro`.
</Note>
@@ -95,4 +95,4 @@
},
"additionalProperties": false,
"required": ["hostPort"]
}
}
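For context, this schema fragment marks `hostPort` as the only required property and, via `"additionalProperties": false`, rejects unknown keys. A minimal illustration (the host value is a placeholder, and the full connector schema is not shown here):

```typescript
// Valid: hostPort is present and no undeclared keys are supplied.
const connection = { hostPort: 'localhost:10000' };

// Invalid under "additionalProperties": false, because `extra` is not declared in the schema.
const rejected = { hostPort: 'localhost:10000', extra: true };
```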
@@ -151,7 +151,12 @@ export const testServiceCreationAndIngestion = (
cy.get(`[data-testid="${serviceType}"]`).should('exist').click();
cy.get('[data-testid="next-button"]').should('exist').click();
// Enter service name in step 2
// Should show requirements in step 2
cy.get('[data-testid="service-requirements"]').should('exist');
cy.get('[data-testid="next-button"]').should('exist').click();
// Enter service name in step 3
cy.get('[data-testid="service-name"]').should('exist').type(serviceName);
interceptURL(
'GET',
@@ -160,7 +165,8 @@ export const testServiceCreationAndIngestion = (
);
cy.get('[data-testid="next-button"]').should('exist').click();
verifyResponseStatusCode('@getIngestionPipelineStatus', 200);
// Connection Details in step 3
// Connection Details in step 4
cy.get('[data-testid="add-new-service-container"]')
.parent()
.parent()
@@ -212,6 +218,11 @@ export const testServiceCreationAndIngestion = (
'/api/v1/services/ingestionPipelines/status',
'getIngestionPipelineStatus'
);
interceptURL(
'POST',
'/api/v1/services/ingestionPipelines/deploy/*',
'deployPipeline'
);
cy.get('[data-testid="submit-btn"]').should('exist').click();
verifyResponseStatusCode('@getIngestionPipelineStatus', 200);
// check success
@@ -242,6 +253,8 @@ export const testServiceCreationAndIngestion = (
scheduleIngestion();
verifyResponseStatusCode('@deployPipeline', 200);
cy.contains(`${serviceName}_metadata`).should('be.visible');
// On the Right panel
cy.contains('Metadata Ingestion Added & Deployed Successfully').should(
@@ -876,10 +889,10 @@ export const updateOwner = () => {
};
export const mySqlConnectionInput = () => {
cy.get('#root_username').type(Cypress.env('mysqlUsername'));
cy.get('#root_password').type(Cypress.env('mysqlPassword'));
cy.get('#root_hostPort').type(Cypress.env('mysqlHostPort'));
cy.get('#root_databaseSchema').type(Cypress.env('mysqlDatabaseSchema'));
cy.get('#root\\/username').type(Cypress.env('mysqlUsername'));
cy.get('#root\\/password').type(Cypress.env('mysqlPassword'));
cy.get('#root\\/hostPort').type(Cypress.env('mysqlHostPort'));
cy.get('#root\\/databaseSchema').type(Cypress.env('mysqlDatabaseSchema'));
};
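The selector changes in this helper (and in the specs below) appear to follow the form builder now emitting field ids separated by `/` instead of `_`, per the "remove id prefix and id separator prop from form builder" item in the commit message. Since `/` is not valid inside a CSS id selector, it must be escaped, which is why the strings carry a doubled backslash:

```typescript
// The element id rendered by the form is literally "root/username".
// In a CSS selector the "/" must be escaped as "\/", and inside a JavaScript
// string the backslash itself must be escaped, giving '#root\\/username'.
cy.get('#root\\/username').type(Cypress.env('mysqlUsername'));

// An equivalent, escape-free alternative is an attribute selector:
cy.get('[id="root/username"]').type(Cypress.env('mysqlUsername'));
```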
export const login = (username, password) => {
@@ -36,23 +36,23 @@ describe('BigQuery Ingestion', () => {
goToAddNewServicePage(SERVICE_TYPE.Database);
const connectionInput = () => {
const clientEmail = Cypress.env('bigqueryClientEmail');
cy.get('.form-group > #root_type')
cy.get('.form-group > #root\\/type')
.scrollIntoView()
.type('service_account');
cy.get(':nth-child(3) > .form-group > #root_projectId')
cy.get(':nth-child(3) > .form-group > #root\\/projectId')
.scrollIntoView()
.type(Cypress.env('bigqueryProjectId'));
cy.get('#root_privateKeyId')
cy.get('#root\\/privateKeyId')
.scrollIntoView()
.type(Cypress.env('bigqueryPrivateKeyId'));
cy.get('#root_privateKey')
cy.get('#root\\/privateKey')
.scrollIntoView()
.type(Cypress.env('bigqueryPrivateKey'));
cy.get('#root_clientEmail').scrollIntoView().type(clientEmail);
cy.get('#root_clientId')
cy.get('#root\\/clientEmail').scrollIntoView().type(clientEmail);
cy.get('#root\\/clientId')
.scrollIntoView()
.type(Cypress.env('bigqueryClientId'));
cy.get('#root_clientX509CertUrl')
cy.get('#root\\/clientX509CertUrl')
.scrollIntoView()
.type(
`https://www.googleapis.com/robot/v1/metadata/x509/${encodeURIComponent(
@@ -35,19 +35,19 @@ describe('Glue Ingestion', () => {
it('add and ingest data', () => {
goToAddNewServicePage(SERVICE_TYPE.Database);
const connectionInput = () => {
cy.get('#root_awsConfig_awsAccessKeyId')
cy.get('#root\\/awsConfig\\/awsAccessKeyId')
.scrollIntoView()
.type(Cypress.env('glueAwsAccessKeyId'));
cy.get('#root_awsConfig_awsSecretAccessKey')
cy.get('#root\\/awsConfig\\/awsSecretAccessKey')
.scrollIntoView()
.type(Cypress.env('glueAwsSecretAccessKey'));
cy.get('#root_awsConfig_awsRegion')
cy.get('#root\\/awsConfig\\/awsRegion')
.scrollIntoView()
.type(Cypress.env('glueAwsRegion'));
cy.get('#root_awsConfig_endPointURL')
cy.get('#root\\/awsConfig\\/endPointURL')
.scrollIntoView()
.type(Cypress.env('glueEndPointURL'));
cy.get('#root_storageServiceName')
cy.get('#root\\/storageServiceName')
.scrollIntoView()
.type(Cypress.env('glueStorageServiceName'));
};
@@ -41,10 +41,10 @@ describe('Kafka Ingestion', () => {
.click();
const connectionInput = () => {
cy.get('#root_bootstrapServers').type(
cy.get('#root\\/bootstrapServers').type(
Cypress.env('kafkaBootstrapServers')
);
cy.get('#root_schemaRegistryURL').type(
cy.get('#root\\/schemaRegistryURL').type(
Cypress.env('kafkaSchemaRegistryUrl')
);
};
@@ -41,11 +41,11 @@ describe('Metabase Ingestion', () => {
.click();
const connectionInput = () => {
cy.get('#root_username').type(Cypress.env('metabaseUsername'));
cy.get('#root_password')
cy.get('#root\\/username').type(Cypress.env('metabaseUsername'));
cy.get('#root\\/password')
.scrollIntoView()
.type(Cypress.env('metabasePassword'));
cy.get('#root_hostPort')
cy.get('#root\\/hostPort')
.scrollIntoView()
.type(Cypress.env('metabaseHostPort'));
};
@@ -48,16 +48,16 @@ describe('Postgres Ingestion', () => {
it('add and ingest data', () => {
goToAddNewServicePage(SERVICE_TYPE.Database);
const connectionInput = () => {
cy.get('[id="root_username"]')
cy.get('#root\\/username')
.scrollIntoView()
.type(Cypress.env('postgresUsername'));
cy.get('[name="root_password"]')
cy.get('#root\\/password')
.scrollIntoView()
.type(Cypress.env('postgresPassword'));
cy.get('[id="root_hostPort"]')
cy.get('#root\\/hostPort')
.scrollIntoView()
.type(Cypress.env('postgresHostPort'));
cy.get('#root_database')
cy.get('#root\\/database')
.scrollIntoView()
.type(Cypress.env('postgresDatabase'));
};
@@ -39,14 +39,14 @@ describe('RedShift Ingestion', () => {
it('add and ingest data', () => {
goToAddNewServicePage(SERVICE_TYPE.Database);
const connectionInput = () => {
cy.get('#root_username').type(Cypress.env('redshiftUsername'));
cy.get('#root_password')
cy.get('#root\\/username').type(Cypress.env('redshiftUsername'));
cy.get('#root\\/password')
.scrollIntoView()
.type(Cypress.env('redshiftPassword'));
cy.get('#root_hostPort')
cy.get('#root\\/hostPort')
.scrollIntoView()
.type(Cypress.env('redshiftHost'));
cy.get('#root_database')
cy.get('#root\\/database')
.scrollIntoView()
.type(Cypress.env('redshiftDatabase'));
};
@@ -35,11 +35,11 @@ describe('Snowflake Ingestion', () => {
it('add and ingest data', { defaultCommandTimeout: 8000 }, () => {
goToAddNewServicePage(SERVICE_TYPE.Database);
const connectionInput = () => {
cy.get('#root_username').type(Cypress.env('snowflakeUsername'));
cy.get('#root_password').type(Cypress.env('snowflakePassword'));
cy.get('#root_account').type(Cypress.env('snowflakeAccount'));
cy.get('#root_database').type(Cypress.env('snowflakeDatabase'));
cy.get('#root_warehouse').type(Cypress.env('snowflakeWarehouse'));
cy.get('#root\\/username').type(Cypress.env('snowflakeUsername'));
cy.get('#root\\/password').type(Cypress.env('snowflakePassword'));
cy.get('#root\\/account').type(Cypress.env('snowflakeAccount'));
cy.get('#root\\/database').type(Cypress.env('snowflakeDatabase'));
cy.get('#root\\/warehouse').type(Cypress.env('snowflakeWarehouse'));
};
const addIngestionInput = () => {
@@ -41,11 +41,11 @@ describe('Superset Ingestion', () => {
.click();
const connectionInput = () => {
cy.get('#root_username').type(Cypress.env('supersetUsername'));
cy.get('#root_password')
cy.get('#root\\/username').type(Cypress.env('supersetUsername'));
cy.get('#root\\/password')
.scrollIntoView()
.type(Cypress.env('supersetPassword'));
cy.get('#root_hostPort')
cy.get('#root\\/hostPort')
.scrollIntoView()
.focus()
.clear()
Binary file not shown.
After Width: | Height: | Size: 13 KiB
Binary file not shown.
After Width: | Height: | Size: 507 KiB
@@ -0,0 +1,2 @@
Additional connection options to build the URL that can be sent to the service during the connection.
<!-- connectionOptions to be updated -->
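As a purely illustrative sketch (the keys below are invented, not taken from any specific connector), such options are a map of key-value pairs used when building the connection URL:

```typescript
// Hypothetical connectionOptions map; keys and values are placeholders only.
const connectionOptions: Record<string, string> = {
  charset: 'utf8',
  connect_timeout: '10',
};
```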
@@ -0,0 +1,2 @@
Source Python class name to be instantiated by the ingestion workflow.
<!-- sourcePythonClass to be updated -->
@@ -0,0 +1,3 @@
# Requirements
<!-- to be updated -->
You can find further information on the Custom Dashboard connector in the [docs](https://docs.open-metadata.org/connectors/dashboard/customdashboard).
@@ -0,0 +1,2 @@
Access token to connect to DOMO.
<!-- accessToken to be updated -->
@@ -0,0 +1,2 @@
API host to connect to the DOMO instance.
<!-- apiHost to be updated -->
@@ -0,0 +1,2 @@
Client ID for DOMO.
<!-- clientId to be updated -->
@@ -0,0 +1,2 @@
Connect to Sandbox Domain.
<!-- sandboxDomain to be updated -->
@@ -0,0 +1,2 @@
Secret token to connect to DOMO.
<!-- secretToken to be updated -->
@@ -0,0 +1,3 @@
# Requirements
<!-- to be updated -->
You can find further information on the DOMO Dashboard connector in the [docs](https://docs.open-metadata.org/connectors/dashboard/domodashboard).
@@ -0,0 +1,2 @@
User's Client ID. This user should have privileges to read all the metadata in Looker.
<!-- clientId to be updated -->
@@ -0,0 +1,2 @@
User's Client Secret.
<!-- clientSecret to be updated -->
@@ -0,0 +1,2 @@
URL to the Looker instance.
<!-- hostPort to be updated -->
@@ -0,0 +1,3 @@
# Requirements
<!-- to be updated -->
You can find further information on the Looker connector in the [docs](https://docs.open-metadata.org/connectors/dashboard/looker).
@@ -0,0 +1,2 @@
Host and Port of the Metabase instance.
<!-- hostPort to be updated -->
@@ -0,0 +1,2 @@
Password to connect to Metabase.
<!-- password to be updated -->
@@ -0,0 +1,2 @@
Username to connect to Metabase. This user should have privileges to read all the metadata in Metabase.
<!-- username to be updated -->
@@ -0,0 +1,3 @@
# Requirements
<!-- to be updated -->
You can find further information on the Metabase connector in the [docs](https://docs.open-metadata.org/connectors/dashboard/metabase).
@@ -0,0 +1,2 @@
Access Token for Mode Dashboard.
<!-- accessToken to be updated -->
@@ -0,0 +1,2 @@
Access Token Password for Mode Dashboard.
<!-- accessTokenPassword to be updated -->
@@ -0,0 +1,2 @@
URL for the Mode instance.
<!-- hostPort to be updated -->
@@ -0,0 +1,2 @@
Mode Workspace Name.
<!-- workspaceName to be updated -->
@@ -0,0 +1,3 @@
# Requirements
<!-- to be updated -->
You can find further information on the Mode connector in the [docs](https://docs.open-metadata.org/connectors/dashboard/mode).
@@ -0,0 +1,3 @@
To identify a token authority, you can provide a URL that points to the authority in question.

If you don't specify a URL for the token authority, we'll use the default value of https://login.microsoftonline.com/.
@@ -0,0 +1,9 @@
To get the client ID (also known as the application ID), follow these steps:

1. Log into [Microsoft Azure](https://ms.portal.azure.com/#allservices).

2. Search for App registrations and select the App registrations link.

3. Select the Azure AD app you're using for embedding your Power BI content.

4. From the Overview section, copy the Application (client) ID.
@@ -0,0 +1,15 @@
To get the client secret, follow these steps:

1. Log into [Microsoft Azure](https://ms.portal.azure.com/#allservices).

2. Search for App registrations and select the App registrations link.

3. Select the Azure AD app you're using for embedding your Power BI content.

4. Under Manage, select Certificates & secrets.

5. Under Client secrets, select New client secret.

6. In the Add a client secret pop-up window, provide a description for your application secret, select when the application secret expires, and select Add.

7. From the Client secrets section, copy the string in the Value column of the newly created application secret.
@@ -0,0 +1,3 @@
To connect with your Power BI instance, you'll need to provide the host URL. If you're using an on-premise installation of Power BI, this will be the domain name associated with your instance.

If you don't specify a host URL, we'll use the default value of https://app.powerbi.com to connect with your Power BI instance.
@@ -0,0 +1,3 @@
The pagination limit for Power BI APIs can be set using this parameter. The limit determines the number of records to be displayed per page.

By default, the pagination limit is set to 100 records, which is also the maximum value allowed.
@@ -0,0 +1,4 @@
To let OpenMetadata use the Power BI APIs through your Azure AD app, you'll need to add the following scopes:
- https://analysis.windows.net/powerbi/api/.default

Instructions for adding these scopes to your app can be found by following this link: https://analysis.windows.net/powerbi/api/.default.
@@ -0,0 +1,9 @@
To get the tenant ID, follow these steps:

1. Log into [Microsoft Azure](https://ms.portal.azure.com/#allservices).

2. Search for App registrations and select the App registrations link.

3. Select the Azure AD app you're using for Power BI.

4. From the Overview section, copy the Directory (tenant) ID.
@@ -0,0 +1,28 @@
# Requirements
A PowerBI Pro license is required to access the APIs.

## PowerBI Account Setup and Permissions

### Step 1: Create an Azure AD app and configure the PowerBI Admin console

Please follow the steps mentioned [here](https://docs.microsoft.com/en-us/power-bi/developer/embedded/embed-service-principal) for setting up the Azure AD application service principal and configuring the PowerBI admin settings.

Log in to Power BI as Admin and allow the below permissions from the `Tenant` settings:
- Allow service principals to use Power BI APIs
- Allow service principals to use read-only Power BI admin APIs
- Enhance admin APIs responses with detailed metadata

### Step 2: Provide necessary API permissions to the app
Go to the `Azure AD app registrations` page, select your app, add the dashboard permissions to the app for the PowerBI service, and grant admin consent for the same:
- Dashboard.Read.All
- Dashboard.ReadWrite.All

**Note**:
Make sure that in the API permissions section **Tenant** related permissions are not being given to the app.
Please refer [here](https://stackoverflow.com/questions/71001110/power-bi-rest-api-requests-not-authorizing-as-expected) for a detailed explanation.

### Step 3: Create New PowerBI workspace
The service principal only works with [new workspaces](https://docs.microsoft.com/en-us/power-bi/collaborate-share/service-create-the-new-workspaces).
[For reference](https://community.powerbi.com/t5/Service/Error-while-executing-Get-dataset-call-quot-API-is-not/m-p/912360#M85711)

You can find further information on the PowerBI connector in the [docs](https://docs.open-metadata.org/connectors/dashboard/powerbi).
@@ -0,0 +1,2 @@
The Amazon Resource Name (ARN) of the role to assume. Required field in case of Assume Role.
<!-- assumeRoleArn to be updated -->
@@ -0,0 +1,2 @@
An identifier for the assumed role session. Use the role session name to uniquely identify a session when the same role is assumed by different principals or for different reasons. Required field in case of Assume Role.
<!-- assumeRoleSessionName to be updated -->
@@ -0,0 +1,2 @@
The source identity specified by the principal that is calling the `AssumeRole` operation. Optional field in case of Assume Role.
<!-- assumeRoleSourceIdentity to be updated -->
@@ -0,0 +1,2 @@
AWS Access key ID.
<!-- awsAccessKeyId to be updated -->
@@ -0,0 +1,2 @@
AWS Account ID.
<!-- awsAccountId to be updated -->
@@ -0,0 +1,2 @@
AWS credentials configs.
<!-- awsConfig to be updated -->
@@ -0,0 +1,2 @@
AWS Region.
<!-- awsRegion to be updated -->
@@ -0,0 +1,2 @@
AWS Secret Access Key.
<!-- awsSecretAccessKey to be updated -->
@@ -0,0 +1,2 @@
AWS Session Token.
<!-- awsSessionToken to be updated -->
@@ -0,0 +1,2 @@
Endpoint URL for AWS.
<!-- endPointURL to be updated -->
@@ -0,0 +1,2 @@
The authentication method that the user uses to sign in.
<!-- identityType to be updated -->
@@ -0,0 +1,2 @@
The Amazon QuickSight namespace that contains the dashboard IDs in this request (to be provided when identityType is `ANONYMOUS`).
<!-- namespace to be updated -->
@@ -0,0 +1,2 @@
The name of a profile to use with the boto session.
<!-- profileName to be updated -->
@@ -0,0 +1,3 @@
# Requirements
<!-- to be updated -->
You can find further information on the QuickSight connector in the [docs](https://docs.open-metadata.org/connectors/dashboard/quicksight).
@@ -0,0 +1,2 @@
API key of the Redash instance to access.
<!-- apiKey to be updated -->
@@ -0,0 +1,2 @@
URL for the Redash instance.
<!-- hostPort to be updated -->
@@ -0,0 +1,2 @@
Version of the Redash instance.
<!-- redashVersion to be updated -->
@@ -0,0 +1,2 @@
Username for Redash.
<!-- username to be updated -->
@@ -0,0 +1,3 @@
# Requirements
<!-- to be updated -->
You can find further information on the Redash connector in the [docs](https://docs.open-metadata.org/connectors/dashboard/redash).
@@ -0,0 +1,2 @@
Custom OpenMetadata Classification name for Postgres policy tags.
<!-- classificationName to be updated -->
@@ -0,0 +1,2 @@
Choose between API or database connection to fetch metadata from Superset.
<!-- connection to be updated -->
@@ -0,0 +1,2 @@
connectionArguments
<!-- connectionArguments to be updated -->
@@ -0,0 +1,2 @@
Additional connection options that can be sent to the service during the connection.
<!-- connectionOptions to be updated -->
@@ -0,0 +1,2 @@
Database of the data source. This is an optional parameter; use it if you would like to restrict the metadata reading to a single database. When left blank, OpenMetadata Ingestion attempts to scan all the databases.
<!-- database to be updated -->
@@ -0,0 +1,2 @@
Optional name to give to the database in OpenMetadata. If left blank, we will use `default` as the database name.
<!-- databaseName to be updated -->
@@ -0,0 +1,2 @@
databaseSchema of the data source. This is an optional parameter; use it if you would like to restrict the metadata reading to a single databaseSchema. When left blank, OpenMetadata Ingestion attempts to scan all the databaseSchemas.
<!-- databaseSchema to be updated -->
@@ -0,0 +1,2 @@
Host and port of the MySQL service.
<!-- hostPort to be updated -->
@@ -0,0 +1,2 @@
Ingest data from all databases in Postgres. You can use databaseFilterPattern on top of this.
<!-- ingestAllDatabases to be updated -->
@@ -0,0 +1,2 @@
Password to connect to MySQL.
<!-- password to be updated -->
@@ -0,0 +1,2 @@
Authentication provider for the Superset service. For basic user/password authentication, the default value `db` can be used. This parameter is used internally to connect to Superset's REST API.
<!-- provider to be updated -->
@@ -0,0 +1,2 @@
SQLAlchemy driver scheme options.
<!-- scheme to be updated -->
@@ -0,0 +1,2 @@
Provide the path to the SSL CA file.
<!-- sslCA to be updated -->
@@ -0,0 +1,2 @@
Provide the path to the SSL client certificate file (ssl_cert).
<!-- sslCert to be updated -->
@@ -0,0 +1,2 @@
Provide the path to the SSL client key file (ssl_key).
<!-- sslKey to be updated -->
@@ -0,0 +1,2 @@
SSL mode to connect to the Postgres database, e.g., prefer, verify-ca, etc.
<!-- sslMode to be updated -->
@@ -0,0 +1,2 @@
The source service supports the database concept in its hierarchy.
<!-- supportsDatabase to be updated -->
@@ -0,0 +1,2 @@
Username to connect to MySQL. This user should have privileges to read all the metadata in MySQL.
<!-- username to be updated -->
@@ -0,0 +1,3 @@
# Requirements
<!-- to be updated -->
You can find further information on the Superset connector in the [docs](https://docs.open-metadata.org/connectors/dashboard/superset).
@@ -0,0 +1,5 @@
When we make a request, we include the API version number as part of the request, as in the following example:

`https://{hostPort}/api/{api_version}/auth/signin`

A list of Tableau Server versions and the corresponding REST API and REST API schema versions can be found [here](https://help.tableau.com/current/api/rest_api/en-us/REST/rest_api_concepts_versions.htm).
@@ -0,0 +1,3 @@
CA certificate path in the instance where the ingestion runs. E.g., `/path/to/public.cert`.

Will be used if Verify SSL is set to `validate`.
@@ -0,0 +1 @@
The config object can have multiple environments. The default environment is defined as `tableau_prod`, and you can change this if needed by specifying an `env` parameter.
@@ -0,0 +1,3 @@
Name or IP address of your installation of Tableau Server.

For example: `https://my-prod-env.online.tableau.com/`.
@@ -0,0 +1 @@
The password of the user.
@@ -0,0 +1,3 @@
The personal access token name.

For more information on how to get a Personal Access Token, please visit this [link](https://help.tableau.com/current/server/en-us/security_personal_access_tokens.htm).
@@ -0,0 +1,3 @@
The personal access token value.

For more information on how to get a Personal Access Token, please visit this [link](https://help.tableau.com/current/server/en-us/security_personal_access_tokens.htm).
@@ -0,0 +1,7 @@
This corresponds to the `contentUrl` attribute in the Tableau REST API.

The `site_name` is the portion of the URL that follows the `/site/` in the URL.

For example, _MarketingTeam_ is the `site_name` in the following URL `MyServer/#/site/MarketingTeam/projects`.

If it is empty, the default Tableau site will be used.
@@ -0,0 +1 @@
If it is empty, the default Tableau site will be used.
@@ -0,0 +1 @@
Client SSL configuration in case we are connecting to a host with SSL enabled.
@@ -0,0 +1 @@
The name of the user whose credentials will be used to sign in.
@@ -0,0 +1,6 @@
Client SSL verification. Make sure to configure the SSLConfig if enabled.

Possible values:
- `validate`: Validate the certificate using the public certificate (recommended).
- `ignore`: Ignore the certification validation (not recommended for production).
- `no-ssl`: SSL validation is not needed.
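Read together with the CA certificate description earlier in this change, the SSL settings might combine roughly as below; `sslConfig` and `caCertificate` are assumed field names, and the path is a placeholder:

```typescript
// Illustrative only: SSL verification and the CA certificate path working together.
const sslSettings = {
  verifySSL: 'validate', // or 'ignore' / 'no-ssl'
  sslConfig: {
    // Used only when verifySSL is set to 'validate'.
    caCertificate: '/path/to/public.cert',
  },
};
```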
@@ -0,0 +1,7 @@
# Requirements

To ingest Tableau metadata, the username used in the configuration **must** have at least the following role: `Site Role: Viewer`.

To create lineage between Tableau dashboards and any database service via the queries provided from the Tableau Metadata API, please enable the Tableau Metadata API for your Tableau server. For more information on enabling the Tableau Metadata APIs, follow the link [here](https://help.tableau.com/current/api/metadata_api/en-us/docs/meta_api_start.html).

You can find further information on the Tableau connector in the [docs](https://docs.open-metadata.org/connectors/dashboard/tableau).
@@ -0,0 +1,9 @@
Typically, you use `AssumeRole` within your account or for cross-account access. In this field you'll set the
`ARN` (Amazon Resource Name) of the policy of the other account.

A user who wants to access a role in a different account must also have permissions that are delegated from the account
administrator. The administrator must attach a policy that allows the user to call `AssumeRole` for the `ARN` of the role in the other account.

This is a required field if you'd like to `AssumeRole`.

Find more information on [AssumeRole](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html).
@@ -0,0 +1,6 @@
An identifier for the assumed role session. Use the role session name to uniquely identify a session when the same role
is assumed by different principals or for different reasons.

By default, we'll use the name `OpenMetadataSession`.

Find more information about the [Role Session Name](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html#:~:text=An%20identifier%20for%20the%20assumed%20role%20session.).
@@ -0,0 +1,4 @@
The source identity specified by the principal that is calling the `AssumeRole` operation. You can use source identity
information in AWS CloudTrail logs to determine who took actions with a role.

Find more information about [Source Identity](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html#:~:text=Required%3A%20No-,SourceIdentity,-The%20source%20identity).
@@ -0,0 +1,11 @@
When you interact with AWS, you specify your AWS security credentials to verify who you are and whether you have
permission to access the resources that you are requesting. AWS uses the security credentials to authenticate and
authorize your requests ([docs](https://docs.aws.amazon.com/IAM/latest/UserGuide/security-creds.html)).

Access keys consist of two parts:
1. An access key ID (for example, `AKIAIOSFODNN7EXAMPLE`),
2. And a secret access key (for example, `wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY`).

You must use both the access key ID and secret access key together to authenticate your requests.

You can find further information on how to manage your access keys [here](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html).
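Putting the AWS credential fields documented in this change together, an `awsConfig` block might look roughly like the following sketch; all values are placeholders (the key examples are the sample values quoted above), and the assume-role fields are shown only for illustration:

```typescript
// Sketch of an awsConfig object using the field names documented above.
// Do not treat this as a working credential set.
const awsConfig = {
  awsAccessKeyId: 'AKIAIOSFODNN7EXAMPLE',
  awsSecretAccessKey: 'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY',
  awsRegion: 'us-east-2',
  // Optional: only needed when assuming a role in another account.
  assumeRoleArn: 'arn:aws:iam::123456789012:role/example-role',
  assumeRoleSessionName: 'OpenMetadataSession',
};
```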
@@ -0,0 +1,2 @@
AWS credentials configs.
<!-- awsConfig to be updated -->
@@ -0,0 +1,7 @@
Each AWS Region is a separate geographic area in which AWS clusters data centers ([docs](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.RegionsAndAvailabilityZones.html)).

As AWS can have instances in multiple regions, we need to know the region that the service you want to reach belongs to.

Note that the AWS Region is the only required parameter when configuring a connection. When connecting to the
services programmatically, there are different ways in which we can extract and use the rest of the AWS configurations.
You can find further information about configuring your credentials [here](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html#configuring-credentials).
Some files were not shown because too many files have changed in this diff.