MINOR - Ingestion docs cleanup (#15283)

Pere Miquel Brull 2024-02-21 07:44:40 +01:00 committed by GitHub
parent 95cdf6b4c4
commit 35efd7d333
81 changed files with 19 additions and 432 deletions


@ -26,10 +26,6 @@ Configure and schedule DomoDashboard metadata and profiler workflows from the Op
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{%/inlineCallout%}
**Note:** For metadata ingestion, make sure to add at least the `dashboard` scope to the clientId provided.
For any questions related to scopes, click [here](https://developer.domo.com/portal/1845fc11bbe5d-api-authentication).


@ -26,10 +26,6 @@ Configure and schedule Looker metadata and profiler workflows from the OpenMetad
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{%/inlineCallout%}
There are two types of metadata we ingest from Looker:
- Dashboards & Charts
- LookML Models


@ -26,9 +26,7 @@ Configure and schedule Metabase metadata and profiler workflows from the OpenMet
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{%/inlineCallout%}
**Note:** We have tested Metabase with versions `0.42.4` and `0.43.4`.
## Metadata Ingestion


@ -26,10 +26,6 @@ Configure and schedule Metabase metadata and profiler workflows from the OpenMet
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{%/inlineCallout%}
**Note:** We have tested Metabase with versions `0.42.4` and `0.43.4`.
### Python Requirements


@ -26,10 +26,6 @@ Configure and schedule Mode metadata and profiler workflows from the OpenMetadat
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{%/inlineCallout%}
OpenMetadata relies on Mode's API, which is exclusive to members of the Mode Business Workspace. This means that only resources that belong to a Mode Business Workspace can be accessed via the API.
### Python Requirements


@ -26,10 +26,6 @@ Configure and schedule PowerBI metadata and profiler workflows from the OpenMeta
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{%/inlineCallout%}
### Python Requirements
To run the PowerBI ingestion, you will need to install:
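A likely install command, assuming the standard `openmetadata-ingestion[<plugin>]` extra naming:
```bash
pip3 install "openmetadata-ingestion[powerbi]"
```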


@ -26,10 +26,6 @@ Configure and schedule QuickSight metadata and profiler workflows from the OpenM
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{%/inlineCallout%}
AWS QuickSight Permissions
To execute the metadata extraction and usage workflows successfully, the IAM user should have enough access to fetch the required data. The following table describes the minimum required permissions.


@ -26,11 +26,6 @@ Configure and schedule Redash metadata and profiler workflows from the OpenMetad
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{%/inlineCallout%}
### Python Requirements
To run the Redash ingestion, you will need to install:
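Assuming the usual plugin-extra naming, a sketch of the install:
```bash
pip3 install "openmetadata-ingestion[redash]"
```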


@ -26,10 +26,6 @@ Configure and schedule Superset metadata and profiler workflows from the OpenMet
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{%/inlineCallout%}
The ingestion also works with Superset 2.0.0 🎉
**Note:**


@ -28,10 +28,6 @@ Configure and schedule Tableau metadata and profiler workflows from the OpenMeta
To ingest Tableau metadata, the Tableau user requires at least the `Site Role: Viewer` role.
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{%/inlineCallout%}
To create lineage between a Tableau dashboard and any database service via the queries provided by the Tableau Metadata API, enable the Tableau Metadata API for your Tableau Server.
For more information on enabling the Tableau Metadata API, follow the link [here](https://help.tableau.com/current/api/metadata_api/en-us/docs/meta_api_start.html).
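On a self-hosted Tableau Server, the Metadata API can typically be enabled with Tableau's `tsm` CLI; the command below is shown as an illustration, so check the linked docs for your version:
```bash
tsm maintenance metadata-services enable
```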


@ -43,12 +43,6 @@ Configure and schedule Athena metadata and profiler workflows from the OpenMetad
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{%/inlineCallout%}
The Athena connector ingests metadata through JDBC connections.
{% note %}


@ -42,10 +42,6 @@ Configure and schedule AzureSQL metadata and profiler workflows from the OpenMet
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{%/inlineCallout%}
Make sure you have whitelisted the ingestion container IP in the Azure SQL firewall rules. Check out [this](https://learn.microsoft.com/en-us/azure/azure-sql/database/firewall-configure?view=azuresql#use-the-azure-portal-to-manage-server-level-ip-firewall-rules) document on how to whitelist your IP using the Azure portal.
The AzureSQL database user must be granted the `SELECT` privilege to fetch the metadata of tables and views, as sketched below.
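A minimal sketch of such a grant, with `openmetadata_user` as a placeholder name:
```sql
-- Database-scoped read access so the connector can list tables and views
GRANT SELECT TO openmetadata_user;
```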


@ -45,10 +45,6 @@ Configure and schedule BigQuery metadata and profiler workflows from the OpenMet
{% partial file="/v1.3/connectors/external-ingestion-deployment.md" /%}
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{%/inlineCallout%}
## Requirements
### Data Catalog API Permissions


@ -44,12 +44,6 @@ Configure and schedule BigQuery metadata and profiler workflows from the OpenMet
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{%/inlineCallout%}
### Python Requirements
To run the BigQuery ingestion, you will need to install:
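A sketch of the install, assuming the `bigquery` plugin extra:
```bash
pip3 install "openmetadata-ingestion[bigquery]"
```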


@ -39,12 +39,6 @@ Configure and schedule BigTable metadata and profiler workflows from the OpenMet
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{%/inlineCallout%}
### Python Requirements
To run the BigTable ingestion, you will need to install:


@ -69,12 +69,6 @@ Executing the profiler workflow or data quality tests, will require the user to
### Usage & Lineage
For the usage and lineage workflow, the user will need `SELECT` privilege. You can find more information on the usage workflow [here](https://docs.open-metadata.org/connectors/ingestion/workflows/usage) and the lineage workflow [here](https://docs.open-metadata.org/connectors/ingestion/workflows/lineage).
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{%/inlineCallout%}
### Python Requirements
To run the Clickhouse ingestion, you will need to install:
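A sketch of the install, assuming the `clickhouse` plugin extra:
```bash
pip3 install "openmetadata-ingestion[clickhouse]"
```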


@ -38,12 +38,6 @@ Configure and schedule Couchbase metadata workflows from the OpenMetadata UI:
{% partial file="/v1.3/connectors/ingestion-modes-tiles.md" variables={yamlPath: "/connectors/database/couchbase/yaml"} /%}
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{%/inlineCallout%}
## Metadata Ingestion
{% partial


@ -41,10 +41,6 @@ Configure and schedule Couchbase metadata workflows from the OpenMetadata UI:
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{%/inlineCallout%}
### Python Requirements
To run the Couchbase ingestion, you will need to install:


@ -48,10 +48,6 @@ Configure and schedule Databricks metadata and profiler workflows from the OpenM
{% partial file="/v1.3/connectors/external-ingestion-deployment.md" /%}
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{%/inlineCallout%}
## Unity Catalog
If you are using Unity Catalog in Databricks, then check out the [Unity Catalog](/connectors/database/unity-catalog) connector.


@ -43,13 +43,6 @@ Configure and schedule Databricks metadata and profiler workflows from the OpenM
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{%/inlineCallout%}
### Python Requirements
To run the Databricks ingestion, you will need to install:


@ -40,13 +40,6 @@ Configure and schedule Datalake metadata and profiler workflows from the OpenMet
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{%/inlineCallout%}
**Note:** The Datalake connector supports extracting metadata from the `JSON`, `CSV`, `TSV` & `Parquet` file types.


@ -42,12 +42,6 @@ Configure and schedule DB2 metadata and profiler workflows from the OpenMetadata
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{%/inlineCallout%}
To create a new Db2 user, please follow the guidelines mentioned [here](https://www.ibm.com/docs/ko/samfess/8.2.0?topic=schema-creating-users-manually).
The Db2 user must have the following permissions to ingest the metadata:
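A minimal sketch of the kind of grants involved, with `openmetadata_user` as a placeholder and the exact list treated as an assumption:
```sql
-- Connect to the database and read catalog metadata
GRANT CONNECT ON DATABASE TO USER openmetadata_user;
GRANT SELECT ON SYSCAT.TABLES TO USER openmetadata_user;
```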


@ -41,12 +41,6 @@ Configure and schedule DomoDatabase metadata and profiler workflows from the Ope
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{%/inlineCallout%}
**Note:** For metadata ingestion, make sure to add at least the `data` scope to the clientId provided.


@ -40,12 +40,6 @@ Configure and schedule Druid metadata and profiler workflows from the OpenMetada
{% partial file="/v1.3/connectors/ingestion-modes-tiles.md" variables={yamlPath: "/connectors/database/athena/yaml"} /%}
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{%/inlineCallout%}
## Metadata Ingestion
{% partial


@ -42,12 +42,6 @@ Configure and schedule Druid metadata and profiler workflows from the OpenMetada
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{%/inlineCallout%}
### Python Requirements
To run the Druid ingestion, you will need to install:


@ -39,12 +39,6 @@ Configure and schedule DynamoDB metadata workflows from the OpenMetadata UI:
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{%/inlineCallout%}
The DynamoDB connector ingests metadata using the DynamoDB boto3 client.
OpenMetadata retrieves information about all tables in the AWS account, so the user must have permission to perform the `dynamodb:ListTables` operation.
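A minimal sketch of an IAM policy granting that operation (illustrative only):
```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["dynamodb:ListTables"],
            "Resource": "*"
        }
    ]
}
```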


@ -39,12 +39,6 @@ Configure and schedule Glue metadata and profiler workflows from the OpenMetadat
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{%/inlineCallout%}
The user must have the `glue:GetDatabases` and `glue:GetTables` permissions to ingest the basic metadata.
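A minimal sketch of an IAM policy with those permissions (illustrative only):
```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["glue:GetDatabases", "glue:GetTables"],
            "Resource": "*"
        }
    ]
}
```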
### Python Requirements


@ -45,11 +45,6 @@ Configure and schedule Greenplum metadata and profiler workflows from the OpenMe
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{%/inlineCallout%}
### Python Requirements
To run the Greenplum ingestion, you will need to install:


@ -42,10 +42,6 @@ Configure and schedule Hive metadata and profiler workflows from the OpenMetadat
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{%/inlineCallout%}
### Metadata
To extract metadata, the user used in the connection needs to be able to perform `SELECT`, `SHOW`, and `DESCRIBE` operations in the database/schema from which the metadata needs to be extracted.


@ -41,12 +41,6 @@ Configure and schedule Hive metadata and profiler workflows from the OpenMetadat
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{%/inlineCallout%}
### Python Requirements
To run the Hive ingestion, you will need to install:


@ -40,10 +40,6 @@ Configure and schedule Greenplum metadata and profiler workflows from the OpenMe
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{%/inlineCallout%}
The requirements depend on the Catalog and the FileSystem used. In a nutshell, the credentials used must have read access to the Catalog and the Metadata File.
### Glue Catalog


@ -39,12 +39,6 @@ Configure and schedule Impala metadata and profiler workflows from the OpenMetad
{% partial file="/v1.3/connectors/ingestion-modes-tiles.md" variables={yamlPath: "/connectors/database/impala/yaml"} /%}
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{%/inlineCallout%}
## Metadata Ingestion
{% partial


@ -40,12 +40,6 @@ Configure and schedule Impala metadata and profiler workflows from the OpenMetad
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{%/inlineCallout%}
### Python Requirements
To run the Impala ingestion, you will need to install:


@ -40,12 +40,6 @@ Configure and schedule MariaDB metadata and profiler workflows from the OpenMeta
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{%/inlineCallout%}
### Python Requirements
To run the MariaDB ingestion, you will need to install:


@ -40,10 +40,6 @@ Configure and schedule MongoDB metadata workflows from the OpenMetadata UI:
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{%/inlineCallout%}
To fetch metadata from MongoDB into OpenMetadata, the MongoDB user must be able to perform the `find` operation on collections and the `listCollection` operation on the databases available in MongoDB.
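A minimal sketch of creating such a user in `mongosh`, assuming the built-in `read` role covers these operations (names and password are placeholders):
```javascript
// The built-in "read" role includes the find and listCollections actions
db.createUser({
  user: "openmetadata_user",
  pwd: "change_me",
  roles: [{ role: "read", db: "my_database" }]
})
```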
## Metadata Ingestion


@ -41,10 +41,6 @@ Configure and schedule MongoDB metadata workflows from the OpenMetadata UI:
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{%/inlineCallout%}
To fetch metadata from MongoDB into OpenMetadata, the MongoDB user must be able to perform the `find` operation on collections and the `listCollection` operation on the databases available in MongoDB.
### Python Requirements


@ -45,10 +45,6 @@ Configure and schedule MSSQL metadata and profiler workflows from the OpenMetada
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{%/inlineCallout%}
The MSSQL user must be granted the `SELECT` privilege to fetch the metadata of tables and views, for example:
```sql
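-- A minimal illustrative grant; the user name is a placeholder
GRANT SELECT TO openmetadata_user;
```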


@ -44,12 +44,6 @@ Configure and schedule MSSQL metadata and profiler workflows from the OpenMetada
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{%/inlineCallout%}
The MSSQL user must be granted the `SELECT` privilege to fetch the metadata of tables and views, for example:
```sql
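-- A minimal illustrative grant; the user name is a placeholder
GRANT SELECT TO openmetadata_user;
```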


@ -42,12 +42,6 @@ Configure and schedule MySQL metadata and profiler workflows from the OpenMetada
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{%/inlineCallout%}
### Python Requirements
To run the MySQL ingestion, you will need to install:
@ -214,14 +208,14 @@ source:
```yaml {% srNumber=2 %}
authType:
  password: <password>
```
```yaml {% srNumber=3 %}
authType:
  awsConfig:
    awsAccessKeyId: access key id
    awsSecretAccessKey: access secret key
    awsRegion: aws region name
```
```yaml {% srNumber=4 %}
hostPort: <hostPort>
```


@ -44,12 +44,6 @@ Configure and schedule Oracle metadata and profiler workflows from the OpenMetad
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{%/inlineCallout%}
**Note**: To retrieve metadata from an Oracle database, the python-oracledb library can be utilized; it provides support for versions 12c, 18c, 19c, and 21c.
To ingest metadata from Oracle, the user must have the `CREATE SESSION` privilege.
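A minimal sketch of granting that privilege, with `openmetadata_user` as a placeholder:
```sql
-- Allow the ingestion user to open a session against the database
GRANT CREATE SESSION TO openmetadata_user;
```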


@ -40,12 +40,6 @@ Configure and schedule PinotDB metadata and profiler workflows from the OpenMeta
{% partial file="/v1.3/connectors/ingestion-modes-tiles.md" variables={yamlPath: "/connectors/database/pinotdb/yaml"} /%}
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{%/inlineCallout%}
## Metadata Ingestion
{% partial


@ -41,12 +41,6 @@ Configure and schedule PinotDB metadata and profiler workflows from the OpenMeta
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{%/inlineCallout%}
### Python Requirements
To run the PinotDB ingestion, you will need to install:


@ -44,12 +44,6 @@ Configure and schedule Postgres metadata and profiler workflows from the OpenMet
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{%/inlineCallout%}
**Note:** We only support officially supported Postgres versions. You can check the version list [here](https://www.postgresql.org/support/versioning/).
### Usage and Lineage considerations


@ -69,7 +69,7 @@ Executing the profiler workflow or data quality tests, will require the user to
- **Username**: Specify the User to connect to Presto. It should have enough privileges to read all the metadata.
- **Password**: Password to connect to Presto.
- **Host and Port**: Enter the fully qualified hostname and port number for your Presto deployment in the Host and Port field.
- **Catalog**: Presto offers a catalog feature where all the databases are stored. (Providing the Catalog is not mandatory from 0.12.2 or greater versions)
- **Catalog**: Presto offers a catalog feature where all the databases are stored.
- **DatabaseSchema**: DatabaseSchema of the data source. This is an optional parameter; use it if you would like to restrict the metadata reading to a single databaseSchema. When left blank, OpenMetadata ingestion attempts to scan all the databaseSchemas.
{% partial file="/v1.3/connectors/database/advanced-configuration.md" /%}


@ -41,12 +41,6 @@ Configure and schedule Presto metadata and profiler workflows from the OpenMetad
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{%/inlineCallout%}
### Python Requirements
To run the Presto ingestion, you will need to install:
@ -97,7 +91,7 @@ This is a sample config for Presto:
{% codeInfo srNumber=4 %}
**catalog**: Presto offers a catalog feature where all the databases are stored. (Providing the Catalog is not mandatory from 0.12.2 or greater versions)
**catalog**: Presto offers a catalog feature where all the databases are stored.
{% /codeInfo %}


@ -44,10 +44,6 @@ Configure and schedule Redshift metadata and profiler workflows from the OpenMet
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{%/inlineCallout%}
### Metadata
The Redshift user must be granted the `SELECT` privilege on the [SVV_TABLE_INFO](https://docs.aws.amazon.com/redshift/latest/dg/r_SVV_TABLE_INFO.html) table to fetch the metadata of tables and views. For more information, visit [here](https://docs.aws.amazon.com/redshift/latest/dg/c_visibility-of-data.html).


@ -44,12 +44,6 @@ Configure and schedule Redshift metadata and profiler workflows from the OpenMet
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{%/inlineCallout%}
The Redshift user must be granted the `SELECT` privilege on the [SVV_TABLE_INFO](https://docs.aws.amazon.com/redshift/latest/dg/r_SVV_TABLE_INFO.html) table to fetch the metadata of tables and views. For more information, visit [here](https://docs.aws.amazon.com/redshift/latest/dg/c_visibility-of-data.html).
```sql
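-- A minimal illustrative grant; the user name is a placeholder
GRANT SELECT ON TABLE svv_table_info TO openmetadata_user;
```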


@ -39,10 +39,6 @@ Configure and schedule Salesforce metadata and profiler workflows from the OpenM
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{%/inlineCallout%}
The following permissions are required to fetch metadata from Salesforce.
**API Access**: You must have the API Enabled permission in your Salesforce organization.


@ -41,12 +41,6 @@ Configure and schedule Singlestore metadata and profiler workflows from the Open
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{%/inlineCallout%}
### Python Requirements
To run the Singlestore ingestion, you will need to install:


@ -44,12 +44,6 @@ Configure and schedule Snowflake metadata and profiler workflows from the OpenMe
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{%/inlineCallout%}
### Python Requirements
To run the Snowflake ingestion, you will need to install:
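A sketch of the install, assuming the `snowflake` plugin extra:
```bash
pip3 install "openmetadata-ingestion[snowflake]"
```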


@ -44,12 +44,6 @@ Configure and schedule SQLite metadata and profiler workflows from the OpenMetad
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{%/inlineCallout%}
### Python Requirements
To ingest basic metadata, the SQLite user must have the following privileges:


@ -75,7 +75,7 @@ Executing the profiler workflow or data quality tests, will require the user to
- **JWT Auth Config**:
- **JWT**: JWT can be used to authenticate with Trino. Follow the steps in the [official Trino](https://trino.io/docs/current/security/jwt.html) documentation to set up Trino with JWT.
- **Host and Port**: Enter the fully qualified hostname and port number for your Trino deployment in the Host and Port field.
- **Catalog**: Trino offers a catalog feature where all the databases are stored. (Providing the Catalog is not mandatory from 0.12.2 or greater versions)
- **Catalog**: Trino offers a catalog feature where all the databases are stored.
- **DatabaseSchema**: DatabaseSchema of the data source. This is an optional parameter; use it if you would like to restrict the metadata reading to a single databaseSchema. When left blank, OpenMetadata ingestion attempts to scan all the databaseSchemas.
- **proxies**: Proxies for the connection to Trino data source
- **params**: URL parameters for connection to the Trino data source


@ -41,12 +41,6 @@ Configure and schedule Trino metadata and profiler workflows from the OpenMetada
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{%/inlineCallout%}
### Python Requirements
To run the Trino ingestion, you will need to install:
@ -107,7 +101,7 @@ This is a sample config for Trino:
{% /codeInfo %}
{% codeInfo srNumber=4 %}
**catalog**: Trino offers a catalog feature where all the databases are stored. (Providing the Catalog is not mandatory from 0.12.2 or greater versions)
**catalog**: Trino offers a catalog feature where all the databases are stored.
{% /codeInfo %}
{% codeInfo srNumber=5 %}


@ -43,13 +43,6 @@ Configure and schedule Unity Catalog metadata workflow from the OpenMetadata UI:
{% partial file="/v1.3/connectors/external-ingestion-deployment.md" /%}
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{%/inlineCallout%}
## Metadata Ingestion
{% partial


@ -42,13 +42,6 @@ Configure and schedule Unity Catalog metadata workflow from the OpenMetadata UI:
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{%/inlineCallout%}
### Python Requirements
To run the Unity Catalog ingestion, you will need to install:


@ -41,12 +41,6 @@ Configure and schedule Vertica metadata and profiler workflows from the OpenMeta
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{%/inlineCallout%}
### Permissions
To run the ingestion, we need a user with `SELECT` grants on the schemas that you'd like to ingest, as well as to the


@ -65,16 +65,6 @@ to identify the graph nodes as OpenMetadata Entities.
Note that if a Model is not materialized, its data won't be ingested.
### Query Log
{% note %}
Up until 0.11, Query Log analysis for lineage happens during the Usage Workflow.
From 0.12 onwards, there is a separate Lineage Workflow that will take care of this process.
{% /note %}
#### How to run?
The main difference here is between those sources that provide internal access to query logs and those that do not. For


@ -9,7 +9,7 @@ Learn how you can use OpenMetadata to define Data Quality tests and measure your
## Requirements
### OpenMetadata (version 0.12 or later)
### OpenMetadata
You must have a running deployment of OpenMetadata to use this guide. OpenMetadata includes the following services:


@ -21,12 +21,6 @@ Configure and schedule Kafka metadata and profiler workflows from the OpenMetada
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{%/inlineCallout%}
### Python Requirements
To run the Kafka ingestion, you will need to install:
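A sketch of the install, assuming the `kafka` plugin extra:
```bash
pip3 install "openmetadata-ingestion[kafka]"
```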


@ -21,14 +21,8 @@ Configure and schedule Kinesis metadata workflows from the OpenMetadata UI:
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{%/inlineCallout%}
OpenMetadata retrieves information about streams and sample data from the streams in the AWS account.
The user must have following policy set to access the metadata from Kinesis.
The user must have the following policy set to access the metadata from Kinesis.
```json
{
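    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "kinesis:ListStreams",
                "kinesis:DescribeStreamSummary",
                "kinesis:ListShards",
                "kinesis:GetShardIterator",
                "kinesis:GetRecords"
            ],
            "Resource": "*"
        }
    ]
}
```
The action list above is an illustrative read-only sketch, reconstructed where the original snippet was cut off; treat it as an assumption rather than the verbatim policy.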


@ -21,12 +21,6 @@ Configure and schedule Redpanda metadata and profiler workflows from the OpenMet
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{%/inlineCallout%}
### Python Requirements
To run the Redpanda ingestion, you will need to install:


@ -20,10 +20,6 @@ Configure and schedule Amundsen metadata and profiler workflows from the OpenMet
Before this, you must ingest the database / messaging service you want to get metadata for.
For more details, click [here](/connectors/metadata/amundsen#create-database-service).
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{%/inlineCallout%}
### Python Requirements
To run the Amundsen ingestion, you will need to install:


@ -20,12 +20,6 @@ Configure and schedule Atlas metadata and profiler workflows from the OpenMetada
Before this, you must ingest the database / messaging service you want to get metadata for.
For more details, click [here](/connectors/metadata/atlas#create-database-service).
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{%/inlineCallout%}
### Python Requirements
To run the Atlas ingestion, you will need to install:


@ -16,12 +16,6 @@ Configure and schedule MLflow metadata and profiler workflows from the OpenMetad
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{%/inlineCallout%}
### Python Requirements
To run the MLflow ingestion, you will need to install:
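A sketch of the install, assuming the `mlflow` plugin extra:
```bash
pip3 install "openmetadata-ingestion[mlflow]"
```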


@ -16,14 +16,8 @@ Configure and schedule Sagemaker metadata and profiler workflows from the OpenMe
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{%/inlineCallout%}
OpenMetadata retrieves information about models and tags associated with the models in the AWS account.
The user must have following policy set to ingest the metadata from Sagemaker.
The user must have the following policy set to ingest the metadata from Sagemaker.
```json
{
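    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "sagemaker:ListModels",
                "sagemaker:DescribeModel",
                "sagemaker:ListTags"
            ],
            "Resource": "*"
        }
    ]
}
```
The action list above is an illustrative read-only sketch, reconstructed where the original snippet was cut off; treat it as an assumption rather than the verbatim policy.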


@ -22,12 +22,6 @@ Configure and schedule Airbyte metadata and profiler workflows from the OpenMeta
{% partial file="/v1.3/connectors/ingestion-modes-tiles.md" variables={yamlPath: "/connectors/pipeline/airbyte/yaml"} /%}
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{% /inlineCallout %}
## Metadata Ingestion
{% partial


@ -24,12 +24,6 @@ Configure and schedule Airbyte metadata and profiler workflows from the OpenMeta
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{% /inlineCallout %}
### Python Requirements
To run the Airbyte ingestion, you will need to install:


@ -135,8 +135,6 @@ we will rely on the `KubernetesPodOperator` to use the underlying k8s cluster of
Then, the code won't directly run using the host's environment, but rather inside a container that we created with only the `openmetadata-ingestion` package.
**Note:** This approach requires the `openmetadata/ingestion-base` image, which is only available from version 0.12.1 or higher!
### Requirements
The only thing we need to handle here is getting the URL of the underlying Composer's database. You can follow


@ -24,12 +24,6 @@ Configure and schedule Airbyte metadata and profiler workflows from the OpenMeta
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{% /inlineCallout %}
### Python Requirements
To run the Airflow ingestion, you will need to install:
@ -38,8 +32,7 @@ To run the Airflow ingestion, you will need to install:
```bash
pip3 install "openmetadata-ingestion[airflow]"
```
Note that this installs the same Airflow version that we ship in the Ingestion Container, which is
Airflow `2.3.3` from Release `0.12`.
Note that this installs the same Airflow version that we ship in the Ingestion Container.
The ingestion using Airflow version 2.3.3 as a source package has been tested against Airflow 2.3.3 and Airflow 2.2.5.
@ -140,13 +133,15 @@ source:
connection:
  type: Mysql
  username: airflow_user
  authType:
    password: airflow_pass
  databaseSchema: airflow_db
  hostPort: localhost:3306
  #
  # type: Postgres
  # username: airflow_user
  # authType:
  #   password: airflow_pass
  # database: airflow_db
  # hostPort: localhost:3306
  #


@ -25,12 +25,6 @@ Configure and schedule Dagster metadata and profiler workflows from the OpenMeta
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{% /inlineCallout %}
### Python Requirements
To run the Dagster ingestion, you will need to install:
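A sketch of the install, assuming the `dagster` plugin extra:
```bash
pip3 install "openmetadata-ingestion[dagster]"
```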


@ -22,12 +22,6 @@ Configure and schedule Databricks Pipeline metadata workflows from the OpenMetad
{% partial file="/v1.3/connectors/ingestion-modes-tiles.md" variables={yamlPath: "/connectors/pipeline/databricks-pipeline/yaml"} /%}
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{% /inlineCallout %}
## Metadata Ingestion
{% partial


@ -24,12 +24,6 @@ Configure and schedule Databricks Pipeline metadata and profiler workflows from
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{% /inlineCallout %}
### Python Requirements
To run the Databricks Pipeline ingestion, you will need to install:


@ -25,12 +25,6 @@ Configure and schedule Domo Pipeline metadata and profiler workflows from the Op
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{% /inlineCallout %}
**Note:** For metadata ingestion, make sure to add at least the `data` scope to the clientId provided.
For any questions related to scopes, click [here](https://developer.domo.com/portal/1845fc11bbe5d-api-authentication).


@ -24,12 +24,6 @@ Configure and schedule Fivetran metadata and profiler workflows from the OpenMet
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{% /inlineCallout %}
To access Fivetran APIs, a Fivetran account on a Standard, Enterprise, or Business Critical plan is required.
### Python Requirements


@ -23,10 +23,6 @@ Configure and schedule Glue metadata and profiler workflows from the OpenMetadat
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{% /inlineCallout %}
The Glue connector ingests metadata through the AWS [Boto3](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/glue.html) client.
We will ingest Workflows, their Jobs, and their run status.


@ -24,10 +24,6 @@ Configure and schedule Nifi metadata workflows from the OpenMetadata UI:
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{% /inlineCallout %}
### Metadata
OpenMetadata supports two types of connections for the Nifi connector:
- **basic authentication**: use username/password to authenticate to Nifi.


@ -24,12 +24,6 @@ Configure and schedule Nifi metadata and profiler workflows from the OpenMetadat
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{% /inlineCallout %}
### Python Requirements
To run the Nifi ingestion, you will need to install:


@ -24,12 +24,6 @@ Configure and schedule Spline metadata and profiler workflows from the OpenMetad
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{% /inlineCallout %}
The Spline connector supports lineage for data sources of type `jdbc` or `dbfs`, i.e., it can extract lineage if the data source is either a JDBC connection or a Databricks instance.
{% note %}


@ -16,12 +16,6 @@ Configure and schedule Elasticsearch metadata and profiler workflows from the Op
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{% /inlineCallout %}
### Python Requirements
To run the Elasticsearch ingestion, you will need to install:
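A sketch of the install, assuming the `elasticsearch` plugin extra:
```bash
pip3 install "openmetadata-ingestion[elasticsearch]"
```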


@ -21,10 +21,6 @@ Configure and schedule S3 metadata workflows from the OpenMetadata UI:
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{%/inlineCallout%}
We need the following permissions in AWS:
### S3 Permissions


@ -250,7 +250,7 @@ with DAG(
ingest_task = PythonVirtualenvOperator(
    task_id="ingest_using_recipe",
    requirements=[
        'openmetadata-ingestion[mysql]>=1.3.0',  # Specify any additional Python package dependencies
        'openmetadata-ingestion[mysql]~=1.3.0',  # Specify any additional Python package dependencies
    ],
    system_site_packages=False,  # Set to True if you want to include system site-packages in the virtual environment
    python_version="3.9",  # Remove if necessary