diff --git a/openmetadata-docs/content/menu.md b/openmetadata-docs/content/menu.md index 1ea9d062407..e931deba55a 100644 --- a/openmetadata-docs/content/menu.md +++ b/openmetadata-docs/content/menu.md @@ -441,6 +441,10 @@ site_menu: url: /openmetadata/ingestion/workflows/profiler - category: OpenMetadata / Ingestion / Workflows / Profiler / Metrics url: /openmetadata/ingestion/workflows/profiler/metrics + - category: OpenMetadata / Ingestion / Workflows / Data Quality + url: /openmetadata/ingestion/workflows/data-quality + - category: OpenMetadata / Ingestion / Workflows / Data Quality / Tests + url: /openmetadata/ingestion/workflows/data-quality/tests - category: OpenMetadata / Ingestion / Lineage url: /openmetadata/ingestion/lineage - category: OpenMetadata / Ingestion / Lineage / Edit Data Lineage Manually diff --git a/openmetadata-docs/content/openmetadata/connectors/database/athena/airflow.md b/openmetadata-docs/content/openmetadata/connectors/database/athena/airflow.md index a3bc21f9881..56979bad2f4 100644 --- a/openmetadata-docs/content/openmetadata/connectors/database/athena/airflow.md +++ b/openmetadata-docs/content/openmetadata/connectors/database/athena/airflow.md @@ -10,7 +10,7 @@ In this section, we provide guides and references to use the Athena connector. Configure and schedule Athena metadata and profiler workflows from the OpenMetadata UI: - [Requirements](#requirements) - [Metadata Ingestion](#metadata-ingestion) -- [Data Profiler and Quality Tests](#data-profiler-and-quality-tests) +- [Data Profiler](#data-profiler) - [DBT Integration](#dbt-integration) ## Requirements @@ -361,7 +361,7 @@ with DAG( Note that from connector to connector, this recipe will always be the same. By updating the YAML configuration, you will be able to extract metadata from different sources. -## Data Profiler and Quality Tests +## Data Profiler The Data Profiler workflow will be using the `orm-profiler` processor. While the `serviceConnection` will still be the same to reach the source system, the `sourceConfig` will be diff --git a/openmetadata-docs/content/openmetadata/connectors/database/athena/cli.md b/openmetadata-docs/content/openmetadata/connectors/database/athena/cli.md index 1603733a4c3..26a03789cf1 100644 --- a/openmetadata-docs/content/openmetadata/connectors/database/athena/cli.md +++ b/openmetadata-docs/content/openmetadata/connectors/database/athena/cli.md @@ -10,7 +10,7 @@ In this section, we provide guides and references to use the Athena connector. Configure and schedule Athena metadata and profiler workflows from the OpenMetadata UI: - [Requirements](#requirements) - [Metadata Ingestion](#metadata-ingestion) -- [Data Profiler and Quality Tests](#data-profiler-and-quality-tests) +- [Data Profiler](#data-profiler) - [DBT Integration](#dbt-integration) ## Requirements @@ -314,7 +314,7 @@ metadata ingest -c Note that from connector to connector, this recipe will always be the same. By updating the YAML configuration, you will be able to extract metadata from different sources. -## Data Profiler and Quality Tests +## Data Profiler The Data Profiler workflow will be using the `orm-profiler` processor. 
While the `serviceConnection` will still be the same to reach the source system, the `sourceConfig` will be diff --git a/openmetadata-docs/content/openmetadata/connectors/database/athena/index.md b/openmetadata-docs/content/openmetadata/connectors/database/athena/index.md index 01665c3a7e8..b129bbd457e 100644 --- a/openmetadata-docs/content/openmetadata/connectors/database/athena/index.md +++ b/openmetadata-docs/content/openmetadata/connectors/database/athena/index.md @@ -10,7 +10,7 @@ In this section, we provide guides and references to use the Athena connector. Configure and schedule Athena metadata and profiler workflows from the OpenMetadata UI: - [Requirements](#requirements) - [Metadata Ingestion](#metadata-ingestion) -- [Data Profiler and Quality Tests](#data-profiler-and-quality-tests) +- [Data Profiler](#data-profiler) - [DBT Integration](#dbt-integration) If you don't want to use the OpenMetadata Ingestion container to configure the workflows via the UI, then you can check @@ -221,7 +221,7 @@ caption="Edit and Deploy the Ingestion Pipeline" From the Connection tab, you can also Edit the Service if needed. -## Data Profiler and Quality Tests +## Data Profiler Note that from connector to connector, this recipe will always be the same. By updating the YAML configuration, you will be able to extract metadata from different sources. -## Data Profiler and Quality Tests +## Data Profiler The Data Profiler workflow will be using the `orm-profiler` processor. While the `serviceConnection` will still be the same to reach the source system, the `sourceConfig` will be diff --git a/openmetadata-docs/content/openmetadata/connectors/database/azuresql/index.md b/openmetadata-docs/content/openmetadata/connectors/database/azuresql/index.md index 83efc64483a..9fdc35c3217 100644 --- a/openmetadata-docs/content/openmetadata/connectors/database/azuresql/index.md +++ b/openmetadata-docs/content/openmetadata/connectors/database/azuresql/index.md @@ -10,7 +10,7 @@ In this section, we provide guides and references to use the AzureSQL connector. Configure and schedule AzureSQL metadata and profiler workflows from the OpenMetadata UI: - [Requirements](#requirements) - [Metadata Ingestion](#metadata-ingestion) -- [Data Profiler and Quality Tests](#data-profiler-and-quality-tests) +- [Data Profiler](#data-profiler) - [DBT Integration](#dbt-integration) If you don't want to use the OpenMetadata Ingestion container to configure the workflows via the UI, then you can check @@ -218,7 +218,7 @@ caption="Edit and Deploy the Ingestion Pipeline" From the Connection tab, you can also Edit the Service if needed. -## Data Profiler and Quality Tests +## Data Profiler ``` -## Data Profiler and Quality Tests +## Data Profiler The Data Profiler workflow will be using the `orm-profiler` processor. 
While the `serviceConnection` will still be the same to reach the source system, the `sourceConfig` will be diff --git a/openmetadata-docs/content/openmetadata/connectors/database/bigquery/index.md b/openmetadata-docs/content/openmetadata/connectors/database/bigquery/index.md index 2ef312aad28..1701f1997b2 100644 --- a/openmetadata-docs/content/openmetadata/connectors/database/bigquery/index.md +++ b/openmetadata-docs/content/openmetadata/connectors/database/bigquery/index.md @@ -11,7 +11,7 @@ Configure and schedule BigQuery metadata and profiler workflows from the OpenMet - [Requirements](#requirements) - [Metadata Ingestion](#metadata-ingestion) - [Query Usage and Lineage Ingestion](#query-usage-and-lineage-ingestion) -- [Data Profiler and Quality Tests](#data-profiler-and-quality-tests) +- [Data Profiler](#data-profiler) - [DBT Integration](#dbt-integration) If you don't want to use the OpenMetadata Ingestion container to configure the workflows via the UI, then you can check @@ -268,7 +268,7 @@ text="Learn more about how to configure the Usage Workflow to ingest Query and L link="/openmetadata/ingestion/workflows/usage" /> -## Data Profiler and Quality Tests +## Data Profiler ``` -## Data Profiler and Quality Tests +## Data Profiler The Data Profiler workflow will be using the `orm-profiler` processor. While the `serviceConnection` will still be the same to reach the source system, the `sourceConfig` will be diff --git a/openmetadata-docs/content/openmetadata/connectors/database/clickhouse/index.md b/openmetadata-docs/content/openmetadata/connectors/database/clickhouse/index.md index ba76cb64d7a..872d8f59c3a 100644 --- a/openmetadata-docs/content/openmetadata/connectors/database/clickhouse/index.md +++ b/openmetadata-docs/content/openmetadata/connectors/database/clickhouse/index.md @@ -11,7 +11,7 @@ Configure and schedule Clickhouse metadata and profiler workflows from the OpenM - [Requirements](#requirements) - [Metadata Ingestion](#metadata-ingestion) - [Query Usage and Lineage Ingestion](#query-usage-and-lineage-ingestion) -- [Data Profiler and Quality Tests](#data-profiler-and-quality-tests) +- [Data Profiler](#data-profiler) - [DBT Integration](#dbt-integration) If you don't want to use the OpenMetadata Ingestion container to configure the workflows via the UI, then you can check @@ -226,7 +226,7 @@ text="Learn more about how to configure the Usage Workflow to ingest Query and L link="/openmetadata/ingestion/workflows/usage" /> -## Data Profiler and Quality Tests +## Data Profiler Note that from connector to connector, this recipe will always be the same. By updating the YAML configuration, you will be able to extract metadata from different sources. -## Data Profiler and Quality Tests +## Data Profiler The Data Profiler workflow will be using the `orm-profiler` processor. 
While the `serviceConnection` will still be the same to reach the source system, the `sourceConfig` will be diff --git a/openmetadata-docs/content/openmetadata/connectors/database/databricks/index.md b/openmetadata-docs/content/openmetadata/connectors/database/databricks/index.md index a66397cfff4..f96f1b413bd 100644 --- a/openmetadata-docs/content/openmetadata/connectors/database/databricks/index.md +++ b/openmetadata-docs/content/openmetadata/connectors/database/databricks/index.md @@ -10,7 +10,7 @@ In this section, we provide guides and references to use the Databricks connecto Configure and schedule Databricks metadata and profiler workflows from the OpenMetadata UI: - [Requirements](#requirements) - [Metadata Ingestion](#metadata-ingestion) -- [Data Profiler and Quality Tests](#data-profiler-and-quality-tests) +- [Data Profiler](#data-profiler) - [DBT Integration](#dbt-integration) If you don't want to use the OpenMetadata Ingestion container to configure the workflows via the UI, then you can check @@ -217,7 +217,7 @@ caption="Edit and Deploy the Ingestion Pipeline" From the Connection tab, you can also Edit the Service if needed. -## Data Profiler and Quality Tests +## Data Profiler Note that from connector to connector, this recipe will always be the same. By updating the YAML configuration, you will be able to extract metadata from different sources. -## Data Profiler and Quality Tests +## Data Profiler The Data Profiler workflow will be using the `orm-profiler` processor. While the `serviceConnection` will still be the same to reach the source system, the `sourceConfig` will be diff --git a/openmetadata-docs/content/openmetadata/connectors/database/db2/index.md b/openmetadata-docs/content/openmetadata/connectors/database/db2/index.md index 20ddb9c4fa8..e0605e50d25 100644 --- a/openmetadata-docs/content/openmetadata/connectors/database/db2/index.md +++ b/openmetadata-docs/content/openmetadata/connectors/database/db2/index.md @@ -10,7 +10,7 @@ In this section, we provide guides and references to use the DB2 connector. Configure and schedule DB2 metadata and profiler workflows from the OpenMetadata UI: - [Requirements](#requirements) - [Metadata Ingestion](#metadata-ingestion) -- [Data Profiler and Quality Tests](#data-profiler-and-quality-tests) +- [Data Profiler](#data-profiler) - [DBT Integration](#dbt-integration) If you don't want to use the OpenMetadata Ingestion container to configure the workflows via the UI, then you can check @@ -216,7 +216,7 @@ caption="Edit and Deploy the Ingestion Pipeline" From the Connection tab, you can also Edit the Service if needed. -## Data Profiler and Quality Tests +## Data Profiler Note that from connector to connector, this recipe will always be the same. By updating the YAML configuration, you will be able to extract metadata from different sources. -## Data Profiler and Quality Tests +## Data Profiler The Data Profiler workflow will be using the `orm-profiler` processor. While the `serviceConnection` will still be the same to reach the source system, the `sourceConfig` will be diff --git a/openmetadata-docs/content/openmetadata/connectors/database/druid/index.md b/openmetadata-docs/content/openmetadata/connectors/database/druid/index.md index e09bae4bb5a..055558fd846 100644 --- a/openmetadata-docs/content/openmetadata/connectors/database/druid/index.md +++ b/openmetadata-docs/content/openmetadata/connectors/database/druid/index.md @@ -10,7 +10,7 @@ In this section, we provide guides and references to use the Druid connector. 
Configure and schedule Druid metadata and profiler workflows from the OpenMetadata UI: - [Requirements](#requirements) - [Metadata Ingestion](#metadata-ingestion) -- [Data Profiler and Quality Tests](#data-profiler-and-quality-tests) +- [Data Profiler](#data-profiler) - [DBT Integration](#dbt-integration) If you don't want to use the OpenMetadata Ingestion container to configure the workflows via the UI, then you can check @@ -215,7 +215,7 @@ caption="Edit and Deploy the Ingestion Pipeline" From the Connection tab, you can also Edit the Service if needed. -## Data Profiler and Quality Tests +## Data Profiler Note that from connector to connector, this recipe will always be the same. By updating the YAML configuration, you will be able to extract metadata from different sources. -## Data Profiler and Quality Tests +## Data Profiler The Data Profiler workflow will be using the `orm-profiler` processor. While the `serviceConnection` will still be the same to reach the source system, the `sourceConfig` will be diff --git a/openmetadata-docs/content/openmetadata/connectors/database/hive/index.md b/openmetadata-docs/content/openmetadata/connectors/database/hive/index.md index 69f4696f6a5..9aa79aab9fa 100644 --- a/openmetadata-docs/content/openmetadata/connectors/database/hive/index.md +++ b/openmetadata-docs/content/openmetadata/connectors/database/hive/index.md @@ -10,7 +10,7 @@ In this section, we provide guides and references to use the Hive connector. Configure and schedule Hive metadata and profiler workflows from the OpenMetadata UI: - [Requirements](#requirements) - [Metadata Ingestion](#metadata-ingestion) -- [Data Profiler and Quality Tests](#data-profiler-and-quality-tests) +- [Data Profiler](#data-profiler) - [DBT Integration](#dbt-integration) If you don't want to use the OpenMetadata Ingestion container to configure the workflows via the UI, then you can check @@ -217,7 +217,7 @@ caption="Edit and Deploy the Ingestion Pipeline" From the Connection tab, you can also Edit the Service if needed. -## Data Profiler and Quality Tests +## Data Profiler Note that from connector to connector, this recipe will always be the same. By updating the YAML configuration, you will be able to extract metadata from different sources. -## Data Profiler and Quality Tests +## Data Profiler The Data Profiler workflow will be using the `orm-profiler` processor. While the `serviceConnection` will still be the same to reach the source system, the `sourceConfig` will be diff --git a/openmetadata-docs/content/openmetadata/connectors/database/mariadb/index.md b/openmetadata-docs/content/openmetadata/connectors/database/mariadb/index.md index bda2f3df28d..e3f9840c579 100644 --- a/openmetadata-docs/content/openmetadata/connectors/database/mariadb/index.md +++ b/openmetadata-docs/content/openmetadata/connectors/database/mariadb/index.md @@ -10,7 +10,7 @@ In this section, we provide guides and references to use the MariaDB connector. Configure and schedule MariaDB metadata and profiler workflows from the OpenMetadata UI: - [Requirements](#requirements) - [Metadata Ingestion](#metadata-ingestion) -- [Data Profiler and Quality Tests](#data-profiler-and-quality-tests) +- [Data Profiler](#data-profiler) - [DBT Integration](#dbt-integration) If you don't want to use the OpenMetadata Ingestion container to configure the workflows via the UI, then you can check @@ -216,7 +216,7 @@ caption="Edit and Deploy the Ingestion Pipeline" From the Connection tab, you can also Edit the Service if needed. 
-## Data Profiler and Quality Tests +## Data Profiler ``` -## Data Profiler and Quality Tests +## Data Profiler The Data Profiler workflow will be using the `orm-profiler` processor. While the `serviceConnection` will still be the same to reach the source system, the `sourceConfig` will be diff --git a/openmetadata-docs/content/openmetadata/connectors/database/mssql/index.md b/openmetadata-docs/content/openmetadata/connectors/database/mssql/index.md index 600cf8e736d..c1631a2ccc5 100644 --- a/openmetadata-docs/content/openmetadata/connectors/database/mssql/index.md +++ b/openmetadata-docs/content/openmetadata/connectors/database/mssql/index.md @@ -11,7 +11,7 @@ Configure and schedule MSSQL metadata and profiler workflows from the OpenMetada - [Requirements](#requirements) - [Metadata Ingestion](#metadata-ingestion) - [Query Usage and Lineage Ingestion](#query-usage-and-lineage-ingestion) -- [Data Profiler and Quality Tests](#data-profiler-and-quality-tests) +- [Data Profiler](#data-profiler) - [DBT Integration](#dbt-integration) If you don't want to use the OpenMetadata Ingestion container to configure the workflows via the UI, then you can check @@ -229,7 +229,7 @@ text="Learn more about how to configure the Usage Workflow to ingest Query and L link="/openmetadata/ingestion/workflows/usage" /> -## Data Profiler and Quality Tests +## Data Profiler Note that from connector to connector, this recipe will always be the same. By updating the YAML configuration, you will be able to extract metadata from different sources. -## Data Profiler and Quality Tests +## Data Profiler The Data Profiler workflow will be using the `orm-profiler` processor. While the `serviceConnection` will still be the same to reach the source system, the `sourceConfig` will be diff --git a/openmetadata-docs/content/openmetadata/connectors/database/mysql/index.md b/openmetadata-docs/content/openmetadata/connectors/database/mysql/index.md index 24f202dcb78..ad464ac88c0 100644 --- a/openmetadata-docs/content/openmetadata/connectors/database/mysql/index.md +++ b/openmetadata-docs/content/openmetadata/connectors/database/mysql/index.md @@ -10,7 +10,7 @@ In this section, we provide guides and references to use the MySQL connector. Configure and schedule MySQL metadata and profiler workflows from the OpenMetadata UI: - [Requirements](#requirements) - [Metadata Ingestion](#metadata-ingestion) -- [Data Profiler and Quality Tests](#data-profiler-and-quality-tests) +- [Data Profiler](#data-profiler) - [DBT Integration](#dbt-integration) If you don't want to use the OpenMetadata Ingestion container to configure the workflows via the UI, then you can check @@ -218,7 +218,7 @@ caption="Edit and Deploy the Ingestion Pipeline" From the Connection tab, you can also Edit the Service if needed. -## Data Profiler and Quality Tests +## Data Profiler Note that from connector to connector, this recipe will always be the same. By updating the YAML configuration, you will be able to extract metadata from different sources. -## Data Profiler and Quality Tests +## Data Profiler The Data Profiler workflow will be using the `orm-profiler` processor. 
While the `serviceConnection` will still be the same to reach the source system, the `sourceConfig` will be diff --git a/openmetadata-docs/content/openmetadata/connectors/database/oracle/index.md b/openmetadata-docs/content/openmetadata/connectors/database/oracle/index.md index df9591699d1..bf82644ec17 100644 --- a/openmetadata-docs/content/openmetadata/connectors/database/oracle/index.md +++ b/openmetadata-docs/content/openmetadata/connectors/database/oracle/index.md @@ -10,7 +10,7 @@ In this section, we provide guides and references to use the Oracle connector. Configure and schedule Oracle metadata and profiler workflows from the OpenMetadata UI: - [Requirements](#requirements) - [Metadata Ingestion](#metadata-ingestion) -- [Data Profiler and Quality Tests](#data-profiler-and-quality-tests) +- [Data Profiler](#data-profiler) - [DBT Integration](#dbt-integration) If you don't want to use the OpenMetadata Ingestion container to configure the workflows via the UI, then you can check @@ -217,7 +217,7 @@ caption="Edit and Deploy the Ingestion Pipeline" From the Connection tab, you can also Edit the Service if needed. -## Data Profiler and Quality Tests +## Data Profiler Note that from connector to connector, this recipe will always be the same. By updating the YAML configuration, you will be able to extract metadata from different sources. -## Data Profiler and Quality Tests +## Data Profiler The Data Profiler workflow will be using the `orm-profiler` processor. While the `serviceConnection` will still be the same to reach the source system, the `sourceConfig` will be diff --git a/openmetadata-docs/content/openmetadata/connectors/database/postgres/index.md b/openmetadata-docs/content/openmetadata/connectors/database/postgres/index.md index 16f2314a3f6..18a62f22e09 100644 --- a/openmetadata-docs/content/openmetadata/connectors/database/postgres/index.md +++ b/openmetadata-docs/content/openmetadata/connectors/database/postgres/index.md @@ -10,7 +10,7 @@ In this section, we provide guides and references to use the PostgreSQL connecto Configure and schedule PostgreSQL metadata and profiler workflows from the OpenMetadata UI: - [Requirements](#requirements) - [Metadata Ingestion](#metadata-ingestion) -- [Data Profiler and Quality Tests](#data-profiler-and-quality-tests) +- [Data Profiler](#data-profiler) - [DBT Integration](#dbt-integration) If you don't want to use the OpenMetadata Ingestion container to configure the workflows via the UI, then you can check @@ -216,7 +216,7 @@ caption="Edit and Deploy the Ingestion Pipeline" From the Connection tab, you can also Edit the Service if needed. -## Data Profiler and Quality Tests +## Data Profiler Note that from connector to connector, this recipe will always be the same. By updating the YAML configuration, you will be able to extract metadata from different sources. -## Data Profiler and Quality Tests +## Data Profiler The Data Profiler workflow will be using the `orm-profiler` processor. While the `serviceConnection` will still be the same to reach the source system, the `sourceConfig` will be diff --git a/openmetadata-docs/content/openmetadata/connectors/database/presto/index.md b/openmetadata-docs/content/openmetadata/connectors/database/presto/index.md index 038e13a5159..790a58c9bbb 100644 --- a/openmetadata-docs/content/openmetadata/connectors/database/presto/index.md +++ b/openmetadata-docs/content/openmetadata/connectors/database/presto/index.md @@ -10,7 +10,7 @@ In this section, we provide guides and references to use the Presto connector. 
Configure and schedule Presto metadata and profiler workflows from the OpenMetadata UI: - [Requirements](#requirements) - [Metadata Ingestion](#metadata-ingestion) -- [Data Profiler and Quality Tests](#data-profiler-and-quality-tests) +- [Data Profiler](#data-profiler) - [DBT Integration](#dbt-integration) If you don't want to use the OpenMetadata Ingestion container to configure the workflows via the UI, then you can check @@ -217,7 +217,7 @@ caption="Edit and Deploy the Ingestion Pipeline" From the Connection tab, you can also Edit the Service if needed. -## Data Profiler and Quality Tests +## Data Profiler ``` -## Data Profiler and Quality Tests +## Data Profiler The Data Profiler workflow will be using the `orm-profiler` processor. While the `serviceConnection` will still be the same to reach the source system, the `sourceConfig` will be diff --git a/openmetadata-docs/content/openmetadata/connectors/database/redshift/index.md b/openmetadata-docs/content/openmetadata/connectors/database/redshift/index.md index 2d0f273ca6d..84428d5ae21 100644 --- a/openmetadata-docs/content/openmetadata/connectors/database/redshift/index.md +++ b/openmetadata-docs/content/openmetadata/connectors/database/redshift/index.md @@ -11,7 +11,7 @@ Configure and schedule Redshift metadata and profiler workflows from the OpenMet - [Requirements](#requirements) - [Metadata Ingestion](#metadata-ingestion) - [Query Usage and Lineage Ingestion](#query-usage-and-lineage-ingestion) -- [Data Profiler and Quality Tests](#data-profiler-and-quality-tests) +- [Data Profiler](#data-profiler) - [DBT Integration](#dbt-integration) If you don't want to use the OpenMetadata Ingestion container to configure the workflows via the UI, then you can check @@ -232,7 +232,7 @@ text="Learn more about how to configure the Usage Workflow to ingest Query and L link="/openmetadata/ingestion/workflows/usage" /> -## Data Profiler and Quality Tests +## Data Profiler Note that from connector to connector, this recipe will always be the same. By updating the YAML configuration, you will be able to extract metadata from different sources. -## Data Profiler and Quality Tests +## Data Profiler The Data Profiler workflow will be using the `orm-profiler` processor. While the `serviceConnection` will still be the same to reach the source system, the `sourceConfig` will be diff --git a/openmetadata-docs/content/openmetadata/connectors/database/salesforce/index.md b/openmetadata-docs/content/openmetadata/connectors/database/salesforce/index.md index 9c05bfa0954..80c11072903 100644 --- a/openmetadata-docs/content/openmetadata/connectors/database/salesforce/index.md +++ b/openmetadata-docs/content/openmetadata/connectors/database/salesforce/index.md @@ -10,7 +10,7 @@ In this section, we provide guides and references to use the Salesforce connecto Configure and schedule Salesforce metadata and profiler workflows from the OpenMetadata UI: - [Requirements](#requirements) - [Metadata Ingestion](#metadata-ingestion) -- [Data Profiler and Quality Tests](#data-profiler-and-quality-tests) +- [Data Profiler](#data-profiler) - [DBT Integration](#dbt-integration) If you don't want to use the OpenMetadata Ingestion container to configure the workflows via the UI, then you can check @@ -218,7 +218,7 @@ caption="Edit and Deploy the Ingestion Pipeline" From the Connection tab, you can also Edit the Service if needed. -## Data Profiler and Quality Tests +## Data Profiler Note that from connector to connector, this recipe will always be the same. 
By updating the YAML configuration, you will be able to extract metadata from different sources. -## Data Profiler and Quality Tests +## Data Profiler The Data Profiler workflow will be using the `orm-profiler` processor. While the `serviceConnection` will still be the same to reach the source system, the `sourceConfig` will be diff --git a/openmetadata-docs/content/openmetadata/connectors/database/singlestore/index.md b/openmetadata-docs/content/openmetadata/connectors/database/singlestore/index.md index 8141da923f9..2cc98a01e6b 100644 --- a/openmetadata-docs/content/openmetadata/connectors/database/singlestore/index.md +++ b/openmetadata-docs/content/openmetadata/connectors/database/singlestore/index.md @@ -10,7 +10,7 @@ In this section, we provide guides and references to use the Singlestore connect Configure and schedule Singlestore metadata and profiler workflows from the OpenMetadata UI: - [Requirements](#requirements) - [Metadata Ingestion](#metadata-ingestion) -- [Data Profiler and Quality Tests](#data-profiler-and-quality-tests) +- [Data Profiler](#data-profiler) - [DBT Integration](#dbt-integration) If you don't want to use the OpenMetadata Ingestion container to configure the workflows via the UI, then you can check @@ -215,7 +215,7 @@ caption="Edit and Deploy the Ingestion Pipeline" From the Connection tab, you can also Edit the Service if needed. -## Data Profiler and Quality Tests +## Data Profiler ``` -## Data Profiler and Quality Tests +## Data Profiler The Data Profiler workflow will be using the `orm-profiler` processor. While the `serviceConnection` will still be the same to reach the source system, the `sourceConfig` will be diff --git a/openmetadata-docs/content/openmetadata/connectors/database/snowflake/index.md b/openmetadata-docs/content/openmetadata/connectors/database/snowflake/index.md index 13d5f6fb6fb..f06fdf5ed43 100644 --- a/openmetadata-docs/content/openmetadata/connectors/database/snowflake/index.md +++ b/openmetadata-docs/content/openmetadata/connectors/database/snowflake/index.md @@ -11,7 +11,7 @@ Configure and schedule Snowflake metadata and profiler workflows from the OpenMe - [Requirements](#requirements) - [Metadata Ingestion](#metadata-ingestion) - [Query Usage and Lineage Ingestion](#query-usage-and-lineage-ingestion) -- [Data Profiler and Quality Tests](#data-profiler-and-quality-tests) +- [Data Profiler](#data-profiler) - [DBT Integration](#dbt-integration) If you don't want to use the OpenMetadata Ingestion container to configure the workflows via the UI, then you can check @@ -239,7 +239,7 @@ text="Learn more about how to configure the Usage Workflow to ingest Query and L link="/openmetadata/ingestion/workflows/usage" /> -## Data Profiler and Quality Tests +## Data Profiler Note that from connector to connector, this recipe will always be the same. By updating the YAML configuration, you will be able to extract metadata from different sources. -## Data Profiler and Quality Tests +## Data Profiler The Data Profiler workflow will be using the `orm-profiler` processor. 
While the `serviceConnection` will still be the same to reach the source system, the `sourceConfig` will be diff --git a/openmetadata-docs/content/openmetadata/connectors/database/trino/index.md b/openmetadata-docs/content/openmetadata/connectors/database/trino/index.md index 39aa78d9e4d..31f5027a006 100644 --- a/openmetadata-docs/content/openmetadata/connectors/database/trino/index.md +++ b/openmetadata-docs/content/openmetadata/connectors/database/trino/index.md @@ -10,7 +10,7 @@ In this section, we provide guides and references to use the Trino connector. Configure and schedule Trino metadata and profiler workflows from the OpenMetadata UI: - [Requirements](#requirements) - [Metadata Ingestion](#metadata-ingestion) -- [Data Profiler and Quality Tests](#data-profiler-and-quality-tests) +- [Data Profiler](#data-profiler) - [DBT Integration](#dbt-integration) If you don't want to use the OpenMetadata Ingestion container to configure the workflows via the UI, then you can check @@ -217,7 +217,7 @@ caption="Edit and Deploy the Ingestion Pipeline" From the Connection tab, you can also Edit the Service if needed. -## Data Profiler and Quality Tests +## Data Profiler Note that from connector to connector, this recipe will always be the same. By updating the YAML configuration, you will be able to extract metadata from different sources. -## Data Profiler and Quality Tests +## Data Profiler The Data Profiler workflow will be using the `orm-profiler` processor. While the `serviceConnection` will still be the same to reach the source system, the `sourceConfig` will be diff --git a/openmetadata-docs/content/openmetadata/connectors/database/vertica/index.md b/openmetadata-docs/content/openmetadata/connectors/database/vertica/index.md index 75c36fbc6aa..8c3153936fd 100644 --- a/openmetadata-docs/content/openmetadata/connectors/database/vertica/index.md +++ b/openmetadata-docs/content/openmetadata/connectors/database/vertica/index.md @@ -10,7 +10,7 @@ In this section, we provide guides and references to use the Vertica connector. Configure and schedule Vertica metadata and profiler workflows from the OpenMetadata UI: - [Requirements](#requirements) - [Metadata Ingestion](#metadata-ingestion) -- [Data Profiler and Quality Tests](#data-profiler-and-quality-tests) +- [Data Profiler](#data-profiler) - [DBT Integration](#dbt-integration) If you don't want to use the OpenMetadata Ingestion container to configure the workflows via the UI, then you can check @@ -216,7 +216,7 @@ caption="Edit and Deploy the Ingestion Pipeline" From the Connection tab, you can also Edit the Service if needed. -## Data Profiler and Quality Tests +## Data Profiler - -Note that you can configure the ingestion pipelines with `source.config.data_profiler_enabled` as `"true"` or `"false"` to run the profiler as well during the metadata ingestion. This, however, **does not support** Quality Tests. - - - -### Profiling Overview -#### Requirements - -The source layer of the Profiling workflow is the OpenMetadata API. Based on the source configuration, this process lists the tables to be executed. - -#### Description - -The steps of the **Profiling** pipeline are the following: - -1. First, use the source configuration to create a connection. -2. Next, iterate over the selected tables and schemas that the Ingestion has previously recorded in OpenMetadata. -3. Run a default set of metrics to all the table's columns. (We will add more customization in the future releases). -4. 
Finally, compare the metrics' results against the configured Data Quality tests. - - - -Note that all the results are published to the OpenMetadata API, both the Profiling and the tests executions. This will allow users to visit the evolution of the data and its reliability directly in the UI. - - - -You can take a look at the supported metrics and tests here: - - - - - - -## How to Add Tests - -Tests are part of the Table Entity. We can add new tests to a Table from the UI or directly use the JSON configuration of the workflows. - - - -Note that in order to add tests and run the Profiler workflow, the metadata should have already been ingested. - - - -### Add Tests in the UI - -To create a new test, we can go to the _Table_ page under the _Data Quality_ tab: -Data Quality Tab in the Table Page - -Clicking on _Add Test_ will allow us two options: **Table Test** or **Column Test**. A Table Test will be run on metrics from the whole table, such as the number of rows or columns, while Column Tests are specific to each column's values. - -#### Add Table Tests - -Adding a Table Test will show us the following view: - -Add a Table Test - -* **Test Type**: It allows us to specify the test we want to configure. -* **Description**: To explain why the test is necessary and what scenarios we want to validate. -* **Value**: Different tests will show different values here. For example, the `tableColumnCountToEqual` requires us to specify the number of columns we expect. Other tests will have other forms when we need to add values such as `min` and `max`, while other tests require no value at all, such as tests validating that there are no nulls in a column. - -#### Add Column Tests - -Adding a Column Test will have a similar view: - -Add Column Test - -The Column Test form will be similar to the Table Test one. The only difference is the **Column Name** field, where we need to select the column we will be targeting for the test. - - - -You can review the supported tests [here](/openmetadata/data-quality/tests). We will keep expanding the support for new tests in the upcoming releases. - - - -Once tests are added, we will be able to see them in the _Data Quality_ tab: - -Freshly created tests - -Note how the tests are grouped in Table and Column tests. All tests from the same column will also be grouped together. From this view, we can both edit and delete the tests if needed. - -In the global Table information at the top, we will also be able to see how many Table Tests have been configured. - -### Add Tests with the JSON Config - -In the [connectors](/openmetadata/connectors) documentation for each source, we showcase how to run the Profiler Workflow using the Airflow SDK or the `metadata` CLI. When configuring the JSON configuration for the workflow, we can add tests as well. - -Any tests added to the JSON configuration will also be reflected in the Data Quality tab. This JSON configuration can be used for both the Airflow SDK and to run the workflow with the CLI. - -You can find further information on how to prepare the JSON configuration for each of the sources. 
However, adding any number of tests is a matter of updating the `processor` configuration as follows: - -```json - "processor": { - "type": "orm-profiler", - "config": { - "test_suite": { - "name": "", - "tests": [ - { - "table": "", - "table_tests": [ - { - "testCase": { - "config": { - "value": 100 - }, - "tableTestType": "tableRowCountToEqual" - } - } - ], - "column_tests": [ - { - "columnName": "", - "testCase": { - "config": { - "minValue": 0, - "maxValue": 99 - }, - "columnTestType": "columnValuesToBeBetween" - } - } - ] - } - ] - } - } - },son -``` - -`tests` is a list of test definitions that will be applied to the `table`, informed by its FQN. For each table, one can then define a list of `table_tests` and `column_tests`. Review the supported tests and their definitions to learn how to configure the different cases [here](/openmetadata/data-quality/tests). - -## How to Run Tests - -Both the Profiler and Tests are executed in the Profiler Workflow. All the results will be available through the UI in the _Profiler_ and _Data Quality_ tabs. - - - -To learn how to prepare and run the Profiler Workflow for a given source, you can take a look at the documentation for that specific [connector](/openmetadata/connectors). - -## Where are the Tests stored? - -Once you create a Test definition for a Table or any of its Columns, that Test becomes a part of the Table Entity. This means that it does not matter from where you create tests (JSON Configuration vs. UI). As once the test gets registered to OpenMetadata, it will always be executed as part of the Profiler Workflow. - -You can check what tests an Entity has configured in the **Data Quality** tab of the UI, or by using the API: - -```python -from metadata.ingestion.ometa.ometa_api import OpenMetadata -from metadata.ingestion.ometa.openmetadata_rest import MetadataServerConfig - -from metadata.generated.schema.entity.data.table import Table - - -server_config = MetadataServerConfig(api_endpoint="http://localhost:8585/api") -metadata = OpenMetadata(server_config) - -table = metadata.get_by_name(entity=Table, fqdn="FQDN", fields=["tests"]) -``` - -You can then check `table.tableTests`, or for each Column `column.columnTests` to get the test information. \ No newline at end of file diff --git a/openmetadata-docs/content/openmetadata/ingestion/workflows/data-quality/index.md b/openmetadata-docs/content/openmetadata/ingestion/workflows/data-quality/index.md new file mode 100644 index 00000000000..dd8c5366a83 --- /dev/null +++ b/openmetadata-docs/content/openmetadata/ingestion/workflows/data-quality/index.md @@ -0,0 +1,352 @@ +--- +title: Data Quality +slug: /openmetadata/ingestion/workflows/data-quality +--- + +# Data Quality +Learn how you can use OpenMetadata to define Data Quality tests and measure your data reliability. +## Requirements + +### OpenMetadata (version 0.12 or later) + +You must have a running deployment of OpenMetadata to use this guide. OpenMetadata includes the following services: + +* OpenMetadata server supporting the metadata APIs and user interface +* Elasticsearch for metadata search and discovery +* MySQL as the backing store for all metadata +* Airflow for metadata ingestion workflows + +To deploy OpenMetadata checkout the [deployment guide](/deployment) + +### Python (version 3.8.0 or later) + +Please use the following command to check the version of Python you have. + +``` +python3 --version +``` + +## Building Trust with Data Quality + +OpenMetadata is where all users share and collaborate around data. 
It is where you make your assets discoverable; with data quality you make these assets **trustable**.

This section will show you how to configure and run Data Quality pipelines with the OpenMetadata built-in tests.

## Main Concepts

### Test Suite
Test Suites are containers that allow you to group related Test Cases together. Once configured, a Test Suite can easily be deployed to execute all the Test Cases it contains.

### Test Definition
Test Definitions are generic definitions of a test, capturing elements such as:
- test name
- column name
- data type

### Test Cases
Test Cases specify a Test Definition: they define the condition a test must meet to be successful (e.g. `max=n`). One Test Definition can be linked to multiple Test Cases.

## Adding Tests Through the UI

**Note:** you will need to make sure you have the right permissions in OpenMetadata to create a test.

### Step 1: Creating a Test Suite
From your table service, click on the `profiler` tab. From there you will be able to create table tests by clicking on the purple background `Add Test` top button, or column tests by clicking on the white background `Add Test` button.

On the next page you will be able to either select an existing Test Suite or create a new one. If you select an existing one, your Test Case will automatically be added to that Test Suite.

### Step 2: Create a Test Case
On the next page, you will create a Test Case. You will need to select a Test Definition from the drop-down menu and specify the parameters of your Test Case.

**Note:** Test Case names need to be unique across the whole platform. A warning message will show if your Test Case name is not unique.

### Step 3: Add Ingestion Workflow
If you have created a new Test Suite, you will see a purple background `Add Ingestion` button after clicking `submit`. This will allow you to schedule the execution of your Test Suite. If you have selected an existing Test Suite, you are all set.

After clicking `Add Ingestion` you will be able to select an execution schedule for your Test Suite (note that you can edit this later). Once you have selected the desired schedule, click submit and you are all set.

## Adding Tests with the YAML Config
When creating a YAML config for a test workflow, the source configuration is very simple.
```
source:
  type: TestSuite
  serviceName:
  sourceConfig:
    config:
      type: TestSuite
```
The only section you need to modify here is the `serviceName` key. Note that this name needs to be unique across the OpenMetadata platform.

Once you have defined your source configuration, you'll need to define the processor configuration.
```
processor:
  type: "orm-test-runner"
  config:
    testSuites:
      - name: [test_suite_name]
        description: [test suite description]
        testCases:
          - name: [test_case_name]
            description: [test case description]
            testDefinitionName: [test definition name*]
            entityLink: ["<#E::table::fqn> or <#E::table::fqn::columns::column_name>"]
            parameterValues:
              - name: [column parameter name]
                value: [value]
              - ...
```
The processor type should be set to `"orm-test-runner"`. For accepted test definition names and parameter value names, refer to the [tests page](/openmetadata/ingestion/workflows/data-quality/tests).

`sink` and `workflowConfig` will have the same settings as the ingestion and profiler workflows.
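For reference, a minimal sketch of those two sections is shown below. The `openmetadata` auth provider and the JWT token placeholder are assumptions for a secured server; reuse whatever `sink` and `workflowConfig` blocks your existing ingestion or profiler workflows already run with.

```
sink:
  type: metadata-rest
  config: {}
workflowConfig:
  openMetadataServerConfig:
    hostPort: http://localhost:8585/api
    authProvider: openmetadata        # assumption: JWT-secured server
    securityConfig:
      jwtToken: <jwt-token>           # placeholder: use your bot's JWT token
```

An unsecured local deployment can instead use `authProvider: no-auth`, as in the full example below.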
### Full `yaml` config example

```
source:
  type: TestSuite
  serviceName: MyAwesomeTestSuite
  sourceConfig:
    config:
      type: TestSuite

processor:
  type: "orm-test-runner"
  config:
    testSuites:
      - name: test_suite_one
        description: this is a test testSuite to confirm test suite workflow works as expected
        testCases:
          - name: a_column_test
            description: A test case
            testDefinitionName: columnValuesToBeBetween
            entityLink: "<#E::table::local_redshift.dev.dbt_jaffle.customers::columns::number_of_orders>"
            parameterValues:
              - name: minValue
                value: 2
              - name: maxValue
                value: 20

sink:
  type: metadata-rest
  config: {}
workflowConfig:
  openMetadataServerConfig:
    hostPort: http://localhost:8585/api
    authProvider: no-auth
```

### How to Run Tests
To run the tests from the CLI, execute the following command:
```
metadata test -c /path/to/my/config.yaml
```

## How to Visualize Test Results
### From the Test Suite View
From the home page, click on the Test Suite menu in the left panel.

This will bring you to the Test Suite page where you can select a specific Test Suite.

From there you can select a Test Suite and visualize the results associated with this specific Test Suite.

### From a Table Entity
Navigate to your table and click on the `profiler` tab. From there you'll be able to see test results at the table or column level.

#### Table Level Test Results
In the top panel, click on the white background `Data Quality` button. This will bring you to a summary of all your quality tests at the table level.

#### Column Level Test Results
On the profiler page, click on a specific column name. This will bring you to a new page where you can click the white background `Quality Test` button to see all the test results related to your column.

## Adding Custom Tests
While OpenMetadata provides tests out of the box, you may want to write test results from your own custom quality test suite. This is very easy to do using the API.

### Creating a `TestDefinition`
First, you'll need to create a Test Definition for your test. You can use the `/api/v1/testDefinition` endpoint with a POST request to create your Test Definition. You will need to pass the following data in the body of your request at minimum.

```
{
    "description": "",
    "entityType": "",
    "name": "",
    "testPlatforms": [""],
    "parameterDefinition": [
        {
            "name": ""
        },
        {
            "name": ""
        }
    ]
}
```

Here is a complete CURL request:

```
curl --request POST 'http://localhost:8585/api/v1/testDefinition' \
--header 'Content-Type: application/json' \
--data-raw '{
    "description": "A demo custom test",
    "entityType": "TABLE",
    "name": "demo_test_definition",
    "testPlatforms": ["Soda", "DBT"],
    "parameterDefinition": [{
        "name": "ColumnOne"
    }]
}'
```

Make sure to keep the `UUID` from the response as you will need it to create the Test Case.

### Creating a `TestSuite`
You'll also need to create a Test Suite for your Test Case -- note that you can also use an existing one if you want to. You can use the `/api/v1/testSuite` endpoint with a POST request to create your Test Suite. You will need to pass the following data in the body of your request at minimum.

```
{
    "name": "",
    "description": ""
}
```

Here is a complete CURL request:

```
curl --request POST 'http://localhost:8585/api/v1/testSuite' \
--header 'Content-Type: application/json' \
--data-raw '{
    "name": "",
    "description": ""
}'
```

Make sure to keep the `UUID` from the response as you will need it to create the Test Case.

### Creating a `TestCase`
Once you have your Test Definition created, you can create a Test Case -- which is a specification of your Test Definition. You can use the `/api/v1/testCase` endpoint with a POST request to create your Test Case. You will need to pass the following data in the body of your request at minimum.

```
{
    "entityLink": "<#E::table::fqn> or <#E::table::fqn::columns::column name>",
    "name": "",
    "testDefinition": {
        "id": "",
        "type": "testDefinition"
    },
    "testSuite": {
        "id": "",
        "type": "testSuite"
    }
}
```
**Important:** for `entityLink` make sure to include the starting and ending `<>`.

Here is a complete CURL request:

```
curl --request POST 'http://localhost:8585/api/v1/testCase' \
--header 'Content-Type: application/json' \
--data-raw '{
    "entityLink": "<#E::table::local_redshift.dev.dbt_jaffle.customers>",
    "name": "custom_test_Case",
    "testDefinition": {
        "id": "1f3ce6f5-67be-45db-8314-2ee42d73239f",
        "type": "testDefinition"
    },
    "testSuite": {
        "id": "3192ed9b-5907-475d-a623-1b3a1ef4a2f6",
        "type": "testSuite"
    },
    "parameterValues": [
        {
            "name": "colName",
            "value": 10
        }
    ]
}'
```

Make sure to note the fully qualified name of your Test Case from the response, as you will need it to write the Test Case Results.

### Writing `TestCaseResults`
Once you have your Test Case created, you can write your results to it. You can use the `/api/v1/testCase/{test FQN}/testCaseResult` endpoint with a PUT request to add Test Case Results. You will need to pass the following data in the body of your request at minimum.

```
{
    "result": "",
    "testCaseStatus": "",
    "timestamp": <timestamp>,
    "testResultValue": [
        {
            "value": ""
        }
    ]
}
```

Here is a complete CURL request:

```
curl --location --request PUT 'http://localhost:8585/api/v1/testCase/local_redshift.dev.dbt_jaffle.customers.custom_test_Case/testCaseResult' \
--header 'Content-Type: application/json' \
--data-raw '{
    "result": "found 1 values expected n",
    "testCaseStatus": "Success",
    "timestamp": 1662129151,
    "testResultValue": [{
        "value": "10"
    }]
}'
```
+ diff --git a/openmetadata-docs/content/openmetadata/data-quality/tests.md b/openmetadata-docs/content/openmetadata/ingestion/workflows/data-quality/tests.md similarity index 99% rename from openmetadata-docs/content/openmetadata/data-quality/tests.md rename to openmetadata-docs/content/openmetadata/ingestion/workflows/data-quality/tests.md index 43db8dbc226..475a7124476 100644 --- a/openmetadata-docs/content/openmetadata/data-quality/tests.md +++ b/openmetadata-docs/content/openmetadata/ingestion/workflows/data-quality/tests.md @@ -1,6 +1,6 @@ --- title: Tests -slug: /openmetadata/data-quality/tests +slug: /openmetadata/ingestion/workflows/data-quality/tests --- # Tests diff --git a/openmetadata-docs/images/openmetadata/data-quality/column-test.png b/openmetadata-docs/images/openmetadata/data-quality/column-test.png deleted file mode 100644 index 52029fb5dd4..00000000000 Binary files a/openmetadata-docs/images/openmetadata/data-quality/column-test.png and /dev/null differ diff --git a/openmetadata-docs/images/openmetadata/data-quality/created-tests.png b/openmetadata-docs/images/openmetadata/data-quality/created-tests.png deleted file mode 100644 index 93e1d1a7438..00000000000 Binary files a/openmetadata-docs/images/openmetadata/data-quality/created-tests.png and /dev/null differ diff --git a/openmetadata-docs/images/openmetadata/data-quality/data-quality-tab.png b/openmetadata-docs/images/openmetadata/data-quality/data-quality-tab.png deleted file mode 100644 index 16f2ccb74c8..00000000000 Binary files a/openmetadata-docs/images/openmetadata/data-quality/data-quality-tab.png and /dev/null differ diff --git a/openmetadata-docs/images/openmetadata/data-quality/table-test.png b/openmetadata-docs/images/openmetadata/data-quality/table-test.png deleted file mode 100644 index 61bf467aa54..00000000000 Binary files a/openmetadata-docs/images/openmetadata/data-quality/table-test.png and /dev/null differ diff --git a/openmetadata-docs/images/openmetadata/data-quality/test-results.png b/openmetadata-docs/images/openmetadata/data-quality/test-results.png deleted file mode 100644 index 8ce5f68bf3b..00000000000 Binary files a/openmetadata-docs/images/openmetadata/data-quality/test-results.png and /dev/null differ diff --git a/openmetadata-docs/images/openmetadata/ingestion/workflows/data-quality/colum-level-test-results.png b/openmetadata-docs/images/openmetadata/ingestion/workflows/data-quality/colum-level-test-results.png new file mode 100644 index 00000000000..3ec0aefedbc Binary files /dev/null and b/openmetadata-docs/images/openmetadata/ingestion/workflows/data-quality/colum-level-test-results.png differ diff --git a/openmetadata-docs/images/openmetadata/ingestion/workflows/data-quality/ingestion-page.png b/openmetadata-docs/images/openmetadata/ingestion/workflows/data-quality/ingestion-page.png new file mode 100644 index 00000000000..099b3a57d1d Binary files /dev/null and b/openmetadata-docs/images/openmetadata/ingestion/workflows/data-quality/ingestion-page.png differ diff --git a/openmetadata-docs/images/openmetadata/ingestion/workflows/data-quality/profiler-tab-view.png b/openmetadata-docs/images/openmetadata/ingestion/workflows/data-quality/profiler-tab-view.png new file mode 100644 index 00000000000..d0d2ec34d63 Binary files /dev/null and b/openmetadata-docs/images/openmetadata/ingestion/workflows/data-quality/profiler-tab-view.png differ diff --git a/openmetadata-docs/images/openmetadata/ingestion/workflows/data-quality/table-results-entity.png 
b/openmetadata-docs/images/openmetadata/ingestion/workflows/data-quality/table-results-entity.png new file mode 100644 index 00000000000..8268b70b504 Binary files /dev/null and b/openmetadata-docs/images/openmetadata/ingestion/workflows/data-quality/table-results-entity.png differ diff --git a/openmetadata-docs/images/openmetadata/ingestion/workflows/data-quality/test-case-page.png b/openmetadata-docs/images/openmetadata/ingestion/workflows/data-quality/test-case-page.png new file mode 100644 index 00000000000..e3458338aa9 Binary files /dev/null and b/openmetadata-docs/images/openmetadata/ingestion/workflows/data-quality/test-case-page.png differ diff --git a/openmetadata-docs/images/openmetadata/ingestion/workflows/data-quality/test-suite-home-page.png b/openmetadata-docs/images/openmetadata/ingestion/workflows/data-quality/test-suite-home-page.png new file mode 100644 index 00000000000..df2f3e811bc Binary files /dev/null and b/openmetadata-docs/images/openmetadata/ingestion/workflows/data-quality/test-suite-home-page.png differ diff --git a/openmetadata-docs/images/openmetadata/ingestion/workflows/data-quality/test-suite-landing.png b/openmetadata-docs/images/openmetadata/ingestion/workflows/data-quality/test-suite-landing.png new file mode 100644 index 00000000000..352ac974e9e Binary files /dev/null and b/openmetadata-docs/images/openmetadata/ingestion/workflows/data-quality/test-suite-landing.png differ diff --git a/openmetadata-docs/images/openmetadata/ingestion/workflows/data-quality/test-suite-page.png b/openmetadata-docs/images/openmetadata/ingestion/workflows/data-quality/test-suite-page.png new file mode 100644 index 00000000000..dee031156c7 Binary files /dev/null and b/openmetadata-docs/images/openmetadata/ingestion/workflows/data-quality/test-suite-page.png differ diff --git a/openmetadata-docs/images/openmetadata/ingestion/workflows/data-quality/test-suite-results.png b/openmetadata-docs/images/openmetadata/ingestion/workflows/data-quality/test-suite-results.png new file mode 100644 index 00000000000..c5eb696e4c1 Binary files /dev/null and b/openmetadata-docs/images/openmetadata/ingestion/workflows/data-quality/test-suite-results.png differ diff --git a/openmetadata-docs/images/openmetadata/data-quality/tests/create-test-from-profiler-tab.png b/openmetadata-docs/images/openmetadata/ingestion/workflows/data-quality/tests/create-test-from-profiler-tab.png similarity index 100% rename from openmetadata-docs/images/openmetadata/data-quality/tests/create-test-from-profiler-tab.png rename to openmetadata-docs/images/openmetadata/ingestion/workflows/data-quality/tests/create-test-from-profiler-tab.png diff --git a/openmetadata-docs/images/openmetadata/data-quality/tests/sample-form-to-create-test.png b/openmetadata-docs/images/openmetadata/ingestion/workflows/data-quality/tests/sample-form-to-create-test.png similarity index 100% rename from openmetadata-docs/images/openmetadata/data-quality/tests/sample-form-to-create-test.png rename to openmetadata-docs/images/openmetadata/ingestion/workflows/data-quality/tests/sample-form-to-create-test.png diff --git a/openmetadata-docs/images/openmetadata/data-quality/tests/write-your-first-tests.png b/openmetadata-docs/images/openmetadata/ingestion/workflows/data-quality/tests/write-your-first-tests.png similarity index 100% rename from openmetadata-docs/images/openmetadata/data-quality/tests/write-your-first-tests.png rename to openmetadata-docs/images/openmetadata/ingestion/workflows/data-quality/tests/write-your-first-tests.png