mirror of
https://github.com/open-metadata/OpenMetadata.git
synced 2025-08-16 04:57:11 +00:00
Fix #10584: Tableau E2E and docs (#11054)
This commit is contained in:
parent
ec794d3eb9
commit
ea70580aff
2
.github/workflows/py-cli-e2e-tests.yml
vendored
@@ -21,7 +21,7 @@ jobs:
    strategy:
      fail-fast: false
      matrix:
        e2e-test: ['python', 'mysql', 'bigquery', 'snowflake', 'dbt_redshift', 'mssql', 'vertica']
        e2e-test: ['python', 'mysql', 'bigquery', 'snowflake', 'dbt_redshift', 'mssql', 'vertica', 'tableau']
    environment: test

    steps:
@@ -197,6 +197,9 @@ class MetadataRestSink(Sink[Entity]):
        """
        try:
            self.metadata.create_or_update(record.classification_request)
            self.status.records_written(
                f"Classification: {record.classification_request.name.__root__}"
            )
        except Exception as exc:
            logger.debug(traceback.format_exc())
            logger.warning(
@@ -204,6 +207,7 @@ class MetadataRestSink(Sink[Entity]):
            )
        try:
            self.metadata.create_or_update(record.tag_request)
            self.status.records_written(f"Tag: {record.tag_request.name.__root__}")
        except Exception as exc:
            logger.debug(traceback.format_exc())
            logger.warning(
@@ -1,22 +1,12 @@
# E2E CLI tests

Currently, it runs CLI tests for any database connector.

### How to add a connector

- `test_cli_db_base` has 8 test definitions for database connectors. It is an abstract class.
- `test_cli_db_base_common` is another abstract class for those connectors whose sources implement the `CommonDbSourceService` class.
  - It partially implements some methods from `test_cli_db_base`.
- `test_cli_{connector}` is the specific connector test. More tests besides the ones implemented by `test_cli_db_base` can be run inside this class.

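The split the bullets above describe — an abstract suite that defines the tests once and connector classes that only fill in the specifics — can be sketched in plain Python. This is a simplified stand-in, not the actual OpenMetadata classes:

```python
from abc import ABC, abstractmethod


class CliDBBaseSuite(ABC):
    """Simplified stand-in for the abstract base suite: the test logic
    lives here once, connector details are delegated to subclasses."""

    @staticmethod
    @abstractmethod
    def get_connector_name() -> str:
        ...

    def test_vanilla_ingestion(self) -> str:
        # the real suite would run the CLI; here we just report the connector
        return f"ingesting with {self.get_connector_name()}"


class MysqlSuite(CliDBBaseSuite):
    """Connector-specific suite: only supplies the connector name."""

    @staticmethod
    def get_connector_name() -> str:
        return "mysql"
```

Every `test_cli_{connector}` class inherits all eight test definitions this way and cannot be instantiated until the abstract getters are implemented.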
## How to add a database connector

1. Use `test_cli_mysql.py` as an example. Your connector E2E CLI test must follow the naming convention `test_cli_{connector}.py`, and the test
class must extend from `CliCommonDB.TestSuite` if the connector's source implements the `CommonDbSourceService` class; otherwise, from `CliDBBase.TestSuite`.

2. Add an ingestion YAML file with the service and its credentials. Use a Dockerized environment when possible; otherwise, remember to use environment
1. Add an ingestion YAML file with the service and its credentials. Use a Dockerized environment when possible; otherwise, remember to use environment
variables for sensitive information when external resources are involved. On each test, the YAML file is modified by the `build_yaml` method, which creates
a copy of the file and prepares it for the tests. This way, we avoid adding (and maintaining) an extra YAML file for each test.
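The copy-and-modify step above can be sketched in plain Python (the helper and config keys here are illustrative, not the real `build_yaml` signature):

```python
import copy


def build_test_config(config: dict, schema_filter: dict = None) -> dict:
    """Return a modified deep copy of the base config; the original dict
    stays untouched, so one YAML file serves every test variant."""
    test_config = copy.deepcopy(config)
    if schema_filter is not None:
        test_config["source"]["sourceConfig"]["config"][
            "schemaFilterPattern"
        ] = schema_filter
    return test_config


base = {"source": {"sourceConfig": {"config": {"type": "DatabaseMetadata"}}}}
filtered = build_test_config(base, {"includes": ["my_schema"]})
```

Because the copy is deep, filter patterns injected for one test never leak into the base configuration used by the next.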

3. The `{connector}` name must be added to the list of connectors in the GH Action: `.github/workflows/py-cli-e2e-tests.yml`
2. The `{connector}` name must be added to the list of connectors in the GH Action: `.github/workflows/py-cli-e2e-tests.yml`

```yaml

@@ -29,7 +19,21 @@ jobs:
    e2e-test: ['mysql', '{connector}']
```

4. If it is a database connector whose source implements the `CommonDbSourceService` class, these methods must be overridden:
## Database connectors

Currently, it runs CLI tests for any database connector.

- `./base/test_cli_db` has 8 test definitions for database connectors. It is an abstract class.
- `./common/test_cli_db` is another abstract class for those connectors whose sources implement the `CommonDbSourceService` class.
  - It partially implements some methods from `test_cli_db_base`.
- `test_cli_{connector}` is the specific connector test. More tests besides the ones implemented by `./base/test_cli_db` can be run inside this class.

### How to add a database connector

1. Use `test_cli_mysql.py` as an example. Your connector E2E CLI test must follow the naming convention `test_cli_{connector}.py`, and the test
class must extend from `CliCommonDB.TestSuite` if the connector's source implements the `CommonDbSourceService` class; otherwise, from `CliDBBase.TestSuite`.

2. If it is a database connector whose source implements the `CommonDbSourceService` class, these methods must be overridden:

```python
# the connector name
@@ -89,8 +93,82 @@ jobs:
    pass
```


## Dashboard connectors

Currently, it runs CLI tests for any dashboard connector.

- `./base/test_cli_dashboard` has 3 test definitions for dashboard connectors. It is an abstract class.
- `./common/test_cli_dashboard` is another class that partially implements some methods from `test_cli_dashboard_base`.
- `test_cli_{connector}` is the specific connector test. More tests besides the ones implemented by `./base/test_cli_dashboard` can be run inside this class.

### How to add a dashboard connector

1. Use `test_cli_tableau.py` as an example. Your connector E2E CLI test must follow the naming convention `test_cli_{connector}.py`, and the test
class must extend from `CliCommonDashboard.TestSuite`.

2. These methods must be overridden:

```python
# in case we want to do something before running the tests
def prepare() -> None:
    pass

# the connector name
def get_connector_name() -> str:
    return "{connector}"

# the dashboards to include in filters
def get_includes_dashboards() -> List[str]:
    pass

# the dashboards to exclude in filters
def get_excludes_dashboards() -> List[str]:
    pass

# the charts to include in filters
def get_includes_charts() -> List[str]:
    pass

# the charts to exclude in filters
def get_excludes_charts() -> List[str]:
    pass

# the data models to include in filters
def get_includes_datamodels() -> List[str]:
    pass

# the data models to exclude in filters
def get_excludes_datamodels() -> List[str]:
    pass

# expected number of entities to be ingested
def expected_entities() -> int:
    pass

# expected number of lineage edges to be ingested
def expected_lineage() -> int:
    pass

# expected number of tags to be ingested
def expected_tags() -> int:
    pass

# expected number of entities to be filtered when testing the include tags and data models options
def expected_not_included_entities() -> int:
    pass

# expected number of entities to be filtered in the sink step when testing the include tags and data models options
def expected_not_included_sink_entities() -> int:
    pass

# expected number of entities to be filtered out when testing a mix of filters
def expected_filtered_mix() -> int:
    pass

# expected number of entities to be filtered out in the sink step when testing a mix of filters
def expected_filtered_sink_mix() -> int:
    pass
```
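A concrete dashboard suite then only fills in those getters. The sketch below shows the override pattern with a simplified stand-in base class; the connector name, filter regex, and expected counts are placeholder values, not real Tableau fixtures:

```python
from typing import List


class DashboardSuiteStub:
    """Simplified stand-in for CliCommonDashboard.TestSuite: the base
    tests consume the getters a concrete connector test overrides."""

    def summary(self) -> dict:
        return {
            "connector": self.get_connector_name(),
            "entities": self.expected_entities(),
            "tags": self.expected_tags(),
        }


class TableauCliTest(DashboardSuiteStub):
    @staticmethod
    def get_connector_name() -> str:
        return "tableau"

    @staticmethod
    def get_includes_dashboards() -> List[str]:
        return [".*Sales.*"]  # regex filter, placeholder value

    def expected_entities(self) -> int:
        return 10  # placeholder count

    def expected_tags(self) -> int:
        return 2  # placeholder count
```

The assertions in the base tests compare the ingestion status counts against these expected values.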
0
ingestion/tests/cli_e2e/base/__init__.py
Normal file
181
ingestion/tests/cli_e2e/base/test_cli.py
Normal file
@@ -0,0 +1,181 @@
# Copyright 2022 Collate
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""
Test database connectors with CLI
"""
import os
import re
import subprocess
from abc import ABC, abstractmethod
from enum import Enum
from pathlib import Path

import yaml

from metadata.config.common import load_config_file
from metadata.ingestion.api.sink import SinkStatus
from metadata.ingestion.api.source import SourceStatus
from metadata.ingestion.api.workflow import Workflow
from metadata.ingestion.ometa.ometa_api import OpenMetadata
from metadata.utils.constants import UTF_8

PATH_TO_RESOURCES = os.path.dirname(Path(os.path.realpath(__file__)).parent)

REGEX_AUX = {"log": r"\s+\[[^]]+]\s+[A-Z]+\s+[^}]+}\s+-\s+"}


class E2EType(Enum):
    """
    E2E Type Enum Class
    """

    INGEST = "ingest"
    PROFILER = "profiler"
    INGEST_DB_FILTER_SCHEMA = "ingest-db-filter-schema"
    INGEST_DB_FILTER_TABLE = "ingest-db-filter-table"
    INGEST_DB_FILTER_MIX = "ingest-db-filter-mix"
    INGEST_DASHBOARD_FILTER_MIX = "ingest-dashboard-filter-mix"
    INGEST_DASHBOARD_NOT_INCLUDING = "ingest-dashboard-not-including"

class CliBase(ABC):
    """
    CLI Base class
    """

    openmetadata: OpenMetadata
    test_file_path: str
    config_file_path: str

    def run_command(self, command: str = "ingest", test_file_path=None) -> str:
        file_path = (
            test_file_path if test_file_path is not None else self.test_file_path
        )
        args = [
            "metadata",
            command,
            "-c",
            file_path,
        ]
        process_status = subprocess.Popen(args, stderr=subprocess.PIPE)
        _, stderr = process_status.communicate()
        return stderr.decode("utf-8")

    def retrieve_lineage(self, entity_fqn: str) -> dict:
        return self.openmetadata.client.get(
            f"/lineage/table/name/{entity_fqn}?upstreamDepth=3&downstreamDepth=3"
        )

    def build_config_file(
        self, test_type: E2EType = E2EType.INGEST, extra_args: dict = None
    ) -> None:
        with open(self.config_file_path, encoding=UTF_8) as config_file:
            config_yaml = yaml.safe_load(config_file)
            config_yaml = self.build_yaml(config_yaml, test_type, extra_args)
            with open(self.test_file_path, "w", encoding=UTF_8) as test_file:
                yaml.dump(config_yaml, test_file)

    def retrieve_statuses(self, result):
        source_status: SourceStatus = self.extract_source_status(result)
        sink_status: SinkStatus = self.extract_sink_status(result)
        return sink_status, source_status

    @staticmethod
    def get_workflow(connector: str, test_type: str) -> Workflow:
        config_file = Path(
            PATH_TO_RESOURCES + f"/{test_type}/{connector}/{connector}.yaml"
        )
        config_dict = load_config_file(config_file)
        return Workflow.create(config_dict)

    @staticmethod
    def extract_source_status(output) -> SourceStatus:
        output_clean = output.replace("\n", " ")
        output_clean = re.sub(" +", " ", output_clean)
        output_clean_ansi = re.compile(r"\x1b[^m]*m")
        output_clean = output_clean_ansi.sub(" ", output_clean)
        regex = r"Source Status:%(log)s(.*?)%(log)sSink Status: .*" % REGEX_AUX
        output_clean = re.findall(regex, output_clean.strip())
        return SourceStatus.parse_obj(
            eval(output_clean[0].strip())  # pylint: disable=eval-used
        )

    @staticmethod
    def extract_sink_status(output) -> SinkStatus:
        output_clean = output.replace("\n", " ")
        output_clean = re.sub(" +", " ", output_clean)
        output_clean_ansi = re.compile(r"\x1b[^m]*m")
        output_clean = output_clean_ansi.sub("", output_clean)
        if re.match(".* Processor Status: .*", output_clean):
            regex = r"Sink Status:%(log)s(.*?)%(log)sProcessor Status: .*" % REGEX_AUX
            output_clean = re.findall(regex, output_clean.strip())[0].strip()
        else:
            regex = r".*Sink Status:%(log)s(.*?)%(log)sWorkflow Summary.*" % REGEX_AUX
            output_clean = re.findall(regex, output_clean.strip())[0].strip()
        return SinkStatus.parse_obj(eval(output_clean))  # pylint: disable=eval-used

    @staticmethod
    def build_yaml(config_yaml: dict, test_type: E2EType, extra_args: dict):
        """
        Build yaml as per E2EType
        """
        if test_type == E2EType.PROFILER:
            del config_yaml["source"]["sourceConfig"]["config"]
            config_yaml["source"]["sourceConfig"] = {
                "config": {
                    "type": "Profiler",
                    "generateSampleData": True,
                    "profileSample": extra_args.get("profileSample", 1)
                    if extra_args
                    else 1,
                }
            }
            config_yaml["processor"] = {"type": "orm-profiler", "config": {}}
        if test_type == E2EType.INGEST_DB_FILTER_SCHEMA:
            config_yaml["source"]["sourceConfig"]["config"][
                "schemaFilterPattern"
            ] = extra_args
        if test_type == E2EType.INGEST_DB_FILTER_TABLE:
            config_yaml["source"]["sourceConfig"]["config"][
                "tableFilterPattern"
            ] = extra_args
        if test_type == E2EType.INGEST_DB_FILTER_MIX:
            config_yaml["source"]["sourceConfig"]["config"][
                "schemaFilterPattern"
            ] = extra_args["schema"]
            config_yaml["source"]["sourceConfig"]["config"][
                "tableFilterPattern"
            ] = extra_args["table"]
        if test_type == E2EType.INGEST_DASHBOARD_FILTER_MIX:
            config_yaml["source"]["sourceConfig"]["config"][
                "dashboardFilterPattern"
            ] = extra_args["dashboards"]
            config_yaml["source"]["sourceConfig"]["config"][
                "chartFilterPattern"
            ] = extra_args["charts"]
            config_yaml["source"]["sourceConfig"]["config"][
                "dataModelFilterPattern"
            ] = extra_args["dataModels"]
        if test_type == E2EType.INGEST_DASHBOARD_NOT_INCLUDING:
            config_yaml["source"]["sourceConfig"]["config"]["includeTags"] = extra_args[
                "includeTags"
            ]
            config_yaml["source"]["sourceConfig"]["config"][
                "includeDataModels"
            ] = extra_args["includeDataModels"]

        return config_yaml

    @staticmethod
    @abstractmethod
    def get_test_type():
        pass
149
ingestion/tests/cli_e2e/base/test_cli_dashboard.py
Normal file
@@ -0,0 +1,149 @@
# Copyright 2022 Collate
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""
Test dashboard connectors with CLI
"""
from abc import abstractmethod
from typing import List
from unittest import TestCase

import pytest

from metadata.ingestion.api.sink import SinkStatus
from metadata.ingestion.api.source import SourceStatus

from .test_cli import CliBase, E2EType


class CliDashboardBase(TestCase):
    """
    CLI Dashboard Base class
    """

    class TestSuite(TestCase, CliBase):  # pylint: disable=too-many-public-methods
        """
        TestSuite class to define test structure
        """

        # 1. deploy without including tags and data models
        @pytest.mark.order(1)
        def test_not_including(self) -> None:
            # do anything needed before running the first test
            self.prepare()
            # build config file for ingest without including data models and tags
            self.build_config_file(
                E2EType.INGEST_DASHBOARD_NOT_INCLUDING,
                {
                    "includeTags": "False",
                    "includeDataModels": "False",
                },
            )
            # run ingest
            result = self.run_command()
            sink_status, source_status = self.retrieve_statuses(result)
            self.assert_not_including(source_status, sink_status)

        # 2. deploy vanilla ingestion including lineage, tags and data models
        @pytest.mark.order(2)
        def test_vanilla_ingestion(self) -> None:
            # build config file for ingest
            self.build_config_file(E2EType.INGEST)
            # run ingest
            result = self.run_command()
            sink_status, source_status = self.retrieve_statuses(result)
            self.assert_for_vanilla_ingestion(source_status, sink_status)

        # 3. deploy with mixed filter patterns
        @pytest.mark.order(3)
        def test_filter_mix(self) -> None:
            # build config file for ingest with filters
            self.build_config_file(
                E2EType.INGEST_DASHBOARD_FILTER_MIX,
                {
                    "dashboards": {
                        "includes": self.get_includes_dashboards(),
                        "excludes": self.get_excludes_dashboards(),
                    },
                    "charts": {
                        "includes": self.get_includes_charts(),
                        "excludes": self.get_excludes_charts(),
                    },
                    "dataModels": {
                        "includes": self.get_includes_datamodels(),
                        "excludes": self.get_excludes_datamodels(),
                    },
                },
            )
            # run ingest
            result = self.run_command()
            sink_status, source_status = self.retrieve_statuses(result)
            self.assert_filtered_mix(source_status, sink_status)

        @staticmethod
        @abstractmethod
        def get_connector_name() -> str:
            raise NotImplementedError()

        @abstractmethod
        def assert_not_including(
            self, source_status: SourceStatus, sink_status: SinkStatus
        ):
            raise NotImplementedError()

        @abstractmethod
        def assert_for_vanilla_ingestion(
            self, source_status: SourceStatus, sink_status: SinkStatus
        ) -> None:
            raise NotImplementedError()

        @abstractmethod
        def assert_filtered_mix(
            self, source_status: SourceStatus, sink_status: SinkStatus
        ):
            raise NotImplementedError()

        @staticmethod
        @abstractmethod
        def get_includes_dashboards() -> List[str]:
            raise NotImplementedError()

        @staticmethod
        @abstractmethod
        def get_excludes_dashboards() -> List[str]:
            raise NotImplementedError()

        @staticmethod
        @abstractmethod
        def get_includes_charts() -> List[str]:
            raise NotImplementedError()

        @staticmethod
        @abstractmethod
        def get_excludes_charts() -> List[str]:
            raise NotImplementedError()

        @staticmethod
        @abstractmethod
        def get_includes_datamodels() -> List[str]:
            raise NotImplementedError()

        @staticmethod
        @abstractmethod
        def get_excludes_datamodels() -> List[str]:
            raise NotImplementedError()

        @staticmethod
        def get_test_type() -> str:
            return "dashboard"

        def prepare(self) -> None:
            pass
@@ -12,41 +12,17 @@
"""
Test database connectors with CLI
"""
import os
import re
import subprocess
from abc import abstractmethod
from enum import Enum
from pathlib import Path
from typing import List
from unittest import TestCase

import pytest
import yaml

from metadata.config.common import load_config_file
from metadata.generated.schema.entity.data.table import Table
from metadata.ingestion.api.sink import SinkStatus
from metadata.ingestion.api.source import SourceStatus
from metadata.ingestion.api.workflow import Workflow
from metadata.ingestion.ometa.ometa_api import OpenMetadata
from metadata.utils.constants import UTF_8

PATH_TO_RESOURCES = os.path.dirname(os.path.realpath(__file__))

REGEX_AUX = {"log": r"\s+\[[^]]+]\s+[A-Z]+\s+[^}]+}\s+-\s+"}


class E2EType(Enum):
    """
    E2E Type Enum Class
    """

    INGEST = "ingest"
    PROFILER = "profiler"
    INGEST_FILTER_SCHEMA = "ingest-filter-schema"
    INGEST_FILTER_TABLE = "ingest-filter-table"
    INGEST_FILTER_MIX = "ingest-filter-mix"
from .test_cli import CliBase, E2EType


class CliDBBase(TestCase):
@@ -54,15 +30,11 @@ class CliDBBase(TestCase):
    CLI DB Base class
    """

    class TestSuite(TestCase):  # pylint: disable=too-many-public-methods
    class TestSuite(TestCase, CliBase):  # pylint: disable=too-many-public-methods
        """
        TestSuite class to define test structure
        """

        openmetadata: OpenMetadata
        test_file_path: str
        config_file_path: str

        # 1. deploy vanilla ingestion
        @pytest.mark.order(1)
        def test_vanilla_ingestion(self) -> None:
@@ -111,7 +83,8 @@ class CliDBBase(TestCase):
        def test_schema_filter_includes(self) -> None:
            # build config file for ingest with filters
            self.build_config_file(
                E2EType.INGEST_FILTER_SCHEMA, {"includes": self.get_includes_schemas()}
                E2EType.INGEST_DB_FILTER_SCHEMA,
                {"includes": self.get_includes_schemas()},
            )
            # run ingest
            result = self.run_command()
@@ -124,7 +97,8 @@ class CliDBBase(TestCase):
        def test_schema_filter_excludes(self) -> None:
            # build config file for ingest with filters
            self.build_config_file(
                E2EType.INGEST_FILTER_SCHEMA, {"excludes": self.get_includes_schemas()}
                E2EType.INGEST_DB_FILTER_SCHEMA,
                {"excludes": self.get_includes_schemas()},
            )
            # run ingest
            result = self.run_command()
@@ -136,7 +110,7 @@ class CliDBBase(TestCase):
        def test_table_filter_includes(self) -> None:
            # build config file for ingest with filters
            self.build_config_file(
                E2EType.INGEST_FILTER_TABLE, {"includes": self.get_includes_tables()}
                E2EType.INGEST_DB_FILTER_TABLE, {"includes": self.get_includes_tables()}
            )
            # run ingest
            result = self.run_command()
@@ -149,7 +123,7 @@ class CliDBBase(TestCase):
        def test_table_filter_excludes(self) -> None:
            # build config file for ingest with filters
            self.build_config_file(
                E2EType.INGEST_FILTER_TABLE, {"excludes": self.get_includes_tables()}
                E2EType.INGEST_DB_FILTER_TABLE, {"excludes": self.get_includes_tables()}
            )
            # run ingest
            result = self.run_command()
@@ -161,7 +135,7 @@ class CliDBBase(TestCase):
        def test_table_filter_mix(self) -> None:
            # build config file for ingest with filters
            self.build_config_file(
                E2EType.INGEST_FILTER_MIX,
                E2EType.INGEST_DB_FILTER_MIX,
                {
                    "schema": {"includes": self.get_includes_schemas()},
                    "table": {
@@ -187,31 +161,6 @@ class CliDBBase(TestCase):
            # to be implemented
            pass

        def run_command(self, command: str = "ingest") -> str:
            args = [
                "metadata",
                command,
                "-c",
                self.test_file_path,
            ]
            process_status = subprocess.Popen(args, stderr=subprocess.PIPE)
            _, stderr = process_status.communicate()
            return stderr.decode("utf-8")

        def build_config_file(
            self, test_type: E2EType = E2EType.INGEST, extra_args: dict = None
        ) -> None:
            with open(self.config_file_path, encoding=UTF_8) as config_file:
                config_yaml = yaml.safe_load(config_file)
                config_yaml = self.build_yaml(config_yaml, test_type, extra_args)
                with open(self.test_file_path, "w", encoding=UTF_8) as test_file:
                    yaml.dump(config_yaml, test_file)

        def retrieve_statuses(self, result):
            source_status: SourceStatus = self.extract_source_status(result)
            sink_status: SinkStatus = self.extract_sink_status(result)
            return sink_status, source_status

        def retrieve_table(self, table_name_fqn: str) -> Table:
            return self.openmetadata.get_by_name(entity=Table, fqn=table_name_fqn)

@@ -221,49 +170,11 @@ class CliDBBase(TestCase):
            )
            return self.openmetadata.get_sample_data(table=table)

        def retrieve_lineage(self, table_name_fqn: str) -> dict:
        def retrieve_lineage(self, entity_fqn: str) -> dict:
            return self.openmetadata.client.get(
                f"/lineage/table/name/{table_name_fqn}?upstreamDepth=3&downstreamDepth=3"
                f"/lineage/table/name/{entity_fqn}?upstreamDepth=3&downstreamDepth=3"
            )

        @staticmethod
        def get_workflow(connector: str) -> Workflow:
            config_file = Path(
                PATH_TO_RESOURCES + f"/database/{connector}/{connector}.yaml"
            )
            config_dict = load_config_file(config_file)
            return Workflow.create(config_dict)

        @staticmethod
        def extract_source_status(output) -> SourceStatus:
            output_clean = output.replace("\n", " ")
            output_clean = re.sub(" +", " ", output_clean)
            output_clean_ansi = re.compile(r"\x1b[^m]*m")
            output_clean = output_clean_ansi.sub(" ", output_clean)
            regex = r"Source Status:%(log)s(.*?)%(log)sSink Status: .*" % REGEX_AUX
            output_clean = re.findall(regex, output_clean.strip())
            return SourceStatus.parse_obj(
                eval(output_clean[0].strip())  # pylint: disable=eval-used
            )

        @staticmethod
        def extract_sink_status(output) -> SinkStatus:
            output_clean = output.replace("\n", " ")
            output_clean = re.sub(" +", " ", output_clean)
            output_clean_ansi = re.compile(r"\x1b[^m]*m")
            output_clean = output_clean_ansi.sub("", output_clean)
            if re.match(".* Processor Status: .*", output_clean):
                regex = (
                    r"Sink Status:%(log)s(.*?)%(log)sProcessor Status: .*" % REGEX_AUX
                )
                output_clean = re.findall(regex, output_clean.strip())[0].strip()
            else:
                regex = (
                    r".*Sink Status:%(log)s(.*?)%(log)sWorkflow Summary.*" % REGEX_AUX
                )
                output_clean = re.findall(regex, output_clean.strip())[0].strip()
            return SinkStatus.parse_obj(eval(output_clean))  # pylint: disable=eval-used

        @staticmethod
        @abstractmethod
        def get_connector_name() -> str:
@@ -341,35 +252,5 @@ class CliDBBase(TestCase):
            raise NotImplementedError()

        @staticmethod
        def build_yaml(config_yaml: dict, test_type: E2EType, extra_args: dict):
            """
            Build yaml as per E2EType
            """
            if test_type == E2EType.PROFILER:
                del config_yaml["source"]["sourceConfig"]["config"]
                config_yaml["source"]["sourceConfig"] = {
                    "config": {
                        "type": "Profiler",
                        "generateSampleData": True,
                        "profileSample": extra_args.get("profileSample", 1)
                        if extra_args
                        else 1,
                    }
                }
                config_yaml["processor"] = {"type": "orm-profiler", "config": {}}
            if test_type == E2EType.INGEST_FILTER_SCHEMA:
                config_yaml["source"]["sourceConfig"]["config"][
                    "schemaFilterPattern"
                ] = extra_args
            if test_type == E2EType.INGEST_FILTER_TABLE:
                config_yaml["source"]["sourceConfig"]["config"][
                    "tableFilterPattern"
                ] = extra_args
            if test_type == E2EType.INGEST_FILTER_MIX:
                config_yaml["source"]["sourceConfig"]["config"][
                    "schemaFilterPattern"
                ] = extra_args["schema"]
                config_yaml["source"]["sourceConfig"]["config"][
                    "tableFilterPattern"
                ] = extra_args["table"]
            return config_yaml
        def get_test_type() -> str:
            return "database"
@ -12,40 +12,30 @@
|
||||
"""
|
||||
Test DBT with CLI
|
||||
"""
|
||||
import os
|
||||
import subprocess
|
||||
from abc import abstractmethod
|
||||
from pathlib import Path
|
||||
from typing import List
|
||||
from unittest import TestCase
|
||||
|
||||
import pytest
|
||||
|
||||
from metadata.config.common import load_config_file
|
||||
from metadata.generated.schema.entity.data.table import Table
|
||||
from metadata.generated.schema.tests.testCase import TestCase as OMTestCase
|
||||
from metadata.generated.schema.tests.testSuite import TestSuite
|
||||
from metadata.ingestion.api.sink import SinkStatus
|
||||
from metadata.ingestion.api.source import SourceStatus
|
||||
from metadata.ingestion.api.workflow import Workflow
|
||||
from metadata.ingestion.ometa.ometa_api import OpenMetadata
|
||||
|
||||
from .test_cli_db_base import CliDBBase
|
||||
|
||||
PATH_TO_RESOURCES = os.path.dirname(os.path.realpath(__file__))
|
||||
from .test_cli import CliBase
|
||||
|
||||
|
||||
class CliDBTBase(TestCase):
|
||||
class TestSuite(TestCase):
|
||||
openmetadata: OpenMetadata
|
||||
class TestSuite(TestCase, CliBase):
|
||||
dbt_file_path: str
|
||||
config_file_path: str
|
||||
|
||||
# 1. deploy vanilla ingestion
|
||||
@pytest.mark.order(1)
|
||||
def test_connector_ingestion(self) -> None:
|
||||
# run ingest with dbt tables
|
||||
result = self.run_command(file_path=self.config_file_path)
|
||||
result = self.run_command(test_file_path=self.config_file_path)
|
||||
sink_status, source_status = self.retrieve_statuses(result)
|
||||
        self.assert_for_vanilla_ingestion(source_status, sink_status)

@@ -53,7 +43,7 @@ class CliDBTBase(TestCase):
     @pytest.mark.order(2)
     def test_dbt_ingestion(self) -> None:
         # run the dbt ingestion
-        result = self.run_command(file_path=self.dbt_file_path)
+        result = self.run_command(test_file_path=self.dbt_file_path)
         sink_status, source_status = self.retrieve_statuses(result)
         self.assert_for_dbt_ingestion(source_status, sink_status)

@@ -62,7 +52,7 @@ class CliDBTBase(TestCase):
     def test_entities(self) -> None:
         for table_fqn in self.fqn_dbt_tables():
             table: Table = self.openmetadata.get_by_name(
-                entity=Table, fqn=table_fqn, fields="*"
+                entity=Table, fqn=table_fqn, fields=["*"]
             )
             data_model = table.dataModel
             self.assertTrue(len(data_model.columns) > 0)
@@ -86,7 +76,7 @@ class CliDBTBase(TestCase):
         test_case_entity_list = self.openmetadata.list_entities(
             entity=OMTestCase,
             fields=["testSuite", "entityLink", "testDefinition"],
-            params={"testSuiteId": test_suite.id.__root__},
+            params={"testSuiteId": str(test_suite.id.__root__)},
         )
         self.assertTrue(len(test_case_entity_list.entities) == 23)

@@ -97,35 +87,9 @@ class CliDBTBase(TestCase):
         lineage = self.retrieve_lineage(table_fqn)
         self.assertTrue(len(lineage["upstreamEdges"]) >= 4)

-    def run_command(self, file_path: str, command: str = "ingest") -> str:
-        args = [
-            "metadata",
-            command,
-            "-c",
-            file_path,
-        ]
-        process_status = subprocess.Popen(args, stderr=subprocess.PIPE)
-        _, stderr = process_status.communicate()
-        return stderr.decode("utf-8")
-
-    @staticmethod
-    def retrieve_statuses(result):
-        source_status: SourceStatus = CliDBBase.TestSuite.extract_source_status(
-            result
-        )
-        sink_status: SinkStatus = CliDBBase.TestSuite.extract_sink_status(result)
-        return sink_status, source_status
-
-    def retrieve_lineage(self, table_name_fqn: str) -> dict:
-        return self.openmetadata.client.get(
-            f"/lineage/table/name/{table_name_fqn}?upstreamDepth=3&downstreamDepth=3"
-        )
-
-    @staticmethod
-    def get_workflow(connector: str) -> Workflow:
-        config_file = Path(PATH_TO_RESOURCES + f"/dbt/{connector}/{connector}.yaml")
-        config_dict = load_config_file(config_file)
-        return Workflow.create(config_dict)
+    def get_test_type() -> str:
+        return "dbt"

     @staticmethod
     @abstractmethod
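The `run_command` helper removed in the hunk above boils down to spawning the CLI process and capturing its stderr, where the ingestion workflow prints its summary. A minimal standalone sketch of that pattern — with a toy command standing in for the real `metadata ingest -c <config>` invocation:

```python
import subprocess
import sys


def run_command(args: list) -> str:
    # Spawn the process and return its decoded stderr, mirroring
    # CliDBTBase.run_command from the hunk above.
    process_status = subprocess.Popen(args, stderr=subprocess.PIPE)
    _, stderr = process_status.communicate()
    return stderr.decode("utf-8")


# Toy invocation standing in for `metadata ingest -c <config>`:
output = run_command(
    [sys.executable, "-c", "import sys; sys.stderr.write('Workflow finished')"]
)
print(output)  # Workflow finished
```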
0  ingestion/tests/cli_e2e/common/__init__.py  Normal file
126  ingestion/tests/cli_e2e/common/test_cli_dashboard.py  Normal file
@@ -0,0 +1,126 @@
#  Copyright 2022 Collate
#  Licensed under the Apache License, Version 2.0 (the "License");
#  you may not use this file except in compliance with the License.
#  You may obtain a copy of the License at
#  http://www.apache.org/licenses/LICENSE-2.0
#  Unless required by applicable law or agreed to in writing, software
#  distributed under the License is distributed on an "AS IS" BASIS,
#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#  See the License for the specific language governing permissions and
#  limitations under the License.

"""
Test dashboard connectors with CLI
"""
from abc import ABC, abstractmethod
from pathlib import Path

from metadata.ingestion.api.sink import SinkStatus
from metadata.ingestion.api.source import SourceStatus
from metadata.ingestion.api.workflow import Workflow

from ..base.test_cli import PATH_TO_RESOURCES
from ..base.test_cli_dashboard import CliDashboardBase


class CliCommonDashboard:
    """
    CLI Dashboard Common class
    """

    class TestSuite(
        CliDashboardBase.TestSuite, ABC
    ):  # pylint: disable=too-many-public-methods
        """
        TestSuite class to define test structure
        """

        @classmethod
        def setUpClass(cls) -> None:
            connector = cls.get_connector_name()
            workflow: Workflow = cls.get_workflow(connector, cls.get_test_type())
            cls.openmetadata = workflow.source.metadata
            cls.config_file_path = str(
                Path(PATH_TO_RESOURCES + f"/dashboard/{connector}/{connector}.yaml")
            )
            cls.test_file_path = str(
                Path(PATH_TO_RESOURCES + f"/dashboard/{connector}/test.yaml")
            )

        def assert_not_including(
            self, source_status: SourceStatus, sink_status: SinkStatus
        ):
            self.assertTrue(len(source_status.failures) == 0)
            self.assertTrue(len(source_status.warnings) == 0)
            self.assertTrue(len(source_status.filtered) == 0)
            self.assertEqual(
                self.expected_not_included_entities(), len(source_status.records)
            )
            self.assertTrue(len(sink_status.failures) == 0)
            self.assertTrue(len(sink_status.warnings) == 0)
            self.assertEqual(
                self.expected_not_included_sink_entities(), len(sink_status.records)
            )

        def assert_for_vanilla_ingestion(
            self, source_status: SourceStatus, sink_status: SinkStatus
        ) -> None:
            self.assertTrue(len(source_status.failures) == 0)
            self.assertTrue(len(source_status.warnings) == 0)
            self.assertTrue(len(source_status.filtered) == 0)
            self.assertEqual(len(source_status.records), self.expected_entities())
            self.assertTrue(len(sink_status.failures) == 0)
            self.assertTrue(len(sink_status.warnings) == 0)
            self.assertEqual(
                len(sink_status.records),
                self.expected_entities()
                + self.expected_tags()
                + self.expected_lineage(),
            )

        def assert_filtered_mix(
            self, source_status: SourceStatus, sink_status: SinkStatus
        ):
            self.assertTrue(len(source_status.failures) == 0)
            self.assertTrue(len(source_status.warnings) == 0)
            self.assertEqual(self.expected_filtered_mix(), len(source_status.filtered))
            self.assertTrue(len(sink_status.failures) == 0)
            self.assertTrue(len(sink_status.warnings) == 0)
            self.assertEqual(
                self.expected_filtered_sink_mix(), len(sink_status.records)
            )

        @staticmethod
        @abstractmethod
        def expected_entities() -> int:
            raise NotImplementedError()

        @staticmethod
        @abstractmethod
        def expected_tags() -> int:
            raise NotImplementedError()

        @staticmethod
        @abstractmethod
        def expected_lineage() -> int:
            raise NotImplementedError()

        @staticmethod
        @abstractmethod
        def expected_not_included_entities() -> int:
            raise NotImplementedError()

        @staticmethod
        @abstractmethod
        def expected_not_included_sink_entities() -> int:
            raise NotImplementedError()

        @staticmethod
        @abstractmethod
        def expected_filtered_mix() -> int:
            raise NotImplementedError()

        @staticmethod
        @abstractmethod
        def expected_filtered_sink_mix() -> int:
            raise NotImplementedError()
@@ -21,7 +21,8 @@ from metadata.ingestion.api.sink import SinkStatus
 from metadata.ingestion.api.source import SourceStatus
 from metadata.ingestion.api.workflow import Workflow

-from .test_cli_db_base import PATH_TO_RESOURCES, CliDBBase
+from ..base.test_cli import PATH_TO_RESOURCES
+from ..base.test_cli_db import CliDBBase


 class CliCommonDB:
@@ -31,7 +32,7 @@ class CliCommonDB:
     @classmethod
     def setUpClass(cls) -> None:
         connector = cls.get_connector_name()
-        workflow: Workflow = cls.get_workflow(connector)
+        workflow: Workflow = cls.get_workflow(connector, cls.get_test_type())
         cls.engine = workflow.source.engine
         cls.openmetadata = workflow.source.metadata
         cls.config_file_path = str(
25  ingestion/tests/cli_e2e/dashboard/tableau/redshift.yaml  Normal file
@@ -0,0 +1,25 @@
source:
  type: redshift
  serviceName: local_redshift
  serviceConnection:
    config:
      hostPort: $E2E_REDSHIFT_HOST_PORT
      username: $E2E_REDSHIFT_USERNAME
      password: $E2E_REDSHIFT_PASSWORD
      database: $E2E_REDSHIFT_DATABASE
      type: Redshift
  sourceConfig:
    config:
      schemaFilterPattern:
        includes:
          - dbt_jaffle
sink:
  type: metadata-rest
  config: {}
workflowConfig:
  loggerLevel: DEBUG
  openMetadataServerConfig:
    hostPort: http://localhost:8585/api
    authProvider: openmetadata
    securityConfig:
      "jwtToken": "eyJraWQiOiJHYjM4OWEtOWY3Ni1nZGpzLWE5MmotMDI0MmJrOTQzNTYiLCJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJhZG1pbiIsImlzQm90IjpmYWxzZSwiaXNzIjoib3Blbi1tZXRhZGF0YS5vcmciLCJpYXQiOjE2NjM5Mzg0NjIsImVtYWlsIjoiYWRtaW5Ab3Blbm1ldGFkYXRhLm9yZyJ9.tS8um_5DKu7HgzGBzS1VTA5uUjKWOCU0B_j08WXBiEC0mr0zNREkqVfwFDD-d24HlNEbrqioLsBuFRiwIWKc1m_ZlVQbG7P36RUxhuv2vbSp80FKyNM-Tj93FDzq91jsyNmsQhyNv_fNr3TXfzzSPjHt8Go0FMMP66weoKMgW2PbXlhVKwEuXUHyakLLzewm9UMeQaEiRzhiTMU3UkLXcKbYEJJvfNFcLwSl9W8JCO_l0Yj3ud-qt_nQYEZwqW6u5nfdQllN133iikV4fM5QZsMCnm8Rq1mvLR0y9bmJiD7fwM1tmJ791TUWqmKaTnP49U493VanKpUAfzIiOiIbhg"
27  ingestion/tests/cli_e2e/dashboard/tableau/tableau.yaml  Normal file
@@ -0,0 +1,27 @@
source:
  type: tableau
  serviceName: local_tableau
  serviceConnection:
    config:
      type: Tableau
      authType:
        username: $E2E_TABLEAU_USERNAME
        password: $E2E_TABLEAU_PASSWORD
      env: tableau_prod
      hostPort: $E2E_TABLEAU_HOST_PORT
      siteName: $E2E_TABLEAU_SITE
      siteUrl: $E2E_TABLEAU_SITE
      apiVersion: 3.19
  sourceConfig:
    config:
      type: DashboardMetadata
sink:
  type: metadata-rest
  config: {}
workflowConfig:
  loggerLevel: DEBUG
  openMetadataServerConfig:
    authProvider: openmetadata
    hostPort: http://localhost:8585/api
    securityConfig:
      jwtToken: eyJraWQiOiJHYjM4OWEtOWY3Ni1nZGpzLWE5MmotMDI0MmJrOTQzNTYiLCJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJhZG1pbiIsImlzQm90IjpmYWxzZSwiaXNzIjoib3Blbi1tZXRhZGF0YS5vcmciLCJpYXQiOjE2NjM5Mzg0NjIsImVtYWlsIjoiYWRtaW5Ab3Blbm1ldGFkYXRhLm9yZyJ9.tS8um_5DKu7HgzGBzS1VTA5uUjKWOCU0B_j08WXBiEC0mr0zNREkqVfwFDD-d24HlNEbrqioLsBuFRiwIWKc1m_ZlVQbG7P36RUxhuv2vbSp80FKyNM-Tj93FDzq91jsyNmsQhyNv_fNr3TXfzzSPjHt8Go0FMMP66weoKMgW2PbXlhVKwEuXUHyakLLzewm9UMeQaEiRzhiTMU3UkLXcKbYEJJvfNFcLwSl9W8JCO_l0Yj3ud-qt_nQYEZwqW6u5nfdQllN133iikV4fM5QZsMCnm8Rq1mvLR0y9bmJiD7fwM1tmJ791TUWqmKaTnP49U493VanKpUAfzIiOiIbhg
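The `$E2E_*` placeholders in both YAML files above are filled in from the CI environment (the workflow's secrets). As an illustration of that kind of variable substitution — not OpenMetadata's actual config loader — `os.path.expandvars` does the same job on a single line:

```python
import os

# In CI this variable comes from the workflow's secrets; set it here for the demo.
os.environ["E2E_TABLEAU_USERNAME"] = "ci-bot"

line = "username: $E2E_TABLEAU_USERNAME"
print(os.path.expandvars(line))  # username: ci-bot
```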
@@ -14,8 +14,8 @@ Test Bigquery connector with CLI
 """
 from typing import List

+from .common.test_cli_db import CliCommonDB
 from .common_e2e_sqa_mixins import SQACommonMethods
-from .test_cli_db_base_common import CliCommonDB


 class BigqueryCliTest(CliCommonDB.TestSuite, SQACommonMethods):
@@ -21,7 +21,8 @@ from metadata.ingestion.api.sink import SinkStatus
 from metadata.ingestion.api.source import SourceStatus
 from metadata.ingestion.api.workflow import Workflow

-from .test_cli_dbt_base import PATH_TO_RESOURCES, CliDBTBase
+from .base.test_cli import PATH_TO_RESOURCES
+from .base.test_cli_dbt import CliDBTBase


 class DbtCliTest(CliDBTBase.TestSuite):
@@ -30,7 +31,9 @@ class DbtCliTest(CliDBTBase.TestSuite):
     @classmethod
     def setUpClass(cls) -> None:
         connector = cls.get_connector_name()
-        workflow: Workflow = cls.get_workflow(connector)
+        workflow: Workflow = cls.get_workflow(
+            test_type=cls.get_test_type(), connector=connector
+        )
         cls.engine = workflow.source.engine
         cls.openmetadata = workflow.source.metadata
         cls.config_file_path = str(
@@ -20,9 +20,8 @@ import yaml

 from metadata.utils.constants import UTF_8

+from .common.test_cli_db import CliCommonDB
 from .common_e2e_sqa_mixins import SQACommonMethods
-from .test_cli_db_base import E2EType
-from .test_cli_db_base_common import CliCommonDB


 class MSSQLCliTest(CliCommonDB.TestSuite, SQACommonMethods):
@@ -14,8 +14,8 @@ Test MySql connector with CLI
 """
 from typing import List

+from .common.test_cli_db import CliCommonDB
 from .common_e2e_sqa_mixins import SQACommonMethods
-from .test_cli_db_base_common import CliCommonDB


 class MysqlCliTest(CliCommonDB.TestSuite, SQACommonMethods):
@@ -19,8 +19,8 @@ import pytest
 from metadata.ingestion.api.sink import SinkStatus
 from metadata.ingestion.api.source import SourceStatus

-from .test_cli_db_base import E2EType
-from .test_cli_db_base_common import CliCommonDB
+from .base.test_cli_db import E2EType
+from .common.test_cli_db import CliCommonDB


 class SnowflakeCliTest(CliCommonDB.TestSuite):
75  ingestion/tests/cli_e2e/test_cli_tableau.py  Normal file
@@ -0,0 +1,75 @@
#  Copyright 2022 Collate
#  Licensed under the Apache License, Version 2.0 (the "License");
#  you may not use this file except in compliance with the License.
#  You may obtain a copy of the License at
#  http://www.apache.org/licenses/LICENSE-2.0
#  Unless required by applicable law or agreed to in writing, software
#  distributed under the License is distributed on an "AS IS" BASIS,
#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#  See the License for the specific language governing permissions and
#  limitations under the License.

"""
Test Tableau connector with CLI
"""
from pathlib import Path
from typing import List

from .base.test_cli import PATH_TO_RESOURCES
from .common.test_cli_dashboard import CliCommonDashboard


class TableauCliTest(CliCommonDashboard.TestSuite):

    # in case we want to do something before running the tests
    def prepare(self) -> None:
        redshift_file_path = str(
            Path(
                PATH_TO_RESOURCES
                + f"/dashboard/{self.get_connector_name()}/redshift.yaml"
            )
        )
        self.run_command(test_file_path=redshift_file_path)

    @staticmethod
    def get_connector_name() -> str:
        return "tableau"

    def get_includes_dashboards(self) -> List[str]:
        return [".*Test.*", "Regional"]

    def get_excludes_dashboards(self) -> List[str]:
        return ["Superstore"]

    def get_includes_charts(self) -> List[str]:
        return [".*Sheet", "Economy"]

    def get_excludes_charts(self) -> List[str]:
        return ["Obesity"]

    def get_includes_datamodels(self) -> List[str]:
        return ["Test.*"]

    def get_excludes_datamodels(self) -> List[str]:
        return ["Random.*"]

    def expected_entities(self) -> int:
        return 28

    def expected_lineage(self) -> int:
        return 8

    def expected_tags(self) -> int:
        return 2

    def expected_not_included_entities(self) -> int:
        return 20

    def expected_not_included_sink_entities(self) -> int:
        return 21

    def expected_filtered_mix(self) -> int:
        return 10

    def expected_filtered_sink_mix(self) -> int:
        return 9
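The `get_includes_*` / `get_excludes_*` patterns above drive the filter assertions (`expected_filtered_mix` and friends). A rough sketch of how include/exclude regex filtering of this kind behaves — illustrative only, not OpenMetadata's actual filter implementation:

```python
import re
from typing import List


def is_included(name: str, includes: List[str], excludes: List[str]) -> bool:
    # A name passes when no exclude pattern matches it and at least
    # one include pattern does.
    if any(re.match(pattern, name) for pattern in excludes):
        return False
    return any(re.match(pattern, name) for pattern in includes)


includes = [".*Test.*", "Regional"]  # get_includes_dashboards() above
excludes = ["Superstore"]            # get_excludes_dashboards() above
print(is_included("My Test Dashboard", includes, excludes))  # True
print(is_included("Superstore", includes, excludes))         # False
```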
@@ -18,11 +18,8 @@ regenerated via: `./opt/vertica/examples/VMart_Schema/vmart_gen`
 """
 from typing import List

 import pytest

+from .common.test_cli_db import CliCommonDB
 from .common_e2e_sqa_mixins import SQACommonMethods
-from .test_cli_db_base import E2EType
-from .test_cli_db_base_common import CliCommonDB


 class VerticaCliTest(CliCommonDB.TestSuite, SQACommonMethods):
@@ -110,9 +110,9 @@ workflowConfig:
 The `sourceConfig` is defined [here](https://github.com/open-metadata/OpenMetadata/blob/main/openmetadata-spec/src/main/resources/json/schema/metadataIngestion/dashboardServiceMetadataPipeline.json):

 - `dbServiceNames`: Database Service Name for the creation of lineage, if the source supports it.
-- `dashboardFilterPattern` and `chartFilterPattern`: Note that the `dashboardFilterPattern` and `chartFilterPattern` both support regex as include or exclude. E.g.,
-- `overrideOwner`: Flag to override current owner by new owner from source, if found during metadata ingestion
-- `includeTags`: Set the Include tags toggle to control whether or not to include tags as part of metadata ingestion.
+- `dashboardFilterPattern` / `chartFilterPattern`: Note that all of them support regex as include or exclude. E.g., "My dashboard, My dash.*, .*Dashboard".
+- `overrideOwner`: Flag to override current owner by new owner from source, if found during metadata ingestion.
+- `includeTags`: Set the 'Include Tags' toggle to control whether to include tags as part of metadata ingestion.
 - `markDeletedDashboards`: Set the Mark Deleted Dashboards toggle to flag dashboards as soft-deleted if they are not present anymore in the source system.

 ```yaml
@@ -112,8 +112,8 @@ The `sourceConfig` is defined [here](https://github.com/open-metadata/OpenMetada

 - `dbServiceNames`: Database Service Name for the creation of lineage, if the source supports it.
 - `dashboardFilterPattern` and `chartFilterPattern`: Note that the `dashboardFilterPattern` and `chartFilterPattern` both support regex as include or exclude. E.g.
-- `overrideOwner`: Flag to override current owner by new owner from source, if found during metadata ingestion
-- `includeTags`: Set the Include tags toggle to control whether or not to include tags as part of metadata ingestion.
+- `overrideOwner`: Flag to override current owner by new owner from source, if found during metadata ingestion.
+- `includeTags`: Set the 'Include Tags' toggle to control whether to include tags as part of metadata ingestion.
 - `markDeletedDashboards`: Set the Mark Deleted Dashboards toggle to flag dashboards as soft-deleted if they are not present anymore in the source system.

 ```yaml
@@ -61,6 +61,7 @@ source:
       overrideOwner: True
       markDeletedDashboards: True
       includeTags: True
+      includeDataModels: True
       # dbServiceNames:
       #   - service1
       #   - service2
@@ -78,6 +79,13 @@ source:
       #   excludes:
       #     - chart3
       #     - chart4
+      # dataModelFilterPattern:
+      #   includes:
+      #     - datamodel1
+      #     - datamodel2
+      #   excludes:
+      #     - datamodel3
+      #     - datamodel4
 sink:
   type: metadata-rest
   config: {}
@@ -100,9 +108,10 @@ workflowConfig:
 The `sourceConfig` is defined [here](https://github.com/open-metadata/OpenMetadata/blob/main/openmetadata-spec/src/main/resources/json/schema/metadataIngestion/dashboardServiceMetadataPipeline.json):

 - `dbServiceNames`: Database Service Name for the creation of lineage, if the source supports it.
-- `dashboardFilterPattern` and `chartFilterPattern`: Note that the `dashboardFilterPattern` and `chartFilterPattern` both support regex as include or exclude. E.g.,
-- `overrideOwner`: Flag to override current owner by new owner from source, if found during metadata ingestion
-- `includeTags`: Set the Include tags toggle to control whether or not to include tags as part of metadata ingestion.
+- `dashboardFilterPattern` / `chartFilterPattern` / `dataModelFilterPattern`: Note that all of them support regex as include or exclude. E.g., "My dashboard, My dash.*, .*Dashboard".
+- `overrideOwner`: Flag to override current owner by new owner from source, if found during metadata ingestion.
+- `includeTags`: Set the 'Include Tags' toggle to control whether to include tags as part of metadata ingestion.
+- `includeDataModels`: Set the 'Include Data Models' toggle to control whether to include tags as part of metadata ingestion.
 - `markDeletedDashboards`: Set the Mark Deleted Dashboards toggle to flag dashboards as soft-deleted if they are not present anymore in the source system.

 ```yaml
@@ -78,6 +78,13 @@ source:
       #   excludes:
       #     - chart3
      #     - chart4
+      # dataModelFilterPattern:
+      #   includes:
+      #     - datamodel1
+      #     - datamodel2
+      #   excludes:
+      #     - datamodel3
+      #     - datamodel4
 sink:
   type: metadata-rest
   config: {}
|
||||
The `sourceConfig` is defined [here](https://github.com/open-metadata/OpenMetadata/blob/main/openmetadata-spec/src/main/resources/json/schema/metadataIngestion/dashboardServiceMetadataPipeline.json):
|
||||
|
||||
- `dbServiceNames`: Database Service Name for the creation of lineage, if the source supports it.
|
||||
- `dashboardFilterPattern` and `chartFilterPattern`: Note that the `dashboardFilterPattern` and `chartFilterPattern` both support regex as include or exclude. E.g.,
|
||||
- `overrideOwner`: Flag to override current owner by new owner from source, if found during metadata ingestion
|
||||
- `includeTags`: Set the Include tags toggle to control whether or not to include tags as part of metadata ingestion.
|
||||
- `dashboardFilterPattern` / `chartFilterPattern` / `dataModelFilterPattern`: Note that all of them support regex as include or exclude. E.g., "My dashboard, My dash.*, .*Dashboard".
|
||||
- `overrideOwner`: Flag to override current owner by new owner from source, if found during metadata ingestion.
|
||||
- `includeTags`: Set the 'Include Tags' toggle to control whether to include tags as part of metadata ingestion.
|
||||
- `includeDataModels`: Set the 'Include Data Models' toggle to control whether to include tags as part of metadata ingestion.
|
||||
- `markDeletedDashboards`: Set the Mark Deleted Dashboards toggle to flag dashboards as soft-deleted if they are not present anymore in the source system.
|
||||
|
||||
```yaml
|
||||
|
@@ -101,8 +101,8 @@ workflowConfig:
 The `sourceConfig` is defined [here](https://github.com/open-metadata/OpenMetadata/blob/main/openmetadata-spec/src/main/resources/json/schema/metadataIngestion/dashboardServiceMetadataPipeline.json):

 - `dbServiceNames`: Database Service Name for the creation of lineage, if the source supports it.
-- `dashboardFilterPattern` and `chartFilterPattern`: Note that the `dashboardFilterPattern` and `chartFilterPattern` both support regex as include or exclude. E.g.,
-- `includeTags`: Set the Include tags toggle to control whether or not to include tags as part of metadata ingestion.
+- `dashboardFilterPattern` / `chartFilterPattern`: Note that all of them support regex as include or exclude. E.g., "My dashboard, My dash.*, .*Dashboard".
+- `includeTags`: Set the 'Include Tags' toggle to control whether to include tags as part of metadata ingestion.
 - `markDeletedDashboards`: Set the Mark Deleted Dashboards toggle to flag dashboards as soft-deleted if they are not present anymore in the source system.

 ```yaml
@@ -101,8 +101,8 @@ workflowConfig:
 The `sourceConfig` is defined [here](https://github.com/open-metadata/OpenMetadata/blob/main/openmetadata-spec/src/main/resources/json/schema/metadataIngestion/dashboardServiceMetadataPipeline.json):

 - `dbServiceNames`: Database Service Name for the creation of lineage, if the source supports it.
-- `dashboardFilterPattern` and `chartFilterPattern`: Note that the `dashboardFilterPattern` and `chartFilterPattern` both support regex as include or exclude. E.g.,
-- `includeTags`: Set the Include tags toggle to control whether or not to include tags as part of metadata ingestion.
+- `dashboardFilterPattern` / `chartFilterPattern`: Note that all of them support regex as include or exclude. E.g., "My dashboard, My dash.*, .*Dashboard".
+- `includeTags`: Set the 'Include Tags' toggle to control whether to include tags as part of metadata ingestion.
 - `markDeletedDashboards`: Set the Mark Deleted Dashboards toggle to flag dashboards as soft-deleted if they are not present anymore in the source system.

 ```yaml
@@ -99,8 +99,8 @@ workflowConfig:
 The `sourceConfig` is defined [here](https://github.com/open-metadata/OpenMetadata/blob/main/openmetadata-spec/src/main/resources/json/schema/metadataIngestion/dashboardServiceMetadataPipeline.json):

 - `dbServiceNames`: Database Service Name for the creation of lineage, if the source supports it.
-- `dashboardFilterPattern` and `chartFilterPattern`: Note that the `dashboardFilterPattern` and `chartFilterPattern` both support regex as include or exclude. E.g.,
-- `includeTags`: Set the Include tags toggle to control whether or not to include tags as part of metadata ingestion.
+- `dashboardFilterPattern` / `chartFilterPattern`: Note that all of them support regex as include or exclude. E.g., "My dashboard, My dash.*, .*Dashboard".
+- `includeTags`: Set the 'Include Tags' toggle to control whether to include tags as part of metadata ingestion.
 - `markDeletedDashboards`: Set the Mark Deleted Dashboards toggle to flag dashboards as soft-deleted if they are not present anymore in the source system.

 ```yaml
@@ -99,8 +99,8 @@ workflowConfig:
 The `sourceConfig` is defined [here](https://github.com/open-metadata/OpenMetadata/blob/main/openmetadata-spec/src/main/resources/json/schema/metadataIngestion/dashboardServiceMetadataPipeline.json):

 - `dbServiceNames`: Database Service Name for the creation of lineage, if the source supports it.
-- `dashboardFilterPattern` and `chartFilterPattern`: Note that the `dashboardFilterPattern` and `chartFilterPattern` both support regex as include or exclude. E.g.,
-- `includeTags`: Set the Include tags toggle to control whether or not to include tags as part of metadata ingestion.
+- `dashboardFilterPattern` / `chartFilterPattern`: Note that all of them support regex as include or exclude. E.g., "My dashboard, My dash.*, .*Dashboard".
+- `includeTags`: Set the 'Include Tags' toggle to control whether to include tags as part of metadata ingestion.
 - `markDeletedDashboards`: Set the Mark Deleted Dashboards toggle to flag dashboards as soft-deleted if they are not present anymore in the source system.

 ```yaml
@@ -116,8 +116,8 @@ Using the non-admin APIs will only fetch the dashboard and chart metadata from t
 The `sourceConfig` is defined [here](https://github.com/open-metadata/OpenMetadata/blob/main/openmetadata-spec/src/main/resources/json/schema/metadataIngestion/dashboardServiceMetadataPipeline.json):

 - `dbServiceNames`: Database Service Name for the creation of lineage, if the source supports it.
-- `dashboardFilterPattern` and `chartFilterPattern`: Note that the `dashboardFilterPattern` and `chartFilterPattern` both support regex as include or exclude. E.g.,
-- `includeTags`: Set the Include tags toggle to control whether or not to include tags as part of metadata ingestion.
+- `dashboardFilterPattern` / `chartFilterPattern`: Note that all of them support regex as include or exclude. E.g., "My dashboard, My dash.*, .*Dashboard".
+- `includeTags`: Set the 'Include Tags' toggle to control whether to include tags as part of metadata ingestion.
 - `markDeletedDashboards`: Set the Mark Deleted Dashboards toggle to flag dashboards as soft-deleted if they are not present anymore in the source system.

 ```yaml
@@ -116,8 +116,8 @@ Using the non-admin APIs will only fetch the dashboard and chart metadata from t
 The `sourceConfig` is defined [here](https://github.com/open-metadata/OpenMetadata/blob/main/openmetadata-spec/src/main/resources/json/schema/metadataIngestion/dashboardServiceMetadataPipeline.json):

 - `dbServiceNames`: Database Service Name for the creation of lineage, if the source supports it.
-- `dashboardFilterPattern` and `chartFilterPattern`: Note that the `dashboardFilterPattern` and `chartFilterPattern` both support regex as include or exclude. E.g.,
-- `includeTags`: Set the Include tags toggle to control whether or not to include tags as part of metadata ingestion.
+- `dashboardFilterPattern` / `chartFilterPattern`: Note that all of them support regex as include or exclude. E.g., "My dashboard, My dash.*, .*Dashboard".
+- `includeTags`: Set the 'Include Tags' toggle to control whether to include tags as part of metadata ingestion.
 - `markDeletedDashboards`: Set the Mark Deleted Dashboards toggle to flag dashboards as soft-deleted if they are not present anymore in the source system.

 ```yaml
@@ -98,8 +98,8 @@ workflowConfig:
 The `sourceConfig` is defined [here](https://github.com/open-metadata/OpenMetadata/blob/main/openmetadata-spec/src/main/resources/json/schema/metadataIngestion/dashboardServiceMetadataPipeline.json):

 - `dbServiceNames`: Database Service Name for the creation of lineage, if the source supports it.
-- `dashboardFilterPattern` and `chartFilterPattern`: Note that the `dashboardFilterPattern` and `chartFilterPattern` both support regex as include or exclude. E.g.,
-- `includeTags`: Set the Include tags toggle to control whether or not to include tags as part of metadata ingestion.
+- `dashboardFilterPattern` / `chartFilterPattern`: Note that all of them support regex as include or exclude. E.g., "My dashboard, My dash.*, .*Dashboard".
+- `includeTags`: Set the 'Include Tags' toggle to control whether to include tags as part of metadata ingestion.
 - `markDeletedDashboards`: Set the Mark Deleted Dashboards toggle to flag dashboards as soft-deleted if they are not present anymore in the source system.

 ```yaml
@@ -109,8 +109,8 @@ workflowConfig:
 The `sourceConfig` is defined [here](https://github.com/open-metadata/OpenMetadata/blob/main/openmetadata-spec/src/main/resources/json/schema/metadataIngestion/dashboardServiceMetadataPipeline.json):

 - `dbServiceNames`: Database Service Name for the creation of lineage, if the source supports it.
-- `dashboardFilterPattern` and `chartFilterPattern`: Note that the `dashboardFilterPattern` and `chartFilterPattern` both support regex as include or exclude. E.g.,
-- `includeTags`: Set the Include tags toggle to control whether or not to include tags as part of metadata ingestion.
+- `dashboardFilterPattern` / `chartFilterPattern`: Note that all of them support regex as include or exclude. E.g., "My dashboard, My dash.*, .*Dashboard".
+- `includeTags`: Set the 'Include Tags' toggle to control whether to include tags as part of metadata ingestion.
 - `markDeletedDashboards`: Set the Mark Deleted Dashboards toggle to flag dashboards as soft-deleted if they are not present anymore in the source system.

 ```yaml
@@ -102,8 +102,8 @@ workflowConfig:
 The `sourceConfig` is defined [here](https://github.com/open-metadata/OpenMetadata/blob/main/openmetadata-spec/src/main/resources/json/schema/metadataIngestion/dashboardServiceMetadataPipeline.json):

 - `dbServiceNames`: Database Service Name for the creation of lineage, if the source supports it.
-- `dashboardFilterPattern` and `chartFilterPattern`: Note that the `dashboardFilterPattern` and `chartFilterPattern` both support regex as include or exclude. E.g.,
-- `includeTags`: Set the Include tags toggle to control whether or not to include tags as part of metadata ingestion.
+- `dashboardFilterPattern` / `chartFilterPattern`: Note that all of them support regex as include or exclude. E.g., "My dashboard, My dash.*, .*Dashboard".
+- `includeTags`: Set the 'Include Tags' toggle to control whether to include tags as part of metadata ingestion.
 - `markDeletedDashboards`: Set the Mark Deleted Dashboards toggle to flag dashboards as soft-deleted if they are not present anymore in the source system.

 ```yaml
@ -102,8 +102,8 @@ workflowConfig:

The `sourceConfig` is defined [here](https://github.com/open-metadata/OpenMetadata/blob/main/openmetadata-spec/src/main/resources/json/schema/metadataIngestion/dashboardServiceMetadataPipeline.json):

- `dbServiceNames`: Database Service Name for the creation of lineage, if the source supports it.
- `dashboardFilterPattern` and `chartFilterPattern`: Note that the `dashboardFilterPattern` and `chartFilterPattern` both support regex as include or exclude. E.g.,
- `includeTags`: Set the Include tags toggle to control whether or not to include tags as part of metadata ingestion.
- `dashboardFilterPattern` / `chartFilterPattern`: Note that all of them support regex as include or exclude. E.g., "My dashboard, My dash.*, .*Dashboard".
- `includeTags`: Set the 'Include Tags' toggle to control whether to include tags as part of metadata ingestion.
- `markDeletedDashboards`: Set the Mark Deleted Dashboards toggle to flag dashboards as soft-deleted if they are not present anymore in the source system.

```yaml
@ -161,9 +161,9 @@ workflowConfig:

The `sourceConfig` is defined [here](https://github.com/open-metadata/OpenMetadata/blob/main/openmetadata-spec/src/main/resources/json/schema/metadataIngestion/dashboardServiceMetadataPipeline.json):

- `dbServiceNames`: Database Service Name for the creation of lineage, if the source supports it.
- `dashboardFilterPattern` and `chartFilterPattern`: Note that the `dashboardFilterPattern` and `chartFilterPattern` both support regex as include or exclude. E.g.,
- `overrideOwner`: Flag to override current owner by new owner from source, if found during metadata ingestion
- `includeTags`: Set the Include tags toggle to control whether or not to include tags as part of metadata ingestion.
- `dashboardFilterPattern` / `chartFilterPattern`: Note that all of them support regex as include or exclude. E.g., "My dashboard, My dash.*, .*Dashboard".
- `overrideOwner`: Flag to override current owner by new owner from source, if found during metadata ingestion.
- `includeTags`: Set the 'Include Tags' toggle to control whether to include tags as part of metadata ingestion.
- `markDeletedDashboards`: Set the Mark Deleted Dashboards toggle to flag dashboards as soft-deleted if they are not present anymore in the source system.
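As a rough illustration of how such include/exclude regex filters behave, here is a simplified sketch — this mirrors the behavior described above, not OpenMetadata's actual filtering code:

```python
import re

def is_filtered_out(name, includes=None, excludes=None):
    """Simplified include/exclude regex filter: an entity name is kept when
    it matches at least one include pattern (or no includes are given) and
    matches no exclude pattern. Illustrative only."""
    if includes and not any(re.match(pattern, name) for pattern in includes):
        return True  # includes are set and none matched: filter it out
    if excludes and any(re.match(pattern, name) for pattern in excludes):
        return True  # an exclude pattern matched: filter it out
    return False

# "My dash.*" keeps "My dashboard" but drops "Sales overview"
print(is_filtered_out("My dashboard", includes=["My dash.*"]))    # False (kept)
print(is_filtered_out("Sales overview", includes=["My dash.*"]))  # True (dropped)
print(is_filtered_out("My dashboard", excludes=[".*dash.*"]))     # True (dropped)
```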

```yaml

@ -162,9 +162,9 @@ workflowConfig:

The `sourceConfig` is defined [here](https://github.com/open-metadata/OpenMetadata/blob/main/openmetadata-spec/src/main/resources/json/schema/metadataIngestion/dashboardServiceMetadataPipeline.json):

- `dbServiceNames`: Database Service Name for the creation of lineage, if the source supports it.
- `dashboardFilterPattern` and `chartFilterPattern`: Note that the `dashboardFilterPattern` and `chartFilterPattern` both support regex as include or exclude. E.g.,
- `overrideOwner`: Flag to override current owner by new owner from source, if found during metadata ingestion
- `includeTags`: Set the Include tags toggle to control whether or not to include tags as part of metadata ingestion.
- `dashboardFilterPattern` / `chartFilterPattern`: Note that all of them support regex as include or exclude. E.g., "My dashboard, My dash.*, .*Dashboard".
- `overrideOwner`: Flag to override current owner by new owner from source, if found during metadata ingestion.
- `includeTags`: Set the 'Include Tags' toggle to control whether to include tags as part of metadata ingestion.
- `markDeletedDashboards`: Set the Mark Deleted Dashboards toggle to flag dashboards as soft-deleted if they are not present anymore in the source system.

```yaml
@ -77,6 +77,7 @@ source:
overrideOwner: True
markDeletedDashboards: True
includeTags: True
includeDataModels: True
# dbServiceNames:
# - service1
# - service2
@ -94,6 +95,13 @@ source:
# excludes:
# - chart3
# - chart4
# dataModelFilterPattern:
# includes:
# - datamodel1
# - datamodel2
# excludes:
# - datamodel3
# - datamodel4
sink:
type: metadata-rest
config: {}
@ -137,6 +145,7 @@ source:
overrideOwner: True
markDeletedDashboards: True
includeTags: True
includeDataModels: True
# dbServiceNames:
# - service1
# - service2
@ -154,6 +163,13 @@ source:
# excludes:
# - chart3
# - chart4
# dataModelFilterPattern:
# includes:
# - datamodel1
# - datamodel2
# excludes:
# - datamodel3
# - datamodel4
sink:
type: metadata-rest
config: {}
@ -201,6 +217,7 @@ source:
overrideOwner: True
markDeletedDashboards: True
includeTags: True
includeDataModels: True
# dbServiceNames:
# - service1
# - service2
@ -218,6 +235,13 @@ source:
# excludes:
# - chart3
# - chart4
# dataModelFilterPattern:
# includes:
# - datamodel1
# - datamodel2
# excludes:
# - datamodel3
# - datamodel4
sink:
type: metadata-rest
config: {}
@ -249,9 +273,10 @@ workflowConfig:

The `sourceConfig` is defined [here](https://github.com/open-metadata/OpenMetadata/blob/main/openmetadata-spec/src/main/resources/json/schema/metadataIngestion/dashboardServiceMetadataPipeline.json):

- `dbServiceNames`: Database Service Name for the creation of lineage, if the source supports it.
- `dashboardFilterPattern` and `chartFilterPattern`: Note that the `dashboardFilterPattern` and `chartFilterPattern` both support regex as include or exclude. E.g.,
- `overrideOwner`: Flag to override current owner by new owner from source, if found during metadata ingestion
- `includeTags`: Set the Include tags toggle to control whether or not to include tags as part of metadata ingestion.
- `dashboardFilterPattern` / `chartFilterPattern` / `dataModelFilterPattern`: Note that all of them support regex as include or exclude. E.g., "My dashboard, My dash.*, .*Dashboard".
- `overrideOwner`: Flag to override current owner by new owner from source, if found during metadata ingestion.
- `includeTags`: Set the 'Include Tags' toggle to control whether to include tags as part of metadata ingestion.
- `includeDataModels`: Set the 'Include Data Models' toggle to control whether to include data models as part of metadata ingestion.
- `markDeletedDashboards`: Set the Mark Deleted Dashboards toggle to flag dashboards as soft-deleted if they are not present anymore in the source system.
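Conceptually, marking deleted dashboards boils down to a set difference between what was previously ingested and what the source currently returns. A hedged sketch of that idea (names and structure are illustrative only, not OpenMetadata's actual implementation):

```python
def dashboards_to_soft_delete(previously_ingested, found_in_source):
    """Illustrative only: dashboards recorded in the catalog that are no
    longer present in the source are candidates for soft deletion."""
    return sorted(set(previously_ingested) - set(found_in_source))

stored = ["sales", "marketing", "legacy_report"]
current = ["sales", "marketing"]
print(dashboards_to_soft_delete(stored, current))  # ['legacy_report']
```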

```yaml

@ -77,6 +77,7 @@ source:
overrideOwner: True
markDeletedDashboards: True
includeTags: True
includeDataModels: True
# dbServiceNames:
# - service1
# - service2
@ -94,6 +95,13 @@ source:
# excludes:
# - chart3
# - chart4
# dataModelFilterPattern:
# includes:
# - datamodel1
# - datamodel2
# excludes:
# - datamodel3
# - datamodel4
sink:
type: metadata-rest
config: {}
@ -136,6 +144,7 @@ source:
overrideOwner: True
markDeletedDashboards: True
includeTags: True
includeDataModels: True
type: DashboardMetadata
# dbServiceNames:
# - service1
@ -154,6 +163,13 @@ source:
# excludes:
# - chart3
# - chart4
# dataModelFilterPattern:
# includes:
# - datamodel1
# - datamodel2
# excludes:
# - datamodel3
# - datamodel4
sink:
type: metadata-rest
config: {}
@ -201,6 +217,7 @@ source:
overrideOwner: True
markDeletedDashboards: True
includeTags: True
includeDataModels: True
# dbServiceNames:
# - service1
# - service2
@ -218,6 +235,13 @@ source:
# excludes:
# - chart3
# - chart4
# dataModelFilterPattern:
# includes:
# - datamodel1
# - datamodel2
# excludes:
# - datamodel3
# - datamodel4
sink:
type: metadata-rest
config: {}
@ -248,9 +272,10 @@ workflowConfig:

The `sourceConfig` is defined [here](https://github.com/open-metadata/OpenMetadata/blob/main/openmetadata-spec/src/main/resources/json/schema/metadataIngestion/dashboardServiceMetadataPipeline.json):

- `dbServiceNames`: Database Service Name for the creation of lineage, if the source supports it.
- `dashboardFilterPattern` and `chartFilterPattern`: Note that the `dashboardFilterPattern` and `chartFilterPattern` both support regex as include or exclude. E.g.,
- `overrideOwner`: Flag to override current owner by new owner from source, if found during metadata ingestion
- `includeTags`: Set the Include tags toggle to control whether or not to include tags as part of metadata ingestion.
- `dashboardFilterPattern` / `chartFilterPattern` / `dataModelFilterPattern`: Note that all of them support regex as include or exclude. E.g., "My dashboard, My dash.*, .*Dashboard".
- `overrideOwner`: Flag to override current owner by new owner from source, if found during metadata ingestion.
- `includeTags`: Set the 'Include Tags' toggle to control whether to include tags as part of metadata ingestion.
- `includeDataModels`: Set the 'Include Data Models' toggle to control whether to include data models as part of metadata ingestion.
- `markDeletedDashboards`: Set the Mark Deleted Dashboards toggle to flag dashboards as soft-deleted if they are not present anymore in the source system.

```yaml