Mirror of https://github.com/open-metadata/OpenMetadata.git (synced 2025-12-11 23:36:25 +00:00)

[Docs] - Breaking changes (#11260)

* Fix link
* Content updates
* Add Breaking Changes
* Update openmetadata-docs-v1/content/v1.0.0/deployment/upgrade/index.md
* Update openmetadata-docs-v1/content/v1.0.0/releases/index.md
* Update docs
* doc: added sdk and api name change
* doc: fix module name in breaking change

Co-authored-by: Teddy <teddy.crepineau@gmail.com>
Co-authored-by: Nahuel <nahuel@getcollate.io>

Parent: be2aa8c1f9
Commit: 63f2b6772e
@@ -7,129 +7,104 @@ slug: /deployment/upgrade

## Releases

The OpenMetadata community will be doing feature releases and stable releases.

- Feature releases are for upgrading your sandbox or POC environments, giving feedback to the community, and surfacing any potential bugs that the community needs to fix.
- Stable releases are for upgrading your production environments and sharing OpenMetadata with your users.

## 1.0 - Stable Release 🎉

OpenMetadata 1.0 is a stable release. Please check the [release notes](/releases/latest-release).

If you are upgrading a production environment, this is the recommended version to upgrade to.
## Breaking Changes for 1.0 Stable Release

### Airflow Configuration & Pipeline Service Client

The section of the `openmetadata.yaml` configuration for the Pipeline Service Client has been updated.

We now have a `pipelineServiceClientConfiguration` section, instead of the old [airflowConfiguration](https://github.com/open-metadata/OpenMetadata/blob/0.13.3/conf/openmetadata.yaml#L214).

```yaml
pipelineServiceClientConfiguration:
  className: ${PIPELINE_SERVICE_CLIENT_CLASS_NAME:-"org.openmetadata.service.clients.pipeline.airflow.AirflowRESTClient"}
  apiEndpoint: ${PIPELINE_SERVICE_CLIENT_ENDPOINT:-http://localhost:8080}
  metadataApiEndpoint: ${SERVER_HOST_API_URL:-http://localhost:8585/api}
  hostIp: ${PIPELINE_SERVICE_CLIENT_HOST_IP:-""}
  verifySSL: ${PIPELINE_SERVICE_CLIENT_VERIFY_SSL:-"no-ssl"} # Possible values are "no-ssl", "ignore", "validate"
  sslConfig:
    validate:
      certificatePath: ${PIPELINE_SERVICE_CLIENT_SSL_CERT_PATH:-""} # Local path for the Pipeline Service Client

  # Default required parameters for Airflow as Pipeline Service Client
  parameters:
    username: ${AIRFLOW_USERNAME:-admin}
    password: ${AIRFLOW_PASSWORD:-admin}
    timeout: ${AIRFLOW_TIMEOUT:-10}
```

Most existing environment variables remain the same, except for these three:
- `AIRFLOW_HOST_IP` → `PIPELINE_SERVICE_CLIENT_HOST_IP`
- `AIRFLOW_VERIFY_SSL` → `PIPELINE_SERVICE_CLIENT_VERIFY_SSL`
- `AIRFLOW_SSL_CERT_PATH` → `PIPELINE_SERVICE_CLIENT_SSL_CERT_PATH`

When upgrading, make sure to update these environment variables and, if deploying on Bare Metal, make sure to use the updated `openmetadata.yaml`.
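The rename above can be applied mechanically when migrating an environment mapping. A minimal Python sketch (the `migrate_env` helper is illustrative, not an OpenMetadata utility):

```python
# Mapping of the three renamed environment variables listed above.
RENAMED_VARS = {
    "AIRFLOW_HOST_IP": "PIPELINE_SERVICE_CLIENT_HOST_IP",
    "AIRFLOW_VERIFY_SSL": "PIPELINE_SERVICE_CLIENT_VERIFY_SSL",
    "AIRFLOW_SSL_CERT_PATH": "PIPELINE_SERVICE_CLIENT_SSL_CERT_PATH",
}

def migrate_env(env: dict) -> dict:
    """Return a copy of the environment with old variable names translated."""
    return {RENAMED_VARS.get(key, key): value for key, value in env.items()}

old_env = {"AIRFLOW_HOST_IP": "10.0.0.5", "AIRFLOW_USERNAME": "admin"}
print(migrate_env(old_env))
# {'PIPELINE_SERVICE_CLIENT_HOST_IP': '10.0.0.5', 'AIRFLOW_USERNAME': 'admin'}
```

Variables that were not renamed, such as `AIRFLOW_USERNAME`, pass through unchanged.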
### Deprecation Notice

- When configuring Bots, **JWT** tokens will be the preferred method of authentication. Any existing SSO-based service accounts will continue to work on 1.0, but will be fully deprecated in future releases.
- As we added the new Impala connector, we will remove the `impala` scheme from the Hive connector in the next release.

### API Endpoint Changes

The following endpoints have been renamed in 1.0:

|Previous Endpoint|New Endpoint|
|---|---|
|`api/v1`|**Removed**|
|`api/v1/services`|**Removed**|
|`api/v1/version`|`api/v1/system/version`|
|`api/v1/util/entities/count`|`api/v1/system/entities/count`|
|`api/v1/util/services/count`|`api/v1/system/services/count`|
|`api/v1/settings`|`api/v1/system/settings`|
|`api/v1/config`|`api/v1/system/config`|
|`api/v1/testSuite`|`api/v1/dataQuality/testSuites`|
|`api/v1/testCase`|`api/v1/dataQuality/testCases`|
|`api/v1/testDefinition`|`api/v1/dataQuality/testDefinitions`|
|`api/v1/automations/workflow`|`api/v1/automations/workflows`|
|`api/v1/events/subscription`|`api/v1/events/subscriptions`|
|`api/v1/analytic/reportData`|`api/v1/analytics/dataInsights/data`|
|`api/v1/analytics/webAnalyticEvent/`|`api/v1/analytics/web/events/`|
|`api/v1/indexResource/reindex`|`api/v1/search/reindex`|
|`api/v1/indexResource/reindex/status/{runMode}`|`api/v1/search/reindex/status/{runMode}`|

You can find the full list of API path changes in the following [issue](https://github.com/open-metadata/OpenMetadata/issues/9259).
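If you have scripts that call these endpoints directly, the table above can be turned into a simple lookup when updating them. A minimal sketch (only a few of the renames are included; `migrate_path` is our own helper, not part of any SDK):

```python
# A subset of the renamed endpoints from the table above, as an old -> new lookup.
RENAMED_ENDPOINTS = {
    "api/v1/version": "api/v1/system/version",
    "api/v1/settings": "api/v1/system/settings",
    "api/v1/testSuite": "api/v1/dataQuality/testSuites",
    "api/v1/events/subscription": "api/v1/events/subscriptions",
}

def migrate_path(path: str) -> str:
    """Translate a pre-1.0 API path to its 1.0 equivalent (identity if unchanged)."""
    return RENAMED_ENDPOINTS.get(path, path)

print(migrate_path("api/v1/testSuite"))  # api/v1/dataQuality/testSuites
print(migrate_path("api/v1/tables"))     # api/v1/tables (unchanged)
```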
### Sample Data Deprecation

The `SampleData` service has been deprecated. It is now a `CustomConnector`. If you have entities in `SampleData`, please DELETE the service if you don't want to keep them, or we can help you migrate them to a Custom Connector.

Note that this service type was mostly used in quickstarts and tests to add example assets into OpenMetadata. This change should be transparent for most users.

### Location Entity

We are deprecating the `Location` Entity in favor of Containers and the new Storage Service:
- Dropping the `location_entity` table,
- Removing the `Location` APIs.

If you did not have any custom implementation, no action is needed: this entity was only partially used in the Glue Database connector, and the information was not being actively shown.

If you had custom implementations on top of the `Location` APIs, reach out to us, and we can help you migrate to the new Storage Services.

### AWS Connectors

The `endpointURL` property is now formatted as a proper URI, e.g., `http://something.com`. If you have added this configuration in your connectors, please update the `endpointURL` to use the right scheme.

Note that this property is OPTIONAL; in most cases it will either be left blank or already be configured with the right format, e.g., `s3://...`.
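A quick way to check whether an `endpointURL` value already carries a scheme is to parse it with the standard library. A minimal sketch (the `has_valid_scheme` helper is ours, for illustration only):

```python
from urllib.parse import urlparse

def has_valid_scheme(endpoint_url: str) -> bool:
    """Return True if the endpointURL includes a URI scheme, as 1.0 expects."""
    return urlparse(endpoint_url).scheme != ""

print(has_valid_scheme("http://something.com"))  # True
print(has_valid_scheme("s3://my-bucket"))        # True
print(has_valid_scheme("something.com"))         # False -> needs updating
```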
## 0.13.0 - Feature Release

OpenMetadata 0.13.0 is a **feature release**.

**Don't upgrade your production environment with the 0.13.0 feature release.**

Explore 0.13.0 by following the [Deployment guides](https://docs.open-metadata.org/deployment), and please give us any feedback on our [community Slack](https://slack.open-metadata.org).

### Python SDK Submodules name change

- **`metadata.test_suite.*`**: this submodule has been renamed to `metadata.data_quality.*`. You can view the full change [here](https://github.com/open-metadata/OpenMetadata/pull/10890/files)
- **`metadata.orm_profiler.*`**: this submodule has been renamed to `metadata.profiler.*`. You can view the full change [here](https://github.com/open-metadata/OpenMetadata/pull/10350/files)
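The two renames above only change the top-level package name; paths below it stay the same. A minimal sketch of rewriting an import path (the submodule paths below the top level are hypothetical examples, and `migrate_import` is our own helper):

```python
def migrate_import(module: str) -> str:
    """Rewrite a pre-1.0 SDK module path to the 1.0 package names."""
    renames = {
        "metadata.test_suite": "metadata.data_quality",
        "metadata.orm_profiler": "metadata.profiler",
    }
    for old, new in renames.items():
        # Match the package itself or any module nested inside it.
        if module == old or module.startswith(old + "."):
            return new + module[len(old):]
    return module

print(migrate_import("metadata.orm_profiler.api.workflow"))
# metadata.profiler.api.workflow
```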
## Backup Metadata
@@ -9,6 +9,10 @@ Upgrading from 0.12 to 0.13 can be done directly on your instances. This page wi

## 0.13.2 Highlights

OpenMetadata 0.13.2 is a stable release. Please check the [release notes](https://github.com/open-metadata/OpenMetadata/releases/tag/0.13.1-release).

If you are upgrading a production environment, this is the recommended version to upgrade to.

### Service Connection Updates

- Oracle:
@@ -21,8 +25,71 @@ Upgrading from 0.12 to 0.13 can be done directly on your instances. This page wi

- Added: `connection` field; the connection can be of type `ApiConnection`, `postgresConnection` or `mysqlConnection`
- Removed: `username`, `password` & `provider` fields, as these fields are now part of `ApiConnection`

## Breaking Changes for 0.13.2 Stable Release

### EntityName

To better manage and harmonize `entityName` values, and to allow users to form better expectations around them, the team introduced enforcement of the `entityName` format using regex patterns.

The `entityName` field of all OpenMetadata entities will enforce the following regex pattern by default:
- `^[\w'\- .&]+$`: matches any word character (equivalent to `[a-zA-Z0-9_]`) or the characters `'- .&`. For example, when creating a pipeline service, the name `MyPipelineIngestion?!` will not be allowed, while `My-Pipeline ` will be.

Some entities enforce specific patterns:
- Users: `([\w\-.]|[^@])+$` matches any word character (equivalent to `[a-zA-Z0-9_]`) or the characters `-.`, and does not match the character `@`
- Webhook: `^[\w'\-.]+$` matches any word character (equivalent to `[a-zA-Z0-9_]`) or the characters `'-.`
- Table: `^[\w'\- ./]+$` matches any word character (equivalent to `[a-zA-Z0-9_]`) or the characters `'- ./`
- Location: `^[\w'\-./]+$` matches any word character (equivalent to `[a-zA-Z0-9_]`) or the characters `'-./`
- Type: `^[a-z][\w]+$` matches any word characters (equivalent to `[a-zA-Z0-9_]`) starting with a lowercase letter (e.g., `tHisChar` will match, `ThisChar` will not)

If an entity name does not follow its pattern, an error will be returned by the OpenMetadata platform.

**The change should be transparent for the end user and no action should be required.**
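These patterns can be checked locally before creating entities. A minimal sketch using Python's `re` module (the patterns are copied from the list above; the `is_valid` helper is our own, not part of the OpenMetadata SDK):

```python
import re

# Default pattern enforced on most entityName fields.
DEFAULT_ENTITY_NAME = re.compile(r"^[\w'\- .&]+$")
# Two of the entity-specific patterns.
TABLE_NAME = re.compile(r"^[\w'\- ./]+$")
TYPE_NAME = re.compile(r"^[a-z][\w]+$")

def is_valid(pattern: re.Pattern, name: str) -> bool:
    """Return True if the name satisfies the given entityName pattern."""
    return pattern.match(name) is not None

# Examples mirroring the docs: '?' and '!' are rejected by the default pattern.
print(is_valid(DEFAULT_ENTITY_NAME, "MyPipelineIngestion?!"))  # False
print(is_valid(DEFAULT_ENTITY_NAME, "My-Pipeline "))           # True
print(is_valid(TYPE_NAME, "tHisChar"), is_valid(TYPE_NAME, "ThisChar"))  # True False
```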

### EntityLink

Similar to the implementation done for `entityName`, `entityLink` will now enforce a specific pattern. The structure of `entityLink` is of the form `<#E::{entities}::{entityType}::{field}::{fieldName}::{fieldValue}>`.

The `entityLink` field of all OpenMetadata entities will enforce the following regex pattern by default:
- `^<#E::\w+::[\w'\- .&/\:+\"\\]+>$`: this means that the `{entities}` value needs to match any word characters (equivalent to `[a-zA-Z0-9_]`), and the part after `{entities}` can match any word characters (equivalent to `[a-zA-Z0-9_]`) or the characters `'- .&/\:+\"\`

If an entity link does not follow the pattern, an error will be returned by the OpenMetadata platform.

**The change should be transparent for the end user and no action should be required.**
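The entityLink pattern can also be exercised locally. A minimal sketch (the `make_entity_link` helper and the fully qualified name used are illustrative, not SDK API):

```python
import re

# The entityLink pattern described above.
ENTITY_LINK = re.compile(r"^<#E::\w+::[\w'\- .&/\:+\"\\]+>$")

def make_entity_link(entity_type: str, fqn: str) -> str:
    """Build a simple entityLink string for an entity identified by its FQN."""
    return f"<#E::{entity_type}::{fqn}>"

link = make_entity_link("table", "my_service.db.schema.users")
print(ENTITY_LINK.match(link) is not None)  # True
```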

### Tags API

Tags APIs were coded long before other entities' APIs were added. Because of this, the Tags API did not follow the convention that all the other entities follow. This release makes backward-incompatible changes so that tags follow the same convention as glossaries.

You can find the full list of API path changes in the following [issue](https://github.com/open-metadata/OpenMetadata/issues/9259).

### Metabase and Domo Dashboards `name`

With the new restrictions on `EntityName`, and to ensure the uniqueness of assets, the **Metabase** and **Domo Dashboard** sources now ingest the `name` of charts and dashboards using their internal ID value. Their `name` value will be used as the display name, but no longer as the OpenMetadata `name`.

The recommended approach here is to create a new service and ingest the metadata again. If you ingest from the same service, the assets in it will end up duplicated, as the `name` in OpenMetadata is used to identify each asset.

Let us know if you have any questions around this topic.
### Ingestion Framework sources directory structure

We have converted source modules (e.g., `redshift.py`) into packages containing all the necessary information (e.g., `redshift/metadata.py`). This has helped us reorganise functionality and focus on each connector independently.

If you're extending any of the sources, you'll need to update your imports. You can take a look at the new structure [here](https://github.com/open-metadata/OpenMetadata/tree/main/ingestion/src/metadata/ingestion/source).

### MySQL Helm Chart Version Reverted to 8.8.23

The OpenMetadata Helm Chart release with application version `0.13.2` reverts the Bitnami MySQL Helm Chart version from `9.2.1` to `8.8.23`. This is a breaking change, and users will face the issue described in the documentation [here](/deployment/upgrade/kubernetes#mysql-pod-fails-on-upgrade). Please note that the OpenMetadata Dependencies Helm Chart is not recommended for production use cases. The steps mentioned in that section will help you fix the issue.

---

## 0.13.1 Highlights

OpenMetadata 0.13.1 is a stable release. Please check the [release notes](https://github.com/open-metadata/OpenMetadata/releases/tag/0.13.1-release).

### Deprecating botPrincipals from OpenMetadata Configuration

Starting with `0.13.0`, we have deprecated the initial configurations for Authorizer Bot Principals. This means that all
@@ -53,7 +120,7 @@ to `airflow_provider_openmetadata.lineage.backend.OpenMetadataLineageBackend`.

- We removed support for Airflow v1.
- The failure callback now only updates the pipeline status if the Pipeline already exists in OpenMetadata.

### Webhooks

OpenMetadata will be deprecating the existing webhooks for Slack and MS Teams.
@@ -68,3 +135,45 @@ Please use the same webhook config that you had saved from previous version to c

- We will remove the dbt configuration from your existing metadata ingestion pipelines, and they will keep working as expected.
- After upgrading, you will have to create the dbt workflow for the dbt ingestion to start working again.
- The dbt workflow can be configured by going to `services -> selecting your service -> ingestion tab -> Add dbt Ingestion`.

## Breaking Changes for 0.13.1 Stable Release

OpenMetadata Release 0.13.1 introduces the breaking changes below:

### Webhooks

Starting from 0.13.1, OpenMetadata will be deprecating the existing webhooks for Slack and MS Teams.

Before upgrading to 0.13.1, it is recommended to save the existing webhook configs (like the webhook URL) to use them later.

We have added Alerts/Notifications, which can be configured to receive customised alerts on updates in OpenMetadata using triggers and filtering information, delivered to different destinations like Slack, MS Teams, or even email. Please use the same webhook config that you saved from the previous version to configure the Alerts Destination after upgrading.

OpenMetadata Release 0.13.x also introduces the breaking changes below:

### Docker Volumes

OpenMetadata Release 0.13.x introduces default Docker volumes for the database (MySQL, PostgreSQL) and ElasticSearch with the Docker deployment.

- If you are looking for a fresh deployment of 0.13.x, see [here](https://docs.open-metadata.org/deployment/docker)
- If you are looking to upgrade to the new version, i.e., 0.13.x, see [here](https://docs.open-metadata.org/deployment/upgrade/docker)

### MySQL Helm Chart Version Updated to 9.2.1

The OpenMetadata Helm Chart release with application version `0.13.1` updates the Bitnami MySQL Helm Chart version from `8.8.23` to `9.2.1`. This is not a breaking change, but existing users trying to upgrade will experience a slight delay in the OpenMetadata Dependencies Helm Chart upgrade as it pulls the new Docker image for MySQL. Please note that the OpenMetadata Dependencies Helm Chart is not recommended for production use cases. Please follow the [kubernetes deployment](/deployment/kubernetes) guide for a new installation, or [upgrade kubernetes](/deployment/upgrade/kubernetes) for upgrading OpenMetadata in Kubernetes.

### dbt Workflow

dbt ingestion has been separated from the metadata ingestion. It can now be configured as a separate workflow after completing the metadata ingestion workflow.

We will remove the dbt configuration from your existing metadata ingestion pipelines, and they will keep working as expected.

After upgrading, you will have to create the dbt workflow for the dbt ingestion to start working again.

### Airflow Lineage Backend

- The import for the Airflow Lineage Backend has been updated from `airflow_provider_openmetadata.lineage.openmetadata.OpenMetadataLineageBackend` to `airflow_provider_openmetadata.lineage.backend.OpenMetadataLineageBackend`.
- We removed support for Airflow v1.
- The failure callback now only updates the pipeline status if the Pipeline already exists in OpenMetadata.
@@ -25,4 +25,92 @@ Upgrading from 0.13 to 1.0 can be done directly on your instances. This page wil

- Superset:
  - Removed: `connectionOptions`
    - This field was not being used anywhere, hence it has been removed.

## Breaking Changes for 1.0 Stable Release

### Airflow Configuration & Pipeline Service Client

The section of the `openmetadata.yaml` configuration for the Pipeline Service Client has been updated.

We now have a `pipelineServiceClientConfiguration` section, instead of the old [airflowConfiguration](https://github.com/open-metadata/OpenMetadata/blob/0.13.3/conf/openmetadata.yaml#L214).

```yaml
pipelineServiceClientConfiguration:
  className: ${PIPELINE_SERVICE_CLIENT_CLASS_NAME:-"org.openmetadata.service.clients.pipeline.airflow.AirflowRESTClient"}
  apiEndpoint: ${PIPELINE_SERVICE_CLIENT_ENDPOINT:-http://localhost:8080}
  metadataApiEndpoint: ${SERVER_HOST_API_URL:-http://localhost:8585/api}
  hostIp: ${PIPELINE_SERVICE_CLIENT_HOST_IP:-""}
  verifySSL: ${PIPELINE_SERVICE_CLIENT_VERIFY_SSL:-"no-ssl"} # Possible values are "no-ssl", "ignore", "validate"
  sslConfig:
    validate:
      certificatePath: ${PIPELINE_SERVICE_CLIENT_SSL_CERT_PATH:-""} # Local path for the Pipeline Service Client

  # Default required parameters for Airflow as Pipeline Service Client
  parameters:
    username: ${AIRFLOW_USERNAME:-admin}
    password: ${AIRFLOW_PASSWORD:-admin}
    timeout: ${AIRFLOW_TIMEOUT:-10}
```

Most existing environment variables remain the same, except for these three:
- `AIRFLOW_HOST_IP` → `PIPELINE_SERVICE_CLIENT_HOST_IP`
- `AIRFLOW_VERIFY_SSL` → `PIPELINE_SERVICE_CLIENT_VERIFY_SSL`
- `AIRFLOW_SSL_CERT_PATH` → `PIPELINE_SERVICE_CLIENT_SSL_CERT_PATH`

When upgrading, make sure to update these environment variables and, if deploying on Bare Metal, make sure to use the updated `openmetadata.yaml`.

### Deprecation Notice

- When configuring Bots, **JWT** tokens will be the preferred method of authentication. Any existing SSO-based service accounts will continue to work on 1.0, but will be fully deprecated in future releases.
- As we added the new Impala connector, we will remove the `impala` scheme from the Hive connector in the next release.

### API Endpoint Changes

The following endpoints have been renamed in 1.0:

|Previous Endpoint|New Endpoint|
|---|---|
|`api/v1`|**Removed**|
|`api/v1/services`|**Removed**|
|`api/v1/version`|`api/v1/system/version`|
|`api/v1/util/entities/count`|`api/v1/system/entities/count`|
|`api/v1/util/services/count`|`api/v1/system/services/count`|
|`api/v1/settings`|`api/v1/system/settings`|
|`api/v1/config`|`api/v1/system/config`|
|`api/v1/testSuite`|`api/v1/dataQuality/testSuites`|
|`api/v1/testCase`|`api/v1/dataQuality/testCases`|
|`api/v1/testDefinition`|`api/v1/dataQuality/testDefinitions`|
|`api/v1/automations/workflow`|`api/v1/automations/workflows`|
|`api/v1/events/subscription`|`api/v1/events/subscriptions`|
|`api/v1/analytic/reportData`|`api/v1/analytics/dataInsights/data`|
|`api/v1/analytics/webAnalyticEvent/`|`api/v1/analytics/web/events/`|
|`api/v1/indexResource/reindex`|`api/v1/search/reindex`|
|`api/v1/indexResource/reindex/status/{runMode}`|`api/v1/search/reindex/status/{runMode}`|

### Sample Data Deprecation

The `SampleData` service has been deprecated. It is now a `CustomConnector`. If you have entities in `SampleData`, please DELETE the service if you don't want to keep them, or we can help you migrate them to a Custom Connector.

Note that this service type was mostly used in quickstarts and tests to add example assets into OpenMetadata. This change should be transparent for most users.

### Location Entity

We are deprecating the `Location` Entity in favor of Containers and the new Storage Service:
- Dropping the `location_entity` table,
- Removing the `Location` APIs.

If you did not have any custom implementation, no action is needed: this entity was only partially used in the Glue Database connector, and the information was not being actively shown.

If you had custom implementations on top of the `Location` APIs, reach out to us, and we can help you migrate to the new Storage Services.

### AWS Connectors

The `endpointURL` property is now formatted as a proper URI, e.g., `http://something.com`. If you have added this configuration in your connectors, please update the `endpointURL` to use the right scheme.

Note that this property is OPTIONAL; in most cases it will either be left blank or already be configured with the right format, e.g., `s3://...`.

### Python SDK Submodules name change

- **`metadata.test_suite.*`**: this submodule has been renamed to `metadata.data_quality.*`. You can view the full change [here](https://github.com/open-metadata/OpenMetadata/pull/10890/files)
- **`metadata.orm_profiler.*`**: this submodule has been renamed to `metadata.profiler.*`. You can view the full change [here](https://github.com/open-metadata/OpenMetadata/pull/10350/files)
@@ -11,21 +11,21 @@ You can find further information about specific version upgrades in the followin

{% inlineCallout
color="violet-70"
icon="10k"
bold="Upgrade to 0.12"
href="/deployment/upgrade/versions/011-to-012" %}
Upgrade from 0.11 to 0.12 in place.
{% /inlineCallout %}
{% inlineCallout
color="violet-70"
icon="10k"
bold="Upgrade to 0.13"
href="/deployment/upgrade/versions/012-to-013" %}
Upgrade from 0.12 to 0.13 in place.
{% /inlineCallout %}
{% inlineCallout
color="violet-70"
icon="10k"
bold="Upgrade to 1.0"
href="/deployment/upgrade/versions/013-to-100" %}
Upgrade from 0.13 to 1.0 in place.
{% /inlineCallout %}
@@ -56,6 +56,9 @@ site_menu:

  - category: Deployment / Kubernetes Deployment / GKE Troubleshooting
    url: /deployment/kubernetes/gke-troubleshooting

  - category: Deployment / Airflow
    url: /deployment/airflow

  - category: Deployment / Enable Security
    url: /deployment/security
  - category: Deployment / Enable Security / Basic Authentication
@@ -160,6 +163,9 @@ site_menu:

  - category: Deployment / Enable Secrets Manager / How to add a new implementation
    url: /deployment/secrets-manager/how-to-add-a-new-implementation

  - category: Deployment / Server Configuration Reference
    url: /deployment/configuration

  - category: Deployment / Upgrade OpenMetadata
    url: /deployment/upgrade
  - category: Deployment / Upgrade OpenMetadata / Upgrade on Bare Metal
@ -180,12 +186,6 @@ site_menu:
|
||||
- category: Deployment / Backup & Restore Metadata
|
||||
url: /deployment/backup-restore-metadata
|
||||
|
||||
- category: Deployment / Server Configuration Reference
|
||||
url: /deployment/configuration
|
||||
|
||||
- category: Deployment / Airflow
|
||||
url: /deployment/airflow
|
||||
|
||||
- category: Connectors
|
||||
url: /connectors
|
||||
color: violet-70
|
||||
|
||||
@@ -5,41 +5,64 @@ slug: /releases

 # 1.0 Release 🎉

-## Ingestion
-- We are improving the overall UX and UI around creating new connections to your sources. When integrating your systems, you will now have detailed documentation on all the necessary information directly in the app.
-- Testing the connection is no longer an OK/KO response. We are testing every internal step of the metadata extraction process to let you know which specific permissions you might be missing, and if all or only partial metadata will be ingested based on that.
-- We have improved the performance of multiple connectors (e.g., Redshift) by fetching as much information as possible in bulk.
-- We are providing more levers for you to tune how you want the ingestion to behave, enabling or disabling the ingestion of tags or owners.
-- We have improved the parsing process and the overall performance of the dbt workflows
-- New Impala Connector. In the next release we'll remove the impala schemes from the Hive connector
+## APIs & Schema
+- **Stabilized** and improved the Schemas and APIs.
+- The APIs are **backward compatible**.

-## Data Models
-- Dashboard Services now support the concept of Data Models: data that can be directly defined and managed in the Dashboard tooling itself, such as LookML models in Looker.
-- Data Models will help us close the gap between engineering and business by providing all the necessary metadata from sources typically used and managed by analysts or business users.
-- The first implementation has been done for Tableau and Looker.
+## Ingestion
+- Connecting to your data sources has never been easier. Find all the necessary **permissions** and **connection details** directly in the UI.
+- When testing the connection, we now have a comprehensive list of **validations** to let you know which pieces of metadata can be extracted with the provided configuration.
+- **Performance** improvements when extracting metadata from sources such as Snowflake, Redshift, Postgres, and dbt.
+- New **Apache Impala** connector.

 ## Storage Services
-- Based on all your feedback, we have added a new way to handle Storage Services. Thank you for your ideas and contributions.
-- The Data Lake connector ingested one table per file, which covered only some of the use cases in a Data Platform.
-- With the new Storage Services, you now have complete control over how you want to present your data lakes in OpenMetadata.
-- The first implementation has been done on S3, and you can specify your tables and partitions and see them reflected with the rest of your metadata.
-- This has been a major contribution from Cristian Calugaru, Principal Engineer @Forter
+- Based on your [feedback](https://github.com/open-metadata/OpenMetadata/discussions/8124), we created a new service to extract metadata from your **cloud storage**.
+- The Data Lake connector ingested one table per file, which covered only some of the use cases in a Data Platform. With **Storage Services**, you can now present accurate metadata from your tables, even when **partitioned**.
+- The first implementation has been done on **S3**, and we will keep adding support for other sources in the upcoming releases.

-## Query as an Entity & UI Overhaul
-- While we were already ingesting queries in the Usage Workflows, their presentation and the overall interaction with users in the platform was lacking.
-- In this release, we allow users to also manually enter the queries they want to share with the rest of their peers, and discuss and react to the other present queries in each table.
+## Dashboard Data Models
+- Dashboard Services now support the concept of **Data Models**: data that can be directly defined and managed in the Dashboard tooling itself, e.g., LookML models in Looker.
+- Data Models will help us close the gap between engineering and business by providing all the necessary metadata from sources typically used and managed by analysts or business users.
+- The first implementation has been done for **Tableau** and **Looker**.

-## Security:
-- Added SAML support
+## Queries
+- Improved UI for **SQL Queries**, with faster loading times and allowing users to **vote** for popular queries!
+- Users can now create and share a **Query** directly from the UI, linking it to multiple tables if needed.

+## Localization
+- In 1.0, we have added **Localization** support for OpenMetadata.
+- Now you can use OpenMetadata in **English**, **French**, **Chinese**, **Japanese**, **Portuguese**, and **Spanish**.

+## Glossary
+- New and Improved **Glossary UI**
+- Easily search for Glossaries and any Glossary Term directly in the **global search**.
+- Instead of searching and tagging their assets individually, users can add Glossary Terms to multiple **assets** from the Glossary UI.

+## Auto PII Classification
+- Implemented an automated way to **tag PII data**.
+- The auto-classification is an optional step of the **Profiler** workflow. We will analyze the column names, and if sample data is being ingested, we will run NLP models on top of it.

+## Search
+- **Improved Relevancy**, with added support for partial matches.
+- **Improved Ranking**, with most used or higher Tier assets at the top of the search.
+- Support for **Classifications** and **Glossaries** in the global search.

+## Security
+- **SAML** support has been added.
+- Added option to mask passwords in the API response except for the `ingestion-bot` by setting the environment variable `MASK_PASSWORDS_API=true`.
-- **[DEPRECATION NOTICE]** SSO Service accounts for Bots will be deprecated. JWT authentication will be the preferred method.
+- **Deprecation Notice**: **SSO** Service accounts for Bots will be deprecated. **JWT** authentication will be the preferred method for creating Bots.

-## Auto PII Classification:
-- During the profiler workflow, users can now choose to have PII data automatically tagged as such using NLP models on the ingested sample data.
+## Lineage
+- Enhanced Lineage UI to display a large number of **nodes (1000+)**.
+- Improved UI for **better navigation**.
+- Improved **SQL parser** to extract lineage in the Lineage Workflows.

-## Global search
-- You can search for glossary terms/tags from the global search.
+## Chrome Browser Extension
+- All the metadata is at your fingertips while browsing Looker, Superset, etc., with the OpenMetadata Chrome Browser Extension.
+- **Chrome extension** supports Google SSO, Azure SSO, Okta, and AWS Cognito authentication.
+- You can Install the Chrome extension from **Chrome Web Store**.

-## Localisation
-- Full support of English (US) language with partial support of languages like: French, Chinese, Japanese, Portuguese and Spanish.
-- We are happy to have your contributions to the above languages you can find the details for your contribution [here](/how-to-guides/how-to-add-language-support#how-to-add-language-support).
+## Other Changes
+- The **Explore page** cards will now display a maximum of **ten tags**.
+- **Entity names** support apostrophes.
+- The **Summary panel** has been improved to be consistent across the UI.

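The password-masking option mentioned in the Security notes is driven by an environment variable on the OpenMetadata server. A minimal sketch of enabling it before starting the server — the variable name comes from the release notes; how you launch the server depends on your deployment:

```shell
# Mask connection passwords in API responses for every bot except the
# ingestion-bot, as described in the 1.0 Security notes.
export MASK_PASSWORDS_API=true

# Sanity-check that the variable is visible to the server process you start next.
echo "MASK_PASSWORDS_API=$MASK_PASSWORDS_API"
```

For Docker or Kubernetes deployments, set the same variable through your container environment instead of a shell export.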
@@ -12,6 +12,69 @@ version. To see what's coming in next releases, please check our [Roadmap](/rele

 {% /note %}

+# 1.0 Release 🎉
+
+## APIs & Schema
+- **Stabilized** and improved the Schemas and APIs.
+- The APIs are **backward compatible**.
+
+## Ingestion
+- Connecting to your data sources has never been easier. Find all the necessary **permissions** and **connection details** directly in the UI.
+- When testing the connection, we now have a comprehensive list of **validations** to let you know which pieces of metadata can be extracted with the provided configuration.
+- **Performance** improvements when extracting metadata from sources such as Snowflake, Redshift, Postgres, and dbt.
+- New **Apache Impala** connector.
+
+## Storage Services
+- Based on your [feedback](https://github.com/open-metadata/OpenMetadata/discussions/8124), we created a new service to extract metadata from your **cloud storage**.
+- The Data Lake connector ingested one table per file, which covered only some of the use cases in a Data Platform. With **Storage Services**, you can now present accurate metadata from your tables, even when **partitioned**.
+- The first implementation has been done on **S3**, and we will keep adding support for other sources in the upcoming releases.
+
+## Dashboard Data Models
+- Dashboard Services now support the concept of **Data Models**: data that can be directly defined and managed in the Dashboard tooling itself, e.g., LookML models in Looker.
+- Data Models will help us close the gap between engineering and business by providing all the necessary metadata from sources typically used and managed by analysts or business users.
+- The first implementation has been done for **Tableau** and **Looker**.
+
+## Queries
+- Improved UI for **SQL Queries**, with faster loading times and allowing users to **vote** for popular queries!
+- Users can now create and share a **Query** directly from the UI, linking it to multiple tables if needed.
+
+## Localization
+- In 1.0, we have added **Localization** support for OpenMetadata.
+- Now you can use OpenMetadata in **English**, **French**, **Chinese**, **Japanese**, **Portuguese**, and **Spanish**.
+
+## Glossary
+- New and Improved **Glossary UI**
+- Easily search for Glossaries and any Glossary Term directly in the **global search**.
+- Instead of searching and tagging their assets individually, users can add Glossary Terms to multiple **assets** from the Glossary UI.
+
+## Auto PII Classification
+- Implemented an automated way to **tag PII data**.
+- The auto-classification is an optional step of the **Profiler** workflow. We will analyze the column names, and if sample data is being ingested, we will run NLP models on top of it.
+
+## Search
+- **Improved Relevancy**, with added support for partial matches.
+- **Improved Ranking**, with most used or higher Tier assets at the top of the search.
+- Support for **Classifications** and **Glossaries** in the global search.
+
+## Security
+- **SAML** support has been added.
+- **Deprecation Notice**: **SSO** Service accounts for Bots will be deprecated. **JWT** authentication will be the preferred method for creating Bots.
+
+## Lineage
+- Enhanced Lineage UI to display a large number of **nodes (1000+)**.
+- Improved UI for **better navigation**.
+- Improved **SQL parser** to extract lineage in the Lineage Workflows.
+
+## Chrome Browser Extension
+- All the metadata is at your fingertips while browsing Looker, Superset, etc., with the OpenMetadata Chrome Browser Extension.
+- **Chrome extension** supports Google SSO, Azure SSO, Okta, and AWS Cognito authentication.
+- You can Install the Chrome extension from **Chrome Web Store**.
+
+## Other Changes
+- The **Explore page** cards will now display a maximum of **ten tags**.
+- **Entity names** support apostrophes.
+- The **Summary panel** has been improved to be consistent across the UI.
+
 # [0.13.3 Release](https://github.com/open-metadata/OpenMetadata/releases/tag/0.13.3-release) - March 30th 2023 🎉

 ## Ingestion Framework

@@ -5,40 +5,63 @@ slug: /releases/latest-release

 # 1.0 Release 🎉

-## Ingestion
-- We are improving the overall UX and UI around creating new connections to your sources. When integrating your systems, you will now have detailed documentation on all the necessary information directly in the app.
-- Testing the connection is no longer an OK/KO response. We are testing every internal step of the metadata extraction process to let you know which specific permissions you might be missing, and if all or only partial metadata will be ingested based on that.
-- We have improved the performance of multiple connectors (e.g., Redshift) by fetching as much information as possible in bulk.
-- We are providing more levers for you to tune how you want the ingestion to behave, enabling or disabling the ingestion of tags or owners.
-- We have improved the parsing process and the overall performance of the dbt workflows
-- New Impala Connector. In the next release we'll remove the impala schemes from the Hive connector
+## APIs & Schema
+- **Stabilized** and improved the Schemas and APIs.
+- The APIs are **backward compatible**.

-## Data Models
-- Dashboard Services now support the concept of Data Models: data that can be directly defined and managed in the Dashboard tooling itself, such as LookML models in Looker.
-- Data Models will help us close the gap between engineering and business by providing all the necessary metadata from sources typically used and managed by analysts or business users.
-- The first implementation has been done for Tableau and Looker.
+## Ingestion
+- Connecting to your data sources has never been easier. Find all the necessary **permissions** and **connection details** directly in the UI.
+- When testing the connection, we now have a comprehensive list of **validations** to let you know which pieces of metadata can be extracted with the provided configuration.
+- **Performance** improvements when extracting metadata from sources such as Snowflake, Redshift, Postgres, and dbt.
+- New **Apache Impala** connector.

 ## Storage Services
-- Based on all your feedback, we have added a new way to handle Storage Services. Thank you for your ideas and contributions.
-- The Data Lake connector ingested one table per file, which covered only some of the use cases in a Data Platform.
-- With the new Storage Services, you now have complete control over how you want to present your data lakes in OpenMetadata.
-- The first implementation has been done on S3, and you can specify your tables and partitions and see them reflected with the rest of your metadata.
-- This has been a major contribution from Cristian Calugaru, Principal Engineer @Forter
+- Based on your [feedback](https://github.com/open-metadata/OpenMetadata/discussions/8124), we created a new service to extract metadata from your **cloud storage**.
+- The Data Lake connector ingested one table per file, which covered only some of the use cases in a Data Platform. With **Storage Services**, you can now present accurate metadata from your tables, even when **partitioned**.
+- The first implementation has been done on **S3**, and we will keep adding support for other sources in the upcoming releases.

-## Query as an Entity & UI Overhaul
-- While we were already ingesting queries in the Usage Workflows, their presentation and the overall interaction with users in the platform was lacking.
-- In this release, we allow users to also manually enter the queries they want to share with the rest of their peers, and discuss and react to the other present queries in each table.
+## Dashboard Data Models
+- Dashboard Services now support the concept of **Data Models**: data that can be directly defined and managed in the Dashboard tooling itself, e.g., LookML models in Looker.
+- Data Models will help us close the gap between engineering and business by providing all the necessary metadata from sources typically used and managed by analysts or business users.
+- The first implementation has been done for **Tableau** and **Looker**.

-## Security:
-- Added SAML support
-- **[DEPRECATION NOTICE]** SSO Service accounts for Bots will be deprecated. JWT authentication will be the preferred method.
+## Queries
+- Improved UI for **SQL Queries**, with faster loading times and allowing users to **vote** for popular queries!
+- Users can now create and share a **Query** directly from the UI, linking it to multiple tables if needed.

-## Auto PII Classification:
-- During the profiler workflow, users can now choose to have PII data automatically tagged as such using NLP models on the ingested sample data.
+## Localization
+- In 1.0, we have added **Localization** support for OpenMetadata.
+- Now you can use OpenMetadata in **English**, **French**, **Chinese**, **Japanese**, **Portuguese**, and **Spanish**.

-## Global search
-- You can search for glossary terms/tags from the global search.
+## Glossary
+- New and Improved **Glossary UI**
+- Easily search for Glossaries and any Glossary Term directly in the **global search**.
+- Instead of searching and tagging their assets individually, users can add Glossary Terms to multiple **assets** from the Glossary UI.

-## Localisation
-- Full support of English (US) language with partial support of languages like: French, Chinese, Japanese, Portuguese and Spanish.
-- We are happy to have your contributions to the above languages you can find the details for your contribution [here](/how-to-guides/how-to-add-language-support#how-to-add-language-support).
+## Auto PII Classification
+- Implemented an automated way to **tag PII data**.
+- The auto-classification is an optional step of the **Profiler** workflow. We will analyze the column names, and if sample data is being ingested, we will run NLP models on top of it.

+## Search
+- **Improved Relevancy**, with added support for partial matches.
+- **Improved Ranking**, with most used or higher Tier assets at the top of the search.
+- Support for **Classifications** and **Glossaries** in the global search.

+## Security
+- **SAML** support has been added.
+- **Deprecation Notice**: **SSO** Service accounts for Bots will be deprecated. **JWT** authentication will be the preferred method for creating Bots.

+## Lineage
+- Enhanced Lineage UI to display a large number of **nodes (1000+)**.
+- Improved UI for **better navigation**.
+- Improved **SQL parser** to extract lineage in the Lineage Workflows.

+## Chrome Browser Extension
+- All the metadata is at your fingertips while browsing Looker, Superset, etc., with the OpenMetadata Chrome Browser Extension.
+- **Chrome extension** supports Google SSO, Azure SSO, Okta, and AWS Cognito authentication.
+- You can Install the Chrome extension from **Chrome Web Store**.

+## Other Changes
+- The **Explore page** cards will now display a maximum of **ten tags**.
+- **Entity names** support apostrophes.
+- The **Summary panel** has been improved to be consistent across the UI.

@@ -12,14 +12,14 @@ Here are the articles in this section:

 color="violet-70"
 icon="play_arrow"
 bold="Python SDK"
-href="sdk/python" %}
+href="/sdk/python" %}
 Presentation of a high-level Python API as a type-safe and gentle wrapper for the OpenMetadata backend.
 {% /inlineCallout %}
 {% inlineCallout
 color="violet-70"
 icon="play_arrow"
 bold="Java SDK"
-href="sdk/java" %}
-Cooming soon.
+href="/sdk/java" %}
+Provision, manage, and use OpenMetadata resources directly from your Java applications.
 {% /inlineCallout %}
 {% /inlineCalloutContainer %}

BIN openmetadata-docs-v1/images/connectors/quicksight.png (new file, 38 KiB; binary file not shown)