
Prerequisites

Every time you plan on upgrading OpenMetadata to a newer version, make sure to go over all of these steps:

Backup your Metadata

Before upgrading your OpenMetadata version, we strongly recommend backing up the metadata.

The source of truth is stored in the underlying database (MySQL and Postgres are supported). During each version upgrade, a database migration process needs to run. It will modify your database directly, updating the shape of the data to match the newest OpenMetadata release.

It is important to back up the data because, if you face any unexpected issues during the upgrade process, you will be able to roll back to the previous version without any data loss.

{% note %}

You can learn more about how the migration process works here.

Please note that the backup is only a safety measure during the upgrade; it should not be used to restore data to a higher version.

{% /note %}

Since version 1.4.0, OpenMetadata encourages using the database's built-in tools, mysqldump for MySQL and pg_dump for Postgres, to create logical backups of the metadata:
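For example, a minimal backup invocation from the command line could look like the sketch below. The host, user, and the openmetadata_db database name are assumptions; replace them with your own connection details.

```bash
# MySQL: create a logical backup with mysqldump
# (host, user, and database name are assumptions; adjust to your deployment)
mysqldump --host localhost --user openmetadata_user -p openmetadata_db > om_backup.sql

# Postgres: create a logical backup with pg_dump
pg_dump --host localhost --username openmetadata_user --dbname openmetadata_db > om_backup.sql
```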

For PROD deployments, we recommend relying on cloud services for your databases, be it AWS RDS, Azure SQL, or GCP Cloud SQL.

If you're a user of these services, you can leverage their backup capabilities directly. You can refer to the following guide for more details about backup and restore:

{% inlineCalloutContainer %} {% inlineCallout color="violet-70" icon="luggage" bold="Backup Metadata" href="/deployment/backup-restore-metadata" %} Learn how to back up MySQL or Postgres data. {% /inlineCallout %} {% /inlineCalloutContainer %}

Understanding the "Running" State in OpenMetadata

In OpenMetadata, the "Running" state indicates that the OpenMetadata server has received a response from Airflow confirming that a workflow is in progress. However, if Airflow unexpectedly stops or crashes before it can send a failure status update through the Failure Callback, OpenMetadata remains unaware of the workflow's actual state. As a result, the workflow may appear to be stuck in "Running" even though it is no longer executing.

This situation can also occur during an OpenMetadata upgrade. If an ingestion pipeline was running at the time of the upgrade and the process caused Airflow to shut down, OpenMetadata would not receive any further updates from Airflow. Consequently, the pipeline status remains "Running" indefinitely.

{% image src="/images/v1.8/deployment/upgrade/running-state-in-openmetadata.png" alt="Running State in OpenMetadata" caption="Running State in OpenMetadata" /%}

Expected Steps to Resolve

To resolve this issue:

  • Ensure that Airflow is restarted properly after an unexpected shutdown.
  • Manually update the pipeline status if necessary.
  • Check Airflow logs to verify whether the DAG execution was interrupted (see the sketch below).
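For that last check, assuming you have shell access to the Airflow host, you can list the runs that Airflow still considers in progress from its CLI. The DAG ID below is hypothetical; use the one shown in the Airflow UI.

```bash
# List runs of an ingestion DAG that Airflow still considers "running"
# (the DAG ID is hypothetical; use the ID shown in the Airflow UI)
airflow dags list-runs -d my_ingestion_pipeline --state running
```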

Update sort_buffer_size (MySQL) or work_mem (Postgres)

Before running the migrations, it is important to update these parameters to ensure there are no runtime errors. A safe value is 20MB.

If using MySQL

You can update it via SQL (note that it will reset after the server restarts):

SET GLOBAL sort_buffer_size = 20971520;

To make the configuration persistent, you'd need to navigate to your MySQL Server install directory and update the my.ini or my.cnf files with sort_buffer_size = 20971520.

If using RDS, you will need to update your instance's Parameter Group to include the above change.
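To confirm the change took effect, you can read the variable back from any client session. This is a quick sketch; the connection flags are assumptions to adapt to your environment.

```bash
# Read back the runtime value (20971520 bytes = 20 MB)
# (the user flag is an assumption; adjust to your environment)
mysql -u root -p -e "SHOW VARIABLES LIKE 'sort_buffer_size';"
```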

If using Postgres

You can update it via SQL (note that it will reset after the server restarts):

SET work_mem = '20MB';

To make the configuration persistent, you'll need to update the postgresql.conf file with work_mem = 20MB.
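For self-managed Postgres instances, a sketch that avoids hand-editing postgresql.conf is ALTER SYSTEM, which persists the setting to postgresql.auto.conf and requires superuser privileges. The connection flags below are assumptions; adjust them to your environment.

```bash
# Persist the setting without editing postgresql.conf, reload, then verify
psql -U postgres -c "ALTER SYSTEM SET work_mem = '20MB';"
psql -U postgres -c "SELECT pg_reload_conf();"
psql -U postgres -c "SHOW work_mem;"
```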

If using RDS, you will need to update your instance's Parameter Group to include the above change.

Note that the right value depends on the size of your query_entity table. If you still see "Out of Sort Memory" errors during the migration after bumping this value, you can increase it further.

After the migration is finished, you can revert these changes.

Backward Incompatible Changes

1.9.0

Strong validation of test case parameters

The name of each entry in a testCase's parameterValues will now be strongly validated against the parameterDefinition names in its testDefinition. If the parameter names do not match, an error will be thrown on testCase creation.
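As an illustration, the sketch below creates a testCase through the REST API. The endpoint path, token, and entity names are assumptions for this example; the point is that each name under parameterValues (minValue, maxValue) must match a parameterDefinition name declared by the chosen test definition.

```bash
# Hypothetical example: parameterValues names must match the test definition's
# parameterDefinition names, otherwise creation fails with a validation error
curl -X POST "http://localhost:8585/api/v1/dataQuality/testCases" \
  -H "Authorization: Bearer $OM_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "orders_row_count_check",
    "entityLink": "<#E::table::my_service.my_db.my_schema.orders>",
    "testDefinition": "tableRowCountToBeBetween",
    "parameterValues": [
      {"name": "minValue", "value": "100"},
      {"name": "maxValue", "value": "10000"}
    ]
  }'
```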

Multi-domain Support

All entities now support multiple domains. Their domain field has been renamed to domains and is modelled as a list of domains instead of a single domain.

If you're using the API or the SDK, you will need to update your code to use the new domains field instead of domain. We have also updated the patch_domain implementation, which now has a new signature to support the new domains field.
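For example, a JSON Patch request against the REST API now targets the domains list rather than a single domain. The IDs and token below are hypothetical placeholders; OpenMetadata's PATCH endpoints accept RFC 6902 JSON Patch documents.

```bash
# Hypothetical IDs/token: the patch targets "/domains" (a list), not "/domain"
curl -X PATCH "http://localhost:8585/api/v1/tables/<table-id>" \
  -H "Authorization: Bearer $OM_TOKEN" \
  -H "Content-Type: application/json-patch+json" \
  -d '[
    {"op": "add", "path": "/domains", "value": [{"id": "<domain-id>", "type": "domain"}]}
  ]'
```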

{% note %}

While the schema and APIs are all updated, multi-domain support is not enabled by default.

If you want to allow your assets to belong to multiple domains, you need to go to Settings > Preferences > Data Asset Rules and disable the "Multiple Domains are not allowed" rule.

{% /note %}