---
title: Backup Metadata
slug: /deployment/backup-restore-metadata
---
# Backup & Restore Metadata
## Introduction
Before upgrading your OpenMetadata version, we strongly recommend backing up the metadata.
The source of truth is stored in the underlying database (MySQL and PostgreSQL are supported). During each version upgrade,
a database migration process needs to run. It modifies your database directly, updating the shape of the
data to match the newest OpenMetadata release.

It is important to back up the data because, if you face any unexpected issue during the upgrade process,
you will be able to get back to the previous version without any loss.
{% note %}
You can learn more about how the migration process works [here](/deployment/upgrade/how-does-it-work).
{% /note %}
Since version 1.4.0, **OpenMetadata encourages using the built-in tools for creating logical backups of the metadata**:
- [mysqldump](https://dev.mysql.com/doc/refman/8.0/en/mysqldump.html) for MySQL
- [pg_dump](https://www.postgresql.org/docs/current/app-pgdump.html) for Postgres
For production deployments we recommend relying on cloud services for your databases, be it [AWS RDS](https://docs.aws.amazon.com/rds/),
[Azure SQL](https://azure.microsoft.com/en-in/products/azure-sql/database) or [GCP Cloud SQL](https://cloud.google.com/sql/).

If you're a user of these services, you can leverage their backup capabilities directly:
- [Creating a DB snapshot in AWS](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CreateSnapshot.html)
- [Backup and restore in Azure MySQL](https://learn.microsoft.com/en-us/azure/mysql/single-server/concepts-backup)
- [About GCP Cloud SQL backup](https://cloud.google.com/sql/docs/mysql/backup-recovery/backups)
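As a sketch of the AWS route above, a manual snapshot before an upgrade could look like the following. The instance name is hypothetical, and the AWS CLI must be installed and configured with credentials:

```shell
# Hypothetical RDS instance name; replace with your own
DB_INSTANCE="openmetadata-prod"
SNAPSHOT_ID="openmetadata-pre-upgrade-$(date +%Y%m%d)"

# Request a manual snapshot before running the upgrade
if command -v aws >/dev/null 2>&1; then
  aws rds create-db-snapshot \
    --db-instance-identifier "$DB_INSTANCE" \
    --db-snapshot-identifier "$SNAPSHOT_ID" \
    || echo "snapshot request failed; check credentials and instance name"
else
  echo "aws CLI not installed; skipping snapshot"
fi
```

Snapshots taken this way are kept until you delete them, unlike automated backups, which makes them a good fit for pre-upgrade checkpoints.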
## Requirements
- `mysqldump` 8.3 or higher
- `pg_dump` 13.3 or higher
If you're running the project with Docker Compose, the `ingestion` container already comes packaged with the
correct `mysqldump` and `pg_dump` versions, ready to use.
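To confirm the requirements above are met, you can print the versions of the tools available on your `PATH` (or inside the `ingestion` container). A minimal sketch:

```shell
# Print the version of each dump tool, if present
for tool in mysqldump pg_dump; do
  if command -v "$tool" >/dev/null 2>&1; then
    "$tool" --version
  else
    echo "$tool not found on PATH"
  fi
done
```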
## Storing the backup files
When you back up your database, it's important to keep the snapshot somewhere safe in case you need it later.
You can check these two examples on how to:
- Use pipes to stream the result directly to AWS S3 ([link](https://devcoops.com/pg_dump-to-s3-directly/)).
- Dump to a file and copy it to storage ([link](https://gist.github.com/bbcoimbra/0914c7e0f96e8ad53dfad79c64863c87)).
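The pipe approach from the first link can be sketched as follows. The bucket name is hypothetical, the connection details mirror the Docker example below, and both `pg_dump` and the AWS CLI must be installed and configured:

```shell
BACKUP_FILE="backup_$(date +%Y%m%d%H%M).sql.gz"
S3_BUCKET="s3://my-backup-bucket/openmetadata"  # hypothetical bucket

# Compress the dump in flight and stream it to S3 without keeping a local copy
if command -v pg_dump >/dev/null 2>&1 && command -v aws >/dev/null 2>&1; then
  pg_dump -U openmetadata_user -h postgresql -d openmetadata_db \
    | gzip \
    | aws s3 cp - "$S3_BUCKET/$BACKUP_FILE" \
    || echo "streamed backup failed; check connectivity and credentials"
else
  echo "pg_dump or aws CLI missing; skipping streamed backup"
fi
```

Streaming avoids needing local disk space for a large dump, at the cost of having to re-run the whole dump if the upload fails midway.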
## Example with Docker
Start a local instance of OpenMetadata using the `docker-compose` file provided in the repository. Then, use the following commands to back up the metadata:
### MySQL
#### 1. Start a local Docker deployment
```shell
docker/run_local_docker.sh
```
Ingest some data...
#### 2. Backup and Restore
```shell
BACKUP_FILE="backup_$(date +%Y%m%d%H%M).sql"
DOCKER_COMPOSE_FILE="docker/development/docker-compose.yml"
# backup (use -T so docker compose does not allocate a TTY that could corrupt the dump)
docker compose -f $DOCKER_COMPOSE_FILE exec -T ingestion mysqldump --no-tablespaces -u openmetadata_user -popenmetadata_password -h mysql -P 3306 openmetadata_db > $BACKUP_FILE
# create the restore database
docker compose -f $DOCKER_COMPOSE_FILE exec mysql mysql -u root -ppassword -e "create database restore;"
docker compose -f $DOCKER_COMPOSE_FILE exec mysql mysql -u root -ppassword -e "grant all privileges on restore.* to 'openmetadata_user'@'%';"
docker compose -f $DOCKER_COMPOSE_FILE exec mysql mysql -u root -ppassword -e "flush privileges;"
# restore from the backup
docker compose -f $DOCKER_COMPOSE_FILE exec -T ingestion mysql -u openmetadata_user -popenmetadata_password -h mysql -P 3306 restore < $BACKUP_FILE
```
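After restoring, you may want a quick sanity check that the `restore` database actually received the data. One hypothetical approach is comparing table counts between the two databases:

```shell
DOCKER_COMPOSE_FILE="docker/development/docker-compose.yml"

# Count the tables in each database; the two numbers should match
for db in openmetadata_db restore; do
  if command -v docker >/dev/null 2>&1; then
    docker compose -f "$DOCKER_COMPOSE_FILE" exec -T mysql \
      mysql -u root -ppassword -N -e \
      "SELECT COUNT(*) FROM information_schema.tables WHERE table_schema = '$db';" \
      || echo "could not query $db"
  else
    echo "docker not available; skipping check for $db"
  fi
done
```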
#### 3. Restart the docker deployment with the restored database
```shell
export OM_DATABASE=restore
docker compose -f $DOCKER_COMPOSE_FILE up -d
```
### PostgreSQL
#### 1. Start a local Docker deployment
```shell
docker/run_local_docker.sh -d postgres
```
Ingest some data...
#### 2. Backup and Restore
```shell
BACKUP_FILE="backup_$(date +%Y%m%d%H%M).sql"
DOCKER_COMPOSE_FILE="docker/development/docker-compose-postgres.yml"
# backup (use -T so docker compose does not allocate a TTY that could corrupt the dump)
docker compose -f $DOCKER_COMPOSE_FILE exec -T -e PGPASSWORD=openmetadata_password ingestion pg_dump -U openmetadata_user -h postgresql -d openmetadata_db > $BACKUP_FILE
# create the restore database
docker compose -f $DOCKER_COMPOSE_FILE exec -e PGPASSWORD=openmetadata_password postgresql psql -U postgres -c "create database restore;"
docker compose -f $DOCKER_COMPOSE_FILE exec -e PGPASSWORD=openmetadata_password postgresql psql -U postgres -c "ALTER DATABASE restore OWNER TO openmetadata_user;"
# restore from the backup
docker compose -f $DOCKER_COMPOSE_FILE exec -e PGPASSWORD=openmetadata_password -T ingestion psql -U openmetadata_user -h postgresql -d restore < $BACKUP_FILE
```
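As with MySQL, a quick sanity check after restoring can confirm the `restore` database contains the expected tables. A hypothetical sketch comparing table counts:

```shell
DOCKER_COMPOSE_FILE="docker/development/docker-compose-postgres.yml"

# Count the public tables in each database; the two numbers should match
for db in openmetadata_db restore; do
  if command -v docker >/dev/null 2>&1; then
    docker compose -f "$DOCKER_COMPOSE_FILE" exec -T -e PGPASSWORD=openmetadata_password postgresql \
      psql -U postgres -d "$db" -t -c \
      "SELECT count(*) FROM information_schema.tables WHERE table_schema = 'public';" \
      || echo "could not query $db"
  else
    echo "docker not available; skipping check for $db"
  fi
done
```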
#### 3. Restart the docker deployment with the restored database
```shell
export OM_DATABASE=restore
docker compose -f $DOCKER_COMPOSE_FILE up -d
```