Upgrading from 0.11 to 0.12 can be done directly on your instances. This page lists a few general details you should take into consideration when running the upgrade.
- Along with the `manifest.json` and `catalog.json` files, we now provide an option to ingest the `run_results.json` file generated by the dbt run and ingest the test results from it.
- The field to enter the `run_results.json` file path is optional for local and HTTP dbt configs. If it is provided, the test results will be ingested from this file.
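For illustration, here is a minimal sketch of a local dbt config with the new field. The `dbtConfigSource` block and the `dbtCatalogFilePath`/`dbtManifestFilePath` field names are assumptions based on the existing local dbt config; verify them against the generated JSON schema for your release.

```
# Hedged sketch of a local dbt config with the new optional run_results.json path.
# dbtConfigSource, dbtCatalogFilePath and dbtManifestFilePath are assumed field names;
# dbtRunResultsFilePath is the new optional field added in 0.12.
sourceConfig:
  config:
    dbtConfigSource:
      dbtCatalogFilePath: /path/to/catalog.json
      dbtManifestFilePath: /path/to/manifest.json
      dbtRunResultsFilePath: /path/to/run_results.json
```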
Snowflake users may experience a circular import error. This is a known issue with `snowflake-connector-python`. If you experience such an error, we recommend that you either 1) run the ingestion workflow in a Python 3.8 environment, or 2) if you cannot change your environment, set `threadCount` to 1. You can find more information on the profiler settings [here](/connectors/ingestion/workflows/profiler).
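As a reference for the workaround, below is a hedged sketch of a profiler workflow source configuration running single-threaded. Only the thread count is the relevant setting here; the service name and connector details are placeholders, and the exact field casing should be confirmed against the profiler pipeline schema.

```
# Hedged sketch: limit the profiler to a single thread to work around the
# snowflake-connector-python circular import. Service details are placeholders.
source:
  type: snowflake
  serviceName: my_snowflake_service
  sourceConfig:
    config:
      type: Profiler
      threadCount: 1   # run the profiler single-threaded
```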
Upgrading Airflow from 2.1.4 to 2.3 requires a few steps. If you are using your Airflow instance only to run OpenMetadata workflows, we recommend simply dropping the Airflow database. Connect to the database engine where your Airflow database lives, make a backup of the database, then drop and recreate it:
```
DROP DATABASE airflow_db;
CREATE DATABASE airflow_db CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
GRANT ALL PRIVILEGES ON airflow_db.* TO 'airflow_user'@'%' WITH GRANT OPTION;
```
If you would like to keep the existing data in your Airflow instance, or simply cannot drop your Airflow database, you will need to follow the steps below:
1. Make a backup of your database (this will come in handy if the migration fails and you need to perform any kind of restore).
2. Upgrade your Airflow instance from 2.1.x to 2.2.x.
3. Before upgrading to the 2.3.x version, perform the steps described in the Airflow documentation [here](https://airflow.apache.org/docs/apache-airflow/2.4.0/installation/upgrading.html#wrong-encoding-in-mysql-database) to make sure the character set/collation uses the correct encoding -- this has changed across MySQL versions.
4. Once the above has been performed, you should be able to upgrade to Airflow 2.3.x.
- **Oracle**: In `0.11.x` and previous releases, we were using the [Cx_Oracle](https://oracle.github.io/python-cx_Oracle/) driver to extract metadata from Oracle. The drawback of this driver was that it required the Oracle Client libraries to be installed on the host machine in order to run the ingestion. With the `0.12` release, we use the [python-oracledb](https://oracle.github.io/python-oracledb/) driver, which is an upgraded version of `Cx_Oracle`. `python-oracledb` in `Thin` mode does not need the Oracle Client libraries.
- **Azure SQL & MSSQL**: Azure SQL and MSSQL with the pyodbc scheme require an ODBC driver to be installed. With the `0.12` release, we ship `ODBC Driver 18 for SQL Server` out of the box in our ingestion Docker image.
- Removed: `connectionOptions` and `supportsProfiler`
- Looker
  - Renamed `username` to `clientId` and `password` to `clientSecret` to align with the internals required for the metadata extraction.
- Removed: `env`
- Oracle
- Removed: `databaseSchema` and `oracleServiceName` from the root.
  - Added: `oracleConnectionType`, which will contain either `oracleServiceName` or `databaseSchema`. This reduces confusion when setting up the connection (see the sketch after this list).
- Added: `dbtRunResultsFilePath` and `dbtRunResultsHttpPath`, where the path of the `run_results.json` file can be passed to get the test results data from dbt.
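As an example of the new Oracle connection shape, here is a minimal sketch assuming the YAML mirrors the nested `oracleConnectionType` field; hostnames and credentials are placeholders, and you should confirm the exact structure against the generated connection schema.

```
# Hedged sketch of an Oracle service connection using the new oracleConnectionType
# field. Use oracleServiceName or databaseSchema depending on how you connect.
serviceConnection:
  config:
    type: Oracle
    username: admin
    password: my_password
    hostPort: localhost:1521
    oracleConnectionType:
      oracleServiceName: ORCLPDB1   # or: databaseSchema: MY_SCHEMA
```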