
---
description: >-
  This installation doc will help you start an OpenMetadata standalone instance
  on your local machine.
---

# Run OpenMetadata

## Run Docker (Latest Release)

Docker is an open platform for developing, shipping, and running applications. It enables you to separate your applications from your infrastructure, using OS-level virtualization to deliver software in packages called containers.

{% hint style="info" %}
**Prerequisites**

* Docker >= 20.10.x
* Minimum allocated memory to Docker >= 4GB (Preferences -> Resources -> Advanced)
{% endhint %}
```bash
git clone https://github.com/open-metadata/OpenMetadata
cd OpenMetadata/docker/metadata
docker-compose up
```

### Next Steps

1. The OpenMetadata container depends on the MySQL container being up; it may take a few seconds to start.
2. Once the OpenMetadata UI is accessible, go to the Airflow UI to trigger the pipelines that ingest data.

The above command brings up all of the necessary services:

1. MySQL
2. ElasticSearch
3. OpenMetadata Server
4. Ingestion with Airflow

### To access OpenMetadata

Open http://localhost:8585 in your browser.

The Airflow UI is available at http://localhost:8080.
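The services may take a little while to come up. A small shell helper can block until a URL starts responding; this is a sketch, not part of OpenMetadata itself, and the URL and timeout in the example are assumptions matching the defaults above:

```shell
# Poll a URL until it responds successfully or a timeout (in seconds) expires.
wait_for_url() {
  url=$1
  timeout=${2:-120}
  elapsed=0
  until curl -fso /dev/null "$url"; do
    elapsed=$((elapsed + 5))
    [ "$elapsed" -ge "$timeout" ] && return 1
    sleep 5
  done
}

# Example: block until the OpenMetadata UI answers on its default port
# wait_for_url http://localhost:8585
```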

## Run Docker (Local Server)

{% hint style="info" %}
This Docker setup enables you to run a locally built OpenMetadata server and ingestion.

**Prerequisites**

* Docker >= 20.10.x
* Minimum allocated memory to Docker >= 4GB (Preferences -> Resources -> Advanced)
{% endhint %}

Run the script below to create a Maven build from your local checkout and start Docker with that build and ingestion.

```bash
# Run the script to create the Maven build and start Docker
git clone https://github.com/open-metadata/OpenMetadata
cd OpenMetadata
./docker/run_local_docker.sh
```

## Run Manually

{% hint style="success" %}
This quick start guide shows you how to run a standalone server.
{% endhint %}

### Download the distribution

#### Prerequisites

{% hint style="info" %}
OpenMetadata is built using Java, DropWizard, Jetty, and MySQL.

1. Java 11 or above
2. MySQL 8 or above
{% endhint %}

{% tabs %}
{% tab title="Download the release" %}
Download the latest binary release from OpenMetadata. Once you have the tar file:

```bash
# untar it
tar -zxvf openmetadata-0.5.0.tar.gz

# navigate to the directory containing the launcher scripts
cd openmetadata-0.5.0
```
{% endtab %}
{% endtabs %}

### Install on your local machine

#### macOS

1. Setup Database

   * Install MySQL

     ```bash
     brew install mysql
     ```

   * Configure MySQL

     ```bash
     mysqladmin -u root password 'yourpassword'
     mysql -u root -p
     ```

   * Setup Database

     ```bash
     mysql -u root -p
     ```

     ```sql
     CREATE DATABASE openmetadata_db;
     CREATE USER 'openmetadata_user'@'localhost' IDENTIFIED BY 'openmetadata_password';
     GRANT ALL PRIVILEGES ON openmetadata_db.* TO 'openmetadata_user'@'localhost' WITH GRANT OPTION;
     ```
2. Run bootstrap scripts to initialize the database and tables

   ```bash
   cd openmetadata-0.5.0
   ./bootstrap/bootstrap_storage.sh migrate
   ```

3. Start the OpenMetadata Server

   ```bash
   cd openmetadata-0.5.0
   ./bin/openmetadata.sh start
   ```

## Ingest Sample Data

The previous steps start the OpenMetadata server. To start using it, you need to run ElasticSearch and ingest sample metadata. Please follow the guide below:

Ingest Sample Data
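Once the server and ElasticSearch are running, a quick TCP-level check can confirm their ports are open. This is a bash-only sketch using the `/dev/tcp` pseudo-device; the ports in the commented examples are assumptions (8585 is the server default from this guide, 9200 is ElasticSearch's usual default):

```shell
# Return success if a TCP connection to HOST PORT can be opened (bash-only).
check_port() {
  (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null
}

# Examples (uncomment once the services are running):
# check_port localhost 8585 && echo "OpenMetadata server port is open"
# check_port localhost 9200 && echo "ElasticSearch port is open"
```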