---
description: This guide will help you install the BigQuery Usage connector and run it manually.
---

# BigQuery Usage
{% hint style="info" %}
**Prerequisites**

OpenMetadata is built using Java, DropWizard, Jetty, and MySQL.

- Python 3.7 or above
{% endhint %}
## Install from PyPI

{% tabs %}
{% tab title="Install Using PyPI" %}
```bash
pip install 'openmetadata-ingestion[bigquery-usage]'
```
{% endtab %}
{% endtabs %}
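To sanity-check the installation, you can verify that the `metadata` CLI entry point installed by the package is on your PATH; the exact output depends on the installed version.

```bash
# Confirm the ingestion CLI is available after installation.
metadata --help
```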
## Run Manually

```bash
metadata ingest -c ./examples/workflows/bigquery_usage.json
```
## Configuration

{% code title="bigquery-creds.json (boilerplate)" %}
```json
{
  "type": "service_account",
  "project_id": "project_id",
  "private_key_id": "private_key_id",
  "private_key": "",
  "client_email": "gcpuser@project_id.iam.gserviceaccount.com",
  "client_id": "",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://oauth2.googleapis.com/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": ""
}
```
{% endcode %}
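If you do not already have a key file for your service account, one way to generate it is with the `gcloud` CLI. The following is only a sketch: the service account e-mail and output path are placeholders matching the boilerplate and example workflow in this guide.

```bash
# Create a JSON key for an existing service account and store it at the path
# referenced by credentials_path in the workflow configuration below.
gcloud iam service-accounts keys create examples/creds/bigquery-cred.json \
  --iam-account=gcpuser@project_id.iam.gserviceaccount.com
```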
{% code title="bigquery_usage.json" %}
```json
{
  "source": {
    "type": "bigquery-usage",
    "config": {
      "project_id": "project_id",
      "host_port": "https://bigquery.googleapis.com",
      "username": "gcpuser@project_id.iam.gserviceaccount.com",
      "service_name": "gcp_bigquery",
      "duration": 2,
      "options": {
        "credentials_path": "examples/creds/bigquery-cred.json"
      }
    }
  }
}
```
{% endcode %}
- `username` - the BigQuery username.
- `password` - the password for the BigQuery username.
- `service_name` - the service name for this BigQuery cluster. If you added the BigQuery cluster through the OpenMetadata UI, make sure the service name matches.
- `filter_pattern` - includes and excludes options that select which datasets to ingest into OpenMetadata (see the sketch after this list).
- `database` - the database name from which data is to be fetched.
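For illustration only, a source config that uses `filter_pattern` and `database` could look like the sketch below; the dataset and pattern names are hypothetical, and the exact field layout may differ across releases.

```json
{
  "source": {
    "type": "bigquery-usage",
    "config": {
      "project_id": "project_id",
      "host_port": "https://bigquery.googleapis.com",
      "username": "gcpuser@project_id.iam.gserviceaccount.com",
      "service_name": "gcp_bigquery",
      "database": "my_dataset",
      "filter_pattern": {
        "includes": ["my_dataset.*"],
        "excludes": ["tmp_.*"]
      }
    }
  }
}
```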
## Publish to OpenMetadata

Below is the configuration to publish BigQuery usage data into the OpenMetadata service.

Add the `query-parser` processor, the `table-usage` stage, and the `metadata-usage` bulk_sink, along with the `metadata-server` config.
{% code title="bigquery_usage.json" %}
```json
{
  "source": {
    "type": "bigquery-usage",
    "config": {
      "project_id": "project_id",
      "host_port": "https://bigquery.googleapis.com",
      "username": "gcpuser@project_id.iam.gserviceaccount.com",
      "service_name": "gcp_bigquery",
      "duration": 2,
      "options": {
        "credentials_path": "examples/creds/bigquery-cred.json"
      }
    }
  },
  "processor": {
    "type": "query-parser",
    "config": {
      "filter": ""
    }
  },
  "stage": {
    "type": "table-usage",
    "config": {
      "filename": "/tmp/bigquery_usage"
    }
  },
  "bulk_sink": {
    "type": "metadata-usage",
    "config": {
      "filename": "/tmp/bigquery_usage"
    }
  },
  "metadata_server": {
    "type": "metadata-server",
    "config": {
      "api_endpoint": "http://localhost:8585/api",
      "auth_provider_type": "no-auth"
    }
  }
}
```
{% endcode %}
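If you would rather trigger the workflow from Python (for example, from a scheduler such as Airflow) instead of the `metadata` CLI, a minimal sketch along these lines should work. It assumes the `Workflow` class shipped with `openmetadata-ingestion` and reads the same `bigquery_usage.json` shown above.

```python
import json

from metadata.ingestion.api.workflow import Workflow

# Load the same workflow configuration used by `metadata ingest -c ...`.
with open("bigquery_usage.json") as config_file:
    workflow_config = json.load(config_file)

# Build the ingestion workflow and run it end to end.
workflow = Workflow.create(workflow_config)
workflow.execute()
workflow.raise_from_status()
workflow.print_status()
workflow.stop()
```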