---
description: This guide will help you install the Redshift connector and run it manually.
---

# Redshift

{% hint style="info" %}
**Prerequisites**

OpenMetadata is built using Java, DropWizard, Jetty, and MySQL.

1. Python 3.7 or above
2. OpenMetadata Server up and running
{% endhint %}
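
If you want a quick sanity check before installing, a minimal sketch is below. It only verifies the Python version and assumes `python3` is on your PATH.

```bash
# Confirm the interpreter meets the Python 3.7+ requirement
python3 --version

# Exit with an error if the version is too old (simple inline check)
python3 -c 'import sys; assert sys.version_info >= (3, 7), "Python 3.7+ required"'
```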

## Install from PyPI <a href="install-from-pypi-or-source" id="install-from-pypi-or-source"></a>

```bash
pip install 'openmetadata-ingestion[redshift]'
```
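
To keep the connector's dependencies isolated from the rest of your system, you can install it inside a virtual environment. This is optional, and the environment name below is just an example.

```bash
# Create and activate a dedicated virtual environment (name is arbitrary)
python3 -m venv openmetadata-ingestion-env
source openmetadata-ingestion-env/bin/activate

# Install the Redshift flavor of the ingestion package inside it
pip install 'openmetadata-ingestion[redshift]'
```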

## Run Manually <a href="run-manually" id="run-manually"></a>

```bash
metadata ingest -c ./examples/workflows/redshift.json
```
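
The path above is relative to the directory you run the command from. If you keep your workflow config somewhere else, point the CLI at that file instead; the path below is purely illustrative.

```bash
# Run ingestion against a config stored elsewhere (hypothetical path)
metadata ingest -c ~/openmetadata/configs/redshift.json
```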

## Configuration

{% code title="redshift.json" %}
```javascript
{
  "source": {
    "type": "redshift",
    "config": {
      "host_port": "cluster.name.region.redshift.amazonaws.com:5439",
      "username": "username",
      "password": "strong_password",
      "database": "warehouse",
      "service_name": "aws_redshift",
      "data_profiler_enabled": "true",
      "data_profiler_offset": "0",
      "data_profiler_limit": "50000",
      "filter_pattern": {
        "excludes": ["information_schema.*", "[\\w]*event_vw.*"]
      }
    },
    ...
```
{% endcode %}

1. **username** - the Redshift username to connect with.
2. **password** - the password for the Redshift username.
3. **service\_name** - the service name for this Redshift cluster. If you added the Redshift cluster through the OpenMetadata UI, make sure the service name matches.
4. **filter\_pattern** - `includes` and `excludes` options that select which datasets are ingested into OpenMetadata (see the example after this list).
5. **database** - the database from which metadata is to be fetched.
6. **data\_profiler\_enabled** - enables data profiling (optional); the profiler runs against the newly ingested data.
7. **data\_profiler\_offset** - the offset used by the data profiler.
8. **data\_profiler\_limit** - the limit used by the data profiler.

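As a sketch of how `filter_pattern` can combine both options, the snippet below keeps everything under a hypothetical `public` schema while still excluding system schemas and event views; the schema name is illustrative, not part of the original example.

```javascript
"filter_pattern": {
  "includes": ["public.*"],
  "excludes": ["information_schema.*", "[\\w]*event_vw.*"]
}
```
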
## Publish to OpenMetadata <a href="publish-to-openmetadata" id="publish-to-openmetadata"></a>

Below is the configuration to publish Redshift data into the OpenMetadata service.

Add the `metadata-rest` sink along with the `metadata-server` config, and optionally the `pii` processor.

{% code title="redshift.json" %}
```javascript
{
  "source": {
    "type": "redshift",
    "config": {
      "host_port": "cluster.name.region.redshift.amazonaws.com:5439",
      "username": "username",
      "password": "strong_password",
      "database": "warehouse",
      "service_name": "aws_redshift",
      "data_profiler_enabled": "true",
      "data_profiler_offset": "0",
      "data_profiler_limit": "50000",
      "filter_pattern": {
        "excludes": ["information_schema.*", "[\\w]*event_vw.*"]
      }
    }
  },
  "sink": {
    "type": "metadata-rest",
    "config": {}
  },
  "metadata_server": {
    "type": "metadata-server",
    "config": {
      "api_endpoint": "http://localhost:8585/api",
      "auth_provider_type": "no-auth"
    }
  }
}
```
{% endcode %}
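
The example above wires in only the sink and the metadata server. If you also want the optional `pii` processor mentioned earlier, a minimal sketch of where it would sit is shown below; it assumes the processor type is `pii`, that it accepts an empty config, and that the block lives at the top level of the workflow alongside `source` and `sink` — check the processor documentation for the exact options.

```javascript
"processor": {
  "type": "pii",
  "config": {}
}
```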