# Hive
For context on getting started with ingestion, check out our metadata ingestion guide.
## Setup
To install this plugin, run `pip install 'acryl-datahub[hive]'`.
## Capabilities
This plugin extracts the following:
- Metadata for databases, schemas, and tables
- Column types associated with each table
- Detailed table and storage information
- Table, row, and column statistics via optional SQL profiling (see the sketch below)
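Profiling is disabled by default and must be switched on in the source config. A minimal sketch of enabling it (the `profiling` block is standard for DataHub SQL sources, but exact options vary by release, so treat this as illustrative):

```yaml
source:
  type: hive
  config:
    host_port: localhost:10000
    # Enable the optional SQL profiler (table/row/column statistics).
    profiling:
      enabled: true
```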
## Quickstart recipe
Check out the following recipe to get started with ingestion! See below for full configuration options.
For general pointers on writing and running a recipe, see our main recipe guide.
```yaml
source:
  type: hive
  config:
    # Coordinates
    host_port: localhost:10000
    database: DemoDatabase # optional, if not specified, ingests from all databases

    # Credentials
    username: user # optional
    password: pass # optional

    # For more details on authentication, see the PyHive docs:
    # https://github.com/dropbox/PyHive#passing-session-configuration.
    # LDAP, Kerberos, etc. are supported using connect_args, which can be
    # added under the `options` config parameter.
    #scheme: 'hive+http' # set this if Thrift should use the HTTP transport
    #scheme: 'hive+https' # set this if Thrift should use the HTTP with SSL transport

sink:
  # sink configs
```
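The `sink` section above is a placeholder. A common choice is the `datahub-rest` sink, which pushes metadata to a running DataHub instance over its REST API; the server address below is an assumption for illustration:

```yaml
sink:
  type: datahub-rest
  config:
    # Assumed local DataHub GMS endpoint; adjust to your deployment.
    server: http://localhost:8080
```

With both source and sink filled in, the recipe can be run with `datahub ingest -c <recipe_file>.yml`.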
### Ingestion with Azure HDInsight
```yaml
# Connecting to Microsoft Azure HDInsight using TLS.
source:
  type: hive
  config:
    # Coordinates
    host_port: <cluster_name>.azurehdinsight.net:443

    # Credentials
    username: admin
    password: password

    # Options
    options:
      connect_args:
        http_path: "/hive2"
        auth: BASIC

sink:
  # sink configs
```
### Databricks
Ensure that `databricks-dbapi` is installed. If not, run `pip install databricks-dbapi` to install it.

Use the `http_path` from your Databricks cluster in the following recipe. See here for instructions on finding your `http_path`.
```yaml
source:
  type: hive
  config:
    host_port: <databricks workspace URL>:443
    username: token
    password: <api token>
    scheme: 'databricks+pyhive'

    options:
      connect_args:
        http_path: 'sql/protocolv1/o/xxxyyyzzzaaasa/1234-567890-hello123'

sink:
  # sink configs
```
## Config details
Note that a `.` is used to denote nested fields in the YAML recipe.

As a SQL-based service, the Hive integration is also supported by our SQL profiler. See here for more details on configuration.
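For example, `table_pattern.allow` in the table below corresponds to the following nested YAML (the regex value is purely illustrative):

```yaml
table_pattern:
  allow:
    # Only ingest tables whose fully-qualified name starts with "mydb."
    - "mydb\\..*"
```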
| Field | Required | Default | Description |
| --- | --- | --- | --- |
| `username` | | | Database username. |
| `password` | | | Database password. |
| `host_port` | ✅ | | Host URL and port to connect to. |
| `database` | | | Database to ingest. |
| `database_alias` | | | Alias to apply to database when ingesting. |
| `env` | | `"PROD"` | Environment to use in namespace when constructing URNs. |
| `options.<option>` | | | Any options specified here will be passed to SQLAlchemy's `create_engine` as kwargs. See https://docs.sqlalchemy.org/en/14/core/engines.html#sqlalchemy.create_engine for details. |
| `table_pattern.allow` | | | List of regex patterns for tables to include in ingestion. |
| `table_pattern.deny` | | | List of regex patterns for tables to exclude from ingestion. |
| `table_pattern.ignoreCase` | | `True` | Whether to ignore case sensitivity during pattern matching. |
| `schema_pattern.allow` | | | List of regex patterns for schemas to include in ingestion. |
| `schema_pattern.deny` | | | List of regex patterns for schemas to exclude from ingestion. |
| `schema_pattern.ignoreCase` | | `True` | Whether to ignore case sensitivity during pattern matching. |
| `view_pattern.allow` | | | List of regex patterns for views to include in ingestion. |
| `view_pattern.deny` | | | List of regex patterns for views to exclude from ingestion. |
| `view_pattern.ignoreCase` | | `True` | Whether to ignore case sensitivity during pattern matching. |
| `include_tables` | | `True` | Whether tables should be ingested. |
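As noted in the quickstart comments, `options.connect_args` is also how authentication details are passed through to PyHive. A hedged sketch of LDAP authentication (the host is hypothetical; verify supported `auth` values against the PyHive docs linked earlier):

```yaml
source:
  type: hive
  config:
    host_port: hive.example.com:10000  # hypothetical LDAP-enabled HiveServer2
    username: user
    password: pass
    options:
      connect_args:
        auth: LDAP  # PyHive authentication mode
```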
## Compatibility
Coming soon!
## Questions
If you've got any questions on configuring this source, feel free to ping us on our Slack!