/**
 * Copyright 2015 LinkedIn Corp. All rights reserved.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 */
package wherehows.common;

/**
 * Created by zsun on 9/29/15.
 */
public class Constant {

  // For property_name in wh_property table

  /** The property_name field in wh_property table for WhereHows database connection information */
  public static final String WH_DB_URL_KEY = "wherehows.db.jdbc.url";
  public static final String WH_DB_USERNAME_KEY = "wherehows.db.username";
  public static final String WH_DB_PASSWORD_KEY = "wherehows.db.password";
  public static final String WH_DB_DRIVER_KEY = "wherehows.db.driver";
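
  /*
   * Usage sketch (hypothetical, not part of the original file): an ETL job could
   * resolve these wh_property values into a JDBC connection to the WhereHows
   * database. The loadWhProperties() helper is an assumed name, not real
   * WhereHows API.
   *
   *   java.util.Properties props = loadWhProperties();
   *   Class.forName(props.getProperty(Constant.WH_DB_DRIVER_KEY));
   *   java.sql.Connection conn = java.sql.DriverManager.getConnection(
   *       props.getProperty(Constant.WH_DB_URL_KEY),
   *       props.getProperty(Constant.WH_DB_USERNAME_KEY),
   *       props.getProperty(Constant.WH_DB_PASSWORD_KEY));
   */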

  /** The property_name field in wh_property table. Location of the folder that stores interim CSV files. */
  public static final String WH_APP_FOLDER_KEY = "wherehows.app_folder";

  /** The property_name for the logback CONTEXT_NAME. Used to set/fetch the system property */
  public static final String LOGGER_CONTEXT_NAME_KEY = "CONTEXT_NAME";

  // For property_name in wh_etl_job_property table

  // Lineage
  /** The property_name field in wh_etl_job_property table. Azkaban database connection info */
  public static final String AZ_DB_URL_KEY = "az.db.jdbc.url";
  public static final String AZ_DB_USERNAME_KEY = "az.db.username";
  public static final String AZ_DB_PASSWORD_KEY = "az.db.password";
  public static final String AZ_DB_DRIVER_KEY = "az.db.driver";

  /** The property_name field in wh_etl_job_property table. Lookback period for the execution data ETL */
  public static final String AZ_EXEC_ETL_LOOKBACK_MINS_KEY = "az.exec_etl.lookback_period.in.minutes";

  /** The property_name field in wh_etl_job_property table. Hadoop job history server URL for retrieving map-reduce job logs */
  public static final String AZ_HADOOP_JOBHISTORY_KEY = "az.hadoop.jobhistory.server.url";

  /** The property_name field in wh_etl_job_property table. Default Hadoop database id for this Azkaban instance */
  public static final String AZ_DEFAULT_HADOOP_DATABASE_ID_KEY = "az.default.hadoop.database.id";

  /** The property_name field in wh_etl_job_property table. Lineage is extracted for jobs that finished within this period */
  public static final String AZ_LINEAGE_ETL_LOOKBACK_MINS_KEY = "az.lineage_etl.lookback_period.in.minutes";

  /** The property_name field in wh_etl_job_property table. Akka actor timeout in the lineage ETL */
  public static final String LINEAGE_ACTOR_TIMEOUT_KEY = "az.lineage.actor.timeout";

  public static final String LINEAGE_ACTOR_NUM = "az.lineage.actor.num";

  /** The property_name field in wh_etl_job_property table. Optional property used for debugging. The default end timestamp is now */
  public static final String AZ_LINEAGE_ETL_END_TIMESTAMP_KEY = "az.lineage_etl.end_timestamp";

  /** The property_name field in wh_etl_job_property table. Azkaban server URL (optional way to get the Azkaban execution log) */
  public static final String AZ_SERVICE_URL_KEY = "az.server.url";
  public static final String AZ_SERVICE_USERNAME_KEY = "az.server.username";
  public static final String AZ_SERVICE_PASSWORD_KEY = "az.server.password";
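
  /*
   * Usage sketch (hypothetical): an execution ETL run could derive its extraction
   * window from the lookback property. The "2880" fallback here is an assumption
   * for illustration, not a documented default.
   *
   *   java.util.Properties jobProps = ...; // this job's wh_etl_job_property values
   *   long lookbackMins =
   *       Long.parseLong(jobProps.getProperty(Constant.AZ_EXEC_ETL_LOOKBACK_MINS_KEY, "2880"));
   *   long startTimeMs = System.currentTimeMillis() - lookbackMins * 60L * 1000L;
   */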

  // Appworx
  public static final String AW_DB_URL_KEY = "aw.db.jdbc.url";
  public static final String AW_DB_USERNAME_KEY = "aw.db.username";
  public static final String AW_DB_PASSWORD_KEY = "aw.db.password";
  public static final String AW_DB_NAME_KEY = "aw.db.name";
  public static final String AW_DB_DRIVER_KEY = "aw.db.driver";
  public static final String AW_DB_PORT_KEY = "aw.db.port";
  public static final String AW_ARCHIVE_DIR = "aw.archive.dir";
  public static final String AW_REMOTE_HADOOP_SCRIPT_DIR = "aw.remote_hadoop_script_dir";
  public static final String AW_LOCAL_SCRIPT_PATH = "aw.local_script_path";
  public static final String AW_REMOTE_SCRIPT_PATH = "aw.remote_script_path";
  public static final String AW_BTEQ_SOURCE_TARGET_OVERRIDE = "aw.bteq_source_target_override";
  public static final String AW_METRIC_OVERRIDE = "aw.metric_override";
  public static final String AW_SKIP_ALREADY_PARSED = "aw.skip_already_parsed";

  /** The property_name field in wh_etl_job_property table. Lookback period, in days, for the execution data ETL */
  public static final String AW_EXEC_ETL_LOOKBACK_KEY = "aw.exec_etl.lookback_period.in.days";
  public static final String AW_LINEAGE_ETL_LOOKBACK_KEY = "aw.lineage_etl.lookback_period.in.days";

  // Oozie
  /** The property_name field in wh_etl_job_property table. Oozie database connection info */
  public static final String OZ_DB_URL_KEY = "oz.db.jdbc.url";
  public static final String OZ_DB_USERNAME_KEY = "oz.db.username";
  public static final String OZ_DB_PASSWORD_KEY = "oz.db.password";
  public static final String OZ_DB_DRIVER_KEY = "oz.db.driver";

  /** The property_name field in wh_etl_job_property table. Lookback period for the Oozie execution info ETL */
  public static final String OZ_EXEC_ETL_LOOKBACK_MINS_KEY = "oz.exec_etl.lookback_period.in.minutes";

  /** Optional. The property_name field in wh_etl_job_property table. Sets innodb_lock_wait_timeout for MySQL */
  public static final String INNODB_LOCK_WAIT_TIMEOUT = "innodb_lock_wait_timeout";

  // Teradata
  /** The property_name field in wh_etl_job_property table. Teradata connection info */
  public static final String TD_DB_URL_KEY = "teradata.db.jdbc.url";
  public static final String TD_DB_USERNAME_KEY = "teradata.db.username";
  public static final String TD_DB_PASSWORD_KEY = "teradata.db.password";
  public static final String TD_DB_DRIVER_KEY = "teradata.db.driver";

  /** The property_name field in wh_etl_job_property table. Where the raw Teradata metadata interim file is stored */
  public static final String TD_METADATA_KEY = "teradata.metadata";

  /** The property_name field in wh_etl_job_property table. Where the Teradata field metadata interim file is stored */
  public static final String TD_FIELD_METADATA_KEY = "teradata.field_metadata";

  /** The property_name field in wh_etl_job_property table. Where the Teradata schema interim file is stored */
  public static final String TD_SCHEMA_OUTPUT_KEY = "teradata.schema_output";

  /** The property_name field in wh_etl_job_property table. Where the Teradata sample data interim file is stored */
  public static final String TD_SAMPLE_OUTPUT_KEY = "teradata.sample_output";

  /** The property_name field in wh_etl_job_property table. Where the Teradata log file is stored */
  public static final String TD_LOG_KEY = "teradata.log";

  /** The property_name field in wh_etl_job_property table. Teradata databases from which to collect metadata */
  public static final String TD_TARGET_DATABASES_KEY = "teradata.databases";

  /** The property_name field in wh_etl_job_property table. Default database used when connecting */
  public static final String TD_DEFAULT_DATABASE_KEY = "teradata.default_database";

  /** Optional. The property_name field in wh_etl_job_property table. Whether to load sample data */
  public static final String TD_LOAD_SAMPLE = "teradata.load_sample";

  /** The property_name field in wh_etl_job_property table. Collect sample data only on certain weekdays */
  public static final String TD_COLLECT_SAMPLE_DATA_DAYS = "teradata.collect.sample.data.days";
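
  /*
   * Usage sketch (hypothetical): a collector could consult the weekday property
   * before sampling. The comma-separated day-of-week encoding shown here is an
   * assumption for illustration; the real job may encode days differently.
   *
   *   String days = jobProps.getProperty(Constant.TD_COLLECT_SAMPLE_DATA_DAYS, "");
   *   int today = java.util.Calendar.getInstance().get(java.util.Calendar.DAY_OF_WEEK);
   *   boolean sampleToday =
   *       java.util.Arrays.asList(days.split(",")).contains(String.valueOf(today));
   */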

  // Hdfs
  /** The property_name field in wh_etl_job_property table. Whether to use remote mode */
  public static final String HDFS_REMOTE = "hdfs.remote.mode";

  /** The property_name field in wh_etl_job_property table. The HDFS remote user that runs the Hadoop job on the gateway */
  public static final String HDFS_REMOTE_USER_KEY = "hdfs.remote.user";

  /** The property_name field in wh_etl_job_property table. The gateway machine name */
  public static final String HDFS_REMOTE_MACHINE_KEY = "hdfs.remote.machine";

  /** The property_name field in wh_etl_job_property table. The location of the remote user's private key */
  public static final String HDFS_PRIVATE_KEY_LOCATION_KEY = "hdfs.private_key_location";

  /** The property_name field in wh_etl_job_property table. The location of the jar file to run */
  public static final String HDFS_REMOTE_JAR_KEY = "hdfs.remote.jar";

  /** The property_name field in wh_etl_job_property table. Where the raw HDFS metadata file (in JSON format) is stored on the local machine */
  public static final String HDFS_SCHEMA_LOCAL_PATH_KEY = "hdfs.local.raw_metadata";

  /** The property_name field in wh_etl_job_property table. Where the HDFS metadata file is stored on the remote Hadoop gateway */
  public static final String HDFS_SCHEMA_REMOTE_PATH_KEY = "hdfs.remote.raw_metadata";

  /** The property_name field in wh_etl_job_property table. Where the HDFS sample data file is stored on the local machine */
  public static final String HDFS_SAMPLE_LOCAL_PATH_KEY = "hdfs.local.sample";

  /** The property_name field in wh_etl_job_property table. Where the HDFS sample data file is stored on the remote Hadoop gateway */
  public static final String HDFS_SAMPLE_REMOTE_PATH_KEY = "hdfs.remote.sample";

  /** The property_name field in wh_etl_job_property table. Hadoop cluster name in short form */
  public static final String HDFS_CLUSTER_KEY = "hdfs.cluster";

  /** The property_name field in wh_etl_job_property table. The list of directories used as starting points for fetching metadata
   * (including all of their subdirectories) */
  public static final String HDFS_WHITE_LIST_KEY = "hdfs.white_list";

  /** The property_name field in wh_etl_job_property table. Number of threads used for metadata collection */
  public static final String HDFS_NUM_OF_THREAD_KEY = "hdfs.num_of_thread";

  /** The property_name field in wh_etl_job_property table. Where the HDFS metadata file (in CSV format) is stored on the local machine */
  public static final String HDFS_SCHEMA_RESULT_KEY = "hdfs.local.metadata";

  /** The property_name field in wh_etl_job_property table. Where the field metadata file (in CSV format) is stored on the local machine */
  public static final String HDFS_FIELD_RESULT_KEY = "hdfs.local.field_metadata";

  /** The property_name field in wh_etl_job_property table. The map of file path regex to dataset source,
   * e.g. [{"/data/tracking.*":"Kafka"},{"/data/retail.*":"Teradata"}] */
  public static final String HDFS_FILE_SOURCE_MAP_KEY = "hdfs.file_path_regex_source_map";

  /** The property_name field in wh_etl_job_property table. Keytab file location */
  public static final String HDFS_REMOTE_KEYTAB_LOCATION_KEY = "hdfs.remote.keytab.location";

  /** The property_name field in wh_etl_job_property table. HDFS default URI (IPC) */
  public static final String HDFS_NAMENODE_IPC_URI_KEY = "hdfs.namenode.ipc.uri";

  /** The property_name field in wh_etl_job_property table. For the dataset owner ETL. The HDFS location to copy files to */
  public static final String HDFS_REMOTE_WORKING_DIR = "hdfs.remote.working.dir";
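
  /*
   * Usage sketch (hypothetical): matching a file path against the regex-to-source
   * map configured under HDFS_FILE_SOURCE_MAP_KEY. The sourceMap is assumed to
   * have already been parsed from the JSON property value into regex/source pairs.
   *
   *   java.util.Map<String, String> sourceMap = ...; // e.g. "/data/tracking.*" -> "Kafka"
   *   String source = null;
   *   for (java.util.Map.Entry<String, String> e : sourceMap.entrySet()) {
   *     if (path.matches(e.getKey())) {
   *       source = e.getValue();
   *       break;
   *     }
   *   }
   */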

  // ui
  /** File name of the dataset tree used by the front end to render the tree */
  public static final String DATASET_TREE_FILE_NAME_KEY = "wherehows.ui.tree.dataset.file";

  /** File name of the flow tree used by the front end to render the tree */
  public static final String FLOW_TREE_FILE_NAME_KEY = "wherehows.ui.tree.flow.file";

  // hdfs owner
  public static final String HDFS_OWNER_HIVE_QUERY_KEY = "hdfs.owner.hive.query";

  // ldap
  public static final String LDAP_CEO_USER_ID_KEY = "ldap.ceo.user.id";
  public static final String LDAP_CONTEXT_FACTORY_KEY = "ldap.context.factory";
  public static final String LDAP_CONTEXT_PROVIDER_URL_KEY = "ldap.context.provider.url";
  public static final String LDAP_CONTEXT_SECURITY_PRINCIPAL_KEY = "ldap.context.security.principal";
  public static final String LDAP_CONTEXT_SECURITY_CREDENTIALS_KEY = "ldap.context.security.credentials";
  public static final String LDAP_SEARCH_DOMAINS_KEY = "ldap.search.domains";
  public static final String LDAP_INACTIVE_DOMAIN_KEY = "ldap.inactive.domain";
  public static final String LDAP_SEARCH_RETURN_ATTRS_KEY = "ldap.search.return.attributes";
  public static final String LDAP_GROUP_CONTEXT_FACTORY_KEY = "ldap.group.context.factory";
  public static final String LDAP_GROUP_CONTEXT_PROVIDER_URL_KEY = "ldap.group.context.provider.url";
  public static final String LDAP_GROUP_CONTEXT_SECURITY_PRINCIPAL_KEY = "ldap.group.context.security.principal";
  public static final String LDAP_GROUP_CONTEXT_SECURITY_CREDENTIALS_KEY = "ldap.group.context.security.credentials";
  public static final String LDAP_GROUP_APP_ID_KEY = "ldap.group.app.id";
  public static final String LDAP_GROUP_SEARCH_DOMAINS_KEY = "ldap.group.search.domains";
  public static final String LDAP_GROUP_SEARCH_RETURN_ATTRS_KEY = "ldap.group.search.return.attributes";
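
  /*
   * Usage sketch (hypothetical): these keys map naturally onto a JNDI environment
   * for an LDAP directory lookup; whether WhereHows wires them exactly this way
   * is an assumption. The props object holds the job's configured values.
   *
   *   java.util.Hashtable<String, String> env = new java.util.Hashtable<String, String>();
   *   env.put(javax.naming.Context.INITIAL_CONTEXT_FACTORY,
   *       props.getProperty(Constant.LDAP_CONTEXT_FACTORY_KEY));
   *   env.put(javax.naming.Context.PROVIDER_URL,
   *       props.getProperty(Constant.LDAP_CONTEXT_PROVIDER_URL_KEY));
   *   env.put(javax.naming.Context.SECURITY_PRINCIPAL,
   *       props.getProperty(Constant.LDAP_CONTEXT_SECURITY_PRINCIPAL_KEY));
   *   env.put(javax.naming.Context.SECURITY_CREDENTIALS,
   *       props.getProperty(Constant.LDAP_CONTEXT_SECURITY_CREDENTIALS_KEY));
   *   javax.naming.directory.DirContext ctx =
   *       new javax.naming.directory.InitialDirContext(env);
   */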

  // git
  public static final String GIT_HOST_KEY = "git.host";
  public static final String GIT_PROJECT_WHITELIST_KEY = "git.project.whitelist";

  // hive
  public static final String HIVE_METASTORE_JDBC_DRIVER = "hive.metastore.jdbc.driver";
  public static final String HIVE_METASTORE_JDBC_URL = "hive.metastore.jdbc.url";
  public static final String HIVE_METASTORE_USERNAME = "hive.metastore.username";
  public static final String HIVE_METASTORE_PASSWORD = "hive.metastore.password";

  public static final String HIVE_DATABASE_WHITELIST_KEY = "hive.database_white_list";
  public static final String HIVE_DATABASE_BLACKLIST_KEY = "hive.database_black_list";
  public static final String HIVE_SCHEMA_JSON_FILE_KEY = "hive.schema_json_file";
  public static final String HIVE_DEPENDENCY_CSV_FILE_KEY = "hive.dependency_csv_file";
  public static final String HIVE_INSTANCE_CSV_FILE_KEY = "hive.instance_csv_file";
  public static final String HIVE_SAMPLE_CSV_FILE_KEY = "hive.sample_csv_file";
  public static final String HIVE_SCHEMA_CSV_FILE_KEY = "hive.schema_csv_file";
  public static final String HIVE_HDFS_MAP_CSV_FILE_KEY = "hive.hdfs_map_csv_file";
  public static final String HIVE_FIELD_METADATA_KEY = "hive.field_metadata";

  public static final String KERBEROS_AUTH_KEY = "kerberos.auth";
  public static final String KERBEROS_PRINCIPAL_KEY = "kerberos.principal";
  public static final String KERBEROS_KEYTAB_FILE_KEY = "kerberos.keytab.file";
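
  /*
   * Usage sketch (hypothetical): a Hadoop-based collector could feed the Kerberos
   * keys to the standard Hadoop UserGroupInformation API; whether WhereHows logs
   * in exactly this way is an assumption here.
   *
   *   if (Boolean.parseBoolean(props.getProperty(Constant.KERBEROS_AUTH_KEY, "false"))) {
   *     org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytab(
   *         props.getProperty(Constant.KERBEROS_PRINCIPAL_KEY),
   *         props.getProperty(Constant.KERBEROS_KEYTAB_FILE_KEY));
   *   }
   */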

  /** Property name of the app id. The ETL process uses this to identify the application */
  public static final String APP_ID_KEY = "app.id";

  /** Property name of the database id. The ETL process uses this to identify the database */
  public static final String DB_ID_KEY = "db.id";

  /** Property name of the WhereHows execution id for the ETL process. */
  public static final String WH_EXEC_ID_KEY = "wh.exec.id";

  public static final String WH_ELASTICSEARCH_URL_KEY = "wh.elasticsearch.url";
  public static final String WH_ELASTICSEARCH_PORT_KEY = "wh.elasticsearch.port";
  public static final String WH_ELASTICSEARCH_INDEX_KEY = "wh.elasticsearch.index";
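
  /*
   * Usage sketch (hypothetical): composing an Elasticsearch endpoint from these
   * properties; the "/_bulk" path is illustrative only, not confirmed from the
   * source.
   *
   *   String bulkUrl = props.getProperty(Constant.WH_ELASTICSEARCH_URL_KEY)
   *       + ":" + props.getProperty(Constant.WH_ELASTICSEARCH_PORT_KEY)
   *       + "/" + props.getProperty(Constant.WH_ELASTICSEARCH_INDEX_KEY) + "/_bulk";
   */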

  // Oracle
  public static final String ORA_DB_USERNAME_KEY = "oracle.db.username";
  public static final String ORA_DB_PASSWORD_KEY = "oracle.db.password";
  public static final String ORA_DB_DRIVER_KEY = "oracle.db.driver";
  public static final String ORA_DB_URL_KEY = "oracle.db.jdbc.url";
  public static final String ORA_SCHEMA_OUTPUT_KEY = "oracle.metadata";
  public static final String ORA_FIELD_OUTPUT_KEY = "oracle.field_metadata";
  public static final String ORA_SAMPLE_OUTPUT_KEY = "oracle.sample_data";
  public static final String ORA_LOAD_SAMPLE = "oracle.load_sample";
  public static final String ORA_EXCLUDE_DATABASES_KEY = "oracle.exclude_db";

  // Multiproduct
  public static final String MULTIPRODUCT_SERVICE_URL = "multiproduct.service.url";
  public static final String GIT_URL_PREFIX = "git.url.prefix";
  public static final String SVN_URL_PREFIX = "svn.url.prefix";
  public static final String GIT_PROJECT_OUTPUT_KEY = "git.project.metadata";
  public static final String PRODUCT_REPO_OUTPUT_KEY = "product.repo.metadata";
  public static final String PRODUCT_REPO_OWNER_OUTPUT_KEY = "product.repo.owner";

  // code search
  public static final String DATABASE_SCM_REPO_OUTPUT_KEY = "database.scm.repo";
  public static final String BASE_URL_KEY = "base.url.key";

  // dali
  public static final String DALI_GIT_URN_KEY = "dali.git.urn";
  public static final String GIT_COMMITTER_BLACKLIST_KEY = "git.committer.blacklist";

  // Nuage
  public static final String D2_PROXY_URL = "d2.proxy.url";
  public static final String ESPRESSO_OUTPUT_KEY = "espresso.metadata";
  public static final String VOLDEMORT_OUTPUT_KEY = "voldemort.metadata";
  public static final String KAFKA_OUTPUT_KEY = "kafka.metadata";

  // metadata-store restli server
  public static final String WH_RESTLI_SERVER_URL = "wherehows.restli.server.url";
}