feat(k8s): Move helm charts out of contrib (#2440)
Parent: 851e00ba9f
Commit: ae4def2f25
@@ -1,70 +0,0 @@
---
title: "Deploying with Kubernetes"
hide_title: true
---

# Kubernetes Setup for DataHub

## Introduction
[This directory](https://github.com/linkedin/datahub/tree/master/contrib/kubernetes/datahub) provides the Kubernetes [Helm](https://helm.sh/) charts for DataHub.

## Setup
This Kubernetes deployment does not include the components below. The idea is to use the original Helm charts to deploy each of them separately:

* Kafka and Schema Registry: [Chart Link](https://hub.helm.sh/charts/incubator/kafka)
* Elasticsearch: [Chart Link](https://hub.helm.sh/charts/elastic/elasticsearch)
* MySQL: [Chart Link](https://hub.helm.sh/charts/stable/mysql)
* Neo4j: [Chart Link](https://hub.helm.sh/charts/stable/neo4j)

These can be installed on-prem or consumed as managed services on any cloud platform.

## Quickstart

### Docker & Kubernetes
Install Docker and Kubernetes by following the instructions [here](https://kubernetes.io/docs/setup/). The easiest option is Docker Desktop for your platform: [Mac](https://docs.docker.com/docker-for-mac/) or [Windows](https://docs.docker.com/docker-for-windows/).

### Helm
Helm is an open-source packaging tool that helps you install applications and services on Kubernetes. Helm uses a packaging format called charts. A chart is a collection of YAML templates that describes a related set of Kubernetes resources.

Install Helm by following the instructions [here](https://helm.sh/docs/intro/install/). Only Helm 3 is supported.

### DataHub Helm Chart Configurations

The following table lists the configuration parameters and their default values.

#### Chart Requirements

| Repository | Name | Version |
|------------|------|---------|
| file://./charts/datahub-frontend | datahub-frontend | 0.2.1 |
| file://./charts/datahub-gms | datahub-gms | 0.2.1 |
| file://./charts/datahub-mae-consumer | datahub-mae-consumer | 0.2.1 |
| file://./charts/datahub-mce-consumer | datahub-mce-consumer | 0.2.1 |
| file://./charts/datahub-ingestion-cron | datahub-ingestion-cron | 0.2.1 |

## Install DataHub
Update the `datahub/values.yaml` file with valid hostname/IP address configuration for elasticsearch, neo4j, schema-registry, broker & mysql, then navigate to the current directory and run:

```
helm install datahub datahub/
```

## Testing
To test this setup, use the existing quickstart [docker-compose](https://github.com/linkedin/datahub/blob/master/docker/quickstart/docker-compose.yml) file to set up the prerequisite software, commenting out the `datahub-gms`, `datahub-frontend`, `datahub-mce-consumer` & `datahub-mae-consumer` sections. Then run `helm install` after updating `values.yaml` with the host machine's IP address for elasticsearch, neo4j, schema-registry, broker & mysql in the `global.hostAliases[0].ip` section.

Alternatively, you can run the following command directly without making any changes to the `datahub/values.yaml` file:

```
helm install --set "global.hostAliases[0].ip"="<<docker_host_ip>>","global.hostAliases[0].hostnames"="{broker,mysql,elasticsearch,neo4j}" datahub datahub/
```

## Other useful commands

| Command | Description |
|-----|------|
| helm uninstall datahub | Remove DataHub |
| helm ls | List Helm releases |
| helm history | Fetch a release history |

@@ -1,65 +0,0 @@
datahub
=======
A Helm chart for LinkedIn DataHub

Current chart version is `0.1.2`

#### Chart Values

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| datahub-frontend.enabled | bool | `true` | |
| datahub-frontend.image.repository | string | `"linkedin/datahub-frontend"` | |
| datahub-frontend.image.tag | string | `"latest"` | |
| datahub-gms.enabled | bool | `true` | |
| datahub-gms.image.repository | string | `"linkedin/datahub-gms"` | |
| datahub-gms.image.tag | string | `"latest"` | |
| datahub-mae-consumer.enabled | bool | `true` | |
| datahub-mae-consumer.image.repository | string | `"linkedin/datahub-mae-consumer"` | |
| datahub-mae-consumer.image.tag | string | `"latest"` | |
| datahub-mce-consumer.enabled | bool | `true` | |
| datahub-mce-consumer.image.repository | string | `"linkedin/datahub-mce-consumer"` | |
| datahub-mce-consumer.image.tag | string | `"latest"` | |
| datahub-ingestion-cron.enabled | bool | `false` | |
| elasticsearchSetupJob.enabled | bool | `true` | |
| elasticsearchSetupJob.image.repository | string | `"linkedin/datahub-elasticsearch-setup"` | |
| elasticsearchSetupJob.image.tag | string | `"latest"` | |
| kafkaSetupJob.enabled | bool | `true` | |
| kafkaSetupJob.image.repository | string | `"linkedin/datahub-kafka-setup"` | |
| kafkaSetupJob.image.tag | string | `"latest"` | |
| mysqlSetupJob.enabled | bool | `false` | |
| mysqlSetupJob.image.repository | string | `""` | |
| mysqlSetupJob.image.tag | string | `""` | |
| global.datahub.appVersion | string | `"1.0"` | |
| global.datahub.gms.port | string | `"8080"` | |
| global.elasticsearch.host | string | `"elasticsearch"` | |
| global.elasticsearch.port | string | `"9200"` | |
| global.hostAliases[0].hostnames[0] | string | `"broker"` | |
| global.hostAliases[0].hostnames[1] | string | `"mysql"` | |
| global.hostAliases[0].hostnames[2] | string | `"elasticsearch"` | |
| global.hostAliases[0].hostnames[3] | string | `"neo4j"` | |
| global.hostAliases[0].ip | string | `"192.168.0.104"` | |
| global.kafka.bootstrap.server | string | `"broker:29092"` | |
| global.kafka.zookeeper.server | string | `"zookeeper:2181"` | |
| global.kafka.schemaregistry.url | string | `"http://schema-registry:8081"` | |
| global.neo4j.host | string | `"neo4j:7474"` | |
| global.neo4j.uri | string | `"bolt://neo4j"` | |
| global.neo4j.username | string | `"neo4j"` | |
| global.neo4j.password.secretRef | string | `"neo4j-secrets"` | |
| global.neo4j.password.secretKey | string | `"neo4j-password"` | |
| global.sql.datasource.driver | string | `"com.mysql.jdbc.Driver"` | |
| global.sql.datasource.host | string | `"mysql:3306"` | |
| global.sql.datasource.hostForMysqlClient | string | `"mysql"` | |
| global.sql.datasource.url | string | `"jdbc:mysql://mysql:3306/datahub?verifyServerCertificate=false&useSSL=true"` | |
| global.sql.datasource.username | string | `"datahub"` | |
| global.sql.datasource.password.secretRef | string | `"mysql-secrets"` | |
| global.sql.datasource.password.secretKey | string | `"mysql-password"` | |

#### Optional Chart Values

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| global.credentialsAndCertsSecrets.path | string | `"/mnt/certs"` | |
| global.credentialsAndCertsSecrets.name | string | `""` | |
| global.credentialsAndCertsSecrets.secureEnv | map | `{}` | |
| global.springKafkaConfigurationOverrides | map | `{}` | |

datahub-kubernetes/README.md (new file, 134 lines)

@@ -0,0 +1,134 @@
---
title: "Deploying with Kubernetes"
hide_title: true
---

# Deploying Datahub with Kubernetes

## Introduction
[This directory](https://github.com/linkedin/datahub/tree/master/datahub-kubernetes) provides the Kubernetes [Helm](https://helm.sh/) charts for deploying [Datahub](https://github.com/linkedin/datahub/tree/master/datahub-kubernetes/datahub) and its [dependencies](https://github.com/linkedin/datahub/tree/master/datahub-kubernetes/prerequisites) (Elasticsearch, Neo4j, MySQL, and Kafka) on a Kubernetes cluster.

## Setup
1. Set up a Kubernetes cluster
   - in a cloud platform of choice such as [Amazon EKS](https://aws.amazon.com/eks),
     [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine),
     or [Azure Kubernetes Service](https://azure.microsoft.com/en-us/services/kubernetes-service/), OR
   - in a local environment using [Minikube](https://minikube.sigs.k8s.io/docs/).
     Note: more than 7GB of RAM is required to run Datahub and its dependencies.
2. Install the following tools (a quick verification sketch follows this list):
   - [kubectl](https://kubernetes.io/docs/tasks/tools/) to manage Kubernetes resources
   - [helm](https://helm.sh/docs/intro/install/) to deploy the resources based on Helm charts.
     Note: only Helm 3 is supported.

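Before moving on, it is worth confirming that both tools are installed and that kubectl is pointing at the intended cluster. A minimal check, assuming the cluster from step 1 is already your active context:

```shell
# Confirm client versions (Helm must be 3.x) and cluster connectivity.
kubectl version --client
helm version
# Shows the cluster that the current kubectl context points to.
kubectl config current-context
kubectl cluster-info
```
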
## Components
Datahub consists of 4 main components: [GMS](https://datahubproject.io/docs/gms),
[MAE Consumer](https://datahubproject.io/docs/metadata-jobs/mae-consumer-job),
[MCE Consumer](https://datahubproject.io/docs/metadata-jobs/mce-consumer-job), and
[Frontend](https://datahubproject.io/docs/datahub-frontend). The Kubernetes deployment
for each component is defined as a subchart under the main
[Datahub](https://github.com/linkedin/datahub/tree/master/datahub-kubernetes/datahub)
helm chart.

The main components are powered by 4 external dependencies:
- Kafka
- Local DB (MySQL, Postgres, or MariaDB)
- Search Index (Elasticsearch)
- Graph Index (currently only Neo4j is supported)

The dependencies must be deployed before deploying Datahub. We created a separate
[chart](https://github.com/linkedin/datahub/tree/master/datahub-kubernetes/prerequisites)
for deploying the dependencies with example configuration. They could also be deployed
separately on-prem or leveraged as managed services.

## Quickstart
Assuming the kubectl context points to the correct Kubernetes cluster, first create the Kubernetes secrets that contain the MySQL and Neo4j passwords.

```shell
kubectl create secret generic mysql-secrets --from-literal=mysql-root-password=datahub
kubectl create secret generic neo4j-secrets --from-literal=neo4j-password=datahub
```

The commands above set the passwords to "datahub" as an example. Change them to any password of choice.

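If you want to double-check what was created, the secrets can be listed and inspected without printing the password values; this is a small optional verification step:

```shell
# List the two secrets created above; `describe` shows keys and sizes, not the values.
kubectl get secret mysql-secrets neo4j-secrets
kubectl describe secret mysql-secrets
```
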
Second, deploy the dependencies by running the following:

```shell
helm install prerequisites prerequisites/
```

Note: after changing the configuration in the values.yaml file, you can run

```shell
helm upgrade prerequisites prerequisites/
```

to redeploy only the dependencies impacted by the change.

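If you maintain your own copy of the dependency configuration, the install and upgrade steps can also be collapsed into one idempotent command. A sketch, where `custom-prerequisites-values.yaml` is just a placeholder name for your copied values file:

```shell
# Install on the first run, upgrade on subsequent runs, using a copied values file.
helm upgrade --install prerequisites prerequisites/ \
  --values custom-prerequisites-values.yaml
```
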
Run `kubectl get pods` to check whether all the pods for the dependencies are running.
You should get a result similar to below.

```
NAME                                               READY   STATUS    RESTARTS   AGE
elasticsearch-master-0                             1/1     Running   0          62m
elasticsearch-master-1                             1/1     Running   0          62m
elasticsearch-master-2                             1/1     Running   0          62m
prerequisites-cp-schema-registry-cf79bfccf-kvjtv   2/2     Running   1          63m
prerequisites-kafka-0                              1/1     Running   2          62m
prerequisites-mysql-0                              1/1     Running   1          62m
prerequisites-neo4j-community-0                    1/1     Running   0          52m
prerequisites-zookeeper-0                          1/1     Running   0          62m
```

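Some of the dependency pods can take a while to become ready, so it can help to watch the list until everything settles; a simple way, with no extra assumptions:

```shell
# Stream pod status updates until all dependencies report Running; Ctrl-C to stop.
kubectl get pods --watch
```
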
Next, deploy Datahub by running the following:

```shell
helm install datahub datahub/ --values datahub/quickstart-values.yaml
```

Values in [quickstart-values.yaml](https://github.com/linkedin/datahub/tree/master/datahub-kubernetes/datahub/quickstart-values.yaml)
have been preset to point to the dependencies deployed using the [prerequisites](https://github.com/linkedin/datahub/tree/master/datahub-kubernetes/prerequisites)
chart with release name "prerequisites". If you deployed that chart under a different release name, update the quickstart-values.yaml file accordingly before installing.

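In practice you will usually copy quickstart-values.yaml and adjust the copy rather than edit the file in place. A sketch of that workflow, where `my-values.yaml` and the pinned `<tag>` are placeholders rather than anything the chart requires:

```shell
# Work from a copy of the quickstart values and pin image tags instead of "latest".
cp datahub/quickstart-values.yaml my-values.yaml
helm upgrade --install datahub datahub/ \
  --values my-values.yaml \
  --set datahub-gms.image.tag=<tag> \
  --set datahub-frontend.image.tag=<tag>
```
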
Run `kubectl get pods` to check whether all the datahub pods are running. You should get a result similar to below.

```
NAME                                               READY   STATUS      RESTARTS   AGE
datahub-datahub-frontend-84c58df9f7-5bgwx          1/1     Running     0          4m2s
datahub-datahub-gms-58b676f77c-c6pfx               1/1     Running     0          4m2s
datahub-datahub-mae-consumer-7b98bf65d-tjbwx       1/1     Running     0          4m3s
datahub-datahub-mce-consumer-8c57d8587-vjv9m       1/1     Running     0          4m2s
datahub-elasticsearch-setup-job-8dz6b              0/1     Completed   0          4m50s
datahub-kafka-setup-job-6blcj                      0/1     Completed   0          4m40s
datahub-mysql-setup-job-b57kc                      0/1     Completed   0          4m7s
elasticsearch-master-0                             1/1     Running     0          97m
elasticsearch-master-1                             1/1     Running     0          97m
elasticsearch-master-2                             1/1     Running     0          97m
prerequisites-cp-schema-registry-cf79bfccf-kvjtv   2/2     Running     1          99m
prerequisites-kafka-0                              1/1     Running     2          97m
prerequisites-mysql-0                              1/1     Running     1          97m
prerequisites-neo4j-community-0                    1/1     Running     0          88m
prerequisites-zookeeper-0                          1/1     Running     0          97m
```

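The three `*-setup-job` pods are one-shot jobs that prepare Elasticsearch, Kafka, and MySQL for Datahub; they should end up in `Completed`. If one of them fails, its logs are the first place to look. A sketch using the pod name from the listing above (your pod suffix will differ):

```shell
# Inspect the output of a setup job pod; substitute the name from `kubectl get pods`.
kubectl logs datahub-elasticsearch-setup-job-8dz6b
```
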
You can run the following to expose the frontend locally. Note: you can find the pod name using the command above;
in this case, the datahub-frontend pod name was `datahub-datahub-frontend-84c58df9f7-5bgwx`.

```shell
kubectl port-forward <datahub-frontend pod name> 9002:9002
```

You should be able to access the frontend via http://localhost:9002.

Once you confirm that the pods are running well, you can set up an ingress for datahub-frontend
to expose port 9002 to the public.

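The datahub-frontend subchart already exposes ingress settings (the `ingress.*` values in its chart), so one way to do this is to flip them on through Helm. The exact host, TLS, and annotation schema depends on your ingress controller, so treat the flag below as a starting sketch rather than a complete configuration:

```shell
# Re-render the release with the frontend ingress enabled; add host/TLS/annotation
# values for your ingress controller in a values file as needed.
helm upgrade datahub datahub/ \
  --values datahub/quickstart-values.yaml \
  --set datahub-frontend.ingress.enabled=true
```
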
## Other useful commands

| Command | Description |
|-----|------|
| helm uninstall datahub | Remove DataHub |
| helm ls | List Helm releases |
| helm history | Fetch a release history |

datahub-kubernetes/datahub/README.md (new file, 67 lines)

@@ -0,0 +1,67 @@
datahub
=======
A Helm chart for LinkedIn DataHub

Current chart version is `0.1.2`

## Install DataHub
Update the `datahub/values.yaml` file with valid hostname/IP address configuration for elasticsearch, neo4j, schema-registry, broker & mysql, then navigate to the current directory and run:

```
helm install datahub datahub/
```

## Chart Values

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| datahub-frontend.enabled | bool | `true` | Enable Datahub Front-end |
| datahub-frontend.image.repository | string | `"linkedin/datahub-frontend-react"` | Image repository for datahub-frontend |
| datahub-frontend.image.tag | string | `"latest"` | Image tag for datahub-frontend |
| datahub-gms.enabled | bool | `true` | Enable GMS |
| datahub-gms.image.repository | string | `"linkedin/datahub-gms"` | Image repository for datahub-gms |
| datahub-gms.image.tag | string | `"latest"` | Image tag for datahub-gms |
| datahub-mae-consumer.enabled | bool | `true` | Enable MAE Consumer |
| datahub-mae-consumer.image.repository | string | `"linkedin/datahub-mae-consumer"` | Image repository for datahub-mae-consumer |
| datahub-mae-consumer.image.tag | string | `"latest"` | Image tag for datahub-mae-consumer |
| datahub-mce-consumer.enabled | bool | `true` | Enable MCE Consumer |
| datahub-mce-consumer.image.repository | string | `"linkedin/datahub-mce-consumer"` | Image repository for datahub-mce-consumer |
| datahub-mce-consumer.image.tag | string | `"latest"` | Image tag for datahub-mce-consumer |
| datahub-ingestion-cron.enabled | bool | `false` | Enable cronjob for periodic ingestion |
| elasticsearchSetupJob.enabled | bool | `true` | Enable setup job for elasticsearch |
| elasticsearchSetupJob.image.repository | string | `"linkedin/datahub-elasticsearch-setup"` | Image repository for elasticsearchSetupJob |
| elasticsearchSetupJob.image.tag | string | `"latest"` | Image tag for elasticsearchSetupJob |
| kafkaSetupJob.enabled | bool | `true` | Enable setup job for kafka |
| kafkaSetupJob.image.repository | string | `"linkedin/datahub-kafka-setup"` | Image repository for kafkaSetupJob |
| kafkaSetupJob.image.tag | string | `"latest"` | Image tag for kafkaSetupJob |
| mysqlSetupJob.enabled | bool | `false` | Enable setup job for mysql |
| mysqlSetupJob.image.repository | string | `""` | Image repository for mysqlSetupJob |
| mysqlSetupJob.image.tag | string | `""` | Image tag for mysqlSetupJob |
| global.datahub.appVersion | string | `"1.0"` | App version for annotation |
| global.datahub.gms.port | string | `"8080"` | Port of GMS service |
| global.elasticsearch.host | string | `"elasticsearch"` | Elasticsearch host name (endpoint) |
| global.elasticsearch.port | string | `"9200"` | Elasticsearch port |
| global.kafka.bootstrap.server | string | `"broker:9092"` | Kafka bootstrap servers (with port) |
| global.kafka.zookeeper.server | string | `"zookeeper:2181"` | Kafka zookeeper servers (with port) |
| global.kafka.schemaregistry.url | string | `"http://schema-registry:8081"` | URL to kafka schema registry |
| global.neo4j.host | string | `"neo4j:7474"` | Neo4j host address (with port) |
| global.neo4j.uri | string | `"bolt://neo4j"` | Neo4j URI |
| global.neo4j.username | string | `"neo4j"` | Neo4j user name |
| global.neo4j.password.secretRef | string | `"neo4j-secrets"` | Secret that contains the Neo4j password |
| global.neo4j.password.secretKey | string | `"neo4j-password"` | Secret key that contains the Neo4j password |
| global.sql.datasource.driver | string | `"com.mysql.jdbc.Driver"` | Driver for the SQL database |
| global.sql.datasource.host | string | `"mysql:3306"` | SQL database host (with port) |
| global.sql.datasource.hostForMysqlClient | string | `"mysql"` | SQL database host (without port) |
| global.sql.datasource.url | string | `"jdbc:mysql://mysql:3306/datahub?verifyServerCertificate=false&useSSL=true"` | URL to access SQL database |
| global.sql.datasource.username | string | `"datahub"` | SQL user name |
| global.sql.datasource.password.secretRef | string | `"mysql-secrets"` | Secret that contains the MySQL password |
| global.sql.datasource.password.secretKey | string | `"mysql-password"` | Secret key that contains the MySQL password |

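Any of the `global.*` endpoints in the table can be overridden at install time, which is handy when some dependencies run as managed services instead of the prerequisites chart. A sketch, where the hostnames are placeholders for your own endpoints:

```shell
# Point the chart at externally managed dependencies by overriding the global endpoints.
helm install datahub datahub/ \
  --set global.elasticsearch.host=es.example.internal \
  --set global.sql.datasource.host=mysql.example.internal:3306 \
  --set global.kafka.bootstrap.server=kafka.example.internal:9092
```
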
## Optional Chart Values

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| global.credentialsAndCertsSecrets.name | string | `""` | Name of the secret that holds SSL certificates (keystores, truststores) |
| global.credentialsAndCertsSecrets.path | string | `"/mnt/certs"` | Path to mount the SSL certificates |
| global.credentialsAndCertsSecrets.secureEnv | map | `{}` | Map of SSL config name and the corresponding value in the secret |
| global.springKafkaConfigurationOverrides | map | `{}` | Map of configuration overrides for accessing kafka |

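These optional values are how the charts reach a Kafka cluster that requires SSL: the named secret is mounted at the given path, and `springKafkaConfigurationOverrides` carries the matching client settings. A sketch of wiring in the secret, assuming a pre-created `datahub-certs` secret; the override keys themselves depend on your Kafka security configuration and are usually easier to keep in a values file:

```shell
# Mount an existing certificate secret into the Datahub components.
helm upgrade datahub datahub/ \
  --values datahub/quickstart-values.yaml \
  --set global.credentialsAndCertsSecrets.name=datahub-certs \
  --set global.credentialsAndCertsSecrets.path=/mnt/datahub/certs
```
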
@@ -18,7 +18,7 @@ Current chart version is `0.2.0`
 | fullnameOverride | string | `"datahub-frontend"` | |
 | global.datahub.gms.port | string | `"8080"` | |
 | image.pullPolicy | string | `"IfNotPresent"` | |
-| image.repository | string | `"linkedin/datahub-frontend"` | |
+| image.repository | string | `"linkedin/datahub-frontend-react"` | |
 | image.tag | string | `"latest"` | |
 | imagePullSecrets | list | `[]` | |
 | ingress.annotations | object | `{}` | |
@@ -5,7 +5,7 @@
 replicaCount: 1
 
 image:
-  repository: linkedin/datahub-frontend
+  repository: linkedin/datahub-frontend-react
   tag: "latest"
   pullPolicy: Always
 
@@ -23,7 +23,7 @@ Current chart version is `0.2.0`
 | global.hostAliases[0].hostnames[2] | string | `"elasticsearch"` | |
 | global.hostAliases[0].hostnames[3] | string | `"neo4j"` | |
 | global.hostAliases[0].ip | string | `"192.168.0.104"` | |
-| global.kafka.bootstrap.server | string | `"broker:29092"` | |
+| global.kafka.bootstrap.server | string | `"broker:9092"` | |
 | global.kafka.schemaregistry.url | string | `"http://schema-registry:8081"` | |
 | global.neo4j.host | string | `"neo4j:7474"` | |
 | global.neo4j.uri | string | `"bolt://neo4j"` | |
@@ -152,7 +152,7 @@ global:
 
   kafka:
     bootstrap:
-      server: "broker:29092"
+      server: "broker:9092"
     schemaregistry:
       url: "http://schema-registry:8081"
 
@@ -16,7 +16,7 @@ Current chart version is `0.2.0`
 | fullnameOverride | string | `"datahub-mae-consumer"` | |
 | global.elasticsearch.host | string | `"elasticsearch"` | |
 | global.elasticsearch.port | string | `"9200"` | |
-| global.kafka.bootstrap.server | string | `"broker:29092"` | |
+| global.kafka.bootstrap.server | string | `"broker:9092"` | |
 | global.kafka.schemaregistry.url | string | `"http://schema-registry:8081"` | |
 | global.neo4j.host | string | `"neo4j:7474"` | |
 | global.neo4j.uri | string | `"bolt://neo4j"` | |
@@ -151,7 +151,7 @@ global:
 
   kafka:
     bootstrap:
-      server: "broker:29092"
+      server: "broker:9092"
     schemaregistry:
       url: "http://schema-registry:8081"
 
@@ -14,7 +14,7 @@ Current chart version is `0.2.0`
 | extraVolumes | Templatable string of additional `volumes` to be passed to the `tpl` function | "" |
 | extraVolumeMounts | Templatable string of additional `volumeMounts` to be passed to the `tpl` function | "" |
 | fullnameOverride | string | `""` | |
-| global.kafka.bootstrap.server | string | `"broker:29092"` | |
+| global.kafka.bootstrap.server | string | `"broker:9092"` | |
 | global.kafka.schemaregistry.url | string | `"http://schema-registry:8081"` | |
 | global.datahub.gms.port | string | `"8080"` | |
 | global.hostAliases[0].hostnames[0] | string | `"broker"` | |
@@ -147,7 +147,7 @@ readinessProbe:
 global:
   kafka:
     bootstrap:
-      server: "broker:29092"
+      server: "broker:9092"
     schemaregistry:
       url: "http://schema-registry:8081"
 
datahub-kubernetes/datahub/quickstart-values.yaml (new file, 88 lines)

@@ -0,0 +1,88 @@
# Values to start up datahub after starting up the datahub-prerequisites chart with "prerequisites" release name
# Copy this file and change configuration as needed.
datahub-gms:
  enabled: true
  image:
    repository: linkedin/datahub-gms
    tag: "latest"

datahub-frontend:
  enabled: true
  image:
    repository: linkedin/datahub-frontend-react
    tag: "latest"
  # Set up ingress to expose react front-end
  ingress:
    enabled: false

datahub-mae-consumer:
  enabled: true
  image:
    repository: linkedin/datahub-mae-consumer
    tag: "latest"

datahub-mce-consumer:
  enabled: true
  image:
    repository: linkedin/datahub-mce-consumer
    tag: "latest"

elasticsearchSetupJob:
  enabled: true
  image:
    repository: linkedin/datahub-elasticsearch-setup
    tag: "latest"

kafkaSetupJob:
  enabled: true
  image:
    repository: linkedin/datahub-kafka-setup
    tag: "latest"

mysqlSetupJob:
  enabled: true
  image:
    repository: acryldata/datahub-mysql-setup
    tag: "latest"

datahub-ingestion-cron:
  enabled: false

global:
  elasticsearch:
    host: "elasticsearch-master"
    port: "9200"
    indexPrefix: demo

  kafka:
    bootstrap:
      server: "prerequisites-kafka:9092"
    zookeeper:
      server: "prerequisites-zookeeper:2181"
    schemaregistry:
      url: "http://prerequisites-cp-schema-registry:8081"

  neo4j:
    host: "prerequisites-neo4j-community:7474"
    uri: "bolt://prerequisites-neo4j-community"
    username: "neo4j"
    password:
      secretRef: neo4j-secrets
      secretKey: neo4j-password

  sql:
    datasource:
      host: "prerequisites-mysql:3306"
      hostForMysqlClient: "prerequisites-mysql"
      port: "3306"
      url: "jdbc:mysql://prerequisites-mysql:3306/datahub?verifyServerCertificate=false&useSSL=true&useUnicode=yes&characterEncoding=UTF-8"
      driver: "com.mysql.jdbc.Driver"
      username: "root"
      password:
        secretRef: mysql-secrets
        secretKey: mysql-root-password

  datahub:
    gms:
      port: "8080"
    appVersion: "1.0"

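Before installing with this file, it can be useful to render the chart locally and eyeball the generated manifests; `helm template` does this without touching the cluster:

```shell
# Render the Datahub chart with the quickstart values and inspect the output.
helm template datahub datahub/ --values datahub/quickstart-values.yaml | less
```
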
@@ -11,7 +11,7 @@ datahub-gms:
 datahub-frontend:
   enabled: true
   image:
-    repository: linkedin/datahub-frontend
+    repository: linkedin/datahub-frontend-react
     tag: "latest"
 
 datahub-mae-consumer:
@@ -45,7 +45,10 @@ kafkaSetupJob:
     tag: "latest"
 
 mysqlSetupJob:
-  enabled: false
+  enabled: true
+  image:
+    repository: acryldata/datahub-mysql-setup
+    tag: "latest"
 
 global:
   elasticsearch:
@@ -54,7 +57,7 @@ global:
 
   kafka:
     bootstrap:
-      server: "broker:29092"
+      server: "broker:9092"
     zookeeper:
       server: "zookeeper:2181"
     schemaregistry:
@@ -72,6 +75,7 @@ global:
     datasource:
       host: "mysql:3306"
       hostForMysqlClient: "mysql"
+      port: "3306"
       url: "jdbc:mysql://mysql:3306/datahub?verifyServerCertificate=false&useSSL=true"
       driver: "com.mysql.jdbc.Driver"
       username: "datahub"
@@ -84,14 +88,15 @@ global:
       port: "8080"
     appVersion: "1.0"
 
-  hostAliases:
-    - ip: "192.168.0.104"
-      hostnames:
-        - "broker"
-        - "mysql"
-        - "elasticsearch"
-        - "neo4j"
+#  hostAliases:
+#    - ip: "192.168.0.104"
+#      hostnames:
+#        - "broker"
+#        - "mysql"
+#        - "elasticsearch"
+#        - "neo4j"
 
   ## Add below to enable SSL for kafka
   # credentialsAndCertsSecrets:
   #   name: datahub-certs
   #   path: /mnt/datahub/certs
datahub-kubernetes/prerequisites/.gitignore (new file, vendored, 2 lines)

@@ -0,0 +1,2 @@
*.tgz
*.lock
datahub-kubernetes/prerequisites/Chart.yaml (new file, 37 lines)

@@ -0,0 +1,37 @@
apiVersion: v2
name: datahub-prerequisites
description: A Helm chart for packages that Datahub depends on
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
version: 0.0.1
dependencies:
  - name: elasticsearch
    version: 7.9.3
    repository: https://helm.elastic.co
    condition: elasticsearch.enabled
  # This chart deploys an enterprise version of neo4j that requires a commercial license
  - name: neo4j
    version: 4.2.2-1
    repository: https://neo4j-contrib.github.io/neo4j-helm/
    condition: neo4j.enabled
  # This chart deploys a community version of neo4j
  - name: neo4j-community
    version: 1.2.4
    repository: https://equinor.github.io/helm-charts/charts/
    condition: neo4j-community.enabled
  - name: mysql
    version: 8.5.4
    repository: https://charts.bitnami.com/bitnami
    condition: mysql.enabled
  # This chart deploys an enterprise version of kafka that requires a commercial license
  # Note, Schema registry and kafka rest proxy do not require the commercial license
  - name: cp-helm-charts
    version: 0.6.0
    repository: https://confluentinc.github.io/cp-helm-charts/
    condition: cp-helm-charts.enabled
  # This chart deploys a community version of kafka
  - name: kafka
    version: 12.17.4
    repository: https://charts.bitnami.com/bitnami
    condition: kafka.enabled

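Because these dependencies are pulled from remote Helm repositories (the chart's `.gitignore` excludes the downloaded `*.tgz` archives and lock file), they typically need to be fetched before the prerequisites chart can be installed; a minimal sketch:

```shell
# Fetch the dependency charts declared in Chart.yaml into prerequisites/charts/.
# Depending on your Helm setup, you may first need to `helm repo add` the
# repositories referenced above (elastic, bitnami, confluent, and so on).
helm dependency update prerequisites/
```
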
datahub-kubernetes/prerequisites/values.yaml (new file, 87 lines)

@@ -0,0 +1,87 @@
# Default configuration for pre-requisites to get you started
# Copy this file and update to the configuration of choice
elasticsearch:
  enabled: true   # set this to false, if you want to provide your own ES instance.
  cluster:
    env:
      MINIMUM_MASTER_NODES: 1
      EXPECTED_MASTER_NODES: 1
      RECOVER_AFTER_MASTER_NODES: 1
  master:
    replicas: 1
  data:
    replicas: 1
  client:
    replicas: 1

  # Uncomment if running on minikube - Reference https://github.com/elastic/helm-charts/blob/master/elasticsearch/examples/minikube/values.yaml
  # Permit co-located instances for solitary minikube virtual machines.
  antiAffinity: "soft"

  # Shrink default JVM heap.
  esJavaOpts: "-Xmx128m -Xms128m"

  # Allocate smaller chunks of memory per pod.
  resources:
    requests:
      cpu: "100m"
      memory: "512M"
    limits:
      cpu: "1000m"
      memory: "512M"

  # Request smaller persistent volumes.
  volumeClaimTemplate:
    accessModes: [ "ReadWriteOnce" ]
    storageClassName: "standard"
    resources:
      requests:
        storage: 100M

# Official neo4j chart uses the Neo4j Enterprise Edition which requires a license
neo4j:
  enabled: false   # set this to true, if you have a license for the enterprise edition
  acceptLicenseAgreement: "yes"
  defaultDatabase: "graph.db"
  neo4jPassword: "datahub"
  # For better security, add password to neo4j-secrets k8s secret and uncomment below
  # existingPasswordSecret: neo4j-secrets
  core:
    standalone: true

# Deploys neo4j community version. Only supports single node
neo4j-community:
  enabled: true   # set this to false, if you have a license for the enterprise edition
  acceptLicenseAgreement: "yes"
  defaultDatabase: "graph.db"
  # For better security, add password to neo4j-secrets k8s secret and uncomment below
  existingPasswordSecret: neo4j-secrets

mysql:
  enabled: true
  auth:
    # For better security, add mysql-secrets k8s secret with mysql-root-password, mysql-replication-password and mysql-password
    existingSecret: mysql-secrets

cp-helm-charts:
  # Schema registry is under the community license
  cp-schema-registry:
    enabled: true
    kafka:
      bootstrapServers: "prerequisites-kafka:9092"  ## <<release-name>>-kafka:9092
  cp-kafka:
    enabled: false
  cp-zookeeper:
    enabled: false
  cp-kafka-rest:
    enabled: false
  cp-kafka-connect:
    enabled: false
  cp-ksql-server:
    enabled: false
  cp-control-center:
    enabled: false

# Bitnami version of Kafka that deploys open source Kafka https://artifacthub.io/packages/helm/bitnami/kafka
kafka:
  enabled: true

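The `prerequisites-*` hostnames used earlier in quickstart-values.yaml (for example `prerequisites-kafka:9092` and `prerequisites-mysql:3306`) are simply the Kubernetes Services created by this chart under the `prerequisites` release name; if you are unsure what to point Datahub at, list them:

```shell
# Service names double as in-cluster hostnames for the Datahub configuration.
kubectl get svc
```
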
@@ -50,7 +50,9 @@ function list_markdown_files(): string[] {
     // Ignore everything within this directory.
     /^docs-website\//,
     // Don't want hosted docs for these.
-    /^contrib\/(?!kubernetes\/README\.md)/, // Keeps the main Kubernetes docs.
+    /^contrib\//,
+    // Keep main docs for kubernetes, but skip the inner docs
+    /^datahub-kubernetes\/datahub\//,
     /^datahub-web\//,
     /^metadata-ingestion-examples\//,
     /^docs\/rfc\/templates\/000-template\.md$/,
@@ -105,7 +105,7 @@ module.exports = {
     Deployment: [
       "docs/how/kafka-config",
       "docker/README",
-      "contrib/kubernetes/README",
+      "datahub-kubernetes/README",
       // Purposely not including the following:
       // - "docker/datahub-frontend/README",
       // - "docker/datahub-gms-graphql-service/README",