OpenMetadata supports installing and running the application on Google Kubernetes Engine through Helm Charts.
However, a few additional configurations are required as prerequisites.
<Note>
Google Kubernetes Engine (GKE) Autopilot mode is not compatible with one of the OpenMetadata dependencies, Elasticsearch.
Elasticsearch pods require elevated permissions to run initContainers that change system configuration, which is not allowed by the GKE Autopilot pod security policy.
</Note>
<Note>
All the code snippets in this section assume the `default` Kubernetes namespace.
</Note>
## Prerequisites
### Persistent Volumes with ReadWriteMany Access Modes
The OpenMetadata Helm chart depends on Airflow, and Airflow expects a persistent disk that supports ReadWriteMany (the volume can be mounted as read-write by many nodes).
The workaround is to create an NFS server disk on Google Kubernetes Engine, use it as the persistent claim, and deploy OpenMetadata by implementing the following steps in order.
## Create NFS Share
### Provision GCP Persistent Disk for Google Kubernetes Engine
Run the command below to create a Google Cloud zonal persistent disk. For more information on Google Cloud disk options, please visit [here](https://cloud.google.com/compute/docs/disks).
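The disk-creation command itself is not included in this section; a typical invocation looks like the sketch below, where the disk name `nfs-disk`, the `100GB` size, and `<zone-id>` are placeholder values to adjust for your environment.
```commandline
gcloud compute disks create nfs-disk --size=100GB --zone=<zone-id>
```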
Deploy an NFS server backed by this disk and expose it inside the cluster using the manifests sketched below, then apply them with the following commands and ensure the pods are running.
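<Collapse title="Code Samples for NFS Server Deployment and Service">
The two manifests referenced by the commands below are not included in this section. The sketches here show one possible layout, assuming the zonal disk created above is named `nfs-disk` and using the stock `volume-nfs` example image; the Deployment name, labels, and image tag are illustrative and should be adjusted for your environment.
```yaml
# nfs-server-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-server
spec:
  replicas: 1
  selector:
    matchLabels:
      role: nfs-server
  template:
    metadata:
      labels:
        role: nfs-server
    spec:
      containers:
        - name: nfs-server
          image: gcr.io/google_containers/volume-nfs:0.8
          ports:
            - name: nfs
              containerPort: 2049
            - name: mountd
              containerPort: 20048
            - name: rpcbind
              containerPort: 111
          securityContext:
            privileged: true
          volumeMounts:
            - name: nfs-export
              mountPath: /exports
      volumes:
        - name: nfs-export
          gcePersistentDisk:
            pdName: nfs-disk
            fsType: ext4
```
```yaml
# nfs-cluster-ip-service.yml
apiVersion: v1
kind: Service
metadata:
  name: nfs-server
spec:
  ports:
    - name: nfs
      port: 2049
    - name: mountd
      port: 20048
    - name: rpcbind
      port: 111
  selector:
    role: nfs-server
```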
```commandline
kubectl create -f nfs-server-deployment.yml
kubectl create -f nfs-cluster-ip-service.yml
```
We create a ClusterIP Service so that pods can reach the NFS server within the cluster at a fixed IP/DNS name.
Your NFS server pods are now accessible either at that IP (note yours from the service output) or via the name nfs-server.default.svc.cluster.local. By default, every service is addressable via the name `<service-name>.<namespace>.svc.cluster.local`.
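To see the cluster IP that was assigned, you can query the service directly (the service name `nfs-server` comes from the manifest above):
```commandline
kubectl get service nfs-server -n default
```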
</Collapse>
### Provision NFS-backed PV and PVC for Airflow DAGs and Airflow Logs
<Collapse title="Code Samples for PV and PVC for Airflow DAGs">
```yaml
# dags_pv_pvc.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: openmetadata-dependencies-dags-pv
spec:
  capacity:
    storage: 11Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: nfs-server.default.svc.cluster.local
    path: "/"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: airflow
    release: openmetadata-dependencies
  name: openmetadata-dependencies-dags
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: ""
```
Create the Persistent Volume and Persistent Volume Claim with the command below.
```commandline
kubectl create -f dags_pv_pvc.yml
```
</Collapse>
<Collapse title="Code Samples for PV and PVC for Airflow Logs">
```yaml
# logs_pv_pvc.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: openmetadata-dependencies-logs-pv
spec:
  capacity:
    storage: 11Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: nfs-server.default.svc.cluster.local
    path: "/"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: airflow
  name: openmetadata-dependencies-logs
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: ""
```
Create the Persistent Volume and Persistent Volume Claim with the command below.
```commandline
kubectl create -f logs_pv_pvc.yml
```
</Collapse>
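Before moving on, it can help to confirm that both claims are bound to their volumes; a `Pending` status usually means the claim's access mode, requested size, or storage class does not match the Persistent Volume.
```commandline
kubectl get pvc -n default
```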
## Change Owner and Permission Manually on Disks
Since Airflow pods run as non-root users, they do not have write access to the NFS server volumes. To fix the permissions, spin up a pod with the persistent volumes attached and run it once, as sketched below.
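A minimal sketch of such a one-off pod follows. It assumes the Airflow containers run as UID `50000` (the default user in the official Airflow images); the pod name, mount paths, and that UID are assumptions to adjust if your setup differs. The claim names match the PVCs created above.
```yaml
# permissions_pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: airflow-permissions-fix
  namespace: default
spec:
  restartPolicy: Never
  containers:
    - name: fix-permissions
      image: busybox
      # Recursively hand ownership of the DAGs and logs volumes to the Airflow user.
      command: ["sh", "-c", "chown -R 50000:0 /airflow-dags /airflow-logs"]
      volumeMounts:
        - name: airflow-dags
          mountPath: /airflow-dags
        - name: airflow-logs
          mountPath: /airflow-logs
  volumes:
    - name: airflow-dags
      persistentVolumeClaim:
        claimName: openmetadata-dependencies-dags
    - name: airflow-logs
      persistentVolumeClaim:
        claimName: openmetadata-dependencies-logs
```
Create the pod with `kubectl create -f permissions_pod.yml`, wait for it to reach the `Completed` state, and then delete it with `kubectl delete pod airflow-permissions-fix`.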