### Persistent Volumes with ReadWriteMany Access Modes
The OpenMetadata Helm chart depends on Airflow, and Airflow expects a persistent disk that supports ReadWriteMany (the volume can be mounted as read-write by many nodes).
The workaround is to create an NFS share and use it as the persistent volume claim to deploy OpenMetadata, by implementing the following steps in order.
This guide assumes you have an NFS server already set up, with a hostname or IP address that is reachable from your on-premises Kubernetes cluster, and that you have configured a path to be used for the OpenMetadata Airflow Helm dependency.
To provision PersistentVolumes dynamically using the StorageClass, you need to install an NFS provisioner.
It is recommended to use the [nfs-subdir-external-provisioner](https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner) Helm chart for this case.
Replace `NFS_HOSTNAME_OR_IP` with your NFS server value and run the commands.
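A typical installation, based on the chart's documented `nfs.server` and `nfs.path` values, might look like the sketch below; the `nfs-provisioner` namespace and the exported path are placeholders to adapt to your environment.

```commandline
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --namespace nfs-provisioner --create-namespace \
    --set nfs.server=<NFS_HOSTNAME_OR_IP> \
    --set nfs.path=<EXPORTED_NFS_PATH>
```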
This will create a new StorageClass with `nfs-subdir-external-provisioner`. You can verify it with `kubectl get storageclass -n nfs-provisioner`.
## Provision NFS-backed PVCs for Airflow DAGs and Airflow Logs
Create the Persistent Volumes and Persistent Volume Claims with the command below.
```commandline
kubectl create -f logs_pvc.yml
```
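A minimal `logs_pvc.yml` might look like the following sketch. The claim name, namespace, and requested size are assumptions to adapt to your deployment; `nfs-client` is the default StorageClass name created by the provisioner chart.

```yaml
# logs_pvc.yml — minimal sketch of an NFS-backed claim for Airflow logs
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: openmetadata-dependencies-logs   # assumed claim name; match your Airflow values
  namespace: default
spec:
  accessModes:
    - ReadWriteMany                      # required by Airflow for shared log storage
  storageClassName: nfs-client           # default StorageClass created by the provisioner
  resources:
    requests:
      storage: 5Gi
```

With dynamic provisioning in place, the StorageClass creates the backing PersistentVolume automatically when the claim is bound.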
## Change owner and permissions manually on disks
Since Airflow pods run as non-root users, they will not have write access to the NFS server volumes. To fix the permissions, spin up a pod with the persistent volumes attached and run it once.
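A one-shot pod along the following lines can do this; the claim name, mount path, and UID `50000` (the default Airflow user in the official images) are assumptions to adjust for your deployment.

```yaml
# permissions_pod.yml — hypothetical one-shot pod that fixes ownership on the NFS volume
apiVersion: v1
kind: Pod
metadata:
  name: airflow-permissions-fix
  namespace: default
spec:
  restartPolicy: Never
  containers:
    - name: fix-permissions
      image: busybox
      # chown the logs volume to the Airflow UID (assumed 50000) and make it group-writable
      command: ["/bin/sh", "-c", "chown -R 50000:0 /airflow-logs && chmod -R 775 /airflow-logs"]
      volumeMounts:
        - name: airflow-logs
          mountPath: /airflow-logs
  volumes:
    - name: airflow-logs
      persistentVolumeClaim:
        claimName: openmetadata-dependencies-logs   # assumed claim name from logs_pvc.yml
```

Apply it with `kubectl create -f permissions_pod.yml`, wait for it to complete, and then delete the pod.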