DataHub containers, datahub-gms (the backend server) and datahub-frontend (the UI server), write log files to the local container filesystem. To extract these logs, you'll need to get them from inside the container where the services are running.
You can do so easily using the Docker CLI if you're deploying with vanilla Docker or Docker Compose, and `kubectl` if you're on Kubernetes.
## Step 1: Find the id of the container you're interested in
You'll first need to get the id of the container that you'd like to extract logs for: for example, `datahub-gms`.
### Docker & Docker Compose
To do so, you can view all containers that Docker knows about by running the following command:
```
johnjoyce@Johns-MBP datahub-fork % docker container ls
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
...
```
From the output, note the CONTAINER ID of the row whose NAMES column matches the service you're interested in (for example, datahub-gms).
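### Kubernetes & Helm

On Kubernetes, you'll want the name of the pod rather than a container id. A minimal sketch, assuming your DataHub pods run in the current namespace:

```
kubectl get pods
```

Note the name of the pod running the service you're interested in; it's what the `kubectl exec` commands below operate on.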
## Step 2: Find the log files

There are two types of logs that are collected:

1. **Info Logs**: These include info, warn, and error log lines. They are what print to stdout when the container runs.
2. **Debug Logs**: These files have a shorter retention (past 1 day) but include more granular debug information from the DataHub code specifically. Debug logs from external libraries that DataHub depends on are ignored.
### Docker & Docker Compose
Since log files are named based on the current date, you'll need to use `ls` to see which files currently exist. To do so, use the `docker exec` command with the container id recorded in Step 1:
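For example, a minimal sketch assuming the GMS logs live under `/tmp/datahub/logs/gms` (the exact directory may differ for your service and deployment); replace `<container-id>` with the id from Step 1:

```
docker exec <container-id> ls -la /tmp/datahub/logs/gms
```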
Depending on your issue, you may be interested in viewing both the debug and regular info logs.
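Once you've identified the file you want, one way to copy it out to your local filesystem is `docker cp`; here `<log-file>` is a placeholder for one of the dated file names from the listing above:

```
docker cp <container-id>:/tmp/datahub/logs/gms/<log-file> ./
```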
### Kubernetes & Helm
Since log files are named based on the current date, you'll need to use `ls` to see which files currently exist. To do so, use the `kubectl exec` command with the pod name recorded in Step 1:
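For example, under the same assumed `/tmp/datahub/logs/gms` directory, with `<pod-name>` taken from Step 1:

```
kubectl exec <pod-name> -- ls -la /tmp/datahub/logs/gms
```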
There are a few ways to get log files out of the pod and onto your local filesystem. You can either use `kubectl cp` or simply `cat` the file of interest and pipe it to a local file. We'll show an example using the latter approach:
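A minimal sketch, again assuming the `/tmp/datahub/logs/gms` directory; `<pod-name>` and `<log-file>` are placeholders for the values found in the earlier steps:

```
kubectl exec <pod-name> -- cat /tmp/datahub/logs/gms/<log-file> > datahub-gms.log
```

If you'd rather copy the file directly, `kubectl cp <pod-name>:/tmp/datahub/logs/gms/<log-file> ./datahub-gms.log` accomplishes the same thing.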