---
title: Run the GCS Datalake Connector Externally
slug: /connectors/database/gcs-datalake/yaml
---
{% connectorDetailsHeader
name="GCS Datalake"
stage="PROD"
platform="OpenMetadata"
availableFeatures=["Metadata", "Data Profiler", "Data Quality", "Sample Data"]
unavailableFeatures=["Query Usage", "Lineage", "Column-level Lineage", "Owners", "dbt", "Tags", "Stored Procedures"]
/%}
In this section, we provide guides and references to use the GCS Datalake connector.
Configure and schedule GCS Datalake metadata and profiler workflows from the CLI:
- [Requirements](#requirements)
- [Metadata Ingestion](#metadata-ingestion)
- [dbt Integration](#dbt-integration)
{% partial file="/v1.8/connectors/external-ingestion-deployment.md" /%}
## Requirements
**Note:** The GCS Datalake connector supports extracting metadata from `JSON`, `CSV`, `TSV`, and `Parquet` file types.
### Python Requirements
{% partial file="/v1.8/connectors/python-requirements.md" /%}
If you are running OpenMetadata version greater than 0.13, you will need to install the Datalake ingestion dependencies for GCS:
#### GCS installation
```bash
pip3 install "openmetadata-ingestion[datalake-gcp]"
```
#### If version < 0.13
You will need to install the Datalake requirements:
```bash
pip3 install "openmetadata-ingestion[datalake]"
```
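After either installation, you can optionally confirm that the package landed in your environment; a minimal check using pip itself (no OpenMetadata-specific tooling assumed):
```bash
# Verify the openmetadata-ingestion package is installed and inspect its version
pip3 show openmetadata-ingestion
```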
## Metadata Ingestion
All connectors are defined as JSON Schemas, which describe the structure needed to create a connection to Datalake.
To create and run a Metadata Ingestion workflow, we will build a YAML configuration able to connect to the source, process the Entities if needed, and reach the OpenMetadata server.
The workflow is modeled around the Datalake connection JSON Schema.
### 1. Define the YAML Config
This is a sample config for Datalake using GCS:
{% codePreview %}
{% codeInfoContainer %}
#### Source Configuration - Service Connection
{% partial file="/v1.8/connectors/yaml/common/gcp-config-def.md" /%}
{% codeInfo srNumber=5 %}
* **bucketName**: Name of the bucket in GCS.
* **prefix**: Prefix (path) within the GCS bucket from which to read data.
{% /codeInfo %}
{% partial file="/v1.8/connectors/yaml/database/source-config-def.md" /%}
{% partial file="/v1.8/connectors/yaml/ingestion-sink-def.md" /%}
{% partial file="/v1.8/connectors/yaml/workflow-config-def.md" /%}
{% /codeInfoContainer %}
{% codeBlock fileName="filename.yaml" %}
```yaml {% isCodeBlock=true %}
source:
  type: datalake
  serviceName: local_datalake
  serviceConnection:
    config:
      type: Datalake
      configSource:
        securityConfig:
          gcpConfig:
```
{% partial file="/v1.8/connectors/yaml/common/gcp-config.md" /%}
```yaml {% srNumber=5 %}
      bucketName: bucket name
      prefix: prefix
```
{% partial file="/v1.8/connectors/yaml/database/source-config.md" /%}
{% partial file="/v1.8/connectors/yaml/ingestion-sink.md" /%}
{% partial file="/v1.8/connectors/yaml/workflow-config.md" /%}
{% /codeBlock %}
{% /codePreview %}
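For quick reference, the snippets above assemble into a configuration along the lines of the sketch below. This is a minimal illustration only: the `gcpConfig` block assumes service-account credentials, and every value shown (project, keys, emails, bucket, prefix, `hostPort`, `jwtToken`) is a placeholder to replace with your own.

```yaml
source:
  type: datalake
  serviceName: local_datalake
  serviceConnection:
    config:
      type: Datalake
      configSource:
        securityConfig:
          gcpConfig:
            type: service_account
            projectId: my-gcp-project            # placeholder
            privateKeyId: private-key-id         # placeholder
            privateKey: "-----BEGIN PRIVATE KEY-----\n<key>\n-----END PRIVATE KEY-----\n"
            clientEmail: ingestion-bot@my-gcp-project.iam.gserviceaccount.com
            clientId: "1234567890"
            authUri: https://accounts.google.com/o/oauth2/auth
            tokenUri: https://oauth2.googleapis.com/token
            authProviderX509CertUrl: https://www.googleapis.com/oauth2/v1/certs
            clientX509CertUrl: https://www.googleapis.com/robot/v1/metadata/x509/ingestion-bot
      bucketName: my-bucket                      # placeholder
      prefix: my/prefix                          # placeholder
  sourceConfig:
    config:
      type: DatabaseMetadata
sink:
  type: metadata-rest
  config: {}
workflowConfig:
  openMetadataServerConfig:
    hostPort: http://localhost:8585/api
    authProvider: openmetadata
    securityConfig:
      jwtToken: "{bot_jwt_token}"                # placeholder
```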
{% partial file="/v1.8/connectors/yaml/ingestion-cli.md" /%}
## dbt Integration
You can learn more about how to ingest dbt models' definitions and their lineage [here ](/connectors/ingestion/workflows/dbt ).