Mirror of https://github.com/datahub-project/datahub.git (synced 2025-11-11 08:52:58 +00:00)
docs(ingest): clarify bigquery-beta multiproject setup (#6071)
parent 129a27abef · commit 7e08a05a22

@@ -1,41 +1,51 @@

### Prerequisites

#### Create a datahub profile in GCP

1. Create a custom role for datahub as per [BigQuery docs](https://cloud.google.com/iam/docs/creating-custom-roles#creating_a_custom_role).
2. Grant the following permissions to this role:

:::info
If you have multiple projects in your BigQuery setup, the role should be granted these permissions in each of the projects.
:::

##### Basic Requirements (needed for metadata ingestion)

| permission                       | Description                                           |
| -------------------------------- | ----------------------------------------------------- |
| `bigquery.datasets.get`          | Retrieve metadata about a dataset.                    |
| `bigquery.datasets.getIamPolicy` | Read a dataset's IAM permissions.                     |
| `bigquery.jobs.create`           | Run jobs (e.g. queries) within the project.           |
| `bigquery.jobs.list`             | Manage the queries that the service account has sent. |
| `bigquery.tables.list`           | List BigQuery tables.                                 |
| `bigquery.tables.get`            | Retrieve metadata for a table.                        |
| `bigquery.readsessions.create`   | Create a session for streaming large results.         |
| `bigquery.readsessions.getData`  | Get data from the read session.                       |
| `resourcemanager.projects.get`   | Retrieve project names and metadata.                  |

##### Lineage/usage generation requirements

Additional permissions are needed on top of the basic requirements.
If you want to get lineage from multiple projects, you have to grant these permissions
for each of them.

| permission                       | Description                                                                                                  |
| -------------------------------- | ------------------------------------------------------------------------------------------------------------ |
| `bigquery.jobs.listAll`          | List all jobs (queries) submitted by any user.                                                               |
| `logging.logEntries.list`        | Fetch log entries for lineage/usage data. Not required if `use_exported_bigquery_audit_metadata` is enabled. |
| `logging.privateLogEntries.list` | Fetch log entries for lineage/usage data. Not required if `use_exported_bigquery_audit_metadata` is enabled. |

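As the table notes, the two `logging.*` permissions can be skipped when `use_exported_bigquery_audit_metadata` is enabled. A minimal, hedged sketch of the relevant recipe fragment (the `source`/`config` nesting is the standard DataHub recipe shape; adjust to your setup):

```yml
source:
  type: bigquery-beta
  config:
    # When true, lineage/usage is read from exported BigQuery audit metadata
    # instead of the logging.logEntries / logging.privateLogEntries APIs.
    use_exported_bigquery_audit_metadata: true
```
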
##### Profiling requirements

Additional permissions are needed on top of the basic requirements.

| permission                | Description                                                                                |
| ------------------------- | ------------------------------------------------------------------------------------------ |
| `bigquery.tables.getData` | Access table data to do the profiling.                                                     |
| `bigquery.tables.create`  | Create temporary tables when profiling partitioned/sharded tables. See below for details.  |
| `bigquery.tables.delete`  | Delete temporary tables when profiling partitioned/sharded tables. See below for details.  |

The profiler creates temporary tables to profile partitioned/sharded tables, which is why it needs table create/delete privileges.
Use `profiling.bigquery_temp_table_schema` to restrict the create/delete permission to one specific dataset
@@ -43,9 +53,10 @@

#### Create a service account

1. Set up a ServiceAccount as per [BigQuery docs](https://cloud.google.com/iam/docs/creating-managing-service-accounts#iam-service-accounts-create-console)
   and assign the previously created role to this service account.
2. Download a service account JSON keyfile.
Example credential file:

```json
{
  "type": "service_account",
@@ -60,22 +71,27 @@
"client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/test%suppproject-id-1234567.iam.gserviceaccount.com"
|
"client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/test%suppproject-id-1234567.iam.gserviceaccount.com"
|
||||||
}
|
}
|
||||||
```

3. To provide credentials to the source, you can either:

Set an environment variable:

```sh
$ export GOOGLE_APPLICATION_CREDENTIALS="/path/to/keyfile.json"
```

_or_

Set credential config in your source based on the credential json file. For example:

```yml
credential:
  project_id: project-id-1234567
  private_key_id: "d0121d0000882411234e11166c6aaa23ed5d74e0"
  private_key: "-----BEGIN PRIVATE KEY-----\nMIIyourkey\n-----END PRIVATE KEY-----\n"
  client_email: "test@suppproject-id-1234567.iam.gserviceaccount.com"
  client_id: "123456678890"
```

### Lineage Computation Details

@@ -95,7 +111,7 @@ tables into a predefined schema by setting `profiling.bigquery_temp_table_schema`

Temporary tables are removed after profiling.

```yaml
profiling:
  enabled: true
  bigquery_temp_table_schema: my-project-id.my-schema-where-views-can-be-created
```

@@ -106,22 +122,6 @@ Due to performance reasons, we only profile the latest partition for Partitioned

You can set the partition explicitly with the `partition.partition_datetime` property if you want. (The partition will be applied to all partitioned tables.)
:::

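As an illustration, a hedged sketch of pinning that partition in a recipe; the nesting of `partition.partition_datetime` under the source config is assumed from the property path above:

```yml
source:
  type: bigquery-beta
  config:
    partition:
      # Hypothetical timestamp; applied to all partitioned tables.
      partition_datetime: "2022-09-01 00:00:00"
```
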
### Working with multi-project GCP setups

Sometimes you may have multiple GCP projects, with one only giving you view access rights and another where you have view/modify rights.

The GCP roles with which this setup has been tested are as follows:

- Storage Project
  - BigQuery Data Viewer
  - BigQuery Metadata Viewer
  - Logs Viewer
  - Private Logs Viewer
- Compute Project
  - BigQuery Admin
  - BigQuery Data Editor
  - BigQuery Job User

If you are using `use_exported_bigquery_audit_metadata = True` then make sure you prefix the datasets in `bigquery_audit_metadata_datasets` with the storage project id.

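For example, a sketch with made-up project and dataset names:

```yml
source:
  type: bigquery-beta
  config:
    use_exported_bigquery_audit_metadata: true
    bigquery_audit_metadata_datasets:
      # Prefixed with the storage project id, not the compute project id.
      - my-storage-project.bigquery_audit_log
```
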
### Caveats

- For materialized views, lineage is dependent on logs being retained. If your GCP logging is retained for 30 days (the default) and 30 days have passed since the creation of the materialized view, we won't be able to get lineage for it.

@@ -57,6 +57,13 @@ class BigQueryV2Config(BigQueryConfig):

description="Number of table queried in batch when getting metadata. This is a low leve config propert which should be touched with care. This restriction needed because we query partitions system view which throws error if we try to touch too many tables.",
|
description="Number of table queried in batch when getting metadata. This is a low leve config propert which should be touched with care. This restriction needed because we query partitions system view which throws error if we try to touch too many tables.",
|
||||||
)
|
)
|
||||||
|
|
||||||
|
# The inheritance hierarchy is wonky here, but these options need modifications.
|
||||||
|
project_id: Optional[str] = Field(
|
||||||
|
default=None,
|
||||||
|
description="[deprecated] Use project_id_pattern instead.",
|
||||||
|
)
|
||||||
|
storage_project_id: None = Field(default=None, exclude=True)
|
||||||
|
|
||||||
@root_validator(pre=False)
|
@root_validator(pre=False)
|
||||||
def profile_default_settings(cls, values: Dict) -> Dict:
|
def profile_default_settings(cls, values: Dict) -> Dict:
|
||||||
# Extra default SQLAlchemy option for better connection pooling and threading.
|
# Extra default SQLAlchemy option for better connection pooling and threading.
|
||||||
|
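Given the deprecation above, a multi-project recipe selects projects with `project_id_pattern` rather than a single `project_id`. A sketch with made-up project names, assuming the usual allow/deny regex shape of DataHub filter patterns:

```yml
source:
  type: bigquery-beta
  config:
    project_id_pattern:
      allow:
        - "my-project-.*"
      deny:
        - "my-project-sandbox"
```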