To execute metadata extraction, the AWS account should have enough access to fetch the required data. The **Bucket Policy** in AWS requires at least these permissions:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::<mybucket>",
        "arn:aws:s3:::<mybucket>/*"
      ]
    }
  ]
}
```
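If you want to sanity-check that a set of credentials actually carries these permissions before deploying the ingestion, a minimal sketch with `boto3` such as the one below can help. It assumes default credential resolution; the bucket name and object key are placeholders you should replace.

```python
import boto3
from botocore.exceptions import ClientError

# Placeholders: replace with your bucket and a known object key.
BUCKET = "mybucket"
KEY = "path/to/some/object"

s3 = boto3.client("s3")

try:
    # s3:ListBucket is needed to enumerate objects in the bucket.
    s3.list_objects_v2(Bucket=BUCKET, MaxKeys=1)
    print("s3:ListBucket OK")
except ClientError as e:
    print(f"s3:ListBucket failed: {e.response['Error']['Code']}")

try:
    # s3:GetObject is needed to read object contents (HeadObject uses it too).
    s3.head_object(Bucket=BUCKET, Key=KEY)
    print("s3:GetObject OK")
except ClientError as e:
    print(f"s3:GetObject failed: {e.response['Error']['Code']}")
```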
## Metadata Ingestion
{% stepsContainer %}
{% step srNumber=1 %}
{% stepDescription title="1. Visit the Services Page" %}
The first step is ingesting the metadata from your sources. Under
Settings, you will find a Services link to add an external source system to
OpenMetadata. Once a service is created, it can be used to configure
metadata, usage, and profiler workflows.
To visit the Services page, select Services from the Settings menu.
{% /stepDescription %}
{% /step %}
{% extraContent parentTagName="stepsContainer" %}
#### Connection Options
**S3 Permissions**
To execute metadata extraction, the AWS account should have enough access to fetch the required data. The **Bucket Policy** in AWS requires at least these permissions:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::<mybucket>",
        "arn:aws:s3:::<mybucket>/*"
      ]
    }
  ]
}
```
{% /extraContent %}
{% step srNumber=6 %}
{% stepDescription title="5.1 Datalake using AWS S3" %}
**AWS Access Key ID**
Enter your secure access key ID for your S3 connection. The specified key ID should be authorized to read all the buckets you want to include in the metadata ingestion workflow.
**AWS Secret Access Key**
Enter the Secret Access Key (the passcode key pair to the key ID from above).
**AWS Region**
Specify the region in which your S3 bucket is located.
Note: This setting is required even if you have configured a local AWS profile.
**AWS Session Token**
The AWS session token is an optional parameter. If you want, enter the details of your temporary session token.
**Endpoint URL (optional)**
The connector will automatically determine the S3 endpoint URL based on the AWS Region. You may specify a value to override this behavior.
**Database (Optional)**
The database of the data source is an optional parameter, if you would like to restrict the metadata reading to a single database. If left blank, OpenMetadata ingestion attempts to scan all the databases.
**Connection Options (Optional)**
Enter the details for any additional connection options that can be sent to S3 during the connection. These details must be added as Key-Value pairs.
**Connection Arguments (Optional)**
Enter the details for any additional connection arguments such as security or protocol configs that can be sent to S3 during the connection. These details must be added as Key-Value pairs.
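As a rough illustration of how these fields are used under the hood (a sketch, not OpenMetadata's actual code), the values map onto a standard `boto3` session and client. All values below are placeholders.

```python
import boto3

# Placeholder values mirroring the form fields above.
session = boto3.session.Session(
    aws_access_key_id="<AWS Access Key ID>",
    aws_secret_access_key="<AWS Secret Access Key>",
    aws_session_token=None,   # optional: temporary session token
    region_name="us-east-2",  # AWS Region (required)
)

# Endpoint URL is optional; passing None lets boto3 derive it from the region.
s3 = session.client("s3", endpoint_url=None)
```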
{% stepDescription title="5.2 Datalake using GCS" %}
**BUCKET NAME**
A bucket name in a data lake is a unique identifier used to organize and store data objects.
It's similar to a folder name, but it's used for object storage rather than file storage.
**PREFIX**
The prefix of a data source in a data lake refers to the first part of the data path that identifies the source or origin of the data. It's used to organize and categorize data within the data lake, and can help users easily locate and access the data they need.
**GCS Credentials**
We support two ways of authenticating to GCS:
1. Passing the raw credential values from your Google Cloud service account key file. This requires the following information, all of which can be found in that file:
   1. Credentials type, e.g. `service_account`.
   2. Project ID
   3. Private Key ID
   4. Private Key
   5. Client Email
   6. Client ID
   7. Auth URI, [https://accounts.google.com/o/oauth2/auth](https://accounts.google.com/o/oauth2/auth) by default
   8. Token URI, [https://oauth2.googleapis.com/token](https://oauth2.googleapis.com/token) by default
   9. Authentication Provider X509 Certificate URL, [https://www.googleapis.com/oauth2/v1/certs](https://www.googleapis.com/oauth2/v1/certs) by default
2. Passing a local file path that contains the credentials.
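To make the first option concrete, here is a minimal sketch (assuming the `google-auth` and `google-cloud-storage` packages; an illustration, not OpenMetadata's internal code) of turning raw credential values like these into an authenticated GCS client. The field values are placeholders.

```python
from google.oauth2 import service_account
from google.cloud import storage

# Placeholders mirroring the fields listed above; a real service account
# JSON key file contains exactly these keys.
credentials_info = {
    "type": "service_account",
    "project_id": "<Project ID>",
    "private_key_id": "<Private Key ID>",
    "private_key": "<Private Key>",  # the real PEM-encoded key goes here
    "client_email": "<Client Email>",
    "client_id": "<Client ID>",
    "auth_uri": "https://accounts.google.com/o/oauth2/auth",
    "token_uri": "https://oauth2.googleapis.com/token",
    "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
}

credentials = service_account.Credentials.from_service_account_info(credentials_info)
client = storage.Client(project=credentials_info["project_id"], credentials=credentials)
```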
{% stepDescription title="5.3 Datalake using Azure" %}
- **Azure Credentials**
- **Client ID** : Client ID of the data storage account
- **Client Secret** : Client Secret of the account
- **Tenant ID** : Tenant ID under which the data storage account falls
- **Account Name** : Account Name of the data Storage
- **Required Roles**
Please make sure the following roles are associated with the data storage account:
  - `Storage Blob Data Contributor`
  - `Storage Queue Data Contributor`
The current approach for authentication is based on `app registration`; reach out to us on [slack](https://slack.open-metadata.org/) if you find the need for another auth system.
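For reference, a minimal sketch (assuming the `azure-identity` and `azure-storage-blob` packages; an illustration, not OpenMetadata's internal code) of how these four values authenticate against the storage account. All values are placeholders.

```python
from azure.identity import ClientSecretCredential
from azure.storage.blob import BlobServiceClient

# Placeholders for the Azure Credentials fields described above.
TENANT_ID = "<Tenant ID>"
CLIENT_ID = "<Client ID>"
CLIENT_SECRET = "<Client Secret>"
ACCOUNT_NAME = "<Account Name>"

# App-registration (client secret) authentication.
credential = ClientSecretCredential(
    tenant_id=TENANT_ID,
    client_id=CLIENT_ID,
    client_secret=CLIENT_SECRET,
)

# The account URL is derived from the storage account name.
service = BlobServiceClient(
    account_url=f"https://{ACCOUNT_NAME}.blob.core.windows.net",
    credential=credential,
)
```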
{% /stepDescription %}
{% /step %}
{% extraContent parentTagName="stepsContainer" %}
- **Name**: This field refers to the name of the ingestion pipeline; you can customize the name or use the generated one.
- **Database Filter Pattern (Optional)**: Use the database filter patterns to control whether or not to include databases as part of metadata ingestion (see the filtering sketch after this list).
- **Include**: Explicitly include databases by adding a list of comma-separated regular expressions to the Include field. OpenMetadata will include all databases with names matching one or more of the supplied regular expressions. All other databases will be excluded.
- **Exclude**: Explicitly exclude databases by adding a list of comma-separated regular expressions to the Exclude field. OpenMetadata will exclude all databases with names matching one or more of the supplied regular expressions. All other databases will be included.
- **Schema Filter Pattern (Optional)**: Use the schema filter patterns to control whether or not to include schemas as part of metadata ingestion.
- **Include**: Explicitly include schemas by adding a list of comma-separated regular expressions to the Include field. OpenMetadata will include all schemas with names matching one or more of the supplied regular expressions. All other schemas will be excluded.
- **Exclude**: Explicitly exclude schemas by adding a list of comma-separated regular expressions to the Exclude field. OpenMetadata will exclude all schemas with names matching one or more of the supplied regular expressions. All other schemas will be included.
- **Table Filter Pattern (Optional)**: Use the table filter patterns to control whether or not to include tables as part of metadata ingestion.
- **Include**: Explicitly include tables by adding a list of comma-separated regular expressions to the Include field. OpenMetadata will include all tables with names matching one or more of the supplied regular expressions. All other tables will be excluded.
- **Exclude**: Explicitly exclude tables by adding a list of comma-separated regular expressions to the Exclude field. OpenMetadata will exclude all tables with names matching one or more of the supplied regular expressions. All other tables will be included.
- **Include views (toggle)**: Set the Include views toggle to control whether or not to include views as part of metadata ingestion.
- **Include tags (toggle)**: Set the Include tags toggle to control whether or not to include tags as part of metadata ingestion.
- **Enable Debug Log (toggle)**: Set the Enable Debug Log toggle to set the default log level to debug; these logs can be viewed later in Airflow.
- **Mark Deleted Tables (toggle)**: Set the Mark Deleted Tables toggle to flag tables as soft-deleted if they are not present anymore in the source system.
- **Mark Deleted Tables from Filter Only (toggle)**: Set the Mark Deleted Tables from Filter Only toggle to flag tables as soft-deleted only if they are no longer present within the filtered schema or database. This flag is useful when you have more than one ingestion pipeline. For example, if you run a separate pipeline for each schema, enabling this prevents one pipeline from marking tables that belong to the other schemas as deleted.
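To illustrate how include/exclude regular expressions typically interact, here is a behavioral sketch using Python's `re` module. The function names are hypothetical and this is not OpenMetadata's exact implementation; it only shows the general semantics described above.

```python
import re

def matches_any(name, patterns):
    """True if the name matches at least one of the regular expressions."""
    return any(re.match(p, name) for p in patterns)

def keep(name, include=None, exclude=None):
    # In this sketch, exclude wins over include, and an empty
    # include list means "include everything".
    if exclude and matches_any(name, exclude):
        return False
    if include:
        return matches_any(name, include)
    return True

# Example: keep schemas starting with "sales_" but drop anything ending in "_tmp".
for schema in ["sales_eu", "sales_tmp", "marketing"]:
    print(schema, keep(schema, include=[r"sales_.*"], exclude=[r".*_tmp"]))
# -> sales_eu True / sales_tmp False / marketing False
```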
{% /extraContent %}
{% step srNumber=11 %}
{% stepDescription title="8. Schedule the Ingestion and Deploy" %}
Scheduling can be set up at an hourly, daily, or weekly cadence. The
timezone is in UTC. Select a Start Date to schedule for ingestion. It is
optional to add an End Date.
Review your configuration settings. If they match what you intended,
click Deploy to create the service and schedule metadata ingestion.
If something doesn't look right, click the Back button to return to the
appropriate step and change the settings as needed.
After configuring the workflow, you can click on Deploy to create the pipeline.