---
title: Kinesis
slug: /connectors/messaging/kinesis
---

# Kinesis

In this section, we provide guides and references to use the Kinesis connector.

Configure and schedule Kinesis metadata workflows from the OpenMetadata UI:

- [Requirements](#requirements)
- [Metadata Ingestion](#metadata-ingestion)

If you don't want to use the OpenMetadata Ingestion container to configure the workflows via the UI, you can check
the following docs to connect using the Airflow SDK or the CLI.

{% tilesContainer %}

{% tile
    title="Ingest with Airflow"
    description="Configure the ingestion using Airflow SDK"
    link="/connectors/messaging/kinesis/airflow"
 / %}
{% tile
    title="Ingest with the CLI"
    description="Run a one-time ingestion using the metadata CLI"
    link="/connectors/messaging/kinesis/cli"
 / %}

{% /tilesContainer %}

## Requirements

{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%}
To deploy OpenMetadata, check the Deployment guides.
{%/inlineCallout%}

To run the Ingestion via the UI you'll need to use the OpenMetadata Ingestion Container, which comes shipped with
custom Airflow plugins to handle the workflow deployment.

The Kinesis connector ingests metadata using the Kinesis boto3 client.

OpenMetadata retrieves information about streams and sample data from the streams in the AWS account.
The user must have the following policy set to access the metadata from Kinesis.

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "KinesisPolicy",
            "Effect": "Allow",
            "Action": [
                "kinesis:ListStreams",
                "kinesis:DescribeStreamSummary",
                "kinesis:ListShards",
                "kinesis:GetShardIterator",
                "kinesis:GetRecords"
            ],
            "Resource": "*"
        }
    ]
}
```

For more information on Kinesis permissions visit the [AWS Kinesis official documentation](https://docs.aws.amazon.com/streams/latest/dev/controlling-access.html).
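
The snippet below is a quick, standalone way to confirm that the credentials you plan to use carry the permissions above. It is not part of OpenMetadata; it simply calls the same Kinesis APIs through boto3, and the profile and region names are placeholders you should adjust.

```python
# Standalone permission check using the same boto3 Kinesis client the connector relies on.
# The profile and region below are placeholders -- replace them with your own values.
import boto3

session = boto3.Session(profile_name="default", region_name="us-east-2")
kinesis = session.client("kinesis")

# kinesis:ListStreams
stream_names = kinesis.list_streams(Limit=10)["StreamNames"]
print("Streams:", stream_names)

for name in stream_names:
    # kinesis:DescribeStreamSummary
    summary = kinesis.describe_stream_summary(StreamName=name)["StreamDescriptionSummary"]
    print(name, summary["StreamStatus"], "open shards:", summary["OpenShardCount"])

    # kinesis:ListShards, kinesis:GetShardIterator, kinesis:GetRecords (used for sample data)
    shards = kinesis.list_shards(StreamName=name)["Shards"]
    if shards:
        iterator = kinesis.get_shard_iterator(
            StreamName=name,
            ShardId=shards[0]["ShardId"],
            ShardIteratorType="TRIM_HORIZON",
        )["ShardIterator"]
        records = kinesis.get_records(ShardIterator=iterator, Limit=5)["Records"]
        print("  sample records fetched:", len(records))
```

If any of these calls fails with an access-denied error, revisit the policy attached to the user or role before configuring the connector.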

## Metadata Ingestion

{% stepsContainer %}

{% step srNumber=1 %}

{% stepDescription title="1. Visit the Services Page" %}

The first step is ingesting the metadata from your sources. Under
Settings, you will find a Services link to connect an external source
system to OpenMetadata. Once a service is created, it can be used to
configure metadata, usage, and profiler workflows.

To visit the Services page, select Services from the Settings menu.

{% /stepDescription %}

{% stepVisualInfo %}

{% image
  src="/images/v1.0.0/openmetadata/connectors/visit-services.png"
  alt="Visit Services Page"
  caption="Find Messaging option on left panel of the settings page" /%}

{% /stepVisualInfo %}

{% /step %}

{% step srNumber=2 %}

{% stepDescription title="2. Create a New Service" %}

Click on the 'Add New Service' button to start the Service creation.

{% /stepDescription %}

{% stepVisualInfo %}

{% image
  src="/images/v1.0.0/openmetadata/connectors/create-service.png"
  alt="Create a new service"
  caption="Add a new Service from the Messaging Services page" /%}

{% /stepVisualInfo %}

{% /step %}

{% step srNumber=3 %}

{% stepDescription title="3. Select the Service Type" %}

Select Kinesis as the service type and click Next.

{% /stepDescription %}

{% stepVisualInfo %}

{% image
  src="/images/v1.0.0/openmetadata/connectors/kinesis/select-service.png"
  alt="Select Service"
  caption="Select your service from the list" /%}

{% /stepVisualInfo %}

{% /step %}

{% step srNumber=4 %}

{% stepDescription title="4. Name and Describe your Service" %}

Provide a name and description for your service as illustrated below.

#### Service Name

OpenMetadata uniquely identifies services by their Service Name. Provide
a name that distinguishes your deployment from other services, including
the other Kinesis services that you might be ingesting metadata
from.

{% /stepDescription %}

{% stepVisualInfo %}

{% image
  src="/images/v1.0.0/openmetadata/connectors/kinesis/add-new-service.png"
  alt="Add New Service"
  caption="Provide a Name and description for your Service" /%}

{% /stepVisualInfo %}

{% /step %}

{% step srNumber=5 %}

{% stepDescription title="5. Configure the Service Connection" %}

In this step, we will configure the connection settings required for
this connector. Please follow the instructions below to ensure that
you've configured the connector to read from your Kinesis service as
desired.

{% /stepDescription %}

{% stepVisualInfo %}

{% image
  src="/images/v1.0.0/openmetadata/connectors/kinesis/service-connection.png"
  alt="Configure service connection"
  caption="Configure the service connection by filling the form" /%}

{% /stepVisualInfo %}

{% /step %}

{% extraContent parentTagName="stepsContainer" %}

#### Connection Options

- **AWS Access Key ID** & **AWS Secret Access Key**: When you interact with AWS, you specify your AWS security credentials to verify who you are and whether you have
permission to access the resources that you are requesting. AWS uses the security credentials to authenticate and
authorize your requests ([docs](https://docs.aws.amazon.com/IAM/latest/UserGuide/security-creds.html)).

Access keys consist of two parts: an **access key ID** (for example, `AKIAIOSFODNN7EXAMPLE`) and a **secret access key** (for example, `wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY`).

You must use both the access key ID and secret access key together to authenticate your requests.

You can find further information on how to manage your access keys [here](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html).

- **AWS Region**: Each AWS Region is a separate geographic area in which AWS clusters data centers ([docs](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.RegionsAndAvailabilityZones.html)).

As AWS can have instances in multiple regions, we need to know the region the service you want to reach belongs to.

Note that the AWS Region is the only required parameter when configuring a connection. When connecting to the
services programmatically, there are different ways in which we can extract and use the rest of the AWS configuration.

You can find further information about configuring your credentials [here](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html#configuring-credentials).

- **AWS Session Token (optional)**: If you are using temporary credentials to access your services, you will need to provide the AWS Access Key ID
and AWS Secret Access Key, along with an AWS Session Token.

You can find more information on [Using temporary credentials with AWS resources](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_use-resources.html).

- **Endpoint URL (optional)**: To connect programmatically to an AWS service, you use an endpoint. An *endpoint* is the URL of the
entry point for an AWS web service. The AWS SDKs and the AWS Command Line Interface (AWS CLI) automatically use the
default endpoint for each service in an AWS Region, but you can specify an alternate endpoint for your API requests.

Find more information on [AWS service endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html).

- **Profile Name**: A named profile is a collection of settings and credentials that you can apply to an AWS CLI command.
When you specify a profile to run a command, the settings and credentials are used to run that command.
Multiple named profiles can be stored in the config and credentials files.

Set this field if you'd like to use a profile other than `default`.

Find more information about [Named profiles for the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-profiles.html).

- **Assume Role Arn**: Typically, you use `AssumeRole` within your account or for cross-account access. In this field you'll set the
`ARN` (Amazon Resource Name) of the role in the other account.

A user who wants to access a role in a different account must also have permissions that are delegated from the account
administrator. The administrator must attach a policy that allows the user to call `AssumeRole` for the `ARN` of the role in the other account.

This is a required field if you'd like to `AssumeRole`.

Find more information on [AssumeRole](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html).

- **Assume Role Session Name**: An identifier for the assumed role session. Use the role session name to uniquely identify a session when the same role
is assumed by different principals or for different reasons.

By default, we'll use the name `OpenMetadataSession`.

Find more information about the [Role Session Name](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html#:~:text=An%20identifier%20for%20the%20assumed%20role%20session.).

- **Assume Role Source Identity**: The source identity specified by the principal that is calling the `AssumeRole` operation. You can use source identity
information in AWS CloudTrail logs to determine who took actions with a role.

Find more information about [Source Identity](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html#:~:text=Required%3A%20No-,SourceIdentity,-The%20source%20identity).
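
If it helps to see how the options above are consumed, here is a rough boto3 sketch. It is illustrative only, not OpenMetadata's internal code: the role ARN is a placeholder, the sample keys are the AWS documentation examples quoted above, and you would normally supply only the subset of options that applies to your setup.

```python
# Illustrative mapping of the connection options onto boto3 -- not the connector's actual code.
import boto3

# Access keys, session token, region, profile and endpoint:
session = boto3.Session(
    aws_access_key_id="AKIAIOSFODNN7EXAMPLE",                          # AWS Access Key ID
    aws_secret_access_key="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",  # AWS Secret Access Key
    aws_session_token=None,       # AWS Session Token (only for temporary credentials)
    region_name="us-east-2",      # AWS Region (the only required option)
    # profile_name="my-profile",  # Profile Name (use instead of explicit keys)
)
kinesis = session.client("kinesis", endpoint_url=None)  # Endpoint URL (optional override)

# Cross-account access via AssumeRole:
sts = session.client("sts")
assumed = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/ExampleRole",  # Assume Role Arn (placeholder)
    RoleSessionName="OpenMetadataSession",                 # Assume Role Session Name
    SourceIdentity="openmetadata",                         # Assume Role Source Identity (optional)
)["Credentials"]

kinesis = boto3.client(
    "kinesis",
    aws_access_key_id=assumed["AccessKeyId"],
    aws_secret_access_key=assumed["SecretAccessKey"],
    aws_session_token=assumed["SessionToken"],
    region_name="us-east-2",
)
```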

{% /extraContent %}

{% step srNumber=6 %}

{% stepDescription title="6. Test the Connection" %}

Once the credentials have been added, click on `Test Connection` and Save
the changes.

{% /stepDescription %}

{% stepVisualInfo %}

{% image
  src="/images/v1.0.0/openmetadata/connectors/test-connection.png"
  alt="Test Connection"
  caption="Test the connection and save the Service" /%}

{% /stepVisualInfo %}

{% /step %}

{% step srNumber=7 %}

{% stepDescription title="7. Configure Metadata Ingestion" %}

In this step, we will configure the metadata ingestion pipeline.
Please follow the instructions below.

{% /stepDescription %}

{% stepVisualInfo %}

{% image
  src="/images/v1.0.0/openmetadata/connectors/configure-metadata-ingestion-dashboard.png"
  alt="Configure Metadata Ingestion"
  caption="Configure Metadata Ingestion Page" /%}

{% /stepVisualInfo %}

{% /step %}

{% extraContent parentTagName="stepsContainer" %}

#### Metadata Ingestion Options

- **Name**: This field refers to the name of the ingestion pipeline; you can customize the name or use the generated one.
- **Topic Filter Pattern (Optional)**: Use the topic filter patterns to control whether or not to include topics as part of metadata ingestion.
- **Include**: Explicitly include topics by adding a list of comma-separated regular expressions to the Include field. OpenMetadata will include all topics with names matching one or more of the supplied regular expressions. All other topics will be excluded.
- **Exclude**: Explicitly exclude topics by adding a list of comma-separated regular expressions to the Exclude field. OpenMetadata will exclude all topics with names matching one or more of the supplied regular expressions. All other topics will be included.
- **Ingest Sample Data (toggle)**: Set this toggle to ingest sample data from the topics.
- **Enable Debug Log (toggle)**: Set the Enable Debug Log toggle to set the default log level to debug; these logs can be viewed later in Airflow.
- **Mark Deleted Topics (toggle)**: Set the Mark Deleted Topics toggle to flag topics as soft-deleted if they are no longer present in the source system.
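
To make the include and exclude behaviour concrete, here is a small illustrative sketch. It is a simplified re-implementation of the filtering logic for explanation purposes, not the exact code OpenMetadata runs, and the stream names are made up.

```python
# Simplified illustration of how include/exclude regular expressions filter topics.
import re

def filter_topics(topics, includes=None, excludes=None):
    """Keep a topic if it matches any include pattern (when includes are given)
    and does not match any exclude pattern."""
    kept = []
    for topic in topics:
        if includes and not any(re.match(pattern, topic) for pattern in includes):
            continue  # not matched by any include pattern
        if excludes and any(re.match(pattern, topic) for pattern in excludes):
            continue  # matched by an exclude pattern
        kept.append(topic)
    return kept

topics = ["orders-stream", "orders-audit", "clickstream", "tmp-debug"]  # hypothetical names
print(filter_topics(topics, includes=["orders.*"]))          # ['orders-stream', 'orders-audit']
print(filter_topics(topics, excludes=["tmp.*", ".*audit"]))  # ['orders-stream', 'clickstream']
```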

{% /extraContent %}

{% step srNumber=8 %}

{% stepDescription title="8. Schedule the Ingestion and Deploy" %}

Scheduling can be set up at an hourly, daily, or weekly cadence. The
timezone is in UTC. Select a Start Date to schedule for ingestion. It is
optional to add an End Date.

Review your configuration settings. If they match what you intended,
click Deploy to create the service and schedule metadata ingestion.

If something doesn't look right, click the Back button to return to the
appropriate step and change the settings as needed.

After configuring the workflow, you can click on Deploy to create the
pipeline.

{% /stepDescription %}

{% stepVisualInfo %}

{% image
  src="/images/v1.0.0/openmetadata/connectors/schedule.png"
  alt="Schedule the Workflow"
  caption="Schedule the Ingestion Pipeline and Deploy" /%}

{% /stepVisualInfo %}

{% /step %}

{% step srNumber=9 %}

{% stepDescription title="9. View the Ingestion Pipeline" %}

Once the workflow has been successfully deployed, you can view the
Ingestion Pipeline running from the Service Page.

{% /stepDescription %}

{% stepVisualInfo %}

{% image
  src="/images/v1.0.0/openmetadata/connectors/view-ingestion-pipeline.png"
  alt="View Ingestion Pipeline"
  caption="View the Ingestion Pipeline from the Service Page" /%}

{% /stepVisualInfo %}

{% /step %}

{% /stepsContainer %}

## Troubleshooting

### Workflow Deployment Error

If there were any errors during the workflow deployment process, the
Ingestion Pipeline Entity will still be created, but no workflow will be
present in the Ingestion container.

- You can then edit the Ingestion Pipeline and Deploy it again.

- From the Connection tab, you can also Edit the Service if needed.

{% image
  src="/images/v1.0.0/openmetadata/connectors/workflow-deployment-error.png"
  alt="Workflow Deployment Error"
  caption="Edit and Deploy the Ingestion Pipeline" /%}