
---
title: Run Sagemaker Connector using Airflow SDK
slug: /connectors/ml-model/sagemaker/airflow
---

# Run Sagemaker using the Airflow SDK
In this section, we provide guides and references to use the SageMaker connector.
Configure and schedule SageMaker metadata workflows with the Airflow SDK:
## Requirements
{%inlineCallout icon="description" bold="OpenMetadata 0.12 or later" href="/deployment"%} To deploy OpenMetadata, check the Deployment guides. {%/inlineCallout%}
To run the Ingestion via the UI, you'll need to use the OpenMetadata Ingestion Container, which comes shipped with custom Airflow plugins to handle the workflow deployment.
OpenMetadata retrieves information about models and tags associated with the models in the AWS account. The user must have the following policy set to ingest metadata from SageMaker:
```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "SageMakerPolicy",
            "Effect": "Allow",
            "Action": [
                "sagemaker:ListModels",
                "sagemaker:DescribeModel",
                "sagemaker:ListTags"
            ],
            "Resource": "*"
        }
    ]
}
```
For more information on SageMaker permissions, visit the AWS SageMaker official documentation.
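To confirm that the credentials you plan to use actually carry these permissions, you can run a quick check with boto3 before setting up the ingestion. This is a minimal sketch, assuming boto3 is installed and your credentials are resolvable through the standard AWS credential chain; the region below is an illustrative value.

```python
# Minimal permission check (sketch): exercises the three actions from the
# policy above. Assumes boto3 is installed and credentials are available via
# the standard AWS credential chain; the region is an illustrative value.
import boto3

session = boto3.Session(region_name="us-east-2")
sagemaker = session.client("sagemaker")

# sagemaker:ListModels
for model in sagemaker.list_models(MaxResults=10).get("Models", []):
    # sagemaker:DescribeModel
    detail = sagemaker.describe_model(ModelName=model["ModelName"])
    # sagemaker:ListTags
    tags = sagemaker.list_tags(ResourceArn=detail["ModelArn"]).get("Tags", [])
    print(model["ModelName"], [tag["Key"] for tag in tags])
```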
### Python Requirements
To run the Sagemaker ingestion, you will need to install:
```bash
pip3 install "openmetadata-ingestion[sagemaker]"
```
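If you want to verify the installation in the environment where the workflow will run, a short import check (a sketch, using the same module the ingestion code below relies on) is enough:

```python
# Quick import check (sketch): verifies that openmetadata-ingestion is
# importable in the environment that will run the workflow.
import metadata.ingestion.api.workflow  # part of openmetadata-ingestion

print("openmetadata-ingestion is importable")
```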
## Metadata Ingestion
All connectors are defined as JSON Schemas. Here you can find the structure to create a connection to SageMaker.
In order to create and run a Metadata Ingestion workflow, we will follow these steps to create a YAML configuration that can connect to the source, process the Entities if needed, and reach the OpenMetadata server.
The workflow is modeled around the following JSON Schema.
### 1. Define the YAML Config
This is a sample config for SageMaker:
{% codePreview %}
{% codeInfoContainer %}
#### Source Configuration - Service Connection
{% codeInfo srNumber=1 %}
- awsAccessKeyId & awsSecretAccessKey: When you interact with AWS, you specify your AWS security credentials to verify who you are and whether you have permission to access the resources that you are requesting. AWS uses the security credentials to authenticate and authorize your requests (docs).
Access keys consist of two parts: an access key ID (for example, `AKIAIOSFODNN7EXAMPLE`) and a secret access key (for example, `wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY`).
You must use both the access key ID and secret access key together to authenticate your requests.
You can find further information on how to manage your access keys here.
{% /codeInfo %}
{% codeInfo srNumber=2 %} awsSessionToken: If you are using temporary credentials to access your services, you will need to inform the AWS Access Key ID and AWS Secret Access Key. Also, these will include an AWS Session Token.
{% /codeInfo %}
{% codeInfo srNumber=3 %}
awsRegion: Each AWS Region is a separate geographic area in which AWS clusters data centers (docs).
As AWS can have instances in multiple regions, we need to know the region the service you want to reach belongs to.
Note that the AWS Region is the only required parameter when configuring a connection. When connecting to the services programmatically, there are different ways in which we can extract and use the rest of AWS configurations.
You can find further information about configuring your credentials here.
{% /codeInfo %}
{% codeInfo srNumber=4 %}
endPointURL: To connect programmatically to an AWS service, you use an endpoint. An endpoint is the URL of the entry point for an AWS web service. The AWS SDKs and the AWS Command Line Interface (AWS CLI) automatically use the default endpoint for each service in an AWS Region. But you can specify an alternate endpoint for your API requests.
Find more information on AWS service endpoints.
{% /codeInfo %}
{% codeInfo srNumber=5 %}
profileName: A named profile is a collection of settings and credentials that you can apply to an AWS CLI command. When you specify a profile to run a command, the settings and credentials are used to run that command. Multiple named profiles can be stored in the config and credentials files.
You can inform this field if you'd like to use a profile other than `default`.
You can find more information about Named profiles for the AWS CLI here.
{% /codeInfo %}
{% codeInfo srNumber=6 %}
assumeRoleArn: Typically, you use `AssumeRole` within your account or for cross-account access. In this field you'll set the ARN (Amazon Resource Name) of the policy of the other account.
A user who wants to access a role in a different account must also have permissions that are delegated from the account administrator. The administrator must attach a policy that allows the user to call `AssumeRole` for the ARN of the role in the other account.
This is a required field if you'd like to `AssumeRole`.
Find more information on AssumeRole.
{% /codeInfo %}
{% codeInfo srNumber=7 %}
assumeRoleSessionName: An identifier for the assumed role session. Use the role session name to uniquely identify a session when the same role is assumed by different principals or for different reasons.
By default, we'll use the name `OpenMetadataSession`.
Find more information about the Role Session Name.
{% /codeInfo %}
{% codeInfo srNumber=8 %}
assumeRoleSourceIdentity: The source identity specified by the principal that is calling the `AssumeRole` operation. You can use source identity information in AWS CloudTrail logs to determine who took actions with a role.
Find more information about Source Identity.
{% /codeInfo %}
#### Source Configuration - Source Config
{% codeInfo srNumber=9 %}
The sourceConfig is defined here:
markDeletedMlModels: Set the Mark Deleted ML Models toggle to flag ML models as soft-deleted if they are not present anymore in the source system.
{% /codeInfo %}
#### Sink Configuration
{% codeInfo srNumber=10 %}
To send the metadata to OpenMetadata, it needs to be specified as `type: metadata-rest`.
{% /codeInfo %}
{% partial file="workflow-config.md" /%}
{% /codeInfoContainer %}
{% codeBlock fileName="filename.yaml" %}
source:
  type: sagemaker
  serviceName: local_sagemaker
  serviceConnection:
    config:
      type: SageMaker
      awsConfig:
        awsAccessKeyId: KEY
        awsSecretAccessKey: SECRET
        # awsSessionToken: TOKEN
        awsRegion: us-east-2
        # endPointURL: https://athena.us-east-2.amazonaws.com/custom
        # profileName: profile
        # assumeRoleArn: "arn:partition:service:region:account:resource"
        # assumeRoleSessionName: session
        # assumeRoleSourceIdentity: identity
  sourceConfig:
    config:
      type: MlModelMetadata
      # markDeletedMlModels: true
sink:
  type: metadata-rest
  config: {}
{% partial file="workflow-config-yaml.md" /%}
{% /codeBlock %}
{% /codePreview %}
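Before wiring this configuration into Airflow, you can sanity-check it with a standalone run. The following is a minimal sketch that reuses the same `Workflow` class as the DAG below; it assumes the YAML above is saved as `filename.yaml` and that `openmetadata-ingestion[sagemaker]` is installed.

```python
# Standalone sanity check (sketch): run the ingestion once outside Airflow.
# Assumes the YAML above is saved as "filename.yaml" in the working directory.
import yaml

from metadata.ingestion.api.workflow import Workflow

with open("filename.yaml") as config_file:
    workflow_config = yaml.safe_load(config_file)

workflow = Workflow.create(workflow_config)
workflow.execute()
workflow.raise_from_status()  # raises if any step failed
workflow.print_status()
workflow.stop()
```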
### 2. Prepare the Ingestion DAG
Create a Python file in your Airflow DAGs directory with the following contents:
{% codePreview %}
{% codeInfoContainer %}
{% codeInfo srNumber=12 %}
Import necessary modules
The `Workflow` class that is being imported is a part of a metadata ingestion framework, which defines a process of getting data from different sources and ingesting it into a central metadata repository.
Here we are also importing all the basic requirements to parse YAMLs, handle dates and build our DAG.
{% /codeInfo %}
{% codeInfo srNumber=13 %}
Default arguments for all tasks in the Airflow DAG.
- The default arguments dictionary contains the default arguments for tasks in the DAG, including the owner's name, email address, number of retries, retry delay, and execution timeout.
{% /codeInfo %}
{% codeInfo srNumber=14 %}
- config: Specifies the config for the metadata ingestion as we prepared above.
{% /codeInfo %}
{% codeInfo srNumber=15 %}
- metadata_ingestion_workflow(): This code defines a function `metadata_ingestion_workflow()` that loads a YAML configuration, creates a `Workflow` object, executes the workflow, checks its status, prints the status to the console, and stops the workflow.
{% /codeInfo %}
{% codeInfo srNumber=16 %}
- DAG: creates a DAG using the Airflow framework; tune the DAG configuration to whatever fits your requirements.
- For more details on Airflow DAG creation, visit here.
{% /codeInfo %}
Note that from connector to connector, this recipe will always be the same.
By updating the YAML configuration, you will be able to extract metadata from different sources.
{% /codeInfoContainer %}
{% codeBlock fileName="filename.py" %}
import pathlib
import yaml
from datetime import timedelta
from airflow import DAG
from metadata.config.common import load_config_file
from metadata.ingestion.api.workflow import Workflow
from airflow.utils.dates import days_ago

try:
    from airflow.operators.python import PythonOperator
except ModuleNotFoundError:
    from airflow.operators.python_operator import PythonOperator

default_args = {
    "owner": "user_name",
    "email": ["username@org.com"],
    "email_on_failure": False,
    "retries": 3,
    "retry_delay": timedelta(minutes=5),
    "execution_timeout": timedelta(minutes=60)
}

config = """
<your YAML configuration>
"""


def metadata_ingestion_workflow():
    workflow_config = yaml.safe_load(config)
    workflow = Workflow.create(workflow_config)
    workflow.execute()
    workflow.raise_from_status()
    workflow.print_status()
    workflow.stop()


with DAG(
    "sample_data",
    default_args=default_args,
    description="An example DAG which runs an OpenMetadata ingestion workflow",
    start_date=days_ago(1),
    is_paused_upon_creation=False,
    schedule_interval="*/5 * * * *",
    catchup=False,
) as dag:
    ingest_task = PythonOperator(
        task_id="ingest_using_recipe",
        python_callable=metadata_ingestion_workflow,
    )
{% /codeBlock %}
{% /codePreview %}
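Once the file is placed in your Airflow DAGs directory, the scheduler will pick it up on its next parse. If you want to try the ingestion callable once without waiting for the scheduler, a quick local run is possible; this is a sketch that assumes the file above was saved as `filename.py` and is importable from your current directory.

```python
# Local one-off run (sketch): invoke the ingestion callable directly, without
# the Airflow scheduler. Assumes the DAG file above was saved as "filename.py"
# and is importable (e.g. run this from the same directory).
from filename import metadata_ingestion_workflow

metadata_ingestion_workflow()
```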