Set up Openflow Connector for Amazon Kinesis Data Streams

Note

This connector is subject to the Snowflake Connector Terms.

This topic describes how to set up the Openflow Connector for Amazon Kinesis Data Streams.

The Openflow Connector for Amazon Kinesis Data Streams ingests JSON messages from Kinesis streams into Snowflake tables, with support for schema evolution.

Set up the Openflow Connector for Kinesis

Prerequisites

  1. Review Openflow Connector for Amazon Kinesis Data Streams.

  2. Ensure that you have completed Set up Openflow - BYOC or Set up Openflow - Snowflake Deployments.

  3. If you are using Openflow - Snowflake Deployments, ensure that you have reviewed configuring required domains and have granted access to the required domains for the Kinesis connector.

Set up IAM roles and policies in AWS

As an AWS administrator, perform the following actions in your AWS account:

  1. Create an AWS IAM user or role that Openflow will use to access the Kinesis data stream. For more information, see Creating IAM users (https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html) in the AWS documentation.

  2. Ensure that the AWS user has configured Access Key credentials (https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html).

  3. Grant the AWS user the following IAM permissions:

    | Service | Actions | Resources (ARNs) | Purpose |
    |---|---|---|---|
    | Amazon Kinesis Data Streams | kinesis:DescribeStream, kinesis:DescribeStreamConsumer, kinesis:GetRecords, kinesis:GetShardIterator, kinesis:ListShards, kinesis:RegisterStreamConsumer | arn:aws:kinesis:${REGION}:${ACCOUNT_ID}:stream/${STREAM_NAME} | Discovers shards, reads records through shared-throughput polling, resolves the stream ARN, registers an Enhanced Fan-Out consumer, and polls consumer status during registration. |
    | Amazon Kinesis Data Streams | kinesis:DeregisterStreamConsumer, kinesis:DescribeStreamConsumer, kinesis:SubscribeToShard | arn:aws:kinesis:${REGION}:${ACCOUNT_ID}:stream/${STREAM_NAME}/consumer/* | Describes, subscribes to, and deregisters Enhanced Fan-Out consumers by consumer ARN. |
    | Amazon DynamoDB | dynamodb:CreateTable, dynamodb:DeleteTable, dynamodb:DescribeTable, dynamodb:GetItem, dynamodb:PutItem, dynamodb:Query, dynamodb:Scan, dynamodb:UpdateItem | arn:aws:dynamodb:${REGION}:${ACCOUNT_ID}:table/${APPLICATION_NAME}, arn:aws:dynamodb:${REGION}:${ACCOUNT_ID}:table/${APPLICATION_NAME}_migration | Creates and manages the checkpoint/lease table (shard leases, node heartbeats, checkpoints) and a temporary migration table used during a one-time migration from legacy checkpoint tables. |

    Example IAM policy:

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "KinesisStreamAccess",
                "Effect": "Allow",
                "Action": [
                    "kinesis:DescribeStream",
                    "kinesis:DescribeStreamConsumer",
                    "kinesis:GetRecords",
                    "kinesis:GetShardIterator",
                    "kinesis:ListShards",
                    "kinesis:RegisterStreamConsumer"
                ],
                "Resource": "arn:aws:kinesis:${REGION}:${ACCOUNT_ID}:stream/${STREAM_NAME}"
            },
            {
                "Sid": "KinesisConsumerAccess",
                "Effect": "Allow",
                "Action": [
                    "kinesis:DeregisterStreamConsumer",
                    "kinesis:DescribeStreamConsumer",
                    "kinesis:SubscribeToShard"
                ],
                "Resource": "arn:aws:kinesis:${REGION}:${ACCOUNT_ID}:stream/${STREAM_NAME}/consumer/*"
            },
            {
                "Sid": "DynamoDBTableAccess",
                "Effect": "Allow",
                "Action": [
                    "dynamodb:CreateTable",
                    "dynamodb:DeleteTable",
                    "dynamodb:DescribeTable",
                    "dynamodb:GetItem",
                    "dynamodb:PutItem",
                    "dynamodb:Query",
                    "dynamodb:Scan",
                    "dynamodb:UpdateItem"
                ],
                "Resource": [
                    "arn:aws:dynamodb:${REGION}:${ACCOUNT_ID}:table/${APPLICATION_NAME}",
                    "arn:aws:dynamodb:${REGION}:${ACCOUNT_ID}:table/${APPLICATION_NAME}_migration"
                ]
            }
        ]
    }
    

    Before using the example policy, replace the following placeholders:

    | Placeholder | Description |
    |---|---|
    | ${REGION} | Your AWS region (for example, us-east-1) |
    | ${ACCOUNT_ID} | Your AWS account ID (for example, 123456789012) |
    | ${STREAM_NAME} | The value of the AWS Kinesis Stream Name connector parameter |
    | ${APPLICATION_NAME} | The value of the AWS Kinesis Application Name connector parameter. Used as the DynamoDB checkpoint table name and as the Enhanced Fan-Out registered consumer name. |

    Note

    • The ${APPLICATION_NAME}_migration table is a temporary DynamoDB table created only during a one-time migration from legacy checkpoint tables to the new schema. It's deleted automatically when migration completes. If your deployment has never used the legacy KCL-based connector, you can omit the migration table ARN from the policy.

    • The dynamodb:DeleteTable action is used during the migration process and can be removed from the policy after migration is confirmed complete.

    • The kinesis:DeregisterStreamConsumer action is invoked when the processor is removed from the canvas. If the IAM principal doesn't have this permission, the consumer must be deregistered manually through the AWS console or CLI.
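The placeholder substitution above can be scripted. The following Python sketch renders one statement of the example policy from a template; the region, account ID, and stream name are hypothetical values for illustration:

```python
import json
from string import Template

# Hypothetical values: replace with your own region, account ID, and the
# same stream name you pass to the connector parameters.
values = {
    "REGION": "us-east-1",
    "ACCOUNT_ID": "123456789012",
    "STREAM_NAME": "orders-stream",
}

# One statement from the example policy; the ${...} placeholders match the
# table above, so string.Template can substitute them directly.
policy_template = Template(json.dumps({
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "KinesisStreamAccess",
        "Effect": "Allow",
        "Action": ["kinesis:DescribeStream", "kinesis:GetRecords"],
        "Resource": "arn:aws:kinesis:${REGION}:${ACCOUNT_ID}:stream/${STREAM_NAME}",
    }],
}))

policy = json.loads(policy_template.substitute(values))
print(policy["Statement"][0]["Resource"])
# arn:aws:kinesis:us-east-1:123456789012:stream/orders-stream
```

The full example policy substitutes the same four placeholders; the rendered document can then be attached with your usual IAM tooling.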

Set up your Snowflake account

As a Snowflake account administrator, perform the following tasks:

  1. Create a new Snowflake service user with the TYPE property set to SERVICE.

  2. Create a new role or use an existing role and grant the database privileges.

    The connector requires the user to create the destination table. Make sure the user has the required privileges for managing Snowflake objects:

    | Object | Privilege | Notes |
    |---|---|---|
    | Database | USAGE | |
    | Schema | USAGE, OWNERSHIP | Required for the connector to ingest data into a table. |

    Snowflake recommends creating a separate user and role for each Kinesis stream for better access control.

    You can use the following script to create and configure a custom role (requires the SECURITYADMIN role or equivalent):

    USE ROLE securityadmin;
    
    CREATE ROLE openflow_kinesis_connector_role_1;
    GRANT USAGE ON DATABASE kinesis_db TO ROLE openflow_kinesis_connector_role_1;
    GRANT USAGE ON SCHEMA kinesis_schema TO ROLE openflow_kinesis_connector_role_1;
    

    Note

    Privileges must be granted directly to the connector role and can't be inherited.

  3. Configure the destination table

    Snowflake highly recommends using server-side schema evolution for schema changes and an error table for DML error logging.

    The following example shows how to create a table and grant the OWNERSHIP privilege on it.

    USE ROLE openflow_kinesis_connector_role_1;
    
    CREATE TABLE kinesis_db.kinesis_schema.<DESTINATION_TABLE_NAME> (
      kinesisMetadata object
    )
    ENABLE_SCHEMA_EVOLUTION = TRUE
    ERROR_LOGGING = TRUE;
    
    USE ROLE securityadmin;
    GRANT OWNERSHIP ON TABLE kinesis_db.kinesis_schema.<DESTINATION_TABLE_NAME> TO ROLE openflow_kinesis_connector_role_1;
    

    The connector supports automatic schema detection and evolution. The structure of tables in Snowflake is defined and evolved automatically to support the structure of new data loaded by the connector. The connector automatically maps the record content's first-level keys to table columns, matching by name (case-insensitive).

    With schema evolution enabled, Snowflake can automatically expand the destination table by adding new columns that are detected in the incoming stream and dropping NOT NULL constraints to accommodate new data patterns. For more information, see Table schema evolution.

    If ENABLE_SCHEMA_EVOLUTION is not enabled, you must define the schema manually by extending the table definition. The connector tries to match the record content's first-level keys to the table columns by name; keys from the JSON that do not match any table column are ignored.

  4. (Optional) Configure a secrets manager

    Snowflake strongly recommends this step. Configure a secrets manager supported by Openflow (for example, AWS, Azure, or HashiCorp), and store your public and private keys in the secret store.

    1. Once the secrets manager is configured, determine how you will authenticate to it. On AWS, Snowflake recommends using the EC2 instance role associated with Openflow, because then no other secrets have to be persisted.

    2. In the Openflow canvas, open the hamburger menu in the upper right and configure a Parameter Provider associated with this secrets manager. Navigate to Controller Settings » Parameter Provider and then fetch your parameter values.

    3. At this point, all credentials can be referenced with their associated parameter paths, and no sensitive values need to be persisted within Openflow.

  5. Grant access to users

    Grant the role created in step 2 to any other Snowflake users who require access to the raw data ingested by the connector (for example, for custom processing in Snowflake).
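The schema matching and evolution behavior described in step 3 can be modeled in a few lines. This is an illustrative sketch of the documented rules (case-insensitive matching of first-level keys, new columns added only when schema evolution is enabled), not the connector's actual implementation:

```python
def route_record(record: dict, columns: set, evolve: bool) -> dict:
    """Return the column -> value mapping for one JSON record."""
    by_upper = {c.upper(): c for c in columns}
    row = {}
    for key, value in record.items():
        target = by_upper.get(key.upper())
        if target is None and evolve:
            # With ENABLE_SCHEMA_EVOLUTION = TRUE, a new column is added
            # for the unmatched key.
            target = key.upper()
            columns.add(target)
            by_upper[target] = target
        if target is not None:
            row[target] = value
        # Without schema evolution, unmatched keys are simply ignored.
    return row

cols = {"ORDER_ID", "CUSTOMER_NAME"}
row = route_record({"order_id": 1, "customer_name": "John", "isPaid": True},
                   cols, evolve=True)
print(sorted(cols))   # ['CUSTOMER_NAME', 'ISPAID', 'ORDER_ID']
print(row["ISPAID"])  # True
```

With evolve=False the unmatched isPaid key would be dropped and the column set would stay unchanged, mirroring the static-schema behavior described above.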

Set up the connector

As a data engineer, perform the following tasks to install and configure the connector:

安装连接器

  1. Navigate to the Openflow overview page. In the Featured connectors section, select View more connectors.

  2. On the Openflow connectors page, find the Openflow connector for Amazon Kinesis Data Streams and select Add to runtime.

  3. In the Select runtime dialog, select your runtime from the Available runtimes drop-down list and click Add.

    备注

    Before you install the connector, ensure that you have created a database, schema, and a table in Snowflake for the connector to store ingested data.

  4. Authenticate to the deployment with your Snowflake account credentials and select Allow when prompted to allow the runtime application to access your Snowflake account. The connector installation process takes a few minutes to complete.

  5. Authenticate to the runtime with your Snowflake account credentials.

    The Openflow canvas appears, with the connector process group added to it.

Configure the connector

  1. If needed, customize the connector configuration before configuring the built-in parameters.

  2. Populate the process group parameters

    1. Right-click the imported process group and select Parameters.

    2. Fill out the required parameter values.

Common parameters

| Parameter | Description | Required |
|---|---|---|
| AWS Access Key ID | The AWS access key ID used to connect to your Kinesis stream and DynamoDB. | |
| AWS Kinesis Region | The AWS Region to connect to. Use the regular AWS Region format, for example: us-west-2, ap-southeast-1, eu-west-1. See the AWS Regions (https://docs.aws.amazon.com/general/latest/gr/rande.html#kinesis_region) page. | |
| AWS Secret Access Key | The AWS secret access key used to connect to your Kinesis stream and DynamoDB. | |
| AWS Kinesis Application Name | The name used as the DynamoDB table name for tracking the application's progress on Kinesis stream consumption. | |
| AWS Kinesis Consumer Type | The strategy used to read records from a Kinesis stream. Must be one of the following values: SHARED_THROUGHPUT, ENHANCED_FAN_OUT. For more information, see Differences between shared throughput consumer and enhanced fan-out consumer (https://docs.aws.amazon.com/streams/latest/dev/enhanced-consumers.html). | |
| AWS Kinesis Initial Stream Position | The initial stream position from which data replication starts. Takes effect only during the initial start for a given AWS Kinesis Application Name. Possible values: LATEST (latest stored record), TRIM_HORIZON (earliest stored record). | |
| AWS Kinesis Stream Name | The name of the AWS Kinesis stream to consume data from. | |
| Snowflake Destination Database | The database where data will be persisted. It must already exist in Snowflake. The name is case-sensitive; for unquoted identifiers, provide the name in uppercase. | |
| Snowflake Destination Schema | The schema where data will be persisted. It must already exist in Snowflake. The name is case-sensitive; for unquoted identifiers, provide the name in uppercase. For CREATE SCHEMA SCHEMA_NAME or CREATE SCHEMA schema_name, use SCHEMA_NAME. For CREATE SCHEMA "schema_name" or CREATE SCHEMA "SCHEMA_NAME", use schema_name or SCHEMA_NAME, respectively. | |
| Snowflake Destination Table | The table where data will be persisted. It must already exist in Snowflake. The name is case-sensitive; for unquoted identifiers, provide the name in uppercase. | |
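The case-sensitivity rule for the three Snowflake destination parameters can be summarized as: unquoted identifiers are stored uppercase, quoted identifiers keep their exact case. A small helper (ours, for illustration only) that derives the parameter value from the identifier as written in DDL:

```python
def parameter_value(identifier: str) -> str:
    """Return the connector parameter value for a Snowflake identifier as
    written in DDL: quoted identifiers keep their exact case, unquoted
    identifiers are stored uppercase."""
    if identifier.startswith('"') and identifier.endswith('"') and len(identifier) > 1:
        return identifier[1:-1]   # CREATE SCHEMA "schema_name" -> schema_name
    return identifier.upper()     # CREATE SCHEMA schema_name   -> SCHEMA_NAME

print(parameter_value("kinesis_schema"))   # KINESIS_SCHEMA
print(parameter_value('"schema_name"'))    # schema_name
```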

Start the connector

  1. Right-click on the plane icon and select Enable all Controller Services.

  2. Right-click on the plane and select Start. The connector starts data ingestion.

Understanding the KINESISMETADATA column

The connector populates the KINESISMETADATA structure with metadata about the Kinesis record. The structure contains the following information:

| Field Name | Field Type | Example Value | Description |
|---|---|---|---|
| stream | String | stream-name | The name of the Kinesis stream the record came from. |
| shardId | String | shardId-000000000001 | The identifier of the shard in the stream the record came from. |
| approximateArrival | String | 2025-11-05T09:12:15.300 | The approximate time that the record was inserted into the stream (ISO 8601 format). |
| partitionKey | String | key-1234 | The partition key specified by the data producer for the record. |
| sequenceNumber | String | 123456789 | The unique sequence number assigned by Kinesis Data Streams to the record within the shard. |
| subSequenceNumber | Number | 2 | The subsequence number for the record (used for aggregated records that share the same sequence number). |
| shardedSequenceNumber | String | 12345678900002 | A combination of the sequence number and the subsequence number for the record. |
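Judging from the example values above, shardedSequenceNumber appears to be the sequence number with the zero-padded subsequence number appended. A sketch of that composition (the padding width is inferred from the example values, not a documented guarantee):

```python
def sharded_sequence_number(sequence_number: str, sub_sequence_number: int) -> str:
    # Append the subsequence number, zero-padded to five digits,
    # to the shard-level sequence number.
    return f"{sequence_number}{sub_sequence_number:05d}"

print(sharded_sequence_number("123456789", 2))  # 12345678900002
```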

Measuring ingestion latency

For change tracking, incremental processing, and Time Travel queries based on row modification time, you can use the ROW_TIMESTAMP feature.

It can be enabled by running the following command on your destination table:

ALTER TABLE <DESTINATION_TABLE> SET ROW_TIMESTAMP = TRUE;

After row timestamps are enabled, tables expose the METADATA$ROW_LAST_COMMIT_TIME column, which returns the timestamp when each row was last modified.

For more information, see Row timestamps.

Note

Row timestamps aren't available for interactive tables. For more information, see Limitations of interactive tables.
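With row timestamps enabled, per-row ingestion latency can be estimated by comparing the approximateArrival value from KINESISMETADATA with METADATA$ROW_LAST_COMMIT_TIME. A sketch of the arithmetic, with hypothetical timestamp values:

```python
from datetime import datetime

# Hypothetical values for one row: approximateArrival from KINESISMETADATA
# and the row's METADATA$ROW_LAST_COMMIT_TIME, as ISO 8601 strings.
arrival = datetime.fromisoformat("2025-11-05T09:12:15.300")
committed = datetime.fromisoformat("2025-11-05T09:12:18.800")

latency_seconds = (committed - arrival).total_seconds()
print(latency_seconds)  # 3.5
```

The equivalent calculation can be done in a Snowflake query as a timestamp difference between the two columns; mind time zones when comparing against the commit timestamp.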

Using the connector with Apache Iceberg™ tables

The connector can ingest data into Snowflake-managed Apache Iceberg™ tables, provided that the following requirements are met:

  • You must have been granted the USAGE privilege on the external volume associated with your Apache Iceberg™ table.

  • You must create an Apache Iceberg™ table before running the connector.

Grant usage on an external volume

For example, if your Iceberg table uses the kinesis_external_volume external volume and the connector uses the role openflow_kinesis_connector_role_1, run the following statement:

USE ROLE ACCOUNTADMIN;
GRANT USAGE ON EXTERNAL VOLUME kinesis_external_volume TO ROLE openflow_kinesis_connector_role_1;

Create an Apache Iceberg™ table for ingestion

The connector does not create Iceberg tables automatically and does not support schema evolution. Before you run the connector, you must create an Iceberg table manually.

When you create an Iceberg table, you can use Iceberg data types (including VARIANT) or compatible Snowflake types.

For example, consider the following message:

{
  "id": 1,
  "name": "Steve",
  "body_temperature": 36.6,
  "approved_coffee_types": ["Espresso", "Doppio", "Ristretto", "Lungo"],
  "animals_possessed": {
    "dogs": true,
    "cats": false
  },
  "options": {
    "can_walk": true,
    "can_talk": false
  },
  "date_added": "2024-10-15"
}

To create an Iceberg table for the example message, you can use the following statement:

CREATE OR REPLACE ICEBERG TABLE my_iceberg_table (
  kinesisMetadata OBJECT(
    stream STRING,
    shardId STRING,
    approximateArrival STRING,
    partitionKey STRING,
    sequenceNumber STRING,
    subSequenceNumber INTEGER,
    shardedSequenceNumber STRING
  ),
  id INT,
  name STRING,
  body_temperature FLOAT,
  approved_coffee_types ARRAY(STRING),
  animals_possessed VARIANT,
  date_added DATE,
  options OBJECT(can_walk BOOLEAN, can_talk BOOLEAN)
)
EXTERNAL_VOLUME = 'my_volume'
CATALOG = 'SNOWFLAKE'
BASE_LOCATION = 'my_location/my_iceberg_table'
ICEBERG_VERSION = 3;
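Because the connector neither creates Iceberg tables nor evolves their schema, it can be worth verifying up front that every first-level key of your messages has a matching column (matching is by name, case-insensitive). A quick sketch of that check against the example message and table above:

```python
message = {
    "id": 1,
    "name": "Steve",
    "body_temperature": 36.6,
    "approved_coffee_types": ["Espresso", "Doppio", "Ristretto", "Lungo"],
    "animals_possessed": {"dogs": True, "cats": False},
    "options": {"can_walk": True, "can_talk": False},
    "date_added": "2024-10-15",
}

# Column names from the CREATE ICEBERG TABLE statement above
# (kinesisMetadata is populated by the connector itself).
columns = {"kinesisMetadata", "id", "name", "body_temperature",
           "approved_coffee_types", "animals_possessed", "date_added", "options"}

upper_columns = {c.upper() for c in columns}
unmatched = [key for key in message if key.upper() not in upper_columns]
print(unmatched)  # []
```

Any key listed in unmatched would be silently dropped during ingestion, so add a column for it before starting the connector.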

Using the connector with Interactive Tables

Interactive tables are a special type of Snowflake table optimized for low-latency, high-concurrency queries. You can find out more about interactive tables in the interactive tables documentation.

  1. Create an interactive table:

    CREATE INTERACTIVE TABLE REALTIME_METRICS (
      metric_name VARCHAR,
      metric_value NUMBER,
      source_topic VARCHAR,
      timestamp TIMESTAMP_NTZ
    ) CLUSTER BY (metric_name)
    AS (SELECT
      $1:M_NAME::VARCHAR,
      $1:M_VALUE::NUMBER,
      $1:RECORD_METADATA.topic::VARCHAR,
      $1:RECORD_METADATA.timestamp::TIMESTAMP_NTZ
    FROM TABLE(DATA_SOURCE(TYPE => 'STREAMING')));
    

Important considerations:

  • Interactive tables have specific limitations and query restrictions. Review the interactive tables documentation before using them with the connector.

  • For interactive tables, any required transformations must be handled in the table definition.

  • Interactive warehouses are required to query interactive tables efficiently.

Using the connector with a customer-defined schema for the destination table

The connector treats each Kinesis record as a row to be inserted into a Snowflake table. For example, suppose the messages in your Kinesis stream are structured like the following JSON:

{
  "order_id": 12345,
  "customer_name": "John",
  "order_total": 100.00,
  "isPaid": true
}

By default, you don't have to specify all fields from the JSON; schema evolution takes care of them. However, if you prefer a static schema, you can create it by running:

CREATE TABLE ORDERS (
  kinesisMetadata OBJECT,
  order_id NUMBER,
  customer_name VARCHAR,
  order_total FLOAT,
  ispaid BOOLEAN
);

Using the connector with a customer-defined PIPE

If you choose to create your own pipe, you can define the data transformation logic in the pipe's COPY INTO statement. You can rename columns as required and cast the data types as needed. For example:

CREATE TABLE ORDERS (
  order_id VARCHAR,
  customer_name VARCHAR,
  order_total VARCHAR,
  ispaid VARCHAR
);

CREATE PIPE ORDERS AS
COPY INTO ORDERS
FROM (
  SELECT
    $1:order_id::STRING,
    $1:customer_name,
    $1:order_total::STRING,
    $1:isPaid::STRING
  FROM TABLE(DATA_SOURCE(TYPE => 'STREAMING'))
);

When you define your own pipe, your destination table columns do not have to match the JSON keys.

To adjust the connector to work with a custom pipe, perform the following tasks:

  1. Right-click on the PublishSnowpipeStreaming processor used in your Kinesis ingestion flow in the Openflow canvas.

  2. Select Configure from the context menu.

  3. Navigate to the Properties tab.

  4. In the Destination type field, pick Pipe.

  5. In the Pipe field, type the name of your pipe.

  6. Select Apply to save the configuration.

Customizing error handling

Error handling is split between Openflow-side failures and server-side failures within the Snowpipe Streaming service.

  • Openflow errors (client-side failures): Errors such as unparseable payloads or custom transformation failures occur before records reach Snowflake. By default, these records are discarded. You can process these errors in Openflow by using FlowFiles from the parse failure relationship of the ConsumeKinesis processor.

  • Snowpipe Streaming errors (server-side failures): Errors for records that successfully reach Snowflake but are incompatible with the destination table's schema (for example, type mismatches) are captured by the Snowflake infrastructure. When error logging is enabled on the destination table (ERROR_LOGGING = TRUE), these failed rows are automatically ingested into the destination's error table.

Next steps