Set up Openflow Connector for Amazon Kinesis Data Streams¶
Note
This connector is subject to the Snowflake Connector Terms.
This topic describes how to set up Openflow Connector for Amazon Kinesis Data Streams.
Openflow Connector for Amazon Kinesis Data Streams is designed for JSON message ingestion from Kinesis streams to Snowflake tables, with schema evolution capabilities.
Set up the Openflow Connector for Kinesis¶
Prerequisites¶
Ensure that you have completed Set up Openflow - BYOC or Set up Openflow - Snowflake Deployments.
If you are using Openflow - Snowflake Deployments, ensure that you have reviewed configuring required domains and have granted access to the required domains for the Kinesis connector.
Set up IAM roles and policies in AWS¶
As an AWS administrator, perform the following actions in your AWS account:
Create an AWS IAM user or role that Openflow will use to access the Kinesis data stream. For more information, see Creating IAM users (https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html) in the AWS documentation.
Ensure that the AWS user has configured Access Key credentials (https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html).
Grant the AWS user the following IAM permissions:
| Service | Actions | Resources (ARNs) | Purpose |
|---|---|---|---|
| Amazon Kinesis Data Streams | kinesis:DescribeStream, kinesis:DescribeStreamConsumer, kinesis:GetRecords, kinesis:GetShardIterator, kinesis:ListShards, kinesis:RegisterStreamConsumer | arn:aws:kinesis:${REGION}:${ACCOUNT_ID}:stream/${STREAM_NAME} | Discovers shards, reads records through shared-throughput polling, resolves the stream ARN, registers an Enhanced Fan-Out consumer, and polls consumer status during registration. |
| Amazon Kinesis Data Streams | kinesis:DeregisterStreamConsumer, kinesis:DescribeStreamConsumer, kinesis:SubscribeToShard | arn:aws:kinesis:${REGION}:${ACCOUNT_ID}:stream/${STREAM_NAME}/consumer/* | Describes, subscribes to, and deregisters Enhanced Fan-Out consumers by consumer ARN. |
| Amazon DynamoDB | dynamodb:CreateTable, dynamodb:DeleteTable, dynamodb:DescribeTable, dynamodb:GetItem, dynamodb:PutItem, dynamodb:Query, dynamodb:Scan, dynamodb:UpdateItem | arn:aws:dynamodb:${REGION}:${ACCOUNT_ID}:table/${APPLICATION_NAME}, arn:aws:dynamodb:${REGION}:${ACCOUNT_ID}:table/${APPLICATION_NAME}_migration | Creates and manages the checkpoint/lease table (shard leases, node heartbeats, checkpoints) and a temporary migration table used during one-time migration from legacy checkpoint tables. |
Example IAM policy:
Before using the example policy, replace the following placeholders:
| Placeholder | Description |
|---|---|
| ${REGION} | Your AWS region (for example, us-east-1) |
| ${ACCOUNT_ID} | Your AWS account ID (for example, 123456789012) |
| ${STREAM_NAME} | The value of the AWS Kinesis Stream Name connector parameter |
| ${APPLICATION_NAME} | The value of the AWS Kinesis Application Name connector parameter. Used as the DynamoDB checkpoint table name and as the Enhanced Fan-Out registered consumer name. |
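A sketch of such a policy, assembled from the permissions listed in the table above (replace the placeholders as described and adjust the statement IDs to your conventions):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "KinesisStreamAccess",
      "Effect": "Allow",
      "Action": [
        "kinesis:DescribeStream",
        "kinesis:DescribeStreamConsumer",
        "kinesis:GetRecords",
        "kinesis:GetShardIterator",
        "kinesis:ListShards",
        "kinesis:RegisterStreamConsumer"
      ],
      "Resource": "arn:aws:kinesis:${REGION}:${ACCOUNT_ID}:stream/${STREAM_NAME}"
    },
    {
      "Sid": "KinesisConsumerAccess",
      "Effect": "Allow",
      "Action": [
        "kinesis:DeregisterStreamConsumer",
        "kinesis:DescribeStreamConsumer",
        "kinesis:SubscribeToShard"
      ],
      "Resource": "arn:aws:kinesis:${REGION}:${ACCOUNT_ID}:stream/${STREAM_NAME}/consumer/*"
    },
    {
      "Sid": "DynamoDbCheckpointAccess",
      "Effect": "Allow",
      "Action": [
        "dynamodb:CreateTable",
        "dynamodb:DeleteTable",
        "dynamodb:DescribeTable",
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:Query",
        "dynamodb:Scan",
        "dynamodb:UpdateItem"
      ],
      "Resource": [
        "arn:aws:dynamodb:${REGION}:${ACCOUNT_ID}:table/${APPLICATION_NAME}",
        "arn:aws:dynamodb:${REGION}:${ACCOUNT_ID}:table/${APPLICATION_NAME}_migration"
      ]
    }
  ]
}
```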
Note
- The ${APPLICATION_NAME}_migration table is a temporary DynamoDB table created only during a one-time migration from legacy checkpoint tables to the new schema. It is deleted automatically when migration completes. If your deployment has never used the legacy KCL-based connector, you can omit the migration table ARN from the policy.
- The dynamodb:DeleteTable action is used during the migration process and can be removed from the policy after migration is confirmed complete.
- The kinesis:DeregisterStreamConsumer action is invoked when the processor is removed from the canvas. If the IAM principal doesn't have this permission, the consumer must be deregistered manually through the AWS console or CLI.
Set up Snowflake account¶
As a Snowflake account administrator, perform the following tasks:
Create a new Snowflake service user with the type as SERVICE.
Create a new role or use an existing role, and grant it the required database privileges.
The connector requires the user to create the destination table. Make sure the user has the required privileges for managing Snowflake objects:
| Object | Privilege | Notes |
|---|---|---|
| Database | USAGE | |
| Schema | USAGE | |
| Table | OWNERSHIP | Required for the connector to ingest data into a table. |
Snowflake recommends creating a separate user and role for each Kinesis stream for better access control.
You can use the following script to create and configure a custom role (requires SECURITYADMIN or equivalent):
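A minimal sketch of such a script, using placeholder names for the role (openflow_kinesis_connector_role_1), service user (openflow_kinesis_user), database (kinesis_db), and schema (kinesis_schema); adapt the names to your environment:

```sql
USE ROLE SECURITYADMIN;

-- Create a dedicated role for the connector and grant it to the service user
CREATE ROLE IF NOT EXISTS openflow_kinesis_connector_role_1;
GRANT ROLE openflow_kinesis_connector_role_1 TO USER openflow_kinesis_user;

-- Grant the database and schema privileges from the table above
GRANT USAGE ON DATABASE kinesis_db TO ROLE openflow_kinesis_connector_role_1;
GRANT USAGE ON SCHEMA kinesis_db.kinesis_schema TO ROLE openflow_kinesis_connector_role_1;

-- Make the connector role the service user's default role
ALTER USER openflow_kinesis_user SET DEFAULT_ROLE = openflow_kinesis_connector_role_1;
```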
Note
Privileges must be granted directly to the connector role and can’t be inherited.
Configure the destination table
Snowflake highly recommends using server-side schema evolution for schema changes and an error table for DML error logging.
The example below shows how to create a table and add OWNERSHIP permissions.
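A minimal sketch, using the placeholder names kinesis_db.kinesis_schema.kinesis_events for the destination table and openflow_kinesis_connector_role_1 for the connector role; enabling schema evolution lets the connector add data columns as they appear in the stream:

```sql
-- Placeholder names: adjust the database, schema, table, and role to your environment.
CREATE TABLE kinesis_db.kinesis_schema.kinesis_events (
    KINESISMETADATA VARIANT
)
ENABLE_SCHEMA_EVOLUTION = TRUE;

-- The connector role must own the destination table.
GRANT OWNERSHIP ON TABLE kinesis_db.kinesis_schema.kinesis_events
  TO ROLE openflow_kinesis_connector_role_1 COPY CURRENT GRANTS;
```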
The connector supports automatic schema detection and evolution. The structure of the destination table in Snowflake is defined and evolved automatically to match the structure of new data loaded by the connector, which maps the record content's first-level keys to table columns by name (case-insensitively).
With schema evolution enabled, Snowflake can automatically expand the destination table by adding new columns that are detected in the incoming stream and dropping NOT NULL constraints to accommodate new data patterns. For more information, see Table schema evolution.
If ENABLE_SCHEMA_EVOLUTION is not set, you must define the schema manually by extending the table definition. The connector still matches the record content's first-level keys to table columns by name; keys in the JSON that do not match any table column are ignored.
(Optional) Configure a secrets manager
Snowflake strongly recommends this step. Configure a secrets manager supported by Openflow, for example, AWS, Azure, or HashiCorp, and store the public and private keys in the secret store.
Once the secrets manager is configured, determine how you will authenticate to it. On AWS, Snowflake recommends using the EC2 instance role associated with Openflow, because no other secrets then have to be persisted.
In the Openflow canvas, configure a Parameter Provider associated with this secrets manager: from the hamburger menu in the upper right, navigate to Controller Settings » Parameter Provider, and then fetch your parameter values.
At this point all credentials can be referenced with the associated parameter paths and no sensitive values need to be persisted within Openflow.
Grant access to users
Any other Snowflake users who require access to the raw data ingested by the connector (for example, for custom processing in Snowflake) should be granted the role created in step 2.
Set up the connector¶
As a data engineer, perform the following tasks to install and configure the connector:
Install the connector¶
Navigate to the Openflow overview page. In the Featured connectors section, select View more connectors.
On the Openflow connectors page, find the Openflow connector for Amazon Kinesis Data Streams and select Add to runtime.
In the Select runtime dialog, select your runtime from the Available runtimes drop-down list and click Add.
Note
Before you install the connector, ensure that you have created a database, schema, and a table in Snowflake for the connector to store ingested data.
Authenticate to the deployment with your Snowflake account credentials and select Allow when prompted to allow the runtime application to access your Snowflake account. The connector installation process takes a few minutes to complete.
Authenticate to the runtime with your Snowflake account credentials.
The Openflow canvas appears with the connector process group added to it.
Configure the connector¶
If needed, customize the connector configuration before configuring the built-in parameters.
Populate the process group parameters
Right-click on the imported process group and select Parameters.
Fill out the required parameter values.
Common parameters¶
| Parameter | Description | Required |
|---|---|---|
| AWS Access Key ID | The AWS Access Key ID to connect to your Kinesis Stream and DynamoDB. | Yes |
| AWS Kinesis Region | The AWS Region to connect to. Use regular AWS region format, for example, us-east-1. | Yes |
| AWS Secret Access Key | The AWS Secret Access Key to connect to your Kinesis Stream and DynamoDB. | Yes |
| AWS Kinesis Application Name | The name that is used as the DynamoDB table name for tracking the application's progress on Kinesis Stream consumption. | Yes |
| AWS Kinesis Consumer Type | The strategy used to read records from a Kinesis Stream. Must be one of the following values: SHARED_THROUGHPUT, ENHANCED_FAN_OUT. For more information, see Differences between shared throughput consumer and enhanced fan-out consumer (https://docs.aws.amazon.com/streams/latest/dev/enhanced-consumers.html). | Yes |
| AWS Kinesis Initial Stream Position | The initial stream position from which data replication starts. This takes effect only during the initial start for a given AWS Kinesis Application Name. Possible values are: LATEST (latest stored record), TRIM_HORIZON (earliest stored record). | Yes |
| AWS Kinesis Stream Name | The AWS Kinesis Stream Name to consume data from. | Yes |
| Snowflake Destination Database | The database where data will be persisted. It must already exist in Snowflake. The name is case-sensitive. For unquoted identifiers, provide the name in uppercase. | Yes |
| Snowflake Destination Schema | The schema where data will be persisted. It must already exist in Snowflake. The name is case-sensitive. For unquoted identifiers, provide the name in uppercase. | Yes |
| Snowflake Destination Table | The table where data will be persisted. It must already exist in Snowflake. The name is case-sensitive. For unquoted identifiers, provide the name in uppercase. | Yes |
Start the connector¶
Right-click on the plane and select Enable all Controller Services.
Right-click on the plane and select Start. The connector starts data ingestion.
Understanding KINESISMETADATA column¶
The connector populates the KINESISMETADATA structure with metadata about the Kinesis record. The structure contains the following information:
| Field Name | Field Type | Description |
|---|---|---|
| stream | String | The name of the Kinesis stream the record came from. |
| shardId | String | The identifier of the shard in the stream the record came from. |
| approximateArrival | String | The approximate time that the record was inserted into the stream (ISO 8601 format). |
| partitionKey | String | The partition key specified by the data producer for the record. |
| sequenceNumber | String | The unique sequence number assigned by Kinesis Data Streams to the record in the shard. |
| subSequenceNumber | Number | The subsequence number for the record (used for aggregated records with the same sequence number). |
| shardedSequenceNumber | String | A combination of the sequence number and the subsequence number for the record. |
Measuring ingestion latency¶
For change tracking, incremental processing, and Time Travel queries based on row modification time, the ROW_TIMESTAMP feature can be used.
It can be enabled by running the following command on your destination table:
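A sketch of that command; the parameter name used here is an assumption, so confirm the exact ALTER TABLE syntax in the Row timestamps documentation:

```sql
-- Assumption: the parameter name below is illustrative; see the Row timestamps
-- documentation for the exact syntax. Table name is a placeholder.
ALTER TABLE kinesis_db.kinesis_schema.kinesis_events
  SET ENABLE_ROW_TIMESTAMPS = TRUE;
```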
After row timestamps are enabled, tables expose the METADATA$ROW_LAST_COMMIT_TIME column, which returns the timestamp when each row was last modified.
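For example, a hypothetical query along these lines compares each record's Kinesis arrival time (from KINESISMETADATA) with its Snowflake commit time to approximate ingestion latency (the table name is a placeholder):

```sql
SELECT
    AVG(
        DATEDIFF(
            'millisecond',
            TO_TIMESTAMP_TZ(KINESISMETADATA:approximateArrival::STRING),
            METADATA$ROW_LAST_COMMIT_TIME
        )
    ) AS avg_ingestion_latency_ms
FROM kinesis_db.kinesis_schema.kinesis_events;
```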
For more information, see Row timestamps.
Note
Row timestamp isn’t available for interactive tables. For more information, see Limitations of interactive tables.
Using the connector with Apache Iceberg™ tables¶
The connector can ingest data into Snowflake-managed Apache Iceberg™ tables, provided that the following requirements are met:
You must have been granted the USAGE privilege on the external volume associated with your Apache Iceberg™ table.
You must create an Apache Iceberg™ table before running the connector.
Grant usage on an external volume¶
For example, if your Iceberg table uses the kinesis_external_volume external volume and the connector uses the role openflow_kinesis_connector_role_1, run the following statement:
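A sketch of that grant, using the names from the example above:

```sql
GRANT USAGE ON EXTERNAL VOLUME kinesis_external_volume
  TO ROLE openflow_kinesis_connector_role_1;
```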
Create an Apache Iceberg™ table for ingestion¶
The connector does not create Iceberg tables automatically and does not support schema evolution. Before you run the connector, you must create an Iceberg table manually.
When you create an Iceberg table, you can use Iceberg data types (including VARIANT) or compatible Snowflake types.
For example, consider the following message:
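A hypothetical message, used here for illustration only:

```json
{
  "id": 1,
  "name": "Steve",
  "temperature": 36.6
}
```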
To create an Iceberg table for the example message, use one of the following statements:
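For instance, a sketch using compatible Snowflake types for the hypothetical message above; the external volume name, base location, and table name are assumptions to adjust to your environment:

```sql
CREATE ICEBERG TABLE kinesis_db.kinesis_schema.kinesis_iceberg_table (
    ID NUMBER(10, 0),
    NAME STRING,
    TEMPERATURE DOUBLE,
    KINESISMETADATA VARIANT
)
EXTERNAL_VOLUME = 'kinesis_external_volume'
CATALOG = 'SNOWFLAKE'
BASE_LOCATION = 'kinesis_iceberg_table/';
```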
Using the connector with Interactive Tables¶
Interactive tables are a special type of Snowflake table optimized for low-latency, high-concurrency queries. You can find out more about interactive tables in the interactive tables documentation.
Create an interactive table:
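A sketch, assuming the DDL follows the CREATE <kind> TABLE pattern used by other Snowflake table types; check the interactive tables documentation for the exact syntax and required properties before using it:

```sql
-- Assumption: exact interactive-table DDL may differ; column and table names
-- are placeholders matching the hypothetical message used elsewhere on this page.
CREATE INTERACTIVE TABLE kinesis_db.kinesis_schema.kinesis_events_interactive (
    ID NUMBER(10, 0),
    NAME STRING,
    TEMPERATURE DOUBLE,
    KINESISMETADATA VARIANT
);
```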
Important considerations:
Interactive tables have specific limitations and query restrictions. Review the interactive tables documentation before using them with the connector.
For interactive tables, any required transformations must be handled in the table definition.
Interactive warehouses are required to query interactive tables efficiently.
Using the connector with a customer-defined schema for the destination table¶
The connector treats each Kinesis record as a row to be inserted into a Snowflake table. For example, if you have a Kinesis stream with message content structured like the following JSON:
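A hypothetical message of that shape, for illustration only:

```json
{
  "id": 1,
  "name": "Steve",
  "temperature": 36.6
}
```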
By default, you don't have to specify all fields from the JSON; schema evolution takes care of that. However, if you prefer a static schema, you can create it by running a statement like the following:
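A sketch matching the hypothetical message above (the database, schema, and table names are placeholders):

```sql
CREATE TABLE kinesis_db.kinesis_schema.kinesis_events (
    ID NUMBER(10, 0),
    NAME VARCHAR,
    TEMPERATURE FLOAT,
    KINESISMETADATA VARIANT
);
```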
Using the connector with a customer-defined PIPE¶
If you choose to create your own pipe, you can define the data transformation logic in the pipe’s COPY INTO statement. You can rename columns as required and cast the data types as needed. For example:
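A sketch of such a pipe, assuming the Snowpipe Streaming data source syntax and the hypothetical message shown earlier; all object and column names are placeholders:

```sql
CREATE PIPE kinesis_db.kinesis_schema.kinesis_pipe AS
  COPY INTO kinesis_db.kinesis_schema.kinesis_events_custom (user_id, user_name, body_temperature)
  FROM (
    SELECT
      $1:id::NUMBER,           -- renamed from "id"
      $1:name::VARCHAR,        -- renamed from "name"
      $1:temperature::FLOAT    -- renamed from "temperature" and cast
    FROM TABLE (DATA_SOURCE (TYPE => 'STREAMING'))
  );
```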
When you define your own pipe your destination table columns do not have to match the JSON keys. You can rename the columns to your desired names and cast the data types if required.
To adjust the connector to work with a custom pipe, perform the following tasks:
Right-click on the PublishSnowpipeStreaming processor used in your Kinesis ingestion flow in the Openflow canvas.
Select Configure from the context menu.
Navigate to the Properties tab.
In the Destination type field, pick Pipe.
In the Pipe field, type the name of your pipe.
Select Apply to save the configuration.
Customizing error handling¶
Error handling is split between Openflow-side failures and server-side failures within the Snowpipe Streaming service.
Openflow Errors (Client-Side Failures): Errors such as unparseable payloads or custom transformation failures occur before records reach Snowflake. By default, these records are discarded. You can process these errors in Openflow by using FlowFiles from the parse failure relationship of the ConsumeKinesis processor.
Snowpipe Streaming Errors (Server-Side Failures): Errors for records that successfully reach Snowflake but are incompatible with the destination table's schema (for example, type mismatches) are captured by the Snowflake infrastructure. When error logging is enabled on the destination table (error_logging = true), these failed rows are automatically ingested into the destination Error table.