Snowflake Connector for Kafka

The Snowflake Connector for Kafka (v4) is a sink connector that reads data from one or more Apache Kafka topics and loads that data into Snowflake tables. Built on Snowflake’s high-performance Snowpipe Streaming architecture, the connector delivers up to 10 GB/s of throughput per table with 5-to-10-second end-to-end latency, along with exactly-once, ordered delivery semantics.

For more information about Kafka Connect and its framework, see Apache Kafka and the Kafka Connect framework.

Benefits

The Snowflake Connector for Kafka leverages Snowflake’s high-performance Snowpipe Streaming architecture, which is engineered for modern, data-intensive organizations requiring near real-time insights. This next-generation architecture significantly advances throughput, efficiency, and flexibility for real-time ingestion into Snowflake.

The high-performance architecture offers several key advantages:

  • Superior throughput and latency: Designed to support ingest speeds of up to 10 GB/s per table with end-to-end ingest to query latencies within 5 to 10 seconds, enabling near-real-time analytics.

  • Flat, throughput-based pricing: Pricing is based on the volume of data ingested (GB), the same model as the Snowpipe Streaming high-performance architecture. For pricing details, see Snowpipe Streaming cost.

  • Enhanced performance: Uses a Rust-based client core that delivers improved client-side performance and lower resource usage compared to previous implementations.

  • In-flight transformations: Supports data cleansing and reshaping during ingestion using COPY command syntax within the PIPE object, allowing you to transform data before it reaches the target table.

  • Server-side schema validation: Moves schema validation from the client side to the server side through the PIPE object, ensuring data quality and reducing client complexity. Invalid records are captured in Error Tables for inspection and replay.

  • Pre-clustering capability: Can cluster data during ingestion when the target table has clustering keys defined, improving query performance without requiring post-ingestion maintenance.

The connector uses Snowflake PIPE objects as the central component for managing ingestion. The PIPE object acts as the entry point and definition layer for all streaming data, defining how data is processed, transformed, and validated before being committed to the target table. For more information about how the connector works with tables and pipes, see How the connector works with tables and pipes.
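As a minimal sketch of what such a PIPE can look like, assuming the high-performance Snowpipe Streaming PIPE syntax with an in-flight transformation (the table, pipe, and column names here are illustrative, not part of the connector's generated objects):

```sql
-- Hypothetical example: a pipe that reshapes streaming JSON records
-- before they are committed to the target table.
CREATE OR REPLACE PIPE kafka_events_pipe AS
  COPY INTO kafka_events (event_id, event_time, payload)
  FROM (
    SELECT $1:id::VARCHAR,
           $1:ts::TIMESTAMP_NTZ,
           $1:body
    FROM TABLE(DATA_SOURCE(TYPE => 'STREAMING'))
  );
```

Because the SELECT list uses COPY transformation syntax, cleansing and reshaping happen server side, before any row reaches the target table.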

Choosing a connector version

The Kafka connector runs in a Kafka Connect cluster, reading data from Kafka topics and writing it into Snowflake tables.

Snowflake provides two versions of the connector. Both provide the same core functionality for streaming data from Kafka to Snowflake.

  • Confluent version of the connector

    The Confluent version is packaged as a zip file and includes all external libraries required to run the connector. Choose this version if you’re using the Confluent Platform.

    Note

    The v4 connector isn’t yet available as a native Confluent Cloud connector. On Confluent Cloud, install it as a custom plugin connector. Contact Snowflake Support for the Confluent package.

    For more information, see Kafka Connect (https://docs.confluent.io/current/connect/).

  • OSS Apache Kafka version of the connector

    The open source software (OSS) version is available as an Apache Kafka package on Maven (https://mvnrepository.com/artifact/com.snowflake/snowflake-kafka-connector/).

    The Apache version is distributed as a standard fat JAR file and requires manual installation into your Apache Kafka Connect cluster. It also depends on the Bouncy Castle (https://www.bouncycastle.org/) cryptography libraries, which must be downloaded separately.

    For more information, see Apache Kafka (https://kafka.apache.org/).
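With either distribution, the connector is configured like any other Kafka Connect sink. A rough sketch of a sink configuration follows; the property names shown are from the classic connector and are illustrative only, since v4 introduces a new connector class and removes or renames several properties, so consult the v4 configuration reference for the authoritative names:

```properties
# Illustrative sketch only -- verify every property against the
# v4 configuration reference before use.
name=snowflake-sink
connector.class=com.snowflake.kafka.connector.SnowflakeSinkConnector
topics=events
snowflake.url.name=myaccount.snowflakecomputing.com:443
snowflake.user.name=kafka_connector_user
snowflake.private.key=<private key>
snowflake.database.name=mydb
snowflake.schema.name=public
```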

Using the connector with Apache Iceberg™ tables

The connector can ingest data into Snowflake-managed Apache Iceberg™ tables. Before you configure the Kafka connector for Iceberg table ingestion, you must create an Iceberg table. See Create an Apache Iceberg™ table for ingestion for more information.
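A minimal sketch of such a table, assuming Snowflake's managed-Iceberg DDL (the external volume, base location, and columns are illustrative; match the columns to your topic's schema):

```sql
-- Hypothetical example: Snowflake-managed Iceberg table created
-- ahead of time to receive connector data.
CREATE OR REPLACE ICEBERG TABLE kafka_iceberg_events (
  id        STRING,
  event_ts  TIMESTAMP_NTZ,
  amount    DOUBLE
)
  CATALOG = 'SNOWFLAKE'
  EXTERNAL_VOLUME = 'my_external_volume'
  BASE_LOCATION = 'kafka_iceberg_events';
```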

Limitations

The Snowflake Connector for Kafka has the following limitations.

Apache Iceberg™ tables and schema evolution

The connector does not support schema evolution for Apache Iceberg™ tables.

Migration of existing pipelines from version 3.x

The v4 connector requires a new configuration (new connector class, removed properties, changed defaults). Migration from both Snowpipe mode and Snowpipe Streaming mode is supported without gaps or duplicates when configured correctly. The switchover must happen within offsets.retention.minutes (default 7 days). See Migrate from v3 to v4 for details.
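The switchover window is bounded by the broker-side offsets.retention.minutes setting, which controls how long committed consumer offsets survive after the v3 connector stops. The Kafka default corresponds to:

```properties
# Kafka broker setting (server.properties); 10080 minutes = 7 days.
# The v4 connector must take over within this window, or committed
# offsets expire and the resume point is lost.
offsets.retention.minutes=10080
```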

Single Message Transformations (SMTs)

Most Single Message Transformations (SMTs) are supported when using community converters, with the exception of regex.router which is currently not supported.

Authentication

The connector supports key-pair authentication only. OAuth isn’t supported in v4.
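Key-pair credentials can be generated with OpenSSL. A minimal sketch follows, using an unencrypted key for brevity (an encrypted private key is generally preferable in production); the file names are arbitrary:

```shell
# Generate an unencrypted PKCS#8 private key for the connector's
# authentication, plus the matching public key to register on the
# Snowflake user (ALTER USER ... SET RSA_PUBLIC_KEY = '...').
openssl genrsa 2048 | openssl pkcs8 -topk8 -inform PEM -nocrypt -out rsa_key.p8
openssl rsa -in rsa_key.p8 -pubout -out rsa_key.pub
```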

Error handling behavior depends on validation mode

With server-side validation, broken records are captured in Error Tables. With client-side validation, the connector fails immediately on invalid records, or routes them to a Dead Letter Queue (DLQ) when errors.tolerance=all is configured. For details, see Validation and error handling.
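For the client-side DLQ path, the standard Kafka Connect error-handling properties apply. A sketch, with an illustrative topic name:

```properties
# Route invalid records to a DLQ topic instead of failing the task.
errors.tolerance=all
errors.deadletterqueue.topic.name=kafka-connector-dlq
errors.deadletterqueue.topic.replication.factor=1
# Also log failed records for easier debugging.
errors.log.enable=true
```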

Fault tolerance limitations

Kafka topics can be configured with a limit on storage space or retention time.

  • If the system is offline for more than the retention time, expired records won’t be loaded. Similarly, if Kafka’s storage space limit is exceeded, some messages won’t be delivered.

  • If messages in the Kafka topic are deleted, these changes won’t be reflected in the Snowflake table.

For more information about SMTs, see Kafka Connect Single Message Transform Reference for Confluent Cloud or Confluent Platform (https://docs.confluent.io/current/connect/transforms/index.html).

Supported connector versions

The following table describes the supported connector versions.

| Release Series | Status | Notes |
|----------------|--------|-------|
| 4.x.x | Generally Available | Latest version. Migration from 3.x and 2.x must be done manually. |
| 3.x.x | Officially supported | Upgrade to v4 recommended. |
| 2.x.x | Officially supported | Upgrade recommended. |
| 1.x.x | Not supported | |

Note

Looking for the classic Kafka connector (v3 and earlier)? See Kafka connector v3 (classic). For migration guidance, see Migrate from v3 to v4.

Next steps

Review How the connector works with tables and pipes for more information about how the connector manages tables and pipes. Then review Set up tasks for the Snowflake Connector for Kafka for the steps to set up the connector.