Troubleshoot the Snowflake Connector for Kafka¶
This topic describes how to troubleshoot common issues with the Snowflake Connector for Kafka.
Ingestion errors¶
Channel reports increasing rows_error_count¶
If the Snowpipe Streaming channel reports an increasing rows_error_count, the connector
behavior depends on the errors.tolerance setting:
With errors.tolerance=none (the default), the connector task fails with ERROR_5030.
With errors.tolerance=all, the connector continues but logs the error count.
Note
With server-side validation and errors.tolerance=none, errors are asynchronous. The
connector detects the error on the next pre-commit cycle, so some additional records may be
ingested before the task fails.
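As a hedged sketch, the two tolerance modes correspond to the following connector configuration properties (values shown are illustrative, not recommendations):

```properties
# Fail the task on the first detected ingestion error (default behavior).
errors.tolerance=none

# Or: keep running, and surface errors through logs and metrics instead.
# errors.tolerance=all
# errors.log.enable=true
```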
To investigate:
Check the Error Table associated with your target table to identify the failing records. See Error handling in Snowpipe Streaming high-performance architecture for details.
Use the gap-finding technique described in Detect and recover from errors using metadata offsets with the Kafka offset information from the RECORD_METADATA column.
Review the connector logs for error details (enable errors.log.enable=true for verbose logging).
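The gap-finding idea can be sketched in plain Python (a hypothetical helper, not part of the connector): collect the ingested offsets for a single topic/partition from the RECORD_METADATA column, then look for breaks in the sequence — each break is a range of records that never landed in Snowflake.

```python
def find_offset_gaps(offsets):
    """Return (start, end) ranges of Kafka offsets missing from the
    ingested set for a single topic/partition."""
    gaps = []
    ordered = sorted(set(offsets))
    for prev, cur in zip(ordered, ordered[1:]):
        if cur - prev > 1:
            gaps.append((prev + 1, cur - 1))
    return gaps

# Offsets 3-4 and 7 were never persisted, so those records likely failed.
print(find_offset_gaps([0, 1, 2, 5, 6, 8]))  # [(3, 4), (7, 7)]
```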
Connector task fails with ERROR_5030¶
ERROR_5030 indicates that the connector detected a data ingestion error.
Common causes include:
Data type mismatches between the Kafka record and the target table schema.
A user-created pipe exists while snowflake.validation=client_side is configured. Client-side validation only works with the default pipe.
Schema changes in the Kafka records that can't be automatically evolved.
To resolve:
Review the error message and connector logs for the specific cause.
If using client-side validation with a user-defined pipe, switch to snowflake.validation=server_side or remove the user-defined pipe.
Fix the data in the source Kafka topic or adjust the target table schema.
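If the cause is the validation mode, the fix is a one-line configuration change; a hedged sketch:

```properties
# Server-side validation works with user-defined pipes;
# client_side only works with the default pipe.
snowflake.validation=server_side
```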
Schema evolution issues¶
With server-side validation, schema evolution can’t always infer the correct data type. For
example, it can’t infer binary columns and may interpret a string like "2026-04-13" as DATE
instead of TEXT.
If schema evolution produces unexpected column types:
Use client-side validation (snowflake.validation=client_side) for better type inference.
Pre-create the table with the correct schema before starting the connector.
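For example, a target table could be pre-created with explicit types so that inference is never consulted. This is a hedged sketch — the database, schema, table, and column names are hypothetical:

```sql
CREATE TABLE my_db.my_schema.kafka_events (
    RECORD_METADATA VARIANT,
    event_date TEXT,   -- keeps strings like "2026-04-13" as TEXT, not DATE
    payload BINARY     -- binary columns can't be inferred by evolution
);
```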
Note
The connector only caches the table schema. Concurrent DDL operations on the target table while the connector is running may cause undefined behavior. Avoid running DDL on tables that the connector is actively ingesting into.
Connection and authentication issues¶
Authentication failures¶
The v4 connector supports key-pair authentication only. Common authentication issues:
Invalid private key: Verify that the snowflake.private.key value is a valid Base64-encoded PKCS#8 private key.
Key passphrase: If your key is encrypted, set snowflake.private.key.passphrase to the correct passphrase.
Role privileges: Verify that the role specified in snowflake.role.name has the required privileges. See Snowflake Connector for Kafka: Configure Snowflake for details.
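As a hedged example, a PKCS#8 key pair can be generated with openssl; the Base64 body of the private key (typically without the PEM header and footer lines) is what goes into snowflake.private.key:

```shell
# Generate an unencrypted PKCS#8 private key for the connector
# (use -v2 aes256 instead of -nocrypt to add a passphrase, and set
# snowflake.private.key.passphrase accordingly).
openssl genrsa 2048 | openssl pkcs8 -topk8 -inform PEM -nocrypt -out rsa_key.p8
# Derive the public key to register on the Snowflake user.
openssl rsa -in rsa_key.p8 -pubout -out rsa_key.pub
```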
Configuration issues¶
Unsupported converter with schematization¶
When snowflake.enable.schematization=true (the default), the StringConverter and
ByteArrayConverter aren’t supported as value converters. Use structured converters instead:
org.apache.kafka.connect.json.JsonConverter
io.confluent.connect.avro.AvroConverter
io.confluent.connect.protobuf.ProtobufConverter
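A hedged configuration sketch using the JSON converter:

```properties
# Structured value converter, required when schematization is enabled.
value.converter=org.apache.kafka.connect.json.JsonConverter
# Set to false if your records don't embed a schema envelope.
value.converter.schemas.enable=false
```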
Removed v3 configuration properties¶
If you see errors about unrecognized configuration properties, check whether you’re using properties that were removed in v4. See Migrate from Kafka connector v3 to v4 for the full list of removed configurations.
Compatibility validator failures at startup¶
If the connector fails at startup with errors about missing or incompatible configuration values,
the compatibility validator (snowflake.streaming.validate.compatibility.with.classic) is
checking your config against v3 migration requirements.
For new installations: Set snowflake.streaming.validate.compatibility.with.classic=false to skip the check.
For migrations from v3: Set all the required compatibility properties explicitly. See snowflake.streaming.validate.compatibility.with.classic for the full list.
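For a fresh installation that never ran v3, disabling the check is a one-line change:

```properties
# Skip v3 migration compatibility checks on a new v4 installation.
snowflake.streaming.validate.compatibility.with.classic=false
```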
Performance issues¶
Ingestion lag growing¶
If the latest-consumer-offset minus persisted-in-snowflake-offset gap is increasing
(visible through JMX metrics), the connector is falling behind.
To resolve:
Increase tasks: Set tasks.max closer to the total number of Kafka partitions. Optimal performance is typically 2 tasks per CPU core across the Kafka Connect cluster.
Check backpressure: If the backpressure-rewind-count metric is increasing, the Snowpipe Streaming SDK is at capacity. Consider scaling out your Kafka Connect cluster.
Review JVM memory: Limit the JVM heap to approximately 50% of available memory. The Rust-based Snowpipe Streaming SDK uses off-heap memory for buffering, which isn't managed by the JVM.
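As an illustrative sketch (the sizes are assumptions for a 16 GB worker), the heap cap can be set through Kafka's standard KAFKA_HEAP_OPTS variable before starting the Connect worker:

```shell
# Cap the JVM heap at roughly half of a 16 GB host, leaving the rest
# for the Rust SDK's off-heap buffers and the operating system.
export KAFKA_HEAP_OPTS="-Xms4g -Xmx8g"
```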
Table and pipe caching¶
The connector caches table and pipe existence checks to reduce database queries. If the connector doesn't detect newly created tables or pipes, adjust the cache expiration time in the connector configuration.
Sustained channel recovery¶
Occasional channel recoveries are normal. However, if the channel-recovery-count metric
is continuously increasing, it may indicate:
Schema changes on the target table that conflict with the connector’s cached schema.
Permission changes that affect the connector’s role.
Network instability between the Kafka Connect cluster and Snowflake.
Review the connector logs for specific recovery reasons.
SDK client leak¶
If the sdk-client-count JMX metric grows continuously, the connector may be leaking
Snowpipe Streaming SDK clients. Each distinct target table should have one SDK client.
If the count exceeds the number of distinct tables, contact Snowflake Support.
Migration issues¶
SSv1 channel not found during offset migration¶
If the connector fails with a channel-not-found error when using
snowflake.streaming.classic.offset.migration=strict:
Verify that you’re using the same connector name as your v3 deployment.
Check whether snowflake.streaming.classic.offset.migration.include.connector.name matches your v3 setting for snowflake.streaming.channel.name.include.connector.name.
Switch to best_effort mode if the channel has already been cleaned up, or if you're adding new topics that didn't exist in v3.
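A hedged sketch of the relevant migration properties — the value of the second property is an assumption here and must mirror your v3 deployment:

```properties
# Fall back instead of failing when a v3 channel can't be found.
snowflake.streaming.classic.offset.migration=best_effort
# Must match the v3 value of snowflake.streaming.channel.name.include.connector.name.
snowflake.streaming.classic.offset.migration.include.connector.name=true
```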
Duplicates after migration¶
If you see duplicate records after migrating from v3:
Verify that RECORD_METADATA contains topic, partition, and offset fields.
Use the deduplication query in Downgrading from v4 to v3 to remove duplicates.
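To confirm duplicates before cleaning up, a hedged query sketch (the table name is hypothetical) groups on the Kafka coordinates stored in RECORD_METADATA:

```sql
SELECT RECORD_METADATA:topic::STRING     AS topic,
       RECORD_METADATA:partition::NUMBER AS partition,
       RECORD_METADATA:offset::NUMBER    AS offset,
       COUNT(*)                          AS copies
FROM my_db.my_schema.kafka_events
GROUP BY 1, 2, 3
HAVING COUNT(*) > 1;
```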
Logging¶
The Snowpipe Streaming SDK can produce verbose logs. To reduce log noise, lower the SDK's log level through its logging environment variable on your Kafka Connect workers. For detailed connector logging with context, configure the Kafka Connect log pattern.
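As a hedged example for Log4j-based Connect workers, Kafka Connect exposes a per-connector MDC context that can be included in the log pattern:

```properties
# connect-log4j.properties: %X{connector.context} prefixes each line
# with the connector and task that produced it.
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %X{connector.context}%m (%c:%L)%n
```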
Report issues¶
For issues not covered by this guide, contact Snowflake Support.