Openflow Connector for Kafka¶
Note
The connector is subject to the Connector Terms.
This topic describes the basic concepts and limitations of the Openflow Connector for Kafka.
Apache Kafka software uses a publish and subscribe model to write and read streams of records, similar to a message queue or enterprise messaging system. Kafka allows processes to read and write messages asynchronously. A subscriber does not need to be connected directly to a publisher; a publisher can queue a message in Kafka for the subscriber to receive later.
An application publishes messages to a topic, and an application subscribes to a topic to receive those messages. Kafka can process, as well as transmit, messages; however, that is outside the scope of this document. Topics can be divided into partitions to increase scalability.
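The following is a minimal sketch of this publish/subscribe flow using the confluent-kafka Python client. The broker address, topic name, message contents, and consumer group are assumptions chosen for illustration, not values required by the connector.

```python
# Minimal publish/subscribe sketch with the confluent-kafka client.
# Broker address, topic, and group id are placeholder assumptions.
from confluent_kafka import Producer, Consumer

BROKER = "localhost:9092"   # assumed local broker
TOPIC = "orders"            # hypothetical topic

# Publisher: queues a message on the topic; the subscriber does not
# need to be connected at this moment.
producer = Producer({"bootstrap.servers": BROKER})
producer.produce(TOPIC, key="order-1", value='{"id": 1, "amount": 9.99}')
producer.flush()  # block until the broker acknowledges delivery

# Subscriber: joins a consumer group and reads messages from the topic,
# possibly long after they were published.
consumer = Consumer({
    "bootstrap.servers": BROKER,
    "group.id": "example-group",
    "auto.offset.reset": "earliest",
})
consumer.subscribe([TOPIC])
msg = consumer.poll(timeout=10.0)
if msg is not None and msg.error() is None:
    print(msg.key(), msg.value())
consumer.close()
```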
The Openflow Connector for Kafka reads data from Kafka topics and writes it into Snowflake tables using the Snowpipe Streaming mechanism.
Limitations¶
If the Topic To Table Map parameter is not set:

- Each table name must exactly match the name of the topic whose data it holds.
- Table names must be in uppercase.

If the Topic To Table Map parameter is set:

- Table names must match the table names specified in the mapping, and each must be a valid Snowflake unquoted identifier. For information about valid table names, see Identifier requirements. An example mapping is sketched after this list.
Only JSON and AVRO formats are supported.
Only Confluent Schema Registry is supported.
The PLAINTEXT, SASL_PLAINTEXT, SSL, and SASL_SSL security protocols are supported.
The PLAIN, SCRAM-SHA-256, SCRAM-SHA-512, and AWS_MSK_IAM SASL mechanisms are supported. A sample client configuration for one supported combination is sketched after this list.
If inserting data into a table fails, the connector retries indefinitely.
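For illustration only, a Topic To Table Map value might pair each topic with a target table. This sketch assumes a comma-separated list of topic:table pairs; the topic and table names are hypothetical, and the table names are uppercase unquoted identifiers, consistent with the limitations above:

```
orders:ORDERS_RAW,customers:CUSTOMER_EVENTS
```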
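The supported security protocols and SASL mechanisms correspond to standard Kafka client settings. As a hedged sketch, the following shows one supported combination (SASL_SSL with SCRAM-SHA-512) expressed as confluent-kafka consumer configuration; the broker host, group id, and credentials are placeholders:

```python
# Sketch of Kafka client security settings for SASL_SSL with
# SCRAM-SHA-512. All values are placeholder assumptions.
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "broker.example.com:9093",
    "group.id": "openflow-example",
    "security.protocol": "SASL_SSL",    # TLS-encrypted, SASL-authenticated
    "sasl.mechanism": "SCRAM-SHA-512",  # one of the supported mechanisms
    "sasl.username": "kafka-user",
    "sasl.password": "kafka-password",
})
```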