Set up the Openflow Connector for SQL Server¶
Note
This connector is subject to the Snowflake Connector Terms.
This topic describes how to set up the Openflow Connector for SQL Server.
For information on the incremental load process, see Incremental replication.
Prerequisites¶
Before setting up the connector, ensure that you have completed the following prerequisites:
Ensure that you have reviewed the supported SQL Server versions.
Ensure that you have set up your runtime deployment. For more information, see the following topics:
If you use Openflow - Snowflake Deployments, ensure that you have reviewed configuring required domains and have granted access to the required domains for the SQL Server connector.
Set up your SQL Server instance¶
Before setting up the connector, perform the following tasks in your SQL Server environment:
Note
You must perform these tasks as a database administrator.
Enable change tracking on the databases (https://learn.microsoft.com/en-us/sql/relational-databases/track-changes/enable-and-disable-change-tracking-sql-server?view=sql-server-ver16#enable-change-tracking-for-a-database) and tables (https://learn.microsoft.com/en-us/sql/relational-databases/track-changes/enable-and-disable-change-tracking-sql-server?view=sql-server-ver16#enable-change-tracking-for-a-table) that you plan to replicate, as shown in the following SQL Server example:
ALTER DATABASE <database> SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);

ALTER TABLE <schema>.<table> ENABLE CHANGE_TRACKING WITH (TRACK_COLUMNS_UPDATED = ON);

Note
Run these commands for every database and table that you plan to replicate.
The connector requires change tracking to be enabled on the databases and tables before replication starts. Ensure that change tracking is enabled on every table that you plan to replicate. You can also enable change tracking on additional tables while the connector is running.
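To confirm that change tracking is active, you can optionally query SQL Server's change-tracking catalog views. This check is a supplementary sketch, not part of the documented setup steps:

-- Databases with change tracking enabled
SELECT DB_NAME(database_id) AS database_name,
       retention_period,
       retention_period_units_desc,
       is_auto_cleanup_on
FROM sys.change_tracking_databases;

-- Tables with change tracking enabled in the current database
SELECT OBJECT_SCHEMA_NAME(object_id) AS schema_name,
       OBJECT_NAME(object_id) AS table_name
FROM sys.change_tracking_tables;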
Create a login for the SQL Server instance:
CREATE LOGIN <user_name> WITH PASSWORD = '<password>';
This login is used to create users for the databases you plan to replicate.
Create a user for each database you are replicating by running the following SQL Server command in each database:
USE <source_database>;
CREATE USER <user_name> FOR LOGIN <user_name>;
Grant the SELECT and VIEW CHANGE TRACKING permissions to the user for each database that you are replicating:
GRANT SELECT ON <database>.<schema>.<table> TO <user_name>;
GRANT VIEW CHANGE TRACKING ON <database>.<schema>.<table> TO <user_name>;
Run these commands in each database for every table that you plan to replicate. Grant these permissions to the database user that you created in the previous step.
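As a consolidated illustration, the following shows the full login, user, and grant setup for a single hypothetical database named sales_db with one table dbo.orders and a connector user named openflow_agent (all of these names are placeholders, not values from the official instructions):

CREATE LOGIN openflow_agent WITH PASSWORD = '<password>';

USE sales_db;
CREATE USER openflow_agent FOR LOGIN openflow_agent;

GRANT SELECT ON dbo.orders TO openflow_agent;
GRANT VIEW CHANGE TRACKING ON dbo.orders TO openflow_agent;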
(Optional) Configure an SSL connection.
If you use an SSL connection to connect to SQL Server, create the root certificate for your database server. The certificate is required when configuring the connector.
Set up your Snowflake environment¶
As a Snowflake administrator, perform the following tasks:
Create a destination database in Snowflake to store the replicated data:
CREATE DATABASE <destination_database>;
Create a Snowflake service user:
CREATE USER <openflow_user> TYPE = SERVICE COMMENT='Service user for automated access of Openflow';
Create a Snowflake role for the connector and grant the required privileges:
CREATE ROLE <openflow_role>;
GRANT ROLE <openflow_role> TO USER <openflow_user>;
GRANT USAGE ON DATABASE <destination_database> TO ROLE <openflow_role>;
GRANT CREATE SCHEMA ON DATABASE <destination_database> TO ROLE <openflow_role>;
Use this role to manage the connector's access to the Snowflake database.
To create objects in the destination database, you must grant the USAGE and CREATE SCHEMA privileges on the database to the role used to manage access.
Create a Snowflake warehouse for the connector and grant the required privileges:
CREATE WAREHOUSE <openflow_warehouse>
  WITH WAREHOUSE_SIZE = 'MEDIUM'
  AUTO_SUSPEND = 300
  AUTO_RESUME = TRUE;
GRANT USAGE, OPERATE ON WAREHOUSE <openflow_warehouse> TO ROLE <openflow_role>;
Snowflake recommends starting with a MEDIUM warehouse size, then experimenting with size depending on the number of tables being replicated and the amount of data transferred. Large numbers of tables typically scale better with multi-cluster warehouses, rather than a larger warehouse size. For more information, see multi-cluster warehouses.
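If you replicate a large number of tables, a multi-cluster warehouse may serve you better than a larger warehouse size. The following is a minimal sketch, assuming your Snowflake edition supports multi-cluster warehouses; the cluster counts and scaling policy are illustrative values, not prescribed by the official instructions:

CREATE WAREHOUSE <openflow_warehouse>
  WITH WAREHOUSE_SIZE = 'MEDIUM'
  MIN_CLUSTER_COUNT = 1
  MAX_CLUSTER_COUNT = 3
  SCALING_POLICY = 'STANDARD'
  AUTO_SUSPEND = 300
  AUTO_RESUME = TRUE;
GRANT USAGE, OPERATE ON WAREHOUSE <openflow_warehouse> TO ROLE <openflow_role>;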
Set up the public and private keys for key pair authentication:
Create a pair of secure keys (public and private).
Store the private key for the user in a file to supply to the connector's configuration.
Assign the public key to the Snowflake service user:
ALTER USER <openflow_user> SET RSA_PUBLIC_KEY = 'thekey';
For more information, see key-pair authentication and key-pair rotation.
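As an optional verification step (not part of the documented instructions), you can describe the service user in Snowflake and check that a public key fingerprint is registered:

DESC USER <openflow_user>;
-- In the output, the RSA_PUBLIC_KEY_FP property should show a non-null fingerprint.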
Set up the connector¶
As a data engineer, install and configure the connector using the following sections.
Install the connector¶
Navigate to the Openflow overview page. In the Featured connectors section, select View more connectors.
On the Openflow connectors page, find the connector and select Add to runtime.
In the Select runtime dialog, select your runtime from the Available runtimes drop-down list and click Add.
Note
Before installing the connector, ensure that you have created a database and schema in Snowflake for the connector to store ingested data.
Authenticate to the deployment with your Snowflake account credentials and select Allow when prompted to allow the runtime application to access your Snowflake account. The connector installation process takes a few minutes to complete.
Authenticate to the runtime with your Snowflake account credentials.
The Openflow canvas appears with the connector process group added to it.
Configure the connector¶
To configure the connector, perform the following steps:
Right-click on the imported process group and select Parameters.
Populate the required parameter values as described in Flow parameters.
Flow parameters¶
Start by setting the parameters of the SQLServer Source Parameters context, then the SQLServer Destination Parameters context. After you complete this, enable the connector. The connector connects to both SQL Server and Snowflake and starts running. However, the connector does not replicate any data until the tables to be replicated are explicitly added to its configuration.
To configure specific tables for replication, edit the SQLServer Ingestion Parameters context. After the changes to the SQLServer Ingestion Parameters context are applied, the connector picks up the configuration and starts the replication lifecycle for each table.
SQLServer Source Parameters context¶
| Parameter | Description |
|---|---|
| SQL Server Connection URL | The full JDBC URL pointing to the source database. For a sample URL, see the example following this table. |
| SQL Server JDBC Driver | Select the Reference asset checkbox to upload the SQL Server JDBC driver (https://learn.microsoft.com/sql/connect/jdbc/download-microsoft-jdbc-driver-for-sql-server). |
| SQL Server Username | The user name for the connector. |
| SQL Server Password | The password for the connector. |
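As an illustration only (the host, port, and database name are placeholders rather than values from the original documentation), a SQL Server JDBC connection URL typically follows the Microsoft JDBC driver format:

jdbc:sqlserver://sqlserver.example.com:1433;databaseName=testdb;encrypt=true;trustServerCertificate=false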
SQLServer Destination Parameters context¶
| Parameter | Description | Required |
|---|---|---|
| Destination Database | The database where data is persisted. It must already exist in Snowflake. The name is case-sensitive. For unquoted identifiers, provide the name in uppercase. | Yes |
| Snowflake Authentication Strategy | When using: | Yes |
| Snowflake Account Identifier | When using: | Yes |
| Snowflake Object Identifier Resolution | Specifies how source object identifiers such as schema, table, and column names are stored and queried in Snowflake. This setting determines whether you must use double quotes in SQL queries. Option 1: Default, case-insensitive (recommended). Note: Snowflake recommends this option if database objects are not expected to have mixed-case names. Important: Do not change this setting after connector ingestion has begun; doing so breaks the existing ingestion. If you must change this setting, create a new connector instance. Option 2: Case-sensitive. Note: Snowflake recommends this option if you must preserve source casing for legacy or compatibility reasons, for example if the source database includes table names that differ in case only. | Yes |
| Snowflake Private Key | When using: | No |
| Snowflake Private Key File | When using: | No |
| Snowflake Private Key Password | When using: | No |
| Snowflake Role | When using: | Yes |
| Snowflake Username | When using: | Yes |
| Snowflake Warehouse | Snowflake warehouse used to run queries. | Yes |
SQLServer Ingestion Parameters context¶
| Parameter | Description |
|---|---|
| Included Table Names | A comma-separated list of source table paths, including their databases and schemas. For a sample value, see the examples following this table. |
| Included Table Regex | A regular expression to match against table paths, including database and schema names. Every path matching the expression is replicated, and new tables matching the pattern that are created later are also included automatically. For a sample pattern, see the examples following this table. |
| Filter JSON | A JSON document containing a list of fully qualified table names and a regex pattern for the column names that should be included in replication. For the available fields, see Replicate a subset of table columns later in this topic; a sample document also follows this table. |
| Merge Task Schedule CRON | A CRON expression that defines when the merge task from the journal to the destination table is triggered. Set it to merge continuously, or to a schedule that limits warehouse run time to planned periods; sample expressions follow this table. For additional information and examples, see the cron trigger tutorial in the Quartz documentation (https://www.quartz-scheduler.org/documentation/quartz-2.2.2/tutorials/tutorial-lesson-06.html). |
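The following sample values are illustrative only. The database, schema, table, and column names, the CRON schedules, and the Filter JSON layout (modeled on the column-filter fields shown in Replicate a subset of table columns) are assumptions, not values from the original documentation.

Included Table Names:      testdb.dbo.orders, testdb.dbo.customers
Included Table Regex:      testdb\.dbo\..*
Merge Task Schedule CRON:  * * * * * ?      (merge continuously; Quartz fields are seconds, minutes, hours, day-of-month, month, day-of-week)
                           0 0 8-17 * * ?   (merge at the top of every hour between 08:00 and 17:00)
Filter JSON:               [{"schema": "dbo", "table": "orders", "includedPattern": ".*_id"}]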
Remove and re-add tables to replication¶
To remove a table from replication, remove it from the Included Table Names or Included Table Regex parameters in the Replication Parameters context.
To re-add the table to replication later, first delete the corresponding destination table in Snowflake.
Afterward, add the table back to the Included Table Names or Included Table Regex parameters.
This ensures that the replication process starts fresh for the table.
This approach can also be used to recover replication from a failed table replication scenario.
Replicate a subset of table columns¶
The connector can limit the data replicated for each table to a subset of the configured columns.
To apply a filter to columns, modify the Column Filter property in the Replication Parameters context, adding a configuration array that contains one entry for each table whose columns you want to filter.
Include or exclude columns by name or pattern. You can apply a single condition per table or combine multiple conditions; exclusions always take precedence over inclusions.
The following example shows the available fields. The schema and table fields are required. At least one of the following fields must also be provided: included, excluded, includedPattern, excludedPattern.
[
{
"schema": "<source table schema>",
"table" : "<source table name>",
"included": ["<column name>", "<column name>"],
"excluded": ["<column name>", "<column name>"],
"includedPattern": "<regular expression>",
"excludedPattern": "<regular expression>",
}
]
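As a worked illustration (with hypothetical schema, table, and column names that are not part of the original documentation), the following filter includes every column of dbo.customers by pattern but still removes ssn and credit_card_number, because exclusions take precedence over inclusions:

[
{
"schema": "dbo",
"table": "customers",
"includedPattern": ".*",
"excluded": ["ssn", "credit_card_number"]
}
]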
Track data changes in tables¶
The connector replicates the current state of data from the source tables, as well as every state of every row from every changeset. This data is stored in journal tables created in the same schema as the destination table.
The journal table names are formatted as: <source table name>_JOURNAL_<timestamp>_<schema generation>
where <timestamp> is the value of epoch seconds when the source table was added to replication, and <schema generation> is an integer increasing with every schema change on the source table.
As a result, source tables that undergo schema changes will have multiple journal tables.
When you remove a table from replication, then add it back, the <timestamp> value changes, and <schema generation> starts again from 1.
Important
Snowflake recommends not altering the structure of journal tables in any way. The connector uses them to update the destination table as part of the replication process.
The connector never drops journal tables. It uses only the latest journal for every replicated source table, reading append-only streams on top of the journals. To reclaim storage, you can do any of the following:
Truncate all journal tables at any time.
Drop the journal tables related to source tables that were removed from replication.
Drop all but the latest generation journal tables for actively replicated tables.
For example, if your connector is set to actively replicate source table orders,
and you have earlier removed table customers from replication, you may have
the following journal tables. In this case you can drop all of them except orders_5678_2.
customers_1234_1
customers_1234_2
orders_5678_1
orders_5678_2
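Continuing that example, the cleanup could be done with ordinary DROP TABLE statements in the destination schema (a sketch only; the table names above are illustrative):

DROP TABLE customers_1234_1;
DROP TABLE customers_1234_2;
DROP TABLE orders_5678_1;
-- Keep orders_5678_2: it is the latest journal for the actively replicated table.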
Configure the scheduling of merge tasks¶
The connector uses a warehouse to merge change data capture (CDC) data into the destination tables. This operation is triggered by the MergeSnowflakeJournalTable processor. If there are no new changes, or no new FlowFiles pending in the MergeSnowflakeJournalTable queue, the merge is not triggered and the warehouse auto-suspends.
Use the CRON expression in the Merge Task Schedule CRON parameter to limit warehouse cost and restrict merges to scheduled times only. It throttles the FlowFiles arriving at the MergeSnowflakeJournalTable processor, so merges are triggered only during the dedicated time period. For more information about scheduling, see Scheduling strategy (https://nifi.apache.org/docs/nifi-docs/html/user-guide.html#scheduling-strategy).
Run the flow¶
Right-click on the plane and select Enable all Controller Services.
Right-click on the imported process group and select Start. The connector starts the data ingestion.