SnowConvert AI - Redshift - SQL Statements¶
Translation reference for all the statements supported by SnowConvert AI for Redshift.
CALL¶
Description¶
Runs a stored procedure. The CALL command must include the procedure name and the input argument values. A stored procedure must be invoked with the CALL statement. (Redshift SQL Language Reference CALL (https://docs.aws.amazon.com/redshift/latest/dg/r_CALL_procedure.html)).
Syntax¶
CALL sp_name ( [ argument ] [, ...] )
Sample Source Patterns¶
Basic scenario¶
Input Code:¶
Redshift¶
CREATE PROCEDURE sp_insert_values(IN arg1 INT, IN arg2 DATE)
LANGUAGE plpgsql
AS
$$
BEGIN
INSERT INTO event VALUES (arg1, arg2);
END;
$$;
CALL sp_insert_values(1, CURRENT_DATE);
Output Code:¶
Snowflake¶
CREATE PROCEDURE sp_insert_values (arg1 INT, arg2 DATE)
RETURNS VARCHAR
LANGUAGE SQL
COMMENT = '{ "origin": "sf_sc", "name": "snowconvert", "version": { "major": 0, "minor": 0, "patch": "0" }, "attributes": { "component": "redshift", "convertedOn": "07/11/2025", "domain": "no-domain-provided" }}'
AS
$$
BEGIN
INSERT INTO event
VALUES (:arg1, :arg2);
END;
$$;
CALL sp_insert_values(1, CURRENT_DATE());
Calling with output parameter modes (INOUT, OUT)¶
Input Code:¶
Redshift¶
CREATE OR REPLACE PROCEDURE sp_calculate_sum_product(IN a NUMERIC, IN b NUMERIC, INOUT sum_result NUMERIC, INOUT product_result NUMERIC)
LANGUAGE plpgsql
AS $$
BEGIN
sum_result := a + b;
product_result := a * b;
END;
$$;
CREATE OR REPLACE PROCEDURE call_sp_calculate_sum_product()
LANGUAGE plpgsql
AS $$
DECLARE
sum_value NUMERIC DEFAULT null;
product_value NUMERIC DEFAULT null;
BEGIN
CALL sp_calculate_sum_product(FLOOR(20.5)::NUMERIC, CEIL(20.7)::NUMERIC, sum_value, product_value);
INSERT INTO test VALUES (sum_value, product_value);
END;
$$;
CALL call_sp_calculate_sum_product();
Output Code:¶
Snowflake¶
CREATE OR REPLACE PROCEDURE sp_calculate_sum_product (a NUMERIC, b NUMERIC, sum_result OUT NUMERIC, product_result OUT NUMERIC)
RETURNS VARCHAR
LANGUAGE SQL
COMMENT = '{ "origin": "sf_sc", "name": "snowconvert", "version": { "major": 0, "minor": 0, "patch": "0" }, "attributes": { "component": "redshift", "convertedOn": "07/11/2025", "domain": "no-domain-provided" }}'
AS $$
BEGIN
sum_result := a + b;
product_result := a * b;
END;
$$;
CREATE OR REPLACE PROCEDURE call_sp_calculate_sum_product ()
RETURNS VARCHAR
LANGUAGE SQL
COMMENT = '{ "origin": "sf_sc", "name": "snowconvert", "version": { "major": 0, "minor": 0, "patch": "0" }, "attributes": { "component": "redshift", "convertedOn": "07/11/2025", "domain": "no-domain-provided" }}'
AS $$
DECLARE
sum_value NUMERIC DEFAULT null;
product_value NUMERIC DEFAULT null;
BEGIN
CALL sp_calculate_sum_product(FLOOR(20.5)::NUMERIC, CEIL(20.7)::NUMERIC, :sum_value, :product_value);
INSERT INTO test
VALUES (:sum_value, :product_value);
END;
$$;
CALL call_sp_calculate_sum_product();
Known Issues¶
Output parameters are not supported for CALL statements executed outside of stored procedures.
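As a hedged illustration of this limitation, one possible workaround is to avoid top-level output parameters altogether: either wrap the CALL inside a helper procedure (as in the example above) or have the procedure RETURN the value so it can be read directly from the CALL result. The procedure name sp_sum below is hypothetical.
-- Hypothetical sketch: return the value instead of relying on an OUT parameter at the top level.
CREATE OR REPLACE PROCEDURE sp_sum(a NUMERIC, b NUMERIC)
RETURNS NUMERIC
LANGUAGE SQL
AS
$$
BEGIN
    RETURN a + b;
END;
$$;
-- The result is read from the CALL output rather than from an output parameter.
CALL sp_sum(1, 2);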
CREATE DATABASE¶
Syntax¶
CREATE DATABASE database_name
[ { [ WITH ]
[ OWNER [=] db_owner ]
[ CONNECTION LIMIT { limit | UNLIMITED } ]
[ COLLATE { CASE_SENSITIVE | CASE_INSENSITIVE } ]
[ ISOLATION LEVEL { SERIALIZABLE | SNAPSHOT } ]
}
| { [ WITH PERMISSIONS ] FROM DATASHARE datashare_name OF [ ACCOUNT account_id ] NAMESPACE namespace_guid }
| { FROM { { ARN '<arn>' } { WITH DATA CATALOG SCHEMA '<schema>' | WITH NO DATA CATALOG SCHEMA } }
| { INTEGRATION '<integration_id>'} }
| { IAM_ROLE {default | 'SESSION' | 'arn:aws:iam::<account-id>:role/<role-name>' } }
For more information, see the Redshift CREATE DATABASE documentation (https://docs.aws.amazon.com/redshift/latest/dg/r_CREATE_DATABASE.html).
Sample Source Patterns¶
Basic example¶
Input Code:¶
Redshift¶
CREATE DATABASE database_name;
Output Code:¶
Snowflake¶
CREATE DATABASE IF NOT EXISTS database_name
COMMENT = '{ "origin": "sf_sc", "name": "snowconvert", "version": { "major": 0, "minor": 0, "patch": "0" }, "attributes": { "component": "redshift", "convertedOn": "09/25/2024" }}';
Collate clause¶
Input Code:¶
Redshift¶
CREATE DATABASE database_collate
COLLATE CASE_INSENSITIVE;
Output Code:¶
Snowflake¶
CREATE DATABASE IF NOT EXISTS database_collate
DEFAULT_DDL_COLLATION='en-ci'
COMMENT = '{ "origin": "sf_sc", "name": "snowconvert", "version": { "major": 0, "minor": 0, "patch": "0" }, "attributes": { "component": "redshift", "convertedOn": "09/24/2024" }}';
Connection Limit clause¶
Input Code:¶
Redshift¶
CREATE DATABASE database_connection
CONNECTION LIMIT UNLIMITED;
Output Code:¶
Snowflake¶
CREATE DATABASE IF NOT EXISTS database_connection
COMMENT = '{ "origin": "sf_sc", "name": "snowconvert", "version": { "major": 0, "minor": 0, "patch": "0" }, "attributes": { "component": "redshift", "convertedOn": "09/24/2024" }}';
Warning
The connection limit clause is removed because connection concurrency in Snowflake is managed by warehouses. For more information, click here.
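As a hedged reference point, concurrency on the Snowflake side is tuned per warehouse rather than per database; the following sketch shows the kind of setting involved (the warehouse name my_wh is hypothetical).
-- Hypothetical warehouse; MAX_CONCURRENCY_LEVEL controls concurrent statements per warehouse.
ALTER WAREHOUSE my_wh SET MAX_CONCURRENCY_LEVEL = 8;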
From ARN clause¶
Input Code:¶
Redshift¶
CREATE DATABASE database_fromARN
FROM ARN 'arn' WITH NO DATA CATALOG SCHEMA IAM_ROLE 'arn:aws:iam::<account-id>:role/<role-name>';
Output Code:¶
Snowflake¶
CREATE DATABASE IF NOT EXISTS database_fromARN
COMMENT = '{ "origin": "sf_sc", "name": "snowconvert", "version": { "major": 0, "minor": 0, "patch": "0" }, "attributes": { "component": "redshift", "convertedOn": "09/24/2024" }}';
Warning
The clause is removed because it references Amazon resources (https://docs.aws.amazon.com/IAM/latest/UserGuide/reference-arns.html), which are not valid in Snowflake.
Owner clause¶
Input Code¶
Redshift¶
CREATE DATABASE database_Owner
OWNER db_owner
ENCODING 'encoding';
Output Code¶
Snowflake¶
CREATE DATABASE IF NOT EXISTS database_Owner
COMMENT = '{ "origin": "sf_sc", "name": "snowconvert", "version": { "major": 0, "minor": 0, "patch": "0" }, "attributes": { "component": "redshift", "convertedOn": "09/24/2024" }}';
Warning
Note that in this case the owner clause is removed from the code, because Snowflake databases are owned by roles rather than individual users. For more information, see the Snowflake GRANT OWNERSHIP documentation.
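If ownership still needs to be assigned after migration, a hedged sketch of the Snowflake-side equivalent is a GRANT OWNERSHIP statement (the role name below is hypothetical).
-- db_owner_role is a hypothetical role that takes ownership of the migrated database.
GRANT OWNERSHIP ON DATABASE database_Owner TO ROLE db_owner_role;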
Isolation Level clause¶
Input Code¶
Redshift¶
CREATE DATABASE database_Isolation
ISOLATION LEVEL SNAPSHOT;
Output Code¶
Snowflake¶
CREATE DATABASE IF NOT EXISTS database_Isolation
ISOLATION LEVEL SNAPSHOT !!!RESOLVE EWI!!! /*** SSC-EWI-0073 - PENDING FUNCTIONAL EQUIVALENCE REVIEW FOR 'IsolationLevelAttribute' NODE ***/!!!
COMMENT = '{ "origin": "sf_sc", "name": "snowconvert", "version": { "major": 0, "minor": 0, "patch": "0" }, "attributes": { "component": "redshift", "convertedOn": "09/24/2024" }}';
Note
The transformation of Isolation Level is planned for a future release.
Related EWIs¶
SSC-EWI-0073: Pending functional equivalence review.
CREATE EXTERNAL TABLE¶
Description¶
Currently, SnowConvert AI transforms CREATE EXTERNAL TABLE statements into regular tables. This implies additional effort, because the data stored in external Redshift tables must be transferred to the Snowflake database.
Syntax¶
CREATE EXTERNAL TABLE
external_schema.table_name
(column_name data_type [, …] )
[ PARTITIONED BY (col_name data_type [, … ] )]
[ { ROW FORMAT DELIMITED row_format |
ROW FORMAT SERDE 'serde_name'
[ WITH SERDEPROPERTIES ( 'property_name' = 'property_value' [, ...] ) ] } ]
STORED AS file_format
LOCATION { 's3://bucket/folder/' | 's3://bucket/manifest_file' }
[ TABLE PROPERTIES ( 'property_name'='property_value' [, ...] ) ]
CREATE EXTERNAL TABLE
external_schema.table_name
[ PARTITIONED BY (col_name [, … ] ) ]
[ ROW FORMAT DELIMITED row_format ]
STORED AS file_format
LOCATION { 's3://bucket/folder/' }
[ TABLE PROPERTIES ( 'property_name'='property_value' [, ...] ) ]
AS
{ select_statement }
Click here (https://docs.aws.amazon.com/redshift/latest/dg/r_CREATE_EXTERNAL_TABLE.html) to see the specification for this syntax.
Sample Source Patterns¶
Input Code:¶
Redshift¶
CREATE EXTERNAL TABLE
external_schema.sales_data
(
sales_id INT,
product_id INT,
sales_amount DECIMAL(10, 2),
sales_date DATE
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
STORED AS TEXTFILE
LOCATION 's3://mybucket/sales_data/';
Output Code:¶
Snowflake¶
--** SSC-FDM-0004 - EXTERNAL TABLE TRANSLATED TO REGULAR TABLE **
CREATE TABLE external_schema.sales_data
(
sales_id INT,
product_id INT,
sales_amount DECIMAL(10, 2),
sales_date DATE
)
--ROW FORMAT DELIMITED
--FIELDS TERMINATED BY ','
--STORED AS TEXTFILE
--LOCATION 's3://mybucket/sales_data/'
COMMENT = '{ "origin": "sf_sc", "name": "snowconvert", "version": { "major": 0, "minor": 0, "patch": "0" }, "attributes": { "component": "redshift", "convertedOn": "09/17/2024" }}'
;
Create External Table AS¶
Input Code:¶
Redshift¶
CREATE EXTERNAL TABLE spectrum.partitioned_lineitem
PARTITIONED BY (l_shipdate, l_shipmode)
STORED AS parquet
LOCATION 'S3://amzn-s3-demo-bucket/cetas/partitioned_lineitem/'
AS SELECT l_orderkey, l_shipmode, l_shipdate, l_partkey FROM local_table;
Output Code:¶
Snowflake¶
--** SSC-FDM-0004 - EXTERNAL TABLE TRANSLATED TO REGULAR TABLE **
CREATE TABLE spectrum.partitioned_lineitem
COMMENT = '{ "origin": "sf_sc", "name": "snowconvert", "version": { "major": 0, "minor": 0, "patch": "0" }, "attributes": { "component": "redshift", "convertedOn": "09/17/2024" }}'
--PARTITIONED BY (l_shipdate, l_shipmode)
--STORED AS parquet
--LOCATION 'S3://amzn-s3-demo-bucket/cetas/partitioned_lineitem/'
AS SELECT l_orderkey, l_shipmode, l_shipdate, l_partkey FROM
local_table;
Recommendations¶
For the usage of CREATE EXTERNAL TABLE in Snowflake, refer to the Snowflake documentation.
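If keeping the data external is preferred over the regular-table transformation, a hedged sketch of a native Snowflake external table over the same S3 location might look like the following. The stage name, file format name, and credential placeholders are hypothetical.
-- Hypothetical stage and file format for the S3 location used in the example above.
CREATE STAGE sales_stage
  URL = 's3://mybucket/sales_data/'
  CREDENTIALS = (AWS_KEY_ID = '<key>' AWS_SECRET_KEY = '<secret>');

CREATE FILE FORMAT sales_csv TYPE = CSV FIELD_DELIMITER = ',';

-- External table columns are projected from the raw VALUE record of each CSV row.
CREATE EXTERNAL TABLE external_schema.sales_data_ext (
  sales_id INT AS (VALUE:c1::INT),
  product_id INT AS (VALUE:c2::INT),
  sales_amount DECIMAL(10, 2) AS (VALUE:c3::DECIMAL(10, 2)),
  sales_date DATE AS (VALUE:c4::DATE)
)
WITH LOCATION = @sales_stage
FILE_FORMAT = (FORMAT_NAME = 'sales_csv')
AUTO_REFRESH = FALSE;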
Related EWIs¶
SSC-FDM-0004: External table translated to regular table.
CREATE MATERIALIZED VIEW¶
Description¶
In SnowConvert AI, Redshift Materialized Views are transformed into Snowflake Dynamic Tables. To properly configure Dynamic Tables, two essential parameters must be defined: TARGET_LAG and WAREHOUSE. If these parameters are left unspecified in the configuration options, SnowConvert AI will default to preassigned values during the conversion, as demonstrated in the example below.
For more information about materialized views, click here (https://docs.aws.amazon.com/redshift/latest/dg/materialized-view-create-sql-command.html).
For details about the required parameters for dynamic tables, click here.
Syntax¶
The following SQL syntax is used to create a materialized view in Amazon Redshift. Click here (https://docs.aws.amazon.com/redshift/latest/dg/materialized-view-create-sql-command.html) to see the Redshift specification for this syntax.
CREATE MATERIALIZED VIEW mv_name
[ BACKUP { YES | NO } ]
[ table_attributes ]
[ AUTO REFRESH { YES | NO } ]
AS query
Sample Source Patterns¶
Input Code:¶
Redshift¶
CREATE MATERIALIZED VIEW mv_baseball AS
SELECT ball AS baseball FROM baseball_table;
Output Code:¶
Snowflake¶
CREATE DYNAMIC TABLE mv_baseball
--** SSC-FDM-0031 - DYNAMIC TABLE REQUIRED PARAMETERS SET BY DEFAULT **
TARGET_LAG='1 day'
WAREHOUSE=UPDATE_DUMMY_WAREHOUSE
COMMENT = '{ "origin": "sf_sc", "name": "snowconvert", "version": { "major": 0, "minor": 0, "patch": "0" }, "attributes": { "component": "redshift", "convertedOn": "11/26/2024", "domain": "test" }}'
AS
SELECT ball AS baseball FROM
baseball_table;
Warning
The BACKUP and AUTO REFRESH clauses are removed because they do not apply to Snowflake dynamic tables.
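After conversion, the placeholder warehouse and default target lag are expected to be replaced with real values; a hedged sketch of that adjustment follows (the warehouse name and lag value are hypothetical).
-- Replace the defaults emitted by the conversion with real settings.
ALTER DYNAMIC TABLE mv_baseball SET
  TARGET_LAG = '5 minutes'      -- hypothetical refresh lag
  WAREHOUSE = my_refresh_wh;    -- hypothetical warehouse replacing UPDATE_DUMMY_WAREHOUSE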
Related EWIs¶
SSC-FDM-0031: Dynamic table required parameters set by default.
CREATE SCHEMA¶
Syntax¶
CREATE SCHEMA [ IF NOT EXISTS ] schema_name [ AUTHORIZATION username ]
[ QUOTA {quota [MB | GB | TB] | UNLIMITED} ] [ schema_element [ ... ] ]
CREATE SCHEMA AUTHORIZATION username [ QUOTA {quota [MB | GB | TB] | UNLIMITED} ]
[ schema_element [ ... ] ]
For more information, see the Redshift CREATE SCHEMA documentation (https://docs.aws.amazon.com/redshift/latest/dg/r_CREATE_SCHEMA.html).
Sample Source Patterns¶
Basic example¶
Input Code:¶
Redshift¶
CREATE SCHEMA s1;
CREATE SCHEMA IF NOT EXISTS s2;
CREATE SCHEMA s3
CREATE TABLE t1
(
col1 INT
)
CREATE VIEW v1 AS SELECT * FROM t1;
Output Code:¶
Snowflake¶
CREATE SCHEMA IF NOT EXISTS s1
COMMENT = '{ "origin": "sf_sc", "name": "snowconvert", "version": { "major": 0, "minor": 0, "patch": "0" }, "attributes": { "component": "redshift", "convertedOn": "09/23/2024" }}'
;
CREATE SCHEMA IF NOT EXISTS s2
COMMENT = '{ "origin": "sf_sc", "name": "snowconvert", "version": { "major": 0, "minor": 0, "patch": "0" }, "attributes": { "component": "redshift", "convertedOn": "09/23/2024" }}'
;
CREATE SCHEMA IF NOT EXISTS s3
COMMENT = '{ "origin": "sf_sc", "name": "snowconvert", "version": { "major": 0, "minor": 0, "patch": "0" }, "attributes": { "component": "redshift", "convertedOn": "09/23/2024" }}'
;
CREATE TABLE t1
(
col1 INT
)
COMMENT = '{ "origin": "sf_sc", "name": "snowconvert", "version": { "major": 0, "minor": 0, "patch": "0" }, "attributes": { "component": "redshift", "convertedOn": "09/23/2024" }}'
;
CREATE VIEW v1
COMMENT = '{ "origin": "sf_sc", "name": "snowconvert", "version": { "major": 0, "minor": 0, "patch": "0" }, "attributes": { "component": "redshift", "convertedOn": "09/23/2024" }}'
AS SELECT * FROM
t1;
Quota clause¶
Input Code:¶
Redshift¶
CREATE SCHEMA s1 QUOTA UNLIMITED;
CREATE SCHEMA s2 QUOTA 10 TB;
Output Code:¶
Snowflake¶
CREATE SCHEMA IF NOT EXISTS s1
COMMENT = '{ "origin": "sf_sc", "name": "snowconvert", "version": { "major": 0, "minor": 0, "patch": "0" }, "attributes": { "component": "redshift", "convertedOn": "09/23/2024" }}'
;
CREATE SCHEMA IF NOT EXISTS s2
COMMENT = '{ "origin": "sf_sc", "name": "snowconvert", "version": { "major": 0, "minor": 0, "patch": "0" }, "attributes": { "component": "redshift", "convertedOn": "09/23/2024" }}'
;
Note
In Snowflake, defining quotas per schema is not allowed. Storage management is handled at the account and warehouse level, and Snowflake takes care of it automatically. Therefore, the clause is removed from the code.
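As a hedged substitute for a per-schema quota, storage per schema can still be monitored from the account usage views; a minimal sketch, assuming access to the SNOWFLAKE.ACCOUNT_USAGE share, follows.
-- Approximate active storage per schema, in GB.
SELECT table_schema,
       SUM(active_bytes) / POWER(1024, 3) AS active_gb
FROM snowflake.account_usage.table_storage_metrics
GROUP BY table_schema
ORDER BY active_gb DESC;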
Related EWIs¶
No known issues.
CREATE FUNCTION¶
Description¶
This command defines a user-defined function (UDF) within the database. These functions encapsulate reusable logic that can be invoked within SQL queries.
Syntax¶
The following SQL syntax is used to create a user-defined function in Amazon Redshift. Click here (https://docs.aws.amazon.com/redshift/latest/dg/r_CREATE_FUNCTION.html) to see the Redshift specification for this syntax.
CREATE [ OR REPLACE ] FUNCTION f_function_name
( { [py_arg_name py_arg_data_type |
sql_arg_data_type } [ , ... ] ] )
RETURNS data_type
{ VOLATILE | STABLE | IMMUTABLE }
AS $$
{ python_program | SELECT_clause }
$$ LANGUAGE { plpythonu | sql }
SQL Language¶
Volatility category¶
In Snowflake, the VOLATILE and IMMUTABLE volatility categories are functionally equivalent. Given that STABLE is inherently transformed to the default VOLATILE behavior, explicit use of STABLE will be deleted.
Input Code:¶
Redshift¶
CREATE OR REPLACE FUNCTION get_sale(INTEGER)
RETURNS FLOAT
STABLE
AS $$
SELECT price FROM sales where id = $1
$$ LANGUAGE SQL;
Output Code:¶
Snowflake¶
CREATE OR REPLACE FUNCTION get_sale (SC_ARG1 INTEGER)
RETURNS FLOAT
LANGUAGE SQL
COMMENT = '{ "origin": "sf_sc", "name": "snowconvert", "version": { "major": 0, "minor": 0, "patch": "0" }, "attributes": { "component": "redshift", "convertedOn": "07/11/2025", "domain": "no-domain-provided" }}'
AS $$
SELECT price FROM
sales
where id = SC_ARG1
$$
;
Python Language¶
Within the SnowConvert AI scope, the Python language for CREATE FUNCTION statements is not supported. Consequently, the plpythonu language will be flagged with an EWI (SSC-EWI-0073), and its body may appear with parsing errors. A possible manual rewrite is sketched after the example below.
Input Code:¶
Redshift¶
create function f_py_greater (a float, b float)
returns float
stable
as $$
if a > b:
return a
return b
$$ language plpythonu;
Output Code:¶
Snowflake¶
create function f_py_greater (a float, b float)
returns float
language plpythonu !!!RESOLVE EWI!!! /*** SSC-EWI-0073 - PENDING FUNCTIONAL EQUIVALENCE REVIEW FOR 'LANGUAGE PLPythonU' NODE ***/!!!
COMMENT = '{ "origin": "sf_sc", "name": "snowconvert", "version": { "major": 0, "minor": 0, "patch": "0" }, "attributes": { "component": "redshift", "convertedOn": "07/11/2025", "domain": "no-domain-provided" }}'
as $$
-- ** SSC-EWI-0001 - UNRECOGNIZED TOKEN ON LINE '5' COLUMN '3' OF THE SOURCE CODE STARTING AT 'if'. EXPECTED 'STATEMENT' GRAMMAR. LAST MATCHING TOKEN WAS 'if' ON LINE '5' COLUMN '3'. **
-- if a > b:
-- return a
-- return b
$$
;
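Since the plpythonu body is left commented out, one hedged option is to rewrite it manually as a Snowflake Python UDF; a minimal sketch of such a rewrite follows (the runtime version and handler name are assumptions, not SnowConvert AI output).
create or replace function f_py_greater (a float, b float)
returns float
language python
runtime_version = '3.10'
handler = 'py_greater'
as $$
# handler name py_greater is an assumption for this sketch
def py_greater(a, b):
    return a if a > b else b
$$;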
Related EWIs¶
No known issues.
CREATE VIEW¶
Description¶
This command creates a view in the database, which runs every time the view is referenced in a query. With the WITH NO SCHEMA BINDING clause, a view can be created for an external table or an object that does not yet exist. However, this clause requires you to specify the qualified name of the referenced object or table.
Syntax¶
The following SQL syntax is used to create a view in Amazon Redshift. Click here (https://docs.aws.amazon.com/redshift/latest/dg/r_CREATE_VIEW.html) to see the Redshift specification for this syntax.
CREATE [ OR REPLACE ] VIEW name [ ( column_name [, ...] ) ] AS query
[ WITH NO SCHEMA BINDING ]
Sample Source Patterns¶
Considering the required and optional clauses of the Redshift command, the output after migration to Snowflake is very similar.
Input Code:¶
Redshift¶
CREATE VIEW myuser
AS
SELECT lastname FROM users;
CREATE VIEW myuser2
AS
SELECT lastname FROM users2
WITH NO SCHEMA BINDING;
Output Code:¶
Snowflake¶
CREATE VIEW myuser
COMMENT = '{ "origin": "sf_sc", "name": "snowconvert", "version": { "major": 0, "minor": 0, "patch": "0" }, "attributes": { "component": "redshift", "convertedOn": "01/16/2025", "domain": "test" }}'
AS
SELECT lastname FROM
users;
CREATE VIEW myuser2
COMMENT = '{ "origin": "sf_sc", "name": "snowconvert", "version": { "major": 0, "minor": 0, "patch": "0" }, "attributes": { "component": "redshift", "convertedOn": "01/16/2025", "domain": "test" }}'
AS
SELECT lastname FROM
users2
!!!RESOLVE EWI!!! /*** SSC-EWI-RS0003 - WITH NO SCHEMA BINDING STATEMENT CAN NOT BE REMOVED DUE TO MISSING REFERENCES. ***/!!!
WITH NO SCHEMA BINDING;
However, there are some exceptions: a few Redshift clauses are not supported, so an EWI is emitted to cover those cases.
Related EWIs¶
SSC-EWI-RS0003: WITH NO SCHEMA BINDING statement is not supported in Snowflake.
DELETE¶
Description¶
Deletes rows from a table. (Redshift SQL Language Reference DELETE statement (https://docs.aws.amazon.com/redshift/latest/dg/r_DELETE.html)).
Note
This syntax is fully supported in Snowflake.
Syntax¶
[ WITH [RECURSIVE] common_table_expression [, common_table_expression , ...] ]
DELETE [ FROM ] { table_name | materialized_view_name }
[ USING table_name, ... ]
[ WHERE condition ]
Sample Source Patterns¶
Setup data¶
Redshift¶
CREATE TABLE employees (
id INT PRIMARY KEY,
name VARCHAR(255) NOT NULL,
department VARCHAR(255),
manager_id INT REFERENCES employees(id)
);
INSERT INTO employees (id, name, department, manager_id) VALUES
(1, 'Alice', 'Sales', 2),
(2, 'Bob', 'Sales', 1),
(3, 'Charlie', 'Sales', 1),
(4, 'David', 'Marketing', 2),
(5, 'Eve', 'Marketing', 4),
(6, 'Frank', 'Marketing', 4),
(7, 'Grace', 'Engineering', 6),
(8, 'Helen', 'Engineering', 7),
(9, 'Ivy', 'Engineering', 7),
(10, 'John', 'Sales', 3),
(11, 'Joe', 'Engineering', 5);
CREATE TABLE departments (
department_name VARCHAR(255)
);
INSERT INTO departments (department_name) VALUES
('Sales'),
('Marketing'),
('Engineering');
From clause¶
In Redshift, the FROM keyword is optional, but in Snowflake it is mandatory; therefore, it is added when it is missing.
Input Code:¶
Redshift¶
DELETE employees;
SELECT * FROM employees ORDER BY id;
Result¶
| ID | NAME | DEPARTMENT | MANAGER_ID |
|---|---|---|---|
Output Code:¶
Snowflake¶
DELETE FROM
employees;
SELECT * FROM employees ORDER BY id;
Result¶
| ID | NAME | DEPARTMENT | MANAGER_ID |
|---|---|---|---|
Where clause¶
Restricts the deletion to rows that meet the condition. The condition can be a simple predicate on a column or a condition based on the result of a subquery. This clause is fully equivalent in Snowflake.
Input Code:¶
Redshift¶
DELETE FROM employees
WHERE department = 'Marketing';
SELECT * FROM employees
ORDER BY id;
Result¶
| ID | NAME | DEPARTMENT | MANAGER_ID |
|---|---|---|---|
| 1 | Alice | Sales | 2 |
| 2 | Bob | Sales | 1 |
| 3 | Charlie | Sales | 1 |
| 7 | Grace | Engineering | 6 |
| 8 | Helen | Engineering | 7 |
| 9 | Ivy | Engineering | 7 |
| 10 | John | Sales | 3 |
| 11 | Joe | Engineering | 5 |
Output Code:¶
Snowflake¶
DELETE FROM
employees
WHERE department = 'Marketing';
SELECT * FROM
employees
ORDER BY id;
Result¶
| ID | NAME | DEPARTMENT | MANAGER_ID |
|---|---|---|---|
| 1 | Alice | Sales | 2 |
| 2 | Bob | Sales | 1 |
| 3 | Charlie | Sales | 1 |
| 7 | Grace | Engineering | 6 |
| 8 | Helen | Engineering | 7 |
| 9 | Ivy | Engineering | 7 |
| 10 | John | Sales | 3 |
| 11 | Joe | Engineering | 5 |
Using clause¶
This clause introduces a list of tables when additional tables are referenced in the WHERE clause condition. This clause is fully equivalent in Snowflake.
Input Code:¶
Redshift¶
DELETE FROM employees
USING departments d
WHERE employees.department = d.department_name
AND d.department_name = 'Sales';
SELECT * FROM employees ORDER BY id;
Result¶
| ID | NAME | DEPARTMENT | MANAGER_ID |
|---|---|---|---|
| 4 | David | Marketing | 2 |
| 5 | Eve | Marketing | 4 |
| 6 | Frank | Marketing | 4 |
| 7 | Grace | Engineering | 6 |
| 8 | Helen | Engineering | 7 |
| 9 | Ivy | Engineering | 7 |
| 11 | Joe | Engineering | 5 |
Output Code:¶
Snowflake¶
DELETE FROM employees
USING departments d
WHERE employees.department = d.department_name
AND d.department_name = 'Sales';
SELECT * FROM employees ORDER BY id;
Result¶
| ID | NAME | DEPARTMENT | MANAGER_ID |
|---|---|---|---|
| 4 | David | Marketing | 2 |
| 5 | Eve | Marketing | 4 |
| 6 | Frank | Marketing | 4 |
| 7 | Grace | Engineering | 6 |
| 8 | Helen | Engineering | 7 |
| 9 | Ivy | Engineering | 7 |
| 11 | Joe | Engineering | 5 |
WITH clause¶
This clause specifies one or more common table expressions (CTEs). Output column names are optional for non-recursive CTEs, but they are mandatory for recursive ones.
Because this clause cannot be used in a DELETE statement, it is transformed into temporary tables with the corresponding queries. After the DELETE statement is executed, these temporary tables are dropped to clean up, free resources, and avoid name collisions when tables are created in the same session. Additionally, if a regular table with the same name exists, it regains precedence, because a temporary table takes precedence over any other table with the same name in the same session.
Non-recursive CTE¶
Input Code:¶
Redshift¶
WITH sales_employees AS (
SELECT id
FROM employees
WHERE department = 'Sales'
), engineering_employees AS (
SELECT id
FROM employees
WHERE department = 'Engineering'
)
DELETE FROM employees
WHERE id IN (SELECT id FROM sales_employees)
OR id IN (SELECT id FROM engineering_employees);
SELECT * FROM employees ORDER BY id;
Result¶
| ID | NAME | DEPARTMENT | MANAGER_ID |
|---|---|---|---|
| 4 | David | Marketing | 2 |
| 5 | Eve | Marketing | 4 |
| 6 | Frank | Marketing | 4 |
Output Code:¶
Snowflake¶
CREATE TEMPORARY TABLE sales_employees AS
SELECT id
FROM employees
WHERE department = 'Sales';
CREATE TEMPORARY TABLE engineering_employees AS
SELECT id
FROM employees
WHERE department = 'Engineering';
DELETE FROM
employees
WHERE id IN (SELECT id FROM sales_employees)
OR id IN (SELECT id FROM engineering_employees);
DROP TABLE sales_employees;
DROP TABLE engineering_employees;
SELECT * FROM
employees
ORDER BY id;
Result¶
| ID | NAME | DEPARTMENT | MANAGER_ID |
|---|---|---|---|
| 4 | David | Marketing | 2 |
| 5 | Eve | Marketing | 4 |
| 6 | Frank | Marketing | 4 |
Recursive CTE¶
Input Code:¶
Redshift¶
WITH RECURSIVE subordinate_hierarchy(id, name, department, level) AS (
SELECT id, name, department, 0 as level
FROM employees
WHERE department = 'Marketing'
UNION ALL
SELECT e.id, e.name, e.department, sh.level + 1
FROM employees e
INNER JOIN subordinate_hierarchy sh ON e.manager_id = sh.id
)
DELETE FROM employees
WHERE id IN (SELECT id FROM subordinate_hierarchy);
Result¶
| ID | NAME | DEPARTMENT | MANAGER_ID |
|---|---|---|---|
| 1 | Alice | Sales | 2 |
| 2 | Bob | Sales | 1 |
| 3 | Charlie | Sales | 1 |
| 10 | John | Sales | 3 |
Output Code:¶
Snowflake¶
CREATE TEMPORARY TABLE subordinate_hierarchy AS
WITH RECURSIVE subordinate_hierarchy(id, name, department, level) AS (
SELECT id, name, department, 0 as level
FROM
employees
WHERE department = 'Marketing'
UNION ALL
SELECT e.id, e.name, e.department, sh.level + 1
FROM
employees e
INNER JOIN
subordinate_hierarchy sh ON e.manager_id = sh.id
)
SELECT
id,
name,
department,
level
FROM
subordinate_hierarchy;
DELETE FROM
employees
WHERE id IN (SELECT id FROM
subordinate_hierarchy
);
DROP TABLE subordinate_hierarchy;
Result¶
| ID | NAME | DEPARTMENT | MANAGER_ID |
|---|---|---|---|
| 1 | Alice | Sales | 2 |
| 2 | Bob | Sales | 1 |
| 3 | Charlie | Sales | 1 |
| 10 | John | Sales | 3 |
Deleting from a materialized view¶
In Redshift, a DELETE statement can be applied to materialized views used for streaming ingestion (https://docs.aws.amazon.com/redshift/latest/dg/materialized-view-streaming-ingestion.html). In Snowflake, these views are transformed into dynamic tables, and DELETE statements cannot be used on dynamic tables. Therefore, an EWI is added.
Input Code:¶
Redshift¶
CREATE MATERIALIZED VIEW emp_mv AS
SELECT id, name, department FROM employees WHERE department = 'Engineering';
DELETE FROM emp_mv
WHERE id = 2;
Output Code:¶
Snowflake¶
CREATE DYNAMIC TABLE emp_mv
--** SSC-FDM-0031 - DYNAMIC TABLE REQUIRED PARAMETERS SET BY DEFAULT **
TARGET_LAG='1 day'
WAREHOUSE=UPDATE_DUMMY_WAREHOUSE
COMMENT = '{ "origin": "sf_sc", "name": "snowconvert", "version": { "major": 0, "minor": 0, "patch": "0" }, "attributes": { "component": "redshift", "convertedOn": "02/11/2025", "domain": "test" }}'
AS
SELECT id, name, department FROM
employees
WHERE department = 'Engineering';
!!!RESOLVE EWI!!! /*** SSC-EWI-RS0008 - MATERIALIZED VIEW IS TRANSFORMED INTO A DYNAMIC TABLE, AND THE DELETE STATEMENT CANNOT BE USED ON DYNAMIC TABLES. ***/!!!
DELETE FROM
emp_mv
WHERE id = 2;
Known Issues¶
To replicate the functionality of the WITH clause, temporary tables corresponding to each common table expression (CTE) are created. However, if a temporary table with the same name already exists in the current session, this approach will fail and cause an error.
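A hedged way to reduce this risk is to drop or replace any leftover table of the same name before the converted statements run; a minimal sketch, reusing the CTE name from the example above, follows.
-- Minimal sketch: replace any leftover table of the same name before the converted WITH-clause emulation runs.
CREATE OR REPLACE TEMPORARY TABLE sales_employees AS
SELECT id
FROM employees
WHERE department = 'Sales';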
Related EWIs¶
SSC-FDM-0031: Dynamic table required parameters set by default.
SSC-EWI-RS0008: The DELETE statement cannot be used on dynamic tables.
EXECUTE¶
Description¶
The EXECUTE statement builds and runs a dynamic SQL statement in a single operation. Native dynamic SQL uses the EXECUTE statement to process most dynamic SQL statements. (Redshift Language Reference EXECUTE Statement (https://docs.aws.amazon.com/redshift/latest/dg/c_PLpgSQL-statements.html#r_PLpgSQL-dynamic-sql))
Syntax¶
EXECUTE command-string [ INTO target ];
Sample Source Patterns¶
Concatenated Example
Input Code
Redshift¶
CREATE OR REPLACE PROCEDURE create_dynamic_table(table_name VARCHAR)
AS $$
DECLARE
sql_statement VARCHAR;
BEGIN
sql_statement := 'CREATE TABLE IF NOT EXISTS ' || table_name || ' (id INT, value VARCHAR);';
EXECUTE sql_statement;
END;
$$ LANGUAGE plpgsql;
Output Code
Snowflake¶
CREATE OR REPLACE PROCEDURE create_dynamic_table (table_name VARCHAR)
RETURNS VARCHAR
LANGUAGE SQL
COMMENT = '{ "origin": "sf_sc", "name": "snowconvert", "version": { "major": 0, "minor": 0, "patch": "0" }, "attributes": { "component": "redshift", "convertedOn": "07/11/2025", "domain": "no-domain-provided" }}'
AS $$
DECLARE
sql_statement VARCHAR;
BEGIN
sql_statement := 'CREATE TABLE IF NOT EXISTS ' || table_name || ' (id INT, value VARCHAR)';
!!!RESOLVE EWI!!! /*** SSC-EWI-0030 - THE STATEMENT BELOW HAS USAGES OF DYNAMIC SQL. ***/!!!
EXECUTE IMMEDIATE sql_statement;
END;
$$;
Function Transformation¶
Input Code¶
Redshift¶
CREATE OR REPLACE PROCEDURE insert_with_dynamic()
AS $$
DECLARE
sql_statement VARCHAR;
BEGIN
sql_statement := 'insert into orders(order_date) values ("getdate"());';
EXECUTE sql_statement;
END;
$$ LANGUAGE plpgsql;
Output Code¶
Snowflake¶
CREATE OR REPLACE PROCEDURE insert_with_dynamic ()
RETURNS VARCHAR
LANGUAGE SQL
COMMENT = '{ "origin": "sf_sc", "name": "snowconvert", "version": { "major": 0, "minor": 0, "patch": "0" }, "attributes": { "component": "redshift", "convertedOn": "07/11/2025", "domain": "no-domain-provided" }}'
AS $$
DECLARE
sql_statement VARCHAR;
BEGIN
sql_statement := 'insert into orders (order_date) values (GETDATE())';
!!!RESOLVE EWI!!! /*** SSC-EWI-0030 - THE STATEMENT BELOW HAS USAGES OF DYNAMIC SQL. ***/!!!
EXECUTE IMMEDIATE sql_statement;
END;
$$;
Error In Query Parsing¶
Input Code¶
Redshift¶
CREATE OR REPLACE PROCEDURE bad_statement(table_name VARCHAR)
AS $$
DECLARE
sql_statement VARCHAR;
BEGIN
sql_statement := 'bad statement goes here';
EXECUTE sql_statement;
END;
$$ LANGUAGE plpgsql;
Output Code¶
Snowflake¶
CREATE OR REPLACE PROCEDURE bad_statement (table_name VARCHAR)
RETURNS VARCHAR
LANGUAGE SQL
COMMENT = '{ "origin": "sf_sc", "name": "snowconvert", "version": { "major": 0, "minor": 0, "patch": "0" }, "attributes": { "component": "redshift", "convertedOn": "07/11/2025", "domain": "no-domain-provided" }}'
AS $$
DECLARE
sql_statement VARCHAR;
BEGIN
sql_statement := 'bad statement goes here';
!!!RESOLVE EWI!!! /*** SSC-EWI-0030 - THE STATEMENT BELOW HAS USAGES OF DYNAMIC SQL. ***/!!!!!!RESOLVE EWI!!! /*** SSC-EWI-0027 - THE FOLLOWING STATEMENT USES A VARIABLE/LITERAL WITH AN INVALID QUERY AND IT WILL NOT BE EXECUTED ***/!!!
EXECUTE IMMEDIATE sql_statement;
END;
$$;
INTO Clause¶
Input Code¶
Redshift¶
CREATE OR REPLACE PROCEDURE get_max_id(table_name VARCHAR, OUT max_id INTEGER)
AS $$
DECLARE
sql_statement VARCHAR;
BEGIN
sql_statement := 'SELECT MAX(id) FROM ' || table_name || ';';
EXECUTE sql_statement INTO max_id;
END;
$$ LANGUAGE plpgsql;
Output Code¶
Snowflake¶
CREATE OR REPLACE PROCEDURE get_max_id (table_name VARCHAR, max_id OUT INTEGER)
RETURNS VARCHAR
LANGUAGE SQL
COMMENT = '{ "origin": "sf_sc", "name": "snowconvert", "version": { "major": 0, "minor": 0, "patch": "0" }, "attributes": { "component": "redshift", "convertedOn": "07/11/2025", "domain": "no-domain-provided" }}'
AS $$
DECLARE
sql_statement VARCHAR;
BEGIN
sql_statement := 'SELECT
MAX(id) FROM
' || table_name;
!!!RESOLVE EWI!!! /*** SSC-EWI-0030 - THE STATEMENT BELOW HAS USAGES OF DYNAMIC SQL. ***/!!!
EXECUTE IMMEDIATE sql_statement
!!!RESOLVE EWI!!! /*** SSC-EWI-PG0007 - INTO CLAUSE IN DYNAMIC SQL IS NOT SUPPORTED IN SNOWFLAKE. ***/!!! INTO max_id;
END;
$$;
Known Issues¶
1. Execution results cannot be stored in variables.¶
Snowflake Scripting does not support the INTO or BULK COLLECT INTO clauses. For this reason, results need to be passed through other means.
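One hedged workaround, assuming Snowflake Scripting's RESULTSET and cursor support fits the use case, is to capture the dynamic query's result and read it through a cursor instead of INTO; a minimal sketch follows (the procedure and column names are illustrative, not SnowConvert AI output).
CREATE OR REPLACE PROCEDURE get_max_id_sketch(table_name VARCHAR)
RETURNS INTEGER
LANGUAGE SQL
AS
$$
DECLARE
    sql_statement VARCHAR;
    res RESULTSET;
    max_id INTEGER;
BEGIN
    sql_statement := 'SELECT MAX(id) AS max_id FROM ' || table_name;
    -- Capture the dynamic query result instead of using EXECUTE ... INTO.
    res := (EXECUTE IMMEDIATE :sql_statement);
    LET c1 CURSOR FOR res;
    FOR rec IN c1 DO
        max_id := rec.max_id;
    END FOR;
    RETURN max_id;
END;
$$;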
2. Dynamic SQL execution queries may be incorrectly marked as non-runnable.¶
In some scenarios, an EXECUTE statement may be flagged regardless of whether it is safe to run, so please take this into account.
Related EWIs¶
SSC-EWI-0027: Variable with invalid query.
SSC-EWI-0030: The statement below has usages of dynamic SQL.
INSERT¶
Description¶
Inserts new rows into a table. (Redshift SQL Language Reference INSERT statement (https://docs.aws.amazon.com/redshift/latest/dg/r_INSERT_30.html#r_INSERT_30-synopsis)).
Warning
This syntax is partially supported in Snowflake.
Syntax¶
INSERT INTO table_name [ ( column [, ...] ) ]
{DEFAULT VALUES |
VALUES ( { expression | DEFAULT } [, ...] )
[, ( { expression | DEFAULT } [, ...] )
[, ...] ] |
query }
Sample Source Patterns¶
Setup data¶
Redshift¶
CREATE TABLE employees (
id INTEGER IDENTITY(1,1),
name VARCHAR(100),
salary INT DEFAULT 20000,
department VARCHAR(50) DEFAULT 'Marketing'
);
CREATE TABLE new_employees (
name VARCHAR(100),
salary INT,
department VARCHAR(50)
);
INSERT INTO new_employees (name, salary, department)
VALUES
('Grace Lee', 32000, 'Operations'),
('Hannah Gray', 26000, 'Finance');
Default values¶
This operation inserts a complete row with its default values. If any column does not have a default value, NULL is inserted into that column.
This clause cannot specify individual columns; it always inserts a complete row with its default values. Additionally, columns with the NOT NULL constraint cannot be included in the table definition. To replicate this behavior in Snowflake, SnowConvert AI inserts a DEFAULT value for a single column. This action inserts a complete row, using the default value for every column.
Input Code:¶
Redshift¶
CREATE TABLE employees (
id INTEGER IDENTITY(1,1),
name VARCHAR(100),
salary INT DEFAULT 20000,
department VARCHAR(50) DEFAULT 'Marketing'
);
INSERT INTO employees
DEFAULT VALUES;
SELECT * FROM employees ORDER BY id;
Result¶
| ID | NAME | SALARY | DEPARTMENT |
|---|---|---|---|
| 1 | NULL | 20000 | Marketing |
Output Code:¶
Snowflake¶
CREATE TABLE employees (
id INTEGER IDENTITY(1,1) ORDER,
name VARCHAR(100),
salary INT DEFAULT 20000,
department VARCHAR(50) DEFAULT 'Marketing'
)
COMMENT = '{ "origin": "sf_sc", "name": "snowconvert", "version": { "major": 0, "minor": 0, "patch": "0" }, "attributes": { "component": "redshift", "convertedOn": "03/03/2025", "domain": "test" }}';
INSERT INTO employees (id)
VALUES (DEFAULT);
SELECT * FROM
employees
ORDER BY id;
Result¶
| ID | NAME | SALARY | DEPARTMENT |
|---|---|---|---|
| 1 | NULL | 20000 | Marketing |
Query¶
Inserts one or more rows into the table using a query. All rows produced by the query are inserted into the table. The query must return a list of columns that is compatible with the table's columns, although the column names do not need to match. This functionality is fully equivalent in Snowflake.
Input Code:¶
Redshift¶
INSERT INTO employees (name, salary, department)
SELECT name, salary, department FROM new_employees;
Result¶
| ID | NAME | SALARY | DEPARTMENT |
|---|---|---|---|
| 1 | Grace Lee | 32000 | Operations |
| 2 | Hannah Gray | 26000 | Finance |
Output Code:¶
Snowflake¶
INSERT INTO employees (name, salary, department)
SELECT name, salary, department FROM
new_employees;
Result¶
| ID | NAME | SALARY | DEPARTMENT |
|---|---|---|---|
| 1 | Grace Lee | 32000 | Operations |
| 2 | Hannah Gray | 26000 | Finance |
Known Issues¶
Some expressions cannot be used in the VALUES clause in Snowflake. For example, in Redshift the JSON_PARSE (https://docs.aws.amazon.com/redshift/latest/dg/JSON_PARSE.html) function can be used in a VALUES clause to insert a JSON value into the SUPER data type. In Snowflake, however, the PARSE_JSON function cannot be used in a VALUES clause to insert a JSON value into the VARIANT data type. Instead, a query can be used in place of the VALUES clause. For more details, see the Snowflake documentation. You can also review the following article (https://community.snowflake.com/s/article/Cannot-use-DATE-FROM-PARTS-function-inside-the-VALUES-clause) for more information.
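A minimal sketch of that workaround, assuming a table with a VARIANT column named v (both names are hypothetical), is the following.
CREATE TABLE json_target (v VARIANT);

-- PARSE_JSON inside VALUES is not supported, so use a query instead:
INSERT INTO json_target (v)
SELECT PARSE_JSON('{"id": 1, "name": "Alice"}');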
Related EWIs¶
No known issues.
MERGE¶
Syntax¶
MERGE INTO target_table
USING source_table [ [ AS ] alias ]
ON match_condition
[ WHEN MATCHED THEN { UPDATE SET col_name = { expr } [,...] | DELETE }
WHEN NOT MATCHED THEN INSERT [ ( col_name [,...] ) ] VALUES ( { expr } [, ...] ) |
REMOVE DUPLICATES ]
For more information, see the Redshift MERGE documentation (https://docs.aws.amazon.com/redshift/latest/dg/r_MERGE.html).
Sample Source Patterns¶
UPDATE - INSERT¶
There are no differences between the two languages. The code is kept in its original form.
Input Code:¶
Redshift¶
MERGE INTO target USING source ON target.id = source.id
WHEN MATCHED THEN UPDATE SET id = source.id, name = source.name
WHEN NOT MATCHED THEN INSERT VALUES (source.id, source.name);
Output Code:¶
Snowflake¶
--** SSC-FDM-RS0005 - REDSHIFT MERGE STATEMENT DOESN'T ALLOW DUPLICATES IN THE SOURCE TABLE. SNOWFLAKE BEHAVIOR MAY DIFFER IF THERE ARE DUPLICATE VALUES. **
MERGE INTO target USING source ON target.id = source.id
WHEN MATCHED THEN UPDATE SET id = source.id, name = source.name
WHEN NOT MATCHED THEN INSERT VALUES (source.id, source.name);
DELETE - INSERT¶
There are no differences between the two languages. The code is kept in its original form.
Input Code:¶
Redshift¶
MERGE INTO target USING source ON target.id = source.id
WHEN MATCHED THEN DELETE
WHEN NOT MATCHED THEN INSERT VALUES (source.id, source.name);
Output Code:¶
Snowflake¶
--** SSC-FDM-RS0005 - REDSHIFT MERGE STATEMENT DOESN'T ALLOW DUPLICATES IN THE SOURCE TABLE. SNOWFLAKE BEHAVIOR MAY DIFFER IF THERE ARE DUPLICATE VALUES. **
MERGE INTO target USING source ON target.id = source.id
WHEN MATCHED THEN DELETE
WHEN NOT MATCHED THEN INSERT VALUES (source.id, source.name);
REMOVE DUPLICATES¶
Snowflake does not support the REMOVE DUPLICATES clause, but there is an alternative that emulates the original behavior.
The output code will contain three new statements:
A TEMPORARY TABLE that holds the duplicate values from the source and target tables that satisfy the condition
An INSERT statement that adds the pending values to the target table after the merge
A DROP statement that removes the generated temporary table.
These are necessary because the REMOVE DUPLICATES behavior deletes the duplicate values from the target table and then inserts the matching values from the source table.
Input Code:¶
Redshift¶
CREATE TABLE target (id INT, name CHAR(10));
CREATE TABLE source (id INT, name CHAR(10));
INSERT INTO target VALUES (30, 'Tony'), (30, 'Daisy'), (11, 'Alice'), (23, 'Bill'), (23, 'Nikki');
INSERT INTO source VALUES (23, 'David'), (22, 'Clarence');
MERGE INTO target USING source ON target.id = source.id REMOVE DUPLICATES;
Results¶
| ID | NAME |
|---|---|
| 30 | Daisy |
| 22 | Clarence |
| 30 | Tony |
| 11 | Alice |
| 23 | David |
Output Code:¶
Snowflake¶
CREATE TABLE target (id INT, name CHAR(10))
COMMENT = '{ "origin": "sf_sc", "name": "snowconvert", "version": { "major": 0, "minor": 0, "patch": "0" }, "attributes": { "component": "redshift", "convertedOn": "03/03/2025", "domain": "test" }}';
CREATE TABLE source (id INT, name CHAR(10))
COMMENT = '{ "origin": "sf_sc", "name": "snowconvert", "version": { "major": 0, "minor": 0, "patch": "0" }, "attributes": { "component": "redshift", "convertedOn": "03/03/2025", "domain": "test" }}';
INSERT INTO target
VALUES (30, 'Tony'), (30, 'Daisy'), (11, 'Alice'), (23, 'Bill'), (23, 'Nikki');
INSERT INTO source
VALUES (23, 'David'), (22, 'Clarence');
CREATE TEMPORARY TABLE source_duplicates AS
SELECT DISTINCT
source.*
FROM
source
INNER JOIN
target
ON target.id = source.id;
--** SSC-FDM-RS0005 - REDSHIFT MERGE STATEMENT DOESN'T ALLOW DUPLICATES IN THE SOURCE TABLE. SNOWFLAKE BEHAVIOR MAY DIFFER IF THERE ARE DUPLICATE VALUES. **
MERGE INTO target
USING source ON target.id = source.id
WHEN MATCHED THEN
DELETE
WHEN NOT MATCHED THEN
INSERT
VALUES (source.id, source.name);
INSERT INTO target
SELECT
*
FROM
source_duplicates;
DROP TABLE IF EXISTS source_duplicates CASCADE;
Results¶
| ID | NAME |
|---|---|
| 22 | Clarence |
| 30 | Tony |
| 30 | Daisy |
| 11 | Alice |
| 23 | David |
Known Issues¶
No known issues.
Related EWIs¶
SSC-EWI-RS0009: Semantic information for the source table could not be found.
SSC-FDM-RS0005: Duplicates are not allowed in the source table.
UPDATE¶
Description¶
Updates values in one or more table columns when a condition is satisfied. (Redshift SQL Language Reference UPDATE statement (https://docs.aws.amazon.com/redshift/latest/dg/r_UPDATE.html)).
Note
This syntax is fully supported in Snowflake.
Syntax¶
[ WITH [RECURSIVE] common_table_expression [, common_table_expression , ...] ]
UPDATE table_name [ [ AS ] alias ] SET column = { expression | DEFAULT } [,...]
[ FROM fromlist ]
[ WHERE condition ]
Sample Source Patterns¶
Setup data¶
Redshift¶
CREATE TABLE employees (
id INTEGER IDENTITY(1,1),
name VARCHAR(100),
salary DECIMAL DEFAULT 20000,
department VARCHAR(50) DEFAULT 'Marketing'
);
INSERT INTO employees (name, salary, department)
VALUES
('Alice', 500000, 'HR'),
('Bob', 600000, 'Engineering'),
('Charlie', 700000, 'Engineering'),
('David', 400000, 'Marketing'),
('Eve', 450000, 'HR'),
('Frank', 750000, 'Engineering'),
('Grace', 650000, 'Engineering'),
('Helen', 390000, 'Marketing'),
('Ivy', 480000, 'HR'),
('Jack', 420000, 'Engineering'),
('Ken', 700000, 'Marketing'),
('Liam', 600000, 'Engineering'),
('Mona', 470000, 'HR');
CREATE TABLE department_bonus (
department VARCHAR(100),
bonus DECIMAL
);
INSERT INTO department_bonus (department, bonus)
VALUES
('HR', 10000),
('Engineering', 50000),
('Marketing', 20000),
('Sales', 5000);
Alias¶
Although Snowflake's syntax does not specify that a table alias can be used, it is valid code in Snowflake.
Input Code:¶
Redshift¶
UPDATE employees AS e
SET salary = salary + 5000
WHERE e.salary < 600000;
Result¶
| ID | NAME | SALARY | DEPARTMENT |
|---|---|---|---|
| 1 | Alice | 505000 | HR |
| 2 | Bob | 600000 | Engineering |
| 3 | Charlie | 700000 | Engineering |
| 4 | David | 405000 | Marketing |
| 5 | Eve | 455000 | HR |
| 6 | Frank | 750000 | Engineering |
| 7 | Grace | 650000 | Engineering |
| 8 | Helen | 395000 | Marketing |
| 9 | Ivy | 485000 | HR |
| 10 | Jack | 425000 | Engineering |
| 11 | Ken | 700000 | Marketing |
| 12 | Liam | 600000 | Engineering |
| 13 | Mona | 475000 | HR |
Output Code:¶
Snowflake¶
UPDATE employees AS e
SET salary = salary + 5000
WHERE e.salary < 600000;
Result¶
| ID | NAME | SALARY | DEPARTMENT |
|---|---|---|---|
| 1 | Alice | 505000 | HR |
| 2 | Bob | 600000 | Engineering |
| 3 | Charlie | 700000 | Engineering |
| 4 | David | 405000 | Marketing |
| 5 | Eve | 455000 | HR |
| 6 | Frank | 750000 | Engineering |
| 7 | Grace | 650000 | Engineering |
| 8 | Helen | 395000 | Marketing |
| 9 | Ivy | 485000 | HR |
| 10 | Jack | 425000 | Engineering |
| 11 | Ken | 700000 | Marketing |
| 12 | Liam | 600000 | Engineering |
| 13 | Mona | 475000 | HR |
WITH clause¶
This clause specifies one or more common table expressions (CTEs). Output column names are optional for non-recursive CTEs, but they are mandatory for recursive ones.
Because this clause cannot be used in an UPDATE statement, it is transformed into temporary tables with the corresponding queries. After the UPDATE statement is executed, these temporary tables are dropped to clean up, free resources, and avoid name collisions when tables are created in the same session. Additionally, if a regular table with the same name exists, it regains precedence, because a temporary table takes precedence over any other table with the same name in the same session.
Non-recursive CTE¶
Input Code:¶
Redshift¶
WITH avg_salary_cte AS (
SELECT AVG(salary) AS avg_salary FROM employees
)
UPDATE employees
SET salary = (SELECT avg_salary FROM avg_salary_cte)
WHERE salary < 500000;
Result¶
| ID | NAME | SALARY | DEPARTMENT |
|---|---|---|---|
| 1 | Alice | 500000 | HR |
| 2 | Bob | 600000 | Engineering |
| 3 | Charlie | 700000 | Engineering |
| 4 | David | 546923 | Marketing |
| 5 | Eve | 546923 | HR |
| 6 | Frank | 750000 | Engineering |
| 7 | Grace | 650000 | Engineering |
| 8 | Helen | 546923 | Marketing |
| 9 | Ivy | 546923 | HR |
| 10 | Jack | 546923 | Engineering |
| 11 | Ken | 700000 | Marketing |
| 12 | Liam | 600000 | Engineering |
| 13 | Mona | 546923 | HR |
Output Code:¶
Snowflake¶
CREATE TEMPORARY TABLE avg_salary_cte AS
SELECT AVG(salary) AS avg_salary FROM
employees;
UPDATE employees
SET salary = (SELECT avg_salary FROM
avg_salary_cte
)
WHERE salary < 500000;
DROP TABLE avg_salary_cte;
Result¶
| ID | NAME | SALARY | DEPARTMENT |
|---|---|---|---|
| 1 | Alice | 500000 | HR |
| 2 | Bob | 600000 | Engineering |
| 3 | Charlie | 700000 | Engineering |
| 4 | David | 546923 | Marketing |
| 5 | Eve | 546923 | HR |
| 6 | Frank | 750000 | Engineering |
| 7 | Grace | 650000 | Engineering |
| 8 | Helen | 546923 | Marketing |
| 9 | Ivy | 546923 | HR |
| 10 | Jack | 546923 | Engineering |
| 11 | Ken | 700000 | Marketing |
| 12 | Liam | 600000 | Engineering |
| 13 | Mona | 546923 | HR |
Recursive CTE¶
Input Code:¶
Redshift¶
WITH RECURSIVE bonus_updates(id, name, department, salary, level) AS (
SELECT e.id,
e.name,
e.department,
e.salary + CASE
WHEN db.bonus IS NOT NULL THEN db.bonus
ELSE 0
END AS new_salary,
1 AS level
FROM employees e
LEFT JOIN department_bonus db ON e.department = db.department
UNION ALL
SELECT e.id,
e.name,
e.department,
e.salary + CASE
WHEN db.bonus IS NOT NULL THEN db.bonus
ELSE 0
END + (e.salary * 0.05) AS new_salary,
bu.level + 1
FROM employees e
JOIN department_bonus db ON e.department = db.department
JOIN bonus_updates bu ON e.id = bu.id
WHERE bu.level < 3
)
UPDATE employees
SET salary = bu.new_salary
FROM (SELECT id, AVG(salary) as new_salary FROM bonus_updates GROUP BY id) as bu
WHERE employees.id = bu.id
AND bu.new_salary > employees.salary;
Result¶
| ID | NAME | SALARY | DEPARTMENT |
|---|---|---|---|
| 1 | Alice | 526666 | HR |
| 2 | Bob | 670000 | Engineering |
| 3 | Charlie | 773333 | Engineering |
| 4 | David | 433333 | Marketing |
| 5 | Eve | 475000 | HR |
| 6 | Frank | 825000 | Engineering |
| 7 | Grace | 721666 | Engineering |
| 8 | Helen | 423000 | Marketing |
| 9 | Ivy | 506000 | HR |
| 10 | Jack | 484000 | Engineering |
| 11 | Ken | 743333 | Marketing |
| 12 | Liam | 670000 | Engineering |
| 13 | Mona | 495668 | HR |
Output Code:¶
Snowflake¶
CREATE TEMPORARY TABLE bonus_updates AS
--** SSC-FDM-0007 - MISSING DEPENDENT OBJECTS "employees", "department_bonus" **
WITH RECURSIVE bonus_updates(id, name, department, salary, level) AS (
SELECT e.id,
e.name,
e.department,
e.salary + CASE
WHEN db.bonus IS NOT NULL THEN db.bonus
ELSE 0
END AS new_salary,
1 AS level
FROM
employees e
LEFT JOIN
department_bonus db ON e.department = db.department
UNION ALL
SELECT e.id,
e.name,
e.department,
e.salary + CASE
WHEN db.bonus IS NOT NULL THEN db.bonus
ELSE 0
END + (e.salary * 0.05) AS new_salary,
bu.level + 1
FROM
employees e
JOIN
department_bonus db ON e.department = db.department
JOIN
bonus_updates bu ON e.id = bu.id
WHERE bu.level < 3
)
SELECT
id,
name,
department,
salary,
level
FROM
bonus_updates;
UPDATE employees
SET salary = bu.new_salary
FROM (SELECT id, AVG(salary) as new_salary
FROM bonus_updates
GROUP BY id) as bu
WHERE employees.id = bu.id
AND bu.new_salary > employees.salary;
DROP TABLE bonus_updates;
Result¶
| ID | NAME | SALARY | DEPARTMENT |
|---|---|---|---|
| 1 | Alice | 526667 | HR |
| 2 | Bob | 670000 | Engineering |
| 3 | Charlie | 773333 | Engineering |
| 4 | David | 433333 | Marketing |
| 5 | Eve | 475000 | HR |
| 6 | Frank | 825000 | Engineering |
| 7 | Grace | 721667 | Engineering |
| 8 | Helen | 423000 | Marketing |
| 9 | Ivy | 506000 | HR |
| 10 | Jack | 484000 | Engineering |
| 11 | Ken | 743333 | Marketing |
| 12 | Liam | 670000 | Engineering |
| 13 | Mona | 495667 | HR |
SET DEFAULT values¶
Input Code:¶
Redshift¶
UPDATE employees
SET salary = DEFAULT, department = 'Sales'
WHERE department = 'HR';
Result¶
| ID | NAME | SALARY | DEPARTMENT |
|---|---|---|---|
| 1 | Alice | 20000 | Sales |
| 2 | Bob | 600000 | Engineering |
| 3 | Charlie | 700000 | Engineering |
| 4 | David | 400000 | Marketing |
| 5 | Eve | 20000 | Sales |
| 6 | Frank | 750000 | Engineering |
| 7 | Grace | 650000 | Engineering |
| 8 | Helen | 390000 | Marketing |
| 9 | Ivy | 20000 | Sales |
| 10 | Jack | 420000 | Engineering |
| 11 | Ken | 700000 | Marketing |
| 12 | Liam | 600000 | Engineering |
| 13 | Mona | 20000 | Sales |
Output Code:¶
Snowflake¶
UPDATE employees
SET salary = DEFAULT, department = 'Sales'
WHERE
department = 'HR';
Result¶
| ID | NAME | SALARY | DEPARTMENT |
|---|---|---|---|
| 1 | Alice | 20000 | Sales |
| 2 | Bob | 600000 | Engineering |
| 3 | Charlie | 700000 | Engineering |
| 4 | David | 400000 | Marketing |
| 5 | Eve | 20000 | Sales |
| 6 | Frank | 750000 | Engineering |
| 7 | Grace | 650000 | Engineering |
| 8 | Helen | 390000 | Marketing |
| 9 | Ivy | 20000 | Sales |
| 10 | Jack | 420000 | Engineering |
| 11 | Ken | 700000 | Marketing |
| 12 | Liam | 600000 | Engineering |
| 13 | Mona | 20000 | Sales |
SET clause¶
This clause modifies the values in columns. Similar to Snowflake, when the configuration parameter ERROR_ON_NONDETERMINISTIC_UPDATE (https://docs.aws.amazon.com/redshift/latest/dg/r_error_on_nondeterministic_update.html) is set to true, an update query with multiple matches per row raises an error. This flag works the same way in Snowflake and even uses the same name: ERROR_ON_NONDETERMINISTIC_UPDATE.
However, when this flag is turned off, no error is returned; instead, one of the matching rows is used to update the target row. The joined row that is selected is nondeterministic and arbitrary in both languages, so the behavior can vary between executions and may lead to data inconsistencies.
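A minimal sketch of enabling this safeguard on the Snowflake side (a session-level setting) follows; whether to enable it is a workload decision.
-- Raise an error instead of silently picking one of several matching source rows.
ALTER SESSION SET ERROR_ON_NONDETERMINISTIC_UPDATE = TRUE;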
Setup data:¶
Redshift¶
CREATE TABLE target (
k INT,
v INT
);
CREATE TABLE src (
k INT,
v INT
);
INSERT INTO target (k, v) VALUES (0, 10);
INSERT INTO src (k, v) VALUES
(0, 14),
(0, 15),
(0, 16);
Input Code:¶
Redshift¶
UPDATE target
SET v = src.v
FROM src
WHERE target.k = src.k;
SELECT * FROM target;
Result¶
| K | V |
|---|---|
| 0 | 16 |
Output Code:¶
Snowflake¶
UPDATE target
SET v = src.v
FROM src
WHERE target.k = src.k;
SELECT * FROM target;
Result¶
| K | V |
|---|---|
| 0 | 14 |
Known Issues¶
Update queries with multiple matches per row can lead to data inconsistencies. Although both platforms have the ERROR_ON_NONDETERMINISTIC_UPDATE (https://docs.aws.amazon.com/redshift/latest/dg/r_error_on_nondeterministic_update.html) flag, the selected values will always be nondeterministic. Snowflake provides recommendations for handling such cases; click here for more details.
To replicate the functionality of the WITH clause, temporary tables corresponding to each common table expression (CTE) are created. However, if a temporary table with the same name already exists in the current session, this approach will fail and cause an error.
Related EWIs¶
No known issues.