Query Parquet files directly in your data lake

Note

Snowflake also supports loading data from Parquet files directly into a Snowflake-managed Iceberg table. This option is generally available. For more information, see Load data into Apache Iceberg™ tables and Example: Load Iceberg-compatible Parquet files.

This topic covers how to create a read-only Apache Iceberg™ table for Snowflake from Parquet files that you manage in object storage, a capability known as Parquet Direct. This option lets you query Parquet data directly in your data lake, so you don’t have to copy the data or pay for data ingestion. Parquet Direct offers several benefits:

  • Cost: Significantly lower cost compared to full ingestion, with no per-file refresh charges, unlike external tables

  • Seamless Syncing: Automatically refresh tables in Snowflake to reflect changes made to files in storage (add, delete, upsert, schema changes)

  • Hive-style partitioning: Full support for Hive-style partitioning (for example, key=value), which makes it easy for you to modernize legacy datasets

  • Read-Only Permission Model: The permission model doesn’t require write access to your storage, enabling its use in security-conscious and regulated verticals

  • Performance: Iceberg-grade query performance, unlike external tables

Note

Parquet Direct tables are read-only, so you can’t perform the following DML operations on them through Snowflake:

  • Insert

  • Update

  • Delete

Instead, you can use other engines to perform DML operations directly on the files in cloud storage. For the full list of limitations in this private preview, see Limitations for querying Parquet files directly in your data lake.

Partitioned tables

To improve query performance, we strongly recommend that you partition tables created from Parquet source files by using partition columns. Query response time is faster when Snowflake processes only a small part of the data instead of having to scan the entire data set. An Iceberg table definition can include multiple partition columns, which impose a multi-dimensional structure on the external data.

To partition a table, your data must be organized using logical paths.

When you create an Iceberg table, you define partition columns as expressions that parse the path or filename information stored in the METADATA$FILENAME pseudo-column. A partition consists of all data files that match the path and/or filename in the expression for the partition column.

Snowflake computes and adds partitions based on the defined partition column expressions when an Iceberg table is refreshed. For an example of creating a partitioned Iceberg table, see Example: Create an Iceberg table from Parquet files, specifying a partition column.

Workflow

Use the workflow in this section to create an Iceberg table from Parquet source files.

Note

If you store your Parquet files in Amazon S3 or Microsoft Azure, you can create a table that supports automatically refreshing the table data. To learn more, see the following sections:

Step 1: Create an external volume

To create an external volume, complete the instructions for your cloud storage service:

Step 2: Create a catalog integration

When you create a catalog integration for this use case, you specify the following properties for the catalog integration:

  • There isn’t a table format

  • There isn’t a catalog

  • The tables reside in object storage

Create a catalog integration by using the CREATE CATALOG INTEGRATION command. To indicate that the catalog integration is for Iceberg tables created from Parquet source files, set the CATALOG_SOURCE parameter equal to OBJECT_STORE and the TABLE_FORMAT parameter equal to NONE.

Note

Snowflake does not support creating Iceberg tables from Parquet-based table definitions in the AWS Glue Data Catalog.

The following example creates a catalog integration for Parquet files in object storage.

CREATE OR REPLACE CATALOG INTEGRATION icebergCatalogInt
  CATALOG_SOURCE = OBJECT_STORE
  TABLE_FORMAT = NONE
  ENABLED=TRUE;

Step 3: Create an Iceberg table

Create an Iceberg table by using the CREATE ICEBERG TABLE command.

Syntax

CREATE [ OR REPLACE ] ICEBERG TABLE [ IF NOT EXISTS ] <table_name>
  [
    --Data column definition
    <col_name> <col_type>
    [ COLLATE '<collation_specification>' ]
    [ [ WITH ] MASKING POLICY <policy_name> [ USING ( <col_name> , <cond_col1> , ... ) ] ]
    [ [ WITH ] TAG ( <tag_name> = '<tag_value>' [ , <tag_name> = '<tag_value>' , ... ] ) ]
    [ COMMENT '<string_literal>' ]
    -- In-line constraint
    [ inlineConstraint ]
    -- Additional column definitions (data, virtual, or partition columns)
    [ ,     <col_name> <col_type> ...
      -- Virtual column definition
      |  <col_name> <col_type> AS <expr>
      -- Partition column definition
      | <part_col_name> <col_type> AS <part_expr>
      -- In-line constraint
      [ inlineConstraint ]
      [ , ... ]
    ]
    -- Out-of-line constraints
    [ , outoflineConstraint [ ... ] ]
  ]
  [ PARTITION BY ( <part_col_name> [, <part_col_name> ... ] ) ]
  [ EXTERNAL_VOLUME = '<external_volume_name>' ]
  [ CATALOG = <catalog_integration_name> ]
  BASE_LOCATION = '<relative_path_from_external_volume>'
  [ INFER_SCHEMA = { TRUE | FALSE } ]
  [ AUTO_REFRESH = { TRUE | FALSE } ]
  [ PATTERN = '<regex_pattern>' ]
  [ REPLACE_INVALID_CHARACTERS = { TRUE | FALSE } ]
  [ [ WITH ] ROW ACCESS POLICY <policy_name> ON ( <col_name> [ , <col_name> ... ] ) ]
  [ [ WITH ] TAG ( <tag_name> = '<tag_value>' [ , <tag_name> = '<tag_value>' , ... ] ) ]
  [ COMMENT = '<string_literal>' ]

Where:

inlineConstraint ::=
  [ CONSTRAINT <constraint_name> ]
  { UNIQUE
    | PRIMARY KEY
    | [ FOREIGN KEY ] REFERENCES <ref_table_name> [ ( <ref_col_name> ) ]
  }
  [ <constraint_properties> ]

For additional inline constraint details, see CREATE | ALTER TABLE … CONSTRAINT.

outoflineConstraint ::=
  [ CONSTRAINT <constraint_name> ]
  { UNIQUE [ ( <col_name> [ , <col_name> , ... ] ) ]
    | PRIMARY KEY [ ( <col_name> [ , <col_name> , ... ] ) ]
    | [ FOREIGN KEY ] [ ( <col_name> [ , <col_name> , ... ] ) ]
      REFERENCES <ref_table_name> [ ( <ref_col_name> [ , <ref_col_name> , ... ] ) ]
  }
  [ <constraint_properties> ]

For additional out-of-line constraint details, see CREATE | ALTER TABLE … CONSTRAINT.

Required parameters

table_name

Specifies the identifier (name) for the table; must be unique for the schema in which the table is created.

In addition, the identifier must start with an alphabetic character and cannot contain spaces or special characters unless the entire identifier string is enclosed in double quotes (for example, "My object"). Identifiers enclosed in double quotes are also case-sensitive.

For more details, see Identifier requirements.

BASE_LOCATION = 'relative_path_from_external_volume'

Specifies a relative path from the table’s EXTERNAL_VOLUME location to a directory where Snowflake can access your Parquet files and write table metadata. The base location must point to a directory and cannot point to a single Parquet file.

Optional parameters

col_name

Specifies the column identifier (name). All the requirements for table identifiers also apply to column identifiers.

For more details, see Identifier requirements and Reserved & limited keywords.

Note

In addition to the standard reserved keywords, the following keywords cannot be used as column identifiers because they are reserved for ANSI-standard context functions:

  • CURRENT_DATE

  • CURRENT_ROLE

  • CURRENT_TIME

  • CURRENT_TIMESTAMP

  • CURRENT_USER

For the list of reserved keywords, see Reserved & limited keywords.

col_type

Specifies the data type for the column.

For details about the data types that can be specified for table columns, see Data type mapping and SQL data types reference.

expr

String that specifies the expression for the column. When queried, the column returns results derived from this expression.

A column can be a virtual column, which is defined using an explicit expression.

METADATA$FILENAME

A pseudo-column that identifies the name of each Parquet data file included in the table, relative to its path on the external volume.

For example:

If the external volume location is s3://bucket-name/data/warehouse/ and the BASE_LOCATION of the table is default_db/schema_name/table_name/, the absolute location of the Parquet file is s3://bucket-name/data/warehouse/default_db/schema_name/table_name/ds=2023-01-01/file1.parquet.

As a result, the METADATA$FILENAME for this file is default_db/schema_name/table_name/ds=2023-01-01/file1.parquet.
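As a sketch of how a partition expression consumes this value, you can parse the Hive-style ds= segment with REGEXP_SUBSTR. The literal path below stands in for the METADATA$FILENAME pseudo-column; in a real partition column definition, METADATA$FILENAME replaces the literal.

SELECT REGEXP_SUBSTR(
    'default_db/schema_name/table_name/ds=2023-01-01/file1.parquet',
    'ds=(.*)/', 1, 1, 'e'
  ) AS ds_value;  -- returns '2023-01-01'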

CONSTRAINT ...

Defines an inline or out-of-line constraint for the specified column(s) in the table.

For syntax details, see CREATE | ALTER TABLE … CONSTRAINT. For more information about constraints, see Constraints.

COLLATE 'collation_specification'

Specifies the collation to use for column operations such as string comparison. This option applies only to text columns (VARCHAR, STRING, TEXT, etc.). For more details, see Collation specifications.

MASKING POLICY = policy_name

Specifies the masking policy to set on a column.

EXTERNAL_VOLUME = 'external_volume_name'

Specifies the identifier (name) for the external volume where Snowflake can access your Parquet data files.

You must specify an external volume if you have not set one at the database or schema level. Otherwise, the Iceberg table defaults to the external volume for the schema, database, or account. The schema takes precedence over the database, and the database takes precedence over the account.

CATALOG = 'catalog_integration_name'

Specifies the identifier (name) of the catalog integration for this table.

You must specify a catalog integration if you have not set one at the database or schema level. Otherwise, the Iceberg table defaults to the catalog integration for the schema, database, or account. The schema takes precedence over the database, and the database takes precedence over the account.

INFER_SCHEMA = { TRUE | FALSE }

Specifies whether to automatically detect and evolve the table schema (based on the fields in the Parquet files) in order to retrieve column definitions and partition values.

  • TRUE: Snowflake detects the table schema to retrieve column definitions and detect partition values. With this option, Snowflake automatically creates virtual columns for partition values in the Parquet file path.

    If you manually or automatically refresh the table and the Parquet file schema changes, Snowflake automatically evolves the table schema by creating newly identified columns as visible table columns.

  • FALSE: Snowflake does not detect the table schema. You must include column definitions in your CREATE ICEBERG TABLE statement.

Default: TRUE if you don’t provide a column definition; otherwise, FALSE.

AUTO_REFRESH = { TRUE | FALSE }

Specifies whether the table data is automatically refreshed. This parameter is required only when you create an Iceberg table from Parquet files that supports automatic refresh. For more information, see Refresh an Iceberg table automatically for Amazon S3.

PATTERN = '<regex_pattern>'

A regular expression pattern string, enclosed in single quotes, specifying the filenames and/or paths in the table’s base location to match.

If you manage your Parquet source files in Amazon S3, you can use this parameter to avoid reaching the AWS limit for the number of SNS topics that can be created per account. To avoid reaching this limit, do the following:

  1. Create one SNS topic at the bucket level

  2. Create tables, each with its own regular expression pattern, to logically group the files

For more information on this SNS topic limit, see the AWS documentation (https://docs.aws.amazon.com/general/latest/gr/sns.html#limits_sns_resource).

Tip

For the best performance, don’t apply patterns that filter on a large number of files.
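For example, the following sketch (table, integration, volume, and pattern values are hypothetical) scopes a table to one path under a shared bucket-level topic:

CREATE ICEBERG TABLE orders_2024
  CATALOG = 'catint'
  EXTERNAL_VOLUME = 'exvol'
  BASE_LOCATION = 'orders/'
  PATTERN = '.*year=2024/.*[.]parquet'
  AUTO_REFRESH = TRUE;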

REPLACE_INVALID_CHARACTERS = { TRUE | FALSE }

Specifies whether to replace invalid UTF-8 characters with the Unicode replacement character (�) in query results. You can only set this parameter for tables that use an external Iceberg catalog.

  • TRUE replaces invalid UTF-8 characters with the Unicode replacement character.

  • FALSE leaves invalid UTF-8 characters unchanged. Snowflake returns a user error message when it encounters invalid UTF-8 characters in a Parquet data file.

If not specified, the Iceberg table defaults to the parameter value for the schema, database, or account. The schema takes precedence over the database, and the database takes precedence over the account.

Default: FALSE
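For example, a minimal sketch (object names are hypothetical) of a table that tolerates invalid UTF-8 in its source files:

CREATE ICEBERG TABLE utf8_tolerant_table
  CATALOG = 'catint'
  EXTERNAL_VOLUME = 'exvol'
  BASE_LOCATION = 'utf8_tolerant_table/'
  REPLACE_INVALID_CHARACTERS = TRUE;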

ROW ACCESS POLICY policy_name ON ( col_name [ , col_name ... ] )

Specifies the row access policy to set on a table.

TAG ( tag_name = 'tag_value' [ , tag_name = 'tag_value' , ... ] )

Specifies the tag name and the tag string value.

The tag value is always a string, and the maximum number of characters for the tag value is 256.

For information about specifying tags in a statement, see Tag quotas.

COMMENT 'string_literal'

Specifies a comment for the column or the table.

Comments can be specified at the column level or the table level. The syntax for each is slightly different.

Partitioning parameters

Use these parameters to partition your Iceberg table.

part_col_name col_type AS part_expr

Defines one or more partition columns in the Iceberg table.

A partition column must evaluate as an expression that parses the path and/or filename information in the METADATA$FILENAME pseudo-column. A partition consists of all data files that match the path and/or filename in the expression for the partition column.

part_col_name

String that specifies the partition column identifier (name). All the requirements for table identifiers also apply to column identifiers.

col_type

String (constant) that specifies the data type for the column. The data type must match the result of part_expr for the column.

part_expr

String that specifies the expression for the column. The expression must include the METADATA$FILENAME pseudocolumn.

Iceberg tables currently support the following subset of functions in partition expressions:

[ PARTITION BY ( part_col_name [, part_col_name ... ] ) ]

Specifies any partition columns to evaluate for the Iceberg table.

Usage:

When querying an Iceberg table, include one or more partition columns in a WHERE clause, for example:

... WHERE part_col_name = 'filter_value'

A common practice is to partition the data files based on increments of time; or, if the data files are staged from multiple sources, to partition by a data source identifier and date or timestamp.
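For instance, with a date-based partition column (table and column names are hypothetical), a range filter lets Snowflake scan only the matching partitions:

SELECT SUM(amount)
  FROM my_partitioned_table
  WHERE ds >= '2023-01-01' AND ds < '2023-02-01';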

Example: Create an Iceberg table from Parquet files, specifying data columns

The following example creates an Iceberg table from Parquet files in object storage.

The example specifies the external volume and catalog integration created previously in this workflow, and provides a value for the required BASE_LOCATION parameter.

CREATE ICEBERG TABLE myTable (
    first_name STRING,
    last_name STRING,
    amount NUMBER,
    create_date DATE
  )
  CATALOG = icebergCatalogInt
  EXTERNAL_VOLUME = myIcebergVolume
  BASE_LOCATION='relative_path_from_external_volume/';

Example: Create an Iceberg table from Parquet files, specifying a partition column

The following example creates an Iceberg table from Parquet files in object storage and defines a partition column named sr_returned_date_sk.

 CREATE OR REPLACE ICEBERG TABLE store_returns (
  sr_returned_date_sk integer AS
    IFF(
        regexp_substr(METADATA$FILENAME, 'sr_returned_date_sk=(.*)/', 1, 1, 'e') = '__HIVE_DEFAULT_PARTITION__',
        null,
        TO_NUMBER(
          regexp_substr(METADATA$FILENAME, 'sr_returned_date_sk=(.*)/', 1, 1, 'e')
        )
    ),
  sr_return_time_sk         integer                       ,
  sr_item_sk                integer                       ,
  sr_customer_sk            integer                       ,
  sr_cdemo_sk               integer                       ,
  sr_hdemo_sk               integer                       ,
  sr_addr_sk                integer                       ,
  sr_store_sk               integer                       ,
  sr_reason_sk              integer                       ,
  sr_ticket_number          bigint                        ,
  sr_return_quantity        integer                       ,
  sr_return_amt             decimal(7,2)                  ,
  sr_return_tax             decimal(7,2)                  ,
  sr_return_amt_inc_tax     decimal(7,2)                  ,
  sr_fee                    decimal(7,2)                  ,
  sr_return_ship_cost       decimal(7,2)                  ,
  sr_refunded_cash          decimal(7,2)                  ,
  sr_reversed_charge        decimal(7,2)                  ,
  sr_store_credit           decimal(7,2)                  ,
  sr_net_loss               decimal(7,2)
)
PARTITION BY (sr_returned_date_sk)
EXTERNAL_VOLUME = 'exvol'
CATALOG = 'catint'
BASE_LOCATION = 'store_returns/';

Example: Create an Iceberg table from Parquet files using automatic schema inference

The following example creates an Iceberg table from Parquet files using automatic schema inference without including a column definition.

CREATE OR REPLACE ICEBERG TABLE auto_schema_table
  EXTERNAL_VOLUME = 'exvol'
  CATALOG = 'catint'
  BASE_LOCATION = 'auto_schema_table/';

Alternatively, you can include a column definition to provide information about certain columns. Snowflake uses the definition to create those columns, then automatically detects other table columns. In this scenario, you must specify INFER_SCHEMA = TRUE since you include a column definition.

CREATE OR REPLACE ICEBERG TABLE auto_schema_table_col_spec (col1 INT)
  EXTERNAL_VOLUME = 'exvol'
  CATALOG = 'catint'
  BASE_LOCATION = 'auto_schema_table_col_spec/'
  INFER_SCHEMA = TRUE;

Refresh the table

After you create an Iceberg table from Parquet files, you can refresh the table data. Refreshing synchronizes the table with the most recent changes to your Parquet files in object storage. You can either automatically refresh the table data or manually refresh the table data.

Note

We recommend setting up automatic refresh for the Parquet source files.

Schema evolution

With the INFER_SCHEMA parameter equal to TRUE, table refresh synchronizes your table with the following schema changes to the Parquet source files:

  • New columns

  • Type widening for the following scenarios to adhere to the Apache Iceberg specification:

    • int to long

    • float to double

    • decimal(p1,s) to decimal(p2,s), where p2 > p1

Primitive columns

Original Type     Widened Type     Notes
int               long             NUMBER(10,0) → NUMBER(19,0)
float             double           Both map to Snowflake FLOAT
decimal(p1,s)     decimal(p2,s)    Precision can increase (for example, decimal(5,2) → decimal(7,2))

Struct fields

Type widening applies recursively to struct fields:

Original Type                  Widened Type
struct<field:int>              struct<field:long>
struct<field:float>            struct<field:double>
struct<field:decimal(p1,s)>    struct<field:decimal(p2,s)>

The following example shows type widening for a struct field:

struct<p1:int, p2:float, p3:decimal(5,2)>
    ↓
struct<p1:long, p2:double, p3:decimal(8,2)>

Array elements

Type widening applies to array element types:

Original Type           Widened Type
array<int>              array<long>
array<float>            array<double>
array<decimal(p1,s)>    array<decimal(p2,s)>

Map keys and values

Type widening applies to both map keys and values:

Original Type             Widened Type
map<int, V>               map<long, V>
map<K, float>             map<K, double>
map<K, decimal(p1,s)>     map<K, decimal(p2,s)>

The following examples show type widening for map keys and values:

map<int, string>           → map<long, string>
map<string, float>         → map<string, double>
map<string, decimal(5,2)>  → map<string, decimal(8,2)>

Automatically refresh tables

For information on how to set up automatic refresh, see the instructions for your cloud provider:

These instructions include a step for creating a table from Parquet files with automated refresh enabled on the table.

Manually refresh tables

Important

When auto refresh is enabled on a table, you can’t perform a manual refresh; to manually refresh the table, first disable auto refresh.

After you create an Iceberg table from Parquet files, you can refresh the table data using the ALTER ICEBERG TABLE command.

ALTER ICEBERG TABLE [ IF EXISTS ] <table_name> REFRESH ['<relative_path>']

Where:

relative_path

Optional path to a Parquet file or a directory of Parquet files that you want to refresh.

Note

If you specify a relative path that does not exist, the table refresh proceeds as if no relative path was specified.

Example: Refresh all of the files in a table’s BASE_LOCATION

To manually refresh all of the files in the table’s BASE_LOCATION, omit the relative path argument:

ALTER ICEBERG TABLE myIcebergTable REFRESH;

Example: Refresh the files in a subpath from the BASE_LOCATION

To manually refresh a set of Parquet files in a directory, specify a relative path to that directory from the table’s BASE_LOCATION:

ALTER ICEBERG TABLE myIcebergTable REFRESH '/relative/path/to/myParquetDataFiles';

Example: Refresh a particular file

To manually refresh a particular Parquet file, specify a relative path to that file from the BASE_LOCATION:

ALTER ICEBERG TABLE myIcebergTable REFRESH '/relative/path/to/myParquetFile.parquet';

Example: Refresh a particular partition

To manually refresh a particular partition, specify a relative path to that partition from the BASE_LOCATION:

ALTER ICEBERG TABLE store_returns REFRESH '/sr_returned_date_sk=20231201/';

Refresh an Iceberg table automatically for Amazon S3

If you manage your Parquet source files in Amazon S3, you can create an Iceberg table that uses Amazon SNS (Simple Notification Service) for automatic refresh.

This section provides instructions for creating an Iceberg table that automatically refreshes the Parquet source files.

Prerequisite: Create an Amazon SNS topic and subscription

  1. Create an SNS topic in your AWS account to handle all messages for the Snowflake external volume location on your S3 bucket.

  2. Subscribe your target destinations for the S3 event notifications (for example, other SQS queues or AWS Lambda workloads) to this topic. SNS publishes event notifications for your bucket to all subscribers to the topic.

For full instructions, see the SNS documentation (https://aws.amazon.com/documentation/sns/).

Step 1: Subscribe the Snowflake SQS queue to your SNS topic

  1. Log in to the AWS Management Console.

  2. From the home dashboard, select Simple Notification Service (SNS).

  3. In the left-hand navigation pane, select Topics.

  4. Locate the topic for your S3 bucket. Note the topic ARN.

  5. Using a Snowflake client, query the SYSTEM$GET_AWS_SNS_IAM_POLICY system function with your SNS topic ARN:

    SELECT SYSTEM$GET_AWS_SNS_IAM_POLICY('<sns_topic_arn>');

    The function returns an IAM policy that grants a Snowflake SQS queue permission to subscribe to the SNS topic.

  6. Return to the AWS Management console. In the left-hand navigation pane, select Topics.

  7. Select the topic for your S3 bucket, then select Edit. The Edit page opens.

  8. Select Access policy - Optional to expand this area of the page.

  9. Merge the IAM policy addition from the SYSTEM$GET_AWS_SNS_IAM_POLICY function results into the JSON document.

  10. To allow S3 to publish event notifications for the bucket to the SNS topic, add an additional policy grant.

    For example:

    {
        "Sid":"s3-event-notifier",
        "Effect":"Allow",
        "Principal":{
           "Service":"s3.amazonaws.com"
        },
        "Action":"SNS:Publish",
        "Resource":"arn:aws:sns:us-west-2:001234567890:s3_mybucket",
        "Condition":{
           "ArnLike":{
              "aws:SourceArn":"arn:aws:s3:*:*:s3_mybucket"
           }
        }
     }
    

    Merged IAM policy:

    {
      "Version":"2008-10-17",
      "Id":"__default_policy_ID",
      "Statement":[
         {
            "Sid":"__default_statement_ID",
            "Effect":"Allow",
            "Principal":{
               "AWS":"*"
            }
            ..
         },
         {
            "Sid":"1",
            "Effect":"Allow",
            "Principal":{
              "AWS":"arn:aws:iam::123456789001:user/vj4g-a-abcd1234"
             },
             "Action":[
               "sns:Subscribe"
             ],
             "Resource":[
               "arn:aws:sns:us-west-2:001234567890:s3_mybucket"
             ]
         },
         {
            "Sid":"s3-event-notifier",
            "Effect":"Allow",
            "Principal":{
               "Service":"s3.amazonaws.com"
            },
            "Action":"SNS:Publish",
            "Resource":"arn:aws:sns:us-west-2:001234567890:s3_mybucket",
            "Condition":{
               "ArnLike":{
                  "aws:SourceArn":"arn:aws:s3:*:*:s3_mybucket"
               }
            }
          }
       ]
     }
    
  11. Select Save changes.

Step 2: Create an external volume with your AWS SNS topic

To configure an external volume, complete the instructions for Configure an external volume for Amazon S3.

In Step 4: Creating an external volume in Snowflake, specify the following additional parameter:

AWS_SNS_TOPIC = '<sns_topic_arn>'

Specifies the Amazon Resource Name (ARN) of the Amazon SNS topic that handles all messages for your external volume location.

For example:

CREATE OR REPLACE EXTERNAL VOLUME auto_refresh_exvol
  STORAGE_LOCATIONS = (
    (
      NAME = 'my-s3-us-east-1'
      STORAGE_PROVIDER = 'S3'
      STORAGE_BASE_URL = 's3://s3_mybucket/'
      STORAGE_AWS_ROLE_ARN = 'arn:aws:iam::0123456789102:role/my-role'
      AWS_SNS_TOPIC = 'arn:aws:sns:us-east-1:0123456789102:sns_topic'
    )
  );

Step 3: Create a catalog integration

Create a catalog integration by using the CREATE CATALOG INTEGRATION command. To indicate that the catalog integration is for Iceberg tables created from Parquet source files, set the CATALOG_SOURCE parameter equal to OBJECT_STORE and the TABLE_FORMAT parameter equal to NONE.

Note

Snowflake does not support creating Iceberg tables from Parquet-based table definitions in the AWS Glue Data Catalog.

The following example creates a catalog integration for Parquet files in object storage.

CREATE OR REPLACE CATALOG INTEGRATION icebergCatalogInt
  CATALOG_SOURCE = OBJECT_STORE
  TABLE_FORMAT = NONE
  ENABLED=TRUE;

Step 4: Create an Iceberg table

Create an Iceberg table by using the CREATE ICEBERG TABLE command, setting the AUTO_REFRESH parameter equal to TRUE.

CREATE OR REPLACE ICEBERG TABLE my_s3_auto_refresh_table (
    first_name STRING,
    last_name STRING,
    amount NUMBER,
    create_date DATE
  )
  CATALOG = icebergCatalogInt
  EXTERNAL_VOLUME = myIcebergVolume
  BASE_LOCATION='relative_path_from_external_volume'
  AUTO_REFRESH = true;

Example: Create an Iceberg table from Parquet files using automatic schema inference and evolution with auto refresh

The following example creates an Iceberg table from Parquet files using:

  • Automatic schema inference without including a column definition

  • Automatic schema evolution in auto refresh

CREATE OR REPLACE ICEBERG TABLE auto_schema_table
  EXTERNAL_VOLUME = 'exvol'
  CATALOG = 'catint'
  BASE_LOCATION = 'auto_schema_table/'
  AUTO_REFRESH = TRUE;

Alternatively, you can include a column definition to provide information about certain columns. Snowflake uses the definition to create those columns, then automatically detects other table columns. In this scenario, you must specify INFER_SCHEMA = TRUE since you include a column definition.

CREATE OR REPLACE ICEBERG TABLE auto_schema_table_col_spec (col1 INT)
  EXTERNAL_VOLUME = 'exvol'
  CATALOG = 'catint'
  BASE_LOCATION = 'auto_schema_table_col_spec/'
  INFER_SCHEMA = TRUE
  AUTO_REFRESH = TRUE;

Troubleshoot

To track the status of automatic refreshes for your Iceberg table, use the SYSTEM$ICEBERG_TABLE_AUTO_REFRESH_STATUS function.

For example:

SELECT SYSTEM$ICEBERG_TABLE_AUTO_REFRESH_STATUS('my_s3_auto_refresh_table');

Refresh an Iceberg table automatically for Azure Blob Storage

If you manage your Parquet source files in Microsoft Azure, you can create an Iceberg table that uses Azure Event Grid for automatic refresh.

This section provides instructions for creating an Iceberg table that automatically refreshes the Parquet source files.

Supported accounts, APIs, and schemas

Snowflake supports the following types of blob storage accounts:

  • Blob storage

  • Data Lake Storage Gen2

  • General-purpose v2

Automatic refresh of your Iceberg table from Parquet files isn’t supported for Microsoft Fabric OneLake. For OneLake Iceberg tables from Parquet files, you must manually refresh the table by using the ALTER ICEBERG TABLE … REFRESH command.

Note

Only Microsoft.Storage.BlobCreated and Microsoft.Storage.BlobDeleted events trigger the refreshing of the Parquet source files. Adding new objects to blob storage triggers these events. Renaming a directory or object doesn’t trigger these events. Snowflake recommends that you only send supported events for Iceberg tables from Parquet files to reduce costs, event noise, and latency.

Triggering automated refreshes of the Parquet source files using Azure Event Grid messages is supported only for Snowflake accounts hosted on Microsoft Azure.

Snowflake supports the following Microsoft.Storage.BlobCreated APIs:

  • CopyBlob

  • PutBlob

  • PutBlockList

  • FlushWithClose

  • SftpCommit

Snowflake supports the following Microsoft.Storage.BlobDeleted APIs:

  • DeleteBlob

  • DeleteFile

  • SftpRemove

For Data Lake Storage Gen2 storage accounts, Microsoft.Storage.BlobCreated events are triggered when clients use the CreateFile and FlushWithClose operations. If the SSH File Transfer Protocol (SFTP) is used, Microsoft.Storage.BlobCreated events are triggered with SftpCreate and SftpCommit operations. The CreateFile or SftpCreate API alone does not indicate a commit of a file in the storage account. If the FlushWithClose or SftpCommit message is not sent, Snowflake does not refresh the Parquet source files.

Snowflake only supports the Azure Event Grid event schema (https://learn.microsoft.com/en-us/azure/event-grid/event-schema); it doesn’t support the CloudEvents schema with Azure Event Grid (https://learn.microsoft.com/en-us/azure/event-grid/cloud-event-schema).

Iceberg tables for Snowflake from Parquet files that you manage in object storage don’t support storage versioning.

Prerequisites

Before you proceed, ensure you meet the following prerequisites:

  • A role that has the CREATE EXTERNAL VOLUME and CREATE ICEBERG TABLE privileges on a schema.

  • Administrative access to Microsoft Azure. If you aren’t an Azure administrator, ask your Azure administrator to complete the steps in Step 1: Configure the Event Grid subscription.

Step 1: Configure the Event Grid subscription

This section describes how to set up an Event Grid subscription for Azure Storage events using the Azure CLI.

Create a resource group

An Event Grid topic provides an endpoint where the source (that is, Azure Storage) sends events. A topic is used for a collection of related events. Event Grid topics are Azure resources, and must be placed in an Azure resource group.

Execute the following command to create a resource group:

az group create --name <resource_group_name> --location <location>

Where:

  • <resource_group_name> is the name of the new resource group.

  • <location> is the location, or region in Snowflake terminology, of your Azure Storage account.

Enable the Event Grid resource provider

Execute the following command to register the Event Grid resource provider. Note that this step is only required if you have not previously used Event Grid with your Azure account:

az provider register --namespace Microsoft.EventGrid
az provider show --namespace Microsoft.EventGrid --query "registrationState"

Create a storage account for data files

Execute the following command to create a storage account to store your data files. This account must be either a Blob storage (that is, a BlobStorage kind) or GPv2 (that is, a StorageV2 kind) account, because only these two account types support event messages.

Note

If you already have a Blob storage or GPv2 account, you can use that account instead.

For example, create a Blob storage account:

az storage account create \
  --resource-group <resource_group_name> \
  --name <storage_account_name> \
  --sku Standard_LRS \
  --location <location> \
  --kind BlobStorage \
  --access-tier Hot

Where:

  • <resource_group_name> is the name of the resource group you created in Create a resource group.

  • <storage_account_name> is the name of the new storage account.

  • <location> is the location of your Azure Storage account.

Create a storage account for the storage queue

Execute the following command to create a storage account to host your storage queue. This account must be a GPv2 account, because only this kind of account supports event messages to a storage queue.

Note

If you already have a GPv2 account, you can use that account to host both your data files and your storage queue.

For example, create a GPv2 account:

az storage account create \
  --resource-group <resource_group_name> \
  --name <storage_account_name> \
  --sku Standard_LRS \
  --location <location> \
  --kind StorageV2

Where:

  • <resource_group_name> is the name of the resource group you created in Create a resource group.

  • <storage_account_name> is the name of the new storage account.

  • <location> is the location of your Azure Storage account.

Create a storage queue

A single Azure Queue Storage queue can collect the event messages for many Event Grid subscriptions. For best performance, Snowflake recommends creating a single storage queue to accommodate all of your subscriptions related to Snowflake.

Execute the following command to create a storage queue. A storage queue stores a set of messages, in this case event messages from Event Grid:

az storage queue create \
  --name <storage_queue_name> \
  --account-name <storage_account_name>

Where:

  • <storage_queue_name> is the name of the new storage queue.

  • <storage_account_name> is the name of the GPv2 storage account you created in Create a storage account for the storage queue.

Export the storage account and queue IDs for reference

Execute the following commands to set environment variables for the storage account and queue IDs that will be requested later in these instructions:

Linux or macOS:

export storageid=$(az storage account show \
  --name <data_storage_account_name> \
  --resource-group <resource_group_name> \
  --query id --output tsv)
export queuestorageid=$(az storage account show \
  --name <queue_storage_account_name> \
  --resource-group <resource_group_name> \
  --query id --output tsv)
export queueid="$queuestorageid/queueservices/default/queues/<storage_queue_name>"

Windows:

for /f "delims=" %i in ('az storage account show --name <data_storage_account_name> --resource-group <resource_group_name> --query id --output tsv') do set storageid=%i
for /f "delims=" %i in ('az storage account show --name <queue_storage_account_name> --resource-group <resource_group_name> --query id --output tsv') do set queuestorageid=%i
set queueid=%queuestorageid%/queueservices/default/queues/<storage_queue_name>

Where:

  • <data_storage_account_name> is the name of the storage account that stores your data files.

  • <queue_storage_account_name> is the name of the storage account that hosts your storage queue.

  • <resource_group_name> is the name of the resource group you created in Create a resource group.

  • <storage_queue_name> is the name of the storage queue you created in Create a storage queue.

Install the Event Grid extension

Execute the following command to install the Event Grid extension for Azure CLI:

az extension add --name eventgrid

Create the Event Grid subscription

Execute the following command to create the Event Grid subscription. Subscribing to a topic informs Event Grid which events to track:

Linux or macOS:

az eventgrid event-subscription create \
  --source-resource-id $storageid \
  --name <subscription_name> \
  --endpoint-type storagequeue \
  --endpoint $queueid \
  --advanced-filter data.api stringin CopyBlob PutBlob PutBlockList \
    FlushWithClose SftpCommit DeleteBlob DeleteFile SftpRemove

Windows:

az eventgrid event-subscription create ^
  --source-resource-id %storageid% ^
  --name <subscription_name> ^
  --endpoint-type storagequeue ^
  --endpoint %queueid% ^
  --advanced-filter data.api stringin CopyBlob PutBlob PutBlockList ^
    FlushWithClose SftpCommit DeleteBlob DeleteFile SftpRemove

Where:

  • <subscription_name> is the name of the new Event Grid subscription.
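The `--advanced-filter data.api stringin …` clause forwards only events whose `data.api` value is one of the listed operations. The matching behaves roughly like the membership test below. This is an illustrative sketch only; the real filtering happens service-side in Event Grid, whose advanced-filter string comparisons are case-insensitive:

```python
# Illustrative sketch of the StringIn advanced filter used above.
# Event Grid evaluates this filter service-side; we mirror its
# case-insensitive string matching here.

ALLOWED_APIS = {
    "CopyBlob", "PutBlob", "PutBlockList", "FlushWithClose",
    "SftpCommit", "DeleteBlob", "DeleteFile", "SftpRemove",
}
_ALLOWED_LOWER = {api.lower() for api in ALLOWED_APIS}

def passes_filter(event):
    """Return True if the event would be delivered to the storage queue."""
    return event.get("data", {}).get("api", "").lower() in _ALLOWED_LOWER

print(passes_filter({"data": {"api": "PutBlob"}}))  # True
print(passes_filter({"data": {"api": "GetBlob"}}))  # False
```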

Step 2: Create a notification integration

A notification integration is a Snowflake object that provides an interface between Snowflake and a third-party cloud message queuing service such as Azure Event Grid.

Note

A single notification integration supports a single Azure Storage queue. Referencing the same storage queue in multiple notification integrations can result in missing data in target tables because event notifications are split between notification integrations.

Retrieve the storage queue URL and tenant ID

  1. Sign in to the Microsoft Azure portal.

  2. Navigate to Storage account » Queue service » Queues. Record the URL for the queue you created in Create a storage queue for reference later. The URL has the following format:

    https://<storage_account_name>.queue.core.windows.net/<storage_queue_name>
    
  3. Navigate to Azure Active Directory » Properties. Record the Tenant ID value for reference later. The directory ID, or tenant ID, is needed to grant Snowflake access to the Event Grid subscription.
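For scripting, the queue URL in the format shown above can be assembled from its two parts. The `queue_url` function below is a hypothetical helper, not part of any SDK:

```python
def queue_url(storage_account_name, storage_queue_name):
    """Build the Azure Queue Storage URL in the format shown above."""
    return (
        f"https://{storage_account_name}.queue.core.windows.net/"
        f"{storage_queue_name}"
    )

print(queue_url("myaccount", "mystoragequeue"))
# https://myaccount.queue.core.windows.net/mystoragequeue
```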

Create the notification integration

Create a notification integration by using the CREATE NOTIFICATION INTEGRATION command.

Note

  • Only account administrators (users with the ACCOUNTADMIN role) or a role with the global CREATE INTEGRATION privilege can execute this SQL command.

  • The Azure service principal for notification integrations is different from the service principal created for storage integrations.

CREATE NOTIFICATION INTEGRATION <integration_name>
  ENABLED = true
  TYPE = QUEUE
  NOTIFICATION_PROVIDER = AZURE_STORAGE_QUEUE
  AZURE_STORAGE_QUEUE_PRIMARY_URI = '<queue_URL>'
  AZURE_TENANT_ID = '<directory_ID>';

Where:

  • <integration_name> is the name of the new notification integration.

  • <queue_URL> is the URL of the storage queue that you recorded in Retrieve the storage queue URL and tenant ID.

  • <directory_ID> is the tenant ID that you recorded in Retrieve the storage queue URL and tenant ID.

For example:

CREATE NOTIFICATION INTEGRATION my_notification_int
  ENABLED = true
  TYPE = QUEUE
  NOTIFICATION_PROVIDER = AZURE_STORAGE_QUEUE
  AZURE_STORAGE_QUEUE_PRIMARY_URI = 'https://myqueue.queue.core.windows.net/mystoragequeue'
  AZURE_TENANT_ID = 'a123bcde-1234-5678-abc1-9abc12345678';

Grant Snowflake access to the storage queue

  1. Execute the DESCRIBE INTEGRATION command to retrieve the consent URL:

    DESC NOTIFICATION INTEGRATION <integration_name>;
    

    Where:

      • <integration_name> is the name of the notification integration you created in Create the notification integration.

    Note the values in the following columns:

    AZURE_CONSENT_URL:

    URL to the Microsoft permissions request page.

    AZURE_MULTI_TENANT_APP_NAME:

    Name of the Snowflake client application created for your account. In a later step in this section, you will need to grant this application the permissions necessary to obtain an access token on your allowed topic.

  2. In a web browser, navigate to the URL in the AZURE_CONSENT_URL column. The page displays a Microsoft permissions request page.

  3. Select Accept. This action allows the Azure service principal created for your Snowflake account to obtain an access token on any resource inside your tenant. Obtaining an access token succeeds only if you grant the service principal the appropriate permissions on the container (see the next step).

    The Microsoft permissions request page redirects to the Snowflake corporate site (snowflake.com).

  4. Sign in to the Microsoft Azure portal.

  5. Navigate to Azure Active Directory » Enterprise applications. Verify that the Snowflake application identifier you recorded in Step 1 in this section is listed.

    Important

    If you delete the Snowflake application in Azure Active Directory at a later time, the notification integration stops working.

  6. Navigate to Queues » <storage_queue_name>, where <storage_queue_name> is the name of the storage queue you created in Create a storage queue.

  7. Select Access Control (IAM) » Add role assignment.

  8. Search for the Snowflake service principal. This is the identity in the AZURE_MULTI_TENANT_APP_NAME property in the DESC NOTIFICATION INTEGRATION output (in Step 1). Search for the string before the underscore in the AZURE_MULTI_TENANT_APP_NAME property.

    Important

    • It can take an hour or longer for Azure to create the Snowflake service principal requested through the Microsoft request page in this section. If the service principal is not available immediately, we recommend waiting an hour or two and then searching again.

    • If you delete the service principal, the notification integration stops working.

  9. Grant the Snowflake app the following permissions:

    • Role: Storage Queue Data Message Processor (the minimum required role), or Storage Queue Data Contributor.

    • Assign access to: Azure AD user, group, or service principal.

    • Select: The appDisplayName value.

    The Snowflake application identifier should now be listed under Storage Queue Data Message Processor or Storage Queue Data Contributor (on the same dialog).

Step 3: Create an external volume with your Azure storage queue

To configure an external volume, complete the instructions for Configure an external volume for Azure.

In Step 1: Create an external volume in Snowflake, specify the following additional parameter:

AZURE_STORAGE_QUEUE_PRIMARY_URI = '<queue_URL>'

Specifies the URL of the Azure Storage queue that handles all messages for your external volume location.

For example:

CREATE OR REPLACE EXTERNAL VOLUME auto_refresh_exvol
  STORAGE_LOCATIONS = (
    (
      NAME = 'my-azure-location'
      STORAGE_PROVIDER = 'AZURE'
      STORAGE_BASE_URL = 'azure://myaccount.blob.core.windows.net/mycontainer/'
      AZURE_TENANT_ID = 'a123b4c5-1234-123a-a12b-1a23b45678c9'
      AZURE_STORAGE_QUEUE_PRIMARY_URI = 'https://myqueue.queue.core.windows.net/mystoragequeue'
    )
  );

Step 4: Create a catalog integration

Create a catalog integration by using the CREATE CATALOG INTEGRATION command. To indicate that the catalog integration is for Iceberg tables created from Parquet source files, set the CATALOG_SOURCE parameter equal to OBJECT_STORE and the TABLE_FORMAT parameter equal to NONE.

Note

Snowflake does not support creating Iceberg tables from Parquet-based table definitions in the AWS Glue Data Catalog.

The following example creates a catalog integration for Parquet files in object storage.

CREATE OR REPLACE CATALOG INTEGRATION icebergCatalogInt
  CATALOG_SOURCE = OBJECT_STORE
  TABLE_FORMAT = NONE
  ENABLED = TRUE;

Step 5: Create an Iceberg table

Create an Iceberg table by using the CREATE ICEBERG TABLE command, setting the AUTO_REFRESH parameter equal to TRUE.

CREATE OR REPLACE ICEBERG TABLE my_azure_auto_refresh_table (
    first_name STRING,
    last_name STRING,
    amount NUMBER,
    create_date DATE
  )
  CATALOG = icebergCatalogInt
  EXTERNAL_VOLUME = myIcebergVolume
  BASE_LOCATION = 'relative_path_from_external_volume'
  AUTO_REFRESH = true;

Example: Create an Iceberg table from Parquet files using automatic schema inference with auto refresh

The following example creates an Iceberg table from Parquet files using:

  • Automatic schema inference without including a column definition

  • Auto refresh

CREATE OR REPLACE ICEBERG TABLE auto_schema_table
  EXTERNAL_VOLUME = 'exvol'
  CATALOG = 'catint'
  BASE_LOCATION = 'auto_schema_table/'
  AUTO_REFRESH = TRUE;

Alternatively, you can include a column definition to provide information about certain columns. Snowflake uses the definition to create those columns, then automatically detects the other table columns. When you include a column definition in this way, you must also specify INFER_SCHEMA = TRUE.

CREATE OR REPLACE ICEBERG TABLE auto_schema_table_col_spec (col1 INT)
  EXTERNAL_VOLUME = 'exvol'
  CATALOG = 'catint'
  BASE_LOCATION = 'auto_schema_table_col_spec/'
  INFER_SCHEMA = TRUE
  AUTO_REFRESH = TRUE;

Troubleshoot

To track the status of automatic refreshes for your Iceberg table, use the SYSTEM$ICEBERG_TABLE_AUTO_REFRESH_STATUS function.

For example:

SELECT SYSTEM$ICEBERG_TABLE_AUTO_REFRESH_STATUS('my_azure_auto_refresh_table');

Data type mapping

When you define a column in a CREATE ICEBERG TABLE statement for Parquet source files, you must specify a Snowflake data type that maps to the Parquet data type used in your source files.

Note

In addition to data types that are compatible with Iceberg, the following non-Iceberg data types are also supported:

  • BYTE_ARRAY

  • INT96

For more information, see the data type mapping table below.

The following table shows how Parquet logical types map to physical types, and how the physical types map to Snowflake data types.

Parquet logical type                          | Parquet physical type                     | Snowflake data type
----------------------------------------------|-------------------------------------------|--------------------
None                                          | BOOLEAN                                   | BOOLEAN
None, INT(bitWidth=8/16/32, isSigned=true)    | INT32                                     | INT
None, INT(bitWidth=64, isSigned=true)         | INT64                                     | BIGINT
None                                          | FLOAT                                     | FLOAT
None                                          | DOUBLE                                    | FLOAT
DECIMAL(P,S)                                  | INT32, INT64, or FIXED_LEN_BYTE_ARRAY(N)  | DECIMAL(P,S)
DATE                                          | INT32                                     | DATE
TIME(isAdjustedToUTC=true, unit=MILLIS)       | INT32                                     | TIME(3)
TIME(isAdjustedToUTC=true, unit=MICROS)       | INT64                                     | TIME(6)
TIME(isAdjustedToUTC=true, unit=NANOS)        | INT64                                     | TIME(9)
None                                          | INT96                                     | TIMESTAMP_LTZ(9)
TIMESTAMP(isAdjustedToUTC=true, unit=MILLIS)  | INT64                                     | TIMESTAMP_NTZ(3)
TIMESTAMP(isAdjustedToUTC=true, unit=MICROS)  | INT64                                     | TIMESTAMP_NTZ(6)
TIMESTAMP(isAdjustedToUTC=true, unit=NANOS)   | INT64                                     | TIMESTAMP_NTZ(9)
STRING                                        | BYTE_ARRAY                                | VARCHAR
ENUM                                          | BYTE_ARRAY                                | VARCHAR
JSON                                          | BYTE_ARRAY                                | VARCHAR
UUID                                          | FIXED_LEN_BYTE_ARRAY(16)                  | BINARY(16)
None                                          | FIXED_LEN_BYTE_ARRAY(N)                   | BINARY(N)
None, BSON                                    | BYTE_ARRAY                                | BINARY
INTERVAL                                      | FIXED_LEN_BYTE_ARRAY(12)                  | BINARY(12)

Snowflake does not support a corresponding data type for the Parquet INTERVAL type, and reads the data from source files as binary data.

The following table shows how Parquet nested data types map to Snowflake data types.

Parquet logical nested type | Snowflake data type
----------------------------|--------------------
None                        | Structured OBJECT
LIST                        | Structured ARRAY
MAP                         | MAP
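If you validate column definitions programmatically, the mappings in the tables above can be expressed as a lookup table. The sketch below covers only a handful of rows and is illustrative; it is not a Snowflake API:

```python
# Illustrative lookup of (Parquet logical type, Parquet physical type)
# -> Snowflake data type, covering a few rows of the tables above.
PARQUET_TO_SNOWFLAKE = {
    (None, "BOOLEAN"): "BOOLEAN",
    (None, "INT32"): "INT",
    (None, "INT64"): "BIGINT",
    (None, "DOUBLE"): "FLOAT",
    (None, "INT96"): "TIMESTAMP_LTZ(9)",
    ("STRING", "BYTE_ARRAY"): "VARCHAR",
    ("DATE", "INT32"): "DATE",
    ("UUID", "FIXED_LEN_BYTE_ARRAY(16)"): "BINARY(16)",
    ("INTERVAL", "FIXED_LEN_BYTE_ARRAY(12)"): "BINARY(12)",
    # Nested types map by logical type alone:
    ("LIST", None): "Structured ARRAY",
    ("MAP", None): "MAP",
}

def snowflake_type(logical, physical):
    """Look up the Snowflake type for a Parquet logical/physical pair."""
    return PARQUET_TO_SNOWFLAKE.get((logical, physical))

print(snowflake_type("STRING", "BYTE_ARRAY"))  # VARCHAR
print(snowflake_type(None, "INT96"))           # TIMESTAMP_LTZ(9)
```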

Limitations for querying Parquet files directly in your data lake

  • By default, the maximum number of Parquet files that you can use to create an Iceberg table is approximately 2 million.

    To exceed this limit, contact Snowflake Support for assistance.

  • Parquet files that use any of the following features or data types are not supported:

    • Field IDs.

    • The DECIMAL data type with precision higher than 38.

    • LIST or MAP types with one-level or two-level representation.

    • Unsigned integer types (INT(signed=false)).

    • The FLOAT16 data type.

  • Snowflake does not support creating Iceberg tables from Parquet-based table definitions in the AWS Glue Data Catalog.

  • Generating Iceberg metadata using the SYSTEM$GET_ICEBERG_TABLE_INFORMATION function is not supported.

  • When auto refresh is enabled on a table, you can’t perform a manual refresh on it; to refresh the table manually, first disable auto refresh.

  • For the read-only Iceberg tables that you create from Parquet files:

    • You can’t generate Iceberg metadata for these tables.

    • You can’t convert these tables to Snowflake-managed Iceberg tables.

    • You can’t perform DML operations on these tables; they are read only.

    • You can’t perform table maintenance on these tables.

  • The following Snowflake features aren’t currently supported for read-only Iceberg tables that you create from Parquet files:

    • Cloning

    • Replication

    • Change tracking

    • Dynamic tables

    • Data sharing

    • Snowflake Native Apps