February 2022¶
The following new features, behavior changes, and updates (enhancements, fixes, etc.) have been introduced this month. If you have any questions about these additions, please contact Snowflake Support.
Important
Each release may include updates that require the web interface to be refreshed.
As a general practice, to ensure these updates do not impact your usage, we recommend refreshing the web interface after each Snowflake release has been deployed.
New Features¶
Java UDTFs on Amazon Web Services — General Availability¶
With this release, Snowflake is pleased to announce the general availability of support for Java UDTFs (user-defined tabular functions) on Amazon Web Services (AWS).
Java UDTFs extend Snowflake’s native development capabilities by combining the advantages of table functions with the power, flexibility, and ease of programming in Java.
For more information, see Tabular Java UDFs (UDTFs).
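As an illustration, a minimal inline Java UDTF might look like the following sketch (the function name `split_words` and handler class are hypothetical; the handler follows the documented pattern of an output-row class, a static `getOutputClass()` method, and a `process()` method that returns a stream of output rows):

```sql
-- Hypothetical example: split a string into one row per word.
CREATE OR REPLACE FUNCTION split_words(input VARCHAR)
  RETURNS TABLE (word VARCHAR)
  LANGUAGE JAVA
  HANDLER = 'SplitWords'
  AS $$
    import java.util.stream.Stream;

    class SplitWords {
      // Each output row of the UDTF is an instance of this class.
      static class OutputRow {
        public String word;
        public OutputRow(String word) { this.word = word; }
      }

      public static Class getOutputClass() { return OutputRow.class; }

      // Called once per input row; returns zero or more output rows.
      public Stream<OutputRow> process(String input) {
        return Stream.of(input.split(" ")).map(OutputRow::new);
      }
    }
  $$;

SELECT word FROM TABLE(split_words('hello from snowflake'));
```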
Data Classification — Preview¶
With this release, Snowflake is pleased to announce the preview of data classification for all accounts using Snowflake Enterprise Edition (or higher).
Classification lets you categorize potentially personal and/or sensitive data stored in Snowflake tables and views, which in turn supports a variety of data governance, sharing, and privacy use cases, including:
Classification of PII (Personally Identifiable Information) data.
Policy management for setting and controlling access to private data.
Anonymization of personal data.
For more information, see Sensitive data classification.
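As a sketch of the workflow (the database, schema, and table names below are hypothetical), classification is driven by the EXTRACT_SEMANTIC_CATEGORIES function, and the suggested categories can then be applied to the columns as system tags:

```sql
-- Analyze the columns of a table and return suggested semantic
-- and privacy categories as a JSON result.
SELECT EXTRACT_SEMANTIC_CATEGORIES('my_db.my_schema.customers');

-- Optionally apply the suggested categories to the columns as system tags.
CALL ASSOCIATE_SEMANTIC_CATEGORY_TAGS(
  'my_db.my_schema.customers',
  EXTRACT_SEMANTIC_CATEGORIES('my_db.my_schema.customers'));
```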
Object Dependencies — Preview¶
With this release, Snowflake is pleased to announce preview support for object dependencies. An object dependency means that in order to operate on an object, the object being operated on must reference metadata for itself or for at least one other object.
Snowflake records these dependencies in the Account Usage view OBJECT_DEPENDENCIES, which you can query to discover the dependencies among your objects.
For more information, see Object Dependencies.
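For example, a query along these lines (using the view's documented columns) lists which objects each dependent object references:

```sql
-- One row per dependency: the referencing object and the object it depends on.
SELECT referencing_object_name,
       referencing_object_domain,
       referenced_object_name,
       referenced_object_domain
FROM snowflake.account_usage.object_dependencies;
```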
Snowpark Stored Procedures — Preview¶
With this release, we are pleased to announce support for Snowpark stored procedures, which allows you to write stored procedures in Scala using the Snowpark API.
In your stored procedure, you can use the Snowpark API for Scala to host your data pipelines in Snowflake. For example, you can write stored procedures in cases where you need to execute your Snowpark code without running a client application (e.g. from a task).
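A minimal inline Scala procedure might be sketched as follows (the procedure name, handler, and table names are hypothetical; the handler method receives a Snowpark `Session` as its first argument):

```sql
CREATE OR REPLACE PROCEDURE copy_filtered_rows()
  RETURNS STRING
  LANGUAGE SCALA
  RUNTIME_VERSION = '2.12'
  PACKAGES = ('com.snowflake:snowpark:latest')
  HANDLER = 'Pipeline.run'
  AS $$
    import com.snowflake.snowpark.Session
    import com.snowflake.snowpark.functions.col

    object Pipeline {
      // The first argument of the handler is the Snowpark session.
      def run(session: Session): String = {
        session.table("source_table")
               .filter(col("status") === "ACTIVE")
               .write.saveAsTable("target_table")
        "done"
      }
    }
  $$;

CALL copy_filtered_rows();
```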
New Regions¶
We are pleased to announce the immediate availability of the following new region:
| Cloud Platform | Region |
| --- | --- |
| Microsoft Azure | UAE North (Dubai) |
With the addition of this region, Snowflake now supports thirty global regions across three cloud platforms (AWS, GCP, and Azure), including three regions that support compliance with US government regulations.
The new region supports all Snowflake editions. You can provision initial accounts in the region through self-service (https://signup.snowflake.com/) or a Snowflake representative.
SQL Updates¶
Account Usage: New Views¶
The following view in the ACCOUNT_USAGE schema is now available with this release:
| View | Description |
| --- | --- |
| OBJECT_DEPENDENCIES | This ACCOUNT_USAGE view displays one row for each object dependency. For example, when a view is created from a single table, the view depends on that table, and Snowflake returns one row to record the dependency. For more information, see Object Dependencies — Preview (in this topic). |
Data Loading Updates¶
Snowpipe: Automated Loads Using Google Cloud Storage Event Notifications — General Availability¶
With this release, we are pleased to announce the general availability of Snowpipe data loads triggered by Google Cloud Storage (GCS) event notifications delivered using Google Cloud Pub/Sub (i.e. “auto-ingest Snowpipe for GCS”).
Note that auto-ingest Snowpipe for Amazon S3 or Microsoft Azure blob storage is already generally available.
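A pipe that loads from GCS via Pub/Sub notifications is created with AUTO_INGEST = TRUE and a notification integration; all object, project, and subscription names below are hypothetical:

```sql
-- Notification integration that subscribes to the bucket's Pub/Sub topic.
CREATE NOTIFICATION INTEGRATION gcs_notification_int
  TYPE = QUEUE
  NOTIFICATION_PROVIDER = GCP_PUBSUB
  ENABLED = TRUE
  GCP_PUBSUB_SUBSCRIPTION_NAME = 'projects/my-project/subscriptions/my-subscription';

-- Pipe that loads new files automatically as notifications arrive.
CREATE PIPE my_db.my_schema.gcs_pipe
  AUTO_INGEST = TRUE
  INTEGRATION = 'GCS_NOTIFICATION_INT'
  AS COPY INTO my_db.my_schema.raw_events
     FROM @my_db.my_schema.gcs_stage
     FILE_FORMAT = (TYPE = 'JSON');
```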
Data Pipeline Updates¶
Tasks: Manual Execution of Runs — Preview¶
With this release, we are pleased to announce the preview of the ability to manually execute a single run of a scheduled task (i.e. a standalone task or root task in a task tree). Executing a run of a root task triggers a cascading run of child tasks in the tree, as though the root task had run on its defined schedule. Previously, a scheduled task could only start when its next scheduled run occurred.
This feature is implemented through a new SQL command, EXECUTE TASK, which can be executed by the task owner (i.e. the role that has the OWNERSHIP privilege on the task) or any role that has the OPERATE privilege on the task. The command triggers an asynchronous run of the task.
The EXECUTE TASK command is useful for testing new or modified standalone tasks before you enable them in a production schedule. Call this SQL command in scripts or stored procedures, or execute it using third-party tools or services to integrate tasks into external data pipelines.
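For example (task and role names are hypothetical):

```sql
-- Allow a non-owner role to trigger runs of the task.
GRANT OPERATE ON TASK my_db.my_schema.nightly_load TO ROLE etl_tester;

-- Trigger a single asynchronous run outside the task's schedule.
EXECUTE TASK my_db.my_schema.nightly_load;
```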
Data Lake Updates¶
External Table Support for Delta Lake — Preview¶
With this release, we are pleased to announce preview support for Delta Lake (https://delta.io/) in external tables. Delta Lake is a table format on your data lake that supports ACID (atomicity, consistency, isolation, durability) transactions among other features. All data in Delta Lake is stored in Apache Parquet format.
Query the Parquet files in a Delta Lake by creating external tables that reference your cloud storage locations enhanced with Delta Lake.
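An external table over a Delta Lake location is created with TABLE_FORMAT = DELTA; the stage and table names here are hypothetical. Delta external tables are refreshed manually from the Delta transaction log:

```sql
CREATE EXTERNAL TABLE my_delta_table
  LOCATION = @my_stage/delta_table/
  FILE_FORMAT = (TYPE = PARQUET)
  TABLE_FORMAT = DELTA
  AUTO_REFRESH = FALSE;

-- Register the current set of files from the Delta transaction log.
ALTER EXTERNAL TABLE my_delta_table REFRESH;
```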
Preview features are intended for evaluation and testing purposes, and are not recommended for use in production.
Data Sharing Updates¶
Ecosystem Updates¶
SQL API: Updates to the Endpoints¶
With this release, the endpoints for the SQL API now include the version of the API. The updated endpoints are:
/api/v2/statements/
/api/v2/statements/statementHandle
/api/v2/statements/statementHandle/cancel
When sending a request to these new endpoints, you do not need to set the `format` field to `jsonv2` in the `resultSetMetaData` field. If the `format` field is set in the request, the SQL API ignores it.

The older, deprecated version of the SQL API does not support these endpoint changes or the `format` field behavior. When using that version of the SQL API, you must use the original endpoints and supply the `format` field as part of the request.
SQL API: Support for Concurrent Fetches¶
With this release, the SQL API removes the limitation on fetching results concurrently (i.e. in parallel with multiple threads); the API now supports requests from multiple threads.
This change is not applicable to the older, deprecated version of the SQL API.