June 2023

The following new features, behavior changes, and updates (enhancements, fixes, etc.) have been introduced this month. If you have any questions about these additions, please contact Snowflake Support.

Important

Each release may include updates that require the web interface to be refreshed.

As a general practice, to ensure these updates do not impact your usage, we recommend refreshing the web interface after each Snowflake release has been deployed.

New Features

Dynamic Tables — Preview

We are pleased to announce the preview of Dynamic Tables.

Dynamic tables are the building blocks of declarative data transformation pipelines. They significantly simplify data engineering in Snowflake and provide a reliable, cost-effective, and automated way to transform your data for consumption. Instead of defining data transformation steps as a series of tasks and having to monitor dependencies and scheduling, you can simply define the end state of the transformation using dynamic tables and leave the complex pipeline management to Snowflake.
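
For example, a single pipeline step can be expressed as one dynamic table definition. The following is a minimal sketch; the table names, warehouse, and lag value are hypothetical placeholders:

-- A dynamic table that keeps an aggregate up to date automatically.
-- Snowflake refreshes it within the specified target lag; no tasks
-- or scheduling are needed. All names here are illustrative.
CREATE OR REPLACE DYNAMIC TABLE daily_sales
  TARGET_LAG = '5 minutes'
  WAREHOUSE = transform_wh
  AS
    SELECT sale_date, SUM(amount) AS total_amount
    FROM raw_sales
    GROUP BY sale_date;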

For more information, see Dynamic Tables.

Amazon S3-compatible Storage — General Availability

We are pleased to announce the general availability of support for accessing data in Amazon S3-compatible storage. You can create external stages for on-premises or other cloud storage services and devices that are highly compliant with the Amazon S3 REST API. With this feature, you can efficiently manage, govern, and analyze your data regardless of where the data is stored.
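
For example, an external stage for an S3-compatible service uses the s3compat protocol and an explicit endpoint. The following is a minimal sketch; the bucket, endpoint, and credential values are hypothetical placeholders:

-- An external stage pointing at an S3-compatible storage endpoint.
CREATE STAGE my_s3compat_stage
  URL = 's3compat://my-bucket/files/'
  ENDPOINT = 'storage.example.com'
  CREDENTIALS = (AWS_KEY_ID = '...' AWS_SECRET_KEY = '...');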

For more information, see Working with Amazon S3-compatible storage.

Passing References for Tables, Views, Functions, and Queries to a Stored Procedure — Preview

We are pleased to announce the preview of the ability to pass references for tables, views, functions, and queries to a stored procedure.

A reference is a unique identifier for a table, view, function, or query. When you pass a reference to a stored procedure, the stored procedure performs actions using the active role or secondary roles of the user who created the reference. For example, if you are calling an owner’s rights stored procedure, you can create and pass in a reference to a table to allow the stored procedure to perform actions on the table using your active role.

In addition, if the table, view, or function is not fully qualified, the object name is resolved using the database and schema that were current when the reference was created (that is, the session context of the user who created the reference).
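
For example, a reference can be created with the SYSTEM$REFERENCE function and passed directly in the call. The following is a minimal sketch; the procedure name, table name, and privilege list are hypothetical:

-- Create a session-scoped reference to my_table with SELECT access
-- and pass it to an owner's rights stored procedure.
CALL my_proc(SYSTEM$REFERENCE('TABLE', 'my_table', 'SESSION', 'SELECT'));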

For more information, see Passing references for tables, views, functions, and queries to stored procedures.

Snowpark ML: Machine Learning at Scale — Preview

We are pleased to announce the preview of Snowpark ML. Snowpark ML is a set of Python tools, including SDKs and underlying infrastructure, for building and deploying machine learning models within Snowflake. This preview includes preprocessing and modeling classes based on popular machine learning libraries such as scikit-learn (https://scikit-learn.org/stable/), xgboost (https://xgboost.readthedocs.io/en/stable/), and lightgbm (https://lightgbm.readthedocs.io/en/stable/).

Snowpark ML works with Snowpark Python. You use Snowpark DataFrames to hold your training or test data and to receive your prediction results.

For more information, see Snowflake ML: End-to-End Machine Learning.

ML Functions — Preview

We are pleased to announce the preview of three new analysis tools powered by machine learning algorithms.

These three features train a machine learning model on your time-series data to determine how a specified metric varies over time and relative to other features. The model then provides insights and predictions based on the trends detected in the data.

  • Forecasting: Predicts future metric values from trends in historical data.

  • Anomaly Detection: Flags metric values that differ from typical expectations.

  • Contribution Explorer: Helps you find dimensions and values that affect the metric in surprising ways.
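
To illustrate, the following sketches the forecasting interface. Because this feature is in preview, the exact syntax may differ, and the model, view, and column names here are hypothetical placeholders:

-- Train a forecasting model on a time-series view.
CREATE SNOWFLAKE.ML.FORECAST sales_model(
  INPUT_DATA => SYSTEM$REFERENCE('VIEW', 'daily_sales_view'),
  TIMESTAMP_COLNAME => 'sale_date',
  TARGET_COLNAME => 'total_amount');

-- Predict the next 7 periods.
CALL sales_model!FORECAST(FORECASTING_PERIODS => 7);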

For more information, see ML Functions.

Native Applications Framework — Preview

We are pleased to announce the preview of the Native Apps Framework, which enables you to create data applications that expand the capabilities of other Snowflake features by sharing data and related business logic with other Snowflake accounts.

For more information, see About the Native Apps Framework and Tutorial: Developing an Application with the Native Apps Framework.

Custom Event Billing for Applications — Preview

We are pleased to announce the preview of Custom Event Billing, a usage-based pricing plan that providers can use to charge consumers for usage of apps built with the Snowflake Native Apps Framework.

For more information, see Paid Listings Pricing Models and Adding Billable Events to Applications.

Marketplace Capacity Drawdown Program — General Availability

We are pleased to announce the general availability of the Marketplace Capacity Drawdown Program, which allows eligible customers with a Capacity contract at Snowflake to pay for listings with their committed Capacity.

See Paying for Listings for more information.

Snowpipe Streaming Replication Support — Preview

With this release, we are pleased to announce support for Snowpipe Streaming with Snowflake replication. Snowflake now supports the replication and failover of tables populated by Snowpipe Streaming, along with their associated channel offsets, from a source account to a target account in a different region or on a different cloud platform. Snowpipe Streaming supports both database replication and group-based replication.

For more information, see Replication and Snowpipe Streaming.

Anonymous Procedures — General Availability

With this release, we are pleased to announce the general availability of support for creating anonymous procedures. An anonymous procedure is similar to a stored procedure, but it is not stored for later use.

You can create an anonymous procedure using the WITH…CALL syntax. With this command, you both create an anonymous procedure defined by the parameters in the WITH clause and call that procedure. You do not need a role with the CREATE PROCEDURE schema privilege to run this command.
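
For example, the following minimal sketch defines and immediately calls an anonymous SQL procedure; the procedure name and body are illustrative:

-- Define the procedure in the WITH clause, then call it immediately.
WITH double_it AS PROCEDURE (n NUMBER)
  RETURNS NUMBER
  LANGUAGE SQL
  AS
  $$
  BEGIN
    RETURN n * 2;
  END;
  $$
CALL double_it(21);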

Reading Files With a Java Function or Procedure Handler — General Availability

With this release, we are pleased to announce the general availability of support for reading staged files with UDF or procedure handler code written in Java.

For more information, see Reading a file with a Java UDF and Reading a file with a Java procedure.

Reading Files With a Scala Function or Procedure Handler — Preview

With this release, we are pleased to announce a preview of support for reading staged files with UDF or procedure handler code written in Scala.

For more information, see Reading a file with a Scala UDF and Reading a file with a Scala procedure.

Reading Files With a Python Function or Procedure — Preview

With this release, we are pleased to announce a preview of Python support for reading files with the SnowflakeFile class.

SnowflakeFile is a new class in the snowflake.snowpark.files module that provides dynamic read access for files on an internal or external stage. With SnowflakeFile, you can stream files to accomplish tasks such as reading unstructured data or using your own machine learning model in a user-defined function (UDF), user-defined table function (UDTF), or stored procedure.
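
For example, a Python UDF handler can open a staged file by its scoped URL. The following is a minimal sketch; the function name, stage, and file path are hypothetical:

CREATE OR REPLACE FUNCTION read_text_file(file_url STRING)
  RETURNS STRING
  LANGUAGE PYTHON
  RUNTIME_VERSION = '3.8'
  HANDLER = 'read_file'
  AS
$$
from snowflake.snowpark.files import SnowflakeFile

def read_file(file_url):
    # Stream the staged file rather than reading from a local path.
    with SnowflakeFile.open(file_url) as f:
        return f.read()
$$;

-- Call the UDF with a scoped URL for a file on a stage.
SELECT read_text_file(BUILD_SCOPED_FILE_URL(@my_stage, 'data/notes.txt'));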

For more information, see the Snowflake documentation.

Schema Detection for JSON and CSV — Preview

With this release, we are pleased to announce a preview of the schema detection feature for JSON and CSV. The schema detection feature uses the INFER_SCHEMA function to automatically detect the schema in a set of staged data files and retrieve the column definitions. The generally available INFER_SCHEMA function applies to Apache Parquet, Apache Avro, and ORC files; this preview expands support to include JSON and CSV files.
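
For example, column definitions for staged JSON files can be retrieved as follows. This is a minimal sketch; the stage and file format names are hypothetical:

-- Detect column definitions from staged JSON files.
CREATE FILE FORMAT my_json_format TYPE = JSON;

SELECT *
  FROM TABLE(
    INFER_SCHEMA(
      LOCATION => '@my_stage/json/',
      FILE_FORMAT => 'my_json_format'));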

For more information, see Schema detection of column definitions from staged semi-structured data files.

Table Schema Evolution — Preview

With this release, we are pleased to announce a preview of the table schema evolution feature. The structure of tables in Snowflake can now evolve automatically to support the structure of new data received from the data sources. Schema evolution supports adding new columns and dropping the NOT NULL constraint from columns that are missing in new data files; it does not automatically drop existing columns or change the data type, length, or precision of existing columns.

To enable table schema evolution, you can set the ENABLE_SCHEMA_EVOLUTION parameter to TRUE when you create or alter a table.
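
For example, the parameter can be set at creation time or on an existing table; the table name below is hypothetical:

CREATE OR REPLACE TABLE events (id NUMBER)
  ENABLE_SCHEMA_EVOLUTION = TRUE;

-- Or enable it on an existing table.
ALTER TABLE events SET ENABLE_SCHEMA_EVOLUTION = TRUE;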

For more information, see Table schema evolution.

Security Updates

Access Control: New Privilege for Delegating Warehouse Management — Preview

With this release, we are pleased to announce a preview of a new privilege for managing warehouses.

If you need to delegate the ability to alter, suspend, or resume any warehouse in your account to a custom role, you can grant the MANAGE WAREHOUSES privilege to that role. Granting the MANAGE WAREHOUSES privilege is equivalent to granting the MODIFY, MONITOR, and OPERATE privileges on all warehouses in the account.
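
For example, the following grants the privilege to a hypothetical custom role:

GRANT MANAGE WAREHOUSES ON ACCOUNT TO ROLE warehouse_admin;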

For more information, see Delegating warehouse management.

SQL Updates

New SQL Functions

The following function is now available with this release:

  • Function: ST_TRANSFORM
    Category: Geospatial Functions (Transformation)
    Description: Converts a GEOMETRY object from one spatial reference system (SRS) to another. This function is a preview feature.
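
For example, the following reprojects a point from WGS 84 (SRID 4326) to another SRS; the coordinates and target SRID are illustrative:

-- Reproject a WGS 84 point to the Dutch national grid (SRID 28992).
SELECT ST_TRANSFORM(
  TO_GEOMETRY('POINT(4.5002 52.1215)', 4326),
  28992);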

Improved Performance for SELECT Statements With LIMIT and ORDER BY Clauses — General Availability

With this release, we are pleased to announce that the performance of certain long-running SELECT statements containing both LIMIT and ORDER BY clauses has been significantly improved. This improvement is immediately available to all customers at no additional cost.

The improvement works by pruning micro-partitions that cannot affect the results of such “top K” queries. The additional pruning applies to queries where an integer-representable value (timestamp or integer, or variant explicitly cast to integer, but not an expression) is the first or only column specified in the ORDER BY clause. If the query contains a JOIN clause, the ORDER BY column must be from the fact table (or probe side), typically the larger of the two tables.
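
For instance, a query of the following shape can qualify, assuming event_ts is a timestamp column on a large table (the names are illustrative):

SELECT *
  FROM web_events
  ORDER BY event_ts DESC
  LIMIT 100;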

Queries on small tables generally do not benefit from this improvement. Queries that return fewer rows than the LIMIT clause specifies, or that use aggregations, also do not benefit.

Note that not every query benefits, even one that meets all of these requirements.

For more information on micro-partitions and query pruning, see Micro-partitions & Data Clustering.

Support for Python 3.10 in Snowpark, UDFs, UDTFs, and Stored Procedures — Preview

With this release, we are pleased to announce support for Python 3.10 in Snowpark Python, Python UDFs, Python UDTFs, and Python stored procedures, available to all accounts as a preview feature.

For more information, see the Snowflake documentation.

Support for Python 3.9 in Snowpark, UDFs, and Stored Procedures — Preview

With this release, we are pleased to announce support for Python 3.9 in Snowpark Python, Python UDFs, and Python stored procedures, available to all accounts as a preview feature.

For more information, see the Snowflake documentation.

UDFs, UDTFs, and Stored Procedures Support Passing Arguments by Name

When calling a UDF, UDTF, or stored procedure, you can now pass arguments by name, in addition to by position.

For example, suppose that you created a UDF with the following statement:

CREATE OR REPLACE FUNCTION add_numbers (n1 NUMBER, n2 NUMBER)
  RETURNS NUMBER
  AS 'n1 + n2';

To pass the arguments by name, specify the argument name followed by => and the argument value. For example:

SELECT add_numbers(n1 => 10, n2 => 5);

You can pass the arguments in any order:

SELECT add_numbers(n2 => 5, n1 => 10);

For more information, see the Snowflake documentation.

If there are multiple functions or procedures with the same name, the same number of arguments, and different data types for the arguments, you can specify the argument names in the call to indicate which function or procedure to execute. The argument names that you specify in the call take precedence over the argument positions. For more information, see Overloading procedures and functions.

Finally, a number of built-in functions also support passing arguments by name.

Data Science Updates

Work with Snowflake’s Upcoming ML Features

This release introduces a new schema, “ML”, in the SNOWFLAKE database, along with a new SNOWFLAKE database role, ML_USER, which is granted to the PUBLIC role in all Snowflake accounts that contain a shared SNOWFLAKE database.

For more information, see the Snowflake documentation.

The schema, roles, and privileges support features that will be made available in Public Preview at Snowflake Summit 2023.

Organization Updates

ACCOUNTS View (Organization Usage) — Preview

With this release, we are pleased to announce the preview of the ACCOUNTS view in the ORGANIZATION_USAGE schema. The ACCOUNTS view allows an organization administrator to obtain details about the accounts in an organization, including accounts deleted within the last year.
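
For example, an organization administrator can query the view directly. The selected columns below are assumptions based on the view’s described contents:

SELECT account_name, created_on, deleted_on
  FROM SNOWFLAKE.ORGANIZATION_USAGE.ACCOUNTS;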

For more information, see ACCOUNTS view.

Data Loading Updates

Support REPLACE_INVALID_CHARACTERS for Avro, Parquet, ORC, and XML

With this release, we are pleased to announce that the COPY INTO and CREATE EXTERNAL TABLE commands support the file format option REPLACE_INVALID_CHARACTERS for Avro, Parquet, ORC, and XML. Previously, this file format option was only supported for CSV and JSON.
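
For example, the option can now be set when loading Parquet files; the table and stage names are hypothetical:

-- Replace invalid UTF-8 characters while loading Parquet data.
COPY INTO my_table
  FROM @my_stage/parquet/
  FILE_FORMAT = (TYPE = PARQUET REPLACE_INVALID_CHARACTERS = TRUE)
  MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE;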

For more information, see CREATE FILE FORMAT.

Data Governance Updates

Tag-based Masking Policy: Support for Database & Schema — Preview

With this release, we are pleased to announce the preview of setting a tag-based masking policy on a database or schema. This update enables data engineers to protect all columns in a schema or database whose data type matches the data type of the policy set on the tag. A new column is also protected automatically when its data type matches. Setting a tag-based masking policy on the database or schema simplifies data protection management because you set the policy once rather than setting a masking policy on every column in the database or schema.
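
For example, a policy is attached to a tag once, and the tag is then set on a schema; all names below are hypothetical:

-- Attach a masking policy to a tag.
ALTER TAG governance.tags.pii SET MASKING POLICY governance.policies.mask_string;

-- Set the tag on a schema to protect all matching columns.
ALTER SCHEMA sales_db.customer_schema SET TAG governance.tags.pii = 'sensitive';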

For more information, see Tag-based masking policies.

Access History: Track Objects Modified by a DDL Operation — Preview

With this release, we are pleased to announce the preview of tracking objects modified by a DDL operation in the Account Usage ACCESS_HISTORY view. With this update, you can:

  • Track how tag and policy assignments change.

  • Track the table and column lifecycle.

The object_modified_by_ddl column records these changes. You can use this column to enhance your data auditing practices and detect new objects to classify to meet PII detection requirements.
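
For example, the following finds recent DDL changes recorded in the view; the filter and sort are illustrative:

SELECT query_start_time, user_name, object_modified_by_ddl
  FROM SNOWFLAKE.ACCOUNT_USAGE.ACCESS_HISTORY
  WHERE object_modified_by_ddl IS NOT NULL
  ORDER BY query_start_time DESC;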

For more information, see Access History.

Web Interface Updates

Load Files From a Stage Into a Table — General Availability

With this release, we are pleased to announce the general availability of loading files from a stage into a table by using Snowsight.

For more information, see Load data into an existing table using Snowsight.

New Organizations Only Have Snowsight Access

Starting May 30, 2023, new Snowflake organizations only have access to Snowsight and no longer have access to Classic Console.

For more information, see About the Snowsight upgrade.
