CREATE DATABASE

Creates a new database in the system.

This command supports the following variants:

  • CREATE OR ALTER DATABASE: Creates a new database if it doesn’t already exist, or transforms an existing database into the database defined in the statement.

  • CREATE DATABASE … CLONE: Creates a clone of an existing database.

In addition, this command can be used to:

  • Create a database from a listing.

  • Create a shared database from a share provided by another Snowflake account.

  • Create a secondary database as a replica of a primary database (database replication).

Tip

You can also use the Snowflake REST APIs to perform this operation. For more information, see the Snowflake REST APIs documentation.

See also:

ALTER DATABASE, DESCRIBE DATABASE, DROP DATABASE, SHOW DATABASES, UNDROP DATABASE

DESCRIBE SHARE, SHOW SHARES, CREATE LISTING, CREATE OR ALTER <object>

Syntax

Standard Database

CREATE [ OR REPLACE ] [ TRANSIENT ] DATABASE [ IF NOT EXISTS ] <name>
    [ CLONE <source_db>
        [ { AT | BEFORE } ( { TIMESTAMP => <timestamp> | OFFSET => <time_difference> | STATEMENT => <id> } ) ]
        [ IGNORE TABLES WITH INSUFFICIENT DATA RETENTION ]
        [ IGNORE HYBRID TABLES ] ]
    [ DATA_RETENTION_TIME_IN_DAYS = <integer> ]
    [ MAX_DATA_EXTENSION_TIME_IN_DAYS = <integer> ]
    [ EXTERNAL_VOLUME = <external_volume_name> ]
    [ CATALOG = <catalog_integration_name> ]
    [ REPLACE_INVALID_CHARACTERS = { TRUE | FALSE } ]
    [ DEFAULT_DDL_COLLATION = '<collation_specification>' ]
    [ STORAGE_SERIALIZATION_POLICY = { COMPATIBLE | OPTIMIZED } ]
    [ COMMENT = '<string_literal>' ]
    [ [ WITH ] TAG ( <tag_name> = '<tag_value>' [ , <tag_name> = '<tag_value>' , ... ] ) ]

Standard Database (from a listing)

CREATE DATABASE <name> FROM LISTING '<listing_global_name>'

Shared Database (from a Share)

CREATE DATABASE <name> FROM SHARE <provider_account>.<share_name>

Secondary Database (Database Replication)

CREATE DATABASE <name>
    AS REPLICA OF <account_identifier>.<primary_db_name>
    [ DATA_RETENTION_TIME_IN_DAYS = <integer> ]

Variant syntax

CREATE OR ALTER DATABASE

Creates a new database if it doesn’t already exist, or transforms an existing database into the database defined in the statement. A CREATE OR ALTER DATABASE statement follows the syntax rules of a CREATE DATABASE statement and has the same limitations as an ALTER DATABASE statement.

The following modifications are supported: setting, changing, or unsetting the database properties and parameters shown in the syntax below (for example, DATA_RETENTION_TIME_IN_DAYS or COMMENT).

For more information, see CREATE OR ALTER DATABASE usage notes.

CREATE OR ALTER [ TRANSIENT ] DATABASE <name>
    [ DATA_RETENTION_TIME_IN_DAYS = <integer> ]
    [ MAX_DATA_EXTENSION_TIME_IN_DAYS = <integer> ]
    [ EXTERNAL_VOLUME = <external_volume_name> ]
    [ CATALOG = <catalog_integration_name> ]
    [ REPLACE_INVALID_CHARACTERS = { TRUE | FALSE } ]
    [ DEFAULT_DDL_COLLATION = '<collation_specification>' ]
    [ LOG_LEVEL = '<log_level>' ]
    [ TRACE_LEVEL = '<trace_level>' ]
    [ STORAGE_SERIALIZATION_POLICY = { COMPATIBLE | OPTIMIZED } ]
    [ COMMENT = '<string_literal>' ]

CREATE DATABASE … CLONE

Creates a new database with the same parameter values as the source database:

CREATE [ OR REPLACE ] DATABASE [ IF NOT EXISTS ] <name> CLONE <source_database>
  [ ... ]

For more details, see CREATE <object> … CLONE.
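
For example, a minimal sketch that clones an existing database into a new one (mydb and mydb_dev are hypothetical names):

CREATE DATABASE mydb_dev CLONE mydb;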

Required parameters

name

Specifies the identifier for the database; must be unique for your account.

In addition, the identifier must start with an alphabetic character and cannot contain spaces or special characters unless the entire identifier string is enclosed in double quotes (e.g. "My object"). Identifiers enclosed in double quotes are also case-sensitive.

For more details, see Identifier requirements.
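
For illustration, a hypothetical identifier that contains spaces and therefore must be enclosed in double quotes (the quoted name is case-sensitive):

CREATE DATABASE "My Test DB";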

Important

As a best practice for Database Replication and Failover, we recommend giving each secondary database the same name as its primary database. This practice supports referencing fully-qualified objects (i.e. '<db>.<schema>.<object>') by other objects in the same database, such as querying a fully-qualified table name in a view.

If a secondary database has a different name from the primary database, then these object references would break in the secondary database.

Secure Data Sharing parameters

provider_account.share_name

Specifies the identifier of the share from which to create the database. The share name must be fully qualified with the name of the account providing the share (i.e. provider_account.share_name).

Database replication parameters

Attention

Snowflake recommends using the account replication feature to replicate databases.

AS REPLICA OF account_identifier.primary_db_name

Specifies the identifier for a primary database from which to create a replica (i.e. a secondary database). If the identifier contains spaces, special characters, or mixed-case characters, the entire string must be enclosed in double quotes.

Requires the account identifier and name of the primary database.

account_identifier

Unique identifier of the account that stores the primary database. The preferred identifier is organization_name.account_name. To view the list of accounts enabled for replication in your organization, query SHOW REPLICATION ACCOUNTS.

Though the legacy account locator can also be used as the account identifier, its use is discouraged as it may not work in the future. For more details about using the account locator as an account identifier, see Database Replication Usage Notes.

primary_db_name

Name of the primary database. As a best practice, we recommend giving each secondary database the same name as its primary database.

Note

As a best practice for Database Replication and Failover, we recommend setting the optional parameter DATA_RETENTION_TIME_IN_DAYS to the same value on the secondary database as on the primary database.
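
For illustration, a minimal sketch that creates a secondary database with the same name as its primary database; the organization name (myorg), account name (account1), database name (sales_db), and retention value are placeholders:

CREATE DATABASE sales_db
  AS REPLICA OF myorg.account1.sales_db
  DATA_RETENTION_TIME_IN_DAYS = 10;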

Listing parameters

'listing_global_name'

Specifies the global name of the listing from which to create the database, which must meet the following requirements:

  • Can’t be a monetized listing.

  • Listing terms, if not of type OFFLINE, must have been accepted using Snowsight.

  • Listing data products must be available locally in the current region.

    Whether a listing is available in the local region can be determined by viewing the is_ready_for_import column of DESCRIBE AVAILABLE LISTING.

You must have the IMPORT LISTING privilege to create a database from a listing. You must have the IMPORT SHARE privilege to create a database from a share.
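
For illustration, a minimal sketch that creates a database from a listing; the database name and listing global name are placeholders:

CREATE DATABASE weather_data FROM LISTING 'GZT0EXAMPLE123';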

Optional parameters

TRANSIENT

Specifies a database as transient. Transient databases do not have a Fail-safe period, so they do not incur additional storage costs once they leave Time Travel; however, this also means they are not protected by Fail-safe in the event of data loss. For more information, see Understanding and viewing Fail-safe.

In addition, by definition, all schemas (and consequently all tables) created in a transient database are transient. For more information about transient tables, see CREATE TABLE.

Default: No value (i.e. database is permanent)

CLONE source_db

Specifies to create a clone of the specified source database. For more details about cloning a database, see CREATE <object> … CLONE.

AT | BEFORE ( TIMESTAMP => timestamp | OFFSET => time_difference | STATEMENT => id )

When cloning a database, the AT | BEFORE clause specifies to use Time Travel to clone the database at or before a specific point in the past. If the specified Time Travel time is at or before the point in time when the database was created, the cloning operation fails with an error.

IGNORE TABLES WITH INSUFFICIENT DATA RETENTION

Ignore tables that no longer have historical data available in Time Travel to clone. If the time in the past specified in the AT | BEFORE clause is beyond the data retention period for any child table in a database or schema, skip the cloning operation for the child table. For more information, see Child Objects and Data Retention Time.

IGNORE HYBRID TABLES

Ignore hybrid tables, which will not be cloned. Use this option to clone a database that contains hybrid tables. The cloned database includes other objects but skips hybrid tables.

If you don’t use this option and your database contains one or more hybrid tables, the command ignores hybrid tables silently. However, the error handling for databases that contain hybrid tables will change in an upcoming release; therefore, you may want to add this parameter to your commands preemptively.
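
For illustration, a minimal sketch that combines these clone options: it clones a hypothetical database as it existed one hour ago (OFFSET is specified in seconds), skipping tables whose Time Travel history is insufficient and skipping hybrid tables:

CREATE DATABASE mydb_restore CLONE mydb
  AT (OFFSET => -3600)
  IGNORE TABLES WITH INSUFFICIENT DATA RETENTION
  IGNORE HYBRID TABLES;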

DATA_RETENTION_TIME_IN_DAYS = integer

Specifies the number of days for which Time Travel actions (CLONE and UNDROP) can be performed on the database, as well as specifying the default Time Travel retention time for all schemas created in the database. For more details, see Understanding & using Time Travel.

For a detailed description of this object-level parameter, as well as more information about object parameters, see Parameters.

Values:

  • Standard Edition: 0 or 1

  • Enterprise Edition:

    • 0 to 90 for permanent databases

    • 0 or 1 for transient databases

Default:

  • Standard Edition: 1

  • Enterprise Edition (or higher): 1 (unless a different default value was specified at the account level)

Note

A value of 0 effectively disables Time Travel for the database.

MAX_DATA_EXTENSION_TIME_IN_DAYS = integer

Object parameter that specifies the maximum number of days for which Snowflake can extend the data retention period for tables in the database to prevent streams on the tables from becoming stale.

For a detailed description of this parameter, see MAX_DATA_EXTENSION_TIME_IN_DAYS.

EXTERNAL_VOLUME = external_volume_name

Object parameter that specifies the default external volume to use for Apache Iceberg™ tables.

For more information about this parameter, see EXTERNAL_VOLUME.

CATALOG = catalog_integration_name

Object parameter that specifies the default catalog integration to use for Apache Iceberg™ tables.

For more information about this parameter, see CATALOG.

REPLACE_INVALID_CHARACTERS = { TRUE | FALSE }

Specifies whether to replace invalid UTF-8 characters with the Unicode replacement character (�) in query results for an Iceberg table. You can only set this parameter for tables that use an external Iceberg catalog.

  • TRUE replaces invalid UTF-8 characters with the Unicode replacement character.

  • FALSE leaves invalid UTF-8 characters unchanged. Snowflake returns a user error message when it encounters invalid UTF-8 characters in a Parquet data file.

Default: FALSE

DEFAULT_DDL_COLLATION = 'collation_specification'

Specifies a default collation specification for all schemas and tables added to the database. The default can be overridden at the schema and individual table level.

For more details about the parameter, see DEFAULT_DDL_COLLATION.

LOG_LEVEL = 'log_level'

Specifies the severity level of messages that should be ingested and made available in the active event table. Messages at the specified level (and at more severe levels) are ingested.

For more information about levels, see LOG_LEVEL. For information about setting the log level, see Setting levels for logging, metrics, and tracing.

TRACE_LEVEL = 'trace_level'

Controls how trace events are ingested into the event table.

For information about levels, see TRACE_LEVEL. For information about setting trace level, see Setting levels for logging, metrics, and tracing.
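
For illustration, a hedged sketch that sets both observability parameters on a hypothetical database using the CREATE OR ALTER variant (which is where these parameters appear in the syntax above):

CREATE OR ALTER DATABASE telemetry_db
  LOG_LEVEL = 'WARN'
  TRACE_LEVEL = 'ON_EVENT';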

STORAGE_SERIALIZATION_POLICY = { COMPATIBLE | OPTIMIZED }

Specifies the storage serialization policy for Apache Iceberg™ tables that use Snowflake as the catalog.

  • COMPATIBLE: Snowflake performs encoding and compression of data files that ensures interoperability with third-party compute engines.

  • OPTIMIZED: Snowflake performs encoding and compression of data files that ensures the best table performance within Snowflake.

Default: OPTIMIZED
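
For illustration, a hedged sketch that sets database-level defaults for Snowflake-managed Iceberg tables; the database and external volume names are placeholders, and the external volume must already exist:

CREATE DATABASE iceberg_db
  EXTERNAL_VOLUME = 'my_iceberg_volume'
  CATALOG = 'SNOWFLAKE'
  STORAGE_SERIALIZATION_POLICY = COMPATIBLE;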

COMMENT = 'string_literal'

Specifies a comment for the database.

Default: No value

TAG ( tag_name = 'tag_value' [ , tag_name = 'tag_value' , ... ] )

Specifies the tag name and the tag string value.

The tag value is always a string, and the maximum number of characters for the tag value is 256.

For information about specifying tags in a statement, see Tag quotas for objects and columns.
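
For illustration, a hedged sketch that sets a comment and a tag at creation time; the tag name cost_center is hypothetical and the tag must already exist in the account:

CREATE DATABASE finance_db
  COMMENT = 'Finance team database'
  WITH TAG (cost_center = 'finance');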

Access control requirements

A role used to execute this SQL command must have the following privileges at a minimum:

Privilege: CREATE DATABASE
Object: Account
Notes: Required to create a new database. Only the SYSADMIN role, or a higher role, has this privilege by default. The privilege can be granted to additional roles as needed.

Privilege: USAGE
Object: External volume, catalog integration
Notes: Required if setting the EXTERNAL_VOLUME or CATALOG object parameters, respectively.

Privilege: IMPORT LISTING
Object: Account
Notes: Required to create a database from a listing.

Privilege: IMPORT SHARE
Object: Account
Notes: Required to create a database from a share.

Privilege: MODIFY LOG LEVEL
Object: Account
Notes: Required to set the LOG_LEVEL for a database.

Privilege: MODIFY TRACE LEVEL
Object: Account
Notes: Required to set the TRACE_LEVEL for a database.

Privilege: OWNERSHIP
Object: Database
Notes: Required only when executing a CREATE OR ALTER DATABASE statement for an existing database.

OWNERSHIP is a special privilege on an object that is automatically granted to the role that created the object, but can also be transferred using the GRANT OWNERSHIP command to a different role by the owning role (or any role with the MANAGE GRANTS privilege).

For instructions on creating a custom role with a specified set of privileges, see Creating custom roles.

For general information about roles and privilege grants for performing SQL actions on securable objects, see Overview of Access Control.
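
As a hedged sketch of the grants described above, an administrator could grant the account-level privileges to a hypothetical custom role named data_engineer:

GRANT CREATE DATABASE ON ACCOUNT TO ROLE data_engineer;
GRANT IMPORT SHARE ON ACCOUNT TO ROLE data_engineer;
GRANT IMPORT LISTING ON ACCOUNT TO ROLE data_engineer;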

General usage notes

  • Creating a database automatically sets it as the active/current database for the current session (equivalent to using the USE DATABASE command for the database).

  • If a database with the same name already exists, an error is returned and the database is not created, unless the optional OR REPLACE keyword is specified in the command.

    Important

    Using OR REPLACE is the equivalent of using DROP DATABASE on the existing database and then creating a new database with the same name; however, the dropped database is not permanently removed from the system. Instead, it is retained in Time Travel. This is important because dropped databases in Time Travel contribute to data storage for your account. For more information, see Storage Costs for Time Travel and Fail-safe.

  • CREATE OR REPLACE <object> statements are atomic. That is, when an object is replaced, the old object is deleted and the new object is created in a single transaction.

  • Creating a new database automatically creates two schemas in the database:

    • PUBLIC: Default schema for the database.

    • INFORMATION_SCHEMA: Schema which contains views and table functions that can be used for querying metadata about the objects in the database, as well as across all objects in the account.

  • Databases created from shares differ from standard databases in the following ways:

    • They do not have the PUBLIC or INFORMATION_SCHEMA schemas unless these schemas were explicitly granted to the share.

    • They cannot be cloned.

    • Properties, such as TRANSIENT and DATA_RETENTION_TIME_IN_DAYS, do not apply.

  • When a database is active/current, the PUBLIC schema is also active/current by default unless a different schema is used or the PUBLIC schema has been dropped.

  • Regarding metadata:

    Attention

    Customers should ensure that no personal data (other than for a User object), sensitive data, export-controlled data, or other regulated data is entered as metadata when using the Snowflake service. For more information, see Metadata fields in Snowflake.

CREATE OR ALTER DATABASE usage notes

  • All limitations of the ALTER DATABASE command apply.

  • This command supports the properties and syntax that overlap between the CREATE DATABASE and ALTER DATABASE commands. For this reason, the following are not supported:

    • Swapping databases using the SWAP WITH parameter.

    • Renaming a database using the RENAME TO parameter.

    • Creating a clone of a database using the CLONE parameter.

    • Adding or changing tags and policies. Any existing tags and policies are preserved.

    • Converting a TRANSIENT database into a non-TRANSIENT database, or vice versa.

    • Creating a database from a share using CREATE OR ALTER DATABASE … FROM SHARE.

    • Creating a secondary (replica) database using CREATE OR ALTER DATABASE … AS REPLICA OF.

Database replication usage notes

Attention

Snowflake recommends using the account replication feature to replicate databases.

  • Database replication uses Snowflake-provided compute resources instead of your own virtual warehouse to copy objects and data. However, the STATEMENT_TIMEOUT_IN_SECONDS session/object parameter still controls how long a statement runs before it is canceled. The default value is 172800 (2 days). Because the initial replication of a primary database can take longer than 2 days to complete (depending on the amount of metadata in the database as well as the amount of data in database objects), we recommend increasing the STATEMENT_TIMEOUT_IN_SECONDS value to 604800 (7 days, the maximum value) for the session in which you run the replication operation.

    Run the following ALTER SESSION statement prior to executing the ALTER DATABASE secondary_db_name REFRESH statement in the same session:

    ALTER SESSION SET STATEMENT_TIMEOUT_IN_SECONDS = 604800;
    

    Note that the STATEMENT_TIMEOUT_IN_SECONDS parameter also applies to the active warehouse in a session; Snowflake honors the lower of the values set at the session and warehouse levels. If you have an active warehouse in the current session, also set STATEMENT_TIMEOUT_IN_SECONDS to 604800 for this warehouse (using ALTER WAREHOUSE).

    For example:

    -- determine the active warehouse in the current session (if any)
    SELECT CURRENT_WAREHOUSE();
    
    +---------------------+
    | CURRENT_WAREHOUSE() |
    |---------------------|
    | MY_WH               |
    +---------------------+
    
    -- change the STATEMENT_TIMEOUT_IN_SECONDS value for the active warehouse
    
    ALTER WAREHOUSE my_wh SET STATEMENT_TIMEOUT_IN_SECONDS = 604800;
    

    You can reset the parameter value to the default after the replication operation is completed:

    ALTER WAREHOUSE my_wh UNSET STATEMENT_TIMEOUT_IN_SECONDS;
    
  • The preferred method of identifying the account that stores the primary database uses the organization name and account name as the account identifier. If you decide to use the legacy account locator instead, see Account identifiers for replication and failover.

  • The CREATE DATABASE … AS REPLICA command does not support the WITH TAG clause.

    This clause is not supported because the secondary database is read only. If your primary database specifies the WITH TAG clause, remove the clause prior to creating the secondary database. To verify whether your database has the WITH TAG clause, call the GET_DDL function in your Snowflake account and specify the primary database in the function argument. If a tag is set on the database, the function output will include an ALTER DATABASE … SET TAG statement.

    For more information, see Replication and tags.
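
    To perform the check described above, a minimal sketch that retrieves the DDL of a hypothetical primary database; if the output includes an ALTER DATABASE … SET TAG statement, remove the tag before creating the secondary database:

    SELECT GET_DDL('DATABASE', 'my_primary_db');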

Examples

Create two permanent databases, one with a data retention period of 10 days:

CREATE DATABASE mytestdb;

CREATE DATABASE mytestdb2 DATA_RETENTION_TIME_IN_DAYS = 10;

SHOW DATABASES LIKE 'my%';

+---------------------------------+------------+------------+------------+--------+----------+---------+---------+----------------+
| created_on                      | name       | is_default | is_current | origin | owner    | comment | options | retention_time |
|---------------------------------+------------+------------+------------+--------+----------+---------+---------+----------------|
| Tue, 17 Mar 2016 16:57:04 -0700 | MYTESTDB   | N          | N          |        | PUBLIC   |         |         | 1              |
| Tue, 17 Mar 2016 17:06:32 -0700 | MYTESTDB2  | N          | N          |        | PUBLIC   |         |         | 10             |
+---------------------------------+------------+------------+------------+--------+----------+---------+---------+----------------+

Create a transient database:

CREATE TRANSIENT DATABASE mytransientdb;

SHOW DATABASES LIKE 'my%';

+---------------------------------+---------------+------------+------------+--------+----------+---------+-----------+----------------+
| created_on                      | name          | is_default | is_current | origin | owner    | comment | options   | retention_time |
|---------------------------------+---------------+------------+------------+--------+----------+---------+-----------+----------------|
| Tue, 17 Mar 2016 16:57:04 -0700 | MYTESTDB      | N          | N          |        | PUBLIC   |         |           | 1              |
| Tue, 17 Mar 2016 17:06:32 -0700 | MYTESTDB2     | N          | N          |        | PUBLIC   |         |           | 10             |
| Tue, 17 Mar 2016 17:07:51 -0700 | MYTRANSIENTDB | N          | N          |        | PUBLIC   |         | TRANSIENT | 1              |
+---------------------------------+---------------+------------+------------+--------+----------+---------+-----------+----------------+

Create a database from a share provided by account ab67890:

CREATE DATABASE snow_sales FROM SHARE ab67890.sales_s;

For more detailed examples of creating a database from a share, see Consume imported data.

Database replication examples

Attention

Snowflake recommends using the account replication feature to replicate databases.

For an example of creating a replication group to replicate a single database to a target account, see Replicate a single database.

CREATE OR ALTER DATABASE examples

Create a simple database

Create a database named db1:

CREATE OR ALTER DATABASE db1;

Alter database db1 to set the DATA_RETENTION_TIME_IN_DAYS and DEFAULT_DDL_COLLATION parameters:

CREATE OR ALTER DATABASE db1
  DATA_RETENTION_TIME_IN_DAYS = 5
  DEFAULT_DDL_COLLATION = 'de';

Unset a parameter previously set on a database

If a previously set parameter is absent from the modified database definition, the parameter is unset. In the following example, the DATA_RETENTION_TIME_IN_DAYS parameter set on db1 in the previous example is unset:

CREATE OR ALTER DATABASE db1
  DEFAULT_DDL_COLLATION = 'de';