CREATE EXTERNAL VOLUME

Creates a new external volume for Apache Iceberg™ tables in the account or replaces an existing external volume.

See also:

ALTER EXTERNAL VOLUME, DROP EXTERNAL VOLUME, SHOW EXTERNAL VOLUMES, DESCRIBE EXTERNAL VOLUME

Syntax

CREATE [ OR REPLACE ] EXTERNAL VOLUME [ IF NOT EXISTS ]
  <name>
  STORAGE_LOCATIONS =
    (
      (
        NAME = '<storage_location_name>'
        { cloudProviderParams | s3CompatibleStorageParams }
      )
      [, (...), ...]
    )
  [ ALLOW_WRITES = { TRUE | FALSE } ]
  [ COMMENT = '<string_literal>' ]

Where:

cloudProviderParams (for Amazon S3) ::=
  STORAGE_PROVIDER = '{ S3 | S3GOV }'
  STORAGE_AWS_ROLE_ARN = '<iam_role>'
  STORAGE_BASE_URL = '<protocol>://<bucket>[/<path>/]'
  [ STORAGE_AWS_EXTERNAL_ID = '<external_id>' ]
  [ ENCRYPTION = ( [ TYPE = 'AWS_SSE_S3' ] |
              [ TYPE = 'AWS_SSE_KMS' [ KMS_KEY_ID = '<string>' ] ] |
              [ TYPE = 'NONE' ] ) ]

cloudProviderParams (for Google Cloud Storage) ::=
  STORAGE_PROVIDER = 'GCS'
  STORAGE_BASE_URL = 'gcs://<bucket>[/<path>/]'
  [ ENCRYPTION = ( [ TYPE = 'GCS_SSE_KMS' ] [ KMS_KEY_ID = '<string>' ] |
              [ TYPE = 'NONE' ] ) ]

cloudProviderParams (for Microsoft Azure) ::=
  STORAGE_PROVIDER = 'AZURE'
  AZURE_TENANT_ID = '<tenant_id>'
  STORAGE_BASE_URL = 'azure://...'
  [ USE_PRIVATELINK_ENDPOINT = { TRUE | FALSE } ]

s3CompatibleStorageParams ::=
  STORAGE_PROVIDER = 'S3COMPAT'
  STORAGE_BASE_URL = 's3compat://<bucket>[/<path>/]'
  CREDENTIALS = ( AWS_KEY_ID = '<string>' AWS_SECRET_KEY = '<string>' )
  STORAGE_ENDPOINT = '<s3_api_compatible_endpoint>'

Required parameters

name

String that specifies the identifier (the name) for the external volume; must be unique in your account.

The identifier must start with an alphabetic character and cannot contain spaces or special characters unless the entire identifier string is enclosed in double quotes (for example, "My object"). Identifiers enclosed in double quotes are also case-sensitive.

For more details, see Identifier requirements.
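
For example, the following statement creates an external volume with a double-quoted, case-sensitive identifier. The bucket name and IAM role ARN shown are placeholder values:

CREATE EXTERNAL VOLUME "My External Volume"
  STORAGE_LOCATIONS =
    (
      (
        NAME = 'my-s3-location'
        STORAGE_PROVIDER = 'S3'
        STORAGE_BASE_URL = 's3://my_example_bucket/'  -- placeholder bucket
        STORAGE_AWS_ROLE_ARN = 'arn:aws:iam::123456789012:role/myrole'  -- placeholder ARN
      )
    );

Later statements must reference the volume as "My External Volume", with double quotes and matching case.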

STORAGE_LOCATIONS = ( ( NAME = 'storage_location_name' { cloudProviderParams | s3CompatibleStorageParams } ) [, (...), ...] )

Set of named cloud storage locations in different regions and, optionally, cloud platforms.

Note

  • Each external volume that you create supports a single active storage location.

  • Cross-cloud and cross-region tables are not currently supported when you use Snowflake as the Iceberg catalog.
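
Although only one storage location is active at a time, the syntax allows you to define multiple named locations in a single external volume. The following example defines two S3 locations; the bucket names and IAM role ARN are placeholders:

CREATE EXTERNAL VOLUME exvol_multi
  STORAGE_LOCATIONS =
    (
      (
        NAME = 'my-s3-us-west-2'
        STORAGE_PROVIDER = 'S3'
        STORAGE_BASE_URL = 's3://my_bucket_us_west_2/'  -- placeholder bucket
        STORAGE_AWS_ROLE_ARN = 'arn:aws:iam::123456789012:role/myrole'
      ),
      (
        NAME = 'my-s3-us-east-1'
        STORAGE_PROVIDER = 'S3'
        STORAGE_BASE_URL = 's3://my_bucket_us_east_1/'  -- placeholder bucket
        STORAGE_AWS_ROLE_ARN = 'arn:aws:iam::123456789012:role/myrole'
      )
    )
  COMMENT = 'External volume with two named storage locations';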

Optional parameters

ALLOW_WRITES = { TRUE | FALSE }

Specifies whether write operations are allowed for the external volume; must be set to TRUE for Iceberg tables that use Snowflake as the catalog.

The value of this parameter must also match the permissions that you set on the cloud storage account for each specified storage location.

Note

If you plan to use the external volume for externally managed Iceberg tables, you can set this parameter to FALSE. Snowflake doesn’t write data or Iceberg metadata files to your cloud storage when you use an external Iceberg catalog.

Default: TRUE

COMMENT = 'string_literal'

String (literal) that specifies a comment for the external volume.

Default: No value

Cloud provider parameters (cloudProviderParams)

Note

KMS keys are managed by the storage owner in Amazon S3 or Google Cloud Storage. The service principal (the IAM role for Amazon S3, or the GCS service account) must be granted privileges to use the KMS keys. For more information, see Encrypting table files.

Amazon S3

STORAGE_PROVIDER = '{ S3 | S3GOV }'

Specifies the cloud storage provider that stores your data files.

  • 'S3': S3 storage in public AWS regions outside of China.

  • 'S3GOV': S3 storage in AWS government regions.

STORAGE_AWS_ROLE_ARN = 'iam_role'

Specifies the case-sensitive Amazon Resource Name (ARN) of the AWS identity and access management (IAM) role that grants privileges on the S3 bucket containing your data files. For more information, see Configure an external volume for Amazon S3.

STORAGE_BASE_URL = 'protocol://bucket[/path/]'

Specifies the base URL for your cloud storage location, where:

  • protocol is one of the following:

    • s3 refers to S3 storage in public AWS regions outside of China.

    • s3gov refers to S3 storage in AWS government regions.

  • bucket is the name of an S3 bucket that stores your data files.

  • path is an optional path that can be used to provide granular control over objects in the bucket.

Important

To create an Iceberg table that uses an external catalog, your Parquet data files and Iceberg metadata files must be within the STORAGE_BASE_URL location.

STORAGE_AWS_EXTERNAL_ID = 'external_id'

Optionally specifies an external ID that Snowflake uses to establish a trust relationship with AWS. You must specify the same external ID in the trust policy of the IAM role that you configured for this external volume. For more information, see How to use an external ID when granting access to your AWS resources to a third party (https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user_externalid.html).

If you don’t specify a value for this parameter, Snowflake automatically generates an external ID when you create the external volume.
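
For example, the following statement pins the external ID so that it is preserved if you later recreate the volume with CREATE OR REPLACE. The external ID, bucket, and role ARN shown are placeholders:

CREATE OR REPLACE EXTERNAL VOLUME exvol_ext_id
  STORAGE_LOCATIONS =
    (
      (
        NAME = 'my-s3-us-east-1'
        STORAGE_PROVIDER = 'S3'
        STORAGE_BASE_URL = 's3://my_example_bucket/'
        STORAGE_AWS_ROLE_ARN = 'arn:aws:iam::123456789012:role/myrole'
        STORAGE_AWS_EXTERNAL_ID = 'my_external_id'  -- placeholder; must match the IAM trust policy
      )
    );

The same value must appear in the sts:ExternalId condition of the trust policy for the IAM role. If you omit the parameter, you can retrieve the generated external ID by running DESCRIBE EXTERNAL VOLUME.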

ENCRYPTION = ( [ TYPE = 'AWS_SSE_S3' ] | [ TYPE = 'AWS_SSE_KMS' [ KMS_KEY_ID = 'string' ] ] | [ TYPE = 'NONE' ] )

Specifies the properties needed to encrypt data on the external volume.

TYPE = ...

Specifies the encryption type used. Possible values are:

  • 'AWS_SSE_S3' : Server-side encryption using S3-managed encryption keys. For more information, see Using server-side encryption with Amazon S3-managed encryption keys (SSE-S3) (https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingServerSideEncryption.html).

  • 'AWS_SSE_KMS' : Server-side encryption using keys stored in KMS. For more information, see Using server-side encryption with AWS Key Management Service (SSE-KMS) (https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingKMSEncryption.html).

  • 'NONE': No encryption.

KMS_KEY_ID = 'string' (applies to AWS_SSE_KMS encryption only)

Optionally specifies the ID for the AWS KMS-managed key used to encrypt files written to the bucket. If no value is provided, your default KMS key is used to encrypt files for writing data.

Note that this value is ignored when reading data.

Google Cloud Storage

STORAGE_PROVIDER = 'GCS'

Specifies the cloud storage provider that stores your data files.

STORAGE_BASE_URL = 'gcs://bucket[/path/]'

Specifies the base URL for your cloud storage location, where:

  • bucket is the name of a Cloud Storage bucket that stores your data files.

  • path is an optional path that can be used to provide granular control over objects in the bucket.

Important

To create an Iceberg table that uses an external catalog, your Parquet data files and Iceberg metadata files must be within the STORAGE_BASE_URL location.

ENCRYPTION = ( [ TYPE = 'GCS_SSE_KMS' ] [ KMS_KEY_ID = 'string' ] | [ TYPE = 'NONE' ] )

Specifies the properties needed to encrypt data on the external volume.

TYPE = ...

Specifies the encryption type used. Possible values are:

  • 'GCS_SSE_KMS': Server-side encryption using keys stored in KMS. For more information, see customer-managed encryption keys (https://cloud.google.com/storage/docs/encryption/customer-managed-keys).

  • 'NONE': No encryption.

KMS_KEY_ID = 'string' (applies to GCS_SSE_KMS encryption only)

Specifies the ID for the Cloud KMS-managed key used to encrypt files written to the bucket.

Note that this value is ignored when reading data. The read operation should succeed if the service account has sufficient permissions to the data and any specified KMS keys.

Microsoft Azure

STORAGE_PROVIDER = 'AZURE'

Specifies the cloud storage provider that stores your data files.

AZURE_TENANT_ID = 'tenant_id'

Specifies the ID of the Office 365 tenant that the storage location belongs to. An external volume can authenticate to only one tenant, so the storage location must refer to a storage account that belongs to this tenant.

To find your tenant ID, log into the Azure portal and select Azure Active Directory » Properties. The tenant ID is displayed in the Tenant ID field.

STORAGE_BASE_URL = 'azure://...'

Specifies the base URL for your cloud storage location (case-sensitive).

  • For Azure Blob Storage, specify azure://account.blob.core.windows.net/container[/path/], where:

    • account is the name of your Azure account; for example, myaccount.

    • container is the name of an Azure container that stores your data files.

    • path is an optional path that can be used to provide granular control over logical directories in the container.

  • For Fabric OneLake, specify azure://[region-]onelake.{ dfs | blob }.fabric.microsoft.com/workspace/lakehouse/Files/path/, where:

    • region optionally specifies the endpoint region; for example, westus. If specified, this must be the same region used by your Microsoft Fabric capacity, and the same region in which your Snowflake account is hosted.

    • dfs | blob specifies the endpoint type.

    • workspace is either your Fabric workspace ID or workspace name; for example, cfafbeb1-8037-4d0c-896e-a46fb27ff227 or my_workspace.

    • lakehouse is either your Lakehouse ID or Lakehouse name; for example, 5b218778-e7a5-4d73-8187-f10824047715 or my_lakehouse.Lakehouse. You must use the same type of identifier (ID or name) for both your workspace and Lakehouse.

    • path is a path to your storage location in the specified workspace and Lakehouse.

Note

Use the azure:// prefix and not https://.

Important

To create an Iceberg table that uses an external catalog, your Parquet data files and Iceberg metadata files must be within the STORAGE_BASE_URL location.
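
For example, the following statement defines a Fabric OneLake storage location. The tenant ID, workspace name, and Lakehouse name shown are placeholders:

CREATE EXTERNAL VOLUME exvol_onelake
  STORAGE_LOCATIONS =
    (
      (
        NAME = 'my-onelake-location'
        STORAGE_PROVIDER = 'AZURE'
        AZURE_TENANT_ID = 'a123b4c5-1234-123a-a12b-1a23b45678c9'  -- placeholder tenant ID
        STORAGE_BASE_URL = 'azure://onelake.dfs.fabric.microsoft.com/my_workspace/my_lakehouse.Lakehouse/Files/'
      )
    );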

USE_PRIVATELINK_ENDPOINT = { TRUE | FALSE }

Specifies whether to use outbound private connectivity. For information about using this parameter, see Private connectivity to external volumes for Microsoft Azure.
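
For example, the following statement enables outbound private connectivity for an Azure storage location. The account name, container, and tenant ID shown are placeholders:

CREATE EXTERNAL VOLUME exvol_private
  STORAGE_LOCATIONS =
    (
      (
        NAME = 'my-azure-private'
        STORAGE_PROVIDER = 'AZURE'
        AZURE_TENANT_ID = 'a123b4c5-1234-123a-a12b-1a23b45678c9'  -- placeholder tenant ID
        STORAGE_BASE_URL = 'azure://myaccount.blob.core.windows.net/my_container/'
        USE_PRIVATELINK_ENDPOINT = TRUE
      )
    );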

S3-compatible storage parameters (s3CompatibleStorageParams)

STORAGE_PROVIDER = 'S3COMPAT'

Specifies S3-compatible storage as your storage provider.

STORAGE_BASE_URL = 's3compat://bucket[/path/]'

Specifies the URL for the external location used to store data files (an existing bucket accessed using an S3-compatible API endpoint), where:

  • bucket is the name of the bucket.

  • path is an optional case-sensitive path (or prefix in S3 terminology) for files in the cloud storage location (files with names that begin with a common string).

CREDENTIALS = ( AWS_KEY_ID = 'string' AWS_SECRET_KEY = 'string' )

Specifies the security credentials for connecting to and accessing your S3-compatible storage location.

STORAGE_ENDPOINT = 's3_api_compatible_endpoint'

Specifies a fully qualified domain that points to your S3-compatible API endpoint.

Note

The storage endpoint should not include a bucket name; for example, specify mystorage.com instead of my_bucket.mystorage.com.

Access control requirements

A role used to execute this SQL command must have the following privileges at a minimum:

Privilege                 Object    Notes
------------------------  --------  ---------------------------------------------------
CREATE EXTERNAL VOLUME    Account   Only the ACCOUNTADMIN role has this privilege by
                                    default. The privilege can be granted to additional
                                    roles as needed.
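
For example, to allow another role to create external volumes, grant the privilege from a role that can manage account-level grants (the role name my_role is a placeholder):

GRANT CREATE EXTERNAL VOLUME ON ACCOUNT TO ROLE my_role;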

For instructions on creating a custom role with a specified set of privileges, see Creating custom roles.

For general information about roles and privilege grants for performing SQL actions on securable objects, see Overview of Access Control.

Usage notes

Important

External volumes in Amazon S3 storage only: If you recreate an external volume (using the CREATE OR REPLACE EXTERNAL VOLUME syntax) without specifying an external ID, you must repeat the steps to grant the AWS identity and access management (IAM) user for your Snowflake account the access permissions required on the S3 storage location. For more information, see the instructions for retrieving the AWS IAM user for your Snowflake account in Configure an external volume for Amazon S3.

  • You can’t drop or replace an external volume if one or more Iceberg tables are associated with the external volume.

    To view the tables that depend on an external volume, you can use the SHOW ICEBERG TABLES command and a query using RESULT_SCAN that filters on the external_volume_name column.

    Note

    The column identifier (external_volume_name) is case-sensitive. Specify the column identifier exactly as it appears in the SHOW ICEBERG TABLES output.

    For example:

    SHOW ICEBERG TABLES;

    SELECT *
      FROM TABLE(RESULT_SCAN(LAST_QUERY_ID()))
      WHERE "external_volume_name" = 'my_external_volume_1';
  • If you use a regional endpoint for a Microsoft Fabric OneLake storage location, use the same region as your Microsoft Fabric capacity. This must also be the same region that hosts your Snowflake account.

  • Regarding metadata:

    Attention

    Customers should ensure that no personal data (other than for a User object), sensitive data, export-controlled data, or other regulated data is entered as metadata when using the Snowflake service. For more information, see Metadata fields in Snowflake.

  • CREATE OR REPLACE <object> statements are atomic. That is, when an object is replaced, the old object is deleted and the new object is created in a single transaction.

Examples

The following examples create external volumes that define writable storage locations with different cloud providers:

Amazon S3

The following example creates an external volume that defines an Amazon S3 storage location with encryption:

CREATE OR REPLACE EXTERNAL VOLUME exvol
  STORAGE_LOCATIONS =
      (
        (
            NAME = 'my-s3-us-west-2'
            STORAGE_PROVIDER = 'S3'
            STORAGE_BASE_URL = 's3://my_example_bucket/'
            STORAGE_AWS_ROLE_ARN = 'arn:aws:iam::123456789012:role/myrole'
            ENCRYPTION=(TYPE='AWS_SSE_KMS' KMS_KEY_ID='1234abcd-12ab-34cd-56ef-1234567890ab')
        )
      )
  ALLOW_WRITES = TRUE;

Google Cloud Storage

The following example creates an external volume that defines a GCS storage location with encryption:

CREATE EXTERNAL VOLUME exvol
  STORAGE_LOCATIONS =
    (
      (
        NAME = 'my-us-east-1'
        STORAGE_PROVIDER = 'GCS'
        STORAGE_BASE_URL = 'gcs://mybucket1/path1/'
        ENCRYPTION=(TYPE='GCS_SSE_KMS' KMS_KEY_ID = '1234abcd-12ab-34cd-56ef-1234567890ab')
      )
    )
  ALLOW_WRITES = TRUE;

Microsoft Azure

The following example creates an external volume that defines an Azure storage location:

CREATE EXTERNAL VOLUME exvol
  STORAGE_LOCATIONS =
    (
      (
        NAME = 'my-azure-northeurope'
        STORAGE_PROVIDER = 'AZURE'
        STORAGE_BASE_URL = 'azure://exampleacct.blob.core.windows.net/my_container_northeurope/'
        AZURE_TENANT_ID = 'a123b4c5-1234-123a-a12b-1a23b45678c9'
      )
    )
  ALLOW_WRITES = TRUE;

S3-compatible storage

The following example creates an external volume that defines an S3-compatible storage location. For more information, see Configure an external volume for S3-compatible storage.

CREATE OR REPLACE EXTERNAL VOLUME ext_vol_s3_compat
  STORAGE_LOCATIONS = (
    (
      NAME = 'my_s3_compat_storage_location'
      STORAGE_PROVIDER = 'S3COMPAT'
      STORAGE_BASE_URL = 's3compat://mybucket/unload/mys3compatdata'
      CREDENTIALS = (
        AWS_KEY_ID = '1a2b3c...'
        AWS_SECRET_KEY = '4x5y6z...'
      )
      STORAGE_ENDPOINT = 'mystorage.com'
    )
  )
  ALLOW_WRITES = FALSE;