Adaptive Compute

Adaptive Compute is a compute service focused on delivering strong performance with effortless operations. It replaces the fixed compute engine with a workload-aware one that adapts to your queries automatically. The system decides how to allocate resources for the best performance, eliminating the need for infrastructure tuning.

By automatically scaling resources and intelligently routing queries, Adaptive Compute removes the operational complexity that comes with traditional warehouse management: manual cluster sizing, disruptive upgrades, and hands-on performance tuning. It also incorporates the latest hardware and performance enhancements, so adaptive warehouses can run significantly more queries at a similar cost to Gen2.

You access Adaptive Compute through adaptive warehouses. With an adaptive warehouse, you no longer need to manage:

  • Warehouse size (XSMALL, SMALL, MEDIUM, and so on).

  • Multi-cluster warehouse settings.

  • Query Acceleration Service settings.

  • Suspend and resume semantics.

Snowflake handles all of this automatically, so your team can focus on working with data rather than managing the infrastructure behind it.

All jobs across all adaptive warehouses in an account are routed to a shared pool of compute resources. This pool is dedicated to your account: it isn’t shared with other accounts in your organization and isn’t used by other warehouse types, such as standard, interactive, or Snowpark-optimized. You can still have multiple adaptive warehouses per account for grouping workloads with similar performance and cost characteristics, reporting, and governance.

Adaptive warehouses use a query-based billing model, where the cost of each query depends on factors like the amount of compute and software resources it uses. You can still reason about costs at the warehouse level, because all queries running in an adaptive warehouse add up to the total cost of that warehouse. Query-level cost visibility isn’t available during Public Preview but is planned for general availability.

The same cost management tools, such as budgets and resource monitors, remain available.

You can create new adaptive warehouses or convert existing standard warehouses to adaptive without downtime. Converting existing warehouses allows you to retain your existing chargeback and showback structures and workload segregation (analytics versus ETL, team-based warehouses, and so on). For example, the finance team might use one adaptive warehouse and the engineering team might use another.

Limitations

Adaptive warehouses require Enterprise Edition (or higher).

During Public Preview, adaptive warehouses are available in the following regions: US West 2 (Oregon), EU West 1 (Ireland), and AP Northeast 1 (Tokyo).

The following conversions are not yet supported:

  • Converting to or from an X5LARGE or X6LARGE warehouse.

  • Converting to or from a Snowpark-optimized or interactive warehouse.

Managing performance and throughput

Adaptive warehouses expose two primary properties to control performance and throughput:

  • MAX_QUERY_PERFORMANCE_LEVEL

  • QUERY_THROUGHPUT_MULTIPLIER

MAX_QUERY_PERFORMANCE_LEVEL

MAX_QUERY_PERFORMANCE_LEVEL expresses the upper bound of performance for any individual query. It’s set at the warehouse level and serves as the mechanism to tell the system to “speed up” or “slow down” query execution.

The property is expressed as a t-shirt size (XSMALL through X4LARGE). Each size conveys performance similar to or better than the corresponding standard warehouse size.

Type:

{ XSMALL | SMALL | MEDIUM | LARGE | XLARGE | XXLARGE | XXXLARGE | X4LARGE }

Default:

XLARGE

Semantics:

  • Larger values provide more compute headroom per statement, improving latency for large, complex queries, and increase potential instantaneous spend for a single statement.

  • Smaller values constrain per-statement spend but might slow large queries while leaving more headroom for concurrency.

  • This value doesn’t map to a specific underlying compute configuration. It expresses only a performance level: Snowflake determines the actual resources needed for each query.

Behavior:

Adaptive Compute determines the optimal compute for a query based on its query plan. If optimal performance would require more compute than MAX_QUERY_PERFORMANCE_LEVEL allows, Snowflake caps the query at that level. For smaller queries, Snowflake chooses compute commensurate with what the query needs, below MAX_QUERY_PERFORMANCE_LEVEL.

Guidance:

Set MAX_QUERY_PERFORMANCE_LEVEL to the highest query performance you’re comfortable having for your largest queries. Use budgets and resource monitors to govern total spend over time.
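For example, to raise the per-statement cap on an existing adaptive warehouse and confirm the change (etl_wh is a placeholder name):

ALTER WAREHOUSE etl_wh SET MAX_QUERY_PERFORMANCE_LEVEL = XXLARGE;
SHOW WAREHOUSES LIKE 'etl_wh';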

QUERY_THROUGHPUT_MULTIPLIER

QUERY_THROUGHPUT_MULTIPLIER expresses the multiplier used to compute the maximum throughput at any given time. Rather than specifying an absolute maximum throughput, you specify an integer scale factor over the system-computed minimum.

To run N statements in parallel at MAX_QUERY_PERFORMANCE_LEVEL, set the multiplier to N. Because many queries need less than the maximum, this setting typically supports more than N queries running in parallel.

Type:

Non-negative integer

Default:

2

Setting this value to 0 means unlimited throughput: the warehouse can use as much burst capacity as is available, with no cap.

Semantics:

When set to a positive value, the maximum throughput is computed as:

MAX_THROUGHPUT = QUERY_THROUGHPUT_MULTIPLIER * MINIMUM

Where MINIMUM is a system-computed base capacity for the MAX_QUERY_PERFORMANCE_LEVEL set on the warehouse.

  • Acts as a scale factor on this system-computed base capacity.

  • Higher values increase peak throughput (more concurrent work) and reduce queuing, at the cost of potentially higher instantaneous spend.

  • Lower values constrain burst throughput and reduce the risk of sudden spikes in spend, but might lead to queuing.
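As a worked example with hypothetical numbers: if the system-computed MINIMUM corresponds to capacity for 4 statements running concurrently at MAX_QUERY_PERFORMANCE_LEVEL, the default multiplier of 2 yields MAX_THROUGHPUT = 2 * 4 = 8 such statements, and typically more queries in practice because most need less than the maximum.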

Behavior:

Snowflake computes an internal base capacity rate for the warehouse based on MAX_QUERY_PERFORMANCE_LEVEL, migration history (classic size, max cluster count, QAS scale factor), and other system tuning parameters.

QUERY_THROUGHPUT_MULTIPLIER is multiplied by this base capacity to determine the total number of queries that can run concurrently. When the system is below this target, it allows a new query to execute; when it reaches the target, it queues new queries.

Guidance:

If you observe persistent queued-on-load time and want higher throughput, increase QUERY_THROUGHPUT_MULTIPLIER. If you’re more concerned about capping instantaneous spend, reduce QUERY_THROUGHPUT_MULTIPLIER and rely on budgets and resource monitors for absolute cost controls.

Create an adaptive warehouse

You can create an adaptive warehouse using Snowsight, SQL, or Cortex Code.

To create an adaptive warehouse using Snowsight:

  1. Sign in to Snowsight.

  2. In the navigation menu, select Compute » Warehouses.

  3. Select +Warehouse.

  4. In the Type dropdown, select Adaptive.

  5. Optionally, select Advanced and configure:

    • Maximum query performance level (default: XLarge)

    • Query throughput multiplier (default: 2)

The warehouse is created and can be used normally.

Convert a standard warehouse to an adaptive warehouse

You can convert a standard warehouse to adaptive using Snowsight, SQL, or Cortex Code.

Note

Converting a warehouse to or from an adaptive warehouse is an online operation, which means that it doesn’t involve any downtime. This conversion doesn’t make the warehouse unavailable or interrupt any running queries.

When you convert a warehouse to an adaptive warehouse or back to a standard warehouse, queries already running on that warehouse continue to completion on the existing compute resources, while the warehouse runs any new queries on the compute resources of the new warehouse type. While the existing queries are running, you’re charged for both sets of compute resources. If you’re converting back to a standard warehouse, the warehouse doesn’t automatically suspend during this period, regardless of whether any queries are using the new compute resources. When the existing queries complete, the workload shifts entirely to the new compute resources.

To convert a standard warehouse to an adaptive warehouse using Snowsight:

  1. Sign in to Snowsight.

  2. In the navigation menu, select Compute » Warehouses » <warehouse_identifier>.

  3. Select the more menu (three dots) » Convert to Adaptive.

  4. Confirm the operation.

Property behavior during conversion

When you convert a standard warehouse to an adaptive warehouse, the only property you must change is WAREHOUSE_TYPE. Snowflake automatically computes appropriate values for MAX_QUERY_PERFORMANCE_LEVEL and QUERY_THROUGHPUT_MULTIPLIER.

The system derives these from the existing configuration of the standard warehouse:

  • Warehouse size.

  • MAX_CLUSTER_COUNT (for multi-cluster warehouses).

  • QAS scale factor.

  • Warehouse generation (hardware/software generation).

The goal is to preserve or improve performance compared to the original standard warehouse, provide enough burst capacity for typical load spikes, and avoid requiring manual tuning when switching to adaptive.

After conversion, you can optionally override MAX_QUERY_PERFORMANCE_LEVEL and QUERY_THROUGHPUT_MULTIPLIER using ALTER WAREHOUSE. Standard warehouse properties such as WAREHOUSE_SIZE and MAX_CLUSTER_COUNT no longer apply after conversion to adaptive, and adaptive properties no longer apply after conversion back to standard.

Billing and pricing

Adaptive warehouses use a query-based billing model. The cost of each query depends on factors like the amount of compute and software resources it uses, including the cluster sizes and additional capacity used by features like Query Acceleration Service (QAS). You aren’t charged for creating an adaptive warehouse: charges start when the first query runs.

All queries running in an adaptive warehouse add up to the total cost of that warehouse, so you can continue to use existing chargeback and showback structures. Adaptive warehouse usage is reported as part of COMPUTE in usage statements using virtual warehouse credits.

You control performance and spend primarily through:

  • MAX_QUERY_PERFORMANCE_LEVEL: caps the per-statement performance level.

  • QUERY_THROUGHPUT_MULTIPLIER: caps the overall burst capacity at any instant.

  • Budgets and resource monitors: govern total spend over time at the account or warehouse level.

Typical configuration patterns:

  • Latency-sensitive, critical workloads: higher MAX_QUERY_PERFORMANCE_LEVEL (XLARGE or above), higher QUERY_THROUGHPUT_MULTIPLIER, and resource monitors or budgets to keep aggregate spend within plan.

  • Cost-sensitive, high-throughput workloads: moderate MAX_QUERY_PERFORMANCE_LEVEL (MEDIUM or LARGE) and a medium QUERY_THROUGHPUT_MULTIPLIER to balance throughput against spend spikes.

  • Tightly budgeted workloads: lower MAX_QUERY_PERFORMANCE_LEVEL, lower QUERY_THROUGHPUT_MULTIPLIER, and strict budgets and resource monitors.

You can use ACCOUNT_USAGE views to retrieve granular data on credit consumption for a specific adaptive warehouse. Use the WAREHOUSE_METERING_HISTORY view to see credit consumption for a warehouse. For a full list of relevant views, see Account Usage views.
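For example, the following sketch sums daily credit consumption for one warehouse over the last seven days (my_adaptive_wh is a placeholder name):

SELECT
  start_time::DATE AS ds,
  SUM(credits_used) AS credits_used
FROM SNOWFLAKE.ACCOUNT_USAGE.WAREHOUSE_METERING_HISTORY
WHERE warehouse_name = 'MY_ADAPTIVE_WH'
  AND start_time >= DATEADD(day, -7, CURRENT_DATE())
GROUP BY ds
ORDER BY ds;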

For more information about compute cost, see Understanding compute cost.

SQL reference

CREATE ADAPTIVE WAREHOUSE

Creates a new adaptive virtual warehouse.

CREATE [ OR REPLACE ] ADAPTIVE WAREHOUSE [ IF NOT EXISTS ] <name>
  [ [ WITH ] adaptiveProperties ]
  [ [ WITH ] TAG ( <tag_name> = '<tag_value>' [ , ... ] ) ]
  [ objectParams ]

adaptiveProperties ::=
  COMMENT = '<string_literal>'
  MAX_QUERY_PERFORMANCE_LEVEL = { XSMALL | SMALL | MEDIUM | LARGE
                                | XLARGE | XXLARGE | XXXLARGE | X4LARGE }
  QUERY_THROUGHPUT_MULTIPLIER = <integer>

objectParams ::=
  STATEMENT_QUEUED_TIMEOUT_IN_SECONDS = <num>
  STATEMENT_TIMEOUT_IN_SECONDS = <num>

You can also create an adaptive warehouse using the standard CREATE WAREHOUSE syntax with WAREHOUSE_TYPE = 'ADAPTIVE':

CREATE [ OR REPLACE ] WAREHOUSE [ IF NOT EXISTS ] <name>
  [ [ WITH ] WAREHOUSE_TYPE = 'ADAPTIVE'
    [ adaptiveProperties ]
  ]
  [ [ WITH ] TAG ( <tag_name> = '<tag_value>' [ , ... ] ) ]
  [ objectParams ]

Note

Standard warehouse properties such as WAREHOUSE_SIZE, MIN_CLUSTER_COUNT, MAX_CLUSTER_COUNT, and SCALING_POLICY can’t be set on an adaptive warehouse. Similarly, adaptive warehouse properties such as MAX_QUERY_PERFORMANCE_LEVEL and QUERY_THROUGHPUT_MULTIPLIER can’t be set on a standard warehouse.

Required parameters

name

Identifier for the adaptive virtual warehouse. Must be unique for your account. Must start with an alphabetic character and can’t contain spaces or special characters unless enclosed in double quotes. See Object identifiers for details.

Optional properties

MAX_QUERY_PERFORMANCE_LEVEL = { XSMALL | SMALL | MEDIUM | LARGE | XLARGE | XXLARGE | XXXLARGE | X4LARGE }

Upper bound on the performance level for a single statement, expressed as a t-shirt size. Default: XLARGE.

Snowflake chooses a performance level up to this bound based on statement characteristics. Smaller statements might run at a lower performance level to reduce spend. Choose a value appropriate for your largest queries.

For more details, see Managing performance and throughput.

QUERY_THROUGHPUT_MULTIPLIER = <integer>

Multiplier used to compute the maximum throughput at any given time, expressed as a non-negative integer scale factor over the system-computed minimum. Higher values increase peak throughput (more concurrent work) and reduce queuing, at the cost of potentially higher instantaneous spend. Lower values constrain burst throughput and reduce the risk of sudden spikes in spend, but might lead to queuing. A value of 0 means unlimited throughput.

Default: 2.

For more details, see Managing performance and throughput.

STATEMENT_QUEUED_TIMEOUT_IN_SECONDS = <num>

Maximum time, in seconds, a SQL statement can remain queued on the warehouse before Snowflake cancels it. See Parameters for details.

STATEMENT_TIMEOUT_IN_SECONDS = <num>

Maximum time, in seconds, a running SQL statement can run before Snowflake cancels it. See Parameters for details.

Examples

Create an adaptive warehouse with defaults:

CREATE ADAPTIVE WAREHOUSE my_adaptive_wh;

Create with a specific performance level:

CREATE ADAPTIVE WAREHOUSE my_adaptive_wh
  WITH MAX_QUERY_PERFORMANCE_LEVEL = XXLARGE;

Create with both properties:

CREATE ADAPTIVE WAREHOUSE my_adaptive_wh
  WITH MAX_QUERY_PERFORMANCE_LEVEL = MEDIUM
       QUERY_THROUGHPUT_MULTIPLIER = 6;

Create using the standard CREATE WAREHOUSE syntax:

CREATE WAREHOUSE my_adaptive_wh
  WITH WAREHOUSE_TYPE = 'ADAPTIVE'
       MAX_QUERY_PERFORMANCE_LEVEL = LARGE
       QUERY_THROUGHPUT_MULTIPLIER = 3;
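You can also combine the adaptive properties with the object parameters. For example, the following sketch creates a warehouse that cancels statements queued for more than 5 minutes or running for more than an hour:

CREATE ADAPTIVE WAREHOUSE my_adaptive_wh
  WITH MAX_QUERY_PERFORMANCE_LEVEL = LARGE
       QUERY_THROUGHPUT_MULTIPLIER = 4
  STATEMENT_QUEUED_TIMEOUT_IN_SECONDS = 300
  STATEMENT_TIMEOUT_IN_SECONDS = 3600;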

ALTER WAREHOUSE (adaptive)

You can use ALTER WAREHOUSE to convert a standard warehouse to adaptive, modify adaptive warehouse properties, or convert an adaptive warehouse back to standard.

Convert a standard warehouse to adaptive:

ALTER WAREHOUSE my_warehouse SET WAREHOUSE_TYPE = 'ADAPTIVE';

Modify adaptive warehouse properties after creation or conversion:

ALTER WAREHOUSE my_adaptive_wh SET
  MAX_QUERY_PERFORMANCE_LEVEL = XLARGE
  QUERY_THROUGHPUT_MULTIPLIER = 8;

Convert an adaptive warehouse back to standard:

ALTER WAREHOUSE my_warehouse SET WAREHOUSE_TYPE = 'STANDARD';

SHOW WAREHOUSES

The adaptive warehouse feature introduces new columns to the SHOW WAREHOUSES command. Properties that don’t apply to adaptive warehouses are shown as NULL.

Columns specific to adaptive warehouses include:

  • STATE: One of ENABLED (active/running) or DISABLED (inactive).

  • MAX_QUERY_PERFORMANCE_LEVEL: Upper bound on the per-statement performance level, expressed as a t-shirt size.

  • QUERY_THROUGHPUT_MULTIPLIER: Integer scale factor controlling how much burst capacity the warehouse can use at any instant.

  • DISABLED_REASONS: One or more reasons why the adaptive warehouse was disabled.
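Because SHOW output can be queried with RESULT_SCAN, one way to list only adaptive warehouses is the following sketch (the lowercase quoted column names are an assumption about the SHOW output):

SHOW WAREHOUSES;
SELECT "name", "type", "state"
FROM TABLE(RESULT_SCAN(LAST_QUERY_ID()))
WHERE "type" = 'ADAPTIVE';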

Account Usage views

Several ACCOUNT_USAGE views, including QUERY_HISTORY, WAREHOUSE_METERING_HISTORY, and WAREHOUSE_LOAD_HISTORY, report data for adaptive warehouses.

Note

For adaptive warehouses, QAS usage is included in compute credits and doesn’t appear as a separate credit column. Use the WAREHOUSE_LOAD_HISTORY view to monitor queuing behavior and decide whether to adjust MAX_QUERY_PERFORMANCE_LEVEL or QUERY_THROUGHPUT_MULTIPLIER.
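For example, this sketch retrieves recent load history for one warehouse so you can inspect queuing (my_adaptive_wh is a placeholder name):

SELECT start_time, avg_running, avg_queued_load
FROM SNOWFLAKE.ACCOUNT_USAGE.WAREHOUSE_LOAD_HISTORY
WHERE warehouse_name = 'MY_ADAPTIVE_WH'
  AND start_time >= DATEADD(day, -1, CURRENT_TIMESTAMP())
ORDER BY start_time;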

The following sample query produces a time series of warehouse-level performance data for any warehouse that ran at least one query in ADAPTIVE state within a specified lookback period.

WITH adaptive_whs AS (
  SELECT DISTINCT warehouse_name
  FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY q
  WHERE q.warehouse_size = 'ADAPTIVE'
    AND q.start_time >= DATEADD(day, -7, CURRENT_DATE())
)
SELECT
  q.end_time::DATE AS ds,
  q.warehouse_name,
  IFF(q.warehouse_size = 'ADAPTIVE', 'ADAPTIVE', 'STANDARD') AS warehouse_type,
  AVG(q.total_elapsed_time) AS avg_query_time,
  AVG(q.execution_time) AS avg_exec_time,
  AVG(q.queued_overload_time) AS avg_queued_overload_time,
  AVG(q.queued_provisioning_time) AS avg_queued_provisioning_time
FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY q
WHERE q.start_time >= DATEADD(day, -7, CURRENT_DATE())
  AND q.warehouse_name IN (SELECT warehouse_name FROM adaptive_whs)
GROUP BY ALL;

Bulk migration of standard warehouses to adaptive

If you want to migrate many standard warehouses to adaptive simultaneously, you can use the SYSTEM$BULK_UPDATE_WH function.

Parameters for the SYSTEM$BULK_UPDATE_WH function:

  • property_name: The warehouse property to update. Allowed value: 'WAREHOUSE_TYPE'.

  • new_value: New value for the property. Allowed values: 'ADAPTIVE' or 'STANDARD'.

  • property_filter: JSON filter on warehouse properties (for example, a name pattern or size). Warehouses matching all filters are considered for update. Example: '{"name": "TEST.*"}'.

  • tag_filter: JSON filter on tags. Warehouses must match all specified tags to be selected. Example: '{"cost-centre": "sales"}'.

  • execution_mode: Operation mode: 'ACTIVE' performs the update; 'DRY_RUN' previews it.

Suggested usage:

  1. First, do a dry run and review the results:

    SELECT SYSTEM$BULK_UPDATE_WH(
      'WAREHOUSE_TYPE',
      'ADAPTIVE',
      '{"WAREHOUSE_TYPE": "STANDARD"}',
      'DRY_RUN'
    );
    
  2. Review the output and adjust filters if necessary.

  3. After verifying the dry run, call the function again using the active mode:

    SELECT SYSTEM$BULK_UPDATE_WH(
      'WAREHOUSE_TYPE',
      'ADAPTIVE',
      '{"WAREHOUSE_TYPE": "STANDARD"}',
      'ACTIVE'
    );
    
  4. Carefully review the results and any errors before repeating or broadening the migration scope.
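To narrow the scope, you can filter by warehouse name instead. For example, a dry run limited to warehouses whose names start with TEST, using the name-pattern filter shown earlier:

SELECT SYSTEM$BULK_UPDATE_WH(
  'WAREHOUSE_TYPE',
  'ADAPTIVE',
  '{"name": "TEST.*"}',
  'DRY_RUN'
);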