Openflow - Snowflake Deployment cost and scaling considerations

Openflow - Snowflake Deployment has cost considerations in multiple areas: Snowflake compute, Snowpark Container Services infrastructure, data ingestion, and others. Scaling Openflow effectively requires understanding these costs.

The following sections describe Openflow costs in general and provide examples of scaling Openflow runtimes and their associated costs.

Openflow - Snowflake Deployment costs

When using Openflow - Snowflake Deployment, you can incur the following types of costs:

| Cost category | Description |
| --- | --- |
| Openflow (shown as Openflow Compute Snowflake on your Snowflake bill) | Cost based on the number and types of instances used by Snowpark Container Services compute pools in your Snowflake account. You are charged for active compute pools only; credits are billed per second with a five-minute minimum. For the rate per SPCS compute instance family per hour, refer to Table 1(d) in the Snowflake Service Consumption Table. The METERING_DAILY_HISTORY and METERING_HISTORY views in the Account Usage schema provide further detail on Openflow compute costs when queried with SERVICE_TYPE = 'OPENFLOW_COMPUTE_SNOWFLAKE' (see the example query after this table). See Exploring compute cost for more information on exploring compute costs in Snowflake. |
| Snowpark Container Services infrastructure | Cost for additional Snowpark Container Services infrastructure, such as storage and data transfer. See Snowpark Container Services costs for more information. |
| Ingestion | Cost for loading data into Snowflake using services such as Snowpipe or Snowpipe Streaming, based on data volume. These costs appear on your Snowflake bill under the respective ingestion service line items. Certain connectors also require a standard Snowflake warehouse, which incurs additional warehouse costs; for example, database CDC connectors require a warehouse for both the initial snapshot and incremental Change Data Capture (CDC). You can schedule MERGE operations to manage this compute cost. |
| Telemetry Data Ingest | Standard Snowflake charges for sending logs and metrics from Openflow deployments and runtimes to the event table in your Snowflake account. The rate, in credits per GB of telemetry data, is listed as Telemetry Data Ingest in Table 5 of the Snowflake Service Consumption Table. |

Note

The OPENFLOW_USAGE_HISTORY view currently does not contain records for SERVICE_TYPE = 'OPENFLOW_COMPUTE_SNOWFLAKE'.
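The following query is a minimal sketch of how you might summarize these costs, assuming a role with access to the SNOWFLAKE.ACCOUNT_USAGE schema (for example, ACCOUNTADMIN):

```sql
-- Sketch: daily Openflow compute credits per compute pool, from Account Usage.
SELECT
    TO_DATE(start_time) AS usage_date,
    name                AS compute_pool_name,
    SUM(credits_used)   AS credits_used
FROM snowflake.account_usage.metering_history
WHERE service_type = 'OPENFLOW_COMPUTE_SNOWFLAKE'
GROUP BY usage_date, compute_pool_name
ORDER BY usage_date;
```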

Openflow - Snowflake Deployment scaling

The runtimes and scaling behavior you choose are crucial for managing costs effectively. Openflow supports different runtime types, each with its own scaling characteristics.

Mapping runtimes to Snowflake compute pools

Choosing a runtime type causes the runtime pods to be scheduled on the associated Snowflake Compute Pool: INTERNAL_OPENFLOW_0_SMALL, INTERNAL_OPENFLOW_0_MEDIUM, or INTERNAL_OPENFLOW_0_LARGE. The following table describes the resources of each runtime type and its associated compute pool:

| Runtime type | Runtime vCPUs | Runtime available memory (GB) | Compute pool instance family | Snowflake Compute Pool | Instance family vCPUs | Instance family memory (GB) |
| --- | --- | --- | --- | --- | --- | --- |
| Small | 1 | 2 | CPU_X64_S | INTERNAL_OPENFLOW_0_SMALL | 4 | 16 |
| Medium | 4 | 10 | CPU_X64_SL | INTERNAL_OPENFLOW_0_MEDIUM | 16 | 64 |
| Large | 8 | 20 | CPU_X64_L | INTERNAL_OPENFLOW_0_LARGE | 32 | 128 |

The runtime type that you select determines the type of compute instances that are provisioned. Openflow scales the underlying Snowflake Compute Pools when additional pods need to be scheduled, based on CPU consumption and up to the maximum node setting specified during runtime creation.

Snowflake Compute Pools are configured with a minimum size of 0 nodes and a maximum of 50 nodes. The desired size is adjusted dynamically based on the CPU and memory that the runtimes require, and a pool scales down to 0 nodes after 600 seconds without resource demand.
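As a rough mental model (an approximation inferred from the worked examples later in this section, not documented sizing behavior), each runtime needs about CEIL(node count x runtime vCPUs / instance family vCPUs) instances in its compute pool, ignoring system overhead:

```sql
-- Approximate instances required per runtime:
--   CEIL(nodes * runtime_vcpus / instance_family_vcpus)
-- A simplified model inferred from the examples below, not documented behavior.
SELECT
    CEIL(10 * 8 / 32.0) AS large_runtime_instances,   -- 10 large nodes x 8 vCPU on 32-vCPU CPU_X64_L -> 3
    CEIL(2  * 4 / 16.0) AS medium_runtime_instances,  --  2 medium nodes x 4 vCPU on 16-vCPU CPU_X64_SL -> 1
    CEIL(2  * 1 /  4.0) AS small_runtime_instances;   --  2 small nodes x 1 vCPU on  4-vCPU CPU_X64_S -> 1
```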

Runtime types and associated costs

The following table illustrates the scaling behavior of various runtimes and their associated costs:

| Runtime | Activity | Snowflake costs | Cloud costs |
| --- | --- | --- | --- |
| No runtimes | None | 1x Openflow Control Pool x 1 node = 1 CPU_X64_S instance-hour | None |
| 1 small runtime (1 vCPU) (min = 1, max = 2) | Active for 1 hour; the runtime does not scale to 2 nodes | 1x Openflow Control Pool x 1 node + 1x small Openflow compute pool (CPU_X64_S) x 1 node = 2 CPU_X64_S instance-hours | None |
| 2 small runtimes (1 vCPU) (min/max = 2) and 1 large runtime (8 vCPU) (min/max = 10) | Small: 4 nodes active for 1 hour; Large: 10 nodes active for 1 hour | 1x Openflow Control Pool x 1 node + 2x CPU_X64_S x 1 node + 3x CPU_X64_L x 1 node = 3 CPU_X64_S instance-hours + 3 CPU_X64_L instance-hours | None |
| 1 medium runtime (4 vCPU) (min = 1, max = 2) | 1 node for the first 20 minutes; scales to 2 nodes after 20 minutes; scales back to 1 node after 40 minutes; 1 hour total | 1x Openflow Control Pool x 1 node + 1x CPU_X64_SL x 1 node = 1 CPU_X64_S instance-hour + 1 CPU_X64_SL instance-hour | None |
| 1 medium runtime (4 vCPU) (min/max = 2) | 2 nodes running for the first 30 minutes; suspended afterward | 1x Openflow Control Pool x 1 node + 1x CPU_X64_SL x 1 node x 1/2 hour = 1 CPU_X64_S instance-hour + 1/2 CPU_X64_SL instance-hour | None |
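You can check how these pools are actually scaling at any point. For example, assuming the pool naming shown in the tables above, the following statement lists the state, node counts, and min/max settings of the Openflow compute pools:

```sql
-- List Openflow compute pools (control pool and per-runtime-type pools),
-- including state, active node counts, and min/max node settings.
SHOW COMPUTE POOLS LIKE '%openflow%';
```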

Examples for calculating Openflow - Snowflake Deployment consumption

A user creates an Openflow - Snowflake Deployment and has not created any runtimes.
  • The Openflow_Control_Pool_0 Compute Pool is running with one CPU_X64_S instance

  • Total Openflow consumption = 1 CPU_X64_S instance-hour

A user creates one small runtime with Min Nodes = 1 and Max Nodes = 2. Runtime stays at 1 node for 1 hour.
  • The Openflow_Control_Pool_0 Compute Pool is running with 1 CPU_X64_S instance

  • The INTERNAL_OPENFLOW_0_SMALL Compute Pool is running with 1 CPU_X64_S instance

  • Total Openflow consumption = 2 CPU_X64_S instance-hours

A user creates 2 small runtimes with min/max of 2 nodes each, and one large runtime with min/max of 10 nodes. These runtimes are active for 1 hour.
  • The Openflow_Control_Pool_0 Compute Pool is running with 1 CPU_X64_S instance

    • 2 small runtimes at 2 nodes each = the INTERNAL_OPENFLOW_0_SMALL Compute Pool is running with 2 CPU_X64_S instances (one instance per runtime) = 2 CPU_X64_S instance-hours

    • 1 large runtime at 10 nodes = the INTERNAL_OPENFLOW_0_LARGE Compute Pool is running with 3 CPU_X64_L instances (10 nodes x 8 vCPUs = 80 vCPUs, which needs 3 instances of 32 vCPUs each) = 3 CPU_X64_L instance-hours

  • Total Openflow consumption = 3 CPU_X64_S instance-hours + 3 CPU_X64_L instance-hours

A user creates 1 medium runtime with 1 node. After 20 minutes, it scales to 2 nodes; after another 20 minutes, it scales back down to 1 node and runs for another 20 minutes.
  • The Openflow_Control_Pool_0 Compute Pool is running with 1 CPU_X64_S instance

  • 1 medium runtime scaling up to 2 nodes = the INTERNAL_OPENFLOW_0_MEDIUM Compute Pool is running with 1 CPU_X64_SL instance (both 4-vCPU nodes fit on one 16-vCPU instance) = 1 CPU_X64_SL instance-hour

  • Total Openflow consumption = 1 CPU_X64_S instance-hour + 1 CPU_X64_SL instance-hour

A user creates 1 medium runtime with 2 nodes, then suspends it after 30 minutes.
  • The Openflow_Control_Pool_0 Compute Pool is running with 1 CPU_X64_S instance

  • 1 medium runtime at 2 nodes = the INTERNAL_OPENFLOW_0_MEDIUM Compute Pool is running with 1 CPU_X64_SL instance

  • 30 minutes = 1/2 hour

  • Total Openflow consumption = 1 CPU_X64_S instance-hour + 1/2 CPU_X64_SL instance-hour
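Instance-hours translate to billed credits at the per-hour rates in Table 1(d) of the Snowflake Service Consumption Table. The following sketch shows the shape of that calculation for the last example; the rates used are placeholders, not actual Snowflake prices:

```sql
-- Sketch: convert instance-hours to credits. The rates below are placeholders;
-- substitute the actual per-hour rates from Table 1(d) of the
-- Snowflake Service Consumption Table.
WITH pool_usage (instance_family, instance_hours) AS (
    SELECT * FROM VALUES ('CPU_X64_S', 1.0), ('CPU_X64_SL', 0.5)
),
rates (instance_family, credits_per_hour) AS (
    SELECT * FROM VALUES ('CPU_X64_S', 0.10), ('CPU_X64_SL', 0.40)  -- placeholder rates
)
SELECT SUM(u.instance_hours * r.credits_per_hour) AS total_credits
FROM pool_usage u
JOIN rates r USING (instance_family);
```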
