Run Spark workloads from VS Code, Jupyter Notebooks, or a terminal¶
You can run Spark workloads interactively from Jupyter Notebooks, VS Code, or any Python-based interface without needing to manage a Spark cluster. The workloads run on the Snowflake infrastructure.
In this topic, you complete the following tasks:
Confirm that you have the prerequisites.
Set up your environment to connect with Snowpark Connect for Spark on Snowflake.
Install Snowpark Connect for Spark.
Run PySpark code from your client so that it executes on Snowflake.
Prerequisites¶
Confirm that your Python and Java installations are built for the same computer architecture. For example, if your Python installation is built for arm64, Java must also be built for arm64 (not x86_64).
Set up your environment¶
Set up your development environment so that your code can connect to Snowpark Connect for Spark on Snowflake. To connect to Snowflake, your client code uses a .toml file containing connection details.
If you have Snowflake CLI installed, you can use it to define a connection. Otherwise, you can manually write connection parameters in a connections.toml file.
Add a connection by using Snowflake CLI¶
You can use Snowflake CLI to add connection properties that Snowpark Connect for Spark can use to connect to Snowflake. Your changes are saved to a config.toml file.
To add and verify a connection, complete the following steps (see the examples after this list):
Run the snow connection add command and follow the prompts to define a connection. Be sure to specify spark-connect as the connection name. This command adds a connection to your config.toml file.
Run the snow connection test command to confirm that the connection works. You can test the connection in this way because you added it by using Snowflake CLI.
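A sketch of the commands follows; the prompt values are placeholders for your own account details:

```bash
# Add a connection interactively; answer the prompts with your account details.
# The connection name must be spark-connect.
snow connection add

# Confirm that the new connection can reach Snowflake.
snow connection test --connection spark-connect
```

After the prompts complete, your config.toml contains an entry similar to this sketch (the properties shown are common ones; your file can include others):

```toml
[connections.spark-connect]
account = "myorg-myaccount"
user = "jdoe"
password = "********"
warehouse = "my_wh"
database = "my_db"
schema = "public"
```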
Add a connection by manually writing a connection file¶
You can manually write or update a connections.toml file so that your code can connect to Snowpark Connect for Spark on Snowflake.
To configure the file, complete the following steps (see the examples after this list):
Run a command to ensure that your connections.toml file allows only the owner (user) to have read and write access.
Edit the connections.toml file so that it contains a [spark-connect] connection with your connection properties. Be sure to replace the example values with your own connection specifics.
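The following is a minimal sketch; the file path assumes the default Snowflake configuration directory (~/.snowflake), and all values are placeholders that you replace with your own:

```bash
# Restrict the file so that only the owner can read and write it.
chmod 0600 ~/.snowflake/connections.toml
```

```toml
[spark-connect]
account = "myorg-myaccount"   # your account identifier
user = "jdoe"                 # your Snowflake user name
password = "********"
warehouse = "my_wh"
database = "my_db"
schema = "public"
```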
Install Snowpark Connect for Spark¶
You can install Snowpark Connect for Spark as a Python package.
To install, complete the following steps (see the examples after this list):
Create a Python virtual environment.
Confirm that your Python version is 3.10 or later and earlier than 3.13 by running python3 --version.
Install the Snowpark Connect for Spark package.
Add Python code to start a Snowpark Connect for Spark server and create a Snowpark Connect for Spark session.
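A sketch of these steps follows; the package name snowpark-connect and the start_session and get_session helpers are shown as commonly documented, but verify them against the current package documentation:

```bash
# Create and activate a virtual environment.
python3 -m venv sp-connect-venv
source sp-connect-venv/bin/activate

# Confirm the Python version (must be 3.10, 3.11, or 3.12).
python3 --version

# Install the Snowpark Connect for Spark package.
pip install snowpark-connect
```

```python
# A minimal sketch: assumes the package exposes start_session and
# get_session helpers; check the package documentation for the exact API.
from snowflake import snowpark_connect

snowpark_connect.start_session()  # starts the local Snowpark Connect for Spark server
spark = snowpark_connect.get_session()  # a Spark session backed by Snowflake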
Run Python code from your client¶
Once you have an authenticated connection in place, you can write code as you normally would.
You can run PySpark code that connects to Snowpark Connect for Spark by using the PySpark client library.
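For example, after you create a session as shown in the installation steps, ordinary PySpark DataFrame code runs on Snowflake; this is a minimal sketch:

```python
# Ordinary PySpark code; the DataFrame operations execute on Snowflake.
df = spark.createDataFrame(
    [(1, "Alice"), (2, "Bob")],
    ["id", "name"],
)
df.filter(df.id > 1).show()
```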
Run Scala code from your client¶
You can run Scala applications that connect to Snowpark Connect for Spark by using the Spark Connect client library.
This guide walks you through setting up Snowpark Connect for Spark and connecting your Scala applications to the Snowpark Connect for Spark server.
Step 1: Set up your Snowpark Connect for Spark environment¶
Set up your environment by using the steps described in the preceding sections: Prerequisites, Set up your environment, and Install Snowpark Connect for Spark.
Step 2: Create a Snowpark Connect for Spark server script and launch the server¶
Create a Python script to launch the Snowpark Connect for Spark server, as in the sketch after this list.
Launch the Snowpark Connect for Spark server by running the script.
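A minimal sketch of such a script follows, assuming the same snowpark_connect helpers shown in the installation steps; by default, Spark Connect servers listen on port 15002:

```python
# start_server.py -- a minimal sketch; assumes the snowpark_connect package
# exposes start_session (check the package docs for the exact API).
from snowflake import snowpark_connect

# Start a local Snowpark Connect for Spark server that forwards
# Spark Connect requests to Snowflake.
snowpark_connect.start_session()

# Keep the process alive so that Scala clients can connect.
input("Server running. Press Enter to stop.")
```

```bash
python start_server.py
```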
Step 3: Set up your Scala application¶
Add the Spark Connect client dependency to your build.sbt file, as in the sketches after this list.
Write Scala code that connects to the Snowpark Connect for Spark server.
Compile and run your application.
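The following sketches assume Spark 3.5.x with Scala 2.12 and a server listening on the default Spark Connect port 15002; adjust the versions and the host and port for your setup.

```scala
// build.sbt -- a minimal sketch
scalaVersion := "2.12.18"

libraryDependencies += "org.apache.spark" %% "spark-connect-client-jvm" % "3.5.1"
```

```scala
// Main.scala -- connect to the local Snowpark Connect for Spark server.
import org.apache.spark.sql.SparkSession

object Main extends App {
  val spark = SparkSession.builder()
    .remote("sc://localhost:15002") // endpoint of the server you launched earlier
    .getOrCreate()

  // This query executes on Snowflake through the server.
  spark.sql("SELECT CURRENT_TIMESTAMP()").show()

  spark.stop()
}
```

With sbt, compile and run the application by using sbt run.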
Scala UDF support on Snowpark Connect for Spark¶
When using user-defined functions or custom code, do one of the following:
Register a class finder to monitor and upload class files.
Upload JAR dependencies if needed. You can include the workload JAR itself if a class finder is not used.
Use a staged JAR.
Using Scala 2.13¶
By default, Snowpark Connect for Spark uses Scala 2.12. Workloads built with Scala 2.13 must specify the Scala version by using the snowpark.connect.scala.version configuration option.
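One plausible way to set this option from a client, assuming that it is applied as a session configuration setting (verify the mechanism against the current documentation):

```scala
// A sketch: assumes snowpark.connect.scala.version is read as a session
// configuration setting; confirm this in the current docs.
spark.conf.set("snowpark.connect.scala.version", "2.13")
```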
Troubleshoot Snowpark Connect for Spark installation¶
Use the following checks to troubleshoot Snowpark Connect for Spark installation and use.
Ensure that Java and Python are based on the same architecture.
Use the most recent Snowpark Connect for Spark package file, as described in Install Snowpark Connect for Spark.
Confirm that the python command runs PySpark code correctly for local execution, that is, without Snowflake connectivity.
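For example, the following minimal, self-contained sketch uses only local Spark and requires no Snowflake configuration:

```python
# local_check.py -- plain local PySpark; no Snowflake connectivity involved.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .master("local[*]")
    .appName("local-check")
    .getOrCreate()
)

spark.createDataFrame([(1, "a"), (2, "b")], ["id", "val"]).show()
spark.stop()
```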
Open source clients¶
You can use standard, off-the-shelf open source software (OSS) Spark client packages—such as PySpark and Spark clients for Java or Scala—from your preferred local environments, including Jupyter Notebooks and VS Code. In this way, you can avoid installing packages specific to Snowflake.
You might find this useful if you want to write Spark code locally and have the code use Snowflake compute resources and enterprise governance. In this scenario, you perform authentication and authorization through programmatic access tokens (PATs).
The following sections cover installation, configuration, and authentication. You’ll also find a simple PySpark example to validate your connection.
Step 1: Install required packages¶
Install pyspark. You don’t need to install any Snowflake packages.
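For example, one way to install it; the connect extra pulls in the gRPC and Arrow dependencies that the Spark Connect client requires:

```bash
pip install "pyspark[connect]"
```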
Step 2: Setup and authentication¶
Generate a programmatic access token (PAT). For more information, see the Snowflake documentation on programmatic access tokens. The first example after this list adds a PAT named TEST_PAT for the user sysadmin and sets the expiration to 30 days.
Find your Snowflake Spark Connect host URL. The second example after this list shows SQL that you can run in Snowflake to find the hostname for your account.
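The following is a sketch of the PAT statement; verify the exact clauses against the current SQL reference, and note that your account’s authentication policies can impose additional requirements:

```sql
-- Create a PAT named TEST_PAT for the user sysadmin, expiring in 30 days.
ALTER USER sysadmin ADD PROGRAMMATIC ACCESS TOKEN TEST_PAT
  DAYS_TO_EXPIRY = 30;
```

One way to derive the hostname, assuming the common <orgname>-<account_name>.snowflakecomputing.com form; confirm the result against the URL that you use to sign in:

```sql
-- Build the account hostname from the organization and account names.
SELECT CURRENT_ORGANIZATION_NAME() || '-' || CURRENT_ACCOUNT_NAME()
       || '.snowflakecomputing.com' AS spark_connect_host;
```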
Step 3: Connect to Spark Connect server¶
To connect to the Spark Connect server, use code such as the following:
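This minimal sketch assumes the standard Spark Connect connection-string parameters use_ssl and token; replace the host placeholder with the hostname that you found earlier, and supply your PAT through an environment variable rather than hard-coding it:

```python
import os

from pyspark.sql import SparkSession

# Placeholders: your account host, and a PAT stored in an environment variable.
host = "<orgname>-<account_name>.snowflakecomputing.com"
token = os.environ["SNOWFLAKE_PAT"]

spark = (
    SparkSession.builder
    .remote(f"sc://{host}:443/;use_ssl=true;token={token}")
    .getOrCreate()
)

# A quick validation query that executes on Snowflake.
spark.sql("SELECT CURRENT_USER() AS who").show()
```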