Snowpark Migration Accelerator: Supported Platforms

Supported Platforms

The Snowpark Migration Accelerator (SMA) currently supports source code written in the following programming languages:

  • Python

  • Scala

  • SQL

The SMA analyzes both code files and notebook files to identify any usage of the Spark API and other third-party APIs. For a complete list of file types that the SMA can analyze, refer to Supported Filetypes.
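
As a rough illustration (the file path, table, and column names below are invented for this sketch and do not come from the SMA documentation), a Python source file like the following contains several Spark API references of the kind the SMA records during analysis:

  from pyspark.sql import SparkSession
  from pyspark.sql import functions as F

  spark = SparkSession.builder.appName("example").getOrCreate()

  # Each Spark API reference below (read.parquet, filter, groupBy, agg)
  # is the kind of usage the SMA inventories when it scans a file.
  df = spark.read.parquet("/data/orders")
  result = (
      df.filter(F.col("status") == "OPEN")
        .groupBy("region")
        .agg(F.count("*").alias("open_orders"))
  )
  result.show()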

SQL Dialects

The Snowpark Migration Accelerator (SMA) can analyze code files to identify SQL elements. Currently, the SMA can detect SQL code written in the following formats:

  • Spark SQL

  • HiveQL

  • Databricks SQL

SQL Assessment and Conversion Guidelines

Although Spark SQL and Snowflake SQL are highly compatible, some SQL code may not convert perfectly.
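
As one hedged illustration (the table, column names, and query below are invented, and the pairing reflects general Spark and Snowflake documentation rather than SMA output), Spark SQL's LATERAL VIEW explode has no one-to-one Snowflake translation; Snowflake expresses the same idea with LATERAL FLATTEN, so such statements need restructuring rather than a literal rewrite:

  # Spark SQL: expand an array column into rows with LATERAL VIEW explode.
  spark_sql = """
      SELECT o.id, t.tag
      FROM orders o
      LATERAL VIEW explode(o.tags) t AS tag
  """

  # Snowflake SQL: the equivalent idea uses LATERAL FLATTEN, so the
  # statement must be restructured, not copied across verbatim.
  snowflake_sql = """
      SELECT o.id, t.value AS tag
      FROM orders o,
      LATERAL FLATTEN(input => o.tags) t
  """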

SQL analysis is possible only when the SQL is received in one of the following ways:

  • A SQL cell within a supported notebook file

  • A .sql or .hql file

  • A complete string passed to a spark.sql call.

    Some variable substitutions are not supported. Here are a few examples:

    • Parsed:

      spark.sql("select * from TableA")
      

      Newly supported SMA scenarios include the following:

      # explicit concatenation
      spark.sql("select * from TableA" + ' where col1 = 1')
      
      # implicit concatenation (adjacent string literals)
      spark.sql("select * from TableA" ' where col1 = 1')
      
      # variable initialized with the SQL string earlier in the same scope
      sql = "select * from TableA"
      spark.sql(sql)
      
      # f-string interpolation
      spark.sql(f"select * from {varTableA}")
      
      # str.format-style interpolation
      spark.sql("select * from {}".format(varTableA))
      
      # variable built by mixing concatenation and f-string interpolation
      sql = f"select * from {varTableA} " + f'where {varCol1} = 1'
      spark.sql(sql)
      
    • Not Parsed:

      some_variable = "TableA"
      spark.sql("select * from " + some_variable)

      A possible workaround for this pattern is sketched after this list.
      

    SQL elements are accounted for in the object inventories, and a readiness score is generated specifically for SQL.
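
Where the non-parsed pattern above appears, one possible workaround (an illustrative suggestion based on the parsed scenarios listed earlier, not an official SMA recommendation) is to resolve the statement into a single complete string before the spark.sql call:

  # Hypothetical refactor of the non-parsed example so that the SMA
  # receives one complete, analyzable SQL string.
  some_variable = "TableA"

  # Instead of spark.sql("select * from " + some_variable), build the
  # statement with an f-string, which is one of the parsed scenarios.
  query = f"select * from {some_variable}"
  spark.sql(query)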
