snowflake.snowpark.FileOperation.put_stream

FileOperation.put_stream(input_stream: IO[bytes], stage_location: str, *, parallel: int = 4, auto_compress: bool = True, source_compression: str = 'AUTO_DETECT', overwrite: bool = False) → PutResult [source] (https://github.com/snowflakedb/snowpark-python/blob/v1.16.0/src/snowflake/snowpark/file_operation.py#L227-L291)

Uploads the contents of a file stream to the stage as a single file.

Parameters:
  • input_stream – The input stream from which the data will be uploaded.

  • stage_location – The full stage path with prefix and file name where you want the file to be uploaded.

  • parallel

    Specifies the number of threads to use for uploading files. The upload process separates batches of data files by size:

    • Small files (< 64 MB compressed or uncompressed) are staged in parallel as individual files.

    • Larger files are automatically split into chunks, staged concurrently, and reassembled in the target stage. A single thread can upload multiple chunks.

    Increasing the number of threads can improve performance when uploading large files. Supported values: any integer from 1 (no parallelism) to 99 (maximum parallelism). Defaults to 4.

  • auto_compress – Specifies whether Snowflake uses gzip to compress files during upload. Defaults to True.

  • source_compression – Specifies the method of compression used on already-compressed files that are being staged. Supported values: 'AUTO_DETECT', 'GZIP', 'BZ2', 'BROTLI', 'ZSTD', 'DEFLATE', 'RAW_DEFLATE', 'NONE'. Defaults to 'AUTO_DETECT'.

  • overwrite – Specifies whether Snowflake will overwrite an existing file with the same name during upload. Defaults to False.
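
Example (a minimal sketch; the connection parameters, stage name @my_stage, and file name below are assumptions for illustration, not part of this API):

    import io

    from snowflake.snowpark import Session

    # Hypothetical connection parameters; replace with real values.
    session = Session.builder.configs(
        {"account": "...", "user": "...", "password": "..."}
    ).create()

    # Any IO[bytes] works here, e.g. an open binary file or an in-memory buffer.
    data = io.BytesIO(b"id,name\n1,alpha\n2,beta\n")

    # With auto_compress=True (the default), the staged file is gzipped,
    # so @my_stage/prefix/data.csv is stored as data.csv.gz.
    result = session.file.put_stream(
        data,
        "@my_stage/prefix/data.csv",
        parallel=4,
        auto_compress=True,
        overwrite=True,
    )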

Returns:

A PutResult object that represents the result of the uploaded file.
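
PutResult is a named tuple, so its fields can be read directly. Continuing the sketch above (the printed values are illustrative):

    print(result.status)   # e.g. 'UPLOADED', or 'SKIPPED' when the file exists and overwrite=False
    print(result.target)   # e.g. 'data.csv.gz' after auto-compression
    print(result.source_size, result.target_size)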
