Cortex Code Agent SDK quickstart¶
This topic walks you through building an AI agent that reads a data pipeline script, finds bugs, and fixes them automatically using the Cortex Code Agent SDK.
What you will do:
Set up a project with the Cortex Code Agent SDK.
Create a data pipeline script with some bugs.
Run an agent that finds and fixes the bugs without manual intervention.
Prerequisites¶
Node.js 18+ (for TypeScript) or Python 3.10+ (for Python).
Snowflake connection configured through Snowflake CLI connection settings, typically in
`~/.snowflake/connections.toml`, with `~/.snowflake/config.toml` also supported for existing setups (see Configuring connections):
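A minimal connection entry in `~/.snowflake/connections.toml` might look like the following sketch; all values are placeholders, and the exact keys your setup needs depend on your authentication method:

```toml
# Placeholder values -- replace with your own account and user.
[default]
account = "myorg-myaccount"
user = "jdoe"
authenticator = "externalbrowser"
```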
Setup¶
1. Install the Cortex Code CLI¶
Install the CLI:
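The install command is not reproduced in this extract. If the CLI is distributed through npm (an assumption here), installation would look like:

```shell
# Hypothetical package name -- check Snowflake's installation docs
# for the actual distribution channel and package name.
npm install -g @snowflake/cortex-code
```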
Verify the installation:
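Assuming the installed binary is named `cortex` (an assumption), a version check confirms it is on your PATH:

```shell
# Binary name is assumed; substitute the one the installer reports.
cortex --version
```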
2. Set up your project¶
Create and enter a project directory:
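For example (the directory name is arbitrary):

```shell
mkdir -p pipeline-fixer && cd pipeline-fixer
```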
3. Install the SDK¶
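The package names below are assumptions; take the exact names from the TypeScript or Python SDK reference:

```shell
# Python SDK (package name assumed)
pip install snowflake-cortex-code-sdk

# or TypeScript SDK (package name assumed)
npm install @snowflake/cortex-code-sdk
```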
Create a data pipeline script¶
Create a data pipeline script with some intentional bugs for the agent to fix:
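The original code block is not reproduced in this extract. A minimal Python `report.py` that matches the two bugs described below could look like this (everything except `compute_conversion_rate` and `format_report` is illustrative):

```python
# report.py -- a minimal sketch of the buggy pipeline script.

def compute_conversion_rate(clicks, conversions):
    # Bug: divides by clicks without checking for zero, so a campaign
    # with no clicks raises ZeroDivisionError.
    return conversions / clicks

def format_report(results):
    # Bug: calls max() on the results list without checking whether it
    # is empty, which raises ValueError when there are no rows.
    best = max(results, key=lambda row: row["rate"])
    lines = [f"{row['campaign']}: {row['rate']:.2%}" for row in results]
    lines.append(f"Best campaign: {best['campaign']}")
    return "\n".join(lines)

def run_pipeline(campaigns):
    results = [
        {"campaign": c["name"],
         "rate": compute_conversion_rate(c["clicks"], c["conversions"])}
        for c in campaigns
    ]
    return format_report(results)
```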
This code has two issues:
`computeConversionRate` / `compute_conversion_rate` divides by `clicks` without checking for zero, returning `NaN` or `Infinity` (TypeScript) or raising a `ZeroDivisionError` (Python) for campaigns with no clicks.
`formatReport` / `format_report` calls `max`/`reduce` on the results list without checking whether it is empty, which raises a `ValueError` (Python) or `TypeError` (TypeScript) when there are no rows.
Build an agent that finds and fixes bugs¶
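The agent script itself is not reproduced in this extract. A non-runnable Python sketch of the shape described below, with an assumed module name (`cortex_code_sdk`) and assumed option keys (`connection`, `allowed_tools`), might look like:

```python
# Sketch only: module name and option keys are assumptions -- see the
# Python SDK reference for the exact signatures.
import asyncio
from cortex_code_sdk import query  # module name is a guess

async def main():
    prompt = (
        "Read report.py, find bugs that could crash the pipeline "
        "on edge-case input, and fix them."
    )
    options = {
        "connection": "default",            # Snowflake CLI connection name
        "allowed_tools": ["Read", "Edit"],  # auto-approved tools (names assumed)
    }
    # query() returns an async iterator; each message is the agent's
    # reasoning, a tool call, a tool result, or the final outcome.
    async for message in query(prompt=prompt, options=options):
        print(message)

asyncio.run(main())
```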
This code has three main parts:
query(): The main entry point that creates the agentic loop. It returns an async iterator that you consume in your language’s async loop syntax to stream messages as the agent works. See the full API in the TypeScript or Python reference.
prompt: What you want the agent to do. It tells the agent what task to complete.
options: Configuration for the agent.
`connection` specifies which Snowflake CLI connection to authenticate with. `allowedTools` specifies which tools are auto-approved without prompting, and `disallowedTools` can block tools entirely. Other options include `model`, `mcp_servers`, and more.
The streaming loop runs as the agent thinks, calls tools, observes results, and decides what to do next. Each iteration yields a message: the agent’s reasoning, a tool call, a tool result, or the final outcome. The SDK handles the orchestration.
Run your agent¶
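Assuming you saved the agent script as `agent.py` (an assumed filename):

```shell
python agent.py
```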
After running, check your report file. You’ll see defensive code handling empty results and zero-click campaigns. Your agent autonomously:
Read the file to understand the code.
Analyzed the logic and identified edge cases that would crash.
Edited the file to add proper error handling.
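The exact edit varies from run to run, but the defensive version the agent produces might resemble this sketch:

```python
# One plausible fixed version of the two buggy functions.

def compute_conversion_rate(clicks, conversions):
    # Guard against campaigns with no clicks instead of dividing by zero.
    if clicks == 0:
        return 0.0
    return conversions / clicks

def format_report(results):
    # Handle an empty result set instead of calling max() on an empty list.
    if not results:
        return "No campaign data available."
    best = max(results, key=lambda row: row["rate"])
    lines = [f"{row['campaign']}: {row['rate']:.2%}" for row in results]
    lines.append(f"Best campaign: {best['campaign']}")
    return "\n".join(lines)
```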
Multi-turn conversations¶
For interactive sessions where you send multiple prompts with shared context, use the Client API:
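The Client API example is not reproduced in this extract. A non-runnable Python sketch, assuming the `CortexCodeSDKClient` named in the Python SDK reference exposes an async `query` method (both the module name and the method shape are assumptions):

```python
# Sketch only: consult the Python SDK reference for real signatures.
import asyncio
from cortex_code_sdk import CortexCodeSDKClient  # module name is a guess

async def main():
    client = CortexCodeSDKClient(connection="default")
    # First turn: analyze.
    async for message in client.query("Find the edge cases in report.py"):
        print(message)
    # Second turn: the client keeps the context from the first turn.
    async for message in client.query("Now fix the ones you found"):
        print(message)

asyncio.run(main())
```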
Try other prompts¶
Now that your agent is set up, try some different prompts:
"Add comprehensive type hints to all functions in report.py"
"Write a SQL query that finds the top 10 campaigns by conversion rate"
"Add input validation to all functions in report.py"
"Create a README.md documenting the functions in report.py"
Key concepts¶
Permission modes¶
Permission modes control the level of human oversight for tool calls:
| Mode | Behavior | Use case |
|---|---|---|
|  | Runs every tool without prompts. Requires | Sandboxed CI, fully trusted environments |
|  | Uses standard permission checks. In SDK sessions, configure | Controlled workflows with explicit permission policy |
|  | Auto-approves plan requests and plan-exit confirmations. It does not bypass ordinary tool permissions. | Specialized workflows that want plan approvals to proceed automatically |
|  | Starts in planning; approving | Code review, analysis |
For granular control over individual tool calls, use the `canUseTool` callback. See Handle approvals and user input for details.
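Whatever the exact callback signature turns out to be (the `(tool_name, tool_input)` parameters and the `{"behavior": ...}` return shape below are assumptions; check the SDK reference), the decision logic is ordinary code you control. An illustrative Python version:

```python
# Illustrative canUseTool-style callback with assumed signature and
# return shape; only the decision logic is the point here.
READ_ONLY_TOOLS = {"Read", "Grep", "Glob"}

def can_use_tool(tool_name, tool_input):
    # Auto-approve read-only tools.
    if tool_name in READ_ONLY_TOOLS:
        return {"behavior": "allow"}
    # Block destructive shell commands outright.
    if tool_name == "Bash" and "rm " in tool_input.get("command", ""):
        return {"behavior": "deny",
                "message": "Destructive commands are blocked."}
    # Everything else falls back to the session's permission mode.
    return {"behavior": "ask"}
```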
Next steps¶
Handle approvals and user input: Control which tools the agent can use with the `canUseTool` callback.
TypeScript SDK reference: Complete API docs for `query()`, `createCortexCodeSession()`, types, and events.
Python SDK reference: Complete API docs for `query()`, `CortexCodeSDKClient`, MCP tools, and hooks.
Legal notices¶
Where your configuration of Cortex Code uses a model provided on the Model and Service Pass-Through Terms, your use of that model is further subject to the terms for that model on that page.
The data classification of inputs and outputs is as set forth in the following table.
| Input data classification | Output data classification | Designation |
|---|---|---|
| Usage Data | Customer Data | Covered AI Features [1] |
For additional information, refer to Snowflake AI and ML.