
Commit 2647ccb

de (#3006)
1 parent 58bf278 commit 2647ccb

File tree

1 file changed: +78 -10 lines changed


site/sfguides/src/best-practices-cortex-code-cli/best-practices-cortex-code-cli.md

Lines changed: 78 additions & 10 deletions
@@ -72,9 +72,9 @@ curl -LsS https://ai.snowflake.com/static/cc-scripts/install.sh | sh
* **Leverage built-in help** - ask "How does this work?" or check Snowflake documentation

-## 101 Use Cases
+# 101 Use Cases

-### Data discovery & querying
+## Data discovery & querying

Here we'll create a basic synthetic dataset and do some basic analysis to generate a dashboard.

@@ -139,7 +139,7 @@ they cancelled their service (churn). Ensure there's a customer_id column that's
unique. Create the data locally and then upload it to Snowflake.
```
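As a rough illustration of what that generation step produces, here is a standard-library sketch; every column except the unique `customer_id` the prompt requires is an assumption about what the CLI might generate:

```python
import csv
import random

random.seed(7)  # reproducible sample data

# Hypothetical columns; only CUSTOMER_ID uniqueness is specified in the prompt.
regions = ["NORTH", "SOUTH", "EAST", "WEST"]
contracts = ["MONTHLY", "ANNUAL", "TWO_YEAR"]

rows = []
for i in range(1000):
    rows.append({
        "CUSTOMER_ID": f"CUST_{i:05d}",           # unique, as the prompt requires
        "REGION": random.choice(regions),
        "CONTRACT_TYPE": random.choice(contracts),
        "MONTHLY_CHARGES": round(random.uniform(20, 150), 2),
        "DATA_USAGE_GB": round(random.uniform(0.5, 500), 1),
        "CHURN": random.random() < 0.25,          # ~25% churn rate (assumption)
    })

# Write locally; uploading to Snowflake is the CLI's next step.
with open("telecom_churn.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
```
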

-### Perform basic queries against this data
+### Query your data

Ask anything! Here are some basic examples:

@@ -152,7 +152,7 @@ the most risky regions and contract types.
I want to identify the heaviest data users who are also churning.
```

-### Build Interactive Dashboards
+## Build Interactive Dashboards

Create and deploy Streamlit apps with charts, filters, and interactivity.

@@ -173,13 +173,13 @@ Give me a link to access the dashboard when it's done.
Congratulations! You should now have a working Streamlit dashboard that displays the dataset you created!

-## 201 Use Cases
+# 201 Use Cases

Now, let's make this more interactive by creating a Cortex Agent to answer questions about this data in Snowflake Intelligence.

In this process, we'll augment the existing synthetic data with some synthetic data of customer calls.

-### Create a Semantic View for Cortex Analyst
+## Create a Semantic View

Now let's create a semantic view so that you can use Cortex Analyst with this data. Try the prompt below and use the defaults for all the questions it asks.

@@ -188,7 +188,7 @@ Write a Semantic View named DEMO_TELECOM_CHURN_ANALYTICS for Cortex Analyst
based on this data. Use the semantic-view optimization skill
```

-### Create a Cortex Search service
+## Create a Cortex Search service

Step 1: Generate some synthetic data containing customer service calls

@@ -208,7 +208,7 @@ Create a Cortex Search Service named CALL_LOGS_SEARCH that indexes these
transcripts. It should index the TRANSCRIPT_TEXT column and filter by CUSTOMER_ID
```

-### Create a Cortex Agent
+## Create a Cortex Agent

Finally, let's create a Cortex Agent that uses these two services and add it to Snowflake Intelligence:

@@ -227,7 +227,7 @@ Constraint: Never reveal the raw CHURN_RISK_SCORE to the user; interpret it as
'Low', 'Medium', or 'High'."
```
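The bucketing that constraint describes might look like the sketch below; the agent handles this internally, and the 0.33/0.66 cut-points are illustrative assumptions, not values from the agent:

```python
def risk_bucket(score: float) -> str:
    """Map a raw churn-risk score in [0, 1] to a user-facing label.

    Thresholds are illustrative assumptions; the agent's instruction only
    says to present 'Low', 'Medium', or 'High' instead of the raw score.
    """
    if score < 0.33:
        return "Low"
    if score < 0.66:
        return "Medium"
    return "High"

print(risk_bucket(0.8))  # High
```
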

-### Deploy to Snowflake Intelligence
+## Deploy to Snowflake Intelligence

Finally, we can deploy the agent to [Snowflake Intelligence](https://ai.snowflake.com/)

@@ -239,7 +239,75 @@ Ta-da! You have successfully created and deployed a Snowflake Intelligence agent
Now you should be able to access this agent in Snowflake Intelligence and ask it questions like:

-*What are customers complaining about in their calls?"* or *"Show me high-risk customers with monthly charges over $100"*
+- *"What are customers complaining about in their calls?"*
+- *"Show me high-risk customers with monthly charges over $100"*
## Create and manage dbt projects
Sometimes starting a brand-new dbt project can feel like a full-day task: jumping between Snowsight, your IDE, your terminal, and your dbt repo to define sources, build models, add tests, run builds, validate outputs, and share results.

With Cortex Code CLI, you can often collapse that end-to-end loop into a single conversation, staying in flow while it handles the boilerplate, wiring, and Snowflake-specific best practices.

### dbt Projects on Snowflake

[dbt Projects on Snowflake](https://docs.snowflake.com/en/user-guide/data-engineering/dbt-projects-on-snowflake-using-workspaces) is a Snowflake-native implementation of dbt that unlocks project management and orchestration through Workspaces.
Keep these Workspace-specific considerations in mind:

- **`profiles.yml` is required in the workspace**: Each dbt project folder in a Snowflake Workspace must include a `profiles.yml` that specifies a target `warehouse`, `database`, `schema`, and `role`. (Unlike dbt Core, `account` and `user` can be blank/arbitrary because runs execute in Snowflake under the current context.)
- **File-count limits**: A dbt project folder can’t exceed 20,000 files (including generated/log directories).
- **Sharing**: Workspaces are typically created in a personal database and aren’t shareable; use shared workspaces if you need multi-user collaboration.
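Based on the requirements above, a minimal `profiles.yml` for a Workspace-hosted project might look like the following sketch; the profile name and the specific role/warehouse/database/schema values are illustrative placeholders, not required names:

```yaml
# Illustrative profiles.yml for dbt Projects on Snowflake (all values are placeholders)
my_dbt_project:
  target: dev
  outputs:
    dev:
      type: snowflake
      account: ""        # can be blank/arbitrary; runs execute inside Snowflake
      user: ""           # can be blank/arbitrary for the same reason
      role: DBT_ROLE
      warehouse: DBT_WH
      database: TB_101
      schema: ANALYTICS
```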
If you are using Snowflake's native dbt implementation, try prompts like:
```
Find all of the tables within Database tb_101 Schema RAW.
```

```
Create a dbt project that uses these source tables to create an operations pipeline to analyze weekly food truck performance using the Tasty Bytes dataset. Using the raw order, menu, and truck location data, build a model that calculates weekly revenue, total orders, and average order value by truck and city. Add appropriate tests, run a build, validate the output, and generate a shareable HTML summary of the results.
```
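The aggregation that prompt asks the model to perform is, in spirit, the following (a Python sketch with hypothetical column names; the real work happens in the generated dbt SQL):

```python
from collections import defaultdict
from datetime import date

# Hypothetical raw order rows: (order_date, truck_id, city, order_total)
orders = [
    (date(2024, 1, 1), "T1", "Seattle", 40.0),
    (date(2024, 1, 3), "T1", "Seattle", 60.0),
    (date(2024, 1, 2), "T2", "Denver", 25.0),
]

# Group by (ISO year/week, truck, city) and accumulate the three metrics.
weekly = defaultdict(lambda: {"revenue": 0.0, "orders": 0})
for d, truck, city, total in orders:
    key = (d.isocalendar()[:2], truck, city)  # ((ISO year, week), truck, city)
    weekly[key]["revenue"] += total
    weekly[key]["orders"] += 1

for metrics in weekly.values():
    metrics["avg_order_value"] = metrics["revenue"] / metrics["orders"]
```
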
```
Open the generated HTML summary in my browser.
```
Once you have a first version working, keep iterating with follow-ups like:

- **Why this structure**: Why did you structure the model this way?
- **Better coverage**: Can you add more tests for nulls and uniqueness?
- **Cleaner layering**: Can you refactor this into staging and mart layers?
- **Speed/cost**: How would you optimize this project for performance and cost?
### dbt OSS and dbt Cloud

If you run dbt from your own repo (dbt Core) or manage it via dbt Cloud, you can still use Cortex Code CLI to generate and evolve the project locally, while respecting your existing connection setup (for example, using `~/.dbt` instead of creating a new `profiles.yml`).

For example:
```
Create a dbt project under /tasty_food that builds a data pipeline to analyze order information and trends using my source data in Database tb_101 Schema RAW. Add appropriate tests, run a build, validate the output, and generate a shareable HTML summary. Don’t create a profiles.yml file; I already have a Snowflake connection via ~/.dbt. When running dbt commands, use --target PM.
```
And when your project grows, you can use Cortex Code CLI to help keep it fast and cost-efficient:

```
Take a look at /target/run_results.json, identify the slowest-running models, suggest specific performance optimizations, and flag any models that aren’t referenced downstream and could potentially be removed.
```
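If you want a first pass yourself before asking the CLI, the same artifact is easy to inspect with a short script. This sketch assumes dbt's standard `run_results.json` layout (a `results` array whose entries carry a `unique_id` and `execution_time`); the sample data inline stands in for loading the real file:

```python
import json
from pathlib import Path

def slowest_models(run_results: dict, top_n: int = 5) -> list[tuple[str, float]]:
    """Return (unique_id, execution_time) for the slowest executed nodes."""
    timings = [
        (r["unique_id"], float(r.get("execution_time", 0.0)))
        for r in run_results.get("results", [])
    ]
    return sorted(timings, key=lambda t: t[1], reverse=True)[:top_n]

# In a real project you would load the artifact from the target/ directory:
#   run_results = json.loads(Path("target/run_results.json").read_text())
# A tiny inline sample keeps this sketch self-contained:
sample = {
    "results": [
        {"unique_id": "model.demo.stg_orders", "execution_time": 1.4},
        {"unique_id": "model.demo.weekly_truck_performance", "execution_time": 12.9},
        {"unique_id": "model.demo.stg_trucks", "execution_time": 0.7},
    ]
}

for unique_id, seconds in slowest_models(sample, top_n=2):
    print(f"{seconds:6.2f}s  {unique_id}")
```
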
## Debug Apache Airflow® orchestration

Airflow + Snowflake workflows often fail in cross-tool ways: a DAG task fails, a dbt model doesn’t populate, upstream data is missing, or a warehouse setting causes timeouts.

Instead of manually bouncing between the Airflow UI, task logs, DAG code, dbt artifacts, and Snowflake, you can ask Cortex Code CLI to triage the whole issue end-to-end. For example:

```
What's wrong with dbt_finance_customer_product_dag in dev Airflow? Help me debug why my dbt model product_unistore_compute_account_revenue isn't populated.
```

From there, you can follow up with prompts like:

- **Add guardrails**: Can you add a data quality check before loading?
- **Assess impact**: Which downstream reports depend on this table?
- **Fix fast**: The pipeline failed; can you diagnose and fix it?

## Conclusion and Resources
