
Commit 44cff85

finished adding showLineNumbers to snow docs
1 parent b42702e commit 44cff85

26 files changed (+76 −76)
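
Every change in this commit follows the same one-line pattern: the opening code fence of each snippet gains a `showLineNumbers` meta flag. For context, this is what an updated fence looks like in the docs source. The `SELECT` statement below is illustrative, and it is an assumption (not stated in this commit) that the docs site uses a Shiki-based renderer such as Expressive Code or rehype-pretty-code, which reads this flag and displays a line-number gutter:

```sql showLineNumbers
-- Illustrative snippet: the showLineNumbers flag above asks the
-- docs renderer to display a line-number gutter for this block.
SELECT CURRENT_VERSION();
```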

src/content/docs/snowflake/features/cross-database-resource-sharing.md

Lines changed: 6 additions & 6 deletions
@@ -19,7 +19,7 @@ In this guide, we'll walk through a series of Snowflake SQL statements to create

 Create three databases to represent the three different organizations that will share resources. In this example, we'll create databases for `db_name1`, `db_name2`, and `db_name3`.

-```sql
+```sql showLineNumbers
 CREATE DATABASE db_name1_actual;
 CREATE DATABASE db_name2_actual;
 CREATE DATABASE db_name3_actual;
@@ -29,7 +29,7 @@ CREATE DATABASE db_name3_actual;

 Create a schema in each database to represent the shared resources. In this example, you can create a schema called `sch` in each database.

-```sql
+```sql showLineNumbers
 CREATE SCHEMA db_name1_actual.sch;
 CREATE SCHEMA db_name2_actual.sch;
 CREATE SCHEMA db_name3_actual.sch;
@@ -39,7 +39,7 @@ CREATE SCHEMA db_name3_actual.sch;

 Create a table in each schema to represent the shared resources. In this example, you can create a table called `table1` in `db_name1_actual.sch`, `table2` in `db_name2_actual.sch`, and `table3` in `db_name3_actual.sch`.

-```sql
+```sql showLineNumbers
 CREATE TABLE db_name1_actual.sch.table1 (id INT);
 CREATE TABLE db_name2_actual.sch.table2 (id INT);
 CREATE TABLE db_name3_actual.sch.table3 (id INT);
@@ -49,7 +49,7 @@ CREATE TABLE db_name3_actual.sch.table3 (id INT);

 You can now insert data into the tables to represent the shared resources. In this example, we'll insert a single row into each table.

-```sql
+```sql showLineNumbers
 INSERT INTO db_name1_actual.sch.table1 (id) VALUES (1);
 INSERT INTO db_name2_actual.sch.table2 (id) VALUES (2);
 INSERT INTO db_name3_actual.sch.table3 (id) VALUES (3);
@@ -67,7 +67,7 @@ CREATE VIEW db_name1_actual.sch.view1 AS SELECT * FROM db_name1_actual.sch.table

 You can create a secure view `view3` in `db_name3_actual.sch` by joining data from different tables.

-```sql
+```sql showLineNumbers
 CREATE SECURE VIEW db_name3_actual.sch.view3 AS
 SELECT view1.id AS View1Id, table2.id AS table2id, table3.id AS table3id
 FROM db_name1_actual.sch.view1 view1, db_name2_actual.sch.table2 table2, db_name3_actual.sch.table3 table3;
@@ -77,7 +77,7 @@ FROM db_name1_actual.sch.view1 view1, db_name2_actual.sch.table2 table2, db_name3_actual.sch.table3 table3;

 You can create a share `s_actual` and grant usage permissions on the `db_name3_actual` database and its schema.

-```sql
+```sql showLineNumbers
 CREATE SHARE s_actual;
 GRANT USAGE ON DATABASE db_name3_actual TO SHARE s_actual;
 GRANT USAGE ON SCHEMA db_name3_actual.sch TO SHARE s_actual;

src/content/docs/snowflake/features/dynamic-tables.md

Lines changed: 1 addition & 1 deletion
@@ -37,7 +37,7 @@ The output should be:

 You can create a dynamic table using the `CREATE DYNAMIC TABLE` statement. Run the following query to create a dynamic table:

-```sql
+```sql showLineNumbers
 CREATE OR REPLACE DYNAMIC TABLE t_12345
 TARGET_LAG = '1 minute' WAREHOUSE = 'test' REFRESH_MODE = auto INITIALIZE = on_create
 AS SELECT id, name FROM example_table_name;

src/content/docs/snowflake/features/iceberg-tables.md

Lines changed: 1 addition & 1 deletion
@@ -27,7 +27,7 @@ $ awslocal s3 mb s3://test-bucket

 You can create an external volume using the `CREATE OR REPLACE EXTERNAL VOLUME` statement. The external volume is used to define the location of the files that Iceberg will use to store the table data.

-```sql
+```sql showLineNumbers
 CREATE OR REPLACE EXTERNAL VOLUME test_volume
 STORAGE_LOCATIONS = (
   (

src/content/docs/snowflake/features/materialized-views.md

Lines changed: 1 addition & 1 deletion
@@ -19,7 +19,7 @@ The following sections guide you through creating materialized views, inserting

 To create a materialized view, use the `CREATE MATERIALIZED VIEW` statement. The following example creates a view `order_view` that selects specific columns from the `orders` table.

-```sql
+```sql showLineNumbers
 CREATE TABLE IF NOT EXISTS orders (
   id INT,
   product TEXT,

src/content/docs/snowflake/features/polaris-catalog.md

Lines changed: 7 additions & 7 deletions
@@ -25,7 +25,7 @@ This guide shows how to use the Polaris REST catalog to create Iceberg tables in

 The following command starts the Polaris catalog container using the `localstack/polaris` Docker image:

-```bash
+```bash showLineNumbers
 docker run -d --name polaris-test \
   -p 8181:8181 -p 8182:8182 \
   -e AWS_REGION=us-east-1 \
@@ -48,7 +48,7 @@ curl -X GET http://localhost:8182/health

 Set variables and retrieve an access token:

-```bash
+```bash showLineNumbers
 REALM="default-realm"
 CLIENT_ID="root"
 CLIENT_SECRET="s3cr3t"
@@ -64,7 +64,7 @@ The `TOKEN` variable will contain the access token.

 Create a catalog:

-```bash
+```bash showLineNumbers
 curl -s -X POST http://localhost:8181/api/management/v1/catalogs \
   -H "Authorization: Bearer $TOKEN" \
   -H "Content-Type: application/json" \
@@ -89,7 +89,7 @@ curl -s -X POST http://localhost:8181/api/management/v1/catalogs \

 Grant necessary permissions to the catalog:

-```bash
+```bash showLineNumbers
 curl -s -X PUT http://localhost:8181/api/management/v1/catalogs/polaris/catalog-roles/catalog_admin/grants \
   -H "Authorization: Bearer $TOKEN" \
   -H "Content-Type: application/json" \
@@ -108,7 +108,7 @@ awslocal s3 mb s3://$BUCKET_NAME

 In your SQL client, create an external volume using the `CREATE EXTERNAL VOLUME` statement:

-```sql
+```sql showLineNumbers
 CREATE EXTERNAL VOLUME polaris_volume
 STORAGE_LOCATIONS = (
   (
@@ -126,7 +126,7 @@ ALLOW_WRITES = TRUE;

 Create a catalog integration using the `CREATE CATALOG INTEGRATION` statement:

-```sql
+```sql showLineNumbers
 CREATE CATALOG INTEGRATION polaris_catalog
 CATALOG_SOURCE = ICEBERG_REST
 TABLE_FORMAT = ICEBERG
@@ -150,7 +150,7 @@ COMMENT = 'Polaris catalog integration';

 Now create the table using the Polaris catalog and volume:

-```sql
+```sql showLineNumbers
 CREATE ICEBERG TABLE polaris_iceberg_table (c1 TEXT)
 CATALOG = 'polaris_catalog',
 EXTERNAL_VOLUME = 'polaris_volume',

src/content/docs/snowflake/features/row-access-policies.md

Lines changed: 2 additions & 2 deletions
@@ -19,7 +19,7 @@ The following sections demonstrate how to create a row access policy, attach it

 Use the `CREATE ROW ACCESS POLICY` statement to define a filter condition. This policy will restrict row visibility based on column values.

-```sql
+```sql showLineNumbers
 CREATE OR REPLACE ROW ACCESS POLICY id_filter_policy
 AS (id INT) RETURNS BOOLEAN ->
 id IN (1, 2);
@@ -29,7 +29,7 @@ AS (id INT) RETURNS BOOLEAN ->

 Create a table and bind the row access policy to one of its columns using the `WITH ROW ACCESS POLICY` clause.

-```sql
+```sql showLineNumbers
 CREATE TABLE accounts (
   id INT
 )

src/content/docs/snowflake/features/snowpipe.md

Lines changed: 3 additions & 3 deletions
@@ -27,7 +27,7 @@ awslocal s3 mb s3://test-bucket

 You can create a stage using the `CREATE STAGE` command. The stage is used to define the location of the files that Snowpipe will load into the table.

-```sql
+```sql showLineNumbers
 CREATE STAGE test_stage
 URL='s3://test-bucket'
 CREDENTIALS = (
@@ -68,15 +68,15 @@ Retrieve the `notification_channel` value from the output of the `DESC PIPE` query

 You can use the [`PutBucketNotificationConfiguration`](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketNotificationConfiguration.html) API to create a bucket notification configuration that sends notifications to Snowflake when new files are uploaded to the S3 bucket.

-```bash
+```bash showLineNumbers
 awslocal s3api put-bucket-notification-configuration \
   --bucket test-bucket \
   --notification-configuration file://notification.json
 ```

 The `notification.json` file should contain the following configuration:

-```json
+```json showLineNumbers
 {
   "QueueConfigurations": [
     {

src/content/docs/snowflake/features/stages.mdx

Lines changed: 3 additions & 3 deletions
@@ -30,7 +30,7 @@ CREATE OR REPLACE DATABASE snowflake_tutorials;

 Similarly, you can create a table using the `CREATE TABLE` command. In this example, you can create a table called `employees` in `snowflake_tutorials.public`:

-```sql
+```sql showLineNumbers
 CREATE OR REPLACE TABLE employees (
   first_name STRING,
   last_name STRING,
@@ -96,7 +96,7 @@ awslocal s3 cp employees0*.csv s3://testbucket

 In this example, you can create a stage called `my_s3_stage` to load data from an S3 bucket:

-```sql
+```sql showLineNumbers
 CREATE STAGE my_s3_stage
 STORAGE_INTEGRATION = s3_int
 URL = 's3://testbucket/'
@@ -105,7 +105,7 @@ FILE_FORMAT = csv;

 You can further copy data from the S3 stage to the table using the `COPY INTO` command:

-```sql
+```sql showLineNumbers
 COPY INTO mytable
 FROM @my_s3_stage
 PATTERN='.*employees.*.csv';

src/content/docs/snowflake/features/storage-integrations.md

Lines changed: 2 additions & 2 deletions
@@ -33,7 +33,7 @@ awslocal s3 cp file.csv s3://testbucket

 You can now create a Storage Integration named `s_example` which will connect Snowflake to your S3 bucket using the following statement:

-```sql
+```sql showLineNumbers
 CREATE STORAGE INTEGRATION s_example
 TYPE = EXTERNAL_STAGE
 ENABLED = TRUE
@@ -82,7 +82,7 @@ The expected output is:

 You can now create an external stage using the following statement:

-```sql
+```sql showLineNumbers
 CREATE STAGE stage_example
 STORAGE_INTEGRATION = s_example
 URL = 's3://testbucket'

src/content/docs/snowflake/features/streamlit.md

Lines changed: 1 addition & 1 deletion
@@ -74,7 +74,7 @@ To connect to the Snowflake emulator while developing locally, Streamlit provides

 To run the sample against the Snowflake emulator, your local `~/.streamlit/secrets.toml` should look like this:

-```toml
+```toml showLineNumbers
 [snowpark]
 user = "test"
 password = "test"
