Commit ac7e4ac

Merge pull request #3985 from DamianMaslanka5/sql-keyword-uppercase
Change lowercase SQL keywords to uppercase to improve consistency
2 parents 577a8e7 + 07fd681 commit ac7e4ac

44 files changed, +201 -201 lines changed


docs/best-practices/using_data_skipping_indices.md

Lines changed: 1 addition & 1 deletion

@@ -140,7 +140,7 @@ LIMIT 1
 A simple analysis shows that `ViewCount` is correlated with the `CreationDate` (a primary key) as one might expect - the longer a post exists, the more time it has to be viewed.
 
 ```sql
-SELECT toDate(CreationDate) as day, avg(ViewCount) as view_count FROM stackoverflow.posts WHERE day > '2009-01-01' GROUP BY day
+SELECT toDate(CreationDate) AS day, avg(ViewCount) AS view_count FROM stackoverflow.posts WHERE day > '2009-01-01' GROUP BY day
 ```
 
 This therefore makes a logical choice for a data skipping index. Given the numeric type, a min_max index makes sense. We add an index using the following `ALTER TABLE` commands - first adding it, then "materializing it".
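
Aside, not part of the diff: the two `ALTER TABLE` commands that paragraph refers to follow ClickHouse's standard add-then-materialize pattern. A minimal sketch — the index name and granularity are illustrative assumptions, not taken from the doc:

```sql
-- Sketch of the add-then-materialize pattern (index name/granularity assumed)
ALTER TABLE stackoverflow.posts ADD INDEX view_count_idx ViewCount TYPE minmax GRANULARITY 1;

-- MATERIALIZE builds the index for already-existing parts,
-- not just for data inserted after the ALTER
ALTER TABLE stackoverflow.posts MATERIALIZE INDEX view_count_idx;
```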

docs/cloud/get-started/query-endpoints.md

Lines changed: 11 additions & 11 deletions

@@ -29,16 +29,16 @@ If you have a saved query, you can skip this step.
 Open a new query tab. For demonstration purposes, we'll use the [youtube dataset](/getting-started/example-datasets/youtube-dislikes), which contains approximately 4.5 billion records. As an example query, we'll return the top 10 uploaders by average views per video in a user-inputted `year` parameter:
 
 ```sql
-with sum(view_count) as view_sum,
-    round(view_sum / num_uploads, 2) as per_upload
-select
+WITH sum(view_count) AS view_sum,
+    round(view_sum / num_uploads, 2) AS per_upload
+SELECT
     uploader,
-    count() as num_uploads,
-    formatReadableQuantity(view_sum) as total_views,
-    formatReadableQuantity(per_upload) as views_per_video
-from
+    count() AS num_uploads,
+    formatReadableQuantity(view_sum) AS total_views,
+    formatReadableQuantity(per_upload) AS views_per_video
+FROM
     youtube
-where
+WHERE
     toYear(upload_date) = {year: UInt16}
 group by uploader
 order by per_upload desc
@@ -149,7 +149,7 @@ To upgrade the endpoint version from `v1` to `v2`, include the `x-clickhouse-end
 **Query API Endpoint SQL:**
 
 ```sql
-SELECT database, name as num_tables FROM system.tables limit 3;
+SELECT database, name AS num_tables FROM system.tables LIMIT 3;
 ```
 
 #### Version 1 {#version-1}
@@ -365,7 +365,7 @@ OK
 **Query API Endpoint SQL:**
 
 ```sql
-SELECT * from system.tables;
+SELECT * FROM system.tables;
 ```
 
 **cURL:**
@@ -401,7 +401,7 @@ fetch(
 **Query API Endpoint SQL:**
 
 ```sql
-SELECT name, database from system.tables;
+SELECT name, database FROM system.tables;
 ```
 
 **Typescript:**
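
Aside, not part of the diff: the `{year: UInt16}` placeholder in the first hunk is ClickHouse query-parameter syntax. A minimal sketch of binding it in an interactive `clickhouse-client` session (the value is assumed):

```sql
-- Bind the parameter for the session, then reference it as {year: UInt16}
SET param_year = 2024;

SELECT uploader, count() AS num_uploads
FROM youtube
WHERE toYear(upload_date) = {year: UInt16}
GROUP BY uploader
ORDER BY num_uploads DESC
LIMIT 10;
```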

docs/cloud/get-started/sql-console.md

Lines changed: 9 additions & 9 deletions

@@ -251,17 +251,17 @@ Query result sets can be easily exported to CSV format directly from the SQL con
 Some data can be more easily interpreted in chart form. You can quickly create visualizations from query result data directly from the SQL console in just a few clicks. As an example, we'll use a query that calculates weekly statistics for NYC taxi trips:
 
 ```sql
-select
-    toStartOfWeek(pickup_datetime) as week,
-    sum(total_amount) as fare_total,
-    sum(trip_distance) as distance_total,
-    count(*) as trip_total
-from
+SELECT
+    toStartOfWeek(pickup_datetime) AS week,
+    sum(total_amount) AS fare_total,
+    sum(trip_distance) AS distance_total,
+    count(*) AS trip_total
+FROM
     nyc_taxi
-group by
+GROUP BY
     1
-order by
-    1 asc
+ORDER BY
+    1 ASC
 ```
 
 <Image img={tabular_query_results} size="md" alt='Tabular query results' />

docs/cloud/reference/warehouses.md

Lines changed: 2 additions & 2 deletions

@@ -135,8 +135,8 @@ Once compute-compute is enabled for a service (at least one secondary service wa
 6. **CREATE/RENAME/DROP DATABASE queries could be blocked by idled/stopped services by default.** These queries can hang. To bypass this, you can run database management queries with `settings distributed_ddl_task_timeout=0` at the session or per query level. For example:
 
 ```sql
-create database db_test_ddl_single_query_setting
-settings distributed_ddl_task_timeout=0
+CREATE DATABASE db_test_ddl_single_query_setting
+SETTINGS distributed_ddl_task_timeout=0
 ```
 
 6. **In very rare cases, secondary services that are idled or stopped for a long time (days) without waking/starting up can cause performance degradation to other services in the same warehouse.** This issue will be resolved soon and is connected to mutations running in the background. If you think you are experiencing this issue, please contact ClickHouse [Support](https://clickhouse.com/support/program).
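
Aside, not part of the diff: the doc also mentions applying the setting at the session level. A minimal sketch of that variant (assumed, not shown in the hunk):

```sql
-- Session-level alternative to the per-query SETTINGS clause above
SET distributed_ddl_task_timeout = 0;

CREATE DATABASE db_test_ddl_single_query_setting;
```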

docs/data-modeling/backfilling.md

Lines changed: 1 addition & 1 deletion

@@ -371,7 +371,7 @@ While we can add the target table, prior to adding the materialized view we modi
 ```sql
 CREATE MATERIALIZED VIEW pypi_downloads_per_day_mv TO pypi_downloads_per_day
 AS SELECT
-    toStartOfHour(timestamp) as hour,
+    toStartOfHour(timestamp) AS hour,
     project, count() AS count
 FROM pypi WHERE timestamp >= '2024-12-17 09:00:00'
 GROUP BY hour, project
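
Aside, not part of the diff: the `TO pypi_downloads_per_day` target table is defined outside this hunk. A hypothetical sketch of what such an aggregation target could look like — schema and engine are assumptions, not from the commit:

```sql
-- Hypothetical target table for the materialized view above (schema assumed)
CREATE TABLE pypi_downloads_per_day
(
    hour DateTime,
    project String,
    count Int64
)
ENGINE = SummingMergeTree
ORDER BY (project, hour);
```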

docs/getting-started/example-datasets/covid19.md

Lines changed: 1 addition & 1 deletion

@@ -152,7 +152,7 @@ WITH latest_deaths_data AS
     date,
     new_deceased,
     new_confirmed,
-    ROW_NUMBER() OVER (PARTITION BY location_key ORDER BY date DESC) as rn
+    ROW_NUMBER() OVER (PARTITION BY location_key ORDER BY date DESC) AS rn
 FROM covid19)
 SELECT location_key,
     date,
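
Aside, not part of the diff: the `rn` column computed in this hunk is the usual latest-row-per-key pattern, filtered to `rn = 1` downstream. A minimal sketch of the pattern (column list abbreviated):

```sql
-- ROW_NUMBER() ranks rows per location_key, newest date first;
-- keeping rn = 1 selects the most recent record for each location
SELECT location_key, date, new_deceased
FROM
(
    SELECT
        location_key,
        date,
        new_deceased,
        ROW_NUMBER() OVER (PARTITION BY location_key ORDER BY date DESC) AS rn
    FROM covid19
)
WHERE rn = 1;
```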

docs/getting-started/example-datasets/environmental-sensors.md

Lines changed: 1 addition & 1 deletion

@@ -168,7 +168,7 @@ WITH
 SELECT day, count() FROM sensors
 WHERE temperature >= 40 AND temperature <= 50 AND humidity >= 90
 GROUP BY day
-ORDER BY day asc;
+ORDER BY day ASC;
 ```
 
 Here's a visualization of the result:

docs/getting-started/example-datasets/github.md

Lines changed: 2 additions & 2 deletions

@@ -1245,8 +1245,8 @@ The `sign = -1` indicates a code deletion. We exclude punctuation and the insert
 
 ```sql
 SELECT
-    prev_author || '(a)' as add_author,
-    author || '(d)' as delete_author,
+    prev_author || '(a)' AS add_author,
+    author || '(d)' AS delete_author,
     count() AS c
 FROM git.line_changes
 WHERE (sign = -1) AND (file_extension IN ('h', 'cpp')) AND (line_type NOT IN ('Punct', 'Empty')) AND (author != prev_author) AND (prev_author != '')

docs/getting-started/example-datasets/ontime.md

Lines changed: 12 additions & 12 deletions

@@ -209,7 +209,7 @@ ORDER BY count(*) DESC;
 Q5. The percentage of delays by carrier for 2007
 
 ```sql
-SELECT Carrier, c, c2, c*100/c2 as c3
+SELECT Carrier, c, c2, c*100/c2 AS c3
 FROM
 (
     SELECT
@@ -245,7 +245,7 @@ ORDER BY c3 DESC
 Q6. The previous request for a broader range of years, 2000-2008
 
 ```sql
-SELECT Carrier, c, c2, c*100/c2 as c3
+SELECT Carrier, c, c2, c*100/c2 AS c3
 FROM
 (
     SELECT
@@ -284,19 +284,19 @@ Q7. Percentage of flights delayed for more than 10 minutes, by year
 SELECT Year, c1/c2
 FROM
 (
-    select
+    SELECT
         Year,
-        count(*)*100 as c1
-    from ontime
+        count(*)*100 AS c1
+    FROM ontime
     WHERE DepDelay>10
     GROUP BY Year
 ) q
 JOIN
 (
-    select
+    SELECT
         Year,
-        count(*) as c2
-    from ontime
+        count(*) AS c2
+    FROM ontime
     GROUP BY Year
 ) qq USING (Year)
 ORDER BY Year;
@@ -316,7 +316,7 @@ Q8. The most popular destinations by the number of directly connected cities for
 ```sql
 SELECT DestCityName, uniqExact(OriginCityName) AS u
 FROM ontime
-WHERE Year >= 2000 and Year <= 2010
+WHERE Year >= 2000 AND Year <= 2010
 GROUP BY DestCityName
 ORDER BY u DESC LIMIT 10;
 ```
@@ -341,9 +341,9 @@ WHERE
     DayOfWeek NOT IN (6,7) AND OriginState NOT IN ('AK', 'HI', 'PR', 'VI')
     AND DestState NOT IN ('AK', 'HI', 'PR', 'VI')
     AND FlightDate < '2010-01-01'
-GROUP by Carrier
-HAVING cnt>100000 and max(Year)>1990
-ORDER by rate DESC
+GROUP BY Carrier
+HAVING cnt>100000 AND max(Year)>1990
+ORDER BY rate DESC
 LIMIT 1000;
 ```
 
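Aside, not part of the diff: Q7 joins two aggregating subqueries to compute a percentage. A more compact sketch using ClickHouse's conditional aggregate `countIf` — an alternative formulation, not the doc's query:

```sql
-- countIf folds Q7's two scans of ontime into a single pass
SELECT
    Year,
    countIf(DepDelay > 10) * 100 / count(*) AS delayed_pct
FROM ontime
GROUP BY Year
ORDER BY Year;
```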

docs/getting-started/example-datasets/tpch.md

Lines changed: 12 additions & 12 deletions

@@ -698,7 +698,7 @@ FROM
     lineitem
 WHERE
     o_orderkey = l_orderkey
-    AND l_shipmode in ('MAIL', 'SHIP')
+    AND l_shipmode IN ('MAIL', 'SHIP')
     AND l_commitdate < l_receiptdate
     AND l_shipdate < l_commitdate
     AND l_receiptdate >= DATE '1994-01-01'
@@ -718,7 +718,7 @@ SELECT
 FROM (
     SELECT
         c_custkey,
-        count(o_orderkey) as c_count
+        count(o_orderkey) AS c_count
     FROM
         customer LEFT OUTER JOIN orders ON
         c_custkey = o_custkey
@@ -796,16 +796,16 @@ SELECT
     p_brand,
     p_type,
     p_size,
-    count(distinct ps_suppkey) AS supplier_cnt
+    count(DISTINCT ps_suppkey) AS supplier_cnt
 FROM
     partsupp,
     part
 WHERE
     p_partkey = ps_partkey
     AND p_brand <> 'Brand#45'
     AND p_type NOT LIKE 'MEDIUM POLISHED%'
-    AND p_size in (49, 14, 23, 45, 19, 3, 36, 9)
-    AND ps_suppkey NOT in (
+    AND p_size IN (49, 14, 23, 45, 19, 3, 36, 9)
+    AND ps_suppkey NOT IN (
         SELECT
             s_suppkey
         FROM
@@ -894,7 +894,7 @@ FROM
     orders,
     lineitem
 WHERE
-    o_orderkey in (
+    o_orderkey IN (
         SELECT
             l_orderkey
         FROM
@@ -929,30 +929,30 @@ WHERE
     (
         p_partkey = l_partkey
         AND p_brand = 'Brand#12'
-        AND p_container in ('SM CASE', 'SM BOX', 'SM PACK', 'SM PKG')
+        AND p_container IN ('SM CASE', 'SM BOX', 'SM PACK', 'SM PKG')
         AND l_quantity >= 1 AND l_quantity <= 1 + 10
         AND p_size BETWEEN 1 AND 5
-        AND l_shipmode in ('AIR', 'AIR REG')
+        AND l_shipmode IN ('AIR', 'AIR REG')
         AND l_shipinstruct = 'DELIVER IN PERSON'
     )
     OR
     (
         p_partkey = l_partkey
         AND p_brand = 'Brand#23'
-        AND p_container in ('MED BAG', 'MED BOX', 'MED PKG', 'MED PACK')
+        AND p_container IN ('MED BAG', 'MED BOX', 'MED PKG', 'MED PACK')
         AND l_quantity >= 10 AND l_quantity <= 10 + 10
         AND p_size BETWEEN 1 AND 10
-        AND l_shipmode in ('AIR', 'AIR REG')
+        AND l_shipmode IN ('AIR', 'AIR REG')
         AND l_shipinstruct = 'DELIVER IN PERSON'
     )
     OR
     (
         p_partkey = l_partkey
         AND p_brand = 'Brand#34'
-        AND p_container in ('LG CASE', 'LG BOX', 'LG PACK', 'LG PKG')
+        AND p_container IN ('LG CASE', 'LG BOX', 'LG PACK', 'LG PKG')
         AND l_quantity >= 20 AND l_quantity <= 20 + 10
         AND p_size BETWEEN 1 AND 15
-        AND l_shipmode in ('AIR', 'AIR REG')
+        AND l_shipmode IN ('AIR', 'AIR REG')
         AND l_shipinstruct = 'DELIVER IN PERSON'
     );
 ```
