Commit 73a7d9b

docs: use snake-case property names

1 parent 50cfd04 commit 73a7d9b

35 files changed: +374 −371 lines changed

docs/content/API-Reference/GraphQL-API.mdx

Lines changed: 7 additions & 7 deletions

````diff
@@ -207,13 +207,13 @@ query {
 
 ### <--{"id" : "Reference"}--> CubeQueryArgs
 
-| Key          | Schema                                | Description |
-| ------------ | ------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------ |
-| `where`      | [`RootWhereInput`](#root-where-input) | Represents a SQL `WHERE` clause |
-| `limit`      | `Int`                                 | A row limit for your query. The default value is `10000`. The maximum allowed limit is `50000` |
-| `offset`     | `Int`                                 | The number of initial rows to be skipped for your query. The default value is `0` |
-| `timezone`   | `String`                              | The timezone to use for the query. The default value is `UTC` |
-| `renewQuery` | `Boolean`                             | If `renewQuery` is set to `true`, Cube will renew all `refreshKey` for queries and query results in the foreground. The default value is `false` |
+| Key          | Schema                                | Description |
+| ------------ | ------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------- |
+| `where`      | [`RootWhereInput`](#root-where-input) | Represents a SQL `WHERE` clause |
+| `limit`      | `Int`                                 | A row limit for your query. The default value is `10000`. The maximum allowed limit is `50000` |
+| `offset`     | `Int`                                 | The number of initial rows to be skipped for your query. The default value is `0` |
+| `timezone`   | `String`                              | The timezone to use for the query. The default value is `UTC` |
+| `renewQuery` | `Boolean`                             | If `renewQuery` is set to `true`, Cube will renew all `refresh_key` for queries and query results in the foreground. The default value is `false` |
 
 ### <--{"id" : "Reference"}--> RootWhereInput
 
````
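The arguments in the updated table can be combined in a single GraphQL request. A minimal sketch, assuming a cube named `orders` exposing a `count` measure (the cube and measure names are illustrative, not part of this commit):

```javascript
// Hypothetical request body for Cube's GraphQL endpoint, combining the
// CubeQueryArgs above. Note the GraphQL argument names such as `renewQuery`
// stay camel-cased; this commit only renames `refreshKey` in the prose.
const body = JSON.stringify({
  query: `
    query {
      cube(limit: 100, offset: 0, timezone: "UTC", renewQuery: false) {
        orders {
          count
        }
      }
    }
  `,
});
```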

docs/content/API-Reference/Query-Format.mdx

Lines changed: 8 additions & 8 deletions

````diff
@@ -49,16 +49,16 @@ A Query has the following properties:
 [TZ Database Name](https://en.wikipedia.org/wiki/Tz_database) format, e.g.:
 `America/Los_Angeles`. The default value is `UTC`.
 - `renewQuery`: If `renewQuery` is set to `true`, Cube will renew all
-[`refreshKey`][ref-schema-ref-preaggs-refreshkey] for queries and query
+[`refresh_key`][ref-schema-ref-preaggs-refreshkey] for queries and query
 results in the foreground. However, if the
-[`refreshKey`][ref-schema-ref-preaggs-refreshkey] (or
-[`refreshKey.every`][ref-schema-ref-preaggs-refreshkey-every]) doesn't
+[`refresh_key`][ref-schema-ref-preaggs-refreshkey] (or
+[`refresh_key.every`][ref-schema-ref-preaggs-refreshkey-every]) doesn't
 indicate that there's a need for an update this setting has no effect. The
 default value is `false`.
 > **NOTE**: Cube provides only eventual consistency guarantee. Using a small
-> [`refreshKey.every`][ref-schema-ref-preaggs-refreshkey-every] value together
-> with `renewQuery` to achieve immediate consistency can lead to endless
-> refresh loops and overall system instability.
+> [`refresh_key.every`][ref-schema-ref-preaggs-refreshkey-every] value
+> together with `renewQuery` to achieve immediate consistency can lead to
+> endless refresh loops and overall system instability.
 - `ungrouped`: If `ungrouped` is set to `true` no `GROUP BY` statement will be
 added to the query. Instead, the raw results after filtering and joining will
 be returned without grouping. By default `ungrouped` queries require a primary
@@ -206,8 +206,8 @@ The opposite operator of `contains`. It supports multiple values.
 }
 ```
 
-This opertor adds `IS NULL` check to include `NULL` values unless you add `null`
-to `values`. For example:
+This operator adds `IS NULL` check to include `NULL` values unless you add
+`null` to `values`. For example:
 
 ```javascript
 {
````
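The `IS NULL` behaviour corrected in the second hunk can be made concrete. A sketch of a JSON query whose `notContains` filter also excludes `NULL` values by listing `null` in `values` (cube and member names are illustrative):

```javascript
// Without the explicit null entry, rows where Users.company IS NULL would
// also be returned, since `notContains` adds an IS NULL check by default.
const query = {
  measures: ["Users.count"],
  filters: [
    {
      member: "Users.company",
      operator: "notContains",
      values: ["demo", null],
    },
  ],
};
```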

docs/content/API-Reference/REST-API.mdx

Lines changed: 2 additions & 2 deletions

````diff
@@ -44,7 +44,7 @@ If the request takes too long to be processed, Cube Backend responds with
 mechanism in Cube is implemented. Clients should continuously retry the same
 query in a loop until they get a successful result. Subsequent calls to the Cube
 endpoints are idempotent and don't lead to scheduling new database queries if
-not required by `refreshKey`. Also, receiving `Continue wait` doesn't mean the
+not required by `refresh_key`. Also, receiving `Continue wait` doesn't mean the
 database query has been canceled, and it's actually still being processed by the
 Cube. Database queries that weren't started and are no longer waited by the
 client's long polling loop will be marked as orphaned and removed from the
@@ -292,7 +292,7 @@ Example response:
 aliasName: 'users.count',
 type: 'number',
 aggType: 'count',
-drillMembers: ['Users.id', 'Users.city', 'Users.createdAt'],
+drill_members: ['Users.id', 'Users.city', 'Users.createdAt'],
 },
 ],
 dimensions: [
````
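The long-polling contract described in the first hunk (keep retrying the same idempotent query while the backend answers `Continue wait`) can be sketched as follows; `executeQuery` stands in for a real call to the REST endpoint and is not part of the Cube API:

```javascript
// Retry the same query until the backend stops responding "Continue wait".
// Subsequent calls are idempotent, so looping does not schedule duplicate
// database queries.
async function loadWithRetry(executeQuery, maxAttempts = 100) {
  for (let attempt = 0; attempt < maxAttempts; attempt += 1) {
    const response = await executeQuery();
    if (response.error !== "Continue wait") {
      return response; // either a result set or a real error
    }
    // A production client would pause briefly here before retrying.
  }
  throw new Error("query did not complete within the retry budget");
}
```

A stub transport that succeeds on the third call exercises the loop without a network.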

docs/content/Caching/Getting-Started-Pre-Aggregations.mdx

Lines changed: 15 additions & 15 deletions

````diff
@@ -49,7 +49,7 @@ cube(`Orders`, {
 measures: {
 count: {
 type: `count`,
-drillMembers: [id, createdAt],
+drill_members: [id, createdAt],
 },
 },
 
@@ -62,7 +62,7 @@ cube(`Orders`, {
 id: {
 sql: `id`,
 type: `number`,
-primaryKey: true,
+primary_key: true,
 },
 
 completedAt: {
@@ -96,7 +96,7 @@ might look something like:
 ```javascript
 cube(`Orders`, {
 // Same content as before, but including the following:
-preAggregations: {
+pre_aggregations: {
 orderStatuses: {
 dimensions: [status],
 },
@@ -123,10 +123,10 @@ pre-aggregation definition to the `Orders` schema:
 ```javascript
 cube(`Orders`, {
 // Same content as before, but including the following:
-preAggregations: {
+pre_aggregations: {
 ordersByCompletedAt: {
 measures: [count],
-timeDimension: completedAt,
+time_dimension: completedAt,
 granularity: `month`,
 },
 },
@@ -167,7 +167,7 @@ receives via the API. The process for selection is summarized below:
 
 - The pre-aggregation contains all dimensions, filter dimensions and leaf
 measures from the query
-- The measures aren't multiplied ([via a `hasMany`
+- The measures aren't multiplied ([via a `has_many`
 relation][ref-schema-joins-hasmany])
 
 3. If no, then check if
@@ -197,22 +197,22 @@ cube(`LineItems`, {
 joins: {
 Orders: {
 sql: `${CUBE}.order_id = ${Orders.id}`,
-relationship: `belongsTo`,
+relationship: `belongs_to`,
 },
 },
 
 measures: {
 count: {
 type: `count`,
-drillMembers: [id, createdAt],
+drill_members: [id, createdAt],
 },
 },
 
 dimensions: {
 id: {
 sql: `id`,
 type: `number`,
-primaryKey: true,
+primary_key: true,
 },
 
 createdAt: {
@@ -409,7 +409,7 @@ cube(`LineItems`, {
 type: `countDistinctApprox`,
 },
 },
-preAggregations: {
+pre_aggregations: {
 myRollup: {
 ...,
 measures: [ CUBE.countDistinctProducts ],
@@ -425,7 +425,7 @@ To recap what we've learnt so far:
 
 - **Additive measures** are measures whose values can be added together
 
-- **Multiplied measures** are measures that define `hasMany` relations
+- **Multiplied measures** are measures that define `has_many` relations
 
 - **Leaf measures** are measures that do not reference any other measures in
 their definition
@@ -475,11 +475,11 @@ Some extra considerations for pre-aggregation selection:
 measures and dimensions of any cubes specified in the query are checked to
 find a matching `rollup`.
 
-- `rollup` pre-aggregations **always** have priority over `originalSql`. Thus,
-if you have both `originalSql` and `rollup` defined, Cube will try to match
-`rollup` pre-aggregations before trying to match `originalSql`. You can
+- `rollup` pre-aggregations **always** have priority over `original_sql`. Thus,
+if you have both `original_sql` and `rollup` defined, Cube will try to match
+`rollup` pre-aggregations before trying to match `original_sql`. You can
 instruct Cube to use the original SQL pre-aggregations by using
-[`useOriginalSqlPreAggregations`][ref-schema-preaggs-origsql].
+[`use_original_sql_pre_aggregations`][ref-schema-preaggs-origsql].
 
 [ref-caching-preaggs-cubestore]:
 /caching/using-pre-aggregations#pre-aggregations-storage
````
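The "multiplied measures" caveat this file renames (`hasMany` to `has_many`) can be illustrated with a toy join in plain JavaScript; this is a conceptual sketch, not Cube schema code:

```javascript
// Joining Orders to LineItems (a has_many relation) repeats each order row
// once per matching line item, so a naive COUNT over the joined rows is
// "multiplied" and no longer equals the number of orders.
const orders = [{ id: 1 }, { id: 2 }];
const lineItems = [{ orderId: 1 }, { orderId: 1 }, { orderId: 2 }];

// One joined row per (lineItem, order) pair.
const joined = lineItems.map((li) => ({
  ...li,
  order: orders.find((o) => o.id === li.orderId),
}));

const naiveOrderCount = joined.length; // 3: multiplied, wrong
const distinctOrderCount = new Set(joined.map((r) => r.order.id)).size; // 2
```

This is the reason pre-aggregation selection checks that the queried measures aren't multiplied before matching a `rollup`.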

docs/content/Caching/Lambda-Pre-Aggregations.mdx

Lines changed: 16 additions & 16 deletions

````diff
@@ -38,16 +38,16 @@ data source.
 
 First, you need to create pre-aggregations that will contain your batch data. In
 the following example, we call it **batch**. Please note, it must have a
-`timeDimension` and `partitionGranularity` specified. Cube will use these
+`time_dimension` and `partition_granularity` specified. Cube will use these
 properties to union batch data with freshly-retrieved source data.
 
-You may also control the batch part of your data with the `buildRangeStart` and
-`buildRangeEnd` properties of a pre-aggregation to determine a specific window
+You may also control the batch part of your data with the `build_range_start` and
+`build_range_end` properties of a pre-aggregation to determine a specific window
 for your batched data.
 
 Next, you need to create a lambda pre-aggregation. To do that, create
 pre-aggregation with type `rollupLambda`, specify rollups you would like to use
-with `rollups` property, and finally set `unionWithSourceData: true` to use
+with `rollups` property, and finally set `union_with_source_data: true` to use
 source data as a real-time layer.
 
 Please make sure that the lambda pre-aggregation definition comes first when
@@ -57,22 +57,22 @@ defining your pre-aggregations.
 cube('Users', {
 ...,
 
-preAggregations: {
+pre_aggregations: {
 lambda: {
 type: `rollupLambda`,
-unionWithSourceData: true,
+union_with_source_data: true,
 rollups: [Users.batch]
 },
 batch: {
 measures: [Users.count],
 dimensions: [Users.name],
-timeDimension: Users.createdAt,
+time_dimension: Users.createdAt,
 granularity: `day`,
-partitionGranularity: `day`,
-buildRangeStart: {
+partition_granularity: `day`,
+build_range_start: {
 sql: `SELECT '2020-01-01'`
 },
-buildRangeEnd: {
+build_range_end: {
 sql: `SELECT '2022-05-30'`
 }
 }
@@ -102,7 +102,7 @@ streaming.
 // This cube uses a streaming SQL data source such as ksqlDB
 cube('StreamingUsers', {
 ...,
-dataSource: 'ksql',
+data_source: 'ksql',
 
 preAggregations: {
 streaming: {
@@ -119,7 +119,7 @@ cube('StreamingUsers', {
 // This cube uses a data source such as Clickhouse or BigQuery
 cube('Users', {
 ...,
-preAggregations: {
+pre_aggregations: {
 batchStreamingLambda: {
 type: `rollupLambda`,
 rollups: [Users.batch, StreamingUsers.streaming]
@@ -128,13 +128,13 @@ cube('Users', {
 type: `rollup`,
 measures: [Users.count],
 dimensions: [Users.name],
-timeDimension: Users.createdAt,
+time_dimension: Users.createdAt,
 granularity: `day`,
-partitionGranularity: `day`,
-buildRangeStart: {
+partition_granularity: `day`,
+build_range_start: {
 sql: `SELECT '2020-01-01'`
 },
-buildRangeEnd: {
+build_range_end: {
 sql: `SELECT '2022-05-30'`
 }
 },
````
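Conceptually, the `rollupLambda` above serves batch rows for the pre-built window and fresher rows beyond it. A toy sketch of that union, assuming rows carry an ISO `createdAt` string (an illustration of the idea, not how Cube implements it):

```javascript
// Union batch data with fresher rows: the batch layer covers everything up
// to the build range end, and the real-time layer contributes rows after it.
function lambdaUnion(batchRows, freshRows, buildRangeEnd) {
  const batch = batchRows.filter((row) => row.createdAt <= buildRangeEnd);
  const fresh = freshRows.filter((row) => row.createdAt > buildRangeEnd);
  return [...batch, ...fresh];
}

// Batch ends at 2022-05-30, matching the build_range_end in the diff above.
const rows = lambdaUnion(
  [{ createdAt: "2020-02-01" }, { createdAt: "2022-05-01" }],
  [{ createdAt: "2022-06-15" }],
  "2022-05-30"
);
```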

docs/content/Caching/Overview.mdx

Lines changed: 14 additions & 14 deletions

````diff
@@ -68,10 +68,10 @@ cube(`Orders`, {
 },
 },
 
-preAggregations: {
+pre_aggregations: {
 amountByCreated: {
 measures: [totalAmount],
-timeDimension: createdAt,
+time_dimension: createdAt,
 granularity: `month`,
 },
 },
@@ -87,7 +87,7 @@ Upon receiving an incoming request, Cube first checks the cache using this key.
 If nothing is found in the cache, the query is executed in the database and the
 result set is returned as well as updating the cache.
 
-If an existing value is present in the cache and the `refreshKey` value for the
+If an existing value is present in the cache and the `refresh_key` value for the
 query hasn't changed, the cached value will be returned. Otherwise, an SQL query
 will be executed against either the pre-aggregations storage or the source
 database to populate the cache with the results and return them.
@@ -101,21 +101,21 @@ isn't any different, the cached result is valid and can be returned skipping an
 expensive query, but if there is a difference, the query needs to be re-run and
 its result cached.
 
-To aid with this, Cube defines a `refreshKey` for each cube. [Refresh
+To aid with this, Cube defines a `refresh_key` for each cube. [Refresh
 keys][ref-schema-ref-cube-refresh-key] are evaluated by Cube to assess if the
 data needs to be refreshed.
 
 ```javascript
 cube(`Orders`, {
-// This refreshKey tells Cube to refresh data every 5 minutes
-refreshKey: {
+// This refresh_key tells Cube to refresh data every 5 minutes
+refresh_key: {
 every: `5 minute`,
 },
 
-// With this refreshKey Cube will only refresh the data if
+// With this refresh_key Cube will only refresh the data if
 // the value of previous MAX(created_at) changed
-// By default Cube will check this refreshKey every 10 seconds
-refreshKey: {
+// By default Cube will check this refresh_key every 10 seconds
+refresh_key: {
 sql: `SELECT MAX(created_at) FROM orders`,
 },
 });
@@ -136,13 +136,13 @@ recommend always enabling background refresh.
 
 ### <--{"id" : "In-memory Cache"}--> Default Refresh Keys
 
-The default values for `refreshKey` are
+The default values for `refresh_key` are
 
 - `every: '2 minute'` for BigQuery, Athena, Snowflake, and Presto.
 - `every: '10 second'` for all other databases.
 
 You can use a custom SQL query to check if a refresh is required by changing
-the [`refreshKey`][ref-schema-ref-cube-refresh-key] property in a cube's Data
+the [`refresh_key`][ref-schema-ref-cube-refresh-key] property in a cube's Data
 Schema. Often, a `MAX(updated_at_timestamp)` for OLTP data is a viable option,
 or examining a metadata table for whatever system is managing the data to see
 when it last ran.
@@ -161,9 +161,9 @@ wasn't initiated earlier by another client. Only Refresh Key freshness
 guarantees are provided in this case.
 
 For situations like real-time analytics or responding to live user changes to
-underlying data, the `refreshKey` query cache can prevent fresh data from
+underlying data, the `refresh_key` query cache can prevent fresh data from
 showing up immediately. For these situations, the cache can effectively be
-disabled by setting the [`refreshKey.every`][ref-schema-ref-cube-refresh-key]
+disabled by setting the [`refresh_key.every`][ref-schema-ref-cube-refresh-key]
 parameter to something very low, like `1 second`.
 
 ## Inspecting Queries
@@ -174,7 +174,7 @@ Cloud][link-cube-cloud].
 
 [Developer Playground][ref-dev-playground] can be used to inspect a single
 query. To do that, click the "cache" button after executing the query. It will
-show you the information about the `refreshKey` for the query and whether the
+show you the information about the `refresh_key` for the query and whether the
 query uses any pre-aggregations. To inspect multiple queries or list existing
 pre-aggregations, you can use [Cube Cloud][link-cube-cloud].
 
````
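The refresh-key check this file describes (return the cached value only while the `refresh_key` value is unchanged) can be sketched in a few lines of plain JavaScript; the function and cache shape are hypothetical, not Cube internals:

```javascript
// Serve from cache while the refresh key value is unchanged; otherwise
// re-run the query and repopulate the cache entry.
function cachedQuery(cache, queryKey, refreshKeyValue, runQuery) {
  const entry = cache.get(queryKey);
  if (entry && entry.refreshKeyValue === refreshKeyValue) {
    return entry.result; // cache hit: refresh key unchanged
  }
  const result = runQuery(); // cache miss or stale: query the database
  cache.set(queryKey, { refreshKeyValue, result });
  return result;
}
```

With a refresh key such as `SELECT MAX(created_at) FROM orders`, repeated calls are cache hits until the maximum changes, at which point the query runs again.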
