
Commit d9db9cb

docs: update links in documentation
1 parent 6adec2a commit d9db9cb

7 files changed: +158 −123 lines changed


docs/content/Caching/Running-in-Production.mdx

Lines changed: 43 additions & 33 deletions
@@ -5,25 +5,24 @@ category: Caching
 menuOrder: 4
 ---

-Cube.js makes use of two different kinds of cache:
+Cube makes use of two different kinds of cache:

 - Redis, for in-memory storage of query results
 - Cube Store for storing pre-aggregations

-In development, Cube.js uses in-memory storage on the server. In production, we
+In development, Cube uses in-memory storage on the server. In production, we
 **strongly** recommend running Redis as a separate service.

 <WarningBox>

-Cube Store [will replace
-Redis][replace-redis] for in-memory cache and queue management in late
-2022.
+Cube Store [will replace Redis][replace-redis] for in-memory cache and queue
+management in late 2022.

 </WarningBox>

-Cube Store is enabled by default when running Cube.js in development mode. In
+Cube Store is enabled by default when running Cube in development mode. In
 production, Cube Store **must** run as a separate process. The easiest way to do
-this is to use the official Docker images for Cube.js and Cube Store.
+this is to use the official Docker images for Cube and Cube Store.

 <InfoBox>

@@ -41,13 +40,13 @@ docker run -p 3030:3030 cubejs/cubestore
 <InfoBox>

 Cube Store can further be configured via environment variables. To see a
-complete reference, please consult the [Cube Store section of the Environment
-Variables reference][ref-config-env].
+complete reference, please consult the `CUBESTORE_*` environment variables in
+the [Environment Variables reference][ref-config-env].

 </InfoBox>

-Next, run Cube.js and tell it to connect to Cube Store running on `localhost`
-(on the default port `3030`):
+Next, run Cube and tell it to connect to Cube Store running on `localhost` (on
+the default port `3030`):

 ```bash
 docker run -p 4000:4000 \
@@ -56,8 +55,8 @@ docker run -p 4000:4000 \
   cubejs/cube
 ```

-In the command above, we're specifying `CUBEJS_CUBESTORE_HOST` to let Cube.js
-know where Cube Store is running.
+In the command above, we're specifying `CUBEJS_CUBESTORE_HOST` to let Cube know
+where Cube Store is running.

 You can also use Docker Compose to achieve the same:

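The Compose file referenced here is not shown in this hunk. For orientation, a minimal sketch of such a stack could look like the YAML below; the service names, image tags, and volume path are illustrative assumptions rather than part of this commit.

```yaml
version: '2.2'

services:
  cube:
    image: cubejs/cube:latest
    ports:
      - 4000:4000
    environment:
      # Point the Cube API instance at the Cube Store service defined below
      - CUBEJS_CUBESTORE_HOST=cubestore
    volumes:
      - .:/cube/conf
    depends_on:
      - cubestore

  cubestore:
    image: cubejs/cubestore:latest
    ports:
      - 3030:3030
```

Because both services share the default Compose network, `CUBEJS_CUBESTORE_HOST` can simply name the `cubestore` service instead of `localhost`.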
@@ -87,7 +86,8 @@ services:

 ## Architecture

-Deep dive on Cube Store architecture can be found in [this presentation](https://docs.google.com/presentation/d/1oQ-koloag0UcL-bUHOpBXK4txpqiGl41rxhgDVrw7gw/).
+Deep dive on Cube Store architecture can be found in
+[this presentation](https://docs.google.com/presentation/d/1oQ-koloag0UcL-bUHOpBXK4txpqiGl41rxhgDVrw7gw/).

 ## Scaling

@@ -123,8 +123,10 @@ reference][ref-config-env].
 | `CUBESTORE_WORKER_PORT` | - | Yes |
 | `CUBESTORE_META_ADDR` | - | Yes |

-`CUBESTORE_WORKERS` and `CUBESTORE_META_ADDR` variables should be set with stable addresses, which should not change.
-You can use stable DNS names and put load balancers in front of your worker and router instances to fulfill stable name requirements in environments where stable IP addresses can't be guaranteed.
+`CUBESTORE_WORKERS` and `CUBESTORE_META_ADDR` variables should be set with
+stable addresses, which should not change. You can use stable DNS names and put
+load balancers in front of your worker and router instances to fulfill stable
+name requirements in environments where stable IP addresses can't be guaranteed.

 <InfoBox>

@@ -133,7 +135,8 @@ recommend using [partitioned pre-aggregations][ref-caching-partitioning].

 </InfoBox>

-A sample Docker Compose stack for the single machine setting this up might look like:
+A sample Docker Compose stack for the single machine setting this up might look
+like:

 ```yaml
 version: '2.2'
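The sample stack itself is truncated in this diff. As a rough sketch of the router-plus-workers layout the surrounding text describes, the fragment below shows how the variables from the table above fit together; the service names, ports, and volume paths are assumptions, and `CUBESTORE_SERVER_NAME`, `CUBESTORE_META_PORT`, and `CUBESTORE_REMOTE_DIR` are taken from the Cube Store environment-variables reference rather than from this commit.

```yaml
version: '2.2'

services:
  cubestore_router:
    image: cubejs/cubestore:latest
    environment:
      # Stable DNS name that workers use to reach the router
      - CUBESTORE_SERVER_NAME=cubestore_router:9999
      - CUBESTORE_META_PORT=9999
      - CUBESTORE_WORKERS=cubestore_worker_1:9001,cubestore_worker_2:9001
      - CUBESTORE_REMOTE_DIR=/cube/data
    volumes:
      - .cubestore:/cube/data

  cubestore_worker_1:
    image: cubejs/cubestore:latest
    environment:
      - CUBESTORE_SERVER_NAME=cubestore_worker_1:9001
      - CUBESTORE_WORKER_PORT=9001
      # Stable address of the router's metastore
      - CUBESTORE_META_ADDR=cubestore_router:9999
      - CUBESTORE_WORKERS=cubestore_worker_1:9001,cubestore_worker_2:9001
      - CUBESTORE_REMOTE_DIR=/cube/data
    volumes:
      - .cubestore:/cube/data
    depends_on:
      - cubestore_router

  # cubestore_worker_2 would repeat the worker block above with
  # CUBESTORE_SERVER_NAME=cubestore_worker_2:9001
```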
@@ -188,11 +191,13 @@ services:

 ## Replication and High Availability

-The open-source version of Cube Store doesn't support replicating any of its nodes.
-The router node and every worker node should always have only one instance copy if served behind the load balancer or service address.
-Replication will lead to undefined behavior of the cluster, including connection errors and data loss.
-If any cluster node is down, it'll lead to a complete cluster outage.
-If Cube Store replication and high availability are required, please consider using Cube Cloud.
+The open-source version of Cube Store doesn't support replicating any of its
+nodes. The router node and every worker node should always have only one
+instance copy if served behind the load balancer or service address. Replication
+will lead to undefined behavior of the cluster, including connection errors and
+data loss. If any cluster node is down, it'll lead to a complete cluster outage.
+If Cube Store replication and high availability are required, please consider
+using Cube Cloud.

 ## Storage

@@ -204,14 +209,17 @@ Cube Store can only use one type of remote storage at runtime.

 Cube Store makes use of a separate storage layer for storing metadata as well as
 for persisting pre-aggregations as Parquet files. Cube Store [can be configured
-to use either AWS S3 or Google Cloud Storage][ref-config-env-cloud-storage].
-If desired, local path on the server can also be used in case all Cube Store cluster nodes are co-located on a single machine.
+to use either AWS S3 or Google Cloud Storage][ref-config-env]. If desired, local
+path on the server can also be used in case all Cube Store cluster nodes are
+co-located on a single machine.

 <WarningBox>

-Cube Store requires strong consistency guarantees from underlying distributed storage.
-AWS S3, Google Cloud Storage, and Azure Blob Storage (Cube Cloud only) are the only known implementations that provide strong consistency.
-Using other implementations in production is discouraged and can lead to consistency and data corruption errors.
+Cube Store requires strong consistency guarantees from underlying distributed
+storage. AWS S3, Google Cloud Storage, and Azure Blob Storage (Cube Cloud only)
+are the only known implementations that provide strong consistency. Using other
+implementations in production is discouraged and can lead to consistency and
+data corruption errors.

 </WarningBox>

@@ -247,9 +255,12 @@ services:

 ### Local Storage

-Separately from remote storage, Cube Store requires local scratch space to warm up partitions by downloading Parquet files before querying them.
-By default, this directory should be mounted to `.cubestore/data` dir inside contained and can be configured by [CUBESTORE_DATA_DIR][ref-config-env] environment variable.
-It is advised to use local SSDs for this scratch space to maximize querying performance.
+Separately from remote storage, Cube Store requires local scratch space to warm
+up partitions by downloading Parquet files before querying them. By default,
+this directory should be mounted to the `.cubestore/data` dir inside the
+container and can be configured with the [CUBESTORE_DATA_DIR][ref-config-env]
+environment variable. It is advised to use local SSDs for this scratch space to
+maximize querying performance.

 ### <--{"id" : "Storage"}--> AWS

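The AWS-specific settings are elided between these hunks. As a hedged sketch of how remote S3 storage and the local scratch directory are commonly wired together on a single node, the fragment below uses placeholder bucket, region, and credential values; the `CUBESTORE_S3_*` variable names come from the environment-variables reference rather than from this commit.

```yaml
version: '2.2'

services:
  cubestore:
    image: cubejs/cubestore:latest
    environment:
      # Remote storage: S3 bucket holding metadata and Parquet partitions
      - CUBESTORE_S3_BUCKET=my-cubestore-bucket
      - CUBESTORE_S3_REGION=us-east-1
      - CUBESTORE_AWS_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX
      - CUBESTORE_AWS_SECRET_ACCESS_KEY=<secret>
      # Local scratch space used to warm up partitions; back it with an SSD
      - CUBESTORE_DATA_DIR=/cube/.cubestore/data
    volumes:
      - ./.cubestore:/cube/.cubestore
```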
@@ -271,10 +282,9 @@ default.

 Cube Store currently does not have any in-built authentication mechanisms. For
 this reason, we recommend running your Cube Store cluster on a network that only
-allows requests from the Cube.js deployment.
+allows requests from the Cube deployment.

 [link-wsl2]: https://docs.microsoft.com/en-us/windows/wsl/install-win10
 [ref-caching-partitioning]: /caching/using-pre-aggregations#partitioning
-[ref-config-env]: /reference/environment-variables#cube-store
-[ref-config-env-cloud-storage]: /reference/environment-variables#cloud-storage
+[ref-config-env]: /reference/environment-variables
 [replace-redis]: https://cube.dev/blog/replacing-redis-with-cube-store

docs/content/Caching/Using-Pre-Aggregations.mdx

Lines changed: 30 additions & 21 deletions
@@ -5,8 +5,8 @@ category: Caching
 menuOrder: 3
 ---

-Pre-aggregations is a powerful way to speed up your Cube.js queries. There are
-many configuration options to consider. Please make sure to also check [the
+Pre-aggregations is a powerful way to speed up your Cube queries. There are many
+configuration options to consider. Please make sure to also check [the
 Pre-Aggregations reference in the data schema section][ref-schema-ref-preaggs].

 ## Refresh Strategy
@@ -81,15 +81,15 @@ cube(`Orders`, {
 });
 ```

-When `every` and `sql` are used together, Cube.js will run the query from the
-`sql` property on an interval defined by the `every` property. If the query
-returns new results, then the pre-aggregation will be refreshed.
+When `every` and `sql` are used together, Cube will run the query from the `sql`
+property on an interval defined by the `every` property. If the query returns
+new results, then the pre-aggregation will be refreshed.

 ## Rollup Only Mode

-To make Cube.js _only_ serve requests from pre-aggregations, the
-[`CUBEJS_ROLLUP_ONLY` environment variable][ref-config-env-general] can be set
-to `true` on an API instance. This will prevent serving data on API requests
+To make Cube _only_ serve requests from pre-aggregations, the
+[`CUBEJS_ROLLUP_ONLY`][ref-config-env-rolluponly] environment variable can be
+set to `true` on an API instance. This will prevent serving data on API requests
 from the source database.

 <WarningBox>
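As an aside on the Rollup Only Mode described above (not part of this commit), a minimal, assumed sketch of setting the flag on an API instance in Docker Compose could look like:

```yaml
services:
  cube_api:
    image: cubejs/cube:latest
    ports:
      - 4000:4000
    environment:
      # Serve queries from pre-aggregations only, never from the source database
      - CUBEJS_ROLLUP_ONLY=true
      # Pre-aggregations are read from Cube Store
      - CUBEJS_CUBESTORE_HOST=cubestore
```

With the flag set, requests that cannot be served from an existing pre-aggregation are not forwarded to the upstream database, matching the behavior the paragraph above describes.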
@@ -435,8 +435,8 @@ or [Export Bucket][self-export-bucket] strategies instead.

 ### <--{"id" : "Pre-Aggregation Build Strategies"}--> Batching

-Batching is a more performant strategy where Cube.js sends compressed CSVs for
-Cube Store to ingest.
+Batching is a more performant strategy where Cube sends compressed CSVs for Cube
+Store to ingest.

 <div style="text-align: center">
   <img
@@ -447,9 +447,8 @@ Cube Store to ingest.
   />
 </div>

-The performance scales to the amount of memory available on the Cube.js
-instance. Batching is automatically enabled for any databases that can support
-it.
+The performance scales to the amount of memory available on the Cube instance.
+Batching is automatically enabled for any databases that can support it.

 ### <--{"id" : "Pre-Aggregation Build Strategies"}--> Export bucket

@@ -483,30 +482,40 @@ refer to the database-specific documentation for more details:
 - [Snowflake][ref-connect-db-snowflake]

 When using cloud storage, it is important to correctly configure any data
-retention policies to clean up the data in the export bucket as Cube.js does not
+retention policies to clean up the data in the export bucket as Cube does not
 currently manage this. For most use-cases, 1 day is sufficient.

 ## Streaming pre-aggregations

-Streaming pre-aggregations are different from traditional pre-aggregations in the way they are being updated. Traditional pre-aggregations follow the “pull” model — Cube **pulls updates** from the data source based on some cadence and/or condition. Streaming pre-aggregations follow the “push” model — Cube **subscribes to the updates** from the data source and always keeps pre-aggregation up to date.
+Streaming pre-aggregations are different from traditional pre-aggregations in
+the way they are being updated. Traditional pre-aggregations follow the “pull”
+model — Cube **pulls updates** from the data source based on some cadence and/or
+condition. Streaming pre-aggregations follow the “push” model — Cube
+**subscribes to the updates** from the data source and always keeps
+pre-aggregation up to date.

-You don’t need to define `refreshKey` for streaming pre-aggregations. Whether pre-aggregation is streaming or not is defined by the data source.
+You don’t need to define `refreshKey` for streaming pre-aggregations. Whether
+pre-aggregation is streaming or not is defined by the data source.

-Currently, Cube supports only one streaming data source - [ksqlDB](/config/databases/ksqldb). All pre-aggregations where data source is ksqlDB are streaming.
+Currently, Cube supports only one streaming data source -
+[ksqlDB](/config/databases/ksqldb). All pre-aggregations where data source is
+ksqlDB are streaming.

-We’re working on supporting streaming pre-aggregations for the following data sources -
+We’re working on supporting streaming pre-aggregations for the following data
+sources -

 - Materialize
 - Flink SQL
 - Spark Streaming

-Please [let us know](https://cube.dev/contact) if you are interested in early access to any of these drivers or would like Cube to support any other SQL streaming engine.
+Please [let us know](https://cube.dev/contact) if you are interested in early
+access to any of these drivers or would like Cube to support any other SQL
+streaming engine.

 [ref-caching-in-mem-default-refresh-key]: /caching#default-refresh-keys
 [ref-config-db]: /config/databases
 [ref-config-driverfactory]: /config#driver-factory
-[ref-config-env]: /reference/environment-variables#cube-store
-[ref-config-env-general]: /config#general
+[ref-config-env-rolluponly]: /reference/environment-variables#cubejs-rollup-only
 [ref-config-extdriverfactory]: /config#external-driver-factory
 [ref-connect-db-athena]: /config/databases/aws-athena
 [ref-connect-db-redshift]: /config/databases/aws-redshift

docs/content/Configuration/Databases/Google-BigQuery.mdx

Lines changed: 0 additions & 1 deletion
@@ -133,7 +133,6 @@ BigQuery connections are made over HTTPS.
 /caching/using-pre-aggregations#pre-aggregation-build-strategies
 [ref-config-multiple-ds-decorating-env]:
 /config/multiple-data-sources#configuring-data-sources-with-environment-variables-decorated-environment-variables
-[ref-env-var]: /reference/environment-variables#database-connection
 [ref-schema-ref-types-formats-countdistinctapprox]:
 /schema/reference/types-and-formats#count-distinct-approx
 [self-preaggs-batching]: #batching

docs/content/Configuration/Databases/Snowflake.mdx

Lines changed: 0 additions & 1 deletion
@@ -127,7 +127,6 @@ connections are made over HTTPS.
 [google-cloud-storage]: https://cloud.google.com/storage
 [ref-caching-using-preaggs-build-strats]:
 /caching/using-pre-aggregations#pre-aggregation-build-strategies
-[ref-env-var]: /reference/environment-variables#database-connection
 [ref-schema-ref-types-formats-countdistinctapprox]:
 /schema/reference/types-and-formats#count-distinct-approx
 [self-preaggs-batching]: #batching

docs/content/Cube-Cloud/Configuration/Connecting-to-Databases.mdx

Lines changed: 9 additions & 9 deletions
@@ -7,7 +7,7 @@ redirect_from:
 - /cloud/configuration/connecting-to-databases
 ---

-You can connect all Cube.js supported databases to your Cube Cloud deployment.
+You can connect all Cube supported databases to your Cube Cloud deployment.

 <div style="text-align: center">
   <img
@@ -32,12 +32,12 @@ vendors.

 The following fields are required when creating an AWS Athena connection:

-| Field | Description | Examples |
-| ------------------------- | ----------------------------------------------------------------- | ------------------------------------------ |
-| **AWS Access Key ID** | The AWS Access Key ID to use for database connections | `AKIAXXXXXXXXXXXXXXXX` |
-| **AWS Secret Access Key** | The AWS Secret Access Key to use for database connections | `asd+/Ead123456asc23ASD2Acsf23/1A3fAc56af` |
-| **AWS Region** | The AWS region of the Cube.js deployment | `us-east-1` |
-| **S3 Output Location** | The S3 path to store query results made by the Cube.js deployment | `s3://my-output-bucket/outputs/` |
+| Field | Description | Examples |
+| ------------------------- | -------------------------------------------------------------- | ------------------------------------------ |
+| **AWS Access Key ID** | The AWS Access Key ID to use for database connections | `AKIAXXXXXXXXXXXXXXXX` |
+| **AWS Secret Access Key** | The AWS Secret Access Key to use for database connections | `asd+/Ead123456asc23ASD2Acsf23/1A3fAc56af` |
+| **AWS Region** | The AWS region of the Cube deployment | `us-east-1` |
+| **S3 Output Location** | The S3 path to store query results made by the Cube deployment | `s3://my-output-bucket/outputs/` |

 <div style="text-align: center">
   <img
@@ -167,7 +167,7 @@ Add the following environment variables:

 | Environment Variable | Description | Example |
 | -------------------- | ----------- | ------- |
-| `CUBEJS_DB_SSL` | If `true`, enables SSL encryption for database connections from Cube.js | `true`, `false` |
+| `CUBEJS_DB_SSL` | If `true`, enables SSL encryption for database connections from Cube | `true`, `false` |
 | `CUBEJS_DB_SSL_CA` | The contents of a CA bundle in PEM format, or a path to one. For more information, check the `options.ca` property for TLS Secure Contexts [in the Node.js documentation][link-nodejs-tls-options] | A valid CA bundle or a path to one |
 | `CUBEJS_DB_SSL_CERT` | The contents of an SSL certificate in PEM format, or a path to one. For more information, check the `options.cert` property for TLS Secure Contexts [in the Node.js documentation][link-nodejs-tls-options] | A valid SSL certificate or a path to one |
 | `CUBEJS_DB_SSL_KEY` | The contents of a private key in PEM format, or a path to one. For more information, check the `options.key` property for TLS Secure Contexts [in the Node.js documentation][link-nodejs-tls-options] | A valid SSL private key or a path to one |
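For orientation only (not part of this commit), wiring the SSL variables from the table above into a Cube service in Docker Compose might look like the following sketch; the certificate paths are assumptions:

```yaml
services:
  cube:
    image: cubejs/cube:latest
    environment:
      # Enable SSL for connections from Cube to the upstream database
      - CUBEJS_DB_SSL=true
      # Paths (or PEM contents) for the CA bundle, client certificate, and key
      - CUBEJS_DB_SSL_CA=/cube/conf/ssl/ca.pem
      - CUBEJS_DB_SSL_CERT=/cube/conf/ssl/client-cert.pem
      - CUBEJS_DB_SSL_KEY=/cube/conf/ssl/client-key.pem
    volumes:
      - ./ssl:/cube/conf/ssl
```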
@@ -191,4 +191,4 @@ Settings page.
 https://docs.getdbt.com/reference/warehouse-profiles/snowflake-profile#account
 [link-snowflake-regions]:
 https://docs.snowflake.com/en/user-guide/intro-regions.html
-[ref-config-env-vars]: /reference/environment-variables#database-connection
+[ref-config-env-vars]: /reference/environment-variables
