
Commit 15f9a1e

docs(faqs): create new FAQs section
1 parent 0a1dd6b commit 15f9a1e

4 files changed, +226 −0 lines changed


docs/content/FAQs/General.mdx

Lines changed: 39 additions & 0 deletions

---
title: General
permalink: /faqs/general
category: FAQs
---

## Is there a row limit on the results of a query?

The row limit for all query results is set to 10,000 rows by default. You may
specify a row limit of up to 50,000 in the query parameters for an individual
query. If more rows are needed, we recommend using pagination in your
application to request additional rows.
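
For example, here is a minimal sketch of paginating past the default limit with
the JavaScript client; the API URL, token, and member names are hypothetical:

```javascript
import cubejs from '@cubejs-client/core';

// Hypothetical API URL and token; replace them with your deployment's values
const cubejsApi = cubejs('CUBEJS_API_TOKEN', {
  apiUrl: 'https://example.cubecloud.dev/cubejs-api/v1',
});

// Page through a result set that is larger than a single request's row limit
async function loadAllRows() {
  const pageSize = 10000; // any value up to the 50,000 per-request maximum
  let offset = 0;
  let rows = [];

  for (;;) {
    const resultSet = await cubejsApi.load({
      dimensions: ['Orders.id', 'Orders.status'], // hypothetical members
      limit: pageSize,
      offset,
    });

    const page = resultSet.tablePivot();
    rows = rows.concat(page);

    if (page.length < pageSize) break; // last page reached
    offset += pageSize;
  }

  return rows;
}
```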

## Can I try Cube Cloud for free?

Currently, Cube Cloud offers Free, Standard and Enterprise plans, with Free
being the default for new Cloud accounts. Each tier comes with different
benefits and data pass-through limits, and you are welcome to remain on the
Free tier indefinitely if it satisfies your data needs. More details on what
each plan includes can be found at [cube.dev/pricing](https://cube.dev/pricing).

## What is the difference between CUBEJS_CONCURRENCY and CUBEJS_DB_MAX_POOL?

`CUBEJS_CONCURRENCY` specifies the maximum number of queries that can be
executed concurrently against your source database. This variable should
reflect the limitations of your database and helps limit the number of queries
sent from Cube.

`CUBEJS_DB_MAX_POOL` sets the maximum number of connections in the connection
pool to your database. It only applies to databases that use connection pooling
(PostgreSQL, Redshift, ClickHouse) and is not applicable to databases without
it (BigQuery, Snowflake). The concurrency limit specified in
`CUBEJS_CONCURRENCY` will supersede the number of connections if it is lower.

For example, if the default concurrency limit of 10 is too low and you wish to
raise it to 50, you would set `CUBEJS_CONCURRENCY=50` and set
`CUBEJS_DB_MAX_POOL` to 50 or higher. As noted above, data warehouses that do
not use connection pooling (BigQuery, Snowflake) ignore this parameter.
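
As a rough sketch, the environment variables for that example would look like
this (the values are illustrative, not recommendations):

```bash
CUBEJS_CONCURRENCY=50
CUBEJS_DB_MAX_POOL=50
```
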
Lines changed: 118 additions & 0 deletions

---
title: Tips and Tricks
permalink: /faqs/tips-and-tricks
category: FAQs
---

## How can I read from two different database schemas in my database when I'm only able to select one while connecting?

Use your first schema when setting up your database connection in Cube Cloud.

To use your second database schema, update the `CUBEJS_DB_NAME` environment
variable in **Settings > Configuration**. Change `CUBEJS_DB_NAME` to the name
of your second schema.

This will trigger a new build. Once it's completed, click on the Schema tab in
the left-hand navigation, then click the three-dot menu in the upper-right
corner and select Generate Schema. You should be able to see the name of the
second schema from your database and generate new models.
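
For instance, switching the deployment to a hypothetical second schema named
`analytics_v2` would look like this (the schema name is illustrative):

```bash
CUBEJS_DB_NAME=analytics_v2
```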

## Can I track my customers' query usage?

You can track query usage by user (or other dimension) by setting up [Log
Export][ref-cloud-o11y-logs] and parsing the necessary information.

## Can I bypass Row-Level Security when using the SQL API?

There may be times when you want the permissions through Cube's REST API to be
different from the permissions of the SQL API.

For example, perhaps your customers use the REST API to access their own data.
You might use row-level security to prevent them from seeing any data
associated with other customers.

For your internal analytics, you could provide access to your data analysts
via the SQL API. Since this is for your internal use, you will need access to
all the data rather than a single customer's. To give yourself higher
permissions through the SQL API, you could create an exception to the usual
Row-Level Security checks.

In the following configuration, we have created some example Row-Level
Security rules and an exception for querying data via the SQL API.

### Defining basic RLS

First, in the `cube.js` configuration file, we'll define the
[`queryRewrite()`][ref-conf-ref-queryrewrite] property to push a filter onto
each query depending on the `tenantId` within the
[Security Context][ref-sec-ctx].

```javascript
module.exports = {
  queryRewrite: (query, { securityContext }) => {
    if (!securityContext.tenantId) {
      throw new Error('No id found in Security Context!');
    } else {
      query.filters.push({
        member: 'Orders.tenantId',
        operator: 'equals',
        values: [securityContext.tenantId],
      });

      return query;
    }
  },
};
```

With this logic, each tenant can see their data and nothing else.
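
To illustrate (the incoming query and member names are hypothetical, following
the example above), a request arriving with `tenantId: '42'` in its Security
Context is executed as if the tenant filter had been part of the original
query:

```javascript
// Hypothetical query sent by a tenant whose securityContext is { tenantId: '42' }
const incoming = {
  measures: ['Orders.count'],
  timeDimensions: [{ dimension: 'Orders.createdAt', granularity: 'month' }],
  filters: [],
};

// What Cube actually executes after queryRewrite() pushes the tenant filter
const rewritten = {
  measures: ['Orders.count'],
  timeDimensions: [{ dimension: 'Orders.createdAt', granularity: 'month' }],
  filters: [
    { member: 'Orders.tenantId', operator: 'equals', values: ['42'] },
  ],
};
```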

### Bypassing RLS for queries created with the SQL API

When we want to bypass the RLS we defined above, we need to create a sort of
"superuser" only accessible when authenticating via the SQL API. We need to
define two new things for this to work:

1. Leverage the [`checkSqlAuth()`][ref-conf-ref-checksqlauth] configuration
   option to inject a new property into the Security Context that defines a
   superuser. In this case, we'll call it `isSuperUser`.

2. Handle the new `isSuperUser` property in our previously defined
   `queryRewrite` to bypass the filter push.

```javascript
module.exports = {
  // Create a "superuser" security context for the SQL API
  checkSqlAuth: async (req, username) => {
    if (username === process.env.CUBEJS_SQL_USER) {
      return {
        password: process.env.CUBEJS_SQL_PASSWORD,
        securityContext: { isSuperUser: true },
      };
    }
  },
  queryRewrite: (query, { securityContext }) => {
    // Bypass row-level security when connected from the SQL API
    if (securityContext.isSuperUser) {
      return query;
    } else if (!securityContext.tenantId) {
      throw new Error('No id found in Security Context!');
    } else {
      query.filters.push({
        member: 'Orders.tenantId',
        operator: 'equals',
        values: [securityContext.tenantId],
      });

      return query;
    }
  },
};
```

With this exception in place, we should be able to query all the customers'
data via the SQL API without being hindered by the row-level security checks.

[ref-cloud-o11y-logs]: /cloud/workspace/logs
[ref-conf-ref-checksqlauth]: /config#options-reference-check-sql-auth
[ref-conf-ref-queryrewrite]: /config#options-reference-query-rewrite
[ref-sec-ctx]: /security/context

Lines changed: 68 additions & 0 deletions

---
title: Troubleshooting
permalink: /faqs/troubleshooting
category: FAQs
---

## Error: Unsupported db type: undefined

This error message might mean that there's no `CUBEJS_DB_TYPE` defined. Please
visit **Settings > Configuration** and define this environment variable.
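
For example, for a Postgres source the variable would be set as follows (the
value depends on your database type):

```bash
CUBEJS_DB_TYPE=postgres
```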

If this doesn't help, please reach out to us in our support chat; we'd be
happy to help!

## Error: Internal: deadline has elapsed OR Error: Query execution timeout after 10 min of waiting

This error happens when a query remains queued for an excessive amount of
time. To troubleshoot, try increasing the concurrency and/or timeout limits.
The default concurrency is 4 for most data warehouses and the default timeout
is 10 minutes. You can increase these values by adjusting the
`CUBEJS_CONCURRENCY` or `CUBEJS_DB_TIMEOUT` environment variables in
**Settings > Configuration**. If your timeout limit is already high, we
recommend either adding a pre-aggregation or refactoring your SQL for better
efficiency.

If these methods don't help, please reach out to us in our support chat!

## Error: Error during create table: CREATE TABLE with pre-aggregations

This usually means Cube can't create a pre-aggregation table, which could be
due to a few different reasons. Further down the error log, you should see
more details which will help narrow down the scope of the issue.

| If you see…                                               | The issue is likely…                                                                                                                                                                         | Recommendation                                                                                             |
| --------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------- |
| `has a key that already exists in Name index`             | Multi-tenancy setup is missing the following configuration setting: <pre><code>preAggregationsSchema: ({ securityContext }) => `pre_aggregations_${securityContext.tenantId}`</code></pre>  | Update the configuration.                                                                                   |
| `Error: Query execution timeout after 10 min of waiting`  | Pre-aggregation is too large to be built.                                                                                                                                                      | Try enabling [export bucket][ref-caching-using-preaggs-export-bucket] or reducing `partitionGranularity`.    |
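
For reference, here is a minimal sketch of that multi-tenant setting in the
`cube.js` configuration file; the `tenantId` property is assumed to be present
in the Security Context, as in the examples above:

```javascript
module.exports = {
  // Build each tenant's pre-aggregations into a separate schema so that
  // table names from different tenants never collide
  preAggregationsSchema: ({ securityContext }) =>
    `pre_aggregations_${securityContext.tenantId}`,
};
```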

For any other error types, feel free to reach out to us in our support chat.

## Warning: There were queries in these timezones which are not added in the CUBEJS_SCHEDULED_REFRESH_TIMEZONES environment variable.

If you want your queries to use pre-aggregations, you must define all
necessary timezones using either the `CUBEJS_SCHEDULED_REFRESH_TIMEZONES`
environment variable or the `cube.js` configuration file:

```javascript
module.exports = {
  // You can define one or multiple timezones based on your requirements
  scheduledRefreshTimeZones: ['America/Vancouver', 'America/Toronto'],
};
```
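
Alternatively, the same timezones can be supplied through the environment
variable; a comma-separated list is assumed here:

```bash
CUBEJS_SCHEDULED_REFRESH_TIMEZONES=America/Vancouver,America/Toronto
```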

Without this configuration, Cube will default to `UTC`. The warning reflects
Cube's inability to find the query's timezone in the desired pre-aggregation.

## Cube Cloud API is down after upgrading the version of Cube

You may roll back to a previous Cube version at any time in **Settings >
Configuration**.

We always recommend testing new Cube versions in your staging environment
before updating your production environment. We do not recommend setting your
production deployment to the latest version, since it will then automatically
upgrade to each new release on the next build or settings update.

[ref-caching-using-preaggs-export-bucket]:
  /caching/using-pre-aggregations#pre-aggregation-build-strategies-export-bucket

docs/src/components/Layout/MainMenu.tsx

Lines changed: 1 addition & 0 deletions

```diff
@@ -40,6 +40,7 @@ const menuOrder = [
   'Deployment',
   'Developer Tools',
   'Examples & Tutorials',
+  'FAQs',
   'Release Notes',
 ];
```
