
Commit 5f3c58f

Merge branch 'main' into scalardb/update-docs-3.14-en-us

2 parents: e029290 + a7e033a

File tree: 2 files changed (+8, −8 lines changed)


docs/scalardb-analytics/development.mdx renamed to docs/scalardb-analytics/run-analytical-queries.mdx

Lines changed: 7 additions & 7 deletions
@@ -9,7 +9,7 @@ import TabItem from '@theme/TabItem';
 
 # Run Analytical Queries Through ScalarDB Analytics
 
-This guide explains how to develop ScalarDB Analytics applications. For details on the architecture and design, see [ScalarDB Analytics Design](design.mdx)
+This guide explains how to develop ScalarDB Analytics applications. For details on the architecture and design, see [ScalarDB Analytics Design](./design.mdx)
 
 ScalarDB Analytics currently uses Spark as an execution engine and provides a Spark custom catalog plugin to provide a unified view of ScalarDB-managed and non-ScalarDB-managed data sources as Spark tables. This allows you to execute arbitrary Spark SQL queries seamlessly.

@@ -41,7 +41,7 @@ For example configurations in a practical scenario, see [the sample application
 
 | Configuration Key | Required | Description |
 |:-----------------|:---------|:------------|
-| `spark.jars.packages` | No | A comma-separated list of Maven coordinates for the required dependencies. User need to include the ScalarDB Analytics package you are using, otherwise, specify it as the command line argument when running the Spark application. For the details about the Maven coordinates of ScalarDB Analytics, refer to [Add ScalarDB Analytics dependency](#add-scalardb-analytics-dependency). |
+| `spark.jars.packages` | No | A comma-separated list of Maven coordinates for the required dependencies. User need to include the ScalarDB Analytics package you are using, otherwise, specify it as the command line argument when running the Spark application. For details about the Maven coordinates of ScalarDB Analytics, refer to [Add ScalarDB Analytics dependency](#add-the-scalardb-analytics-dependency). |
 | `spark.sql.extensions` | Yes | Must be set to `com.scalar.db.analytics.spark.Extensions` |
 | `spark.sql.catalog.<CATALOG_NAME>` | Yes | Must be set to `com.scalar.db.analytics.spark.ScalarCatalog` |

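Taken together, the three keys in the configuration table above amount to a Spark properties fragment along these lines. This is a sketch only: `<CATALOG_NAME>` and the Maven coordinates are placeholders exactly as in the table, not concrete values from this commit.

```properties
# spark-defaults.conf (sketch; fill in the angle-bracket placeholders)
spark.jars.packages                <SCALARDB_ANALYTICS_MAVEN_COORDINATES>
spark.sql.extensions               com.scalar.db.analytics.spark.Extensions
spark.sql.catalog.<CATALOG_NAME>   com.scalar.db.analytics.spark.ScalarCatalog
```

Since `spark.jars.packages` is not required, the coordinates can instead be passed on the command line when submitting the Spark application, as the table notes.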
@@ -225,11 +225,11 @@ There are three ways to develop Spark applications with ScalarDB Analytics:
 
 :::note
 
-Depending on your environment, you may not be able to use all of the methods mentioned above. For details about supported features and deployment options, refer to [Supported managed Spark services and their application types](deployment.mdx#supported-managed-spark-services-and-their-application-types).
+Depending on your environment, you may not be able to use all the methods mentioned above. For details about supported features and deployment options, refer to [Supported managed Spark services and their application types](./deployment.mdx#supported-managed-spark-services-and-their-application-types).
 
 :::
 
-With all of these methods, you can refer to tables in ScalarDB Analytics using the same table identifier format. For details about how ScalarDB Analytics maps catalog information from data sources, refer to [Catalog information mappings by data source](design.mdx#catalog-information-mappings-by-data-source).
+With all these methods, you can refer to tables in ScalarDB Analytics using the same table identifier format. For details about how ScalarDB Analytics maps catalog information from data sources, refer to [Catalog information mappings by data source](./design.mdx#catalog-information-mappings-by-data-source).
 
 <Tabs groupId="spark-application-type" queryString>
 <TabItem value="spark-driver" label="Spark Driver application">
@@ -339,7 +339,7 @@ For details about how you can use Spark Connect, refer to the [Spark Connect doc
 </TabItem>
 <TabItem value="jdbc" label="JDBC">
 
-Unfortunately, Spark Thrift JDBC server does not support the Spark features that are necessary for ScalarDB Analytics, so you cannot use JDBC to read data from ScalarDB Analytics in your Apache Spark environment. JDBC application is referred to here because some managed Spark services provide different ways to interact with a Spark cluster via the JDBC interface. For more details, refer to [Supported application types](deployment.mdx#supported-managed-spark-services-and-their-application-types).
+Unfortunately, Spark Thrift JDBC server does not support the Spark features that are necessary for ScalarDB Analytics, so you cannot use JDBC to read data from ScalarDB Analytics in your Apache Spark environment. JDBC application is referred to here because some managed Spark services provide different ways to interact with a Spark cluster via the JDBC interface. For more details, refer to [Supported application types](./deployment.mdx#supported-managed-spark-services-and-their-application-types).
 
 </TabItem>
 </Tabs>
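The changed docs text above notes that every application type refers to tables through the same dotted identifier format. A tiny illustrative Python helper (not part of ScalarDB Analytics; the example names are the ones this page uses) makes the shape of such identifiers explicit:

```python
def table_identifier(catalog: str, *parts: str) -> str:
    """Join a catalog name, namespace part(s), and a table name into the
    dotted identifier format used in Spark SQL queries (illustrative only)."""
    return ".".join((catalog, *parts))

# The WAL-interpreted view identifier quoted later on this page:
identifier = table_identifier(
    "my_catalog", "view", "my_data_source", "my_namespace", "my_table"
)
# A Spark SQL query would then reference it as, e.g.:
#   spark.sql(f"SELECT * FROM {identifier}")
```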
@@ -348,7 +348,7 @@ Unfortunately, Spark Thrift JDBC server does not support the Spark features that
 
 ScalarDB Analytics manages its own catalog, containing data sources, namespaces, tables, and columns. That information is automatically mapped to the Spark catalog. In this section, you will learn how ScalarDB Analytics maps its catalog information to the Spark catalog.
 
-For details about how information in the raw data sources is mapped to the ScalarDB Analytics catalog, refer to [Catalog information mappings by data source](design.mdx#catalog-information-mappings-by-data-source).
+For details about how information in the raw data sources is mapped to the ScalarDB Analytics catalog, refer to [Catalog information mappings by data source](./design.mdx#catalog-information-mappings-by-data-source).
 
 ### Catalog level mapping

@@ -395,7 +395,7 @@ For example, if you have a ScalarDB catalog named `my_catalog` and a view namesp
 
 ##### WAL-interpreted views
 
-As explained in [ScalarDB Analytics Design](design.mdx), ScalarDB Analytics provides a functionality called WAL-interpreted views, which is a special type of views. These views are automatically created for tables of ScalarDB data sources to provide a user-friendly view of the data by interpreting WAL-metadata in the tables.
+As explained in [ScalarDB Analytics Design](./design.mdx), ScalarDB Analytics provides a functionality called WAL-interpreted views, which is a special type of views. These views are automatically created for tables of ScalarDB data sources to provide a user-friendly view of the data by interpreting WAL-metadata in the tables.
 
 Since the data source name and the namespace names of the original ScalarDB tables are used as the view namespace names for WAL-interpreted views, if you have a ScalarDB table named `my_table` in a namespace named `my_namespace` of a data source named `my_data_source`, you can refer to the WAL-interpreted view of the table as `my_catalog.view.my_data_source.my_namespace.my_table`.

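The WAL-interpreted-view naming rule in the hunk above is mechanical: the view lives under the reserved `view` namespace, reusing the data source and namespace names of the original ScalarDB table. A hypothetical Python helper (not a ScalarDB Analytics API) sketches the rule:

```python
def wal_view_identifier(catalog: str, data_source: str,
                        namespace: str, table: str) -> str:
    """Build the identifier of the WAL-interpreted view for a ScalarDB table.

    Per the docs, the view is addressed as
    <catalog>.view.<data_source>.<namespace>.<table>.
    Illustrative sketch only, not part of the library.
    """
    return f"{catalog}.view.{data_source}.{namespace}.{table}"

# The example from the docs text:
view = wal_view_identifier("my_catalog", "my_data_source",
                           "my_namespace", "my_table")
```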
sidebars.js

Lines changed: 1 addition & 1 deletion
@@ -306,7 +306,7 @@ const sidebars = {
     },
     {
       type: 'doc',
-      id: 'scalardb-analytics/development',
+      id: 'scalardb-analytics/run-analytical-queries',
       label: 'Run Analytical Queries',
     },
     {
