Commit 39ebead

AUTO: Sync ScalarDB docs in English to docs site repo (#1128)
Co-authored-by: josh-wong <[email protected]>
1 parent: d50d495

2 files changed: +5 -5 lines changed

versioned_docs/version-3.14/scalardb-analytics/deployment.mdx

Lines changed: 3 additions & 3 deletions

```diff
@@ -66,13 +66,13 @@ For more details, refer to [Set up ScalarDB Analytics in the Spark configuration
 
 <h4>Run analytical queries via the Spark driver</h4>
 
-After the EMR Spark cluster has launched, you can use ssh to connect to the primary node of the EMR cluster and run your Spark application. For details on how to create a Spark Driver application, refer to [Spark Driver application](./run-analytical-queries.mdx?spark-application-type=spark-driver#spark-driver-application).
+After the EMR Spark cluster has launched, you can use ssh to connect to the primary node of the EMR cluster and run your Spark application. For details on how to create a Spark Driver application, refer to [Spark Driver application](./run-analytical-queries.mdx?spark-application-type=spark-driver-application#develop-a-spark-application).
 
 <h4>Run analytical queries via Spark Connect</h4>
 
 You can use Spark Connect to run your Spark application remotely by using the EMR cluster that you launched.
 
-You first need to configure the Software setting in the same way as the [Spark Driver application](./run-analytical-queries.mdx?spark-application-type=spark-driver#spark-driver-application). You also need to set the following configuration to enable Spark Connect.
+You first need to configure the Software setting in the same way as the [Spark Driver application](./run-analytical-queries.mdx?spark-application-type=spark-driver-application#develop-a-spark-application). You also need to set the following configuration to enable Spark Connect.
 
 <h5>Allow inbound traffic for a Spark Connect server</h5>
 
@@ -109,7 +109,7 @@ The following describes what you should change the content in the angle brackets
 
 You can run your Spark application via Spark Connect from anywhere by using the remote URL of the Spark Connect server, which is `sc://<PRIMARY_NODE_PUBLIC_HOSTNAME>:15001`.
 
-For details on how to create a Spark application by using Spark Connect, refer to [Spark Connect application](./run-analytical-queries.mdx?spark-application-type=spark-connect#spark-connect-application).
+For details on how to create a Spark application by using Spark Connect, refer to [Spark Connect application](./run-analytical-queries.mdx?spark-application-type=spark-connect#develop-a-spark-application).
 
 </TabItem>
 <TabItem value="databricks" label="Databricks">
```

versioned_docs/version-3.14/scalardb-analytics/run-analytical-queries.mdx

Lines changed: 2 additions & 2 deletions

```diff
@@ -282,7 +282,7 @@ You can also use other CLI tools that Spark provides, such as `spark-sql` and `s
 :::
 
 </TabItem>
-<TabItem value="spark-connect" label="Spark Connect">
+<TabItem value="spark-connect" label="Spark Connect application">
 
 You can use [Spark Connect](https://spark.apache.org/spark-connect/) to interact with ScalarDB Analytics. By using Spark Connect, you can access a remote Spark cluster and read data in the same way as a Spark Driver application. The following briefly describes how to use Spark Connect.
 
@@ -338,7 +338,7 @@ java -jar my-spark-connect-client.jar
 For details about how you can use Spark Connect, refer to the [Spark Connect documentation](https://spark.apache.org/docs/latest/spark-connect-overview.html).
 
 </TabItem>
-<TabItem value="jdbc" label="JDBC">
+<TabItem value="jdbc" label="JDBC application">
 
 Unfortunately, Spark Thrift JDBC server does not support the Spark features that are necessary for ScalarDB Analytics, so you cannot use JDBC to read data from ScalarDB Analytics in your Apache Spark environment. JDBC application is referred to here because some managed Spark services provide different ways to interact with a Spark cluster via the JDBC interface. For more details, refer to [Supported application types](./deployment.mdx#supported-managed-spark-services-and-their-application-types).
```
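To illustrate the claim in the hunk above that Spark Connect reads data "in the same way as a Spark Driver application", here is a hedged end-to-end sketch; the endpoint, catalog, namespace, and table names are invented for illustration and do not come from the commit:

```python
from pyspark.sql import SparkSession

# Whether the session runs on the Spark driver directly or through Spark
# Connect, the read path below is identical once the session exists.
spark = (
    SparkSession.builder
    .remote("sc://emr-primary.example.com:15001")  # placeholder endpoint; drop .remote() for a plain driver app
    .getOrCreate()
)

# Hypothetical ScalarDB Analytics table, for illustration only.
df = spark.sql("SELECT * FROM my_catalog.my_namespace.my_table LIMIT 10")
df.show()
```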
