Commit 9baeb69

AUTO: Sync ScalarDB docs in English to docs site repo (#1127)
Co-authored-by: josh-wong <[email protected]>
1 parent 39ebead commit 9baeb69

File tree: 2 files changed, +5 −5 lines


docs/scalardb-analytics/deployment.mdx

Lines changed: 3 additions & 3 deletions
@@ -65,13 +65,13 @@ For more details, refer to [Set up ScalarDB Analytics in the Spark configuration
 
 <h4>Run analytical queries via the Spark driver</h4>
 
-After the EMR Spark cluster has launched, you can use ssh to connect to the primary node of the EMR cluster and run your Spark application. For details on how to create a Spark Driver application, refer to [Spark Driver application](./run-analytical-queries.mdx?spark-application-type=spark-driver#spark-driver-application).
+After the EMR Spark cluster has launched, you can use ssh to connect to the primary node of the EMR cluster and run your Spark application. For details on how to create a Spark Driver application, refer to [Spark Driver application](./run-analytical-queries.mdx?spark-application-type=spark-driver-application#develop-a-spark-application).
 
 <h4>Run analytical queries via Spark Connect</h4>
 
 You can use Spark Connect to run your Spark application remotely by using the EMR cluster that you launched.
 
-You first need to configure the Software setting in the same way as the [Spark Driver application](./run-analytical-queries.mdx?spark-application-type=spark-driver#spark-driver-application). You also need to set the following configuration to enable Spark Connect.
+You first need to configure the Software setting in the same way as the [Spark Driver application](./run-analytical-queries.mdx?spark-application-type=spark-driver-application#develop-a-spark-application). You also need to set the following configuration to enable Spark Connect.
 
 <h5>Allow inbound traffic for a Spark Connect server</h5>
 

@@ -108,7 +108,7 @@ The following describes what you should change the content in the angle brackets
 
 You can run your Spark application via Spark Connect from anywhere by using the remote URL of the Spark Connect server, which is `sc://<PRIMARY_NODE_PUBLIC_HOSTNAME>:15001`.
 
-For details on how to create a Spark application by using Spark Connect, refer to [Spark Connect application](./run-analytical-queries.mdx?spark-application-type=spark-connect#spark-connect-application).
+For details on how to create a Spark application by using Spark Connect, refer to [Spark Connect application](./run-analytical-queries.mdx?spark-application-type=spark-connect#develop-a-spark-application).
 
 </TabItem>
 <TabItem value="databricks" label="Databricks">
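For context on the docs being changed in this diff: the remote URL `sc://<PRIMARY_NODE_PUBLIC_HOSTNAME>:15001` is what a Spark Connect client would point at. The following is a minimal illustrative sketch, not part of this commit; it assumes `pyspark` 3.4+ with the `connect` extra is installed, and the helper names are hypothetical.

```python
# Illustrative sketch (not from the docs above): connecting a PySpark client
# to the Spark Connect server that the EMR deployment exposes on port 15001.
# Helper names are hypothetical; only the sc://<host>:15001 URL shape comes
# from the documentation.

def spark_connect_url(primary_node_public_hostname: str, port: int = 15001) -> str:
    """Build the Spark Connect remote URL for the EMR primary node."""
    return f"sc://{primary_node_public_hostname}:{port}"


def open_session(primary_node_public_hostname: str):
    """Open a Spark Connect session (requires `pyspark[connect]` >= 3.4)."""
    # Deferred import so the URL helper above stays dependency-free.
    from pyspark.sql import SparkSession

    return (
        SparkSession.builder
        .remote(spark_connect_url(primary_node_public_hostname))
        .getOrCreate()
    )
```

With a session open, `spark.sql(...)` calls execute on the remote EMR cluster rather than in the local process.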

docs/scalardb-analytics/run-analytical-queries.mdx

Lines changed: 2 additions & 2 deletions
@@ -281,7 +281,7 @@ You can also use other CLI tools that Spark provides, such as `spark-sql` and `s
 :::
 
 </TabItem>
-<TabItem value="spark-connect" label="Spark Connect">
+<TabItem value="spark-connect" label="Spark Connect application">
 
 You can use [Spark Connect](https://spark.apache.org/spark-connect/) to interact with ScalarDB Analytics. By using Spark Connect, you can access a remote Spark cluster and read data in the same way as a Spark Driver application. The following briefly describes how to use Spark Connect.
 
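The paragraph above notes that a Spark Connect client reads data the same way a Spark Driver application does. A hedged sketch of that pattern follows; the catalog, namespace, table, and column names are hypothetical placeholders, not taken from the ScalarDB docs.

```python
# Hypothetical sketch: once a session exists (Spark Driver or Spark Connect),
# analytical queries are plain Spark SQL. All identifiers below are
# placeholders, not from the ScalarDB documentation.

def order_counts_query(catalog: str, namespace: str) -> str:
    """Build an example aggregation query against an analytics table."""
    return (
        f"SELECT customer_id, COUNT(*) AS order_count "
        f"FROM {catalog}.{namespace}.orders "
        f"GROUP BY customer_id"
    )


def run_order_counts(spark, catalog: str = "my_catalog", namespace: str = "my_namespace"):
    """Execute the query through any SparkSession-like object."""
    return spark.sql(order_counts_query(catalog, namespace))
```

Because the query goes through the ordinary `spark.sql` entry point, the same function works unchanged whether `spark` was created locally or via a Spark Connect remote URL.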

@@ -337,7 +337,7 @@ java -jar my-spark-connect-client.jar
 For details about how you can use Spark Connect, refer to the [Spark Connect documentation](https://spark.apache.org/docs/latest/spark-connect-overview.html).
 
 </TabItem>
-<TabItem value="jdbc" label="JDBC">
+<TabItem value="jdbc" label="JDBC application">
 
 Unfortunately, Spark Thrift JDBC server does not support the Spark features that are necessary for ScalarDB Analytics, so you cannot use JDBC to read data from ScalarDB Analytics in your Apache Spark environment. JDBC application is referred to here because some managed Spark services provide different ways to interact with a Spark cluster via the JDBC interface. For more details, refer to [Supported application types](./deployment.mdx#supported-managed-spark-services-and-their-application-types).
 
