:description: Create and run your first Spark job with the Stackable Operator. Includes steps for job setup, verification, and inspecting driver logs.
Once you have followed the steps in the xref:getting_started/installation.adoc[] section to install the operator and its dependencies, you will now create a Spark job.
Afterwards you can <<_verify_that_it_works, verify that it works>> by looking at the logs from the driver pod.
== Starting a Spark job
A Spark application is made up of three components:
* Job: this builds a `spark-submit` command from the resource and passes it to internal Spark code, together with templates for building the driver and executor pods.
* Driver: the driver starts the designated number of executors and removes them when the job is completed.
* Executor(s): responsible for executing the job itself.
* `metadata.name` contains the name of the SparkApplication
* `spec.version`: SparkApplication version (1.0). This can be freely set by the user and is added by the operator as a label to all workload resources created by the application.
* `spec.sparkImage`: the image used by the job, driver and executor pods. This can be a custom image built by the user or an official Stackable image. Available official images are listed in the Stackable https://repo.stackable.tech/#browse/browse:docker:v2%2Fstackable%spark-k8s%2Ftags[image registry].
* `spec.mode`: only `cluster` is currently supported
* `spec.mainApplicationFile`: the artifact (Java, Scala or Python) that forms the basis of the Spark job. This path is relative to the image, so in this case we are running an example Python script (that calculates the value of pi): it is bundled with the Spark code and therefore already present in the job image.
* `spec.driver`: driver-specific settings.
* `spec.executor`: executor-specific settings.
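
For orientation, a minimal manifest combining these fields could look roughly like the sketch below; the image tag is illustrative and the script path assumes the pi example bundled with Spark:

[source,yaml]
----
apiVersion: spark.stackable.tech/v1alpha1
kind: SparkApplication
metadata:
  name: pyspark-pi
spec:
  version: "1.0"
  sparkImage: docker.stackable.tech/stackable/spark-k8s:3.5.0-stackable0.0.0-dev # illustrative tag
  mode: cluster
  mainApplicationFile: local:///stackable/spark/examples/src/main/python/pi.py # pi example bundled with Spark
  driver: {} # driver-specific settings would go here
  executor:
    instances: 3 # three executors, matching the pods listed further below
----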
== Verify that it works
As mentioned above, the SparkApplication that has just been created will build a `spark-submit` command and pass it to the driver Pod, which in turn will create executor Pods that run for the duration of the job before being cleaned up.
A running process will look like this:
* `pyspark-pi-xxxx`: the initializing job that creates the `spark-submit` command (named after `metadata.name` with a unique suffix)
* `pyspark-pi-xxxxxxx-driver`: the driver pod that drives the execution
* `pythonpi-xxxxxxxxx-exec-x`: the set of executors started by the driver (in our example `spec.executor.instances` was set to 3, which is why we have 3 executors)
Job progress can be followed by issuing this command:
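
One way to do this — not necessarily the guide's exact invocation — is to watch the pods in the namespace:

[source,bash]
----
# Watch the job, driver and executor pods appear and terminate as the run progresses
kubectl get pods --watch
----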
When the job completes, the driver cleans up the executors.
The initial job is persisted for several minutes before being removed.
The completed state will look like this:
The driver logs can be inspected for more information about the results of the job.
In this case we expect to find the results of our (approximate!) pi calculation:
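
With the pod names shown above (the driver pod name carries a generated suffix), the relevant line can be pulled out of the driver log; the grep pattern assumes the standard output of Spark's pi example:

[source,bash]
----
# Print the driver log and filter for the computed value of pi
kubectl logs pyspark-pi-xxxxxxx-driver | grep -i "pi is roughly"
----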

docs/modules/spark-k8s/pages/getting_started/index.adoc

= Getting started
This guide will get you started with Spark using the Stackable Operator for Apache Spark.
It will guide you through the installation of the Operator and its dependencies, executing your first Spark job and reviewing its result.

docs/modules/spark-k8s/pages/getting_started/installation.adoc

= Installation
:description: Learn how to set up Spark with the Stackable Operator, from installation to running your first job, including prerequisites and resource recommendations.
On this page you will install the Stackable Spark-on-Kubernetes operator as well as the commons, secret and listener operators, which are required by all Stackable operators.
== Dependencies
Spark applications almost always require dependencies like database drivers, REST API clients and many others.
These dependencies must be available on the `classpath` of each executor (and in some cases of the driver, too).
There are multiple ways to provision Spark jobs with such dependencies: some are built into Spark itself while others are implemented at the operator level.
In this guide we are going to keep things simple and look at executing a Spark job that has a minimum of dependencies.
More information about the different ways to define Spark jobs and their dependencies is given on the following pages:
* xref:usage-guide/index.adoc[]
* xref:job_dependencies.adoc[]
== Stackable Operators
There are 2 ways to install Stackable operators.
=== stackablectl
`stackablectl` is the command line tool to interact with Stackable operators and our recommended way to install Operators.
Follow the xref:management:stackablectl:installation.adoc[installation steps] for your platform.
After you have installed `stackablectl`, run the following command to install the Spark-k8s operator:
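
A sketch of what this typically looks like is shown below; the operator list mirrors the dependencies named above, and the exact names or versions should be checked against the current documentation:

[source,bash]
----
# Install the Spark-k8s operator together with the operators it depends on
stackablectl operator install commons secret listener spark-k8s
----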
Helm will deploy the operators in a Kubernetes Deployment and apply the CRDs for the SparkApplication (as well as the CRDs for the required operators).
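
For reference, the Helm route referred to here generally looks like the following; the repository URL and chart names are taken from the Stackable Helm repository and may change between releases:

[source,bash]
----
# Add the Stackable Helm repository and install the required operators
helm repo add stackable-stable https://repo.stackable.tech/repository/helm-stable/
helm install commons-operator stackable-stable/commons-operator
helm install secret-operator stackable-stable/secret-operator
helm install listener-operator stackable-stable/listener-operator
helm install spark-k8s-operator stackable-stable/spark-k8s-operator
----

You are now ready to create a Spark job.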

docs/modules/spark-k8s/pages/index.adoc

= Stackable Operator for Apache Spark
:description: Manage Apache Spark clusters on Kubernetes with Stackable Operator, featuring SparkApplication CRDs, history server, S3 integration, and demos for big data tasks.
:keywords: Stackable operator, Apache Spark, Kubernetes, operator, data science, engineer, big data, CRD, StatefulSet, ConfigMap, Service, S3, demo, version

docs/modules/spark-k8s/pages/usage-guide/examples.adoc

= Examples
:description: Explore Spark job examples with various setups for PySpark and Scala, including external datasets, PVC mounts, and S3 access configurations.
The following examples have these `spec` fields in common:

docs/modules/spark-k8s/pages/usage-guide/history-server.adoc

= Spark History Server
:description: Set up Spark History Server on Kubernetes to access Spark logs via S3, with configuration for cleanups and web UI access details.
:page-aliases: history_server.adoc
== Overview
The Stackable Spark-on-Kubernetes operator runs Apache Spark workloads in a Kubernetes cluster, whereby driver- and executor-pods are created for the duration of the job and then terminated.
One or more Spark History Server instances can be deployed independently of SparkApplication jobs and used as an endpoint for Spark logging, so that job information can be viewed once the job pods are no longer available.
== Deployment
The example below demonstrates how to set up the history server running in one Pod with scheduled cleanups of the event logs.
The event logs are loaded from an S3 bucket named `spark-logs` and the folder `eventlogs/`.
The credentials for this bucket are provided by the secret class `s3-credentials-class`.
For more details on how the Stackable Data Platform manages S3 resources see the xref:concepts:s3.adoc[S3 resources] page.
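
A sketch of such a history server resource is shown below; the overall shape follows the operator's SparkHistoryServer custom resource, but the field names, version and cleanup settings are indicative and should be checked against the current CRD reference:

[source,yaml]
----
apiVersion: spark.stackable.tech/v1alpha1
kind: SparkHistoryServer
metadata:
  name: spark-history
spec:
  image:
    productVersion: 3.5.1 # illustrative Spark version
  logFileDirectory:
    s3:
      prefix: eventlogs/ # folder within the bucket
      bucket:
        reference: spark-logs # name of an S3Bucket resource describing the spark-logs bucket
  sparkConf:
    spark.history.fs.cleaner.enabled: "true" # scheduled cleanups of old event logs
  nodes:
    roleGroups:
      default:
        replicas: 1 # a single history server Pod
----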
To access the history server web UI, use one of the `NodePort` services created by the operator.
For the example above, the operator created two services as shown:
[source,bash]
----
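# List the Services in the namespace; the two NodePort services created
# for the history server appear in this output
kubectl get svc
----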

image::history-server-ui.png[History Server Console]
For a role group of the Spark history server, you can specify `configOverrides` for the following files:
* `security.properties`
=== The security.properties file
The `security.properties` file is used to configure JVM security properties.
It is very seldom that users need to tweak any of these, but there is one use-case that stands out, and that users need to be aware of: the JVM DNS cache.
The JVM manages its own cache of successfully resolved host names as well as a cache of host names that cannot be resolved.
Some products of the Stackable platform are very sensitive to the contents of these caches and their performance is heavily affected by them.
As of version 3.4.0, Apache Spark may perform poorly if the positive cache is disabled.
To cache resolved host names, and thus speed up queries, you can configure the TTL of entries in the positive cache like this:
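
A sketch of such an override for the history server role is shown below; the `networkaddress` properties are standard JVM security settings, while the surrounding role layout follows the usual Stackable `configOverrides` convention and may need adjusting to your role group:

[source,yaml]
----
nodes:
  configOverrides:
    security.properties:
      networkaddress.cache.ttl: "30" # cache successfully resolved host names for 30 seconds
      networkaddress.cache.negative.ttl: "0" # do not cache failed lookups
----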

docs/modules/spark-k8s/pages/usage-guide/job-dependencies.adoc

= Job Dependencies
:description: Learn how to provision dependencies for Spark jobs using custom images, volumes, Maven packages, or Python packages, and their trade-offs.

docs/modules/spark-k8s/pages/usage-guide/s3.adoc

= S3 bucket specification
:description: Learn how to configure S3 access in SparkApplications using inline credentials or external resources, including TLS for secure connections.
You can specify S3 connection details directly inside the SparkApplication specification or by referring to an external S3Bucket custom resource.
== S3 access using credentials
To specify S3 connection details directly as part of the SparkApplication resource you add an inline connection configuration as shown below.

[source,yaml]
----
s3connection: # <1>
----
<3> Optional connection port.
<4> Name of the `Secret` object expected to contain the following keys: `accessKey` and `secretKey`
It is also possible to configure the connection details as a separate Kubernetes resource and only refer to that object from the SparkApplication like this:

[source,yaml]
----
spec:
  # ...
  secretClass: minio-credentials-class
----
This has the advantage that one connection configuration can be shared across SparkApplications and reduces the cost of updating these details.

docs/modules/spark-k8s/pages/usage-guide/security.adoc

= Security
:description: Learn how to configure Apache Spark applications with Kerberos authentication using Stackable Secret Operator for secure data access in HDFS.
== Authentication
 executor:
 volumes:
   - name: hdfs-config <4>
     configMap:
       name: hdfs
   - name: kerberos
     ephemeral:
       volumeClaimTemplate:
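
Based on Spark's standard Kerberos settings, the `sparkConf` entries that the callouts below refer to look roughly like this; the keytab path and principal are illustrative:

[source,yaml]
----
sparkConf:
  spark.kerberos.keytab: /stackable/kerberos/keytab # <1>
  spark.kerberos.principal: spark.default.svc.cluster.local@CLUSTER.LOCAL # <2>
----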
<1> Location of the keytab file.
<2> Principal name. This needs to have the format `<SERVICE_NAME>.default.svc.cluster.local@<REALM>` where `SERVICE_NAME` matches the volume claim annotation `secrets.stackable.tech/kerberos.service.names` and `REALM` must be `CLUSTER.LOCAL` unless a different realm was used explicitly. In that case, the `KERBEROS_REALM` environment variable must also be set accordingly.