
Commit db7e640

peter-toth authored and dongjoon-hyun committed
[SPARK-53679] Fix typos in Spark Kubernetes Operator documentation
### What changes were proposed in this pull request?

This PR fixes a few typos in the documentation:

- Fix grammar error in config property description ("to for" → "for")
- Correct property name consistency in configuration.md
- Fix spelling errors: registory→registry, secuirty→security, etc.
- Update cluster state diagram reference to use correct image
- Fix table header typo and various other spelling corrections

### Why are the changes needed?

To have better documentation.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

This PR changes only documentation, no test is needed.

### Was this patch authored or co-authored using generative AI tooling?

Yes, this PR was generated with `claude-sonnet-4` and was reviewed manually.

Closes apache#334 from peter-toth/fix-typos.

Authored-by: Peter Toth <[email protected]>
Signed-off-by: Dongjoon Hyun <[email protected]>
1 parent cde51b1 commit db7e640

File tree: 6 files changed, +11 −11 lines changed


docs/architecture.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -74,7 +74,7 @@ launching Spark deployments and submitting jobs under the hood. It also uses
 
 ## Cluster State Transition
 
-[![Cluster State Transition](resources/application_state_machine.png)](resources/application_state_machine.png)
+[![Cluster State Transition](resources/cluster_state_machine.png)](resources/cluster_state_machine.png)
 
 * Spark clusters are expected to be always running after submitted.
 * Similar to Spark applications, K8s resources created for a cluster would be deleted as the final
````

docs/config_properties.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -8,7 +8,7 @@
 | spark.kubernetes.operator.terminateOnInformerFailureEnabled | Boolean | false | false | Enable to indicate informer errors should stop operator startup. If disabled, operator startup will ignore recoverable errors, caused for example by RBAC issues and will retry periodically. |
 | spark.kubernetes.operator.reconciler.terminationTimeoutSeconds | Integer | 30 | false | Grace period for operator shutdown before reconciliation threads are killed. |
 | spark.kubernetes.operator.reconciler.parallelism | Integer | 50 | false | Thread pool size for Spark Operator reconcilers. Unbounded pool would be used if set to non-positive number. |
-| spark.kubernetes.operator.reconciler.foregroundRequestTimeoutSeconds | Long | 30 | true | Timeout (in seconds) to for requests made to API server. This applies only to foreground requests. |
+| spark.kubernetes.operator.reconciler.foregroundRequestTimeoutSeconds | Long | 30 | true | Timeout (in seconds) for requests made to API server. This applies only to foreground requests. |
 | spark.kubernetes.operator.reconciler.intervalSeconds | Long | 120 | true | Interval (in seconds, non-negative) to reconcile Spark applications. Note that reconciliation is always expected to be triggered when app spec / status is updated. This interval controls the reconcile behavior of operator reconciliation even when there's no update on SparkApplication, e.g. to determine whether a hanging app needs to be proactively terminated. Thus this is recommended to set to above 2 minutes to avoid unnecessary no-op reconciliation. |
 | spark.kubernetes.operator.reconciler.trimStateTransitionHistoryEnabled | Boolean | true | true | When enabled, operator would trim state transition history when a new attempt starts, keeping previous attempt summary only. |
 | spark.kubernetes.operator.reconciler.appStatusListenerClassNames | String | | false | Comma-separated names of SparkAppStatusListener class implementations |
````
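For context, properties marked dynamic (`true` in the fourth column), like the one corrected above, can be set through the operator properties block of the Helm values file shown in docs/configuration.md below. A minimal sketch, assuming that layout (the value 60 is only an example):

```yaml
# Sketch: overriding the corrected foreground request timeout via the
# Helm values file; layout assumed from docs/configuration.md, value
# 60 is illustrative (the documented default is 30).
operatorConfiguration:
  spark-operator.properties: |+
    spark.kubernetes.operator.reconciler.foregroundRequestTimeoutSeconds=60
```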

docs/configuration.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -45,7 +45,7 @@ To enable hot properties loading, update the **helm chart values file** with
 ```yaml
 operatorConfiguration:
   spark-operator.properties: |+
-    spark.operator.dynamic.config.enabled=true
+    spark.kubernetes.operator.dynamicConfig.enabled=true
     # ... all other config overides...
   dynamicConfig:
     create: true
````
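The old key `spark.operator.dynamic.config.enabled` does not match the documented property name, so the fix aligns the snippet with `spark.kubernetes.operator.dynamicConfig.enabled`. A minimal sketch of the corrected block with one more override stacked in the same `|+` literal (the extra key is illustrative; its value of 180 follows the docs/config_properties.md advice to stay above 2 minutes):

```yaml
# Sketch: corrected dynamic-config key plus one illustrative override.
operatorConfiguration:
  spark-operator.properties: |+
    spark.kubernetes.operator.dynamicConfig.enabled=true
    spark.kubernetes.operator.reconciler.intervalSeconds=180
  dynamicConfig:
    create: true
```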

docs/operations.md

Lines changed: 3 additions & 3 deletions

````diff
@@ -41,7 +41,7 @@ in `values.yaml`) for the Helm chart.
 To override single parameters you can use `--set`, for example:
 
 ```bash
-helm install --set image.repository=<my_registory>/spark-kubernetes-operator \
+helm install --set image.repository=<my_registry>/spark-kubernetes-operator \
   -f build-tools/helm/spark-kubernetes-operator/values.yaml \
   build-tools/helm/spark-kubernetes-operator/
```
````
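The same override can also live in a values file instead of being passed with `--set` on every install. A minimal sketch; the `image.repository` path is implied by the flag above, and the registry is a placeholder:

```yaml
# Sketch: values-file equivalent of --set image.repository=...
# (registry hostname is a placeholder, not a real endpoint).
image:
  repository: my-registry.example.com/spark-kubernetes-operator
```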
````diff
@@ -80,7 +80,7 @@ following table:
 | operatorDeployment.operatorPod.operatorContainer.env | Custom env to be added to the operator container. | |
 | operatorDeployment.operatorPod.operatorContainer.envFrom | Custom envFrom to be added to the operator container, e.g. for downward API. | |
 | operatorDeployment.operatorPod.operatorContainer.probes | Probe config for the operator container. | |
-| operatorDeployment.operatorPod.operatorContainer.securityContext | Security context overrides for the operator container. | run as non root for baseline secuirty standard compliance |
+| operatorDeployment.operatorPod.operatorContainer.securityContext | Security context overrides for the operator container. | run as non root for baseline security standard compliance |
 | operatorDeployment.operatorPod.operatorContainer.resources | Resources for the operator container. | memory 2Gi, ephemeral storage 2Gi and 1 cpu |
 | operatorDeployment.additionalContainers | Additional containers to be added to the operator pod, e.g. sidecar. | |
 | operatorRbac.serviceAccount.create | Whether to create service account for operator to use. | true |
````
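As an illustration of the `securityContext` row fixed above, a hedged values snippet that keeps the non-root baseline; the field path comes from the table, while the concrete settings are assumptions:

```yaml
# Sketch: security context override for the operator container.
# runAsNonRoot preserves the chart's baseline-compliance default;
# allowPrivilegeEscalation is an illustrative hardening setting.
operatorDeployment:
  operatorPod:
    operatorContainer:
      securityContext:
        runAsNonRoot: true
        allowPrivilegeEscalation: false
```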
````diff
@@ -125,7 +125,7 @@ following table:
 For more information check the [Helm documentation](https://helm.sh/docs/helm/helm_install/).
 
 __Notice__: The pod resources should be set as your workload in different environments to
-archive a matched K8s pod QoS. See
+achieve a matched K8s pod QoS. See
 also [Pod Quality of Service Classes](https://kubernetes.io/docs/concepts/workloads/pods/pod-qos/#quality-of-service-classes).
 
 ## Operator Health(Liveness) Probe with Sentinel Resource
````

docs/spark_custom_resources.md

Lines changed: 4 additions & 4 deletions

````diff
@@ -225,9 +225,9 @@ sample restart config snippet:
 
 ``` yaml
 restartConfig:
-  # accptable values are 'Never', 'Always', 'OnFailure' and 'OnInfrastructureFailure'
+  # acceptable values are 'Never', 'Always', 'OnFailure' and 'OnInfrastructureFailure'
   restartPolicy: Never
-  # operator would retry the application if configured. All resources from current attepmt
+  # operator would retry the application if configured. All resources from current attempt
   # would be deleted before starting next attempt
   maxRestartAttempts: 3
   # backoff time (in millis) that operator would wait before next attempt
````
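Since `restartPolicy: Never` makes the retry fields moot, a sketch of a retrying variant may read more naturally; the accepted values come from the comment in the snippet, and the backoff field is omitted here because its name is truncated in this hunk:

```yaml
# Sketch: retry on application failure, at most 3 attempts; resources
# from the current attempt are deleted before the next one starts.
restartConfig:
  restartPolicy: OnFailure
  maxRestartAttempts: 3
```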
````diff
@@ -239,7 +239,7 @@ restartConfig:
 It's possible to configure applications to be proactively terminated and resubmitted in particular
 cases to avoid resource deadlock.
 
-| Field | Type | Default Value | Descritpion |
+| Field | Type | Default Value | Description |
 |-------|------|---------------|-------------|
 | .spec.applicationTolerations.applicationTimeoutConfig.driverStartTimeoutMillis | integer | 300000 | Time to wait for driver reaches running state after requested driver. |
 | .spec.applicationTolerations.applicationTimeoutConfig.executorStartTimeoutMillis | integer | 300000 | Time to wait for driver to acquire minimal number of running executors. |
````
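Spelled out as a custom-resource fragment, the two timeout fields from the table look like this; the paths and 300000 ms defaults come from the table itself, while the surrounding shape is an assumption:

```yaml
# Sketch: proactive-termination timeouts at their documented defaults.
spec:
  applicationTolerations:
    applicationTimeoutConfig:
      driverStartTimeoutMillis: 300000
      executorStartTimeoutMillis: 300000
```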
````diff
@@ -270,7 +270,7 @@ sparkConf:
 Spark would try to bring up 10 executors as defined in SparkConf. In addition, from
 operator perspective,
 
-* If Spark app acquires less than 5 executors in given tine window (.spec.
+* If Spark app acquires less than 5 executors in given time window (.spec.
   applicationTolerations.applicationTimeoutConfig.executorStartTimeoutMillis) after
   submitted, it would be shut down proactively in order to avoid resource deadlock.
 * Spark app would be marked as 'RunningWithBelowThresholdExecutors' if it loses executors after
````

spark-operator/src/main/java/org/apache/spark/k8s/operator/config/SparkOperatorConf.java

Lines changed: 1 addition & 1 deletion

````diff
@@ -120,7 +120,7 @@ public final class SparkOperatorConf {
           .key("spark.kubernetes.operator.reconciler.foregroundRequestTimeoutSeconds")
           .enableDynamicOverride(true)
           .description(
-              "Timeout (in seconds) to for requests made to API server. This "
+              "Timeout (in seconds) for requests made to API server. This "
                   + "applies only to foreground requests.")
           .typeParameterClass(Long.class)
           .defaultValue(30L)
````
