
Commit eea4a03

Leemoonsoo authored and dongjoon-hyun committed
[MINOR][K8S] Invalid property "spark.driver.pod.name" is referenced in docs.
## What changes were proposed in this pull request?

"Running on Kubernetes" references `spark.driver.pod.name` in a few places, and it should be `spark.kubernetes.driver.pod.name`.

## How was this patch tested?

See changes.

Closes apache#23133 from Leemoonsoo/fix-driver-pod-name-prop.

Authored-by: Lee moon soo <[email protected]>
Signed-off-by: Dongjoon Hyun <[email protected]>
1 parent: 0f56977

1 file changed: +2 −2 lines


docs/running-on-kubernetes.md

Lines changed: 2 additions & 2 deletions
@@ -166,7 +166,7 @@ hostname via `spark.driver.host` and your spark driver's port to `spark.driver.p
 
 ### Client Mode Executor Pod Garbage Collection
 
-If you run your Spark driver in a pod, it is highly recommended to set `spark.driver.pod.name` to the name of that pod.
+If you run your Spark driver in a pod, it is highly recommended to set `spark.kubernetes.driver.pod.name` to the name of that pod.
 When this property is set, the Spark scheduler will deploy the executor pods with an
 [OwnerReference](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/), which in turn will
 ensure that once the driver pod is deleted from the cluster, all of the application's executor pods will also be deleted.
@@ -175,7 +175,7 @@ an OwnerReference pointing to that pod will be added to each executor pod's Owne
 setting the OwnerReference to a pod that is not actually that driver pod, or else the executors may be terminated
 prematurely when the wrong pod is deleted.
 
-If your application is not running inside a pod, or if `spark.driver.pod.name` is not set when your application is
+If your application is not running inside a pod, or if `spark.kubernetes.driver.pod.name` is not set when your application is
 actually running in a pod, keep in mind that the executor pods may not be properly deleted from the cluster when the
 application exits. The Spark scheduler attempts to delete these pods, but if the network request to the API server fails
 for any reason, these pods will remain in the cluster. The executor processes should exit when they cannot reach the
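
For reference, `spark.kubernetes.driver.pod.name` is the property a client-mode application sets when its driver runs inside a pod. Below is a minimal sketch in Scala, assuming the pod's own name is injected into a hypothetical `POD_NAME` environment variable (for example via the Kubernetes downward API) and that the master URL and container image are placeholders:

```scala
import org.apache.spark.sql.SparkSession

// Client-mode driver running inside a Kubernetes pod. The pod name is assumed
// to be available as the POD_NAME environment variable (hypothetical setup).
val spark = SparkSession.builder()
  .appName("client-mode-gc-example")
  .master("k8s://https://<k8s-apiserver-host>:<k8s-apiserver-port>")    // placeholder API server address
  .config("spark.kubernetes.container.image", "<spark-executor-image>") // placeholder executor image
  .config("spark.kubernetes.driver.pod.name", sys.env("POD_NAME"))      // the corrected property name
  .getOrCreate()
```

With `spark.kubernetes.driver.pod.name` set to the driver's own pod, the executor pods are created with an OwnerReference to that pod, so deleting the driver pod also garbage-collects the executors.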
