
Commit a4f9ced

Resync from kube upstream (apache-spark-on-k8s#231)
* Allow the spark driver to find shuffle pods in a specified namespace. The conf property `spark.kubernetes.shuffle.namespace` specifies the namespace of the shuffle pods. In normal cases, only one "shuffle daemonset" is deployed and shared by all spark pods, and the spark driver should be able to list and watch shuffle pods in the namespace specified by the user. Note: by default, the spark driver pod does not have authority to list and watch shuffle pods in another namespace, so some action is needed to grant it that authority. For example, the ABAC policy below works:

  ```
  {"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"group": "system:serviceaccounts", "namespace": "SHUFFLE_NAMESPACE", "resource": "pods", "readonly": true}}
  ```

  (cherry picked from commit a6291c6)

* Bypass init-containers when possible (cherry picked from commit 08fe944)

* Config for hard cpu limit on pods; default unlimited (cherry picked from commit 8b3248f)

* Allow the number of executor cores to have fractional values. This commit tries to solve issue apache-spark-on-k8s#359 by allowing the `spark.executor.cores` configuration key to take fractional values, e.g., 0.5 or 1.5. The value is used to specify the cpu request when creating the executor pods, which Kubernetes allows to be fractional. When the value is passed to the executor process through the environment variable `SPARK_EXECUTOR_CORES`, it is rounded up to the nearest integer as required by `CoarseGrainedExecutorBackend`. Signed-off-by: Yinan Li <[email protected]> (cherry picked from commit 6f6cfd6)

* Python bindings for launching PySpark jobs from the JVM (cherry picked from commit befcf0a; conflicts: README.md, core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala). Squashed commits:
  * Adding PySpark submit functionality. Launching Python from the JVM
  * Addressing Scala idioms related to PR351
  * Removing `extends Logging`, which was necessary for LogInfo
  * Refactored code to leverage the ContainerLocalizedFileResolver
  * Modified unit tests so that they would pass
  * Modified unit test input to pass unit tests
  * Set up a working environment for PySpark integration tests
  * Comment out Python thread logic until Jenkins has python in Python
  * Modifying PythonExec to pass on Jenkins
  * Modifying python exec
  * Added unit tests to ClientV2 and refactored to include pyspark submission resources
  * Modified unit test check
  * Scalastyle
  * PR 348 file conflicts
  * Refactored unit tests and styles
  * Further Scala styling and logic
  * Modified unit tests to be more specific towards the class in question
  * Removed space delimiting for methods
  * Submission client redesign to use a step-based builder pattern. This change overhauls the underlying architecture of the submission client, but it is intended to entirely preserve the existing behavior of Spark applications, so users will find it to be an invisible change. The philosophy behind this design is to reconsider the breakdown of the submission process. It operates on the abstraction of "submission steps", which are transformation functions that take the previous state of the driver and return the new state of the driver. The driver's state includes its Spark configurations and the Kubernetes resources that will be used to deploy it. Such a refactor moves away from a features-first API design, which considers different containers to serve a set of features. The previous design, for example, had a container files resolver API object that returned different resolutions of the dependencies added by the user; however, it was up to the main Client to know how to intelligently invoke all of those APIs, so the API surface area of the file resolver became untenably large and it was not intuitive how it was to be used or extended. This design changes the encapsulation layout: every module is now responsible for changing the driver specification directly. An orchestrator builds the correct chain of steps and hands it to the client, which then calls it verbatim. The main client then makes any final modifications that put the different pieces of the driver together, particularly to attach the driver container itself to the pod and to apply the Spark configuration as command-line arguments.
  * Don't add the init-container step if all URIs are local.
  * Python arguments patch + tests + docs
  * Revert "Python arguments patch + tests + docs" (reverts commit 4533df2)
  * Revert "Don't add the init-container step if all URIs are local." (reverts commit e103225)
  * Revert "Submission client redesign to use a step-based builder pattern." (reverts commit 5499f6d)
  * Style changes
  * Space for styling

* Submission client redesign to use a step-based builder pattern, re-applied (cherry picked from commit 0f4368f; conflicts: resource-managers/kubernetes/core/src/main/scala/org/apache/spark/scheduler/cluster/kubernetes/KubernetesClusterSchedulerBackend.scala). The redesign itself is described in full under the PySpark item above. Squashed commits:
  * Submission client redesign to use a step-based builder pattern.
  * Add a unit test for BaseSubmissionStep.
  * Add unit test for kubernetes credentials mounting.
  * Add unit test for InitContainerBootstrapStep.
  * Unit tests for initContainer
  * Add a unit test for DependencyResolutionStep.
  * Further modifications to InitContainer unit tests
  * Use of resolver in PythonStep and unit tests for PythonStep
  * Refactoring of init unit tests and PythonStep resolver logic
  * Add unit test for KubernetesSubmissionStepsOrchestrator.
  * Refactoring and addition of secret trustStore+Cert checks in a SubmissionStepSuite
  * Added SparkPodInitContainerBootstrapSuite
  * Added InitContainerResourceStagingServerSecretPluginSuite
  * Style in unit tests
  * Extremely minor style fix in variable naming
  * Address comments.
  * Rename class for consistency.
  * Attempt to make spacing consistent. Multi-line methods should have four-space indentation for arguments that aren't on the same line as the method call itself... but this is difficult to do consistently given how IDEs handle Scala multi-line indentation in most cases.

* Add implicit conversions to imports; otherwise we can get a Scalastyle error when building from SBT. (cherry picked from commit 7deaaa3)

* Fix import order and scalastyle (test with ./dev/scalastyle)

* Fix submit job errors (cherry picked from commit 8751a9a)

* Add node selectors for driver and executor pods (cherry picked from commit 6dbd32e)

* Retry binding the server to a random port in the resource staging server test. Squashed commits:
  * Retry binding server to random port in the resource staging server test.
  * Break if successful start
  * Start server in try block.
  * Fix scalastyle
  * More rigorous cleanup logic. Increment port numbers.
  * Move around more exception logic.
  * More exception refactoring.
  * Remove whitespace
  * Fix test
  * Rename variable
  * Scalastyle fix
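As a rough illustration of the fractional-core handling described above (a minimal sketch, not the exact code in this commit): the cpu request sent to Kubernetes keeps the fractional value, while the value exported to the executor process is rounded up.

```scala
import org.apache.spark.SparkConf

// Sketch only: fractional cores become the pod's cpu request as-is, while the
// value handed to CoarseGrainedExecutorBackend via SPARK_EXECUTOR_CORES is
// rounded up to a whole number.
val conf = new SparkConf(false).set("spark.executor.cores", "1.5")
val executorCores: Double = conf.getDouble("spark.executor.cores", 1.0)
val cpuRequest: String = executorCores.toString            // "1.5" -> Kubernetes cpu request
val executorCoresEnv: Int = math.ceil(executorCores).toInt // 2 -> SPARK_EXECUTOR_CORES
```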
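The "submission steps" abstraction described in the redesign can be pictured as a chain of pure transformations over a driver specification. The following is a minimal sketch of that shape; the type and method names here are illustrative and not necessarily those used in the commit.

```scala
import io.fabric8.kubernetes.api.model.{Container, HasMetadata, Pod}
import org.apache.spark.SparkConf

// Hypothetical driver state: Spark conf plus the Kubernetes resources built so far.
case class DriverSpec(
  pod: Pod,
  driverContainer: Container,
  otherResources: Seq[HasMetadata],
  sparkConf: SparkConf)

// A submission step takes the previous state of the driver and returns the new state.
trait DriverConfigurationStep {
  def configureDriver(spec: DriverSpec): DriverSpec
}

// The orchestrator builds the chain of steps; the client applies it verbatim.
def runSteps(initial: DriverSpec, steps: Seq[DriverConfigurationStep]): DriverSpec =
  steps.foldLeft(initial)((spec, step) => step.configureDriver(spec))
```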
1 parent 4d101d4 commit a4f9ced

File tree

67 files changed: +3442 −2472 lines


core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala

Lines changed: 14 additions & 6 deletions

@@ -279,8 +279,8 @@ object SparkSubmit extends CommandLineUtils {
     (clusterManager, deployMode) match {
       case (KUBERNETES, CLIENT) =>
         printErrorAndExit("Client mode is currently not supported for Kubernetes.")
-      case (KUBERNETES, CLUSTER) if args.isPython || args.isR =>
-        printErrorAndExit("Kubernetes does not currently support python or R applications.")
+      case (KUBERNETES, CLUSTER) if args.isR =>
+        printErrorAndExit("Kubernetes does not currently support R applications.")
       case (STANDALONE, CLUSTER) if args.isPython =>
         printErrorAndExit("Cluster deploy mode is currently not supported for python " +
           "applications on standalone clusters.")
@@ -373,7 +373,6 @@ object SparkSubmit extends CommandLineUtils {
       }.orNull
     }

-
     // If we're running a python app, set the main class to our specific python runner
     if (args.isPython && deployMode == CLIENT) {
       if (args.primaryResource == PYSPARK_SHELL) {
@@ -651,9 +650,18 @@ object SparkSubmit extends CommandLineUtils {

     if (isKubernetesCluster) {
       childMainClass = "org.apache.spark.deploy.kubernetes.submit.Client"
-      childArgs += args.primaryResource
-      childArgs += args.mainClass
-      childArgs ++= args.childArgs
+      if (args.isPython) {
+        childArgs ++= Array("--primary-py-file", args.primaryResource)
+        childArgs ++= Array("--main-class", "org.apache.spark.deploy.PythonRunner")
+        childArgs ++= Array("--other-py-files", args.pyFiles)
+      } else {
+        childArgs ++= Array("--primary-java-resource", args.primaryResource)
+        childArgs ++= Array("--main-class", args.mainClass)
+      }
+      args.childArgs.foreach { arg =>
+        childArgs += "--arg"
+        childArgs += arg
+      }
     }

     // Load any properties specified through --conf and the default properties file

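For a PySpark submission, the branch added above produces child arguments of roughly the following shape; the values here are illustrative, taken from the documentation example added in this commit.

```scala
// Illustrative only: what childArgs might contain after the isPython branch above,
// for a submission with one extra --py-files entry and one application argument.
val childArgs = Seq(
  "--primary-py-file", "local:///opt/spark/examples/src/main/python/pi.py",
  "--main-class", "org.apache.spark.deploy.PythonRunner",
  "--other-py-files", "local:///opt/spark/examples/src/main/python/sort.py",
  "--arg", "100")
```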
docs/running-on-kubernetes.md

Lines changed: 50 additions & 0 deletions

@@ -182,6 +182,32 @@ The above mechanism using `kubectl proxy` can be used when we have authentication
 kubernetes-client library does not support. Authentication using X509 Client Certs and OAuth tokens
 is currently supported.

+### Running PySpark
+
+Running PySpark on Kubernetes leverages the same spark-submit logic when launching on Yarn and Mesos.
+Python files can be distributed by including, in the conf, `--py-files`
+
+Below is an example submission:
+
+
+```
+bin/spark-submit \
+  --deploy-mode cluster \
+  --master k8s://http://127.0.0.1:8001 \
+  --kubernetes-namespace default \
+  --conf spark.executor.memory=500m \
+  --conf spark.driver.memory=1G \
+  --conf spark.driver.cores=1 \
+  --conf spark.executor.cores=1 \
+  --conf spark.executor.instances=1 \
+  --conf spark.app.name=spark-pi \
+  --conf spark.kubernetes.driver.docker.image=spark-driver-py:latest \
+  --conf spark.kubernetes.executor.docker.image=spark-executor-py:latest \
+  --conf spark.kubernetes.initcontainer.docker.image=spark-init:latest \
+  --py-files local:///opt/spark/examples/src/main/python/sort.py \
+  local:///opt/spark/examples/src/main/python/pi.py 100
+```
+
 ## Dynamic Executor Scaling

 Spark on Kubernetes supports Dynamic Allocation with cluster mode. This mode requires running
@@ -720,6 +746,30 @@ from the other deployment modes. See the [configuration page](configuration.html)
     Docker image pull policy used when pulling Docker images with Kubernetes.
   </td>
 </tr>
+<tr>
+  <td><code>spark.kubernetes.driver.limit.cores</code></td>
+  <td>(none)</td>
+  <td>
+    Specify the hard cpu limit for the driver pod
+  </td>
+</tr>
+<tr>
+  <td><code>spark.kubernetes.executor.limit.cores</code></td>
+  <td>(none)</td>
+  <td>
+    Specify the hard cpu limit for a single executor pod
+  </td>
+</tr>
+<tr>
+  <td><code>spark.kubernetes.node.selector.[labelKey]</code></td>
+  <td>(none)</td>
+  <td>
+    Adds to the node selector of the driver pod and executor pods, with key <code>labelKey</code> and the value as the
+    configuration's value. For example, setting <code>spark.kubernetes.node.selector.identifier</code> to <code>myIdentifier</code>
+    will result in the driver pod and executors having a node selector with key <code>identifier</code> and value
+    <code>myIdentifier</code>. Multiple node selector keys can be added by setting multiple configurations with this prefix.
+  </td>
+</tr>
 </table>


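Taken together, the properties documented above are set like any other Spark configuration; a minimal sketch, with example values only:

```scala
import org.apache.spark.SparkConf

// Example values only: cap driver/executor cpu and pin pods to labelled nodes.
val conf = new SparkConf()
  .set("spark.kubernetes.driver.limit.cores", "1")
  .set("spark.kubernetes.executor.limit.cores", "2")
  .set("spark.kubernetes.node.selector.identifier", "myIdentifier")
```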
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/kubernetes/ConfigurationUtils.scala

Lines changed: 14 additions & 0 deletions

@@ -65,4 +65,18 @@ object ConfigurationUtils extends Logging {
     }
     combined.toMap
   }
+
+  def parsePrefixedKeyValuePairs(
+      sparkConf: SparkConf,
+      prefix: String,
+      configType: String): Map[String, String] = {
+    val fromPrefix = sparkConf.getAllWithPrefix(prefix)
+    fromPrefix.groupBy(_._1).foreach {
+      case (key, values) =>
+        require(values.size == 1,
+          s"Cannot have multiple values for a given $configType key, got key $key with" +
+            s" values $values")
+    }
+    fromPrefix.toMap
+  }
 }

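A brief sketch of how the new helper resolves prefixed keys (the node-selector prefix added in this commit is the motivating case); note that `getAllWithPrefix` returns the keys with the prefix already stripped:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.deploy.kubernetes.ConfigurationUtils

val conf = new SparkConf(false)
  .set("spark.kubernetes.node.selector.identifier", "myIdentifier")
  .set("spark.kubernetes.node.selector.zone", "us-east-1a")

// Expected result: Map("identifier" -> "myIdentifier", "zone" -> "us-east-1a")
val nodeSelectors: Map[String, String] = ConfigurationUtils.parsePrefixedKeyValuePairs(
  conf, "spark.kubernetes.node.selector.", "node selector")
```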
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/kubernetes/InitContainerResourceStagingServerSecretPlugin.scala

Lines changed: 21 additions & 17 deletions

@@ -16,7 +16,7 @@
  */
 package org.apache.spark.deploy.kubernetes

-import io.fabric8.kubernetes.api.model.{ContainerBuilder, PodBuilder, Secret}
+import io.fabric8.kubernetes.api.model.{Container, ContainerBuilder, Pod, PodBuilder, Secret}

 import org.apache.spark.deploy.kubernetes.constants._

@@ -27,13 +27,13 @@ private[spark] trait InitContainerResourceStagingServerSecretPlugin {
    * from a resource staging server.
    */
   def mountResourceStagingServerSecretIntoInitContainer(
-    initContainer: ContainerBuilder): ContainerBuilder
+    initContainer: Container): Container

   /**
    * Configure the pod to attach a Secret volume which hosts secret files allowing the
    * init-container to retrieve dependencies from the resource staging server.
    */
-  def addResourceStagingServerSecretVolumeToPod(basePod: PodBuilder): PodBuilder
+  def addResourceStagingServerSecretVolumeToPod(basePod: Pod): Pod
 }

 private[spark] class InitContainerResourceStagingServerSecretPluginImpl(
@@ -42,21 +42,25 @@ private[spark] class InitContainerResourceStagingServerSecretPluginImpl(
   extends InitContainerResourceStagingServerSecretPlugin {

   override def mountResourceStagingServerSecretIntoInitContainer(
-    initContainer: ContainerBuilder): ContainerBuilder = {
-    initContainer.addNewVolumeMount()
-      .withName(INIT_CONTAINER_SECRET_VOLUME_NAME)
-      .withMountPath(initContainerSecretMountPath)
-      .endVolumeMount()
+    initContainer: Container): Container = {
+    new ContainerBuilder(initContainer)
+      .addNewVolumeMount()
+        .withName(INIT_CONTAINER_SECRET_VOLUME_NAME)
+        .withMountPath(initContainerSecretMountPath)
+        .endVolumeMount()
+      .build()
   }

-  override def addResourceStagingServerSecretVolumeToPod(basePod: PodBuilder): PodBuilder = {
-    basePod.editSpec()
-      .addNewVolume()
-        .withName(INIT_CONTAINER_SECRET_VOLUME_NAME)
-        .withNewSecret()
-          .withSecretName(initContainerSecretName)
-          .endSecret()
-        .endVolume()
-      .endSpec()
+  override def addResourceStagingServerSecretVolumeToPod(basePod: Pod): Pod = {
+    new PodBuilder(basePod)
+      .editSpec()
+        .addNewVolume()
+          .withName(INIT_CONTAINER_SECRET_VOLUME_NAME)
+          .withNewSecret()
+            .withSecretName(initContainerSecretName)
+            .endSecret()
+          .endVolume()
+        .endSpec()
+      .build()
   }
 }
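With the trait now passing concrete `Pod` and `Container` values instead of builders, a caller might use the plugin roughly as follows. This is a sketch: the constructor arguments and object names are assumptions for illustration, not code from this commit.

```scala
import io.fabric8.kubernetes.api.model.{Container, ContainerBuilder, Pod, PodBuilder}

// Assumed constructor arguments: (secret name, mount path) -- check the class for the real signature.
val plugin = new InitContainerResourceStagingServerSecretPluginImpl(
  "spark-init-secret", "/mnt/secrets/spark-init")

val basePod: Pod = new PodBuilder()
  .withNewMetadata().withName("spark-driver").endMetadata()
  .withNewSpec().endSpec()
  .build()
val initContainer: Container = new ContainerBuilder().withName("spark-init").build()

// Each call now returns a new immutable object rather than mutating a shared builder.
val podWithSecretVolume: Pod = plugin.addResourceStagingServerSecretVolumeToPod(basePod)
val initContainerWithSecret: Container =
  plugin.mountResourceStagingServerSecretIntoInitContainer(initContainer)
```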

resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/kubernetes/PodWithDetachedInitContainer.scala

Lines changed: 24 additions & 0 deletions

@@ -0,0 +1,24 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.spark.deploy.kubernetes
+
+import io.fabric8.kubernetes.api.model.{Container, Pod}
+
+private[spark] case class PodWithDetachedInitContainer(
+  pod: Pod,
+  initContainer: Container,
+  mainContainer: Container)

resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/kubernetes/SparkPodInitContainerBootstrap.scala

Lines changed: 27 additions & 23 deletions

@@ -19,19 +19,25 @@ package org.apache.spark.deploy.kubernetes
 import io.fabric8.kubernetes.api.model.{ContainerBuilder, EmptyDirVolumeSource, PodBuilder, VolumeMount, VolumeMountBuilder}

 import org.apache.spark.deploy.kubernetes.constants._
-import org.apache.spark.deploy.kubernetes.submit.{ContainerNameEqualityPredicate, InitContainerUtil}

+/**
+ * This is separated out from the init-container steps API because this component can be reused to
+ * set up the init-container for executors as well.
+ */
 private[spark] trait SparkPodInitContainerBootstrap {
   /**
    * Bootstraps an init-container that downloads dependencies to be used by a main container.
    * Note that this primarily assumes that the init-container's configuration is being provided
    * by a ConfigMap that was installed by some other component; that is, the implementation
    * here makes no assumptions about how the init-container is specifically configured. For
    * example, this class is unaware if the init-container is fetching remote dependencies or if
-   * it is fetching dependencies from a resource staging server.
+   * it is fetching dependencies from a resource staging server. Additionally, the container itself
+   * is not actually attached to the pod, but the init container is returned so it can be attached
+   * by InitContainerUtil after the caller has decided to make any changes to it.
    */
   def bootstrapInitContainerAndVolumes(
-    mainContainerName: String, originalPodSpec: PodBuilder): PodBuilder
+    originalPodWithUnattachedInitContainer: PodWithDetachedInitContainer)
+    : PodWithDetachedInitContainer
 }

 private[spark] class SparkPodInitContainerBootstrapImpl(
@@ -41,13 +47,11 @@ private[spark] class SparkPodInitContainerBootstrapImpl(
     filesDownloadPath: String,
     downloadTimeoutMinutes: Long,
     initContainerConfigMapName: String,
-    initContainerConfigMapKey: String,
-    resourceStagingServerSecretPlugin: Option[InitContainerResourceStagingServerSecretPlugin])
+    initContainerConfigMapKey: String)
   extends SparkPodInitContainerBootstrap {

   override def bootstrapInitContainerAndVolumes(
-      mainContainerName: String,
-      originalPodSpec: PodBuilder): PodBuilder = {
+      podWithDetachedInitContainer: PodWithDetachedInitContainer): PodWithDetachedInitContainer = {
     val sharedVolumeMounts = Seq[VolumeMount](
       new VolumeMountBuilder()
         .withName(INIT_CONTAINER_DOWNLOAD_JARS_VOLUME_NAME)
@@ -58,7 +62,7 @@ private[spark] class SparkPodInitContainerBootstrapImpl(
         .withMountPath(filesDownloadPath)
         .build())

-    val initContainer = new ContainerBuilder()
+    val initContainer = new ContainerBuilder(podWithDetachedInitContainer.initContainer)
       .withName(s"spark-init")
       .withImage(initContainerImage)
       .withImagePullPolicy(dockerImagePullPolicy)
@@ -68,11 +72,8 @@ private[spark] class SparkPodInitContainerBootstrapImpl(
         .endVolumeMount()
       .addToVolumeMounts(sharedVolumeMounts: _*)
       .addToArgs(INIT_CONTAINER_PROPERTIES_FILE_PATH)
-    val resolvedInitContainer = resourceStagingServerSecretPlugin.map { plugin =>
-      plugin.mountResourceStagingServerSecretIntoInitContainer(initContainer)
-    }.getOrElse(initContainer).build()
-    val podWithBasicVolumes = InitContainerUtil.appendInitContainer(
-      originalPodSpec, resolvedInitContainer)
+      .build()
+    val podWithBasicVolumes = new PodBuilder(podWithDetachedInitContainer.pod)
       .editSpec()
         .addNewVolume()
           .withName(INIT_CONTAINER_PROPERTIES_FILE_VOLUME)
@@ -92,17 +93,20 @@ private[spark] class SparkPodInitContainerBootstrapImpl(
           .withName(INIT_CONTAINER_DOWNLOAD_FILES_VOLUME_NAME)
           .withEmptyDir(new EmptyDirVolumeSource())
           .endVolume()
-        .editMatchingContainer(new ContainerNameEqualityPredicate(mainContainerName))
-          .addToVolumeMounts(sharedVolumeMounts: _*)
-          .addNewEnv()
-            .withName(ENV_MOUNTED_FILES_DIR)
-            .withValue(filesDownloadPath)
-            .endEnv()
-          .endContainer()
         .endSpec()
-    resourceStagingServerSecretPlugin.map { plugin =>
-      plugin.addResourceStagingServerSecretVolumeToPod(podWithBasicVolumes)
-    }.getOrElse(podWithBasicVolumes)
+      .build()
+    val mainContainerWithMountedFiles = new ContainerBuilder(
+        podWithDetachedInitContainer.mainContainer)
+      .addToVolumeMounts(sharedVolumeMounts: _*)
+      .addNewEnv()
+        .withName(ENV_MOUNTED_FILES_DIR)
+        .withValue(filesDownloadPath)
+        .endEnv()
+      .build()
+    PodWithDetachedInitContainer(
+      podWithBasicVolumes,
+      initContainer,
+      mainContainerWithMountedFiles)
   }

 }

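Because the bootstrap no longer attaches the init-container or applies the secret plugin itself, that wiring is left to the caller. The sketch below shows one plausible way to chain the pieces; the surrounding values and the final attach step are assumptions, not code from this commit.

```scala
import io.fabric8.kubernetes.api.model.{Container, ContainerBuilder, Pod}

// Assumes basePod and mainContainer were produced by earlier submission steps.
def wireInitContainer(
    bootstrap: SparkPodInitContainerBootstrap,
    secretPlugin: Option[InitContainerResourceStagingServerSecretPlugin],
    basePod: Pod,
    mainContainer: Container): PodWithDetachedInitContainer = {
  val bootstrapped = bootstrap.bootstrapInitContainerAndVolumes(
    PodWithDetachedInitContainer(basePod, new ContainerBuilder().build(), mainContainer))
  // The resource staging server secret plugin is now applied by the caller,
  // not inside the bootstrap. The init-container is still detached here;
  // attaching it to the pod (e.g. via InitContainerUtil) is left to a later step.
  secretPlugin.map { plugin =>
    bootstrapped.copy(
      pod = plugin.addResourceStagingServerSecretVolumeToPod(bootstrapped.pod),
      initContainer =
        plugin.mountResourceStagingServerSecretIntoInitContainer(bootstrapped.initContainer))
  }.getOrElse(bootstrapped)
}
```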
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/kubernetes/config.scala

Lines changed: 14 additions & 0 deletions

@@ -485,6 +485,20 @@ package object config extends Logging {
       .stringConf
       .createOptional

+  private[spark] val KUBERNETES_DRIVER_LIMIT_CORES =
+    ConfigBuilder("spark.kubernetes.driver.limit.cores")
+      .doc("Specify the hard cpu limit for the driver pod")
+      .stringConf
+      .createOptional
+
+  private[spark] val KUBERNETES_EXECUTOR_LIMIT_CORES =
+    ConfigBuilder("spark.kubernetes.executor.limit.cores")
+      .doc("Specify the hard cpu limit for a single executor pod")
+      .stringConf
+      .createOptional
+
+  private[spark] val KUBERNETES_NODE_SELECTOR_PREFIX = "spark.kubernetes.node.selector."
+
   private[spark] def resolveK8sMaster(rawMasterString: String): String = {
     if (!rawMasterString.startsWith("k8s://")) {
       throw new IllegalArgumentException("Master URL should start with k8s:// in Kubernetes mode.")

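The limit settings are plain optional strings; applying one to a pod's container would look roughly like the following fabric8 usage. This is a sketch under assumptions: the container name is made up, and the real code also sets cpu and memory requests alongside the limit.

```scala
import io.fabric8.kubernetes.api.model.{ContainerBuilder, Quantity, QuantityBuilder}
import org.apache.spark.SparkConf

val sparkConf = new SparkConf().set("spark.kubernetes.driver.limit.cores", "1")

// Optional hard limit: only add "cpu" to the container's resource limits when it is set.
val driverLimitCores: Option[String] = sparkConf.getOption("spark.kubernetes.driver.limit.cores")
val driverContainerBuilder = new ContainerBuilder().withName("spark-kubernetes-driver")
driverLimitCores.foreach { limit =>
  val cpuLimit: Quantity = new QuantityBuilder(false).withAmount(limit).build()
  driverContainerBuilder
    .withNewResources()
      .addToLimits("cpu", cpuLimit)
      .endResources()
}
val driverContainer = driverContainerBuilder.build()
```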
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/kubernetes/constants.scala

Lines changed: 2 additions & 0 deletions

@@ -67,6 +67,8 @@ package object constants {
   private[spark] val ENV_DRIVER_ARGS = "SPARK_DRIVER_ARGS"
   private[spark] val ENV_DRIVER_JAVA_OPTS = "SPARK_DRIVER_JAVA_OPTS"
   private[spark] val ENV_MOUNTED_FILES_DIR = "SPARK_MOUNTED_FILES_DIR"
+  private[spark] val ENV_PYSPARK_FILES = "PYSPARK_FILES"
+  private[spark] val ENV_PYSPARK_PRIMARY = "PYSPARK_PRIMARY"

   // Bootstrapping dependencies with the init-container
   private[spark] val INIT_CONTAINER_ANNOTATION = "pod.beta.kubernetes.io/init-containers"

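The two new environment-variable names are presumably consumed by the Python-aware driver image; a sketch of how a submission step might set them on the driver container (the helper itself is hypothetical and not part of this commit excerpt):

```scala
import io.fabric8.kubernetes.api.model.{Container, ContainerBuilder}
import org.apache.spark.deploy.kubernetes.constants._

// Illustrative: expose the primary .py file and the comma-separated --py-files list
// to the driver container through the new constants.
def withPySparkEnv(
    driverContainer: Container,
    primaryPyFile: String,
    otherPyFiles: Seq[String]): Container = {
  new ContainerBuilder(driverContainer)
    .addNewEnv()
      .withName(ENV_PYSPARK_PRIMARY)
      .withValue(primaryPyFile)
      .endEnv()
    .addNewEnv()
      .withName(ENV_PYSPARK_FILES)
      .withValue(otherPyFiles.mkString(","))
      .endEnv()
    .build()
}
```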