This repository was archived by the owner on Jan 9, 2020. It is now read-only.

Commit 6f6cfd6

liyinan926 authored and foxish committed
Allow number of executor cores to have fractional values (#361)
This commit addresses issue #359 by allowing the `spark.executor.cores` configuration key to take fractional values, e.g., 0.5 or 1.5. The value is used as the CPU request when creating the executor pods; Kubernetes allows CPU requests to be fractional. When the value is passed to the executor process through the environment variable `SPARK_EXECUTOR_CORES`, it is rounded up to the nearest integer, as `CoarseGrainedExecutorBackend` requires an integral core count. Signed-off-by: Yinan Li <[email protected]>
1 parent 8b3248f commit 6f6cfd6

File tree

1 file changed

+4
-3
lines changed


resource-managers/kubernetes/core/src/main/scala/org/apache/spark/scheduler/cluster/kubernetes/KubernetesClusterSchedulerBackend.scala

Lines changed: 4 additions & 3 deletions
@@ -107,7 +107,7 @@ private[spark] class KubernetesClusterSchedulerBackend(
       MEMORY_OVERHEAD_MIN))
   private val executorMemoryWithOverhead = executorMemoryMb + memoryOverheadMb

-  private val executorCores = conf.getOption("spark.executor.cores").getOrElse("1")
+  private val executorCores = conf.getDouble("spark.executor.cores", 1d)
   private val executorLimitCores = conf.getOption(KUBERNETES_EXECUTOR_LIMIT_CORES.key)

   private implicit val requestExecutorContext = ExecutionContext.fromExecutorService(
@@ -377,7 +377,7 @@ private[spark] class KubernetesClusterSchedulerBackend(
       .withAmount(s"${executorMemoryWithOverhead}M")
       .build()
     val executorCpuQuantity = new QuantityBuilder(false)
-      .withAmount(executorCores)
+      .withAmount(executorCores.toString)
       .build()
     val executorExtraClasspathEnv = executorExtraClasspath.map { cp =>
       new EnvVarBuilder()
@@ -388,7 +388,8 @@ private[spark] class KubernetesClusterSchedulerBackend(
     val requiredEnv = Seq(
       (ENV_EXECUTOR_PORT, executorPort.toString),
       (ENV_DRIVER_URL, driverUrl),
-      (ENV_EXECUTOR_CORES, executorCores),
+      // Executor backend expects integral value for executor cores, so round it up to an int.
+      (ENV_EXECUTOR_CORES, math.ceil(executorCores).toInt.toString),
       (ENV_EXECUTOR_MEMORY, executorMemoryString),
       (ENV_APPLICATION_ID, applicationId()),
       (ENV_EXECUTOR_ID, executorId),
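The two-sided handling above can be sketched as a standalone snippet: the fractional value goes to Kubernetes as-is, while the executor backend gets a ceiling-rounded integer. The object and helper names below (`ExecutorCoresDemo`, `cpuRequest`, `backendCores`) are hypothetical illustrations, not identifiers from the patch.

```scala
// Minimal sketch, assuming only the rounding logic from the commit.
object ExecutorCoresDemo {
  // Kubernetes accepts fractional CPU quantities, so the raw double
  // (from conf.getDouble("spark.executor.cores", 1d)) is used directly.
  def cpuRequest(executorCores: Double): String = executorCores.toString

  // CoarseGrainedExecutorBackend requires an integral core count, so the
  // value exported as SPARK_EXECUTOR_CORES is rounded up to the nearest int.
  def backendCores(executorCores: Double): Int = math.ceil(executorCores).toInt

  def main(args: Array[String]): Unit = {
    println(cpuRequest(0.5))   // 0.5  -> pod cpu request stays fractional
    println(backendCores(0.5)) // 1    -> backend sees a whole core
    println(backendCores(1.5)) // 2
  }
}
```

Rounding up (rather than down or to nearest) guarantees the executor process never claims zero cores when a fractional request like 0.5 is configured.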
