
Commit 21fc0d1

handling flags, readmes, and POM changes
1 parent: a59339c

File tree

- README.md
- integration-test/pom.xml
- integration-test/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/backend/minikube/MinikubeTestBackend.scala
- integration-test/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/config.scala

4 files changed: +9 -29 lines

README.md

Lines changed: 3 additions & 20 deletions

````diff
@@ -53,34 +53,17 @@ $ mvn clean integration-test \
   -DextraScalaTestArgs="-Dspark.kubernetes.test.master=k8s://https://<master> -Dspark.docker.test.driverImage=<driver-image> -Dspark.docker.test.executorImage=<executor-image>"
 ```
 
-# Preserve the Minikube VM
-
-The integration tests make use of
-[Minikube](https://github.com/kubernetes/minikube), which fires up a virtual
-machine and setup a single-node kubernetes cluster within it. By default the vm
-is destroyed after the tests are finished. If you want to preserve the vm, e.g.
-to reduce the running time of tests during development, you can pass the
-property `spark.docker.test.persistMinikube` to the test process:
-
-```
-$ mvn clean integration-test \
-  -Dspark-distro-tgz=spark/spark-2.3.0-SNAPSHOT-bin.tgz \
-  -DextraScalaTestArgs=-Dspark.docker.test.persistMinikube=true
-```
-
 # Reuse the previous Docker images
 
 The integration tests build a number of Docker images, which takes some time.
 By default, the images are built every time the tests run. You may want to skip
 re-building those images during development, if the distribution package did not
 change since the last run. You can pass the property
-`spark.docker.test.skipBuildImages` to the test process. This will work only if
-you have been setting the property `spark.docker.test.persistMinikube`, in the
-previous run since the docker daemon run inside the minikube environment. Here
-is an example:
+`spark.docker.test.skipBuildImages` to the test process.
+Here is an example:
 
 ```
 $ mvn clean integration-test \
   -Dspark-distro-tgz=spark/spark-2.3.0-SNAPSHOT-bin.tgz \
-  "-DextraScalaTestArgs=-Dspark.docker.test.persistMinikube=true -Dspark.docker.test.skipBuildImages=true"
+  "-Dspark.docker.test.skipBuildImages=true"
 ```
````
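For context on how the flag in this README change is consumed: the property is read in the test code as a JVM system property with a `"false"` default (see the MinikubeTestBackend diff below). A minimal, self-contained sketch of that pattern — the object name is hypothetical; the property name and default come from this commit:

```scala
// Minimal sketch of the flag-reading pattern this commit relies on.
// "SkipBuildImagesFlag" is a hypothetical wrapper; the property name and
// its "false" default are taken from the MinikubeTestBackend change.
object SkipBuildImagesFlag {
  def main(args: Array[String]): Unit = {
    // An absent property falls back to "false", so rebuilding the Docker
    // images remains the default behaviour.
    val skipBuildImages =
      System.getProperty("spark.docker.test.skipBuildImages", "false").toBoolean
    println(s"skipBuildImages = $skipBuildImages")
  }
}
```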

integration-test/pom.xml

Lines changed: 0 additions & 1 deletion

````diff
@@ -40,7 +40,6 @@
     <slf4j-log4j12.version>1.7.24</slf4j-log4j12.version>
     <sbt.project.name>kubernetes-integration-tests</sbt.project.name>
     <spark-distro-tgz>YOUR-SPARK-DISTRO-TARBALL-HERE</spark-distro-tgz>
-    <spark-dockerfiles-dir>YOUR-DOCKERFILES-DIR-HERE</spark-dockerfiles-dir>
     <test.exclude.tags></test.exclude.tags>
   </properties>
   <packaging>jar</packaging>
````

integration-test/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/backend/minikube/MinikubeTestBackend.scala

Lines changed: 6 additions & 7 deletions

````diff
@@ -21,38 +21,37 @@ import java.util.UUID
 import io.fabric8.kubernetes.client.DefaultKubernetesClient
 
 import org.apache.spark.deploy.k8s.integrationtest.backend.IntegrationTestBackend
-import org.apache.spark.deploy.k8s.integrationtest.config._
 import org.apache.spark.deploy.k8s.integrationtest.docker.KubernetesSuiteDockerManager
 
 private[spark] object MinikubeTestBackend extends IntegrationTestBackend {
   private var defaultClient: DefaultKubernetesClient = _
-  private val userProvidedDockerImageTag = Option(
-    System.getProperty(KUBERNETES_TEST_DOCKER_TAG_SYSTEM_PROPERTY))
+  private val userSkipBuildImages =
+    System.getProperty("spark.docker.test.skipBuildImages", "false").toBoolean
   private val resolvedDockerImageTag =
-    userProvidedDockerImageTag.getOrElse(UUID.randomUUID().toString.replaceAll("-", ""))
+    UUID.randomUUID().toString.replaceAll("-", "")
   private val dockerManager = new KubernetesSuiteDockerManager(
     Minikube.getDockerEnv, resolvedDockerImageTag)
   override def initialize(): Unit = {
     val minikubeStatus = Minikube.getMinikubeStatus
     require(minikubeStatus == MinikubeStatus.RUNNING,
       s"Minikube must be running before integration tests can execute. Current status" +
       s" is: $minikubeStatus")
-    if (userProvidedDockerImageTag.isEmpty) {
+    if (!userSkipBuildImages) {
       dockerManager.buildSparkDockerImages()
     }
     defaultClient = Minikube.getKubernetesClient
   }
 
-
   override def cleanUp(): Unit = {
     super.cleanUp()
-    if (userProvidedDockerImageTag.isEmpty) {
+    if (!userSkipBuildImages) {
       dockerManager.deleteImages()
     }
   }
 
   override def getKubernetesClient(): DefaultKubernetesClient = {
     defaultClient
   }
+
   override def dockerImageTag(): String = resolvedDockerImageTag
 }
````
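The build and delete steps are now guarded by the same flag, so a run that skips building also skips deleting, leaving the previous run's images in place. A stripped-down model of that lifecycle, with hypothetical names and `println` standing in for the real Docker calls:

```scala
// Hypothetical model of the guarded lifecycle above: images are built in
// initialize() and deleted in cleanUp() only when the user has not asked
// to skip image builds.
object GuardedImageLifecycle {
  private val skipBuildImages =
    System.getProperty("spark.docker.test.skipBuildImages", "false").toBoolean

  def initialize(): Unit =
    if (!skipBuildImages) println("building Spark Docker images")

  def cleanUp(): Unit =
    if (!skipBuildImages) println("deleting Spark Docker images")

  def main(args: Array[String]): Unit = {
    initialize()
    cleanUp()
  }
}
```

Note also that `resolvedDockerImageTag` is now always a freshly generated UUID rather than an optional user-supplied tag, which is consistent with dropping `KUBERNETES_TEST_DOCKER_TAG_SYSTEM_PROPERTY` from `config.scala` below.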

integration-test/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/config.scala

Lines changed: 0 additions & 1 deletion

````diff
@@ -17,7 +17,6 @@
 package org.apache.spark.deploy.k8s.integrationtest
 
 package object config {
-  val KUBERNETES_TEST_DOCKER_TAG_SYSTEM_PROPERTY = "spark.kubernetes.test.imageDockerTag"
   val DRIVER_DOCKER_IMAGE = "spark.kubernetes.driver.docker.image"
   val EXECUTOR_DOCKER_IMAGE = "spark.kubernetes.executor.docker.image"
   val INIT_CONTAINER_DOCKER_IMAGE = "spark.kubernetes.initcontainer.docker.image"
````
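The constants that remain are system-property keys naming the Docker images to use. An illustrative lookup — the surrounding object and `main` method are hypothetical; only the key string comes from the diff:

```scala
object ImageLookup {
  // Key string matching DRIVER_DOCKER_IMAGE in config.scala above.
  val DRIVER_DOCKER_IMAGE = "spark.kubernetes.driver.docker.image"

  def main(args: Array[String]): Unit = {
    // None when the property was not supplied on the command line.
    val driverImage = Option(System.getProperty(DRIVER_DOCKER_IMAGE))
    println(s"driver image: ${driverImage.getOrElse("<unset>")}")
  }
}
```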
