@@ -6,9 +6,45 @@ title: Spark on Kubernetes Integration Tests
 # Running the Kubernetes Integration Tests
 
 Note that the integration test framework is currently being heavily revised and
-is subject to change.
+is subject to change. Note that currently the integration tests only run with Java 8.
 
-Note that currently the integration tests only run with Java 8.
+As shorthand to run the tests against any given cluster, you can use the `e2e/runner.sh` script.
+The script assumes that you have a functioning Kubernetes cluster (1.6+) with kubectl
+configured to access it. The master URL of the currently configured cluster on your
+machine can be discovered as follows:
+
+```
+$ kubectl cluster-info
+
+Kubernetes master is running at https://xyz
+```
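The URL printed on that line is the value to pass to the runner script's `-m` flag. As a minimal shell sketch (not part of the change itself; the sample line and the parsing pattern are illustrative assumptions), the URL can be extracted like this:

```shell
# Extract the first https:// URL from `kubectl cluster-info`-style output.
# In practice you would pipe the real command instead:
#   kubectl cluster-info | grep -oE 'https://[^ ]+' | head -n1
# Here we parse the sample line shown in the docs.
info="Kubernetes master is running at https://xyz"
MASTER_URL=$(printf '%s\n' "$info" | grep -oE 'https://[^ ]+' | head -n1)
echo "$MASTER_URL"
```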
+
+If you want to use a local [minikube](https://github.com/kubernetes/minikube) cluster,
+the minimum tested version is 0.23.0 with the kube-dns addon enabled; the
+recommended configuration is 3 CPUs and 4G of memory. There is also a wrapper
+script for running on minikube, `e2e/e2e-minikube.sh`, for testing the
+apache/spark repo in particular.
+
+```
+$ minikube start --memory 4000 --cpus 3
+```
+
+If you're using a non-local cluster, you must provide an image repository
+to which you have write access, using the `-i` option, in order to store the
+Docker images generated during the test.
+
+Example usages of the script:
+
+```
+$ ./e2e/runner.sh -m https://xyz -i docker.io/foxish -d cloud
+$ ./e2e/runner.sh -m https://xyz -i test -d minikube
+$ ./e2e/runner.sh -m https://xyz -i test -r https://github.com/my-spark/spark -d minikube
+```
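Reading the examples, the flags appear to compose mechanically: `-m` takes the master URL, `-i` the image repository, `-d` the deployment backend (`cloud` or `minikube`), and `-r` optionally a Spark repository to test. A small sketch of assembling such an invocation (the variable names are illustrative assumptions, not part of the script):

```shell
# Build a runner.sh command line from its parts; the values are the
# sample placeholders used in the examples above.
MASTER_URL="https://xyz"
IMAGE_REPO="test"
DEPLOY_MODE="minikube"
CMD="./e2e/runner.sh -m $MASTER_URL -i $IMAGE_REPO -d $DEPLOY_MODE"
echo "$CMD"
```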
+
+# Detailed Documentation
+
+## Running the tests using maven
 
 Running the integration tests requires a Spark distribution package tarball that
 contains Spark jars, submission clients, etc. You can download a tarball from
@@ -40,7 +76,7 @@ $ mvn clean integration-test \
   -Dspark-distro-tgz=spark/spark-2.3.0-SNAPSHOT-bin.tgz
 ```
 
-# Running against an arbitrary cluster
+## Running against an arbitrary cluster
 
 In order to run against any cluster, use the following:
 ```sh
@@ -49,7 +85,7 @@ $ mvn clean integration-test \
   -DextraScalaTestArgs="-Dspark.kubernetes.test.master=k8s://https://<master> -Dspark.docker.test.driverImage=<driver-image> -Dspark.docker.test.executorImage=<executor-image>"
 ```
 
-# Preserve the Minikube VM
+## Preserve the Minikube VM
 
 The integration tests make use of
 [Minikube](https://github.com/kubernetes/minikube), which fires up a virtual
@@ -64,7 +100,7 @@ $ mvn clean integration-test \
   -DextraScalaTestArgs=-Dspark.docker.test.persistMinikube=true
 ```
 
-# Reuse the previous Docker images
+## Reuse the previous Docker images
 
 The integration tests build a number of Docker images, which takes some time.
 By default, the images are built every time the tests run. You may want to skip