
Commit cdd694c

[SPARK-7721][INFRA] Run and generate test coverage report from Python via Jenkins
## What changes were proposed in this pull request?

### Background

The test script that generates coverage information was already merged into Spark in apache#20204, so the coverage report and site can be generated with, for example:

```
run-tests-with-coverage --python-executables=python3 --modules=pyspark-sql
```

analogous to the `run-tests` script in `./python`.

### Proposed change

The next step is to host this coverage report on `github.io` automatically via Jenkins (see https://spark-test.github.io/pyspark-coverage-site/).

This uses my testing account for Spark, spark-test, which was shared with Felix and Shivaram a long time ago for testing purposes, including AppVeyor.

To keep this short, this PR targets running coverage in [spark-master-test-sbt-hadoop-2.7](https://amplab.cs.berkeley.edu/jenkins/job/spark-master-test-sbt-hadoop-2.7/). That specific job clones the coverage-site repository and rebases the up-to-date PySpark test coverage from the latest commit, for instance as below:

```bash
# Clone PySpark coverage site.
git clone https://github.com/spark-test/pyspark-coverage-site.git

# Remove existing HTMLs.
rm -fr pyspark-coverage-site/*

# Copy generated coverage HTMLs.
cp -r .../python/test_coverage/htmlcov/* pyspark-coverage-site/

# Check out to a temporary branch.
git symbolic-ref HEAD refs/heads/latest_branch

# Add all the files.
git add -A

# Commit current HTMLs.
git commit -am "Coverage report at latest commit in Apache Spark"

# Delete the old branch.
git branch -D gh-pages

# Rename the temporary branch to master.
git branch -m gh-pages

# Finally, force update to our repository.
git push -f origin gh-pages
```

This way, a single up-to-date coverage report is always shown on the `github.io` page. The commands above were manually tested.

### TODOs

- [x] Write a draft (HyukjinKwon)
- [x] `pip install coverage` for all Python implementations (PyPy, Python 2, Python 3) on the Jenkins workers (shaneknapp)
- [x] Set the hidden `SPARK_TEST_KEY` for spark-test's password in Jenkins via Jenkins's feature. This should be set in both the PR builder and `spark-master-test-sbt-hadoop-2.7` so that later PRs can test and fix bugs (shaneknapp)
- [x] Set an environment variable that identifies `spark-master-test-sbt-hadoop-2.7` so that only that specific build reports and updates the coverage site (shaneknapp)
- [x] Make the PR builder's tests pass (HyukjinKwon)
- [x] Fix the flaky test related to coverage (HyukjinKwon): 6 consecutive passes out of 7 runs

This PR will be co-authored by me and shaneknapp.

## How was this patch tested?

It will be tested via Jenkins.

Closes apache#23117 from HyukjinKwon/SPARK-7721.

Lead-authored-by: Hyukjin Kwon <[email protected]>
Co-authored-by: hyukjinkwon <[email protected]>
Co-authored-by: shane knapp <[email protected]>
Signed-off-by: Hyukjin Kwon <[email protected]>
1 parent e44f308 commit cdd694c
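For readers who want to reproduce the report locally before it reaches the published coverage site, the following is a minimal sketch based only on the commands quoted in the commit message: it runs the coverage script for one module and opens the generated HTML index. It assumes a local Spark checkout that already contains `python/run-tests-with-coverage` (merged in apache#20204); the `spark_home` variable is illustrative.

```python
# Minimal local sketch (not part of this commit): regenerate the PySpark
# coverage report for one module and open the HTML index in a browser.
# Assumes the working directory is a Spark checkout that already contains
# python/run-tests-with-coverage from apache#20204.
import os
import subprocess
import webbrowser

spark_home = os.getcwd()  # illustrative; point this at your Spark checkout root

# Same invocation as in the commit message, limited to the pyspark-sql module.
subprocess.check_call(
    ["./run-tests-with-coverage",
     "--python-executables=python3",
     "--modules=pyspark-sql"],
    cwd=os.path.join(spark_home, "python"))

# The HTML report is written under python/test_coverage/htmlcov/.
index_html = os.path.join(spark_home, "python", "test_coverage", "htmlcov", "index.html")
webbrowser.open("file://" + index_html)
```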

3 files changed (+71, -3 lines)

README.md (1 addition, 0 deletions)

```diff
@@ -2,6 +2,7 @@
 
 [![Jenkins Build](https://amplab.cs.berkeley.edu/jenkins/job/spark-master-test-sbt-hadoop-2.7/badge/icon)](https://amplab.cs.berkeley.edu/jenkins/job/spark-master-test-sbt-hadoop-2.7)
 [![AppVeyor Build](https://img.shields.io/appveyor/ci/ApacheSoftwareFoundation/spark/master.svg?style=plastic&logo=appveyor)](https://ci.appveyor.com/project/ApacheSoftwareFoundation/spark)
+[![PySpark Coverage](https://img.shields.io/badge/dynamic/xml.svg?label=pyspark%20coverage&url=https%3A%2F%2Fspark-test.github.io%2Fpyspark-coverage-site&query=%2Fhtml%2Fbody%2Fdiv%5B1%5D%2Fdiv%2Fh1%2Fspan&colorB=brightgreen&style=plastic)](https://spark-test.github.io/pyspark-coverage-site)
 
 Spark is a fast and general cluster computing system for Big Data. It provides
 high-level APIs in Scala, Java, Python, and R, and an optimized engine that
```
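The new badge uses shields.io's dynamic XML endpoint: it fetches the published coverage site and reads the XPath `/html/body/div[1]/div/h1/span`, which in coverage.py's default `htmlcov` report holds the total percentage. As a rough illustration of what the badge resolves, the sketch below performs the same lookup; it assumes the third-party `lxml` package is available and that the published page keeps coverage.py's default layout.

```python
# Hedged sketch: reproduce the lookup the shields.io dynamic/xml badge performs.
# Assumes the third-party lxml package is installed and that the published page
# keeps coverage.py's default htmlcov layout (an <h1> whose <span> holds the
# total coverage percentage).
from urllib.request import urlopen

from lxml import html

URL = "https://spark-test.github.io/pyspark-coverage-site"

tree = html.fromstring(urlopen(URL).read())
# Same XPath as in the badge's "query" parameter.
spans = tree.xpath("/html/body/div[1]/div/h1/span")
print("pyspark coverage:", spans[0].text_content().strip() if spans else "not found")
```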

dev/run-tests.py (60 additions, 3 deletions)

```diff
@@ -25,6 +25,8 @@
 import re
 import sys
 import subprocess
+import glob
+import shutil
 from collections import namedtuple
 
 from sparktestsupport import SPARK_HOME, USER_HOME, ERROR_CODES
@@ -400,15 +402,66 @@ def run_scala_tests(build_tool, hadoop_version, test_modules, excluded_tags):
         run_scala_tests_sbt(test_modules, test_profiles)
 
 
-def run_python_tests(test_modules, parallelism):
+def run_python_tests(test_modules, parallelism, with_coverage=False):
     set_title_and_block("Running PySpark tests", "BLOCK_PYSPARK_UNIT_TESTS")
 
-    command = [os.path.join(SPARK_HOME, "python", "run-tests")]
+    if with_coverage:
+        # Coverage makes the PySpark tests flaky due to heavy parallelism.
+        # When we run PySpark tests with coverage, it uses 4 for now as
+        # workaround.
+        parallelism = 4
+        script = "run-tests-with-coverage"
+    else:
+        script = "run-tests"
+    command = [os.path.join(SPARK_HOME, "python", script)]
     if test_modules != [modules.root]:
         command.append("--modules=%s" % ','.join(m.name for m in test_modules))
     command.append("--parallelism=%i" % parallelism)
     run_cmd(command)
 
+    if with_coverage:
+        post_python_tests_results()
+
+
+def post_python_tests_results():
+    if "SPARK_TEST_KEY" not in os.environ:
+        print("[error] 'SPARK_TEST_KEY' environment variable was not set. Unable to post "
+              "PySpark coverage results.")
+        sys.exit(1)
+    spark_test_key = os.environ.get("SPARK_TEST_KEY")
+    # The steps below upload HTMLs to 'github.com/spark-test/pyspark-coverage-site'.
+    # 1. Clone PySpark coverage site.
+    run_cmd([
+        "git",
+        "clone",
+        "https://spark-test:%s@github.com/spark-test/pyspark-coverage-site.git" % spark_test_key])
+    # 2. Remove existing HTMLs.
+    run_cmd(["rm", "-fr"] + glob.glob("pyspark-coverage-site/*"))
+    # 3. Copy generated coverage HTMLs.
+    for f in glob.glob("%s/python/test_coverage/htmlcov/*" % SPARK_HOME):
+        shutil.copy(f, "pyspark-coverage-site/")
+    os.chdir("pyspark-coverage-site")
+    try:
+        # 4. Check out to a temporary branch.
+        run_cmd(["git", "symbolic-ref", "HEAD", "refs/heads/latest_branch"])
+        # 5. Add all the files.
+        run_cmd(["git", "add", "-A"])
+        # 6. Commit current HTMLs.
+        run_cmd([
+            "git",
+            "commit",
+            "-am",
+            "Coverage report at latest commit in Apache Spark",
+            '--author="Apache Spark Test Account <[email protected]>"'])
+        # 7. Delete the old branch.
+        run_cmd(["git", "branch", "-D", "gh-pages"])
+        # 8. Rename the temporary branch to master.
+        run_cmd(["git", "branch", "-m", "gh-pages"])
+        # 9. Finally, force update to our repository.
+        run_cmd(["git", "push", "-f", "origin", "gh-pages"])
+    finally:
+        os.chdir("..")
+
 
 def run_python_packaging_tests():
     set_title_and_block("Running PySpark packaging tests", "BLOCK_PYSPARK_PIP_TESTS")
@@ -567,7 +620,11 @@ def main():
 
     modules_with_python_tests = [m for m in test_modules if m.python_test_goals]
     if modules_with_python_tests:
-        run_python_tests(modules_with_python_tests, opts.parallelism)
+        # We only run PySpark tests with coverage report in one specific job with
+        # Spark master with SBT in Jenkins.
+        is_sbt_master_job = "SPARK_MASTER_SBT_HADOOP_2_7" in os.environ
+        run_python_tests(
+            modules_with_python_tests, opts.parallelism, with_coverage=is_sbt_master_job)
         run_python_packaging_tests()
     if any(m.should_run_r_tests for m in test_modules):
         run_sparkr_tests()
```
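As a usage note, the coverage path above is driven entirely by environment variables: `SPARK_MASTER_SBT_HADOOP_2_7` flips `with_coverage` on in `main()`, and `SPARK_TEST_KEY` is required by `post_python_tests_results()` to push the generated HTMLs. A minimal sketch of how the dedicated Jenkins job could drive the script is shown below; the values are placeholders, not real credentials, and the call assumes it is run from the Spark checkout root.

```python
# Hedged sketch of how the dedicated Jenkins job can drive the new coverage
# path in dev/run-tests.py. Both variables are read by the script itself:
# SPARK_MASTER_SBT_HADOOP_2_7 enables with_coverage, and SPARK_TEST_KEY is the
# spark-test password used when pushing the coverage site. Values are placeholders.
import os
import subprocess
import sys

env = dict(os.environ)
env["SPARK_MASTER_SBT_HADOOP_2_7"] = "1"                     # only its presence is checked
env["SPARK_TEST_KEY"] = "<injected by Jenkins credentials>"  # placeholder; never hard-code secrets

# Run from the Spark checkout root so dev/run-tests.py resolves correctly.
subprocess.check_call([sys.executable, os.path.join("dev", "run-tests.py")], env=env)
```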

python/pyspark/streaming/tests/test_dstream.py (10 additions, 0 deletions)

```diff
@@ -22,12 +22,16 @@
 import unittest
 from functools import reduce
 from itertools import chain
+import platform
 
 from pyspark import SparkConf, SparkContext, RDD
 from pyspark.streaming import StreamingContext
 from pyspark.testing.streamingutils import PySparkStreamingTestCase
 
 
+@unittest.skipIf(
+    "pypy" in platform.python_implementation().lower() and "COVERAGE_PROCESS_START" in os.environ,
+    "PyPy implementation causes to hang DStream tests forever when Coverage report is used.")
 class BasicOperationTests(PySparkStreamingTestCase):
 
     def test_map(self):
@@ -389,6 +393,9 @@ def failed_func(i):
         self.fail("a failed func should throw an error")
 
 
+@unittest.skipIf(
+    "pypy" in platform.python_implementation().lower() and "COVERAGE_PROCESS_START" in os.environ,
+    "PyPy implementation causes to hang DStream tests forever when Coverage report is used.")
 class WindowFunctionTests(PySparkStreamingTestCase):
 
     timeout = 15
@@ -466,6 +473,9 @@ def func(dstream):
         self._test_func(input, func, expected)
 
 
+@unittest.skipIf(
+    "pypy" in platform.python_implementation().lower() and "COVERAGE_PROCESS_START" in os.environ,
+    "PyPy implementation causes to hang DStream tests forever when Coverage report is used.")
 class CheckpointTests(unittest.TestCase):
 
     setupCalled = False
```
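The skip condition combines two independent signals: `platform.python_implementation()` reports the interpreter name (for example `"CPython"` or `"PyPy"`), and `COVERAGE_PROCESS_START` is the environment variable coverage.py uses to start measurement in subprocesses, so its presence is a reasonable proxy for "a coverage run is active". A small standalone check of the same predicate, for illustration only:

```python
# Standalone illustration of the skip predicate used by the decorators above.
# platform.python_implementation() returns e.g. "CPython" or "PyPy";
# COVERAGE_PROCESS_START is set by coverage runs that measure subprocesses.
import os
import platform

is_pypy = "pypy" in platform.python_implementation().lower()
coverage_active = "COVERAGE_PROCESS_START" in os.environ

print("interpreter:", platform.python_implementation())
print("coverage run detected:", coverage_active)
print("DStream tests would be skipped:", is_pypy and coverage_active)
```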
