documentation yourself. Why build it yourself? So that you have the docs that correspond to
whichever version of Spark you currently have checked out of revision control.

## Prerequisites

The Spark documentation build uses a number of tools to build HTML docs and API docs in Scala, Java,
Python, R and SQL.

You need to have [Ruby](https://www.ruby-lang.org/en/documentation/installation/) and
[Python](https://docs.python.org/2/using/unix.html#getting-and-installing-the-latest-version-of-python)
installed. Also install the following libraries:

```sh
$ sudo gem install jekyll jekyll-redirect-from pygments.rb
$ sudo pip install Pygments
# Following is needed only for generating API docs
$ sudo pip install sphinx pypandoc mkdocs
$ sudo Rscript -e 'install.packages(c("knitr", "devtools", "roxygen2", "testthat", "rmarkdown"), repos="http://cran.stat.ucla.edu/")'
```

(Note: If you are on a system with both Ruby 1.9 and Ruby 2.0, you may need to replace `gem` with `gem2.0`.)
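
If you want to confirm the toolchain is in place before building, a quick check along these lines works (a minimal sketch; version numbers and output will vary by environment):

```sh
# Sanity-check the documentation toolchain (output varies per machine)
$ jekyll --version
$ python -c "import pygments, sphinx, pypandoc; print('Python doc dependencies OK')"
$ mkdocs --version
$ Rscript -e 'library(knitr); library(roxygen2); library(rmarkdown)'
```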
## Generating the Documentation HTML

We include the Spark documentation as part of the source to enable the documentation to evolve along
with the source code and be captured by revision control (currently git). This way the code
automatically includes the version of the documentation that is relevant regardless of which version
or release you have checked out or downloaded.

In this directory you will find text files formatted using Markdown, with an ".md" suffix. You can
read those text files directly if you want. Start with `index.md`.

Execute `jekyll build` from the `docs/` directory to compile the site. Compiling the site with
Jekyll will create a directory called `_site` containing `index.html` as well as the rest of the
compiled files.

```sh
$ cd docs
$ jekyll build
```

You can modify the default Jekyll build as follows:

```sh
# Skip generating API docs (which takes a while)
$ SKIP_API=1 jekyll build

# Serve content locally on port 4000
$ jekyll serve --watch

# Build the site with extra features used on the live page
$ PRODUCTION=1 jekyll build
```
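
These options can be combined; for example, when iterating on the prose locally it is common to skip the slow API-doc step (this assumes that `SKIP_API` is likewise honored by the build that `jekyll serve` triggers):

```sh
# Preview prose changes locally without regenerating API docs
$ SKIP_API=1 jekyll serve --watch
```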

## API Docs (Scaladoc, Javadoc, Sphinx, roxygen2, MkDocs)

You can build just the Spark scaladoc and javadoc by running `build/sbt unidoc` from the `SPARK_HOME` directory.

Similarly, you can build just the PySpark docs by running `make html` from the
`SPARK_HOME/python/docs` directory. Documentation is only generated for classes that are listed as
public in `__init__.py`. The SparkR docs can be built by running `SPARK_HOME/R/create-docs.sh`, and
the SQL docs can be built by running `SPARK_HOME/sql/create-docs.sh` after
[building Spark](https://github.com/apache/spark#building-spark) first.
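
Collecting those commands in one place, each set of API docs can be generated on its own; a sketch with paths relative to the Spark source root (`SPARK_HOME`):

```sh
# Scala and Java API docs
$ build/sbt unidoc

# PySpark API docs
$ cd python/docs && make html && cd ../..

# SparkR API docs
$ ./R/create-docs.sh

# SQL API docs (build Spark itself first)
$ ./sql/create-docs.sh
```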

When you run `jekyll build` in the `docs` directory, it will also copy over the scaladoc and javadoc for the various
Spark subprojects into the `docs` directory (and then also into the `_site` directory). We use a
jekyll plugin to run `build/sbt unidoc` before building the site, so if you haven't run it (recently) it
may take some time as it generates all of the scaladoc and javadoc using [Unidoc](https://github.com/sbt/sbt-unidoc).
The jekyll plugin also generates the PySpark docs using [Sphinx](http://sphinx-doc.org/), SparkR docs
using [roxygen2](https://cran.r-project.org/web/packages/roxygen2/index.html) and SQL docs
using [MkDocs](http://www.mkdocs.org/).

NOTE: To skip the step of building and copying over the Scala, Java, Python, R and SQL API docs, run
`SKIP_API=1 jekyll build`. In addition, `SKIP_SCALADOC=1`, `SKIP_PYTHONDOC=1`, `SKIP_RDOC=1` and
`SKIP_SQLDOC=1` can be used to skip a single step for the corresponding language. `SKIP_SCALADOC`
skips both the Scala and Java docs.
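
For example, to build the site with only the PySpark API docs, the skip flags can be combined:

```sh
# Generate only the Python API docs alongside the site
$ SKIP_SCALADOC=1 SKIP_RDOC=1 SKIP_SQLDOC=1 jekyll build
```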