
Commit f5d16b8

HyukjinKwon authored and Sumedh Wale committed
[SPARK-34021][R] Fix hyper links in SparkR documentation for CRAN submission
The 3.0.1 CRAN submission failed for the reason below:

```
Found the following (possibly) invalid URLs:
  URL: http://jsonlines.org/ (moved to https://jsonlines.org/)
    From: man/read.json.Rd
          man/write.json.Rd
    Status: 200
    Message: OK

  URL: https://dl.acm.org/citation.cfm?id=1608614 (moved to https://dl.acm.org/doi/10.1109/MC.2009.263)
    From: inst/doc/sparkr-vignettes.html
    Status: 200
    Message: OK
```

The links are now redirected. This PR checked all hyperlinks in the docs, such as `href{...}` and `url{...}`, and fixed them all in SparkR:

- Fix the two problems above.
- Fix http to https.
- Fix `https://www.apache.org/ https://spark.apache.org/` -> `https://www.apache.org https://spark.apache.org`.

Why the changes are needed: for CRAN submission. User-facing change: virtually none, since this is just cleanup that CRAN requires. Tested manually by clicking the links.

Closes apache#31058 from HyukjinKwon/SPARK-34021.

Authored-by: HyukjinKwon <[email protected]>
Signed-off-by: HyukjinKwon <[email protected]>
1 parent 74dbd37 · commit f5d16b8
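The PR's link audit (checking every `href{...}` and `url{...}` in the docs) can be approximated with a short R scan. This is a hypothetical helper, not part of the commit; the directory, file pattern, and regex are assumptions for illustration:

```r
# Hypothetical link audit, not part of this commit: list plain-http
# \href{...} / \url{...} links left in SparkR sources and docs.
files <- list.files("R/pkg", pattern = "\\.(R|Rd|Rmd)$",
                    recursive = TRUE, full.names = TRUE)
hits <- lapply(files, function(f) {
  # match \href{http:// or \url{http:// on each line of the file
  grep("\\\\(href|url)\\{http://", readLines(f, warn = FALSE), value = TRUE)
})
names(hits) <- files
Filter(length, hits)  # only files that still contain http:// links
```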

File tree

5 files changed: +11 -8 lines changed

R/pkg/DESCRIPTION

Lines changed: 1 addition & 1 deletion
@@ -11,7 +11,7 @@ Authors@R: c(person("Shivaram", "Venkataraman", role = c("aut", "cre"),
                   email = "[email protected]"),
              person(family = "The Apache Software Foundation", role = c("aut", "cph")))
 License: Apache License (== 2.0)
-URL: https://www.apache.org/ https://spark.apache.org/
+URL: https://www.apache.org https://spark.apache.org
 BugReports: https://spark.apache.org/contributing.html
 Depends:
     R (>= 3.0),

R/pkg/R/DataFrame.R

Lines changed: 1 addition & 1 deletion
@@ -801,7 +801,7 @@ setMethod("toJSON",
 
 #' Save the contents of SparkDataFrame as a JSON file
 #'
-#' Save the contents of a SparkDataFrame as a JSON file (\href{http://jsonlines.org/}{
+#' Save the contents of a SparkDataFrame as a JSON file (\href{https://jsonlines.org/}{
 #' JSON Lines text format or newline-delimited JSON}). Files written out
 #' with this method can be read back in as a SparkDataFrame using read.json().
 #'
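The doc text above promises a round trip: files written by write.json() can be read back with read.json(). A minimal hedged SparkR sketch; the output path and sample data are illustrative, not from the commit:

```r
library(SparkR)
sparkR.session()

df <- createDataFrame(faithful)        # any local data.frame will do
write.json(df, "/tmp/faithful-json")   # written as JSON Lines, one record per line
df2 <- read.json("/tmp/faithful-json") # round-trips back to a SparkDataFrame
head(df2)
```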

R/pkg/R/SQLContext.R

Lines changed: 4 additions & 2 deletions
@@ -332,8 +332,10 @@ setMethod("toDF", signature(x = "RDD"),
 
 #' Create a SparkDataFrame from a JSON file.
 #'
-#' Loads a JSON file (\href{http://jsonlines.org/}{JSON Lines text format or newline-delimited JSON}
-#' ), returning the result as a SparkDataFrame
+#' Loads a JSON file, returning the result as a SparkDataFrame
+#' By default, (\href{https://jsonlines.org/}{JSON Lines text format or newline-delimited JSON}
+#' ) is supported. For JSON (one record per file), set a named property \code{multiLine} to
+#' \code{TRUE}.
 #' It goes through the entire dataset once to determine the schema.
 #'
 #' @param path Path of file to read. A vector of multiple paths is allowed.
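The reworded roxygen documents two read modes; a hedged usage sketch of both, where the file paths are placeholders:

```r
library(SparkR)
sparkR.session()

# Default: JSON Lines / newline-delimited JSON, one record per line.
people <- read.json("/tmp/people.jsonl")

# One JSON document spanning the whole file: pass the named multiLine option.
people_ml <- read.json("/tmp/people.json", multiLine = TRUE)
printSchema(people_ml)
```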

R/pkg/R/install.R

Lines changed: 3 additions & 3 deletions
@@ -39,11 +39,11 @@
 #' version number in the format of "x.y" where x and y are integer.
 #' If \code{hadoopVersion = "without"}, "Hadoop free" build is installed.
 #' See
-#' \href{http://spark.apache.org/docs/latest/hadoop-provided.html}{
+#' \href{https://spark.apache.org/docs/latest/hadoop-provided.html}{
 #' "Hadoop Free" Build} for more information.
 #' Other patched version names can also be used, e.g. \code{"cdh4"}
 #' @param mirrorUrl base URL of the repositories to use. The directory layout should follow
-#' \href{http://www.apache.org/dyn/closer.lua/spark/}{Apache mirrors}.
+#' \href{https://www.apache.org/dyn/closer.lua/spark/}{Apache mirrors}.
 #' @param localDir a local directory where Spark is installed. The directory contains
 #' version-specific folders of Spark packages. Default is path to
 #' the cache directory:
@@ -65,7 +65,7 @@
 #'}
 #' @note install.spark since 2.1.0
 #' @seealso See available Hadoop versions:
-#' \href{http://spark.apache.org/downloads.html}{Apache Spark}
+#' \href{https://spark.apache.org/downloads.html}{Apache Spark}
 install.spark <- function(hadoopVersion = "2.7", mirrorUrl = NULL,
                           localDir = NULL, overwrite = FALSE) {
   sparkHome <- Sys.getenv("SPARK_HOME")
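To make the documented parameters concrete, a hedged sketch of install.spark() calls; the explicit mirror URL below is an assumption for illustration, not from the PR:

```r
library(SparkR)

# Default: the Spark build for Hadoop 2.7, mirror chosen automatically.
install.spark()

# "Hadoop free" build fetched from an explicitly named Apache mirror.
install.spark(hadoopVersion = "without",
              mirrorUrl = "https://archive.apache.org/dist/spark")
```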

R/pkg/R/stats.R

Lines changed: 2 additions & 1 deletion
@@ -112,7 +112,8 @@ setMethod("corr",
 #'
 #' Finding frequent items for columns, possibly with false positives.
 #' Using the frequent element count algorithm described in
-#' \url{http://dx.doi.org/10.1145/762471.762473}, proposed by Karp, Schenker, and Papadimitriou.
+#' \url{https://dl.acm.org/doi/10.1145/762471.762473}, proposed by Karp, Schenker,
+#' and Papadimitriou.
 #'
 #' @param x A SparkDataFrame.
 #' @param cols A vector column names to search frequent items in.
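The method this roxygen documents is SparkR's freqItems(); a minimal hedged sketch with made-up data, where the support value is chosen only to make the tiny example interesting:

```r
library(SparkR)
sparkR.session()

df <- createDataFrame(data.frame(a = c(1, 1, 1, 2, 2),
                                 b = c("x", "x", "x", "x", "y")))
# Scan columns a and b for items above the given minimum support
# (the default support is 0.01).
freqItems(df, c("a", "b"), support = 0.4)
```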
