
Commit 5c38ff6

Commit message: links
Parent: 2089c24

File tree

5 files changed: 11 additions, 10 deletions


R/read_sas.R

Lines changed: 2 additions & 2 deletions

```diff
@@ -6,10 +6,10 @@
 #' Mark that files on the local file system need to be specified using the full path.
 #' @param table character string with the name of the Spark table where the SAS dataset will be put into
 #' @return an object of class \code{tbl_spark}, which is a reference to a Spark DataFrame based on which
-#' dplyr functions can be executed. See \url{https://github.com/rstudio/sparklyr}
+#' dplyr functions can be executed. See \url{https://github.com/sparklyr/sparklyr}
 #' @export
 #' @seealso \code{\link[sparklyr]{spark_connect}}, \code{\link[sparklyr]{sdf_register}}
-#' @references \url{https://spark-packages.org/package/saurfang/spark-sas7bdat}, \url{https://github.com/saurfang/spark-sas7bdat}, \url{https://github.com/rstudio/sparklyr}
+#' @references \url{https://spark-packages.org/package/saurfang/spark-sas7bdat}, \url{https://github.com/saurfang/spark-sas7bdat}, \url{https://github.com/sparklyr/sparklyr}
 #' @examples
 #' \dontrun{
 #' ## If you haven't got a Spark cluster, you can install Spark locally like this
```

README.md

Lines changed: 2 additions & 2 deletions

````diff
@@ -1,13 +1,13 @@
 # spark.sas7bdat
 
-The **spark.sas7bdat** package allows R users working with [Apache Spark](https://spark.apache.org) to read in [SAS](http://www.sas.com) datasets in .sas7bdat format into Spark by using the [spark-sas7bdat Spark package](https://spark-packages.org/package/saurfang/spark-sas7bdat). This allows R users to
+The **spark.sas7bdat** package allows R users working with [Apache Spark](https://spark.apache.org) to read in [SAS](https://www.sas.com) datasets in .sas7bdat format into Spark by using the [spark-sas7bdat Spark package](https://spark-packages.org/package/saurfang/spark-sas7bdat). This allows R users to
 
 - load a SAS dataset in parallel into a Spark table for further processing with the [sparklyr](https://cran.r-project.org/package=sparklyr) package
 - process in parallel the full SAS dataset with dplyr statements, instead of having to import the full SAS dataset in RAM (using the foreign/haven packages) and hence avoiding RAM problems of large imports
 
 
 ## Example
-The following example reads in a file called iris.sas7bdat in a table called sas_example in Spark. Do try this with bigger data on your cluster and look at the help of the [sparklyr](https://github.com/rstudio/sparklyr) package to connect to your Spark cluster.
+The following example reads in a file called iris.sas7bdat in a table called sas_example in Spark. Do try this with bigger data on your cluster and look at the help of the [sparklyr](https://github.com/sparklyr/sparklyr) package to connect to your Spark cluster.
 
 ```r
 library(sparklyr)
````
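The README example is truncated in this diff at `library(sparklyr)`. As a point of reference, a minimal sketch of how such an example typically looks follows; it assumes a local Spark installation and uses the package's `spark_read_sas()` function, with a hypothetical path to `iris.sas7bdat`:

```r
library(sparklyr)
library(spark.sas7bdat)

# Connect to a local Spark instance; "local" is an assumption here,
# any sparklyr connection to a cluster works the same way.
sc <- spark_connect(master = "local")

# Read the SAS file into a Spark table called sas_example.
# The path is hypothetical; per the help page, files on the local
# file system need to be specified using the full path.
x <- spark_read_sas(sc, path = "/full/path/to/iris.sas7bdat",
                    table = "sas_example")

# x is a tbl_spark, so dplyr verbs are executed on the Spark side
# instead of importing the full dataset into RAM.
library(dplyr)
x %>% group_by(Species) %>% summarise(n = n())

spark_disconnect(sc)
```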

inst/NEWS

Lines changed: 1 addition & 0 deletions

```diff
@@ -1,5 +1,6 @@
 CHANGES IN spark.sas7bdat VERSION 1.4
 
+    o Fix URL's
     o Add rmarkdown to Suggests in DESCRIPTION
 
 CHANGES IN spark.sas7bdat VERSION 1.3
```

man/spark_read_sas.Rd

Lines changed: 2 additions & 2 deletions
(Generated file; diff not rendered by default on GitHub.)

vignettes/spark_sas7bdat_examples.Rmd

Lines changed: 4 additions & 4 deletions
````diff
@@ -9,18 +9,18 @@ vignette: >
   %\VignetteEncoding{UTF-8}
 ---
 
-This R package allows R users to easily import large [SAS](http://www.sas.com) datasets into [Spark](https://spark.apache.org) tables in parallel.
+This R package allows R users to easily import large [SAS](https://www.sas.com) datasets into [Spark](https://spark.apache.org) tables in parallel.
 
 
-The package uses the [spark-sas7bdat Spark package](https://spark-packages.org/package/saurfang/spark-sas7bdat) in order to read a SAS dataset in Spark. That Spark package imports the data in parallel on the Spark cluster using the Parso library and this process is launched from R using the [sparklyr](https://github.com/rstudio/sparklyr) functionality.
+The package uses the [spark-sas7bdat Spark package](https://spark-packages.org/package/saurfang/spark-sas7bdat) in order to read a SAS dataset in Spark. That Spark package imports the data in parallel on the Spark cluster using the Parso library and this process is launched from R using the [sparklyr](https://github.com/sparklyr/sparklyr) functionality.
 
 More information about the spark-sas7bdat Spark package and sparklyr can be found at:
 
 - https://spark-packages.org/package/saurfang/spark-sas7bdat and https://github.com/saurfang/spark-sas7bdat
-- https://github.com/rstudio/sparklyr
+- https://github.com/sparklyr/sparklyr
 
 ## Example
-The following example reads in a file called iris.sas7bdat in parallel in a table called sas_example in Spark. Do try this with bigger data on your cluster and look at the help of the [sparklyr](https://github.com/rstudio/sparklyr) package to connect to your Spark cluster.
+The following example reads in a file called iris.sas7bdat in parallel in a table called sas_example in Spark. Do try this with bigger data on your cluster and look at the help of the [sparklyr](https://github.com/sparklyr/sparklyr) package to connect to your Spark cluster.
 
 ```{r, eval=FALSE}
 library(sparklyr)
````
