This repository was archived by the owner on Mar 24, 2025. It is now read-only.

Commit 42a08c2

Deprecate Scala 2.11; update versions for 0.13.0 release

1 parent c054bc2 commit 42a08c2

1 file changed (+8 −23 lines)

README.md

Lines changed: 8 additions & 23 deletions
````diff
@@ -7,39 +7,24 @@ The structure and test tools are mostly copied from [CSV Data Source for Spark](
 
 This package supports to process format-free XML files in a distributed way, unlike JSON datasource in Spark restricts in-line JSON format.
 
-Compatible with Spark 2.4.x (with Scala 2.11) and 3.x (with Scala 2.12)
+Compatible with Spark 2.4.x and 3.x, with Scala 2.12. Scala 2.11 support with Spark 2.4.x is deprecated.
 
 ## Linking
-You can link against this library in your program at the following coordinates:
-
-### Scala 2.11
-
-```
-groupId: com.databricks
-artifactId: spark-xml_2.11
-version: 0.12.0
-```
 
-### Scala 2.12
+You can link against this library in your program at the following coordinates:
 
 ```
 groupId: com.databricks
 artifactId: spark-xml_2.12
-version: 0.12.0
+version: 0.13.0
 ```
 
 ## Using with Spark shell
-This package can be added to Spark using the `--packages` command line option. For example, to include it when starting the spark shell:
-
 
-### Spark compiled with Scala 2.11
-```
-$SPARK_HOME/bin/spark-shell --packages com.databricks:spark-xml_2.11:0.12.0
-```
+This package can be added to Spark using the `--packages` command line option. For example, to include it when starting the spark shell:
 
-### Spark compiled with Scala 2.12
 ```
-$SPARK_HOME/bin/spark-shell --packages com.databricks:spark-xml_2.12:0.12.0
+$SPARK_HOME/bin/spark-shell --packages com.databricks:spark-xml_2.12:0.13.0
 ```
 
 ## Features
@@ -413,7 +398,7 @@ Automatically infer schema (data types)
 ```R
 library(SparkR)
 
-sparkR.session("local[4]", sparkPackages = c("com.databricks:spark-xml_2.11:0.12.0"))
+sparkR.session("local[4]", sparkPackages = c("com.databricks:spark-xml_2.12:0.13.0"))
 
 df <- read.df("books.xml", source = "xml", rowTag = "book")
 
@@ -425,7 +410,7 @@ You can manually specify schema:
 ```R
 library(SparkR)
 
-sparkR.session("local[4]", sparkPackages = c("com.databricks:spark-xml_2.11:0.12.0"))
+sparkR.session("local[4]", sparkPackages = c("com.databricks:spark-xml_2.12:0.13.0"))
 customSchema <- structType(
   structField("_id", "string"),
   structField("author", "string"),
@@ -466,7 +451,7 @@ val records = sc.newAPIHadoopFile(
 
 ## Building From Source
 
-This library is built with [SBT](https://www.scala-sbt.org/). To build a JAR file simply run `sbt package` from the project root. The build configuration includes support for both Scala 2.11 and 2.12.
+This library is built with [SBT](https://www.scala-sbt.org/). To build a JAR file simply run `sbt package` from the project root.
 
 ## Acknowledgements
 
````
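For sbt-based projects (a common alternative to the `groupId`/`artifactId` coordinates shown in the diff), the bumped release maps onto a one-line dependency. A minimal sketch of a hypothetical consumer `build.sbt`, not part of this repository; the `%%` operator is standard sbt and appends the Scala binary suffix, so on Scala 2.12 it resolves to `spark-xml_2.12:0.13.0`:

```scala
// build.sbt (hypothetical consumer project, not part of this repo)
// %% appends the Scala binary version suffix, resolving to spark-xml_2.12
libraryDependencies += "com.databricks" %% "spark-xml" % "0.13.0"
```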
