This repository was archived by the owner on Aug 31, 2021. It is now read-only.

Commit f813833
Fixed README to correct call to spark.read.dynamodb
1 parent: ea1f341

3 files changed: 6 additions, 3 deletions


.gitignore (3 additions, 1 deletion)

```diff
@@ -1,5 +1,7 @@
+/target/
+/bin/
 *.class
 *.log
+.classpath
 .idea
-target/
 .wercker
```

README.md (1 addition, 1 deletion)

```diff
@@ -18,7 +18,7 @@ Plug-and-play implementation of an Apache Spark custom data source for AWS Dynam
 import com.audienceproject.spark.dynamodb.implicits._
 
 // Load a DataFrame from a Dynamo table. Only incurs the cost of a single scan for schema inference.
-val dynamoDf = spark.dynamodb("SomeTableName") // <-- DataFrame of Row objects with inferred schema.
+val dynamoDf = spark.read.dynamodb("SomeTableName") // <-- DataFrame of Row objects with inferred schema.
 
 // Scan the table for the first 100 items (the order is arbitrary) and print them.
 dynamoDf.show(100)
```
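The corrected call moves the custom `dynamodb` method onto Spark's standard `DataFrameReader` (via the library's implicits) rather than onto `SparkSession` itself. A minimal Scala sketch of the corrected usage follows; it assumes the spark-dynamodb artifact is on the classpath, a local `SparkSession`, and valid AWS credentials, and `SomeTableName` is a placeholder table name:

```scala
import org.apache.spark.sql.SparkSession
import com.audienceproject.spark.dynamodb.implicits._

// Build a local SparkSession for illustration; in a real job one is
// typically provided by the runtime environment.
val spark = SparkSession.builder()
  .appName("dynamodb-read-example")
  .master("local[*]")
  .getOrCreate()

// The implicits import above enriches spark.read with a dynamodb method,
// so the read call hangs off the DataFrameReader, not the session.
val dynamoDf = spark.read.dynamodb("SomeTableName")

// From here on, ordinary DataFrame operations apply.
dynamoDf.show(100)
```

The design follows the usual Spark data-source convention: custom sources extend `spark.read` rather than adding methods to `SparkSession`, which is why the README fix replaces `spark.dynamodb(...)` with `spark.read.dynamodb(...)`.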

project/plugins.sbt (2 additions, 1 deletion)

```diff
@@ -1,3 +1,4 @@
 logLevel := Level.Warn
 
-addSbtPlugin("com.jsuereth" % "sbt-pgp" % "1.1.1")
+addSbtPlugin("com.jsuereth" % "sbt-pgp" % "1.1.1")
+addSbtPlugin("com.typesafe.sbteclipse" % "sbteclipse-plugin" % "5.2.4")
```
