
Commit 028700c

Merge of 2 parents: 35f2c34 + fd07c9f

File tree

1 file changed (+1 addition, -0 deletions)

README.md

Lines changed: 1 addition & 0 deletions
@@ -86,6 +86,7 @@ The following parameters can be set as options on the Spark reader object before
 - `readPartitions` number of partitions to split the initial RDD when loading the data into Spark. Defaults to the size of the DynamoDB table divided into chunks of `maxPartitionBytes`
 - `maxPartitionBytes` the maximum size of a single input partition. Default 128 MB
+- `defaultParallelism` the number of input partitions that can be read from DynamoDB simultaneously. Defaults to `sparkContext.defaultParallelism`
 - `targetCapacity` fraction of provisioned read capacity on the table (or index) to consume for reading. Default 1 (i.e. 100% capacity).
 - `stronglyConsistentReads` whether or not to use strongly consistent reads. Default false.
 - `bytesPerRCU` number of bytes that can be read per second with a single Read Capacity Unit. Default 4000 (4 KB). This value is multiplied by two when `stronglyConsistentReads=false`
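For context, here is a minimal sketch of how these options might be set on a Spark reader. The option keys come from the parameter list in the diff above; the `dynamodb` format name, the `tableName` option, and the table name `SomeTable` are assumptions about how the connector is registered, not something this commit confirms, and the numeric values are illustrative only.

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}

object DynamoReadSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("dynamodb-read-options")
      .getOrCreate()

    // Option keys are taken from the README parameter list in this diff.
    // The "dynamodb" format name, the "tableName" option, and "SomeTable"
    // are assumptions for illustration, not taken from this commit.
    val df: DataFrame = spark.read
      .format("dynamodb")
      .option("tableName", "SomeTable")
      .option("readPartitions", "40")             // split the initial RDD into 40 partitions
      .option("defaultParallelism", "8")          // read at most 8 partitions at once (the option added by this commit)
      .option("targetCapacity", "0.5")            // consume at most 50% of provisioned read capacity
      .option("stronglyConsistentReads", "false") // eventually consistent reads double the effective bytesPerRCU
      .load()

    df.show()
    spark.stop()
  }
}
```

Under these settings, a table provisioned with 100 RCUs would presumably be scanned at roughly 100 × 0.5 × 4000 × 2 = 400,000 bytes per second, since eventually consistent reads double the number of bytes read per RCU.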
