This repository was archived by the owner on Aug 31, 2021. It is now read-only.
Fix option casing issue, don't use DAX for DescribeTable
- Option keys are case sensitive but have been lowercased by the time they
  get to TableConnector
- DAX isn't capable of doing DescribeTable, so we ask for a non-DAX
  client for that purpose
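A minimal sketch of what the two fixes amount to, assuming a connector class that receives its options as a plain `Map[String, String]`; apart from the AWS SDK calls, the class and member names here are illustrative and not the connector's actual code:

```scala
import com.amazonaws.services.dynamodbv2.{AmazonDynamoDB, AmazonDynamoDBClientBuilder}
import com.amazonaws.services.dynamodbv2.model.{DescribeTableRequest, TableDescription}

// Hypothetical sketch of the two fixes described in the commit message.
class TableConnectorSketch(options: Map[String, String]) {

  // Fix 1: option keys have been lowercased by the time they reach the connector,
  // so look every key up by its lowercased form rather than the documented casing.
  private def option(key: String): Option[String] = options.get(key.toLowerCase)

  val readPartitions: Option[Int] = option("readPartitions").map(_.toInt)
  val stronglyConsistentReads: Boolean = option("stronglyConsistentReads").exists(_.toBoolean)

  // Fix 2: DAX cannot serve DescribeTable, so always use a plain (non-DAX)
  // DynamoDB client when the table description is needed, even if reads go through DAX.
  private val plainDynamoDb: AmazonDynamoDB = AmazonDynamoDBClientBuilder.standard().build()

  def describeTable(tableName: String): TableDescription =
    plainDynamoDb.describeTable(new DescribeTableRequest(tableName)).getTable
}
```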
README.md: 3 additions & 1 deletion
@@ -92,7 +92,9 @@ The following parameters can be set as options on the Spark reader object before
 
 - `readPartitions` number of partitions to split the initial RDD when loading the data into Spark. Defaults to the size of the DynamoDB table divided into chunks of `maxPartitionBytes`
 - `maxPartitionBytes` the maximum size of a single input partition. Default 128 MB
-- `defaultParallelism` the number of input partitions that can be read from DynamoDB simultaneously. Defaults to `sparkContext.defaultParallelism`
+- `defaultParallelism` the number of input partitions that can be read from or written to DynamoDB simultaneously.
+Read/write throughput will be limited by dividing it by this number. Set this to the number of CPU cores in your
+Spark job. Defaults to the value of `SparkContext#defaultParallelism`.
 - `targetCapacity` fraction of provisioned read capacity on the table (or index) to consume for reading, enforced by
 a rate limiter. Default 1 (i.e. 100% capacity).
 - `stronglyConsistentReads` whether or not to use strongly consistent reads. Default false.
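For context, the options documented above are set on the Spark reader before loading. A rough sketch follows, in which the data source format name (`dynamodb`) and the way the table name is supplied are assumptions rather than something stated in this diff:

```scala
import org.apache.spark.sql.SparkSession

object ReadFromDynamoDb {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("dynamodb-read-sketch")
      .getOrCreate()

    // Option names come from the README excerpt above; the values are examples only.
    val df = spark.read
      .format("dynamodb")                        // assumed short name for this connector
      .option("readPartitions", "32")            // split the initial RDD into 32 partitions
      .option("defaultParallelism", "16")        // cap simultaneous reads/writes; throughput is divided by this
      .option("targetCapacity", "0.5")           // consume at most 50% of provisioned read capacity
      .option("stronglyConsistentReads", "true") // default is false
      .load("SomeTable")                         // assumed: table name passed as the load path

    df.show()
  }
}
```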