@@ -115,7 +115,7 @@ spark.cdm.schema.origin.keyspaceTable keyspace_name.table_name
# .mismatch : Default is false. when true, data that is different between Origin and Target
# will be reconciled. It is important to note that TIMESTAMP will have an effect
# here - if the WRITETIME of the Origin record (determined with
- # .writetime.indexes ) is earlier than the WRITETIME of the Target record, the
+ # .writetime.names ) is earlier than the WRITETIME of the Target record, the
# change will not appear in Target. This may be particularly challenging to
# troubleshoot if individual columns (cells) have been modified in Target.
#
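As a rough illustration of the reconciliation behavior described above: a run that autocorrects mismatches while resolving WRITETIME from explicitly named columns might set something like the following. The autocorrect property name is assumed from the comment text, and the column names are placeholders, not defaults:

spark.cdm.autocorrect.mismatch                      true
spark.cdm.schema.origin.column.writetime.names      last_updated,notes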
@@ -205,7 +205,7 @@ spark.cdm.perfops.ratelimit.target 40000
# the record in Origin cannot be determined (such as the only non-key
# columns are collections). This parameter allows a crude constant value
# to be used in its place, and overrides
- # .schema.origin.column.writetime.indexes .
+ # .schema.origin.column.writetime.names .
# .writetime.incrementBy Default is 0. This is useful when you have a List that is not frozen,
# and are updating this via the autocorrect feature. Lists are not idempotent,
# and subsequent UPSERTs would add duplicates to the list. Future versions
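An illustrative sketch of the constant-writetime override and list increment described above. The spark.cdm.transform.custom prefix is an assumption inferred from the relative names in the comments, and the values are examples only (the writetime is in microseconds since the epoch):

spark.cdm.transform.custom.writetime                1672531200000000
spark.cdm.transform.custom.writetime.incrementBy    1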
@@ -263,14 +263,14 @@ spark.cdm.perfops.ratelimit.target 40000
# split. Invalid percentages will be treated as 100.
#
# .writetime : Filter records based on their writetimes. The maximum writetime of the columns
- # configured at .schema.origin.column.writetime.indexes will be compared to the
- # thresholds, which must be in microseconds since the epoch. If the .writetime.indexes
+ # configured at .schema.origin.column.writetime.names will be compared to the
+ # thresholds, which must be in microseconds since the epoch. If the .writetime.names
# are not specified, or the thresholds are null or otherwise invalid, the filter will
# be ignored. Note that .perfops.batchSize will be ignored when this filter is in place,
# a value of 1 will be used instead.#
# .min : Lowest (inclusive) writetime values to be migrated
# .max : Highest (inclusive) writetime values to be migrated
- # maximum timestamp of the columns specified by .schema.origin.column.writetime.indexes ,
+ # maximum timestamp of the columns specified by .schema.origin.column.writetime.names ,
# and if that is not specified, or is for some reason null, the filter is ignored.
#
# .column : Filter rows based on matching a configured value.
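A hedged sketch of the writetime filter described above, keeping only records whose maximum column writetime falls within calendar year 2023. The spark.cdm.filter.java prefix is assumed from the comment text, and the thresholds are in microseconds since the epoch:

spark.cdm.filter.java.writetime.min                 1672531200000000
spark.cdm.filter.java.writetime.max                 1704067199999999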