Commit 6d343ec

Merge pull request #178 from johnsmartco/patch-2
Add a commented out line for #spark.cdm.feature.constantColumns.types
2 parents 83ec6c7 + caf94cf commit 6d343ec

File tree: 1 file changed (+6 −3 lines)
src/resources/cdm-detailed.properties

Lines changed: 6 additions & 3 deletions
@@ -200,7 +200,7 @@ spark.cdm.perfops.ratelimit.target 40000
 #                     constant value to be used in its place, separate from the Constant
 #                     Values feature.
 #  .custom
-#    .writetime       Default is 0 (diabled). Timestamp value in microseconds to use as the
+#    .writetime       Default is 0 (disabled). Timestamp value in microseconds to use as the
 #                     WRITETIME for the target record. This is useful when the WRITETIME of
 #                     the record in Origin cannot be determined (such as the only non-key
 #                     columns are collections). This parameter allows a crude constant value
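For context, the constant-WRITETIME option described in this hunk would be set like any other property in this file. A minimal sketch, assuming the nested `.custom` / `.writetime` keys shown above resolve to the full key `spark.cdm.transform.custom.writetime` (the value below is purely illustrative):

```properties
# Hypothetical example: stamp every target record with a fixed WRITETIME of
# 2023-01-01 00:00:00 UTC, expressed in microseconds since the Unix epoch.
# Leaving the property at its default of 0 keeps the feature disabled.
spark.cdm.transform.custom.writetime   1672531200000000
```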
@@ -254,8 +254,10 @@ spark.cdm.perfops.ratelimit.target 40000
 
 #===========================================================================================================
 # Java Filters are applied on the client node. Data must be pulled from the origin cluster and then filtered,
-# but this may have a lower impact on the production cluster than the Cassandra Filters.
-# node may need to do a lot more work than is normal.
+# but this may have a lower impact on the production cluster than the Cassandra Filters. Java filters put
+# load onto the Cassandra Data Migrator processing node, by sending more data from Cassandra.
+# Cassandra filters put load on the Cassandra nodes, notably because Cassandra Data Migrator specifies
+# ALLOW FILTERING, which could cause the coordinator node to perform a lot more work.
 #
 # spark.cdm.filter.java
 #   .token.percent   : Percent (between 1 and 100) of the token in each Split that will be migrated.
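The Java-filter property documented in this hunk combines the `spark.cdm.filter.java` prefix with the `.token.percent` suffix. A sketch of how it might look uncommented (the value 10 is illustrative, not a recommendation):

```properties
# Hypothetical example: migrate only 10% of the tokens in each Split.
# Because this is a Java (client-side) filter, all rows are still read
# from Origin; the filtering happens on the Cassandra Data Migrator node.
spark.cdm.filter.java.token.percent    10
```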
@@ -299,6 +301,7 @@ spark.cdm.perfops.ratelimit.target 40000
 #                     because some type values contain commas, e.g. lists, maps, sets, etc.
 #-----------------------------------------------------------------------------------------------------------
 #spark.cdm.feature.constantColumns.names      const1,const2
+#spark.cdm.feature.constantColumns.types
 #spark.cdm.feature.constantColumns.values     'abcd',1234
 #spark.cdm.feature.constantColumns.splitRegex ,
 
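The commented `.types` line added by this commit is a placeholder; in use it would be filled in alongside the other constant-column properties. A hypothetical uncommented configuration, assuming the types are CQL type names matching the sample values already shown (`'abcd'` and `1234`):

```properties
# Hypothetical example: add two constant columns to every target row.
# The .types value below (text,int) is an assumption chosen to match the
# sample values; a custom splitRegex is needed because some type values
# contain commas (lists, maps, sets, etc.).
spark.cdm.feature.constantColumns.names       const1,const2
spark.cdm.feature.constantColumns.types       text,int
spark.cdm.feature.constantColumns.values      'abcd',1234
spark.cdm.feature.constantColumns.splitRegex  ,
```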
