Commit bea52d6

Spark V2->V3 Migration doc is missing the throughput control section (Azure#36269)
1 parent c1d6671 commit bea52d6

1 file changed

sdk/cosmos/azure-cosmos-spark_3_2-12/docs/migration.md

Lines changed: 1 addition & 1 deletion
@@ -17,7 +17,7 @@
 | changefeedmaxpagesperbatch | spark.cosmos.changeFeed.itemCountPerTriggerHint | |
 | WritingBatchSize | spark.cosmos.write.bulk.maxPendingOperations | Recommendation would be to start with the default (not specifying this config entry) - and only adjust (reduce) it when really necessary |
 | Upsert | spark.cosmos.write.strategy | If you use `ItemOverwrite` here the behavior is the same as with Upsert==true before |
-| WriteThroughputBudget | spark.cosmos.throughputControl.* | See the `Throughput control` section below |
+| WriteThroughputBudget | spark.cosmos.throughputControl.* | See the [Throughput control](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/cosmos/azure-cosmos-spark_3_2-12/docs/scenarios/Ingestion.md#throughput-control) section for more information |
 | MaxIngestionTaskParallelism | n/a | Not relevant anymore - just remove this config entry |
 | query_pagesize | n/a | Not relevant anymore - just remove this config entry |
 | query_custom | spark.cosmos.read.customQuery | When provided, the custom query will be processed against the Cosmos endpoint instead of dynamically generating the query via predicate push down. Usually it is recommended to rely on Spark's predicate push down because that will allow Spark to generate the most efficient set of filters based on the query plan. But there are a couple of predicates like aggregates (count, group by, avg, sum etc.) that cannot be pushed down yet (at least in Spark 3.1) - so the custom query is a fallback to allow them to be pushed into the query sent to Cosmos. |
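To see how the migrated settings fit together, here is a minimal Scala sketch of a V3 write that uses `spark.cosmos.write.strategy` and the `spark.cosmos.throughputControl.*` entries in place of the old `WriteThroughputBudget` setting. The account endpoint, key, database, and container values are placeholders, the `MigrationWriteJob` name and `ThroughputControl` container are assumed names, and the 0.4 threshold is only an illustrative value:

```scala
import org.apache.spark.sql.SparkSession

object CosmosV3WriteSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("cosmos-v3-migration-sketch").getOrCreate()
    import spark.implicits._

    // Stand-in data; in a real migration this is the DataFrame the V2 job was writing.
    val df = Seq(("id-1", "Alice"), ("id-2", "Bob")).toDF("id", "name")

    val cosmosWriteConfig = Map(
      // Placeholder account/database/container values.
      "spark.cosmos.accountEndpoint" -> "https://<account>.documents.azure.com:443/",
      "spark.cosmos.accountKey" -> "<account-key>",
      "spark.cosmos.database" -> "<database>",
      "spark.cosmos.container" -> "<container>",
      // V2 `Upsert == true` maps to the `ItemOverwrite` write strategy.
      "spark.cosmos.write.strategy" -> "ItemOverwrite",
      // Throughput control replaces WriteThroughputBudget: cap this job
      // at roughly 40% of the container's provisioned throughput.
      "spark.cosmos.throughputControl.enabled" -> "true",
      "spark.cosmos.throughputControl.name" -> "MigrationWriteJob",
      "spark.cosmos.throughputControl.targetThroughputThreshold" -> "0.4",
      // Global throughput control keeps its state in a dedicated container.
      "spark.cosmos.throughputControl.globalControl.database" -> "<database>",
      "spark.cosmos.throughputControl.globalControl.container" -> "ThroughputControl"
    )

    df.write
      .format("cosmos.oltp")
      .options(cosmosWriteConfig)
      .mode("APPEND")
      .save()
  }
}
```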
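Similarly, a hedged sketch of `spark.cosmos.read.customQuery` for an aggregate that Spark's predicate push down cannot handle, again with placeholder account, database, and container values and a hypothetical `c.category` document property:

```scala
import org.apache.spark.sql.SparkSession

object CosmosCustomQuerySketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("cosmos-custom-query-sketch").getOrCreate()

    val cosmosReadConfig = Map(
      // Placeholder account/database/container values.
      "spark.cosmos.accountEndpoint" -> "https://<account>.documents.azure.com:443/",
      "spark.cosmos.accountKey" -> "<account-key>",
      "spark.cosmos.database" -> "<database>",
      "spark.cosmos.container" -> "<container>",
      // GROUP BY cannot be pushed down by Spark (at least in Spark 3.1), so this
      // aggregate runs server-side in Cosmos via the custom query instead.
      // `c.category` is a hypothetical document property.
      "spark.cosmos.read.customQuery" ->
        "SELECT c.category, COUNT(1) AS itemCount FROM c GROUP BY c.category"
    )

    spark.read
      .format("cosmos.oltp")
      .options(cosmosReadConfig)
      .load()
      .show()
  }
}
```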
