
Commit 931e422

Update articles/cosmos-db/sql/throughput-control-spark.md
Co-authored-by: Fabian Meiswinkel <[email protected]>
1 parent 0e56187 commit 931e422

File tree

1 file changed (+1, -1 lines changed)


articles/cosmos-db/sql/throughput-control-spark.md

Lines changed: 1 addition & 1 deletion
@@ -76,7 +76,7 @@ In the above example, the `targetThroughputThreshold` is defined as **0.95**, so
  > Throughput control does not do RU pre-calculation of each operation. Instead, it tracks the RU usages after the operation based on the response header. As such, throughput control is based on an approximation - and does not guarantee that amount of throughput will be available for the group at any given time.

  > [!WARNING]
- > The `targetThroughputThreshold` is **immutable**. If you change the target throughput threshold value, this will create a new throughput control group. You need to restart all Spark jobs that are using the group to ensure they all consume the new threshold.
+ > The `targetThroughputThreshold` is **immutable**. If you change the target throughput threshold value, this will create a new throughput control group (but as long as you use version 4.10.0 or later, it can have the same name). You need to restart all Spark jobs that are using the group if you want to ensure they all consume the new threshold immediately (otherwise they will pick up the new threshold after their next restart).


  For each Spark client that uses the throughput control group, a record will be created in the `ThroughputControl` container which looks like the below:
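
For orientation, here is a minimal PySpark sketch of how a throughput control group with a `targetThroughputThreshold` of 0.95 might be configured through the Spark 3 OLTP connector. The `spark.cosmos.throughputControl.*` option names follow the connector documentation, but the account, database, container, and group names below are illustrative placeholders; verify the exact options against the article and the connector version you use.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("cosmos-throughput-control-demo").getOrCreate()

# Connection settings (placeholders; replace with your account values).
cosmos_config = {
    "spark.cosmos.accountEndpoint": "https://<account>.documents.azure.com:443/",
    "spark.cosmos.accountKey": "<account-key>",
    "spark.cosmos.database": "SampleDB",
    "spark.cosmos.container": "SampleContainer",
    # Throughput control group settings (option names assumed from the
    # Spark 3 OLTP connector docs; check them against your connector version).
    "spark.cosmos.throughputControl.enabled": "true",
    "spark.cosmos.throughputControl.name": "SourceContainerThroughputControl",
    # Changing this value effectively defines a new throughput control group;
    # restart Spark jobs sharing the group so they all honor the new threshold.
    "spark.cosmos.throughputControl.targetThroughputThreshold": "0.95",
    # Database/container used by global throughput control to coordinate
    # clients; this is where the per-client records described above are written.
    "spark.cosmos.throughputControl.globalControl.database": "SampleDB",
    "spark.cosmos.throughputControl.globalControl.container": "ThroughputControl",
}

# Read from the source container with throughput control applied.
df = spark.read.format("cosmos.oltp").options(**cosmos_config).load()
df.show(5)
```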
