articles/cosmos-db/nosql/change-feed-processor.md
15 additions & 29 deletions
@@ -8,7 +8,7 @@ ms.service: cosmos-db
 ms.subservice: nosql
 ms.devlang: csharp
 ms.topic: conceptual
-ms.date: 04/05/2022
+ms.date: 04/26/2023
 ms.custom: devx-track-csharp
 ---
@@ -106,12 +106,6 @@ The number of instances can grow and shrink, and the change feed processor will

 Moreover, the change feed processor can dynamically adjust as containers scale due to throughput or storage increases. When your container grows, the change feed processor transparently handles these scenarios by dynamically increasing the leases and distributing the new leases among existing instances.
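
This scaling behavior is easiest to see in code. Below is a minimal C# sketch of one deployment unit, assuming the Microsoft.Azure.Cosmos v3 builder API; the database, container, and handler names are hypothetical placeholders. Every compute instance builds the processor with the same processor name and lease container but a unique instance name, which is what lets leases be redistributed as instances come and go.

```csharp
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

// One deployment unit: same processor name and lease container on every
// compute instance, but a unique instance name per instance.
static ChangeFeedProcessor BuildProcessor(CosmosClient client, string instanceName)
{
    // "databaseId", "monitored", and "leases" are hypothetical names.
    Container monitoredContainer = client.GetContainer("databaseId", "monitored");
    Container leaseContainer = client.GetContainer("databaseId", "leases");

    return monitoredContainer
        .GetChangeFeedProcessorBuilder<dynamic>("changeFeedSample", HandleChangesAsync)
        .WithInstanceName(instanceName) // unique per compute instance
        .WithLeaseContainer(leaseContainer)
        .Build();
}

// Delegate invoked with each batch of changes read from the change feed.
static Task HandleChangesAsync(IReadOnlyCollection<dynamic> changes, CancellationToken cancellationToken)
    => Task.CompletedTask;
```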

-## Change feed and provisioned throughput

-Change feed read operations on the monitored container will consume [request units](../request-units.md). Make sure your monitored container is not experiencing [throttling](troubleshoot-request-rate-too-large.md), otherwise you will experience delays in receiving change feed events on your processors.

-Operations on the lease container (updating and maintaining state) consume [request units](../request-units.md). The higher the number of instances using the same lease container, the higher the potential request units consumption will be. Make sure your lease container is not experiencing [throttling](troubleshoot-request-rate-too-large.md), otherwise you will experience delays in receiving change feed events on your processors, in some cases where throttling is high, the processors might stop processing completely.

 ## Starting time

 By default, when a change feed processor starts for the first time, it initializes the leases container and starts its [processing life cycle](#processing-life-cycle). Any changes that happened in the monitored container before the change feed processor was initialized for the first time won't be detected.
@@ -124,9 +118,6 @@ It's possible to initialize the change feed processor to read changes starting a
 The change feed processor will be initialized for that specific date and time and start reading the changes that happened after.

-> [!NOTE]
-> Starting the change feed processor at a specific date and time is not supported in multi-region write accounts.
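
For illustration, a minimal sketch of the start-time configuration described above, assuming the v3 `ChangeFeedProcessorBuilder` and the hypothetical container and handler names from the earlier sketch:

```csharp
// Assumes the usings and hypothetical names from the earlier sketch.
ChangeFeedProcessor processor = monitoredContainer
    .GetChangeFeedProcessorBuilder<dynamic>("changeFeedTime", HandleChangesAsync)
    .WithInstanceName("consoleHost")
    .WithLeaseContainer(leaseContainer)
    // Only changes that happened after this UTC instant are read.
    .WithStartTime(new DateTime(2023, 4, 1, 0, 0, 0, DateTimeKind.Utc))
    .Build();
```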

 ### Reading from the beginning

 In other scenarios, like data migrations or analyzing the entire history of a container, we need to read the change feed from **the beginning of that container's lifetime**. To do that, we can use `WithStartTime` on the builder extension, passing `DateTime.MinValue.ToUniversalTime()`, which generates the UTC representation of the minimum `DateTime` value, like so:
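
A minimal sketch of that call, assuming the same hypothetical names as the earlier sketches:

```csharp
// Assumes the usings and hypothetical names from the earlier sketches.
ChangeFeedProcessor processor = monitoredContainer
    .GetChangeFeedProcessorBuilder<dynamic>("changeFeedBeginning", HandleChangesAsync)
    .WithInstanceName("consoleHost")
    .WithLeaseContainer(leaseContainer)
    // The UTC representation of the minimum DateTime value: the processor
    // reads the container's change feed from the beginning of its lifetime.
    .WithStartTime(DateTime.MinValue.ToUniversalTime())
    .Build();
```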
@@ -175,16 +166,6 @@ To prevent your change feed processor from getting "stuck" continuously retrying
 In addition, you can use the [change feed estimator](how-to-use-change-feed-estimator.md) to monitor the progress of your change feed processor instances as they read the change feed.
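
A short sketch of polling that estimate, assuming the v3 `GetChangeFeedEstimator` API on the monitored container; the processor name must match the one the processor was built with, and the container names remain the hypothetical ones used earlier:

```csharp
// Assumes the usings and hypothetical names from the earlier sketches.
ChangeFeedEstimator estimator = monitoredContainer
    .GetChangeFeedEstimator("changeFeedSample", leaseContainer);

using FeedIterator<ChangeFeedProcessorState> iterator = estimator.GetCurrentStateIterator();
while (iterator.HasMoreResults)
{
    FeedResponse<ChangeFeedProcessorState> states = await iterator.ReadNextAsync();
    foreach (ChangeFeedProcessorState state in states)
    {
        // EstimatedLag is the number of changes not yet processed for this lease.
        Console.WriteLine($"Lease {state.LeaseToken}: ~{state.EstimatedLag} changes pending");
    }
}
```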

-<!-- ## Life-cycle notifications

-The change feed processor lets you hook to relevant events in its [life cycle](#processing-life-cycle), you can choose to be notified to one or all of them. The recommendation is to at least register the error notification:

-* Register a handler for `WithLeaseAcquireNotification` to be notified when the current host acquires a lease to start processing it.
-* Register a handler for `WithLeaseReleaseNotification` to be notified when the current host releases a lease and stops processing it.
-* Register a handler for `WithErrorNotification` to be notified when the current host encounters an exception during processing, being able to distinguish if the source is the user delegate (unhandled exception) or an error the processor is encountering trying to access the monitored container (for example, networking issues).
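
The notification hooks named in the commented-out list above are v3 builder methods; a minimal sketch of registering all three, with illustrative handler bodies and the same hypothetical names as earlier:

```csharp
// Assumes the usings and hypothetical names from the earlier sketches.
ChangeFeedProcessor processor = monitoredContainer
    .GetChangeFeedProcessorBuilder<dynamic>("changeFeedNotifications", HandleChangesAsync)
    .WithInstanceName("consoleHost")
    .WithLeaseContainer(leaseContainer)
    .WithLeaseAcquireNotification(leaseToken =>
    {
        Console.WriteLine($"Lease {leaseToken} acquired.");
        return Task.CompletedTask;
    })
    .WithLeaseReleaseNotification(leaseToken =>
    {
        Console.WriteLine($"Lease {leaseToken} released.");
        return Task.CompletedTask;
    })
    .WithErrorNotification((leaseToken, exception) =>
    {
        // Inspect the exception to distinguish user-delegate errors from
        // errors accessing the monitored container (for example, networking issues).
        Console.WriteLine($"Error on lease {leaseToken}: {exception.Message}");
        return Task.CompletedTask;
    })
    .Build();
```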

 A single change feed processor deployment unit consists of one or more compute instances with the same lease container configuration and the same `leasePrefix`, but a different `hostName` each. You can have many deployment units where each one has a different business flow for the changes, and each deployment unit consists of one or more instances.
@@ -205,12 +186,6 @@ The number of instances can grow and shrink, and the change feed processor will

 Moreover, the change feed processor can dynamically adjust as containers scale due to throughput or storage increases. When your container grows, the change feed processor transparently handles these scenarios by dynamically increasing the leases and distributing the new leases among existing instances.

-## Change feed and provisioned throughput

-Change feed read operations on the monitored container will consume [request units](../request-units.md). Make sure your monitored container is not experiencing [throttling](troubleshoot-request-rate-too-large.md), otherwise you will experience delays in receiving change feed events on your processors.

-Operations on the lease container (updating and maintaining state) consume [request units](../request-units.md). The higher the number of instances using the same lease container, the higher the potential request units consumption will be. Make sure your lease container is not experiencing [throttling](troubleshoot-request-rate-too-large.md), otherwise you will experience delays in receiving change feed events on your processors, in some cases where throttling is high, the processors might stop processing completely.

 ## Starting time

 By default, when a change feed processor starts for the first time, it initializes the leases container and starts its [processing life cycle](#processing-life-cycle). Any changes that happened in the monitored container before the change feed processor was initialized for the first time won't be detected.
@@ -219,9 +194,6 @@ By default, when a change feed processor starts the first time, it will initiali
 It's possible to initialize the change feed processor to read changes starting at a **specific date and time** by setting `setStartTime` in `options`. The change feed processor will be initialized for that specific date and time and start reading the changes that happened after.

-> [!NOTE]
-> Starting the change feed processor at a specific date and time is not supported in multi-region write accounts.

 ### Reading from the beginning

 In our sample above, we set `setStartFromBeginning` to `false`, which is the same as the default value. In other scenarios, like data migrations or analyzing the entire history of a container, we need to read the change feed from **the beginning of that container's lifetime**. To do that, we can set `setStartFromBeginning` to `true`. The change feed processor will be initialized and start reading changes from the beginning of the lifetime of the container.
@@ -231,6 +203,20 @@ In our above sample, we set `setStartFromBeginning` to `false`, which is the sam
 ---

+## Change feed and provisioned throughput

+Change feed read operations on the monitored container will consume [request units](../request-units.md). Make sure your monitored container is not experiencing [throttling](troubleshoot-request-rate-too-large.md); otherwise, you will experience delays in receiving change feed events on your processors.

+Operations on the lease container (updating and maintaining state) consume [request units](../request-units.md). The higher the number of instances using the same lease container, the higher the potential request unit consumption will be. Make sure your lease container is not experiencing [throttling](troubleshoot-request-rate-too-large.md); otherwise, you will experience delays in receiving change feed events on your processors. In cases where throttling is high, the processors might stop processing completely.
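
One way to reduce the risk of throttling is to give the lease container its own provisioned throughput, sized for your instance count. A sketch, assuming the v3 SDK and the hypothetical names used earlier; the 400 RU/s value is only a starting point:

```csharp
// Assumes the usings and a hypothetical CosmosClient from the earlier sketches.
// 400 RU/s is illustrative; monitor for 429s and size for your own workload.
Database database = client.GetDatabase("databaseId");
Container leaseContainer = await database.CreateContainerIfNotExistsAsync(
    new ContainerProperties(id: "leases", partitionKeyPath: "/id"),
    throughput: 400);
```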

+## Customizing lease operations

+There are three key configurations that can affect the change feed processor's behavior. In all cases, they affect the [request unit consumption on the lease container](#change-feed-and-provisioned-throughput). These configurations can be changed during the creation of the change feed processor (see the sketch after this list):

+1. Lease Acquire: By default every 17 seconds. A host will periodically check the state of the lease store and consider acquiring leases as part of the [dynamic scaling](#dynamic-scaling) process. This process is done by executing a Query on the lease container. Reducing this value makes rebalancing and acquiring leases faster, but increases [request unit consumption on the lease container](#change-feed-and-provisioned-throughput).
+1. Lease Expiration: By default 60 seconds. Defines the maximum amount of time that a lease can exist without any renewal activity before it's acquired by another host. If a host crashes, the leases it owned will be picked up by other hosts after this period of time plus the configured renewal interval. Reducing this value makes recovering after a host crash faster, but the expiration value should never be lower than the renewal interval.
+1. Lease Renewal: By default every 13 seconds. A host owning a lease will periodically renew it, even if there are no new changes to consume. This process is done by executing a Replace on the lease. Reducing this value lowers the time required to detect leases lost due to a host crashing, but increases [request unit consumption on the lease container](#change-feed-and-provisioned-throughput).
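
A sketch of tuning these three intervals, assuming the v3 builder's `WithLeaseConfiguration(acquireInterval, expirationInterval, renewInterval)` signature and the hypothetical names used earlier; the values shown match the defaults listed above:

```csharp
// Assumes the usings and hypothetical names from the earlier sketches.
ChangeFeedProcessor processor = monitoredContainer
    .GetChangeFeedProcessorBuilder<dynamic>("changeFeedCustomLeases", HandleChangesAsync)
    .WithInstanceName("consoleHost")
    .WithLeaseContainer(leaseContainer)
    .WithLeaseConfiguration(
        acquireInterval: TimeSpan.FromSeconds(17),    // how often hosts look for leases to acquire
        expirationInterval: TimeSpan.FromSeconds(60), // must never be lower than the renew interval
        renewInterval: TimeSpan.FromSeconds(13))      // how often owned leases are renewed
    .Build();
```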

 ## Sharing the lease container

 You can share the lease container across multiple [deployment units](#deployment-unit); each deployment unit would be listening to a different monitored container or have a different `processorName`. With this configuration, each deployment unit would maintain an independent state on the lease container. Review the [request unit consumption on the lease container](#change-feed-and-provisioned-throughput) to make sure the provisioned throughput is enough for all the deployment units.
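
A sketch of two deployment units sharing one lease container, with hypothetical container and handler names; the distinct processor names are what keep each unit's state independent within the shared container:

```csharp
// Assumes the usings from the earlier sketches; ordersContainer, usersContainer,
// sharedLeaseContainer, and the two handlers are hypothetical placeholders.
ChangeFeedProcessor ordersProcessor = ordersContainer
    .GetChangeFeedProcessorBuilder<dynamic>("ordersFlow", HandleOrdersAsync)
    .WithInstanceName("host1")
    .WithLeaseContainer(sharedLeaseContainer) // shared across both deployment units
    .Build();

ChangeFeedProcessor usersProcessor = usersContainer
    .GetChangeFeedProcessorBuilder<dynamic>("usersFlow", HandleUsersAsync)
    .WithInstanceName("host1")
    .WithLeaseContainer(sharedLeaseContainer) // different processorName keeps state separate
    .Build();
```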