articles/cosmos-db/sql/change-feed-processor.md
ms.service: cosmos-db
ms.subservice: cosmosdb-sql
ms.devlang: csharp
ms.topic: conceptual
ms.date: 04/05/2022
ms.reviewer: sngun
ms.custom: devx-track-csharp
---
There are four main components of implementing the change feed processor:

1. **The compute instance**: A compute instance hosts the change feed processor to listen for changes. Depending on the platform, it could be represented by a VM, a Kubernetes pod, an Azure App Service instance, or an actual physical machine. It has a unique identifier referenced as the *instance name* throughout this article.

1. **The delegate:** The delegate is the code that defines what you, the developer, want to do with each batch of changes that the change feed processor reads.

To further understand how these four elements of the change feed processor work together, let's look at an example in the following diagram. The monitored container stores documents and uses 'City' as the partition key. We see that the partition key values are distributed in ranges (each range representing a [physical partition](../partitioning-overview.md#physical-partitions)) that contain items.

There are two compute instances, and the change feed processor assigns different ranges to each instance to maximize compute distribution; each instance has a unique and different name.

Each range is read in parallel, and its progress is maintained separately from other ranges in the lease container through a *lease* document. The combination of the leases represents the current state of the change feed processor.
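The components above come together when building the processor with the .NET SDK. The following is a minimal sketch, not a complete application; the `ToDoItem` type and the container and database names are hypothetical placeholders:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

// The monitored container: the data source the change feed is generated from.
Container monitoredContainer = cosmosClient.GetContainer("databaseName", "monitoredContainer");

// The lease container: stores per-range progress as lease documents.
Container leaseContainer = cosmosClient.GetContainer("databaseName", "leases");

ChangeFeedProcessor changeFeedProcessor = monitoredContainer
    .GetChangeFeedProcessorBuilder<ToDoItem>(
        processorName: "changeFeedSample",
        onChangesDelegate: HandleChangesAsync)  // the delegate
    .WithInstanceName("consoleHost")            // the compute instance's unique name
    .WithLeaseContainer(leaseContainer)
    .Build();

await changeFeedProcessor.StartAsync();

// The delegate: receives each batch of changes read from the monitored container.
static async Task HandleChangesAsync(
    IReadOnlyCollection<ToDoItem> changes, CancellationToken cancellationToken)
{
    foreach (ToDoItem item in changes)
    {
        Console.WriteLine($"Detected change for item with id {item.id}");
    }
    await Task.CompletedTask;
}
```

Call `StopAsync()` on the processor instance to release its leases and stop listening for changes.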
## Deployment unit

A single change feed processor deployment unit consists of one or more compute instances with the same `processorName` and lease container configuration, but a different instance name each. You can have many deployment units, where each one has a different business flow for the changes and each deployment unit consists of one or more instances.

For example, you might have one deployment unit that triggers an external API anytime there is a change in your container. Another deployment unit might move data, in real time, each time there is a change. When a change happens in your monitored container, all your deployment units will get notified.
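Scaling a deployment unit amounts to running the same configuration on another host under a different instance name. A sketch, assuming the same hypothetical `monitoredContainer`, `leaseContainer`, and `HandleChangesAsync` delegate as in a typical setup:

```csharp
using Microsoft.Azure.Cosmos;

// On a second host: same processorName and lease container as the first
// instance, but a different, unique instance name.
ChangeFeedProcessor secondInstance = monitoredContainer
    .GetChangeFeedProcessorBuilder<ToDoItem>("changeFeedSample", HandleChangesAsync)
    .WithInstanceName("consoleHost2")  // must differ from every other instance
    .WithLeaseContainer(leaseContainer)
    .Build();

await secondInstance.StartAsync();
// The processor redistributes the lease ranges across both running instances.
```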
## Change feed and provisioned throughput

Change feed read operations on the monitored container consume [request units](../request-units.md). Make sure your monitored container is not experiencing [throttling](troubleshoot-request-rate-too-large.md); otherwise you will experience delays in receiving change feed events on your processors.

Operations on the lease container (updating and maintaining state) also consume [request units](../request-units.md). The higher the number of instances using the same lease container, the higher the potential request unit consumption will be. Remember to monitor your request unit consumption on the lease container if you decide to scale and increase the number of instances. Make sure your lease container is not experiencing [throttling](troubleshoot-request-rate-too-large.md); otherwise you will experience delays in receiving change feed events on your processors, and in cases where throttling is high, the processors might stop processing completely.
## Starting time
> [!NOTE]
> These customization options only work to set up the starting point in time of the change feed processor. Once the lease container is initialized for the first time, changing them has no effect.
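One such customization is starting from a specific point in time with `WithStartTime` on the builder. A sketch, reusing the hypothetical `monitoredContainer`, `leaseContainer`, and `HandleChangesAsync` names from earlier examples:

```csharp
using System;
using Microsoft.Azure.Cosmos;

// Start reading changes from a specific point in time (UTC) instead of "now".
// This is only honored when the lease container is initialized for the first time.
ChangeFeedProcessor processorFromTime = monitoredContainer
    .GetChangeFeedProcessorBuilder<ToDoItem>("changeFeedSample", HandleChangesAsync)
    .WithInstanceName("consoleHost")
    .WithLeaseContainer(leaseContainer)
    .WithStartTime(DateTime.UtcNow.AddDays(-1))  // for example, replay the last 24 hours
    .Build();

await processorFromTime.StartAsync();
```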
## Sharing the lease container

You can share the lease container across multiple [deployment units](#deployment-unit). In this configuration, each deployment unit would be listening to a different monitored container or have a different `processorName`, and each would maintain an independent state on the lease container. Review the [request unit consumption on the lease container](#change-feed-and-provisioned-throughput) to make sure the provisioned throughput is enough for all the deployment units.
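Two deployment units sharing one lease container stay isolated as long as their `processorName` values differ. A sketch, where `sharedLeaseContainer`, `NotifyApiAsync`, and `MoveDataAsync` are hypothetical names standing in for your own lease container and delegates:

```csharp
using Microsoft.Azure.Cosmos;

// Deployment unit 1: calls an external API on each change.
ChangeFeedProcessor apiNotifier = monitoredContainer
    .GetChangeFeedProcessorBuilder<ToDoItem>("apiNotifier", NotifyApiAsync)
    .WithInstanceName("host1")
    .WithLeaseContainer(sharedLeaseContainer)
    .Build();

// Deployment unit 2: moves data in real time on each change.
ChangeFeedProcessor dataMover = monitoredContainer
    .GetChangeFeedProcessorBuilder<ToDoItem>("dataMover", MoveDataAsync)
    .WithInstanceName("host1")
    .WithLeaseContainer(sharedLeaseContainer)  // same container, independent lease documents
    .Build();

await apiNotifier.StartAsync();
await dataMover.StartAsync();
```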
## Where to host the change feed processor

The change feed processor can be hosted in any platform that supports long-running processes or tasks: