articles/cosmos-db/sql/change-feed-processor.md (6 additions & 6 deletions)
@@ -7,7 +7,7 @@ ms.service: cosmos-db
ms.subservice: cosmosdb-sql
ms.devlang: csharp
ms.topic: conceptual
- ms.date: 11/16/2021
+ ms.date: 03/10/2022
ms.reviewer: sngun
ms.custom: devx-track-csharp
---
@@ -27,12 +27,12 @@ There are four main components of implementing the change feed processor:
1. **The lease container:** The lease container acts as state storage and coordinates processing of the change feed across multiple workers. The lease container can be stored in the same account as the monitored container or in a separate account.
- 1. **The host:** A host is an application instance that uses the change feed processor to listen for changes. Multiple instances with the same lease configuration can run in parallel, but each instance should have a different **instance name**.
+ 1. **The compute instance:** A compute instance hosts the change feed processor and listens for changes. Depending on the platform, it could be a VM, a Kubernetes pod, an Azure App Service instance, or a physical machine. It has a unique identifier referred to as the *instance name* throughout this article.
1. **The delegate:** The delegate is the code that defines what you, the developer, want to do with each batch of changes that the change feed processor reads.
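A delegate can be sketched as follows. This is a minimal illustration, not taken from the article's sample: `ToDoItem` is a hypothetical document type (the article's example container partitions on `City`), and the method signature matches the basic `ChangesHandler<T>` shape accepted by the .NET SDK's change feed processor builder.

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

// Hypothetical document type for illustration only.
public class ToDoItem
{
    public string id { get; set; }
    public string City { get; set; }
}

public static class ChangeFeedHandler
{
    // The delegate: called with each batch of changes the processor reads.
    public static async Task HandleChangesAsync(
        IReadOnlyCollection<ToDoItem> changes,
        CancellationToken cancellationToken)
    {
        foreach (ToDoItem item in changes)
        {
            // Your per-item business logic goes here.
            Console.WriteLine($"Detected change for item with id {item.id}");
        }
        await Task.CompletedTask; // placeholder for real async work
    }
}
```

Keeping the delegate small and idempotent is a common design choice, since the processor may re-deliver a batch after a failure.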
To further understand how these four elements of the change feed processor work together, let's look at the example in the following diagram. The monitored container stores documents and uses 'City' as the partition key. The partition key values are distributed in ranges that contain items.
- There are two host instances and the change feed processor is assigning different ranges of partition key values to each instance to maximize compute distribution.
+ There are two compute instances, and the change feed processor assigns different ranges of partition key values to each instance to maximize compute distribution; each instance has a unique and different name.
Each range is read in parallel, and its progress is maintained separately from other ranges in the lease container.

- Finally, you define a name for this processor instance with `WithInstanceName` and which container maintains the lease state with `WithLeaseContainer`.
+ Afterwards, you define the compute instance name or unique identifier with `WithInstanceName` (it should be unique and different in each compute instance you deploy), and finally which container maintains the lease state with `WithLeaseContainer`.
Calling `Build` will give you the processor instance that you can start by calling `StartAsync`.
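The builder chain described above can be sketched as follows. This is a minimal sketch, assuming an existing `CosmosClient`, and the database, container, processor, and instance names are illustrative rather than from the article; `HandleChangesAsync` stands for your own delegate with the `(IReadOnlyCollection<T>, CancellationToken) => Task` shape.

```csharp
using Microsoft.Azure.Cosmos;

// Assumed setup: names and connection string are placeholders.
CosmosClient client = new CosmosClient("<connection-string>");
Container monitoredContainer = client.GetContainer("databaseName", "monitored");
Container leaseContainer = client.GetContainer("databaseName", "leases");

ChangeFeedProcessor processor = monitoredContainer
    .GetChangeFeedProcessorBuilder<ToDoItem>(
        processorName: "changeFeedSample",
        onChangesDelegate: HandleChangesAsync)
    .WithInstanceName("consoleHost")   // must be unique per compute instance
    .WithLeaseContainer(leaseContainer) // same lease container for all instances
    .Build();

await processor.StartAsync();
// ... run until shutdown, then release the leases gracefully:
await processor.StopAsync();
```

Calling `StopAsync` on shutdown lets other instances pick up the released leases promptly.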
@@ -86,13 +86,13 @@ The change feed processor lets you hook to relevant events in its [life cycle](#
## Deployment unit
- A single change feed processor deployment unit consists of one or more instances with the same `processorName` and lease container configuration. You can have many deployment units where each one has a different business flow for the changes and each deployment unit consisting of one or more instances.
+ A single change feed processor deployment unit consists of one or more compute instances that share the same `processorName` and lease container configuration but each have a different instance name. You can have many deployment units, where each one has a different business flow for the changes and each deployment unit consists of one or more compute instances.
For example, you might have one deployment unit that triggers an external API anytime there is a change in your container. Another deployment unit might move data, in real time, each time there is a change. When a change happens in your monitored container, all your deployment units will get notified.
## Dynamic scaling
- As mentioned before, within a deployment unit you can have one or more instances. To take advantage of the compute distribution within the deployment unit, the only key requirements are:
+ As mentioned before, within a deployment unit you can have one or more compute instances. To take advantage of the compute distribution within the deployment unit, the only key requirements are:
1. All instances should have the same lease container configuration.
1. All instances should have the same `processorName`.
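The two requirements can be sketched as a helper that every compute instance calls. This is an illustrative sketch, not from the article: the container variables, processor name, and `HandleChangesAsync` delegate are assumptions, and deriving the instance name from `Environment.MachineName` is just one possible way to keep it unique per instance.

```csharp
using System;
using Microsoft.Azure.Cosmos;

// Same processorName and lease container on every instance;
// only the instance name differs.
static ChangeFeedProcessor BuildProcessor(
    Container monitoredContainer,
    Container leaseContainer,
    string instanceName)
{
    return monitoredContainer
        .GetChangeFeedProcessorBuilder<ToDoItem>(
            processorName: "changeFeedSample",   // identical across the deployment unit
            onChangesDelegate: HandleChangesAsync)
        .WithInstanceName(instanceName)           // unique per compute instance
        .WithLeaseContainer(leaseContainer)       // identical across the deployment unit
        .Build();
}

// Each compute instance passes its own unique name, for example:
var processor = BuildProcessor(
    monitoredContainer,
    leaseContainer,
    Environment.MachineName);
```

With this in place, adding a compute instance automatically rebalances the lease ownership, and removing one releases its leases to the remaining instances.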