---
title: Change feed processor in Azure Cosmos DB
description: Learn how to use the Azure Cosmos DB change feed processor to read the change feed, the components of the change feed processor
author: timsander1
ms.author: tisande
ms.service: cosmos-db
ms.devlang: dotnet
ms.topic: conceptual
ms.date: 4/29/2020
ms.reviewer: sngun
---

# Change feed processor in Azure Cosmos DB

The change feed processor is part of the [Azure Cosmos DB SDK V3](https://github.com/Azure/azure-cosmos-dotnet-v3). It simplifies the process of reading the change feed and distributing the event processing across multiple consumers effectively.

The main benefit of the change feed processor is its fault-tolerant behavior, which assures "at-least-once" delivery of all the events in the change feed.

## Components of the change feed processor

There are four main components of implementing the change feed processor:

1. **The monitored container:** The monitored container has the data from which the change feed is generated. Any inserts and updates to the monitored container are reflected in the change feed of the container.

1. **The lease container:** The lease container acts as state storage and coordinates processing of the change feed across multiple workers. The lease container can be stored in the same account as the monitored container or in a separate account.

1. **The host:** A host is an application instance that uses the change feed processor to listen for changes. Multiple instances with the same lease configuration can run in parallel, but each instance should have a different **instance name**.

1. **The delegate:** The delegate is the code that defines what you, the developer, want to do with each batch of changes that the change feed processor reads.
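
The four components above can be wired together with the V3 .NET SDK's builder API. The following is a minimal sketch, not a complete application; the database name, container names, processor name, instance name, and the `ToDoItem` type are placeholders you would replace with your own:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

// Placeholder item type matching the documents in the monitored container.
public class ToDoItem
{
    public string id { get; set; }
}

public static class ChangeFeedSample
{
    public static async Task StartChangeFeedProcessorAsync(CosmosClient cosmosClient)
    {
        // The monitored container: the source of the change feed.
        Container monitoredContainer = cosmosClient.GetContainer("databaseName", "monitoredContainer");

        // The lease container: stores state and coordinates the workers.
        Container leaseContainer = cosmosClient.GetContainer("databaseName", "leases");

        ChangeFeedProcessor changeFeedProcessor = monitoredContainer
            .GetChangeFeedProcessorBuilder<ToDoItem>("changeFeedSample", HandleChangesAsync)
            .WithInstanceName("consoleHost") // the host's instance name; unique per instance
            .WithLeaseContainer(leaseContainer)
            .Build();

        await changeFeedProcessor.StartAsync();
    }

    // The delegate: invoked with each batch of changes read from the change feed.
    static Task HandleChangesAsync(IReadOnlyCollection<ToDoItem> changes, CancellationToken cancellationToken)
    {
        foreach (ToDoItem item in changes)
        {
            Console.WriteLine($"Detected change for item: {item.id}");
        }
        return Task.CompletedTask;
    }
}
```

Running a second copy of this host with a different `WithInstanceName` value but the same lease container causes the two instances to split the work between them.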
## Error handling

The change feed processor is resilient to user code errors. If your delegate implementation has an unhandled exception (step #4), the thread processing that particular batch of changes is stopped and a new thread is created. The new thread checks the latest point in time that the lease store has saved for that range of partition key values and restarts from there, effectively sending the same batch of changes to the delegate. This behavior continues until your delegate processes the changes correctly, and it is the reason the change feed processor offers an "at least once" guarantee: if the delegate code throws an exception, that batch is retried.

To prevent your change feed processor from getting "stuck" continuously retrying the same batch of changes, you should add logic in your delegate code to write documents, upon exception, to a dead-letter queue. This design ensures that you can keep track of unprocessed changes while still being able to continue to process future changes. The dead-letter queue might simply be another Cosmos container. The exact data store does not matter; what matters is that the unprocessed changes are persisted.
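
One way to sketch this dead-letter pattern is to catch failures per item inside the delegate and persist the failed documents instead of rethrowing. Here `ProcessItemAsync` stands in for your business logic, and the dead-letter store is assumed to be a second Cosmos container; both are illustrative placeholders:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

// Placeholder item type matching the documents in the monitored container.
public class Item
{
    public string id { get; set; }
}

public class DeadLetterDelegate
{
    private readonly Container deadLetterContainer; // e.g. another Cosmos container

    public DeadLetterDelegate(Container deadLetterContainer)
    {
        this.deadLetterContainer = deadLetterContainer;
    }

    // Pass this method as the delegate when building the change feed processor.
    public async Task HandleChangesAsync(IReadOnlyCollection<Item> changes, CancellationToken cancellationToken)
    {
        foreach (Item item in changes)
        {
            try
            {
                await ProcessItemAsync(item, cancellationToken);
            }
            catch (Exception)
            {
                // Persist the unprocessed document instead of rethrowing, so the
                // batch completes and the processor is not stuck retrying it forever.
                await deadLetterContainer.UpsertItemAsync(item, cancellationToken: cancellationToken);
            }
        }
    }

    // Hypothetical business logic that might throw.
    private Task ProcessItemAsync(Item item, CancellationToken cancellationToken) => Task.CompletedTask;
}
```

Because the delegate never lets the exception escape, the lease checkpoint advances past the batch, and the dead-letter container retains everything that still needs attention.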

In addition, you can use the [change feed estimator](how-to-use-change-feed-estimator.md) to monitor the progress of your change feed processor instances as they read the change feed. Besides detecting whether the change feed processor is "stuck" continuously retrying the same batch of changes, you can also learn whether it is lagging behind due to limited resources such as CPU, memory, and network bandwidth.
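
As a minimal sketch of running an estimator alongside the processor (assuming the V3 SDK's estimator builder API), note that the processor name passed here must match the one used when building the processor, so the estimator reads the same leases:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static class EstimatorSample
{
    public static async Task StartEstimatorAsync(Container monitoredContainer, Container leaseContainer)
    {
        ChangeFeedProcessor estimator = monitoredContainer
            .GetChangeFeedEstimatorBuilder("changeFeedSample", HandleEstimationAsync, TimeSpan.FromSeconds(5))
            .WithLeaseContainer(leaseContainer)
            .Build();

        await estimator.StartAsync();
    }

    // Called periodically with the estimated number of changes not yet processed.
    static Task HandleEstimationAsync(long estimatedPendingChanges, CancellationToken cancellationToken)
    {
        // A number that grows and never drops suggests the processor is stuck or lagging.
        Console.WriteLine($"Estimated pending changes: {estimatedPendingChanges}");
        return Task.CompletedTask;
    }
}
```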

## Dynamic scaling
|
68 | 72 |
|
|
0 commit comments