articles/service-bus-messaging/service-bus-messaging-exceptions-latest.md (+41 lines changed: 41 additions & 0 deletions)
@@ -200,6 +200,47 @@ We recommend that you follow these verification steps, depending on the type of
- [Verify the SAS token was generated correctly](service-bus-sas.md)
- [Verify the correct role-based access control (RBAC) roles were granted](service-bus-managed-service-identity.md)
## Geo-Replication exceptions
### ServerBusyException
#### Causes
- During asynchronous replication (replication lag greater than zero), the client tries to perform an operation on a Service Bus entity (queue or topic), or performs a management operation, but the operation can't be completed because the replication lag between the primary and secondary regions has exceeded the maximum allowed replication lag in seconds.
  - **Example**: The operation is being throttled because with it the new replication lag would reach 38323 seconds, which is greater than the maximum replication lag that was set (300 seconds). The current replication lag for the latest operation being replicated is 0 seconds.
- The replication queue for an entity exceeds its maximum size in bytes. The maximum size in bytes for a replication queue is an internal limit set by Service Bus.
- In synchronous replication, a request times out while waiting for another request to replicate.
  - **Example**: High volume of requests from client application for skarri-stroage-exp1(westus3)/q1:MessagingJournal. Replication to other region(s) is in progress.
#### Resolution
- The client should back off to give the service time to process its current workload, and then retry. This condition can also indicate a replication issue between the primary and secondary regions, and can be used as a signal that promotion of a secondary region should be triggered.
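
A minimal C# sketch of this back-off-and-retry guidance, using the `Azure.Messaging.ServiceBus` client, is shown below. The connection string, queue name, retry count, and delays are illustrative placeholders, not recommended values.

```csharp
using System;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

// Placeholders: substitute your own connection string and queue name.
await using var client = new ServiceBusClient("<connection-string>");
ServiceBusSender sender = client.CreateSender("<queue-name>");

var message = new ServiceBusMessage("payload");
const int maxAttempts = 5;

for (int attempt = 1; attempt <= maxAttempts; attempt++)
{
    try
    {
        await sender.SendMessageAsync(message);
        break; // Send succeeded.
    }
    catch (ServiceBusException ex) when (ex.Reason == ServiceBusFailureReason.ServiceBusy)
    {
        // The service is throttling (for example, because replication lag exceeded
        // the configured maximum). Back off with an increasing delay, then retry.
        if (attempt == maxAttempts)
        {
            throw;
        }
        await Task.Delay(TimeSpan.FromSeconds(Math.Pow(2, attempt)));
    }
}
```

The client's built-in retry policy already treats `ServiceBusy` as transient, so an explicit loop like this one is mainly useful when you want a longer back-off than the configured retry options allow.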
### Timeout
#### Cause
- A timeout exception in Geo-Replication means that the operation didn't complete within the client-provided timeout.
- In synchronous replication, an operation’s primary region write and replication to secondary regions are within the scope of the operation’s timeout.
- In asynchronous replication, an operation's primary region write is within the scope of the operation's timeout, but its replication to secondary regions is not.
  - **Example**: The operation did not complete within the allocated time 00:01:00 for object message. (ServiceTimeout).
#### Resolution
- The client should retry the operation.
- Some steps of a timed-out operation might have completed. It's possible that a timed-out operation was written to the primary region and to some secondary regions. Once an operation has been written to the primary region, it's eventually replicated to all secondary regions, regardless of the client timeout.
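
The client-provided timeout mentioned above is controlled through the client's retry options. The following is a minimal sketch with illustrative values only: `TryTimeout` is the per-attempt timeout that a `ServiceTimeout` is measured against, and `MaxRetries` lets the client retry timed-out attempts before surfacing the exception.

```csharp
using System;
using Azure.Messaging.ServiceBus;

// Values below are illustrative, not recommendations.
var options = new ServiceBusClientOptions
{
    RetryOptions = new ServiceBusRetryOptions
    {
        // Per-attempt (client-provided) timeout for each service operation.
        TryTimeout = TimeSpan.FromMinutes(1),

        // Retry timed-out attempts automatically before throwing.
        MaxRetries = 3,
        Mode = ServiceBusRetryMode.Exponential,
        Delay = TimeSpan.FromSeconds(1),
        MaxDelay = TimeSpan.FromSeconds(30)
    }
};

// "<connection-string>" is a placeholder.
await using var client = new ServiceBusClient("<connection-string>", options);
```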
### BadRequest
#### Cause
- During a planned failover, the primary region is temporarily set as read-only to allow the secondary region to catch up. If the client attempts a write operation to the primary region while it's in this temporary read-only state, the client receives a BadRequest exception.
  - **Example**: Replication role switch in progress, primary replica:<entity-name> is ReadOnly.
#### Resolution
- The client must wait for the planned failover to complete before write operations succeed.
- If the planned failover takes too long, it's possible to trigger a forced failover instead.
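
As an illustrative sketch only: a client can tolerate the temporary read-only window by retrying its writes with a delay until the failover completes. This assumes the read-only rejection surfaces to the .NET client as a `ServiceBusException`; the exact failure reason reported during a role switch isn't documented here, so treat the exception filter and the delay below as assumptions.

```csharp
using System;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

// Placeholders: substitute your own connection string and queue name.
await using var client = new ServiceBusClient("<connection-string>");
ServiceBusSender sender = client.CreateSender("<queue-name>");

var message = new ServiceBusMessage("payload");

// Production code should cap the total wait rather than loop indefinitely.
while (true)
{
    try
    {
        await sender.SendMessageAsync(message);
        break; // Write accepted: the role switch has completed.
    }
    catch (ServiceBusException ex)
    {
        // Assumption: the read-only rejection during a planned failover surfaces
        // as a ServiceBusException. Wait, then attempt the write again.
        Console.WriteLine($"Write rejected ({ex.Reason}); waiting for failover to complete.");
        await Task.Delay(TimeSpan.FromSeconds(30));
    }
}
```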