articles/azure-signalr/signalr-howto-troubleshoot-guide.md
description: Learn how to troubleshoot common issues
author: vicancy
ms.service: azure-signalr-service
ms.topic: how-to
ms.date: 08/29/2024
ms.author: lianwei
ms.devlang: csharp
---
This article provides troubleshooting guidance for some of the common issues.
### Root cause
For HTTP/2, the maximum length for a single header is **4 K**, so if you use a browser to access the Azure service, you get an `ERR_CONNECTION_` error because of this limitation.

For HTTP/1.1 or C# clients, the maximum URI length is **12 K** and the maximum header length is **16 K**.

With SDK version **1.0.6** or later, `/negotiate` throws `413 Payload Too Large` when the generated access token is larger than **4 K**.
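If the token grows past these limits because of many or large claims, one mitigation (a minimal sketch, assuming the `Microsoft.Azure.SignalR` server SDK; the claim selection here is illustrative) is to trim what goes into the generated access token by customizing `ClaimsProvider`:

```csharp
using System.Security.Claims;
using Microsoft.Azure.SignalR;
using Microsoft.Extensions.DependencyInjection;

public static class SignalRConfig
{
    public static void Configure(IServiceCollection services)
    {
        services.AddSignalR().AddAzureSignalR(options =>
        {
            // Forward only the claims the hub actually needs, instead of every
            // claim on the incoming user, to keep the generated token small.
            options.ClaimsProvider = context => new[]
            {
                new Claim(ClaimTypes.NameIdentifier, context.User?.Identity?.Name ?? string.Empty)
            };
        });
    }
}
```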
* ASP.NET "No server available" error [#279](https://github.com/Azure/azure-signalr/issues/279)
* ASP.NET "The connection isn't active, data can't be sent to the service." error [#324](https://github.com/Azure/azure-signalr/issues/324)
* "An error occurred while making the HTTP request to `https://<API endpoint>`. This error might occur if the server certificate isn't properly configured with HTTP.SYS in the HTTPS case. It can also be caused by a mismatch of the security binding between the client and the server."
Check whether your client request has multiple `hub` query strings. `hub` is a reserved query parameter, and the service returns a 400 error if it detects more than one `hub` in the query.
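As a quick check, this hypothetical snippet (the URL and variable names are illustrative) counts how many `hub` values a request URL carries; anything more than one triggers the 400 response described above:

```csharp
using System;
using System.Web;

// Illustrative URL only; inspect the actual request your client sends.
var requestUrl = "https://contoso.service.signalr.net/client/?hub=chat&hub=chat";

var query = HttpUtility.ParseQueryString(new Uri(requestUrl).Query);
var hubValues = query.GetValues("hub");

if (hubValues is { Length: > 1 })
{
    Console.WriteLine($"The request carries {hubValues.Length} 'hub' parameters; the service accepts only one.");
}
```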
[Having issues or feedback about the troubleshooting? Let us know.](https://aka.ms/asrs/survey/troubleshooting)
For ASP.NET SignalR, the client sends a `/ping` "keep alive" request to the service.
### Solution
For security reasons, extending the TTL isn't encouraged. We suggest adding reconnect logic to the client to restart the connection when such a 401 occurs. When the client restarts the connection, it negotiates with the app server again and gets a renewed JWT token.
Check [here](#restart_connection) for how to restart client connections.
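As an illustration (a minimal sketch for an ASP.NET Core SignalR client; the endpoint URL is hypothetical and the linked restart sample may differ), the client can handle the `Closed` event, wait a random delay, and start the connection again, which triggers a fresh negotiation and therefore a renewed token:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR.Client;

var connection = new HubConnectionBuilder()
    .WithUrl("https://contoso.example.com/chathub") // hypothetical app server endpoint
    .Build();

connection.Closed += async error =>
{
    // Wait a random 0-5 seconds so many clients don't reconnect at the same instant,
    // then restart; StartAsync negotiates again and obtains a renewed access token.
    await Task.Delay(TimeSpan.FromSeconds(Random.Shared.Next(0, 5)));
    await connection.StartAsync();
};

await connection.StartAsync();
```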
For a SignalR persistent connection, the client first sends a `/negotiate` request to the Azure SignalR service.
## 404 returned for ASP.NET SignalR's reconnect request
For ASP.NET SignalR, when the [client connection drops](#client_connection_drop), it reconnects three times using the same `connectionId` before stopping the connection. `/reconnect` helps when the connection is dropped because of intermittent network issues, because in that case `/reconnect` can reestablish the persistent connection successfully. Under other circumstances, for example, when the client connection is dropped because the routed server connection is dropped, or when SignalR Service has internal errors like an instance restart, failover, or deployment, the connection no longer exists, so `/reconnect` returns `404`. This is the expected behavior for `/reconnect`, and after three retries the connection stops. We suggest adding [connection restart](#restart_connection) logic for when the connection stops.
[Having issues or feedback about the troubleshooting? Let us know.](https://aka.ms/asrs/survey/troubleshooting)
There are two cases.
For **Free** instances, the **concurrent** connection count limit is 20.

For **Standard** instances, the **concurrent** connection count limit **per unit** is 1 K, which means that Unit100 allows 100 K concurrent connections.

The connections include both client and server connections. Check [here](./signalr-concept-messages-and-connections.md#how-connections-are-counted) for how connections are counted.
### NegotiateThrottled
When there are too many client negotiate requests at the **same** time, they might get throttled. The limit relates to the unit count: more units allow a higher limit. In addition, we suggest adding a random delay before reconnecting; check [here](#restart_connection) for retry samples.
[Having issues or feedback about the troubleshooting? Let us know.](https://aka.ms/asrs/survey/troubleshooting)
Server-side logging for ASP.NET Core SignalR integrates with the `ILogger`-based logging.
Logger categories for Azure SignalR always start with `Microsoft.Azure.SignalR`. To enable detailed logs from Azure SignalR, configure the preceding prefixes to the `Debug` level in your **appsettings.json** file, as in the following example:
```json
{
  "Logging": {
    "LogLevel": {
      "Microsoft.Azure.SignalR": "Debug"
    }
  }
}
```
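If you prefer to configure this in code instead of **appsettings.json** (a minimal sketch; your host setup may differ), the same category prefix can be set to `Debug` when building the host:

```csharp
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;

var host = Host.CreateDefaultBuilder(args)
    .ConfigureLogging(logging =>
    {
        // Emit detailed Azure SignalR SDK logs.
        logging.AddFilter("Microsoft.Azure.SignalR", LogLevel.Debug);
    })
    .Build();
```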
When the client is connected to Azure SignalR, the persistent connection between the client and the service can drop for various reasons.
Client connections can drop under various circumstances:
* When the `Hub` throws exceptions while processing the incoming request
* When the server connection that the client is routed to drops; see the following section for details on [server connection drops](#server_connection_drop)
* When a network connectivity issue happens between the client and SignalR Service
* When SignalR Service has internal errors like instance restart, failover, deployment, and so on
## Client connection increases constantly
Improper usage of the client connection might cause this issue. If someone forgets to stop or dispose of the SignalR client, the connection remains open.
### Possible errors seen from SignalR's metrics in the Monitoring section of the Azure portal resource menu
Client connections rise constantly for a long time in Azure SignalR's Metrics.
The SignalR client connection's `DisposeAsync` is never called, so the connection stays open.
### Troubleshooting guide
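Only the `finally` keyword of the article's dispose sample survives in this excerpt; the pattern it points at is, roughly (a minimal sketch with a hypothetical endpoint and hub method, not the article's exact code), stopping and disposing the client connection even when the work inside fails:

```csharp
using Microsoft.AspNetCore.SignalR.Client;

var connection = new HubConnectionBuilder()
    .WithUrl("https://contoso.example.com/chathub") // hypothetical endpoint
    .Build();

try
{
    await connection.StartAsync();
    await connection.InvokeAsync("BroadcastMessage", "sender", "hello"); // hypothetical hub method
}
finally
{
    // Always stop and dispose the client so the service-side connection is released.
    await connection.StopAsync();
    await connection.DisposeAsync();
}
```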
#### Azure Function example
This issue often occurs when someone establishes a SignalR client connection in an Azure Function method instead of making it a static member of the function class. You might expect only one client connection to be established, but instead you see the client connection count increase constantly in metrics. All these connections drop only after the Azure Function app or the Azure SignalR service restarts. This behavior occurs because Azure Functions creates **one** client connection for **each** request, and if you don't stop the client connection in the function method, the client keeps the connections alive to the Azure SignalR service.
#### Solution
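A minimal sketch of one way to avoid per-request connections in this situation (illustrative only; the names and initialization pattern are assumptions, and the SignalR Service bindings for Azure Functions are usually a better fit than a raw client): reuse a single, lazily created client connection across invocations instead of creating one per request.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR.Client;

public static class SharedSignalRClient
{
    // One connection for the whole function app process, created on first use,
    // instead of one connection per function invocation.
    private static readonly Lazy<HubConnection> _connection = new(() =>
        new HubConnectionBuilder()
            .WithUrl("https://contoso.example.com/chathub") // hypothetical endpoint
            .Build());

    public static async Task<HubConnection> GetConnectionAsync()
    {
        var connection = _connection.Value;
        // A production version would also guard StartAsync against concurrent callers.
        if (connection.State == HubConnectionState.Disconnected)
        {
            await connection.StartAsync();
        }
        return connection;
    }
}
```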
## Server connection drops
When the app server starts, the Azure SDK starts to initiate server connections to the remote Azure SignalR service in the background. As described in [Internals of Azure SignalR Service](https://github.com/Azure/azure-signalr/blob/dev/docs/internal.md), Azure SignalR routes incoming client traffic to these server connections. When a server connection is dropped, all the client connections it was serving are closed too.
Because the connections between the app server and SignalR Service are persistent, they might experience network connectivity issues. In the server SDK, we have an **Always Reconnect** strategy for server connections. As a best practice, we also encourage users to add continuous reconnection logic to the clients with a random delay, to avoid massive simultaneous requests to the server.
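For ASP.NET Core SignalR clients, one way to get continuous reconnection with a random delay (a minimal sketch with a hypothetical endpoint; tune the delays for your workload) is a custom retry policy passed to `WithAutomaticReconnect`:

```csharp
using System;
using Microsoft.AspNetCore.SignalR.Client;

public class RandomRetryPolicy : IRetryPolicy
{
    // Keep retrying indefinitely, waiting a random 0-10 seconds between attempts
    // so that many clients don't reconnect at the same moment.
    public TimeSpan? NextRetryDelay(RetryContext retryContext)
        => TimeSpan.FromSeconds(Random.Shared.Next(0, 10));
}

public static class ClientFactory
{
    public static HubConnection Create() =>
        new HubConnectionBuilder()
            .WithUrl("https://contoso.example.com/chathub") // hypothetical endpoint
            .WithAutomaticReconnect(new RandomRetryPolicy())
            .Build();
}
```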
New versions of the Azure SignalR Service are released regularly, and sometimes there's Azure-wide patching or upgrades, or occasionally an interruption from our dependent services. These events might bring a short period of service disruption, but as long as the client side has a disconnect/reconnect mechanism, the effect is minimal, like any client-side-caused disconnect and reconnect.
This section describes several possibilities leading to server connection drops and provides some guidance on how to identify the root cause.
The server-service connection is closed by **ASRS** (**A**zure **S**ignal**R** **S**ervice).

High CPU usage or thread pool starvation on the server side might cause a ping timeout.
For ASP.NET SignalR, a known issue was fixed in SDK 1.6.0. Upgrade your SDK to the newest version.
## Thread pool starvation
If your server is starving, that means no threads are working on message processing; all threads are hung in a certain method.
Normally, this scenario is caused by blocking async code synchronously, for example by using `Task.Result` or `Task.Wait()` in async methods.
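For example (a simplified sketch; the URL is hypothetical), blocking on a task ties up a thread pool thread, while awaiting it does not:

```csharp
using System.Net.Http;
using System.Threading.Tasks;

public class WeatherClient
{
    private readonly HttpClient _http = new();

    // Bad: blocks a thread pool thread until the HTTP call finishes,
    // which under load leads to thread pool starvation.
    public string GetForecastBlocking()
        => _http.GetStringAsync("https://contoso.example.com/forecast").Result;

    // Good: awaits the call, so the thread returns to the pool while waiting.
    public async Task<string> GetForecastAsync()
        => await _http.GetStringAsync("https://contoso.example.com/forecast");
}
```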
See [ASP.NET Core performance best practices](/aspnet/core/performance/performance-best-practices#avoid-blocking-calls).