Grammar and minor content updates to virtual-network-tcpip-performance-tuning.md
--Expand on frame vs packet. The current verbiage suggests they're the same thing.
--Changed 'destinations' to 'endpoints'. A conversation has a single destination.
--Virtual appliances will introduce latency, but not necessarily to a negative degree. Changed to more accurate language, but softened the impact with 'some'.
--Added logical inclusion of 'proximity placement groups' along with 'availability sets' and 'availability zones'.
--Removed capitalization of 'Availability Zones'; it is not capitalized in other documents like: https://learn.microsoft.com/en-us/azure/reliability/availability-zones-overview
--This is a wonderful place to introduce Latte and SockPerf, so added them to the latency testing section.
--Fixed a sentence that used 'problems' far too many times.
articles/virtual-network/virtual-network-tcpip-performance-tuning.md: 7 additions & 5 deletions
@@ -21,7 +21,7 @@ This article discusses common TCP/IP performance tuning techniques and some thin

 #### MTU

-The maximum transmission unit (MTU) is the largest size frame (packet), specified in bytes, that can be sent over a network interface. The MTU is a configurable setting. The default MTU used on Azure VMs, and the default setting on most network devices globally, is 1,500 bytes.
+The maximum transmission unit (MTU) is the largest size frame (packet plus network access headers), specified in bytes, that can be sent over a network interface. The MTU is a configurable setting. The default MTU used on Azure VMs, and the default setting on most network devices globally, is 1,500 bytes.

 #### Fragmentation
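To check the MTU described above from inside a VM, here's a minimal sketch, assuming a Linux guest and an interface named `eth0` (both assumptions; adjust for your environment), that reads the value the kernel reports:

```python
# Read the configured MTU, in bytes, of a network interface on Linux.
# The interface name "eth0" is only an example; change it for your VM.
from pathlib import Path

def get_mtu(interface: str = "eth0") -> int:
    """Return the MTU the kernel reports for the given interface."""
    return int(Path(f"/sys/class/net/{interface}/mtu").read_text().strip())

if __name__ == "__main__":
    print(f"MTU: {get_mtu('eth0')} bytes")  # typically 1500 on an unmodified Azure VM
```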
@@ -258,7 +258,7 @@ For more information, see [Virtual machine network bandwidth](./virtual-machine-

 As discussed throughout this article, factors on the internet and outside the control of Azure can affect network performance. Here are some of those factors:

--**Latency**: The round-trip time between two destinations can be affected by issues on intermediate networks, by traffic that doesn't take the "shortest" distance path, and by suboptimal peering paths.
+-**Latency**: The round-trip time between two endpoints can be affected by issues on intermediate networks, by traffic that doesn't take the "shortest" distance path, and by suboptimal peering paths.

 -**Packet loss**: Packet loss can be caused by network congestion, physical path issues, and underperforming network devices.
@@ -270,15 +270,15 @@ Traceroute is a good tool for measuring network performance characteristics (lik

 Along with the considerations discussed earlier in this article, the topology of a virtual network can affect the network's performance. For example, a hub-and-spoke design that backhauls traffic globally to a single-hub virtual network will introduce network latency, which will affect overall network performance.

-The number of network devices that network traffic passes through can also affect overall latency. For example, in a hub-and-spoke design, if traffic passes through a spoke network virtual appliance and a hub virtual appliance before transiting to the internet, the network virtual appliances can introduce latency.
+The number of network devices that network traffic passes through can also affect overall latency. For example, in a hub-and-spoke design, if traffic passes through a spoke network virtual appliance and a hub virtual appliance before transiting to the internet, the network virtual appliances will introduce some latency.

 ### Azure regions, virtual networks, and latency

 Azure regions are made up of multiple datacenters that exist within a general geographic area. These datacenters might not be physically next to each other. In some cases they're separated by as much as 10 kilometers. The virtual network is a logical overlay on top of the Azure physical datacenter network. A virtual network doesn't imply any specific network topology within the datacenter.

 For example, two VMs that are in the same virtual network and subnet might be in different racks, rows, or even datacenters. They could be separated by feet of fiber optic cable or by kilometers of fiber optic cable. This variation could introduce variable latency (a few milliseconds difference) between different VMs.

-The geographic placement of VMs, and the potential resulting latency between two VMs, can be influenced by the configuration of availability setsand Availability Zones. But the distance between datacenters in a region is region-specific and primarily influenced by datacenter topology in the region.
+The geographic placement of VMs, and the potential resulting latency between two VMs, can be influenced by the configuration of availability sets, proximity placement groups, and availability zones. But the distance between datacenters in a region is region-specific and primarily influenced by datacenter topology in the region.

 ### Source NAT port exhaustion
@@ -296,6 +296,8 @@ A number of the performance maximums in this article are related to the network

 TCP performance relies heavily on RTT and packet Loss. The PING utility available in Windows and Linux provides the easiest way to measure RTT and packet loss. The output of PING will show the minimum/maximum/average latency between a source and destination. It will also show packet loss. PING uses the ICMP protocol by default. You can use PsPing to test TCP RTT. For more information, see [PsPing](/sysinternals/downloads/psping).

+Neither ICMP nor TCP pings measure the accelerated networking datapath. To measure this, please read about Latte and SockPerf in [this article](/azure/virtual-network/virtual-network-test-latency).
+
 ### Measure actual bandwidth of a virtual machine

 To accurately measure the bandwidth of Azure VMs, follow [this guidance](./virtual-network-bandwidth-testing.md).
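As a rough companion to the RTT discussion in the hunk above, the sketch below times a TCP handshake in Python, roughly what a TCP ping such as PsPing reports. It runs over the normal socket path, so it does not measure the accelerated networking datapath, and the host and port are placeholders:

```python
# Approximate TCP round-trip time by timing the TCP three-way handshake.
# This is only an illustration; dedicated tools such as PsPing, Latte, or
# SockPerf give more accurate and more detailed latency numbers.
import socket
import time

def tcp_connect_rtt(host: str, port: int, samples: int = 5) -> list[float]:
    """Return per-sample TCP connect times in milliseconds."""
    rtts = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass  # connection established, handshake complete
        rtts.append((time.perf_counter() - start) * 1000)
    return rtts

if __name__ == "__main__":
    results = tcp_connect_rtt("example.com", 443)  # placeholder endpoint
    print(f"min/avg/max: {min(results):.1f}/{sum(results) / len(results):.1f}/{max(results):.1f} ms")
```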
@@ -308,7 +310,7 @@ For more details on testing other scenarios, see these articles:

 ### Detect inefficient TCP behaviors

-In packet captures, Azure customers might see TCP packets with TCP flags (SACK, DUP ACK, RETRANSMIT, and FAST RETRANSMIT) that could indicate network performance problems. These packets specifically indicate network inefficiencies that result from packet loss. But packet loss isn't necessarily caused by Azure performance problems. Performance problems could be the result of application problems, operating system problems, or other problems that might not be directly related to the Azure platform.
+In packet captures, Azure customers might see TCP packets with TCP flags (SACK, DUP ACK, RETRANSMIT, and FAST RETRANSMIT) that could indicate network performance problems. These packets specifically indicate network inefficiencies that result from packet loss. But packet loss isn't necessarily caused by Azure performance problems. Performance issues could be the result of application, operating system, or other problems that might not be directly related to the Azure platform.

 Also, keep in mind that some retransmission and duplicate ACKs are normal on a network. TCP protocols were built to be reliable. Evidence of these TCP packets in a packet capture doesn't necessarily indicate a systemic network problem, unless they're excessive.
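One quick way to judge whether the retransmissions mentioned above are excessive without taking a packet capture is to look at the kernel's cumulative TCP counters. A minimal sketch, assuming a Linux VM where /proc/net/snmp is available:

```python
# Report the share of TCP segments that were retransmitted, using the
# cumulative counters in /proc/net/snmp (Linux only). Some retransmission
# is normal, since TCP is designed to retransmit; only a sustained, high
# ratio suggests a real loss problem.
def tcp_retransmit_ratio(path: str = "/proc/net/snmp") -> float:
    with open(path) as f:
        tcp_lines = [line.split() for line in f if line.startswith("Tcp:")]
    names, values = tcp_lines[0], tcp_lines[1]  # first line is field names, second is values
    stats = dict(zip(names[1:], (int(v) for v in values[1:])))
    return stats["RetransSegs"] / max(stats["OutSegs"], 1)

if __name__ == "__main__":
    print(f"Retransmitted segments: {tcp_retransmit_ratio():.2%} of segments sent")
```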