Commit 856b3ee

Authored by MichaelSalivar-MSFT
Grammar and minor content updates to virtual-network-tcpip-performance-tuning.md
- Expand on frame vs. packet. The current verbiage suggests they're the same thing.
- Changed 'destinations' to 'endpoints'. A conversation has a single destination.
- Virtual appliances will introduce latency, but not necessarily to a negative degree. Changed to more accurate language, but softened the impact with 'some'.
- Added logical inclusion of 'proximity placement groups' along with 'availability sets' and 'availability zones'.
- Removed capitalization of 'Availability Zones'; it is not capitalized in other documents like: https://learn.microsoft.com/en-us/azure/reliability/availability-zones-overview
- This is a wonderful place to introduce Latte and SockPerf, so added them to the latency testing section.
- Fixed a sentence that used 'problems' far too many times.
1 parent 6788371 commit 856b3ee

articles/virtual-network/virtual-network-tcpip-performance-tuning.md

Lines changed: 7 additions & 5 deletions
@@ -21,7 +21,7 @@ This article discusses common TCP/IP performance tuning techniques and some thin

#### MTU

-The maximum transmission unit (MTU) is the largest size frame (packet), specified in bytes, that can be sent over a network interface. The MTU is a configurable setting. The default MTU used on Azure VMs, and the default setting on most network devices globally, is 1,500 bytes.
+The maximum transmission unit (MTU) is the largest size frame (packet plus network access headers), specified in bytes, that can be sent over a network interface. The MTU is a configurable setting. The default MTU used on Azure VMs, and the default setting on most network devices globally, is 1,500 bytes.

#### Fragmentation
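
To make the frame-versus-packet distinction in the new wording concrete, here is a minimal sketch of how the default 1,500-byte MTU relates to the TCP payload carried per packet. The 20-byte IPv4 and 20-byte TCP header sizes assume no options; the numbers are illustrative and are not taken from the commit.

```python
# Sketch: how a 1,500-byte MTU relates to the TCP payload carried per packet.
# Assumes IPv4 and TCP headers without options (20 bytes each); real connections
# often carry TCP options (timestamps, SACK), which shrink the payload further.
MTU = 1500          # bytes the interface will send in one IP packet
IPV4_HEADER = 20    # minimum IPv4 header
TCP_HEADER = 20     # minimum TCP header

mss = MTU - IPV4_HEADER - TCP_HEADER   # maximum TCP segment payload
print(f"MTU {MTU} bytes -> up to {mss} bytes of TCP payload per packet")  # 1460
```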

@@ -258,7 +258,7 @@ For more information, see [Virtual machine network bandwidth](./virtual-machine-

As discussed throughout this article, factors on the internet and outside the control of Azure can affect network performance. Here are some of those factors:

-- **Latency**: The round-trip time between two destinations can be affected by issues on intermediate networks, by traffic that doesn't take the "shortest" distance path, and by suboptimal peering paths.
+- **Latency**: The round-trip time between two endpoints can be affected by issues on intermediate networks, by traffic that doesn't take the "shortest" distance path, and by suboptimal peering paths.

- **Packet loss**: Packet loss can be caused by network congestion, physical path issues, and underperforming network devices.
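
A quick back-of-the-envelope illustration of why the latency bullet above matters for TCP: with a fixed receive window, achievable throughput on a single connection is bounded by window size divided by RTT. The window size and RTT values below are hypothetical examples, not measurements from the article.

```python
# Sketch: TCP throughput ceiling imposed by round-trip time for a fixed window.
# window_bytes and rtt values are hypothetical examples.
def max_tcp_throughput_mbps(window_bytes: int, rtt_seconds: float) -> float:
    """Upper bound on throughput (Mbps) = window / RTT, ignoring loss and slow start."""
    return (window_bytes * 8) / rtt_seconds / 1_000_000

window = 64 * 1024                     # 64 KiB receive window (no window scaling)
for rtt_ms in (1, 10, 100):            # e.g., same region, nearby region, intercontinental
    mbps = max_tcp_throughput_mbps(window, rtt_ms / 1000)
    print(f"RTT {rtt_ms:>3} ms -> at most ~{mbps:,.1f} Mbps per connection")
```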

@@ -270,15 +270,15 @@ Traceroute is a good tool for measuring network performance characteristics (lik

Along with the considerations discussed earlier in this article, the topology of a virtual network can affect the network's performance. For example, a hub-and-spoke design that backhauls traffic globally to a single-hub virtual network will introduce network latency, which will affect overall network performance.

-The number of network devices that network traffic passes through can also affect overall latency. For example, in a hub-and-spoke design, if traffic passes through a spoke network virtual appliance and a hub virtual appliance before transiting to the internet, the network virtual appliances can introduce latency.
+The number of network devices that network traffic passes through can also affect overall latency. For example, in a hub-and-spoke design, if traffic passes through a spoke network virtual appliance and a hub virtual appliance before transiting to the internet, the network virtual appliances will introduce some latency.

### Azure regions, virtual networks, and latency

Azure regions are made up of multiple datacenters that exist within a general geographic area. These datacenters might not be physically next to each other. In some cases they're separated by as much as 10 kilometers. The virtual network is a logical overlay on top of the Azure physical datacenter network. A virtual network doesn't imply any specific network topology within the datacenter.

For example, two VMs that are in the same virtual network and subnet might be in different racks, rows, or even datacenters. They could be separated by feet of fiber optic cable or by kilometers of fiber optic cable. This variation could introduce variable latency (a few milliseconds difference) between different VMs.

-The geographic placement of VMs, and the potential resulting latency between two VMs, can be influenced by the configuration of availability sets and Availability Zones. But the distance between datacenters in a region is region-specific and primarily influenced by datacenter topology in the region.
+The geographic placement of VMs, and the potential resulting latency between two VMs, can be influenced by the configuration of availability sets, proximity placement groups, and availability zones. But the distance between datacenters in a region is region-specific and primarily influenced by datacenter topology in the region.

### Source NAT port exhaustion
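
As a rough illustration of how hop count and physical distance add up in a hub-and-spoke path, the sketch below sums propagation delay over fiber (roughly 5 microseconds per kilometer, one way) with per-appliance processing delays. The distance and per-NVA millisecond figures are hypothetical placeholders, not Azure measurements, and the model is deliberately simplified.

```python
# Sketch: additive latency along a spoke -> hub -> internet path.
# All distances and per-device delays below are made-up example values.
FIBER_DELAY_US_PER_KM = 5            # ~5 microseconds per km, one way, in fiber

def propagation_rtt_ms(distance_km: float) -> float:
    """Round-trip propagation delay over fiber, in milliseconds."""
    return 2 * distance_km * FIBER_DELAY_US_PER_KM / 1000

hops_ms = {
    "spoke NVA processing": 0.5,     # hypothetical value
    "hub NVA processing": 0.5,       # hypothetical value
}
distance_km = 10                     # e.g., datacenters ~10 km apart

total_ms = propagation_rtt_ms(distance_km) + sum(hops_ms.values())
print(f"Propagation RTT over {distance_km} km: {propagation_rtt_ms(distance_km):.2f} ms")
print(f"Estimated RTT including the NVAs:      {total_ms:.2f} ms")
```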

@@ -296,6 +296,8 @@ A number of the performance maximums in this article are related to the network

TCP performance relies heavily on RTT and packet Loss. The PING utility available in Windows and Linux provides the easiest way to measure RTT and packet loss. The output of PING will show the minimum/maximum/average latency between a source and destination. It will also show packet loss. PING uses the ICMP protocol by default. You can use PsPing to test TCP RTT. For more information, see [PsPing](/sysinternals/downloads/psping).

+Neither ICMP nor TCP pings measure the accelerated networking datapath. To measure this, please read about Latte and SockPerf in [this article](/azure/virtual-network/virtual-network-test-latency).
+
### Measure actual bandwidth of a virtual machine

To accurately measure the bandwidth of Azure VMs, follow [this guidance](./virtual-network-bandwidth-testing.md).
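
Relating to the RTT-measurement paragraph above, here is a minimal PsPing-style sketch in Python that approximates TCP RTT by timing connection establishment. The host and port are placeholders; like ICMP and TCP pings, this measures only the handshake and does not exercise the accelerated networking datapath.

```python
# Sketch: approximate TCP round-trip time by timing the TCP handshake,
# similar in spirit to a PsPing TCP test. Host/port are placeholders.
import socket
import statistics
import time

def tcp_rtt_ms(host: str, port: int, samples: int = 5) -> list[float]:
    """Time socket.create_connection (SYN/SYN-ACK/ACK) for several samples."""
    results = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass
        results.append((time.perf_counter() - start) * 1000)
    return results

rtts = tcp_rtt_ms("example.com", 443)   # placeholder endpoint
print(f"min/avg/max RTT: {min(rtts):.2f}/{statistics.mean(rtts):.2f}/{max(rtts):.2f} ms")
```
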
@@ -308,7 +310,7 @@ For more details on testing other scenarios, see these articles:

### Detect inefficient TCP behaviors

-In packet captures, Azure customers might see TCP packets with TCP flags (SACK, DUP ACK, RETRANSMIT, and FAST RETRANSMIT) that could indicate network performance problems. These packets specifically indicate network inefficiencies that result from packet loss. But packet loss isn't necessarily caused by Azure performance problems. Performance problems could be the result of application problems, operating system problems, or other problems that might not be directly related to the Azure platform.
+In packet captures, Azure customers might see TCP packets with TCP flags (SACK, DUP ACK, RETRANSMIT, and FAST RETRANSMIT) that could indicate network performance problems. These packets specifically indicate network inefficiencies that result from packet loss. But packet loss isn't necessarily caused by Azure performance problems. Performance issues could be the result of application, operating system, or other problems that might not be directly related to the Azure platform.

Also, keep in mind that some retransmission and duplicate ACKs are normal on a network. TCP protocols were built to be reliable. Evidence of these TCP packets in a packet capture doesn't necessarily indicate a systemic network problem, unless they're excessive.
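
Complementing the packet-capture discussion in the hunk above, the Linux-only sketch below reads the kernel's TCP counters from /proc/net/snmp and reports the share of segments that were retransmitted, a cheap first check before collecting captures. This is an illustrative aside, not part of the commit.

```python
# Sketch (Linux only): estimate the TCP retransmission share from kernel counters.
# /proc/net/snmp contains a header line and a value line that both start with "Tcp:".
def tcp_retransmit_ratio(path: str = "/proc/net/snmp") -> float:
    with open(path) as f:
        tcp_lines = [line.split()[1:] for line in f if line.startswith("Tcp:")]
    names, values = tcp_lines[0], [int(v) for v in tcp_lines[1]]
    counters = dict(zip(names, values))
    out_segs = counters["OutSegs"]
    return counters["RetransSegs"] / out_segs if out_segs else 0.0

ratio = tcp_retransmit_ratio()
print(f"Retransmitted segments: {ratio:.2%} of all segments sent since boot")
# Some retransmission is normal; a persistently high share suggests packet loss.
```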
