
Commit 0047a68

2 parents: bc38f6e + c04779a

File tree

4 files changed (+47, -49 lines)


azure-stack/hci/deploy/deploy-via-portal.md

Lines changed: 6 additions & 3 deletions
@@ -89,9 +89,12 @@ Choose whether to create a new configuration for this system or to load deployme

   Make sure to use high-speed adapters for the intent that includes storage traffic.
4. For the storage intent, enter the **VLAN ID** set on the network switches used for each storage network.
+   > [!IMPORTANT]
+   > Portal deployment doesn't let you specify your own IPs for the storage intent. If you need to specify the IPs for storage and can't use the default values from Network ATC, use ARM template deployment instead. For more information, see [Custom IPs for storage intent](../plan/cloud-deployment-network-considerations.md#custom-ips-for-storage).
+
   :::image type="content" source="./media/deploy-via-portal/networking-tab-1.png" alt-text="Screenshot of the Networking tab with network intents in deployment via Azure portal." lightbox="./media/deploy-via-portal/networking-tab-1.png":::

-1. To customize network settings for an intent, select **Customize network settings** and provide the following information:
+5. To customize network settings for an intent, select **Customize network settings** and provide the following information:

   - **Storage traffic priority**. This specifies the Priority Flow Control where Data Center Bridging (DCB) is used.
   - **Cluster traffic priority**.
@@ -100,13 +103,13 @@ Choose whether to create a new configuration for this system or to load deployme

   :::image type="content" source="./media/deploy-via-portal/customize-networking-settings-1.png" alt-text="Screenshot of the customize network settings for a network intent used in deployment via Azure portal." lightbox="./media/deploy-via-portal/customize-networking-settings-1.png":::

-1. Using the **Starting IP** and **Ending IP** (and related) fields, allocate a contiguous block of at least six static IP addresses on your management network's subnet, omitting addresses already used by the servers.
+6. Using the **Starting IP** and **Ending IP** (and related) fields, allocate a contiguous block of at least six static IP addresses on your management network's subnet, omitting addresses already used by the servers.

   These IPs are used by Azure Stack HCI and internal infrastructure (Arc Resource Bridge) that's required for Arc VM management and AKS Hybrid.

   :::image type="content" source="./media/deploy-via-portal/networking-tab-2.png" alt-text="Screenshot of the Networking tab with IP address allocation to systems and services in deployment via Azure portal." lightbox="./media/deploy-via-portal/networking-tab-2.png":::

-1. Select **Next: Management**.
+7. Select **Next: Management**.

## Specify management settings

azure-stack/hci/manage/diskspd-overview.md

Lines changed: 39 additions & 44 deletions
@@ -4,7 +4,7 @@ description: This topic provides guidance on how to use DISKSPD to test workload
author: JasonGerend
ms.author: jgerend
ms.topic: how-to
-ms.date: 02/26/2024
+ms.date: 10/03/2024
---

# Use DISKSPD to test workload storage performance
@@ -14,67 +14,61 @@ ms.date: 02/26/2024
This topic provides guidance on how to use DISKSPD to test workload storage performance. You have an Azure Stack HCI cluster set up, all ready to go. Great, but how do you know if you're getting the promised performance metrics, whether it be latency, throughput, or IOPS? This is when you may want to turn to DISKSPD. After reading this topic, you'll know how to run DISKSPD, understand a subset of parameters, interpret output, and gain a general understanding of the variables that affect workload storage performance.

## What is DISKSPD?
+
DISKSPD is an I/O generating, command-line tool for micro-benchmarking. Great, so what do all these terms mean? Anyone who sets up an Azure Stack HCI cluster or physical server has a reason. It could be to set up a web hosting environment, or run virtual desktops for employees. Whatever the real-world use case may be, you likely want to simulate a test before deploying your actual application. However, testing your application in a real scenario is often difficult – this is where DISKSPD comes in.

DISKSPD is a tool that you can customize to create your own synthetic workloads, and test your application before deployment. The cool thing about the tool is that it gives you the freedom to configure and tweak the parameters to create a specific scenario that resembles your real workload. DISKSPD can give you a glimpse into what your system is capable of before deployment. At its core, DISKSPD simply issues a bunch of read and write operations.

Now you know what DISKSPD is, but when should you use it? DISKSPD has a difficult time emulating complex workloads. But DISKSPD is great when your workload is not closely approximated by a single-threaded file copy, and you need a simple tool that produces acceptable baseline results.

## Quick start: install and run DISKSPD
-Without further ado, let’s get started:
-
-1. From your management PC, open PowerShell as an administrator to connect to the target computer that you want to test using DISKSPD, and then type the following command and press Enter.
-
-   ```powershell
-   Enter-PSSession -ComputerName <TARGET_COMPUTER_NAME>
-   ```
-
-   In this example, we're running a virtual machine (VM) called “node1.”

-1. To download the DISKSPD tool, type the following commands and press Enter:
+To install and run DISKSPD, open PowerShell as an admin on your management PC, and then follow these steps:

-   ```powershell
-   $client = new-object System.Net.WebClient
-   ```
-
-   ```powershell
-   $client.DownloadFile("https://github.com/microsoft/diskspd/releases/latest/download/DiskSpd.zip","<ENTER_PATH>\DiskSpd-2.1.zip")
-   ```
+1. To download and expand the ZIP file for the DISKSPD tool, run the following commands:

-1. Use the following command to unzip the downloaded file:
+   ```powershell
+   # Define the ZIP URL and the full path to save the file, including the file name
+   $zipName = "DiskSpd.zip"
+   $zipPath = "C:\DISKSPD"
+   $zipFullName = Join-Path $zipPath $zipName
+   $zipUrl = "https://github.com/microsoft/diskspd/releases/latest/download/" + $zipName
+
+   # Ensure the target directory exists; if it doesn't, create it
+   if (-Not (Test-Path $zipPath)) {
+       New-Item -Path $zipPath -ItemType Directory | Out-Null
+   }
+   # Download and expand the ZIP file
+   Invoke-RestMethod -Uri $zipUrl -OutFile $zipFullName
+   Expand-Archive -Path $zipFullName -DestinationPath $zipPath
+   ```
+
+1. To add the DISKSPD directory to your `$PATH` environment variable, run the following command:

-   ```powershell
-   Expand-Archive -LiteralPath <ENTERPATH>\DiskSpd-2.1.zip -DestinationPath C:\DISKSPD
+   ```powershell
+   $diskspdPath = Join-Path $zipPath $env:PROCESSOR_ARCHITECTURE
+   if ($env:path -split ';' -notcontains $diskspdPath) {
+       $env:path += ";" + $diskspdPath
+   }
   ```

-1. Change directory to the DISKSPD directory and locate the appropriate executable file for the Windows operating system that the target computer is running.
-
-   In this example, we're using the amd64 version.
-
-   > [!NOTE]
-   > You can also download the DISKSPD tool directly from the [GitHub repository](https://github.com/microsoft/diskspd) that contains the open-source code, and a wiki page that details all the parameters and specifications. In the repository, under **Releases**, select the link to automatically download the ZIP file.
-
-   In the ZIP file, you'll see three subfolders: amd64 (64-bit systems), x86 (32-bit systems), and ARM64 (ARM systems). These options enable you to run the tool in every Windows client or server version.
-
-   :::image type="content" source="media/diskspd-overview/download-directory.png" alt-text="Directory to download the DISKSPD .zip file." lightbox="media/diskspd-overview/download-directory.png":::
-
-1. Run DISKSPD with the following PowerShell command. Replace everything inside the square brackets, including the brackets themselves with your appropriate settings.
+1. Run DISKSPD with the following PowerShell command. Replace everything inside the square brackets, including the brackets themselves, with your appropriate settings.

   ```powershell
-   .\[INSERT_DISKSPD_PATH] [INSERT_SET_OF_PARAMETERS] [INSERT_CSV_PATH_FOR_TEST_FILE] > [INSERT_OUTPUT_FILE.txt]
+   diskspd [INSERT_SET_OF_PARAMETERS] [INSERT_CSV_PATH_FOR_TEST_FILE] > [INSERT_OUTPUT_FILE.txt]
   ```

-   Here is an example command that you can run:
+   Here's an example command that you can run:

   ```powershell
-   .\diskspd -t2 -o32 -b4k -r4k -w0 -d120 -Sh -D -L -c5G C:\ClusterStorage\test01\targetfile\IO.dat > test01.txt
+   diskspd -t2 -o32 -b4k -r4k -w0 -d120 -Sh -D -L -c5G C:\ClusterStorage\test01\targetfile\IO.dat > test01.txt
   ```

   > [!NOTE]
   > If you do not have a test file, use the **-c** parameter to create one. If you use this parameter, be sure to include the test file name when you define your path. For example: [INSERT_CSV_PATH_FOR_TEST_FILE] = C:\ClusterStorage\CSV01\IO.dat. In the example command, IO.dat is the test file name, and test01.txt is the DISKSPD output file name.

## Specify key parameters
-Well, that was simple right? Unfortunately, there is more to it than that. Let’s unpack what we did. First, there are various parameters that you can tinker with and it can get specific. However, we used the following set of baseline parameters:
+
+Well, that was simple right? Unfortunately, there's more to it than that. Let’s unpack what we did. First, there are various parameters that you can tinker with and it can get specific. However, we used the following set of baseline parameters:

> [!NOTE]
> DISKSPD parameters are case sensitive.
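
For quick reference, here's an annotated version of the example command above. The flag descriptions are a summary sketch based on common DISKSPD usage; confirm them against the DISKSPD wiki for your version of the tool.

```powershell
# Annotated sketch of the example command (flag meanings assumed from standard DISKSPD usage):
#   -t2    two threads per target file
#   -o32   32 outstanding (queued) I/Os per thread, per target
#   -b4k   4 KiB block size for each I/O
#   -r4k   random I/O aligned to 4 KiB offsets
#   -w0    0% writes (100% reads)
#   -d120  run the test for 120 seconds
#   -Sh    disable software caching and hardware write caching
#   -D     capture IOPS statistics in millisecond intervals
#   -L     capture latency statistics
#   -c5G   create a 5 GiB test file if one doesn't already exist
diskspd -t2 -o32 -b4k -r4k -w0 -d120 -Sh -D -L -c5G C:\ClusterStorage\test01\targetfile\IO.dat > test01.txt
```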
@@ -109,7 +103,7 @@ You generate the test file under the unified namespace that the Cluster Shared V
>[!NOTE]
> The example environment does *not* have Hyper-V or a nested virtualization structure.

-As you’ll see, it's entirely possible to independently hit either the IOPS or bandwidth ceiling at the VM or drive limit. And so, it is important to understand your VM size and drive type, because both have a maximum IOPS limit and a bandwidth ceiling. This knowledge helps to locate bottlenecks and understand your performance results. To learn more about what size may be appropriate for your workload, see the following resources:
+As you’ll see, it's entirely possible to independently hit either the IOPS or bandwidth ceiling at the VM or drive limit. And so, it's important to understand your VM size and drive type, because both have a maximum IOPS limit and a bandwidth ceiling. This knowledge helps to locate bottlenecks and understand your performance results. To learn more about what size may be appropriate for your workload, see the following resources:

- [VM sizes](/azure/virtual-machines/sizes-general?bc=/azure/virtual-machines/linux/breadcrumb/toc.json&toc=/azure/virtual-machines/linux/toc.json)
- [Disk types](https://azure.microsoft.com/pricing/details/managed-disks/)
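
To make the idea of independently hitting either ceiling concrete, here's a small arithmetic sketch. The IOPS and bandwidth caps below are hypothetical numbers chosen only for illustration, not values from any specific VM size or disk type.

```powershell
# Hypothetical caps, for illustration only: 12,800 IOPS and 192 MiB/s of bandwidth.
$iopsCap         = 12800
$bandwidthCapMiB = 192

# With small 4 KiB I/Os, you hit the IOPS cap first (~50 MiB/s of throughput).
"4 KiB blocks: {0} MiB/s at the IOPS cap" -f ($iopsCap * 4KB / 1MB)

# With larger 64 KiB I/Os, you hit the bandwidth cap first (~3,072 IOPS).
"64 KiB blocks: {0} IOPS at the bandwidth cap" -f ($bandwidthCapMiB * 1MB / 64KB)
```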
@@ -183,7 +177,7 @@ Storage performance is a delicate thing. Meaning, there are many variables that
- Hard drive spindle speeds

### CSV ownership
-A node is known as a volume owner or the **coordinator** node (a non-coordinator node would be the node that does not own a specific volume). Every standard volume is assigned a node and the other nodes can access this standard volume through network hops, which results in slower performance (higher latency).
+A node is known as a volume owner or the **coordinator** node (a non-coordinator node would be the node that doesn't own a specific volume). Every standard volume is assigned a node and the other nodes can access this standard volume through network hops, which results in slower performance (higher latency).

Similarly, a Cluster Shared Volume (CSV) also has an “owner.” However, a CSV is “dynamic” in the sense that it will hop around and change ownership every time you restart the system (RDP). As a result, it’s important to confirm that DISKSPD is run from the coordinator node that owns the CSV. If not, you may need to manually change the CSV ownership.
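
To confirm which node currently owns a CSV before you run DISKSPD, and to move ownership if needed, you can use the failover clustering cmdlets. This is a minimal sketch; the CSV and node names are placeholders for your own environment.

```powershell
# List each CSV and the node that currently owns (coordinates) it
Get-ClusterSharedVolume | Select-Object Name, OwnerNode

# Move ownership of a CSV to the node you plan to run DISKSPD from
# ("Cluster Virtual Disk (test01)" and "node1" are placeholder names)
Move-ClusterSharedVolume -Name "Cluster Virtual Disk (test01)" -Node "node1"
```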
@@ -206,21 +200,22 @@ If your real-world goal is to test file copy performance, then this may be a per

The following short summary explains why using file copy to measure storage performance may not provide the results that you're looking for:
- **File copies might not be optimized.** There are two levels of parallelism that occur, one internal and the other external. Internally, if the file copy is headed for a remote target, the CopyFileEx engine does apply some parallelization. Externally, there are different ways of invoking the CopyFileEx engine. For example, copies from File Explorer are single threaded, but Robocopy is multi-threaded. For these reasons, it's important to understand whether the implications of the test are what you are looking for.
-- **Every copy has two sides.** When you simply copy and paste a file, you may be using two disks: the source disk and the destination disk. If one is slower than the other, you essentially measure the performance of the slower disk. There are other cases where the communication between the source, destination, and the copy engine may affect the performance in unique ways.
+- **Every copy has two sides.** When you copy and paste a file, you may be using two disks: the source disk and the destination disk. If one is slower than the other, you essentially measure the performance of the slower disk. There are other cases where the communication between the source, destination, and the copy engine may affect the performance in unique ways.

To learn more, see [Using file copy to measure storage performance](/archive/blogs/josebda/using-file-copy-to-measure-storage-performance-why-its-not-a-good-idea-and-what-you-should-do-instead?epi=je6NUbpObpQ-OaAFQvelcuupBvT5Qlis7Q&irclickid=_rcvu3tufjwkftzjukk0sohzizm2xiezdpnxvqy9i00&irgwc=1&OCID=AID2000142_aff_7593_1243925&ranEAID=je6NUbpObpQ&ranMID=24542&ranSiteID=je6NUbpObpQ-OaAFQvelcuupBvT5Qlis7Q&tduid=(ir__rcvu3tufjwkftzjukk0sohzizm2xiezdpnxvqy9i00)(7593)(1243925)(je6NUbpObpQ-OaAFQvelcuupBvT5Qlis7Q)()).

## Experiments and common workloads
This section includes a few other examples, experiments, and workload types.

### Confirming the coordinator node
-As mentioned previously, if the VM you are currently testing does not own the CSV, you'll see a performance drop (IOPS, throughput, and latency) as opposed to testing it when the node owns the CSV. This is because every time you issue an I/O operation, the system does a network hop to the coordinator node to perform that operation.
+As mentioned previously, if the VM you are currently testing doesn't own the CSV, you'll see a performance drop (IOPS, throughput, and latency) as opposed to testing it when the node owns the CSV. This is because every time you issue an I/O operation, the system does a network hop to the coordinator node to perform that operation.

For a three-node, three-way mirrored situation, write operations always make a network hop because it needs to store data on all the drives across the three nodes. Therefore, write operations make a network hop regardless. However, if you use a different resiliency structure, this could change.

-Here is an example:
-- **Running on local node:** .\DiskSpd-2.0.21a\amd64\diskspd.exe -t4 -o32 -b4k -r4k -w0 -Sh -D -L C:\ClusterStorage\test01\targetfile\IO.dat
-- **Running on nonlocal node:** .\DiskSpd-2.0.21a\amd64\diskspd.exe -t4 -o32 -b4k -r4k -w0 -Sh -D -L C:\ClusterStorage\test01\targetfile\IO.dat
+Here's an example:
+
+- **Running on local node:** diskspd.exe -t4 -o32 -b4k -r4k -w0 -Sh -D -L C:\ClusterStorage\test01\targetfile\IO.dat
+- **Running on nonlocal node:** diskspd.exe -t4 -o32 -b4k -r4k -w0 -Sh -D -L C:\ClusterStorage\test01\targetfile\IO.dat

From this example, you can clearly see in the results of the following figure that latency decreased, IOPS increased, and throughput increased when the coordinator node owns the CSV.
Binary file not shown.

azure-stack/zone-pivot-groups.yml

Lines changed: 2 additions & 2 deletions
@@ -78,10 +78,10 @@ groups:
  title: Products
  prompt: "Choose your product:"
  pivots:
-  - id: windows-server
-    title: Windows Server
  - id: azure-stack-hci
    title: Azure Stack HCI
+  - id: windows-server
+    title: Windows Server
# BELOW: entries inherited from github.com/microsoftdocs/azure-docs-pr/articles/zone-pivot-groups.yml for reuse.
# For consistency across Docs. This includes: client OSes, languages, etc.
