azure-stack/hci/deploy/deploy-via-portal.md (6 additions, 3 deletions)
@@ -89,9 +89,12 @@ Choose whether to create a new configuration for this system or to load deployme
    Make sure to use high-speed adapters for the intent that includes storage traffic.

4. For the storage intent, enter the **VLAN ID** set on the network switches used for each storage network.
+    > [!IMPORTANT]
+    > Portal deployment doesn't allow you to specify your own IPs for the storage intent. If you need to specify the storage IPs and can't use the default values from Network ATC, use an ARM template deployment instead. For more information, see [Custom IPs for storage intent](../plan/cloud-deployment-network-considerations.md#custom-ips-for-storage).
+
:::image type="content" source="./media/deploy-via-portal/networking-tab-1.png" alt-text="Screenshot of the Networking tab with network intents in deployment via Azure portal." lightbox="./media/deploy-via-portal/networking-tab-1.png":::
-1. To customize network settings for an intent, select **Customize network settings** and provide the following information:
+5. To customize network settings for an intent, select **Customize network settings** and provide the following information:

    - **Storage traffic priority**. This specifies the Priority Flow Control where Data Center Bridging (DCB) is used.
    - **Cluster traffic priority**.
@@ -100,13 +103,13 @@ Choose whether to create a new configuration for this system or to load deployme
    :::image type="content" source="./media/deploy-via-portal/customize-networking-settings-1.png" alt-text="Screenshot of the customize network settings for a network intent used in deployment via Azure portal." lightbox="./media/deploy-via-portal/customize-networking-settings-1.png":::

-1. Using the **Starting IP** and **Ending IP** (and related) fields, allocate a contiguous block of at least six static IP addresses on your management network's subnet, omitting addresses already used by the servers.
+6. Using the **Starting IP** and **Ending IP** (and related) fields, allocate a contiguous block of at least six static IP addresses on your management network's subnet, omitting addresses already used by the servers.

    These IPs are used by Azure Stack HCI and internal infrastructure (Arc Resource Bridge) that's required for Arc VM management and AKS Hybrid.

    :::image type="content" source="./media/deploy-via-portal/networking-tab-2.png" alt-text="Screenshot of the Networking tab with IP address allocation to systems and services in deployment via Azure portal." lightbox="./media/deploy-via-portal/networking-tab-2.png":::
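    For example, if the management subnet is 192.168.1.0/24 and the servers already use 192.168.1.11 through 192.168.1.14, a **Starting IP** of 192.168.1.20 and an **Ending IP** of 192.168.1.25 would provide the required block of six contiguous, unused addresses (these values are illustrative only).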
azure-stack/hci/manage/diskspd-overview.md (39 additions, 44 deletions)
@@ -4,7 +4,7 @@ description: This topic provides guidance on how to use DISKSPD to test workload
author: JasonGerend
ms.author: jgerend
ms.topic: how-to
-ms.date: 02/26/2024
+ms.date: 10/03/2024
---

# Use DISKSPD to test workload storage performance
@@ -14,67 +14,61 @@ ms.date: 02/26/2024
This topic provides guidance on how to use DISKSPD to test workload storage performance. You have an Azure Stack HCI cluster set up, all ready to go. Great, but how do you know if you're getting the promised performance metrics, whether it be latency, throughput, or IOPS? This is when you may want to turn to DISKSPD. After reading this topic, you'll know how to run DISKSPD, understand a subset of parameters, interpret output, and gain a general understanding of the variables that affect workload storage performance.
## What is DISKSPD?
+
DISKSPD is an I/O generating, command-line tool for micro-benchmarking. Great, so what do all these terms mean? Anyone who sets up an Azure Stack HCI cluster or physical server has a reason. It could be to set up a web hosting environment, or run virtual desktops for employees. Whatever the real-world use case may be, you likely want to simulate a test before deploying your actual application. However, testing your application in a real scenario is often difficult – this is where DISKSPD comes in.
DISKSPD is a tool that you can customize to create your own synthetic workloads, and test your application before deployment. The cool thing about the tool is that it gives you the freedom to configure and tweak the parameters to create a specific scenario that resembles your real workload. DISKSPD can give you a glimpse into what your system is capable of before deployment. At its core, DISKSPD simply issues a bunch of read and write operations.
Now you know what DISKSPD is, but when should you use it? DISKSPD has a difficult time emulating complex workloads. But DISKSPD is great when your workload is not closely approximated by a single-threaded file copy, and you need a simple tool that produces acceptable baseline results.
## Quick start: install and run DISKSPD
-Without further ado, let’s get started:
-
1. From your management PC, open PowerShell as an administrator to connect to the target computer that you want to test using DISKSPD, and then type the following command and press Enter.
+   if ($env:path -split ';' -notcontains $diskspdPath) {
+       $env:path += ";" + $diskspdPath
+   }
    ```
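    The added lines above are only the tail of the quick-start snippet; the connection and download commands are collapsed in this view. A minimal sketch of what the full step might look like, assuming an illustrative node name, local folder, and release ZIP URL (none of these come from the collapsed portion of the diff):

    ```powershell
    # Illustrative sketch only -- node name, folder, and download URL are assumptions.
    Enter-PSSession -ComputerName "Node1"

    $diskspdPath = "C:\Tools\DiskSpd"
    New-Item -ItemType Directory -Path $diskspdPath -Force | Out-Null

    # Download and extract a DISKSPD release from GitHub (asset name assumed).
    Invoke-WebRequest -Uri "https://github.com/microsoft/diskspd/releases/latest/download/DiskSpd.zip" -OutFile "$diskspdPath\DiskSpd.zip"
    Expand-Archive -Path "$diskspdPath\DiskSpd.zip" -DestinationPath $diskspdPath -Force

    # Same check as in the diff: add the folder to PATH if it isn't there already.
    if ($env:path -split ';' -notcontains $diskspdPath) {
        $env:path += ";" + $diskspdPath
    }
    ```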
-1. Change directory to the DISKSPD directory and locate the appropriate executable file for the Windows operating system that the target computer is running.
-
-    In this example, we're using the amd64 version.
-
-    > [!NOTE]
-    > You can also download the DISKSPD tool directly from the [GitHub repository](https://github.com/microsoft/diskspd) that contains the open-source code, and a wiki page that details all the parameters and specifications. In the repository, under **Releases**, select the link to automatically download the ZIP file.
-
-    In the ZIP file, you'll see three subfolders: amd64 (64-bit systems), x86 (32-bit systems), and ARM64 (ARM systems). These options enable you to run the tool in every Windows client or server version.
-
-    :::image type="content" source="media/diskspd-overview/download-directory.png" alt-text="Directory to download the DISKSPD .zip file." lightbox="media/diskspd-overview/download-directory.png":::
-
-1. Run DISKSPD with the following PowerShell command. Replace everything inside the square brackets, including the brackets themselves with your appropriate settings.
+1. Run DISKSPD with the following PowerShell command. Replace square brackets with your appropriate settings.
> If you do not have a test file, use the **-c** parameter to create one. If you use this parameter, be sure to include the test file name when you define your path. For example: [INSERT_CSV_PATH_FOR_TEST_FILE] = C:\ClusterStorage\CSV01\IO.dat. In the example command, IO.dat is the test file name, and test01.txt is the DISKSPD output file name.
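    For reference, a filled-in version of such a command might look like the following. The 1 GiB test file size, 60-second duration, and thread/queue-depth values are illustrative; the flags mirror the local-node example shown later in this topic.

    ```powershell
    # Illustrative values: 4 threads (-t4), 32 outstanding I/Os (-o32), 4 KiB random reads
    # (-b4k -r4k -w0), software and hardware caching disabled (-Sh), IOPS statistics (-D)
    # and latency (-L) captured. -c1G creates a 1 GiB test file at the CSV path if it
    # doesn't exist, and -d60 runs the test for 60 seconds. Output goes to test01.txt.
    # Assumes diskspd.exe is on PATH (see the earlier install step).
    diskspd.exe -c1G -d60 -t4 -o32 -b4k -r4k -w0 -Sh -D -L C:\ClusterStorage\CSV01\IO.dat > test01.txt
    ```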
## Specify key parameters
-Well, that was simple right? Unfortunately, there is more to it than that. Let’s unpack what we did. First, there are various parameters that you can tinker with and it can get specific. However, we used the following set of baseline parameters:
+
+Well, that was simple right? Unfortunately, there's more to it than that. Let’s unpack what we did. First, there are various parameters that you can tinker with and it can get specific. However, we used the following set of baseline parameters:
> [!NOTE]
> DISKSPD parameters are case sensitive.
@@ -109,7 +103,7 @@ You generate the test file under the unified namespace that the Cluster Shared V
> [!NOTE]
> The example environment does *not* have Hyper-V or a nested virtualization structure.
-As you’ll see, it's entirely possible to independently hit either the IOPS or bandwidth ceiling at the VM or drive limit. And so, it is important to understand your VM size and drive type, because both have a maximum IOPS limit and a bandwidth ceiling. This knowledge helps to locate bottlenecks and understand your performance results. To learn more about what size may be appropriate for your workload, see the following resources:
+As you’ll see, it's entirely possible to independently hit either the IOPS or bandwidth ceiling at the VM or drive limit. And so, it's important to understand your VM size and drive type, because both have a maximum IOPS limit and a bandwidth ceiling. This knowledge helps to locate bottlenecks and understand your performance results. To learn more about what size may be appropriate for your workload, see the following resources:
@@ -183,7 +177,7 @@ Storage performance is a delicate thing. Meaning, there are many variables that
- Hard drive spindle speeds
### CSV ownership
-A node is known as a volume owner or the **coordinator** node (a non-coordinator node would be the node that does not own a specific volume). Every standard volume is assigned a node and the other nodes can access this standard volume through network hops, which results in slower performance (higher latency).
+A node is known as a volume owner or the **coordinator** node (a non-coordinator node would be the node that doesn't own a specific volume). Every standard volume is assigned a node and the other nodes can access this standard volume through network hops, which results in slower performance (higher latency).
Similarly, a Cluster Shared Volume (CSV) also has an “owner.” However, a CSV is “dynamic” in the sense that it will hop around and change ownership every time you restart the system (RDP). As a result, it’s important to confirm that DISKSPD is run from the coordinator node that owns the CSV. If not, you may need to manually change the CSV ownership.
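A short sketch of confirming and, if needed, moving CSV ownership with the FailoverClusters module follows; the CSV and node names are illustrative, not taken from this topic.

```powershell
# Show which node currently owns (coordinates) each Cluster Shared Volume.
Get-ClusterSharedVolume | Select-Object Name, OwnerNode

# Move ownership of a CSV to the node you plan to run DISKSPD from (names are examples).
Move-ClusterSharedVolume -Name "Cluster Virtual Disk (CSV01)" -Node "Node1"
```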
@@ -206,21 +200,22 @@ If your real-world goal is to test file copy performance, then this may be a per
The following short summary explains why using file copy to measure storage performance may not provide the results that you're looking for:
- **File copies might not be optimized.** There are two levels of parallelism that occur, one internal and the other external. Internally, if the file copy is headed for a remote target, the CopyFileEx engine does apply some parallelization. Externally, there are different ways of invoking the CopyFileEx engine. For example, copies from File Explorer are single threaded, but Robocopy is multi-threaded. For these reasons, it's important to understand whether the implications of the test are what you are looking for.
-- **Every copy has two sides.** When you simply copy and paste a file, you may be using two disks: the source disk and the destination disk. If one is slower than the other, you essentially measure the performance of the slower disk. There are other cases where the communication between the source, destination, and the copy engine may affect the performance in unique ways.
+- **Every copy has two sides.** When you copy and paste a file, you may be using two disks: the source disk and the destination disk. If one is slower than the other, you essentially measure the performance of the slower disk. There are other cases where the communication between the source, destination, and the copy engine may affect the performance in unique ways.
To learn more, see [Using file copy to measure storage performance](/archive/blogs/josebda/using-file-copy-to-measure-storage-performance-why-its-not-a-good-idea-and-what-you-should-do-instead?epi=je6NUbpObpQ-OaAFQvelcuupBvT5Qlis7Q&irclickid=_rcvu3tufjwkftzjukk0sohzizm2xiezdpnxvqy9i00&irgwc=1&OCID=AID2000142_aff_7593_1243925&ranEAID=je6NUbpObpQ&ranMID=24542&ranSiteID=je6NUbpObpQ-OaAFQvelcuupBvT5Qlis7Q&tduid=(ir__rcvu3tufjwkftzjukk0sohzizm2xiezdpnxvqy9i00)(7593)(1243925)(je6NUbpObpQ-OaAFQvelcuupBvT5Qlis7Q)()).
## Experiments and common workloads
This section includes a few other examples, experiments, and workload types.
### Confirming the coordinator node
-As mentioned previously, if the VM you are currently testing does not own the CSV, you'll see a performance drop (IOPS, throughput, and latency) as opposed to testing it when the node owns the CSV. This is because every time you issue an I/O operation, the system does a network hop to the coordinator node to perform that operation.
+As mentioned previously, if the VM you are currently testing doesn't own the CSV, you'll see a performance drop (IOPS, throughput, and latency) as opposed to testing it when the node owns the CSV. This is because every time you issue an I/O operation, the system does a network hop to the coordinator node to perform that operation.
For a three-node, three-way mirrored situation, write operations always make a network hop because the data must be stored on all the drives across the three nodes, regardless of which node owns the CSV. However, if you use a different resiliency structure, this could change.
-Here is an example:
-
- **Running on local node:** .\DiskSpd-2.0.21a\amd64\diskspd.exe -t4 -o32 -b4k -r4k -w0 -Sh -D -L C:\ClusterStorage\test01\targetfile\IO.dat
From this example, you can clearly see in the results of the following figure that latency decreased, IOPS increased, and throughput increased when the coordinator node owns the CSV.