
Commit 2102fce

fix GitHub review issues
1 parent 3cbe573 commit 2102fce

2 files changed: +3 -3 lines changed

articles/azure-netapp-files/performance-linux-direct-io.md

Lines changed: 1 addition & 1 deletion
@@ -26,7 +26,7 @@ This article helps you understand direct I/O best practices for Azure NetApp Files

Using the direct I/O parameter makes storage testing easy. No data is read from the filesystem cache on the client. As such, the test truly stresses the storage protocol and the service itself rather than memory access speeds. Also, because the extra memory copies between the buffer cache and the application buffers are avoided, read and write operations are efficient from a processing perspective.

- Take the Linux `dd` command as an example workload. Without the optional `odirect` flag, all I/O generated by `dd` is served from the Linux buffer cache. Reads with the blocks already in memory are not retrieved from storage. Reads resulting in a buffer cache miss end up being read from storage using NFS readahead with varying results, depending on factors as mount `rsize` and client readahead tunables. When writes are sent through the buffer cache, they use a write-behind mechanism, which is untuned and uses a significant amount of parallelism to send the data to the storage device. You might attempt to run two independent streams of I/O, one `dd` for reads and one `dd` for writes. But in fact, the operating system, untuned, favors writes over reads and uses more parallelism of it.
+ Take the Linux `dd` command as an example workload. Without the optional `odirect` flag, all I/O generated by `dd` is served from the Linux buffer cache. Reads whose blocks are already in memory are not retrieved from storage. Reads that miss the buffer cache are read from storage using NFS read-ahead, with varying results depending on factors such as the mount `rsize` and the client read-ahead tunables. When writes are sent through the buffer cache, they use a write-behind mechanism, which is untuned and applies a significant amount of parallelism to send the data to the storage device. You might attempt to run two independent streams of I/O, one `dd` for reads and one `dd` for writes. In fact, though, the untuned operating system favors writes over reads and applies more parallelism to them.

Aside from databases, few applications use direct I/O. Instead, they leverage the advantages of a large memory cache for repeated reads and a write-behind cache for asynchronous writes. In short, using direct I/O turns the test into a microbenchmark *if* the application being synthesized uses the filesystem cache.
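
To illustrate the point above (this example is not part of the commit), the buffered and direct variants of a `dd` test can be compared side by side. In GNU `dd`, direct I/O is requested with the `iflag=direct` and `oflag=direct` operands; the mount point and file name below are hypothetical.

```
# Buffered read: blocks already in the Linux buffer cache are served from memory,
# so results reflect client caching and read-ahead as much as the storage service.
dd if=/mnt/anf-vol/testfile of=/dev/null bs=1M count=4096

# Direct I/O read: bypasses the client buffer cache and stresses the NFS protocol
# and the storage service itself.
dd if=/mnt/anf-vol/testfile of=/dev/null bs=1M count=4096 iflag=direct

# Direct I/O write: avoids the untuned write-behind behavior of the buffer cache.
dd if=/dev/zero of=/mnt/anf-vol/testfile bs=1M count=4096 oflag=direct
```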

articles/azure-netapp-files/performance-linux-mount-options.md

Lines changed: 2 additions & 2 deletions
@@ -31,7 +31,7 @@ When preparing a multi-node SAS GRID environment for production, you might notice

| No `nconnect` | 8 hours |
| `nconnect=8` | 5.5 hours |

- Both sets of tests used the same E32-8_v4 virtual machine and RHEL8.3, with readahead set to 15 MiB.
+ Both sets of tests used the same E32-8_v4 virtual machine and RHEL8.3, with read-ahead set to 15 MiB.
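
As a minimal sketch (not part of this commit), mounting with `nconnect=8` and checking the client read-ahead might look like the following; the server address, export path, mount point, and the remaining mount options are hypothetical.

```
# Mount an NFS volume with eight TCP connections per mount (nconnect=8).
# Requires a client kernel with nconnect support, for example RHEL 8.3 or later.
sudo mount -t nfs -o rw,hard,vers=3,tcp,rsize=262144,wsize=262144,nconnect=8 \
    10.23.1.4:/sasgrid /mnt/sasgrid

# Report the current read-ahead (in KiB) for the mount's backing device ID.
cat /sys/class/bdi/$(mountpoint -d /mnt/sasgrid)/read_ahead_kb
```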

When you use `nconnect`, keep the following rules in mind:

@@ -85,7 +85,7 @@ sudo vi /etc/fstab

```
10.23.1.4:/HN1-shared/shared /hana/shared nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 0 0
```

- Also for example, SAS Viya recommends a 256-KiB read and write sizes, and [SAS GRID](https://communities.sas.com/t5/Administration-and-Deployment/Azure-NetApp-Files-A-shared-file-system-to-use-with-SAS-Grid-on/m-p/606973/highlight/true#M17740) limits the `r/wsize` to 64 KiB while augmenting read performance with increased readahead for the NFS mounts. <!-- For more information on readahead, see the article “NFS Readahead”. -->
+ As another example, SAS Viya recommends 256-KiB read and write sizes, and [SAS GRID](https://communities.sas.com/t5/Administration-and-Deployment/Azure-NetApp-Files-A-shared-file-system-to-use-with-SAS-Grid-on/m-p/606973/highlight/true#M17740) limits the `r/wsize` to 64 KiB while augmenting read performance with increased read-ahead for the NFS mounts. See [NFS read-ahead best practices for Azure NetApp Files](performance-linux-nfs-read-ahead.md) for details.
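
For illustration only (not part of this commit), a SAS GRID-style mount that caps `rsize` and `wsize` at 64 KiB might look like the following; the server address, export path, mount point, and the remaining options are hypothetical.

```
# Limit NFS read and write transfer sizes to 64 KiB (65536 bytes).
sudo mount -t nfs -o rw,hard,vers=3,tcp,rsize=65536,wsize=65536 10.23.1.4:/sasdata /sasdata
```

Larger values such as 262144 (256 KiB), as in the fstab example above, match the SAS Viya recommendation.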

The following considerations apply to the use of `rsize` and `wsize`:
