
Commit 02be49b

acrolinx

committed · 1 parent f84e0e2 · commit 02be49b

4 files changed (+13 −13 lines)


articles/virtual-machines/hb-series-overview.md

Lines changed: 3 additions & 3 deletions
@@ -15,11 +15,11 @@ author: padmalathas
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets

- Maximizing high performance compute (HPC) application performance on AMD EPYC requires a thoughtful approach memory locality and process placement. Below we outline the AMD EPYC architecture and our implementation of it on Azure for HPC applications. We will use the term “pNUMA” to refer to a physical NUMA domain, and “vNUMA” to refer to a virtualized NUMA domain.
+ Maximizing high performance compute (HPC) application performance on AMD EPYC requires a thoughtful approach memory locality and process placement. Below we outline the AMD EPYC architecture and our implementation of it on Azure for HPC applications. We use the term “pNUMA” to refer to a physical NUMA domain, and “vNUMA” to refer to a virtualized NUMA domain.

- Physically, an [HB-series](hb-series.md) server is 2 * 32-core EPYC 7551 CPUs for a total of 64 physical cores. These 64 cores are divided into 16 pNUMA domains (8 per socket), each of which is four cores and known as a “CPU Complex” (or “CCX”). Each CCX has its own L3 cache, which is how an OS will see a pNUMA/vNUMA boundary. A pair of adjacent CCXs shares access to two channels of physical DRAM (32 GB of DRAM in HB-series servers).
+ Physically, an [HB-series](hb-series.md) server is 2 * 32-core EPYC 7551 CPUs for a total of 64 physical cores. These 64 cores are divided into 16 pNUMA domains (8 per socket), each of which is four cores and known as a “CPU Complex” (or “CCX”). Each CCX has its own L3 cache, which is how an OS sees a pNUMA/vNUMA boundary. A pair of adjacent CCXs shares access to two channels of physical DRAM (32 GB of DRAM in HB-series servers).

- To provide room for the Azure hypervisor to operate without interfering with the VM, we reserve physical pNUMA domain 0 (the first CCX). We then assign pNUMA domains 1-15 (the remaining CCX units) for the VM. The VM will see:
+ To provide room for the Azure hypervisor to operate without interfering with the VM, we reserve physical pNUMA domain 0 (the first CCX). We then assign pNUMA domains 1-15 (the remaining CCX units) for the VM. The VM sees:

`(15 vNUMA domains) * (4 cores/vNUMA) = 60` cores per VM
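The hunk above describes the 15 vNUMA × 4 core layout an HB-series VM exposes. As a minimal sketch (assuming standard Linux tooling such as `numactl` and `lscpu` is present in the guest image), the topology can be confirmed from inside the VM:

```bash
# Report the NUMA topology the guest sees; on HB-series this should list
# 15 NUMA nodes with 4 CPUs each, for (15 vNUMA) * (4 cores/vNUMA) = 60 cores.
numactl --hardware

# Cross-check the CPU and NUMA-node counts without extra packages.
lscpu | grep -E '^CPU\(s\)|NUMA node'
```

Keeping each HPC process pinned within a single vNUMA domain keeps its memory accesses local to the corresponding CCX's L3 cache, which is the locality point the paragraph above is making.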

articles/virtual-machines/linux/cloudinit-configure-swapfile.md

Lines changed: 3 additions & 3 deletions
@@ -18,7 +18,7 @@ This article shows you how to use [cloud-init](https://cloudinit.readthedocs.io)
## Create swap partition for Ubuntu based images

- By default on Azure, Ubuntu gallery images do not create swap partitions. To enable swap partition configuration during VM provisioning time using cloud-init - please see the [AzureSwapPartitions document](https://wiki.ubuntu.com/AzureSwapPartitions) on the Ubuntu wiki.
+ By default on Azure, Ubuntu gallery images don't create swap partitions. To enable swap partition configuration during VM provisioning time using cloud-init - please see the [AzureSwapPartitions document](https://wiki.ubuntu.com/AzureSwapPartitions) on the Ubuntu wiki.

## Create swap partition for RHEL based images

@@ -41,7 +41,7 @@ mounts:
- ["ephemeral0.2", "none", "swap", "sw,nofail,x-systemd.requires=cloud-init.service", "0", "0"]
```

- The mount is created with the `nofail` option to ensure that the boot process continues even if the mount is not completed successfully.
+ The mount is created with the `nofail` option to ensure that the boot process continues even if the mount isn't completed successfully.

Before deploying this image, you need to create a resource group with the [az group create](/cli/azure/group) command. An Azure resource group is a logical container into which Azure resources are deployed and managed. The following example creates a resource group named *myResourceGroup* in the *eastus* location.

@@ -95,7 +95,7 @@ DefaultEnvironment="CLOUD_CFG=/etc/cloud/cloud.cfg.d/00-azure-swap.cfg"
> [!NOTE]
> The name of the file is totally arbitrary, it can be replaced with any particular name of your preference, it just needs the .cfg suffix and make sure to reflect the changes in the CLOUD_CFG parameter line as well.

- After the changes are done, the machine needs to be deallocated or re-deployed for the changes to take effect.
+ After the changes are done, the machine needs to be deallocated or redeployed for the changes to take effect.

## Verify swap partition was created
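The last hunk above notes that the machine must be deallocated or redeployed before the swap change takes effect. A minimal sketch of that step with the Azure CLI, followed by an in-guest check (*myResourceGroup* and *myVM* are placeholder names, not values from the article):

```bash
# Deallocate and start the VM so the updated cloud-init swap configuration is picked up.
az vm deallocate --resource-group myResourceGroup --name myVM
az vm start --resource-group myResourceGroup --name myVM

# Inside the VM, verify that the swap space is active.
swapon --show
free -h
```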

articles/virtual-machines/linux/cloudinit-update-vm.md

Lines changed: 3 additions & 3 deletions
@@ -18,9 +18,9 @@ This article shows you how to use [cloud-init](https://cloudinit.readthedocs.io)
## Update a VM with cloud-init

- For security purposes, you may want to configure a VM to apply the latest updates on first boot. As cloud-init works across different Linux distros, there is no need to specify `apt`, `zypper` or `yum` for the package manager. Instead, you define `package_upgrade` and let the cloud-init process determine the appropriate mechanism for the distro in use.
+ For security purposes, you may want to configure a VM to apply the latest updates on first boot. As cloud-init works across different Linux distros, there's no need to specify `apt`, `zypper` or `yum` for the package manager. Instead, you define `package_upgrade` and let the cloud-init process determine the appropriate mechanism for the distro in use.

- For this example, we will be using the Azure Cloud Shell. To see the upgrade process in action, create a file named *cloud_init_upgrade.txt* and paste the following configuration. You can use any editor you wish. Make sure that the whole cloud-init file is copied correctly, especially the first line.
+ For this example, we use the Azure Cloud Shell. To see the upgrade process in action, create a file named *cloud_init_upgrade.txt* and paste the following configuration. You can use any editor you wish. Make sure that the whole cloud-init file is copied correctly, especially the first line.

Copy the text below and paste it into the `cloud_init_upgrade.txt` file. Make sure that the whole cloud-init file is copied correctly, especially the first line.
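For context on the walkthrough in the hunk above, a cloud-init file such as *cloud_init_upgrade.txt* is normally passed to the VM at creation time through `--custom-data`. A hedged sketch from the Azure Cloud Shell (the resource group, VM name, and image alias are placeholders, not values from the article):

```bash
# Create a resource group, then a VM whose first boot runs the cloud-init
# configuration in cloud_init_upgrade.txt (package_upgrade plus any listed packages).
az group create --name myResourceGroup --location eastus

az vm create \
  --resource-group myResourceGroup \
  --name vm-cloudinit-upgrade \
  --image Ubuntu2204 \
  --custom-data cloud_init_upgrade.txt \
  --generate-ssh-keys
# The image alias above is a placeholder; substitute the distribution the walkthrough targets.
```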

@@ -70,7 +70,7 @@ sudo yum check-update
As cloud-init checked for and installed updates on boot, there should be no additional updates to apply.

- - You can see the update process, number of altered packages as well as the installation of `httpd` by running the following command and review the output.
+ - You can see the update process, number of altered packages, and the installation of `httpd` by running the following command and review the output.

```bash
sudo yum history

articles/virtual-machines/linux/time-sync.md

Lines changed: 4 additions & 4 deletions
@@ -33,7 +33,7 @@ On stand-alone hardware, the Linux OS only reads the host hardware clock on boot
Virtual machine interactions with the host can also affect the clock. During [memory preserving maintenance](../maintenance-and-updates.md#maintenance-that-doesnt-require-a-reboot), VMs are paused for up to 30 seconds. For example, before maintenance begins the VM clock shows 10:00:00 AM and lasts 28 seconds. After the VM resumes, the clock on the VM would still show 10:00:00 AM, which would be 28 seconds off. To correct for this, the VMICTimeSync service monitors what is happening on the host and updates the time-of-day clock in Linux VMs to compensate.

- Without time synchronization working, the clock on the VM would accumulate errors. When there's only one VM, the effect might not be significant unless the workload requires highly accurate timekeeping. But in most cases, we've multiple, interconnected VMs that use time to track transactions and the time needs to be consistent throughout the entire deployment. When time between VMs is different, you could see the following effects:
+ Without time synchronization working, the clock on the VM would accumulate errors. When there's only one VM, the effect might not be significant unless the workload requires highly accurate timekeeping. But in most cases, we have multiple, interconnected VMs that use time to track transactions and the time needs to be consistent throughout the entire deployment. When time between VMs is different, you could see the following effects:

- Authentication will fail. Security protocols like Kerberos or certificate-dependent technology rely on time being consistent across the systems.
- It's hard to figure out what have happened in a system if logs (or other data) don't agree on time. The same event would look like it occurred at different times, making correlation difficult.
@@ -59,12 +59,12 @@ The VMICTimeSync is used in parallel and provides two functions:
- Immediately updates the Linux VM time-of-day clock after a host maintenance event
- Instantiates an IEEE 1588 Precision Time Protocol (PTP) hardware clock source as a /dev/ptp device that provides the accurate time-of-day from the Azure host. Chronyd can be configured to synchronize against this time source (which is the default configuration in the newest Linux images). Linux distributions with kernel version 4.11 or later (or version 3.10.0-693 or later for RHEL 7) support the /dev/ptp device. For earlier kernel versions that don't support /dev/ptp for Azure host time, only synchronization against an external time source is possible.

- Of course, the default configuration can be changed. An older image that is configured to use ntpd and an external time source can be changed to use chronyd and the /dev/ptp device for Azure host time. Similarly, an image using Azure host time via a /dev/ptp device can be configured to use an external NTP time source if required by your application or workload.
+ The default configuration can be changed. An older image that is configured to use ntpd and an external time source can be changed to use chronyd and the /dev/ptp device for Azure host time. Similarly, an image using Azure host time via a /dev/ptp device can be configured to use an external NTP time source if required by your application or workload.

## Tools and resources

- There are some basic commands for checking your time synchronization configuration. Documentation for Linux distribution will have more details on the best way to configure time synchronization for that distribution.
+ There are some basic commands for checking your time synchronization configuration. Documentation for Linux distribution has more details on the best way to configure time synchronization for that distribution.

### Integration services

@@ -84,7 +84,7 @@ hv_vmbus 397185 7 hv_balloon,hyperv_keyboard,hv_netvsc,hid_hyperv,
With newer versions of Linux, a Precision Time Protocol (PTP) clock source corresponding to the Azure host is available as part of the VMICTimeSync provider.
On older versions of Red Hat Enterprise Linux 7.x, the [Linux Integration Services](https://github.com/LIS/lis-next) can be downloaded and used to
- install the updated driver. When the PTP clock source is available, the Linux device will be of the form /dev/ptp*x*.
+ install the updated driver. When the PTP clock source is available, the Linux device is of the form /dev/ptp*x*.

See which PTP clock sources are available.
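To illustrate the /dev/ptp and chronyd behavior described in the hunks above, the commands below are a rough sketch of how the PTP clock source and chrony state are usually inspected on a Linux VM (device names, the example refclock line, and the output vary between images and kernel versions):

```bash
# List PTP devices exposed to the guest; when the Azure host clock source is
# available it appears as /dev/ptpX with a clock name of "hyperv".
ls /sys/class/ptp/
cat /sys/class/ptp/ptp0/clock_name

# On images that use the host clock, chrony.conf typically carries a refclock
# entry of roughly this shape (shown for illustration only, not added automatically):
#   refclock PHC /dev/ptp_hyperv poll 3 dpoll -2 offset 0

# Show which time source chronyd is currently synchronizing against.
chronyc sources
```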
