articles/bastion/kerberos-authentication-portal.md (+1 -1)
@@ -12,7 +12,7 @@ ms.author: cherylmc
# Configure Bastion for Kerberos authentication using the Azure portal
- This article shows you how to configure Azure Bastion to use Kerberos authentication. Kerberos authentication can be used with both the Basic and the Standard Bastion SKUs. For more information about Kerberos authentication, see the [Kerberos authentication overview](/windows-server/security/kerberos/kerberos-authentication-overview). For more information about Azure Bastion, see [What is Azure Bastion?](bastion-overview.md)
+ This article shows you how to configure Azure Bastion to use Kerberos authentication. Kerberos authentication can be used with the Basic SKU tier or higher for Azure Bastion. For more information about Kerberos authentication, see the [Kerberos authentication overview](/windows-server/security/kerberos/kerberos-authentication-overview). For more information about Azure Bastion, see [What is Azure Bastion?](bastion-overview.md)
articles/cost-management-billing/reservations/exchange-and-refund-azure-reservations.md (+3 -3)
@@ -34,21 +34,21 @@ When you exchange a reservation, you can change your term from one-year to three
Not all reservations are eligible for exchange. For example, you can't exchange the following reservations:
- - Azure Databricks reserved capacity
+ - Azure Databricks Pre-purchase plan
- Azure OpenAI provisioned throughput
- Synapse Analytics Pre-purchase plan
- Red Hat plans
- SUSE Linux plans
- Microsoft Defender for Cloud Pre-Purchase Plan
- Microsoft Sentinel Pre-Purchase Plan
- You can also refund reservations, but the sum total of all canceled reservation commitments in your billing scope (such as EA, Microsoft Customer Agreement, and Microsoft Partner Agreement) can't exceed USD 50,000 in a 12-month rolling window.
+ You can also refund reservations, but the sum total of all canceled reservation commitments in your billing scope (such as EA, Microsoft Customer Agreement - Billing Profile, and Microsoft Partner Agreement - Customer) can't exceed USD 50,000 in a 12-month rolling window.
*Microsoft is not currently charging early termination fees for reservation refunds. We might charge the fees for refunds made in the future. We currently don't have a date for enabling the fee.*
The following reservations aren't eligible for refunds:
- This article walks you through deploying Nvidia’s DeepStream module on an Ubuntu VM running on your Azure Stack Edge device. The DeepStream module is supported only on GPU devices.
+ This article walks you through deploying NVIDIA’s DeepStream module on an Ubuntu VM running on your Azure Stack Edge device. The DeepStream module is supported only on GPU devices.
articles/databox-online/azure-stack-edge-gpu-configure-gpu-modules.md (+4 -4)
@@ -18,7 +18,7 @@ ms.author: alkohli
Your Azure Stack Edge Pro device contains one or more Graphics Processing Unit (GPU). GPUs are a popular choice for AI computations as they offer parallel processing capabilities and are faster at image rendering than Central Processing Units (CPUs). For more information on the GPU contained in your Azure Stack Edge Pro device, go to [Azure Stack Edge Pro device technical specifications](azure-stack-edge-gpu-technical-specifications-compliance.md).
- This article describes how to configure and run a module on the GPU on your Azure Stack Edge Pro device. In this article, you will use a publicly available container module **Digits** written for Nvidia T4 GPUs. This procedure can be used to configure any other modules published by Nvidia for these GPUs.
+ This article describes how to configure and run a module on the GPU on your Azure Stack Edge Pro device. In this article, you will use a publicly available container module **Digits** written for NVIDIA T4 GPUs. This procedure can be used to configure any other modules published by NVIDIA for these GPUs.
## Prerequisites
@@ -81,7 +81,7 @@ To configure a module to use the GPU on your Azure Stack Edge Pro device to run
10. In the **Add IoT Edge Module** tab:
- 1. Provide the **Image URI**. You will use the publicly available Nvidia module **Digits** here.
+ 1. Provide the **Image URI**. You will use the publicly available NVIDIA module **Digits** here.
2. Set **Restart policy** to **always**.
@@ -97,7 +97,7 @@ To configure a module to use the GPU on your Azure Stack Edge Pro device to run

- For more information on environment variables that you can use with the Nvidia GPU, go to [nVidia container runtime](https://github.com/NVIDIA/nvidia-container-runtime#environment-variables-oci-spec).
+ For more information on environment variables that you can use with the NVIDIA GPU, go to [NVIDIA container runtime](https://github.com/NVIDIA/nvidia-container-runtime#environment-variables-oci-spec).
> [!NOTE]
> A module can use one, both, or no GPUs.
@@ -125,4 +125,4 @@ To configure a module to use the GPU on your Azure Stack Edge Pro device to run
## Next steps
- - Learn more about [Environment variables that you can use with the Nvidia GPU](https://github.com/NVIDIA/nvidia-container-runtime#environment-variables-oci-spec).
+ - Learn more about [Environment variables that you can use with the NVIDIA GPU](https://github.com/NVIDIA/nvidia-container-runtime#environment-variables-oci-spec).
articles/databox-online/azure-stack-edge-gpu-connect-powershell-interface.md (+1 -1)
@@ -62,7 +62,7 @@ If the compute role is configured on your device, you can also get the GPU drive
## Enable Multi-Process Service (MPS)
- A Multi-Process Service (MPS) on Nvidia GPUs provides a mechanism where GPUs can be shared by multiple jobs, where each job is allocated some percentage of the GPU's resources. MPS is a preview feature on your Azure Stack Edge Pro GPU device. To enable MPS on your device, follow these steps:
+ A Multi-Process Service (MPS) on NVIDIA GPUs provides a mechanism by which multiple jobs can share a GPU, with each job allocated some percentage of the GPU's resources. MPS is a preview feature on your Azure Stack Edge Pro GPU device. To enable MPS on your device, follow these steps:
articles/databox-online/azure-stack-edge-gpu-deploy-compute-acceleration.md (+2 -2)
@@ -25,8 +25,8 @@ Compute acceleration is a term used specifically for Azure Stack Edge devices wh
This article discusses compute acceleration using only GPU or VPU for the following devices:
- -**Azure Stack Edge Pro GPU** - These devices can have 1 or 2 Nvidia T4 Tensor Core GPU. For more information, see [NVIDIA T4](https://www.nvidia.com/en-us/data-center/tesla-t4/).
- -**Azure Stack Edge Pro R** - These devices have 1 Nvidia T4 Tensor Core GPU. For more information, see [NVIDIA T4](https://www.nvidia.com/en-us/data-center/tesla-t4/).
+ -**Azure Stack Edge Pro GPU** - These devices can have 1 or 2 NVIDIA T4 Tensor Core GPUs. For more information, see [NVIDIA T4](https://www.nvidia.com/en-us/data-center/tesla-t4/).
+ -**Azure Stack Edge Pro R** - These devices have 1 NVIDIA T4 Tensor Core GPU. For more information, see [NVIDIA T4](https://www.nvidia.com/en-us/data-center/tesla-t4/).
-**Azure Stack Edge Mini R** - These devices have 1 Intel Movidius Myriad X VPU. For more information, see [Intel Movidius Myriad X VPU](https://www.movidius.com/MyriadX).
articles/databox-online/azure-stack-edge-gpu-deploy-gpu-virtual-machine.md (+2 -2)
@@ -38,7 +38,7 @@ Follow these steps when deploying GPU VMs on your device via the Azure portal:
1. To create GPU VMs, follow all the steps in [Deploy VM on your Azure Stack Edge using Azure portal](azure-stack-edge-gpu-deploy-virtual-machine-portal.md), with these configuration requirements:
- - On the **Basics** tab, select a [VM size from N-series, optimized for GPUs](azure-stack-edge-gpu-virtual-machine-sizes.md#n-series-gpu-optimized). Based on the GPU model on your device, Nvidia T4 or Nvidia A2, the dropdown list will display the corresponding supported GPU VM sizes.
+ - On the **Basics** tab, select a [VM size from N-series, optimized for GPUs](azure-stack-edge-gpu-virtual-machine-sizes.md#n-series-gpu-optimized). Based on the GPU model on your device, NVIDIA T4 or NVIDIA A2, the dropdown list will display the corresponding supported GPU VM sizes.

@@ -97,7 +97,7 @@ After the VM is created, you can [deploy the GPU extension using the extension t
## Install GPU extension after deployment
- To take advantage of the GPU capabilities of Azure N-series VMs, Nvidia GPU drivers must be installed. From the Azure portal, you can install the GPU extension during or after VM deployment. If you're using templates, you'll install the GPU extension after you create the VM.
+ To take advantage of the GPU capabilities of Azure N-series VMs, NVIDIA GPU drivers must be installed. From the Azure portal, you can install the GPU extension during or after VM deployment. If you're using templates, you'll install the GPU extension after you create the VM.
articles/databox-online/azure-stack-edge-gpu-deploy-iot-edge-gpu-sharing.md (+8 -8)
@@ -21,7 +21,7 @@ Before you begin, make sure that:
1. You've access to a client system with a [Supported operating system](azure-stack-edge-gpu-system-requirements.md#supported-os-for-clients-connected-to-device). If using a Windows client, the system should run PowerShell 5.0 or later to access the device.
- 1. Save the following deployment `json` on your local system. You'll use information from this file to run the IoT Edge deployment. This deployment is based on [Simple CUDA containers](https://docs.nvidia.com/cuda/wsl-user-guide/index.html#running-simple-containers) that are publicly available from Nvidia.
+ 1. Save the following deployment `json` on your local system. You'll use information from this file to run the IoT Edge deployment. This deployment is based on [Simple CUDA containers](https://docs.nvidia.com/cuda/wsl-user-guide/index.html#running-simple-containers) that are publicly available from NVIDIA.
```json
{
  ...
}
```
@@ -118,7 +118,7 @@ The first step is to verify that your device is running required GPU driver and
`Get-HcsGpuNvidiaSmi`
- 1. In the Nvidia smi output, make a note of the GPU version and the CUDA version on your device. If you are running Azure Stack Edge 2102 software, this version would correspond to the following driver versions:
+ 1. In the NVIDIA smi output, make a note of the GPU version and the CUDA version on your device. If you are running Azure Stack Edge 2102 software, this version would correspond to the following driver versions:
- GPU driver version: 460.32.03
- CUDA version: 11.2
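The hunk above asks you to note the driver and CUDA versions from the `Get-HcsGpuNvidiaSmi` output. As a small sketch, these fields can be pulled out of the nvidia-smi banner line programmatically; the sample line below is illustrative, built from the versions the article cites for Azure Stack Edge 2102.

```python
import re

# Sketch: extract driver and CUDA versions from the nvidia-smi banner line,
# as printed by Get-HcsGpuNvidiaSmi. The sample line is an assumption modeled
# on the versions cited above (driver 460.32.03, CUDA 11.2).
banner = "| NVIDIA-SMI 460.32.03    Driver Version: 460.32.03    CUDA Version: 11.2     |"

driver = re.search(r"Driver Version:\s*([\d.]+)", banner).group(1)
cuda = re.search(r"CUDA Version:\s*([\d.]+)", banner).group(1)
print(driver, cuda)  # → 460.32.03 11.2
```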
@@ -152,7 +152,7 @@ The first step is to verify that your device is running required GPU driver and
```powershell
[10.100.10.10]: PS>
```
- 1. Keep this session open as you will use it to view the Nvidia smi output throughout the article.
+ 1. Keep this session open as you will use it to view the NVIDIA smi output throughout the article.
## Deploy without context-sharing
@@ -216,7 +216,7 @@ For detailed instructions, see [Connect to and manage a Kubernetes cluster via k
### Deploy modules via portal
- Deploy IoT Edge modules via the Azure portal. You'll deploy publicly available Nvidia CUDA sample modules that run n-body simulation.
+ Deploy IoT Edge modules via the Azure portal. You'll deploy publicly available NVIDIA CUDA sample modules that run n-body simulation.
1. Make sure that the IoT Edge service is running on your device.
@@ -316,7 +316,7 @@ Deploy IoT Edge modules via the Azure portal. You'll deploy publicly available N
There are two pods, `cuda-sample1-97c494d7f-lnmns` and `cuda-sample2-d9f6c4688-2rld9` running on your device.
- 1. While both the containers are running the n-body simulation, view the GPU utilization from the Nvidia smi output. Go to the PowerShell interface of the device and run `Get-HcsGpuNvidiaSmi`.
+ 1. While both the containers are running the n-body simulation, view the GPU utilization from the NVIDIA smi output. Go to the PowerShell interface of the device and run `Get-HcsGpuNvidiaSmi`.
Here is an example output when both the containers are running the n-body simulation:
@@ -349,7 +349,7 @@ Deploy IoT Edge modules via the Azure portal. You'll deploy publicly available N
As you can see, there are two containers running with n-body simulation on GPU 0. You can also view their corresponding memory usage.
- 1. Once the simulation has completed, the Nvidia smi output will show that there are no processes running on the device.
+ 1. Once the simulation has completed, the NVIDIA smi output will show that there are no processes running on the device.
```powershell
[10.100.10.10]: PS>Get-HcsGpuNvidiaSmi
```
@@ -460,7 +460,7 @@ You can now deploy the n-body simulation on two CUDA containers when MPS is runn
```powershell
Created nvidia-mps.service
[10.100.10.10]: PS>
```
- 1. Get the Nvidia smi output from the PowerShell interface of the device. You can see the `nvidia-cuda-mps-server` process or the MPS service is running on the device.
+ 1. Get the NVIDIA smi output from the PowerShell interface of the device. You can see that the `nvidia-cuda-mps-server` process, that is, the MPS service, is running on the device.
Here is an example output:
@@ -548,7 +548,7 @@ You can now deploy the n-body simulation on two CUDA containers when MPS is runn
```powershell
PS C:\WINDOWS\system32>
```
- 1. Get the Nvidia smi output from the PowerShell interface of the device when both the containers are running the n-body simulation. Here is an example output. There are three processes, the `nvidia-cuda-mps-server` process (type C) corresponds to the MPS service and the `/tmp/nbody` processes (type M + C) correspond to the n-body workloads deployed by the modules.
+ 1. Get the NVIDIA smi output from the PowerShell interface of the device when both the containers are running the n-body simulation. Here is an example output. There are three processes: the `nvidia-cuda-mps-server` process (type C) corresponds to the MPS service, and the `/tmp/nbody` processes (type M + C) correspond to the n-body workloads deployed by the modules.
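The last hunk distinguishes the MPS server process from the client workloads by the type column in the nvidia-smi process table. A minimal sketch of that classification, using rows modeled on the article's description (the exact rows are an assumption, not the device's real output):

```python
# Sketch: classify the processes the hunk above describes — nvidia-smi shows
# the MPS server as type "C" named "nvidia-cuda-mps-server", and MPS client
# workloads as type "M+C". The rows below are illustrative assumptions.
processes = [
    {"type": "C",   "name": "nvidia-cuda-mps-server"},  # MPS service
    {"type": "M+C", "name": "/tmp/nbody"},              # n-body workload 1
    {"type": "M+C", "name": "/tmp/nbody"},              # n-body workload 2
]

mps_servers = [p for p in processes if p["name"] == "nvidia-cuda-mps-server"]
workloads = [p for p in processes if p["type"] == "M+C"]
print(len(mps_servers), len(workloads))  # → 1 2
```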