Commit cb2ebab

Merge pull request #110523 from MicrosoftDocs/repo_sync_working_branch

Confirm merge from repo_sync_working_branch to master to sync with https://github.com/Microsoft/azure-docs (branch master)

2 parents 014c879 + 48f15de, commit cb2ebab

8 files changed: +17 −11 lines changed

articles/aks/operator-best-practices-storage.md

Lines changed: 1 addition & 1 deletion

@@ -36,7 +36,7 @@ The following table outlines the available storage types and their capabilities:

  The two primary types of storage provided for volumes in AKS are backed by Azure Disks or Azure Files. To improve security, both types of storage use Azure Storage Service Encryption (SSE) by default that encrypts data at rest. Disks cannot currently be encrypted using Azure Disk Encryption at the AKS node level.

- Azure Files are currently available in the Standard performance tier. Azure Disks are available in Standard and Premium performance tiers:
+ Both Azure Files and Azure Disks are available in Standard and Premium performance tiers:

  - *Premium* disks are backed by high-performance solid-state disks (SSDs). Premium disks are recommended for all production workloads.
  - *Standard* disks are backed by regular spinning disks (HDDs), and are good for archival or infrequently accessed data.
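As an illustration of how these performance tiers surface in AKS, here is a sketch of a StorageClass that dynamically provisions Premium SSD-backed disks. The class name is hypothetical; the provisioner and parameters follow the in-tree Azure Disk provisioner that AKS's built-in `managed-premium` class used:

```yaml
# Hypothetical StorageClass requesting Premium SSD-backed Azure Disks.
# Swap storageaccounttype to Standard_LRS for the HDD-backed Standard tier.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: managed-premium-example
provisioner: kubernetes.io/azure-disk
parameters:
  storageaccounttype: Premium_LRS
  kind: Managed
```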

articles/aks/spark-job.md

Lines changed: 1 addition & 1 deletion

@@ -288,7 +288,7 @@ Pi is roughly 3.152155760778804

  In the above example, the Spark jar file was uploaded to Azure storage. Another option is to package the jar file into custom-built Docker images.

- To do so, find the `dockerfile` for the Spark image located at `$sparkdir/resource-managers/kubernetes/docker/src/main/dockerfiles/spark/` directory. Add am `ADD` statement for the Spark job `jar` somewhere between `WORKDIR` and `ENTRYPOINT` declarations.
+ To do so, find the `dockerfile` for the Spark image located at `$sparkdir/resource-managers/kubernetes/docker/src/main/dockerfiles/spark/` directory. Add an `ADD` statement for the Spark job `jar` somewhere between `WORKDIR` and `ENTRYPOINT` declarations.

  Update the jar path to the location of the `SparkPi-assembly-0.1.0-SNAPSHOT.jar` file on your development system. You can also use your own custom jar file.
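As a hedged illustration of the edit the corrected sentence describes, the `ADD` statement might sit in the Dockerfile like this. The `WORKDIR` and `ENTRYPOINT` values shown are placeholders, not copied from the actual Spark Dockerfile; the jar path is the example path from the article:

```dockerfile
# ...earlier Dockerfile instructions omitted...
WORKDIR /opt/spark/work-dir

# Copy the Spark job jar into the image, between WORKDIR and ENTRYPOINT
ADD /path/to/SparkPi-assembly-0.1.0-SNAPSHOT.jar /opt/spark/jars

ENTRYPOINT [ "/opt/entrypoint.sh" ]
```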

articles/aks/tutorial-kubernetes-scale.md

Lines changed: 1 addition & 1 deletion

@@ -191,7 +191,7 @@ Advance to the next tutorial to learn how to update application in Kubernetes.

  [kubectl-scale]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#scale
  [kubernetes-hpa]: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/
  [metrics-server-github]: https://github.com/kubernetes-incubator/metrics-server/tree/master/deploy/1.8%2B
- [metrics-server]: https://v1-13.docs.kubernetes.io/docs/tasks/debug-application-cluster/core-metrics-pipeline/
+ [metrics-server]: https://kubernetes.io/docs/tasks/debug-application-cluster/resource-metrics-pipeline/#metrics-server

  <!-- LINKS - internal -->
  [aks-tutorial-prepare-app]: ./tutorial-kubernetes-prepare-app.md

articles/azure-functions/functions-host-json.md

Lines changed: 1 addition & 1 deletion

@@ -281,7 +281,7 @@ Controls the logging behaviors of the function app, including Application Insigh

  ```json
  "logging": {
-     "fileLoggingMode": "debugOnly"
+     "fileLoggingMode": "debugOnly",
      "logLevel": {
          "Function.MyFunction": "Information",
          "default": "None"
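For context, here is a sketch of how this fragment sits inside a complete `host.json`. The outer braces and the `version` property are assumptions added for completeness and are not part of the diff:

```json
{
  "version": "2.0",
  "logging": {
    "fileLoggingMode": "debugOnly",
    "logLevel": {
      "Function.MyFunction": "Information",
      "default": "None"
    }
  }
}
```

Without the trailing comma the change adds after `"fileLoggingMode": "debugOnly"`, this object would be invalid JSON, which is what the one-character fix addresses.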

articles/iot-hub/iot-hub-arduino-iot-devkit-az3166-get-started.md

Lines changed: 7 additions & 1 deletion

@@ -77,7 +77,13 @@ A device must be registered with your IoT hub before it can connect. In this qui

     ```

  > [!NOTE]
- > If you get an error running `device-identity`, install the [Azure IOT Extension for Azure CLI](https://github.com/Azure/azure-iot-cli-extension/blob/dev/README.md) for more details.
+ > If you get an error running `device-identity`, install the [Azure IoT Extension for Azure CLI](https://github.com/Azure/azure-iot-cli-extension/blob/dev/README.md).
+ > Run the following command to add the Microsoft Azure IoT Extension for Azure CLI to your Cloud Shell instance. The IoT Extension adds commands that are specific to IoT Hub, IoT Edge, and IoT Device Provisioning Service (DPS) to Azure CLI.
+ >
+ > ```azurecli-interactive
+ > az extension add --name azure-iot
+ > ```
+ >

  1. Run the following commands in Azure Cloud Shell to get the _device connection string_ for the device you just registered:

articles/sql-database/sql-database-managed-instance-create-vnet-subnet.md

Lines changed: 3 additions & 3 deletions

@@ -23,13 +23,13 @@ Azure SQL Database Managed Instance must be deployed within an Azure [virtual ne

  - Connecting a Managed Instance to linked server or another on-premises data store
  - Connecting a Managed Instance to Azure resources

- > [!Note]
+ > [!NOTE]
  > You should [determine the size of the subnet for Managed Instance](sql-database-managed-instance-determine-size-vnet-subnet.md) before you deploy the first instance. You can't resize the subnet after you put the resources inside.
  >
  > If you plan to use an existing virtual network, you need to modify that network configuration to accommodate your Managed Instance. For more information, see [Modify an existing virtual network for Managed Instance](sql-database-managed-instance-configure-vnet-subnet.md).
  >
- > After a managed instance is created, moving the managed instance or VNet to another resource group or subscription is not supported.
-
+ > After a managed instance is created, moving the managed instance or VNet to another resource group or subscription is not supported. Moving the managed instance to another subnet also is not supported.
+ >

  ## Create a virtual network

articles/vpn-gateway/vpn-gateway-howto-setup-alerts-virtual-network-gateway-log.md

Lines changed: 1 addition & 1 deletion

@@ -22,7 +22,7 @@ The following logs are available in Azure:

  |TunnelDiagnosticLog | Contains tunnel state change events. Tunnel connect/disconnect events have a summarized reason for the state change if applicable |
  |RouteDiagnosticLog | Logs changes to static routes and BGP events that occur on the gateway |
  |IKEDiagnosticLog | Logs IKE control messages and events on the gateway |
- |P2SDiagnosticLog | Logs point-to-site control messages and events on the gateway |
+ |P2SDiagnosticLog | Logs point-to-site control messages and events on the gateway. Connection source info is provided for IKEv2 connections only |

  ## <a name="setup"></a>Set up alerts in the Azure portal

includes/managed-disks-bursting.md

Lines changed: 2 additions & 2 deletions

@@ -41,8 +41,8 @@ Disk bursting is available in all regions in Public Cloud.

  To give you a better idea of how this works, here's a few example scenarios:

- - One common scenario that can benefit from disk bursting is faster VM boot and application launch on OS disks. Take a Linux VM with an 8 GiB OS image as an example. If we use a P2 disk as the OS disk, the provisioned target is 120 IOPS and 25 MBps. When VM starts, there will be a read spike to the OS disk loading the boot files. With the introduction of bursting, you can read at the max burst speed of 3500 IOPS and 170 MBps, accelerating the load time by at least 6x. After VM boot, the traffic level on the OS disk is usually low, since most data operations by the application will be against the attached data disks. If the traffic is below the provisioned target, you will accumulate credits.
+ - One common scenario that can benefit from disk bursting is faster VM boot and application launch on OS disks. Take a Linux VM with an 8 GiB OS image as an example. If we use a P2 disk as the OS disk, the provisioned target is 120 IOPS and 25 MiB. When VM starts, there will be a read spike to the OS disk loading the boot files. With the introduction of bursting, you can read at the max burst speed of 3500 IOPS and 170 MiB, accelerating the load time by at least 6x. After VM boot, the traffic level on the OS disk is usually low, since most data operations by the application will be against the attached data disks. If the traffic is below the provisioned target, you will accumulate credits.

  - If you are hosting a Remote Virtual Desktop environment, whenever an active user launches an application like AutoCAD, read traffic to the OS disk significantly increases. In this case, burst traffic will consume accumulated credits, allowing you to go beyond the provisioned target, and launching the application much faster.

- - A P1 disk has a provisioned target of 120 IOPS and 25 MBps. If the actual traffic on the disk was 100 IOPS and 20 MBps in the past 1 second interval, then the unused 20 IOs and 5 MB are credited to the burst bucket of the disk. Credits in the burst bucket can later be used when the traffic exceeds the provisioned target, up to the max burst limit. The max burst limit defines the ceiling of disk traffic even if you have burst credits to consume from. In this case, even if you have 10,000 IOs in the credit bucket, a P1 disk cannot issue more than the max burst of 3,500 IO per sec.
+ - A P1 disk has a provisioned target of 120 IOPS and 25 MiB. If the actual traffic on the disk was 100 IOPS and 20 MiB in the past 1 second interval, then the unused 20 IOs and 5 MB are credited to the burst bucket of the disk. Credits in the burst bucket can later be used when the traffic exceeds the provisioned target, up to the max burst limit. The max burst limit defines the ceiling of disk traffic even if you have burst credits to consume from. In this case, even if you have 10,000 IOs in the credit bucket, a P1 disk cannot issue more than the max burst of 3,500 IO per sec.
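The credit arithmetic in the P1 scenario can be sanity-checked with a short calculation. This is a sketch using only the figures quoted in the changed lines; the one-second accounting interval and the assumption that bursting spends credits at (burst rate − provisioned rate) follow the text's description:

```python
# Burst-credit arithmetic for a P1 disk, using the figures from the text.
PROVISIONED_IOPS = 120   # P1 provisioned target
MAX_BURST_IOPS = 3_500   # P1 max burst limit (ceiling even with credits banked)

# Credits accrue when actual traffic is below the provisioned target.
actual_iops = 100
credits_per_second = PROVISIONED_IOPS - actual_iops   # 20 IOs banked per second

# Bursting above the target spends credits for the IOs beyond the target.
credit_bucket = 10_000                          # IOs banked, as in the text
spend_rate = MAX_BURST_IOPS - PROVISIONED_IOPS  # 3,380 IOs consumed per second
burst_seconds = credit_bucket / spend_rate      # duration at the full 3,500 IOPS ceiling

print(credits_per_second)         # 20
print(round(burst_seconds, 2))    # 2.96
```

So a full bucket of 10,000 IO credits sustains the maximum 3,500 IOPS burst for roughly three seconds, after which traffic falls back to the 120 IOPS provisioned target.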
