
Commit d3c798d

Merge pull request #185456 from MicrosoftDocs/repo_sync_working_branch
Confirm merge from repo_sync_working_branch to master to sync with https://github.com/MicrosoftDocs/azure-docs (branch master)
2 parents 882c97f + 3add66e commit d3c798d

9 files changed: +15 lines, -7 lines

articles/backup/back-up-azure-stack-hyperconverged-infrastructure-virtual-machines.md

Lines changed: 3 additions & 0 deletions
@@ -8,6 +8,9 @@ ms.date: 07/27/2021
  # Back up Azure Stack HCI virtual machines with Azure Backup Server

  This article explains how to back up virtual machines on Azure Stack HCI using Microsoft Azure Backup Server (MABS).
+
+ > [!NOTE]
+ > This support applies to Azure Stack HCI version 20H2. Backup of virtual machines on Azure Stack HCI version 21H2 is not supported.

  ## Supported scenarios

articles/batch/batch-compute-node-environment-variables.md

Lines changed: 0 additions & 1 deletion
@@ -56,7 +56,6 @@ The command lines executed by tasks on compute nodes don't run under a shell. Th
  | AZ_BATCH_TASK_ID | The ID of the current task. | All tasks except start task. | task001 |
  | AZ_BATCH_TASK_SHARED_DIR | A directory path that is identical for the primary task and every subtask of a [multi-instance task](batch-mpi.md). The path exists on every node on which the multi-instance task runs, and is read/write accessible to the task commands running on that node (both the [coordination command](batch-mpi.md#coordination-command) and the [application command](batch-mpi.md#application-command)). Subtasks or a primary task that execute on other nodes do not have remote access to this directory (it is not a "shared" network directory). | Multi-instance primary and subtasks. | C:\user\tasks\workitems\multiinstancesamplejob\job-1\multiinstancesampletask |
  | AZ_BATCH_TASK_WORKING_DIR | The full path of the [task working directory](files-and-directories.md) on the node. The currently running task has read/write access to this directory. | All tasks. | C:\user\tasks\workitems\batchjob001\job-1\task001\wd |
- | AZ_BATCH_TASK_WORKING_DIR | The full path of the [task working directory](files-and-directories.md) on the node. The currently running task has read/write access to this directory. | All tasks. | C:\user\tasks\workitems\batchjob001\job-1\task001\wd |
  | AZ_BATCH_TASK_RESERVED_EPHEMERAL_DISK_SPACE_BYTES | The current threshold for disk space upon which the VM will be marked as `DiskFull`. | All tasks. | 1000000 |
  | CCP_NODES | The list of nodes and number of cores per node that are allocated to a [multi-instance task](batch-mpi.md). Nodes and cores are listed in the format `numNodes<space>node1IP<space>node1Cores<space>`<br/>`node2IP<space>node2Cores<space> ...`, where the number of nodes is followed by one or more node IP addresses and the number of cores for each. | Multi-instance primary and subtasks. | `2 10.0.0.4 1 10.0.0.5 1` |
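As a side note on the environment-variable table above, here is a minimal sketch of how a task script might consume these values at run time. It assumes it is running as a task on a Batch compute node (otherwise the variables are unset), and the file name `output.txt` is purely illustrative:

```python
import os
from pathlib import Path

# Minimal sketch: read a few of the Batch-supplied environment variables listed
# in the table above. They are only defined while running as a task on a Batch
# compute node, hence the fallbacks for local testing.
task_id = os.environ.get("AZ_BATCH_TASK_ID", "<not-a-batch-task>")
working_dir = os.environ.get("AZ_BATCH_TASK_WORKING_DIR", ".")

print(f"Task {task_id} is using working directory {working_dir}")

# The task working directory is read/write for the current task, so it is a
# reasonable place for scratch output. "output.txt" is an illustrative name.
Path(working_dir, "output.txt").write_text(f"hello from {task_id}\n")
```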

articles/healthcare-apis/fhir/overview.md

Lines changed: 1 addition & 1 deletion
@@ -57,7 +57,7 @@ Protect your PHI with unparalleled security intelligence. Your data is isolated

  FHIR servers are key tools for interoperability of health data. The FHIR service is designed as an API and service that you can create, deploy, and begin using quickly. As the FHIR standard expands in healthcare, use cases will continue to grow, but some initial customer applications where FHIR service is useful are below:

- - **Startup/IoT and App Development:** Customers developing a patient or provider centric app (mobile or web) can leverage FHIR service as a fully managed backend service. The FHIR service provides a valuable resource in that customers can managing data and exchanging data in a secure cloud environment designed for health data, leverage SMART on FHIR implementation guidelines, and enable their technology to be utilized by all provider systems (for example, most EHRs have enabled FHIR read APIs).
+ - **Startup/IoT and App Development:** Customers developing a patient or provider centric app (mobile or web) can leverage FHIR service as a fully managed backend service. The FHIR service provides a valuable resource in that customers can manage and exchange data in a secure cloud environment designed for health data, leverage SMART on FHIR implementation guidelines, and enable their technology to be utilized by all provider systems (for example, most EHRs have enabled FHIR read APIs).

  - **Healthcare Ecosystems:** While EHRs exist as the primary ‘source of truth’ in many clinical settings, it is not uncommon for providers to have multiple databases that aren’t connected to one another or store data in different formats. Utilizing the FHIR service as a service that sits on top of those systems allows you to standardize data in the FHIR format. This helps to enable data exchange across multiple systems with a consistent data format.

articles/industrial-iot/tutorial-publisher-configure-opc-publisher.md

Lines changed: 1 addition & 1 deletion
@@ -115,7 +115,7 @@ We have provided a [sample configuration application](https://github.com/Azure-S
  >[!NOTE]
  > This feature is only available in version 2.6 and above of OPC Publisher.

- A cloud-based, companion microservice with a REST interface is described and available [here](https://github.com/Azure/Industrial-IoT/blob/master/docs/services/publisher.md). It can be used to configure OPC Publisher via an OpenAPI-compatible interface, for example through Swagger.
+ A cloud-based, companion microservice with a REST interface is described and available [here](https://github.com/Azure/Industrial-IoT/blob/main/docs/services/publisher.md). It can be used to configure OPC Publisher via an OpenAPI-compatible interface, for example through Swagger.

  ## Configuration of the simple JSON telemetry format via Separate Configuration File

articles/media-services/latest/configure-connect-nodejs-howto.md

Lines changed: 1 addition & 1 deletion
@@ -26,7 +26,7 @@ This article shows you how to connect to the Azure Media Services v3 node.js SDK

  - An installation of Visual Studio Code.
  - Install [Node.js](https://nodejs.org/en/download/).
- - Install [Typescript](https://www.typescriptlang.org/download).
+ - Install [TypeScript](https://www.typescriptlang.org/download).
  - [Create a Media Services account](./account-create-how-to.md). Be sure to remember the resource group name and the Media Services account name.
  - Create a service principal for your application. See [access APIs](./access-api-howto.md).<br/>**Pro tip!** Keep this window open or copy everything in the JSON tab to Notepad.
  - Make sure to get the latest version of the [AzureMediaServices SDK for JavaScript](https://www.npmjs.com/package/@azure/arm-mediaservices).

articles/stream-analytics/machine-learning-udf.md

Lines changed: 1 addition & 1 deletion
@@ -11,7 +11,7 @@ ms.custom: devx-track-js
  ---
  # Integrate Azure Stream Analytics with Azure Machine Learning (Preview)

- You can implement machine learning models as a user-defined function (UDF) in your Azure Stream Analytics jobs to do real-time scoring and predictions on your streaming input data. [Azure Machine Learning](../machine-learning/overview-what-is-azure-machine-learning.md) allows you to use any popular open-source tool, such as Tensorflow, scikit-learn, or PyTorch, to prep, train, and deploy models.
+ You can implement machine learning models as a user-defined function (UDF) in your Azure Stream Analytics jobs to do real-time scoring and predictions on your streaming input data. [Azure Machine Learning](../machine-learning/overview-what-is-azure-machine-learning.md) allows you to use any popular open-source tool, such as TensorFlow, scikit-learn, or PyTorch, to prep, train, and deploy models.

  ## Prerequisites

articles/virtual-machine-scale-sets/virtual-machine-scale-sets-health-extension.md

Lines changed: 3 additions & 1 deletion
@@ -47,7 +47,9 @@ The following JSON shows the schema for the Application Health extension. The ex
      "settings": {
        "protocol": "<protocol>",
        "port": "<port>",
-       "requestPath": "</requestPath>"
+       "requestPath": "</requestPath>",
+       "intervalInSeconds": "5.0",
+       "numberOfProbes": "1.0"
      }
    }
  }
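For orientation only, a minimal sketch of what that settings object might look like with concrete values filled in. The protocol, port, and request path here ("http", 80, "/api/health") are illustrative assumptions rather than values taken from this commit; intervalInSeconds and numberOfProbes follow the example values shown in the schema above:

```python
import json

# Minimal sketch (assumed values): a filled-in Application Health extension
# "settings" object based on the schema shown in the diff above. "http", 80,
# and "/api/health" are illustrative; adjust them to your own probe endpoint.
settings = {
    "protocol": "http",
    "port": 80,
    "requestPath": "/api/health",
    "intervalInSeconds": 5,   # mirrors the example value from the schema above
    "numberOfProbes": 1,      # mirrors the example value from the schema above
}

# Print the fragment as JSON so it can be pasted into a template or CLI call.
print(json.dumps({"settings": settings}, indent=2))
```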

articles/virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md

Lines changed: 1 addition & 1 deletion
@@ -51,7 +51,7 @@ Virtual machine scale sets in Flexible Orchestration mode manages standard Azure
  You can choose the number of fault domains for the Flexible orchestration scale set. By default, when you add a VM to a Flexible scale set, Azure evenly spreads instances across fault domains. While it is recommended to let Azure assign the fault domain, for advanced or troubleshooting scenarios you can override this default behavior and specify the fault domain where the instance will land.

  ```azurecli-interactive
- az vm create --vmss "myVMSS" --platform_fault_domain 1
+ az vm create --vmss "myVMSS" --platform-fault-domain 1
  ```

  ### Instance naming

includes/virtual-machines-imds.md

Lines changed: 4 additions & 0 deletions
@@ -46,6 +46,8 @@ Invoke-RestMethod -Headers @{"Metadata"="true"} -Method GET -NoProxy -Uri "http:
  curl -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/instance?api-version=2021-02-01" | jq
  ```

+ The `jq` utility is available in many cases, but not all. If the `jq` utility is missing, use `| python -m json.tool` instead.
+
  ---

  **Response**

@@ -569,6 +571,8 @@ Invoke-RestMethod -Headers @{"Metadata"="true"} -Method GET -NoProxy -Uri "http:
  curl -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/instance/compute/tagsList?api-version=2019-06-04" | jq
  ```

+ The `jq` utility is available in many cases, but not all. If the `jq` utility is missing, use `| python -m json.tool` instead.
+
  ---

  **Response**
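For completeness, a stdlib-only sketch of the same metadata query without curl or jq, which would need to run from inside an Azure VM. The endpoint, API version, and Metadata header come from the snippets above; everything else is an assumption:

```python
import json
import urllib.request

# Minimal sketch: call the Instance Metadata Service endpoint shown above and
# pretty-print the JSON without jq. This only works from inside an Azure VM,
# the Metadata:true header is required, and proxies must be bypassed.
url = "http://169.254.169.254/metadata/instance?api-version=2021-02-01"
request = urllib.request.Request(url, headers={"Metadata": "true"})
opener = urllib.request.build_opener(urllib.request.ProxyHandler({}))  # no proxy

with opener.open(request, timeout=5) as response:
    instance_metadata = json.load(response)

print(json.dumps(instance_metadata, indent=2))
```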
