
Commit f5be381

Merge branch 'main' of https://github.com/MicrosoftDocs/azure-docs-pr into premV2

2 parents 00a5071 + 4ff46b1

85 files changed: +1238, -1115 lines


articles/api-management/api-management-api-import-restrictions.md

Lines changed: 4 additions & 0 deletions
```diff
@@ -226,6 +226,10 @@ You can create [SOAP pass-through](import-soap-api.md) and [SOAP-to-REST](restif
 
 * For an open-source tool to resolve and merge `wsdl:import`, `xsd:import`, and `xsd:include` dependencies in a WSDL file, see this [GitHub repo](https://github.com/Azure-Samples/api-management-schema-import).
 
+### WS-* specifications
+
+WSDL files incorporating WS-* specifications are not supported.
+
 ### Messages with multiple parts
 This message type is not supported.
```
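As context for the import-resolution tool referenced in this hunk: resolving dependencies starts by collecting the `wsdl:import`, `xsd:import`, and `xsd:include` references a WSDL declares. A minimal, hypothetical sketch of that first step (the element and attribute names come from the standard WSDL 1.1 and XML Schema namespaces; the helper itself is not part of the linked tool):

```python
# Hypothetical sketch: list the external dependencies a WSDL declares.
# The linked Azure-Samples tool resolves and merges these; this only
# collects them. Namespace URIs are the standard WSDL 1.1 / XML Schema ones.
import xml.etree.ElementTree as ET

WSDL_NS = "http://schemas.xmlsoap.org/wsdl/"
XSD_NS = "http://www.w3.org/2001/XMLSchema"

def list_dependencies(wsdl_text):
    root = ET.fromstring(wsdl_text)
    # wsdl:import uses a 'location' attribute; xsd:import/xsd:include use 'schemaLocation'
    deps = [e.get("location") for e in root.iter(f"{{{WSDL_NS}}}import")]
    deps += [e.get("schemaLocation") for e in root.iter(f"{{{XSD_NS}}}import")]
    deps += [e.get("schemaLocation") for e in root.iter(f"{{{XSD_NS}}}include")]
    return [d for d in deps if d]
```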

articles/azure-vmware/deploy-disaster-recovery-using-jetstream.md

Lines changed: 1 addition & 3 deletions
```diff
@@ -3,7 +3,7 @@ title: Deploy disaster recovery using JetStream DR
 description: Learn how to implement JetStream DR for your Azure VMware Solution private cloud and on-premises VMware workloads.
 ms.topic: how-to
 ms.service: azure-vmware
-ms.date: 04/11/2022
+ms.date: 07/15/2022
 ms.custom: references_regions
 ---
 
@@ -147,8 +147,6 @@ For full details, refer to the article: [Disaster Recovery with Azure NetApp Fil
 
 For more on-premises JetStream DR prerequisites, see the [JetStream Pre-Installation Guide](https://www.jetstreamsoft.com/portal/jetstream-knowledge-base/pre-installation-guidelines/).
 
-
-
 ## Install JetStream DR on Azure VMware Solution
 
 You can follow these steps for both supported scenarios.
```

articles/bastion/TOC.yml

Lines changed: 2 additions & 0 deletions
```diff
@@ -9,6 +9,8 @@
   items:
   - name: Deploy Bastion with default settings
     href: quickstart-host-portal.md
+  - name: Deploy Bastion - ARM template
+    href: quickstart-host-arm-template.md
   - name: Tutorials
     items:
     - name: Deploy Bastion with manual settings
```

articles/expressroute/expressroute-howto-linkvnet-portal-resource-manager.md

Lines changed: 6 additions & 8 deletions
```diff
@@ -3,13 +3,11 @@ title: 'Tutorial: Link a VNet to an ExpressRoute circuit - Azure portal'
 description: This tutorial shows you how to create a connection to link a virtual network to an Azure ExpressRoute circuit using the Azure portal.
 services: expressroute
 author: duongau
-
 ms.service: expressroute
 ms.topic: tutorial
-ms.date: 08/10/2021
+ms.date: 07/15/2022
 ms.author: duau
-ms.custom: seodec18
-
+ms.custom: seodec18, template-tutorial
 ---
 
 # Tutorial: Connect a virtual network to an ExpressRoute circuit using the portal
 
@@ -49,8 +47,6 @@ In this tutorial, you learn how to:
 
 * Review guidance for [connectivity between virtual networks over ExpressRoute](virtual-network-connectivity-guidance.md).
 
-* You can [view a video](https://azure.microsoft.com/documentation/videos/azure-expressroute-how-to-create-a-connection-between-your-vpn-gateway-and-expressroute-circuit) before beginning to better understand the steps.
-
 ## Connect a VNet to a circuit - same subscription
 
 > [!NOTE]
@@ -202,7 +198,9 @@ You can delete a connection and unlink your VNet to an ExpressRoute circuit by s
 
 ## Next steps
 
-In this tutorial, you learned how to connect a virtual network to a circuit in the same subscription and a different subscription. For more information about the ExpressRoute gateway, see:
+In this tutorial, you learned how to connect a virtual network to a circuit in the same subscription and a different subscription. For more information about ExpressRoute gateways, see: [ExpressRoute virtual network gateways](expressroute-about-virtual-network-gateways.md).
+
+To learn how to configure route filters for Microsoft peering using the Azure portal, advance to the next tutorial.
 
 > [!div class="nextstepaction"]
-> [About ExpressRoute virtual network gateways](expressroute-about-virtual-network-gateways.md)
+> [Configure route filters for Microsoft peering](how-to-routefilter-portal.md)
```

articles/healthcare-apis/iot/get-started-with-iot.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -12,9 +12,9 @@ ms.custom: mode-api
 
 # Get started with MedTech service in Azure Health Data Services
 
-This article outlines the basic steps to get started with MedTech service in [Azure Health Data Services](../healthcare-apis-overview.md). MedTech service first processes data that has been sent to an event hub from a medical device, and then saves the data to the Fast Healthcare Interoperability Resources (FHIR®) service as Observation resources. This procedure makes it possible to link the FHIR service Observation to patient and device resources.
+This article outlines the basic steps to get started with Azure MedTech service in [Azure Health Data Services](../healthcare-apis-overview.md). MedTech service ingests health data from a medical device using Azure Event Hubs service. It then persists the data to the Azure Fast Healthcare Interoperability Resources (FHIR®) service as Observation resources. This data processing procedure makes it possible to link FHIR service Observations to patient and device resources.
 
-The following diagram shows the four development steps of the data flow needed to get MedTech service to receive data from a device and send it to FHIR service.
+The following diagram shows the four-step data flow that enables MedTech service to receive data from a device and send it to FHIR service.
 
 - Step 1 introduces the subscription and permissions prerequisites needed.
 
```
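The ingestion-to-FHIR step this hunk describes can be illustrated with a minimal sketch: a device message read from the event hub becomes a FHIR Observation that references the Patient and Device resources. The message fields and reference strings here are hypothetical; MedTech's real transformation is driven by its device and FHIR destination mapping templates.

```python
# Hypothetical sketch of the MedTech transformation: an event-hub device
# message becomes a FHIR Observation linked to Patient and Device
# resources. Field names are illustrative, not the service's actual mapping.
def to_observation(message, patient_id, device_id):
    return {
        "resourceType": "Observation",
        "status": "final",
        "subject": {"reference": f"Patient/{patient_id}"},  # link to Patient
        "device": {"reference": f"Device/{device_id}"},      # link to Device
        "code": {"text": message["type"]},
        "valueQuantity": {"value": message["value"], "unit": message["unit"]},
    }

obs = to_observation({"type": "heartRate", "value": 78, "unit": "beats/min"}, "p1", "d1")
```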

articles/iot-hub/iot-hub-devguide-direct-methods.md

Lines changed: 20 additions & 18 deletions
```diff
@@ -5,7 +5,7 @@ author: kgremban
 ms.service: iot-hub
 services: iot-hub
 ms.topic: conceptual
-ms.date: 07/17/2018
+ms.date: 07/15/2022
 ms.author: kgremban
 ms.custom: [amqp, mqtt,'Role: Cloud Development', 'Role: IoT Device']
 ---
```
````diff
@@ -40,26 +40,27 @@ The payload for method requests and responses is a JSON document up to 128 KB.
 
 ## Invoke a direct method from a back-end app
 
-Now, invoke a direct method from a back-end app.
+To invoke a direct method from a back-end app use the [Invoke device method](/rest/api/iothub/service/devices/invoke-method) REST API or its equivalent in one of the [IoT Hub service SDKs](iot-hub-devguide-sdks.md#azure-iot-hub-service-sdks).
 
 ### Method invocation
 
 Direct method invocations on a device are HTTPS calls that are made up of the following items:
 
-* The *request URI* specific to the device along with the [API version](/rest/api/iothub/service/devices/invokemethod):
+* The *request URI* specific to the device along with the API version:
 
   ```http
   https://fully-qualified-iothubname.azure-devices.net/twins/{deviceId}/methods?api-version=2021-04-12
   ```
 
 * The POST *method*
 
-* *Headers* that contain the authorization, request ID, content type, and content encoding.
+* *Headers* that contain the authorization, content type, and content encoding.
 
 * A transparent JSON *body* in the following format:
 
   ```json
   {
+    "connectTimeoutInSeconds": 200,
     "methodName": "reboot",
     "responseTimeoutInSeconds": 200,
     "payload": {
````
````diff
@@ -75,7 +76,7 @@ The value provided as `connectTimeoutInSeconds` in the request is the amount of
 
 #### Example
 
-This example will allow you to securely initiate a request to invoke a Direct Method on an IoT device registered to an Azure IoT Hub.
+This example will allow you to securely initiate a request to invoke a direct method on an IoT device registered to an Azure IoT hub.
 
 To begin, use the [Microsoft Azure IoT extension for Azure CLI](https://github.com/Azure/azure-iot-cli-extension) to create a SharedAccessSignature.
 
@@ -100,14 +101,15 @@ curl -X POST \
 }'
 ```
 
-Execute the modified command to invoke the specified Direct Method. Successful requests will return an HTTP 200 status code.
+Execute the modified command to invoke the specified direct method. Successful requests will return an HTTP 200 status code.
 
 > [!NOTE]
-> The above example demonstrates invoking a Direct Method on a device. If you wish to invoke a Direct Method in an IoT Edge Module, you would need to modify the url request as shown below:
+> The example above demonstrates invoking a direct method on a device. If you want to invoke a direct method in an IoT Edge Module, you would need to modify the url request as shown below:
+>
+> ```bash
+> https://<iothubName>.azure-devices.net/twins/<deviceId>/modules/<moduleName>/methods?api-version=2021-04-12
+> ```
 
-```bash
-https://<iothubName>.azure-devices.net/twins/<deviceId>/modules/<moduleName>/methods?api-version=2021-04-12
-```
 ### Response
 
 The back-end app receives a response that is made up of the following items:
````
```diff
@@ -117,7 +119,7 @@ The back-end app receives a response that is made up of the following items:
 * 404 indicates that either device ID is invalid, or that the device was not online upon invocation of a direct method and for `connectTimeoutInSeconds` thereafter (use accompanied error message to understand the root cause);
 * 504 indicates gateway timeout caused by device not responding to a direct method call within `responseTimeoutInSeconds`.
 
-* *Headers* that contain the ETag, request ID, content type, and content encoding.
+* *Headers* that contain the request ID, content type, and content encoding.
 
 * A JSON *body* in the following format:
 
```
````diff
@@ -128,25 +130,25 @@ The back-end app receives a response that is made up of the following items:
 }
 ```
 
-Both `status` and `body` are provided by the device and used to respond with the device's own status code and/or description.
+Both `status` and `payload` are provided by the device and used to respond with the device's own status code and the method response.
 
 ### Method invocation for IoT Edge modules
 
-Invoking direct methods using a module ID is supported in the [IoT Service Client C# SDK](https://www.nuget.org/packages/Microsoft.Azure.Devices/).
+Invoking direct methods on a module is supported by the [Invoke module method](/rest/api/iothub/service/modules/invoke-method) REST API or its equivalent in one of the IoT Hub service SDKs.
 
-For this purpose, use the `ServiceClient.InvokeDeviceMethodAsync()` method and pass in the `deviceId` and `moduleId` as parameters.
+The `moduleId` is passed along with the `deviceId` in the request URI when using the REST API or as a parameter when using a service SDK. For example, `https://<iothubName>.azure-devices.net/twins/<deviceId>/modules/<moduleName>/methods?api-version=2021-04-12`. The request body and response is similar to that of direct methods invoked on the device.
 
 ## Handle a direct method on a device
 
-Let's look at how to handle a direct method on an IoT device.
+On an IoT device, direct methods can be received over MQTT, AMQP, or either of these protocols over WebSockets. The [IoT Hub device SDKs](iot-hub-devguide-sdks.md#azure-iot-hub-device-sdks) help you receive and respond to direct methods on devices without having to worry about the underlying protocol details.
 
 ### MQTT
 
-The following section is for the MQTT protocol.
+The following section is for the MQTT protocol. To learn more about using the MQTT protocol directly with IoT Hub, see [MQTT protocol support](iot-hub-mqtt-support.md).
 
 #### Method invocation
 
-Devices receive direct method requests on the MQTT topic: `$iothub/methods/POST/{method name}/?$rid={request id}`. The number of subscriptions per device is limited to 5. It is therefore recommended not to subscribe to each direct method individually. Instead consider subscribing to `$iothub/methods/POST/#` and then filter the delivered messages based on your desired method names.
+Devices receive direct method requests on the MQTT topic: `$iothub/methods/POST/{method name}/?$rid={request id}`. However, the `request id` is generated by IoT Hub and cannot be known ahead of time, so subscribe to `$iothub/methods/POST/#` and then filter the delivered messages based on method names supported by your device. (You'll use the `request id` to respond.)
 
 The body that the device receives is in the following format:
 
````
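The subscribe-and-filter guidance in the MQTT hunk above can be sketched on the device side: after subscribing to `$iothub/methods/POST/#`, extract the method name and `request id` from each delivered topic, then respond on the `$iothub/methods/res/{status}/?$rid={request id}` topic (the POST topic format is from the hunk; the response topic follows IoT Hub's MQTT convention; the parsing helpers themselves are hypothetical):

```python
# Sketch of device-side topic handling for direct methods over MQTT.
# Topic formats follow IoT Hub's convention; the helpers are illustrative.
def parse_method_topic(topic):
    """Split '$iothub/methods/POST/{name}/?$rid={id}' into (name, rid)."""
    prefix = "$iothub/methods/POST/"
    if not topic.startswith(prefix):
        raise ValueError("not a direct-method topic")
    rest = topic[len(prefix):]
    name, _, rid = rest.partition("/?$rid=")
    return name, rid

def response_topic(status, rid):
    """Topic on which the device publishes its method response."""
    return f"$iothub/methods/res/{status}/?$rid={rid}"

name, rid = parse_method_topic("$iothub/methods/POST/reboot/?$rid=42")
```

A device would filter `name` against its supported methods and publish the JSON response body on `response_topic(200, rid)`.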
```diff
@@ -171,7 +173,7 @@ The body is set by the device and can be any status.
 
 ### AMQP
 
-The following section is for the AMQP protocol.
+The following section is for the AMQP protocol. To learn more about using the AMQP protocol directly with IoT Hub, see [AMQP protocol support](iot-hub-amqp-support.md).
 
 #### Method invocation
 
```

articles/machine-learning/component-reference/train-pytorch-model.md

Lines changed: 4 additions & 4 deletions
```diff
@@ -50,7 +50,7 @@ Currently, **Train PyTorch Model** component supports both single node and distr
 > In distributed training, to keep gradient descent stable, the actual learning rate is calculated by `lr * torch.distributed.get_world_size()` because batch size of the process group is world size times that of single process.
 > Polynomial learning rate decay is applied and can help result in a better performing model.
 
-8. For **Random seed**, optionally type an integer value to use as the seed. Using a seed is recommended if you want to ensure reproducibility of the experiment across runs.
+8. For **Random seed**, optionally type an integer value to use as the seed. Using a seed is recommended if you want to ensure reproducibility of the experiment across jobs.
 
 9. For **Patience**, specify how many epochs to early stop training if validation loss does not decrease consecutively. by default 3.
 
@@ -77,12 +77,12 @@ Click on this component 'Metrics' tab and see training metric graphs, such as 'T
 
 ### How to enable distributed training
 
-To enable distributed training for **Train PyTorch Model** component, you can set in **Run settings** in the right pane of the component. Only **[AML Compute cluster](../how-to-create-attach-compute-cluster.md?tabs=python)** is supported for distributed training.
+To enable distributed training for **Train PyTorch Model** component, you can set in **Job settings** in the right pane of the component. Only **[AML Compute cluster](../how-to-create-attach-compute-cluster.md?tabs=python)** is supported for distributed training.
 
 > [!NOTE]
 > **Multiple GPUs** are required to activate distributed training because NCCL backend Train PyTorch Model component uses needs cuda.
 
-1. Select the component and open the right panel. Expand the **Run settings** section.
+1. Select the component and open the right panel. Expand the **Job settings** section.
 
 [![Screenshot showing how to set distributed training in runsetting](./media/module/distributed-training-run-setting.png)](./media/module/distributed-training-run-setting.png#lightbox)
 
@@ -116,7 +116,7 @@ You can refer to [this article](designer-error-codes.md) for more details about
 
 ## Results
 
-After pipeline run is completed, to use the model for scoring, connect the [Train PyTorch Model](train-PyTorch-model.md) to [Score Image Model](score-image-model.md), to predict values for new input examples.
+After pipeline job is completed, to use the model for scoring, connect the [Train PyTorch Model](train-PyTorch-model.md) to [Score Image Model](score-image-model.md), to predict values for new input examples.
 
 ## Technical notes
 ### Expected inputs
```
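The learning-rate note in the first hunk of this file can be sketched numerically: the effective rate scales with the process-group world size, and a polynomial decay is applied over training. The decay power and helper names below are assumptions for illustration, not the component's actual implementation:

```python
# Sketch of the learning-rate handling described in the doc's note.
# Decay power is an assumed value; the component's real schedule may differ.
def effective_lr(base_lr, world_size):
    # mirrors lr * torch.distributed.get_world_size(): process-group batch
    # size is world size times that of a single process
    return base_lr * world_size

def poly_decay(lr, step, total_steps, power=1.0):
    # polynomial learning-rate decay toward zero over training
    return lr * (1.0 - step / total_steps) ** power

lr = effective_lr(0.001, 4)       # 4 processes scale the base rate
lr_mid = poly_decay(lr, 50, 100)  # rate halfway through training
```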

articles/machine-learning/component-reference/train-svd-recommender.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -68,7 +68,7 @@ From this sample, you can see that a single user has rated several movies.
 
 ## Results
 
-After pipeline run is completed, to use the model for scoring, connect the [Train SVD Recommender](train-svd-recommender.md) to [Score SVD Recommender](score-svd-recommender.md), to predict values for new input examples.
+After pipeline job is completed, to use the model for scoring, connect the [Train SVD Recommender](train-svd-recommender.md) to [Score SVD Recommender](score-svd-recommender.md), to predict values for new input examples.
 
 ## Next steps
 
```

articles/machine-learning/component-reference/train-vowpal-wabbit-model.md

Lines changed: 3 additions & 3 deletions
```diff
@@ -65,9 +65,9 @@ The data can be read from two kinds of datasets, file dataset or tabular dataset
 - **VW** represents the internal format used by Vowpal Wabbit . See the [Vowpal Wabbit wiki page](https://github.com/JohnLangford/vowpal_wabbit/wiki/Input-format) for details.
 - **SVMLight** is a format used by some other machine learning tools.
 
-6. **Output readable model file**: select the option if you want the component to save the readable model to the run records. This argument corresponds to the `--readable_model` parameter in the VW command line.
+6. **Output readable model file**: select the option if you want the component to save the readable model to the job records. This argument corresponds to the `--readable_model` parameter in the VW command line.
 
-7. **Output inverted hash file**: select the option if you want the component to save the inverted hashing function to one file in the run records. This argument corresponds to the `--invert_hash` parameter in the VW command line.
+7. **Output inverted hash file**: select the option if you want the component to save the inverted hashing function to one file in the job records. This argument corresponds to the `--invert_hash` parameter in the VW command line.
 
 8. Submit the pipeline.
 
@@ -83,7 +83,7 @@ Vowpal Wabbit supports incremental training by adding new data to an existing mo
 2. Connect the previously trained model to the **Pre-trained Vowpal Wabbit Model** input port of the component.
 3. Connect the new training data to the **Training data** input port of the component.
 4. In the parameters pane of **Train Vowpal Wabbit Model**, specify the format of the new training data, and also the training data file name if the input dataset is a directory.
-5. Select the **Output readable model file** and **Output inverted hash file** options if the corresponding files need to be saved in the run records.
+5. Select the **Output readable model file** and **Output inverted hash file** options if the corresponding files need to be saved in the job records.
 
 6. Submit the pipeline.
 7. Select the component and select **Register dataset** under **Outputs+logs** tab in the right pane, to preserve the updated model in your Azure Machine Learning workspace. If you don't specify a new name, the updated model overwrites the existing saved model.
```
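As a companion to the **VW** input-format option this file mentions, a line in Vowpal Wabbit's native format is a label followed by `|`-separated namespaces of `feature:value` pairs. A small, hypothetical helper that emits such a line (the feature names and values are placeholders; see the linked VW wiki page for the full grammar):

```python
# Hypothetical helper emitting one training example in VW's native input
# format: "<label> |<namespace> feat:val feat:val ...". Feature names and
# values below are placeholders.
def vw_line(label, namespaces):
    parts = [str(label)]
    for name, feats in namespaces.items():
        feat_str = " ".join(f"{k}:{v}" for k, v in feats.items())
        parts.append(f"|{name} {feat_str}")
    return " ".join(parts)

line = vw_line(1, {"f": {"price": 0.23, "sqft": 0.25}})
```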
