**`articles/api-management/api-management-api-import-restrictions.md`** (4 additions, 0 deletions)

````diff
@@ -226,6 +226,10 @@ You can create [SOAP pass-through](import-soap-api.md) and [SOAP-to-REST](restif
 * For an open-source tool to resolve and merge `wsdl:import`, `xsd:import`, and `xsd:include` dependencies in a WSDL file, see this [GitHub repo](https://github.com/Azure-Samples/api-management-schema-import).
+
+### WS-* specifications
+
+WSDL files incorporating WS-* specifications are not supported.
````
**`articles/azure-vmware/deploy-disaster-recovery-using-jetstream.md`** (1 addition, 3 deletions)

````diff
@@ -3,7 +3,7 @@ title: Deploy disaster recovery using JetStream DR
 description: Learn how to implement JetStream DR for your Azure VMware Solution private cloud and on-premises VMware workloads.
 ms.topic: how-to
 ms.service: azure-vmware
-ms.date: 04/11/2022
+ms.date: 07/15/2022
 ms.custom: references_regions
 ---
@@ -147,8 +147,6 @@ For full details, refer to the article: [Disaster Recovery with Azure NetApp Fil
 For more on-premises JetStream DR prerequisites, see the [JetStream Pre-Installation Guide](https://www.jetstreamsoft.com/portal/jetstream-knowledge-base/pre-installation-guidelines/).
 
-
-
 ## Install JetStream DR on Azure VMware Solution
 
 You can follow these steps for both supported scenarios.
````
**`articles/expressroute/expressroute-howto-linkvnet-portal-resource-manager.md`** (6 additions, 8 deletions)

````diff
@@ -3,13 +3,11 @@ title: 'Tutorial: Link a VNet to an ExpressRoute circuit - Azure portal'
 description: This tutorial shows you how to create a connection to link a virtual network to an Azure ExpressRoute circuit using the Azure portal.
 services: expressroute
 author: duongau
-
 ms.service: expressroute
 ms.topic: tutorial
-ms.date: 08/10/2021
+ms.date: 07/15/2022
 ms.author: duau
-ms.custom: seodec18
-
+ms.custom: seodec18, template-tutorial
 ---
 # Tutorial: Connect a virtual network to an ExpressRoute circuit using the portal
@@ -49,8 +47,6 @@ In this tutorial, you learn how to:
 * Review guidance for [connectivity between virtual networks over ExpressRoute](virtual-network-connectivity-guidance.md).
 
-* You can [view a video](https://azure.microsoft.com/documentation/videos/azure-expressroute-how-to-create-a-connection-between-your-vpn-gateway-and-expressroute-circuit) before beginning to better understand the steps.
-
 ## Connect a VNet to a circuit - same subscription
 
 > [!NOTE]
@@ -202,7 +198,9 @@ You can delete a connection and unlink your VNet to an ExpressRoute circuit by s
 ## Next steps
 
-In this tutorial, you learned how to connect a virtual network to a circuit in the same subscription and a different subscription. For more information about the ExpressRoute gateway, see:
+In this tutorial, you learned how to connect a virtual network to a circuit in the same subscription and a different subscription. For more information about ExpressRoute gateways, see: [ExpressRoute virtual network gateways](expressroute-about-virtual-network-gateways.md).
+
+To learn how to configure route filters for Microsoft peering using the Azure portal, advance to the next tutorial.
````
**`articles/healthcare-apis/iot/get-started-with-iot.md`** (2 additions, 2 deletions)

````diff
@@ -12,9 +12,9 @@ ms.custom: mode-api
 # Get started with MedTech service in Azure Health Data Services
 
-This article outlines the basic steps to get started with MedTech service in [Azure Health Data Services](../healthcare-apis-overview.md). MedTech service first processes data that has been sent to an event hub from a medical device, and then saves the data to the Fast Healthcare Interoperability Resources (FHIR®) service as Observation resources. This procedure makes it possible to link the FHIR service Observation to patient and device resources.
+This article outlines the basic steps to get started with Azure MedTech service in [Azure Health Data Services](../healthcare-apis-overview.md). MedTech service ingests health data from a medical device using Azure Event Hubs service. It then persists the data to the Azure Fast Healthcare Interoperability Resources (FHIR®) service as Observation resources. This data processing procedure makes it possible to link FHIR service Observations to patient and device resources.
 
-The following diagram shows the four development steps of the data flow needed to get MedTech service to receive data from a device and send it to FHIR service.
+The following diagram shows the four-step data flow that enables MedTech service to receive data from a device and send it to FHIR service.
 
 - Step 1 introduces the subscription and permissions prerequisites needed.
````
````diff
@@ -40,26 +40,27 @@ The payload for method requests and responses is a JSON document up to 128 KB.
 ## Invoke a direct method from a back-end app
 
-Now, invoke a direct method from a back-end app.
+To invoke a direct method from a back-end app, use the [Invoke device method](/rest/api/iothub/service/devices/invoke-method) REST API or its equivalent in one of the [IoT Hub service SDKs](iot-hub-devguide-sdks.md#azure-iot-hub-service-sdks).
 
 ### Method invocation
 
 Direct method invocations on a device are HTTPS calls that are made up of the following items:
 
-* The *request URI* specific to the device along with the [API version](/rest/api/iothub/service/devices/invokemethod):
+* The *request URI* specific to the device along with the API version:
 
-* *Headers* that contain the authorization, request ID, content type, and content encoding.
+* *Headers* that contain the authorization, content type, and content encoding.
 
 * A transparent JSON *body* in the following format:
 
 ```json
 {
+    "connectTimeoutInSeconds": 200,
     "methodName": "reboot",
     "responseTimeoutInSeconds": 200,
     "payload": {
````
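Putting the pieces above together, here is a minimal Python sketch of how a back-end app might assemble the request URI and JSON body for the REST call. The device-level URI form (without a `/modules/...` segment) and the `2021-04-12` API version are assumptions inferred from the module URI shown later in this article, not authoritative values; the hub and device names are illustrative.

```python
import json

API_VERSION = "2021-04-12"  # illustrative; use the API version your hub supports


def method_request(hub_name, device_id, method_name, payload, module_id=None):
    """Build the (request URI, JSON body) pair for a direct method invocation.

    The module form of the URI appears later in this article; the device form
    (dropping the /modules/... segment) is assumed by analogy.
    """
    base = f"https://{hub_name}.azure-devices.net/twins/{device_id}"
    if module_id:
        base += f"/modules/{module_id}"
    uri = f"{base}/methods?api-version={API_VERSION}"
    body = json.dumps({
        "connectTimeoutInSeconds": 200,   # time allowed for the device to come online
        "methodName": method_name,
        "responseTimeoutInSeconds": 200,  # time allowed for the device to respond
        "payload": payload,
    })
    return uri, body


uri, body = method_request("myhub", "device-1", "reboot", {"delayInSeconds": 10})
```

The returned URI and body would then be sent as an HTTPS POST with an authorization header (for example, a shared access signature).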
````diff
@@ -75,7 +76,7 @@ The value provided as `connectTimeoutInSeconds` in the request is the amount of
 #### Example
 
-This example will allow you to securely initiate a request to invoke a Direct Method on an IoT device registered to an Azure IoT Hub.
+This example will allow you to securely initiate a request to invoke a direct method on an IoT device registered to an Azure IoT hub.
 
 To begin, use the [Microsoft Azure IoT extension for Azure CLI](https://github.com/Azure/azure-iot-cli-extension) to create a SharedAccessSignature.
````
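As background to the CLI step above, the following Python sketch shows the general shape of a shared access signature token (an HMAC-SHA256 over the URL-encoded resource URI and expiry). This is a hedged illustration of the token format, not a replacement for the `az` extension; treat the field layout as an assumption and prefer the CLI or an SDK in practice.

```python
import base64
import hashlib
import hmac
import time
import urllib.parse


def generate_sas_token(resource_uri, key_b64, policy_name, expiry_epoch):
    """Sketch of the SAS token shape:
    SharedAccessSignature sr=<uri>&sig=<sig>&se=<expiry>[&skn=<policy>]
    """
    encoded_uri = urllib.parse.quote(resource_uri, safe="")
    to_sign = f"{encoded_uri}\n{expiry_epoch}".encode()
    sig = base64.b64encode(
        hmac.new(base64.b64decode(key_b64), to_sign, hashlib.sha256).digest()
    ).decode()
    token = (f"SharedAccessSignature sr={encoded_uri}"
             f"&sig={urllib.parse.quote(sig, safe='')}&se={expiry_epoch}")
    if policy_name:
        token += f"&skn={policy_name}"
    return token


# Illustrative values only; a real key comes from your hub's shared access policy.
token = generate_sas_token("myhub.azure-devices.net",
                           base64.b64encode(b"fake-key").decode(),
                           "service", int(time.time()) + 3600)
```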
````diff
@@ -100,14 +101,15 @@ curl -X POST \
 }'
 ```
 
-Execute the modified command to invoke the specified Direct Method. Successful requests will return an HTTP 200 status code.
+Execute the modified command to invoke the specified direct method. Successful requests will return an HTTP 200 status code.
 
 > [!NOTE]
-> The above example demonstrates invoking a Direct Method on a device. If you wish to invoke a Direct Method in an IoT Edge Module, you would need to modify the url request as shown below:
+> The example above demonstrates invoking a direct method on a device. If you want to invoke a direct method in an IoT Edge Module, you would need to modify the URL request as shown below:
````
````diff
@@ -117,7 +119,7 @@ The back-end app receives a response that is made up of the following items:
 * 404 indicates that either the device ID is invalid, or that the device was not online upon invocation of a direct method and for `connectTimeoutInSeconds` thereafter (use the accompanying error message to understand the root cause);
 * 504 indicates gateway timeout caused by the device not responding to a direct method call within `responseTimeoutInSeconds`.
 
-* *Headers* that contain the ETag, request ID, content type, and content encoding.
+* *Headers* that contain the request ID, content type, and content encoding.
 
 * A JSON *body* in the following format:
````
````diff
@@ -128,25 +130,25 @@ The back-end app receives a response that is made up of the following items:
 }
 ```
 
-Both `status` and `body` are provided by the device and used to respond with the device's own status code and/or description.
+Both `status` and `payload` are provided by the device and used to respond with the device's own status code and the method response.
 
 ### Method invocation for IoT Edge modules
 
-Invoking direct methods using a module ID is supported in the [IoT Service Client C# SDK](https://www.nuget.org/packages/Microsoft.Azure.Devices/).
+Invoking direct methods on a module is supported by the [Invoke module method](/rest/api/iothub/service/modules/invoke-method) REST API or its equivalent in one of the IoT Hub service SDKs.
 
-For this purpose, use the `ServiceClient.InvokeDeviceMethodAsync()` method and pass in the `deviceId` and `moduleId` as parameters.
+The `moduleId` is passed along with the `deviceId` in the request URI when using the REST API, or as a parameter when using a service SDK. For example: `https://<iothubName>.azure-devices.net/twins/<deviceId>/modules/<moduleName>/methods?api-version=2021-04-12`. The request body and response are similar to those of direct methods invoked on the device.
 
 ## Handle a direct method on a device
 
-Let's look at how to handle a direct method on an IoT device.
+On an IoT device, direct methods can be received over MQTT, AMQP, or either of these protocols over WebSockets. The [IoT Hub device SDKs](iot-hub-devguide-sdks.md#azure-iot-hub-device-sdks) help you receive and respond to direct methods on devices without having to worry about the underlying protocol details.
 
 ### MQTT
 
-The following section is for the MQTT protocol.
+The following section is for the MQTT protocol. To learn more about using the MQTT protocol directly with IoT Hub, see [MQTT protocol support](iot-hub-mqtt-support.md).
 
 #### Method invocation
 
-Devices receive direct method requests on the MQTT topic: `$iothub/methods/POST/{method name}/?$rid={request id}`. The number of subscriptions per device is limited to 5. It is therefore recommended not to subscribe to each direct method individually. Instead, consider subscribing to `$iothub/methods/POST/#` and then filtering the delivered messages based on your desired method names.
+Devices receive direct method requests on the MQTT topic: `$iothub/methods/POST/{method name}/?$rid={request id}`. However, the `request id` is generated by IoT Hub and cannot be known ahead of time, so subscribe to `$iothub/methods/POST/#` and then filter the delivered messages based on the method names supported by your device. (You'll use the `request id` to respond.)
 
 The body that the device receives is in the following format:
````
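The subscribe-and-filter flow described above can be sketched in plain Python, independent of any particular MQTT client library. The reply topic built by `response_topic` (`$iothub/methods/res/...`) is not shown in this article and is included as an assumption about the conventional response pattern.

```python
import re

# Topic pattern from the text: $iothub/methods/POST/{method name}/?$rid={request id}
METHOD_TOPIC = re.compile(r"^\$iothub/methods/POST/(?P<method>[^/]+)/\?\$rid=(?P<rid>.+)$")


def parse_method_topic(topic):
    """Extract (method name, request id) from a direct-method MQTT topic,
    as you would after subscribing to $iothub/methods/POST/#.
    Returns None for topics that are not direct method requests."""
    m = METHOD_TOPIC.match(topic)
    if not m:
        return None
    return m.group("method"), m.group("rid")


def response_topic(status, rid):
    # Assumed reply-topic pattern; not stated in this article.
    return f"$iothub/methods/res/{status}/?$rid={rid}"


parsed = parse_method_topic("$iothub/methods/POST/reboot/?$rid=42")
```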
````diff
@@ -171,7 +173,7 @@ The body is set by the device and can be any status.
 ### AMQP
 
-The following section is for the AMQP protocol.
+The following section is for the AMQP protocol. To learn more about using the AMQP protocol directly with IoT Hub, see [AMQP protocol support](iot-hub-amqp-support.md).
````
**`articles/machine-learning/component-reference/train-pytorch-model.md`** (4 additions, 4 deletions)

````diff
@@ -50,7 +50,7 @@ Currently, **Train PyTorch Model** component supports both single node and distr
 > In distributed training, to keep gradient descent stable, the actual learning rate is calculated by `lr * torch.distributed.get_world_size()` because the batch size of the process group is the world size times that of a single process.
 > Polynomial learning rate decay is applied and can help result in a better-performing model.
 
-8. For **Random seed**, optionally type an integer value to use as the seed. Using a seed is recommended if you want to ensure reproducibility of the experiment across runs.
+8. For **Random seed**, optionally type an integer value to use as the seed. Using a seed is recommended if you want to ensure reproducibility of the experiment across jobs.
 
 9. For **Patience**, specify how many epochs to early stop training if validation loss does not decrease consecutively. The default is 3.
````
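The learning-rate behavior described in the note above can be sketched in plain Python without initializing `torch.distributed`. The polynomial decay form `(1 - epoch / max_epochs) ** power` is an assumed common formulation; the component's exact schedule is not documented here.

```python
def effective_lr(base_lr, world_size):
    """Scale the learning rate by the process-group world size, mirroring
    lr * torch.distributed.get_world_size() from the note above."""
    return base_lr * world_size


def polynomial_decay(lr0, epoch, max_epochs, power=1.0):
    """One common form of polynomial learning-rate decay (assumed form;
    the component's exact schedule is not documented here)."""
    return lr0 * (1 - epoch / max_epochs) ** power


lr = effective_lr(0.001, world_size=4)  # four processes quadruple the base rate
```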
````diff
@@ -77,12 +77,12 @@ Click on this component 'Metrics' tab and see training metric graphs, such as 'T
 ### How to enable distributed training
 
-To enable distributed training for **Train PyTorch Model** component, you can set in **Run settings** in the right pane of the component. Only **[AML Compute cluster](../how-to-create-attach-compute-cluster.md?tabs=python)** is supported for distributed training.
+To enable distributed training for the **Train PyTorch Model** component, you can set it in **Job settings** in the right pane of the component. Only an **[AML Compute cluster](../how-to-create-attach-compute-cluster.md?tabs=python)** is supported for distributed training.
 
 > [!NOTE]
 > **Multiple GPUs** are required to activate distributed training because the NCCL backend that the Train PyTorch Model component uses needs cuda.
 
-1. Select the component and open the right panel. Expand the **Run settings** section.
+1. Select the component and open the right panel. Expand the **Job settings** section.
 
 [](./media/module/distributed-training-run-setting.png#lightbox)
 
@@ -116,7 +116,7 @@ You can refer to [this article](designer-error-codes.md) for more details about
 ## Results
 
-After pipeline run is completed, to use the model for scoring, connect the [Train PyTorch Model](train-PyTorch-model.md) to [Score Image Model](score-image-model.md), to predict values for new input examples.
+After the pipeline job is completed, to use the model for scoring, connect [Train PyTorch Model](train-PyTorch-model.md) to [Score Image Model](score-image-model.md) to predict values for new input examples.
````
**`articles/machine-learning/component-reference/train-svd-recommender.md`** (1 addition, 1 deletion)

````diff
@@ -68,7 +68,7 @@ From this sample, you can see that a single user has rated several movies.
 ## Results
 
-After pipeline run is completed, to use the model for scoring, connect the [Train SVD Recommender](train-svd-recommender.md) to [Score SVD Recommender](score-svd-recommender.md), to predict values for new input examples.
+After the pipeline job is completed, to use the model for scoring, connect [Train SVD Recommender](train-svd-recommender.md) to [Score SVD Recommender](score-svd-recommender.md) to predict values for new input examples.
````
**`articles/machine-learning/component-reference/train-vowpal-wabbit-model.md`** (3 additions, 3 deletions)

````diff
@@ -65,9 +65,9 @@ The data can be read from two kinds of datasets, file dataset or tabular dataset
 - **VW** represents the internal format used by Vowpal Wabbit. See the [Vowpal Wabbit wiki page](https://github.com/JohnLangford/vowpal_wabbit/wiki/Input-format) for details.
 - **SVMLight** is a format used by some other machine learning tools.
 
-6. **Output readable model file**: select the option if you want the component to save the readable model to the run records. This argument corresponds to the `--readable_model` parameter in the VW command line.
+6. **Output readable model file**: select the option if you want the component to save the readable model to the job records. This argument corresponds to the `--readable_model` parameter in the VW command line.
 
-7. **Output inverted hash file**: select the option if you want the component to save the inverted hashing function to one file in the run records. This argument corresponds to the `--invert_hash` parameter in the VW command line.
+7. **Output inverted hash file**: select the option if you want the component to save the inverted hashing function to one file in the job records. This argument corresponds to the `--invert_hash` parameter in the VW command line.
 
 8. Submit the pipeline.
@@ -83,7 +83,7 @@ Vowpal Wabbit supports incremental training by adding new data to an existing mo
 2. Connect the previously trained model to the **Pre-trained Vowpal Wabbit Model** input port of the component.
 3. Connect the new training data to the **Training data** input port of the component.
 4. In the parameters pane of **Train Vowpal Wabbit Model**, specify the format of the new training data, and also the training data file name if the input dataset is a directory.
-5. Select the **Output readable model file** and **Output inverted hash file** options if the corresponding files need to be saved in the run records.
+5. Select the **Output readable model file** and **Output inverted hash file** options if the corresponding files need to be saved in the job records.
 
 6. Submit the pipeline.
 7. Select the component and select **Register dataset** under the **Outputs+logs** tab in the right pane, to preserve the updated model in your Azure Machine Learning workspace. If you don't specify a new name, the updated model overwrites the existing saved model.
````