articles/machine-learning/tutorial-1st-experiment-sdk-train.md

In this tutorial, you learn the following tasks:

> [!div class="checklist"]
> * Connect your workspace and create an experiment
> * Load data and train scikit-learn models
> * View training results in the studio
> * Retrieve the best model

## Prerequisites
The above code accomplishes the following:

1. For each alpha hyperparameter value in the `alphas` array, a new run is created within the experiment. The alpha value is logged to differentiate between each run.
1. In each run, a Ridge model is instantiated, trained, and used to run predictions. The root-mean-squared-error is calculated for the actual versus predicted values, and then logged to the run. At this point the run has metadata attached for both the alpha value and the rmse accuracy.
1. Next, the model for each run is serialized and uploaded to the run. This allows you to download the model file from the run in the studio.
1. At the end of each iteration the run is completed by calling `run.complete()`.
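The loop these steps describe can be sketched in a self-contained way. In this sketch a one-feature ridge fit stands in for scikit-learn's `Ridge`, and the Azure ML run-logging calls (`start_logging`, `run.log`, `run.complete`) appear only as comments because they need a live workspace to execute; the data and alpha values are made up for illustration.

```python
import math
import random

# Synthetic data for illustration: y ≈ 2x plus a little noise.
random.seed(0)
xs = [random.uniform(-1, 1) for _ in range(200)]
ys = [2.0 * x + random.gauss(0, 0.1) for x in xs]

alphas = [0.1, 0.5, 1.0]
results = {}
for alpha in alphas:
    # run = experiment.start_logging()   # one run per alpha value
    # run.log('alpha_value', alpha)
    # One-feature ridge fit in closed form: w = Σxy / (Σx² + α).
    w = sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + alpha)
    rmse = math.sqrt(sum((y - w * x) ** 2 for x, y in zip(xs, ys)) / len(xs))
    # run.log('rmse', rmse)
    # run.complete()                     # mark the run as finished
    results[alpha] = rmse

print(results)
```

Note that the training RMSE grows as alpha grows, since a larger penalty shrinks the fit away from the least-squares solution.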
After the training has completed, call the `experiment` variable to fetch a link to the experiment in the studio.
```python
experiment
```
<table style="width:100%"><tr><th>Name</th><th>Workspace</th><th>Report Page</th><th>Docs Page</th></tr><tr><td>diabetes-experiment</td><td>your-workspace-name</td><td>Link to Azure Machine Learning studio</td><td>Link to Documentation</td></tr></table>
## View training results in studio
Following the **Link to Azure Machine Learning studio** takes you to the main experiment page. Here you see all the individual runs in the experiment. Any custom-logged values (`alpha_value` and `rmse`, in this case) become fields for each run, and also become available for the charts and tiles at the top of the experiment page. To add a logged metric to a chart or tile, hover over it, click the edit button, and find your custom-logged metric.
When training models at scale over hundreds and thousands of separate runs, this page makes it easy to see every model you trained, specifically how they were trained, and how your unique metrics have changed over time.
:::image type="content" source="./media/tutorial-1st-experiment-sdk-train/experiment-main.png" alt-text="Main Experiment page in the studio.":::
Select a run number link in the `RUN NUMBER` column to see the page for an individual run. The default tab **Details** shows you more-detailed information on each run. Navigate to the **Outputs + logs** tab, and you see the `.pkl` file for the model that was uploaded to the run during each training iteration. Here you can download the model file, rather than having to retrain it manually.
:::image type="content" source="./media/tutorial-1st-experiment-sdk-train/model-download.png" alt-text="Run details page in the studio.":::
## Get the best model
In addition to being able to download model files from the experiment in the studio, you can also download them programmatically. The following code iterates through each run in the experiment, and accesses both the logged run metrics and the run details (which contains the run_id). This keeps track of the best run, in this case the run with the lowest root-mean-squared-error.
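That selection logic can be sketched with plain Python. Hypothetical run records stand in here for the logged metrics and run details returned by `experiment.get_runs()`; the run IDs and metric values are made up for illustration.

```python
# Hypothetical run records standing in for the logged metrics and run
# details returned by experiment.get_runs(); the values are invented.
runs = [
    {"run_id": "run-1", "alpha_value": 0.1, "rmse": 57.2},
    {"run_id": "run-2", "alpha_value": 0.5, "rmse": 56.6},
    {"run_id": "run-3", "alpha_value": 1.0, "rmse": 57.9},
]

minimum_rmse_runid = None
minimum_rmse = None
for run in runs:
    # keep the run with the lowest root-mean-squared-error seen so far
    if minimum_rmse is None or run["rmse"] < minimum_rmse:
        minimum_rmse = run["rmse"]
        minimum_rmse_runid = run["run_id"]

print("Best run_id:", minimum_rmse_runid)
print("Best run_id rmse:", minimum_rmse)
```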
```python
minimum_rmse_runid = None
```
In this tutorial, you did the following tasks:

> [!div class="checklist"]
> * Connected your workspace and created an experiment
> * Loaded data and trained scikit-learn models
> * Viewed training results in the studio and retrieved models
[Deploy your model](tutorial-deploy-models-with-aml.md) with Azure Machine Learning.
Learn how to develop [automated machine learning](tutorial-auto-train-models.md) experiments.
This article lists some of the most common Microsoft Azure Media Services limits, which are also sometimes called quotas.
> [!NOTE]
> For resources that aren't fixed, open a support ticket to ask for an increase in the quotas. Don't create additional Azure Media Services accounts in an attempt to obtain higher limits.

## Account limits
| Resource | Default Limit |
| --- | --- |
|[Media Services accounts](media-services-account-concept.md) in a single subscription | 25 (fixed) |
## Asset limits
| Resource | Default Limit |
| --- | --- |
|[Assets](assets-concept.md) per Media Services account | 1,000,000|
## Storage limits
| Resource | Default Limit |
| --- | --- |
| File size| In some scenarios, there is a limit on the maximum file size supported for processing in Media Services. <sup>(1)</sup> |
<sup>1</sup> The maximum size supported for a single blob is currently up to 5 TB in Azure Blob Storage. Additional limits apply in Media Services based on the VM sizes that are used by the service. The size limit applies to the files that you upload and also to the files that get generated as a result of Media Services processing (encoding or analyzing). If your source file is larger than 260 GB, your Job will likely fail.
The following table shows the limits on the media reserved units S1, S2, and S3. If your source file is larger than the limits defined in the table, your encoding job fails. If you encode 4K resolution sources of long duration, you're required to use S3 media reserved units to achieve the performance needed. If you have 4K content that's larger than the 260-GB limit on the S3 media reserved units, open a support ticket.
|Media reserved unit type|Maximum input size (GB)|
|---|---|
|S1 | 26|
|S2 | 60|
|S3 |260|

<sup>2</sup> The storage accounts must be from the same Azure subscription.
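As a quick illustration of how the input-size limits in the table apply, the following hypothetical helper picks the smallest reserved unit type whose limit covers a file. It selects by size alone; as noted above, long-duration 4K sources require S3 regardless of size, and content above the S3 limit needs a support ticket.

```python
# Input-size limits (GB) from the media reserved units table above.
MAX_INPUT_GB = {"S1": 26, "S2": 60, "S3": 260}


def pick_reserved_unit(input_size_gb):
    """Return the smallest reserved unit type whose input-size limit
    covers the file, or None if it exceeds every limit (open a support
    ticket in that case)."""
    for unit, limit in MAX_INPUT_GB.items():
        if input_size_gb <= limit:
            return unit
    return None


print(pick_reserved_unit(30))   # S2
print(pick_reserved_unit(300))  # None
```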
## Jobs (encoding & analyzing) limits
| Resource | Default Limit |
| --- | --- |
|[Jobs](transforms-jobs-concept.md) per Media Services account | 500,000 <sup>(3)</sup> (fixed)|
| Job inputs per Job | 50 (fixed)|
| Job outputs per Job | 20 (fixed) |
|[Transforms](transforms-jobs-concept.md) per Media Services account | 100 (fixed)|
| Transform outputs in a Transform | 20 (fixed) |
| Files per job input|10 (fixed)|

<sup>3</sup> This number includes queued, finished, active, and canceled Jobs. It does not include deleted Jobs.
Any Job record in your account older than 90 days will be automatically deleted, even if the total number of records is below the maximum quota.
## Live streaming limits
| Resource | Default Limit |
| --- | --- |
|[Live Events](live-events-outputs-concept.md) <sup>(4)</sup> per Media Services account |5|
| Live Outputs per Live Event |3 <sup>(5)</sup> |
| Max Live Output duration | 25 hours |
<sup>4</sup> For detailed information about Live Event limitations, see [Live Event types comparison and limitations](live-event-types-comparison.md).
<sup>5</sup> Live Outputs start on creation and stop when deleted.
## Packaging & delivery limits
| Resource | Default Limit |
| --- | --- |
|[Streaming Endpoints](streaming-endpoint-concept.md) (stopped or running) per Media Services account|2 (fixed)|
| Unique [Streaming Locators](streaming-locators-concept.md) associated with an Asset at one time | 100<sup>(7)</sup> (fixed) |
<sup>6</sup> When using a custom [Streaming Policy](https://docs.microsoft.com/rest/api/media/streamingpolicies), design a limited set of such policies for your Media Services account, and reuse them for your Streaming Locators whenever the same encryption options and protocols are needed. Don't create a new Streaming Policy for each Streaming Locator.
<sup>7</sup> Streaming Locators are not designed for managing per-user access control. To give different access rights to individual users, use Digital Rights Management (DRM) solutions.
## Protection limits
| Resource | Default Limit |
| --- | --- |
| Options per [Content Key Policy](content-key-policy-concept.md)|30 |
| Licenses per month for each of the DRM types on Media Services key delivery service per account|1,000,000|
## Support ticket
For resources that are not fixed, you can ask for the quotas to be raised by opening a [support ticket](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest). Include detailed information in the request about the desired quota changes, use-case scenarios, and regions required. Do **not** create additional Azure Media Services accounts in an attempt to obtain higher limits.
>Opening these URLs is essential for a reliable Windows Virtual Desktop deployment. Blocking access to these URLs is unsupported and will affect service functionality. These URLs only correspond to Windows Virtual Desktop sites and resources, and don't include URLs for other services like Azure Active Directory.
The following table lists optional URLs that your Azure virtual machines can have access to:
|Address|Outbound TCP port|Purpose|Service Tag|
|---|---|---|---|
|*.microsoftonline.com|443|Authentication to MS Online Services|None|
>Windows Virtual Desktop currently doesn't have a list of IP address ranges that you can whitelist to allow network traffic. We only support whitelisting specific URLs at this time.
>
>For a list of Office-related URLs, including required Azure Active Directory-related URLs, see [Office 365 URLs and IP address ranges](/office365/enterprise/urls-and-ip-address-ranges).
>
>You must use the wildcard character (*) for URLs involving service traffic. If you prefer not to use * for agent-related traffic, here's how to find the URLs without wildcards:
>
>1. Register your virtual machines to the Windows Virtual Desktop host pool.
The following Remote Desktop clients support Windows Virtual Desktop:

The Remote Desktop clients must have access to the following URLs:
|Address|Outbound TCP port|Purpose|Client(s)|
|---|---|---|---|
|*.wvd.microsoft.com|443|Service traffic|All|
|*.servicebus.windows.net|443|Troubleshooting data|All|
|go.microsoft.com|443|Microsoft FWLinks|All|
|aka.ms|443|Microsoft URL shortener|All|
|docs.microsoft.com|443|Documentation|All|
|privacy.microsoft.com|443|Privacy statement|All|
|query.prod.cms.rt.microsoft.com|443|Client updates|Windows Desktop|

>Opening these URLs is essential for a reliable client experience. Blocking access to these URLs is unsupported and will affect service functionality. These URLs only correspond to the client sites and resources, and don't include URLs for other services like Azure Active Directory.
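To illustrate how the wildcard entries in the tables above behave, here is a small stdlib sketch that checks a hostname against allowlist patterns. The hostnames are made-up examples, and this checks string matching only, not actual network reachability.

```python
from fnmatch import fnmatch

# A few illustrative allowlist patterns drawn from the tables above.
ALLOWLIST = [
    "*.wvd.microsoft.com",
    "*.servicebus.windows.net",
    "go.microsoft.com",
    "aka.ms",
]


def is_allowed(hostname):
    """Return True if the hostname matches any allowlist pattern,
    treating * as a shell-style wildcard."""
    return any(fnmatch(hostname, pattern) for pattern in ALLOWLIST)


print(is_allowed("rdbroker.wvd.microsoft.com"))  # True
print(is_allowed("example.com"))                 # False
```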