Commit fb9afaa

Merge pull request #108817 from MicrosoftDocs/repo_sync_working_branch
Confirm merge from repo_sync_working_branch to master to sync with https://github.com/Microsoft/azure-docs (branch master)
2 parents 41f1ed9 + 262e587 commit fb9afaa

10 files changed (+152, -69 lines)

articles/azure-databricks/databricks-extract-load-sql-data-warehouse.md

Lines changed: 2 additions & 2 deletions
@@ -149,13 +149,13 @@ In this section, you create a notebook in Azure Databricks workspace and then ru

    ```scala
    val appID = "<appID>"
-   val password = "<password>"
+   val secret = "<secret>"
    val tenantID = "<tenant-id>"

    spark.conf.set("fs.azure.account.auth.type", "OAuth")
    spark.conf.set("fs.azure.account.oauth.provider.type", "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider")
    spark.conf.set("fs.azure.account.oauth2.client.id", "<appID>")
-   spark.conf.set("fs.azure.account.oauth2.client.secret", "<password>")
+   spark.conf.set("fs.azure.account.oauth2.client.secret", "<secret>")
    spark.conf.set("fs.azure.account.oauth2.client.endpoint", "https://login.microsoftonline.com/<tenant-id>/oauth2/token")
    spark.conf.set("fs.azure.createRemoteFileSystemDuringInitialization", "true")
    ```
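The renamed placeholder is a service-principal client secret used in the OAuth 2.0 client-credentials flow. As a sketch outside the diff — a hypothetical helper, not part of the notebook (which applies the same keys in Scala) — the settings can be built as plain key-value pairs and applied to any Spark session:

```python
def adls_oauth_conf(app_id: str, secret: str, tenant_id: str) -> dict:
    """Build the fs.azure OAuth settings for ADLS Gen2 (client-credentials flow).

    Hypothetical helper; the keys mirror the spark.conf.set calls in the notebook.
    """
    return {
        "fs.azure.account.auth.type": "OAuth",
        "fs.azure.account.oauth.provider.type":
            "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider",
        "fs.azure.account.oauth2.client.id": app_id,
        "fs.azure.account.oauth2.client.secret": secret,
        "fs.azure.account.oauth2.client.endpoint":
            f"https://login.microsoftonline.com/{tenant_id}/oauth2/token",
        "fs.azure.createRemoteFileSystemDuringInitialization": "true",
    }

conf = adls_oauth_conf("<appID>", "<secret>", "<tenant-id>")
# With a live SparkSession you would apply these via:
#     for k, v in conf.items():
#         spark.conf.set(k, v)
```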

articles/cost-management-billing/reservations/prepare-buy-reservation.md

Lines changed: 2 additions & 12 deletions
@@ -5,7 +5,7 @@ author: bandersmsft
  ms.reviewer: yashar
  ms.service: cost-management-billing
  ms.topic: conceptual
- ms.date: 03/22/2020
+ ms.date: 03/24/2020
  ms.author: banders
  ---

@@ -27,7 +27,7 @@ You can scope a reservation to a subscription or resource groups. Setting the sc

  ### Reservation scoping options

- With resource group scoping you have three options to scope a reservation, depending on your needs:
+ You have three options to scope a reservation, depending on your needs:

  - **Single resource group scope**—Applies the reservation discount to the matching resources in the selected resource group only.
  - **Single subscription scope**—Applies the reservation discount to the matching resources in the selected subscription.
@@ -41,16 +41,6 @@ While applying reservation discounts on your usage, Azure processes the reservat

  A single resource group can get reservation discounts from multiple reservations, depending on how you scope your reservations.

- ### Scope a reservation to a resource group
-
- You can scope the reservation to a resource group when you buy the reservation, or you set the scope after purchase. You must be a subscription owner to scope the reservation to a resource group.
-
- To set the scope, go to the [Purchase reservation](https://ms.portal.azure.com/#blade/Microsoft\_Azure\_Reservations/CreateBlade/referrer/Browse\_AddCommand) page in the Azure portal. Select the reservation type that you want to buy. On the **Select the product that you want to purchase** selection form, change the Scope value to Single resource group. Then, select a resource group.
-
- ![Example showing VM reservation purchase selection](./media/prepare-buy-reservation/select-product-to-purchase.png)
-
- Purchase recommendations for the resource group in the virtual machine reservation are shown. Recommendations are calculated by analyzing your usage over the last 30 days. A purchase recommendation is made if the cost of running resources with reserved instances is cheaper than the cost of running resources with pay-as-you-go rates. For more information about reservation purchase recommendations, see [Get Reserved Instance purchase recommendations based on usage pattern](https://azure.microsoft.com/blog/get-usage-based-reserved-instance-recommendations).
-
  You can always update the scope after you buy a reservation. To do so, go to the reservation, click **Configuration**, and rescope the reservation. Rescoping a reservation isn't a commercial transaction. Your reservation term isn't changed. For more information about updating the scope, see [Update the scope after you purchase a reservation](manage-reserved-vm-instance.md#change-the-reservation-scope).

  ![Example showing a reservation scope change](./media/prepare-buy-reservation/rescope-reservation-resource-group.png)

articles/governance/policy/concepts/definition-structure.md

Lines changed: 2 additions & 2 deletions
@@ -80,7 +80,7 @@ are:
  - `indexed`: only evaluate resource types that support tags and location

  For example, resource `Microsoft.Network/routeTables` supports tags and location and is evaluated in
- both modes. However, resource `Microsoft.Network/routeTables/routes` can't be tagged isn't evaluated
+ both modes. However, resource `Microsoft.Network/routeTables/routes` can't be tagged and isn't evaluated
  in `Indexed` mode.

  We recommend that you set **mode** to `all` in most cases. All policy definitions created through
@@ -855,7 +855,7 @@ tagging policy definitions into a single initiative. Rather than assigning each
  you apply the initiative.

  > [!NOTE]
- > Once an initiative is assigned, initative level parameters can't be altered. Due to this, the
+ > Once an initiative is assigned, initiative level parameters can't be altered. Due to this, the
  > recommendation is to set a **defaultValue** when defining the parameter.

  The following example illustrates how to create an initiative for handling two tags: `costCenter`
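The `mode` and parameter `defaultValue` properties discussed in these hunks both live at fixed places in a policy definition. A minimal illustrative definition (field values are examples, not taken from the commit) can be sketched as a plain dictionary:

```python
import json

# Minimal illustrative Azure Policy definition: mode set to "all" (the
# recommended default) and a parameter that carries a defaultValue, as the
# NOTE above recommends for initiative-level parameters.
policy_definition = {
    "mode": "all",
    "parameters": {
        "tagName": {
            "type": "String",
            "defaultValue": "costCenter",  # example value, not from the commit
        }
    },
    "policyRule": {
        "if": {
            "field": "[concat('tags[', parameters('tagName'), ']')]",
            "exists": "false",
        },
        "then": {"effect": "audit"},
    },
}

print(json.dumps(policy_definition, indent=2))
```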

articles/governance/policy/concepts/rego-for-aks.md

Lines changed: 2 additions & 2 deletions
@@ -184,7 +184,7 @@ The Azure Policy language structure for managing Kubernetes follows that of exis
  effect _EnforceOPAConstraint_ is used to manage your Kubernetes clusters and takes details
  properties specific to working with
  [OPA Constraint Framework](https://github.com/open-policy-agent/frameworks/tree/master/constraint)
- and Gatekeeper v3. For details and examples, see the
+ and Gatekeeper v3. For details and examples, see the
  [EnforceOPAConstraint](./effects.md#enforceopaconstraint) effect.

  As part of the _details.constraintTemplate_ and _details.constraint_ properties in the policy
@@ -315,4 +315,4 @@ collected:
  - Understand how to [programmatically create policies](../how-to/programmatically-create.md).
  - Learn how to [get compliance data](../how-to/get-compliance-data.md).
  - Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md).
- - Review what a management group is with [Organize your resources with Azure management groups](../../management-groups/overview.md).
+ - Review what a management group is with [Organize your resources with Azure management groups](../../management-groups/overview.md).

articles/iot-edge/how-to-edgeagent-direct-method.md

Lines changed: 3 additions & 1 deletion
@@ -17,9 +17,11 @@ Monitor and manage IoT Edge deployments by using the direct methods included in

  For more information about direct methods, how to use them, and how to implement them in your own modules, see [Understand and invoke direct methods from IoT Hub](../iot-hub/iot-hub-devguide-direct-methods.md).

+ The names of these direct methods are handled case-insensitive.
+
  ## Ping

- The **ping** method is useful for checking whether IoT Edge is running on a device, or whether the device has an open connection to ioT Hub. Use this direct method to ping the IoT Edge agent and get its status. A successful ping returns an empty payload and **"status": 200**.
+ The **ping** method is useful for checking whether IoT Edge is running on a device, or whether the device has an open connection to IoT Hub. Use this direct method to ping the IoT Edge agent and get its status. A successful ping returns an empty payload and **"status": 200**.

  For example:
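The case-insensitive name handling the added sentence describes can be sketched in pure Python — a toy dispatcher for illustration only, not the actual edgeAgent implementation:

```python
# Toy dispatcher illustrating case-insensitive direct-method name handling.
def make_dispatcher(methods):
    # Normalize registered names once, so lookups ignore case.
    table = {name.lower(): fn for name, fn in methods.items()}

    def invoke(name, payload=None):
        fn = table.get(name.lower())
        if fn is None:
            return {"status": 404, "payload": f"method '{name}' not found"}
        return fn(payload)

    return invoke

# A successful ping returns an empty payload and status 200, as described above.
invoke = make_dispatcher({"ping": lambda payload: {"status": 200, "payload": None}})

print(invoke("ping"))  # {'status': 200, 'payload': None}
print(invoke("Ping"))  # same result: the name is matched case-insensitively
```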

articles/load-balancer/load-balancer-ha-ports-overview.md

Lines changed: 0 additions & 1 deletion
@@ -92,7 +92,6 @@ You can configure *one* public Standard Load Balancer resource for the back-end
  - HA ports load-balancing rules are available only for internal Standard Load Balancer.
  - The combining of an HA ports load-balancing rule and a non-HA ports load-balancing rule is not supported.
  - Existing IP fragments will be forwarded by HA Ports load-balancing rules to same destination as first packet. IP fragmenting a UDP or TCP packet is not supported.
- - The HA ports load-balancing rules are not available for IPv6.
  - Flow symmetry (primarily for NVA scenarios) is supported with backend instance and a single NIC (and single IP configuration) only when used as shown in the diagram above and using HA Ports load-balancing rules. It is not provided in any other scenario. This means that two or more Load Balancer resources and their respective rules make independent decisions and are never coordinated. See the description and diagram for [network virtual appliances](#nva). When you are using multiple NICs or sandwiching the NVA between a public and internal Load Balancer, flow symmetry is not available. You may be able to work around this by source NAT'ing the ingress flow to the IP of the appliance to allow replies to arrive on the same NVA. However, we strongly recommend using a single NIC and using the reference architecture shown in the diagram above.


articles/load-balancer/tutorial-load-balancer-standard-manage-portal.md

Lines changed: 2 additions & 2 deletions
@@ -120,9 +120,9 @@ In this section you'll need to replace the following parameters in the steps wit
  | **\<resource-group-name>** | myResourceGroupSLB (Select existing resource group) |
  | **\<virtual-network-name>** | myVNet |
  | **\<region-name>** | West Europe |
- | **\<IPv4-address-space>** | 10.1.0.0\16 |
+ | **\<IPv4-address-space>** | 10.1.0.0/16 |
  | **\<subnet-name>** | mySubnet |
- | **\<subnet-address-range>** | 10.1.0.0\24 |
+ | **\<subnet-address-range>** | 10.1.0.0/24 |

  [!INCLUDE [virtual-networks-create-new](../../includes/virtual-networks-create-new.md)]
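The corrected values use CIDR notation (forward slash, with the suffix giving the prefix length). A quick check with Python's standard `ipaddress` module confirms the tutorial's subnet sits inside its address space:

```python
import ipaddress

vnet = ipaddress.ip_network("10.1.0.0/16")    # virtual network address space
subnet = ipaddress.ip_network("10.1.0.0/24")  # subnet address range

# The /24 subnet must fall inside the /16 address space.
print(subnet.subnet_of(vnet))  # True
print(vnet.num_addresses)      # 65536 addresses in the /16
```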

articles/machine-learning/how-to-configure-auto-train.md

Lines changed: 37 additions & 36 deletions
@@ -77,6 +77,7 @@ Requirements for training data:
  The following code examples demonstrate how to store the data in these formats.

  * TabularDataset
+
    ```python
    from azureml.core.dataset import Dataset
    from azureml.opendatasets import Diabetes
@@ -88,14 +89,14 @@ The following code examples demonstrate how to store the data in these formats.

  * Pandas dataframe

-   ```python
-   import pandas as pd
-   from sklearn.model_selection import train_test_split
+   ```python
+   import pandas as pd
+   from sklearn.model_selection import train_test_split

-   df = pd.read_csv("your-local-file.csv")
-   train_data, test_data = train_test_split(df, test_size=0.1, random_state=42)
-   label = "label-col-name"
-   ```
+   df = pd.read_csv("your-local-file.csv")
+   train_data, test_data = train_test_split(df, test_size=0.1, random_state=42)
+   label = "label-col-name"
+   ```

  ## Fetch data for running experiment on remote compute

@@ -125,14 +126,14 @@ Use custom validation dataset if random split is not acceptable, usually time se
  ## Compute to run experiment

  Next determine where the model will be trained. An automated machine learning training experiment can run on the following compute options:
- * Your local machine such as a local desktop or laptop – Generally when you have small dataset and you are still in the exploration stage.
- * A remote machine in the cloud – [Azure Machine Learning Managed Compute](concept-compute-target.md#amlcompute) is a managed service that enables the ability to train machine learning models on clusters of Azure virtual machines.
+ * Your local machine such as a local desktop or laptop – Generally when you have small dataset and you are still in the exploration stage.
+ * A remote machine in the cloud – [Azure Machine Learning Managed Compute](concept-compute-target.md#amlcompute) is a managed service that enables the ability to train machine learning models on clusters of Azure virtual machines.

-   See this [GitHub site](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning) for examples of notebooks with local and remote compute targets.
+   See this [GitHub site](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning) for examples of notebooks with local and remote compute targets.

- * An Azure Databricks cluster in your Azure subscription. You can find more details here - [Setup Azure Databricks cluster for Automated ML](how-to-configure-environment.md#azure-databricks)
+ * An Azure Databricks cluster in your Azure subscription. You can find more details here - [Setup Azure Databricks cluster for Automated ML](how-to-configure-environment.md#azure-databricks)

-   See this [GitHub site](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/azure-databricks/automl) for examples of notebooks with Azure Databricks.
+   See this [GitHub site](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/azure-databricks/automl) for examples of notebooks with Azure Databricks.

  <a name='configure-experiment'></a>

@@ -142,30 +143,30 @@ There are several options that you can use to configure your automated machine l

  Some examples include:

- 1. Classification experiment using AUC weighted as the primary metric with experiment timeout minutes set to 30 minutes and 2 cross-validation folds.
-
-    ```python
-    automl_classifier=AutoMLConfig(
-        task='classification',
-        primary_metric='AUC_weighted',
-        experiment_timeout_minutes=30,
-        blacklist_models=['XGBoostClassifier'],
-        training_data=train_data,
-        label_column_name=label,
-        n_cross_validations=2)
-    ```
- 2. Below is an example of a regression experiment set to end after 60 minutes with five validation cross folds.
-
-    ```python
-    automl_regressor = AutoMLConfig(
-        task='regression',
-        experiment_timeout_minutes=60,
-        whitelist_models=['kNN regressor'],
-        primary_metric='r2_score',
-        training_data=train_data,
-        label_column_name=label,
-        n_cross_validations=5)
-    ```
+ 1. Classification experiment using AUC weighted as the primary metric with experiment timeout minutes set to 30 minutes and 2 cross-validation folds.
+
+    ```python
+    automl_classifier=AutoMLConfig(
+        task='classification',
+        primary_metric='AUC_weighted',
+        experiment_timeout_minutes=30,
+        blacklist_models=['XGBoostClassifier'],
+        training_data=train_data,
+        label_column_name=label,
+        n_cross_validations=2)
+    ```
+ 2. Below is an example of a regression experiment set to end after 60 minutes with five validation cross folds.
+
+    ```python
+    automl_regressor = AutoMLConfig(
+        task='regression',
+        experiment_timeout_minutes=60,
+        whitelist_models=['kNN regressor'],
+        primary_metric='r2_score',
+        training_data=train_data,
+        label_column_name=label,
+        n_cross_validations=5)
+    ```

  The three different `task` parameter values (the third task-type is `forecasting`, and uses a similar algorithm pool as `regression` tasks) determine the list of models to apply. Use the `whitelist` or `blacklist` parameters to further modify iterations with the available models to include or exclude. The list of supported models can be found on [SupportedModels Class](https://docs.microsoft.com/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels) for ([Classification](https://docs.microsoft.com/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.classification), [Forecasting](https://docs.microsoft.com/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.forecasting), and [Regression](https://docs.microsoft.com/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.regression)).

articles/stream-analytics/stream-analytics-twitter-sentiment-analysis-trends.md

Lines changed: 4 additions & 10 deletions
@@ -34,7 +34,7 @@ In this how-to guide, you use a client application that connects to Twitter and

  * The TwitterClientCore application, which reads the Twitter feed. To get this application, download [TwitterClientCore](https://github.com/Azure/azure-stream-analytics/tree/master/DataGenerators/TwitterClientCore).

- * Install the [.NET Core CLI](https://docs.microsoft.com/dotnet/core/tools/?tabs=netcore2x).
+ * Install the [.NET Core CLI](https://docs.microsoft.com/dotnet/core/tools/?tabs=netcore2x) version 2.1.0.

  ## Create an event hub for streaming input

@@ -89,12 +89,6 @@ Before a process can send data to an event hub, the event hub needs a policy tha
  > [!NOTE]
  > For security, parts of the connection string in the example have been removed.

- 8. In the text editor, remove the `EntityPath` pair from the connection string (don't forget to remove the semicolon that precedes it). When you're done, the connection string looks like this:
-
-    ```
-    Endpoint=sb://EVENTHUBS-NAMESPACE.servicebus.windows.net/;SharedAccessKeyName=socialtwitter-access;SharedAccessKey=Gw2NFZw6r...FxKbXaC2op6a0ZsPkI=
-    ```
-
  ## Configure and start the Twitter client application

  The client application gets tweet events directly from Twitter. In order to do so, it needs permission to call the Twitter Streaming APIs. To configure that permission, you create an application in Twitter, which generates unique credentials (such as an OAuth token). You can then configure the client application to use these credentials when it makes API calls.
@@ -105,7 +99,7 @@ If you do not already have a Twitter application that you can use for this how-t
  > [!NOTE]
  > The exact process in Twitter for creating an application and getting the keys, secrets, and token might change. If these instructions don't match what you see on the Twitter site, refer to the Twitter developer documentation.

- 1. From a web browser, go to [Twitter For Developers](https://developer.twitter.com/en/apps), and select **Create an app**. You might see a message saying that you need to apply for a Twitter developer account. Feel free to do so, and after your application has been approved, you should see a confirmation email. It could take several days to be approved for a developer account.
+ 1. From a web browser, go to [Twitter For Developers](https://developer.twitter.com/en/apps), create a developer account, and select **Create an app**. You might see a message saying that you need to apply for a Twitter developer account. Feel free to do so, and after your application has been approved, you should see a confirmation email. It could take several days to be approved for a developer account.

  ![Twitter application details](./media/stream-analytics-twitter-sentiment-analysis-trends/provide-twitter-app-details.png "Twitter application details")

@@ -134,7 +128,7 @@ Before the application runs, it requires certain information from you, like the
  * Set `oauth_consumer_secret` to the Twitter Consumer Secret (API secret key).
  * Set `oauth_token` to the Twitter Access token.
  * Set `oauth_token_secret` to the Twitter Access token secret.
- * Set `EventHubNameConnectionString` to the connection string. Make sure that you use the connection string that you removed the `EntityPath` key-value pair from.
+ * Set `EventHubNameConnectionString` to the connection string.
  * Set `EventHubName` to the event hub name (that is the value of the entity path).

  3. Open the command line and navigate to the directory where your TwitterClientCore app is located. Use the command `dotnet build` to build the project. Then use the command `dotnet run` to run the app. The app sends Tweets to your Event Hub.
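The `EntityPath` mentioned in these hunks is one key-value pair in the semicolon-separated Event Hub connection string. A small Python sketch (illustrative only, with placeholder values — the client app itself is C#) shows how the entity path relates to the rest of the string:

```python
# Hypothetical connection string with placeholder values.
conn_str = (
    "Endpoint=sb://EVENTHUBS-NAMESPACE.servicebus.windows.net/;"
    "SharedAccessKeyName=socialtwitter-access;"
    "SharedAccessKey=<key>;"
    "EntityPath=socialtwitter-eh"
)

# Parse the semicolon-separated key=value pairs (split only on the first '=',
# since key values such as the endpoint URL may themselves contain '=').
parts = dict(p.split("=", 1) for p in conn_str.split(";") if p)

# EntityPath is the event hub name referred to in the bullet list above.
event_hub_name = parts.get("EntityPath")
print(event_hub_name)  # socialtwitter-eh
```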
@@ -233,4 +227,4 @@ For further assistance, try our [Azure Stream Analytics forum](https://social.ms
  * [Get started using Azure Stream Analytics](stream-analytics-real-time-fraud-detection.md)
  * [Scale Azure Stream Analytics jobs](stream-analytics-scale-jobs.md)
  * [Azure Stream Analytics Query Language Reference](https://docs.microsoft.com/stream-analytics-query/stream-analytics-query-language-reference)
- * [Azure Stream Analytics Management REST API Reference](https://msdn.microsoft.com/library/azure/dn835031.aspx)
+ * [Azure Stream Analytics Management REST API Reference](https://msdn.microsoft.com/library/azure/dn835031.aspx)
