
Commit ba0d852 (merge commit, 2 parents: c58dd23 + 5aa5d46)

48 files changed: +351 additions, -652 deletions


.openpublishing.redirection.json

Lines changed: 30 additions & 0 deletions
```diff
@@ -47339,6 +47339,36 @@
       "source_path": "articles/load-balancer/create-load-balancer-rest-api.md",
       "redirect_url": "/rest/api/load-balancer/loadbalancers/createorupdate",
       "redirect_document_id": false
+    },
+    {
+      "source_path": "articles/load-balancer/load-balancer-get-started-internet-az-portal.md",
+      "redirect_url": "/azure/load-balancer/quickstart-load-balancer-standard-public-portal",
+      "redirect_document_id": false
+    },
+    {
+      "source_path": "articles/load-balancer/load-balancer-get-started-internet-az-cli.md",
+      "redirect_url": "/azure/load-balancer/quickstart-load-balancer-standard-public-cli",
+      "redirect_document_id": false
+    },
+    {
+      "source_path": "articles/load-balancer/load-balancer-get-started-internet-az-powershell.md",
+      "redirect_url": "/azure/load-balancer/quickstart-create-standard-load-balancer-powershell",
+      "redirect_document_id": false
+    },
+    {
+      "source_path": "articles/load-balancer/load-balancer-get-started-internet-availability-zones-zonal-portal.md",
+      "redirect_url": "/azure/load-balancer/quickstart-load-balancer-standard-public-portal",
+      "redirect_document_id": false
+    },
+    {
+      "source_path": "articles/load-balancer/load-balancer-get-started-internet-availability-zones-zonal-powershell.md",
+      "redirect_url": "/azure/load-balancer/quickstart-create-standard-load-balancer-powershell",
+      "redirect_document_id": false
+    },
+    {
+      "source_path": "articles/load-balancer/load-balancer-get-started-internet-availability-zones-zonal-cli.md",
+      "redirect_url": "/azure/load-balancer/quickstart-load-balancer-standard-public-cli",
+      "redirect_document_id": false
     }
   ]
 }
```

articles/active-directory-b2c/quickstart-native-app-desktop.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -25,7 +25,7 @@ Azure Active Directory B2C (Azure AD B2C) provides cloud identity management to
 
 - [Visual Studio 2019](https://www.visualstudio.com/downloads/) with the **ASP.NET and web development** workload.
 - A social account from either Facebook, Google, or Microsoft.
-- [Download a zip file](https://github.com/Azure-Samples/active-directory-b2c-dotnet-desktop/archive/master.zip) or clone the sample web app from GitHub.
+- [Download a zip file](https://github.com/Azure-Samples/active-directory-b2c-dotnet-desktop/archive/msalv3.zip) or clone the [Azure-Samples/active-directory-b2c-dotnet-desktop](https://github.com/Azure-Samples/active-directory-b2c-dotnet-desktop) repository from GitHub.
 
 ```
 git clone https://github.com/Azure-Samples/active-directory-b2c-dotnet-desktop.git
````
(Binary image files changed: 4.91 KB, -650 bytes, 5.16 KB)

articles/azure-databricks/databricks-extract-load-sql-data-warehouse.md

Lines changed: 6 additions & 6 deletions
````diff
@@ -7,7 +7,7 @@ ms.reviewer: jasonh
 ms.service: azure-databricks
 ms.custom: mvc
 ms.topic: tutorial
-ms.date: 06/20/2019
+ms.date: 01/29/2020
 ---
 # Tutorial: Extract, transform, and load data by using Azure Databricks
 
@@ -57,7 +57,7 @@ Complete these tasks before you begin this tutorial:
 
 If you'd prefer to use an access control list (ACL) to associate the service principal with a specific file or directory, reference [Access control in Azure Data Lake Storage Gen2](../storage/blobs/data-lake-storage-access-control.md).
 
-* When performing the steps in the [Get values for signing in](https://docs.microsoft.com/azure/active-directory/develop/howto-create-service-principal-portal#get-values-for-signing-in) section of the article, paste the tenant ID, app ID, and password values into a text file. You'll need those soon.
+* When performing the steps in the [Get values for signing in](https://docs.microsoft.com/azure/active-directory/develop/howto-create-service-principal-portal#get-values-for-signing-in) section of the article, paste the tenant ID, app ID, and secret values into a text file. You'll need those soon.
 
 * Sign in to the [Azure portal](https://portal.azure.com/).
 
@@ -165,23 +165,23 @@ In this section, you create a notebook in Azure Databricks workspace and then ru
 ```scala
 val storageAccountName = "<storage-account-name>"
 val appID = "<app-id>"
-val password = "<password>"
+val secret = "<secret>"
 val fileSystemName = "<file-system-name>"
 val tenantID = "<tenant-id>"
 
 spark.conf.set("fs.azure.account.auth.type." + storageAccountName + ".dfs.core.windows.net", "OAuth")
 spark.conf.set("fs.azure.account.oauth.provider.type." + storageAccountName + ".dfs.core.windows.net", "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider")
 spark.conf.set("fs.azure.account.oauth2.client.id." + storageAccountName + ".dfs.core.windows.net", "" + appID + "")
-spark.conf.set("fs.azure.account.oauth2.client.secret." + storageAccountName + ".dfs.core.windows.net", "" + password + "")
+spark.conf.set("fs.azure.account.oauth2.client.secret." + storageAccountName + ".dfs.core.windows.net", "" + secret + "")
 spark.conf.set("fs.azure.account.oauth2.client.endpoint." + storageAccountName + ".dfs.core.windows.net", "https://login.microsoftonline.com/" + tenantID + "/oauth2/token")
 spark.conf.set("fs.azure.createRemoteFileSystemDuringInitialization", "true")
 dbutils.fs.ls("abfss://" + fileSystemName + "@" + storageAccountName + ".dfs.core.windows.net/")
 spark.conf.set("fs.azure.createRemoteFileSystemDuringInitialization", "false")
 ```
 
-6. In this code block, replace the `<app-id>`, `<password>`, `<tenant-id>`, and `<storage-account-name>` placeholder values in this code block with the values that you collected while completing the prerequisites of this tutorial. Replace the `<file-system-name>` placeholder value with whatever name you want to give the file system.
+6. In this code block, replace the `<app-id>`, `<secret>`, `<tenant-id>`, and `<storage-account-name>` placeholder values with the values that you collected while completing the prerequisites of this tutorial. Replace the `<file-system-name>` placeholder value with whatever name you want to give the file system.
 
-   * The `<app-id>`, and `<password>` are from the app that you registered with active directory as part of creating a service principal.
+   * The `<app-id>` and `<secret>` are from the app that you registered with Active Directory as part of creating a service principal.
 
    * The `<tenant-id>` is from your subscription.
````
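
For context on where those prerequisite values come from, here is a minimal sketch (not part of this commit; the service principal name is a placeholder) that creates a service principal with Azure CLI and prints the three values the tutorial has you save:

```bash
# Create a service principal and print the values the tutorial stores in a
# text file: app ID, client secret, and tenant ID.
az ad sp create-for-rbac --name "databricks-tutorial-sp" \
  --query "{appId: appId, secret: password, tenantId: tenant}"
```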

articles/azure-monitor/platform/collect-custom-metrics-linux-telegraf.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -15,7 +15,7 @@ By using Azure Monitor, you can collect custom metrics via your application tele
 
 ## InfluxData Telegraf agent
 
-[Telegraf](https://docs.influxdata.com/telegraf/v1.7/) is a plug-in-driven agent that enables the collection of metrics from over 150 different sources. Depending on what workloads run on your VM, you can configure the agent to leverage specialized input plug-ins to collect metrics. Examples are MySQL, NGINX, and Apache. By using output plug-ins, the agent can then write to destinations that you choose. The Telegraf agent has integrated directly with the Azure Monitor custom metrics REST API. It supports an Azure Monitor output plug-in. By using this plug-in, the agent can collect workload-specific metrics on your Linux VM and submit them as custom metrics to Azure Monitor.
+[Telegraf](https://docs.influxdata.com/telegraf/) is a plug-in-driven agent that enables the collection of metrics from over 150 different sources. Depending on what workloads run on your VM, you can configure the agent to leverage specialized input plug-ins to collect metrics. Examples are MySQL, NGINX, and Apache. By using output plug-ins, the agent can then write to destinations that you choose. The Telegraf agent has integrated directly with the Azure Monitor custom metrics REST API. It supports an Azure Monitor output plug-in. By using this plug-in, the agent can collect workload-specific metrics on your Linux VM and submit them as custom metrics to Azure Monitor.
 
 ![Telegraph agent overview](./media/collect-custom-metrics-linux-telegraf/telegraf-agent-overview.png)
 
```
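
As a hedged illustration of the input/output plug-in model that paragraph describes (not part of this commit), Telegraf can generate a starter configuration that pairs chosen input plug-ins with the Azure Monitor output plug-in, and the result can be dry-run locally:

```bash
# Generate a config with the cpu and mem input plug-ins and the azure_monitor
# output plug-in, then do a test run that prints gathered metrics to stdout.
telegraf --input-filter cpu:mem --output-filter azure_monitor config > azm-telegraf.conf
telegraf --config azm-telegraf.conf --test
```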

articles/azure-monitor/platform/delete-workspace.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -20,7 +20,7 @@ When you delete a Log Analytics workspace, a soft-delete operation is performed
 After the soft-delete period, the workspace resource and its data are non-recoverable – its data is queued for permanent deletion and completely purged within 30 days. The workspace name is 'released' and you can use it to create a new workspace.
 
 > [!NOTE]
-> The soft-delete behavior cannot be turned off. We will shortly add an option to override the soft-delete when using a ‘force’ tag in the delete operation.
+> If you want to override the soft-delete behavior and delete your workspace permanently, follow the steps in [Permanent workspace delete](#permanent-workspace-delete).
 
 You want to exercise caution when you delete a workspace because there might be important data and configuration that may negatively impact your service operation. Review what agents, solutions, and other Azure services and sources that store their data in Log Analytics, such as:
 
```
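
For reference, a hedged sketch of the permanent-delete path the updated note points to, using Azure CLI (assumes a CLI version where the `--force` option is available; the resource names are placeholders):

```bash
# Permanently delete a Log Analytics workspace, overriding the soft-delete
# behavior described above.
az monitor log-analytics workspace delete \
  --resource-group "my-resource-group" \
  --workspace-name "my-workspace" \
  --force true
```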

articles/batch/batch-aad-auth.md

Lines changed: 62 additions & 1 deletion
````diff
@@ -13,7 +13,7 @@ ms.service: batch
 ms.topic: article
 ms.tgt_pltfrm:
 ms.workload: big-compute
-ms.date: 08/15/2019
+ms.date: 01/28/2020
 ms.author: jushiman
 ---
 
@@ -140,6 +140,67 @@ Your application should now appear in your access control settings with an RBAC
 
 ![Assign an RBAC role to your application](./media/batch-aad-auth/app-rbac-role.png)
 
+### Assign a custom role
+
+A custom role grants granular permission to a user for submitting jobs, tasks, and more. This provides the ability to prevent users from performing operations that affect cost, such as creating pools or modifying nodes.
+
+You can use a custom role to grant permissions to an Azure AD user, group, or service principal for the following RBAC operations:
+
+- Microsoft.Batch/batchAccounts/pools/write
+- Microsoft.Batch/batchAccounts/pools/delete
+- Microsoft.Batch/batchAccounts/pools/read
+- Microsoft.Batch/batchAccounts/jobSchedules/write
+- Microsoft.Batch/batchAccounts/jobSchedules/delete
+- Microsoft.Batch/batchAccounts/jobSchedules/read
+- Microsoft.Batch/batchAccounts/jobs/write
+- Microsoft.Batch/batchAccounts/jobs/delete
+- Microsoft.Batch/batchAccounts/jobs/read
+- Microsoft.Batch/batchAccounts/certificates/write
+- Microsoft.Batch/batchAccounts/certificates/delete
+- Microsoft.Batch/batchAccounts/certificates/read
+- Microsoft.Batch/batchAccounts/read (for any read operation)
+- Microsoft.Batch/batchAccounts/listKeys/action (for any operation)
+
+Custom roles are for users authenticated by Azure AD, not the Batch account credentials (shared key). Note that the Batch account credentials give full permission to the Batch account. Also note that jobs using autopool require pool-level permissions.
+
+Here's an example of a custom role definition:
+
+```json
+{
+  "properties": {
+    "roleName": "Azure Batch Custom Job Submitter",
+    "type": "CustomRole",
+    "description": "Allows a user to submit jobs to Azure Batch but not manage pools",
+    "assignableScopes": [
+      "/subscriptions/88888888-8888-8888-8888-888888888888"
+    ],
+    "permissions": [
+      {
+        "actions": [
+          "Microsoft.Batch/*/read",
+          "Microsoft.Authorization/*/read",
+          "Microsoft.Resources/subscriptions/resourceGroups/read",
+          "Microsoft.Support/*",
+          "Microsoft.Insights/alertRules/*"
+        ],
+        "notActions": [],
+        "dataActions": [
+          "Microsoft.Batch/batchAccounts/jobs/*",
+          "Microsoft.Batch/batchAccounts/jobSchedules/*"
+        ],
+        "notDataActions": []
+      }
+    ]
+  }
+}
+```
+
+For more general information on creating a custom role, see [Custom roles for Azure resources](../role-based-access-control/custom-roles.md).
+
 ### Get the tenant ID for your Azure Active Directory
 
 The tenant ID identifies the Azure AD tenant that provides authentication services to your application. To get the tenant ID, follow these steps:
````
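
A hedged sketch of how a definition like the one above might be registered and assigned with Azure CLI (not confirmed by this commit; the file name and placeholder IDs are illustrative):

```bash
# Register the custom role from the JSON definition above (saved as role.json),
# then assign it to a service principal at the Batch account scope.
az role definition create --role-definition @role.json

az role assignment create \
  --assignee "<app-id>" \
  --role "Azure Batch Custom Job Submitter" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Batch/batchAccounts/<batch-account>"
```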

articles/hdinsight/disk-encryption.md

Lines changed: 3 additions & 1 deletion
```diff
@@ -74,7 +74,7 @@ HDInsight only supports Azure Key Vault. If you have your own key vault, you can
 
 b. Under **Select Principal**, choose the user-assigned managed identity you created.
 
-![Set Select Principal for Azure Key Vault access policy](./media/disk-encryption/add-key-vault-access-policy-select-principal.png)
+![Set Select Principal for Azure Key Vault access policy](./media/disk-encryption/azure-portal-add-access-policy.png)
 
 c. Set **Key Permissions** to **Get**, **Unwrap Key**, and **Wrap Key**.
 
@@ -96,6 +96,8 @@ You're now ready to create a new HDInsight cluster. Customer-managed key can onl
 
 During cluster creation, provide the full key URL, including the key version. For example, `https://contoso-kv.vault.azure.net/keys/myClusterKey/46ab702136bc4b229f8b10e8c2997fa4`. You also need to assign the managed identity to the cluster and provide the key URI.
 
+![Create new cluster](./media/disk-encryption/create-cluster-portal.png)
+
 ### Using Azure CLI
 
 The following example shows how to use Azure CLI to create a new Apache Spark cluster with disk encryption enabled. See the [Azure CLI az hdinsight create](https://docs.microsoft.com/cli/azure/hdinsight?view=azure-cli-latest#az-hdinsight-create) documentation for more information.
```
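
To make that CLI path concrete, a hedged sketch of `az hdinsight create` with the disk-encryption parameters (a sketch under the assumption these flags behave as documented for the command; values are placeholders, and the key name and version echo the example URL above):

```bash
# Create a Spark cluster that uses a customer-managed disk-encryption key,
# supplying the vault URI, key name and version, and the managed identity.
az hdinsight create -t spark -g "<resource-group>" -n "<cluster-name>" \
  -p "<cluster-login-password>" \
  --storage-account "<storage-account-name>" \
  --encryption-key-name myClusterKey \
  --encryption-key-version 46ab702136bc4b229f8b10e8c2997fa4 \
  --encryption-vault-uri https://contoso-kv.vault.azure.net \
  --assign-identity "<managed-identity-resource-id>"
```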
