
Commit d645cba

Merge pull request #57759 from MicrosoftDocs/repo_sync_working_branch
Confirm merge from repo_sync_working_branch to master to sync with https://github.com/Microsoft/azure-docs (branch master)
2 parents e29eb94 + dd69aed commit d645cba

22 files changed: +77 -55 lines changed

articles/active-directory-b2c/saml-technical-profile.md

Lines changed: 1 addition & 1 deletion
@@ -17,7 +17,7 @@ ms.component: B2C

[!INCLUDE [active-directory-b2c-advanced-audience-warning](../../includes/active-directory-b2c-advanced-audience-warning.md)]

- Azure Active Directory (Azure AD) B2C provides support for the SAML 2.0 identity provider. This article describes the specifics of a technical profile for interacting with a claims provider that supports this standardized protocol. With SAML technical profile you can federate with a SAML based identity provider, such as AD-FS and Salesforce, allowing you users to sign-in with their existing social or enterprise identities.
+ Azure Active Directory (Azure AD) B2C provides support for the SAML 2.0 identity provider. This article describes the specifics of a technical profile for interacting with a claims provider that supports this standardized protocol. With SAML technical profile you can federate with a SAML based identity provider, such as AD-FS and Salesforce, allowing your users to sign-in with their existing social or enterprise identities.

## Metadata exchange

articles/application-insights/app-insights-ip-addresses.md

Lines changed: 1 addition & 1 deletion
@@ -34,7 +34,7 @@ You need to open some outgoing ports in your server's firewall to allow the Appl

| Purpose | URL | IP | Ports |
| --- | --- | --- | --- |
- | Telemetry |dc.services.visualstudio.com<br/>dc.applicationinsights.microsoft.com |40.114.241.141<br/>104.45.136.42<br/>40.84.189.107<br/>168.63.242.221<br/>52.167.221.184<br/>52.169.64.244<br/>40.85.218.175<br/>104.211.92.54<br/>52.175.198.74<br/>51.140.6.23 | 443 |
+ | Telemetry |dc.services.visualstudio.com<br/>dc.applicationinsights.microsoft.com |40.114.241.141<br/>104.45.136.42<br/>40.84.189.107<br/>168.63.242.221<br/>52.167.221.184<br/>52.169.64.244<br/>40.85.218.175<br/>104.211.92.54<br/>52.175.198.74<br/>51.140.6.23<br/>40.71.12.231 | 443 |
| Live Metrics Stream |rt.services.visualstudio.com<br/>rt.applicationinsights.microsoft.com |23.96.28.38<br/>13.92.40.198 |443 |

## Status Monitor

articles/azure-databricks/databricks-extract-load-sql-data-warehouse.md

Lines changed: 15 additions & 15 deletions
@@ -13,15 +13,15 @@ ms.date: 07/26/2018
---
# Tutorial: Extract, transform, and load data using Azure Databricks

- In this tutorial, you perform an ETL (extract, transform, and load data) operation using Azure Databricks. You extract data from Azure Data Lake Store into Azure Databricks, run transformations on the data in Azure Databricks, and then load the transformed data into Azure SQL Data Warehouse.
+ In this tutorial, you perform an ETL (extract, transform, and load data) operation using Azure Databricks. You extract data from Azure Data Lake Store into Azure Databricks, run transformations on the data in Azure Databricks, and then load the transformed data into Azure SQL Data Warehouse.

The steps in this tutorial use the SQL Data Warehouse connector for Azure Databricks to transfer data to Azure Databricks. This connector, in turn, uses Azure Blob Storage as temporary storage for the data being transferred between an Azure Databricks cluster and Azure SQL Data Warehouse.

The following illustration shows the application flow:

![Azure Databricks with Data Lake Store and SQL Data Warehouse](./media/databricks-extract-load-sql-data-warehouse/databricks-extract-transform-load-sql-datawarehouse.png "Azure Databricks with Data Lake Store and SQL Data Warehouse")

- This tutorial covers the following tasks:
+ This tutorial covers the following tasks:

> [!div class="checklist"]
> * Create an Azure Databricks workspace
@@ -58,7 +58,7 @@ In this section, you create an Azure Databricks workspace using the Azure portal

![Create an Azure Databricks workspace](./media/databricks-extract-load-sql-data-warehouse/create-databricks-workspace.png "Create an Azure Databricks workspace")

- Provide the following values:
+ Provide the following values:

|Property |Description |
|---------|---------|
@@ -89,14 +89,14 @@ In this section, you create an Azure Databricks workspace using the Azure portal
Accept all other defaults other than the following values:

* Enter a name for the cluster.
- * For this article, create a cluster with **4.0** runtime.
+ * For this article, create a cluster with **4.0** runtime.
* Make sure you select the **Terminate after \_\_ minutes of inactivity** checkbox. Provide a duration (in minutes) to terminate the cluster, if the cluster is not being used.

Select **Create cluster**. Once the cluster is running, you can attach notebooks to the cluster and run Spark jobs.

## Create an Azure Data Lake Store account

- In this section, you create an Azure Data Lake Store account and associate an Azure Active Directory service principal with it. Later in this tutorial, you use this service principal in Azure Databricks to access Azure Data Lake Store.
+ In this section, you create an Azure Data Lake Store account and associate an Azure Active Directory service principal with it. Later in this tutorial, you use this service principal in Azure Databricks to access Azure Data Lake Store.

1. From the [Azure portal](https://portal.azure.com), select **Create a resource** > **Storage** > **Data Lake Store**.
3. In the **New Data Lake Store** blade, provide the values as shown in the following screenshot:
@@ -183,7 +183,7 @@ When programmatically logging in, you need to pass the tenant ID with your authe

1. Copy the **Directory ID**. This value is your tenant ID.

- ![tenant ID](./media/databricks-extract-load-sql-data-warehouse/copy-directory-id.png)
+ ![tenant ID](./media/databricks-extract-load-sql-data-warehouse/copy-directory-id.png)

## Upload data to Data Lake Store

@@ -300,7 +300,7 @@ You have now extracted the data from Azure Data Lake Store into Azure Databricks

## Transform data in Azure Databricks

- The raw sample data **small_radio_json.json** captures the audience for a radio station and has a variety of columns. In this section, you transform the data to only retrieve specific columns in from the dataset.
+ The raw sample data **small_radio_json.json** captures the audience for a radio station and has a variety of columns. In this section, you transform the data to only retrieve specific columns in from the dataset.

1. Start by retrieving only the columns *firstName*, *lastName*, *gender*, *location*, and *level* from the dataframe you already created.
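The snippet for this step sits outside the hunk above. As a rough sketch only (not text from this commit), and assuming the dataframe produced by the extract step is named `df`, the selection the step describes would look like:

```scala
// Hedged sketch, not part of this commit: keep only the columns the step lists.
// The source dataframe name `df` is an assumption; the column-name casing follows
// the step's wording (firstName, lastName, gender, location, level).
val specificColumnsDf = df.select("firstName", "lastName", "gender", "location", "level")
specificColumnsDf.show()
```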

@@ -334,7 +334,7 @@ The raw sample data **small_radio_json.json** captures the audience for a radio
| Margaux| Smith| F|Atlanta-Sandy Spr...| free|
+---------+----------+------+--------------------+-----+

- 2. You can further transform this data to rename the column **level** to **subscription_type**.
+ 2. You can further transform this data to rename the column **level** to **subscription_type**.

val renamedColumnsDf = specificColumnsDf.withColumnRenamed("level", "subscription_type")
renamedColumnsDf.show()
@@ -376,7 +376,7 @@ As mentioned earlier, the SQL date warehouse connector uses Azure Blob Storage a

val blobStorage = "<STORAGE ACCOUNT NAME>.blob.core.windows.net"
val blobContainer = "<CONTAINER NAME>"
- val blobAccessKey = "<ACCESS KEY>"
+ val blobAccessKey = "<ACCESS KEY>"

2. Specify a temporary folder that will be used while moving data between Azure Databricks and Azure SQL Data Warehouse.
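The code for step 2 also falls outside this hunk. A minimal sketch, assuming the `blobStorage` and `blobContainer` values defined above (the folder name `tempDirs` is illustrative, not taken from this commit):

```scala
// Hedged sketch, not part of this commit: a wasbs:// path inside the blob container
// for the SQL Data Warehouse connector to use as scratch space while moving data.
val tempDir = "wasbs://" + blobContainer + "@" + blobStorage + "/tempDirs"
```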

@@ -391,23 +391,23 @@ As mentioned earlier, the SQL date warehouse connector uses Azure Blob Storage a

//SQL Data Warehouse related settings
val dwDatabase = "<DATABASE NAME>"
- val dwServer = "<DATABASE SERVER NAME>"
+ val dwServer = "<DATABASE SERVER NAME>"
val dwUser = "<USER NAME>"
val dwPass = "<PASSWORD>"
- val dwJdbcPort = "1433"
+ val dwJdbcPort = "1433"
val dwJdbcExtraOptions = "encrypt=true;trustServerCertificate=true;hostNameInCertificate=*.database.windows.net;loginTimeout=30;"
val sqlDwUrl = "jdbc:sqlserver://" + dwServer + ".database.windows.net:" + dwJdbcPort + ";database=" + dwDatabase + ";user=" + dwUser+";password=" + dwPass + ";$dwJdbcExtraOptions"
val sqlDwUrlSmall = "jdbc:sqlserver://" + dwServer + ".database.windows.net:" + dwJdbcPort + ";database=" + dwDatabase + ";user=" + dwUser+";password=" + dwPass

- 5. Run the following snippet to load the transformed dataframe, **renamedColumnsDf**, as a table in SQL data warehouse. This snippet creates a table called **SampleTable** in the SQL database. Please note that Azure SQL DW requires a master key. You can create a master key by executing "CREATE MASTER KEY;" command in SQL Server Management Studio.
+ 5. Run the following snippet to load the transformed dataframe, **renamedColumnsDf**, as a table in SQL data warehouse. This snippet creates a table called **SampleTable** in the SQL database. Please note that Azure SQL DW requires a master key. You can create a master key by executing "CREATE MASTER KEY;" command in SQL Server Management Studio.

spark.conf.set(
"spark.sql.parquet.writeLegacyFormat",
"true")

renamedColumnsDf.write
.format("com.databricks.spark.sqldw")
- .option("url", sqlDwUrlSmall)
+ .option("url", sqlDwUrlSmall)
.option("dbtable", "SampleTable")
.option( "forward_spark_azure_storage_credentials","True")
.option("tempdir", tempDir)
@@ -428,9 +428,9 @@ After you have finished running the tutorial, you can terminate the cluster. To

![Stop a Databricks cluster](./media/databricks-extract-load-sql-data-warehouse/terminate-databricks-cluster.png "Stop a Databricks cluster")

- If you do not manually terminate the cluster it will automatically stop, provided you selected the **Terminate after __ minutes of inactivity** checkbox while creating the cluster. In such a case, the cluster automatically stops if it has been inactive for the specified time.
+ If you do not manually terminate the cluster it will automatically stop, provided you selected the **Terminate after \_\_ minutes of inactivity** checkbox while creating the cluster. In such a case, the cluster automatically stops if it has been inactive for the specified time.

- ## Next steps
+ ## Next steps
In this tutorial, you learned how to:

> [!div class="checklist"]

articles/azure-stack/asdk/asdk-prepare-host.md

Lines changed: 1 addition & 1 deletion
@@ -49,7 +49,7 @@ Before you can install the ASDK on the host computer, the ASDK host computer env

6. On the **Optional settings** page, provide the local administrator account information for the development kit host computer and then click **Next**. You can also provide values for the following optional settings:
- **Computername**: This option sets the name for the development kit host. The name must comply with FQDN requirements and must be 15 characters or less in length. The default is a random computer name generated by Windows.
- - **Static IP configuration**: Sets your deployment to use a static IP address. Otherwise, when the installer reboots into the cloudbuilder.vhx, the network interfaces are configured with DHCP.
+ - **Static IP configuration**: Sets your deployment to use a static IP address. Otherwise, when the installer reboots into the cloudbuilder.vhdx, the network interfaces are configured with DHCP.

![](media/asdk-prepare-host/3.PNG)

articles/azure-stack/user/azure-stack-solution-machine-learning.md

Lines changed: 3 additions & 5 deletions
@@ -988,7 +988,7 @@ The Docker engine must be running locally to complete the following steps to ope
1. Make sure the Azure resource provider **Microsoft.ContainerRegistry** is registered in the subscription. Register this resource provider before creating an environment in step 3. Check to see if it's already registered by using the following command:

```CLI
- az provider list --query "\[\].{Provider:namespace, Status:registrationState}" --out table
+ az provider list --query "[].{Provider:namespace, Status:registrationState}" --out table
```

View this output:
@@ -1267,10 +1267,8 @@ From within the WSL Environment, run the following commands to install kubectl i

```Bash
apt-get update && apt-get install -y apt-transport-https
- curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
- cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
- deb http://apt.kubernetes.io/ kubernetes-xenial main
- EOF
+ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add
+ sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
apt-get update
apt-get install -y kubectl
```

articles/batch-ai/use-azure-storage.md

Lines changed: 2 additions & 2 deletions
@@ -94,7 +94,7 @@ If your training script requires knowledge of a path, you should pass it as a co

### Abbreviate input paths

- To abbreviate input paths as an environment variable, use the `inputDirectories` property of your `job.json` file (or `models.JobCreateParamters.input_directories` if using the Batch AI SDK). The schema of `inputDirectories` is:
+ To abbreviate input paths as an environment variable, use the `inputDirectories` property of your `job.json` file (or `models.JobCreateParameters.input_directories` if using the Batch AI SDK). The schema of `inputDirectories` is:

```json
{
@@ -111,7 +111,7 @@ For more information, see [here](https://github.com/Azure/BatchAI/blob/master/do

### Abbreviate output paths

- To abbreviate output paths as an environment variable, use the `outputDirectories` property of your `job.json` file (or `models.JobCreateParamters.output_directories` if using the Batch AI SDK). Using this method can simplify the paths for output files. The schema of `outputDirectories` is:
+ To abbreviate output paths as an environment variable, use the `outputDirectories` property of your `job.json` file (or `models.JobCreateParameters.output_directories` if using the Batch AI SDK). Using this method can simplify the paths for output files. The schema of `outputDirectories` is:

```json
{

articles/cognitive-services/QnAMaker/Quickstarts/update-kb-java.md

Lines changed: 1 addition & 1 deletion
@@ -272,7 +272,7 @@ public class UpdateKB {
* Sends the request to update the knowledge base.
* @param kb The ID for the existing knowledge base
* @param req The data source for the updated knowledge base
- * @return Reponse Returns the response from a PATCH request
+ * @return Response Returns the response from a PATCH request
*/
public static Response UpdateKB (String kb, Request req) throws Exception {
URL url = new URL(host + service + method + kb);

articles/cosmos-db/change-feed.md

Lines changed: 1 addition & 1 deletion
@@ -378,7 +378,7 @@ That’s it. After these few steps documents will start showing up into the **Do
Above code is for illustration purpose to show different kind of objects and their interaction. You have to define proper variables and initiate them with correct values. You can get the complete code used in this article from the [GitHub repo](https://github.com/Azure/azure-documentdb-dotnet/tree/master/samples/code-samples/ChangeFeedProcessorV2).

> [!NOTE]
- > You should never have a master key in your code or in config file as shown in above code. Please see [how to use Key-Vault to retrive the keys](https://sarosh.wordpress.com/2017/11/23/cosmos-db-and-key-vault/).
+ > You should never have a master key in your code or in config file as shown in above code. Please see [how to use Key-Vault to retrieve the keys](https://sarosh.wordpress.com/2017/11/23/cosmos-db-and-key-vault/).

## FAQ

articles/load-balancer/quickstart-create-basic-load-balancer-portal.md

Lines changed: 1 addition & 0 deletions
@@ -154,6 +154,7 @@ To allow the Basic load balancer to monitor the status of your app, you use a he
- **myHealthProbe** for the name of the health probe
- **HTTP** for the protocol type
- **80** for the port number
+ - **Healthprobe.aspx** for the URI path. You can either replace this value with any other URI or keep the default path value of **"\\"** to get the default URI.
- **15** for **Interval**, the number of seconds between probe attempts
- **2** for **Unhealthy threshold**, the number of consecutive probe failures that must occur before a VM is considered unhealthy

articles/load-balancer/quickstart-load-balancer-standard-public-portal.md

Lines changed: 1 addition & 0 deletions
@@ -141,6 +141,7 @@ To allow the load balancer to monitor the status of your app, you use a health p
- *myHealthProbe* - for the name of the health probe.
- **HTTP** - for the protocol type.
- *80* - for the port number.
+ - *Healthprobe.aspx* - for the URI path. You can either replace this value with any other URI or keep the default path value of **"\\"** to get the default URI.
- *15* - for number of **Interval** in seconds between probe attempts.
- *2* - for number of **Unhealthy threshold** or consecutive probe failures that must occur before a VM is considered unhealthy.
4. Click **OK**.
