- Azure Active Directory (Azure AD) B2C provides support for the SAML 2.0 identity provider. This article describes the specifics of a technical profile for interacting with a claims provider that supports this standardized protocol. With SAML technical profile you can federate with a SAML based identity provider, such as AD-FS and Salesforce, allowing you users to sign-in with their existing social or enterprise identities.
+ Azure Active Directory (Azure AD) B2C provides support for the SAML 2.0 identity provider. This article describes the specifics of a technical profile for interacting with a claims provider that supports this standardized protocol. With SAML technical profile you can federate with a SAML based identity provider, such as AD-FS and Salesforce, allowing your users to sign-in with their existing social or enterprise identities.

articles/azure-databricks/databricks-extract-load-sql-data-warehouse.md (15 additions, 15 deletions)

@@ -13,15 +13,15 @@ ms.date: 07/26/2018
  ---
  # Tutorial: Extract, transform, and load data using Azure Databricks
- In this tutorial, you perform an ETL (extract, transform, and load data) operation using Azure Databricks. You extract data from Azure Data Lake Store into Azure Databricks, run transformations on the data in Azure Databricks, and then load the transformed data into Azure SQL Data Warehouse.
+ In this tutorial, you perform an ETL (extract, transform, and load data) operation using Azure Databricks. You extract data from Azure Data Lake Store into Azure Databricks, run transformations on the data in Azure Databricks, and then load the transformed data into Azure SQL Data Warehouse.
  The steps in this tutorial use the SQL Data Warehouse connector for Azure Databricks to transfer data to Azure Databricks. This connector, in turn, uses Azure Blob Storage as temporary storage for the data being transferred between an Azure Databricks cluster and Azure SQL Data Warehouse.
  The following illustration shows the application flow:
  
- This tutorial covers the following tasks:
+ This tutorial covers the following tasks:
  > [!div class="checklist"]
  > * Create an Azure Databricks workspace
@@ -58,7 +58,7 @@ In this section, you create an Azure Databricks workspace using the Azure portal
  
- Provide the following values:
+ Provide the following values:
  |Property |Description |
  |---------|---------|
@@ -89,14 +89,14 @@ In this section, you create an Azure Databricks workspace using the Azure portal
  Accept all other defaults other than the following values:
  * Enter a name for the cluster.
- * For this article, create a cluster with **4.0** runtime.
+ * For this article, create a cluster with **4.0** runtime.
  * Make sure you select the **Terminate after \_\_ minutes of inactivity** checkbox. Provide a duration (in minutes) to terminate the cluster, if the cluster is not being used.
  Select **Create cluster**. Once the cluster is running, you can attach notebooks to the cluster and run Spark jobs.
  ## Create an Azure Data Lake Store account
- In this section, you create an Azure Data Lake Store account and associate an Azure Active Directory service principal with it. Later in this tutorial, you use this service principal in Azure Databricks to access Azure Data Lake Store.
+ In this section, you create an Azure Data Lake Store account and associate an Azure Active Directory service principal with it. Later in this tutorial, you use this service principal in Azure Databricks to access Azure Data Lake Store.
  1. From the [Azure portal](https://portal.azure.com), select **Create a resource** > **Storage** > **Data Lake Store**.
  3. In the **New Data Lake Store** blade, provide the values as shown in the following screenshot:
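
For orientation, the service principal created in this section is what the notebook later uses to read from the `adl://` path. A minimal PySpark sketch of that access pattern follows; it assumes a Databricks notebook where `spark` is already defined, and every ID, secret, and account name is a placeholder rather than a value from this tutorial.

```python
# Hedged sketch: configure OAuth access to Azure Data Lake Store (Gen1) with a
# service principal, then read the sample file. All values are placeholders.
client_id = "<application-id>"
client_secret = "<authentication-key>"
tenant_id = "<tenant-id>"

spark.conf.set("dfs.adls.oauth2.access.token.provider.type", "ClientCredential")
spark.conf.set("dfs.adls.oauth2.client.id", client_id)
spark.conf.set("dfs.adls.oauth2.credential", client_secret)
spark.conf.set("dfs.adls.oauth2.refresh.url",
               "https://login.microsoftonline.com/%s/oauth2/token" % tenant_id)

# Read the tutorial's sample JSON file from the Data Lake Store account.
df = spark.read.json(
    "adl://<data-lake-store-name>.azuredatalakestore.net/small_radio_json.json")
```
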
@@ -183,7 +183,7 @@ When programmatically logging in, you need to pass the tenant ID with your authe
  1. Copy the **Directory ID**. This value is your tenant ID.
@@ -300,7 +300,7 @@ You have now extracted the data from Azure Data Lake Store into Azure Databricks
  ## Transform data in Azure Databricks
- The raw sample data **small_radio_json.json** captures the audience for a radio station and has a variety of columns. In this section, you transform the data to only retrieve specific columns in from the dataset.
+ The raw sample data **small_radio_json.json** captures the audience for a radio station and has a variety of columns. In this section, you transform the data to only retrieve specific columns in from the dataset.
  1. Start by retrieving only the columns *firstName*, *lastName*, *gender*, *location*, and *level* from the dataframe you already created.
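
A hedged PySpark sketch of this transformation step (the tutorial's own notebook code is not captured in this hunk): it assumes the `df` dataframe from the extract step and a Databricks notebook where `spark` is predefined; the column casing follows the prose above, and the rename that produces `renamedColumnsDf` is illustrative.

```python
# Keep only the columns named in step 1; adjust casing to match the actual
# fields in small_radio_json.json if they differ.
specificColumnsDf = df.select("firstName", "lastName", "gender", "location", "level")
specificColumnsDf.show()

# Produce the renamedColumnsDf dataframe referenced later when loading into
# SQL Data Warehouse. The target column name here is an assumption.
renamedColumnsDf = specificColumnsDf.withColumnRenamed("level", "subscription_type")
renamedColumnsDf.show()
```
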
@@ -334,7 +334,7 @@ The raw sample data **small_radio_json.json** captures the audience for a radio
- 5. Run the following snippet to load the transformed dataframe, **renamedColumnsDf**, as a table in SQL data warehouse. This snippet creates a table called **SampleTable** in the SQL database. Please note that Azure SQL DW requires a master key. You can create a master key by executing "CREATE MASTER KEY;" command in SQL Server Management Studio.
+ 5. Run the following snippet to load the transformed dataframe, **renamedColumnsDf**, as a table in SQL data warehouse. This snippet creates a table called **SampleTable** in the SQL database. Please note that Azure SQL DW requires a master key. You can create a master key by executing "CREATE MASTER KEY;" command in SQL Server Management Studio.
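
The snippet step 5 refers to is not captured in this hunk; below is a hedged PySpark sketch of what such a load through the SQL Data Warehouse connector might look like. It assumes `renamedColumnsDf` from the transform step, a database that already has a master key (created with `CREATE MASTER KEY;`), and placeholder server, storage, and credential values.

```python
# Temporary Blob storage used by the SQL Data Warehouse connector; all names
# and keys are placeholders.
blob_storage = "<blob-storage-account-name>.blob.core.windows.net"
blob_container = "<container-name>"
blob_access_key = "<storage-account-access-key>"
temp_dir = "wasbs://%s@%s/tempDirs" % (blob_container, blob_storage)

spark.conf.set("fs.azure.account.key.%s" % blob_storage, blob_access_key)

# JDBC connection string for the SQL Data Warehouse (placeholder values).
dw_jdbc_url = ("jdbc:sqlserver://<server-name>.database.windows.net:1433;"
               "database=<database-name>;"
               "user=<user-name>@<server-name>;password=<password>;"
               "encrypt=true;trustServerCertificate=false;loginTimeout=30;")

# Write renamedColumnsDf as the SampleTable table.
(renamedColumnsDf.write
    .format("com.databricks.spark.sqldw")
    .option("url", dw_jdbc_url)
    .option("forwardSparkAzureStorageCredentials", "true")
    .option("dbTable", "SampleTable")
    .option("tempDir", temp_dir)
    .save())
```
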
@@ -428,9 +428,9 @@ After you have finished running the tutorial, you can terminate the cluster. To
  
- If you do not manually terminate the cluster it will automatically stop, provided you selected the **Terminate after __ minutes of inactivity** checkbox while creating the cluster. In such a case, the cluster automatically stops if it has been inactive for the specified time.
+ If you do not manually terminate the cluster it will automatically stop, provided you selected the **Terminate after \_\_ minutes of inactivity** checkbox while creating the cluster. In such a case, the cluster automatically stops if it has been inactive for the specified time.

articles/azure-stack/asdk/asdk-prepare-host.md (1 addition, 1 deletion)

@@ -49,7 +49,7 @@ Before you can install the ASDK on the host computer, the ASDK host computer env
  6. On the **Optional settings** page, provide the local administrator account information for the development kit host computer and then click **Next**. You can also provide values for the following optional settings:
  -**Computername**: This option sets the name for the development kit host. The name must comply with FQDN requirements and must be 15 characters or less in length. The default is a random computer name generated by Windows.
- -**Static IP configuration**: Sets your deployment to use a static IP address. Otherwise, when the installer reboots into the cloudbuilder.vhx, the network interfaces are configured with DHCP.
+ -**Static IP configuration**: Sets your deployment to use a static IP address. Otherwise, when the installer reboots into the cloudbuilder.vhdx, the network interfaces are configured with DHCP.

articles/azure-stack/user/azure-stack-solution-machine-learning.md (3 additions, 5 deletions)

@@ -988,7 +988,7 @@ The Docker engine must be running locally to complete the following steps to ope
  1. Make sure the Azure resource provider **Microsoft.ContainerRegistry** is registered in the subscription. Register this resource provider before creating an environment in step 3. Check to see if it's already registered by using the following command:
  ```CLI
- az provider list --query "\[\].{Provider:namespace, Status:registrationState}" --out table
+ az provider list --query "[].{Provider:namespace, Status:registrationState}" --out table
  ```
  View this output:
@@ -1267,10 +1267,8 @@ From within the WSL Environment, run the following commands to install kubectl i
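
As a complement to the `az provider` CLI check shown above, here is a hedged Python sketch of the same check and registration using the `azure-mgmt-resource` SDK; the service-principal values and subscription ID are placeholders, not values from this article.

```python
# Hedged sketch: verify and, if needed, register the Microsoft.ContainerRegistry
# resource provider. All credential and subscription values are placeholders.
from azure.common.credentials import ServicePrincipalCredentials
from azure.mgmt.resource import ResourceManagementClient

credentials = ServicePrincipalCredentials(
    client_id="<application-id>",
    secret="<authentication-key>",
    tenant="<tenant-id>",
)
client = ResourceManagementClient(credentials, "<subscription-id>")

# Check the current registration state of the provider.
provider = client.providers.get("Microsoft.ContainerRegistry")
print(provider.namespace, provider.registration_state)

# Register the provider if it is not registered yet.
if provider.registration_state != "Registered":
    client.providers.register("Microsoft.ContainerRegistry")
```
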

articles/batch-ai/use-azure-storage.md (2 additions, 2 deletions)

@@ -94,7 +94,7 @@ If your training script requires knowledge of a path, you should pass it as a co
  ### Abbreviate input paths
- To abbreviate input paths as an environment variable, use the `inputDirectories` property of your `job.json` file (or `models.JobCreateParamters.input_directories` if using the Batch AI SDK). The schema of `inputDirectories` is:
+ To abbreviate input paths as an environment variable, use the `inputDirectories` property of your `job.json` file (or `models.JobCreateParameters.input_directories` if using the Batch AI SDK). The schema of `inputDirectories` is:
  ```json
  {
@@ -111,7 +111,7 @@ For more information, see [here](https://github.com/Azure/BatchAI/blob/master/do
  ### Abbreviate output paths
- To abbreviate output paths as an environment variable, use the `outputDirectories` property of your `job.json` file (or `models.JobCreateParamters.output_directories` if using the Batch AI SDK). Using this method can simplify the paths for output files. The schema of `outputDirectories` is:
+ To abbreviate output paths as an environment variable, use the `outputDirectories` property of your `job.json` file (or `models.JobCreateParameters.output_directories` if using the Batch AI SDK). Using this method can simplify the paths for output files. The schema of `outputDirectories` is:
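
To make the abbreviation concrete, a short Python sketch of how a training script might read these paths at run time follows; the directory IDs (`SCRIPT`, `MODEL`) and the `AZ_BATCHAI_INPUT_*` / `AZ_BATCHAI_OUTPUT_*` variable naming are assumptions based on the usual Batch AI convention, not values shown in this diff.

```python
# Hedged sketch: read the abbreviated input/output paths inside a training
# script. Assumes inputDirectories/outputDirectories entries with ids "SCRIPT"
# and "MODEL" were declared in job.json, and that Batch AI exposes them as
# AZ_BATCHAI_INPUT_<ID> / AZ_BATCHAI_OUTPUT_<ID> environment variables.
import os

input_dir = os.environ["AZ_BATCHAI_INPUT_SCRIPT"]
output_dir = os.environ["AZ_BATCHAI_OUTPUT_MODEL"]

print("Reading training data from:", input_dir)
print("Writing model checkpoints to:", os.path.join(output_dir, "model.ckpt"))
```
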

articles/load-balancer/quickstart-create-basic-load-balancer-portal.md (1 addition, 0 deletions)

@@ -154,6 +154,7 @@ To allow the Basic load balancer to monitor the status of your app, you use a he
  -**myHealthProbe** for the name of the health probe
  -**HTTP** for the protocol type
  -**80** for the port number
+ -**Healthprobe.aspx** for the URI path. You can either replace this value with any other URI or keep the default path value of **"\\"** to get the default URI.
  -**15** for **Interval**, the number of seconds between probe attempts
  -**2** for **Unhealthy threshold**, the number of consecutive probe failures that must occur before a VM is considered unhealthy

articles/load-balancer/quickstart-load-balancer-standard-public-portal.md (1 addition, 0 deletions)

@@ -141,6 +141,7 @@ To allow the load balancer to monitor the status of your app, you use a health p
  -*myHealthProbe* - for the name of the health probe.
  -**HTTP** - for the protocol type.
  -*80* - for the port number.
+ -*Healthprobe.aspx* - for the URI path. You can either replace this value with any other URI or keep the default path value of **"\\"** to get the default URI.
  -*15* - for number of **Interval** in seconds between probe attempts.
  -*2* - for number of **Unhealthy threshold** or consecutive probe failures that must occur before a VM is considered unhealthy.