articles/active-directory/authentication/concept-sspr-howitworks.md (2 additions, 0 deletions)
@@ -151,6 +151,8 @@ Custom security questions are not localized for different locales. All custom qu
The maximum length of a custom security question is 200 characters.
+ To view the password reset portal and questions in a different localized language, append "?mkt=<Locale>" to the end of the password reset URL. For example, the following URL localizes the page to Spanish: [https://passwordreset.microsoftonline.com/?mkt=es-us](https://passwordreset.microsoftonline.com/?mkt=es-us).
### Security question requirements
* The minimum answer character limit is three characters.
articles/aks/networking-overview.md (29 additions, 11 deletions)
@@ -35,15 +35,41 @@ Nodes in an AKS cluster configured for Advanced networking use the [Azure Contai
Advanced networking provides the following benefits:
* Deploy your AKS cluster into an existing VNet, or create a new VNet and subnet for your cluster.
- * Every pod in the cluster is assigned an IP address in the VNet, and can directly communicate with other pods in the cluster, and other VMs in the VNet.
+ * Every pod in the cluster is assigned an IP address in the VNet, and can directly communicate with other pods in the cluster, and other nodes in the VNet.
* A pod can connect to other services in a peered VNet, and to on-premises networks over ExpressRoute and site-to-site (S2S) VPN connections. Pods are also reachable from on-premises.
* Expose a Kubernetes service externally or internally through the Azure Load Balancer. Also a feature of Basic networking.
* Pods in a subnet that have service endpoints enabled can securely connect to Azure services, for example Azure Storage and SQL DB.
* Use user-defined routes (UDR) to route traffic from pods to a Network Virtual Appliance.
* Pods can access resources on the public Internet. Also a feature of Basic networking.
> [!IMPORTANT]
- > Each node in an AKS cluster configured for Advanced networking can host a maximum of **30 pods**. Each VNet provisioned for use with the Azure CNI plugin is limited to **4096 IP addresses** (/20).
+ > Each node in an AKS cluster configured for Advanced networking can host a maximum of **30 pods**. Each VNet provisioned for use with the Azure CNI plugin is limited to **4096 configured IP addresses**.
+ ## Advanced networking prerequisites
+
+ * The VNet for the AKS cluster must allow outbound internet connectivity.
+ * Do not create more than one AKS cluster in the same subnet.
+ * Advanced networking for AKS does not support VNets that use Azure Private DNS Zones.
+ * AKS clusters may not use `169.254.0.0/16`, `172.30.0.0/16`, or `172.31.0.0/16` for the Kubernetes service address range.
+ * The service principal used for the AKS cluster must have `Owner` permissions on the resource group containing the existing VNet.
+
+ ## Plan IP addressing for your cluster
+
+ Clusters configured with Advanced networking require additional planning. The size of your VNet and its subnet must accommodate both the number of pods you plan to run and the number of nodes for the cluster.
+
+ IP addresses for the pods and the cluster's nodes are assigned from the specified subnet within the VNet. Each node is configured with a primary IP address, which is the IP of the node itself, plus 30 additional IP addresses pre-configured by Azure CNI that are assigned to pods scheduled to the node. When you scale out your cluster, each node is similarly configured with IP addresses from the subnet.
+
+ The IP address plan for an AKS cluster consists of a VNet, at least one subnet for nodes and pods, and a Kubernetes service address range.
+
+ | Address range / Azure resource | Limits and sizing |
+ | --------- | ------------- |
+ | Virtual network | An Azure VNet can be as large as /8, but may have only 4096 configured IP addresses. |
+ | Subnet | Must be large enough to accommodate the nodes and pods. To calculate your minimum subnet size: (number of nodes) + (number of nodes * pods per node). For a 50-node cluster: (50) + (50 * 30) = 1,550, so the subnet needs to be /21 or larger. |
+ | Kubernetes service address range | This range should not be used by any network element on or connected to this VNet. The service address CIDR must be smaller than /12. |
+ | Kubernetes DNS service IP address | An IP address within the Kubernetes service address range that is used by cluster service discovery (kube-dns). |
+ | Docker bridge address | The IP address (in CIDR notation) used as the Docker bridge IP address on nodes. Default of 172.17.0.1/16. |
+
+ As mentioned previously, each VNet provisioned for use with the Azure CNI plugin is limited to **4096 configured IP addresses**. Each node in a cluster configured for Advanced networking can host a maximum of **30 pods**.
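As an editor's illustration of the subnet-sizing formula in the table above (not part of the source article), the following Python sketch computes the required address count and the smallest subnet prefix for a given node count, assuming the default of 30 pods per node:

```python
import math

def minimum_subnet(nodes, pods_per_node=30):
    """Sizing formula from the table above:
    required IPs = (number of nodes) + (number of nodes * pods per node).
    Returns the required address count and the smallest prefix length that fits it.
    Note: Azure also reserves a few addresses in every subnet, so treat the
    result as a lower bound.
    """
    required = nodes + nodes * pods_per_node
    prefix = 32 - math.ceil(math.log2(required))  # smallest power-of-two block >= required
    return required, prefix

print(minimum_subnet(50))  # (1550, 21) -> a /21 subnet (2,048 addresses) is large enough
```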
## Configure advanced networking
@@ -63,14 +89,6 @@ The following screenshot from the Azure portal shows an example of configuring t
![Advanced networking configuration in the Azure portal][portal-01-networking-advanced]
- ## Plan IP addressing for your cluster
-
- Clusters configured with Advanced networking require additional planning. The size of your VNet and its subnet must accommodate the number of pods you plan to run simultaneously in the cluster, as well as your scaling requirements.
-
- IP addresses for the pods and the cluster's nodes are assigned from the specified subnet within the VNet. Each node is configured with a primary IP, which is the IP of the node itself, and 30 additional IP addresses pre-configured by Azure CNI that are assigned to pods scheduled to the node. When you scale out your cluster, each node is similarly configured with IP addresses from the subnet.
-
- As mentioned previously, each VNet provisioned for use with the Azure CNI plugin is limited to **4096 IP addresses** (/20). Each node in a cluster configured for Advanced networking can host a maximum of **30 pods**.
## Frequently asked questions
The following questions and answers apply to the **Advanced** networking configuration.
@@ -89,7 +107,7 @@ The following questions and answers apply to the **Advanced** networking configu
**Is the maximum number of pods deployable to a node configurable?**
- By default, each node can host a maximum of 30 pods. You can currently change the maximum value only by modifying the `maxPods` property when deploying a cluster with a Resource Manager template.
+ By default, each node can host a maximum of 30 pods. You can change the maximum value only by modifying the `maxPods` property when deploying a cluster with a Resource Manager template.
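As a hedged illustration (not from the article), `maxPods` is set on the agent pool profile of the Resource Manager template. The fragment below is expressed as a Python dictionary purely for illustration; the property names are assumptions based on the Microsoft.ContainerService/managedClusters schema, so verify them against the current ARM template reference:

```python
# Hypothetical fragment of an AKS Resource Manager template's agent pool profile,
# shown as a Python dict for illustration only. Property names are assumptions;
# verify against the Microsoft.ContainerService/managedClusters schema before use.
agent_pool_profile = {
    "name": "agentpool",
    "count": 3,
    "vmSize": "Standard_DS2_v2",
    "vnetSubnetID": "<SUBNET-RESOURCE-ID>",  # subnet used for Advanced networking
    "maxPods": 60,                           # overrides the default of 30 pods per node
}
```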
**How do I configure additional properties for the subnet that I created during AKS cluster creation? For example, service endpoints.**
articles/api-management/api-management-api-import-restrictions.md (3 additions, 2 deletions)
@@ -21,10 +21,11 @@ ms.author: apipm
## About this list
When importing an API, you might come across some restrictions or identify issues that need to be rectified before you can successfully import. This article documents these, organized by the import format of the API.
- ## <a name="open-api"></a>Open API/Swagger
- If you are receiving errors importing your Open API document, ensure you have validated it - either using the designer in the Azure portal (Design - Front End - Open API Specification Editor), or with a third-party tool such as <a href="http://www.swagger.io">Swagger Editor</a>.
+ ## <a name="open-api"></a>OpenAPI/Swagger
+ If you are receiving errors importing your OpenAPI document, ensure you have validated it - either using the designer in the Azure portal (Design - Front End - OpenAPI Specification Editor), or with a third-party tool such as <a href="http://www.swagger.io">Swagger Editor</a>.
* Only JSON format for OpenAPI is supported.
+ * Required parameters across both path and query must have unique names. (In OpenAPI, a parameter name only needs to be unique within a location, for example path, query, or header. However, API Management allows operations to be discriminated by both path and query parameters, which OpenAPI does not support, so parameter names must be unique within the entire URL template. A short illustration follows this list.)
* Schemas referenced using **$ref** properties can't contain other **$ref** properties.
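To make the path/query naming rule above concrete, here is a minimal Python sketch (an editor's illustration, not from the article) of the uniqueness check that API Management effectively applies across a single URL template:

```python
def violates_apim_name_rule(parameters):
    """Return True if two required path/query parameters share a name.

    OpenAPI only requires (name, location) to be unique, but API Management
    requires the name itself to be unique across the whole URL template.
    """
    names = [p["name"] for p in parameters
             if p.get("required") and p.get("in") in ("path", "query")]
    return len(names) != len(set(names))

# GET /orders/{id}?id=... is valid OpenAPI, but would be rejected on import.
params = [
    {"name": "id", "in": "path", "required": True},
    {"name": "id", "in": "query", "required": True},
]
print(violates_apim_name_rule(params))  # True
```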
articles/azure-databricks/databricks-extract-load-sql-data-warehouse.md (50 additions, 20 deletions)
@@ -13,11 +13,10 @@ ms.devlang: na
ms.topic: tutorial
ms.tgt_pltfrm: na
ms.workload: "Active"
- ms.date: 03/23/2018
+ ms.date: 05/29/2018
ms.author: nitinme
---
# Tutorial: Extract, transform, and load data using Azure Databricks
In this tutorial, you perform an ETL (extract, transform, and load) operation by using Azure Databricks. You extract data from Azure Data Lake Store into Azure Databricks, run transformations on the data in Azure Databricks, and then load the transformed data into Azure SQL Data Warehouse.
@@ -49,15 +48,15 @@ Before you start with this tutorial, make sure to meet the following requirement
- Create a database master key for the Azure SQL Data Warehouse. Follow the instructions at [Create a Database Master Key](https://docs.microsoft.com/sql/relational-databases/security/encryption/create-a-database-master-key).
- Create an Azure Blob storage account, and a container within it. Also, retrieve the access key to access the storage account. Follow the instructions at [Quickstart: Create an Azure Blob storage account](../storage/blobs/storage-quickstart-blobs-portal.md).
- ## Log in to the Azure portal
+ ## Log in to the Azure Portal
Log in to the [Azure portal](https://portal.azure.com/).
## Create an Azure Databricks workspace
In this section, you create an Azure Databricks workspace using the Azure portal.
- 1. In the Azure portal, select **Create a resource** > **Data + Analytics** > **Azure Databricks**.
+ 1. In the Azure portal, select **Create a resource** > **Data + Analytics** > **Azure Databricks**.

@@ -192,22 +191,6 @@ When programmatically logging in, you need to pass the tenant ID with your authe
- ### Associate service principal with Azure Data Lake Store
-
- In this section, you associate the Azure Data Lake Store account with the Azure Active Directory service principal you created. This ensures that you can access the Data Lake Store account from Azure Databricks.
-
- 1. From the [Azure portal](https://portal.azure.com), select the Data Lake Store account you created.
-
- 2. From the left pane, select **Access Control** > **Add**.
-
-    
-
- 3. In **Add permissions**, select a role that you want to assign to the service principal. For this tutorial, select **Owner**. For **Assign access to**, select **Azure AD, user, group, or application**. For **Select** enter the name of the service principal you created to filter down the number of service principals to select from.
-
-    
-
-    Select the service principal you created earlier, and then select **Save**. The service principal is now associated with the Azure Data Lake Store account.
## Upload data to Data Lake Store
In this section, you upload a sample data file to Data Lake Store. You use this file later in Azure Databricks to run some transformations. The sample data (**small_radio_json.json**) that you use in this tutorial is available in this [GitHub repo](https://github.com/Azure/usql/blob/master/Examples/Samples/Data/json/radiowebsite/small_radio_json.json).
@@ -228,6 +211,53 @@ In this section, you upload a sample data file to Data Lake Store. You use this
5. In this tutorial, you uploaded the data file to the root of the Data Lake Store. So, the file is now available at `adl://<YOUR_DATA_LAKE_STORE_ACCOUNT_NAME>.azuredatalakestore.net/small_radio_json.json`.
+ ## Associate service principal with Azure Data Lake Store
+
+ In this section, you associate the Azure Data Lake Store account with the Azure Active Directory service principal you created. This ensures that you can access the Data Lake Store account from Azure Databricks. For the scenario in this article, you read the data in Data Lake Store to populate a table in SQL Data Warehouse. According to [Overview of Access Control in Data Lake Store](../data-lake-store/data-lake-store-access-control.md#common-scenarios-related-to-permissions), to have read access on a file in Data Lake Store, you must have:
+
+ - **Execute** permissions on all the folders in the folder structure leading up to the file.
+ - **Read** permissions on the file itself.
+
+ Perform the following steps to grant these permissions.
+
+ 1. From the [Azure portal](https://portal.azure.com), select the Data Lake Store account you created, and then select **Data Explorer**.
+
+    
+
+ 2. In this scenario, because the sample data file is at the root of the folder structure, you only need to assign **Execute** permissions at the folder root. To do so, from the root of Data Explorer, select **Access**.
+
+    
+
+ 3. Under **Access**, select **Add**.
+
+    
+
+ 4. Under **Assign permissions**, click **Select user or group** and search for the Azure Active Directory service principal you created earlier.
+
+    
+
+    Select the AAD service principal you want to assign and click **Select**.
+
+ 5. Under **Assign permissions**, click **Select permissions** > **Execute**. Keep the other default values, and select **OK** under **Select permissions** and then under **Assign permissions**.
+
+    
+
+ 6. Go back to Data Explorer and click the file on which you want to assign the read permission. Under **File Preview**, select **Access**.
+
+    
+
+ 7. Under **Access**, select **Add**. Under **Assign permissions**, click **Select user or group** and search for the Azure Active Directory service principal you created earlier.
+
+    
+
+    Select the AAD service principal you want to assign and click **Select**.
+
+ 8. Under **Assign permissions**, click **Select permissions** > **Read**. Select **OK** under **Select permissions** and then under **Assign permissions**.
+
+    
+
+ The service principal now has sufficient permissions to read the sample data file from Azure Data Lake Store.
## Extract data from Data Lake Store
In this section, you create a notebook in your Azure Databricks workspace and then run code snippets to extract data from Data Lake Store into Azure Databricks.
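As a hedged sketch (not the article's own snippet) of what the notebook code for this extraction step typically looks like when reading from Data Lake Store Gen1 with a service principal, the following uses the ADLS Hadoop connector's `dfs.adls.oauth2.*` settings; verify the exact keys and snippet against the article before relying on it:

```python
# Run inside an Azure Databricks notebook, where `spark` is predefined.
# Placeholders are the values created earlier in the tutorial; the configuration
# keys are the ADLS Gen1 Hadoop connector settings as best understood here.
spark.conf.set("dfs.adls.oauth2.access.token.provider.type", "ClientCredential")
spark.conf.set("dfs.adls.oauth2.client.id", "<APPLICATION-ID>")
spark.conf.set("dfs.adls.oauth2.credential", "<AUTHENTICATION-KEY>")
spark.conf.set("dfs.adls.oauth2.refresh.url",
               "https://login.microsoftonline.com/<TENANT-ID>/oauth2/token")

# Read the sample file uploaded earlier into a Spark DataFrame.
df = spark.read.json(
    "adl://<DATA-LAKE-STORE-ACCOUNT-NAME>.azuredatalakestore.net/small_radio_json.json")
df.show()
```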