Commit 9559c1a

Merge branch 'master' of https://github.com/MicrosoftDocs/azure-docs-pr into ds-qsrt1720628
2 parents e8443fd + 6b53614 commit 9559c1a

File tree

109 files changed: 2,516 additions, 878 deletions


articles/active-directory/cloud-provisioning/tutorial-pilot-aadc-aadccp.md

Lines changed: 4 additions & 2 deletions
@@ -7,7 +7,7 @@ manager: daveba
 ms.service: active-directory
 ms.workload: identity
 ms.topic: overview
-ms.date: 03/04/2020
+ms.date: 05/19/2020
 ms.subservice: hybrid
 ms.author: billmath
 ms.collection: M365-identity-device-management
@@ -196,7 +196,9 @@ Azure AD Connect sync synchronizes changes occurring in your on-premises directo
 3. Run `Start-ADSyncSyncCycle`. Hit Enter.
 
 >[!NOTE]
->If you are running your own custom scheduler for AAD Connect sync, then please enable the scheduler.
+>If you are running your own custom scheduler for Azure AD Connect sync, enable the scheduler.
+
+Once the scheduler is enabled, Azure AD Connect stops exporting changes on objects that have `cloudNoFlow=true` in the metaverse, unless a reference attribute (for example, `manager`) is being updated. If there is a reference attribute update on the object, Azure AD Connect ignores the `cloudNoFlow` signal and exports all updates on the object.
 
 ## Something went wrong
 In case the pilot does not work as expected, you can go back to the Azure AD Connect sync setup by following the steps below:

articles/aks/azure-files-dynamic-pv.md

Lines changed: 0 additions & 1 deletion
@@ -31,7 +31,6 @@ A storage class is used to define how an Azure file share is created. A storage
 * *Standard_ZRS* - standard zone redundant storage (ZRS)
 * *Standard_RAGRS* - standard read-access geo-redundant storage (RA-GRS)
 * *Premium_LRS* - premium locally redundant storage (LRS)
-* *Premium_ZRS* - premium zone redundant storage (GRS)
 
 > [!NOTE]
 > Azure Files support premium storage in AKS clusters that run Kubernetes 1.13 or higher, minimum premium file share is 100GB
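The SKU list above feeds the `skuName` parameter of a storage class. As a minimal sketch (not from this commit), a premium Azure Files storage class using the in-tree `kubernetes.io/azure-file` provisioner might look like the following; the class name and mount options are illustrative:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azurefile-premium      # illustrative name
provisioner: kubernetes.io/azure-file
mountOptions:
  - dir_mode=0777
  - file_mode=0777
parameters:
  skuName: Premium_LRS         # one of the SKUs listed above
```

A persistent volume claim referencing `azurefile-premium` would then dynamically provision a premium file share, subject to the 100 GB minimum noted above.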

articles/aks/cluster-autoscaler.md

Lines changed: 1 addition & 1 deletion
@@ -95,7 +95,7 @@ az aks update \
 The above example updates cluster autoscaler on the single node pool in *myAKSCluster* to a minimum of *1* and maximum of *5* nodes.
 
 > [!NOTE]
-> You can't set a higher minimum node count than is currently set for the node pool. For example, if you currently have min count set to *1*, you can't update the min count to *3*.
+> The cluster autoscaler makes its scaling decisions based on the minimum and maximum counts set on each node pool, but it doesn't enforce them immediately. For example, setting a minimum count of *5* when the current node count is *3* doesn't immediately scale the pool up to *5*. The new minimum takes effect once there are enough unschedulable pods present to require new nodes and trigger an autoscaler event. After that event occurs, the cluster autoscaler respects the new minimum count.
 
 Monitor the performance of your applications and services, and adjust the cluster autoscaler node counts to match the required performance.

articles/aks/developer-best-practices-pod-security.md

Lines changed: 9 additions & 6 deletions
@@ -66,14 +66,17 @@ Work with your cluster operator to determine what security context settings you
 
 To limit the risk of credentials being exposed in your application code, avoid the use of fixed or shared credentials. Credentials or keys shouldn't be included directly in your code. If these credentials are exposed, the application needs to be updated and redeployed. A better approach is to give pods their own identity and way to authenticate themselves, or automatically retrieve credentials from a digital vault.
 
-The following [associated AKS open source projects][aks-associated-projects] let you automatically authenticate pods or request credentials and keys from a digital vault:
+### Use Azure Container Compute Upstream projects
 
-* Managed identities for Azure resources, and
-* [Azure Key Vault Provider for Secrets Store CSI Driver](https://github.com/Azure/secrets-store-csi-driver-provider-azure#usage)
+> [!IMPORTANT]
+> Associated AKS open source projects are not supported by Azure technical support. They are provided for users to self-install into clusters and gather feedback from our community.
 
-Associated AKS open source projects are not supported by Azure technical support. They are provided to gather feedback and bugs from our community. These projects are not recommended for production use.
+The following [associated AKS open source projects][aks-associated-projects] let you automatically authenticate pods or request credentials and keys from a digital vault. These projects are maintained by the Azure Container Compute Upstream team and are part of a [broader list of projects available for use](https://github.com/Azure/container-compute-upstream/blob/master/README.md#support).
 
-### Use pod managed identities
+* [Azure Active Directory Pod Identity][aad-pod-identity]
+* [Azure Key Vault Provider for Secrets Store CSI Driver](https://github.com/Azure/secrets-store-csi-driver-provider-azure#usage)
+
+#### Use pod managed identities
 
 A managed identity for Azure resources lets a pod authenticate itself against Azure services that support it, such as Storage or SQL. The pod is assigned an Azure Identity that lets them authenticate to Azure Active Directory and receive a digital token. This digital token can be presented to other Azure services that check if the pod is authorized to access the service and perform the required actions. This approach means that no secrets are required for database connection strings, for example. The simplified workflow for pod managed identity is shown in the following diagram:
 
@@ -83,7 +86,7 @@ With a managed identity, your application code doesn't need to include credentia
 
 For more information about pod identities, see [Configure an AKS cluster to use pod managed identities and with your applications][aad-pod-identity]
 
-### Use Azure Key Vault with Secrets Store CSI Driver
+#### Use Azure Key Vault with Secrets Store CSI Driver
 
 Using the pod identity project enables authentication against supporting Azure services. For your own services or applications without managed identities for Azure resources, you can still authenticate using credentials or keys. A digital vault can be used to store these secret contents.
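As a sketch of how the pod identity project referenced above wires an identity to pods, aad-pod-identity uses two custom resources: one describing the Azure identity and one binding it to pods by label. All names, the resource ID path, and the client ID below are placeholders:

```yaml
apiVersion: "aadpodidentity.k8s.io/v1"
kind: AzureIdentity
metadata:
  name: demo-identity
spec:
  type: 0                      # 0 = user-assigned managed identity, 1 = service principal
  resourceID: /subscriptions/<subscription-id>/resourcegroups/<rg-name>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/demo-identity
  clientID: <client-id>
---
apiVersion: "aadpodidentity.k8s.io/v1"
kind: AzureIdentityBinding
metadata:
  name: demo-identity-binding
spec:
  azureIdentity: demo-identity
  selector: demo-app           # pods labeled aadpodidbinding: demo-app receive this identity
```

Pods carrying the `aadpodidbinding: demo-app` label can then request Azure AD tokens for the bound identity without any secrets in the pod spec.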
8992

articles/aks/egress-outboundtype.md

Lines changed: 0 additions & 3 deletions
@@ -117,9 +117,6 @@ DEVSUBNET_NAME="${PREFIX}dev"
 Next, set subscription IDs.
 
 ```azure-cli
-# Get ARM Access Token and Subscription ID - This will be used for AuthN later.
-
-ACCESS_TOKEN=$(az account get-access-token -o tsv --query 'accessToken')
 
 # NOTE: Update Subscription Name
 # Set Default Azure Subscription to be Used via Subscription ID

articles/aks/faq.md

Lines changed: 1 addition & 3 deletions
@@ -123,9 +123,7 @@ Windows Server support for node pool includes some limitations that are part of
 
 ## Does AKS offer a service-level agreement?
 
-AKS offers an optional [uptime SLA add-on][uptime-sla] for the Kubernetes API server. Customers who choose the SLA add-on are guaranteed 99.95% availability for clusters which are spread across availability zones and 99.9% availability for those which are not. For clusters without the SLA add-on, AKS maintaines a service-level objective of 99.5% availability.
-
-It is important to recognize the distinction between AKS service availability which refers to uptime of the Kubernetes control plane and the availability of your specific workload which is running on Azure Virtual Machines and therefore covered by the [VM SLA][vm-sla]. In the event that the Kubernetes control plane is unavailable, your applications will continue to function. However, you will be unable to deploy new applications, scale or upgrade existing applications, or perform other management operations on the cluster.
+AKS provides SLA guarantees as an optional add-on feature with [Uptime SLA][uptime-sla].
 
 ## Can I apply Azure reservation discounts to my AKS agent nodes?

articles/aks/private-clusters.md

Lines changed: 1 addition & 1 deletion
@@ -98,13 +98,13 @@ As mentioned, VNet peering is one way to access your private cluster. To use VNe
 * IP authorized ranges cannot be applied to the private api server endpoint, they only apply to the public API server
 * Availability Zones are currently supported for certain regions, see the beginning of this document
 * [Azure Private Link service limitations][private-link-service] apply to private clusters.
-* No support for virtual nodes in a private cluster to spin private Azure Container Instances (ACI) in a private Azure virtual network
 * No support for Azure DevOps integration out of the box with private clusters
 * For customers that need to enable Azure Container Registry to work with private AKS, the Container Registry virtual network must be peered with the agent cluster virtual network.
 * No current support for Azure Dev Spaces
 * No support for converting existing AKS clusters into private clusters
 * Deleting or modifying the private endpoint in the customer subnet will cause the cluster to stop functioning.
 * Azure Monitor for containers Live Data isn't currently supported.
+* Uptime SLA isn't currently supported.
 
 
 <!-- LINKS - internal -->

articles/aks/uptime-sla.md

Lines changed: 9 additions & 11 deletions
@@ -1,25 +1,25 @@
 ---
-title: Azure Kubernetes Service (AKS) high availability with Uptime SLA
-description: Learn about the optional high availability Uptime SLA offering for the Azure Kubernetes Service (AKS) API Server.
+title: Azure Kubernetes Service (AKS) with Uptime SLA
+description: Learn about the optional Uptime SLA offering for the Azure Kubernetes Service (AKS) API Server.
 services: container-service
 ms.topic: conceptual
 ms.date: 05/11/2020
 ---
 
 # Azure Kubernetes Service (AKS) Uptime SLA
 
-Uptime SLA is an optional feature to enable financially backed higher SLA for a cluster. Uptime SLA guarantees 99.95% availability of the Kubernetes API server endpoint for clusters that use [Availability Zone][availability-zones] and 99.9% of availability for clusters that don't use availability zones. AKS uses master node replicas across update and fault domains to ensure SLA requirements are met.
+Uptime SLA is an optional feature to enable a financially backed, higher SLA for a cluster. Uptime SLA guarantees 99.95% availability of the Kubernetes API server endpoint for clusters that use [Availability Zones][availability-zones] and 99.9% availability for clusters that don't use Availability Zones. AKS uses master node replicas across update and fault domains to ensure SLA requirements are met.
 
-Customers needing SLA for compliance reasons or extending SLA's to their customers should turn on this feature. Customers with critical workloads who need higher availability with an option of SLA benefit from enabling this feature. Enable the feature with Availability Zones to obtain higher availability of the Kubernetes API server.
+Customers that need an SLA to meet compliance requirements, or that extend an SLA to their end users, should enable this feature. Customers with critical workloads that need higher uptime may also want to enable it. Using the Uptime SLA feature together with Availability Zones provides the highest availability for the Kubernetes API server.
 
-Customers can create unlimited free clusters with a service level objective (SLO) of 99.5%.
+Customers can still create unlimited free clusters with a service level objective (SLO) of 99.5%, and can opt in to the Uptime SLA as needed.
 
 > [!Important]
 > For clusters with egress lockdown, see [limit egress traffic](limit-egress-traffic.md) to open appropriate ports for Uptime SLA.
 
 ## SLA terms and conditions
 
-Uptime SLA is a paid feature and enabled per cluster. Uptime SLA pricing is determined by the number of clusters, and not by the size of the clusters. You can view [Uptime SLA pricing details](https://azure.microsoft.com/pricing/details/kubernetes-service/) for more information.
+Uptime SLA is a paid feature enabled per cluster. Uptime SLA pricing is determined by the number of discrete clusters, not by the size of the individual clusters. See [Uptime SLA pricing details](https://azure.microsoft.com/pricing/details/kubernetes-service/) for more information.
 
 ## Region Availability
 
@@ -59,19 +59,17 @@ After a few minutes, the command completes and returns JSON-formatted informatio
 "name": "Basic",
 "tier": "Paid"
 },
-"tags": null,
-"type": "Microsoft.ContainerService/ManagedClusters",
-"windowsProfile": null
 ```
 
 ## Limitations
 
-* You can't currently add Uptime SLA to existing clusters.
-* Currently, there is no way to remove Uptime SLA from an AKS cluster.
+* Currently, you can't convert an existing cluster to enable the Uptime SLA.
+* Currently, there is no way to remove Uptime SLA from an AKS cluster after it is created with the feature enabled.
 
 ## Next steps
 
 Use [Availability Zones][availability-zones] to increase high availability with your AKS cluster workloads.
+Configure your cluster to [limit egress traffic](limit-egress-traffic.md).
 
 <!-- LINKS - External -->
 [azure-support]: https://ms.portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest

articles/aks/use-network-policies.md

Lines changed: 4 additions & 4 deletions
@@ -77,7 +77,7 @@ The following example script:
 * Creates an Azure Active Directory (Azure AD) service principal for use with the AKS cluster.
 * Assigns *Contributor* permissions for the AKS cluster service principal on the virtual network.
 * Creates an AKS cluster in the defined virtual network and enables network policy.
-* The *azure* network policy option is used. To use Calico as the network policy option instead, use the `--network-policy calico` parameter. Note: Calico could be used with either `--network-plugin azure` or `--network-plugin kubenet`.
+* The _Azure Network_ policy option is used. To use Calico as the network policy option instead, use the `--network-policy calico` parameter. Note: Calico can be used with either `--network-plugin azure` or `--network-plugin kubenet`.
 
 Note that instead of using a service principal, you can use a managed identity for permissions. For more information, see [Use managed identities](use-managed-identity.md).
 
@@ -142,7 +142,7 @@ az aks get-credentials --resource-group $RESOURCE_GROUP_NAME --name $CLUSTER_NAM
 
 ## Deny all inbound traffic to a pod
 
-Before you define rules to allow specific network traffic, first create a network policy to deny all traffic. This policy gives you a starting point to begin to whitelist only the desired traffic. You can also clearly see that traffic is dropped when the network policy is applied.
+Before you define rules to allow specific network traffic, first create a network policy to deny all traffic. This policy gives you a starting point to begin to create an allow list for only the desired traffic. You can also clearly see that traffic is dropped when the network policy is applied.
 
 For the sample application environment and traffic rules, let's first create a namespace called *development* to run the example pods:
 
@@ -470,9 +470,9 @@ To learn more about policies, see [Kubernetes network policies][kubernetes-netwo
 [policy-rules]: https://kubernetes.io/docs/concepts/services-networking/network-policies/#behavior-of-to-and-from-selectors
 [aks-github]: https://github.com/azure/aks/issues
 [tigera]: https://www.tigera.io/
-[calicoctl]: https://docs.projectcalico.org/v3.9/reference/calicoctl/
+[calicoctl]: https://docs.projectcalico.org/reference/calicoctl/
 [calico-support]: https://www.tigera.io/tigera-products/calico/
-[calico-logs]: https://docs.projectcalico.org/v3.9/maintenance/component-logs
+[calico-logs]: https://docs.projectcalico.org/maintenance/troubleshoot/component-logs
 [calico-aks-cleanup]: https://github.com/Azure/aks-engine/blob/master/docs/topics/calico-3.3.1-cleanup-after-upgrade.yaml
 
 <!-- LINKS - internal -->
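The deny-all starting point described in the diff above is typically expressed as a NetworkPolicy with an empty pod selector and no ingress rules. A minimal sketch, assuming the article's *development* namespace (the policy name is illustrative):

```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default-deny
  namespace: development
spec:
  podSelector: {}    # an empty selector matches every pod in the namespace
  policyTypes:
  - Ingress          # Ingress is listed with no rules, so all inbound traffic is denied
```

Applying this manifest with `kubectl apply -f` blocks all inbound traffic to pods in *development* until allow rules are added on top of it.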

articles/azure-functions/TOC.yml

Lines changed: 3 additions & 1 deletion
@@ -49,7 +49,7 @@
 - name: Machine learning with TensorFlow
   href: functions-machine-learning-tensorflow.md
 - name: Image classification with PyTorch
-  href: machine-learning-pytorch.md
+  href: machine-learning-pytorch.md
 - name: Create a custom Linux image
   href: functions-create-function-linux-custom-image.md
 - name: Functions on IoT Edge device
@@ -335,6 +335,8 @@
 items:
 - name: Troubleshoot storage
   href: functions-recover-storage-account.md
+- name: Troubleshoot Python dependencies
+  href: recover-module-not-found.md
 - name: Reference
 items:
 - name: API references
