
Commit eabd38a

Merge branch 'master' of https://github.com/MicrosoftDocs/azure-docs-pr into release-sf-7.1

2 parents: f59b79e + d21d295
264 files changed: +1485 additions, -1381 deletions


.openpublishing.redirection.json

Lines changed: 100 additions & 5 deletions
@@ -40082,12 +40082,12 @@
     },
     {
       "source_path": "articles/iot-central/tutorial-add-device-pnp.md",
-      "redirect_url": "/azure/iot-central/core/quick-create-pnp-device",
+      "redirect_url": "/azure/iot-central/core/tutorial-connect-pnp-device",
       "redirect_document_id": false
     },
     {
       "source_path": "articles/iot-central/tutorial-define-device-type-pnp.md",
-      "redirect_url": "/azure/iot-central/core/quick-create-pnp-device",
+      "redirect_url": "/azure/iot-central/core/howto-set-up-template",
       "redirect_document_id": false
     },
     {
@@ -46282,7 +46282,7 @@
     },
     {
       "source_path": "articles/iot-central/quick-create-pnp-device-pnp.md",
-      "redirect_url": "/azure/iot-central/core/quick-create-pnp-device/",
+      "redirect_url": "/azure/iot-central/core/quick-create-simulated-device/",
       "redirect_document_id": false
     },
     {
@@ -46347,7 +46347,12 @@
     },
     {
       "source_path": "articles/iot-central/core/quick-create-pnp-device-pnp.md",
-      "redirect_url": "/azure/iot-central/core/quick-create-pnp-device/",
+      "redirect_url": "/azure/iot-central/core/quick-create-simulated-device/",
+      "redirect_document_id": false
+    },
+    {
+      "source_path": "articles/iot-central/core/quick-create-pnp-device.md",
+      "redirect_url": "/azure/iot-central/core/quick-create-simulated-device/",
       "redirect_document_id": false
     },
     {
@@ -46782,7 +46787,7 @@
     },
     {
       "source_path": "articles/iot-central/preview/quick-create-pnp-device.md",
-      "redirect_url": "/azure/iot-central/core/quick-create-pnp-device/",
+      "redirect_url": "/azure/iot-central/core/quick-create-simulated-device/",
       "redirect_document_id": false
     },
     {
@@ -46880,6 +46885,66 @@
       "redirect_url": "/azure/iot-central/core/tutorial-connect-device-nodejs",
       "redirect_document_id": true
     },
+    {
+      "source_path": "articles/iot-central/retail/architecture-connected-logistics-pnp.md",
+      "redirect_url": "/azure/iot-central/retail/architecture-connected-logistics",
+      "redirect_document_id": true
+    },
+    {
+      "source_path": "articles/iot-central/retail/architecture-digital-distribution-center-pnp.md",
+      "redirect_url": "/azure/iot-central/retail/architecture-digital-distribution-center",
+      "redirect_document_id": true
+    },
+    {
+      "source_path": "articles/iot-central/retail/architecture-micro-fulfillment-center-pnp.md",
+      "redirect_url": "/azure/iot-central/retail/architecture-micro-fulfillment-center",
+      "redirect_document_id": true
+    },
+    {
+      "source_path": "articles/iot-central/retail/architecture-smart-inventory-management-pnp.md",
+      "redirect_url": "/azure/iot-central/retail/architecture-smart-inventory-management",
+      "redirect_document_id": true
+    },
+    {
+      "source_path": "articles/iot-central/retail/overview-iot-central-retail-pnp.md",
+      "redirect_url": "/azure/iot-central/retail/overview-iot-central-retail",
+      "redirect_document_id": true
+    },
+    {
+      "source_path": "articles/iot-central/retail/tutorial-in-store-analytics-create-app-pnp.md",
+      "redirect_url": "/azure/iot-central/retail/tutorial-in-store-analytics-create-app",
+      "redirect_document_id": true
+    },
+    {
+      "source_path": "articles/iot-central/retail/tutorial-in-store-analytics-customize-dashboard-pnp.md",
+      "redirect_url": "/azure/iot-central/retail/tutorial-in-store-analytics-customize-dashboard",
+      "redirect_document_id": true
+    },
+    {
+      "source_path": "articles/iot-central/retail/tutorial-in-store-analytics-export-data-visualize-insights-pnp.md",
+      "redirect_url": "/azure/iot-central/retail/tutorial-in-store-analytics-export-data-visualize-insights",
+      "redirect_document_id": true
+    },
+    {
+      "source_path": "articles/iot-central/retail/tutorial-iot-central-connected-logistics-pnp.md",
+      "redirect_url": "/azure/iot-central/retail/tutorial-iot-central-connected-logistics",
+      "redirect_document_id": true
+    },
+    {
+      "source_path": "articles/iot-central/retail/tutorial-iot-central-digital-distribution-center-pnp.md",
+      "redirect_url": "/azure/iot-central/retail/tutorial-iot-central-digital-distribution-center",
+      "redirect_document_id": true
+    },
+    {
+      "source_path": "articles/iot-central/retail/tutorial-iot-central-smart-inventory-management-pnp.md",
+      "redirect_url": "/azure/iot-central/retail/tutorial-iot-central-smart-inventory-management",
+      "redirect_document_id": true
+    },
+    {
+      "source_path": "articles/iot-central/retail/tutorial-micro-fulfillment-center-pnp.md",
+      "redirect_url": "/azure/iot-central/retail/tutorial-micro-fulfillment-center",
+      "redirect_document_id": true
+    },
     {
       "source_path": "articles/iot-accelerators/iot-accelerators-arduino-iot-devkit-az3166-devkit-remote-monitoringV2.md",
       "redirect_url": "/azure/iot-accelerators/iot-accelerators-arduino-iot-devkit-az3166-devkit-remote-monitoring-v2",
@@ -50588,6 +50653,36 @@
       "source_path": "articles/ansible/index.yml",
       "redirect_url": "/azure/developer/ansible/",
       "redirect_document_id": false
+    },
+    {
+      "source_path": "articles/chef/chef-automation.md",
+      "redirect_url": "/azure/developer/chef/windows-vm-configure",
+      "redirect_document_id": false
+    },
+    {
+      "source_path": "articles/chef/chef-extension-portal.md",
+      "redirect_url": "/azure/developer/chef/client-install-from-azure-portal",
+      "redirect_document_id": false
+    },
+    {
+      "source_path": "articles/chef/chef-habitat-overview.md",
+      "redirect_url": "/azure/developer/chef/habitat-overview",
+      "redirect_document_id": false
+    },
+    {
+      "source_path": "articles/chef/chef-inspec-overview.md",
+      "redirect_url": "/azure/developer/chef/inspec-overview",
+      "redirect_document_id": false
+    },
+    {
+      "source_path": "articles/chef/chef-overview.md",
+      "redirect_url": "/azure/developer/chef/overview",
+      "redirect_document_id": false
+    },
+    {
+      "source_path": "articles/chef/index.yml",
+      "redirect_url": "/azure/developer/chef/",
+      "redirect_document_id": false
     }
   ]
 }
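Each redirection entry above has the same shape: a retired `source_path` in the repo maps to the live `redirect_url`. As a rough illustration only (plain Python, not the docs publishing system), resolving a redirect amounts to a lookup over these entries; `resolve_redirect` is a hypothetical helper written for this sketch.

```python
# Hypothetical sketch, not part of the OPS build system: resolve one
# .openpublishing.redirection.json entry from source_path to redirect_url.
import json
from typing import Optional

# Two entries copied from the diff above, in the file's own shape.
REDIRECTION_JSON = """
{
  "redirections": [
    {
      "source_path": "articles/iot-central/core/quick-create-pnp-device.md",
      "redirect_url": "/azure/iot-central/core/quick-create-simulated-device/",
      "redirect_document_id": false
    },
    {
      "source_path": "articles/chef/chef-overview.md",
      "redirect_url": "/azure/developer/chef/overview",
      "redirect_document_id": false
    }
  ]
}
"""

def resolve_redirect(source_path: str, entries) -> Optional[str]:
    """Return the redirect URL for a retired source path, or None."""
    for entry in entries:
        if entry["source_path"] == source_path:
            return entry["redirect_url"]
    return None

entries = json.loads(REDIRECTION_JSON)["redirections"]
print(resolve_redirect("articles/chef/chef-overview.md", entries))
# → /azure/developer/chef/overview
```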

articles/active-directory/develop/scenario-web-api-call-api-app-configuration.md

Lines changed: 1 addition & 1 deletion
@@ -211,7 +211,7 @@ class MsalAuthHelper {
 
 The On-behalf-of (OBO) flow is used to obtain a token to call the downstream web API. In this flow, your web API receives a bearer token with user delegated permissions from the client application and then exchanges this token for another access token to call the downstream web API.
 
-A Python web API will need to use some middleware to validate the bearer token received from the client. The web API can then obtain the access token for downstream API using MSAL Python library by calling the [`acquire_token_on_behalf_of`](https://msal-python.readthedocs.io/en/latest/?badge=latest#msal.ConfidentialClientApplication.acquire_token_on_behalf_of) method. A sample demonstrating this flow with MSAL Python is not yet available.
+A Python web API will need to use some middleware to validate the bearer token received from the client. The web API can then obtain the access token for downstream API using MSAL Python library by calling the [`acquire_token_on_behalf_of`](https://msal-python.readthedocs.io/en/latest/?badge=latest#msal.ConfidentialClientApplication.acquire_token_on_behalf_of) method. For an example of using this API, see the [test code for the microsoft-authentication-library-for-python on GitHub](https://github.com/AzureAD/microsoft-authentication-library-for-python/blob/1.2.0/tests/test_e2e.py#L429-L472). Also see the discussion of [issue 53](https://github.com/AzureAD/microsoft-authentication-library-for-python/issues/53) in that same repository for an approach that bypasses the need for a middle-tier application.
 
 ---
 
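The OBO exchange described in that paragraph can be sketched with MSAL Python's `ConfidentialClientApplication.acquire_token_on_behalf_of`. This is a minimal sketch, not a complete web API: `TENANT_ID`, `CLIENT_ID`, `CLIENT_SECRET`, and the Graph scope are placeholders you must supply, and validation of the incoming token by middleware is assumed to have already happened.

```python
# Hedged sketch of the On-behalf-of (OBO) exchange, assuming MSAL Python
# is installed and placeholder credentials are replaced with real values.

def parse_bearer(authorization_header: str) -> str:
    """Extract the raw bearer token from an Authorization header value."""
    scheme, _, token = authorization_header.partition(" ")
    if scheme.lower() != "bearer" or not token:
        raise ValueError("expected 'Bearer <token>'")
    return token

def get_downstream_token(incoming_bearer: str) -> dict:
    """Exchange the caller's token for a downstream-API token via OBO."""
    import msal  # deferred import so the helper above stays dependency-free

    app = msal.ConfidentialClientApplication(
        "CLIENT_ID",  # placeholder application (client) ID
        authority="https://login.microsoftonline.com/TENANT_ID",  # placeholder tenant
        client_credential="CLIENT_SECRET",  # placeholder secret or certificate
    )
    # Returns a dict with "access_token" on success or "error" on failure.
    return app.acquire_token_on_behalf_of(
        user_assertion=incoming_bearer,
        scopes=["https://graph.microsoft.com/.default"],  # assumed downstream scope
    )
```

In a real web API, `parse_bearer` would run after the middleware has validated the token's signature and claims; `get_downstream_token` performs a network call to Azure AD, so it cannot run as-is without real credentials.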

articles/active-directory/hybrid/reference-connect-version-history.md

Lines changed: 7 additions & 0 deletions
@@ -43,6 +43,13 @@ Not all releases of Azure AD Connect will be made available for auto upgrade. Th
 >
 >Please refer to [this article](https://docs.microsoft.com/azure/active-directory/hybrid/how-to-upgrade-previous-version) to learn more about how to upgrade Azure AD Connect to the latest version.
 
+## 1.5.20.0
+
+### Release status
+04/09/2020: Released for download
+
+### Fixed issues
+This hotfix build fixes an issue with build 1.5.18.0 if you have the Group Filtering feature enabled and use mS-DS-ConsistencyGuid as the source anchor.
 
 ## 1.5.18.0
 
articles/aks/use-pod-security-policies.md

Lines changed: 15 additions & 91 deletions
@@ -3,8 +3,7 @@ title: Use pod security policies in Azure Kubernetes Service (AKS)
 description: Learn how to control pod admissions by using PodSecurityPolicy in Azure Kubernetes Service (AKS)
 services: container-service
 ms.topic: article
-ms.date: 04/17/2019
-
+ms.date: 04/08/2020
 ---
 
 # Preview - Secure your cluster using pod security policies in Azure Kubernetes Service (AKS)
@@ -99,17 +98,17 @@ NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP
 privileged   true   *   RunAsAny   RunAsAny   RunAsAny   RunAsAny   false   *   configMap,emptyDir,projected,secret,downwardAPI,persistentVolumeClaim
 ```
 
-The *privileged* pod security policy is applied to any authenticated user in the AKS cluster. This assignment is controlled by ClusterRoles and ClusterRoleBindings. Use the [kubectl get clusterrolebindings][kubectl-get] command and search for the *default:privileged:* binding:
+The *privileged* pod security policy is applied to any authenticated user in the AKS cluster. This assignment is controlled by ClusterRoles and ClusterRoleBindings. Use the [kubectl get rolebindings][kubectl-get] command and search for the *default:privileged:* binding in the *kube-system* namespace:
 
 ```console
-kubectl get clusterrolebindings default:privileged -o yaml
+kubectl get rolebindings default:privileged -n kube-system -o yaml
 ```
 
 As shown in the following condensed output, the *psp:restricted* ClusterRole is assigned to any *system:authenticated* users. This ability provides a basic level of restrictions without your own policies being defined.
 
 ```
 apiVersion: rbac.authorization.k8s.io/v1
-kind: ClusterRoleBinding
+kind: RoleBinding
 metadata:
   [...]
   name: default:privileged
@@ -121,7 +120,7 @@ roleRef:
 subjects:
 - apiGroup: rbac.authorization.k8s.io
   kind: Group
-  name: system:authenticated
+  name: system:masters
 ```
 
 It's important to understand how these default policies interact with user requests to schedule pods before you start to create your own pod security policies. In the next few sections, let's schedule some pods to see these default policies in action.
@@ -191,7 +190,7 @@ The pod fails to be scheduled, as shown in the following example output:
 ```console
 $ kubectl-nonadminuser apply -f nginx-privileged.yaml
 
-Error from server (Forbidden): error when creating "nginx-privileged.yaml": pods "nginx-privileged" is forbidden: unable to validate against any pod security policy: [spec.containers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed]
+Error from server (Forbidden): error when creating "nginx-privileged.yaml": pods "nginx-privileged" is forbidden: unable to validate against any pod security policy: []
 ```
 
 The pod doesn't reach the scheduling stage, so there are no resources to delete before you move on.
@@ -219,44 +218,15 @@ Create the pod using the [kubectl apply][kubectl-apply] command and specify the
 kubectl-nonadminuser apply -f nginx-unprivileged.yaml
 ```
 
-The Kubernetes scheduler accepts the pod request. However, if you look at the status of the pod using `kubectl get pods`, there's an error:
-
-```console
-$ kubectl-nonadminuser get pods
-
-NAME                 READY   STATUS                       RESTARTS   AGE
-nginx-unprivileged   0/1     CreateContainerConfigError   0          26s
-```
-
-Use the [kubectl describe pod][kubectl-describe] command to look at the events for the pod. The following condensed example shows the container and image require root permissions, even though we didn't request them:
+The pod fails to be scheduled, as shown in the following example output:
 
 ```console
-$ kubectl-nonadminuser describe pod nginx-unprivileged
+$ kubectl-nonadminuser apply -f nginx-unprivileged.yaml
 
-Name:               nginx-unprivileged
-Namespace:          psp-aks
-Priority:           0
-PriorityClassName:  <none>
-Node:               aks-agentpool-34777077-0/10.240.0.4
-Start Time:         Thu, 28 Mar 2019 22:05:04 +0000
-[...]
-Events:
-  Type     Reason     Age                     From                               Message
-  ----     ------     ----                    ----                               -------
-  Normal   Scheduled  7m14s                   default-scheduler                  Successfully assigned psp-aks/nginx-unprivileged to aks-agentpool-34777077-0
-  Warning  Failed     5m2s (x12 over 7m13s)   kubelet, aks-agentpool-34777077-0  Error: container has runAsNonRoot and image will run as root
-  Normal   Pulled     2m10s (x25 over 7m13s)  kubelet, aks-agentpool-34777077-0  Container image "nginx:1.14.2" already present on machine
+Error from server (Forbidden): error when creating "nginx-unprivileged.yaml": pods "nginx-unprivileged" is forbidden: unable to validate against any pod security policy: []
 ```
 
-Even though we didn't request any privileged access, the container image for NGINX needs to create a binding for port *80*. To bind ports *1024* and below, the *root* user is required. When the pod tries to start, the *restricted* pod security policy denies this request.
-
-This example shows that the default pod security policies created by AKS are in effect and restrict the actions a user can perform. It's important to understand the behavior of these default policies, as you may not expect a basic NGINX pod to be denied.
-
-Before you move on to the next step, delete this test pod using the [kubectl delete pod][kubectl-delete] command:
-
-```console
-kubectl-nonadminuser delete -f nginx-unprivileged.yaml
-```
+The pod doesn't reach the scheduling stage, so there are no resources to delete before you move on.
 
 ## Test creation of a pod with a specific user context
 
@@ -283,61 +253,15 @@ Create the pod using the [kubectl apply][kubectl-apply] command and specify the
 kubectl-nonadminuser apply -f nginx-unprivileged-nonroot.yaml
 ```
 
-The Kubernetes scheduler accepts the pod request. However, if you look at the status of the pod using `kubectl get pods`, there's a different error than the previous example:
-
-```console
-$ kubectl-nonadminuser get pods
-
-NAME                         READY   STATUS             RESTARTS   AGE
-nginx-unprivileged-nonroot   0/1     CrashLoopBackOff   1          3s
-```
-
-Use the [kubectl describe pod][kubectl-describe] command to look at the events for the pod. The following condensed example shows the pod events:
-
-```console
-$ kubectl-nonadminuser describe pods nginx-unprivileged
-
-Name:               nginx-unprivileged
-Namespace:          psp-aks
-Priority:           0
-PriorityClassName:  <none>
-Node:               aks-agentpool-34777077-0/10.240.0.4
-Start Time:         Thu, 28 Mar 2019 22:05:04 +0000
-[...]
-Events:
-  Type     Reason     Age                   From                               Message
-  ----     ------     ----                  ----                               -------
-  Normal   Scheduled  2m14s                 default-scheduler                  Successfully assigned psp-aks/nginx-unprivileged-nonroot to aks-agentpool-34777077-0
-  Normal   Pulled     118s (x3 over 2m13s)  kubelet, aks-agentpool-34777077-0  Container image "nginx:1.14.2" already present on machine
-  Normal   Created    118s (x3 over 2m13s)  kubelet, aks-agentpool-34777077-0  Created container
-  Normal   Started    118s (x3 over 2m12s)  kubelet, aks-agentpool-34777077-0  Started container
-  Warning  BackOff    105s (x5 over 2m11s)  kubelet, aks-agentpool-34777077-0  Back-off restarting failed container
-```
-
-The events indicate that the container was created and started. There's nothing immediately obvious as to why the pod is in a failed state. Let's look at the pod logs using the [kubectl logs][kubectl-logs] command:
-
-```console
-kubectl-nonadminuser logs nginx-unprivileged-nonroot --previous
-```
-
-The following example log output gives an indication that within the NGINX configuration itself, there's a permissions error when the service tries to start. This error is again caused by needing to bind to port 80. Although the pod specification defined a regular user account, this user account isn't sufficient in the OS-level for the NGINX service to start and bind to the restricted port.
+The pod fails to be scheduled, as shown in the following example output:
 
 ```console
-$ kubectl-nonadminuser logs nginx-unprivileged-nonroot --previous
+$ kubectl-nonadminuser apply -f nginx-unprivileged-nonroot.yaml
 
-2019/03/28 22:38:29 [warn] 1#1: the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:2
-nginx: [warn] the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:2
-2019/03/28 22:38:29 [emerg] 1#1: mkdir() "/var/cache/nginx/client_temp" failed (13: Permission denied)
-nginx: [emerg] mkdir() "/var/cache/nginx/client_temp" failed (13: Permission denied)
+Error from server (Forbidden): error when creating "nginx-unprivileged-nonroot.yaml": pods "nginx-unprivileged-nonroot" is forbidden: unable to validate against any pod security policy: []
 ```
 
-Again, it's important to understand the behavior of the default pod security policies. This error was a little harder to track down, and again, you may not expect a basic NGINX pod to be denied.
-
-Before you move on to the next step, delete this test pod using the [kubectl delete pod][kubectl-delete] command:
-
-```console
-kubectl-nonadminuser delete -f nginx-unprivileged-nonroot.yaml
-```
+The pod doesn't reach the scheduling stage, so there are no resources to delete before you move on.
 
 ## Create a custom pod security policy
 
@@ -379,7 +303,7 @@ $ kubectl get psp
 
 NAME                  PRIV    CAPS   SELINUX    RUNASUSER   FSGROUP    SUPGROUP   READONLYROOTFS   VOLUMES
 privileged            true    *      RunAsAny   RunAsAny    RunAsAny   RunAsAny   false            *
-psp-deny-privileged   false          RunAsAny   RunAsAny    RunAsAny   RunAsAny   false            *         configMap,emptyDir,projected,secret,downwardAPI,persistentVolumeClaim
+psp-deny-privileged   false          RunAsAny   RunAsAny    RunAsAny   RunAsAny   false            *
 ```
 
 ## Allow user account to use the custom pod security policy
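A policy like *psp-deny-privileged* in the diff above rejects, at admission time, any pod whose container requests privileged mode. As a rough illustration only (plain Python, not Kubernetes or AKS code), the core of that admission check amounts to:

```python
# Illustrative sketch, not Kubernetes source: the check a policy such as
# psp-deny-privileged applies to the `spec` section of a pod manifest,
# represented here as a plain dict.

def violates_deny_privileged(pod_spec: dict) -> bool:
    """Return True if any container in the pod spec requests privileged mode."""
    for container in pod_spec.get("containers", []):
        security_context = container.get("securityContext") or {}
        if security_context.get("privileged", False):
            return True
    return False

# Shaped like the nginx-privileged pod used earlier in the article.
privileged_pod = {
    "containers": [
        {
            "name": "nginx-privileged",
            "image": "nginx:1.14.2",
            "securityContext": {"privileged": True},
        }
    ]
}

# No securityContext at all: nothing privileged is requested.
unprivileged_pod = {
    "containers": [{"name": "nginx-unprivileged", "image": "nginx:1.14.2"}]
}
```

Under such a policy the first spec would be rejected at admission and the second admitted, matching the `Forbidden` errors shown in the updated console output.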
