File: articles/active-directory/hybrid/reference-connect-version-history.md (7 additions, 0 deletions)
@@ -43,6 +43,13 @@ Not all releases of Azure AD Connect will be made available for auto upgrade. Th

>
> Please refer to [this article](https://docs.microsoft.com/azure/active-directory/hybrid/how-to-upgrade-previous-version) to learn more about how to upgrade Azure AD Connect to the latest version.

## 1.5.20.0

### Release status
04/09/2020: Released for download

### Fixed issues
This hotfix build fixes an issue with build 1.5.18.0 if you have the Group Filtering feature enabled and use mS-DS-ConsistencyGuid as the source anchor.
The *privileged* pod security policy is applied to any authenticated user in the AKS cluster. This assignment is controlled by ClusterRoles and ClusterRoleBindings. Use the [kubectl get rolebindings][kubectl-get] command and search for the *default:privileged:* binding in the *kube-system* namespace:

```console
kubectl get rolebindings default:privileged -n kube-system -o yaml
```
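The change from querying ClusterRoleBindings to a RoleBinding in *kube-system* matters because the two grant different scopes. A minimal sketch of the distinction (a toy model for illustration, not Kubernetes source code):

```python
# Toy model of Kubernetes RBAC binding scope (illustration only):
# a ClusterRoleBinding grants its subjects the referenced role in every
# namespace, while a RoleBinding grants it only inside its own namespace.

def grants(binding, subject, namespace):
    """Return True if `binding` gives `subject` the role in `namespace`."""
    if subject not in binding["subjects"]:
        return False
    if binding["kind"] == "ClusterRoleBinding":
        return True  # cluster-wide grant
    return namespace == binding["namespace"]  # namespaced grant

# The default:privileged binding, modeled as a RoleBinding in kube-system:
rb = {"kind": "RoleBinding", "namespace": "kube-system",
      "subjects": ["system:masters"]}

print(grants(rb, "system:masters", "kube-system"))  # True
print(grants(rb, "system:masters", "default"))      # False
```

Because the binding is namespaced, the earlier cluster-wide `kubectl get clusterrolebindings` query no longer finds it; the `-n kube-system` form does.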
As shown in the following condensed output, the *psp:restricted* ClusterRole is assigned to any *system:authenticated* users. This ability provides a basic level of restrictions without your own policies being defined.

```
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
[...]
  name: default:privileged
[...]
roleRef:
[...]
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:masters
```

It's important to understand how these default policies interact with user requests to schedule pods before you start to create your own pod security policies. In the next few sections, let's schedule some pods to see these default policies in action.
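How these policies gate pod creation can be sketched roughly as follows. This is a simplified model (an assumption for illustration, not AKS source code): admission succeeds when the pod validates against at least one policy the requesting user is authorized to use, and the Forbidden error lists every failed policy, or an empty `[]` when the user can use no policies at all.

```python
# Simplified sketch of PodSecurityPolicy admission (illustration only).

def validate(policy, pod):
    """Return a list of violation messages (empty means the pod passes)."""
    errors = []
    if pod.get("privileged") and not policy["allow_privileged"]:
        errors.append("Privileged containers are not allowed")
    return errors

def admit(pod, usable_policies):
    """Admit the pod if any usable policy validates it; otherwise report why."""
    failures = []
    for name, policy in usable_policies.items():
        errs = validate(policy, pod)
        if not errs:
            return (True, None)
        failures.extend(f"{name}: {e}" for e in errs)
    return (False, f"unable to validate against any pod security policy: {failures}")

restricted = {"allow_privileged": False}
print(admit({"privileged": True}, {"restricted": restricted}))
print(admit({"privileged": True}, {}))  # no usable policies -> message ends in "[]"
```

In this model, a user with no usable policies gets the bare `[]` error, which matches the shape of the Forbidden messages the default AKS policies produce.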
@@ -191,7 +190,7 @@ The pod fails to be scheduled, as shown in the following example output:

```console
Error from server (Forbidden): error when creating "nginx-privileged.yaml": pods "nginx-privileged" is forbidden: unable to validate against any pod security policy: []
```

The pod doesn't reach the scheduling stage, so there are no resources to delete before you move on.
@@ -219,44 +218,15 @@ Create the pod using the [kubectl apply][kubectl-apply] command and specify the

The pod fails to be scheduled, as shown in the following example output:

```console
Error from server (Forbidden): error when creating "nginx-unprivileged.yaml": pods "nginx-unprivileged" is forbidden: unable to validate against any pod security policy: []
```

The *nginx-unprivileged-nonroot* pod also fails to be scheduled, as shown in the following example output:

```console
Error from server (Forbidden): error when creating "nginx-unprivileged-nonroot.yaml": pods "nginx-unprivileged-nonroot" is forbidden: unable to validate against any pod security policy: []
```
description: Application Insights Agent API reference. Disable-InstrumentationEngine. Monitor website performance without redeploying the website. Works with ASP.NET web apps hosted on-premises, in VMs, or on Azure.
File: articles/dev-spaces/faq.md (10 additions, 0 deletions)
@@ -76,6 +76,14 @@ When using Visual Studio to prepare your project, you have the option of enablin

Currently, Azure Dev Spaces does not support using [pod managed identities][aks-pod-managed-id] on AKS clusters with Azure Dev Spaces enabled. If you have pod managed identities installed and would like to uninstall it, you can find more details in the [uninstall notes][aks-pod-managed-id-uninstall].

## Can I use Azure Dev Spaces with multiple microservices in an application?

Yes, you can use Azure Dev Spaces in an application with multiple microservices, but you must prepare and run the individual microservices at their root. The Azure Dev Spaces CLI, Azure Dev Spaces VS Code extension, and Visual Studio Azure Development workload expect the *azds.yaml* file to be at the root of the microservice in order to run and debug. See the [Bike Sharing sample application][bike-sharing] for an example of multiple microservices in a single application.

In Visual Studio Code, it is possible to [open separate projects in a single workspace][vs-code-multi-root-workspaces] and debug them separately through Azure Dev Spaces. Each of the projects must be self-contained and prepared for Azure Dev Spaces.

In Visual Studio, it is possible to configure .NET Core solutions for debugging through Azure Dev Spaces.
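The "each microservice at its own root" expectation above can be illustrated with a small check over a hypothetical repository layout (the service names and azds.yaml contents here are invented for illustration):

```python
import tempfile
from pathlib import Path

def devspaces_ready(repo_root):
    """Map each top-level service directory to whether it has an azds.yaml
    at its root, which the Dev Spaces tooling expects for run/debug."""
    root = Path(repo_root)
    return {p.name: (p / "azds.yaml").is_file()
            for p in sorted(root.iterdir()) if p.is_dir()}

# Hypothetical layout: two microservices, only one prepared for Dev Spaces.
with tempfile.TemporaryDirectory() as repo:
    (Path(repo) / "bikes").mkdir()
    (Path(repo) / "bikes" / "azds.yaml").write_text("# placeholder contents\n")
    (Path(repo) / "billing").mkdir()
    print(devspaces_ready(repo))  # {'bikes': True, 'billing': False}
```

A service missing its root-level *azds.yaml* (like *billing* above) would need to be prepared for Azure Dev Spaces before it can be run or debugged this way.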