3 changes: 2 additions & 1 deletion .gitignore
@@ -30,4 +30,5 @@ env/
.idea
.env*
**/venv
**/noxfile.py
**/noxfile.py
# Local secrets file
auth/custom-credentials/okta/custom-credentials-okta-secrets.json
auth/custom-credentials/aws/custom-credentials-aws-secrets.json
20 changes: 20 additions & 0 deletions auth/custom-credentials/aws/Dockerfile
@@ -0,0 +1,20 @@
FROM python:3.9-slim

# Create a non-root user
RUN useradd -m appuser

# Create a working directory
WORKDIR /app

# Copy files and install dependencies
COPY --chown=appuser:appuser requirements.txt .
COPY --chown=appuser:appuser snippets.py .

# Switch to the non-root user
USER appuser

# Install dependencies for the user
RUN pip install --no-cache-dir --user -r requirements.txt

# Set the entrypoint
CMD ["python3", "snippets.py"]
108 changes: 108 additions & 0 deletions auth/custom-credentials/aws/README.md
@@ -0,0 +1,108 @@
# Running the Custom Credential Supplier Sample

If you want to use AWS security credentials that cannot be retrieved using methods supported natively by the [google-auth](https://github.com/googleapis/google-auth-library-python) library, a custom `AwsSecurityCredentialsSupplier` implementation may be specified. The supplier must return valid, unexpired AWS security credentials when called by the Google Cloud Auth library.

This sample demonstrates how to use **Boto3** (the AWS SDK for Python) as a custom supplier to bridge AWS credentials—from sources like EKS IRSA, ECS, or Fargate—to Google Cloud Workload Identity.

## Running Locally

For local development, you can provide credentials and configuration in a JSON file. For containerized environments like EKS, the script can fall back to environment variables.

### 1. Install Dependencies

Ensure you have Python installed, then install the required libraries:

```bash
pip install -r requirements.txt
```

### 2. Configure Credentials for Local Development

1. Copy the example secrets file to a new file named `custom-credentials-aws-secrets.json`:
```bash
cp custom-credentials-aws-secrets.json.example custom-credentials-aws-secrets.json
```
2. Open `custom-credentials-aws-secrets.json` and fill in the required values for your AWS and GCP configuration. The `custom-credentials-aws-secrets.json` file is ignored by Git, so your credentials will not be checked into version control.

### 3. Run the Script

```bash
python3 snippets.py
```

When run locally, the script will detect the `custom-credentials-aws-secrets.json` file and use it to configure the necessary environment variables for the Boto3 client.
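
Judging from the script's print statements, a successful run produces output along these lines (the metadata shown is illustrative; the actual fields come from the GCS `buckets.get` response):

```
Retrieving metadata for bucket: your-gcs-bucket-name...
--- SUCCESS! ---
{
  "kind": "storage#bucket",
  "name": "your-gcs-bucket-name",
  ...
}
```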

## Running in a Containerized Environment (EKS)

This section provides a brief overview of how to run the sample in an Amazon EKS cluster.

### 1. EKS Cluster Setup

First, you need an EKS cluster. You can create one using `eksctl` or the AWS Management Console. For detailed instructions, refer to the [Amazon EKS documentation](https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html).
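
For example, a basic cluster suitable for trying this sample can be created with a single command (cluster name and region are placeholders):

```bash
eksctl create cluster \
  --name your-cluster-name \
  --region your-aws-region
```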

### 2. Configure IAM Roles for Service Accounts (IRSA)

IRSA allows you to associate an IAM role with a Kubernetes service account. This provides a secure way for your pods to access AWS services without hardcoding long-lived credentials.

Using `eksctl`, you can complete the OIDC setup, IAM role creation, and service account association in a single step.

Run the following command to create the IAM role and bind it to a Kubernetes Service Account:

```bash
eksctl create iamserviceaccount \
  --name your-k8s-service-account \
  --namespace default \
  --cluster your-cluster-name \
  --region your-aws-region \
  --role-name your-role-name \
  --attach-policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess \
  --approve
```

> **Note**: The `--attach-policy-arn` flag is used here to demonstrate attaching permissions. Update this with the specific AWS policy ARN your application requires (e.g., if your Boto3 client needs to read from S3 or DynamoDB).
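
To verify the association, you can inspect the Kubernetes service account; with IRSA configured, `eksctl` annotates it with the IAM role ARN:

```bash
kubectl describe serviceaccount your-k8s-service-account -n default
# Look for an annotation like:
#   eks.amazonaws.com/role-arn: arn:aws:iam::<account-id>:role/your-role-name
```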

For a deep dive into how this works manually, refer to the [IAM Roles for Service Accounts](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) documentation.

### 3. Configure GCP to Trust the AWS Role

To allow your AWS role to authenticate as a Google Cloud service account, you need to configure Workload Identity Federation. This process involves three key steps:

1. **Create a Workload Identity Pool and an AWS Provider:** The pool holds the configuration, and the provider is set up to trust your AWS account.

2. **Create or select a GCP Service Account:** This service account will be impersonated by your AWS role. Grant this service account the necessary GCP permissions for your application (e.g., access to GCS or BigQuery).

3. **Bind the AWS Role to the GCP Service Account:** Create an IAM policy binding that gives your AWS role the `Workload Identity User` (`roles/iam.workloadIdentityUser`) role on the GCP service account. This allows the AWS role to impersonate the service account.
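
As a sketch of these steps using the `gcloud` CLI (all pool, provider, project, account, and role names below are placeholders to replace with your own values):

```bash
# 1. Create a workload identity pool and an AWS provider that trusts your AWS account.
gcloud iam workload-identity-pools create your-pool-id \
  --location="global" \
  --display-name="AWS pool"

gcloud iam workload-identity-pools providers create-aws your-provider-id \
  --location="global" \
  --workload-identity-pool="your-pool-id" \
  --account-id="your-aws-account-id"

# 2. Create (or reuse) the service account to impersonate.
gcloud iam service-accounts create your-gcp-sa

# 3. Allow the AWS role to impersonate the service account.
gcloud iam service-accounts add-iam-policy-binding your-gcp-sa@your-project.iam.gserviceaccount.com \
  --role="roles/iam.workloadIdentityUser" \
  --member="principalSet://iam.googleapis.com/projects/your-project-number/locations/global/workloadIdentityPools/your-pool-id/attribute.aws_role/arn:aws:sts::your-aws-account-id:assumed-role/your-role-name"
```

With this setup, the `GCP_WORKLOAD_AUDIENCE` value used by the sample takes the form `//iam.googleapis.com/projects/your-project-number/locations/global/workloadIdentityPools/your-pool-id/providers/your-provider-id`.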

**Alternative: Direct Access**

> For supported resources, you can grant roles directly to the AWS identity, bypassing service account impersonation. To do this, grant a role (like `roles/storage.objectViewer`) to the workload identity principal (`principalSet://...`) directly on the resource's IAM policy.
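
As a sketch, granting a role on a bucket directly to all identities in the pool might look like this (placeholders as above):

```bash
gcloud storage buckets add-iam-policy-binding gs://your-gcs-bucket-name \
  --role="roles/storage.objectViewer" \
  --member="principalSet://iam.googleapis.com/projects/your-project-number/locations/global/workloadIdentityPools/your-pool-id/*"
```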

For more detailed information, see the documentation on [Configuring Workload Identity Federation](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-clouds).

### 4. Containerize and Package the Application

Create a `Dockerfile` for the Python application and push the image to a container registry (e.g., Amazon ECR) that your EKS cluster can access. Refer to the [`Dockerfile`](Dockerfile) for the container image definition.

Build and push the image:

```bash
docker build -t your-container-image:latest .
docker push your-container-image:latest
```
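
If you are pushing to Amazon ECR, the image tag must include your registry URI and you must authenticate Docker first; a typical sketch (account ID and region are placeholders):

```bash
aws ecr get-login-password --region your-aws-region | \
  docker login --username AWS --password-stdin your-account-id.dkr.ecr.your-aws-region.amazonaws.com
```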

### 5. Deploy to EKS

Create a Kubernetes deployment manifest to deploy your application to the EKS cluster. See the [`pod.yaml`](pod.yaml) file for an example.

Deploy the pod:

```bash
kubectl apply -f pod.yaml
```
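
Once the pod has run, you can check the script's output using the pod name from [`pod.yaml`](pod.yaml):

```bash
kubectl logs custom-credential-pod
```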

### 6. Clean Up

To clean up the resources, delete the EKS cluster and any other AWS and GCP resources you created.

```bash
eksctl delete cluster --name your-cluster-name
```
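
On the GCP side, you can, for example, delete the workload identity pool created earlier (placeholder name as above):

```bash
gcloud iam workload-identity-pools delete your-pool-id --location="global"
```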

> **Reviewer comment (Contributor) — medium:** It's a good practice for text files, including Markdown files, to end with a newline character. This can prevent issues with some tools and file concatenations.

8 changes: 8 additions & 0 deletions auth/custom-credentials/aws/custom-credentials-aws-secrets.json.example
@@ -0,0 +1,8 @@
{
"aws_access_key_id": "YOUR_AWS_ACCESS_KEY_ID",
"aws_secret_access_key": "YOUR_AWS_SECRET_ACCESS_KEY",
"aws_region": "YOUR_AWS_REGION",
"gcp_workload_audience": "YOUR_GCP_WORKLOAD_AUDIENCE",
"gcs_bucket_name": "YOUR_GCS_BUCKET_NAME",
"gcp_service_account_impersonation_url": "YOUR_GCP_SERVICE_ACCOUNT_IMPERSONATION_URL"
}
18 changes: 18 additions & 0 deletions auth/custom-credentials/aws/noxfile_config.py
@@ -0,0 +1,18 @@
# Copyright 2025 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

TEST_CONFIG_OVERRIDE = {
    # Ignore all versions except 3.9, which matches the Python version used by this sample.
    "ignored_versions": ["2.7", "3.6", "3.7", "3.8", "3.10", "3.11", "3.12", "3.13"],
}
33 changes: 33 additions & 0 deletions auth/custom-credentials/aws/pod.yaml
@@ -0,0 +1,33 @@
# Copyright 2025 Google LLC
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Pod
metadata:
  name: custom-credential-pod
spec:
  serviceAccountName: your-k8s-service-account # The service account associated with the AWS IAM role
  containers:
  - name: gcp-auth-sample
    image: your-container-image:latest # Your image from ECR
    env:
    # AWS_REGION is often required for Boto3 to initialize correctly in containers
    - name: AWS_REGION
      value: "your-aws-region"
    - name: GCP_WORKLOAD_AUDIENCE
      value: "your-gcp-workload-audience"
    # Optional: If you want to use service account impersonation
    # - name: GCP_SERVICE_ACCOUNT_IMPERSONATION_URL
    #   value: "your-gcp-service-account-impersonation-url"
    - name: GCS_BUCKET_NAME
      value: "your-gcs-bucket-name"
2 changes: 2 additions & 0 deletions auth/custom-credentials/aws/requirements-test.txt
@@ -0,0 +1,2 @@
-r requirements.txt
pytest==8.2.0
4 changes: 4 additions & 0 deletions auth/custom-credentials/aws/requirements.txt
@@ -0,0 +1,4 @@
boto3==1.40.53
google-auth==2.43.0
python-dotenv==1.1.1
requests==2.32.3
154 changes: 154 additions & 0 deletions auth/custom-credentials/aws/snippets.py
@@ -0,0 +1,154 @@
# Copyright 2025 Google LLC
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# [START auth_custom_credential_supplier_aws]
import json
import os

import boto3
from google.auth import aws
from google.auth import exceptions
from google.auth.transport import requests as auth_requests

Comment on lines +15 to +21

> **Reviewer comment (Contributor) — medium:** To support writing to standard error, the `sys` module should be imported. It's also a good practice to group standard library imports together, followed by third-party imports, as per PEP 8.

Suggested imports:

```python
import json
import os
import sys

import boto3
from google.auth import aws
from google.auth import exceptions
from google.auth.transport import requests as auth_requests
```


class CustomAwsSupplier(aws.AwsSecurityCredentialsSupplier):
    """Custom AWS Security Credentials Supplier using Boto3."""

    def __init__(self):
        """Initializes the Boto3 session, prioritizing environment variables for region."""
        # Explicitly read the region from the environment first.
        region = os.getenv("AWS_REGION") or os.getenv("AWS_DEFAULT_REGION")

        # If region is None, Boto3's discovery chain will be used when needed.
        self.session = boto3.Session(region_name=region)
        self._cached_region = None

    def get_aws_region(self, context, request) -> str:
        """Returns the AWS region using Boto3's default provider chain."""
        if self._cached_region:
            return self._cached_region

        self._cached_region = self.session.region_name

        if not self._cached_region:
            raise exceptions.GoogleAuthError(
                "Boto3 was unable to resolve an AWS region."
            )

        return self._cached_region

    def get_aws_security_credentials(
        self, context, request=None
    ) -> aws.AwsSecurityCredentials:
        """Retrieves AWS security credentials using Boto3's default provider chain."""
        creds = self.session.get_credentials()
        if not creds:
            raise exceptions.GoogleAuthError(
                "Unable to resolve AWS credentials from Boto3."
            )

        return aws.AwsSecurityCredentials(
            access_key_id=creds.access_key,
            secret_access_key=creds.secret_key,
            session_token=creds.token,
        )


def authenticate_with_aws_credentials(bucket_name, audience, impersonation_url=None):
    """Authenticates using the custom AWS supplier and gets bucket metadata.

    Returns:
        dict: The bucket metadata response from the Google Cloud Storage API.
    """

    # 1. Instantiate the custom supplier.
    custom_supplier = CustomAwsSupplier()

    # 2. Instantiate the AWS Credentials object.
    credentials = aws.Credentials(
        audience=audience,
        subject_token_type="urn:ietf:params:aws:token-type:aws4_request",
        service_account_impersonation_url=impersonation_url,
        aws_security_credentials_supplier=custom_supplier,
        scopes=["https://www.googleapis.com/auth/devstorage.read_write"],
    )

    # 3. Create an authenticated session.
    authed_session = auth_requests.AuthorizedSession(credentials)

    # 4. Make the API request.
    bucket_url = f"https://storage.googleapis.com/storage/v1/b/{bucket_name}"

    response = authed_session.get(bucket_url)
    response.raise_for_status()

    return response.json()


# [END auth_custom_credential_supplier_aws]


def _load_config_from_file():
    """
    If a local secrets file is present, load it into the environment.

    This is a "just-in-time" configuration for local development. These
    variables are only set for the current process and are not exposed to the
    shell.
    """
    if os.path.exists("custom-credentials-aws-secrets.json"):
        with open("custom-credentials-aws-secrets.json", "r") as f:
            secrets = json.load(f)

        os.environ["AWS_ACCESS_KEY_ID"] = secrets.get("aws_access_key_id", "")
        os.environ["AWS_SECRET_ACCESS_KEY"] = secrets.get("aws_secret_access_key", "")
        os.environ["AWS_REGION"] = secrets.get("aws_region", "")
        os.environ["GCP_WORKLOAD_AUDIENCE"] = secrets.get("gcp_workload_audience", "")
        os.environ["GCS_BUCKET_NAME"] = secrets.get("gcs_bucket_name", "")
        os.environ["GCP_SERVICE_ACCOUNT_IMPERSONATION_URL"] = secrets.get(
            "gcp_service_account_impersonation_url", ""
        )


def main():
    # Reads the custom-credentials-aws-secrets.json if running locally.
    _load_config_from_file()

    # Now, read the configuration from the environment. In a local run, these
    # will be the values we just set. In a containerized run, they will be
    # the values provided by the environment.
    gcp_audience = os.getenv("GCP_WORKLOAD_AUDIENCE")
    sa_impersonation_url = os.getenv("GCP_SERVICE_ACCOUNT_IMPERSONATION_URL")
    gcs_bucket_name = os.getenv("GCS_BUCKET_NAME")

    if not all([gcp_audience, gcs_bucket_name]):
        print(
            "Required configuration missing. Please provide it in a "
            "custom-credentials-aws-secrets.json file or as environment variables: "
            "GCP_WORKLOAD_AUDIENCE, GCS_BUCKET_NAME"
        )
        return

    try:
        print(f"Retrieving metadata for bucket: {gcs_bucket_name}...")
        metadata = authenticate_with_aws_credentials(
            gcs_bucket_name, gcp_audience, sa_impersonation_url
        )
        print("--- SUCCESS! ---")
        print(json.dumps(metadata, indent=2))
    except Exception as e:
        print(f"Authentication or Request failed: {e}")


if __name__ == "__main__":
    main()