Before starting the workshop, please ensure the following tools are installed:
- Terraform (installed via `tenv`): YouLend Wiki Guide
- Azure CLI: Installation Guide
- Teleport CLI tools (`tctl` and `tsh`): if you already use the `th` YouLend helper function, these are already installed. Teleport Azure AD Integration Guide
Welcome to the YouLend Control Tower Platform Workshop!
In this workshop, we'll explore the story of YouFinance, a fictional fintech startup spun out of YouLend, created to support small businesses through a modern Control Tower architecture. You'll learn the foundations of Control Tower, key platform features, and build a working solution using the provided codebase.
Follow along, experiment, and most importantly, have fun!
The YouLend Platform team has been evaluating Control Tower solutions for some time. Several third-party consultancies have pitched their services to us.
This workshop is designed to:
- Get you up to speed on what AWS Control Tower is.
- Understand its key features and strategic value.
- Enable you to build your own Control Tower using code.
- Prepare you to confidently engage in technical conversations about Control Tower.
By the end of this session, you'll have both the theoretical and practical knowledge needed to assess and discuss Control Tower-based architectures.
Before we begin, please review the following guidelines.
You should have received credentials structured as follows:
| User Name | User Email | Password | Access Key (Mgmt Account) | Secret Key (Mgmt Account) | SSO Console Link | Terraform State Bucket | Teleport Sign-In Link | Slack Workspace | Datadog Sign-In Link |
|---|---|---|---|---|---|---|---|---|---|
| TBD | TBD | TBD | TBD | TBD | TBD | TBD | TBD | TBD | TBD |
- Ask as many questions as you like during the session.
- Follow along with the hands-on tasks and report bugs or errors.
- Let the hosts know if they're going too fast or too slow.
- Feel free to request a quick break if needed.
- Do not change any credentials.
- Do not share any credentials with anyone.
- Do not enable MFA on the Management Account.
- Do not change any code unless instructed to, as this can cause errors.
- Do not deploy resources beyond the defined scope.
- What is AWS Control Tower?
- AWS Organizations & OU Strategies
- Control Tower Launch (Wizard vs. Terraform)
- Identity & Access Management (Teleport & Azure AD)
- OU Baselining
- Applying Control Tower Controls
- Security Foundations Part 1
- Security Foundations Part 2
- Account Customisations (Account Factory with Terraform)
- Centralised Logging (Datadog)
- Feedback Session
- Recap of Final Architecture
- Strategic Considerations & Final Thoughts
- Resources & Materials (e.g., Medium Articles)
AWS Control Tower is a managed service that helps you set up and govern a secure, multi-account AWS environment based on AWS's best practices. It's not a single tool, but rather an orchestration layer that wires together multiple AWS services (Organizations, IAM Identity Center (formerly AWS SSO), Config, CloudTrail, Service Catalog, and more) into a central place.
If you're managing more than one AWS account, it's worth considering. Control Tower provides:
- Guardrails (SCPs, Config rules) to keep things compliant
- Centralized identity access via IAM Identity Center
- Automated logging and monitoring configuration
- The ability to launch and enroll accounts in a consistent way
It removes a lot of the footguns and guesswork, since you're building on a structure that scales.
AWS Organizations helps you centrally manage and govern your environment as you grow and scale your AWS resources. Using Organizations, you can:
- Create accounts and allocate resources
- Group accounts to organize your workflows
- Apply policies for governance
- Simplify billing by using a single payment method for all accounts
Having AWS Organizations enabled is the main prerequisite for running Control Tower.
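As a rough illustration of what enabling this prerequisite involves in Terraform, the sketch below shows the core resource; the service principals and policy types listed are assumptions for illustration, not the workshop module's exact contents:

```hcl
# Hedged sketch: enable AWS Organizations with all features,
# which Control Tower requires. Service principals are illustrative.
resource "aws_organizations_organization" "this" {
  feature_set = "ALL" # consolidated billing plus advanced governance (SCPs)

  aws_service_access_principals = [
    "controltower.amazonaws.com",
    "sso.amazonaws.com",
  ]

  enabled_policy_types = [
    "SERVICE_CONTROL_POLICY",
  ]
}
```

`feature_set = "ALL"` matters here: Control Tower cannot govern an organization that was created with consolidated billing only.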
An OU is a container for AWS accounts that allows you to apply a set of common policies to all AWS accounts within it. This lets you consolidate and administer them as a single unit.
For this workshop, we'll follow a simplified OU strategy. It's not representative of the final recommendation, but it keeps things easier to understand:
```
|- AWS Organization
   |- Root Account (Management Account)
   |- Security OU
      |- Security Account
      |- Logging Account
   |- Product OU
      |- Production Account
      |- Staging Account
      |- Development Account
   |- Platform OU
      |- AFT Account
```
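The OU portion of this hierarchy can be expressed with `aws_organizations_organizational_unit` resources. This is a hedged sketch, not the workshop's actual code:

```hcl
# Hedged sketch: create the three OUs under the organization root.
data "aws_organizations_organization" "this" {}

locals {
  root_id = data.aws_organizations_organization.this.roots[0].id
}

resource "aws_organizations_organizational_unit" "security" {
  name      = "Security"
  parent_id = local.root_id
}

resource "aws_organizations_organizational_unit" "product" {
  name      = "Product"
  parent_id = local.root_id
}

resource "aws_organizations_organizational_unit" "platform" {
  name      = "Platform"
  parent_id = local.root_id
}
```

Accounts are then created in (or moved into) the relevant OU, and policies attached to an OU apply to every account inside it.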
We will explore two ways to deploy Control Tower:
- AWS UI Launch Wizard
  - Simple and guided
  - Creates all prerequisites, including IAM roles and KMS keys
  - Only requires AWS Organizations to be enabled
  - Generates the AWS Security OU as well as the Security & Logging Accounts
- Terraform
  - Code stored in the `03-control-tower` folder
  - Uses the Control Tower API
  - Lifecycle hooks required (changes in the manifest can lead to account recreation)
  - Useful for managing related infrastructure (e.g., Organizations, KMS keys)
  - Requires IAM roles to be pre-configured
  - Generates the Security OU but does NOT generate the AWS accounts (it requires them to be pre-created)
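For reference, the Control Tower API approach centres on the `aws_controltower_landing_zone` resource. The sketch below abbreviates the manifest; the version, regions, and account IDs are placeholders, and the real manifest schema has more required fields:

```hcl
# Hedged sketch: deploy a landing zone via the Control Tower API.
resource "aws_controltower_landing_zone" "this" {
  version = "3.3" # landing zone version; placeholder

  manifest_json = jsonencode({
    governedRegions = ["eu-west-1", "eu-west-2"]
    organizationStructure = {
      security = { name = "Security" }
    }
    centralizedLogging = {
      accountId = "111111111111" # pre-created Logging Account (placeholder)
      enabled   = true
    }
    securityRoles = {
      accountId = "222222222222" # pre-created Security Account (placeholder)
    }
    accessManagement = { enabled = true }
  })
}
```

This mirrors the caveat above: the Security and Logging accounts referenced in the manifest must already exist before the apply.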
We won't run the Terraform module live, as the deployment process takes 30-60 minutes; it was executed beforehand to save time.
This step has already been completed in preparation for the workshop. We used the Makefile to automate the execution of the first three Terraform modules:
- `01-backend-init`: Creates the S3 backend for storing the Terraform state file (pre-provided for you).
- `02-aws-organizations`: Enables AWS Organizations to manage accounts centrally.
- `03-control-tower`: Deploys AWS Control Tower along with prerequisite IAM roles.
ℹ️ These steps are skipped during the workshop to save time and ensure all participants start with a consistent baseline.
Please ensure your IAM CLI user has the following inline policy attached:
- `Assume_Control_Tower_Policy`
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": [
        "arn:aws:iam::{{REPLACE_WITH_ACCOUNT_ID}}:role/service-role/AWSControlTower*"
      ]
    }
  ]
}
```

The setup includes three stages, each automated using a Makefile to streamline setup and deployment:
- Initializing the backend
- Provisioning AWS Organizations
- Enabling the Control Tower configuration
Before running, make sure you have a valid terraform.tfvars file in the root directory with email addresses for the security and logging accounts.
The Makefile validates this file, provisions the backend, injects the backend bucket into each module, and runs Terraform plan and apply in order.
- Ensure you have `terraform` and `make` installed on your system.
- Create a `terraform.tfvars` file in the root directory with the following content:

```hcl
security_account_email = "your+security@email.com"
logging_account_email  = "your+logging@email.com"
```

- From the root of the repository, run:

```
make
```

This will validate the `terraform.tfvars`, copy it into the appropriate directories, and run all Terraform steps in order.
If you haven't already, run the following command and provide your credentials when prompted:

```
aws configure
```

Use these values when prompted:
- AWS Access Key ID: [provided to you]
- AWS Secret Access Key: [provided to you]
- Default region name: `eu-west-1`
- Default output format: `json`
```
az login
```

This will open a browser window. Select "Sign in with another account", then log in with the Azure credentials provided to you via Entra ID. Use the email provided to you.
Once configured, use your IDE to perform a global search for:
{{REPLACE_WITH_S3_BUCKET}}
Replace it with the actual S3 bucket name provided to you (e.g., tfstate-control-tower-abc123).
Alternatively, here's how to do it using `find` and `sed` in bash:

```
find . -type f -name "*.tf" -exec sed -i '' 's/{{REPLACE_WITH_S3_BUCKET}}/tfstate-control-tower-abc123/g' {} +
```

ℹ️ The `-i ''` form is for macOS/BSD `sed`; on GNU/Linux `sed`, use `-i` without the empty string.
⚠️ Make sure to use the actual bucket name provided to you instead of `tfstate-control-tower-abc123`.
With Control Tower running, we now want to start logging into the AWS accounts it manages. For this, we use Teleport as our identity broker to securely authenticate users through Azure Entra ID, and then into AWS.
Change to the directory:

```
cd ./04-iam-auth
```

You'll find a file named `terraform.tfvars.example`. Copy and rename it:

```
cp terraform.tfvars.example terraform.tfvars
```

Update the values with those provided to you. Here's a template:

```hcl
user_first_name = "omar"
teleport_saml   = "https://REPLACE_WITH_TENANT.teleport.sh:443/v1/webapi/saml/acs/ad"
```

ℹ️ `user_first_name` is your first name, used to identify your application in Entra ID.
Run the following commands:

```
terraform init
terraform apply --auto-approve
```

This will:
- Generate IAM users with group-based permissions for access control testing
- Create an Enterprise Application in Microsoft Entra ID
- Attach the IAM users to that application
The output should include a link like this; save it:
https://d-92312321321.awsapps.com/
Follow these steps to configure Azure Entra ID with your Teleport cluster.
Sign up for a new Teleport cluster: https://goteleport.com/signup/
⚠️ Please use your main email address, not your Microsoft/Entra email, for registration.
Create a file named azure-connector.yaml:
kind: saml
metadata:
name: ad
spec:
acs: https://{{REPLACE_WITH_TELEPORT_URL}}:443/v1/webapi/saml/acs/ad
attributes_to_roles:
- name: http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name
roles:
- access
- editor
value: {{REPLACE_WITH_AZURE_ENTRA_EMAIL}}
audience: https://{{REPLACE_WITH_TELEPORT_URL}}:443/v1/webapi/saml/acs/ad
cert: ""
display: Microsoft
entity_descriptor: ""
entity_descriptor_url: {{REPLACE_WITH_ENTITY_URL}}
issuer: ""
service_provider_issuer: https://{{REPLACE_WITH_TELEPORT_URL}}:443/v1/webapi/saml/acs/ad
sso: ""Replace the placeholders with the following values:
{{REPLACE_WITH_TELEPORT_URL}}: URL of your Teleport cluster (from step 1){{REPLACE_WITH_AZURE_ENTRA_EMAIL}}: Your assigned Azure Entra email address{{REPLACE_WITH_ENTITY_URL}}: Go to Azure Portal > Enterprise Application > Single Sign-On > SAML Certificates and copy the App Federation Metadata URL
```
tsh login --proxy=<YOUR_TELEPORT_URL>.teleport.sh:443 --auth=local --user=<YOUR_EMAIL>@proton.me
```

Replace the placeholders with your actual proxy URL and email.

```
cat azure-connector.yaml | tctl sso test
```

This will check the validity of your Azure SAML configuration.

```
tctl create -f azure-connector.yaml
```

You should see a success message confirming the connection is valid.
✅ Your Entra ID x Teleport integration is now complete!
Follow the official guide here: https://goteleport.com/docs/admin-guides/management/guides/aws-iam-identity-center/guide/
Once everything is set up:
- Go to the AWS SSO link output by Terraform.
- You'll be redirected to Teleport.
- Choose Login with Entra ID.
- Use the Azure AD account provided to you.
- After authentication, you'll land on the AWS SSO homepage.
You are now ready to access your AWS accounts with proper role-based access through Teleport!
Using the details from Teleport's guide on creating custom Identity Center roles, you can assign different permission levels to the two IAM users.
To test this:
- Open a new Incognito window
- Log into Teleport using each user's email address
- Since they're automatically assigned to the Enterprise application by Terraform, no manual assignment is required
You should see that each user only has access to resources based on their assigned IAM roles.
By default, when you create an Organizational Unit (OU) in AWS Organizations, it is not enrolled into Control Tower. In this step, we'll establish a baseline so that each new OU gets enrolled into Control Tower with a consistent set of controls and guardrails.
We achieve this using a CloudFormation stack that is deployed and managed via Terraform.
Change to the directory:

```
cd ./05-baseline-ou
```

You'll find a file named `terraform.tfvars.example`. Copy and rename it:

```
cp terraform.tfvars.example terraform.tfvars
```

Update the values with those provided to you. Here's a template:

```hcl
platform_account_email = "user+platform@email.me"
```

ℹ️ Keep the `+` symbol in the email (e.g., `john.doe+platform@youlend.com`) to utilize email subaddressing.
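Subaddressing lets one mailbox act as the root email for several AWS accounts. As a quick bash illustration (the base address is a hypothetical placeholder):

```shell
# Derive unique, subaddressed account emails from a single base mailbox.
# "john.doe@youlend.com" is a hypothetical placeholder address.
base="john.doe@youlend.com"
for suffix in security logging platform; do
  email="${base%@*}+${suffix}@${base#*@}"
  echo "$email"
done
```

Each derived address is unique (AWS requires a distinct root email per account) but still delivers to the same inbox.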
Run the following:

```
terraform init
terraform apply --auto-approve
```

This will:
- Create a new OU called Platform
- Register a new Platform Account within that OU
- Deploy a CloudFormation stack to enroll the Platform OU and account into AWS Control Tower
After this step, any resources or policies defined within Control Tower will automatically apply to the new OU.
AWS Control Tower provides multiple types of Controls to help govern AWS accounts:
- Detective Controls (Config rules): Monitor and alert on policy violations
- Proactive Controls (Hooks): Validate infrastructure before deployment using CloudFormation Guard (only applicable to CloudFormation stacks)
- Preventive Controls (SCPs): Block restricted actions at the org level outright
Controls can be targeted by service, account, or audit requirement.
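As an illustration, attaching a single preventive control to an OU in Terraform looks roughly like this; the control identifier names a real Control Tower guardrail, but the OU reference is a placeholder:

```hcl
# Hedged sketch: attach one preventive guardrail (SCP-backed) to an OU.
resource "aws_controltower_control" "restrict_root" {
  # "Disallow actions as a root user" guardrail
  control_identifier = "arn:aws:controltower:eu-west-1::control/AWS-GR_RESTRICT_ROOT_USER"
  target_identifier  = aws_organizations_organizational_unit.platform.arn # placeholder
}
```

Detective and proactive controls are attached the same way; only the control identifier changes.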
Change to the directory:

```
cd ./06-controls
```

Run the following to deploy the control rules:

```
terraform init
terraform apply --auto-approve
```

This will deploy all three types of controls to the AWS Platform Account in the Platform OU.
After the controls are deployed:
- Log in to the AWS SSO console
- Generate temporary credentials for the AWS Platform Account
- Add them to your `~/.aws/credentials` file under a new profile:

```
[platform]
aws_access_key_id = AKXXXXXXXXXXXXXXXX
aws_secret_access_key = UzJXXXXXXXXXXX
```

- Navigate back to the `./06-controls` directory
- Un-comment the following blocks in your code:
  - The `module "control_tower_controls_validation"` block in `main.tf`
  - The `provider "aws"` block in `terraform.tf`
- Re-run Terraform:

```
terraform apply --auto-approve
```

You should observe failures and errors triggered by the controls; we'll walk through and discuss them together during the live session.
In this section, we will deploy key AWS Security Services into our AWS accounts, leveraging the AWS Control Tower and Organizations setup to enable centralized monitoring and compliance.
To proceed, you'll need CLI access to the Security Account.
We recommend one of the following methods:
- Create a dedicated IAM user in the Security Account with an Administrator policy and configure its credentials using the AWS CLI.
- Or, generate short-lived credentials from the SSO console (as done with the Platform Account) and paste them into your `~/.aws/credentials` file under the profile name `security`.

⚠️ Note: This module may take time to apply. If your credentials expire, simply re-fetch them.
```
cd ./07-security-foundations
```

Copy and rename the example tfvars file:

```
cp terraform.tfvars.example terraform.tfvars
```

Update the values with the credentials and information provided to you:

```hcl
management_account_email = "user@email.me"
logging_account_email    = "user+logging@email.me"
logging_account_id       = "123456789"
platform_account_email   = "user+platform@email.me"
platform_account_id      = "123456789"
slack_channel_id         = "C07EZ1ABC23"
slack_team_id            = "T07EA123LEP"
```

ℹ️ Use email subaddressing with the `+` symbol (e.g., `john.doe+logging@youlend.com`).
ℹ️ AWS Account IDs can be fetched from the SSO console.
ℹ️ Slack team and channel IDs can be found by inspecting Slack workspace settings using the URL provided.
```
terraform init
terraform apply --auto-approve
```

This step will:
- Delegate the AWS Security Account as an Organization Admin for security services
- Enable AWS Security Services like Security Hub, Audit Manager, and Root Credential Monitoring on the Management Account
- Deploy Security Features into the Security Account to act as a centralized control plane
IAM Access Analyzer identifies:
- Unused IAM policies
- Permissions granting access to external accounts
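Under the hood, an organization-wide analyzer can be declared with a single Terraform resource; a hedged sketch, assuming it runs in the delegated-administrator (Security) account:

```hcl
# Hedged sketch: organization-scoped IAM Access Analyzer.
resource "aws_accessanalyzer_analyzer" "org" {
  analyzer_name = "org-access-analyzer" # placeholder name
  type          = "ORGANIZATION"
}
```

With `type = "ORGANIZATION"`, findings cover external access granted by any account in the organization, not just the account hosting the analyzer.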
To validate it works:
Edit `./07-security-foundations/main.tf`, and in the `security_foundation_security` module, set:

```
validate_iam_access_analyzer = true
```

This will create a cross-account IAM role that triggers IAM Access Analyzer and simulates findings for evaluation.
We'll review these findings together in the live session.
When using AWS Organizations, each member account still retains its own root user. That's why part of your security foundation should be to centrally secure and restrict root user access across all accounts.
To validate it works:
Edit `./07-security-foundations/main.tf`, and in the `security_foundation_security` module, set:

```
validate_org_root_features = true
```

This will create an S3 bucket that can only be deleted by the AWS root user. To delete the bucket, run the following commands in the AWS CloudShell console in the Security Account:

```
aws sts assume-root \
  --region eu-west-1 \
  --duration-seconds 900 \
  --target-principal <my member account id> \
  --task-policy-arn arn=arn:aws:iam::aws:policy/root-task/S3UnlockBucketPolicy
```

Example output:

```json
{
  "Credentials": {
    "AccessKeyId": "AS....XIG",
    "SecretAccessKey": "ao...QxG",
    "SessionToken": "IQ...SS",
    "Expiration": "2024-09-23T17:44:50+00:00"
  }
}
```

Export the returned credentials:

```
export AWS_ACCESS_KEY_ID=ASIA356SJWJITG32xxx
export AWS_SECRET_ACCESS_KEY=JFZzOAWWLocoq2of5Exxx
export AWS_SESSION_TOKEN=IQoJb3JpZ2luX2VjEMb//////////wEaCXVxxxx
```

Verify you are operating as root:

```
aws sts get-caller-identity
```

Expected output:

```json
{
  "UserId": "012345678901",
  "Account": "012345678901",
  "Arn": "arn:aws:iam::012345678901:root"
}
```

Delete the bucket policy:

```
aws s3api delete-bucket-policy --bucket <bucket_name>
```

Finally, edit `./07-security-foundations/main.tf` again, and in the `security_foundation_security` module, set:

```
validate_org_root_features = false
```

Amazon GuardDuty is a powerful threat detection service that continuously monitors your AWS environment for suspicious activity, misconfigurations, and anomalous behavior. It's a key part of any security foundation.
- The `security_foundation_management` module enables GuardDuty for the AWS Organization and delegates the AWS Security Account as the GuardDuty administrator.
- The `security_foundation_security` module auto-enables GuardDuty across all AWS accounts in the Organization, including any new ones added later.
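The delegation pattern those two modules implement can be sketched as follows; the variable names and provider alias are assumptions, not the modules' actual interface:

```hcl
# Hedged sketch, from the Management Account: delegate GuardDuty
# administration to the Security Account.
resource "aws_guardduty_detector" "management" {
  enable = true
}

resource "aws_guardduty_organization_admin_account" "security" {
  admin_account_id = var.security_account_id # placeholder variable
}

# Hedged sketch, from the Security (admin) account: auto-enable GuardDuty
# for all current and future member accounts.
resource "aws_guardduty_organization_configuration" "all_members" {
  provider                         = aws.security
  detector_id                      = var.security_detector_id # placeholder
  auto_enable_organization_members = "ALL"
}
```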
⚠️ Note: It can take up to 24 hours for GuardDuty configuration changes to propagate across all member accounts.
To validate that GuardDuty is working as expected:
- Log into the AWS Platform Account.
- Navigate to GuardDuty in the AWS Console.
- Use the console option to generate sample findings.
- Then, log into the AWS Security Account (GuardDuty admin account).
- In the Security Account's GuardDuty console, you should now see the sample findings from the Platform Account.
This confirms that all member account activity is being aggregated and monitored centrally from the Security Account.
AWS Audit Manager helps you continuously audit your AWS usage by automatically collecting evidence to evaluate compliance with frameworks like ISO/IEC 27001, and others.
- The `security_foundation_management` module enables Audit Manager for the AWS Organization and delegates the AWS Security Account as the Audit Manager admin.
- The `security_foundation_security` module:
  - Creates three custom controls to check:
    - CloudTrail is enabled
    - CloudTrail is encrypted
    - S3 public access is blocked at the account level
  - Creates an assessment to aggregate and report findings across accounts
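The management-side delegation can be sketched with the AWS provider's Audit Manager resources; a hedged sketch with a placeholder variable:

```hcl
# Hedged sketch, from the Management Account: register the Security
# Account as the delegated Audit Manager administrator.
resource "aws_auditmanager_organization_admin_account_registration" "security" {
  admin_account_id = var.security_account_id # placeholder variable
}
```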
⚠️ Once the assessment is created, evidence collection starts automatically for the defined controls. It may take up to 24 hours for the evidence to appear in the Audit Manager console.
AWS Security Hub is a centralized security management service that aggregates and prioritizes findings across your AWS environment. It integrates with AWS-native services and third-party tools like Snyk, and supports automation through native AWS integrations.
It seamlessly fits into CLI, API, and Infrastructure as Code workflows, making it ideal for automated and continuous monitoring.
- The `security_foundation_management` module enables Security Hub for the entire AWS Organization.
- It delegates the AWS Security Account as the Security Hub administrator.
- The `security_foundation_security` module auto-enrolls all member accounts into Security Hub.
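A hedged Terraform sketch of that delegation and auto-enrolment (variable names and provider alias are assumptions):

```hcl
# Hedged sketch, from the Management Account: delegate Security Hub
# administration to the Security Account.
resource "aws_securityhub_organization_admin_account" "security" {
  admin_account_id = var.security_account_id # placeholder variable
}

# Hedged sketch, from the Security (admin) account: auto-enable
# Security Hub in newly added member accounts.
resource "aws_securityhub_organization_configuration" "this" {
  provider    = aws.security
  auto_enable = true
}
```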
⚠️ Note: Accounts created before enabling Security Hub must be manually invited.
To auto-invite existing accounts, enable the following variable in `./modules/security-foundation-security`:

```hcl
module "security_foundation_security" {
  source = "./modules/security-foundation-security"

  providers = {
    aws = aws.security
  }

  enable_member_account_invites = true
  security_hub_member_invite    = local.security_hub_member_invite
}
```
And in `locals.tf`:

```hcl
security_hub_member_invite = {
  logging = {
    account_id = var.logging_account_id
    email      = var.logging_account_email
  }
}
```

We've scoped aggregation to specific governed AWS regions: eu-west-1, eu-west-2, and eu-west-3.
To apply region aggregation, enable the following variable in `./modules/security-foundation-security`:
```hcl
module "security_foundation_security" {
  source = "./modules/security-foundation-security"

  providers = {
    aws = aws.security
  }

  enable_sechub_aggregator = true
}
```

Sometimes it's best to narrow your view to the most critical risks. For example:
- Critical or High severity findings
- In platform or customer-facing accounts
To apply custom insights, enable the following variable in `./modules/security-foundation-security`:

```hcl
module "security_foundation_security" {
  source = "./modules/security-foundation-security"

  providers = {
    aws = aws.security
  }

  enable_sechub_insights = true
}
```

Security Hub Automation Rules allow you to:
- Suppress irrelevant findings
- Change severity levels
- Add tags or notes
- Route findings based on account, resource, or region
In this setup, the module uses the `aws_securityhub_automation_rule` resource to:
- Elevate any HIGH severity finding in the Management or Security Account to CRITICAL
- Attach a note: "Please address this ASAP, this is a high-risk account."
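That rule can be sketched roughly as follows; the block layout should be checked against the AWS provider documentation, and the account reference and names are placeholders:

```hcl
# Hedged sketch: elevate HIGH findings in a high-risk account to CRITICAL
# and attach an explanatory note.
resource "aws_securityhub_automation_rule" "elevate_high_risk" {
  rule_name   = "elevate-high-risk-accounts" # placeholder name
  rule_order  = 1
  description = "Elevate HIGH findings in high-risk accounts to CRITICAL"

  criteria {
    aws_account_id {
      comparison = "EQUALS"
      value      = var.security_account_id # placeholder variable
    }
    severity_label {
      comparison = "EQUALS"
      value      = "HIGH"
    }
  }

  actions {
    type = "FINDING_FIELDS_UPDATE"
    finding_fields_update {
      severity {
        label = "CRITICAL"
      }
      note {
        text       = "Please address this ASAP, this is a high-risk account."
        updated_by = "automation-rule" # placeholder
      }
    }
  }
}
```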
To apply automation rules, enable the following variable in `./modules/security-foundation-security`:

```hcl
module "security_foundation_security" {
  source = "./modules/security-foundation-security"

  providers = {
    aws = aws.security
  }

  enable_sechub_automation_rule = true
}
```

Security Hub isn't just for visibility and triage; it also enables real-time, automated remediation of security and compliance issues.
Thanks to its integrations with GuardDuty, Audit Manager, IAM Access Analyzer, and more, Security Hub can detect threats or misconfigurations and trigger custom actions via:
- AWS Lambda
- Systems Manager Automation Documents (SSM Docs)
- EventBridge and more
Scenario: John pushes a container image to Amazon ECR that includes known vulnerabilities and forgets to delete it.
Remediation Flow:
- ECRβs built-in image scanning detects the vulnerability
- EventBridge captures the scan event and routes it to Lambda
- Lambda sends the event to Security Hub
- A custom action in Security Hub triggers another Lambda that blocks the vulnerable image from being used
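Step 2 of the flow can be sketched as an EventBridge rule and target; resource names and the Lambda reference are placeholders:

```hcl
# Hedged sketch: route ECR image-scan completion events to a Lambda
# that forwards them to Security Hub.
resource "aws_cloudwatch_event_rule" "ecr_scan" {
  name = "ecr-scan-findings" # placeholder name

  event_pattern = jsonencode({
    "source"      = ["aws.ecr"]
    "detail-type" = ["ECR Image Scan"]
  })
}

resource "aws_cloudwatch_event_target" "forward_to_lambda" {
  rule = aws_cloudwatch_event_rule.ecr_scan.name
  arn  = aws_lambda_function.forward_to_securityhub.arn # placeholder reference
}
```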
To apply the remediation workflow, set the following variable in `./modules/security-foundation-security`:

```hcl
module "security_foundation_security" {
  source = "./modules/security-foundation-security"

  providers = {
    aws = aws.security
  }

  enable_sechub_ecr_remediation = true
}
```

AWS provides a CloudFormation repository on GitHub with many integration patterns. This ECR example was adapted from there.
To test this in a repeatable way, use the helper script:

```
./07-security-foundations/validate-sechub-custom-actions/ecr_test_trigger.sh
```

This script simulates the full lifecycle: push → scan → detect → trigger remediation.
Before running the script, ensure:
- Docker is installed and running
- Your AWS CLI is configured with access to the AWS Security Account
Manually checking the Security Hub dashboard isn't ideal; you want to be notified only when it matters.
In this step, we'll integrate AWS Security Hub with Slack to receive real-time alerts when findings are triggered.
Using the Slack workspace provided, follow the setup guide below until the CloudFormation step: AWS Security Hub + Chatbot Integration Guide
Ensure your `terraform.tfvars` includes the correct values:

```hcl
slack_channel_id = "C07EZ1ABC23"
slack_team_id    = "T07EA123LEP"
```

To activate the integration, modify the module declaration in `./modules/security-foundation-security`:

```hcl
module "security_foundation_security" {
  source = "./modules/security-foundation-security"

  providers = {
    aws = aws.security
  }

  enable_sechub_slack_integration = true
  slack_channel_id                = var.slack_channel_id
  slack_team_id                   = var.slack_team_id
}
```

You can use the existing helper script to simulate a vulnerability scenario:
```
./07-security-foundations/validate-sechub-custom-actions/ecr_test_trigger.sh
```

Optionally, make a change (e.g., deploy a new ECR repo or application version) before rerunning the script.
You should then receive a Slack alert in the configured channel, confirming successful integration and alert routing.
Account Factory Terraform (AFT) is an AWS framework that enables automated, consistent account provisioning and customization using Terraform, our preferred infrastructure-as-code tool.
The provisioning flow moves from left to right as follows:
- An account is requested via the AFT Account Request repository
- Provisioning-time customizations are applied in the AFT Management Account
- Global customizations are applied inside each new account via an auto-created pipeline
- Account-specific customizations, if defined, are executed inside the relevant accounts
AWS recommends managing AFT from a dedicated AWS account inside a Platform or Infrastructure OU. We'll use the Platform Account created earlier in `05-baseline-ou`.
```
cd ./08-aft-setup
```

Copy the example file:

```
cp terraform.tfvars.example terraform.tfvars
```

Update the values:

```hcl
security_account_id = "123456789"
logging_account_id  = "123456789"
platform_account_id = "123456789012"
github_organization = "MY_GITHUB_ORG_NAME"
```

Use your personal or work GitHub account; just make sure the repos match the names below. Replace all `account_id` values using the SSO console.
```
./08-aft-setup/aft-repos
├── aft-account-customizations/
├── aft-account-provisioning-customizations/
├── aft-account-request/
└── aft-global-customizations/
```

- `aft-account-request`: defines account creation requests
- `aft-global-customizations`: applies to all accounts
- `aft-account-customizations`: account-specific customization templates
- `aft-account-provisioning-customizations`: applies only at provisioning time
⚠️ AFT does not support monorepo setups.
You must push each directory to a separate GitHub repo under your account. Use the exact same repo name as the folder.
- Navigate to the AFT setup directory:

```
cd ./08-aft-setup
```

- Make the export script executable:

```
chmod +x export_repos.sh
```

- Run the script:

```
./export_repos.sh
```

This script will:
- Copy the following 4 AFT folders to your desktop:
  - `aft-account-request`
  - `aft-account-customizations`
  - `aft-global-customizations`
  - `aft-account-provisioning-customizations`
- Initialize and push each as a GitHub repository

Note: You must have the GitHub CLI or a personal access token already configured on your system for this to work.
Once complete, you should see all 4 repositories in your GitHub account and can proceed with linking them to the AFT module in your Terraform deployment.
This module deploys the full AFT infrastructure into the AWS Management Account, as shown in the architecture diagram. It references the GitHub repos you just pushed.
Once the deployment is successful, a series of Service Catalog permissions will automatically configure relationships between the Platform and Management Accounts to allow safe, governed account vending.
Stay tuned in the workshop to observe and validate each step live.
After the AFT module and Terraform code deploy successfully:
- Navigate to the AWS Platform Account
- Open the CodePipeline service
- You will likely see two `ct-aft-*` pipelines in a failed state:

⚠️ This is expected; it typically indicates an unvalidated connection between GitHub and CodePipeline.
- In the AWS Console, go to Developer Tools → Connections
- Locate the GitHub connection configured for AFT
- Click Validate to authorize it
Once validated, revisit the failed pipelines and retry the execution. They should now proceed successfully with your latest GitHub source commits.
Instead of provisioning accounts directly using Terraform, we now shift to using the Account Factory Terraform (AFT) pipeline for scalable and repeatable account provisioning.
We'll use the repository: 09-aft-account-request
We will:
- Provision a new Production Account in the existing Product OU
- Import existing accounts into AFT management:
  - Security Account (Security OU)
  - Logging Account (Security OU)
  - Production Account (Product OU)
Go to the Platform Account, open Secrets Manager, and create a new secret named:
aft-account-secrets
Use key/value pairs for the following values:
- `security_account_email`: your existing security email (with `+` subaddressing)
- `production_account_email`: a new subaddressed email for the Production account
- `logging_account_email`: your logging email
- `sso_user_email`: your main workshop email
This avoids hardcoding sensitive data and follows AWS security best practices.
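If you prefer the CLI, the same secret can be composed and stored from a shell. The sketch below builds the payload locally (all addresses are hypothetical placeholders); the `aws secretsmanager` call is commented out because it needs Platform Account credentials:

```shell
# Compose the key/value payload for the aft-account-secrets secret.
# All email addresses below are hypothetical placeholders.
secret_json=$(printf '{"security_account_email":"%s","production_account_email":"%s","logging_account_email":"%s","sso_user_email":"%s"}' \
  "user+security@email.me" "user+production@email.me" "user+logging@email.me" "user@email.me")
echo "$secret_json"

# With Platform Account credentials configured, store it:
# aws secretsmanager create-secret --name aft-account-secrets --secret-string "$secret_json"
```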
- Copy the folder contents from `09-aft-account-request`
- Paste them into your GitHub repo named `aft-account-request`
- Commit and merge into the `main` branch
AFT will now:
- Check if the account already exists
  - If yes, it imports the account into AFT management if the email and ID match
  - If not, it creates the account from scratch
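For reference, an entry in the `aft-account-request` repo generally follows the AFT request-module schema. This is a hedged sketch with placeholder values, not the workshop's exact request:

```hcl
# Hedged sketch of an AFT account request (placeholder values throughout).
module "production_account" {
  source = "./modules/aft-account-request"

  control_tower_parameters = {
    AccountEmail              = "user+production@email.me" # placeholder
    AccountName               = "Production"
    ManagedOrganizationalUnit = "Product"
    SSOUserEmail              = "user@email.me" # placeholder
    SSOUserFirstName          = "Jane"          # placeholder
    SSOUserLastName           = "Doe"           # placeholder
  }

  account_tags = {
    "environment" = "production"
  }

  change_management_parameters = {
    change_requested_by = "Platform Team"            # placeholder
    change_reason       = "Workshop account vending" # placeholder
  }

  # Must match a folder name in aft-account-customizations (case sensitive)
  account_customizations_name = "PRODUCTION"
}
```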
In `locals.tf`, make sure to define the `customizations_name`; this tag is crucial for applying account-level customizations later.
- Go to AWS Platform Account → CodePipeline
- Run the `aft-account-request` pipeline manually
- Wait for it to complete (a few minutes)
- Then check your Management Account; the Production Account should appear as provisioning starts
This confirms your AFT setup is working end-to-end for both account creation and importing existing accounts.
Now that our AWS accounts are fully managed by AFT, we can start applying customizations per account via the AFT pipeline.
```
cd ./010-aft-account-customizations
```

Inside, you'll see one folder:

`PRODUCTION`

The folder names must match the `customizations_name` value defined in your earlier AFT account request (case sensitive).
Copy this folder into your `aft-account-customizations` GitHub repository and merge into `main`.
For Production accounts:
- An AWS Budget
  - $100 in Production
- A Terraform module: `yl-finance-infra`
  - This simulates deployment infrastructure for a React-based company application
By default, AFT customizations only run at account creation time. To apply them post-deployment, we trigger them manually using AWS Step Functions:
- Navigate to Step Functions in the Platform Account
- Select the state machine named `aft-invoke-customisations`
- Click Start Execution, and use this JSON input:

```json
{
  "include": [
    {
      "type": "ous",
      "target_value": ["Product"]
    }
  ]
}
```

- Monitor the execution and associated CodePipeline runs in the Platform Account
- Check Production accounts for the results
To fully deploy the simulated React app, follow the instructions in this guide: OmarFinance React App - GitHub
In addition to account-specific customizations, AFT also supports global customizations that apply to all AWS accounts under its management.
- Copy the Terraform folder from: `./011-aft-global-customizations/terraform`
  - This Terraform module defines an S3 global block policy that disables public access across all buckets in every account.
- Push the contents to your GitHub repo named: `aft-global-customizations`
- Commit and merge to the `main` branch.
To apply the change across all accounts, invoke the AFT Step Function:
- Go to AWS Step Functions in the Platform Account
- Select `aft-invoke-customisations`
- Start a new execution with the following input:

```json
{
  "include": [
    {
      "type": "all"
    }
  ]
}
```

- Wait for the execution to complete successfully
- Monitor CodePipeline executions for customization runs across all accounts
To confirm that the global S3 block policy was applied:
- Navigate to any AFT-managed account (e.g., Logging Account)
- Open the S3 Console
- You should see that block public access is enforced organization-wide
This ensures consistent baseline security policies across your entire AWS organization.
Although we haven't explored this pipeline much, the Account Provisioning Customizations pipeline plays a crucial role. It:
- Executes both account-specific and global customizations
- Applies non-Terraform-based configurations, such as API calls or Lambda logic
In this step, we'll use it to assign AWS Alternate Contacts to individual accounts during the AFT Account Request process.
Update the existing `aft-account-secrets` in Secrets Manager with:
- `aws_ct_mgt_account_id`: fetch from the SSO console
- `aws_ct_mgt_org_id`: fetch from AWS Organizations settings in the Management Account

Then:
- Navigate to the folder: `cd ./012-aft-account-provisioning-customizations`
- Copy the contents of the Terraform folder into your GitHub repo named `aft-account-provisioning-customizations`
- Merge the code into `main`
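If you'd rather fetch these two values from the CLI than the console, each is a single API call. A sketch, assuming credentials for the Management Account; the secret key names match those expected above:

```python
def fetch_secret_values() -> dict:
    """Return the two values needed in aft-account-secrets.

    Run with Management Account credentials: the account ID comes from STS,
    the organization ID from AWS Organizations.
    """
    import boto3  # imported lazily; the key names below are usable offline

    account_id = boto3.client("sts").get_caller_identity()["Account"]
    org = boto3.client("organizations").describe_organization()
    return {
        "aws_ct_mgt_account_id": account_id,
        "aws_ct_mgt_org_id": org["Organization"]["Id"],
    }

# Secret keys expected by the provisioning customizations module.
EXPECTED_KEYS = ("aws_ct_mgt_account_id", "aws_ct_mgt_org_id")
```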
This module will:
- Deploy AWS Lambda functions
- Use them to update Alternate Contacts across accounts (e.g., Operations, Billing)
- Validate changes with IAM permissions through Terraform
- Still in `./012-aft-account-provisioning-customizations`, navigate to `./aft-account-requests-alternate-contacts`
- Inside, copy the Terraform folder
- Replace the corresponding folder in your GitHub repo `aft-account-request`
- Merge the changes to `main`
Check `locals.tf` in the request folder; you'll see alternate contact details configured for Production accounts in the Product OU.
- For example, the Head of Product is defined as the Operations contact
These contacts will be automatically applied via Lambda during the provisioning pipeline execution.
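The provisioning Lambdas ultimately boil down to the Account Management API's `put-alternate-contact` call, made from the management account against each member account. A hedged sketch of that call; the contact details below are illustrative placeholders, not the values from `locals.tf`:

```python
# Illustrative Operations contact payload; the real values come from
# locals.tf in the aft-account-request repository.
OPERATIONS_CONTACT = {
    "AlternateContactType": "OPERATIONS",
    "Name": "Head of Product",
    "Title": "Head of Product",
    "EmailAddress": "ops-contact@example.com",  # placeholder
    "PhoneNumber": "+44-000-000-0000",          # placeholder
}

def set_operations_contact(account_id: str) -> None:
    """Set the Operations alternate contact on a member account.

    Must run from the management account (or a delegated admin account);
    this is essentially what the provisioning Lambda does per account.
    """
    import boto3  # imported lazily so the payload is inspectable offline

    account = boto3.client("account")
    account.put_alternate_contact(AccountId=account_id, **OPERATIONS_CONTACT)
```

The same API offers `get_alternate_contact`, which is a quick way to verify the result from the CLI instead of the Billing console.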
To validate the alternate contact setup:
- Run the Step Function `aft-invoke-customizations` using the Product OU targeting template:

```json
{
  "include": [
    {
      "type": "ous",
      "target_value": ["Product"]
    }
  ]
}
```

- Once complete, go to the AWS Billing Dashboard in a Production account
- Confirm the Alternate Contacts have been updated according to the configuration
Centralizing logs is critical for observability and threat detection across your AWS accounts. In this step, we'll set up log forwarding from the AWS Logging Account to Datadog.
Navigate to the folder:

```shell
cd ./013-datadog-logging
```

Copy the example variable file:

```shell
cp terraform.tfvars.example terraform.tfvars
```

Update the following values using your Datadog API and application keys (available from the provided Datadog Console link):

```hcl
datadog_api_key = "XXXXXXXXXXXXXXXXXXXXXX"
datadog_app_key = "XXXXXXXXXXXXXXXXXXXXXX"
```

Ensure that you have CLI access to the AWS Logging Account. You can achieve this via:
- AWS SSO temporary credentials
- Or by creating an IAM user with CLI access

Run the following Terraform commands:

```shell
terraform init
terraform apply --auto-approve
```

This will:
- Enable centralized log forwarding from your Logging Account
- Stream logs directly to Datadog for monitoring and alerting
After deployment, verify the Datadog dashboard for new incoming logs from the Logging Account.
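You can also confirm ingestion programmatically via Datadog's v2 Logs Search API, using the same API and application keys as the tfvars file. A minimal standard-library sketch; the `source:cloudtrail` query is an assumption about how your forwarder tags logs, so adjust it to match your setup:

```python
import json
from urllib import request

# Search body for Datadog's v2 Logs Search endpoint; the query is a guess
# at the forwarder's tagging - adjust "source:cloudtrail" as needed.
SEARCH_BODY = {
    "filter": {"query": "source:cloudtrail", "from": "now-15m", "to": "now"},
    "page": {"limit": 5},
}

def search_recent_logs(api_key: str, app_key: str,
                       site: str = "datadoghq.com") -> dict:
    """POST a log search to Datadog and return the parsed JSON response.

    `site` may differ for EU or other Datadog regions (e.g. datadoghq.eu).
    """
    req = request.Request(
        f"https://api.{site}/api/v2/logs/events/search",
        data=json.dumps(SEARCH_BODY).encode(),
        headers={
            "Content-Type": "application/json",
            "DD-API-KEY": api_key,
            "DD-APPLICATION-KEY": app_key,
        },
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.load(resp)
```

A non-empty `data` array in the response confirms logs from the Logging Account are arriving.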
Thank you for participating in the workshop!
We'd really appreciate it if you could take a moment to complete the feedback form using the link below.
- Helps us improve future workshops and tailor them better for YouLend teams
- Captures your questions and follow-ups to be addressed post-workshop
- Acts as a reference point when discussing future needs with external vendors for Control Tower solutions
Your input is highly valuable: even just a few lines can go a long way.
Link to form: [Form]
As we wrap up the workshop, let's reflect on what a well-architected Control Tower setup should look like when leveraging AWS Organizations effectively.
This final architecture represents an evolution of everything we've covered, from initial provisioning and OU baselining to centralized security and automation pipelines via AFT.
To follow along, please visit the Medium article linked below and scroll to the section titled:
➡️ A Well-Architected Control Tower: Account Breakdown
Medium Part 4
When working with AWS Control Tower, it's important to take a step back and ask the right questions; this section is meant to help plant those thoughts early so you can address them head-on as your implementation matures.
To follow along, please visit the Medium article linked below and scroll to the section titled:
➡️ Key Considerations
Medium Part 4
Thank you for joining the YouLend Control Tower Workshop!
If you'd like to revisit or dive deeper into what we covered, we've prepared a 4-part Medium article series that walks through everything from foundational setup to advanced customizations, and even more than we could fit into the live sessions.
We appreciate your engagement and thank everyone involved in organizing and contributing to this learning experience.
Feel free to share the series internally or use it as a reference for upcoming architecture discussions.