A structured, hands-on Terraform study repository for beginners with zero prior Terraform or cloud experience. Every numbered subfolder is a standalone env0 template covering AWS (primary), Azure, and GCP.
Who is this for? Anyone who wants to learn how to manage cloud infrastructure using code — whether you're studying for the HashiCorp Terraform Associate certification, ramping up at a new job, or just tired of clicking around in the AWS console.
- What is Infrastructure as Code (IaC)?
- What is Terraform?
- Core Concepts Explainer
- Install & Setup
- Learning Path
- Repo Structure
- How env0 Works with These Templates
Before IaC, cloud infrastructure was managed manually — someone would log into the AWS console, click through menus, spin up a server, install software on it by hand, and write down what they did in a wiki page that was out of date by Thursday.
This created a set of well-known problems:
- Snowflake servers — every server becomes a unique, hand-crafted artifact that nobody fully understands. If it dies, reproducing it is a guessing game.
- "Works on my machine" infrastructure — dev, staging, and production environments drift apart over time because they were built by different people, on different days, clicking different things.
- No audit trail — you cannot `git blame` a cloud console. When something breaks at 2 AM, you have no reliable record of what changed, when, or why.
- Slow provisioning — creating a new environment for a new team or feature branch takes days of manual work instead of minutes.
Infrastructure as Code means you describe your infrastructure in text files, check those files into version control, and let a tool read them to create, update, and destroy real cloud resources.
The benefits are direct answers to the problems above:
| Problem | IaC Solution |
|---|---|
| Snowflake servers | Every environment is built from the same code — reproducible by definition |
| Environment drift | Run the same code against dev, staging, and prod — they stay identical |
| No audit trail | Every change is a git commit with an author, timestamp, and message |
| Slow provisioning | A new environment is terraform apply — minutes, not days |
Key insight: IaC treats your infrastructure the same way your application treats its source code. If you would not manage your app by SSH-ing into a server and editing files by hand, you should not manage your infrastructure that way either.
Terraform is an open-source IaC tool created by HashiCorp in 2014 and now maintained under the BSL license (with an open-source fork called OpenTofu). It lets you write declarative configuration files describing what infrastructure you want, and it figures out how to create it.
Declarative vs Imperative: You do not write steps ("first create this, then do that"). You write the desired end state ("I want an S3 bucket with these properties") and Terraform works out the steps.
There are several IaC tools. Here is how Terraform compares to the most common ones:
| Tool | Category | Language | Cloud Support | Key Trade-off |
|---|---|---|---|---|
| Terraform | Provisioning | HCL | Multi-cloud (AWS, Azure, GCP, 3,000+ providers) | Excellent for provisioning; not designed for software config inside a VM |
| Ansible | Configuration Management | YAML / Python | Agentless, SSH-based | Great for configuring what runs inside a server; not ideal for creating the server itself |
| Pulumi | Provisioning | Python, TypeScript, Go, C# | Multi-cloud | Use real programming languages instead of HCL; steeper learning curve, more expressive |
| CloudFormation | Provisioning | JSON / YAML | AWS only | Native AWS integration, no extra tool to install; locked to AWS, verbose syntax |
| CDK for Terraform (CDKTF) | Provisioning | Python, TypeScript, etc. | Multi-cloud | Terraform's dependency graph + real language; newer, smaller community |
Rule of thumb: Use Terraform to create a VM. Use Ansible (or cloud-init) to configure what runs inside it. They complement each other and are often used together.
When Terraform creates a resource — say, an S3 bucket — it needs to remember that it created that bucket and what its current properties are. It stores this information in a state file, by default a local file called terraform.tfstate.
The state file is a JSON document that maps your Terraform resource definitions to real-world cloud resource IDs. It is the bridge between your .tf files and actual infrastructure.
```text
Your code ──▶ terraform.tfstate ──▶ Real AWS/Azure/GCP resource
```
- Planning: Terraform reads state to compute the difference between what exists and what your code describes. Without state, it cannot tell what already exists.
- Dependency tracking: State records the order in which resources were created and their output attributes (e.g., the ARN of an IAM role that another resource needs).
- Performance: Terraform does not re-query the cloud API for every resource on every run. It uses state as a cache.
If the state file is lost or deleted, Terraform loses track of all the infrastructure it created. Running `terraform apply` again will try to create everything from scratch — which usually fails because the resources already exist, or creates duplicates that you then have to clean up by hand. Losing state is a serious operational problem.
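If this ever happens, recovery means re-adopting each surviving resource into a fresh state file. As a sketch (the bucket name here is a placeholder), Terraform 1.5+ lets you declare imports directly in HCL:

```hcl
# Adopt an existing bucket into state instead of creating a new one.
# A matching resource "aws_s3_bucket" "example" block must already
# exist in your configuration for this import to have a target.
import {
  to = aws_s3_bucket.example
  id = "my-unique-bucket-name" # the real bucket's name in AWS
}
```

Older Terraform versions use the CLI equivalent, `terraform import aws_s3_bucket.example my-unique-bucket-name`.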
For any real project (i.e., anything used by more than one person), you must store state remotely so the whole team shares the same view. The most common backend for AWS is an S3 bucket with a DynamoDB table for locking:
```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket"
    key            = "aws/01-basics/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-state-lock"
    encrypt        = true
  }
}
```

State locking: When one engineer runs `terraform apply`, the DynamoDB lock prevents a second engineer from running `apply` at the same time. Without locking, two simultaneous applies can corrupt the state file.
04-state/ in this repo covers remote backends and locking in detail.
The Terraform workflow has four core commands. Understand these and everything else falls into place.
Downloads the provider plugins listed in your required_providers block and sets up the backend. Run this first, in every new directory, and again whenever you change providers or backends.
```shell
terraform init
```

`terraform plan` reads your .tf files, reads the current state, queries the cloud API for the real state of each resource, and then computes a diff. It shows you exactly what will be created, modified, or destroyed — without doing anything.
```text
  + create  aws_s3_bucket.example
  ~ update  aws_iam_role.app (permissions changed)
  - destroy aws_security_group.old
```
The symbols mean:
- `+` — resource will be created
- `~` — resource will be updated in place
- `-` — resource will be destroyed
- `-/+` — resource must be destroyed and re-created (a "replacement")
Terraform does not apply resources in the order you wrote them. It builds a directed acyclic graph (DAG) of all resources and their dependencies, then applies them in the correct order — parallelizing independent resources automatically.
For example, if a subnet depends on a VPC, Terraform creates the VPC first, then the subnet, regardless of which you wrote first in the file.
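That example, sketched in HCL (resource names are illustrative):

```hcl
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "app" {
  # Referencing the VPC's id creates an implicit edge in the DAG:
  # Terraform will always create the VPC before this subnet.
  vpc_id     = aws_vpc.main.id
  cidr_block = "10.0.1.0/24"
}
```

The order of the blocks in the file is irrelevant; the `aws_vpc.main.id` reference is what tells Terraform the subnet must wait for the VPC.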
Shows the plan again and asks for confirmation (yes), then executes it. Resources are created, modified, or destroyed, and the state file is updated.
```shell
terraform apply

# or, skip the confirmation prompt (use carefully in CI):
terraform apply -auto-approve
```

`terraform destroy` is the reverse of apply — it destroys all resources managed by the current state file.
```shell
terraform destroy
```

Best practice: Always read the plan output carefully before typing `yes`. The plan is the safety net. Pay special attention to any `-/+` replacements, because replacing a database is not the same as updating it in place.
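For resources you truly cannot afford to replace, Terraform's `lifecycle` block can turn a dangerous plan into a hard error. A sketch (this database resource is illustrative, with most required arguments omitted):

```hcl
resource "aws_db_instance" "main" {
  identifier     = "app-db"
  engine         = "postgres"
  instance_class = "db.t3.micro"
  # ... other required arguments omitted for brevity

  lifecycle {
    # Any plan that would destroy (or replace) this resource
    # fails with an error instead of proceeding.
    prevent_destroy = true
  }
}
```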
A provider is a plugin that knows how to talk to a specific API. The AWS provider knows how to call the AWS API. The Azure provider knows how to call the Azure API. Without a provider, Terraform has no idea what an aws_s3_bucket is.
Providers are distributed separately from Terraform itself via the Terraform Registry. When you run terraform init, Terraform downloads the provider plugins your code needs.
```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = var.aws_region
}
```

This is taken directly from `aws/01-basics/main.tf` in this repo.
The ~> operator is a pessimistic constraint operator. ~> 5.0 means "allow any version in the 5.x range, but not 6.0 or higher." This lets you receive patch and minor updates automatically while protecting against breaking changes in a new major version.
| Constraint | Allowed range |
|---|---|
| `~> 5.0` | >= 5.0, < 6.0 |
| `~> 5.47.0` | >= 5.47.0, < 5.48.0 |
| `>= 5.0, < 6.0` | Same as `~> 5.0` (explicit form) |
Always pin your provider versions in production code. Unpinned providers can break your configuration when a new major version is released.
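You can pin the Terraform CLI version the same way, via `required_version` in the `terraform` block (the exact range here is just an example, not what the repo uses):

```hcl
terraform {
  # Pins the Terraform CLI itself, separately from provider versions
  required_version = ">= 1.5.0, < 2.0.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}
```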
These two concepts look similar but do fundamentally different things.
A resource block tells Terraform to create and manage a real cloud object. Terraform owns it: if you remove the block from your code, Terraform will destroy it on the next apply.
```hcl
resource "aws_s3_bucket" "example" {
  bucket = "my-unique-bucket-name"

  tags = {
    Environment = "study"
    ManagedBy   = "terraform"
  }
}
```

Syntax: `resource "<provider>_<type>" "<local_name>" { ... }`
A data block tells Terraform to look up an existing resource that is not managed by this Terraform configuration. Terraform does not create or destroy it — it just reads its attributes so you can reference them elsewhere.
```hcl
# Look up an existing VPC by its tags
data "aws_vpc" "main" {
  tags = {
    Name = "production-vpc"
  }
}

# Now use its ID in a resource
resource "aws_subnet" "app" {
  vpc_id     = data.aws_vpc.main.id
  cidr_block = "10.0.1.0/24"
}
```

| Scenario | Use |
|---|---|
| Creating a new S3 bucket for this project | resource |
| Reading the ID of a VPC created by another team's Terraform | data |
| Creating an EC2 instance | resource |
| Finding the latest Amazon Linux AMI ID | data |
| Creating an IAM role | resource |
| Referencing an existing IAM policy by name | data |
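For instance, the "latest Amazon Linux AMI" lookup from the table looks roughly like this (the filter values are illustrative):

```hcl
data "aws_ami" "amazon_linux" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["al2023-ami-*-x86_64"] # assumed name pattern for Amazon Linux 2023
  }
}

resource "aws_instance" "app" {
  # The data source is re-resolved on every plan, so this always
  # points at the newest matching AMI.
  ami           = data.aws_ami.amazon_linux.id
  instance_type = "t3.micro"
}
```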
HCL (HashiCorp Configuration Language) is the language Terraform uses. It is designed to be readable by humans and writable without being a full programming language.
Everything in HCL is organized into blocks. A block has a type, optional labels, and a body wrapped in {}.
```hcl
# block_type "label_one" "label_two" {
#   argument = value
# }

resource "aws_s3_bucket" "example" {
  bucket = "my-bucket"
}
```

Arguments are key-value pairs inside a block body:
```hcl
bucket  = "my-bucket" # string
port    = 8080        # number
enabled = true        # bool
```

Use `${}` inside a double-quoted string to embed an expression:
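Beyond these primitives, HCL also has collection values — lists and maps (the values here are illustrative):

```hcl
availability_zones = ["us-east-1a", "us-east-1b"] # list of strings

tags = {                                          # map of strings
  Environment = "study"
  ManagedBy   = "terraform"
}
```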
```hcl
variable "env" {
  default = "production"
}

resource "aws_s3_bucket" "example" {
  bucket = "myapp-${var.env}-assets"
  # Result: "myapp-production-assets"
}
```

Refer to the attributes of other resources using `<type>.<local_name>.<attribute>`:
```hcl
resource "aws_s3_bucket" "example" {
  bucket = "my-bucket"
}

resource "aws_s3_bucket_versioning" "example" {
  bucket = aws_s3_bucket.example.id # reference to the bucket above

  versioning_configuration {
    status = "Enabled"
  }
}
```

`locals` are computed values you define once and reuse — like a variable that is calculated rather than supplied as input:
```hcl
locals {
  common_tags = {
    Project   = "learning-terraform"
    ManagedBy = "terraform"
  }
}

resource "aws_s3_bucket" "example" {
  bucket = "my-bucket"
  tags   = local.common_tags
}
```

Input variables are the parameters of your Terraform module. They are defined in variables.tf and supplied by the caller or by terraform.tfvars:
```hcl
# variables.tf
variable "bucket_name" {
  description = "Globally unique S3 bucket name"
  type        = string
  # no default — caller must provide this
}

variable "aws_region" {
  description = "AWS region to deploy resources"
  type        = string
  default     = "us-east-1"
}
```

```hcl
# terraform.tfvars (never commit secrets here — use env vars or a secrets manager)
bucket_name = "alfonsomeraz-study-20240101"
aws_region  = "us-west-2"
```

Outputs expose values after apply — useful for passing data between modules or displaying the URL of a newly created resource:
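Variables can also carry validation rules that fail fast at plan time. A sketch (this block is not in the repo's code):

```hcl
variable "environment" {
  description = "Deployment environment"
  type        = string

  validation {
    # Rejects any value outside the allowed set before any API call is made
    condition     = contains(["dev", "staging", "prod"], var.environment)
    error_message = "environment must be one of: dev, staging, prod."
  }
}
```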
```hcl
output "bucket_arn" {
  description = "The ARN of the created S3 bucket"
  value       = aws_s3_bucket.example.arn
}
```

Option A — tfenv (recommended for learners)
tfenv is a version manager for Terraform, similar to nvm for Node.js or pyenv for Python. It lets you switch Terraform versions per project.
```shell
# macOS (Homebrew)
brew install tfenv

# Install a specific Terraform version
tfenv install 1.9.0
tfenv use 1.9.0

# Verify
terraform version
```

Option B — Direct download
Go to https://developer.hashicorp.com/terraform/install, download the binary for your OS, and add it to your PATH.
Option C — Homebrew (macOS, no version management)
```shell
brew tap hashicorp/tap
brew install hashicorp/tap/terraform
terraform version
```

Install the AWS CLI:

```shell
# macOS
brew install awscli

# Verify
aws --version
```

Configure your credentials (interactive — stores them in `~/.aws/credentials`):
```shell
aws configure
# AWS Access Key ID: AKIA...
# AWS Secret Access Key: ...
# Default region name: us-east-1
# Default output format: json
```

Environment variables (preferred for CI and local isolation)

Setting environment variables overrides whatever is in `~/.aws/credentials`, which makes them ideal for scripting and CI pipelines:
```shell
export AWS_ACCESS_KEY_ID="AKIAIOSFODNN7EXAMPLE"
export AWS_SECRET_ACCESS_KEY="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
export AWS_DEFAULT_REGION="us-east-1"
```

Security note: Never hard-code credentials in your `.tf` files or commit them to git. The AWS provider automatically picks up the standard environment variables and `~/.aws/credentials` — you never need to write a secret into HCL.
Install the Azure CLI:
```shell
# macOS
brew install azure-cli

# Verify
az --version
```

Authenticate:

```shell
az login
# A browser window opens — log in with your Azure account
# Your credentials are cached locally for Terraform to use
```

The AzureRM provider automatically picks up the credentials from `az login`. For CI/CD, use a service principal:
```shell
export ARM_CLIENT_ID="..."
export ARM_CLIENT_SECRET="..."
export ARM_SUBSCRIPTION_ID="..."
export ARM_TENANT_ID="..."
```

Install the Google Cloud CLI:
```shell
# macOS
brew install --cask google-cloud-sdk

# Verify
gcloud --version
```

Authenticate for local development (Application Default Credentials):

```shell
gcloud auth application-default login
# A browser window opens — log in with your Google account
```

The Google provider automatically picks up Application Default Credentials. For CI/CD, use a service account key file:

```shell
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account-key.json"
```

Work through these in order. Each module builds on the concepts introduced in the previous one. Every folder is a standalone Terraform configuration — `cd` into it and run init, plan, apply.
```shell
cd aws/01-basics
terraform init
terraform plan
terraform apply

# inspect what was created, then clean up:
terraform destroy
```

| Step | Folder | What you learn |
|---|---|---|
| 1 | `aws/01-basics` | Provider block, `required_providers`, version pinning, creating an S3 bucket, tagging, the full init/plan/apply/destroy workflow |
| 2 | `aws/02-variables` | Input variables, variable types, defaults, terraform.tfvars, output values, locals |
| 3 | `aws/03-modules` | Calling a reusable module, passing inputs, consuming outputs, the Terraform Registry |
| 4 | `aws/04-state` | S3 remote backend, DynamoDB state locking, `terraform state` commands, state migration |
| 5 | `aws/05-workspaces` | `terraform workspace` commands, using `${terraform.workspace}` in code, environment isolation patterns |
| 6 | `aws/06-functions` | Built-in functions (`toset`, `merge`, `lookup`, `templatefile`), `count`, `for_each`, dynamic blocks |
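As a preview of what `06-functions` covers, `for_each` plus `merge` can stamp out several similar resources from one block — a sketch (names and values are illustrative, not the repo's actual code):

```hcl
locals {
  common_tags = {
    Project   = "learning-terraform"
    ManagedBy = "terraform"
  }
}

variable "buckets" {
  type    = set(string)
  default = ["logs", "assets"]
}

resource "aws_s3_bucket" "each" {
  for_each = var.buckets          # one bucket per element of the set
  bucket   = "myapp-${each.key}"
  tags     = merge(local.common_tags, { Name = each.key })
}
```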
| Step | Folder | What you learn |
|---|---|---|
| 1 | `azure/01-basics` | AzureRM provider, resource groups, basic resource creation |
| 2 | `azure/02-variables` | Variables and outputs in an Azure context |
| 3 | `azure/03-modules` | Calling the reusable Azure storage account module |
| Step | Folder | What you learn |
|---|---|---|
| 1 | `gcp/01-basics` | Google provider, GCS bucket, project configuration |
| 2 | `gcp/02-variables` | Variables and outputs in a GCP context |
| 3 | `gcp/03-modules` | Calling the reusable GCS bucket module |
| Folder | What it contains |
|---|---|
| `modules/aws/s3-bucket` | A parameterized, reusable S3 bucket module consumed by aws/03-modules |
| `modules/azure/storage-account` | A reusable Azure Storage Account module |
| `modules/gcp/storage-bucket` | A reusable GCS bucket module |
Do the AWS track first. It has the most depth (six modules vs three for Azure and GCP). The patterns you learn in AWS — variables, modules, state, workspaces, functions — transfer directly to Azure and GCP. The cloud-specific syntax changes, but the Terraform concepts are identical.
```text
example/
├── README.md                  # You are here
├── aws/
│   ├── 01-basics/             # Provider block, S3 bucket, core workflow
│   │   ├── main.tf
│   │   └── variables.tf
│   ├── 02-variables/          # Input vars, outputs, locals, type constraints
│   ├── 03-modules/            # Calling the reusable S3 module
│   ├── 04-state/              # S3 remote backend + DynamoDB locking
│   ├── 05-workspaces/         # terraform workspace usage
│   └── 06-functions/          # for_each, count, dynamic blocks, templatefile
├── azure/
│   ├── 01-basics/             # AzureRM provider block, resource group
│   ├── 02-variables/
│   └── 03-modules/
├── gcp/
│   ├── 01-basics/             # Google provider block, GCS bucket
│   ├── 02-variables/
│   └── 03-modules/
└── modules/
    ├── aws/s3-bucket/         # Reusable S3 bucket module
    ├── azure/storage-account/ # Reusable Azure Storage Account module
    └── gcp/storage-bucket/    # Reusable GCS bucket module
```
Each numbered subfolder is a self-contained Terraform configuration. You can cd into any one of them independently and run the full Terraform workflow without touching any other folder.
env0 is a Terraform automation platform that adds a managed control plane on top of the standard Terraform workflow. Instead of running terraform apply from your laptop, env0 runs it for you in a consistent, governed environment.
Variable injection
Each subfolder declares its input variables in variables.tf. env0 reads these files and surfaces them as a form in the UI. Operators fill in values (or reference stored credentials) without ever editing a .tf file or setting shell environment variables. Secrets like AWS_SECRET_ACCESS_KEY are stored encrypted in env0 and injected at runtime — they are never visible in the UI after being set.
Managed Terraform workflow
env0 runs terraform init, terraform plan, and terraform apply in ephemeral compute. The plan output is shown in the env0 UI for review before apply is triggered — enforcing the human-review step that is easy to skip when running locally.
Drift detection
env0 can periodically run terraform plan in the background and alert you if it detects a diff between the code and the real infrastructure. This catches situations where someone made a manual change in the console ("ClickOps") that diverged from the Terraform state.
Policy enforcement
env0 integrates with OPA (Open Policy Agent) and Sentinel to enforce rules before an apply is allowed. Examples: "no resource may be created without a ManagedBy tag," "S3 buckets must never have public access enabled," "deployments to production require two approvals."
RBAC and audit log
Every apply is associated with a user identity and logged. You can see who deployed what, when, and what the plan showed — providing the audit trail that manual console work cannot.
- Add this repository as a template source in env0 (GitHub, GitLab, Bitbucket, or Azure DevOps are supported).
- When creating a new environment, select the subfolder (e.g., `aws/01-basics`) as the template root.
- env0 discovers `variables.tf` and presents the input variables in the UI.
- Set your cloud credentials once at the organization or project level — env0 injects them as environment variables (`AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_DEFAULT_REGION`) into every run automatically.
- Click Deploy — env0 runs `init → plan → (approval) → apply` and shows you live logs.
Credential security: env0 never exposes stored credential values after they are saved. They are encrypted at rest and decrypted only inside the ephemeral runner that executes your Terraform. Your engineers never need to handle AWS keys directly.
| Step | Local | env0 |
|---|---|---|
| Auth | `aws configure` or export env vars | Credentials stored in env0, injected at runtime |
| Init | `terraform init` | Automatic |
| Plan | `terraform plan` | Automatic, shown in UI for review |
| Apply | `terraform apply` (you type `yes`) | Button click after plan approval |
| State | Local file or self-managed S3 backend | env0 manages a remote backend per environment |
| Audit | None (who ran what?) | Full log tied to user identity |
| Drift detection | Manual (`terraform plan` when you remember) | Scheduled, automatic alerting |
```shell
terraform init              # Download providers, set up backend
terraform validate          # Check HCL syntax without connecting to any API
terraform fmt               # Auto-format your .tf files (run this before every commit)
terraform plan              # Show what will change
terraform apply             # Apply the changes (prompts for confirmation)
terraform destroy           # Destroy all managed resources
terraform output            # Print output values from the current state
terraform state list        # List all resources in state
terraform state show        # Show details of a specific resource in state
terraform import            # Import an existing resource into state
terraform workspace list    # List workspaces
terraform workspace new dev # Create and switch to a new workspace
```

| File | Purpose |
|---|---|
| `main.tf` | Primary resource definitions and provider configuration |
| `variables.tf` | Input variable declarations |
| `outputs.tf` | Output value declarations |
| `locals.tf` | Local value computations |
| `versions.tf` | `terraform {}` block with `required_version` and `required_providers` |
| `terraform.tfvars` | Default variable values (do not commit secrets) |
| `*.tfvars` | Environment-specific variable files (dev.tfvars, prod.tfvars) |
| `.terraform/` | Provider plugins downloaded by init — gitignore this |
| `terraform.tfstate` | Local state file — gitignore this for team projects |
| `.terraform.lock.hcl` | Provider version lock file — commit this to git |
Always add to `.gitignore`:

```text
.terraform/
terraform.tfstate
terraform.tfstate.backup
*.tfvars   # if they contain secrets
```
Built for the HashiCorp Terraform Associate certification and hands-on cloud learning. Each module is a working example — read the code, run it, break it, fix it.