alfonsomeraz-env0/terraform-env0-workshop

Terraform IaC Learning Hub

A structured, hands-on Terraform study repository for beginners with zero prior Terraform or cloud experience. Every numbered subfolder is a standalone env0 template covering AWS (primary), Azure, and GCP.

Who is this for? Anyone who wants to learn how to manage cloud infrastructure using code — whether you're studying for the HashiCorp Terraform Associate certification, ramping up at a new job, or just tired of clicking around in the AWS console.


Table of Contents

  1. What is Infrastructure as Code (IaC)?
  2. What is Terraform?
  3. Core Concepts Explainer
  4. Install & Setup
  5. Learning Path
  6. Repo Structure
  7. How env0 Works with These Templates

1. What is Infrastructure as Code (IaC)?

The problem it solves

Before IaC, cloud infrastructure was managed manually — someone would log into the AWS console, click through menus, spin up a server, install software on it by hand, and write down what they did in a wiki page that was out of date by Thursday.

This created a set of well-known problems:

  • Snowflake servers — every server becomes a unique, hand-crafted artifact that nobody fully understands. If it dies, reproducing it is a guessing game.
  • "Works on my machine" infrastructure — dev, staging, and production environments drift apart over time because they were built by different people, on different days, clicking different things.
  • No audit trail — you cannot git blame a cloud console. When something breaks at 2 AM, you have no reliable record of what changed, when, or why.
  • Slow provisioning — creating a new environment for a new team or feature branch takes days of manual work instead of minutes.

What IaC gives you

Infrastructure as Code means you describe your infrastructure in text files, check those files into version control, and let a tool read them to create, update, and destroy real cloud resources.

The benefits are direct answers to the problems above:

| Problem | IaC solution |
| --- | --- |
| Snowflake servers | Every environment is built from the same code — reproducible by definition |
| Environment drift | Run the same code against dev, staging, and prod — they stay identical |
| No audit trail | Every change is a git commit with an author, timestamp, and message |
| Slow provisioning | A new environment is one terraform apply — minutes, not days |

Key insight: IaC treats your infrastructure the same way your application treats its source code. If you would not manage your app by SSH-ing into a server and editing files by hand, you should not manage your infrastructure that way either.


2. What is Terraform?

Terraform is an IaC tool created by HashiCorp in 2014. It was open source (MPL 2.0) until 2023, when HashiCorp moved it to the Business Source License (BSL); the community maintains an open-source fork called OpenTofu. Terraform lets you write declarative configuration files describing what infrastructure you want, and it figures out how to create it.

Declarative vs Imperative: You do not write steps ("first create this, then do that"). You write the desired end state ("I want an S3 bucket with these properties") and Terraform works out the steps.
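As a concrete illustration, the entire "desired end state" for a bucket can be a single block — there is no create-if-missing logic, because Terraform derives the steps itself (a minimal sketch; the bucket name is a placeholder):

```hcl
# Declarative: describe WHAT you want, not HOW to make it.
# Terraform decides whether this means "create", "update", or "do nothing".
resource "aws_s3_bucket" "assets" {
  bucket = "myapp-assets-example" # placeholder; S3 bucket names are globally unique
}
```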

How Terraform fits in the IaC landscape

There are several IaC tools. Here is how Terraform compares to the most common ones:

| Tool | Category | Language | Cloud support | Key trade-off |
| --- | --- | --- | --- | --- |
| Terraform | Provisioning | HCL | Multi-cloud (AWS, Azure, GCP, 3,000+ providers) | Excellent for provisioning; not designed for software config inside a VM |
| Ansible | Configuration management | YAML / Python | Agentless, SSH-based | Great for configuring what runs inside a server; not ideal for creating the server itself |
| Pulumi | Provisioning | Python, TypeScript, Go, C# | Multi-cloud | Real programming languages instead of HCL; steeper learning curve, more expressive |
| CloudFormation | Provisioning | JSON / YAML | AWS only | Native AWS integration, no extra tool to install; locked to AWS, verbose syntax |
| CDK for Terraform (CDKTF) | Provisioning | Python, TypeScript, etc. | Multi-cloud | Terraform's dependency graph + real languages; newer, smaller community |

Rule of thumb: Use Terraform to create a VM. Use Ansible (or cloud-init) to configure what runs inside it. They complement each other and are often used together.


3. Core Concepts Explainer

3.1 State

What is Terraform state?

When Terraform creates a resource — say, an S3 bucket — it needs to remember that it created that bucket and what its current properties are. It stores this information in a state file, by default a local file called terraform.tfstate.

The state file is a JSON document that maps your Terraform resource definitions to real-world cloud resource IDs. It is the bridge between your .tf files and actual infrastructure.

Your code  ──▶  terraform.tfstate  ──▶  Real AWS/Azure/GCP resource

Why state matters

  • Planning: Terraform reads state to compute the difference between what exists and what your code describes. Without state, it cannot tell what already exists.
  • Dependency tracking: State records the dependencies between resources and their output attributes (e.g., the ARN of an IAM role that another resource needs).
  • Performance: Terraform does not re-query the cloud API for every resource on every run. It uses state as a cache.

What happens if you lose the state file?

Terraform loses track of all the infrastructure it created. Running terraform apply again will try to create everything from scratch — which usually fails because the resources already exist, or creates duplicates that you now have to clean up by hand. Losing state is a serious operational problem.
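If the state file is gone but the resources still exist, they can sometimes be re-adopted one at a time. Here is a hedged sketch using the import block available in Terraform 1.5+ (the resource and bucket names are placeholders; older versions use the terraform import CLI command instead):

```hcl
# Adopt an existing, unmanaged bucket back into Terraform state.
# After the next apply, Terraform tracks it as aws_s3_bucket.example.
import {
  to = aws_s3_bucket.example
  id = "my-unique-bucket-name" # for S3, the import ID is the bucket name
}

resource "aws_s3_bucket" "example" {
  bucket = "my-unique-bucket-name"
}
```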

Remote state

For any real project (i.e., anything used by more than one person), you must store state remotely so the whole team shares the same view. The most common backend for AWS is an S3 bucket with a DynamoDB table for locking:

terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket"
    key            = "aws/01-basics/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-state-lock"
    encrypt        = true
  }
}

State locking: When one engineer runs terraform apply, the DynamoDB lock prevents a second engineer from running apply at the same time. Without locking, two simultaneous applies can corrupt the state file.

aws/04-state/ in this repo covers remote backends and locking in detail.


3.2 The Plan / Apply Cycle

The Terraform workflow has four core commands. Understand these and everything else falls into place.

terraform init

Downloads the provider plugins listed in your required_providers block and sets up the backend. Run this first, in every new directory, and again whenever you change providers or backends.

terraform init

terraform plan

Reads your .tf files, reads the current state, queries the cloud API for the real state of each resource, and then computes a diff. It shows you exactly what will be created, modified, or destroyed — without doing anything.

  + create   aws_s3_bucket.example
  ~ update   aws_iam_role.app (permissions changed)
  - destroy  aws_security_group.old

The symbols mean:

  • + — resource will be created
  • ~ — resource will be updated in place
  • - — resource will be destroyed
  • -/+ — resource must be destroyed and re-created (a "replacement")

The dependency graph

Terraform does not apply resources in the order you wrote them. It builds a directed acyclic graph (DAG) of all resources and their dependencies, then applies them in the correct order — parallelizing independent resources automatically.

For example, if a subnet depends on a VPC, Terraform creates the VPC first, then the subnet, regardless of which you wrote first in the file.
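In HCL, that VPC-and-subnet dependency comes from an attribute reference, not from file order (a minimal sketch):

```hcl
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

# The reference to aws_vpc.main.id creates an implicit edge in the graph:
# Terraform always creates the VPC before this subnet, even if the blocks
# were written in the opposite order.
resource "aws_subnet" "app" {
  vpc_id     = aws_vpc.main.id
  cidr_block = "10.0.1.0/24"
}
```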

terraform apply

Shows the plan again and asks for confirmation (yes), then executes it. Resources are created, modified, or destroyed, and the state file is updated.

terraform apply
# or, skip the confirmation prompt (use carefully in CI):
terraform apply -auto-approve

terraform destroy

The reverse of apply — destroys all resources managed by the current state file.

terraform destroy

Best practice: Always read the plan output carefully before typing yes. The plan is the safety net. Pay special attention to any -/+ replacements, because replacing a database is not the same as updating it in place.


3.3 Providers

What is a provider?

A provider is a plugin that knows how to talk to a specific API. The AWS provider knows how to call the AWS API. The Azure provider knows how to call the Azure API. Without a provider, Terraform has no idea what an aws_s3_bucket is.

Providers are distributed separately from Terraform itself via the Terraform Registry. When you run terraform init, Terraform downloads the provider plugins your code needs.

Declaring a provider

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = var.aws_region
}

This is taken directly from aws/01-basics/main.tf in this repo.

Version pinning with ~>

The ~> operator is a pessimistic constraint operator. ~> 5.0 means "allow any version in the 5.x range, but not 6.0 or higher." This lets you receive patch and minor updates automatically while protecting against breaking changes in a new major version.

| Constraint | Allowed range |
| --- | --- |
| `~> 5.0` | `>= 5.0, < 6.0` |
| `~> 5.47` | `>= 5.47, < 5.48` |
| `>= 5.0, < 6.0` | Same as `~> 5.0` (explicit form) |

Always pin your provider versions in production code. Unpinned providers can break your configuration when a new major version is released.


3.4 Resources vs Data Sources

These two concepts look similar but do fundamentally different things.

Resources — CREATE things

A resource block tells Terraform to create and manage a real cloud object. Terraform owns it: if you remove the block from your code, Terraform will destroy it on the next apply.

resource "aws_s3_bucket" "example" {
  bucket = "my-unique-bucket-name"

  tags = {
    Environment = "study"
    ManagedBy   = "terraform"
  }
}

Syntax: resource "<provider>_<type>" "<local_name>" { ... }

Data sources — READ existing things

A data block tells Terraform to look up an existing resource that is not managed by this Terraform configuration. Terraform does not create or destroy it — it just reads its attributes so you can reference them elsewhere.

# Look up an existing VPC by its tags
data "aws_vpc" "main" {
  tags = {
    Name = "production-vpc"
  }
}

# Now use its ID in a resource
resource "aws_subnet" "app" {
  vpc_id     = data.aws_vpc.main.id
  cidr_block = "10.0.1.0/24"
}

When to use each

| Scenario | Use |
| --- | --- |
| Creating a new S3 bucket for this project | resource |
| Reading the ID of a VPC created by another team's Terraform | data |
| Creating an EC2 instance | resource |
| Finding the latest Amazon Linux AMI ID | data |
| Creating an IAM role | resource |
| Referencing an existing IAM policy by name | data |
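The "latest AMI" lookup is the classic data-source example: instead of hard-coding an image ID that goes stale, query for it at plan time. A sketch (the name filter pattern is one common choice for Amazon Linux 2023, not the only one):

```hcl
# Find the newest Amazon Linux 2023 x86_64 AMI published by Amazon.
data "aws_ami" "amazon_linux" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["al2023-ami-*-x86_64"]
  }
}

# Use the looked-up ID instead of a hard-coded "ami-..." string.
resource "aws_instance" "app" {
  ami           = data.aws_ami.amazon_linux.id
  instance_type = "t3.micro"
}
```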

3.5 HCL Syntax Primer

HCL (HashiCorp Configuration Language) is the language Terraform uses. It is designed to be easy for humans to read and write without being a full programming language.

Blocks

Everything in HCL is organized into blocks. A block has a type, optional labels, and a body wrapped in {}.

# block_type "label_one" "label_two" {
#   argument = value
# }

resource "aws_s3_bucket" "example" {
  bucket = "my-bucket"
}

Arguments

Arguments are key-value pairs inside a block body:

bucket  = "my-bucket"         # string
port    = 8080                # number
enabled = true                # bool

String interpolation

Use ${} inside a double-quoted string to embed an expression:

variable "env" {
  default = "production"
}

resource "aws_s3_bucket" "example" {
  bucket = "myapp-${var.env}-assets"
  # Result: "myapp-production-assets"
}

References

Refer to the attributes of other resources using <type>.<local_name>.<attribute>:

resource "aws_s3_bucket" "example" {
  bucket = "my-bucket"
}

resource "aws_s3_bucket_versioning" "example" {
  bucket = aws_s3_bucket.example.id  # reference to the bucket above

  versioning_configuration {
    status = "Enabled"
  }
}

Locals

locals are computed values you define once and reuse — like a variable whose value is calculated inside the configuration rather than supplied by the caller:

locals {
  common_tags = {
    Project   = "learning-terraform"
    ManagedBy = "terraform"
  }
}

resource "aws_s3_bucket" "example" {
  bucket = "my-bucket"
  tags   = local.common_tags
}
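Locals pair naturally with the built-in merge() function when one resource needs the shared tags plus its own. A sketch building on the locals block above (the bucket name is a placeholder):

```hcl
# merge() lets a resource extend the shared tag set without repeating it.
# Later arguments win on key collisions.
resource "aws_s3_bucket" "logs" {
  bucket = "my-log-bucket"
  tags   = merge(local.common_tags, { Name = "my-log-bucket" })
}
```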

Variables

Input variables are the parameters of your Terraform module. They are defined in variables.tf and supplied by the caller or by terraform.tfvars:

# variables.tf
variable "bucket_name" {
  description = "Globally unique S3 bucket name"
  type        = string
  # no default — caller must provide this
}

variable "aws_region" {
  description = "AWS region to deploy resources"
  type        = string
  default     = "us-east-1"
}

# terraform.tfvars  (never commit secrets here — use env vars or a secrets manager)
bucket_name = "alfonsomeraz-study-20240101"
aws_region  = "us-west-2"

Outputs

Outputs expose values after apply — useful for passing data between modules or displaying the URL of a newly created resource:

output "bucket_arn" {
  description = "The ARN of the created S3 bucket"
  value       = aws_s3_bucket.example.arn
}

4. Install & Setup

4.1 Install Terraform

Option A — tfenv (recommended for learners)

tfenv is a version manager for Terraform, similar to nvm for Node.js or pyenv for Python. It lets you switch Terraform versions per project.

# macOS (Homebrew)
brew install tfenv

# Install a specific Terraform version
tfenv install 1.9.0
tfenv use 1.9.0

# Verify
terraform version

Option B — Direct download

Go to https://developer.hashicorp.com/terraform/install, download the binary for your OS, and add it to your PATH.

Option C — Homebrew (macOS, no version management)

brew tap hashicorp/tap
brew install hashicorp/tap/terraform
terraform version

4.2 AWS CLI & Authentication

Install the AWS CLI:

# macOS
brew install awscli

# Verify
aws --version

Configure your credentials (interactive — stores them in ~/.aws/credentials):

aws configure
# AWS Access Key ID: AKIA...
# AWS Secret Access Key: ...
# Default region name: us-east-1
# Default output format: json

Environment variables (preferred for CI and local isolation)

Setting environment variables overrides whatever is in ~/.aws/credentials, which makes them ideal for scripting and CI pipelines:

export AWS_ACCESS_KEY_ID="AKIAIOSFODNN7EXAMPLE"
export AWS_SECRET_ACCESS_KEY="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
export AWS_DEFAULT_REGION="us-east-1"

Security note: Never hard-code credentials in your .tf files or commit them to git. The AWS provider automatically picks up the standard environment variables and ~/.aws/credentials — you never need to write a secret into HCL.


4.3 Azure CLI & Authentication

Install the Azure CLI:

# macOS
brew install azure-cli

# Verify
az --version

Authenticate:

az login
# A browser window opens — log in with your Azure account
# Your credentials are cached locally for Terraform to use

The AzureRM provider automatically picks up the credentials from az login. For CI/CD, use a service principal:

export ARM_CLIENT_ID="..."
export ARM_CLIENT_SECRET="..."
export ARM_SUBSCRIPTION_ID="..."
export ARM_TENANT_ID="..."

4.4 GCP & Authentication

Install the Google Cloud CLI:

# macOS
brew install --cask google-cloud-sdk

# Verify
gcloud --version

Authenticate for local development (Application Default Credentials):

gcloud auth application-default login
# A browser window opens — log in with your Google account

The Google provider automatically picks up Application Default Credentials. For CI/CD, use a service account key file:

export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account-key.json"

5. Learning Path

Work through these in order. Each module builds on the concepts introduced in the previous one. Every folder is a standalone Terraform configuration — cd into it and run init, plan, apply.

cd aws/01-basics
terraform init
terraform plan
terraform apply
# inspect what was created, then clean up:
terraform destroy

AWS Track

| Step | Folder | What you learn |
| --- | --- | --- |
| 1 | aws/01-basics | Provider block, required_providers, version pinning, creating an S3 bucket, tagging, the full init/plan/apply/destroy workflow |
| 2 | aws/02-variables | Input variables, variable types, defaults, terraform.tfvars, output values, locals |
| 3 | aws/03-modules | Calling a reusable module, passing inputs, consuming outputs, the Terraform Registry |
| 4 | aws/04-state | S3 remote backend, DynamoDB state locking, terraform state commands, state migration |
| 5 | aws/05-workspaces | terraform workspace commands, using ${terraform.workspace} in code, environment isolation patterns |
| 6 | aws/06-functions | Built-in functions (toset, merge, lookup, templatefile), count, for_each, dynamic blocks |
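As a preview of steps 5 and 6, terraform.workspace interpolates the active workspace name into resource arguments, so one configuration can serve several environments (a hedged sketch; the names and the instance-count rule are illustrative):

```hcl
# terraform.workspace is the name of the currently selected workspace.
resource "aws_s3_bucket" "assets" {
  bucket = "myapp-${terraform.workspace}-assets" # e.g. myapp-dev-assets
}

locals {
  # Scale differently per environment with a conditional expression.
  instance_count = terraform.workspace == "prod" ? 3 : 1
}
```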

Azure Track

| Step | Folder | What you learn |
| --- | --- | --- |
| 1 | azure/01-basics | AzureRM provider, resource groups, basic resource creation |
| 2 | azure/02-variables | Variables and outputs in an Azure context |
| 3 | azure/03-modules | Calling the reusable Azure storage account module |

GCP Track

| Step | Folder | What you learn |
| --- | --- | --- |
| 1 | gcp/01-basics | Google provider, GCS bucket, project configuration |
| 2 | gcp/02-variables | Variables and outputs in a GCP context |
| 3 | gcp/03-modules | Calling the reusable GCS bucket module |

Reusable Modules

| Folder | What it contains |
| --- | --- |
| modules/aws/s3-bucket | A parameterized, reusable S3 bucket module consumed by aws/03-modules |
| modules/azure/storage-account | A reusable Azure Storage Account module |
| modules/gcp/storage-bucket | A reusable GCS bucket module |

Do the AWS track first. It has the most depth (six modules vs three for Azure and GCP). The patterns you learn in AWS — variables, modules, state, workspaces, functions — transfer directly to Azure and GCP. The cloud-specific syntax changes, but the Terraform concepts are identical.


6. Repo Structure

example/
├── README.md                         # You are here
├── aws/
│   ├── 01-basics/                    # Provider block, S3 bucket, core workflow
│   │   ├── main.tf
│   │   └── variables.tf
│   ├── 02-variables/                 # Input vars, outputs, locals, type constraints
│   ├── 03-modules/                   # Calling the reusable S3 module
│   ├── 04-state/                     # S3 remote backend + DynamoDB locking
│   ├── 05-workspaces/                # terraform workspace usage
│   └── 06-functions/                 # for_each, count, dynamic blocks, templatefile
├── azure/
│   ├── 01-basics/                    # AzureRM provider block, resource group
│   ├── 02-variables/
│   └── 03-modules/
├── gcp/
│   ├── 01-basics/                    # Google provider block, GCS bucket
│   ├── 02-variables/
│   └── 03-modules/
└── modules/
    ├── aws/s3-bucket/                # Reusable S3 bucket module
    ├── azure/storage-account/        # Reusable Azure Storage Account module
    └── gcp/storage-bucket/           # Reusable GCS bucket module

Each numbered subfolder is a self-contained Terraform configuration. You can cd into any one of them independently and run the full Terraform workflow without touching any other folder.


7. How env0 Works with These Templates

env0 is a Terraform automation platform that adds a managed control plane on top of the standard Terraform workflow. Instead of running terraform apply from your laptop, env0 runs it for you in a consistent, governed environment.

What env0 provides

Variable injection

Each subfolder declares its input variables in variables.tf. env0 reads these files and surfaces them as a form in the UI. Operators fill in values (or reference stored credentials) without ever editing a .tf file or setting shell environment variables. Secrets like AWS_SECRET_ACCESS_KEY are stored encrypted in env0 and injected at runtime — they are never visible in the UI after being set.

Managed Terraform workflow

env0 runs terraform init, terraform plan, and terraform apply in ephemeral compute. The plan output is shown in the env0 UI for review before apply is triggered — enforcing the human-review step that is easy to skip when running locally.

Drift detection

env0 can periodically run terraform plan in the background and alert you if it detects a diff between the code and the real infrastructure. This catches situations where someone made a manual change in the console ("ClickOps") that diverged from the Terraform state.

Policy enforcement

env0 integrates with OPA (Open Policy Agent) to enforce rules before an apply is allowed. Examples: "no resource may be created without a ManagedBy tag," "S3 buckets must never have public access enabled," "deployments to production require two approvals."

RBAC and audit log

Every apply is associated with a user identity and logged. You can see who deployed what, when, and what the plan showed — providing the audit trail that manual console work cannot.

Pointing env0 at a template

  1. Add this repository as a template source in env0 (GitHub, GitLab, Bitbucket, or Azure DevOps are supported).
  2. When creating a new environment, select the subfolder (e.g., aws/01-basics) as the template root.
  3. env0 discovers variables.tf and presents the input variables in the UI.
  4. Set your cloud credentials once at the organization or project level — env0 injects them as environment variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_DEFAULT_REGION) into every run automatically.
  5. Click Deploy — env0 runs init → plan → (approval) → apply and shows you live logs.

Credential security: env0 never exposes stored credential values after they are saved. They are encrypted at rest and decrypted only inside the ephemeral runner that executes your Terraform. Your engineers never need to handle AWS keys directly.

Local vs env0 workflow comparison

| Step | Local | env0 |
| --- | --- | --- |
| Auth | aws configure or export env vars | Credentials stored in env0, injected at runtime |
| Init | terraform init | Automatic |
| Plan | terraform plan | Automatic, shown in UI for review |
| Apply | terraform apply (you type yes) | Button click after plan approval |
| State | Local file or self-managed S3 backend | env0 manages a remote backend per environment |
| Audit | None (who ran what?) | Full log tied to user identity |
| Drift detection | Manual (terraform plan when you remember) | Scheduled, automatic alerting |

Quick Reference

Core Terraform commands

terraform init          # Download providers, set up backend
terraform validate      # Check HCL syntax without connecting to any API
terraform fmt           # Auto-format your .tf files (run this before every commit)
terraform plan          # Show what will change
terraform apply         # Apply the changes (prompts for confirmation)
terraform destroy       # Destroy all managed resources
terraform output        # Print output values from the current state
terraform state list    # List all resources in state
terraform state show    # Show details of a specific resource in state
terraform import        # Import an existing resource into state
terraform workspace list    # List workspaces
terraform workspace new dev # Create and switch to a new workspace

File naming conventions

| File | Purpose |
| --- | --- |
| main.tf | Primary resource definitions and provider configuration |
| variables.tf | Input variable declarations |
| outputs.tf | Output value declarations |
| locals.tf | Local value computations |
| versions.tf | terraform {} block with required_version and required_providers |
| terraform.tfvars | Default variable values (do not commit secrets) |
| *.tfvars | Environment-specific variable files (dev.tfvars, prod.tfvars) |
| .terraform/ | Provider plugins downloaded by init — gitignore this |
| terraform.tfstate | Local state file — gitignore this for team projects |
| .terraform.lock.hcl | Provider version lock file — commit this to git |

Always add to .gitignore:

.terraform/
terraform.tfstate
terraform.tfstate.backup
*.tfvars        # if they contain secrets

Built for the HashiCorp Terraform Associate certification and hands-on cloud learning. Each module is a working example — read the code, run it, break it, fix it.
