Infrastructure as Code with Terraform

Infrastructure as Code with Terraform has transformed how DevOps teams provision and manage cloud resources. Instead of clicking through cloud consoles or running ad-hoc scripts, Terraform lets you describe your entire infrastructure in human-readable HCL files that can be reviewed, versioned, and reused across every environment — from local dev to multi-region production.
Terraform is a declarative tool: you describe the desired end-state of your infrastructure and Terraform figures out how to get there. The workflow is always the same — write configuration in HCL, run terraform plan to preview changes, and terraform apply to execute them. The real power lies in how Terraform tracks everything it manages through a state file.
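In day-to-day use that loop boils down to a handful of CLI commands (a minimal sketch, run from the directory containing your .tf files):

# The core Terraform workflow
terraform init     # download providers and configure the state backend
terraform plan     # preview the changes Terraform would make
terraform apply    # execute those changes and update the state file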
Every Terraform configuration is built from resource blocks that map directly to cloud API objects — an aws_instance, a google_storage_bucket, an azurerm_virtual_network. Variables, outputs, and data sources complete the language. The backend block tells Terraform where to store its state file — always use a remote backend (S3, GCS, Terraform Cloud) so state is shared across your team.
# main.tf — Provision an AWS EC2 instance
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }

  backend "s3" {
    bucket = "my-terraform-state"
    key    = "prod/ec2/terraform.tfstate"
    region = "ap-southeast-1"
  }
}

provider "aws" {
  region = var.aws_region
}

# Most recent Ubuntu 22.04 LTS AMI published by Canonical
data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"]

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"]
  }
}

resource "aws_instance" "web" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = var.instance_type

  tags = {
    Name        = "web-server"
    Environment = var.environment
    ManagedBy   = "Terraform"
  }
}

variable "aws_region" { default = "ap-southeast-1" }
variable "instance_type" { default = "t3.micro" }
variable "environment" { default = "production" }

Modules are self-contained, reusable packages of Terraform resources. They accept input variables and expose outputs, letting you encapsulate complex infrastructure like a VPC with public and private subnets behind a clean interface. Well-designed modules make it trivial to replicate environments — spin up a staging environment that mirrors production with a single module call.
A good VPC module accepts cidr_block, environment name, and a list of availability zones. It creates the VPC, public and private subnets, route tables, and an internet gateway — outputting the VPC ID and subnet IDs for other modules to consume. The count meta-argument makes subnets dynamic: one subnet per AZ without duplicate code.
# modules/vpc/main.tf — Reusable VPC module
variable "cidr_block" { type = string }
variable "environment" { type = string }
variable "azs" { type = list(string) }

resource "aws_vpc" "main" {
  cidr_block           = var.cidr_block
  enable_dns_hostnames = true

  tags = { Name = "${var.environment}-vpc" }
}

resource "aws_subnet" "public" {
  count = length(var.azs)

  vpc_id            = aws_vpc.main.id
  cidr_block        = cidrsubnet(var.cidr_block, 8, count.index)
  availability_zone = var.azs[count.index]

  tags = { Name = "${var.environment}-public-${count.index}" }
}

output "vpc_id" { value = aws_vpc.main.id }
output "subnet_ids" { value = aws_subnet.public[*].id }

# Usage in root module
module "vpc" {
  source      = "./modules/vpc"
  cidr_block  = "10.0.0.0/16"
  environment = "prod"
  azs         = ["ap-southeast-1a", "ap-southeast-1b"]
}

Always run terraform plan before apply in CI/CD and review the diff. Add -detailed-exitcode to distinguish "no changes" (exit 0) from "changes pending" (exit 2) — this lets you skip unnecessary apply runs and speeds up your pipeline.
The state file is how Terraform knows what it manages and what has changed since the last apply. Remote state in an S3 bucket with DynamoDB locking prevents concurrent writes from corrupting that state. Workspaces are an ergonomic way to maintain separate state files for staging and production while sharing the same configuration codebase.
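A minimal sketch of such a backend, assuming the bucket and a DynamoDB table named terraform-locks already exist:

# backend.tf: S3 remote state with DynamoDB locking
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"
    key            = "prod/network/terraform.tfstate"
    region         = "ap-southeast-1"
    dynamodb_table = "terraform-locks"   # table with a "LockID" partition key
    encrypt        = true                # encrypt the state object at rest
  }
}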
Rather than duplicating configuration for each environment, use .tfvars files combined with workspaces. A single terraform apply -var-file=environments/production.tfvars selects the right values. The terraform.workspace interpolation lets you embed the environment name in resource names and tags automatically.
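For example, terraform.workspace can flow straight into resource names and tags (a sketch; the bucket name is made up):

# Name and tag resources after the current workspace
resource "aws_s3_bucket" "logs" {
  bucket = "acme-logs-${terraform.workspace}"   # e.g. acme-logs-staging, acme-logs-production

  tags = {
    Environment = terraform.workspace
    ManagedBy   = "Terraform"
  }
}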
# terraform.tfvars — Environment-specific values
environment   = "production"
aws_region    = "ap-southeast-1"
instance_type = "t3.medium"

# Workspace-based state isolation
# terraform workspace new staging
# terraform workspace select production
# terraform apply -var-file="environments/$(terraform workspace show).tfvars"

Terraform state files contain plaintext resource attributes, including passwords and private keys. Enable state encryption (Terraform Cloud encrypts by default; S3 backends can use SSE-KMS). Never commit .tfvars files containing secrets — use environment variables or a secrets manager like HashiCorp Vault or AWS Secrets Manager instead.
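Two common patterns for keeping secrets out of .tfvars files, sketched below; the variable and secret names are hypothetical:

# Option 1: mark the variable sensitive and supply it via the
# TF_VAR_db_password environment variable or your CI's secret store
variable "db_password" {
  type      = string
  sensitive = true
}

# Option 2: read the value from AWS Secrets Manager at plan/apply time
data "aws_secretsmanager_secret_version" "db_password" {
  secret_id = "prod/db/password"   # hypothetical secret name
}

Either way the value still ends up in the state file, which is one more reason to keep remote state encrypted.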
Running Terraform in CI/CD ensures that every change goes through plan → review → apply with an audit trail. GitHub Actions, GitLab CI, and Atlantis are all popular choices. The key pattern is: validate and plan on every pull request (post the plan as a PR comment), and apply automatically only when merging to the main branch.
A standard Terraform GitHub Actions workflow checks out the code, installs Terraform, then runs terraform init → validate → plan on every pull request, and adds terraform apply on pushes to main. Use OIDC-based authentication to AWS or GCP to avoid storing long-lived credentials as GitHub Secrets (a sketch of this follows the workflow below).
# .github/workflows/terraform.yml — CI/CD for Terraform
name: Terraform CI/CD

on:
  push:
    branches: [main]
  pull_request:

jobs:
  terraform:
    runs-on: ubuntu-latest
    env:
      AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    steps:
      - uses: actions/checkout@v4

      - uses: hashicorp/setup-terraform@v3
        with:
          terraform_version: "1.7.0"

      - name: Terraform Init
        run: terraform init

      - name: Terraform Validate
        run: terraform validate

      - name: Terraform Plan
        run: terraform plan -out=tfplan

      - name: Terraform Apply
        if: github.ref == 'refs/heads/main'
        run: terraform apply -auto-approve tfplan

By now you have met the core Terraform vocabulary you'll encounter constantly: HCL, the state file, providers, modules, the plan, and workspaces.
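The workflow above uses static access keys for brevity. A sketch of the OIDC alternative, assuming an IAM role that already trusts GitHub's OIDC provider (the role ARN is a placeholder):

# Replace the static-key env block with short-lived OIDC credentials
permissions:
  id-token: write   # allow the job to request an OIDC token
  contents: read

steps:
  - uses: actions/checkout@v4
  - uses: aws-actions/configure-aws-credentials@v4
    with:
      role-to-assume: arn:aws:iam::123456789012:role/github-actions-terraform   # placeholder
      aws-region: ap-southeast-1
  # ...terraform init / validate / plan / apply steps as before...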
Follow these practices for production-grade Terraform: pin provider versions with ~> to avoid breaking changes, use remote state with locking, structure code into modules from day one, tag every resource with environment and ManagedBy=Terraform, and run terraform fmt and tflint in CI to keep code consistent. Combine Terraform with Ansible for configuration management — Terraform provisions the infrastructure, Ansible configures the software on top.
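The formatting and lint checks mentioned above typically run as a fast CI job (standard flags shown):

terraform fmt -check -recursive   # fail if any file is not canonically formatted
terraform validate                # catch syntax and type errors early
tflint --recursive                # lint every module in the repository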