That's exactly what modules fix. Today we finally stop copy-pasting and start building something reusable.
Why This Matters in the Industry
In production environments, Terraform codebases can get large fast. Dozens of services, multiple environments, different teams. Without modules, each team ends up writing their own version of the same infrastructure patterns — VPCs, EC2 clusters, RDS instances — all slightly different, all maintained separately.
Modules are how mature teams enforce consistency. Instead of every team writing its own load balancer configuration, there's one approved web-service module that everyone calls. Security best practices, tagging standards, and architecture decisions live in the module. Consuming teams just pass in the variables for their desired state, nothing else.
It's also a safety layer. If a security group rule needs to change across all services, you update the module once and everything that uses it picks up the change on the next apply. Without modules, that's a search-and-update across every config in the repo — and something always gets missed.
What a Module Actually Is
A module is just a folder with .tf files. That's it.
Every Terraform config I've written so far has technically been a module — the "root module." What makes something a reusable module is that it's designed to be called from another config, not run directly.
Three files is all you need:
modules/
└── web-app/
    ├── main.tf       # the actual resources
    ├── variables.tf  # inputs — what the caller passes in
    └── outputs.tf    # outputs — what the module exposes back
The caller (root module) calls it like this:
Calling a module from the root config:
module "web_app_dev" {
  source = "./modules/web-app"

  environment   = "dev"
  instance_type = "t2.micro"
  min_size      = 1
  max_size      = 2
}
That's the whole interface. The caller doesn't need to know what's inside — just what inputs to provide.
Building the Module
I took the web app config from the past few days and refactored it into a module. Here's what each file does:
variables.tf — The inputs
These are the knobs the caller can turn. Anything that varies between environments becomes a variable:
modules/web-app/variables.tf
variable "environment" {
  description = "Deployment environment (dev, staging, prod)"
  type        = string
}

variable "instance_type" {
  description = "EC2 instance type for the web servers"
  type        = string
  default     = "t2.micro"
}

variable "min_size" {
  description = "Minimum number of instances in the ASG"
  type        = number
  default     = 1
}

variable "max_size" {
  description = "Maximum number of instances in the ASG"
  type        = number
  default     = 2
}

variable "server_port" {
  description = "Port the web server listens on"
  type        = number
  default     = 8080
}
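One addition worth considering here, beyond what the original files contain: Terraform supports `validation` blocks on variables, which reject bad inputs at plan time instead of letting them surface as a cryptic AWS error later. A sketch for the `environment` variable:

```hcl
variable "environment" {
  description = "Deployment environment (dev, staging, prod)"
  type        = string

  validation {
    # Fail fast if the caller passes anything outside the allowed set
    condition     = contains(["dev", "staging", "prod"], var.environment)
    error_message = "environment must be one of: dev, staging, prod."
  }
}
```

With this in place, `terraform plan` errors immediately on a typo like `environment = "prd"`.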
main.tf — The resources
Same resources as before, but now they reference var.* instead of hardcoded values. The module doesn't know which environment it's deploying to — it just uses whatever the caller passed in:
modules/web-app/main.tf
locals {
  name_prefix = "web-app-${var.environment}"

  common_tags = {
    Environment = var.environment
    ManagedBy   = "Terraform"
  }
}

data "aws_ami" "amazon_linux" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}

data "aws_vpc" "default" {
  default = true
}

data "aws_subnets" "default" {
  filter {
    name   = "vpc-id"
    values = [data.aws_vpc.default.id]
  }
}

resource "aws_security_group" "instance" {
  name = "${local.name_prefix}-instance-sg"

  ingress {
    from_port   = var.server_port
    to_port     = var.server_port
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = local.common_tags
}

resource "aws_launch_template" "web" {
  image_id               = data.aws_ami.amazon_linux.id
  instance_type          = var.instance_type
  vpc_security_group_ids = [aws_security_group.instance.id]

  user_data = base64encode(<<-EOF
    #!/bin/bash
    mkdir -p /var/www/html
    echo "Hello from ${var.environment}" > /var/www/html/index.html
    cd /var/www/html && nohup python3 -m http.server ${var.server_port} &
  EOF
  )

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_security_group" "alb" {
  name = "${local.name_prefix}-alb-sg"

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = local.common_tags
}

resource "aws_lb" "web" {
  name               = "${local.name_prefix}-alb"
  load_balancer_type = "application"
  subnets            = data.aws_subnets.default.ids
  security_groups    = [aws_security_group.alb.id]
  tags               = local.common_tags
}

resource "aws_lb_target_group" "web" {
  name     = "${local.name_prefix}-tg"
  port     = var.server_port
  protocol = "HTTP"
  vpc_id   = data.aws_vpc.default.id

  health_check {
    path                = "/"
    protocol            = "HTTP"
    matcher             = "200"
    interval            = 15
    timeout             = 3
    healthy_threshold   = 2
    unhealthy_threshold = 2
  }
}

resource "aws_lb_listener" "http" {
  load_balancer_arn = aws_lb.web.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.web.arn
  }
}

resource "aws_autoscaling_group" "web" {
  min_size         = var.min_size
  max_size         = var.max_size
  desired_capacity = var.min_size

  launch_template {
    id      = aws_launch_template.web.id
    version = "$Latest"
  }

  vpc_zone_identifier = data.aws_subnets.default.ids
  target_group_arns   = [aws_lb_target_group.web.arn]
  health_check_type   = "ELB"

  tag {
    key                 = "Name"
    value               = "${local.name_prefix}-web"
    propagate_at_launch = true
  }
}
outputs.tf — What the module exposes
After the module runs, the caller might need values from it — like the ALB DNS name to configure DNS or print for testing. Outputs make those values available:
modules/web-app/outputs.tf
output "alb_dns_name" {
  value       = aws_lb.web.dns_name
  description = "DNS name of the Application Load Balancer"
}

output "asg_name" {
  value       = aws_autoscaling_group.web.name
  description = "Name of the Auto Scaling Group"
}

output "alb_security_group_id" {
  value       = aws_security_group.alb.id
  description = "ID of the ALB security group"
}
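Module outputs aren't just for printing, either — they're how modules get wired together. A hypothetical sketch (not part of this module's code, and it assumes an `aws_route53_zone.main` resource exists elsewhere in the root config): the caller could feed `alb_dns_name` straight into a DNS record.

```hcl
# Root config: point a friendly hostname at the dev environment's ALB.
# Assumes a hosted zone resource named aws_route53_zone.main is defined elsewhere.
resource "aws_route53_record" "dev" {
  zone_id = aws_route53_zone.main.zone_id
  name    = "dev.example.com"
  type    = "CNAME"
  ttl     = 300
  records = [module.web_app_dev.alb_dns_name]
}
```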
Using the Module
With the module written, deploying dev and prod becomes clean and explicit:
Calling the same module twice with different inputs — dev and prod from one definition:
# root main.tf
module "web_app_dev" {
source = "./modules/web-app"
environment = "dev"
instance_type = "t2.micro"
min_size = 1
max_size = 2
}
module "web_app_prod" {
source = "./modules/web-app"
environment = "prod"
instance_type = "t3.small"
min_size = 2
max_size = 6
}
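As a side note, since Terraform 0.13 a `module` block also accepts `for_each`, so the two calls could be collapsed into one. A sketch, assuming the per-environment settings live in a local map:

```hcl
locals {
  environments = {
    dev  = { instance_type = "t2.micro", min_size = 1, max_size = 2 }
    prod = { instance_type = "t3.small", min_size = 2, max_size = 6 }
  }
}

module "web_app" {
  for_each = local.environments

  source        = "./modules/web-app"
  environment   = each.key
  instance_type = each.value.instance_type
  min_size      = each.value.min_size
  max_size      = each.value.max_size
}

# Outputs are then indexed by key, e.g. module.web_app["dev"].alb_dns_name
```

Adding an environment becomes one new entry in the map. The trade-off is that both environments now live in one state file, which is why the two-calls (or two-folders) approach is often kept anyway.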
To access an output from a module, reference it as module.&lt;name&gt;.&lt;output&gt;:
Reading module outputs from the root config:
output "dev_url" {
  value = module.web_app_dev.alb_dns_name
}

output "prod_url" {
  value = module.web_app_prod.alb_dns_name
}
Run terraform apply and both environments deploy from the same module definition. If I need to change something — the health check interval, the tagging strategy, the AMI filter — I change it once in the module and it applies everywhere.
Module Sources
The source argument isn't limited to local paths. Terraform supports several source types:
Common formats for the source argument:
# Local path
source = "./modules/web-app"

# Terraform Registry (public modules)
source  = "hashicorp/consul/aws"
version = "0.1.0"

# GitHub
source = "github.com/myorg/terraform-aws-web-app"

# Specific Git ref
source = "git::https://github.com/myorg/terraform-modules.git//web-app?ref=v1.2.0"
The Terraform Registry has thousands of community-maintained modules for common patterns — VPCs, EKS clusters, RDS instances. Worth checking before building from scratch.
When using remote sources, always pin a version:
Always pin the version when using a registry or remote module:
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.1.2" # pinned — upgrades are explicit
}
Unpinned modules can break silently when the upstream author makes a change. Pinning means upgrades are a conscious decision, not a surprise.
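Pinning doesn't have to mean an exact release, either. Terraform's version constraint syntax supports ranges and the "pessimistic" operator, which allows compatible upgrades while blocking breaking ones:

```hcl
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.1" # any 5.x release from 5.1.0 up, but never 6.0
}
```

`~> 5.1` means ">= 5.1.0 and < 6.0.0", while `~> 5.1.0` would restrict to patch releases of 5.1 only. Exact pins are the most predictable; the pessimistic operator is a reasonable middle ground for modules that follow semantic versioning.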
Project Structure After Refactoring
infrastructure/
├── modules/
│   └── web-app/
│       ├── main.tf
│       ├── variables.tf
│       └── outputs.tf
├── dev/
│   ├── main.tf       # calls the web-app module with dev vars
│   └── backend.tf
└── prod/
    ├── main.tf       # calls the web-app module with prod vars
    └── backend.tf
The actual infrastructure logic lives once, in modules/web-app/. The environment folders just call it with different inputs. Adding a new environment is now a new folder with a module call — not a copy of 150 lines of Terraform.
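Concretely, a new staging environment would be one small file. A sketch (backend config omitted; note the source path is relative to the environment folder, so it climbs up one level):

```hcl
# staging/main.tf
module "web_app" {
  source = "../modules/web-app"

  environment   = "staging"
  instance_type = "t2.micro"
  min_size      = 1
  max_size      = 3
}

output "staging_url" {
  value = module.web_app.alb_dns_name
}
```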
Where I'm At
Modules are a core Terraform concept, and one worth dwelling on. They shift the work from writing infrastructure to building infrastructure tools — reusable pieces that can be assembled rather than rewritten.
The pattern of variables.tf → main.tf → outputs.tf is simple but it unlocks a lot. Once a module is tested and working, you can hand it to anyone and they can use it without needing to understand the internals. That's a meaningful shift.
Next up: more advanced module patterns and the Terraform Registry.
This post is part of a 30-day Terraform learning journey.