Deploying a modern web application requires more than just writing code. For a robust, scalable, and maintainable system, the infrastructure that runs it is just as critical as the application logic itself. Django, with its “batteries-included” philosophy, is a powerhouse for building complex web apps. Amazon Web Services (AWS) provides an unparalleled suite of cloud services to host them. But how do you bridge the gap? How do you provision, manage, and scale this infrastructure reliably? The answer is Infrastructure as Code (IaC), and the leading tool for the job is Terraform.
This comprehensive guide will walk you through the end-to-end process to Deploy Django AWS Terraform, moving from a local development setup to a production-grade, scalable architecture. We won’t just scratch the surface; we’ll dive deep into creating a Virtual Private Cloud (VPC), provisioning a managed database with RDS, storing static files in S3, and running our containerized Django application on a serverless compute engine like AWS Fargate with ECS. By the end, you’ll have a repeatable, version-controlled, and automated framework for your Django deployments.
Table of Contents
- 1 Why Use Terraform for Your Django AWS Deployment?
- 2 Prerequisites for this Tutorial
- 3 Step 1: Planning Your Scalable AWS Architecture for Django
- 4 Step 2: Structuring Your Terraform Project
- 5 Step 3: Writing the Terraform Configuration
- 6 Step 4: Setting Up the Django Application for AWS
- 7 Step 5: Defining the Compute Layer – AWS ECS with Fargate
- 8 Step 6: The Deployment Workflow: How to Deploy Django AWS Terraform
- 9 Step 7: Automating with a CI/CD Pipeline (Conceptual Overview)
- 10 Frequently Asked Questions
- 11 Conclusion
Why Use Terraform for Your Django AWS Deployment?
Before we start writing `.tf` files, it’s crucial to understand why this approach is superior to manual configuration via the AWS console, often called “click-ops.”
Infrastructure as Code (IaC) Explained
Infrastructure as Code is the practice of managing and provisioning computing infrastructure (like networks, virtual machines, load balancers, and databases) through machine-readable definition files, rather than through physical hardware configuration or interactive configuration tools. Your entire AWS environment—from the smallest security group rule to the largest database cluster—is defined in code.
Terraform, by HashiCorp, is an open-source IaC tool that specializes in this. It uses a declarative configuration language called HCL (HashiCorp Configuration Language). You simply declare the desired state of your infrastructure, and Terraform figures out how to get there. It creates an execution plan, shows you what it will create, modify, or destroy, and then executes it upon your approval.
Benefits: Repeatability, Scalability, and Version Control
- Repeatability: Need to spin up a new staging environment that perfectly mirrors production? With a manual setup, this is a checklist-driven, error-prone nightmare. With Terraform, it’s as simple as running `terraform apply -var-file="staging.tfvars"`. You get an identical environment every single time.
- Version Control: Your infrastructure code lives in Git, just like your application code. You can review changes through pull requests, track a full history of who changed what and when, and easily roll back to a previous known-good state if a change causes problems.
- Scalability: A scalable Django architecture isn’t just about one server. It’s a complex system of load balancers, auto-scaling groups, and replicated database read-replicas. Defining this in code makes it trivial to adjust parameters (e.g., “scale from 2 to 10 web servers”) and apply the change consistently.
- Visibility: `terraform plan` provides a “dry run” that tells you exactly what changes will be made before you commit. This predictive power is invaluable for preventing costly mistakes in a live production environment.
Prerequisites for this Tutorial
This guide assumes you have a foundational understanding of Django, Docker, and basic AWS concepts. You will need the following tools installed and configured:
- Terraform: Download and install the Terraform CLI.
- AWS CLI: Install and configure the AWS CLI with credentials that have sufficient permissions (ideally, an IAM user with programmatic access).
- Docker: We will containerize our Django app. Install Docker Desktop.
- Python & Django: A working Django project. We’ll focus on the infrastructure, but we’ll cover the key `settings.py` modifications needed.
Step 1: Planning Your Scalable AWS Architecture for Django
A “scalable” architecture is one that can handle growth. This means decoupling our components. A monolithic “Django on a single EC2 instance” setup is simple, but it’s a single point of failure and a scaling bottleneck. Our target architecture will consist of several moving parts.
The Core Components:
- VPC (Virtual Private Cloud): Our own isolated network within AWS.
- Subnets: We’ll use public subnets for internet-facing resources (like our Load Balancer) and private subnets for our application and database, enhancing security.
- Application Load Balancer (ALB): Distributes incoming web traffic across our Django application instances.
- ECS (Elastic Container Service) with Fargate: This is our compute layer. Instead of managing EC2 virtual machines, we’ll use Fargate, a serverless compute engine for containers. We just provide a Docker image, and AWS handles running and scaling the containers.
- RDS (Relational Database Service): A managed PostgreSQL database. AWS handles patching, backups, and replication, allowing us to focus on our application.
- S3 (Simple Storage Service): Our Django app won’t serve static (CSS/JS) or media (user-uploaded) files itself. We’ll offload this to S3 for better performance and scalability.
- ECR (Elastic Container Registry): A private Docker registry where we’ll store our Django application’s Docker image.
Step 2: Structuring Your Terraform Project
Organization is key. A flat file of 1,000 lines is unmanageable. We’ll use a simple, scalable structure:
django-aws-terraform/
├── main.tf
├── variables.tf
├── outputs.tf
├── terraform.tfvars
└── .gitignore
- `main.tf`: The core file containing our resource definitions (VPC, RDS, ECS, etc.).
- `variables.tf`: Declares input variables like `aws_region`, `db_username`, or `instance_type`. This makes our configuration reusable.
- `outputs.tf`: Defines outputs from our infrastructure, like the database endpoint or the load balancer’s URL.
- `terraform.tfvars`: Where we assign *values* to our variables. This file should be added to `.gitignore`, as it will contain secrets like database passwords.
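A minimal `.gitignore` for this layout might look like the following (these are the usual Terraform entries; adjust to taste):

```gitignore
# Terraform state, caches, and crash logs — never commit these
.terraform/
*.tfstate
*.tfstate.backup
crash.log

# Variable files that contain secrets
terraform.tfvars
*.auto.tfvars
```

If you later switch to the S3 remote backend, the local state files disappear, but keeping these patterns costs nothing.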
Step 3: Writing the Terraform Configuration
Let’s start building our infrastructure. We’ll add these blocks to `main.tf` and `variables.tf`.
Provider and Backend Configuration
First, we tell Terraform we’re using the AWS provider and specify a version. We also configure a backend, which is where Terraform stores its “state file” (a JSON file that maps your config to real-world resources). Using an S3 backend is highly recommended for any team project, as it provides locking and shared state.
In `main.tf`:
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 5.0"
}
}
# Configuration for a remote S3 backend
# You must create this S3 bucket and DynamoDB table *before* running init
# For this tutorial, we will use the default local backend.
# backend "s3" {
# bucket = "my-terraform-state-bucket-unique-name"
# key = "django-aws/terraform.tfstate"
# region = "us-east-1"
# dynamodb_table = "terraform-lock-table"
# }
}
provider "aws" {
region = var.aws_region
}
In `variables.tf`:
variable "aws_region" {
description = "The AWS region to deploy infrastructure in."
type = string
default = "us-east-1"
}
variable "project_name" {
description = "A name for the project, used to tag resources."
type = string
default = "django-app"
}
variable "vpc_cidr" {
description = "The CIDR block for the VPC."
type = string
default = "10.0.0.0/16"
}
Networking: Defining the VPC
We’ll create a VPC with two public and two private subnets across two Availability Zones (AZs) for high availability.
In `main.tf`:
# Get list of Availability Zones
data "aws_availability_zones" "available" {
state = "available"
}
# --- VPC ---
resource "aws_vpc" "main" {
cidr_block = var.vpc_cidr
enable_dns_hostnames = true
enable_dns_support = true
tags = {
Name = "${var.project_name}-vpc"
}
}
# --- Subnets ---
resource "aws_subnet" "public" {
count = 2
vpc_id = aws_vpc.main.id
cidr_block = cidrsubnet(aws_vpc.main.cidr_block, 8, count.index)
availability_zone = data.aws_availability_zones.available.names[count.index]
map_public_ip_on_launch = true
tags = {
Name = "${var.project_name}-public-subnet-${count.index + 1}"
}
}
resource "aws_subnet" "private" {
count = 2
vpc_id = aws_vpc.main.id
cidr_block = cidrsubnet(aws_vpc.main.cidr_block, 8, count.index + 2) # Offset index
availability_zone = data.aws_availability_zones.available.names[count.index]
tags = {
Name = "${var.project_name}-private-subnet-${count.index + 1}"
}
}
# --- Internet Gateway for Public Subnets ---
resource "aws_internet_gateway" "gw" {
vpc_id = aws_vpc.main.id
tags = {
Name = "${var.project_name}-igw"
}
}
resource "aws_route_table" "public" {
vpc_id = aws_vpc.main.id
route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.gw.id
}
tags = {
Name = "${var.project_name}-public-rt"
}
}
resource "aws_route_table_association" "public" {
count = length(aws_subnet.public)
subnet_id = aws_subnet.public[count.index].id
route_table_id = aws_route_table.public.id
}
# --- NAT Gateway for Private Subnets (for outbound internet access) ---
resource "aws_eip" "nat" {
domain = "vpc"
}
resource "aws_nat_gateway" "nat" {
allocation_id = aws_eip.nat.id
subnet_id = aws_subnet.public[0].id # Place NAT in a public subnet
depends_on = [aws_internet_gateway.gw]
tags = {
Name = "${var.project_name}-nat-gw"
}
}
resource "aws_route_table" "private" {
vpc_id = aws_vpc.main.id
route {
cidr_block = "0.0.0.0/0"
nat_gateway_id = aws_nat_gateway.nat.id
}
tags = {
Name = "${var.project_name}-private-rt"
}
}
resource "aws_route_table_association" "private" {
count = length(aws_subnet.private)
subnet_id = aws_subnet.private[count.index].id
route_table_id = aws_route_table.private.id
}
This block sets up a secure, production-grade network. Public subnets can reach the internet directly. Private subnets can reach the internet (e.g., to pull dependencies) via the NAT Gateway, but the internet cannot initiate connections to them.
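If the `cidrsubnet()` calls above look opaque: `cidrsubnet("10.0.0.0/16", 8, n)` adds 8 bits to the prefix (/16 becomes /24) and takes the n-th such subnet. A quick sketch with Python’s standard library reproduces the same arithmetic:

```python
import ipaddress

def cidrsubnet(prefix: str, newbits: int, netnum: int) -> str:
    """Mimic Terraform's cidrsubnet() using the stdlib ipaddress module."""
    net = ipaddress.ip_network(prefix)
    # Adding `newbits` to the prefix length enumerates the child subnets in order.
    subnets = list(net.subnets(prefixlen_diff=newbits))
    return str(subnets[netnum])

# The four subnets our VPC config carves out of 10.0.0.0/16:
public = [cidrsubnet("10.0.0.0/16", 8, i) for i in range(2)]       # indexes 0, 1
private = [cidrsubnet("10.0.0.0/16", 8, i + 2) for i in range(2)]  # offset by 2

print(public)   # ['10.0.0.0/24', '10.0.1.0/24']
print(private)  # ['10.0.2.0/24', '10.0.3.0/24']
```

This is why the private subnets use `count.index + 2` — without the offset they would collide with the public subnets’ CIDR blocks.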
Security: Security Groups
Security Groups act as virtual firewalls. We need one for our load balancer (allowing web traffic) and one for our database (allowing traffic only from our app).
In `main.tf`:
# Security group for the Application Load Balancer
resource "aws_security_group" "lb_sg" {
name = "${var.project_name}-lb-sg"
description = "Allow HTTP/HTTPS traffic"
vpc_id = aws_vpc.main.id
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
# Security group for our Django application (ECS Tasks)
resource "aws_security_group" "app_sg" {
name = "${var.project_name}-app-sg"
description = "Allow traffic from LB and self"
vpc_id = aws_vpc.main.id
# Allow all outbound
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "${var.project_name}-app-sg"
}
}
# Security group for our RDS database
resource "aws_security_group" "db_sg" {
name = "${var.project_name}-db-sg"
description = "Allow PostgreSQL traffic from app"
vpc_id = aws_vpc.main.id
# Allow inbound PostgreSQL traffic from the app security group
ingress {
from_port = 5432
to_port = 5432
protocol = "tcp"
security_groups = [aws_security_group.app_sg.id] # IMPORTANT!
}
# Allow all outbound (for patches, etc.)
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "${var.project_name}-db-sg"
}
}
# --- Rule to allow LB to talk to App ---
# We add this rule *after* defining both SGs
resource "aws_security_group_rule" "lb_to_app" {
type = "ingress"
from_port = 8000 # Assuming Django runs on port 8000
to_port = 8000
protocol = "tcp"
security_group_id = aws_security_group.app_sg.id
source_security_group_id = aws_security_group.lb_sg.id
}
Database: Provisioning the RDS Instance
We’ll create a PostgreSQL instance. To do this securely, we first need an “RDS Subnet Group” to tell RDS which private subnets it can live in. We must also pass the username and password securely from variables.
In `variables.tf` (add these):
variable "db_name" {
description = "Name for the RDS database."
type = string
default = "djangodb"
}
variable "db_username" {
description = "Username for the RDS database."
type = string
sensitive = true # Hides value in logs
}
variable "db_password" {
description = "Password for the RDS database."
type = string
sensitive = true # Hides value in logs
}
In `terraform.tfvars` (DO NOT COMMIT THIS FILE):
aws_region = "us-east-1"
db_username = "django_admin"
db_password = "a-very-strong-and-secret-password"
Now, in `main.tf`:
# --- RDS Database ---
# Subnet group for RDS
resource "aws_db_subnet_group" "default" {
name = "${var.project_name}-db-subnet-group"
subnet_ids = [for subnet in aws_subnet.private : subnet.id]
tags = {
Name = "${var.project_name}-db-subnet-group"
}
}
# The RDS PostgreSQL Instance
resource "aws_db_instance" "default" {
identifier = "${var.project_name}-db"
engine = "postgres"
engine_version = "15.3"
instance_class = "db.t3.micro" # Good for dev/staging, use larger for prod
allocated_storage = 20
db_name = var.db_name
username = var.db_username
password = var.db_password
db_subnet_group_name = aws_db_subnet_group.default.name
vpc_security_group_ids = [aws_security_group.db_sg.id]
multi_az = false # Set to true for production HA
skip_final_snapshot = true # Set to false for production
publicly_accessible = false # IMPORTANT! Keep database private
}
Storage: Creating the S3 Bucket for Static Files
This S3 bucket will hold our Django `collectstatic` output and user-uploaded media files.
# --- S3 Bucket for Static and Media Files ---
resource "aws_s3_bucket" "static" {
# Bucket names must be globally unique
bucket = "${var.project_name}-static-media-${random_id.bucket_suffix.hex}"
tags = {
Name = "${var.project_name}-static-media-bucket"
}
}
# Need a random suffix to ensure bucket name is unique
resource "random_id" "bucket_suffix" {
byte_length = 8
}
# Block all public access by default
resource "aws_s3_bucket_public_access_block" "static" {
bucket = aws_s3_bucket.static.id
block_public_acls = true
block_public_policy = true
ignore_public_acls = true
restrict_public_buckets = true
}
# We will serve files via CloudFront (or signed URLs), not by making the bucket public.
# For simplicity in this guide, we'll configure Django to use IAM roles.
# A full production setup would add an aws_cloudfront_distribution.
Step 4: Setting Up the Django Application for AWS
Our infrastructure is useless without an application configured to use it.
Configuring settings.py for AWS
We need to install a few packages:
pip install django-storages boto3 psycopg2-binary gunicorn dj-database-url
Now, update your `settings.py` to read from environment variables (which Terraform will inject into our container) and configure S3.
# settings.py
import os
import dj_database_url
# ...
# SECURITY WARNING: keep the secret key in production secret!
SECRET_KEY = os.environ.get('DJANGO_SECRET_KEY', 'a-fallback-dev-key')
# DEBUG should be False in production
DEBUG = os.environ.get('DJANGO_DEBUG', 'False') == 'True'
ALLOWED_HOSTS = os.environ.get('DJANGO_ALLOWED_HOSTS', 'localhost,127.0.0.1').split(',')
# --- Database ---
# Use dj_database_url to parse the DATABASE_URL environment variable
DATABASES = {
'default': dj_database_url.config(conn_max_age=600, default='sqlite:///db.sqlite3')
}
# The DATABASE_URL will be set by Terraform like:
# postgres://django_admin:secret_password@my-db-endpoint.rds.amazonaws.com:5432/djangodb
# --- AWS S3 for Static and Media Files ---
# Only use S3 in production (when AWS_STORAGE_BUCKET_NAME is set)
if 'AWS_STORAGE_BUCKET_NAME' in os.environ:
AWS_STORAGE_BUCKET_NAME = os.environ.get('AWS_STORAGE_BUCKET_NAME')
AWS_S3_CUSTOM_DOMAIN = f'{AWS_STORAGE_BUCKET_NAME}.s3.amazonaws.com'
AWS_S3_OBJECT_PARAMETERS = {
'CacheControl': 'max-age=86400',
}
AWS_DEFAULT_ACL = None # Recommended for security
AWS_S3_FILE_OVERWRITE = False
# --- Static Files ---
STATIC_LOCATION = 'static'
STATIC_URL = f'https://{AWS_S3_CUSTOM_DOMAIN}/{STATIC_LOCATION}/'
STATICFILES_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
# --- Media Files ---
MEDIA_LOCATION = 'media'
MEDIA_URL = f'https://{AWS_S3_CUSTOM_DOMAIN}/{MEDIA_LOCATION}/'
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
else:
# --- Local settings ---
STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')
MEDIA_URL = '/media/'
MEDIA_ROOT = os.path.join(BASE_DIR, 'mediafiles')
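Under the hood, `dj_database_url` simply parses the `DATABASE_URL` string into Django’s `DATABASES` dict. To see roughly what it extracts, here is a stdlib-only sketch of the same parsing (illustrative only — use the real package in your project, since it also handles options, SSL modes, and other engines):

```python
from urllib.parse import urlparse

def parse_database_url(url: str) -> dict:
    """Illustrate roughly what dj_database_url.config() derives from DATABASE_URL."""
    parts = urlparse(url)
    return {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": parts.path.lstrip("/"),   # path component is the database name
        "USER": parts.username,
        "PASSWORD": parts.password,
        "HOST": parts.hostname,
        "PORT": parts.port or 5432,       # default PostgreSQL port
    }

cfg = parse_database_url(
    "postgres://django_admin:secret@my-db.rds.amazonaws.com:5432/djangodb"
)
print(cfg["NAME"], cfg["HOST"], cfg["PORT"])  # djangodb my-db.rds.amazonaws.com 5432
```

This is exactly the URL shape Terraform will assemble from the RDS endpoint in the task definition later.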
Dockerizing Your Django App
Create a `Dockerfile` in your Django project root:
# Use an official Python runtime as a parent image
FROM python:3.11-slim
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
# Set work directory
WORKDIR /app
# Install dependencies
COPY requirements.txt /app/
RUN pip install --no-cache-dir -r requirements.txt
# Copy project
COPY . /app/
# Run collectstatic (will use S3 if env vars are set)
# We will run this as a separate task, but this is one way
# RUN python manage.py collectstatic --no-input
# Expose port
EXPOSE 8000
# Run gunicorn
# We will override this command in the ECS Task Definition
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "your_project_name.wsgi:application"]
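The Dockerfile above expects a `requirements.txt` next to it. At minimum it needs the packages installed earlier; a sketch (the version pins are illustrative — pin to whatever you have actually tested):

```text
Django>=4.2,<5.0
gunicorn>=21.0
psycopg2-binary>=2.9
django-storages>=1.14
boto3>=1.28
dj-database-url>=2.0
```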
Step 5: Defining the Compute Layer – AWS ECS with Fargate
This is the most complex part, where we tie everything together.
Creating the ECR Repository
In `main.tf`:
# --- ECR (Elastic Container Registry) ---
resource "aws_ecr_repository" "app" {
name = "${var.project_name}-app-repo"
image_tag_mutability = "MUTABLE"
image_scanning_configuration {
scan_on_push = true
}
}
Defining the ECS Cluster
An ECS Cluster is just a logical grouping of services and tasks.
# --- ECS (Elastic Container Service) ---
resource "aws_ecs_cluster" "main" {
name = "${var.project_name}-cluster"
tags = {
Name = "${var.project_name}-cluster"
}
}
Setting up the Application Load Balancer (ALB)
The ALB will receive public traffic on port 80/443 and forward it to our Django app on port 8000.
# --- Application Load Balancer (ALB) ---
resource "aws_lb" "main" {
name = "${var.project_name}-lb"
internal = false
load_balancer_type = "application"
security_groups = [aws_security_group.lb_sg.id]
subnets = [for subnet in aws_subnet.public : subnet.id]
enable_deletion_protection = false
}
# Target Group: where the LB sends traffic
resource "aws_lb_target_group" "app" {
name = "${var.project_name}-tg"
port = 8000 # Port our Django container listens on
protocol = "HTTP"
vpc_id = aws_vpc.main.id
target_type = "ip" # Required for Fargate
health_check {
path = "/health/" # Add a health-check endpoint to your Django app
protocol = "HTTP"
matcher = "200"
interval = 30
timeout = 5
healthy_threshold = 2
unhealthy_threshold = 2
}
}
# Listener: Listen on port 80 (HTTP)
resource "aws_lb_listener" "http" {
load_balancer_arn = aws_lb.main.arn
port = "80"
protocol = "HTTP"
default_action {
type = "forward"
target_group_arn = aws_lb_target_group.app.arn
}
# For production, you would add a listener on port 443 (HTTPS)
# using an aws_acm_certificate
}
Creating the ECS Task Definition and Service
A **Task Definition** is the blueprint for our application container. An **ECS Service** is responsible for running and maintaining a specified number of instances (Tasks) of that blueprint.
This is where we’ll inject our environment variables. WARNING: Never hardcode secrets. We’ll use AWS Secrets Manager (or Parameter Store) for this.
First, let’s create the secrets (you can also do this in Terraform, but for setup, the console or CLI is fine):
- Go to AWS Secrets Manager.
- Create a new secret (select “Other type of secret”).
- Create key/value pairs for `DJANGO_SECRET_KEY`, `DB_USERNAME`, and `DB_PASSWORD`.
- Name the secret (e.g., `django/app/secrets`).
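If you prefer the CLI to the console, the same secret can be created in one command (the secret name and values below are examples — substitute your own):

```shell
aws secretsmanager create-secret \
  --name django/app/secrets \
  --secret-string '{"DJANGO_SECRET_KEY":"change-me","DB_USERNAME":"django_admin","DB_PASSWORD":"change-me-too"}'
```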
Now, in `main.tf`:
# --- IAM Roles ---
# Role for the ECS Task to run
resource "aws_iam_role" "ecs_task_execution_role" {
name = "${var.project_name}_ecs_task_execution_role"
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = "sts:AssumeRole"
Effect = "Allow"
Principal = {
Service = "ecs-tasks.amazonaws.com"
}
}
]
})
}
# Attach the managed policy for ECS task execution (pulling images, sending logs)
resource "aws_iam_role_policy_attachment" "ecs_task_execution_policy" {
role = aws_iam_role.ecs_task_execution_role.name
policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
}
# Role for the Task *itself* (what your Django app can do)
resource "aws_iam_role" "ecs_task_role" {
name = "${var.project_name}_ecs_task_role"
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = "sts:AssumeRole"
Effect = "Allow"
Principal = {
Service = "ecs-tasks.amazonaws.com"
}
}
]
})
}
# Policy to allow Django app to access S3 bucket
resource "aws_iam_policy" "s3_access_policy" {
name = "${var.project_name}_s3_access_policy"
description = "Allows ECS tasks to read/write to the S3 bucket"
policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = [
"s3:GetObject",
"s3:PutObject",
"s3:DeleteObject",
"s3:ListBucket"
]
Effect = "Allow"
Resource = [
aws_s3_bucket.static.arn,
"${aws_s3_bucket.static.arn}/*"
]
}
]
})
}
resource "aws_iam_role_policy_attachment" "task_s3_policy" {
role = aws_iam_role.ecs_task_role.name
policy_arn = aws_iam_policy.s3_access_policy.arn
}
# Policy to allow task to fetch secrets from Secrets Manager
resource "aws_iam_policy" "secrets_manager_access_policy" {
name = "${var.project_name}_secrets_manager_access_policy"
description = "Allows ECS tasks to read from Secrets Manager"
policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = [
"secretsmanager:GetSecretValue"
]
Effect = "Allow"
# Be specific with your secret ARN!
Resource = [aws_secretsmanager_secret.app_secrets.arn]
}
]
})
}
resource "aws_iam_role_policy_attachment" "task_secrets_policy" {
role = aws_iam_role.ecs_task_role.name
policy_arn = aws_iam_policy.secrets_manager_access_policy.arn
}
# --- Create the Secrets Manager Secret ---
resource "aws_secretsmanager_secret" "app_secrets" {
name = "${var.project_name}/app/secrets"
}
resource "aws_secretsmanager_secret_version" "app_secrets_version" {
secret_id = aws_secretsmanager_secret.app_secrets.id
secret_string = jsonencode({
DJANGO_SECRET_KEY = "generate-a-strong-random-key-here"
DB_USERNAME = var.db_username
DB_PASSWORD = var.db_password
})
# This makes it easier to update the password via Terraform
# by only changing the terraform.tfvars file
}
# --- CloudWatch Log Group ---
resource "aws_cloudwatch_log_group" "app_logs" {
name = "/ecs/${var.project_name}"
retention_in_days = 7
}
# --- ECS Task Definition ---
resource "aws_ecs_task_definition" "app" {
family = "${var.project_name}-task"
network_mode = "awsvpc" # Required for Fargate
requires_compatibilities = ["FARGATE"]
cpu = "256" # 0.25 vCPU
memory = "512" # 0.5 GB
execution_role_arn = aws_iam_role.ecs_task_execution_role.arn
task_role_arn = aws_iam_role.ecs_task_role.arn
# This is the "blueprint" for our container
container_definitions = jsonencode([
{
name = "${var.project_name}-container"
image = "${aws_ecr_repository.app.repository_url}:latest" # We'll push to this tag
essential = true
portMappings = [
{
containerPort = 8000
hostPort = 8000
}
]
# --- Environment Variables ---
environment = [
{
name = "DJANGO_DEBUG"
value = "False"
},
{
name = "DJANGO_ALLOWED_HOSTS"
value = aws_lb.main.dns_name # Allow traffic from the LB
},
{
name = "AWS_STORAGE_BUCKET_NAME"
value = aws_s3_bucket.static.id
},
{
name = "DATABASE_URL"
value = "postgres://${var.db_username}:${var.db_password}@${aws_db_instance.default.endpoint}/${var.db_name}"
}
]
# --- SECRETS (Better way for DATABASE_URL parts and SECRET_KEY) ---
# This is more secure than the DATABASE_URL above
# "secrets": [
# {
# "name": "DJANGO_SECRET_KEY",
# "valueFrom": "${aws_secretsmanager_secret.app_secrets.arn}:DJANGO_SECRET_KEY::"
# },
# {
# "name": "DB_PASSWORD",
# "valueFrom": "${aws_secretsmanager_secret.app_secrets.arn}:DB_PASSWORD::"
# }
# ],
# --- Logging ---
logConfiguration = {
logDriver = "awslogs"
options = {
"awslogs-group" = aws_cloudwatch_log_group.app_logs.name
"awslogs-region" = var.aws_region
"awslogs-stream-prefix" = "ecs"
}
}
}
])
}
# --- ECS Service ---
resource "aws_ecs_service" "app" {
name = "${var.project_name}-service"
cluster = aws_ecs_cluster.main.id
task_definition = aws_ecs_task_definition.app.arn
desired_count = 2 # Run 2 copies of our app for HA
launch_type = "FARGATE"
network_configuration {
subnets = [for subnet in aws_subnet.private : subnet.id] # Run tasks in private subnets
security_groups = [aws_security_group.app_sg.id]
assign_public_ip = false
}
load_balancer {
target_group_arn = aws_lb_target_group.app.arn
container_name = "${var.project_name}-container"
container_port = 8000
}
# Ensure the service depends on the LB listener
depends_on = [aws_lb_listener.http]
}
Finally, let’s output the URL of our load balancer.
In `outputs.tf`:
output "app_url" {
description = "The HTTP URL of the application load balancer."
value = "http://${aws_lb.main.dns_name}"
}
output "ecr_repository_url" {
description = "The URL of the ECR repository to push images to."
value = aws_ecr_repository.app.repository_url
}
Step 6: The Deployment Workflow: How to Deploy Django AWS Terraform
Now that our code is written, here is the full workflow to Deploy Django AWS Terraform.
Step 6.1: Initializing and Planning
From your terminal in the project’s root directory, run:
# Initializes Terraform, downloads the AWS provider
terraform init
# Creates the execution plan. Review this output carefully!
terraform plan -out=tfplan
Terraform will show you a long list of all the AWS resources it’s about to create.
Step 6.2: Applying the Infrastructure
If the plan looks good, apply it:
# Applies the saved plan (a saved plan file needs no interactive approval)
terraform apply "tfplan"
This will take several minutes. AWS needs time to provision the VPC, NAT Gateway, and especially the RDS instance. Once it’s done, it will print your outputs, including the `ecr_repository_url` and `app_url`.
Step 6.3: Building and Pushing the Docker Image
Now that our infrastructure exists, we need to push our application code to it.
# 1. Get the ECR URL from Terraform output
REPO_URL=$(terraform output -raw ecr_repository_url)
# 2. Log in to AWS ECR (replace ${VAR_AWS_REGION} with your region, e.g. us-east-1)
aws ecr get-login-password --region ${VAR_AWS_REGION} | docker login --username AWS --password-stdin $REPO_URL
# 3. Build your Docker image (from your Django project root)
docker build -t $REPO_URL:latest .
# 4. Push the image to ECR
docker push $REPO_URL:latest
Step 6.4: Running Database Migrations and Collectstatic
Our app containers will start, but the database is empty. We need to run migrations. You can do this using an ECS “Run Task”. This is a one-off task.
You can create a separate “task definition” in Terraform for migrations, or run it manually from the AWS console:
- Go to your ECS Cluster -> Task Definitions -> Select your app task.
- Click “Actions” -> “Run Task”.
- Select “FARGATE”, your cluster, and your private subnets and app security group.
- Expand “Container Overrides”, select your container.
- In the “Command Override” box, enter: `python,manage.py,migrate`
- Click “Run Task”.
Repeat this process with the command `python,manage.py,collectstatic,--no-input` to populate your S3 bucket.
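The console steps above can also be scripted. A sketch with the AWS CLI (the cluster, task, container names follow this guide’s defaults; the subnet and security-group IDs are placeholders you would read from `terraform output` or the console):

```shell
aws ecs run-task \
  --cluster django-app-cluster \
  --launch-type FARGATE \
  --task-definition django-app-task \
  --network-configuration 'awsvpcConfiguration={subnets=[subnet-xxxx],securityGroups=[sg-xxxx],assignPublicIp=DISABLED}' \
  --overrides '{"containerOverrides":[{"name":"django-app-container","command":["python","manage.py","migrate"]}]}'
```

This is the same mechanism a CI/CD pipeline would use to run migrations after each deploy.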
Step 6.5: Forcing a New Deployment
The ECS service is now running, but its tasks were launched before you pushed an image — so they may have failed to start, or be running a stale “latest” tag. To force the service to pull the new image, run:
# This tells the service to redeploy, which will pull the "latest" image again
aws ecs update-service --cluster ${VAR_PROJECT_NAME}-cluster \
--service ${VAR_PROJECT_NAME}-service \
--force-new-deployment \
--region ${VAR_AWS_REGION}
After a few minutes, your new containers will be running. You can now visit the `app_url` from your Terraform output and see your live Django application!
Step 7: Automating with a CI/CD Pipeline (Conceptual Overview)
The real power of this setup comes from automation. The manual steps above are great for the first deployment, but tedious for daily updates. A CI/CD pipeline (using GitHub Actions, GitLab CI, or AWS CodePipeline) automates this.
A typical pipeline would look like this:
On every push to the `main` branch:
- Lint & Test: Run `flake8` and `python manage.py test`.
- Build & Push Docker Image: Build the image, tag it with the Git SHA (e.g., `:a1b2c3d`) instead of `:latest`. Push to ECR.
- Run Terraform: Run `terraform apply`. This is safe because Terraform is declarative; it will only apply changes if your `.tf` files have changed.
- Run Migrations: Use the AWS CLI to run a one-off task for migrations.
- Update ECS Service: This is the key. Instead of just “forcing” a new deployment, you would update the Task Definition to use the new specific image tag (e.g., `:a1b2c3d`) and then update the service to use that new task definition. This provides a true, versioned, roll-back-able deployment.
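As a concrete sketch, a GitHub Actions workflow implementing this pipeline could look like the following (the secret names, region, and job layout are our assumptions, not a canonical template):

```yaml
name: deploy
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Lint and test
        run: |
          pip install -r requirements.txt flake8
          flake8 .
          python manage.py test

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1

      - name: Build and push image tagged with the Git SHA
        env:
          ECR_URL: ${{ secrets.ECR_REPOSITORY_URL }}
        run: |
          aws ecr get-login-password | docker login --username AWS --password-stdin "$ECR_URL"
          docker build -t "$ECR_URL:${GITHUB_SHA::7}" .
          docker push "$ECR_URL:${GITHUB_SHA::7}"

      - name: Apply infrastructure
        run: |
          terraform init
          terraform apply -auto-approve
```

A production pipeline would also pass the new image tag into Terraform (e.g., via a `-var`) so the task definition is updated to the SHA-tagged image rather than `:latest`.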
Frequently Asked Questions
How do I handle Django database migrations with Terraform?
Terraform is for provisioning infrastructure, not for running application-level commands. The best practice is to run migrations as a one-off task *after* `terraform apply` is complete. Use ECS Run Task, as described in Step 6.4. Some people build this into a CI/CD pipeline, or even use an “init container” that runs migrations before the main app container starts (though this can be complex with multiple app instances starting at once).
Is Elastic Beanstalk a better option than ECS/Terraform?
Elastic Beanstalk (EB) is a Platform-as-a-Service (PaaS). It’s faster to get started because it provisions all the resources (EC2, ELB, RDS) for you with a simple `eb deploy`. However, you lose granular control. Our custom Terraform setup is far more flexible, secure (e.g., Fargate in private subnets), and scalable. EB is great for simple projects or prototypes. For a complex, production-grade application, the custom Terraform/ECS approach is generally preferred by DevOps professionals.
How can I manage secrets like my database password?
Do not hardcode them in `main.tf` or commit them to Git. The best practice is to use AWS Secrets Manager or AWS Systems Manager (SSM) Parameter Store.
1. Store the secret value (the password) in Secrets Manager.
2. Give your ECS Task Role (`ecs_task_role`) IAM permission to read that specific secret.
3. In your ECS Task Definition, use the `"secrets"` key (as shown in the commented-out example) to inject the secret into the container as an environment variable. Your Django app reads it from the environment, never knowing the value until runtime.
What’s the best way to run `collectstatic`?
Similar to migrations, this is an application-level command.
1. In CI/CD: The best place is your CI/CD pipeline. After building the Docker image but before pushing it, you can run the `collectstatic` command *locally* (or in the CI runner) with the correct AWS credentials and environment variables set. It will collect files and upload them directly to S3.
2. One-off Task: Run it as an ECS “Run Task”, just like migrations.
3. In the Dockerfile: You *can* run it in the `Dockerfile`, but this is often discouraged because it bloats the image and requires build-time AWS credentials, which can be a security risk.
Conclusion
You have successfully journeyed from an empty AWS account to a fully scalable, secure, and production-ready home for your Django application. This is no small feat. By defining your entire infrastructure in code, you’ve unlocked a new level of professionalism and reliability in your deployment process.
We’ve provisioned a custom VPC, secured our app and database in private subnets, offloaded state to RDS and S3, and created a scalable, serverless compute layer with ECS Fargate. The true power of the Deploy Django AWS Terraform workflow is its repeatability and manageability. You can now tear down this entire stack with `terraform destroy` and bring it back up in minutes. You can create a new staging environment with a single command. Your infrastructure is no longer a fragile, manually-configured black box; it’s a version-controlled, auditable, and automated part of your application’s codebase. Thank you for reading the DevopsRoles page!