In the world of modern infrastructure, managing multiple environments is a fundamental challenge. Every development lifecycle needs at least a development (dev) environment for building, a staging (or QA) environment for testing, and a production (prod) environment for serving users. Managing these environments manually is a recipe for configuration drift, errors, and significant downtime. This is where Infrastructure as Code (IaC), and specifically HashiCorp’s Terraform, becomes indispensable. But even with Terraform, how do you manage the state of three distinct, long-running environments without duplicating your entire codebase? The answer is built directly into the tool: Terraform Workspaces.
This comprehensive guide will explore exactly what Terraform Workspaces are, why they are a powerful solution for environment management, and how to implement them in a practical, real-world scenario to handle your dev, staging, and prod deployments from a single, unified codebase.
Table of Contents
- 1 What Are Terraform Workspaces?
- 2 Why Use Terraform Workspaces for Environment Management?
- 3 Practical Guide: Setting Up Dev, Staging & Prod Environments
- 3.1 Step 1: Initializing Your Project and Backend
- 3.2 Step 2: Creating Your Workspaces
- 3.3 Step 3: Structuring Your Configuration with Variables
- 3.4 Step 4: Using Environment-Specific .tfvars Files (Recommended)
- 3.5 Step 5: Using the terraform.workspace Variable (The “Map” Method)
- 3.6 Step 6: Deploying to a Specific Environment
- 4 Terraform Workspaces: Best Practices and Common Pitfalls
- 5 Alternatives to Terraform Workspaces
- 6 Frequently Asked Questions
- 7 Conclusion
What Are Terraform Workspaces?
At their core, Terraform Workspaces are named instances of a single Terraform configuration. Each workspace maintains its own separate state file. This allows you to use the exact same set of .tf configuration files to manage multiple, distinct sets of infrastructure resources.
When you run terraform apply, Terraform only considers the resources defined in the state file for the *currently selected* workspace. This isolation is the key feature. If you’re in the dev workspace, you can create, modify, or destroy resources without affecting any of the resources managed by the prod workspace, even though both are defined by the same main.tf file.
Workspaces vs. Git Branches: A Common Misconception
A critical distinction to make early on is the difference between Terraform Workspaces and Git branches. They solve two completely different problems.
- Git Branches are for managing changes to your code. You use a branch (e.g., feature-x) to develop a new part of your infrastructure. You test it, and once it’s approved, you merge it into your main branch.
- Terraform Workspaces are for managing deployments of your code. You use your main branch (which contains your stable, approved code) and deploy it to your dev workspace. Once validated, you deploy the *exact same commit* to your staging workspace, and finally to your prod workspace.
Do not use Git branches to manage environments (e.g., a dev branch, a prod branch). This leads to configuration drift, nightmarish merges, and violates the core IaC principle of having a single source of truth for your infrastructure’s definition.
How Workspaces Manage State
When you initialize a Terraform configuration that uses a local backend (the default), Terraform creates a terraform.tfstate file. As soon as you create a new workspace, Terraform creates a new directory called terraform.tfstate.d. Inside this directory, it will create a separate state file for each workspace you have.
For example, if you have dev, staging, and prod workspaces, your local directory might look like this:
.
├── main.tf
├── variables.tf
├── terraform.tfstate.d/
│   ├── dev/
│   │   └── terraform.tfstate
│   ├── staging/
│   │   └── terraform.tfstate
│   └── prod/
│       └── terraform.tfstate
└── .terraform/
    ...
This is why switching workspaces is so effective. Running terraform workspace select prod simply tells Terraform to use the prod/terraform.tfstate file for all subsequent plan and apply operations. When using a remote backend like AWS S3 (which is a strong best practice), this behavior is mirrored. Terraform will store the state files in a path that includes the workspace name, ensuring complete isolation.
Why Use Terraform Workspaces for Environment Management?
Using Terraform Workspaces offers several significant advantages for managing your infrastructure lifecycle, especially when compared to the alternatives like copying your entire project for each environment.
- State Isolation: This is the primary benefit. A catastrophic error in your dev environment (like running terraform destroy by accident) will have zero impact on your prod environment, as they have entirely separate state files.
- Code Reusability (DRY Principle): You maintain one set of .tf files. You don’t repeat yourself. If you need to add a new monitoring rule or a security group, you add it once to your configuration, and then roll it out to each environment by selecting its workspace and applying the change.
- Simplified Configuration: Workspaces allow you to parameterize your environments. Your prod environment might need a larger t3.large EC2 instance, while your dev environment only needs a t3.micro. Workspaces provide clean mechanisms to inject these different variable values into the same configuration.
- Clean CI/CD Integration: In an automation pipeline, it’s trivial to select the correct workspace based on the Git branch or a pipeline trigger. A deployment to the main branch might trigger a terraform workspace select prod and apply, while a merge to develop triggers a terraform workspace select dev. See the sketch below.
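As a rough sketch of what that wiring can look like in a generic shell-based pipeline (the $BRANCH variable and the branch names are assumptions; substitute whatever your CI system actually exposes):
#!/usr/bin/env bash
set -euo pipefail
# Map the Git branch (assumed to be exposed by the CI system as $BRANCH) to a workspace.
case "$BRANCH" in
  main)    WORKSPACE="prod" ;;
  release) WORKSPACE="staging" ;;
  *)       WORKSPACE="dev" ;;
esac
# Select the workspace, creating it on first use.
terraform workspace select "$WORKSPACE" || terraform workspace new "$WORKSPACE"
# Plan and apply with the matching variable file (see Step 4 below).
terraform plan -var-file="${WORKSPACE}.tfvars" -out=tfplan
terraform apply tfplan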
Practical Guide: Setting Up Dev, Staging & Prod Environments
Let’s walk through a practical example. We’ll define a simple AWS EC2 instance and see how to deploy different variations of it to dev, staging, and prod.
Step 1: Initializing Your Project and Backend
First, create a main.tf file. It’s a critical best practice to use a remote backend from the very beginning. This ensures your state is stored securely, durably, and can be accessed by your team and CI/CD pipelines. We’ll use AWS S3.
# main.tf
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 5.0"
}
}
# Best Practice: Use a remote backend
backend "s3" {
bucket = "my-terraform-state-bucket-unique"
key = "global/ec2/terraform.tfstate"
region = "us-east-1"
dynamodb_table = "terraform-lock-table"
encrypt = true
}
}
provider "aws" {
region = "us-east-1"
}
# We will define variables later
variable "instance_type" {
description = "The EC2 instance type."
type = string
}
variable "ami_id" {
description = "The AMI to use for the instance."
type = string
}
variable "tags" {
description = "A map of tags to apply to the resources."
type = map(string)
default = {}
}
resource "aws_instance" "web_server" {
ami = var.ami_id
instance_type = var.instance_type
tags = merge(
{
"Name" = "web-server-${terraform.workspace}"
"Environment" = terraform.workspace
},
var.tags
)
}
Notice the use of terraform.workspace in the tags. This is a built-in variable that always contains the name of the currently selected workspace. It’s incredibly useful for naming and tagging resources to identify them easily.
Run terraform init to initialize the backend.
Step 2: Creating Your Workspaces
By default, you start in a workspace named default. Let’s create our three target environments.
# Create the new workspaces
$ terraform workspace new dev
Created and switched to workspace "dev"
$ terraform workspace new staging
Created and switched to workspace "staging"
$ terraform workspace new prod
Created and switched to workspace "prod"
# Let's list them to check
$ terraform workspace list
default
dev
staging
* prod
(The * indicates the currently selected workspace)
# Switch back to dev for our first deployment
$ terraform workspace select dev
Switched to workspace "dev"
Now, if you check your S3 bucket, you’ll see that Terraform has automatically created paths for your new workspaces under the key you defined. This is how it isolates the state files.
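With the S3 backend, the states for non-default workspaces are stored under a workspace prefix (env: by default, configurable with workspace_key_prefix), so with the key from Step 1 the objects in the bucket look roughly like this:
global/ec2/terraform.tfstate               # default workspace
env:/dev/global/ec2/terraform.tfstate      # dev workspace
env:/staging/global/ec2/terraform.tfstate  # staging workspace
env:/prod/global/ec2/terraform.tfstate     # prod workspace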
Step 3: Structuring Your Configuration with Variables
Our environments are not identical. Production needs a robust AMI and a larger instance, while dev can use a basic, cheap one. How do we supply different variables?
There are two primary methods: .tfvars files (recommended for clarity) and locals maps (good for simpler configs).
Step 4: Using Environment-Specific .tfvars Files (Recommended)
This is the cleanest and most scalable approach. We create a separate variable file for each environment.
Create dev.tfvars:
# dev.tfvars
instance_type = "t3.micro"
ami_id = "ami-0c55b159cbfafe1f0" # Amazon Linux 2 (free tier eligible)
tags = {
"CostCenter" = "development"
}
Create staging.tfvars:
# staging.tfvars
instance_type = "t3.small"
ami_id = "ami-0c55b159cbfafe1f0" # Amazon Linux 2
tags = {
"CostCenter" = "qa"
}
Create prod.tfvars:
# prod.tfvars
instance_type = "t3.large"
ami_id = "ami-0a8b421e306b0cfa4" # A custom, hardened production AMI
tags = {
"CostCenter" = "production-web"
}
Now, your deployment workflow in Step 6 will use these files explicitly.
Step 5: Using the terraform.workspace Variable (The “Map” Method)
An alternative method is to define all environment configurations inside your .tf files using a locals block and the terraform.workspace variable as a map key. This keeps the configuration self-contained but can become unwieldy for many variables.
You would create a locals.tf file:
# locals.tf
locals {
# A map of environment-specific configurations
env_config = {
dev = {
instance_type = "t3.micro"
ami_id = "ami-0c55b159cbfafe1f0"
}
staging = {
instance_type = "t3.small"
ami_id = "ami-0c55b159cbfafe1f0"
}
prod = {
instance_type = "t3.large"
ami_id = "ami-0a8b421e306b0cfa4"
}
# Failsafe for 'default' or other workspaces
default = {
instance_type = "t3.nano"
ami_id = "ami-0c55b159cbfafe1f0"
}
}
# Dynamically select the config based on the current workspace
# Use lookup() with a default value to prevent errors
current_config = lookup(
local.env_config,
terraform.workspace,
local.env_config.default
)
}
Then, you would modify your main.tf to use these locals instead of var:
# main.tf (Modified for 'locals' method)
resource "aws_instance" "web_server" {
# Use the looked-up local values
ami = local.current_config.ami_id
instance_type = local.current_config.instance_type
tags = {
"Name" = "web-server-${terraform.workspace}"
"Environment" = terraform.workspace
}
}
While this works, we will proceed with the .tfvars method (Step 4) as it’s generally considered a cleaner pattern for complex projects.
Step 6: Deploying to a Specific Environment
Now, let’s tie it all together using the .tfvars method. The workflow is simple: Select Workspace, then Plan/Apply with its .tfvars file.
Deploying to Dev:
# 1. Make sure you are in the 'dev' workspace
$ terraform workspace select dev
Switched to workspace "dev"
# 2. Plan the deployment, specifying the 'dev' variables
$ terraform plan -var-file="dev.tfvars"
...
Plan: 1 to add, 0 to change, 0 to destroy.
+ resource "aws_instance" "web_server" {
+ ami = "ami-0c55b159cbfafe1f0"
+ instance_type = "t3.micro"
+ tags = {
+ "CostCenter" = "development"
+ "Environment" = "dev"
+ "Name" = "web-server-dev"
}
...
}
# 3. Apply the plan
$ terraform apply -var-file="dev.tfvars" -auto-approve
You now have a t3.micro server running for your dev environment. Its state is tracked in the dev state file.
Deploying to Prod:
Now, let’s deploy production. Note that we don’t change any code. We just change our workspace and our variable file.
# 1. Select the 'prod' workspace
$ terraform workspace select prod
Switched to workspace "prod"
# 2. Plan the deployment, specifying the 'prod' variables
$ terraform plan -var-file="prod.tfvars"
...
Plan: 1 to add, 0 to change, 0 to destroy.
+ resource "aws_instance" "web_server" {
+ ami = "ami-0a8b421e306b0cfa4"
+ instance_type = "t3.large"
+ tags = {
+ "CostCenter" = "production-web"
+ "Environment" = "prod"
+ "Name" = "web-server-prod"
}
...
}
# 3. Apply the plan
$ terraform apply -var-file="prod.tfvars" -auto-approve
You now have a completely separate t3.large server for production, with its state tracked in the prod state file. Destroying the dev instance will have no effect on this new server.
Terraform Workspaces: Best Practices and Common Pitfalls
While powerful, Terraform Workspaces can be misused. Here are some best practices and common pitfalls to avoid.
Best Practice: Use a Remote Backend
This was mentioned in the tutorial but cannot be overstated. Using the local backend (the default) with workspaces is only suitable for solo development. For any team, you must use a remote backend like AWS S3, Azure Blob Storage, or Terraform Cloud. This provides state locking so two people don’t run apply at the same time (for S3, via the DynamoDB lock table shown in Step 1), security, and a single source of truth for your state.
Best Practice: Use .tfvars Files for Clarity
As demonstrated, using dev.tfvars, prod.tfvars, etc., is a very clear and explicit way to manage environment variables. It separates the “what” (the main.tf) from the “how” (the environment-specific values). In a CI/CD pipeline, you can easily pass the correct file: terraform apply -var-file="$WORKSPACE_NAME.tfvars".
Pitfall: Avoid Using Workspaces for Different *Projects*
A workspace is not a new project. It’s a new *instance* of the *same* project. If your “prod” environment needs a database, a cache, and a web server, your “dev” environment should probably have them too (even if they are smaller). If you find yourself writing a lot of logic like count = terraform.workspace == "prod" ? 1 : 0 to *conditionally create resources* only in certain environments, you may have a problem. This indicates your environments have different “shapes.” In this case, you might be better served by:
- Using separate Terraform configurations (projects) entirely.
- Using feature flags in your .tfvars files (e.g., create_database = true), as sketched below.
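A minimal sketch of that feature-flag pattern (the variable name and the aws_db_instance arguments are illustrative, not part of the original configuration):
# variables.tf (sketch)
variable "create_database" {
  description = "Whether this environment should get its own database."
  type        = bool
  default     = false
}
# main.tf (sketch): the flag drives resource creation, not the workspace name.
resource "aws_db_instance" "main" {
  count = var.create_database ? 1 : 0

  engine                      = "postgres"
  instance_class              = "db.t3.micro"
  allocated_storage           = 20
  db_name                     = "app"
  username                    = "app"
  manage_master_user_password = true # let RDS and Secrets Manager handle the password
  skip_final_snapshot         = true
}
Each environment then declares its own shape explicitly: prod.tfvars can set create_database = true, while dev.tfvars leaves it false.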
Pitfall: The default Workspace Trap
Everyone starts in the default workspace. It’s a good idea to avoid using it for any real environment: its name is ambiguous, and Terraform will not let you delete it. Some teams treat it as a “scratch” or “admin” workspace, but the cleaner approach is to create your named environments (dev, staging, prod) immediately and never apply anything in default at all.
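If you want to enforce that, one option (a minimal sketch, assuming Terraform 1.4 or newer for the built-in terraform_data resource and lifecycle preconditions) is a guard that aborts the plan whenever an unexpected workspace such as default is selected:
# guard.tf (sketch): terraform_data is a built-in no-op resource; the
# precondition fails the plan when the selected workspace is not one of
# the named environments.
resource "terraform_data" "workspace_guard" {
  lifecycle {
    precondition {
      condition     = contains(["dev", "staging", "prod"], terraform.workspace)
      error_message = "Select the dev, staging, or prod workspace; default is not used for real environments."
    }
  }
}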
Alternatives to Terraform Workspaces
Terraform Workspaces are a “built-in” solution, but not the only one. The main alternative is a directory-based structure, often orchestrated with a tool like Terragrunt.
1. Directory-Based Structure (Terragrunt)
This is a very popular and robust pattern. Instead of using workspaces, you create a directory for each environment. Each directory has its own terraform.tfvars file and often a small main.tf that calls a shared module.
infrastructure/
├── modules/
│   └── web_server/
│       ├── main.tf
│       └── variables.tf
└── envs/
    ├── dev/
    │   ├── terraform.tfvars
    │   └── main.tf (calls ../../modules/web_server)
    ├── staging/
    │   ├── terraform.tfvars
    │   └── main.tf (calls ../../modules/web_server)
    └── prod/
        ├── terraform.tfvars
        └── main.tf (calls ../../modules/web_server)
In this pattern, each environment is its own distinct Terraform project (with its own state file, managed by its own backend configuration). Terragrunt is a thin wrapper that excels at managing this structure, letting you define backend and variable configurations in a DRY way.
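If you adopt Terragrunt, each environment directory typically holds a terragrunt.hcl instead of its own main.tf. A rough sketch for the dev environment, assuming a root terragrunt.hcl that defines the shared backend configuration, might look like this:
# envs/dev/terragrunt.hcl (rough sketch)
include {
  # Inherit shared settings (e.g., the remote state backend) from a root terragrunt.hcl.
  path = find_in_parent_folders()
}
terraform {
  # Point at the shared module instead of duplicating resource definitions per environment.
  source = "../../modules/web_server"
}
inputs = {
  instance_type = "t3.micro"
  ami_id        = "ami-0c55b159cbfafe1f0"
}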
When to Choose Workspaces: Workspaces are fantastic for small-to-medium projects where all environments have an identical “shape” (i.e., they deploy the same set of resources, just with different variables).
When to Choose Terragrunt/Directories: This pattern is often preferred for large, complex organizations where environments may have significant differences, or where you want to break up your infrastructure into many small, independently-managed state files.
Frequently Asked Questions
What is the difference between Terraform Workspaces and modules?
They are completely different concepts.
- Modules are for creating reusable code. You write a module once (e.g., a module to create a secure S3 bucket) and then “call” that module many times, even within the same configuration.
- Workspaces are for managing separate state files for different deployments of the same configuration.
You will almost always use modules *within* a configuration that is also managed by workspaces.
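For example, a configuration managed by workspaces might call a (hypothetical) local web_server module like this; the selected workspace only determines which state file the resulting resources are recorded in:
# main.tf (sketch): the module is the reusable code, while per-environment
# values still come from the selected workspace and its .tfvars file.
module "web_server" {
  source        = "./modules/web_server" # hypothetical local module path
  instance_type = var.instance_type
  ami_id        = var.ami_id
  environment   = terraform.workspace
}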
How do I delete a Terraform Workspace?
You can delete a workspace with terraform workspace delete <name>. However, Terraform will not let you delete a workspace that still has resources managed by it. You must run terraform destroy in that workspace first. You also cannot delete the default workspace.
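A minimal sketch of that teardown flow for the dev workspace (note that you also cannot delete the workspace you currently have selected, so switch away first):
$ terraform workspace select dev
$ terraform destroy -var-file="dev.tfvars"
$ terraform workspace select default
$ terraform workspace delete dev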
Are Terraform Workspaces secure for production?
Yes, absolutely. The security of your environments is not determined by the workspace feature itself, but by your operational practices. Security is achieved by:
- Using a remote backend with encryption and strict access policies (e.g., S3 Bucket Policies and IAM).
- Using state locking (e.g., DynamoDB).
- Managing sensitive variables (like database passwords) using a tool like HashiCorp Vault or your CI/CD system’s secret manager, not by committing them in .tfvars files.
- Using separate cloud accounts or projects (e.g., different AWS accounts for dev and prod) and separate provider credentials for each workspace, which can be passed in during the apply step.
Can I use Terraform Workspaces with Terraform Cloud?
Yes. In fact, Terraform Cloud is built entirely around the concept of workspaces. In Terraform Cloud, a “workspace” is even more powerful: it’s a dedicated environment that holds your state file, your variables (including sensitive ones), your run history, and your access controls. This is the natural evolution of the open-source workspace concept.

Conclusion
Terraform Workspaces are a powerful, built-in feature that directly addresses the common challenge of managing dev, staging, and production environments. By providing clean state file isolation, they allow you to maintain a single, DRY (Don’t Repeat Yourself) codebase for your infrastructure while safely managing multiple, independent deployments. When combined with a remote backend and a clear variable strategy (like .tfvars files), Terraform Workspaces provide a scalable and professional workflow for any DevOps team looking to master their Infrastructure as Code lifecycle. Thank you for reading the DevopsRoles page!
