Category Archives: Terraform

Learn Terraform with DevOpsRoles.com. Access detailed guides and tutorials to master infrastructure as code and automate your DevOps workflows using Terraform.

Terraform Testing: 7 Essential Automation Strategies for DevOps

Terraform Testing has moved from a “nice-to-have” luxury to an absolute survival requirement for modern DevOps engineers.

I’ve seen infrastructure deployments melt down because of a single misplaced variable.

It isn’t pretty. In fact, it’s usually a 3 AM nightmare that costs thousands in downtime.

We need to stop treating Infrastructure as Code (IaC) differently than application code.

If you aren’t testing, you aren’t truly automating.

So, how do we move from manual “plan and pray” to a robust, automated pipeline?

Why Terraform Testing is Your Only Safety Net

The “move fast and break things” mantra works for apps, but it’s lethal for infrastructure.

One bad Terraform apply can delete a production database or open your S3 buckets to the world.

I remember a project three years ago where a junior dev accidentally wiped a VPC peering connection.

The fallout was immediate. Total network isolation for our microservices.

We realized then that manual code reviews aren’t enough to catch logical errors in HCL.

We needed a tiered approach to Terraform Testing that mirrors the classic software testing pyramid.

The Hierarchy of Infrastructure Validation

  • Static Analysis: Checking for syntax and security smells without executing code.
  • Unit Testing: Testing individual modules in isolation.
  • Integration Testing: Ensuring different modules play nice together.
  • End-to-End (E2E) Testing: Deploying real resources and verifying their state.

For more details on the initial setup, check the official Terraform documentation.

Mastering Static Analysis and Linting

The first step in Terraform Testing is the easiest and most cost-effective.

Tools like `tflint` and `terraform validate` should be your first line of defense.

They catch the “dumb” mistakes before they ever reach your cloud provider.

I personally never commit a line of code without running a linter.

It’s a simple habit that saves hours of debugging later.

You can also use Checkov or Terrascan for security-focused static analysis.

These tools look for “insecure defaults” like unencrypted disks or public SSH access.


# Basic Terraform validation
terraform init
terraform validate

# Running TFLint to catch provider-specific issues
tflint --init
tflint
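Both tools are configurable. As one illustration, a minimal `.tflint.hcl` might enable the AWS ruleset; this is a sketch, and the plugin version below is an assumption — check the tflint-ruleset-aws releases for a current value:

```hcl
# .tflint.hcl -- sketch of a tflint configuration (version number is an assumption)
plugin "aws" {
  enabled = true
  source  = "github.com/terraform-linters/tflint-ruleset-aws"
  version = "0.30.0"
}

# A built-in rule that flags variables and locals you declared but never use
rule "terraform_unused_declarations" {
  enabled = true
}
```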

The Power of Unit Testing in Terraform

How do you know your module actually does what it claims?

Unit testing focuses on the logic of your HCL code.

Since Terraform 1.6, we have a native testing framework that is a total game-changer.

Before this, we had to rely heavily on Go-based tools like Terratest.

Now, you can write Terraform Testing files directly in HCL.

It feels natural. It feels integrated.

Here is how a basic test file looks in the new native framework:


# main.tftest.hcl
variables {
  instance_type = "t3.micro"
}

run "verify_instance_type" {
  command = plan

  assert {
    condition     = aws_instance.web.instance_type == "t3.micro"
    error_message = "The instance type must be t3.micro for cost savings."
  }
}

This approach allows you to assert values in your plan without spending a dime on cloud resources.

Does it get better than that?

Actually, it does, once we start creating real resources.

Moving to End-to-End Terraform Testing

Static analysis and plans are great, but they don’t catch everything.

Sometimes, the cloud provider rejects your request even if the HCL is valid.

Maybe there’s a quota limit you didn’t know about.

This is where E2E Terraform Testing comes into play.

In this phase, we actually `apply` the code to a sandbox environment.

We verify that the resource exists and functions as expected.

Then, we `destroy` it to keep costs low.

It sounds expensive, but it’s cheaper than a production outage.

I usually recommend running these on a schedule or on specific release branches.
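The native framework can drive this apply-verify-destroy loop as well: setting `command = apply` in a run block provisions real resources, and `terraform test` destroys everything it created when the run finishes. A sketch — the `instance_id` output name is an assumption about the module under test:

```hcl
# e2e.tftest.hcl -- sketch; assumes the module exposes an "instance_id" output
run "e2e_create_and_verify" {
  command = apply

  assert {
    condition     = length(output.instance_id) > 0
    error_message = "The instance was not created successfully."
  }
}
```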


Implementing Terratest for Complex Scenarios

While the native framework is great, complex scenarios still require Terratest.

Terratest is a Go library that gives you ultimate flexibility.

You can make HTTP requests to your new load balancer to check the response.

You can SSH into an instance and run a command.

It’s the “Gold Standard” for advanced Terraform Testing.


package test

import (
    "fmt"
    "testing"
    "time"

    http_helper "github.com/gruntwork-io/terratest/modules/http-helper"
    "github.com/gruntwork-io/terratest/modules/terraform"
)

func TestTerraformWebserverExample(t *testing.T) {
    opts := &terraform.Options{
        TerraformDir: "../examples/webserver",
    }

    // Clean up at the end of the test
    defer terraform.Destroy(t, opts)

    // Deploy the infra
    terraform.InitAndApply(t, opts)

    // Get the output
    publicIp := terraform.Output(t, opts, "public_ip")

    // Verify it works
    url := fmt.Sprintf("http://%s:8080", publicIp)
    http_helper.HttpGetWithRetry(t, url, nil, 200, "Hello, World!", 30, 5*time.Second)
}

Is Go harder to learn than HCL? Yes.

Is it worth it for enterprise-grade infrastructure? Absolutely.

Integration with CI/CD Pipelines

Manual testing is better than no testing, but automated Terraform Testing is the goal.

Your CI/CD pipeline should be the gatekeeper.

No code should ever merge to `main` without passing the linting and unit test suite.

I like to use GitHub Actions or GitLab CI for this.

They provide clean environments to run your tests from scratch every time.

This ensures your infrastructure is reproducible.

If it works in the CI, it will work in production.

Well, 99.9% of the time, anyway.
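As a sketch, a GitHub Actions merge gate for the lint-and-unit-test suite might look like the following; the action versions and the single root working directory are assumptions:

```yaml
# .github/workflows/terraform-ci.yml -- sketch of a PR merge gate
name: terraform-ci
on:
  pull_request:
    branches: [main]

jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - name: Lint and validate
        run: |
          terraform init -backend=false
          terraform fmt -check
          terraform validate
      - name: Unit tests
        run: terraform test
```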

Best Practices for Automated Pipelines

  1. Keep your test environments isolated using separate AWS accounts or Azure subscriptions.
  2. Use “Ephemeral” environments that are destroyed immediately after tests finish.
  3. Parallelize your tests to keep the developer feedback loop short.
  4. Store your state files securely in a remote backend like S3 with locking.

The Human Element of Infrastructure Code

We often forget that Terraform Testing is also about team confidence.

When a team knows their changes are being validated, they move faster.

Fear is the biggest bottleneck in DevOps.

Testing removes that fear.

It allows for experimentation without catastrophic consequences.

I’ve seen teams double their deployment frequency just by adding basic automated checks.

FAQ: Common Questions About Terraform Testing

  • How long should my tests take? Aim for unit tests under 2 minutes and E2E under 15.
  • Is Terratest better than the native ‘terraform test’? For simple checks, use native. For complex logic, use Terratest.
  • How do I handle secrets in tests? Use environment variables or a dedicated secret manager like HashiCorp Vault.
  • Can I test existing infrastructure? Yes, using `terraform plan -detailed-exitcode` or the `import` block.

Conclusion: Embracing a comprehensive Terraform Testing strategy is the only way to scale cloud infrastructure reliably. By combining static analysis, HCL-native unit tests, and robust E2E validation with tools like Terratest, you create a resilient ecosystem where “breaking production” becomes a relic of the past. Start small, lint your code today, and build your testing pyramid one block at a time.

Thank you for reading the DevopsRoles page!

Terraform Provisioners: 7 Proven Tricks for EC2 Automation

Introduction: Let’s get one thing straight right out of the gate: Terraform Provisioners are a controversial topic in the DevOps world.

I’ve been building infrastructure since the days when we racked our own physical servers.

Back then, automation meant a terrifying, undocumented bash script.

Today, we have elegant, declarative tools like Terraform. But sometimes, declarative isn’t enough.

Sometimes, you just need to SSH into a box, copy a configuration file, and run a command.

That is exactly where HashiCorp’s provisioners come into play, saving your deployment pipeline.

If you’re tired of banging your head against the wall trying to bootstrap an EC2 instance, you are in the right place.

In this guide, we are going deep into a real-world lab environment.

We are going to use the `file` and `remote-exec` provisioners to turn a useless vanilla AMI into a functional web server.

Grab a coffee. Let’s write some code that actually works.

The Hard Truth About Terraform Provisioners

HashiCorp themselves will tell you that provisioners should be a “last resort.”

Why? Because they break the fundamental rules of declarative infrastructure.

Terraform doesn’t track what a provisioner actually does to a server.

If your `remote-exec` script fails halfway through, Terraform marks the entire resource as “tainted.”

It won’t try to fix the script on the next run; it will just nuke the server and start over.

But let’s be real. In the trenches of enterprise IT, “last resort” scenarios happen before lunch on a Monday.

You will inevitably face legacy software that doesn’t support cloud-init or User Data.

When that happens, understanding how to wrangle Terraform Provisioners is the only thing standing between you and a missed deadline.

The “File” vs. “Remote-Exec” Dynamic Duo

These two provisioners are the bread and butter of quick-and-dirty instance bootstrapping.

The `file` provisioner is your courier. It safely copies files or directories from the machine running Terraform to the newly created resource.

The `remote-exec` provisioner is your remote operator. It invokes scripts directly on the target resource.

Together, they allow you to push a complex setup script, configure the environment, and execute it seamlessly.

I’ve used this exact pattern to deploy everything from custom Nginx proxies to hardened database clusters.

Building Your EC2 Lab for Terraform Provisioners

To really grasp this, we need a hands-on environment.

The full configuration for this lab is included below, so you can follow along step by step.

First, we need to set up our AWS provider and lay down the foundational networking.

Without a proper Security Group allowing SSH (Port 22), your provisioners will simply time out.

I’ve seen junior devs waste hours debugging Terraform when the culprit was a closed AWS firewall.


# Define the AWS Provider
provider "aws" {
  region = "us-east-1"
}

# Create a Security Group for SSH and HTTP
resource "aws_security_group" "web_sg" {
  name        = "terraform-provisioner-sg"
  description = "Allow SSH and HTTP traffic"

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"] # Warning: Open to the world! Use your IP in production.
  }

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

Notice that ingress block? Never, ever use `0.0.0.0/0` for SSH in a production environment.

But for this lab, we need to make sure Terraform can reach the instance without jumping through VPN hoops.

Mastering the Connection Block in Terraform Provisioners

Here is where 90% of deployments fail.

A provisioner cannot execute if it doesn’t know *how* to talk to the server.

You must define a `connection` block inside your resource.

This block tells Terraform what protocol to use (SSH or WinRM), the user, and the private key.

If you mess up the connection block, your `terraform apply` will hang for five minutes (the default connection timeout) before throwing a fatal error.

Let’s automatically generate an SSH key pair using Terraform so we don’t have to manage local files manually.


# Generate a secure private key
resource "tls_private_key" "lab_key" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

# Create an AWS Key Pair using the generated public key
resource "aws_key_pair" "generated_key" {
  key_name   = "terraform-lab-key"
  public_key = tls_private_key.lab_key.public_key_openssh
}

# Save the private key locally so we can SSH manually later
resource "local_file" "private_key_pem" {
  content  = tls_private_key.lab_key.private_key_pem
  filename = "terraform-lab-key.pem"
  file_permission = "0400"
}

This is a veteran trick: generating the key pair inside Terraform makes the lab fully reproducible. Just remember that the private key ends up in your state file, so treat that state as a secret.

No more “it works on my machine” excuses when handing off your codebase.

For more advanced key management strategies, you should always consult the official HashiCorp Connection Documentation.

Executing Terraform Provisioners: EC2, File, and Remote-Exec

Now comes the main event.

We are going to spin up an Ubuntu EC2 instance.

We will use the `file` provisioner to push a custom HTML file.

Then, we will use the `remote-exec` provisioner to install Nginx and move our file into the web root.

Pay close attention to the syntax here. Order matters.


resource "aws_instance" "web_server" {
  ami           = "ami-0c7217cdde317cfec" # Ubuntu 22.04 LTS in us-east-1
  instance_type = "t2.micro"
  key_name      = aws_key_pair.generated_key.key_name
  vpc_security_group_ids = [aws_security_group.web_sg.id]

  # The crucial connection block
  connection {
    type        = "ssh"
    user        = "ubuntu"
    private_key = tls_private_key.lab_key.private_key_pem
    host        = self.public_ip
  }

  # Provisioner 1: File Transfer
  provisioner "file" {
    content     = "<h1>Hello from Terraform Provisioners!</h1>"
    destination = "/tmp/index.html"
  }

  # Provisioner 2: Remote Execution
  provisioner "remote-exec" {
    inline = [
      "sudo apt-get update -y",
      "sudo apt-get install -y nginx",
      "sudo mv /tmp/index.html /var/www/html/index.html",
      "sudo systemctl restart nginx"
    ]
  }

  tags = {
    Name = "Terraform-Provisioner-Lab"
  }
}

Why Did We Transfer to /tmp First?

Did you catch that little detail in the file provisioner?

We didn’t send the file directly to `/var/www/html/`.

Why? Because the SSH user is `ubuntu`, which doesn’t have root permissions by default.

If you try to SCP a file directly into a protected system directory, Terraform will fail with a “permission denied” error.

You must copy files to a temporary directory like `/tmp`.

Then, you use `remote-exec` with `sudo` to move the file to its final destination.

That one tip alone will save you hours of pulling your hair out.

When NOT to Use Terraform Provisioners

I know I’ve been singing their praises for edge cases.

But as a senior engineer, I have to tell you the truth.

If you are using Terraform Provisioners to run massive, 500-line shell scripts, you are doing it wrong.

Terraform is an infrastructure orchestration tool, not a configuration management tool.

If your instances require that much bootstrapping, you should be using a tool built for the job.

I highly recommend exploring Ansible or Packer for heavy lifting.

Alternatively, bake your dependencies directly into a golden AMI.

It will make your Terraform runs faster, more reliable, and less prone to random network timeouts.

Always consider the principles of immutable infrastructure before relying heavily on runtime execution.
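For the golden AMI route, a minimal Packer template (Packer also uses HCL) might look like the following sketch; the region, source AMI filter, and naming scheme are assumptions:

```hcl
# nginx-ami.pkr.hcl -- sketch of a golden AMI build (region and filters are assumptions)
locals {
  timestamp = regex_replace(timestamp(), "[- TZ:]", "")
}

source "amazon-ebs" "nginx" {
  region        = "us-east-1"
  instance_type = "t3.micro"
  ssh_username  = "ubuntu"
  ami_name      = "nginx-golden-${local.timestamp}"

  source_ami_filter {
    filters = {
      name                = "ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"
      virtualization-type = "hvm"
    }
    owners      = ["099720109477"] # Canonical
    most_recent = true
  }
}

build {
  sources = ["source.amazon-ebs.nginx"]

  # Bake the dependency in at build time instead of at terraform apply time
  provisioner "shell" {
    inline = [
      "sudo apt-get update -y",
      "sudo apt-get install -y nginx",
    ]
  }
}
```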

Handling Tainted Resources

What happens when your `remote-exec` fails on line 3?

The EC2 instance is already created in AWS.

But Terraform marks the resource as tainted in your `terraform.tfstate` file.

This means the next time you run `terraform apply`, Terraform will destroy the instance and recreate it.

It will not attempt to resume the script from where it left off.

You can override this behavior by setting `on_failure = continue` inside the provisioner block.

However, I strongly advise against this.

If a provisioner fails, your instance is in an unknown state.

In the cloud native world, we don’t fix broken pets; we replace them with healthy cattle.

Let Terraform destroy the instance, fix your script, and let the automation run clean.
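For completeness, here is a sketch of what the override looks like; again, use it sparingly:

```hcl
# Sketch: letting the apply continue past a failed provisioner (use sparingly)
provisioner "remote-exec" {
  inline     = ["sudo systemctl restart nginx"]
  on_failure = continue # the resource is not tainted even if this command fails
}
```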

FAQ Section

  • Q: Can I use provisioners to run scripts locally?
    A: Yes, you can use the `local-exec` provisioner to run commands on the machine executing the Terraform binary. This is great for triggering local webhooks.
  • Q: Why does my provisioner time out connecting to SSH?
    A: 99% of the time, this is a Security Group issue, a missing public IP, or a mismatched private key in the connection block.
  • Q: Should I use cloud-init instead?
    A: If your target OS supports cloud-init (User Data), it is generally preferred over provisioners because it happens natively during the boot process.
  • Q: Can I run provisioners when destroying resources?
    A: Yes! You can set `when = destroy` to run cleanup scripts, like deregistering a node from a cluster before shutting it down.
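The destroy-time provisioner from that last answer looks like this sketch; the cleanup command is a placeholder:

```hcl
# Sketch: cleanup hook that runs before the instance is destroyed
provisioner "remote-exec" {
  when = destroy
  inline = [
    "echo 'deregistering node...' # placeholder for your real cleanup command",
  ]
}
```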

Conclusion: Terraform Provisioners are powerful tools that every infrastructure engineer needs in their toolbelt. While they shouldn’t be your first choice for configuration management, knowing how to properly execute `file` and `remote-exec` commands will save your architecture when standard declarative methods fall short. Treat them with respect, keep your scripts idempotent, and never stop automating.

Securely Scale AWS with Terraform Sentinel Policy

In high-velocity engineering organizations, the “move fast and break things” mantra often collides violently with security compliance and cost governance. As you scale AWS infrastructure using Infrastructure as Code (IaC), manual code reviews become the primary bottleneck. For expert practitioners utilizing Terraform Cloud or Enterprise, the solution isn’t slowing down; it’s automating governance. This is the domain of Terraform Sentinel Policy.

Sentinel is HashiCorp’s embedded policy-as-code framework. Unlike external linting tools that check syntax, Sentinel sits directly in the provisioning path, intercepting the Terraform plan before execution. It allows SREs and Platform Engineers to define granular, logic-based guardrails that enforce CIS benchmarks, limit blast radius, and control costs without hindering developer velocity. In this guide, we will bypass the basics and dissect how to architect, write, and test advanced Sentinel policies for enterprise-grade AWS environments.

The Architecture of Policy Enforcement

To leverage Terraform Sentinel Policy effectively, one must understand where it lives in the lifecycle. Sentinel runs in a sandboxed environment within the Terraform Cloud/Enterprise execution layer. It does not have direct access to the internet or your cloud provider APIs; instead, it relies on imports to make decisions based on context.

When a run is triggered:

  1. Plan Phase: Terraform generates the execution plan.
  2. Policy Check: Sentinel evaluates the plan against your defined policy sets.
  3. Decision: The run is allowed, halted (Hard Mandatory), or flagged for override (Soft Mandatory).
  4. Apply Phase: Provisioning occurs only if the policy check passes.

Pro-Tip: The tfplan/v2 import is the standard for accessing resource data. Avoid the legacy tfplan import as it lacks the detailed resource changes structure required for complex AWS resource evaluations.

Anatomy of an AWS Sentinel Policy

A robust policy typically consists of three phases: Imports, Filtering, and Evaluation. Let’s examine a scenario where we must ensure all AWS S3 buckets have server-side encryption enabled.

1. The Setup

First, we define our imports and useful helper functions to filter the plan for specific resource types.

import "tfplan/v2" as tfplan

# Filter resources by type
get_resources = func(type) {
  resources = {}
  for tfplan.resource_changes as address, rc {
    if rc.type is type and
       (rc.change.actions contains "create" or rc.change.actions contains "update") {
      resources[address] = rc
    }
  }
  return resources
}

# Fetch all S3 Buckets
s3_buckets = get_resources("aws_s3_bucket")

2. The Logic Rule

Next, we iterate through the filtered resources to validate their configuration. Note the use of the all quantifier, which ensures the rule returns true only if every instance passes the check.

# Rule: specific encryption configuration check
encryption_enforced = rule {
  all s3_buckets as _, bucket {
    keys(bucket.change.after) contains "server_side_encryption_configuration" and
    length(bucket.change.after.server_side_encryption_configuration) > 0
  }
}

# Main Rule
main = rule {
  encryption_enforced
}

This policy inspects the after state—the predicted state of the resource after the apply—ensuring that we are validating the final outcome, not just the code written in main.tf.

Advanced AWS Scaling Patterns

Scaling securely on AWS requires more than just resource configuration checks. It requires context-aware policies. Here are two advanced patterns for expert SREs.

Pattern 1: Cost Control via Instance Type Allow-Listing

To prevent accidental provisioning of expensive x1e.32xlarge instances, use a policy that compares requested types against an allowed list.

# Allowed EC2 types
allowed_types = ["t3.micro", "t3.small", "m5.large"]

# Check function
instance_type_allowed = rule {
  all get_resources("aws_instance") as _, instance {
    instance.change.after.instance_type in allowed_types
  }
}

Pattern 2: Enforcing Mandatory Tags for Cost Allocation

At scale, untagged resources are “ghost resources.” You can enforce that every AWS resource created carries specific tags (e.g., CostCenter, Environment).

mandatory_tags = ["CostCenter", "Environment"]

validate_tags = rule {
  all get_resources("aws_instance") as _, instance {
    all mandatory_tags as t {
      keys(instance.change.after.tags) contains t
    }
  }
}

Testing and Mocking Policies

Writing policy is development. Therefore, it requires testing. You should never push a Terraform Sentinel Policy to production without verifying it against mock data.

Use the Sentinel CLI together with mock data generated from real Terraform plans (Terraform Cloud can export Sentinel mocks for any run):

$ terraform plan -out=tfplan
$ terraform show -json tfplan > plan.json
$ sentinel apply -trace policy.sentinel

By creating a suite of test cases (passing and failing mocks), you can integrate policy testing into your CI/CD pipeline, ensuring that a change to the governance logic doesn’t accidentally block legitimate deployments.
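The `sentinel test` command discovers cases by convention: for a policy named `policy.sentinel`, it looks under `test/policy/`. A passing case might look like this sketch (the file names are assumptions):

```hcl
# test/policy/pass.hcl -- sketch of a Sentinel test case
mock "tfplan/v2" {
  module {
    source = "mock-tfplan-pass.sentinel"
  }
}

test {
  rules = {
    main = true
  }
}
```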

Enforcement Levels: The Deployment Strategy

When rolling out new policies, avoid the “Big Bang” approach. Sentinel offers three enforcement levels:

  • Advisory: Logs a warning but allows the run to proceed. Ideal for testing new policies in production without impact.
  • Soft Mandatory: Halts the run but allows administrators to override. Useful for edge cases where human judgment is required.
  • Hard Mandatory: Halts the run explicitly. No overrides. Use this for strict security violations (e.g., public S3 buckets, open security group 0.0.0.0/0).

Frequently Asked Questions (FAQ)

How does Sentinel differ from OPA (Open Policy Agent)?

While OPA is a general-purpose policy engine using Rego, Sentinel is embedded deeply into the HashiCorp ecosystem. Sentinel’s integration with Terraform Cloud allows it to access data from the Plan, Configuration, and State without complex external setups. However, OPA is often used for Kubernetes (Gatekeeper), whereas Sentinel excels in the Terraform layer.

Can I access cost estimates in my policy?

Yes. Terraform Cloud generates a cost estimate for every plan. By importing tfrun, you can write policies that deny infrastructure changes if the delta in monthly cost exceeds a certain threshold (e.g., increasing the bill by more than $500/month).
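A sketch of such a cost guardrail using the tfrun and decimal imports; the $500 threshold matches the example above, and you should verify the import attribute names against current HashiCorp documentation:

```sentinel
# Sketch: block runs that raise the monthly bill by more than $500
import "tfrun"
import "decimal"

# delta_monthly_cost is the predicted change in monthly spend for this plan
delta = decimal.new(tfrun.cost_estimate.delta_monthly_cost)

main = rule {
  delta.less_than(500)
}
```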

Does Sentinel affect the performance of Terraform runs?

Sentinel executes after the plan is calculated. While the execution time of the policy itself is usually negligible (milliseconds to seconds), extensive API calls within the policy (if using external HTTP imports) can add latency. Stick to using the standard tfplan imports for optimal performance.

Conclusion

Implementing Terraform Sentinel Policy is a definitive step towards maturity in your cloud operating model. It shifts security left, turning vague compliance documents into executable code that scales with your AWS infrastructure. By treating policy as code—authoring, testing, and versioning it—you empower your developers to deploy faster with the confidence that the guardrails will catch any critical errors.

Start small: Audit your current AWS environment, identify the top 3 risks (e.g., unencrypted volumes, open security groups), and implement them as Advisory policies today.

Mastering Factorio with Terraform: The Ultimate Automation Guide

For the uninitiated, Factorio is a game about automation. For the Senior DevOps Engineer, it is a spiritual mirror of our daily lives. You start by manually crafting plates (manual provisioning), move to burner drills (shell scripts), and eventually build a mega-base capable of launching rockets per minute (fully automated Kubernetes clusters).

But why stop at automating the gameplay? As infrastructure experts, we know that the factory must grow, and the server hosting it should be as resilient and reproducible as the factory itself. In this guide, we will bridge the gap between gaming and professional Infrastructure as Code (IaC). We are going to deploy a high-performance, cost-optimized, and fully persistent Factorio dedicated server using Factorio with Terraform.

Why Terraform for a Game Server?

If you are reading this, you likely already know Terraform’s value proposition. However, applying it to stateful workloads like game servers presents unique challenges that test your architectural patterns.

  • Immutable Infrastructure: Treat the game server binary and OS as ephemeral. Only the /saves directory matters.
  • Cost Control: Factorio servers don’t need to run 24/7 if no one is playing. Terraform allows you to spin up the infrastructure for a weekend session and destroy it Sunday night, while preserving state.
  • Disaster Recovery: If your server crashes or the instance degrades, a simple terraform apply brings the factory back online in minutes.

Pro-Tip: Factorio is heavily single-threaded. When choosing your compute instance (e.g., AWS EC2), prioritize high clock speeds (GHz) over core count. An AWS c5.large or c6i.large is often superior to general-purpose instances for maintaining 60 UPS (Updates Per Second) on large mega-bases.

Architecture Overview

We will design a modular architecture on AWS, though the concepts apply to GCP, Azure, or DigitalOcean. Our stack includes:

  • Compute: EC2 Instance (optimized for compute).
  • Storage: Separate EBS volume for game saves (preventing data loss on instance termination) or an S3-sync strategy.
  • Network: VPC, Subnet, and Security Groups allowing UDP/34197.
  • Provisioning: Cloud-Init (`user_data`) to bootstrap Docker and the headless Factorio container.

Step 1: The Network & Security Layer

Factorio uses UDP port 34197 by default. Unlike HTTP services, we don’t need a complex Load Balancer; a direct public IP attachment is sufficient and reduces latency.

resource "aws_security_group" "factorio_sg" {
  name        = "factorio-allow-udp"
  description = "Allow Factorio UDP traffic"
  vpc_id      = module.vpc.vpc_id

  ingress {
    description = "Factorio Game Port"
    from_port   = 34197
    to_port     = 34197
    protocol    = "udp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "SSH Access (Strict)"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = [var.admin_ip] # Always restrict SSH!
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

Step 2: Persistent Storage Strategy

This is the most critical section. In a “Factorio with Terraform” setup, if you run terraform destroy, you must not lose the factory. We have two primary patterns:

  1. EBS Volume Attachment: A dedicated EBS volume that exists outside the lifecycle of the EC2 instance.
  2. S3 Sync (The Cloud-Native Way): The instance pulls the latest save from S3 on boot and pushes it back on shutdown (or via cron).

For experts, I recommend the S3 Sync pattern for true immutability. It avoids the headaches of EBS volume attachment states and availability zone constraints.

resource "aws_iam_role_policy" "factorio_s3_access" {
  name = "factorio_s3_policy"
  role = aws_iam_role.factorio_role.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = [
          "s3:GetObject",
          "s3:PutObject",
          "s3:ListBucket"
        ]
        Effect   = "Allow"
        Resource = [
          aws_s3_bucket.factorio_saves.arn,
          "${aws_s3_bucket.factorio_saves.arn}/*"
        ]
      },
    ]
  })
}

Step 3: The Compute Instance & Cloud-Init

We use the user_data field to bootstrap the environment. We will utilize the community-standard factoriotools/factorio Docker image. This image is robust and handles updates automatically.

resource "aws_instance" "server" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = "c5.large" # High single-core performance

  subnet_id                   = module.vpc.public_subnets[0]
  vpc_security_group_ids      = [aws_security_group.factorio_sg.id]
  iam_instance_profile        = aws_iam_instance_profile.factorio_profile.name

  # templatefile() replaces the deprecated template_file data source
  user_data = templatefile("${path.module}/scripts/setup.sh.tpl", {
    bucket_name = aws_s3_bucket.factorio_saves.id
    save_file   = "my-megabase.zip"
  })

  # Spot instances can save you 70% cost, but ensure you handle interruption!
  instance_market_options {
    market_type = "spot"
  }

  tags = {
    Name = "Factorio-Server"
  }
}

The Cloud-Init Script (setup.sh.tpl)

The bash script below handles the “hydrate” phase (downloading save) and the “run” phase.

#!/bin/bash
# Install Docker and AWS CLI
apt-get update && apt-get install -y docker.io awscli

# 1. Hydrate: Download latest save from S3
mkdir -p /opt/factorio/saves
aws s3 cp s3://${bucket_name}/${save_file} /opt/factorio/saves/save.zip || echo "No save found, starting fresh"

# 2. Permissions
chown -R 845:845 /opt/factorio

# 3. Run Factorio Container
docker run -d \
  -p 34197:34197/udp \
  -v /opt/factorio:/factorio \
  --name factorio \
  --restart always \
  factoriotools/factorio

# 4. Setup Auto-Save Sync (Crontab)
echo "*/5 * * * * aws s3 sync /opt/factorio/saves s3://${bucket_name}/ --delete" > /tmp/cronjob
crontab /tmp/cronjob

Advanced Concept: To prevent data loss on Spot Instance termination, listen for the EC2 Instance Termination Warning (via metadata service) and trigger a force-save and S3 upload immediately.

Managing State and Updates

One of the benefits of using Factorio with Terraform is update management. When Wube Software releases a new version of Factorio:

  1. Update the Docker tag in your Terraform variable or Cloud-Init script.
  2. Run terraform apply (or taint the instance).
  3. Terraform replaces the instance.
  4. Cloud-Init pulls the save from S3 and the new binary version.
  5. The server is back online in 2 minutes with the latest patch.

Cost Optimization: The Weekend Warrior Pattern

Running a c5.large 24/7 can cost roughly $60-$70/month. If you only play on weekends, this is wasteful.

By wrapping your Terraform configuration in a CI/CD pipeline (like GitHub Actions), you can create a “ChatOps” workflow (e.g., via Discord slash commands). A command like /start-server triggers terraform apply, and /stop-server triggers terraform destroy. Because your state is safely in S3 (both Terraform state and Game save state), you pay $0 for compute during the week.
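A sketch of the trigger side in GitHub Actions; `workflow_dispatch` stands in for the Discord slash-command relay, which needs its own integration, and the action versions are assumptions:

```yaml
# .github/workflows/factorio.yml -- sketch: manual start/stop for the server
name: factorio-server
on:
  workflow_dispatch:
    inputs:
      action:
        description: "start or stop the server"
        required: true
        type: choice
        options:
          - start
          - stop

jobs:
  terraform:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - run: terraform init
      - name: Start server
        if: github.event.inputs.action == 'start'
        run: terraform apply -auto-approve
      - name: Stop server
        if: github.event.inputs.action == 'stop'
        run: terraform destroy -auto-approve
```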

Frequently Asked Questions (FAQ)

Can I use Terraform to manage in-game mods?

Yes. The factoriotools/factorio image supports a mods/ directory. You can upload your mod-list.json and zip files to S3, and have the Cloud-Init script pull them alongside the save file. Alternatively, you can define the mod list as an environment variable passed into the container.

How do I handle the initial world generation?

If no save file exists in S3 (the first run), the Docker container will generate a new map based on the server-settings.json. Once generated, your cron job will upload this new save to S3, establishing the persistence loop.

Is Terraform overkill for a single server?

For a “click-ops” manual setup, maybe. But as an expert, you know that “manual” means “unmaintainable.” Terraform documents your configuration, allows for version control of your server settings, and enables effortless migration between cloud providers or regions.

Conclusion

Deploying Factorio with Terraform is more than just a fun project; it is an exercise in designing stateful, resilient applications on ephemeral infrastructure. By decoupling storage (S3) from compute (EC2) and automating the configuration via Cloud-Init, you achieve a server setup that is robust, cheap to run, and easy to upgrade.

The factory must grow, and now, your infrastructure can grow with it. Thank you for reading the DevopsRoles page!

Deploy Generative AI with Terraform: Automated Agent Lifecycle

The shift from Jupyter notebooks to production-grade infrastructure is often the “valley of death” for AI projects. While data scientists excel at model tuning, the operational reality of managing API quotas, secure context retrieval, and scalable inference endpoints requires rigorous engineering. This is where Generative AI with Terraform becomes the critical bridge between experimental code and reliable, scalable application delivery.

In this guide, we will bypass the basics of “what is IaC” and focus on architecting a robust automated lifecycle for Generative AI agents. We will cover provisioning vector databases for RAG (Retrieval-Augmented Generation), securing LLM credentials via Secrets Manager, and deploying containerized agents using Amazon ECS—all defined strictly in HCL.

The Architecture of AI-Native Infrastructure

When we talk about deploying Generative AI with Terraform, we are typically orchestrating three distinct layers. Unlike traditional web apps, AI applications require specialized state management for embeddings and massive compute bursts for inference.

  • Knowledge Layer (RAG): Vector databases (e.g., Pinecone, Milvus, or AWS OpenSearch) to store embeddings.
  • Inference Layer (Compute): Containers hosting the orchestration logic (LangChain/LlamaIndex) running on ECS, EKS, or Lambda.
  • Model Gateway (API): Secure interfaces to foundation models (AWS Bedrock, OpenAI, Anthropic).

Pro-Tip for SREs: Avoid managing model weights directly in Terraform state. Terraform is designed for infrastructure state, not gigabyte-sized binary blobs. Use Terraform to provision the S3 buckets and permissions, but delegate the artifact upload to your CI/CD pipeline or DVC (Data Version Control).

1. Provisioning the Knowledge Base (Vector Store)

For a RAG architecture, the vector store is your database. Below is a production-ready pattern for deploying an AWS OpenSearch Serverless collection, which serves as a highly scalable vector store compatible with LangChain.

resource "aws_opensearchserverless_collection" "agent_memory" {
  name        = "gen-ai-agent-memory"
  type        = "VECTORSEARCH"
  description = "Vector store for Generative AI embeddings"

  depends_on = [aws_opensearchserverless_security_policy.encryption]
}

resource "aws_opensearchserverless_security_policy" "encryption" {
  name        = "agent-memory-encryption"
  type        = "encryption"
  policy      = jsonencode({
    Rules = [
      {
        ResourceType = "collection"
        Resource = ["collection/gen-ai-agent-memory"]
      }
    ],
    AWSOwnedKey = true
  })
}

output "vector_endpoint" {
  value = aws_opensearchserverless_collection.agent_memory.collection_endpoint
}

This HCL snippet ensures that encryption is enabled by default—a non-negotiable requirement for enterprise AI apps handling proprietary data.

2. Securing LLM Credentials

Hardcoding API keys is a cardinal sin in DevOps, but in GenAI, it’s also a financial risk due to usage-based billing. We leverage AWS Secrets Manager to inject keys into our agent’s environment at runtime.

resource "aws_secretsmanager_secret" "openai_api_key" {
  name        = "production/gen-ai/openai-key"
  description = "API Key for OpenAI Model Access"
}

resource "aws_iam_role_policy" "ecs_task_secrets" {
  name = "ecs-task-secrets-access"
  role = aws_iam_role.ecs_task_execution_role.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "secretsmanager:GetSecretValue"
        Effect = "Allow"
        Resource = aws_secretsmanager_secret.openai_api_key.arn
      }
    ]
  })
}

By explicitly defining the IAM policy, we adhere to the principle of least privilege. The container hosting the AI agent can strictly access only the specific secret required for inference.
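One caveat worth noting: aws_secretsmanager_secret creates only the secret's metadata shell. The actual value must be written separately, either out-of-band (CLI or console, which keeps it out of Terraform state) or with a version resource, as in this sketch:

```hcl
# Assumption: the key arrives as a sensitive input variable. A value managed
# this way still lands in Terraform state, so the state backend must itself
# be encrypted and access-controlled.
variable "openai_api_key" {
  type      = string
  sensitive = true
}

resource "aws_secretsmanager_secret_version" "openai_api_key" {
  secret_id     = aws_secretsmanager_secret.openai_api_key.id
  secret_string = var.openai_api_key
}
```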

3. Deploying the Agent Runtime (ECS Fargate)

For agents that require long-running processes (e.g., maintaining WebSocket connections or processing large documents), AWS Lambda often hits timeout limits. ECS Fargate provides a serverless container environment perfect for hosting Python-based LangChain agents.

resource "aws_ecs_task_definition" "agent_task" {
  family                   = "gen-ai-agent"
  network_mode             = "awsvpc"
  requires_compatibilities = ["FARGATE"]
  cpu                      = 1024
  memory                   = 2048
  execution_role_arn       = aws_iam_role.ecs_task_execution_role.arn

  container_definitions = jsonencode([
    {
      name      = "agent_container"
      image     = "${aws_ecr_repository.agent_repo.repository_url}:latest"
      essential = true
      secrets   = [
        {
          name      = "OPENAI_API_KEY"
          valueFrom = aws_secretsmanager_secret.openai_api_key.arn
        }
      ]
      environment = [
        {
          name  = "VECTOR_DB_ENDPOINT"
          value = aws_opensearchserverless_collection.agent_memory.collection_endpoint
        }
      ]
      logConfiguration = {
        logDriver = "awslogs"
        options = {
          "awslogs-group"         = "/ecs/gen-ai-agent"
          "awslogs-region"        = var.aws_region
          "awslogs-stream-prefix" = "ecs"
        }
      }
    }
  ])
}

This configuration dynamically links the output of your vector store resource (created in Step 1) into the container’s environment variables, creating an explicit dependency graph in which infrastructure updates automatically propagate to the application configuration.
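One dependency the task definition does not create for you: the awslogs driver fails container startup if the log group is missing, so declare it explicitly (the retention value is an assumption):

```hcl
resource "aws_cloudwatch_log_group" "agent" {
  name              = "/ecs/gen-ai-agent" # must match the container definition
  retention_in_days = 30                  # caps storage costs for chatty agents
}
```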

4. Automating the Lifecycle with Terraform & CI/CD

Deploying Generative AI with Terraform isn’t just about the initial setup; it’s about the lifecycle. As models drift and prompts need updating, you need a pipeline that handles redeployment without downtime.

The “Blue/Green” Strategy for AI Agents

AI agents are non-deterministic. A prompt change that works for one query might break another. Implementing a Blue/Green deployment strategy using Terraform is crucial.

  • Infrastructure (Terraform): Defines the Load Balancer and Target Groups.
  • Application (CodeDeploy): Shifts traffic from the old agent version (Blue) to the new version (Green) gradually.

Using the AWS CodeDeploy Terraform resource, you can script this traffic shift to automatically rollback if error rates spike (e.g., if the LLM starts hallucinating or timing out).
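A trimmed sketch of that wiring follows. The IAM role, cluster, service, listener, and target group names are assumptions; the canary configuration name is a built-in AWS preset:

```hcl
resource "aws_codedeploy_app" "agent" {
  compute_platform = "ECS"
  name             = "gen-ai-agent"
}

resource "aws_codedeploy_deployment_group" "agent" {
  app_name               = aws_codedeploy_app.agent.name
  deployment_group_name  = "gen-ai-agent-dg"
  deployment_config_name = "CodeDeployDefault.ECSCanary10Percent5Minutes"
  service_role_arn       = aws_iam_role.codedeploy.arn # assumed role

  deployment_style {
    deployment_option = "WITH_TRAFFIC_CONTROL"
    deployment_type   = "BLUE_GREEN"
  }

  # Roll back automatically if the deployment fails or a CloudWatch
  # alarm (e.g. on error rate or latency) fires during the shift.
  auto_rollback_configuration {
    enabled = true
    events  = ["DEPLOYMENT_FAILURE", "DEPLOYMENT_STOP_ON_ALARM"]
  }

  blue_green_deployment_config {
    deployment_ready_option {
      action_on_timeout = "CONTINUE_DEPLOYMENT"
    }
    terminate_blue_instances_on_deployment_success {
      action                           = "TERMINATE"
      termination_wait_time_in_minutes = 5
    }
  }

  ecs_service {
    cluster_name = aws_ecs_cluster.agents.name # assumed
    service_name = aws_ecs_service.agent.name  # assumed
  }

  load_balancer_info {
    target_group_pair_info {
      prod_traffic_route {
        listener_arns = [aws_lb_listener.prod.arn] # assumed
      }
      target_group { name = aws_lb_target_group.blue.name }
      target_group { name = aws_lb_target_group.green.name }
    }
  }
}
```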

Frequently Asked Questions (FAQ)

Can Terraform manage the actual LLM models?

Generally, no. Terraform is for infrastructure. While you can use Terraform to provision an Amazon SageMaker Endpoint or an EC2 instance with GPU support, the model weights themselves (the artifacts) are better managed by tools like DVC or MLflow. Terraform sets the stage; the ML pipeline puts the actors on it.

How do I handle GPU provisioning for self-hosted LLMs in Terraform?

If you are hosting open-source models (like Llama 3 or Mistral), you will need to specify instance types with GPU acceleration. In the aws_instance or aws_launch_template resource, ensure you select the appropriate instance type (e.g., g5.2xlarge or p3.2xlarge) and use a purpose-built AMI (Amazon Machine Image) such as the AWS Deep Learning AMI, which ships with GPU drivers and CUDA preinstalled.

Is Terraform suitable for prompt management?

No. Prompts are application code/configuration, not infrastructure. Storing prompts in Terraform variables creates unnecessary friction. Store prompts in a dedicated database or as config files within your application repository.

Conclusion

Deploying Generative AI with Terraform transforms a fragile experiment into a resilient enterprise asset. By codifying the vector storage, compute environment, and security policies, you eliminate the “it works on my machine” syndrome that plagues AI development.

The code snippets provided above offer a foundational skeleton. As you scale, look into modularizing these resources into reusable Terraform Modules to empower your data science teams to spin up compliant environments on demand. Thank you for reading the DevopsRoles page!

Mastering AWS Account Deployment: Terraform & AWS Control Tower

For modern enterprises, AWS account deployment is no longer a manual task of clicking through the AWS Organizations console. As infrastructure scales, the need for consistent, compliant, and automated “vending machines” for AWS accounts becomes paramount. By combining the governance power of AWS Control Tower with the Infrastructure as Code (IaC) flexibility of Terraform, SREs and Cloud Architects can build a robust deployment pipeline that satisfies both developer velocity and security requirements.

The Foundations: Why Control Tower & Terraform?

In a decentralized cloud environment, AWS account deployment must address three critical pillars: Governance, Security, and Scalability. While AWS Control Tower provides the managed “Landing Zone” environment, Terraform provides the declarative state management required to manage thousands of resources across multiple accounts without configuration drift.

Advanced Concept: Control Tower uses “Guardrails” (Service Control Policies and Config Rules). When deploying accounts via Terraform, you aren’t just creating a container; you are attaching a policy-driven ecosystem that inherits the root organization’s security posture by default.

By leveraging the Terraform AWS Provider alongside Control Tower, you enable a “GitOps” workflow where an account request is simply a .tf file in a repository. This approach ensures that every account is born with the correct IAM roles, VPC configurations, and logging buckets pre-provisioned.

Deep Dive: Account Factory for Terraform (AFT)

The AWS Control Tower Account Factory for Terraform (AFT) is the official bridge between these two worlds. AFT sets up a separate orchestration engine that listens for Terraform changes and triggers the Control Tower account creation API.

The AFT Component Stack

  • AFT Management Account: A dedicated account within your Organization to host the AFT pipeline.
  • Request Metadata: A DynamoDB table or Git repo that stores account parameters (Email, OU, SSO user).
  • Customization Pipeline: A series of Step Functions and Lambda functions that apply “Global” and “Account-level” Terraform modules after the account is provisioned.

Step-by-Step: Deploying Your First Managed Account

To master AWS account deployment via AFT, you must understand the structure of an account request. Below is a production-grade example of a Terraform module call to request a new “Production” account.


module "production_account" {
  source = "github.com/aws-ia/terraform-aws-control_tower_account_factory"

  control_tower_parameters = {
    AccountEmail              = "cloud-ops+prod-app-01@example.com"
    AccountName               = "production-app-01"
    ManagedOrganizationalUnit = "Production"
    SSOUserEmail              = "admin@example.com"
    SSOUserFirstName          = "Platform"
    SSOUserLastName           = "Team"
  }

  account_tags = {
    "Project"     = "Apollo"
    "Environment" = "Production"
    "CostCenter"  = "12345"
  }

  change_management_parameters = {
    change_requested_by = "DevOps Team"
    change_reason       = "New microservice deployment for Q4"
  }

  custom_fields = {
    vpc_cidr = "10.0.0.0/20"
  }
}

After applying this Terraform code, AFT triggers a workflow in the background. It calls the Control Tower ProvisionProduct API, waits for the account to be “Ready,” and then executes your post-provisioning Terraform modules to set up VPCs, IAM roles, and CloudWatch alarms.

Production-Ready Best Practices

Expert SREs know that AWS account deployment is only 20% of the battle; the other 80% is maintaining those accounts. Follow these standards:

  • Idempotency is King: Ensure your post-provisioning scripts can run multiple times without failure. Use Terraform’s lifecycle { prevent_destroy = true } on critical resources like S3 logging buckets.
  • Service Quota Management: Newly deployed accounts start with default limits. Use the aws_servicequotas_service_quota resource to automatically request increases for EC2 instances or VPCs during the deployment phase.
  • Region Deny Policies: Use Control Tower guardrails to restrict deployments to approved regions. This reduces your attack surface and prevents “shadow IT” in unmonitored regions like me-south-1.
  • Centralized Logging: Always ensure the aws_s3_bucket_policy in your log-archive account allows the newly created account’s CloudTrail service principal to write logs immediately.
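The first bullet, expressed in HCL (the bucket name is an assumed naming convention):

```hcl
resource "aws_s3_bucket" "account_logs" {
  bucket = "org-log-archive-prod-app-01" # assumed name

  lifecycle {
    # Any plan that would delete this bucket fails instead of destroying logs.
    prevent_destroy = true
  }
}
```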

Troubleshooting Common Deployment Failures

Even with automation, AWS account deployment can encounter hurdles. Here are the most common failure modes observed in enterprise environments:

| Issue | Root Cause | Resolution |
| --- | --- | --- |
| Email Already in Use | AWS account emails must be globally unique across all of AWS. | Use email sub-addressing (e.g., ops+acc1@company.com) if supported by your provider. |
| STS Timeout | AFT cannot assume the AWSControlTowerExecution role in the new account. | Check if a Service Control Policy (SCP) is blocking sts:AssumeRole in the target OU. |
| Customization Loop | Terraform state mismatch in the AFT pipeline. | Manually clear the DynamoDB lock table for the specific account ID in the AFT Management account. |

Frequently Asked Questions

Can I use Terraform to deploy accounts without Control Tower?

Yes, using the aws_organizations_account resource. However, you lose the managed guardrails and automated dashboarding provided by Control Tower. For expert-level setups, Control Tower + AFT is the industry standard for compliance.
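For reference, the Control-Tower-free path looks like this (names and OU reference are assumptions; the provider documentation recommends ignoring role_name changes, since AWS does not allow updating it after account creation):

```hcl
resource "aws_organizations_account" "workload" {
  name      = "workload-dev-01"
  email     = "cloud-ops+workload-dev-01@example.com"
  parent_id = aws_organizations_organizational_unit.workloads.id # assumed OU
  role_name = "OrganizationAccountAccessRole"

  lifecycle {
    ignore_changes = [role_name]
  }
}
```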

How does AFT handle Terraform state?

AFT manages state files in an S3 bucket within the AFT Management account. It creates a unique state key for each account it provisions to ensure isolation and prevent blast-radius issues during updates.

How long does a typical AWS account deployment take via AFT?

Usually between 20 to 45 minutes. This includes the time AWS takes to provision the physical account container, apply Control Tower guardrails, and run your custom Terraform modules.

Conclusion

Mastering AWS account deployment requires a shift from manual administration to a software engineering mindset. By treating your accounts as immutable infrastructure and managing them through Terraform and AWS Control Tower, you gain the ability to scale your cloud footprint with confidence. Whether you are managing five accounts or five thousand, the combination of AFT and IaC provides the consistency and auditability required by modern regulatory frameworks. For further technical details, refer to the Official AFT Documentation. Thank you for reading the DevopsRoles page!

Terraform Secrets: Deploy Your Terraform Workers Like a Pro

If you are reading this, you’ve likely moved past the “Hello World” stage of Infrastructure as Code. You aren’t just spinning up a single EC2 instance; you are orchestrating fleets. Whether you are managing high-throughput Celery nodes, Kubernetes worker pools, or self-hosted Terraform Workers (Terraform Cloud Agents), the game changes at scale.

In this guide, we dive deep into the architecture of deploying resilient, immutable worker nodes. We will move beyond basic resource blocks and explore lifecycle management, drift detection strategies, and the “cattle not pets” philosophy that distinguishes a Junior SysAdmin from a Staff Engineer.

The Philosophy of Immutable Terraform Workers

When we talk about Terraform Workers in an expert context, we are usually discussing compute resources that perform background processing. The biggest mistake I see in production environments is treating these workers as mutable infrastructure—servers that are patched, updated, and nursed back to health.

To deploy workers like a pro, you must embrace Immutability. Your Terraform configuration should not describe changes to a worker; it should describe the replacement of a worker.

Pro-Tip: Stop using remote-exec provisioners to configure your workers. It introduces brittleness and makes your terraform apply dependent on SSH connectivity and runtime package repositories. Instead, shift left. Use HashiCorp Packer to bake your dependencies into a Golden Image, and use Terraform solely for orchestration.

Architecting Resilient Worker Fleets

Let’s look at the actual HCL required to deploy a robust fleet of workers. We aren’t just using aws_instance; we are using Launch Templates and Auto Scaling Groups (ASGs) to ensure self-healing capabilities.

1. The Golden Image Strategy

Your Terraform Workers should boot instantly. If your user_data script takes 15 minutes to install Python dependencies, your autoscaling events will be too slow to handle traffic spikes.

data "aws_ami" "worker_golden_image" {
  most_recent = true
  owners      = ["self"]

  filter {
    name   = "name"
    values = ["my-worker-image-v*"]
  }

  filter {
    name   = "tag:Status"
    values = ["production"]
  }
}
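The data source above assumes an image pipeline that produces and tags those AMIs. A minimal Packer HCL sketch of such a pipeline (region, base-image filter, and provisioner contents are assumptions):

```hcl
locals {
  timestamp = regex_replace(timestamp(), "[- TZ:]", "")
}

source "amazon-ebs" "worker" {
  ami_name      = "my-worker-image-v${local.timestamp}"
  instance_type = "t3.small"
  region        = "us-east-1"
  ssh_username  = "ubuntu"

  source_ami_filter {
    filters = {
      name                = "ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"
      virtualization-type = "hvm"
    }
    owners      = ["099720109477"] # Canonical
    most_recent = true
  }

  tags = {
    Status = "production" # matches the tag:Status filter in the Terraform data source
  }
}

build {
  sources = ["source.amazon-ebs.worker"]

  # Bake dependencies in at build time so workers boot instantly.
  provisioner "shell" {
    inline = [
      "sudo apt-get update -y",
      "sudo apt-get install -y docker.io python3-pip",
    ]
  }
}
```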

2. Zero-Downtime Rotation with Lifecycle Blocks

One of the most powerful yet underutilized features for managing workers is the lifecycle meta-argument. When you update a Launch Template, Terraform’s default destroy-then-create replacement order can terminate workers before their successors exist.

To ensure you don’t kill active jobs, use create_before_destroy within your resource definitions. This ensures new workers are healthy before the old ones are terminated.

resource "aws_autoscaling_group" "worker_fleet" {
  name                = "worker-asg-${aws_launch_template.worker.latest_version}"
  min_size            = 3
  max_size            = 10
  vpc_zone_identifier = module.vpc.private_subnets

  launch_template {
    id      = aws_launch_template.worker.id
    version = "$Latest"
  }

  instance_refresh {
    strategy = "Rolling"
    preferences {
      min_healthy_percentage = 90
    }
  }

  lifecycle {
    create_before_destroy = true
    ignore_changes        = [load_balancers, target_group_arns]
  }
}

Specific Use Case: Terraform Cloud Agents (Self-Hosted Workers)

Sometimes, “Terraform Workers” refers specifically to Terraform Cloud Agents. These are specialized workers you deploy in your own private network to execute Terraform runs on behalf of Terraform Cloud (TFC) or Terraform Enterprise (TFE). This allows TFC to manage resources behind your corporate firewall without whitelisting public IPs.

Security & Isolation

When deploying TFC Agents, security is paramount. These workers hold the “Keys to the Kingdom”—they need broad IAM permissions to provision infrastructure.

  • Network Isolation: Deploy these workers in private subnets with no ingress access, only egress (443) to app.terraform.io.
  • Ephemeral Tokens: Do not hardcode the TFC Agent Token. Inject it via a secrets manager (like AWS Secrets Manager or HashiCorp Vault) at runtime.
  • Single-Use Agents: For maximum security, configure your agents to terminate after a single job (if your architecture supports high churn) to prevent credential caching attacks.

# Example: Passing a TFC Agent Token securely via User Data
resource "aws_launch_template" "tfc_agent" {
  name_prefix   = "tfc-agent-"
  image_id      = data.aws_ami.ubuntu.id
  instance_type = "t3.medium"

  user_data = base64encode(<<-EOF
              #!/bin/bash
              # Fetch token from Secrets Manager (requires IAM role)
              export TFC_AGENT_TOKEN=$(aws secretsmanager get-secret-value --secret-id tfc-agent-token --query SecretString --output text)
              
              # Start the agent container
              docker run -d --restart always \
                --name tfc-agent \
                -e TFC_AGENT_TOKEN=$TFC_AGENT_TOKEN \
                -e TFC_AGENT_NAME="worker-$(hostname)" \
                hashicorp/tfc-agent:latest
              EOF
  )
}

Advanced Troubleshooting & Drift Detection

Even the best-architected Terraform Workers can experience drift. This happens when a process on the worker changes a configuration file, or a manual intervention occurs.

Detecting “Zombie” Workers

A common failure mode is a worker that passes the EC2 status check but fails the application health check. Terraform only sees the cloud provider’s API status, so the fleet looks healthy in state while jobs silently fail.

The Solution: decouple your health checks. Use Terraform to provision the infrastructure, but rely on the Autoscaling Group’s health_check_type = "ELB" (if using Load Balancers) or custom CloudWatch alarms to terminate unhealthy instances. Terraform’s job is to define the state of the fleet, not monitor the health of the application process inside it.
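Assuming the fleet sits behind a target group, the decoupling amounts to two extra arguments on the earlier worker_fleet resource, repeated here as a condensed sketch (the target group name is an assumption):

```hcl
resource "aws_autoscaling_group" "worker_fleet" {
  min_size            = 3
  max_size            = 10
  vpc_zone_identifier = module.vpc.private_subnets
  target_group_arns   = [aws_lb_target_group.workers.arn] # assumed

  launch_template {
    id      = aws_launch_template.worker.id
    version = "$Latest"
  }

  # Replace instances that fail the load balancer's application-level
  # check, not just the EC2 status check, after a short boot grace period.
  health_check_type         = "ELB"
  health_check_grace_period = 120
}
```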

Frequently Asked Questions (FAQ)

1. Should I use Terraform count or for_each for worker nodes?

For identical worker nodes (like an ASG), you generally shouldn’t use either—you should use an Autoscaling Group resource which handles the count dynamically. However, if you are deploying distinct workers (e.g., “Worker-High-CPU” vs “Worker-High-Mem”), use for_each. It allows you to add or remove specific workers without shifting the index of all other resources, which happens with count.
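A sketch of the distinct-worker case (the profile map and instance types are assumptions; the AMI lookup reuses the data source from earlier):

```hcl
variable "worker_profiles" {
  type = map(object({ instance_type = string }))
  default = {
    high-cpu = { instance_type = "c6i.xlarge" }
    high-mem = { instance_type = "r6i.xlarge" }
  }
}

resource "aws_launch_template" "worker" {
  for_each      = var.worker_profiles
  name_prefix   = "worker-${each.key}-"
  image_id      = data.aws_ami.worker_golden_image.id
  instance_type = each.value.instance_type
}
```

Removing the high-mem key destroys only that worker's template; with count, deleting index 0 would re-address every remaining resource.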

2. How do I handle secrets on my Terraform Workers?

Never commit secrets to your Terraform state or code. Use IAM Roles (Instance Profiles) attached to the workers. The code running on the worker should use the AWS SDK (or equivalent) to fetch secrets from a managed service like AWS Secrets Manager or Vault at runtime.

3. What is the difference between Terraform Workers and Cloudflare Workers?

This is a common confusion. Terraform Workers (in this context) are compute instances managed by Terraform. Cloudflare Workers are a serverless execution environment provided by Cloudflare. Interestingly, you can use the cloudflare Terraform provider to manage Cloudflare Workers, treating the serverless code itself as an infrastructure resource!

Conclusion

Deploying Terraform Workers effectively requires a shift in mindset from “managing servers” to “managing fleets.” By leveraging Golden Images, utilizing ASG lifecycle hooks, and securing your TFC Agents, you elevate your infrastructure from fragile to anti-fragile.

Remember, the goal of an expert DevOps engineer isn’t just to write code that works; it’s to write code that scales, heals, and protects itself. Thank you for reading the DevopsRoles page!

Terraform AWS IAM: Simplify Policy Management Now

For expert DevOps engineers and SREs, managing Identity and Access Management (IAM) at scale is rarely about clicking buttons in the AWS Console. It is about architectural purity, auditability, and the Principle of Least Privilege. When implemented correctly, Terraform AWS IAM management transforms a potential security swamp into a precise, version-controlled fortress.

However, as infrastructure grows, so does the complexity of JSON policy documents, cross-account trust relationships, and conditional logic. This guide moves beyond the basics of resource "aws_iam_user" and dives into advanced patterns for constructing scalable, maintainable, and secure IAM hierarchies using HashiCorp Terraform.

The Evolution from Raw JSON to HCL Data Sources

In the early days of Terraform, engineers often embedded raw JSON strings into their aws_iam_policy resources using Heredoc syntax. While functional, this approach is brittle. It lacks syntax validation during the terraform plan phase and makes dynamic interpolation painful.

The expert standard today relies heavily on the aws_iam_policy_document data source. This allows you to write policies in HCL (HashiCorp Configuration Language) and leverage Terraform’s native logic capabilities like dynamic blocks and conditionals.

Why aws_iam_policy_document is Superior

  • Validation: Terraform validates HCL syntax before the API call is made.
  • Composability: You can merge multiple data sources using the source_policy_documents or override_policy_documents arguments, allowing for modular policy construction.
  • Readability: It abstracts the JSON formatting, letting you focus on the logic.

Advanced Example: Dynamic Conditions and Merging

data "aws_iam_policy_document" "base_deny" {
  statement {
    sid       = "DenyNonSecureTransport"
    effect    = "Deny"
    actions   = ["s3:*"]
    resources = ["arn:aws:s3:::*"]

    condition {
      test     = "Bool"
      variable = "aws:SecureTransport"
      values   = ["false"]
    }

    # No principals block here: identity-based policies (aws_iam_policy)
    # must not include a Principal element; principals belong only in
    # resource-based policies such as S3 bucket policies.
  }
}

data "aws_iam_policy_document" "s3_read_only" {
  # Merge the base deny policy into this specific policy
  source_policy_documents = [data.aws_iam_policy_document.base_deny.json]

  statement {
    sid       = "AllowS3List"
    effect    = "Allow"
    actions   = ["s3:ListBucket", "s3:GetObject"]
    resources = [
      var.s3_bucket_arn,
      "${var.s3_bucket_arn}/*"
    ]
  }
}

resource "aws_iam_policy" "secure_read_only" {
  name   = "secure-s3-read-only"
  policy = data.aws_iam_policy_document.s3_read_only.json
}

Pro-Tip: Use override_policy_documents sparingly. While powerful for hot-fixing policies in downstream modules, it can obscure the final policy outcome, making debugging permissions difficult. Prefer source_policy_documents for additive composition.

Mastering Trust Policies (Assume Role)

One of the most common friction points in Terraform AWS IAM is the “Assume Role Policy” (or Trust Policy). Unlike standard permission policies, this defines who can assume the role.

Hardcoding principals in JSON is a mistake when working with dynamic environments (e.g., ephemeral EKS clusters). Instead, leverage the aws_iam_policy_document for trust relationships as well.

Pattern: IRSA (IAM Roles for Service Accounts)

When working with Kubernetes (EKS), you often need to construct OIDC trust relationships. This requires precise string manipulation to match the OIDC provider URL and the specific Service Account namespace/name.

data "aws_iam_policy_document" "eks_oidc_assume" {
  statement {
    actions = ["sts:AssumeRoleWithWebIdentity"]

    principals {
      type        = "Federated"
      identifiers = [var.oidc_provider_arn]
    }

    condition {
      test     = "StringEquals"
      variable = "${replace(var.oidc_provider_url, "https://", "")}:sub"
      values   = ["system:serviceaccount:${var.namespace}:${var.service_account_name}"]
    }
  }
}

resource "aws_iam_role" "app_role" {
  name               = "eks-app-role"
  assume_role_policy = data.aws_iam_policy_document.eks_oidc_assume.json
}

Handling Circular Dependencies

A classic deadlock occurs when you try to create an IAM Role that needs to be referenced in a Policy, which is then attached to that Role. Terraform’s graph dependency engine usually handles this well, but edge cases exist, particularly with S3 Bucket Policies referencing specific Roles.

To resolve this, rely on aws_iam_role.name or aws_iam_role.arn strictly where needed. If a circular dependency arises (e.g., KMS Key Policy referencing a Role that needs the Key ARN), you may need to break the cycle by using a separate aws_iam_role_policy_attachment resource rather than inline policies, or by using data sources to look up ARNs if the resources are loosely coupled.

Scaling with Modules: The “Terraform AWS IAM” Ecosystem

Writing every policy from scratch violates DRY (Don’t Repeat Yourself). For enterprise-grade implementations, the Community AWS IAM Module is the gold standard.

It abstracts complex logic for creating IAM users, groups, and assumable roles. However, for highly specific internal platforms, building a custom internal module is often better.

When to Build vs. Buy (Use Community Module)

| Scenario | Recommendation | Reasoning |
| --- | --- | --- |
| Standard Roles (EC2, Lambda) | Community Module | Handles standard trust policies and common attachments instantly. |
| Complex IAM Users | Community Module | Simplifies PGP key encryption for secret keys and login profiles. |
| Strict Compliance (PCI/HIPAA) | Custom Module | Allows strict enforcement of Permission Boundaries and naming conventions hardcoded into the module logic. |

Best Practices for Security & Compliance

1. Enforce Permission Boundaries

Delegating IAM creation to developer teams is risky. Using Permission Boundaries is the only safe way to allow teams to create roles. In Terraform, ensure your module accepts a permissions_boundary_arn variable and applies it to every role created.
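Inside such a module, the boundary is simply threaded through to every role. A sketch (the variable name matches the text; the role name is an assumed example, and the trust policy reuses the OIDC document from earlier):

```hcl
variable "permissions_boundary_arn" {
  type        = string
  description = "Boundary applied to every role this module creates"
}

resource "aws_iam_role" "delegated" {
  name                 = "team-app-role" # assumed
  assume_role_policy   = data.aws_iam_policy_document.eks_oidc_assume.json
  permissions_boundary = var.permissions_boundary_arn
}
```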

2. Lock Down with terraform-compliance or OPA

Before Terraform applies any changes, your CI/CD pipeline should scan the plan. Tools like Open Policy Agent (OPA) or Sentinel can block Effect: Allow on Action: "*".

# Example Rego policy (OPA) to deny wildcard actions
package terraform.analysis

deny[msg] {
  resource := input.resource_changes[_]
  resource.type == "aws_iam_policy"
  statement := json.unmarshal(resource.change.after.policy).Statement[_]
  statement.Effect == "Allow"
  statement.Action == "*"
  msg = sprintf("Wildcard action not allowed in policy: %v", [resource.name])
}

Frequently Asked Questions (FAQ)

Can I manage IAM resources across multiple AWS accounts with one Terraform apply?

Technically yes, using multiple provider aliases. However, this is generally an anti-pattern due to the “blast radius” risk. It is better to separate state files by account or environment and use a pipeline to orchestrate updates.

How do I import existing IAM roles into Terraform?

Use the import block (available in Terraform 1.5+) or the CLI command: terraform import aws_iam_role.example role_name. Be careful with attached policies; you must identify if they are inline policies or managed policy attachments and import those separately to avoid state drift.
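The declarative form looks like this (the resource address and role name mirror the CLI example; the trust policy reference is an assumption, since the config must match the real role before the plan is clean):

```hcl
import {
  to = aws_iam_role.example
  id = "role_name"
}

resource "aws_iam_role" "example" {
  name               = "role_name"
  assume_role_policy = data.aws_iam_policy_document.eks_oidc_assume.json # assumed
}
```

terraform plan then shows the import alongside any configuration drift before anything is applied.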

Inline Policies vs. Managed Policies: Which is better?

Managed Policies (standalone aws_iam_policy resources) are superior. They are reusable, versioned by AWS (allowing rollback), and easier to audit. Inline policies die with the role and can bloat the state file significantly.

Conclusion

Mastering Terraform AWS IAM is about shifting from “making it work” to “making it governable.” By utilizing aws_iam_policy_document for robust HCL definitions, understanding the nuances of OIDC trust relationships, and leveraging modular architectures, you ensure your cloud security scales as fast as your infrastructure.

Start refactoring your legacy JSON Heredoc strings into data sources today to improve readability and future-proof your IAM strategy. Thank you for reading the DevopsRoles page!

Master TimescaleDB Deployment on AWS using Terraform

Time-series data is the lifeblood of modern observability, IoT, and financial analytics. While managed services exist, enterprise-grade requirements—such as strict data sovereignty, VPC peering latency, or custom ZFS compression tuning—often mandate a self-hosted architecture. This guide focuses on a production-ready TimescaleDB deployment on AWS using Terraform.

We aren’t just spinning up an EC2 instance; we are engineering a storage layer capable of handling massive ingest rates and complex analytical queries. We will leverage Infrastructure as Code (IaC) to orchestrate compute, high-performance block storage, and automated bootstrapping.

Architecture Decisions: Optimizing for Throughput

Before writing HCL, we must define the infrastructure characteristics required by TimescaleDB. Unlike stateless microservices, database performance is bound by I/O and memory.

  • Compute (EC2): We will target memory-optimized instances (e.g., r6i or r7g families) to maximize the RAM available for PostgreSQL’s shared buffers and OS page cache.
  • Storage (EBS): We will separate the WAL (Write Ahead Log) from the Data directory.
    • WAL Volume: Requires low latency sequential writes. io2 Block Express or high-throughput gp3.
    • Data Volume: Requires high random read/write throughput. gp3 is usually sufficient, but striping multiple volumes (RAID 0) is a common pattern for extreme performance.
  • OS Tuning: We will use cloud-init to tune kernel parameters (hugepages, swappiness) and run timescaledb-tune automatically.

Pro-Tip: Avoid using burstable instances (T-family) for production databases. The CPU credit exhaustion can lead to catastrophic latency spikes during data compaction or high-ingest periods.

Phase 1: Provider & VPC Foundation

Assuming you have a VPC setup, let’s establish the security context. Your TimescaleDB instance should reside in a private subnet, accessible only via a Bastion host or VPN.

Security Group Definition

resource "aws_security_group" "timescale_sg" {
  name        = "timescaledb-sg"
  description = "Security group for TimescaleDB Node"
  vpc_id      = var.vpc_id

  # Inbound: PostgreSQL Standard Port
  ingress {
    from_port       = 5432
    to_port         = 5432
    protocol        = "tcp"
    security_groups = [var.app_security_group_id] # Only allow app tier
    description     = "Allow PGSQL access from App Tier"
  }

  # Outbound: Allow package updates and S3 backups
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "timescaledb-production-sg"
  }
}

Phase 2: Storage Engineering (EBS)

This is the critical differentiator for expert deployments. We explicitly define EBS volumes separate from the root device to ensure data persistence independent of the instance lifecycle and to optimize I/O channels.

# Data Volume - Optimized for Throughput
resource "aws_ebs_volume" "pg_data" {
  availability_zone = var.availability_zone
  size              = 500
  type              = "gp3"
  iops              = 12000 # Provisioned IOPS
  throughput        = 500   # MB/s

  tags = {
    Name = "timescaledb-data-vol"
  }
}

# WAL Volume - Optimized for Latency
resource "aws_ebs_volume" "pg_wal" {
  availability_zone = var.availability_zone
  size              = 100
  type              = "io2"
  iops              = 5000 

  tags = {
    Name = "timescaledb-wal-vol"
  }
}

resource "aws_volume_attachment" "pg_data_attach" {
  device_name = "/dev/sdf"
  volume_id   = aws_ebs_volume.pg_data.id
  instance_id = aws_instance.timescale_node.id
}

resource "aws_volume_attachment" "pg_wal_attach" {
  device_name = "/dev/sdg"
  volume_id   = aws_ebs_volume.pg_wal.id
  instance_id = aws_instance.timescale_node.id
}

Phase 3: The TimescaleDB Instance & Bootstrapping

We use the user_data attribute to handle the “Day 0” operations: mounting volumes, installing the TimescaleDB packages (which install PostgreSQL as a dependency), and applying initial configuration tuning.

Warning: Ensure the IAM Role attached to this instance has ec2:DescribeTags permission if you use cloud-init to self-discover volume tags, and scoped S3 read/write permissions on your backup bucket (avoid a blanket s3:*) if you automate WAL-G backups immediately.
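The instance profile referenced by the EC2 resource below is not defined elsewhere in this guide; a minimal sketch (names are illustrative) might look like:

```hcl
# Role that EC2 can assume on the instance's behalf
resource "aws_iam_role" "timescale_role" {
  name = "timescale-node-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}

# Instance profile wrapper referenced by aws_instance
resource "aws_iam_instance_profile" "timescale_role" {
  name = "timescale-node-profile"
  role = aws_iam_role.timescale_role.name
}
```

Attach policies for tag discovery or backups to `aws_iam_role.timescale_role` via `aws_iam_role_policy_attachment` resources as needed.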

resource "aws_instance" "timescale_node" {
  ami           = data.aws_ami.ubuntu.id # Recommend Ubuntu 22.04 or 24.04 LTS
  instance_type = "r6i.2xlarge"
  subnet_id     = var.private_subnet_id
  key_name      = var.key_name

  vpc_security_group_ids = [aws_security_group.timescale_sg.id]
  iam_instance_profile   = aws_iam_instance_profile.timescale_role.name

  root_block_device {
    volume_type = "gp3"
    volume_size = 50
  }

  # "Day 0" Configuration Script
  # "Day 0" Configuration Script
  user_data = <<-EOF
    #!/bin/bash
    set -e

    # 1. Format and mount EBS volumes (first boot only; mkfs wipes existing data)
    # Note: NVMe device names may vary on Nitro instances (e.g., /dev/nvme1n1)
    mkfs.xfs /dev/sdf
    mkfs.xfs /dev/sdg
    mkdir -p /var/lib/postgresql/data
    mkdir -p /var/lib/postgresql/wal
    mount /dev/sdf /var/lib/postgresql/data
    mount /dev/sdg /var/lib/postgresql/wal

    # Persist mounts in fstab... (omitted for brevity)

    # 2. Add the TimescaleDB repository and install
    # (user_data runs as root, so sudo is unnecessary; apt-key is deprecated,
    # so use a signed-by keyring instead)
    wget --quiet -O - https://packagecloud.io/timescale/timescaledb/gpgkey | gpg --dearmor -o /usr/share/keyrings/timescaledb.gpg
    echo "deb [signed-by=/usr/share/keyrings/timescaledb.gpg] https://packagecloud.io/timescale/timescaledb/ubuntu/ $(lsb_release -c -s) main" > /etc/apt/sources.list.d/timescaledb.list
    apt-get update
    apt-get install -y timescaledb-2-postgresql-14 timescaledb-tools

    # 3. Initialize the database on the custom volumes
    # The Ubuntu packages create a default cluster under /var/lib/postgresql/14/main,
    # so drop it and re-create the cluster on our mounted data/WAL volumes.
    chown -R postgres:postgres /var/lib/postgresql
    pg_dropcluster --stop 14 main || true
    pg_createcluster -d /var/lib/postgresql/data 14 main -- --waldir=/var/lib/postgresql/wal

    # 4. Tune Configuration
    # This is critical: It calculates memory settings based on the instance type
    timescaledb-tune --quiet --yes --conf-path=/etc/postgresql/14/main/postgresql.conf

    # 5. Enable Service
    systemctl enable postgresql
    systemctl start postgresql
  EOF

  tags = {
    Name = "TimescaleDB-Primary"
  }
}

Optimizing Terraform for Stateful Resources

Managing databases with Terraform requires handling state carefully. Unlike a stateless web server, you cannot simply destroy and recreate this resource if you change a parameter.

Lifecycle Management

Use the lifecycle meta-argument to prevent accidental deletion of your primary database node.

lifecycle {
  prevent_destroy = true
  ignore_changes  = [
    ami, 
    user_data # Prevent recreation if boot script changes
  ]
}

Validation and Post-Deployment

Once terraform apply completes, verification is necessary. You should verify that the TimescaleDB extension is correctly loaded and that your memory settings reflect the timescaledb-tune execution.

Connect to your instance and run:

sudo -u postgres psql -c "SELECT * FROM pg_extension WHERE extname = 'timescaledb';"
sudo -u postgres psql -c "SHOW shared_buffers;"

For further reading on tuning parameters, refer to the official TimescaleDB Tune documentation.

Frequently Asked Questions (FAQ)

1. Can I use RDS for TimescaleDB instead of EC2?

Not directly. Amazon RDS for PostgreSQL does not offer the TimescaleDB extension (largely for licensing reasons), which is a major driver for self-hosting on EC2. Even on managed platforms that do offer the extension, you lose control over low-level filesystem tuning (like using ZFS for compression), which can be critical for high-volume time-series data.

2. How do I handle High Availability (HA) with this Terraform setup?

This guide covers a single-node deployment. For HA, you would expand the Terraform code to deploy a secondary EC2 instance in a different Availability Zone and configure Streaming Replication. Tools like Patroni are the industry standard for managing auto-failover on self-hosted PostgreSQL/TimescaleDB.

3. Why separate WAL and Data volumes?

WAL operations are sequential and synchronous. If they share bandwidth with random read/write operations of the Data volume, write latency will spike, causing backpressure on your ingestion pipeline. Separating them physically (different EBS volumes) ensures consistent write performance.

Conclusion

Mastering TimescaleDB Deployment on AWS requires moving beyond simple “click-ops” to a codified, reproducible infrastructure. By using Terraform to orchestrate not just the compute, but the specific storage characteristics required for time-series workloads, you ensure your database can scale with your data.

Next Steps: Once your instance is running, implement a backup strategy using WAL-G to stream backups directly to S3, ensuring point-in-time recovery (PITR) capabilities. Thank you for reading the DevopsRoles page!

Unlock Reusable VPC Modules: Terraform for Dev/Stage/Prod Environments

If you are managing infrastructure at scale, you have likely felt the pain of the “copy-paste” sprawl. You define a VPC for Development, then copy the code for Staging, and again for Production, perhaps changing a CIDR block or an instance count manually. This breaks the fundamental DevOps principle of DRY (Don’t Repeat Yourself) and introduces drift risk.

For Senior DevOps Engineers and SREs, the goal isn’t just to write code that works; it’s to architect abstractions that scale. Reusable VPC Modules are the cornerstone of a mature Infrastructure as Code (IaC) strategy. They allow you to define the “Gold Standard” for networking once and instantiate it infinitely across environments with predictable results.

In this guide, we will move beyond basic syntax. We will construct a production-grade, agnostic VPC module capable of dynamic subnet calculation, conditional resource creation (like NAT Gateways), and strict variable validation suitable for high-compliance Dev, Stage, and Prod environments.

Why Reusable VPC Modules Matter (Beyond DRY)

While reducing code duplication is the obvious benefit, the strategic value of modularizing your VPC architecture runs deeper.

  • Governance & Compliance: By centralizing your network logic, you enforce security standards (e.g., “Flow Logs must always be enabled” or “Private subnets must not have public IP assignment”) in a single location.
  • Testing & Versioning: You can version your module (e.g., v1.2.0). Production can remain pinned to a stable version while you iterate on features in Development, effectively applying software engineering lifecycles to your network.
  • Abstraction Complexity: A consumer of your module (perhaps a developer spinning up an ephemeral environment) shouldn’t need to understand Route Tables or NACLs. They should only need to provide a CIDR block and an Environment name.
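Version pinning looks like this in practice (the repository URL is illustrative):

```hcl
# environments/prod/main.tf
module "vpc" {
  # Prod stays pinned to a released tag; Dev can track a newer ref
  # while the next version is being hardened.
  source      = "git::https://github.com/example-org/terraform-modules.git//vpc?ref=v1.2.0"

  environment = "prod"
  vpc_cidr    = "10.0.0.0/16"
}
```

Bumping `ref` then becomes an explicit, reviewable change per environment, exactly like a dependency upgrade in application code.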

Pro-Tip: Avoid the “God Module” anti-pattern. While it’s tempting to bundle the VPC, EKS, and RDS into one giant module, this leads to dependency hell. Keep your Reusable VPC Modules strictly focused on networking primitives: VPC, Subnets, Route Tables, Gateways, and ACLs.

Anatomy of a Production-Grade Module

Let’s build a module that calculates subnets dynamically based on Availability Zones (AZs) and handles environment-specific logic (like high availability in Prod vs. cost savings in Dev).

1. Input Strategy & Validation

Modern Terraform (v1.0+) allows for powerful variable validation. We want to ensure that downstream users don’t accidentally pass invalid CIDR blocks.

# modules/vpc/variables.tf

variable "environment" {
  description = "Deployment environment (dev, stage, prod)"
  type        = string
  validation {
    condition     = contains(["dev", "stage", "prod"], var.environment)
    error_message = "Environment must be one of: dev, stage, prod."
  }
}

variable "vpc_cidr" {
  description = "CIDR block for the VPC"
  type        = string
  validation {
    condition     = can(cidrhost(var.vpc_cidr, 0))
    error_message = "Must be a valid IPv4 CIDR block."
  }
}

variable "az_count" {
  description = "Number of AZs to utilize"
  type        = number
  default     = 2
}

2. Dynamic Subnetting with cidrsubnet

Hardcoding subnet CIDRs (e.g., 10.0.1.0/24) is brittle. Instead, use the cidrsubnet function to mathematically carve up the VPC CIDR. This ensures no overlap and automatic scalability if you change the base CIDR size.
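To make the math concrete: `cidrsubnet(prefix, newbits, netnum)` extends the prefix length by `newbits` bits and selects the `netnum`-th resulting network. With a /16 base and 8 new bits, each subnet is a /24 and `netnum` fills the third octet:

```hcl
locals {
  # cidrsubnet("10.0.0.0/16", 8, 0) = "10.0.0.0/24"
  # cidrsubnet("10.0.0.0/16", 8, 1) = "10.0.1.0/24"
  # cidrsubnet("10.0.0.0/16", 8, 2) = "10.0.2.0/24"
  example_subnet = cidrsubnet("10.0.0.0/16", 8, 2)
}
```

You can verify these values interactively with `terraform console`.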

# modules/vpc/main.tf

data "aws_availability_zones" "available" {
  state = "available"
}

resource "aws_vpc" "main" {
  cidr_block           = var.vpc_cidr
  enable_dns_hostnames = true
  enable_dns_support   = true

  tags = {
    Name = "${var.environment}-vpc"
  }
}

# Public Subnets
resource "aws_subnet" "public" {
  count                   = var.az_count
  vpc_id                  = aws_vpc.main.id
  # Example: 10.0.0.0/16 -> 10.0.0.0/24, 10.0.1.0/24, etc.
  cidr_block              = cidrsubnet(var.vpc_cidr, 8, count.index)
  availability_zone       = data.aws_availability_zones.available.names[count.index]
  map_public_ip_on_launch = true

  tags = {
    Name = "${var.environment}-public-${data.aws_availability_zones.available.names[count.index]}"
    Tier = "Public"
  }
}

# Private Subnets
resource "aws_subnet" "private" {
  count             = var.az_count
  vpc_id            = aws_vpc.main.id
  # Offset the CIDR calculation by 'az_count' to avoid overlap with public subnets
  cidr_block        = cidrsubnet(var.vpc_cidr, 8, count.index + var.az_count)
  availability_zone = data.aws_availability_zones.available.names[count.index]

  tags = {
    Name = "${var.environment}-private-${data.aws_availability_zones.available.names[count.index]}"
    Tier = "Private"
  }
}

3. Conditional NAT Gateways (Cost Optimization)

NAT Gateways are expensive. In a Dev environment, you might only need one shared NAT Gateway (or none if you use instances with public IPs for testing), whereas Prod requires High Availability (one NAT per AZ).

# modules/vpc/main.tf

locals {
  # If Prod, create NAT per AZ. If Dev/Stage, create only 1 NAT total to save costs.
  nat_gateway_count = var.environment == "prod" ? var.az_count : 1
}

resource "aws_eip" "nat" {
  count  = local.nat_gateway_count
  domain = "vpc"
}

resource "aws_nat_gateway" "main" {
  count         = local.nat_gateway_count
  allocation_id = aws_eip.nat[count.index].id
  subnet_id     = aws_subnet.public[count.index].id

  tags = {
    Name = "${var.environment}-nat-${count.index}"
  }
}
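Private subnets still need a default route through these gateways. A sketch of the routing glue (not shown in the module above): the modulo index means Prod gets a 1:1 NAT-per-AZ mapping, while Dev/Stage's single NAT is shared by every AZ.

```hcl
resource "aws_route_table" "private" {
  count  = var.az_count
  vpc_id = aws_vpc.main.id

  route {
    cidr_block     = "0.0.0.0/0"
    # prod: one NAT per AZ (index % az_count); dev/stage: all AZs use NAT 0
    nat_gateway_id = aws_nat_gateway.main[count.index % local.nat_gateway_count].id
  }
}

resource "aws_route_table_association" "private" {
  count          = var.az_count
  subnet_id      = aws_subnet.private[count.index].id
  route_table_id = aws_route_table.private[count.index].id
}
```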

Implementing Across Environments

Once your Reusable VPC Module is polished, utilizing it across environments becomes a trivial exercise in configuration management. I recommend a directory-based structure over Terraform Workspaces for clearer isolation of state files and variable definitions.

Directory Structure

infrastructure/
├── modules/
│   └── vpc/ (The code we wrote above)
├── environments/
│   ├── dev/
│   │   └── main.tf
│   ├── stage/
│   │   └── main.tf
│   └── prod/
│       └── main.tf

The Implementation (DRY at work)

In environments/prod/main.tf, your code is now incredibly concise:

module "vpc" {
  source      = "../../modules/vpc"
  
  environment = "prod"
  vpc_cidr    = "10.0.0.0/16"
  az_count    = 3 # High Availability
}

Contrast this with environments/dev/main.tf:

module "vpc" {
  source      = "../../modules/vpc"
  
  environment = "dev"
  vpc_cidr    = "10.10.0.0/16" # Different CIDR
  az_count    = 2 # Lower cost
}

Advanced Patterns & Considerations

Tagging Standards

Effective tagging is non-negotiable for cost allocation and resource tracking. Use the default_tags feature in the AWS provider configuration to apply global tags, but ensure your module accepts a tags map variable to merge specific metadata.
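A sketch of the two-layer tagging pattern (tag keys and values are illustrative):

```hcl
# Root configuration: provider-level tags stamped on every resource
provider "aws" {
  default_tags {
    tags = {
      ManagedBy = "terraform"
      Project   = "core-networking"
    }
  }
}

# Inside the module (assumes a `tags` map variable is declared):
# merge caller-supplied metadata with resource-specific tags.
# tags = merge(var.tags, { Name = "${var.environment}-vpc" })
```

Provider `default_tags` and resource-level tags are merged by the AWS provider, with the resource-level value winning on key collisions.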

Outputting Values for Dependency Injection

Your VPC module is likely the foundation for other modules (like EKS or RDS). Ensure you output the IDs required by these dependent resources.

# modules/vpc/outputs.tf

output "vpc_id" {
  value = aws_vpc.main.id
}

output "private_subnet_ids" {
  value = aws_subnet.private[*].id
}

output "public_subnet_ids" {
  value = aws_subnet.public[*].id
}

Frequently Asked Questions (FAQ)

Should I use the official terraform-aws-modules/vpc/aws or build my own?

For beginners or rapid prototyping, the community module is excellent. However, for Expert SRE teams, building your own Reusable VPC Module is often preferred. It reduces “bloat” (unused features from the community module) and allows strict adherence to internal naming conventions and security compliance logic that a generic module cannot provide.

How do I handle VPC Peering between these environments?

Generally, you should avoid peering Dev and Prod. However, if you need shared services (like a tooling VPC), create a separate vpc-peering module. Do not bake peering logic into the core VPC module, as it creates circular dependencies and makes the module difficult to destroy.

What about VPC Flow Logs?

Flow Logs should be a standard part of your reusable module. I recommend adding a variable enable_flow_logs (defaulting to true) and storing logs in S3 or CloudWatch Logs. This ensures that every environment spun up with your module has auditing enabled by default.
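A minimal sketch of that pattern, assuming the module declares a `flow_log_bucket_arn` variable for the destination:

```hcl
variable "enable_flow_logs" {
  description = "Enable VPC Flow Logs (on by default for auditability)"
  type        = bool
  default     = true
}

resource "aws_flow_log" "this" {
  count                = var.enable_flow_logs ? 1 : 0
  vpc_id               = aws_vpc.main.id
  traffic_type         = "ALL"
  log_destination_type = "s3"
  log_destination      = var.flow_log_bucket_arn # assumed input variable
}
```

Defaulting the toggle to `true` means a consumer has to opt out of auditing explicitly, which is the safer failure mode for compliance.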

Conclusion

Transitioning to Reusable VPC Modules transforms your infrastructure from a collection of static scripts into a dynamic, versioned product. By abstracting the complexity of subnet math and resource allocation, you empower your team to deploy Dev, Stage, and Prod environments that are consistent, compliant, and cost-optimized.

Start refactoring your hardcoded network configurations today. Isolate your logic into a module, version it, and watch your drift disappear. Thank you for reading the DevopsRoles page!