Tag Archives: AWS

Master Amazon EKS Metrics: Automated Collection with AWS Prometheus

Observability at scale is the silent killer of Kubernetes operations. For expert platform engineers, the challenge isn’t just generating Amazon EKS metrics; it is ingesting, storing, and querying them without managing a fragile, self-hosted Prometheus stateful set that collapses under high cardinality.

In this guide, we bypass the basics. We will architect a production-grade observability pipeline using Amazon Managed Service for Prometheus (AMP) and the AWS Distro for OpenTelemetry (ADOT). We will cover Infrastructure as Code (Terraform) implementation, IAM Roles for Service Accounts (IRSA) security patterns, and advanced filtering techniques to keep your metric ingestion costs manageable.

The Scaling Problem: Why Self-Hosted Prometheus Fails EKS

Standard Prometheus deployments on EKS work flawlessly for development clusters. However, as you scale to hundreds of nodes and thousands of pods, the “pull-based” model combined with local TSDB storage hits a ceiling.

  • Vertical Scaling Limits: A single Prometheus server eventually runs out of memory (OOM) attempting to ingest millions of active series.
  • Data Persistence: Managing EBS volumes for long-term metric retention is operational toil.
  • High Availability: Running HA Prometheus pairs doubles your cost and introduces “gap” complexities during failovers.

Pro-Tip: The solution is to decouple collection from storage. By using stateless collectors (ADOT) to scrape Amazon EKS metrics and remote-writing them to a managed backend (AMP), you offload the heavy lifting of storage, availability, and backups to AWS.

Architecture: EKS, ADOT, and AMP

The modern AWS-native observability stack consists of three distinct layers:

  1. Generation: Your application pods and Kubernetes node exporters.
  2. Collection (The Agent): The AWS Distro for OpenTelemetry (ADOT) collector running as a DaemonSet or Deployment. It scrapes Prometheus endpoints and remote-writes data.
  3. Storage (The Backend): Amazon Managed Service for Prometheus (AMP), which is Cortex-based, scalable, and fully compatible with PromQL.

Step-by-Step Implementation

We will use Terraform for the infrastructure foundation and Helm for the Kubernetes components.

1. Provisioning the AMP Workspace

First, we create the AMP workspace. This is the distinct logical space where your metrics will reside.

resource "aws_prometheus_workspace" "eks_observability" {
  alias = "production-eks-metrics"

  tags = {
    Environment = "Production"
    ManagedBy   = "Terraform"
  }
}

output "amp_workspace_id" {
  value = aws_prometheus_workspace.eks_observability.id
}

output "amp_remote_write_url" {
  value = "${aws_prometheus_workspace.eks_observability.prometheus_endpoint}api/v1/remote_write"
}

2. Security: IRSA for Metric Ingestion

The ADOT collector needs permission to write to AMP. We utilize IAM Roles for Service Accounts (IRSA) to grant least-privilege access, avoiding static access keys.

Attach the AWS managed policy AmazonPrometheusRemoteWriteAccess (or a scoped inline policy, as shown below) to a role trusted by your EKS OIDC provider.

data "aws_iam_policy_document" "amp_ingest_policy" {
  statement {
    actions = [
      "aps:RemoteWrite",
      "aps:GetSeries",
      "aps:GetLabels",
      "aps:GetMetricMetadata"
    ]
    resources = [aws_prometheus_workspace.eks_observability.arn]
  }
}

resource "aws_iam_role" "adot_collector" {
  name = "eks-adot-collector-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action = "sts:AssumeRoleWithWebIdentity"
      Effect = "Allow"
      Principal = {
        Federated = "arn:aws:iam::${var.account_id}:oidc-provider/${var.oidc_provider}"
      }
      Condition = {
        StringEquals = {
          "${var.oidc_provider}:sub" = "system:serviceaccount:adot-system:adot-collector"
        }
      }
    }]
  })
}
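
The ingest policy document above still needs to be attached to the collector role. A minimal way to do that (the resource name is illustrative) is an inline role policy:

resource "aws_iam_role_policy" "amp_ingest" {
  name   = "amp-remote-write"
  role   = aws_iam_role.adot_collector.id
  policy = data.aws_iam_policy_document.amp_ingest_policy.json
}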

3. Deploying the ADOT Collector

We deploy the ADOT collector using the EKS add-on or Helm. For granular control over the scraping configuration, the Helm chart is often preferred by power users.

Below is a snippet of the values.yaml configuration required to enable the Prometheus receiver and configure the remote write exporter to send Amazon EKS metrics to your workspace.

# ADOT Helm values.yaml
mode: deployment
serviceAccount:
  create: true
  name: adot-collector
  annotations:
    eks.amazonaws.com/role-arn: "arn:aws:iam::123456789012:role/eks-adot-collector-role"

config:
  receivers:
    prometheus:
      config:
        global:
          scrape_interval: 15s
        scrape_configs:
          - job_name: 'kubernetes-pods'
            kubernetes_sd_configs:
              - role: pod
            relabel_configs:
              - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
                action: keep
                regex: true

  exporters:
    prometheusremotewrite:
      endpoint: "https://aps-workspaces.us-east-1.amazonaws.com/workspaces/ws-xxxx/api/v1/remote_write"
      auth:
        authenticator: sigv4auth

  extensions:
    sigv4auth:
      region: "us-east-1"
      service: "aps"

  service:
    extensions: [sigv4auth]
    pipelines:
      metrics:
        receivers: [prometheus]
        exporters: [prometheusremotewrite]
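
With the values file saved, the deployment itself is a standard Helm install. The repository and chart name below assume the upstream OpenTelemetry Collector chart; adjust them if you use a vendored ADOT chart or the EKS add-on instead.

helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm upgrade --install adot-collector open-telemetry/opentelemetry-collector \
  --namespace adot-system --create-namespace \
  -f values.yaml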

Optimizing Costs: Managing High Cardinality

Amazon EKS metrics can generate massive bills if you ingest every label from every ephemeral pod. AMP charges based on ingestion (samples) and storage.

Filtering at the Collector Level

Use the processors block in your ADOT configuration to drop unnecessary metrics or labels before they leave the cluster.

processors:
  filter:
    metrics:
      exclude:
        match_type: strict
        metric_names:
          - kubelet_volume_stats_available_bytes
          - kubelet_volume_stats_capacity_bytes
          - container_fs_usage_bytes # Often high noise, low value
  resource:
    attributes:
      - key: jenkins_build_id
        action: delete  # Remove high-cardinality labels

Advanced Concept: Avoid including high-cardinality labels such as client_ip, user_id, or unique request_id in your metric dimensions. These explode the series count and degrade query performance in PromQL.

Visualizing with Amazon Managed Grafana

Once data is flowing into AMP, visualization is standard.

  1. Deploy Amazon Managed Grafana (AMG).
  2. Add the “Prometheus” data source.
  3. Toggle “SigV4 SDK” authentication in the data source settings (this seamlessly uses the AMG workspace IAM role to query AMP).
  4. Select your AMP region and workspace.

Because AMP is 100% PromQL compatible, you can import standard community dashboards (like the Kubernetes Cluster Monitoring dashboard) and they will work immediately.

Frequently Asked Questions (FAQ)

Does AMP support Prometheus Alert Manager?

Yes. AMP supports a serverless Alert Manager. You upload your alerting rules (YAML) and routing configuration directly to the AMP workspace via the AWS CLI or Terraform. You do not need to run a separate Alert Manager pod in your cluster.
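
In Terraform, the corresponding workspace-level resources look roughly like the sketch below; the rules and routing YAML files referenced here are placeholders for your own definitions.

resource "aws_prometheus_rule_group_namespace" "alerting_rules" {
  name         = "eks-alerting-rules"
  workspace_id = aws_prometheus_workspace.eks_observability.id
  data         = file("rules.yaml")
}

resource "aws_prometheus_alert_manager_definition" "this" {
  workspace_id = aws_prometheus_workspace.eks_observability.id
  definition   = file("alertmanager.yaml")
}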

What is the difference between ADOT and the standard Prometheus Server?

The standard Prometheus server is a monolithic binary that scrapes, stores, and serves data. ADOT (based on the OpenTelemetry Collector) is a pipeline that receives data, processes it, and exports it. ADOT is stateless and easier to scale horizontally, making it ideal for shipping Amazon EKS metrics to a managed backend.

How do I monitor the control plane (API Server, etcd)?

The EKS control plane runs on AWS-managed infrastructure, so you cannot scrape etcd, the scheduler, or the controller manager directly. However, the Kubernetes API server does expose a Prometheus-format /metrics endpoint via the default kubernetes service, which your collector can scrape, and you can enable EKS control plane logging to send audit and diagnostic logs to CloudWatch.

Conclusion

Migrating to Amazon Managed Service for Prometheus allows expert teams to treat observability as a service rather than a server. By leveraging ADOT for collection and IRSA for security, you build a robust, scalable pipeline for your Amazon EKS metrics.

Your next step is to audit your current metric cardinality using the ADOT processor configuration to ensure you aren’t paying for noise. Focus on the golden signals—Latency, Traffic, Errors, and Saturation—and let AWS manage the infrastructure. Thank you for reading the DevopsRoles page!

AWS SDK for Rust: Your Essential Guide to Quick Setup

In the evolving landscape of cloud-native development, the AWS SDK for Rust represents a paradigm shift toward memory safety, high performance, and predictable resource consumption. While languages like Python and Node.js have long dominated the AWS ecosystem, Rust provides an unparalleled advantage for high-throughput services and cost-optimized Lambda functions. This guide moves beyond the basics, offering a technical deep-dive into setting up a production-ready environment using the SDK.

Pro-Tip: The AWS SDK for Rust is built on top of smithy-rs, a code generator capable of generating SDKs from Smithy models. This architecture ensures that the Rust SDK stays in sync with AWS service updates almost instantly.

1. Project Initialization and Dependency Management

To begin working with the AWS SDK for Rust, you must configure your Cargo.toml carefully. Unlike monolithic SDKs, the Rust SDK is modular. You only include the crates for the services you actually use, which significantly reduces compile times and binary sizes.

Every project requires the aws-config crate for authentication and the specific service crates (e.g., aws-sdk-s3). Since the SDK is inherently asynchronous, a runtime like Tokio is mandatory.

[dependencies]
# Core configuration and credential provider
aws-config = { version = "1.1", features = ["behavior-version-latest"] }

# Service specific crates
aws-sdk-s3 = "1.17"
aws-sdk-dynamodb = "1.16"

# Async runtime
tokio = { version = "1", features = ["full"] }

# Error handling
anyhow = "1.0"

2. Deep Dive: Configuring the AWS SDK for Rust

The entry point for almost any application is the aws_config loader (aws_config::defaults(), or the older aws_config::load_from_env()). For expert developers, understanding how the resulting SdkConfig object manages the credential provider chain and region resolution is critical for debugging cross-account or cross-region deployments.

Asynchronous Initialization

The SDK uses async/await throughout. Here is the standard boilerplate for a robust initialization:

use aws_config::meta::region::RegionProviderChain;
use aws_config::BehaviorVersion;

#[tokio::main]
async fn main() -> Result<(), anyhow::Error> {
    // Determine region, falling back to us-east-1 if not set
    let region_provider = RegionProviderChain::default_provider().or_else("us-east-1");
    
    // Load configuration with the latest behavior version for future-proofing
    let config = aws_config::defaults(BehaviorVersion::latest())
        .region(region_provider)
        .load()
        .await;

    // Initialize service clients
    let s3_client = aws_sdk_s3::Client::new(&config);
    
    println!("AWS SDK for Rust initialized for region: {:?}", config.region().unwrap());
    Ok(())
}

Advanced Concept: The BehaviorVersion parameter is crucial. It allows the AWS team to introduce breaking changes to default behaviors (like retry logic) without breaking existing binaries. Always use latest() for new projects or a specific version for legacy stability.

3. Production Patterns: Interacting with Services

Once the AWS SDK for Rust is configured, interacting with services follows a consistent “Builder” pattern. This pattern ensures type safety and prevents the construction of invalid requests at compile time.

Example: High-Performance S3 Object Retrieval

When fetching large objects, streaming the response body is significantly more efficient than buffering the entire payload into memory. The example below calls collect() for brevity; for very large objects, consume the ByteStream incrementally instead.

use aws_sdk_s3::Client;

async fn download_object(client: &Client, bucket: &str, key: &str) -> Result<(), anyhow::Error> {
    let resp = client
        .get_object()
        .bucket(bucket)
        .key(key)
        .send()
        .await?;

    // Note: collect() buffers the entire body in memory. For very large
    // objects, read the ByteStream incrementally instead of collecting it.
    let data = resp.body.collect().await?;
    println!("Downloaded {} bytes", data.into_bytes().len());

    Ok(())
}

4. Error Handling and Troubleshooting

Error handling in the AWS SDK for Rust is exhaustive. Each operation returns a specialized error type that distinguishes between service-specific errors (e.g., NoSuchKey) and transient network failures.

  • Service Errors: Errors returned by the AWS API (4xx or 5xx).
  • SdkErrors: Errors related to the local environment, such as construction failures or timeouts.

For more details on error structures, refer to the Official Smithy Error Documentation.
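
As a rough sketch, handling one specific service error (NoSuchKey) while propagating everything else for an S3 GetObject call could look like this; the method names follow the 1.x SDK and should be treated as indicative:

use aws_sdk_s3::Client;

async fn object_exists(client: &Client, bucket: &str, key: &str) -> Result<bool, anyhow::Error> {
    match client.get_object().bucket(bucket).key(key).send().await {
        Ok(_) => Ok(true),
        Err(err) => {
            // Convert the SdkError into the operation-specific service error
            let service_err = err.into_service_error();
            if service_err.is_no_such_key() {
                // NoSuchKey: the object simply does not exist
                Ok(false)
            } else {
                // Anything else (throttling, access denied, transport issues): propagate
                Err(service_err.into())
            }
        }
    }
}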

Feature        | Rust Advantage                      | Impact on DevOps
Memory Safety  | Zero-cost abstractions / Ownership  | Lower crash rates in production.
Binary Size    | Modular crates                      | Faster Lambda cold starts.
Concurrency    | Fearless concurrency with Tokio     | High throughput on minimal hardware.

Frequently Asked Questions (FAQ)

Is the AWS SDK for Rust production-ready?

Yes. As of late 2023, the AWS SDK for Rust is General Availability (GA). It is used internally by AWS and by numerous high-scale organizations for production workloads.

How do I handle authentication for local development?

The SDK follows the standard AWS credential provider chain. It will automatically check for environment variables (AWS_ACCESS_KEY_ID), the ~/.aws/credentials file, and IAM roles if running on EC2 or EKS.

Can I use the SDK without Tokio?

While the SDK is built to be executor-agnostic in theory, currently, aws-config and the default HTTP clients are heavily integrated with Tokio and Hyper. Using a different runtime requires implementing custom HTTP connectors.

Conclusion

Setting up the AWS SDK for Rust is a strategic move for developers who prioritize performance and reliability. By utilizing the modular crate system, embracing the async-first architecture of Tokio, and understanding the SdkConfig lifecycle, you can build cloud applications that are both cost-effective and remarkably fast. Whether you are building microservices on EKS or high-performance Lambda functions, Rust offers the tooling necessary to master the AWS ecosystem.

Thank you for reading the DevopsRoles page!

Mastering AWS Account Deployment: Terraform & AWS Control Tower

For modern enterprises, AWS account deployment is no longer a manual task of clicking through the AWS Organizations console. As infrastructure scales, the need for consistent, compliant, and automated “vending machines” for AWS accounts becomes paramount. By combining the governance power of AWS Control Tower with the Infrastructure as Code (IaC) flexibility of Terraform, SREs and Cloud Architects can build a robust deployment pipeline that satisfies both developer velocity and security requirements.

The Foundations: Why Control Tower & Terraform?

In a decentralized cloud environment, AWS account deployment must address three critical pillars: Governance, Security, and Scalability. While AWS Control Tower provides the managed “Landing Zone” environment, Terraform provides the declarative state management required to manage thousands of resources across multiple accounts without configuration drift.

Advanced Concept: Control Tower uses “Guardrails” (Service Control Policies and Config Rules). When deploying accounts via Terraform, you aren’t just creating a container; you are attaching a policy-driven ecosystem that inherits the root organization’s security posture by default.

By leveraging the Terraform AWS Provider alongside Control Tower, you enable a “GitOps” workflow where an account request is simply a .tf file in a repository. This approach ensures that every account is born with the correct IAM roles, VPC configurations, and logging buckets pre-provisioned.

Deep Dive: Account Factory for Terraform (AFT)

The AWS Control Tower Account Factory for Terraform (AFT) is the official bridge between these two worlds. AFT sets up a separate orchestration engine that listens for Terraform changes and triggers the Control Tower account creation API.

The AFT Component Stack

  • AFT Management Account: A dedicated account within your Organization to host the AFT pipeline.
  • Request Metadata: A DynamoDB table or Git repo that stores account parameters (Email, OU, SSO user).
  • Customization Pipeline: A series of Step Functions and Lambda functions that apply “Global” and “Account-level” Terraform modules after the account is provisioned.

Step-by-Step: Deploying Your First Managed Account

To master AWS account deployment via AFT, you must understand the structure of an account request. Below is a production-grade example of a Terraform module call to request a new “Production” account.


module "sandbox_account" {
  source = "github.com/aws-ia/terraform-aws-control_tower_account_factory"

  control_tower_parameters = {
    AccountEmail              = "cloud-ops+prod-app-01@example.com"
    AccountName               = "production-app-01"
    ManagedOrganizationalUnit = "Production"
    SSOUserEmail              = "admin@example.com"
    SSOUserFirstName          = "Platform"
    SSOUserLastName           = "Team"
  }

  account_tags = {
    "Project"     = "Apollo"
    "Environment" = "Production"
    "CostCenter"  = "12345"
  }

  change_management_parameters = {
    change_requested_by = "DevOps Team"
    change_reason       = "New microservice deployment for Q4"
  }

  custom_fields = {
    vpc_cidr = "10.0.0.0/20"
  }
}

After applying this Terraform code, AFT triggers a workflow in the background. It calls the Control Tower ProvisionProduct API, waits for the account to be “Ready,” and then executes your post-provisioning Terraform modules to set up VPCs, IAM roles, and CloudWatch alarms.

Production-Ready Best Practices

Expert SREs know that AWS account deployment is only 20% of the battle; the other 80% is maintaining those accounts. Follow these standards:

  • Idempotency is King: Ensure your post-provisioning scripts can run multiple times without failure. Use Terraform’s lifecycle { prevent_destroy = true } on critical resources like S3 logging buckets (see the sketch after this list).
  • Service Quota Management: Newly deployed accounts start with default limits. Use the aws_servicequotas_service_quota resource to automatically request increases for EC2 instances or VPCs during the deployment phase.
  • Region Deny Policies: Use Control Tower guardrails to restrict deployments to approved regions. This reduces your attack surface and prevents “shadow IT” in unmonitored regions like me-south-1.
  • Centralized Logging: Always ensure the aws_s3_bucket_policy in your log-archive account allows the newly created account’s CloudTrail service principal to write logs immediately.
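
A minimal illustration of the prevent_destroy guard mentioned above (the bucket name is a placeholder):

resource "aws_s3_bucket" "log_archive" {
  bucket = "org-central-log-archive"

  lifecycle {
    prevent_destroy = true
  }
}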

Troubleshooting Common Deployment Failures

Even with automation, AWS account deployment can encounter hurdles. Here are the most common failure modes observed in enterprise environments:

Issue                | Root Cause                                                               | Resolution
Email Already in Use | AWS account emails must be globally unique across all of AWS.            | Use email sub-addressing (e.g., ops+acc1@company.com) if supported by your provider.
STS Timeout          | AFT cannot assume the AWSControlTowerExecution role in the new account.  | Check if a Service Control Policy (SCP) is blocking sts:AssumeRole in the target OU.
Customization Loop   | Terraform state mismatch in the AFT pipeline.                            | Manually clear the DynamoDB lock table for the specific account ID in the AFT Management account.

Frequently Asked Questions

Can I use Terraform to deploy accounts without Control Tower?

Yes, using the aws_organizations_account resource. However, you lose the managed guardrails and automated dashboarding provided by Control Tower. For expert-level setups, Control Tower + AFT is the industry standard for compliance.

How does AFT handle Terraform state?

AFT manages state files in an S3 bucket within the AFT Management account. It creates a unique state key for each account it provisions to ensure isolation and prevent blast-radius issues during updates.

How long does a typical AWS account deployment take via AFT?

Usually between 20 to 45 minutes. This includes the time AWS takes to provision the physical account container, apply Control Tower guardrails, and run your custom Terraform modules.

Conclusion

Mastering AWS account deployment requires a shift from manual administration to a software engineering mindset. By treating your accounts as immutable infrastructure and managing them through Terraform and AWS Control Tower, you gain the ability to scale your cloud footprint with confidence. Whether you are managing five accounts or five thousand, the combination of AFT and IaC provides the consistency and auditability required by modern regulatory frameworks. For further technical details, refer to the Official AFT Documentation. Thank you for reading the DevopsRoles page!

Master AWS Batch: Terraform Deployment on Amazon EKS

For years, AWS Batch and Amazon EKS (Elastic Kubernetes Service) operated in parallel universes. Batch excelled at queue management and compute provisioning for high-throughput workloads, while Kubernetes won the war for container orchestration. With the introduction of AWS Batch support for EKS, we can finally unify these paradigms.

This convergence allows you to leverage the robust job scheduling of AWS Batch while utilizing the namespace isolation, sidecars, and familiarity of your existing EKS clusters. However, orchestrating this integration via Infrastructure as Code (IaC) is non-trivial. It requires precise IAM trust relationships, Kubernetes RBAC (Role-Based Access Control) configuration, and specific compute environment parameters.

In this guide, we will bypass the GUI entirely. We will architect and deploy a production-ready AWS Batch Terraform EKS solution, focusing on the nuances that trip up even experienced engineers.

GigaCode Pro-Tip:
Unlike standard EC2 compute environments, AWS Batch on EKS does not manage the EC2 instances directly. Instead, it submits Pods to your cluster. This means your EKS Nodes (Node Groups) must already exist and scale appropriately (e.g., using Karpenter or Cluster Autoscaler) to handle the pending Pods injected by Batch.

Architecture: How Batch Talks to Kubernetes

Before writing Terraform, understand the control flow:

  1. Job Submission: You submit a job to an AWS Batch Job Queue.
  2. Translation: AWS Batch translates the job definition into a Kubernetes PodSpec.
  3. API Call: The AWS Batch Service Principal interacts with the EKS Control Plane (API Server) to create the Pod.
  4. Execution: The Pod is scheduled on an available node in your EKS cluster.

This flow implies two critical security boundaries we must bridge with Terraform: IAM (AWS permissions) and RBAC (Kubernetes permissions).

Step 1: IAM Roles for Batch Service

AWS Batch needs a specific service-linked role or a custom IAM role to communicate with the EKS cluster. For strict security, we define a custom role.

resource "aws_iam_role" "batch_eks_service_role" {
  name = "aws-batch-eks-service-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          Service = "batch.amazonaws.com"
        }
      }
    ]
  })
}

resource "aws_iam_role_policy_attachment" "batch_eks_policy" {
  role       = aws_iam_role.batch_eks_service_role.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSBatchServiceRole"
}

Step 2: Preparing the EKS Cluster (RBAC)

This is the most common failure point for AWS Batch Terraform EKS deployments. Even with the correct IAM role, Batch cannot schedule Pods if the Kubernetes API rejects the request.

We must map the IAM role created in Step 1 to a Kubernetes user, then grant that user permissions via a ClusterRole and ClusterRoleBinding. We can use the HashiCorp Kubernetes Provider for this.

2.1 Define the ClusterRole

resource "kubernetes_cluster_role" "aws_batch_cluster_role" {
  metadata {
    name = "aws-batch-cluster-role"
  }

  rule {
    api_groups = [""]
    resources  = ["namespaces"]
    verbs      = ["get", "list", "watch"]
  }

  rule {
    api_groups = [""]
    resources  = ["nodes"]
    verbs      = ["get", "list", "watch"]
  }

  rule {
    api_groups = [""]
    resources  = ["pods"]
    verbs      = ["get", "list", "watch", "create", "delete", "patch"]
  }

  rule {
    api_groups = ["rbac.authorization.k8s.io"]
    resources  = ["clusterroles", "clusterrolebindings"]
    verbs      = ["get", "list"]
  }
}

2.2 Bind the Role to the IAM User

You must ensure the IAM role ARN matches the user configured in your aws-auth ConfigMap (or EKS Access Entries if using the newer API). Here, we create the binding assuming the user is mapped to aws-batch.
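
For reference, the corresponding aws-auth fragment typically looks like the sketch below. It assumes the service role from Step 1 is the identity Batch uses to call the cluster; some setups map the Batch service-linked role (AWSServiceRoleForBatch) instead, so verify which principal actually appears in your API server audit logs.

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::123456789012:role/aws-batch-eks-service-role
      username: aws-batch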

resource "kubernetes_cluster_role_binding" "aws_batch_cluster_role_binding" {
  metadata {
    name = "aws-batch-cluster-role-binding"
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = kubernetes_cluster_role.aws_batch_cluster_role.metadata[0].name
  }

  subject {
    kind      = "User"
    name      = "aws-batch" # This must match the username in aws-auth
    api_group = "rbac.authorization.k8s.io"
  }
}

Step 3: The Terraform Compute Environment

Now we define the aws_batch_compute_environment resource. The key differentiator here is the compute_resources block type, which must be set to FARGATE_SPOT, FARGATE, EC2, or SPOT, and strictly linked to the EKS configuration.

resource "aws_batch_compute_environment" "eks_batch_ce" {
  compute_environment_name = "eks-batch-compute-env"
  type                     = "MANAGED"
  service_role             = aws_iam_role.batch_eks_service_role.arn

  eks_configuration {
    eks_cluster_arn      = data.aws_eks_cluster.main.arn
    kubernetes_namespace = "batch-jobs" # Ensure this namespace exists!
  }

  compute_resources {
    type               = "EC2" # Or FARGATE
    max_vcpus          = 256
    min_vcpus          = 0
    
    # Note: For EKS, security_group_ids and subnets might be ignored 
    # if you are relying on existing Node Groups, but are required for validation.
    security_group_ids = [aws_security_group.batch_sg.id]
    subnets            = module.vpc.private_subnets
    
    instance_types = ["c5.large", "m5.large"]
  }

  depends_on = [
    aws_iam_role_policy_attachment.batch_eks_policy,
    kubernetes_cluster_role_binding.aws_batch_cluster_role_binding
  ]
}

Technical Note:
When using EKS, the instance_types and subnets defined in the Batch Compute Environment are primarily used by Batch to calculate scaling requirements. However, the actual Pod placement depends on the Node Groups (or Karpenter provisioners) available in your EKS cluster.
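
Also note that the kubernetes_namespace referenced in eks_configuration is not created by Batch. A minimal sketch using the same Kubernetes provider as the RBAC resources above:

resource "kubernetes_namespace" "batch_jobs" {
  metadata {
    name = "batch-jobs"
  }
}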

Step 4: Job Queues and Definitions

Finally, we wire up the Job Queue and a basic Job Definition. In the EKS context, the Job Definition looks different—it wraps Kubernetes properties.

resource "aws_batch_job_queue" "eks_batch_jq" {
  name                 = "eks-batch-queue"
  state                = "ENABLED"
  priority             = 10
  compute_environments = [aws_batch_compute_environment.eks_batch_ce.arn]
}

resource "aws_batch_job_definition" "eks_job_def" {
  name        = "eks-job-def"
  type        = "container"
  
  # Crucial: EKS Job Definitions define node properties differently
  eks_properties {
    pod_properties {
      host_network = false
      containers {
        image = "public.ecr.aws/amazonlinux/amazonlinux:latest"
        command = ["/bin/sh", "-c", "echo 'Hello from EKS Batch'; sleep 30"]
        
        resources {
          limits = {
            cpu    = "1.0"
            memory = "1024Mi"
          }
          requests = {
            cpu    = "0.5"
            memory = "512Mi"
          }
        }
      }
    }
  }
}
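
Once applied, you can smoke-test the pipeline by submitting a job against the queue and definition created above:

aws batch submit-job \
  --job-name eks-smoke-test \
  --job-queue eks-batch-queue \
  --job-definition eks-job-def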

Best Practices for Production

  • Use Karpenter: Standard Cluster Autoscaler can be sluggish with Batch spikes. Karpenter observes the unschedulable Pods created by Batch and provisions nodes in seconds.
  • Namespace Isolation: Always isolate Batch workloads in a dedicated Kubernetes namespace (e.g., batch-jobs). Configure ResourceQuotas on this namespace to prevent Batch from starving your microservices.
  • Logging: Ensure your EKS nodes have Fluent Bit or similar log forwarders installed. Batch logs in the console are helpful, but aggregating them into CloudWatch or OpenSearch via the node’s daemonset is superior for debugging.

Frequently Asked Questions (FAQ)

Can I use Fargate with AWS Batch on EKS?

Yes. You can specify FARGATE or FARGATE_SPOT in your compute resources. However, you must ensure you have a Fargate Profile in your EKS cluster that matches the namespace and labels defined in your Batch Job Definition.

Why is my Job stuck in RUNNABLE status?

This is the classic “It’s DNS” of Batch. In EKS, RUNNABLE usually means Batch has successfully submitted the Pod to the API Server, but the Pod is Pending. Check your K8s events (kubectl get events -n batch-jobs). You likely lack sufficient capacity (Node Groups not scaling) or have a `Taint/Toleration` mismatch.

How does this compare to standard Batch on EC2?

Standard Batch manages the ASG (Auto Scaling Group) for you. Batch on EKS delegates the infrastructure management to you (or your EKS autoscaler). EKS offers better unification if you already run K8s, but standard Batch is simpler if you just need raw compute without K8s management overhead.

Conclusion

Integrating AWS Batch with Amazon EKS using Terraform provides a powerful, unified compute plane for high-performance computing. By explicitly defining your IAM trust boundaries and Kubernetes RBAC permissions, you eliminate the “black box” magic and gain full control over your batch processing lifecycle.

Start by deploying the IAM roles and RBAC bindings defined above. Once the permissions handshake is verified, layer on the Compute Environment and Job Queues. Your infrastructure is now ready to process petabytes at scale. Thank you for reading the DevopsRoles page!

AWS ECS & EKS Power Up with Remote MCP Servers

The Model Context Protocol (MCP) has rapidly become the standard for connecting AI models to your data and tools. However, most initial implementations are strictly local—relying on stdio to pipe data between a local process and your AI client (like Claude Desktop or Cursor). While this works for personal scripts, it doesn’t scale for teams.

To truly unlock the potential of AI agents in the enterprise, you need to decouple the “Brain” (the AI client) from the “Hands” (the tools). This means moving your MCP servers from localhost to robust cloud infrastructure.

This guide details the architectural shift required to run AWS ECS EKS MCP workloads. We will cover how to deploy remote MCP servers using Server-Sent Events (SSE), how to host them on Fargate and Kubernetes, and—most importantly—how to secure them so you aren’t exposing your internal database tools to the open internet.

The Architecture Shift: From Stdio to Remote SSE

In a local setup, the MCP client spawns the server process and communicates via standard input/output. This is secure by default because it’s isolated to your machine. To move this to AWS, we must switch the transport layer.

The MCP specification supports SSE (Server-Sent Events) for remote connections. This changes the communication flow:

  • Server-to-Client: Uses a persistent SSE connection to push events (like tool outputs or log messages).
  • Client-to-Server: Uses standard HTTP POST requests to send commands (like “call tool X”).

Pro-Tip: Unlike WebSockets, SSE is unidirectional (Server -> Client). This is why the protocol also requires an HTTP POST endpoint for the client to talk back. When deploying to AWS, your Load Balancer must support long-lived HTTP connections for the SSE channel.

Option A: Serverless Simplicity with AWS ECS (Fargate)

For most standalone MCP servers—such as a tool that queries a specific RDS database or interacts with an internal API—AWS ECS Fargate is the ideal host. It removes the overhead of managing EC2 instances while providing native integration with AWS VPCs for security.

1. The Container Image

You need an MCP server that listens on a port (usually via a web framework like FastAPI or Starlette) rather than just running a script. Here is a conceptual Dockerfile for a Python-based remote MCP server:

FROM python:3.11-slim

WORKDIR /app

# Install MCP SDK and a web server (e.g., Starlette/Uvicorn)
RUN pip install mcp[cli] uvicorn starlette

COPY . .

# Expose the port for SSE and HTTP POST
EXPOSE 8080

# Run the server using the SSE transport adapter
CMD ["uvicorn", "server:app", "--host", "0.0.0.0", "--port", "8080"]
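
The server module referenced by the CMD above (server:app) is not shown in the original; a minimal sketch using the FastMCP helper from the MCP Python SDK might look like the following. Treat the tool and app names as illustrative rather than a stable API.

# server.py
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("internal-tools")

@mcp.tool()
def ping() -> str:
    """Simple health-check tool the AI client can call."""
    return "pong"

# Expose an ASGI app (serving the SSE and message endpoints) for uvicorn
app = mcp.sse_app()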

2. The Task Definition & ALB

When defining your ECS Service, you must place an Application Load Balancer (ALB) in front of your tasks. The critical configuration here is the Idle Timeout.

  • Health Checks: Ensure your container exposes a simple /health endpoint, or the ALB will kill the task during long AI-generation cycles.
  • Timeout: Increase the ALB idle timeout to at least 300 seconds. AI models can take time to “think” or process large tool outputs, and you don’t want the SSE connection to drop prematurely (see the CLI example after this list).
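
A hedged example of raising the idle timeout via the CLI (the load balancer ARN is a placeholder):

aws elbv2 modify-load-balancer-attributes \
  --load-balancer-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/mcp-alb/abc1234567890 \
  --attributes Key=idle_timeout.timeout_seconds,Value=300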

Option B: Scalable Orchestration with Amazon EKS

If your organization already operates on Kubernetes, deploying AWS ECS EKS MCP servers as standard deployments allows for advanced traffic management. This is particularly useful if you are running a “Mesh” of MCP servers.

The Ingress Challenge

The biggest hurdle on EKS is the Ingress Controller. If you use NGINX Ingress, it defaults to buffering responses, which breaks SSE (the client waits for the buffer to fill before receiving the first event).

You must apply specific annotations to your Ingress resource to disable buffering for the SSE path:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mcp-server-ingress
  annotations:
    # Critical for SSE to work properly
    nginx.ingress.kubernetes.io/proxy-buffering: "off"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
spec:
  ingressClassName: nginx
  rules:
    - host: mcp.internal.yourcompany.com
      http:
        paths:
          - path: /sse
            pathType: Prefix
            backend:
              service:
                name: mcp-service
                port:
                  number: 80

Warning: Never expose an MCP server Service as LoadBalancer (public) without strict Security Groups or authentication. An exposed MCP server gives an AI direct execution access to whatever tools you’ve enabled (e.g., “Drop Database”).

Security: The “MCP Proxy” & Auth Patterns

This is the section that separates a “toy” project from a production deployment. How do you let an AI client (running on a developer’s laptop) access a private ECS/EKS service securely?

1. The VPN / Tailscale Approach

The simplest method is network isolation. Keep the MCP server in a private subnet. Developers must be on the corporate VPN or use a mesh overlay like Tailscale to reach the `http://internal-mcp:8080/sse` endpoint. This requires zero code changes to the MCP server.

2. The AWS SigV4 / Auth Proxy Approach

For a more cloud-native approach, AWS recently introduced the concept of an MCP Proxy. This involves:

  1. Placing your MCP Server behind an ALB with AWS IAM Authentication or Cognito.
  2. Running a small local proxy on the client machine (the developer’s laptop).
  3. The developer configures their AI client to talk to localhost:proxy-port.
  4. The local proxy signs requests with the developer’s AWS credentials (SigV4) and forwards them to the remote ECS/EKS endpoint.

This ensures that only users with the correct IAM Policy (e.g., AllowInvokeMcpServer) can access your tools.

Frequently Asked Questions (FAQ)

Can I use the official Amazon EKS MCP Server remotely?

Yes, but it’s important to distinguish between hosting a server and using a tool. AWS provides an open-source Amazon EKS MCP Server. This is a tool you run (locally or remotely) that gives your AI the ability to run kubectl commands and inspect your cluster. You can host this inside your cluster to give an AI agent “SRE superpowers” over that specific environment.

Why does my remote MCP connection drop after 60 seconds?

This is almost always a Load Balancer or Reverse Proxy timeout. SSE requires a persistent connection. Check your AWS ALB “Idle Timeout” settings or your Nginx proxy_read_timeout. Ensure they are set to a value higher than your longest expected idle time (e.g., 5-10 minutes).

Should I use ECS or Lambda for MCP?

While Lambda is cheaper for sporadic use, MCP is a stateful protocol (via SSE). Running SSE on Lambda requires using Function URLs with response streaming, which has a 15-minute hard limit and can be tricky to debug. ECS Fargate is generally preferred for the stability of the long-lived connection required by the protocol.

Conclusion

Moving your Model Context Protocol infrastructure from local scripts to AWS ECS and EKS is a pivotal step in maturing your AI operations. By leveraging Fargate for simplicity or EKS for mesh-scale orchestration, you provide your AI agents with a stable, high-performance environment to operate in.

Remember, “Powering Up” isn’t just about connectivity; it’s about security. Whether you choose a VPN-based approach or the robust AWS SigV4 proxy pattern, ensuring your AI tools are authenticated is non-negotiable in a production environment.

Next Step: Audit your current local MCP tools. Identify one “heavy” tool (like a database inspector or a large-context retriever) and containerize it using the Dockerfile pattern above to deploy your first remote MCP service on Fargate. Thank you for reading the DevopsRoles page!

Agentic AI is Revolutionizing AWS Security Incident Response

For years, the gold standard in cloud security has been defined by deterministic automation. We detect an anomaly in Amazon GuardDuty, trigger a CloudWatch Event (now EventBridge), and fire a Lambda function to execute a hard-coded remediation script. While effective for known threats, this approach is brittle. It lacks context, reasoning, and adaptability.

Enter Agentic AI. By integrating Large Language Models (LLMs) via services like Amazon Bedrock into your security stack, we are moving from static “Runbooks” to dynamic “Reasoning Engines.” AWS Security Incident Response is no longer just about automation; it is about autonomy. This guide explores how to architect Agentic workflows that can analyze forensics, reason through containment strategies, and execute remediation with human-level nuance at machine speed.

The Evolution: From SOAR to Agentic Security

Traditional Security Orchestration, Automation, and Response (SOAR) platforms rely on linear logic: If X, then Y. This works for blocking an IP address, but it fails when the threat requires investigation. For example, if an IAM role is exfiltrating data, a standard script might revoke keys immediately—potentially breaking production applications—whereas a human analyst would first check if the activity aligns with a scheduled maintenance window.

Agentic AI introduces the ReAct (Reasoning + Acting) pattern to AWS Security Incident Response. Instead of blindly firing scripts, the AI Agent:

  1. Observes the finding (e.g., “S3 Bucket Public Access Enabled”).
  2. Reasons about the context (Queries CloudTrail: “Who did this? Was it authorized?”).
  3. Acts using defined tools (Calls boto3 functions to correct the policy).
  4. Evaluates the result (Verifies the bucket is private).

GigaCode Pro-Tip:
Don’t confuse “Generative AI” with “Agentic AI.” Generative AI writes a report about the hack. Agentic AI logs into the console (via API) and fixes the hack. The differentiator is the Action Group.

Architecture: Building a Bedrock Security Agent

To modernize your AWS Security Incident Response, we leverage Amazon Bedrock Agents. This managed service orchestrates the interaction between the LLM (reasoning), the knowledge base (RAG for company policies), and the action groups (Lambda functions).

1. The Foundation: Knowledge Bases

Your agent needs context. Using Retrieval-Augmented Generation (RAG), you can index your internal Wiki, incident response playbooks, and architecture diagrams into an Amazon OpenSearch Serverless vector store connected to Bedrock. When a finding occurs, the agent first queries this base: “What is the protocol for a compromised EC2 instance in the Production VPC?”

2. Action Groups (The Hands)

Action groups map OpenAPI schemas to AWS Lambda functions. This allows the LLM to “call” Python code. Below is an example of a remediation tool that an agent might decide to use during an active incident.

Code Implementation: The Isolation Tool

This Lambda function serves as a “tool” that the Bedrock Agent can invoke when it decides an instance must be quarantined.

import boto3
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

ec2 = boto3.client('ec2')

def lambda_handler(event, context):
    """
    Tool for Bedrock Agent: Isolates an EC2 instance by attaching a forensic SG.
    Input: {'instance_id': 'i-xxxx', 'vpc_id': 'vpc-xxxx'}
    """
    agent_params = event.get('parameters', [])
    instance_id = next((p['value'] for p in agent_params if p['name'] == 'instance_id'), None)
    
    if not instance_id:
        return {"response": "Error: Instance ID is required for isolation."}

    try:
        # Logic to find or create a 'Forensic-No-Ingress' Security Group
        logger.info(f"Agent requested isolation for {instance_id}")
        
        # 1. Get current SG for rollback context (Forensics)
        current_attr = ec2.describe_instance_attribute(
            InstanceId=instance_id, Attribute='groupSet'
        )
        
        # 2. Attach Isolation SG (Assuming sg-isolation-id is pre-provisioned)
        isolation_sg = "sg-0123456789abcdef0" 
        
        ec2.modify_instance_attribute(
            InstanceId=instance_id,
            Groups=[isolation_sg]
        )
        
        return {
            "response": f"SUCCESS: Instance {instance_id} has been isolated. Previous SGs logged for analysis."
        }
        
    except Exception as e:
        logger.error(f"Failed to isolate: {str(e)}")
        return {"response": f"FAILED: Could not isolate instance. Reason: {str(e)}"}

Implementing the Workflow

Deploying this requires an Event-Driven Architecture. Here is the lifecycle of an Agentic AWS Security Incident Response:

  • Detection: GuardDuty detects UnauthorizedAccess:EC2/TorIPCaller.
  • Ingestion: EventBridge captures the finding and pushes it to an SQS queue (for throttling/buffering).
  • Invocation: A Lambda “Controller” picks up the finding and invokes the Bedrock Agent Alias using the invoke_agent API (a minimal sketch follows this list).
  • Reasoning Loop:
    • The Agent receives the finding details.
    • It checks the “Knowledge Base” and sees that Tor connections are strictly prohibited.
    • It decides to call the GetInstanceDetails tool to check tags.
    • It sees the tag Environment: Production.
    • It decides to call the IsolateInstance tool (code above).
  • Resolution: The Agent updates AWS Security Hub with the workflow status, marks the finding as RESOLVED, and emails the SOC team a summary of its actions.
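
A rough sketch of the controller’s invocation call is shown below. The agent IDs are placeholders, and the response is an event stream that must be iterated to assemble the agent’s reply.

import json
import uuid
import boto3

bedrock_agent = boto3.client("bedrock-agent-runtime")

def invoke_security_agent(finding: dict) -> str:
    """Send a GuardDuty finding to the Bedrock Agent and collect its streamed reply."""
    response = bedrock_agent.invoke_agent(
        agentId="AGENT_ID",
        agentAliasId="AGENT_ALIAS_ID",
        sessionId=str(uuid.uuid4()),
        inputText=f"Triage this GuardDuty finding: {json.dumps(finding)}",
    )
    # The completion arrives as chunked bytes on an event stream
    return "".join(
        event["chunk"]["bytes"].decode("utf-8")
        for event in response["completion"]
        if "chunk" in event
    )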

Human-in-the-Loop (HITL) and Guardrails

For expert practitioners, the fear of “hallucinating” agents deleting production databases is real. To mitigate this in AWS Security Incident Response, we implement Guardrails for Amazon Bedrock.

Guardrails allow you to define denied topics and content filters. Furthermore, for high-impact actions (like terminating instances), you should design the Agent to request approval rather than execute immediately. The Agent can send an SNS notification with a standard “Approve/Deny” link. The Agent pauses execution until the approval signal is received via a callback webhook.

Pro-Tip: Use CloudTrail Lake to audit your Agents. Every API call made by the Agent (via the assumed IAM role) is logged. Create a QuickSight dashboard to visualize “Agent Remediation Success Rates” vs. “Human Intervention Required.”

Frequently Asked Questions (FAQ)

How does Agentic AI differ from AWS Lambda automation?

Lambda automation is deterministic (scripted steps). Agentic AI is probabilistic and reasoning-based. It can handle ambiguity, such as deciding not to act if a threat looks like a false positive based on cross-referencing logs, whereas a script would execute blindly.

Is it safe to let AI modify security groups automatically?

It is safe if scoped correctly using IAM Roles. The Agent’s role should adhere to the Principle of Least Privilege. Start with “Read-Only” agents that only perform forensics and suggest remediation, then graduate to “Active” agents for low-risk environments.

Which AWS services are required for this architecture?

At a minimum: Amazon Bedrock (Agents & Knowledge Bases), AWS Lambda (Action Groups), Amazon EventBridge (Triggers), Amazon GuardDuty (Detection), and AWS Security Hub (Centralized Management).

Conclusion

The landscape of AWS Security Incident Response is shifting. By adopting Agentic AI, organizations can reduce Mean Time to Respond (MTTR) from hours to seconds. However, this is not a “set and forget” solution. It requires rigorous engineering of prompts, action schemas, and IAM boundaries.

Start small: Build an agent that purely performs automated forensics—gathering logs, querying configurations, and summarizing the blast radius—before letting it touch your infrastructure. The future of cloud security is autonomous, and the architects who master these agents today will define the standards of tomorrow.

For deeper reading on configuring Bedrock Agents, consult the official AWS Bedrock User Guide or review the AWS Security Incident Response Guide.

Swift AWS Lambda Runtime: Now in AWSLabs!

For years, the Swift-on-server community has relied on the excellent community-driven swift-server/swift-aws-lambda-runtime. Today, that hard work is officially recognized and accelerated: AWS has released an official Swift AWS Lambda Runtime, now available in AWSLabs. For expert AWS engineers, this move signals a significant new option for building high-performance, type-safe, and AOT-compiled serverless functions.

This isn’t just a “me-too” runtime. This new library is built from the ground up on SwiftNIO, providing a high-performance, non-blocking I/O foundation. In this guide, we’ll bypass the basics and dive straight into what experts need to know: how to build, deploy, and optimize Swift on Lambda.

From Community to AWSLabs: Why This Matters

The original community runtime, now stewarded by the Swift Server Work Group (SSWG), paved the way. The new AWSLabs/swift-aws-lambda-runtime builds on this legacy with a few key implications for expert users:

  • Official AWS Backing: While still in AWSLabs (experimental), this signals a clear path toward official support, deeper integration with AWS tools, and alignment with the official AWS SDK for Swift (preview).
  • Performance-First Design: Re-architecting on SwiftNIO ensures the runtime itself is a minimal, non-blocking layer, allowing your Swift code to execute with near-native performance.
  • Modern Swift Concurrency: The runtime is designed to integrate seamlessly with Swift’s modern structured concurrency (async/await), making asynchronous code clean and maintainable.

Architectural Note: The Runtime Interface Client (RIC)

Under the hood, this is a Custom Lambda Runtime. The swift-aws-lambda-runtime library is essentially a highly-optimized Runtime Interface Client (RIC). It implements the loop that polls the Lambda Runtime API (/2018-06-01/runtime/invocation/next), retrieves an event, passes it to your Swift handler, and POSTs the response back. Your executable, named bootstrap, is the entry point Lambda invokes.

Getting Started: Your First Swift AWS Lambda Runtime Function

We’ll skip the “Hello, World” and build a function that decodes a real event. The most robust way to build and deploy is using the AWS Serverless Application Model (SAM) with a container image, which gives you a reproducible build environment.

Prerequisites

  • Swift 5.7+
  • Docker
  • AWS SAM CLI
  • AWS CLI

1. Initialize Your Swift Package

Create a new executable package.

mkdir MySwiftLambda && cd MySwiftLambda
swift package init --type executable

2. Configure Package.swift Dependencies

Edit your Package.swift to include the new runtime and the event types library.

// swift-tools-version:5.7
import PackageDescription

let package = Package(
    name: "MySwiftLambda",
    platforms: [
        .macOS(.v12) // Specify platforms for development
    ],
    products: [
        .executable(name: "MySwiftLambda", targets: ["MySwiftLambda"])
    ],
    dependencies: [
        .package(url: "https://github.com/awslabs/swift-aws-lambda-runtime.git", from: "1.0.0-alpha"),
        .package(url: "https://github.com/swift-server/swift-aws-lambda-events.git", from: "0.2.0")
    ],
    targets: [
        .executableTarget(
            name: "MySwiftLambda",
            dependencies: [
                .product(name: "AWSLambdaRuntime", package: "swift-aws-lambda-runtime"),
                .product(name: "AWSLambdaEvents", package: "swift-aws-lambda-events")
            ],
            path: "Sources"
        )
    ]
)

3. Write Your Lambda Handler

Replace the generated Sources/main.swift with a new file (e.g., Sources/MyLambdaHandler.swift) containing the code below; the @main attribute cannot be used in a file named main.swift. We’ll use modern async/await syntax to handle an API Gateway v2 HTTP request (HTTP API).

import AWSLambdaRuntime
import AWSLambdaEvents

@main
struct MyLambdaHandler: SimpleLambdaHandler {
    
    // This is the function that will be called for every invocation.
    // It's async, so we can perform non-blocking work.
    func handle(_ request: APIGateway.V2.Request, context: LambdaContext) async throws -> APIGateway.V2.Response {
        
        // Log to CloudWatch
        context.logger.info("Received request: \(request.rawPath)")
        
        // Example: Accessing path parameters
        let name = request.pathParameters?["name"] ?? "World"

        let responseBody = "Hello, \(name)!"

        // Return a valid APIGateway.V2.Response
        return APIGateway.V2.Response(
            statusCode: .ok,
            headers: ["Content-Type": "text/plain"],
            body: responseBody
        )
    }
}

Deployment Strategy: Container Image with SAM

While you can use the provided.al2 runtime by compiling and zipping a bootstrap executable, the container image flow is cleaner and more repeatable for Swift projects.

1. Create the Dockerfile

Create a Dockerfile in your root directory. We’ll use a multi-stage build to keep the final image minimal.

# --- 1. Build Stage ---
FROM swift:5.7-amazonlinux2 AS build

# Set up environment
RUN yum -y install libuuid-devel libicu-devel libedit-devel libxml2-devel sqlite-devel \
    libstdc++-static libatomic-static \
    && yum -y clean all

WORKDIR /build

# Copy and resolve dependencies
COPY Package.swift .
COPY Package.resolved .
RUN swift package resolve

# Copy full source and build
COPY . .
RUN swift build -c release --static-swift-stdlib

# --- 2. Final Lambda Runtime Stage ---
FROM amazon/aws-lambda-provided:al2

# Copy the built executable from the 'build' stage
# Lambda expects the executable to be named 'bootstrap'
COPY --from=build /build/.build/release/MySwiftLambda /var/runtime/bootstrap

# Set the Lambda entrypoint
ENTRYPOINT [ "/var/runtime/bootstrap" ]

2. Create the SAM Template

Create a template.yaml file.

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  Sample SAM template for a Swift AWS Lambda Runtime function.

Globals:
  Function:
    Timeout: 10
    MemorySize: 256

Resources:
  MySwiftFunction:
    Type: AWS::Serverless::Function
    Properties:
      PackageType: Image
      Architectures:
        - x86_64 # or arm64 if you build on an M1/M2 Mac
      Events:
        HttpApiEvent:
          Type: HttpApi
          Properties:
            Path: /hello/{name}
            Method: GET
    Metadata:
      DockerTag: v1
      DockerContext: .
      Dockerfile: Dockerfile

Outputs:
  ApiEndpoint:
    Description: "API Gateway endpoint URL"
    Value: !Sub "https://${ServerlessHttpApi}.execute-api.${AWS::Region}.amazonaws.com/hello/GigaCode"

3. Build and Deploy

Now, run the standard SAM build and deploy process.

# Build the Docker image, guided by SAM
sam build

# Deploy the function to AWS
sam deploy --guided

After deployment, SAM will output the API endpoint. You can curl it (e.g., curl https://[api-id].execute-api.us-east-1.amazonaws.com/hello/SwiftDev) and get your response!

Performance & Cold Start Considerations

This is what you’re here for. How does it perform?

  • Cold Starts: Swift is an Ahead-of-Time (AOT) compiled language. Unlike Python or Node.js, there is no JIT or interpreter startup time. Its cold start performance profile is very similar to Go and Rust. You can expect cold starts in the sub-100ms range for simple functions, depending on VPC configuration.
  • Warm Invokes: Once warm, Swift is exceptionally fast. Because it’s compiled to native machine code, warm invocation times are typically single-digit milliseconds (1-5ms).
  • Memory Usage: Swift’s memory footprint is lean. With static linking and optimized release builds, simple functions can run comfortably in 128MB or 256MB of RAM.

Performance Insight: Static Linking

The --static-swift-stdlib flag in our Dockerfile build command is critical. It bundles the Swift standard library into your executable, creating a self-contained binary. This slightly increases the package size but significantly improves cold start time, as the Lambda environment doesn’t need to find and load shared .so libraries. It’s the recommended approach for production Lambda builds.

Frequently Asked Questions (FAQ)

How does the AWSLabs runtime differ from the swift-server community one?

The core difference is the foundation. The AWSLabs version is built on SwiftNIO 2 for its core I/O, aligning it with other modern Swift server frameworks. The community version (swift-server/swift-aws-lambda-runtime) is also excellent and stable but is built on a different internal stack. The AWSLabs version will likely see faster integration with new AWS services and SDKs.

What is the cold start performance of Swift on Lambda?

Excellent. As an AOT-compiled language, it avoids interpreter and JIT overhead. It is in the same class as Go and Rust, with typical P99 cold starts well under 200ms and P50 often under 100ms for simple functions.

Can I use async/await with the Swift AWS Lambda Runtime?

Yes, absolutely. It is the recommended way to use the runtime. The library provides both a LambdaHandler (closure-based) and a SimpleLambdaHandler (async/await-based) protocol. You should use the async/await patterns, as shown in the example, for clean, non-blocking asynchronous code.

How do I handle JSON serialization/deserialization?

Swift’s built-in Codable protocol is the standard. The swift-aws-lambda-events library provides all the Codable structs for common AWS events (API Gateway, SQS, S3, etc.). For your own custom JSON payloads, simply define your struct or class as Codable.
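
For example, a small custom payload might be modeled and decoded like this (the field names are illustrative):

import Foundation

struct OrderEvent: Codable {
    let orderId: String
    let amountCents: Int
    let currency: String
}

do {
    let json = #"{"orderId": "A-1001", "amountCents": 4999, "currency": "USD"}"#
    let order = try JSONDecoder().decode(OrderEvent.self, from: Data(json.utf8))
    print("Decoded order \(order.orderId) for \(order.amountCents) \(order.currency)")
} catch {
    print("Decoding failed: \(error)")
}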

Conclusion

The arrival of an official Swift AWS Lambda Runtime in AWSLabs is a game-changing moment for the Swift-on-server ecosystem. For expert AWS users, it presents a compelling, high-performance, and type-safe alternative to Go, Rust, or TypeScript (Node.js).

By combining AOT compilation, a minimal memory footprint, and the power of SwiftNIO and structured concurrency, this new runtime is more than an experiment—it’s a production-ready path for building your most demanding serverless functions. Thank you for reading the DevopsRoles page!

What Really Caused the Massive AWS Outage?

If you’re an SRE, DevOps engineer, or cloud architect, you don’t just feel an AWS outage; you live it. Pagers scream, dashboards bleed red, and customer trust evaporates. The most recent massive outage, which brought down services from streaming platforms to financial systems, was not a simple hardware failure. It was a complex, cascading event born from the very dependencies that make the cloud powerful.

This isn’t another “the cloud is down” post. This is a technical root cause analysis (RCA) for expert practitioners. We’ll bypass the basics and dissect the specific automation and architectural flaws—focusing on the DynamoDB DNS failure in us-east-1—that triggered a system-wide collapse, and what we, as engineers, must learn from it.

Executive Summary: The TL;DR for SREs

The root cause of the October 2025 AWS outage was a DNS resolution failure for the DynamoDB API endpoint in the us-east-1 region. This was not a typical DNS issue, but a failure within AWS’s internal, automated DNS management system. This failure effectively made DynamoDB—a foundational “Layer 1” service—disappear from the network, causing a catastrophic cascading failure for all dependent services, including IAM, EC2, Lambda, and the AWS Management Console itself.

The key problem was a latent bug in an automation “Enactor” system responsible for updating DNS records. This bug, combined with a specific sequence of events (often called a “race condition”), resulted in an empty DNS record being propagated for dynamodb.us-east-1.amazonaws.com. Because countless other AWS services (and customer applications) are hard-wired with dependencies on DynamoDB in that specific region, the blast radius was immediate and global.

A Pattern of Fragility: The Legacy of US-EAST-1

To understand this outage, we must first understand us-east-1 (N. Virginia). It is AWS’s oldest, largest, and most critical region. It also hosts the global endpoints for foundational services like IAM. This unique status as “Region Zero” has made it the epicenter of AWS’s most significant historical failures.

Brief Post-Mortems of Past Failures

2017: The S3 “Typo” Outage

On February 28, 2017, a well-intentioned engineer executing a playbook to debug the S3 billing system made a typo in a command. Instead of removing a small subset of servers, the command triggered the removal of a massive number of servers supporting the S3 index and placement subsystems. Because these core subsystems had not been fully restarted in years, the recovery time was catastrophically slow, taking the internet’s “hard drive” offline for hours.

2020: The Kinesis “Thread Limit” Outage

On November 25, 2020, a “relatively small addition of capacity” to the Kinesis front-end fleet in us-east-1 triggered a long-latent bug. The fleet’s servers used an all-to-all communication mesh, with each server maintaining one OS thread per peer. The capacity addition pushed the servers over the maximum-allowed OS thread limit, causing the entire fleet to fail. This Kinesis failure cascaded to Cognito, CloudWatch, Lambda, and others, as they all feed data into Kinesis.

The pattern is clear: us-east-1 is a complex, aging system where small, routine actions can trigger non-linear, catastrophic failures due to undiscovered bugs and deep-rooted service dependencies.

Anatomy of the Latest AWS Outage: The DynamoDB DNS Failure

This latest AWS outage follows the classic pattern but with a new culprit: the internal DNS automation for DynamoDB.

The Initial Trigger: A Flaw in DNS Automation

According to AWS’s own (and admirably transparent) post-event summary, the failure originated in the automated system that manages DNS records for DynamoDB’s regional endpoint. This system, which we can call the “DNS Enactor,” is responsible for adding and removing IP addresses from the dynamodb.us-east-1.amazonaws.com record to manage load and health.

A latent defect in this automation, triggered by a specific, rare sequence of events, caused the Enactor to incorrectly remove all IP addresses associated with the DNS record. For any system attempting to resolve this name, the answer was effectively “not found,” or an empty record. This is the digital equivalent of a building’s address being erased from every map in the world simultaneously.
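To make the failure mode concrete, here is roughly what an empty record looks like from a client’s point of view. This is a minimal, illustrative Python probe (not part of any AWS tooling); during the event, every SDK call against the endpoint effectively failed at this step.

import socket

ENDPOINT = "dynamodb.us-east-1.amazonaws.com"

def resolve(endpoint: str) -> list:
    """Return the IPs behind an endpoint, or an empty list if resolution fails."""
    try:
        results = socket.getaddrinfo(endpoint, 443, proto=socket.IPPROTO_TCP)
        # Each result is (family, type, proto, canonname, sockaddr); sockaddr[0] is the IP
        return sorted({r[4][0] for r in results})
    except socket.gaierror:
        # The "empty record" case: the name cannot be resolved to any usable address
        return []

if __name__ == "__main__":
    ips = resolve(ENDPOINT)
    print(f"{ENDPOINT} -> {ips if ips else 'NO ANSWER (resolution failed)'}")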

The “Blast Radius” Explained: A Cascade of Dependencies

Why was this so catastrophic? Because AWS practices “dogfooding”—their own services run on their own infrastructure. This is usually a strength, but here it’s a critical vulnerability.

  • IAM (Identity and Access Management): Even for global operations, the IAM service has a hard dependency on DynamoDB in us-east-1 for certain functions. When DynamoDB vanished, authentication and authorization requests began to fail.
  • EC2 Control Plane: Launching new instances or managing existing ones often requires metadata lookup and state management, which, you guessed it, leverages DynamoDB.
  • Lambda & API Gateway: These services heavily rely on DynamoDB for backend state, throttling rules, and metadata.
  • AWS Management Console: The console itself is an application that makes API calls to services like IAM (to see if you’re logged in) and EC2 (to list your instances). It was unusable because its own backend dependencies were failing.

This is a classic cascading failure. The failure of one “Layer 1” foundational service (DynamoDB) created a tidal wave that took down “Layer 2” and “Layer 3” services, which in turn took down customer applications.

Advanced Concept: The “Swiss Cheese Model” of Failure
This outage wasn’t caused by a single bug. It was a “Swiss Cheese” event, where multiple, independent layers of defense all failed in perfect alignment.

  1. The Latent Bug: A flaw in the DNS Enactor automation (a hole in one slice).
  2. The Trigger: A specific, rare sequence of operations (a second hole).
  3. The Lack of Self-Repair: The system’s monitoring failed to detect or correct the “empty state” (a third hole).
  4. The Architectural Dependency: The global reliance on us-east-1’s DynamoDB endpoint (a fourth, massive hole).

When all four holes lined up, the disaster occurred.

Key Architectural Takeaways for Expert AWS Users

As engineers, we cannot prevent an AWS outage. We can only architect our systems to be resilient to them. Here are the key lessons.

Lesson 1: US-EAST-1 is a Single Point of Failure (Even for Global Services)

Treat us-east-1 as toxic. While it’s necessary for some global operations (like creating IAM roles or managing Route 53 zones), your runtime application traffic should have no hard dependencies on it. Avoid using the us-east-1 region for your primary workloads if you can. If you must use it, you must have an active-active or active-passive failover plan.

Lesson 2: Implement Cross-Region DNS Failover (and Test It)

The single best defense against this specific outage is a multi-region architecture with automated DNS failover using Amazon Route 53. Do not rely on a single regional endpoint. Use Route 53’s health checks to monitor your application’s endpoint in each region. If one region fails (like us-east-1), Route 53 can automatically stop routing traffic to it.

Here is a basic, production-ready example of a “Failover” routing policy in a Terraform configuration. This setup routes primary traffic to us-east-1 but automatically fails over to us-west-2 if the primary health check fails.

# 1. Define the health check for the primary (us-east-1) endpoint
resource "aws_route53_health_check" "primary_endpoint_health" {
  fqdn              = "myapp.us-east-1.example.com"
  port              = 443
  type              = "HTTPS"
  resource_path     = "/health"
  failure_threshold = 3
  request_interval  = 30

  tags = {
    Name = "primary-app-health-check"
  }
}

# 2. Define the "A" record for our main application
resource "aws_route53_record" "primary" {
  zone_id = aws_route53_zone.primary.zone_id
  name    = "app.example.com"
  type    = "A"
  
  # This record is for the PRIMARY (us-east-1) endpoint
  set_identifier = "primary-us-east-1"
  
  # Use Failover routing
  failover_routing_policy {
    type = "PRIMARY"
  }

  # Link to the health check
  health_check_id = aws_route53_health_check.primary_endpoint_health.id
  
  # Alias to the us-east-1 Load Balancer
  alias {
    name                   = aws_lb.primary.dns_name
    zone_id                = aws_lb.primary.zone_id
    evaluate_target_health = true
  }
}

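# 3. Define the "A" record for the SECONDARY (us-west-2) endpoint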
resource "aws_route53_record" "secondary" {
  zone_id = aws_route53_zone.primary.zone_id
  name    = "app.example.com"
  type    = "A"

  # This record is for the SECONDARY (us-west-2) endpoint
  set_identifier = "secondary-us-west-2"
  
  # Use Failover routing
  failover_routing_policy {
    type = "SECONDARY"
  }
  
  # Alias to the us-west-2 Load Balancer
  # Note: No health check is needed for a SECONDARY record.
  # If the PRIMARY fails, traffic routes here.
  alias {
    name                   = aws_lb.secondary.dns_name
    zone_id                = aws_lb.secondary.zone_id
    evaluate_target_health = false
  }
}

Lesson 3: The Myth of “Five Nines” and Preparing for Correlated Failures

The “five nines” (99.999% uptime) SLA applies to a *single service*, not the complex, interconnected system you’ve built. As these outages demonstrate, failures are often *correlated*. A Kinesis outage takes down Cognito. A DynamoDB outage takes down IAM. Your resilience planning must assume that multiple, seemingly independent services will fail at the same time.

Frequently Asked Questions (FAQ)

What was the root cause of the most recent massive AWS outage?

The technical root cause was a failure in an internal, automated DNS management system for the DynamoDB service in the us-east-1 region. A bug caused this system to publish an empty DNS record, making the DynamoDB API endpoint unreachable and triggering a cascading failure across dependent services.

Why does US-EAST-1 cause so many AWS outages?

us-east-1 (N. Virginia) is AWS’s oldest, largest, and most complex region. It also uniquely hosts the control planes and endpoints for some of AWS’s global services, like IAM. Its age and central importance create a unique “blast radius,” where small failures can have an outsized, and sometimes global, impact.

What AWS services were affected by the DynamoDB outage?

The list is extensive, but key affected services included IAM, EC2 (control plane), Lambda, API Gateway, AWS Management Console, CloudWatch, and Cognito, among many others. Any service or customer application that relied on DynamoDB in us-east-1 for its operation was impacted.

How can I protect my application from an AWS outage?

You cannot prevent a provider-level outage, but you can build resilience. The primary strategy is a multi-region architecture. At a minimum, deploy your application to at least two different AWS regions (e.g., us-east-1 and us-west-2) and use Amazon Route 53 with health checks to automate DNS failover between them. Also, architect for graceful degradation—your app should still function (perhaps in a read-only mode) even if a backend dependency fails.
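As a sketch of that last point, the pattern below prefers live data but falls back to a stale, read-only default when DynamoDB is unreachable. The table name, key schema, and in-memory cache are hypothetical; real implementations usually back the fallback with ElastiCache or a local snapshot.

import boto3
from botocore.exceptions import BotoCoreError, ClientError

# Hypothetical table and fallback data, for illustration only
TABLE_NAME = "user-preferences"
_fallback = {"default": {"theme": "light", "read_only": True}}

dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
table = dynamodb.Table(TABLE_NAME)

def get_preferences(user_id):
    """Prefer live data, but degrade to read-only defaults if DynamoDB is unreachable."""
    try:
        item = table.get_item(Key={"user_id": user_id}).get("Item")
        if item:
            _fallback[user_id] = item  # keep a warm copy for the next failure
            return item
    except (ClientError, BotoCoreError):
        pass  # DynamoDB (or its DNS) is unavailable; fall through to degraded mode
    return _fallback.get(user_id, _fallback["default"])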

Conclusion: Building Resiliently in an Unreliable World

The recent massive AWS outage is not an indictment of cloud computing; it’s a doctorate-level lesson in distributed systems failure. It reinforces that “the cloud” is not a magical utility—it is a complex, interdependent machine built by humans, with automation layered on top of automation.

As expert practitioners, we must internalize the lessons from the S3 typo, the Kinesis thread limit, and now the DynamoDB DNS failure. We must abandon our implicit trust in any single region, especially us-east-1. The ultimate responsibility for resilience does not lie with AWS; it lies with us, the architects, to design systems that anticipate, and survive, the inevitable failure.

For further reading and official RCAs, we highly recommend bookmarking the AWS Post-Event Summaries page. It is an invaluable resource for understanding how these complex systems fail. Thank you for reading the DevopsRoles page!

Ultimate Guide to AWS SES: Deep Dive into Simple Email Service

For expert AWS practitioners, email is often treated as a critical, high-risk piece of infrastructure. It’s not just about sending notifications; it’s about deliverability, reputation, authentication, and large-scale event handling. While many services offer a simple “send” API, AWS SES (Simple Email Service) provides a powerful, unmanaged, and highly scalable *email platform* that integrates directly into your cloud architecture. If you’re managing applications on AWS, using SES is a high-leverage decision for cost, integration, and control.

This deep dive assumes you’re comfortable with AWS, IAM, and DNS. We’ll skip the basics and jump straight into the architecture, production-level configurations, and advanced features you need to master AWS SES.

AWS SES Core Architecture: Beyond the Basics

At its core, SES is a decoupled sending and receiving engine. As an expert, the two most important architectural decisions you’ll make upfront concern IP addressing and your sending limits.

Shared IP Pools vs. Dedicated IPs

By default, your account sends from a massive pool of IP addresses shared with other AWS SES customers.

  • Shared IPs (Default):
    • Pros: No extra cost. AWS actively monitors and manages the pool’s reputation, removing bad actors. For most workloads with good sending habits, this is a “warmed-up” and reliable option.
    • Cons: You are susceptible to “noisy neighbors.” A sudden spike in spam from another tenant in your shared pool *could* temporarily affect your deliverability, though AWS is very good at mitigating this.
  • Dedicated IPs (Add-on):
    • Pros: Your sending reputation is 100% your own. You have full control and are not impacted by others. This is essential for high-volume senders who need predictable deliverability.
    • Cons: You *must* warm them up yourself. Sending 1 million emails on day one from a “cold” IP will get you blacklisted instantly. This requires a gradual ramp-up strategy over several weeks. It also has an additional monthly cost.

Expert Pro-Tip: Don’t buy dedicated IPs unless you are a high-volume sender (e.g., 500k+ emails/day) and have an explicit warm-up strategy. For most corporate and transactional mail, the default shared pool is superior because it’s already warm and managed by AWS.

Understanding Sending Quotas & Reputation

Every new AWS SES account starts in the **sandbox**. This is a highly restricted environment designed to prevent spam. While in the sandbox, you can *only* send email to verified identities (domains or email addresses you own).

To leave the sandbox, you must open a support ticket requesting production access. You will need to explain your use case, how you manage bounces and complaints, and how you obtained your email list (e.g., “All emails are transactional for users who sign up on our platform”).

Once you’re in production, your account has two key limits:

  1. Sending Quota: The maximum number of emails you can send in a 24-hour period.
  2. Sending Rate: The maximum number of emails you can send per second.

These limits increase automatically *as long as you maintain a low bounce rate and a near-zero complaint rate*. Your sender reputation is the single most valuable asset you have in email. Protect it.
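Both numbers are visible from the API, so they are easy to fold into your monitoring. A small boto3 sketch (the 80% threshold is just an illustrative guardrail):

import boto3

ses = boto3.client("ses", region_name="us-east-1")

quota = ses.get_send_quota()
sent = quota["SentLast24Hours"]
max_24h = quota["Max24HourSend"]
rate = quota["MaxSendRate"]

print(f"Sent in last 24h: {sent:.0f} of {max_24h:.0f} (max rate {rate:.0f} emails/sec)")

# Illustrative guardrail: alert before the daily quota becomes a hard failure
if max_24h and sent / max_24h > 0.8:
    print("WARNING: above 80% of the 24-hour sending quota")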


Production-Ready Setup: Identity & Authentication

Before you can send a single email, you must prove you own the “From” address. You do this by verifying an identity, which can be a single email address or (preferably) an entire domain.

Domain Verification

Verifying a domain allows you to send from *any* address at that domain (e.g., noreply@example.com, support@example.com). This is the standard for production systems. SES gives you two verification methods: DKIM (default) or a TXT record.

You can do this via the console, but using the AWS CLI is faster and more scriptable:

# Request verification for your domain
$ aws ses verify-domain-identity --domain example.com

# This will return a VerificationToken
# {
#    "VerificationToken": "abc123xyz789..."
# }

# You must add this token as a TXT record to your DNS
# Record: _amazonses.example.com
# Type:   TXT
# Value:  "abc123xyz789..."

Once AWS detects this DNS record (which can take minutes to hours), your domain identity will move to a “verified” state.
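If you script the verification flow, you can poll for that state change instead of refreshing the console; a brief boto3 sketch:

import boto3

ses = boto3.client("ses", region_name="us-east-1")

resp = ses.get_identity_verification_attributes(Identities=["example.com"])
attrs = resp["VerificationAttributes"].get("example.com", {})

# Status is one of: Pending, Success, Failed, TemporaryFailure, NotStarted
print(f"example.com verification status: {attrs.get('VerificationStatus', 'Unknown')}")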

Mastering Email Authentication: SPF, DKIM, and DMARC

This is non-negotiable for production sending. Mail servers use these three standards to verify that you are who you say you are. Failing to implement them guarantees your mail will land in spam.

  • SPF (Sender Policy Framework): A DNS TXT record that lists which IP addresses are allowed to send email on behalf of your domain. When you use SES, you simply add include:amazonses.com to your existing SPF record.
  • DKIM (DomainKeys Identified Mail): This is the most important. DKIM adds a cryptographic signature to your email headers. SES manages the private key and signs your outgoing mail. You just need to add the public key (provided by SES) as a CNAME record in your DNS. This is what the “Easy DKIM” setup in SES configures for you.
  • DMARC (Domain-based Message Authentication, Reporting & Conformance): DMARC tells receiving mail servers *what to do* with emails that fail SPF or DKIM. It’s a DNS TXT record that enforces your policy (e.g., p=quarantine or p=reject) and provides an address for servers to send you reports on failures. For a deep dive, check out the official DMARC overview.
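Since SPF and DMARC are plain TXT records, they are easy to manage alongside the rest of your DNS. Below is a hedged boto3 sketch that upserts both into Route 53; the hosted zone ID, report address, and p=quarantine policy are placeholders you should tune to your own rollout plan.

import boto3

route53 = boto3.client("route53")

HOSTED_ZONE_ID = "Z0000000EXAMPLE"  # placeholder: the zone hosting example.com
DOMAIN = "example.com"

changes = [
    {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": DOMAIN,
            "Type": "TXT",
            "TTL": 300,
            # SPF: authorize Amazon SES to send on the domain's behalf.
            # NOTE: if a TXT/SPF record already exists, merge the include into it
            # instead of overwriting it with this value.
            "ResourceRecords": [{"Value": '"v=spf1 include:amazonses.com ~all"'}],
        },
    },
    {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": f"_dmarc.{DOMAIN}",
            "Type": "TXT",
            "TTL": 300,
            # DMARC: quarantine failures and request aggregate reports
            "ResourceRecords": [
                {"Value": '"v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"'}
            ],
        },
    },
]

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={"Comment": "SPF and DMARC records for SES sending", "Changes": changes},
)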

Sending Email at Scale: API vs. SMTP

AWS SES provides two distinct endpoints for sending mail, each suiting different architectures.

Method 1: The SMTP Interface

SES provides a standard SMTP endpoint (e.g., email-smtp.us-east-1.amazonaws.com). This is the “legacy” or “compatibility” option.

  • Use Case: Integrating with existing applications, third-party software (like Jenkins, GitLab), or older codebases that are hard-coded to use SMTP.
  • Authentication: You generate SMTP credentials (a username and password) from the SES console. These are *not* your standard AWS access keys. You should create a dedicated IAM user with a policy that *only* allows ses:SendRawEmail and then derive the SMTP credentials from that user.
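For reference, sending through the SMTP endpoint from Python needs nothing more than the standard library. The host, addresses, and credentials below are placeholders; the username and password are the SES SMTP credentials derived from that dedicated IAM user, not your regular access keys.

import smtplib
from email.mime.text import MIMEText

SMTP_HOST = "email-smtp.us-east-1.amazonaws.com"
SMTP_PORT = 587  # STARTTLS port
SMTP_USER = "YOUR_SES_SMTP_USERNAME"  # placeholder
SMTP_PASS = "YOUR_SES_SMTP_PASSWORD"  # placeholder

msg = MIMEText("The nightly build completed successfully.", "plain")
msg["Subject"] = "CI notification"
msg["From"] = "ci@example.com"
msg["To"] = "team@example.com"

with smtplib.SMTP(SMTP_HOST, SMTP_PORT) as server:
    server.starttls()  # SES requires TLS before authenticating
    server.login(SMTP_USER, SMTP_PASS)
    server.sendmail(msg["From"], [msg["To"]], msg.as_string())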

Method 2: The SendEmail & SendRawEmail APIs

This is the modern, cloud-native way to send email. You use the AWS SDK (e.g., Boto3 for Python, AWS SDK for Go) or the AWS CLI, authenticating via standard IAM roles or keys.

You have two primary API calls:

  1. SendEmail: A simple, structured API. You provide the From, To, Subject, and Body (Text and HTML). It’s easy to use but limited.
  2. SendRawEmail: The expert’s choice. This API accepts a single blob: the raw, MIME-formatted email message. You are responsible for building the entire email, including headers, parts (text and HTML), and attachments.

Expert Pro-Tip: Always use SendRawEmail in production. While SendEmail is fine for a quick test, SendRawEmail is the only way to send attachments, add custom headers (like List-Unsubscribe), or create complex multipart MIME messages. Most mature email-sending libraries will build this raw message for you.

Example: Sending with SendRawEmail using Boto3 (Python)

This example demonstrates the power of SendRawEmail by using Python’s email library to construct a multipart message (with both HTML and plain-text versions) and then sending it via Boto3.

import boto3
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

# Create the SES client
ses_client = boto3.client('ses', region_name='us-east-1')

# Create the root message and set headers
msg = MIMEMultipart('alternative')
msg['Subject'] = "Production-Ready Email Example"
msg['From'] = "Sender Name <sender@example.com>"
msg['To'] = "recipient@example.com"

# Define the plain-text and HTML versions
text_part = "Hello, this is the plain-text version of the email."
html_part = """
<html>
<head></head>
<body>
  <h1>Hello!</h1>
  <p>This is the <b>HTML</b> version of the email.</p>
</body>
</html>
"""

# Attach parts to the message
msg.attach(MIMEText(text_part, 'plain'))
msg.attach(MIMEText(html_part, 'html'))

try:
    # Send the email
    response = ses_client.send_raw_email(
        Source=msg['From'],
        Destinations=[msg['To']],
        RawMessage={'Data': msg.as_string()}
    )
    print(f"Email sent! Message ID: {response['MessageId']}")

except Exception as e:
    print(f"Error sending email: {e}")


Reputation Management: The Most Critical Component

Sending the email is easy. Ensuring it doesn’t get blacklisted is hard. This is where Configuration Sets come in. You should *never* send a production email without one.

Configuration Sets: Your Control Panel

A Configuration Set is a ruleset you apply to your outgoing emails (by adding a custom header or specifying it in the API call). Its primary purpose is to define **Event Destinations**.

Handling Bounces & Complaints (The Feedback Loop)

When an email bounces (hard bounce, e.g., address doesn’t exist) or a user clicks “This is Spam” (a complaint), the receiving server sends a notification back. AWS SES processes this feedback loop. If you ignore it and keep sending to bad addresses, your reputation will plummet, and AWS will throttle or even suspend your sending privileges.

Setting Up Event Destinations

An Event Destination is where SES publishes detailed events about your email’s lifecycle: sends, deliveries, bounces, complaints, opens, and clicks.

You have three main options for destinations:

  1. Amazon SNS: The most common choice. Send all bounce and complaint notifications to an SNS topic. Subscribe an SQS queue or an AWS Lambda function to this topic. Your Lambda function should then parse the message and update your application’s database (e.g., mark the user as unsubscribed or email_invalid). This creates a critical, automated feedback loop.
  2. Amazon CloudWatch: Useful for aggregating metrics and setting alarms. For example, “Alert SRE team if the bounce rate exceeds 5% in any 10-minute window.”
  3. Amazon Kinesis Firehose: The high-throughput, SRE choice. This allows you to stream *all* email events (including deliveries and opens) to a destination like S3 (for long-term analysis), Redshift, or OpenSearch. This is how you build a comprehensive analytics dashboard for your email program.

For more details on setting up event destinations, refer to the official AWS SES documentation.
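To make option 1 above concrete, here is a hedged sketch of a Lambda handler subscribed to that SNS topic. The suppression table name and item shape are hypothetical, and real SES notifications carry more fields than are read here.

import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("email-suppression-list")  # hypothetical table

def lambda_handler(event, context):
    for record in event["Records"]:
        notification = json.loads(record["Sns"]["Message"])
        ntype = notification.get("notificationType")

        if ntype == "Bounce":
            recipients = [r["emailAddress"] for r in notification["bounce"]["bouncedRecipients"]]
        elif ntype == "Complaint":
            recipients = [r["emailAddress"] for r in notification["complaint"]["complainedRecipients"]]
        else:
            continue  # ignore deliveries, opens, clicks, etc.

        # Suppress each address so the application never emails it again
        for address in recipients:
            table.put_item(Item={"email": address, "reason": ntype})

    return {"statusCode": 200}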


Advanced Features: AWS SES Mail Receiving

SES isn’t just for sending. It’s also a powerful, serverless email *receiving* endpoint. Instead of running your own postfix or Exchange server, you can configure SES to catch all mail for your domain (or specific addresses).

How it Works: The Architecture

You create a “Receipt Rule” that defines a set of actions to take when an email is received. The typical flow is:

  1. Email arrives at SES (e.g., inbound-support@example.com).
  2. SES scans it for spam and viruses (and rejects it if it fails).
  3. The Receipt Rule is triggered.
  4. The rule specifies an action, such as:
    • Save to S3 Bucket: Dumps the raw email (.eml file) into an S3 bucket.
    • Trigger Lambda Function: Invokes a Lambda function, passing the email content as an event.
    • Publish to SNS Topic: Sends a notification to SNS.

Example Use Case: Automated Inbound Processing

A common pattern is SES -> S3 -> Lambda.

  1. SES receives an email (e.g., an invoice from a vendor).
  2. The Receipt Rule saves the raw .eml file to an S3 bucket (s3://my-inbound-emails/).
  3. The S3 bucket has an event notification configured to trigger a Lambda function on s3:ObjectCreated:*.
  4. The Lambda function retrieves the .eml file, parses it (using a MIME-parsing library), extracts the PDF attachment, and saves it to a separate “invoices” bucket for processing.

This serverless architecture scales automatically with inbound volume, is highly resilient, and is extremely cost-effective. You’ve just built a complex mail-processing engine with no servers to manage.
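A hedged sketch of step 4, the attachment extractor: the destination bucket and key layout are illustrative, and production code would add error handling for malformed MIME.

import email
import boto3

s3 = boto3.client("s3")
INVOICE_BUCKET = "my-extracted-invoices"  # hypothetical destination bucket

def lambda_handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # Fetch the raw .eml that the SES receipt rule wrote to S3
        raw = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        message = email.message_from_bytes(raw)

        # Walk the MIME tree and copy any PDF attachments to the invoices bucket
        for part in message.walk():
            if part.get_content_type() == "application/pdf":
                filename = part.get_filename() or f"{key}.pdf"
                s3.put_object(
                    Bucket=INVOICE_BUCKET,
                    Key=f"invoices/{filename}",
                    Body=part.get_payload(decode=True),
                )

    return {"statusCode": 200}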


AWS SES vs. The Competition (SendGrid, Mailgun)

As an expert, you’re always evaluating trade-offs. Here’s the high-level breakdown:

| Feature | AWS SES | SendGrid / Mailgun / Postmark |
| :--- | :--- | :--- |
| **Model** | Unmanaged Infrastructure | Managed Service |
| **Cost** | **Extremely Low.** Pay-per-email. | Higher. Tiered plans based on volume. |
| **Integration** | **Deepest (AWS).** Native IAM, SNS, S3, Lambda. | Excellent. Strong APIs, but external to your VPC. |
| **Features** | A-la-carte. You build your own analytics, template management, etc. | **All-in-one.** Includes template builders, analytics dashboards, and deliverability support. |
| **Support** | AWS Support. You are the expert. | Specialized email deliverability support. They will help you warm up IPs. |

The Verdict: If you are already deep in the AWS ecosystem and have the SRE/DevOps talent to build your own reputation monitoring and analytics (using CloudWatch/Kinesis), AWS SES is almost always the right choice for cost and integration. If you are a marketing-led team with no developer support, a managed service like SendGrid is a better fit.


Frequently Asked Questions (FAQ)

How do I get out of the AWS SES sandbox?
You must open a service limit increase ticket with AWS Support. In the ticket, clearly explain your use case (e.g., transactional emails for app signups), how you will manage your lists (e.g., immediate removal of bounces/complaints via SNS), and confirm that you are not sending unsolicited mail. A clear, well-written request is usually approved within 24 hours.

What’s the difference between SendEmail and SendRawEmail?
SendEmail is a simple, high-level API for basic text or HTML emails. SendRawEmail is a low-level API that requires you to build the full, MIME-compliant raw email message. You *must* use SendRawEmail if you want to add attachments, use custom headers, or send complex multipart messages.

How does AWS SES pricing work?
It’s incredibly cheap. You are charged per 1,000 emails sent and per GB of data (for attachments). If you are sending from an EC2 instance in the same region, the first 62,000 emails sent per month are often free (as part of the AWS Free Tier, but check current pricing). This makes it one of the most cost-effective solutions on the market.

Can I use AWS SES for marketing emails?
Yes, but you must be extremely careful. SES is optimized for transactional mail. You can use it for bulk marketing, but you are 100% responsible for list management, unsubscribes (must be one-click), and reputation. If your complaint rate spikes, AWS will shut you down. For large-scale marketing, AWS offers Amazon Pinpoint, which is built on top of SES but adds campaign management and analytics features.

Conclusion

AWS SES is not a “set it and forget it” email provider. It’s a powerful, low-level infrastructure component that gives you ultimate control, scalability, and cost-efficiency. For expert AWS users, it’s the clear choice for building robust, integrated applications.

By mastering its core components—identity authentication (DKIM/DMARC), reputation management (Configuration Sets and Event Destinations), and the choice between SMTP and API sending—you can build a world-class email architecture that is both resilient and remarkably inexpensive. The real power of AWS SES is unlocked when you stop treating it as a mail server and start treating it as a serverless event source for your S3, Lambda, and Kinesis-based applications. Thank you for reading the DevopsRoles page!

Top 10 AWS Services Every Developer Must Know in 2025

In 2025, navigating the cloud landscape is no longer an optional skill for a developer—it’s a core competency. Amazon Web Services (AWS) continues to dominate the market, with the 2024 Stack Overflow Developer Survey reporting that over 52% of professional developers use AWS. But with a portfolio of over 200 products, “learning AWS” can feel like an impossible task. The key isn’t to know every service, but to deeply understand the *right* ones. This guide focuses on the top 10 essential AWS Services that provide the most value to developers, enabling you to build, deploy, and scale modern applications efficiently.

Whether you’re building serverless microservices, container-based applications, or sophisticated AI-driven platforms, mastering this curated list will be a significant differentiator in your career. We’ll move beyond the simple definitions to explore *why* each service is critical for a developer’s workflow and how they interconnect to form the backbone of a robust, cloud-native stack.

Why Mastering AWS Services is Non-Negotiable for Developers in 2025

The “DevOps” movement has fully matured, and the lines between writing code and managing the infrastructure it runs on have blurred. Modern developers are increasingly responsible for the entire application lifecycle—a concept known as “you build it, you run it.” Understanding core AWS services is the key to fulfilling this responsibility effectively.

Here’s why this knowledge is crucial:

  • Architectural Fluency: You can’t write efficient, cloud-native code if you don’t understand the components you’re writing for. Knowing when to use a Lambda function versus a Fargate container, or DynamoDB versus RDS, is an architectural decision that begins at the code level.
  • Performance & Cost Optimization: A developer who understands AWS services can write code that leverages them optimally. This means building applications that are not only fast and scalable (e.g., using SQS to decouple services) but also cost-effective (e.g., choosing the right S3 storage tier or Lambda provisioned concurrency).
  • Reduced Dependencies: When you can provision your own database with RDS or define your own security rules in IAM, you reduce friction and dependency on a separate operations team. This leads to faster development cycles and more autonomous, agile teams.
  • Career Advancement: Proficiency in AWS is one of the most in-demand skills in tech. It opens doors to senior roles, DevOps and SRE positions, and higher-paying contracts. The AWS Certified Developer – Associate certification is a testament to this, validating that you can develop, deploy, and debug applications on the platform.

In short, the platform is no longer just “where the app is deployed.” It is an integral part of the application itself. The services listed below are your new standard library.

The Top 10 AWS Services for Developers

This list is curated for a developer’s perspective, prioritizing compute, storage, data, and the “glue” services that connect everything. We’ll focus on services that you will interact with directly from your code or your CI/CD pipeline.

1. AWS Lambda

What it is: A serverless, event-driven compute service. Lambda lets you run code without provisioning or managing servers. You simply upload your code as a “function,” and AWS handles all the scaling, patching, and high availability.

Why it matters for Developers: Lambda is the heart of serverless architecture. As a developer, your focus shifts entirely to writing business logic. You don’t care about the underlying OS, runtime, or scaling. Your function can be triggered by dozens of other AWS services, such as an HTTP request from API Gateway, a file upload to S3, or a message in an SQS queue. This event-driven model is powerful for building decoupled microservices. You only pay for the compute time you consume, down to the millisecond, making it incredibly cost-effective for spiky or low-traffic workloads.

Practical Example (Python): A simple Lambda function that triggers when a new image is uploaded to an S3 bucket.


import json
import boto3

s3 = boto3.client('s3')

def lambda_handler(event, context):
    # 1. Get the bucket and key from the event
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = event['Records'][0]['s3']['object']['key']
    
    print(f"New file {key} was uploaded to {bucket}.")
    
    # Example: Add a tag to the new object
    s3.put_object_tagging(
        Bucket=bucket,
        Key=key,
        Tagging={
            'TagSet': [
                {
                    'Key': 'processed',
                    'Value': 'false'
                },
            ]
        }
    )
    
    return {
        'statusCode': 200,
        'body': json.dumps('Tag added successfully!')
    }

2. Amazon S3 (Simple Storage Service)

What it is: A highly durable, scalable, and secure object storage service. Think of it as a limitless hard drive in the cloud, but with an API.

Why it matters for Developers: S3 is far more than just “storage.” It’s a foundational service that nearly every application touches. For developers, S3 is the go-to solution for:

  • Storing User Uploads: Images, videos, documents, etc.
  • Static Website Hosting: You can host an entire single-page application (SPA) like a React or Vue app directly from an S3 bucket.
  • Data Lakes: The primary repository for raw data used in analytics and machine learning.
  • Logs and Backups: A durable, low-cost target for application logs and database backups.

Your code will interact with S3 daily, using AWS SDKs to upload (PutObject) and retrieve (GetObject) files securely.

Practical Example (AWS CLI): Copying your application’s built static assets to an S3 bucket configured for website hosting.


# Build your React app
npm run build

# Sync the build directory to your S3 bucket
# The --delete flag removes old files
aws s3 sync build/ s3://my-static-website-bucket --delete
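For user uploads, a common SDK pattern is to hand the client a short-lived presigned URL so files go straight to S3 instead of through your backend. A brief boto3 sketch; the bucket and key names are placeholders.

import boto3

s3 = boto3.client("s3")

# URL the client can PUT the file to for the next 15 minutes
upload_url = s3.generate_presigned_url(
    ClientMethod="put_object",
    Params={"Bucket": "my-user-uploads", "Key": "avatars/user-123.png"},
    ExpiresIn=900,
)

# Matching URL for reading the object back later
download_url = s3.generate_presigned_url(
    ClientMethod="get_object",
    Params={"Bucket": "my-user-uploads", "Key": "avatars/user-123.png"},
    ExpiresIn=900,
)

print(upload_url)
print(download_url)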

3. Amazon DynamoDB

What it is: A fully managed, serverless NoSQL key-value and document database. It’s designed for single-digit millisecond performance at any scale.

Why it matters for Developers: DynamoDB is the default database for serverless applications. Because it’s fully managed, you never worry about patching, scaling, or replication. Its key-value nature makes it incredibly fast for “hot” data access, such as user profiles, session states, shopping carts, and gaming leaderboards. For a developer, the primary challenge (and power) is in data modeling. Instead of complex joins, you design your tables around your application’s specific access patterns, which forces you to think about *how* your app will be used upfront. It pairs perfectly with Lambda, as a Lambda function can read or write to DynamoDB with extremely low latency.
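Practical Example (Python): A brief boto3 sketch of the access-pattern-first style; the table name, key schema, and attributes are illustrative.

import boto3

dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
table = dynamodb.Table("user-sessions")  # hypothetical table with a 'user_id' partition key

# Write a session item
table.put_item(
    Item={
        "user_id": "user-123",
        "last_login": "2025-01-15T09:30:00Z",
        "cart_items": 3,
    }
)

# Point read by partition key: low-latency access at any scale
response = table.get_item(Key={"user_id": "user-123"})
print(response.get("Item"))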

4. Amazon ECS (Elastic Container Service) & AWS Fargate

What it is: ECS is a highly scalable, high-performance container orchestration service. It manages your Docker containers. Fargate is a “serverless” compute engine for containers that removes the need to manage the underlying EC2 instances (virtual servers).

Why it matters for Developers: Containers are the standard for packaging and deploying applications. ECS with Fargate gives developers the power of containers without the operational overhead of managing servers. You define your application in a Dockerfile, create a “Task Definition” (a JSON blueprint), and tell Fargate to run it. Fargate finds the compute, deploys your container, and handles scaling automatically. This is the ideal path for migrating a traditional monolithic application (e.g., a Node.js Express server or a Java Spring Boot app) to the cloud, or for running long-lived microservices that don’t fit the short-lived, event-driven model of Lambda.

Practical Example (ECS Task Definition Snippet): A simple definition for a web server container.


{
  "family": "my-web-app",
  "containerDefinitions": [
    {
      "name": "my-app-container",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest",
      "portMappings": [
        {
          "containerPort": 8080,
          "hostPort": 8080,
          "protocol": "tcp"
        }
      ],
      "cpu": 512,
      "memory": 1024,
      "essential": true
    }
  ],
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "512",
  "memory": "1024"
}

5. Amazon EKS (Elastic Kubernetes Service)

What it is: A fully managed Kubernetes service. Kubernetes (K8s) is the open-source industry standard for container orchestration, and EKS provides a managed, secure, and highly available K8s control plane.

Why it matters for Developers: If your company or team has standardized on Kubernetes, EKS is how you run it on AWS. While ECS is a simpler, AWS-native option, EKS provides the full, unadulterated Kubernetes API. This is crucial for portability (avoiding vendor lock-in) and for leveraging the massive open-source K8s ecosystem (tools like Helm, Prometheus, and Istio). As a developer, you’ll interact with the cluster using kubectl just as you would any other K8s cluster, but without the nightmare of managing the control plane (etcd, API server, etc.) yourself.

6. AWS IAM (Identity and Access Management)

What it is: The security backbone of AWS. IAM manages “who can do what.” It controls access to all AWS services and resources using users, groups, roles, and policies.

Why it matters for Developers: This is arguably the most critical service for a developer to understand. You *will* write insecure code if you don’t grasp IAM. The golden rule is “least privilege.” Your Lambda function doesn’t need admin access; it needs an IAM Role that gives it *only* the dynamodb:PutItem permission for a *specific* table. As a developer, you will constantly be defining these roles and policies to securely connect your services. Using IAM roles (instead of hard-coding secret keys in your app) is the non-negotiable best practice for application security on AWS.

Practical Example (IAM Policy): A policy for a Lambda function that only allows it to write to a specific DynamoDB table.


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:PutItem",
        "dynamodb:UpdateItem"
      ],
      "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/my-user-table"
    },
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    }
  ]
}

7. Amazon API Gateway

What it is: A fully managed service for creating, publishing, maintaining, and securing APIs at any scale. It acts as the “front door” for your application’s backend logic.

Why it matters for Developers: API Gateway is the bridge between the outside world (e.g., a mobile app or a web front end) and your backend compute (e.g., Lambda, ECS, or even an EC2 instance). As a developer, you use it to define your RESTful or WebSocket APIs. It handles all the undifferentiated heavy lifting: request/response transformation, throttling to prevent abuse, caching, authentication (e.g., with AWS Cognito or IAM), and monitoring. You can create a full, production-ready, serverless API by simply mapping API Gateway endpoints to Lambda functions.
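Practical Example (Python): A minimal Lambda handler for the proxy integration, which receives the full HTTP request from API Gateway and returns a statusCode/body response. The fields read below follow the standard REST API (v1) event shape.

import json

def lambda_handler(event, context):
    # API Gateway (Lambda proxy integration) passes method, path, headers, and query string
    method = event.get("httpMethod")
    name = (event.get("queryStringParameters") or {}).get("name", "world")

    if method != "GET":
        return {"statusCode": 405, "body": json.dumps({"error": "method not allowed"})}

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }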

8. Amazon RDS (Relational Database Service)

What it is: A managed service for relational databases. It supports popular engines like PostgreSQL, MySQL, MariaDB, SQL Server, and Oracle.

Why it matters for Developers: While DynamoDB is great for many use cases, sometimes you just need a SQL database. You might be migrating a legacy application, or your data has complex relationships and requires transactional integrity (ACID compliance). RDS gives you a SQL database without forcing you to become a Database Administrator (DBA). It automates provisioning, patching, backups, and high-availability (multi-AZ) failover. As a developer, you get a simple connection string, and you can use your favorite SQL dialect and ORM (e.g., SQLAlchemy, TypeORM, Prisma) just as you would with any other database.
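Practical Example (Python): A sketch of connecting with SQLAlchemy. The endpoint, database, and credentials are placeholders; in production you would pull them from AWS Secrets Manager or use IAM database authentication rather than hard-coding them.

from sqlalchemy import create_engine, text

# Placeholder connection string copied from the RDS console; never hard-code real credentials
engine = create_engine(
    "postgresql+psycopg2://app_user:app_password@"
    "mydb.abc123xyz.us-east-1.rds.amazonaws.com:5432/appdb"
)

with engine.connect() as conn:
    result = conn.execute(text("SELECT id, email FROM users WHERE active = true LIMIT 5"))
    for row in result:
        print(row.id, row.email)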

9. AWS CodePipeline / CodeCommit / CodeBuild

What it is: This is a suite of fully managed CI/CD (Continuous Integration / Continuous Deployment) services.

  • CodeCommit: A private, managed Git repository.
  • CodeBuild: A managed build service that compiles your code, runs tests, and produces artifacts (like a Docker image).
  • CodePipeline: The “orchestrator” that defines your release process (e.g., “On push to main, run CodeBuild, then deploy to ECS”).

Why it matters for Developers: “You build it, you run it” also means “you deploy it.” As a developer, you are responsible for automating your path to production. While many companies use third-party tools like Jenkins or GitLab CI, the AWS CodeSuite provides a deeply integrated, serverless way to build and deploy your applications *natively* on AWS. You can configure a pipeline that automatically builds and deploys your Lambda function, Fargate container, or even your S3 static site on every single commit, allowing for true continuous delivery.

10. Amazon SQS (Simple Queue Service)

What it is: A fully managed message queuing service. It’s one of the oldest and most reliable AWS services.

Why it matters for Developers: SQS is the essential “glue” for building decoupled, resilient, and asynchronous microservices. Instead of one service calling another directly via an API (a synchronous call that can fail), the first service (the “producer”) sends a “job” as a message to an SQS queue. A second service (the “consumer”) then polls the queue, pulls off a message, and processes it at its own pace.

This pattern is incredibly powerful. If the consumer service crashes, the message stays safely in the queue and can be re-processed later (this is called “fault tolerance”). If the producer service sends 10,000 messages at once, the queue absorbs the spike, and the consumer can work through them steadily (this is called “smoothing” or “load leveling”). As a developer, SQS is your primary tool for moving from a fragile monolith to a robust, event-driven architecture.
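Practical Example (Python): A minimal producer/consumer sketch with boto3; the queue URL and message shape are placeholders.

import json
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/image-processing-jobs"

# Producer: enqueue a job instead of calling the worker service directly
sqs.send_message(
    QueueUrl=QUEUE_URL,
    MessageBody=json.dumps({"image_key": "uploads/photo-42.jpg", "action": "thumbnail"}),
)

# Consumer: long-poll, process, then delete (deletion acknowledges successful processing)
resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=10)
for message in resp.get("Messages", []):
    job = json.loads(message["Body"])
    print(f"Processing {job['image_key']}")
    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=message["ReceiptHandle"])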

Beyond the Top 10: Honorable Mentions

Once you have a handle on the services above, these are the next logical ones to explore to round out your AWS for developers toolkit:

  • AWS CDK (Cloud Development Kit): Stop clicking in the console. The CDK lets you define your entire infrastructure (VPCs, databases, Lambda functions, everything) in a real programming language like TypeScript, Python, or Go. This is the modern face of Infrastructure as Code (IaC).
  • Amazon CloudWatch: You can’t run what you can’t see. CloudWatch is the native monitoring and observability service. It collects logs (CloudWatch Logs), metrics (CloudWatch Metrics), and allows you to set alarms based on them (e.g., “Alert me if my Lambda function has more than 5 errors in 1 minute”).
  • Amazon Cognito: A fully managed user identity and authentication service. If your app needs a “Sign Up / Sign In” page, Cognito handles the user pool, password resets, and social federation (e.g., “Login with Google”) for you.

Frequently Asked Questions

What’s the best way to start learning these AWS services?

The single best way is to build something. Take advantage of the AWS Free Tier, which gives you a generous amount of many of these services (Lambda, DynamoDB, S3, etc.) for 12 months. Pick a project—like a serverless URL shortener or a photo-sharing app—and build it. Use API Gateway + Lambda + DynamoDB. Host the frontend on S3. You’ll learn more from one weekend of building than from weeks of passive reading.

Do I need to know all 10 of these?

No, but you should know *of* them. Your daily focus will depend on your stack. If you’re a pure serverless developer, your world will be Lambda, DynamoDB, API Gateway, SQS, and IAM. If you’re on a team managing large-scale microservices, you’ll live inside EKS, RDS, and CloudWatch. The most universal services that *every* developer should know are S3 and IAM, as they are truly foundational.

How is AWS different from Azure or GCP for a developer?

All three major cloud providers (AWS, Microsoft Azure, and Google Cloud Platform) offer the same core “primitives.” They all have a serverless function service (Lambda, Azure Functions, Google Cloud Functions), a container service (ECS/EKS, Azure Kubernetes Service, Google Kubernetes Engine), and managed databases. The concepts are 100% transferable. AWS is the most mature, has the largest market share, and offers the widest breadth of services. The main difference a developer will feel is in the naming conventions, the specific SDKs/APIs, and the IAM system, which is unique to each cloud.

Conclusion

The cloud is no longer just infrastructure; it’s the new application server, the new database, and the new deployment pipeline, all rolled into one API-driven platform. For developers in 2025, fluency in these core AWS Services is not just a “nice to have” skill—it is a fundamental part of the job. By focusing on this top 10 list, you move beyond just writing code and become an architect of scalable, resilient, and powerful cloud-native applications. Start with S3 and IAM, pick a compute layer like Lambda or Fargate, add a database like DynamoDB or RDS, and you’ll have the foundation you need to build almost anything. Thank you for reading the DevopsRoles page!