Tag Archives: DevOps

Developing Secure Software: Docker & Sonatype at Scale

In the era of Log4Shell and SolarWinds, the mandate for engineering leaders is clear: security cannot be a gatekeeper at the end of the release cycle; it must be the pavement on which the pipeline runs. Developing secure software at an enterprise scale requires more than just scanning code—it demands a comprehensive orchestration of the software supply chain.

For organizations leveraging the Docker ecosystem, the challenge is twofold: ensuring the base images are immutable and trusted, and ensuring the application artifacts injected into those images are free from malicious dependencies. This is where the synergy between Docker’s containerization standards and Sonatype’s Nexus platform (Lifecycle and Repository) becomes critical.

This guide moves beyond basic setup instructions. We will explore architectural strategies for integrating Sonatype Nexus IQ with Docker registries, implementing policy-as-code in CI/CD, and managing the noise of vulnerability reporting to maintain high-velocity deployments.

The Supply Chain Paradigm: Beyond Simple Scanning

To succeed in developing secure software, we must acknowledge that modern applications are 80-90% open-source components. The “code” your developers write is often just glue logic binding third-party libraries together. Therefore, the security posture of your Docker container is directly inherited from the upstream supply chain.

Enterprise strategies must align with frameworks like the NIST Secure Software Development Framework (SSDF) and SLSA (Supply-chain Levels for Software Artifacts). The goal is not just to find bugs, but to establish provenance and governance.

Pro-Tip for Architects: Don’t just scan build artifacts. Implement a “Nexus Firewall” at the proxy level. If a developer requests a library with a CVSS score of 9.8, the proxy should block the download entirely, preventing the vulnerability from ever entering your ecosystem. This is “Shift Left” in its purest form.

Architecture: Integrating Nexus IQ with Docker Registries

At scale, you cannot rely on developers manually running CLI scans. Integration must be seamless. A robust architecture typically involves three layers of defense using Sonatype Nexus and Docker.

1. The Proxy Layer (Ingestion)

Configure Nexus Repository Manager (NXRM) as a proxy for Docker Hub. All `docker pull` requests should go through NXRM. This allows you to cache images (improving build speeds) and, more importantly, inspect them.
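In practice the developer workflow barely changes. A minimal sketch of pulling through the proxy — the hostname and connector port here are illustrative, not a Sonatype default:

```shell
# Hypothetical NXRM host/port -- substitute your own. NXRM exposes Docker
# repositories on dedicated connector ports, so a docker group repo
# (proxy + hosted) might listen on 8083:
docker login nexus.corp.local:8083

# Pull through the proxy instead of Docker Hub directly; NXRM caches the
# layers (faster repeat builds) and can subject them to policy:
docker pull nexus.corp.local:8083/library/eclipse-temurin:17-jre

# To enforce this fleet-wide, block direct egress to registry-1.docker.io
# and point the daemon at the proxy via /etc/docker/daemon.json:
#   { "registry-mirrors": ["https://nexus.corp.local:8083"] }
```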

2. The Build Layer (CI Integration)

This is where the Nexus IQ Server comes into play. During the build, the CI server (Jenkins, GitLab CI, GitHub Actions) generates an SBOM (Software Bill of Materials) of the application and sends it to Nexus IQ for policy evaluation.
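One common way to produce that SBOM is the CycloneDX Maven plugin — shown here as an illustrative sketch; the Nexus Platform plugin used in the Jenkinsfile below can also scan binaries directly without a pre-built SBOM:

```shell
# Generate a CycloneDX SBOM for a Maven project (writes target/bom.xml
# and/or bom.json depending on plugin configuration):
mvn org.cyclonedx:cyclonedx-maven-plugin:makeAggregateBom

# The SBOM (or the binaries themselves) can then be submitted for policy
# evaluation, e.g. with the IQ CLI -- exact jar name/options vary by version:
# java -jar nexus-iq-cli.jar -s https://iq.corp.local -i payment-service-v2 target/bom.json
```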

3. The Registry Layer (Continuous Monitoring)

Even if an image is safe today, it might be vulnerable tomorrow (Zero-Day). Nexus Lifecycle offers “Continuous Monitoring” for artifacts stored in the repository, alerting you to new CVEs in old images without requiring a rebuild.

Policy-as-Code: Enforcement in CI/CD

Developing secure software effectively means automating decision-making. Policies should be defined in Nexus IQ (e.g., “No Critical CVEs in Production App”) and enforced by the pipeline.

Below is a production-grade Jenkinsfile snippet demonstrating how to enforce a blocking policy using the Nexus Platform Plugin. Note the use of failBuildOnNetworkError to ensure fail-safe behavior.

pipeline {
    agent any
    stages {
        stage('Build & Package') {
            steps {
                sh 'mvn clean package -DskipTests' // Create the artifact
                sh 'docker build -t my-app:latest .' // Build the container
            }
        }
        stage('Sonatype Policy Evaluation') {
            steps {
                script {
                    // Evaluate the application JARs and the Docker Image
                    nexusPolicyEvaluation failBuildOnNetworkError: true,
                                          iqApplication: 'payment-service-v2',
                                          iqStage: 'build',
                                          iqScanPatterns: [[pattern: 'target/*.jar'], [pattern: 'Dockerfile']]
                }
            }
        }
        stage('Push to Registry') {
            steps {
                // Only executes if Policy Evaluation passes
                sh 'docker push private-repo.corp.com/my-app:latest'
            }
        }
    }
}

By scanning the Dockerfile and the application binaries simultaneously, you catch OS-level vulnerabilities (e.g., glibc issues in the base image) and Application-level vulnerabilities (e.g., log4j in the Java classpath).

Optimizing Docker Builds for Security

While Sonatype handles the governance, the way you construct your Docker images fundamentally impacts your risk profile. Expert teams minimize the attack surface using Multi-Stage Builds and Distroless images.

This approach removes build tools (Maven, GCC, Gradle) and shells from the final runtime image, making it significantly harder for attackers to achieve persistence or lateral movement.

Secure Dockerfile Pattern

# Stage 1: The Build Environment
FROM maven:3.8.6-eclipse-temurin-17 AS builder
WORKDIR /app
COPY pom.xml .
COPY src ./src
RUN mvn package -DskipTests

# Stage 2: The Runtime Environment
# Using Google's Distroless image for Java 17
# No shell, no package manager, minimal CVE footprint
FROM gcr.io/distroless/java17-debian11
COPY --from=builder /app/target/my-app.jar /app/my-app.jar
WORKDIR /app
CMD ["my-app.jar"]

Pro-Tip: When scanning distroless images or stripped binaries, standard scanners often fail because they rely on package managers (like apt or apk) to list installed software. Sonatype’s “Advanced Binary Fingerprinting” is superior here as it identifies components based on hash signatures rather than package manifests.

Scaling Operations: Automated Waivers & API Magic

The biggest friction point in developing secure software is the “False Positive” or the “Unfixable Vulnerability.” If you block builds for a vulnerability that has no patch available, developers will revolt.

To handle this at scale, you must utilize the Nexus IQ Server API. You can script logic that automatically grants temporary waivers for vulnerabilities that meet specific criteria (e.g., “Vendor status: Will Not Fix” AND “CVSS < 7.0”).

Here is a conceptual example of how to interact with the API to manage waivers programmatically:

# Pseudo-code for automating waivers via the Nexus IQ API.
# The endpoint shape and payload below are illustrative -- check the API
# documentation for your IQ Server version before relying on them.
import os
from datetime import datetime, timedelta, timezone

import requests

IQ_SERVER = "https://iq.corp.local"
APP_ID = "payment-service-v2"
# Never hardcode credentials; pull them from the environment or a secret store
AUTH = (os.environ["IQ_USER"], os.environ["IQ_PASSWORD"])

def apply_waiver(violation_id, reason):
    endpoint = f"{IQ_SERVER}/api/v2/policyViolations/{violation_id}/waiver"
    # Time-box every waiver (here: 90 days) instead of hardcoding a date
    expiry = datetime.now(timezone.utc) + timedelta(days=90)
    payload = {
        "comment": reason,
        "expiryTime": expiry.strftime("%Y-%m-%dT%H:%M:%S.000Z"),
    }
    response = requests.post(endpoint, json=payload, auth=AUTH, timeout=30)
    response.raise_for_status()  # fail loudly instead of silently ignoring errors
    print(f"Waiver applied for {violation_id}")

# Logic: if a vulnerability is effectively 'noise' (e.g. vendor status
# "Will Not Fix" AND CVSS < 7.0), auto-waive it. This prevents the
# pipeline from breaking on non-actionable items.

Frequently Asked Questions (FAQ)

How does Sonatype IQ differ from ‘docker scan’?

docker scan (historically powered by Snyk, and since superseded by docker scout in current Docker releases) is excellent for ad-hoc developer checks. Sonatype IQ is an enterprise governance platform. It provides centralized policy management, legal compliance (license checking), and deep binary fingerprinting that persists across the entire SDLC, not just the local machine.

What is the performance impact of scanning in CI/CD?

A full binary scan can take time. To optimize, ensure your Nexus IQ Server is co-located (network-wise) with your CI runners. Additionally, utilize the “Proprietary Code” settings in Nexus to exclude your internal JARs/DLLs from being fingerprinted against the public Central Repository, which speeds up analysis significantly.

How do we handle “InnerSource” components?

Large enterprises often reuse internal libraries. You should publish these to a hosted repository in Nexus. By configuring your policies correctly, you can ensure that consuming applications verify the version age and quality of these internal components, applying the same rigor to internal code as you do to open source.

Conclusion

Developing secure software using Docker and Sonatype at scale is not an endpoint; it is a continuous operational practice. It requires shifting from a reactive “patching” mindset to a proactive “supply chain management” mindset.

By integrating Nexus Firewall to block bad components at the door, enforcing Policy-as-Code in your CI/CD pipelines, and utilizing minimal Docker base images, you create a defense-in-depth strategy. This allows your organization to innovate at the speed of Docker, with the assurance and governance required by the enterprise.

Next Step: Audit your current CI pipeline. If you are running scans but not blocking builds on critical policy violations, you are gathering data, not securing software. Switch your Nexus action from “Warn” to “Fail” for CVSS 9+ vulnerabilities today. Thank you for reading the DevopsRoles page!

Kubernetes Migration: Strategies & Best Practices

For the modern enterprise, the question is no longer if you will adopt cloud-native orchestration, but how you will manage the transition. Kubernetes migration is rarely a linear process; it is a complex architectural shift that demands a rigorous understanding of distributed systems, state persistence, and networking primitives. Whether you are moving legacy monoliths from bare metal to K8s, or orchestrating a multi-cloud cluster-to-cluster shift, the margin for error is nonexistent.

This guide is designed for Senior DevOps Engineers and SREs. We will bypass the introductory concepts and dive straight into the strategic patterns, technical hurdles of stateful workloads, and zero-downtime cutover techniques required for a successful production migration.

The Architectural Landscape of Migration

A successful Kubernetes migration is 20% infrastructure provisioning and 80% application refactoring and data gravity management. Before a single YAML manifest is applied, the migration path must be categorized based on the source and destination architectures.

Types of Migration Contexts

  • V2C (VM to Container): The classic modernization path. Requires containerization (Dockerfiles), defining resource limits, and decoupling configuration from code (12-Factor App adherence).
  • C2C (Cluster to Cluster): Moving from on-prem OpenShift to EKS, or GKE to EKS. This involves handling API version discrepancies, CNI (Container Network Interface) translation, and Ingress controller mapping.
  • Hybrid/Multi-Cloud: Spanning workloads across clusters. Complexity lies in service mesh implementation (Istio/Linkerd) and consistent security policies.

Pro-Tip: In C2C migrations, strictly audit your API versions using tools like kubent (Kube No Trouble) before migration. Deprecated APIs in the source cluster (e.g., v1beta1 Ingress) will cause immediate deployment failures in a newer destination cluster version.
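The audit itself is a one-liner once kubent is installed — the target version below is illustrative:

```shell
# kubent inspects the current kubecontext for API versions that are
# deprecated/removed as of the destination cluster's version:
kubent --target-version 1.29 -o json > deprecated-apis.json

# --exit-error makes kubent return non-zero when findings exist,
# so the same command can gate a CI stage before migration begins:
kubent --target-version 1.29 --exit-error
```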

Strategic Patterns: The 6 Rs in a K8s Context

While the “6 Rs” of cloud migration are standard, their application in a Kubernetes migration is distinct.

1. Rehost (Lift and Shift)

Wrapping a legacy binary in a container without code changes. While fast, this often results in “fat containers” that behave like VMs (using SupervisorD, lacking liveness probes, local logging).

Best for: Low-criticality internal apps or immediate datacenter exits.

2. Replatform (Tweak and Shift)

Moving to containers while replacing backend services with cloud-native equivalents. For example, migrating a local MySQL instance inside a VM to Amazon RDS or Google Cloud SQL, while the application moves to Kubernetes.

3. Refactor (Re-architect)

Breaking a monolith into microservices to fully leverage Kubernetes primitives like scaling, self-healing, and distinct release cycles.

Technical Deep Dive: Migrating Stateful Workloads

Stateless apps are trivial to migrate. The true challenge in any Kubernetes migration is Data Gravity. Handling StatefulSets and PersistentVolumeClaims (PVCs) requires ensuring data integrity and minimizing downtime against your Recovery Time Objective (RTO).

CSI and Volume Snapshots

Modern migrations rely heavily on the Container Storage Interface (CSI). If you are migrating between clusters (C2C), you cannot simply “move” a PV. You must replicate the data.

Migration Strategy: Velero with Restic/Kopia

Velero is the industry standard for backing up and restoring Kubernetes cluster resources and persistent volumes. For storage backends that do not support native snapshots across different providers, Velero integrates with Restic (or Kopia in newer versions) to perform file-level backups of PVC data.

# Example: Creating a backup including PVCs using Velero
velero backup create migration-backup \
  --include-namespaces production-app \
  --default-volumes-to-fs-backup \
  --wait

Upon restoration in the target cluster, Velero reconstructs the Kubernetes objects (Deployments, Services, PVCs) and hydrates the data into the new StorageClass defined in the destination.
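The restore side can be sketched as follows. The StorageClass remap uses Velero's documented plugin-config ConfigMap convention; the class names and restore name are illustrative:

```shell
# Map the source StorageClass to the destination's before restoring
# ("standard" and "gp3-csi" are placeholders for your environment):
kubectl create configmap change-storage-class-config \
  --namespace velero \
  --from-literal=standard=gp3-csi
kubectl label configmap change-storage-class-config \
  --namespace velero \
  velero.io/plugin-config="" \
  velero.io/change-storage-class=RestoreItemAction

# Restore the backup taken above into the destination cluster:
velero restore create migration-restore --from-backup migration-backup --wait

# Verify objects and volume data landed (phase should be Completed):
velero restore describe migration-restore
```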

Database Migration Patterns

For high-throughput databases, file-level backup/restore is often too slow (high downtime). Instead, utilize replication:

  1. Setup a Replica: Configure a read-replica in the destination Kubernetes cluster (or managed DB service) pointing to the source master.
  2. Sync: Allow replication lag to drop to near zero.
  3. Promote: During the maintenance window, stop writes to the source, wait for the final sync, and promote the destination replica to master.
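For MySQL 8.0.22+ the three steps above look roughly like this — a hedged sketch with illustrative hostnames; other engines (Postgres, MariaDB) use different promotion commands:

```shell
# 1. Freeze writes on the source -- the maintenance window begins here
mysql -h source-db.corp.local -e "SET GLOBAL super_read_only = ON;"

# 2. Wait for the destination replica to fully catch up
mysql -h replica.prod.svc -e "SHOW REPLICA STATUS\G" | grep Seconds_Behind_Source
# proceed only when this reports 0

# 3. Promote: stop replication, discard replica config, re-enable writes
mysql -h replica.prod.svc -e "STOP REPLICA; RESET REPLICA ALL;"
mysql -h replica.prod.svc -e "SET GLOBAL read_only = OFF;"

# 4. Repoint the application's DB endpoint (DNS / Service) at the promoted node
```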

Zero-Downtime Cutover Strategies

Once the workload is running in the destination environment, switching traffic is the highest-risk phase. A “Big Bang” DNS switch is rarely advisable for high-traffic systems.

1. DNS Weighted Routing (Canary Cutover)

Utilize DNS providers (like AWS Route53 or Cloudflare) to shift traffic gradually. Start with a 5% weight to the new cluster’s Ingress IP.
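With Route53 the 5% canary weight can be applied via the CLI — zone ID, record values, and the identifier below are placeholders; a matching record for the legacy load balancer carries the remaining Weight of 95:

```shell
# Send ~5% of traffic to the new cluster's ingress IP (illustrative values):
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0123456789ABC \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "api.example.com",
        "Type": "A",
        "SetIdentifier": "new-cluster",
        "Weight": 5,
        "TTL": 60,
        "ResourceRecords": [{"Value": "203.0.113.10"}]
      }
    }]
  }'
```

Keep the TTL low (60s here) during cutover so weight changes propagate quickly, then raise it once the migration settles.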

2. Ingress Shadowing (Dark Traffic)

Before the actual cutover, mirror production traffic to the new cluster to validate performance without affecting real users. This can be achieved using Service Mesh capabilities (like Istio) or Nginx ingress annotations.

# Example: Nginx Ingress Mirroring Annotation
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: production-ingress
  annotations:
    # ingress-nginx mirrors a copy of each request to the given target URI
    nginx.ingress.kubernetes.io/mirror-target: "https://new-cluster.example.com$request_uri"
spec:
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: legacy-service
            port:
              number: 80

CI/CD and GitOps Adaptation

A Kubernetes migration is the perfect opportunity to enforce GitOps. Migrating pipeline logic (Jenkins, GitLab CI) directly to Kubernetes manifests managed by ArgoCD or Flux ensures that the “source of truth” for your infrastructure is version controlled.

When migrating pipelines:

  • Abstraction: Replace complex imperative deployment scripts (kubectl apply -f ...) with Helm Charts or Kustomize overlays.
  • Secret Management: Move away from environment variables stored in CI tools. Adopt Secrets Store CSI Driver (Vault/AWS Secrets Manager) or Sealed Secrets.
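As a sketch of the Sealed Secrets path (secret name and keys are illustrative): the plaintext Secret never touches Git, only its encrypted form does.

```shell
# Encrypt a Secret with the cluster controller's public key so the
# *sealed* form can live in Git alongside your other manifests:
kubectl create secret generic db-creds \
  --from-literal=password='s3cr3t' \
  --dry-run=client -o yaml \
  | kubeseal --format yaml > db-creds-sealed.yaml

# Commit db-creds-sealed.yaml; ArgoCD/Flux applies it and the
# sealed-secrets controller decrypts it in-cluster at admission.
```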

Frequently Asked Questions (FAQ)

How do I handle disparate Ingress Controllers during migration?

If moving from AWS ALB Ingress to Nginx Ingress, the annotations will differ significantly. Use a “Translation Layer” approach: Use Helm to template your Ingress resources. Define values files for the source (ALB) and destination (Nginx) that render the correct annotations dynamically, allowing you to deploy to both environments from the same codebase during the transition.

What is the biggest risk in Kubernetes migration?

Network connectivity and latency. Often, migrated services in the new cluster need to communicate with legacy services left behind on-prem or in a different VPC. Ensure you have established robust peering, VPNs, or Transit Gateways before moving applications to prevent timeouts.

Should I migrate stateful workloads to Kubernetes at all?

This is a contentious topic. For experts, the answer is: “Yes, if you have the operational maturity.” Operators (like the Prometheus Operator or Postgres Operator) make managing stateful apps easier, but if your team lacks deep K8s storage knowledge, offloading state to managed services (RDS, Cloud SQL) lowers the migration risk profile significantly.

Conclusion

Kubernetes migration is a multifaceted engineering challenge that extends far beyond simple containerization. It requires a holistic strategy encompassing data persistence, traffic shaping, and observability.

By leveraging tools like Velero for state transfer, adopting GitOps for configuration consistency, and utilizing weighted DNS for traffic cutovers, you can execute a migration that not only modernizes your stack but does so with minimal risk to the business. The goal is not just to be on Kubernetes, but to operate a platform that is resilient, scalable, and easier to manage than the legacy system it replaces.

Unlock Reusable VPC Modules: Terraform for Dev/Stage/Prod Environments

If you are managing infrastructure at scale, you have likely felt the pain of the “copy-paste” sprawl. You define a VPC for Development, then copy the code for Staging, and again for Production, perhaps changing a CIDR block or an instance count manually. This breaks the fundamental DevOps principle of DRY (Don’t Repeat Yourself) and introduces drift risk.

For Senior DevOps Engineers and SREs, the goal isn’t just to write code that works; it’s to architect abstractions that scale. Reusable VPC Modules are the cornerstone of a mature Infrastructure as Code (IaC) strategy. They allow you to define the “Gold Standard” for networking once and instantiate it infinitely across environments with predictable results.

In this guide, we will move beyond basic syntax. We will construct a production-grade, agnostic VPC module capable of dynamic subnet calculation, conditional resource creation (like NAT Gateways), and strict variable validation suitable for high-compliance Dev, Stage, and Prod environments.

Why Reusable VPC Modules Matter (Beyond DRY)

While reducing code duplication is the obvious benefit, the strategic value of modularizing your VPC architecture runs deeper.

  • Governance & Compliance: By centralizing your network logic, you enforce security standards (e.g., “Flow Logs must always be enabled” or “Private subnets must not have public IP assignment”) in a single location.
  • Testing & Versioning: You can version your module (e.g., v1.2.0). Production can remain pinned to a stable version while you iterate on features in Development, effectively applying software engineering lifecycles to your network.
  • Abstraction Complexity: A consumer of your module (perhaps a developer spinning up an ephemeral environment) shouldn’t need to understand Route Tables or NACLs. They should only need to provide a CIDR block and an Environment name.

Pro-Tip: Avoid the “God Module” anti-pattern. While it’s tempting to bundle the VPC, EKS, and RDS into one giant module, this leads to dependency hell. Keep your Reusable VPC Modules strictly focused on networking primitives: VPC, Subnets, Route Tables, Gateways, and ACLs.

Anatomy of a Production-Grade Module

Let’s build a module that calculates subnets dynamically based on Availability Zones (AZs) and handles environment-specific logic (like high availability in Prod vs. cost savings in Dev).

1. Input Strategy & Validation

Modern Terraform (v1.0+) allows for powerful variable validation. We want to ensure that downstream users don’t accidentally pass invalid CIDR blocks.

# modules/vpc/variables.tf

variable "environment" {
  description = "Deployment environment (dev, stage, prod)"
  type        = string
  validation {
    condition     = contains(["dev", "stage", "prod"], var.environment)
    error_message = "Environment must be one of: dev, stage, prod."
  }
}

variable "vpc_cidr" {
  description = "CIDR block for the VPC"
  type        = string
  validation {
    condition     = can(cidrhost(var.vpc_cidr, 0))
    error_message = "Must be a valid IPv4 CIDR block."
  }
}

variable "az_count" {
  description = "Number of AZs to utilize"
  type        = number
  default     = 2
}

2. Dynamic Subnetting with cidrsubnet

Hardcoding subnet CIDRs (e.g., 10.0.1.0/24) is brittle. Instead, use the cidrsubnet function to mathematically carve up the VPC CIDR. This ensures no overlap and automatic scalability if you change the base CIDR size.
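You can sanity-check the carving logic before applying anything — either in `terraform console` or, equivalently, with Python's stdlib `ipaddress` module (az_count = 2, the module default):

```shell
# terraform console evaluates pure functions without any configuration:
#   $ echo 'cidrsubnet("10.0.0.0/16", 8, 1)' | terraform console
#   "10.0.1.0/24"

# The same math in stdlib Python, including the private-subnet offset
# (index + az_count) used by the module:
python3 -c '
import ipaddress
nets = list(ipaddress.ip_network("10.0.0.0/16").subnets(prefixlen_diff=8))
print(nets[1])      # public subnet for AZ index 1
print(nets[0 + 2])  # private subnet for AZ index 0, offset by az_count=2
'
```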

# modules/vpc/main.tf

data "aws_availability_zones" "available" {
  state = "available"
}

resource "aws_vpc" "main" {
  cidr_block           = var.vpc_cidr
  enable_dns_hostnames = true
  enable_dns_support   = true

  tags = {
    Name = "${var.environment}-vpc"
  }
}

# Public Subnets
resource "aws_subnet" "public" {
  count                   = var.az_count
  vpc_id                  = aws_vpc.main.id
  # Example: 10.0.0.0/16 -> 10.0.0.0/24, 10.0.1.0/24, ... (one /24 per AZ index)
  cidr_block              = cidrsubnet(var.vpc_cidr, 8, count.index)
  availability_zone       = data.aws_availability_zones.available.names[count.index]
  map_public_ip_on_launch = true

  tags = {
    Name = "${var.environment}-public-${data.aws_availability_zones.available.names[count.index]}"
    Tier = "Public"
  }
}

# Private Subnets
resource "aws_subnet" "private" {
  count             = var.az_count
  vpc_id            = aws_vpc.main.id
  # Offset the CIDR calculation by 'az_count' to avoid overlap with public subnets
  cidr_block        = cidrsubnet(var.vpc_cidr, 8, count.index + var.az_count)
  availability_zone = data.aws_availability_zones.available.names[count.index]

  tags = {
    Name = "${var.environment}-private-${data.aws_availability_zones.available.names[count.index]}"
    Tier = "Private"
  }
}

3. Conditional NAT Gateways (Cost Optimization)

NAT Gateways are expensive. In a Dev environment, you might only need one shared NAT Gateway (or none if you use instances with public IPs for testing), whereas Prod requires High Availability (one NAT per AZ).

# modules/vpc/main.tf

locals {
  # If Prod, create NAT per AZ. If Dev/Stage, create only 1 NAT total to save costs.
  nat_gateway_count = var.environment == "prod" ? var.az_count : 1
}

resource "aws_eip" "nat" {
  count  = local.nat_gateway_count
  domain = "vpc"
}

resource "aws_nat_gateway" "main" {
  count         = local.nat_gateway_count
  allocation_id = aws_eip.nat[count.index].id
  subnet_id     = aws_subnet.public[count.index].id

  tags = {
    Name = "${var.environment}-nat-${count.index}"
  }
}

Implementing Across Environments

Once your Reusable VPC Module is polished, utilizing it across environments becomes a trivial exercise in configuration management. I recommend a directory-based structure over Terraform Workspaces for clearer isolation of state files and variable definitions.

Directory Structure

infrastructure/
├── modules/
│   └── vpc/ (The code we wrote above)
├── environments/
│   ├── dev/
│   │   └── main.tf
│   ├── stage/
│   │   └── main.tf
│   └── prod/
│       └── main.tf

The Implementation (DRY at work)

In environments/prod/main.tf, your code is now incredibly concise:

module "vpc" {
  source      = "../../modules/vpc"
  
  environment = "prod"
  vpc_cidr    = "10.0.0.0/16"
  az_count    = 3 # High Availability
}

Contrast this with environments/dev/main.tf:

module "vpc" {
  source      = "../../modules/vpc"
  
  environment = "dev"
  vpc_cidr    = "10.10.0.0/16" # Different CIDR
  az_count    = 2 # Lower cost
}

Advanced Patterns & Considerations

Tagging Standards

Effective tagging is non-negotiable for cost allocation and resource tracking. Use the default_tags feature in the AWS provider configuration to apply global tags, but ensure your module accepts a tags map variable to merge specific metadata.

Outputting Values for Dependency Injection

Your VPC module is likely the foundation for other modules (like EKS or RDS). Ensure you output the IDs required by these dependent resources.

# modules/vpc/outputs.tf

output "vpc_id" {
  value = aws_vpc.main.id
}

output "private_subnet_ids" {
  value = aws_subnet.private[*].id
}

output "public_subnet_ids" {
  value = aws_subnet.public[*].id
}

Frequently Asked Questions (FAQ)

Should I use the official terraform-aws-modules/vpc/aws or build my own?

For beginners or rapid prototyping, the community module is excellent. However, for Expert SRE teams, building your own Reusable VPC Module is often preferred. It reduces “bloat” (unused features from the community module) and allows strict adherence to internal naming conventions and security compliance logic that a generic module cannot provide.

How do I handle VPC Peering between these environments?

Generally, you should avoid peering Dev and Prod. However, if you need shared services (like a tooling VPC), create a separate vpc-peering module. Do not bake peering logic into the core VPC module, as it creates circular dependencies and makes the module difficult to destroy.

What about VPC Flow Logs?

Flow Logs should be a standard part of your reusable module. I recommend adding a variable enable_flow_logs (defaulting to true) and storing logs in S3 or CloudWatch Logs. This ensures that every environment spun up with your module has auditing enabled by default.

Conclusion

Transitioning to Reusable VPC Modules transforms your infrastructure from a collection of static scripts into a dynamic, versioned product. By abstracting the complexity of subnet math and resource allocation, you empower your team to deploy Dev, Stage, and Prod environments that are consistent, compliant, and cost-optimized.

Start refactoring your hardcoded network configurations today. Isolate your logic into a module, version it, and watch your drift disappear.

Monitor Docker: Efficient Container Monitoring Across All Servers with Beszel

In the world of Docker container monitoring, we often pay a heavy “Observability Tax.” We deploy complex stacks—Prometheus, Grafana, Node Exporter, cAdvisor—just to check if a container is OOM (Out of Memory). For large Kubernetes clusters, that complexity is justified. For a fleet of Docker servers, home labs, or edge devices, it’s overkill.

Enter Beszel. It is a lightweight monitoring hub that fundamentally changes the ROI of observability. It gives you historical CPU, RAM, and Disk I/O data, plus specific Docker stats for every running container, all while consuming less than 10MB of RAM.

This guide is for the expert SysAdmin or DevOps engineer who wants robust metrics without the bloat. We will deploy the Beszel Hub, configure Agents with hardened security settings, and set up alerting.

Why Beszel for Docker Environments?

Unlike pull-based stacks that require running heavy scrapers and exporters, or agentless models that lack granularity, Beszel uses a Hub-and-Agent architecture designed for efficiency.

  • Low Overhead: The agent is a single binary (packaged in a container) that typically uses negligible CPU and <15MB RAM.
  • Docker Socket Integration: By mounting the Docker socket, the agent automatically discovers running containers and pulls stats (CPU/MEM %) directly from the daemon.
  • Automatic Alerts: No complex PromQL queries. You get out-of-the-box alerting for disk pressure, memory spikes, and offline status.

Pro-Tip: Beszel is distinct from “Uptime Monitors” (like Uptime Kuma) because it tracks resource usage trends inside the container, not just HTTP 200 OK statuses.

Step 1: Deploying the Beszel Hub (Control Plane)

The Hub is the central dashboard. It ingests metrics from all your agents. We will use Docker Compose to define it.

Hub Configuration

services:
  beszel:
    image: 'henrygd/beszel:latest'
    container_name: 'beszel'
    restart: unless-stopped
    ports:
      - '8090:8090'
    volumes:
      - ./beszel_data:/beszel_data

Deployment:

Run docker compose up -d. Navigate to http://your-server-ip:8090 and create your admin account.

Step 2: Deploying the Agent (Data Plane)

This is where the magic happens. The agent sits on your Docker hosts, collects metrics, and pushes them to the Hub.

Prerequisite: In the Hub UI, click “Add System”. Enter the IP of the node you want to monitor. The Hub will generate a Public Key. You need this key for the agent configuration.

The Hardened Agent Compose File

We use network_mode: host to allow the agent to accurately report network interface statistics for the host machine. We also mount the Docker socket in read-only mode to adhere to the Principle of Least Privilege.

services:
  beszel-agent:
    image: 'henrygd/beszel-agent:latest'
    container_name: 'beszel-agent'
    restart: unless-stopped
    network_mode: host
    volumes:
      # Critical: Mount socket RO (Read-Only) for security
      - /var/run/docker.sock:/var/run/docker.sock:ro
      # Optional: Mount extra partitions if you want to monitor specific disks
      # - /mnt/storage:/extra-filesystems/sdb1:ro
    environment:
      - PORT=45876
      - KEY=YOUR_PUBLIC_KEY_FROM_HUB
      # - FILESYSTEM=/dev/sda1 # Optional: Override default root disk monitoring

Technical Breakdown

  • /var/run/docker.sock:ro: This is the critical line for Docker Container Monitoring. It allows the Beszel agent to query the Docker Daemon API to fetch real-time stats (CPU shares, memory usage) for other containers running on the host. The :ro flag keeps the socket file itself immutable; note that a read-only bind mount does not by itself block write calls over the Docker API — the agent simply only issues read queries.
  • network_mode: host: Without this, the agent would only report network traffic for its own container, which is useless for host monitoring.

Step 3: Advanced Alerting & Notification

Beszel simplifies alerting. Instead of writing alert rules in YAML files, you configure them in the GUI.

Go to Settings > Notifications. You can configure:

  • Webhooks: Standard JSON payloads for integration with custom dashboards or n8n workflows.
  • Discord/Slack: Paste your channel webhook URL.
  • Email (SMTP): For traditional alerts.

Expert Strategy: Configure a “System Offline” alert with a 2-minute threshold. Since Beszel agents push data, the Hub immediately knows when a heartbeat is missed, providing faster “Server Down” alerts than external ping checks that might be blocked by firewalls.

Comparison: Beszel vs. Prometheus Stack

For experts deciding between the two, here is the resource reality:

Feature            | Beszel                               | Prometheus + Grafana + Exporters
RAM Usage (Agent)  | ~10-15 MB                            | 100 MB+ (Node Exporter + cAdvisor)
Setup Time         | < 5 minutes                          | Hours (configuring targets, dashboards)
Data Retention     | SQLite (auto-pruning)                | TSDB (requires management for long-term)
Ideal Use Case     | VPS fleets, home labs, Docker hosts  | Kubernetes clusters, microservices tracing

Frequently Asked Questions (FAQ)

Is it safe to expose the Docker socket?

Mounting docker.sock always carries risk. The read-only (:ro) mount prevents the agent from replacing the socket file, and the agent itself only issues read queries for metrics. Be aware, however, that a read-only bind mount does not by itself prevent write calls over the Docker API; if you need a hard guarantee, place a filtering socket proxy (such as tecnativa/docker-socket-proxy) between the agent and the daemon and expose only the read endpoints.

Can I monitor remote servers behind a NAT/Firewall?

Yes, but plan for connectivity. In the standard Docker setup the Hub polls the Agent, so an Agent behind NAT is not directly reachable. You have two options:
1. Use a VPN (like Tailscale) to mesh the networks so the Hub can reach the Agent's port.
2. Use a reverse proxy (like Caddy or Nginx) on the Agent side to expose the port securely with SSL.

Does Beszel support GPU monitoring?

As of the latest versions, GPU monitoring (NVIDIA/AMD) is supported but may require passing specific hardware devices to the container or running the binary directly on the host for full driver access.

Conclusion

For Docker container monitoring, Beszel represents a shift towards “Just Enough Administration.” It removes the friction of maintaining the monitoring stack itself, allowing you to focus on the services you are actually hosting.

Your Next Step: Spin up the Beszel Hub on a low-priority VPS today. Add your most critical Docker host as a system using the :ro socket mount technique above. You will have full visibility into your container resource usage in under 10 minutes. Thank you for reading the DevopsRoles page!

Boost Kubernetes: Fast & Secure with AKS Automatic

For years, the “Promise of Kubernetes” has been somewhat at odds with the “Reality of Kubernetes.” While K8s offers unparalleled orchestration capabilities, the operational overhead for Platform Engineering teams is immense. You are constantly balancing node pool sizing, OS patching, upgrade cadences, and security baselining. Enter Kubernetes AKS Automatic.

This is not just another SKU; it is Microsoft’s answer to the “NoOps” paradigm, structurally similar to GKE Autopilot but deeply integrated into the Azure ecosystem. For expert practitioners, AKS Automatic represents a shift from managing infrastructure to managing workload definitions.

In this guide, we will dissect the architecture of Kubernetes AKS Automatic, evaluate the trade-offs regarding control vs. convenience, and provide Terraform implementation strategies for production-grade environments.

The Architectural Shift: Why AKS Automatic Matters

In a Standard AKS deployment, the responsibility model is split. Microsoft manages the Control Plane, but you own the Data Plane (Worker Nodes). If a node runs out of memory, or if an OS patch fails, that is your pager going off.

Kubernetes AKS Automatic changes this ownership model. It applies an opinionated configuration that enforces best practices by default.

1. Node Autoprovisioning (NAP)

Forget about calculating the perfect VM size for your node pools. AKS Automatic utilizes Node Autoprovisioning. Instead of static Virtual Machine Scale Sets (VMSS) that you define, NAP analyzes the pending pods in the scheduler. It looks at CPU/Memory requests, taints, and tolerations, and then spins up the exact compute resources required to fit those pods.

Pro-Tip: Under the Hood
NAP functions similarly to the open-source project Karpenter. It bypasses the traditional Cluster Autoscaler’s logic of scaling existing groups and instead provisions just-in-time compute capacity directly against the Azure Compute API.

2. Guardrails and Policies

AKS Automatic comes with Azure Policy enabled and configured in “Deny” mode for critical security baselines. This includes:

  • Disallowing Privileged Containers: Unless explicitly exempted.
  • Enforcing Resource Quotas: Pods without resource requests may be mutated or rejected to ensure the scheduler can make accurate placement decisions.
  • Network Security: Strict network policies are applied by default.
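
In practice, the "Deny" baseline means every workload must declare resource requests and a non-privileged security context before it is admitted. A minimal Deployment fragment that satisfies those defaults might look like this (image name is a placeholder):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: myregistry.example.com/web:1.0  # placeholder image
          securityContext:
            privileged: false      # required by the privileged-container deny policy
            runAsNonRoot: true
          resources:
            requests:              # required so the scheduler can place the pod
              cpu: 250m
              memory: 256Mi
            limits:
              cpu: 500m
              memory: 512Mi
```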

Deep Dive: Technical Specifications

For the Senior SRE, understanding the boundaries of the platform is critical. Here is what the stack looks like:

| Feature | Specification in AKS Automatic |
|---|---|
| CNI Plugin | Azure CNI Overlay (Powered by Cilium) |
| Ingress | Managed NGINX (via Application Routing add-on) |
| Service Mesh | Istio (Managed add-on available and recommended) |
| OS Updates | Fully Automated (Node image upgrades handled by Azure) |
| SLA | Production SLA (Uptime SLA) enabled by default |

Implementation: Deploying AKS Automatic via Terraform

As of the latest Azure providers, deploying an Automatic cluster requires specific configuration flags. Below is a production-ready snippet using the azurerm provider.

Note: Ensure you are using an azurerm provider version > 3.100 or the 4.x series.

resource "azurerm_kubernetes_cluster" "aks_automatic" {
  name                = "aks-prod-automatic-01"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  dns_prefix          = "aks-prod-auto"

  # The key differentiator for Automatic SKU
  sku_tier = "Standard" # Automatic features are enabled via run_command or specific profile flags in current GA
  
  # Automatic typically requires Managed Identity
  identity {
    type = "SystemAssigned"
  }

  # Enable the Automatic feature profile
  # Note: Syntax may vary slightly based on Preview/GA status updates
  auto_scaler_profile {
    balance_similar_node_groups = true
  }

  # Network Profile defaults for Automatic
  network_profile {
    network_plugin      = "azure"
    network_plugin_mode = "overlay"
    network_policy      = "cilium"
    load_balancer_sku   = "standard"
  }

  # Enabling the addons associated with Automatic behavior
  maintenance_window {
    allowed {
        day   = "Saturday"
        hours = [21, 23]
    }
  }
  
  tags = {
    Environment = "Production"
    ManagedBy   = "Terraform"
  }
}

Note on IaC: Microsoft is rapidly iterating on the Terraform provider support for the specific sku_tier = "Automatic" alias. Always check the official Terraform AzureRM documentation for the breaking changes in the latest provider release.

The Trade-offs: What Experts Need to Know

Moving to Kubernetes AKS Automatic is not a silver bullet. You are trading control for operational velocity. Here are the friction points you must evaluate:

1. No SSH Access

You generally cannot SSH into the worker nodes. The nodes are treated as ephemeral resources.

The Fix: Use kubectl debug node/<node-name> -it --image=mcr.microsoft.com/dotnet/runtime-deps:6.0 to launch a privileged ephemeral container for debugging.

2. DaemonSet Complexity

Since you don’t control the node pools, running DaemonSets (like heavy security agents or custom logging forwarders) can be trickier. While supported, you must ensure your DaemonSets tolerate the taints applied by the Node Autoprovisioning logic.
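
One pragmatic pattern is a blanket toleration so the DaemonSet schedules onto whatever nodes the autoprovisioner creates. Tolerating everything is a deliberate trade-off; scope it down if you rely on taints for workload isolation:

```yaml
# DaemonSet pod template fragment: tolerate any taint so the pod
# lands on every node NAP provisions, including tainted ones.
spec:
  template:
    spec:
      tolerations:
        - operator: Exists   # matches any taint key and any effect
```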

3. Cost Implications

While you save on “slack” capacity (because you don’t have over-provisioned static node pools waiting for traffic), the unit cost of compute in managed modes can sometimes be higher than Spot instances managed manually. However, for 90% of enterprises, the reduction in engineering hours spent on upgrades outweighs the raw compute premium.

Frequently Asked Questions (FAQ)

Is AKS Automatic suitable for stateful workloads?

Yes. AKS Automatic supports Azure Disk and Azure Files CSI drivers. However, because nodes can be recycled more aggressively by the autoprovisioner, ensure your applications handle `SIGTERM` gracefully and that your Persistent Volume Claims (PVCs) utilize Retain policies where appropriate to prevent accidental data loss during rapid scaling events.

Can I use Spot Instances with AKS Automatic?

Yes, AKS Automatic supports Spot VMs. You define this intent in your workload manifest (PodSpec) using nodeSelector or tolerations specifically targeting spot capability, and the provisioner will attempt to fulfill the request with Spot capacity.
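
A PodSpec fragment expressing that spot intent, using the label/taint key AKS applies to spot capacity:

```yaml
spec:
  nodeSelector:
    kubernetes.azure.com/scalesetpriority: spot
  tolerations:
    - key: kubernetes.azure.com/scalesetpriority
      operator: Equal
      value: spot
      effect: NoSchedule
```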

How does this differ from GKE Autopilot?

Conceptually, they are identical. The main difference lies in the ecosystem integration. AKS Automatic is deeply coupled with Azure Monitor, Azure Policy, and the specific versions of Azure CNI. If you are a multi-cloud shop, the developer experience (DX) is converging, but the underlying network implementation (Overlay vs VPC-native) differs.

Conclusion

Kubernetes AKS Automatic is the maturity of the cloud-native ecosystem manifesting in a product. It acknowledges that for most organizations, the value is in the application, not in curating the OS version of the worker nodes.

For the expert SRE, AKS Automatic allows you to refocus your efforts on higher-order problems: Service Mesh configurations, progressive delivery strategies (Canary/Blue-Green), and application resilience, rather than nursing a Node Pool upgrade at 2 AM.

Next Step: If you are running a Standard AKS cluster today, try creating a secondary node pool with Node Autoprovisioning enabled (preview features permitting) or spin up a sandbox AKS Automatic cluster to test your Helm charts against the stricter security policies. Thank you for reading the DevopsRoles page!

Building the Largest Kubernetes Cluster: 130k Nodes & Beyond

The official upstream documentation states that a single Kubernetes Cluster supports up to 5,000 nodes. For the average enterprise, this is overkill. For hyperscalers and platform engineers designing the next generation of cloud infrastructure, it’s merely a starting point.

When we talk about managing a fleet of 130,000 nodes, we enter a realm where standard defaults fail catastrophically. We are no longer just configuring software; we are battling the laws of physics regarding network latency, etcd storage quotas, and Go routine scheduling. This article dissects the architectural patterns, kernel tuning, and control plane sharding required to push a Kubernetes Cluster (or a unified fleet of clusters) to these extreme limits.

The “Singularity” vs. The Fleet: Defining the 130k Boundary

Before diving into the sysctl flags, let’s clarify the architecture. Running 130k nodes in a single control plane is currently theoretically impossible with vanilla upstream Kubernetes due to the etcd hard storage limit (8GB recommended max) and the sheer volume of watch events.

Achieving this scale requires one of two approaches:

  1. The “Super-Cluster” (Heavily Modified): Utilizing sharded API servers and segmented etcd clusters (splitting events from resources) to push a single cluster ID towards 10k–15k nodes.
  2. The Federated Fleet: Managing 130k nodes across multiple clusters via a unified control plane (like Karmada or custom controllers) that abstracts the “cluster” concept away from the user.

We will focus on optimizing the unit—the Kubernetes Cluster—to its absolute maximum, as these optimizations are prerequisites for any large-scale fleet.

Phase 1: Surgical Etcd Tuning

At scale, etcd is almost always the first bottleneck. In a default Kubernetes Cluster, etcd stores both cluster state (Pods, Services) and high-frequency events. At 10,000+ nodes, the write IOPS from Kubelet heartbeats and event recording will bring the cluster to its knees.

1. Vertical Sharding of Etcd

You must split your etcd topology. Never run events in the same etcd instance as your cluster configuration.

# Example API Server flags to split storage
--etcd-servers="https://etcd-main-0:2379,https://etcd-main-1:2379,..."
--etcd-servers-overrides="/events#https://etcd-events-0:2379,https://etcd-events-1:2379,..."

2. Compression and Quotas

The default 2GB quota is insufficient. Increase the backend quota to 8GB (the practical safety limit). Furthermore, enable compression in the API server to reduce the payload size hitting etcd.

Pro-Tip: Monitor the etcd_mvcc_db_total_size_in_bytes metric religiously. If this hits the quota, your cluster enters a read-only state. Implement aggressive defragmentation schedules (e.g., every hour) for the events cluster, as high churn creates massive fragmentation.
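
Concretely, the quota is raised with an etcd server flag (value in bytes), and fragmentation is reclaimed with etcdctl; the endpoint below is illustrative:

```
# etcd server flag: raise the backend quota to 8 GiB
--quota-backend-bytes=8589934592

# Hourly defragmentation for the high-churn events cluster (e.g., from cron)
etcdctl --endpoints=https://etcd-events-0:2379 defrag
```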

Phase 2: The API Server & Control Plane

The kube-apiserver is the CPU-hungry brain. In a massive Kubernetes Cluster, the cost of serialization and deserialization (encoding/decoding JSON/Protobuf) dominates CPU cycles.

Priority and Fairness (APF)

Introduced to prevent controller loops from DDoSing the API server, APF is critical at scale. You must define custom FlowSchemas and PriorityLevelConfigurations. The default “catch-all” buckets will fill up instantly with 10k nodes, causing legitimate administrative calls (`kubectl get pods`) to time out.

apiVersion: flowcontrol.apiserver.k8s.io/v1beta3
kind: PriorityLevelConfiguration
metadata:
  name: system-critical-high
spec:
  type: Limited
  limited:
    assuredConcurrencyShares: 50
    limitResponse:
      type: Queue

Disable Unnecessary API Watches

Every node runs a kube-proxy and a kubelet. If you have 130k nodes, that is 130k watchers. When a widely watched resource changes (like an EndpointSlice update), the API server must serialize and fan out that update 130k times.

  • Optimization: Use EndpointSlices instead of Endpoints.
  • Optimization: Set --watch-cache-sizes manually for high-churn resources to prevent cache misses which force expensive calls to etcd.

Phase 3: The Scheduler Throughput Challenge

The default Kubernetes scheduler evaluates every feasible node to find the “best” fit. With 130k nodes (or even 5k), scanning every node is O(N) complexity that results in massive scheduling latency.

You must tune the percentageOfNodesToScore parameter.

apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
percentageOfNodesToScore: 5  # Only look at 5% of nodes before making a decision
profiles:
  - schedulerName: default-scheduler

By lowering this to 5% (or even less in hyperscale environments), you trade a theoretical “perfect” placement for the ability to actually schedule pods in a reasonable timeframe.

Phase 4: Networking (CNI) at Scale

In a massive Kubernetes Cluster, iptables is the enemy. It relies on linear list traversal for rule updates. At 5,000 services, iptables becomes a noticeable CPU drag. At larger scales, it renders the network unusable.

IPVS vs. eBPF

While IPVS (IP Virtual Server) uses hash tables and offers O(1) complexity, modern high-scale clusters are moving entirely to eBPF (Extended Berkeley Packet Filter) solutions like Cilium.

  • Why: eBPF bypasses the host networking stack for pod-to-pod communication, significantly reducing latency and CPU overhead.
  • Identity Management: At 130k nodes, storing IP-to-Pod mappings is expensive. eBPF-based CNIs can use identity-based security policies rather than IP-based, which scales better in high-churn environments.
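
As a sketch, an identity-based Cilium policy selects workloads by label rather than IP, so it remains valid no matter how often pods churn:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-frontend-to-api
spec:
  endpointSelector:
    matchLabels:
      app: api          # policy applies to all pods with this identity
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: frontend  # identity-based source selection, no IP lists
      toPorts:
        - ports:
            - port: "8080"
              protocol: TCP
```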

Phase 5: The Node (Kubelet) Perspective

Often overlooked, the Kubelet itself can DDoS the control plane.

  • Heartbeats: Adjust --node-status-update-frequency. In a 130k node environment (likely federated), you do not need 10-second heartbeats. Increasing this to 1 minute drastically reduces API server load.
  • Image Pulls: Enable parallel image pulls (`--serialize-image-pulls=false`) with care. While parallel pulling is faster, it can spike disk I/O and network bandwidth, causing the node to go NotReady under load.
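
Both knobs live in the KubeletConfiguration file rather than in CLI flags; a fragment reflecting the tuning above:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Relax heartbeats from the 10s default to cut API server write load
nodeStatusUpdateFrequency: "1m"
# Allow parallel image pulls - monitor disk I/O and network saturation
serializeImagePulls: false
```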

Frequently Asked Questions (FAQ)

What is the hard limit for a single Kubernetes Cluster?

As of Kubernetes v1.29+, the official scalability thresholds are 5,000 nodes, 150,000 total pods, and 300,000 total containers. Exceeding this requires significant customization of the control plane, specifically around etcd storage and API server caching mechanisms.

How do Alibaba and Google run larger clusters?

Tech giants often run customized versions of Kubernetes. They utilize techniques like “Cell” architectures (sharding the cluster into smaller failure domains), custom etcd storage drivers, and highly optimized networking stacks that replace standard Kube-Proxy implementations.

Should I use Federation or one giant cluster?

For 99% of use cases, Federation (multi-cluster) is superior. It provides better isolation, simpler upgrades, and drastically reduces the blast radius of a failure. Managing a single Kubernetes Cluster of 10k+ nodes is a high-risk operational endeavor.

Conclusion

Building a Kubernetes Cluster that scales toward the 130k node horizon is less about installing software and more about systems engineering. It requires a deep understanding of the interaction between the etcd key-value store, the Go runtime scheduler, and the Linux kernel networking stack.

While the allure of a single massive cluster is strong, the industry best practice for reaching this scale involves a sophisticated fleet management strategy. However, applying the optimizations discussed here-etcd sharding, APF tuning, and eBPF networking-will make your clusters, regardless of size, more resilient and performant. Thank you for reading the DevopsRoles page!

Automate Software Delivery & Deployment with DevOps

At the Senior Staff level, we know that DevOps automation is no longer just about writing bash scripts to replace manual server commands. It is about architecting self-sustaining platforms that treat infrastructure, security, and compliance as first-class software artifacts. In an era of microservices sprawl and multi-cloud complexity, the goal is to decouple deployment complexity from developer velocity.

This guide moves beyond the basics of CI/CD. We will explore how to implement rigorous DevOps automation strategies using GitOps patterns, Policy as Code (PaC), and ephemeral environments to achieve the elite performance metrics defined by DORA (DevOps Research and Assessment).

The Shift: From Scripting to Platform Engineering

Historically, automation was imperative: “Run this script to install Nginx.” Today, expert automation is declarative and convergent. We define the desired state, and autonomous controllers ensure the actual state matches it. This shift is crucial for scaling.

When we talk about automating software delivery in 2025, we are orchestrating a complex interaction between:

  • Infrastructure Provisioning: Dynamic, immutable resources.
  • Application Delivery: Progressive delivery (Canary/Blue-Green).
  • Governance: Automated guardrails that prevent bad configurations from ever reaching production.

Pro-Tip: Don’t just automate the “Happy Path.” True DevOps automation resilience comes from automating the failure domains—automatic rollbacks based on Prometheus metrics, self-healing infrastructure nodes, and automated certificate rotation.

Core Pillars of Advanced DevOps Automation

1. GitOps: The Single Source of Truth

GitOps elevates DevOps automation by using Git repositories as the source of truth for both infrastructure and application code. Tools like ArgoCD or Flux do not just “deploy”; they continuously reconcile the cluster state with the Git state.

This creates an audit trail for every change and eliminates “configuration drift”—the silent killer of reliability. If a human manually changes a Kubernetes deployment, the GitOps controller detects the drift and reverts it immediately.
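
In ArgoCD terms, that drift reversion is the selfHeal flag on an automated sync policy; the repo URL and path below are placeholders:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-service
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/infra-repo.git  # placeholder repo
    targetRevision: main
    path: apps/my-service
  destination:
    server: https://kubernetes.default.svc
    namespace: my-service
  syncPolicy:
    automated:
      prune: true     # delete resources that were removed from Git
      selfHeal: true  # revert manual changes made directly to the cluster
```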

2. Policy as Code (PaC)

In a high-velocity environment, manual security reviews are a bottleneck. PaC automates compliance. By using the Open Policy Agent (OPA), you can write policies that reject deployments if they don’t meet security standards (e.g., running as root, missing resource limits).

Here is a practical example of a Rego policy ensuring no container runs as root:

package kubernetes.admission

deny[msg] {
    input.request.kind.kind == "Pod"
    input.request.operation == "CREATE"
    container := input.request.object.spec.containers[_]
    container.securityContext.runAsNonRoot != true
    msg := sprintf("Container '%v' must set securityContext.runAsNonRoot to true", [container.name])
}

Integrating this into your pipeline or admission controller ensures that DevOps automation acts as a security gatekeeper, not just a delivery mechanism.

3. Ephemeral Environments

Static staging environments are often broken or outdated. A mature automation strategy involves spinning up full-stack ephemeral environments for every Pull Request. This allows QA and Product teams to test changes in isolation before merging.

Using tools like Crossplane or Terraform within your CI pipeline, you can provision a namespace, database, and ingress route dynamically, run integration tests, and tear it down automatically to save costs.
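
A hedged GitHub Actions sketch of that per-PR lifecycle (job names, the `env_id` variable, and the Terraform workspace layout are assumptions for illustration):

```yaml
name: pr-ephemeral-env
on:
  pull_request:
    types: [opened, synchronize, closed]

jobs:
  preview:
    if: github.event.action != 'closed'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Provision per-PR environment
        run: |
          terraform init
          terraform workspace select -or-create pr-${{ github.event.number }}
          terraform apply -auto-approve -var="env_id=pr-${{ github.event.number }}"

  teardown:
    if: github.event.action == 'closed'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Destroy per-PR environment
        run: |
          terraform init
          terraform workspace select pr-${{ github.event.number }}
          terraform destroy -auto-approve -var="env_id=pr-${{ github.event.number }}"
```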

Orchestrating the Pipeline: A Modern Approach

To achieve true DevOps automation, your pipeline should resemble an assembly line with distinct stages of verification. It is not enough to simply build a Docker image; you must sign it, scan it, and attest its provenance.

Example: Secure Supply Chain Pipeline

Below is a conceptual high-level workflow for a secure, automated delivery pipeline:

  1. Code Commit: Triggers CI.
  2. Lint & Unit Test: Fast feedback loops.
  3. SAST/SCA Scan: Check for vulnerabilities in code and dependencies.
  4. Build & Sign: Build the artifact and sign it (e.g., Sigstore/Cosign).
  5. Deploy to Ephemeral: Dynamic environment creation.
  6. Integration Tests: E2E testing against the ephemeral env.
  7. GitOps Promotion: CI opens a PR to the infrastructure repo to update the version tag for production.

Advanced Concept: Implement “Progressive Delivery” using a Service Mesh (like Istio or Linkerd). Automate the traffic shift so that a new version receives only 1% of traffic initially. If error rates spike (measured by Prometheus), the automation automatically halts the rollout and reverts traffic to the stable version without human intervention.
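
With Argo Rollouts, for example, the 1% traffic step and the metric-driven abort are declarative (the AnalysisTemplate name is a placeholder; selector and pod template are omitted for brevity):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: my-service
spec:
  strategy:
    canary:
      steps:
        - setWeight: 1            # shift 1% of traffic to the new version
        - pause: {duration: 10m}
        - analysis:               # Prometheus-backed check; aborts on failure
            templates:
              - templateName: error-rate-check  # placeholder AnalysisTemplate
        - setWeight: 50
        - pause: {duration: 10m}
```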

Frequently Asked Questions (FAQ)

What is the difference between CI/CD and DevOps Automation?

CI/CD (Continuous Integration/Continuous Delivery) is a subset of DevOps Automation. CI/CD focuses specifically on the software release lifecycle. DevOps automation is broader, encompassing infrastructure provisioning, security policy enforcement, log management, database maintenance, and self-healing operational tasks.

How do I measure the ROI of DevOps Automation?

Focus on the DORA metrics: Deployment Frequency, Lead Time for Changes, Time to Restore Service, and Change Failure Rate. Automation should directly correlate with an increase in frequency and a decrease in lead time and failure rates.

Can you automate too much?

Yes. “Automating the mess” just makes the mess faster. Before applying automation, ensure your processes are optimized. Additionally, avoid automating tasks that require complex human judgment or are done so rarely that the engineering effort to automate exceeds the time saved (xkcd theory of automation).

Conclusion

Mastering DevOps automation requires a mindset shift from “maintaining servers” to “engineering platforms.” By leveraging GitOps for consistency, Policy as Code for security, and ephemeral environments for testing velocity, you build a system that is resilient, scalable, and efficient.

The ultimate goal of automation is to make the right way of doing things the easiest way. As you refine your pipelines, focus on observability and feedback loops—because an automated system that fails silently is worse than a manual one. Thank you for reading the DevopsRoles page!

7 Tips for Docker Security Hardening on Production Servers

In a world where containerized applications are the backbone of micro‑service architectures, Docker Security Hardening is no longer optional—it’s essential. As you deploy containers in production, you’re exposed to a range of attack vectors: privilege escalation, image tampering, insecure runtime defaults, and more. This guide walks you through seven battle‑tested hardening techniques that protect your Docker hosts, images, and containers from the most common threats, while keeping your DevOps workflows efficient.

Tip 1: Choose Minimal Base Images

Every extra layer in your image is a potential attack surface. By selecting a slim, purpose‑built base—such as alpine, distroless, or a minimal debian variant—you reduce the number of packages, libraries, and compiled binaries that attackers can exploit. Minimal images also shrink your image size, improving deployment times.

  • Use --platform to lock the OS architecture.
  • Remove build tools after compilation. For example, install gcc just for the build step, then delete it in the final image.
  • Leverage multi‑stage builds. This technique allows you to compile from a full Debian image but copy only the artifacts into a lightweight runtime image.
# Dockerfile example: multi‑stage build
FROM golang:1.22 AS builder
WORKDIR /app
COPY . .
RUN go build -o myapp .

FROM alpine:3.20
WORKDIR /app
COPY --from=builder /app/myapp .
CMD ["./myapp"]

Tip 2: Run Containers as a Non‑Root User

Containers default to the root user, which grants full host access if the container is compromised. Creating a dedicated user in the image and using the --user flag mitigates this risk. Docker also supports USER directives in the Dockerfile to enforce this at build time.

# Dockerfile snippet
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
USER appuser

When running the container, you can double‑check the user with:

docker run --rm myimage id

Tip 3: Use Read‑Only Filesystems

Mount the container’s filesystem as read‑only to prevent accidental or malicious modifications. If your application needs to write logs or temporary data, mount dedicated writable volumes. This practice limits the impact of a compromised container and protects the integrity of your image.

docker run --read-only --mount type=tmpfs,destination=/tmp myimage

Tip 4: Limit Capabilities and Disable Privileged Mode

Docker grants all Linux capabilities by default, many of which are unnecessary for most services. Use the --cap-drop flag to remove them, and drop the dangerous SYS_ADMIN capability unless explicitly required.

docker run --cap-drop ALL --cap-add NET_BIND_SERVICE myimage

Privileged mode should be a last resort. If you must enable it, isolate the container in its own network namespace and use user namespaces for added isolation.
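
User namespace remapping is a daemon-level setting in /etc/docker/daemon.json, which maps container root to an unprivileged host UID range:

```json
{
  "userns-remap": "default"
}
```

After adding this, restart the Docker daemon; existing images and containers are isolated under the remapped storage path.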

Tip 5: Enforce Security Profiles – SELinux and AppArmor

Linux security modules like SELinux and AppArmor add mandatory access control (MAC) that further restricts container actions. Enabling them on the Docker host and binding a profile to your container strengthens the barrier between the host and the container.

  • SELinux: Use --security-opt label=type:my_label_t when running containers.
  • AppArmor: Apply a custom profile via --security-opt apparmor=myprofile.

For detailed guidance, consult the Docker documentation on Seccomp and AppArmor integration.

Tip 6: Use Docker Secrets and Avoid Environment Variables for Sensitive Data

Storing secrets in environment variables or plain text files is risky because they can leak via container logs or process listings. Docker Secrets, managed through Docker Swarm or orchestrators like Kubernetes, keep secrets encrypted at rest and provide runtime injection.

# Create a secret
echo "my-super-secret" | docker secret create my_secret -

# Deploy service with the secret
docker service create --name myapp --secret my_secret myimage

If you’re not using Swarm, consider external secret managers such as HashiCorp Vault or AWS Secrets Manager.

Tip 7: Keep Images Updated and Scan for Vulnerabilities

Image drift and outdated dependencies can expose known CVEs. Automate image updates using tools like Anchore Engine or Docker’s own image scanning feature. Sign your images with Docker Content Trust to ensure provenance and integrity.

# Enable Docker Content Trust
export DOCKER_CONTENT_TRUST=1

# Sign image
docker trust sign myimage:latest

Run an image scan during CI to catch vulnerabilities early (the legacy docker scan command has been retired in favor of Docker Scout):

docker scout cves myimage:latest

Frequently Asked Questions

What is the difference between Docker Security Hardening and general container security?

Docker Security Hardening focuses on the specific configuration options, best practices, and tooling available within the Docker ecosystem—such as Dockerfile directives, runtime flags, and Docker’s built‑in scanning—while general container security covers cross‑platform concerns that apply to any OCI‑compatible runtime.

Do I need to re‑build images after applying hardening changes?

Any change that affects the container’s runtime behavior (like adding USER or --cap-drop) requires a new image layer. It’s good practice to rebuild and re‑tag the image to preserve a clean history.

Can I trust --read-only to fully secure my container?

It significantly reduces modification risks, but it’s not a silver bullet. Combine it with other hardening techniques, and never rely on a single configuration to protect your entire stack.

Conclusion

Implementing these seven hardening measures is the cornerstone of a robust Docker production environment. Minimal base images, non‑root users, read‑only filesystems, limited capabilities, enforced MAC profiles, secret management, and continuous image updates together create a layered defense strategy that defends against privilege escalation, CVE exploitation, and data leakage. By routinely auditing your Docker host and container configurations, you’ll ensure that Docker Security Hardening remains an ongoing commitment, keeping your micro‑services resilient, compliant, and ready for any future threat. Thank you for reading the DevopsRoles page!

VMware Migration: Boost Workflows with Agentic AI

The infrastructure landscape has shifted seismically. Following broad market consolidations and licensing changes, VMware migration has graduated from a “nice-to-have” modernization project to a critical boardroom imperative. For Enterprise Architects and Senior DevOps engineers, the challenge isn’t just moving bits—it’s untangling decades of technical debt, undocumented dependencies, and “pet” servers without causing business downtime.

Traditional migration strategies often rely on “Lift and Shift” approaches that carry legacy problems into new environments. This is where Agentic AI—autonomous AI systems capable of reasoning, tool use, and execution—changes the calculus. Unlike standard generative AI which simply suggests code, Agentic AI can actively analyze vSphere clusters, generate target-specific Infrastructure as Code (IaC), and execute validation tests.

In this guide, we will dissect how to architect an agent-driven migration pipeline, moving beyond simple scripts to intelligent, self-correcting workflows.

The Scale Problem: Why Traditional Scripts Fail

In a typical enterprise environment managing thousands of VMs, manual migration via UI wizards or basic PowerCLI scripts hits a ceiling. The complexity isn’t in the data transfer (rsync is reliable); the complexity is in the context.

  • Opaque Dependencies: That legacy database VM might have hardcoded IP dependencies in an application server three VLANs away.
  • Configuration Drift: What is defined in your CMDB often contradicts the actual running state in vCenter.
  • Target Translation: Mapping a Distributed Resource Scheduler (DRS) rule from VMware to a Kubernetes PodDisruptionBudget or an AWS Auto Scaling Group requires semantic understanding, not just format conversion.
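
To make the translation concrete: a VMware availability rule has no literal Kubernetes equivalent, and the closest semantic target is usually a PodDisruptionBudget (often paired with anti-affinity). A minimal sketch, with illustrative labels:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: postgres-pdb
spec:
  minAvailable: 1        # never voluntarily evict below one running replica
  selector:
    matchLabels:
      app: postgres      # illustrative label
```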

Pro-Tip: The “6 Rs” Paradox
While AWS defines the “6 Rs” of migration (Rehost, Replatform, etc.), Agentic AI blurs the line between Rehost and Refactor. By using agents to automatically generate Terraform during the move, you can achieve a “Refactor-lite” outcome with the speed of a Rehost.

Architecture: The Agentic Migration Loop

To leverage AI effectively, we treat the migration as a software problem. We employ “Agents”—LLMs wrapped with execution environments (like LangChain or AutoGen)—that have access to specific tools.

1. The Discovery Agent (Observer)

Instead of relying on static Excel sheets, a Discovery Agent connects to the vSphere API and SSH terminals. It doesn’t just list VMs; it builds a semantic graph.

  • Tool Access: govc (Go vSphere Client), netstat, traffic flow logs.
  • Task: Identify “affinity groups.” If VM A and VM B talk 5,000 times an hour, the Agent tags them to migrate in the same wave.
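
The affinity-group idea reduces to clustering VMs whose observed connection counts exceed a threshold. A self-contained sketch (the flow data and the 5,000-connections-per-hour threshold mirror the example above; in production the flows would come from NetFlow or vCenter, not a dict):

```python
from collections import defaultdict

def affinity_groups(flows, threshold=5000):
    """Group VMs into migration waves using union-find over chatty pairs.

    flows: dict mapping (vm_a, vm_b) -> observed connections per hour.
    Pairs at or above `threshold` must migrate in the same wave.
    """
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for (a, b), count in flows.items():
        find(a), find(b)          # register both VMs, even quiet ones
        if count >= threshold:
            union(a, b)           # chatty pair -> same migration wave

    groups = defaultdict(set)
    for vm in parent:
        groups[find(vm)].add(vm)
    # Largest waves first; members sorted for deterministic output
    return [sorted(g) for g in sorted(groups.values(), key=lambda g: (-len(g), min(g)))]

flows = {
    ("app-01", "db-01"): 5200,    # chatty pair -> same wave
    ("app-01", "cache-01"): 120,  # light traffic -> independent
    ("web-01", "app-01"): 7000,
}
print(affinity_groups(flows))  # → [['app-01', 'db-01', 'web-01'], ['cache-01']]
```

Transitivity falls out of the union-find structure: web-01 never talks to db-01 directly, but both end up in the same wave through app-01.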

2. The Transpiler Agent (Architect)

This agent takes the source configuration (VMX files, NSX rules) and “transpiles” them into the target dialect (Terraform for AWS, YAML for KubeVirt/OpenShift).

3. The Validation Agent (Tester)

Before any switch is flipped, this agent spins up a sandbox environment, applies the new config, and runs smoke tests. If a test fails, the agent reads the error log, adjusts the Terraform code, and retries—autonomously.

Technical Implementation: Building a Migration Agent

Let’s look at a simplified Python representation of how you might structure a LangChain agent to analyze a VMware VM and generate a corresponding KubeVirt manifest.

import os
from langchain.agents import initialize_agent, Tool
from langchain.llms import OpenAI

# Mock function to simulate vSphere API call
def get_vm_config(vm_name):
    # In production, use pyvmomi or govc here
    return f"""
    VM: {vm_name}
    CPUs: 4
    RAM: 16GB
    Network: VLAN_10 (192.168.10.x)
    Storage: 500GB vSAN
    Annotations: "Role: Postgres Primary"
    """

# Tool definition for the Agent
tools = [
    Tool(
        name="GetVMConfig",
        func=get_vm_config,
        description="Useful for retrieving current hardware specs of a VMware VM."
    )
]

# The Prompt Template instructs the AI on specific migration constraints
system_prompt = """
You are a Senior DevOps Migration Assistant. 
Your goal is to convert VMware configurations into KubeVirt (VirtualMachineInstance) YAML.
1. Retrieve the VM config.
2. Map VLANs to Multus CNI network-attachment-definitions.
3. Add a 'migration-wave' label based on the annotations.
"""

# Initialize the Agent (requires OPENAI_API_KEY in the environment)
llm = OpenAI(temperature=0)  # deterministic output suits generated infra code
agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)

# Execution: prepend the system prompt so the agent honors the constraints
response = agent.run(system_prompt + "\nGenerate a KubeVirt manifest for vm-postgres-01")

The magic here isn’t the string formatting; it’s the reasoning. If the agent sees “Role: Postgres Primary”, it can be instructed (via system prompt) to automatically add a podAntiAffinity rule to the generated YAML to ensure high availability in the new cluster.
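For illustration, here is roughly what the generated manifest could look like. The VM name, labels, and Multus network name are hypothetical; the point is the `podAntiAffinity` block, which keeps two Postgres primaries off the same node:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  name: vm-postgres-01
  labels:
    role: postgres-primary
    migration-wave: "wave-1"
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              role: postgres-primary
          topologyKey: kubernetes.io/hostname
  domain:
    cpu:
      cores: 4
    resources:
      requests:
        memory: 16Gi
    devices:
      interfaces:
        - name: vlan10
          bridge: {}
  networks:
    - name: vlan10
      multus:
        networkName: vlan-10-net
```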

Strategies for Target Environments

Your VMware migration strategy depends heavily on where the workloads are landing.

  • Public Cloud (AWS/Azure): Right-size instances to avoid over-provisioning cost shock; agents analyze historical CPU/RAM usage (95th percentile) rather than allocated specs. Key tooling: Terraform, Packer, CloudEndure.
  • KubeVirt / OpenShift: Convert vSwitch networking to CNI/Multus configurations and map storage classes (vSAN to ODF/Ceph). Key tooling: Konveyor, oc CLI, kustomize.
  • Bare Metal (Nutanix/KVM): Handle driver compatibility (VirtIO) and preserve MAC addresses for license-bound legacy software. Key tooling: virt-v2v, Ansible.

Best Practices & Guardrails

While “Agentic” implies autonomy, migration requires strict guardrails. We are dealing with production data.

1. Read-Only Access by Default

Ensure your Discovery Agents have Read-Only permissions in vCenter. Agents should generate *plans* (Pull Requests), not execute changes directly against production without human approval (Human-in-the-Loop).

2. The “Plan, Apply, Rollback” Pattern

Use your agents to generate Terraform Plans. These plans serve as the artifact for review. If the migration fails during execution, the agent must have a pre-generated rollback script ready.

3. Hallucination Checks

LLMs can hallucinate configuration parameters that don’t exist. Implement a “Linter Agent” step where the output of the “Architect Agent” is validated against the official schema (e.g., kubectl validate or terraform validate) before it ever reaches a human reviewer.
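A toy illustration of the idea in Python. A real pipeline would shell out to `kubectl apply --dry-run=server` or `terraform validate`; the allowed-keys set here is a deliberately tiny stand-in for the real VirtualMachineInstance schema:

```python
# Minimal stand-in for a schema: top-level VMI spec keys we accept
VMI_TOP_LEVEL_SPEC_KEYS = {"domain", "networks", "volumes", "affinity",
                           "nodeSelector", "terminationGracePeriodSeconds"}

def lint_spec(spec, allowed_keys=VMI_TOP_LEVEL_SPEC_KEYS):
    """Return the keys an LLM hallucinated into the generated spec."""
    return sorted(k for k in spec if k not in allowed_keys)

# "autoHealing" is a plausible-sounding field that does not exist
generated = {"domain": {}, "networks": [], "autoHealing": True}
print(lint_spec(generated))  # → ['autoHealing']
```

Rejecting unknown keys before human review keeps hallucinated parameters from silently surviving into the pull request.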

Frequently Asked Questions (FAQ)

Can AI completely automate a VMware migration?

Not 100%. Agentic AI is excellent at the “heavy lifting” of discovery, dependency mapping, and code generation. However, final cutover decisions, complex business logic validation, and UAT (User Acceptance Testing) sign-off should remain human-led activities.

How does Agentic AI differ from using standard migration tools like HCX?

VMware HCX is a transport mechanism. Agentic AI operates at the logic layer. HCX moves the bits; Agentic AI helps you decide what to move, when to move it, and automatically refactors the infrastructure-as-code wrappers around the VM for the new environment.

What is the biggest risk in AI-driven migration?

Context loss. If an agent refactors a network configuration without understanding the security group implications, it could expose a private database to the public internet. Always use Policy-as-Code (e.g., OPA Gatekeeper or Sentinel) to validate agent outputs.
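The "exposed database" case can be caught mechanically. OPA or Sentinel is the production-grade route, but the check itself is simple; this sketch walks the `terraform show -json` plan structure (the resource addresses are made up for the example):

```python
def deny_open_ingress(plan):
    """Flag security group rules open to 0.0.0.0/0 in a Terraform plan JSON."""
    violations = []
    for rc in plan.get("resource_changes", []):
        if rc.get("type") != "aws_security_group_rule":
            continue
        after = (rc.get("change") or {}).get("after") or {}
        if "0.0.0.0/0" in (after.get("cidr_blocks") or []):
            violations.append(rc["address"])
    return violations

# A fragment of what `terraform show -json tfplan` emits
plan = {
    "resource_changes": [
        {"address": "aws_security_group_rule.db_ingress",
         "type": "aws_security_group_rule",
         "change": {"after": {"cidr_blocks": ["0.0.0.0/0"]}}},
    ]
}
print(deny_open_ingress(plan))  # → ['aws_security_group_rule.db_ingress']
```

Wired into CI, a non-empty result fails the pipeline before the agent-generated plan can ever be applied.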

Conclusion

The era of the “spreadsheet migration” is ending. By integrating Agentic AI into your VMware migration pipelines, you do more than just speed up the process—you increase accuracy and reduce the technical debt usually incurred during these high-pressure transitions.

Start small. Deploy a “Discovery Agent” to map a non-critical cluster. Audit its findings against your manual documentation. You will likely find that the AI sees connections you missed, proving the value of machine intelligence in modern infrastructure operations. Thank you for reading the DevopsRoles page!

Deploy Rails Apps for $5/Month: Vultr VPS Hosting Guide

Moving from a Platform-as-a-Service (PaaS) like Heroku to a Virtual Private Server (VPS) is a rite of passage for many Ruby developers. While PaaS offers convenience, the cost scales aggressively. If you are looking to deploy Rails apps with full control over your infrastructure, low latency, and predictable pricing, a $5/month VPS from a provider like Vultr is an unbeatable solution.

However, with great power comes great responsibility. You are no longer just an application developer; you are now the system administrator. This guide will walk you through setting up a production-hardened Linux environment, tuning PostgreSQL for low-memory servers, and configuring the classic Nginx/Puma stack for maximum performance.

Why Choose a VPS for Rails Deployment?

Before diving into the terminal, it is essential to understand the architectural trade-offs. When you deploy Rails apps on a raw VPS, you gain:

  • Cost Efficiency: A $5 Vultr instance (usually 1 vCPU, 1GB RAM) can easily handle hundreds of requests per minute if optimized correctly.
  • No “Sleeping” Dynos: Unlike free or cheap PaaS tiers, your VPS is always on. Background jobs (Sidekiq/Resque) run without needing expensive add-ons.
  • Environment Control: You choose the specific version of Linux, the database configuration, and the system libraries (e.g., ImageMagick, libvips).

Pro-Tip: Managing Resources
A 1GB RAM server is tight for modern Rails apps. The secret to stability on a $5 VPS is Swap Memory. Without it, your server will crash during memory-intensive tasks like bundle install or Webpacker compilation. We will cover this in step 2.

🚀 Prerequisite: Get Your Server

To follow this guide, you need a fresh Ubuntu VPS. We recommend Vultr for its high-performance SSDs and global locations.

Deploy Instance on Vultr →

(New users often receive free credits via this link)

Step 1: Server Provisioning and Initial Security

Assuming you have spun up a fresh Ubuntu 22.04 or 24.04 LTS instance on Vultr, the first step is to secure it. Do not deploy as root.

1.1 Create a Deploy User

Log in as root and create a user with sudo privileges. We will name ours deploy.

adduser deploy
usermod -aG sudo deploy
# Switch to the new user
su - deploy

1.2 SSH Hardening

Password authentication is a security risk. Copy your local SSH public key to the server (ssh-copy-id deploy@your_server_ip), then disable password login.

sudo nano /etc/ssh/sshd_config

# Change these lines:
PermitRootLogin no
PasswordAuthentication no

Validate the configuration first with sudo sshd -t, then restart SSH: sudo systemctl restart ssh. (A typo in sshd_config plus a restart can lock you out.)

1.3 Firewall Configuration (UFW)

Set up a basic firewall that allows only SSH, HTTP, and HTTPS connections.

sudo ufw allow OpenSSH
sudo ufw allow 'Nginx Full'
sudo ufw enable

Step 2: Performance Tuning (Crucial for $5 Instances)

Rails is memory hungry. To successfully deploy Rails apps on limited hardware, you must set up a Swap file. This acts as “virtual RAM” on your SSD.

# Allocate 1GB or 2GB of swap
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Make it permanent
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab

Adjust the “Swappiness” value to 10 (default is 60) to tell the OS to prefer RAM over Swap unless absolutely necessary.

sudo sysctl vm.swappiness=10
echo 'vm.swappiness=10' | sudo tee -a /etc/sysctl.conf

Step 3: Installing the Stack (Ruby, Node, Postgres, Redis)

3.1 Dependencies

Update your system and install the build tools required for compiling Ruby.

sudo apt update && sudo apt upgrade -y
sudo apt install -y git curl libssl-dev libreadline-dev zlib1g-dev \
autoconf bison build-essential libyaml-dev \
libncurses5-dev libffi-dev libgdbm-dev

3.2 Ruby (via rbenv)

We recommend rbenv over RVM for production environments due to its lightweight nature.

# Install rbenv
git clone https://github.com/rbenv/rbenv.git ~/.rbenv
echo 'export PATH="$HOME/.rbenv/bin:$PATH"' >> ~/.bashrc
echo 'eval "$(rbenv init -)"' >> ~/.bashrc
exec $SHELL

# Install ruby-build
git clone https://github.com/rbenv/ruby-build.git ~/.rbenv/plugins/ruby-build

# Install Ruby (Replace 3.3.0 with your project version)
rbenv install 3.3.0
rbenv global 3.3.0

3.3 Database: PostgreSQL

Install PostgreSQL and create a database user.

sudo apt install -y postgresql postgresql-contrib libpq-dev

# Create a postgres user matching your system user
sudo -u postgres createuser -s deploy

Optimization Note: On a 1GB server, every megabyte counts. Edit /etc/postgresql/16/main/postgresql.conf (the version in the path varies by Ubuntu release) and cap shared_buffers at 128MB or lower; the usual advice to raise it toward 25% of RAM backfires on small instances, starving your Rails application.
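As a starting point, the following conservative values keep PostgreSQL's footprint small on a 1GB instance. They are illustrative, not benchmarked; tune against your actual workload:

```ini
# /etc/postgresql/16/main/postgresql.conf (version path varies)
shared_buffers = 128MB          # keep small; the OS page cache does the rest
effective_cache_size = 512MB    # planner hint: what the OS can realistically cache
work_mem = 4MB                  # per-sort memory; low to avoid OOM under load
max_connections = 20            # a small Puma/Sidekiq pool rarely needs more
```

Apply the changes with `sudo systemctl restart postgresql`.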

Step 4: The Application Server (Puma & Systemd)

You shouldn’t run Rails using rails server in production. We use Puma managed by Systemd. This ensures your app restarts automatically if it crashes or the server reboots.

First, clone your Rails app into /var/www/my_app and run bundle install. Then, create a systemd service file.

File: /etc/systemd/system/my_app.service

[Unit]
Description=Puma HTTP Server
After=network.target

[Service]
# Foreground process (do not use --daemon in ExecStart or config.rb)
Type=simple

# User and Group the process will run as
User=deploy
Group=deploy

# Working Directory
WorkingDirectory=/var/www/my_app/current

# Environment Variables
Environment=RAILS_ENV=production

# ExecStart command
ExecStart=/home/deploy/.rbenv/shims/bundle exec puma -C /var/www/my_app/shared/puma.rb

Restart=always
KillSignal=SIGTERM

[Install]
WantedBy=multi-user.target
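The unit above expects a Puma config at /var/www/my_app/shared/puma.rb, bound to the UNIX socket that Nginx will proxy to in Step 5. A minimal single-mode config for a 1 vCPU / 1GB box might look like this (paths and thread counts are assumptions to adjust for your app):

```ruby
# /var/www/my_app/shared/puma.rb
environment ENV.fetch("RAILS_ENV") { "production" }

# Single mode: cluster workers cost roughly 200-300MB each, too much for 1GB RAM
workers 0
threads 1, 5

# Bind to the UNIX socket Nginx proxies to
bind "unix:///var/www/my_app/shared/tmp/sockets/puma.sock"

stdout_redirect "/var/www/my_app/shared/log/puma.log",
                "/var/www/my_app/shared/log/puma_err.log", true
```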

Enable and start the service:

sudo systemctl enable my_app
sudo systemctl start my_app

Step 5: The Web Server (Nginx Reverse Proxy)

Nginx sits in front of Puma. It handles SSL, serves static files (assets), and acts as a buffer for slow clients. This prevents the “Slowloris” attack from tying up your Ruby threads.

Install Nginx: sudo apt install nginx.

Create a configuration block at /etc/nginx/sites-available/my_app:

upstream app {
    # Path to Puma UNIX socket
    server unix:/var/www/my_app/shared/tmp/sockets/puma.sock fail_timeout=0;
}

server {
    listen 80;
    server_name example.com www.example.com;

    root /var/www/my_app/current/public;

    try_files $uri/index.html $uri @app;

    location @app {
        proxy_pass http://app;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
    }

    error_page 500 502 503 504 /500.html;
    client_max_body_size 10M;
    keepalive_timeout 10;
}

Link it and restart Nginx:

sudo ln -s /etc/nginx/sites-available/my_app /etc/nginx/sites-enabled/
sudo rm /etc/nginx/sites-enabled/default
sudo service nginx restart

Step 6: SSL Certificates with Let’s Encrypt

Never deploy Rails apps without HTTPS. Certbot makes this free and automatic.

sudo apt install certbot python3-certbot-nginx
sudo certbot --nginx -d example.com -d www.example.com

Certbot will automatically modify your Nginx config to redirect HTTP to HTTPS and configure SSL parameters.

Frequently Asked Questions (FAQ)

Is a $5/month VPS really enough for production?

Yes, for many use cases. A $5 Vultr or DigitalOcean droplet is perfect for portfolios, MVPs, and small business apps. However, if you have heavy image processing or hundreds of concurrent users, you should upgrade to a $10 or $20 plan with 2GB+ RAM.

Why use Nginx with Puma? Can’t Puma serve web requests?

Puma is an application server, not a web server. While it can serve requests directly, Nginx is significantly faster at serving static assets (images, CSS, JS) and managing SSL connections. Using Nginx frees up your expensive Ruby workers to do what they do best: process application logic.

How do I automate deployments?

Once the server is set up as above, you should not be manually copying files. The industry standard tool is Capistrano. Alternatively, for a more Docker-centric approach (similar to Heroku), look into Kamal (formerly MRSK), which is gaining massive popularity in the Rails community.

Conclusion

You have successfully configured a robust, production-ready environment to deploy Rails apps on a budget. By managing your own Vultr VPS, you have cut costs and gained valuable systems knowledge.

Your stack now includes:

  • OS: Ubuntu LTS (Hardened)
  • Web Server: Nginx (Reverse Proxy & SSL)
  • App Server: Puma (Managed by Systemd)
  • Database: PostgreSQL (Tuned)

The next step in your journey is automating this process. I recommend setting up a GitHub Action or a Capistrano script to push code changes to your new server with a single command. Thank you for reading the DevopsRoles page!