
How To Create Minimal Docker Images for Python Applications

Introduction

Creating minimal Docker images for Python applications is essential for optimizing performance, reducing attack surface, and saving bandwidth. A smaller Docker image can significantly speed up the deployment process and make your applications more portable. This guide will walk you through the process of creating minimal Docker images for Python applications, from basic steps to more advanced techniques.

Why Create Minimal Docker Images?

Benefits of Minimal Docker Images

  • Reduced Size: Smaller images use less disk space.
  • Faster Deployment: Smaller images transfer and load quicker.
  • Improved Security: Fewer components mean a smaller attack surface.
  • Efficiency: Optimized images use fewer resources, leading to better performance.

Common Pitfalls

  • Overcomplication: Trying to do too much in one image.
  • Redundancy: Including unnecessary libraries and tools.
  • Poor Layer Management: Not structuring Dockerfile effectively, leading to larger images.

Basic Steps to Create Minimal Docker Images

Step 1: Choose a Minimal Base Image

Using a minimal base image is the first step in reducing the overall size of your Docker image. Common minimal base images include alpine and python:slim.

Example: Using Alpine

FROM python:3.9-alpine
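You can compare the two variants yourself; exact sizes vary by release, but the alpine image is typically an order of magnitude smaller than the default python image:

docker pull python:3.9
docker pull python:3.9-alpine
docker images python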

Step 2: Install Only Required Dependencies

Only install the dependencies that your application needs. Use requirements.txt to manage these dependencies efficiently.
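As an illustration, a pinned requirements.txt keeps installs small and reproducible (the packages below are placeholders for whatever your application actually uses):

flask==2.3.3
gunicorn==21.2.0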

Example: Installing Dependencies

FROM python:3.9-alpine

# Set working directory
WORKDIR /app

# Copy requirements.txt and install dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the application code
COPY . .

Step 3: Remove Build Dependencies

After installing dependencies, remove any packages or tools used for building that are not needed at runtime. Note that deleting files in a separate RUN instruction does not shrink the image by itself, because the files still exist in an earlier layer; combining the install and removal into a single RUN (shown later under Minimize Layers) or using a multi-stage build is what actually reduces the size.

Example: Removing Build Tools

FROM python:3.9-alpine

# Install build dependencies
RUN apk add --no-cache gcc musl-dev

# Set working directory
WORKDIR /app

# Copy requirements.txt and install dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Remove build dependencies
RUN apk del gcc musl-dev

# Copy the rest of the application code
COPY . .

Intermediate Techniques for Reducing Image Size

Use Multi-Stage Builds

Multi-stage builds allow you to separate the build environment from the runtime environment, resulting in smaller final images.

Example: Multi-Stage Build

# Stage 1: Build
FROM python:3.9-alpine as build

# Install build dependencies
RUN apk add --no-cache gcc musl-dev

# Set working directory
WORKDIR /app

# Copy requirements.txt and install dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the application code
COPY . .

# Stage 2: Runtime
FROM python:3.9-alpine

# Set working directory
WORKDIR /app

# Copy installed dependencies and application code from the build stage
COPY --from=build /usr/local/lib/python3.9/site-packages /usr/local/lib/python3.9/site-packages
COPY --from=build /usr/local/bin /usr/local/bin
COPY --from=build /app /app

# Command to run the application
CMD ["python", "app.py"]

Use .dockerignore File

Similar to .gitignore, the .dockerignore file specifies which files and directories should be excluded from the Docker image. This can help reduce the image size and improve build times.

Example: .dockerignore

*.pyc
__pycache__/
.env
tests/

Advanced Techniques for Optimizing Docker Images

Minimize Layers

Each command in a Dockerfile creates a new layer in the image. Combining multiple commands into a single RUN instruction can reduce the number of layers and thus the overall image size.

Example: Combining Commands

FROM python:3.9-alpine

# Set working directory
WORKDIR /app

# Copy requirements.txt and install dependencies
COPY requirements.txt .
RUN apk add --no-cache gcc musl-dev \
    && pip install --no-cache-dir -r requirements.txt \
    && apk del gcc musl-dev

# Copy the rest of the application code
COPY . .

Use Scratch Base Image

For the ultimate minimal image, you can use the scratch base image. This is an empty image, so you must include everything your application needs at runtime: the interpreter, its standard library, and the shared libraries it links against. Because CPython is dynamically linked, scratch is rarely practical for Python; treat the example below as illustrative rather than production-ready.

Example: Using Scratch

# Stage 1: Build
FROM python:3.9-alpine as build

# Install build dependencies
RUN apk add --no-cache gcc musl-dev

# Set working directory
WORKDIR /app

# Copy requirements.txt and install dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the application code
COPY . .

# Stage 2: Create minimal runtime image
FROM scratch

# Copy the Python runtime and application code from the build stage
COPY --from=build /usr/local /usr/local
# Python on Alpine links against musl; the loader and shared libraries in /lib
# must come along too, or the interpreter will not start
COPY --from=build /lib /lib
COPY --from=build /app /app

# Set working directory
WORKDIR /app

# Command to run the application
CMD ["/usr/local/bin/python", "app.py"]

Frequently Asked Questions (FAQs)

What is the difference between alpine and slim base images?

Alpine is a minimal Docker image based on Alpine Linux, known for its small size. Slim images are stripped-down versions of the official images, removing unnecessary files while keeping essential functionalities.
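Because slim images keep glibc, prebuilt manylinux wheels usually install without a compiler toolchain. A minimal sketch of the same setup on slim (assuming the app layout used earlier in this guide):

FROM python:3.9-slim

WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

CMD ["python", "app.py"]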

How can I further reduce my Docker image size?

  • Use multi-stage builds.
  • Minimize the number of layers.
  • Use .dockerignore to exclude unnecessary files.
  • Optimize your application and dependencies.

Why is my Docker image still large after following these steps?

Check for large files or dependencies that might be included unintentionally. Use a tool like dive to inspect and analyze your Docker image layers.
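If you don't want to install dive locally, you can run it from its published container image; my-app:latest below is a placeholder for the image you want to analyze:

docker run --rm -it \
  -v /var/run/docker.sock:/var/run/docker.sock \
  wagoodman/dive:latest my-app:latest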

How do I manage environment variables in Docker?

You can use the ENV instruction in your Dockerfile to set environment variables, or pass them at runtime using the -e flag with docker run.
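Both approaches in a minimal sketch (APP_ENV and my-image are placeholder names):

# In the Dockerfile
ENV APP_ENV=production

# Overriding the value at runtime
docker run -e APP_ENV=staging my-image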

Is it safe to use minimal images in production?

Yes, minimal images can be safe if you include all necessary security patches and dependencies. They often enhance security by reducing the attack surface.

Conclusion

Creating minimal Docker images for Python applications involves selecting a minimal base image, installing only necessary dependencies, and using advanced techniques like multi-stage builds and combining commands. By following these practices, you can significantly reduce the size of your Docker images, leading to faster deployments and more efficient applications. Implement these steps in your next project to experience the benefits of optimized Docker images. Thank you for reading the DevopsRoles page!

Understand the Difference Between Docker Engine and Docker Desktop: A Comprehensive Guide

Introduction

Docker has revolutionized the way we build, share, and run applications. However, many users find themselves confused about the difference between Docker Engine and Docker Desktop. This guide aims to demystify these two essential components, explaining their differences, use cases, and how to get the most out of them. Whether you’re a beginner or an experienced developer, this article will provide valuable insights into Docker’s ecosystem.

What is Docker Engine?

Docker Engine is the core software that enables containerization. It is a client-server application that includes three main components:

Docker Daemon (dockerd)

The Docker Daemon is a background service responsible for managing Docker containers on your system. It listens for Docker API requests and manages Docker objects such as images, containers, networks, and volumes.

Docker Client (docker)

The Docker Client is a command-line interface (CLI) that users interact with to communicate with the Docker Daemon. It accepts commands from the user and communicates with the Docker Daemon to execute them.

REST API

The Docker REST API is used by applications to communicate with the Docker Daemon programmatically. This API allows you to integrate Docker functionalities into your software.
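For example, on a host where you can reach the Docker socket, you can list running containers straight from the REST API:

curl --unix-socket /var/run/docker.sock http://localhost/containers/json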

What is Docker Desktop?

Docker Desktop is an application that simplifies the use of Docker on macOS and Windows systems. It provides an easy-to-use interface and includes everything you need to build and share containerized applications.

Docker Desktop Components

Docker Desktop includes the Docker Engine, Docker CLI client, Docker Compose, Kubernetes, and other tools necessary for a seamless container development experience.

GUI Integration

Docker Desktop provides a graphical user interface (GUI) that makes it easier for users to manage their Docker environments. The GUI includes dashboards, logs, and other tools to help you monitor and manage your containers.

Docker Desktop for Mac and Windows

Docker Desktop is tailored for macOS and Windows environments, providing native integration with these operating systems. This means that Docker Desktop abstracts away many of the complexities associated with running Docker on non-Linux platforms.

Key Difference Between Docker Engine and Docker Desktop

Platform Compatibility

  • Docker Engine: Primarily designed for Linux systems, though it can run on Windows and macOS through Docker Desktop or virtual machines.
  • Docker Desktop: Specifically designed for Windows and macOS, providing native integration and additional features to support these environments.

User Interface

  • Docker Engine: Managed primarily through the command line, suitable for users comfortable with CLI operations.
  • Docker Desktop: Offers both CLI and GUI options, making it accessible for users who prefer graphical interfaces.

Additional Features

  • Docker Engine: Focuses on core containerization functionalities.
  • Docker Desktop: Includes extra tools such as Docker Compose, a bundled Kubernetes distribution, and other integrations that enhance the development workflow.

Resource Management

  • Docker Engine: Requires manual configuration for resource allocation.
  • Docker Desktop: Automatically manages resource allocation, with options to adjust settings through the GUI.

When to Use Docker Engine?

Server Environments

Docker Engine is ideal for server environments where resources are managed by IT professionals. It provides the flexibility and control needed to run containers at scale.

Advanced Customization

For users who need to customize their Docker setup extensively, Docker Engine offers more granular control over configuration and operation.

When to Use Docker Desktop?

Development and Testing

Docker Desktop is perfect for development and testing on local machines. It simplifies the setup process and provides tools to streamline the development workflow.

Cross-Platform Development

If you’re working in a cross-platform environment, Docker Desktop ensures that your Docker setup behaves consistently across macOS and Windows systems.


FAQs

What is the main purpose of Docker Engine?

The main purpose of Docker Engine is to enable containerization, allowing developers to package applications and their dependencies into containers that can run consistently across different environments.

Can Docker Desktop be used in production environments?

Docker Desktop is primarily designed for development and testing. For production environments, it is recommended to use Docker Engine on a server or cloud platform.

Is Docker Desktop free to use?

Docker Desktop offers a free tier for individual developers and small teams. However, there are paid plans available with additional features and support for larger organizations.

How does Docker Desktop manage resources on macOS and Windows?

Docker Desktop uses a lightweight virtual machine to run the Docker Daemon on macOS and Windows. It automatically manages resource allocation, but users can adjust CPU, memory, and disk settings through the Docker Desktop GUI.

Conclusion

Understanding the difference between Docker Engine and Docker Desktop is crucial for choosing the right tool for your containerization needs. Docker Engine provides the core functionalities required for running containers, making it suitable for server environments and advanced users. On the other hand, Docker Desktop simplifies the development and testing process, offering a user-friendly interface and additional tools for macOS and Windows users. By selecting the appropriate tool, you can optimize your workflow and leverage the full potential of Docker’s powerful ecosystem. Thank you for reading the DevopsRoles page!

Docker Engine Authentication Bypass Vulnerability Exploited: Secure Your Containers Now

Introduction

In recent times, Docker Engine has become a cornerstone for containerization in DevOps and development environments. However, like any powerful tool, it can also be a target for security vulnerabilities. One such critical issue is the Docker Engine authentication bypass vulnerability. This article will explore the details of this vulnerability, how it’s exploited, and what steps you can take to secure your Docker environments. We’ll start with basic concepts and move to more advanced topics, ensuring a comprehensive understanding of the issue.

Understanding Docker Engine Authentication Bypass Vulnerability

What is Docker Engine?

Docker Engine is a containerization platform that enables developers to package applications and their dependencies into containers. This allows for consistent environments across different stages of development and production.

What is an Authentication Bypass?

Authentication bypass is a security flaw that allows attackers to gain unauthorized access to a system without the correct credentials. In the context of Docker, this could mean gaining control over Docker containers and the host system.

How Does the Vulnerability Work?

The Docker Engine authentication bypass vulnerability typically arises due to improper validation of user credentials or session management issues. Attackers exploit these weaknesses to bypass authentication mechanisms and gain access to sensitive areas of the Docker environment.

Basic Examples of Exploitation

Example 1: Default Configuration

One common scenario is exploiting Docker installations with default configurations. Many users deploy Docker with default settings, which might not enforce strict authentication controls.

  1. Deploying Docker with Default Settings:
    • sudo apt-get update
    • sudo apt-get install docker-ce docker-ce-cli containerd.io
  2. Accessing Docker Daemon without Authentication:
    • docker -H tcp://<docker-host>:2375 ps

In this example, if the Docker daemon is exposed on a network without proper authentication, anyone can list the running containers and execute commands.
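You can check whether your own daemon is listening on the conventional unauthenticated TCP port with a quick socket scan:

sudo ss -tlnp | grep 2375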

Example 2: Misconfigured Access Control

Another basic example involves misconfigured access control policies that allow unauthorized users to perform administrative actions.

Configuring Docker with Insecure Access (in /etc/docker/daemon.json):

{
  "hosts": ["tcp://0.0.0.0:2375"]
}

Exploiting the Misconfiguration:

docker -H tcp://<docker-host>:2375 exec -it <container-id> /bin/bash

Advanced Examples of Exploitation

Example 3: Session Hijacking

Advanced attackers might use session hijacking techniques to exploit authentication bypass vulnerabilities. This involves stealing session tokens and using them to gain access.

  1. Capturing Session Tokens: Attackers use network sniffing tools like Wireshark to capture authentication tokens.
  2. Replaying Captured Tokens:
    • curl -H "Authorization: Bearer <captured-token>" http://<docker-host>:2375/containers/json

Example 4: Exploiting API Vulnerabilities

Docker provides an API for managing containers, which can be exploited if not properly secured.

  1. Discovering API Endpoints:
    • curl http://<docker-host>:2375/v1.24/containers/json
  2. Executing Commands via API:
    • curl -X POST -H "Content-Type: application/json" -d '{"Cmd": ["echo", "Hello World"], "Image": "busybox"}' http://<docker-host>:2375/containers/create

Protecting Your Docker Environment

Implementing Secure Configuration

Enable TLS for Docker Daemon:

{
  "tls": true,
  "tlsverify": true,
  "tlscacert": "/path/to/ca.pem",
  "tlscert": "/path/to/cert.pem",
  "tlskey": "/path/to/key.pem",
  "hosts": ["tcp://0.0.0.0:2376"]
}

With tlsverify set, clients must present a certificate signed by your CA rather than merely encrypting the connection.

Use Docker Bench for Security: Docker provides a security benchmark tool to check for best practices.

docker run -it --net host --pid host --userns host --cap-add audit_control \
  -e DOCKER_CONTENT_TRUST=$DOCKER_CONTENT_TRUST \
  -v /var/lib:/var/lib \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /usr/lib/systemd:/usr/lib/systemd \
  -v /etc:/etc \
  --label docker_bench_security \
  docker/docker-bench-security

Access Control Best Practices

  1. Limit Administrative Access: Docker Engine itself offers only coarse access control, so restrict who can reach the daemon socket and who belongs to the docker group; in Swarm mode, cluster administration is confined to manager nodes.
    • docker swarm init
    • docker network create --driver overlay my-overlay
  2. Use External Authentication Providers: Integrate Docker with external authentication systems like LDAP or OAuth for better control.

Regular Audits and Monitoring

Enable Docker Logging:

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}

Monitor Docker Activity: Use tools like Prometheus and Grafana to monitor Docker metrics and alerts.
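Docker Engine can also expose Prometheus-format metrics directly. A minimal daemon.json sketch (older Engine releases additionally required "experimental": true for this endpoint):

{
  "metrics-addr": "127.0.0.1:9323"
}

Prometheus can then scrape http://127.0.0.1:9323/metrics.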

Security Updates and Patching

  1. Keep Docker Updated: Regularly update Docker to the latest version to mitigate known vulnerabilities.
    • sudo apt-get update
    • sudo apt-get upgrade docker-ce
  2. Patch Vulnerabilities Promptly: Subscribe to Docker security announcements to stay informed about patches and updates.

Frequently Asked Questions

What is Docker Engine Authentication Bypass Vulnerability?

The Docker Engine authentication bypass vulnerability allows attackers to gain unauthorized access to Docker environments by exploiting weaknesses in the authentication mechanisms.

How Can I Protect My Docker Environment from This Vulnerability?

Implement secure configurations, use TLS, enable RBAC, integrate with external authentication providers, perform regular audits, monitor Docker activity, and keep Docker updated.

Why is Authentication Bypass a Critical Issue for Docker?

Authentication bypass can lead to unauthorized access, allowing attackers to control Docker containers, steal data, and execute malicious code, compromising the security of the entire system.

Conclusion

Docker Engine authentication bypass vulnerability poses a significant threat to containerized environments. By understanding how this vulnerability is exploited and implementing robust security measures, you can protect your Docker environments from unauthorized access and potential attacks. Regular audits, secure configurations, and keeping your Docker installation up-to-date are essential steps in maintaining a secure containerized infrastructure. Thank you for reading the DevopsRoles page!

Stay secure, and keep your Docker environments safe from vulnerabilities.

5 Easy Steps to Securely Connect Tailscale in Docker Containers on Linux – Boost Your Network!

Discover the revolutionary way to enhance your network security by integrating Tailscale in Docker containers on Linux. This comprehensive guide will walk you through the essential steps needed to set up Tailscale, ensuring your containerized applications remain secure and interconnected. Dive into the world of seamless networking today!

Introduction to Tailscale in Docker Containers

In the dynamic world of technology, ensuring robust network security and seamless connectivity has become paramount. Enter Tailscale, a user-friendly, secure mesh VPN built on WireGuard, which uses the Noise protocol framework. When combined with Docker, a leading software containerization platform, Tailscale empowers Linux users to secure and streamline their network connections effortlessly. This guide will unveil how to leverage Tailscale within Docker containers on Linux, paving the way for enhanced security and simplified connectivity.

Preparing Your Linux Environment

Before diving into the world of Docker and Tailscale, it’s essential to prepare your Linux environment. Begin by ensuring your system is up-to-date:

sudo apt-get update && sudo apt-get upgrade

Next, install Docker on your Linux machine if you haven’t already:

sudo apt-get install docker.io

Once Docker is installed, start the Docker service and enable it to launch at boot:

sudo systemctl start docker && sudo systemctl enable docker

Ensure your user is added to the Docker group to avoid using sudo for Docker commands:

sudo usermod -aG docker ${USER}

Log out and back in for this change to take effect, or if you’re in a terminal, type newgrp docker.

Setting Up Tailscale in Docker Containers

Now, let’s set up Tailscale within a Docker container. Create a Dockerfile to build your Tailscale container:

FROM alpine:latest
RUN apk --no-cache add tailscale
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]

In your entrypoint.sh, start the tailscaled daemon and then bring the interface up (on first run, tailscale up prints a login URL unless you supply an auth key):

#!/bin/sh
tailscaled &   # the tailscale CLI talks to this daemon, so start it first
tailscale up --advertise-routes=10.0.0.0/24 --accept-routes
wait           # keep tailscaled in the foreground so the container stays up

Build and run your Docker container:

docker build -t tailscale . 
docker run --name=mytailscale --privileged -d tailscale

The --privileged flag is essential for Tailscale to modify the network interfaces within the container.

Verifying Connectivity and Security

After setting up Tailscale in your Docker container, it’s crucial to verify connectivity and ensure your network is secure. Check the Tailscale interface and connectivity:

docker exec mytailscale tailscale status

This command provides details on your Tailscale network, including the connected devices. Test the security and functionality by accessing services across your Tailscale network, ensuring that all traffic is encrypted and routes correctly.

Tips and Best Practices

To maximize the benefits of Tailscale in Docker containers on Linux, consider the following tips and best practices:

  • Regularly update your Tailscale and Docker packages to benefit from the latest features and security improvements.
  • Explore Tailscale’s ACLs (Access Control Lists) to fine-tune which devices and services can communicate across your network.
  • Consider using Docker Compose to manage Tailscale containers alongside your other Dockerized services for ease of use and automation, as sketched below.
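For example, a minimal Docker Compose sketch for the Tailscale container built earlier (the service name, container name, and volume name are arbitrary):

version: "3"
services:
  tailscale:
    build: .                  # the Dockerfile from the setup section above
    container_name: mytailscale
    privileged: true          # required so Tailscale can manage network interfaces
    restart: unless-stopped
    volumes:
      - tailscale-state:/var/lib/tailscale   # persist node identity across restarts

volumes:
  tailscale-state: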

I hope you found this helpful. Thank you for reading the DevopsRoles page!

Docker deploy Joomla

Introduction

Docker has become an essential tool in the DevOps world, simplifying the deployment and management of applications. Using Docker to deploy Joomla – one of the most popular Content Management Systems (CMS) – offers significant advantages. In this article, we will guide you through each step to Docker deploy Joomla, helping you leverage the full potential of Docker for your Joomla project.

Requirements

  • Docker installed on your system.
  • The host OS is Ubuntu Server.

To deploy Joomla using Docker, you’ll need to follow these steps:

Docker Joomla

Create a new Docker network for Joomla

docker network create joomla-network

Check that the Joomla network was created.
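The new network should appear in the output of:

docker network ls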

Next, pull the Joomla and MySQL images with the commands below:

docker pull mysql:5.7
docker pull joomla

Create the MySQL volume

docker volume create mysql-data

Deploy the database

docker run -d --name joomladb  -v mysql-data:/var/lib/mysql --network joomla-network -e "MYSQL_ROOT_PASSWORD=PWORD_MYSQL" -e MYSQL_USER=joomla -e "MYSQL_PASSWORD=PWORD_MYSQL" -e "MYSQL_DATABASE=joomla" mysql:5.7

Where PWORD_MYSQL is a unique, strong password.

How to deploy Joomla

Create a volume to hold the Joomla data, then start the Joomla container, with the commands below:

docker volume create joomla-data
docker run -d --name joomla -p 80:80 -v joomla-data:/var/www/html --network joomla-network -e JOOMLA_DB_HOST=joomladb -e JOOMLA_DB_USER=joomla -e JOOMLA_DB_PASSWORD=PWORD_MYSQL joomla

Access the web-based installer

Open a web browser to http://SERVER:PORT, where SERVER is the IP address or domain of the hosting server, and PORT is the published port (80 in the command above).

Follow the Joomla setup wizard to configure your Joomla instance.


Conclusion

Deploying Joomla with Docker not only simplifies the installation and configuration process but also enhances the management and scalability of your application. With the detailed steps provided in this guide, you can confidently deploy and manage Joomla on the Docker platform. Using Docker saves time and improves the performance and reliability of your system. Start today to experience the benefits Docker brings to your Joomla project. I hope you found this helpful. Thank you for reading the DevopsRoles page!

Deploying Services to a Docker Swarm Cluster

Introduction

In this tutorial, you will learn how to deploy services to a Docker Swarm cluster. Before you begin, you need a running Docker Swarm cluster. In today's world of distributed systems, containerization has become a popular choice for deploying and managing applications.

Docker Swarm, a native clustering and orchestration solution for Docker, allows you to create a swarm of Docker nodes that work together as a cluster.

In this blog post, we will explore the steps to deploy a service to a Docker Swarm cluster and take advantage of its powerful features for managing containerized applications.

Deploying Services to a Docker Swarm Cluster

To keep things simple, I will deploy an NGINX container service.

Log into the controller node and run the following command:

docker service create --name nginx_test nginx

Check the service status with the command below:

vagrant@controller:~$ docker service ls
ID             NAME         MODE         REPLICAS   IMAGE          PORTS
44sp9ig3k65o   nginx_test   replicated   1/1        nginx:latest

Next, we will deploy a service with three replicas, as shown below:

docker service create --replicas 3 --name nginx3nodes nginx

Docker Swarm schedules the three replicas across the nodes in the cluster.

To scale the service up to five replicas:

docker service scale nginx3nodes=5

We deploy Portainer on the controller to easily manage the swarm.

docker run -d -p 8000:8000 -p 9443:9443 --name=portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce

Open a browser and go to https://SERVER:9443 (where SERVER is the IP address of the server). You should see Swarm listed in the left navigation.

Swarm on portainer

Conclusion

You have deployed a service to the swarm. Docker Swarm takes care of scheduling the service across the Swarm nodes and managing its lifecycle. Docker Swarm simplifies the management and scaling of containerized applications, providing fault tolerance and high availability. By following the steps outlined in this blog post, you can easily deploy your services to a Docker Swarm cluster and take advantage of its powerful orchestration capabilities. I hope you found this helpful. Thank you for reading the DevopsRoles page!

Deploy a self-hosted Docker registry

Introduction

In this tutorial, you will learn how to deploy a self-hosted Docker registry with self-signed certificates, and how to access it from a remote machine.

To deploy a self-hosted Docker registry, you can use the official Docker Registry image.

Here's a step-by-step guide to help you deploy a self-hosted Docker registry.

Prepare your directories

I will create them in my user home directory, but you can place them in any directory.

mkdir ~/registry

Create subdirectories in the registry directory.

mkdir ~/registry/{certs,auth}

Go into the certs directory.

cd ~/registry/certs

Create a private key

openssl genrsa 2048 > devopsroles.com.key
chmod 400 devopsroles.com.key


Create a docker_register.cnf file with the content below:

nano docker_register.cnf

In that file, paste the following contents.

[req]
default_bits = 2048
distinguished_name = req_distinguished_name
req_extensions = req_ext
x509_extensions = v3_req
prompt = no

[req_distinguished_name]
countryName = XX
stateOrProvinceName = N/A
localityName = N/A
organizationName = Self-signed certificate
commonName = 127.0.0.1: Self-signed certificate

[req_ext]
subjectAltName = @alt_names

[v3_req]
subjectAltName = @alt_names

[alt_names]
IP.1 = 192.168.3.7

Note: Make sure to change IP.1 to match the IP address of your hosting server.

Save and close the file.

Generate the certificate with:

openssl req -new -x509 -nodes -sha256 -days 365 -key devopsroles.com.key -out devopsroles.com.crt -config docker_register.cnf

Go into the auth directory.

cd ../auth

Generate an htpasswd file

docker run --rm --entrypoint htpasswd registry:2.7.0 -Bbn USERNAME PASSWORD > htpasswd

Where USERNAME is a unique username and PASSWORD is a unique/strong password.


Now, deploy the self-hosted Docker registry

Change back to the base registry directory.

cd ~/registry

Deploy the registry container with the command below:

docker run -d \
  --restart=always \
  --name registry \
  -v `pwd`/auth:/auth \
  -v `pwd`/certs:/certs \
  -e REGISTRY_AUTH=htpasswd \
  -e REGISTRY_AUTH_HTPASSWD_REALM="Registry Realm" \
  -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
  -e REGISTRY_HTTP_ADDR=0.0.0.0:443 \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/devopsroles.com.crt \
  -e REGISTRY_HTTP_TLS_KEY=/certs/devopsroles.com.key \
  -p 443:443 \
  registry:2.7.0

Now you can access the registry from the local machine. However, to access it from a remote system, we need to add a ca.crt file on that system. You will need the contents of the ~/registry/certs/devopsroles.com.crt file.

Log in to your second machine

Create the certificate directory:

sudo mkdir -p /etc/docker/certs.d/SERVER:443

where SERVER is the IP address of the machine hosting the registry.

Create the new file with:

sudo nano /etc/docker/certs.d/SERVER:443/ca.crt

Paste the contents of devopsroles.com.crt (from the hosting server), then save and close the file.

How to log in to the new registry

From the second machine.

docker login -u USER https://SERVER:443

Where USER is the user you added when you generated the htpasswd file above. You will be prompted for the password.
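Once logged in, you can verify the registry end to end by tagging and pushing a small test image (alpine here is just an example):

docker pull alpine
docker tag alpine SERVER:443/alpine:test
docker push SERVER:443/alpine:test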

Conclusion

You have successfully deployed a self-hosted Docker registry. You can now use it to store and share your Docker images within your network. I hope you found this helpful. Thank you for reading the DevopsRoles page!

How to install Docker on Ubuntu

Introduction

This tutorial explains how to install Docker on Ubuntu 21.04, highlighting Docker as an efficient open platform for building, testing, and deploying applications. Docker simplifies and accelerates the deployment process, making it less time-consuming to build and test applications. The guide is ideal for anyone looking to streamline their development workflow using Docker on the Ubuntu system.

How to install Docker on Ubuntu

To install Docker on Ubuntu, you can follow these steps:

Prerequisites

  • A system running Ubuntu 21.04
  • A user account with sudo privileges

Step 1: Update your system

Update your existing packages:

sudo apt update

Step 2: Install the curl package

sudo apt install curl -y

Step 3: Download the Latest Docker Version

curl -fsSL https://get.docker.com -o get-docker.sh

Step 4: Install Docker

sudo sh get-docker.sh

Step 5: Make sure the current user can access the Docker daemon

To avoid using sudo for Docker commands, add your username to the docker group:

sudo usermod -aG docker $USER

Log out and log back in for the group change to take effect.

Step 6: Check Docker Version

To verify the installation, check the Docker version with the command below:

docker --version
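To confirm the daemon is working end to end, you can also run the standard test image:

docker run hello-world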

Uninstall Docker on Ubuntu

Check which Docker packages are installed on Ubuntu:

dpkg -l | grep -i docker

Use the apt-get purge command to uninstall Docker on Ubuntu:

sudo apt-get purge docker-ce docker-ce-cli docker-ce-rootless-extras docker-scan-plugin
sudo rm -rf /var/lib/docker

Remove Software Dependencies

sudo apt autoremove

Conclusion

That's how to install Docker on Ubuntu 21.04. After completing these steps, Docker should be successfully installed on your Ubuntu system, and you can start using Docker commands to manage containers and images. I hope you found this helpful. Thank you for reading the DevopsRoles page!

Step-by-Step: Create Docker Image from a Running Container

Introduction

In this tutorial, we will deploy an Nginx server container, modify it, and then create a new image from that running container. Let's get started with creating a Docker image from a running container.

What does docker mean?

Docker is a set of platform as a service products that use OS-level virtualization to deliver software in packages called containers. Containers are isolated from one another and bundle their own software, libraries, and configuration files. Quoted from Wikipedia.

Install Docker on Ubuntu

If you don't already have Docker installed, let's do so. I will install Docker on Ubuntu Server, version 21.04.

To install Docker on your Ubuntu server, run the command below:

sudo apt-get install docker.io -y

Add your user to the docker group with the command below

sudo usermod -aG docker $USER

Log out and log back in to ensure the changes take effect.

Create Docker Image from a Running Container

Create the New Container

We will create the new container with the command below:

sudo docker create --name nginx-devops -p 80:80 nginx:alpine
  • Create a new container: nginx-devops
  • Internal port ( Guest ): 80
  • External ( host ) port: 80
  • Use image: nginx:alpine


Start the Nginx container with the command below:

sudo docker start nginx-devops

After starting the container, open a web browser and point it at the server. You will see the NGINX welcome page.

Modify the Existing Container

We will create a new index.html page for Nginx.

To do this, create a new page with the command below

vi index.html

In that file, paste the content (you can modify it to say whatever you want):

<html>
    <h2>DevopsRoles.com</h2>
</html>

Save and close the file

Copy index.html to the document root of the nginx-devops container with the command below:

sudo docker cp index.html nginx-devops:/usr/share/nginx/html/index.html

Refresh the page in your web browser and you will see the new welcome page.

Create a New Image

Now let's create a new image that includes the changes. It is very simple.

  1. We will commit the changes with the command below:
sudo docker commit nginx-devops

List all current images with the command below:

sudo docker images

2. Tag the new image with the command below:

sudo docker tag IMAGE_ID nginx-devops-container

Where IMAGE_ID is the actual ID of your new container.

List the images again and you'll see something like this:

sudo docker images


You’ve created a new Docker image from a running container.

Let's stop and remove the original container. We will remove the nginx-devops container with the commands below:

sudo docker ps -a
sudo docker stop ID
sudo docker rm ID

Where ID is the container ID (a unique prefix, such as the first four characters, is enough).

You could deploy a container from the new image with a command like:

sudo docker create --name nginx-new -p 80:80 nginx-devops-container:latest

The full terminal session looks like the output below:

vagrant@devopsroles:~$ sudo docker ps -a
CONTAINER ID   IMAGE          COMMAND                  CREATED          STATUS         PORTS                               NAMES
fe3d2e383b80   nginx:alpine   "/docker-entrypoint.…"   11 minutes ago   Up 8 minutes   0.0.0.0:80->80/tcp, :::80->80/tcp   nginx-devops
vagrant@devopsroles:~$ sudo docker stop fe3d2e383b80
fe3d2e383b80
vagrant@devopsroles:~$ sudo docker rm fe3d2e383b80
fe3d2e383b80
vagrant@devopsroles:~$ sudo docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
vagrant@devopsroles:~$ sudo docker create --name nginx-new -p 80:80 nginx-devops-container:latest
91175e61375cf86fc935c55081be6f81354923564c9c0c0f4e5055ef0f590600
vagrant@devopsroles:~$ sudo docker ps -a
CONTAINER ID   IMAGE                           COMMAND                  CREATED              STATUS    PORTS     NAMES
91175e61375c   nginx-devops-container:latest   "/docker-entrypoint.…"   About a minute ago   Created             nginx-new
vagrant@devopsroles:~$ sudo docker start 91175e61375c
91175e61375c
vagrant@devopsroles:~$

What is the difference between docker commit and docker build?

docker commit creates an image from a container's current state, while docker build creates an image from a Dockerfile, allowing for a more controlled and reproducible build process.
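For comparison, a two-line Dockerfile reproduces the image we built by hand in this tutorial:

FROM nginx:alpine
COPY index.html /usr/share/nginx/html/index.html

Build it with docker build -t nginx-devops-container . and you get the same result in a repeatable way.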

Refresh your web browser and you should, once again, see the custom DevopsRoles.com welcome page.


Conclusion

Creating a Docker image from a running container is a powerful feature that enables you to capture the exact state of an application at any given moment. By following the steps outlined in this guide, you can easily commit a running container to a new image and use advanced techniques to add tags, commit messages, and author information. Whether you're looking to back up your application, replicate environments, or share your work with others, this process provides a simple and effective solution. I hope you found this helpful. Thank you for reading the DevopsRoles page!

SonarQube from a Jenkins Pipeline job in Docker

Introduction

In today’s fast-paced DevOps environment, maintaining code quality is paramount. Integrating SonarQube with Jenkins in a Docker environment offers a robust solution for continuous code inspection and improvement.

This guide will walk you through the steps to set up SonarQube from a Jenkins pipeline job in Docker, ensuring your projects adhere to high standards of code quality and security.

Integrating SonarQube from a Jenkins Pipeline job in Docker: A Step-by-Step Guide.

Docker Compose for SonarQube

Create directories to keep SonarQube’s data

# mkdir -p /data/sonarqube/{conf,logs,temp,data,extensions,bundled_plugins,postgresql,postgresql_data}

Create a new user and change those directories owner

# adduser sonarqube
# usermod -aG docker sonarqube
# chown -R sonarqube:sonarqube /data/sonarqube/

Find UID of sonarqube user

# id sonarqube

Create a Docker Compose file, using that UID for the user field.

version: "3"

networks:
  sonarnet:
    driver: bridge

services:
  sonarqube:
    # use the UID from `id sonarqube` here
    user: "1005:1005"
    image: sonarqube
    ports:
      - "9000:9000"
    networks:
      - sonarnet
    environment:
      - sonar.jdbc.url=jdbc:postgresql://db:5432/sonar
    volumes:
      - /data/sonarqube/conf:/opt/sonarqube/conf
      - /data/sonarqube/logs:/opt/sonarqube/logs
      - /data/sonarqube/temp:/opt/sonarqube/temp
      - /data/sonarqube/data:/opt/sonarqube/data
      - /data/sonarqube/extensions:/opt/sonarqube/extensions
      - /data/sonarqube/bundled_plugins:/opt/sonarqube/lib/bundled-plugins

  db:
    image: postgres
    networks:
      - sonarnet
    environment:
      - POSTGRES_USER=sonar
      - POSTGRES_PASSWORD=sonar
    volumes:
      - /data/sonarqube/postgresql:/var/lib/postgresql
      - /data/sonarqube/postgresql_data:/var/lib/postgresql/data

Start the stack with docker-compose

# docker-compose -f sonarqube-compose.yml up

Install and configure Nginx

Nginx Install

# yum install nginx

Start Nginx service

# service nginx start

Configure nginx

I have created an /etc/nginx/conf.d/sonar.devopsroles.com.conf file that looks like this:

upstream sonar {
    server 127.0.0.1:9000;
}


server {

    listen 80;
    server_name  dev.sonar.devopsroles.com;

    # serve static files (e.g. ACME challenges) directly over HTTP;
    # this location block was reconstructed -- adjust the path to your setup
    location /.well-known/ {
        root /var/www/html;
        allow all;
    }

    location / {
        return 301 https://dev.sonar.devopsroles.com$request_uri;
    }
}

server {

    listen       443 ssl;
    server_name  dev.sonar.devopsroles.com;

    # point these at your actual certificate files
    ssl_certificate     /etc/nginx/ssl/dev.sonar.devopsroles.com.crt;
    ssl_certificate_key /etc/nginx/ssl/dev.sonar.devopsroles.com.key;

    access_log  /var/log/nginx/dev.sonar.devopsroles.com-access.log;
    error_log /var/log/nginx/dev.sonar.devopsroles.com-error.log warn;

    location / {
        proxy_http_version 1.1;
        proxy_request_buffering off;
        proxy_buffering off;

        proxy_redirect          off;
        proxy_set_header        Host            $host;
        proxy_set_header        X-Real-IP       $remote_addr;
        proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header        X-Forwarded-Proto   $scheme;
        proxy_pass http://sonar$request_uri;
    }
}

Check syntax and reload NGINX’s configs

# nginx -t && systemctl reload nginx

Jenkins Docker Compose

Here is an example of a Jenkins Docker Compose setup that could be used for integrating SonarQube from a Jenkins pipeline job:

version: '3'

services:
  jenkins:
    image: jenkins/jenkins:lts
    container_name: jenkins
    ports:
      - "8080:8080"
      - "50000:50000"
    volumes:
      - jenkins_home:/var/jenkins_home
    networks:
      - jenkins-sonarqube

  sonarqube:
    image: sonarqube:latest
    container_name: sonarqube
    ports:
      - "9000:9000"
    environment:
      - SONAR_JDBC_URL=jdbc:postgresql://db:5432/sonarqube
      - SONAR_JDBC_USERNAME=sonar
      - SONAR_JDBC_PASSWORD=sonar
    networks:
      - jenkins-sonarqube

  db:
    image: postgres:latest
    container_name: postgres
    environment:
      - POSTGRES_USER=sonar
      - POSTGRES_PASSWORD=sonar
      - POSTGRES_DB=sonarqube
    networks:
      - jenkins-sonarqube

networks:
  jenkins-sonarqube:

volumes:
  jenkins_home:

Explanation:

  • Jenkins Service: Runs Jenkins on the default LTS image. It exposes ports 8080 (Jenkins web UI) and 50000 (Jenkins slave agents).
  • SonarQube Service: Runs SonarQube on the latest image. It connects to a PostgreSQL database for data storage.
  • PostgreSQL Service: Provides the database backend for SonarQube.
  • Networks and Volumes: Shared network (jenkins-sonarqube) and a named volume (jenkins_home) for Jenkins data persistence.
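With the stack above running, a Jenkins Pipeline job can trigger an analysis with a stage like the following sketch. The server name sonarqube and the project key my-project are placeholders that must match what you configure under Manage Jenkins, and the SonarQube Scanner plugin plus a sonar-scanner installation are assumed:

pipeline {
    agent any
    stages {
        stage('SonarQube analysis') {
            steps {
                // injects the server URL and token configured in Jenkins
                withSonarQubeEnv('sonarqube') {
                    sh 'sonar-scanner -Dsonar.projectKey=my-project -Dsonar.sources=.'
                }
            }
        }
    }
}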

Conclusion

By following this comprehensive guide, you have successfully integrated SonarQube with Jenkins using Docker, enhancing your continuous integration pipeline. This setup not only helps in maintaining code quality but also ensures your development process is more efficient and reliable. Thank you for visiting DevOpsRoles, and we hope this tutorial has been helpful in improving your DevOps practices.