Category Archives: Docker

Master Docker with DevOpsRoles.com. Discover comprehensive guides and tutorials to efficiently use Docker for containerization and streamline your DevOps processes.

Step-by-Step Guide: Deploy Ghost Blog with docker

Introduction

In this tutorial, you will learn how to deploy a Ghost blog with Docker. Ghost is a popular open-source CMS and content creation platform written in JavaScript with Node.js.

Docker has become a popular choice for deploying applications due to its ease of use, scalability, and portability. If you’re looking to start a blog using Ghost, a powerful and flexible open-source blogging platform, using Docker can simplify the deployment process.

We will use Docker to quickly deploy a Ghost blog. First, make sure Docker and Docker Compose are installed on your host.

Prerequisites

Before we get started, make sure you have Docker and Docker Compose installed on your server or local machine.

Deploy a Ghost Container

We will start a basic Ghost site with the docker command as below:

docker run -d -p 2368:2368 --name simple-ghost ghost:4

Ghost’s default port is 2368. You can view the site at http://localhost:2368, or open http://localhost:2368/ghost to access the Ghost admin panel.

Visit the admin panel first to finalize your Ghost installation and create an initial user account.

This approach is fine if you are just trying out Ghost. However, we have not set up persistent storage yet, so your data will be lost when the container is removed.

Next, we will use Docker Compose to set up Ghost with a Docker volume. Mounting a volume at /var/lib/ghost/content stores Ghost’s data outside the container.

Example docker-compose.yml file

version: "3"
services:
  ghost:
    image: ghost:4
    ports:
      - 8080:2368
    environment:
      url: https://ghost.devopsroles.com
    volumes:
      - ghost:/var/lib/ghost/content
    restart: unless-stopped
volumes:
  ghost:

Now use Compose to bring up your site:

docker-compose up -d
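To confirm the stack came up, you can check the service status and probe the mapped port (a quick sanity check; the exact output will vary):

docker-compose ps
curl -I http://localhost:8080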

External Database

Ghost defaults to a SQLite database that’s stored as a file in your site’s content directory. For a production deployment, you can switch to an external MySQL database by extending the Compose file:

services:
  ghost:
    # ...
    environment:
      database__client: mysql
      database__connection__host: ghost_mysql
      database__connection__user: root
      database__connection__password: databasePw
      database__connection__database: ghost

  ghost_mysql:
    image: mysql:5.7
    expose:
      - 3306
    environment:
      MYSQL_DATABASE: ghost
      MYSQL_ROOT_PASSWORD: databasePw
    volumes:
      - mysql:/var/lib/mysql
    restart: unless-stopped

volumes:
  mysql:

Summary: complete Ghost blog docker-compose.yml

version: "3"
services:
  ghost:
    image: ghost:4
    ports:
      - 8080:2368
    environment:
      url: https://ghost.devopsroles.com
      database__client: mysql
      database__connection__host: ghost_mysql
      database__connection__user: root
      database__connection__password: databasePw
      database__connection__database: ghost
    volumes:
      - ghost:/var/lib/ghost/content
    restart: unless-stopped

  ghost_mysql:
    image: mysql:5.7
    expose:
      - 3306
    environment:
      MYSQL_DATABASE: ghost
      MYSQL_ROOT_PASSWORD: databasePw
    volumes:
      - mysql:/var/lib/mysql
    restart: unless-stopped
volumes:
  ghost:
  mysql:

Proxying Traffic to Your Container

Add NGINX to your host:

sudo apt update
sudo apt install nginx

# Allow HTTP/HTTPS traffic through the firewall
sudo ufw allow 80
sudo ufw allow 443

Define an NGINX host for your site in /etc/nginx/sites-available/ghost.devopsroles.com

server {

    listen 80;
    server_name ghost.devopsroles.com;
    index index.html;

    access_log /var/log/nginx/ghost_access.log;
    error_log /var/log/nginx/ghost_error.log error;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_redirect off;
        proxy_set_header Host $http_host;
        proxy_set_header X-Original-IP $remote_addr;
    }

}

Create symbolic link

sudo ln -s /etc/nginx/sites-available/ghost.devopsroles.com /etc/nginx/sites-enabled/ghost.devopsroles.com
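Before restarting, it is worth validating the NGINX configuration:

sudo nginx -t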

Restart Nginx service

sudo systemctl restart nginx

The result: the Ghost blog is deployed with Docker and served through NGINX.


Conclusion

You’ve successfully deployed a Ghost blog using Docker, taking advantage of the benefits of containerization for managing your blog.

With your Ghost blog now live, you can explore additional features to enhance your blog’s capabilities, such as using custom themes, adding integrations, and scaling your blog to handle increasing traffic.

I hope you found this helpful. Thank you for reading the DevopsRoles page!

How to install Docker compose on Ubuntu

Introduction

In this tutorial, you will learn how to install Docker Compose on Ubuntu 21.04. Docker is an open platform that allows you to build, test, and deploy applications quickly.

Docker Compose is used for defining and running multi-container Docker applications. It allows you to launch, run, and connect all of an application’s services with a single coordinated command. Docker Compose is yet another useful Docker tool.

Prerequisites

  • A system running Ubuntu 21.04
  • A user account with sudo privileges
  • Docker installed on Ubuntu 21.04

Step 1: Update your system

Update your existing packages:

sudo apt update

Step 2: Install curl package

sudo apt install curl -y

Step 3: Download the Latest Docker compose Version

For newer stable Docker Compose versions, refer to the official list of releases on GitHub, then substitute the version number in the download URL below:

sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

Step 4: Change File Permission

sudo chmod +x /usr/local/bin/docker-compose
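If the docker-compose command is not found afterwards, an optional step mentioned in the official install notes is to symlink the binary into /usr/bin:

sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose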

Step 5: Check the Docker Compose Version

To verify the installation, check the Docker Compose version with the command below:

docker-compose --version

Uninstall Docker compose on Ubuntu

Delete the Binary

sudo rm /usr/local/bin/docker-compose

Uninstall the Package

sudo apt remove docker-compose

Remove Software Dependencies

sudo apt autoremove


Conclusion

You have installed Docker Compose on Ubuntu 21.04. I hope you found this helpful. Thank you for reading the DevopsRoles page!

How to install Docker on Ubuntu

Introduction

This tutorial explains how to install Docker on Ubuntu 21.04, highlighting Docker as an efficient open platform for building, testing, and deploying applications. Docker simplifies and accelerates the deployment process, making it less time-consuming to build and test applications. The guide is ideal for anyone looking to streamline their development workflow using Docker on the Ubuntu system.

How to install Docker on Ubuntu

To install Docker on Ubuntu, you can follow these steps:

Prerequisites

  • A system running Ubuntu 21.04
  • A user account with sudo privileges

Step 1: Update your system

Update your existing packages:

sudo apt update

Step 2: Install the curl package

sudo apt install curl -y

Step 3: Download the Latest Docker Version

curl -fsSL https://get.docker.com -o get-docker.sh

Step 4: Install Docker

sh get-docker.sh

Step 5: Allow the current user to access the Docker daemon

To avoid using sudo for Docker commands, add your user to the docker group:

sudo usermod -aG docker $USER
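You will need to log out and back in for the new group membership to take effect, or activate it in the current shell:

newgrp docker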

Step 6: Check Docker Version

To verify the installation, check the Docker version by command below:

docker --version
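As an optional extra check, run the hello-world image to confirm the daemon can pull and run containers:

docker run hello-world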

Uninstall Docker on Ubuntu

Check which Docker packages are installed on Ubuntu:

dpkg -l | grep -i docker

Use apt-get purge to uninstall Docker on Ubuntu and remove its data directory:

sudo apt-get purge docker-ce docker-ce-cli docker-ce-rootless-extras docker-scan-plugin
sudo rm -rf /var/lib/docker

Remove Software Dependencies

sudo apt autoremove

Conclusion

That is how to install Docker on Ubuntu 21.04. After completing these steps, Docker should be successfully installed on your Ubuntu system, and you can start using Docker commands to manage containers and images. I hope you found this helpful. Thank you for reading the DevopsRoles page!

Step-by-Step: Create Docker Image from a Running Container

Introduction

In this tutorial, we will deploy an Nginx container, modify it, and then create a new image from that running container.

What does docker mean?

Docker is a set of platform-as-a-service products that use OS-level virtualization to deliver software in packages called containers. Containers are isolated from one another and bundle their own software, libraries, and configuration files (quoted from Wikipedia).

Install Docker on Ubuntu

If you don’t already have Docker installed, let’s do so. I will install Docker on Ubuntu Server 21.04.

To install Docker on your Ubuntu server, run the command below:

sudo apt-get install docker.io -y

Add your user to the docker group with the command below

sudo usermod -aG docker $USER

Log out and log back in to ensure the changes take effect.

Create Docker Image from a Running Container

Create the New Container

We will create the new container with the command below:

sudo docker create --name nginx-devops -p 80:80 nginx:alpine
  • Create a new container: nginx-devops
  • Internal port ( Guest ): 80
  • External ( host ) port: 80
  • Use image: nginx:alpine

The output terminal is the picture below:

Start the Nginx container with the command below:
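sudo docker start nginx-devops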

After starting the container, open a web browser and point it at your server. You should see the NGINX welcome page.

Modify the Existing Container

We will create a new index.html page for Nginx.

To do this, create a new page with the command below

vi index.html

In that file, paste the content (you can modify it to say whatever you want):

<html>
    <h2>DevopsRoles.com</h2>
</html>

Save and close the file.

Copy index.html to the document root of the nginx-devops container with the command below:

sudo docker cp index.html nginx-devops:/usr/share/nginx/html/index.html

When you refresh the page in your web browser, you will see the new welcome page, as in the picture below:

Create a New Image

Now let’s create a new image that includes the changes. It is very simple.

  1. We will commit the changes with the command below:
sudo docker commit nginx-devops

We will list all current images with the command below:

sudo docker images

2. We will tag the new image as nginx-devops-container:

sudo docker tag IMAGE_ID nginx-devops-container

Where IMAGE_ID is the actual ID of your new image.

List the images again and you’ll see something like this:

sudo docker images

The terminal output of steps 1 and 2 is shown in the picture below.

You’ve created a new Docker image from a running container.

Let’s stop and remove the original container. We will remove the nginx-devops container with the commands below:

sudo docker ps -a
sudo docker stop ID
sudo docker rm ID

Where ID is the ID of the original container (the first few characters are enough).

You could deploy a container from the new image with a command like:

sudo docker create --name nginx-new -p 80:80 nginx-devops-container:latest

The full terminal session looks like this:

vagrant@devopsroles:~$ sudo docker ps -a
CONTAINER ID   IMAGE          COMMAND                  CREATED          STATUS         PORTS                               NAMES
fe3d2e383b80   nginx:alpine   "/docker-entrypoint.…"   11 minutes ago   Up 8 minutes   0.0.0.0:80->80/tcp, :::80->80/tcp   nginx-devops
vagrant@devopsroles:~$ sudo docker stop fe3d2e383b80
fe3d2e383b80
vagrant@devopsroles:~$ sudo docker rm fe3d2e383b80
fe3d2e383b80
vagrant@devopsroles:~$ sudo docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
vagrant@devopsroles:~$ sudo docker create --name nginx-new -p 80:80 nginx-devops-container:latest
91175e61375cf86fc935c55081be6f81354923564c9c0c0f4e5055ef0f590600
vagrant@devopsroles:~$ sudo docker ps -a
CONTAINER ID   IMAGE                           COMMAND                  CREATED              STATUS    PORTS     NAMES
91175e61375c   nginx-devops-container:latest   "/docker-entrypoint.…"   About a minute ago   Created             nginx-new
vagrant@devopsroles:~$ sudo docker start 91175e61375c
91175e61375c
vagrant@devopsroles:~$

What is the difference between docker commit and docker build?

docker commit creates an image from a container’s current state, while docker build creates an image from a Dockerfile, allowing for a more controlled and reproducible build process.
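As a brief, hedged illustration of both approaches (the tag nginx-devops-container:v2 and the author string below are just examples):

# Snapshot approach: commit the live container, recording an author and a message
docker commit -a "DevopsRoles" -m "add custom index.html" nginx-devops nginx-devops-container:v2

# Reproducible approach: describe the same change in a Dockerfile
#   FROM nginx:alpine
#   COPY index.html /usr/share/nginx/html/index.html
# and build it (assumes index.html and the Dockerfile sit in the current directory)
docker build -t nginx-devops-container:v2 .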

Refresh your web browser and you should, once again, see the DevopsRoles.com welcome page.


Conclusion

Creating a Docker image from a running container is a powerful feature that enables you to capture the exact state of an application at any given moment. By following the steps outlined in this guide, you can easily commit a running container to a new image and use advanced techniques to add tags, commit messages, and author information. Whether you’re looking to back up your application, replicate environments, or share your work with others, this process provides a simple and effective solution. I hope you found this helpful. Thank you for reading the DevopsRoles page!

Docker deploy a Bitwarden server

Introduction

In this tutorial, you will learn how to deploy an in-house password manager server with Docker.

Bitwarden is an integrated open source password management solution for individuals, teams, and business organizations.

Install Docker on Ubuntu

sudo apt install apt-transport-https ca-certificates curl gnupg-agent software-properties-common -y
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt update
sudo apt install docker-ce docker-compose

Obtain Bitwarden’s Installation key and ID

Get your installation key and ID from the Bitwarden hosting page, as shown in the picture below.

The result is displayed as in the picture below.

Create the Bitwarden user

sudo mkdir /opt/bituser
sudo adduser bituser
sudo chmod -R 700 /opt/bituser
sudo chown -R bituser:bituser /opt/bituser
sudo usermod -aG docker bituser

Change to the Bitwarden user with the command below

su bituser
cd
pwd

Download the script and deploy Bitwarden

Download the script with the command below

curl -Lso bitwarden.sh https://go.btwrdn.co/bw-sh && chmod 700 bitwarden.sh

Bitwarden uses port 80. If Apache or Nginx is running, stop it first:

sudo systemctl stop apache2
# Redhat
sudo systemctl stop httpd
# Stop Nginx
sudo systemctl stop nginx

Install Bitwarden

./bitwarden.sh install

The output terminal is shown below.

Finally, we need to configure the SMTP server that Bitwarden will use.

After installing Bitwarden, open the configuration file with:

nano /home/bituser/bwdata/env/global.override.env

Replace every REPLACE placeholder with your SMTP server settings, and set ADMIN_EMAIL to your administrator email address.

globalSettings__mail__smtp__host=REPLACE
globalSettings__mail__smtp__port=REPLACE
globalSettings__mail__smtp__ssl=REPLACE
globalSettings__mail__smtp__username=REPLACE
globalSettings__mail__smtp__password=REPLACE
adminSettings__admins= ADMIN_EMAIL
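If Bitwarden was already running when you edited global.override.env, restart it so the new SMTP settings take effect:

./bitwarden.sh restart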

Start the Bitwarden server.

./bitwarden.sh start

Access your Bitwarden server

Open a web browser and point it to https://SERVER

The result is displayed as in the picture below.

Note: Create a new account to log in to Bitwarden.

Conclusion

You have deployed a Bitwarden server. I hope you found this helpful. Thank you for reading the DevopsRoles page!

Docker run PostgreSQL: A Step-by-Step Guide

Introduction

In today’s fast-paced development environments, the ability to quickly deploy and manage databases is crucial. Docker provides a powerful solution for running PostgreSQL databases in isolated containers, making it easier to develop, test, and deploy your applications. In this tutorial, you will learn how to use docker run to launch PostgreSQL databases and connect to them, enabling you to efficiently manage your database environments with minimal setup. Whether you’re new to Docker or looking to streamline your database management, this guide will equip you with the essential knowledge to get started.

  • PostgreSQL is a powerful, open-source object-relational database
  • Docker is an open platform that runs an application in an isolated environment called a container.

Prerequisites

  • Docker installed on your server or local machine

Docker Run PostgreSQL container

You can use docker run to launch a PostgreSQL database container. Below is an example command:

docker run --name my-postgres-db -p 9000:5432 -e POSTGRES_PASSWORD=123456789  -e POSTGRES_USER=devopsroles  -e POSTGRES_DB=my-db -d postgres:14

Note:

  • -p 9000:5432: Host port 9000 and Container port 5432
  • Image: postgres version 14
  • Container name: my-postgres-db
  • Environment variables to configure our database: POSTGRES_USER, POSTGRES_PASSWORD, and POSTGRES_DB

The output terminal is below

Use the psql command from the host to connect to the database (note that the host port is 9000, as mapped above):

psql --host localhost --port 9000 --username devopsroles --dbname my-db

The output terminal is below

Your database is currently empty. I will create a table as an example

CREATE TABLE sites (id SERIAL PRIMARY KEY, name VARCHAR(100));
INSERT INTO sites (name)
  VALUES ('devopsroles.com'), ('huuphan.com');

I will run a command to query the table created.

SELECT * FROM sites;

The output terminal is below
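Based on the two rows inserted above, the query should return something like:

 id |      name
----+-----------------
  1 | devopsroles.com
  2 | huuphan.com
(2 rows)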

Manage data persistence with Docker volumes

Stopping and starting the container with “docker stop my-postgres-db” and “docker start my-postgres-db” keeps the data. The problem is that removing the container and creating a new one will not give you access to the database you created, because the data was isolated inside the old container.

The solution is to store the database outside of the container in a Docker volume. Create a new volume with the following command:

docker volume create my-postgres-db-db

Stop and remove your current container, then create a new one that mounts the volume:

docker stop my-postgres-db
docker rm my-postgres-db
docker run --name my-postgres-db -p 5432:5432  -e POSTGRES_PASSWORD=123456789  -e POSTGRES_USER=devopsroles  -e POSTGRES_DB=my-db -v my-postgres-db-db:/var/lib/postgresql/data -d postgres:14

To find out where the database is stored on your computer, inspect the volume:

docker inspect my-postgres-db-db

The output terminal is below
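On a typical Linux installation, the field to look at is Mountpoint, which points into Docker’s volumes directory. A trimmed example of the inspect output (other fields omitted; paths may vary on your system):

[
    {
        "Driver": "local",
        "Mountpoint": "/var/lib/docker/volumes/my-postgres-db-db/_data",
        "Name": "my-postgres-db-db",
        "Scope": "local"
    }
]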

Conclusion

Using Docker to run PostgreSQL databases offers a streamlined approach to managing your database environments with ease and efficiency. I hope this tutorial has provided you with the necessary insights and steps to confidently set up and connect to PostgreSQL using Docker. Thank you for reading the DevopsRoles page and I hope this guide proves helpful in your journey toward optimizing your development and deployment processes.

How to delete docker image with dependent child images

As experienced DevOps engineers, we often treat Docker image cleanup as a routine garbage collection task. However, the Docker daemon occasionally halts this process with a specific conflict error that prevents the removal of an image ID. If you have attempted to remove a base image and encountered the error conflict: unable to delete [IMAGE_ID] (cannot be forced) - image has dependent child images, you are dealing with Docker’s layered filesystem architecture in action.

This guide dives into the mechanics of UnionFS layers to explain why this happens and provides production-ready strategies to delete docker image with dependent child images safely and effectively.

Understanding the “Dependent Child Images” Error

To fix the problem, we must first respect the underlying architecture. Docker images are not monolithic files; they are a stack of read-only layers. When you build Image B using FROM Image A, Image B becomes a “child” of Image A. Docker uses storage drivers (like overlay2) to reference the layers of the parent image rather than duplicating them.

The error occurs because deleting the parent image (Image A) would render the child image (Image B) corrupt, as its base layers would vanish from the filesystem.

Technical Context: Unlike a running container conflict—which blocks deletion because a read-write layer is active—a dependent child conflict is purely about filesystem integrity. The Docker daemon protects the Directed Acyclic Graph (DAG) of your image layers.

In the example below, I cannot delete a Docker image because it has dependent child images. Let’s walk through how to remove it.

I want to delete image e16184e8dd39

[vagrant@localhost docker-flask-app]$ docker images
REPOSITORY             TAG       IMAGE ID       CREATED        SIZE
python                 3.6       e16184e8dd39   9 days ago     902MB
mysql                  5.7       938b57d64674   2 weeks ago    448MB
docker-flask-app_app   latest    44ae2f35ec29   3 weeks ago    915MB
<none>                 <none>    0e0359a5ec25   3 weeks ago    908MB
<none>                 <none>    3a59efe32b9c   3 weeks ago    908MB
postgres               latest    6ce504119cc8   5 weeks ago    374MB
odoo                   14        4f53998176ca   5 weeks ago    1.41GB
postgres               <none>    346c7820a8fb   2 months ago   315MB

I try to remove it with the command below:

sudo docker rmi e16184e8dd39

Error: unable to delete a Docker image that has dependent child images

[vagrant@localhost docker-flask-app]$ sudo docker rmi e16184e8dd39
Error response from daemon: conflict: unable to delete e16184e8dd39 (cannot be forced) - image has dependent child images
[vagrant@localhost docker-flask-app]$

The image cannot be removed even with the -f flag; as the error says, this conflict cannot be forced.
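Before removing anything, it can help to identify which local images were built on top of the parent. One common (if imperfect) sketch is to print each image’s Parent field and look for lines where the ID you want to delete appears in that column; note that Parent is only populated for images built locally with the classic builder:

docker image inspect --format='{{.RepoTags}} {{.Id}} {{.Parent}}' $(docker images -a -q) | grep e16184e8dd39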

How to Fix It

You should try to remove unnecessary images before removing the image:

[vagrant@localhost docker-flask-app]$ docker rmi $(docker images -q) -f

The output terminal is shown below.

Note: The command above will remove all images.

After that, remove the target image:

docker rmi -f e16184e8dd39

Expert Troubleshooting: Why `docker rmi` Still Fails

Sometimes, even after following the steps above to delete docker image with dependent child images, the daemon persists with errors. Here are the edge cases:

  • Stopped Containers: A stopped container still holds a reference to the image. Ensure you run docker ps -a to catch exited containers.
  • Build Cache: Modern Docker BuildKit uses a separate cache. If you are seeing space usage issues not resolved by rmi, you may need to prune the build cache specifically.
docker builder prune

For a deeper understanding of how Docker manages storage and references, consult the official Docker Storage Driver documentation.

Frequently Asked Questions (FAQ)

What is the difference between a dependent child image and a container?

A container is a runtime instance of an image (a read-write layer on top). A dependent child image is a separate static image that was built FROM the parent image. docker rm handles containers, while docker rmi handles images.

Why do I see so many <none> images?

These are usually dangling images (layers that have no relationship to any tagged images) or intermediate layers. They frequently cause dependency errors when you try to delete their parents. Using docker image prune is the standard maintenance procedure for this.

Is it safe to delete the /var/lib/docker folder directly?

This is the “nuclear option” and should be avoided unless the daemon is completely corrupted. Manually deleting files in /var/lib/docker bypasses the Docker daemon’s database, which can lead to inconsistent states and require a full reinstall of the Docker engine.

Conclusion

Managing the lifecycle of container artifacts is a core competency for any DevOps engineer. The error regarding dependent child images is a safety mechanism, ensuring that the shared layers required by your ecosystem remain intact.

To successfully delete a Docker image with dependent child images, prioritize identifying the child image using docker inspect. Use force flags judiciously, and lean on docker system prune for maintaining hygiene in your build environments. By understanding the parent-child relationship in UnionFS, you can keep your registry clean without breaking production dependencies. Thank you for reading the DevopsRoles page!

Docker setup Nginx Flask and Postgres

Introduction

In this tutorial, you will learn how to use Docker to set up Nginx, Flask, and Postgres.

Prerequisites

  • Docker and Docker Compose installed on your host

Docker setup Nginx Flask and Postgres

The folder and file structure of the app is shown below.

Nginx

The first container is Nginx. It will be used as a proxy server in front of the Python application:

User --> Nginx --> Python application

Dockerfile file

[vagrant@localhost nginx-flask-postgres]$ cat nginx/Dockerfile
FROM nginx:latest

RUN rm /etc/nginx/conf.d/default.conf
COPY nginx/devopsroles.conf /etc/nginx/conf.d/

Example devopsroles.conf file

server {
  listen 80;
  server_name _;

  location / {
    try_files $uri @app;
  }

  location @app {
    include /etc/nginx/uwsgi_params;
    uwsgi_pass flask:8080;
  }
}

Flask

The second container is the Flask (Python) application running on a uWSGI server.

Dockerfile file

[vagrant@localhost nginx-flask-postgres]$ cat flask/Dockerfile
# Base Image
FROM python:3.6-alpine as BASE

RUN apk add --no-cache linux-headers g++ postgresql-dev gcc build-base linux-headers ca-certificates python3-dev libffi-dev libressl-dev libxslt-dev
RUN pip wheel --wheel-dir=/root/wheels psycopg2
RUN pip wheel --wheel-dir=/root/wheels cryptography

# Actual Image
FROM python:3.6-alpine as RELEASE

EXPOSE 8080
WORKDIR /app

ENV POSTGRES_USER="" POSTGRES_PASSWORD="" POSTGRES_HOST=postgres POSTGRES_PORT=5432 POSTGRES_DB=""

COPY dist/ ./dist/
COPY flask/uwsgi.ini ./
COPY --from=BASE /root/wheels /root/wheels

RUN apk add --no-cache build-base linux-headers postgresql-dev pcre-dev libpq uwsgi-python3 && \
    pip install --no-index --find-links=/root/wheels /root/wheels/* && \
    pip install dist/*

CMD ["uwsgi", "--ini", "/app/uwsgi.ini"]

Note:

  • It exposes port 8080
  • creates a default directory /app/

uwsgi.ini file

[uwsgi]
socket = :8080
module = devopsroles.wsgi:app
master = 1
processes = 4
plugin = python

Postgres

We use the latest Postgres image from Docker Hub and pass environment variables to it via an env file.

Example: database.conf file

[vagrant@localhost nginx-flask-postgres]$ cat postgres/database.conf
POSTGRES_USER=test
POSTGRES_PASSWORD=password
POSTGRES_HOST=postgres
POSTGRES_PORT=5432
POSTGRES_DB=devopsroles

The final docker-compose.yml

[vagrant@localhost nginx-flask-postgres]$ pwd
/home/vagrant/nginx-flask-postgres
[vagrant@localhost nginx-flask-postgres]$ cat docker-compose.yml
version: '3.5'

services:
  web_server:
    container_name: nginx
    build:
      context: .
      dockerfile: nginx/Dockerfile
    ports:
      - 80:80
    depends_on:
      - app

  app:
    container_name: flask
    build:
      context: .
      dockerfile: flask/Dockerfile
    env_file: postgres/database.conf
    expose:
      - 8080
    depends_on:
      - database

  database:
    container_name: postgres
    image: postgres:latest
    env_file: postgres/database.conf
    ports:
      - 5432:5432
    volumes:
      - db_volume:/var/lib/postgresql

volumes:
  db_volume:

Docker Compose Build and Run

docker-compose up --build -d
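Once the build completes, a quick smoke test confirms that all three services are up and that NGINX answers on port 80 (output will vary):

docker-compose ps
curl -I http://localhost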

The result: the Nginx, Flask, and Postgres containers are running.

Conclusion

You have deployed Nginx, Flask, and Postgres with Docker. I hope you found this helpful. Thank you for reading the DevopsRoles page!

Deploy Flask-MySQL app with docker-compose

Introduction

In this tutorial, you will learn how to deploy a Flask-MySQL app with docker-compose. From the official docs: Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration. First, you need to install Docker and docker-compose. Next, we will deploy the Flask-MySQL app.

The folder and file structure of the app

[vagrant@localhost ~]$ tree docker-flask-app
docker-flask-app
├── app
│   ├── app.py
│   ├── Dockerfile
│   └── requirements.txt
├── db
│   └── init.sql
└── docker-compose.yml

2 directories, 5 files
[vagrant@localhost ~]$
  • app.py is the Flask app, which connects to the database and exposes a REST API endpoint.
  • init.sql is an SQL script that initializes the database before the app runs.

The contents of app.py and init.sql are shown below:

[vagrant@localhost docker-flask-app]$ cat app/app.py
from typing import List, Dict
from flask import Flask
import mysql.connector
import json

app = Flask(__name__)

def test_table() -> List[Dict]:
    config = {
        'user': 'root',
        'password': 'root',
        'host': 'db',
        'port': '3306',
        'database': 'devopsroles'
    }
    connection = mysql.connector.connect(**config)
    cursor = connection.cursor()
    cursor.execute('SELECT * FROM test_table')
    results = [{name: color} for (name, color) in cursor]
    cursor.close()
    connection.close()

    return results


@app.route('/')
def index() -> str:
    return json.dumps({'test_table': test_table()})


if __name__ == '__main__':
    app.run(host='0.0.0.0')
[vagrant@localhost docker-flask-app]$ cat db/init.sql
create database devopsroles;
use devopsroles;

CREATE TABLE test_table (
  name VARCHAR(20),
  color VARCHAR(10)
);

INSERT INTO test_table
  (name, color)
VALUES
  ('dev', 'blue'),
  ('pro', 'yellow');

Create a Docker image for Flask app

Create a Dockerfile file in the app folder.

[vagrant@localhost docker-flask-app]$ cat app/Dockerfile
# Use an official Python runtime as an image
FROM python:3.6

# The EXPOSE instruction indicates the ports on which a container
EXPOSE 5000

# Sets the working directory for following COPY and CMD instructions
# Notice we haven’t created a directory by this name - this instruction
# creates a directory with this name if it doesn’t exist
WORKDIR /app

COPY requirements.txt /app
RUN python -m pip install --upgrade pip
RUN pip install --trusted-host pypi.org --trusted-host pypi.python.org --trusted-host=files.pythonhosted.org --no-cache-dir -r requirements.txt

# Run app.py when the container launches
COPY app.py /app
CMD python app.py

You need the Flask and mysql-connector dependencies in the requirements.txt file:

[vagrant@localhost docker-flask-app]$ cat app/requirements.txt
flask
mysql-connector

Create a docker-compose.yml

[vagrant@localhost docker-flask-app]$ cat docker-compose.yml
version: "2"
services:
  app:
    build: ./app
    links:
      - db
    ports:
      - "5000:5000"
  db:
    image: mysql:5.7
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - ./db:/docker-entrypoint-initdb.d/:ro

Running the Flask app

[vagrant@localhost docker-flask-app]$ docker-compose up -d

The result after running the Flask app is shown below.
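Given the rows seeded by init.sql, requesting the Flask endpoint from the Docker host should return JSON along these lines:

curl http://localhost:5000
{"test_table": [{"dev": "blue"}, {"pro": "yellow"}]}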

FAQs

1. What is Docker-Compose?

Docker-Compose is a tool for defining and running multi-container Docker applications. It allows you to configure your application’s services in a YAML file and start all services with a single command.

2. How can I persist data in MySQL?

The Compose file above only mounts ./db for initialization. To persist MySQL data, add a named volume (for example db_data) mapped to /var/lib/mysql so the data survives even if the container is removed, as shown in the sketch below.
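A hedged sketch of the db service with such a named volume added (only the volume lines differ from the file above):

  db:
    image: mysql:5.7
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - ./db:/docker-entrypoint-initdb.d/:ro
      - db_data:/var/lib/mysql

volumes:
  db_data: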

3. Can I use a different database with Flask?

Yes, Flask can work with various databases like PostgreSQL, SQLite, and more. You need to adjust the connection setup in your Flask app and Docker-Compose file accordingly.

Conclusion

You have deployed the Flask-MySQL app with docker-compose. I hope you found this helpful. Thank you for reading the DevopsRoles page!

How to install Odoo on Docker Container

Introduction

In this tutorial, I will install Odoo version 13/14 in a Docker container. Odoo is a suite of well-known open-source business software that covers all your company needs: CRM, eCommerce, inventory, point of sale, project, and more. Next, we will install Odoo in a Docker container.

Install Odoo on Docker Container

  • OS Host: Centos 7
  • Docker image: odoo:14 and Postgres

Install Odoo Docker Image

To install Odoo use the command below:

#For odoo version 14
docker pull odoo:14

# For Odoo 13
docker pull odoo:13

Install PostgreSQL Database Docker Image

Use the command below:

docker pull postgres

The output terminal is as follows:

Create Database Container

docker run -d -v odoo-db:/var/lib/postgresql/data -e POSTGRES_USER=odoo -e POSTGRES_PASSWORD=odoo -e POSTGRES_DB=postgres --name db postgres

Note:

  • odoo-db:/var/lib/postgresql/data – stores the database data in a named volume, so the Odoo data remains even after the container is removed.
  • POSTGRES_USER=odoo – the user created for the database
  • POSTGRES_PASSWORD=odoo – the password for the created database user
  • POSTGRES_DB=postgres – the database name
  • --name db – the container name
  • postgres – the name of the Docker image

Create and Run Odoo Container

docker run -v odoo-data:/var/lib/odoo -d -p 8069:8069 --name odoo --link db:db -t odoo:14

The output terminal is as follows
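To confirm that both the db and odoo containers are running, and to follow Odoo’s startup messages, you can run:

docker ps
docker logs odoo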

Allow port firewall

For Ubuntu, Debian, and others similar:

sudo ufw allow 8069

For RHEL, CentOS, AlmaLinux, RockyLinux, Oracle:

firewall-cmd --zone=public --add-port=8069/tcp --permanent
firewall-cmd --reload

Access Odoo Web interface

From your PC, access Odoo and create a database.

For example, http://192.168.3.4:8069

The result: Odoo is installed in a Docker container.

Conclusion

You have installed Odoo in a Docker container. I hope you found this helpful. Thank you for reading the DevopsRoles page!