Create a Docker secret and deploy a service

Introduction

In this tutorial, you will learn how to create a Docker secret and deploy a service that uses it. Docker secrets store sensitive data such as passwords and certificates in encrypted form and make it available only to the services and containers that need it.

Requirements

How to create a secret

We'll use printf and pipe its output to the docker command to create a secret called test_secret, as shown below:

printf "My secret secret" | docker secret create test_secret -

Check the result with the command below:

docker secret ls

The output is as below:

vagrant@controller:~$ docker secret ls
ID                          NAME          DRIVER    CREATED          UPDATED
txrthzah1vnl4kyrh282j39ft   test_secret             24 seconds ago   24 seconds ago

Create a service that uses the secret

To deploy a service that uses the test_secret secret, the command looks like this:

docker service  create --name redis --secret test_secret redis:alpine

Verify that the service is running with the command below:

docker service ps redis

The output is as below:

vagrant@controller:~$ docker service ps redis
ID             NAME      IMAGE          NODE         DESIRED STATE   CURRENT STATE            ERROR     PORTS
y6249s3xftxa   redis.1   redis:alpine   controller   Running         Running 33 seconds ago   

Verify that the service has access to the secret with the command below:

docker container exec $(docker ps --filter name=redis -q) ls -l /run/secrets

The output is as below:

vagrant@controller:~$ docker container exec $(docker ps --filter name=redis -q) ls -l /run/secrets
total 4
-r--r--r--    1 root     root            16 May 30 13:50 test_secret

Finally, you can view the contents of the secret with the command:

docker container exec $(docker ps --filter name=redis -q) cat /run/secrets/test_secret

The output is as below:

My secret secret

If you commit the container, the secret is no longer available.

docker commit $(docker ps --filter name=redis -q) committed_redis

Verify the secret is no longer available with the command below:

docker run --rm -it committed_redis cat /run/secrets/test_secret

You can then remove access to the secret with the command:

docker service update --secret-rm test_secret redis
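After the update completes, the service tasks are redeployed without the secret. You can re-run the earlier check to confirm the file is gone from the container (the cat command should now fail because /run/secrets/test_secret no longer exists):

docker container exec $(docker ps --filter name=redis -q) cat /run/secrets/test_secret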

Conclusion

You have created a Docker secret and deployed a service that uses it. I hope you find this helpful. Thank you for reading the DevopsRoles page!

Using Ansible with Testinfra to test infrastructure

Introduction

In this tutorial, you will learn how to use Ansible with Testinfra to test your infrastructure. You can write unit tests in Python to verify the actual state of your target servers.

Ansible Testinfra is a testing framework that allows you to write tests for your Ansible playbooks and roles. It is built on top of the popular Python testing framework pytest and provides a simple way to test the state of your systems after running Ansible playbooks.

Testinfra is a Python library for writing tests that verify an infrastructure's state, and it runs on the pytest test engine.

Ansible Testinfra test infrastructure

Install Testinfra

Method 1: Use pip (the Python package manager) to install Testinfra inside a Python virtual environment.

python3 -m venv ansible
source ansible/bin/activate
(ansible) $ pip install testinfra

Method 2: Testinfra is also available in the package repositories of Fedora and CentOS via the EPEL repository. For example, install it on CentOS 7 with the commands below:

$ yum install -y epel-release
$ yum install -y python-testinfra

For example, a simple test script

I create test.py with the content below:

import testinfra

def test_os_release(host):
    assert host.file("/etc/os-release").contains("centos")

def test_sshd_active(host):
    assert host.service("sshd").is_running is True
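Testinfra's host fixture exposes many other modules (packages, users, sockets, and more). As a sketch, you could extend test.py with checks like the following; the package name is an assumption for a CentOS-style host:

def test_openssh_installed(host):
    # Verify the SSH server package is installed
    assert host.package("openssh-server").is_installed

def test_ssh_port_listening(host):
    # Verify something is listening on TCP port 22
    assert host.socket("tcp://0.0.0.0:22").is_listening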

To run these tests on your local machine, execute the following command:

(ansible)$ pytest test.py

Testinfra and Ansible

Use pip to install the Ansible package.

pip install ansible

Here is the resulting output:

(ansible) [vagrant@ansible ~]$ pip install ansible
Collecting ansible
  Downloading ansible-4.10.0.tar.gz (36.8 MB)
     |████████████████████████████████| 36.8 MB 6.9 MB/s
  Preparing metadata (setup.py) ... done
Collecting ansible-core~=2.11.7
  Downloading ansible-core-2.11.11.tar.gz (7.1 MB)
     |████████████████████████████████| 7.1 MB 9.6 MB/s
  Preparing metadata (setup.py) ... done
Collecting jinja2
  Downloading Jinja2-3.0.3-py3-none-any.whl (133 kB)
     |████████████████████████████████| 133 kB 10.4 MB/s
Collecting PyYAML
  Downloading PyYAML-6.0-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (603 kB)
     |████████████████████████████████| 603 kB 8.7 MB/s
Collecting cryptography
  Downloading cryptography-37.0.2-cp36-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (4.1 MB)
     |████████████████████████████████| 4.1 MB 11.5 MB/s
Requirement already satisfied: packaging in ./ansible/lib/python3.6/site-packages (from ansible-core~=2.11.7->ansible) (21.3)
Collecting resolvelib<0.6.0,>=0.5.3
  Downloading resolvelib-0.5.4-py2.py3-none-any.whl (12 kB)
Collecting cffi>=1.12
  Downloading cffi-1.15.0-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl (405 kB)
     |████████████████████████████████| 405 kB 10.7 MB/s
Collecting MarkupSafe>=2.0
  Downloading MarkupSafe-2.0.1-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (30 kB)
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in ./ansible/lib/python3.6/site-packages (from packaging->ansible-core~=2.11.7->ansible) (3.0.9)
Collecting pycparser
  Downloading pycparser-2.21-py2.py3-none-any.whl (118 kB)
     |████████████████████████████████| 118 kB 12.3 MB/s
Using legacy 'setup.py install' for ansible, since package 'wheel' is not installed.
Using legacy 'setup.py install' for ansible-core, since package 'wheel' is not installed.
Installing collected packages: pycparser, MarkupSafe, cffi, resolvelib, PyYAML, jinja2, cryptography, ansible-core, ansible
    Running setup.py install for ansible-core ... done
    Running setup.py install for ansible ... done
Successfully installed MarkupSafe-2.0.1 PyYAML-6.0 ansible-4.10.0 ansible-core-2.11.11 cffi-1.15.0 cryptography-37.0.2 jinja2-3.0.3 pycparser-2.21 resolvelib-0.5.4
(ansible) [vagrant@ansible ~]$

Testinfra can directly use Ansible's inventory file and the groups of machines defined in it. For example, the inventory file hosts defines the Ubuntu target server.

(ansible) [vagrant@ansible ~]$ cat hosts
[target-server]
UbuntuServer

We verify the operating system and ensure that SSH is running on the target server.

To test using Testinfra and Ansible, use the following command:

pytest -vv --sudo --hosts=target-server --ansible-inventory=hosts --connection=ansible test.py

The terminal output is as follows:
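Testinfra also exposes Ansible itself on the host fixture when the ansible connection backend is used, so a test can call any Ansible module over the same inventory. A minimal sketch (the expected OS family is an assumption; for the Ubuntu target above it would be Debian):

def test_os_family_via_ansible(host):
    # Run the Ansible setup module through the testinfra ansible backend
    facts = host.ansible("setup")["ansible_facts"]
    assert facts["ansible_os_family"] == "Debian"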

Conclusion

You have used Ansible and Testinfra to test your infrastructure. Ansible and Testinfra are complementary tools that can be used together in an infrastructure provisioning and testing workflow, but they serve different purposes.

Ansible specializes in automation and configuration management, whereas Testinfra is dedicated to testing the infrastructure. I hope you find this information helpful. Thank you for visiting the DevopsRoles page!

Elastic APM Tool for Application Performance Monitoring

Introduction

Elastic APM helps you monitor overall application health and performance. It is part of the Elastic Stack, which includes Elasticsearch, Logstash, and Kibana.

Elastic APM allows you to track and analyze the performance metrics of your applications, identify bottlenecks, and troubleshoot issues.

Prerequisites

Elastic APM Tool

How to integrate an application with the APM stack: Elasticsearch, Kibana, and APM Server

We will create a docker-compose file with the content below:

nano docker-compose.yml

The file content is as below:

version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.6.1
    ports:
      - "9200:9200"
  kibana:
    image: docker.elastic.co/kibana/kibana:6.6.1
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
  apm-server:
    image: docker.elastic.co/apm/apm-server:6.6.1
    environment:
      - output.elasticsearch.hosts=["elasticsearch:9200"]
    ports:
      - "8200:8200"
    depends_on:
      - elasticsearch
Save and close that file.

Running the APM tool

docker-compose up -d

The result is the picture below:
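You can also confirm from the command line that the three containers are up:

docker-compose ps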

Now, we will start the application with the Java agent and point it to the APM Server URL:

java -javaagent:/<path-to-jar>/elastic-apm-agent-<version>.jar \
     -Delastic.apm.service_name=my-application \
     -Delastic.apm.server_url=http://localhost:8200 \
     -Delastic.apm.secret_token= \
     -Delastic.apm.application_packages=<base package> \
     -jar <jar name>.jar

Note: elastic.apm.application_packages is the base package where your application's main class is located.
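As an illustration only, with every value below being an assumption (agent path and version, package, and jar name), the full command might look like this:

java -javaagent:/opt/elastic/elastic-apm-agent-1.36.0.jar \
     -Delastic.apm.service_name=my-application \
     -Delastic.apm.server_url=http://localhost:8200 \
     -Delastic.apm.application_packages=com.example.myapp \
     -jar my-application.jar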

Configuration steps on Kibana

  • Go to http://localhost:5601
  • Click on Add APM
  • Scroll down and click on Check APM Server Status
  • Scroll down and click on Check agent status
  • Click on Load Kibana objects
  • Launch APM

APM is now ready and integrated with the service.

Conclusion

You have set up the Elastic APM tool for application performance monitoring.

Elastic APM is a comprehensive tool for monitoring the performance of your applications, providing deep insights into transaction traces, metrics, errors, and code-level details. It helps you identify performance issues, optimize application performance, and deliver a better user experience.

I hope you find this helpful. Thank you for reading the DevopsRoles page!

How to run a single command on multiple Linux machines at once

Introduction

In this tutorial, you will learn how to run a single command on multiple Linux machines at once. I will create a script on a Linux server that sends commands to multiple servers.

Prerequisites

  • Two virtual machines
  • SSH connectivity between the two servers

Configure SSH on Server 1

For example, the server is named controller and runs Ubuntu.

Create the SSH config file and check its contents with the command:

cat ~/.ssh/config

For example, the content will look like the output below.
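An illustrative entry is shown below. The host alias node1 is what the script further down greps for, and remote_admin matches the sudo prompt in that script; the HostName and key path are assumptions for your environment:

Host node1
    HostName 192.168.56.21
    User remote_admin
    IdentityFile ~/.ssh/id_rsa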

Create the script to run a single command on multiple Linux machines at once

We will create a script that runs a command on a remote Linux server.

sudo vi /usr/local/bin/script_node1

In that file, paste the following:

#!/bin/bash

# Get the user's input as command
[[ -z ${@} ]] && exit || CMD_EXEC="${@}"

# Get the hosts from ~/.ssh/config
HOSTS=$(grep -Po 'Host\s\Knode1' "$HOME/.ssh/config")

# Test whether the input command uses sudo
if [[ $CMD_EXEC =~ ^sudo ]]
then
    # Ask for password
    read -sp '[sudo] password for remote_admin: ' password; echo

    # Rewrite the command
    CMD_EXEC=$(sed "s/^sudo/echo '$password' | sudo -S/" <<< "$CMD_EXEC")
fi

# Loop over the hosts and execute the SSH command; remove `-a` from tee to overwrite the log instead of appending
while IFS= read -r host
do
   echo -e '\n\033[1;33m'"HOST: ${host}"'\033[0m'
   ssh -n "$host" "$CMD_EXEC 2>&1" | tee -a "/tmp/$(basename "${0}").${host}"
done <<< "$HOSTS"

Save and close the file.

Now, we will update the packages on the server named node1 with the command below:

script_node1 sudo apt-get update
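Any other command works the same way, because the script simply forwards its arguments over SSH. For example, to check the uptime of node1:

script_node1 uptime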


Conclusion

You have run a single command on multiple Linux machines at once. I hope you find this helpful. Thank you for reading the DevopsRoles page!

Docker Swarm cheat sheet: Essential Commands and Tips

Introduction

Here’s a Docker Swarm cheat sheet to help you with common commands and operations:

Docker Swarm is a powerful tool for container management and application orchestration. For those working in the DevOps field, mastering Docker Swarm commands and techniques is essential for effective system deployment and management.

This article provides a detailed cheat sheet, compiling important commands and useful tips, to help you quickly master Docker Swarm and optimize your workflow.

The Docker swarm cheat sheet

Docker swarm Management

Set up master

docker swarm init --advertise-addr <ip>

How to Force the Manager on a Broken Cluster

docker swarm init --force-new-cluster --advertise-addr <ip>

Enable auto-lock

docker swarm init --autolock

Get a token to join the workers

docker swarm join-token worker

Get a token to join the new manager

docker swarm join-token manager

Join the host as a worker

docker swarm join --token <worker-token> <server>:2377

Have a node leave a swarm

docker swarm leave

Unlock a manager host after the Docker daemon restarts

docker swarm unlock

Print key needed for ‘unlock’

docker swarm unlock-key

Print swarm node list

docker node ls

Docker Service Management

Create a new service:

docker service create <options> <image> <command>
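For example, the following creates a three-replica NGINX service published on port 80 (the name, replica count, and port are only illustrative):

docker service create --name web --replicas 3 -p 80:80 nginx:alpine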

List the services running in a swarm

docker service ls

Inspect a service:

docker service inspect <service-id>

Scale a service (increase or decrease replicas)

docker service scale <service-id>=<replica-count>

Update a service:

docker service update <options> <service-id>

Remove a service:

docker service rm <service-id>

List the tasks of the service_name

docker service ps service_name

List running (active) tasks for a given service

docker service ps --filter desired-state=running <service id|name>

Print the console log of a service

docker service logs --follow <service id|name>

Promote a worker node to the manager

docker node promote node_name

The terminal output for promoting a worker node to manager is shown below:

vagrant@controller:~$ docker node ls
ID                            HOSTNAME     STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
9b2211c8l1bmhu3h2ij3kthxv *   controller   Ready     Active         Leader           20.10.14
0j0pslqf4g6xkki8ajydvc123     node1        Ready     Active                          20.10.14
f4cxubqg0wqdxsaj8pe4qsqlg     node2        Ready     Active                          20.10.14

vagrant@controller:~$ docker node promote f4cxubqg0wqdxsaj8pe4qsqlg
Node f4cxubqg0wqdxsaj8pe4qsqlg promoted to a manager in the swarm.

vagrant@controller:~$ docker node ls
ID                            HOSTNAME     STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
9b2211c8l1bmhu3h2ij3kthxv *   controller   Ready     Active         Leader           20.10.14
0j0pslqf4g6xkki8ajydvc123     node1        Ready     Active                          20.10.14
f4cxubqg0wqdxsaj8pe4qsqlg     node2        Ready     Active         Reachable        20.10.14

Docker Stack Management

List running stacks

docker stack ls

Deploy a stack using a Compose file:

docker stack deploy --compose-file <compose-file> <stack-name>
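For instance, with an illustrative file and stack name:

docker stack deploy --compose-file docker-compose.yml my_stack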

Inspect a stack:

docker stack inspect <stack-name>

List services in a stack:

docker stack services <stack-name>

List containers in a stack:

docker stack ps <stack-name>

Remove a stack:

docker stack rm <stack-name>

Conclusion

You now have the Docker Swarm cheat sheet, which includes some of the most essential commands used in Docker Swarm.

For a more comprehensive list of options and additional commands, please refer to the official Docker documentation. I hope you find this helpful. Thank you for visiting the DevopsRoles page!

Deploying Services to a Docker Swarm Cluster

Introduction

In this tutorial, you will learn how to deploy services to a Docker Swarm cluster. First, you need a Docker Swarm cluster installed (see the installation tutorial). In today's world of distributed systems, containerization has become a popular choice for deploying and managing applications.

Docker Swarm, a native clustering and orchestration solution for Docker, allows you to create a swarm of Docker nodes that work together as a cluster.

In this blog post, we will explore the steps to deploy a service to a Docker Swarm cluster and take advantage of its powerful features for managing containerized applications.

Deploying Services to a Docker Swarm Cluster

To keep it simple, I will deploy an NGINX container service.

Log into the controller and run the following command:

docker service create --name nginx_test nginx

Check the service status with the command below:

vagrant@controller:~$ docker service ls
ID             NAME         MODE         REPLICAS   IMAGE          PORTS
44sp9ig3k65o   nginx_test   replicated   1/1        nginx:latest

Next, we will deploy a service with three replicas spread across the nodes, as shown below:

docker service create --replicas 3 --name nginx3nodes nginx

The result of deploying the service to the swarm is shown in the picture below.

To scale the service up to five replicas:

docker service scale nginx3nodes=5
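You can then check how the replicas are distributed across the nodes with:

docker service ps nginx3nodes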

We deploy Portainer on the controller to easily manage the swarm.

docker run -d -p 8000:8000 -p 9443:9443 --name=portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce

Open a browser and go to https://SERVER:9443 (where SERVER is the IP address of the server). You should see Swarm listed in the left navigation.

Swarm on portainer

Conclusion

You have deployed a service to the swarm. Docker Swarm takes care of scheduling the service across the Swarm nodes and managing its lifecycle. Docker Swarm simplifies the management and scaling of containerized applications, providing fault tolerance and high availability. By following the steps outlined in this blog post, you can easily deploy your services to a Docker Swarm cluster and take advantage of its powerful orchestration capabilities. I hope you find this helpful. Thank you for reading the DevopsRoles page!

Install Docker Swarm cluster

Introduction

In this tutorial, you will learn how to install a Docker Swarm cluster. Docker Swarm is a way to create a cluster for container deployment.

Within minutes you can have the cluster up and running for high availability, failover, and scalability.

To install a Docker Swarm cluster, you will need multiple nodes or machines that will act as Swarm managers and workers.

Here's a step-by-step guide on how to install a Docker Swarm cluster.

Prerequisites

  • Host OS: Ubuntu 21.04
  • 1 Controller.
  • 2 nodes.
  • Docker installed on the controller and both nodes (installation steps are shown below).

How to install Docker Swarm cluster

1. Log into the Docker Swarm controller

Run the commands below:

sudo apt-get update -y
sudo apt-get install ca-certificates curl gnupg lsb-release -y
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update -y 
sudo apt-get install docker-ce docker-ce-cli containerd.io -y
sudo systemctl enable --now docker
sudo usermod -aG docker $USER
newgrp docker

2. Log into Docker Swarm Node1

Run the commands below:

sudo apt-get update -y
sudo apt-get install ca-certificates curl gnupg lsb-release -y
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update -y 
sudo apt-get install docker-ce docker-ce-cli containerd.io -y
sudo systemctl enable --now docker
sudo usermod -aG docker $USER
newgrp docker

3. Log into Docker Swarm Node2

Run the commands below:

sudo apt-get update -y
sudo apt-get install ca-certificates curl gnupg lsb-release -y
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update -y 
sudo apt-get install docker-ce docker-ce-cli containerd.io -y
sudo systemctl enable --now docker
sudo usermod -aG docker $USER
newgrp docker

For example, as the picture below:

Back on the controller server, initialize the swarm with the command below:

docker swarm init --advertise-addr 192.168.56.11

The output includes a join command that will look something like the one below:

docker swarm join --token SWMTKN-1-1godvlo74ufchdrmck9earbshkxa2u91w7ss742bryl40f7c8i-aq684grkb94d7vaguh4aep7rt 192.168.56.11:2377

Log into Node1 and run the docker swarm join command.

Log into Node2 and run the docker swarm join command.

Verify the result on the controller server:

docker info
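In addition to docker info, listing the nodes confirms that the controller and both workers have joined the swarm:

docker node ls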

Conclusion

You have successfully installed a Docker Swarm cluster and can now deploy services to it. You can continue exploring Docker Swarm features to manage and scale your applications effectively. I hope you find this helpful. Thank you for reading the DevopsRoles page!

Easy Guide to Vagrant proxy configuration

Introduction

In this tutorial, we will explore the steps to implement Vagrant proxy configuration on a virtual machine. Configuring a proxy for Vagrant involves utilizing the various options provided by the Vagrant Proxy Configuration.

Vagrant proxy configuration is a crucial aspect of managing virtualized development environments seamlessly. Vagrant, a powerful tool for building and managing virtualized development environments, allows developers to create consistent and reproducible environments across different machines. When it comes to networking in these environments, proxy configuration plays a vital role, ensuring smooth communication between the virtual machine and external resources.

Configuring a proxy in Vagrant involves specifying the necessary settings to enable the virtual machine to access the internet through a proxy server. This is particularly useful in corporate environments or other scenarios where internet access is controlled through a proxy. The flexibility of Vagrant Proxy Configuration allows users to tailor settings according to their specific proxy server requirements.

One key element of Vagrant proxy configuration is the ability to set up a generic HTTP proxy. This enables the virtual machine to route its internet requests through the specified proxy server, facilitating internet connectivity for software installations, updates, and other online interactions within the virtual environment.

Moreover, Vagrant extends its proxy support to various tools commonly used in development workflows. Users can configure proxy settings for Docker, Git, npm, Subversion, Yum, and more. This comprehensive proxy integration ensures that all the components of the development stack can seamlessly operate within the virtualized environment, regardless of the network restrictions imposed by the proxy server.

Users need to adapt the proxy settings to match the specific configuration of their proxy servers. This adaptability ensures that the virtualized environment aligns with the network policies in place, enabling a smooth and uninterrupted development experience.

The vagrant-proxyconf Vagrant plugin allows you to set up the following:

  • generic http_proxy
  • proxy configuration for Docker
  • proxy configuration for Git
  • proxy configuration for npm
  • proxy configuration for Subversion
  • proxy configuration for Yum
  • etc.

Install the Vagrant plugin called vagrant-proxyconf

This plugin requires Vagrant version 1.2 or newer

vagrant plugin install vagrant-proxyconf

The output terminal is as below:
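You can also confirm the plugin is installed with:

vagrant plugin list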

To set up the configuration for all Vagrant VMs, add the following to your Vagrantfile:

Vagrant.configure("2") do |config|
  if Vagrant.has_plugin?("vagrant-proxyconf")
    config.proxy.http     = "http://IP-ADDRESS:3128/"
    config.proxy.https    = "http://IP-ADDRESS:3128/"
    config.proxy.no_proxy = "localhost,127.0.0.1,devopsroles.com,huuphan.com"
  end
  # ... other stuff
end

Environment variables

  • VAGRANT_HTTP_PROXY
  • VAGRANT_HTTPS_PROXY
  • VAGRANT_FTP_PROXY
  • VAGRANT_NO_PROXY

These also override the Vagrantfile configuration.

As an illustration, run Vagrant with the proxy set through an environment variable:

VAGRANT_HTTP_PROXY="http://devopsroles.com:8080" vagrant up

Turning off the plugin

config.proxy.enabled         # => all applications enabled (default)
config.proxy.enabled = true  # => all applications enabled
config.proxy.enabled = { svn: false, docker: false }  # => specific applications disabled
config.proxy.enabled = ""    # => all applications disabled
config.proxy.enabled = false # => all applications disabled

For example, a Vagrantfile:

Vagrant.configure("2") do |config|
  config.proxy.http = "http://192.168.3.7:8080/"

  config.vm.provider :my_devopsroles do |cloud, override|
    override.proxy.enabled = false
  end
  # ... other stuff
end

An illustration of Vagrant proxy configuration in my setup.

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|

  config.vm.define "myserver" do |myserver|
    myserver.vm.box = "ubuntu/impish64"
    myserver.vm.hostname = "devopsroles.com.local"
    myserver.vm.network "private_network", ip: "192.168.56.10"
    myserver.vm.network "forwarded_port", guest: 80, host: 8080
    myserver.vm.provider :virtualbox do |v|
      v.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
      v.customize ["modifyvm", :id, "--memory", 1024]
      v.customize ["modifyvm", :id, "--name", "myserver"]
    end
    if Vagrant.has_plugin?("vagrant-proxyconf")
      config.proxy.http     = "http://192.168.4.7:8080/"
      config.proxy.https    = "http://192.168.4.7:8080/"
      config.proxy.no_proxy = "localhost,127.0.0.1,devopsroles.com,huuphan.com"
    end
  end

end


Conclusion

You’ve successfully set up a proxy for your Vagrant environment. Be sure to adjust the proxy settings based on your specific proxy server configuration. I hope you find this information helpful.

Vagrant proxy configuration is a fundamental aspect of creating robust and consistent development environments. By providing users with the tools to tailor proxy settings and support for various development tools, Vagrant empowers developers to overcome network constraints and focus on building and testing their applications efficiently.

Thank you for visiting the DevOpsRoles page!

Deploy a self-hosted Docker registry

Introduction

In this tutorial, you will learn how to deploy a self-hosted Docker registry with self-signed certificates and how to access it from a remote machine.

To deploy a self-hosted Docker registry, you can use the official Docker Registry image.

Here's a step-by-step guide to help you deploy a self-hosted Docker registry.

Prepare your directories

I will create them in my user home directory, but you can place them in any directory.

mkdir ~/registry

Create subdirectories in the registry directory.

mkdir ~/registry/{certs,auth}

Go into the certs directory.

cd ~/registry/certs

Create a private key

openssl genrsa 1024 > devopsroles.com.key
chmod 400 devopsroles.com.key

The output terminal is as below:

Create a docker_register.cnf file with the content below:

nano docker_register.cnf

In that file, paste the following contents.

[req]
default_bits = 2048
distinguished_name = req_distinguished_name
req_extensions = req_ext
x509_extensions = v3_req
prompt = no

[req_distinguished_name]
countryName = XX
stateOrProvinceName = N/A
localityName = N/A
organizationName = Self-signed certificate
commonName = 127.0.0.1: Self-signed certificate

[req_ext]
subjectAltName = @alt_names

[v3_req]
subjectAltName = @alt_names

[alt_names]
IP.1 = 192.168.3.7
Note: Make sure to change IP.1 to match the IP address of your hosting server.

Save and close the file.

Generate the certificate with:

openssl req -new -x509 -nodes -sha1 -days 365 -key devopsroles.com.key -out devopsroles.com.crt -config docker_register.cnf
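Optionally, verify that the subject alternative name made it into the certificate:

openssl x509 -in devopsroles.com.crt -noout -text | grep -A 1 'Subject Alternative Name'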

Go into the auth directory.

cd ../auth

Generate an htpasswd file

docker run --rm --entrypoint htpasswd registry:2.7.0 -Bbn USERNAME PASSWORD > htpasswd

Where USERNAME is a unique username and PASSWORD is a unique/strong password.

The terminal output is shown in the picture below:

Now, deploy the self-hosted Docker registry

Change back to the base registry directory.

cd ~/registry

Deploy the registry container with the command below:

docker run -d \
  --restart=always \
  --name registry \
  -v `pwd`/auth:/auth \
  -v `pwd`/certs:/certs \
  -e REGISTRY_AUTH=htpasswd \
  -e REGISTRY_AUTH_HTPASSWD_REALM="Registry Realm" \
  -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
  -e REGISTRY_HTTP_ADDR=0.0.0.0:443 \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/devopsroles.com.crt \
  -e REGISTRY_HTTP_TLS_KEY=/certs/devopsroles.com.key \
  -p 443:443 \
  registry:2.7.0
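A quick way to confirm the registry is answering over TLS is to query its catalog endpoint from the hosting server (-k skips verification of the self-signed certificate; USERNAME and PASSWORD are the htpasswd credentials created earlier):

curl -k -u USERNAME:PASSWORD https://localhost:443/v2/_catalog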

Now you can access the registry from the local machine. To access it from a remote system, however, you need to add a ca.crt file; copy the contents of the ~/registry/certs/devopsroles.com.crt file for the steps below.

Log into your second machine.

Create the certificate directory:

sudo mkdir -p /etc/docker/certs.d/SERVER:443

where SERVER is the IP address of the machine hosting the registry.

Create the new file with:

sudo nano /etc/docker/certs.d/SERVER:443/ca.crt

Paste the contents of devopsroles.com.crt (from the hosting server), then save and close the file.

How to log in to the new registry

From the second machine, run:

docker login -u USER -p PASSWORD https://SERVER:443

Where USER and PASSWORD are the credentials you created when you generated the htpasswd file above.
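Once logged in, you can tag and push an image to the new registry; the image name here is only an example:

docker tag nginx:latest SERVER:443/nginx:latest
docker push SERVER:443/nginx:latest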

Conclusion

You have successfully deployed a self-hosted Docker registry. You can now use it to store and share your Docker images within your network. I hope you find this helpful. Thank you for reading the DevopsRoles page!

Deploy Redash data visualization dashboard

Introduction

In this tutorial, you will learn how to deploy the Redash data visualization dashboard using Docker.

You can deploy the powerful data visualization tool Redash as a Docker container.

Redash is a powerful data visualization tool built for fast access to data collected from various sources. Redash helps you make sense of your data.

Requirements

  • A running instance of Ubuntu Server.
  • A user with sudo privileges.

To deploy Redash, a data visualization dashboard, you can follow these steps:

Install Docker

First, you need to install Docker and Docker Compose on the Ubuntu server; refer to the guides on installing Docker and Docker Compose on Ubuntu Server.

Deploy Redash data visualization dashboard

Update your server to the latest packages:

sudo apt-get update
sudo apt-get upgrade -y

Deploy Redash

curl -O https://raw.githubusercontent.com/getredash/setup/master/setup.sh
chmod u+x setup.sh
sudo ./setup.sh

The deployment will take anywhere from 2-10 minutes.

The output terminal is as below:

Docker containers running Redash data visualization dashboard
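You can also confirm from the command line that the containers are up; the setup script creates them under a compose project named redash, so filtering on that name should list them:

sudo docker ps --filter "name=redash"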

How to access Redash

Open a web browser and go to http://ipaddress (your server's IP address), as shown in the picture below:

The Redash main page

You now have a working data visualization tool deployed. Next time, we will cover how to connect a data source to Redash.

Conclusion

You have successfully deployed the Redash data visualization dashboard and can now start creating visualizations and dashboards for your data. Continue exploring the Redash documentation and features to leverage its full capabilities for data visualization and analysis.

I hope you find this helpful. Thank you for reading the DevopsRoles page!
