Deploying Services to a Docker Swarm Cluster

Introduction

In this tutorial, you will learn how to deploy services to a Docker Swarm cluster. A running Swarm cluster is a prerequisite; see the Install Docker Swarm cluster guide. In today’s world of distributed systems, containerization has become a popular choice for deploying and managing applications.

Docker Swarm, a native clustering and orchestration solution for Docker, allows you to create a swarm of Docker nodes that work together as a cluster.

In this blog post, we will explore the steps to deploy a service to a Docker Swarm cluster and take advantage of its powerful features for managing containerized applications.

Deploying Services to a Docker Swarm Cluster

As a simple example, I will deploy NGINX as a service.

Log into the controller and run the following command:

docker service create --name nginx_test nginx

Check the service status with the command below:

vagrant@controller:~$ docker service ls
ID             NAME         MODE         REPLICAS   IMAGE          PORTS
44sp9ig3k65o   nginx_test   replicated   1/1        nginx:latest
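If you script around Swarm, the REPLICAS column of this output can be parsed directly. A minimal sketch using awk; the sample variable below stands in for live docker service ls output:

```shell
# Print NAME and current/desired REPLICAS, skipping the header row.
# In practice, pipe `docker service ls` straight into awk.
sample='ID             NAME         MODE         REPLICAS   IMAGE          PORTS
44sp9ig3k65o   nginx_test   replicated   1/1        nginx:latest'

echo "$sample" | awk 'NR > 1 { print $2, $4 }'
# → nginx_test 1/1
```

A replica count of 1/1 means all desired replicas are running.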

Next, deploy a service with three replicas, which Swarm will schedule across the nodes:

docker service create --replicas 3 --name nginx3nodes nginx

Swarm schedules the replicas across the nodes, as shown in the picture below.

To scale the service up to five replicas:

docker service scale nginx3nodes=5
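The scale argument follows a service=replicas format. A small sketch of validating such an argument before handing it to docker service scale; the helper name check_scale_spec is my own, for illustration:

```shell
# Validate a "service=replicas" argument such as nginx3nodes=5 before
# passing it to docker service scale. Returns 1 on a malformed spec.
check_scale_spec() {
  case "$1" in
    *=*) ;;  # must contain '='
    *) echo "missing '=' in: $1" >&2; return 1 ;;
  esac
  replicas=${1#*=}
  case "$replicas" in
    ''|*[!0-9]*) echo "replica count must be a number: $1" >&2; return 1 ;;
  esac
}

check_scale_spec "nginx3nodes=5" && echo "spec OK"
# → spec OK
```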

We deploy Portainer on the controller to easily manage the swarm.

docker run -d -p 8000:8000 -p 9443:9443 --name=portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce

Open a browser and go to https://SERVER:9443 (where SERVER is the IP address of the controller). You should see Swarm listed in the left navigation.

Swarm on portainer

Conclusion

You have deployed a service to the swarm. Docker Swarm takes care of scheduling the service across the Swarm nodes and managing its lifecycle. Docker Swarm simplifies the management and scaling of containerized applications, providing fault tolerance and high availability. By following the steps outlined in this blog post, you can easily deploy your services to a Docker Swarm cluster and take advantage of its powerful orchestration capabilities. I hope you found this helpful. Thank you for reading the DevopsRoles page!

Install Docker Swarm cluster

Introduction

In this tutorial, you will learn how to install a Docker Swarm cluster. Docker Swarm is a way to create a cluster for container deployment.

Within a few minutes you can have the cluster up and running for high availability, failover, and scalability.

To install a Docker Swarm cluster, you need multiple nodes or machines that will act as Swarm managers and workers.

Here’s a step-by-step guide on how to install Docker Swarm.

Prerequisites

  • Host OS: Ubuntu 21.04
  • 1 Controller.
  • 2 nodes.
  • Docker installed on the controller and each node.

How to install Docker Swarm cluster

1. Log into the Docker Swarm controller

Run the following commands:

sudo apt-get update -y
sudo apt-get install ca-certificates curl gnupg lsb-release -y
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update -y 
sudo apt-get install docker-ce docker-ce-cli containerd.io -y
sudo systemctl enable --now docker
sudo usermod -aG docker $USER
newgrp docker

2. Log into Docker Swarm Node1

Run the following commands:

sudo apt-get update -y
sudo apt-get install ca-certificates curl gnupg lsb-release -y
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update -y 
sudo apt-get install docker-ce docker-ce-cli containerd.io -y
sudo systemctl enable --now docker
sudo usermod -aG docker $USER
newgrp docker

3. Log into Docker Swarm Node2

Run the following commands:

sudo apt-get update -y
sudo apt-get install ca-certificates curl gnupg lsb-release -y
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update -y 
sudo apt-get install docker-ce docker-ce-cli containerd.io -y
sudo systemctl enable --now docker
sudo usermod -aG docker $USER
newgrp docker

For example, as the picture below:

Back on the controller, initialize the swarm with the command below, using the controller’s IP address:

docker swarm init --advertise-addr 192.168.56.11

The output includes a join command that will look something like this:

docker swarm join --token SWMTKN-1-1godvlo74ufchdrmck9earbshkxa2u91w7ss742bryl40f7c8i-aq684grkb94d7vaguh4aep7rt 192.168.56.11:2377
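If you lose the init output, the worker token can be printed again on the manager with docker swarm join-token worker. For scripting, the token can also be pulled out of a saved join command; a sketch, where the token string below is a placeholder rather than a real token:

```shell
# Recover the token from a saved join command. The token here is a
# placeholder, not a real Swarm token.
join_cmd='docker swarm join --token SWMTKN-1-xxxx-yyyy 192.168.56.11:2377'
token=$(echo "$join_cmd" | sed -n 's/.*--token \([^ ]*\).*/\1/p')
echo "$token"
# → SWMTKN-1-xxxx-yyyy
```

On a live manager, docker swarm join-token -q worker prints the token directly.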

Log into Node1 and run the docker swarm join command shown above.

Log into Node2 and run the same command.

Verify the result on the controller:

docker info
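On a manager node, docker info includes a line reading "Swarm: active", which is handy for scripted checks. A sketch using grep; the sample text stands in for the live command’s output:

```shell
# `docker info` on a manager reports "Swarm: active".
info=' Swarm: active
  NodeID: abc123
  Is Manager: true'

echo "$info" | grep -q 'Swarm: active' && echo "swarm is active"
# → swarm is active
```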

Conclusion

You have successfully installed a Docker Swarm cluster and deployed services to it. You can continue exploring Docker Swarm features to manage and scale your applications effectively. I hope you found this helpful. Thank you for reading the DevopsRoles page!

Easy Guide to Vagrant proxy configuration

Introduction

In this tutorial, we will explore the steps to implement a Vagrant proxy configuration on a virtual machine. Configuring a proxy for Vagrant involves the options provided by the vagrant-proxyconf plugin.

Vagrant proxy configuration is a crucial aspect of managing virtualized development environments seamlessly. Vagrant, a powerful tool for building and managing virtualized development environments, allows developers to create consistent and reproducible environments across different machines. When it comes to networking in these environments, proxy configuration plays a vital role, ensuring smooth communication between the virtual machine and external resources.

Configuring a proxy in Vagrant involves specifying the necessary settings to enable the virtual machine to access the internet through a proxy server. This is particularly useful in corporate environments or other scenarios where internet access is controlled through a proxy. The flexibility of Vagrant Proxy Configuration allows users to tailor settings according to their specific proxy server requirements.

One key element of Vagrant proxy configuration is the ability to set up a generic HTTP proxy. This enables the virtual machine to route its internet requests through the specified proxy server, facilitating internet connectivity for software installations, updates, and other online interactions within the virtual environment.

Moreover, Vagrant extends its proxy support to various tools commonly used in development workflows. Users can configure proxy settings for Docker, Git, npm, Subversion, Yum, and more. This comprehensive proxy integration ensures that all the components of the development stack can seamlessly operate within the virtualized environment, regardless of the network restrictions imposed by the proxy server.

Users need to adapt the proxy settings to match the specific configuration of their proxy servers. This adaptability ensures that the virtualized environment aligns with the network policies in place, enabling a smooth and uninterrupted development experience.

The vagrant-proxyconf plugin allows you to set up the following:

  • generic http_proxy
  • proxy configuration for Docker
  • proxy configuration for Git
  • proxy configuration for npm
  • proxy configuration for Subversion
  • proxy configuration for Yum
  • etc.

Install the Vagrant plugin called vagrant-proxyconf

This plugin requires Vagrant 1.2 or newer.

vagrant plugin install vagrant-proxyconf

The output terminal is as below:

To apply the configuration to all Vagrant VMs, add the following to your Vagrantfile:

Vagrant.configure("2") do |config|
  if Vagrant.has_plugin?("vagrant-proxyconf")
    config.proxy.http     = "http://IP-ADDRESS:3128/"
    config.proxy.https    = "http://IP-ADDRESS:3128/"
    config.proxy.no_proxy = "localhost,127.0.0.1,devopsroles.com,huuphan.com"
  end
  # ... other stuff
end

Environment variables

  • VAGRANT_HTTP_PROXY
  • VAGRANT_HTTPS_PROXY
  • VAGRANT_FTP_PROXY
  • VAGRANT_NO_PROXY

These environment variables also override the Vagrantfile configuration.

For example, set the proxy for a single Vagrant invocation:

VAGRANT_HTTP_PROXY="http://devopsroles.com:8080" vagrant up
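For a longer session, the same variables can be exported once so every subsequent vagrant command inherits them. A sketch; proxy.example.com:8080 is a placeholder for your own proxy server:

```shell
# Export once; every vagrant command in this shell session inherits them.
# proxy.example.com:8080 is a placeholder for your own proxy server.
export VAGRANT_HTTP_PROXY="http://proxy.example.com:8080"
export VAGRANT_HTTPS_PROXY="http://proxy.example.com:8080"
export VAGRANT_NO_PROXY="localhost,127.0.0.1"

# Subsequent commands pick up the proxy settings:
# vagrant up
# vagrant provision
```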

Turning off the plugin

config.proxy.enabled         # => all applications enabled (default)
config.proxy.enabled = true  # => all applications enabled
config.proxy.enabled = { svn: false, docker: false }  # => specific applications disabled
config.proxy.enabled = ""    # => all applications disabled
config.proxy.enabled = false # => all applications disabled

An example Vagrantfile:

Vagrant.configure("2") do |config|
  config.proxy.http = "http://192.168.3.7:8080/"

  config.vm.provider :my_devopsroles do |cloud, override|
    override.proxy.enabled = false
  end
  # ... other stuff
end

An illustration of Vagrant proxy configuration in my setup.

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|

  config.vm.define "myserver" do |myserver|
    myserver.vm.box = "ubuntu/impish64"
    myserver.vm.hostname = "devopsroles.com.local"
    myserver.vm.network "private_network", ip: "192.168.56.10"
    myserver.vm.network "forwarded_port", guest: 80, host: 8080
    myserver.vm.provider :virtualbox do |v|
      v.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
      v.customize ["modifyvm", :id, "--memory", 1024]
      v.customize ["modifyvm", :id, "--name", "myserver"]
    end
    if Vagrant.has_plugin?("vagrant-proxyconf")
      config.proxy.http     = "http://192.168.4.7:8080/"
      config.proxy.https    = "http://192.168.4.7:8080/"
      config.proxy.no_proxy = "localhost,127.0.0.1,devopsroles.com,huuphan.com"
    end
  end

end


Conclusion

You’ve successfully set up a proxy for your Vagrant environment. Be sure to adjust the proxy settings based on your specific proxy server configuration. I hope you find this information helpful.

Vagrant proxy configuration is a fundamental aspect of creating robust and consistent development environments. By providing users with the tools to tailor proxy settings and support for various development tools, Vagrant empowers developers to overcome network constraints and focus on building and testing their applications efficiently.

Thank you for visiting the DevOpsRoles page!

Deploy a self-hosted Docker registry

Introduction

In this tutorial, you will learn how to deploy a self-hosted Docker registry with self-signed certificates, and how to access it from a remote machine.

To deploy a self-hosted Docker registry, you can use the official Docker Registry image.

Here’s a step-by-step guide to help you deploy a self-hosted Docker registry.

Prepare your directories

I will create them in my home directory, but you can place them in any directory.

mkdir ~/registry

Create subdirectories in the registry directory.

mkdir ~/registry/{certs,auth}

Go into the certs directory.

cd ~/registry/certs

Create a private key (2048-bit; 1024-bit RSA keys are considered too weak today):

openssl genrsa 2048 > devopsroles.com.key
chmod 400 devopsroles.com.key
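The chmod 400 step is worth verifying: the key should be readable only by its owner. A quick check, demonstrated on a scratch file so the snippet can run anywhere:

```shell
# Confirm the mode is exactly -r-------- (owner read only).
# Demonstrated on a scratch file rather than the real key.
f=$(mktemp)
chmod 400 "$f"
ls -l "$f" | cut -c1-10
# → -r--------
rm -f "$f"
```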

The output terminal is as below:

Create a docker_register.cnf file with the content below:

nano docker_register.cnf

In that file, paste the following contents.

[req]
default_bits = 2048
distinguished_name = req_distinguished_name
req_extensions = req_ext
x509_extensions = v3_req
prompt = no

[req_distinguished_name]
countryName = XX
stateOrProvinceName = N/A
localityName = N/A
organizationName = Self-signed certificate
commonName = 127.0.0.1: Self-signed certificate

[req_ext]
subjectAltName = @alt_names

[v3_req]
subjectAltName = @alt_names

[alt_names]
IP.1 = 192.168.3.7

Note: Make sure to change IP.1 to match the IP address of your hosting server.

Save and close the file.

Generate the certificate with:

openssl req -new -x509 -nodes -sha256 -days 365 -key devopsroles.com.key -out devopsroles.com.crt -config docker_register.cnf
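It is worth confirming that the subjectAltName entry actually made it into the certificate, since a missing SAN is a common cause of x509 errors when pushing images later. A self-contained sketch; it builds a throwaway key and certificate in a temporary directory, so it does not touch your real files:

```shell
# Build a throwaway key/cert with the same kind of config, then print
# the SAN section. The IP is the example address used above.
dir=$(mktemp -d)
cat > "$dir/req.cnf" <<'EOF'
[req]
distinguished_name = dn
x509_extensions = v3_req
prompt = no
[dn]
CN = 192.168.3.7
[v3_req]
subjectAltName = @alt_names
[alt_names]
IP.1 = 192.168.3.7
EOF
openssl req -new -x509 -nodes -sha256 -days 365 \
  -newkey rsa:2048 -keyout "$dir/test.key" -out "$dir/test.crt" \
  -config "$dir/req.cnf" 2>/dev/null
openssl x509 -in "$dir/test.crt" -noout -text | grep -A1 'Subject Alternative Name'
rm -rf "$dir"
```

The output should include an "IP Address:192.168.3.7" entry.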

Go into the auth directory.

cd ../auth

Generate an htpasswd file:

docker run --rm --entrypoint htpasswd registry:2.7.0 -Bbn USERNAME PASSWORD > htpasswd

Where USERNAME is a unique username and PASSWORD is a unique/strong password.

The output terminal is the picture below:

Now, deploy the self-hosted Docker registry.

Change back to the base registry directory.

cd ~/registry

Deploy the registry container with the command below:

docker run -d \
  --restart=always \
  --name registry \
  -v `pwd`/auth:/auth \
  -v `pwd`/certs:/certs \
  -e REGISTRY_AUTH=htpasswd \
  -e REGISTRY_AUTH_HTPASSWD_REALM="Registry Realm" \
  -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
  -e REGISTRY_HTTP_ADDR=0.0.0.0:443 \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/devopsroles.com.crt \
  -e REGISTRY_HTTP_TLS_KEY=/certs/devopsroles.com.key \
  -p 443:443 \
  registry:2.7.0

You can now access the registry from the local machine. To access it from a remote system, you need to add a ca.crt file: copy the contents of the ~/registry/certs/devopsroles.com.crt file.

Log into your second machine.

Create the folder:

sudo mkdir -p /etc/docker/certs.d/SERVER:443

where SERVER is the IP address of the machine hosting the registry.

Create the new file with:

sudo nano /etc/docker/certs.d/SERVER:443/ca.crt

Paste the contents of devopsroles.com.crt (from the hosting server), then save and close the file.
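The directory name under /etc/docker/certs.d must match the HOST:PORT exactly as it appears in image references. A sketch of staging the file; it uses a temporary root instead of /etc/docker so it can run without sudo, and CERT_ROOT would be /etc/docker/certs.d on a real client:

```shell
# REGISTRY must match HOST:PORT exactly as used in image references.
REGISTRY="192.168.3.7:443"
CERT_ROOT="${CERT_ROOT:-$(mktemp -d)}"   # use /etc/docker/certs.d on a real client

mkdir -p "$CERT_ROOT/$REGISTRY"
# ca.crt holds the contents of devopsroles.com.crt from the registry host.
echo "place ca.crt at: $CERT_ROOT/$REGISTRY/ca.crt"
```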

Log in to the new registry

From the second machine.

docker login -u USER -p PASSWORD https://SERVER:443

Where USER is the user you added when you generated the htpasswd file above, and PASSWORD is that user’s password.
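Once logged in, images are pushed by tagging them with the registry’s HOST:PORT prefix. A sketch of the naming convention; the registry_ref helper is my own, for illustration, and the docker commands are shown as comments:

```shell
# Build the fully qualified reference a private registry expects:
# HOST:PORT/REPOSITORY:TAG
registry_ref() {
  printf '%s/%s:%s\n' "$1" "$2" "$3"
}

registry_ref "192.168.3.7:443" "nginx" "latest"
# → 192.168.3.7:443/nginx:latest

# Then, on a machine that trusts the registry:
#   docker tag nginx:latest 192.168.3.7:443/nginx:latest
#   docker push 192.168.3.7:443/nginx:latest
```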

Conclusion

You have successfully deployed a self-hosted Docker registry. You can now use it to store and share your Docker images within your network. I hope you found this helpful. Thank you for reading the DevopsRoles page!

Deploy Redash data visualization dashboard

Introduction

In this tutorial, you will learn how to deploy the Redash data visualization dashboard using Docker.

You can deploy the powerful data visualization tool Redash as a Docker container.

Redash is a powerful data visualization tool built for fast access to data collected from various sources. Redash helps you make sense of your data.

Requirements

  • Running instance of Ubuntu Server.
  • A user with sudo privileges.

To deploy Redash, a data visualization dashboard, you can follow these steps:

Install Docker

First, you need to install Docker on the Ubuntu server (refer to How to install Docker on Ubuntu Server) and Docker Compose.

Deploy Redash data visualization dashboard

Update your server to the latest packages:

sudo apt-get update
sudo apt-get upgrade -y

Deploy Redash

curl -O https://raw.githubusercontent.com/getredash/setup/master/setup.sh
chmod u+x setup.sh
sudo ./setup.sh

The deployment will take anywhere from 2-10 minutes.

The output terminal is as below:

Docker containers running Redash data visualization dashboard

How to access Redash

Open a web browser and navigate to http://ipaddress (your server’s IP address), as in the picture below:

The Redash main page

You have now deployed the data visualization tool. Next time: how to connect a data source to Redash.

Conclusion

You have successfully deployed the Redash data visualization dashboard and can now start creating visualizations and dashboards for your data. Continue exploring the Redash documentation and features to leverage its full capabilities for data visualization and analysis.

I hope you found this helpful. Thank you for reading the DevopsRoles page!

Install Kubernetes Cluster with KubeKey

Introduction

In this tutorial, you will learn how to install a Kubernetes cluster with KubeKey, following the KubeKey quickstart.

You can use KubeKey to spin up a Kubernetes deployment for development and testing purposes. Kubernetes is the de facto standard in container orchestration.

Requirements for installing a Kubernetes cluster with KubeKey

  • Running instance of Ubuntu Server.
  • A user with sudo privileges.

Install Docker

First, you need to install Docker on the Ubuntu server. Refer to How to install Docker on Ubuntu Server.

How to Install KubeKey

Download KubeKey and make it executable with the commands below:

sudo apt-get install conntrack -y
curl -sfL https://get-kk.kubesphere.io | VERSION=v1.2.1 sh -
chmod u+x kk

The output terminal is the picture below:

Make kk runnable from any directory by copying the file to the /usr/local/bin directory with the command below:

sudo cp kk /usr/local/bin
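kk shells out to several host binaries (conntrack, installed above, is one of them). A small pre-flight check along these lines can save a failed run; the list of binaries here is illustrative only:

```shell
# Fail early if a binary kk depends on is missing from PATH.
require() {
  command -v "$1" >/dev/null 2>&1 || { echo "missing: $1" >&2; return 1; }
}

# Illustrative list; see KubeKey's requirements for the full set.
for bin in sh tar; do
  require "$bin" && echo "found: $bin"
done
```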

Verify the installation:

kk -h

The output terminal is as below:

vagrant@devopsroles:~$ kk -h
Deploy a Kubernetes or KubeSphere cluster efficiently, flexibly and easily. There are three scenarios to use KubeKey.
1. Install Kubernetes only
2. Install Kubernetes and KubeSphere together in one command
3. Install Kubernetes first, then deploy KubeSphere on it using https://github.com/kubesphere/ks-installer

Usage:
  kk [command]

Available Commands:
  add         Add nodes to kubernetes cluster
  certs       cluster certs
  completion  Generate shell completion scripts
  create      Create a cluster or a cluster configuration file
  delete      Delete nodes or cluster
  help        Help about any command
  init        Initializes the installation environment
  upgrade     Upgrade your cluster smoothly to a newer version with this command
  version     print the client version information

Flags:
      --debug        Print detailed information (default true)
  -h, --help         help for kk
      --in-cluster   Running inside the cluster

Use "kk [command] --help" for more information about a command.

Deploy the Cluster

You need to deploy the cluster with the command below:

sudo kk create cluster

This process will take some time.

The output terminal is as below:

vagrant@devopsroles:~$ sudo kk create cluster
+-------------+------+------+---------+----------+-------+-------+-----------+----------+------------+-------------+------------------+--------------+
| name        | sudo | curl | openssl | ebtables | socat | ipset | conntrack | docker   | nfs client | ceph client | glusterfs client | time         |
+-------------+------+------+---------+----------+-------+-------+-----------+----------+------------+-------------+------------------+--------------+
| devopsroles | y    | y    | y       | y        |       |       | y         | 20.10.12 |            |             |                  | UTC 14:06:10 |
+-------------+------+------+---------+----------+-------+-------+-----------+----------+------------+-------------+------------------+--------------+

This is a simple check of your environment.
Before installation, you should ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations

Continue this installation? [yes/no]: yes
INFO[14:06:20 UTC] Downloading Installation Files
INFO[14:06:20 UTC] Downloading kubeadm ...
INFO[14:06:24 UTC] Downloading kubelet ...
INFO[14:06:32 UTC] Downloading kubectl ...
INFO[14:06:36 UTC] Downloading helm ...
INFO[14:06:38 UTC] Downloading kubecni ...
INFO[14:06:43 UTC] Downloading etcd ...
INFO[14:06:48 UTC] Downloading docker ...
INFO[14:06:52 UTC] Downloading crictl ...
INFO[14:06:55 UTC] Configuring operating system ...
[devopsroles 10.0.2.15] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
vm.swappiness = 1
fs.inotify.max_user_instances = 524288
no crontab for root
INFO[14:06:56 UTC] Get cluster status
INFO[14:06:56 UTC] Installing Container Runtime ...
INFO[14:06:56 UTC] Start to download images on all nodes
[devopsroles] Downloading image: kubesphere/pause:3.4.1
[devopsroles] Downloading image: kubesphere/kube-apiserver:v1.21.5
[devopsroles] Downloading image: kubesphere/kube-controller-manager:v1.21.5
[devopsroles] Downloading image: kubesphere/kube-scheduler:v1.21.5
[devopsroles] Downloading image: kubesphere/kube-proxy:v1.21.5
[devopsroles] Downloading image: coredns/coredns:1.8.0
[devopsroles] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12
[devopsroles] Downloading image: calico/kube-controllers:v3.20.0
[devopsroles] Downloading image: calico/cni:v3.20.0
[devopsroles] Downloading image: calico/node:v3.20.0
[devopsroles] Downloading image: calico/pod2daemon-flexvol:v3.20.0
INFO[14:08:42 UTC] Getting etcd status
[devopsroles 10.0.2.15] MSG:
Configuration file will be created
INFO[14:08:42 UTC] Generating etcd certs
INFO[14:08:43 UTC] Synchronizing etcd certs
INFO[14:08:43 UTC] Creating etcd service
Push /home/vagrant/kubekey/v1.21.5/amd64/etcd-v3.4.13-linux-amd64.tar.gz to 10.0.2.15:/tmp/kubekey/etcd-v3.4.13-linux-amd64.tar.gz   Done
INFO[14:08:43 UTC] Starting etcd cluster
INFO[14:08:44 UTC] Refreshing etcd configuration
[devopsroles 10.0.2.15] MSG:
Created symlink /etc/systemd/system/multi-user.target.wants/etcd.service → /etc/systemd/system/etcd.service.
INFO[14:08:47 UTC] Backup etcd data regularly
INFO[14:08:54 UTC] Installing kube binaries
Push /home/vagrant/kubekey/v1.21.5/amd64/kubeadm to 10.0.2.15:/tmp/kubekey/kubeadm   Done
Push /home/vagrant/kubekey/v1.21.5/amd64/kubelet to 10.0.2.15:/tmp/kubekey/kubelet   Done
Push /home/vagrant/kubekey/v1.21.5/amd64/kubectl to 10.0.2.15:/tmp/kubekey/kubectl   Done
Push /home/vagrant/kubekey/v1.21.5/amd64/helm to 10.0.2.15:/tmp/kubekey/helm   Done
Push /home/vagrant/kubekey/v1.21.5/amd64/cni-plugins-linux-amd64-v0.9.1.tgz to 10.0.2.15:/tmp/kubekey/cni-plugins-linux-amd64-v0.9.1.tgz   Done
INFO[14:09:01 UTC] Initializing kubernetes cluster
[devopsroles 10.0.2.15] MSG:
W0323 14:09:02.442567    4631 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.21.5
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING FileExisting-socat]: socat not found in system path
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [devopsroles devopsroles.cluster.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local localhost] and IPs [10.233.0.1 10.0.2.15 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 7.502071 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node devopsroles as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node devopsroles as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: w6uaty.abpybmw8jhw1tdlg
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join lb.kubesphere.local:6443 --token w6uaty.abpybmw8jhw1tdlg \
        --discovery-token-ca-cert-hash sha256:98d31447eb6457d74c0d13088aceed7ae8dd1fd0b8e98cb1a15683fcbb5ef4d5 \
        --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join lb.kubesphere.local:6443 --token w6uaty.abpybmw8jhw1tdlg \
        --discovery-token-ca-cert-hash sha256:98d31447eb6457d74c0d13088aceed7ae8dd1fd0b8e98cb1a15683fcbb5ef4d5
[devopsroles 10.0.2.15] MSG:
node/devopsroles untainted
[devopsroles 10.0.2.15] MSG:
node/devopsroles labeled
[devopsroles 10.0.2.15] MSG:
service "kube-dns" deleted
[devopsroles 10.0.2.15] MSG:
service/coredns created
Warning: resource clusterroles/system:coredns is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
clusterrole.rbac.authorization.k8s.io/system:coredns configured
[devopsroles 10.0.2.15] MSG:
serviceaccount/nodelocaldns created
daemonset.apps/nodelocaldns created
[devopsroles 10.0.2.15] MSG:
configmap/nodelocaldns created
INFO[14:09:34 UTC] Get cluster status
INFO[14:09:35 UTC] Joining nodes to cluster
INFO[14:09:35 UTC] Deploying network plugin ...
[devopsroles 10.0.2.15] MSG:
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/calico-kube-controllers created
INFO[14:09:36 UTC] Congratulations! Installation is successful.

Verify that kubectl has been installed with the command below:

kubectl --help

The output terminal is as below:

vagrant@devopsroles:~$ kubectl --help
kubectl controls the Kubernetes cluster manager.

 Find more information at: https://kubernetes.io/docs/reference/kubectl/overview/

Basic Commands (Beginner):
  create        Create a resource from a file or from stdin.
  expose        Take a replication controller, service, deployment or pod and expose it as a new Kubernetes Service
  run           Run a particular image on the cluster
  set           Set specific features on objects

Basic Commands (Intermediate):
  explain       Documentation of resources
  get           Display one or many resources
  edit          Edit a resource on the server
  delete        Delete resources by filenames, stdin, resources and names, or by resources and label selector

Deploy Commands:
  rollout       Manage the rollout of a resource
  scale         Set a new size for a Deployment, ReplicaSet or Replication Controller
  autoscale     Auto-scale a Deployment, ReplicaSet, StatefulSet, or ReplicationController

Cluster Management Commands:
  certificate   Modify certificate resources.
  cluster-info  Display cluster info
  top           Display Resource (CPU/Memory) usage.
  cordon        Mark node as unschedulable
  uncordon      Mark node as schedulable
  drain         Drain node in preparation for maintenance
  taint         Update the taints on one or more nodes

Troubleshooting and Debugging Commands:
  describe      Show details of a specific resource or group of resources
  logs          Print the logs for a container in a pod
  attach        Attach to a running container
  exec          Execute a command in a container
  port-forward  Forward one or more local ports to a pod
  proxy         Run a proxy to the Kubernetes API server
  cp            Copy files and directories to and from containers.
  auth          Inspect authorization
  debug         Create debugging sessions for troubleshooting workloads and nodes

Advanced Commands:
  diff          Diff live version against would-be applied version
  apply         Apply a configuration to a resource by filename or stdin
  patch         Update field(s) of a resource
  replace       Replace a resource by filename or stdin
  wait          Experimental: Wait for a specific condition on one or many resources.
  kustomize     Build a kustomization target from a directory or URL.

Settings Commands:
  label         Update the labels on a resource
  annotate      Update the annotations on a resource
  completion    Output shell completion code for the specified shell (bash or zsh)

Other Commands:
  api-resources Print the supported API resources on the server
  api-versions  Print the supported API versions on the server, in the form of "group/version"
  config        Modify kubeconfig files
  plugin        Provides utilities for interacting with plugins.
  version       Print the client and server version information

Usage:
  kubectl [flags] [options]

Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).

Deploy the Kubernetes Dashboard

Deploy the Kubernetes Dashboard with the command:

sudo kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml

After deploying the dashboard, determine the IP addresses of its services with the command below:

sudo kubectl get svc -n kubernetes-dashboard

The terminal output is as follows:

vagrant@devopsroles:~$ sudo kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
dashboard-metrics-scraper   ClusterIP   10.233.19.190   <none>        8000/TCP   64s
kubernetes-dashboard        ClusterIP   10.233.23.203   <none>        443/TCP    64s

You might want to expose the dashboard via NodePort instead of ClusterIP:

sudo kubectl edit svc kubernetes-dashboard -o yaml -n kubernetes-dashboard

This will open the service configuration file in the vi editor.

Before:

type: ClusterIP

After the change:

type: NodePort

Next, run the kubectl proxy command as below:

sudo kubectl proxy

While the proxy is running, open a web browser and point it to the IP address and port number listed in the results from the sudo kubectl get svc -n kubernetes-dashboard command.

vagrant@devopsroles:~$ sudo kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.233.19.190   <none>        8000/TCP        47h
kubernetes-dashboard        NodePort    10.233.23.203   <none>        443:32469/TCP   47h
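Rather than reading the port off by eye, the NodePort can be extracted from the service line. A minimal sketch, using a sample line copied from the output above (in practice, pipe in the real `sudo kubectl get svc -n kubernetes-dashboard` output instead):

```shell
# Extract the NodePort from a `kubectl get svc` line.
# The sample line below is copied from the output above.
line='kubernetes-dashboard        NodePort    10.233.23.203   <none>        443:32469/TCP   47h'

# Field 5 is "443:32469/TCP"; take the part between ':' and '/'.
port=$(printf '%s\n' "$line" | awk '{print $5}' | cut -d: -f2 | cut -d/ -f1)
echo "dashboard NodePort: $port"
```

You can then browse to https://NODE-IP:PORT using the extracted port.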

Open a web browser:

https://192.168.3.7:32469/

To log in to the dashboard, you need to create a ServiceAccount object and a ClusterRoleBinding object. Create the account with the name admin-user in the namespace kubernetes-dashboard:

cat <<EOF | sudo kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
EOF

Create a ClusterRoleBinding object.

cat <<EOF | sudo kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF
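The two objects above can also be kept in a single manifest so they are applied, and later deleted, together. A sketch, assuming the file name dashboard-admin.yaml:

```shell
# Write both objects to one manifest (hypothetical file name), so they
# can be applied or removed as a unit.
cat > dashboard-admin.yaml <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF
echo "wrote $(grep -c '^kind:' dashboard-admin.yaml) objects"
```

Apply it with sudo kubectl apply -f dashboard-admin.yaml, or remove both objects later with sudo kubectl delete -f dashboard-admin.yaml.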

Retrieve the login token with the command below:

sudo kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')

The command prints a long bearer token; copy it and paste it into the dashboard login screen. Note that on Kubernetes 1.24 and later, a token secret is no longer created automatically for a ServiceAccount; in that case, generate a token with sudo kubectl -n kubernetes-dashboard create token admin-user.

This isn’t production-ready, but it’s a great way to get up to speed with Kubernetes and even develop for the platform.

The result is a Kubernetes cluster running on Ubuntu Server, installed with KubeKey.

Conclusions

You have installed a Kubernetes cluster with KubeKey. I hope this was helpful. Thank you for reading the DevopsRoles page!

How to Docker install Oracle 12c: A Step-by-Step Guide

Introduction

In this quick-start tutorial, learn how to Docker install Oracle 12c. This guide provides straightforward steps for setting up Oracle 12c in a Docker container, allowing you to leverage the benefits of a virtualized environment for database management. Perfect for those seeking a practical approach to deploying Oracle 12c with Docker.

Requirements

  • You need an account on Docker. Create an account here.
  • Install or update Docker on your PC

Oracle Database 12c Docker Image

Oracle Database Enterprise Edition 12c is available as an image in the Docker Store.

The following figure shows the checkout:

Docker install Oracle 12c

To start an Oracle Database server instance:

  • Log in to Docker
  • Pull Oracle Database Enterprise Edition 12.2.0.1
  • Run the Docker container from the image

The commands are below:

$ docker login
$ docker pull store/oracle/database-enterprise:12.2.0.1
$ mkdir ~/oracle-db-data
$ chmod 775 ~/oracle-db-data
$ sudo chown 54321:54322  ~/oracle-db-data
$ docker run -d -it --name oracle-db-12c \
-p 1521:1521 \
-e DB_SID=ORADEV \
-e DB_PDB=ORADEVPDB \
-e DB_DOMAIN=oracledb.devopsroles.local \
-v ~/oracle-db-data:/ORCL \
store/oracle/database-enterprise:12.2.0.1

I created an oracle-db-data directory on the host system for the data volume. This volume is mounted inside the container at /ORCL.

  • Container name: oracle-db-12c
  • Host port 1521 mapped to guest port 1521
  • Environment variables: DB_SID set to ORADEV, DB_PDB set to ORADEVPDB, and DB_DOMAIN set to oracledb.devopsroles.local
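The host-side directory preparation (the mkdir, chmod, and chown steps above) can be wrapped in a small reusable function. A sketch, where prepare_db_dir is a hypothetical helper name and 54321:54322 is the uid:gid the container expects:

```shell
# Prepare a host directory for the Oracle data volume.
# prepare_db_dir is a hypothetical helper; 54321:54322 matches the
# chown step above. chown only works as root, so run with sudo for it.
prepare_db_dir() {
  mkdir -p "$1" && chmod 775 "$1" || return 1
  if [ "$(id -u)" = "0" ]; then
    chown 54321:54322 "$1"
  fi
}

prepare_db_dir "${ORACLE_DATA_DIR:-$HOME/oracle-db-data}"
```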

The terminal output is as follows:

vagrant@devopsroles:~$ docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username: your_account
Password:

Login Succeeded

vagrant@devopsroles:~$ docker pull store/oracle/database-enterprise:12.2.0.1
12.2.0.1: Pulling from store/oracle/database-enterprise
4ce27fe12c04: Pull complete
9d3556e8e792: Pull complete
fc60a1a28025: Pull complete
0c32e4ed872e: Pull complete
b465d9b6e399: Pull complete
Digest: sha256:40760ac70dba2c4c70d0c542e42e082e8b04d9040d91688d63f728af764a2f5d
Status: Downloaded newer image for store/oracle/database-enterprise:12.2.0.1
docker.io/store/oracle/database-enterprise:12.2.0.1

vagrant@devopsroles:~$ mkdir ~/oracle-db-data
vagrant@devopsroles:~$ chmod 775 ~/oracle-db-data
vagrant@devopsroles:~$ sudo chown 54321:54322  ~/oracle-db-data

vagrant@devopsroles:~$ docker run -d -it --name oracle-db-12c \
> -p 1521:1521 \
> -e DB_SID=ORADEV \
> -e DB_PDB=ORADEVPDB \
> -e DB_DOMAIN=oracledb.devopsroles.local \
> -v ~/oracle-db-data:/ORCL \
> store/oracle/database-enterprise:12.2.0.1
7589a72bb9794d6408eb4b635772b22bc3e711210d3e4422ab0dce639a439c4a

You can check and monitor the container with the command lines below:

$ docker logs -f oracle-db-12c
$ docker ps --format 'table {{.Names}}\t{{.Image}}\t{{.Status}}\t{{.Ports}}' -f name=oracle-db-12c

Oracle Database Setup

The default password for connecting to the database as the sys user is Oradoc_db1. Check the character set, which should be AL32UTF8:

docker exec -it oracle-db-12c bash -c "source /home/oracle/.bashrc; sqlplus /nolog"

SQL> connect sys/Oradoc_db1@ORADEV as sysdba
SQL> alter session set container=ORADEVPDB;
SQL> show parameters db_block_size;
SQL> select value from nls_database_parameters where parameter='NLS_CHARACTERSET';
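The interactive checks above can also be saved to a script file and run non-interactively. A sketch, assuming the file name check_charset.sql:

```shell
# Write the session checks to a SQL script (hypothetical file name).
cat > check_charset.sql <<'SQL'
ALTER SESSION SET CONTAINER = ORADEVPDB;
show parameters db_block_size;
select value from nls_database_parameters where parameter='NLS_CHARACTERSET';
EXIT;
SQL
```

Copy it into the container with docker cp and run it from sqlplus as sys with @check_charset.sql.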

Create the data and temp tablespaces

SQL> 
--Create tablespace for new devopsroles Project 
CREATE TABLESPACE huupv_devopsroles_data DATAFILE '/u01/app/oracle/product/12.2.0/dbhome_1/data/huupv_devopsroles_data.db' SIZE 64M AUTOEXTEND ON NEXT 32M MAXSIZE 4096M EXTENT MANAGEMENT LOCAL;
 
--Create temp tablespace for new devopsroles Project
CREATE TEMPORARY TABLESPACE huupv_devopsroles_temp TEMPFILE '/u01/app/oracle/product/12.2.0/dbhome_1/data/huupv_devopsroles_temp.db' SIZE 64M  AUTOEXTEND ON NEXT 32M MAXSIZE 4096M EXTENT MANAGEMENT LOCAL;

  • Do not start with too large an initial size, because it can waste space.
  • Use a single block size (8K) for the whole database.
  • Do not allow individual data files to grow too large (beyond 8-10 GB).

Create a user and assign a grant

SQL> 
--Create user for devopsroles schema
CREATE USER huupv_devopsroles IDENTIFIED BY huupv_devopsroles DEFAULT TABLESPACE huupv_devopsroles_data TEMPORARY TABLESPACE huupv_devopsroles_temp PROFILE default ACCOUNT UNLOCK;
 
--Assign grant to user 
GRANT CONNECT TO huupv_devopsroles;
GRANT RESOURCE TO huupv_devopsroles;
GRANT UNLIMITED TABLESPACE TO huupv_devopsroles;

Test the new schema using a tool such as Oracle SQL Developer.

The parameters to connect to the new schema are:

  • Username/password: huupv_devopsroles/huupv_devopsroles
  • Hostname: 192.168.3.7
  • Service Name: ORADEVPDB.oracledb.devopsroles.local


Conclusions

While installing Oracle Database 12c on Docker is not officially supported by Oracle, which only offers Docker images for Database 18c and later, you can still proceed by following the outlined steps in this guide.

Keep in mind that these steps are unofficial and might present certain limitations or compatibility issues. For optimal results and support, consider using the officially provided Oracle Database versions on Docker. I hope this was helpful. Thank you for reading the DevopsRoles page!

Docker deploy MySQL cluster

Introduction

In this tutorial, you will learn how to use Docker to deploy a MySQL cluster and connect to the nodes from your local machine. We will be deploying the MySQL server with Docker.

To deploy a MySQL cluster using Docker, you can use the official MySQL Cluster Docker image. Here's a step-by-step guide. The cluster consists of:

  • 1 Management node
  • 2 Data nodes
  • 2 SQL nodes

The nodes in the cluster are running on separate hosts in a network.

First, make sure Docker is installed on your machine.

Docker deploy MySQL cluster

Step 1: Create the docker network.

I will create a network for the MySQL cluster with the following Docker command:

docker network create cluster --subnet=192.168.4.0/24

Step 2: Get the MySQL Docker repository.

git clone https://github.com/mysql/mysql-docker.git
cd mysql-docker
git checkout -b mysql-cluster

I will change the IP address of each node to match the subnet. Open the mysql-cluster/8.0/cnf/mysql-cluster.cnf file.

For example

[ndbd default]
NoOfReplicas=2
DataMemory=80M
IndexMemory=18M


[ndb_mgmd]
NodeId=1
hostname=192.168.4.2
datadir=/var/lib/mysql

[ndbd]
NodeId=2
hostname=192.168.4.3
datadir=/var/lib/mysql

[ndbd]
NodeId=3
hostname=192.168.4.4
datadir=/var/lib/mysql

[mysqld]
NodeId=4
hostname=192.168.4.10

[mysqld]
NodeId=5
hostname=192.168.4.11
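Editing five stanzas by hand is error-prone, so the same mysql-cluster.cnf can be generated with a short script. A sketch, assuming the IP plan above; it writes to the current directory, and you would copy the result over mysql-cluster/8.0/cnf/mysql-cluster.cnf:

```shell
# Generate mysql-cluster.cnf for the 192.168.4.0/24 plan used above.
NET=192.168.4
OUT=mysql-cluster.cnf

{
  printf '[ndbd default]\nNoOfReplicas=2\nDataMemory=80M\nIndexMemory=18M\n\n'
  # management node at .2
  printf '[ndb_mgmd]\nNodeId=1\nhostname=%s.2\ndatadir=/var/lib/mysql\n\n' "$NET"
  id=2
  # data nodes at .3 and .4
  for host in 3 4; do
    printf '[ndbd]\nNodeId=%s\nhostname=%s.%s\ndatadir=/var/lib/mysql\n\n' "$id" "$NET" "$host"
    id=$((id + 1))
  done
  # SQL nodes at .10 and .11
  for host in 10 11; do
    printf '[mysqld]\nNodeId=%s\nhostname=%s.%s\n\n' "$id" "$NET" "$host"
    id=$((id + 1))
  done
} > "$OUT"

echo "wrote $OUT"
```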

Open mysql-cluster/8.0/cnf/my.cnf and modify it as below:

[mysqld]
ndbcluster
ndb-connectstring=192.168.4.2
user=mysql

[mysql_cluster]
ndb-connectstring=192.168.4.2

Build the Docker image. The general syntax is:

docker build -t <image_name> <path_to_dockerfile>

For this cluster, run:

docker build -t mysql-cluster mysql-cluster/8.0

Step 3: Create the manager node.

docker run -d --net=cluster --name=management1 --ip=192.168.4.2 mysql-cluster ndb_mgmd

Step 4: Create the data nodes

docker run -d --net=cluster --name=ndb1 --ip=192.168.4.3 mysql-cluster ndbd
docker run -d --net=cluster --name=ndb2 --ip=192.168.4.4 mysql-cluster ndbd

Step 5: Create the SQL nodes.

docker run -d --net=cluster --name=mysql1 --ip=192.168.4.10 -e MYSQL_RANDOM_ROOT_PASSWORD=true mysql-cluster mysqld
docker run -d --net=cluster --name=mysql2 --ip=192.168.4.11 -e MYSQL_RANDOM_ROOT_PASSWORD=true mysql-cluster mysqld

Step 6: Open the cluster management console.

docker run -it --net=cluster mysql-cluster ndb_mgm

The cluster management console will be loaded.

[Entrypoint] MySQL Docker Image 8.0.28-1.2.7-cluster
[Entrypoint] Starting ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm>

Run the show command:

ndb_mgm> show
Connected to Management Server at: 192.168.4.2:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2    @192.168.4.3  (mysql-8.0.28 ndb-8.0.28, Nodegroup: 0, *)
id=3    @192.168.4.4  (mysql-8.0.28 ndb-8.0.28, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @192.168.4.2  (mysql-8.0.28 ndb-8.0.28)

[mysqld(API)]   2 node(s)
id=4    @192.168.4.10  (mysql-8.0.28 ndb-8.0.28)
id=5    @192.168.4.11  (mysql-8.0.28 ndb-8.0.28)

ndb_mgm>
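You can also sanity-check the show report without staying in the interactive session. A sketch that counts the data nodes reported as started, using a sample pasted from the output above (ndb_mgm also accepts -e show for one-shot use, so you could pipe the real report in):

```shell
# Count data nodes reported as started (they carry a "Nodegroup:" tag).
# The sample report below is pasted from the `show` output above.
report='[ndbd(NDB)]     2 node(s)
id=2    @192.168.4.3  (mysql-8.0.28 ndb-8.0.28, Nodegroup: 0, *)
id=3    @192.168.4.4  (mysql-8.0.28 ndb-8.0.28, Nodegroup: 0)'

started=$(printf '%s\n' "$report" | grep -c 'Nodegroup:')
echo "data nodes up: $started"
```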

Step 7: Change the default passwords.

MySQL node 1:

The SQL nodes are initially created with a random root password. Retrieve the default password:

docker logs mysql1 2>&1 | grep PASSWORD

To change the password, log in with the default password retrieved above, then run:

docker exec -it mysql1 mysql -uroot -p
ALTER USER 'root'@'localhost' IDENTIFIED BY 'root';
FLUSH PRIVILEGES;

MySQL node 2:

The SQL nodes are initially created with a random root password. Retrieve the default password:

docker logs mysql2 2>&1 | grep PASSWORD

To change the password, log in with the default password retrieved above, then run:

docker exec -it mysql2 mysql -uroot -p
ALTER USER 'root'@'localhost' IDENTIFIED BY 'root';
FLUSH PRIVILEGES;

Step 8: Log in and create a new database.

For example, I will create an account named huupv on the mysql1 and mysql2 containers that can connect from any host.

# For mysql1
docker exec -it mysql1 mysql -uroot -p
CREATE USER 'huupv'@'%' IDENTIFIED BY '123456789';
GRANT ALL ON *.* TO 'huupv'@'%';
FLUSH PRIVILEGES;

# For mysql2
docker exec -it mysql2 mysql -uroot -p
CREATE USER 'huupv'@'%' IDENTIFIED BY '123456789';
GRANT ALL ON *.* TO 'huupv'@'%';
FLUSH PRIVILEGES;

Create a new database:

create schema test_db;

The terminal output is as follows:

vagrant@devopsroles:~/mysql-docker$ docker exec -it mysql1 mysql -uroot -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 15
Server version: 8.0.28-cluster MySQL Cluster Community Server - GPL

Copyright (c) 2000, 2022, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> CREATE USER 'huupv'@'%' IDENTIFIED BY '123456789';
Query OK, 0 rows affected (0.02 sec)

mysql> GRANT ALL ON *.* TO 'huupv'@'%';
Query OK, 0 rows affected (0.11 sec)

mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.02 sec)

mysql> exit
Bye
vagrant@devopsroles:~/mysql-docker$ docker exec -it mysql2 mysql -uroot -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 16
Server version: 8.0.28-cluster MySQL Cluster Community Server - GPL

Copyright (c) 2000, 2022, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> CREATE USER 'huupv'@'%' IDENTIFIED BY '123456789';
ERROR 1396 (HY000): Operation CREATE USER failed for 'huupv'@'%'
mysql> GRANT ALL ON *.* TO 'huupv'@'%';
Query OK, 0 rows affected (0.10 sec)

mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.01 sec)

mysql> exit
Bye
vagrant@devopsroles:~/mysql-docker$

Log in from your local machine:

 mysql -h192.168.4.10 -uhuupv -p
 mysql -h192.168.4.11 -uhuupv -p


Conclusion

You have successfully deployed a MySQL cluster using Docker. You can now use the cluster for your applications or explore additional configuration options for MySQL clustering, such as replication and high availability. I hope this was helpful. Thank you for reading the DevopsRoles page!

Docker install Nginx container

Introduction

In this tutorial, I will use Docker to install an Nginx web server and show how to set up Nginx in a Docker container. Let's get started.

Docker is a container platform. I will install Docker on Debian/Ubuntu and run the Nginx container from Docker Hub.

Install Docker on Ubuntu here

Docker install Nginx container

Run the Nginx container using Docker. First, pull the Nginx image from Docker Hub:

$ docker pull nginx

List the Docker images with the command below:

$ docker images

Create a Docker volume for Nginx:

$ docker volume create nginx-data-persistent

Get the Docker volume information with the command below:

$ docker volume inspect nginx-data-persistent

Building a Web Page to Serve on Nginx

Create an HTML file

$ sudo vi /var/lib/docker/volumes/nginx-data-persistent/_data/index.html

The content is as below:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Learn Docker at DevopsRoles.com</title>
</head>
<body>
    <h1>Learn Docker With DevopsRoles.com</h1>   
</body>
</html>

Start the Nginx container with persistent data storage. The data lives at /var/lib/docker/volumes/nginx-data-persistent/_data on the Ubuntu host and is mounted at /usr/share/nginx/html inside the container. Run the command below:

$ docker run -d --name nginx-server -p 8080:80 -v nginx-data-persistent:/usr/share/nginx/html nginx

Explanation:

  • -d: run the container in detached mode
  • --name: name of the container to be created
  • -p: map a host port to a container port; here host port 8080 maps to guest port 80
  • -v: name of the Docker volume to mount
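The flags above can be collected into variables so the full command can be reviewed before running it. A minimal sketch; the command is printed rather than executed, and you would run it with eval "$CMD" once it looks right:

```shell
# Build the `docker run` command from named parameters (tutorial's values).
NAME=nginx-server
HOST_PORT=8080
VOLUME=nginx-data-persistent

CMD="docker run -d --name $NAME -p $HOST_PORT:80 -v $VOLUME:/usr/share/nginx/html nginx"

# Print for review instead of executing.
echo "$CMD"
```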

You can also create a symlink to the Docker volume directory for convenient access:

$ sudo ln -s /var/lib/docker/volumes/nginx-data-persistent/_data /nginx-data

Optionally, access the Nginx container with the command below:

$ docker exec -it nginx-server /bin/bash

Now, you can stop the Nginx container:

docker stop nginx-server

and remove it:

docker rm nginx-server

You can clean up and delete the image that was used in the container.

docker image remove nginx


Conclusion

You have successfully installed and run an Nginx container using Docker. You can now customize the Nginx configuration, serve your own website, or explore additional Nginx features and settings. I hope this was helpful. Thank you for reading the DevopsRoles page!

Docker install Apache Web Server

Introduction

In this tutorial, I will use Docker to install an Apache web server and show how to set up Apache in a Docker container. Let's get started.

Docker is a container platform. I will install Docker on Debian/Ubuntu and run the Apache 2.4 container from Docker Hub.

Install Docker on Ubuntu here

Setting up the Apache Docker Container

For example, I will run an Apache 2.4 container named devops-roles, using the httpd:2.4 image from Docker Hub, detached from the current terminal.

Host port 8080 is redirected to guest port 80 on the container. I will serve a simple web page from /home/vagrant/website.

Docker install Apache web server

docker run -itd --name devops-roles -p 8080:80 -v /home/vagrant/website/:/usr/local/apache2/htdocs/ httpd:2.4

Check that the Docker Apache container is running with the command below:

docker ps

Create an index.html file inside the /home/vagrant/website directory:

sudo vi /home/vagrant/website/index.html

The content is as below:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Learn Docker at DevopsRoles.com</title>
</head>
<body>
    <h1>Learn Docker With DevopsRoles.com</h1>   
</body>
</html>
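Before starting the container, it is worth checking that the bind-mounted directory really contains a non-empty index.html, since Apache will otherwise have nothing to serve at that path. A sketch, where check_site is a hypothetical helper:

```shell
# check_site: succeed if the directory has a non-empty index.html.
# Hypothetical helper; the commented path is this tutorial's site dir.
check_site() {
  [ -s "$1/index.html" ]
}

# Usage with the tutorial's path:
# check_site /home/vagrant/website && echo "ready to serve"
```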

Open a web browser and go to http://SERVER-IP:8080/index.html (where SERVER-IP is the IP address of the server).

You should see the web page served by Apache.

Now, you can stop the Apache container:

docker stop devops-roles

and remove it:

docker rm devops-roles

You can clean up and delete the image that was used in the container.

docker image remove httpd:2.4


Conclusion

You have installed the Apache web server with Docker. I hope this was helpful. Thank you for reading the DevopsRoles page!
