Deploy the Latest Portainer Release

Portainer frequently releases updates, but you cannot upgrade it with a single click from inside the Portainer container itself. In this tutorial, you will learn how to deploy the latest Portainer release.

Back Up the Portainer Container

I’ll back up the current Portainer deployment.
Log in with the admin user and click Settings in the left navigation.

Download the backup file to local storage; it is saved as a .tar.gz archive.

Stop/Remove the Current Portainer Container

Log in to the Docker host, find the Portainer container ID, and run the commands below:

docker ps -a | grep portainer
docker stop ID
docker rm ID

Where ID is the ID of the Portainer container.

For example, the output will show the Portainer container together with its ID; pass that ID to docker stop and docker rm.
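
Alternatively, a quick way to stop and remove the container without copying the ID by hand is to filter by name (this assumes the container is named portainer):

docker stop $(docker ps -aqf "name=portainer")
docker rm $(docker ps -aqf "name=portainer")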

Deploy the Latest Portainer Release

See how to install Portainer on Docker here; this deployment uses the portainer_data volume for persistent storage.

First, pull the latest Portainer image:

docker pull portainer/portainer-ce:latest

Then deploy Portainer with the following command:

docker run -d -p 8000:8000 -p 9000:9000 --name=portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce

Open a web browser and go to http://SERVER:9000, where SERVER is the IP address or hostname of your Docker host.

Restore a Backup for the Portainer Container

You need to deploy a fresh instance of Portainer with an empty data volume and choose the Restore Portainer from the backup option during setup. Click Select file and navigate to the .tar.gz file you downloaded earlier. After selecting the file, click Restore Portainer.

Conclusion

You have deployed the latest Portainer release on Docker. Please note that these steps provide only a basic overview of deploying Portainer.

For a production environment, it’s essential to consider security measures and other best practices. It’s always recommended to consult Portainer’s official documentation for more detailed instructions specific to the latest release and any additional configuration options.

I hope you find this helpful. Thank you for reading the DevopsRoles page!

How to install CouchDB on Rocky Linux 8: A Comprehensive Guide

Introduction

In this tutorial, you will learn how to install CouchDB on a Rocky Linux server. We’ll cover each step, from updating your system and installing the necessary dependencies to configuring CouchDB for optimal performance. By the end of this guide, you will have a fully functional CouchDB instance running on your Rocky Linux server, ready to handle your database needs with reliability and efficiency.

  • CouchDB is a NoSQL database.
  • CouchDB stores and presents data in JavaScript Object Notation (JSON).

Install CouchDB on Rocky Linux 8

Update the System

$ sudo yum update

Install the epel-release repository:

$ sudo yum install epel-release

Enable the Apache CouchDB package repository.

$ sudo vi /etc/yum.repos.d/apache-couchdb.repo

The content is as below:

[bintray--apache-couchdb-rpm]
name=bintray--apache-couchdb-rpm
baseurl=http://apache.bintray.com/couchdb-rpm/el$releasever/$basearch/
gpgcheck=0
repo_gpgcheck=0
enabled=1

CouchDB can be installed in standalone or cluster mode; this guide uses standalone mode.

Install CouchDB

If the installation fails with the error below, the Bintray repository is no longer available:

[vagrant@localhost ~]$ sudo yum install couchdb
bintray--apache-couchdb-rpm                                                                                                                                                      170  B/s | 166  B     00:00
Errors during downloading metadata for repository 'bintray--apache-couchdb-rpm':
  - Status code: 502 for http://apache.bintray.com/couchdb-rpm/el8/x86_64/repodata/repomd.xml (IP: 34.215.50.170)
Error: Failed to download metadata for repo 'bintray--apache-couchdb-rpm': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried

Delete the old repo file and add the official couchdb.repo instead:

$ sudo rm /etc/yum.repos.d/apache-couchdb.repo
$ sudo yum-config-manager --add-repo https://couchdb.apache.org/repo/couchdb.repo


Now install the CouchDB server:

$ sudo yum install couchdb

The output shows the CouchDB package and its dependencies being installed.

Verify the installation.

Start and enable the CouchDB service:

$ sudo systemctl start couchdb
$ sudo systemctl enable couchdb
$ sudo systemctl status couchdb

Configure firewalld to allow port 5984:

$ sudo systemctl start firewalld
$ sudo firewall-cmd --zone=public --permanent --add-port=5984/tcp
$ sudo firewall-cmd --reload

The CouchDB server will run on localhost:5984
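
To verify that CouchDB is responding, you can query it with curl; it returns a small JSON welcome document:

$ curl http://127.0.0.1:5984/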

Configuration of CouchDB on Rocky Linux 8

CouchDB’s configuration files are located in the /opt/couchdb/etc/ directory.

Open the local configuration file:

$ sudo vi /opt/couchdb/etc/local.ini

Uncomment the admin line in the [admins] section and set a password of your own:

[admins]
admin = mypassword

In the [chttpd] section, uncomment the port and bind_address values so CouchDB listens on all interfaces:

[chttpd]
port = 5984
bind_address = 0.0.0.0

Save the changes and exit the configuration file.
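
For the changes in local.ini to take effect, restart the CouchDB service:

$ sudo systemctl restart couchdb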

Creating a Database

Access the CouchDB web interface (Fauxton) at http://127.0.0.1:5984/_utils/ with your admin credentials and create a new database.

By default, CouchDB listens on port 5984.
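
You can also create a database from the command line through CouchDB's HTTP API, using the admin credentials set in local.ini (the database name mydb below is just an example):

$ curl -X PUT http://admin:mypassword@127.0.0.1:5984/mydb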

Conclusion

You have installed CouchDB on Rocky Linux 8, and it should now be running on your system. Remember to secure your CouchDB installation and configure any necessary authentication and access controls based on your requirements. I hope this will be helpful for you. Thank you for reading the DevopsRoles page!

How to Install Kubernetes on Ubuntu

To install Kubernetes on Ubuntu, you can use the kubeadm tool, which simplifies the process of setting up a cluster. Here’s a step-by-step guide on how to install Kubernetes on Ubuntu.

Kubernetes Requirements

  • One master node.
  • One or more worker nodes.
  • Ubuntu Server as the host OS on all nodes.

1. Update Ubuntu Server

Always start with up-to-date system packages. Run the command below on every node:

$ sudo apt update

2. Install Docker on Ubuntu Server

$ sudo apt install curl -y
$ curl -fsSL https://get.docker.com -o get-docker.sh
$ sh get-docker.sh
$ sudo usermod -aG docker $USER
$ docker --version

Repeat on all other nodes.

3. Install prerequisites and add the Kubernetes signing key

Install the transport packages and download the Kubernetes package signing key:

$ sudo apt-get install -y apt-transport-https ca-certificates curl
$ sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg

Repeat for each server node.

4. Add software repositories

Kubernetes packages are not included in the default Ubuntu repositories, so add the Kubernetes apt repository:

$ echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
$ sudo apt-get update

Repeat for each server node.

5. Install the Kubernetes tools

kubeadm (Kubernetes Admin) initializes the cluster, kubelet runs on every node and starts containers, and kubectl is the command-line client for the cluster. Install all three and hold their versions:

$ sudo apt-get install -y kubeadm kubelet kubectl
$ sudo apt-mark hold kubelet kubeadm kubectl

Repeat for each server node.

6. Disable swap memory on each server

$ sudo swapoff -a
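
Note that swapoff -a only disables swap until the next reboot. A common way to make the change permanent is to comment out the swap entry in /etc/fstab, for example (this comments every line mentioning swap, which is harmless for lines that are already comments):

$ sudo sed -i '/swap/ s/^/#/' /etc/fstab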

7. Set a unique hostname on each server node

Decide which server will be the master (control-plane) node and set its hostname:

$ sudo hostnamectl set-hostname k8s-master

Next, set the worker node hostname:

$ sudo hostnamectl set-hostname k8s-node1

If you add more worker nodes, set a unique hostname on each one.
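
If you are not using DNS, you can also map each hostname to its IP address in /etc/hosts on every node. The master IP below matches the one used later in this guide; the worker IP is an example you should adjust to your environment:

$ echo "192.168.56.11 k8s-master" | sudo tee -a /etc/hosts
$ echo "192.168.56.12 k8s-node1" | sudo tee -a /etc/hosts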

8. Initialize Kubernetes on the master node

On the master node, run the command below (replace the advertise address with your master node's IP and adjust the pod network CIDR if needed):

$ sudo kubeadm init --apiserver-advertise-address=192.168.56.11 --pod-network-cidr=10.100.0.0/16

The output terminal is as below:

vagrant@k8s-master:~$ sudo kubeadm init --apiserver-advertise-address=192.168.56.11 --pod-network-cidr=10.100.0.0/16
[init] Using Kubernetes version: v1.24.2
[preflight] Running pre-flight checks
        [WARNING SystemVerification]: missing optional cgroups: blkio
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.56.11]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.56.11 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.56.11 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 4.501962 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: 8ke6fa.3c5ll272057418qj
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.56.11:6443 --token 8ke6fa.3c5ll272057418qj \
        --discovery-token-ca-cert-hash sha256:4e69979a597f673781606eb7ab6bed5ccb741f46756b17f196b9cd76d3b51053

Save the kubeadm join command at the end of the output; it is used to join the worker nodes to the cluster. Next, set up the kubeconfig for your user on the master node:

$ mkdir -p $HOME/.kube 
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config 
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

9. Deploy the pod network to the cluster.

A pod network allows pods on different nodes in the cluster to communicate. This example uses the Flannel virtual network:

$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Verify that the cluster components are running and communicating:

# kubectl get all -A

Join the worker nodes to the cluster

The syntax:

kubeadm join --token <token> <control-plane-host>:<control-plane-port> --discovery-token-ca-cert-hash sha256:<hash>

Run the kubeadm join command on each worker node to join it to the cluster:

$ sudo kubeadm join 192.168.56.11:6443 --token 8ke6fa.3c5ll272057418qj \
        --discovery-token-ca-cert-hash \
sha256:4e69979a597f673781606eb7ab6bed5ccb741f46756b17f196b9cd76d3b51053

After a few minutes, you can check the status of the nodes. Switch to the master node and run:

kubectl get nodes

Conclusion

You have successfully installed Kubernetes on Ubuntu. I hope this guide helps you navigate the installation process. Thank you for reading the DevopsRoles page!

Docker deploy Joomla

Introduction

Docker has become an essential tool in the DevOps world, simplifying the deployment and management of applications. Using Docker to deploy Joomla – one of the most popular Content Management Systems (CMS) – offers significant advantages. In this article, we will guide you through each step to Docker deploy Joomla, helping you leverage the full potential of Docker for your Joomla project.

Requirements

  • Have installed Docker on your system.
  • The host OS is Ubuntu Server.

To deploy Joomla using Docker, you’ll need to follow these steps:

Docker Joomla

Create a new Docker network for Joomla:

docker network create joomla-network

Verify that the Joomla network was created.
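
For example, list the Docker networks and filter for the new one:

docker network ls --filter name=joomla-network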

Next, pull the Joomla and MySQL images:

docker pull mysql:5.7
docker pull joomla

Create the MySQL volume

docker volume create mysql-data

Deploy the database

docker run -d --name joomladb  -v mysql-data:/var/lib/mysql --network joomla-network -e "MYSQL_ROOT_PASSWORD=PWORD_MYSQL" -e MYSQL_USER=joomla -e "MYSQL_PASSWORD=PWORD_MYSQL" -e "MYSQL_DATABASE=joomla" mysql:5.7

Replace PWORD_MYSQL with a unique, strong password.

How to deploy Joomla

Create a volume to hold the Joomla data, then start the Joomla container:

docker volume create joomla-data
docker run -d --name joomla -p 80:80 -v joomla-data:/var/www/html --network joomla-network -e JOOMLA_DB_HOST=joomladb -e JOOMLA_DB_USER=joomla -e JOOMLA_DB_PASSWORD=PWORD_MYSQL joomla
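
Before opening the installer, you can confirm that both the database and Joomla containers are attached to the joomla-network and running:

docker ps --filter network=joomla-network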

Access the web-based installer

Open the web browser to http://SERVER:PORT, where SERVER is either the IP address or domain of the hosting server, and PORT is the published port (80 in this example).

Follow the Joomla setup wizard to configure your Joomla instance.


Conclusion

Deploying Joomla with Docker not only simplifies the installation and configuration process but also enhances the management and scalability of your application. With the detailed steps provided in this guide, you can confidently deploy and manage Joomla on the Docker platform. Using Docker saves time and improves the performance and reliability of your system. Start today to experience the benefits Docker brings to your Joomla project. I hope you find this helpful. Thank you for reading the DevopsRoles page!

Setup Kubernetes Cluster with K3s

Introduction

In this tutorial, you will learn how to set up a Kubernetes cluster with K3s, a lightweight Kubernetes distribution developed by Rancher. K3s consumes fewer resources than traditional distributions and makes it easy to set up and manage a Kubernetes cluster.

To set up a Kubernetes cluster using K3s, you can follow these steps:

Setup Kubernetes Cluster with K3s

First, you need one or more Linux machines.

Start by provisioning the servers that will be part of your Kubernetes cluster.

You can use virtual machines or bare-metal servers. Ensure that the servers have a compatible operating system (such as Ubuntu, CentOS, or RHEL).

Download the K3s binary

Download it from the K3s releases page, or use the wget or curl command:

wget https://github.com/k3s-io/k3s/releases/download/v1.23.5%2Bk3s1/k3s

Make the binary executable:

chmod +x k3s

The output terminal is as below:

vagrant@controller:~$ wget https://github.com/k3s-io/k3s/releases/download/v1.23.5%2Bk3s1/k3s
--2022-06-05 08:49:28--  https://github.com/k3s-io/k3s/releases/download/v1.23.5%2Bk3s1/k3s
Resolving github.com (github.com)... 20.205.243.166
Connecting to github.com (github.com)|20.205.243.166|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://objects.githubusercontent.com/github-production-release-asset-2e65be/135516270/88b18d50-2447-4216-b672-fdf17488cb41?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20220605%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20220605T084929Z&X-Amz-Expires=300&X-Amz-Signature=54b2aa58e831f8f8d179189c940eb5c38b4df7d4e3e33c18c9376b446f029742&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=135516270&response-content-disposition=attachment%3B%20filename%3Dk3s&response-content-type=application%2Foctet-stream [following]
--2022-06-05 08:49:28--  https://objects.githubusercontent.com/github-production-release-asset-2e65be/135516270/88b18d50-2447-4216-b672-fdf17488cb41?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20220605%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20220605T084929Z&X-Amz-Expires=300&X-Amz-Signature=54b2aa58e831f8f8d179189c940eb5c38b4df7d4e3e33c18c9376b446f029742&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=135516270&response-content-disposition=attachment%3B%20filename%3Dk3s&response-content-type=application%2Foctet-stream
Resolving objects.githubusercontent.com (objects.githubusercontent.com)... 185.199.108.133, 185.199.109.133, 185.199.110.133, ...
Connecting to objects.githubusercontent.com (objects.githubusercontent.com)|185.199.108.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 62468096 (60M) [application/octet-stream]
Saving to: ‘k3s’

k3s                                100%[==============================================================>]  59.57M  4.95MB/s    in 24s

2022-06-05 08:49:53 (2.46 MB/s) - ‘k3s’ saved [62468096/62468096]

vagrant@controller:~$ ls -l
total 61004
-rw-rw-r-- 1 vagrant vagrant 62468096 Mar 31 01:05 k3s
vagrant@controller:~$ chmod +x k3s

Start the K3s server:

sudo ./k3s server
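
Alternatively, instead of running the binary in the foreground, the official K3s install script (also used later in this guide for the agent) installs the server as a systemd service:

curl -sfL https://get.k3s.io | sh -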

Check your Kubernetes cluster

sudo ./k3s kubectl get nodes

The output lists the single node of your new cluster.

How to manage your cluster

You can run kubectl commands through the k3s binary. K3s provides a built-in kubectl utility.

For example, to drain the node you created:

sudo ./k3s kubectl drain your-hostname

Or cordon a node with the command below:

sudo ./k3s kubectl cordon your-hostname

Add nodes to a K3s cluster

The steps above created a cluster with just one node. To add a worker node to your cluster, follow the steps below.

On Master:

First, determine the node token value of your server with the command below:

sudo cat /var/lib/rancher/k3s/server/node-token

For example, the token output looks like this:

K10c94d11d4970c4ac58973c98ee32c9c1c4cb4fc30d81adfaf3ddf405ba1c48b49::server:7b1dcfc180415f105af019717027e77c

On Worker

Install the K3s agent:

export K3S_NODE_NAME=node1
export K3S_URL="https://192.168.56.11:6443"
export K3S_TOKEN=K10c94d11d4970c4ac58973c98ee32c9c1c4cb4fc30d81adfaf3ddf405ba1c48b49::server:7b1dcfc180415f105af019717027e77c
curl -sfL https://get.k3s.io | sh -s -

Alternatively, run the agent manually on the additional node (replace the server URL and K3S_TOKEN with your own values):

sudo k3s agent --server https://myserver:6443 --token K3S_TOKEN

Repeat this process to add as many nodes as you want to your cluster.

On the master, kubectl get nodes now lists the new worker node.

Conclusion

You have set up a Kubernetes cluster using K3s. You can now use kubectl to deploy and manage your applications on the cluster.

Beyond this, K3s lets you customize networking and logging, change the container runtime, and set up certificates. I hope you find this helpful. Thank you for reading the DevopsRoles page!

Create docker secret and deploy a service

Introduction

In this tutorial, you will learn how to create a Docker secret and deploy a service that uses it. Docker secrets let you store sensitive data such as passwords and certificates and expose them securely to services and their containers.

Requirements
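
  • Docker Engine installed and running in Swarm mode (run docker swarm init if it is not).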

How to create a secret

We'll use printf and pipe its output to the docker command to create a secret called test_secret:

printf "My secret secret" | docker secret create test_secret -

Check the result with the command below:

docker secret ls

The output as below:

vagrant@controller:~$ docker secret ls
ID                          NAME          DRIVER    CREATED          UPDATED
txrthzah1vnl4kyrh282j39ft   test_secret             24 seconds ago   24 seconds ago

Create a service that uses the secret

To deploy a service that uses the test_secret secret, the command looks like this:

docker service  create --name redis --secret test_secret redis:alpine

Verify the service is running with the command below:

docker service ps redis

The output is as below:

vagrant@controller:~$ docker service ps redis
ID             NAME      IMAGE          NODE         DESIRED STATE   CURRENT STATE            ERROR     PORTS
y6249s3xftxa   redis.1   redis:alpine   controller   Running         Running 33 seconds ago   

Verify the service has access to the secret:

docker container exec $(docker ps --filter name=redis -q) ls -l /run/secrets

The output is as below:

vagrant@controller:~$ docker container exec $(docker ps --filter name=redis -q) ls -l /run/secrets
total 4
-r--r--r--    1 root     root            16 May 30 13:50 test_secret

Finally, you can view the contents of the secret with the command:

docker container exec $(docker ps --filter name=redis -q) cat /run/secrets/test_secret

The output is as below:

My secret secret

If you commit the container to a new image, the secret is not included in it.

docker commit $(docker ps --filter name=redis -q) committed_redis

Verify the secret is no longer available with the command below:

docker run --rm -it committed_redis cat /run/secrets/test_secret

You can then remove access to the secret with the command:

docker service update --secret-rm test_secret redis

Conclusion

You have created a Docker secret and deployed a service that uses it. I hope you find this helpful. Thank you for reading the DevopsRoles page!

Using Ansible with Testinfra to Test Infrastructure

Introduction

In this tutorial, you will learn how to use Ansible with Testinfra to test infrastructure. You can write unit tests in Python to verify the actual state of your target servers.

Ansible Testinfra is a testing framework that allows you to write tests for your Ansible playbooks and roles. It is built on top of the popular Python testing framework pytest and provides a simple way to test the state of your systems after running Ansible playbooks.

Testinfra is a powerful library for writing tests to verify an infrastructure’s state. It is a Python library and uses the powerful pytest test engine.

Ansible Testinfra test infrastructure

Install Testinfra

Method 1: Use pip (the Python package manager) to install Testinfra inside a Python virtual environment:

python3 -m venv ansible
source ansible/bin/activate
(ansible) $ pip install testinfra

Method 2: Testinfra is also available in the Fedora package repositories and, for CentOS, via the EPEL repository. For example, install it on CentOS 7 as follows:

$ yum install -y epel-release
$ yum install -y python-testinfra

A simple test script

Create test.py with the content below:

import testinfra

def test_os_release(host):
    assert host.file("/etc/os-release").contains("centos")

def test_sshd_active(host):
    assert host.service("sshd").is_running is True

To run these tests on your local machine, execute the following command:

(ansible)$ pytest test.py
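
You can extend test.py with additional checks; for example, here is a sketch that verifies the OpenSSH server package is installed (the package name assumes a CentOS host):

def test_ssh_package(host):
    assert host.package("openssh-server").is_installed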

Testinfra and Ansible

Use pip to install the Ansible package.

pip install ansible

Here is the resulting output:

(ansible) [vagrant@ansible ~]$ pip install ansible
Collecting ansible
  Downloading ansible-4.10.0.tar.gz (36.8 MB)
     |████████████████████████████████| 36.8 MB 6.9 MB/s
  Preparing metadata (setup.py) ... done
Collecting ansible-core~=2.11.7
  Downloading ansible-core-2.11.11.tar.gz (7.1 MB)
     |████████████████████████████████| 7.1 MB 9.6 MB/s
  Preparing metadata (setup.py) ... done
Collecting jinja2
  Downloading Jinja2-3.0.3-py3-none-any.whl (133 kB)
     |████████████████████████████████| 133 kB 10.4 MB/s
Collecting PyYAML
  Downloading PyYAML-6.0-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (603 kB)
     |████████████████████████████████| 603 kB 8.7 MB/s
Collecting cryptography
  Downloading cryptography-37.0.2-cp36-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (4.1 MB)
     |████████████████████████████████| 4.1 MB 11.5 MB/s
Requirement already satisfied: packaging in ./ansible/lib/python3.6/site-packages (from ansible-core~=2.11.7->ansible) (21.3)
Collecting resolvelib<0.6.0,>=0.5.3
  Downloading resolvelib-0.5.4-py2.py3-none-any.whl (12 kB)
Collecting cffi>=1.12
  Downloading cffi-1.15.0-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl (405 kB)
     |████████████████████████████████| 405 kB 10.7 MB/s
Collecting MarkupSafe>=2.0
  Downloading MarkupSafe-2.0.1-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (30 kB)
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in ./ansible/lib/python3.6/site-packages (from packaging->ansible-core~=2.11.7->ansible) (3.0.9)
Collecting pycparser
  Downloading pycparser-2.21-py2.py3-none-any.whl (118 kB)
     |████████████████████████████████| 118 kB 12.3 MB/s
Using legacy 'setup.py install' for ansible, since package 'wheel' is not installed.
Using legacy 'setup.py install' for ansible-core, since package 'wheel' is not installed.
Installing collected packages: pycparser, MarkupSafe, cffi, resolvelib, PyYAML, jinja2, cryptography, ansible-core, ansible
    Running setup.py install for ansible-core ... done
    Running setup.py install for ansible ... done
Successfully installed MarkupSafe-2.0.1 PyYAML-6.0 ansible-4.10.0 ansible-core-2.11.11 cffi-1.15.0 cryptography-37.0.2 jinja2-3.0.3 pycparser-2.21 resolvelib-0.5.4
(ansible) [vagrant@ansible ~]$

Testinfra can directly use Ansible’s inventory file and a group of machines defined in the inventory. For example, inventory hosts are the target server Ubuntu.

(ansible) [vagrant@ansible ~]$ cat hosts
[target-server]
UbuntuServer

We verify the operating system and ensure that SSH is running on the target server.

To test using Testinfra and Ansible, use the following command:

pytest -vv --sudo --hosts=target-server --ansible-inventory=hosts --connection=ansible test.py

The terminal output shows the results of each test against the target server.

Conclusion

You have used Ansible with Testinfra to test your infrastructure. Ansible and Testinfra are complementary tools that can be used together in an infrastructure provisioning and testing workflow, but they serve different purposes.

Ansible specializes in automation and configuration management, whereas Testinfra is dedicated to testing the infrastructure. I hope you find this information helpful. Thank you for visiting the DevopsRoles page!

Elastic APM Tool for Application Performance Monitoring

Introduction

Elastic APM helps you monitor overall application health and performance. It is part of the Elastic Stack, which includes Elasticsearch, Logstash, and Kibana.

Elastic APM allows you to track and analyze the performance metrics of your applications, identify bottlenecks, and troubleshoot issues.

Prerequisites
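
  • Docker and Docker Compose installed on your system.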

Elastic APM Tool

How to integrate an application with the APM stack: Elasticsearch, Kibana, and APM Server

We will create a docker-compose file with the content below:

nano docker-compose.yml

The file content is as follows:

version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.6.1
    ports:
      - "9200:9200"
  kibana:
    image: docker.elastic.co/kibana/kibana:6.6.1
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
  apm-server:
    image: docker.elastic.co/apm/apm-server:6.6.1
    environment:
      - output.elasticsearch.hosts=["elasticsearch:9200"]
    ports:
      - "8200:8200"
    depends_on:
      - elasticsearch

Save and close that file.

Running the APM tool

docker-compose up -d

Verify that the elasticsearch, kibana, and apm-server containers are running (for example with docker-compose ps).

Now start your application with the Elastic APM Java agent, pointing it to the APM server URL:

java -javaagent:/<path-to-jar>/elastic-apm-agent-<version>.jar \
     -Delastic.apm.service_name=my-application \
     -Delastic.apm.server_url=http://localhost:8200 \
     -Delastic.apm.secret_token= \
     -Delastic.apm.application_packages=<base package> \
     -jar <jar name>.jar

Note: elastic.apm.application_packages should be set to the base package where your main class is located.

Configuration steps on Kibana

  • Go to http://localhost:5601
  • Click on Add APM
  • Scroll down and click on Check APM Server Status
  • Scroll down and click on Check agent status
  • Click on Load Kibana objects
  • Launch APM

APM is now ready and integrated with the service.

Conclusion

You have set up the Elastic APM tool for application performance monitoring.

Elastic APM is a comprehensive tool for monitoring the performance of your applications, providing deep insights into transaction traces, metrics, errors, and code-level details. It helps you identify performance issues, optimize application performance, and deliver a better user experience.

I hope you find this helpful. Thank you for reading the DevopsRoles page!

How to run a single command on multiple Linux machines at once

Introduction

In this tutorial, you will learn how to run a single command on multiple Linux machines at once, using a script that sends commands from one server to many.

Prerequisites

  • Two virtual machines
  • SSH connectivity between the two servers

Configure SSH on Server 1

For example, the first server is named controller and runs Ubuntu.

View (or create) the SSH config file with the command:

cat ~/.ssh/config

For example, the file defines the remote hosts the script will target, as shown in the sketch below.
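
A minimal example entry might look like the following (the IP address, user, and key path are placeholders; adjust them to your environment):

Host node1
    HostName 192.168.56.12
    User remote_admin
    IdentityFile ~/.ssh/id_rsa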

Create the script to run a single command on multiple Linux machines at once

We will create a script that runs a command on a remote Linux server:

sudo vi /usr/local/bin/script_node1

In that file, paste the following:

#!/bin/bash

# Get the user's input as command
[[ -z ${@} ]] && exit || CMD_EXEC="${@}"

# Get the hosts from ~/.ssh/config
HOSTS=$(grep -Po 'Host\s\Knode1' "$HOME/.ssh/config")

# Test whether the input command uses sudo
if [[ $CMD_EXEC =~ ^sudo ]]
then
    # Ask for password
    read -sp '[sudo] password for remote_admin: ' password; echo

    # Rewrite the command
    CMD_EXEC=$(sed "s/^sudo/echo '$password' | sudo -S/" <<< "$CMD_EXEC")
fi

# Loop over the hosts and execute the SSH command; remove `-a` from tee to overwrite the log file instead of appending
while IFS= read -r host
do
   echo -e '\n\033[1;33m'"HOST: ${host}"'\033[0m'
   ssh -n "$host" "$CMD_EXEC 2>&1" | tee -a "/tmp/$(basename "${0}").${host}"
done <<< "$HOSTS"

Save and close the file.

Now update the package index on the server named node1 with the command below:

script_node1 sudo apt-get update


Conclusion

You can now run a single command on multiple Linux machines at once. I hope you find this helpful. Thank you for reading the DevopsRoles page!

Docker Swarm cheat sheet: Essential Commands and Tips

Introduction

Here’s a Docker Swarm cheat sheet to help you with common commands and operations:

Docker Swarm is a powerful tool for container management and application orchestration. For those working in the DevOps field, mastering Docker Swarm commands and techniques is essential for effective system deployment and management.

This article provides a detailed cheat sheet, compiling important commands and useful tips, to help you quickly master Docker Swarm and optimize your workflow.

The Docker swarm cheat sheet

Docker Swarm Management

Set up a manager (initialize the swarm)

docker swarm init --advertise-addr <ip>

How to Force the Manager on a Broken Cluster

docker swarm init --force-new-cluster --advertise-addr <ip>

Enable auto-lock

docker swarm init --autolock

Get a token to join the workers

docker swarm join-token worker

Get a token to join the new manager

docker swarm join-token manager

Join a host to the swarm as a worker (use the token from join-token worker)

docker swarm join --token <worker-token> <manager-ip>:2377

Have a node leave a swarm

docker swarm leave

Unlock a manager host after the Docker daemon restarts (when auto-lock is enabled)

docker swarm unlock

Print key needed for ‘unlock’

docker swarm unlock-key

Print swarm node list

docker node ls

Docker Service Management

Create a new service:

docker service create <options> <image> <command>
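
For example, a hypothetical service running three replicas of nginx, published on port 80:

docker service create --name web --replicas 3 --publish 80:80 nginx:alpine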

List the services running in a swarm

docker service ls

Inspect a service:

docker service inspect <service-id>

Scale a service (increase or decrease replicas)

docker service scale <service-id>=<replica-count>

Update a service:

docker service update <options> <service-id>

Remove a service:

docker service rm <service-id>

List the tasks of a given service

docker service ps service_name

List running (active) tasks for a given service

docker service ps --filter desired-state=running <service id|name>

Print the console log of a service

docker service logs --follow <service id|name>

Promote a worker node to the manager

docker node promote node_name

The output of promoting a worker node to a manager is shown below:

vagrant@controller:~$ docker node ls
ID                            HOSTNAME     STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
9b2211c8l1bmhu3h2ij3kthxv *   controller   Ready     Active         Leader           20.10.14
0j0pslqf4g6xkki8ajydvc123     node1        Ready     Active                          20.10.14
f4cxubqg0wqdxsaj8pe4qsqlg     node2        Ready     Active                          20.10.14

vagrant@controller:~$ docker node promote f4cxubqg0wqdxsaj8pe4qsqlg
Node f4cxubqg0wqdxsaj8pe4qsqlg promoted to a manager in the swarm.

vagrant@controller:~$ docker node ls
ID                            HOSTNAME     STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
9b2211c8l1bmhu3h2ij3kthxv *   controller   Ready     Active         Leader           20.10.14
0j0pslqf4g6xkki8ajydvc123     node1        Ready     Active                          20.10.14
f4cxubqg0wqdxsaj8pe4qsqlg     node2        Ready     Active         Reachable        20.10.14

Docker Stack Management

List the stacks running in the swarm

docker stack ls

Deploy a stack using a Compose file:

docker stack deploy --compose-file <compose-file> <stack-name>
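
For example, assuming a Compose file named docker-compose.yml in the current directory:

docker stack deploy --compose-file docker-compose.yml mystack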

Inspect a stack:

docker stack inspect <stack-name>

List services in a stack:

docker stack services <stack-name>

List containers in a stack:

docker stack ps <stack-name>

Remove a stack:

docker stack rm <stack-name>

Conclusion

You now have the Docker Swarm cheat sheet, which includes some of the most essential commands used in Docker Swarm.

For a more comprehensive list of options and additional commands, please refer to the official Docker documentation. I hope you find this helpful. Thank you for visiting the DevopsRoles page!

