Unlocking Helm Commands for Kubernetes

Introduction

Explore the essential Helm commands for Kubernetes in this detailed tutorial. Whether you’re a beginner or a seasoned Kubernetes user, this guide will help you install Helm and utilize its commands to manage charts and repositories effectively, streamlining your Kubernetes deployments.

In this tutorial, you will learn how to install Helm and run Helm commands. Helm provides many commands for managing charts and Helm repositories. Now, let’s explore the Helm commands for Kubernetes:

  • Helm Commands for Kubernetes: Simplifies application deployment and management in Kubernetes environments.
  • What is a Helm Chart?: A package containing pre-configured Kubernetes resources used for deploying applications.
  • Purpose of Helm: Streamlines deployments, manages versions and rollbacks, and allows customization of installations through charts.
  • Using Helm Charts: Install Helm, add repositories, and manage applications within your Kubernetes cluster using Helm’s command suite, including install, upgrade, and uninstall operations.

Install Helm for Kubernetes

You can refer to the official Helm installation guide at https://helm.sh/docs/intro/install/.

From Homebrew (Mac)

brew install helm

From Chocolatey (Windows)

choco install kubernetes-helm

Check version

helm version

The output is as below:

Windows PowerShell
Copyright (C) Microsoft Corporation. All rights reserved.

Try the new cross-platform PowerShell https://aka.ms/pscore6

PS C:\Windows\system32> choco install kubernetes-helm
Chocolatey v0.10.15
Installing the following packages:
kubernetes-helm
By installing you accept licenses for the packages.
Progress: Downloading kubernetes-helm 3.5.4... 100%

kubernetes-helm v3.5.4 [Approved]
kubernetes-helm package files install completed. Performing other installation steps.
The package kubernetes-helm wants to run 'chocolateyInstall.ps1'.
Note: If you don't run this script, the installation will fail.
Note: To confirm automatically next time, use '-y' or consider:
choco feature enable -n allowGlobalConfirmation
Do you want to run the script?([Y]es/[A]ll - yes to all/[N]o/[P]rint): Yes

Downloading kubernetes-helm 64 bit
  from 'https://get.helm.sh/helm-v3.5.4-windows-amd64.zip'
Progress: 100% - Completed download of C:\Users\USERNAME\AppData\Local\Temp\chocolatey\kubernetes-helm\3.5.4\helm-v3.5.4-windows-amd64.zip (11.96 MB).
Download of helm-v3.5.4-windows-amd64.zip (11.96 MB) completed.
Hashes match.
Extracting C:\Users\USERNAME\AppData\Local\Temp\chocolatey\kubernetes-helm\3.5.4\helm-v3.5.4-windows-amd64.zip to C:\ProgramData\chocolatey\lib\kubernetes-helm\tools...
C:\ProgramData\chocolatey\lib\kubernetes-helm\tools
 ShimGen has successfully created a shim for helm.exe
 The install of kubernetes-helm was successful.
  Software installed to 'C:\ProgramData\chocolatey\lib\kubernetes-helm\tools'

Chocolatey installed 1/1 packages.
 See the log for details (C:\ProgramData\chocolatey\logs\chocolatey.log).

Did you know the proceeds of Pro (and some proceeds from other
 licensed editions) go into bettering the community infrastructure?
 Your support ensures an active community, keeps Chocolatey tip top,
 plus it nets you some awesome features!
 https://chocolatey.org/compare
PS C:\Windows\system32> helm version
version.BuildInfo{Version:"v3.5.4", GitCommit:"1b5edb69df3d3a08df77c9902dc17af864ff05d1", GitTreeState:"clean", GoVersion:"go1.15.11"}
PS C:\Windows\system32>

Helm Commands for Kubernetes

Add a Helm repository:

# Example
helm repo add stable https://charts.helm.sh/stable

I will add the Bitnami repo and search for the Nginx chart with the following commands:

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm search repo nginx
helm search repo bitnami/nginx

The output of the search commands is as follows:

E:\Study\cka\devopsroles>helm search repo nginx
NAME                                    CHART VERSION   APP VERSION     DESCRIPTION
bitnami/nginx                           8.9.1           1.19.10         Chart for the nginx server
bitnami/nginx-ingress-controller        7.6.9           0.46.0          Chart for the nginx Ingress controller
stable/nginx-ingress                    1.41.3          v0.34.1         DEPRECATED! An nginx Ingress controller that us...
stable/nginx-ldapauth-proxy             0.1.6           1.13.5          DEPRECATED - nginx proxy with ldapauth
stable/nginx-lego                       0.3.1                           Chart for nginx-ingress-controller and kube-lego
bitnami/kong                            3.7.4           2.4.1           Kong is a scalable, open source API layer (aka ...
stable/gcloud-endpoints                 0.1.2           1               DEPRECATED Develop, deploy, protect and monitor...

E:\Study\cka\devopsroles>helm search repo bitnami/nginx
NAME                                    CHART VERSION   APP VERSION     DESCRIPTION
bitnami/nginx                           8.9.1           1.19.10         Chart for the nginx server
bitnami/nginx-ingress-controller        7.6.9           0.46.0          Chart for the nginx Ingress controller

Install Nginx using the helm command

helm install nginx bitnami/nginx
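
You can also override chart values at install time with the --set flag. A minimal sketch, assuming the bitnami/nginx chart exposes the common service.type value:

# Override a chart value at install time (service.type is an assumed value name)
helm install nginx bitnami/nginx --set service.type=NodePort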

Upgrade Nginx

helm upgrade nginx bitnami/nginx --dry-run

# Upgrade using values in overrides_nginx.yaml
helm upgrade nginx bitnami/nginx -f overrides_nginx.yaml

# rollback
helm rollback nginx REVISION_NUMBER

Basic Helm commands

helm status nginx
helm history nginx
# get manifest and values from deployment
helm get manifest nginx
helm get values nginx
helm uninstall nginx
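
To see which releases are installed, Helm also provides a list command:

# List releases in the current namespace
helm list
# List releases across all namespaces
helm list --all-namespaces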

Conclusion

Mastering Helm commands enhances your Kubernetes management skills, allowing for more efficient application deployment and management. This tutorial provides the foundation you need to confidently use Helm in your Kubernetes environment, improving your operational capabilities.

You now know how to use Helm commands for Kubernetes. I hope this is helpful to you. Thank you for reading the DevopsRoles page!

Install EKS on AWS

In this tutorial, you will learn how to set up an EKS 1.16 cluster with eksctl. Amazon Elastic Kubernetes Service (Amazon EKS) gives you the flexibility to start, run, and scale Kubernetes applications in the AWS cloud or on-premises. Now, let’s install EKS on AWS.

1. First, create a free account on AWS.

You can sign up at https://aws.amazon.com/free/.

Example: create the IAM user devopsroles-demo, as in the picture below:

2. Install the AWS CLI on Windows

Refer to the official AWS CLI installation guide.

Create AWS Profile

Named profiles make it easy to switch between different AWS IAM users or IAM role identities with 'export AWS_PROFILE=PROFILE_NAME'. I will not use the 'default' profile created by 'aws configure'. For example, I create a named AWS profile 'devopsroles-demo' in one of two ways:

  • 'aws configure --profile devopsroles-demo'
E:\Study\cka\devopsroles>aws configure --profile devopsroles-demo
AWS Access Key ID [None]: XXXXZHBNJLCKKCE7EQQQ
AWS Secret Access Key [None]: fdfdfdfd43434dYlQ1il1xKNCnqwUvNHFSv41111
Default region name [None]:
Default output format [None]:

E:\Study\cka\devopsroles>set AWS_PROFILE=devopsroles-demo
E:\Study\cka\devopsroles>aws sts get-caller-identity
{
    "UserId": "AAQAZHBNJLCKPEGKYAV1R",
    "Account": "456602660300",
    "Arn": "arn:aws:iam::456602660300:user/devopsroles-demo"
}
  • Create a profile entry in the ~/.aws/credentials file

The content of the credentials file is as below:

[devopsroles-demo]
aws_access_key_id=YOUR_ACCESS_KEY
aws_secret_access_key=YOUR_SECRET_ACCESS_KEY
region=YOUR_REGION

Check the new profile

export AWS_PROFILE=devopsroles-demo
# Windows
set AWS_PROFILE=devopsroles-demo

3. Install aws-iam-authenticator

# Windows
# install chocolatey first: https://chocolatey.org/install
choco install -y aws-iam-authenticator

4. Install kubectl

Refer to the official kubectl installation docs.

choco install kubernetes-cli
kubectl version

5. Install eksctl

Refer to the official eksctl installation docs.

# install eksctl from chocolatey
chocolatey install -y eksctl 
eksctl version

6. Create an SSH key for the EKS worker nodes

# Example key name: devopsroles_worker_nodes_demo.pem
ssh-keygen -t rsa -f ~/.ssh/devopsroles_worker_nodes_demo.pem

7. Set up an EKS cluster with eksctl (so you don’t need to manually create a VPC)

The eksctl tool will create the K8s control plane (master nodes, etcd, API server, etc.), worker nodes, a VPC, security groups, subnets, routes, an internet gateway, and so on.

  • use official AWS EKS AMI
  • dedicated VPC
  • EKS not supported in us-west-1
eksctl create cluster  --name devopsroles-from-eksctl --version 1.16  --region us-west-2  --nodegroup-name workers --node-type t3.medium --nodes 2 --nodes-min 1 --nodes-max 4 --ssh-access --ssh-public-key ~/.ssh/devopsroles_worker_nodes_demo.pem.pub --managed

The output

E:\Study\cka\devopsroles>eksctl create cluster  --name devopsroles-from-eksctl --version 1.16  --region us-west-2  --nodegroup-name workers --node-type t3.medium --nodes 2 --nodes-min 1 --nodes-max 4 --ssh-access
--ssh-public-key ~/.ssh/devopsroles_worker_nodes_demo.pem.pub --managed
2021-05-23 15:19:30 [ℹ]  eksctl version 0.49.0
2021-05-23 15:19:30 [ℹ]  using region us-west-2
2021-05-23 15:19:31 [ℹ]  setting availability zones to [us-west-2a us-west-2b us-west-2c]
2021-05-23 15:19:31 [ℹ]  subnets for us-west-2a - public:192.168.0.0/19 private:192.168.96.0/19
2021-05-23 15:19:31 [ℹ]  subnets for us-west-2b - public:192.168.32.0/19 private:192.168.128.0/19
2021-05-23 15:19:31 [ℹ]  subnets for us-west-2c - public:192.168.64.0/19 private:192.168.160.0/19
2021-05-23 15:19:31 [ℹ]  using SSH public key "C:\\Users\\USERNAME/.ssh/devopsroles_worker_nodes_demo.pem.pub" as "eksctl-devopsroles-from-eksctl-nodegroup-workers-29:e7:8c:c3:df:a5:23:1b:bb:74:ad:51:bc:fb:80:9b" 
2021-05-23 15:19:32 [ℹ]  using Kubernetes version 1.16
2021-05-23 15:19:32 [ℹ]  creating EKS cluster "devopsroles-from-eksctl" in "us-west-2" region with managed nodes
2021-05-23 15:19:32 [ℹ]  will create 2 separate CloudFormation stacks for cluster itself and the initial managed nodegroup
2021-05-23 15:19:32 [ℹ]  if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-west-2 --cluster=devopsroles-from-eksctl'
2021-05-23 15:19:32 [ℹ]  CloudWatch logging will not be enabled for cluster "devopsroles-from-eksctl" in "us-west-2"
2021-05-23 15:19:32 [ℹ]  you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=us-west-2 --cluster=devopsroles-from-eksctl'
2021-05-23 15:19:32 [ℹ]  Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "devopsroles-from-eksctl" in "us-west-2"
2021-05-23 15:19:32 [ℹ]  2 sequential tasks: { create cluster control plane "devopsroles-from-eksctl", 2 sequential sub-tasks: { wait for control plane to become ready, create managed nodegroup "workers" } }
2021-05-23 15:19:32 [ℹ]  building cluster stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:19:34 [ℹ]  deploying stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:20:04 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:20:35 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:21:36 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:22:37 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:23:39 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:24:40 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:25:41 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:26:42 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:27:44 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:28:45 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:29:46 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:30:47 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:31:48 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:31:52 [ℹ]  building managed nodegroup stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:31:53 [ℹ]  deploying stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:31:53 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:32:09 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:32:27 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:32:48 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:33:05 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:33:26 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:33:47 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:34:06 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:34:24 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:34:43 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:35:01 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:35:17 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:35:37 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:35:38 [ℹ]  waiting for the control plane availability...
2021-05-23 15:35:38 [✔]  saved kubeconfig as "C:\\Users\\USERNAME/.kube/config"
2021-05-23 15:35:38 [ℹ]  no tasks
2021-05-23 15:35:38 [✔]  all EKS cluster resources for "devopsroles-from-eksctl" have been created
2021-05-23 15:35:39 [ℹ]  nodegroup "workers" has 2 node(s)
2021-05-23 15:35:39 [ℹ]  node "ip-192-168-53-14.us-west-2.compute.internal" is ready
2021-05-23 15:35:39 [ℹ]  node "ip-192-168-90-229.us-west-2.compute.internal" is ready
2021-05-23 15:35:39 [ℹ]  waiting for at least 1 node(s) to become ready in "workers"
2021-05-23 15:35:39 [ℹ]  nodegroup "workers" has 2 node(s)
2021-05-23 15:35:39 [ℹ]  node "ip-192-168-53-14.us-west-2.compute.internal" is ready
2021-05-23 15:35:39 [ℹ]  node "ip-192-168-90-229.us-west-2.compute.internal" is ready
2021-05-23 15:35:47 [ℹ]  kubectl command should work with "C:\\Users\\USERNAME/.kube/config", try 'kubectl get nodes'
2021-05-23 15:35:47 [✔]  EKS cluster "devopsroles-from-eksctl" in "us-west-2" region is ready

You have created a cluster. The cluster credentials were added to ~/.kube/config.
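
If you ever need to regenerate that kubeconfig entry (for example, on another machine), the AWS CLI can write it for you:

aws eks update-kubeconfig --name devopsroles-from-eksctl --region us-west-2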

The result on AWS: you can verify the new resources in the Amazon EKS Clusters, CloudFormation, and EC2 consoles.

For example, some basic commands:

Get info about cluster resources

aws eks describe-cluster --name devopsroles-from-eksctl --region us-west-2

Get services

kubectl get svc

The output

E:\Study\cka\devopsroles>kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   11m

Delete EKS Cluster

eksctl delete cluster --name devopsroles-from-eksctl --region us-west-2

The output

E:\Study\cka\devopsroles>eksctl delete cluster --name devopsroles-from-eksctl --region us-west-2
2021-05-23 15:57:31 [ℹ]  eksctl version 0.49.0
2021-05-23 15:57:31 [ℹ]  using region us-west-2
2021-05-23 15:57:31 [ℹ]  deleting EKS cluster "devopsroles-from-eksctl"
2021-05-23 15:57:34 [ℹ]  deleted 0 Fargate profile(s)
2021-05-23 15:57:37 [✔]  kubeconfig has been updated
2021-05-23 15:57:37 [ℹ]  cleaning up AWS load balancers created by Kubernetes objects of Kind Service or Ingress
2021-05-23 15:57:45 [ℹ]  2 sequential tasks: { delete nodegroup "workers", delete cluster control plane "devopsroles-from-eksctl" [async] }
2021-05-23 15:57:45 [ℹ]  will delete stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:57:45 [ℹ]  waiting for stack "eksctl-devopsroles-from-eksctl-nodegroup-workers" to get deleted
2021-05-23 15:57:45 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:58:02 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:58:19 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:58:40 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:58:58 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:59:19 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:59:20 [ℹ]  will delete stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:59:20 [✔]  all cluster resources were deleted

Conclusion

You have installed EKS on AWS. I hope this is helpful to you. Thank you for reading the DevopsRoles page!

How to deploy OpenProject platform as a Docker Container

Introduction

In this tutorial, you will learn how to deploy the OpenProject platform as a Docker container.

OpenProject is an outstanding platform for project management. You can use it to manage meetings, control project budgets, run reports on your projects, communicate with a project team, and more.

Deploy OpenProject platform as a Docker Container

Install Docker and Docker-Compose

I will perform this deployment on Ubuntu Server.

sudo apt-get install docker.io -y
sudo usermod -aG docker $USER
sudo curl -L "https://github.com/docker/compose/releases/download/1.23.1/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

Check the versions after the install finishes.

root@DevopsRoles:~# docker --version
Docker version 20.10.2, build 20.10.2-0ubuntu2
root@DevopsRoles:~# docker-compose --version
docker-compose version 1.23.1, build b02f1306

Deploy OpenProject with Docker Compose

git clone https://github.com/opf/openproject-deploy --depth=1 --branch=stable/11 myopenproject
cd myopenproject/compose
docker-compose pull # Make sure to update docker images.
docker-compose up -d # You need to wait a few minutes.

For example, the terminal output is as below:

root@DevopsRoles:~# git clone https://github.com/opf/openproject-deploy --depth=1 --branch=stable/11 myopenproject
Cloning into 'myopenproject'...
remote: Enumerating objects: 7, done.
remote: Counting objects: 100% (7/7), done.
remote: Compressing objects: 100% (6/6), done.
remote: Total 7 (delta 0), reused 2 (delta 0), pack-reused 0
Receiving objects: 100% (7/7), done.
root@DevopsRoles:~# cd myopenproject/compose
root@DevopsRoles:~/myopenproject/compose# docker-compose pull
Pulling db     ... done
Pulling cache  ... done
Pulling seeder ... done
Pulling cron   ... done
Pulling worker ... done
Pulling web    ... done
Pulling proxy  ... done
root@DevopsRoles:~/myopenproject/compose# docker-compose up -d
Creating network "compose_backend" with the default driver
Creating network "compose_frontend" with the default driver
Creating volume "compose_pgdata" with default driver
Creating volume "compose_opdata" with default driver
Creating compose_seeder_1_f0f0cb90c947 ... done
Creating compose_cache_1_e6fe61ccd342  ... done
Creating compose_db_1_17392590a82e     ... done
Creating compose_cron_1_f15f9d68fc11   ... done
Creating compose_web_1_ce68c823fc5f    ... done
Creating compose_worker_1_a9c88ca2f672 ... done
Creating compose_proxy_1_c7c5f08e77e8  ... done
root@DevopsRoles:~/myopenproject/compose#
root@DevopsRoles:~/myopenproject/compose# docker ps
CONTAINER ID   IMAGE                      COMMAND                  CREATED          STATUS          PORTS                            NAMES
9aa1fe05f737   openproject/community:11   "./docker/prod/entry…"   40 seconds ago   Up 35 seconds   5432/tcp, 0.0.0.0:8080->80/tcp   compose_proxy_1_5fdbbaa63ec5
2df1b233a515   openproject/community:11   "./docker/prod/entry…"   42 seconds ago   Up 39 seconds   80/tcp, 5432/tcp                 compose_worker_1_6db9f1adb68b
e1b6878e9e32   openproject/community:11   "./docker/prod/entry…"   42 seconds ago   Up 39 seconds   80/tcp, 5432/tcp                 compose_web_1_544e288b78ff
ef3b645bc783   openproject/community:11   "./docker/prod/entry…"   42 seconds ago   Up 39 seconds   80/tcp, 5432/tcp                 compose_cron_1_db11c0e207d9
0dad3d1c28d1   postgres:10                "docker-entrypoint.s…"   46 seconds ago   Up 41 seconds   5432/tcp                         compose_db_1_31484339d5bc
1cd386cca514   memcached                  "docker-entrypoint.s…"   46 seconds ago   Up 41 seconds   11211/tcp                        compose_cache_1_6b9f381e6e82
13f9ad2a8cfa   openproject/community:11   "./docker/prod/entry…"   46 seconds ago   Up 41 seconds   80/tcp, 5432/tcp                 compose_seeder_1_f88dde804cb4
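
OpenProject needs a few minutes to seed its database on first start. If the web UI is not up yet, you can follow the logs of the seeder and web services (service names taken from the compose output above):

docker-compose logs -f seeder web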

The result is OpenProject running in Docker, as in the picture below:

How to deploy with docker run

sudo mkdir -p /var/lib/myopenproject/{pgdata,assets}
head /dev/urandom | tr -dc A-Za-z0-9 | head -c 32 ; echo '' # Generate a random key for the SECRET_KEY_BASE variable.

Deploy the OpenProject container with the command:

docker run -d -p 8080:80 --name myopenproject -e SECRET_KEY_BASE=secret -v /var/lib/myopenproject/pgdata:/var/myopenproject/pgdata -v /var/lib/myopenproject/assets:/var/myopenproject/assets openproject/community:11

The terminal output is as below:

root@DevopsRoles:~/myopenproject/compose# head /dev/urandom | tr -dc A-Za-z0-9 | head -c 32 ; echo ''
HgRwijSjJXYBHRl8MSPfm7oiYd0F5hmK
root@DevopsRoles:~/myopenproject/compose# docker run -d -p 8080:80 --name myopenproject -e SECRET_KEY_BASE=HgRwijSjJXYBHRl8MSPfm7oiYd0F5hmK -v /var/lib/myopenproject/pgdata:/var/myopenproject/pgdata -v /var/lib/myopenproject/assets:/var/myopenproject/assets openproject/community:11
24c5f3fb9b560f4eca821555a50d8cab8ef7b3e38616071db9083ed2784219fe
root@DevopsRoles:~/myopenproject/compose# docker ps
CONTAINER ID   IMAGE                      COMMAND                  CREATED         STATUS         PORTS                            NAMES
24c5f3fb9b56   openproject/community:11   "./docker/prod/entry…"   5 seconds ago   Up 4 seconds   5432/tcp, 0.0.0.0:8080->80/tcp   myopenproject
root@DevopsRoles:~/myopenproject/compose# netstat -nplt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 127.0.0.1:38701         0.0.0.0:*               LISTEN      15247/containerd
tcp        0      0 0.0.0.0:8080            0.0.0.0:*               LISTEN      19208/docker-proxy
tcp        0      0 127.0.0.53:53           0.0.0.0:*               LISTEN      599/systemd-resolve
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1091/sshd: /usr/sbi
tcp6       0      0 :::22                   :::*                    LISTEN      1091/sshd: /usr/sbi

Conclusion

You have deployed the OpenProject platform as a Docker container. I hope this is helpful to you. Thank you for reading the DevopsRoles page!

Reduce AWS Billing and Setup Email Alerts

In this tutorial, you will learn how to reduce AWS billing and set up email alerts.

Set up an AWS Billing email alert

You need to log in to the AWS Console as the root user.

Follow these steps on AWS:

Choose “My Billing Dashboard”

Create a budget

Choose “Cost budget”

For example, my “Budgeted amount” is 15 USD.

Input the email contacts for alerts.

Finally, create it.

Conclusion

You have set up email alerts to help reduce your AWS billing. I hope this is helpful to you. Thank you for reading the DevopsRoles page!

Install Portainer Docker Web GUI on Linux

Introduction

In this tutorial, you will learn how to install the Portainer Docker Web GUI on Linux. Portainer is a web-based management UI for Docker hosts.

Install Portainer Docker

I have already installed Docker on my computer. First, create a Docker volume named portainer_data:

$ docker volume create portainer_data
Or,
$ sudo docker volume create portainer_data

Example:

[root@DockerServer ~]# docker volume create portainer_data
portainer_data
[root@DockerServer ~]# docker volume ls
DRIVER              VOLUME NAME
local               portainer_data

Create a Portainer Docker container.

docker run -d -p 8000:8000 -p 9000:9000 --name=portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce

-d – the container runs in the background
-p 8000:8000 and -p 9000:9000 – publish the corresponding ports on the Docker host
--name=portainer – makes the Portainer container easy to identify
--restart=always – the container is always restarted
-v /var/run/docker.sock:/var/run/docker.sock – allows the container to access the Docker socket
-v portainer_data:/data – mounts the “portainer_data” volume to the “/data” folder within the container

Example:

[root@DockerServer ~]# docker run -d -p 8000:8000 -p 9000:9000 --name=portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce
Unable to find image 'portainer/portainer-ce:latest' locally
Trying to pull repository docker.io/portainer/portainer-ce ...
latest: Pulling from docker.io/portainer/portainer-ce
94cfa856b2b1: Pull complete
49d59ee0881a: Pull complete
527b866940d5: Pull complete
Digest: sha256:5064d8414091c175c55ef6f8744da1210819388c2136273b4607a629b7d93358
Status: Downloaded newer image for docker.io/portainer/portainer-ce:latest
531acdb49696ad5206043915b157bc63bcd0783149485d17bfdd18993a72535a
[root@DockerServer ~]# docker ps
CONTAINER ID        IMAGE                    COMMAND             CREATED             STATUS              PORTS                                            NAMES
531acdb49696        portainer/portainer-ce   "/portainer"        11 seconds ago      Up 10 seconds       0.0.0.0:8000->8000/tcp, 0.0.0.0:9000->9000/tcp   portainer

Accessing Portainer Docker Web Interface

Check that Portainer is running in the background as a container on Docker.

[root@DockerServer ~]# docker ps
CONTAINER ID        IMAGE                    COMMAND             CREATED             STATUS              PORTS                                            NAMES
531acdb49696        portainer/portainer-ce   "/portainer"        11 seconds ago      Up 10 seconds       0.0.0.0:8000->8000/tcp, 0.0.0.0:9000->9000/tcp   portainer
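
As a quick sanity check from the shell, you can confirm the web UI answers on port 9000:

curl -I http://localhost:9000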

Open a web browser to http://192.168.3.6:9000 (use your Docker host’s IP address).

Create Administrator account

Connect Portainer to the container environment

Web Interface

Conclusion

You have installed the Portainer Docker Web GUI on Linux. I hope this is helpful to you. Thank you for reading the DevopsRoles page!

Vagrant provision inline: A Step-by-Step Guide for Developers

Introduction

In this tutorial, you will learn how to use Vagrant’s inline provisioning, which executes shell scripts on the guest machine. Explore the essentials of inline provisioning with Vagrant in our latest tutorial. This guide provides a practical walkthrough on setting up and configuring virtual environments directly through Vagrant’s inline scripts. Perfect for developers and IT professionals, this tutorial simplifies the process of using inline scripts to manage complex configurations, ensuring a seamless and efficient setup for your development projects.

What is Vagrant Inline Provisioning?

Vagrant inline provisioning is a method used in Vagrant for automatically configuring and setting up virtual machines. Here’s a simple explanation:

  • Purpose: Automatically configures and sets up virtual machines.
  • Method: Uses simple scripts written directly in the Vagrantfile.
  • Functionality: Allows users to execute shell commands during the virtual machine setup process.
  • Benefits: Streamlines and automates the configuration of development environments efficiently.

A Vagrant provision inline example is below:


Vagrant.configure("2") do |config|

  config.vm.define :ansible_controller do |ansible_controller|
    ansible_controller.vm.hostname = "ansible"
    ansible_controller.vm.box = "centos/6.5"
    ENV["LC_ALL"] = "en_US.UTF-8"
    config.ssh.insert_key = false
    ansible_controller.vm.network "private_network", ip: "192.168.3.10", :netmask => "255.255.255.0"
    ansible_controller.vm.provision "shell", inline: <<-SHELL
      echo "hello"
      echo "192.168.3.11 centos8" >> /etc/hosts
      groupadd -g 20000 gadmin
      useradd -d /home/huupv -u 9999 -m huupv
      echo -e "huupv\nhuupv\n" |  passwd huupv
      echo "huupv ALL = (root) NOPASSWD:ALL" | tee /etc/sudoers.d/huupv
      chmod 0440 /etc/sudoers.d/huupv
      su - huupv -c "echo |ssh-keygen -t rsa"
      SHELL
  end

  config.vm.define :centos do |centos|
    centos.vm.hostname = "centos8"
    centos.vm.box = "centos/8"
    ENV["LC_ALL"] = "en_US.UTF-8"
    config.ssh.insert_key = false
    centos.vm.network "private_network", ip: "192.168.3.11", :netmask => "255.255.255.0"
    centos.vm.provision "shell", inline: <<-SHELL
      echo "hello"
      groupadd -g 20000 gadmin
      useradd -d /home/huupv -u 9999 -m huupv
      echo -e "huupv\nhuupv\n" |  passwd huupv
      echo "huupv ALL = (root) NOPASSWD:ALL" | tee /etc/sudoers.d/huupv
      chmod 0440 /etc/sudoers.d/huupv
      useradd -m -d /home/user001 -s  /bin/bash   -g gadmin -u 8889  user001; echo -e "user001\nuser001\n"  |  passwd  user001
	  su - huupv -c "mkdir /home/huupv/.ssh"
      su - huupv -c "touch /home/huupv/.ssh/authorized_keys"
      su - huupv -c "chmod 600 /home/huupv/.ssh/authorized_keys"
      su - huupv -c "chmod 700 /home/huupv/.ssh"
    SHELL
  end
end
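
To bring up both machines and run the inline scripts, use the standard Vagrant commands; provisioning runs automatically on the first vagrant up:

vagrant up                            # create the VMs and run the inline provisioners
vagrant provision                     # re-run the provisioners on running VMs
vagrant provision ansible_controller  # re-run provisioning for a single machine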

Conclusion

You now know how to use Vagrant inline provisioning to execute scripts on the guest machine. Mastering Vagrant’s inline provisioning can significantly streamline your development workflow. We’ve covered the key steps to execute scripts within your Vagrantfile, enhancing your capability to manage and automate your virtual environments. Keep experimenting with different configurations and scripts to fully leverage the power of Vagrant in your projects. I hope this is helpful to you. Thank you for reading the DevopsRoles page!

How to git rename branch: A Step-by-Step Guide

Introduction

In this tutorial, you will learn how to rename a branch in Git, covering both local and remote branches. Branches are a powerful feature in Git that allows developers to work on multiple features or experiments concurrently.

However, there may be situations where you need to rename a branch for clarity or consistency. In this guide, we’ll walk you through the steps to rename a branch in Git.

How to rename a Git branch

Rename a Local Branch in Git

First, list the local branches:

$ git branch
$ git branch -a # The -a option lists the remote branches.

Check out the local branch

$ git checkout <old-branch-name>
$ git checkout oldbranch

Rename the Local Branch

Once you have switched to the desired branch, you can rename the local branch with the following command:

$ git branch -m <new-branch-name>
$ git branch -m newbranch

This command changes the name of the local branch oldbranch to newbranch.

You can also rename a local branch from inside another branch:

$ git branch -m <old-branch-name> <new-branch-name>
$ git branch -m oldbranch newbranch

Check the New Branch Name

$ git branch -a

Rename a Remote Branch in Git

  • First, rename the local branch.
  • Push the renamed branch to the remote server.
  • Delete the old branch from the remote repository.

Step 1: Rename the Local Branch

$ git branch -m newbranch
# or
$ git branch -m oldbranch newbranch

Step 2: Push the Updated Branch

Push the renamed branch newbranch to the remote server:

$ git push origin <new-branch-name>
$ git push origin newbranch

Set the Upstream

Set up tracking between the local branch newbranch and the remote branch newbranch.

$ git push origin -u <new-branch-name>
$ git push origin -u newbranch

Step 3: Remove the Old Branch

$ git push origin --delete <old-branch-name>
$ git push origin --delete oldbranch
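
Putting the steps together, a complete rename of oldbranch to newbranch looks like this:

$ git checkout oldbranch
$ git branch -m newbranch
$ git push origin -u newbranch
$ git push origin --delete oldbranch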

Conclusion

You have successfully renamed both your local and remote Git branches. Renaming a branch in Git is a straightforward process that enhances the clarity and consistency of your project’s branch structure.

It’s important to note that renaming a branch changes only its name without affecting the commit history or the contents of the branch. If other developers are working on this branch, make sure to inform them about the name change to facilitate smooth collaboration.

Thanks to its flexibility and robust branch management features, Git remains a vital tool for version control and collaborative development. For more tips like how to rename a branch, stay tuned.

Thank you for visiting DevOpsRoles!

Mastering the lsof Command: Essential Examples for Linux System Administration

Introduction

The lsof command name means “List open files”. The command is not installed by default on CentOS 7/RHEL; install it as below:

$ sudo yum install lsof

In the realm of Linux administration, understanding the tools at your disposal is key to effective system management. The lsof command, which stands for “List Open Files”, is an indispensable utility that provides crucial visibility into the system’s file usage. By listing information about files opened by processes, lsof helps administrators manage resources, troubleshoot system issues, and ensure secure operations. This guide aims to demystify the lsof command through practical examples, enhancing your system management toolkit.

Basic Usage

lsof

This will display a list of all open files and the processes that are using them.

lsof command examples

List open files

$ lsof -n

Kill a process running on port 8443

$ lsof -i :8443 | awk '{print $2}' | tail -n 1 | xargs kill
# or
$ lsof -i :8443  | awk 'NR > 1 {print $2}' | xargs --no-run-if-empty kill

Show the 15 Largest Open Files in Linux.

$ lsof / | awk '{ if($7 > 1048576) print $7/1048576 "MB" " " $9 " " $1 }' | sort -n -u | tail -n 15

List user-specific open files. This will display a list of all open files that are being used by the specified user.

$ lsof -u huupv

Search by PID

$ lsof -p 1

Exclude User with ^ Character

$ lsof -i -u^root

List TCP ports in the range 8000-9000

$ lsof -i TCP:8000-9000
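
Another common use is listing only listening TCP sockets, with numeric hosts and ports:

$ lsof -nP -iTCP -sTCP:LISTEN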

Conclusion

The lsof command is a powerful tool in the Linux administrator’s arsenal, offering deep insights into the system’s interaction with files. From tracking down process-specific files to managing system resources, lsof facilitates a wide range of administrative tasks.

By mastering its usage through the examples provided, you enhance your capabilities in system management, contributing to the overall efficiency and security of your operations. Dive into these examples to leverage lsof effectively, ensuring your Linux systems run smoothly and securely. I hope this is helpful to you. Thank you for reading the DevopsRoles page!

Guide to Install Python 3.6 on CentOS 6

Introduction

CentOS 6 ships with Python 2 by default; this guide shows how to install Python 3.6 on CentOS 6. Python is a powerful and flexible programming language widely used in various fields such as web development, data science, artificial intelligence, and DevOps. Python 3.6 brings many improvements and new features, enhancing performance and security.

In this article, we will guide you through the process of installing Python 3.6 on CentOS 6, one of the popular Linux operating systems for server environments. This installation will allow you to take full advantage of Python 3.6 in your projects.

Install the prerequisite packages

sudo yum -y install gcc openssl-devel bzip2-devel wget

How to Install Python 3.6 on CentOS 6

cd /tmp/
wget https://www.python.org/ftp/python/3.6.6/Python-3.6.6.tgz
tar xzf Python-3.6.6.tgz
cd Python-3.6.6
./configure --enable-optimizations
sudo make altinstall

Create a symbolic link

sudo ln -sfn /usr/local/bin/python3.6 /usr/bin/python3.6

Verify the new Python version.

[huupv2@server1 ~]$ python -V
Python 3.6.6

The result is Python 3.6 on CentOS 6.

[huupv2@server1 ~]$ cat /etc/redhat-release
CentOS release 6.5 (Final)
[huupv2@server1 ~]$ ll /usr/bin/python*
-rwxr-xr-x 2 root root 4864 Aug 18  2016 /usr/bin/python
lrwxrwxrwx 1 root root    6 Jul 19  2018 /usr/bin/python2 -> python
-rwxr-xr-x 2 root root 4864 Aug 18  2016 /usr/bin/python2.6
lrwxrwxrwx 1 root root    9 Mar  8 13:26 /usr/bin/python3 -> python3.4
-rwxr-xr-x 2 root root 6088 Oct  5  2019 /usr/bin/python3.4
-rwxr-xr-x 2 root root 6088 Oct  5  2019 /usr/bin/python3.4m
lrwxrwxrwx 1 root root   24 Mar  8 14:45 /usr/bin/python3.6 -> /usr/local/bin/python3.6

Configure a python alias in the .bashrc file

[huupv2@server1 ~]$ cat .bashrc
# .bashrc

# Source global definitions
if [ -f /etc/bashrc ]; then
        . /etc/bashrc
fi

# User specific aliases and functions
alias python='/usr/bin/python3.6'
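
After editing .bashrc, reload it so the alias takes effect in the current shell:

source ~/.bashrc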

Check Python version 3.6 on CentOS 6

[huupv2@server1 ~]$ python
Python 3.6.6 (default, Mar  8 2021, 14:41:43)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-23)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
[huupv2@server1 ~]$

Conclusion

You have installed Python 3.6 on CentOS 6. Installing Python 3.6 on CentOS 6 may present some challenges, but with this detailed guide, you can easily accomplish it. Python 3.6 will open up many new opportunities for your projects, from web application development to data processing and automating DevOps workflows.

We wish you success in installing and leveraging the full potential of Python 3.6 on CentOS 6. If you encounter any difficulties, don’t hesitate to contact us or refer to community support resources. I hope this is helpful to you. Thank you for reading the DevopsRoles page!

Docker Container Essentials: A Comprehensive Handbook for Efficient Application Deployment

Introduction

You will be learning about container manipulation basics in detail; container manipulation is something you will perform every single day. You can visit the official Docker command-line reference for more details. Now, let’s begin the Docker container handbook.

What is a Docker container?

At its core, a Docker container is a self-contained and lightweight entity that wraps up an application along with all its dependencies. This encapsulation ensures that the application runs consistently across different computing environments. It operates on the principle of containerization, with Docker being a widely used platform for implementing this concept.

One of the key advantages of Docker containers lies in their portability. Applications packaged within containers can seamlessly transition between development, testing, and production environments. This consistent behavior across various platforms simplifies the deployment process and minimizes compatibility challenges.

Containers leverage the host system’s kernel, resulting in efficiency gains by sharing resources and enabling quick startup times. These containers are built from images, which are compact and versioned packages containing the application and its required components. Docker containers are not just isolated units; they also facilitate streamlined software development, testing, and deployment processes.

Run a Container

The old syntax of this command (used before Docker 1.13) is:

docker run <image name>

In newer versions, the syntax of this command is as below:

docker <object> <command> <options>

In this syntax:

  • object can be a container, image, network, or volume object.
  • command is the run command.
  • options can be any valid parameter that can override the default behavior of the command, for example, the --publish option for port mapping.

Now, for this syntax, the run command is as follows:

docker container run <image name>

The “image name” can be any image from an online registry or your local system.

For example, to run an Nginx container as in my terminal below:

docker container run --publish 8080:80 nginx
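
With the port published, a quick sanity check from another terminal on the host should return the Nginx welcome page:

curl http://localhost:8080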

Publish a Port

By default, the host system doesn’t know what is running inside a container, and the outside world cannot reach it. To allow outside access into a container, you publish a port. The syntax is:

--publish <host port>:<container port>

Use Detached Mode

To keep a container running in the background, you can use the --detach option with the run command.

docker container run --detach --publish 8080:80 nginx

List Containers

List the containers that are currently running:

docker container ls

List all containers, including stopped ones:

docker container ls --all

Stop or Kill a Running Container

The syntax to stop or kill a container:

docker container stop <container identifier>
docker container kill <container identifier>
  • <container identifier>: can either be the id or the name of the container.

How to restart a Container

To restart a container that has been previously stopped or killed:

docker container start <container identifier>

To reboot a running container:

docker container restart <container identifier>

Rename a Container

By default, every container has two identifiers:

  • CONTAINER ID
  • NAME

Use the --name option to name a container:

docker container run --detach --publish 8888:80 --name nginx-container nginx

The syntax to rename a container:

docker container rename <container identifier> <new name>

Remove Stopped Containers

To find containers that are not running, use the command “docker container ls --all”.

The syntax to remove a stopped container:

docker container rm <container identifier>
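
To clean up all stopped containers at once, Docker also provides a prune command:

# Remove all stopped containers (asks for confirmation)
docker container prune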

Execute Commands Inside a Container

To execute a command when starting a container, pass it after the image name:

docker run name-of-image uname -a
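
You can also execute a command inside an already running container with docker container exec; for example, to open an interactive shell (assuming the image ships bash):

docker container exec -it <container identifier> bash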

In conclusion, the Docker container handbook provides a comprehensive understanding of the principles and benefits of containerization. I trust that this resource proves to be valuable for your endeavors in deploying and managing applications using Docker containers. The versatility, consistency, and efficiency offered by containerization, as highlighted in the handbook, are crucial aspects that enhance the software development and deployment lifecycle. If you have any further questions or seek additional insights, feel free to explore more on the DevopsRoles page! Thank you for taking the time to read and engage with the content. Best of luck with your Docker container journey in the realm of DevOps!
