Install Nginx Ingress Controller using Helm Chart

Introduction

In this tutorial, how to install the Nginx Ingress Controller using a Helm chart. I want to expose pods using an Ingress Controller. I will use the Nginx ingress controller, which sets up an AWS ELB.

In a production environment, you might instead use the Istio gateway or Traefik.

Create Nginx Ingress Controller using Helm Chart

kubectl create namespace nginx-ingress-controller

Add the new ingress-nginx repo:

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo add stable https://charts.helm.sh/stable
helm repo update

Install ingress-nginx:

helm install nginx-ingress-controller ingress-nginx/ingress-nginx

The output is as below:

$ helm install nginx-ingress-controller ingress-nginx/ingress-nginx
NAME: nginx-ingress-controller
LAST DEPLOYED: Sat Jun 26 16:11:27 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The ingress-nginx controller has been installed.
It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl --namespace default get services -o wide -w nginx-ingress-controller-ingress-nginx-controller'

An example Ingress that makes use of the controller:

  apiVersion: networking.k8s.io/v1beta1
  kind: Ingress
  metadata:
    annotations:
      kubernetes.io/ingress.class: nginx
    name: example
    namespace: foo
  spec:
    rules:
      - host: www.example.com
        http:
          paths:
            - backend:
                serviceName: exampleService
                servicePort: 80
              path: /
    # This section is only required if TLS is to be enabled for the Ingress
    tls:
        - hosts:
            - www.example.com
          secretName: example-tls

If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:

  apiVersion: v1
  kind: Secret
  metadata:
    name: example-tls
    namespace: foo
  data:
    tls.crt: <base64 encoded cert>
    tls.key: <base64 encoded key>
  type: kubernetes.io/tls
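
Note: the release above was installed into the default namespace, even though we created the nginx-ingress-controller namespace earlier. If you prefer to install into that namespace instead, a variant of the install command (using Helm's -n flag) is:

helm install nginx-ingress-controller ingress-nginx/ingress-nginx -n nginx-ingress-controller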

Check that the nginx-ingress-controller resources have been created:

kubectl get pod,svc,deploy

# The output is as below
$ kubectl get pod,svc,deploy
NAME                                                                  READY   STATUS    RESTARTS   AGE
pod/guestbook-tnc42                                                   1/1     Running   0          38m
pod/guestbook-vqgws                                                   1/1     Running   0          38m
pod/guestbook-vqnxf                                                   1/1     Running   0          38m
pod/nginx-ingress-controller-ingress-nginx-controller-7f8f65bf4g6c7   1/1     Running   0          4m23s
pod/redis-master-dp7h7                                                1/1     Running   0          41m
pod/redis-slave-54mt6                                                 1/1     Running   0          39m
pod/redis-slave-8g8h4                                                 1/1     Running   0          39m

NAME                                                                  TYPE           CLUSTER-IP       EXTERNAL-IP                                                              PORT(S)                      AGE
service/guestbook                                                     LoadBalancer   10.100.231.216   aff3d414c479f4faaa2ab82062c87fe5-264485147.us-west-2.elb.amazonaws.com   3000:32767/TCP               38m
service/kubernetes                                                    ClusterIP      10.100.0.1       <none>                                                                   443/TCP                      88m
service/nginx-ingress-controller-ingress-nginx-controller             LoadBalancer   10.100.57.204    a04b31985b6c64607a50d794ef692c57-291207293.us-west-2.elb.amazonaws.com   80:31382/TCP,443:32520/TCP   4m25s
service/nginx-ingress-controller-ingress-nginx-controller-admission   ClusterIP      10.100.216.28    <none>                                                                   443/TCP                      4m25s
service/redis-master                                                  ClusterIP      10.100.76.16     <none>                                                                   6379/TCP                     40m
service/redis-slave                                                   ClusterIP      10.100.126.163   <none>                                                                   6379/TCP                     39m

NAME                                                                READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-ingress-controller-ingress-nginx-controller   1/1     1            1           4m25s

The TYPE of the nginx-ingress-controller-ingress-nginx-controller service is LoadBalancer. This is an L4 load balancer (the AWS ELB). The nginx-ingress-controller pod runs Nginx inside to do L7 load balancing inside the EKS cluster.

If you access the ELB DNS name now, the controller returns a default 404, because no Ingress resource exists yet:

Create Ingress resource for L7 load balancing

For example, create an ingress_example.yaml file:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
    annotations:
      kubernetes.io/ingress.class: nginx
    name: guestbook
    namespace: default
spec:
    rules:
      - http:
          paths:
            - backend:
                serviceName: guestbook
                servicePort: 3000 
              path: /

Apply it

kubectl apply -f ingress_example.yaml
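
You can confirm the Ingress was created and picked up by the controller:

kubectl get ingress guestbook
kubectl describe ingress guestbook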

Get the public DNS of the AWS ELB created by the Nginx ingress controller service:

kubectl  get svc nginx-ingress-controller-ingress-nginx-controller  | awk '{ print $4 }' | tail -1

The output is as below:

a04b31985b6c64607a50d794ef692c57-291207293.us-west-2.elb.amazonaws.com

Access the link in a browser.
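
You can also test from the command line, using the controller's ELB DNS name from the service output above:

curl -I http://a04b31985b6c64607a50d794ef692c57-291207293.us-west-2.elb.amazonaws.com/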

Delete the guestbook service, since traffic now reaches the app through the ingress controller's ELB:

$ kubectl delete svc guestbook
service "guestbook" deleted

Conclusion

You have installed the Nginx Ingress Controller using a Helm chart. I hope this was helpful. Thank you for reading the DevopsRoles page!

Create Docker Image of a .NET Web API

Introduction

Docker helps you to run software projects without the need to set up complex development environments.

In this tutorial, how to create a Docker image of a .NET Web API. You can use the Docker image to run the backend from any PC that has Docker installed and have a front-end web project interact with the API.

Create a .NET Web API

I will create a project named “exdockerapi” using the dotnet CLI with the following command:

dotnet new webapi -o exdockerapi

If you don’t have .NET 5 installed on your PC, you can download it here.

Now, you can go into the exdockerapi project that was created and run the web API.

cd exdockerapi
dotnet run

By default, the application listens on port 5001.

Create Docker Image of a .NET Web API

I will create a new file Dockerfile with the following command:

touch Dockerfile

Now, we will copy and paste the code below into the Dockerfile.

# Stage 1: base runtime image
FROM mcr.microsoft.com/dotnet/aspnet:5.0-focal AS base
WORKDIR /app
EXPOSE 80

# Stage 2: restore and build the project with the SDK image
FROM mcr.microsoft.com/dotnet/sdk:5.0-focal AS build
WORKDIR /src
COPY ["exdockerapi.csproj", "./"]
RUN dotnet restore "./exdockerapi.csproj"
COPY . .
RUN dotnet build "exdockerapi.csproj" -c Release -o /app/build

# Stage 3: publish the app
FROM build AS publish
RUN dotnet publish "exdockerapi.csproj" -c Release -o /app/publish

# Stage 4: copy only the published output into the runtime image
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "exdockerapi.dll"]

Building the Docker Image

Simply build the Docker image based on the Dockerfile with the following command:

docker build -t dockerwebapiex -f Dockerfile .

To check that the dockerwebapiex image exists, run the following command:

docker images | grep dockerwebapiex

Run the Docker Image

Use the following command.

docker run -ti --rm -p 8080:80 dockerwebapiex
  • The -ti option specifies that the image should be run in an interactive terminal mode.
  • The --rm option specifies that the container should be removed immediately after it exits.
  • The -p 8080:80 option maps port 8080 on the host to port 80 inside the container.
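
To verify the API is reachable, call the default template's WeatherForecast endpoint (assuming you kept the template's sample controller):

curl http://localhost:8080/weatherforecast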

Conclusion

You have created a Docker image of a .NET Web API. I hope this was helpful. Thank you for reading the DevopsRoles page!

Terraform build EC2 instance

Introduction

In this tutorial, how to build a simple environment with one EC2 instance on AWS using Terraform. This time, I created the following:

  • VPC
  • Internet Gateway
  • Subnet
  • Route Table
  • Security Group
  • EC2

My Environment for Terraform build EC2 instance

  • OS: Windows
  • Terraform

To install Terraform, refer to the following.

If you are on Windows, you can install it as follows.

choco install terraform
terraform -help

Create a template file

First of all, create a subdirectory and a Terraform template file in it. The name of the template file is arbitrary, but the extension is *.tf.

$ mkdir terraform-aws
$ cd terraform-aws
$ touch main.tf

Terraform Provider settings

We use the AWS provider settings. Terraform supports multiple providers.

provider "aws" {
    access_key = "ACCESS_KEY_HERE"
    secret_key = "SECRET_KEY_HERE"
    region = "us-west-2"
}

Credential information

Use of Terraform variables

variable "access_key" {}
variable "secret_key" {}

provider "aws" {
    access_key = "${var.access_key}"
    secret_key = "${var.secret_key}"
    region = "us-west-2"
}

Assigning a value to a variable

There are three ways to assign a value to a variable.

1.Terraform command

$ terraform apply \
-var 'access_key=AXXXXXXXXXXXXXXXXXXXXXX' \
-var 'secret_key=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'

2.Value in the environment variable

$ export TF_VAR_access_key="AXXXXXXXXXXXXXXXXXXXXX"
$ export TF_VAR_secret_key="XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"

3.Pass the value in a file

For example, with variables named aws_access_key and aws_secret_key, the content of the terraform.tfvars file:

aws_access_key = "AXXXXXXXXXXXXXXXXXXXXX"
aws_secret_key = "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"

How to set the default value of a variable

For example, we can set default values for variables.

variable "aws_access_key" {}
variable "aws_secret_key" {}
variable "region" {
    default = "us-west-2"
}

provider "aws" {
    access_key = "${var.aws_access_key}"
    secret_key = "${var.aws_secret_key}"
    region = "${var.region}"
}

Reference: Provider: AWS - Terraform by HashiCorp.

Terraform Resource settings

In Terraform, resource types are predefined with an aws_* prefix. For example, a VPC is aws_vpc and an EC2 instance is aws_instance. Each attribute of an AWS resource is set in the format name = value. Example: the VPC settings.

resource "aws_vpc" "myVPC" {
    cidr_block = "10.1.0.0/16"
    instance_tenancy = "default"
    enable_dns_support = "true"
    enable_dns_hostnames = "false"
    tags = {
      Name = "myVPC"
    }
}

Referring to other resources

Internet Gateway settings.

resource "aws_vpc" "myVPC" {
    cidr_block = "10.1.0.0/16"
    instance_tenancy = "default"
    enable_dns_support = "true"
    enable_dns_hostnames = "false"
    tags = {
      Name = "myVPC"
    }
}

resource "aws_internet_gateway" "myGW" {
    vpc_id = "${aws_vpc.myVPC.id}"
}

Dependencies between resources

For example, set up a dependency between the VPC and Internet Gateway.

resource "aws_vpc" "myVPC" {
    cidr_block = "10.1.0.0/16"
    instance_tenancy = "default"
    enable_dns_support = "true"
    enable_dns_hostnames = "false"
    tags = {
      Name = "myVPC"
    }
}

resource "aws_internet_gateway" "myGW" {
    vpc_id = "${aws_vpc.myVPC.id}"
    depends_on = ["aws_vpc.myVPC"]
}

We mentioned above how to set the default value for a variable. We can also use a map, as follows:

variable "images" {
    default = {
        us-east-1 = "ami-1ecae776"
        us-west-2 = "ami-e7527ed7"
        us-west-1 = "ami-d114f295"
    }
}

The values of a map variable are referenced by key, for example "${var.images["us-east-1"]}", or with the lookup() function as in the sketch below.
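
For example, a minimal sketch (assuming the images map and the region variable above) that picks the AMI for the configured region:

resource "aws_instance" "example" {
    ami = "${lookup(var.images, var.region)}"
    instance_type = "t2.micro"
}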

Output on the console

output "public ip of aws-test" {
  value = "${aws_instance.aws-test.public_ip}"
}

Terraform build EC2 instance summary

variable "aws_access_key" {}
variable "aws_secret_key" {}
variable "region" {
    default = "us-west-2"
}

variable "images" {
    default = {
        us-east-1 = "ami-1ecae776"
        us-west-2 = "ami-e7527ed7"
        us-west-1 = "ami-d114f295"
    }
}

provider "aws" {
    access_key = "${var.aws_access_key}"
    secret_key = "${var.aws_secret_key}"
    region = "${var.region}"
}

resource "aws_vpc" "myVPC" {
    cidr_block = "10.1.0.0/16"
    instance_tenancy = "default"
    enable_dns_support = "true"
    enable_dns_hostnames = "false"
    tags = {
      Name = "myVPC"
    }
}

resource "aws_internet_gateway" "myGW" {
    vpc_id = "${aws_vpc.myVPC.id}"
}

resource "aws_subnet" "public-a" {
    vpc_id = "${aws_vpc.myVPC.id}"
    cidr_block = "10.1.1.0/24"
    availability_zone = "us-west-2a"
}

resource "aws_route_table" "public-route" {
    vpc_id = "${aws_vpc.myVPC.id}"
    route {
        cidr_block = "0.0.0.0/0"
        gateway_id = "${aws_internet_gateway.myGW.id}"
    }
}

resource "aws_route_table_association" "puclic-a" {
    subnet_id = "${aws_subnet.public-a.id}"
    route_table_id = "${aws_route_table.public-route.id}"
}

resource "aws_security_group" "admin" {
    name = "admin"
    description = "Allow SSH inbound traffic"
    vpc_id = "${aws_vpc.myVPC.id}"
    ingress {
        from_port = 22
        to_port = 22
        protocol = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
    }
    egress {
        from_port = 0
        to_port = 0
        protocol = "-1"
        cidr_blocks = ["0.0.0.0/0"]
    }
}

resource "aws_instance" "aws-test" {
    ami = "${var.images.us-west-2}"
    instance_type = "t2.micro"
    key_name = "aws.devopsroles.com"
    vpc_security_group_ids = [
      "${aws_security_group.admin.id}"
    ]
    subnet_id = "${aws_subnet.public-a.id}"
    associate_public_ip_address = "true"
    root_block_device {
      volume_type = "gp2"
      volume_size = "20"
    }
    ebs_block_device {
      device_name = "/dev/sdf"
      volume_type = "gp2"
      volume_size = "100"
    }
    tags = {
        Name = "aws-test"
    }
}

output "public ip of aws-test" {
  value = "${aws_instance.aws-test.public_ip}"
}
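
Before planning or applying, initialize the working directory so Terraform downloads the AWS provider plugin:

$ terraform init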

Dry-Run Terraform command

$ terraform plan

The terraform plan command will check for syntax errors and invalid parameters set in each block, but it will not check whether the parameter values themselves are correct.

Applying a template

Now let's apply the template and create the resources on AWS.

$ terraform apply

Use terraform show to display the current state:

$ terraform show

Resource changes

  • Add or change content in the main.tf file.
  • Use terraform plan to check the execution plan. Resources marked with " -/+ " will be deleted and recreated, because the attribute change forces replacement.
  • Run the terraform apply command to apply the change.

Delete resource

The terraform destroy command deletes the set of resources in the template. With terraform plan -destroy you can review the execution plan for resource deletion.

$ terraform plan -destroy
$ terraform destroy

How to split template file

So far, all settings are together in one template file, main.tf.

You can divide it into 3 files as below.

main.tf

provider "aws" {
    access_key = "${var.aws_access_key}"
    secret_key = "${var.aws_secret_key}"
    region = "${var.region}"
}

## Describe the definition of the resource
resource "aws_vpc" "myVPC" {
    cidr_block = "10.1.0.0/16"
    instance_tenancy = "default"
    enable_dns_support = "true"
    enable_dns_hostnames = "false"
    tags = {
      Name = "myVPC"
    }
}

...

variables.tf

variable "aws_access_key" {}
variable "aws_secret_key" {}
variable "region" {
    default = "us-west-2"
}

variable "images" {
    default = {
        us-east-1 = "ami-1ecae776"
        us-west-2 = "ami-e7527ed7"
        us-west-1 = "ami-d114f295"
    }
}

outputs.tf

output "public ip of aws-test" {
  value = "${aws_instance.aws-test.public_ip}"
}

Conclusion

You have used Terraform to build an EC2 instance. I hope this was helpful. Thank you for reading the DevopsRoles page!

How to deploy example application to Kubernetes

Introduction

Dive into the world of Kubernetes with our easy-to-follow guide on deploying an example application to Kubernetes. Whether you’re a beginner or an experienced developer, this tutorial will demonstrate how to efficiently deploy and manage applications in a Kubernetes environment.

Learn how to set up your application with Kubernetes pods, services, and external load balancers to ensure scalable and resilient deployment.

Step-by-step: deploy an example application to Kubernetes

Frontend application

  • load balanced by a public ELB
  • read requests load balanced across multiple slaves
  • write requests sent to the single master

Backend example: Redis

  • a single master (writes)
  • multiple slaves (reads)
  • slaves sync continuously from the master

Deploy Redis Master and Redis Slave

For Redis Master

kubectl apply -f https://raw.githubusercontent.com/kubernetes/examples/master/guestbook-go/redis-master-controller.json
kubectl apply -f https://raw.githubusercontent.com/kubernetes/examples/master/guestbook-go/redis-master-service.json

For Redis Slave

kubectl apply -f https://raw.githubusercontent.com/kubernetes/examples/master/guestbook-go/redis-slave-controller.json
kubectl apply -f https://raw.githubusercontent.com/kubernetes/examples/master/guestbook-go/redis-slave-service.json

Deploy frontend app

kubectl apply -f https://raw.githubusercontent.com/kubernetes/examples/master/guestbook-go/guestbook-controller.json
kubectl apply -f https://raw.githubusercontent.com/kubernetes/examples/master/guestbook-go/guestbook-service.json

The result: the redis-master, redis-slave, and guestbook replication controllers have been created.

kubectl get replicationcontroller
# The output as below
$ kubectl get replicationcontroller
NAME           DESIRED   CURRENT   READY   AGE
guestbook      3         3         3       2m13s
redis-master   1         1         1       4m32s
redis-slave    2         2         2       3m10s

Get services and pods

kubectl get pod,service

The output terminal

$ kubectl get pod,service
NAME                     READY   STATUS    RESTARTS   AGE
pod/guestbook-tnc42      1/1     Running   0          2m33s
pod/guestbook-vqgws      1/1     Running   0          2m33s
pod/guestbook-vqnxf      1/1     Running   0          2m33s
pod/redis-master-dp7h7   1/1     Running   0          4m52s
pod/redis-slave-54mt6    1/1     Running   0          3m30s
pod/redis-slave-8g8h4    1/1     Running   0          3m30s

NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP                                                              PORT(S)          AGE
service/guestbook      LoadBalancer   10.100.231.216   aff3d414c479f4faaa2ab82062c87fe5-264485147.us-west-2.elb.amazonaws.com   3000:32767/TCP   2m25s
service/kubernetes     ClusterIP      10.100.0.1       <none>                                                                   443/TCP          52m
service/redis-master   ClusterIP      10.100.76.16     <none>                                                                   6379/TCP         4m24s
service/redis-slave    ClusterIP      10.100.126.163   <none>                                                                   6379/TCP         3m25s

Show external ELB DNS

echo $(kubectl  get svc guestbook | awk '{ print $4 }' | tail -1):$(kubectl  get svc guestbook | awk '{ print $5 }' | tail -1 | cut -d ":" -f 1)

The output terminal

$ echo $(kubectl  get svc guestbook | awk '{ print $4 }' | tail -1):$(kubectl  get svc guestbook | awk '{ print $5 }' | tail -1 | cut -d ":" -f 1)
aff3d414c479f4faaa2ab82062c87fe5-264485147.us-west-2.elb.amazonaws.com:3000

Note: the ELB takes 3-5 minutes to become ready.
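
Once the ELB is active, you can also check from the command line (using the ELB DNS name from the output above):

curl -I http://aff3d414c479f4faaa2ab82062c87fe5-264485147.us-west-2.elb.amazonaws.com:3000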

Conclusion

You have deployed an example application to Kubernetes. We will expose these pods with an Ingress in a later post.

Congratulations on taking your first steps towards mastering application deployment with Kubernetes! By following this tutorial, you’ve gained valuable insights into the intricacies of Kubernetes architecture and how to leverage it for effective application management. Continue to explore and expand your skills, and don’t hesitate to revisit this guide or explore further topics on our website to enhance your Kubernetes expertise.

I hope this was helpful. Thank you for reading the DevopsRoles page!

How to install the Kubernetes Dashboard

In this tutorial, how to install the Metrics Server and the Kubernetes Dashboard.

Install Metrics Server

Metrics Server is a scalable, efficient source of container resource metrics for Kubernetes built-in autoscaling pipelines.

Prerequisites

You have created an Amazon EKS cluster by following the steps in the Install EKS on AWS tutorial.

Install Metrics server

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.6/components.yaml

#The output
E:\Study\cka\devopsroles> kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.6/components.yaml
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
serviceaccount/metrics-server created
deployment.apps/metrics-server created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created

Check that the metrics-server deployment is running:

kubectl get deployment metrics-server -n kube-system

The output terminal

E:\Study\cka\devopsroles>kubectl get deployment metrics-server -n kube-system
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
metrics-server   1/1     1            1           19s
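
Once the Metrics Server is running, you can verify that it is serving metrics:

kubectl top nodes
kubectl top pods -n kube-system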

Install Dashboard Kubernetes

Refer here.

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml

The output shows the resources created in the kubernetes-dashboard namespace:

E:\Study\cka\devopsroles>kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

Check namespace

kubectl get namespace
kubectl get pod,deployment,svc -n kubernetes-dashboard

The output terminal

E:\Study\cka\devopsroles>kubectl get namespace
NAME                   STATUS   AGE
default                Active   12m
kube-node-lease        Active   12m
kube-public            Active   12m
kube-system            Active   12m
kubernetes-dashboard   Active   73s

E:\Study\cka\devopsroles>kubectl get pod,deployment,svc -n kubernetes-dashboard
NAME                                             READY   STATUS    RESTARTS   AGE
pod/dashboard-metrics-scraper-76585494d8-hbv6t   1/1     Running   0          60s
pod/kubernetes-dashboard-5996555fd8-qhcxz        1/1     Running   0          64s

NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/dashboard-metrics-scraper   1/1     1            1           60s
deployment.apps/kubernetes-dashboard        1/1     1            1           64s

NAME                                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/dashboard-metrics-scraper   ClusterIP   10.100.242.231   <none>        8000/TCP   63s
service/kubernetes-dashboard        ClusterIP   10.100.135.44    <none>        443/TCP    78s

Get a token for the dashboard

export AWS_PROFILE=devopsroles-demo
kubectl describe secret $(kubectl get secret -n kubernetes-dashboard | grep kubernetes-dashboard-token | awk '{ print $1 }') -n kubernetes-dashboard

The output terminal shows the token:

$  kubectl describe secret $(kubectl get secret -n kubernetes-dashboard | grep kubernetes-dashboard-token | awk '{ print $1 }') -n kubernetes-dashboard
Name:         kubernetes-dashboard-token-grg5n
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: kubernetes-dashboard
              kubernetes.io/service-account.uid: 6a5c693b-892c-4564-9a79-20e0995bc012

Type:  kubernetes.io/service-account-token

Data
====
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6InVXaXBWU1BGWnp2SFRPaV80c1A1azlhYXA0ZkpyUDhkSlF1MTlkOFNHRG8ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1ncmc1biIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjZhNWM2OTNiLTg5MmMtNDU2NC05YTc5LTIwZTA5OTViYzAxMiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.XoYdQMM256sBLVc8x0zcsHn7RXSe03EWZKiNhHxwhCOrFQKPjyuKb__Myw0KNZj-C3Jo2PpvdE9-kcYgKYAPaOg3BWmseTw6cbveGzIs3tTrubgGsTNvxK9M_RGnrSxot_Ajul7-pTAHTaqSELM3CvctbrBRl39drEcIBajbzyHhlWHpkaeFPS2YG3Ct6jLhcm4saZgAV6WxJyb3cAyRdOhuSdM2dLRDJ4LhQTy8wPGHiwY7MWUE-ybi--ehaEExXiBFp2sTUVC8RQQSHoDhb8RCqh9aR9q1AwHEKze6T0lZMLH93RCb3IWEie950aOV7auQntJkeOfLKcE86Uzigw
ca.crt:     1025 bytes
namespace:  20 bytes

Create a secure channel from your local machine to the API server in the Kubernetes cluster:

kubectl proxy

Access this URL from a browser:

http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

Input the token. The result is as in the picture below.

Create RBAC to control what metrics can be visible

File: eks-admin-service-account.yaml with the content as follows

apiVersion: v1
kind: ServiceAccount
metadata:
  name: eks-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: eks-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin # this is the cluster admin role
subjects:
- kind: ServiceAccount
  name: eks-admin
  namespace: kube-system

Apply it

kubectl apply -f eks-admin-service-account.yaml

The output terminal

E:\Study\cka\devopsroles>kubectl apply -f eks-admin-service-account.yaml
serviceaccount/eks-admin created
clusterrolebinding.rbac.authorization.k8s.io/eks-admin created

Check that it was created in the kube-system namespace:

E:\Study\cka\devopsroles> kubectl get serviceaccount -n kube-system | grep eks-admin
eks-admin                            1         67s

Get a token from the eks-admin service account:

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep eks-admin | awk '{print $1}')

The output terminal shows the token:

$ kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep eks-admin | awk '{print $1}')
Name:         eks-admin-token-k2pxz
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: eks-admin
              kubernetes.io/service-account.uid: b7005b72-97fc-488b-951d-ac1eaa44dfee

Type:  kubernetes.io/service-account-token

Data
====
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6InVXaXBWU1BGWnp2SFRPaV80c1A1azlhYXA0ZkpyUDhkSlF1MTlkOFNHRG8ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJla3MtYWRtaW4tdG9rZW4tazJweHoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZWtzLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiYjcwMDViNzItOTdmYy00ODhiLTk1MWQtYWMxZWFhNDRkZmVlIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmVrcy1hZG1pbiJ9.TA-nf-jlyVcUWcvoq7ixMRpZVgsgZ7sBWzTywjMF010bvN4b7p5V5PbMVmNldUylPZ4FQniQY9Xs-quQDytI2TH8zRFdZDFMccEx_E1vzh1CjizKjqbc233N3JYD7xJPTP1U_xq9inaZ3uXvedCtJBzDXVqjjY9eLKcDfuU_I-LLvPEmzh6Sk0FNDzyngAddD_COs2F8q2KbEUOVL6oHr2ZpgbXgxumM_NG_EAKHQx00iXl7Kr_z7wZZSAsvb24ZwQPOShHn_e49C0SvEQKq_TVMG21xxsn0wLIDfH_PpQd1m_vKHbOyyT_yMs_fQnZXnfq_VK7xEhjU3NvdsolMWg
ca.crt:     1025 bytes
namespace:  11 bytes

Create a secure channel from your local machine to the API server in the Kubernetes cluster:

kubectl proxy

Access this URL from a browser:

http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

Input the token as in the picture below.

The result: the Kubernetes Dashboard has been installed.

Uninstall Dashboard

kubectl delete -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml
kubectl delete -f eks-admin-service-account.yaml

Conclusion

You have installed the Kubernetes Dashboard. I hope this was helpful. Thank you for reading the DevopsRoles page!

Unlocking helm commands for Kubernetes

Introduction

Explore the essential Helm commands for Kubernetes in this detailed tutorial. Whether you’re a beginner or a seasoned Kubernetes user, this guide will help you install Helm and utilize its commands to manage charts and repositories effectively, streamlining your Kubernetes deployments.

In this tutorial, how to install Helm and run Helm commands. Helm provides many commands for managing charts and Helm repositories. Now, let's look at Helm commands for Kubernetes:

  • Helm Commands for Kubernetes: Simplifies application deployment and management in Kubernetes environments.
  • What is a Helm Chart?: A package containing pre-configured Kubernetes resources used for deploying applications.
  • Purpose of Helm: Streamlines deployments, manages versions and rollbacks, and allows customization of installations through charts.
  • Using Helm Charts: Install Helm, add repositories, and manage applications within your Kubernetes cluster using Helm’s command suite, including install, update, and delete operations.

Install Helm commands for Kubernetes

You can refer here.

From Homebrew (Mac)

brew install helm

From Windows

choco install kubernetes-helm

Check version

helm version

The output is as below:

Windows PowerShell
Copyright (C) Microsoft Corporation. All rights reserved.

Try the new cross-platform PowerShell https://aka.ms/pscore6

PS C:\Windows\system32> choco install kubernetes-helm
Chocolatey v0.10.15
Installing the following packages:
kubernetes-helm
By installing you accept licenses for the packages.
Progress: Downloading kubernetes-helm 3.5.4... 100%

kubernetes-helm v3.5.4 [Approved]
kubernetes-helm package files install completed. Performing other installation steps.
The package kubernetes-helm wants to run 'chocolateyInstall.ps1'.
Note: If you don't run this script, the installation will fail.
Note: To confirm automatically next time, use '-y' or consider:
choco feature enable -n allowGlobalConfirmation
Do you want to run the script?([Y]es/[A]ll - yes to all/[N]o/[P]rint): Yes

Downloading kubernetes-helm 64 bit
  from 'https://get.helm.sh/helm-v3.5.4-windows-amd64.zip'
Progress: 100% - Completed download of C:\Users\USERNAME\AppData\Local\Temp\chocolatey\kubernetes-helm\3.5.4\helm-v3.5.4-windows-amd64.zip (11.96 MB).
Download of helm-v3.5.4-windows-amd64.zip (11.96 MB) completed.
Hashes match.
Extracting C:\Users\USERNAME\AppData\Local\Temp\chocolatey\kubernetes-helm\3.5.4\helm-v3.5.4-windows-amd64.zip to C:\ProgramData\chocolatey\lib\kubernetes-helm\tools...
C:\ProgramData\chocolatey\lib\kubernetes-helm\tools
 ShimGen has successfully created a shim for helm.exe
 The install of kubernetes-helm was successful.
  Software installed to 'C:\ProgramData\chocolatey\lib\kubernetes-helm\tools'

Chocolatey installed 1/1 packages.
 See the log for details (C:\ProgramData\chocolatey\logs\chocolatey.log).

Did you know the proceeds of Pro (and some proceeds from other
 licensed editions) go into bettering the community infrastructure?
 Your support ensures an active community, keeps Chocolatey tip top,
 plus it nets you some awesome features!
 https://chocolatey.org/compare
PS C:\Windows\system32> helm version
version.BuildInfo{Version:"v3.5.4", GitCommit:"1b5edb69df3d3a08df77c9902dc17af864ff05d1", GitTreeState:"clean", GoVersion:"go1.15.11"}
PS C:\Windows\system32>

Helm commands for Kubernetes

Add a Helm repo. Link here:

# Example
helm repo add stable https://charts.helm.sh/stable

I will add the bitnami repo and search for the Nginx server with the following commands:

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm search repo nginx
helm search repo bitnami/nginx

The output of the commands is as follows:

E:\Study\cka\devopsroles>helm search repo nginx
NAME                                    CHART VERSION   APP VERSION     DESCRIPTION
bitnami/nginx                           8.9.1           1.19.10         Chart for the nginx server
bitnami/nginx-ingress-controller        7.6.9           0.46.0          Chart for the nginx Ingress controller
stable/nginx-ingress                    1.41.3          v0.34.1         DEPRECATED! An nginx Ingress controller that us...
stable/nginx-ldapauth-proxy             0.1.6           1.13.5          DEPRECATED - nginx proxy with ldapauth
stable/nginx-lego                       0.3.1                           Chart for nginx-ingress-controller and kube-lego
bitnami/kong                            3.7.4           2.4.1           Kong is a scalable, open source API layer (aka ...
stable/gcloud-endpoints                 0.1.2           1               DEPRECATED Develop, deploy, protect and monitor...

E:\Study\cka\devopsroles>helm search repo bitnami/nginx
NAME                                    CHART VERSION   APP VERSION     DESCRIPTION
bitnami/nginx                           8.9.1           1.19.10         Chart for the nginx server
bitnami/nginx-ingress-controller        7.6.9           0.46.0          Chart for the nginx Ingress controller

Install Nginx using the helm command

helm install nginx bitnami/nginx

Upgrade Nginx

helm upgrade nginx bitnami/nginx --dry-run

# Upgrade using values in overrides_nginx.yaml
helm upgrade nginx bitnami/nginx -f overrides_nginx.yaml

# rollback
helm rollback nginx REVISION_NUMBER

Basic Helm commands

helm status nginx
helm history nginx
# get manifest and values from deployment
helm get manifest nginx
helm get values nginx
helm uninstall nginx
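
To list the releases in the cluster, another everyday command:

helm list
helm list --all-namespaces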

Conclusion

Mastering Helm commands enhances your Kubernetes management skills, allowing for more efficient application deployment and management. This tutorial provides the foundation you need to confidently use Helm in your Kubernetes environment, improving your operational capabilities.

You now know how to use Helm commands for Kubernetes. I hope this was helpful. Thank you for reading the DevopsRoles page!

Install EKS on AWS

In this tutorial, how to set up an EKS 1.16 cluster with eksctl. Amazon Elastic Kubernetes Service (Amazon EKS) gives you the flexibility to start, run, and scale Kubernetes applications in the AWS cloud or on-premises. Now, let's go install EKS on AWS.

1. First, create a free account on AWS.

Link here:

Example: Create the user devopsroles-demo as in the picture below:

2. Install the AWS CLI on Windows

Refer here:

Create AWS Profile

It is easier to switch between different AWS IAM users or IAM role identities with ‘export AWS_PROFILE=PROFILE_NAME’. I will not use the ‘default’ profile created by ‘aws configure’. For example, I create a named AWS profile ‘devopsroles-demo’ in two ways:

  • ‘aws configure --profile devopsroles-demo’
E:\Study\cka\devopsroles>aws configure --profile devopsroles-demo
AWS Access Key ID [None]: XXXXZHBNJLCKKCE7EQQQ
AWS Secret Access Key [None]: fdfdfdfd43434dYlQ1il1xKNCnqwUvNHFSv41111
Default region name [None]:
Default output format [None]:

E:\Study\cka\devopsroles>set AWS_PROFILE=devopsroles-demo
E:\Study\cka\devopsroles>aws sts get-caller-identity
{
    "UserId": "AAQAZHBNJLCKPEGKYAV1R",
    "Account": "456602660300",
    "Arn": "arn:aws:iam::456602660300:user/devopsroles-demo"
}
  • Create a profile entry in the ~/.aws/credentials file

The content of the credentials file is as below:

[devopsroles-demo]
aws_access_key_id=YOUR_ACCESS_KEY
aws_secret_access_key=YOUR_SECRET_ACCESS_KEY
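
The region for a named profile normally belongs in the ~/.aws/config file rather than the credentials file, for example:

[profile devopsroles-demo]
region = us-west-2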

Check new profile

export AWS_PROFILE=devopsroles-demo
# Windows
set AWS_PROFILE=devopsroles-demo

3. Install aws-iam-authenticator

# Windows
# install chocolatey first: https://chocolatey.org/install
choco install -y aws-iam-authenticator

4. Install kubectl

Ref here:

choco install kubernetes-cli
kubectl version

5. Install eksctl

Ref here:

# install eksctl from chocolatey
chocolatey install -y eksctl 
eksctl version

6. Create an ssh key for EKS worker nodes

ssh-keygen
# Example name key is devopsroles_worker_nodes_demo.pem
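
A minimal sketch of generating the key pair with the file name used below (eksctl reads the .pub file):

ssh-keygen -t rsa -b 4096 -f ~/.ssh/devopsroles_worker_nodes_demo.pem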

7. Set up the EKS cluster with eksctl (so you don’t need to manually create the VPC)

The eksctl tool will create the K8s control plane (master nodes, etcd, API server, etc.), worker nodes, VPC, security groups, subnets, routes, internet gateway, etc.

  • use the official AWS EKS AMI
  • dedicated VPC
  • EKS is not supported in us-west-1

eksctl create cluster  --name devopsroles-from-eksctl --version 1.16  --region us-west-2  --nodegroup-name workers --node-type t3.medium --nodes 2 --nodes-min 1 --nodes-max 4 --ssh-access --ssh-public-key ~/.ssh/devopsroles_worker_nodes_demo.pem.pub --managed

The output

E:\Study\cka\devopsroles>eksctl create cluster  --name devopsroles-from-eksctl --version 1.16  --region us-west-2  --nodegroup-name workers --node-type t3.medium --nodes 2 --nodes-min 1 --nodes-max 4 --ssh-access
--ssh-public-key ~/.ssh/devopsroles_worker_nodes_demo.pem.pub --managed
2021-05-23 15:19:30 [ℹ]  eksctl version 0.49.0
2021-05-23 15:19:30 [ℹ]  using region us-west-2
2021-05-23 15:19:31 [ℹ]  setting availability zones to [us-west-2a us-west-2b us-west-2c]
2021-05-23 15:19:31 [ℹ]  subnets for us-west-2a - public:192.168.0.0/19 private:192.168.96.0/19
2021-05-23 15:19:31 [ℹ]  subnets for us-west-2b - public:192.168.32.0/19 private:192.168.128.0/19
2021-05-23 15:19:31 [ℹ]  subnets for us-west-2c - public:192.168.64.0/19 private:192.168.160.0/19
2021-05-23 15:19:31 [ℹ]  using SSH public key "C:\\Users\\USERNAME/.ssh/devopsroles_worker_nodes_demo.pem.pub" as "eksctl-devopsroles-from-eksctl-nodegroup-workers-29:e7:8c:c3:df:a5:23:1b:bb:74:ad:51:bc:fb:80:9b" 
2021-05-23 15:19:32 [ℹ]  using Kubernetes version 1.16
2021-05-23 15:19:32 [ℹ]  creating EKS cluster "devopsroles-from-eksctl" in "us-west-2" region with managed nodes
2021-05-23 15:19:32 [ℹ]  will create 2 separate CloudFormation stacks for cluster itself and the initial managed nodegroup
2021-05-23 15:19:32 [ℹ]  if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-west-2 --cluster=devopsroles-from-eksctl'
2021-05-23 15:19:32 [ℹ]  CloudWatch logging will not be enabled for cluster "devopsroles-from-eksctl" in "us-west-2"
2021-05-23 15:19:32 [ℹ]  you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=us-west-2 --cluster=devopsroles-from-eksctl'
2021-05-23 15:19:32 [ℹ]  Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "devopsroles-from-eksctl" in "us-west-2"
2021-05-23 15:19:32 [ℹ]  2 sequential tasks: { create cluster control plane "devopsroles-from-eksctl", 2 sequential sub-tasks: { wait for control plane to become ready, create managed nodegroup "workers" } }
2021-05-23 15:19:32 [ℹ]  building cluster stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:19:34 [ℹ]  deploying stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:20:04 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:20:35 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:21:36 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:22:37 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:23:39 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:24:40 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:25:41 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:26:42 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:27:44 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:28:45 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:29:46 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:30:47 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:31:48 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:31:52 [ℹ]  building managed nodegroup stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:31:53 [ℹ]  deploying stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:31:53 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:32:09 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:32:27 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:32:48 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:33:05 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:33:26 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:33:47 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:34:06 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:34:24 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:34:43 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:35:01 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:35:17 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:35:37 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:35:38 [ℹ]  waiting for the control plane availability...
2021-05-23 15:35:38 [✔]  saved kubeconfig as "C:\\Users\\USERNAME/.kube/config"
2021-05-23 15:35:38 [ℹ]  no tasks
2021-05-23 15:35:38 [✔]  all EKS cluster resources for "devopsroles-from-eksctl" have been created
2021-05-23 15:35:39 [ℹ]  nodegroup "workers" has 2 node(s)
2021-05-23 15:35:39 [ℹ]  node "ip-192-168-53-14.us-west-2.compute.internal" is ready
2021-05-23 15:35:39 [ℹ]  node "ip-192-168-90-229.us-west-2.compute.internal" is ready
2021-05-23 15:35:39 [ℹ]  waiting for at least 1 node(s) to become ready in "workers"
2021-05-23 15:35:39 [ℹ]  nodegroup "workers" has 2 node(s)
2021-05-23 15:35:39 [ℹ]  node "ip-192-168-53-14.us-west-2.compute.internal" is ready
2021-05-23 15:35:39 [ℹ]  node "ip-192-168-90-229.us-west-2.compute.internal" is ready
2021-05-23 15:35:47 [ℹ]  kubectl command should work with "C:\\Users\\USERNAME/.kube/config", try 'kubectl get nodes'
2021-05-23 15:35:47 [✔]  EKS cluster "devopsroles-from-eksctl" in "us-west-2" region is ready

You have created a cluster. The cluster credentials have been added to ~/.kube/config.

The result on AWS: the new resources are visible in the Amazon EKS Clusters, CloudFormation, and EC2 consoles.

For example, some basic command lines:

Get info about cluster resources

aws eks describe-cluster --name devopsroles-from-eksctl --region us-west-2

Get services

kubectl get svc

The output

E:\Study\cka\devopsroles>kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   11m
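
You can also confirm that the worker nodes joined the cluster, as the eksctl output suggests:

kubectl get nodes -o wide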

Delete EKS Cluster

eksctl delete cluster --name devopsroles-from-eksctl --region us-west-2

The output

E:\Study\cka\devopsroles>eksctl delete cluster --name devopsroles-from-eksctl --region us-west-2
2021-05-23 15:57:31 [ℹ]  eksctl version 0.49.0
2021-05-23 15:57:31 [ℹ]  using region us-west-2
2021-05-23 15:57:31 [ℹ]  deleting EKS cluster "devopsroles-from-eksctl"
2021-05-23 15:57:34 [ℹ]  deleted 0 Fargate profile(s)
2021-05-23 15:57:37 [✔]  kubeconfig has been updated
2021-05-23 15:57:37 [ℹ]  cleaning up AWS load balancers created by Kubernetes objects of Kind Service or Ingress
2021-05-23 15:57:45 [ℹ]  2 sequential tasks: { delete nodegroup "workers", delete cluster control plane "devopsroles-from-eksctl" [async] }
2021-05-23 15:57:45 [ℹ]  will delete stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:57:45 [ℹ]  waiting for stack "eksctl-devopsroles-from-eksctl-nodegroup-workers" to get deleted
2021-05-23 15:57:45 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:58:02 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:58:19 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:58:40 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:58:58 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:59:19 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:59:20 [ℹ]  will delete stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:59:20 [✔]  all cluster resources were deleted

Conclusion

You have installed EKS on AWS. I hope this was helpful. Thank you for reading the DevopsRoles page!

How to deploy OpenProject platform as a Docker Container

Introduction

In this tutorial, How to deploy OpenProject platform as a Docker Container.

OpenProject is an outstanding platform for project management. It can manage meetings, control project budgets, run reports on your projects, communicate with a project team, and more.

Deploy OpenProject platform as a Docker Container

Install Docker and Docker-Compose

I will do this deployment on Ubuntu Server.

sudo apt-get install docker.io -y
sudo usermod -aG docker $USER
sudo curl -L "https://github.com/docker/compose/releases/download/1.23.1/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

Check the versions after the installation finishes.

root@DevopsRoles:~# docker --version
Docker version 20.10.2, build 20.10.2-0ubuntu2
root@DevopsRoles:~# docker-compose --version
docker-compose version 1.23.1, build b02f1306

Deploy OpenProject with Docker Compose

git clone https://github.com/opf/openproject-deploy --depth=1 --branch=stable/11 myopenproject
cd myopenproject/compose
docker-compose pull # Make sure to update docker images.
docker-compose up -d # You need to wait a few minutes.

For example, The output terminal is as below:

root@DevopsRoles:~# git clone https://github.com/opf/openproject-deploy --depth=1 --branch=stable/11 myopenproject
Cloning into 'myopenproject'...
remote: Enumerating objects: 7, done.
remote: Counting objects: 100% (7/7), done.
remote: Compressing objects: 100% (6/6), done.
remote: Total 7 (delta 0), reused 2 (delta 0), pack-reused 0
Receiving objects: 100% (7/7), done.
root@DevopsRoles:~# cd myopenproject/compose
root@DevopsRoles:~/myopenproject/compose# docker-compose pull
Pulling db     ... done
Pulling cache  ... done
Pulling seeder ... done
Pulling cron   ... done
Pulling worker ... done
Pulling web    ... done
Pulling proxy  ... done
root@DevopsRoles:~/myopenproject/compose# docker-compose up -d
Creating network "compose_backend" with the default driver
Creating network "compose_frontend" with the default driver
Creating volume "compose_pgdata" with default driver
Creating volume "compose_opdata" with default driver
Creating compose_seeder_1_f0f0cb90c947 ... done
Creating compose_cache_1_e6fe61ccd342  ... done
Creating compose_db_1_17392590a82e     ... done
Creating compose_cron_1_f15f9d68fc11   ... done
Creating compose_web_1_ce68c823fc5f    ... done
Creating compose_worker_1_a9c88ca2f672 ... done
Creating compose_proxy_1_c7c5f08e77e8  ... done
root@DevopsRoles:~/myopenproject/compose#
root@DevopsRoles:~/myopenproject/compose# docker ps
CONTAINER ID   IMAGE                      COMMAND                  CREATED          STATUS          PORTS                            NAMES
9aa1fe05f737   openproject/community:11   "./docker/prod/entry…"   40 seconds ago   Up 35 seconds   5432/tcp, 0.0.0.0:8080->80/tcp   compose_proxy_1_5fdbbaa63ec5
2df1b233a515   openproject/community:11   "./docker/prod/entry…"   42 seconds ago   Up 39 seconds   80/tcp, 5432/tcp                 compose_worker_1_6db9f1adb68b
e1b6878e9e32   openproject/community:11   "./docker/prod/entry…"   42 seconds ago   Up 39 seconds   80/tcp, 5432/tcp                 compose_web_1_544e288b78ff
ef3b645bc783   openproject/community:11   "./docker/prod/entry…"   42 seconds ago   Up 39 seconds   80/tcp, 5432/tcp                 compose_cron_1_db11c0e207d9
0dad3d1c28d1   postgres:10                "docker-entrypoint.s…"   46 seconds ago   Up 41 seconds   5432/tcp                         compose_db_1_31484339d5bc
1cd386cca514   memcached                  "docker-entrypoint.s…"   46 seconds ago   Up 41 seconds   11211/tcp                        compose_cache_1_6b9f381e6e82
13f9ad2a8cfa   openproject/community:11   "./docker/prod/entry…"   46 seconds ago   Up 41 seconds   80/tcp, 5432/tcp                 compose_seeder_1_f88dde804cb4

The result is OpenProject running in Docker, as in the picture below:

How to deploy with docker run

sudo mkdir -p /var/lib/myopenproject/{pgdata,assets}
head /dev/urandom | tr -dc A-Za-z0-9 | head -c 32 ; echo '' # Generate a random key for the SECRET_KEY_BASE variable.

Deploy the OpenProject container with the command:

docker run -d -p 8080:80 --name myopenproject -e SECRET_KEY_BASE=secret -v /var/lib/myopenproject/pgdata:/var/myopenproject/pgdata -v /var/lib/myopenproject/assets:/var/myopenproject/assets openproject/community:11

The output terminal is as below:

root@DevopsRoles:~/myopenproject/compose# head /dev/urandom | tr -dc A-Za-z0-9 | head -c 32 ; echo ''
HgRwijSjJXYBHRl8MSPfm7oiYd0F5hmK
root@DevopsRoles:~/myopenproject/compose# docker run -d -p 8080:80 --name myopenproject -e SECRET_KEY_BASE=HgRwijSjJXYBHRl8MSPfm7oiYd0F5hmK -v /var/lib/myopenproject/pgdata:/var/myopenproject/pgdata -v /var/lib/myopenproject/assets:/var/myopenproject/assets openproject/community:11
24c5f3fb9b560f4eca821555a50d8cab8ef7b3e38616071db9083ed2784219fe
root@DevopsRoles:~/myopenproject/compose# docker ps
CONTAINER ID   IMAGE                      COMMAND                  CREATED         STATUS         PORTS                            NAMES
24c5f3fb9b56   openproject/community:11   "./docker/prod/entry…"   5 seconds ago   Up 4 seconds   5432/tcp, 0.0.0.0:8080->80/tcp   myopenproject
root@DevopsRoles:~/myopenproject/compose# netstat -nplt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 127.0.0.1:38701         0.0.0.0:*               LISTEN      15247/containerd
tcp        0      0 0.0.0.0:8080            0.0.0.0:*               LISTEN      19208/docker-proxy
tcp        0      0 127.0.0.53:53           0.0.0.0:*               LISTEN      599/systemd-resolve
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1091/sshd: /usr/sbi
tcp6       0      0 :::22                   :::*                    LISTEN      1091/sshd: /usr/sbi
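
OpenProject is now listening on port 8080. Open http://YOUR_SERVER_IP:8080 in a browser, or check from the shell (the first start can take a few minutes while the database is seeded):

curl -I http://localhost:8080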

Conclusion

You have deployed the OpenProject platform as a Docker container. I hope this was helpful. Thank you for reading the DevopsRoles page!

Reduce AWS Billing and Setup Email Alerts

In this tutorial, how to reduce AWS billing and set up email alerts.

Setup AWS Billing email alert

You need to log in to the AWS Console as the root user.

Follow these steps on AWS:

Choose “My Billing Dashboard”.

Create a budget.

Choose “Cost budget”.

For example, my “Budgeted amount” is 15 USD.

Input the email contacts for alerts.

Finally, create it.

Conclusion

You have set up email alerts to help keep your AWS billing under control. I hope this was helpful. Thank you for reading the DevopsRoles page!

Install Portainer Docker Web GUI on Linux

Introduction

In this tutorial, how to install the Portainer Docker web GUI on Linux. Portainer is a web-based management UI for Docker hosts.

Install Portainer Docker

I have Docker installed on my computer. First, create a Docker volume named portainer_data:

$ docker volume create portainer_data
Or,
$ sudo docker volume create portainer_data

Example:

[root@DockerServer ~]# docker volume create portainer_data
portainer_data
[root@DockerServer ~]# docker volume ls
DRIVER              VOLUME NAME
local               portainer_data

Create a Portainer Docker container.

docker run -d -p 8000:8000 -p 9000:9000 --name=portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce

  • -d – the container runs in the background
  • -p 8000:8000 and -p 9000:9000 – publish the corresponding ports on the Docker host
  • --name=portainer – to easily identify the container created for Portainer
  • --restart=always – the container is always restarted
  • -v /var/run/docker.sock:/var/run/docker.sock – allows the container to access the Docker socket
  • -v portainer_data:/data – mounts the created portainer_data volume at the /data folder inside the container

Example:

[root@DockerServer ~]# docker run -d -p 8000:8000 -p 9000:9000 --name=portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce
Unable to find image 'portainer/portainer-ce:latest' locally
Trying to pull repository docker.io/portainer/portainer-ce ...
latest: Pulling from docker.io/portainer/portainer-ce
94cfa856b2b1: Pull complete
49d59ee0881a: Pull complete
527b866940d5: Pull complete
Digest: sha256:5064d8414091c175c55ef6f8744da1210819388c2136273b4607a629b7d93358
Status: Downloaded newer image for docker.io/portainer/portainer-ce:latest
531acdb49696ad5206043915b157bc63bcd0783149485d17bfdd18993a72535a
[root@DockerServer ~]# docker ps
CONTAINER ID        IMAGE                    COMMAND             CREATED             STATUS              PORTS                                            NAMES
531acdb49696        portainer/portainer-ce   "/portainer"        11 seconds ago      Up 10 seconds       0.0.0.0:8000->8000/tcp, 0.0.0.0:9000->9000/tcp   portainer

Accessing Portainer Docker Web Interface

Check Portainer is running in the background as a container on Docker.

[root@DockerServer ~]# docker ps
CONTAINER ID        IMAGE                    COMMAND             CREATED             STATUS              PORTS                                            NAMES
531acdb49696        portainer/portainer-ce   "/portainer"        11 seconds ago      Up 10 seconds       0.0.0.0:8000->8000/tcp, 0.0.0.0:9000->9000/tcp   portainer

Open a web browser: http://192.168.3.6:9000 (replace with your Docker host’s IP address).

Create Administrator account

Connect Portainer to the container environment

Web Interface

Conclusion

You have installed the Portainer Docker web GUI on Linux. I hope this was helpful. Thank you for reading the DevopsRoles page!
