How to Call Git Bash Commands from PowerShell

Introduction

Combining PowerShell with Git Bash can enhance your productivity by allowing you to use Unix-like commands within a Windows environment. In this guide, we’ll show you how to call Git Bash commands from PowerShell, using an example script that splits large CSV files on Windows. We’ll cover everything from the basic setup to troubleshooting.

Setting Up Your Environment

Installing Git Bash

First, ensure you have Git Bash installed on your system. Download it from the official Git website and follow the installation instructions.

Adding Git Bash to Your System PATH

To call Git Bash commands from PowerShell, add Git Bash to your system PATH:

  1. Open the Start menu, search for “Environment Variables,” and select “Edit the system environment variables.”
  2. Click the “Environment Variables” button.
  3. Under “System variables,” find and select the “Path” variable, then click “Edit.”
  4. Click “New” and add the path to the Git Bash executable, typically C:\Program Files\Git\bin.

Call a Git Bash command from PowerShell

Save the following script as split.sh in your K:/TEST directory:

#!/bin/bash
cd "$1" || exit 1
echo "split start"
date
pwd
# Split Filetest.CSV into numbered pieces of 20,000,000 lines each
split -l 20000000 -d Filetest.CSV Filetest
ls -l
for filename in "$1"/*; do
    wc -l "$filename"
done
date
echo "split end"

This script performs the following tasks:

  1. Changes the directory to the one specified by the first argument.
  2. Prints a start message and the current date.
  3. Displays the current directory.
  4. Splits Filetest.CSV into smaller files with 20,000,000 lines each.
  5. Lists the files in the directory.
  6. Counts the number of lines in each file in the directory.
  7. Prints the current date and an end message.
  8. Exits the script.
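The heart of the script is the split invocation. Here is a minimal, self-contained sketch you can run anywhere with GNU coreutils, using smaller numbers than the real 20,000,000 for demonstration:

```shell
# Create a 10-line stand-in for the CSV, then split it into numbered 4-line pieces.
mkdir -p /tmp/split-demo && cd /tmp/split-demo
seq 1 10 > Filetest.CSV
split -l 4 -d Filetest.CSV Filetest   # -l: lines per piece, -d: numeric suffixes
ls Filetest0*                         # Filetest00 Filetest01 Filetest02
wc -l Filetest00 Filetest02           # 4 and 2 lines respectively
```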

PowerShell Script

Create a PowerShell script to call the split.sh script:

$TOOL_PATH = "K:/TEST"
$FOLDER_PATH = "K:/TEST/INPUT"

$COMMAND = "bash.exe " + $TOOL_PATH + "/split.sh " + $FOLDER_PATH
echo $COMMAND
Invoke-Expression $COMMAND

This PowerShell script does the following:

  1. Defines the path to the directory containing the split.sh script.
  2. Defines the path to the directory to be processed by the split.sh script.
  3. Constructs the command to call the split.sh script using bash.exe.
  4. Prints the constructed command.
  5. Executes the constructed command.

Explanation

  1. $TOOL_PATH: This variable holds the path where your split.sh script is located.
  2. $FOLDER_PATH: This variable holds the path to the directory you want to process with the split.sh script.
  3. $COMMAND: This variable constructs the full command string that calls bash.exe with the script path and the folder path as arguments.
  4. echo $COMMAND: This line prints the constructed command for verification.
  5. Invoke-Expression $COMMAND: This line executes the constructed command.


Troubleshooting

Common Issues and Solutions

  • Git Bash not found: Ensure Git Bash is installed and added to your system PATH.
  • Permission denied: Make sure your script has execute permissions (chmod +x split.sh).
  • Command not recognized: Verify the syntax and ensure you’re using the correct paths.
  • Incorrect output or errors: Print debugging information in your scripts to diagnose issues.

FAQs

How do I add Git Bash to my PATH variable?

Add the path to Git Bash (e.g., C:\Program Files\Git\bin) to the system PATH environment variable.

Can I pass multiple arguments from PowerShell to Git Bash?

Yes, you can pass multiple arguments by modifying the command string in the PowerShell script.
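For example, here is a hedged sketch: a variant of the script that takes the folder as the first argument and a chunk size as a second argument. The names are illustrative, not part of the original split.sh:

```shell
# args_demo.sh -- $1 is the target directory, $2 is the lines-per-chunk count
cat > /tmp/args_demo.sh <<'EOF'
#!/bin/bash
dir="$1"
lines="$2"
echo "directory: $dir"
echo "lines per chunk: $lines"
EOF
bash /tmp/args_demo.sh K:/TEST/INPUT 20000000
# directory: K:/TEST/INPUT
# lines per chunk: 20000000
```

On the PowerShell side, you would append the extra argument to the command string, e.g. `$COMMAND = "bash.exe " + $TOOL_PATH + "/split.sh " + $FOLDER_PATH + " 20000000"`.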

How do I capture the output of a Git Bash command in PowerShell?

Use the following approach to capture the output:

$output = bash -c "git status"
Write-Output $output

Can I automate Git Bash scripts with PowerShell?

Yes, you can automate Git Bash scripts by scheduling PowerShell scripts or using task automation features in PowerShell.

Conclusion

By following this guide, you can easily call Git Bash commands from PowerShell, enabling you to leverage the strengths of both command-line interfaces. Whether you’re performing basic operations or advanced scripting, integrating Git Bash with PowerShell can significantly enhance your workflow. Thank you for reading the DevopsRoles page!

Terraform commands cheat sheet

Introduction

In this tutorial, you will learn the most common Terraform commands, collected into a cheat sheet.

Terraform commands cheat sheet

Initialize the working directory, download the provider plugins declared in the configuration, and verify the basic setup:

terraform init

Display a preview of the resources that are going to be created, updated, or destroyed:

terraform plan

Build or change infrastructure.

terraform apply

Destroy terraform-managed infrastructure.

terraform destroy

Run plan, apply, or destroy against only a target resource:

terraform plan -target=resource
terraform apply -target=resource
terraform destroy -target=resource

Show human-readable output from Terraform state.

terraform show

Validate the Terraform configuration syntax:

terraform validate

Rewrite Terraform code to a canonical format. It is useful to run this command before git commit so that the code is consistently formatted:

terraform fmt

Manually mark a resource for re-creation on the next apply:

terraform taint <resource_address>

Download and install the modules referenced in the configuration. Run it again whenever you update a module:

terraform get

Create a visual dependency graph of resources.

terraform graph

Configure remote state storage (note: this legacy command has been removed from modern Terraform, where remote state is configured with a backend block):

terraform remote

Use this command for advanced state management.

terraform state

Conclusion

You have learned the most common Terraform commands. I hope this cheat sheet is helpful. Thank you for reading the DevopsRoles page!

Docker compose run Grafana 8

Introduction

In this tutorial, you will learn how to run Grafana version 8 using Docker Compose.

Docker compose run Grafana

Grafana Docker container

I will create folders for Grafana on the Ubuntu host OS.

sudo mkdir -p /home/huupv/docker/grafana/data
sudo mkdir -p /home/huupv/docker/compose-files/Grafana
cd /home/huupv/docker/compose-files/Grafana

Create a new docker-compose.yml file with the following content:

version: "3.5"

services:
  grafana8:
    image: grafana/grafana:latest
    network_mode: "bridge"
    container_name: grafana8
    # user: "1000" # needs to be `id -u` // alternatively chown the grafana/data dir to 472:472
    volumes:
      - /home/huupv/docker/grafana/data:/var/lib/grafana
    ports:
      - "3000:3000"
    restart: always

Change the port mapping if you want.

Deploy Grafana

sudo docker-compose up -d

Open the following URL in your browser: http://yourIP:3000

To log in for the first time, use the admin:admin username and password combination.

Login successful

Fixing a Grafana startup error

The Grafana container kept restarting with the following error:

huupv@ubuntu:~/docker/compose-files/grafana$ sudo docker ps
CONTAINER ID   IMAGE                    COMMAND                  CREATED          STATUS                         PORTS                                                                                        NAMES
4e1f6f81da09   grafana/grafana:latest   "/run.sh"                27 seconds ago   Restarting (1) 3 seconds ago                                                                                                grafana8
304a31a535c5   telegraf:latest          "/entrypoint.sh tele…"   3 hours ago      Up 2 hours                     0.0.0.0:8092->8092/tcp, 8092/udp, 0.0.0.0:8094->8094/tcp, 8125/udp, 0.0.0.0:8125->8125/tcp   telegraf
5909e05dd5bd   influxdb:latest          "/entrypoint.sh infl…"   3 hours ago      Up 2 hours                     0.0.0.0:8086->8086/tcp                                                                       influxdb2
huupv@ubuntu:~/docker/compose-files/grafana$ sudo docker logs grafana8
GF_PATHS_DATA='/var/lib/grafana' is not writable.
You may have issues with file permissions, more information here: http://docs.grafana.org/installation/docker/#migrate-to-v51-or-later
mkdir: can't create directory '/var/lib/grafana/plugins': Permission denied

Fixed it

huupv@ubuntu:~/docker/compose-files/grafana$ id -u
1001

# user: "1000" # needs to be `id -u` // alternatively chown the grafana/data dir to 472:472


huupv@ubuntu:~/docker/compose-files/grafana$ sudo chown 472:472 /home/huupv/docker/grafana/data

After fixing the ownership, restart the container and check the result:

huupv@ubuntu:~/docker/compose-files/grafana$ sudo docker ps
CONTAINER ID   IMAGE                    COMMAND                  CREATED         STATUS                          PORTS                                                                                        NAMES
4e1f6f81da09   grafana/grafana:latest   "/run.sh"                5 minutes ago   Restarting (1) 50 seconds ago                                                                                                grafana8
304a31a535c5   telegraf:latest          "/entrypoint.sh tele…"   3 hours ago     Up 2 hours                      0.0.0.0:8092->8092/tcp, 8092/udp, 0.0.0.0:8094->8094/tcp, 8125/udp, 0.0.0.0:8125->8125/tcp   telegraf
5909e05dd5bd   influxdb:latest          "/entrypoint.sh infl…"   3 hours ago     Up 2 hours                      0.0.0.0:8086->8086/tcp                                                                       influxdb2
huupv@ubuntu:~/docker/compose-files/grafana$ sudo docker restart grafana8
grafana8

huupv@ubuntu:~/docker/compose-files/grafana$ ls -ld  /home/huupv/docker/grafana/data
drwxrwxr-x 2 472 472 4096 Aug  8 21:23 /home/huupv/docker/grafana/data

huupv@ubuntu:~$ sudo docker ps
CONTAINER ID   IMAGE                    COMMAND                  CREATED       STATUS       PORTS                                                                                        NAMES
c4d40e897c36   grafana/grafana:latest   "/run.sh"                2 hours ago   Up 2 hours   0.0.0.0:3000->3000/tcp                                                                       grafana8
304a31a535c5   telegraf:latest          "/entrypoint.sh tele…"   5 hours ago   Up 4 hours   0.0.0.0:8092->8092/tcp, 8092/udp, 0.0.0.0:8094->8094/tcp, 8125/udp, 0.0.0.0:8125->8125/tcp   telegraf
5909e05dd5bd   influxdb:latest          "/entrypoint.sh infl…"   5 hours ago   Up 4 hours   0.0.0.0:8086->8086/tcp                                                                       influxdb2
huupv@ubuntu:~$
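As an alternative to chowning the data directory, you can run the container as your own host UID by setting the commented user line. A sketch of the relevant part of docker-compose.yml, assuming `id -u` returned 1001 as above:

```yaml
services:
  grafana8:
    image: grafana/grafana:latest
    user: "1001"   # the output of `id -u` on the host
    volumes:
      - /home/huupv/docker/grafana/data:/var/lib/grafana
    ports:
      - "3000:3000"
```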

Conclusion

Using Docker Compose, you’ve successfully launched Grafana 8. This tutorial aims to be a helpful resource for your monitoring setup. Thank you for choosing DevopsRoles for guidance. We appreciate your engagement and are here to support your continued learning in Docker and software deployment.

Docker container image for rocky Linux

Introduction

In this tutorial, you will learn how to use Rocky Linux as a Docker container image: how to get it and how to deploy it.

Rocky Linux is intended to replace CentOS.

You will need a VM with the Docker engine installed and running.

Docker container image for rocky Linux

Pull down the Rocky Linux image

Open a terminal window and run the command below:

docker pull rockylinux/rockylinux

The Docker image will be saved to your local repository for later use. You can verify it:

docker images

Create a container from the Rocky Linux image

Create a container named devopsroles and deploy it in detached mode with the command below:

docker run -it --name devopsroles -d rockylinux/rockylinux

Access the devopsroles container:

docker exec -it --user root devopsroles /bin/bash

You can destroy the Rocky Linux container with the commands below:

docker ps -a
docker stop Container_ID
docker rm Container_ID

Conclusion

You have used Rocky Linux as a Docker container image. I hope this was helpful. Thank you for reading the DevopsRoles page!

Terraform AWS create VPC: A Comprehensive Step-by-Step Guide

Introduction

In this tutorial, you will learn how to create an AWS VPC using Terraform, with a complete working example. Embark on a journey to mastering cloud infrastructure with our detailed tutorial on creating a Virtual Private Cloud (VPC) in AWS using Terraform.

This guide is tailored for DevOps professionals and enthusiasts eager to leverage the scalability and efficiency of AWS. By the end of this tutorial, you will have a clear understanding of how to set up a VPC that aligns perfectly with your organizational needs, ensuring a secure and robust network environment.

Terraform AWS create VPC example

The folder and file structure for the AWS VPC is as follows:

E:\STUDY\TERRAFORM\AWS
└───EC2
        main.tf
        outputs.tf
        variables.tf

The terraform init command is used to initialize a working directory containing Terraform configuration files:

terraform init

The output is shown in the picture below.

Use the terraform plan command to create an execution plan.

terraform plan

The output is shown in the picture below.

Use terraform apply to apply the plan.

terraform apply

The output is shown in the picture below.

The result in the AWS VPC console

The content of the Terraform AWS VPC files

main.tf file

# dry-run
# terraform plan
# Apply it
# terraform apply
provider "aws" {
    access_key = "${var.aws_access_key}"
    secret_key = "${var.aws_secret_key}"
    region     = "${var.region}"
}
# describe the VPC resource.
resource "aws_vpc" "myVPC" {
    cidr_block = "10.1.0.0/16"
    instance_tenancy = "default"
    enable_dns_support = "true"
    enable_dns_hostnames = "false"
    tags = {
      Name = "myVPC"
    }
}

variables.tf file

variable "aws_access_key" {}
variable "aws_secret_key" {}
variable "region" {
    default = "us-west-2"
}

outputs.tf file

output "test" {
    value = "${aws_vpc.myVPC.cidr_block}"
}
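Since aws_access_key and aws_secret_key have no defaults, terraform plan will prompt for them unless you provide values, for example in a terraform.tfvars file next to main.tf (a sketch with placeholder values; keep this file out of version control):

```hcl
aws_access_key = "AXXXXXXXXXXXXXXXXXXXXX"
aws_secret_key = "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
```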

Conclusion

Congratulations on successfully creating your AWS VPC using Terraform! This guide aimed to simplify the complexities of cloud networking, providing you with a solid foundation to build upon. As you continue to explore Terraform and AWS, remember that the flexibility and power of these tools can significantly enhance your infrastructure’s reliability and performance.

Keep experimenting and refining your skills to stay ahead in the ever-evolving world of cloud computing. I hope this was helpful. Thank you for reading the DevopsRoles page!

Install Nginx Ingress Controller using Helm Chart

Introduction

In this tutorial, you will learn how to install the Nginx Ingress Controller using a Helm chart. I want to expose pods using an Ingress controller, and I will use the Nginx ingress controller, which sets up an AWS ELB.

In a production environment, you would typically use an Istio gateway or Traefik instead.

Create Nginx Ingress Controller using Helm Chart

kubectl create namespace nginx-ingress-controller

(Note: the helm install below deploys into the default namespace, as its output shows; add -n nginx-ingress-controller to the helm command if you want to use this namespace instead.)

Add the ingress-nginx Helm repository:

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo add stable https://charts.helm.sh/stable
helm repo update

Install the Nginx ingress controller:

helm install nginx-ingress-controller ingress-nginx/ingress-nginx

The output is as below:

$ helm install nginx-ingress-controller ingress-nginx/ingress-nginx
NAME: nginx-ingress-controller
LAST DEPLOYED: Sat Jun 26 16:11:27 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The ingress-nginx controller has been installed.
It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl --namespace default get services -o wide -w nginx-ingress-controller-ingress-nginx-controller'

An example Ingress that makes use of the controller:

  apiVersion: networking.k8s.io/v1beta1
  kind: Ingress
  metadata:
    annotations:
      kubernetes.io/ingress.class: nginx
    name: example
    namespace: foo
  spec:
    rules:
      - host: www.example.com
        http:
          paths:
            - backend:
                serviceName: exampleService
                servicePort: 80
              path: /
    # This section is only required if TLS is to be enabled for the Ingress
    tls:
        - hosts:
            - www.example.com
          secretName: example-tls

If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:

  apiVersion: v1
  kind: Secret
  metadata:
    name: example-tls
    namespace: foo
  data:
    tls.crt: <base64 encoded cert>
    tls.key: <base64 encoded key>
  type: kubernetes.io/tls

Check that the controller pod, service, and deployment have been created:

kubectl get pod,svc,deploy

# The output is as below
$ kubectl get pod,svc,deploy
NAME                                                                  READY   STATUS    RESTARTS   AGE
pod/guestbook-tnc42                                                   1/1     Running   0          38m
pod/guestbook-vqgws                                                   1/1     Running   0          38m
pod/guestbook-vqnxf                                                   1/1     Running   0          38m
pod/nginx-ingress-controller-ingress-nginx-controller-7f8f65bf4g6c7   1/1     Running   0          4m23s
pod/redis-master-dp7h7                                                1/1     Running   0          41m
pod/redis-slave-54mt6                                                 1/1     Running   0          39m
pod/redis-slave-8g8h4                                                 1/1     Running   0          39m

NAME                                                                  TYPE           CLUSTER-IP       EXTERNAL-IP                                                              PORT(S)                      AGE
service/guestbook                                                     LoadBalancer   10.100.231.216   aff3d414c479f4faaa2ab82062c87fe5-264485147.us-west-2.elb.amazonaws.com   3000:32767/TCP               38m
service/kubernetes                                                    ClusterIP      10.100.0.1       <none>                                                                   443/TCP                      88m
service/nginx-ingress-controller-ingress-nginx-controller             LoadBalancer   10.100.57.204    a04b31985b6c64607a50d794ef692c57-291207293.us-west-2.elb.amazonaws.com   80:31382/TCP,443:32520/TCP   4m25s
service/nginx-ingress-controller-ingress-nginx-controller-admission   ClusterIP      10.100.216.28    <none>                                                                   443/TCP                      4m25s
service/redis-master                                                  ClusterIP      10.100.76.16     <none>                                                                   6379/TCP                     40m
service/redis-slave                                                   ClusterIP      10.100.126.163   <none>                                                                   6379/TCP                     39m

NAME                                                                READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-ingress-controller-ingress-nginx-controller   1/1     1            1           4m25s

The TYPE of the nginx-ingress-controller-ingress-nginx-controller service is LoadBalancer; this is an L4 load balancer. Inside the cluster, the nginx-ingress-controller pod runs Nginx to do L7 load balancing within EKS.

At this point, accessing the ELB address returns a 404, as shown below, because no Ingress resource exists yet.

Create Ingress resource for L7 load balancing

For example, create an ingress_example.yaml file:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
    annotations:
      kubernetes.io/ingress.class: nginx
    name: guestbook
    namespace: default
spec:
    rules:
      - http:
          paths:
            - backend:
                serviceName: guestbook
                servicePort: 3000 
              path: /
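Note that extensions/v1beta1 is no longer served on Kubernetes 1.22 and later. On a recent cluster, the equivalent manifest in networking.k8s.io/v1 would look roughly like this (a sketch, not the version used in the output above):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: guestbook
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: guestbook
                port:
                  number: 3000
```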

Apply it

kubectl apply -f ingress_example.yaml

Get the public DNS name of the AWS ELB created by the Nginx ingress controller service:

kubectl get svc nginx-ingress-controller-ingress-nginx-controller | awk '{ print $4 }' | tail -1

The output is as below:

a04b31985b6c64607a50d794ef692c57-291207293.us-west-2.elb.amazonaws.com
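The awk/tail pipeline simply prints the fourth column of the last output line; here is a self-contained illustration using fake kubectl-style output:

```shell
# Two lines mimicking `kubectl get svc`: a header and one service row.
printf '%s\n%s\n' \
  'NAME      TYPE          CLUSTER-IP     EXTERNAL-IP       PORT(S)' \
  'my-svc    LoadBalancer  10.100.57.204  elb.example.com   80:31382/TCP' |
  awk '{ print $4 }' | tail -1
# elb.example.com
```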

Access the link in a browser

Delete the guestbook service

$ kubectl delete svc guestbook
service "guestbook" deleted

Conclusion

You have installed the Nginx Ingress Controller using a Helm chart. I hope this was helpful. Thank you for reading the DevopsRoles page!

Create Docker Image of a .NET Web API

Introduction

Docker helps you to run software projects without the need to set up complex development environments.

In this tutorial, you will learn how to create a Docker image of a .NET Web API. You can use the Docker image to run the backend on any PC that has Docker installed, and have a front-end web project interact with the API.

Create a .NET Web API

I will create a project named “exdockerapi” using the dotnet CLI with the following command:

dotnet new webapi -o exdockerapi

If you don’t have .NET 5 installed on your PC, you can download it here.

Now, you can go into the newly created exdockerapi project and run the web API.

cd exdockerapi
dotnet run

By default, the application listens on port 5001.

Create Docker Image of a .NET Web API

I will create a new file named Dockerfile with the following command:

touch Dockerfile

Now, copy and paste the code below into the Dockerfile.

FROM mcr.microsoft.com/dotnet/aspnet:5.0-focal AS base
WORKDIR /app
EXPOSE 80
FROM mcr.microsoft.com/dotnet/sdk:5.0-focal AS build
WORKDIR /src
COPY ["exdockerapi.csproj", "./"]
RUN dotnet restore "./exdockerapi.csproj"
COPY . .
WORKDIR "/src/."
RUN dotnet build "exdockerapi.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "exdockerapi.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "exdockerapi.dll"]
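Because the COPY . . step sends the whole project directory to the Docker daemon, it is common (though not part of the original tutorial) to add a .dockerignore file next to the Dockerfile so that local build artifacts stay out of the build context:

```
bin/
obj/
```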

Building the Docker image

Simply build the Docker image from the Dockerfile with the following command:

docker build -t dockerwebapiex -f Dockerfile .

Check that the dockerwebapiex image exists with the following command:

docker images | grep dockerwebapiex

Run the Docker Image

Use the following command.

docker run -ti --rm -p 8080:80 dockerwebapiex

  • The -ti option runs the container with an interactive terminal.
  • The --rm option removes the container immediately after it exits.

Conclusion

You have created a Docker image of a .NET Web API. I hope this was helpful. Thank you for reading the DevopsRoles page!

Terraform build EC2 instance

Introduction

In this tutorial, you will learn how to build a simple environment with one EC2 instance on AWS: Terraform build EC2 instance. This time, I created the following resources.

  • VPC
  • Internet Gateway
  • Subnet
  • Route Table
  • Security Group
  • EC2

My Environment for Terraform build EC2 instance

  • OS: Windows
  • Terraform

To install Terraform, refer to the following.

If you are on Windows, you can install it as follows.

choco install terraform
terraform -help

Create a template file

First of all, create a subdirectory and a Terraform template file in it. The name of the template file is arbitrary, but the extension is *.tf.

$ mkdir terraform-aws
$ cd terraform-aws
$ touch main.tf

Terraform Provider settings

We use the AWS provider settings. Terraform supports multiple providers.

provider "aws" {
    access_key = "ACCESS_KEY_HERE"
    secret_key = "SECRET_KEY_HERE"
    region = "us-west-2"
}

Credential information

Use of Terraform variables

variable "access_key" {}
variable "secret_key" {}

provider "aws" {
    access_key = "${var.access_key}"
    secret_key = "${var.secret_key}"
    region = "us-west-2"
}

Assigning a value to a variable

There are three ways to assign a value to a variable.

1. Terraform command

$ terraform apply \
-var 'access_key=AXXXXXXXXXXXXXXXXXXXXXX' \
-var 'secret_key=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'

2. Value in an environment variable

$ export TF_VAR_access_key="AXXXXXXXXXXXXXXXXXXXXX"
$ export TF_VAR_secret_key="XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"

3. Pass the value in a file

For example, the content of the terraform.tfvars file:

aws_access_key = "AXXXXXXXXXXXXXXXXXXXXX"
aws_secret_key = "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"

How to set a default value for a variable

For example, we can set default values for variables.

variable "aws_access_key" {}
variable "aws_secret_key" {}
variable "region" {
    default = "us-west-2"
}

provider "aws" {
    access_key = "${var.aws_access_key}"
    secret_key = "${var.aws_secret_key}"
    region = "${var.region}"
}

Provider: AWS –Terraform by HashiCorp

Terraform Resource settings.

In Terraform, resource types are predefined with the aws_* prefix: for example, a VPC is aws_vpc and an EC2 instance is aws_instance. Each AWS resource is configured in the format name = value. For example, the VPC settings:

resource "aws_vpc" "myVPC" {
    cidr_block = "10.1.0.0/16"
    instance_tenancy = "default"
    enable_dns_support = "true"
    enable_dns_hostnames = "false"
    tags = {
      Name = "myVPC"
    }
}

Referencing other resources

Internet Gateway settings.

resource "aws_vpc" "myVPC" {
    cidr_block = "10.1.0.0/16"
    instance_tenancy = "default"
    enable_dns_support = "true"
    enable_dns_hostnames = "false"
    tags = {
      Name = "myVPC"
    }
}

resource "aws_internet_gateway" "myGW" {
    vpc_id = "${aws_vpc.myVPC.id}"
}

Dependencies between resources

For example, set up a dependency between the VPC and Internet Gateway.

resource "aws_vpc" "myVPC" {
    cidr_block = "10.1.0.0/16"
    instance_tenancy = "default"
    enable_dns_support = "true"
    enable_dns_hostnames = "false"
    tags = {
      Name = "myVPC"
    }
}

resource "aws_internet_gateway" "myGW" {
    vpc_id = "${aws_vpc.myVPC.id}"
    depends_on = ["aws_vpc.myVPC"]
}

We mentioned above how to set the default value for a variable. We can also use a map as follows:

variable "images" {
    default = {
        us-east-1 = "ami-1ecae776"
        us-west-2 = "ami-e7527ed7"
        us-west-1 = "ami-d114f295"
    }
}

The values of the map variable are referenced as var.images["us-east-1"].
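With the pre-0.12 interpolation syntax used in this article, you can also pick the AMI for the configured region with the lookup function (a sketch, reusing the region variable defined earlier):

```hcl
resource "aws_instance" "aws-test" {
    ami           = "${lookup(var.images, var.region)}"
    instance_type = "t2.micro"
}
```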

Output on the console

output "public_ip_of_aws-test" {
  value = "${aws_instance.aws-test.public_ip}"
}

Terraform build EC2 instance summary

variable "aws_access_key" {}
variable "aws_secret_key" {}
variable "region" {
    default = "us-west-2"
}

variable "images" {
    default = {
        us-east-1 = "ami-1ecae776"
        us-west-2 = "ami-e7527ed7"
        us-west-1 = "ami-d114f295"
    }
}

provider "aws" {
    access_key = "${var.aws_access_key}"
    secret_key = "${var.aws_secret_key}"
    region = "${var.region}"
}

resource "aws_vpc" "myVPC" {
    cidr_block = "10.1.0.0/16"
    instance_tenancy = "default"
    enable_dns_support = "true"
    enable_dns_hostnames = "false"
    tags = {
      Name = "myVPC"
    }
}

resource "aws_internet_gateway" "myGW" {
    vpc_id = "${aws_vpc.myVPC.id}"
}

resource "aws_subnet" "public-a" {
    vpc_id = "${aws_vpc.myVPC.id}"
    cidr_block = "10.1.1.0/24"
    availability_zone = "us-west-2a"
}

resource "aws_route_table" "public-route" {
    vpc_id = "${aws_vpc.myVPC.id}"
    route {
        cidr_block = "0.0.0.0/0"
        gateway_id = "${aws_internet_gateway.myGW.id}"
    }
}

resource "aws_route_table_association" "public-a" {
    subnet_id = "${aws_subnet.public-a.id}"
    route_table_id = "${aws_route_table.public-route.id}"
}

resource "aws_security_group" "admin" {
    name = "admin"
    description = "Allow SSH inbound traffic"
    vpc_id = "${aws_vpc.myVPC.id}"
    ingress {
        from_port = 22
        to_port = 22
        protocol = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
    }
    egress {
        from_port = 0
        to_port = 0
        protocol = "-1"
        cidr_blocks = ["0.0.0.0/0"]
    }
}

resource "aws_instance" "aws-test" {
    ami = "${var.images["us-west-2"]}"
    instance_type = "t2.micro"
    key_name = "aws.devopsroles.com"
    vpc_security_group_ids = [
      "${aws_security_group.admin.id}"
    ]
    subnet_id = "${aws_subnet.public-a.id}"
    associate_public_ip_address = "true"
    root_block_device {
      volume_type = "gp2"
      volume_size = "20"
    }
    ebs_block_device {
      device_name = "/dev/sdf"
      volume_type = "gp2"
      volume_size = "100"
    }
    tags = {
        Name = "aws-test"
    }
}

output "public_ip_of_aws-test" {
  value = "${aws_instance.aws-test.public_ip}"
}

Dry-Run Terraform command

$ terraform plan

The terraform plan command will check for syntax errors and errors in the parameters set in each block, but it will not check whether the parameter values themselves are correct.

Applying a template

Let’s apply the template and create the resources on AWS.

$ terraform apply

Use terraform show to display the current state:

$ terraform show

Resource changes

  • Add or change content in the main.tf file.
  • Use terraform plan to check the execution plan. Resources marked with “-/+” will be deleted and recreated, because the changed attribute forces replacement.
  • Run the terraform apply command to apply the changes.

Delete resource

The terraform destroy command deletes the set of resources in the template. With terraform plan -destroy you can review the execution plan for the deletion.

$ terraform plan -destroy
$ terraform destroy

How to split the template file

So far, the settings are all together in one template file, main.tf.

They can be divided into 3 files as below.

main.tf

provider "aws" {
    access_key = "${var.aws_access_key}"
    secret_key = "${var.aws_secret_key}"
    region = "${var.region}"
}

## Describe the definition of the resource
resource "aws_vpc" "myVPC" {
    cidr_block = "10.1.0.0/16"
    instance_tenancy = "default"
    enable_dns_support = "true"
    enable_dns_hostnames = "false"
    tags = {
      Name = "myVPC"
    }
}

...

variables.tf

variable "aws_access_key" {}
variable "aws_secret_key" {}
variable "region" {
    default = "us-west-2"
}

variable "images" {
    default = {
        us-east-1 = "ami-1ecae776"
        us-west-2 = "ami-e7527ed7"
        us-west-1 = "ami-d114f295"
    }
}

outputs.tf

output "public ip of aws-test" {
  value = "${aws_instance.aws-test.public_ip}"
}
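Terraform automatically merges every *.tf file in the working directory into one configuration, so the split needs no extra wiring. A minimal sketch of the resulting layout (directory name is an example; file contents elided):

```shell
# Terraform loads all *.tf files in a directory as one configuration,
# so splitting main.tf requires no additional setup.
mkdir -p terraform-demo && cd terraform-demo
touch main.tf variables.tf outputs.tf   # contents as shown above
ls
```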

Conclusion

You now know how to use Terraform to build an EC2 instance. I hope this was helpful. Thank you for reading the DevopsRoles page!

How to deploy example application to Kubernetes

Introduction

Dive into the world of Kubernetes with our easy-to-follow guide on deploying an example application to Kubernetes. Whether you’re a beginner or an experienced developer, this tutorial will demonstrate how to efficiently deploy and manage applications in a Kubernetes environment.

Learn how to set up your application with Kubernetes pods, services, and external load balancers to ensure scalable and resilient deployment.

Step-by-step: deploying an example application to Kubernetes

Frontend application

  • load balanced by a public ELB
  • read requests are load balanced across multiple slaves
  • write requests go to a single master

Backend example: Redis

  • a single master (writes)
  • multiple slaves (reads)
  • slaves sync continuously from the master

Deploy Redis Master and Redis Slave

For Redis Master

kubectl apply -f https://raw.githubusercontent.com/kubernetes/examples/master/guestbook-go/redis-master-controller.json
kubectl apply -f https://raw.githubusercontent.com/kubernetes/examples/master/guestbook-go/redis-master-service.json

For Redis Slave

kubectl apply -f https://raw.githubusercontent.com/kubernetes/examples/master/guestbook-go/redis-slave-controller.json
kubectl apply -f https://raw.githubusercontent.com/kubernetes/examples/master/guestbook-go/redis-slave-service.json

Deploy frontend app

kubectl apply -f https://raw.githubusercontent.com/kubernetes/examples/master/guestbook-go/guestbook-controller.json
kubectl apply -f https://raw.githubusercontent.com/kubernetes/examples/master/guestbook-go/guestbook-service.json

As a result, the redis-master, redis-slave, and guestbook replication controllers are created:

kubectl get replicationcontroller

The output terminal

$ kubectl get replicationcontroller
NAME           DESIRED   CURRENT   READY   AGE
guestbook      3         3         3       2m13s
redis-master   1         1         1       4m32s
redis-slave    2         2         2       3m10s

Get services and pods

kubectl get pod,service

The output terminal

$ kubectl get pod,service
NAME                     READY   STATUS    RESTARTS   AGE
pod/guestbook-tnc42      1/1     Running   0          2m33s
pod/guestbook-vqgws      1/1     Running   0          2m33s
pod/guestbook-vqnxf      1/1     Running   0          2m33s
pod/redis-master-dp7h7   1/1     Running   0          4m52s
pod/redis-slave-54mt6    1/1     Running   0          3m30s
pod/redis-slave-8g8h4    1/1     Running   0          3m30s

NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP                                                              PORT(S)          AGE
service/guestbook      LoadBalancer   10.100.231.216   aff3d414c479f4faaa2ab82062c87fe5-264485147.us-west-2.elb.amazonaws.com   3000:32767/TCP   2m25s
service/kubernetes     ClusterIP      10.100.0.1       <none>                                                                   443/TCP          52m
service/redis-master   ClusterIP      10.100.76.16     <none>                                                                   6379/TCP         4m24s
service/redis-slave    ClusterIP      10.100.126.163   <none>                                                                   6379/TCP         3m25s

Show external ELB DNS

echo $(kubectl get svc guestbook | awk '{ print $4 }' | tail -1):$(kubectl get svc guestbook | awk '{ print $5 }' | tail -1 | cut -d ":" -f 1)

The output terminal

$ echo $(kubectl get svc guestbook | awk '{ print $4 }' | tail -1):$(kubectl get svc guestbook | awk '{ print $5 }' | tail -1 | cut -d ":" -f 1)
aff3d414c479f4faaa2ab82062c87fe5-264485147.us-west-2.elb.amazonaws.com:3000
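The command chains awk and cut to pull the EXTERNAL-IP column and the first half of the PORT(S) column (cut -d ":" -f 1). A sketch of that field extraction, run against a hypothetical sample row instead of a live cluster:

```shell
# Hypothetical row from `kubectl get svc guestbook`
# (columns: NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE)
line="guestbook   LoadBalancer   10.100.231.216   my-elb.us-west-2.elb.amazonaws.com   3000:32767/TCP   2m25s"
host=$(printf '%s\n' "$line" | awk '{ print $4 }')                      # EXTERNAL-IP column
port=$(printf '%s\n' "$line" | awk '{ print $5 }' | cut -d ":" -f 1)    # service port before the NodePort
echo "$host:$port"   # prints my-elb.us-west-2.elb.amazonaws.com:3000
```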

Note: the ELB takes 3-5 minutes to become ready.

Conclusion

You have now deployed an example application to Kubernetes. We will expose these pods with Ingress in a later post.

Congratulations on taking your first steps towards mastering application deployment with Kubernetes! By following this tutorial, you’ve gained valuable insights into the intricacies of Kubernetes architecture and how to leverage it for effective application management. Continue to explore and expand your skills, and don’t hesitate to revisit this guide or explore further topics on our website to enhance your Kubernetes expertise.

I hope this was helpful. Thank you for reading the DevopsRoles page!

How to install dashboard Kubernetes

In this tutorial, you will learn how to install Metrics Server and the Kubernetes Dashboard.

Install Metrics Server

Metrics Server is a scalable, efficient source of container resource metrics for Kubernetes built-in autoscaling pipelines.

Prerequisites

You have created an Amazon EKS cluster, for example by following the EKS getting-started steps.

Install Metrics Server

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.6/components.yaml

#The output
E:\Study\cka\devopsroles> kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.6/components.yaml
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
serviceaccount/metrics-server created
deployment.apps/metrics-server created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created

Check that the metrics-server Deployment is running

kubectl get deployment metrics-server -n kube-system

The output terminal

E:\Study\cka\devopsroles>kubectl get deployment metrics-server -n kube-system
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
metrics-server   1/1     1            1           19s

Install the Kubernetes Dashboard

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml

The output shows the resources created in the kubernetes-dashboard namespace:

E:\Study\cka\devopsroles>kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

Check the namespaces

kubectl get namespace
kubectl get pod,deployment,svc -n kubernetes-dashboard

The output terminal

E:\Study\cka\devopsroles>kubectl get namespace
NAME                   STATUS   AGE
default                Active   12m
kube-node-lease        Active   12m
kube-public            Active   12m
kube-system            Active   12m
kubernetes-dashboard   Active   73s

E:\Study\cka\devopsroles>kubectl get pod,deployment,svc -n kubernetes-dashboard
NAME                                             READY   STATUS    RESTARTS   AGE
pod/dashboard-metrics-scraper-76585494d8-hbv6t   1/1     Running   0          60s
pod/kubernetes-dashboard-5996555fd8-qhcxz        1/1     Running   0          64s

NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/dashboard-metrics-scraper   1/1     1            1           60s
deployment.apps/kubernetes-dashboard        1/1     1            1           64s

NAME                                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/dashboard-metrics-scraper   ClusterIP   10.100.242.231   <none>        8000/TCP   63s
service/kubernetes-dashboard        ClusterIP   10.100.135.44    <none>        443/TCP    78s

Get a token for the dashboard

export AWS_PROFILE=devopsroles-demo
kubectl describe secret $(kubectl get secret -n kubernetes-dashboard | grep kubernetes-dashboard-token | awk '{ print $1 }') -n kubernetes-dashboard

The output terminal token

$  kubectl describe secret $(kubectl get secret -n kubernetes-dashboard | grep kubernetes-dashboard-token | awk '{ print $1 }') -n kubernetes-dashboard
Name:         kubernetes-dashboard-token-grg5n
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: kubernetes-dashboard
              kubernetes.io/service-account.uid: 6a5c693b-892c-4564-9a79-20e0995bc012

Type:  kubernetes.io/service-account-token

Data
====
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6InVXaXBWU1BGWnp2SFRPaV80c1A1azlhYXA0ZkpyUDhkSlF1MTlkOFNHRG8ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1ncmc1biIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjZhNWM2OTNiLTg5MmMtNDU2NC05YTc5LTIwZTA5OTViYzAxMiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.XoYdQMM256sBLVc8x0zcsHn7RXSe03EWZKiNhHxwhCOrFQKPjyuKb__Myw0KNZj-C3Jo2PpvdE9-kcYgKYAPaOg3BWmseTw6cbveGzIs3tTrubgGsTNvxK9M_RGnrSxot_Ajul7-pTAHTaqSELM3CvctbrBRl39drEcIBajbzyHhlWHpkaeFPS2YG3Ct6jLhcm4saZgAV6WxJyb3cAyRdOhuSdM2dLRDJ4LhQTy8wPGHiwY7MWUE-ybi--ehaEExXiBFp2sTUVC8RQQSHoDhb8RCqh9aR9q1AwHEKze6T0lZMLH93RCb3IWEie950aOV7auQntJkeOfLKcE86Uzigw
ca.crt:     1025 bytes
namespace:  20 bytes
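The inner command chains grep and awk to pick out the name of the secret that starts with kubernetes-dashboard-token. A sketch of that extraction, applied to a hypothetical sample of `kubectl get secret` output rather than a live cluster:

```shell
# Hypothetical two-row sample of `kubectl get secret -n kubernetes-dashboard` output
secrets="default-token-abcde                kubernetes.io/service-account-token   3   5m
kubernetes-dashboard-token-grg5n   kubernetes.io/service-account-token   3   5m"
# grep keeps the matching row, awk prints its first column (the secret name)
name=$(printf '%s\n' "$secrets" | grep kubernetes-dashboard-token | awk '{ print $1 }')
echo "$name"   # prints kubernetes-dashboard-token-grg5n
```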

Create a secure channel from local to API server in Kubernetes cluster

kubectl proxy

Access this URL from a browser:

http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
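This URL follows the apiserver proxy pattern /api/v1/namespaces/&lt;namespace&gt;/services/&lt;scheme&gt;:&lt;service&gt;:&lt;port&gt;/proxy/ (the port is left empty here, so the service's default port is used). A sketch that builds the same URL from its parts:

```shell
# Assemble the kubectl proxy URL for the dashboard service from its components.
namespace="kubernetes-dashboard"
service="https:kubernetes-dashboard:"   # <scheme>:<name>:<port>, port left empty
url="http://localhost:8001/api/v1/namespaces/${namespace}/services/${service}/proxy/"
echo "$url"
```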

Enter the token. The result is shown in the picture below.

Create RBAC rules to control which metrics are visible

Create a file eks-admin-service-account.yaml with the following content:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: eks-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: eks-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin # this is the cluster admin role
subjects:
- kind: ServiceAccount
  name: eks-admin
  namespace: kube-system

Apply it

kubectl apply -f eks-admin-service-account.yaml

The output terminal

E:\Study\cka\devopsroles>kubectl apply -f eks-admin-service-account.yaml
serviceaccount/eks-admin created
clusterrolebinding.rbac.authorization.k8s.io/eks-admin created

Check that it was created in the kube-system namespace:

E:\Study\cka\devopsroles> kubectl get serviceaccount -n kube-system | grep eks-admin
eks-admin                            1         67s

Get a token from the eks-admin service account

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep eks-admin | awk '{print $1}')

The output terminal token

$ kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep eks-admin | awk '{print $1}')
Name:         eks-admin-token-k2pxz
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: eks-admin
              kubernetes.io/service-account.uid: b7005b72-97fc-488b-951d-ac1eaa44dfee

Type:  kubernetes.io/service-account-token

Data
====
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6InVXaXBWU1BGWnp2SFRPaV80c1A1azlhYXA0ZkpyUDhkSlF1MTlkOFNHRG8ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJla3MtYWRtaW4tdG9rZW4tazJweHoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZWtzLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiYjcwMDViNzItOTdmYy00ODhiLTk1MWQtYWMxZWFhNDRkZmVlIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmVrcy1hZG1pbiJ9.TA-nf-jlyVcUWcvoq7ixMRpZVgsgZ7sBWzTywjMF010bvN4b7p5V5PbMVmNldUylPZ4FQniQY9Xs-quQDytI2TH8zRFdZDFMccEx_E1vzh1CjizKjqbc233N3JYD7xJPTP1U_xq9inaZ3uXvedCtJBzDXVqjjY9eLKcDfuU_I-LLvPEmzh6Sk0FNDzyngAddD_COs2F8q2KbEUOVL6oHr2ZpgbXgxumM_NG_EAKHQx00iXl7Kr_z7wZZSAsvb24ZwQPOShHn_e49C0SvEQKq_TVMG21xxsn0wLIDfH_PpQd1m_vKHbOyyT_yMs_fQnZXnfq_VK7xEhjU3NvdsolMWg
ca.crt:     1025 bytes
namespace:  11 bytes

Create a secure channel from local to API server in Kubernetes cluster

kubectl proxy

Access this URL from a browser:

http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

Enter the token as shown in the picture below.

The result: the Kubernetes dashboard is installed.

Uninstall Dashboard

kubectl delete -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml
kubectl delete -f eks-admin-service-account.yaml

Conclusion

You have now installed the Kubernetes dashboard. I hope this was helpful. Thank you for reading the DevopsRoles page!
