Category Archives: Kubernetes

Learn Kubernetes with DevOpsRoles.com. Access comprehensive guides and tutorials to orchestrate containerized applications and streamline your DevOps processes with Kubernetes.

Setup Kubernetes Cluster with K3s

Introduction

In this tutorial, you will learn how to set up a Kubernetes cluster with K3s. K3s is a lightweight Kubernetes distribution developed by Rancher. It consumes fewer resources than traditional distributions and makes a Kubernetes cluster easy to set up and manage.

To set up a Kubernetes cluster using K3s, you can follow these steps:

Setup Kubernetes Cluster with K3s

First, you need a Linux machine.

Start by provisioning the servers that will be part of your Kubernetes cluster.

You can use virtual machines or bare-metal servers. Ensure that the servers have a compatible operating system (such as Ubuntu, CentOS, or RHEL).

Download the K3s binary

Download link here, or use the wget or curl command to download it.

wget https://github.com/k3s-io/k3s/releases/download/v1.23.5%2Bk3s1/k3s

Make the binary executable:

chmod +x k3s

The terminal output is shown below:

vagrant@controller:~$ wget https://github.com/k3s-io/k3s/releases/download/v1.23.5%2Bk3s1/k3s
--2022-06-05 08:49:28--  https://github.com/k3s-io/k3s/releases/download/v1.23.5%2Bk3s1/k3s
Resolving github.com (github.com)... 20.205.243.166
Connecting to github.com (github.com)|20.205.243.166|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://objects.githubusercontent.com/github-production-release-asset-2e65be/135516270/88b18d50-2447-4216-b672-fdf17488cb41?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20220605%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20220605T084929Z&X-Amz-Expires=300&X-Amz-Signature=54b2aa58e831f8f8d179189c940eb5c38b4df7d4e3e33c18c9376b446f029742&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=135516270&response-content-disposition=attachment%3B%20filename%3Dk3s&response-content-type=application%2Foctet-stream [following]
--2022-06-05 08:49:28--  https://objects.githubusercontent.com/github-production-release-asset-2e65be/135516270/88b18d50-2447-4216-b672-fdf17488cb41?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20220605%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20220605T084929Z&X-Amz-Expires=300&X-Amz-Signature=54b2aa58e831f8f8d179189c940eb5c38b4df7d4e3e33c18c9376b446f029742&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=135516270&response-content-disposition=attachment%3B%20filename%3Dk3s&response-content-type=application%2Foctet-stream
Resolving objects.githubusercontent.com (objects.githubusercontent.com)... 185.199.108.133, 185.199.109.133, 185.199.110.133, ...
Connecting to objects.githubusercontent.com (objects.githubusercontent.com)|185.199.108.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 62468096 (60M) [application/octet-stream]
Saving to: ‘k3s’

k3s                                100%[==============================================================>]  59.57M  4.95MB/s    in 24s

2022-06-05 08:49:53 (2.46 MB/s) - ‘k3s’ saved [62468096/62468096]

vagrant@controller:~$ ls -l
total 61004
-rw-rw-r-- 1 vagrant vagrant 62468096 Mar 31 01:05 k3s
vagrant@controller:~$ chmod +x k3s

Start the K3s Server.

sudo ./k3s server
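
Running the binary this way keeps the server in the foreground. If you prefer to install K3s as a systemd service instead, the official install script (the same one used later for the agent) sets that up; a sketch:

# installs k3s, registers the k3s systemd service, and links kubectl
curl -sfL https://get.k3s.io | sh -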

Check your Kubernetes cluster

sudo ./k3s kubectl get nodes

The terminal output is shown below:

How to manage your cluster

You can run kubectl commands through the k3s binary. K3s provides a built-in kubectl utility.

For example, to drain the node you created:

sudo ./k3s kubectl drain your-hostname

Or cordon a node with the command below:

sudo ./k3s kubectl cordon your-hostname
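
To make the node schedulable again after maintenance, uncordon it:

sudo ./k3s kubectl uncordon your-hostname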

Add nodes to a K3s cluster

The steps above created a cluster with just one node. To add another node to your cluster:

On Master:

First, determine the node token value of your server. Get the token value with the command below:

sudo cat /var/lib/rancher/k3s/server/node-token

For example, the token value looks like this:

K10c94d11d4970c4ac58973c98ee32c9c1c4cb4fc30d81adfaf3ddf405ba1c48b49::server:7b1dcfc180415f105af019717027e77c

On Worker:

Install the K3s agent:

export K3S_NODE_NAME=node1
export K3S_URL="https://192.168.56.11:6443"
export K3S_TOKEN=K10c94d11d4970c4ac58973c98ee32c9c1c4cb4fc30d81adfaf3ddf405ba1c48b49::server:7b1dcfc180415f105af019717027e77c
curl -sfL https://get.k3s.io | sh -s -

Then, on the additional node, run the command below:

sudo k3s agent --server ${K3S_URL} --token ${K3S_TOKEN}

Repeat this process to add as many nodes as you want to your cluster.
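
Back on the master, you can confirm that each new worker registered with the cluster:

sudo ./k3s kubectl get nodes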

The result on the master:

Conclusion

You have set up a Kubernetes cluster using K3s. You can now use kubectl to deploy and manage your applications on the cluster.

Further, K3s lets you customize networking or logging, change the container runtime, and set up certificates. I hope this was helpful. Thank you for reading the DevopsRoles page!

Install Kubernetes Cluster with KubeKey

Introduction

In this tutorial, you will learn how to install a Kubernetes cluster with KubeKey, following the KubeKey quickstart.

You can use KubeKey to spin up a Kubernetes deployment for development and testing purposes, as Kubernetes is the de facto standard in container orchestration.

Requirements for installing a Kubernetes Cluster with KubeKey

  • A running instance of Ubuntu Server.
  • A user with sudo privileges.

Install Docker

First, you need to install Docker on the Ubuntu server. Refer to How to install Docker on Ubuntu Server.

How to Install KubeKey

Download KubeKey and make it executable with the commands below:

sudo apt-get install conntrack -y
curl -sfL https://get-kk.kubesphere.io | VERSION=v1.2.1 sh -
chmod u+x kk

The terminal output is shown in the picture below:

Make kk runnable from any directory by copying the file to the /usr/local/bin directory with the command below:

sudo cp kk /usr/local/bin

Verify the installation:

kk -h

The terminal output is shown below:

vagrant@devopsroles:~$ kk -h
Deploy a Kubernetes or KubeSphere cluster efficiently, flexibly and easily. There are three scenarios to use KubeKey.
1. Install Kubernetes only
2. Install Kubernetes and KubeSphere together in one command
3. Install Kubernetes first, then deploy KubeSphere on it using https://github.com/kubesphere/ks-installer

Usage:
  kk [command]

Available Commands:
  add         Add nodes to kubernetes cluster
  certs       cluster certs
  completion  Generate shell completion scripts
  create      Create a cluster or a cluster configuration file
  delete      Delete nodes or cluster
  help        Help about any command
  init        Initializes the installation environment
  upgrade     Upgrade your cluster smoothly to a newer version with this command
  version     print the client version information

Flags:
      --debug        Print detailed information (default true)
  -h, --help         help for kk
      --in-cluster   Running inside the cluster

Use "kk [command] --help" for more information about a command.

Deploy the Cluster

You need to deploy the cluster with the command below:

sudo kk create cluster

This process will take some time.

The terminal output is shown below:

vagrant@devopsroles:~$ sudo kk create cluster
+-------------+------+------+---------+----------+-------+-------+-----------+----------+------------+-------------+------------------+--------------+
| name        | sudo | curl | openssl | ebtables | socat | ipset | conntrack | docker   | nfs client | ceph client | glusterfs client | time         |
+-------------+------+------+---------+----------+-------+-------+-----------+----------+------------+-------------+------------------+--------------+
| devopsroles | y    | y    | y       | y        |       |       | y         | 20.10.12 |            |             |                  | UTC 14:06:10 |
+-------------+------+------+---------+----------+-------+-------+-----------+----------+------------+-------------+------------------+--------------+

This is a simple check of your environment.
Before installation, you should ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations

Continue this installation? [yes/no]: yes
INFO[14:06:20 UTC] Downloading Installation Files
INFO[14:06:20 UTC] Downloading kubeadm ...
INFO[14:06:24 UTC] Downloading kubelet ...
INFO[14:06:32 UTC] Downloading kubectl ...
INFO[14:06:36 UTC] Downloading helm ...
INFO[14:06:38 UTC] Downloading kubecni ...
INFO[14:06:43 UTC] Downloading etcd ...
INFO[14:06:48 UTC] Downloading docker ...
INFO[14:06:52 UTC] Downloading crictl ...
INFO[14:06:55 UTC] Configuring operating system ...
[devopsroles 10.0.2.15] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
vm.swappiness = 1
fs.inotify.max_user_instances = 524288
no crontab for root
INFO[14:06:56 UTC] Get cluster status
INFO[14:06:56 UTC] Installing Container Runtime ...
INFO[14:06:56 UTC] Start to download images on all nodes
[devopsroles] Downloading image: kubesphere/pause:3.4.1
[devopsroles] Downloading image: kubesphere/kube-apiserver:v1.21.5
[devopsroles] Downloading image: kubesphere/kube-controller-manager:v1.21.5
[devopsroles] Downloading image: kubesphere/kube-scheduler:v1.21.5
[devopsroles] Downloading image: kubesphere/kube-proxy:v1.21.5
[devopsroles] Downloading image: coredns/coredns:1.8.0
[devopsroles] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12
[devopsroles] Downloading image: calico/kube-controllers:v3.20.0
[devopsroles] Downloading image: calico/cni:v3.20.0
[devopsroles] Downloading image: calico/node:v3.20.0
[devopsroles] Downloading image: calico/pod2daemon-flexvol:v3.20.0
INFO[14:08:42 UTC] Getting etcd status
[devopsroles 10.0.2.15] MSG:
Configuration file will be created
INFO[14:08:42 UTC] Generating etcd certs
INFO[14:08:43 UTC] Synchronizing etcd certs
INFO[14:08:43 UTC] Creating etcd service
Push /home/vagrant/kubekey/v1.21.5/amd64/etcd-v3.4.13-linux-amd64.tar.gz to 10.0.2.15:/tmp/kubekey/etcd-v3.4.13-linux-amd64.tar.gz   Done
INFO[14:08:43 UTC] Starting etcd cluster
INFO[14:08:44 UTC] Refreshing etcd configuration
[devopsroles 10.0.2.15] MSG:
Created symlink /etc/systemd/system/multi-user.target.wants/etcd.service → /etc/systemd/system/etcd.service.
INFO[14:08:47 UTC] Backup etcd data regularly
INFO[14:08:54 UTC] Installing kube binaries
Push /home/vagrant/kubekey/v1.21.5/amd64/kubeadm to 10.0.2.15:/tmp/kubekey/kubeadm   Done
Push /home/vagrant/kubekey/v1.21.5/amd64/kubelet to 10.0.2.15:/tmp/kubekey/kubelet   Done
Push /home/vagrant/kubekey/v1.21.5/amd64/kubectl to 10.0.2.15:/tmp/kubekey/kubectl   Done
Push /home/vagrant/kubekey/v1.21.5/amd64/helm to 10.0.2.15:/tmp/kubekey/helm   Done
Push /home/vagrant/kubekey/v1.21.5/amd64/cni-plugins-linux-amd64-v0.9.1.tgz to 10.0.2.15:/tmp/kubekey/cni-plugins-linux-amd64-v0.9.1.tgz   Done
INFO[14:09:01 UTC] Initializing kubernetes cluster
[devopsroles 10.0.2.15] MSG:
W0323 14:09:02.442567    4631 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.21.5
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING FileExisting-socat]: socat not found in system path
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [devopsroles devopsroles.cluster.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local localhost] and IPs [10.233.0.1 10.0.2.15 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 7.502071 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node devopsroles as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node devopsroles as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: w6uaty.abpybmw8jhw1tdlg
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join lb.kubesphere.local:6443 --token w6uaty.abpybmw8jhw1tdlg \
        --discovery-token-ca-cert-hash sha256:98d31447eb6457d74c0d13088aceed7ae8dd1fd0b8e98cb1a15683fcbb5ef4d5 \
        --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join lb.kubesphere.local:6443 --token w6uaty.abpybmw8jhw1tdlg \
        --discovery-token-ca-cert-hash sha256:98d31447eb6457d74c0d13088aceed7ae8dd1fd0b8e98cb1a15683fcbb5ef4d5
[devopsroles 10.0.2.15] MSG:
node/devopsroles untainted
[devopsroles 10.0.2.15] MSG:
node/devopsroles labeled
[devopsroles 10.0.2.15] MSG:
service "kube-dns" deleted
[devopsroles 10.0.2.15] MSG:
service/coredns created
Warning: resource clusterroles/system:coredns is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
clusterrole.rbac.authorization.k8s.io/system:coredns configured
[devopsroles 10.0.2.15] MSG:
serviceaccount/nodelocaldns created
daemonset.apps/nodelocaldns created
[devopsroles 10.0.2.15] MSG:
configmap/nodelocaldns created
INFO[14:09:34 UTC] Get cluster status
INFO[14:09:35 UTC] Joining nodes to cluster
INFO[14:09:35 UTC] Deploying network plugin ...
[devopsroles 10.0.2.15] MSG:
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/calico-kube-controllers created
INFO[14:09:36 UTC] Congratulations! Installation is successful.

Verify that kubectl has been installed with the command below:

kubectl --help

The terminal output is shown below:

vagrant@devopsroles:~$ kubectl --help
kubectl controls the Kubernetes cluster manager.

 Find more information at: https://kubernetes.io/docs/reference/kubectl/overview/

Basic Commands (Beginner):
  create        Create a resource from a file or from stdin.
  expose        Take a replication controller, service, deployment or pod and expose it as a new Kubernetes Service
  run           Run a particular image on the cluster
  set           Set specific features on objects

Basic Commands (Intermediate):
  explain       Documentation of resources
  get           Display one or many resources
  edit          Edit a resource on the server
  delete        Delete resources by filenames, stdin, resources and names, or by resources and label selector

Deploy Commands:
  rollout       Manage the rollout of a resource
  scale         Set a new size for a Deployment, ReplicaSet or Replication Controller
  autoscale     Auto-scale a Deployment, ReplicaSet, StatefulSet, or ReplicationController

Cluster Management Commands:
  certificate   Modify certificate resources.
  cluster-info  Display cluster info
  top           Display Resource (CPU/Memory) usage.
  cordon        Mark node as unschedulable
  uncordon      Mark node as schedulable
  drain         Drain node in preparation for maintenance
  taint         Update the taints on one or more nodes

Troubleshooting and Debugging Commands:
  describe      Show details of a specific resource or group of resources
  logs          Print the logs for a container in a pod
  attach        Attach to a running container
  exec          Execute a command in a container
  port-forward  Forward one or more local ports to a pod
  proxy         Run a proxy to the Kubernetes API server
  cp            Copy files and directories to and from containers.
  auth          Inspect authorization
  debug         Create debugging sessions for troubleshooting workloads and nodes

Advanced Commands:
  diff          Diff live version against would-be applied version
  apply         Apply a configuration to a resource by filename or stdin
  patch         Update field(s) of a resource
  replace       Replace a resource by filename or stdin
  wait          Experimental: Wait for a specific condition on one or many resources.
  kustomize     Build a kustomization target from a directory or URL.

Settings Commands:
  label         Update the labels on a resource
  annotate      Update the annotations on a resource
  completion    Output shell completion code for the specified shell (bash or zsh)

Other Commands:
  api-resources Print the supported API resources on the server
  api-versions  Print the supported API versions on the server, in the form of "group/version"
  config        Modify kubeconfig files
  plugin        Provides utilities for interacting with plugins.
  version       Print the client and server version information

Usage:
  kubectl [flags] [options]

Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).

Deploy the Kubernetes Dashboard

Deploy the Kubernetes Dashboard with the command:

sudo kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml

After deploying the dashboard, determine the service address of the cluster with the command below:

sudo kubectl get svc -n kubernetes-dashboard

The terminal output is shown below:

vagrant@devopsroles:~$ sudo kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
dashboard-metrics-scraper   ClusterIP   10.233.19.190   <none>        8000/TCP   64s
kubernetes-dashboard        ClusterIP   10.233.23.203   <none>        443/TCP    64s

You might want to expose the dashboard via NodePort instead of ClusterIP:

sudo kubectl edit svc kubernetes-dashboard -o yaml -n kubernetes-dashboard

This opens the service definition in the vi editor.

Before:

type: ClusterIP

After the change:

type: NodePort
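
If you prefer not to edit the service interactively, kubectl patch makes the same change in one line (an equivalent alternative):

sudo kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'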

Next, you must run the kubectl proxy command as below:

sudo kubectl proxy

While the proxy is running, open a web browser and point it to the IP address and port number listed in the results from the sudo kubectl get svc -n kubernetes-dashboard command.

vagrant@devopsroles:~$ sudo kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.233.19.190   <none>        8000/TCP        47h
kubernetes-dashboard        NodePort    10.233.23.203   <none>        443:32469/TCP   47h

Open a web browser:

https://192.168.3.7:32469/

To log in to the dashboard, you need to create a ServiceAccount object and a ClusterRoleBinding object. Create the account with the name admin-user in the namespace kubernetes-dashboard:

cat <<EOF | sudo kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
EOF

Create a ClusterRoleBinding object.

cat <<EOF | sudo kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF

Retrieve the login token for admin-user with the command below:

sudo kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')

Expected output:

The result:
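
Note that on Kubernetes 1.24 and later a token Secret is no longer created automatically for a ServiceAccount; on such clusters you can request a token directly instead (assuming kubectl 1.24+):

sudo kubectl -n kubernetes-dashboard create token admin-user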

This isn’t production-ready, but it’s a great way to get up to speed with Kubernetes and even develop for the platform.

The result is a Kubernetes cluster whose containers run on Docker on Ubuntu Server.

Conclusion

You have installed a Kubernetes cluster with KubeKey. I hope this was helpful. Thank you for reading the DevopsRoles page!

Install Nginx Ingress Controller using Helm Chart

Introduction

In this tutorial, you will learn how to install the Nginx Ingress Controller using a Helm chart. The goal is to expose pods through an Ingress controller; the Nginx ingress controller will set up an AWS ELB.

In a production environment, you might instead use an Istio gateway or Traefik.

Create Nginx Ingress Controller using Helm Chart

kubectl create namespace nginx-ingress-controller

Add the ingress-nginx repo:

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo add stable https://charts.helm.sh/stable
helm repo update

Install the Nginx ingress controller:

helm install nginx-ingress-controller ingress-nginx/ingress-nginx

The output is as below:

$ helm install nginx-ingress-controller ingress-nginx/ingress-nginx
NAME: nginx-ingress-controller
LAST DEPLOYED: Sat Jun 26 16:11:27 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The ingress-nginx controller has been installed.
It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl --namespace default get services -o wide -w nginx-ingress-controller-ingress-nginx-controller'

An example Ingress that makes use of the controller:

  apiVersion: networking.k8s.io/v1beta1
  kind: Ingress
  metadata:
    annotations:
      kubernetes.io/ingress.class: nginx
    name: example
    namespace: foo
  spec:
    rules:
      - host: www.example.com
        http:
          paths:
            - backend:
                serviceName: exampleService
                servicePort: 80
              path: /
    # This section is only required if TLS is to be enabled for the Ingress
    tls:
        - hosts:
            - www.example.com
          secretName: example-tls

If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:

  apiVersion: v1
  kind: Secret
  metadata:
    name: example-tls
    namespace: foo
  data:
    tls.crt: <base64 encoded cert>
    tls.key: <base64 encoded key>
  type: kubernetes.io/tls

Check that the nginx-ingress-controller pod, service, and deployment have been created:

kubectl get pod,svc,deploy

# The output is as below
$ kubectl get pod,svc,deploy
NAME                                                                  READY   STATUS    RESTARTS   AGE
pod/guestbook-tnc42                                                   1/1     Running   0          38m
pod/guestbook-vqgws                                                   1/1     Running   0          38m
pod/guestbook-vqnxf                                                   1/1     Running   0          38m
pod/nginx-ingress-controller-ingress-nginx-controller-7f8f65bf4g6c7   1/1     Running   0          4m23s
pod/redis-master-dp7h7                                                1/1     Running   0          41m
pod/redis-slave-54mt6                                                 1/1     Running   0          39m
pod/redis-slave-8g8h4                                                 1/1     Running   0          39m

NAME                                                                  TYPE           CLUSTER-IP       EXTERNAL-IP                                                              PORT(S)                      AGE
service/guestbook                                                     LoadBalancer   10.100.231.216   aff3d414c479f4faaa2ab82062c87fe5-264485147.us-west-2.elb.amazonaws.com   3000:32767/TCP               38m
service/kubernetes                                                    ClusterIP      10.100.0.1       <none>                                                                   443/TCP                      88m
service/nginx-ingress-controller-ingress-nginx-controller             LoadBalancer   10.100.57.204    a04b31985b6c64607a50d794ef692c57-291207293.us-west-2.elb.amazonaws.com   80:31382/TCP,443:32520/TCP   4m25s
service/nginx-ingress-controller-ingress-nginx-controller-admission   ClusterIP      10.100.216.28    <none>                                                                   443/TCP                      4m25s
service/redis-master                                                  ClusterIP      10.100.76.16     <none>                                                                   6379/TCP                     40m
service/redis-slave                                                   ClusterIP      10.100.126.163   <none>                                                                   6379/TCP                     39m

NAME                                                                READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-ingress-controller-ingress-nginx-controller   1/1     1            1           4m25s

The TYPE of the nginx-ingress-controller service is LoadBalancer: this is an L4 load balancer (AWS ELB). The nginx-ingress-controller pod runs Nginx inside the EKS cluster to do L7 load balancing.

The result is a 404 page, as below:
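
You can reproduce this check from the command line by sending a request to the controller's ELB hostname (the EXTERNAL-IP shown above); with no Ingress rules defined yet, the default backend answers 404:

curl -i http://a04b31985b6c64607a50d794ef692c57-291207293.us-west-2.elb.amazonaws.com/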

Create Ingress resource for L7 load balancing

For example, an ingress_example.yaml file:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
    annotations:
      kubernetes.io/ingress.class: nginx
    name: guestbook
    namespace: default
spec:
    rules:
      - http:
          paths:
            - backend:
                serviceName: guestbook
                servicePort: 3000 
              path: /

Apply it

kubectl apply -f ingress_example.yaml
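
Verify that the Ingress resource was created (the ADDRESS column may take a moment to populate):

kubectl get ingress guestbook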

Get the public DNS name of the AWS ELB created by the Nginx ingress controller service:

kubectl  get svc nginx-ingress-controller-ingress-nginx-controller  | awk '{ print $4 }' | tail -1

The output is as below:


a04b31985b6c64607a50d794ef692c57-291207293.us-west-2.elb.amazonaws.com

Access the link in a browser.

Delete the guestbook service:

$ kubectl delete svc guestbook
service "guestbook" deleted

Conclusion

You have installed the Nginx Ingress Controller using a Helm chart. I hope this was helpful. Thank you for reading the DevopsRoles page!

How to deploy example application to Kubernetes

Introduction

Dive into the world of Kubernetes with our easy-to-follow guide on deploying an example application to Kubernetes. Whether you’re a beginner or an experienced developer, this tutorial will demonstrate how to efficiently deploy and manage applications in a Kubernetes environment.

Learn how to set up your application with Kubernetes pods, services, and external load balancers to ensure scalable and resilient deployment.

Step-by-step: deploy an example application to Kubernetes

Frontend application

  • load balanced by a public ELB
  • read requests load balanced across multiple slaves
  • write requests sent to a single master

Backend example: Redis

  • single master (write)
  • multiple slaves (read)
  • slaves sync continuously from the master

Deploy Redis Master and Redis Slave

For Redis Master

kubectl apply -f https://raw.githubusercontent.com/kubernetes/examples/master/guestbook-go/redis-master-controller.json
kubectl apply -f https://raw.githubusercontent.com/kubernetes/examples/master/guestbook-go/redis-master-service.json

For redis slave

kubectl apply -f https://raw.githubusercontent.com/kubernetes/examples/master/guestbook-go/redis-slave-controller.json
kubectl apply -f https://raw.githubusercontent.com/kubernetes/examples/master/guestbook-go/redis-slave-service.json

Deploy frontend app

kubectl apply -f https://raw.githubusercontent.com/kubernetes/examples/master/guestbook-go/guestbook-controller.json
kubectl apply -f https://raw.githubusercontent.com/kubernetes/examples/master/guestbook-go/guestbook-service.json

The result: the redis-master, redis-slave, and guestbook replication controllers are created.

kubectl get replicationcontroller
# The output as below
$ kubectl get replicationcontroller
NAME           DESIRED   CURRENT   READY   AGE
guestbook      3         3         3       2m13s
redis-master   1         1         1       4m32s
redis-slave    2         2         2       3m10s

Get services and pods:

kubectl get pod,service

The terminal output:

$ kubectl get pod,service
NAME                     READY   STATUS    RESTARTS   AGE
pod/guestbook-tnc42      1/1     Running   0          2m33s
pod/guestbook-vqgws      1/1     Running   0          2m33s
pod/guestbook-vqnxf      1/1     Running   0          2m33s
pod/redis-master-dp7h7   1/1     Running   0          4m52s
pod/redis-slave-54mt6    1/1     Running   0          3m30s
pod/redis-slave-8g8h4    1/1     Running   0          3m30s

NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP                                                              PORT(S)          AGE
service/guestbook      LoadBalancer   10.100.231.216   aff3d414c479f4faaa2ab82062c87fe5-264485147.us-west-2.elb.amazonaws.com   3000:32767/TCP   2m25s
service/kubernetes     ClusterIP      10.100.0.1       <none>                                                                   443/TCP          52m
service/redis-master   ClusterIP      10.100.76.16     <none>                                                                   6379/TCP         4m24s
service/redis-slave    ClusterIP      10.100.126.163   <none>                                                                   6379/TCP         3m25s

Show the external ELB DNS name:

echo $(kubectl  get svc guestbook | awk '{ print $4 }' | tail -1):$(kubectl  get svc guestbook | awk '{ print $5 }' | tail -1 | cut -d ":" -f 1)

The terminal output:

$ echo $(kubectl  get svc guestbook | awk '{ print $4 }' | tail -1):$(kubectl  get svc guestbook | awk '{ print $5 }' | tail -1 | cut -d ":" -f 1)
aff3d414c479f4faaa2ab82062c87fe5-264485147.us-west-2.elb.amazonaws.com:3000

Note: the ELB becomes ready in 3-5 minutes.
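
A less fragile way to print the same address is kubectl's jsonpath output; a sketch against the same guestbook service:

echo "$(kubectl get svc guestbook -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'):$(kubectl get svc guestbook -o jsonpath='{.spec.ports[0].port}')"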

Conclusion

You have deployed an example application to Kubernetes. We will expose these pods with an Ingress in a later post.

Congratulations on taking your first steps towards mastering application deployment with Kubernetes! By following this tutorial, you’ve gained valuable insights into the intricacies of Kubernetes architecture and how to leverage it for effective application management. Continue to explore and expand your skills, and don’t hesitate to revisit this guide or explore further topics on our website to enhance your Kubernetes expertise.

I hope this was helpful. Thank you for reading the DevopsRoles page! Refer to the source.

How to install dashboard Kubernetes

In this tutorial, you will learn how to install the Metrics Server and the Kubernetes Dashboard.

Install Metrics Server

Metrics Server is a scalable, efficient source of container resource metrics for Kubernetes built-in autoscaling pipelines.

Prerequisites

You have created an Amazon EKS cluster by following the steps.

Install Metrics server

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.6/components.yaml

#The output
E:\Study\cka\devopsroles> kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.6/components.yaml
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
serviceaccount/metrics-server created
deployment.apps/metrics-server created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created

Check that the metrics-server deployment is available:

kubectl get deployment metrics-server -n kube-system

The terminal output:

E:\Study\cka\devopsroles>kubectl get deployment metrics-server -n kube-system
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
metrics-server   1/1     1            1           19s
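
Once the Metrics Server is running, you can confirm it is serving data with kubectl top (values will vary by cluster):

kubectl top nodes
kubectl top pods -n kube-system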

Install Dashboard Kubernetes

Refer here.

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml

The output shows the resources created in the kubernetes-dashboard namespace:

E:\Study\cka\devopsroles>kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

Check the namespaces and dashboard resources:

kubectl get namespace
kubectl get pod,deployment,svc -n kubernetes-dashboard

The terminal output:

E:\Study\cka\devopsroles>kubectl get namespace
NAME                   STATUS   AGE
default                Active   12m
kube-node-lease        Active   12m
kube-public            Active   12m
kube-system            Active   12m
kubernetes-dashboard   Active   73s

E:\Study\cka\devopsroles>kubectl get pod,deployment,svc -n kubernetes-dashboard
NAME                                             READY   STATUS    RESTARTS   AGE
pod/dashboard-metrics-scraper-76585494d8-hbv6t   1/1     Running   0          60s
pod/kubernetes-dashboard-5996555fd8-qhcxz        1/1     Running   0          64s

NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/dashboard-metrics-scraper   1/1     1            1           60s
deployment.apps/kubernetes-dashboard        1/1     1            1           64s

NAME                                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/dashboard-metrics-scraper   ClusterIP   10.100.242.231   <none>        8000/TCP   63s
service/kubernetes-dashboard        ClusterIP   10.100.135.44    <none>        443/TCP    78s

Get token for dashboard

export AWS_PROFILE=devopsroles-demo
kubectl describe secret $(kubectl get secret -n kubernetes-dashboard | grep kubernetes-dashboard-token | awk '{ print $1 }') -n kubernetes-dashboard

The terminal output with the token:

$  kubectl describe secret $(kubectl get secret -n kubernetes-dashboard | grep kubernetes-dashboard-token | awk '{ print $1 }') -n kubernetes-dashboard
Name:         kubernetes-dashboard-token-grg5n
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: kubernetes-dashboard
              kubernetes.io/service-account.uid: 6a5c693b-892c-4564-9a79-20e0995bc012

Type:  kubernetes.io/service-account-token

Data
====
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6InVXaXBWU1BGWnp2SFRPaV80c1A1azlhYXA0ZkpyUDhkSlF1MTlkOFNHRG8ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1ncmc1biIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjZhNWM2OTNiLTg5MmMtNDU2NC05YTc5LTIwZTA5OTViYzAxMiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.XoYdQMM256sBLVc8x0zcsHn7RXSe03EWZKiNhHxwhCOrFQKPjyuKb__Myw0KNZj-C3Jo2PpvdE9-kcYgKYAPaOg3BWmseTw6cbveGzIs3tTrubgGsTNvxK9M_RGnrSxot_Ajul7-pTAHTaqSELM3CvctbrBRl39drEcIBajbzyHhlWHpkaeFPS2YG3Ct6jLhcm4saZgAV6WxJyb3cAyRdOhuSdM2dLRDJ4LhQTy8wPGHiwY7MWUE-ybi--ehaEExXiBFp2sTUVC8RQQSHoDhb8RCqh9aR9q1AwHEKze6T0lZMLH93RCb3IWEie950aOV7auQntJkeOfLKcE86Uzigw
ca.crt:     1025 bytes
namespace:  20 bytes

Create a secure channel from your local machine to the Kubernetes API server:

kubectl proxy

Access this URL from a browser:

http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

Enter the token. The result is as in the picture below.

Create RBAC to control what metrics can be visible

Create a file eks-admin-service-account.yaml with the following content:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: eks-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: eks-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin # this is the cluster admin role
subjects:
- kind: ServiceAccount
  name: eks-admin
  namespace: kube-system

Apply it

kubectl apply -f eks-admin-service-account.yaml

The terminal output:

E:\Study\cka\devopsroles>kubectl apply -f eks-admin-service-account.yaml
serviceaccount/eks-admin created
clusterrolebinding.rbac.authorization.k8s.io/eks-admin created

Check that it was created in the kube-system namespace:

E:\Study\cka\devopsroles> kubectl get serviceaccount -n kube-system | grep eks-admin
eks-admin                            1         67s

Get a token from the eks-admin serviceaccount

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep eks-admin | awk '{print $1}')

The terminal output with the token:

$ kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep eks-admin | awk '{print $1}')
Name:         eks-admin-token-k2pxz
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: eks-admin
              kubernetes.io/service-account.uid: b7005b72-97fc-488b-951d-ac1eaa44dfee

Type:  kubernetes.io/service-account-token

Data
====
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6InVXaXBWU1BGWnp2SFRPaV80c1A1azlhYXA0ZkpyUDhkSlF1MTlkOFNHRG8ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJla3MtYWRtaW4tdG9rZW4tazJweHoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZWtzLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiYjcwMDViNzItOTdmYy00ODhiLTk1MWQtYWMxZWFhNDRkZmVlIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmVrcy1hZG1pbiJ9.TA-nf-jlyVcUWcvoq7ixMRpZVgsgZ7sBWzTywjMF010bvN4b7p5V5PbMVmNldUylPZ4FQniQY9Xs-quQDytI2TH8zRFdZDFMccEx_E1vzh1CjizKjqbc233N3JYD7xJPTP1U_xq9inaZ3uXvedCtJBzDXVqjjY9eLKcDfuU_I-LLvPEmzh6Sk0FNDzyngAddD_COs2F8q2KbEUOVL6oHr2ZpgbXgxumM_NG_EAKHQx00iXl7Kr_z7wZZSAsvb24ZwQPOShHn_e49C0SvEQKq_TVMG21xxsn0wLIDfH_PpQd1m_vKHbOyyT_yMs_fQnZXnfq_VK7xEhjU3NvdsolMWg
ca.crt:     1025 bytes
namespace:  11 bytes

Create a secure channel from your local machine to the Kubernetes API server:

kubectl proxy

Access this URL from a browser:

http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

Enter the token as in the picture below.

The result: the Kubernetes dashboard is installed.

Uninstall Dashboard

kubectl delete -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml
kubectl delete -f eks-admin-service-account.yaml

Conclusion

You have installed the Kubernetes dashboard. I hope this was helpful. Thank you for reading the DevopsRoles page!

Unlocking helm commands for Kubernetes

Introduction

Explore the essential Helm commands for Kubernetes in this detailed tutorial. Whether you’re a beginner or a seasoned Kubernetes user, this guide will help you install Helm and utilize its commands to manage charts and repositories effectively, streamlining your Kubernetes deployments.

In this tutorial, you will learn how to install Helm and run Helm commands. Helm provides many commands for managing charts and Helm repositories. Now, let’s look at Helm commands for Kubernetes:

  • Helm Commands for Kubernetes: Simplifies application deployment and management in Kubernetes environments.
  • What is a Helm Chart?: A package containing pre-configured Kubernetes resources used for deploying applications.
  • Purpose of Helm: Streamlines deployments, manages versions and rollbacks and allows customization of installations through charts.
  • Using Helm Charts: Install Helm, add repositories, and manage applications within your Kubernetes cluster using Helm’s command suite, including install, update, and delete operations.

Install Helm commands for Kubernetes

You can refer here.

From Homebrew (Mac)

brew install helm

From Windows

choco install kubernetes-helm
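
On Linux, the official installer script from the Helm project is an alternative:

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh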

Check version

helm version

The output is as below:

Windows PowerShell
Copyright (C) Microsoft Corporation. All rights reserved.

Try the new cross-platform PowerShell https://aka.ms/pscore6

PS C:\Windows\system32> choco install kubernetes-helm
Chocolatey v0.10.15
Installing the following packages:
kubernetes-helm
By installing you accept licenses for the packages.
Progress: Downloading kubernetes-helm 3.5.4... 100%

kubernetes-helm v3.5.4 [Approved]
kubernetes-helm package files install completed. Performing other installation steps.
The package kubernetes-helm wants to run 'chocolateyInstall.ps1'.
Note: If you don't run this script, the installation will fail.
Note: To confirm automatically next time, use '-y' or consider:
choco feature enable -n allowGlobalConfirmation
Do you want to run the script?([Y]es/[A]ll - yes to all/[N]o/[P]rint): Yes

Downloading kubernetes-helm 64 bit
  from 'https://get.helm.sh/helm-v3.5.4-windows-amd64.zip'
Progress: 100% - Completed download of C:\Users\USERNAME\AppData\Local\Temp\chocolatey\kubernetes-helm\3.5.4\helm-v3.5.4-windows-amd64.zip (11.96 MB).
Download of helm-v3.5.4-windows-amd64.zip (11.96 MB) completed.
Hashes match.
Extracting C:\Users\USERNAME\AppData\Local\Temp\chocolatey\kubernetes-helm\3.5.4\helm-v3.5.4-windows-amd64.zip to C:\ProgramData\chocolatey\lib\kubernetes-helm\tools...
C:\ProgramData\chocolatey\lib\kubernetes-helm\tools
 ShimGen has successfully created a shim for helm.exe
 The install of kubernetes-helm was successful.
  Software installed to 'C:\ProgramData\chocolatey\lib\kubernetes-helm\tools'

Chocolatey installed 1/1 packages.
 See the log for details (C:\ProgramData\chocolatey\logs\chocolatey.log).

Did you know the proceeds of Pro (and some proceeds from other
 licensed editions) go into bettering the community infrastructure?
 Your support ensures an active community, keeps Chocolatey tip top,
 plus it nets you some awesome features!
 https://chocolatey.org/compare
PS C:\Windows\system32> helm version
version.BuildInfo{Version:"v3.5.4", GitCommit:"1b5edb69df3d3a08df77c9902dc17af864ff05d1", GitTreeState:"clean", GoVersion:"go1.15.11"}
PS C:\Windows\system32>

Helm commands for Kubernetes

Add a Helm repo (link here):

# Example
helm repo add stable https://charts.helm.sh/stable

I will add the bitnami repo and search for the Nginx chart with the commands below:

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm search repo nginx
helm search repo bitnami/nginx

The output is as follows:

E:\Study\cka\devopsroles>helm search repo nginx
NAME                                    CHART VERSION   APP VERSION     DESCRIPTION
bitnami/nginx                           8.9.1           1.19.10         Chart for the nginx server
bitnami/nginx-ingress-controller        7.6.9           0.46.0          Chart for the nginx Ingress controller
stable/nginx-ingress                    1.41.3          v0.34.1         DEPRECATED! An nginx Ingress controller that us...
stable/nginx-ldapauth-proxy             0.1.6           1.13.5          DEPRECATED - nginx proxy with ldapauth
stable/nginx-lego                       0.3.1                           Chart for nginx-ingress-controller and kube-lego
bitnami/kong                            3.7.4           2.4.1           Kong is a scalable, open source API layer (aka ...
stable/gcloud-endpoints                 0.1.2           1               DEPRECATED Develop, deploy, protect and monitor...

E:\Study\cka\devopsroles>helm search repo bitnami/nginx
NAME                                    CHART VERSION   APP VERSION     DESCRIPTION
bitnami/nginx                           8.9.1           1.19.10         Chart for the nginx server
bitnami/nginx-ingress-controller        7.6.9           0.46.0          Chart for the nginx Ingress controller

Install Nginx using the helm command

helm install nginx bitnami/nginx
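
You can confirm the release and the pods it created; the label selector below assumes the chart's standard app.kubernetes.io labels:

helm list
kubectl get pods -l app.kubernetes.io/instance=nginx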

Update Nginx

helm upgrade nginx bitnami/nginx --dry-run

# Upgrade using values in overrides_nginx.yaml
helm upgrade nginx bitnami/nginx -f overrides_nginx.yaml

# rollback
helm rollback nginx REVISION_NUMBER

Basic helm commands

helm status nginx
helm history nginx
# get manifest and values from deployment
helm get manifest nginx
helm get values nginx
helm uninstall nginx

Conclusion

Mastering Helm commands enhances your Kubernetes management skills, allowing for more efficient application deployment and management. This tutorial provides the foundation you need to confidently use Helm in your Kubernetes environment, improving your operational capabilities.

You now know how to use Helm commands for Kubernetes. I hope this was helpful. Thank you for reading the DevopsRoles page!

Install EKS on AWS

In this tutorial, you will set up an EKS 1.16 cluster with eksctl. Amazon Elastic Kubernetes Service (Amazon EKS) gives you the flexibility to start, run, and scale Kubernetes applications in the AWS cloud or on-premises. Now, let’s install EKS on AWS.

1. First, create a free account on AWS.

Link here:

Example: create the user devopsroles-demo as in the picture below:

2. Install the AWS CLI on Windows

Refer here:

Create AWS Profile

It is easier to switch to a different AWS IAM user or IAM role identity with ‘export AWS_PROFILE=PROFILE_NAME’. I will not use the ‘default’ profile created by ‘aws configure’. For example, I create a named AWS profile ‘devopsroles-demo’ in two ways:

  • ‘aws configure --profile devopsroles-demo’
E:\Study\cka\devopsroles>aws configure --profile devopsroles-demo
AWS Access Key ID [None]: XXXXZHBNJLCKKCE7EQQQ
AWS Secret Access Key [None]: fdfdfdfd43434dYlQ1il1xKNCnqwUvNHFSv41111
Default region name [None]:
Default output format [None]:

E:\Study\cka\devopsroles>set AWS_PROFILE=devopsroles-demo
E:\Study\cka\devopsroles>aws sts get-caller-identity
{
    "UserId": "AAQAZHBNJLCKPEGKYAV1R",
    "Account": "456602660300",
    "Arn": "arn:aws:iam::456602660300:user/devopsroles-demo"
}
  • Create a profile entry in the ~/.aws/credentials file

The content of the credentials file is as below:

[devopsroles-demo]
aws_access_key_id=YOUR_ACCESS_KEY
aws_secret_access_key=YOUR_SECRET_ACCESS_KEY
region = YOUR_REGION

Switch to the new profile:

export AWS_PROFILE=devopsroles-demo
# Windows
set AWS_PROFILE=devopsroles-demo

3. Install aws-iam-authenticator

# Windows
# install chocolatey first: https://chocolatey.org/install
choco install -y aws-iam-authenticator

4. Install kubectl

Ref here:

choco install kubernetes-cli
kubectl version

5. Install eksctl

Ref here:

# install eksctl from chocolatey
chocolatey install -y eksctl 
eksctl version

6. Create an SSH key for EKS worker nodes

# generate the key pair referenced by the eksctl command below
ssh-keygen -f ~/.ssh/devopsroles_worker_nodes_demo.pem

7. Set up the EKS cluster with eksctl (so you don’t need to manually create a VPC)

The eksctl tool will create the K8s control plane (master nodes, etcd, API server, etc.), worker nodes, VPC, security groups, subnets, routes, internet gateway, and so on.

  • use official AWS EKS AMI
  • dedicated VPC
  • EKS not supported in us-west-1

eksctl create cluster  --name devopsroles-from-eksctl --version 1.16  --region us-west-2  --nodegroup-name workers --node-type t3.medium --nodes 2 --nodes-min 1 --nodes-max 4 --ssh-access --ssh-public-key ~/.ssh/devopsroles_worker_nodes_demo.pem.pub --managed

The output

E:\Study\cka\devopsroles>eksctl create cluster  --name devopsroles-from-eksctl --version 1.16  --region us-west-2  --nodegroup-name workers --node-type t3.medium --nodes 2 --nodes-min 1 --nodes-max 4 --ssh-access
--ssh-public-key ~/.ssh/devopsroles_worker_nodes_demo.pem.pub --managed
2021-05-23 15:19:30 [ℹ]  eksctl version 0.49.0
2021-05-23 15:19:30 [ℹ]  using region us-west-2
2021-05-23 15:19:31 [ℹ]  setting availability zones to [us-west-2a us-west-2b us-west-2c]
2021-05-23 15:19:31 [ℹ]  subnets for us-west-2a - public:192.168.0.0/19 private:192.168.96.0/19
2021-05-23 15:19:31 [ℹ]  subnets for us-west-2b - public:192.168.32.0/19 private:192.168.128.0/19
2021-05-23 15:19:31 [ℹ]  subnets for us-west-2c - public:192.168.64.0/19 private:192.168.160.0/19
2021-05-23 15:19:31 [ℹ]  using SSH public key "C:\\Users\\USERNAME/.ssh/devopsroles_worker_nodes_demo.pem.pub" as "eksctl-devopsroles-from-eksctl-nodegroup-workers-29:e7:8c:c3:df:a5:23:1b:bb:74:ad:51:bc:fb:80:9b" 
2021-05-23 15:19:32 [ℹ]  using Kubernetes version 1.16
2021-05-23 15:19:32 [ℹ]  creating EKS cluster "devopsroles-from-eksctl" in "us-west-2" region with managed nodes
2021-05-23 15:19:32 [ℹ]  will create 2 separate CloudFormation stacks for cluster itself and the initial managed nodegroup
2021-05-23 15:19:32 [ℹ]  if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-west-2 --cluster=devopsroles-from-eksctl'
2021-05-23 15:19:32 [ℹ]  CloudWatch logging will not be enabled for cluster "devopsroles-from-eksctl" in "us-west-2"
2021-05-23 15:19:32 [ℹ]  you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=us-west-2 --cluster=devopsroles-from-eksctl'
2021-05-23 15:19:32 [ℹ]  Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "devopsroles-from-eksctl" in "us-west-2"
2021-05-23 15:19:32 [ℹ]  2 sequential tasks: { create cluster control plane "devopsroles-from-eksctl", 2 sequential sub-tasks: { wait for control plane to become ready, create managed nodegroup "workers" } }
2021-05-23 15:19:32 [ℹ]  building cluster stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:19:34 [ℹ]  deploying stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:20:04 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:20:35 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:21:36 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:22:37 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:23:39 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:24:40 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:25:41 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:26:42 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:27:44 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:28:45 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:29:46 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:30:47 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:31:48 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:31:52 [ℹ]  building managed nodegroup stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:31:53 [ℹ]  deploying stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:31:53 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:32:09 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:32:27 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:32:48 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:33:05 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:33:26 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:33:47 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:34:06 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:34:24 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:34:43 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:35:01 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:35:17 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:35:37 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:35:38 [ℹ]  waiting for the control plane availability...
2021-05-23 15:35:38 [✔]  saved kubeconfig as "C:\\Users\\USERNAME/.kube/config"
2021-05-23 15:35:38 [ℹ]  no tasks
2021-05-23 15:35:38 [✔]  all EKS cluster resources for "devopsroles-from-eksctl" have been created
2021-05-23 15:35:39 [ℹ]  nodegroup "workers" has 2 node(s)
2021-05-23 15:35:39 [ℹ]  node "ip-192-168-53-14.us-west-2.compute.internal" is ready
2021-05-23 15:35:39 [ℹ]  node "ip-192-168-90-229.us-west-2.compute.internal" is ready
2021-05-23 15:35:39 [ℹ]  waiting for at least 1 node(s) to become ready in "workers"
2021-05-23 15:35:39 [ℹ]  nodegroup "workers" has 2 node(s)
2021-05-23 15:35:39 [ℹ]  node "ip-192-168-53-14.us-west-2.compute.internal" is ready
2021-05-23 15:35:39 [ℹ]  node "ip-192-168-90-229.us-west-2.compute.internal" is ready
2021-05-23 15:35:47 [ℹ]  kubectl command should work with "C:\\Users\\USERNAME/.kube/config", try 'kubectl get nodes'
2021-05-23 15:35:47 [✔]  EKS cluster "devopsroles-from-eksctl" in "us-west-2" region is ready
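
eksctl also accepts a declarative cluster config file. The following is a minimal sketch assumed to be roughly equivalent to the command above (the file name and field values are illustrative, taken from the flags used in this tutorial):

# cluster.yaml is a hypothetical file name; field values mirror the flags above
cat > cluster.yaml <<'EOF'
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: devopsroles-from-eksctl
  region: us-west-2
  version: "1.16"
managedNodeGroups:
  - name: workers
    instanceType: t3.medium
    desiredCapacity: 2
    minSize: 1
    maxSize: 4
    ssh:
      allow: true
      publicKeyPath: ~/.ssh/devopsroles_worker_nodes_demo.pem.pub
EOF
eksctl create cluster -f cluster.yaml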

You have created a cluster. The cluster credentials were automatically added to ~/.kube/config.
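
To confirm that kubectl now points at the new cluster, check the current context and list the worker nodes (node names and addresses will differ in your environment):

# verify the kubeconfig context written by eksctl
kubectl config current-context
# list the EKS worker nodes
kubectl get nodes -o wide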

The result on AWS: the new cluster appears under Amazon EKS Clusters, the two eksctl stacks appear in CloudFormation, and the worker nodes appear in EC2.

For example, some basic commands:

Get information about the cluster

aws eks describe-cluster --name devopsroles-from-eksctl --region us-west-2
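
describe-cluster returns a JSON document. If you only need a single field, such as the API server endpoint or the cluster status, the AWS CLI --query option (JMESPath) can extract it, for example:

# print only the API server endpoint
aws eks describe-cluster --name devopsroles-from-eksctl --region us-west-2 --query "cluster.endpoint" --output text
# print only the cluster status (e.g. ACTIVE)
aws eks describe-cluster --name devopsroles-from-eksctl --region us-west-2 --query "cluster.status" --output text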

Get services

kubectl get svc

The output

E:\Study\cka\devopsroles>kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   11m

Delete EKS Cluster

eksctl delete cluster --name devopsroles-from-eksctl --region us-west-2

The output

E:\Study\cka\devopsroles>eksctl delete cluster --name devopsroles-from-eksctl --region us-west-2
2021-05-23 15:57:31 [ℹ]  eksctl version 0.49.0
2021-05-23 15:57:31 [ℹ]  using region us-west-2
2021-05-23 15:57:31 [ℹ]  deleting EKS cluster "devopsroles-from-eksctl"
2021-05-23 15:57:34 [ℹ]  deleted 0 Fargate profile(s)
2021-05-23 15:57:37 [✔]  kubeconfig has been updated
2021-05-23 15:57:37 [ℹ]  cleaning up AWS load balancers created by Kubernetes objects of Kind Service or Ingress
2021-05-23 15:57:45 [ℹ]  2 sequential tasks: { delete nodegroup "workers", delete cluster control plane "devopsroles-from-eksctl" [async] }
2021-05-23 15:57:45 [ℹ]  will delete stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:57:45 [ℹ]  waiting for stack "eksctl-devopsroles-from-eksctl-nodegroup-workers" to get deleted
2021-05-23 15:57:45 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:58:02 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:58:19 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:58:40 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:58:58 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:59:19 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:59:20 [ℹ]  will delete stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:59:20 [✔]  all cluster resources were deleted

Conclusion

You have installed an EKS cluster on AWS. I hope this was helpful. Thank you for reading the DevopsRoles page!

How to Kubernetes Minikube Deploy Pods

Introduction

In the rapidly evolving world of cloud-native applications, Kubernetes has emerged as the go-to platform for automating deployment, scaling, and managing containerized applications. For those who are new to Kubernetes or looking to experiment with it in a local environment, Minikube is the ideal tool. Minikube allows you to run a single-node Kubernetes cluster on your local machine, making it easier to learn and test.

This guide will walk you through the process of deploying and managing Pods on Kubernetes Minikube. We will cover everything from basic concepts to advanced operations like scaling and exposing your services. Whether you are a beginner or an experienced developer, this guide will provide you with valuable insights and practical steps to effectively manage your Kubernetes environment.

What is Kubernetes Minikube?

Kubernetes is an open-source platform that automates the deployment, scaling, and operation of application containers across clusters of hosts. Minikube is a tool that enables you to run a single-node Kubernetes cluster on your local machine. It’s an excellent way to start learning Kubernetes without needing access to a full-fledged Kubernetes cluster.

Key Components of Kubernetes Minikube

Before diving into the hands-on steps, let’s understand some key components you’ll interact with:

  • Service: An abstract way to expose an application running on a set of Pods as a network service.
  • Pod: The smallest and simplest Kubernetes object. A Pod represents a running process on your cluster and contains one or more containers.
  • kubectl: The command-line interface (CLI) tool used to interact with Kubernetes clusters.

Kubernetes Minikube Deploy Pods
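
Before creating Pods, make sure a Minikube cluster is running and kubectl is pointing at it. A quick check, assuming Minikube and kubectl are already installed:

# start (or resume) the local single-node cluster
minikube start
# confirm the API server is reachable
kubectl cluster-info
# the single Minikube node should be in Ready state
kubectl get nodes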

Create Pods

[root@DevopsRoles ~]# kubectl run test-nginx --image=nginx --replicas=2 --port=80 
[root@DevopsRoles ~]# kubectl get pods 
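
Note: in newer kubectl releases the --replicas flag has been removed from kubectl run, which now creates a single bare Pod. On a recent kubectl you can create an equivalent Deployment instead (flags below assume a reasonably current kubectl version):

# Deployment-based alternative to the kubectl run command above
kubectl create deployment test-nginx --image=nginx --replicas=2 --port=80
kubectl get pods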

Show the environment variables of the test-nginx Pod

[root@DevopsRoles ~]# kubectl exec test-nginx-c8b797d7d-mzf91 env
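
Note: current kubectl versions expect a -- separator between the Pod name and the command to execute, so this call and the interactive shell in the next step would look like this (the Pod name comes from the author's run; substitute your own from kubectl get pods):

kubectl exec test-nginx-c8b797d7d-mzf91 -- env
kubectl exec -it test-nginx-c8b797d7d-mzf91 -- bash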

Access the test-nginx Pod

[root@DevopsRoles ~]# kubectl exec -it test-nginx-c8b797d7d-mzf91 bash
root@test-nginx-c8b797d7d-mzf91:/# curl localhost 

Show the logs of the test-nginx Pod

[root@DevopsRoles ~]# kubectl logs test-nginx-c8b797d7d-mzf91
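
To stream the logs continuously instead of taking a one-off snapshot, kubectl logs supports the -f (follow) flag:

kubectl logs -f test-nginx-c8b797d7d-mzf91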

How to scale out pods

[root@DevopsRoles ~]# kubectl scale deployment test-nginx --replicas=3 
[root@DevopsRoles ~]# kubectl get pods 
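
You can confirm the extra replica has been rolled out before moving on:

# waits until the Deployment reaches the desired replica count
kubectl rollout status deployment/test-nginx
kubectl get deployment test-nginx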

Expose the Deployment as a Service

[root@DevopsRoles ~]# kubectl expose deployment test-nginx --type="NodePort" --port 80 
[root@DevopsRoles ~]# kubectl get services test-nginx
[root@DevopsRoles ~]# minikube service test-nginx --url
[root@DevopsRoles ~]# curl http://10.0.2.10:31495 
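
The address 10.0.2.10:31495 in the curl command above comes from the author's environment: the IP is the Minikube node and the port is a randomly assigned NodePort. To hit the right URL on your own cluster, you can pass the output of minikube service --url straight to curl:

# fetch the NodePort URL for this environment and request it
curl "$(minikube service test-nginx --url)"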

Delete the Service and the Deployment

[root@DevopsRoles ~]# kubectl delete services test-nginx
[root@DevopsRoles ~]# kubectl delete deployment test-nginx 

Frequently Asked Questions

What is Minikube in Kubernetes?

Minikube is a tool that allows you to run a Kubernetes cluster locally on your machine. It’s particularly useful for learning and testing Kubernetes without the need for a full-blown cluster.

How do I create a Pod in Kubernetes Minikube?

You can create a Pod in Kubernetes Minikube using the kubectl run command. For example: kubectl run test-nginx --image=nginx --replicas=2 --port=80. Note that newer kubectl releases no longer accept --replicas on kubectl run; in that case create a Deployment instead, as shown earlier in this guide.

How can I scale a Pod in Kubernetes?

To scale a Pod in Kubernetes, you can use the kubectl scale command. For instance, kubectl scale deployment test-nginx --replicas=3 will scale the deployment to three replicas.

What is the purpose of a Service in Kubernetes?

A Service in Kubernetes is used to expose an application running on a set of Pods as a network service. It allows external traffic to access the Pods.

How do I delete a Service in Kubernetes?

You can delete a Service in Kubernetes using the kubectl delete services <service-name> command. For example: kubectl delete services test-nginx.

Conclusion

Deploying and managing Pods on Kubernetes Minikube is a foundational skill for anyone working in cloud-native environments. This guide has provided you with the essential steps to create, scale, expose, and delete Pods and Services using Minikube.

By mastering these operations, you’ll be well-equipped to manage more complex Kubernetes deployments in production environments. Whether you’re scaling applications, troubleshooting issues, or exposing services, the knowledge gained from this guide will be invaluable. Thank you for reading the DevopsRoles page!