Category Archives: Kubernetes

Learn Kubernetes with DevOpsRoles.com. Access comprehensive guides and tutorials to orchestrate containerized applications and streamline your DevOps processes with Kubernetes.

Kubernetes RBAC (Role-Based Access Control)

Introduction

In Kubernetes, RBAC is a method for controlling access to resources based on the roles assigned to users or service accounts within the cluster. RBAC helps enforce the principle of least privilege, ensuring that users have only the permissions necessary to perform their tasks.

Kubernetes RBAC best practices

Kubernetes create Service Account

Service accounts are used to authenticate applications running inside a Kubernetes cluster to the API server. Here’s how you can create a service account named huupvuser:

kubectl create sa huupvuser
kubectl get sa


Creating ClusterRole and ClusterRoleBinding

Creating a ClusterRole

A ClusterRole defines a set of permissions for accessing Kubernetes resources across all namespaces. Below is an example of creating a ClusterRole named test-reader that grants read-only access to pods:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: test-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]

Apply the ClusterRole:

kubectl apply -f clusterrole.yml

Creating a ClusterRoleBinding

A ClusterRoleBinding binds a ClusterRole to one or more subjects, such as users or service accounts, and defines the permissions granted to those subjects. Here’s an example of creating a ClusterRoleBinding named test-read-pod-global that binds the test-reader ClusterRole to the huupvuser service account:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: test-read-pod-global
subjects:
- kind: ServiceAccount
  name: huupvuser
  apiGroup: ""
  namespace: default
roleRef:
  kind: ClusterRole
  name: test-reader
  apiGroup: rbac.authorization.k8s.io

Apply the ClusterRoleBinding:

kubectl apply -f clusterrolebinding.yaml

Combined Role YAML

For convenience, you can combine the ClusterRole and ClusterRoleBinding into a single YAML file for easier management. Here’s an example role.yml:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: test-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: test-read-pod-global
subjects:
- kind: ServiceAccount
  name: huupvuser
  apiGroup: ""
  namespace: default
roleRef:
  kind: ClusterRole
  name: test-reader
  apiGroup: rbac.authorization.k8s.io

Apply the combined YAML file:

kubectl apply -f role.yml

Verify ClusterRole and ClusterRoleBinding:

kubectl get clusterrole | grep test-reader
kubectl get clusterrolebinding | grep test-read-pod-global
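You can also confirm that the binding behaves as expected by impersonating the service account with kubectl auth can-i (a quick sanity check; it requires impersonation rights in your own kubeconfig, which cluster-admin has):

# Should return "yes" -- pods are readable through test-reader
kubectl auth can-i list pods --as=system:serviceaccount:default:huupvuser
# Should return "no" -- delete was never granted
kubectl auth can-i delete pods --as=system:serviceaccount:default:huupvuser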


Delete ClusterRole and ClusterRoleBinding:

kubectl delete clusterrole test-reader
kubectl delete clusterrolebinding test-read-pod-global


Conclusion

In this article, we’ve explored the basics of Role-Based Access Control (RBAC) in Kubernetes and some RBAC best practices. Through the creation of Service Accounts, ClusterRoles, and ClusterRoleBindings, we’ve demonstrated how to grant specific permissions to users or service accounts within a Kubernetes cluster.

RBAC is a powerful mechanism for ensuring security and access control in Kubernetes environments, allowing administrators to define fine-grained access policies tailored to their specific needs. By understanding and implementing RBAC effectively, organizations can maintain a secure and well-managed Kubernetes infrastructure. I hope you found this helpful. Thank you for reading the DevopsRoles page!

A Comprehensive Guide to Validating Kubernetes Cluster Installed using Kubeadm

Introduction

When setting up a Kubernetes cluster using Kubeadm, it’s essential to validate the installation to ensure everything is functioning correctly. In this blog post, we will guide you through the steps to validate a Kubernetes cluster installed using Kubeadm and Kubectl.

Learn how to validate your Kubernetes cluster installation using Kubeadm and ensure smooth operations. Follow our step-by-step guide for easy validation.

This guide assumes you have already installed Kubernetes using Kubeadm on Ubuntu (see our guide: Installing Kubernetes using Kubeadm on Ubuntu: A Step-by-Step Guide).

Validating Kubernetes Cluster Installed using Kubeadm: Step-by-Step Guide

Validating CMD Tools: Kubeadm & Kubectl

First, let’s check the versions of Kubeadm and Kubectl to ensure they match your cluster setup.

Checking “kubeadm” version

kubeadm version

Checking “kubectl” version

kubectl version

Make sure the versions of Kubeadm and Kubectl are compatible with your Kubernetes cluster.

Validating Cluster Nodes

Next, we need to ensure that all nodes in the cluster, including both Master and Worker nodes, are in the “Ready” state.

To check the status of all nodes:

kubectl get nodes
kubectl get nodes -o wide

This command will display a list of all nodes in the cluster along with their status. Ensure that all nodes are marked as “Ready.”

Validating Kubernetes Components

It’s crucial to verify that all Kubernetes components on the Master node are running correctly.

To check the status of Kubernetes components:

kubectl get pods -n kube-system
kubectl get pods -n kube-system -o wide

This command will show the status of various Kubernetes components in the kube-system namespace. Ensure that all components are in the “Running” state.

Validating Services: Docker & Kubelet

To ensure the proper functioning of your cluster, we need to validate the Docker and Kubelet services on all nodes.

Checking Docker service status

systemctl status docker

This command will display the status of the Docker service. Ensure that it is “Active” and running without any errors.

Checking Kubelet service status

systemctl status kubelet

This command will show the status of the Kubelet service. Verify that it is “Active” and running correctly.
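If the Kubelet service is not active, its recent logs usually point to the cause. For example:

journalctl -u kubelet --no-pager -n 50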

Deploying Test Deployment

To further validate your cluster, let’s deploy a sample Nginx deployment and check its status.

Deploying the sample “nginx” deployment:

kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml

This command will create the Nginx deployment in your cluster.

Validate the deployment:

kubectl get deploy
kubectl get deploy -o wide

These commands will display the status of the Nginx deployment, including the number of replicas and the desired and current states.
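If you want to wait until the rollout completes and inspect the details, the following commands can also help (the deployment in the example manifest is named nginx-deployment):

kubectl rollout status deployment/nginx-deployment
kubectl describe deployment nginx-deployment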

Check if the pods are in the “Running” state:

kubectl get pods
kubectl get pods -o wide

Make sure all pods are running without any errors.

Verify that containers are running on the respective worker nodes:

docker ps

This command will show the running containers on each worker node. Ensure that the Nginx containers are running as expected.

Delete the deployment:

kubectl delete -f https://k8s.io/examples/controllers/nginx-deployment.yaml

This command will delete the Nginx deployment from your cluster.

Conclusion

By following these steps, you can validate your Kubernetes cluster installation using Kubeadm and Kubectl. It’s essential to ensure that all the components, services, and deployments are running correctly to have a reliable and stable Kubernetes environment. I hope you found this helpful. Thank you for reading the DevopsRoles page!

Installing Kubernetes using Kubeadm on Ubuntu: A Step-by-Step Guide

Introduction

Kubernetes has emerged as the go-to solution for container orchestration and management. If you’re looking to set up a Kubernetes cluster on an Ubuntu server, you’re in the right place. In this step-by-step guide, we’ll walk you through the process of installing Kubernetes using Kubeadm on Ubuntu.

Prerequisites

I have created 3 VMs for the Kubernetes cluster nodes on Google Compute Engine (GCE):

  • Master(1): 2 vCPUs – 4GB Ram
  • Worker(2): 2 vCPUs – 2GB RAM
  • OS: Ubuntu 16.04 or CentOS/RHEL 7

I have configured the following ingress firewall rules in Google Compute Engine (GCE); a sample gcloud command is shown after the list.

  • Master Node: 2379,6443,10250,10251,10252
  • Worker Node: 10250,30000-32767
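For reference, rules similar to these could be created with the gcloud CLI (a sketch only; the rule names and the k8s-master/k8s-worker network tags are illustrative and must match the tags on your instances):

gcloud compute firewall-rules create k8s-master-ingress \
  --allow tcp:2379,tcp:6443,tcp:10250,tcp:10251,tcp:10252 \
  --target-tags k8s-master
gcloud compute firewall-rules create k8s-worker-ingress \
  --allow tcp:10250,tcp:30000-32767 \
  --target-tags k8s-worker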

Installing Kubernetes using Kubeadm on Ubuntu

Set hostname on Each Node

# hostnamectl set-hostname "k8s-master"    # For Master node
# hostnamectl set-hostname "k8s-worker1"   # For 1st worker node
# hostnamectl set-hostname "k8s-worker2"   # For 2nd worker node

Add the following entries to the /etc/hosts file on each node:

192.168.1.14   k8s-master
192.168.1.16   k8s-worker1
192.168.1.17   k8s-worker2

Disable Swap and Bridge Traffic

Kubernetes does not work well with swap enabled. Run the following on both MASTER and WORKER nodes.

Disable SWAP

# swapoff -a
# sed -i.bak -r 's/(.+ swap .+)/#\1/' /etc/fstab

Load the following kernel modules on all the nodes:

# tee /etc/modules-load.d/containerd.conf <<EOF
overlay
br_netfilter
EOF
# modprobe overlay
# modprobe br_netfilter

Set the following kernel parameters for Kubernetes by running the tee command below:

# tee /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

Reload the above changes

# sysctl --system


Installing Docker

Run the following on both MASTER and WORKER nodes. Kubernetes requires a container runtime, and Docker is a popular choice. To install Docker, run the following commands:

apt-get update
apt-get install -y apt-transport-https ca-certificates curl software-properties-common gnupg2

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) \
  stable"

Install the Docker packages

apt-get update && sudo apt-get install \
  containerd.io=1.6.24-1 \
  docker-ce=5:20.10.24~3-0~ubuntu-$(lsb_release -cs) \
  docker-ce-cli=5:20.10.24~3-0~ubuntu-$(lsb_release -cs)


Setting up the Docker daemon

cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

mkdir -p /etc/systemd/system/docker.service.d

Start and enable the Docker service

systemctl daemon-reload
systemctl enable docker
systemctl restart docker
systemctl status docker

Install Kubeadm, Kubelet, and Kubectl

Add the Kubernetes repository and install Kubeadm, Kubelet, and Kubectl

apt-get update && sudo apt-get install -y apt-transport-https curl

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF

Installing Kubeadm, Kubelet, Kubectl

apt-get update
apt-get install -y kubelet kubeadm kubectl

apt-mark hold kubelet kubeadm kubectl

Start and enable Kubelet

systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet
systemctl status kubelet

Initializing CONTROL-PLANE

Run it on MASTER Node only. On your master node, initialize the Kubernetes cluster with the command below:

kubeadm init

Make note of the kubeadm join command that’s provided at the end; you’ll need it to join worker nodes.

Installing POD-NETWORK add-on

Run it on MASTER Node only

Set up kubeconfig for kubectl:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

Installing “Weave CNI” (Pod-Network add-on)

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

NOTE: There are multiple CNI plug-ins available; you can install the one of your choice. If the commands above don’t work, check the link below for more info.
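For example, Flannel is another widely used pod-network add-on; it can typically be installed with the same manifest used in the kubeadm guide later on this page:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml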

Joining Worker Nodes

Run it on WORKER nodes only

On your worker nodes, use the kubeadm join command from the kubeadm init output above to join them to the cluster.

kubeadm join <...>

If you no longer have the join command, or need to create a new one, run:

kubeadm token create --print-join-command

Verify the Cluster

On the master node, ensure your cluster is up and running:

kubectl get nodes

You should see the master node marked as “Ready” and any joined worker nodes.

Conclusion

Congratulations! You’ve successfully installed Kubernetes using Kubeadm on Ubuntu. With your Kubernetes cluster up and running, you’re ready to deploy and manage containerized applications and services at scale.

Kubernetes offers vast capabilities for container orchestration, scaling, and management. As you become more familiar with Kubernetes, you can explore advanced configurations and features to optimize your containerized environment.

I hope you found this helpful. Thank you for reading the DevopsRoles page!

Additional Resources:

Exploring the Docker and Kubernetes comparison

Introduction

Docker and Kubernetes are both open-source container platforms that enable the packaging and deployment of applications. In this article, we unravel the Docker and Kubernetes comparison.

In the realm of modern software deployment, where technology evolves at the speed of light, two names reign supreme: Docker and Kubernetes.

These two technological titans have transformed the landscape of application development, but they often find themselves in the spotlight together, leaving many wondering: what’s the real difference between Docker and Kubernetes? Buckle up as we embark on an illuminating journey through the cosmos of containers and orchestration.

What is Docker: The Art of Containerization

Docker, launched in 2013, is the pioneer in containerization technology. At its core, Docker allows developers to package an application and its dependencies, including libraries and configurations, into a single unit called a container.

Imagine Docker as a master craftsman, wielding tools to sculpt applications into self-contained, portable entities known as containers. Docker empowers developers to encapsulate an application along with its dependencies, libraries, and configurations into a single unit. This container becomes a traveler, capable of seamlessly transitioning between development environments, testing stages, and production servers.

The allure of Docker lies in its promise of consistency. No longer must developers grapple with the frustrating “it works on my machine” dilemma. Docker containers ensure that an application behaves the same way regardless of where it’s run, minimizing compatibility woes and facilitating smoother collaboration among developers.

What is Kubernetes: The Cosmic Choreographer

While Docker handles the creation and management of containers, Kubernetes steps in to manage the orchestration and deployment of these containers at scale. Launched by Google in 2014, Kubernetes provides a powerful solution for automating, scaling, and managing containerized applications.

In this cosmic dance of software deployment, Kubernetes steps onto the stage as a masterful choreographer, orchestrating the movement of containers with finesse and precision. Kubernetes introduces the concept of pods: groups of interconnected containers that share network and storage resources. This dynamic entity enables seamless load balancing, ensuring smooth traffic distribution across the dance floor of application deployment.

Yet, Kubernetes offers more than elegant moves. It’s a wizard of automation, capable of dynamically scaling applications based on demand. Like a cosmic conductor, Kubernetes monitors the performance of applications and orchestrates adjustments, ensuring that performance remains stellar even as the audience of users grows.

Docker and Kubernetes comparison

Docker vs Kubernetes Pros and cons

Key Differences and Complementary Roles

While Docker and Kubernetes fulfill distinct roles, they synergize seamlessly to offer a holistic solution for containerization and orchestration. Docker excels in crafting portable and uniform containers that encapsulate applications, while Kubernetes steps in to oversee the intricate dance of deploying, scaling, and monitoring these containers.

Consider Docker the cornerstone of containerization, providing the essential building blocks that Kubernetes builds upon. Through Docker, developers elegantly wrap applications and dependencies in containers that maintain their coherence across diverse environments, from the intimate realms of development laptops to the grand stages of production servers.

On the contrasting side of the spectrum, Kubernetes emerges as the maestro of container lifecycle management. Its genius lies in abstracting the complex infrastructure beneath and orchestrating multifaceted tasks: load balancing, scaling, healing, and updating, all with automated grace. As organizations venture into vast container deployments, Kubernetes emerges as the compass, ensuring not only high availability but also the optimal utilization of resources, resulting in a symphony of efficiency.

In conclusion

Docker and Kubernetes, though distinct technologies, are interconnected in the world of modern software deployment. Docker empowers developers to create portable and consistent containers, while Kubernetes takes the reins of orchestration, automating the deployment and scaling of those containers. Together, they offer a robust ecosystem for building, deploying, and managing applications in today’s fast-paced and ever-changing technology landscape.

Thank you for reading this Docker and Kubernetes comparison. I hope you found it helpful. Thank you for reading the DevopsRoles page!

Tool to Spin up Kwok Kubernetes Nodes

What is Kwok Kubernetes?

Kwok Kubernetes is a tool that allows you to quickly and easily spin up Kubernetes nodes in a local environment using VirtualBox and Vagrant.

Kwok provides an easy way to set up a local Kubernetes cluster for development and testing purposes.

It is not designed for production use, as it’s intended only for local development environments.

Deploy Kwok Kubernetes to a cluster

To deploy a Kwok Kubernetes cluster, you can follow these general steps:

  • Install VirtualBox and Vagrant on your local machine.
  • Download or clone the Kwok repository from GitHub.
  • Modify the config.yml file to specify the number of nodes and other settings for your Kubernetes cluster.
  • Run the vagrant up command to start the Kubernetes cluster.
  • Once the cluster is up and running, you can use the kubectl command-line tool to interact with it and deploy your applications.

Install VirtualBox and Vagrant on your local machine.

You can refer to our guides on installing Vagrant and VirtualBox.

Download or clone the Kwok repository from GitHub.

Go to the Kwok GitHub repository page: https://github.com/squat/kwok

Click on the green “Code” button, and then click on “Download ZIP” to download a zip file of the repository.

Alternatively, you can clone it from the command line:

git clone https://github.com/squat/kwok.git

Modify the config.yml file for your Kubernetes cluster.

Open the config.yml file in a text editor.

Modify the settings in the config.yml file as needed.

  • num_nodes: This setting specifies the number of nodes to create in the Kubernetes cluster.
  • vm_cpus: This setting specifies the number of CPUs to allocate to each node.
  • vm_memory: This setting specifies the amount of memory to allocate to each node.
  • ip_prefix: This setting specifies the IP address prefix to use for the nodes in the cluster.
  • kubernetes_version: This setting specifies the version of Kubernetes to use in the cluster.

Save your changes to the config.yml file.

For example, to create a three-node Kubernetes cluster with 2 CPUs and 4 GB of memory allocated to each node, using the IP address prefix “192.168.32” and Kubernetes version 1.21.0:

# Number of nodes to create
num_nodes: 3

# CPU and memory settings for each node
vm_cpus: 2
vm_memory: 4096

# Network settings
ip_prefix: "192.168.32"
network_plugin: flannel

# Kubernetes version to install
kubernetes_version: "1.21.0"

# Docker version to install
docker_version: "20.10.8"

Once you have modified the config.yml file with the desired settings for your Kubernetes cluster, you are ready to bring it up.

Start the Kubernetes cluster

Run the vagrant up command to start the Kubernetes cluster.

Now you can deploy your applications.
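For example, once the VMs are up, you might verify the nodes and run a quick test deployment (a sketch, assuming kubectl is already configured to point at the new cluster):

# Confirm all nodes are Ready, then deploy a test workload
kubectl get nodes
kubectl create deployment nginx --image=nginx
kubectl get pods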

Conclusion

You have used Kwok Kubernetes, a tool to spin up Kubernetes nodes. I hope you found this helpful. Thank you for reading the DevopsRoles page!

Kubernetes day one

Introduction

In this tutorial, we study Kubernetes (K8s) together. Let’s get started with Kubernetes day one. We will cover:

  • Installation methods for Kubernetes.
  • Using a managed Kubernetes service from a CSP.
  • Installing and configuring kubectl for Linux and macOS.
  • Configuring Kubernetes with Minikube on Windows or Linux.
  • Minikube install commands.
  • Creating the first Pod configuration in YAML.
  • How to generate Pod manifests via the CLI.

Installation methods for Kubernetes

There are multiple methods to get started with a full Kubernetes Environment.

  • Use the Managed Kubernetes Service.
  • Install and configure Kubernetes Manually
  • Use Minikube

Managed Kubernetes Service.

Providers include AWS, GCP, IBM, and others.

Minikube

It’s a tool that makes it easy to run Kubernetes locally.

Minikube runs a single-node K8s cluster in a VM on your laptop, so you can try Kubernetes out or develop with it.

Install and configure Kubernetes Manually

This is the hard way: you install and configure each component of K8s yourself.

Install Kubectl on Windows

kubectl allows you to run commands against K8s clusters.

Use kubectl to deploy applications, manage cluster resources, and view logs.

Download kubectl and add its location to the PATH environment variable.

Confirm that the kubectl command works on Windows:
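For example, one quick check is:

kubectl version --client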

Install Minikube on Windows

Follow the official Minikube installation instructions for Windows. Then, from a terminal, start your cluster.
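A minimal sketch (the driver and resource flags can be adjusted to your machine):

minikube start
minikube status
kubectl get nodes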

K8S PODS

A Kubernetes Pod is a group of one or more application containers that share resources.

  • A Pod runs on a Node.
  • Each Node is managed by the Master.
  • A Node has multiple pods.
  • A Node is a worker machine in K8s.

Equivalent commands: Docker vs. Kubernetes

docker run --name webserver nginx              ->  kubectl run webserver --image=nginx
docker exec -it webserver bash                 ->  kubectl exec -it webserver -- bash
docker stop webserver && docker rm webserver   ->  kubectl delete pod webserver
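As a first Pod exercise (see the topic list above), you can also generate a Pod manifest from the CLI instead of writing the YAML by hand. A minimal sketch, assuming a recent kubectl that supports --dry-run=client:

# Generate a Pod manifest without creating the Pod, then apply it
kubectl run webserver --image=nginx --dry-run=client -o yaml > webserver-pod.yaml
kubectl apply -f webserver-pod.yaml
kubectl get pod webserver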

Updating later!

How to Install Kubernetes on Ubuntu

To install Kubernetes on Ubuntu, you can use the kubeadm tool, which simplifies the process of setting up a cluster. Here’s a step-by-step guide on how to install Kubernetes on Ubuntu.

Kubernetes Requirements

  • Master Node.
  • Worker Node 1.
  • The host OS is Ubuntu Server.

1. Update Ubuntu Server

It is always a good idea to update the system packages first. Use the command below:

$ sudo apt update

2. Install Docker on Ubuntu Server

$ sudo apt install curl -y
$ curl -fsSL https://get.docker.com -o get-docker.sh
$ sh get-docker.sh
$ sudo usermod -aG docker $USER
$ docker --version

Repeat on all other nodes.

3. Add the Kubernetes signing key

Use the commands below to install the prerequisites and download the Kubernetes package signing key.

$ sudo apt-get install -y apt-transport-https ca-certificates curl
$ sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg

Repeat for each server node.

4. Add software repositories

Kubernetes packages are not included in the default repositories. Use the commands below:

$ echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
$ sudo apt-get update

Repeat for each server node.

5. Install the Kubernetes tools

Kubeadm (Kubernetes Admin) is a tool that helps initialize a cluster. Kubelet is the worker package, which runs on all nodes and starts containers. Kubectl is the command-line tool used to talk to the cluster. Use the commands below:

$ sudo apt-get install -y kubeadm kubelet kubectl
$ sudo apt-mark hold kubelet kubeadm kubectl

Repeat for each server node.

6. Disabling swap memory on each server

$ sudo swapoff -a
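To keep swap disabled across reboots, you can also comment out the swap entry in /etc/fstab (the same approach used in the Kubeadm guide above):

$ sudo sed -i.bak -r 's/(.+ swap .+)/#\1/' /etc/fstab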

7. Setting a unique hostname for each server node.

Decide which server to set as the primary node.

$ sudo hostnamectl set-hostname k8s-master

Next, set the worker node hostname:

$ sudo hostnamectl set-hostname k8s-node1

If you add more worker nodes, set a unique hostname on each one.

8. Initialize Kubernetes on the master node

On the master node, run the command below:

$ sudo kubeadm init --apiserver-advertise-address=192.168.56.11 --pod-network-cidr=10.100.0.0/16

The output terminal is as below:

vagrant@k8s-master:~$ sudo kubeadm init --apiserver-advertise-address=192.168.56.11 --pod-network-cidr=10.100.0.0/16
[init] Using Kubernetes version: v1.24.2
[preflight] Running pre-flight checks
        [WARNING SystemVerification]: missing optional cgroups: blkio
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.56.11]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.56.11 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.56.11 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 4.501962 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: 8ke6fa.3c5ll272057418qj
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.56.11:6443 --token 8ke6fa.3c5ll272057418qj \
        --discovery-token-ca-cert-hash sha256:4e69979a597f673781606eb7ab6bed5ccb741f46756b17f196b9cd76d3b51053

The kubeadm join command at the end of this output is used to join the worker nodes to the cluster. Next, create the kubeconfig directory for the cluster:

$ mkdir -p $HOME/.kube 
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config 
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

9. Deploy the pod network to the cluster.

A pod network allows communication between different nodes in the cluster. For this example, use the Flannel virtual network. Use the command below:

$ sudo kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Verify that everything is working and communicating:

# kubectl get all -A

Join the worker nodes to the cluster.

The syntax:

kubeadm join --token <token> <control-plane-host>:<control-plane-port> --discovery-token-ca-cert-hash sha256:<hash>

Use the kubeadm join command on each worker node to join it to the cluster.

$ sudo kubeadm join 192.168.56.11:6443 --token 8ke6fa.3c5ll272057418qj \
        --discovery-token-ca-cert-hash sha256:4e69979a597f673781606eb7ab6bed5ccb741f46756b17f196b9cd76d3b51053

After waiting a few minutes, you can check the status of the nodes. Switch to the master node and run the command below:

kubectl get nodes

Conclusion

You have successfully installed Kubernetes on Ubuntu. I hope this guide proves to be helpful for you in navigating the installation process. Thank you for choosing to read the DevopsRoles page! For more detailed instructions on how to install Kubernetes on Ubuntu, please refer to our comprehensive guide on installing Kubernetes on Ubuntu.

Setup Kubernetes Cluster with K3s

Introduction

In this tutorial, we show how to set up a Kubernetes cluster with K3s, a lightweight Kubernetes distribution developed by Rancher. K3s consumes fewer resources than traditional distributions and makes it easy to set up and manage a Kubernetes cluster.

To set up a Kubernetes cluster using K3s, you can follow these steps:

Setup Kubernetes Cluster with K3s

First, you need a Linux machine.

Start by provisioning the servers that will be part of your Kubernetes cluster.

You can use virtual machines or bare-metal servers. Ensure that the servers have a compatible operating system (such as Ubuntu, CentOS, or RHEL).

Download the K3s binary

Download it from the K3s releases page on GitHub, or use the wget or curl command to download it:

wget https://github.com/k3s-io/k3s/releases/download/v1.23.5%2Bk3s1/k3s

make the binary executable.

chmod +x k3s

The output terminal is as below:

vagrant@controller:~$ wget https://github.com/k3s-io/k3s/releases/download/v1.23.5%2Bk3s1/k3s
--2022-06-05 08:49:28--  https://github.com/k3s-io/k3s/releases/download/v1.23.5%2Bk3s1/k3s
Resolving github.com (github.com)... 20.205.243.166
Connecting to github.com (github.com)|20.205.243.166|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://objects.githubusercontent.com/github-production-release-asset-2e65be/135516270/88b18d50-2447-4216-b672-fdf17488cb41?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20220605%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20220605T084929Z&X-Amz-Expires=300&X-Amz-Signature=54b2aa58e831f8f8d179189c940eb5c38b4df7d4e3e33c18c9376b446f029742&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=135516270&response-content-disposition=attachment%3B%20filename%3Dk3s&response-content-type=application%2Foctet-stream [following]
--2022-06-05 08:49:28--  https://objects.githubusercontent.com/github-production-release-asset-2e65be/135516270/88b18d50-2447-4216-b672-fdf17488cb41?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20220605%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20220605T084929Z&X-Amz-Expires=300&X-Amz-Signature=54b2aa58e831f8f8d179189c940eb5c38b4df7d4e3e33c18c9376b446f029742&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=135516270&response-content-disposition=attachment%3B%20filename%3Dk3s&response-content-type=application%2Foctet-stream
Resolving objects.githubusercontent.com (objects.githubusercontent.com)... 185.199.108.133, 185.199.109.133, 185.199.110.133, ...
Connecting to objects.githubusercontent.com (objects.githubusercontent.com)|185.199.108.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 62468096 (60M) [application/octet-stream]
Saving to: ‘k3s’

k3s                                100%[==============================================================>]  59.57M  4.95MB/s    in 24s

2022-06-05 08:49:53 (2.46 MB/s) - ‘k3s’ saved [62468096/62468096]

vagrant@controller:~$ ls -l
total 61004
-rw-rw-r-- 1 vagrant vagrant 62468096 Mar 31 01:05 k3s
vagrant@controller:~$ chmod +x k3s
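Alternatively, instead of running the downloaded binary by hand, the official install script can set up K3s as a system service (in that case the ./k3s prefix used in the commands below is not needed):

curl -sfL https://get.k3s.io | sh -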

Start the K3s Server.

sudo ./k3s server

Check your Kubernetes cluster

sudo ./k3s kubectl get nodes


How to manage your cluster

You can run kubectl commands through the k3s binary. K3s provides a built-in kubectl utility.

For example, to drain the node you created:

sudo ./k3s kubectl drain your-hostname

Or cordon a node with the command below:

sudo ./k3s kubectl cordon your-hostname

Add nodes to a K3s cluster

We created a cluster with just one node in the steps above. If you want to add a node to your cluster, follow these steps.

On Master:

First, determine the node token value of your server. Get the token value with the command below:

sudo cat /var/lib/rancher/k3s/server/node-token

For example, the token value output is as below:

K10c94d11d4970c4ac58973c98ee32c9c1c4cb4fc30d81adfaf3ddf405ba1c48b49::server:7b1dcfc180415f105af019717027e77c

On the Worker

Install the K3s agent:

export K3S_NODE_NAME=node1
export K3S_URL="https://192.168.56.11:6443"
export K3S_TOKEN=K10c94d11d4970c4ac58973c98ee32c9c1c4cb4fc30d81adfaf3ddf405ba1c48b49::server:7b1dcfc180415f105af019717027e77c
curl -sfL https://get.k3s.io | sh -s -

Alternatively, you can start the agent manually on the additional node with the command below (substitute your server address; K3S_TOKEN refers to the token value exported above):

sudo k3s agent --server https://myserver:6443 --token $K3S_TOKEN

Repeat this process to add as many nodes as you want to your cluster.


Conclusion

You have set up a Kubernetes cluster using K3s. You can now use kubectl to deploy and manage your applications on the cluster.

You can go further with K3s, such as customizing networking or logging, changing the container runtime, and setting up certificates. I hope you found this helpful. Thank you for reading the DevopsRoles page!

Install Kubernetes Cluster with KubeKey

Introduction

In this tutorial, we show how to install a Kubernetes cluster with KubeKey. I will deploy a Kubernetes cluster using the KubeKey quickstart.

You can use KubeKey to spin up a Kubernetes deployment for development and testing purposes, as Kubernetes is the de-facto standard in container orchestration.

Requirements to install a Kubernetes Cluster with KubeKey

  • Running instance of Ubuntu Server.
  • A user with sudo privileges.

Install Docker

First, you need to install Docker on the Ubuntu server. Refer to How to install Docker on Ubuntu Server.

How to Install KubeKey

You need to download KubeKey and make it executable with the commands below:

sudo apt-get install conntrack -y
curl -sfL https://get-kk.kubesphere.io | VERSION=v1.2.1 sh -
chmod u+x kk


Make the kk binary executable from any directory by copying the file to the /usr/local/bin directory with the command below:

sudo cp kk /usr/local/bin

Verify the installation:

kk -h

The output terminal is as below:

vagrant@devopsroles:~$ kk -h
Deploy a Kubernetes or KubeSphere cluster efficiently, flexibly and easily. There are three scenarios to use KubeKey.
1. Install Kubernetes only
2. Install Kubernetes and KubeSphere together in one command
3. Install Kubernetes first, then deploy KubeSphere on it using https://github.com/kubesphere/ks-installer

Usage:
  kk [command]

Available Commands:
  add         Add nodes to kubernetes cluster
  certs       cluster certs
  completion  Generate shell completion scripts
  create      Create a cluster or a cluster configuration file
  delete      Delete nodes or cluster
  help        Help about any command
  init        Initializes the installation environment
  upgrade     Upgrade your cluster smoothly to a newer version with this command
  version     print the client version information

Flags:
      --debug        Print detailed information (default true)
  -h, --help         help for kk
      --in-cluster   Running inside the cluster

Use "kk [command] --help" for more information about a command.

Deploy the Cluster

You need to deploy the cluster with the command below:

sudo kk create cluster

This process will take some time.

The output terminal is as below:

vagrant@devopsroles:~$ sudo kk create cluster
+-------------+------+------+---------+----------+-------+-------+-----------+----------+------------+-------------+------------------+--------------+
| name        | sudo | curl | openssl | ebtables | socat | ipset | conntrack | docker   | nfs client | ceph client | glusterfs client | time         |
+-------------+------+------+---------+----------+-------+-------+-----------+----------+------------+-------------+------------------+--------------+
| devopsroles | y    | y    | y       | y        |       |       | y         | 20.10.12 |            |             |                  | UTC 14:06:10 |
+-------------+------+------+---------+----------+-------+-------+-----------+----------+------------+-------------+------------------+--------------+

This is a simple check of your environment.
Before installation, you should ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations

Continue this installation? [yes/no]: yes
INFO[14:06:20 UTC] Downloading Installation Files
INFO[14:06:20 UTC] Downloading kubeadm ...
INFO[14:06:24 UTC] Downloading kubelet ...
INFO[14:06:32 UTC] Downloading kubectl ...
INFO[14:06:36 UTC] Downloading helm ...
INFO[14:06:38 UTC] Downloading kubecni ...
INFO[14:06:43 UTC] Downloading etcd ...
INFO[14:06:48 UTC] Downloading docker ...
INFO[14:06:52 UTC] Downloading crictl ...
INFO[14:06:55 UTC] Configuring operating system ...
[devopsroles 10.0.2.15] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
vm.swappiness = 1
fs.inotify.max_user_instances = 524288
no crontab for root
INFO[14:06:56 UTC] Get cluster status
INFO[14:06:56 UTC] Installing Container Runtime ...
INFO[14:06:56 UTC] Start to download images on all nodes
[devopsroles] Downloading image: kubesphere/pause:3.4.1
[devopsroles] Downloading image: kubesphere/kube-apiserver:v1.21.5
[devopsroles] Downloading image: kubesphere/kube-controller-manager:v1.21.5
[devopsroles] Downloading image: kubesphere/kube-scheduler:v1.21.5
[devopsroles] Downloading image: kubesphere/kube-proxy:v1.21.5
[devopsroles] Downloading image: coredns/coredns:1.8.0
[devopsroles] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12
[devopsroles] Downloading image: calico/kube-controllers:v3.20.0
[devopsroles] Downloading image: calico/cni:v3.20.0
[devopsroles] Downloading image: calico/node:v3.20.0
[devopsroles] Downloading image: calico/pod2daemon-flexvol:v3.20.0
INFO[14:08:42 UTC] Getting etcd status
[devopsroles 10.0.2.15] MSG:
Configuration file will be created
INFO[14:08:42 UTC] Generating etcd certs
INFO[14:08:43 UTC] Synchronizing etcd certs
INFO[14:08:43 UTC] Creating etcd service
Push /home/vagrant/kubekey/v1.21.5/amd64/etcd-v3.4.13-linux-amd64.tar.gz to 10.0.2.15:/tmp/kubekey/etcd-v3.4.13-linux-amd64.tar.gz   Done
INFO[14:08:43 UTC] Starting etcd cluster
INFO[14:08:44 UTC] Refreshing etcd configuration
[devopsroles 10.0.2.15] MSG:
Created symlink /etc/systemd/system/multi-user.target.wants/etcd.service → /etc/systemd/system/etcd.service.
INFO[14:08:47 UTC] Backup etcd data regularly
INFO[14:08:54 UTC] Installing kube binaries
Push /home/vagrant/kubekey/v1.21.5/amd64/kubeadm to 10.0.2.15:/tmp/kubekey/kubeadm   Done
Push /home/vagrant/kubekey/v1.21.5/amd64/kubelet to 10.0.2.15:/tmp/kubekey/kubelet   Done
Push /home/vagrant/kubekey/v1.21.5/amd64/kubectl to 10.0.2.15:/tmp/kubekey/kubectl   Done
Push /home/vagrant/kubekey/v1.21.5/amd64/helm to 10.0.2.15:/tmp/kubekey/helm   Done
Push /home/vagrant/kubekey/v1.21.5/amd64/cni-plugins-linux-amd64-v0.9.1.tgz to 10.0.2.15:/tmp/kubekey/cni-plugins-linux-amd64-v0.9.1.tgz   Done
INFO[14:09:01 UTC] Initializing kubernetes cluster
[devopsroles 10.0.2.15] MSG:
W0323 14:09:02.442567    4631 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.21.5
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING FileExisting-socat]: socat not found in system path
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [devopsroles devopsroles.cluster.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local localhost] and IPs [10.233.0.1 10.0.2.15 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 7.502071 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node devopsroles as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node devopsroles as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: w6uaty.abpybmw8jhw1tdlg
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join lb.kubesphere.local:6443 --token w6uaty.abpybmw8jhw1tdlg \
        --discovery-token-ca-cert-hash sha256:98d31447eb6457d74c0d13088aceed7ae8dd1fd0b8e98cb1a15683fcbb5ef4d5 \
        --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join lb.kubesphere.local:6443 --token w6uaty.abpybmw8jhw1tdlg \
        --discovery-token-ca-cert-hash sha256:98d31447eb6457d74c0d13088aceed7ae8dd1fd0b8e98cb1a15683fcbb5ef4d5
[devopsroles 10.0.2.15] MSG:
node/devopsroles untainted
[devopsroles 10.0.2.15] MSG:
node/devopsroles labeled
[devopsroles 10.0.2.15] MSG:
service "kube-dns" deleted
[devopsroles 10.0.2.15] MSG:
service/coredns created
Warning: resource clusterroles/system:coredns is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
clusterrole.rbac.authorization.k8s.io/system:coredns configured
[devopsroles 10.0.2.15] MSG:
serviceaccount/nodelocaldns created
daemonset.apps/nodelocaldns created
[devopsroles 10.0.2.15] MSG:
configmap/nodelocaldns created
INFO[14:09:34 UTC] Get cluster status
INFO[14:09:35 UTC] Joining nodes to cluster
INFO[14:09:35 UTC] Deploying network plugin ...
[devopsroles 10.0.2.15] MSG:
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/calico-kube-controllers created
INFO[14:09:36 UTC] Congratulations! Installation is successful.

To verify kubectl has been installed, use the command below:

kubectl --help

The output terminal is as below:

vagrant@devopsroles:~$ kubectl --help
kubectl controls the Kubernetes cluster manager.

 Find more information at: https://kubernetes.io/docs/reference/kubectl/overview/

Basic Commands (Beginner):
  create        Create a resource from a file or from stdin.
  expose        Take a replication controller, service, deployment or pod and expose it as a new Kubernetes Service
  run           Run a particular image on the cluster
  set           Set specific features on objects

Basic Commands (Intermediate):
  explain       Documentation of resources
  get           Display one or many resources
  edit          Edit a resource on the server
  delete        Delete resources by filenames, stdin, resources and names, or by resources and label selector

Deploy Commands:
  rollout       Manage the rollout of a resource
  scale         Set a new size for a Deployment, ReplicaSet or Replication Controller
  autoscale     Auto-scale a Deployment, ReplicaSet, StatefulSet, or ReplicationController

Cluster Management Commands:
  certificate   Modify certificate resources.
  cluster-info  Display cluster info
  top           Display Resource (CPU/Memory) usage.
  cordon        Mark node as unschedulable
  uncordon      Mark node as schedulable
  drain         Drain node in preparation for maintenance
  taint         Update the taints on one or more nodes

Troubleshooting and Debugging Commands:
  describe      Show details of a specific resource or group of resources
  logs          Print the logs for a container in a pod
  attach        Attach to a running container
  exec          Execute a command in a container
  port-forward  Forward one or more local ports to a pod
  proxy         Run a proxy to the Kubernetes API server
  cp            Copy files and directories to and from containers.
  auth          Inspect authorization
  debug         Create debugging sessions for troubleshooting workloads and nodes

Advanced Commands:
  diff          Diff live version against would-be applied version
  apply         Apply a configuration to a resource by filename or stdin
  patch         Update field(s) of a resource
  replace       Replace a resource by filename or stdin
  wait          Experimental: Wait for a specific condition on one or many resources.
  kustomize     Build a kustomization target from a directory or URL.

Settings Commands:
  label         Update the labels on a resource
  annotate      Update the annotations on a resource
  completion    Output shell completion code for the specified shell (bash or zsh)

Other Commands:
  api-resources Print the supported API resources on the server
  api-versions  Print the supported API versions on the server, in the form of "group/version"
  config        Modify kubeconfig files
  plugin        Provides utilities for interacting with plugins.
  version       Print the client and server version information

Usage:
  kubectl [flags] [options]

Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).

Deploy the Kubernetes Dashboard

Deploy the Kubernetes Dashboard with the command:

sudo kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml

After deploying the dashboard, determine the IP address and port of the dashboard service with the command below:

sudo kubectl get svc -n kubernetes-dashboard

The output terminal is as below:

vagrant@devopsroles:~$ sudo kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
dashboard-metrics-scraper   ClusterIP   10.233.19.190   <none>        8000/TCP   64s
kubernetes-dashboard        ClusterIP   10.233.23.203   <none>        443/TCP    64s

You might want to expose the dashboard via NodePort instead of ClusterIP:

sudo kubectl edit svc kubernetes-dashboard -o yaml -n kubernetes-dashboard

This will open the configuration file in the vi editor.

Before:

type: ClusterIP

After changing it:

type: NodePort
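Alternatively, if you prefer not to edit the service interactively, a one-line patch should achieve the same change (a sketch):

sudo kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'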

Next, you must run the kubectl proxy command as below:

sudo kubectl proxy

While the proxy is running, open a web browser and point it to the IP address and port number listed in the results from the sudo kubectl get svc -n kubernetes-dashboard command.

vagrant@devopsroles:~$ sudo kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.233.19.190   <none>        8000/TCP        47h
kubernetes-dashboard        NodePort    10.233.23.203   <none>        443:32469/TCP   47h

Open a web browser:

https://192.168.3.7:32469/

To log in to the dashboard, you need to create a ServiceAccount object and a ClusterRoleBinding object. Create the account with the name admin-user in the namespace kubernetes-dashboard:

cat <<EOF | sudo kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
EOF

Create a ClusterRoleBinding object.

cat <<EOF | sudo kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF

Retrieve the login token with the command below:

sudo kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')


This isn’t production-ready, but it’s a great way to get up to speed with Kubernetes and even develop for the platform.

The result is a Kubernetes cluster running on Ubuntu Server, with its components in Docker containers, installed with KubeKey.

Conclusion

You have installed a Kubernetes cluster with KubeKey. I hope you found this helpful. Thank you for reading the DevopsRoles page!

Install Nginx Ingress Controller using Helm Chart

Introduction

In this tutorial, we show how to install the Nginx Ingress Controller using a Helm chart. I want to expose pods using an Ingress controller, and I will use the Nginx ingress controller to set up an AWS ELB.

In a production environment, you would typically use something like Istio Gateway or Traefik.

Create Nginx Ingress Controller using Helm Chart

kubectl create namespace nginx-ingress-controller

Add the new ingress-nginx/ingress-nginx repo:

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo add stable https://charts.helm.sh/stable
helm repo update

Install ingress-nginx:

helm install nginx-ingress-controller ingress-nginx/ingress-nginx

The output is as below:

$ helm install nginx-ingress-controller ingress-nginx/ingress-nginx
NAME: nginx-ingress-controller
LAST DEPLOYED: Sat Jun 26 16:11:27 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The ingress-nginx controller has been installed.
It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl --namespace default get services -o wide -w nginx-ingress-controller-ingress-nginx-controller'

An example Ingress that makes use of the controller:

  apiVersion: networking.k8s.io/v1beta1
  kind: Ingress
  metadata:
    annotations:
      kubernetes.io/ingress.class: nginx
    name: example
    namespace: foo
  spec:
    rules:
      - host: www.example.com
        http:
          paths:
            - backend:
                serviceName: exampleService
                servicePort: 80
              path: /
    # This section is only required if TLS is to be enabled for the Ingress
    tls:
        - hosts:
            - www.example.com
          secretName: example-tls

If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:

  apiVersion: v1
  kind: Secret
  metadata:
    name: example-tls
    namespace: foo
  data:
    tls.crt: <base64 encoded cert>
    tls.key: <base64 encoded key>
  type: kubernetes.io/tls

Check that the nginx-ingress-controller resources have been created:

kubectl get pod,svc,deploy

# The output is as below
$ kubectl get pod,svc,deploy
NAME                                                                  READY   STATUS    RESTARTS   AGE
pod/guestbook-tnc42                                                   1/1     Running   0          38m
pod/guestbook-vqgws                                                   1/1     Running   0          38m
pod/guestbook-vqnxf                                                   1/1     Running   0          38m
pod/nginx-ingress-controller-ingress-nginx-controller-7f8f65bf4g6c7   1/1     Running   0          4m23s
pod/redis-master-dp7h7                                                1/1     Running   0          41m
pod/redis-slave-54mt6                                                 1/1     Running   0          39m
pod/redis-slave-8g8h4                                                 1/1     Running   0          39m

NAME                                                                  TYPE           CLUSTER-IP       EXTERNAL-IP                                                              PORT(S)                      AGE
service/guestbook                                                     LoadBalancer   10.100.231.216   aff3d414c479f4faaa2ab82062c87fe5-264485147.us-west-2.elb.amazonaws.com   3000:32767/TCP               38m
service/kubernetes                                                    ClusterIP      10.100.0.1       <none>                                                                   443/TCP                      88m
service/nginx-ingress-controller-ingress-nginx-controller             LoadBalancer   10.100.57.204    a04b31985b6c64607a50d794ef692c57-291207293.us-west-2.elb.amazonaws.com   80:31382/TCP,443:32520/TCP   4m25s
service/nginx-ingress-controller-ingress-nginx-controller-admission   ClusterIP      10.100.216.28    <none>                                                                   443/TCP                      4m25s
service/redis-master                                                  ClusterIP      10.100.76.16     <none>                                                                   6379/TCP                     40m
service/redis-slave                                                   ClusterIP      10.100.126.163   <none>                                                                   6379/TCP                     39m

NAME                                                                READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-ingress-controller-ingress-nginx-controller   1/1     1            1           4m25s

The TYPE of the nginx-ingress-controller service is LoadBalancer; this is the L4 load balancer. The nginx-ingress-controller pod runs Nginx inside to do L7 load balancing inside the EKS cluster.

At this point, accessing the ELB address returns the default backend 404, because no Ingress resource has been created yet.

Create Ingress resource for L7 load balancing

For example, create an ingress_example.yaml file:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
    annotations:
      kubernetes.io/ingress.class: nginx
    name: guestbook
    namespace: default
spec:
    rules:
      - http:
          paths:
            - backend:
                serviceName: guestbook
                servicePort: 3000 
              path: /

Apply it

kubectl apply -f ingress_example.yaml

Get the public DNS name of the AWS ELB created by the Nginx ingress controller service:

kubectl  get svc nginx-ingress-controller-ingress-nginx-controller  | awk '{ print $4 }' | tail -1

The output is as below:


a04b31985b6c64607a50d794ef692c57-291207293.us-west-2.elb.amazonaws.com

Access the link in a browser.
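You can also check the response from the command line first, using the ELB hostname returned above (your hostname will differ):

curl -I http://a04b31985b6c64607a50d794ef692c57-291207293.us-west-2.elb.amazonaws.com/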

Delete service guestbook

$ kubectl delete svc guestbook
service "guestbook" deleted

Conclusion

You have installed the Nginx Ingress Controller using a Helm chart. I hope you found this helpful. Thank you for reading the DevopsRoles page!