Category Archives: Kubernetes

Learn Kubernetes with DevOpsRoles.com. Access comprehensive guides and tutorials to orchestrate containerized applications and streamline your DevOps processes with Kubernetes.

How to Install CNI for Kubernetes: A Comprehensive Guide

Introduction

In this tutorial, we’ll show how to install a CNI plugin for Kubernetes. Container orchestration has become an indispensable part of modern IT infrastructure management, and Kubernetes stands out as a leading platform in this domain. One of the key components that contribute to Kubernetes’ flexibility and scalability is the Container Networking Interface (CNI). In this comprehensive guide, we’ll delve into the intricacies of installing CNI for Kubernetes, ensuring smooth communication between pods and services within your cluster.

What is CNI and Why is it Important?

Before we delve into the installation process, let’s understand the significance of the Container Networking Interface (CNI) in the Kubernetes ecosystem. CNI serves as a standard interface for configuring networking in Linux containers. It facilitates seamless communication between pods, enabling them to communicate with each other and external resources. By abstracting network configuration, CNI simplifies the deployment and management of containerized applications within Kubernetes clusters.
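
For illustration, CNI plugins are configured through JSON files under /etc/cni/net.d/ on each node. Here is a minimal sketch using the reference bridge plugin with host-local IPAM; the network name and subnet are placeholder values, not something a plugin like Calico requires you to write by hand:

{
  "cniVersion": "0.3.1",
  "name": "example-net",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16"
  }
}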

Preparing for Installation

Before embarking on the installation journey, it’s essential to ensure that you have the necessary prerequisites in place. Firstly, you’ll need access to your Kubernetes cluster, along with appropriate permissions to install CNI plugins. Additionally, familiarity with basic Kubernetes concepts and command-line tools such as kubectl will prove beneficial during the installation process.
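
As a quick sanity check before you begin (assuming kubectl is already configured for your cluster), confirm API access and your permissions; CNI manifests typically create DaemonSets in the kube-system namespace:

kubectl cluster-info
kubectl auth can-i create daemonsets -n kube-system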

Step-by-Step How to Install CNI for Kubernetes

Example: Installing Calico CNI Plugin

Install kubectl: If you haven’t already installed kubectl, you can do so by following the official Kubernetes documentation for your operating system. For example, on a Linux system, you can use the following command:

curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl

Once installed, verify the installation by running:

kubectl version --client

Choose Calico as the CNI Plugin: Calico is a popular CNI plugin known for its simplicity and scalability. To install Calico, you can choose from various deployment methods, including YAML manifests or Helm charts. For this example, we’ll use YAML manifests.
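
If you prefer Helm instead, Calico is also distributed as the tigera-operator chart. A sketch of that route follows; the repository URL and chart layout can change between Calico releases, so confirm them in the Calico documentation. The rest of this guide follows the YAML manifest route.

helm repo add projectcalico https://docs.tigera.io/calico/charts
helm repo update
helm install calico projectcalico/tigera-operator --namespace tigera-operator --create-namespace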

Download the Calico Manifests: Calico provides YAML manifests for easy deployment. Download the manifests using the following command:

curl https://docs.projectcalico.org/manifests/calico.yaml -O

Configure Calico: Before applying the Calico manifests to your Kubernetes cluster, you may need to configure certain parameters, such as the IP pool for pod IPs. Open the calico.yaml file in a text editor and modify the configuration as needed.

vi calico.yaml

Here’s an example configuration snippet specifying an IP pool:

- name: CALICO_IPV4POOL_CIDR
  value: "192.168.0.0/16"

Apply Calico Manifests to Kubernetes: Once you’ve configured Calico according to your requirements, apply the manifests to your Kubernetes cluster using kubectl:

kubectl apply -f calico.yaml

This command will create the necessary Kubernetes resources, including Custom Resource Definitions (CRDs), Pods, Services, and ConfigMaps, to deploy Calico within your cluster.

Verify Installation: After applying the Calico manifests, verify the successful installation by checking the status of Calico pods and related resources:

kubectl get pods -n kube-system
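
Optionally, wait for the Calico node pods to become Ready before scheduling workloads. The k8s-app=calico-node label below is the one used by the stock manifest; verify it against your calico.yaml:

kubectl wait --for=condition=Ready pods -l k8s-app=calico-node -n kube-system --timeout=300s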

Conclusion

Installing Container Network Interface (CNI) plugins for Kubernetes is a critical step towards enabling seamless communication between containers within a Kubernetes cluster. This process, while it might seem intricate at first, can significantly streamline and secure network operations, providing the flexibility to choose from a wide array of CNI plugins that best fit the specific requirements of your environment. By following the best practices and steps outlined for the installation process, users can ensure that their Kubernetes cluster is equipped with a robust and efficient networking solution.

This not only enhances the performance of applications running on the cluster but also leverages Kubernetes’ capabilities to the fullest, ensuring a scalable, manageable, and highly available system. Whether you’re deploying on-premise or in the cloud, understanding and implementing CNI effectively can profoundly impact your Kubernetes ecosystem’s efficiency and reliability. Thank you for reading the DevopsRoles page!

Step-by-Step Guide to Setting Up Rolling Updates in Kubernetes with Nginx

Introduction

In the realm of Kubernetes, ensuring zero downtime during application updates is crucial. Rolling Updates in Kubernetes provide a seamless way to update the application’s pods without affecting its availability. In this guide, we’ll walk through setting up rolling updates for an Nginx deployment in Kubernetes, ensuring your services remain uninterrupted.

Preparation

Before proceeding, ensure you have Kubernetes and kubectl installed and configured. This guide assumes you have basic knowledge of Kubernetes components and YAML syntax.

Deployment and Service Configuration

First, let’s understand the components of our .yaml file, which configures both the Nginx deployment and service:

Deployment Configuration

  • apiVersion: apps/v1 indicates the API version.
  • kind: Deployment specifies the kind of Kubernetes object.
  • metadata: Defines the name of the deployment.
  • spec: Describes the desired state of the deployment:
    • selector: Ensures the deployment applies to pods with the label app: nginxdeployment.
    • revisionHistoryLimit: The number of old ReplicaSets to retain.
    • progressDeadlineSeconds: Time to wait before indicating progress has stalled.
    • minReadySeconds: Minimum duration a pod should be ready without any of its containers crashing, for it to be considered available.
    • strategy: Specifies the strategy used to replace old pods with new ones. Here, it’s set to RollingUpdate.
    • replicas: Number of desired pods.
    • template: Template for the pods the deployment creates.
    • containers: Specifies the Nginx container and its settings, such as image and ports.

Service Configuration

  • apiVersion: v1 indicates the API version.
  • kind: Service specifies the kind of Kubernetes object.
  • metadata: Defines the name of the service.
  • spec: Describes the desired state of the service:
    • selector: Maps the service to the deployment.
    • ports: Specifies the port configuration.

Implementing Rolling Updates in Kubernetes

To apply these configurations and initiate rolling updates, follow these steps:

Step 1. Create or update your deployment and service file named nginx-deployment-service.yaml with the content below.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginxdeployment
  revisionHistoryLimit: 3
  progressDeadlineSeconds: 300
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  replicas: 3
  template:
    metadata:
      labels:
        app: nginxdeployment
    spec:
      containers:
      - name: nginxdeployment
        # image: nginx:1.22
        image: nginx:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginxservice
spec:
  selector:
    app: nginxdeployment
  ports:
    - protocol: TCP
      port: 80

Step 2. Apply the configuration using the command:

kubectl apply -f nginx-deployment-service.yaml

Step 3. To update the Nginx image or make other changes, modify the nginx-deployment-service.yaml file, and then reapply it. Kubernetes will handle the rolling update according to your strategy specifications.
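
As an alternative sketch, you can trigger a rolling update imperatively with kubectl set image; nginxdeployment is the container name from the manifest above, and nginx:1.25 is just an example tag:

kubectl set image deployment/nginx-deployment nginxdeployment=nginx:1.25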

Monitoring and Troubleshooting

Monitor the update process using:

kubectl rollout status deployment/nginx-deployment

Check the status of your pods with:

kubectl get pods

If you need to revert to a previous version due to an issue, use:

kubectl rollout undo deployment/nginx-deployment
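
To roll back to a specific revision rather than just the previous one, inspect the rollout history first; --to-revision=2 is an example value:

kubectl rollout history deployment/nginx-deployment
kubectl rollout undo deployment/nginx-deployment --to-revision=2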

Conclusion

Rolling updates are essential for maintaining application availability and user satisfaction. By following this guide, you’ve learned how to set up and manage rolling updates for an Nginx deployment in Kubernetes. As you continue to work with Kubernetes, remember that careful planning and monitoring are key to successful deployment management. I hope you found this helpful. Thank you for reading the DevopsRoles page!

How to Install a Helm Chart on a Kubernetes Cluster

Introduction

In this blog post, we’ll explore the steps needed to install a Helm chart on a Kubernetes cluster. Helm is a package manager for Kubernetes that allows users to manage Kubernetes applications. Helm Charts help you define, install, and upgrade even the most complex Kubernetes applications.

How to Install a Helm Chart

Prerequisites

Before we begin, make sure you have the following:

  • A running Kubernetes cluster
  • The kubectl command-line tool, configured to communicate with your cluster
  • The Helm command-line tool installed

Step 1: Setting Up Your Environment

First, ensure your kubectl is configured to interact with your Kubernetes cluster. Test this by running the following command:

kubectl cluster-info

If you see the cluster details, you’re good to go. Next, install Helm if it’s not already installed. You can download Helm from Helm’s official website.

Step 2: Adding a Helm Chart Repository

Before you can install a chart, you need to add a chart repository. Helm charts are stored in repositories where they are organized and shared. Add the official Helm stable charts repository with this command:

helm repo add stable https://charts.helm.sh/stable

Then, update your charts list:

helm repo update

Step 3: Searching for the Right Helm Chart

You can search for Helm charts in the repository you just added:

helm search repo [chart-name]

Replace [chart-name] with the name of the application you want to install.

Step 4: Installing the Helm Chart

Once you’ve found the chart you want to install, you can install it using the following command:

helm install [release-name] [chart-name] --version [chart-version] --namespace [namespace]

Replace [release-name] with the name you want to give your deployment, [chart-name] with the name of the chart from the search results, [chart-version] with the specific chart version you want, and [namespace] with the namespace where you want to install the chart.
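
As a concrete sketch, installing an Nginx chart might look like this; my-nginx and the web namespace are arbitrary example names, and the Bitnami chart repository is assumed to be available:

helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-nginx bitnami/nginx --namespace web --create-namespace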

Step 5: Verifying the Installation

After installing the chart, you can check the status of the release:

helm status [release-name]

Additionally, use kubectl to see the resources created:

kubectl get all -n [namespace]
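
You can also list the Helm releases in the namespace to confirm the deployment status:

helm list -n [namespace]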

Conclusion

Congratulations! You’ve successfully installed a Helm chart on your Kubernetes cluster. Helm charts make it easier to deploy and manage applications on Kubernetes. By following these steps, you can install, upgrade, and manage any application on your Kubernetes cluster.

Remember, the real power of Helm comes from the community and the shared repositories of charts. Explore other charts and see how they can help you in your Kubernetes journey. I hope you found this helpful. Thank you for reading the DevopsRoles page!

Kubernetes RBAC (Role-Based Access Control)

Introduction

In Kubernetes, RBAC is a method for controlling access to resources based on the roles assigned to users or service accounts within the cluster. RBAC helps to enforce the principle of least privilege, ensuring that users only have the permissions necessary to perform their tasks.

Kubernetes RBAC best practices

Kubernetes create Service Account

Service accounts are used to authenticate applications running inside a Kubernetes cluster to the API server. Here’s how you can create a service account named huupvuser:

kubectl create sa huupvuser
kubectl get sa

Creating ClusterRole and ClusterRoleBinding

Creating a ClusterRole

A ClusterRole defines a set of permissions for accessing Kubernetes resources across all namespaces. Below is an example of creating a ClusterRole named test-reader that grants read-only access to pods:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: test-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]

Apply the ClusterRole:

kubectl apply -f clusterrole.yml

Creating a ClusterRoleBinding

A ClusterRoleBinding binds a ClusterRole to one or more subjects, such as users or service accounts, and defines the permissions granted to those subjects. Here’s an example of creating a ClusterRoleBinding named test-read-pod-global that binds the test-reader ClusterRole to the huupvuser service account:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: test-read-pod-global
subjects:
- kind: ServiceAccount
  name: huupvuser
  apiGroup: ""
  namespace: default
roleRef:
  kind: ClusterRole
  name: test-reader
  apiGroup: rbac.authorization.k8s.io

Apply the ClusterRoleBinding:

kubectl apply -f clusterrolebinding.yaml

Combined Role YAML

For convenience, you can combine the ClusterRole and ClusterRoleBinding into a single YAML file for easier management. Here’s an example role.yml:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: test-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: test-read-pod-global
subjects:
- kind: ServiceAccount
  name: huupvuser
  apiGroup: ""
  namespace: default
roleRef:
  kind: ClusterRole
  name: test-reader
  apiGroup: rbac.authorization.k8s.io

Apply the combined YAML file:

kubectl apply -f role.yml

Verify ClusterRole and ClusterRoleBinding:

kubectl get clusterrole | grep test-reader
kubectl get clusterrolebinding | grep test-read-pod-global
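
A quick way to confirm the binding actually works is kubectl auth can-i with service-account impersonation; listing pods should return yes, while a verb the role does not grant, such as delete, should return no:

kubectl auth can-i list pods --as=system:serviceaccount:default:huupvuser
kubectl auth can-i delete pods --as=system:serviceaccount:default:huupvuser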

Delete ClusterRole and ClusterRoleBinding:

kubectl delete clusterrole test-reader
kubectl delete clusterrolebinding test-read-pod-global

Conclusion

In this article, we’ve explored the basics of Role-Based Access Control (RBAC) in Kubernetes and RBAC best practices. Through the creation of Service Accounts, ClusterRoles, and ClusterRoleBindings, we’ve demonstrated how to grant specific permissions to users or service accounts within a Kubernetes cluster.

RBAC is a powerful mechanism for ensuring security and access control in Kubernetes environments, allowing administrators to define fine-grained access policies tailored to their specific needs. By understanding and implementing RBAC effectively, organizations can maintain a secure and well-managed Kubernetes infrastructure. I hope you found this helpful. Thank you for reading the DevopsRoles page!

A Comprehensive Guide to Validating Kubernetes Cluster Installed using Kubeadm

Introduction

When setting up a Kubernetes cluster using Kubeadm, it’s essential to validate the installation to ensure everything is functioning correctly. In this blog post, we will guide you through the steps to validate a Kubernetes cluster installed using Kubeadm and Kubectl, so you can ensure smooth operations.

If you haven’t built a cluster yet, see: Installing Kubernetes using Kubeadm on Ubuntu: A Step-by-Step Guide.

Validating Kubernetes Cluster Installed using Kubeadm: Step-by-Step Guide

Validating CMD Tools: Kubeadm & Kubectl

First, let’s check the versions of Kubeadm and Kubectl to ensure they match your cluster setup.

Checking “kubeadm” version

kubeadm version

Checking “kubectl” version

kubectl version

Make sure the versions of Kubeadm and Kubectl are compatible with your Kubernetes cluster.

Validating Cluster Nodes

Next, we need to ensure that all nodes in the cluster, including both Master and Worker nodes, are in the “Ready” state.

To check the status of all nodes:

kubectl get nodes
kubectl get nodes -o wide

This command will display a list of all nodes in the cluster along with their status. Ensure that all nodes are marked as “Ready.”

Validating Kubernetes Components

It’s crucial to verify that all Kubernetes components on the Master node are running correctly.

To check the status of Kubernetes components:

kubectl get pods -n kube-system
kubectl get pods -n kube-system -o wide

This command will show the status of various Kubernetes components in the kube-system namespace. Ensure that all components are in the “Running” state.

Validating Services: Docker & Kubelet

To ensure the proper functioning of your cluster, we need to validate the services Docker and Kubelet on all nodes.

Checking Docker service status

systemctl status docker

This command will display the status of the Docker service. Ensure that it is “Active” and running without any errors.

Checking Kubelet service status

systemctl status kubelet

This command will show the status of the Kubelet service. Verify that it is “Active” and running correctly.

Deploying Test Deployment

To further validate your cluster, let’s deploy a sample Nginx deployment and check its status.

Deploying the sample “nginx” deployment:

kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml

This command will create the Nginx deployment in your cluster.

Validate the deployment:

kubectl get deploy
kubectl get deploy -o wide

These commands will display the status of the Nginx deployment, including the number of replicas and the desired and current states.

Check if the pods are in the “Running” state:

kubectl get pods
kubectl get pods -o wide

Make sure all pods are running without any errors.

Verify that containers are running on the respective worker nodes:

docker ps

This command will show the running containers on each worker node. Ensure that the Nginx containers are running as expected.

Delete the deployment:

kubectl delete -f https://k8s.io/examples/controllers/nginx-deployment.yaml

This command will delete the Nginx deployment from your cluster.

Conclusion

By following these steps, you can validate your Kubernetes cluster installation using Kubeadm and Kubectl. It’s essential to ensure that all the components, services, and deployments are running correctly to have a reliable and stable Kubernetes environment. I hope you found this helpful. Thank you for reading the DevopsRoles page!

Installing Kubernetes using Kubeadm on Ubuntu: A Step-by-Step Guide

Introduction

Kubernetes has emerged as the go-to solution for container orchestration and management. If you’re looking to set up a Kubernetes cluster on a Ubuntu server, you’re in the right place. In this step-by-step guide, we’ll walk you through the process of installing Kubernetes using Kubeadm on Ubuntu.

Prerequisites

I have created 3 VMs for the Kubernetes cluster nodes on Google Compute Engine (GCE):

  • Master (1): 2 vCPUs, 4GB RAM
  • Worker (2): 2 vCPUs, 2GB RAM
  • OS: Ubuntu 16.04 or CentOS/RHEL 7

I have configured ingress firewall rules in Google Compute Engine (GCE) to open the following ports:

  • Master Node: 2379, 6443, 10250, 10251, 10252
  • Worker Node: 10250, 30000-32767

Installing Kubernetes using Kubeadm on Ubuntu

Set hostname on Each Node

# hostnamectl set-hostname "k8s-master"    // For Master node
# hostnamectl set-hostname "k8s-worker1"   // For 1st worker node
# hostnamectl set-hostname "k8s-worker2"   // For 2nd worker node

Add the following entries in /etc/hosts file on each node

192.168.1.14   k8s-master
192.168.1.16   k8s-worker1
192.168.1.17   k8s-worker2

Disable Swap and Bridge Traffic

Kubernetes does not work well with swap enabled. Run the following on MASTER & WORKER nodes.

Disable SWAP

# swapoff -a
# sed -i.bak -r 's/(.+ swap .+)/#\1/' /etc/fstab

Load the following kernel modules on all the nodes:

# tee /etc/modules-load.d/containerd.conf <<EOF
overlay
br_netfilter
EOF
# modprobe overlay
# modprobe br_netfilter

Set the following kernel parameters for Kubernetes by running the tee command below:

# tee /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

Reload the above changes

# sysctl --system

Installing Docker

Run it on MASTER & WORKER Nodes. Kubernetes requires a container runtime, and Docker is a popular choice. To install Docker, run the following commands:

apt-get update  
apt-get install -y  apt-transport-https ca-certificates curl software-properties-common gnupg2

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) \
  stable"

Install the Docker packages

apt-get update && sudo apt-get install \
  containerd.io=1.6.24-1 \
  docker-ce=5:20.10.24~3-0~ubuntu-$(lsb_release -cs) \
  docker-ce-cli=5:20.10.24~3-0~ubuntu-$(lsb_release -cs)

Setting up the Docker daemon

cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

mkdir -p /etc/systemd/system/docker.service.d

Start and enable the Docker service

systemctl daemon-reload
systemctl enable docker
systemctl restart docker
systemctl status docker

Install Kubeadm, Kubelet, and Kubectl

Add the Kubernetes repository and install Kubeadm, Kubelet, and Kubectl

apt-get update && sudo apt-get install -y apt-transport-https curl

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF

Installing Kubeadm, Kubelet, Kubectl

apt-get update
apt-get install -y kubelet kubeadm kubectl

apt-mark hold kubelet kubeadm kubectl

Start and enable Kubelet

systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet
systemctl status kubelet

Initializing CONTROL-PLANE

Run it on MASTER Node only. On your master node, initialize the Kubernetes cluster with the command below:

kubeadm init

Make note of the kubeadm join command that’s provided at the end; you’ll need it to join worker nodes.
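
Many pod network add-ons expect a pod CIDR to be set at init time; for example, Flannel defaults to 10.244.0.0/16. A sketch of that variant, with the CIDR adjusted to whatever your chosen CNI expects:

kubeadm init --pod-network-cidr=10.244.0.0/16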

Installing POD-NETWORK add-on

Run it on MASTER Node only

Configure kubectl for your user:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

Installing “Weave CNI” (Pod-Network add-on)

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

NOTE: There are multiple CNI plugins available, and you can install the one of your choice. If the above command doesn’t work, check the Kubernetes cluster networking add-ons documentation for more info.

Joining Worker Nodes

Run it on WORKER Node only

On your worker nodes, use the kubeadm join command from the kubeadm init output above to join them to the cluster.

kubeadm join <...>

If you no longer have the join command, or need to create a new one, run:

kubeadm token create --print-join-command

Verify the Cluster

On the master node, ensure your cluster is up and running

kubectl get nodes

You should see the master node marked as “Ready” and any joined worker nodes.

Conclusion

Congratulations! You’ve successfully installed Kubernetes using Kubeadm on Ubuntu. With your Kubernetes cluster up and running, you’re ready to deploy and manage containerized applications and services at scale.

Kubernetes offers vast capabilities for container orchestration, scaling, and management. As you become more familiar with Kubernetes, you can explore advanced configurations and features to optimize your containerized environment.

I hope you found this helpful. Thank you for reading the DevopsRoles page!

Exploring the Docker and Kubernetes comparison

Introduction

Docker and Kubernetes are both open-source container platforms that enable the packaging and deployment of applications. In this article, we will unravel the Docker and Kubernetes comparison.

In the realm of modern software deployment, where technology evolves at the speed of light, two names reign supreme: Docker and Kubernetes.

These two technological titans have transformed the landscape of application development, but they often find themselves in the spotlight together, leaving many wondering: what’s the real difference between Docker and Kubernetes? Buckle up as we embark on an illuminating journey through the cosmos of containers and orchestration.

What is Docker: The Art of Containerization

Docker, launched in 2013, is the pioneer in containerization technology. At its core, Docker allows developers to package an application and its dependencies, including libraries and configurations, into a single unit called a container.

Imagine Docker as a master craftsman, wielding tools to sculpt applications into self-contained, portable entities known as containers. Docker empowers developers to encapsulate an application along with its dependencies, libraries, and configurations into a single unit. This container becomes a traveler, capable of seamlessly transitioning between development environments, testing stages, and production servers.

The allure of Docker lies in its promise of consistency. No longer must developers grapple with the frustrating “it works on my machine” dilemma. Docker containers ensure that an application behaves the same way regardless of where it’s run, minimizing compatibility woes and facilitating smoother collaboration among developers.

What is Kubernetes: The Cosmic Choreographer

While Docker handles the creation and management of containers, Kubernetes steps in to manage the orchestration and deployment of these containers at scale. Launched by Google in 2014, Kubernetes provides a powerful solution for automating, scaling, and managing containerized applications.

In this cosmic dance of software deployment, Kubernetes steps onto the stage as a masterful choreographer, orchestrating the movement of containers with finesse and precision. Kubernetes introduces the concept of pods: groups of interconnected containers that share network and storage resources. This dynamic entity enables seamless load balancing, ensuring smooth traffic distribution across the dance floor of application deployment.

Yet, Kubernetes offers more than elegant moves. It’s a wizard of automation, capable of dynamically scaling applications based on demand. Like a cosmic conductor, Kubernetes monitors the performance of applications and orchestrates adjustments, ensuring that the performance remains stellar even as the audience and the users grow.

Docker and Kubernetes comparison

Key Differences and Complementary Roles

While Docker and Kubernetes fulfill distinct roles, they synergize seamlessly to offer a holistic solution for containerization and orchestration. Docker excels in crafting portable and uniform containers that encapsulate applications, while Kubernetes steps in to oversee the intricate dance of deploying, scaling, and monitoring these containers.

Consider Docker the cornerstone of containerization, providing the essential building blocks that Kubernetes builds upon. Through Docker, developers elegantly wrap applications and dependencies in containers that maintain their coherence across diverse environments, from the intimate realms of development laptops to the grand stages of production servers.

On the contrasting side of the spectrum, Kubernetes emerges as the maestro of container lifecycle management. Its genius lies in abstracting the complex infrastructure beneath and orchestrating multifaceted tasks (load balancing, scaling, mending, and updating) with automated grace. As organizations venture into vast container deployments, Kubernetes emerges as the compass, ensuring not only high availability but also the optimal utilization of resources, resulting in a symphony of efficiency.

In conclusion

Docker and Kubernetes, though distinct technologies, are interconnected in the world of modern software deployment. Docker empowers developers to create portable and consistent containers, while Kubernetes takes the reins of orchestration, automating the deployment and scaling of those containers. Together, they offer a robust ecosystem for building, deploying, and managing applications in today’s fast-paced and ever-changing technology landscape.

I hope you found this Docker and Kubernetes comparison helpful. Thank you for reading the DevopsRoles page!

Tool to Spin up Kwok Kubernetes Nodes

What is Kwok Kubernetes?

Kwok Kubernetes is a tool that allows you to quickly and easily spin up Kubernetes nodes in a local environment using VirtualBox and Vagrant.

Kwok provides an easy way to set up a local Kubernetes cluster for development and testing purposes.

It is not designed for production use, as it’s intended only for local development environments.

Deploying Kwok Kubernetes to a cluster

To deploy Kwok, you can follow these general steps:

  • Install VirtualBox and Vagrant on your local machine.
  • Download or clone the Kwok repository from GitHub.
  • Modify the config.yml file to specify the number of nodes and other settings for your Kubernetes cluster.
  • Run the vagrant up command to start the Kubernetes cluster.
  • Once the cluster is up and running, you can use the kubectl command-line tool to interact with it and deploy your applications.

Install VirtualBox and Vagrant on your local machine.

You can refer to the official documentation to install Vagrant and VirtualBox.

Download or clone the Kwok repository from GitHub.

Go to the Kwok GitHub repository page: https://github.com/squat/kwok

Click on the green “Code” button, and then click on “Download ZIP” to download a zip file of the repository

Alternatively, you can use the command line below:

git clone https://github.com/squat/kwok.git

Modify the config.yml file for your Kubernetes cluster.

Open the config.yml file in a text editor.

Modify the settings in the config.yml file as needed.

  • num_nodes: This setting specifies the number of nodes to create in the Kubernetes cluster.
  • vm_cpus: This setting specifies the number of CPUs to allocate to each node.
  • vm_memory: This setting specifies the amount of memory to allocate to each node.
  • ip_prefix: This setting specifies the IP address prefix to use for the nodes in the cluster.
  • kubernetes_version: This setting specifies the version of Kubernetes to use in the cluster.

Save your changes to the config.yml file.

For example, the following config creates a three-node Kubernetes cluster with 2 CPUs and 4 GB of memory allocated to each node, using the IP address prefix “192.168.32” and Kubernetes version 1.21.0:

# Number of nodes to create
num_nodes: 3

# CPU and memory settings for each node
vm_cpus: 2
vm_memory: 4096

# Network settings
ip_prefix: "192.168.32"
network_plugin: flannel

# Kubernetes version to install
kubernetes_version: "1.21.0"

# Docker version to install
docker_version: "20.10.8"

Once you have modified the config.yml file to specify the desired settings for your Kubernetes cluster, you are ready to bring it up.

Start the Kubernetes cluster

Run the vagrant up command to start the Kubernetes cluster.

Now you can deploy your applications.
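
As a quick sanity check (assuming the cluster’s kubeconfig has been wired up for kubectl, which Kwok’s Vagrant provisioning is expected to do), verify the nodes and deploy a test workload; hello-web is just an example name:

kubectl get nodes
kubectl create deployment hello-web --image=nginx
kubectl get pods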

Conclusion

You have used Kwok Kubernetes, a tool to spin up Kubernetes nodes. I hope you found this helpful. Thank you for reading the DevopsRoles page!

Kubernetes day one

Introduction

In this tutorial, we study K8s (Kubernetes) together. Let’s get started with Kubernetes day one. We’ll cover:

  • Installation methods for Kubernetes.
  • Use Managed Kubernetes service from CSP.
  • Install and configure Kubectl for Linux and macOS.
  • Configuring Kubernetes in Minikube Windows or Linux.
  • Minikube install commands.
  • Create the First POD configuration in YAML.
  • How to generate Pod Manifests via CLI.

Installation methods for Kubernetes

There are multiple methods to get started with a full Kubernetes environment:

  • Use the Managed Kubernetes Service.
  • Install and configure Kubernetes Manually
  • Use Minikube

Managed Kubernetes Service

Managed offerings are provided by AWS, GCP, IBM, and other providers.

Minikube

It’s a tool that makes it easy to run Kubernetes locally.

Minikube runs a single-node K8s cluster in a VM on your laptop, which you can use to try Kubernetes out or to develop with.

Install and configure Kubernetes Manually

It’s the Hard Way: you install and configure each component of K8s yourself.

Install Kubectl on Windows

Kubectl allows running commands against K8s clusters: you use it to deploy applications, manage cluster resources, and view logs. See the official Kubernetes documentation to download kubectl.

Add kubectl to your PATH environment variable, then confirm that the kubectl command works on Windows.

Install Minikube on Windows

Follow the official Minikube installation guide for Windows. Then, from a terminal, start your cluster.
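
Assuming minikube is on your PATH and a hypervisor such as VirtualBox is installed, a minimal first session looks like this (minikube sets the kubectl context automatically):

minikube start
minikube status
kubectl get nodes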

K8S PODS

A Kubernetes Pod is a group of one or more application containers that share resources.

  • A Pod runs on a Node.
  • Each Node is managed by the Master.
  • A Node can host multiple Pods.
  • A Node is a worker machine in K8s.
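
A minimal first Pod configuration in YAML, as promised in the topic list above, might look like this (the name, label, and image are example values); save it as pod.yaml and apply it with kubectl apply -f pod.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: webserver
  labels:
    app: webserver
spec:
  containers:
  - name: webserver
    image: nginx
    ports:
    - containerPort: 80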

Equivalent commands, Docker vs. K8s:

Docker: docker run --name webserver nginx
K8s:    kubectl run webserver --image=nginx

Docker: docker exec -it webserver bash
K8s:    kubectl exec -it webserver -- bash

Docker: docker stop webserver && docker rm webserver
K8s:    kubectl delete pod webserver
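
The topic list above also mentions generating Pod manifests via the CLI; kubectl can emit the YAML without creating anything by using a client-side dry run:

kubectl run webserver --image=nginx --dry-run=client -o yaml > pod.yaml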

Updating later!

How to install kubernetes on ubuntu

To install Kubernetes on Ubuntu, you can use the kubeadm tool, which simplifies the process of setting up a cluster. Here’s a step-by-step guide on how to install Kubernetes on Ubuntu.

Kubernetes Requirements

  • Master Node.
  • Worker Node 1.
  • The host OS is Ubuntu Server.

1. Update Ubuntu Server

It is always good practice to start from an updated system. Use the command line below:

$ sudo apt update

2. Install Docker on Ubuntu Server

$ sudo apt install curl -y
$ curl -fsSL https://get.docker.com -o get-docker.sh
$ sh get-docker.sh
$ sudo usermod -aG docker $USER
$ docker --version

Repeat on all other nodes.

3. Add the Kubernetes signing key

Install the prerequisites and download the Kubernetes apt signing key on each server:

$ sudo apt-get install -y apt-transport-https ca-certificates curl
$ sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg

Repeat for each server node.

4. Add software repositories

Kubernetes packages are not included in the default Ubuntu repositories. Use the command line below:

$ echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
$ sudo apt-get update

Repeat for each server node.

5. Install the Kubernetes tools

Kubeadm (Kubernetes Admin) is the tool that helps initialize a cluster. Kubelet is the node agent, which runs on all nodes and starts containers, and kubectl is the command-line client. Use the commands below:

$ sudo apt-get install -y kubeadm kubelet kubectl
$ sudo apt-mark hold kubelet kubeadm kubectl

Repeat for each server node.

6. Disabling swap memory on each server

$ sudo swapoff -a
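
To keep swap disabled across reboots, also comment out the swap entry in /etc/fstab (the same approach used in the kubeadm guide above):

$ sudo sed -i.bak -r 's/(.+ swap .+)/#\1/' /etc/fstab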

7. Setting a unique hostname for each server node.

Decide which server to set as the primary node.

$ sudo hostnamectl set-hostname k8s-master

Next, Set a worker node hostname

$ sudo hostnamectl set-hostname k8s-node1

If you add more worker nodes, set a unique hostname on each one.

8. Initialize Kubernetes on the master node

On the master node, run the command below:

$ sudo kubeadm init --apiserver-advertise-address=192.168.56.11 --pod-network-cidr=10.100.0.0/16

The output terminal is as below:

vagrant@k8s-master:~$ sudo kubeadm init --apiserver-advertise-address=192.168.56.11 --pod-network-cidr=10.100.0.0/16
[init] Using Kubernetes version: v1.24.2
[preflight] Running pre-flight checks
        [WARNING SystemVerification]: missing optional cgroups: blkio
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.56.11]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.56.11 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.56.11 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 4.501962 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: 8ke6fa.3c5ll272057418qj
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.56.11:6443 --token 8ke6fa.3c5ll272057418qj \
        --discovery-token-ca-cert-hash sha256:4e69979a597f673781606eb7ab6bed5ccb741f46756b17f196b9cd76d3b51053

The kubeadm join command printed at the end is used to join the worker nodes to the cluster; make a note of it. Next, create the kubeconfig directory for the cluster:

$ mkdir -p $HOME/.kube 
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config 
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

9. Deploy the pod network to the cluster

A pod network allows pods on different nodes in the cluster to communicate. For example, use the Flannel virtual network, applied with the command below:

$ sudo kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Verify that everything is working and communicating:

# kubectl get all -A

Join the worker nodes to the cluster.

The syntax:

kubeadm join --token <token> <control-plane-host>:<control-plane-port> --discovery-token-ca-cert-hash sha256:<hash>

Use the kubeadm join command on each worker node to join it to the cluster.

$ sudo kubeadm join 192.168.56.11:6443 --token 8ke6fa.3c5ll272057418qj \
        --discovery-token-ca-cert-hash \
sha256:4e69979a597f673781606eb7ab6bed5ccb741f46756b17f196b9cd76d3b51053

After a few minutes, you can check the status of the nodes. Switch to the master node and run the command below:

kubectl get nodes

Conclusion

You have successfully installed Kubernetes on Ubuntu. I hope this guide proves helpful in navigating the installation process. Thank you for reading the DevopsRoles page!