Tag Archives: Kubernetes

Implementing Canary Deployments on Kubernetes: A Comprehensive Guide

Introduction

Canary deployments are a powerful strategy for rolling out new application versions with minimal risk. By gradually shifting traffic to the new version, you can test and monitor its performance before fully committing.

Prerequisites

  • Access to a command line/terminal
  • Docker installed on the system
  • A Kubernetes cluster (or Minikube for local testing)
  • A fully configured kubectl command-line tool on your local machine

What is a Canary Deployment?

A canary deployment is a method for releasing new software versions to a small subset of users before making it available to the broader audience. Named after the practice of using canaries in coal mines to detect toxic gases, this strategy helps identify potential issues with the new version without affecting all users. By directing a small portion of traffic to the new version, developers can monitor its performance and gather feedback, allowing for a safe and controlled rollout.

Step-by-Step Guide to Canary Deployments

Step 1: Pull the Docker Image

Retrieve your base Docker image using:

docker pull <image-name>

Step 2: Create Deployment

Define your Kubernetes deployment in a YAML file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: <image-name>
        ports:
        - containerPort: 80

Apply the deployment with:

kubectl apply -f deployment.yaml

Step 3: Create Service

Set up a service to route traffic to your pods:

apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  selector:
    app: my-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer

Apply the service with:

kubectl apply -f service.yaml

Step 4: Deploy Initial Version

Ensure your initial deployment is functioning correctly. You can verify the status of your pods and services using:

kubectl get pods
kubectl get services

Step 5: Create Canary Deployment

Create a new deployment for the canary version:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app-canary
  template:
    metadata:
      labels:
        app: my-app-canary
    spec:
      containers:
      - name: my-app
        image: <new-image-name>
        ports:
        - containerPort: 80

Apply the canary deployment with:

kubectl apply -f canary-deployment.yaml

Step 6: Update Service

Shift a portion of traffic to the canary version. Because the stable service selects only the stable pods, a weighted split is typically done with a service mesh such as Istio or another traffic-management tool. For example, using an Istio VirtualService (subset-based routing like this also requires matching DestinationRule resources that define subsets v1 and v2):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-app
spec:
  hosts:
  - "*"
  gateways:
  - my-app-gateway
  http:
  - route:
    - destination:
        host: my-app
        subset: v1
      weight: 90
    - destination:
        host: my-app-canary
        subset: v2
      weight: 10
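
If you are not running a service mesh, a common approximation is to give both deployments a shared pod label and point the service at it; the traffic split then follows the replica ratio (3 stable to 1 canary above, i.e. roughly 25% to the canary, rather than the precise 90/10 a mesh provides). A minimal sketch, assuming you add an illustrative tier: web label to both pod templates:

# Add to BOTH pod templates (stable and canary), alongside the existing app label:
#   labels:
#     tier: web          # illustrative shared label
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  selector:
    tier: web            # matches stable and canary pods alike
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer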

Step 7: Monitor and Adjust

Monitor the performance and behavior of the canary deployment. Use tools like Prometheus, Grafana, or Kubernetes’ built-in monitoring. Adjust the traffic split as necessary until you are confident in the new version’s stability.
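
At the kubectl level, you can keep an eye on the canary pods directly, using the label from the canary deployment above:

# Watch the canary pods as they run
kubectl get pods -l app=my-app-canary -w

# Tail recent logs from the canary pods
kubectl logs -l app=my-app-canary --tail=50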

Conclusion

Implementing a canary deployment strategy on Kubernetes allows for safer, incremental updates to your applications. By carefully monitoring the new version and adjusting traffic as needed, you can ensure a smooth transition with minimal risk. This approach helps maintain application stability while delivering new features to users. I hope this was helpful. Thank you for reading the DevopsRoles page!

Mastering kubectl create namespace

Introduction

In the expansive world of Kubernetes, managing multiple applications systematically within the same cluster is made possible with namespaces. This article explores how to efficiently use the kubectl create namespace command and other related functionalities to enhance your Kubernetes management skills.

What is a Namespace?

A namespace in Kubernetes serves as a virtual cluster within a physical cluster. It helps in organizing resources where multiple teams or projects share the cluster, and it limits access and resource consumption per namespace.
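
For example, a ResourceQuota caps what all workloads in a namespace may consume together; a minimal sketch (the namespace name and limits here are illustrative):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota
  namespace: dev
spec:
  hard:
    requests.cpu: "4"        # total CPU requests across all pods
    requests.memory: 8Gi     # total memory requests across all pods
    pods: "20"               # maximum pod count in the namespace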

Best Practices for Using kubectl create namespace

Adding Labels to Existing Namespaces

Labels are key-value pairs associated with Kubernetes objects, which aid in organizing and selecting subsets of objects. To add a label to an existing namespace, use the command:

kubectl label namespaces <namespace-name> <label-key>=<label-value>

This modification helps in managing attributes or categorizing namespaces based on environments, ownership, or any other criteria.
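
For instance, to tag a namespace with its environment and confirm the result (the names are illustrative):

kubectl label namespaces dev environment=development
kubectl get namespaces --show-labels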

Simulating Namespace Creation

Simulating the creation of a namespace can be useful for testing scripts or understanding the impact of namespace creation without making actual changes. This can be done by appending --dry-run=client to your standard creation command, allowing you to verify the command syntax without executing it:

kubectl create namespace example --dry-run=client -o yaml

Creating a Namespace Using a YAML File

For more complex configurations, namespaces can be created using a YAML file. Here’s a basic template:

apiVersion: v1
kind: Namespace
metadata:
  name: mynamespace

Save this to a file (e.g., mynamespace.yaml) and apply it with:

kubectl apply -f mynamespace.yaml

Creating Multiple Namespaces at Once

To create multiple namespaces simultaneously, you can include multiple namespace definitions in a single YAML file, separated by ---:

apiVersion: v1
kind: Namespace
metadata:
  name: dev
---
apiVersion: v1
kind: Namespace
metadata:
  name: prod

Then apply the file using the same kubectl apply -f command.
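
For a handful of namespaces, a short shell loop achieves the same result imperatively:

for ns in dev staging prod; do
  kubectl create namespace "$ns"
done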

Creating a Namespace Using a JSON File

Similarly, namespaces can be created using JSON format. Here’s how a simple namespace JSON looks:

{
  "apiVersion": "v1",
  "kind": "Namespace",
  "metadata": {
    "name": "jsonnamespace"
  }
}

This can be saved to a file and applied using:

kubectl apply -f jsonnamespace.json

Best Practices When Choosing a Namespace

Selecting a name for your namespace involves more than a naming convention. Consider the purpose of the namespace, align the name with its intended use (e.g., test, development, production), and avoid reserved Kubernetes names (such as default, kube-system, and kube-public) or overly generic terms that could cause confusion.

Conclusion

Namespaces are a fundamental part of Kubernetes management, providing essential isolation, security, and scalability. By mastering the kubectl create namespace command and related functionalities, you can enhance the organization and efficiency of your cluster. Whether you’re managing a few services or orchestrating large-scale applications, namespaces are invaluable tools in your Kubernetes toolkit. I hope this was helpful. Thank you for reading the DevopsRoles page!

Adding Kubernetes Worker Nodes: A Detailed Guide to Expanding Your Cluster

Introduction

Kubernetes has become an essential tool for managing containerized applications, and expanding your cluster by adding worker nodes can enhance performance and reliability. In this article, we will guide you through the process of adding worker nodes to your Kubernetes cluster.

Prerequisites for Adding Kubernetes Worker Nodes

Before you begin, ensure that:

  • The Kubernetes CLI (kubectl) is installed and configured on your machine.
  • You have administrative rights on the Kubernetes cluster you are working with.

Adding Worker Nodes to a Kubernetes Cluster

Step 1: Install and Configure Kubelet

First, install the kubelet on the new machine that will act as a worker node. Assuming the Kubernetes apt repository is already configured on that machine, you can install it with:

sudo apt-get update && sudo apt-get install -y kubelet
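
If the repository is not yet configured, it can be added first. A sketch based on the upstream package instructions (the version path, shown here as v1.30, is an assumption; adjust it to the release you need):

sudo apt-get update && sudo apt-get install -y apt-transport-https ca-certificates curl gpg
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update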

Step 2: Join the Cluster

After installing the kubelet, your new node needs a token to join the cluster. Generate one on an existing control-plane node with:

kubeadm token create --print-join-command 

Then, on the new node, run the full join command you just received. It has the form:

sudo kubeadm join <control-plane-IP>:<port> --token <your-token> --discovery-token-ca-cert-hash sha256:<hash>

Step 3: Check the Status of Nodes

You can check whether the new worker nodes have successfully been added to the cluster by using the command:

kubectl get nodes 

This command displays all the nodes in the cluster, including their status.
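
A freshly joined node often reports NotReady for a short while as networking starts; you can watch it transition:

kubectl get nodes --watch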

Best Practices and Tips

  • Security: Always ensure that all nodes in your cluster are up-to-date with security patches.
  • Monitoring and Management: Use tools like Prometheus and Grafana to monitor the performance of nodes.

Steps to Add More Worker Nodes to Your Kubernetes Cluster

Prepare the New Nodes:

  • Hardware/VM Setup: Ensure that each new worker node has the required hardware specifications (CPU, memory, disk space, network connectivity) to meet your cluster’s performance needs.
  • Operating System: Install a compatible operating system and ensure it is fully updated.

Install Necessary Software:

  • Container Runtime: Install a container runtime such as Docker, containerd, or CRI-O.
  • Kubelet: Install Kubelet, which is responsible for running containers on the node.
  • Kubeadm and Kube-proxy: These tools help in starting the node and connecting it to the cluster.

Join the Node to the Cluster:

Generate a join command from one of your existing control-plane nodes:

kubeadm token create --print-join-command

Run the output join command on each new worker node. It will look something like:

kubeadm join <control-plane-host>:<port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>

Verify Node Addition:

Once the new nodes have joined the cluster, check their status:

kubectl get nodes

This command shows all the nodes in the cluster, including the newly added workers, and their status.

Conclusion

Successfully adding worker nodes to your Kubernetes cluster can significantly enhance its performance and scalability. By following the steps outlined in this guide—from installing Kubelet to joining the new nodes to the cluster—you can ensure a smooth expansion process.

Remember, maintaining the security of your nodes and continuously monitoring their performance is crucial for sustaining the health and efficiency of your Kubernetes environment. As your cluster grows, keep leveraging best practices and the robust tools available within the Kubernetes ecosystem to manage your resources effectively.

Whether you’re scaling up for increased demand or improving redundancy, the ability to efficiently add worker nodes is a key skill for any Kubernetes administrator. This capability not only supports your current needs but also prepares your infrastructure for future growth and challenges. I hope this was helpful. Thank you for reading the DevopsRoles page!

A Comprehensive Guide to Kubernetes RBAC Verbs List: From A to Z

Introduction

Kubernetes, a leading container management platform, offers a robust access control framework known as Role-Based Access Control (RBAC). RBAC allows users to tightly control access to Kubernetes resources, thereby enhancing security and efficient management.

Defining RBAC Verbs

  1. Get: This verb allows users to access detailed information about a specific object. In a multi-user environment, ensuring that only authorized users can “get” information is crucial.
  2. List: Provides the ability to see all objects within a group, allowing users a comprehensive view of available resources.
  3. Watch: Users can monitor real-time changes to Kubernetes objects, aiding in quick detection and response to events.
  4. Create: Creating new objects is fundamental for expanding and configuring services within Kubernetes.
  5. Update: Updating an object allows users to modify existing configurations, necessary for maintaining stable and optimal operations.
  6. Patch: Similar to “update,” but allows for modifications to a part of the object without sending a full new configuration.
  7. Delete: Removing an object when it’s no longer necessary or to manage resources more effectively.
  8. Deletecollection: Allows users to remove a batch of objects, saving time and effort in managing large resources.

Why Are RBAC Verbs Important?

RBAC verbs are central to configuring access in Kubernetes. They not only help optimize resource management but also ensure that operations are performed within the granted permissions.
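
A quick way to see which verbs you currently hold is kubectl auth can-i:

# List everything the current user may do in the default namespace
kubectl auth can-i --list --namespace default

# Check a single verb/resource pair
kubectl auth can-i watch pods --namespace default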

Comparing with Other Access Control Methods

Compared to ABAC (Attribute-Based Access Control) and DAC (Discretionary Access Control), RBAC offers a more efficient and manageable approach in multi-user and multi-service environments like Kubernetes. Although RBAC can be complex to configure initially, it provides significant benefits in terms of security and compliance.

For example, a typical RBAC role might look like this in YAML format when defined in a Kubernetes manifest:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]

In this example, the Role named “pod-reader” allows the user to perform “get”, “watch”, and “list” operations on Pods within the “default” namespace. This kind of granularity helps administrators control access to Kubernetes resources effectively, ensuring that users and applications have the permissions they need without exceeding what is necessary for their function.
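
Note that a Role grants nothing until it is bound to a subject. A minimal RoleBinding sketch for the Role above (the user name jane is illustrative):

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: jane                     # illustrative subject
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io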

Conclusion

RBAC is an indispensable tool in Kubernetes management, ensuring that each operation on the system is controlled and complies with security policies. Understanding and effectively using RBAC verbs will help your organization operate smoothly and safely.

References

For more information, consult the official Kubernetes documentation and online courses on Kubernetes management and security. I hope this was helpful. Thank you for reading the DevopsRoles page!

Step-by-Step Guide to Setting Up Rolling Updates in Kubernetes with Nginx

Introduction

In the realm of Kubernetes, ensuring zero downtime during application updates is crucial. Rolling Updates in Kubernetes provide a seamless way to update the application’s pods without affecting its availability. In this guide, we’ll walk through setting up rolling updates for an Nginx deployment in Kubernetes, ensuring your services remain uninterrupted.

Preparation

Before proceeding, ensure you have Kubernetes and kubectl installed and configured. This guide assumes you have basic knowledge of Kubernetes components and YAML syntax.

Deployment and Service Configuration

First, let’s understand the components of our .yaml file which configures both the Nginx deployment and service:

Deployment Configuration

  • apiVersion: apps/v1 indicates the API version.
  • kind: Deployment specifies the kind of Kubernetes object.
  • metadata: Defines the name of the deployment.
  • spec: Describes the desired state of the deployment:
    • selector: Ensures the deployment applies to pods with the label app: nginxdeployment.
    • revisionHistoryLimit: The number of old ReplicaSets to retain.
    • progressDeadlineSeconds: Time to wait before indicating progress has stalled.
    • minReadySeconds: Minimum duration a pod should be ready without any of its containers crashing, for it to be considered available.
    • strategy: Specifies the strategy used to replace old pods with new ones. Here, it’s set to RollingUpdate.
    • replicas: Number of desired pods.
    • template: Template for the pods the deployment creates.
    • containers: Specifies the Nginx container and its settings, such as image and ports.

Service Configuration

  • apiVersion: v1 indicates the API version.
  • kind: Service specifies the kind of Kubernetes object.
  • metadata: Defines the name of the service.
  • spec: Describes the desired state of the service:
    • selector: Maps the service to the deployment.
    • ports: Specifies the port configuration.

Implementing Rolling Updates in Kubernetes

To apply these configurations and initiate rolling updates, follow these steps:

Step 1. Create or update your deployment and service file named nginx-deployment-service.yaml with the content below.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginxdeployment
  revisionHistoryLimit: 3
  progressDeadlineSeconds: 300
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  replicas: 3
  template:
    metadata:
      labels:
        app: nginxdeployment
    spec:
      containers:
      - name: nginxdeployment
        # image: nginx:1.22
        image: nginx:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginxservice
spec:
  selector:
    app: nginxdeployment
  ports:
    - protocol: TCP
      port: 80

Step 2. Apply the configuration using the command:

kubectl apply -f nginx-deployment-service.yaml

Step 3. To update the Nginx image or make other changes, modify nginx-deployment-service.yaml and reapply it; Kubernetes will perform the rolling update according to your strategy specifications. Note that a rollout only triggers when the pod template changes, so pin a specific image tag (e.g., nginx:1.22, then nginx:1.23) rather than nginx:latest, which leaves the template unchanged when reapplied.
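
Alternatively, you can trigger the same rolling update imperatively; the container name below matches the deployment above, while the target tag is illustrative:

kubectl set image deployment/nginx-deployment nginxdeployment=nginx:1.23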

Monitoring and Troubleshooting

Monitor the update process using:

kubectl rollout status deployment/nginx-deployment

Check the status of your pods with:

kubectl get pods

If you need to revert to a previous version due to an issue, use:

kubectl rollout undo deployment/nginx-deployment
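
To roll back to a specific revision rather than the immediately previous one, inspect the revision history first (revision 2 below is illustrative):

kubectl rollout history deployment/nginx-deployment
kubectl rollout undo deployment/nginx-deployment --to-revision=2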

Conclusion

Rolling updates are essential for maintaining application availability and user satisfaction. By following this guide, you’ve learned how to set up and manage rolling updates for an Nginx deployment in Kubernetes. As you continue to work with Kubernetes, remember that careful planning and monitoring are key to successful deployment management. I hope this was helpful. Thank you for reading the DevopsRoles page!

Kubernetes RBAC (Role-Based Access Control)

Introduction

In Kubernetes, RBAC is a method for controlling access to resources based on the roles assigned to users or service accounts within the cluster. RBAC helps enforce the principle of least privilege, ensuring that users have only the permissions necessary to perform their tasks.

Kubernetes RBAC best practices

Kubernetes create Service Account

Service accounts are used to authenticate applications running inside a Kubernetes cluster to the API server. Here’s how you can create a service account named huupvuser:

kubectl create sa huupvuser
kubectl get sa

The new service account should now appear in the kubectl get sa output.

Creating ClusterRole and ClusterRoleBinding

Creating a ClusterRole

A ClusterRole defines a set of permissions for accessing Kubernetes resources across all namespaces. Below is an example of creating a ClusterRole named test-reader that grants read-only access to pods:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: test-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]

Apply the ClusterRole:

kubectl apply -f clusterrole.yml

Creating a ClusterRoleBinding

A ClusterRoleBinding binds a ClusterRole to one or more subjects, such as users or service accounts, and defines the permissions granted to those subjects. Here’s an example of creating a ClusterRoleBinding named test-read-pod-global that binds the test-reader ClusterRole to the huupvuser service account:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: test-read-pod-global
subjects:
- kind: ServiceAccount
  name: huupvuser
  apiGroup: ""
  namespace: default
roleRef:
  kind: ClusterRole
  name: test-reader
  apiGroup: rbac.authorization.k8s.io

Apply the ClusterRoleBinding:

kubectl apply -f clusterrolebinding.yaml

Combined Role YAML

For convenience, you can combine the ClusterRole and ClusterRoleBinding into a single YAML file for easier management. Here’s an example role.yml:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: test-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: test-read-pod-global
subjects:
- kind: ServiceAccount
  name: huupvuser
  apiGroup: ""
  namespace: default
roleRef:
  kind: ClusterRole
  name: test-reader
  apiGroup: rbac.authorization.k8s.io

Apply the combined YAML file:

kubectl apply -f role.yml

Verify ClusterRole and ClusterRoleBinding:

kubectl get clusterrole | grep test-reader
kubectl get clusterrolebinding | grep test-read-pod-global

Both objects should appear in the filtered output.
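
You can also confirm that the binding grants the intended access by impersonating the service account:

kubectl auth can-i list pods --as=system:serviceaccount:default:huupvuser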

Delete ClusterRole and ClusterRoleBinding:

kubectl delete clusterrole test-reader
kubectl delete clusterrolebinding test-read-pod-global


Conclusion

We’ve explored the basics of Role-Based Access Control (RBAC) in Kubernetes and some RBAC best practices. Through the creation of service accounts, ClusterRoles, and ClusterRoleBindings, we’ve demonstrated how to grant specific permissions to users or service accounts within a Kubernetes cluster.

RBAC is a powerful mechanism for ensuring security and access control in Kubernetes environments, allowing administrators to define fine-grained access policies tailored to their specific needs. By understanding and implementing RBAC effectively, organizations can maintain a secure and well-managed Kubernetes infrastructure. I hope this was helpful. Thank you for reading the DevopsRoles page!

Setup Kubernetes Cluster with K3s

Introduction

In this tutorial, we set up a Kubernetes cluster with K3s, a lightweight Kubernetes distribution developed by Rancher. K3s consumes fewer resources than traditional distributions and makes a Kubernetes cluster easy to set up and manage.

To set up a Kubernetes cluster using K3s, you can follow these steps:

Setup Kubernetes Cluster with K3s

First, you need one or more Linux machines. Start by provisioning the servers that will be part of your Kubernetes cluster. You can use virtual machines or bare-metal servers; ensure they run a compatible operating system (such as Ubuntu, CentOS, or RHEL).

Download the K3s binary

Download it from the K3s GitHub releases page, or fetch it with wget or curl:

wget https://github.com/k3s-io/k3s/releases/download/v1.23.5%2Bk3s1/k3s

Make the binary executable:

chmod +x k3s

The terminal output is as follows:

vagrant@controller:~$ wget https://github.com/k3s-io/k3s/releases/download/v1.23.5%2Bk3s1/k3s
--2022-06-05 08:49:28--  https://github.com/k3s-io/k3s/releases/download/v1.23.5%2Bk3s1/k3s
Resolving github.com (github.com)... 20.205.243.166
Connecting to github.com (github.com)|20.205.243.166|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://objects.githubusercontent.com/github-production-release-asset-2e65be/135516270/88b18d50-2447-4216-b672-fdf17488cb41?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20220605%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20220605T084929Z&X-Amz-Expires=300&X-Amz-Signature=54b2aa58e831f8f8d179189c940eb5c38b4df7d4e3e33c18c9376b446f029742&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=135516270&response-content-disposition=attachment%3B%20filename%3Dk3s&response-content-type=application%2Foctet-stream [following]
--2022-06-05 08:49:28--  https://objects.githubusercontent.com/github-production-release-asset-2e65be/135516270/88b18d50-2447-4216-b672-fdf17488cb41?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20220605%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20220605T084929Z&X-Amz-Expires=300&X-Amz-Signature=54b2aa58e831f8f8d179189c940eb5c38b4df7d4e3e33c18c9376b446f029742&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=135516270&response-content-disposition=attachment%3B%20filename%3Dk3s&response-content-type=application%2Foctet-stream
Resolving objects.githubusercontent.com (objects.githubusercontent.com)... 185.199.108.133, 185.199.109.133, 185.199.110.133, ...
Connecting to objects.githubusercontent.com (objects.githubusercontent.com)|185.199.108.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 62468096 (60M) [application/octet-stream]
Saving to: ‘k3s’

k3s                                100%[==============================================================>]  59.57M  4.95MB/s    in 24s

2022-06-05 08:49:53 (2.46 MB/s) - ‘k3s’ saved [62468096/62468096]

vagrant@controller:~$ ls -l
total 61004
-rw-rw-r-- 1 vagrant vagrant 62468096 Mar 31 01:05 k3s
vagrant@controller:~$ chmod +x k3s

Start the K3s server:

sudo ./k3s server

Check your Kubernetes cluster

sudo ./k3s kubectl get nodes


How to manage your cluster

You can run kubectl commands through the k3s binary. K3s provides a built-in kubectl utility.

For example, to drain the node you created:

sudo ./k3s kubectl drain your-hostname

Or cordon a node:

sudo ./k3s kubectl cordon your-hostname
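
The K3s server also writes a standard kubeconfig to /etc/rancher/k3s/k3s.yaml, so a separately installed kubectl can manage the cluster too (reading the file may require root):

export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
kubectl get nodes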

Add nodes to a K3s cluster

We created a cluster with just one node in the steps above. To add a worker node to your cluster:

On Master:

First, determine the node token value of your server. Get the token with:

sudo cat /var/lib/rancher/k3s/server/node-token

For example, the token output looks like this:

K10c94d11d4970c4ac58973c98ee32c9c1c4cb4fc30d81adfaf3ddf405ba1c48b49::server:7b1dcfc180415f105af019717027e77c

On the worker

Install the K3s agent:

export K3S_NODE_NAME=node1
export K3S_URL="https://192.168.56.11:6443"
export K3S_TOKEN=K10c94d11d4970c4ac58973c98ee32c9c1c4cb4fc30d81adfaf3ddf405ba1c48b49::server:7b1dcfc180415f105af019717027e77c
curl -sfL https://get.k3s.io | sh -s -

Alternatively, if you downloaded the binary manually, start the agent yourself:

sudo k3s agent --server https://myserver:6443 --token ${K3S_TOKEN}

Repeat this process to add as many nodes as you want to your cluster.

Back on the master, the new node should now appear in the output of sudo ./k3s kubectl get nodes.

Conclusion

You have set up a Kubernetes cluster using K3s. You can now use kubectl to deploy and manage applications on the cluster.

Beyond this, you can customize networking or logging, change the container runtime, and set up certificates. I hope this was helpful. Thank you for reading the DevopsRoles page!

Unlocking helm commands for Kubernetes

Introduction

Explore the essential Helm commands for Kubernetes in this detailed tutorial. Whether you’re a beginner or a seasoned Kubernetes user, this guide will help you install Helm and utilize its commands to manage charts and repositories effectively, streamlining your Kubernetes deployments.

In this tutorial, we cover how to install Helm and run Helm commands. Helm provides many commands for managing charts and Helm repositories. Now, let’s look at the Helm commands for Kubernetes:

  • Helm Commands for Kubernetes: Simplifies application deployment and management in Kubernetes environments.
  • What is a Helm Chart?: A package containing pre-configured Kubernetes resources used for deploying applications.
  • Purpose of Helm: Streamlines deployments, manages versions and rollbacks, and allows customization of installations through charts.
  • Using Helm Charts: Install Helm, add repositories, and manage applications within your Kubernetes cluster using Helm’s command suite, including install, update, and delete operations.

Installing Helm

You can refer to the official Helm installation documentation for other platforms.

From Homebrew (Mac)

brew install helm

From Chocolatey (Windows)

choco install kubernetes-helm

Check version

helm version

The output is as follows:

Windows PowerShell
Copyright (C) Microsoft Corporation. All rights reserved.

Try the new cross-platform PowerShell https://aka.ms/pscore6

PS C:\Windows\system32> choco install kubernetes-helm
Chocolatey v0.10.15
Installing the following packages:
kubernetes-helm
By installing you accept licenses for the packages.
Progress: Downloading kubernetes-helm 3.5.4... 100%

kubernetes-helm v3.5.4 [Approved]
kubernetes-helm package files install completed. Performing other installation steps.
The package kubernetes-helm wants to run 'chocolateyInstall.ps1'.
Note: If you don't run this script, the installation will fail.
Note: To confirm automatically next time, use '-y' or consider:
choco feature enable -n allowGlobalConfirmation
Do you want to run the script?([Y]es/[A]ll - yes to all/[N]o/[P]rint): Yes

Downloading kubernetes-helm 64 bit
  from 'https://get.helm.sh/helm-v3.5.4-windows-amd64.zip'
Progress: 100% - Completed download of C:\Users\USERNAME\AppData\Local\Temp\chocolatey\kubernetes-helm\3.5.4\helm-v3.5.4-windows-amd64.zip (11.96 MB).
Download of helm-v3.5.4-windows-amd64.zip (11.96 MB) completed.
Hashes match.
Extracting C:\Users\USERNAME\AppData\Local\Temp\chocolatey\kubernetes-helm\3.5.4\helm-v3.5.4-windows-amd64.zip to C:\ProgramData\chocolatey\lib\kubernetes-helm\tools...
C:\ProgramData\chocolatey\lib\kubernetes-helm\tools
 ShimGen has successfully created a shim for helm.exe
 The install of kubernetes-helm was successful.
  Software installed to 'C:\ProgramData\chocolatey\lib\kubernetes-helm\tools'

Chocolatey installed 1/1 packages.
 See the log for details (C:\ProgramData\chocolatey\logs\chocolatey.log).

Did you know the proceeds of Pro (and some proceeds from other
 licensed editions) go into bettering the community infrastructure?
 Your support ensures an active community, keeps Chocolatey tip top,
 plus it nets you some awesome features!
 https://chocolatey.org/compare
PS C:\Windows\system32> helm version
version.BuildInfo{Version:"v3.5.4", GitCommit:"1b5edb69df3d3a08df77c9902dc17af864ff05d1", GitTreeState:"clean", GoVersion:"go1.15.11"}
PS C:\Windows\system32>

Helm Commands for Kubernetes

Add a Helm repository:

# Example
helm repo add stable https://charts.helm.sh/stable

I will add the bitnami repo and search for the Nginx chart as follows:

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm search repo nginx
helm search repo bitnami/nginx

The output is as follows:

E:\Study\cka\devopsroles>helm search repo nginx
NAME                                    CHART VERSION   APP VERSION     DESCRIPTION
bitnami/nginx                           8.9.1           1.19.10         Chart for the nginx server
bitnami/nginx-ingress-controller        7.6.9           0.46.0          Chart for the nginx Ingress controller
stable/nginx-ingress                    1.41.3          v0.34.1         DEPRECATED! An nginx Ingress controller that us...
stable/nginx-ldapauth-proxy             0.1.6           1.13.5          DEPRECATED - nginx proxy with ldapauth
stable/nginx-lego                       0.3.1                           Chart for nginx-ingress-controller and kube-lego
bitnami/kong                            3.7.4           2.4.1           Kong is a scalable, open source API layer (aka ...
stable/gcloud-endpoints                 0.1.2           1               DEPRECATED Develop, deploy, protect and monitor...

E:\Study\cka\devopsroles>helm search repo bitnami/nginx
NAME                                    CHART VERSION   APP VERSION     DESCRIPTION
bitnami/nginx                           8.9.1           1.19.10         Chart for the nginx server
bitnami/nginx-ingress-controller        7.6.9           0.46.0          Chart for the nginx Ingress controller

Install Nginx using the helm command

helm install nginx bitnami/nginx

Update Nginx

helm upgrade nginx bitnami/nginx --dry-run

# Upgrade using values in overrides_nginx.yaml
helm upgrade nginx bitnami/nginx -f overrides_nginx.yaml

# rollback
helm rollback nginx REVISION_NUMBER

Basic Helm commands

helm status nginx
helm history nginx
# get manifest and values from deployment
helm get manifest nginx
helm get values nginx
helm uninstall nginx
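
And to list the releases currently installed in your cluster:

helm list --all-namespaces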

Conclusion

Mastering Helm commands enhances your Kubernetes management skills, allowing for more efficient application deployment and management. This tutorial provides the foundation you need to confidently use Helm in your Kubernetes environment, improving your operational capabilities.

You now have the essential Helm commands for Kubernetes at your fingertips. I hope this was helpful. Thank you for reading the DevopsRoles page!

How to Deploy Pods on Kubernetes Minikube

Introduction

In the rapidly evolving world of cloud-native applications, Kubernetes has emerged as the go-to platform for automating deployment, scaling, and managing containerized applications. For those who are new to Kubernetes or looking to experiment with it in a local environment, Minikube is the ideal tool. Minikube allows you to run a single-node Kubernetes cluster on your local machine, making it easier to learn and test.

This guide will walk you through the process of deploying and managing Pods on Kubernetes Minikube. We will cover everything from basic concepts to advanced operations like scaling and exposing your services. Whether you are a beginner or an experienced developer, this guide will provide you with valuable insights and practical steps to effectively manage your Kubernetes environment.

What is Kubernetes Minikube?

Kubernetes is an open-source platform that automates the deployment, scaling, and operation of application containers across clusters of hosts. Minikube is a tool that enables you to run a single-node Kubernetes cluster on your local machine. It’s an excellent way to start learning Kubernetes without needing access to a full-fledged Kubernetes cluster.

Key Components of Kubernetes Minikube

Before diving into the hands-on steps, let’s understand some key components you’ll interact with:

  • Service: An abstract way to expose an application running on a set of Pods as a network service.
  • Pod: The smallest and simplest Kubernetes object. A Pod represents a running process on your cluster and contains one or more containers.
  • kubectl: The command-line interface (CLI) tool used to interact with Kubernetes clusters.
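
Before deploying anything, make sure your local cluster is running (this assumes Minikube is already installed):

minikube start
kubectl get nodes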

Deploying Pods on Kubernetes Minikube

Create Pods

[root@DevopsRoles ~]# kubectl run test-nginx --image=nginx --replicas=2 --port=80 
[root@DevopsRoles ~]# kubectl get pods 
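
Note: the --replicas flag was removed from kubectl run in Kubernetes v1.18; on current clusters an equivalent sketch is:

kubectl create deployment test-nginx --image=nginx --port=80
kubectl scale deployment test-nginx --replicas=2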

Print the environment variables of the test-nginx pod:

[root@DevopsRoles ~]# kubectl exec test-nginx-c8b797d7d-mzf91 -- env

Open a shell in the test-nginx pod

[root@DevopsRoles ~]# kubectl exec -it test-nginx-c8b797d7d-mzf91 -- bash
root@test-nginx-c8b797d7d-mzf91:/# curl localhost 

Show the logs of the test-nginx pod

[root@DevopsRoles ~]# kubectl logs test-nginx-c8b797d7d-mzf91

Scale out the pods

[root@DevopsRoles ~]# kubectl scale deployment test-nginx --replicas=3 
[root@DevopsRoles ~]# kubectl get pods 

Expose the deployment as a Service

[root@DevopsRoles ~]# kubectl expose deployment test-nginx --type="NodePort" --port 80 
[root@DevopsRoles ~]# kubectl get services test-nginx
[root@DevopsRoles ~]# minikube service test-nginx --url
[root@DevopsRoles ~]# curl http://10.0.2.10:31495 

Delete the Service and Deployment

[root@DevopsRoles ~]# kubectl delete services test-nginx
[root@DevopsRoles ~]# kubectl delete deployment test-nginx 

Frequently Asked Questions

What is Minikube in Kubernetes?

Minikube is a tool that allows you to run a Kubernetes cluster locally on your machine. It’s particularly useful for learning and testing Kubernetes without the need for a full-blown cluster.

How do I create a Pod in Kubernetes Minikube?

You can create a Pod in Kubernetes Minikube using the kubectl run command. For example: kubectl run test-nginx --image=nginx --replicas=2 --port=80 (on kubectl v1.18 and later, where --replicas was removed, use kubectl create deployment instead, as noted above).

How can I scale a Pod in Kubernetes?

To scale a Pod in Kubernetes, you can use the kubectl scale command. For instance, kubectl scale deployment test-nginx --replicas=3 will scale the deployment to three replicas.

What is the purpose of a Service in Kubernetes?

A Service in Kubernetes is used to expose an application running on a set of Pods as a network service. It allows external traffic to access the Pods.

How do I delete a Service in Kubernetes?

You can delete a Service in Kubernetes using the kubectl delete services <service-name> command. For example: kubectl delete services test-nginx.

Conclusion

Deploying and managing Pods on Kubernetes Minikube is a foundational skill for anyone working in cloud-native environments. This guide has provided you with the essential steps to create, scale, expose, and delete Pods and Services using Minikube.

By mastering these operations, you’ll be well-equipped to manage more complex Kubernetes deployments in production environments. Whether you’re scaling applications, troubleshooting issues, or exposing services, the knowledge gained from this guide will be invaluable. Thank you for reading the DevopsRoles page!