In Kubernetes, RBAC (Role-Based Access Control) is a method for controlling access to resources based on the roles assigned to users or service accounts within the cluster. RBAC helps enforce the principle of least privilege, ensuring that users have only the permissions necessary to perform their tasks.
Kubernetes RBAC best practices
Creating a Service Account in Kubernetes
Service accounts are used to authenticate applications running inside a Kubernetes cluster to the API server. Here’s how you can create a service account named huupvuser:
kubectl create sa huupvuser
kubectl get sa
The result is as follows:
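The exact listing depends on your namespace and cluster, but it should look roughly like this (the AGE values are illustrative):
NAME        SECRETS   AGE
default     1         20d
huupvuser   1         8s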
Creating ClusterRole and ClusterRoleBinding
Creating a ClusterRole
A ClusterRole defines a set of permissions for accessing Kubernetes resources across all namespaces. Below is an example of creating a ClusterRole named test-reader that grants read-only access to pods:
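A minimal manifest for such a ClusterRole might look like the one below (the file name cluster-role.yaml used with kubectl apply -f is just an illustration):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: test-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
Apply it with kubectl apply -f cluster-role.yaml.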
A ClusterRoleBinding binds a ClusterRole to one or more subjects, such as users or service accounts, and defines the permissions granted to those subjects. Here’s an example of creating a ClusterRoleBinding named test-read-pod-global that binds the test-reader ClusterRole to the huupvuser service account:
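One way to express this binding, assuming the huupvuser service account lives in the default namespace (which matches the kubectl create sa command above):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: test-read-pod-global
subjects:
- kind: ServiceAccount
  name: huupvuser
  namespace: default
roleRef:
  kind: ClusterRole
  name: test-reader
  apiGroup: rbac.authorization.k8s.io
The same binding can also be created imperatively with kubectl create clusterrolebinding test-read-pod-global --clusterrole=test-reader --serviceaccount=default:huupvuser.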
In this article, we've explored the basics of Role-Based Access Control (RBAC) in Kubernetes and some RBAC best practices. By creating Service Accounts, ClusterRoles, and ClusterRoleBindings, we've demonstrated how to grant specific permissions to users or service accounts within a Kubernetes cluster.
RBAC is a powerful mechanism for ensuring security and access control in Kubernetes environments, allowing administrators to define fine-grained access policies tailored to their specific needs. By understanding and implementing RBAC effectively, organizations can maintain a secure and well-managed Kubernetes infrastructure. I hope you found this helpful. Thank you for reading the DevopsRoles page!
When setting up a Kubernetes cluster with Kubeadm, it's essential to validate the installation to ensure everything is functioning correctly. In this blog post, we will guide you through the steps to validate a Kubernetes cluster installed using Kubeadm and Kubectl.
Validating Kubernetes Cluster Installed using Kubeadm: Step-by-Step Guide
Validating CMD Tools: Kubeadm & Kubectl
First, let’s check the versions of Kubeadm and Kubectl to ensure they match your cluster setup.
Checking “kubeadm” version
kubeadm version
Checking “kubectl” version
kubectl version
Make sure the versions of Kubeadm and Kubectl are compatible with your Kubernetes cluster.
Validating Cluster Nodes
Next, we need to ensure that all nodes in the cluster, including both Master and Worker nodes, are in the “Ready” state.
To check the status of all nodes:
kubectl get nodes
kubectl get nodes -o wide
This command will display a list of all nodes in the cluster along with their status. Ensure that all nodes are marked as “Ready.”
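For a cluster with one master and two workers, the output might look similar to this (node names, ages, and versions are illustrative):
NAME          STATUS   ROLES           AGE   VERSION
k8s-master    Ready    control-plane   25m   v1.27.3
k8s-worker1   Ready    <none>          20m   v1.27.3
k8s-worker2   Ready    <none>          20m   v1.27.3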
Validating Kubernetes Components
It’s crucial to verify that all Kubernetes components on the Master node are running correctly.
To check the status of Kubernetes components:
kubectl get pods -n kube-system
kubectl get pods -n kube-system -o wide
This command will show the status of various Kubernetes components in the kube-system namespace. Ensure that all components are in the “Running” state.
Validating Services: Docker & Kubelet
To ensure the proper functioning of your cluster, we need to validate the services Docker and Kubelet on all nodes.
Checking Docker service status
systemctl status docker
This command will display the status of the Docker service. Ensure that it is “Active” and running without any errors.
Checking Kubelet service status
systemctl status kubelet
This command will show the status of the Kubelet service. Verify that it is “Active” and running correctly.
Deploying a Test Deployment
To further validate your cluster, let’s deploy a sample Nginx deployment and check its status.
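A minimal sketch of such a test follows; the deployment name nginx-test is just an example:
kubectl create deployment nginx-test --image=nginx
kubectl get deployments
kubectl get pods
# Once the pod reaches the Running state, clean up the test resources:
kubectl delete deployment nginx-test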
The final delete command removes the test Nginx deployment from your cluster once validation is complete.
Conclusion
By following these steps, you can validate your Kubernetes cluster installation using Kubeadm and Kubectl. It's essential to ensure that all the components, services, and deployments are running correctly to have a reliable and stable Kubernetes environment. I hope you found this helpful. Thank you for reading the DevopsRoles page!
Kubernetes has emerged as the go-to solution for container orchestration and management. If you’re looking to set up a Kubernetes cluster on a Ubuntu server, you’re in the right place. In this step-by-step guide, we’ll walk you through the process of installing Kubernetes using Kubeadm on Ubuntu.
Prerequisites
I have created 3 VMs for the Kubernetes cluster nodes on Google Compute Engine (GCE):
Master(1): 2 vCPUs – 4GB Ram
Worker(2): 2 vCPUs – 2GB RAM
OS: Ubuntu 16.04 or CentOS/RHEL 7
I have configured ingress firewall rules in Google Compute Engine (GCE):
Master Node: 2379,6443,10250,10251,10252
Worker Node: 10250,30000-32767
Installing Kubernetes using Kubeadm on Ubuntu
Set hostname on Each Node
# hostnamectl set-hostname "k8s-master"    # For Master node
# hostnamectl set-hostname "k8s-worker1"   # For 1st worker node
# hostnamectl set-hostname "k8s-worker2"   # For 2nd worker node
Add the following entries to the /etc/hosts file on each node.
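For example, with the hostnames above (the IP addresses are placeholders; use the actual internal IPs of your VMs):
192.168.1.10  k8s-master
192.168.1.11  k8s-worker1
192.168.1.12  k8s-worker2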
Run it on MASTER & WORKER Nodes. Kubernetes requires a container runtime, and Docker is a popular choice. To install Docker, run the following commands:
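A minimal sketch for Ubuntu follows (the docker.io package from the Ubuntu repositories is one common option; Docker's own repository also works):
sudo apt-get update
sudo apt-get install -y docker.io
sudo systemctl enable --now docker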
NOTE: There are multiple CNI plug-ins available, and you can install the one of your choice. If the commands above don't work, check the link below for more information.
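The worker-node join step below refers to the output of kubeadm init on the master, so here is a hedged sketch of initializing the control plane and installing a CNI plug-in on the MASTER node. It assumes kubeadm, kubelet, and kubectl are already installed on each node; the pod network CIDR and the Flannel manifest URL are examples and may differ for your chosen plug-in:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Install a CNI plug-in, for example Flannel (check the project for the current manifest URL):
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml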
Joining Worker Nodes
Run it on WORKER Node only
On your worker nodes, use the kubeadm join command from the kubeadm init output above to join them to the cluster.
kubeadm join <...>
Run this command if you do not have the join command above, or if you need to create a new one:
kubeadm token create --print-join-command
Verify the Cluster
On the master node, ensure your cluster is up and running:
kubectl get nodes
You should see the master node marked as “Ready” and any joined worker nodes.
Conclusion
Congratulations! You’ve successfully installed Kubernetes using Kubeadm on Ubuntu. With your Kubernetes cluster up and running, you’re ready to deploy and manage containerized applications and services at scale.
Kubernetes offers vast capabilities for container orchestration, scaling, and management. As you become more familiar with Kubernetes, you can explore advanced configurations and features to optimize your containerized environment.
I hope you found this helpful. Thank you for reading the DevopsRoles page!
Docker and Kubernetes are both open-source container platforms that enable the packaging and deployment of applications. In this article, we will unravel the Docker and Kubernetes comparison.
In the realm of modern software deployment, where technology evolves at the speed of light, two names reign supreme: Docker and Kubernetes.
These two technological titans have transformed the landscape of application development, but they often find themselves in the spotlight together, leaving many wondering: what’s the real difference between Docker and Kubernetes? Buckle up as we embark on an illuminating journey through the cosmos of containers and orchestration.
What is Docker: The Art of Containerization
Docker, launched in 2013, is the pioneer in containerization technology. At its core, Docker allows developers to package an application and its dependencies, including libraries and configurations, into a single unit called a container.
Imagine Docker as a master craftsman, wielding tools to sculpt applications into self-contained, portable entities known as containers. Docker empowers developers to encapsulate an application along with its dependencies, libraries, and configurations into a single unit. This container becomes a traveler, capable of seamlessly transitioning between development environments, testing stages, and production servers.
The allure of Docker lies in its promise of consistency. No longer must developers grapple with the frustrating “it works on my machine” dilemma. Docker containers ensure that an application behaves the same way regardless of where it’s run, minimizing compatibility woes and facilitating smoother collaboration among developers.
What is Kubernetes: The Cosmic Choreographer
While Docker handles the creation and management of containers, Kubernetes steps in to manage the orchestration and deployment of these containers at scale. Launched by Google in 2014, Kubernetes provides a powerful solution for automating, scaling, and managing containerized applications.
In this cosmic dance of software deployment, Kubernetes steps onto the stage as a masterful choreographer, orchestrating the movement of containers with finesse and precision. Kubernetes introduces the concept of pods: groups of interconnected containers that share network and storage resources. This dynamic entity enables seamless load balancing, ensuring smooth traffic distribution across the dance floor of application deployment.
Yet, Kubernetes offers more than elegant moves. It's a wizard of automation, capable of dynamically scaling applications based on demand. Like a cosmic conductor, Kubernetes monitors the performance of applications and orchestrates adjustments, ensuring that performance remains stellar even as the audience of users grows.
Docker and Kubernetes comparison
Docker vs Kubernetes: Pros and Cons
Key Differences and Complementary Roles
While Docker and Kubernetes fulfill distinct roles, they synergize seamlessly to offer a holistic solution for containerization and orchestration. Docker excels in crafting portable and uniform containers that encapsulate applications, while Kubernetes steps in to oversee the intricate dance of deploying, scaling, and monitoring these containers.
Consider Docker the cornerstone of containerization, providing the essential building blocks that Kubernetes builds upon. Through Docker, developers elegantly wrap applications and dependencies in containers that maintain their coherence across diverse environments, from the intimate realms of development laptops to the grand stages of production servers.
On the contrasting side of the spectrum, Kubernetes emerges as the maestro of container lifecycle management. Its genius lies in abstracting the complex infrastructure beneath and orchestrating multifaceted tasks (load balancing, scaling, mending, and updating) with automated grace. As organizations venture into vast container deployments, Kubernetes emerges as the compass, ensuring not only high availability but also the optimal utilization of resources, resulting in a symphony of efficiency.
In conclusion
Docker and Kubernetes, though distinct technologies, are interconnected in the world of modern software deployment. Docker empowers developers to create portable and consistent containers, while Kubernetes takes the reins of orchestration, automating the deployment and scaling of those containers. Together, they offer a robust ecosystem for building, deploying, and managing applications in today's fast-paced and ever-changing technology landscape.
Thank you for reading this Docker and Kubernetes comparison. I hope you found it helpful. Thank you for reading the DevopsRoles page!
Click on the green “Code” button, and then click on “Download ZIP” to download a zip file of the repository
For example, you can use the command line below:
git clone https://github.com/squat/kwok.git
Next, modify the config.yml file in the repository to configure your Kubernetes cluster.
Open the config.yml file in a text editor.
Modify the settings in the config.yml file as needed.
num_nodes: This setting specifies the number of nodes to create in the Kubernetes cluster.
vm_cpus: This setting specifies the number of CPUs to allocate to each node.
vm_memory: This setting specifies the amount of memory to allocate to each node.
ip_prefix: This setting specifies the IP address prefix to use for the nodes in the cluster.
kubernetes_version: This setting specifies the version of Kubernetes to use in the cluster.
Save your changes to the config.yml file.
For example, to create a three-node Kubernetes cluster with 2 CPUs and 4 GB of memory allocated to each node, using the IP address prefix "192.168.32" and Kubernetes version 1.21.0:
# Number of nodes to create
num_nodes: 3
# CPU and memory settings for each node
vm_cpus: 2
vm_memory: 4096
# Network settings
ip_prefix: "192.168.32"
network_plugin: flannel
# Kubernetes version to install
kubernetes_version: "1.21.0"
# Docker version to install
docker_version: "20.10.8"
Once you have modified the config.yml file to specify the desired settings for your Kubernetes cluster, you are ready to bring it up.
Start the Kubernetes cluster
Run the vagrant up command to start the Kubernetes cluster.
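From the repository directory (assuming it contains the Vagrantfile that reads config.yml), that is simply:
vagrant up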
Now you can deploy your applications.
Conclusion
You have used Kwok Kubernetes, a tool to spin up Kubernetes nodes. I hope you found this helpful. Thank you for reading the DevopsRoles page!
To install Kubernetes on Ubuntu, you can use the kubeadm tool, which simplifies the process of setting up a cluster. Here’s a step-by-step guide on how to install Kubernetes on Ubuntu.
Kubernetes Requirements
Master Node.
Worker Node 1.
The host OS is Ubuntu Server.
1. Update Ubuntu Server
It is always good practice to start with an updated system. Use the commands below:
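On each node, a standard update looks like this:
sudo apt-get update
sudo apt-get upgrade -y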
Next, install the Kubernetes packages. Kubeadm (Kubernetes Admin) is a tool that helps initialize a cluster, Kubelet is the node agent that runs on all nodes and starts containers, and kubectl is the command-line client. Use the commands below.
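A hedged sketch for installing kubeadm, kubelet, and kubectl from the upstream package repository follows; the repository URL and version path change between Kubernetes releases, so check the official installation documentation for the exact values:
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl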
vagrant@k8s-master:~$ sudo kubeadm init --apiserver-advertise-address=192.168.56.11 --pod-network-cidr=10.100.0.0/16
[init] Using Kubernetes version: v1.24.2
[preflight] Running pre-flight checks
[WARNING SystemVerification]: missing optional cgroups: blkio
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.56.11]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.56.11 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.56.11 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 4.501962 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: 8ke6fa.3c5ll272057418qj
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.56.11:6443 --token 8ke6fa.3c5ll272057418qj \
--discovery-token-ca-cert-hash sha256:4e69979a597f673781606eb7ab6bed5ccb741f46756b17f196b9cd76d3b51053
This join command is used to join the worker nodes to the cluster. On the master node, create the .kube directory and copy the admin kubeconfig as shown in the output above so that kubectl works for your user.
After waiting a few minutes, you can check the status of the nodes. Switch to the master node and run the command below:
kubectl get nodes
Conclusion
You have successfully installed Kubernetes on Ubuntu. I hope this guide proves helpful in navigating the installation process. Thank you for reading the DevopsRoles page!
In this tutorial, we will set up a Kubernetes cluster with K3s, a lightweight Kubernetes distribution developed by Rancher. K3s consumes fewer resources than traditional distributions and makes it easy to set up and manage a Kubernetes cluster.
To set up a Kubernetes cluster using K3s, you can follow these steps:
Setup Kubernetes Cluster with K3s
First, you need to get a Linux machine.
Start by provisioning the servers that will be part of your Kubernetes cluster.
You can use virtual machines or bare-metal servers. Ensure that the servers have a compatible operating system (such as Ubuntu, CentOS, or RHEL).
Download the K3s binary
You can download it from the releases page, or use the wget or curl command to download it.
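Alternatively, as a hedged sketch, the upstream K3s install script handles the download and service setup for you; the server IP and token below are placeholders:
# On the server (master) node:
curl -sfL https://get.k3s.io | sh -
# The token that agents need in order to join is stored here on the server:
sudo cat /var/lib/rancher/k3s/server/node-token
# On each agent (worker) node, join the cluster (replace the server IP and token):
curl -sfL https://get.k3s.io | K3S_URL=https://192.168.1.10:6443 K3S_TOKEN=<node-token> sh -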
Repeat this process to add as many nodes as you want to your cluster.
On the master node, the result should look similar to the following:
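For example (node names, ages, and versions are illustrative):
sudo k3s kubectl get nodes
NAME          STATUS   ROLES                  AGE   VERSION
k3s-master    Ready    control-plane,master   5m    v1.27.4+k3s1
k3s-worker1   Ready    <none>                 2m    v1.27.4+k3s1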
Conclusion
You have set up a Kubernetes cluster using K3s. You can now use kubectl to deploy and manage your applications on the cluster.
Further, you can customize K3s, for example by customizing networking or logging, changing the container runtime, and setting up certificates. I hope you found this helpful. Thank you for reading the DevopsRoles page!
In this tutorial, we will install a Kubernetes cluster with KubeKey, using the KubeKey quickstart.
You can use KubeKey to spin up a Kubernetes deployment for development or testing purposes, as Kubernetes is the de facto standard in container orchestration.
Requirements to install a Kubernetes Cluster with KubeKey
You need to download KubeKey and make it executable with the commands below:
sudo apt-get install conntrack -y
curl -sfL https://get-kk.kubesphere.io | VERSION=v1.2.1 sh -
chmod u+x kk
Make kk executable from any directory by copying the file to the /usr/local/bin directory with the command below:
sudo cp kk /usr/local/bin
Verify the installation:
kk -h
The output terminal is as below:
vagrant@devopsroles:~$ kk -h
Deploy a Kubernetes or KubeSphere cluster efficiently, flexibly and easily. There are three scenarios to use KubeKey.
1. Install Kubernetes only
2. Install Kubernetes and KubeSphere together in one command
3. Install Kubernetes first, then deploy KubeSphere on it using https://github.com/kubesphere/ks-installer
Usage:
kk [command]
Available Commands:
add Add nodes to kubernetes cluster
certs cluster certs
completion Generate shell completion scripts
create Create a cluster or a cluster configuration file
delete Delete nodes or cluster
help Help about any command
init Initializes the installation environment
upgrade Upgrade your cluster smoothly to a newer version with this command
version print the client version information
Flags:
--debug Print detailed information (default true)
-h, --help help for kk
--in-cluster Running inside the cluster
Use "kk [command] --help" for more information about a command.
Deploy the Cluster
You need to deploy the cluster with the command below:
sudo kk create cluster
This process will take some time.
The output terminal is as below:
vagrant@devopsroles:~$ sudo kk create cluster
+-------------+------+------+---------+----------+-------+-------+-----------+----------+------------+-------------+------------------+--------------+
| name | sudo | curl | openssl | ebtables | socat | ipset | conntrack | docker | nfs client | ceph client | glusterfs client | time |
+-------------+------+------+---------+----------+-------+-------+-----------+----------+------------+-------------+------------------+--------------+
| devopsroles | y | y | y | y | | | y | 20.10.12 | | | | UTC 14:06:10 |
+-------------+------+------+---------+----------+-------+-------+-----------+----------+------------+-------------+------------------+--------------+
This is a simple check of your environment.
Before installation, you should ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations
Continue this installation? [yes/no]: yes
INFO[14:06:20 UTC] Downloading Installation Files
INFO[14:06:20 UTC] Downloading kubeadm ...
INFO[14:06:24 UTC] Downloading kubelet ...
INFO[14:06:32 UTC] Downloading kubectl ...
INFO[14:06:36 UTC] Downloading helm ...
INFO[14:06:38 UTC] Downloading kubecni ...
INFO[14:06:43 UTC] Downloading etcd ...
INFO[14:06:48 UTC] Downloading docker ...
INFO[14:06:52 UTC] Downloading crictl ...
INFO[14:06:55 UTC] Configuring operating system ...
[devopsroles 10.0.2.15] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
vm.swappiness = 1
fs.inotify.max_user_instances = 524288
no crontab for root
INFO[14:06:56 UTC] Get cluster status
INFO[14:06:56 UTC] Installing Container Runtime ...
INFO[14:06:56 UTC] Start to download images on all nodes
[devopsroles] Downloading image: kubesphere/pause:3.4.1
[devopsroles] Downloading image: kubesphere/kube-apiserver:v1.21.5
[devopsroles] Downloading image: kubesphere/kube-controller-manager:v1.21.5
[devopsroles] Downloading image: kubesphere/kube-scheduler:v1.21.5
[devopsroles] Downloading image: kubesphere/kube-proxy:v1.21.5
[devopsroles] Downloading image: coredns/coredns:1.8.0
[devopsroles] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12
[devopsroles] Downloading image: calico/kube-controllers:v3.20.0
[devopsroles] Downloading image: calico/cni:v3.20.0
[devopsroles] Downloading image: calico/node:v3.20.0
[devopsroles] Downloading image: calico/pod2daemon-flexvol:v3.20.0
INFO[14:08:42 UTC] Getting etcd status
[devopsroles 10.0.2.15] MSG:
Configuration file will be created
INFO[14:08:42 UTC] Generating etcd certs
INFO[14:08:43 UTC] Synchronizing etcd certs
INFO[14:08:43 UTC] Creating etcd service
Push /home/vagrant/kubekey/v1.21.5/amd64/etcd-v3.4.13-linux-amd64.tar.gz to 10.0.2.15:/tmp/kubekey/etcd-v3.4.13-linux-amd64.tar.gz Done
INFO[14:08:43 UTC] Starting etcd cluster
INFO[14:08:44 UTC] Refreshing etcd configuration
[devopsroles 10.0.2.15] MSG:
Created symlink /etc/systemd/system/multi-user.target.wants/etcd.service → /etc/systemd/system/etcd.service.
INFO[14:08:47 UTC] Backup etcd data regularly
INFO[14:08:54 UTC] Installing kube binaries
Push /home/vagrant/kubekey/v1.21.5/amd64/kubeadm to 10.0.2.15:/tmp/kubekey/kubeadm Done
Push /home/vagrant/kubekey/v1.21.5/amd64/kubelet to 10.0.2.15:/tmp/kubekey/kubelet Done
Push /home/vagrant/kubekey/v1.21.5/amd64/kubectl to 10.0.2.15:/tmp/kubekey/kubectl Done
Push /home/vagrant/kubekey/v1.21.5/amd64/helm to 10.0.2.15:/tmp/kubekey/helm Done
Push /home/vagrant/kubekey/v1.21.5/amd64/cni-plugins-linux-amd64-v0.9.1.tgz to 10.0.2.15:/tmp/kubekey/cni-plugins-linux-amd64-v0.9.1.tgz Done
INFO[14:09:01 UTC] Initializing kubernetes cluster
[devopsroles 10.0.2.15] MSG:
W0323 14:09:02.442567 4631 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.21.5
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING FileExisting-socat]: socat not found in system path
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [devopsroles devopsroles.cluster.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local localhost] and IPs [10.233.0.1 10.0.2.15 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 7.502071 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node devopsroles as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node devopsroles as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: w6uaty.abpybmw8jhw1tdlg
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join lb.kubesphere.local:6443 --token w6uaty.abpybmw8jhw1tdlg \
--discovery-token-ca-cert-hash sha256:98d31447eb6457d74c0d13088aceed7ae8dd1fd0b8e98cb1a15683fcbb5ef4d5 \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join lb.kubesphere.local:6443 --token w6uaty.abpybmw8jhw1tdlg \
--discovery-token-ca-cert-hash sha256:98d31447eb6457d74c0d13088aceed7ae8dd1fd0b8e98cb1a15683fcbb5ef4d5
[devopsroles 10.0.2.15] MSG:
node/devopsroles untainted
[devopsroles 10.0.2.15] MSG:
node/devopsroles labeled
[devopsroles 10.0.2.15] MSG:
service "kube-dns" deleted
[devopsroles 10.0.2.15] MSG:
service/coredns created
Warning: resource clusterroles/system:coredns is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
clusterrole.rbac.authorization.k8s.io/system:coredns configured
[devopsroles 10.0.2.15] MSG:
serviceaccount/nodelocaldns created
daemonset.apps/nodelocaldns created
[devopsroles 10.0.2.15] MSG:
configmap/nodelocaldns created
INFO[14:09:34 UTC] Get cluster status
INFO[14:09:35 UTC] Joining nodes to cluster
INFO[14:09:35 UTC] Deploying network plugin ...
[devopsroles 10.0.2.15] MSG:
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/calico-kube-controllers created
INFO[14:09:36 UTC] Congratulations! Installation is successful.
To verify that kubectl has been installed, run the command below:
kubectl --help
The output terminal is as below:
vagrant@devopsroles:~$ kubectl --help
kubectl controls the Kubernetes cluster manager.
Find more information at: https://kubernetes.io/docs/reference/kubectl/overview/
Basic Commands (Beginner):
create Create a resource from a file or from stdin.
expose Take a replication controller, service, deployment or pod and expose it as a new Kubernetes Service
run Run a particular image on the cluster
set Set specific features on objects
Basic Commands (Intermediate):
explain Documentation of resources
get Display one or many resources
edit Edit a resource on the server
delete Delete resources by filenames, stdin, resources and names, or by resources and label selector
Deploy Commands:
rollout Manage the rollout of a resource
scale Set a new size for a Deployment, ReplicaSet or Replication Controller
autoscale Auto-scale a Deployment, ReplicaSet, StatefulSet, or ReplicationController
Cluster Management Commands:
certificate Modify certificate resources.
cluster-info Display cluster info
top Display Resource (CPU/Memory) usage.
cordon Mark node as unschedulable
uncordon Mark node as schedulable
drain Drain node in preparation for maintenance
taint Update the taints on one or more nodes
Troubleshooting and Debugging Commands:
describe Show details of a specific resource or group of resources
logs Print the logs for a container in a pod
attach Attach to a running container
exec Execute a command in a container
port-forward Forward one or more local ports to a pod
proxy Run a proxy to the Kubernetes API server
cp Copy files and directories to and from containers.
auth Inspect authorization
debug Create debugging sessions for troubleshooting workloads and nodes
Advanced Commands:
diff Diff live version against would-be applied version
apply Apply a configuration to a resource by filename or stdin
patch Update field(s) of a resource
replace Replace a resource by filename or stdin
wait Experimental: Wait for a specific condition on one or many resources.
kustomize Build a kustomization target from a directory or URL.
Settings Commands:
label Update the labels on a resource
annotate Update the annotations on a resource
completion Output shell completion code for the specified shell (bash or zsh)
Other Commands:
api-resources Print the supported API resources on the server
api-versions Print the supported API versions on the server, in the form of "group/version"
config Modify kubeconfig files
plugin Provides utilities for interacting with plugins.
version Print the client and server version information
Usage:
kubectl [flags] [options]
Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).
Next, you must run the kubectl proxy command as below:
sudo kubectl proxy
While the proxy is running, open a web browser and point it to the IP address and port number listed in the results from the sudo kubectl get svc -n kubernetes-dashboard command.
vagrant@devopsroles:~$ sudo kubectl get svc -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dashboard-metrics-scraper ClusterIP 10.233.19.190 <none> 8000/TCP 47h
kubernetes-dashboard NodePort 10.233.23.203 <none> 443:32469/TCP 47h
Open a web browser:
https://192.168.3.7:32469/
To log in to the dashboard, you need to create a ServiceAccount object and a ClusterRoleBinding object. Create the account with the name admin-user in the namespace kubernetes-dashboard.
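A hedged sketch of those two objects, based on the standard dashboard documentation, is shown below; binding to the cluster-admin ClusterRole gives the account full access, so only do this in a test environment. Save it to a file (for example dashboard-admin.yaml, a name chosen here for illustration) and apply it with kubectl apply -f dashboard-admin.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
On a v1.21 cluster like the one above, the login token can then be read from the service account's secret, for example:
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
Newer clusters can use kubectl -n kubernetes-dashboard create token admin-user instead.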
In this tutorial, we will install the Nginx Ingress Controller using a Helm chart. I want to expose pods using an Ingress Controller, and I will use the Nginx ingress controller to set up an AWS ELB.
In a production environment, you might instead use an Istio gateway or Traefik.
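Before the install command below will work, the ingress-nginx chart repository needs to be added to Helm; a brief sketch (repository name and URL as published by the upstream project):
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update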
$ helm install nginx-ingress-controller ingress-nginx/ingress-nginx
NAME: nginx-ingress-controller
LAST DEPLOYED: Sat Jun 26 16:11:27 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The ingress-nginx controller has been installed.
It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl --namespace default get services -o wide -w nginx-ingress-controller-ingress-nginx-controller'
An example Ingress that makes use of the controller:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: example
  namespace: foo
spec:
  rules:
    - host: www.example.com
      http:
        paths:
          - backend:
              serviceName: exampleService
              servicePort: 80
            path: /
  # This section is only required if TLS is to be enabled for the Ingress
  tls:
    - hosts:
        - www.example.com
      secretName: example-tls
If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:
apiVersion: v1
kind: Secret
metadata:
  name: example-tls
  namespace: foo
data:
  tls.crt: <base64 encoded cert>
  tls.key: <base64 encoded key>
type: kubernetes.io/tls
Check that the nginx-ingress-controller pod, service, and deployment have been created:
kubectl get pod,svc,deploy
# The output as below
$ kubectl get pod,svc,deploy
NAME READY STATUS RESTARTS AGE
pod/guestbook-tnc42 1/1 Running 0 38m
pod/guestbook-vqgws 1/1 Running 0 38m
pod/guestbook-vqnxf 1/1 Running 0 38m
pod/nginx-ingress-controller-ingress-nginx-controller-7f8f65bf4g6c7 1/1 Running 0 4m23s
pod/redis-master-dp7h7 1/1 Running 0 41m
pod/redis-slave-54mt6 1/1 Running 0 39m
pod/redis-slave-8g8h4 1/1 Running 0 39m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/guestbook LoadBalancer 10.100.231.216 aff3d414c479f4faaa2ab82062c87fe5-264485147.us-west-2.elb.amazonaws.com 3000:32767/TCP 38m
service/kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 88m
service/nginx-ingress-controller-ingress-nginx-controller LoadBalancer 10.100.57.204 a04b31985b6c64607a50d794ef692c57-291207293.us-west-2.elb.amazonaws.com 80:31382/TCP,443:32520/TCP 4m25s
service/nginx-ingress-controller-ingress-nginx-controller-admission ClusterIP 10.100.216.28 <none> 443/TCP 4m25s
service/redis-master ClusterIP 10.100.76.16 <none> 6379/TCP 40m
service/redis-slave ClusterIP 10.100.126.163 <none> 6379/TCP 39m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx-ingress-controller-ingress-nginx-controller 1/1 1 1 4m25s
The TYPE of the nginx-ingress-controller service is LoadBalancer; this is an L4 load balancer. The nginx-ingress-controller pod runs Nginx inside it to perform L7 load balancing within the EKS cluster.