In this tutorial, you will learn how to set up a Kubernetes cluster with K3s, a lightweight Kubernetes distribution developed by Rancher. K3s consumes fewer resources than traditional distributions and makes a Kubernetes cluster easy to set up and manage.
To set up a Kubernetes cluster using K3s, you can follow these steps:
Set Up a Kubernetes Cluster with K3s
First, you need one or more Linux machines.
Start by provisioning the servers that will be part of your Kubernetes cluster.
You can use virtual machines or bare-metal servers. Ensure that the servers run a compatible operating system (such as Ubuntu, CentOS, or RHEL).
Download the K3s binary
You can download it from the official release page, or fetch it with the wget or curl command. In practice, the simplest route is the official install script shown below.
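For example, the documented one-line install script sets up the server (master) node in a single step; a minimal sketch:
# Install K3s on the server node; the script downloads the binary
# and registers K3s as a systemd service
curl -sfL https://get.k3s.io | sh -
# The token that agents need to join the cluster is written here
sudo cat /var/lib/rancher/k3s/server/node-token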
Repeat this process to add as many nodes as you want to your cluster.
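Each additional node joins as an agent by running the same script with the server URL and node token set; in this sketch, <server-ip> and <token> are placeholders you must substitute:
# Run on each agent (worker) node
curl -sfL https://get.k3s.io | K3S_URL=https://<server-ip>:6443 K3S_TOKEN=<token> sh -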
Verify the result on the master node:
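K3s bundles kubectl, so you can check that every node has joined:
# All joined nodes should report STATUS Ready
sudo k3s kubectl get nodes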
Conclusion
You have set up a Kubernetes cluster using K3s. You can now use kubectl to deploy and manage your applications on the cluster.
You can take K3s further by customizing networking or logging, changing the container runtime, and setting up certificates. I hope you find this helpful. Thank you for reading the DevopsRoles page!
In this tutorial, you will learn how to install a Kubernetes cluster with KubeKey. I will deploy a Kubernetes cluster using the KubeKey quickstart.
You can use KubeKey to spin up a Kubernetes deployment for development and testing purposes, since Kubernetes is the de facto standard in container orchestration.
Requirements to Install a Kubernetes Cluster with KubeKey
You need to download KubeKey and make it executable with the commands below:
sudo apt-get install conntrack -y
curl -sfL https://get-kk.kubesphere.io | VERSION=v1.2.1 sh -
chmod u+x kk
Make kk executable from any directory by copying the file to the /usr/local/bin directory with the command below:
sudo cp kk /usr/local/bin
Verify the installation:
kk -h
The terminal output is as below:
vagrant@devopsroles:~$ kk -h
Deploy a Kubernetes or KubeSphere cluster efficiently, flexibly and easily. There are three scenarios to use KubeKey.
1. Install Kubernetes only
2. Install Kubernetes and KubeSphere together in one command
3. Install Kubernetes first, then deploy KubeSphere on it using https://github.com/kubesphere/ks-installer
Usage:
kk [command]
Available Commands:
add Add nodes to kubernetes cluster
certs cluster certs
completion Generate shell completion scripts
create Create a cluster or a cluster configuration file
delete Delete nodes or cluster
help Help about any command
init Initializes the installation environment
upgrade Upgrade your cluster smoothly to a newer version with this command
version print the client version information
Flags:
--debug Print detailed information (default true)
-h, --help help for kk
--in-cluster Running inside the cluster
Use "kk [command] --help" for more information about a command.
Deploy the Cluster
You need to deploy the cluster with the command below:
sudo kk create cluster
This process will take some time.
The terminal output is as below:
vagrant@devopsroles:~$ sudo kk create cluster
+-------------+------+------+---------+----------+-------+-------+-----------+----------+------------+-------------+------------------+--------------+
| name | sudo | curl | openssl | ebtables | socat | ipset | conntrack | docker | nfs client | ceph client | glusterfs client | time |
+-------------+------+------+---------+----------+-------+-------+-----------+----------+------------+-------------+------------------+--------------+
| devopsroles | y | y | y | y | | | y | 20.10.12 | | | | UTC 14:06:10 |
+-------------+------+------+---------+----------+-------+-------+-----------+----------+------------+-------------+------------------+--------------+
This is a simple check of your environment.
Before installation, you should ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations
Continue this installation? [yes/no]: yes
INFO[14:06:20 UTC] Downloading Installation Files
INFO[14:06:20 UTC] Downloading kubeadm ...
INFO[14:06:24 UTC] Downloading kubelet ...
INFO[14:06:32 UTC] Downloading kubectl ...
INFO[14:06:36 UTC] Downloading helm ...
INFO[14:06:38 UTC] Downloading kubecni ...
INFO[14:06:43 UTC] Downloading etcd ...
INFO[14:06:48 UTC] Downloading docker ...
INFO[14:06:52 UTC] Downloading crictl ...
INFO[14:06:55 UTC] Configuring operating system ...
[devopsroles 10.0.2.15] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
vm.swappiness = 1
fs.inotify.max_user_instances = 524288
no crontab for root
INFO[14:06:56 UTC] Get cluster status
INFO[14:06:56 UTC] Installing Container Runtime ...
INFO[14:06:56 UTC] Start to download images on all nodes
[devopsroles] Downloading image: kubesphere/pause:3.4.1
[devopsroles] Downloading image: kubesphere/kube-apiserver:v1.21.5
[devopsroles] Downloading image: kubesphere/kube-controller-manager:v1.21.5
[devopsroles] Downloading image: kubesphere/kube-scheduler:v1.21.5
[devopsroles] Downloading image: kubesphere/kube-proxy:v1.21.5
[devopsroles] Downloading image: coredns/coredns:1.8.0
[devopsroles] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12
[devopsroles] Downloading image: calico/kube-controllers:v3.20.0
[devopsroles] Downloading image: calico/cni:v3.20.0
[devopsroles] Downloading image: calico/node:v3.20.0
[devopsroles] Downloading image: calico/pod2daemon-flexvol:v3.20.0
INFO[14:08:42 UTC] Getting etcd status
[devopsroles 10.0.2.15] MSG:
Configuration file will be created
INFO[14:08:42 UTC] Generating etcd certs
INFO[14:08:43 UTC] Synchronizing etcd certs
INFO[14:08:43 UTC] Creating etcd service
Push /home/vagrant/kubekey/v1.21.5/amd64/etcd-v3.4.13-linux-amd64.tar.gz to 10.0.2.15:/tmp/kubekey/etcd-v3.4.13-linux-amd64.tar.gz Done
INFO[14:08:43 UTC] Starting etcd cluster
INFO[14:08:44 UTC] Refreshing etcd configuration
[devopsroles 10.0.2.15] MSG:
Created symlink /etc/systemd/system/multi-user.target.wants/etcd.service → /etc/systemd/system/etcd.service.
INFO[14:08:47 UTC] Backup etcd data regularly
INFO[14:08:54 UTC] Installing kube binaries
Push /home/vagrant/kubekey/v1.21.5/amd64/kubeadm to 10.0.2.15:/tmp/kubekey/kubeadm Done
Push /home/vagrant/kubekey/v1.21.5/amd64/kubelet to 10.0.2.15:/tmp/kubekey/kubelet Done
Push /home/vagrant/kubekey/v1.21.5/amd64/kubectl to 10.0.2.15:/tmp/kubekey/kubectl Done
Push /home/vagrant/kubekey/v1.21.5/amd64/helm to 10.0.2.15:/tmp/kubekey/helm Done
Push /home/vagrant/kubekey/v1.21.5/amd64/cni-plugins-linux-amd64-v0.9.1.tgz to 10.0.2.15:/tmp/kubekey/cni-plugins-linux-amd64-v0.9.1.tgz Done
INFO[14:09:01 UTC] Initializing kubernetes cluster
[devopsroles 10.0.2.15] MSG:
W0323 14:09:02.442567 4631 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.21.5
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING FileExisting-socat]: socat not found in system path
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [devopsroles devopsroles.cluster.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local localhost] and IPs [10.233.0.1 10.0.2.15 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 7.502071 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node devopsroles as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node devopsroles as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: w6uaty.abpybmw8jhw1tdlg
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join lb.kubesphere.local:6443 --token w6uaty.abpybmw8jhw1tdlg \
--discovery-token-ca-cert-hash sha256:98d31447eb6457d74c0d13088aceed7ae8dd1fd0b8e98cb1a15683fcbb5ef4d5 \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join lb.kubesphere.local:6443 --token w6uaty.abpybmw8jhw1tdlg \
--discovery-token-ca-cert-hash sha256:98d31447eb6457d74c0d13088aceed7ae8dd1fd0b8e98cb1a15683fcbb5ef4d5
[devopsroles 10.0.2.15] MSG:
node/devopsroles untainted
[devopsroles 10.0.2.15] MSG:
node/devopsroles labeled
[devopsroles 10.0.2.15] MSG:
service "kube-dns" deleted
[devopsroles 10.0.2.15] MSG:
service/coredns created
Warning: resource clusterroles/system:coredns is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
clusterrole.rbac.authorization.k8s.io/system:coredns configured
[devopsroles 10.0.2.15] MSG:
serviceaccount/nodelocaldns created
daemonset.apps/nodelocaldns created
[devopsroles 10.0.2.15] MSG:
configmap/nodelocaldns created
INFO[14:09:34 UTC] Get cluster status
INFO[14:09:35 UTC] Joining nodes to cluster
INFO[14:09:35 UTC] Deploying network plugin ...
[devopsroles 10.0.2.15] MSG:
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/calico-kube-controllers created
INFO[14:09:36 UTC] Congratulations! Installation is successful.
To verify that kubectl has been installed, run the command below:
kubectl --help
The terminal output is as below:
vagrant@devopsroles:~$ kubectl --help
kubectl controls the Kubernetes cluster manager.
Find more information at: https://kubernetes.io/docs/reference/kubectl/overview/
Basic Commands (Beginner):
create Create a resource from a file or from stdin.
expose Take a replication controller, service, deployment or pod and expose it as a new Kubernetes Service
run Run a particular image on the cluster
set Set specific features on objects
Basic Commands (Intermediate):
explain Documentation of resources
get Display one or many resources
edit Edit a resource on the server
delete Delete resources by filenames, stdin, resources and names, or by resources and label selector
Deploy Commands:
rollout Manage the rollout of a resource
scale Set a new size for a Deployment, ReplicaSet or Replication Controller
autoscale Auto-scale a Deployment, ReplicaSet, StatefulSet, or ReplicationController
Cluster Management Commands:
certificate Modify certificate resources.
cluster-info Display cluster info
top Display Resource (CPU/Memory) usage.
cordon Mark node as unschedulable
uncordon Mark node as schedulable
drain Drain node in preparation for maintenance
taint Update the taints on one or more nodes
Troubleshooting and Debugging Commands:
describe Show details of a specific resource or group of resources
logs Print the logs for a container in a pod
attach Attach to a running container
exec Execute a command in a container
port-forward Forward one or more local ports to a pod
proxy Run a proxy to the Kubernetes API server
cp Copy files and directories to and from containers.
auth Inspect authorization
debug Create debugging sessions for troubleshooting workloads and nodes
Advanced Commands:
diff Diff live version against would-be applied version
apply Apply a configuration to a resource by filename or stdin
patch Update field(s) of a resource
replace Replace a resource by filename or stdin
wait Experimental: Wait for a specific condition on one or many resources.
kustomize Build a kustomization target from a directory or URL.
Settings Commands:
label Update the labels on a resource
annotate Update the annotations on a resource
completion Output shell completion code for the specified shell (bash or zsh)
Other Commands:
api-resources Print the supported API resources on the server
api-versions Print the supported API versions on the server, in the form of "group/version"
config Modify kubeconfig files
plugin Provides utilities for interacting with plugins.
version Print the client and server version information
Usage:
kubectl [flags] [options]
Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).
Next, you must run the kubectl proxy command as below:
sudo kubectl proxy
While the proxy is running, open a web browser and point it to the IP address and port number listed in the output of the sudo kubectl get svc -n kubernetes-dashboard command.
vagrant@devopsroles:~$ sudo kubectl get svc -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dashboard-metrics-scraper ClusterIP 10.233.19.190 <none> 8000/TCP 47h
kubernetes-dashboard NodePort 10.233.23.203 <none> 443:32469/TCP 47h
Open a web browser:
https://192.168.3.7:32469/
To log in to the dashboard, you need to create a ServiceAccount object and a ClusterRoleBinding object. Create the account with the name admin-user in the kubernetes-dashboard namespace, as sketched below.
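A minimal sketch following the upstream dashboard documentation, applying the manifest inline via a shell heredoc:
# Create the admin-user ServiceAccount and bind it to cluster-admin
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF
# Print the bearer token to paste into the dashboard login screen
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')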
In this tutorial, you will learn how to install the Nginx Ingress Controller using a Helm chart. I want to expose pods using an Ingress controller, and I will use the Nginx Ingress Controller to provision an AWS ELB.
In a production environment, you might instead use an Istio gateway or Traefik.
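Note that the helm install command below assumes the ingress-nginx chart repository has already been added; if it has not, register it first:
# Add the upstream ingress-nginx chart repository and refresh the index
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update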
$ helm install nginx-ingress-controller ingress-nginx/ingress-nginx
NAME: nginx-ingress-controller
LAST DEPLOYED: Sat Jun 26 16:11:27 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The ingress-nginx controller has been installed.
It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl --namespace default get services -o wide -w nginx-ingress-controller-ingress-nginx-controller'
An example Ingress that makes use of the controller:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: example
  namespace: foo
spec:
  rules:
    - host: www.example.com
      http:
        paths:
          - backend:
              serviceName: exampleService
              servicePort: 80
            path: /
  # This section is only required if TLS is to be enabled for the Ingress
  tls:
    - hosts:
        - www.example.com
      secretName: example-tls
If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:
apiVersion: v1
kind: Secret
metadata:
  name: example-tls
  namespace: foo
data:
  tls.crt: <base64 encoded cert>
  tls.key: <base64 encoded key>
type: kubernetes.io/tls
Check that the nginx-ingress-controller resources have been created:
kubectl get pod,svc,deploy
# The output as below
$ kubectl get pod,svc,deploy
NAME READY STATUS RESTARTS AGE
pod/guestbook-tnc42 1/1 Running 0 38m
pod/guestbook-vqgws 1/1 Running 0 38m
pod/guestbook-vqnxf 1/1 Running 0 38m
pod/nginx-ingress-controller-ingress-nginx-controller-7f8f65bf4g6c7 1/1 Running 0 4m23s
pod/redis-master-dp7h7 1/1 Running 0 41m
pod/redis-slave-54mt6 1/1 Running 0 39m
pod/redis-slave-8g8h4 1/1 Running 0 39m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/guestbook LoadBalancer 10.100.231.216 aff3d414c479f4faaa2ab82062c87fe5-264485147.us-west-2.elb.amazonaws.com 3000:32767/TCP 38m
service/kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 88m
service/nginx-ingress-controller-ingress-nginx-controller LoadBalancer 10.100.57.204 a04b31985b6c64607a50d794ef692c57-291207293.us-west-2.elb.amazonaws.com 80:31382/TCP,443:32520/TCP 4m25s
service/nginx-ingress-controller-ingress-nginx-controller-admission ClusterIP 10.100.216.28 <none> 443/TCP 4m25s
service/redis-master ClusterIP 10.100.76.16 <none> 6379/TCP 40m
service/redis-slave ClusterIP 10.100.126.163 <none> 6379/TCP 39m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx-ingress-controller-ingress-nginx-controller 1/1 1 1 4m25s
The TYPE of the nginx-ingress-controller service is LoadBalancer, which is an L4 load balancer (the AWS ELB). The nginx-ingress-controller pod runs Nginx inside the EKS cluster to perform L7 load balancing.
Dive into the world of Kubernetes with our easy-to-follow guide on deploying an example application to Kubernetes. Whether you're a beginner or an experienced developer, this tutorial will demonstrate how to efficiently deploy and manage applications in a Kubernetes environment.
Learn how to set up your application with Kubernetes pods, services, and external load balancers to ensure scalable and resilient deployment.
Step-by-Step: Deploy an Example Application to Kubernetes
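The commands that create these objects are not shown above; they likely follow the classic guestbook-go sample, which uses replication controllers. A sketch, assuming the kubernetes/examples repository layout (the paths are an assumption, not taken from this article; verify them before running):
# Assumed raw paths into the kubernetes/examples guestbook-go sample
BASE=https://raw.githubusercontent.com/kubernetes/examples/master/guestbook-go
kubectl apply -f $BASE/redis-master-controller.json
kubectl apply -f $BASE/redis-master-service.json
kubectl apply -f $BASE/redis-slave-controller.json
kubectl apply -f $BASE/redis-slave-service.json
kubectl apply -f $BASE/guestbook-controller.json
kubectl apply -f $BASE/guestbook-service.json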
As a result, the redis-master, redis-slave, and guestbook replication controllers are created:
kubectl get replicationcontroller
# The output as below
$ kubectl get replicationcontroller
NAME DESIRED CURRENT READY AGE
guestbook 3 3 3 2m13s
redis-master 1 1 1 4m32s
redis-slave 2 2 2 3m10s
Get the pods and services:
kubectl get pod,service
The terminal output:
$ kubectl get pod,service
NAME READY STATUS RESTARTS AGE
pod/guestbook-tnc42 1/1 Running 0 2m33s
pod/guestbook-vqgws 1/1 Running 0 2m33s
pod/guestbook-vqnxf 1/1 Running 0 2m33s
pod/redis-master-dp7h7 1/1 Running 0 4m52s
pod/redis-slave-54mt6 1/1 Running 0 3m30s
pod/redis-slave-8g8h4 1/1 Running 0 3m30s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/guestbook LoadBalancer 10.100.231.216 aff3d414c479f4faaa2ab82062c87fe5-264485147.us-west-2.elb.amazonaws.com 3000:32767/TCP 2m25s
service/kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 52m
service/redis-master ClusterIP 10.100.76.16 <none> 6379/TCP 4m24s
service/redis-slave ClusterIP 10.100.126.163 <none> 6379/TCP 3m25s
You have deployed an example application to Kubernetes. We will expose these pods with Ingress in a later post.
Congratulations on taking your first steps towards mastering application deployment with Kubernetes! By following this tutorial, you’ve gained valuable insights into the intricacies of Kubernetes architecture and how to leverage it for effective application management. Continue to explore and expand your skills, and don’t hesitate to revisit this guide or explore further topics on our website to enhance your Kubernetes expertise.
I hope you find this helpful. Thank you for reading the DevopsRoles page!
First, deploy the Kubernetes metrics server:
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.6/components.yaml
#The output
E:\Study\cka\devopsroles> kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.6/components.yaml
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
serviceaccount/metrics-server created
deployment.apps/metrics-server created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
Check that the metrics-server deployment is ready:
kubectl get deployment metrics-server -n kube-system
The terminal output:
E:\Study\cka\devopsroles>kubectl get deployment metrics-server -n kube-system
NAME READY UP-TO-DATE AVAILABLE AGE
metrics-server 1/1 1 1 19s
Next, deploy the Kubernetes dashboard. The output shows the resources created in the kubernetes-dashboard namespace:
E:\Study\cka\devopsroles>kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
Check the namespaces and the dashboard resources:
kubectl get namespace
kubectl get pod,deployment,svc -n kubernetes-dashboard
The terminal output:
E:\Study\cka\devopsroles>kubectl get namespace
NAME STATUS AGE
default Active 12m
kube-node-lease Active 12m
kube-public Active 12m
kube-system Active 12m
kubernetes-dashboard Active 73s
E:\Study\cka\devopsroles>kubectl get pod,deployment,svc -n kubernetes-dashboard
NAME READY STATUS RESTARTS AGE
pod/dashboard-metrics-scraper-76585494d8-hbv6t 1/1 Running 0 60s
pod/kubernetes-dashboard-5996555fd8-qhcxz 1/1 Running 0 64s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/dashboard-metrics-scraper 1/1 1 1 60s
deployment.apps/kubernetes-dashboard 1/1 1 1 64s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/dashboard-metrics-scraper ClusterIP 10.100.242.231 <none> 8000/TCP 63s
service/kubernetes-dashboard ClusterIP 10.100.135.44 <none> 443/TCP 78s
Create RBAC resources to control which metrics are visible.
Create a file named eks-admin-service-account.yaml with the following content:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: eks-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: eks-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin # this is the cluster admin role
subjects:
- kind: ServiceAccount
  name: eks-admin
  namespace: kube-system
Apply it
kubectl apply -f eks-admin-service-account.yaml
The terminal output:
E:\Study\cka\devopsroles>kubectl apply -f eks-admin-service-account.yaml
serviceaccount/eks-admin created
clusterrolebinding.rbac.authorization.k8s.io/eks-admin created
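With the service account in place, you can retrieve its bearer token to log in to the dashboard. This follows the usual AWS dashboard tutorial pattern; run it from a bash shell (on Windows cmd, run the inner get secret command first and substitute the secret name by hand):
# Print the eks-admin token; paste it into the dashboard login form
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep eks-admin | awk '{print $1}')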
Explore the essential Helm commands for Kubernetes in this detailed tutorial. Whether you’re a beginner or a seasoned Kubernetes user, this guide will help you install Helm and utilize its commands to manage charts and repositories effectively, streamlining your Kubernetes deployments.
In this tutorial, you will learn how to install Helm and run Helm commands. Helm provides many commands for managing charts and Helm repositories. Now, let's explore Helm commands for Kubernetes.
Helm Commands for Kubernetes: Simplifies application deployment and management in Kubernetes environments.
What is a Helm Chart?: A package containing pre-configured Kubernetes resources used for deploying applications.
Purpose of Helm: Streamlines deployments, manages versions and rollbacks, and allows customization of installations through charts.
Using Helm Charts: Install Helm, add repositories, and manage applications within your Kubernetes cluster using Helm’s command suite, including install, update, and delete operations.
Windows PowerShell
Copyright (C) Microsoft Corporation. All rights reserved.
Try the new cross-platform PowerShell https://aka.ms/pscore6
PS C:\Windows\system32> choco install kubernetes-helm
Chocolatey v0.10.15
Installing the following packages:
kubernetes-helm
By installing you accept licenses for the packages.
Progress: Downloading kubernetes-helm 3.5.4... 100%
kubernetes-helm v3.5.4 [Approved]
kubernetes-helm package files install completed. Performing other installation steps.
The package kubernetes-helm wants to run 'chocolateyInstall.ps1'.
Note: If you don't run this script, the installation will fail.
Note: To confirm automatically next time, use '-y' or consider:
choco feature enable -n allowGlobalConfirmation
Do you want to run the script?([Y]es/[A]ll - yes to all/[N]o/[P]rint): Yes
Downloading kubernetes-helm 64 bit
from 'https://get.helm.sh/helm-v3.5.4-windows-amd64.zip'
Progress: 100% - Completed download of C:\Users\USERNAME\AppData\Local\Temp\chocolatey\kubernetes-helm\3.5.4\helm-v3.5.4-windows-amd64.zip (11.96 MB).
Download of helm-v3.5.4-windows-amd64.zip (11.96 MB) completed.
Hashes match.
Extracting C:\Users\USERNAME\AppData\Local\Temp\chocolatey\kubernetes-helm\3.5.4\helm-v3.5.4-windows-amd64.zip to C:\ProgramData\chocolatey\lib\kubernetes-helm\tools...
C:\ProgramData\chocolatey\lib\kubernetes-helm\tools
ShimGen has successfully created a shim for helm.exe
The install of kubernetes-helm was successful.
Software installed to 'C:\ProgramData\chocolatey\lib\kubernetes-helm\tools'
Chocolatey installed 1/1 packages.
See the log for details (C:\ProgramData\chocolatey\logs\chocolatey.log).
Did you know the proceeds of Pro (and some proceeds from other
licensed editions) go into bettering the community infrastructure?
Your support ensures an active community, keeps Chocolatey tip top,
plus it nets you some awesome features!
https://chocolatey.org/compare
PS C:\Windows\system32> helm version
version.BuildInfo{Version:"v3.5.4", GitCommit:"1b5edb69df3d3a08df77c9902dc17af864ff05d1", GitTreeState:"clean", GoVersion:"go1.15.11"}
PS C:\Windows\system32>
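The searches below assume the bitnami and stable chart repositories are already registered; if they are not, add them first:
# Register the chart repositories queried by the searches below
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo add stable https://charts.helm.sh/stable
helm repo update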
E:\Study\cka\devopsroles>helm search repo nginx
NAME CHART VERSION APP VERSION DESCRIPTION
bitnami/nginx 8.9.1 1.19.10 Chart for the nginx server
bitnami/nginx-ingress-controller 7.6.9 0.46.0 Chart for the nginx Ingress controller
stable/nginx-ingress 1.41.3 v0.34.1 DEPRECATED! An nginx Ingress controller that us...
stable/nginx-ldapauth-proxy 0.1.6 1.13.5 DEPRECATED - nginx proxy with ldapauth
stable/nginx-lego 0.3.1 Chart for nginx-ingress-controller and kube-lego
bitnami/kong 3.7.4 2.4.1 Kong is a scalable, open source API layer (aka ...
stable/gcloud-endpoints 0.1.2 1 DEPRECATED Develop, deploy, protect and monitor...
E:\Study\cka\devopsroles>helm search repo bitnami/nginx
NAME CHART VERSION APP VERSION DESCRIPTION
bitnami/nginx 8.9.1 1.19.10 Chart for the nginx server
bitnami/nginx-ingress-controller 7.6.9 0.46.0 Chart for the nginx Ingress controller
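The status, history, get, and uninstall commands below assume a release named nginx already exists; a sketch of how it could have been created from the bitnami chart:
# Install the bitnami/nginx chart as a release named "nginx"
helm install nginx bitnami/nginx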
helm status nginx
helm history nginx
# get manifest and values from deployment
helm get manifest nginx
helm get values nginx
helm uninstall nginx
Conclusion
Mastering Helm commands enhances your Kubernetes management skills, allowing for more efficient application deployment and management. This tutorial provides the foundation you need to confidently use Helm in your Kubernetes environment, improving your operational capabilities.
You now know how to use Helm commands for Kubernetes. I hope you find this helpful. Thank you for reading the DevopsRoles page!
In this tutorial, you will learn how to set up an EKS 1.16 cluster with eksctl. Amazon Elastic Kubernetes Service (Amazon EKS) gives you the flexibility to start, run, and scale Kubernetes applications in the AWS cloud or on-premises. Now, let's install EKS on AWS.
It is easier to switch to a different AWS IAM user or IAM role identity with 'export AWS_PROFILE=PROFILE_NAME'. I will not use the 'default' profile created by 'aws configure'. For example, I create a named AWS profile 'devopsroles-demo' in one of two ways: interactively with aws configure (the transcript below), or by editing the credentials file directly (sketched after the transcript).
aws configure --profile devopsroles-demo
E:\Study\cka\devopsroles>aws configure --profile devopsroles-demo
AWS Access Key ID [None]: XXXXZHBNJLCKKCE7EQQQ
AWS Secret Access Key [None]: fdfdfdfd43434dYlQ1il1xKNCnqwUvNHFSv41111
Default region name [None]:
Default output format [None]:
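The second way is to add the profile directly to the AWS credentials file; a minimal sketch with placeholder key values (on Windows the file lives at %UserProfile%\.aws\credentials):
# Append a named profile to the shared credentials file
cat >> ~/.aws/credentials <<'EOF'
[devopsroles-demo]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
EOF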
E:\Study\cka\devopsroles>set AWS_PROFILE=devopsroles-demo
E:\Study\cka\devopsroles>aws sts get-caller-identity
{
"UserId": "AAQAZHBNJLCKPEGKYAV1R",
"Account": "456602660300",
"Arn": "arn:aws:iam::456602660300:user/devopsroles-demo"
}
# install eksctl from chocolatey
chocolatey install -y eksctl
eksctl version
Create an SSH key for the EKS worker nodes:
ssh-keygen
# Example key name: devopsroles_worker_nodes_demo.pem
Set up the EKS cluster with eksctl (so you don't need to manually create a VPC).
The eksctl tool will create the K8s control plane (master nodes, etcd, API server, etc.), worker nodes, VPC, security groups, subnets, routes, internet gateway, and more.
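A hedged example of such a command; the cluster name, region, node type, and key name below are illustrative placeholders, not values taken from this article:
# Create an EKS 1.16 cluster with a managed VPC and a worker nodegroup
eksctl create cluster \
  --name devopsroles-demo \
  --version 1.16 \
  --region us-west-2 \
  --nodegroup-name workers \
  --node-type t3.medium \
  --nodes 2 \
  --ssh-access \
  --ssh-public-key devopsroles_worker_nodes_demo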
In the rapidly evolving world of cloud-native applications, Kubernetes has emerged as the go-to platform for automating deployment, scaling, and managing containerized applications. For those who are new to Kubernetes or looking to experiment with it in a local environment, Minikube is the ideal tool. Minikube allows you to run a single-node Kubernetes cluster on your local machine, making it easier to learn and test.
This guide will walk you through the process of deploying and managing Pods on Kubernetes Minikube. We will cover everything from basic concepts to advanced operations like scaling and exposing your services. Whether you are a beginner or an experienced developer, this guide will provide you with valuable insights and practical steps to effectively manage your Kubernetes environment.
What is Kubernetes Minikube?
Kubernetes is an open-source platform that automates the deployment, scaling, and operation of application containers across clusters of hosts. Minikube is a tool that enables you to run a single-node Kubernetes cluster on your local machine. It’s an excellent way to start learning Kubernetes without needing access to a full-fledged Kubernetes cluster.
Key Components of Kubernetes Minikube
Before diving into the hands-on steps, let’s understand some key components you’ll interact with:
Service: An abstract way to expose an application running on a set of Pods as a network service.
Pod: The smallest and simplest Kubernetes object. A Pod represents a running process on your cluster and contains one or more containers.
kubectl: The command-line interface (CLI) tool used to interact with Kubernetes clusters.
Kubernetes Minikube Deploy Pods
Create Pods
[root@DevopsRoles ~]# kubectl run test-nginx --image=nginx --replicas=2 --port=80
[root@DevopsRoles ~]# kubectl get pods
The output shows the environment variables for the test-nginx pods.
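To actually print those variables, exec into one of the generated pods; the pod-name suffix below is a placeholder you take from the kubectl get pods output:
# Look up the generated pod name, then dump its environment
kubectl get pods
kubectl exec test-nginx-<suffix> -- env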
What is Minikube?
Minikube is a tool that allows you to run a Kubernetes cluster locally on your machine. It's particularly useful for learning and testing Kubernetes without the need for a full-blown cluster.
How do I create a Pod in Kubernetes Minikube?
You can create a Pod in Kubernetes Minikube using the kubectl run command. For example: kubectl run test-nginx --image=nginx --replicas=2 --port=80.
How can I scale a Pod in Kubernetes?
To scale a Pod in Kubernetes, you can use the kubectl scale command. For instance, kubectl scale deployment test-nginx --replicas=3 will scale the deployment to three replicas.
What is the purpose of a Service in Kubernetes?
A Service in Kubernetes is used to expose an application running on a set of Pods as a network service. It allows external traffic to access the Pods.
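For example, on Minikube you can expose the test-nginx deployment as a NodePort Service and get a reachable URL:
# Expose the deployment on a node port
kubectl expose deployment test-nginx --type=NodePort --port=80
# Ask Minikube for the URL of the new service
minikube service test-nginx --url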
How do I delete a Service in Kubernetes?
You can delete a Service in Kubernetes using the kubectl delete services <service-name> command. For example: kubectl delete services test-nginx.
Conclusion
Deploying and managing Pods on Kubernetes Minikube is a foundational skill for anyone working in cloud-native environments. This guide has provided you with the essential steps to create, scale, expose, and delete Pods and Services using Minikube.
By mastering these operations, you’ll be well-equipped to manage more complex Kubernetes deployments in production environments. Whether you’re scaling applications, troubleshooting issues, or exposing services, the knowledge gained from this guide will be invaluable. Thank you for reading the DevopsRoles page!