Install Kubernetes Cluster with KubeKey

Introduction

In this tutorial, you will learn how to install a Kubernetes cluster with KubeKey. I will deploy a Kubernetes cluster using the KubeKey quickstart.

You can use KubeKey to spin up a Kubernetes deployment for development and testing purposes, as Kubernetes is the de facto standard in container orchestration.

Requirements to install a Kubernetes cluster with KubeKey

  • A running instance of Ubuntu Server.
  • A user with sudo privileges.

Install Docker

First, you need to install Docker on the Ubuntu server. Refer to How to install Docker on Ubuntu Server.

How to Install KubeKey

Install the conntrack dependency, then download KubeKey and make it executable with the commands below:

sudo apt-get install conntrack -y
curl -sfL https://get-kk.kubesphere.io | VERSION=v1.2.1 sh -
chmod u+x kk

Make kk runnable from any directory by copying the file to the /usr/local/bin directory with the command below:

sudo cp kk /usr/local/bin

Verify the installation:

kk -h

The output terminal is as below:

vagrant@devopsroles:~$ kk -h
Deploy a Kubernetes or KubeSphere cluster efficiently, flexibly and easily. There are three scenarios to use KubeKey.
1. Install Kubernetes only
2. Install Kubernetes and KubeSphere together in one command
3. Install Kubernetes first, then deploy KubeSphere on it using https://github.com/kubesphere/ks-installer

Usage:
  kk [command]

Available Commands:
  add         Add nodes to kubernetes cluster
  certs       cluster certs
  completion  Generate shell completion scripts
  create      Create a cluster or a cluster configuration file
  delete      Delete nodes or cluster
  help        Help about any command
  init        Initializes the installation environment
  upgrade     Upgrade your cluster smoothly to a newer version with this command
  version     print the client version information

Flags:
      --debug        Print detailed information (default true)
  -h, --help         help for kk
      --in-cluster   Running inside the cluster

Use "kk [command] --help" for more information about a command.

Deploy the Cluster

Deploy the cluster with the command below:

sudo kk create cluster

This process will take some time.

The output terminal is as below:

vagrant@devopsroles:~$ sudo kk create cluster
+-------------+------+------+---------+----------+-------+-------+-----------+----------+------------+-------------+------------------+--------------+
| name        | sudo | curl | openssl | ebtables | socat | ipset | conntrack | docker   | nfs client | ceph client | glusterfs client | time         |
+-------------+------+------+---------+----------+-------+-------+-----------+----------+------------+-------------+------------------+--------------+
| devopsroles | y    | y    | y       | y        |       |       | y         | 20.10.12 |            |             |                  | UTC 14:06:10 |
+-------------+------+------+---------+----------+-------+-------+-----------+----------+------------+-------------+------------------+--------------+

This is a simple check of your environment.
Before installation, you should ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations

Continue this installation? [yes/no]: yes
INFO[14:06:20 UTC] Downloading Installation Files
INFO[14:06:20 UTC] Downloading kubeadm ...
INFO[14:06:24 UTC] Downloading kubelet ...
INFO[14:06:32 UTC] Downloading kubectl ...
INFO[14:06:36 UTC] Downloading helm ...
INFO[14:06:38 UTC] Downloading kubecni ...
INFO[14:06:43 UTC] Downloading etcd ...
INFO[14:06:48 UTC] Downloading docker ...
INFO[14:06:52 UTC] Downloading crictl ...
INFO[14:06:55 UTC] Configuring operating system ...
[devopsroles 10.0.2.15] MSG:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
vm.swappiness = 1
fs.inotify.max_user_instances = 524288
no crontab for root
INFO[14:06:56 UTC] Get cluster status
INFO[14:06:56 UTC] Installing Container Runtime ...
INFO[14:06:56 UTC] Start to download images on all nodes
[devopsroles] Downloading image: kubesphere/pause:3.4.1
[devopsroles] Downloading image: kubesphere/kube-apiserver:v1.21.5
[devopsroles] Downloading image: kubesphere/kube-controller-manager:v1.21.5
[devopsroles] Downloading image: kubesphere/kube-scheduler:v1.21.5
[devopsroles] Downloading image: kubesphere/kube-proxy:v1.21.5
[devopsroles] Downloading image: coredns/coredns:1.8.0
[devopsroles] Downloading image: kubesphere/k8s-dns-node-cache:1.15.12
[devopsroles] Downloading image: calico/kube-controllers:v3.20.0
[devopsroles] Downloading image: calico/cni:v3.20.0
[devopsroles] Downloading image: calico/node:v3.20.0
[devopsroles] Downloading image: calico/pod2daemon-flexvol:v3.20.0
INFO[14:08:42 UTC] Getting etcd status
[devopsroles 10.0.2.15] MSG:
Configuration file will be created
INFO[14:08:42 UTC] Generating etcd certs
INFO[14:08:43 UTC] Synchronizing etcd certs
INFO[14:08:43 UTC] Creating etcd service
Push /home/vagrant/kubekey/v1.21.5/amd64/etcd-v3.4.13-linux-amd64.tar.gz to 10.0.2.15:/tmp/kubekey/etcd-v3.4.13-linux-amd64.tar.gz   Done
INFO[14:08:43 UTC] Starting etcd cluster
INFO[14:08:44 UTC] Refreshing etcd configuration
[devopsroles 10.0.2.15] MSG:
Created symlink /etc/systemd/system/multi-user.target.wants/etcd.service → /etc/systemd/system/etcd.service.
INFO[14:08:47 UTC] Backup etcd data regularly
INFO[14:08:54 UTC] Installing kube binaries
Push /home/vagrant/kubekey/v1.21.5/amd64/kubeadm to 10.0.2.15:/tmp/kubekey/kubeadm   Done
Push /home/vagrant/kubekey/v1.21.5/amd64/kubelet to 10.0.2.15:/tmp/kubekey/kubelet   Done
Push /home/vagrant/kubekey/v1.21.5/amd64/kubectl to 10.0.2.15:/tmp/kubekey/kubectl   Done
Push /home/vagrant/kubekey/v1.21.5/amd64/helm to 10.0.2.15:/tmp/kubekey/helm   Done
Push /home/vagrant/kubekey/v1.21.5/amd64/cni-plugins-linux-amd64-v0.9.1.tgz to 10.0.2.15:/tmp/kubekey/cni-plugins-linux-amd64-v0.9.1.tgz   Done
INFO[14:09:01 UTC] Initializing kubernetes cluster
[devopsroles 10.0.2.15] MSG:
W0323 14:09:02.442567    4631 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.21.5
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING FileExisting-socat]: socat not found in system path
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [devopsroles devopsroles.cluster.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local localhost] and IPs [10.233.0.1 10.0.2.15 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 7.502071 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node devopsroles as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node devopsroles as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: w6uaty.abpybmw8jhw1tdlg
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join lb.kubesphere.local:6443 --token w6uaty.abpybmw8jhw1tdlg \
        --discovery-token-ca-cert-hash sha256:98d31447eb6457d74c0d13088aceed7ae8dd1fd0b8e98cb1a15683fcbb5ef4d5 \
        --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join lb.kubesphere.local:6443 --token w6uaty.abpybmw8jhw1tdlg \
        --discovery-token-ca-cert-hash sha256:98d31447eb6457d74c0d13088aceed7ae8dd1fd0b8e98cb1a15683fcbb5ef4d5
[devopsroles 10.0.2.15] MSG:
node/devopsroles untainted
[devopsroles 10.0.2.15] MSG:
node/devopsroles labeled
[devopsroles 10.0.2.15] MSG:
service "kube-dns" deleted
[devopsroles 10.0.2.15] MSG:
service/coredns created
Warning: resource clusterroles/system:coredns is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
clusterrole.rbac.authorization.k8s.io/system:coredns configured
[devopsroles 10.0.2.15] MSG:
serviceaccount/nodelocaldns created
daemonset.apps/nodelocaldns created
[devopsroles 10.0.2.15] MSG:
configmap/nodelocaldns created
INFO[14:09:34 UTC] Get cluster status
INFO[14:09:35 UTC] Joining nodes to cluster
INFO[14:09:35 UTC] Deploying network plugin ...
[devopsroles 10.0.2.15] MSG:
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/calico-kube-controllers created
INFO[14:09:36 UTC] Congratulations! Installation is successful.
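
The output above is for a single all-in-one node. For a multi-node cluster, you can instead generate a cluster configuration file, edit its host list, and create the cluster from the file. This is a sketch; the file name is KubeKey's default and the hosts you add are your own:

sudo kk create config -f config-sample.yaml
# edit the hosts and roleGroups sections in config-sample.yaml, then:
sudo kk create cluster -f config-sample.yaml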

Verify that kubectl has been installed with the command below:

kubectl --help

The output terminal is as below:

vagrant@devopsroles:~$ kubectl --help
kubectl controls the Kubernetes cluster manager.

 Find more information at: https://kubernetes.io/docs/reference/kubectl/overview/

Basic Commands (Beginner):
  create        Create a resource from a file or from stdin.
  expose        Take a replication controller, service, deployment or pod and expose it as a new Kubernetes Service
  run           Run a particular image on the cluster
  set           Set specific features on objects

Basic Commands (Intermediate):
  explain       Documentation of resources
  get           Display one or many resources
  edit          Edit a resource on the server
  delete        Delete resources by filenames, stdin, resources and names, or by resources and label selector

Deploy Commands:
  rollout       Manage the rollout of a resource
  scale         Set a new size for a Deployment, ReplicaSet or Replication Controller
  autoscale     Auto-scale a Deployment, ReplicaSet, StatefulSet, or ReplicationController

Cluster Management Commands:
  certificate   Modify certificate resources.
  cluster-info  Display cluster info
  top           Display Resource (CPU/Memory) usage.
  cordon        Mark node as unschedulable
  uncordon      Mark node as schedulable
  drain         Drain node in preparation for maintenance
  taint         Update the taints on one or more nodes

Troubleshooting and Debugging Commands:
  describe      Show details of a specific resource or group of resources
  logs          Print the logs for a container in a pod
  attach        Attach to a running container
  exec          Execute a command in a container
  port-forward  Forward one or more local ports to a pod
  proxy         Run a proxy to the Kubernetes API server
  cp            Copy files and directories to and from containers.
  auth          Inspect authorization
  debug         Create debugging sessions for troubleshooting workloads and nodes

Advanced Commands:
  diff          Diff live version against would-be applied version
  apply         Apply a configuration to a resource by filename or stdin
  patch         Update field(s) of a resource
  replace       Replace a resource by filename or stdin
  wait          Experimental: Wait for a specific condition on one or many resources.
  kustomize     Build a kustomization target from a directory or URL.

Settings Commands:
  label         Update the labels on a resource
  annotate      Update the annotations on a resource
  completion    Output shell completion code for the specified shell (bash or zsh)

Other Commands:
  api-resources Print the supported API resources on the server
  api-versions  Print the supported API versions on the server, in the form of "group/version"
  config        Modify kubeconfig files
  plugin        Provides utilities for interacting with plugins.
  version       Print the client and server version information

Usage:
  kubectl [flags] [options]

Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).

Deploy the Kubernetes Dashboard

Deploy the Kubernetes Dashboard with the command:

sudo kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml

After the dashboard is deployed, determine the service details with the command below:

sudo kubectl get svc -n kubernetes-dashboard

The output terminal is as below:

vagrant@devopsroles:~$ sudo kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
dashboard-metrics-scraper   ClusterIP   10.233.19.190   <none>        8000/TCP   64s
kubernetes-dashboard        ClusterIP   10.233.23.203   <none>        443/TCP    64s

You might want to expose the dashboard via NodePort instead of ClusterIP:

sudo kubectl edit svc kubernetes-dashboard -o yaml -n kubernetes-dashboard

This will open the service definition in the vi editor.

Before:

type: ClusterIP

After:

type: NodePort
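
If you prefer a non-interactive change, the same edit can be done with kubectl patch (equivalent to the manual edit above):

sudo kubectl patch svc kubernetes-dashboard -n kubernetes-dashboard -p '{"spec": {"type": "NodePort"}}'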

Next, you must run the kubectl proxy command as below:

sudo kubectl proxy

While the proxy is running, open a web browser and point it to the IP address and port number listed in the results from the sudo kubectl get svc -n kubernetes-dashboard command.

vagrant@devopsroles:~$ sudo kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.233.19.190   <none>        8000/TCP        47h
kubernetes-dashboard        NodePort    10.233.23.203   <none>        443:32469/TCP   47h

Open a web browser:

https://192.168.3.7:32469/

To log in to the dashboard, you need to create a ServiceAccount object and a ClusterRoleBinding object. Create the account with the name admin-user in the namespace kubernetes-dashboard:

cat <<EOF | sudo kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
EOF

Create a ClusterRoleBinding object.

cat <<EOF | sudo kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF

Retrieve the login token for that account with the command below:

sudo kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')

Copy the token value from the command output and paste it into the dashboard login screen.
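
Note: on Kubernetes 1.24 and later, long-lived ServiceAccount token secrets are no longer created automatically, so the command above would return nothing. On those versions you can request a token directly instead:

sudo kubectl -n kubernetes-dashboard create token admin-user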

This isn’t production-ready, but it’s a great way to get up to speed with Kubernetes and even develop for the platform.


Conclusions

You have installed a Kubernetes cluster with KubeKey. I hope this is helpful to you. Thank you for reading the DevopsRoles page!

How to Docker install Oracle 12c: A Step-by-Step Guide

Introduction

In this quick-start tutorial, learn how to install Oracle 12c with Docker. This guide provides straightforward steps for setting up Oracle 12c in a Docker container, allowing you to leverage the benefits of a virtualized environment for database management. It is perfect for those seeking a practical approach to deploying Oracle 12c with Docker.

Requirements

  • A Docker account. Create an account here.
  • Docker installed or updated on your PC.

Oracle Database 12c Docker Image

Oracle Database Enterprise Edition 12c is available as an image in the Docker Store.

Docker install Oracle 12c

To start an Oracle Database Server instance:

  • Log in to Docker
  • Pull Oracle Database Enterprise Edition 12.2.0.1
  • Run the Docker container from the image

The command lines are below:

$ docker login
$ docker pull store/oracle/database-enterprise:12.2.0.1
$ mkdir ~/oracle-db-data
$ chmod 775 ~/oracle-db-data
$ sudo chown 54321:54322  ~/oracle-db-data
$ docker run -d -it --name oracle-db-12c \
-p 1521:1521 \
-e DB_SID=ORADEV \
-e DB_PDB=ORADEVPDB \
-e DB_DOMAIN=oracledb.devopsroles.local \
-v ~/oracle-db-data:/ORCL \
store/oracle/database-enterprise:12.2.0.1

I created an oracle-db-data directory on the host system for the data volume. This data volume is mounted inside the container at /ORCL.

  • Container name: oracle-db-12c
  • Host port 1521 mapped to guest port 1521
  • Environment variables: DB_SID=ORADEV, DB_PDB=ORADEVPDB, DB_DOMAIN=oracledb.devopsroles.local

The output terminal is as below:

vagrant@devopsroles:~$ docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username: your_account
Password:

Login Succeeded

vagrant@devopsroles:~$ docker pull store/oracle/database-enterprise:12.2.0.1
12.2.0.1: Pulling from store/oracle/database-enterprise
4ce27fe12c04: Pull complete
9d3556e8e792: Pull complete
fc60a1a28025: Pull complete
0c32e4ed872e: Pull complete
b465d9b6e399: Pull complete
Digest: sha256:40760ac70dba2c4c70d0c542e42e082e8b04d9040d91688d63f728af764a2f5d
Status: Downloaded newer image for store/oracle/database-enterprise:12.2.0.1
docker.io/store/oracle/database-enterprise:12.2.0.1

vagrant@devopsroles:~$ mkdir ~/oracle-db-data
vagrant@devopsroles:~$ chmod 775 ~/oracle-db-data
vagrant@devopsroles:~$ sudo chown 54321:54322  ~/oracle-db-data

vagrant@devopsroles:~$ docker run -d -it --name oracle-db-12c \
> -p 1521:1521 \
> -e DB_SID=ORADEV \
> -e DB_PDB=ORADEVPDB \
> -e DB_DOMAIN=oracledb.devopsroles.local \
> -v ~/oracle-db-data:/ORCL \
> store/oracle/database-enterprise:12.2.0.1
7589a72bb9794d6408eb4b635772b22bc3e711210d3e4422ab0dce639a439c4a

You can check and monitor the container with the command lines below:

$ docker logs -f oracle-db-12c
$ docker ps --format 'table {{.Names}}\t{{.Image}}\t{{.Status}}\t{{.Ports}}' -f name=oracle-db-12c

Oracle Database Setup

The default password for connecting to the database as the sys user is Oradoc_db1. Check the character set, which should be AL32UTF8:

docker exec -it oracle-db-12c bash -c "source /home/oracle/.bashrc; sqlplus /nolog"

SQL> connect sys/Oradoc_db1@ORADEV as sysdba
SQL> alter session set container=ORADEVPDB;
SQL> show parameters db_block_size;
SQL> select value from nls_database_parameters where parameter='NLS_CHARACTERSET';

Create data and temp tablespaces

SQL> 
--Create tablespace for new devopsroles Project 
CREATE TABLESPACE huupv_devopsroles_data DATAFILE '/u01/app/oracle/product/12.2.0/dbhome_1/data/huupv_devopsroles_data.db' SIZE 64M AUTOEXTEND ON NEXT 32M MAXSIZE 4096M EXTENT MANAGEMENT LOCAL;
 
--Create temp tablespace for new devopsroles Project
CREATE TEMPORARY TABLESPACE huupv_devopsroles_temp TEMPFILE '/u01/app/oracle/product/12.2.0/dbhome_1/data/huupv_devopsroles_temp.db' SIZE 64M  AUTOEXTEND ON NEXT 32M MAXSIZE 4096M EXTENT MANAGEMENT LOCAL;

Notes:

  • Do not start with too large an initial size, because it can waste space.
  • Use a single block size (8K) for the whole database.
  • Do not allow individual data files to grow too large (beyond 8-10 GB).

Create a user and assign a grant

SQL> 
--Create user for devopsroles schema
CREATE USER huupv_devopsroles IDENTIFIED BY huupv_devopsroles DEFAULT TABLESPACE huupv_devopsroles_data TEMPORARY TABLESPACE huupv_devopsroles_temp PROFILE default ACCOUNT UNLOCK;
 
--Assign grant to user 
GRANT CONNECT TO huupv_devopsroles;
GRANT RESOURCE TO huupv_devopsroles;
GRANT UNLIMITED TABLESPACE TO huupv_devopsroles;

Test the new schema using a tool such as Oracle SQL Developer.

The parameters to connect to the new schema are:

  • Username/password: huupv_devopsroles/huupv_devopsroles
  • Hostname: 192.168.3.7
  • Service Name: ORADEVPDB.oracledb.devopsroles.local
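
If you don't have SQL Developer at hand, a quick connection test can be made from inside the container with SQL*Plus easy connect syntax. This is a sketch using the values above:

docker exec -it oracle-db-12c bash -c "source /home/oracle/.bashrc; sqlplus huupv_devopsroles/huupv_devopsroles@//localhost:1521/ORADEVPDB.oracledb.devopsroles.local"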

Conclusions

While installing Oracle Database 12c on Docker is not officially supported by Oracle, which only offers Docker images for Database 18c and later, you can still proceed by following the outlined steps in this guide.

Keep in mind that these steps are unofficial and might present certain limitations or compatibility issues. For optimal results and support, consider using the officially provided Oracle Database versions on Docker. I hope this is helpful to you. Thank you for reading the DevopsRoles page!

Docker deploy MySQL cluster

Introduction

In this tutorial, you will learn how to deploy a MySQL cluster with Docker and connect to the nodes from your local machine. We will be deploying the MySQL servers with Docker.

To deploy a MySQL cluster using Docker, you can use the official MySQL Docker images. Here's a step-by-step guide. The cluster will consist of:

  • 1 Management node
  • 2 Data nodes
  • 2 SQL nodes

The nodes in the cluster are running on separate hosts in a network.

First, make sure Docker is installed on your machine.

Docker deploy MySQL cluster

Step 1: Create the docker network.

I will create a network for the MySQL cluster with the following Docker command:

docker network create cluster --subnet=192.168.4.0/24
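
You can verify the network and its subnet before continuing:

docker network inspect cluster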

Step 2: Get the mysql-docker repository

git clone https://github.com/mysql/mysql-docker.git
cd mysql-docker
git checkout -b mysql-cluster

I will change the IP address of each node to match the subnet. Open the mysql-cluster/8.0/cnf/mysql-cluster.cnf file.

For example:

[ndbd default]
NoOfReplicas=2
DataMemory=80M
IndexMemory=18M


[ndb_mgmd]
NodeId=1
hostname=192.168.4.2
datadir=/var/lib/mysql

[ndbd]
NodeId=2
hostname=192.168.4.3
datadir=/var/lib/mysql

[ndbd]
NodeId=3
hostname=192.168.4.4
datadir=/var/lib/mysql

[mysqld]
NodeId=4
hostname=192.168.4.10

[mysqld]
NodeId=5
hostname=192.168.4.11

Open mysql-cluster/8.0/cnf/my.cnf and modify it as below:

[mysqld]
ndbcluster
ndb-connectstring=192.168.4.2
user=mysql

[mysql_cluster]
ndb-connectstring=192.168.4.2

Docker image build

The general syntax is: docker build -t <image_name> <path to Dockerfile>. Build the cluster image:

docker build -t mysql-cluster mysql-cluster/8.0

Step 3: Create the management node.

docker run -d --net=cluster --name=management1 --ip=192.168.4.2 mysql-cluster ndb_mgmd

Step 4: Create the data nodes

docker run -d --net=cluster --name=ndb1 --ip=192.168.4.3 mysql-cluster ndbd
docker run -d --net=cluster --name=ndb2 --ip=192.168.4.4 mysql-cluster ndbd

Step 5: Create the SQL nodes.

docker run -d --net=cluster --name=mysql1 --ip=192.168.4.10 -e MYSQL_RANDOM_ROOT_PASSWORD=true mysql-cluster mysqld
docker run -d --net=cluster --name=mysql2 --ip=192.168.4.11 -e MYSQL_RANDOM_ROOT_PASSWORD=true mysql-cluster mysqld

Step 6: Open the cluster management console.

docker run -it --net=cluster mysql-cluster ndb_mgm

The cluster management console will be loaded.

[Entrypoint] MySQL Docker Image 8.0.28-1.2.7-cluster
[Entrypoint] Starting ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm>

Run the show command:

ndb_mgm> show
Connected to Management Server at: 192.168.4.2:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2    @192.168.4.3  (mysql-8.0.28 ndb-8.0.28, Nodegroup: 0, *)
id=3    @192.168.4.4  (mysql-8.0.28 ndb-8.0.28, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @192.168.4.2  (mysql-8.0.28 ndb-8.0.28)

[mysqld(API)]   2 node(s)
id=4    @192.168.4.10  (mysql-8.0.28 ndb-8.0.28)
id=5    @192.168.4.11  (mysql-8.0.28 ndb-8.0.28)

ndb_mgm>

Step 7: Change the default passwords.

MySQL node 1:

The SQL nodes are initially created with a random password. Get the default password:

docker logs mysql1 2>&1 | grep PASSWORD

To change the password, log in with the default password retrieved above, then run:

docker exec -it mysql1 mysql -uroot -p
ALTER USER 'root'@'localhost' IDENTIFIED BY 'root';
FLUSH PRIVILEGES;

MySQL node 2:

The SQL nodes are initially created with a random password. Get the default password:

docker logs mysql2 2>&1 | grep PASSWORD

To change the password, log in with the default password retrieved above, then run:

docker exec -it mysql2 mysql -uroot -p
ALTER USER 'root'@'localhost' IDENTIFIED BY 'root';
FLUSH PRIVILEGES;

Step 8: Login and create a new database.

For example, I will create a huupv account on the mysql1 and mysql2 containers that can connect from any host.

# For mysql1
docker exec -it mysql1 mysql -uroot -p
CREATE USER 'huupv'@'%' IDENTIFIED BY '123456789';
GRANT ALL ON *.* TO 'huupv'@'%';
FLUSH PRIVILEGES;

# For mysql2
docker exec -it mysql2 mysql -uroot -p
CREATE USER 'huupv'@'%' IDENTIFIED BY '123456789';
GRANT ALL ON *.* TO 'huupv'@'%';
FLUSH PRIVILEGES;

Create a new database:

create schema test_db;

The output terminal is as below:

vagrant@devopsroles:~/mysql-docker$ docker exec -it mysql1 mysql -uroot -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 15
Server version: 8.0.28-cluster MySQL Cluster Community Server - GPL

Copyright (c) 2000, 2022, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> CREATE USER 'huupv'@'%' IDENTIFIED BY '123456789';
Query OK, 0 rows affected (0.02 sec)

mysql> GRANT ALL ON *.* TO 'huupv'@'%';
Query OK, 0 rows affected (0.11 sec)

mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.02 sec)

mysql> exit
Bye
vagrant@devopsroles:~/mysql-docker$ docker exec -it mysql2 mysql -uroot -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 16
Server version: 8.0.28-cluster MySQL Cluster Community Server - GPL

Copyright (c) 2000, 2022, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> CREATE USER 'huupv'@'%' IDENTIFIED BY '123456789';
ERROR 1396 (HY000): Operation CREATE USER failed for 'huupv'@'%'
mysql> GRANT ALL ON *.* TO 'huupv'@'%';
Query OK, 0 rows affected (0.10 sec)

mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.01 sec)

mysql> exit
Bye
vagrant@devopsroles:~/mysql-docker$

Log in from your local machine:

 mysql -h192.168.4.10 -uhuupv -p
 mysql -h192.168.4.11 -uhuupv -p
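
To confirm that the cluster is really sharing data between the SQL nodes, create an NDB table through one node and read it from the other. This is a sketch; the table name is illustrative:

mysql -h192.168.4.10 -uhuupv -p -e "CREATE TABLE test_db.t1 (id INT PRIMARY KEY) ENGINE=NDBCLUSTER;"
mysql -h192.168.4.11 -uhuupv -p -e "SHOW TABLES IN test_db;"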

Conclusion

You have successfully deployed a MySQL cluster using Docker. You can now use the cluster for your applications or explore additional MySQL clustering options, such as replication and high availability. I hope this is helpful to you. Thank you for reading the DevopsRoles page!

Docker install Nginx container

Introduction

In this tutorial, I will use Docker to install an Nginx web server and show how to set up Nginx in a Docker container. Now, let's install the Nginx container with Docker.

Docker is a container platform. I will install Docker on Debian/Ubuntu and run the Nginx container from Docker Hub.

Install Docker on Ubuntu here

Docker install Nginx container

Run an Nginx container using Docker. I will pull the Nginx image from Docker Hub:

$ docker pull nginx

List the Docker images with the command below:

$ docker images

Create a Docker volume for Nginx:

$ docker volume create nginx-data-persistent

Get the Docker volume information with the command below:

$ docker volume inspect nginx-data-persistent

Building a Web Page to Serve on Nginx

Create an HTML file

$ sudo vi /var/lib/docker/volumes/nginx-data-persistent/_data/index.html

The content is as below:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Learn Docker at DevopsRoles.com</title>
</head>
<body>
    <h1>Learn Docker With DevopsRoles.com</h1>   
</body>
</html>

Start the Nginx container with persistent data storage. The data is stored at /var/lib/docker/volumes/nginx-data-persistent/_data on the Ubuntu host, and the path inside the container is /usr/share/nginx/html. Run the command below:

$ docker run -d --name nginx-server -p 8080:80 -v nginx-data-persistent:/usr/share/nginx/html nginx

Explanation:

  • -d: run the container in detached mode
  • --name: name of the container to be created
  • -p: map a host port to a container port; here host port 8080 maps to guest port 80
  • -v: name of the Docker volume to mount

Optionally, you can create a symlink to the Docker volume directory:

$ sudo ln -s /var/lib/docker/volumes/nginx-data-persistent/_data /nginx-data
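
With the container running, you can check that Nginx serves the page from the volume (run on the host):

$ curl http://localhost:8080/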

Optional

Get a shell inside the Nginx container with the command below:

$ docker exec -it nginx-server /bin/bash

Now, you can stop the Nginx container:

docker stop nginx-server

and remove it:

docker rm nginx-server

You can clean up and delete the image that was used in the container.

docker image remove nginx

Conclusion

You have successfully installed and run an Nginx container using Docker. You can now customize the Nginx configuration, serve your own website, or explore additional Nginx features and settings. I hope this is helpful to you. Thank you for reading the DevopsRoles page!

Docker install Apache Web Server

Introduction

In this tutorial, I will use Docker to install an Apache web server and show how to set up Apache in a Docker container. Now, let's install the Apache web server with Docker.

Docker is a container platform. I will install Docker on Debian/Ubuntu and run the Apache 2.4 container from Docker Hub.

Install Docker on Ubuntu here

Setting up the Apache Docker Container

For example, I will run an Apache 2.4 container named devops-roles, using the httpd:2.4 image from Docker Hub, detached from the current terminal.

Host port 8080 is redirected to guest port 80 on the container. I will serve a simple web page from /home/vagrant/website.

docker run -itd --name devops-roles -p 8080:80 -v /home/vagrant/website/:/usr/local/apache2/htdocs/ httpd:2.4

Check that the Apache container is running with the command below:

docker ps

Create an index.html file inside the /home/vagrant/website directory:

sudo vi /home/vagrant/website/index.html

The content is as below:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Learn Docker at DevopsRoles.com</title>
</head>
<body>
    <h1>Learn Docker With DevopsRoles.com</h1>   
</body>
</html>

Open a web browser and go to server-ip:8080/index.html.

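You can also test from the terminal on the host:

curl http://localhost:8080/index.html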

Now, you can stop the Apache container:

docker stop devops-roles

and remove it:

docker rm devops-roles

You can clean up and delete the image that was used in the container.

docker image remove httpd:2.4

Conclusion

You have installed an Apache web server with Docker. I hope this is helpful to you. Thank you for reading the DevopsRoles page!

Docker Containers Share Data

Introduction

In this blog, we will explore various methods of data sharing in Docker containers and how they can be used effectively in your projects. Now, let's look at how Docker containers share data.

Docker containers can share data using various mechanisms provided by Docker. Here are a few ways to share data between Docker containers.

I will deploy two containers with Docker to share data between them.

Prerequisites

  • Host OS: Ubuntu 21.04
  • Installed Docker

Create a Volume for Docker containers share data

Volumes are a fundamental feature in Docker that enable data persistence and sharing between containers and the host system.

When you create a volume, it acts as a dedicated storage location that can be mounted into one or more containers. The data stored in a volume persists even if the container using it is removed.

We will create a volume to save our data with the command below:

docker volume create --name persistent-data-devops

The volume is created under the /var/lib/docker/volumes directory. Once a file has been written to the volume (as we do below), you can see it from the host:

vagrant@devopsroles:~$ sudo ls /var/lib/docker/volumes/persistent-data-devops/_data/test.txt
/var/lib/docker/volumes/persistent-data-devops/_data/test.txt

For example, we will deploy the first container, which will use the persistent volume:

docker run -it --name=container1 -v persistent-data-devops:/data ubuntu:latest

  • The container is named container1
  • The persistent-data-devops volume is mounted at the /data directory inside the new container

Inside the new container, create a new file:

echo "Hello, www.devopsroles.com" >> /data/test.txt

We'll now deploy a second container with the command below:

docker run -it --name=container2 -v persistent-data-devops:/data ubuntu:latest

Inside the second container, display the content:

cat /data/test.txt

Edit the /data/test.txt file. I use the vim command line.

Add the following at the bottom of the file:

This data share between containers 1 and 2

The output terminal is as below:

vagrant@devopsroles:~$ docker run -it --name=container2 -v persistent-data-devops:/data ubuntu:latest

root@ed16b6be95f8:/# cat /data/test.txt
Hello, www.devopsroles.com

root@ed16b6be95f8:/# vim /data/test.txt
root@ed16b6be95f8:/# cat /data/test.txt
Hello, www.devopsroles.com
This data share between containers 1 and 2

Exit the running container with the exit command. You can stop and remove the containers with the commands:

docker stop ID
docker rm ID

After stopping and removing container1, we can deploy it again, but the data will remain in the volume.
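
For example, a fresh container attached to the same volume can still read the file:

docker run --rm -v persistent-data-devops:/data ubuntu:latest cat /data/test.txt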

Conclusion

You now know how Docker containers share data. These are some of the common methods to share data between Docker containers. Choose the one that best suits your requirements based on the nature of the data and the use case you have in mind.

Data sharing is a crucial aspect of Docker containerization. By utilizing volumes, bind mounts, named volumes, Docker Compose, and Docker networks, you can effectively share data between containers and the host system. I hope this is helpful to you. Thank you for reading the DevopsRoles page!

Docker deploy MySQL and phpMyAdmin

Introduction

In this tutorial, you will learn how to deploy MySQL and phpMyAdmin with Docker.

Docker is a software platform designed to make it easier to create, deploy, and run applications by using containers.

MySQL Database Service is a fully managed database service to deploy cloud-native applications (quote from MySQL).

Prerequisites Docker deploy MySQL and phpMyAdmin

  • Host OS: Ubuntu 21.04
  • Installed Docker

Deploy the MySQL Database

First, I will create a volume so the MySQL data remains persistent. I will create a volume named mysql-volume with the command below:

docker volume create mysql-volume

The output terminal is as below:

vagrant@devopsroles:~$ docker volume create mysql-volume
mysql-volume
vagrant@devopsroles:~$ docker volume ls
DRIVER    VOLUME NAME
local     1b944b2bc6253d6ef43a7fe8c0b57f7e070a219d495a69d89ee5e33ab3ffc48f
local     4f63522a2fe6eec851e2502fc56bbed0706a73df8e2a119ef1a9382c234f1b5b
local     8c2473be65a52c7a0a47e2d7b48b6781f29255f3ef6372b60e4c227644ff0844
local     c6e1ca4cc9617ca9031d974f4d19a5efbac96102850aa255d07c9dc53bd7f6e4
local     c509424aaaeda0374dc1d3d849bc245a183e865cddffa4bc1606d736ee3fb615
local     cb6583b8ad3d474f06e6c8fef30f5d4d11cb1a51e69ca0cc5d2df15a9deae1c3
local     ghost-blog_ghost
local     ghost-blog_mysql
local     my-postgres-db-db
local     mysql-volume

With our volume ready, we will deploy a MySQL container named devops_mysql and connect it to the volume with the command below:

docker run --name=devops_mysql -p3306:3306 -v mysql-volume:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=123abc@ -d mysql/mysql-server

The output terminal is as below:

vagrant@devopsroles:~$ docker run --name=devops_mysql -p3306:3306 -v mysql-volume:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=123abc@ -d mysql/mysql-server
Unable to find image 'mysql/mysql-server:latest' locally
latest: Pulling from mysql/mysql-server
221c7ea50c9e: Pull complete
d32a20f3a6af: Pull complete
28749a63c815: Pull complete
3cdab959ca41: Pull complete
30ceffa70af4: Pull complete
e4b028b699c1: Pull complete
3abed4e8adad: Pull complete
Digest: sha256:6fca505a0d41c7198b577628584e01d3841707c3292499baae87037f886c9fa2
Status: Downloaded newer image for mysql/mysql-server:latest
8b5a319d3cdaac3cd34046e6a32d8a6df25cbf17d28b3e83ac759389d2916e0c
vagrant@devopsroles:~$ docker ps
CONTAINER ID   IMAGE                COMMAND                  CREATED          STATUS                             PORTS                                                        NAMES
8b5a319d3cda   mysql/mysql-server   "/entrypoint.sh mysq…"   16 seconds ago   Up 14 seconds (health: starting)   0.0.0.0:3306->3306/tcp, :::3306->3306/tcp, 33060-33061/tcp   devops_mysql

Deploy the phpMyAdmin Container

I will now deploy the phpMyAdmin container. Create a volume for phpMyAdmin with the command below:

docker volume create phpmyadmin-volume

The output terminal is as below:

vagrant@devopsroles:~$ docker volume create phpmyadmin-volume
phpmyadmin-volume
vagrant@devopsroles:~$ docker volume ls | grep phpmyadmin-volume
local     phpmyadmin-volume

Deploy the phpMyAdmin container with the command:

docker run --name devops-phpmyadmin -v phpmyadmin-volume:/etc/phpmyadmin/config.usr.inc.php --link devops_mysql:db -p 82:80 -d phpmyadmin/phpmyadmin

The output terminal is as below:

vagrant@devopsroles:~$ docker run --name devops-phpmyadmin -v phpmyadmin-volume:/etc/phpmyadmin/config.usr.inc.php --link devops_mysql:db -p 82:80 -d phpmyadmin/phpmyadmin
b505829b235660a4a4881cbc6df777b8592a32b4bd014cc543e02238c3b1f409
vagrant@devopsroles:~$ docker ps | grep  devops-phpmyadmin
b505829b2356   phpmyadmin/phpmyadmin   "/docker-entrypoint.…"   12 seconds ago   Up 11 seconds            0.0.0.0:82->80/tcp, :::82->80/tcp                            devops-phpmyadmin
vagrant@devopsroles:~$

The explanation of the above command:

  • Deploys a container named devops-phpmyadmin
  • Links it to the devops_mysql database
  • External port is 82
  • Internal port is 80
  • Runs the container in daemon mode (the -d option)

How to Access phpMyAdmin

Open a web browser and point it to http://SERVER:82

Log in with the username root and the password you used when you deployed the MySQL container.

For example, the username is root and the Password is 123abc@

At this point you cannot log in, because the MySQL root user is only allowed to connect from localhost.

How to fix it:

docker exec -it devops_mysql mysql -u root -p
update mysql.user set host='%' where user='root' and host = 'localhost';
flush privileges;
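
You can confirm the change afterwards:

docker exec -it devops_mysql mysql -uroot -p -e "SELECT user, host FROM mysql.user WHERE user='root';"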

The result: you can now log in to phpMyAdmin.

To get a shell inside the MySQL and phpMyAdmin containers:

docker exec -it devops_mysql /bin/bash
docker exec -it devops-phpmyadmin /bin/bash

Conclusion

You have deployed MySQL and phpMyAdmin with Docker. I hope this is helpful to you. Thank you for reading the DevopsRoles page!

Install pyenv and manage multiple python versions

Introduction

In this post, you will learn how to install pyenv and manage multiple Python versions.

  • Pyenv is a fantastic tool for installing and managing multiple Python versions. It also offers the ability to quickly switch from one version of Python to another.
  • Pyenv integrates with the virtualenv plugin to support creating virtual environments; project libraries are installed in isolation in the virtual environment without affecting the system.

Install Dependencies

On Rocky Linux / CentOS

sudo yum install zlib-devel bzip2 bzip2-devel readline-devel sqlite sqlite-devel openssl-devel xz xz-devel libffi-devel

On Ubuntu

sudo apt-get install make build-essential libssl-dev zlib1g-dev libbz2-dev \
  libreadline-dev libsqlite3-dev wget curl libncurses5-dev libncursesw5-dev \
  xz-utils tk-dev libffi-dev liblzma-dev

Install pyenv

You can view the installation instructions on pyenv's GitHub homepage or use the command below:

$ curl -L https://github.com/pyenv/pyenv-installer/raw/master/bin/pyenv-installer | bash

The output terminal of the pyenv install is as below:

vagrant@devopsroles:~$ curl -L https://github.com/pyenv/pyenv-installer/raw/master/bin/pyenv-installer | bash
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   148  100   148    0     0    778      0 --:--:-- --:--:-- --:--:--   778
100  1687  100  1687    0     0   5477      0 --:--:-- --:--:-- --:--:-- 17214
Cloning into '/home/vagrant/.pyenv'...
remote: Enumerating objects: 882, done.
remote: Counting objects: 100% (882/882), done.
remote: Compressing objects: 100% (438/438), done.
remote: Total 882 (delta 493), reused 565 (delta 338), pack-reused 0
Receiving objects: 100% (882/882), 460.45 KiB | 1.92 MiB/s, done.
Resolving deltas: 100% (493/493), done.
Cloning into '/home/vagrant/.pyenv/plugins/pyenv-doctor'...
remote: Enumerating objects: 11, done.
remote: Counting objects: 100% (11/11), done.
remote: Compressing objects: 100% (9/9), done.
remote: Total 11 (delta 1), reused 3 (delta 0), pack-reused 0
Receiving objects: 100% (11/11), 38.71 KiB | 1.49 MiB/s, done.
Resolving deltas: 100% (1/1), done.
Cloning into '/home/vagrant/.pyenv/plugins/pyenv-installer'...
remote: Enumerating objects: 16, done.
remote: Counting objects: 100% (16/16), done.
remote: Compressing objects: 100% (13/13), done.
remote: Total 16 (delta 1), reused 10 (delta 0), pack-reused 0
Receiving objects: 100% (16/16), 5.78 KiB | 5.78 MiB/s, done.
Resolving deltas: 100% (1/1), done.
Cloning into '/home/vagrant/.pyenv/plugins/pyenv-update'...
remote: Enumerating objects: 10, done.
remote: Counting objects: 100% (10/10), done.
remote: Compressing objects: 100% (6/6), done.
remote: Total 10 (delta 1), reused 5 (delta 0), pack-reused 0
Receiving objects: 100% (10/10), done.
Resolving deltas: 100% (1/1), done.
Cloning into '/home/vagrant/.pyenv/plugins/pyenv-virtualenv'...
remote: Enumerating objects: 61, done.
remote: Counting objects: 100% (61/61), done.
remote: Compressing objects: 100% (55/55), done.
remote: Total 61 (delta 11), reused 22 (delta 0), pack-reused 0
Receiving objects: 100% (61/61), 37.94 KiB | 2.11 MiB/s, done.
Resolving deltas: 100% (11/11), done.
Cloning into '/home/vagrant/.pyenv/plugins/pyenv-which-ext'...
remote: Enumerating objects: 10, done.
remote: Counting objects: 100% (10/10), done.
remote: Compressing objects: 100% (6/6), done.
remote: Total 10 (delta 1), reused 6 (delta 0), pack-reused 0
Receiving objects: 100% (10/10), done.
Resolving deltas: 100% (1/1), done.

WARNING: seems you still have not added 'pyenv' to the load path.


# See the README for instructions on how to set up
# your shell environment for Pyenv.

# Load pyenv-virtualenv automatically by adding
# the following to ~/.bashrc:

eval "$(pyenv virtualenv-init -)"

Then insert the following 3 lines into the shell's config file, ~/.bashrc:

export PATH="/home/vagrant/.pyenv/bin:$PATH"
eval "$(pyenv init -)"
eval "$(pyenv virtualenv-init -)"

The resulting ~/.bashrc config file is as below:

[vagrant@devopsroles ~]$ cat .bashrc
# .bashrc

# Source global definitions
if [ -f /etc/bashrc ]; then
        . /etc/bashrc
fi

# User specific environment
if ! [[ "$PATH" =~ "$HOME/.local/bin:$HOME/bin:" ]]
then
    PATH="$HOME/.local/bin:$HOME/bin:$PATH"
fi
export PATH

# Uncomment the following line if you don't like systemctl's auto-paging feature:
# export SYSTEMD_PAGER=

# User specific aliases and functions
export PATH="/home/vagrant/.pyenv/bin:$PATH"
eval "$(pyenv init -)"
eval "$(pyenv virtualenv-init -)"
[vagrant@devopsroles ~]$ source .bashrc
[vagrant@devopsroles ~]$ pyenv --version
pyenv 2.2.4
[vagrant@devopsroles ~]$

Use pyenv

For example, I will show how to use pyenv to set up a virtual environment using Python 3.6.6 for a demo project at ~/projects/demo.

$ pyenv install 3.6.6

The output terminal as below

[vagrant@devopsroles ~]$ pyenv install 3.6.6

Installing Python-3.6.6...
Installed Python-3.6.6 to /home/vagrant/.pyenv/versions/3.6.6
[vagrant@devopsroles ~]$

If the installation fails, your system may lack the libraries needed for compiling Python; install the missing dependencies listed above.

Create a virtual environment with virtualenv that uses Python 3.6.6:

pyenv virtualenv 3.6.6 app

Activate the environment:

pyenv activate app

Deactivate the environment:

pyenv deactivate

Useful Commands
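
A few standard pyenv subcommands you will use often:

pyenv versions          # list installed Python versions
pyenv version           # show the currently active version
pyenv global 3.6.6      # set the default version for your user
pyenv local 3.6.6       # pin a version for the current directory
pyenv uninstall 3.6.6   # remove an installed version
pyenv update            # update pyenv and its plugins (pyenv-update plugin)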

Conclusion

You have installed pyenv and can now manage multiple Python versions. I hope this is helpful to you. Thank you for reading the DevopsRoles page!

Step-by-Step Guide: Deploy Ghost Blog with docker

Introduction

In this tutorial, you will learn how to deploy a Ghost blog with Docker. Ghost is a popular CMS and content creation platform written in JavaScript with Node.js. It's open-source software.

Docker has become a popular choice for deploying applications due to its ease of use, scalability, and portability. If you’re looking to start a blog using Ghost, a powerful and flexible open-source blogging platform, using Docker can simplify the deployment process.

We will use Docker to quickly deploy a Ghost blog. First, make sure Docker and Docker Compose are installed on your host.

Prerequisites

Before we get started, make sure you have Docker and Docker Compose installed on your server or local machine.

Deploy a Ghost Container

We will start a basic Ghost site with the Docker command below:

docker run -d -p 2368:2368 --name simple-ghost ghost:4

Ghost's default port is 2368. You can view the site via http://localhost:2368, or http://localhost:2368/ghost to access the Ghost admin panel.

Run through the first-time setup to finalize your Ghost installation and create an initial user account.

This approach is fine if you are just trying out Ghost. However, we have not set up persistent storage yet, so your data will be lost when the container stops.

We will use Docker Compose to set up Ghost with a Docker volume. Mount a volume to the /var/lib/ghost/content directory to store Ghost's data outside the container.

An example docker-compose.yml file:

version: "3"
services:
  ghost:
    image: ghost:4
    ports:
      - 8080:2368
    environment:
      url: https://ghost.devopsroles.com
    volumes:
      - ghost:/var/lib/ghost/content
    restart: unless-stopped
volumes:
  ghost:

Now use Compose to bring up your site:

docker-compose up -d
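
Check that the service came up, and watch the logs while Ghost boots (run in the directory containing docker-compose.yml):

docker-compose ps
docker-compose logs -f ghost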

External Database

Ghost defaults to an SQLite database stored as a file in your site's content directory. To use MySQL instead, extend the Compose definition with database settings and a MySQL service:

services:
  ghost:
    # ...
    environment:
      database__client: mysql
      database__connection__host: ghost_mysql
      database__connection__user: root
      database__connection__password: databasePw
      database__connection__database: ghost

  ghost_mysql:
    image: mysql:5.7
    expose:
      - 3306
    environment:
      MYSQL_DATABASE: ghost
      MYSQL_ROOT_PASSWORD: databasePw
    volumes:
      - mysql:/var/lib/mysql
    restart: unless-stopped

volumes:
  mysql:

Summary: the full Ghost blog docker-compose.yml

version: "3"
services:
  ghost:
    image: ghost:4
    ports:
      - 8080:2368
    environment:
      url: https://ghost.devopsroles.com
    volumes:
      - ghost:/var/lib/ghost/content
    restart: unless-stopped
    environment:
      database__client: mysql
      database__connection__host: ghost_mysql
      database__connection__user: root
      database__connection__password: databasePw
      database__connection__database: ghost

  ghost_mysql:
    image: mysql:5.7
    expose:
      - 3306
    environment:
      MYSQL_DATABASE: ghost
      MYSQL_ROOT_PASSWORD: databasePw
    volumes:
      - mysql:/var/lib/mysql
    restart: unless-stopped
volumes:
  ghost:
  mysql:

Proxying Traffic to Your Container

Add NGINX to your host:

sudo apt update
sudo apt install nginx

# Allow HTTP/HTTPS traffic through the firewall
sudo ufw allow 80
sudo ufw allow 443

Define an NGINX virtual host for your site in /etc/nginx/sites-available/ghost.devopsroles.com:

server {
    
    server_name ghost.devopsroles.com;
    index index.html;

    access_log /var/log/nginx/ghost_access.log;
    error_log /var/log/nginx/ghost_error.log error;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_redirect off;
        proxy_set_header Host $http_host;
        proxy_set_header X-Original-IP $remote_addr;
    }

}

Create a symbolic link to enable the site:

sudo ln -s /etc/nginx/sites-available/ghost.devopsroles.com /etc/nginx/sites-enabled/ghost.devopsroles.com
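
Before restarting, it is worth validating the NGINX configuration:

sudo nginx -t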

Restart the NGINX service:

sudo systemctl restart nginx

The result is the Ghost CMS blog deployed with Docker and NGINX.

Conclusion

You’ve successfully deployed a Ghost blog using Docker, taking advantage of the benefits of containerization for managing your blog.

With your cms Ghost blog now live, you can explore additional Docker features to enhance your blog’s capabilities, such as using custom themes, adding plugins, and scaling your blog to handle increasing traffic.

I hope this is helpful to you. Thank you for reading the DevopsRoles page!

How to install Docker compose on Ubuntu

Introduction

In this tutorial, you will learn how to install Docker Compose on Ubuntu 21.04. Docker is an open platform that allows you to build, test, and deploy applications quickly.

Docker Compose is used for defining and running multi-container Docker applications. It allows users to launch, run, and coordinate multiple containers with a single command. Docker Compose is yet another useful Docker tool.

Prerequisites

  • A system running Ubuntu 21.04
  • A user account with sudo privileges
  • Docker installed on Ubuntu 21.04

Step 1: Update your system

Update your existing packages:

sudo apt update

Step 2: Install the curl package

sudo apt install curl -y

Step 3: Download the Latest Docker Compose Version

For new stable Docker Compose versions, refer to the official list of releases on GitHub.

sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

Step 4: Change File Permission

sudo chmod +x /usr/local/bin/docker-compose

Step 5: Check the Docker Compose Version

To verify the installation, check the Docker Compose version with the command below:

docker-compose --version
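
As a quick smoke test, you can bring up a throwaway service. This is a minimal sketch using the public nginx:alpine image:

cat > docker-compose.yml <<'EOF'
version: "3"
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
EOF
docker-compose up -d
curl http://localhost:8080/
docker-compose down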

Uninstall Docker compose on Ubuntu

Delete the Binary

sudo rm /usr/local/bin/docker-compose

Uninstall the Package

sudo apt remove docker-compose

Remove Software Dependencies

sudo apt autoremove

Conclusion

You have installed Docker Compose on Ubuntu 21.04. I hope this is helpful to you. Thank you for reading the DevopsRoles page!
