Minikube: Build a Local Kubernetes Environment

Introduction

In today’s DevOps-driven world, Kubernetes has become an essential tool for managing containerized applications at scale. However, setting up a full Kubernetes cluster can be complex and resource-intensive.

Minikube is a lightweight Kubernetes implementation that creates a local, single-node Kubernetes cluster for development and testing. In this guide, we’ll walk you through everything you need to know to build a local Kubernetes environment using Minikube, from basic setup to advanced configurations.

Why Use Minikube?

  • Ease of Use: Minikube simplifies the process of setting up a Kubernetes cluster.
  • Local Development: Ideal for local development and testing.
  • Resource Efficient: Requires fewer resources compared to a full-scale Kubernetes cluster.
  • Feature-Rich: Supports most Kubernetes features and add-ons.

Prerequisites

Before you start, ensure you have the following:

  • A computer with at least 2GB of RAM and 20GB of free disk space.
  • A hypervisor like VirtualBox, VMware, Hyper-V, or KVM.
  • kubectl, the Kubernetes command-line tool, installed.
  • Minikube installed.

My virtual machine already has Docker installed.

Build a local Kubernetes environment with Minikube

Install Minikube and kubectl

Minikube

$ curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/

kubectl

$ curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/

Set environment variables.

$ sudo vi /etc/profile

# Add the following lines at the end of the file
export MINIKUBE_WANTUPDATENOTIFICATION=false
export MINIKUBE_WANTREPORTERRORPROMPT=false
export MINIKUBE_HOME=/root
export CHANGE_MINIKUBE_NONE_USER=true
export KUBECONFIG=/root/.kube/config

$ sudo mkdir -p /root/.kube || true
$ sudo touch /root/.kube/config
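A quick way to confirm the variables took effect in a new shell is to check them directly. This sketch mirrors the exports from the profile above:

```shell
# Sketch: mirror the exports from /etc/profile and confirm they are set
export MINIKUBE_WANTUPDATENOTIFICATION=false
export MINIKUBE_HOME=/root
export KUBECONFIG=/root/.kube/config
[ "$MINIKUBE_HOME" = "/root" ] && echo "MINIKUBE_HOME set"
[ -n "$KUBECONFIG" ] && echo "KUBECONFIG set"
```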

Launch Minikube

Use the "--vm-driver=none" option to run Kubernetes directly on the host running Minikube:

$ sudo /usr/local/bin/minikube start --vm-driver=none

The file /root/.kube/config has been created. Verify its contents:

$ sudo kubectl config view

Check the Minikube status

$ sudo minikube status

Allow port 8443 on the firewall

$ sudo firewall-cmd --add-port=8443/tcp --zone=public --permanent
$ sudo firewall-cmd --reload
$ sudo firewall-cmd --list-all

Create and start a test container

We use the image “k8s.gcr.io/echoserver:1.4” to create a test deployment. Check that the image is available:

$ sudo docker images | grep 'k8s.gcr.io/echoserver'

Create a pod:

$ sudo kubectl run hello-minikube --image=k8s.gcr.io/echoserver:1.4 --port=8080

Verify the node and pods:

$ sudo kubectl get nodes
$ sudo kubectl get pods

Pod instances run on the node as Docker containers. Display the list of deployments:

$ sudo kubectl get deployments

Creating service

Use the "--type=NodePort" option. This exposes the service on each node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is created automatically. You can access the NodePort Service from outside the cluster by requesting <NodeIP>:<NodePort>:

$ sudo kubectl expose deployment hello-minikube --type=NodePort
$ sudo kubectl get services
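Once the service exists, the assigned NodePort can be read from the PORT(S) column. A minimal parsing sketch, using a hypothetical sample line (on a real cluster, pipe the `kubectl get services` output instead):

```shell
# Sketch: extract the NodePort from a sample `kubectl get services` line.
# The line below is made up; the NodePort is the value after the colon.
line="hello-minikube   NodePort   10.102.4.77   <none>   8080:30377/TCP   1m"
nodeport=$(echo "$line" | awk '{print $5}' | cut -d: -f2 | cut -d/ -f1)
echo "$nodeport"
```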

Access dashboard

Get the dashboard URL. Note that the port is assigned automatically and changes each time the dashboard is started.

$ sudo minikube dashboard --url
$ curl http://127.0.0.1:36439/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy/

When accessing via Kubernetes proxy

kubectl acts as a reverse proxy to the API endpoint.

$ sudo minikube dashboard --url
$ sudo kubectl proxy
$ curl http://localhost:8001

Accessing the dashboard from outside

Because Minikube was started with the "--vm-driver=none" option, the dashboard can only be accessed from the host OS through the proxy. To access it from outside the host OS (for example, from a browser on another machine), change the proxy settings:

$ sudo minikube dashboard --url
$ sudo kubectl proxy --address=0.0.0.0 --accept-hosts='.*'

Link URL: http://{YOUR_HOST_NAME}:8001/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy/#!/overview?namespace=default

Verify the logs

$ sudo kubectl logs hello-minikube-78c9fc5f89-9whkn
$ minikube logs -f 

Use kubectl to delete the service and deployment

$ sudo kubectl delete services hello-minikube
$ sudo kubectl delete deployment hello-minikube

Verify that the service and deployment have been deleted:

$ sudo kubectl get nodes
$ sudo kubectl get pods
$ sudo kubectl get services
$ sudo kubectl get deployments

Stop minikube and delete the cluster

$ sudo minikube stop
$ minikube delete

Conclusion

Minikube is an excellent tool for developers who want to learn and experiment with Kubernetes without the complexity of setting up a full-scale cluster. By following this guide, you can easily set up a local Kubernetes environment using Minikube, deploy applications, and explore advanced features. Whether you’re a beginner or an experienced developer, Minikube provides a convenient and efficient way to work with Kubernetes on your local machine.

How to set $PATH in Linux

In this tutorial, you will learn how to set $PATH in Linux. You may set $PATH permanently in two ways:

  • Set PATH for Particular user.
  • or set a common path for ALL system users.

To set PATH for a particular user, add the export to the “.bash_profile” file in that user’s home directory, as in the commands below:

[huupv@DevopsRoles vagrant]$ echo "export PATH=$PATH:/path/to/dir" >> /home/huupv/.bash_profile
[huupv@DevopsRoles vagrant]$ source /home/huupv/.bash_profile

Or, to set a common path for all system users, set the path as below:

[root@DevopsRoles vagrant]# echo "export PATH=$PATH:/path/to/dir" >> /etc/profile
[root@DevopsRoles vagrant]# source /etc/profile
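Appending to $PATH on every source can leave duplicate entries. A defensive sketch that adds a directory only if it is missing (/opt/mytools/bin is a hypothetical example directory):

```shell
# Sketch: append a directory to PATH only if it is not already present
dir="/opt/mytools/bin"
case ":$PATH:" in
  *":$dir:"*) : ;;              # already in PATH, do nothing
  *) PATH="$PATH:$dir" ;;
esac
# Count how many times the directory now appears in PATH (should be exactly 1)
echo "$PATH" | tr ':' '\n' | grep -c "^$dir$"
```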

Conclusion

Through this article, you have learned two ways to set $PATH in Linux. I hope you find it helpful. Thank you for reading the DevopsRoles page!

AWS Certified Solutions Architect Exercises - Part 2: Amazon EC2 and Amazon EBS

In this article, we’ll learn about Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Elastic Block Store (Amazon EBS). Now, let’s go to AWS Certified Solutions Architect Exercises.

AWS Certified Solutions Architect Exercises

1. Today's tasks

  • Launch and Connect to a Linux Instance
  • Launch a Windows Instance with Bootstrapping
  • Launch a Spot Instance
  • Access Metadata
  • Create an Amazon EBS Volume and Show That It Remains After the Instance Is Terminated
  • Take a Snapshot and Restore

2. Before you begin

  • Puttygen.exe: a tool to create a .ppk file from a .pem file.
  • A command-line tool to SSH into the Linux instance.

3. Let's do it

EXERCISE 1: Launch and Connect to a Linux Instance

In this exercise, you will launch a new Linux instance, log in with SSH, and install any security updates.

1. Launch an instance in the Amazon EC2 console.

2. Choose the Amazon Linux 2 AMI (HVM), SSD Volume Type – ami-0c3fd0f5d33134a76.

3. Choose the t2.micro instance type.

4. Configure Instance Details as below

  • Network: Launch the instance in the default VPC.
  • Subnet: Select a default subnet
  • Auto-assign Public IP: Enable.

5. Keep the default settings in Add Storage and click the 「Next: Add tags」 button; on the next screen, click the 「Add tag」 button to add a tag to the instance.

Example: add a tag with Key: Name and Value: ec2-demo.

6. Create a new security group called demo-sg.

7. Add a rule to demo-sg:

7.1. Allow SSH access from the IP address of your computer by setting Source to My IP (this is the recommended, more secure option).

7.2. Or allow access from all IP addresses by setting Source to Custom with CIDR 0.0.0.0/0.

8. Review and Launch the instance.

9. When prompted for a key pair, choose a key pair you already have or create a new one and download the private portion. Amazon generates a keyname.pem file, and you will need a keyname.ppk file to connect to the instance via SSH. Puttygen.exe is one utility that will create a .ppk file from a .pem file.

10. SSH into the instance using the IPv4 public IP address, the user name ec2-user, and the keyname.ppk file created in step 9.

To SSH into the newly created EC2 instance, you can use tools such as Tera Term.

11. From the command-line prompt, run

sudo yum update

12. Close the SSH window and terminate the instance.

EXERCISE 2: Launch a Windows Instance with Bootstrapping

In this exercise, you will launch a Windows instance and specify a very simple bootstrap script. You will then confirm that the bootstrap script was executed on the instance.

1. Launch an instance in the Amazon EC2 console.

2. Choose the Microsoft Windows Server 2019 Base AMI.

3. Choose the t2.micro instance type.

4. Launch the instance in the default VPC and a default subnet, with Auto-assign Public IP set to Enable.

5. In the Advanced Details section, enter the following text as UserData:

<script>
md c:\temp
</script>

6. Add a tag to the instance of Key: Name, Value: EC2-Demo2

7. Use the demo-sg security group from Exercise 1.

8. Launch the instance.

9. Use the key pair from Exercise 1.

10. On the Connect to Your Instance screen, decrypt the administrator password and then download the RDP file to attempt to connect to the instance. Your attempt should fail because the demo-sg security group does not allow RDP access.

11. Open the demo-sg security group and add a rule that allows RDP access from your IP address.

12. Attempt to access the instance via RDP again.

13. Once the RDP session is connected, open Windows Explorer and confirm that the c:\temp folder has been created.

14. End the RDP session and terminate the instance.

EXERCISE 3: Launch a Spot Instance

In this exercise, you will create a Spot Instance.

1. Launch an instance in the Amazon EC2 console.

2. Choose the Amazon Linux AMI.

3. Choose the instance type.

4. On the Configure Instance page, request a Spot Instance.

5. Launch the instance in the default VPC and a default subnet, with Auto-assign Public IP set to Enable.

6. Request a Spot Instance and enter a bid a few cents above the current Spot price.

7. Finish launching the instance.

8. Go to the Spot Request page. Watch your request.

9. Find the instance on the Instances page of the Amazon EC2 console.

10. Once the instance is running, terminate it.

EXERCISE 4: Access Metadata

In this exercise, you will access the instance metadata from the OS.

1. Execute steps as in EXERCISE 1.

2. At the Linux command prompt, retrieve a list of the available metadata by typing:
curl http://169.254.169.254/latest/meta-data/

3. To see a value, add the name to the end of the URL. For example, to see the security groups, type:
curl http://169.254.169.254/latest/meta-data/security-groups
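The metadata service is a simple URL tree, so several keys can be fetched in a loop. The sketch below only builds the URLs, because the 169.254.169.254 endpoint answers only from inside an EC2 instance:

```shell
# Sketch: build metadata URLs for several keys (run the curl calls on the instance itself)
MD_URL="http://169.254.169.254/latest/meta-data"
for key in instance-id public-ipv4 security-groups; do
  echo "$MD_URL/$key"          # on the instance: curl "$MD_URL/$key"
done
```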

4. Close the SSH window and terminate the instance.

EXERCISE 5: Create an Amazon EBS Volume and Show That It Remains After the Instance Is Terminated

In this exercise, you will see how an Amazon EBS volume persists beyond the life of an instance.

1. Execute steps as in EXERCISE 1.

2. However, in step 5 「Add Storage」, add a second Amazon EBS volume of size 50 GB. Note that the Root Volume is set to Delete on Termination.

3. Launch the instance. It now has two Amazon EBS volumes on the Amazon EBS console; name them both 「EC2-Demo5」.

4. Terminate the instance.

5. Check that the boot drive is destroyed, but the additional Amazon EBS volume remains and now says Available. Do not delete the Available volume.

EXERCISE 6: Take a Snapshot and Restore

This exercise guides you through taking a snapshot and restoring it in three different ways.

1. Find the volume you created in Exercise 5 in the Amazon EBS Menu.

2. Take a snapshot of that volume. Tag the snapshot with Name: Exercise-6 and wait for the snapshot to complete.

3. Method 1 to restore an EBS volume: On the Snapshot console, choose the new snapshot created in step 2, select Create Volume, and create the volume with all the defaults. Tag it with Name: 「Exercise-6-volume-restored」.

4. Method 2 to restore an EBS volume: Locate the snapshot again and choose Create Volume, setting the size of the new volume to 100 GB (restoring the snapshot to a new, larger volume is how you address the problem of increasing the size of an existing volume). Tag it with Name: 「Exercise-6-volume-restored-100GB」.

5. Method 3 to restore EBS volume: Locate the snapshot again and choose Copy. Copy the snapshot to another region.

Go to the other region and wait for the snapshot to become available. Create a volume from the snapshot in the new region.

This is how you share an Amazon EBS volume between regions; that is, by taking a snapshot and copying the snapshot.

6. Delete all four volumes.

That wraps up AWS Certified Solutions Architect Exercises - Part 2: Amazon EC2 and Amazon EBS. I hope you find it helpful. Thank you for reading the DevopsRoles page!

How to Deploy Pods on Kubernetes Minikube

Introduction

In the rapidly evolving world of cloud-native applications, Kubernetes has emerged as the go-to platform for automating deployment, scaling, and managing containerized applications. For those who are new to Kubernetes or looking to experiment with it in a local environment, Minikube is the ideal tool. Minikube allows you to run a single-node Kubernetes cluster on your local machine, making it easier to learn and test.

This guide will walk you through the process of deploying and managing Pods on Kubernetes Minikube. We will cover everything from basic concepts to advanced operations like scaling and exposing your services. Whether you are a beginner or an experienced developer, this guide will provide you with valuable insights and practical steps to effectively manage your Kubernetes environment.

What is Kubernetes Minikube?

Kubernetes is an open-source platform that automates the deployment, scaling, and operation of application containers across clusters of hosts. Minikube is a tool that enables you to run a single-node Kubernetes cluster on your local machine. It’s an excellent way to start learning Kubernetes without needing access to a full-fledged Kubernetes cluster.

Key Components of Kubernetes Minikube

Before diving into the hands-on steps, let’s understand some key components you’ll interact with:

  • Service: An abstract way to expose an application running on a set of Pods as a network service.
  • Pod: The smallest and simplest Kubernetes object. A Pod represents a running process on your cluster and contains one or more containers.
  • kubectl: The command-line interface (CLI) tool used to interact with Kubernetes clusters.

Kubernetes Minikube Deploy Pods

Create Pods

[root@DevopsRoles ~]# kubectl run test-nginx --image=nginx --replicas=2 --port=80 
[root@DevopsRoles ~]# kubectl get pods 

Display the environment variables of the test-nginx pod

[root@DevopsRoles ~]# kubectl exec test-nginx-c8b797d7d-mzf91 env

Access the test-nginx pod

[root@DevopsRoles ~]# kubectl exec -it test-nginx-c8b797d7d-mzf91 bash
root@test-nginx-c8b797d7d-mzf91:/# curl localhost 

Show the logs of the test-nginx pod

[root@DevopsRoles ~]# kubectl logs test-nginx-c8b797d7d-mzf91

Scale out the pods

[root@DevopsRoles ~]# kubectl scale deployment test-nginx --replicas=3 
[root@DevopsRoles ~]# kubectl get pods 
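To confirm the scale-out, you can count the Running pods. A sketch against sample `kubectl get pods` output (the pod names are made up; on a real cluster, use `kubectl get pods | grep -c Running`):

```shell
# Sketch: count Running replicas from sample `kubectl get pods` output
pods="test-nginx-c8b797d7d-mzf91   1/1   Running   0   5m
test-nginx-c8b797d7d-ab123   1/1   Running   0   1m
test-nginx-c8b797d7d-cd456   1/1   Running   0   1m"
echo "$pods" | grep -c Running
```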

Create a service

[root@DevopsRoles ~]# kubectl expose deployment test-nginx --type="NodePort" --port 80 
[root@DevopsRoles ~]# kubectl get services test-nginx
[root@DevopsRoles ~]# minikube service test-nginx --url
[root@DevopsRoles ~]# curl http://10.0.2.10:31495 

Delete service and pods

[root@DevopsRoles ~]# kubectl delete services test-nginx
[root@DevopsRoles ~]# kubectl delete deployment test-nginx 

Frequently Asked Questions

What is Minikube in Kubernetes?

Minikube is a tool that allows you to run a Kubernetes cluster locally on your machine. It’s particularly useful for learning and testing Kubernetes without the need for a full-blown cluster.

How do I create a Pod in Kubernetes Minikube?

You can create a Pod in Kubernetes Minikube using the kubectl run command. For example: kubectl run test-nginx --image=nginx --replicas=2 --port=80.

How can I scale a Pod in Kubernetes?

To scale a Pod in Kubernetes, you can use the kubectl scale command. For instance, kubectl scale deployment test-nginx --replicas=3 will scale the deployment to three replicas.

What is the purpose of a Service in Kubernetes?

A Service in Kubernetes is used to expose an application running on a set of Pods as a network service. It allows external traffic to access the Pods.

How do I delete a Service in Kubernetes?

You can delete a Service in Kubernetes using the kubectl delete services <service-name> command. For example: kubectl delete services test-nginx.

Conclusion

Deploying and managing Pods on Kubernetes Minikube is a foundational skill for anyone working in cloud-native environments. This guide has provided you with the essential steps to create, scale, expose, and delete Pods and Services using Minikube.

By mastering these operations, you’ll be well-equipped to manage more complex Kubernetes deployments in production environments. Whether you’re scaling applications, troubleshooting issues, or exposing services, the knowledge gained from this guide will be invaluable. Thank you for reading the DevopsRoles page!

Install Minikube Kubernetes on CentOS

In this tutorial, you will learn how to install Minikube Kubernetes to configure a single-node cluster within a VM, and how to configure Kubernetes as a Docker container system.

Minikube requires a supported hypervisor. In this example, we install the KVM hypervisor; you can use other hypervisors such as VirtualBox, VMware Fusion, etc.

Install Minikube kubernetes

KVM Hypervisor installing

[root@DevopsRoles ~]# yum -y install qemu-kvm libvirt libvirt-python libguestfs-tools virt-install libvirt-daemon-kvm
[root@DevopsRoles ~]# systemctl start libvirtd
[root@DevopsRoles ~]# systemctl enable libvirtd

Add a repository to configure Kubernetes and install Minikube

Configure Kubernetes repository

[root@DevopsRoles ~]# cat <<'EOF' > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

Install Minikube

[root@DevopsRoles ~]# yum -y install kubectl
[root@DevopsRoles ~]# wget https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 -O minikube
[root@DevopsRoles ~]# wget https://storage.googleapis.com/minikube/releases/latest/docker-machine-driver-kvm2
[root@DevopsRoles ~]# chmod 755 minikube docker-machine-driver-kvm2
[root@DevopsRoles ~]# mv minikube docker-machine-driver-kvm2 /usr/local/bin/

Check the Minikube version

[root@DevopsRoles ~]# /usr/local/bin/minikube version
[root@DevopsRoles ~]# kubectl version -o json 
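If you only need the version string, it can be extracted from the JSON output. A sketch using a trimmed, hypothetical sample of `kubectl version -o json` output:

```shell
# Sketch: pull gitVersion out of sample `kubectl version -o json` output
# (on a real host: kubectl version -o json | grep -o '"gitVersion":"[^"]*"' ...)
json='{"clientVersion":{"major":"1","minor":"15","gitVersion":"v1.15.0"}}'
echo "$json" | grep -o '"gitVersion":"[^"]*"' | cut -d'"' -f4
```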

Start Minikube

[root@DevopsRoles ~]# minikube start --vm-driver kvm2 

Minikube command-line basics

#show status
[root@DevopsRoles ~]# minikube status
[root@DevopsRoles ~]# minikube service list 
[root@DevopsRoles ~]# minikube docker-env 
[root@DevopsRoles ~]# kubectl cluster-info 
[root@DevopsRoles ~]# kubectl get nodes 
[root@DevopsRoles ~]# virsh list
[root@DevopsRoles ~]# minikube ssh # possible to access with SSH to the VM
[root@DevopsRoles ~]# minikube stop # to stop minikube, do like follows
[root@DevopsRoles ~]# minikube delete  # to remove minikube, do like follows
[root@DevopsRoles ~]# virsh list --all 

Conclusion

Through this article, you have learned how to install Minikube Kubernetes on CentOS. I hope you find it helpful. Thank you for reading the DevopsRoles page!

How to create and Run Instances on OpenStack

In this tutorial, you will learn how to create and run instances on OpenStack. In a previous post, I covered how to install OpenStack all-in-one on CentOS 7.

Create and Run Instances on OpenStack

First, add users in Keystone who can use the OpenStack system.

# add project
[root@DevopsRoles ~(keystone)]# openstack project create --domain default --description "Huupv Project" huupv 

# add user
[root@DevopsRoles ~(keystone)]# openstack user create --domain default --project huupv --password userpassword devopsroles 

# add role
[root@DevopsRoles ~(keystone)]# openstack role create CloudUser 

# add user to the role
[root@DevopsRoles ~(keystone)]# openstack role add --project huupv --user devopsroles CloudUser

# add flavor
[root@DevopsRoles ~(keystone)]# openstack flavor create --id 0 --vcpus 1 --ram 2048 --disk 10 m1.small 

Create and Start Virtual Machine Instance.

Use the username and password configured above. Then create and run an instance.

[cent@DevopsRoles ~]$ vi ~/keystonerc

#The content as below
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=huupv
export OS_USERNAME=devopsroles
export OS_PASSWORD=userpassword
export OS_AUTH_URL=http://10.0.2.15:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
export PS1='[\u@\h \W(keystone)]\$ '

[cent@DevopsRoles ~]$ chmod 600 ~/keystonerc 
[cent@DevopsRoles ~]$ source ~/keystonerc 
[cent@DevopsRoles ~(keystone)]$ echo "source ~/keystonerc " >> ~/.bash_profile
# show flavor list
[cent@DevopsRoles ~(keystone)]$ openstack flavor list 

# show image list
[cent@DevopsRoles ~(keystone)]$ openstack image list 


# show network list
[cent@DevopsRoles ~(keystone)]$ openstack network list 

# create a security group for instances
[cent@DevopsRoles ~(keystone)]$ openstack security group create secgroup01 

[cent@DevopsRoles ~(keystone)]$ openstack security group list 

# create a SSH keypair for connecting to instances
[cent@DevopsRoles ~(keystone)]$ ssh-keygen -q -N "" 

# add public-key
[cent@DevopsRoles ~(keystone)]$ openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey 

[cent@DevopsRoles ~(keystone)]$ openstack keypair list 

[cent@DevopsRoles ~(keystone)]$ netID=$(openstack network list | grep sharednet1 | awk '{ print $2 }') 
# create and boot an instance
[cent@DevopsRoles ~(keystone)]$ openstack server create --flavor m1.small --image CentOS7 --security-group secgroup01 --nic net-id=$netID --key-name mykey CentOS_7
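The netID pipeline above picks the ID column out of the table that `openstack network list` prints. A sketch of how it works, run against a made-up sample table (the UUIDs are hypothetical):

```shell
# Sketch: extract the network ID from a sample `openstack network list` table.
# awk sees the leading '|' as field 1, so the ID is field 2.
list='| f898a94e-7774-4ab9-a8e4-e3faf5b75a3f | sharednet1 | 0f5c7f0f |
| a1b2c3d4-5e6f-7081-92a3-b4c5d6e7f809 | private    | 1a2b3c4d |'
netID=$(echo "$list" | grep sharednet1 | awk '{print $2}')
echo "$netID"
```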

# show status ([BUILD] status is shown when building instance)
[cent@DevopsRoles ~(keystone)]$ openstack server list 

# when starting normally, the status turns to [ACTIVE]
[cent@DevopsRoles ~(keystone)]$ openstack server list 

You can access the instance console via 10.0.2.15:6080. Thank you for reading the DevopsRoles page!

How to configure OpenStack Networking

In this tutorial, you will learn how to configure OpenStack networking. In a previous post, I covered how to install OpenStack all-in-one on CentOS 7. This example configures the FLAT type of provider networking. First, you have to configure the basic settings of the OpenStack Neutron services.

For example, configure the FLAT type on a node with two network interfaces:

  • eth0: 10.0.2.15
  • eth1: UP with no IP

Configure OpenStack Networking

Configure Neutron services

# add bridge
[root@DevopsRoles ~(keystone)]# ovs-vsctl add-br br-eth1 

# add eth1 to the port of the bridge above
[root@DevopsRoles ~(keystone)]# ovs-vsctl add-port br-eth1 eth1 

[root@DevopsRoles ~(keystone)]# vi /etc/neutron/plugins/ml2/ml2_conf.ini

# The content as below
# line 181: add
[ml2_type_flat]
flat_networks = physnet1

[root@DevopsRoles ~(keystone)]# vi /etc/neutron/plugins/ml2/openvswitch_agent.ini
# The content as below
# line 194: add
[ovs]
bridge_mappings = physnet1:br-eth1

[root@DevopsRoles ~(keystone)]# systemctl restart neutron-openvswitch-agent

Creating a Virtual network.

[root@DevopsRoles ~(keystone)]# projectID=$(openstack project list | grep service | awk '{print $2}')
# create network named [sharednet1]
[root@DevopsRoles ~(keystone)]# openstack network create --project $projectID \
--share --provider-network-type flat --provider-physical-network physnet1 sharednet1 

# create subnet [10.0.2.0/24] in [sharednet1]
[root@DevopsRoles ~(keystone)]# openstack subnet create subnet1 --network sharednet1 \
--project $projectID --subnet-range 10.0.2.0/24 \
--allocation-pool start=10.0.2.200,end=10.0.2.254 \
--gateway 10.0.2.1 --dns-nameserver 10.0.2.10 
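As a quick check on the allocation pool above (10.0.2.200 to 10.0.2.254), shell arithmetic gives the number of instance addresses it provides:

```shell
# Sketch: size of the allocation pool defined in the subnet above
start=200; end=254
echo $(( end - start + 1 ))
```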

# confirm settings
[root@DevopsRoles ~(keystone)]# openstack network list 

[root@DevopsRoles ~(keystone)]# openstack subnet list 

You have configured OpenStack Networking. Thank you for reading the DevopsRoles page!

How to Install and Configure OpenStack Neutron

In this tutorial, you will learn how to install and configure OpenStack Neutron. This example chooses the ML2 plugin. In a previous post, I covered how to install OpenStack all-in-one on CentOS 7. If you have not yet installed OpenStack Neutron, follow the steps below.

Step 1: Create a User and Database for OpenStack Neutron

First, set up a database in MariaDB for Neutron:

[root@DevopsRoles ~]# mysql -u root -p
MariaDB [(none)]> CREATE DATABASE neutron_ml2;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron_ml2.* TO 'neutron'@'localhost' IDENTIFIED BY 'password';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron_ml2.* TO 'neutron'@'%' IDENTIFIED BY 'password';
MariaDB [(none)]> FLUSH PRIVILEGES;
MariaDB [(none)]> EXIT;

Step 2: Add User and Service for Neutron in Keystone

Create a user, assign a role, and configure endpoints for Neutron in Keystone:

# Create the Neutron user
[root@DevopsRoles ~]# openstack user create --domain default --project service --password servicepassword neutron

# Assign the admin role to Neutron
[root@DevopsRoles ~]# openstack role add --project service --user neutron admin

# Register the Neutron service
[root@DevopsRoles ~]# openstack service create --name neutron --description "OpenStack Networking service" network

# Define the Keystone host
[root@DevopsRoles ~]# export controller=10.0.2.15

# Create Neutron endpoints
[root@DevopsRoles ~]# openstack endpoint create --region RegionOne network public http://$controller:9696
[root@DevopsRoles ~]# openstack endpoint create --region RegionOne network internal http://$controller:9696
[root@DevopsRoles ~]# openstack endpoint create --region RegionOne network admin http://$controller:9696
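The three endpoint commands follow the same pattern, so they can be generated in a loop. This sketch only prints the commands, since executing them needs a live Keystone:

```shell
# Sketch: generate the public/internal/admin endpoint commands in one loop
controller=10.0.2.15
for iface in public internal admin; do
  echo "openstack endpoint create --region RegionOne network $iface http://$controller:9696"
done
```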

Step 3: Install Neutron Services

Install the necessary Neutron components from the Stein repository:

[root@DevopsRoles ~]# yum --enablerepo=centos-openstack-stein,epel -y install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch

Step 4: Configure Neutron

Edit the Neutron configuration file to set up database and authentication details:

[root@DevopsRoles ~]# mv /etc/neutron/neutron.conf /etc/neutron/neutron.conf_BK
[root@DevopsRoles ~]# vi /etc/neutron/neutron.conf

Add the following:

[DEFAULT]
core_plugin = ml2
service_plugins = router
auth_strategy = keystone
state_path = /var/lib/neutron
dhcp_agent_notification = True
allow_overlapping_ips = True
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
transport_url = rabbit://openstack:password@10.0.2.15

[keystone_authtoken]
www_authenticate_uri = http://10.0.2.15:5000
auth_url = http://10.0.2.15:5000
memcached_servers = 10.0.2.15:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = servicepassword

[database]
connection = mysql+pymysql://neutron:password@10.0.2.15/neutron_ml2

[nova]
auth_url = http://10.0.2.15:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = servicepassword

[oslo_concurrency]
lock_path = $state_path/tmp

Apply permissions:

[root@DevopsRoles ~]# chmod 640 /etc/neutron/neutron.conf
[root@DevopsRoles ~]# chgrp neutron /etc/neutron/neutron.conf

Step 5: Configure Neutron Agents

Update the following files:

L3 Agent Configuration:

[DEFAULT]
interface_driver = openvswitch

DHCP Agent Configuration:

[DEFAULT]
interface_driver = openvswitch
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true

Metadata Agent Configuration:

[DEFAULT]
nova_metadata_host = 10.0.2.15
metadata_proxy_shared_secret = metadata_secret
memcache_servers = 10.0.2.15:11211

ML2 Plugin Configuration:

[ml2]
type_drivers = flat,vlan,gre,vxlan
tenant_network_types =
mechanism_drivers = openvswitch
extension_drivers = port_security

Step 6: Configure Open vSwitch

Start and configure Open vSwitch:

[root@DevopsRoles ~]# systemctl start openvswitch
[root@DevopsRoles ~]# systemctl enable openvswitch
[root@DevopsRoles ~]# ovs-vsctl add-br br-int
[root@DevopsRoles ~]# ovs-vsctl add-br br-ex
[root@DevopsRoles ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

Step 7: Populate the Neutron Database

Run the database migration command:

[root@DevopsRoles ~]# su -s /bin/bash neutron -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini upgrade head"

Step 8: Start Neutron Services

Start all required Neutron services:

[root@DevopsRoles ~]# systemctl start neutron-server neutron-dhcp-agent neutron-l3-agent neutron-metadata-agent neutron-openvswitch-agent
[root@DevopsRoles ~]# systemctl enable neutron-server neutron-dhcp-agent neutron-l3-agent neutron-metadata-agent neutron-openvswitch-agent

Step 9: Verify Installation

Check the status of the Neutron agents:

[root@DevopsRoles ~]# openstack network agent list

Conclusion

You have successfully installed and configured OpenStack Neutron. This setup enables robust networking capabilities, allowing your OpenStack environment to support complex networking scenarios.

AWS Certified Solutions Architect Exercises - Part 1: Amazon S3 and Amazon Glacier Storage

In this series, we will work through the exercises below:

1. Amazon Simple Storage Service (Amazon S3) and Amazon Glacier Storage
2. Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Elastic Block Store (Amazon EBS)
3. Amazon Virtual Private Cloud (Amazon VPC)
4. Elastic Load Balancing, Amazon CloudWatch, and Auto Scaling
5. AWS Identity and Access Management (IAM)
6. Databases and AWS
7. SQS, SWF, and SNS
8. Domain Name System (DNS) and Amazon Route 53
9. Amazon ElastiCache
10. Additional Key Services
11. Security on AWS
12. AWS Risk and Compliance
13. Architecture Best Practices

Based on the document: AWS-Certified-Solutions-Architect-Official-Study-Guide.pdf

Now, start practicing!!!

1. AWS Certified Solutions Architect Exercises - Part 1: Amazon S3 and Amazon Glacier Storage

1.1. Today's tasks

1: Create an Amazon Simple Storage Service (Amazon S3) Bucket
2: Upload, Make Public, and Delete Objects in Your Bucket
3: Enable Version Control
4: Delete an Object and Then Restore It
5: Lifecycle Management
6: Enable Static Hosting on Your Bucket

1.2. Before you begin

I assume you already have an AWS account.

1.3. Let's do it

    EXERCISE 1: Create an Amazon Simple Storage Service (Amazon S3) Bucket

    1. Log in to the AWS Management Console at the link: https://console.aws.amazon.com/console/home?nc2=h_ct&src=header-signin

    2. Choose an appropriate region, such as the Asia Pacific (Tokyo) Region.

    3. Navigate to the Amazon S3 console. Notice that the region indicator now says Global. Remember that Amazon S3 buckets form a global namespace, even though each bucket is created in a specific region.

    4. Start the create bucket process. Click the button Create bucket.

    5. When prompted for Bucket Name, use yourname-demo-bucket-yyyymmdd.

    6. Choose a region, such as Asia Pacific (Tokyo).

    You should now have a new Amazon S3 bucket.
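    The suggested naming pattern can be checked against S3's basic bucket-naming rules (3 to 63 characters; lowercase letters, digits, and hyphens; must start and end with a letter or digit). The sketch below is a simplified check written for this exercise, not an official validator (real S3 rules also allow dots, with extra restrictions):

    ```python
    import re

    # Simplified S3 bucket-name rules: 3-63 chars, lowercase letters,
    # digits, and hyphens; must start and end with a letter or digit.
    BUCKET_RE = re.compile(r"^[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$")

    def is_valid_bucket_name(name: str) -> bool:
        """Return True if `name` passes the simplified S3 naming rules."""
        return bool(BUCKET_RE.match(name))

    print(is_valid_bucket_name("yourname-demo-bucket-20240101"))  # True
    print(is_valid_bucket_name("YourName_Demo"))                  # False
    ```

    Running a quick check like this before clicking Create bucket saves a round trip when the console rejects an invalid name.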

    EXERCISE 2: Upload, Make Public, and Delete Objects in Your Bucket

    In this exercise, you will upload a new object to your bucket. You will then make this object public and view it in your browser, and finally delete it from the bucket.

    Upload an Object

    1. Load your new bucket in the Amazon S3 console.

    2. Select Upload, then Add Files.

    3. Locate a file on your PC that you are okay with uploading to Amazon S3 and making public to the Internet. (We suggest using a non-personal image file for the purposes of this exercise.)

    4. Select a suitable file, then Start Upload. You will see the status of your file in the Transfers section.

    5. After your file is uploaded, the status should change to Done.

    The file you uploaded is now stored as an Amazon S3 object and should be listed in the contents of your bucket.

    Open the Amazon S3 URL

    1. Now open the properties for the object. The properties should include bucket, name, and link.

    2. Copy the Amazon S3 URL for the object.

    3. Paste the URL in the address bar of a new browser window or tab.

    You should get a message with an XML error code AccessDenied. Even though the object has a URL, it is private by default, so it cannot be accessed by a web browser.

    Make the Bucket Public

    1. Go back to the Permissions tab of your bucket and turn Block public access Off.

    2. Add a bucket policy that grants everyone read-only access to the objects:

    {
         "Version": "2012-10-17",
         "Id": "public read policy example",
         "Statement": [
             {
                 "Sid": "Allow get requests from everyone",
                 "Effect": "Allow",
                 "Principal": "*",
                 "Action": "s3:GetObject",
                 "Resource": "arn:aws:s3:::yourname-demo-bucket-yyyymmdd/*"
             }
         ]
     }

    3. Copy the Amazon S3 URL again and try to open it in a browser or tab. Your public image file should now display in the browser or browser tab.

    4. For other ways to grant public read access, read more at the link: https://aws.amazon.com/premiumsupport/knowledge-center/read-access-objects-s3-bucket/
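    Building the policy programmatically helps avoid JSON typos such as a missing wildcard in the Resource ARN. A minimal sketch using only the Python standard library; the bucket name is a placeholder:

    ```python
    import json

    def public_read_policy(bucket: str) -> str:
        """Build a bucket policy granting everyone read-only (s3:GetObject) access."""
        policy = {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Sid": "AllowPublicRead",
                    "Effect": "Allow",
                    "Principal": "*",                        # everyone
                    "Action": "s3:GetObject",                # read-only
                    "Resource": f"arn:aws:s3:::{bucket}/*",  # all objects in the bucket
                }
            ],
        }
        return json.dumps(policy, indent=4)

    print(public_read_policy("yourname-demo-bucket-yyyymmdd"))
    ```

    Paste the printed JSON into the bucket policy editor; note the trailing `/*` on the Resource, which scopes the statement to the objects rather than the bucket itself.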

    Delete the Object

    1. In the Amazon S3 console, select Object ➝ Actions ➝ Delete. Choose Delete when prompted if you want to delete the object.

    2. The object has now been deleted.

    3. To verify, try to reload the deleted object’s Amazon S3 URL.
    You should once again get the XML AccessDenied error message.

    EXERCISE 3: Enable Version Control

    In this exercise, you will enable version control on your newly created bucket.

    Enable Versioning

    1. In the Amazon S3 console, open your bucket. Click Properties tab ➝ Versioning ➝ select Enable versioning ➝ Save.

    Your bucket now has versioning enabled.

    Create Multiple Versions of an Object

    1. Create a text file named foo.txt on your computer and write the word blue in the text file.

    2. Save the text file to a location of your choosing.

    3. Upload the text file to your bucket. This will be version 1.

    4. After you have uploaded the text file to your bucket, open the copy on your local computer and change the word blue to red. Save the text file with the original filename.

    5. Upload the modified file to your bucket.

    6. Select Show Versions on the uploaded object.

    You will now see two different versions of the object with different version IDs and possibly different sizes. Note that when you select Show Versions, the Amazon S3 URL now includes the version ID in the query string after the object name.
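    As noted, a versioned object's URL carries the version ID as a query parameter. A small sketch of that URL shape, with a made-up version ID as a placeholder:

    ```python
    from urllib.parse import urlencode

    def versioned_url(bucket: str, key: str, version_id: str) -> str:
        """Build a virtual-hosted-style S3 URL with a versionId query parameter."""
        base = f"https://{bucket}.s3.amazonaws.com/{key}"
        return f"{base}?{urlencode({'versionId': version_id})}"

    print(versioned_url("yourname-demo-bucket-yyyymmdd", "foo.txt", "3HL4kqtJlcpXroDTDm"))
    ```

    Requesting the URL without the `versionId` parameter always returns the latest version of the object.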

    EXERCISE 4: Delete an Object and Then Restore It

    In this exercise, you will delete an object in your Amazon S3 bucket and then restore it.

    Delete an Object

    Open the bucket containing the text file for which you now have two versions.

    1. Select Hide Versions.
    2. Select Actions ➝ Delete, and then select Delete to verify.
    3. Your object will now be deleted, and you can no longer see the object.
    4. Select Show Versions.
      Both versions of the object now show their version IDs.

    Restore an Object

    Open your bucket.

    1. Select Show Versions.
    2. Select the oldest version and download the object. Note that the filename is simply foo.txt with no version indicator.
    3. Select Hide Versions, then upload foo.txt to the same bucket.
    4. The file foo.txt should reappear; select Show Versions to confirm that a new version ID was created.

    EXERCISE 5: Lifecycle Management

    In this exercise, you will explore the various options for lifecycle management.

    1. Select your bucket in the Amazon S3 console.
    2. Under Management ➝ Lifecycle, add a Lifecycle Rule.
    3. Explore the various options to add lifecycle rules to objects in this bucket. It is recommended that you do not implement any of these options, as you may incur additional costs. After you have finished, click the Cancel button.

    My example: a rule that transitions objects to the GLACIER storage class after 1 day.
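    The example rule above can be expressed as the JSON structure S3 uses for lifecycle configurations. A sketch of the structure only; the rule ID is a hypothetical name:

    ```python
    import json

    # Lifecycle configuration: move every object to GLACIER after 1 day.
    lifecycle = {
        "Rules": [
            {
                "ID": "glacier-after-1-day",   # hypothetical rule name
                "Status": "Enabled",
                "Filter": {"Prefix": ""},       # empty prefix = whole bucket
                "Transitions": [
                    {"Days": 1, "StorageClass": "GLACIER"}
                ],
            }
        ]
    }

    print(json.dumps(lifecycle, indent=2))
    ```

    As the exercise warns, applying a rule like this to a real bucket may incur Glacier transition and retrieval costs, so keep it as a dry run here.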

    EXERCISE 6: Enable Static Hosting on Your Bucket

    In this exercise, you will enable static hosting on your newly created bucket.

    1. Select your bucket in the Amazon S3 console.

    2. In the Properties section, select Static Website Hosting.

    For the index document name, enter index.txt, and for the error document name, enter error.txt.

    3. Use a text editor to create two text files and save them as index.txt and error.txt.
    In the index.txt file, write the phrase “Hello World,” and in the error.txt file, write the phrase “Error Page.” Save both text files and upload them to your bucket.

    4. Copy the Endpoint: link under Static Website Hosting and paste it in a browser window or tab. You should now see the phrase “Hello World” displayed.

    5. In the address bar in your browser, try adding a forward slash followed by a made-up filename (for example, /test.html). You should now see the phrase “Error Page” displayed.
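    The static-hosting settings above boil down to a small website configuration: an index document suffix and an error document key. A sketch of that structure using the exercise's file names:

    ```python
    import json

    # Website configuration matching the exercise: index.txt as the
    # index document, error.txt as the error document.
    website_config = {
        "IndexDocument": {"Suffix": "index.txt"},
        "ErrorDocument": {"Key": "error.txt"},
    }

    print(json.dumps(website_config, indent=2))
    ```

    This is why requesting the bare endpoint serves index.txt ("Hello World"), while any missing key such as /test.html falls through to error.txt ("Error Page").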

    Finally, it is important to clean up: delete all of the objects in your bucket, then delete the bucket itself. That concludes AWS Certified Solutions Architect Exercises - part 1: Amazon S3 and Amazon Glacier Storage. Thank you for reading the DevopsRoles page!

    How to Install and Configure OpenStack Nova

    In this tutorial, you will learn how to install and configure the OpenStack compute service (Nova). A previous post covered how to install OpenStack all-in-one on CentOS 7. If you have not yet installed Nova, follow the steps below.

    Step-by-Step Installation and Configuration of OpenStack Nova

    Step 1: Create a User and Database for Nova

    Use MariaDB to set up the required databases and users:

    [root@DevopsRoles ~(keystone)]# mysql -u root -p
    

    Run the following commands to create the necessary databases:

    CREATE DATABASE nova;
    GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'password';
    GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'password';
    
    CREATE DATABASE nova_api;
    GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'password';
    GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'password';
    
    CREATE DATABASE nova_placement;
    GRANT ALL PRIVILEGES ON nova_placement.* TO 'nova'@'localhost' IDENTIFIED BY 'password';
    GRANT ALL PRIVILEGES ON nova_placement.* TO 'nova'@'%' IDENTIFIED BY 'password';
    
    CREATE DATABASE nova_cell0;
    GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'password';
    GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'password';
    
    FLUSH PRIVILEGES;
    EXIT;
    
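    Since the four databases receive identical grants, the SQL above can be generated with a short loop instead of being typed four times. A sketch; 'password' is the same placeholder used in the listing:

    ```python
    # Generate the CREATE/GRANT statements for all four Nova databases.
    databases = ["nova", "nova_api", "nova_placement", "nova_cell0"]

    statements = []
    for db in databases:
        statements.append(f"CREATE DATABASE {db};")
        for host in ("localhost", "%"):
            statements.append(
                f"GRANT ALL PRIVILEGES ON {db}.* TO 'nova'@'{host}' "
                f"IDENTIFIED BY 'password';"
            )
    statements.append("FLUSH PRIVILEGES;")

    print("\n".join(statements))
    ```

    Pipe the output into `mysql -u root -p` to run it in one shot, and replace 'password' with a real secret first.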

    Step 2: Add Users and Configure Services in Keystone

    1. Add the nova user to the service project:
      • openstack user create --domain default --project service --password servicepassword nova
      • openstack role add --project service --user nova admin
    2. Add the placement user:
      • openstack user create --domain default --project service --password servicepassword placement
      • openstack role add --project service --user placement admin
    3. Create service entries:
      • openstack service create --name nova --description "OpenStack Compute service" compute
      • openstack service create --name placement --description "OpenStack Compute Placement service" placement
    4. Define the Keystone controller address:
      • export controller=10.0.2.15
    5. Add endpoints:
      • openstack endpoint create --region RegionOne compute public http://$controller:8774/v2.1/%\(tenant_id\)s
      • openstack endpoint create --region RegionOne compute internal http://$controller:8774/v2.1/%\(tenant_id\)s
      • openstack endpoint create --region RegionOne compute admin http://$controller:8774/v2.1/%\(tenant_id\)s
      • openstack endpoint create --region RegionOne placement public http://$controller:8778
      • openstack endpoint create --region RegionOne placement internal http://$controller:8778
      • openstack endpoint create --region RegionOne placement admin http://$controller:8778
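    The six endpoint commands differ only in service name and interface, so they can be generated mechanically. A sketch that prints the same commands as above, using the controller address from the listing:

    ```python
    # Generate the six "openstack endpoint create" commands for compute
    # and placement across the public/internal/admin interfaces.
    controller = "10.0.2.15"

    services = {
        "compute": f"http://{controller}:8774/v2.1/%\\(tenant_id\\)s",
        "placement": f"http://{controller}:8778",
    }

    commands = [
        f"openstack endpoint create --region RegionOne {svc} {iface} {url}"
        for svc, url in services.items()
        for iface in ("public", "internal", "admin")
    ]

    print("\n".join(commands))
    ```

    The backslashes keep `%\(tenant_id\)s` escaped exactly as it must appear on the shell command line.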

    Step 3: Install OpenStack Nova

    Install the Nova packages:

    yum --enablerepo=centos-openstack-stein,epel -y install openstack-nova
    

    Step 4: Configure OpenStack Nova

    Edit the Nova configuration file:

    mv /etc/nova/nova.conf /etc/nova/nova.conf.org
    vi /etc/nova/nova.conf
    

    Add the following configuration:

    [DEFAULT]
    my_ip = 10.0.2.15
    state_path = /var/lib/nova
    enabled_apis = osapi_compute,metadata
    log_dir = /var/log/nova
    transport_url = rabbit://openstack:password@10.0.2.15
    
    [api]
    auth_strategy = keystone
    
    [glance]
    api_servers = http://10.0.2.15:9292
    
    [oslo_concurrency]
    lock_path = $state_path/tmp
    
    [api_database]
    connection = mysql+pymysql://nova:password@10.0.2.15/nova_api
    
    [database]
    connection = mysql+pymysql://nova:password@10.0.2.15/nova
    
    [keystone_authtoken]
    www_authenticate_uri = http://10.0.2.15:5000
    auth_url = http://10.0.2.15:5000
    memcached_servers = 10.0.2.15:11211
    auth_type = password
    project_domain_name = default
    user_domain_name = default
    project_name = service
    username = nova
    password = servicepassword
    
    [placement]
    auth_url = http://10.0.2.15:5000
    os_region_name = RegionOne
    auth_type = password
    project_domain_name = default
    user_domain_name = default
    project_name = service
    username = placement
    password = servicepassword
    
    [placement_database]
    connection = mysql+pymysql://nova:password@10.0.2.15/nova_placement
    

    Apply the correct permissions:

    chmod 640 /etc/nova/nova.conf
    chgrp nova /etc/nova/nova.conf
    

    Step 5: Set Up SELinux and Firewall Rules

    Enable SELinux for OpenStack:

    yum --enablerepo=centos-openstack-stein -y install openstack-selinux
    semanage port -a -t http_port_t -p tcp 8778
    

    Update the firewall rules:

    firewall-cmd --add-port={6080/tcp,6081/tcp,6082/tcp,8774/tcp,8775/tcp,8778/tcp} --permanent
    firewall-cmd --reload
    

    Step 6: Initialize the Database

    Synchronize the database:

    su -s /bin/bash nova -c "nova-manage api_db sync"
    su -s /bin/bash nova -c "nova-manage cell_v2 map_cell0"
    su -s /bin/bash nova -c "nova-manage db sync"
    su -s /bin/bash nova -c "nova-manage cell_v2 create_cell --name cell1"
    

    Step 7: Start Nova Services

    Start and enable Nova services:

    systemctl start openstack-nova-api openstack-nova-consoleauth openstack-nova-conductor openstack-nova-scheduler openstack-nova-novncproxy
    systemctl enable openstack-nova-api openstack-nova-consoleauth openstack-nova-conductor openstack-nova-scheduler openstack-nova-novncproxy
    

    Step 8: Install and Configure Nova Compute

    Install Nova Compute:

    yum --enablerepo=centos-openstack-stein,epel -y install openstack-nova-compute
    

    Update the Nova configuration to enable VNC:

    [vnc]
    enabled = True
    server_listen = 0.0.0.0
    server_proxyclient_address = 10.0.2.15
    novncproxy_base_url = http://10.0.2.15:6080/vnc_auto.html
    

    Start and enable the compute service:

    systemctl start openstack-nova-compute
    systemctl enable openstack-nova-compute
    

    Final Steps

    Verify the Nova setup:

    openstack compute service list
    

    Congratulations! You have successfully installed and configured OpenStack Nova. Thank you for reading the DevopsRoles page!
