Tool to Spin up Kwok Kubernetes Nodes

What is Kwok Kubernetes?

Kwok Kubernetes is a tool that allows you to quickly and easily spin up Kubernetes nodes in a local environment using VirtualBox and Vagrant.

Kwok provides an easy way to set up a local Kubernetes cluster for development and testing purposes.

It is not designed for production use, as it’s intended only for local development environments.

Deploy Kwok Kubernetes to a cluster

To deploy a cluster with Kwok, you can follow these general steps:

  • Install VirtualBox and Vagrant on your local machine.
  • Download or clone the Kwok repository from GitHub.
  • Modify the config.yml file to specify the number of nodes and other settings for your Kubernetes cluster.
  • Run the vagrant up command to start the Kubernetes cluster.
  • Once the cluster is up and running, you can use the kubectl command-line tool to interact with it and deploy your applications.

Install VirtualBox and Vagrant on your local machine.

You can refer to the official documentation to install Vagrant and VirtualBox.

Download or clone the Kwok repository from GitHub.

Go to the Kwok GitHub repository page: https://github.com/squat/kwok

Click on the green “Code” button, and then click on “Download ZIP” to download a zip file of the repository.

Alternatively, you can clone it from the command line:

git clone https://github.com/squat/kwok.git

Modify the config.yml file for your Kubernetes cluster.

Open the config.yml file in a text editor.

Modify the settings in the config.yml file as needed.

  • num_nodes: This setting specifies the number of nodes to create in the Kubernetes cluster.
  • vm_cpus: This setting specifies the number of CPUs to allocate to each node.
  • vm_memory: This setting specifies the amount of memory to allocate to each node.
  • ip_prefix: This setting specifies the IP address prefix to use for the nodes in the cluster.
  • kubernetes_version: This setting specifies the version of Kubernetes to use in the cluster.

Save your changes to the config.yml file.

For example, to create a three-node Kubernetes cluster with 2 CPUs and 4 GB of memory allocated to each node, using the IP address prefix “192.168.32” and Kubernetes version 1.21.0:

# Number of nodes to create
num_nodes: 3

# CPU and memory settings for each node
vm_cpus: 2
vm_memory: 4096

# Network settings
ip_prefix: "192.168.32"
network_plugin: flannel

# Kubernetes version to install
kubernetes_version: "1.21.0"

# Docker version to install
docker_version: "20.10.8"

Start the Kubernetes cluster

Once you have modified the config.yml file to specify the desired settings, run the vagrant up command to start the Kubernetes cluster.

Now you can deploy your applications.

Conclusion

You have used Kwok Kubernetes, a tool to spin up Kubernetes nodes. I hope you find this helpful. Thank you for reading the DevopsRoles page!

Trends for DevOps engineering

What is DevOps?

DevOps is a software development approach that aims to combine software development (Dev) and IT operations (Ops) to improve the speed, quality, and reliability of software delivery.

It is a set of practices that emphasize collaboration, communication, automation, and monitoring throughout the entire software development lifecycle (SDLC).

The main goal of DevOps is to enable organizations to deliver software products more quickly and reliably by reducing the time and effort required to release new software features and updates.

DevOps also helps to minimize the risk of failures and errors in software systems, by ensuring that development, testing, deployment, and maintenance are all aligned and integrated seamlessly.

Some of the key practices and tools used in DevOps engineering include:

  • Continuous integration (CI)
  • Continuous delivery (CD)
  • Infrastructure as code (IaC)
  • Automated testing
  • Monitoring and logging
  • Collaboration and communication: DevOps places a strong emphasis on collaboration and communication between development and operations teams.

Here are some of the key trends and developments that are likely to shape the future of DevOps engineering in 2024:

  • Increased adoption of AI/ML and automation
  • Focus on security and compliance
  • Integration with cloud and serverless technologies
  • DevSecOps
  • Shift towards GitOps: GitOps is a new approach to DevOps that involves using Git as the central source of truth for infrastructure and application configuration.

DevOps Tools

Here are some of the most commonly used DevOps tools:

  • Jenkins: Jenkins is a popular open-source automation server that is used for continuous integration and continuous delivery (CI/CD) processes. Jenkins enables teams to automate the building, testing, and deployment of software applications.
  • Git: Git is a widely used distributed version control system that enables teams to manage and track changes to software code. Git makes it easy to collaborate on code changes and to manage different branches of code.
  • Docker: Docker is a containerization platform that enables teams to package applications and their dependencies into containers. Containers are lightweight, portable, and easy to deploy, making them a popular choice for DevOps teams.
  • Kubernetes: Kubernetes is an open-source container orchestration system that is used to manage and scale containerized applications. Kubernetes provides features such as load balancing, auto-scaling, and self-healing, making it easier to manage and deploy containerized applications at scale.
  • Ansible: Ansible is a popular automation tool that is used for configuration management, application deployment, and infrastructure management. Ansible enables teams to automate the deployment and management of infrastructure and applications, making it easier to manage complex systems.
  • Grafana: Grafana is an open-source platform for data visualization and monitoring. Grafana enables teams to visualize and analyze data from various sources, including metrics, logs, and databases, making it easier to identify and diagnose issues in software applications.
  • Prometheus: Prometheus is an open-source monitoring and alerting system that is used to collect and analyze metrics from software applications. Prometheus provides a powerful query language and an intuitive user interface, making it easier to monitor and troubleshoot software applications.

Some trends and tools in the DevOps space in the coming years

Cloud-Native Technologies: Cloud-based architectures and cloud-native technologies such as Kubernetes, Istio, and Helm are likely to become even more popular for managing containerized applications and microservices.

Machine Learning and AI: As machine learning and AI become more prevalent in software applications, tools that enable DevOps teams to manage and deploy machine learning models will become more important. Some emerging tools in this space include Kubeflow, MLflow, and TensorBoard.

Security and Compliance: With increasing concerns around security and compliance, tools that help DevOps teams manage security and compliance requirements throughout the SDLC will be in high demand. This includes tools for security testing, vulnerability scanning, and compliance auditing.

GitOps: GitOps is an emerging approach to infrastructure management that emphasizes using Git as the single source of truth for all infrastructure changes. GitOps enables teams to manage infrastructure as code, enabling greater automation and collaboration.

Serverless Computing: Serverless computing is an emerging technology that enables teams to deploy and run applications without managing servers or infrastructure. Tools such as AWS Lambda, Azure Functions, and Google Cloud Functions are likely to become more popular as serverless computing continues to gain traction.

Conclusion

To succeed with DevOps engineering, organizations must embrace a variety of practices, including continuous integration, continuous delivery, testing, monitoring, and infrastructure as code. They must also leverage a wide range of DevOps tools and technologies to automate and streamline their software development and delivery processes.

Ultimately, DevOps is not just a set of practices or tools, but a cultural shift towards a more collaborative, iterative, and customer-centric approach to software development. By embracing DevOps and continuously improving their processes and technologies, organizations can stay competitive and deliver value to their customers in an increasingly fast-paced and complex technology landscape. Visit DevopsRoles.com for more information.

10 Docker Commands You Need to Know

Introduction

In this tutorial, We will delve into the fundamental Docker commands crucial for anyone working with this widely adopted containerization tool. Docker has become a cornerstone for developers and DevOps engineers, providing a streamlined approach to constructing, transporting, and executing applications within containers.

Its simplicity and efficiency make it an indispensable asset in application deployment and management. Whether you are a novice exploring Docker’s capabilities or a seasoned professional implementing it in production, understanding these essential commands is pivotal.

This article aims to highlight and explain the ten imperative Docker commands that will be integral to your routine tasks.

10 Docker Commands

Docker run

The docker run command is used to start a new container from an image. It is the most basic and commonly used Docker command. Here’s an example of how to use it:

docker run nginx

This command will download the latest Nginx image from Docker Hub and start a new container from it. The container starts in the foreground, and you can see the logs as they are generated.

docker ps

The docker ps command is used to list the running containers on your system. It provides information such as the container ID, image name, and status. Here’s an example of how to use it:

docker ps

This command will display a list of all the running containers on your system. If you want to see all containers (including stopped containers), you can use the -a option:

docker ps -a

This will display a list of all the containers on your system, regardless of their status.

docker stop

The docker stop command is used to stop a running container. It sends a SIGTERM signal to the container, allowing it to gracefully shut down. Here’s an example of how to use it:

docker stop mycontainer

This command will stop the container with the name mycontainer. If you want to forcefully stop a container, you can use the docker kill command:

docker kill mycontainer

This will send a SIGKILL signal to the container, which will immediately stop it. However, this may cause data loss or other issues if the container is not properly shut down.

docker rm

The docker rm command is used to remove a stopped container.

syntax

docker rm <container>

For example, to remove the container with the ID “xxx123”, you can use the command

docker rm xxx123

docker images

The docker images command is used to list the images available locally. This command will display a list of all the images that are currently available on your system.

docker rmi

The docker rmi command is used to remove a local image.

syntax

docker rmi <image>

For example, to remove the image with the name “myimage”, you can use the command.

docker rmi myimage

docker logs

The docker logs command is used to show the logs of a running container.

Syntax

docker logs <container>

For example, to show the logs of the container with the ID “xxx123”, you can use the command docker logs xxx123.

docker exec -it

The docker exec command is used to run a command inside a running container.

syntax

docker exec -it <container> <command>

For example, to run a bash shell inside the container with the ID “xxx123”, you can use the command

docker exec -it xxx123 bash

docker build -t

The docker build command is used to build a Docker image from a Dockerfile file.

syntax

docker build -t <image> <path>

For example, to build an image with the name “myimage” from a Dockerfile located in the current directory, you can use the command

docker build -t myimage .

docker-compose up

The docker-compose up command is used to start the containers defined in a docker-compose.yml file. This command will start all the containers defined in the Compose file.
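For illustration, a minimal docker-compose.yml might look like this (a hypothetical sketch; the service names and images are only examples). Running docker-compose up in the same directory starts both containers:

```yaml
version: "3.9"
services:
  web:
    image: nginx
    ports:
      - "8080:80"   # host port 8080 -> container port 80
  db:
    image: postgres:14
    environment:
      POSTGRES_PASSWORD: example_secret   # example value only
```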

Conclusion

These are the top Docker commands that you’ll use frequently when working with Docker. Mastering these commands will help you get started with Docker and make it easier to deploy and manage your applications. I hope you find this helpful. Thank you for reading the DevopsRoles page!

Ansible practice exercises: Step-by-Step Tutorials and Examples for Automation Mastery

Introduction

Welcome to our comprehensive guide on Ansible practice exercises, where we delve into hands-on examples to master this powerful automation tool. In this tutorial, we will work through several Ansible practice exercises.

An Introduction to Ansible

Ansible is a popular open-source automation tool for IT operations and configuration management. One of the key features of Ansible is its ability to execute tasks with elevated privileges, which is often necessary when configuring or managing systems.

Ansible practice: how to create a user and grant them sudo permissions in Ansible.

- name: Create user
  user:
    name: huupv
    state: present

- name: Add user to sudoers
  lineinfile:
    path: /etc/sudoers
    line: "huupv ALL=(ALL) NOPASSWD: ALL"
    state: present

In the first task, the “user” module is used to create a user with the name “huupv”. The “state” directive is set to “present” to ensure that the user is created if it doesn’t already exist.

In the second task, the “lineinfile” module is used to add the user “huupv” to the sudoers file. The “line” directive specifies that “huupv” can run all commands as any user without a password. The “state” directive is set to “present” to ensure that the line is added if it doesn’t already exist in the sudoers file.

Note: It is recommended to use the “visudo” command to edit the sudoers file instead of directly editing the file, as it checks the syntax of the file before saving changes.
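Building on that note, a safer variant (a sketch, not the playbook above) writes a drop-in file under /etc/sudoers.d and uses lineinfile’s validate parameter, so the change is syntax-checked with visudo before it is saved. The file name and mode here are illustrative:

```yaml
- name: Grant huupv passwordless sudo (validated with visudo)
  lineinfile:
    path: /etc/sudoers.d/huupv   # drop-in file instead of editing /etc/sudoers
    line: "huupv ALL=(ALL) NOPASSWD: ALL"
    create: yes
    mode: "0440"
    validate: "visudo -cf %s"    # reject the change if the syntax is invalid
```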

Try it yourself with Ansible!

How to disable SELinux in Ansible.

- name: Disable SELinux
  lineinfile:
    path: /etc/selinux/config
    regexp: '^SELINUX='
    line: SELINUX=disabled
  register: selinux_config

- name: Restart the system to apply the changes
  command: shutdown -r now
  when: selinux_config.changed

In the first task, the “lineinfile” module sets SELINUX=disabled in the SELinux configuration file located at “/etc/selinux/config”, replacing the existing SELINUX= line, and the result is registered in the “selinux_config” variable.

In the second task, the “command” module restarts the system to apply the change. The “when” directive ensures the reboot runs only if the configuration file was actually modified.

Note: Disabling SELinux is not recommended for security reasons. If you need to modify the SELinux policy, it is better to set SELinux to “permissive” mode, which logs SELinux violations but does not enforce them, rather than completely disabling SELinux.

How to allow ports 22, 80, and 443 in the firewall on Ubuntu using Ansible

- name: Allow ports 22, 80, and 443 in firewall
  ufw:
    rule: allow
    port: "{{ item }}"
  loop: [22, 80, 443]

- name: Verify firewall rules
  command: ufw status
  register: firewall_status

- name: Display firewall status
  debug:
    var: firewall_status.stdout_lines
  • In the first task, the “ufw” module is used to allow incoming traffic on ports 22, 80, and 443. The “rule” directive is set to “allow”, and the task loops over the list of ports, allowing each one in turn.
  • In the second task, the “command” module is used to run the “ufw status” command and register the result in the “firewall_status” variable.
  • In the third task, the “debug” module is used to display the firewall status, which is stored in the “firewall_status.stdout_lines” variable.

Note: Make sure the “ufw” firewall is installed and enabled on the target system before running this playbook.

How to change the hostname on Ubuntu, CentOS, RHEL, and Oracle Linux using Ansible.

- name: Update /etc/hosts with the new hostname
  become: yes
  become_method: sudo
  lineinfile:
    dest: /etc/hosts
    regexp: '^.*{{ inventory_hostname }}.*$'
    line: '{{ ansible_default_ipv4.address }} {{ new_hostname }} {{ inventory_hostname }}'
    state: present

- name: Update /etc/hostname with the new hostname
  become: yes
  become_method: sudo
  replace:
    dest: /etc/hostname
    regexp: '^.*{{ inventory_hostname }}.*$'
    replace: '{{ new_hostname }}'

- name: Reload hostname (Debian family)
  shell: |
    hostname {{ new_hostname }}
    echo {{ new_hostname }} > /etc/hostname
  when: ansible_os_family == 'Debian'

- name: Reload hostname (RedHat family)
  shell: |
    hostname {{ new_hostname }}
    echo {{ new_hostname }} > /etc/hostname
    sed -i "s/^HOSTNAME=.*/HOSTNAME={{ new_hostname }}/" /etc/sysconfig/network
  when: ansible_os_family == 'RedHat'

- name: Check the hostname
  shell: hostname
  register: hostname_check

- name: Display the hostname
  debug:
    var: hostname_check.stdout
  • The “lineinfile” module updates the “/etc/hosts” file with the new hostname, which is specified in the “new_hostname” variable, and the “replace” module updates the “/etc/hostname” file.
  • The “shell” module then reloads the hostname on Debian-family systems (including Ubuntu); the “when” directive ensures this runs only on those systems.
  • A separate “shell” task reloads the hostname on Red Hat, CentOS, and Oracle Linux systems, again gated by a “when” directive.

To run the Ansible playbook

  • Save the playbook content in a file with a .yml extension, for example, change_hostname.yml
  • Run the command ansible-playbook change_hostname.yml on the terminal.
  • Set the value of the new_hostname variable by passing it as an extra-var argument with the command: ansible-playbook change_hostname.yml --extra-vars "new_hostname=newhostname"
  • Before running the playbook, ensure you have the target server information in your Ansible inventory file and that the necessary SSH connection is set up.
  • If you have set become: yes in the playbook, make sure you have the necessary permissions on the target server to run the playbook with elevated privileges.

To list all the packages installed on a target server

- name: List all packages
  hosts: target
  tasks:
    - name: Get list of all packages
      command: "{{ 'dpkg-query -W' if ansible_distribution == 'Ubuntu' else 'rpm -qa' }}"
      register: packages

    - name: Display packages
      debug:
        var: packages.stdout_lines
  • Where target is the group of hosts defined in the inventory file.

To run this playbook, you can use the following command:

ansible-playbook list_packages.yml

  • Where list_packages.yml is the name of the playbook file.
  • This playbook will use the appropriate command (dpkg-query for Ubuntu; rpm -qa for CentOS, RHEL, and Oracle Linux) to get a list of all the installed packages and display them using the debug module.

Note: The ansible_distribution variable is used to determine the type of operating system running on the target host, and the appropriate command is executed based on the result.

Conclusion

We hope this guide on Ansible practice exercises has empowered you with the knowledge and skills to optimize your IT operations. By walking through these practical examples, you should now feel more confident using Ansible to automate complex tasks and improve efficiency across your systems. Continue to explore and experiment with Ansible to unlock its full potential and adapt its capabilities to your unique operational needs. I hope you find this helpful. Thank you for reading the DevopsRoles page!

How to run shell commands in Python

Introduction

In this tutorial, you will learn how to run shell commands in Python. The ability to automate tasks and scripts is invaluable, and Python offers robust tools to execute these operations efficiently. This guide provides the necessary insights and examples for integrating shell commands into your Python applications.

  1. Use the subprocess module
  2. Use the os module
  3. Use the sh library

Run shell commands in Python

Use the subprocess module

You can use the subprocess module in Python to run shell commands. The subprocess.run() function can be used to run a command and return the output.

Here is an example of how to use the subprocess.run() function to run the ls command and print the output:

import subprocess

result = subprocess.run(['ls', '-l'], stdout=subprocess.PIPE)
print(result.stdout.decode())
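On Python 3.7 and later, the same call can be written more conveniently with capture_output and text, plus check=True to raise an error on a non-zero exit status. A small sketch (using echo so it runs anywhere):

```python
import subprocess

# capture_output=True collects stdout and stderr;
# text=True decodes them to str; check=True raises
# CalledProcessError if the command exits non-zero.
result = subprocess.run(['echo', 'hello'],
                        capture_output=True, text=True, check=True)
print(result.stdout)  # hello
```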

You can also use subprocess.Popen to run shell commands and access the input/output channels of the commands.

import subprocess

p = subprocess.Popen(['ls', '-l'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = p.communicate()
print(stdout.decode())

However, it’s generally recommended to avoid invoking shell commands from Python when a Python library provides equivalent functionality, as that is more secure and less error-prone.

Use the os module

The os module in Python provides a way to interact with the operating system and can be used to run shell commands as well.

Here is an example of how to use the os.system() function to run the ls command and print the output:

import os

os.system('ls -l')

Alternatively, you can use the os.popen() function to run a command and return the output as a file object, which can be read using the read() or readlines() method.

import os

output = os.popen('ls -l').read()
print(output)

Note that os.popen2(), os.popen3(), and os.popen4() existed only in Python 2 and were removed in Python 3. If you need access to both the input and output channels of a command, use the subprocess module instead.

It’s worth noting that os.system() and os.popen() are older interfaces and are no longer recommended. The subprocess module is recommended instead, as it provides more control over the process being executed and is considered more secure.
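As a modern replacement for os.popen('...').read(), subprocess.check_output() returns a command’s output directly. A minimal sketch:

```python
import subprocess

# check_output runs the command and returns its stdout;
# text=True decodes the bytes to str.
output = subprocess.check_output(['echo', 'hello world'], text=True)
print(output)  # hello world
```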

Use the sh library

The sh library is a Python library that provides a simple way to run shell commands, it’s a wrapper around the subprocess module, and it provides a more convenient interface for running shell commands and handling the output.

Here is an example of how to use the sh library to run the ls command and print the output:

from sh import ls

print(ls("-l"))

You can also use sh to run shell commands and access the input/output channels of the command.

from sh import ls

output = ls("-l", _iter=True)
for line in output:
    print(line)

You can also capture the output of a command to a variable

from sh import ls

output = ls("-l", _ok_code=[0,1])
print(output)

It’s worth noting that the sh library provides a very convenient way to run shell commands and handle the output, but it can be less secure, as it allows arbitrary command execution, so it’s recommended to use it with caution.

Conclusion

Throughout this tutorial, we explored various methods for executing shell commands using Python, focusing on the subprocess module, os module, and the sh library. Each method offers unique advantages depending on your specific needs, from enhanced security and control to simplicity and convenience.

You have now learned how to run shell commands in Python. I hope you find this tutorial useful. Thank you for reading the DevopsRoles page!

Encrypt Files in Linux with Tomb

Introduction

In this tutorial, you will learn how to encrypt files in Linux with Tomb, a simple shell script that lets you encrypt folders and files on Linux.

  • Tomb is a powerful encryption tool for Linux that allows users to create encrypted directories and files, providing an extra layer of security for sensitive data.
  • Tomb uses both GNU Privacy Guard to handle its encryption and dd to wipe and format its virtual partitions.

Installing Tomb in Ubuntu

sudo apt install -y tomb

How to encrypt Files in Linux with Tomb

First, use the dig subcommand to create a 150 MB tomb file named “first-encrypt.tomb”:

tomb dig -s 150 first-encrypt.tomb

Next, create a new key for the tomb file:

tomb forge -k first-encrypt.tomb.key

Then, link the new key to your new tomb file with the command below:

tomb lock -k first-encrypt.tomb.key first-encrypt.tomb

Finally, you can open the locked tomb with the open subcommand:

tomb open -k first-encrypt.tomb.key first-encrypt.tomb

Create an image key to Encrypt files

Use the bury subcommand to combine the “first-encrypt.tomb.key” with image.jpg.

Now you can open the tomb file using the new image key.

tomb open -k image.jpg first-encrypt.tomb

Close a tomb (fails if the tomb is being used by a process)

tomb close

Forcefully close all open tombs, killing any applications using them

tomb slam all

List all open tombs

tomb list

To expand the size of the first-encrypt.tomb file from 150 MB to 1 GB:

tomb resize -k first-encrypt.tomb.key -s 1000 first-encrypt.tomb

Search your tomb

tomb index        # first, index your tomb files so they can be searched
tomb search test  # then search for the term you want

Conclusion

With Tomb, you can easily encrypt sensitive files and keep them secure on your Linux system. You now know how to encrypt files in Linux with Tomb. I hope this is helpful. Thank you for reading the DevopsRoles page!

Python Docker

Introduction

In this tutorial, you will learn how to run Python on Docker. Python is a programming language. The table below shows the Python 3.9 point release shipped by several platforms at the time of writing (the Docker image version was released in March 2022):

Image            3.9 release
Debian 11        3.9.2
Ubuntu 20.04     3.9.5
RHEL 8           3.9.7
RHEL 9           3.9.10
Docker python    3.9.14

You need to install Docker on Ubuntu.

The working directory for the Python Docker project:

root@devopsroles:~/dockerpython# ls -lF
total 20
-rw-r--r-- 1 root root  111 Nov 20 14:31 app.py
-rw-r--r-- 1 root root  236 Nov 20 15:00 Dockerfile
-rw-r--r-- 1 root root   20 Nov 20 14:27 requirements.txt

How to build a Docker container running a simple Python application.

Set up the Python Dockerfile

Create a folder and create a virtual environment. This isolates the environment for the Python Docker project.

For example

mkdir dockerpython
cd dockerpython
python3 -m venv dockerpython
source dockerpython/bin/activate

Create a new file named Dockerfile.

FROM python:3.9-slim-buster
ENV PYTHONUNBUFFERED=1
WORKDIR /app

COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt

COPY . .
EXPOSE 5000

CMD [ "python3", "-m" , "flask", "run", "--host=0.0.0.0", "--port=5000"]

Save and close.

Create the Python App

Create an app.py file. For example:

from flask import Flask
app = Flask(__name__)

@app.route('/')
def hello_world():
     return 'Hello, Docker!'

Save and close.

Create the requirements.txt file. This contains dependencies for the app to run.

For example, we add the packages for the requirements.txt file.

Flask==2.0.3
pylint

Or, method 2: to list all packages installed via pip, use pip3 freeze and save the output to the requirements.txt file.

pip3 install Flask
pip3 freeze | grep Flask >> requirements.txt

We will test whether the app works locally using the command below:

python3 -m flask run --host=0.0.0.0 --port=5000

Open the browser with the URL http://localhost:5000

Docker Build image and container

Build a Docker image from the Dockerfile you created. Use the command below:

docker build --tag dockerpython .

Tag the image using the syntax

docker tag <imageId> <hostname>/<imagename>:<tag>

For example, Tag the image.

docker tag 8fbb6cdc5e76 huupv/dockerpython:latest

Now, run the Docker image you created and tagged with the command line below:

docker run --publish 5000:5000 <imagename>

Use the command below to list containers running.

docker ps

The result below:

You can now test your application using http://localhost:5000 on your browser.

Docker pushed and retrieved from Docker Hub

The image will be pushed to and can later be retrieved from the Docker Hub registry. The command is simple:

docker push <hub-user>/<repo-name>:<tag>.

On the website Docker Hub. Click Create Repository to give the repo a name.

Create 1 repo name and description as below:

Copy the command to your terminal, replacing tagname with version latest.

In your terminal, run the docker login command to connect the remote repository to the local environment. Enter your username and password to log in.

Run the command to push to the repository you created:

docker push <hub-user>/<repo-name>:tagname

Confirm that your image has been pushed on the Docker Hub page.

From any terminal, pull the Docker image from Docker Hub.

root@devopsroles:~# docker pull huupv/dockerpython:latest

For example, use Docker Compose

version: "3.9"
services:
  web:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - .:/app
      - logvol:/var/log
    links:
      - redis

  redis:
    image: redis

volumes:
  logvol: {}

Conclusion

You now know how to run Python on Docker. I hope this is helpful. Thank you for reading the DevopsRoles page!

Install MariaDB via the Docker container

Introduction

In this tutorial, you will learn how to install MariaDB via Docker. MariaDB is a popular open-source database server.

You need to install Docker on Ubuntu.

Install MariaDB via Docker

Download MariaDB Docker Image.

docker search mariadb
docker pull mariadb

Get a list of installed images on Docker.

docker images

Creating a MariaDB Container

We will create a MariaDB container using the flags below:

  • --name my-mariadb: sets the name of the container. If nothing is specified, a random name is generated automatically.
  • --env MARIADB_ROOT_PASSWORD=password_secret: sets the root password for MariaDB.
  • --detach: runs the container in the background.

docker run --detach --name my-mariadb --env MARIADB_USER=example-user --env MARIADB_PASSWORD=example_user_password_secret --env MARIADB_ROOT_PASSWORD=password_secret mariadb:latest

Get the active Docker containers:

docker ps

The container is running. How do you access it?

docker exec -it my-mariadb mysql -uexample-user -p

Starting and Stopping MariaDB Container

restart MariaDB container

docker restart my-mariadb

stop MariaDB container

docker stop my-mariadb

start MariaDB container

docker start my-mariadb

In case we want to destroy a container:

docker rm -v my-mariadb

Conclusion

You have installed MariaDB via a Docker container. I hope this is helpful. Thank you for reading the DevopsRoles page!

Python Data Type Cheatsheet

Introduction

In this tutorial, we learn about integers, floats, strings, lists, dictionaries, and tuples. The Python data type cheatsheet begins.

Python data type cheatsheet

Integers

Integers represent whole numbers. For example:

age = 30
rank = 20

Floats represent decimal numbers. For example:

temperature = 20.2

Strings represent text. For example:

site = "devopsroles.com"

Lists represent arrays of values that may change during the program:

members = ["HuuPV", "no name", "Jack"]
ages_values = [30, 12, 54]
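Lists support indexing and in-place mutation; a small sketch (the names are illustrative):

```python
members = ["HuuPV", "no name", "Jack"]

members.append("Alice")  # add an element to the end
print(members[0])        # HuuPV
print(len(members))      # 4
```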

Dictionaries represent pairs of keys and values:

phone_numbers = {"huupv": "+123456789", "Jack": "+99999999"}

The keys and values of a dictionary can be extracted.
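For instance, keys() and values() return view objects, which can be wrapped in list() to get plain lists:

```python
phone_numbers = {"huupv": "+123456789", "Jack": "+99999999"}

# keys() and values() return view objects; list() turns them into lists
names = list(phone_numbers.keys())
numbers = list(phone_numbers.values())
print(names)    # ['huupv', 'Jack']
print(numbers)  # ['+123456789', '+99999999']
```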

Tuples

Tuples represent arrays of values that are not to be changed during the program.
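For example (the variable name is illustrative):

```python
# Tuples are immutable: elements cannot be reassigned after creation
coordinates = (10.0, 20.0)
print(coordinates[0])    # 10.0
print(len(coordinates))  # 2
```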

A tuple can be converted to a list, and a list back to a tuple, using the built-in list() and tuple() constructors.
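A sketch of both conversions:

```python
members_tuple = ("HuuPV", "no name", "Jack")

# tuple -> list: the result is mutable
members_list = list(members_tuple)
members_list.append("Alice")

# list -> tuple: the result is immutable again
members_tuple2 = tuple(members_list)
print(members_tuple2)  # ('HuuPV', 'no name', 'Jack', 'Alice')
```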

To get a list of attributes of a data type, or a list of the Python built-in functions, use dir().
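For example:

```python
import builtins

# dir() on a type lists its attributes and methods
print('replace' in dir(str))   # True

# dir() on the builtins module lists built-in functions such as len
print('len' in dir(builtins))  # True
```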

To get the documentation of a Python data type, use help():

help(str)
help(str.replace)
help(dict.values)

Conclusion

You have learned the Python data type cheatsheet. I hope this is helpful. Thank you for reading the DevopsRoles page!

How to Deploy MongoDB as a Docker Container

Introduction

In this tutorial, you will learn how to deploy MongoDB as a Docker container. MongoDB is a source-available, cross-platform, document-oriented database program. Classified as a NoSQL database program, MongoDB uses JSON-like documents with optional schemas.

In today’s world of modern application development, containerization has become a popular approach for deploying and managing applications.

Install Docker

You need to install Docker on Ubuntu.

How to Deploy MongoDB as a Docker Container

First, we will pull a specific MongoDB version with the command below:

docker pull mongo:3.4.4

Create a volume for the database so that data is retained. The command line below creates the volume:

docker volume create mongo_data

Now deploy it:

docker run -d -v mongo_data:/data/db --name mymongoDB --net=host mongo:3.4.4 --bind_ip 127.0.0.1 --port 27000

Verify the deployment with the command below:

docker ps -a

The container is running. How do you access it?

docker exec -it mymongoDB mongo localhost:27000

If you need to stop the MongoDB container:

docker stop ID

If you need to start the MongoDB container:

docker start ID

Conclusion

You have successfully deployed MongoDB as a Docker container. This approach offers flexibility, scalability, and portability for your MongoDB deployments. With containerization, you can easily manage your database infrastructure and ensure consistency across different environments. I hope this will be helpful. Thank you for reading the DevopsRoles page!
