Top 10 Free AI Tools and Powerful AIs for Text, Art, and More!

Explore a curated list of the top 10 free AI tools that cater to various applications. From text processing to creative arts, these powerful tools empower users with cutting-edge artificial intelligence capabilities. Enhance your projects, streamline tasks, and unlock new possibilities with these versatile and accessible AI solutions.

Top 10 free AI tools

1. Bard (Google AI tools)

Bard, Google's Large Language Model (LLM), is trained on an extensive dataset of text and code. It excels at generating text, translating languages, creating various kinds of creative content, and providing comprehensive, informative answers.

2. ChatGPT

Developed by OpenAI, ChatGPT is based on the GPT-3.5 model and is versatile for tasks such as answering questions, generating text, translating languages, crafting creative content, and engaging in conversations with users.

3. Leonardo AI

Leonardo AI is an AI art generator that produces realistic and creative artwork using machine learning models trained on large image datasets, offering a wide range of artistic styles.

4. Pika AI

Pika AI is an AI video generator that turns text prompts and images into short video clips, making it easy to experiment with animated creative content.

5. Rytr

Rytr is a text content creation app designed for crafting articles, emails, letters, and various other forms of textual content. With its user-friendly interface and advanced language generation capabilities, Rytr makes the process of writing effortless and efficient. Whether you’re a professional writer or someone looking to streamline your communication, Rytr provides a versatile platform for generating high-quality written content.

6. Grammarly

A grammar and spelling checker app, Grammarly enhances the quality of your text. Grammarly, an app dedicated to checking grammar and spelling, significantly elevates the quality of your written content. By seamlessly integrating into your writing process, Grammarly ensures not only grammatical accuracy but also enhances overall clarity and coherence. This powerful tool offers real-time suggestions, helping you refine your text and communicate more effectively. Whether you’re writing an email, crafting an article, or working on any other textual project, Grammarly is a reliable companion for writers of all levels. Improve your writing skills, polish your prose, and convey your message with precision using Grammarly.

7. Leaipix

Leaipix is an AI-powered image tool. It's versatile for art, illustration, graphic design, and creating content for social media.

8. ElevenLabs

ElevenLabs is an AI startup that develops voice AI tools and services, best known for its realistic text-to-speech and voice cloning.

  • Generate lifelike narration for articles, videos, and audiobooks from plain text.
  • Clone a voice from a short audio sample for consistent branded audio.
  • Produce speech in multiple languages and voices.

9. Bing AI

Powered by the GPT-4 model, Bing AI, developed by Microsoft, excels at tasks such as answering questions, generating text, translating languages, and creating diverse forms of creative written content.

10. Stable Diffusion

Developed by Stability AI, Stable Diffusion uses a machine learning model trained on a vast dataset of images. It excels in artistic drawing, illustration, graphic design, and creating content for social media.

Conclusion

To use these AIs, access them online via a web browser or mobile app. Here are some tips for effective AI usage:

• Clearly define your needs.
• Provide comprehensive information to enable effective AI operation.
• Be patient; AI may take time to process information and produce results.
• Be creative; AI can help generate new and unique ideas.

I hope these tips help you use AI more effectively. Visit DevopsRoles.com for additional information, including this list of the top 10 free AI tools.

Unlocking the Power of Top 10 Python Libraries for Data Science in 2024

Introduction

Python continues to dominate the field of data science in 2024, offering powerful libraries that streamline everything from data analysis to machine learning and visualization. Whether you’re a seasoned data scientist or a newcomer to the field, leveraging the right tools is key to success. This article explores the top 10 Python libraries for data science in 2024, showcasing their features, use cases, and practical examples.

Top 10 Python Libraries for Data Science in 2024

1. NumPy

Overview

NumPy (Numerical Python) remains a cornerstone for scientific computing in Python. It provides robust support for multi-dimensional arrays, mathematical functions, and efficient operations on large datasets.

Key Features

  • Multi-dimensional array manipulation.
  • Built-in mathematical functions for algebra, statistics, and more.
  • High-performance tools for linear algebra and Fourier transforms.

Example

import numpy as np

# Create a NumPy array
data = np.array([1, 2, 3, 4, 5])

# Perform operations
print("Mean:", np.mean(data))
print("Standard Deviation:", np.std(data))

2. Pandas

Overview

Pandas is a game-changer for data manipulation and analysis. It simplifies working with structured data through its versatile DataFrame and Series objects.

Key Features

  • Data cleaning and transformation.
  • Handling missing data.
  • Powerful grouping, merging, and aggregation functionalities.

Example

import pandas as pd

# Create a DataFrame
data = pd.DataFrame({
    'Name': ['Alice', 'Bob', 'Charlie'],
    'Age': [25, 30, 35]
})

# Analyze data
print(data.describe())
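Beyond summary statistics, the grouping and aggregation features listed above follow a simple pattern. A minimal sketch, using a made-up sales table:

import pandas as pd

# Group rows by a key column and aggregate each group (hypothetical data)
sales = pd.DataFrame({
    'Region': ['East', 'West', 'East', 'West'],
    'Revenue': [100, 200, 150, 250]
})
print(sales.groupby('Region')['Revenue'].sum())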

3. Matplotlib

Overview

Matplotlib is a versatile library for creating static, animated, and interactive visualizations.

Key Features

  • Extensive plotting capabilities.
  • Customization options for axes, titles, and styles.
  • Compatibility with multiple file formats.

Example

import matplotlib.pyplot as plt

# Create a simple line plot
x = [1, 2, 3, 4, 5]
y = [2, 4, 6, 8, 10]

plt.plot(x, y)
plt.title("Simple Line Plot")
plt.show()

4. Seaborn

Overview

Seaborn builds on Matplotlib, providing an intuitive interface for creating aesthetically pleasing and informative statistical graphics.

Key Features

  • Built-in themes for attractive plots.
  • Support for complex visualizations like heatmaps and pair plots.
  • Easy integration with Pandas DataFrames.

Example

import seaborn as sns
import pandas as pd
import matplotlib.pyplot as plt

# Create a heatmap
data = pd.DataFrame({
    'A': [1, 2, 3],
    'B': [4, 5, 6],
    'C': [7, 8, 9]
})

sns.heatmap(data, annot=True)
plt.show()

5. Scikit-learn

Overview

Scikit-learn is the go-to library for machine learning. It offers tools for everything from simple predictive models to complex algorithms.

Key Features

  • Support for supervised and unsupervised learning.
  • Tools for feature selection and preprocessing.
  • Comprehensive documentation and examples.

Example

from sklearn.linear_model import LinearRegression

# Simple linear regression
model = LinearRegression()
X = [[1], [2], [3]]
y = [2, 4, 6]
model.fit(X, y)

print("Predicted:", model.predict([[4]]))

6. TensorFlow

Overview

TensorFlow, developed by Google, is a powerful library for deep learning and large-scale machine learning.

Key Features

  • Versatile neural network building blocks.
  • GPU acceleration for high-performance training.
  • Pre-trained models for tasks like image and speech recognition.

Example

import tensorflow as tf

# Define a simple constant
hello = tf.constant('Hello, TensorFlow!')
print(hello.numpy())
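The constant above only verifies the installation. To illustrate the neural network building blocks mentioned in the key features, here is a minimal Keras model sketch (the layer sizes and input shape are arbitrary):

import tensorflow as tf

# A small fully connected classifier built from Keras layers
model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
model.summary()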

7. PyTorch

Overview

PyTorch, developed by Facebook, is another deep learning framework that excels in flexibility and dynamic computation graphs.

Key Features

  • Intuitive syntax.
  • Dynamic computation graphs.
  • Strong community support.

Example

import torch

# Create a tensor
tensor = torch.tensor([1.0, 2.0, 3.0])
print(tensor * 2)
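To see the dynamic computation graphs in action, a minimal autograd sketch:

import torch

# Track operations on a tensor, then compute gradients by backpropagation
x = torch.tensor([2.0], requires_grad=True)
y = x ** 2 + 3 * x
y.backward()
print(x.grad)  # dy/dx = 2x + 3 = 7 at x = 2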

8. SciPy

Overview

SciPy complements NumPy by offering advanced mathematical and scientific computing tools.

Key Features

  • Functions for optimization, integration, and interpolation.
  • Tools for signal and image processing.
  • Support for sparse matrices.

Example

from scipy.optimize import minimize

# Minimize a quadratic function
result = minimize(lambda x: (x - 2)**2, 0)
print("Optimal Value:", result.x)

9. Plotly

Overview

Plotly excels at creating interactive visualizations for web-based applications.

Key Features

  • Interactive dashboards.
  • Support for 3D plotting.
  • Compatibility with Python, R, and JavaScript.

Example

import plotly.express as px

# Create an interactive scatter plot
df = px.data.iris()
fig = px.scatter(df, x='sepal_width', y='sepal_length', color='species')
fig.show()

10. NLTK

Overview

Natural Language Toolkit (NLTK) is essential for text processing and computational linguistics.

Key Features

  • Tools for tokenization, stemming, and sentiment analysis.
  • Extensive corpus support.
  • Educational resources and documentation.

Example

import nltk
from nltk.tokenize import word_tokenize

# Download the tokenizer models on first use
nltk.download('punkt')

# Tokenize a sentence
sentence = "Data science is amazing!"
tokens = word_tokenize(sentence)
print(tokens)

FAQ

What is the best Python library for beginners in data science?

Pandas and Matplotlib are ideal for beginners due to their intuitive syntax and wide range of functionalities.

Are these libraries free to use?

Yes, all the libraries mentioned in this article are open-source and free to use.

Which library should I choose for deep learning?

Both TensorFlow and PyTorch are excellent for deep learning, with TensorFlow being preferred for production and PyTorch for research.

Conclusion

The Python ecosystem in 2024 offers a robust toolkit for data scientists. Libraries like NumPy, Pandas, Scikit-learn, and TensorFlow continue to push the boundaries of what’s possible in data science. By mastering these tools, you can unlock new insights, build sophisticated models, and create impactful visualizations. Start exploring these libraries today and take your data science projects to the next level.

I hope you found this helpful. Thank you for reading the DevopsRoles page!

Filter Data in Pandas Dataframe

Introduction

Pandas, a popular Python library for data manipulation and analysis, provides powerful tools for filtering data within a Pandas DataFrame. Filtering is a fundamental operation when working with large datasets, as it allows you to focus on specific subsets of your data that meet certain criteria. In this guide, we’ll explore various techniques for filtering data in Pandas DataFrame.

Prerequisites

Before starting, you should have the following prerequisites configured:

  • Visual Studio Code with Jupyter extension to run the notebook
  • Python 3.9, pandas library
  • CSV data file sample

You can use the CSV generator at https://extendsclass.com/csv-generator.html to create a sample CSV file.

Basic Filtering

  • Read CSV file into a Pandas DataFrame object
  • Using the query Method
  • Filtering with isin
  • Filtering Null (NaN) Values

Read CSV file into a Pandas DataFrame object

Use the read_csv() function to read data from the CSV file and set the column headers for the DataFrame:

import pandas as pd
student_cols = [
    'id','firstname','lastname','email','email2','profession'
]
students = pd.read_csv(
    'data/myFile0.csv',
    names=student_cols
)

Using the query Method

The query method allows you to express conditions as strings, providing a more concise and readable syntax:

students.query('profession == "doctor"')

You can use logical operators (& for AND, | for OR) to combine multiple conditions:

students.query('profession == "doctor" and lastname == "Mike"')
students.query('profession == "doctor" or profession == "worker"')
students.query('profession == ("doctor", "worker")')

Filtering with isin

The isin method is useful when you want to filter rows based on a list of values:

name_list = ['firefighter']
filtered_df = students[students['profession'].isin(name_list)]
print(filtered_df)
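For reference, isin and query are equivalent to plain boolean indexing, where conditions are combined with & (AND) and | (OR). A sketch against the same students DataFrame:

# Boolean-mask equivalent of the query examples above
filtered_df = students[
    (students['profession'] == 'doctor') & (students['lastname'] == 'Mike')
]
print(filtered_df)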

Filtering Null (NaN) Values

You can use the isnull() or notnull() methods to filter rows with missing data:

filtered_df = students[students['profession'].notnull()]

print(filtered_df)
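The isnull() method works the same way, selecting only the rows where the value is missing:

filtered_df = students[students['profession'].isnull()]
print(filtered_df)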

Conclusion

Filtering data is a crucial skill when working with Pandas DataFrames. Whether you need to select rows based on simple conditions or complex queries, Pandas provides a versatile set of tools to handle your data effectively.

Experiment with these techniques on your own datasets to gain a deeper understanding of how to filter data in Pandas DataFrames. As you become more comfortable with these methods, you’ll be better equipped to extract valuable insights from your data. Thank you for reading the DevopsRoles page!

Build lambda with a custom docker image

Introduction

In this tutorial, we will build a lambda with a custom Docker image.

Prerequisites

Before starting, you should have the following prerequisites configured:

  • An AWS account
  • AWS CLI on your computer

Walkthrough

  • Create a Python virtual environment
  • Create a Python app
  • Create a lambda with a custom docker image of ECR
  • Create ECR repositories and push an image
  • Create a lambda from the ECR image
  • Test lambda function on local
  • Test lambda function on AWS

Create a Python virtual environment

Create a Python virtual environment with the name py_virtual_env

python3 -m venv py_virtual_env

Create a Python app

This Python source code pulls a JSON file from https://data.gharchive.org and puts it into an S3 bucket. It then transforms the uploaded file to Parquet format.

Download the source code from here and put it into the py_virtual_env folder.
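If the download link is unavailable, the core of the ingest step looks roughly like the sketch below. It assumes the module is named S3app with a lambda_ingest entry point (the names used in the local test later in this post) and that the bucket and file names come from environment variables:

import os
import boto3
import requests

def lambda_ingest(event, context):
    # Configuration is assumed to come from environment variables
    bucket_name = os.environ['BUCKET_NAME']
    file_name = os.environ['FILENAME']
    # Download the archive from gharchive and upload it to S3
    res = requests.get(f'https://data.gharchive.org/{file_name}')
    s3_client = boto3.client('s3')
    s3_client.put_object(Bucket=bucket_name, Key=file_name, Body=res.content)
    return {'status_code': '200'}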

Create a lambda with a custom docker image of ECR

#build docker image
source ./py_virtual_env/bin/activate
cd py_virtual_env
docker build -t test-image .
docker images

Create ECR repositories and push an image

#create an ecr repository
aws ecr create-repository --repository-name lambda-python-lab \
--query 'repository.repositoryUri' --output text

#authenticate docker with the ecr registry
aws ecr get-login-password --region ap-northeast-1 | docker login --username AWS --password-stdin xxxxxxxxxxxx.dkr.ecr.ap-northeast-1.amazonaws.com

#tag image
docker tag test-image:latest xxxxxxxxxxxx.dkr.ecr.ap-northeast-1.amazonaws.com/lambda-python-lab:latest
docker images

#push the image to ecr repository
docker push xxxxxxxxxxxx.dkr.ecr.ap-northeast-1.amazonaws.com/lambda-python-lab:latest

#check image
aws ecr list-images --repository-name lambda-python-lab

Create a lambda from the ECR image

Create a lambda function from the ECR image. In the function's general configuration, review the timeout and memory settings, and set the environment variables the app expects (BUCKET_NAME, TARGET_FOLDER, and FILENAME).

Test lambda function on local

To test the lambda function locally, run the following:

docker run --name test-lambda -v /Users/hieudang/.aws:/root/.aws \
  -e BUCKET_NAME=hieu320231129 \
  -e TARGET_FOLDER=TRANSFORMED \
  -e FILENAME=2022-06-05-0.json.gz \
  -d test-image
docker container list
docker exec -it test-lambda bash
python -c "import S3app;S3app.lambda_ingest(None, None)"

Test lambda function on AWS

After deploying, invoke the function from the Lambda console's Test tab and verify that the file lands in the S3 bucket and is transformed to Parquet format.

Conclusion

These steps provide an example of creating a lambda function and running it in Docker: we build the Docker image, push it to an ECR repository, and then create a lambda function from that repository. The specific configuration details may vary depending on your environment and setup. It's recommended to consult the relevant AWS documentation for detailed setup instructions. I hope you found this helpful. Thank you for reading the DevopsRoles page!

Refer

https://docs.aws.amazon.com/lambda/latest/dg/python-package.html#python-package-create-no-dependencies

App source code: https://github.com/itversity/ghactivity-aws

Amazon DocumentDB

Introduction

In this tutorial, you will create an Amazon DocumentDB cluster and operate on it using CLI commands. For more information about Amazon DocumentDB, see the Amazon DocumentDB Developer Guide.

Prerequisites

Before starting, you should have the following prerequisites configured:

  • An AWS account
  • AWS CLI on your computer

Amazon DocumentDB tutorial

  • Create an Amazon DocumentDB cluster using AWS CLI
  • Adding an Amazon DocumentDB instance to a cluster using AWS CLI
  • Describing Clusters and Instances using AWS CLI
  • Install the mongo shell on macOS
  • Connecting to Amazon DocumentDB
  • Performing Amazon DocumentDB CRUD operations using Mongo Shell
  • Performing Amazon DocumentDB CRUD operations using python
  • Adding a Replica to an Amazon DocumentDB Cluster using AWS CLI
  • Amazon DocumentDB High Availability Failover using AWS CLI
  • Creating an Amazon DocumentDB global cluster using AWS CLI
  • Delete an Instance from a Cluster using AWS CLI
  • Delete an Amazon DocumentDB global cluster using AWS CLI
  • Removing Global Clusters using AWS CLI

Create an Amazon DocumentDB cluster using AWS CLI

Before you begin, if you have not installed the AWS CLI, see the AWS CLI setup documentation. This tutorial uses the us-east-1 region.

Now we're ready to launch an Amazon DocumentDB cluster by using the AWS CLI.

An Amazon DocumentDB cluster consists of instances and a cluster volume that represents the data for the cluster. The cluster volume is replicated six ways across three Availability Zones as a single, virtual volume. The cluster contains a primary instance and, optionally, up to 15 replica instances. 

The following sections show how to create an Amazon DocumentDB cluster using the AWS CLI. You can then add additional replica instances for that cluster.

  • When you use the console to create your Amazon DocumentDB cluster, a primary instance is automatically created for you at the same time.
  • When you use the AWS CLI to create your Amazon DocumentDB cluster, after the cluster’s status is available, you must then create the primary instance for that cluster.

The following procedures describe how to use the AWS CLI to launch an Amazon DocumentDB cluster and create an Amazon DocumentDB replica.

To create an Amazon DocumentDB cluster, call the create-db-cluster AWS CLI command.

aws docdb create-db-cluster \
      --db-cluster-identifier sample-cluster \
      --engine docdb \
      --engine-version 5.0.0 \
      --master-username masteruser \
      --master-user-password masteruser123

If the db-subnet-group-name or vpc-security-group-id parameter is not specified, Amazon DocumentDB uses the default subnet group and Amazon VPC security group for the given region.

It takes several minutes to create the cluster. You can use the following AWS CLI command to monitor the status of your cluster:

aws docdb describe-db-clusters \
--filter Name=engine,Values=docdb \
--db-cluster-identifier sample-cluster \
--query 'DBClusters[*].Status'

Adding an Amazon DocumentDB instance to a cluster using AWS CLI

Use the create-db-instance AWS CLI operation with the following parameters to create the primary instance for your cluster.

You can choose an instance class from the results of the following command:

aws docdb describe-orderable-db-instance-options --engine docdb --query 'OrderableDBInstanceOptions[*].DBInstanceClass'

aws docdb create-db-instance \
--db-cluster-identifier sample-cluster \
--db-instance-identifier primary-instance \
--db-instance-class db.t3.medium \
--engine docdb

The following AWS CLI command lists the details for Amazon DocumentDB instances in a region.

aws docdb describe-db-instances --db-instance-identifier primary-instance

Describing Clusters and Instances using AWS CLI

To view the details of your Amazon DocumentDB clusters using the AWS CLI, use the describe-db-clusters command. 

The following AWS CLI command lists the Amazon DocumentDB cluster's identifier, status, and endpoint.

aws docdb describe-db-clusters --db-cluster-identifier sample-cluster --query 'DBClusters[*].[DBClusterIdentifier,Status,Endpoint]'

Install the mongo shell on macOS

Install the mongo shell with the following command:

brew tap mongodb/brew
brew install mongosh

To encrypt data in transit, download the public key for Amazon DocumentDB. The following command downloads a file named global-bundle.pem:

cd Downloads
curl -O https://truststore.pki.rds.amazonaws.com/global/global-bundle.pem

You must explicitly grant inbound access to your client in order to connect to the cluster. When you created a cluster in the previous step, because you did not specify a security group, you associated the default cluster security group with the cluster.

The default cluster security group contains no rules to authorize any inbound traffic to the cluster. To access the new cluster, you must add rules for inbound traffic, which are called ingress rules, to the cluster security group. If you are accessing your cluster from the Internet, you will need to authorize a Classless Inter-Domain Routing IP (CIDR/IP) address range.

Run the following commands to enable your computer to connect to your Amazon DocumentDB cluster. Then log in to your cluster using the mongo shell.

#get VpcSecurityGroupId
aws docdb describe-db-clusters --db-cluster-identifier sample-cluster --query 'DBClusters[*].[VpcSecurityGroups]'

#allow connect to DocumentDB cluster from my computer
aws ec2 authorize-security-group-ingress --group-id sg-083f2ca0560111a3b --protocol tcp --port 27017 --cidr 111.111.111.111/32

Connecting to Amazon DocumentDB

Run the following commands to connect to the Amazon DocumentDB cluster:

docdbEndpoint=sample-cluster.cluster-cy1qzrkhqwpp.us-east-1.docdb.amazonaws.com:27017
docdbUser=masteruser
docdbPass=masteruser123
mongosh --tls --host $docdbEndpoint --tlsCAFile global-bundle.pem --username $docdbUser --password $docdbPass

Use the command below to view the available databases in your Amazon DocumentDB cluster:

show dbs

Performing Amazon DocumentDB CRUD operations using Mongo Shell

MongoDB database concepts:

  • A record in MongoDB is a document, which is a data structure composed of field and value pairs, similar to JSON objects. The value of a field can include other documents, arrays, and arrays of documents. A document is roughly equivalent to a row in a relational database table.
  • A collection in MongoDB is a group of documents and is roughly equivalent to a relational database table.
  • A database in MongoDB is a group of collections and is similar to a relational database with a group of related tables.

To show the current database name, run:

db

To create a database in Amazon DocumentDB, execute the use command, specifying a database name. Create a new database called docdbdemo.

use docdbdemo

When you create a new database in Amazon DocumentDB, there are no collections created for you. You can see this on your cluster by running the following command.

show collections

Creating Documents

You will now insert a document into a new collection called products in your docdbdemo database using the query below.

db.products.insert({
  "name":"java cookbook",
  "sku":"222222",
  "description":"Problems and Solutions for Java Developers",
  "price":200
})

You can insert multiple documents in a single batch to bulk load products. Use the insertMany command below. 

db.products.insertMany([
{
  "name":"Python3 boto",
  "sku":"222223",
  "description":"basic boto3 and python for everyone",
  "price":100
},
{
  "name":"C# Programmer's Handbook",
  "sku":"222224",
  "description":"complete coverage of features of C#",
  "price":100
}
])

Reading Documents

Use the query below to read data inserted into Amazon DocumentDB. The find command takes filter criteria and returns the documents matching the criteria. The pretty command is appended to display the results in an easy-to-read format.

db.products.find({"sku":"222223"}).pretty()

The matched document is returned as the output of the above query.

Use the find() command to return all the documents in the products collection. Input the following:

db.products.find().pretty()

Updating Documents

You will now update a document to add reviews using the $set operator with the update command. Reviews is a new array containing review and rating fields.

db.products.update(
  {
    "sku":"222223"
  },
  {
    $set:{    
      "reviews":[
        {
          "rating":4,
          "review":"perfect book"
        },
        {
          "rating":4.5,
          "review":"very good"
        },
        {
          "rating":5,
          "review":"Just love it"
        }
      ]
    }
  }
)

The output indicates the number of documents that were matched, upserted, and modified.

You can read the document modified above to ensure that the changes are applied.

db.products.find({"sku":"222223"}).pretty()

Deleting Documents

You can delete a document using the below code.

db.products.remove({"sku":"222224"})

Performing Amazon DocumentDB CRUD operations using Python

Prerequisites

Before starting, you should have the pymongo driver installed. To install pymongo, execute the following command on macOS:

pip3 install pymongo

Edit sample_python_documentdb.py

Open sample_python_documentdb.py and edit the variables with your current DocumentDB values:

username = "masteruser"
password = "masteruser@123"
clusterendpoint = "your_endpoint:27017"
tlsCAFile = "global-bundle.pem"

Execute the Python file

Execute the code and examine the output:

python3 sample_python_documentdb.py

Adding a Replica to an Amazon DocumentDB Cluster using AWS CLI

To add an instance to your Amazon DocumentDB cluster, run the following command

aws docdb create-db-instance \
       --db-cluster-identifier sample-cluster \
       --db-instance-identifier instance-2 \
       --availability-zone us-east-1b \
       --promotion-tier 1 \
       --db-instance-class db.t3.medium \
       --engine docdb

The following example returns the DBClusterIdentifier and DBInstanceIdentifier values for sample-cluster:

aws docdb describe-db-clusters \
    --db-cluster-identifier sample-cluster \
    --query 'DBClusters[*].[DBClusterIdentifier,DBClusterMembers[*].DBInstanceIdentifier]'

Amazon DocumentDB High Availability Failover using AWS CLI

A failover for a cluster promotes one of the Amazon DocumentDB replicas (read-only instances) in the cluster to be the primary instance (the cluster writer). When the primary instance fails, Amazon DocumentDB automatically fails over to an Amazon DocumentDB replica.

The following operation forces a failover of the sample-cluster cluster. 

aws docdb failover-db-cluster --db-cluster-identifier sample-cluster

Creating an Amazon DocumentDB global cluster using AWS CLI

To create an Amazon DocumentDB global cluster, call the create-global-cluster AWS CLI command. The following command creates a global cluster named global-cluster-id from an existing cluster:

aws docdb create-global-cluster \
--global-cluster-identifier global-cluster-id \
--source-db-cluster-identifier arn:aws:rds:us-east-1:111122223333:cluster-id

A global cluster needs at least one secondary cluster in a different region than the primary cluster, and you can add up to five secondary clusters. 

I’m not adding a secondary cluster in this tutorial, but to add an AWS Region to an Amazon DocumentDB global cluster you can use:

aws docdb --region us-east-2 \
  create-db-cluster \
    --db-cluster-identifier cluster-id \
    --global-cluster-identifier global-cluster-id \
    --engine-version version

aws docdb --region us-east-2 \
  create-db-instance \
    --db-cluster-identifier cluster-id \
    --global-cluster-identifier global-cluster-id \
    --engine-version version \
    --engine docdb
      

Delete an Instance from a Cluster using AWS CLI

The following command deletes an Amazon DocumentDB instance using the AWS CLI.

aws docdb delete-db-instance --db-instance-identifier instance-2

Removing Global Clusters using AWS CLI

You can’t delete the global cluster until after you detach all associated clusters, leaving the primary for last. 

To remove a cluster from a global cluster, run the remove-from-global-cluster CLI command with the following parameters:

  • --global-cluster-identifier — The name (identifier) of your global cluster.
  • --db-cluster-identifier — The name of each cluster to remove from the global cluster.

Example:

aws docdb --region secondary_region \
  remove-from-global-cluster \
    --db-cluster-identifier secondary_cluster_ARN \
    --global-cluster-identifier global_cluster_id

aws docdb --region primary_region \
  remove-from-global-cluster \
    --db-cluster-identifier primary_cluster_ARN \
    --global-cluster-identifier global_cluster_id

In my case

aws docdb remove-from-global-cluster --db-cluster-identifier arn:aws:rds:us-east-1:111122223333:cluster-id --global-cluster-identifier global_cluster_id

To delete a global cluster, run the delete-global-cluster CLI command with the name of the AWS Region and the global cluster identifier, as shown in the following example.

aws docdb --region us-east-1 delete-global-cluster \
--global-cluster-identifier global_cluster_id

If you also want to delete the cluster, run the following commands.

#list instance
aws docdb describe-db-clusters \
--db-cluster-identifier sample-cluster \
--query 'DBClusters[].[DBClusterIdentifier,DBClusterMembers[].DBInstanceIdentifier]'

#delete instance
aws docdb delete-db-instance \
    --db-instance-identifier sample-instance

#delete cluster
aws docdb delete-db-cluster \
    --db-cluster-identifier sample-cluster \
    --skip-final-snapshot

Conclusion

These steps provide an example of managing an Amazon DocumentDB cluster. The specific configuration details may vary depending on your environment and setup. It's recommended to consult the relevant AWS documentation for detailed setup instructions. I hope you found this helpful. Thank you for reading the DevopsRoles page!

Refer

https://docs.aws.amazon.com/documentdb/latest/developerguide/what-is.html

https://catalog.us-east-1.prod.workshops.aws/workshops/464d6c17-9faa-4fef-ac9f-dd49610174d3/en-US

https://docs.aws.amazon.com/lambda/latest/dg/with-documentdb-tutorial.html#docdb-prerequisites

AWS Lambda function with requests module

Introduction

In this tutorial, we will create an AWS Lambda function that uses the requests module, then create a .zip deployment package containing the dependencies.

Prerequisites

Before starting, you should have the following prerequisites configured:

  • An AWS account
  • AWS CLI on your computer

Walkthrough

  • Create the deployment package
  • Create AWS Lambda function with requests module

Create the deployment package

Navigate to the project directory containing your lambda_function.py source code file. In this example, the directory is named my_function.

mkdir my_function
cd my_function
ls -alt

Install “requests” dependencies in the my_function directory.

pip3 install requests --target .

Create the lambda_function.py source code file. This sample uses region_name="ap-northeast-1" (the Tokyo region).

import boto3
import requests

def lambda_handler(event, context):
    file_name = "2023-11-26-0.json.gz"
    bucket_name = "hieu320231129"
    print(f'Getting the {file_name} from gharchive')
    res = requests.get(f'https://data.gharchive.org/{file_name}')
    print(f'Uploading {file_name} to s3 under s3://{bucket_name}')
    s3_client = boto3.client('s3', region_name="ap-northeast-1")
    upload_res = s3_client.put_object(
        Bucket=bucket_name,
        Key=file_name,
        Body=res.content
    )

    objects = s3_client.list_objects(Bucket=bucket_name)['Contents']
    objectname= []
    for obj in objects:
        objectname.append(obj['Key'])

    return {
        'object_names': objectname,
        'status_code': '200'
    }

Create a .zip file with the installed libraries and lambda source code file.

zip -r ../my_function.zip .
cd ..
ls -alt

Test the lambda function from your local computer.

# check bucket
aws s3 ls s3://hieu320231129/ --recursive

#invoke lambda function from local
python3 -c "import lambda_function;lambda_function.lambda_handler(None, None)"

#delete uploaded file
aws s3 rm s3://hieu320231129/ --recursive

Create AWS Lambda function with requests module

Deploy the .zip file to a new Lambda function using the Python 3.11 runtime, then adjust the function's environment settings.
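As a scripted alternative to the console, the function can be created from the .zip file with boto3. This is a sketch: the function name is arbitrary and the role ARN is a placeholder that must point to an execution role with S3 access:

import boto3

lambda_client = boto3.client('lambda', region_name='ap-northeast-1')

# Create the function from the local .zip deployment package
with open('my_function.zip', 'rb') as f:
    lambda_client.create_function(
        FunctionName='my_function',
        Runtime='python3.11',
        Handler='lambda_function.lambda_handler',
        Role='arn:aws:iam::111122223333:role/lambda-execution-role',  # placeholder ARN
        Code={'ZipFile': f.read()},
    )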

Change environment setting
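Downloading the archive and uploading it to S3 can exceed Lambda's default 3-second timeout, so raising the timeout and memory is a sensible adjustment. A sketch (the exact values depend on your workload):

import boto3

lambda_client = boto3.client('lambda', region_name='ap-northeast-1')

# Give the function more time and memory for the download/upload work
lambda_client.update_function_configuration(
    FunctionName='my_function',
    Timeout=300,
    MemorySize=512,
)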

Test lambda function with requests module
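You can invoke the deployed function and inspect the returned object names. A sketch, using the function name assumed above:

import boto3
import json

lambda_client = boto3.client('lambda', region_name='ap-northeast-1')

# Invoke the function synchronously and print its JSON response
response = lambda_client.invoke(FunctionName='my_function', Payload=b'{}')
print(json.loads(response['Payload'].read()))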

Conclusion

These steps provide an example of creating a lambda function with dependencies. I used the requests module to read a file from a website and put it into an AWS S3 bucket. The specific configuration details may vary depending on your environment and setup. It's recommended to consult the relevant AWS documentation for detailed setup instructions. I hope you found this helpful. Thank you for reading the DevopsRoles page!

Refer

https://docs.aws.amazon.com/lambda/latest/dg/python-package.html

A Comprehensive Guide to Validating a Kubernetes Cluster Installed Using Kubeadm

Introduction

When setting up a Kubernetes cluster using Kubeadm, it's essential to validate the installation to ensure everything is functioning correctly. In this blog post, we will guide you through the steps to validate a Kubernetes cluster installed using Kubeadm and Kubectl.

Learn how to validate your Kubernetes cluster installation using Kubeadm and ensure smooth operations. Follow our step-by-step guide for easy validation.

This guide assumes you have already installed Kubernetes, as described in Installing Kubernetes using Kubeadm on Ubuntu: A Step-by-Step Guide.

Validating a Kubernetes Cluster Installed Using Kubeadm: Step-by-Step Guide

Validating CMD Tools: Kubeadm & Kubectl

First, let’s check the versions of Kubeadm and Kubectl to ensure they match your cluster setup.

Checking “kubeadm” version

kubeadm version

Checking “kubectl” version

kubectl version

Make sure the versions of Kubeadm and Kubectl are compatible with your Kubernetes cluster.

Validating Cluster Nodes

Next, we need to ensure that all nodes in the cluster, including both Master and Worker nodes, are in the “Ready” state.

To check the status of all nodes:

kubectl get nodes
kubectl get nodes -o wide

This command will display a list of all nodes in the cluster along with their status. Ensure that all nodes are marked as “Ready.”

Validating Kubernetes Components

It’s crucial to verify that all Kubernetes components on the Master node are running correctly.

To check the status of Kubernetes components:

kubectl get pods -n kube-system
kubectl get pods -n kube-system -o wide

This command will show the status of various Kubernetes components in the kube-system namespace. Ensure that all components are in the “Running” state.

Validating Services: Docker & Kubelet

To ensure the proper functioning of your cluster, we need to validate the services Docker and Kubelet on all nodes.

Checking Docker service status

systemctl status docker

This command will display the status of the Docker service. Ensure that it is “Active” and running without any errors.

Checking Kubelet service status

systemctl status kubelet

This command will show the status of the Kubelet service. Verify that it is “Active” and running correctly.

Deploying Test Deployment

To further validate your cluster, let’s deploy a sample Nginx deployment and check its status.

Deploying the sample “nginx” deployment:

kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml

This command will create the Nginx deployment in your cluster.

Validate the deployment:

kubectl get deploy
kubectl get deploy -o wide

These commands will display the status of the Nginx deployment, including the number of replicas and the desired and current states.

Check if the pods are in the “Running” state:

kubectl get pods
kubectl get pods -o wide

Make sure all pods are running without any errors.

Verify that containers are running on the respective worker nodes:

docker ps

This command will show the running containers on each worker node. Ensure that the Nginx containers are running as expected.

Delete the deployment:

kubectl delete -f https://k8s.io/examples/controllers/nginx-deployment.yaml

This command will delete the Nginx deployment from your cluster.

Conclusion

By following these steps, you can validate your Kubernetes cluster installation using Kubeadm and Kubectl. It's essential to ensure that all the components, services, and deployments are running correctly to have a reliable and stable Kubernetes environment. I hope you found this helpful. Thank you for reading the DevopsRoles page!

Installing Kubernetes using Kubeadm on Ubuntu: A Step-by-Step Guide

Introduction

Kubernetes has emerged as the go-to solution for container orchestration and management. If you’re looking to set up a Kubernetes cluster on a Ubuntu server, you’re in the right place. In this step-by-step guide, we’ll walk you through the process of installing Kubernetes using Kubeadm on Ubuntu.

Prerequisites

I have created 3 VMs for the Kubernetes cluster nodes on Google Compute Engine (GCE):

  • Master(1): 2 vCPUs – 4GB Ram
  • Worker(2): 2 vCPUs – 2GB RAM
  • OS: Ubuntu 16.04 or CentOS/RHEL 7

I have configured ingress firewall rules in Google Compute Engine (GCE):

  • Master Node: 2379,6443,10250,10251,10252
  • Worker Node: 10250,30000-32767

Installing Kubernetes using Kubeadm on Ubuntu

Set hostname on Each Node

# hostnamectl set-hostname "k8s-master"    // For Master node
# hostnamectl set-hostname "k8s-worker1"   // For 1st worker node
# hostnamectl set-hostname "k8s-worker2"   // For 2nd worker node

Add the following entries to the /etc/hosts file on each node:

192.168.1.14   k8s-master
192.168.1.16   k8s-worker1
192.168.1.17   k8s-worker2

Disable Swap and Bridge Traffic

Kubernetes does not work well with swap enabled. Run the following on MASTER & WORKER nodes.

Disable SWAP

# swapoff -a
# sed -i.bak -r 's/(.+ swap .+)/#\1/' /etc/fstab

Load the following kernel modules on all the nodes:

# tee /etc/modules-load.d/containerd.conf <<EOF
overlay
br_netfilter
EOF
# modprobe overlay
# modprobe br_netfilter

Set the following kernel parameters for Kubernetes by running the tee command below:

# tee /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

Reload the above changes

# sysctl --system

Installing Docker

Run it on MASTER & WORKER Nodes. Kubernetes requires a container runtime, and Docker is a popular choice. To install Docker, run the following commands:

apt-get update  
apt-get install -y  apt-transport-https ca-certificates curl software-properties-common gnupg2

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) \
  stable"

Installing Docker

apt-get update && sudo apt-get install \
  containerd.io=1.6.24-1 \
  docker-ce=5:20.10.24~3-0~ubuntu-$(lsb_release -cs) \
  docker-ce-cli=5:20.10.24~3-0~ubuntu-$(lsb_release -cs)

Setting up the Docker daemon

cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

mkdir -p /etc/systemd/system/docker.service.d

Start and enable the Docker service

systemctl daemon-reload
systemctl enable docker
systemctl restart docker
systemctl status docker

Install Kubeadm, Kubelet, and Kubectl

Add the Kubernetes repository and install Kubeadm, Kubelet, and Kubectl

apt-get update && sudo apt-get install -y apt-transport-https curl

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF

Installing Kubeadm, Kubelet, Kubectl

apt-get update
apt-get install -y kubelet kubeadm kubectl

apt-mark hold kubelet kubeadm kubectl

Start and enable Kubelet

systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet
systemctl status kubelet

Initializing CONTROL-PLANE

Run it on MASTER Node only. On your master node, initialize the Kubernetes cluster with the command below:

kubeadm init

Make note of the kubeadm join command that’s provided at the end; you’ll need it to join worker nodes.

Installing POD-NETWORK add-on

Run it on MASTER Node only

For kubectl

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

Installing “Weave CNI” (Pod-Network add-on)

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

NOTE: There are multiple CNI plug-ins available, and you can install the one of your choice. If the above command doesn't work, check the Kubernetes documentation on network add-ons for more info.

Joining Worker Nodes

Run it on WORKER Node only

On your worker nodes, use the kubeadm join command from the kubeadm init output above to join them to the cluster.

kubeadm join <...>

Run this command if you do not have the join command above or need to create a new one.

kubeadm token create --print-join-command

Verify the Cluster

On the master node, ensure your cluster is up and running

kubectl get nodes

You should see the master node marked as “Ready” and any joined worker nodes.

Conclusion

Congratulations! You’ve successfully installed Kubernetes using Kubeadm on Ubuntu. With your Kubernetes cluster up and running, you’re ready to deploy and manage containerized applications and services at scale.

Kubernetes offers vast capabilities for container orchestration, scaling, and management. As you become more familiar with Kubernetes, you can explore advanced configurations and features to optimize your containerized environment.

I hope you found this helpful. Thank you for reading the DevopsRoles page!

HISTCONTROL ignorespace Force history in Linux

Introduction

This article shows how to force history not to remember a particular command using HISTCONTROL ignorespace in Linux. When executing a command, you can set HISTCONTROL to ignorespace and precede the command with a space to ensure it is ignored in your command history.

This might be tempting for junior sysadmins seeking discretion, but it’s essential to grasp how ignorespace functions. As a best practice, it’s generally discouraged to purposefully hide commands from your history, as transparency and accountability are crucial in system administration and troubleshooting.

What is HISTCONTROL?

HISTCONTROL is an environment variable in Linux that defines how your command history is managed. It allows you to specify which commands should be recorded in your history and which should be excluded. This can help you maintain a cleaner and more efficient command history.

ignorespace – An Option for HISTCONTROL

One of the settings you can use with HISTCONTROL is ignorespace. When ignorespace is included in the value of HISTCONTROL, any command line that begins with a space character will not be recorded in your command history. This can be incredibly handy for preventing sensitive information, such as passwords, from being stored in your history.

Working with HISTCONTROL ignorespace

Step 1: Check Your Current HISTCONTROL Setting

Before you start using HISTCONTROL with ignorespace, it’s a good idea to check your current HISTCONTROL setting. Open a terminal and run the following command:

echo $HISTCONTROL

This will display your current HISTCONTROL setting. If it’s empty or doesn’t include ignorespace, you can proceed to the next step.

Step 2: Set HISTCONTROL to ignorespace

To enable ignorespace in your HISTCONTROL, you can add the following line to your shell configuration file (e.g., ~/.bashrc for Bash users):

export HISTCONTROL=ignorespace

After making this change, be sure to reload your shell configuration or start a new terminal session for the changes to take effect.

Step 3: Test ignorespace

Now that you’ve set HISTCONTROL to ignorespace, you can test its functionality. Try entering a command with a leading space, like this:

 ls -l

Notice that the space at the beginning of the command is intentional. This command will not be recorded in your command history because of the ignorespace setting.

Step 4: Verify Your Command History

To verify that the command you just entered is not in your history, you can display your command history using the history command:

history

Conclusion

Utilizing HISTCONTROL with ignorespace empowers you to better manage your Linux command history. This feature proves especially useful for excluding commands with sensitive data or temporary experiments. Understanding and harnessing HISTCONTROL and its options, like ignorespace, enhances both the efficiency and security of your Linux command-line experience.

Remember that these settings are user-specific, so individual configuration is necessary for each user on a multi-user system. Armed with this knowledge, you can exercise greater control over your command history and enhance your overall command-line efficiency in Linux. You can now force history not to remember a particular command using HISTCONTROL ignorespace.

Dive: view the contents of Docker images

Introduction

How do you view the contents of Docker images? “Dive” is a command-line tool for exploring and analyzing Docker images. It allows you to inspect the contents of a Docker image, view its layers, and understand the file structure and sizes within those layers.

This tool can be helpful for optimizing Docker images and gaining insights into their composition. Dive: A Simple App for Viewing the Contents of a Docker Image.

On macOS, Dive can be installed with Homebrew; on Windows, it can be installed with a downloaded installer file for the OS.

What You’ll Need

  • Dive: You’ll need to install the Dive tool on your system to use it.
  • Docker: Dive works with Docker images, so you should have Docker installed on your system to pull and work with Docker images. For example, install docker on Ubuntu here.

Installing Dive

To install Dive, you can use package managers like Homebrew (on macOS) or download the binary from the Dive GitHub repository.

Using Homebrew (on macOS)

brew install dive

Downloading the binary

You can visit the Dive GitHub repository (dive) and download the binary for your platform from the “Releases” section. The following example installs Dive on Ubuntu:

$ export DIVE_VERSION=$(curl -sL "https://api.github.com/repos/wagoodman/dive/releases/latest" | grep '"tag_name":' | sed -E 's/.*"v([^"]+)".*/\1/')
$ curl -OL https://github.com/wagoodman/dive/releases/download/v${DIVE_VERSION}/dive_${DIVE_VERSION}_linux_amd64.deb
$ sudo apt install ./dive_${DIVE_VERSION}_linux_amd64.deb

Using Dive

Once you have Dive installed, you can use it to view the contents of a Docker image as follows:

  1. Open your terminal or command prompt.
  2. Run dive <image>, replacing <image> with the name or ID of the Docker image you want to inspect.
  3. Dive will launch a text-based interface that allows you to navigate through the layers of the Docker image. You can explore the file structure, check the sizes of individual layers, and gain insights into the image’s contents.

View the contents of docker images

To examine the latest Alpine Docker image

dive alpine:latest

You can specify a different image source using the --source option:

dive IMAGE --source SOURCE

SOURCE is the location of the repository.

The features of Dive

  • Layer Visualization: Dive provides a visual representation of a Docker image’s layers, showing how they are stacked on top of each other.
  • Layer Size Information: Dive displays the size of each individual layer in the Docker image.
  • File and Directory Listing: You can navigate through the contents of each layer and view the files and directories it contains.
  • Image Efficiency Analysis: Dive helps you identify inefficiencies in your Docker images.
  • Image Build Context Analysis: Dive can analyze the build context of a Docker image.
  • Image Diffing: Dive allows you to compare two Docker images and see the differences between them.

Conclusion

Dive is a powerful tool for image analysis and optimization, and it can help you gain insights into what’s inside a Docker image. It’s particularly useful for identifying large files or unnecessary dependencies that can be removed to create smaller and more efficient Docker images.

You can view the contents of docker images using Dive.
