In this tutorial, we'll explore how to batch rename files in Linux so that the new filenames include dates. You can use the rename command, or a combination of find and mv, to batch rename files in a directory with new filenames containing dates.
Linux rename file with Dates
Today I want to share how to batch rename files in a directory to new filenames with dates in Linux, using just one command.
For example, in my directory, I have files named like this:
test123.txt
test456.txt
test789.txt
The result after running the command looks like this, where "20230805" is the execution date:
test123_20230805.txt
test456_20230805.txt
test789_20230805.txt
Batch Renaming Files with Date Appended
find . -name "test[0-9][0-9][0-9].txt" -type f -exec sh -c 'mv -f {} "$(dirname {})/$(basename {} .txt)_$(date +%Y%m%d).txt"' \;
This command searches for and renames files in the current directory (and subdirectories) with names in the format test###.txt (where ### represents three digits).
find . -name "test[0-9][0-9][0-9].txt": Searches for files with names matching the pattern test###.txt.
-type f: Only searches for files (not directories).
-exec sh -c '...' \;: Executes the shell command for each file found.
mv -f {}: Renames the file, overwriting an existing destination if necessary.
"$(dirname {})/$(basename {} .txt)_$(date +%Y%m%d).txt": Renames the file by removing the .txt extension, appending the current date (in YYYYMMDD format), and then adding back the .txt extension.
For example, test001.txt would be renamed to test001_20240812.txt (assuming the current date is August 12, 2024).
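If you'd rather avoid the quoting pitfalls of embedding {} inside sh -c, the same rename can be written as a small standalone script. This is a sketch that assumes the files live in the current directory:

```shell
#!/bin/sh
# Append today's date (YYYYMMDD) to every test###.txt file in the
# current directory, inserting it before the .txt extension.
stamp=$(date +%Y%m%d)
for f in test[0-9][0-9][0-9].txt; do
  [ -e "$f" ] || continue   # skip the unexpanded pattern when nothing matches
  mv -f "$f" "${f%.txt}_${stamp}.txt"
done
```

The `${f%.txt}` parameter expansion strips the extension, so no dirname/basename subshells are needed.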
Conclusion
Remember to back up your files before performing batch operations like this, just to be safe. Also, adjust the date format and naming convention to your requirements. I hope this will be helpful. Thank you for reading the DevopsRoles page!
Security-Enhanced Linux is a powerful security system that is enabled, by default, on most Linux distributions based on RHEL. Here are the general steps to configure SELinux for applications and services:
In this blog, we will explore the process of configuring SELinux to safeguard your applications, providing a detailed understanding of SELinux modes, Booleans, custom policies, and troubleshooting tips.
Take the Apache web server, for example. When installed on RHEL-based distributions, Apache defaults to /var/www/html as the document root and to ports 80 (for HTTP) and 443 (for HTTPS).
You might, however, want a nonstandard setup: a website might use /opt as its document root and listen on port 8080.
Out of the box, SELinux denies those nonstandard options, so they must be configured to work properly.
Configure SELinux for nonstandard configurations
You also need a user with sudo privileges.
Install Apache
First, install the Apache web server on a Linux distribution such as Rocky Linux, AlmaLinux, or RHEL:
sudo dnf install httpd
If SELinux is not installed on your system, install the necessary packages. The package names might vary depending on your Linux distribution. For example, on CentOS/RHEL systems, you can use:
sudo yum install policycoreutils selinux-policy selinux-policy-targeted
SELinux has three main modes: enforcing, permissive, and disabled. The enforcing mode enforces security policies, the permissive mode logs policy violations but does not block actions, and the disabled mode turns off SELinux.
For production use, you should typically set SELinux to enforcing mode. You can temporarily set it to permissive mode for debugging purposes.
Set SELinux modes
To check the current SELinux mode, run getenforce. To set the mode temporarily, use the setenforce command: sudo setenforce 1 for enforcing, or sudo setenforce 0 for permissive. To make the change permanent, edit the SELINUX= line in /etc/selinux/config and reboot.
If issues arise, review SELinux logs in /var/log/audit/audit.log and system logs to identify potential problems.
SELinux Booleans:
You can list available Booleans and their statuses using the semanage boolean -l or getsebool -a command. To change a Boolean value, use the setsebool command, for example: sudo setsebool -P httpd_can_network_connect on (the -P flag makes the change persist across reboots).
View SELinux Context:
You can view the SELinux context for a specific file or directory using the ls -Z command, for example: ls -Z /var/www/html.
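Returning to the nonstandard Apache example from the introduction (document root under /opt, listening on port 8080), the typical port and context adjustments look like the sketch below. Here /opt/site is a hypothetical path for illustration, and the semanage tool ships in the policycoreutils-python-utils package on recent RHEL-based systems:

```shell
# Allow Apache to bind to the nonstandard port 8080
sudo semanage port -a -t http_port_t -p tcp 8080

# Label the custom document root so Apache is allowed to read it
# (/opt/site is a placeholder for your actual directory)
sudo semanage fcontext -a -t httpd_sys_content_t "/opt/site(/.*)?"
sudo restorecon -Rv /opt/site

# Restart Apache to pick up the changes
sudo systemctl restart httpd
```

The semanage changes are persistent; restorecon applies the new label to files that already exist.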
Creating Custom SELinux Policies (Optional):
If adjusting Booleans is not enough, you can build a custom policy module. This involves using SELinux policy development tools like audit2allow and semodule to define the necessary rules, for example: sudo grep httpd /var/log/audit/audit.log | audit2allow -M myhttpd, followed by sudo semodule -i myhttpd.pp. Always review the generated rules before loading the module.
Conclusion
Incorporating SELinux into your Linux system’s security posture can significantly improve its resilience against cyber threats.
By following the steps outlined in this guide, you’ll be well-equipped to configure SELinux effectively for your applications and services, bolstering the overall security of your Linux environment.
Remember to continually monitor and update your SELinux configurations to keep up with evolving security challenges. I hope this will be helpful. Thank you for reading the DevopsRoles page!
How to share your Route 53 domains across AWS accounts via a Route 53 hosted zone. In this scenario, I already have a Production account that owns the domain, and I want to use this domain from a Test account for my other activities.
Share your Route 53 Domains across AWS accounts
To share your Route 53 domains across AWS accounts, you can follow these general steps:
Create a public hosted zone in the Test account: in the Test account, create a public hosted zone in Route 53 for the domain you want to use.
Create a record in the Production account: in the account that owns the domain, create a record in the Route 53 hosted zone for the domain you want to share.
Create a record in the Test account: in the Test account, create a record to route traffic to the ALB (Application Load Balancer).
Step by step: share your Route 53 domains across AWS accounts
Create a Public Hosted Zone in the Test account
After filling out the information, return to your Route 53 hosted zone and copy the four lines inside the NS record's value box; this is the nameserver information you will need in the next step.
Create a record in the Production account
Paste the four nameservers from your Test account's Route 53 hosted zone into the value list of the NS record.
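If you prefer scripting this step, the delegation record in the Production account can also be created with the AWS CLI. This is a sketch: the hosted zone ID, domain name, and the four nameserver values are placeholders you must replace with the values copied from your Test account:

```shell
# In the Production account: create an NS record that delegates
# the domain to the Test account's hosted zone.
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0EXAMPLEPROD \
  --change-batch '{
    "Changes": [{
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "test.example.com",
        "Type": "NS",
        "TTL": 300,
        "ResourceRecords": [
          {"Value": "ns-1.awsdns-01.org"},
          {"Value": "ns-2.awsdns-02.com"},
          {"Value": "ns-3.awsdns-03.net"},
          {"Value": "ns-4.awsdns-04.co.uk"}
        ]
      }
    }]
  }'
```

This requires credentials for the Production account with Route 53 write permissions.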
Create a record in the Test account
In the Test account, create a record to route traffic to ALB
Test page
Conclusion
These steps provide a general overview of the process. The specific configuration details may vary depending on your environment and setup. It's recommended to consult the relevant AWS documentation for detailed setup instructions. I hope this will be helpful. Thank you for reading the DevopsRoles page!
Reference: this lab uses a template by ChandraLingam.
How to master the curl command in Linux. In this article, we’ll explore the capabilities of curl and learn how to leverage its features to simplify your web-related operations.
The command line interface (CLI) is a powerful ally for Linux users, providing efficient ways to perform various tasks. Among the multitude of commands available, curl stands out as a versatile tool for making HTTP requests and interacting with different protocols.
What is the curl command?
curl is a command-line utility that allows you to transfer data to or from a server, supporting various protocols such as HTTP, HTTPS, FTP, SMTP, and more. With its extensive range of options, curl empowers users to send and receive data using different methods, customize headers, handle authentication, and automate web-related tasks.
Basic Usage
The syntax of curl is straightforward. It follows this general structure:
curl [options] [URL]
You can simply run curl followed by a URL to send a GET request and retrieve the content of a web page. For example:
curl https://devopsroles.com
How to Use Curl Command in Linux
Saving a File using the curl command
If you prefer to save the downloaded file under a different name, use the -o option followed by the desired filename, for example: curl -o index.html https://devopsroles.com
The following command uses curl to download a file from an FTP server using basic authentication:
curl -v -u demo:password -O ftp://test.devopsroles.com/readme.txt
Here's an explanation of the options used in the command:
-v: Enables verbose output, providing detailed information about the request and response.
-u demo:password: Specifies the username and password for basic authentication. Replace demo with the actual username and password with the corresponding password.
-O: Saves the downloaded file with its original name.
ftp://test.devopsroles.com/readme.txt: Specifies the FTP URL from which the file should be downloaded. Replace test.devopsroles.com with the actual FTP server address and readme.txt with the file name you want to download.
Testing If a Server Is Available or Not
curl -I https://test.devopsroles.com/
You can use it to send an HTTP HEAD request to the specified URL in order to retrieve the response headers.
-I: Specifies that curl should send an HTTP HEAD request instead of the default GET request. This means that only the response headers will be retrieved, and not the entire content of the page.
https://test.devopsroles.com/: Specifies the URL to which the request will be sent. Replace test.devopsroles.com with the actual domain or website you want to retrieve the headers from.
Check Server Response Time
To measure the total time taken for the HTTP request to the specified website, use the following command:
curl -w "%{time_total}\n" -o /dev/null devopsroles.com
When you run this command, curl will initiate an HTTP request to the specified website and measure the total time taken for the request.
-w "%{time_total}\n": Specifies a custom output format using the -w option. In this case, %{time_total} is a placeholder that represents the total time taken for the request, and \n is a newline character that adds a line break after the output. The total time is measured in seconds.
-o /dev/null: Redirects the response body to /dev/null, which is a special device file in Unix-like systems that discards any data written to it. By doing this, we discard the response body and only focus on measuring the time taken for the request.
devopsroles.com: Specifies the URL of the website to which the HTTP request will be sent. Replace devopsroles.com with the actual domain or website you want to measure the time for.
Saving Cookies with the curl command
To capture cookies sent by a server, use the --cookie-jar option, for example: curl --cookie-jar cookies.txt https://devopsroles.com. When you run this command, curl will establish an SSL/TLS-encrypted connection to the specified website and send an HTTP request. The response headers received from the server may include cookies, and curl will save these cookies to the specified cookies.txt file.
These cookies can be used for subsequent requests by providing the --cookie option and specifying the cookie file.
Setting User Agent Value with curl command
This command uses curl to make an HTTP request to the specified website while setting a custom User-Agent header.
curl --user-agent "Mozilla/4.73 [en] (X11; U; Linux 2.2.15 i686)" test.devopsroles.com
--user-agent "Mozilla/4.73 [en] (X11; U; Linux 2.2.15 i686)": Sets the User-Agent header in the HTTP request. In this case, the specified user-agent string is "Mozilla/4.73 [en] (X11; U; Linux 2.2.15 i686)". The User-Agent header provides information about the client or browser making the request. By setting a custom user-agent, you can emulate a specific browser or client behavior.
test.devopsroles.com: Specifies the URL of the website to which the HTTP request will be sent. Replace test.devopsroles.com with the actual domain or website you want to access.
Conclusion
The curl command is a powerful ally for Linux users, providing extensive capabilities to interact with web services and protocols. With its simple syntax and rich set of options, curl empowers you to perform a wide range of tasks, from retrieving web pages to sending data and handling authentication. By mastering curl, you can streamline your web-related operations and enhance your command line prowess.
Whether you’re a developer, sysadmin, or simply an avid Linux user, curl is a tool worth exploring. Its versatility and flexibility make it an invaluable asset for interacting with web services from the command line.
Creating files is a fundamental task in Linux, whether you are a system administrator, developer, or everyday user. Linux offers several methods to create files, giving you flexibility and convenience.
In this tutorial, we will explore different approaches to creating files in Linux, including command-line tools and text editors.
Creating Files in Linux
Method 1: Using the touch Command
The touch command is a versatile tool in Linux primarily used to update file timestamps. However, it can also create a new file if it doesn’t already exist.
To create a new file using the touch command, use the following syntax:
touch filename.txt
Replace filename.txt with the desired name and extension for your file. The touch command will create a new empty file with the specified name if it doesn’t already exist.
You can refer to the touch command in Linux with examples here.
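As a quick sanity check, you can try the command in a scratch directory (notes.txt is just an example name):

```shell
cd "$(mktemp -d)"   # work in a throwaway scratch directory
touch notes.txt     # creates an empty file because it doesn't exist yet
ls -l notes.txt     # shows a 0-byte file
touch notes.txt     # running it again only updates the timestamp
```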
Method 2: Using the echo Command and Output Redirection
Another method to create a file in Linux is by using the echo command in combination with output redirection.
To create a file and write content to it using echo, use the following syntax:
echo "Content" > filename.txt
Replace “Content” with the desired content you want to write and filename.txt with the desired name and extension for your file. The > symbol redirects the output of the echo command to the specified file.
You can refer to the echo command in Linux with Examples here.
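The difference between > (overwrite) and >> (append) is worth seeing once; here is a small demonstration using an example filename:

```shell
cd "$(mktemp -d)"               # throwaway scratch directory
echo "first line" > notes.txt   # > creates (or truncates) the file
echo "second line" >> notes.txt # >> appends instead of overwriting
echo "replaced" > notes.txt     # > truncates again, leaving one line
cat notes.txt                   # prints: replaced
```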
Method 3: Using a Text Editor
Linux provides various text editors that allow you to create and edit files. Some popular text editors include vi, vim, nano, and gedit. Using a text editor gives you more flexibility to create files and input content.
To create a file using a text editor, execute the respective command for the desired text editor, followed by the filename:
nano filename.txt
vi filename.txt
vim filename.txt
This command will open the specified text editor with a new file for editing. You can then start typing or paste existing content into the file. After making changes, save the file and exit the editor according to the editor’s instructions.
Conclusion
Creating files in Linux is a straightforward process with multiple methods at your disposal. The touch command allows you to quickly create empty files, while the echo command combined with output redirection lets you create files and populate them with content.
Additionally, text editors provide a more interactive approach to file creation, allowing you to input and edit content. Choose the method that suits your needs and preferences, and leverage the power of Linux to efficiently create and manage files on your system.
I hope this will be helpful. Thank you for reading the DevopsRoles page!
In this tutorial, we'll look at how to run OpenTelemetry on Docker. The OpenTelemetry project also provides demo services to help cloud-native community members better understand cloud-native development practices.
What is OpenTelemetry?
It is open-source.
It provides APIs, libraries, and tools for instrumenting, generating, collecting, and exporting telemetry data.
It aims to standardize and simplify the collection of observability data.
To use OpenTelemetry with Docker
You’ll need the following prerequisites:
Docker: Ensure that Docker is installed on your machine. Docker allows you to create and manage containers, which provide isolated environments to run your applications. You can download and install Docker from the official Docker website.
Docker Compose: Ensure that Docker Compose is installed on your machine.
At least 4 GB of RAM.
Once these prerequisites are in place, you can proceed with running OpenTelemetry within a Docker container.
Run OpenTelemetry on Docker
you can follow these steps:
Create a Dockerfile
Create a file named Dockerfile (without any file extension) in your project directory.
This file will define the Docker image configuration.
Open the Dockerfile in a text editor and add the following content:
FROM golang:latest
# Set the working directory
WORKDIR /app
# Copy the Go module files first and download dependencies
# (go.opentelemetry.io/otel should be declared in go.mod)
COPY go.mod go.sum ./
RUN go mod download
# Copy your application code to the container
COPY . .
# Build the application
RUN go build -o myapp
# Set the entry point
CMD ["./myapp"]
If you are not using Go, modify the Dockerfile according to your programming language and framework.
Build the Docker image
docker build -t myapp .
This command builds a Docker image named myapp based on the Dockerfile in the current directory.
The -t flag assigns a tag (name) to the image.
Run the Docker container
Once the image is built, you can run a container based on it using the following command:
docker run --rm myapp
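To give the application somewhere to send its telemetry, you can run the OpenTelemetry Collector alongside it. This is a sketch: the network name, the host config file otel-config.yaml, and the myapp image come from this tutorial's example and are assumptions to adapt:

```shell
# Shared network so the app can reach the collector by name
docker network create otel-net

# Run the Collector with your own configuration mounted in
docker run -d --name otel-collector --network otel-net \
  -p 4317:4317 \
  -v "$(pwd)/otel-config.yaml:/etc/otelcol/config.yaml" \
  otel/opentelemetry-collector:latest

# Point the app at the collector via the standard OTLP endpoint variable
docker run --rm --network otel-net \
  -e OTEL_EXPORTER_OTLP_ENDPOINT=http://otel-collector:4317 \
  myapp
```

Port 4317 is the conventional OTLP/gRPC port; your application's SDK must be configured to export over OTLP for this to work.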
MariaDB Galera Cluster is a robust solution that combines the power of MariaDB, an open-source relational database management system, with Galera Cluster, a synchronous multi-master replication plugin.
MariaDB, a popular MySQL fork, offers a feature-enhanced and community-driven alternative to MySQL.
Galera Cluster, on the other hand, is a sophisticated replication technology that operates in a synchronous manner. It allows multiple database nodes to work together as a cluster, ensuring that all nodes have an identical copy of the database.
Use Cases: MariaDB Galera Cluster
E-commerce platforms
Ensuring uninterrupted availability and consistent data across multiple locations.
Financial systems
Maintaining data integrity and eliminating single points of failure.
Mission-critical applications
Providing fault tolerance and high performance for critical business operations.
Key Features and Benefits: MariaDB Galera Cluster
High Availability
With MariaDB Galera Cluster, your database remains highly available even in the event of node failures. If one node becomes unreachable, the cluster automatically promotes another node as the new primary, ensuring continuous operation and minimal downtime.
Data Consistency
Galera Cluster’s synchronous replication ensures that data consistency is maintained across all nodes in real time. Each transaction is applied uniformly to every node, preventing any data discrepancies or conflicts.
Scalability and Load Balancing
By distributing the workload across multiple nodes, MariaDB Galera Cluster allows you to scale your database system horizontally. As your application grows, you can add additional nodes to handle increased traffic, providing enhanced performance and improved response times. Load balancing is inherent to the cluster setup, enabling efficient resource utilization.
Automatic Data Distribution
When you write data to the cluster, it is automatically synchronized across all nodes. This means that read queries can be executed on any node, promoting load balancing and reducing the load on individual nodes.
How to set up a MariaDB Galera Cluster using Docker
To set up the cluster, you can follow these steps:
Install Docker
If you haven't already, install Docker on your system. Refer to the official Docker documentation for installation instructions.
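As one possible version of the setup commands referenced below, here is a sketch that starts a two-node cluster with the Bitnami MariaDB Galera image. The container name node1 and the root password match the verification command in this article, but the image and its environment variable names are assumptions taken from Bitnami's documentation; check them against the image version you pull:

```shell
# Shared network for the cluster nodes
docker network create galera-net

# First node bootstraps the cluster
docker run -d --name node1 --network galera-net \
  -e MARIADB_GALERA_CLUSTER_BOOTSTRAP=yes \
  -e MARIADB_GALERA_CLUSTER_NAME=my_galera \
  -e MARIADB_ROOT_PASSWORD=my-secret-pw \
  bitnami/mariadb-galera:latest

# Second node joins the cluster through node1
docker run -d --name node2 --network galera-net \
  -e MARIADB_GALERA_CLUSTER_NAME=my_galera \
  -e MARIADB_GALERA_CLUSTER_ADDRESS=gcomm://node1 \
  -e MARIADB_ROOT_PASSWORD=my-secret-pw \
  bitnami/mariadb-galera:latest
```

Repeat the second command with new container names to add more nodes.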
In the above commands, adjust the image version, container names, network, and other environment variables according to your requirements.
Verify Cluster Status
Check the cluster status by executing the following command in any of the cluster nodes:
docker exec -it node1 mysql -uroot -pmy-secret-pw -e "SHOW STATUS LIKE 'wsrep_cluster_size'"
This command should display the number of nodes in the cluster.
Conclusion
You now have a MariaDB Galera Cluster set up using Docker. You can connect to any of the nodes using the appropriate connection details (e.g., hostname, port, username, password) and start using the cluster.
I hope this will be helpful. Thank you for reading the DevopsRoles page!
In this blog post, we’ll cover how to view and flush the DNS cache on Linux. Linux flush DNS cache can help resolve HTTP errors and safeguard against DNS spoofing attacks. Follow along to learn the steps for managing your DNS cache effectively.
What is the DNS cache on Linux?
DNS Cache on Linux refers to the stored records of DNS lookups that the system keeps locally. These records contain information about previously resolved domain names and their corresponding IP addresses. By caching this information, Linux can speed up subsequent DNS queries, reducing the time required to resolve domain names.
Flushing the DNS cache on Linux clears this stored information, forcing the system to perform new DNS lookups for subsequent queries. This can be useful for troubleshooting DNS-related issues or ensuring that the system retrieves the most up-to-date information from DNS servers.
Here are a few commonly used DNS resolvers on Linux:
systemd-resolved
dnsmasq
NetworkManager
BIND (Berkeley Internet Name Domain)
Unbound
pdnsd
Why Flush DNS Cache on Linux?
Flushing the DNS cache on Linux can be useful in several scenarios:
Flushing the DNS cache ensures that your Linux system fetches the latest DNS information from authoritative DNS servers
A Flush DNS cache allows your system to start with a clean cache and retrieve fresh DNS information.
Network configuration changes
Clearing the DNS cache can help protect your privacy and security.
How to View the Local DNS Cache on Linux
To view the local DNS cache on Linux, the method varies depending on the DNS resolver software in use.
For systemd-resolved, use systemd-resolve --statistics (or resolvectl statistics on newer systems) to check cache usage.
For dnsmasq, send the process a SIGUSR1 signal to make it dump cache statistics to the system log.
BIND users can dump the cache to a file with sudo rndc dumpdb -cache.
NetworkManager users can check which DNS servers are in use with nmcli dev show | grep DNS.
Familiarity with these methods aids in monitoring and troubleshooting DNS resolution for optimal system performance.
View DNS Cache for systemd-resolved
Send a SIGUSR1 signal to the systemd-resolved service, which makes it dump the contents of its cache to the system journal:
sudo killall -USR1 systemd-resolved
Then use the journalctl command and the standard output redirection operator to save the output to a text file:
sudo journalctl -u systemd-resolved > /tmp/cache.txt
Open the /tmp/cache.txt file with vim and search for "CACHE:" by pressing Escape, typing /CACHE:, and hitting Enter.
View the Local DNS Cache for nscd
To view the local DNS cache for nscd (Name Service Cache Daemon), you can follow these steps:
sudo strings /var/cache/nscd/hosts | uniq
This command lists the host entries currently stored in the nscd cache file.
To see statistics such as cache size, cache hits, and cache misses, which offer valuable insight into the performance and operation of the Name Service Cache Daemon, run:
sudo nscd --statistics
Use dnsmasq to display the DNS cache
dnsmasq has no command-line option to print its cache directly. Instead, send the running process a SIGUSR1 signal to make it dump cache statistics to the system log:
sudo killall -USR1 dnsmasq
Then check the log, for example with sudo journalctl -u dnsmasq or in /var/log/syslog.
Linux flush DNS cache
To view and flush the DNS cache on Linux, you can follow the steps below:
1. Open a terminal window. You can do this by pressing Ctrl+Alt+T on most Linux distributions.
2. To view the current contents of the DNS cache, use the following command:
sudo systemd-resolve --statistics
This command will display various statistics related to the DNS resolver, including the cache size, cache hits, and cache misses.
3. To flush the DNS cache, you need to restart the DNS resolver service. The method depends on your Linux distribution.
For Ubuntu 16.04 and later, Fedora, and CentOS 7 and later, you can use the following command:
sudo systemctl restart systemd-resolved.service
For Ubuntu 14.04 and earlier, and CentOS 6 and earlier, you can use the following command:
sudo /etc/init.d/nscd restart
After executing the appropriate command, the DNS cache will be flushed, and any previously cached DNS entries will be cleared.
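On systems running systemd-resolved, the cache can also be flushed directly, without restarting the service (supported on reasonably recent systemd versions):

```shell
# Flush all DNS caches kept by systemd-resolved
sudo resolvectl flush-caches

# Verify: "Current Cache Size" in the statistics should drop back to 0
resolvectl statistics
```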
Conclusion
Linux flush DNS cache can temporarily disrupt domain name resolution on your system, as it clears existing DNS data. However, this process ensures that your system fetches updated DNS information, enhancing accuracy and security in the long run. I hope this will be helpful. Thank you for reading the DevopsRoles page!
How to manage your Docker containers easily with lazydocker, a tool built to ease container management when you have multiple Docker containers spread across your system.
It is a terminal-based UI tool for managing Docker containers, images, volumes, networks, and more.
Its standout features include a user-friendly interface and a broad set of management functions.
To manage your Docker containers easily with lazydocker, you can follow these steps:
Install lazydocker
Prerequisites: Before proceeding, ensure that you have Docker installed on your system. lazydocker is designed to work with Docker, so it requires Docker to be installed and running.
Installation on Linux:
Open your terminal.
Execute the following command to download and install the binary (this is the install script from the project's GitHub repository):
curl https://raw.githubusercontent.com/jesseduffield/lazydocker/master/scripts/install_update_linux.sh | bash
Windows: Add the directory containing lazydocker.exe to your PATH. You can do this by following these steps:
Search for “Environment Variables” in the Start menu and open the “Edit the system environment variables” option.
Click the “Environment Variables” button.
In the “System Variables” section, select the “Path” variable and click “Edit.”
Add the directory path where lazydocker.exe is located to the list of paths, then click "OK" to save the changes.
Verify the installation:
Open a new terminal or PowerShell/command prompt window.
Execute the command lazydocker. If everything was installed correctly, lazydocker should launch and display the terminal-based GUI.
Launch lazydocker
Open your terminal or command prompt. Ensure that Docker is running on your system.
Enter the command lazydocker and press Enter.
The lazydocker application will start, and you'll see a terminal-based graphical user interface (GUI) on your screen.
If this is your first time running lazydocker, it may take a moment to set up and initialize the application.
Once it is launched, you can start managing your Docker containers, images, volumes, networks, and other Docker-related tasks using the interactive interface.
Use the arrow keys or Vim-like keybindings to navigate through the different sections of the GUI. The available sections typically include Containers, Images, Volumes, Networks, and Logs.
Select the desired section by highlighting it with the arrow keys and pressing Enter. For example, to view and manage your containers, select the “Containers” section.
Within each section, you’ll find relevant information and actions specific to the selected category. Use the arrow keys to scroll through the list of containers, images, volumes, etc.
Use the available keybindings or on-screen prompts to perform actions such as starting, stopping, or removing containers, building images, attaching to a container’s shell, viewing logs, etc.
Customize the interface and settings according to your preferences. You can refer to the documentation for details on how to modify keybindings, themes, and other configuration options.
To exit lazydocker, simply close the terminal window or press Ctrl + C in the terminal.
Explore the interface: Once it launches, you’ll see a terminal-based GUI with different sections and navigation options.
Navigate through containers: Use the arrow keys or Vim-like keybindings to move between different sections. Select the “Containers” section to view your running containers.
View container details: Within the containers section, you’ll see a list of your containers, including their names, statuses, IDs, and ports. Scroll through the list to find the container you want to manage.
Perform container actions: With the container selected, you can perform various actions on it. Use the available keybindings or the on-screen prompts to start, stop, restart, or remove the container. You can also attach it to a container’s shell to interact with it directly.
Manage container logs: select a container and open its logs view to follow output in real time. From the same interface you can start, stop, restart, or remove containers and attach to a container's shell, making these routine actions quick and convenient.
Explore other sections: lazydocker provides additional sections for managing images, volumes, networks, and Docker Compose services. Use the navigation options to explore these sections and perform corresponding actions.
Customize settings: lazydocker allows you to customize settings such as keybindings, themes, and container display options. Refer to the documentation for instructions on how to modify these settings according to your preferences.
Exit lazydocker: When you’re done managing your Docker containers, you can exit by pressing Ctrl + C in the terminal.
Conclusion:
lazydocker brings quick, convenient Docker container management to the terminal. Give it a try, and refer to the project's documentation for further information. I hope this will be helpful. Thank you for reading the DevopsRoles page!
How to log in to the AWS Management Console with IAM Identity Center (AWS SSO) via a self-managed directory in Active Directory.
To use IAM Identity Center (AWS SSO) with a self-managed directory in Active Directory to log in to the AWS Management Console, you can follow these general steps:
Set up an ADFS server: Install and configure ADFS on a Windows server that is joined to your Active Directory domain. This server will act as the identity provider (IdP) in the SAML authentication flow. To build the AD server, refer to the guide create AD with Windows Server 2012 R2.
Enable IAM Identity Center: follow this AWS user guide to configure IAM Identity Center
Choose your identity source: follow this AWS user guide. Self-managed directory in Active Directory using the below link
Create a two-way trust relationship (in my case): create a trust relationship between your AWS Managed Microsoft AD and your self-managed Active Directory domain. Alternatively, you can use AD Connector.
Attribute mappings and syncing your AD users to IAM Identity Center: follow the guided setup in the AWS user guide to configure attribute mappings and add the users and groups you want (choose users from the self-managed AD, not from the AWS-managed AD).
Create an administrative permission set: follow this guide to create a permission set that grants administrative permissions to your user.
Set up AWS account access for your administrative user: follow this guide to set up AWS account access for a user in IAM Identity Center.
Sign in to the AWS access portal with your administrative credentials: you can sign in to the AWS access portal by using the credentials of the administrative user that you have already configured.
Using AWS Management Console with IAM Identity Center(AWS SSO)
This lab walks through using the AWS Management Console with IAM Identity Center (AWS SSO) via a self-managed directory in Active Directory.
I tried the lab, so I’ll make a note for everyone:
If verification of the two-way trust relationship fails, you may have missed setting outbound rules for the AWS Managed Microsoft AD ENIs; configure the security group to allow the required traffic.
Enable the IAM Identity Center with the AWS account root user.
A single user can access multiple AWS accounts (same organization) from a self-managed directory.
Sign in to the AWS access portal with your username, not your user email.
Conclusion
Using IAM Identity Center(AWS SSO), a self-managed directory in Active Directory to log in to the AWS Management Console. These steps provide a general overview of the process. The specific configuration details may vary depending on your environment and setup.
It's recommended to consult the relevant documentation from AWS and Microsoft for detailed instructions on setting up the integration between your ADFS and AWS. I hope this will be helpful. Thank you for reading the DevopsRoles page!