Category Archives: AWS

Explore Amazon Web Services (AWS) at DevOpsRoles.com. Access in-depth tutorials and guides to master AWS for cloud computing and DevOps automation.

Manage the Aurora PostgreSQL global database

Introduction

You can use the AWS Management Console to manage the Aurora PostgreSQL global database. Alternatively, you can manage it using the AWS CLI on Linux (AWS Cloud9 in my lab), as shown below.

Guide to creating and managing the Aurora PostgreSQL global database using the AWS CLI.

This lab contains the following tasks:

Create an Aurora PostgreSQL global database from a Regional cluster using the AWS CLI

Add reader instances in the secondary Aurora DB cluster using the AWS CLI

Perform a managed planned failover to the secondary Region using the AWS CLI

Detach an Aurora secondary cluster from an Aurora global database cluster using the AWS CLI

Prerequisites

For this walkthrough, you should have the following prerequisites configured:

  • Amazon Aurora PostgreSQL cluster in a single region
  • AWS CLI environment deployed
  • Cluster parameter group, VPC security group, and DB subnet group deployed in both the primary and secondary Regions

Detailed Steps

Create an Aurora PostgreSQL global database from a Regional cluster using the AWS CLI

In the primary AWS Region, execute the following commands using the AWS CLI:

# Get current cluster ARN
CLUSTER_ID=`aws rds describe-db-clusters --db-cluster-identifier aupg-labs-cluster --query 'DBClusters[*].DBClusterArn' | jq -r '.[0]'`

# convert the Aurora Provisioned cluster to global
aws rds create-global-cluster  --global-cluster-identifier auroralab-postgres-global --source-db-cluster-identifier $CLUSTER_ID

This operation will take 2-5 minutes to complete. 
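
If you want to check progress from the CLI, you can poll the global cluster status until it becomes "available"; a minimal sketch, assuming the identifier used above:

# Optional: check the global cluster status
aws rds describe-global-clusters \
  --global-cluster-identifier auroralab-postgres-global \
  --query 'GlobalClusters[0].Status' --output text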

In the next step, perform the following actions using the AWS CLI to add a secondary Region.

# obtain the KMS key ID for the default RDS key in the secondary region
KMS_KEY_ID=`aws kms describe-key --key-id alias/aws/rds --region us-west-1 --query 'KeyMetadata.KeyId' --output text`

# create the secondary cluster in the secondary region (us-west-1)
aws rds create-db-cluster \
     --db-cluster-identifier auroralab-postgres-secondary \
     --global-cluster-identifier auroralab-postgres-global \
     --engine aurora-postgresql \
     --kms-key-id $KMS_KEY_ID \
     --engine-version 15.3 \
     --db-cluster-parameter-group-name rds-apgcustomclusterparamgroup \
     --db-subnet-group-name aupg-labs-db-subnet-group \
     --vpc-security-group-ids sg-0cdcd29e64fd436c6 \
     --backup-retention-period 7 \
     --region us-west-1

This operation will take 5-10 minutes to complete. 

Add reader instances in the secondary Aurora DB cluster using the AWS CLI

# Database Parameter group
DB_PARAMETER_GP=`aws rds describe-db-parameter-groups --region us-west-1 --query 'DBParameterGroups[*].DBParameterGroupName' | jq -r '.[0]'`

# Enhanced Monitoring role ARN
MONITOR_R=`aws iam get-role --role-name aupg-labs-monitor-us-west-2 --query 'Role.Arn' --output text`

# Add a Reader instance to the secondary Aurora DB cluster
aws rds --region us-west-1 \
  create-db-instance \
     --db-instance-identifier auroralab-postgres-instance1 \
     --db-cluster-identifier auroralab-postgres-secondary \
     --db-instance-class db.r6g.large \
     --engine aurora-postgresql \
     --enable-performance-insights \
     --performance-insights-retention-period 7 \
     --db-parameter-group-name $DB_PARAMETER_GP \
     --monitoring-interval 1 \
     --monitoring-role-arn $MONITOR_R \
     --no-auto-minor-version-upgrade

This operation will take 5-10 minutes to complete. 
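
Optionally, you can wait for the new reader instance to become available before continuing; a minimal sketch, assuming the instance identifier used above:

aws rds wait db-instance-available \
  --db-instance-identifier auroralab-postgres-instance1 \
  --region us-west-1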

Perform a managed planned failover to the secondary Region using the AWS CLI

This method is intended for controlled scenarios such as disaster recovery testing and other planned operational procedures. When you use this method, Aurora automatically adds the old primary Region back to the global database as a secondary Region when it becomes available again, so the original topology of your global cluster is maintained.

Before failover

Failover

aws rds failover-global-cluster --global-cluster-identifier auroralab-postgres-global --target-db-cluster-identifier arn:aws:rds:us-west-1:XXXXXXXXX:cluster:auroralab-postgres-secondary

The managed failover is now complete.
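
To confirm that the secondary cluster is now the writer, you can inspect the global cluster members; a minimal sketch:

aws rds describe-global-clusters \
  --global-cluster-identifier auroralab-postgres-global \
  --query 'GlobalClusters[0].GlobalClusterMembers[*].[DBClusterArn,IsWriter]' \
  --output table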

To recover from an unplanned outage, refer to Recovering an Amazon Aurora global database from an unplanned outage.

This alternative method can be used when managed failover isn’t an option, for example, when your primary and secondary Regions are running incompatible engine versions.

Detach an Aurora secondary cluster from an Aurora global database cluster using the AWS CLI

aws rds remove-from-global-cluster --global-cluster-identifier auroralab-postgres-global --db-cluster-identifier arn:aws:rds:us-west-2:XXXXXXXX:cluster:aupg-labs-cluster

This operation will take 5-10 minutes to complete. 

The detach operation is now complete.

Finally, the global database itself can be deleted with the following command (see the AWS CLI documentation for details):

aws rds delete-global-cluster --global-cluster-identifier <value>

Conclusion

These steps provide a general AWS CLI walkthrough of managing an Aurora PostgreSQL global database. The specific configuration details may vary depending on your environment and setup. It’s recommended to consult the relevant AWS documentation for detailed setup instructions.

Manage the Aurora PostgreSQL global database

I hope this is helpful. Thank you for reading the DevopsRoles page!

Manage the RDS PostgreSQL instance using the AWS CLI

Introduction

You can use the AWS Management Console to manage the RDS PostgreSQL instance. Alternatively, you can manage it using the AWS CLI on Linux, as shown below.

Guide to creating and managing the RDS PostgreSQL instance using the AWS CLI.

This lab contains the following tasks:

Step 1: Install AWS CLI into the Cloud9 instance

Step 2: Create an RDS PostgreSQL Instance using the AWS CLI

Step 3: Configure the RDS PostgreSQL client on the Cloud9 instance

Step 4: Create Read-replica using the AWS CLI

Step 5: Promote Read Replica into a standalone instance using the AWS CLI

Step 6: Scale up the instance using the AWS CLI

Step 7: Migrating to a Multi-AZ DB cluster using the AWS CLI

Step 8: Promote this Multi-AZ read replica cluster to a stand-alone cluster using the AWS CLI

Step 9: Create a read replica from a Multi-AZ read replica cluster using the AWS CLI

Step 10: Check if the instance is Multi-AZ using the AWS CLI

Step 11: Convert the instance to Multi-AZ using the AWS CLI

Step 12: Create an SNS Topic and an RDS Event Subscription using the AWS CLI

Step 13: Perform failover of a Multi-AZ RDS instance using the AWS CLI

Step 14: View the instance’s backups using the AWS CLI

Step 15: Take a manual snapshot of the RDS instance using the AWS CLI

Step 16: Restore an instance from the latest manual snapshot using the AWS CLI

Step 17: Point in time restore the RDS instance using the AWS CLI

Step 18: Delete the RDS instances using the AWS CLI

Step 19: Upgrading the engine version of RDS instances using the AWS CLI

Detailed Steps

Step 1: Install AWS CLI into the Cloud9 instance

sudo rm -rf /usr/local/aws
sudo rm /usr/bin/aws
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
rm awscliv2.zip

Step 2: Create an RDS PostgreSQL Instance using the AWS CLI

read -s -p "Enter a Password: " MASTER_USER_PASSWORD
AWSREGION=`aws configure get region`
DBSUBNETGRP=XXXXXXX
DBSECGRP=XXXXXXX
EMROLEARN=XXXXXXX
RDSKMSKEY=XXXXXXX

aws rds create-db-instance \
	--db-instance-identifier rds-pg-labs \
	--db-name pglab \
	--engine postgres \
	--engine-version 13.8 \
	--master-username masteruser \
	--master-user-password $MASTER_USER_PASSWORD \
	--db-instance-class db.t2.micro \
	--storage-type io1 \
	--iops 1000 \
	--allocated-storage 100 \
	--no-multi-az \
	--db-subnet-group-name $DBSUBNETGRP \
	--vpc-security-group-ids $DBSECGRP \
	--no-publicly-accessible \
	--enable-iam-database-authentication \
	--backup-retention-period 1 \
	--copy-tags-to-snapshot \
	--auto-minor-version-upgrade \
	--storage-encrypted \
	--kms-key-id $RDSKMSKEY \
	--monitoring-interval 1 \
	--monitoring-role-arn $EMROLEARN \
	--enable-performance-insights \
	--performance-insights-kms-key-id $RDSKMSKEY \
	--performance-insights-retention-period 7 \
	--enable-cloudwatch-logs-exports '["postgresql","upgrade"]' \
	--deletion-protection \
	--region $AWSREGION
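
Optionally, wait for the instance to become available and capture its endpoint; the DBENDP variable can be reused in Step 3. A minimal sketch, assuming the identifier rds-pg-labs used above:

aws rds wait db-instance-available --db-instance-identifier rds-pg-labs --region $AWSREGION
DBENDP=`aws rds describe-db-instances --db-instance-identifier rds-pg-labs \
	--query 'DBInstances[0].Endpoint.Address' --output text --region $AWSREGION`
echo $DBENDP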

Step 3: Configure the RDS PostgreSQL client on the Cloud9 instance

sudo amazon-linux-extras install -y postgresql14
sudo yum install -y postgresql-contrib sysbench jq
AWSREGION=`aws configure get region`
export DBUSER="XXXXXX"
export DBPASS="XXXXXX"
# Endpoint of the instance created in Step 2 (or reuse the DBENDP captured there)
export DBENDP=rds-pg-labs.XXXXXX.us-east-1.rds.amazonaws.com

export PGHOST=$DBENDP
export PGUSER=$DBUSER
export PGPASSWORD="$DBPASS"
echo "export DBPASS=\"$DBPASS\"" >> /home/ec2-user/.bashrc
echo "export DBUSER=$DBUSER" >> /home/ec2-user/.bashrc
echo "export DBENDP=$DBENDP" >> /home/ec2-user/.bashrc
echo "export AWSREGION=$AWSREGION" >> /home/ec2-user/.bashrc
echo "export PGUSER=$DBUSER" >> /home/ec2-user/.bashrc
echo "export PGPASSWORD=\"$DBPASS\"" >> /home/ec2-user/.bashrc
echo "export PGHOST=$DBENDP" >> /home/ec2-user/.bashrc

Now, verify the DB instance as below:
psql pglab

Step 4: Create Read-replica using the AWS CLI

AWSREGION=`aws configure get region`

aws rds create-db-instance-read-replica \
	--db-instance-identifier rds-pg-labs-read \
	--source-db-instance-identifier rds-pg-labs \
	--db-instance-class db.t3.medium \
	--region $AWSREGION

Step 5: Promote Read Replica into a standalone instance using the AWS CLI

AWSREGION=`aws configure get region`
aws rds promote-read-replica \
--db-instance-identifier rds-pg-labs-read \
--backup-retention-period 1 \
--region $AWSREGION

Step 6: Scale up the instance using the AWS CLI

AWSREGION=`aws configure get region`
aws rds modify-db-instance \
	--db-instance-identifier rds-pg-labs \
	--db-instance-class db.t3.large \
	--apply-immediately \
	--region $AWSREGION
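
You can confirm that the new instance class has been applied; a minimal sketch:

aws rds describe-db-instances --db-instance-identifier rds-pg-labs \
	--query 'DBInstances[0].[DBInstanceClass,DBInstanceStatus]' \
	--output text --region $AWSREGION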

Step 7: Migrating to a Multi-AZ DB cluster using the AWS CLI

ARN=`aws rds describe-db-instances --db-instance-identifier rds-pg-labs --query 'DBInstances[].DBInstanceArn' --output text --region $AWSREGION`

aws rds create-db-cluster \
        --db-cluster-identifier rds-pg-labs-cluster \
        --engine postgres \
        --replication-source-identifier $ARN \
        --db-cluster-instance-class db.r5d.large \
        --storage-type io1 --iops 1000 \
        --region $AWSREGION \
        --db-subnet-group-name XXXXXXX

Note: you may see errors like the following if your VPC or instance class is not compatible.

An error occurred (InvalidSubnet) when calling the CreateDBCluster operation: No default subnet was detected in VPC. Please contact AWS Support to recreate the default Subnets.

An error occurred (InvalidParameterCombination) when calling the CreateDBCluster operation: The combination of engine version 13.8 and DB instance class db.t3.medium isn’t supported for Multi-AZ DB clusters.

Step 8: Promote this Multi-AZ read replica cluster to a stand-alone cluster using the AWS CLI

aws rds promote-read-replica-db-cluster \
        --db-cluster-identifier rds-pg-labs-cluster

Step 9: Create a read replica from a Multi-AZ read replica cluster using the AWS CLI

aws rds create-db-instance-read-replica \
   --db-instance-identifier rds-pg-labs-cluster-replica \
   --source-db-cluster-identifier rds-pg-labs-cluster

Note: For RDS for PostgreSQL, the source Multi-AZ DB cluster must be running version 15.2-R2 or higher to create a DB instance read replica. See other Limitations in the Amazon RDS User Guide.

Step 10: Check if the instance is Multi-AZ using the AWS CLI

AWSREGION=`aws configure get region`
aws rds describe-db-instances \
	--db-instance-identifier rds-pg-labs \
	--query 'DBInstances[].MultiAZ' \
	--output text \
	--region $AWSREGION

Step 11: Convert the instance to Multi-AZ using the AWS CLI

aws rds modify-db-instance \
	--db-instance-identifier rds-pg-labs \
	--multi-az \
	--apply-immediately \
	--region $AWSREGION

Confirm that your instance is now Multi-AZ

Step 12: Create an SNS Topic and an RDS Event Subscription using the AWS CLI
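
Below is a minimal sketch of one way to do this with the AWS CLI; the topic name, subscription email, and event categories are placeholders you should adjust.

AWSREGION=`aws configure get region`
# Create an SNS topic and capture its ARN (topic name is a placeholder)
SNSTOPICARN=`aws sns create-topic --name rds-pg-labs-events --query 'TopicArn' --output text --region $AWSREGION`
# Subscribe an email address to the topic (email address is a placeholder)
aws sns subscribe --topic-arn $SNSTOPICARN --protocol email \
	--notification-endpoint you@example.com --region $AWSREGION
# Create an RDS event subscription for failover and availability events on the instance
aws rds create-event-subscription \
	--subscription-name rds-pg-labs-events \
	--sns-topic-arn $SNSTOPICARN \
	--source-type db-instance \
	--source-ids rds-pg-labs \
	--event-categories failover availability \
	--region $AWSREGION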

Step 13: Perform failover of a Multi-AZ RDS instance using the AWS CLI

# connection to the database at 10-second intervals
while true;
do
psql pglab -c 'select now() ,inet_server_addr(), pg_postmaster_start_time() '; 
echo -e "\n\n"
sleep 10
done

# reboot the instance with failover
AWSREGION=`aws configure get region`
aws rds reboot-db-instance --db-instance-identifier rds-pg-labs --force-failover --region $AWSREGION

Before failover

Failover

Step 14: View the instance’s backups using the AWS CLI

AWSREGION=`aws configure get region`

# List the automated backups for the instance
aws rds describe-db-instance-automated-backups \
	--db-instance-identifier rds-pg-labs \
	--region $AWSREGION --output table

# List the snapshots for the instance
aws rds describe-db-snapshots \
	--db-instance-identifier rds-pg-labs \
	--region $AWSREGION --output table

# Check the Latest Restorable Time (LRT) of the instance
aws rds describe-db-instances \
	--db-instance-identifier rds-pg-labs \
	--query 'DBInstances[].LatestRestorableTime' \
	--region $AWSREGION \
	--output text

Step 15: Take a manual snapshot of the RDS instance using the AWS CLI

AWSREGION=`aws configure get region`

aws rds create-db-snapshot \
	--db-instance-identifier rds-pg-labs \
	--db-snapshot-identifier manual-snapshot-rds-pg-labs \
	--region $AWSREGION

Step 16: Restore an instance from the latest manual snapshot using the AWS CLI

AWSREGION=`aws configure get region`
# Get the Latest Manual Snapshot ID
LATESTSNAP=`aws rds describe-db-snapshots --db-instance-identifier rds-pg-labs --snapshot-type manual \
    --query 'DBSnapshots | sort_by(@, &SnapshotCreateTime) | [-1].DBSnapshotIdentifier' \
    --output text --region $AWSREGION`

# Restore the Snapshot
aws rds restore-db-instance-from-db-snapshot \
	--db-instance-identifier rds-pg-labs-restore-manual-snapshot \
	--db-snapshot-identifier $LATESTSNAP \
	--db-instance-class db.m6g.large \
	--region $AWSREGION \
    --db-subnet-group-name XXXXXXX
# Monitor the progress and status of the restoration 
aws rds describe-db-instances --db-instance-identifier rds-pg-labs-restore-manual-snapshot \
	--query 'DBInstances[0].[DBInstanceStatus,Endpoint.Address]' \
    --output text --region $AWSREGION

Monitor the progress and status of the restoration

Step 17: Point in time restore the RDS instance using the AWS CLI

AWSREGION=`aws configure get region`
# Lookup the latest restore time for your database
LASTRESTORE=`aws rds describe-db-instances \
  --db-instance-identifier rds-pg-labs \
  --region $AWSREGION \
  --query 'DBInstances[0].LatestRestorableTime' \
  --output text`

# or list restore time for your database
aws rds describe-db-snapshots --db-instance-identifier rds-pg-labs \
    --snapshot-type automated \
    --query 'DBSnapshots[].{ID:DBSnapshotIdentifier,Status:Status,Type:SnapshotType,CreateTime:SnapshotCreateTime}' \
    --output table \
    --region $AWSREGION

# Restore the database to the latest restorable time
aws rds restore-db-instance-to-point-in-time \
    --source-db-instance-identifier rds-pg-labs \
    --target-db-instance-identifier rds-pg-labs-restore-latest \
    --restore-time $LASTRESTORE \
    --db-subnet-group-name XXXXXXX

# Monitor the progress and status of the restoration 
aws rds describe-db-instances --db-instance-identifier rds-pg-labs-restore-latest \
	--query 'DBInstances[0].[DBInstanceStatus,Endpoint.Address]' \
    --output text --region $AWSREGION

Step 18: Delete the RDS instances using the AWS CLI

# list RDS instance
aws rds describe-db-instances --query "DBInstances[*].[DBInstanceIdentifier,Engine,DBInstanceStatus,Endpoint.Address]" --output table

AWSREGION=`aws configure get region`
# delete RDS instance
aws rds delete-db-instance \
	--db-instance-identifier rds-pg-labs-restore-latest \
	--skip-final-snapshot \
	--delete-automated-backups \
	--region $AWSREGION

Step 19: Upgrading the engine version of RDS instances using the AWS CLI

AWSREGION=`aws configure get region`
aws rds modify-db-instance --db-instance-identifier rds-pg-labs --engine-version 14.8 --allow-major-version-upgrade --apply-immediately --region $AWSREGION
aws rds describe-db-instances --db-instance-identifier rds-pg-labs --region $AWSREGION --query 'DBInstances[*].EngineVersion'
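
You can also list the valid upgrade targets for a given engine version before upgrading; a minimal sketch:

aws rds describe-db-engine-versions --engine postgres --engine-version 13.8 \
	--query 'DBEngineVersions[0].ValidUpgradeTarget[].EngineVersion' \
	--output text --region $AWSREGION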

Conclusion

These steps provide a general AWS CLI walkthrough of the process of managing RDS instances. The specific configuration details may vary depending on your environment and setup. It’s recommended to consult the relevant AWS documentation for detailed setup instructions.

Manage the RDS PostgreSQL instance using the AWS CLI

I hope this is helpful. Thank you for reading the DevopsRoles page!

How to share your Route 53 Domains across AWS accounts

Introduction

This post shows how to share your Route 53 domains across AWS accounts via a Route 53 hosted zone. In my case, I already have a Production account that owns the domain, and I want to use this domain in a Test account for my other activities.

Share your Route 53 Domains across AWS accounts

To share your Route 53 domains across AWS accounts, you can follow these general steps:

  • Create a public hosted zone in the Test account: in the Test account, create a public hosted zone in Route 53 for the domain you want to use.
  • Create a record in the Production account: in the account that owns the domain, create an NS record in the Route 53 hosted zone for the domain you want to share.
  • Create a record in the Test account: in the Test account, create a record to route traffic to the ALB.

Step by step: Share your Route 53 domains across AWS accounts

  1. Create a Public Hosted Zone in the Test account

After filling out the information, return to your Route 53 hosted zone and copy the four lines inside the value box of the NS record; they contain the name server information you need in the next step.

  2. Create a record in the Production account

Paste the four name servers from your Test account Route 53 hosted zone into the value list.
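
If you prefer the CLI, the same NS record can be created in the Production account with change-resource-record-sets; a minimal sketch in which the hosted zone ID, domain name, and name server values are placeholders:

aws route53 change-resource-record-sets \
  --hosted-zone-id Z0123456789EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "test.example.com",
        "Type": "NS",
        "TTL": 300,
        "ResourceRecords": [
          {"Value": "ns-1.awsdns-01.org"},
          {"Value": "ns-2.awsdns-02.com"},
          {"Value": "ns-3.awsdns-03.net"},
          {"Value": "ns-4.awsdns-04.co.uk"}
        ]
      }
    }]
  }'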

  3. Create a record in the Test account

In the Test account, create a record to route traffic to the ALB.

  4. Test page

Conclusion

These steps provide a general overview of the process. The specific configuration details may vary depending on your environment and setup. It’s recommended to consult the relevant AWS documentation for detailed setup instructions. I hope this is helpful. Thank you for reading the DevopsRoles page!

Reference: this lab uses a template from ChandraLingam.

How to login to AWS Management Console with IAM Identity Center(AWS SSO) via a self-managed directory in Active Directory

Introduction

This post shows how to log in to the AWS Management Console with IAM Identity Center (AWS SSO) via a self-managed directory in Active Directory.

To use IAM Identity Center (AWS SSO) with a self-managed directory in Active Directory to log in to the AWS Management Console, you can follow these general steps:

  • Set up an ADFS server: Install and configure ADFS on a Windows server that is joined to your Active Directory domain. This server will act as the identity provider (IdP) in the SAML authentication flow. To create the AD server, refer to the link create AD with Windows server 2012-r2.
  • Enable IAM Identity Center: follow this AWS user guide to configure IAM Identity Center.
  • Choose your identity source: follow this AWS user guide and choose a self-managed directory in Active Directory.
    • Create a two-way trust relationship (in my case): create a trust relationship between your AWS Managed Microsoft AD and your self-managed Active Directory domain. Alternatively, you can use AD Connector.
    • Configure attribute mappings and sync your AD users to IAM Identity Center: follow the guided setup in the AWS user guide to configure attribute mappings and add the users and groups you want (choose the users from the self-managed AD, not the AWS-managed AD).
  • Create an administrative permission set: follow this guide to create a permission set that grants administrative permissions to your user.
  • Set up AWS account access for your administrative user: follow this guide to set up AWS account access for a user in IAM Identity Center.
  • Sign in to the AWS access portal with your administrative credentials: you can sign in to the AWS access portal using the credentials of the administrative user that you have already configured.

Using AWS Management Console with IAM Identity Center(AWS SSO)

This is the lab for using the AWS Management Console with IAM Identity Center (AWS SSO) via a self-managed directory in Active Directory.

I tried the lab, so here are my notes for everyone:

  • If the two-way trust relationship step fails to verify, you may have missed setting outbound rules for the AWS Managed Microsoft AD ENIs; configure the security group accordingly.
  • Enable IAM Identity Center with the AWS account root user.
  • A single user can access multiple AWS accounts (in the same organization) from a self-managed directory.
  • Sign in to the AWS access portal with your user name, not your user email.

Conclusion

This post covered using IAM Identity Center (AWS SSO) with a self-managed directory in Active Directory to log in to the AWS Management Console. These steps provide a general overview of the process. The specific configuration details may vary depending on your environment and setup.

It’s recommended to consult the relevant documentation from AWS and Microsoft for detailed instructions on setting up the integration. I hope this is helpful. Thank you for reading the DevopsRoles page!

Using Windows Active Directory ADFS and SAML 2.0 to login AWS Console note

Introduction

To use Windows Active Directory (AD), Active Directory Federation Services (ADFS), and Security Assertion Markup Language 2.0 (SAML 2.0) to log in to the AWS Management Console, you can follow these general steps:

  • Set up an ADFS server: Install and configure ADFS on a Windows server that is joined to your Active Directory domain. This server will act as the identity provider (IdP) in the SAML authentication flow.
  • Configure AWS as a relying party trust: In the ADFS server, create a relying party trust for AWS. This trust establishes a relationship between the ADFS server and AWS, allowing the exchange of SAML assertions.
  • Obtain the AWS metadata document: Download the AWS SAML metadata document from the AWS Management Console. This document contains the necessary configuration information for AWS.
  • Configure claims rules: Set up claims rules in the ADFS server to map Active Directory attributes to the corresponding AWS SAML attributes. This step ensures that the necessary user information is included in the SAML assertion sent to AWS.
  • Set up AWS IAM roles: Create IAM roles in AWS that define the permissions and access policies for users authenticated through SAML. These roles will determine the level of access users have in the AWS Management Console.
  • Configure AWS IAM identity provider: Create an IAM identity provider in AWS and upload the ADFS metadata XML file. This step establishes the trust relationship between AWS and the ADFS server (a CLI sketch for this step follows the list).
  • Create an IAM role mapping: Create a role mapping in AWS that maps the SAML attributes received from ADFS to the corresponding IAM roles. This mapping determines which IAM role should be assumed based on the user’s attributes.
  • Test the login process: Attempt to log in to the AWS Management Console using the ADFS server as the IdP. You should be redirected to the ADFS login page, and after successful authentication, you will be logged in to the AWS Management Console with the appropriate IAM role.
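
For the identity provider step, here is a minimal AWS CLI sketch; the metadata file name and provider name are placeholders:

aws iam create-saml-provider \
    --saml-metadata-document file://FederationMetadata.xml \
    --name ADFS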

What is ADFS and SAML 2.0?

ADFS Overview

ADFS (Active Directory Federation Services) is a component of Windows Server that provides users with single sign-on access to systems and applications located across organizational boundaries.

SAML 2.0 Overview

SAML (Security Assertion Markup Language) 2.0 is an open standard for exchanging authentication and authorization data between identity providers (IdP) and service providers (SP).

When integrated, ADFS acts as the IdP and AWS acts as the SP, enabling users to log in to AWS using their Windows domain credentials.

Benefits of Using ADFS and SAML with AWS

  • Centralized identity management using Active Directory.
  • Improved security with token-based authentication.
  • No need to manage IAM user passwords in AWS.
  • Enhanced user experience through seamless SSO.
  • Audit trail and compliance alignment with enterprise policies.

Using Windows Active Directory ADFS and SAML 2.0 to login AWS Console

Today, I tried the lab Enabling federation to AWS using Windows Active Directory, ADFS, and SAML 2.0.

Here are my notes for everyone using Windows Active Directory, ADFS, and SAML 2.0 to log in to the AWS Console:

  • The CloudFormation template is an older version and some AMI IDs are out of date. In my case, I was using the Tokyo Region, and the stack failed because the AMI could not be used.
  • Do not use Windows Server 2016 for your AD server. The “Configure AWS as a trusted relying party” step does not succeed, and you will be unable to log in to the AWS console afterward.
  • The CloudFormation template does not set up IIS; you must configure it manually and create the certificate yourself.
  • If you get the following error when you visit https://localhost/adfs/ls/IdpInitiatedSignOn.aspx:
An error occurred
The resource you are trying to access is not available. Contact your administrator for more information.

Change the EnableIdpInitiatedSignonPage property setting with: Set-AdfsProperties -EnableIdpInitiatedSignonPage $True

You have finished the lab by logging in to the AWS console with the administrator role.


Conclusion

This post covered using Windows Active Directory, ADFS, and SAML 2.0 to log in to the AWS Console. These steps provide a general overview of the process. The specific configuration details may vary depending on your environment and setup. It’s recommended to consult the relevant documentation from AWS and Microsoft for detailed instructions on setting up the integration between ADFS, SAML, and AWS. I hope this is helpful. Thank you for reading the DevopsRoles page!

Install EKS on AWS

In this tutorial, we show how to set up an EKS 1.16 cluster with eksctl. Amazon Elastic Kubernetes Service (Amazon EKS) gives you the flexibility to start, run, and scale Kubernetes applications in the AWS cloud or on-premises. Now, let’s install EKS on AWS.

1. First, create a free account on AWS.

Link here:

Example: create the user devopsroles-demo as in the picture below:

2. Install the AWS CLI on Windows

Refer here:

Create AWS Profile

A named profile makes it easier to switch to a different AWS IAM user or IAM role identity with ‘export AWS_PROFILE=PROFILE_NAME‘. I will not use the ‘default‘ profile created by ‘aws configure‘. For example, I create a named AWS profile ‘devopsroles-demo‘ in one of two ways:

  • ‘aws configure --profile devopsroles-demo’
E:\Study\cka\devopsroles>aws configure --profile devopsroles-demo
AWS Access Key ID [None]: XXXXZHBNJLCKKCE7EQQQ
AWS Secret Access Key [None]: fdfdfdfd43434dYlQ1il1xKNCnqwUvNHFSv41111
Default region name [None]:
Default output format [None]:

E:\Study\cka\devopsroles>set AWS_PROFILE=devopsroles-demo
E:\Study\cka\devopsroles>aws sts get-caller-identity
{
    "UserId": "AAQAZHBNJLCKPEGKYAV1R",
    "Account": "456602660300",
    "Arn": "arn:aws:iam::456602660300:user/devopsroles-demo"
}
  • Create profile entry in ~/.aws/credentials file

The content of the credentials file is as below:

[devopsroles-demo]
aws_access_key_id=YOUR_ACCESS_KEY
aws_secret_access_key=YOUR_SECRET_ACCESS_KEY
region = YOUR_REGION

Check new profile

export AWS_PROFILE=devopsroles-demo
# Windows
set AWS_PROFILE=devopsroles-demo

3. Install aws-iam-authenticator

# Windows
# install chocolatey first: https://chocolatey.org/install
choco install -y aws-iam-authenticator

4. Install kubectl

Ref here:

choco install kubernetes-cli
kubectl version

5. Install eksctl

Ref here:

# install eksctl from chocolatey
chocolatey install -y eksctl 
eksctl version

6. Create an SSH key for EKS worker nodes

ssh-keygen
# Example name key is devopsroles_worker_nodes_demo.pem

7. Set up the EKS cluster with eksctl (so you don’t need to manually create a VPC)

The eksctl tool will create the K8s control plane (master nodes, etcd, API server, etc.), worker nodes, VPC, security groups, subnets, routes, internet gateway, etc.

  • use official AWS EKS AMI
  • dedicated VPC
  • EKS not supported in us-west-1
eksctl create cluster  --name devopsroles-from-eksctl --version 1.16  --region us-west-2  --nodegroup-name workers --node-type t3.medium --nodes 2 --nodes-min 1 --nodes-max 4 --ssh-access --ssh-public-key ~/.ssh/devopsroles_worker_nodes_demo.pem.pub --managed

The output

E:\Study\cka\devopsroles>eksctl create cluster  --name devopsroles-from-eksctl --version 1.16  --region us-west-2  --nodegroup-name workers --node-type t3.medium --nodes 2 --nodes-min 1 --nodes-max 4 --ssh-access
--ssh-public-key ~/.ssh/devopsroles_worker_nodes_demo.pem.pub --managed
2021-05-23 15:19:30 [ℹ]  eksctl version 0.49.0
2021-05-23 15:19:30 [ℹ]  using region us-west-2
2021-05-23 15:19:31 [ℹ]  setting availability zones to [us-west-2a us-west-2b us-west-2c]
2021-05-23 15:19:31 [ℹ]  subnets for us-west-2a - public:192.168.0.0/19 private:192.168.96.0/19
2021-05-23 15:19:31 [ℹ]  subnets for us-west-2b - public:192.168.32.0/19 private:192.168.128.0/19
2021-05-23 15:19:31 [ℹ]  subnets for us-west-2c - public:192.168.64.0/19 private:192.168.160.0/19
2021-05-23 15:19:31 [ℹ]  using SSH public key "C:\\Users\\USERNAME/.ssh/devopsroles_worker_nodes_demo.pem.pub" as "eksctl-devopsroles-from-eksctl-nodegroup-workers-29:e7:8c:c3:df:a5:23:1b:bb:74:ad:51:bc:fb:80:9b" 
2021-05-23 15:19:32 [ℹ]  using Kubernetes version 1.16
2021-05-23 15:19:32 [ℹ]  creating EKS cluster "devopsroles-from-eksctl" in "us-west-2" region with managed nodes
2021-05-23 15:19:32 [ℹ]  will create 2 separate CloudFormation stacks for cluster itself and the initial managed nodegroup
2021-05-23 15:19:32 [ℹ]  if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-west-2 --cluster=devopsroles-from-eksctl'
2021-05-23 15:19:32 [ℹ]  CloudWatch logging will not be enabled for cluster "devopsroles-from-eksctl" in "us-west-2"
2021-05-23 15:19:32 [ℹ]  you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=us-west-2 --cluster=devopsroles-from-eksctl'
2021-05-23 15:19:32 [ℹ]  Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "devopsroles-from-eksctl" in "us-west-2"
2021-05-23 15:19:32 [ℹ]  2 sequential tasks: { create cluster control plane "devopsroles-from-eksctl", 2 sequential sub-tasks: { wait for control plane to become ready, create managed nodegroup "workers" } }
2021-05-23 15:19:32 [ℹ]  building cluster stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:19:34 [ℹ]  deploying stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:20:04 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:20:35 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:21:36 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:22:37 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:23:39 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:24:40 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:25:41 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:26:42 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:27:44 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:28:45 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:29:46 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:30:47 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:31:48 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:31:52 [ℹ]  building managed nodegroup stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:31:53 [ℹ]  deploying stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:31:53 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:32:09 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:32:27 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:32:48 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:33:05 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:33:26 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:33:47 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:34:06 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:34:24 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:34:43 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:35:01 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:35:17 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:35:37 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:35:38 [ℹ]  waiting for the control plane availability...
2021-05-23 15:35:38 [✔]  saved kubeconfig as "C:\\Users\\USERNAME/.kube/config"
2021-05-23 15:35:38 [ℹ]  no tasks
2021-05-23 15:35:38 [✔]  all EKS cluster resources for "devopsroles-from-eksctl" have been created
2021-05-23 15:35:39 [ℹ]  nodegroup "workers" has 2 node(s)
2021-05-23 15:35:39 [ℹ]  node "ip-192-168-53-14.us-west-2.compute.internal" is ready
2021-05-23 15:35:39 [ℹ]  node "ip-192-168-90-229.us-west-2.compute.internal" is ready
2021-05-23 15:35:39 [ℹ]  waiting for at least 1 node(s) to become ready in "workers"
2021-05-23 15:35:39 [ℹ]  nodegroup "workers" has 2 node(s)
2021-05-23 15:35:39 [ℹ]  node "ip-192-168-53-14.us-west-2.compute.internal" is ready
2021-05-23 15:35:39 [ℹ]  node "ip-192-168-90-229.us-west-2.compute.internal" is ready
2021-05-23 15:35:47 [ℹ]  kubectl command should work with "C:\\Users\\USERNAME/.kube/config", try 'kubectl get nodes'
2021-05-23 15:35:47 [✔]  EKS cluster "devopsroles-from-eksctl" in "us-west-2" region is ready

You have created a cluster. The cluster credentials have been added to ~/.kube/config.
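
If you ever need to regenerate the kubeconfig (for example, on another machine), you can do so with the AWS CLI; a minimal sketch:

aws eks update-kubeconfig --name devopsroles-from-eksctl --region us-west-2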

The result on AWS

Amazon EKS Clusters

CloudFormation

EC2

For example, some basic command lines:

Get info about cluster resources

aws eks describe-cluster --name devopsroles-from-eksctl --region us-west-2

Get services

kubectl get svc

The output

E:\Study\cka\devopsroles>kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   11m

Delete EKS Cluster

eksctl delete cluster --name devopsroles-from-eksctl --region us-west-2

The output

E:\Study\cka\devopsroles>eksctl delete cluster --name devopsroles-from-eksctl --region us-west-2
2021-05-23 15:57:31 [ℹ]  eksctl version 0.49.0
2021-05-23 15:57:31 [ℹ]  using region us-west-2
2021-05-23 15:57:31 [ℹ]  deleting EKS cluster "devopsroles-from-eksctl"
2021-05-23 15:57:34 [ℹ]  deleted 0 Fargate profile(s)
2021-05-23 15:57:37 [✔]  kubeconfig has been updated
2021-05-23 15:57:37 [ℹ]  cleaning up AWS load balancers created by Kubernetes objects of Kind Service or Ingress
2021-05-23 15:57:45 [ℹ]  2 sequential tasks: { delete nodegroup "workers", delete cluster control plane "devopsroles-from-eksctl" [async] }
2021-05-23 15:57:45 [ℹ]  will delete stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:57:45 [ℹ]  waiting for stack "eksctl-devopsroles-from-eksctl-nodegroup-workers" to get deleted
2021-05-23 15:57:45 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:58:02 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:58:19 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:58:40 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:58:58 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:59:19 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:59:20 [ℹ]  will delete stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:59:20 [✔]  all cluster resources were deleted

Conclusion

You have installed EKS on AWS. I hope this is helpful. Thank you for reading the DevopsRoles page!

Reduce AWS Billing and Setup Email Alerts

In this tutorial, we show how to reduce AWS billing and set up email alerts.

Set up an AWS Billing email alert

You need to log in to the AWS Console as the root user.

Follow these steps on AWS:

Choose “My Billing Dashboard”

Create a budget

Choose “Cost budget”

For example, my “Budgeted amount” is 15 USD.

Input: Email contacts for Alerts

Finally, create it.
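
As an alternative to the console, a cost budget with an email alert can also be created from the AWS CLI; a minimal sketch in which the account ID, amount, threshold, and email address are placeholders:

aws budgets create-budget \
    --account-id 123456789012 \
    --budget '{"BudgetName":"monthly-cost-budget","BudgetLimit":{"Amount":"15","Unit":"USD"},"TimeUnit":"MONTHLY","BudgetType":"COST"}' \
    --notifications-with-subscribers '[{"Notification":{"NotificationType":"ACTUAL","ComparisonOperator":"GREATER_THAN","Threshold":80},"Subscribers":[{"SubscriptionType":"EMAIL","Address":"you@example.com"}]}]'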

Conclusion

You have set up email alerts to reduce AWS billing. I hope this is helpful. Thank you for reading the DevopsRoles page!

How to setup SSL/TLS connection for AWS RDS Oracle Database using SQL*PLUS, SQL Developer, JDBC

Introduction

Hi everyone, today I am going to show you how to set up an SSL/TLS connection from the client to an AWS RDS Oracle Database.

Prepare AWS RDS Oracle Database

  • An EC2 instance with Windows Server 2019.
  • An RDS Oracle instance (12.1.0.2.v19)
  • A working connection to the RDS Oracle instance over the TCP protocol

Check the current connection with the following commands:

sqlplus admin/admin12345@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=orcl12.xxxxxxx.ap-northeast-1.rds.amazonaws.com)(PORT=1521))(CONNECT_DATA=(SID=SSLLAB)))

sqlplus > SELECT SYS_CONTEXT('USERENV', 'network_protocol') FROM DUAL;

Today’s tasks

  1. Modify the DB instance to change the CA from rds-ca-2015 to rds-ca-2019.
  2. Adding the SSL Option.
  3. Using SQL*Plus for SSL/TLS connections (with Oracle Wallets).
  4. Using SQL Developer for SSL/TLS connections (with JKS).
  5. Using JDBC to establish SSL/TLS connections (with JKS).

Modify the DB instance to change the CA from rds-ca-2015 to rds-ca-2019

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://console.aws.amazon.com/rds/.

2. In the navigation pane, choose Databases, and then choose the DB instance that you want to modify.

3. Choose Modify. The Modify DB Instance page appears.

4. In the Network & Security section, choose rds-ca-2019.

5. Choose Continue and check the summary of modifications.

6. To apply the changes immediately, choose Apply immediately. Choosing this option restarts your database immediately.

Adding the SSL Option

1. Create a new option group or modify an existing one to which you can add the SSL option for your RDS instance.

Add the SSL option to the option group.

Set the following options:

SQLNET.CIPHER_SUITE:SSL_RSA_WITH_AES_256_CBC_SHA

SQLNET.SSL_VERSION:1.0 or 1.2

FIPS.SSLFIPS_140:TRUE

2. Configure the security group used by your RDS Oracle instance to allow inbound port 2484; the source range is the IPv4 CIDR of your VPC or of the EC2 client instance.

Using SQL*Plus for SSL/TLS connections

1. Download middleware

  • Oracle Database Client (12.1.0.2.0) for Microsoft Windows (x64), required for the orapki utility (download link).

Install Folder path: C:\app\client\Administrator\product\12.1.0\client_1

2. Download the 2019 root certificate that works for all AWS Regions and put the file in the ssl_wallet directory.

https://s3.amazonaws.com/rds-downloads/rds-ca-2019-root.pem

Folder path: C:\app\client\Administrator\product\12.1.0\client_1\ssl_wallet

3. Run the following command to create the Oracle wallet.

C:\app\client\Administrator\product\12.1.0\client_1\BIN\orapki wallet create -wallet C:\app\client\Administrator\product\12.1.0\client_1\ssl_wallet -auto_login_only

4. Run the following command to add a cert to the Oracle wallet.

C:\app\client\Administrator\product\12.1.0\client_1\BIN\orapki wallet add -wallet C:\app\client\Administrator\product\12.1.0\client_1\ssl_wallet -trusted_cert -cert C:\app\client\Administrator\product\12.1.0\client_1\ssl_wallet\rds-ca-2019-root.pem -auto_login_only

5. Run the following command to confirm that the wallet was updated successfully.

C:\app\client\Administrator\product\12.1.0\client_1\BIN\orapki wallet display -wallet C:\app\client\Administrator\product\12.1.0\client_1\ssl_wallet

6. Create the net service name to log in with SQL*PLUS.

  • Create a file named C:\app\client\Administrator\product\12.1.0\client_1\network\admin\tnsnames.ora with the following content.
ORCL12 =
 (DESCRIPTION =
  (ADDRESS = (PROTOCOL = TCPS)(HOST = orcl12.xxxxxxx.ap-northeast-1.rds.amazonaws.com)(PORT = 2484))
  (CONNECT_DATA =
   (SERVER = DEDICATED)
   (SERVICE_NAME = SSLLAB)
  )
 )
  • Edit the C:\app\client\Administrator\product\12.1.0\client_1\network\admin\sqlnet.ora file with the following content.
WALLET_LOCATION=  
  (SOURCE=
      (METHOD=file)
      (METHOD_DATA=  
         (DIRECTORY=C:\app\client\Administrator\product\12.1.0\client_1\ssl_wallet)))
SSL_CLIENT_AUTHENTICATION = FALSE    
SSL_VERSION = 1.2    
SSL_CIPHER_SUITES = (SSL_RSA_WITH_AES_256_CBC_SHA)    
SSL_SERVER_DN_MATCH = NO   
SQLNET.AUTHENTICATION_SERVICES = (TCPS,TNS)    
NAMES.DIRECTORY_PATH= (TNSNAMES, EZCONNECT) 
  • Set the TNS_ADMIN user environment variable:
TNS_ADMIN = C:\app\client\Administrator\product\12.1.0\client_1\network\admin\
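
On Windows, one way to set this as a persistent user environment variable is from a command prompt; a minimal sketch, assuming the client path above:

setx TNS_ADMIN "C:\app\client\Administrator\product\12.1.0\client_1\network\admin"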

7. Test the connection with SQL*Plus

sqlplus admin/admin12345@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCPS)(HOST=orcl12.xxxxxxxxx.ap-northeast-1.rds.amazonaws.com)(PORT=2484))(CONNECT_DATA=(SID=SSLLAB)))
or with TNS name service
sqlplus admin/admin12345@ORCL12
SELECT SYS_CONTEXT('USERENV', 'network_protocol') FROM DUAL;
tcps

Using SQL Developer for SSL/TLS connections

1. Download middleware

2. Convert the certificate to .der format using the following command.

openssl x509 -outform der -in C:\app\client\Administrator\product\12.1.0\client_1\ssl_wallet\rds-ca-2019-root.pem -out rds-ca-2019-root.der

Copy the output file to C:\app\client\Administrator\product\12.1.0\client_1\ssl_wallet\rds-ca-2019-root.der

3. Create the Keystore using the following command.

C:\app\client\Administrator\product\12.1.0\client_1\jdk\bin\keytool -keystore clientkeystore -genkey -alias client

Copy the output file to C:\app\client\Administrator\product\12.1.0\client_1\jdk\jre\lib\security\clientkeystore

4. Import the certificate into the key store using the following command.

C:\app\client\Administrator\product\12.1.0\client_1\jdk\bin\keytool -import -alias rds-root -keystore C:\app\client\Administrator\product\12.1.0\client_1\jdk\jre\lib\security\clientkeystore -file C:\app\client\Administrator\product\12.1.0\client_1\ssl_wallet\rds-ca-2019-root.der
Enter the clientkeystore password and confirm with 'yes' at the question below to import the certificate.

Trust this certificate? [no]:  yes
 Certificate was added to keystore

5. Confirm that the key store was updated successfully.

C:\app\client\Administrator\product\12.1.0\client_1\jdk\bin\keytool -list -v -keystore C:\app\client\Administrator\product\12.1.0\client_1\jdk\jre\lib\security\clientkeystore

6. Download the new version of JCE for JDK 6, remove the old jar files, and copy the new jar files into the directory C:\app\client\Administrator\product\12.1.0\client_1\jdk\jre\lib\security\

Note: If you are using another version of the JDK, please refer to the following link and download the correct version of JCE.

https://blogs.oracle.com/java-platform-group/diagnosing-tls,-ssl,-and-https

7. Configure the C:\app\client\sqldeveloper\sqldeveloper\bin\sqldeveloper.conf file by adding the following lines.

SetJavaHome C:\app\client\Administrator\product\12.1.0\client_1\jdk
#Configure some JDBC settings
AddVMOption -Djavax.net.ssl.trustStore=C:\app\client\Administrator\product\12.1.0\client_1\jdk\jre\lib\security\clientkeystore	
AddVMOption -Djavax.net.ssl.trustStoreType=JKS	
AddVMOption -Djavax.net.ssl.trustStorePassword=admin12345	
AddVMOption -Doracle.net.ssl_cipher_suites=TLS_RSA_WITH_AES_256_CBC_SHA

8. Test the connection to the AWS RDS Oracle instance with the SQL Developer tool using the following connection string.

jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCPS)(HOST=orcl12.cgl7xlmapx2h.ap-northeast-1.rds.amazonaws.com)(PORT=2484))(CONNECT_DATA=(SID=SSLLAB)))

Using JDBC to establish SSL/TLS connections

1. Source code sample.

The following code example shows how to set up the SSL connection using JDBC.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Properties;

public class OracleSslConnectionTest {
	private static final String DB_SERVER_NAME = "orcl12.xxxxxx.ap-northeast-1.rds.amazonaws.com";
    private static final String SSL_PORT = "2484";
    private static final String DB_SID = "SSLLAB";
    private static final String DB_USER = "admin";
    private static final String DB_PASSWORD = "admin12345";

    private static final String KEY_STORE_FILE_PATH = "C:\\app\\client\\Administrator\\product\\12.1.0\\client_1\\jdk\\jre\\lib\\security\\clientkeystore";
    private static final String KEY_STORE_PASS = "admin12345";
    private static final String SSL_CIPHER_SUITES = "TLS_RSA_WITH_AES_256_CBC_SHA";
    
	public static void main(String args[])  throws SQLException {  
		final Properties properties = new Properties();
        final String connectionString = String.format(
                "jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCPS)(HOST=%s)(PORT=%s))(CONNECT_DATA=(SID=%s))(SECURITY = (SSL_SERVER_CERT_DN = \"CN=Amazon RDS Root 2019 CA,OU=Amazon RDS,O=Amazon Web Services, Inc.,ST=Washington,L=Seattle,C=US\")))",
                DB_SERVER_NAME, SSL_PORT, DB_SID);
        properties.put("user", DB_USER);
        properties.put("password", DB_PASSWORD);
        
        properties.put("javax.net.ssl.trustStore", KEY_STORE_FILE_PATH);
        properties.put("javax.net.ssl.trustStoreType", "JKS");
        properties.put("javax.net.ssl.trustStorePassword", KEY_STORE_PASS);
        
        properties.put("oracle.net.ssl_cipher_suites", SSL_CIPHER_SUITES);
        
        final Connection connection = DriverManager.getConnection(connectionString, properties);
        // If no exception, that means handshake has passed, and an SSL connection can be opened
        System.out.println("connected..");
	}
	
}

2. Test the connection to the AWS RDS Oracle instance with the JDBC thin driver.

java -Djavax.net.debug=all -cp .;C:\app\client\Administrator\product\12.1.0\client_1\jdbc\lib\ojdbc7.jar OracleSslConnectionTest

The end. Good luck, and have fun with the AWS cloud.

Thank you for reading the DevopsRoles page!

AWS Certified Solutions Architect Exercises- part 3 Amazon VPC

Introduction

In the ever-evolving landscape of technology, mastering the skills and knowledge of AWS solution architecture is more crucial than ever. Understanding and practicing exercises related to Amazon Virtual Private Cloud (VPC) is a key component in becoming an AWS Certified Solutions Architect. This article, the third installment in our series, will guide you through essential exercises involving Amazon VPC. We will help you grasp how to set up and manage VPCs, understand their core components, and create a secure, flexible networking environment for your applications.

In this article, we’ll learn about Amazon VPC. The best way to become familiar with Amazon VPC is to build your own custom Amazon VPC and then deploy Amazon EC2 instances into it.

1. Today’s tasks

  • Create a Custom Amazon VPC
  • Create Two Subnets for Your Custom Amazon VPC
  • Connect Your Custom Amazon VPC to the Internet and Establish Routing
  • Launch an Amazon EC2 Instance and Test the Connection to the Internet.

2. Before you begin

  • Command-line tool to SSH into the Linux instance.

3. Let’s do it

EXERCISE 1:

Create a Custom Amazon VPC

1. Open the Amazon VPC console

2. In the navigation pane, choose Your VPCs, and Create VPC.

3. Specify the following VPC details as necessary and choose Create.

  • Name tag: My First VPC
  • IPv4 CIDR block: 192.168.0.0/16
  • IPv6 CIDR block:  No IPv6 CIDR Block
  • Tenancy:  Default

EXERCISE 2:

Create Two Subnets for Your Custom Amazon VPC

To add a subnet to your VPC using the console

1. Open the Amazon VPC console

2. In the navigation pane, choose Subnets, then Create subnet.

3. Specify the subnet details as necessary and choose Create.

  • Name tag: My First Public Subnet.
  • VPC: Choose the VPC from Exercise 1.
  • Availability Zone: Optionally choose an Availability Zone in which your subnet will reside, or leave the default No Preference to let AWS choose an Availability Zone for you.
  • IPv4 CIDR block: 192.168.1.0/24.

4. Create a subnet with a CIDR block equal to 192.168.2.0/24 and a name tag of My First Private Subnet. Create the subnet in the Amazon VPC from Exercise 1, and specify a different Availability Zone for the subnet than previously specified (for example, ap-northeast-1c). You have now created two new subnets, each in its own Availability Zone.

EXERCISE 3:

Connect Your Custom Amazon VPC to the Internet and Establish Routing

1. Create an IGW with a name tag of My First IGW and attach it to your custom Amazon VPC.

2. Add a route to the main route table for your custom Amazon VPC that directs Internet traffic (0.0.0.0/0) to the IGW.

3. Create a NAT gateway, place it in the public subnet of your custom Amazon VPC, and assign it an EIP.

4. Create a new route table with a name tag of My First Private Route Table and place it within your custom Amazon VPC. Add a route to it that directs Internet traffic (0.0.0.0/0) to the NAT gateway and associate it with the private subnet.
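
If you want to script Exercise 3 instead of using the console, the equivalent AWS CLI calls look roughly like the sketch below; all resource IDs are placeholders that you would replace with the IDs of the VPC, subnets, and route tables you created.

# Create and attach an Internet gateway, then route Internet traffic to it
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-0123456789abcdef0 --vpc-id vpc-0123456789abcdef0
aws ec2 create-route --route-table-id rtb-0123456789abcdef0 --destination-cidr-block 0.0.0.0/0 --gateway-id igw-0123456789abcdef0
# Allocate an EIP and create a NAT gateway in the public subnet
aws ec2 allocate-address --domain vpc
aws ec2 create-nat-gateway --subnet-id subnet-0123456789abcdef0 --allocation-id eipalloc-0123456789abcdef0
# Create a private route table, route Internet traffic to the NAT gateway, and associate it with the private subnet
aws ec2 create-route-table --vpc-id vpc-0123456789abcdef0
aws ec2 create-route --route-table-id rtb-0fedcba9876543210 --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-0123456789abcdef0
aws ec2 associate-route-table --route-table-id rtb-0fedcba9876543210 --subnet-id subnet-0fedcba9876543210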

EXERCISE 4:

Launch an Amazon EC2 Instance and Test the Connection to the Internet

1. Launch a t2.micro Amazon Linux AMI as an Amazon EC2 instance into the public subnet of your custom Amazon VPC, give it a name tag of My First Public Instance and select your key pair for secure access to the instance.

2. Securely access the Amazon EC2 instance in the public subnet via SSH with a key pair.

3. Execute an update to the operating system instance libraries by executing the following command:

sudo yum update -y

4. You should see an output showing the instance downloading software from the Internet and installing it.

5. Delete all resources created in this exercise.

Conclusion

Mastering exercises related to Amazon VPC not only prepares you better for the AWS Certified Solutions Architect exam but also equips you with vital skills for deploying and managing cloud infrastructure effectively. From creating and configuring VPCs to setting up route tables and network ACLs, each step in this process contributes to building a robust and secure network system. We hope this article boosts your confidence in applying the knowledge gained and continues your journey toward becoming an AWS expert.

If you have any questions or need further assistance, don’t hesitate to reach out to us. Best of luck on your path to becoming an AWS Certified Solutions Architect! Happy clouding! Thank you for reading the DevopsRoles page!

AWS Certified Solutions Architect Exercises- part 2 Amazon EC2 and Amazon EBS

In this article, we’ll learn about Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Elastic Block Store (Amazon EBS). Now, let’s go to AWS Certified Solutions Architect Exercises.

AWS Certified Solutions Architect Exercises

1. Today’s tasks

  • Launch and Connect to a Linux Instance
  • Launch a Windows Instance with Bootstrapping
  • Launch a Spot Instance
  • Access Metadata
  • Create an Amazon EBS Volume and Show That It Remains After the Instance Is Terminated
  • Take a Snapshot and Restore

2. Before you begin

  • Puttygen.exe: Tool creating a .ppk file from a .pem file
  • Command-line tool to SSH into the Linux instance.

3. Let’s do it

EXERCISE 1: Launch and Connect to a Linux Instance

In this exercise, you will launch a new Linux instance, log in with SSH, and install any security updates.

1. Launch an instance in the Amazon EC2 console.

2. Choose the Amazon Linux 2 AMI (HVM), SSD Volume Type – ami-0c3fd0f5d33134a76.

3. Choose the t2.micro instance type.

4. Configure Instance Details as below

  • Network: Launch the instance in the default VPC.
  • Subnet: Select a default subnet
  • Auto-assign Public IP: Enable.

5. Keep the default Add Storage settings, click the 「Next: Add tags」 button, and on the next screen click the 「Add tag」 button to add a tag to the instance.

Example: add a tag with Key Name and Value ec2-demo.

6. Create a new security group called demo-sg.

7. Add a rule to demo-sg :

7.1. Allow SSH access from the IP address of your computer by setting Source to My IP (recommended for security).

7.2. Or allow access from any IP by setting Source to Custom with CIDR 0.0.0.0/0.

8. Review and Launch the instance.

9. When prompted for a key pair, choose a key pair you already have or create a new one and download the private portion. Amazon generates a keyname.pem file, and you will need a keyname.ppk file to connect to the instance via SSH. Puttygen.exe is one utility that will create a .ppk file from a .pem file.

10. SSH into the instance using the IPv4 public IP address, the user name ec2-user, and the keyname.ppk file created at step 9.

To SSH into the EC2 instance you created, you can use tools such as Tera Term.

11. From the command-line prompt, run

sudo yum update

12. Close the SSH window and terminate the instance.

EXERCISE 2: Launch a Windows Instance with Bootstrapping

In this exercise, you will launch a Windows instance and specify a very simple bootstrap script. You will then confirm that the bootstrap script was executed on the instance.

1. Launch an instance in the Amazon EC2 console.

2. Choose the Microsoft Windows Server 2019 Base AMI.

3. Choose the t2.micro instance type.

4. Launch the instance in the default VPC and a default subnet, with Auto-assign Public IP: Enable.

5. In the Advanced Details section, enter the following text as UserData:

<script>
md c:\temp
</script>

6. Add a tag to the instance of Key: Name, Value: EC2-Demo2

7. Use the demo-sg security group from Exercise 1.

8. Launch the instance.

9. Use the key pair from Exercise 1.

10. On the Connect to Your Instance screen, decrypt the administrator password and then download the RDP file to attempt to connect to the instance. Your attempt should fail because the demo-sg security group does not allow RDP access.

11. Open the demo-sg security group and add a rule that allows RDP access from your IP address.

12. Attempt to access the instance via RDP again.

13. Once the RDP session is connected, open Windows Explorer and confirm that the c:\temp folder has been created.

14. End the RDP session and terminate the instance.

EXERCISE 3: Launch a Spot Instance

In this exercise, you will create a Spot Instance.

1. Launch an instance in the Amazon EC2 console.

2. Choose the Amazon Linux AMI.

3. Choose the instance type.

4. On the Configure Instance page, request a Spot Instance.

5. Launch the instance in the default VPC and a default subnet, with Auto-assign Public IP: Enable.

6. Request a Spot Instance and enter a bid a few cents above the recorded Spot price.

7. Finish launching the instance.

8. Go to the Spot Request page. Watch your request.

9. Find the instance on the Instances page of the Amazon EC2 console.

10. Once the instance is running, terminate it.

EXERCISE 4: Access Metadata

In this exercise, you will access the instance metadata from the OS.

1. Execute steps as in EXERCISE 1.

2. At the Linux command prompt, retrieve a list of the available metadata by typing:
curl http://169.254.169.254/latest/meta-data/

3. To see a value, add the name to the end of the URL. For example, to see the security groups, type:
curl http://169.254.169.254/latest/meta-data/security-groups

4. Close the SSH window and terminate the instance.

EXERCISE 5: Create an Amazon EBS Volume and Show That It Remains After the Instance Is Terminated

In this exercise, you will see how an Amazon EBS volume persists beyond the life of an instance.

1. Execute steps as in EXERCISE 1.

2. However, in step 5 「Add Storage 」, add a second Amazon EBS volume of size 50 GB. Note that the Root Volume is set to Delete on Termination.

3. Launch the instance. It now has two Amazon EBS volumes in the Amazon EBS console; name them both 「EC2-Demo5」.

4. Terminate the instance.

5. Check that the boot drive is destroyed, but the additional Amazon EBS volume remains and now says Available. Do not delete the Available volume.

EXERCISE 6: Take a Snapshot and Restore

This exercise guides you through taking a snapshot and restoring it in three different ways.

1. Find the volume you created in Exercise 5 in the Amazon EBS Menu.

2. Take a snapshot of that volume. Tag the snapshot with Name Exercise-6 and wait for the snapshot to complete.

3. Method 1 to restore the EBS volume: On the Snapshot console, choose the new snapshot created at step 2 and select Create Volume. Create the volume with all the defaults and tag it with Name 「Exercise-6-volume-restored」.

4. Method 2 to restore the EBS volume: Locate the snapshot again and choose Create Volume, setting the size of the new volume to 100 GB (restoring the snapshot to a new, larger volume is how you address the problem of increasing the size of an existing volume). Tag it with Name 「Exercise-6-volume-restored-100GB」.

5. Method 3 to restore EBS volume: Locate the snapshot again and choose Copy. Copy the snapshot to another region.

Go to the other region and wait for the snapshot to become available. Create a volume from the snapshot in the new region.

This is how you share an Amazon EBS volume between regions; that is, by taking a snapshot and copying the snapshot.
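
The same snapshot and cross-Region copy can also be done from the AWS CLI; a minimal sketch in which the volume ID, snapshot ID, and Regions are placeholders:

# Create a snapshot of the volume and tag it
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 \
    --tag-specifications 'ResourceType=snapshot,Tags=[{Key=Name,Value=Exercise-6}]'
# Copy the snapshot to another Region (the --region value is the destination Region)
aws ec2 copy-snapshot --region us-west-2 --source-region us-east-1 \
    --source-snapshot-id snap-0123456789abcdef0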

6. Delete all four volumes.

AWS Certified Solutions Architect Exercises - part 2: Amazon EC2 and Amazon EBS. I hope this is helpful. Thank you for reading the DevopsRoles page!