Category Archives: AWS

Explore Amazon Web Services (AWS) at DevOpsRoles.com. Access in-depth tutorials and guides to master AWS for cloud computing and DevOps automation.

Using Windows Active Directory, ADFS, and SAML 2.0 to Log In to the AWS Console: Notes

Introduction

To use Windows Active Directory (AD), Active Directory Federation Services (ADFS), and Security Assertion Markup Language 2.0 (SAML 2.0) to log in to the AWS Management Console, you can follow these general steps:

  • Set up an ADFS server: Install and configure ADFS on a Windows server that is joined to your Active Directory domain. This server will act as the identity provider (IdP) in the SAML authentication flow.
  • Configure AWS as a relying party trust: In the ADFS server, create a relying party trust for AWS. This trust establishes a relationship between the ADFS server and AWS, allowing the exchange of SAML assertions.
  • Obtain the AWS metadata document: Download the AWS SAML metadata document from the AWS Management Console. This document contains the necessary configuration information for AWS.
  • Configure claims rules: Set up claims rules in the ADFS server to map Active Directory attributes to the corresponding AWS SAML attributes. This step ensures that the necessary user information is included in the SAML assertion sent to AWS.
  • Set up AWS IAM roles: Create IAM roles in AWS that define the permissions and access policies for users authenticated through SAML. These roles will determine the level of access users have in the AWS Management Console.
  • Configure AWS IAM identity provider: Create an IAM identity provider in AWS and upload the ADFS metadata XML file. This step establishes the trust relationship between AWS and the ADFS server (a CLI sketch follows this list).
  • Create an IAM role mapping: Create a role mapping in AWS that maps the SAML attributes received from ADFS to the corresponding IAM roles. This mapping determines which IAM role should be assumed based on the user’s attributes.
  • Test the login process: Attempt to log in to the AWS Management Console using the ADFS server as the IdP. You should be redirected to the ADFS login page, and after successful authentication, you will be logged in to the AWS Management Console with the appropriate IAM role.
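For reference, here is a minimal CLI sketch of the IAM side of this setup. The metadata file name, role name, and account ID below are placeholders of my own, not values from the lab:

# Create the SAML identity provider from the ADFS federation metadata
# (FederationMetadata.xml exported from your ADFS server)
aws iam create-saml-provider --name ADFS --saml-metadata-document file://FederationMetadata.xml

# Trust policy that lets ADFS-federated users assume a role via SAML
cat > saml-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Federated": "arn:aws:iam::123456789012:saml-provider/ADFS" },
    "Action": "sts:AssumeRoleWithSAML",
    "Condition": { "StringEquals": { "SAML:aud": "https://signin.aws.amazon.com/saml" } }
  }]
}
EOF

aws iam create-role --role-name ADFS-Administrator --assume-role-policy-document file://saml-trust.json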

What is ADFS and SAML 2.0?

ADFS Overview

ADFS (Active Directory Federation Services) is a component of Windows Server that provides users with single sign-on access to systems and applications located across organizational boundaries.

SAML 2.0 Overview

SAML (Security Assertion Markup Language) 2.0 is an open standard for exchanging authentication and authorization data between identity providers (IdP) and service providers (SP).

When integrated, ADFS acts as the IdP and AWS acts as the SP, enabling users to log in to AWS using their Windows domain credentials.

Benefits of Using ADFS and SAML with AWS

  • Centralized identity management using Active Directory.
  • Improved security with token-based authentication.
  • No need to manage IAM user passwords in AWS.
  • Enhanced user experience through seamless SSO.
  • Audit trail and compliance alignment with enterprise policies.

Using Windows Active Directory ADFS and SAML 2.0 to login AWS Console

Today, I tried the lab Enabling Federation to AWS Using Windows Active Directory, ADFS, and SAML 2.0.

Here are my notes for anyone using Windows Active Directory, ADFS, and SAML 2.0 to log in to the AWS Console:

  • The CloudFormation template is an older version, and some of its AMI IDs are outdated and can no longer be used. In my case, I was working in the Tokyo region; the AMI was unavailable and the stack failed.
  • Do not use Windows Server 2016 for your AD server. The “Configure AWS as a trusted relying party” step does not succeed, and you will be unable to log in to the AWS console afterward.
  • The CloudFormation template does not set up IIS or create a certificate; you must configure IIS and create the certificate manually.
  • If you get the following error when you visit https://localhost/adfs/ls/IdpInitiatedSignOn.aspx:

An error occurred
The resource you are trying to access is not available. Contact your administrator for more information.

Change the setting of the EnableIdpInitiatedSignonPage property:

Set-AdfsProperties -EnableIdpInitiatedSignonPage $True

You have finished the lab once you can log in to the AWS console with the administrator role.


Conclusion

These steps provide a general overview of using Windows Active Directory, ADFS, and SAML 2.0 to log in to the AWS Console. The specific configuration details may vary depending on your environment and setup, so it is recommended to consult the relevant documentation from AWS and Microsoft for detailed instructions on setting up the integration between ADFS, SAML, and AWS. I hope you find this helpful. Thank you for reading the DevopsRoles page!

Install EKS on AWS

In this tutorial, you will learn how to set up an EKS 1.16 cluster with eksctl. Amazon Elastic Kubernetes Service (Amazon EKS) gives you the flexibility to start, run, and scale Kubernetes applications in the AWS cloud or on-premises. Now, let’s install EKS on AWS.

1. First, create a free account on AWS.


Example: create a user named devopsroles-demo, as shown in the picture below:

2. Install the AWS CLI on Windows


Create AWS Profile

Named profiles make it easy to switch between different AWS IAM user or IAM role identities with ‘export AWS_PROFILE=PROFILE_NAME‘. I will not use the ‘default‘ profile created by ‘aws configure‘. For example, I create a named AWS profile ‘devopsroles-demo‘ in one of two ways:

  • ‘aws configure --profile devopsroles-demo’
E:\Study\cka\devopsroles>aws configure --profile devopsroles-demo
AWS Access Key ID [None]: XXXXZHBNJLCKKCE7EQQQ
AWS Secret Access Key [None]: fdfdfdfd43434dYlQ1il1xKNCnqwUvNHFSv41111
Default region name [None]:
Default output format [None]:

E:\Study\cka\devopsroles>set AWS_PROFILE=devopsroles-demo
E:\Study\cka\devopsroles>aws sts get-caller-identity
{
    "UserId": "AAQAZHBNJLCKPEGKYAV1R",
    "Account": "456602660300",
    "Arn": "arn:aws:iam::456602660300:user/devopsroles-demo"
}
  • Create profile entry in ~/.aws/credentials file

The content of the credentials file is as below:

[devopsroles-demo]
aws_access_key_id = YOUR_ACCESS_KEY
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
region = YOUR_REGION

Check the new profile:

# Linux/macOS
export AWS_PROFILE=devopsroles-demo
# Windows
set AWS_PROFILE=devopsroles-demo

3. Install aws-iam-authenticator

# Windows
# install chocolatey first: https://chocolatey.org/install
choco install -y aws-iam-authenticator

4. Install kubectl


choco install kubernetes-cli
kubectl version

5. Install eksctl


# install eksctl from chocolatey
choco install -y eksctl
eksctl version

6. Create an SSH key for the EKS worker nodes

ssh-keygen -t rsa -f ~/.ssh/devopsroles_worker_nodes_demo.pem
# creates the key pair: devopsroles_worker_nodes_demo.pem (private) and devopsroles_worker_nodes_demo.pem.pub (public)

7. Set up the EKS cluster with eksctl (so you don’t need to create a VPC manually)

The eksctl tool will create the K8s control plane (master nodes, etcd, API server, etc.), worker nodes, a VPC, security groups, subnets, routes, an internet gateway, and so on.

  • uses the official AWS EKS AMI
  • creates a dedicated VPC
  • note: EKS is not supported in us-west-1
eksctl create cluster  --name devopsroles-from-eksctl --version 1.16  --region us-west-2  --nodegroup-name workers --node-type t3.medium --nodes 2 --nodes-min 1 --nodes-max 4 --ssh-access --ssh-public-key ~/.ssh/devopsroles_worker_nodes_demo.pem.pub --managed

The output

E:\Study\cka\devopsroles>eksctl create cluster  --name devopsroles-from-eksctl --version 1.16  --region us-west-2  --nodegroup-name workers --node-type t3.medium --nodes 2 --nodes-min 1 --nodes-max 4 --ssh-access
--ssh-public-key ~/.ssh/devopsroles_worker_nodes_demo.pem.pub --managed
2021-05-23 15:19:30 [ℹ]  eksctl version 0.49.0
2021-05-23 15:19:30 [ℹ]  using region us-west-2
2021-05-23 15:19:31 [ℹ]  setting availability zones to [us-west-2a us-west-2b us-west-2c]
2021-05-23 15:19:31 [ℹ]  subnets for us-west-2a - public:192.168.0.0/19 private:192.168.96.0/19
2021-05-23 15:19:31 [ℹ]  subnets for us-west-2b - public:192.168.32.0/19 private:192.168.128.0/19
2021-05-23 15:19:31 [ℹ]  subnets for us-west-2c - public:192.168.64.0/19 private:192.168.160.0/19
2021-05-23 15:19:31 [ℹ]  using SSH public key "C:\\Users\\USERNAME/.ssh/devopsroles_worker_nodes_demo.pem.pub" as "eksctl-devopsroles-from-eksctl-nodegroup-workers-29:e7:8c:c3:df:a5:23:1b:bb:74:ad:51:bc:fb:80:9b" 
2021-05-23 15:19:32 [ℹ]  using Kubernetes version 1.16
2021-05-23 15:19:32 [ℹ]  creating EKS cluster "devopsroles-from-eksctl" in "us-west-2" region with managed nodes
2021-05-23 15:19:32 [ℹ]  will create 2 separate CloudFormation stacks for cluster itself and the initial managed nodegroup
2021-05-23 15:19:32 [ℹ]  if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-west-2 --cluster=devopsroles-from-eksctl'
2021-05-23 15:19:32 [ℹ]  CloudWatch logging will not be enabled for cluster "devopsroles-from-eksctl" in "us-west-2"
2021-05-23 15:19:32 [ℹ]  you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=us-west-2 --cluster=devopsroles-from-eksctl'
2021-05-23 15:19:32 [ℹ]  Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "devopsroles-from-eksctl" in "us-west-2"
2021-05-23 15:19:32 [ℹ]  2 sequential tasks: { create cluster control plane "devopsroles-from-eksctl", 2 sequential sub-tasks: { wait for control plane to become ready, create managed nodegroup "workers" } }
2021-05-23 15:19:32 [ℹ]  building cluster stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:19:34 [ℹ]  deploying stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:20:04 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:20:35 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:21:36 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:22:37 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:23:39 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:24:40 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:25:41 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:26:42 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:27:44 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:28:45 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:29:46 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:30:47 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:31:48 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:31:52 [ℹ]  building managed nodegroup stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:31:53 [ℹ]  deploying stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:31:53 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:32:09 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:32:27 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:32:48 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:33:05 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:33:26 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:33:47 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:34:06 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:34:24 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:34:43 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:35:01 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:35:17 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:35:37 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:35:38 [ℹ]  waiting for the control plane availability...
2021-05-23 15:35:38 [✔]  saved kubeconfig as "C:\\Users\\USERNAME/.kube/config"
2021-05-23 15:35:38 [ℹ]  no tasks
2021-05-23 15:35:38 [✔]  all EKS cluster resources for "devopsroles-from-eksctl" have been created
2021-05-23 15:35:39 [ℹ]  nodegroup "workers" has 2 node(s)
2021-05-23 15:35:39 [ℹ]  node "ip-192-168-53-14.us-west-2.compute.internal" is ready
2021-05-23 15:35:39 [ℹ]  node "ip-192-168-90-229.us-west-2.compute.internal" is ready
2021-05-23 15:35:39 [ℹ]  waiting for at least 1 node(s) to become ready in "workers"
2021-05-23 15:35:39 [ℹ]  nodegroup "workers" has 2 node(s)
2021-05-23 15:35:39 [ℹ]  node "ip-192-168-53-14.us-west-2.compute.internal" is ready
2021-05-23 15:35:39 [ℹ]  node "ip-192-168-90-229.us-west-2.compute.internal" is ready
2021-05-23 15:35:47 [ℹ]  kubectl command should work with "C:\\Users\\USERNAME/.kube/config", try 'kubectl get nodes'
2021-05-23 15:35:47 [✔]  EKS cluster "devopsroles-from-eksctl" in "us-west-2" region is ready

You have created a cluster. The cluster credentials have been added to ~/.kube/config.
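A quick way to confirm that kubectl is pointed at the new cluster and that the worker nodes have joined (these are standard kubectl commands, not specific to this lab):

# show the context eksctl wrote into ~/.kube/config
kubectl config current-context

# both managed worker nodes should report STATUS Ready
kubectl get nodes -o wide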

The result on AWS: the new resources are visible in the Amazon EKS Clusters, CloudFormation, and EC2 consoles.

For example, some basic commands:

Get info about cluster resources

aws eks describe-cluster --name devopsroles-from-eksctl --region us-west-2

Get services

kubectl get svc

The output

E:\Study\cka\devopsroles>kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   11m

Delete EKS Cluster

eksctl delete cluster --name devopsroles-from-eksctl --region us-west-2

The output

E:\Study\cka\devopsroles>eksctl delete cluster --name devopsroles-from-eksctl --region us-west-2
2021-05-23 15:57:31 [ℹ]  eksctl version 0.49.0
2021-05-23 15:57:31 [ℹ]  using region us-west-2
2021-05-23 15:57:31 [ℹ]  deleting EKS cluster "devopsroles-from-eksctl"
2021-05-23 15:57:34 [ℹ]  deleted 0 Fargate profile(s)
2021-05-23 15:57:37 [✔]  kubeconfig has been updated
2021-05-23 15:57:37 [ℹ]  cleaning up AWS load balancers created by Kubernetes objects of Kind Service or Ingress
2021-05-23 15:57:45 [ℹ]  2 sequential tasks: { delete nodegroup "workers", delete cluster control plane "devopsroles-from-eksctl" [async] }
2021-05-23 15:57:45 [ℹ]  will delete stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:57:45 [ℹ]  waiting for stack "eksctl-devopsroles-from-eksctl-nodegroup-workers" to get deleted
2021-05-23 15:57:45 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:58:02 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:58:19 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:58:40 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:58:58 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:59:19 [ℹ]  waiting for CloudFormation stack "eksctl-devopsroles-from-eksctl-nodegroup-workers"
2021-05-23 15:59:20 [ℹ]  will delete stack "eksctl-devopsroles-from-eksctl-cluster"
2021-05-23 15:59:20 [✔]  all cluster resources were deleted

Conclusion

You have installed EKS on AWS. I hope you find this helpful. Thank you for reading the DevopsRoles page!

Reduce AWS Billing and Setup Email Alerts

In this tutorial, you will learn how to reduce AWS billing and set up email alerts.

Set up an AWS Billing email alert

You need to log in to the AWS Console as the root user.

Follow these steps in the AWS Console:

1. Choose “My Billing Dashboard”.

2. Create a budget.

3. Choose “Cost budget”.

4. For example, my “Budgeted amount” is 15 USD.

5. Input the email contacts for alerts.

6. Finally, create the budget (a CLI sketch follows these steps).
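If you prefer the command line, the same budget can be created through the AWS Budgets API. A minimal sketch, assuming an account ID of 123456789012 and an email alert at 80% of the 15 USD monthly cost budget (all values are illustrative):

aws budgets create-budget \
  --account-id 123456789012 \
  --budget '{"BudgetName": "monthly-cost-budget", "BudgetLimit": {"Amount": "15", "Unit": "USD"}, "BudgetType": "COST", "TimeUnit": "MONTHLY"}' \
  --notifications-with-subscribers '[{"Notification": {"NotificationType": "ACTUAL", "ComparisonOperator": "GREATER_THAN", "Threshold": 80, "ThresholdType": "PERCENTAGE"}, "Subscribers": [{"SubscriptionType": "EMAIL", "Address": "you@example.com"}]}]'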

Conclusion

You have set up email alerts to help keep your AWS billing under control. I hope you find this helpful. Thank you for reading the DevopsRoles page!

How to set up an SSL/TLS connection to AWS RDS Oracle Database using SQL*Plus, SQL Developer, and JDBC

Introduction

Hi everyone, today I am going to show you how to set up an SSL/TLS connection from a client to an AWS RDS Oracle database.

Prepare AWS RDS Oracle Database

  • An EC2 instance with Windows Server 2019.
  • An RDS Oracle instance (12.1.0.2.v19)
  • A working (non-SSL) TCP connection to the RDS Oracle instance

Check the current connection with the following commands:

sqlplus admin/admin12345@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=orcl12.xxxxxxx.ap-northeast-1.rds.amazonaws.com)(PORT=1521))(CONNECT_DATA=(SID=SSLLAB)))

SQL> SELECT SYS_CONTEXT('USERENV', 'network_protocol') FROM DUAL;

Task today

  1. Modify the DB instance to change the CA from rds-ca-2015 to rds-ca-2019.
  2. Adding the SSL Option
  3. Using SQL*Plus for SSL/TLS connections (with Oracle Wallets).
  4. Using SQL Developer for SSL/TLS connections (with JKS).
  5. Using JDBC to establish SSL/TLS connections (with JKS).

Modify the DB instance to change the CA from rds-ca-2015 to rds-ca-2019

1. Sign in to the AWS Management Console and open the Amazon RDS console at https://console.aws.amazon.com/rds/.

2. In the navigation pane, choose Databases, and then choose the DB instance that you want to modify.

3. Choose Modify. The Modify DB Instance page appears.

4. In the Network & Security section, choose rds-ca-2019.

5. Choose Continue and check the summary of modifications.

6. To apply the changes immediately, choose Apply immediately. Choosing this option restarts your database immediately.

Adding the SSL Option

1. Create a new option group, or modify an existing one, to which you can add the SSL option for your RDS instance.

Add the SSL option to the option group.

Set the following options:

  • SQLNET.CIPHER_SUITE: SSL_RSA_WITH_AES_256_CBC_SHA
  • SQLNET.SSL_VERSION: 1.0 or 1.2
  • FIPS.SSLFIPS_140: TRUE

2. Configure the security group used by your RDS Oracle instance to allow inbound traffic on port 2484; the source range should be your VPC’s IPv4 CIDR or that of the EC2 client instance.
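As a sketch, the same inbound rule can be added with the AWS CLI; the security group ID and CIDR below are placeholders for your own values:

# allow the TCPS listener port (2484) from your client network
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 2484 --cidr 10.0.0.0/16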

Using SQL*Plus for SSL/TLS connections

1. Download middleware

  • Oracle Database Client (12.1.0.2.0) for Microsoft Windows (x64), required for the orapki utility (download link).

Install Folder path: C:\app\client\Administrator\product\12.1.0\client_1

2. Download the 2019 root certificate that works for all AWS Regions and put the file in the ssl_wallet directory.

https://s3.amazonaws.com/rds-downloads/rds-ca-2019-root.pem

Folder path: C:\app\client\Administrator\product\12.1.0\client_1\ssl_wallet

3. Run the following command to create the Oracle wallet.

C:\app\client\Administrator\product\12.1.0\client_1\BIN\orapki wallet create -wallet C:\app\client\Administrator\product\12.1.0\client_1\ssl_wallet -auto_login_only

4. Run the following command to add a cert to the Oracle wallet.

C:\app\client\Administrator\product\12.1.0\client_1\BIN\orapki wallet add -wallet C:\app\client\Administrator\product\12.1.0\client_1\ssl_wallet -trusted_cert -cert C:\app\client\Administrator\product\12.1.0\client_1\ssl_wallet\rds-ca-2019-root.pem -auto_login_only

5. Run the following command to confirm that the wallet was updated successfully.

C:\app\client\Administrator\product\12.1.0\client_1\BIN\orapki wallet display -wallet C:\app\client\Administrator\product\12.1.0\client_1\ssl_wallet

6. Create the net service name to log in with SQL*Plus.

  • Create a file name C:\app\client\Administrator\product\12.1.0\client_1\network\admin\tnsnames.ora with content.
ORCL12 =
(DESCRIPTION =
  (ADDRESS = (PROTOCOL = TCPS)(HOST = orcl12.xxxxxxx.ap-northeast-1.rds.amazonaws.com)(PORT = 2484))
  (CONNECT_DATA =
    (SERVER = DEDICATED)
    (SERVICE_NAME = SSLLAB)
  )
)
  • Edit C:\app\client\Administrator\product\12.1.0\client_1\network\admin\sqlnet.ora file with content.
WALLET_LOCATION=  
  (SOURCE=
      (METHOD=file)
      (METHOD_DATA=  
         (DIRECTORY=C:\app\client\Administrator\product\12.1.0\client_1\ssl_wallet)))
SSL_CLIENT_AUTHENTICATION = FALSE    
SSL_VERSION = 1.2    
SSL_CIPHER_SUITES = (SSL_RSA_WITH_AES_256_CBC_SHA)    
SSL_SERVER_DN_MATCH = NO   
SQLNET.AUTHENTICATION_SERVICES = (TCPS,TNS)    
NAMES.DIRECTORY_PATH= (TNSNAMES, EZCONNECT) 
  • Set the TNS_ADMIN user environment variable (one way to set it is shown below):
TNS_ADMIN = C:\app\client\Administrator\product\12.1.0\client_1\network\admin\
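One way to set this persistently on Windows (assuming the client installation path used throughout this post) is with setx from a Command Prompt; it takes effect in newly opened windows:

setx TNS_ADMIN "C:\app\client\Administrator\product\12.1.0\client_1\network\admin"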

7. Test the connection with SQL*Plus

sqlplus admin/admin12345@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCPS)(HOST=orcl12.xxxxxxxxx.ap-northeast-1.rds.amazonaws.com)(PORT=2484))(CONNECT_DATA=(SID=SSLLAB)))

or with the TNS net service name:

sqlplus admin/admin12345@ORCL12

SQL> SELECT SYS_CONTEXT('USERENV', 'network_protocol') FROM DUAL;
tcps

Using SQL Developer for SSL/TLS connections

1. Download middleware (the same Oracle Database Client as above; its bundled JDK provides the keytool utility used below).

2. Convert the certificate to .der format using the following command.

openssl x509 -outform der -in C:\app\client\Administrator\product\12.1.0\client_1\ssl_wallet\rds-ca-2019-root.pem -out rds-ca-2019-root.der

Copy the output file to C:\app\client\Administrator\product\12.1.0\client_1\ssl_wallet\rds-ca-2019-root.der

3. Create the Keystore using the following command.

C:\app\client\Administrator\product\12.1.0\client_1\jdk\bin\keytool -keystore clientkeystore -genkey -alias client

Copy the output file to C:\app\client\Administrator\product\12.1.0\client_1\jdk\jre\lib\security\clientkeystore

4. Import the certificate into the keystore using the following command.

C:\app\client\Administrator\product\12.1.0\client_1\jdk\bin\keytool -import -alias rds-root -keystore C:\app\client\Administrator\product\12.1.0\client_1\jdk\jre\lib\security\clientkeystore -file C:\app\client\Administrator\product\12.1.0\client_1\ssl_wallet\rds-ca-2019-root.der

Enter the clientkeystore password, then confirm with yes at the question below to import the certificate:

Trust this certificate? [no]:  yes
Certificate was added to keystore

5. Confirm that the key store was updated successfully.

C:\app\client\Administrator\product\12.1.0\client_1\jdk\bin\keytool -list -v -keystore C:\app\client\Administrator\product\12.1.0\client_1\jdk\jre\lib\security\clientkeystore

6. Download the new version of JCE for JDK 6, remove the old JAR files, and copy the new JAR files into the directory C:\app\client\Administrator\product\12.1.0\client_1\jdk\jre\lib\security\

Note: If you are using another version of the JDK, refer to the following link and download the correct version of JCE.

https://blogs.oracle.com/java-platform-group/diagnosing-tls,-ssl,-and-https

7. Configure the C:\app\client\sqldeveloper\sqldeveloper\bin\sqldeveloper.conf file by adding the following lines.

SetJavaHome C:\app\client\Administrator\product\12.1.0\client_1\jdk
#Configure some JDBC settings
AddVMOption -Djavax.net.ssl.trustStore=C:\app\client\Administrator\product\12.1.0\client_1\jdk\jre\lib\security\clientkeystore	
AddVMOption -Djavax.net.ssl.trustStoreType=JKS	
AddVMOption -Djavax.net.ssl.trustStorePassword=admin12345	
AddVMOption -Doracle.net.ssl_cipher_suites=TLS_RSA_WITH_AES_256_CBC_SHA

8. Test the connection to the AWS RDS Oracle instance with the SQL Developer tool, using the following connection string.

jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCPS)(HOST=orcl12.cgl7xlmapx2h.ap-northeast-1.rds.amazonaws.com)(PORT=2484))(CONNECT_DATA=(SID=SSLLAB)))

Using JDBC to establish SSL/TLS connections

1. Source code sample.

The following code example shows how to set up the SSL connection using JDBC.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Properties;

public class OracleSslConnectionTest {
    private static final String DB_SERVER_NAME = "orcl12.xxxxxx.ap-northeast-1.rds.amazonaws.com";
    private static final String SSL_PORT = "2484";
    private static final String DB_SID = "SSLLAB";
    private static final String DB_USER = "admin";
    private static final String DB_PASSWORD = "admin12345";

    // JKS keystore created with keytool in the previous section
    private static final String KEY_STORE_FILE_PATH = "C:\\app\\client\\Administrator\\product\\12.1.0\\client_1\\jdk\\jre\\lib\\security\\clientkeystore";
    private static final String KEY_STORE_PASS = "admin12345";
    private static final String SSL_CIPHER_SUITES = "TLS_RSA_WITH_AES_256_CBC_SHA";

    public static void main(String[] args) throws SQLException {
        final Properties properties = new Properties();
        final String connectionString = String.format(
                "jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCPS)(HOST=%s)(PORT=%s))(CONNECT_DATA=(SID=%s))(SECURITY = (SSL_SERVER_CERT_DN = \"CN=Amazon RDS Root 2019 CA,OU=Amazon RDS,O=Amazon Web Services, Inc.,ST=Washington,L=Seattle,C=US\")))",
                DB_SERVER_NAME, SSL_PORT, DB_SID);
        properties.put("user", DB_USER);
        properties.put("password", DB_PASSWORD);

        // point the driver at the trust store that holds the RDS root certificate
        properties.put("javax.net.ssl.trustStore", KEY_STORE_FILE_PATH);
        properties.put("javax.net.ssl.trustStoreType", "JKS");
        properties.put("javax.net.ssl.trustStorePassword", KEY_STORE_PASS);

        properties.put("oracle.net.ssl_cipher_suites", SSL_CIPHER_SUITES);

        final Connection connection = DriverManager.getConnection(connectionString, properties);
        // if no exception was thrown, the TLS handshake passed and the SSL connection is open
        System.out.println("connected..");
        connection.close();
    }
}
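Before running the test, compile the class; only JDK classes are referenced, so no extra classpath is needed at compile time:

javac OracleSslConnectionTest.java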

2. Test the connection to the AWS RDS Oracle instance with the JDBC thin driver.

java -Djavax.net.debug=all -cp .;C:\app\client\Administrator\product\12.1.0\client_1\jdbc\lib\ojdbc7.jar OracleSslConnectionTest

That’s the end. Good luck, and have fun with the AWS cloud.

Thank you for reading the DevopsRoles page!

AWS Certified Solutions Architect Exercises - Part 3: Amazon VPC

Introduction

In the ever-evolving landscape of technology, mastering the skills and knowledge of AWS solution architecture is more crucial than ever. Understanding and practicing exercises related to Amazon Virtual Private Cloud (VPC) is a key component in becoming an AWS Certified Solutions Architect. This article, the third installment in our series, will guide you through essential exercises involving Amazon VPC. We will help you grasp how to set up and manage VPCs, understand their core components, and create a secure, flexible networking environment for your applications.

In this article, we’ll learn about Amazon VPC. The best way to become familiar with Amazon VPC is to build your own custom Amazon VPC and then deploy Amazon EC2 instances into it.

1. Today’s tasks

  • Create a Custom Amazon VPC
  • Create Two Subnets for Your Custom Amazon VPC
  • Connect Your Custom Amazon VPC to the Internet and Establish Routing
  • Launch an Amazon EC2 Instance and Test the Connection to the Internet.

2. Before you begin

  • Command-line tool to SSH into the Linux instance.

3. Let’s do it

EXERCISE 1:

Create a Custom Amazon VPC

1. Open the Amazon VPC console

2. In the navigation pane, choose Your VPCs, and Create VPC.

3. Specify the following VPC details as necessary and choose Create (an equivalent CLI sketch follows this list).

  • Name tag: My First VPC
  • IPv4 CIDR block: 192.168.0.0/16
  • IPv6 CIDR block:  No IPv6 CIDR Block
  • Tenancy:  Default
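For comparison, a minimal CLI sketch of the same exercise (the tag value mirrors the console steps; no IPv6 block is requested):

aws ec2 create-vpc \
  --cidr-block 192.168.0.0/16 \
  --instance-tenancy default \
  --tag-specifications '[{"ResourceType":"vpc","Tags":[{"Key":"Name","Value":"My First VPC"}]}]'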

EXERCISE 2:

Create Two Subnets for Your Custom Amazon VPC

To add a subnet to your VPC using the console

1. Open the Amazon VPC console

2. In the navigation pane, choose Subnets, and then choose Create subnet.

3. Specify the subnet details as necessary and choose Create.

  • Name tag: My First Public Subnet.
  • VPC: Choose the VPC from Exercise 1.
  • Availability Zone: Optionally choose an Availability Zone in which your subnet will reside, or leave the default No Preference to let AWS choose an Availability Zone for you.
  • IPv4 CIDR block: 192.168.1.0/24.

4. Create a subnet with a CIDR block equal to 192.168.2.0/24 and a name tag of My First Private Subnet. Create the subnet in the Amazon VPC from Exercise 1, and specify a different Availability Zone for the subnet than previously specified (for example, ap-northeast-1c). You have now created two new subnets, each in its own Availability Zone.
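The equivalent CLI sketch, with a placeholder VPC ID and example Tokyo-region Availability Zones:

# public subnet
aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 --cidr-block 192.168.1.0/24 --availability-zone ap-northeast-1a

# private subnet, in a different Availability Zone
aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 --cidr-block 192.168.2.0/24 --availability-zone ap-northeast-1c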

EXERCISE 3:

Connect Your Custom Amazon VPC to the Internet and Establish Routing

1. Create an IGW with a name tag of My First IGW and attach it to your custom Amazon VPC.

2. Add a route to the main route table for your custom Amazon VPC that directs Internet traffic (0.0.0.0/0) to the IGW.

3. Create a NAT gateway, place it in the public subnet of your custom Amazon VPC, and assign it an EIP.

4. Create a new route table with a name tag of My First Private Route Table and place it within your custom Amazon VPC. Add a route to it that directs Internet traffic (0.0.0.0/0) to the NAT gateway and associate it with the private subnet.
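A hedged CLI sketch of the same routing setup; every resource ID below is a placeholder you would substitute from the output of the preceding commands:

# IGW for the public route
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-EXAMPLE --vpc-id vpc-EXAMPLE

# default route to the IGW in the main route table
aws ec2 create-route --route-table-id rtb-MAIN-EXAMPLE --destination-cidr-block 0.0.0.0/0 --gateway-id igw-EXAMPLE

# EIP and NAT gateway in the public subnet
aws ec2 allocate-address --domain vpc
aws ec2 create-nat-gateway --subnet-id subnet-PUBLIC-EXAMPLE --allocation-id eipalloc-EXAMPLE

# private route table with a default route through the NAT gateway
aws ec2 create-route-table --vpc-id vpc-EXAMPLE
aws ec2 create-route --route-table-id rtb-PRIVATE-EXAMPLE --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-EXAMPLE
aws ec2 associate-route-table --route-table-id rtb-PRIVATE-EXAMPLE --subnet-id subnet-PRIVATE-EXAMPLE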

EXERCISE 4:

Launch an Amazon EC2 Instance and Test the Connection to the Internet

1. Launch a t2.micro Amazon Linux AMI as an Amazon EC2 instance into the public subnet of your custom Amazon VPC, give it a name tag of My First Public Instance and select your key pair for secure access to the instance.

2. Securely access the Amazon EC2 instance in the public subnet via SSH with a key pair.

3. Execute an update to the operating system instance libraries by executing the following command:

sudo yum update -y

4. You should see an output showing the instance downloading software from the Internet and installing it.

5. Delete all resources created in this exercise.

Conclusion

Mastering exercises related to Amazon VPC not only prepares you better for the AWS Certified Solutions Architect exam but also equips you with vital skills for deploying and managing cloud infrastructure effectively. From creating and configuring VPCs to setting up route tables and network ACLs, each step in this process contributes to building a robust and secure network system. We hope this article boosts your confidence in applying the knowledge gained and continues your journey toward becoming an AWS expert.

If you have any questions or need further assistance, don’t hesitate to reach out to us. Best of luck on your path to becoming an AWS Certified Solutions Architect! Happy Clouding!!! Thank you for reading the DevopsRoles page!

AWS Certified Solutions Architect Exercises - Part 2: Amazon EC2 and Amazon EBS

In this article, we’ll learn about Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Elastic Block Store (Amazon EBS). Now, let’s get started with the exercises.

AWS Certified Solutions Architect Exercises

1. Today’s tasks

  • Launch and Connect to a Linux Instance
  • Launch a Windows Instance with Bootstrapping
  • Launch a Spot Instance
  • Access Metadata
  • Create an Amazon EBS Volume and Show That It Remains After the Instance Is Terminated
  • Take a Snapshot and Restore

2. Before you begin

  • Puttygen.exe: a tool for creating a .ppk file from a .pem file
  • Command-line tool to SSH into the Linux instance.

3. Let’s do it

EXERCISE 1: Launch and Connect to a Linux Instance

In this exercise, you will launch a new Linux instance, log in with SSH, and install any security updates (a CLI sketch of the launch follows the steps).

1. Launch an instance in the Amazon EC2 console.

2. Choose the Amazon Linux 2 AMI (HVM), SSD Volume Type – ami-0c3fd0f5d33134a76.

3. Choose the t2.micro instance type.

4. Configure Instance Details as below

  • Network: Launch the instance in the default VPC.
  • Subnet: Select a default subnet
  • Auto-assign Public IP: Enable.

5. Leave the Add Storage settings at their defaults and click the 「Next: Add tags」 button. On the next screen, click the 「Add tag」 button to add a tag to the instance.

Example: add a tag with Key Name and Value ec2-demo.

6. Create a new security group called demo-sg.

7. Add a rule to demo-sg in one of two ways:

7.1. Allow SSH access only from the IP address of your computer, by setting Source to My IP (recommended, as this is more secure).

7.2. Allow access from any IP address, by setting Source to Custom with CIDR 0.0.0.0/0.

8. Review and Launch the instance.

9. When prompted for a key pair, choose a key pair you already have or create a new one and download the private portion. Amazon generates a keyname.pem file, and you will need a keyname.ppk file to connect to the instance via SSH. Puttygen.exe is one utility that will create a .ppk file from a .pem file.

10. SSH into the instance using the IPv4 public IP address, the user name ec2-user, and the keyname.ppk file created at step 9. To SSH into the created EC2 instance, you can use tools such as Tera Term.

11. From the command-line prompt, run

sudo yum update

12. Close the SSH window and terminate the instance.
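For reference, a minimal CLI sketch of a similar launch in the default VPC (the AMI ID, instance type, security group, and tag come from the steps above; the key pair name is a placeholder):

aws ec2 run-instances \
  --image-id ami-0c3fd0f5d33134a76 \
  --instance-type t2.micro \
  --key-name my-keypair \
  --security-groups demo-sg \
  --tag-specifications '[{"ResourceType":"instance","Tags":[{"Key":"Name","Value":"ec2-demo"}]}]'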

EXERCISE 2: Launch a Windows Instance with Bootstrapping

In this exercise, you will launch a Windows instance and specify a very simple bootstrap script. You will then confirm that the bootstrap script was executed on the instance.

1. Launch an instance in the Amazon EC2 console.

2. Choose the Microsoft Windows Server 2019 Base AMI.

3. Choose the t2.micro instance type.

4. Launch the instance in the default VPC and a default subnet, with Auto-assign Public IP set to Enable.

5. In the Advanced Details section, enter the following text as UserData:

<script>
md c:\temp
</script>

6. Add a tag to the instance of Key: Name, Value: EC2-Demo2

7. Use the demo-sg security group from Exercise 1.

8. Launch the instance.

9. Use the key pair from Exercise 1.

10. On the Connect to Your Instance screen, decrypt the administrator password and then download the RDP file to attempt to connect to the instance. Your attempt should fail because the demo-sg security group does not allow RDP access.

11. Open the demo-sg security group and add a rule that allows RDP access from your IP address.

12. Attempt to access the instance via RDP again.

13. Once the RDP session is connected, open Windows Explorer and confirm that the c:\temp folder has been created.

14. End the RDP session and terminate the instance.

EXERCISE 3: Launch a Spot Instance

In this exercise, you will create a Spot Instance.

1. Launch an instance in the Amazon EC2 console.

2. Choose the Amazon Linux AMI.

3. Choose the instance type.

4. On the Configure Instance page, request a Spot Instance.

5. Launch the instance in the default VPC and a default subnet, with Auto-assign Public IP set to Enable.

6. Request a Spot Instance and enter a bid a few cents above the recorded Spot price.

7. Finish launching the instance.

8. Go to the Spot Request page. Watch your request.

9. Find the instance on the Instances page of the Amazon EC2 console.

10. Once the instance is running, terminate it.

EXERCISE 4: Access Metadata

In this exercise, you will access the instance metadata from the OS.

1. Execute steps as in EXERCISE 1.

2. At the Linux command prompt, retrieve a list of the available metadata by typing:
curl http://169.254.169.254/latest/meta-data/

3. To see a value, add the name to the end of the URL. For example, to see the security groups, type:
curl http://169.254.169.254/latest/meta-data/security-groups

4. Close the SSH window and terminate the instance.

EXERCISE 5: Create an Amazon EBS Volume and Show That It Remains After the Instance Is Terminated

In this exercise, you will see how an Amazon EBS volume persists beyond the life of an instance.

1. Execute steps as in EXERCISE 1.

2. However, in step 5 「Add Storage」, add a second Amazon EBS volume of size 50 GB. Note that the Root Volume is set to Delete on Termination.

3. Launch the instance. It now has two Amazon EBS volumes in the Amazon EBS console; name them both 「EC2-Demo5」.

4. Terminate the instance.

5. Check that the boot drive is destroyed, but the additional Amazon EBS volume remains and now says Available. Do not delete the Available volume.

EXERCISE 6: Take a Snapshot and Restore

This exercise guides you through taking a snapshot and restoring it in three different ways.

1. Find the volume you created in Exercise 5 in the Amazon EBS Menu.

2. Take a snapshot of that volume. Tag the snapshot with the Name Exercise-6 and wait for the snapshot to complete (a CLI sketch of these restore methods follows the steps).

3. Method 1 to restore an EBS volume: on the Snapshot console, choose the new snapshot created at step 2 and select Create Volume. Create the volume with all the defaults and tag it with the Name 「Exercise-6-volume-restored」.

4. Method 2 to restore an EBS volume: locate the snapshot again and choose Create Volume, setting the size of the new volume to 100 GB (restoring the snapshot to a new, larger volume is how you address the problem of increasing the size of an existing volume). Tag it with the Name 「Exercise-6-volume-restored-100GB」.

5. Method 3 to restore an EBS volume: locate the snapshot again and choose Copy. Copy the snapshot to another region.

Go to the other region and wait for the snapshot to become available. Create a volume from the snapshot in the new region.

This is how you share an Amazon EBS volume between regions; that is, by taking a snapshot and copying the snapshot.

6. Delete all four volumes.
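A CLI sketch of the snapshot and the three restore methods; the volume, snapshot, and Availability Zone values are placeholders:

# take the snapshot of the Exercise 5 volume
aws ec2 create-snapshot --volume-id vol-EXAMPLE --tag-specifications '[{"ResourceType":"snapshot","Tags":[{"Key":"Name","Value":"Exercise-6"}]}]'

# method 1: restore with the default size
aws ec2 create-volume --snapshot-id snap-EXAMPLE --availability-zone ap-northeast-1a

# method 2: restore to a larger, 100 GB volume
aws ec2 create-volume --snapshot-id snap-EXAMPLE --availability-zone ap-northeast-1a --size 100

# method 3: copy the snapshot to another region, then create a volume there
aws ec2 copy-snapshot --source-region ap-northeast-1 --source-snapshot-id snap-EXAMPLE --region us-west-2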

That wraps up Part 2 of the AWS Certified Solutions Architect exercises, on Amazon EC2 and Amazon EBS. I hope you find this helpful. Thank you for reading the DevopsRoles page!

AWS Certified Solutions Architect Exercises - Part 1: Amazon S3 and Amazon Glacier Storage

In this series, we will work through the exercises below together:

  1. S3 Amazon Simple Storage Service and Amazon Glacier Storage
  2. Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Elastic Block Store (Amazon EBS)
  3. Amazon Virtual Private Cloud (Amazon VPC)
  4. Elastic Load Balancing, Amazon CloudWatch, and Auto Scaling
  5. AWS Identity and Access Management (IAM)
  6. Databases and AWS
  7. SQS, SWF, and SNS
  8. Domain Name System (DNS) and Amazon Route 53
  9. Amazon ElastiCache
  10. Additional Key Services
  11. Security on AWS
  12. AWS Risk and Compliance
  13. Architecture Best Practices

Based on the document: AWS-Certified-Solutions-Architect-Official-Study-Guide.pdf

Now, start practicing!!!

1. AWS Certified Solutions Architect Exercises - Part 1: Amazon S3 and Amazon Glacier Storage

1.1. Today’s tasks

1: Create an Amazon Simple Storage Service (Amazon S3) Bucket
2: Upload, Make Public, and Delete Objects in Your Bucket
3: Enable Version Control
4: Delete an Object and Then Restore It.
5: Lifecycle Management
6: Enable Static Hosting on Your Bucket

1.2. Before you begin

I assume you already have an AWS account.

1.3. Let’s do it

EXERCISE 1: Create an Amazon Simple Storage Service (Amazon S3) Bucket

1. Log in to the AWS Management Console at the link: https://console.aws.amazon.com/console/home?nc2=h_ct&src=header-signin

2. Choose an appropriate region, such as the Asia Pacific (Tokyo) Region.

3. Navigate to the Amazon S3 console. Notice that the region indicator now says Global. Remember that Amazon S3 buckets form a global namespace, even though each bucket is created in a specific region.

4. Start the bucket creation process by clicking the Create bucket button.

5. When prompted for Bucket Name, use yourname-demo-bucket-yyyymmdd.

6. Choose a region, such as Asia Pacific (Tokyo).

You should now have a new Amazon S3 bucket.
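The same bucket can also be created from the CLI; a one-line sketch using the naming convention above:

aws s3 mb s3://yourname-demo-bucket-yyyymmdd --region ap-northeast-1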

EXERCISE 2: Upload, Make Public, and Delete Objects in Your Bucket

In this exercise, you will upload a new object to your bucket. You will then make this object public and view the object in your browser. You will then rename the object and finally delete it from the bucket.

Upload an Object

1. Load your new bucket in the Amazon S3 console.

2. Select Upload, then Add Files.

3. Locate a file on your PC that you are okay with uploading to Amazon S3 and making public to the Internet. (We suggest using a non-personal image file for the purposes of this exercise.)

4. Select a suitable file, then Start Upload. You will see the status of your file in the Transfers section.

5. After your file is uploaded, the status should change to Done.

The file you uploaded is now stored as an Amazon S3 object and should now be listed in the contents of your bucket.

Open the Amazon S3 URL

1. Now open the properties for the object. The properties should include bucket, name, and link.

2. Copy the Amazon S3 URL for the object.

3. Paste the URL in the address bar of a new browser window or tab.

You should get a message with an XML error code AccessDenied. Even though the object has a URL, it is private by default, so it cannot be accessed by a web browser.

Make the Bucket Public

1. Go back to the Permissions tab of your bucket and set Block public access to Off.

2. Add a bucket policy that grants everyone read-only access (a CLI way to apply it is sketched after these steps):

{
    "Version": "2012-10-17",
    "Id": "http referer policy example",
    "Statement": [
        {
            "Sid": "Allow get requests originating from global",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::yourname-demo-bucket-yyyymmdd/*"
        }
    ]
}

3. Copy the Amazon S3 URL again and try to open it in a browser window or tab. Your public image file should now display in the browser or browser tab.

4. For other public access settings, read more at the link: https://aws.amazon.com/premiumsupport/knowledge-center/read-access-objects-s3-bucket/
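If you save the policy above as policy.json, one way to apply it is from the CLI (Block public access must already be off, as in step 1):

aws s3api put-bucket-policy --bucket yourname-demo-bucket-yyyymmdd --policy file://policy.json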

Delete the Object

1. In the Amazon S3 console, select Object ➝ Actions ➝ Delete. Choose Delete when prompted if you want to delete the object.

2. The object has now been deleted.

3. To verify, try to reload the deleted object’s Amazon S3 URL.
You should once again get the XML AccessDenied error message.

EXERCISE 3: Enable Version Control

In this exercise, you will enable version control on your newly created bucket.

Enable Versioning

1. In the Amazon S3 console, open your bucket. Click Properties tab ➝ Versioning ➝ select Enable versioning ➝ Save.

Your bucket now has versioning enabled.
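For reference, the CLI equivalent of enabling versioning:

aws s3api put-bucket-versioning --bucket yourname-demo-bucket-yyyymmdd --versioning-configuration Status=Enabled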

Create Multiple Versions of an Object

1. Create a text file named foo.txt on your computer and write the word blue in the text file.

2. Save the text file to a location of your choosing.

3. Upload the text file to your bucket. This will be version 1.

4. After you have uploaded the text file to your bucket, open the copy on your local computer and change the word blue to red. Save the text file with the original filename.

5. Upload the modified file to your bucket.

6. Select Show Versions on the uploaded object.

You will now see two different versions of the object with different Version IDs and possibly different sizes. Note that when you select Show Version, the Amazon S3 URL now includes the version ID in the query string after the object name.

EXERCISE 4: Delete an Object and Then Restore It

In this exercise, you will delete an object in your Amazon S3 bucket and then restore it.

Delete an Object

Open the bucket containing the text file for which you now have two versions.

  1. Select Hide Versions.
  2. Select Actions ➝ Delete, and then select Delete to verify.
  3. Your object will now be deleted, and you can no longer see the object.
  4. Select Show Versions.
    Both versions of the object now show their version IDs.

Restore an Object

Open your bucket.

  1. Select Show Versions.
  2. Select the oldest version and download the object. Note that the filename is simply foo.txt with no version indicator.
  3. Select Hide Versions, then upload foo.txt to the same bucket.
  4. The file foo.txt should reappear; select Show Versions.

EXERCISE 5: Lifecycle Management

In this exercise, you will explore the various options for lifecycle management.

  1. Select your bucket in the Amazon S3 console.
  2. Under Management ➝ Lifecycle, add a Lifecycle Rule.
  3. Explore the various options to add lifecycle rules to objects in this bucket. It is recommended that you do not implement any of these options, as you may incur additional costs. After you have finished, click the Cancel button.

My example: transition objects to the GLACIER storage class after 1 day.
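A sketch of that example as a CLI lifecycle configuration (the rule ID is my own placeholder; as noted above, actually applying it may incur additional costs):

aws s3api put-bucket-lifecycle-configuration \
  --bucket yourname-demo-bucket-yyyymmdd \
  --lifecycle-configuration '{"Rules": [{"ID": "to-glacier-after-1-day", "Status": "Enabled", "Filter": {"Prefix": ""}, "Transitions": [{"Days": 1, "StorageClass": "GLACIER"}]}]}'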

EXERCISE 6: Enable Static Hosting on Your Bucket

In this exercise, you will enable static hosting on your newly created bucket.

1. Select your bucket in the Amazon S3 console.

2. In the Properties section, select Static Website Hosting.

For the index document name, enter index.txt, and for the error document name, enter error.txt.

3. Use a text editor to create two text files and save them as index.txt and error.txt.
In the index.txt file, write the phrase “Hello World,” and in the error.txt file, write the phrase “Error Page.” Save both text files and upload them to your bucket.

4. Copy the Endpoint: link under Static Website Hosting and paste it in a browser window or tab. You should now see the phrase “Hello World” displayed.

5. In the address bar in your browser, try adding a forward slash followed by a made-up filename (for example, /test.html). You should now see the phrase “Error Page” displayed.
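For reference, steps 2 and 3 can also be done from the CLI with the same file names:

# upload the two pages
aws s3 cp index.txt s3://yourname-demo-bucket-yyyymmdd/
aws s3 cp error.txt s3://yourname-demo-bucket-yyyymmdd/

# enable static website hosting with those documents
aws s3 website s3://yourname-demo-bucket-yyyymmdd/ --index-document index.txt --error-document error.txt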

Finally, and importantly, clean up: delete all of the objects in your bucket and then delete the bucket itself. That completes Part 1 of the AWS Certified Solutions Architect exercises, on Amazon S3 and Amazon Glacier storage. Thank you for reading the DevopsRoles page!