Dec 29, 2018

Automatically Launch/Stop EC2 Instances Using Shell Script & AWS CLI



#How to start an EC2 instance:

Step 1: Install and configure the AWS CLI if you haven't already. (If you missed that, please refer to my earlier post / video https://tinyurl.com/yaf8dltr )

Step 2: Create a script file 'ec2startscript.sh'. I recommend creating this file in the project root directory, in my case /var/www/html.

>> cd /var/www/html
>> touch ec2startscript.sh

Step 3: From the AWS console, get the IDs of the instances you want to start.

Step 4: Now open that file, add the code below and save it.

>> nano ec2startscript.sh

Code:
#!/bin/bash
# Start EC2 Instance
aws ec2 start-instances --instance-ids pasteInstanceIdHere


#How to stop an EC2 instance:

Step 1: Create a script file 'ec2stopscript.sh'. I recommend creating this file in the project root directory, in my case /var/www/html.

>> cd /var/www/html
>> touch ec2stopscript.sh

Step 2: From the AWS console, get the ID of the instance you want to stop.

Step 3: Now open that file, add the code below and save it.

>> nano ec2stopscript.sh

Code:
#!/bin/bash
# Stop EC2 Instance
aws ec2 stop-instances --instance-ids pasteInstanceIdHere

Note: Provide executable permission to the shell script file ec2stopscript.sh (and likewise to the other scripts).
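For example:

>> chmod +x ec2startscript.sh ec2stopscript.sh
>> ./ec2stopscript.sh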

#How to provide public IP address to running EC2 instance:

Step 1: Create a script file 'ec2ipscript.sh'. I recommend creating this file in the project root directory, in my case /var/www/html.

>> cd /var/www/html
>> touch ec2ipscript.sh

Step 2: From the AWS console, get the ID of the running instance whose public IP you want to fetch.

Step 3: Now open that file, add the code below and save it.

>> nano ec2ipscript.sh

Code:
#!/bin/bash

# Start the instance and capture the state line ("Name": "...") from the JSON output
stateLine=$(sh ec2startscript.sh | grep '"Name"')
# Split on ':' to isolate the state value, then on '"' to strip the quotes
IFS=':' read -ra currentState <<< "$stateLine"
stateString=${currentState[1]}
IFS='"' read -ra state <<< "$stateString"
# Print the public IP once the instance reports the "running" state
if [ "${state[1]}" == "running" ]; then
    aws ec2 describe-instances | grep "PublicIpAddress"
fi
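To make this truly automatic, you can schedule the scripts with cron. A minimal sketch, assuming the scripts live in /var/www/html and have executable permission (the times here are just examples):

>> crontab -e

# start the instance at 09:00 and stop it at 18:00, Monday to Friday
0 9 * * 1-5 /var/www/html/ec2startscript.sh
0 18 * * 1-5 /var/www/html/ec2stopscript.sh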

Happy Learning AWS Services!!!! :) Still have doubts? Put your questions in the comment box below! Thanks!


Watch Video:

Sep 30, 2018

Amazon Simple Email Service [SES] Using CLI


Amazon SES is an easy and cost-effective email platform for sending and receiving email using your own email addresses and domains.

Amazon SES is an AWS cloud-based email service built on reliable and scalable AWS infrastructure.

You only pay for what you use.

SES use cases: marketing emails, transactional emails, notifications and newsletter emails etc.

Why use Amazon SES?

If you build your own large-scale email solution for business, it's complex and costly, and you also need to handle email server management. Amazon SES eliminates these challenges and provides an easy, cost-effective solution for bulk email.

Limitations of AWS SES:

The SES service is currently available only in the three AWS regions below.

  1. EU (Ireland) (eu-west-1)
  2. US West (Oregon) (us-west-2)
  3. US East (N. Virginia) (us-east-1)

Emails can only be sent to or received from verified email addresses and domains, to reduce spam emails.

In the sandbox environment:

To help prevent fraud and abuse, and to help protect your reputation as a sender, Amazon applies certain restrictions to new Amazon SES accounts.

In a sandbox account, you can only send mail to verified email addresses and domains.

In both sandbox and production mode, you can only send mail from verified email addresses and domains.

You can send only 1 email per second, but you can request AWS Support to increase the send rate limit.

The email send quota is 200 emails per 24 hours. You can request an increase of this limit.

To request that your account be removed from the Amazon SES sandbox, check this Amazon tutorial.

Amazon SES and other AWS services:

SES provides easy APIs and we can integrate it with other AWS services like S3, KMS, WorkMail, SNS and Lambda.

We can define receipt rules to control incoming emails and route them. SES can route emails to Amazon S3, forward them to Amazon SNS, or even process them via AWS Lambda.

How to configure AWS SES using AWS CLI:

Step 1: First of all we need to verify the sender email address.

>> aws ses verify-email-identity --email-address your_email_address_1@gmail.com --region eu-west-1

Once you execute this command you will receive a verification email; click on the link provided in that email to verify.

In the AWS console you can see the email verification status 'verified' by navigating to SES -> Email Addresses.
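You can also check the verification status from the CLI:

>> aws ses get-identity-verification-attributes --identities your_email_address_1@gmail.com --region eu-west-1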

Step 2: Now you need to verify the receiver email address, so it can receive emails.

>> aws ses verify-email-identity --email-address your_email_address_2@gmail.com --region eu-west-1

Follow the same steps again to verify the receiver email address.

Step 3: Check the list of registered email addresses.

>> aws ses list-identities

Step 4: Now try to send an email from the sender email ID to the receiver email ID.

>> aws ses send-email --from your_email_address_1@gmail.com --to your_email_address_2@gmail.com --subject "SES Test subject" --text "This is test description" --region eu-west-1

Note: Specify a supported region explicitly if your default region does not support SES.

Step 5: Check send quota

>> aws ses get-send-quota --region eu-west-1

Output:
{
    "Max24HourSend": 200.0,
    "SentLast24Hours": 1.0,
    "MaxSendRate": 1.0
}

(SentLast24Hours of 1 means 1 email was sent and 199 remain in the 24-hour quota.)

Compliance:
Always strictly follow the email marketing and anti-spam laws and regulations of the countries and regions you send email to. You're responsible for ensuring that the email you send complies with these laws; otherwise Amazon can block you from sending and receiving emails.


Happy Learning AWS Services!!!! :) Still have doubts? Put your questions in the comment box below! Thanks!

Watch Video (SES):


Reference: https://docs.aws.amazon.com/ses/latest/DeveloperGuide/Welcome.html

Aug 25, 2018

AWS ECS (Elastic Container Service) & Docker Containers Using CLI


Introduction:

ECS is a container management service provided by AWS. It is a highly scalable and fast way to manage, start and stop Docker containers within the AWS cloud.

It makes it easy to manage applications running in Docker containers deployed on an ECS cluster of EC2 instances.

When you deploy a Docker container in ECS, you place it on a cluster, which is a logical grouping of EC2 instances.

ECS gives us access to core features like security groups, EBS volumes and Elastic Load Balancer (ELB), just like a plain EC2 instance.

ECS is an abstraction over the EC2 service, so instead of using EC2 virtual machines directly we can use more compact, portable Docker containers.

AWS provides APIs to automate and integrate with AWS ECS services.

Challenges with traditional software development life cycle:

Infrastructure:

  • Difficult to scale up apps
  • Long build / test / release cycles
  • Operations were a nightmare

Application:

  • Architecture is hard to maintain and evolve
  • New releases take months
  • Long time to add new features

Business:

  • Lack of agility
  • Lack of innovation
  • Frustrated customers

Why move to containers?

  • Portable 
  • Flexible
  • Fast
  • Efficient
  • Super Lightweight


Following are the core concepts of the ECS service:

Dockerfile: A simple text file that defines the app environment and dependency libraries.
Example: the server OS (Linux/Ubuntu etc.) and dependency libraries (Apache, MySQL, PHP etc.)

Dockerfile Best Practices:

  • Reduce the build image size and number of layers. For example, don't add unnecessary things to the image and keep it optimized.
  • Use a specific tag naming convention for image versions so that you can roll out and roll back images. For example: version 1.0.0.1, version 1.0.0.2 etc.
  • In a multi-region setup, pull the same image and push it to all regions.

Docker Image:
A packaged application: source code, dependency libraries, environment variables, config files etc. It is reproducible, portable and immutable, which means you can run the image on any platform.

Elastic Container Registry (ECR):
It is a fully managed Docker container registry, which means you don't need to manage the infrastructure hosting the registry.
It is highly available, as it internally uses the S3 service.
It is secure, as it uses IAM resource-based policies, so you can give push permission to one account and pull permission to another account's users.

Cluster:
It is a logical grouping of all the resources that you have.
It is the infrastructure isolation boundary and the IAM permission boundary.

Container Instances:
They are EC2 instances with the ECS agent installed. The ECS agent is open source software that monitors the state of the instance and reports back to the ECS service.

Task Definitions:
It is a way to define your application.
It defines the application containers, including the image URL, CPU and memory requirements etc. It is the blueprint of a task.

Task:
It is a running instantiation of a task definition.

Service:
Runs and maintains the desired number of copies of a task.
Maintains n running copies.
Integrated with ELB.
Unhealthy tasks are automatically replaced with healthy ones.
It is very similar to the EC2 auto-scaling concept.

Advantages of ECS:

  • You can group multiple containers in a task definition
  • You can attach / mount volumes to a task definition
  • You can link two containers and establish a network between them so they can talk to each other.
  • For example: one application in Java and another in .NET run in different containers, and they can interact with each other using APIs.
  • Good for long-running applications
  • Load balances traffic across containers
  • Automatically recovers unhealthy containers
  • Service discovery


Container Use Case:
You can use containers to build distributed applications by breaking your application into independent tasks or processes, like AWS microservices.

Example 1: Distributed Applications and Microservices
You can have separate containers for your webserver, application server, message queue and backend workers. Containers are ideal for running single tasks, so you can use containers as the base unit for a task when scaling up and scaling down. Each component of your application can be made from different container images. Docker containers provide process isolation allowing you to run and scale different components side by side regardless of the programming language or libraries running in each container.

Example 2: A Multiplayer Gaming Platform
The gaming company Ubisoft uses AWS ECS. Ubisoft creates, publishes and distributes popular interactive video games for players throughout the world.
Read more here about Ubisoft 

ECS Workflow


How To Install WordPress and phpMyAdmin with Docker Compose on an Amazon Linux instance using the CLI:

1. Update the installed packages and package cache on your instance.

>> sudo yum update -y

2. Install the most recent Docker Community Edition package.

>> sudo yum install -y docker

3. Start the Docker service.

>> sudo service docker start

4. Add the ec2-user to the docker group so you can execute Docker commands without using sudo. (Log out and back in again for the new group membership to take effect.)

>> sudo usermod -a -G docker ec2-user

5. Pull the centos public Docker image.

>> sudo docker pull centos

6. Run a centos container to test, then exit it.

>> sudo docker run -it centos
>> exit

7. We will use Docker Compose to run all the containers (wordpress, mysql and phpmyadmin) with a single command, defining all of them in one YAML file.

8. First install python-pip (if not already done).

>> sudo yum install python-pip

9. Install Docker Compose.

>> sudo pip install docker-compose

10. Now create a directory called wordpress in the project root directory, in my case /var/www/html.

>> cd /var/www/html
>> sudo mkdir wordpress
>> cd wordpress

11. Create a docker-compose.yaml file and make it writable:

>> sudo touch docker-compose.yaml
>> sudo chmod 666 docker-compose.yaml

Then paste the definitions below into that file:

wordpress:
  image: wordpress
  links:
    - wordpress_db:mysql
  ports:
    - 8080:80
wordpress_db:
  image: mariadb
  environment:
    MYSQL_ROOT_PASSWORD: pgtest
phpmyadmin:
  image: corbinu/docker-phpmyadmin
  links:
    - wordpress_db:mysql
  ports:
    - 8181:80
  environment:
    MYSQL_USERNAME: root
    MYSQL_ROOT_PASSWORD: pgtest


12. The command below will create the 3 containers, and you are done with the setup.

>> sudo docker-compose up -d

Let's verify:

http://your-public-ip-address:8080 (WordPress)
http://your-public-ip-address:8181 (phpMyAdmin)
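You can also confirm from the command line that all three containers are up:

>> docker ps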


How to stop and remove the Docker containers:

First stop your currently running docker-compose session.

>> docker-compose stop

Remove the existing container

>> docker-compose rm wordpress
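Alternatively, docker-compose down stops and removes all the containers (and the default network created for them) in one step:

>> docker-compose down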




Happy Learning AWS Services!!!! :) Still have doubts? Put your questions in the comment box below! Thanks!

Watch Video: 


References:
https://aws.amazon.com/ecs/
https://docs.aws.amazon.com/cli/latest/userguide/awscli-install-linux.html
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-lamp-amazon-linux-2.html
https://www.digitalocean.com/community/tutorials/how-to-install-wordpress-and-phpmyadmin-with-docker-compose-on-ubuntu-14-04
https://docs.docker.com/machine/examples/aws/
https://docs.docker.com/compose/reference/rm/

Jul 22, 2018

How To Configure SQS Using AWS CLI


SQS stands for Amazon Simple Queue Service.

It is a flexible, reliable, secure, highly scalable, easy and fully managed message queuing service for storing an unlimited number of messages.

Using SQS we can send multiple messages at the same time.

It supports two types of queues: Standard and FIFO (First In First Out).

"Standard" queues are available in all regions & "FIFO" queues are available only in the US East (N. Virginia), US East (Ohio), US West (Oregon), and EU (Ireland) regions.

If you don't specify the FifoQueue attribute while creating a queue, it will create a Standard queue by default.

Important Notes:
  • Once you have created a queue, you can't change the queue type.
  • If you don't provide a value for an attribute, the queue will use the default value for that attribute.
  • If you delete a queue, you must wait at least 60 seconds before creating a queue with the same name.
  • If you provide the name of an existing queue along with the exact names and values of all the queue's attributes, create-queue returns the queue URL of the existing queue.
  • If the queue name, attribute names, or attribute values don't match an existing queue, create-queue returns an error.
  • A queue name can have up to 80 characters.
  • A queue name can consist of alphanumeric characters, hyphens (-), and underscores (_).
  • A FIFO queue name must end with the .fifo suffix.
  • Queue names are case-sensitive.
  • A message can include only XML, JSON, and unformatted text. The following Unicode characters are allowed: #x9 | #xA | #xD | #x20 to #xD7FF | #xE000 to #xFFFD | #x10000 to #x10FFFF
  • SQS does not delete a message once it has been received, because in a distributed system there is no guarantee that the receiver actually got it. Therefore it is the receiver's responsibility to delete the message from the queue.
  • SQS provides a 'visibility timeout': a period during which a message stays invisible to other consumers after being received by one consumer; see the example after this list.
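For example, you can override the visibility timeout for a single receive call (the queue URL is a placeholder):

>> aws sqs receive-message --queue-url EnterQueueUrlHere --visibility-timeout 60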

Use Case:
You can use Amazon SQS when you want to move data between distributed application components.

Let's see the operations one by one...

How to create a queue:

It requires a queue name and attributes like DelaySeconds, MaximumMessageSize etc. We can also save these attribute values in a JSON file and pass the file instead of passing attributes directly.

Create a JSON file named attributejson.json and put the attribute below in it.

{"MessageRetentionPeriod":"259200"} 

This parameter tells SQS to retain messages in the queue for that particular time in seconds.

In the above example we set it to 3 days (3 days * 24 hours * 60 minutes * 60 seconds = 259200 seconds).

>> aws sqs create-queue --queue-name pgqueue --attributes file://attributejson.json

Once you execute the command above it will output the URL of the newly created queue. Save this URL for future use.

Output:
{
  "QueueUrl": "https://queue.amazonaws.com/099998EXAMPLE/pgqueue"
}

To verify we can use command below.

>> aws sqs list-queues 
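If you didn't save the queue URL, you can look it up again by name:

>> aws sqs get-queue-url --queue-name pgqueue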


How to send messages in new SQS Queue:

>> aws sqs send-message --queue-url EnterQueueUrlHere --message-body "This is my first queue message"

AWS provides the 'send-message' command to send a message to a specified queue; the parameters we need to pass are --queue-url and --message-body.

Once we execute the above command it will send that message to the queue. You can also verify this from the AWS Management Console.


How to receive/delete queue message:

AWS provides the receive-message command; the only parameter we need to pass is --queue-url of the queue from which we want to receive messages.

>> aws sqs receive-message --queue-url EnterQueueUrlHere

From the output, note down the ReceiptHandle; you need it to delete the message later.

>> aws sqs delete-message --queue-url EnterQueueUrlHere --receipt-handle EnterReceiptHandleHere


How to send message with delay in queue:

We can send a message to the queue with a specific time delay, which means the message will become available for processing only after the delay has elapsed.

>> aws sqs send-message --queue-url EnterQueueUrlHere --message-body "Message with 10 second delay" --delay-seconds 10

Here we set a delay of 10 seconds, which means the message will become available in the queue after 10 seconds. In this command we just need to pass one extra parameter, --delay-seconds, with a value in seconds.


How to delete SQS queue:

SQS provides the delete-queue command with the queue-url parameter to delete a queue.

>> aws sqs delete-queue --queue-url EnterQueueUrlHere

We can verify that the queue was deleted with the command below.

>> aws sqs list-queues



Happy Learning AWS Services!!!! :) Still have doubts? Put your questions in the comment box below! Thanks!

Watch Video:


Reference: https://docs.aws.amazon.com/cli/latest/reference/sqs/index.html


Jun 24, 2018

Amazon SNS [Simple Notification Service] Using CLI


SNS [Amazon Simple Notification Service]:

SNS stands for Simple Notification Service. It provides a robust messaging service for web applications.

It is a versatile messaging service which can deliver messages to many kinds of devices and also send notifications to different AWS resources.

SNS offers a 'push' messaging service based on the publisher/subscriber model, which means multiple publishing applications can communicate with multiple subscribing applications through SNS.

It supports multiple transport protocols: Amazon SQS, Lambda, HTTP, HTTPS, Email, SMS and mobile push notifications.

SNS Supported Protocols


Following are the components of AWS SNS:

Topic:
It contains the subject and content of an event, and each topic has a unique identifier (URI). The URI identifies the SNS endpoint for publishing and subscribing to messages related to that particular topic.

Owners: 
They are the topic creators, who define the topics.

Subscribers:
They are the clients, end users, applications or services that want to receive notifications on specific topics. A single topic can have multiple subscribers.

Publishers:
They are the message or notification senders. They send messages to topics; SNS matches each topic with the list of subscribers interested in it and delivers the message to each one. A publisher can publish messages to multiple topics.

SNS Usage Example


Let's see...

How To Configure SNS Using AWS CLI:

First, create an SNS topic using the command below:

>> aws sns create-topic --name pg-topic

Output: 

"TopicArn": "arn:aws:sns:us-east-2:948488888:pg-topic"

Save this ARN for future reference or usage.

Now subscribe to this pg-topic topic:

>> aws sns subscribe --topic-arn EnterTopicARN --protocol email --notification-endpoint pranaydac08@gmail.com

Once we execute the above command, it sends a confirmation email to that email ID, and the recipient needs to confirm the subscription by clicking the confirmation link; only then can the publisher send emails to the subscriber.
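You can check the subscription status from the CLI too; the SubscriptionArn shows 'PendingConfirmation' until the link is clicked:

>> aws sns list-subscriptions-by-topic --topic-arn EnterTopicARN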

Now publish to the topic to send an email to the subscribers:

>> aws sns publish --topic-arn EnterTopicARN --message "This is test message..."

This will send an email to all subscribers who subscribed to the "pg-topic" topic; you can also confirm this from the AWS Management Console by clicking on the SNS menu.


How To Unsubscribe From a Topic Using CLI:

>> aws sns unsubscribe --subscription-arn EnterSubscriptionARN

How To Delete AWS SNS Topic Using CLI:

>> aws sns delete-topic --topic-arn EnterTopicARN

We can get topic ARN by using command below:

>> aws sns list-topics

Please note in next session we will learn "How To Configure SQS Using AWS CLI"...

Happy Learning AWS Services!!!! :) Still have doubts? Put your questions in the comment box below! Thanks!

Watch Video:




Jun 17, 2018

AWS Internal Load Balancer Using CLI


The Classic Load Balancer and Application Load Balancer are external load balancers: they have public IP addresses and can be accessed by external clients over the Internet, so they route requests from clients over the Internet. But sometimes we need to load balance internal services which are not accessible to external clients.

For example: AWS infrastructure may run a bunch of microservices used only internally; to balance the load of these internal services we can use an Internal Load Balancer.

The internal load balancer only has a private IP address, and therefore it only routes requests from clients which have VPC access.

For example: if our application has multiple tiers, like a web server connected to the Internet and a database server that is only connected to the web server, we can create an Internal Load Balancer for the database tier. The web server receives requests from the external load balancer and sends requests to the database tier via the Internal Load Balancer; the database server receives requests from the Internal Load Balancer and responds to the web server.

Internal Load Balancer Flow


Let's see...

How To Create Internal Load Balancer Using AWS CLI:

>> aws elb create-load-balancer --load-balancer-name pgelbinternal --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" --scheme internal --subnets EnterSubnetsIds --security-groups EnterSecurityGroupID

To create an internal load balancer we use the following command and parameters:

Command : aws elb create-load-balancer

Parameters:

load-balancer-name : Load balancer name
listeners : Load balancer listeners
scheme : Here we pass the value 'internal'; by default the scheme is 'internet-facing', which means a public Internet-facing load balancer.
subnets : Subnet IDs (private subnets recommended)
security-groups : Security group IDs

Output: It will show a DNS name with the prefix 'internal', like "internal-pgelbinternal-021252222.region.elb.amazonaws.com"

We can verify it from the AWS Management Console by clicking on the Load Balancers menu.
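Or verify it directly from the CLI:

>> aws elb describe-load-balancers --load-balancer-names pgelbinternal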

Let's see...

How To Register Instances With Internal Load Balancer:

>> aws elb register-instances-with-load-balancer --load-balancer-name pgelbinternal --instances EnterInstanceID

To register an instance with the internal load balancer we use the following command and parameters:

Command: aws elb register-instances-with-load-balancer

Parameters:

load-balancer-name : Name of the internal load balancer we want to register the instance with
instances : ID of the instance we want to register with the internal load balancer

We can verify it from the AWS console by clicking the 'Load Balancers' menu and then the "Instances" tab.

How To De-register Instances From Internal Load Balancer:

>> aws elb deregister-instances-from-load-balancer --load-balancer-name pgelbinternal --instances EnterInstanceID

To de-register an instance from the internal load balancer we use the following command and parameters:

Command: aws elb deregister-instances-from-load-balancer

Parameters:

load-balancer-name : Name of the internal load balancer we want to deregister the instance from
instances : ID of the instance we want to deregister from the internal load balancer

We can verify it from the AWS console by clicking the 'Load Balancers' menu and then the "Instances" tab.


How To Delete Internal / External Load Balancer Via AWS CLI:

>> aws elb delete-load-balancer --load-balancer-name pgelbinternal



Happy Learning AWS Services!!!! :) Still have doubts? Put your questions in the comment box below! Thanks!

Watch Video:

Jun 10, 2018

Create Classic Load Balancer Using AWS CLI [Part-2]


In the last session we saw how to create an Application Load Balancer using the "elbv2" command.

The Classic Load Balancer is used to route traffic based on application- or network-level details.

It is the best option for simple load balancing of traffic across multiple EC2 instances, where high availability, auto scaling and robust security are the basic requirements of the application.

It works at the 4th layer of the OSI model, the Transport Layer, and supports the TCP, SSL, HTTP and HTTPS protocols.

The Classic Load Balancer uses the command "aws elb", without the v2 operations, whereas in the last session for the Application Load Balancer we used the command "aws elbv2".

>> aws elb create-load-balancer --load-balancer-name pgLoadBalClassic --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" --subnets EnterSubnetsIds --security-groups EnterSecurityGroupID

Following are the command and parameters to create Classic Load Balancer:

Command: aws elb create-load-balancer

Parameters:

load-balancer-name: Set load balancer name

listeners: Set listeners for load balancer

Note: For the Classic Load Balancer we specify listeners while creating the load balancer, whereas for the Application Load Balancer we created the listener separately and then registered it with the load balancer.

Protocol & Port: 

We configured the front-end protocol (Protocol=HTTP,LoadBalancerPort=80), which means client requests reach the load balancer via the HTTP protocol on port 80.

We also configured the back-end protocol (InstanceProtocol=HTTP,InstancePort=80), which means the connection between the ELB and the instances uses the HTTP protocol on port 80.

subnets: Provide your VPC subnet IDs. Specify one subnet per Availability Zone.

security-groups: Provide security group IDs

SSLCertificateId: This parameter is only used for HTTPS listeners

Output:

{
    "DNSName": "pgLoadBalClassic......elb.amazonaws.com"
}

This is the DNS name which AWS has assigned to our new Classic Load Balancer.

You can verify it from the AWS Management Console by clicking on the "Load Balancers" menu.

Let's see how to create a new listener for an existing Classic Load Balancer:

To create a new load balancer listener we need to specify the name of the existing load balancer and the listener configuration for the front-end and back-end protocols.

>> aws elb create-load-balancer-listeners --load-balancer-name pgLoadBalClassic --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80"

Let's verify it from the AWS Management Console by clicking on the "Load Balancers" menu.

Happy Learning AWS Services!!!! :) Still have doubts? Put your questions in the comment box below! Thanks!

Watch Video:


Jun 3, 2018

AWS Elastic Load Balancer [ELB] Using CLI Part-1


ELB stands for AWS Elastic Load Balancer. ELB is used to balance load across multiple EC2 instances running in the AWS cloud. It provides scalability and fault tolerance for applications.

What Are The ELB Features?

  • ELB is a fully managed component which distributes incoming application network traffic across multiple EC2 instances, which can be in multiple availability zones.
  • ELB can be used to load balance services on private and public IPs.
  • ELB can terminate and process incoming secure SSL connections, which improves system performance.
  • ELB provides a sticky session feature which maintains the user's cookie for a particular session and ensures that the user's session requests are sent to the same EC2 instance.
  • It supports auto scaling, which scales out AWS capacity automatically.
  • It monitors the health of the EC2 instances running behind the ELB according to configured rules.
  • It integrates with Route 53, which enables us to configure our application with custom domains and global distribution of application content.

AWS supports two types of load balancers:

1. Classic Load Balancer [the original type]
2. Application Load Balancer [the newer type]


Types of Load Balancer


1. Classic Load Balancer: used to route traffic based on application- or network-level details.

It is the best option for simple load balancing of traffic across multiple EC2 instances, where high availability, auto scaling and robust security are the basic requirements of the application.

It works at the 4th layer of the OSI model, the Transport Layer.

2. Application Load Balancer: for advanced functionality and application-level support we can use the Application Load Balancer.

This service operates at the application layer and allows users to define routing rules based on content, across multiple services running on one or more EC2 instances.

It works at layer 7 of the OSI model, the Application Layer.


How To Create Application Load Balancer:

We will need subnets and security groups to create the Application Load Balancer; we can get those from the AWS Management Console, where they are listed against our EC2 instances.

>> aws elbv2 create-load-balancer --name pg-loadbal --subnets EnterMultipleSubnetsIdHere --security-groups EnterSecurityGroupID

Once we execute the above command it will create the Application Load Balancer, which we can verify from the console by navigating to the "Load Balancers" menu.

Please save the load balancer ARN from the output for future reference.
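If you miss it, you can fetch the ARN again by name:

>> aws elbv2 describe-load-balancers --names pg-loadbal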


What Is a Target Group:

It is a group of all the targets behind a load balancer. The load balancer routes requests to the targets registered with a target group, based on the load balancer's rules.

Targets can be EC2 instances, ECS container clusters, or any other services which can accept requests from the ELB.


How To Create ELB Target Group Using CLI:

To create an ELB target group, first we need to check the status of the load balancer; it should be in the active state.

Go to the console, click on the "Load Balancers" menu and verify the state. Only if the state is active can we create target groups for the ELB.

As we have created an Application Load Balancer, we need to use the v2 commands to create target groups.

>> aws elbv2 create-target-group --name pggroup --protocol HTTP --port 80 --vpc-id EnterVPCIdHere

The Application Load Balancer only supports the HTTP and HTTPS protocols, whereas the Classic Load Balancer supports some other protocols too (HTTP, HTTPS, TCP, SSL); we also need to mention port 80.

We also need to specify the VPC ID associated with our ELB, which contains the same subnets we provided while creating the ELB.

Once you press enter, save the target group ARN for future reference.


How To Create / Register Target In To New Target Group:

>> aws elbv2 register-targets --target-group-arn EnterTargetGroupARNHere --targets Id=EnterTargetId

To register a target in the target group, we need to provide the target group ARN that we just created and the target ID.

Please note the target ID can be an EC2 instance ID or the ID of certain other services. In this example we use an EC2 instance ID.

Once you execute the command above it will register the target in that target group.


What Is The Use Of AWS Listener:

The listener is a process which checks for incoming connection requests on specific protocols and ports. This protocol and port combination is called the "front-end connection". We can also define this configuration for back-end instance connections, called the "back-end connection".

The listener's rules determine how the load balancer routes requests to the targets in the target group.

The Classic ELB listener supports HTTP, HTTPS, TCP and SSL, whereas the Application Load Balancer listener supports HTTP and HTTPS only.


How To Create Listener & Check Health Of Register Target (In Our Case EC2 Instance) Using AWS CLI:

>> aws elbv2 create-listener --load-balancer-arn EnterLoadBalARNHere --protocol HTTP --port 80 --default-actions Type=forward,TargetGroupArn=EnterTargetGroupARNHere


To create a listener we use the command "aws elbv2 create-listener" with the parameter "load-balancer-arn" (which we created earlier and can copy from the AWS Management Console), then provide the protocol and port.

We will use the HTTP protocol with port 80, which is the most common combination for a web application.

Then we need to set "default-actions", the default action taken by the load balancer when the conditions and rules match.

For example "Type=forward" that means when conditions matched, it forward the request to that defined target group.

Once you execute the command you can verify this from the console by clicking on the Load Balancers menu.

You can see there "Listeners" tab in which you can see newly added listener. Also we can verify health of register target which we just registered.

>> aws elbv2 describe-target-health --target-group-arn EnterTargetGroupARNHere


How To Delete Load Balancer & Target Group:

>> aws elbv2 delete-load-balancer --load-balancer-arn EnterLoadBalARNHere

To delete the load balancer we need to provide the parameter "load-balancer-arn" with the command "aws elbv2 delete-load-balancer".

To verify, go to the console and click on the "Load Balancers" menu.

However if you click on "Target Groups" you will still see Target Group exists.

Let's See How To Delete The Target Group Using CLI:

>> aws elbv2 delete-target-group --target-group-arn EnterTargetGroupARNHere

So we need to provide target group ARN to delete that specific target group.

Happy Learning AWS Services!!!! :) Still have doubts? Put your questions in the comment box below! Thanks!

Watch Video:


May 27, 2018

How To Assign / Remove Policy To IAM User Group Using CLI


In the last session we learned how to create IAM users and groups. If you missed that, please go through it first if required.

How To Assign Policy To User Group:

To assign a policy to a group, we will need the policy's ARN (Amazon Resource Name).

To get the ARN of a particular policy, go to the AWS Management Console, navigate to IAM -> Policies, then click on the 'AdministratorAccess' policy to get its ARN.

Let's assume we want to use the 'AdministratorAccess' policy for our IAM group, but we can select any other policy from the AWS console as per our requirement.

>> aws iam attach-group-policy --policy-arn EnterPolicyARNHere --group-name pggroup


How To Remove User From IAM Group:

We can use command below to remove any user from IAM group.

>> aws iam remove-user-from-group --group-name pggroup --user-name pgtestuser


Delete IAM Group Using CLI:

>> aws iam delete-group --group-name pggroup 

Please note: before executing this command we need to detach the policies of this group, otherwise it will throw an error like "An error occurred (DeleteConflict) when calling the DeleteGroup operation: Cannot delete entity, must detach all policies first."

So let's first remove the policy from this IAM group using the command below.

>> aws iam detach-group-policy --policy-arn EnterPolicyARNHere --group-name pggroup

After executing this command, the group has no users or policies assigned to it.

Now we can remove this IAM group using the CLI.

>> aws iam delete-group --group-name pggroup

Once it has been deleted, we can verify from the AWS console that the group is gone.


Happy Learning AWS Services!!!! :) Still have doubts? Put your questions in the comment box below! Thanks!

Watch Video:

May 19, 2018

How to edit my.ini parameters of an Amazon RDS instance


Before editing DB engine parameters, let's learn about DB parameter groups.

We can manage DB engine configuration through the parameters in a DB parameter group. A default DB parameter group is created when creating a new DB instance. We cannot modify the parameter settings of a default DB parameter group directly.

If we want to edit any existing parameter value, we must create a new custom DB parameter group rather than use the default group. Once you create a custom DB parameter group, it shows all the parameters in a list with details like which parameters are editable, their default values, and the parameter type (dynamic or static).

Some important points about DB parameter group:
  • When we edit a dynamic parameter and save, the change is applied immediately regardless of the Apply Immediately setting.
  • When we edit a static parameter and save, the parameter change will take effect after you manually reboot the DB instance.
  • When we change the DB parameter group associated with a DB instance, you must manually reboot the instance before the new custom DB parameter group is used by the DB instance.
  • The parameter value can be specified as an integer or as an integer expression built from formulas, variables, functions & operators.
Note: Improperly setting parameters in a DB parameter group can have unintended adverse effects, including degraded performance and system instability. You should try out parameter group setting changes on a test DB instance before applying those parameter group changes to a production DB instance.


Steps To Edit my.ini Parameters Of an Amazon RDS Instance:

1: Sign in to your AWS Management Console and search "RDS" in "AWS Service" search box.

2: Once you find RDS service, Open that Amazon RDS console page.

3: On left side panel click on "Instances" menu, then click on your "DB instance"

4: Click on "Parameter groups" menu from left panel then click on "Create parameter group" button

5: Select "Parameter group family" as per your default parameter group,in my case I will select "mysql5.6"

6: Enter new "Group name" like custom-test-group

7: Enter Description for the DB parameter group

8: Then click on "Create" button, it will create new custom Parameter group.

9: Now click "Parameter groups" menu from left panel again, it will list new recently created parameter group

10: Click on that new custom parameter group, You will see "Parameters" filter option.

11: Search for the parameter whose value you want to update, like "wait_timeout", and click on the "Edit parameters" button

12: You can enter a value from the range mentioned in the "Allowed values" column of that parameter, like 1-31536000 for the "wait_timeout" parameter.

13: Once you are done with the changes, click on the "Save Changes" button; it will save all recent parameter changes.

14: After this we need to associate this new custom parameter group to our DB Instance.

15: On left side panel click on "Instances" menu again, then click on your "DB instance". On the right top corner click on "Instance Action" drop-down then click on "Modify" menu.

16: You will see "Modify DB Instance" page, Now scroll down and go to "Database options" section and select newly created custom parameter group from "DB parameter group" dropbox.

17: Click on "Continue" button, It will take some time to apply new parameter group.

18: Once Step 17 is done, again on the left side panel click on the "Instances" menu, then click on your "DB instance". On the top right corner click on the "Instance Actions" drop-down, then click on the "Reboot" menu.
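If you prefer the AWS CLI, the same flow looks roughly like this (a sketch; the group name, parameter value and DB instance identifier are example values):

>> aws rds create-db-parameter-group --db-parameter-group-name custom-test-group --db-parameter-group-family mysql5.6 --description "Custom test group"
>> aws rds modify-db-parameter-group --db-parameter-group-name custom-test-group --parameters "ParameterName=wait_timeout,ParameterValue=300,ApplyMethod=immediate"
>> aws rds modify-db-instance --db-instance-identifier yourDBInstanceId --db-parameter-group-name custom-test-group
>> aws rds reboot-db-instance --db-instance-identifier yourDBInstanceId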

Happy Learning AWS Services!!!! :) Still have doubts? Put your questions in the comment box below! Thanks!

Watch Practical Video:

10 Quick Steps To Create Amazon Free Tier RDS MySql Database Instance


The Amazon Relational Database Service (Amazon RDS) Free Tier allows you to experiment with Amazon RDS for free.

Please note AWS Free Tier is available to you for 12 months starting with the date on which you create your AWS account.

AWS Free Tier with Amazon RDS:

  • AWS provides 750 hours per month of RDS usage in a single availability zone on a db.t2.micro instance, with the MySQL, MariaDB, PostgreSQL, Oracle BYOL or SQL Server (SQL Server Express Edition) database engines
  • 20 GB of General Purpose (SSD) DB Storage
  • 20 GB of backup storage for your automated database backups and any user-initiated DB Snapshots

Steps To Create an RDS DB Instance:

1: Sign in to your AWS Management Console and search "RDS" in "AWS Service" search box.

2: Once you find RDS service, Open that Amazon RDS console page.

3: If required, you can change the default region by clicking the region menu in the upper-right corner.

4: On left side panel click on "Instances" menu, then click on "Launch DB Instance" button.

5: Now you should be on the "Select Engine" page. The most important thing here is the "Only enable options eligible for RDS Free Usage Tier" checkbox at the bottom of this page. Tick it if it is not already selected, so that options not covered under the Free Tier become inactive.

Note: Please make sure you do not cross the RDS Free Tier limits, otherwise you will be charged accordingly, even on the Free Tier.

6: Choose "MySql" engine and click on "Select" button.

7: On "Specify DB Details" page, Under "Instance Specifications" section,

Select "DB Instance Class" value "db.t2.micro--1vCPU,1GBi RAM"

Set Multi-AZ Deployment to "No"

Keep "Storage Type" and "Allocated Storage" as it is.

Then under "Settings" section, enter values for a "DB Instance Identifier", "Master Username", "Master Password" and "Confirm Password"

Then Click on "Next Step"

8: On the "Configure Advanced" settings page, In "Network & Security" section keep default setting for all fields(VPC, Subnet,Publicly Accessible, Availability Zone & VPC Security Groups).

9: In "Database Options" section, enter your database name and "Enable IAM DB Authentication" to "Yes" (Optional) and keep default setting for other fields.

10: In "Backup" section, set "Backup Retention Period" to (zero) days otherwise it will charge you.

Next click on "Launch DB Instance" button and You are done!


Happy Learning AWS Services!!!! :) Still have doubts? Put your questions in the comment box below! Thanks!

Watch Video:

May 13, 2018

How to use/manage AWS IAM using AWS CLI


AWS IAM stands for AWS Identity and Access Management. We will see here how to create IAM users and IAM groups using AWS CLI commands.

This is a very important service which helps us manage and deploy AWS infrastructure securely. We can create users and groups using IAM and set permissions to allow or deny them access to specific AWS resources.

AWS IAM service provides following features:

  • Secure AWS Account
  • Identity Management Framework
  • Centralized control over all AWS users and groups
  • Fine Grained Control
  • Flexibility
  • In-built security policies
  • Multilevel management i.e Users, Groups & Permissions
  • Easy to configure and manage using the AWS Management Console, AWS CLI, AWS Tools for PowerShell or an AWS software development kit (SDK)




Let's see some examples using the CLI...

List all users and create new users using CLI

>> aws iam list-users

This command will list all users present in our AWS account.

There are two types of accounts: the root account and IAM user accounts.

1. Root Account:
The root account is created when you first set up an AWS subscription. You can sign in to the AWS account using the root account username and password.

2. IAM User Account:
These are regular IAM users, which you can let sign in to your AWS account with specific permissions and controls over AWS resources, to manage the AWS account securely.

How to create new IAM user via CLI commands:

>> aws iam create-user --user-name pgtestuser

Using the above command we can create a new IAM user with the username 'pgtestuser'.

Let's see how to create an IAM user group. An IAM group is a collection of IAM users. If a number of users need similar permissions, we can create a group and assign the permissions to the group rather than granting them to each user. One user can be part of multiple groups; also note that AWS IAM groups cannot be nested.

How to create IAM group and add user to that group:

>> aws iam create-group --group-name pggroup

Here we have just created 'pggroup' group and now lets add user to that group.

>> aws iam add-user-to-group --user-name pgtestuser --group-name pggroup

After this command executes, the user 'pgtestuser' is assigned to the group 'pggroup'.

We can verify this from the AWS Management Console under the IAM Groups menu.

How to get group details and the users inside that group:

>> aws iam get-group --group-name pggroup


IAM Authentication Methods:

There are three types of authentication methods provided by AWS.

1. Username / Password
2. API Access Key 
3. Multi Factor Token

How to let an IAM user log in using username / password authentication:

First, assign a password to this user using the command below.

>> aws iam create-login-profile --user-name pgtestuser --password testpassword

This will set the password for the user 'pgtestuser'.

How to authenticate IAM user using API Access Key:

API stands for Application Programming Interface. API keys allow another program or script to log in to our AWS cloud programmatically and interact with AWS resources. So, using an API access key, another program or script can access resources.

Let's see how we can create an API access key using the CLI for a specific user.

>> aws iam create-access-key --user-name pgtestuser

Please note that it is essential to store the key details in a safe location for future use, because the key details are only available at creation time. If you lose the key, it is recommended to delete the old key set and create a new one.

Let's see how to delete access keys. You can get the user's access key ID from the AWS Management Console.

>> aws iam delete-access-key --user-name pgtestuser --access-key-id EnterYourAccessKeyId

Note: one user can have multiple access keys, and it is not mandatory to delete the old key before creating a new one, but keeping unused keys around is not recommended.

How to create an access key for a specific user and store the key details in a key.txt file for safekeeping:

>> aws iam create-access-key --user-name pgtestuser >> key.txt

Let's verify the key details by opening the key.txt file in any editor.

>> sudo vim key.txt

Once you enter this command, it will display the key details. You can also verify it from the AWS Management Console.


Happy Learning AWS Services!!!! :) Still have doubts? Put your questions in the comment box below! Thanks!

Watch Practical Video:


Apr 28, 2018

AWS CLI Commands For AWS S3 Services



AWS S3 stands for AWS Simple Storage Service, which provides the facility to store any data in the cloud and access it anywhere on the web. S3 stores data as objects within buckets. A bucket is a logical grouping of objects (files/folders). S3 provides three operations on bucket objects: read, write and delete.

So we will see here how we can perform these operations using AWS CLI commands.

When we create or write a file to an S3 bucket and want to read it back, S3 gives us a URL for that particular file rather than the file itself, so we can use that file URL anywhere in web applications.
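For example, for a private object you can generate a time-limited URL with the presign command (the bucket and file names here are placeholders):

>> aws s3 presign s3://mynewbucket/test.txt --expires-in 3600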

#Following are the features of AWS S3 Services:
  • Internet Accessible Storage
  • Region Level Storage
  • Almost Unlimited Scalability
  • No File Locking Mechanisms
  • Supports REST & SOAP APIs
  • 99.999999999% (11 9's) durability of objects stored
  • 99.99% Availability Of Services
  • Bucket must have unique name


>> aws s3 help

Once you execute this command, it will show you all the documentation for the s3 commands. We can use this documentation to do read, write and delete operations.

Let's try them one by one...

#Create Bucket:

>> aws s3 mb s3://mynewbucket

We can use the 'mb' command above to create a new bucket, but make sure the bucket name is globally unique across the S3 service.

#How to upload new object on aws s3 bucket:

For example, we have a test.txt file in the server root. Let's see how we can copy this file to the s3 bucket.

>> aws s3 cp test.txt s3://mynewbucket

Once you execute the above command it will copy the test.txt file to the s3 storage bucket 'mynewbucket' which we created.

You can verify this copied file from AWS Management Console.

The S3 service provides multiple options for uploading files to a bucket via the 'storage-class' parameter.

Below are the options provided by 'storage-class':



#STANDARD Storage Class: 
This is the default storage class, used for dynamic websites, mobile and gaming apps, cloud applications, content distribution and big-data analysis, because it provides high durability and availability for frequently accessed data.

#STANDARD_IA Storage Class: 
Standard IA (Infrequent Access) offers the same high durability and low latency as the STANDARD storage class, at a lower storage price. This makes STANDARD_IA ideal for data that is accessed less frequently but requires rapid access when needed, such as long-term backups and disaster recovery data.

#REDUCED_REDUNDANCY Storage Class (RRS):
It is an S3 storage option which lets customers store noncritical, reproducible data at a lower level of redundancy than the STANDARD storage class. It provides a highly available solution for distributing or sharing content that is durably stored elsewhere, or for storing thumbnails, transcoded media or other processed data that can be easily reproduced.

For example: we can store app log files on RRS at a lower storage price, since they are not very critical for our organization and we only need them for troubleshooting and analysis.

The minimum billable object size in STANDARD_IA is 128 KB, compared to 1 byte with STANDARD storage; the maximum object size is 5 TB.

Let's see some more S3 commands for additional operations. Most of the commands are similar to Linux commands.

>> aws s3 ls s3://mynewbucket

This command will list all files and folders present inside bucket mynewbucket.

>> aws s3 cp test.txt s3://mynewbucket

This command will upload the test.txt file to the aws s3 mynewbucket storage.

Let's copy a file using a storage class:

Let's assume we have a file which is not critical, and we want to save money. In this case we will use STANDARD_IA or REDUCED_REDUNDANCY rather than the more expensive STANDARD storage class.

>> aws s3 cp test.txt s3://mynewbucket --storage-class STANDARD_IA


# How to move AWS S3 object using AWS CLI commands:

Here we will see how we can download and move our S3 bucket files into local storage.

>> aws s3 mv s3://mynewbucket . --recursive

This command will transfer all the files and subfolders from our bucket to the local system drive. The mv command also supports include and exclude parameters, which let us exclude or include files from the move. Using the mv command we can transfer files from the local system drive to the aws s3 bucket and vice versa.

>> aws s3 mv . s3://mynewbucket --recursive --exclude '*' --include '*.txt'

Here we exclude all files but include only the text files from our local system to upload to the s3 bucket.

# Sync AWS S3 Object Using AWS CLI:

The S3 service can synchronize files from a source to an s3 bucket and vice versa.

Let's try these commands.

First create an empty text file on our system

>> touch test2.txt 

Now sync this file with s3 bucket by using command below.

>> aws s3 sync . s3://mynewbucket

Here you can see the dot after the sync command, which points to the current source directory. Now let's see how to delete outdated files, i.e. files no longer found in our source folder, during synchronization, so that they are also deleted from our destination folder.

Let's remove the test2.txt file from the source directory using the command below.

>> rm test2.txt

Now it is deleted from the local source directory but still exists in the s3 bucket. Let's sync this deletion to the s3 bucket as well.

>> aws s3 sync . s3://mynewbucket --delete

The sync command also supports options like storage class, include/exclude, and the ACL option.

# How to use ACL option in AWS Sync command:

>> aws s3 sync . s3://mynewbucket --acl public-read

The public read access permission will be applied to the synced files by the --acl option in the command above.


# How to delete AWS S3 bucket using AWS CLI Command:

Let's see how to remove the bucket that we created earlier.

>> aws s3 rb s3://mynewbucket

But before removing the bucket, make sure you first remove all files and folders inside it, otherwise it will throw an error.

Let's first delete all objects in that bucket.

>> aws s3 rm s3://mynewbucket --recursive

--recursive deletes all objects in the bucket in one command rather than deleting files one by one.
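Alternatively, the rb command accepts a --force flag that deletes all the objects and then the bucket itself in one step:

>> aws s3 rb s3://mynewbucket --force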

Watch Video:




Happy Learning AWS S3 Services!!!! :) Still have doubts? Put your questions in the comment box below! Thanks!

Apr 24, 2018

How to clear/invalidates configuration | config | getStoreConfig cache programatically in Magento


Magento generally caches all configuration values which we save from the admin or programmatically.

If we replace an old configuration attribute value with a new one and Magento still returns the old value when fetching it, the reason is that Magento has cached the configuration values on the server.

We can refresh the cache by using the reinit method before fetching the value. Please see the example below.


Mage::app()->getConfig()->reinit(); //to clear/invalidates configuration cache

$getConfigValue = Mage::getStoreConfig('test/testSetting/testID');

Apr 22, 2018

AWS CLI Commands For AWS EC2 (Amazon Elastic Compute Cloud)


In this post we will learn how to use AWS CLI commands to configure the AWS EC2 service (Amazon Elastic Compute Cloud) together with its supplementary components.

AWS EC2 is one of the most fundamental IaaS (Infrastructure as a Service) offerings provided by AWS. It allows users to rent raw compute power in the shape of virtual machines on which they can run their own applications. It provides secure, scalable, resizable and robust computing capacity in the cloud.

It is very affordable for developers and small startups. We can rent EC2 capacity as per our requirements and terminate it within minutes, instead of days or hours, whenever it is no longer required. We can create any number of EC2 instances and pay only for what we use.

There are multiple ways in which an EC2 instance can be created and deployed into the AWS cloud:
  • AWS CLI
  • AWS Management Web Console
  • AWS SDK
  • AWS API

Below are the EC2 components which we need to understand.
  • AWS EC2 Key Pair
  • Security Groups
  • AWS AMI
  • VPC & Subnets
  • Elastic IP Address




Key Pair: 
We need to create a key pair; it allows us to connect to our EC2 instance securely.

Let's see how to create a new key pair using the AWS CLI. You can use the command below.

>> aws ec2 create-key-pair --key-name MyKeyPair

Once the key has been generated, AWS does not provide access to the private key part of the key pair again. So we need to download the private key at key pair creation time and keep it in a safe and secure place.

Let's create a key pair and save the private key in a file securely.

>> aws ec2 create-key-pair --key-name MyKeyPair --query "KeyMaterial" --output text > testkey.pem

After pressing enter you will see the testkey.pem file has been created in the current path; type the 'ls' command to check.

>> ls
OutPut>> testkey.pem

This file contains our private key; make sure you save it in a secure location.
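Also restrict the file permissions, otherwise the SSH client will refuse to use the key:

>> chmod 400 testkey.pem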

Security Groups: 
They control which ports and protocols allow or disallow traffic towards the EC2 instance. These groups are similar to a firewall, in which we can add networking rules to control traffic.

Let's create a security group using the AWS CLI.

>> aws ec2 create-security-group

There are two types of security groups we can create

EC2 Classic and EC2-VPC

EC2-Classic is the old way to create a group; with it our instances run in a single flat network shared with other AWS customers. AWS later released EC2-VPC security groups, with which our instances run inside a virtual private cloud, logically isolated from other AWS customers. Hence EC2-VPC is more secure than EC2-Classic.

To create an EC2-VPC security group we need the VPC ID, which we can take from the default VPC created by AWS. We can get this ID from the AWS Management Console.

>> aws ec2 create-security-group --group-name testgroup --description "Test Description" --vpc-id 'EnterVPCID'

Output>>
{
    "GroupId": "sg-3....."
}
Let's verify the newly created security group as follows.

>> aws ec2 describe-security-groups --group-ids EnterGroupIDHere

Once you execute this, it will display the security group which we just created.

AMI (Amazon Machine Image): 
It is the Amazon machine image, a template for an EC2 instance which provides the information about the baseline operating system image required to launch the instance. Using an AMI we can pre-load the desired OS and server software on the EC2 instance. It also includes launch permissions that control which AWS accounts can use the AMI to launch instances. With AMIs we can launch various types of EC2 instances: various flavors of Linux, Windows, and other server setups like web servers and database servers. You can search for an AMI that meets the criteria for your EC2 instance.

To launch an instance we need an AMI ID, which we can get from the AWS Management Console.

VPC (Virtual Private Cloud) & Subnets: 
These are the basic building blocks of the networking infrastructure within the AWS cloud. All EC2 instances are assigned to a VPC and subnet for secure communication with other AWS components. We will use the default VPC and subnets in what follows.

Deploy AWS EC2 Instance:
Now we are ready to launch a new EC2 instance with the information we prepared above. Let's see how we can use all this information to create an instance using the AWS CLI.

>> aws ec2 run-instances --image-id enterAmiIdHere --count 1 --instance-type t2.micro --key-name MyKeyPair --security-group-ids enterSecurityGroupIdHere --subnet-id enterSubnetIdHere

Let's see all these parameters in detail, one by one.

image-id : The AMI ID; a default one is provided by AWS for our account, and we can get it from the AWS Management Console.
instance-type : The type of instance we want to create
key-name : The key pair we created earlier
security-group-ids : The security group ID we created
subnet-id : The default subnet ID; we can get it from the AWS Management Console.

Full deployment of the EC2 instance will take some time. You can verify it using the AWS CLI command below.

>> aws ec2 describe-instances

It will show the status 'pending', as it takes some time to activate the instance.
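Instead of polling describe-instances manually, you can block until the instance is up (the instance ID is a placeholder):

>> aws ec2 wait instance-running --instance-ids enterInstanceIdHere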

Elastic IP Address:
We can allocate an elastic IP address for our EC2 instance so that it has a static IP address forever. An elastic IP address is a public IP address reachable from the Internet. We can associate any elastic IP address with any EC2 instance. If we don't have an elastic IP address, then whenever we reboot or stop and start the EC2 instance it gets a new public DNS/IP address, which is neither recommended nor user friendly.

Let's see how to allocate an elastic IP address and associate it with the EC2 instance.

>> aws ec2 allocate-address

OutPut>>
{
    "PublicIp": "52......",
    "Domain": "vpc",
    "AllocationId": "eipalloc-od-----"
}

Note: We need to store the allocation ID to associate the address with the EC2 instance.

Now let's associate the new IP address with our EC2 instance.

>> aws ec2 associate-address --instance-id enterInstanceIdHere --allocation-id enterAllocationIdHere

OutPut>> { "AssociationId":"eipassoc-el7-------" }

To verify everything has been set up as expected, we can run the command below.

>> aws ec2 describe-instances

and you will see all the information about our newly created EC2 instance and its elastic IP address.


What is user data:
User data is usually provided when we launch EC2 instances. It supplies custom data to the instance, which can be used to perform common automated configuration tasks and even run scripts after the instance starts.

>> aws ec2 run-instances --image-id enterImageIDHere --count 1 --instance-type t2.micro --key-name MyKeyPair --security-group-ids enterSecurityGroupIdHere --subnet-id enterSubnetIdHere --user-data '#!/bin/bash
sudo apt-get install -y nginx'

This command will install the nginx server automatically as our EC2 instance launches (note the user data starts with #! so that it is run as a script at first boot).

How to terminate EC2 instance using AWS CLI:
>> aws ec2 terminate-instances --instance-id enterInstanceIdHere

Once you execute this command you will see a termination message in the output like the one below.

Output>> { "TerminatingInstances":{{ "InstanceId":"InstanceIdWillDisplayHere" }} }

Watch Video: