Apr 28, 2018

AWS CLI Commands For AWS S3 Services



AWS S3 stands for Amazon Simple Storage Service, which provides the facility to store any data in the cloud and access it anywhere on the web. S3 stores data as objects within buckets. A bucket is a logical grouping of objects (files/folders). S3 provides three operations on bucket objects: read, write and delete.

In this post we will see how to perform these operations using AWS CLI commands.

When we write a file to an S3 bucket, S3 gives us a URL for that particular file rather than the file itself, so we can read the object from that URL and use it anywhere in our web applications.
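For example, once test.txt is uploaded, it is typically reachable at a virtual-hosted-style URL such as https://mynewbucket.s3.amazonaws.com/test.txt (assuming the object has public-read access). For private objects, the CLI can generate a temporary pre-signed URL:

>> aws s3 presign s3://mynewbucket/test.txt --expires-in 3600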

#Following are the features of AWS S3 Services:
  • Internet Accessible Storage
  • Region Level Storage
  • Almost Unlimited Scalability
  • No File Locking Mechanisms
  • Supports REST & SOAP APIs
  • 99.999999999% Durability Of Stored Objects
  • 99.99% Availability Of Services
  • Bucket Names Must Be Globally Unique


>> aws s3 help

Once you execute this command, it will show all documentation related to S3 services. We can use this documentation to perform read, write and delete operations.

Let's try them one by one...

#Create Bucket:

>> aws s3 mb s3://mynewbucket

We can use the 'mb' (make bucket) command above to create a new bucket, but make sure the bucket name is universally unique across the S3 service.
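By default the bucket is created in the region configured for the CLI; to create it elsewhere, pass the global --region option (us-west-2 below is just an example):

>> aws s3 mb s3://mynewbucket --region us-west-2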

#How to upload new object on aws s3 bucket:

For example, we have a test.txt file at the server root. Let's see how to copy this file to the S3 bucket.

>> aws s3 cp test.txt s3://mynewbucket

Once you execute the above command, it will copy the 'test.txt' file to the S3 storage bucket 'mynewbucket' which we created.

You can verify this copied file from AWS Management Console.

The S3 service provides multiple options for uploading files to a bucket via the '--storage-class' parameter.

Below are the options provided by '--storage-class':



#STANDARD Storage Class: 
This is the default storage class, used for dynamic websites, mobile and gaming apps, cloud applications, content distribution and big-data analysis. It provides high durability and availability for frequently accessed data.

#STANDARD_IA Storage Class: 
STANDARD_IA (Infrequent Access) offers the same high durability and low latency as the STANDARD storage class, but at a lower storage price (with a per-retrieval charge). It is intended for data that is accessed less frequently but requires rapid access when needed, which makes it ideal for long-term backups and disaster-recovery data stores.

#REDUCED_REDUNDANCY Storage Class (RRS):
This is an S3 storage option that lets customers store noncritical, reproducible data at a lower level of redundancy than the STANDARD storage class. It provides a highly available solution for distributing or sharing content that is durably stored elsewhere, or for storing thumbnails, transcoded media and other processed data that can be easily reproduced.

For example: we can store app log files on RRS at a low storage price, since they are not very critical for our organization and we only need them for troubleshooting and analysis purposes.

The minimum billable object size in STANDARD_IA is 128 KB, compared to 1 byte with STANDARD storage; the maximum object size for both is 5 TB.

Let's see some more S3 commands we can use for additional operations. Most of the commands are similar to Linux commands.

>> aws s3 ls s3://mynewbucket

This command will list all files and folders present inside the bucket 'mynewbucket'.

>> aws s3 cp test.txt s3://mynewbucket

This command will upload the test.txt file to the AWS S3 'mynewbucket' storage.

Let's copy files using a storage class:

Let's assume we have a file which is neither critical nor important, and we want to save money. In this case we will use STANDARD_IA or REDUCED_REDUNDANCY rather than the more expensive STANDARD storage class.

>> aws s3 cp test.txt s3://mynewbucket --storage-class STANDARD_IA
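To confirm which storage class an object actually landed in, we can inspect its metadata with the lower-level s3api commands; head-object reports a StorageClass field for non-STANDARD objects:

>> aws s3api head-object --bucket mynewbucket --key test.txt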


# How to move AWS S3 object using AWS CLI commands:

Here we will see how to download and move files from our S3 bucket to local storage.

>> aws s3 mv s3://mynewbucket . --recursive

This command will transfer all files and subfolders from our bucket to the local system drive. The mv command also supports --include and --exclude parameters, which let us exclude or include files in the move. Using the mv command we can transfer files from the local system drive to an AWS S3 bucket and vice versa.

>> aws s3 mv . s3://mynewbucket --recursive --exclude '*' --include '*.txt'

Here we exclude all files but include only text files from our local system to upload to the S3 bucket.

# Sync AWS S3 Object Using AWS CLI:

The S3 service lets us synchronize files from a source to an S3 bucket and vice versa.

Let's try these commands.

First, create an empty text file on our system:

>> touch test2.txt 

Now sync this file with the S3 bucket using the command below.

>> aws s3 sync . s3://mynewbucket

Here you can see the dot after the sync command; it points to the current source directory. Now let's see how to handle outdated files: files that are no longer found in our source folder during synchronization should also be deleted from the destination bucket.

Let's remove the test2.txt file from the source directory using the command below.

>> rm test2.txt

Now it is deleted from the local source directory but still exists in the S3 bucket. Let's sync this deletion to the S3 bucket as well.

>> aws s3 sync . s3://mynewbucket --delete

The sync command also supports the --storage-class, --include/--exclude and --acl options.

# How to use ACL option in AWS Sync command:

>> aws s3 sync . s3://mynewbucket --acl public-read

The above command applies the public-read access permission to the synced files via the --acl option.
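To verify the resulting permissions on an object, we can read its ACL back via the s3api layer:

>> aws s3api get-object-acl --bucket mynewbucket --key test.txt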


# How to delete AWS S3 bucket using AWS CLI Command:

Let's see how to remove the bucket that we created earlier.

>> aws s3 rb s3://mynewbucket

But before removing the bucket, make sure you have first removed all files and folders inside it, otherwise the command will throw an error.

Let's first delete all objects in that bucket.

>> aws s3 rm s3://mynewbucket --recursive

--recursive deletes all objects under that bucket in one command rather than requiring us to delete files one by one.
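As a shortcut, the rb command also accepts a --force flag, which empties the bucket and then removes it in a single step:

>> aws s3 rb s3://mynewbucket --force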

Happy Learning AWS S3 Services!!!! :) Still have doubts? Put your questions in the comment box below! Thanks!

Apr 24, 2018

How to clear/invalidate configuration | config | getStoreConfig cache programmatically in Magento


Magento generally caches all configuration values that we save from the admin panel or programmatically.

If we change an old configuration attribute value to a new one, Magento may still return the old value when we fetch it. The reason is that Magento has cached the configuration values on the server.

We can refresh the cache by calling the reinit method before fetching the value. Please see the example below.


// Clear/invalidate the configuration cache so stale values are discarded
Mage::app()->getConfig()->reinit();

// Fetch the (now fresh) configuration value
$getConfigValue = Mage::getStoreConfig('test/testSetting/testID');

Apr 22, 2018

AWS CLI Commands For AWS EC2 (Amazon Elastic Compute Cloud)


In this post we will learn how to use AWS CLI commands to configure and set up the AWS EC2 service (Amazon Elastic Compute Cloud) along with its supplementary components.

AWS EC2 is one of the most fundamental IaaS (Infrastructure as a Service) offerings provided by AWS. It allows users to rent raw compute power in the shape of virtual machines on which they can run their own applications. It provides secure, scalable, resizable & robust computing capacity in the cloud.

It is very affordable for developers & small startups. We can rent EC2 instances as per our requirements and terminate them within minutes, instead of days or hours, whenever they are no longer required. We can create any number of EC2 instances and pay only for what we use.

There are multiple ways in which an EC2 instance can be created and deployed into the AWS cloud.
  • AWS CLI
  • AWS Management Web Console
  • AWS SDK
  • AWS API

Below are the EC2 components which we need to understand.
  • AWS EC2 Key Pair
  • Security Groups
  • AWS AMI
  • VPC & Subnets
  • Elastic IP Address




Key Pair: 
We need to create a key pair; it allows us to connect to our EC2 instance securely.

Let's see how to create a new key pair using the AWS CLI. You can use the command below.

>> aws ec2 create-key-pair --key-name MyKeyPair

Once the key has been generated, AWS does not provide access to the private-key part of the key pair again. So we need to download the private key at key-pair generation time for future use and keep it in a safe & secure place.

Let's create a key pair and save the private key to a file securely.

>> aws ec2 create-key-pair --key-name MyKeyPair --query 'KeyMaterial' --output text > MyKeyPair.pem

The --query 'KeyMaterial' filter is important here: without it, the text output also includes the key name and fingerprint, which would corrupt the .pem file. After pressing enter you will see that the MyKeyPair.pem file has been created in the current path; type the 'ls' command to check.

>> ls
OutPut>> MyKeyPair.pem

This file contains our private key, so make sure you save it in a secure location.
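SSH will refuse to use a key file that is readable by others, so it is also worth locking down the file permissions right away (a standard Linux step, not specific to AWS):

>> chmod 400 MyKeyPair.pem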

Security Groups: 
A security group controls which ports and protocols allow or disallow traffic towards an EC2 instance. These groups are similar to a firewall, in which we can add networking rules to control traffic.

Let's create a security group using the AWS CLI; the subcommand is create-security-group (its required parameters appear in the full example below).

>> aws ec2 create-security-group

There are two types of security groups we can create:

EC2-Classic and EC2-VPC

EC2-Classic is the old way to create groups; with it, our instances run in a single flat network shared with other AWS customers. AWS later released a new version of the EC2 security-groups service in which our instances run inside a virtual private cloud, meaning they are logically isolated from other AWS customers. Hence EC2-VPC is more secure than EC2-Classic.

To create an EC2-VPC security group we need a VPC ID, which we can take from the default VPC created by AWS. We can get this ID from the AWS Management Console.

>> aws ec2 create-security-group --group-name testgroup --description "Test Description" --vpc-id 'EnterVPCID'

Output>>
{
    "GroupId": "sg-3....."
}
Let's verify the newly created security group as follows.

>> aws ec2 describe-security-groups --group-ids EnterGroupIDHere

Once you execute this, it will display the security group which we just created.
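A newly created security group allows no inbound traffic at all, so before connecting to an instance we typically open the ports we need. For example, to open SSH (port 22) to the world (the group ID below is a placeholder):

>> aws ec2 authorize-security-group-ingress --group-id EnterGroupIDHere --protocol tcp --port 22 --cidr 0.0.0.0/0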

AMI (Amazon Machine Image): 
An AMI is a machine image and template for an EC2 instance, which provides the baseline operating-system image required to launch the instance. Using an AMI we can pre-load the desired OS and server software on the EC2 instance. It also includes launch permissions that control which AWS accounts can use the AMI to launch instances. Using AMIs we can launch various types of EC2 instances: various flavors of Linux, Windows, and other server applications such as web servers and database servers. You can search for an AMI that matches the criteria for your EC2 instance.

To launch an instance we need an AMI ID, which we can get from the AWS Management Console.
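The AMI catalog can also be searched from the CLI with describe-images. As a rough sketch, this looks up the most recently created Amazon-owned Amazon Linux HVM image (the name pattern is an assumption and changes over time):

>> aws ec2 describe-images --owners amazon --filters "Name=name,Values=amzn-ami-hvm-*" --query 'sort_by(Images, &CreationDate)[-1].ImageId'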

VPC (Virtual Private Cloud) & Subnets: 
These are the basic building blocks of the networking infrastructure within the AWS cloud. Every EC2 instance is assigned to one of these VPCs and subnets for secure communication with other AWS components. We will use the default VPC and subnets in what follows.
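The IDs of the default VPC and its subnets can also be looked up from the CLI instead of the console:

>> aws ec2 describe-vpcs --filters "Name=isDefault,Values=true"
>> aws ec2 describe-subnets --filters "Name=vpc-id,Values=EnterVPCID"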

Deploy AWS EC2 Instance:
Now we are ready to launch a new EC2 instance with the information we prepared above. Let's see how to use all this information to create an instance using the AWS CLI.

>> aws ec2 run-instances --image-id enterAmiIdHere --count 1 --instance-type t2.micro --key-name MyKeyPair --security-group-ids enterSecurityGroupIdHere --subnet-id enterSubnetIdHere

Let's look at these parameters one by one.

image-id: the AMI ID, created by default by AWS for our account; we can get it from the AWS Management Console.
instance-type: the type of instance we want to create.
key-name: the key pair we created earlier.
security-group-ids: the security group ID we created.
subnet-id: the default subnet ID; we can get it from the AWS Management Console.

Full deployment of the EC2 instance will take some time. You can verify its status using the AWS CLI command below.

>> aws ec2 describe-instances

It will show the status 'pending', as it takes some time to activate the instance.
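The raw describe-instances output is quite verbose; a JMESPath --query filter keeps just the fields we care about, for example:

>> aws ec2 describe-instances --query 'Reservations[].Instances[].[InstanceId,State.Name]' --output text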

Elastic IP Address:
We can create an elastic IP address for our EC2 instance so that it keeps a static IP address permanently. An elastic IP address is a public IP address reachable from the Internet, and we can associate any elastic IP address with any EC2 instance. If we don't have an elastic IP address, then whenever we stop and start the EC2 instance it is assigned a new public IP address and DNS name, which is neither recommended nor user friendly.

Let's see how to create an elastic IP address and associate it with an EC2 instance.

>> aws ec2 allocate-address

OutPut>>
{
    "PublicIp": "52......",
    "Domain": "vpc",
    "AllocationId": "eipalloc-od-----"
}

Note: we need to store the allocation ID to associate the address with our EC2 instance.

Now let's associate the new IP address with our EC2 instance.

>> aws ec2 associate-address --instance-id enterInstanceIdHere --allocation-id enterAllocationIdHere

OutPut>> { "AssociationId":"eipassoc-el7-------" }

To verify everything is set up as expected, we can run the command below.

>> aws ec2 describe-instances

and you will see all the information about our newly created EC2 instance and its elastic IP address.


What is user data:
User data is usually supplied when we launch EC2 instances. It provides custom data to the instance, which can be used to perform common automated configuration tasks and even run scripts after the instance starts.

>> aws ec2 run-instances --image-id enterImageIDHere --count 1 --instance-type t2.micro --key-name MyKeyPair --security-group-ids enterSecurityGroupIdHere --subnet-id enterSubnetIdHere --user-data "sudo apt-get install -y nginx"

This command will install the Nginx server automatically as our EC2 instance launches within the EC2 cloud.
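Note that cloud-init only executes user data as a script when it begins with a shebang line, so a more reliable pattern is to keep the commands in a small script file and pass it with file:// (install-nginx.sh here is a hypothetical local file):

>> cat install-nginx.sh
#!/bin/bash
apt-get update -y
apt-get install -y nginx

>> aws ec2 run-instances --image-id enterImageIDHere --count 1 --instance-type t2.micro --key-name MyKeyPair --security-group-ids enterSecurityGroupIdHere --subnet-id enterSubnetIdHere --user-data file://install-nginx.sh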

How to terminate EC2 instance using AWS CLI:
>> aws ec2 terminate-instances --instance-ids enterInstanceIdHere

Once you execute this command, you will see a termination message in the output like below.

Output>> { "TerminatingInstances": [ { "InstanceId": "InstanceIdWillDisplayHere" } ] }


Apr 8, 2018

AWS Testing Permissions With Dry Run Option & Testing Functionality With JMESPath Terminal


#AWS Testing Permissions With Dry Run Option:

How can we verify whether the current IAM user has the permissions needed to perform operations via the AWS CLI?

To check this, AWS provides a feature: the dry-run option.

Let's see how we can use this feature...

>> aws ec2 describe-regions --dry-run

Output>> An error occurred (DryRunOperation) when calling the DescribeRegions operation: Request would have succeeded, but DryRun flag is set.

The output statement above tells us that the IAM user we are using has the necessary permissions to execute this command. Since we executed the command with the --dry-run flag, it does not produce real output, but it shows an informational message confirming that the IAM user has permission to run this command.
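For contrast, if the IAM user lacked the required permission, the same dry-run call would fail with an UnauthorizedOperation error instead, along these lines (exact wording may vary):

Output>> An error occurred (UnauthorizedOperation) when calling the DescribeRegions operation: You are not authorized to perform this operation.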

#Testing Functionality With JMESPath Terminal:

The JMESPath terminal is a JMESPath expression tool which runs in the terminal. This tool helps us write query expressions for AWS CLI commands.

To install this tool, type the command below and press enter.

>> sudo pip install jmespath-terminal

Let's check out the features this tool provides for the AWS CLI...

>> aws ec2 describe-regions | jpterm

In the example above, we pipe the output of the command on the left side of the pipe separator into the JMESPath terminal window. You will see that jpterm has the two panes below.

1: Left pane (Input JSON) 
2: Right pane (JMESPath Result)

In the left pane you can see nicely formatted JSON output, and in the right pane we can write a query to filter the output.




So instead of manually checking, editing and repeatedly rechecking a JMESPath expression by trial and error, using the JMESPath terminal is the best way to develop query expressions.

JMESPath Expression: Regions[?RegionName=='us-west-2']

Once you run the above expression in the JMESPath terminal, it will filter the result and display the output in the right pane.
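Once an expression works in jpterm, the same JMESPath can be used directly with the AWS CLI's --query option, no extra tooling required:

>> aws ec2 describe-regions --query "Regions[?RegionName=='us-west-2']"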

To exit the JMESPath terminal, use "F5" or "Ctrl + C".