Tuesday 1 May 2012

How to create an instance in Amazon AWS

If the current AWS setup runs with instance-store root devices, the instances have no permanent storage: if anyone shuts an instance down by mistake, all its data is lost.

We want an EBS-backed EC2 instance instead, which has permanent storage, so the data survives even if someone shuts the instance down.


Check the screenshots below to create an Amazon instance (captions listed; the images are shown in order):

1. EC2
2. ec2
3. NewInstance
4. Select-an-ebs-os
5. Launch-Instance
6. launching-instance
7. Setting-key&value
8. Generating key-pair
9. Setting-firewall-rules
10. Final-instance-before-launch
11. Instance-details-with-local-ip-and-publicDNS
12. Launching-instances
13. ssh-to-the-server
14. ssh-error-bcz-of-permission
15. Inside-amazon-instance

Use the .pem file (here ctechzkey.pem) when you ssh into the server.

Public DNS:     ec2-107-21-185-85.compute-1.amazonaws.com

# ping ec2-107-21-185-85.compute-1.amazonaws.com

# ssh -i ctechz/ctechzkey.pem root@ec2-107-21-185-85.compute-1.amazonaws.com
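If ssh fails with an "unprotected private key file" error (screenshot 14 above), the .pem file's permissions are too loose. A minimal sketch of the fix; the `touch` here only stands in for the key you already downloaded:

```shell
# ssh rejects private keys that other users can read
# ("WARNING: UNPROTECTED PRIVATE KEY FILE!"), so make the
# .pem owner read-only before connecting.
touch ctechzkey.pem          # stand-in; in practice the key already exists
chmod 400 ctechzkey.pem

# then the ssh command from above succeeds:
#   ssh -i ctechzkey.pem root@ec2-107-21-185-85.compute-1.amazonaws.com
```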

Amazon AWS Tips

1. Explain Elastic Block Storage?  What type of performance can you expect?
    How do you back it up?  How do you improve performance?

EBS is a virtualized SAN or storage area network.  That means it is RAID storage to start with so it’s redundant and fault tolerant.  If disks die in that RAID you don’t lose data.  Great!  It is also virtualized, so you can provision and allocate storage, and attach it to your server with various API calls.  No calling the storage expert and asking him or her to run specialized commands from the hardware vendor.

Performance on EBS can exhibit variability.  That is it can go above the SLA performance level, then drop below it.  The SLA provides you with an average disk I/O rate you can expect.  This can frustrate some folks especially performance experts who expect reliable and consistent disk throughput on a server.  Traditional physically hosted servers behave that way.  Virtual AWS instances do not.

Back up EBS volumes by using the snapshot facility, via API call or via a GUI interface like ElasticFox.

Improve performance by using Linux software RAID and striping across four volumes.
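The four-volume stripe mentioned above can be sketched with mdadm. This is a sketch under assumptions, not a tested recipe: /dev/sdf through /dev/sdi are made-up device names for four attached EBS volumes, and the commands need root.

```shell
# Stripe (RAID 0) four attached EBS volumes into one faster device.
# /dev/sdf..sdi are assumed names -- check `fdisk -l` for yours.
mdadm --create /dev/md0 --level=0 --raid-devices=4 \
      /dev/sdf /dev/sdg /dev/sdh /dev/sdi

mkfs.ext3 /dev/md0       # format the striped array
mkdir -p /data
mount /dev/md0 /data     # then use it like any block device
```

Note the trade-off: RAID 0 multiplies throughput but a failure of any one volume loses the whole array, so snapshots matter even more.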

2. What is S3?  What is it used for?  Should encryption be used?

S3 stands for Simple Storage Service.  You can think of it like FTP storage: you can move files to and from it, but not mount it like a filesystem.  AWS automatically puts your snapshots there, as well as your AMIs.  Encryption should be considered for sensitive data, as S3 is a proprietary technology developed by Amazon itself, and as yet unproven from a security standpoint.
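Moving files in and out really is FTP-like in practice. A quick sketch with the s3cmd tool; the bucket name is a made-up placeholder and the commands need your AWS credentials configured:

```shell
# create a bucket, upload, download, and list -- s3cmd syntax;
# "ctechz-backups" is a placeholder bucket name
s3cmd mb s3://ctechz-backups
s3cmd put backup.tar.gz s3://ctechz-backups/backup.tar.gz
s3cmd get s3://ctechz-backups/backup.tar.gz restored.tar.gz
s3cmd ls s3://ctechz-backups
```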

3. What is an AMI?  How do I build one?

AMI stands for Amazon Machine Image.  It is effectively a snapshot of the root filesystem.  Commodity hardware servers have a BIOS that points to the master boot record of the first block on a disk.  A disk image, though, can sit anywhere physically on a disk, so Linux can boot from an arbitrary location on the EBS storage network.

Build a new AMI by first spinning up an instance from a trusted AMI, then adding packages and components as required.  Be wary of putting sensitive data onto an AMI.  For instance, your access credentials should be added to an instance after spin-up.  With a database, mount an outside volume that holds your MySQL data after spin-up as well.
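For an EBS-backed instance, the capture step can be a single API call. A sketch with the EC2 API tools; the instance ID and name are placeholders:

```shell
# After installing your packages and removing any credentials on
# instance i-dddd4444 (placeholder ID), snapshot it into a new
# EBS-backed AMI:
ec2-create-image i-dddd4444 --name "webserver-base-v1" \
    --description "trusted base + our packages, no credentials baked in"
```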

4. Can I vertically scale an Amazon instance?  How?

Yes.  This is an incredible feature of AWS and cloud virtualization.  Spin up a new, larger instance than the one you are currently running.  Pause that instance, detach its root EBS volume and discard it.  Then stop your live instance and detach its root volume.  Note the unique device ID, attach that root volume to your new server, and start it again.  Voila, you have scaled vertically in place!
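With the EC2 API tools of the day, the swap looks roughly like this. The instance and volume IDs are made-up placeholders, and flag spellings may differ between tool versions:

```shell
# stop the small live instance and free its root volume
ec2-stop-instances i-aaaa1111
ec2-detach-volume vol-bbbb2222

# attach that root volume to the new, larger (stopped) instance
# as its boot device, then start it
ec2-attach-volume vol-bbbb2222 -i i-cccc3333 -d /dev/sda1
ec2-start-instances i-cccc3333
```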

5. What is auto-scaling?  How does it work?

Autoscaling is a feature of AWS which allows you to configure and automatically provision and spinup new instances without the need for your intervention.  You do this by setting thresholds and metrics to monitor.  When those thresholds are crossed a new instance of your choosing will be spun up, configured, and rolled into the load balancer pool.  Voila you’ve scaled horizontally without any operator intervention!

6. What automation tools can I use to spinup servers?

The most obvious way is to roll your own scripts using the AWS API tools.  Such scripts could be written in bash, Perl or another language of your choice.  The next option is to use a configuration management and provisioning tool like Puppet or, better, its successor Opscode Chef.  You might also look towards a tool like Scalr.  Lastly, you can go with a managed solution such as RightScale.

7. What is configuration management?  Why would I want to use it with cloud
    provisioning of resources?

Configuration management has been around for a long time in web operations and systems administration.  Yet the cultural popularity of it has been limited.  Most systems administrators configure machines as software was developed before version control – that is manually making changes on servers.  Each server can then and usually is slightly different.  Troubleshooting though is straightforward as you login to the box and operate on it directly.  Configuration management brings a large automation tool into the picture, managing servers like strings of a puppet.  This forces standardization, best practices, and reproducibility as all configs are versioned and managed.  It also introduces a new way of working which is the biggest hurdle to its adoption.

Enter the cloud, and configuration management becomes even more critical.  That's because virtual servers such as Amazon's EC2 instances are much less reliable than physical ones.  You absolutely need a mechanism to rebuild them as-is at any moment.  This pushes best practices like automation, reproducibility and disaster recovery into center stage.

8. Explain how you would simulate perimeter security using Amazon Web
    Services model?

Traditional perimeter security that we’re already familiar with using firewalls and so forth is not supported in the Amazon EC2 world.  AWS supports security groups.  One can create a security group for a jump box with ssh access – only port 22 open.  From there a webserver group and database group are created.  The webserver group allows 80 and 443 from the world, but port 22 *only* from the jump box group.  Further the database group allows port 3306 from the webserver group and port 22 from the jump box group.  Add any machines to the webserver group and they can all hit the database.  No one from the world can, and no one can directly ssh to any of your boxes.

Want to further lock this configuration down?  Only allow ssh access from specific IP addresses on your network, or allow just your subnet.
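The three-group layout described above can be expressed with the EC2 command-line tools. This is a sketch: the group names and the account ID 111122223333 are placeholders, and exact flag spellings vary between tool versions.

```shell
ec2-add-group jumpbox -d "ssh jump box"
ec2-add-group websrv  -d "web servers"
ec2-add-group dbsrv   -d "database servers"

# jump box: ssh from anywhere (tighten -s to your office subnet)
ec2-authorize jumpbox -P tcp -p 22 -s 0.0.0.0/0

# web servers: http/https from the world, ssh only from the jump box
ec2-authorize websrv -P tcp -p 80  -s 0.0.0.0/0
ec2-authorize websrv -P tcp -p 443 -s 0.0.0.0/0
ec2-authorize websrv -P tcp -p 22  -o jumpbox -u 111122223333

# database: mysql only from web servers, ssh only from the jump box
ec2-authorize dbsrv -P tcp -p 3306 -o websrv  -u 111122223333
ec2-authorize dbsrv -P tcp -p 22   -o jumpbox -u 111122223333
```

Because the web and database rules reference source *groups* rather than IP ranges, any machine added to the webserver group automatically gets database access, with nothing reachable from the outside world except ports 80/443.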

9. Scalability / Scaling
 
Scaling is the action taken to increase the capacity of a software environment to address performance problems.

Scalability is the ability of a system, network, or process to handle a growing amount of work in a capable manner, or its ability to be enlarged to accommodate that growth.

A system whose performance improves after adding hardware, proportionally to the capacity added, is said to be a scalable system.

10. What is vertical scaling & horizontal scaling?

Scaling is the action taken to increase the capacity of a software environment to address performance problems.

Methods of adding more resources for a particular application fall into two broad categories: scaling horizontally (scale out) and scaling vertically (scale up).

Suppose your order entry system is running slowly,

* To vertically scale that system, you would go out and buy more CPUs and put them in the order entry server to make it go faster.

* To horizontally scale, you go out and buy more machines and add them to the network of machines that run the order entry system.

The trick is that you can’t always vertically scale, and you can’t always horizontally scale. A blend of each is usually appropriate.

 * Vertical scaling

Vertical scaling, also described as scale up, typically refers to adding more processors and storage to a symmetric multiprocessing (SMP) system to extend processing capability. Generally, this form of scaling employs only one instance of the operating system.

To scale vertically (or scale up) means to add resources to a single node in a system, typically involving the addition of CPUs or memory to a single computer. 

* Horizontal scaling

Horizontal scaling, or scale out, usually refers to tying multiple independent computers together to provide more processing power. Horizontal scaling typically implies multiple instances of the operating system, residing on separate servers.

Scaling horizontally means adding more instances to your pool of machines. When scaling horizontally, you also have to look at what the load balancer can handle.


Amazon AWS Terms

 Amazon Web Services (AWS)
 *******************************
With the Cloud, businesses no longer need to plan for and procure servers and other IT infrastructure weeks or months in advance. Instead, they can instantly spin up hundreds or thousands of servers in minutes and deliver results faster.

Amazon Elastic Compute Cloud (Amazon EC2)
************************************************
Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud. It is designed to make web-scale computing easier for developers.

Amazon EC2 enables “compute” in the cloud. Amazon EC2’s simple web service interface allows you to obtain and configure capacity with minimal friction.

Amazon EC2 reduces the time required to obtain and boot new server instances to minutes, allowing you to quickly scale capacity, both up and down, as your computing requirements change. Amazon EC2 changes the economics of computing by allowing you to pay only for capacity that you actually use.

Amazon EC2 presents a true virtual computing environment, allowing you to use web service interfaces to launch instances with a variety of operating systems, load them with your custom application environment, manage your network's access permissions, and run your image using as many or as few systems as you desire.

*  Select a pre-configured, templated image to get up and running immediately. 
    Or create an Amazon Machine Image (AMI) containing your applications,
    libraries, data, and associated configuration settings.

*  Configure security and network access on your Amazon EC2 instance.

*  Choose which instance type(s) and operating system you want, then start, 
   terminate, and monitor as many instances of your AMI as needed, using the 
   web service APIs or the variety of management tools provided.

*  Determine whether you want to run in multiple locations, utilize static IP 
   endpoints, or attach persistent block storage to your instances.

*  Pay only for the resources that you actually consume, like instance-hours or
   data transfer.

Elasticity :  Amazon EC2 enables you to increase or decrease capacity 
  within minutes, not hours or days. You can commission one, hundreds or even thousands of server instances simultaneously.  Of course, because this is all controlled with web service APIs, your application can automatically scale itself up and down depending on its needs.

Completely Controlled :  You have complete control of your instances. You have root access to each one, and you can interact with them as you would any machine.   You can stop your instance while retaining the data on your boot partition and then subsequently restart the same instance using web service APIs.   Instances can be rebooted remotely using web service APIs. You also have access to console output of your instances.

Flexible : You have the choice of multiple instance types, operating systems, and software packages.  Amazon EC2 allows you to select a configuration of memory, CPU, instance storage, and the boot partition size that is optimal for your choice of operating system and application.

Amazon Simple Storage Service (Amazon S3)
***********************************************
Amazon Simple Storage Service (Amazon S3) enables storage in the cloud.

Amazon S3 is storage for the Internet. It is designed to make web-scale computing easier for developers.

Amazon S3 provides a simple web services interface that can be used to store and retrieve any amount of data, at any time, from anywhere on the web. It gives any developer access to the same highly scalable, reliable, fast, inexpensive data storage infrastructure that Amazon uses to run its own global network of web sites. The service aims to maximize benefits of scale and to pass those benefits on to developers.

Write, read, and delete objects containing from 1 byte to 5 terabytes of data each. The number of objects you can store is unlimited.

Each object is stored in a bucket and retrieved via a unique, developer-assigned key. A bucket can be stored in one of several Regions. You can choose a Region to optimize for latency, minimize costs, or address regulatory requirements.
 
Data stored in Amazon S3 is secure by default; only bucket and object owners have access to the Amazon S3 resources they create.

Amazon Simple Queue Service (Amazon SQS)
************************************************
Amazon Simple Queue Service (Amazon SQS) offers a reliable, highly scalable hosted queue for storing messages as they travel between computers. By using Amazon SQS, developers can simply move data between distributed application components performing different tasks, without losing messages or requiring each component to be always available.

Amazon SQS works by exposing Amazon’s web-scale messaging infrastructure as a web service. Any computer on the Internet can add or read messages without any installed software or special firewall configurations.

Alexa Web Information Service (AWIS)
****************************************
The Alexa Web Information Service makes Alexa’s vast repository of information about the traffic and structure of the web available to developers.

SSH info
**********
To connect to your Linux/UNIX instance from a Windows machine, use an SSH
  client. The following instructions explain how to use PuTTY, a free SSH client 
  for Windows machines. Some of the Prerequisites are :- 

* Enable SSH traffic: before you try to connect, ensure that your Amazon EC2 instance accepts incoming SSH traffic (usually on port 22).  For more information, see Authorize Network Access to Your Instances.

* Instance ID: retrieve the Instance ID of the Amazon EC2 instance you want to access. The Instance IDs of all your instances are available in the AWS Management Console or through the CLI command ec2-describe-instances.

* Instance's public DNS: retrieve the public DNS of the Amazon EC2 instance you want to access. You can find it using the AWS Management Console or by calling the CLI command ec2-describe-instances. The format of an instance's public DNS is ec2-w-x-y-z.compute-1.amazonaws.com, where w, x, y, and z each represent a number between 0 and 255 inclusive.

* Private key: you'll need the fully qualified path of the private key file associated with your instance. For more information on key pairs, see Getting an SSH Key Pair.
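The instance-ID and public-DNS lookups above can be scripted. A sketch; the instance ID is a placeholder, and the column position in the ec2-describe-instances output is an assumption that may vary by tool version:

```shell
# list all instances with their IDs and public DNS names
ec2-describe-instances

# or pull just the public DNS for one instance; the INSTANCE
# lines are tab-separated and the DNS column position ($4 here)
# should be checked against your tool version's output
ec2-describe-instances i-aaaa1111 | awk '/^INSTANCE/ {print $4}'
```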

Amazon Machine Image (AMI)
*******************************
An Amazon Machine Image (AMI) is a special type of virtual machine image which is used to instantiate (create) a virtual machine within the Amazon Elastic Compute Cloud. It serves as the basic unit of deployment for services delivered using EC2.

Amazon Relational Database Service (Amazon RDS)
******************************************************
Amazon RDS makes it easy to set up, operate, and scale a relational database in the cloud.   It provides cost-efficient and resizable database capacity while managing time-consuming database administration tasks.

Use the AWS Management Console or Amazon RDS APIs to launch a Database Instance (DB Instance), selecting the DB Engine (MySQL or Oracle),   License Type, DB Instance class and storage capacity that best meets your needs.

Connect to your DB Instance using your favorite database tool or programming language.   Since you have direct access to a familiar MySQL or Oracle database, most tools designed for these engines should work unmodified with Amazon RDS.

Different Amazon Instances
*****************************
* On-Demand DB Instances : On-Demand DB Instances let you pay for compute 
      capacity by the hour with no long-term commitments.

This frees you from the costs and complexities of planning, purchasing, and maintaining hardware and transforms what are commonly large fixed costs
 into much smaller variable costs. On-Demand Instances also remove the need to buy safety net capacity to handle periodic traffic spikes.

* Reserved DB Instances : Reserved DB Instances give you the option to make a 
    low,  one-time payment for each DB Instance you want to reserve and in turn 
    receive a significant discount on the hourly usage charge for that Instance.

There are three Reserved Instance types (Light, Medium, and Heavy Utilization Reserved Instances) that enable you to balance the amount   you pay upfront with your effective hourly price.

* Spot Instances : Spot Instances allow customers to bid on unused Amazon EC2 capacity and run those instances for as long as their bid exceeds the current Spot Price.   The Spot Price changes periodically based on supply and demand, and customers whose bids meet or exceed it gain access to the available Spot Instances.   If you have flexibility in when your applications can run, Spot Instances can significantly lower your Amazon EC2 costs.

*  Standard Instances

* Micro Instances : Micro instances (t1.micro) provide a small amount of 
  consistent CPU resources and allow you to increase CPU capacity in short 
  bursts when additional cycles are available.

* High-Memory Instances :- Instances of this family offer large memory sizes for 
  high throughput applications, including database and memory caching 
  applications.

* High-CPU Instances :- Instances of this family have proportionally more CPU
   resources than memory (RAM) and are well suited for compute-intensive 
   applications.

* Cluster Compute Instances :- Instances of this family provide proportionally
    high CPU resources with increased network performance and are well suited 
    for High Performance Compute (HPC) applications and other demanding 
    network-bound applications.

* Cluster GPU Instances :- Instances of this family provide general-purpose
    graphics processing units (GPUs) with proportionally high CPU and increased 
    network performance for applications benefitting from highly parallelized     
    processing, including HPC, rendering and media processing applications.

Amazon Virtual Private Cloud (Amazon VPC)
***********************************************
Amazon VPC is a secure and seamless bridge between a company's existing IT infrastructure and the AWS cloud.  Amazon VPC enables enterprises to connect their existing infrastructure to a set of isolated AWS compute resources via a Virtual Private Network (VPN) connection, and to extend their existing management capabilities such as security services, firewalls,  and intrusion detection systems to include their AWS resources.

Amazon Elastic Block Store (EBS)
************************************
Amazon Elastic Block Store (EBS) offers persistent storage for Amazon EC2 instances.  Amazon EBS volumes provide off-instance storage that persists independently from the life of an instance.

Amazon EBS volumes are highly available, highly reliable volumes that can be leveraged as an Amazon EC2 instance's boot partition or attached to a running Amazon EC2 instance as a standard block device. When used as a boot partition, Amazon EC2 instances can be stopped and subsequently restarted, enabling you to only pay for the storage resources used while maintaining your instance's state. 

Amazon EBS volumes offer greatly improved durability over local Amazon EC2 instance stores, as Amazon EBS volumes are automatically replicated on the backend (in a single Availability Zone). For those wanting even more durability, Amazon EBS provides the ability to create point-in-time consistent snapshots of your volumes that are then stored in Amazon S3, and automatically replicated across multiple Availability Zones. These snapshots can be used as the starting point for new Amazon EBS volumes, and can protect your data for long term durability.   You can also easily share these snapshots with co-workers and other AWS developers.

Instant Elasticity
******************
You can instantly deploy new applications, instantly scale up as your workload grows, and instantly scale down based on demand.

Security
*********
Amazon EC2 includes web service interfaces to configure firewall settings that control network access to and between groups of instances.

 When launching Amazon EC2 resources within Amazon Virtual Private Cloud (Amazon VPC),   you can isolate your compute instances by specifying the IP range you wish to use, and connect to your existing IT infrastructure using
 industry-standard encrypted IPsec VPN. You can also choose to launch Dedicated Instances into your VPC. Dedicated Instances are Amazon EC2 Instances that run on hardware dedicated to a single customer for additional isolation.

Multiple Locations
********************
Amazon EC2 provides the ability to place instances in multiple locations. Amazon EC2 locations are composed of Regions and Availability Zones. 

Availability Zones are distinct locations that are engineered to be insulated from failures in other Availability Zones and provide inexpensive,  low latency network connectivity to other Availability Zones in the same Region. By launching instances in separate Availability Zones,   you can protect your applications from failure of a single location. Regions consist of one or more Availability Zones, are geographically dispersed,  and will be in separate geographic areas or countries.

Elastic IP Addresses
**********************
Elastic IP addresses are static IP addresses designed for dynamic cloud computing.  An Elastic IP address is associated with your account not a particular instance, and you control that address until you choose to explicitly release it. Unlike traditional static IP addresses, however, Elastic IP addresses allow you to  mask instance or Availability Zone failures by programmatically remapping your public IP addresses to any instance in your account. Rather than waiting on a data technician to reconfigure or replace your host, or waiting for DNS to propagate to all of your customers, Amazon EC2 enables you to engineer around problems with your instance or software by quickly remapping your Elastic IP address to a replacement instance. 
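The remapping described above is a single API call with the EC2 tools. A sketch; the instance IDs are placeholders and 203.0.113.25 stands in for whatever address the allocation returns:

```shell
# allocate a static address once; the command prints the address
ec2-allocate-address

# point it at your instance (IDs and address are placeholders)
ec2-associate-address -i i-aaaa1111 203.0.113.25

# instance died? remap the same address to its replacement --
# no DNS propagation wait, no data-center technician
ec2-associate-address -i i-bbbb2222 203.0.113.25
```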

Amazon CloudWatch
**********************
Amazon CloudWatch is a web service that provides monitoring for AWS cloud resources and applications, starting with Amazon EC2.   It provides you with visibility into resource utilization, operational performance, and overall demand patterns including metrics such as CPU utilization, disk reads and writes, and network traffic. You can get statistics, view graphs, and set alarms for your metric data.   To use Amazon CloudWatch, simply select the Amazon EC2 instances that you'd like to monitor.

Auto Scaling
**************
Auto Scaling allows you to automatically scale your Amazon EC2 capacity up or down according to conditions you define. With Auto Scaling, you can ensure that the number of Amazon EC2 instances you are using scales up seamlessly during demand spikes to maintain performance, and scales down automatically during demand lulls to minimize costs.  Auto Scaling is particularly well suited for applications that experience hourly, daily, or weekly variability in usage. Auto Scaling is enabled by Amazon CloudWatch and available at no additional charge beyond Amazon CloudWatch fees.

Elastic Load Balancing
************************
Elastic Load Balancing automatically distributes incoming application traffic across multiple Amazon EC2 instances.   It enables you to achieve even greater fault tolerance in your applications, seamlessly providing the amount of load balancing capacity  needed in response to incoming application traffic. Elastic Load Balancing detects unhealthy instances within a pool and automatically reroutes traffic  to healthy instances until the unhealthy instances have been restored. You can enable Elastic Load Balancing within a single Availability Zone  or across multiple zones for even more consistent application performance.

High Performance Computing (HPC) Clusters
***********************************************
Customers with complex computational workloads such as tightly coupled parallel processes, or with applications sensitive to network performance, can achieve the same high compute and network performance provided by custom-built infrastructure while benefiting from the elasticity,   flexibility and cost advantages of Amazon EC2.

VM Import
************
 VM Import enables you to easily import virtual machine images from your existing environment to Amazon EC2 instances. VM Import allows you to leverage your existing investments in the virtual machines that you have built to meet your IT security, configuration management, and compliance requirements by seamlessly bringing those virtual machines into Amazon EC2 as ready-to-use instances. This offering is available at no additional charge beyond standard usage charges for Amazon EC2 and Amazon S3.

Sharing Files and Folders between Ubuntu 11.04 (host) and a Windows Virtual Machine

1. After installing VirtualBox, install the VirtualBox Guest Additions before continuing.

2. To do so, boot the Windows virtual machine that runs inside the Ubuntu host machine, then go to the "Devices" menu and click "Install Guest Additions".
    This downloads and installs the additions into the Windows XP virtual machine.

3. Shut down the VM and click "Settings" in VirtualBox. The settings for the Windows VM are displayed.

4. Click on "Shared Folders" in the list of settings. You can see any folders already shared; otherwise, add a shared folder.

5. Click the "Add Shared Folder" button. In the window that appears, open the "Folder Path:" drop-down, select the <other> option, browse to the folder you want to share with the Windows VM and click OK. Then enable the "Auto-mount" option and click OK; the folder is added to the list of shared folders. Restart the virtual machine.

6. Map a network drive to enable sharing between Windows and Linux.
    Go to My Computer, click on Tools and then go to Map Network Drive.
    In the Map Network Drive window, assign the drive letter, click the Browse button, expand "VirtualBox Shared Folders", then expand "\\vboxsvr", select the path of the folder you want to share and click OK.

Whatever we store in this folder can be accessed both in Linux and in Windows, i.e. from the host and from the guest.

The shared folder is now displayed as a network drive in your Windows virtual machine.
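The drive mapping in step 6 can also be done from a command prompt inside the Windows guest. The share name "test" is assumed from the screenshots:

```shell
# Run from cmd.exe inside the Windows guest -- not on the Ubuntu host.
# Maps the VirtualBox share named "test" (name assumed) to drive Z:
#
#   net use Z: \\vboxsvr\test
```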

We can access this test folder from Linux as well as from Windows machines.

 To Remove this shared folder
 **********************************
1. Power off the virtual machine. Go to "Settings", then "Shared Folders", select the share and click "Remove Shared Folder".


How to add Extra storage in Virtual Box

You can add extra storage by following the steps below; after adding the newly partitioned space, format it and mount it where you want.

File -----> Virtual Media Manager ------> Add

In the VirtualBox 4.0.0 Virtual Media Manager there is no 'Add' option, so you cannot create new virtual disks for machines that are currently powered on. This is a serious problem because thick-provisioned disks can take a long time to create for large disks. To create a new virtual disk you must first power down a VM and then 'edit settings', which is blocking because you cannot edit settings while a virtual machine is running.

A VM must be powered down to create a new disk for any VM in 4.0.0.

The workaround right now is to create a 'junk VM' just so you can create a new disk for a VM that is currently running. Once the disk is provisioned, power off the virtual machine you really want the new disk for, edit its settings and attach the disk you created for the junk VM. Then remove the junk VM, selecting 'remove only' when it asks what to do with the junk VM's files.

After adding the disk, reboot, then format it as ext3 and mount the partition to a mount point.

Follow the screenshots to add extra space in the VM.


After adding the extra space you can view the new disk with:

# fdisk -l

Then partition it with fdisk (press n at the fdisk prompt to create a new partition, then w to write the partition table):

# fdisk /dev/sdb

Re-read the partition table and create a mount point:

# partprobe
# mkdir /STORAGE

Format the new partition:

# mkfs.ext3 /dev/sdb2

Add the newly created partition to fstab so it mounts at boot:

# vim /etc/fstab

Mount it:

# mount /dev/sdb2 /STORAGE
# mount -a