
Hi All,

Following up on a previous post about using AWS, this is a report after three years.

Any differences? No, my opinion has not changed: a cloud solution is better than a physical server.

The main problem during these last three years was the August 2011 power outage. During the blackout, a single availability zone was affected, with real downtime of around 48 hours. The full recovery of the zone took more than a week.

At first I thought that was a very long time, but after some reflection I understood that this kind of problem reaches the limits of our technology. I remember, some years ago, working on a physical server in a farm in Miami. One night a switch in the building started to burn, with all the consequences you can imagine. The farm was down for about 24 hours.

You can follow all the best practices you know, but there is always something unpredictable that breaks the eggs in your basket. So what is important? Having a plan B to recover everything.

An example: on a physical server you usually back up only the application data; only in a few situations do you back up every file on the server. So restoring a server means reinstalling the OS and all the user applications and reconfiguring every service. This may take a lot of time.

On AWS, this is very fast: you usually have a static copy of your server called an AMI, which is different from the running copy of your server called an Instance. They are stored in two different places in two different zones: one in AWS storage (S3), the other on the physical server. Creating an Instance from S3 takes less than 10 minutes, a big advantage compared to reinstalling a server. Creating an AMI from an Instance is easy too, although it takes a little longer. What really matters is that you have all the tools to do this, and you do not have to buy specific software or a special DAT device.
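
To give an idea of the workflow in code, here is a minimal sketch using Python and the boto3 SDK (an SDK newer than this post; the region, IDs, names and instance type are placeholders, not my real ones):

```python
# Minimal sketch with boto3 (assumption: a modern AWS SDK; all IDs are placeholders).
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Bake a static copy (AMI) from a running Instance.
image = ec2.create_image(
    InstanceId="i-0123456789abcdef0",   # placeholder instance ID
    Name="my-server-backup-2012-01-01", # placeholder image name
)
print("New AMI:", image["ImageId"])

# Later, start a fresh Instance from that AMI in a few minutes.
result = ec2.run_instances(
    ImageId=image["ImageId"],
    InstanceType="m1.small",            # placeholder instance type
    MinCount=1,
    MaxCount=1,
)
print("New instance:", result["Instances"][0]["InstanceId"])
```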

Going back to the AWS August disaster: a single zone was affected and some services went down, though not completely. In my case I lost the RAID data disk but not the instance; others lost their instances. All the restore actions took less than two hours, mainly spent moving files from snapshots to volumes and restoring the Data.fs, the Plone/Zope store.
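
The restore path, sketched the same way (again boto3; the snapshot and instance IDs are placeholders):

```python
# Minimal sketch of the restore path (boto3; IDs are placeholders).
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Create a fresh EBS volume from the last good snapshot of the data disk.
volume = ec2.create_volume(
    SnapshotId="snap-0123456789abcdef0",  # placeholder snapshot ID
    AvailabilityZone="eu-west-1a",        # must match the instance's zone
)
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

# Attach it to the instance, then mount it and copy the Data.fs back.
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",     # placeholder instance ID
    Device="/dev/sdf",
)
```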

In the end, AWS confirms my ideas: the infrastructure is very powerful, and it requires less administration time compared to a physical server.

Cheers.


This is the title of my talk about the EC2 autoscaling service. The talk is for Linux Day in Pisa, October 23rd, 2010.
The slides will be available as soon as possible.

Official Site

See you soon.

Hi All,

yesterday, a server of mine on AWS died without any apparent problem. It simply stopped responding on every port: panic!!

What happened? Why did a server online since mid-2009 go down? How to recover the data? Backups or something else? How long would it take?

To make it short: after three reboots from the panel nothing changed, so the only solution was to create a new server.

After detaching the IP and terminating the zombie server, I started a new server, attached the disk, updated the packages of the Linux release, remapped some paths… and the server was up and running!!!

All in 30 mins.

<pause>

This is possible using EBS volumes, which are persistent resources: by mapping persistent folders onto an EBS volume, you can start and stop any server without losing data.
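
As a rough sketch of that kind of recovery (boto3 again, with placeholder IDs and a classic-EC2 style Elastic IP call), re-pointing the persistent pieces at a new instance looks like this:

```python
# Sketch: point the old Elastic IP and the surviving EBS data volume at a
# replacement instance (boto3; all IDs and the IP are placeholders).
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

OLD_ELASTIC_IP = "203.0.113.10"           # placeholder (documentation range)
DATA_VOLUME_ID = "vol-0123456789abcdef0"  # the persistent EBS volume survives
NEW_INSTANCE_ID = "i-0fedcba9876543210"   # the freshly started server

# Re-attach the surviving data volume to the new instance.
ec2.attach_volume(
    VolumeId=DATA_VOLUME_ID,
    InstanceId=NEW_INSTANCE_ID,
    Device="/dev/sdf",
)

# Move the public IP over, so clients keep using the same address.
# (In a VPC you would pass AllocationId instead of PublicIp.)
ec2.associate_address(
    PublicIp=OLD_ELASTIC_IP,
    InstanceId=NEW_INSTANCE_ID,
)
```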

In my post AWS: a simple backup suite, I said a couple of words about what I mean by “persistent”. On AWS, if you use an instance-store image (AMI), the root fs is not persistent across a shutdown, so you need a secondary disk (on EBS) to store any information you want to keep.

In that previous post I made an example with /srv, but in reality I use parts of /etc, /var, /opt, and /home.
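
Just to illustrate the idea (this is not my exact setup: the /data mount point and the directory list are invented for the example), remapping folders from the root fs onto the EBS volume can be as simple as a few symlinks:

```python
# Illustrative sketch only: keep selected directories on the EBS volume
# (assumed mounted at /data) and symlink them back into the root fs.
# Mount point and directory list are assumptions, not the author's setup.
import os

EBS_MOUNT = "/data"
PERSISTENT_DIRS = ["etc/apache2", "var/lib/zope", "opt/plone", "home"]

for rel in PERSISTENT_DIRS:
    target = os.path.join(EBS_MOUNT, rel)    # real data lives on EBS
    link = os.path.join("/", rel)            # path the OS expects
    os.makedirs(target, exist_ok=True)
    if os.path.islink(link):
        continue                             # already remapped
    if os.path.exists(link):
        os.rename(link, link + ".local")     # keep the instance-store copy aside
    os.symlink(target, link)
```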

Unfortunately, you can forget some folder, or hope that a very stable system never goes down, but as this incident teaches, everything dies… servers too.

Hi all,

this post is about AWS, my server farm.

November 25th will be the first anniversary of my using Amazon Web Services. After a year, I can say that I have changed my mind, but that is the end of the story, so let's go back to the start.

I'm a computer programmer, I love everything about it, and what I find really interesting is the low level. For example, my favorite language is C (not ++), and I like to assemble PCs and build my own boards, cables and whatever else. In a few words: hardware is my life.

One year ago, I changed my job and started to work for Reflab as a sysadmin, and they asked me about AWS. My first reaction was not so good: server virtualization, no physical contact with the server, mmmm, too much… I didn't like it. With a bag full of doubts, I registered and started my first server.

To be honest, the documentation was not so good, but merging different sources it was not too hard. I'd like to say that my first server is still running today, but that is not true: I had to shut it down because it was a small instance for testing. What I can say is that starting a server takes 5 minutes.

Now we have 11 servers online, running happy and healthy. The number of faults that required a server restart is less than 5. I think it is a good result: more than 60% of the servers have run 100% of the time, and the other 40% have run 99.99% of the time. Beyond that, we have had no problems with connectivity or disk faults.

The conclusion: we are very happy with this choice.

As I said, I'm the one who likes to see his hardware, but this solution is great.

So… Happy birthday to you, happy birthday to you 🙂