Posts Tagged ‘Amazon’

Hi All,

Following up on a previous report about using AWS, this is an update after three years.

Any differences? No, my opinion has not changed: a cloud solution is better than a physical server.

The main problem during these last three years was the August 2011 power outage. During the blackout a single availability zone was affected, with real downtime of around 48 hours; full recovery of the zone took more than a week.

At first I thought that was a very long time, but after some reflection I understood that this kind of problem reaches the limits of our technology. I remember, some years ago, working on a physical server in a farm in Miami. One night a switch in the building started to burn, with all the consequences you can imagine. The farm was down for about 24 hours.

You can follow every best practice you know, but there is always something unpredictable that breaks the eggs in your basket. So what really matters? Having a plan B to recover everything.

An example: on a physical server you usually back up only the application data; only in a few situations do you back up every file on the server. So restoring a server means reinstalling the OS and all user applications, and reconfiguring every service. This can take a lot of time.

On AWS this is very fast. You usually have a static copy of your server, called an AMI, which is different from the running copy of your server, called an Instance. They are stored in two different places in two different zones: one in AWS storage (S3), the other on the physical host. Creating an Instance from S3 takes less than 10 minutes, a big advantage compared with reinstalling a server. Creating an AMI from an Instance is easy too, although it takes a little longer. What really matters is that you already have all the tools to do this: you do not have to buy specific software or a dedicated DAT drive.
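For example, bringing a server back with the old EC2 command line tools is just a couple of calls (a minimal sketch; the AMI id, key pair and zone below are placeholders, not my real ones):

    # Launch a new Instance from a registered AMI (all ids are placeholders)
    ec2-run-instances ami-12345678 -t m1.small -k my-keypair -z us-east-1a

    # Watch it come up and read its public DNS name
    ec2-describe-instances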

Back to the AWS August disaster: one zone was affected, bringing down some services but not all of them. In my case I lost the RAID data disk but not the instance; others lost their instances too. All the restore actions took less than two hours, mostly spent moving files from snapshots to new volumes and restoring the Data.fs, the Plone/Zope store.
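The restore itself is the usual snapshot-to-volume dance; something like this, where every id and path is a placeholder for the real ones:

    # Rebuild the data disk from the last good snapshot and attach it
    ec2-create-volume --snapshot snap-1a2b3c4d -z us-east-1a
    ec2-attach-volume vol-9f8e7d6c -i i-0a1b2c3d -d /dev/sdo

    # On the instance: mount it and put the Zope store back in place
    mount /dev/sdo1 /mnt/partition
    cp /mnt/partition/backup/Data.fs /srv/zope/var/filestorage/Data.fs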

In the end, AWS confirms my ideas: the infrastructure is very powerful, and it requires less administration time compared with a physical server.

Cheers.

Hi All,

Today I uploaded my backup suite for AWS to GitHub.

This is the url:

https://github.com/cippino/Backup-AWS

For details about that, please read this article.
Cheers.

Hi All,

yesterday a server of mine on AWS died without any apparent problem. It simply stopped responding on every port: panic!!

What happened? Why did a server online since mid-2009 go down? How do I recover the data? From backup or something else? How long would it take?

To make it short: after three reboots from the panel nothing changed, so the only solution was to create a new server.

After detaching the IP and terminating the zombie server, I started a new server, attached the disks, updated the packages of the Linux release, remapped some paths… and the server was up and running!!!

All in 30 mins.

<pause>

This is possible because EBS volumes are persistent resources: by mapping the folders you care about onto an EBS volume, you can start and stop any server without losing data.

In my post AWS: a simple backup suite, I spent a few words on what I mean by “persistent”. On AWS, if you use an instance-store image (AMI), the root filesystem is not persistent across a shutdown, so you need a secondary disk (from EBS) to store any information you want to keep.

In that previous post I used /srv as an example, but in reality I do the same for parts of /etc, /var, /opt, and /home.
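The recipe is always the same; a minimal sketch, assuming the EBS disk is already attached as /dev/sdo and formatted:

    # Keep /srv/zope on the persistent EBS disk
    mkdir -p /mnt/partition
    mount /dev/sdo1 /mnt/partition

    mkdir -p /mnt/partition/srv
    mv /srv/zope /mnt/partition/srv/zope
    ln -s /mnt/partition/srv/zope /srv/zope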

Unfortunately, you can forget a folder, or hope that a very stable system never goes down, but as this episode teaches, everything dies… servers too.

Hi All,
this release is a small set of scripts that I use on our AWS servers to back up data. Well, it is not complete, because every day I find something to add or modify, but for me it is a nice starting point.

Two Words Before Starting

Two words about my AWS server setup: usually I use an instance-store AMI and two EBS volumes of the same size. Other details, like an Elastic IP, are optional, because any information about the server is provided in the config files. I attach the two disks as /dev/sdo and /dev/sdq, each partitioned with a single partition. If you don’t like that, you will have to check the mkbackup.sh script and adjust it.

The related mount points are /mnt/partition for /dev/sdo1 and /mnt/backup for /dev/sdq1, but remember one thing: only /mnt/partition is mounted when the backup procedure is NOT running.
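In /etc/fstab it looks more or less like this (ext3 is only my guess here, use whatever filesystem you created); marking the backup disk as noauto lets mkbackup.sh mount it only while it runs:

    /dev/sdo1   /mnt/partition   ext3   defaults          0   0
    /dev/sdq1   /mnt/backup      ext3   noauto,defaults   0   0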

Into /mnt/partition I move every folder that I want to back up: for example, /srv/zope is moved to /mnt/partition/srv/zope. As you can see, I keep the original folder tree. At the original location on the root filesystem I create a symbolic link, so everything can still access it without problems.

This is an example layout of my filesystem:
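(An indicative sketch only; the exact tree depends on which folders you move, as described above.)

    /mnt/partition                          <- /dev/sdo1, always mounted, persistent data
        srv/zope
        etc/..., var/..., opt/..., home/...
    /mnt/backup                             <- /dev/sdq1, mounted only during the backup
    /srv/zope -> /mnt/partition/srv/zope    (symbolic link on the root fs)
    /opt/backup_utils                       <- the backup suite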

My backup suite is stored in /opt/backup_utils. Take care with this path, because the config file refers to it explicitly.

Usually, we use monit to check the status of every application.

The Suite

The suite contains two tools: the first one makes a copy of the AMI, the second one syncs the two EBS volumes and makes snapshots. There is a third tool to clean up old snapshots, but it is still in development and will be available soon.

The first script is called amibackup.sh. It invokes, in order: ec2-bundle-vol, ec2-upload-bundle and ec2-register. Calling this script is an easy way to create a new AMI from a running instance. It is useful after big updates on the server.
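Done by hand, the sequence looks more or less like this (a rough sketch, not the real amibackup.sh: credentials, bucket and prefix are all placeholders):

    # 1) bundle the running instance's volume
    ec2-bundle-vol -d /mnt -k pk-XXXX.pem -c cert-XXXX.pem \
        -u 123456789012 -r x86_64 -p myserver-20120101

    # 2) upload the bundle to S3
    ec2-upload-bundle -b my-ami-bucket \
        -m /mnt/myserver-20120101.manifest.xml \
        -a $AWS_ACCESS_KEY -s $AWS_SECRET_KEY

    # 3) register the new AMI
    ec2-register my-ami-bucket/myserver-20120101.manifest.xml -n myserver-20120101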

The second one is called mkbackup.sh, and it is more complicated. It is designed for daily backups and supports plugins. The flow of actions is as follows: mount the backup disk, run every enabled plugin, unmount the backup disk and call ec2-create-snapshot. The plugins do the rest.
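The skeleton of that flow is roughly this (a sketch only, using the device names and mount points described above and a placeholder volume id):

    mount /dev/sdq1 /mnt/backup

    for plugin in /opt/backup_utils/plugins-enabled/*; do
        "$plugin"            # every enabled plugin copies its own data
    done

    umount /mnt/backup
    ec2-create-snapshot vol-9f8e7d6c -d "daily backup $(date +%F)"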

At this time there are four plugins:

  • base: syncs all folders except srv/zope, without shutting down any service
  • mysql: an easy mysqldump (see the sketch after this list)
  • postgres: an easy pg_dump
  • zope: a more complex solution for Zope instances (with or without buildout, with or without zeoserver, with or without backup scripts)
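To give an idea of how small a plugin can be, a mysqldump one could look like this (a sketch under my layout assumptions, not the real plugin from the suite; user, password and target path are placeholders):

    #!/bin/sh
    # dump every database onto the backup disk while it is mounted
    BACKUP_DIR=/mnt/backup/mysql
    mkdir -p "$BACKUP_DIR"
    mysqldump -u backup_user -p'secret' --all-databases \
        > "$BACKUP_DIR/all-databases-$(date +%F).sql"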

A complete explanation of the zope plugin would be great, but it would be a little too long for this post.

Configure

There are two configuration files: excluded and conf. The first one contains a list of patterns identifying which files should not be copied; more precisely, it is the file passed to rsync’s --exclude-from option. The second one is the real configuration file, and it is a little bit long. It contains four sections: AWS Info, Location Info, Db Info and Zope Info.
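As an idea, excluded is just a list of rsync patterns, one per line; these entries are only illustrative:

    *.pyc
    *.log
    tmp/
    var/cache/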

AWS Info: contains the locations of your AWS private key and AWS certificate, plus your AWS user id, AWS access key and AWS secret key. These files and values are provided by AWS; if you do not know what they are, please check the EC2 Quick Guide. The other entries in this section are: the S3 bucket where you want to store your AMI, the filename prefix of the AMI, and the region and availability zone where you are running your instance.

Location Info: base is the mount point of the backup disk, source_backup is the base folder for the backup, and rsync_options_base and rsync_options are the sets of parameters for rsync. Take care with the two rsync options, because both are needed and used.

Db Info: contains the login, password, and path used to make the dump.

Zope Info: contains an important parameter called zope_scripts, which holds the list of scripts available to start and stop Zope. By default I set instance (default buildout installation) and zopectl (default Zope installation), but I usually add others, for Grok applications for example. The other parameter in this section is zope_pack_days: it sets the day on which the ZODB pack is run; in my setup it is Sunday. Remember that this option only works if you have made a buildout with a zeo/zope architecture.
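Putting the four sections together, conf ends up looking roughly like this (an indicative sketch only: check the file shipped in the suite for the exact variable names, because anything not mentioned above is my guess):

    # AWS Info (all keys and ids are fake placeholders)
    EC2_PRIVATE_KEY=/root/.ec2/pk-XXXX.pem
    EC2_CERT=/root/.ec2/cert-XXXX.pem
    AWS_USER_ID=123456789012
    AWS_ACCESS_KEY=AKIAXXXXXXXXXXXXXXXX
    AWS_SECRET_KEY=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    S3_BUCKET=my-ami-bucket
    AMI_PREFIX=myserver
    REGION=us-east-1
    LOCATION=us-east-1a

    # Location Info
    base=/mnt/backup
    source_backup=/mnt/partition
    rsync_options_base="-a --delete"
    rsync_options="--exclude-from=/opt/backup_utils/excluded"

    # Zope Info
    zope_scripts="instance zopectl"
    zope_pack_days="sunday"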

Install

The setup is very easy: unpack the file into /opt/backup_utils and be sure to set the permissions on the files; usually I use chmod 700 on the *.sh scripts and the plugins too. After that, go into plugins-enabled and make a symbolic link for every plugin you want. The idea is the same as Apache virtual hosts on Debian systems, or rc scripts.
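In commands, the whole install is something like this (the archive name and the plugins source directory are guesses; only plugins-enabled is fixed):

    mkdir -p /opt/backup_utils
    tar xzf Backup-AWS.tar.gz -C /opt/backup_utils
    cd /opt/backup_utils
    chmod 700 *.sh plugins/*

    # enable only the plugins you want, Debian virtual host style
    cd plugins-enabled
    ln -s ../plugins/mysql .
    ln -s ../plugins/zope .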

After that, you can set up the mkbackup.sh script in cron to run daily.
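For example, a crontab entry like this runs it every night (the 4 a.m. schedule and the log path are just a suggestion):

    0 4 * * * /opt/backup_utils/mkbackup.sh > /var/log/mkbackup.log 2>&1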

Download

Click on me to download.

Licence

GPL v2

Hi all,

this post is about AWS, my server farm.

November 25th will be my first anniversary of using Amazon Web Services. After a year, I can say that I have changed my mind, but that is the end of the story, so let’s go back to the start.

I’m a computer programmer, I love everything about it, and what I find really interesting is the low level. For example, my favorite language is C (not C++), and I like to assemble PCs and build my own boards, cables and whatever else. In a few words: hardware is my life.

One year ago I changed jobs and started to work for Reflab as a sysadmin, and they asked me about AWS. My first reaction was not very good: server virtualization, no physical contact with the machine, mmmm, too much… I didn’t like it. With a bag full of doubts, I registered and started my first server.

The documentation was not really that good, but merging different sources it was not so hard. I’d like to say that my first server is still running today, but that is not true: I had to shut it down because it was a small instance for testing. What I can say is that starting a server takes 5 minutes.

Now we have 11 servers online, running happy and healthy. The number of faults that required a server restart is fewer than 5. I think it is a good result: more than 60% of the servers have run 100% of the time, and the other 40% have run 99.99% of the time. Beyond that, we have not had any problem with connectivity or disk faults.

The conclusion: we are very happy with this choice.

As I said, I’m the kind of person who likes to see his hardware, but this solution is great.

So… Happy birthday to you, happy birthday to you 🙂