Posts Tagged ‘Reflab’

Reflab: the new site

Posted: 22/06/2010 in Develop

It would be easy to say that we are very excited about our new site, but it is true.

We hope you can understand better than ever what we try to do. Any comment is welcome. 🙂

www.reflab.com

Hi All,

yesterday, a server of mine on AWS died without any apparent problem. It simply stopped responding on any port: panic!!

What happened? Why did a server online since mid 2009 go down? How to recover the data? Backup or something else? How long would it take?

To make it short: after three reboots from the panel nothing changed, so the only solution was to create a new server.

After detaching the IP and terminating the zombie server, I started a new server, attached the disk, updated the packages of the Linux release, remapped some paths… and the server is up and running!!!

All in 30 mins.

<pause>

This is possible using EBS volumes, which are persistent resources: by mapping persistent folders onto an EBS volume you can start and stop any server without losing data.

In my post AWS: a simple backup suite, I spent two words on what I mean by “persistent”. On AWS, if you use an instance-store image (AMI), you know that the root fs is not persistent across a shutdown, so you need to use a secondary disk (from EBS) to store any information you want to keep.

In that previous post I gave an example with /srv, but in reality I use parts of /etc, /var, /opt, and /home.

Unfortunately, you can forget some folder, or hope that a very stable system never goes down, but as this event teaches, everything dies… servers too.

Hi All,
this release is about a small set of scripts that I use on our AWS server to back up data. Well, it is not complete, because every day I find something to add or modify, but for me it is a nice starting point.

Two Words Before Starting

Two words about my AWS server setup: usually, I use an AMI on instance storage and two EBS volumes of the same size. Other details like a Reserved IP are optional, because all information about the server is provided in the config files. I usually attach the two disks on /dev/sdo and /dev/sdq, each partitioned with a single partition. If you don’t like that, you have to check the script mkbackup.sh and fix it.

The related mount points are /mnt/partition for /dev/sdo1 and /mnt/backup for /dev/sdq1, but remember this: only /mnt/partition is mounted when the backup procedure is NOT running.

Into /mnt/partition I move any folder that I want to back up: for example, /srv/zope is moved to /mnt/partition/srv/zope. As you can see, I respect the original folder tree. On the root filesystem I create a symbolic link, so anyone can access it without problems.

This is an example schema of my fs:

My backup suite is stored in /opt/backup_utils. Take care with the path, because the config file contains an explicit reference to it.

Usually, we use monit to check the status of any application.

The Suite

The suite contains two tools: the first one makes a copy of the AMI, the second one syncs the two EBS volumes and makes snapshots. There’s a third tool to clean old snapshots, but it is still in development. It will be available soon.

The first script is called amibackup.sh. It invokes, in order: ec2-bundle-vol, ec2-upload-bundle and ec2-register. With this script it is easy to create a new AMI starting from a running instance. It is useful when there are big updates on the server.

The second one is called mkbackup.sh and it is more complicated. It is developed for daily backups and allows plugins. The flow of actions is as follows: it mounts the backup disk, runs every enabled plugin, unmounts the backup disk and calls ec2-create-snapshot. The plugins do the rest.

At this time there are four plugins:

  • base: syncs folders, except srv/zope, without shutting down any service
  • mysql: a simple mysqldump
  • postgres: a simple pg_dump
  • zope: a complex solution for Zope instances (with buildout or not, with zeoserver or not, with backup scripts or not)

A complete explanation of the zope plugin would be great, but it would be a little bit long.

Configure

There are two configuration files: excluded and conf. The first one contains a list of expressions identifying which files will not be copied: more precisely, it is the argument of rsync's --exclude-from option. The second one is the real configuration file, and it is a little bit long. It contains: AWS Info, Location Info, Db Info and Zope Info.

AWS Info: contains information about the location of your AWS private key and AWS certificate, your AWS user id, AWS access key and AWS secret key. These files and values are provided by AWS. If you do not know what they are, please check the Quick Guide of EC2. Other information in this section: the S3 bucket where you want to store your AMI, the filename prefix of the AMI, and the region and location where you are running your instance.

Location Info: base is the mount point of the backup disk, source_backup is the base folder for the backup, and rsync_options_base and rsync_options are the sets of parameters for rsync. Take care with the two rsync options, because both are needed and used.

Db Info: contains the login, password, and path used to make a dump.

Zope Info: contains an important parameter called zope_scripts, which holds the list of available scripts to start and stop Zope. By default, I set instance (default buildout installation) and zopectl (default Zope installation), but I usually add others, for Grok applications for example. The other parameter in this section is zope_pack_days: it means that the day we use to pack the ZODB is Sunday. Remember that this option works only if you have made a buildout with a ZEO/Zope architecture.

Install

The setup is very easy: unpack the file in /opt/backup_utils and be sure to set the permissions on the files; I usually use chmod 700 *.sh, and the same for the plugins. After that, go into plugins-enabled and make a symbolic link for every plugin you want. The idea is the same as Apache virtual hosts on Debian systems, or rc scripts.

After that, you can set up the mkbackup.sh script in cron to run daily.
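Since conf holds the AWS keys and database passwords and the scripts run unattended from cron, a permission check before each run is worth having. The following is an added example, not part of the downloadable suite; the function names lacks_bits and preflight are hypothetical, the directory layout is assumed from the description above, and GNU stat is required.

```sh
#!/bin/sh
# Added example: permission preflight for the backup suite directory.
# Assumed layout (from the post): DIR/conf, DIR/*.sh, DIR/plugins-enabled/<symlinks>.
# Requires GNU stat (standard on Linux).

# True if PATH exists and none of the octal permission bits in MASK are set.
# Symlinks are followed, so an enabled plugin is judged by its target.
lacks_bits() {
  mode=$(stat -L -c '%a' "$1" 2>/dev/null) || return 1
  [ $(( 0$mode & 0$2 )) -eq 0 ]
}

# Prints one line per problem; returns non-zero if any problem was found.
preflight() {
  dir=$1; rc=0
  # conf holds AWS keys and database passwords: owner access only.
  if ! lacks_bits "$dir/conf" 077; then
    echo "conf missing or accessible by group/others: $dir/conf"; rc=1
  fi
  # Scripts and enabled plugins are executed by cron: only the owner may
  # change them, and the same goes for the directories that contain them.
  for f in "$dir" "$dir/plugins-enabled" "$dir"/*.sh "$dir"/plugins-enabled/*; do
    [ -e "$f" ] || continue
    if ! lacks_bits "$f" 022; then
      echo "writable by group/others: $f"; rc=1
    fi
  done
  return $rc
}

# Typical use at the top of a cron wrapper (hypothetical wiring):
#   preflight /opt/backup_utils || exit 1
```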

Download

Click on me to download.

Licence

GPL v2

Hi all,
remember the previous post about an easy way to turn a netbook into a router for HSDPA connections? Well, I decided to make a new version of the script that I proposed in that post.
The new version is a little bit broader. I mean, on the netbook I have four network interfaces: ethernet, wireless, PAN and DUN. Sometimes I use bridge, tap, etc. In this situation that script needs to be edited every time, changing the internal and external interface… very boring, I know.

So this is the rule: the script is called with a list of interfaces; the first is the gateway and the others are the sources. Simple, isn’t it?

#!/bin/bash
# Usage: <script> extif intif[s]
# The first interface is the gateway (external); the others are the sources.
# bash is required: echo -e / echo -en are not portable to plain /bin/sh.
echo -e "\n Configuring NAT:"
IPTABLES=/sbin/iptables
MODPROBE=/sbin/modprobe
if [ -z "$2" ]; then
  res=65 #bad arguments
  msg="Usage: $(basename "$0") extif intif[s]"
else
  msg="done.\n"
  res=0
  # Connection tracking and NAT modules (on newer kernels the conntrack
  # modules are named nf_conntrack, nf_conntrack_ftp, nf_conntrack_irc, nf_nat_ftp)
  echo -en " - loading modules: "
  echo -en "ip_tables, "
  $MODPROBE ip_tables
  echo -en "ip_conntrack, "
  $MODPROBE ip_conntrack
  echo -en "ip_conntrack_ftp, "
  $MODPROBE ip_conntrack_ftp
  echo -en "ip_conntrack_irc, "
  $MODPROBE ip_conntrack_irc
  echo -en "iptable_nat, "
  $MODPROBE iptable_nat
  echo -en "ip_nat_ftp, "
  $MODPROBE ip_nat_ftp
  echo ""
  echo " - Enabling forwarding "
  echo "1" > /proc/sys/net/ipv4/ip_forward
  echo " - Enabling DynamicAddr"
  echo "1" > /proc/sys/net/ipv4/ip_dynaddr
  echo " - Clearing any existing rules and setting default policy"
  $IPTABLES -P INPUT ACCEPT
  $IPTABLES -F INPUT
  $IPTABLES -P OUTPUT ACCEPT
  $IPTABLES -F OUTPUT
  # Forwarded traffic is dropped unless one of the rules below accepts it
  $IPTABLES -P FORWARD DROP
  $IPTABLES -F FORWARD
  $IPTABLES -t nat -F
  EXTIF=$1
  echo -n " - Forwarding: "
  for int in "$@"; do
    if [ "$int" != "$EXTIF" ]; then
      echo -n "$int, "
      # From outside: only replies to connections opened from inside
      $IPTABLES -A FORWARD -i "$EXTIF" -o "$int" -m state --state ESTABLISHED,RELATED -j ACCEPT
      # From inside: everything may go out
      $IPTABLES -A FORWARD -i "$int" -o "$EXTIF" -j ACCEPT
    fi
  done
  # Log whatever is about to hit the DROP policy
  $IPTABLES -A FORWARD -j LOG
  echo ""
  echo " - Masquerade: $EXTIF"
  $IPTABLES -t nat -A POSTROUTING -o "$EXTIF" -j MASQUERADE
fi
if [ -n "$msg" ]; then
  echo -e "$msg"
fi
exit $res

Actually, I use this script to manage VirtualBox LAN connections, but that is another story.
Cheers to all.

Today is a good day to die. That was the mindset of all our provider's routers, because today they decided to commit suicide, leaving us without a way to reach the Internet.

Today I planned to take our customer's server live, with an effort of 2 hours. But Murphy’s law is all around us, and so the DSL link of our offices went down.

Well, there were two choices: a party or my HSDPA cellphone… I chose the second one :(. So I decided to use the netbook as the gateway for LAN traffic, connecting it to my PC with an ethernet cable.

What is interesting is how versatile iptables is: it allowed me to configure my netbook as a router, and my PC as a gateway for the other computers in the office.

These are the four miracle rows (on the netbook):

iptables -A FORWARD -i ppp0 -o eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -i eth0 -o ppp0 -j ACCEPT
iptables -A FORWARD -j LOG
iptables -t nat -A POSTROUTING -o ppp0 -j MASQUERADE

and these are the four for the PC:

iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
iptables -A FORWARD -j LOG
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

Obviously, I set up a LAN between my PC and the netbook, configured by hand using ifconfig, and I changed the routing using the route command.
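The four rows only restrict inbound traffic when the FORWARD policy is DROP, which the script version in the newer post sets explicitly and the rows above do not. The following audit of `iptables -S FORWARD` output is an added example, not part of the original posts; audit_forward and sample_ok are hypothetical names, the sample ruleset is constructed, and negated interface matches are not handled.

```sh
#!/bin/sh
# Added example: audit the FORWARD chain of a NAT gateway.
# Reads `iptables -S FORWARD` output on stdin; $1 is the external interface.
# Prints the number of findings on stdout and the details on stderr.
audit_forward() {
  extif=$1; policy=unknown; findings=0
  while IFS= read -r line; do
    set -f; set -- $line; set +f        # split the rule into words, no globbing
    if [ "$1" = "-P" ] && [ "$2" = "FORWARD" ]; then policy=$3; continue; fi
    if [ "$1" != "-A" ] || [ "$2" != "FORWARD" ]; then continue; fi
    in_if=""; target=""; states=""
    while [ $# -gt 1 ]; do
      case $1 in
        -i) in_if=$2 ;;
        -j) target=$2 ;;
        --state|--ctstate) states=$2 ;;
      esac
      shift
    done
    [ "$target" = "ACCEPT" ] || continue
    # Only rules that can match packets arriving on the external side matter.
    if [ -n "$in_if" ] && [ "$in_if" != "$extif" ]; then continue; fi
    case ",$states," in
      ,,|*,NEW,*|*,INVALID,*)
        echo "inbound ACCEPT not limited to established traffic: $line" >&2
        findings=$((findings + 1)) ;;
    esac
  done
  if [ "$policy" != "DROP" ]; then
    echo "FORWARD policy is $policy, expected DROP" >&2
    findings=$((findings + 1))
  fi
  echo "$findings"
  [ "$findings" -eq 0 ]
}

# Constructed sample: the netbook rules plus `iptables -P FORWARD DROP`,
# in the form `iptables -S FORWARD` prints them.
sample_ok() {
  cat <<'EOF'
-P FORWARD DROP
-A FORWARD -i ppp0 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i eth0 -o ppp0 -j ACCEPT
-A FORWARD -j LOG
EOF
}

# On a live gateway (root required): iptables -S FORWARD | audit_forward ppp0
```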

This is a link to a full script that uses iptables to turn a PC into a router.
http://www.ecst.csuchico.edu/~dranch/LINUX/ipmasq/examples/rc.firewall-iptables

I know iptables is a little bit hard to use, but there are higher-level solutions such as shorewall, which manage iptables and let you know only what you really need to know.

Enjoy.

These are just links to my slides for World Plone Day 09. They are very easy to understand, but you may ask any question…

There are two versions, EN and IT.

EN -> http://www.slideshare.net/francescociriaci/plone-deployment-wpd2009

IT -> http://www.slideshare.net/eleonoraborelli/plone-deployment-c-wpd2009

I hope you enjoy.

In the next few days I plan to publish a Plone buildout for deployment. Stay tuned :).