Posts Tagged ‘Apache’

Nginx is the obvious choice these days when talking about web servers. It is easy to configure and can serve high levels of parallel requests. Apache, the long-loved web server, was losing fans… until now.

Google developed a very interesting module called mod_pagespeed, which transforms a site into an optimized site using the PageSpeed analyzer. A simple example is images: every time you analyze a site, PageSpeed shows you how bad, fat and bandwidth-hungry your images are. This module performs the optimization on the images and keeps them in a local cache to serve any further request.

The site you have to read is

Setup is very easy: use apt, rpm or whatever you like to install Apache. After that, you can download mod_pagespeed directly from the previous URL and install it.

Now you are ready to fight: start your backend, start Apache and test it. Usually I use the PageSpeed plugin for Firefox, because the one for Chromium doesn't work.

If you use Plone as backend, many problems are easily fixed: strong caching for static resources and no-cache for everything else may be a good approach. Remember to enable gzip compression. Forget all about cache invalidation, it doesn't work and I'm able to explain why. Contact me if you want to know.
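That caching and compression setup can be sketched in the Apache configuration roughly like this (a minimal sketch, assuming mod_expires, mod_headers and mod_deflate are enabled; the extension list and the lifetimes are my assumptions, adapt them to your site):

```apache
# no caching by default, long expiration for static resources
ExpiresActive On
ExpiresDefault "access plus 0 seconds"
<LocationMatch "\.(css|js|png|jpg|gif|ico)$">
    ExpiresDefault "access plus 1 year"
    Header set Cache-Control "public, max-age=31536000"
</LocationMatch>

# gzip compression for text content
AddOutputFilterByType DEFLATE text/html text/css application/javascript
```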

After setting the content expiration, the big problem is optimizing all the images: this link shows you how you can do that by cutting & pasting a few rows into the Apache config file.
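I can't reproduce the linked page here, but the idea is to enable the module's image-rewriting filter in the Apache config; a minimal sketch (the filter name is from the mod_pagespeed documentation, check the one for your version):

```apache
# enable mod_pagespeed and its image optimization filter;
# rewritten images are kept in the module's local cache
ModPagespeed on
ModPagespeedEnableFilters rewrite_images
```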

My test was made on a production website that scored 84/100 on PageSpeed using Varnish as frontend and Plone as backend. I installed Apache instead of Varnish and applied the image optimization: the new score is 93/100.

A nice jump for 15 minutes of work.



During the last few months a very interesting question came up at my job: how fast is a site? This may appear to be a speculative question, but it is harder than you might think.

I'm going to show an example: you have a site that needs 24/7 uptime, but during the night someone from another part of the world tells you that your site is unavailable. In this case, all sysadmins start installing their beloved tools for monitoring connections, traffic, load, CPU usage and so on.


In my case this tool is Munin, and these are my self-developed plugins:

  • apache latency aggregation
  • apache latency (max, average)
  • apache http codes

At the moment they are quite rough, something like an alpha release, but I plan to improve them as soon as possible.

This is a screenshot (click on it to see a larger version):

The first graph shows how many pages are served over time: in this way you can evaluate not only whether your site is fast, but also in how many cases it is slow.

The second one shows the max latency, the average latency and the number of errors in Apache.

These two graphs work together: if the max latency is high but the average is low, that alone may mean nothing, because it doesn't tell you, for example, how many of the requested pages are the high-latency ones.

The third one is about the HTTP response codes.

How to use

I should say that they are not very clear yet, but I'll clean them up as soon as possible. To simplify the setup, I decided to create a script called “” that creates everything Munin needs. After downloading the file, untar it into some folder (/opt/MuninPlugins for me, but you can choose another one). The next step is to run it, but first it is necessary to explain one thing: this script was written for a Debian installation (Ubuntu is the same) and gets everything it needs from the /etc/apache/sites-enabled folder. If your situation is different, please contact me for help, because explaining what it does is a little bit boring. What you do have to know is that it creates a set of symbolic links from /etc/munin/plugins to the script folder you created.

The scripts are divided into two types: runners and workers. Runner scripts prepare the parameters for the workers, and the workers return the results to Munin.

Calling a worker is very easy: just pass it a title, a group and an Apache log file (gzipped or not). This is an example:

$ 'Http codes' Apache /var/log/apache2/access.log

Some scripts need a specific LogFormat that contains the response time; this is the one I used:

LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %D" combined_with_time
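With that format, %D (the response time in microseconds) is the last field of every log line, so the max and average latency that the workers report can be computed with a one-liner. A minimal sketch (the sample log lines and the file path are made up for illustration):

```shell
# two sample lines in the combined_with_time format above; %D, the
# response time in microseconds, is the last field of each line
cat > /tmp/sample_access.log <<'EOF'
1.2.3.4 - - [13/Dec/2010:09:32:00 +0100] "GET / HTTP/1.1" 200 512 "-" "curl" 1500
1.2.3.4 - - [13/Dec/2010:09:32:01 +0100] "GET /x HTTP/1.1" 404 210 "-" "curl" 3500
EOF

# max and average latency over the whole file
awk '{ n++; sum += $NF; if ($NF > max) max = $NF }
     END { printf "max=%d avg=%.1f count=%d\n", max, sum/n, n }' /tmp/sample_access.log
```

On the sample above this prints `max=3500 avg=2500.0 count=2`.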

Look in the worker file to see exactly how to use it.

Runners are a little bit more complicated: a runner script takes its parameters from the difference between the symbolic link name and the real file name. This is an example:

$ ls -al
lrwxrwxrwx 1 root root 46 2010-12-13 09:32 -> /opt/MuninPlugins/

When you call it, the script parses the symlink name, getting the title, apache as the group and www.reflab.com_access.log as the Apache log file. After that it calls the worker.
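The name parsing can be sketched like this (the function and the example names are hypothetical, the real runners may differ):

```shell
# hypothetical sketch: munin calls the runner through a symlink whose
# name is the real script name plus the parameter for the worker
extract_param() {
    link_name=$1   # the symlink's basename
    real_name=$2   # the real script's basename
    # the parameter is whatever the link name adds to the real name
    echo "${link_name#"${real_name}"_}"
}

extract_param "apache_codes_www.reflab.com_access.log" "apache_codes"
# prints: www.reflab.com_access.log
```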

For more details, please contact me.


Check it on github.


Hi All,
during these weeks I have been working on an interesting thing: SSO.
SSO, or Single Sign-On, is the procedure that allows you to enter a password on one system and inherit the authorization on all the services in that system.
In our case we have a Plone site with no public areas, so to access the contents every user has to log in to Plone. What our customer asked for was a way to inherit the workstation authentication, allowing users to bypass the login form while keeping their names and roles.
This is possible if you have an Active Directory server.
The complete procedure for the setup can be read by following this link: SSO for Plone.
In my case, I didn’t use Likewise or NTLM, but I’d like to see the difference between mod_ntlm2 and mod_auth_kerb.
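For reference, a Kerberos-based setup with mod_auth_kerb looks roughly like this (a minimal sketch, not the configuration from the linked guide; the realm, keytab path and protected location are my assumptions):

```apache
# protect the Plone location with Kerberos (SPNEGO) authentication
<Location /plone>
    AuthType Kerberos
    AuthName "Intranet"
    KrbAuthRealms EXAMPLE.COM            # assumed AD realm
    Krb5KeyTab /etc/apache2/http.keytab  # assumed keytab path
    KrbMethodNegotiate on                # use the browser ticket, no password prompt
    KrbMethodK5Passwd off                # do not fall back to basic auth
    Require valid-user
</Location>
```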