Diazo: advanced use

Posted: 09/07/2012 in Develop

Here is an article about an advanced use of Diazo. I will translate it from Italian soon.


http://www.tomshw.it/mobile/cont/news/ssd-che-si-auto-distrugge-cliccando-il-pulsante-rosso/37516/1.html

Link  —  Posted: 18/05/2012 in System Administration

Chameleon and Plone

Posted: 16/01/2012 in Develop

Plone uses page templates intensively, so they have to be fast. Chameleon is not a new solution to this kind of problem, but now it works on a vanilla Plone out of the box. Some years ago I tested it on Plone 3.x, but too much work was needed to fix all the page templates, so I gave up. I do not want to spend time explaining how it works here; you can read all about it at pagetemplates.org.
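
For anyone who has never seen Chameleon outside Plone, here is a minimal sketch of the idea, assuming only the standalone chameleon package (not the Plone integration): the template is compiled to Python code on first use, and the compiled version is reused on every following render.

from chameleon import PageTemplate

# The template is compiled once; subsequent renders reuse the compiled
# code, which is where the speed-up over a purely interpreted engine comes from.
template = PageTemplate("""
<ul>
  <li tal:repeat="item items" tal:content="item">placeholder</li>
</ul>
""")

# The first call triggers compilation; later calls are much cheaper.
print(template(items=["alpha", "beta", "gamma"]))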

I ran some tests comparing vanilla Plone with Plone + five.pt; these are the results:

         Plone Vanilla   Plone Five.pt    Plone Five.pt
                         (first call)     (following calls)
Real     2.075           8.292            1.699
User     0.013           0.007            0.010
Sys      0.017           0.027            0.023

Times are in seconds and are the result of multiple invocations of

$ time wget -r -np http://plone.me:8080/PloneSite

It is impressive: 15% faster than vanilla Plone. A great point scored in the compilers vs interpreters match 🙂 .

Cheers.

collective.solr_checks

Posted: 15/11/2011 in Uncategorized

Hi all,

this is an extension of the well-known collective.solr product, which allows Plone to interact with Solr. With Plone 3.3.5 and collective.solr 1.0, the two sometimes lose coherence. To track down and fix these problems I wrote this package, which matches every content object in the ZODB against the information stored in Solr. You can either just report the errors it finds or fix them all.
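
Very roughly, the idea is the one sketched below. This is not the package's real code; the Solr URL, the core name and the UID field are only assumptions for illustration.

import json
import urllib.parse
import urllib.request

# Placeholder Solr select endpoint; adjust to your own core.
SOLR_SELECT = "http://localhost:8983/solr/plone/select"

def uid_in_solr(uid):
    """Return True if Solr has a document indexed with this UID."""
    query = urllib.parse.urlencode({"q": 'UID:"%s"' % uid, "wt": "json", "rows": "0"})
    with urllib.request.urlopen("%s?%s" % (SOLR_SELECT, query)) as response:
        return json.load(response)["response"]["numFound"] > 0

def find_missing(portal_catalog):
    """Yield the UIDs of catalogued objects that Solr does not know about."""
    for brain in portal_catalog():
        if not uid_in_solr(brain.UID):
            yield brain.UID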

Cheers.

 

collective.plone.mailer

Posted: 15/11/2011 in Develop

Hi all,

after publishing collective.plone.reader (see the previous post for details), I published an extension called collective.plone.mailer. It looks for all aggregators and, for each one, periodically creates an email reporting the latest updates to the owner. The recurrence is configurable per aggregator.
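
To make it concrete, here is a simplified sketch of what sending one report could look like. It is not the package's real code; the helper name and the way recent items are collected are only illustrative.

def send_aggregator_report(portal, aggregator, since):
    """E-mail the aggregator owner the items modified since `since`."""
    # Illustrative: collect recently modified items held by the aggregator.
    recent = [item for item in aggregator.objectValues()
              if item.modified() > since]
    if not recent:
        return  # nothing new, no mail sent

    owner_id = aggregator.getOwner().getId()
    member = portal.portal_membership.getMemberById(owner_id)
    body = "Recent updates in %s:\n\n" % aggregator.Title()
    body += "\n".join("- %s (%s)" % (item.Title(), item.absolute_url())
                      for item in recent)
    # Standard Plone mail host; addresses and subject kept deliberately simple.
    portal.MailHost.send(
        body,
        mto=member.getProperty("email"),
        mfrom=portal.getProperty("email_from_address"),
        subject="Updates in %s" % aggregator.Title(),
    )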

There’s no site-wide configuration, but every user can choose whether to receive the email.

It was developed for Plone 3.5, so I’m not sure it works correctly on other versions.

Please report any experience.

Cheers.

collective.plone.reader

Posted: 15/11/2011 in Develop

Hi All,

I uploaded a project of mine called collective.plone.reader to GitHub.

This product implements something like a Plone topic (collection), based not on criteria but on folder subscription.

It is very easy to use:

  • add this product to your working Plone
  • install the product into Plone (quickinstall); see the sketch after this list
  • every user then creates an aggregator (a portal type), keeping ownership of it
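
For the second step, this is roughly what the quickinstall does if you prefer to run it from a debug console or a setup script (a sketch, assuming `portal` is bound to your Plone site):

from Products.CMFCore.utils import getToolByName

def install_reader(portal):
    # Same effect as installing from the quickinstaller / add-on products panel.
    qi = getToolByName(portal, "portal_quickinstaller")
    if not qi.isProductInstalled("collective.plone.reader"):
        qi.installProduct("collective.plone.reader")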

Now every user can navigate through the site and add contents to the aggregator using the related action (a star icon).

If you need to, you can adjust the product's setup through portal_properties: go to the ZMI, open portal_properties and select the collective.plone.reader_properties property sheet (a sketch of reading these settings from code follows the list below).

Here you can customize what you like:

  • enabled_types: list of content types you want to aggregate
  • visible_contents: list of contents visible in the aggregator
  • mapping_content_to_index: list of mappings from portal types to catalog indexes; very useful when you use facets
  • other_options: list of static options for Solr
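
As a hedged illustration (not code taken from the package), this is roughly how those settings can be read back from a script, assuming the sheet keeps the names listed above:

from Products.CMFCore.utils import getToolByName

def get_reader_settings(context):
    # Look up the property sheet described above and return its values.
    props = getToolByName(context, "portal_properties")
    sheet = getattr(props, "collective.plone.reader_properties")
    return {
        "enabled_types": sheet.getProperty("enabled_types", ()),
        "visible_contents": sheet.getProperty("visible_contents", ()),
        "mapping_content_to_index": sheet.getProperty("mapping_content_to_index", ()),
        "other_options": sheet.getProperty("other_options", ()),
    }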

It was developed for Plone 3.5, so I’m not sure it works correctly on later versions; if you have tried it, please contact me.

An extension called collective.plone.mailer is available. It periodically sends the owner of an aggregator an e-mail reporting on the aggregated contents.

Cheers.

Hi All,

Following up on a previous report about using AWS, here is an update after three years.

Differences? None: my opinion has not changed, a cloud solution is better than a physical server.

The main event/problem during these last three years was the August 2011 power outage. During the blackout a single availability zone was affected, with a real downtime of around 48 hours. The full recovery of the zone took more than a week.

At first I thought that was a very long time, but after some reflection I understood that this kind of problem pushes our technology to its limit. I remember, some years ago, working on a physical server in a farm in Miami. One night a switch in the building started to burn, with all the consequences you can imagine. The farm was down for about 24 hours.

You can follow every best practice you know, but there is always something unpredictable that breaks the eggs in your basket. So what is important? Having a plan B to recover everything.

An example: on a physical server you usually keep a backup of the application data, and only in a few situations do you back up all the files of the server. So restoring a server means reinstalling the OS and all user applications and reconfiguring every service. This may take a lot of time.

On AWS this is very fast: you usually have a static copy of your server, the AMI, which is distinct from the running copy of your server, the Instance. They are kept in two different stores in two different places: one in AWS storage (S3) and the other on the physical host. Creating an Instance from an AMI stored in S3 takes less than 10 minutes, a big advantage compared to reinstalling a server. Creating an AMI from an Instance is easy too, although it takes a little longer. What really matters is that you have all the tools to do it, without buying specific software or a special DAT (tape) device.
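
To give an idea of what that looks like in practice, here is a minimal sketch using boto, the Python AWS library; the AMI id, instance id, region and instance type are placeholders, not values from my setup.

import boto.ec2

conn = boto.ec2.connect_to_region("us-east-1")

# Recovery direction: start a fresh Instance from the static AMI.
reservation = conn.run_instances("ami-12345678", instance_type="m1.small")
instance = reservation.instances[0]

# The other direction: bake a new AMI from a running Instance.
new_ami_id = conn.create_image(instance.id, "my-server-backup",
                               description="periodic AMI of the Plone box")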

Back to the AWS August disaster: one zone was affected, taking down some services but not everything. In my case I lost the RAID data disk but not the instance; others lost their instances. All the restore actions took less than two hours, mainly spent moving files from snapshots to volumes and restoring the Data.fs, the Plone/Zope store.
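
The snapshot-to-volume part is scriptable as well; another hedged boto sketch, where the size, zone, snapshot id, instance id and device name are placeholders:

import boto.ec2

conn = boto.ec2.connect_to_region("us-east-1")

# Rebuild the lost data disk from the latest snapshot, in the instance's zone.
volume = conn.create_volume(size=50, zone="us-east-1a", snapshot="snap-12345678")

# Attach it where the old disk used to be, then mount it and restore Data.fs.
conn.attach_volume(volume.id, "i-12345678", "/dev/sdf")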

In the end, AWS confirms my ideas: the infrastructure is very powerful and it requires less administration time compared to a physical server.

Cheers.