Wednesday, December 30, 2009

Sequence scalability

Sequences are a central part of most computer systems. They are used both on the technical side (generating unique identifiers for objects) and on the business side (counting the remaining free tickets that can be sold).

As the load on an application grows, sequences quickly become central bottlenecks. So the goal here is first to categorize the different types of sequences, and then to give some keys to distributing sequences to increase their scalability, and hence the scalability of the whole system.

Characteristics of sequences:

Order: the numbers generated must be used in a given order (ascending or descending), or the order does not matter. I will talk about ordered or unordered sequences.

Missing values: some values can be missing from the sequence. For example, a sequence missing the value 4 should not alter the behavior of the application. I will talk about continuous or discontinuous sequences.

Finite sequences: all the values of the sequence are known in advance. Countable sequences are a particular case of finite sequences where the sequence has a reasonably small number of values, and thus can be represented by one object per value. I will talk about finite or infinite sequences, and countable sequences.

Keys to distributing sequences

Discontinuous sequences are easy to scale: the application can pick numbers n by n from the sequence, reducing the number of calls to the sequence by a factor of n. A good example of discontinuous sequences is object identifier generators.
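
The n-by-n trick can be sketched as follows; a minimal illustration with hypothetical names, where an AtomicLong stands in for the shared database sequence:

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of "pick numbers n by n": one call to the shared sequence
// reserves a whole batch of ids, which are then handed out locally.
public class BatchSequence {
    private final AtomicLong backingSequence; // stands in for the real db sequence
    private final int batchSize;
    private long next;  // next id to hand out locally
    private long limit; // first id outside the reserved batch

    public BatchSequence(AtomicLong backingSequence, int batchSize) {
        this.backingSequence = backingSequence;
        this.batchSize = batchSize;
    }

    // Only one call in batchSize hits the shared sequence.
    public synchronized long nextId() {
        if (next == limit) {
            next = backingSequence.getAndAdd(batchSize);
            limit = next + batchSize;
        }
        return next++;
    }
}
```

The cost is exactly the discontinuity described above: ids reserved by a node that crashes are lost, which is harmless for identifier generators.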

Unordered sequences are also easy to scale: a counter can be decomposed into n subcounters, and picking one of the subcounters at random to increment or decrement reduces the contention on each counter by a factor of n. A typical example is a quantity of objects to sell: n subcounters are initialized so that their sum is the total amount to sell. The object cannot be sold anymore once every subcounter is at 0.
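
A minimal sketch of such a sharded counter (illustrative names, not from any particular library):

```java
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.atomic.AtomicLong;

// A counter decomposed into n subcounters; random shard selection
// spreads the contention across them.
public class ShardedCounter {
    private final AtomicLong[] shards;

    // Spread the initial total over the shards.
    public ShardedCounter(int n, long total) {
        shards = new AtomicLong[n];
        for (int i = 0; i < n; i++) {
            long part = total / n + (i < total % n ? 1 : 0);
            shards[i] = new AtomicLong(part);
        }
    }

    // Try to take one unit from a randomly chosen shard; scan the
    // others if that shard is empty. Returns false when all are at 0.
    public boolean tryDecrement() {
        int start = ThreadLocalRandom.current().nextInt(shards.length);
        for (int i = 0; i < shards.length; i++) {
            AtomicLong shard = shards[(start + i) % shards.length];
            long v;
            while ((v = shard.get()) > 0) {
                if (shard.compareAndSet(v, v - 1)) {
                    return true;
                }
            }
        }
        return false; // sold out: every subcounter is at 0
    }

    public long remaining() {
        long sum = 0;
        for (AtomicLong shard : shards) sum += shard.get();
        return sum;
    }
}
```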

Countable sequences can be scaled by representing every value of the sequence as a single object in the database. Updating an unused object is an easy task.
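
A sketch of the one-object-per-value idea, with a flag per seat so that two buyers only contend when they want the very same seat (names are illustrative):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Each value of the countable sequence becomes its own object;
// claiming a value is a compare-and-set on that single object.
public class SeatMap {
    private final AtomicBoolean[] seats;

    public SeatMap(int count) {
        seats = new AtomicBoolean[count];
        for (int i = 0; i < count; i++) {
            seats[i] = new AtomicBoolean(false);
        }
    }

    // Claim a specific seat; fails only if that one seat is taken.
    public boolean claim(int seat) {
        return seats[seat].compareAndSet(false, true);
    }
}
```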

Finally, infinite, continuous, ordered sequences are very hard to scale, because synchronization between all callers is required, so a single point of contention is unavoidable.


Sequences are choke points in most systems, but as we saw above they can scale relatively well with two simple tricks: read numbers n by n instead of 1 by 1, or decompose a counter into n subcounters.

Sunday, November 15, 2009

Call different services, use the first to reply

The needs

My needs were the following: I have an arbitrary number of web services to call, but I don't know, for a particular query, which one will return the answer. Moreover, some are slower and others are faster, but again, I don't know which one is the fastest for my particular query.

So what I will do is parallelize the queries and use the value of the first one that returns a good value.

Use an Executor

The first component needed to implement such a requirement is an Executor.
The Executor will run all my queries, in as many threads as I want, making it possible to fine-tune the behavior of the system.
Since many queries will enter the system and each will be translated into several threads, it is important to use a limited number of threads so as not to crash the system.

Synchronization on result

A specific object will be used to synchronize on the result: the first result will be returned to the caller, the running threads will be left to die, and no more threads will be started for this query.

Some basic primitives for concurrent programming (java.util.concurrent) will be used, such as CountDownLatch: the caller blocks on the latch, and once a thread finds the answer, it sets the value into the result and counts down the latch. The caller is then freed and the value is available.
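
The latch mechanism can be sketched as follows; a minimal illustration with hypothetical names, not the exact classes shown later in this post:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

// The first worker to publish a value releases the caller; later
// workers find the slot already taken and their value is dropped.
public class FirstResult<T> {
    private final AtomicReference<T> value = new AtomicReference<>();
    private final CountDownLatch latch = new CountDownLatch(1);

    // Called by a worker thread that found an answer.
    public void offer(T result) {
        if (value.compareAndSet(null, result)) {
            latch.countDown(); // free the caller exactly once
        }
    }

    // Called by the caller; blocks until a worker offers a value
    // or the timeout elapses.
    public T await(long timeout, TimeUnit unit) throws InterruptedException {
        latch.await(timeout, unit);
        return value.get(); // null if the timeout elapsed first
    }
}
```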

Watchdog if all the queries fail

One point not to forget: if every query fails, the caller must be informed of this failure.
A watchdog will be used: it blocks until all the threads executing the query have finished, and is then run. Its only task is to free the caller. The caller then continues processing, but the result will not be available; it is the caller's responsibility to run user code to handle this special case.

Strong requirements

The requirements are the following:
  • Use as little synchronization as possible
  • Use as many concurrency primitives (java.util.concurrent) as possible
  • Make it as generic as possible

Java implementation

The interesting parts of the implementation are given below.
The Executor is simply a java.util.concurrent.Executor. I'm not good enough to implement a better solution than the JDK's.
The result object is composed of the following attributes:
public class FutureResult<T> {
    private T result;
    private boolean done;
    private AtomicInteger threadCount = new AtomicInteger(0);
    private CountDownLatch latch = new CountDownLatch(1);
    // ... accessors and synchronization methods elided ...
}
A generic type for the value, an AtomicInteger to count how many threads are working on this result, and a latch to make the caller wait.

The watchdog class simply waits until all the threads are finished and frees the caller if it is not already free:
class WatchDogRunnable implements Runnable {
    public void run() {
        while (result.hasThreadsWorking()) {
            synchronized (result) {
                // body truncated in the original excerpt: wait for
                // the workers to finish, then free the caller
            }
        }
    }
}
And finally, an abstract callable worker which can easily be extended to run the wanted tasks. The call function increments the AtomicInteger, does the work if it is not already done, frees the caller if a result is found, decrements the AtomicInteger, and returns the value if it is set. Note that the worker implementation will implement the callInternal function.
public abstract class AbstractCallableWorker<T> implements Callable<T> {

    private FutureResult<T> result;

    public final T call() {
        if (!result.isDone()) {
            T callResult = callInternal();
            if (callResult != null) {
                // store the value and free the caller (the setter
                // name is elided in the original excerpt)
                result.setResult(callResult);
            }
        }
        return result.getResult();
    }

    protected abstract T callInternal();
}
With everything put together, you can implement as many workers as you want and call as many web services as you want, with the guarantee that the caller continues with the fastest service to reply!
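
As an aside, the JDK offers a ready-made variant of this pattern: ExecutorService.invokeAny returns the result of the first task to complete successfully and cancels the rest, and it also throws if every task fails, which covers the watchdog case. A minimal sketch (the class and method names here are illustrative, and the service calls are simulated by simple Callables):

```java
import java.util.*;
import java.util.concurrent.*;

// "First reply wins" with the JDK's own primitive: invokeAny blocks
// until one Callable succeeds, then cancels the remaining tasks.
public class FirstReplyDemo {
    public static <T> T firstReply(List<Callable<T>> queries) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(queries.size());
        try {
            return pool.invokeAny(queries);
        } finally {
            pool.shutdownNow(); // interrupt any still-running queries
        }
    }
}
```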

Saturday, November 14, 2009

Clustering Jackrabbit 1.5.x with DataStore configuration

Jackrabbit is the reference implementation of JSR-170: JCR, the Java Content Repository.

One day we felt the need to run a Jackrabbit repository in a clustered environment for reliability. So a JCR runs on two separate servers, backed by an Oracle database and an NFS-shared datastore.

There are some difficulties in running such an architecture, mainly because the two JCR nodes (JCR1 and JCR2) do not know each other. So if we insert some content on one node, it takes some time before we can read it on the other node.

In Jackrabbit 1.5.x, Session.refresh does not work correctly, so we need a trick to enable cluster (synchronization) features.

First of all, we add a custom servlet filter which handles each GET (read) request and transforms it into a HEAD request (testing whether the content exists) using Apache Commons HttpClient.
If the response says the content exists, we run the normal request. If the content does not exist, we wait for some arbitrary time and relaunch the HEAD request.

This filter gives the node enough time to synchronize itself, and then performs the query properly.
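
The wait-and-retry core of the filter can be sketched as follows, with the HEAD request abstracted behind a functional interface so the timing logic can be shown on its own (all names here are illustrative, not the actual filter code):

```java
import java.util.function.BooleanSupplier;

// Retry an existence check (in the real filter, an HTTP HEAD sent
// with HttpClient) until it succeeds or the attempts run out.
public class ContentWaiter {
    public static boolean waitForContent(BooleanSupplier headRequest,
                                         int maxAttempts, long delayMillis)
            throws InterruptedException {
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            if (headRequest.getAsBoolean()) {
                return true; // content is visible: run the normal GET
            }
            Thread.sleep(delayMillis); // give the node time to sync
        }
        return false; // content never appeared
    }
}
```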

Other possibilities, like a HEAD on the current node followed by a HEAD on a different node, should also work, but they would require every node to know about every other node. Not the best idea.

Another tricky point is storing all the content in the configured datastore. Be careful not to rely on a JNDI persistence manager; be sure to use a subclass of a bundle persistence manager, otherwise you will have some surprises with the size of the database.

Saturday, January 17, 2009

Single-use numbers (nonces)

To address the problem of transmitting sensitive information over an insecure network (which is to say, the Internet...), one solution is to use a nonce, short for number used once, a single-use number.

Nonces are used to salt sensitive information that is then hashed and transmitted in the clear over the network. But they are only effective if there is a guarantee that the same nonce will never be used twice!

A typical example: you are developing an authentication service, but it cannot run over SSL. You still want to ensure that if someone intercepts the credentials, they cannot reuse them to log in themselves.

Hashing the password alone is not enough, because whoever intercepted the hashed password could replay the sequence to simulate a login. One solution is to hash the password concatenated with the nonce (something like sha256("#" + password + "#" + nonce + "#")), then transmit the username, the nonce used, and the result of the hash. The server then has all the information it needs to verify whether the password used for the hash was correct or not. Any subsequent request made with the same nonce is systematically refused.
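
A sketch of this salted hash in Java, using the JDK's MessageDigest and following the "#" framing from the example above (the class name is illustrative):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// Compute sha256("#" + password + "#" + nonce + "#") and return it
// as a lowercase hex string, ready to send alongside the nonce.
public class NonceHash {
    public static String hash(String password, String nonce) throws Exception {
        MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
        byte[] digest = sha256.digest(
                ("#" + password + "#" + nonce + "#").getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }
}
```

The server recomputes the same hash from its stored password and the transmitted nonce, compares, and then blacklists the nonce.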

It is to ease access to these numbers that I provide a small single-use number generation service, publicly accessible at

Each call to this URL returns a 32-character string composed of the letters a-z, A-Z and the digits 0-9. Note that the returned strings are therefore case-sensitive.
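
For reference, such a 32-character case-sensitive string can also be generated locally with the JDK's SecureRandom; a minimal sketch (the class name is illustrative):

```java
import java.security.SecureRandom;

// Generate a 32-character nonce over the alphabet a-z, A-Z, 0-9
// using a cryptographically strong random source.
public class NonceGenerator {
    private static final String ALPHABET =
            "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789";
    private static final SecureRandom RANDOM = new SecureRandom();

    public static String next() {
        StringBuilder sb = new StringBuilder(32);
        for (int i = 0; i < 32; i++) {
            sb.append(ALPHABET.charAt(RANDOM.nextInt(ALPHABET.length())));
        }
        return sb.toString();
    }
}
```

Note that randomness alone does not guarantee uniqueness; the uniqueness check against already-used nonces remains the server's job.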

The number of possible nonces is therefore (26+26+10)^32 = 2.27 × 10^57, or roughly the number of atoms in the Sun.

Use it without moderation for any application that needs to transmit sensitive information over an insecure network.

Edit (06.01.2011): GAE no longer supports proxying; you can try the app directly at