Monday, 24 June 2013

Id generation from prime numbers factorization

How can we generate unique ID numbers in a multithreaded application?


  • Generated IDs are unique
  • Minimal synchronization between threads
  • No synchronization between threads during id generation
  • No ordering assumed, just different values

Ids generated from primes

The idea is to generate the ids as products of powers of prime numbers:

 id = p_1^f1 * p_2^f2 * p_3^f3 * ... * p_n^fn

We use different prime numbers in each thread to generate different sets of ids in each thread.

Assuming that we use primes (2,3,5), the sequence will be:

2, 2^2, 2^3, 2^4, 2^5, ..., 2^62

Then, when we see that the next multiplication would overflow, we roll the factor to the next prime:

3, 2*3, 2^2*3, 2^3*3, 2^4*3, 2^5*3, ..., 2^61*3
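A quick sketch to check where the roll-over point lies, using the same overflow guard (primeToExtend < Long.MAX_VALUE / id) that the generator below uses:

```java
public class OverflowCheck {
    // Finds the largest k such that 2^k still passes the generator's
    // overflow guard "primeToExtend < Long.MAX_VALUE / id".
    public static int maxPowerOfTwo() {
        long id = 1;
        int power = 0;
        while (2 < Long.MAX_VALUE / id) { // multiplying by 2 is still safe
            id *= 2;
            power++;
        }
        // Long.MAX_VALUE = 2^63 - 1, so 2^62 fits in a long but 2^63 does not
        return power;
    }
}
```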

Generation class

Each instance of the class IdFactorialGenerator generates a different set of ids.

To have thread-safe generation of ids, just use a ThreadLocal to set up a per-thread instance.

package eu.pmsoft.sam.idgenerator;

import java.util.concurrent.atomic.AtomicInteger;

public class IdFactorialGenerator {
    private static final AtomicInteger nextPrimeNumber = new AtomicInteger(0);

    private int usedSlots;
    private final int[] primes = new int[64];
    private final long[] factors = new long[64];
    private long id;

    public IdFactorialGenerator() {
        usedSlots = 1;
        primes[0] = Sieve$.MODULE$.primeNumber(nextPrimeNumber.getAndAdd(1));
        factors[0] = 1;
        id = 1;
    }

    public long nextId() {
        for (int factorToUpdate = 0; factorToUpdate < 64; factorToUpdate++) {
            if (factorToUpdate == usedSlots) {
                // open a new slot with the next globally unique prime
                factors[factorToUpdate] = 1;
                primes[factorToUpdate] = Sieve$.MODULE$.primeNumber(nextPrimeNumber.getAndAdd(1));
                usedSlots++;
            }
            int primeToExtend = primes[factorToUpdate];
            if (primeToExtend < Long.MAX_VALUE / id) {
                // id * primeToExtend < Long.MAX_VALUE, no overflow
                factors[factorToUpdate] = factors[factorToUpdate] * primeToExtend;
                id = id * primeToExtend;
                return id;
            } else {
                // overflow: reset this factor and recompute id from the remaining ones
                factors[factorToUpdate] = 1;
                id = 1;
                for (int i = 0; i < usedSlots; i++) {
                    id = id * factors[i];
                }
            }
        }
        throw new IllegalStateException("I can not generate more ids");
    }
}

To get the prime numbers I use an implementation in Scala, provided here for problem 7:

object Sieve {

  def primeNumber(position: Int): Int = ps(position)

  private lazy val ps: Stream[Int] = 2 #:: Stream.from(3).filter(i =>
    ps.takeWhile(j => j * j <= i).forall(i % _ > 0))
}
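Putting the pieces together, the per-thread setup mentioned above can be sketched like this. This is a simplified, hypothetical variant that gives each thread one fixed prime (rather than rolling factors), just to show the ThreadLocal wiring:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class PerThreadIds {
    // A few precomputed primes; a real setup would use the Sieve above.
    private static final int[] PRIMES = {2, 3, 5, 7, 11, 13};
    private static final AtomicInteger nextPrime = new AtomicInteger(0);

    // state[0] = this thread's prime, state[1] = last generated id
    private static final ThreadLocal<long[]> state = ThreadLocal.withInitial(
        () -> new long[]{PRIMES[nextPrime.getAndIncrement()], 1});

    // Lock-free per call: ids generated by different threads have different
    // prime factorizations, so they can never collide.
    public static long nextId() {
        long[] s = state.get();
        s[1] *= s[0];
        return s[1];
    }
}
```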

Tuesday, 22 May 2012

Controlled scope


A specific use of Guice custom scopes is presented. You can run into many problems trying something similar, so be careful.
The reason for this post is to have a reference explaining a similar custom scope used in the Service Architecture Model:

Problem: nested passing of parameters

A business operation may be explicitly defined in the context of several high-level context objects. To improve reusability, the code is split into several layers, and nested method executions pass the context objects along. In the example below this situation is shown in one class.
public class NestedParamaterPass {

 public void businessOperation(BusinessData data, Person person, Manager manager, Context context){
  // some operation
  firstNestedCall(data, person, manager, context);
 }

 private void firstNestedCall(BusinessData data, Person person, Manager manager, Context context) {
  secondNestedCall(data, person, manager, context);
 }

 private void secondNestedCall(BusinessData data, Person person, Manager manager, Context context) {
  // just keep going and pass all parameters
 }
}
Because this is one class, it is possible to create private fields and somehow synchronize code execution (we don't want to lose thread confinement). Below this is done with ThreadLocal.
public class NestedParamaterPassLocally {

 private final ThreadLocal<BusinessData> data = new ThreadLocal<BusinessData>();
 private final ThreadLocal<Person> person = new ThreadLocal<Person>();
 private final ThreadLocal<Manager> manager = new ThreadLocal<Manager>();
 private final ThreadLocal<Context> context = new ThreadLocal<Context>();

 private void clearBusinessContext() {
  data.remove();
  person.remove();
  manager.remove();
  context.remove();
 }

 private void setupBusinessContext(BusinessData data, Person person, Manager manager, Context context) {
  this.data.set(data);
  this.person.set(person);
  this.manager.set(manager);
  this.context.set(context);
 }

 public void businessOperation(BusinessData data, Person person, Manager manager, Context context){
  setupBusinessContext(data, person, manager, context);

  // some operation
  firstNestedCall();
 }

 private void firstNestedCall() {
  BusinessData currentData = data.get();
  secondNestedCall();
 }

 private void secondNestedCall() {
  // just keep going

  // How to pass the context objects to another class?
  SomeHelperClass helper = new SomeHelperClass();
 }
}
Note that this solution only solves the problem for local methods. To expose the business objects to other classes, it is necessary to somehow publish references to the ThreadLocal containers. This is the case for the SomeHelperClass helper class used in secondNestedCall above.

Solution: Publish objects by a controlled scope

The controlled scope approach allows you to have an implementation of SomeHelperClass as below:
public class SomeHelperClass {

 private Provider<Person> person;

 private Provider<Manager> manager;

 public void passMePleaseTheBusinessObjects(){
  Person currentPerson = person.get();
  Manager currentManager = manager.get();
  // make some business operation on Person and Manager only.
 }
}
To make this work, the Guice providers must be aware of the ThreadLocal containers used to keep the current context objects. A solution is to create a custom scope that manages the 4 ThreadLocal containers given above. More generic solutions are possible, but to illustrate the idea assume the following Guice scope:
public class ControlledScope implements Scope, BusinessContextControl {

 private final ThreadLocal<BusinessData> data = new ThreadLocal<BusinessData>();
 private final ThreadLocal<Person> person = new ThreadLocal<Person>();
 private final ThreadLocal<Manager> manager = new ThreadLocal<Manager>();
 private final ThreadLocal<Context> context = new ThreadLocal<Context>();

 private static class ThreadLocalWrapperProvider<T> implements Provider<T> {
  private final ThreadLocal<T> reference;

  private ThreadLocalWrapperProvider(ThreadLocal<T> reference) {
   this.reference = reference;
  }

  public T get() {
   return reference.get();
  }
 }

 @SuppressWarnings("unchecked")
 public <T> Provider<T> scope(Key<T> key, Provider<T> unscoped) {
  if (Key.get(BusinessData.class).equals(key)) {
   return (Provider<T>) new ThreadLocalWrapperProvider<BusinessData>(data);
  } else if (Key.get(Person.class).equals(key)) {
   return (Provider<T>) new ThreadLocalWrapperProvider<Person>(person);
  } else if (Key.get(Manager.class).equals(key)) {
   return (Provider<T>) new ThreadLocalWrapperProvider<Manager>(manager);
  } else if (Key.get(Context.class).equals(key)) {
   return (Provider<T>) new ThreadLocalWrapperProvider<Context>(context);
  } else {
   throw new IllegalArgumentException("This controlled scope is only for specific business object types.");
  }
 }

 public void clearBusinessContext() {
  data.remove();
  person.remove();
  manager.remove();
  context.remove();
 }

 public void setupBusinessContext(BusinessData data, Person person, Manager manager, Context context) {
  this.data.set(data);
  this.person.set(person);
  this.manager.set(manager);
  this.context.set(context);
 }
}
Note the BusinessContextControl interface in the scope implementation. It defines the public API to control the business context:
public interface BusinessContextControl {

 void setupBusinessContext(BusinessData data, Person person, Manager manager, Context context);

 void clearBusinessContext();
}
Now the first solution can be simplified to
public class NestedParamaterPassControlled {

 private BusinessContextControl controller;

 private Provider<BusinessData> dataProvider;

 private SomeHelperClass helper;

 public void businessOperation(BusinessData data, Person person, Manager manager, Context context){
  controller.setupBusinessContext(data, person, manager, context);
  try {
   // some operation
   firstNestedCall();
  } finally {
   controller.clearBusinessContext();
  }
 }

 private void firstNestedCall() {
  BusinessData currentData = dataProvider.get();
  secondNestedCall();
 }

 private void secondNestedCall() {
  // just keep going
  helper.passMePleaseTheBusinessObjects();
 }
}
Now the business code has only two method calls related to context setup, and the business objects are available in any nested method call in the thread. Note that the helper object is created before the business context is set up; this is fine because SomeHelperClass injects Providers of the business objects.
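The mechanism can also be seen without Guice at all. Here is a self-contained sketch of the ThreadLocal-backed provider idea, where the Provider interface and all names are hypothetical stand-ins for the Guice types used above:

```java
public class ThreadLocalProviderDemo {

    // Hypothetical stand-in for com.google.inject.Provider
    public interface Provider<T> {
        T get();
    }

    // The "scope" owns the ThreadLocal container...
    private static final ThreadLocal<String> currentUser = new ThreadLocal<String>();

    // ...and hands out providers that read from it at get() time,
    // so the provider can be injected before any context is set up.
    public static final Provider<String> userProvider = new Provider<String>() {
        public String get() {
            return currentUser.get();
        }
    };

    public static void setupContext(String user) {
        currentUser.set(user);
    }

    public static void clearContext() {
        currentUser.remove();
    }
}
```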

Guice configuration

To put it all together, configure the Guice module as follows:
public class ControlledContextModule extends PrivateModule {

 @Override
 protected void configure() {
  ControlledScope scope = new ControlledScope();
  // additional bindings related to business operation with context given
  // by the controlled scope.
  // Don't forget to expose the business API
 }
}


Thursday, 17 May 2012

Canonical Protocol


I would like to show some ideas about the canonical protocol I have defined. The final example shows that such a protocol provides features given by OAuth at the protocol level; it means that you can pass external resources without any change to the service implementations and without additional inter-service integration.

RPC integration paradigm

When using web services to integrate remote systems, the development process can be summarized as:

Client side
-- Prepare request data from the context
-- Make the call
-- Parse the response and interpret the result
Service provider side
-- Parse the request
-- Interpret the data to make internal API calls
-- Create the response

Well, not so difficult. But the problem is that all the code used to serialize/parse context information and to interpret requests/responses does not add any value to the system.
Note that the change from technologies like CORBA to web services is just a standardization of the binary format used to create/parse the messages. The RPC ("Remote Procedure Call") paradigm is still the same.

Proposition: Canonical Protocol

Working on systems integration (too many systems, too many web services) I had an idea for a new kind of integration. Before going into the details, let's see how the previous integration schema may change:

Client side
-- Inject a service API interface
-- make calls to service API
Service provider side
-- Provide a service API implementation

But where is the integration? The integration information can be extracted from the "business code" into an external layer. From the code's point of view, you get an implementation of an interface and don't care where the implementation lives. To get a real difference, the injected implementation of the interface must be realized with a different mechanism than RPC. This new mechanism is the Canonical Protocol.

A simple example

Assume that a service provides an implementation of the interface CoreServiceExample on some external machine:
public interface CoreServiceExample {

 void resetProcess();
 void putData(int value);
 boolean isProcessStatusOk();
}

Then the client code may look like this:
public class ClientSimple {

 private CoreServiceExample coreService;

 public boolean simpleMethod(int internalCounter) {
  for (int i = 0; i < internalCounter; i++) {
   coreService.putData(i);
  }
  return coreService.isProcessStatusOk();
 }
}

If the interface CoreServiceExample were available as a web service, every method call would be executed as an RPC. To change the method simpleMethod to make only one external execution of a web service, it would be necessary to create a new method on the interface CoreServiceExample with some complex data model. This means changing both the client and the provider implementation.

Canonical Protocol

The canonical protocol provides a new solution to this integration problem. Note that only the last method, isProcessStatusOk(), returns a type different from void. The client execution flow does not depend on the intermediate calls to the interface CoreServiceExample. This means that the execution may be recorded locally during program execution, and one external call to the service provider may be made with a single message similar to:
#0->BindingKeyInstanceReference [key=Key[type=CoreServiceExample, annotation=[none]], instanceNr=0]
#1->DataObjectInstanceReference [ instanceNr=1]objectReference="INTEGER_SERIALIZATION"
#2->DataObjectInstanceReference [ instanceNr=2]objectReference="INTEGER_SERIALIZATION"
#n->DataObjectInstanceReference [ instanceNr=n]objectReference="INTEGER_SERIALIZATION"
#(n+1)->PendingDataInstanceReference [dataType=boolean, instanceNr=(n+1)]]
The message information is enough to recreate the executions of interface CoreServiceExample on the provider side. Logical execution with the Guice injection API may look like:
  Injector providerInjector = getProviderInjector();
  Key serviceApiKey = context.getBindingKey(0);
  // the previous call returns the key: Key.get(CoreServiceExample.class)
  CoreServiceExample realService = providerInjector.getInstance(serviceApiKey);
  boolean realResult = realService.isProcessStatusOk();
The real service provider code is untouched! It just has to implement the interface CoreServiceExample, regardless of how the client uses the API. The response message is then:
#(n+1)->FilledDataInstanceReference [instanceNr=(n+1), getObjectReference()="values of realResult"]]
Not only method calls returning void can be delayed. If a method returns an interface, further nested method executions can be recorded as well.
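The recording idea can be sketched with a JDK dynamic proxy. This is a hypothetical, much simplified illustration (no serialization; the "remote" replay happens in-process against a local stand-in service):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

public class RecordingProxy {

    public interface CoreServiceExample {
        void resetProcess();
        void putData(int value);
        boolean isProcessStatusOk();
    }

    // Stand-in for the real provider-side implementation.
    public static final class InMemoryService implements CoreServiceExample {
        public int sum = 0;
        public void resetProcess() { sum = 0; }
        public void putData(int value) { sum += value; }
        public boolean isProcessStatusOk() { return sum > 0; }
    }

    public static CoreServiceExample record(final CoreServiceExample realService) {
        return (CoreServiceExample) Proxy.newProxyInstance(
            CoreServiceExample.class.getClassLoader(),
            new Class<?>[]{CoreServiceExample.class},
            new InvocationHandler() {
                private final List<Object[]> pending = new ArrayList<Object[]>();

                public Object invoke(Object proxy, Method m, Object[] args) throws Throwable {
                    if (m.getReturnType() == void.class) {
                        pending.add(new Object[]{m, args}); // delay: record only
                        return null;
                    }
                    // A real value is needed: replay the recorded calls,
                    // then execute the forcing call and return its result.
                    for (Object[] call : pending) {
                        ((Method) call[0]).invoke(realService, (Object[]) call[1]);
                    }
                    pending.clear();
                    return m.invoke(realService, args);
                }
            });
    }
}
```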

A new quality on integration

The service interaction flow presented below is taken from the OAuth documentation. The big difference is that the interaction between PhotoSharingAPI (faji service) and PrintingPhotoAPI (beppa service) is realized at the canonical protocol level, and the services don't have any information about each other: no direct PhotoSharingAPI-PrintingPhotoAPI integration is necessary.
The real power of the canonical protocol is the possibility to pass external references between services. To show this, assume the following client code realizing Jane's interaction from the OAuth workflow example:
public class JaneGuiInteraction implements JaneGuiService {

 private PhotoSharingAPI sharingApi;

 private PrintingPhotoAPI printingApi;

 public boolean runPrintingPhotoTest() {
  String credentials = sharingApi.register("JANE");
  PhotoSharingAlbum myVacationAlbum = sharingApi.getAccessToAlbum(credentials);

  PrintingPhotoOrder order = printingApi.createPrintingOrder();
  order.addAdressInformation("grandmother direction");

  PhotoInfo[] vacationPhotos = myVacationAlbum.getPhotoList();
  PhotoResource[] photos = new PhotoResource[vacationPhotos.length];
  for (int i = 0; i < photos.length; i++) {
   photos[i] = myVacationAlbum.getPhotoSharingResource(vacationPhotos[i]);
   order.addPhotoResourceReference(photos[i]);
  }
  return order.submitOrder();
 }
}
Note the call to getPhotoSharingResource. It returns an interface PhotoResource from the service bound to PhotoSharingAPI. This reference is passed to an order created on the printingApi service. In this interaction, the canonical protocol records the references to PhotoResource in the context of each service slightly differently:

For PhotoSharingAPI service
#11->BindingKeyInstanceReference [key=Key[type=pmsoft.sam.module.definition.test.oauth.service.PhotoResource, annotation=[none]], instanceNr=11]
The canonical protocol now knows that an instance of PhotoResource is created from service PhotoSharingAPI by a call to the method getPhotoSharingResource.

For PrintingPhotoAPI service
#3->ExternalSlotInstanceReference [key=Key[type=pmsoft.sam.module.definition.test.oauth.service.PhotoResource, annotation=[none]], instanceNr=3]
When the client code passes this reference to service PrintingPhotoAPI, the canonical protocol implementation creates a key with type ExternalSlotInstanceReference and internally remembers the mapping (#3,PrintingPhotoAPI)->(#11,PhotoSharingAPI).
When the recorded method executions are realized in the real PrintingPhotoAPI implementation, the execution of the method addPhotoResourceReference is realized with a Proxy instance that records method calls. As return information, additional calls to service PhotoSharingAPI may be created. In this case it is:
#8->ServerPendingDataInstanceReference [instanceNr=8]]
On the basis of the mapping (#3,PrintingPhotoAPI)->(#11,PhotoSharingAPI), this call is translated to a request to the PhotoSharingAPI service:
#16->PendingDataInstanceReference [dataType=class [B, instanceNr=16]]
This call returns a final data object without any further method calls:
#16->FilledDataInstanceReference [instanceNr=16, getObjectReference()=[B@7f4d1d41]]
and then the result can be passed back to the PrintingPhotoAPI service. Note that the key #8 is the original reference number of the ServerPendingDataInstanceReference above.
#8->FilledDataInstanceReference [instanceNr=8, getObjectReference()=[B@7f4d1d41]]

Prototype implementation

This post doesn't strictly define the canonical protocol; it just shows some ideas. To see the prototype implementation go to

Friday, 24 February 2012

Inconsistency on Maven model


Trying to build Maven from a single svn commit (actually it was a tag commit), I found that it is impossible without a central repository or information from other svn commits. I present the details of this case of a "cross-commit version cycle". I call this an "inconsistency" in the Maven model because it allows creating committed versions of a project that are not self-sufficient.
Any feedback about this inconsistency is welcome.

Maven project model (fragment)

In Maven there are two different project relations: the Parent relation and the Module relation.
The Parent relation inherits project properties, and the Module relation manages the build process: "when building this project, build also these submodules".
Look at the Maven reference documentation for the full reference:

Snapshot versions and release versions

Most Maven projects have a main parent and many submodules, as in figure 1.
Figure 1. Project structure and relations

Snapshot versions are used during development to get the latest version of the source code. Maven looks in the central repository (or the project repository, if configured) for the last build.
When a release is ready, all the projects may be committed with version 1.0. You then get a concrete release and relations as in figure 2.
Figure 2. Release commit with version 1.0

Problem on partial version upgrade

The problem with Parent/Module relations appears on a partial increase of versions. Assume that only some submodule versions are increased in release 1.1 (fig. 3). As the parent project keeps the old version 1.0 (it was not changed), the 1.1 submodules refer to version 1.0. Let's call this commit 1.1_partial.
Figure 3. 1.1_partial commit

Now, in a new commit 1.1_rest, increase the versions of the remaining projects to 1.1. You get a commit where each pom has version 1.1, BUT the submodules changed in commit 1.1_partial still have a parent pom with version 1.0 (fig. 4).
Figure 4. Inconsistent relations across commits

It may look like bad use of Maven versioning, but the model allows this situation, so: how should such a 1.1_rest commit be interpreted?

Some simple arguments

Talking about this "inconsistency" in the Maven model, I got the following counterarguments:

This doesn't happen!

I found this trying to build Maven from source from a tag commit, so here is the example:
  • PARENT: org.apache.maven.plugins:maven-plugins:pom
  • SUBMODULE: maven-clean-plugin:maven-plugin
Refer to this tag commit on Maven:
Project relations are (version in brackets):
PARENT(18) --submodule--> SUBMODULE(2.5-SNAPSHOT) --parent--> PARENT(17)
Look here to see these relations:
You will have no problem building Maven with this commit IF you have the pom of PARENT(17) in your Maven repository. This means that building from scratch is possible only if you use 2 different source commits (at least 2, if you don't run into more old version references).
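In pom terms the relations above correspond, roughly, to fragments like these (a sketch with abbreviated coordinates, not the exact files from the tag):

```xml
<!-- PARENT, version 18: lists the plugin as a module to build -->
<project>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-plugins</artifactId>
  <version>18</version>
  <packaging>pom</packaging>
  <modules>
    <module>maven-clean-plugin</module>
  </modules>
</project>

<!-- SUBMODULE, version 2.5-SNAPSHOT: its parent is version 17,
     which is not part of this commit -->
<project>
  <parent>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-plugins</artifactId>
    <version>17</version>
  </parent>
  <artifactId>maven-clean-plugin</artifactId>
  <version>2.5-SNAPSHOT</version>
</project>
```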

I take care of my pom versions, so it will not happen in my project

You control pom versions only in your own projects, so if some external dependency has such a cross-commit version cycle, you will inherit it.

Old versions are all in the central repo, so no problem

I would like to have Maven projects on Gentoo, so it is a problem for me.

Is this a real problem?

It is not a problem as long as you have a central Maven repository with old release versions. It seems sad that we are actually forced to maintain such a historical repo.

Tuesday, 10 January 2012

Inversion of Control is not Dependency Injection

How to understand the difference between "Dependency Injection" and "Inversion of Control"?
The problem lies in the lack of examples of "Inversion of Control" other than "Dependency Injection". So, below you can find a simple example to see the difference.
In car traffic, each car is "controlled" by a driver. In front of stoplights, cars line up waiting for the green light. Drivers accelerate at the sight of the green light, but they do it with a delay. This leads to a situation as shown:

Applying the Inversion of Control principle, the stoplights could take "control" of the cars' acceleration and execute a synchronized and fast start on the green light. So inversion of control for the car-semaphore scenario looks like this:
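As a code sketch (hypothetical names), inversion of control without any dependency injection: the stoplight, not the driver, decides when each car accelerates:

```java
import java.util.ArrayList;
import java.util.List;

public class StopLight {

    public interface Car {
        void accelerate();
    }

    private final List<Car> waiting = new ArrayList<Car>();

    public void lineUp(Car car) {
        waiting.add(car);
    }

    // Control is inverted: the coordinator calls the components, so all
    // cars start together instead of reacting one by one with a delay.
    public void switchToGreen() {
        for (Car car : waiting) {
            car.accelerate();
        }
        waiting.clear();
    }
}
```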
How is Dependency Injection related to Inversion of Control? Well, with Dependency Injection you take "control" over the new statement away from the developer and give it to the architect. Control by the developer looks like this:
public class DirectControl {

 private SomeAppInterface app = new MyFavoriteImplementation(8080);
}
With Dependency Injection, the architect takes control and you get:
import javax.inject.Inject;

public class InverseControl {

 @Inject
 private SomeAppInterface app;
}