Browse Prior Art Database

A Method for Scaleable, Ordered Message Processing
Disclosure Number: IPCOM000020233D
Original Publication Date: 2003-Nov-04
Included in the Prior Art Database: 2003-Nov-04
Document File: 3 page(s) / 10K


This invention provides a way to allow for scaleable message processing by multiple competing receivers when the order of message processing is important. It works by adding specialized sequence information to the outgoing messages and then processing the sequences in the receiver in such a way as to persistently store messages received out of sequence and then process them once a partial sequence is complete. It requires a shared database or id service on the sending end, and a separate shared persistence mechanism on the receiving end, but does not place any additional requirements on the system design as a whole.

This text was extracted from a PDF file.
This is the abbreviated version, containing approximately 41% of the total text.


A Method for Scaleable, Ordered Message Processing

  When using an asynchronous messaging system like WebSphere MQ, there is often a mismatch between the order in which messages need to be processed, based on the requirements of the sending application, and the order in which the messages can be processed by the receiving applications. We show how to allow for scaleable message processing by multiple competing receivers by adding specialized sequence information to outgoing messages and then processing the messages in the receiver in such a way as to restore message sequences. For instance, imagine that asynchronous messages must be processed in their arrival order by a receiving application. If messages m1 and m2 arrive in that order, then the message processor must process m1 first; only if m1 is processed successfully can the processing of m2 begin. Knowing this, sending systems usually assume that this order will be preserved in all cases, and are designed around that assumption. For instance, consider the case where a set of messages represents inserts, updates and deletes on a database table. If the messages are processed out of sequence (say, attempting to update a row before it has been inserted), the results can be disastrous.
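The insert-before-update hazard above can be made concrete with a minimal sketch. The class and message format here are illustrative, not from the disclosure: a map stands in for the database table, and an UPDATE applied before its INSERT is reported as a processing failure.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only: a tiny in-memory "table" driven by
// messages of the form "INSERT key value" or "UPDATE key value".
public class OrderDemo {
    private final Map<String, String> table = new HashMap<>();

    // Applies one message; returns false when an UPDATE arrives
    // before the row it targets has been inserted.
    public boolean apply(String message) {
        String[] parts = message.split(" ");
        String op = parts[0], key = parts[1], value = parts[2];
        if (op.equals("INSERT")) {
            table.put(key, value);
            return true;
        }
        // UPDATE: the row must already exist
        if (!table.containsKey(key)) {
            return false;
        }
        table.put(key, value);
        return true;
    }
}
```

Processing "INSERT r1 a" then "UPDATE r1 b" succeeds; reversing the two messages makes the update fail, which is exactly the out-of-sequence failure the disclosure is guarding against.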

The problem is that message queuing systems do not automatically guarantee the processing order of messages when multiple receivers compete for messages from a shared queue. If two message receivers listen on a single queue, then one receiver may process messages faster than the other. For instance, consider two receivers, one of which runs at half the speed of the other: the slow receiver may take twice as long to process each message it retrieves. Thus a sequence m1, m2, m3, m4 may actually be processed in the order m1, m3, m2, m4 if receiver "A" (the fast one) retrieves message m1, receiver "B" (the slow one) receives m2, and then receiver "A" processes message m3 before receiver "B" completes its processing of m2. Due to these inherent order dependencies, the most common solution is to have a single receiving process handle all messages coming in from the queue. Unfortunately, that single processor becomes a single point of failure and a bottleneck, and is hence not scalable. Our scaleable solution combines database and J2EE message-driven bean (MDB) technologies.
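The receiver-side idea described in the abstract can be sketched as a resequencer: a message arriving ahead of the next expected sequence number is parked (in the real design, in a shared persistent store; here, an in-memory map), and is replayed as soon as the gap before it is filled. Class and method names are illustrative, not from the disclosure.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

// Sketch of the receiver-side resequencing logic. In the actual
// design the parked map would be a shared persistence mechanism
// visible to all competing receivers.
public class Resequencer {
    private long nextExpected;
    private final Map<Long, String> parked = new HashMap<>();
    private final Consumer<String> processor;

    public Resequencer(long firstSequence, Consumer<String> processor) {
        this.nextExpected = firstSequence;
        this.processor = processor;
    }

    // Called by a receiver when it pulls (seq, body) off the queue.
    public synchronized void onMessage(long seq, String body) {
        if (seq != nextExpected) {
            parked.put(seq, body);   // out of sequence: store and wait
            return;
        }
        processor.accept(body);      // in sequence: process immediately...
        nextExpected++;
        // ...then drain any parked messages that now form a complete run
        String next;
        while ((next = parked.remove(nextExpected)) != null) {
            processor.accept(next);
            nextExpected++;
        }
    }
}
```

With this in place, the interleaving from the text (m1, then m3, then m2) still results in m1, m2, m3 being handed to the processor in order: m3 is parked until m2 arrives and completes the partial sequence.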

We place only a single requirement on the sending applications that place the messages on the queue. This requirement is that all messages of a particular kind, generated by each application (regardless of how many processes or machines run that application), be marked with a globally unique sequence number (or GUS). The GUS contains a monotonically increasing number that must be added to the message itself before it is placed on the final queue for transmission. The GUS provides a way of grouping messages of a particular kind and imposes an order among the messages of a particular group. The GUS may be added by the applica...
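The GUS requirement above can be sketched as follows. The disclosure calls for a shared database or id service to issue the numbers; in this hedged sketch a per-group in-process counter stands in for that shared service, and the names (GusService, nextGus, stamp) and the message format are illustrative, not from the disclosure.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Sketch of sending-side GUS assignment: one monotonically
// increasing counter per message group. In the real design this
// counter would live in a shared database or id service so that
// all sending processes draw from the same sequence.
public class GusService {
    private final ConcurrentHashMap<String, AtomicLong> counters = new ConcurrentHashMap<>();

    // Returns the next sequence number for the given message group.
    public long nextGus(String group) {
        return counters.computeIfAbsent(group, g -> new AtomicLong(0)).incrementAndGet();
    }

    // Stamps the GUS into the message body before it is enqueued.
    public String stamp(String group, String body) {
        return group + ":" + nextGus(group) + ":" + body;
    }
}
```

Each group gets its own independent sequence, so messages of one kind (say, updates to one table) are ordered among themselves without serializing unrelated traffic.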