This Blog Has Moved!

Right, so yes, five years ago I moved to GitHub Pages, and never bothered to redirect any of these pages there. Now I've moved on from there, and... finally I am using my real domain, trishagee.com. My blog is now at trishagee.com/blog. See you there!

Dissecting the Disruptor: Wiring up the dependencies

So now I've covered the ring buffer itself, reading from it and writing to it.

Logically the next thing to do is to wire everything up together.

I've talked about multiple producers - they have the producer barrier to keep them in order and under control - and about consumers in a simple situation. Multiple consumers can get a little more involved: we've done some clever stuff to allow consumers to depend on each other as well as on the ring buffer. Like a lot of applications, we have a pipeline of things that need to happen before we can get on with the actual business logic - for example, we need to make sure the messages have been journalled to disk before we can do anything else.

The Disruptor paper and the performance tests cover some basic configurations that you might want. I'm going to go over the most interesting one, mostly because I needed the practice with the graphics tablet.

Diamond configuration
DiamondPath1P3CPerfTest illustrates a configuration that's not uncommon - a single producer with three consumers.  The tricky part is that the third consumer depends on the previous two consumers finishing before it can do anything.


Consumer three might be your business logic, consumer one could be backing up the data received, and consumer two may be preparing the data or something.

Diamond configuration using queues
In a SEDA-style architecture, each stage will be separated by a queue:


(Why does queue have to have so many "e"s?  It's the letter I have the most trouble with in these drawings).

You might get an inkling of the problem here: for a message to get from P1 to C3 it has to travel through four separate queues, and each queue adds the cost of putting the message on and taking it off again.

Diamond configuration using the Disruptor
In the Disruptor world, it's all managed on a single ring buffer:

It does look more complicated.  But the ring buffer remains the single point of contact between all the players, and the interactions are all based on the barriers checking the sequence numbers of the things they depend upon.

The producer side is fairly simple: it's the single-producer model described in my last post. Interestingly, the producer barrier doesn't have to care about all the consumers.  It only cares about consumer three, because if consumer three has finished with an item in the ring buffer, the other two will already have processed it.  So if C3 has moved on, that slot in the ring buffer is available.
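
To make that concrete, here's a minimal sketch of the check a producer barrier could perform before claiming the next slot. This is not the Disruptor's actual ProducerBarrier implementation - the GatingSequence interface and WrapCheckSketch class are made up for illustration - but it shows why only consumer three needs to be consulted:

// Hypothetical sketch of the producer's wrap check, not the real ProducerBarrier code.
interface GatingSequence
{
    long getSequence(); // the last sequence this consumer has finished with
}

final class WrapCheckSketch
{
    // The producer may claim sequence n only once the entry that previously
    // occupied that slot (sequence n - bufferSize) has been released by the
    // last consumer in the graph - consumer three in the diamond.
    static void waitForSlot(long nextSequence, int bufferSize, GatingSequence lastConsumer)
    {
        final long wrapPoint = nextSequence - bufferSize;
        while (lastConsumer.getSequence() < wrapPoint)
        {
            Thread.yield(); // the simplest possible wait - spin until the slot is free
        }
    }
}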

To manage the dependencies between the consumers you need two consumer barriers.  The first talks only to the ring buffer, and consumers one and two ask it for the next available item.  The second consumer barrier knows about consumers one and two, and it will return the lowest sequence number processed by both of them.

How consumer dependencies work in the Disruptor
Hmm.  I can see I'm going to need an example.
We're joining the party halfway through the story: the producer has filled the ring buffer up to sequence number 22; consumer one has read and processed everything up to 21; consumer two has processed everything up to sequence 18; consumer three, which is dependent upon the other consumers, has only made it as far as 15.

The producer can't write anything more to the ring buffer, because sequence 15 is still taking up the slot where we'd want to put sequence 23 - with the eight-slot ring buffer in this example, sequence 23 wraps around to the same slot as sequence 15.
(I'm sorry, I really did try to find an alternative to red and green, but everything else was just as ambiguous).

The first consumer barrier lets consumers one and two know they can grab anything up to sequence 22, the highest sequence number in the ring buffer.  The second consumer barrier checks the ring buffer sequence, but it also checks the sequences on the other two consumers and returns the lowest value.  So consumer three is told it can get anything up to sequence 18 from the ring buffer.
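
As a rough sketch of that arithmetic - not the actual ConsumerBarrier code, and the class and method names here are invented - the dependent barrier's answer is just a minimum over everything it gates on:

// Hypothetical sketch of the dependent barrier's calculation, not the real
// ConsumerBarrier implementation.
final class DependentBarrierSketch
{
    // The highest sequence the dependent consumer may safely read: the ring
    // buffer's cursor, capped by the slowest of the consumers it depends on.
    static long availableSequence(long ringBufferCursor, long... dependentSequences)
    {
        long available = ringBufferCursor;
        for (long sequence : dependentSequences)
        {
            available = Math.min(available, sequence);
        }
        return available;
    }
}

With the numbers above, availableSequence(22, 21, 18) returns 18 - exactly what consumer three is told.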

Note that the consumers are still reading the entries directly from the ring buffer - consumers one and two are not taking the entries off the ring buffer and then passing them on to consumer three.  Instead, the second consumer barrier is letting consumer three know which entry in the ring buffer it's safe to process.

This raises a question - if everything comes directly off the ring buffer, how is consumer three going to find out about anything the first two consumers have done?  If all consumer three cares about is that the earlier consumers have done their job (e.g. replicating the data to somewhere else) then everything's fine - when consumer three is told the job is done, it's happy.  If, however, consumer three needs the results of an earlier consumer's processing, where does it get that from?

Modifying entries
The secret is to write the results to the ring buffer Entry itself.  This way, when consumer three grabs the entry off the ring buffer, it will have been populated with all the information consumer three needs to do its job.  The really important part of this is that only one consumer is allowed to write to each field on the Entry.  This prevents any write contention, which would slow the whole thing down.


You can see this in DiamondPath1P3CPerfTest - FizzBuzzEntry has two fields as well as the value: fizz and buzz.  If the consumer is a Fizz consumer, it writes to fizz.  If it's a Buzz consumer, it writes to buzz.  The third consumer, FizzBuzz, will read both of these fields but not write to either, since reading is fine and won't cause contention.
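
Purely as an illustration - the real FizzBuzzEntry in the performance test has its own fields and accessors, so this is a sketch rather than the actual class - an entry that follows the one-writer-per-field rule might look roughly like this:

// Simplified sketch of a FizzBuzz-style entry; everything other than the
// fizz and buzz fields is made up for illustration.
public final class FizzBuzzEntrySketch
{
    private long value;    // written by the producer only
    private boolean fizz;  // written by the Fizz consumer only
    private boolean buzz;  // written by the Buzz consumer only

    public long getValue()             { return value; }
    public void setValue(long value)   { this.value = value; }

    public boolean isFizz()            { return fizz; }
    public void setFizz(boolean fizz)  { this.fizz = fizz; }

    public boolean isBuzz()            { return buzz; }
    public void setBuzz(boolean buzz)  { this.buzz = buzz; }
}

The FizzBuzz consumer only ever calls the getters, so no two threads write to the same field.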

Some actual Java code
All this looks more complicated than the queue implementation.  And yes, it does involve a bit more coordination.  But this is hidden from the consumers and producers - they just talk to the barriers.  The trick is in the configuration.  The diamond graph in the example above would be created using something like the following:

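// The first consumer barrier gates only on the ring buffer itself.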
ConsumerBarrier consumerBarrier1 = ringBuffer.createConsumerBarrier();

BatchConsumer consumer1 = new BatchConsumer(consumerBarrier1, handler1);
BatchConsumer consumer2 = new BatchConsumer(consumerBarrier1, handler2);

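// The second barrier tracks consumers one and two, so consumer three can never overtake either of them.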
ConsumerBarrier consumerBarrier2 = 
    ringBuffer.createConsumerBarrier(consumer1, consumer2);

BatchConsumer consumer3 = new BatchConsumer(consumerBarrier2, handler3);

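// The producer barrier only watches consumer three, the last consumer in the graph.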
ProducerBarrier producerBarrier = 
    ringBuffer.createProducerBarrier(consumer3);

In summary
So there you have it - how to wire up the Disruptor with multiple consumers that are dependent on each other.  The key points:
  • Use multiple consumer barriers to manage dependencies between consumers.
  • Have the producer barrier watch the last consumer in the graph.
  • Allow only one consumer to write to an individual field in an Entry.
EDIT: Adrian has written a nice DSL to make wiring up the Disruptor much easier.

EDIT 2: Note that version 2.0 of the Disruptor uses different names to the ones in this article.  Please see my summary of the changes if you are confused about class names.  Also Adrian's DSL is now part of the main Disruptor code base.

    Comments

    1. And if you want to write less code when you wire up your consumers, take a look at the disruptor dsl:
      http://github.com/ajsutton/disruptorWizard

      It makes it easy to wire up even the most complex consumer patterns just by passing in batch handlers and the dsl will take care of creating the actual consumers, the barriers and ensure the producer barriers block on the end-of-chain consumers.

    2. Thanks for writing this up. On your last section, the java code, should consumer3 be based off of consumerBarrier2 instead of consumerBarrier1?

    3. Er, yes. Well spotted! Have updated it.

    4. Wow, this is pretty stuff... one question comes up - how to manage "more" than one ring buffer for several working steps - meaning:

      producer A1 - ringbuffer1 - consumer B1 / C1 - consumer B1 / C1 is "producer" B2C2 - ringbuffer2 - consumer D3 / D4 - getting "producer "D3D4" ...

      is this possible in any way?

    5. @bayoda I'm not sure why you'd need more than one ring buffer - the point is that all the producers and consumers should work off a single ring buffer. So, you either wire up your dependencies so they all use a single ring buffer, or you hand off to a second disruptor. The LMAX architecture frequently uses two disruptors, one to handle the incoming information and business logic, and the second to handle marshalling back on to the wire. Martin Fowler's article talks a bit about this approach - http://martinfowler.com/articles/lmax.html

    6. Hi... a little bit difficult to describe... could i mail you? we think like a disruptor i think - in circles ;-)

    7. Hi Trisha!

      I have a question regarding modification of entries by consumers. In FizzBuzzEventHandler, as you explained, each field of the Event can be written by only one consumer thread and read by multiple of them afterwards. However, these fields are not volatile and don't have any synchronization and I don't understand what guarantees that the field that has been modified by one consumer will be read correctly (the up-to-date value) by the next one (dependent on the first one). Is there any guarantee of this in Java?

      The same is true for the sequence field of the AbstractEvent. It has no synchronization, and can be written (in RingBuffer.nextEvent()) and read by multiple threads.

      Thanks!
      Sergey

    8. Oh... I think I understand where the guarantee comes from. Once a consumer (event processor) has finished processing an event, it sets a new value on its sequence, which is volatile. This works as a kind of "commit" for all the changes made to the event at this stage. If a subsequent consumer reaches this entry, then the sequence value updated by the previous consumer has been seen, and all the other changes applied to the event itself will be visible because of the "happens-before" relation between them.

    9. This comment has been removed by the author.

    10. is there a way to ensure that messages are processed in sequence?

    11. The messages are always processed in sequence - the Disruptor was designed to be FIFO. Each event handler will get each event sequentially, and event handlers downstream can be sure the earlier event handlers have processed the messages up to their sequence number.

    12. Just watched your infoq video and now read your article. Interesting stuff. Thanks for comparing your approach to the SEDA style. For me one question remains, though: Is it right you're assuming all consumers to be working on the same (!) original message/event? In your video presentation it was the chassis of the car.

      If so, the comparison to the SEDA style might somewhat miss the mark. Your argument seems to be true for a network of consumers all working on the same basic data (given there is no concurrent writing to it). But what if the network you sketched looks like this:

      P1 -A-> C1.
      P1 -A-> C2.

      C1 -B-> C3.
      C2 -C-> C3.

      P1 is producing some data of type A. C1 and C2 are working on that. But they are transformers. C1 transforms an A into a B. C2 transforms an A into a C. And C3 takes both a B and a C to do its work.

      In a car production line several process stages work on the same thing - either in parallel or in sequence: the car.

      But I find this to be rare in software systems. Rather what gets passed into a software process is transformed by each processing step.

      Take the simple task of creating a dictionary (lookup table of key value pairs) from a string like "a=1;b=2;c=3". I see this as a three step process: split the string into substrings at ";" -> then split the substrings at "=" into key-value-pairs -> then put the key-value pairs into a dictionary data structure.

      With each step the data gets transformed. What is received is not what gets passed on to the next step.

      How do you deal with these kind of scenarios?

      Replies
      1. It's not actually to be compared to SEDA - we started with a SEDA architecture but realised the queues between stages are what slows it down. Having the single ring buffer and the single event that all the stages work on is what makes the Disruptor much faster than our implementation of SEDA.

        In the example you give, we would have a pipeline of dependencies, rather than the parallel examples from both this article and the car chassis. This would mean you wouldn't get the advantage of running various stages in parallel, but because it's still a zero-contention data structure, it's still quicker than sticking the event onto queues between the stages.

        See Pipeline3StepPerfTest (in the Disruptor code) to see how this compares to a queue implementation.

    13. Thanks for the explanation. But my scenario is a little different. In my app, I need multiple producers and a single consumer. Can you please explain or provide some sample code for my understanding? Thanks

      Replies
      1. The multiple consumer scenario has changed and improved since this blog post was written - your best bet is to check out the Google Group for the Disruptor, there's lots of advice and examples there: https://groups.google.com/forum/#!forum/lmax-disruptor

    14. Hi Trisha,
      Is there any way to monitor the usage of the ring buffer? At a given time, how many slots are occupied or free? Is there any logging framework which gives more details? Also, how can I get a complete stack trace in case of an exception?

      Thanks,
      Kiran

      Replies
      1. Hi Kiran,

        The best place to ask questions like this is the Google Group: https://groups.google.com/forum/#!forum/lmax-disruptor

        I haven't worked with the Disruptor for a couple of years now so I'm not up to date on best practices.

      2. Thanks Trisha... I will post it there.

    15. This comment has been removed by a blog administrator.


