Comments on Trisha's Ramblings: "Dissecting the Disruptor: Demystifying Memory Barriers" (15 comments)

Trisha (2015-10-21 11:21):
LMAX started to use the Disruptor in a different way after I wrote this post; I believe they no longer store the entries as a byte array. The Disruptor has moved on a lot in the four years since I wrote this, and the best place to get information about how it works now is the Google group: https://groups.google.com/forum/#!forum/lmax-disruptor

There's a lot of information there already, and they're a friendly bunch if you want to ask specific questions about how things work.

Apurva Singh (2015-10-19 21:24):
Hi Trisha, thanks for your effort; I've mostly got it. One question regarding the choice of a two-dimensional array: the array holds a reference to a byte array which stores the Entry, but is that reference volatile (a performance hit again, but it would work)? I'm assuming a volatile reference to the Entry makes sure all the bytes behind it are correctly 'visible'. How about having just a single array of size = nr_of_elements * size_of_one_element?

Trisha (2015-07-19 09:44):
a) There's no need: you'll never run out of numbers.
b) Define "finished". When you've written to all the slots? What if there's still a reader waiting to read from slot one? How can you tell the difference between sequence number one the first time around and sequence number one the second time around?

The nice thing about the Disruptor is that if you journal all the entries and their sequence numbers every time they've been written (see Martin Fowler's article on the overall LMAX architecture), you get a clear sequence of events that happened throughout the system, from zero when you started up to some insanely high number when it shut down. This is a really nice feature that can be used for debugging events that went through the system.

Anonymous (2015-07-19 08:55):
Sorry for the very stupid question to come... I haven't actually gone through the Disruptor source code, but why can't it simply reset the sequence number once one cycle is finished?

Trisha (2012-04-02 14:15):
It's a fair point: the sequence number is finite, so what happens when we reach the maximum?

The sequence number is a long, so the maximum value it can hold is 9223372036854775807.

If you process a million messages a second, it will take you
9223372036854775807 / 1,000,000 = 9,223,372,036,854 seconds
to reach this value.

Which is
9,223,372,036,854 / 60 = 153,722,867,280 minutes

Which is
153,722,867,280 / 60 / 24 = 106,751,991 days

Which is approximately
106,751,991 / 365 = 292,471 years

So, yes, you will run out of sequence numbers at some point. But if you're processing a million messages a second, it's still going to take a looong time before the sequence number wraps. It's like the Y2K problem, but I think global warming is a more pressing matter.

stuart cullinan (2012-03-21 17:15):
Apologies for the noob question, but I'm battling to understand what happens to the sequence number when it reaches its maximum size. I get the idea of the ring wrapping and how you determine the position in the array; however, the sequence number is finite. What am I missing?

Michael Bloomfield (2012-01-13 17:45):
Smile, no worries, it's fun to think about these things.

So, for 2.0, Martin managed to improve the Disruptor from 6 million mps to 25 million mps. That's insane.

Trisha (2012-01-13 09:48):
I think I'll avoid giving away any more secrets ;-)

Michael Bloomfield (2012-01-12 20:00):
Ah, intellectual property... Do tell :-) "In a predictable fashion" is the key phrase.

When does a message get stamped with the "exchange timestamp"? Does the input-source Disruptor handle timestamping, or does the exchange Disruptor? When I think about the algorithm of multiple input sources converging into one exchange Disruptor, it seems more likely that the "exchange timestamp" is applied in the exchange Disruptor.

Trisha (2012-01-12 17:36):
I'm not sure I'm allowed to talk about that, if I'm honest! Obviously we've open-sourced various parts of the system like the Disruptor, Freud (http://code.google.com/p/freud/) and JMicrobench, but talking about how we really make the most of them is probably where we start straying into the territory of our intellectual property.

I will say, however, that we have a number of different channels into the exchange: FIX gateways for liquidity providers, FIX gateways for retail users, the web UI, and the API/protocol. So we don't have a single source of millions of orders to marshal into the Disruptor; we have a number of sources of orders, each gateway also using the Disruptor to marshal orders into the exchange in a predictable fashion.

Michael Bloomfield (2012-01-12 00:32):
I've read all the blogs, technical papers, articles, etc. on LMAX. One part I haven't wrapped my head around is this:

LMAX is a retail exchange where you have millions of users buying/selling derivatives, and the Disruptor is a single-writer/multiple-reader design.

How do you collect a million buy/sell orders (from millions of inputs/users) and organise them to write to the Disruptor in an orderly fashion? I presume in timestamp order, too?

Are you using a round-robin approach to give each input the opportunity to write to the Disruptor?

Trisha (2012-01-11 17:45):
I could write a whole article on why we chose Java over C++!

The short version is that yes, C/C++ might have given us greater control. But modern Java compilers are very efficient, and not worrying about a lot of the low-level details is actually an advantage: with Java, we can pick out the stuff we want to care about and let the compiler take care of everything else.

Another advantage is the sheer quantity of good-quality Java devs in London. It makes hiring a lot easier, and with a good dev you can teach them the specifics of performance for your system even if they're not high-performance gurus.

Yet another advantage (I'm told; I'm not a C/C++ developer, but I heard Martin talk about this) is that getting the code fast, correct, and readable in C/C++ was going to take longer than in Java. Sure, we could get a higher-performance system, but it would take longer than the time it took to write in Java, and this is fast enough for us for now.

Michael Bloomfield (2012-01-11 17:34):
I'm impressed with the Disruptor design and concept. LMAX has worked hard to think through how to use the hardware (memory, hard drives, etc.).

I'm wondering if C/C++/C# would have given you better control of memory management than Java, especially with low-level instructions?

Trisha (2011-08-10 09:47):
Yes, if a slow consumer is doing stuff to entries 10-30 (for example), then consumers that are dependent upon this slow one will only be able to process up to number 10.

If you have a much slower consumer which other things depend on, at some point everything's going to be waiting for it anyway: any set of dependencies is only going to be as fast as the slowest thing, no matter what structure you use to organise them. The Disruptor is designed to smooth out bursts of activity, in which case the slow consumer will catch up during a period of low activity.

If a consumer is consistently slowing the rest of the system down, it's either a sign you need to address the performance of that consumer, or you can parallelise it: for example, have two consumers, one processing odd sequence numbers and one processing even ones.

Matt Fowles (2011-08-10 00:44):
If you have multiple things blocking on a slow consumer, doesn't forcing the sequence update to wait until the end of the batch prevent them from running in parallel on earlier parts of the batch?
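A footnote on the overflow discussion in the thread above: Trisha's back-of-the-envelope arithmetic for exhausting a long sequence at a million messages a second can be checked directly in a few lines of Java.

```java
public class SequenceOverflow {
    public static void main(String[] args) {
        long maxSequence = Long.MAX_VALUE;       // 9223372036854775807
        long perSecond = 1_000_000L;             // one million messages a second

        long seconds = maxSequence / perSecond;  // 9223372036854
        long minutes = seconds / 60;             // 153722867280
        long days = minutes / 60 / 24;           // 106751991
        long years = days / 365;                 // 292471

        System.out.println(years + " years");    // prints "292471 years"
    }
}
```

The integer divisions match the figures in the comment, so at that rate the sequence number lasts roughly 292,471 years before overflowing.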
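On stuart's question about how the ring determines the position in the array: with a power-of-two ring size, a sequence maps to a slot with a simple bit mask. This is a minimal sketch of that trick (the size 8 and the names here are illustrative, not the Disruptor's actual internals); note that the mask keeps cycling cleanly even at the point where a long sequence would overflow.

```java
public class RingIndex {
    public static void main(String[] args) {
        int bufferSize = 8;                 // must be a power of two
        long indexMask = bufferSize - 1;    // binary 0111

        // The slot for a sequence is just its low bits.
        System.out.println(5L & indexMask);   // slot 5
        System.out.println(13L & indexMask);  // slot 5 again, second lap of the ring

        // Even across long overflow the cycle continues without a gap:
        System.out.println(Long.MAX_VALUE & indexMask); // slot 7
        System.out.println(Long.MIN_VALUE & indexMask); // slot 0, the next slot
    }
}
```

This is why, as Trisha's reply explains, there is no need to reset the sequence: position in the ring never depends on the absolute value, only on its low bits.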
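On Apurva's volatile-reference question: under the Java Memory Model, a write to a volatile reference does make everything written before it visible to a reader that sees that reference (a happens-before edge). Here is a minimal, hypothetical sketch of that guarantee; it is not LMAX code, just an illustration of publishing a filled byte array through a volatile field.

```java
public class VolatilePublish {
    static class Slot { final byte[] data = new byte[64]; }

    // The volatile store below is the "publish": a reader that sees a
    // non-null `published` is guaranteed to also see every plain write
    // made to the array before that store.
    static volatile Slot published;

    static byte readLastByte() {
        Thread writer = new Thread(() -> {
            Slot s = new Slot();
            for (int i = 0; i < s.data.length; i++) s.data[i] = (byte) i; // plain writes
            published = s;                                               // volatile store
        });
        writer.start();
        while (published == null) { Thread.onSpinWait(); }               // volatile loads
        return published.data[63];                                       // guaranteed visible
    }

    public static void main(String[] args) {
        System.out.println(readLastByte()); // prints 63
    }
}
```

The cost Apurva mentions is real: every volatile store and load carries ordering constraints, which is exactly the kind of memory-barrier trade-off the original post discusses.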