Generally strive to remove all backpressure

The topic’s name is a quote from a slide in Anton Lavrik’s (WhatsApp) talk at Code BEAM 2018.

He basically describes the WhatsApp approach to handling distribution in their Erlang cluster and casually mentions that the BEAM is not really made to handle overload, which is a problem. He goes on to list a few things to consider to manage that, most prominently:

Generally strive to remove all backpressure

Well, that got me scratching my head a bit. Voluntary backpressure is a powerful mechanism to manage message queue lengths and regulate throughput. Projects like gen_stage and flow leverage it.
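For concreteness, this is the shape of that mechanism in GenStage (a minimal sketch adapted from the GenStage docs; the module names are just for illustration):

```elixir
defmodule Counter do
  use GenStage

  def start_link(initial) do
    GenStage.start_link(Counter, initial, name: Counter)
  end

  def init(counter), do: {:producer, counter}

  # The producer only emits events when a consumer asks for them.
  # This demand signal travelling upstream is the voluntary backpressure.
  def handle_demand(demand, counter) when demand > 0 do
    events = Enum.to_list(counter..(counter + demand - 1))
    {:noreply, events, counter + demand}
  end
end

defmodule Printer do
  use GenStage

  def start_link do
    GenStage.start_link(Printer, :ok)
  end

  def init(:ok) do
    # max_demand caps how many events can be in flight at once.
    {:consumer, :ok, subscribe_to: [{Counter, max_demand: 10}]}
  end

  def handle_events(events, _from, state) do
    Process.sleep(100) # simulate slow work; the producer waits for us
    IO.inspect(events)
    {:noreply, [], state}
  end
end
```

A slow Printer simply asks for less, so the Counter never floods its mailbox.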

Sure, it has a cost associated with it. In Designing for Scalability with Erlang/OTP, @FrancescoC and @rvirding (well, at least one of you two :slight_smile: ) state

Start controlling load only if you have to.

So, unless Anton Lavrik meant a different kind of involuntary backpressure (silent message drops on process failure/restart, something like that?) rather than backpressure as a load control tool, it means that the regulation costs more than it helps (and I’m assuming the WhatsApp engineers know full well what they’re doing). The major cost presented in the rest of the talk is that backpressure caused by a hotspot or failure in one part of the cluster in effect propagates that failure to the rest of the cluster.

It feels counter-intuitive, as my mental model suggests the benefits increase with scale, unless you’re in the kind of system where applying backpressure means serializing requests through dispatcher processes that act as a bottleneck in front of your workers. But given the efforts the WhatsApp team made to partition their data to reduce locking and optimize replication/recovery granularity, I doubt that’s the case.

Then it got me thinking not in terms of two cost/benefit curves that might intersect several times, but more in terms of the modern approach to buffering in network communications. The industry used to paper over real-time stream ordering issues and several other network warts with increasingly large buffers in L2 and L3 network equipment, until bufferbloat became a recognized problem. Both the buffered and wire-speed approaches work well, but in different domains. The slack that buffering gives you is very comfortable for capacity management and absorbing micro-bursts (typical of workloads like iSCSI storage), but it is untenable when you carry a lot of VoIP, for example, because of an accordion effect in the time distribution of delivery caused by injected jitter and much higher 10th percentile latencies.

Is it the same kind of tradeoff at play here? Are the important considerations inbound traffic characteristics like burstiness and jitter?

For the particular case of Erlang process message queues, how would one start from Little’s Law and cook up a model to estimate the impact of both approaches?
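To make the question concrete, here is how far I get by hand: a back-of-the-envelope sketch assuming an M/M/1 mailbox (Poisson arrivals, exponential service), which real traffic certainly won’t match exactly; the rates in the comments are made up.

```elixir
defmodule LittleSketch do
  # Little's Law: l = lambda * w  (mean queue length = arrival rate * mean wait).
  # For an M/M/1 queue (unbounded mailbox, backpressure-free inflow):
  #   w = 1 / (mu - lambda)    mean time in system
  #   l = rho / (1 - rho)      mean number in system, rho = lambda / mu
  def mm1(lambda, mu) when lambda < mu do
    rho = lambda / mu
    %{utilization: rho, mean_wait: 1 / (mu - lambda), mean_queue: rho / (1 - rho)}
  end

  # If backpressure is removed and load is shed at a mailbox cap k instead,
  # the model becomes M/M/1/k: bounded queue and bounded latency, at the
  # price of a loss probability p_k = (1 - rho) * rho^k / (1 - rho^(k + 1)).
  def mm1k_loss(lambda, mu, k) do
    rho = lambda / mu
    (1 - rho) * :math.pow(rho, k) / (1 - :math.pow(rho, k + 1))
  end
end

# At 90% utilization the unbounded mailbox averages 9 queued messages:
# LittleSketch.mm1(900.0, 1000.0)
# Capping the mailbox at 20 drops ~1.4% of messages but bounds latency:
# LittleSketch.mm1k_loss(900.0, 1000.0, 20)
```

But that only models a single process, not the cluster-wide failure propagation Lavrik describes, which is where I get stuck.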

What do you guys think?


Any place where you have backpressure, you have created a bottleneck. Not every kind of system needs to care about bottlenecks in the same way. If you’re building a real-time messaging platform, you don’t want anything to build up behind a bottleneck. If you’re building a streaming ETL system, which GenStage is good at, you may just care that you don’t drop events, and it doesn’t matter a ton that everything is calculated immediately.
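For instance, if dropping is acceptable, the alternative to backpressure is shedding at the mailbox. A sketch only; the module name and cap are made up:

```elixir
defmodule SheddingWorker do
  use GenServer

  @max_mailbox 1_000 # hypothetical cap, tune per workload

  def start_link(_), do: GenServer.start_link(__MODULE__, :ok, name: __MODULE__)
  def init(:ok), do: {:ok, %{dropped: 0}}

  # Fire-and-forget: the caller is never blocked, so no backpressure
  # propagates upstream when this worker slows down.
  def push(event), do: GenServer.cast(__MODULE__, {:event, event})

  def handle_cast({:event, event}, state) do
    {:message_queue_len, len} = Process.info(self(), :message_queue_len)

    if len > @max_mailbox do
      # Overloaded: shed this event rather than let the backlog grow.
      {:noreply, %{state | dropped: state.dropped + 1}}
    else
      do_work(event)
      {:noreply, state}
    end
  end

  defp do_work(event), do: IO.inspect(event)
end
```

A real-time messaging system would rather pay the dropped events; a streaming ETL system would rather pay the bottleneck.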
