I’ve been reading up on reactive programming and ran into this comment on HN.
The poster wrote that he’d had some serious problems with Erlang’s GC.
I spent a total of 11 months consulting with a company that built a large (50kloc) financial system in Erlang. They had terrible performance problems that were caused entirely by Erlang.
Imagine you have a large amount of data (order books, accounts, etc). You could put it all in one Erlang process, but the GC does not cope well with large heaps (multi-second pauses). You could store the data outside the heap (e.g. ETS), but then you pay the cost of copying on every access and have to trade off ease of use (more data per key) against performance (less data per key). You could split the data up into many processes, but then all your simple calculations become asynchronous protocols. Have fun debugging the math or rolling back changes on errors.
I went into that contract with a fondness for Erlang. Now I wouldn’t touch it ever again. A naive single-threaded blocking server achieved 10x less code, 40x better throughput and 100x better latency. I used Clojure, but any sane platform would have worked just as well with that design.
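The ETS tradeoff the commenter describes comes from the fact that `ets:lookup/2` copies the whole stored term into the caller’s heap. Here is a minimal sketch (module and table names are my own, for illustration): storing one big term per market means every read copies the entire order book, while storing one row per order keeps each copy small but forces more lookups per calculation.

```erlang
-module(book_sketch).
-export([demo/0]).

demo() ->
    T = ets:new(orders, [set]),
    %% Coarse-grained layout: one key holds the whole order book, so
    %% every ets:lookup/2 copies all 1000 orders into the caller's heap.
    Book = [{order, N, N * 10} || N <- lists:seq(1, 1000)],
    ets:insert(T, {market_a, Book}),
    [{market_a, Copied}] = ets:lookup(T, market_a),  %% full copy here
    length(Copied).
```

A fine-grained layout (one ETS row per order, keyed by order id) would make each copy cheap, but a calculation over the whole book then becomes a thousand lookups or a table scan, which is the ease-of-use vs. performance tradeoff the comment is pointing at.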
I usually hear praise for Erlang’s per-process approach to garbage collection, but it seems it isn’t well suited to workloads with “large numbers of small objects eg thousands of orders per market.”
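For context, the per-process design means each process GCs its own heap independently, so the usual advice is to keep each heap small, e.g. one process per market. The sketch below (names assumed, not from the post) shows why the commenter calls this painful: a plain function call over the data turns into a message round trip.

```erlang
-module(market_proc).
-export([start/0, best_price/1, loop/1]).

%% Spawn one process per market; its heap holds only that market's
%% orders, so its GC pauses stay short.
start() ->
    Orders = [{order, N, N * 10} || N <- lists:seq(1, 1000)],
    spawn(?MODULE, loop, [Orders]).

%% What would be a simple function call becomes an asynchronous
%% request/reply protocol, with a timeout to handle a dead process.
best_price(Pid) ->
    Pid ! {best_price, self()},
    receive {price, P} -> P after 1000 -> timeout end.

loop(Orders) ->
    receive
        {best_price, From} ->
            From ! {price, lists:min([P || {order, _, P} <- Orders])},
            loop(Orders)
    end.
```

Multiply this by every calculation that touches more than one shard, and the “simple calculations become asynchronous protocols” complaint follows directly.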
I wonder whether anyone else has had similar problems with it, and how they solved them?