I have had multiple conversations with Dave on multiple topics in the past. I would say the best summary of those discussions is that I agree with Dave on the problems but we often disagree on the solutions.
For example, I agree on the problems the Jeeves project aims to solve. It is really nice to have some state and then be able to pool it, partition it, or whatever based on a single line of code change. Those things are very annoying to do today and Jeeves aims to solve that. However, Jeeves is a proof of concept and it has areas that require improvement, such as its over-reliance on macros. I don’t mean this as a criticism at all; my point is exactly that figuring out a complete model is a lot of work, and we - as a community - should probably have done a better job of encouraging this type of experimentation. I also think there should have been more discussion of the problems rather than the solutions, because when we jump straight to solutions, a lot of people’s knee-jerk reaction is to say “there is nothing wrong with GenServer”. I agree that is very discouraging to hear when you are trying to improve things, but it also signals a lack of agreement on the problem statement.
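For readers who haven’t felt this pain: below is the kind of plain-GenServer ceremony that a Jeeves-style one-liner aims to collapse. To be clear, this is ordinary GenServer code, not Jeeves’s actual API - just a minimal counter where the client functions and callbacks are all written out by hand.

```elixir
defmodule Counter do
  use GenServer

  # Client API - the hand-written wrapper functions that boilerplate-reducing
  # projects want to generate for you
  def start_link(initial), do: GenServer.start_link(__MODULE__, initial)
  def increment(pid), do: GenServer.call(pid, :increment)
  def value(pid), do: GenServer.call(pid, :value)

  # Server callbacks
  @impl true
  def init(initial), do: {:ok, initial}

  @impl true
  def handle_call(:increment, _from, n), do: {:reply, n + 1, n + 1}
  def handle_call(:value, _from, n), do: {:reply, n, n}
end
```

All of this exists to wrap a single integer - which is the point: changing this state to be pooled or partitioned touches far more than one line.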
There is a cognitive dissonance here: the language is extensible, but the community is conservative and would rather not extend it. I believe most of the problems Dave mentioned are technically solvable - you can replace how the pipe operator works, you can compose Plugs at runtime, you can implement your own data-pipeline solution, etc. - but most people would prefer not to change things.
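To make “you can replace how pipe works” concrete, here is a minimal sketch (module name and semantics invented for illustration) of shadowing Kernel’s `|>/2` with your own macro - this variant pipes the value into the *last* argument instead of the first:

```elixir
defmodule LastArgPipe do
  # Exclude Kernel's |>/2 so our own definition does not conflict with it.
  import Kernel, except: [|>: 2]

  # Rewrite `left |> f(args...)` into `f(args..., left)` at compile time.
  defmacro left |> right do
    {fun, meta, args} = right
    {fun, meta, (args || []) ++ [left]}
  end
end

defmodule Demo do
  import Kernel, except: [|>: 2]
  import LastArgPipe

  def run do
    # Becomes String.contains?("hello world", "world")
    "world" |> String.contains?("hello world")
  end
end
```

The point is not that this is a good idea, only that the language makes it a few lines of code - the barrier is cultural, not technical.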
Whether this conservatism is a good or a bad thing is up to the community to decide. Some communities are conservative by design (see Go) - which means people like Dave would rather go elsewhere - while others will prefer the stability.
I listened to parts of it and found the Elixir critique towards the end rather weird. When he was talking about pipelines he seems to have missed the concurrency/processes aspect of things. I know he hasn’t actually missed it, but his idea of using pipes just doesn’t fit in. Yes, you could have composable pipes with some form of currying, but that doesn’t really attack the same problem as streaming data concurrently through a system. In the worst case you end up with a very complex tool which could be difficult and confusing to use. And if he wants to drop the oldest stuff in the pipe, then just fix it.
If you want your system to live for more than a few months, then you need a more conservative base on which to build it, one which isn’t changing all the time; otherwise it becomes unmaintainable in the long run. Tough, but that’s the way it is. And that’s not restricted to telecom systems.
Agreed. Even without considering the concurrency aspect, a function pipeline is unfortunately too simplistic to model all of the concerns in a data streaming system. The Haskell community is not using function pipelines to model those either: for example, the conduit library has an explicit focus on sources and sinks, which is similar to what you would find in Flow. In data streaming you may have multiple sources, data repartitions, and multiple sinks, and all of those lead to topologies that are by definition more complex than a function pipeline. I guess someone could apply the Flow programming model to Plug, but then it would make Plug much more complex than it needs to be. It would be similar to removing the |> operator and adding monads, because you can express pipes with monads anyway.
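A toy way to see the shape difference, using only the standard library (all the names here are made up, and this runs sequentially, not concurrently): two sources merge, items are repartitioned, and two sinks receive results - a graph, not a line.

```elixir
defmodule Topology do
  def run do
    # Two sources - already something a single linear pipe cannot express
    source_a = 1..5
    source_b = 6..10

    # Merge the sources into one stream
    merged = Stream.concat(source_a, source_b)

    # "Repartition": route even and odd items towards different sinks
    {evens, odds} = Enum.split_with(merged, &(rem(&1, 2) == 0))

    # Two sinks - here just sums, in a real system e.g. files or queues
    %{even_sink: Enum.sum(evens), odd_sink: Enum.sum(odds)}
  end
end
```

Flow expresses exactly this kind of fan-in/partition/fan-out topology, with the concurrency that this sketch deliberately leaves out.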
Once you add concurrency and error handling, it becomes even more distinct. I have been working on this problem for 5+ years: first with GenStage (which supports push and pull systems, as well as using buffers for load regulation instead of back-pressure), then with Flow, which is more data-centric, and now Broadway. The difference between Flow and Broadway is exactly that once you take error handling into account - a must for ingestion systems - the error modelling needs to be put front and center rather than being an afterthought. You need to reason about what happens when half of the data fails to process, what happens if acknowledgements are missed, and so on.
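A stdlib-only sketch of that “errors front and center” point (names invented; real Broadway does much more, including acknowledgements and batching): process a batch so that failures are collected alongside successes instead of the first bad item crashing the whole pipeline.

```elixir
defmodule Ingest do
  # Run `fun` over each item; failed items are captured with their reason
  # so the caller can decide what to acknowledge, retry, or dead-letter.
  def process_batch(items, fun) do
    {oks, errors} =
      items
      |> Enum.map(fn item ->
        try do
          {:ok, fun.(item)}
        rescue
          e -> {:error, item, Exception.message(e)}
        end
      end)
      |> Enum.split_with(&match?({:ok, _}, &1))

    %{ok: Enum.map(oks, fn {:ok, v} -> v end), failed: errors}
  end
end
```

Even in this tiny version, the return type forces you to say what happens to the failed half of the data - which is the design question Broadway puts up front.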
I quite liked Dave’s Component library. I think it attacks an important problem: boilerplate. While many prefer being explicit and gain peace of mind from having the exact GenServer functions laid out in their file, there are some of us (myself included) who know most of the OTP coding cruft by heart and don’t need to see it.
IMO, having a nimble_parsec-y way of handling various OTP boilerplate - with the optional feature of dumping the generated code into a source file if the programmer so desires - would be a very good direction.
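As a sketch of that direction (the names below are invented for illustration, not Dave’s actual Component API): a `use` macro can generate the GenServer client/server cruft for a single piece of state, and in principle `Macro.expand/2` plus `Macro.to_string/1` could dump the generated code for inspection, nimble_parsec-style.

```elixir
defmodule SimpleService do
  # Generates a GenServer holding one value, with get/put client functions.
  defmacro __using__(initial: initial) do
    quote do
      use GenServer

      def start_link(_opts \\ []),
        do: GenServer.start_link(__MODULE__, unquote(initial))

      def get(pid), do: GenServer.call(pid, :get)
      def put(pid, value), do: GenServer.call(pid, {:put, value})

      @impl true
      def init(state), do: {:ok, state}

      @impl true
      def handle_call(:get, _from, state), do: {:reply, state, state}
      def handle_call({:put, v}, _from, _state), do: {:reply, :ok, v}
    end
  end
end

defmodule Store do
  # One line instead of a page of callbacks
  use SimpleService, initial: 0
end
```

The whole hand-written GenServer collapses to the single `use` line in `Store`, which is the trade the “I know the cruft by heart” camp is happy to make.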
Example of something similar from the past: the Elixir core team hugely reducing the complexity of describing your app’s main supervisor children. The current syntax is much more pleasant and intuitive.
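For reference, the change being praised, as a minimal runnable sketch:

```elixir
# Before Elixir 1.5 you would write, with Supervisor.Spec imported:
#   children = [worker(Agent, [fn -> 0 end])]
# The current syntax is just the module, or a {module, arg} tuple:
children = [
  {Agent, fn -> 0 end}
]

{:ok, sup} = Supervisor.start_link(children, strategy: :one_for_one)

# The supervisor derives the full child spec from Agent.child_spec/1
[{Agent, pid, :worker, _modules}] = Supervisor.which_children(sup)
0 = Agent.get(pid, & &1)
```

The old `worker/2`/`supervise/2` helpers still exist but are deprecated; the module now describes its own child spec, which is what makes the call site so small.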
I was certainly a bit bummed to hear that he has mostly moved on. I’ve enjoyed listening to his talks and some podcast appearances. I’m not convinced that his moving on is something to worry about, though. If he doesn’t find that the community gels with his ideas, it seems sound that he keeps looking.
I didn’t really grasp the importance of what he wanted to do. I think the level of conservatism he found around the language is likely real. I even think that might be a plus in my book. I’m not very conservative overall, but I’m good with avoiding the rollercoaster of JS life when possible.
I see a stable core and ambitious projects building on top of it. He mostly dismissed LiveView; that’s definitely something one can argue about. But I could ignore the web entirely and just look at Nerves, Scenic, Lumen, and Membrane, and I get plenty excited.
I think the ecosystem and community are doing fine. If there is merit to whatever he was proposing, I hope it catches on. I think it has to be fine that some people don’t find what they need and move along. Even the entertaining and high-profile ones.