Elixir enables stateful web applications - is it wrong to think like this?

@hubertlepicki thanks for the insight, this is what @peerreynders was talking about as well. I’ve been an OO guy all my career; now that I’m starting to understand what you guys are talking about, I find it super interesting.

still have “user” processes, but in such a case try limiting their responsibility to serving and persisting the data. No business logic if possible.

What you are describing is actually an anti-pattern in OO: the Anemic Domain Model (AnemicDomainModel). I am not saying at all that you are wrong, but I am very curious how you guys acquired that way of thinking. I want to be able to think like that too; are there any resources you can link me to?

thanks

1 Like

Well, @peerreynders’ thoughts are in line with mine. I’m sorry, I should have read the whole thread properly :).

Ok, so if we’re speaking in terms of DDD, the things that persist and fetch Users from somewhere would not be part of the domain at all. They are part of the infrastructure; the User itself is a “value object” if you please, held somewhere in memory by something that is part of the infrastructure. While “Registration” might be a domain model entity operating on the User, there might be, and will be, more entities using the same value objects.
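
To sketch what I mean (all module and field names here are invented for illustration): the User is plain data, the domain logic is pure functions, and whatever holds or persists the data is infrastructure.

```elixir
# User as a value object: just immutable data, no behaviour of its own.
defmodule MyApp.User do
  defstruct [:email, :name, :status]
end

# Registration as a domain entity: pure functions operating on User values.
defmodule MyApp.Registration do
  alias MyApp.User

  def register(email, name) do
    %User{email: email, name: name, status: :pending}
  end

  def confirm(%User{status: :pending} = user), do: {:ok, %User{user | status: :active}}
  def confirm(%User{}), do: {:error, :not_pending}
end

# Whatever stores and fetches these values (a Repo, an Agent, ETS...) is
# infrastructure and contains no business logic at all.
```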

1 Like

I currently see myself as a recovering OO-aholic - does that explain my position? :smile:

Now while real-life -aholics need to abstain completely, I simply have to come to terms with my disillusionment with what OO can actually accomplish. I don’t throw out the baby with the bathwater: I don’t define OO in terms of what can be accomplished within the confines of Java or C#, and I’ve spent enough time with the SOLID principles to get a glimpse of some core insights that may be applicable outside of OO.

But at its core OO is an approach that [adds complexity](http://www.cpptips.com/heuristics2) to manage complexity. Quite often maintaining state inside an object feels like sweeping the dirt under the rug - and justifying it because nobody can see it, because “it’s encapsulated”.

But in the end even Effective Java 2e recommends:

Item 15: Minimize mutability

Once you make immutability the default (in order to minimize mutability), state becomes a troublesome concept and you have to rethink the way you process information.

Note that the blog is 14 years old - and back then I would have agreed wholeheartedly, because from that perspective functional programming simply looks like pumping DTOs through functions, which doesn’t seem that far from procedural programming. But procedural programming doesn’t have first-class functions and functional composition.

And while it seems easy to compose stateful objects, the resulting artifact of an object network is notoriously difficult to “reason about”. Processing immutable data structures through composed functions tends to be much more predictable.
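
A contrived sketch of the contrast (invented data): each stage is a pure function over immutable values, so every step can be inspected and tested in isolation.

```elixir
defmodule Report do
  # Nothing here mutates anything; each stage returns a new value.
  def revenue(orders) do
    orders
    |> Enum.filter(&(&1.status == :paid))
    |> Enum.map(& &1.total)
    |> Enum.sum()
  end
end

# Report.revenue([%{status: :paid, total: 10}, %{status: :open, total: 99}])
# #=> 10
```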

Michael Feathers:

OO makes code understandable by encapsulating moving parts. FP makes code understandable by minimizing moving parts.

So when it comes to designing collaborating processes I usually prefer (near-)stateless data transformation arrangements rather than approaches that require long-lived, stateful (thumb-twiddling) processes. Note that this is merely a preference guideline, as state can be useful, especially if it only exists temporarily and is used with extreme discretion.

7 Likes

Precisely! Long-lived processes that hold state, especially big, complex state, and manipulate it with business logic are evil. They are difficult to reason about, and the bugs that creep in will do a lot of damage. Small, focused, reactive-in-spirit processes are the way to go.

4 Likes

Can you explain why? I don’t entirely agree with this, so I’d really like to hear the reasoning behind this.

In my experience, long-lived processes that hold state can be really easy to understand, reason about, and test if done right. IMO, the most important thing is to keep this state as independent as possible and to decouple the logic-related and process-related code.

This works really well in the case of realtime apps. In my case (handling WebRTC calls), I had one process per call (so they were quite long-lived processes). In the process state I kept all the information needed for the call. As said before, the key was to keep all of the logic in purely functional code operating on simple data structures. This way the code was super easy to test and reason about, so the process (GenServer) was simply a “dumb” container which stored the state and executed business-logic functions. I think we should note that processes and functions are two different things and solve different problems: functions model our business logic (behaviour), processes model the runtime, state change, and time. As long as we keep these two concerns separate and decoupled and don’t try to model business logic with processes, I think we’re fine.
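
To sketch the split (names invented; the real code was obviously more involved):

```elixir
# Pure business logic on plain data - trivially unit-testable.
defmodule CallLogic do
  def new(call_id), do: %{id: call_id, participants: []}

  def add_participant(call, peer) do
    %{call | participants: [peer | call.participants]}
  end
end

# The "dumb" container: stores the state and executes the pure functions.
defmodule CallServer do
  use GenServer

  def start_link(call_id), do: GenServer.start_link(__MODULE__, call_id)
  def add_participant(pid, peer), do: GenServer.call(pid, {:add, peer})

  @impl true
  def init(call_id), do: {:ok, CallLogic.new(call_id)}

  @impl true
  def handle_call({:add, peer}, _from, call) do
    {:reply, :ok, CallLogic.add_participant(call, peer)}
  end
end
```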

I guess the thing that should be avoided is having long-lived and “chatty” processes that depend heavily on state from other processes. Then we face the danger of reinventing poorly designed OO applications. If the processes are independent and autonomous, they really can be easy to understand, even if they live for a long time.

3 Likes

Yes. I do not disagree with you on any of your stances. My views relate to the original post, where the author suggested “one process per user”. This is most likely an example of such a “chatty” architecture.

Keeping long-running processes for connections makes perfect sense.

3 Likes

Because it is essentially a global variable, with all the horrors that global variables entail.

In my opinion a new process/actor should only be introduced when you need either a contention point (think a mutex; this is basically a GenServer) or concurrency (like listening to many network connections, which is what Phoenix does for its websocket channels). There are a couple of other minor cases, but in general I just make more modules and keep everything functional, with occasional database or ETS calls thrown in. I can always turn a module interface into a GenServer in the background if I ever need to, after all. ^.^
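
For example (a hypothetical module, sketched): the public function signature is the contract, so whether it computes in-process or calls a GenServer stays an implementation detail.

```elixir
defmodule Pricing do
  # Today: a plain function, executed in the caller's own process.
  def quote_for(items) do
    Enum.reduce(items, 0, fn item, acc -> acc + item.price end)
  end

  # If a contention point or shared cache is ever needed, the same
  # signature can later delegate to a process instead:
  #
  #   def quote_for(items), do: GenServer.call(__MODULE__, {:quote, items})
end
```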

3 Likes

Actually, it doesn’t have to be like a global variable. I try to treat such processes as a representation of, well… some business process, not some state. Their main goal is to react to certain events and send notifications to interested clients/actors. The process state itself should be considered an implementation detail and not be accessed from outside the process. Another aspect: processes are a great tool for handling failures. If we want to ensure that when something fails, everyone interested knows about it, processes are a great tool to use.

I can always turn a module interface into a GenServer in the background if I ever need to, after all

I think there’s a really important observation in this statement if you turn it around. What’s important is to have good interfaces which hide the implementation details. Other parts of the system shouldn’t know whether you’re using processes, a database, or ETS to keep the state internally; they should only be concerned with the inputs and outputs of your functions.

2 Likes

I think we need to tread carefully here. While I’m all for “hiding details” in order to avoid coupling, I also like BIG RED FLAGS when the semantics change - and the semantics of an in-process function call and a message dispatch are different. Otherwise we slide back into the realm of oversimplification and convenience over correctness.

That’s why I always feel a bit queasy when I see message handling buried inside something that looks like a simple in-process function call.

3 Likes

Hear hear. I try to “name” functions in one of three ways: a function call that does something now and returns, a function call that may take a long while, or a function call that does something out-of-band (like sending a message). Now this doesn’t mean they actually ‘do’ something like send a message, but that is the conceptual idea of that call whether it does or not.
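
Something roughly like this (the naming convention here is my own invention, not a community standard):

```elixir
defmodule Store do
  # 1. Does something now and returns.
  def total(items), do: Enum.sum(items)

  # 2. May take a long while - the name warns the caller.
  def fetch_totals_blocking(ids), do: Enum.map(ids, &slow_lookup/1)

  # 3. Conceptually out-of-band: fire-and-forget; any result arrives
  # later as a message, not as a return value.
  def request_recount(pid), do: send(pid, :recount)

  defp slow_lookup(id) do
    Process.sleep(100) # stand-in for a slow external call
    {id, :rand.uniform(100)}
  end
end
```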

1 Like

Yes, but you have the same problem when you use the database. With Ecto, it’s not only message passing but also a network call to the database hidden behind a function call. But people are sometimes so used to databases and ORMs that they don’t think about this when using them.

Don’t get me wrong, I’m not saying that one solution is better than the other, just that we need to carefully choose the right solution, because all of them have their trade-offs.

1 Like

I say ignorance isn’t an excuse for poor application design.

Now, I’m not familiar with the Ecto codebase, but I’d expect that, by and large, the module gives a hint as to whether or not a function call crosses the process boundary. And even when that isn’t the case, the function parameters probably tell their own story - if you need to specify the repo, you’re likely crossing the process boundary.
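
For instance (hypothetical schema and repo names), the pure and boundary-crossing halves of Ecto are easy to tell apart:

```elixir
params = %{"email" => "a@b.c", "name" => "Ann"}

# Pure: builds a changeset in the calling process; no boundary crossed.
changeset = Ecto.Changeset.cast(%MyApp.User{}, params, [:email, :name])

# Names a repo: almost certainly crosses a process (and network) boundary.
{:ok, user} = MyApp.Repo.insert(changeset)
```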

3 Likes

But isn’t this done everywhere in the Erlang ecosystem? Do you really care if something uses a process, an ETS table, or something else? And how would you ever be able to change the implementation unless you hide the functionality behind an interface?

It is even recommended in a number of books to hide the fact that you are talking to, for example, a gen_server.

I start with the API I want. Once that is set I can change the implementation any way I want (and I often do, because I don’t know the optimal solution up front) without having to change all the code using the library. Perhaps this is convenience over correctness, but in practice I don’t see it working any other way.

3 Likes

I think the answer is, as always, it depends.

For client-server architectures (where the client can be anything short-lived, most commonly a process handling a single HTTP request), I think that most of the time it’s fine not to care whether it’s a function call or sending a message. In the worst-case scenario you will just time out. Of course there are various cases where this might cause a bottleneck or other problems, but I’d say that this should be the server’s concern, not the client’s.

The situation looks a lot different when it’s not client-server but multiple cooperating processes. Then you are exposed to a lot of different problems (for example deadlocks) which are really hard to debug. In this case, the difference between a function call and sending a message (and, most problematically, sending a message and awaiting the answer) is huge.
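
A minimal sketch of the classic failure mode: two GenServers that synchronously call each other. If each one is inside its handle_call when the other’s call arrives, both sit blocked until a call timeout kills one of them.

```elixir
defmodule Ping do
  use GenServer
  def start_link(_), do: GenServer.start_link(__MODULE__, nil, name: __MODULE__)

  @impl true
  def init(state), do: {:ok, state}

  @impl true
  def handle_call(:ping, _from, state) do
    # Blocks this server's loop while waiting on Pong...
    {:reply, GenServer.call(Pong, :pong), state}
  end
end

defmodule Pong do
  use GenServer
  def start_link(_), do: GenServer.start_link(__MODULE__, nil, name: __MODULE__)

  @impl true
  def init(state), do: {:ok, state}

  @impl true
  def handle_call(:pong, _from, state) do
    # ...while Pong may itself be blocked waiting on Ping: a deadlock
    # that only the 5-second default call timeout will break.
    {:reply, GenServer.call(Ping, :ping), state}
  end
end
```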

1 Like

I would expect a competent Erlang developer to realize on a conscious and subconscious level that (synchronous) API calls are in fact:

  • blocking
  • subject to a possible timeout capable of terminating the process

while at the same time using the “let it crash” philosophy and the tools of process links, exits (+ trapping), monitors, supervision, etc. to manage potential termination in some reasonable fashion.
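
Concretely (a sketch): a call timeout does not return an error value, it exits the calling process, so the caller has to either embrace that (“let it crash”, with supervision above it) or explicitly catch the exit.

```elixir
# Assuming `server` is some GenServer pid or registered name:
try do
  GenServer.call(server, :slow_op, 1_000)
catch
  :exit, {:timeout, _} ->
    # Without this catch (or a supervisor above us) the caller dies here.
    {:error, :timeout}
end
```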

Do you really care if something uses a process

I would at least prefer to know up front whether a function call has the potential to block process execution, because at that point the decision needs to be made whether it makes sense for the process to be blocked - and if it doesn’t, it’s time to spawn a separate process for the express purpose of being blocked (instead).
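
One common shape for that, sketched with Task (SlowService.fetch/0 is a stand-in for any blocking call): the server delegates the blocking work and stays free to serve other messages.

```elixir
defmodule Fetcher do
  use GenServer

  def start_link(_), do: GenServer.start_link(__MODULE__, %{}, name: __MODULE__)
  def refresh, do: GenServer.call(__MODULE__, :refresh, 30_000)

  @impl true
  def init(state), do: {:ok, state}

  @impl true
  def handle_call(:refresh, from, state) do
    # Spawn a process whose sole purpose is to be blocked (instead of us).
    Task.start(fn ->
      result = SlowService.fetch()  # hypothetical blocking call
      GenServer.reply(from, result) # reply to the original caller directly
    end)

    # :noreply - Fetcher's own loop never blocks on the slow call.
    {:noreply, state}
  end
end
```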

Elsewhere I have already expressed my puzzlement about some places suggesting “call” as a default over “cast”. All too often a “default” is interpreted as a strong preference or, worse, a “99% of the time” choice - often accepted to avoid having to actually understand the nuanced consequences of all the available choices.

I think one of the best uses of process state is to store context and correlation information for pending asynchronous requests (i.e. cast messages expected to eventually result in some kind of return message or timeout).
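
A sketch of that shape (the protocol here is invented for illustration): each outbound cast is tagged with a ref, and the ref-to-caller map is the process state.

```elixir
defmodule Gateway do
  use GenServer

  def start_link(backend), do: GenServer.start_link(__MODULE__, backend)

  @impl true
  def init(backend), do: {:ok, %{backend: backend, pending: %{}}}

  @impl true
  def handle_call({:request, payload}, from, state) do
    ref = make_ref()
    # Fire off the asynchronous request, tagged with a correlation ref.
    GenServer.cast(state.backend, {:work, ref, self(), payload})
    # Remember who asked; the reply happens when the result comes back.
    {:noreply, put_in(state.pending[ref], from)}
  end

  @impl true
  def handle_cast({:result, ref, result}, state) do
    {from, pending} = Map.pop(state.pending, ref)
    if from, do: GenServer.reply(from, result)
    {:noreply, %{state | pending: pending}}
  end
end
```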

Synchronous API calls have their place but my concern is that mindless adoption (for the sake of convenience) will waste valuable opportunities in process architectures.

In terms of Erlang vs. Elixir: in my experience Erlang educational resources tend to dwell much more on the details of working with the primitives - spawning new processes, sending and receiving (asynchronous) messages - giving the learner a better perspective on how different (from more mainstream platforms) the BEAM environment actually is.

Elixir resources seem all too eager to impress instead with agents, tasks, etc., without building a sense and appreciation of how asynchronous the fundamentals really are (sometimes leaving neophytes with a false sense of familiarity and security).

6 Likes

There are some very articulate thoughts above about when it does and does not make sense to use processes to hold state.

I have some intuition about this myself, mostly leading me to use process state for ongoing interactions when the interaction model is time-bound and matches a single connection or session. Those would be some types of games, web sessions, etc.

However, I would be grateful if you could apply your opinion on whether or not it’s a good idea to use process state to the various examples below, just to make sure I understand what you mean. If @peerreynders and @hubertlepicki especially could chime in, that would be very useful :slight_smile:

  • In his online Elixir course (non-free, Elixir for Programmers), Dave Thomas uses one stateful GenServer per connected client, holding the state of a Hangman game (letters already guessed, game status, turns left, etc.).

  • Lance Halvorsen, in line with his book that has been recommended on this forum on several occasions (Functional Web Development with Elixir, OTP, and Phoenix), presented his Elixir at a Walking Pace talk at ElixirConf US 2018. In it, he describes how he uses one process per item to be tracked in a warehouse/shipping system, complete with on-demand cold start (reading state from disk). Although he admits in the talk that this is going to be hell to scale to a multi-node system and does not really give an applicable solution for it, he seems to see quite a few benefits in doing this in terms of architecture.

EDIT: There is a reference by @peerreynders to a thread on the PragProg forums that might address this at least partially (GenServer docs: “handle_cast … should be used sparingly” Why?), but those forums seem to be closed now and I could find no cache of this thread elsewhere.

  • On several occasions on this forum, it has been suggested that GenServers are a good fit for holding DDD-aggregate type state - my understanding being one GenServer per aggregate instance, similar to Lance Halvorsen’s example, and not simply a stateless implementation of business logic; for example Domain Driven design : Aggregate roots as Elixir Processes/GenServer.

On that last one, I’d especially love to get feedback from @hubertlepicki, as you seem to agree above with not using processes to hold state as some default model, but at the same time seemed to agree that using them to maintain aggregates is fine. I might be seeing a contradiction that does not exist there, and would really like to hear it from the horse’s mouth :slight_smile:

All this, I guess, eventually boils down to the (already mentioned above) excellent To spawn, or not to spawn? by @sasajuric

To frame it in terms of that article: is using a GenServer per DDD aggregate not a pure “thought concern” (a red-flag reason for using a process to maintain its state), because its role is to serialize access to that aggregate, and it is thus a legitimate concurrency unit?
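
For reference, my mental model of that suggestion (all names hypothetical): one process per aggregate id, found via a Registry, so commands for one aggregate are serialized through its mailbox while different aggregates run concurrently, and the business rules still live in a pure module.

```elixir
defmodule OrderAggregate do
  use GenServer

  # Assumes `Registry.start_link(keys: :unique, name: MyApp.AggregateRegistry)`
  # somewhere in the supervision tree.
  def start_link(order_id) do
    GenServer.start_link(__MODULE__, order_id, name: via(order_id))
  end

  def execute(order_id, command), do: GenServer.call(via(order_id), {:execute, command})

  defp via(order_id), do: {:via, Registry, {MyApp.AggregateRegistry, order_id}}

  @impl true
  def init(order_id), do: {:ok, Order.new(order_id)} # Order: a pure domain module

  @impl true
  def handle_call({:execute, command}, _from, order) do
    # The process only serializes access; the rules live in pure code.
    case Order.handle(order, command) do
      {:ok, new_order} -> {:reply, :ok, new_order}
      {:error, _} = err -> {:reply, err, order}
    end
  end
end
```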

2 Likes

Dave Thomas uses one stateful GenServer holding the state of a Hangman game

In my view it’s an acceptable implementation largely because the state is ephemeral, i.e.:

  • Most of the time “it just works” and you can discard the process when the game is over.
  • If the process crashes, that game is lost, but that doesn’t affect any other game in progress, and the lost process can easily be replaced; however, its previous state can’t be recreated. In some situations that is actually a good thing, because the state could have been corrupted and caused the crash in the first place.

he seems to see quite a few benefits in doing this in terms of architecture.

The primary problem would be finding the relevant state if you don’t know where it lives.

  • A PID always includes the node the process lives on, so you’re OK until the process dies.
  • If the process is named you can always get to it even if it has been replaced.

However, in most cases it is just simpler to stick with a single-node solution. Because the BEAM is so resource-efficient, you can pack a lot of stuff onto a single node. Also keep in mind that multi-node solutions are primarily a reliability measure, i.e. maintaining availability in the face of node hardware failure. Distributing heterogeneous tasks across nodes is possible, but it is not the primary focus of BEAM distribution and can quickly get complicated if you don’t want a single node failure to cripple your system (because that node was the only one providing some critical service).

it has been suggested that GenServers are a good fit for holding DDD-aggregate type state

I think the point is that they can be under certain circumstances, i.e. it depends.

Ultimately, state is simply data, and you will have functions in modules to inspect that data and to create new versions that evolve it as a result of the sequence of events the state has been exposed to over time.

That is all you need to represent state, because that data can be handed to any function wishing to inspect the current state.
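
A contrived sketch of that: the state is a plain value, and its evolution is just a fold over the events it has seen.

```elixir
defmodule Account do
  def new, do: %{balance: 0}

  def apply_event(acc, {:deposited, n}), do: %{acc | balance: acc.balance + n}
  def apply_event(acc, {:withdrawn, n}), do: %{acc | balance: acc.balance - n}

  # Current state = initial value folded through the event history.
  def from_events(events), do: Enum.reduce(events, new(), &apply_event(&2, &1))
end

# Account.from_events(deposited: 100, withdrawn: 30).balance  #=> 70
```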

Placing that data inside a process gives it a location - a place where the containing process can “evolve” the state while the process itself can be contacted by other processes.

I think classifying a process as a container of state is an oversimplification - it makes it sound as if the process only serves to maintain state. A process exists to enact a protocol in concert with other processes, and the state contained by all the participating processes serves that protocol (or whatever the protocol is trying to accomplish).

Viewing processes merely as containers of state could easily lead to “ask-style” architectures (Tell, Don’t Ask) which would be far from optimal.

2 Likes

Many thanks for taking the time to share your insights.

OK, that connects with what I had in mind as well, re. this being time-bound and not having strong persistence requirements.

Yes, I’ve spent a lot of time over the last few years doing distributed-systems-related work, and despite the ambient kool-aid my advice to customers is to avoid distribution at all costs if they can. The BEAM is indeed a good tool for stretching what one can do on a single node, and even with Erlang/Elixir I tend to design by default for stateless server farms when horizontal scalability and/or HA and disaster recovery are required.

But that being said, not all “stateful process” situations create that problem - a good example would be stateful client channels with Phoenix, as already mentioned in this thread.

Haha, yes, of course. It’s interesting to get back to the good old “it depends” there, as the stateful-vs-stateless question tends to trigger some pretty dogmatic points of view, if my recent online reading is any indication. No silver bullet, etc.

I get the dumb container anti-pattern, and was exposed to the TellDontAsk / LawOfDemeter / OneResponsibilityRule corpus of principles years ago.

However, approaching that set of issues from the “Tell, don’t ask” angle when using remote and/or message-passing systems, and not strictly in the context of neo-OO (Java, C++), often leads me to some considerations peripheral to this problem, close to Steve Vinoski’s stance on RPC.

It’s actually quite funny that you chose to point to Alec Sharp’s take on it and not one of the more commonly linked articles from Martin Fowler or even the c2 pages. He probably talks about it with a Smalltalk-centric vision, which for me, in an Elixir context, makes the idea resurface that CSP / agents / Erlang processes might be the purest implementation of the Alan Kay vision of objects. That might seem paradoxical because, as mentioned in the article you linked, the TDA principle is often coined as a reminder that objects are bundles of data + functions at their core, which, depending on how you read and interpret it, might be at odds with Alan Kay’s “The important thing about objects is message-passing”.

Is this apparent contradiction just a surface-level language trick, in your opinion, or is there something more profound at work?

EDIT: I reread myself and it seems that I was just rambling, feel free to ignore me :slight_smile:

2 Likes

I came across the idea via Why getter and setter methods are evil (2003) - more recently Allen Holub has been going with the “Tell, Don’t Ask” moniker in his OO talks. And most sources seem to ultimately point back to pragprog.

Steve Vinoski’s stance on RPC.

I keep quoting Convenience over Correctness (2008) hoping somebody will listen.

interpret it might be at odds with Alan Kay’s “The important thing about objects is message-passing”.

Alan Kay’s insight is enlightening as it moves focus away from “the mighty object” but it could still be considered myopic.

I’m referring to Protocols as discussed in Thinking like an Erlanger.

The software industry got caught up in CASE tools while Class-Responsibility-Collaboration (CRC) Cards (1989) were largely ignored. Lots of attention to Classes (Objects) and Responsibilities, but the dynamics of Collaboration - not so much. People focusing on individual trees and ignoring the forest.

PS: Some further discussion on To spawn, or not to spawn?

1 Like

I guess we should wrap this up before we get moved to another thread for being off-topic, even though I feel we’re digging in a direction directly related to the original thread topic :slight_smile:

Well, there is quite a lot of recycling of old ideas in all those well-coined modern catchphrases.

I distinctly remember spending a long time thinking about how splitting a system into objects would work to reduce complexity, how to design the interactions between those objects, and how to make sure we didn’t fall into the bare-objects trap by ensuring we weren’t querying state but exchanging events/messages/commands. This was after reading a pre-UML book by Grady Booch, the one with the plastic tracing ruler included for drawing model diagrams :slight_smile: That was probably the early ’90s, and I’m quite sure that Fred Brooks, for example, touched on those topics as well, with similar conclusions.

And I thank you for that, one of your links prompted me to read that one again a few days ago. A good candidate for my “read once a year” list :slight_smile:

I did not particularly enjoy that talk, not sure why; maybe because I was already convinced :slight_smile: But I can definitely vouch for the excellent Principles of Protocol Design by Robin Sharp. As a consultant I have tried to spread the message that thinking in terms of protocols yields many benefits (with trade-offs, of course, as always), but with little success so far. Either I lack the proper approach/arguments or it’s simply too much of a mind-shift for my target audience.

Yup. And I fear that all the Agile agitation of this last decade, even though some of it was based on starting points that made sense, misled a generation of programmers (sorry, “craftsmen” :slight_smile: ) into thinking they can get away without doing any systems design and just focus on micro-optimization of their individual bricks, on the assumption that adhering to whatever communication method of the day exonerated them from thinking about the actual communication taking place - sometimes optimizing for consistency of call conventions rather than consistency of communication semantics. People coming from manufacturing systems, logistics, or industrial computing in general seem to have been more immune to that than the general public, though (a totally anecdotal observation; I have no proof of that).

Thanks for the heads up, I’ve read the original article but missed the thread on these forums. Will do :slight_smile:

2 Likes