Immutability tradeoff: reliable data via extra auxiliary space

According to GeeksForGeeks, “Stack space in recursive calls counts too as extra space required by a program.”

In other words, getting the benefit of immutable data in memory requires a lot of temporarily-used extra memory (compared with a language that mutates in place).

Add in garbage collection of the heap to reclaim this space.

In what use cases is there actually a tradeoff here?

Garbage collection happens more often. Erlang’s design handles potential issues there, right?

So then, only when dealing with large inputs could that extra temporarily-used space become a problem.

Is that right? Just thinking out loud.


Elixir has tail call optimization: if the last thing a recursive function does is call itself, the compiler can optimize the call. Here’s a blog post about it


Found this: “Tail-call optimization is where you are able to avoid allocating a new stack frame for a function because the calling function will simply return the value that it gets from the called function.”

So functional language designers already accounted for this. Hence there is no memory-usage tradeoff for tail-recursive code.

I.e. a function that mutates its argument iteratively will use about the same amount of space as one that handles its argument with tail recursion, because a tail-recursive function can drop each call from the call stack as it recurses to the next call (the final call returns the output, rather than the initial one).

In effect, this means that the memory allocated for inputs can be reused by tail-recursive calls, correct?
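To make the comparison concrete, here is a minimal sketch (the module and function names `Sum.body/1` and `Sum.tail/2` are my own, not from the thread) contrasting a body-recursive sum, which must keep a stack frame per call, with a tail-recursive one, which runs in constant stack space:

```elixir
defmodule Sum do
  # Body-recursive: `n + ...` happens AFTER the recursive call returns,
  # so each call keeps a stack frame alive until the base case is hit.
  def body(0), do: 0
  def body(n) when n > 0, do: n + body(n - 1)

  # Tail-recursive: the recursive call is the very last thing done, so
  # the BEAM can reuse the current frame instead of stacking a new one.
  def tail(n, acc \\ 0)
  def tail(0, acc), do: acc
  def tail(n, acc) when n > 0, do: tail(n - 1, acc + n)
end
```

Both return the same result, but `Sum.tail/2` carries its intermediate state in the accumulator argument rather than on the call stack.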


Yes, the memory allocated for a function call is reused with the new inputs


The way tail recursion is implemented in BEAM is quite interesting. Instead of automatically allocating stack frames when you enter a function and deallocating on return like many VMs do, in BEAM this stack management is manual - you get an `allocate n` instruction which allocates a stack frame of size n and a corresponding `deallocate n` instruction that does the reverse. This means that some functions that don’t need stack space don’t allocate anything at all in the first place, and other functions can deallocate their stack frame before calling the last function - this effectively is a tail call optimisation.

Additionally Erlang (and Elixir) implement a slightly more powerful version of tail recursion optimisation called last call optimisation, where not only recursive calls are optimised, but all calls in the “tail” or “last” position are optimised. This makes it trivial to tail-optimise mutually recursive functions (a calls b which calls a which calls b …), which is very hard or impossible in other schemes.
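A small sketch of the mutual-recursion case described above (the `Parity` module name is my own, for illustration): each function ends with a call to the *other* one, and because every call is in the last position, the BEAM keeps stack usage constant even for very large inputs.

```elixir
defmodule Parity do
  # Mutually recursive: even?/1 tail-calls odd?/1 and vice versa.
  # Last-call optimisation means neither call grows the stack.
  def even?(0), do: true
  def even?(n) when n > 0, do: odd?(n - 1)

  def odd?(0), do: false
  def odd?(n) when n > 0, do: even?(n - 1)
end
```

Calling `Parity.even?(1_000_000)` bounces between the two functions a million times without a stack overflow, which is exactly what plain self-recursive TCO schemes cannot handle.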


So this gives more flexibility.

Reading up a bit more, here’s my takeaway:

Software needs mutability in order to do things. The consideration for the programmer is which levels/layers of the software should and should not be mutable.

e.g. Elixir uses immutability but any database it uses will, by its function, be mutable.

Read “Immutability Changes Everything”?



My takeaway from this article is that increases in storage, scale, and distribution create issues which are dealt with by making more layers of an app immutable, including the database via versioning and the hardware via wear leveling.

There are trade-offs. But it is good to be able to make well-informed choices.


Already read about “functional core, imperative shell”/“onion architecture” and whatever other names it has (hexagonal, etc.)


The other thing worth pointing out is that, because we know data is immutable, the language compiler can perform a bunch of optimizations, such as storing constant values in a literal pool so you don’t allocate new memory on every use, being able to point to existing data types as we know they won’t mutate, etc.
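One of those optimizations is visible at the language level as structural sharing: a minimal sketch (variable names are my own) of how “updating” an immutable map produces a new value without the original being copied or changed.

```elixir
# Because the runtime knows data is immutable, an "update" can share
# structure with the original value instead of copying everything.
original = %{a: 1, b: 2}
updated = %{original | a: 10}

# `original` is untouched; `updated` can safely point at the
# unchanged parts of `original`, since neither will ever mutate.
```

The same guarantee is what lets the compiler keep literals like `%{a: 1, b: 2}` in a constant pool rather than re-allocating them on every use.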