Memory cost of temporary variables

While playing with and learning LiveView, I’m thinking a lot about the memory cost of the small choices we make. I’ve never thought much about it before, but since LiveView is stateful, several decisions that are each a bit more costly, memory-wise, can add up and make us inefficient when scaling up.
For example, what is the cost of using several temporary variables instead of doing it all in one step?

def mount(_params, _session, socket) do
  socket = assign(socket, assign_a: "Looooooooong string")
  socket = assign(socket, assign_b: "Very loooooooooong string")
  ...
  socket = assign(socket, assign_N: "Very, very, very loooooooooong string")
  {:ok, socket}
end

versus:

def mount(_params, _session, socket) do
  {:ok,
   assign(socket,
     assign_a: "Looooooooong string",
     ...,
     assign_N: "Very, very, very loooooooooong string"
   )}
end

I guess the cost shouldn’t be much, as I think variables are just pointers, but nevertheless, memory-wise, what would be a good or a bad approach?

If the binaries (the strings with the " delimiters) grow large and you have cycles between them, they may end up never being garbage-collected. But that’s not what happens most of the time, anyway.

Haven’t used LiveView in a while (basically tried it 2-3 times shortly after its release), but you might consider returning iolists instead of strings, if LV allows it?
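As a sketch of the iolist idea (independent of whether LiveView accepts iodata here): an iolist nests existing binaries into a list instead of concatenating them, so nothing is copied until you actually need one flat binary.

```elixir
name = "world"

# the pieces stay separate; no concatenation (and no copying) happens here
greeting = ["Hello, ", name, "!"]

# flatten to a single binary only when a plain string is required
IO.iodata_to_binary(greeting)
#=> "Hello, world!"
```

Many BEAM APIs (sockets, `IO.write/2`, etc.) accept iodata directly, so the final flattening step can often be skipped entirely.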


The repeated calls to assign, in each of which socket is rebound to a new value, will indeed end up looking in Erlang like socket1 = assign(socket0, other, params); socket2 = assign(socket1, other, params) (note that I am still using Elixir syntax here; in the actual Erlang, variables would of course start with uppercase letters).

But once this is further compiled down to BEAM bytecode, these temporary variables no longer live beyond the only place they are used. I.e. the result of one call to assign is just immediately passed to the next call, exactly as if you’d written assign(assign(assign(socket0, ...), ...), ...) or indeed socket0 |> assign(...) |> assign(...) |> assign(...).

If you want to be 100% sure what really happens, you should of course take a look yourself by decompiling the bytecode.
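A sketch of how you could take that look yourself: compile a throwaway module (the `DisasmDemo` name and `Map.put/3` stand-in for `assign/2` are illustrative, not from the original post) and feed its bytecode to `:beam_disasm.file/1` from OTP’s `:compiler` application, which also accepts the in-memory BEAM binary that `defmodule` returns.

```elixir
# defmodule returns {:module, name, beam_binary, block_result}
{:module, mod, beam, _} =
  defmodule DisasmDemo do
    # Map.put/3 used as a stand-in for LiveView's assign/2
    def run(socket) do
      socket = Map.put(socket, :a, 1)
      socket = Map.put(socket, :b, 2)
      socket
    end
  end

# disassemble the bytecode and print the instruction list per function
{:beam_file, ^mod, _exports, _attrs, _compile_info, code} = :beam_disasm.file(beam)
IO.inspect(code, limit: :infinity)
```

In the disassembly you can check whether the intermediate `socket` bindings survive as separate registers or collapse into a straight chain of calls.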

My educated guess is that there will be no memory overhead between your two examples, and only a very minor time overhead in the first example due to more external function calls.


I actually wonder how hard it is to introduce cycles in bitstrings in Erlang. I’d expect it to be very difficult because of the immutable nature of most operations on the BEAM. Do you happen to know some examples which result in a cycle?


Oops, I accidentally pressed the delete button on my post. Let me try again.

There’s practically no additional memory used with either example. Erlang maps (and thus Elixir’s structs) are immutable data structures that use structural sharing, so it’s just a handful of pointers arranged in slightly different ways in each example. The actual strings do not get copied.
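A small sketch of that sharing, using the undocumented `:erts_debug.same/2` helper (which checks whether two terms are the same in-memory reference — a debugging aid, not a stable API):

```elixir
big = String.duplicate("x", 1_000)

# build one map step by step, and one in a single shot
step_by_step = %{} |> Map.put(:a, big) |> Map.put(:b, big)
one_shot     = Map.merge(%{}, %{a: big, b: big})

# both maps hold pointers to the same off-heap binary; no copy was made
:erts_debug.same(step_by_step.a, one_shot.b)
#=> true
```

Binaries larger than 64 bytes live off the process heap and are reference-counted, so putting one into a map (or many maps) only stores a pointer.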


No, but I heard reports of people’s applications gradually consuming more and more memory because certain large binaries weren’t being garbage-collected.

Ah, I see.

Yes, in the case where you are reading binaries from user input and storing them in memory, they might be around for a rather long time. This is even more of a problem when converting user input to atoms (cf. the documentation of String.to_atom and String.to_existing_atom), as atoms are never reclaimed.
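To illustrate the atom point: `String.to_existing_atom/1` raises instead of minting a new atom, so untrusted input cannot grow the never-reclaimed atom table.

```elixir
# safe: :ok already exists in the atom table
String.to_existing_atom("ok")
#=> :ok

# unsafe equivalent would be String.to_atom/1, which creates the atom;
# to_existing_atom raises ArgumentError for unknown names instead:
# String.to_existing_atom("surely_not_defined_anywhere")
#=> ** (ArgumentError) ...
```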

From what I hear, one of the biggest ways the failure to GC binaries happens is when you build ETS tables with binary data snipped out of BIG JSONs (think megabytes). You think you’re only storing a small piece, but it’s a reference into something that is refcounted, and the source never dies.
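A small sketch of that trap: a slice taken with `binary_part/3` is a sub-binary that keeps the whole source alive, and `:binary.copy/1` is the standard way to break the reference before stashing the slice somewhere long-lived like ETS.

```elixir
big = String.duplicate("a", 1_000_000)

# binary_part/3 returns a sub-binary: 10 visible bytes, but it pins
# the entire 1 MB source binary until the slice itself is collected
slice = binary_part(big, 0, 10)
:binary.referenced_byte_size(slice)
#=> 1_000_000

# :binary.copy/1 makes a standalone 10-byte copy, releasing the reference
safe = :binary.copy(slice)
:binary.referenced_byte_size(safe)
#=> 10
```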

The next issue is binary concatenation of big binaries (e.g. base64-encoded binary content stuffed into XML, HTML, or JSON) where you keep appending more and more (even small bits) onto them… These might still be transient, but at volume they add up in extreme ways.
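A sketch of the alternative: instead of growing one binary with `<>`, accumulate iodata and flatten once at the end.

```elixir
chunks = List.duplicate("chunk", 1_000)

# naive: each <> may copy the whole accumulator built so far
concatenated = Enum.reduce(chunks, "", fn c, acc -> acc <> c end)

# iodata: appending just builds a nested list (cheap per append);
# convert to a single binary once, at the very end
iodata = Enum.reduce(chunks, [], fn c, acc -> [acc, c] end)

IO.iodata_to_binary(iodata) == concatenated
#=> true
```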
