Anyone who wants to speculate about this tweet from José

I just saw this tweet passing by and I have no idea what it’s hinting at, but it looks awesome :slight_smile:

So does anyone have an idea? I know we should probably just wait for more information, but speculation can be fun.

My first thought was about the JIT compiler, but the speedup seems too big for that. Anyone with a better idea?


It says benchmarking binaries vs. “this new thing” so maybe a new way of building large strings? E.g. some native string builder type or such?

But then I’m not really sure what kind of binary operation would take 200ms…


Absolutely curious about this new thing.


Apparently it’s Elixir on JVM.


Well … if we are talking about José Valim, then I would not be surprised if he announced successful contact with aliens using his new Elixir software, or new hardware like a CPU with a billion tiny cores designed for Erlang / Elixir. :smiling_imp:

But seriously … @dominicletz may be close, but I’m not really sure that we are talking about a new type. Note that folks are already confused by Erlang strings (charlists) vs. Elixir strings. I guess it’s something closer to binaries, or something outside Elixir core, as I’m not sure José Valim would like to see a new string-like type in Elixir core …

The JIT is not the first and won’t be the last thing to optimize code … I would not be surprised if the Erlang core team had a number of speed improvements lined up.

That said, there are 3 important parts:

  1. this (…) thing - note that it’s a singular form
  2. (…) new (…) - so it’s not just a speed improvement
  3. comparing to binaries - at first I had bitstrings in mind, but they are not new

Maybe some parts of Elixir core are going to be rewritten without API changes, but that looks too beautiful to me … It may be a new type, but in Erlang core. Elixir could use it for bigger strings, however … that would only be possible for compilation or file operations, which does not really convince me …

For me it may be a new nimble_* library which optimizes … let’s say reading/parsing/compiling files. I guess that the benchmark compares compilation of Phoenix templates based on the old binaries implementation vs. the new library … Not sure if that makes sense.
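For context on why a parsing library can be so fast on binaries: binary pattern matching is already one of the BEAM’s strengths, and it’s the technique libraries like NimbleParsec ultimately compile down to. A minimal stdlib-only sketch (the `MiniParse` module and date format are just an illustration, not anything from the tweet):

```elixir
# Minimal sketch: parsing a binary with pattern matching, the technique
# that nimble_* style parsing libraries generate under the hood.
defmodule MiniParse do
  # Split a "YYYY-MM-DD" binary into integer parts.
  def date(<<y::binary-size(4), ?-, m::binary-size(2), ?-, d::binary-size(2)>>) do
    {String.to_integer(y), String.to_integer(m), String.to_integer(d)}
  end
end

MiniParse.date("2021-03-09")
# => {2021, 3, 9}
```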


This sounds like you have more information about this. Why do you think it’s Elixir on the JVM?

So something that’s been swapped in, but just not strings?


I think that was just a joke. :smiley:

I’m worried that José Valim made a perfect trap for us. :smiling_imp:

I have no idea if by strings he means only binaries or all bitstrings. For those who don’t understand:

'erlang string' # charlist
"elixir string" # binary and bitstring
<<1::3>> # bitstring

iex> is_binary('foo')
false
iex> is_bitstring('foo')
false
iex> is_list('foo')
true

iex> is_binary("foo")
true
iex> is_bitstring("foo")
true
iex> is_list("foo")
false

iex> is_binary(<<1::3>>)
false
iex> is_bitstring(<<1::3>>)
true
iex> is_list(<<1::3>>)
false

However, we know something new:

  1. This is not about JIT - yeah, one question less and 10 new questions! :smiling_imp:
  2. It can be swapped in cleanly - which means no breaking changes!
  3. We have 3 parts:
    a) “alternate implementation for handling the binary type in Elixir”
    b) “strings get lots faster”
    c) “not string related”

Because of the word just (which was not about the swapped in part), we may assume that it’s not about a new type.

From this I can only think about:

  1. That may be a bitstring optimization. With that in mind, a) and c) are not in conflict if we replace the binary type part in a) with the bitstring type.

  2. However, if by strings he means all bitstrings, then it’s a completely different story! What things use charlists? I think a lot in core, such as … file operations! :smiling_imp:

iex> {:ok, device} = :file.open('file_name.extension', [:read])
iex> :file.read_line(device)
{:ok, 'file contents'}
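To make the contrast concrete: Erlang’s `:file.open/2` in its default (list) mode hands back charlists line by line, while Elixir’s own `File` module returns binaries. A small runnable sketch (the `demo.txt` file name is just for illustration):

```elixir
# Erlang's :file defaults to charlists; Elixir's File returns binaries.
File.write!("demo.txt", "file contents\n")

{:ok, "file contents\n"} = File.read("demo.txt")        # binary
{:ok, device} = :file.open('demo.txt', [:read])
{:ok, 'file contents\n'} = :file.read_line(device)      # charlist
:ok = :file.close(device)

File.rm!("demo.txt")
```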

Please note that a new library does not break Elixir’s API, so it can be cleanly swapped in as well, especially if we are talking about an example like the earlier one about optimizing Phoenix templates - you would notice only one extra dependency and that’s all!

After this tweet I still think about:

  1. Optimization for lists (especially charlists) or bitstrings. Most probably it would happen in Erlang and would not affect Elixir’s API.

  2. A new nimble_* library, as guessed previously - if we are talking about a new thing which can be cleanly swapped in, I can’t think of anything other than optimizations which do not introduce anything new.


It already exists and is called iolist(), with some BIF functions to turn it into a raw binary.
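For anyone following along, a minimal example of the iolist approach: appends build a nested list (no copying of the accumulated data), and a single BIF call collapses it into a raw binary at the end.

```elixir
# Build a large "string" as an iolist (nested list of binaries),
# then collapse it once via IO.iodata_to_binary/1
# (which wraps the erlang:iolist_to_binary/1 BIF).
parts = Enum.reduce(1..3, [], fn i, acc -> [acc, Integer.to_string(i), ","] end)

IO.iodata_to_binary(parts)
# => "1,2,3,"
```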


Just another interesting message was sent!

Ok, so I was pretty close, as I almost hit the target (I said Phoenix - actually it’s Phoenix LiveView), but I was thinking about something “outside” Phoenix (like an Erlang core optimization or a standalone nimble_* library).

Anyway … did you read it? Snapshots? I honestly thought that LiveView updates were already incredibly fast and I did not even see a way to optimize them, so that’s why I was thinking about “normal” templates.


The diff payloads for repetitive components are still large (e.g. recursively calling a bunch of functions): each function call produces a key (a number), so just 100 calls could cost hundreds of kilobytes, and large numbers of cids add overhead on the client side (toString) before handing over to morphdom.

Diffing lists is also hard.

~E"" is very fast, faster than ~L"" as an end-to-end patch process for those use cases.

The snapshots are interesting! (I have no idea what they are.) Possibly a memory consumption improvement, or diffs in a binary format with less overhead. I’d love to keep the live routing stuff, pushEvent, the push_event API, and patch data with JavaScript string literal templates (it’s even the same technique as leex, that is: a list of static parts and a list of dynamic parts).
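The static/dynamic split mentioned above can be modeled in a few lines. This is a hypothetical illustration, not LiveView internals: the statics never change after the first render, so diffs only need to resend the dynamics that changed.

```elixir
# Hypothetical model of a template split into static and dynamic parts.
# Statics are fixed at compile time; only dynamics ever need re-sending.
statics = ["<p>Hello, ", "! You have ", " messages.</p>"]
dynamics = ["Jane", "3"]

rendered =
  statics
  |> Enum.zip(dynamics ++ [""])       # pad dynamics to interleave evenly
  |> Enum.map(fn {s, d} -> [s, d] end)
  |> IO.iodata_to_binary()

# => "<p>Hello, Jane! You have 3 messages.</p>"
```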


This is a great idea. I don’t mean the tweet, I mean we need our own conspiracy theory section. :laughing:


Yeah, I was looking for that when I started this post :grinning:


I am enjoying it too! It may even backfire and convince me to hold these trade secrets for longer to see the speculations…

Just kidding. We will definitely let everyone know once it is ready for sharing.

PS: the LiveView snapshots are not related to the benchmarks, two different things. :slight_smile:


My guess was “mutable binaries”, but you couldn’t really call them binaries; this would be a nice statefulness escape hatch (à la ets) for things like ML (tensor/matrix multiplications) … positional indexing and replacement for f32 and f64, maybe i64 values too; i8, bfloat16, or “tensorfloat32” if you want to get really fancy.
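Worth noting that the BEAM already ships a small mutable-array escape hatch along these lines: Erlang’s `:atomics` module (OTP 21.2+). It only holds 64-bit integers, not the float types imagined above, but it shows what positional indexing and in-place replacement look like today:

```elixir
# :atomics gives a fixed-size, mutable, 1-indexed array of 64-bit integers,
# shared safely between processes - an existing analogue of the idea above.
ref = :atomics.new(3, signed: true)  # 3 mutable slots, signed 64-bit
:atomics.put(ref, 1, 42)             # positional replacement (1-based index)
:atomics.add(ref, 1, -2)             # in-place arithmetic

:atomics.get(ref, 1)
# => 40
```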


So 2 things to speculate about. Nice to know. Maybe we can pry even more information out by just speculating :slight_smile:

I think you’re on the right track with regards to tensors/Tensorflow. I’ll submit two recent erlang-questions mailing list posts by José as evidence :grin:

Implications of setting SIGCHLD in relation to NIFs

I am working on Tensorflow bindings and, at some point, Tensorflow forks a
child process to invoke a separate program. Unfortunately, when running
inside the Erlang VM, Tensorflow fails when calling waitpid, in exactly
this line…


Are NIF resources meant to be destroyed immediately on GC?

We are working on some code that allocates large chunks of memory inside a
NIF and ties them to a resource (using enif_alloc_resource +
enif_make_resource). While running some tests, I noticed that we were
holding onto these resources for longer than we wanted to, so we have added
calls to erlang:garbage_collect/1. In a nutshell, the code looks like this…



Totally makes sense. Two separate things. Like aliens and bigfoot.

Unless bigfoot is actually an alien /mystery intensifies/.


Whatever it is, my body is ready :smiley:


@josevalim please have all the fun you want, but eventually, maybe after release, you will tell us, ok? :wink:
