Well … if we are talking about José Valim, then I would not be surprised if he announced successful contact with aliens using his new Elixir software, or new hardware like a CPU with a billion tiny cores designed for Erlang / Elixir.
But seriously … @dominicletz may be close, but I’m not really sure that we are talking about a new type. Note that folks are already confused by the Erlang string (charlist) vs the Elixir string. I guess it’s something closer to binaries, or something outside Elixir core, as I’m not sure José Valim would like to see a new string-like type in Elixir core …
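To make the charlist vs string confusion concrete, here is a minimal illustration you can paste into iex (nothing here is specific to the announcement, it is just the standard distinction):

```elixir
# An Erlang "string" is a charlist: a list of Unicode codepoints.
# An Elixir string is a UTF-8 encoded binary.
charlist = ~c"hello"
string = "hello"

is_list(charlist)              # => true
is_binary(string)              # => true
to_string(charlist) == string  # => true
```

Both print as something readable, but they are entirely different data structures underneath, which is exactly why a third string-like type would add to the confusion.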
JIT is not the first and not the last thing that could optimize code … I would not be surprised if the Erlang core team had a number of speed improvements in progress.
That said, there are 3 important parts:
this (…) thing - note that it’s a singular form
(…) new (…) - so it’s not just a speed improvement
comparing to binaries - at first I had bitstrings in mind, but they are not new
Maybe some parts of Elixir core are going to be rewritten without API changes, but that looks too good to be true to me … It may be a new type, but in Erlang core. Elixir could use it for bigger strings, however … that would only be possible for compilation or file operations, which does not really convince me …
For me it may be a new nimble_* library which optimizes … let’s say reading/parsing/compiling files. I guess the benchmark compares compilation of Phoenix templates based on the old binary implementation against the new library … Not sure if that makes sense.
This is not about JIT - yeah, one question fewer and 10 new questions!
It can be swapped in cleanly - which means no breaking changes!
We have 3 parts:
a) “alternate implementation for handling the binary type in Elixir”
b) “strings get lots faster”
c) “not string related”
Because of the word “just”, which was not about the “swapped in” part, we may assume it’s not about a new type.
From this I can only think about:
That may be a bitstring optimization. With that in mind, a) and c) are not in conflict if we replace the binary type part in a) with the bitstring type.
However, if by strings he means all bitstrings, then it’s a completely different story! What things use charlists? I think a lot in core, such as … file operations!
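For reference, the binary vs bitstring distinction in play here: every binary is a bitstring whose size in bits is divisible by 8, but not every bitstring is a binary. A minimal illustration:

```elixir
# A bitstring is any sequence of bits; a binary is a bitstring
# whose total size in bits is a multiple of 8.
bits = <<3::size(5)>>  # 5 bits: a bitstring, but not a binary
bin  = <<3>>           # 8 bits: both a bitstring and a binary

is_bitstring(bits)  # => true
is_binary(bits)     # => false
is_binary(bin)      # => true
bit_size(bits)      # => 5
byte_size(bin)      # => 1
```

So an optimization scoped to “binaries” would not necessarily cover arbitrary bitstrings, and vice versa.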
Please note that a new library does not break Elixir’s API either, so it can be cleanly swapped in as well, especially if we are talking about an example like the one above about optimizing Phoenix templates - you would notice only one extra dependency and that’s all!
After this tweet I still think about:
Optimization for lists (especially charlists) or bitstrings. Most probably it would happen in Erlang and would not affect Elixir’s API.
a new nimble_* library, as guessed previously - if we are talking about a new thing which can be cleanly swapped in, I can’t think of anything else except optimizations which do not introduce anything new.
Ok, so I was pretty close, as I almost hit the target (I said Phoenix - actually it was Phoenix LiveView), but I was thinking about something “outside” Phoenix (like an Erlang core optimization or a standalone nimble_* library).
Anyway … did you read it? Snapshots? I honestly thought that LiveView updates were already incredibly fast and I did not even see a way to optimize them, so that’s why I was thinking about “normal” templates.
The diff payloads for repetitive components are still large (e.g. recursively calling a bunch of functions): each function call produces a key (the number), so just 100 calls could cost hundreds of kilobytes, and large numbers of cids add toString overhead on the client side (https://github.com/phoenixframework/phoenix_live_view/blob/master/assets/js/phoenix_live_view.js#L449) before handing over to morphdom.
~E"" is very fast, faster than ~L"" in the end-to-end patch process for those use cases.
The snapshots are interesting! (I have no idea what they are.) Possibly a memory consumption improvement, or diffs in a binary format with less overhead. I’d love to keep the live routing stuff, pushEvent, the push_event API, and patch data with JavaScript string literal templates (it’s even the same technique as leex, that is: a list of static parts and a list of dynamic parts).
My guess was “mutable binaries”, but you couldn’t really call them binaries; this would be a nice statefulness escape hatch (à la ets) for things like ML (tensor/matrix multiplications) … positional indexing and replacement for f32 and f64 values, maybe i64 too. i8, bfloat16, or “tensorfloat32 (https://blogs.nvidia.com/blog/2020/05/14/tensorfloat-32-precision-format/)” if you want to get really fancy.
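To show why a mutable buffer type would be a win: with today’s immutable binaries, “replacing” an f32 at a given index means rebuilding the binary around it. A hedged sketch using plain bit syntax (the `F32Buf` module and its function names are made up for illustration, not any announced API):

```elixir
defmodule F32Buf do
  # Read the 32-bit float at index i (copies nothing, just pattern matches).
  def get(bin, i) do
    offset = i * 4
    <<_::binary-size(offset), value::float-32, _::binary>> = bin
    value
  end

  # "Replace" the 32-bit float at index i by building a new binary.
  # This copy is exactly the cost a mutable buffer would avoid.
  def put(bin, i, value) do
    offset = i * 4
    <<head::binary-size(offset), _::float-32, rest::binary>> = bin
    <<head::binary, value::float-32, rest::binary>>
  end
end

buf = <<1.0::float-32, 2.0::float-32, 3.0::float-32>>
F32Buf.get(buf, 1)                        # => 2.0
buf |> F32Buf.put(1, 9.5) |> F32Buf.get(1)  # => 9.5
```

Doing this in a tight loop over a large tensor means one full copy per write, which is why an ets-like mutable escape hatch sounds attractive for ML workloads.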
I think you’re on the right track with regard to tensors/Tensorflow. I’ll submit two recent erlang-questions mailing list posts by José as evidence:
Implications of setting SIGCHLD in relation to NIFs
I am working on Tensorflow bindings and, at some point, Tensorflow forks a
child process to invoke a separate program. Unfortunately, when running
inside the Erlang VM, Tensorflow fails when calling waitpid, in exactly
this line…
Are NIF resources meant to be destroyed immediately on GC?
We are working on some code that allocates large chunks of memory inside a
NIF and ties them to a resource (using enif_alloc_resource +
enif_make_resource). While running some tests, I noticed that we were
holding onto these resources for longer than we wanted to, so we have added
calls to erlang:garbage_collect/1. In a nutshell, the code looks like this…