Google Chrome opens every tab and extension in a separate process so the browser doesn’t crash if a tab crashes. Is it similar to Elixir? Can Elixir provide this? What is the difference?
Thanks!
tl;dr -> Yes, the idea is the same; Elixir provides it through its language-level support for processes, message passing and Supervisors.
Longer:
Conceptually they are indeed similar: each task (a web page in Chrome; a process in Elixir) runs in an isolated execution context, so (as you observed) failure in one of those tasks (e.g. a browser tab in Chrome; a process in Elixir) does not impact the stability of other tasks (e.g. other tabs in Chrome; other processes in your Elixir application) or the host application (e.g. Chrome itself; your Elixir app).
With Elixir, each code path you put into a process is isolated from the rest of the application. This is easy to do because Elixir provides both language-level tools and fantastic abstractions in the standard library, such as GenServer.
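To make that concrete, here is a minimal sketch of that isolation using plain `spawn/1` (an unlinked spawn, so the crash is not propagated to the caller):

```elixir
# The spawned process raises and dies; the caller is unaffected
# because spawn/1 creates an unlinked process.
pid = spawn(fn -> raise "boom" end)

# Give the spawned process a moment to crash ...
Process.sleep(100)

# ... then observe that it is gone while we are still running.
IO.inspect(Process.alive?(pid))   # false
IO.puts("caller still running")
```

The BEAM logs a crash report for the dead process, but nothing else in the application is affected. (With `spawn_link/1` the crash *would* propagate, which is exactly what Supervisors build on.)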
There are caveats to the multi-process approach, as with anything: it is possible for one browser tab to use all the memory on your system and bring it to its knees that way; and it is possible for one Elixir process to end up with a full mailbox, or to create a deadlock through mutual synchronous calls with another process, and as a result stall other processes that may be in use … but these are not deal breakers in most cases. Just something to be aware of as you go, and certainly things you don’t need to think about as you are starting out, or indeed in most application code you’ll end up writing.
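The “full mailbox” caveat is easy to demonstrate: `send/2` never blocks, so messages sent to a process that never calls `receive` simply accumulate in its queue. A small sketch:

```elixir
# A process that never receives: its mailbox only ever grows.
slow = spawn(fn -> Process.sleep(:infinity) end)

# send/2 never blocks, so nothing stops us piling messages up.
for i <- 1..10_000, do: send(slow, {:work, i})

{:message_queue_len, len} = Process.info(slow, :message_queue_len)
IO.puts(len)   # 10000
```

In a real system you would watch `:message_queue_len` (e.g. via observer) to catch a process that cannot keep up with its producers.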
That said … past the concept, the implementations of the two are completely different. Chrome uses operating system processes: these are big, expensive, and Chrome has very little ability to see what is going on inside of them. This is why there is an IPC system between the web-page processes and the host browser app which reaches right down into the web render stack. Not overly pretty, and very limiting in what Chrome can do, because the kernel is the real owner of things there.
In contrast, with Elixir running on the BEAM virtual machine, processes are very light (~300 words of memory overhead per process, iirc) and very fast to start and stop (this will never be a limiter in your applications), and the VM maintains oversight of these processes, so it is very easy to see what is happening in your application as a whole with tools like observer. Cleanup, and even recovery from failure, is made very easy since all the processes live inside the VM, and Elixir provides great tools for supervision, etc.
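You can check both claims from an `iex` shell: `Process.info/2` reports a fresh process’s footprint, and `:timer.tc/1` shows how cheap spawning is. A rough sketch (exact numbers vary by OTP release and platform):

```elixir
# Footprint of a freshly spawned, idle process.
pid = spawn(fn -> Process.sleep(:infinity) end)
{:memory, bytes} = Process.info(pid, :memory)
IO.puts("fresh process: #{bytes} bytes")   # a few KB on a 64-bit VM

# Time to spawn 10,000 trivial processes.
{micros, _} =
  :timer.tc(fn ->
    for _ <- 1..10_000, do: spawn(fn -> :ok end)
  end)

IO.puts("10k spawns in #{micros} µs")
```

Compare that with the megabytes and milliseconds an OS process costs, and the difference in approach becomes obvious.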
HTH.
Wow, really good answer! Thank you very much!
I think a good quote which describes our thinking when developing and implementing Erlang is by Mike Williams:
Three properties of a programming language are central to the efficient operation of a concurrent language or operating system. These are: 1) the time to create a process, 2) the time to perform a context switch between two different processes, and 3) the time to copy a message between two processes.
The performance of any highly-concurrent system is dominated by these three times.
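Two of those three times are easy to probe from an Elixir shell with `:timer.tc/1`: process creation, and a message round trip (one copy in each direction). A rough sketch, not a proper benchmark:

```elixir
# Time to create one process that waits for a ping.
{create_us, pid} =
  :timer.tc(fn ->
    spawn(fn ->
      receive do
        {:ping, from} -> send(from, :pong)
      end
    end)
  end)

# Time for a send + receive round trip with that process.
{roundtrip_us, :pong} =
  :timer.tc(fn ->
    send(pid, {:ping, self()})

    receive do
      :pong -> :pong
    end
  end)

IO.puts("create: #{create_us} µs, round trip: #{roundtrip_us} µs")
```

On typical hardware both come in at a few microseconds, which is why these three times rarely dominate a BEAM application.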
That’s very insightful indeed … thanks for sharing that. The small memory overhead required for an individual process … do you know if that helps at all with the BEAM’s version of context switching? I’ve always wondered about how the BEAM does in terms of cache misses with the preemptive scheduling between processes …
For me, it certainly grants the freedom to model actions+state that belong together as processes without having to worry about memory overhead, even when some of those applications are handling 50k+ concurrent connections (each of which results in a few processes) on rather modest hardware. But the impact on performance is something I’ve long wondered about; I’ve never had the time / energy to try and pick out those numbers, and I haven’t come across anything online (though admittedly I haven’t done anything like an exhaustive search …)
I’m unsure of your language background, so I’ll use a pseudo-C++-like language. Basically a ‘process’ on the EVM is like this:
struct EVMProcess {
    // Pseudo-code: a callback that runs one step of the process and
    // returns the next callback to run. (The self-referential typedef
    // is shorthand; real C++ would need an indirection here.)
    typedef std::function<ProcCall *(EVMProcess *)> ProcCall;

    word *stack;
    ProcCall nextCallback;
    EVMAtomicList<EVMMsg> mailbox;
    /* Other stuff */
};
And basically a call in Elixir like this:

def blah(s) do
  String.to_integer(s) + 42
end

becomes this in C++-like pseudo-code again:
EVMProcess::CallProc *blah_2(EVMProcess *proc) {
    EVMValue temp_1 = proc->popValue();
    EVMProcess::CallProc *next = proc->popReturnFunc();
    proc->pushValue(temp_1.add_value(42));
    return next;
}

EVMProcess::CallProc *blah_1(EVMProcess *proc) {
    EVMValue s = proc->popValue();
    EVMModule *String = EVMModule::get(EVMValue::Atom("String"));
    proc->pushFuncCall(&blah_2);
    return EVMCall::perform(String->call(EVMValue::Atom("to_integer"), EVMValue::ListOf(s)));
}
Except it is actually bytecode, with fewer function calls and more direct instructions, and very optimized and such. Think of it as inlined trampolining with direct gotos instead of slower function calls, and more and more and more. This is why a ‘reduction’ (a process may get ~1000 ‘reductions’ before it is pre-emptively switched out) only happens at function calls, but that is no problem since there is no way to loop on the EVM without a function call anyway. ^.^
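You can watch that reduction counter from inside a process with `Process.info/2`. Since every iteration of `Enum.each/2` is a function call (the BEAM loops by recursion), even a small loop moves it noticeably:

```elixir
{:reductions, before} = Process.info(self(), :reductions)

# Enum.each/2 loops by recursion, i.e. by function calls,
# so every iteration is charged as reductions.
Enum.each(1..1_000, fn _ -> :ok end)

{:reductions, later} = Process.info(self(), :reductions)
IO.puts("cost: #{later - before} reductions")
```

Once a process has burned through its reduction budget, the scheduler suspends it and runs the next one, which is what makes the preemption feel fair without needing OS-level timer interrupts.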
Don’t say the “EVM”, I go ballistic when I hear that. It is the BEAM.
Lol! I know, but to be technical the BEAM is just the current incarnation of the Erlang virtual machine; there have been others, both in the past and on alternative platforms (does the Java one still exist?). It is useful to have a name that refers to all of them rather than to a specific one. ^.^
But yeah, I wish the BEAM would become the uniform name; ‘EVM’ just seems to be taken up better by people from the ‘outside’ because it sounds like the JVM (which does kind of send shivers down my back…). >.>
Yes, processes are very light and context switching is fast. Since the BEAM is a VM, it manages everything about an (Erlang) process. Memory management when context switching is easy, as a process basically has one block of memory with the stack at one end and the heap at the other.
At the core of the BEAM is the concept of a scheduler, which is a sort of semi-autonomous VM running in an OS thread. By default, when you start a node, one scheduler is started per core; other threads are also started, for example for file I/O. You can control how many you want. The schedulers of course cooperate with each other, as they are all running the same node, but they try to be as independent as possible to minimise locking and synchronisation. It is the schedulers working together which handle the load balancing.
This means that context switching a process is local to one scheduler.
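The scheduler count is visible (and tunable) at run time; by default it matches your core count:

```elixir
# One scheduler per core by default; both values are queryable.
IO.puts(System.schedulers())          # schedulers configured at boot
IO.puts(System.schedulers_online())   # schedulers actually running
```

(`System.schedulers_online/0` can be lowered at run time via `:erlang.system_flag(:schedulers_online, n)` if you want to leave cores free for other work.)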
The BEAM is all about implementing Erlang, and most of the language features have support in it. For example, immutable data, processes, message passing, error handling and code handling are all supported in the BEAM.
I have given some talks on the BEAM internals, the latest at the EUC 2014, “Hitch-hikers Tour of the BEAM”. I see the video is there but not the slides. I can probably find them but can I include them in a post?
Yes, I’m familiar with the memory management strategy and implementation in the BEAM. There are some great presentations from past Erlang conferences on the topic that I’ve really enjoyed …
My (admittedly mostly academic) wondering is about how CPU caches (L1 / L2) behave with the context switching, e.g. how many cache misses are incurred in a typical workload on the BEAM due to context switching. Since that switching can happen at various points in the currently executing code path, I wonder what measurable effect, if any, it might have on performance. e.g. if a process is currently processing a list of a few thousand elements, and it is preempted to allow another process execution time on that scheduler thread, and then it is switched back to, are the L1/L2 caches blown? What sort of delay is incurred refetching data from main memory?
Performance is not my main concern when choosing a BEAM language … but I still do think about it from time to time
That depends mainly on the process you are switching to. If you keep your whole memory small enough to stay fully in the CPU cache, you are good. You can even colocate multiple of those. You may not even need to load all the memory into the cache; it is highly dependent on a loooot of stuff, and on your memory manager. IIRC these days the BEAM uses a lot of mmap when it can, so that would be handled at an even lower level.
Hard to answer, as it is highly dependent on your load.
I think Robert means it’s OK to call it the BEAM or the Erlang Virtual Machine (but not shortened to the ‘EVM’). I’m sure I’ve read this a few times anyway. I’m sure he’ll correct me if I’m wrong
Personally I like the BEAM - it’s so cool