There is a word 'Vulcan' in English, but in this case the spec is named Vulkan. Yes, it is weird. ^.^;
They've gotten significantly better in recent years actually.
The sheer amount of data that has to be sent to the GPU could not, as far as I can see, be done efficiently via the BEAM's message passing. Perhaps updating a rendered scene's state in native code via the BEAM would work, but the act of rendering itself, and all the synchronization points ('flushes' in GPU parlance) that have to be serialized, would never let it perform anywhere close to native code.
Maaaaybe if you write NIFs and program them very carefully it would work somewhat decently. But even then, keeping the rendered 'state' in native code and just sending over the changes would be significantly more efficient.
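A toy sketch of that delta idea (Python here just for brevity; the names `diff_scene` and `apply_delta` are made up for illustration): instead of shipping the whole scene state across the boundary every frame, compute only the entries that changed and send those.

```python
# Toy sketch: ship only the changed parts of a scene state across a
# process boundary, instead of the full state every frame.
# Names (diff_scene, apply_delta) are hypothetical, for illustration only.

def diff_scene(old, new):
    """Return only the entries that changed (or appeared) since `old`."""
    return {k: v for k, v in new.items() if old.get(k) != v}

def apply_delta(state, delta):
    """What the native/render side would do with the incoming delta."""
    state.update(delta)
    return state

# Frame N state, held on the 'native' side:
render_state = {"player": (0, 0), "enemy1": (10, 5)}

# Game logic produced frame N+1; only the player moved:
new_state = {"player": (1, 0), "enemy1": (10, 5)}

delta = diff_scene(render_state, new_state)
# One entry crosses the boundary, not the whole world.
apply_delta(render_state, delta)
```

The point being: the per-message cost stays proportional to what changed, not to the size of the scene.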
This is one of those cases that is very CPU/Memory bound, which the BEAM is not suited for.
I honestly think that even '5 to 10 times faster in native code' is being kind; I'd really believe that, except in very simple demos, it will be a multiple-orders-of-magnitude difference, which is the difference between a smooth 60 fps and getting 2 fps.
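To put numbers on that frame-budget claim (simple arithmetic, nothing BEAM-specific; the "fills the budget exactly" baseline is an assumption for the sake of the example):

```python
# Frame-budget arithmetic: what an order-of-magnitude slowdown does to fps.
target_fps = 60
frame_budget_ms = 1000 / target_fps          # ~16.67 ms per frame

# Assume native code fills that budget exactly; a 100x (two orders of
# magnitude) slowdown turns each frame into:
slow_frame_ms = frame_budget_ms * 100        # ~1667 ms per frame
slow_fps = 1000 / slow_frame_ms              # ~0.6 fps

# Even the 'kind' 5-10x estimate already breaks 60 fps:
kind_fps = 1000 / (frame_budget_ms * 10)     # ~6 fps
```

So at 100x slower you are well under 1 fps, and even the charitable 10x estimate lands you at roughly 6 fps.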
Network would be awesome on the BEAM, that is what it is practically designed for after all. ^.^
However, using those network packets to do things like, oh, handle physics processing and so forth, that would fall flat again.
If your GPU code is crashing then, depending on the quality of the drivers, you will either crash the entire OS or just take a significant speed hit as the rendering pipeline has to be flushed and cleared (I've seen such stalls routinely cost well over 1 second per frame, which drops you to <1 fps instantly). GPU coding has to be done right, or it is not so much a 'crashing' issue as the game becoming unplayable just because the hardware has to reset its state.
This is generally done once, so it does not matter where it is done.
This itself is fine on the BEAM, but what is done 'with' those packets may not be.
It's not just the player but the entire world that you have to simulate; this is hugely CPU bound and will not be done well on the BEAM except for absolutely trivial scenes.
This will not be done well on the BEAM. Draw calls to the GPU can be batched up in command buffers and so forth, but they have to be flushed at certain points, and those flushes are absolutely timing-critical (microsecond-critical to the hardware).
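The record-then-flush pattern described above, as a toy model (Python, with a made-up `CommandBuffer` class; real command buffers are driver/GPU objects, this only models the shape of the pattern):

```python
# Toy model of the batch-then-flush pattern: record draw calls cheaply,
# then submit them all at one synchronization point.
# CommandBuffer here is hypothetical; real ones live in the driver.

class CommandBuffer:
    def __init__(self):
        self.commands = []
        self.submitted = []

    def record(self, cmd):
        # Recording is cheap: just append, no GPU work happens yet.
        self.commands.append(cmd)

    def flush(self):
        # The expensive, timing-critical part: everything recorded so far
        # is handed off in one go, and the buffer is reset for reuse.
        self.submitted.extend(self.commands)
        n = len(self.commands)
        self.commands = []
        return n

cb = CommandBuffer()
for mesh in ["terrain", "player", "enemy"]:
    cb.record(("draw", mesh))
count = cb.flush()   # one submission covering all three draws
```

The win is that many cheap records amortize into one expensive submission; the problem for the BEAM is that the *timing* of that flush is what the hardware is picky about, and a preemptive scheduler gives you no microsecond-level control over when it happens.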
Eh, notifications and such are pretty trivial.
Some 'types', sure; for a card game it would not matter at all. But potentially even something as complex as a 2D shooter becomes infeasible unless the object count is kept extremely low.