Are cluster PCs possible with a BEAM-based OS?

I’m imagining a BEAM-based desktop OS built on Nerves, running on a cluster of Raspberry Pis. Would that let me increase processing power just by dropping in another Pi of any vintage?

It would be great to have a PC that never became obsolete. When the system gets sluggish, you could just add another generic compute module to your cluster. Would the compute modules even have to run the same processor? Could I have a cluster with both ARM and x86 processors all running the same desktop environment? What about a system that uses every available computer in the house when it needs more processing power? This one-computer-per-person model is broken. I should be able to run my desktop on any and all computers on my network. Is this a pipe dream, or does it sound even remotely feasible?

I think the hardest part of this is that, as far as I know, you have to explicitly target a remote process on a specific BEAM node when you send it a message. If a system could dynamically allocate processes across the cluster, then you could add new nodes and it would take advantage of them. I believe that would be fairly technically impressive to pull off in a general sense, though.
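To make that concrete: with plain distributed Erlang/Elixir you address a process on a specific, known node, roughly like this (the node names and registered names here are made up for illustration):

```elixir
# Connect to another node in the cluster (hypothetical node name).
Node.connect(:"pi2@192.168.1.42")

# Spawn a process on that specific node and remember its pid.
pid =
  Node.spawn(:"pi2@192.168.1.42", fn ->
    receive do
      {:work, task} -> IO.puts("#{inspect(task)} ran on #{Node.self()}")
    end
  end)

# Message it directly; the sender had to decide which node to use.
send(pid, {:work, :resize_image})

# The same applies to a locally registered name on a chosen node:
send({:render_server, :"pi2@192.168.1.42"}, {:work, :resize_image})
```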

Curious to hear from others here.

Ah, thank you, that does sound like a hard part. I was hoping there were already mechanisms for load-balancing processes across nodes in a cluster built into the BEAM.
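From what I understand there isn’t one built in: the VM gives you the primitives (Node.list/0, spawning on a remote node), but deciding where a process should run is left to the application. Purely as a sketch, a naive placement helper might look like this; it measures nothing, rebalances nothing, and doesn’t migrate running processes:

```elixir
defmodule NaivePlacement do
  @moduledoc """
  Sketch only: pick a connected node at random and run the work there.
  """

  # Run `fun` somewhere in the cluster, including the local node.
  def spawn_anywhere(fun) when is_function(fun, 0) do
    node = Enum.random([Node.self() | Node.list()])
    Node.spawn(node, fun)
  end
end

# Dropping another Pi into the cluster widens the pool automatically,
# because Node.list/0 returns every currently connected node.
NaivePlacement.spawn_anywhere(fn -> Enum.sum(1..10_000_000) end)
```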

What I really want is an OS that runs across all my hardware and data resources in the cloud and on the local network. Imagine a swarm of BEAM processes in supervision trees that move across the network to wherever needed resources are. On my virtual private server, the network is much better than at home, but here I have a graphics card and a printer. It would be grand to have a desktop that comes to you no matter if you’re on your Mac, your tablet, or a remote web browser. One seamless cloud/local environment with views based on the type of device used.

What kind of “desktop” environment would you be running? Nerves/Erlang/Elixir doesn’t have a desktop, so you’d have to build some sort of web-based thing. Just wondering what you’re trying to accomplish.

Sounds like what you really want is a clustered VNC setup, or a thin client type situation, like the old Citrix stuff.

This is a beautiful idea and I think it’s totally doable, but it would require rewriting an OS and a kernel. If I had taken OS theory in college I would be doing this now; then again, I probably wouldn’t have discovered Elixir.


Yeah, basically. I’d love to have big iron as my desktop. I was looking into the “blade workstation” concept, but it’s still just running Windows. I’d rather have one BEAM node per CPU, and have my workload dynamically move to whatever node has the resources it needs, be they CPU, network, data access, or whatever. Frames of the view would be assembled on the nodes closest to me and sent as a frame buffer, like RDP or VNC. The graphics card wouldn’t need to be on the viewing device; it could be a totally dumb client. They say never trust a computer when you don’t know where its brain is, but to me the ability to use whatever brain is available trumps that.
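As a very rough sketch of “use whatever brain is available”: poll each connected node for a cheap load signal and start the next job on whichever looks least busy. The :erpc call and memory probe are standard OTP, but memory is only a crude stand-in for “has spare resources”; real awareness of GPUs, data locality, or network would need much richer probes.

```elixir
defmodule LeastLoaded do
  # Ask every connected node for its total memory use and return the
  # node reporting the smallest figure. Illustrative, not a real scheduler.
  def pick_node do
    [Node.self() | Node.list()]
    |> Enum.map(fn node -> {node, :erpc.call(node, :erlang, :memory, [:total])} end)
    |> Enum.min_by(fn {_node, bytes} -> bytes end)
    |> elem(0)
  end
end

# Start the next chunk of work on whichever node currently looks least busy.
Node.spawn(LeastLoaded.pick_node(), fn -> Enum.sum(1..10_000_000) end)
```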

This would have to be a completely new desktop environment. I’m interested in a desktop that breaks the everything is a file metaphor. Something like Ted Nelson’s Xanadu with ZZStructures, bidirectional links, and transclusion. But the desktop could be anything. I just want a computer that can be cobbled together from random compute modules of any vintage, a desktop environment that runs on any computer available to me, and programs that can migrate to wherever they’re needed across the network.

This would be a really cool project but I believe it has enormous complexity, especially if you need it to be robust enough for general daily usage.

Perhaps a more feasible approach would be to first focus on the low-level stuff and provide a code base for someone else to build more abstractions on top of. Kind of like Plug and Phoenix (Jose began with Plug and Chris used it for Phoenix), or like how Linux started with the kernel code and some basic drivers, and then Gnome/KDE etc. came to be. All of this was written by different people, because frankly a lifetime may not be enough for a project of this magnitude.


“because frankly a lifetime may not be enough for a project of this magnitude.” You nailed my problem. I’m a fifty-two-year-old non-programmer who’s been homeless and sick with HIV for years. This is one of the problems I think about when I’m in bed and trying to distract myself from pain.

Today I was thinking about the layering of the OS. Should it be bare metal and implement its own device drivers, or should it be higher level, like Nerves, which runs on top of a Buildroot-based Linux system? I’d say let Linux be the low-level system, and let the OS be more like a desktop environment on the level of GNUstep, Gnome, or Android. That way you don’t lose any functionality of Linux as we bootstrap the next-generation environment, and it leaves open the possibility of replacing the Linux layer with something else later on. Relying on all the work done on Linux saves a few lifetimes at least. Linux is based on the “everything is a file” metaphor, but that doesn’t mean we can’t break that metaphor in our system. We just let Linux be Linux and start our dreaming at the level of the BEAM/OTP, which can run on top of what we already have.
