Does anyone know whether BEAM would work out of the box if I hypothetically ran it in a big.LITTLE environment (i.e. one with fast cores and slow cores)? Or would I need some customisation (like setting scheduler counts and core affinities)?

Maybe we could collect such information in this thread. :slight_smile:


Why wouldn’t it work out of the box?

If you’re looking to, like, develop on a RK3399 Chromebook (or the upcoming Pinebook Pro) — you don’t have to worry about anything. You can set affinity to the fast cores if your app really needs the performance. There might be a Linux scheduler that fills up the fast cores first to ensure max performance too. I use FreeBSD on my RK3399 board, so I just cpuset everything that needs speed (like compilation) to A72 cores.
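For reference, a pinning sketch. The erl flags (`+S`, `+sbt`) are real; the core IDs are an assumption, since which CPUs the A72s show up as varies by board (on the RK3399 they are usually CPUs 4 and 5):

```shell
# Linux: pin the whole VM to the big cores (core IDs are board-specific)
taskset -c 4,5 erl +S 2 +sbt db

# FreeBSD equivalent with cpuset
cpuset -l 4,5 erl +S 2 +sbt db
```

`+S 2` starts two schedulers to match the two pinned cores, and `+sbt db` binds schedulers to cores using the default bind type.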

If you’re looking to run such a system in serious production… don’t :slight_smile:

Well, I imagine it will work, but I was thinking more along the lines of: could my expensive processes get stuck on schedulers that are running on the slow cores, or does BEAM have any logic to migrate them?

What exactly would constitute an expensive process anyway? Whether a process is going to do 10k reductions or 1 million reductions over its lifetime doesn’t change the fact that after 2k reductions it will be scheduled out to something else.
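To illustrate the point, here's a toy round-robin sketch in Python (not BEAM internals — just a model of a fixed per-slice budget; the 2k figure mirrors the reduction quantum mentioned above):

```python
from collections import deque

QUANTUM = 2000  # reductions per slice, as in the post above

def run(procs):
    """procs: dict of name -> total reductions needed. Returns schedule order."""
    queue = deque(procs.items())
    order = []
    while queue:
        name, remaining = queue.popleft()
        order.append(name)
        remaining -= QUANTUM
        if remaining > 0:
            queue.append((name, remaining))  # preempted, re-queued at the back
    return order

# A 10k-reduction process and a 1M-reduction process interleave identically
# while both are alive; total lifetime cost doesn't buy longer turns.
print(run({"short": 10_000, "long": 1_000_000})[:6])
# → ['short', 'long', 'short', 'long', 'short', 'long']
```

However "expensive" a process is, it only ever gets one quantum per turn before the next process runs.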

The closest thing I think you could get to this would be to run two instances of the VM, one pinned to the small cores and another pinned to the large. Put them in the same cluster, and then distribute your workload accordingly.
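A sketch of that two-node setup, assuming Linux `taskset`, hypothetical core IDs (little = 0-3, big = 4-5), and made-up node names and cookie:

```shell
# Node pinned to the little cores, with one scheduler per core
taskset -c 0-3 erl -sname little -setcookie demo +S 4 -detached

# Node pinned to the big cores
taskset -c 4-5 erl -sname big -setcookie demo +S 2 -detached

# Then, from the little node's shell, join the cluster and push
# heavy work to the big node (mymod:heavy_work/0 is hypothetical):
#   net_adm:ping('big@myhost').
#   spawn('big@myhost', mymod, heavy_work, []).
```

Distribution gives you the migration policy the VM itself doesn't have: you decide which workloads spawn on which node.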


But if it keeps getting scheduled on the same scheduler, which runs on the same core, it may end up running “forever” on a slow core, wasting CPU potential.

It’s most likely a non-issue for me, but it’s an interesting thought, at least in my opinion.

The BEAM is not really designed to run some specific things quicker than others. All (in reality, most) processes get the same slice of computation and are then switched out for other processes. So for the BEAM as a whole there’s no difference in terms of “moving forward with computation” between CPUs of the same size and CPUs of different sizes: each can only do so many reductions per time slice. Differences in run queue sizes are already load-balanced between schedulers, since not every process actually uses all of its available reductions, so even though the bigger CPU is quicker, it should not run out of work, and vice versa. The only difference I can imagine here is that differently sized CPUs might result in more processes being moved between scheduler queues.
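If you wanted to experiment with how many schedulers participate, the VM does expose knobs for that (these flags and calls are real; the numbers are just illustrative):

```shell
# erl's +S Total:Online syntax: create 6 schedulers, start with 4 online
erl +S 6:4

# Then, from the Erlang shell:
#   erlang:system_info(schedulers_online).     % inspect the current count
#   erlang:system_flag(schedulers_online, 6).  % bring the rest online
```

Taking schedulers offline at runtime is a coarse way to shed work off cores without restarting the node.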