Recording and limiting resource usage of processes


In the same way some companies allow customers to rent compute/memory/bandwidth etc. using docker containers, would it be possible to allow for something similar, but using processes on the BEAM?

For example, a user could rent access to 5 processes, each with a specific quota for memory, bandwidth and compute time. In addition, customers would be able to scale the number of processes and their corresponding resource quotas on demand.

I imagine the main problem is recording and capping the resource usage of processes. Are there any other important considerations?

To my knowledge there are no capabilities within the BEAM for restricting the CPU or memory use of a process. If different users were “renting” access to different processes, you’d also have no way to prevent users from escaping the confines of their processes by spawning more of them. The BEAM wasn’t built to run untrusted code in an isolated fashion.
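That said, you can at least *record* (not cap) per-process usage with the standard `Process.info/2` call. A minimal sketch, where the spawned process is just a placeholder workload:

```elixir
# Sketch: sample a process's memory footprint and reduction count
# (reductions are the BEAM's rough unit of scheduled work).
# The spawned process is a made-up workload for illustration.
pid =
  spawn(fn ->
    Enum.each(1..100_000, fn _ -> :erlang.unique_integer() end)

    receive do
      :stop -> :ok
    end
  end)

# Give the process a moment to run before sampling.
Process.sleep(20)

[memory: mem, reductions: reds] = Process.info(pid, [:memory, :reductions])
IO.puts("memory: #{mem} bytes, reductions: #{reds}")

send(pid, :stop)
```

This only observes usage after the fact; it gives you no way to enforce a quota, which is the gap described above.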


Ok, let me clarify:

I would not allow arbitrary code execution, but only increase the resource allocation of certain user-facing tasks. All code would be considered safe and trusted.

For example say you had a webcrawler-as-a-service type of product, where users could define and host their webcrawlers. If a user rents more processes, their web crawler could run at an increased throughput.
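Roughly what I have in mind, as a sketch (the `rented_processes` quota and the `fetch` function are made-up placeholders, not a real crawler):

```elixir
# Sketch: a user's rented process count caps their crawler concurrency,
# so renting more processes directly increases throughput.
rented_processes = 5
urls = Enum.map(1..20, &"https://example.com/page/#{&1}")

# Placeholder for a real HTTP fetch.
fetch = fn url -> {url, byte_size(url)} end

results =
  urls
  |> Task.async_stream(fetch, max_concurrency: rented_processes, timeout: 30_000)
  |> Enum.map(fn {:ok, result} -> result end)

IO.puts("fetched #{length(results)} pages with #{rented_processes} workers")
```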

I also couldn’t find anything related to resource limitation of processes myself, but I was wondering if there is any way to add such capabilities using OTP.

@madshargreave I think you can achieve billing styles like that without actually requiring VM-level resource constraints. For webcrawler-as-a-service style billing, you’d bill by requests per second or the number of simultaneous crawlers, i.e. application-level constraints. CPU/memory usage would be a strange way to bill that, since it depends on the performance and algorithms of the underlying code.
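To illustrate, application-level metering could be as simple as a per-user request counter, e.g. in ETS. A sketch (the table name and user ids are hypothetical):

```elixir
# Sketch: count billable requests per user in ETS instead of
# measuring CPU/memory. Table name and user ids are placeholders.
table = :ets.new(:billing, [:set, :public])

record_request = fn user_id ->
  # Increment the counter at position 2, initializing to {user_id, 0}
  # if the user has no row yet.
  :ets.update_counter(table, user_id, {2, 1}, {user_id, 0})
end

Enum.each(1..3, fn _ -> record_request.("user_a") end)
record_request.("user_b")

[{_, count}] = :ets.lookup(table, "user_a")
IO.puts("user_a made #{count} billable requests")
```

The billing job then reads these counters on whatever schedule the pricing model requires.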


Sure, that makes sense, but the size of the downloaded websites would probably differ, in some cases by a lot, so it would make sense to capture that in the pricing model as well.

Sure, but I suppose my point is this: charging for bare resource consumption makes sense when you’re selling resources. A “scraper as a service” isn’t selling resources; it’s selling scraping. You control the code, so you control the resources that code runs on. If someone wants to scrape a very large website, your pricing model should have some way to capture that expense, agreed. But that’s a product/market/pricing decision to make. The customer doesn’t care if it takes 1 CPU or 0.1 CPUs, provided you offer a competitive set of capabilities at a competitive price. If you find a way to cut your resource consumption in half, you should be able to capture that upside.

I agree. Part of my original question was exactly about this, i.e. how best to track billable resource consumption, but you’re right that it doesn’t make sense from the end user’s point of view to talk about numbers of CPUs etc.
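For internal cost tracking (as opposed to user-facing pricing), one option is to periodically sample the delta in a process’s reduction count over a billing interval. A sketch, where the busy loop stands in for a crawler doing work:

```elixir
# Sketch: attribute compute to a process for internal accounting by
# sampling its reduction count twice and taking the difference.
# The busy loop is a placeholder workload.
loop = fn loop ->
  :erlang.unique_integer()

  receive do
    :stop -> :ok
  after
    0 -> loop.(loop)
  end
end

pid = spawn(fn -> loop.(loop) end)

{:reductions, r0} = Process.info(pid, :reductions)
# ... the billing interval elapses while the process works ...
Process.sleep(50)
{:reductions, r1} = Process.info(pid, :reductions)
send(pid, :stop)

IO.puts("reductions used in interval: #{r1 - r0}")
```

This stays invisible to the customer while still letting you see which tenants are expensive to serve.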