So I have this project: a software-as-a-service that lets users create their own HTTP APIs. We host them under our domain.
Users create the APIs by clicking through a web interface where they enter a bunch of parameters and configuration, and they get an HTTP API as a result. Probably REST style.
API access is offered as a product, but the target audience is a mix of API consumers who write their own clients and people who would just download a ready-made, auto-generated app hardwired to the API they created. The latter don’t need programming knowledge, and in fact don’t even need to know how it works; all they want is their app.
I want to be able to scale to hundreds of thousands of APIs. Every user-created API has its own data, so part of this project is an embarrassingly parallel problem. Hence my reason for posting here.
I am a secret functional programmer at night and have a boring day job as a programmer. I am familiar with the BEAM’s green-process concept, but I have never worked with Erlang or Elixir.
How would one build such a product with Elixir or Erlang? Say I have 200,000 APIs: is it realistic to spawn 200,000 processes and keep them up 24/7, ready to reply? Or is it more common to spawn them only when a request comes in? What about the public HTTP software bit? How does one scale it across multiple machines? Is it simple to serve a single website from multiple machines? I am talking about the HTTP frontend, not the backend workers behind it.
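To make the process question more concrete, here is roughly what I picture “one process per API” looking like, started lazily on the first request. This is a minimal sketch only; I have never written Elixir, and the names (MyApp.ApiServer, MyApp.ApiRegistry, MyApp.ApiSupervisor) are made up, so treat it as guesswork:

```elixir
# Assumes the application supervision tree starts:
#   {Registry, keys: :unique, name: MyApp.ApiRegistry}
#   {DynamicSupervisor, strategy: :one_for_one, name: MyApp.ApiSupervisor}

defmodule MyApp.ApiServer do
  use GenServer

  # One process per user-created API, registered under its id.
  def start_link(api_id) do
    GenServer.start_link(__MODULE__, api_id, name: via(api_id))
  end

  defp via(api_id), do: {:via, Registry, {MyApp.ApiRegistry, api_id}}

  # Lazily start the API's process on its first request, reuse it afterwards.
  def handle_request(api_id, request) do
    pid =
      case Registry.lookup(MyApp.ApiRegistry, api_id) do
        [{pid, _}] ->
          pid

        [] ->
          case DynamicSupervisor.start_child(MyApp.ApiSupervisor, {__MODULE__, api_id}) do
            {:ok, pid} -> pid
            {:error, {:already_started, pid}} -> pid
          end
      end

    GenServer.call(pid, {:request, request})
  end

  @impl true
  def init(api_id), do: {:ok, %{api_id: api_id}}

  @impl true
  def handle_call({:request, request}, _from, state) do
    # ...here the process would load this API's own data and build a reply...
    {:reply, {:ok, request}, state}
  end
end
```

That is the kind of thing I mean by keeping them “up 24/7 ready to reply” versus spawning on demand; I have no feel for whether 200,000 of these is sane.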
An availability strategy I have seen countless times is hiding a bunch of worker machines behind a high-scale HTTP server such as nginx or HAProxy, then buying a more expensive machine for the HTTP server as needed. It puzzles me that most people don’t think beyond the single beefy HTTP frontend… what do you do when there are too many HTTP requests for a single machine?
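Concretely, the setups I keep seeing look roughly like this (a minimal nginx sketch; the hostnames and port are made up):

```nginx
# One nginx frontend fanning requests out to a pool of worker machines.
upstream api_workers {
    server worker1.internal:4000;
    server worker2.internal:4000;
    server worker3.internal:4000;
}

server {
    listen 80;
    server_name api.example.com;

    location / {
        proxy_pass http://api_workers;
    }
}
```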
Does running Erlang or Elixir make this problem easier to solve in any way?