How would you go about building a large-scale SaaS?

So I have this project: a software-as-a-service product that lets users create their own HTTP APIs, which we host under our domain.
Users create the APIs by clicking through a web interface where they enter a bunch of parameters and configuration options and get an HTTP API as a result, probably REST-style.

API access is there as a product, but the target audience is a mix of API consumers who write their own clients and people who would just download a ready-made, auto-generated app that is hardwired to the API they created. The latter don't need any programming knowledge, and in fact don't even need to know how it works; all they want is their app.

I want to be able to scale to hundreds of thousands of APIs. All user-created APIs have their own data, so part of this project is an embarrassingly parallel problem, hence my reason for posting here.

I am a secret functional programmer at night and have a boring day job as a programmer. I am familiar with the BEAM's green-process concept, but I have never worked with Erlang or Elixir.

How would one build such a product with Elixir or Erlang? Say I have 200.000 APIs: is it realistic to spawn 200.000 processes and leave them up 24/7, ready to reply? Or is it more common to spawn them once a request comes in? What about the public HTTP software bit? How does one scale it across multiple machines? Is it simple to serve a single website with multiple machines? I am talking about the HTTP frontend, not the …

An availability strategy I have seen countless times is hiding a bunch of worker machines behind a high-scale HTTP server such as nginx or HAProxy, then buying a more expensive machine for that HTTP server as needed. It puzzles me that most people don't think beyond the single beefy HTTP frontend… what do you do when there are too many HTTP requests for a single machine?
Does running Erlang or Elixir make this problem easier to solve in any way?


Having 200.000 processes ready to reply would not be very efficient, since there is no guarantee all the APIs will be used all the time. But of course spawning a process only once a request comes in is not a great solution either: you can keep some processes ready to reply, and when one of them replies, you kill it and spawn another process to put back in the pool. That's what Phoenix does.
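
If by 200.000 processes you mean one long-lived process per user-created API, a common middle ground is to start each API's process lazily on its first request and keep it around afterwards. A minimal sketch, assuming a Registry and a DynamicSupervisor in the app's supervision tree (all module names here are hypothetical):

```elixir
defmodule ApiHost.ApiServer do
  use GenServer

  # One process per user-created API, registered under its id.
  def start_link(api_id) do
    GenServer.start_link(__MODULE__, api_id, name: via(api_id))
  end

  def via(api_id), do: {:via, Registry, {ApiHost.Registry, api_id}}

  @impl true
  def init(api_id), do: {:ok, %{api_id: api_id}}

  @impl true
  def handle_call({:request, params}, _from, state) do
    # The user-configured request handling would go here.
    {:reply, {:ok, params}, state}
  end
end

defmodule ApiHost do
  # Looks up the API's process and starts it lazily on the first request.
  # Assumes the application supervision tree starts
  #   {Registry, keys: :unique, name: ApiHost.Registry} and
  #   {DynamicSupervisor, name: ApiHost.Supervisor, strategy: :one_for_one}.
  def call(api_id, params) do
    case Registry.lookup(ApiHost.Registry, api_id) do
      [{pid, _value}] ->
        GenServer.call(pid, {:request, params})

      [] ->
        case DynamicSupervisor.start_child(ApiHost.Supervisor, {ApiHost.ApiServer, api_id}) do
          {:ok, pid} -> GenServer.call(pid, {:request, params})
          {:error, {:already_started, pid}} -> GenServer.call(pid, {:request, params})
        end
    end
  end
end
```

You could also give the GenServer an idle timeout so APIs that nobody calls for a while shut their process down again and free the memory.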


Well, if you use Cowboy, Plug, or Phoenix (Phoenix builds on Plug, which builds on Cowboy), the system manages a pool of acceptor sockets for you. When a request comes in, it spawns a new internal process/green thread/actor to manage the entire connection and runs it through the router, so it would scale well just using the default machinery.
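
To make that concrete, here is a minimal sketch of a bare Plug router served by Cowboy; every request that reaches it already runs in its own lightweight process, with no extra configuration (module and route names are illustrative):

```elixir
defmodule ApiHost.Router do
  use Plug.Router

  plug :match
  plug :dispatch

  # Each request is handled in its own lightweight BEAM process,
  # so one slow user API cannot block the others.
  get "/apis/:api_id/*rest" do
    send_resp(conn, 200, "would dispatch API #{api_id}, path #{inspect(rest)}")
  end

  match _ do
    send_resp(conn, 404, "not found")
  end
end

# Started under the application supervisor, e.g.:
# {Plug.Cowboy, scheme: :http, plug: ApiHost.Router, options: [port: 4000]}
```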

Scaling across machines is built into Phoenix and has its own documentation section. 🙂
It also scales very well across all cores on a single machine.
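
At its simplest that is just distributed Erlang: nodes connect to each other and processes become transparently reachable across the cluster. A rough sketch, with hypothetical node and module names (MyApp.PubSub is assumed to be started in the supervision tree):

```elixir
# Start two instances of the same app on different machines, e.g.
#   iex --sname app1 -S mix
#   iex --sname app2 -S mix

# On app1, join the cluster:
Node.connect(:"app2@other_host")

# Processes on either node are now reachable across the cluster, and
# Phoenix.PubSub broadcasts reach subscribers on every connected node:
Phoenix.PubSub.broadcast(MyApp.PubSub, "api:123", {:config_updated, %{}})
```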

True, running Rust with Actix or similar would be more efficient and scale better on a single node if you truly need absolute CPU speed, but it is also missing the ecosystem that has already been built up around this pattern, and it is not as easy to scale 'out' off a single server. However, Rust also runs very well embedded as Ports or NIFs in an Elixir system if you need to do anything seriously CPU-heavy. 🙂
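
If you do reach for Rust on a hot path, the Rustler library is one common way to embed it as a NIF. A sketch of what the Elixir side might look like (the otp_app, crate, and function names are made up):

```elixir
defmodule ApiHost.Native do
  # Loads a Rust crate (assumed to live under native/heavy_math) and
  # exposes its #[rustler::nif] functions as ordinary Elixir calls.
  use Rustler, otp_app: :api_host, crate: "heavy_math"

  # This fallback body only runs if the native library fails to load.
  def transform(_payload), do: :erlang.nif_error(:nif_not_loaded)
end
```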

But yes, as @kelvinst said, having all the processes already up is a waste of memory, and Phoenix already handles spinning up processes on demand, so the normal built-in way is best. 🙂
