Thanks for your patient explanation, @bitwalker, and your good questions/points @outlog.
The app basically manages three JSON configs. Each of these configs will be updated between one and five times per day. However, they will be read hundreds of millions of times per day.
In terms of the payload size: the configs themselves are small – just a couple dozen lines of JSON each. All the app will do is issue 302 redirects hundreds of millions of times per day based on the contents of the configs – so it will just constantly read out of the ETS table, perform a quick calculation, and then issue a redirect.
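To make the hot path concrete, here's a minimal sketch of what I have in mind, assuming the configs live in a public ETS table and Plug is in the stack (the names `RedirectPlug`, `:config_cache`, and `choose_target/2` are all placeholders, not real code from the app):

```elixir
defmodule RedirectPlug do
  import Plug.Conn

  def init(opts), do: opts

  def call(conn, _opts) do
    # Single ETS lookup on the hot path; no process message-passing involved.
    case :ets.lookup(:config_cache, :redirect_config) do
      [{:redirect_config, config}] ->
        conn
        |> put_resp_header("location", choose_target(config, conn))
        |> send_resp(302, "")
        |> halt()

      [] ->
        conn |> send_resp(404, "") |> halt()
    end
  end

  # Stand-in for the "quick calculation" over the config contents.
  defp choose_target(config, _conn), do: Map.fetch!(config, "default_url")
end
```

Since ETS reads with `read_concurrency: true` scale across schedulers without any locking on the caller's side, this path should stay well inside the 30 ms budget.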
I’m not sure what spike RPS means exactly, but traffic ebbs and flows with the time of day and the day of the week, and can rise or fall by a factor of 100.
In terms of load balancing, I haven’t thought about that yet - somehow I thought AWS/OTP handles that under the covers for me…?
Geo-distributed… that would probably be an extremely good idea for our use case, but we’re not planning to do it for the initial iteration.
Maximum acceptable latency for the client to receive the 302 redirect would be about 30 ms. For updating ETS and synchronizing it between all nodes, I’d be happy with 5 seconds and could live with 5 minutes.
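Given the generous sync budget, the simplest scheme I can think of is each node polling the config source on its own, with no cross-node coordination at all. A rough sketch, where `ConfigRefresher` and `fetch_configs/0` are hypothetical names and the source (S3, a database, etc.) is behind `fetch_configs/0`:

```elixir
defmodule ConfigRefresher do
  use GenServer

  @refresh_ms 5_000

  def start_link(_), do: GenServer.start_link(__MODULE__, nil, name: __MODULE__)

  @impl true
  def init(_) do
    # Public table with read_concurrency so redirect handlers read it directly.
    :ets.new(:config_cache, [:named_table, :set, :public, read_concurrency: true])
    refresh()
    schedule()
    {:ok, nil}
  end

  @impl true
  def handle_info(:refresh, state) do
    refresh()
    schedule()
    {:noreply, state}
  end

  defp schedule, do: Process.send_after(self(), :refresh, @refresh_ms)

  defp refresh do
    # fetch_configs/0 would load the three JSON configs from wherever they live
    # and return [{key, decoded_config}, ...]; placeholder here.
    for {key, config} <- fetch_configs() do
      :ets.insert(:config_cache, {key, config})
    end
  end

  defp fetch_configs, do: []
end
```

With a 5-second poll interval every node converges within the stated 5-second budget, and since updates only happen a handful of times per day, the polling cost is negligible compared to pushing updates between nodes.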