no. never rely on a sole server.
I can’t see what the OKCupid site offers so I don’t know what its needs might be, but I would say that if designed and coded well, a Phoenix app on a dedicated server will handle quite a lot of load - and in some cases, significantly more than many other languages or frameworks.
The great thing is that dedicated servers are far cheaper than ‘hosting in the cloud’, so if you outgrow one, it should not be prohibitive to add another.
I would say if budget is an issue, just go for it! If you outgrow a server you can always add more later - and you’ll be surprised by how well a single (relatively beefy) server performs.
I would however stress the importance of doing off-server backups, and of having two HDs set up in a RAID array (so if one fails your app won’t go down).
Elixir is a top performer, probably beaten only by C/C++. You have to put it to the test and see how it does, and add hardware if needed. Even on a limited-resource shared VPS it is very fast.
That seems unreasonably optimistic.
Edit: Elixir is fast, but there are plenty of things that are faster.
I think maybe @Neurofunk was urging for more than one server for the same reason as you urged for RAID arrays & backups: Redundancy and potential for zero-downtime.
It’s not usually a super productive discussion, but I think it depends on what you mean by “faster”. Average latency is one metric where I’d expect good things from anything running on the BEAM. It’s made for handling thousands of things “fast enough for each”, so I don’t think it’s unreasonable to say that it might beat most in that category.
The great thing is that dedicated servers are far cheaper than ‘hosting in the cloud’…
I would say that’s exactly backwards until you hit significant scale.
… so if you outgrow one, it should not be prohibitive to add another.
And if you over-bought? In my experience most people wildly overestimate their requirements, and if you own the hardware you can’t easily downgrade to a more appropriate config as you can with a cloud instance.
OTOH, if you start with the smallest system that will run your app, you can build up a performance profile and upgrade when and if it makes sense.
no. never rely on a sole server.
+1 – having multiple (even 2) servers offers both redundancy and …
Yep - I guess it depends on how quickly you can commission a new server / how much down-time you can afford. However I would still say the same - if budget does not allow it, just go with the single server. In many non-critical applications, being down for a few hours or a day shouldn’t be too much of an issue. It’s like @hassan says below - I think it’s better than overspending.
I was going by the OP’s 2,000 to 3,000 online users, where I think a dedicated box would be cheaper. Having said that, dedicated servers have come down quite a bit in recent years so I don’t imagine the difference to be as significant as some might think (unless your app is pretty small).
It would simply mean switching to a different (cheaper) server. Most rental contracts are month-to-month. I wouldn’t buy a server (i.e. co-locate) unless I needed more than 10 servers.
For anyone interested, there are some good recommendations (and further discussion) on dedicated servers in this thread: Deploying Elixir - opinions on Heroku, Digitalocean, Bluemix etc
That server costs only $29 at Scaleway.
I don’t think I’ve heard of them - but I’d definitely recommend looking at the thread I posted as I believe quite a few recommendations are in there (and a few other threads we’ve had on the topic)
I think Phoenix should easily handle that much load on that server.
Scaleway is quite a bit cheaper than the options you mentioned in that post. https://www.scaleway.com/baremetal-cloud-servers/
If you plan to have a huge application, then it would be a good idea to think about distribution from the start. That way, you will benefit from one of Erlang/Elixir’s main strengths.
You won’t need to worry about whether your server is big enough, because you can just add one if needed.
Distribution is hard, but mastering it solves the scaling problem.
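To make “just add one if needed” concrete, node discovery can be automated with a library such as libcluster. Here is a minimal sketch, assuming the `libcluster` hex package is a dependency; the topology name and node names are hypothetical examples:

```elixir
# config/config.exs - minimal libcluster sketch (node names are
# hypothetical; each server runs the same release under that name)
config :libcluster,
  topologies: [
    app_cluster: [
      # Epmd strategy: connect to an explicit, static list of known nodes
      strategy: Cluster.Strategy.Epmd,
      config: [hosts: [:"app@server1.example.com", :"app@server2.example.com"]]
    ]
  ]
```

You then start `Cluster.Supervisor` with these topologies in your application’s supervision tree, and the nodes find and connect to each other on boot.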
I’ve not heard of them before and tbh their configuration seems a bit odd - 50GB SSD and a “250GB Direct SSD Disk” - what’s a ‘direct’ SSD? I’d read some reviews and post on webhostingtalk.com for thoughts from other users.
Or look at the dedicated servers from the other companies posted in that thread I linked to
I am not very good at distribution. Is there a book/tutorial etc. on how to deploy Phoenix in a distributed way - say, deploying it on 4 smaller dedicated servers?
I do recall a distribution example in The Little Elixir & OTP Guidebook - a distributed Chuck Norris jokes server.
There is an excellent post about node clustering by @aseigo.
But to me the subject lacks a piece of reference documentation (or I have not found it yet).
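In the absence of a single reference doc, it helps that the core primitives are small. A sketch of joining two nodes by hand from IEx - the node names and cookie below are hypothetical, and this assumes the same release running on each server:

```elixir
# Start the release on each server with a node name and a shared cookie:
#   iex --name app@10.0.0.1 --cookie my_secret -S mix phx.server
#   iex --name app@10.0.0.2 --cookie my_secret -S mix phx.server
#
# Then, from the first node's IEx session:
Node.connect(:"app@10.0.0.2")  # true if the distribution handshake succeeded
Node.list()                    # lists the nodes we are now clustered with
```

Once nodes are connected, Phoenix.PubSub (which channels use) broadcasts across the cluster, so channel messages reach users connected to any of the servers.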
I remember reading that Scaleway wasn’t particularly great in disk IO and networking https://www.webstack.de/blog/e/cloud-hosting-provider-comparison-2017/. But things might have changed since.
Load test it, with Tsung for example. That’s the only way to get any clue before deploying it.
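A minimal Tsung scenario looks like this - a sketch assuming Phoenix on port 4000, a local-only run, and the default DTD path on Debian-like systems; tune the arrival rate and duration to your own targets:

```xml
<?xml version="1.0"?>
<!DOCTYPE tsung SYSTEM "/usr/share/tsung/tsung-1.0.dtd">
<tsung loglevel="notice">
  <clients>
    <client host="localhost" use_controller_vm="true"/>
  </clients>
  <servers>
    <server host="127.0.0.1" port="4000" type="tcp"/>
  </servers>
  <load>
    <!-- ramp-up: 50 new users arrive per second, for 2 minutes -->
    <arrivalphase phase="1" duration="2" unit="minute">
      <users arrival_rate="50" unit="second"/>
    </arrivalphase>
  </load>
  <sessions>
    <!-- each simulated user just fetches the front page -->
    <session name="front_page" probability="100" type="ts_http">
      <request><http url="/" method="GET"/></request>
    </session>
  </sessions>
</tsung>
```

Run it with `tsung -f scenario.xml start`; `tsung_stats.pl` in the log directory then generates the latency/throughput report.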
By default, Scaleway disks are networked.
A direct SSD is a temporary SSD that is attached directly to the server’s PCB, instead of being accessed over the network.
Another tool is MZBench. It takes a different approach to Tsung in that it is request- rather than user-session-oriented. It is also a bit more approachable in its web UI, and writing load tests is rather more straightforward. It does not have all the functionality (by far!) of Tsung, but if you just want simple request load testing, it’s pretty good. It also supports clustering and cloud hosting out of the box with rather simple setup.
I now use both, depending on what I am trying to measure.