7.5 second average page load speed for 200 visitors hitting a Phoenix-driven website - on a $5/month Digital Ocean Droplet

performance
phoenix

#41

The Episodes listing contains considerably more data than it did two and a half weeks ago.

I’ve added several new episodes, expanded the show notes on several others and made a couple of other changes as well. You’ll get a more reliable measure by testing the changes I covered on your own sites (or from my tests, where I at least tried to control for these things!)

Edit: 124 reqs/s is considerably lower than what I’m seeing from loader.io, though.

I’m not sure what would be causing such a big difference between it and your test. Again, I’d say you’d get more benefit from looking at relative changes in benchmarks of your app after trying different tweaks than from an apples-to-oranges comparison against a moving target (which alchemist.camp will be)…


#42

Did you do anything else to improve the markdown rendering speed?

Because the before/after on an individual episode page is a substantial improvement.

2.5 weeks ago:

nick@workstation:/e/tmp$ wrk -t8 -c200 -d30 https://alchemist.camp/episodes/welcome
Running 30s test @ https://alchemist.camp/episodes/welcome
  8 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.59s   293.12ms   1.99s    77.11%
    Req/Sec    10.18      7.35    40.00     72.05%
  1698 requests in 30.06s, 18.96MB read
  Socket errors: connect 0, read 0, write 0, timeout 1449
Requests/sec:     56.49
Transfer/sec:    645.82KB

Today:

nick@workstation:/e/tmp$ wrk -t8 -c200 -d30 https://alchemist.camp/episodes/welcome
Running 30s test @ https://alchemist.camp/episodes/welcome
  8 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   350.83ms   47.11ms   1.86s    84.97%
    Req/Sec    68.06     23.98   160.00     71.28%
  15555 requests in 30.09s, 177.82MB read
Requests/sec:    517.00
Transfer/sec:      5.91MB

That’s a ~10x improvement with a huge reduction in average latency. Well done.


#43

Just the changes I mentioned to you before recording the episode!


#44

It’s also super useful to have the exact count I need to decrement my view counts by :laughing:


#45

Haha yeah, sorry about that.


#46

It just dawned on me that this could be an interesting episode idea: one where you write a plug to prevent the view count from being incremented by requests that look like bots or automated tools. It could go into parsing user agents and/or filtering out known IPs, etc…
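
For anyone curious, here’s a minimal sketch of what such a plug might look like. The module name, the pattern list and the :bot? assign are all made up for illustration, and a real deny-list would be far more thorough:

defmodule MyAppWeb.Plugs.BotFilter do
  import Plug.Conn

  # A few common user-agent markers; purely illustrative.
  @bot_pattern ~r/bot|crawler|spider|curl|wget|loader\.io/i

  def init(opts), do: opts

  def call(conn, _opts) do
    ua =
      conn
      |> get_req_header("user-agent")
      |> List.first()
      |> to_string()

    # Treat a missing user agent as bot-like as well.
    assign(conn, :bot?, ua == "" or Regex.match?(@bot_pattern, ua))
  end
end

Downstream code could then check conn.assigns.bot? before incrementing a view count.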


#47

The low request rate might be due to the network path between you and the website’s hosting provider. Whenever I’ve run wrk from my laptop against a public website, the results were never particularly high (even for very high-traffic websites). But if you run wrk on a local network, it shows a more “real” picture.

Just one overloaded switch between you and the hosting provider would skew the results significantly, and there’s probably more than one of those on the path at any given moment, since that’s just how TCP/IP routing works.


#48

Yeah, that’s true. I think his site is hosted in Singapore, and my ping to it is around 250ms.

If I wrk google.com I get substantially higher results:

Running 30s test @ https://google.com
  8 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    40.57ms   27.17ms 375.18ms   95.89%
    Req/Sec   660.74     96.04   820.00     77.78%
  156974 requests in 30.09s, 88.77MB read
Requests/sec:   5216.20
Transfer/sec:      2.95MB

But I also ping 20ms to Google’s servers. This is on a 70/25 Mbps wired cable connection.


#49

That is a good idea! I’ve already got (honest) bots filtered and a rate limiter planned, but I hadn’t thought about making a screencast about it or about automatically fixing view counts.
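
For what it’s worth, a common shape for a simple rate limiter in Elixir is a fixed-window counter in :ets. Here’s a minimal sketch, assuming a public :rate_limiter table has already been created by some long-lived process (the table-ownership question comes up a bit further down the thread), with the limit and window invented for illustration:

defmodule MyApp.RateLimit do
  @table :rate_limiter
  @limit 60          # max requests per IP...
  @window_ms 60_000  # ...per 60-second window

  # Returns true while the caller is under the limit for the current window.
  def allow?(ip) do
    window = div(System.system_time(:millisecond), @window_ms)
    count = :ets.update_counter(@table, {ip, window}, 1, {{ip, window}, 0})
    count <= @limit
  end
end

A real version would also need to sweep out counters from old windows periodically.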


#50

It’s interesting that total data transfer is still relatively low given that it’s Google (88.77MB over 156,974 requests works out to under 600 bytes per response). I wonder if they’re rate limiting you.


#51

Why use a GenServer for :ets in the first place? At least in your case. Why not use :ets directly?


#52

It’s not that bad to wrap :ets inside a GenServer, even if the API talks directly to :ets.

It will start and clean up the :ets table for you.
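
A rough sketch of that pattern, with all names invented for illustration: the GenServer creates the table at startup (and the table is removed automatically if the process exits), but reads and writes go straight to :ets with no GenServer round trip:

defmodule MyApp.ViewCounts do
  use GenServer

  @table :view_counts

  def start_link(_opts) do
    GenServer.start_link(__MODULE__, nil, name: __MODULE__)
  end

  # Public API talks directly to :ets; no messages to the GenServer.
  def increment(slug), do: :ets.update_counter(@table, slug, 1, {slug, 0})

  def get(slug) do
    case :ets.lookup(@table, slug) do
      [{^slug, count}] -> count
      [] -> 0
    end
  end

  @impl true
  def init(nil) do
    # This process owns the table, so the table lives and dies with it.
    :ets.new(@table, [:named_table, :public, read_concurrency: true])
    {:ok, nil}
  end
end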


#53

You basically must start an :ets table from a GenServer because :ets tables only live as long as the process that created them. If you create one from an HTTP request process, it’ll die when the request terminates.
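
That rule is easy to demonstrate from IEx (the :temp table name is just for the demo):

# Spawn a process that creates a named table and then exits.
pid = spawn(fn -> :ets.new(:temp, [:named_table]) end)

# Wait until the spawned process has terminated...
ref = Process.monitor(pid)
receive do
  {:DOWN, ^ref, :process, ^pid, _reason} -> :ok
end

# ...and the table is gone, deleted along with its owner.
:ets.info(:temp)
#=> :undefined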


#54

That’s not strictly true.


#55

Can you elaborate? “basically must” does not mean “in absolutely every case”. I wasn’t trying to be strict, I was trying to provide a general rule of thumb. In any case where you want a shared ets table, you want to spawn it in a genserver or other long lived process. If you take issue with that notion please articulate a concrete alternative position.