Is it worth us creating our own benchmarks/tests?

We could all build our own benchmarking GitHub repo to test every server we can find under a huge variety of real-world conditions: everything from simple file responses, text responses, simple processing, complex processing, simple DB queries, complex DB queries, a variety of DBs, overload conditions, failure conditions, etc… etc…


That would be very cool.

I wonder if forking that “which framework is fastest” repo would be a good place to start?

Seemed like a pretty balanced setup; at this point it's just a matter of the author's willingness to expand it. I got the impression it was created with the intention of showing off Crystal.

Heh, ditto, which is funny because Crystal falls behind pretty hard when many cores are added. ^.^

But perhaps we could keep the style, if not the repo itself; its testing methods are not at all sufficient.

You know… I’ve got DigitalOcean credit I could donate to this cause. I don’t have the time to do it myself, but if somebody wants to create a multi-language test suite that uses their API to spin up instances, run the tests, spin them down, and report the results somewhere, I’ll volunteer my account. We could use the 8-core droplets at $0.238/hr. That would put the credit to much better use than me spending it on something just because I have it.
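The spin-up/run/tear-down workflow described above could be driven by the DigitalOcean v2 API. A minimal sketch, assuming a valid API token; the `c-8` size slug and the image name are guesses for illustration (check the current size/image lists), and this only *builds* the requests, leaving out the actual sending, status polling, and error handling:

```python
import json
import urllib.request

API = "https://api.digitalocean.com/v2"

def create_droplet_request(token, name, size="c-8", region="nyc3",
                           image="ubuntu-22-04-x64"):
    """Build (but do not send) the API request that spins up a droplet."""
    body = json.dumps({"name": name, "region": region,
                       "size": size, "image": image}).encode()
    return urllib.request.Request(
        f"{API}/droplets",
        data=body,
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )

def destroy_droplet_request(token, droplet_id):
    """Build the API request that tears the droplet back down afterwards."""
    return urllib.request.Request(
        f"{API}/droplets/{droplet_id}",
        headers={"Authorization": f"Bearer {token}"},
        method="DELETE",
    )
```

The orchestrator would then pass each request to `urllib.request.urlopen`, wait for the droplet to become active, run the benchmark over SSH, and always issue the destroy request so billing stops.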


That would be cool. I have servers I could test on as well; more places means more data points. We’d want to design a good testing structure first, one that dumps as much raw data as possible so it can be reprocessed later into displayable information.

What should the ‘client’ tester be, though? Writing it in any language that is also being tested could introduce an unknown artificial throttling limit. And what about testing across servers? That introduces latency, but running on a single server means the server and the client fight for resources, etc…
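One way to get the "dump everything, analyze later" structure is to write one JSON line per request and treat reporting as a separate offline step. A minimal sketch; the field names here are my own invention, not an agreed format:

```python
import json
import time

def record_sample(fh, latency_ms, status, server, test):
    """Append one raw sample as a JSON line; all analysis happens later, offline."""
    fh.write(json.dumps({
        "t": time.time(),          # wall-clock timestamp of the sample
        "latency_ms": latency_ms,  # measured response latency
        "status": status,          # HTTP status code returned
        "server": server,          # which server implementation was under test
        "test": test,              # which scenario (file, DB, overload, ...)
    }) + "\n")

def load_samples(path):
    """Re-read the raw dump so new reports can be generated at any time."""
    with open(path) as fh:
        return [json.loads(line) for line in fh]
```

Because the dump keeps every individual sample rather than aggregates, any statistic or chart can be recomputed from old runs without rerunning the benchmark.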


There is also Google Cloud, which gives every new user $300 of credit for a year.

What about colocation? Using two boxes inside the same data center/provider? Any latency there should be almost negligible.

Something like Phoenix Showdown Comparative Benchmarks @ Rackspace · GitHub

As long as there is enough transfer capacity as well; I know that in the Elixir WebSocket tests, even 2 million active connections still did not saturate the server’s memory and CPU…

Right. By the way, it would be pretty interesting to compare Cowboy’s WebSockets with uWebSockets [0] (it seems too good to be true). And also pub/sub implementations, HTTP/2 web servers (chatterbox vs cowboy2), the overhead of containers/VMs at scale… or maybe that’s too much.

[0] https://github.com/uWebSockets/uWebSockets


I’ve split these posts into a new thread - sounds like a great idea :023:

@OvermindDL1, feel free to rename the thread as you see fit.


Scaleway could be an easy option.

For generating load… well, Tsung?


I could try contacting a load testing company like https://flood.io/ and see if they’d be up for sponsoring it. It would get them some good advertising, and if we were to document the configuration, open source the containers, and make the process entirely reproducible via DigitalOcean droplets, they might even get some business from people wanting to duplicate the tests.


I think this would be a great idea. I don’t have a lot to bring to the table myself, but I do think that it would be really helpful to have a benchmark that:

  • Has tests that reflect realistic scenarios.
  • Measures not only the fastest and average response times, but also the 99th percentile, so we can see how well each framework handles its load.
  • Tracks not only speed, but memory usage as well.
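To illustrate the 99th-percentile point: if the raw latency samples are kept, a nearest-rank percentile is only a few lines, which is another argument for dumping every sample rather than pre-aggregating. A sketch (the `summarize` shape is just an example, not a proposed schema):

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile: the smallest sample at or above pct% of the data."""
    ordered = sorted(samples)
    k = max(0, math.ceil(pct / 100 * len(ordered)) - 1)
    return ordered[k]

def summarize(latencies_ms):
    """The headline numbers suggested above: fastest, average, and p99."""
    return {
        "fastest": min(latencies_ms),
        "mean": sum(latencies_ms) / len(latencies_ms),
        "p99": percentile(latencies_ms, 99),
    }
```

The p99 matters because averages hide tail stalls: a server with a great mean but a terrible p99 is dropping the ball on 1 in 100 requests.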

Is this done?

From the previous comment, I guess this was not done?

I’m new to performance testing. I would like to run some tests comparing

  • Ace
  • Cowboy
  • Cowboy + Plug

Can anyone point me to a good getting started guide or even better a service that can do it for me?

My first thought is to try to recreate http://www.ostinelli.net/a-comparison-between-misultin-mochiweb-cowboy-nodejs-and-tornadoweb/ though this is an old article, so maybe there is a better way now.

I like the fact it tests a bit of the HTTP message parsing and building.

Therefore, what is being tested is:

  • header parsing
  • querystring parsing
  • string concatenation
  • socket implementation
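A handler exercising those pieces might look like the sketch below. This does not reproduce the original article’s exact response format; the `value` parameter and the 10× repetition are assumptions for illustration. It covers the first three items, while the socket implementation is whichever server framework the handler is mounted in:

```python
from urllib.parse import urlsplit, parse_qs

def handle(path, headers):
    """Parse the querystring and a header, then build the body by concatenation."""
    qs = parse_qs(urlsplit(path).query)          # querystring parsing
    value = qs.get("value", [""])[0]
    ua = headers.get("User-Agent", "unknown")    # header parsing
    # string concatenation workload: repeat the value, then append the UA
    return ("".join([value] * 10)) + "|" + ua
```

Each server under test would implement the same handler natively, so the benchmark measures the framework’s parsing and socket layers rather than the workload itself.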

Watch this video for insights into measurement:

I suggest using https://gatling.io, as that talk also recommends (it’s Akka/Java though). It’s pretty good and outputs nice HTML (with graphs/scatter plots) as well as JSON for the results (so some day we could orchestrate it from Elixir)…
There is a version 3 underway, but I’ve only used the stable release myself; YMMV.

For online services, I’ve only tried the free tiers, and https://loader.io was the only one (that I found) able to saturate a $2 Phoenix server… but surely there are plenty out there…
