Plug POST performance

The last part of your post caught my eye, @josevalim. (I haven’t managed to resolve the issue yet; the closest I’ve gotten is 160k requests/sec using Elli with raw Erlang. I plan to tackle this again over the weekend.)

I installed Erlang both from kerl (on Fedora 23) and from the erlang-solutions package repo on Ubuntu. Are there additional Erlang optimizations that people don’t get out of the box and that have to be compiled in? I didn’t set any flags during the installation with kerl, just whatever the defaults were.

3 Likes

If it helps, here is my .kerlrc file:

$ cat ~/.kerlrc
KERL_CONFIGURE_OPTIONS="--disable-debug --disable-silent-rules --without-javac --enable-shared-zlib --enable-dynamic-ssl-lib --enable-hipe --enable-sctp --enable-smp-support --enable-threads --enable-kernel-poll --enable-wx --enable-darwin-64bit"
KERL_DEFAULT_INSTALL_DIR=/Users/jose/.kerl/installs

If you run erl and paste the beginning of the output here, we will have an idea of what is enabled on your machine. IIRC, Erlang compiled without those flags is at least twice as slow on my machine.
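For reference, the banner from a fully featured build looks something like this (version numbers and values vary by OTP release and machine; this is an illustrative transcript, not output from the poster’s machine):

```shell
$ erl
Erlang/OTP 19 [erts-8.0] [source] [64-bit] [smp:8:8] [async-threads:10] [hipe] [kernel-poll:false]

Eshell V8.0  (abort with ^G)
```

`[hipe]` only shows up when HiPE was compiled in, and `[kernel-poll:false]` flips to `true` when the VM is started with `+K true`; a missing `[smp:...]` entry means an SMP-less build.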

5 Likes

@apenney Is there any update on this? I’m curious if you were able to solve some of your performance problems.

2 Likes

Here’s an update from @apenney on the thread I opened over on elixir-lang-talk:

Just a quick note from someone that tried (and failed) to get acceptable performance numbers from Elixir.

You should look into wrk2 instead of wrk for your testing. The reason is that wrk has a “coordinated omission” problem, whereby the tool and the service accidentally conspire to produce nicer numbers!

wrk2 will send traffic at a steady rate instead of “when it can”, which often results in drastically different numbers. I was seeing latency in the seconds at the 99th percentile with wrk2 and a quick test with just Plug.
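For anyone wanting to try this: wrk2 keeps the same command line as wrk (the binary is even still named wrk) but adds a mandatory -R flag for the target request rate, which is what lets it correct for coordinated omission. A typical invocation, assuming a Plug app listening on port 4000, might look like:

```shell
# Drive a constant 10k req/s for 60s with 4 threads and 100 connections;
# --latency prints corrected latency percentiles at the end.
wrk -t4 -c100 -d60s -R 10000 --latency http://127.0.0.1:4000/
```

The port and rate above are assumptions for illustration; tune -R to just below your server’s saturation point to see realistic tail latencies.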

When we tested our own Scala-based service internally, we went from seeing 100 ms at the 100th percentile to 9 minutes with wrk2, so it was quite an eye-opener!

Sad that there wasn’t a happy ending :frowning:

1 Like

Maybe I’m wrong and this is not the problem you’re having, but we ran into the same performance problem and fixed it by increasing the number of acceptors in the Cowboy configuration. It’s worth a try anyway.
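For reference, here is roughly what that looks like with the Plug Cowboy adapter of that era; the module name is hypothetical and the `:acceptors` option (the size of Ranch’s acceptor pool) is an assumption, so check the docs of the adapter version you actually use:

```elixir
# Sketch: bump the acceptor pool when starting the Cowboy adapter.
# With Phoenix, the same option can go under the endpoint's :http config.
Plug.Adapters.Cowboy.http(MyApp.Router, [],
  # the default pool was 100 acceptors; try raising it under heavy load
  port: 4000,
  acceptors: 250
)
```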

3 Likes

Inspired by this thread and a similar one, I blogged about getting some low-latency numbers with wrk. You can find the article here.

5 Likes

I have a weird, semi-useless update, as I haven’t really tried again due to a lack of time.

I did some experimentation with raw Cowboy (no Elixir) and had terrible results no matter what I did. I tried Cowboy 2 as well, with the same bad results.

However, I did some experiments with raw Elli and somehow (I still don’t understand how, given the +A 0 default rebar3 gave me) I was able to get 160,000 QPS out of it, running wrk on the same machine. I tried an Elli Plug adapter I found and got 2,300 QPS, so I didn’t continue down that path.

I also did some experiments with raw nginx to make sure what I was attempting was possible, and around 160k was the limit I was able to reach between the two servers I have. (Given the huge volume of POSTs, that’s pretty great.)

So for anyone still trying this, I would definitely evaluate Elli over Cowboy in terms of raw performance.
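For anyone curious what “raw Elli” means here: an Elli server is just a module implementing the elli_handler behaviour, which works fine from Elixir too. A minimal sketch (module name and response body are illustrative, assuming elli is a dependency):

```elixir
defmodule BenchHandler do
  @behaviour :elli_handler

  # Every request gets a static 200; elli passes the request record
  # and the args given at start time.
  def handle(_req, _args), do: {200, [{"content-type", "text/plain"}], "ok"}

  # Required callback for elli's event hooks (request_complete, etc.)
  def handle_event(_event, _data, _args), do: :ok
end

# Start elli on port 3000 with the handler above
{:ok, _pid} = :elli.start_link(callback: BenchHandler, port: 3000)
```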

4 Likes

@sasajuric Thank you so much for taking the time to put that blog post together! My tests are mostly pretty similar, but I wasn’t able to get much above 10k RPS with 100 concurrent connections. I’m going to try to reproduce your setup and see if I can replicate your performance numbers.

[quote="apenney, post:27, topic:473"]
So for anyone still trying this, I would definitely evaluate Elli over Cowboy in terms of raw performance.
[/quote]

Thank you for the update @apenney. Is it pretty easy to configure Phoenix to use Elli?

1 Like

I was also looking into this and came across pastelli and pastelli_phoenix. They might be what you’re looking for.

2 Likes