Hi all - cross-posting this from the elixir-talk mailing list. I could use some help. I am currently evaluating Elixir and Phoenix for a performance-critical application for a Fortune 500 company. This could be another great case study for Elixir and Phoenix if I can show that it can meet our needs. Initial performance testing looked phenomenal, but I am running into some performance concerns that will force me to abandon this tech stack entirely if I cannot make the case.
The setup: an out-of-the box phoenix app using mix phoenix.new. No ecto. Returning a static json response. Basically a hello-world app.
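For concreteness, the endpoint under test is roughly this shape — a sketch only, with hypothetical module names, not the exact code from my app. The payload is static and no database is touched:

```elixir
defmodule MarketApi.ProductController do
  use MarketApi.Web, :controller

  # Static JSON payload; no Ecto, no database round-trip.
  def index(conn, _params) do
    json(conn, %{status: "ok", products: []})
  end
end
```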
The hardware:
Macbook Pro, 16gb, 8 core, 2.5ghz, running elixir/phoenix natively, and also using docker container
Amazon EC2 T2.Medium running Elixir Docker image
The tests: I used ab, wrk, siege, artillery, and curl with a variety of configurations, up to 100 concurrent connections. Not super scientific, I know… but:
No matter what I try, Phoenix logs impressive numbers to stdout - generally on the order of 150-300 microseconds. However, none of the load testing tooling agrees. No matter the hardware or load test configuration, I see around 20-40 ms response times. The goal for the services that I am designing is 20 ms and several thousand requests per second. The load tests that @chrismccord and others have published suggest that I should be able to expect 3 ms or less when running localhost, but I'm not seeing anything close to that.
Would anyone be willing to work with me to look at some options here? I'd be incredibly grateful. Don't make me go back to Java, please! Is what I am asking even possible?
To expand on Sasa's point, the :browser pipeline by default generates CSRF tokens to be used in forms as a security measure. These can be fairly expensive to generate, at least compared to a hello-world JSON endpoint. Definitely make sure you aren't doing that.
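For contrast, the :browser pipeline that mix phoenix.new generates looks roughly like the fragment below (exact plugs vary slightly by Phoenix version); the CSRF work happens in protect_from_forgery, and an :api pipeline skips all of it:

```elixir
pipeline :browser do
  plug :accepts, ["html"]
  plug :fetch_session
  plug :fetch_flash
  plug :protect_from_forgery        # CSRF token generation/verification lives here
  plug :put_secure_browser_headers
end
```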
Thank you for your quick response on this @sasajuric. I looked around in the forums first but didn't see the other thread. Lots of great information over there to try. Pretty sure I'm piping through :api - here's my router:
```elixir
defmodule MarketApi.Router do
  use MarketApi.Web, :router

  pipeline :api do
    plug :accepts, ["json"]
  end

  scope "/api", MarketApi do
    pipe_through :api

    resources "/products", ProductController, except: [:new, :edit]
  end
end
```
CPU is pretty well maxed out across all cores when I run the tests.
Thanks @benwilson512 - I'm super green to Phoenix, so I appreciate the tip. I think I am using :api. Is there anything I need to do besides pipe_through :api?

[quote="thinkpadder1, post:6, topic:832, full:true"]
Check this micro-benchmark tool released by a member of the community: Benchee and BencheeCSV 0.1.0 release - easy and extensible (micro) benchmarking
[/quote]
I hadn't seen this before - I'll give it a whirl. Is it possible to have the tool benchmark where Phoenix is spending all its time?
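As far as I know, Benchee times whole functions rather than showing where Phoenix spends its time inside a request (profilers like fprof/eprof in the Erlang runtime are the usual route for that). For a quick, dependency-free sanity check of the same idea, :timer.tc from the standard library gives raw microsecond timings; the workload below is just a placeholder:

```elixir
# Placeholder workload: encode a smallish map to a binary.
work = fn -> :erlang.term_to_binary(%{status: "ok", items: Enum.to_list(1..100)}) end

runs = 1_000
# :timer.tc returns {elapsed_microseconds, result}; Enum.each returns :ok.
{total_us, :ok} = :timer.tc(fn -> Enum.each(1..runs, fn _ -> work.() end) end)
IO.puts("avg #{Float.round(total_us / runs, 2)} µs per call")
```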
Hi all, just wanted to say thank you for all of your timely help on this. I have been working through the suggestions a bit at a time as time permits, and I will put up some code as soon as I can. So far, turning off all of the output to stdout and building an exrm release have provided some improvements - though there are still a few scenarios where MIX_ENV=prod outperforms even the exrm release.
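In case it helps anyone following along: the usual way to silence per-request log output in prod is through Logger configuration rather than removing plugs. A minimal sketch, assuming a standard config/prod.exs:

```elixir
# config/prod.exs — raise the log level so per-request debug/info lines are skipped
config :logger, level: :warn
```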
Iāll update as soon as I have some more information. Thanks again!
Any updates, Matt? I came to this thread from Sasa's blog post, and I remember a thread on Hacker News performing benchmarks at Rackspace. In your original post you had mentioned going back to Java, so I thought I'd link you to their tests: https://gist.github.com/omnibs/e5e72b31e6bd25caf39a
1. If you're running tests from outside AWS, you might see some delay because of the connection time to AWS.
2. Try an AWS instance with SSD volumes instead of EBS volumes.

If you can send the sample code, I can take a look and give you better suggestions.
Thanks for this @sudostack - I have been concentrating on getting some benchmarks for Go, Java and Elixir since I last checked in. Elixir is next up and I hope to get some more data this week. Thanks for the link to the tests - it would be awesome to see some updated numbers. It's impressive to see the throughput that Phoenix was getting back then; I've been getting <20k RPS on my MacBook.

[quote="subbu05, post:15, topic:832"]
If you running tests from outside AWS you might see some delay because of connecting time to AWS.
2. Try to have AWS instance with SSD volumes instead EBS volumes.
[/quote]
I'm going to try hitting it from either the same box or one within the same VPC. Good advice on #2. We were spinning these up in Docker containers, which people don't seem to do with Elixir. I haven't seen a good reason yet, but I am curious as to what overhead Docker introduces.
I have been doing similar benchmarks for my application, which is a JSON API, and I have found that Elixir/Phoenix is not the fastest thing out there (nor does it claim to be). Combined with the balance of productivity vs performance, in my opinion it beats Scala, Java, Go and the rest - though I am considering it coming from Ruby/Rails, so its similarity to Ruby was important too. That said, I have worked with Scala and Java before and cannot see frameworks such as Play being anywhere near as easy to develop with. They will probably get more requests per second out of a single box, but from what I have read, Elixir scales in a more predictable way when you add more hardware - not that I have got to that point yet.
Phoenix is fast, but only as fast as your pipeline allows it to be. The point of Phoenix is to be "near" the fastest while having beyond-intense reliability.
However, my main point for this post: *DO*NOT*TEST*FROM*WINDOWS*. We learned that the hard way at work. If the server or the testing client is on Windows, then (at least on Windows 10) it introduces nearly 200 ms of latency on initial connections while it "fills the TCP buffer". At least on Windows 10 we've tried everything to disable it: registry edits, setting the TCP connection to NONAGLE, and a hundred things in between…
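On the Nagle point: the BEAM side of a socket can at least be checked directly, even if Windows 10 ignores the corresponding client-side tweaks. A self-contained sketch over loopback that opens a listener and connects a client with nodelay: true (disabling Nagle's algorithm on that socket), then verifies the option took effect:

```elixir
# Listen on an ephemeral loopback port.
{:ok, listener} = :gen_tcp.listen(0, [:binary, active: false, reuseaddr: true])
{:ok, port} = :inet.port(listener)

# Connect with Nagle's algorithm disabled on the client side.
{:ok, client} = :gen_tcp.connect({127, 0, 0, 1}, port, [:binary, active: false, nodelay: true])
{:ok, server} = :gen_tcp.accept(listener)

# Round-trip a small payload to show the sockets work.
:ok = :gen_tcp.send(client, "ping")
{:ok, "ping"} = :gen_tcp.recv(server, 0)

# Confirm the option actually took effect on the client socket.
{:ok, [nodelay: true]} = :inet.getopts(client, [:nodelay])

Enum.each([client, server, listener], &:gen_tcp.close/1)
```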
I know this was looong ago, but I have to clarify this for myself: how is it possible that Phoenix, which is built on top of Plug, has (slightly) lower latency and considerably better consistency (a lower σ) than plain Plug?