Today I did a search on Google for “2017 web frameworks” and one of the top results was this.
As of writing this, Phoenix was #3 on the list for top backend web frameworks.
I was happy to see this because I personally have been thrilled working with Phoenix and think it deserves to be at the top of a list of web frameworks.
The site seems to be driven directly by user recommendations, via some algorithm based on “up votes” and “down votes”. Phoenix currently has the best ratio of up votes to down votes, which is why it’s ranked so high.
Although a few extra votes wouldn’t hurt anything.
EDIT: As of today (July 26th) Phoenix is in a solid #1 position thanks to all of the support from everyone on this forum. Time to share the link with the buddy or boss you’ve been wanting to introduce Elixir / Phoenix to but worried would write it off as not being mainstream enough.
I do not understand how they are ranking them. When I looked just now, Phoenix was ranked #1 with 31 thumbs-up votes, and Laravel was #2 with 92 thumbs-ups. Even after subtracting the thumbs-downs, Laravel would still have way more votes than Phoenix.
I’m having a hard time reverse engineering their formula; so far, every way I calculate it, I can’t get the order to match what they show on the page. Maybe a more talented engineer on this forum can help us out.
All I can think is that they seem to weigh down-votes (“Not Recommended”) pretty heavily, and that the total number of up-votes (“Recommended”) isn’t the primary driver.
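For what it’s worth, one well-known ranking formula that behaves exactly like this is the lower bound of the Wilson score confidence interval, which some sites use to sort by vote quality rather than raw vote count. I have no idea whether this site actually uses it, and the down-vote counts below are hypothetical (the page doesn’t show Laravel’s thumbs-downs), but it illustrates how a small sample with no down-votes can outrank a much larger sample with a worse ratio:

```python
import math

def wilson_lower_bound(up, down, z=1.96):
    """Lower bound of the Wilson score confidence interval for the
    fraction of positive votes (z=1.96 corresponds to ~95% confidence).
    Few total votes -> wide interval -> lower bound drops."""
    n = up + down
    if n == 0:
        return 0.0
    p = up / n
    centre = p + z * z / (2 * n)
    margin = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (centre - margin) / (1 + z * z / n)

# Up-votes from the thread; down-vote counts are made up for illustration.
frameworks = {"Phoenix": (31, 0), "Laravel": (92, 25)}
ranked = sorted(frameworks,
                key=lambda f: wilson_lower_bound(*frameworks[f]),
                reverse=True)
print(ranked)  # Phoenix ranks first despite far fewer total votes
```

With these numbers Phoenix scores about 0.89 and Laravel about 0.70, so a handful of “Not Recommended” votes really does drag a framework down hard under this kind of formula.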
Based on the “About” page for the site (here) I can understand how a formula like that could do a better job getting good products to the top of the list, even if they’re not the most widely used products.
Personally I like the approach because it promotes products that are excellent that people are having great experiences with, and it gives them visibility (which is why I was excited to see Phoenix near the top of the list).
I have not looked at the data, but it is possible they are using some kind of time-weighted algorithm. Essentially, if they get a lot of up/down votes for a particular framework within some arbitrarily small window, those votes count for little to nothing in the weighting.
Another possibility is that they could be looking at the HTTP Referer header. If they see a bunch of people coming from a single site (like this one), and a large number of them are up- or down-voting a particular framework (presumably Phoenix in this case), those votes will count for little or nothing.
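A referrer-based discount like that could be as simple as capping how many votes any single referring host may contribute. Again, this is just a sketch of the idea; the cap of 3 and the host names are made up:

```python
from collections import Counter

def referrer_discounted_score(referrers, cap=3):
    """referrers: one referring host per vote. Count at most `cap`
    votes per host, so a flood of votes arriving via one forum link
    counts for little. (cap=3 is an assumption for illustration.)"""
    counts = Counter(referrers)
    return sum(min(n, cap) for n in counts.values())

# 20 votes all referred from one forum vs. 8 votes from 8 sites:
burst = ["elixirforum.com"] * 20
organic = [f"site{i}.example" for i in range(8)]
print(referrer_discounted_score(burst))    # capped at 3
print(referrer_discounted_score(organic))  # all 8 count
```

That would neatly explain why a wave of traffic from a single forum post might not move a framework’s rank as much as you’d expect from the raw vote totals.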
Great thoughts! I wonder if they are using some sort of time-weighted algorithm or the HTTP Referer header as part of the calculation. Since Phoenix went from #3 to #1 overnight, though, I wonder if it’s actually working in favor of bursts of activity in a small period of time.
Like maybe, because the last 10 or so recommendations have all been for Phoenix, it’s weighing those more heavily than the older recommendations given for the other frameworks.
I just looked at the site again, and it looks like Phoenix is ahead of Laravel now even after subtracting the thumbs-downs. I still don’t understand the algorithm they’re using to figure out the rankings, but it’s starting to look a lot more natural for Phoenix to be in the #1 spot.