Mint vs Finch vs Gun vs Tesla vs HTTPoison etc

Let me state up front that I’m the original author of Finch (although at this point I’m by no means the largest contributor to the project). So that’s my bias out of the way. I’ll attempt to give a rundown of the current clients and then explain why I felt like I should try to contribute a new one.

Hackney and HTTPoison (an Elixir wrapper around hackney) are probably the oldest and most popular HTTP clients for Elixir. Both are very battle-tested and have been used in many production deployments of Elixir. Hackney has a lot of features and supports a number of use cases. The main issue with hackney (and thus HTTPoison) is how it handles pools of connections. The pools have slowly gotten better over time, but hackney is attempting to support a large number of use cases, so its defaults are reasonable in general but worse in high-throughput scenarios. If you care about performance, you’ll need to create dedicated pools for each of your hosts, manage which pools you’re using for different calls, etc. This is all doable, but it’s a chore and error-prone. hackney and HTTPoison don’t support :telemetry, at least last I checked, which also added friction. We used hackney for a very long time at B/R and got a very long way. I’m very appreciative of the work that people have put into those libraries.
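To make the pool-management chore concrete, here’s a rough sketch of what per-host pools look like with hackney. The pool name, sizes, and URL are illustrative, not recommendations:

```elixir
# Start a dedicated pool for one downstream, typically during app boot.
# Name and limits here are illustrative.
:ok = :hackney_pool.start_pool(:payments_api, timeout: 15_000, max_connections: 50)

# Every call site must remember to route through the right pool:
{:ok, _status, _headers, ref} =
  :hackney.request(:get, "https://payments.example.com/health", [], "", pool: :payments_api)

{:ok, _body} = :hackney.body(ref)
```

Multiply that by every downstream service and every call site, and it’s easy to see how it becomes error-prone.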

Gun is less popular but for a long time claimed higher performance than hackney (using default configs). That said, it’s always been a chore to get working properly because it depends on a non-standard version of cowlib, which makes it incompatible with things like Plug. So getting it working in a standard Elixir project is typically difficult. It also didn’t truly support all of the features it claimed for a while (WebSockets comes to mind). Some of this may have changed, as I believe there was a new release recently.
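The usual workaround for the cowlib conflict was a dependency override in mix.exs. The version requirements below are illustrative, not a known-good combination:

```elixir
# In mix.exs: gun pins a cowlib version that can conflict with
# cowboy/plug, so an override is typically needed. Versions shown
# are examples only.
defp deps do
  [
    {:gun, "~> 1.3"},
    {:cowlib, "~> 2.9", override: true},
    {:plug_cowboy, "~> 2.5"}
  ]
end
```

`override: true` forces every dependency to accept your chosen cowlib, which resolves the conflict at the cost of running gun (or cowboy) against a cowlib it wasn’t tested with.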

Mint is a wrapper around Erlang’s :gen_tcp and :ssl libraries that allows you to make HTTP/1.1 and HTTP/2 requests. It does this in a way that mostly hides the underlying socket mechanics from you. This is useful for building libraries but makes it highly non-ergonomic for general use. You’ll need to build your own pooling mechanism if you want the performance benefits of long-lived connections with HTTP/1.1. HTTP/2 is a highly stateful protocol, so you’ll end up implementing a lot of logic and functionality on top of the connection to make it work correctly.
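To show what “non-ergonomic for general use” means, here’s roughly what a single HTTP/1.1 request looks like with bare Mint. The host is illustrative; note how much socket-level bookkeeping falls on the caller:

```elixir
# One GET request with bare Mint. Responses arrive as plain process
# messages that you must receive and feed back into the connection.
{:ok, conn} = Mint.HTTP.connect(:https, "example.com", 443)
{:ok, conn, request_ref} = Mint.HTTP.request(conn, "GET", "/", [], nil)

# Real code loops on receive until it sees {:done, ref}; a single
# receive is shown here for brevity.
receive do
  message ->
    {:ok, conn, responses} = Mint.HTTP.stream(conn, message)

    for response <- responses do
      case response do
        {:status, ^request_ref, status} -> IO.puts("status: #{status}")
        {:headers, ^request_ref, _headers} -> :ok
        {:data, ^request_ref, data} -> IO.puts("got #{byte_size(data)} bytes")
        {:done, ^request_ref} -> IO.puts("done")
      end
    end
end

Mint.HTTP.close(conn)
```

None of this is a flaw in Mint; it’s the intended design. But it’s why you want a pooling layer on top rather than calling Mint directly from application code.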

Mojito is a pooling solution built around Mint. It uses a novel pool of pools for specific downstreams, which tends to provide good utilization of all your connections. The main downside to Mojito is that each connection is held inside a single process. That means that to make a request, you check out a GenServer, send the request to it, and the GenServer receives the response and sends it back to the caller. Whenever data crosses a process boundary, the memory must be copied. So you’re doing excessive copying back and forth for every single request, which increases memory pressure, increases CPU pressure due to extra garbage collection and process rescheduling, and increases your overall latency, since the connection can’t be checked back into the pool until the copying has finished.

This brings me to Finch. At B/R we were making tens of thousands of HTTP requests per second and were running into errors from misuse of hackney pools. So I spent a bunch of my free time working on a new client that combined Mint with a new pool library José had recently published called NimblePool. Finch stole the idea of a pool of pools from Mojito. But instead of using processes to hold each connection, we hand the connection itself to the caller (which is possible thanks to Mint’s design). This reduces memory and CPU usage and decreases latency, since we’re no longer copying large binaries across process boundaries. At least, this is true for HTTP/1.1. HTTP/2 is a completely different implementation, and I suspect it’s about as “fast” as any other HTTP/2 implementation. Finch also added support for telemetry spans, which was something we needed at B/R and which has made its way into other clients as well. But Finch is brutally focused on being high throughput. This means we don’t support as many features as other clients, because we would have to weigh each one against our performance goals.
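For comparison with the hackney sketch above, here’s what Finch usage looks like. The instance name, pool sizes, and URL are illustrative:

```elixir
# Finch goes in your supervision tree; pools are configured per
# downstream up front, so call sites don't juggle pool names.
children = [
  {Finch,
   name: MyApp.Finch,
   pools: %{
     "https://api.example.com" => [size: 25, count: 4]
   }}
]

Supervisor.start_link(children, strategy: :one_for_one)

# Requests are built and then executed against the named instance:
{:ok, _response} =
  Finch.build(:get, "https://api.example.com/users")
  |> Finch.request(MyApp.Finch)
```

The pool selection happens automatically based on the request’s scheme, host, and port, which removes the “which pool am I supposed to use here?” class of errors.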

For a more pleasant-to-use API around Finch, you should check out wojtekmach/req. I think this is potentially the right way to handle the various features such as a REST verb API, automatic response decompression, retries, etc. that many people have come to expect from a robust client. I’m definitely watching it, chatting with Wojtek about it, and excited to see what he does with it.
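As a sketch of the difference in ergonomics (the URL is illustrative, and Req’s API was still evolving at the time):

```elixir
# Req layers the conveniences mentioned above — verb functions,
# decompression, retries, body decoding — on top of Finch:
response = Req.get!("https://api.example.com/users")

response.status  # e.g. 200
response.body    # already decoded, e.g. a map for JSON responses
```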

Something else that I didn’t see mentioned in your original post is Chatterbox. It’s an HTTP/2-only implementation for both clients and servers. If I were going to support only HTTP/2, that’s what I would use at the moment.

Tesla is a wrapper around all of these things. I would stick to the hackney adapter or the Finch adapter (because of my aforementioned biases). We attempted to use Tesla at B/R for a long time but eventually found that it still required more configuration than we wanted per project, and we moved to keathley/twirp-elixir (an Elixir implementation of the Twirp RPC framework, backed by Finch). But Tesla has a lot of useful middleware, and I think it’s a good library if you only have a few clients to manage.
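A typical Tesla client composed from middleware, using the Finch adapter. The module name, base URL, and middleware choices are illustrative:

```elixir
# A Tesla client: middleware handles base URL, JSON, and retries;
# the adapter delegates the actual HTTP work to a running Finch
# instance (assumed here to be named MyApp.Finch).
defmodule MyApp.ApiClient do
  use Tesla

  plug Tesla.Middleware.BaseUrl, "https://api.example.com"
  plug Tesla.Middleware.JSON
  plug Tesla.Middleware.Retry, max_retries: 3

  adapter Tesla.Adapter.Finch, name: MyApp.Finch

  def user(id), do: get("/users/#{id}")
end
```

This composability is Tesla’s strength; the per-project configuration it requires is the cost.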

I’ve never seen simplehttp or katipo so I can’t comment on either of those.
