Mint vs Finch vs Gun vs Tesla vs HTTPoison etc

I’ll go ahead and tell you that I use Tesla because it keeps my options open. It’s loosely based on Faraday from Ruby. You just `use Tesla`, and it has adapters for whatever other client you want it to use under the hood.

Tesla supports multiple HTTP adapters that do the actual HTTP request processing.

When using an adapter other than :httpc, remember to add it to the dependencies list in mix.exs.
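As a sketch, wiring in a non-default adapter takes two steps: declare the underlying client as a dependency, then point Tesla at its adapter. The hackney choice and version numbers below are just examples:

```elixir
# config/config.exs — pick the adapter globally (hackney is an example;
# you'd also need {:hackney, "~> 1.17"} next to {:tesla, "~> 1.4"} in mix.exs deps):
config :tesla, adapter: Tesla.Adapter.Hackney

# Or per client, when building a module-based client:
defmodule MyApp.GitHub do
  use Tesla

  adapter Tesla.Adapter.Hackney
end
```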


Thank you for your thorough answer.

Indeed I guess my question, which I have admittedly not articulated properly, was intended to go more along the lines of:

Which libraries address which use cases?

And in the process, that would perhaps help understand the fragmentation and also, the differences in implementation (GenServer/library/application). Otherwise I don’t see why so many people would be attacking the same problem.


Our internal service clients are all based on HTTPoison (probably because it was the most popular choice 4 years ago?)… but recently, inspired by the Goth redesign, I started refactoring them to allow swapping the http_client library “on demand, per project”, and I want to give Finch a shot in production.

Also impressed by this tweet


I second this. I have had fantastic success with Tesla. When I used Mojito, things got really sketchy, with services timing out when I pulled info.


I wanted to use Finch because I’ve seen good posts about its performance — and because @keathley is awesome :star_struck: — but sadly for my current work it might not be a good fit because it seems to maintain a pool of open connections.

And in my current projects I need to be very conservative with how many connections I open to commercial API servers, because we can get severely throttled and that would be a business disaster.

But I’d go with Tesla with the Gun adapter (I used the raw Gun Erlang library and loved it) and then maybe migrate to Finch one day (by tweaking its connection pool to never keep open connections, if that’s possible).
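For what it’s worth, Finch does let you cap the pool per host. Whether it can be made to keep no connections open at all I’m not sure, but a hypothetical child spec like this (name and sizing are illustrative, and the available options depend on your Finch version) at least limits how many connections a host ever sees:

```elixir
# Supervision tree entry; ThrottledFinch and the limits are made up for illustration.
{Finch,
 name: ThrottledFinch,
 pools: %{
   "https://api.example.com" => [
     size: 2,   # at most 2 connections to this host
     count: 1   # a single pool for it
   ]
 }}
```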

My $0.02


Let me state up front that I’m the original author of Finch (although at this point I’m by no means the largest contributor to the project). So that’s my bias out of the way. I’ll attempt to give a rundown of the current clients and then explain why I felt like I should try to contribute a new one.

Hackney and HTTPoison (which is an Elixir wrapper around hackney) are probably the oldest and most popular HTTP clients for Elixir. Both are very battle-tested and have been used in many production deployments of Elixir. Hackney has a lot of features and supports a number of use cases. The main issue with hackney (and thus HTTPoison) is how it handles pools of connections. The pools have slowly gotten better over time, but hackney is attempting to support a large number of use cases and has defaults that are better in general but worse in high-throughput scenarios. If you care about performance, you’ll need to create dedicated pools for each of your hosts, manage which pools you’re using for different calls, etc. This is all doable, but it’s a chore and error-prone. hackney and HTTPoison don’t support :telemetry, at least last I checked, which also added friction. We used hackney for a very long time at B/R and got a very long way. I’m very appreciative of the work that people have put into those libraries.
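For reference, the dedicated-pool dance looks roughly like this (the pool name and sizing are made up):

```elixir
# In your application's supervision tree — one hackney pool per upstream:
children = [
  :hackney_pool.child_spec(:payments_pool, timeout: 15_000, max_connections: 50)
]

# Then every call site has to remember to route through the right pool:
HTTPoison.get("https://payments.example.com/health", [],
  hackney: [pool: :payments_pool])
```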

Gun is less popular but claimed higher performance than hackney (using default configs) for a long time. That said, it’s always been a chore to get working properly because it depends on a non-standard version of cowlib that makes it incompatible with stuff like Plug. So getting it working in a standard Elixir project is typically difficult. It also didn’t truly support all of the features (websockets come to mind) that it claimed for a while. Some of this may have changed, as I believe there was a new release recently.

Mint is a wrapper around the gen_tcp and ssl libraries in Erlang that allows you to make HTTP/1.1 and HTTP/2 requests. It does this in a way that mostly hides the underlying socket mechanisms from you. This is useful for building libraries but makes it highly non-ergonomic for general use. You’ll need to build your own pooling mechanism if you want to get the performance benefits of long-lived connections with HTTP/1.1. HTTP/2 is a highly stateful protocol, so you’ll end up needing to implement a lot of logic and functionality on top of the connection to make it work correctly.
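To illustrate how low-level that is, a single GET with Mint means driving the socket messages yourself. A rough sketch based on Mint’s documented API:

```elixir
{:ok, conn} = Mint.HTTP.connect(:https, "example.com", 443)
{:ok, conn, request_ref} = Mint.HTTP.request(conn, "GET", "/", [], nil)

# The response arrives as raw socket messages that you feed back into Mint:
receive do
  message ->
    {:ok, _conn, responses} = Mint.HTTP.stream(conn, message)
    # responses is a list of entries like {:status, ^request_ref, 200},
    # {:headers, ^request_ref, [...]}, {:data, ^request_ref, chunk},
    # {:done, ^request_ref} — usually spread over several messages,
    # so a real client needs a receive loop, timeouts, pooling, etc.
end
```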

Mojito is a pooling solution written around Mint. It uses a novel pool of pools for specific downstreams, which tends to provide good utilization of all your connections. The main downside to Mojito is that each connection is stored inside a single process. That means that when you need to make a request, you need to get access to the gen_server, send the request to the gen_server, and the gen_server then receives the response and returns it back to the caller. Whenever you cross a process boundary, the memory must be copied. This means that you’re doing excessive copying to and fro for every single request, which increases memory pressure, increases CPU pressure due to excess garbage collection and process rescheduling, and increases your overall latency, since the connection can’t be checked back into the pool until the copying has finished.

This brings me to Finch. At B/R we were making tens of thousands of HTTP requests per second and were running into errors with misuse of hackney pools. So I spent a bunch of my free time working on a new client that combined Mint with a new pool that José had recently published called NimblePool. Finch stole the idea of a pool of pools from Mojito. But instead of using processes to hold each connection, we hand the connection itself to the caller (which is possible thanks to Mint’s design). This reduces memory and CPU usage and decreases latency, since we’re no longer copying large binaries across process boundaries. At least, this is true for HTTP/1.1. HTTP/2 is a completely different implementation, and I suspect it’s about as “fast” as any other HTTP/2 implementation. Finch also added support for telemetry spans, which was something we needed at B/R and has made its way into other clients as well. But Finch is brutally focused on being high throughput. This means we don’t support as many features as other clients do, because we would have to take them into consideration when dealing with performance goals.
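In practice that design makes basic Finch usage look like this (the names are illustrative):

```elixir
# Start a Finch instance under your supervision tree:
children = [
  {Finch, name: MyFinch}
]

# The caller builds the request and checks the connection out itself —
# no intermediate process holds it, so the response isn't copied between processes:
{:ok, response} =
  Finch.build(:get, "https://example.com")
  |> Finch.request(MyFinch)
```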

For a more pleasant-to-use API around Finch, you should check out GitHub - wojtekmach/req. I think this is potentially the right way to handle the various features that many people have come to expect from a robust client, such as a REST verb API, automatically decompressing responses, retries, etc. I’m definitely watching and chatting with Wojtek about it and excited to see what he does with it.
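For a taste, this is the kind of one-liner Req is aiming at (the API was still evolving at the time of writing, so treat this as a sketch):

```elixir
# GET with redirects, retries and JSON decoding handled for you:
Req.get!("https://api.github.com/repos/wojtekmach/req").body
```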

Something else that I didn’t see mentioned in your original post is chatterbox. This is an HTTP/2-only implementation for both clients and servers. If I was going to support only HTTP/2, that’s what I would use atm.

Tesla is a wrapper around all of these things. I would stick to the hackney adapter or the Finch adapter (because of my aforementioned biases). We attempted to use Tesla at B/R for a long time but eventually found that it still required more configuration than we wanted to do per project, and moved to GitHub - keathley/twirp-elixir: Elixir implementation of the twirp RPC framework, backed by finch. But Tesla has a lot of useful middleware, and I think it’s a good library if you only have a few clients to manage.

I’ve never seen simplehttp or katipo so I can’t comment on either of those.


Another thing to look at is whether a client handles TLS securely out of the box. If it doesn’t, then you are going to have to do the ssl options dance.

Mint-based clients, including Finch, Mojito, and Tesla with the appropriate adapter, should handle that just fine.

Hackney, and by extension HTTPoison and Tesla with the Hackney adapter, do check the server certificate by default. However, if you want to start tuning the TLS options, e.g. to disable old TLS versions, you are going to have to set all the necessary options yourself:

# This fails, as expected:
iex(1)> HTTPoison.get("")
{:error,
 %HTTPoison.Error{
   id: nil,
   reason: {:tls_alert,
    'TLS client: In state certify at ssl_handshake.erl:1764 generated CLIENT ALERT: Fatal - Certificate Expired\n'}}}
# But this succeeds while it shouldn't:
iex(2)> HTTPoison.get("", [], ssl: [versions: [:"tlsv1.2"]])

Other clients, such as httpc, gun and ibrowse, leave it completely up to you. For instance, Tesla with the default httpc adapter:

iex(3)> Tesla.get("")                    

I would highly recommend you test your choice of HTTP client against some endpoints, using the specific configuration (adapter, ssl options) you intend to use, so you can be confident your application gets the TLS protections you expect.


If you’re a library author, you might also consider using Tesla for outbound HTTP requests so as not to force an HTTP library upon the library user, and let him choose the adapter he wants. Beware, however: the default Tesla adapter is httpc, which is insecure (it makes TLS requests without verifying anything), so you have to document it correctly.

There are also some things that are not possible with Tesla. For example, I recently had to proxy some big log files through a Phoenix app. As far as I know, streaming the HTTP response directly to a %Plug.Conn{} is not possible with Tesla, so I used Mint instead. HTTPoison supports it as well.


Hi, library author here who chose to use an existing package based on Tesla. If I could go back and do it again, I would not do that. Here’s why:

  • First, because it’s abstractions on abstractions on abstractions.

  • Tesla was not originally designed with this level of customizability from outside the client (read: library) in mind, so it doesn’t cater to the end user in that regard anyway.

  • The design of Mint/Finch, which are awesome, is different enough to require changes to the abstractions themselves, which I suppose is really more of a refinement of my first point.

So I’m firmly in the Mint/Finch camp. Insofar as package authors want to make HTTP clients more extensible, great.

Finally, this is a diverse community of language and library users, and one small way we can demonstrate inclusivity is by not applying a gender to the community at large :slight_smile:


Library author also and I’m not sure I’m getting your point here :slight_smile:

Say you write a library and you need to refresh some keys periodically (for instance). Erlang’s httpc is out of the question, because TLS, so you need to pick another library. Using Tesla allows not forcing a specific library upon the user. This is basically what I’m doing with JWKSURIUpdater, for instance.

What would be your approach instead? Maybe you meant it for other use cases, when you need better performance or features not available in Tesla? Feel free to be more specific; I’m interested in what others do to deal with this problem.


And now you allow the user to shoot himself in the foot more easily :slight_smile:

I say this because secure HTTP in the BEAM is a very old implementation that HTTP libraries abstract away from you, but then it’s so easy for the end user to screw everything up when customizing it:

In my opinion I would stick with a specific library, and build a wrapper around it to ensure that the user doesn’t shoot himself in the foot when passing custom settings.

Not if you default to a secure adapter when building the Tesla client:

  defp tesla_adapter(), do: Application.get_env(:tesla, :adapter, Tesla.Adapter.Hackney)

But the end user of the library can override this, correct?


We use httpoison, but its use of hackney has continually bitten us badly. It is very temperamental and we’ve had random failures due to slightly out of the ordinary cert chains, poorly configured servers, on top of OTP related https bugs.

I’ve looked at Tesla, but most of its backends that are not hackney don’t even seem to verify the SSL cert?

We’ve actually started just wrapping the curl binary. It uses the system certificate store, it supports everything, it has retries and exponential backoff, location redirects, streaming the file to disk (which is important so as not to run out of memory when manipulating audio files), and a whole lot of other options, and it never, ever fails.
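A minimal sketch of what such a wrapper can look like using only System.cmd/3 from the standard library. The module name, options, and defaults here are illustrative, not the poster’s actual code or a published library:

```elixir
defmodule CurlClient do
  @moduledoc """
  Hypothetical sketch of shelling out to the system `curl` binary.
  """

  # Build the curl argument list: fail on HTTP errors, follow redirects,
  # use curl's built-in retry/backoff, optionally stream straight to disk.
  def args(url, opts \\ []) do
    base = ["--silent", "--show-error", "--fail", "--location",
            "--retry", Integer.to_string(Keyword.get(opts, :retries, 3))]

    output =
      case Keyword.get(opts, :output) do
        nil -> []
        path -> ["--output", path]
      end

    base ++ output ++ [url]
  end

  # Shell out; returns {:ok, body} or {:error, exit_status}.
  def get(url, opts \\ []) do
    case System.cmd("curl", args(url, opts), stderr_to_stdout: true) do
      {body, 0} -> {:ok, body}
      {_out, status} -> {:error, status}
    end
  end
end
```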

I just wish someone had made a library for this so that we didn’t have to write our own.


like GitHub - puzza007/katipo: HTTP2 client for Erlang based on libcurl and libevent or a different one?


Are you planning to share it?

Yes, at least for the default :httpc backend.

I opened it in 2019, but it’s still not solved, and I even got downvoted because of a recent comment I made.

If you’re going to question how people spend their free time, you should at least offer to mow their lawn. I mean, it’s not surprising you’re getting downvoted. They’ve offered you code and insight into how they approach a problem - something that is much more valuable than the actual code, btw. In return, you offered them an opinion on said code. When they didn’t respond to your opinion in the way you wanted, you criticized them. You’re being downvoted because you presumed that your opinion was worth more than that maintainer’s time. If you don’t like Tesla it’s really simple: don’t use Tesla. Or fork it and change it to your liking. But don’t assume that because you use a thing it gives you any right to tell people what they do with their projects or their time.


I recently used Finch for a small project wrapping the Hue API. So far I am very happy with the results!

I’ve used HTTPoison, a wrapper around Hackney, for a couple of years and found myself wishing for something implemented entirely in Elixir. When onboarding engineers new to the Elixir ecosystem, I received feedback that the Erlang-isms of Hackney presented an additional barrier to entry for engineers already busy learning Elixir itself. I was excited when Mint was announced, but realized it was a bit low-level for most scenarios, hence my interest in Finch.

@keathley Thank you for your work on Finch! I’m curious to see how Req progresses :eyes: Adding the ability to perform one-line requests like HTTPoison is great for developers new to the ecosystem.


Interesting! I saw something similar on a Ruby project that surprised us when we moved the project to Alpine Linux images and forgot to install curl before deploying :laughing:

Did you use Elixir’s Port module to handle driving curl itself, or something else?


I think the best way to pick the HTTP client is to start with the requirements. Here’s what my app needs:

  • Security,
  • Testability,
  • Out-of-the-box support for various formats,
  • Customizability and extensibility,
  • Retries,
  • Logging.

You might also want to care about:

  • Telemetry support,
  • High performance,
  • Out-of-the-box authentication support (like Basic Auth).

I guess you can get all or most of those features with any client, but Tesla gives you a lot of them for free. This is mostly because it was explicitly designed around the idea of swappable adapters and pluggable middleware.

For example, the testing story is great. You could use the provided mock adapter, but you can do even better than that: Tesla defines a simple behaviour for adapters, which works great with Mox. Define a mock adapter, plug it into your config/test.exs, and you can test all the way down to bare-bones HTTP calls.
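Concretely, that Mox setup is only a few lines of test configuration (the module names here are mine, not Tesla’s):

```elixir
# test/test_helper.exs — a mock that implements the Tesla.Adapter behaviour:
Mox.defmock(MyApp.MockAdapter, for: Tesla.Adapter)

# config/test.exs — make it the adapter under test:
config :tesla, adapter: MyApp.MockAdapter

# In a test — stub the adapter's call/2 callback:
Mox.expect(MyApp.MockAdapter, :call, fn env, _opts ->
  {:ok, %Tesla.Env{env | status: 200, body: ~s({"ok": true})}}
end)
```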

Customizing Tesla is really important for my use case: I need to log the requests, but they can contain sensitive data in query strings, so I just copy-pasted the provided logging middleware, tweaked it, and plugged it into the client.
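The core of such a tweak can be a small, pure helper that scrubs query strings before they hit the logs. This is a sketch of my own making (parameter names and module are hypothetical), using only the standard-library URI module:

```elixir
defmodule QueryRedactor do
  @moduledoc """
  Hypothetical core of a logging middleware: redact sensitive
  query parameters from a URL before logging it.
  """

  # Illustrative list of parameter names to scrub.
  @sensitive ~w(token api_key password)

  def redact(url) do
    uri = URI.parse(url)

    query =
      (uri.query || "")
      |> URI.decode_query()
      |> Enum.map(fn {k, v} ->
        {k, if(k in @sensitive, do: "[REDACTED]", else: v)}
      end)
      |> URI.encode_query()

    URI.to_string(%{uri | query: if(query == "", do: nil, else: query)})
  end
end
```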

I also built an adapter that dumps and loads requests from disk, which allowed me to build test fixtures that enabled fast, repeatable unit testing and fearless refactoring.

I’m keeping an eye on @wojtekmach’s req and I hope it will tick all the boxes.

What I’d love to see though is a unified, agreed-upon interface for HTTP requests/responses which would make swapping HTTP clients easier.