Erqwest - A fast and correct HTTP client based on reqwest

Erqwest is an HTTP client implemented as a NIF-based wrapper around reqwest, using rustler. The aim is to deliver the best possible performance and correctness. It has reached the stage where it could do with some testing, so if you would like to help with that, please try it out :slight_smile:.

It’s written in Erlang, but the API is simple and should be ergonomic from Elixir too:

iex(1)> :erqwest.start_client(:default)
:ok
iex(2)> {:ok, %{status: 200, body: body}} = :erqwest.get(:default, "https://httpbin.org/get")
{:ok,
 %{
   body: "{\n  \"args\": {}, \n  \"headers\": {\n    \"Accept\": \"*/*\", \n    \"Host\": \"httpbin.org\", \n    \"X-Amzn-Trace-Id\": \"Root=1-6108502f-5ff7a84e1e0c9843706ebc67\"\n  }, \n  \"origin\": \"85.230.179.80\", \n  \"url\": \"https://httpbin.org/get\"\n}\n",
   headers: [
     {"date", "Mon, 02 Aug 2021 20:06:07 GMT"},
     {"content-type", "application/json"},
     {"content-length", "221"},
     {"connection", "keep-alive"},
     {"server", "gunicorn/19.9.0"},
     {"access-control-allow-origin", "*"},
     {"access-control-allow-credentials", "true"}
   ],
   status: 200
 }}
iex(3)> :erqwest.req(:default, %{method: :put, url: "https://httpbin.org/delay/1", timeout: 100})
{:error,
 %{
   code: :timeout,
   reason: "error sending request for url (https://httpbin.org/delay/1): operation timed out"
 }}

It exposes most of the features commonly used when calling APIs. If there’s any reqwest feature that’s missing, feel free to open an issue or PR!

15 Likes

Awesome! I loved using reqwest when I worked with Rust. It’s a super good and solid library. Kudos for the effort.

1 Like

Really nice project and very convenient to use. I like the decision to take e.g. headers as an option rather than an argument, and to return a response map.

I was curious to see how erqwest is using rustler in an Erlang project. It turns out it does not use rustler, the Mix project; it just uses the rustler crates. This is very neat, keeping dependencies to the absolute minimum: the only things this library needs are Rust and cargo.

4 Likes

Thanks! The interface is heavily inspired by katipo :slight_smile:

1 Like

I’ve just released 0.1.0. It contains quite a few new features, the most notable being:

  • Making sure we’re a well-behaved NIF and always return in less than 1 ms. Previously this was not guaranteed to be the case when the request body was very large.
  • [Breaking change] Splitting the sync and async APIs for clarity. Everything in erqwest is now synchronous and the async interface lives in erqwest_async.
  • Streaming support (of request and response bodies). Getting the API right was a bit tricky, but I think it worked out well. I wanted to ensure that the sync API is always “safe”, i.e. that it is impossible to end up in a state where a function call hangs indefinitely waiting for a message, or where you end up with stray messages in your inbox, even if you use the API incorrectly. I also wanted to ensure that the ergonomics of non-streaming use are not compromised.
  • [Breaking change] As a result of the above the message format for the async API has changed.
  • [Breaking change] The tokio runtime is now monitored by an erlang supervisor. This means you need to start the application (:application.ensure_started(:erqwest)) before using it.
  • Optional cookies and gzip support. These are off by default since the extra dependencies increase the rust compile times. You set an env var at compile time to enable them (I’m not sure if there’s a better way to handle optional dependencies with rebar3, if you have ideas please let me know!).

See the readme for examples of how to use the new streaming API, and grab the new version from hex, where you can also find the docs :slight_smile:

6 Likes

That’s mighty impressive! Thank you!

Can you please show us where exactly in the code this is addressed? I’m very interested in that particular aspect, because I’ll be able to go back to my Rustler-based library at one point and I want it to be well-behaved as well.

2 Likes

Sure, so the first thing I did was time the NIF calls, just using timer:tc/1, which I think is good enough here (if you run it a few times) because we’re not benchmarking; we just want a rough idea. You should look at the shortest times (assuming a CPU-bound workload), since longer times are probably caused by the OS scheduler context switching. What I saw was that erqwest_nif:make_client/2 is consistently very slow (~30 ms). I checked what it was doing with perf, and it was spending its time in openssl, which we can assume means it’s CPU bound, so I marked it as a CPU-bound dirty NIF.

For erqwest_nif:req/1, times were generally well under 1 ms, which makes sense because all it’s doing is queueing something to be processed by another thread. I went looking for any edge cases which might cause it to consistently take more than 1 ms, and found that it did when the request body was large. This is because it copies the body into a Vec<u8>. I considered marking this NIF as dirty too; however, some benchmarking showed that this caused a ~30% slowdown in the (probably more common) case where the request body is small, so I looked for another solution. It turns out that copying a binary is very cheap, since binaries over 64 bytes are reference counted. So I changed the code to just copy the binary/iodata to an OwnedEnv, and decode it on a tokio thread where we have no execution time restrictions.
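
The "return fast, do the work elsewhere" shape described above can be sketched in plain Rust. This is not erqwest's actual code: the tokio runtime and the message sent back to the calling process are stood in for by a std thread and a channel, and all names are illustrative.

```rust
use std::sync::mpsc;
use std::thread;

// A queued job: the "request body" plus a channel to send the result back,
// standing in for the caller's pid in the real NIF.
struct Job {
    body: Vec<u8>,
    reply_to: mpsc::Sender<usize>,
}

// Spawn a worker playing the role of the tokio runtime: it does the
// potentially slow work (here, just measuring the body) off the scheduler.
fn start_worker() -> mpsc::Sender<Job> {
    let (tx, rx) = mpsc::channel::<Job>();
    thread::spawn(move || {
        for job in rx {
            // Slow work happens here, with no 1 ms budget to respect.
            let len = job.body.len();
            let _ = job.reply_to.send(len);
        }
    });
    tx
}

// The "NIF call": all it does is enqueue the job, so it returns quickly
// regardless of how long the actual work takes.
fn req(queue: &mpsc::Sender<Job>, body: Vec<u8>) -> mpsc::Receiver<usize> {
    let (reply_tx, reply_rx) = mpsc::channel();
    queue.send(Job { body, reply_to: reply_tx }).unwrap();
    reply_rx
}

fn main() {
    let queue = start_worker();
    let reply = req(&queue, vec![0u8; 1_000_000]);
    // The caller later receives the result as a message.
    println!("{}", reply.recv().unwrap()); // prints 1000000
}
```

The key property is that `req` does a bounded amount of work no matter how large the body is, which is what keeps the real NIF call under its time budget.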

(I’m getting told my post has too many links so I’m splitting it up)

5 Likes

The last thing I did was add a call to enif_consume_timeslice to give the BEAM a rough idea of how much CPU time we have consumed. The docs for that function are a bit confusing, but you can get a better idea of what it does by looking at the source and also the way other functions call BUMP_REDS, for example enif_send. Hope that helps!
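
As a rough illustration (this helper is made up for this sketch, not code from erqwest or rustler): the argument to enif_consume_timeslice is a percentage (1 to 100) of a full timeslice, and a full timeslice corresponds to roughly 1 ms of work, so mapping measured elapsed time to that argument might look like:

```rust
use std::time::Duration;

// Map elapsed CPU time to the percentage argument expected by
// enif_consume_timeslice, assuming a full timeslice is roughly 1 ms.
fn timeslice_percent(elapsed: Duration) -> i32 {
    let pct = (elapsed.as_micros() / 10) as i32; // 1000 us == 100 %
    pct.clamp(1, 100) // the valid range is 1..=100
}

fn main() {
    println!("{}", timeslice_percent(Duration::from_micros(250))); // 25
    println!("{}", timeslice_percent(Duration::from_millis(5)));   // 100
}
```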

3 Likes

There are other strategies you can use too, depending on the nature of your NIF. jiffy is a good example of how to use enif_schedule_nif (which rustler should hopefully support soon); that seems to be the way to go for purely computational NIFs.

4 Likes

Gems of wisdom here, man, thanks a bunch. :heart:

This is something I was never able to conclusively decide on. I’ll get back to my sqlite3 library in the next few months, and I couldn’t figure out whether I should mark the functions that can take a longer time (so any SQL expressions, really) as DirtyCpu or DirtyIo (since we can’t mark a function as both). Do you think I should mark them DirtyCpu by the mere virtue of them possibly returning after more than 1 ms? (Even if they are also expected to do a lot of I/O?)

I’ve never used OwnedEnv before; can you give me a quick rundown of what exactly it does?

As for the binary trick, that’s super neat. I’ve known about it for a long time but I don’t remember ever employing it. Good job!

I’ve wanted to use this one for a long time now, but the part that bothers me is that it informs the runtime after the function has already executed. I’m still looking for a way to make sure the NIF always returns within 1 ms, and if that means the function has to be called several times in a loop until it does its job, then that’s what my wrapping Elixir code will do. Actually, on my side it might be easier, because SQLite3 has facilities for that: even if a request takes 1000 ms, you can instruct SQLite3 to periodically yield to the caller.

…actually, it looks like enif_schedule_nif does exactly that? :open_mouth: If so, nice!
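
The control flow of that "do a bounded chunk, then reschedule yourself with the remaining state" idea can be sketched in plain Rust. This is only an illustration of the pattern; the names, the chunk size, and the toy workload (summing a vector) are made up, and in a real NIF the driver loop below would be the scheduler re-entering via enif_schedule_nif.

```rust
// One "scheduled call" either finishes or hands its state back so the
// next call can continue where this one left off.
enum Step {
    Yield { data: Vec<u64>, index: usize, sum: u64 },
    Done(u64),
}

// Budget per call, chosen so one call stays well under the time limit.
const CHUNK: usize = 1000;

fn sum_step(data: Vec<u64>, index: usize, sum: u64) -> Step {
    let end = (index + CHUNK).min(data.len());
    let sum = sum + data[index..end].iter().sum::<u64>();
    if end == data.len() {
        Step::Done(sum)
    } else {
        Step::Yield { data, index: end, sum }
    }
}

// The driver loop plays the role of the scheduler repeatedly
// re-entering the NIF until it reports completion.
fn run(data: Vec<u64>) -> u64 {
    let mut step = sum_step(data, 0, 0);
    loop {
        match step {
            Step::Done(sum) => return sum,
            Step::Yield { data, index, sum } => step = sum_step(data, index, sum),
        }
    }
}

fn main() {
    let data: Vec<u64> = (1..=10_000).collect();
    println!("{}", run(data)); // 50005000
}
```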


Very grateful for your work and insightful comments. I haven’t looked at Rustler (and my library) in a long time and I appreciate the new info and your take on the problem. Expect me to steal some constructs from your library! :003:

Thank you. :bowing_man:

1 Like

Hmm, this is a tricky one! The erl_nif docs mention that it is possible to switch between the two using enif_schedule_nif, but since you probably don’t know what the 3rd party code is doing, that probably won’t work. What about doing the query on another thread and returning the result as a message? Depending on how many connections you expect the user to want to have open, you could spawn a thread per connection. From a quick look here, it sounds like sqlite3 only allows a single thread to operate on a connection at any one time anyway, so performance might not even be worse. A port could also be a good solution?
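
The thread-per-connection idea could be sketched like this in plain Rust. The "connection" here is just a mock vector of rows, and the command enum and all names are invented for illustration; the point is that the handle is owned by exactly one thread, which matches SQLite3's one-thread-at-a-time requirement.

```rust
use std::sync::mpsc;
use std::thread;

// Commands callers can send to the connection's worker thread.
enum Cmd {
    Query { sql: String, reply_to: mpsc::Sender<Vec<String>> },
    Close,
}

// "Open" a connection: spawn the single thread that owns the handle
// and return the channel used to talk to it.
fn open_connection() -> mpsc::Sender<Cmd> {
    let (tx, rx) = mpsc::channel::<Cmd>();
    thread::spawn(move || {
        // Stand-in for the real sqlite3 connection handle.
        let rows = vec!["alice".to_string(), "bob".to_string()];
        for cmd in rx {
            match cmd {
                Cmd::Query { sql, reply_to } => {
                    // A real implementation would run `sql` on the handle;
                    // here we just return the mock rows named in the query.
                    let result = rows
                        .iter()
                        .filter(|r| sql.contains(r.as_str()))
                        .cloned()
                        .collect();
                    let _ = reply_to.send(result);
                }
                Cmd::Close => break,
            }
        }
    });
    tx
}

fn main() {
    let conn = open_connection();
    let (tx, rx) = mpsc::channel();
    conn.send(Cmd::Query { sql: "select * where name = 'alice'".into(), reply_to: tx }).unwrap();
    println!("{:?}", rx.recv().unwrap()); // ["alice"]
    let _ = conn.send(Cmd::Close);
}
```

Since results come back over a channel, the caller can block on them or poll, which maps naturally onto a NIF that returns immediately and delivers the result as an Erlang message.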

It’s a process-independent environment, essentially a way to store Erlang terms between NIF calls.

I don’t think it matters when you inform the runtime, since it cannot context switch until your NIF returns anyway. But maybe what you mean is that you don’t want to discover you’ve consumed more than 1 ms worth of CPU time only once you get to calling enif_consume_timeslice? So yeah, as you mention, the hard part is absolutely finding a way to ensure you never consume more than 1 ms.

1 Like

Super glaring omission on my side, but then again I’ve probably worked no more than 12–15 h on my library in total, including research. Still, an embarrassing omission for sure. :man_facepalming:

This looks like a perfect place to store a prepared-statement cache, for example. Nice.

Yes, exactly. I want 1 ms to be the absolute maximum total runtime of all SQL execution functions, no matter what. (I don’t mind opening/closing the DB or setting some options taking more than 1 ms, because those are 0.1% of the function calls.) To that end… :point_down:

No problem with that at all, indeed. I was just over-fixating on having a synchronous API that might do all sorts of dirty trickery underneath so as to never clog the BEAM VM. Response times of several seconds when querying big public SQLite3 DBs aren’t unheard of: I’ve queried various census databases (1 TB+), and some simple JOINs with 2–3 filters took at least 4 s. I don’t think any user of the library would mind that at all – but they would mind if the BEAM VM’s guarantees were broken because of it.

So I’ll closely follow the implementation of enif_schedule_nif in Rustler or, depending on timing, might just reach into the guts of SQLite3 and make it periodically yield to internal Rust functions of mine which track progress and assemble the response.

So again, I over-fixated on having a sync API, but adding an async API has been super tempting from the very beginning! :smiley:


I see that the Rustler PR hasn’t been active lately, but right now I don’t have all the necessary Rust qualifications to contribute (sigh).

1 Like

Thanks for the library! I was dealing with some memory issues with HTTPoison, and erqwest eliminated the problem. There seems to be no increase in memory when performing GET requests in my Phoenix application. Interestingly enough, I compared it with Finch and found them both to be the same speed, but Finch consumed more memory, so I ended up using erqwest.

One question: do you think it’s okay to make one default erqwest client to use amongst all my LiveView processes?

Glad to hear that it’s performing well! Yes, it’s best to share a single client; it’s safe to use from multiple processes simultaneously. Internally a client holds a reqwest Client, and the docs there recommend creating one and reusing it.

1 Like

Awesome man, thanks!