I’d like to avoid doing that if possible but if it is the only option then fine.
Thank you
The library is great.
Probably the only thing I’m very skeptical about is retries being enabled by default.
The default is :safe_transient, safe meaning only GET or HEAD, transient meaning network errors, 408/500/503/etc. Trivial to turn off though!
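For reference, turning it off is a one-liner; a minimal sketch using the documented :retry option (the URL and path here are placeholders):

# retry: false disables the retry step entirely for this client
req = Req.new(base_url: "https://api.example.com", retry: false)
Req.get!(req, url: "/status")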
Hey everyone, Req v0.5 is out! I’ve written a release blog post for the occasion. Happy hacking!
Req v0.5.0 brings testing enhancements, error standardization, %Req.Response.Async{}, and more improvements and bug fixes.
In previous releases, we could only create test stubs (using Req.Test.stub/2), that is, fake HTTP servers with predefined behaviour. Let’s say we’re integrating with a third-party weather service; we might create a stub for it like below:
Req.Test.stub(MyApp.Weather, fn conn ->
  Req.Test.json(conn, %{"celsius" => 25.0})
end)
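Pointing a request at the stub is done through the :plug option, the same way as in the examples further below:

Req.get!(plug: {Req.Test, MyApp.Weather}).body
#=> %{"celsius" => 25.0}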
Anytime we hit this fake, we’ll get the same result. This works extremely well for simple integrations, but it’s not quite enough for more complicated ones. Imagine we’re using something like AWS S3 and we test uploading some data and reading it back again. While we could do this:
Req.Test.stub(MyApp.S3, fn
  conn when conn.method == "PUT" ->
    # ...

  conn when conn.method == "GET" ->
    # ...
end)
making the test just a little bit more thorough makes it MUCH more complicated. For example: the first GET request should return a 404, we then make a PUT, and now the GET should return a 200. We could solve this by adding some state to our test (e.g. an agent), but there is a simpler way: set request expectations using the new Req.Test.expect/3 function:
Req.Test.expect(MyApp.S3, fn conn when conn.method == "GET" ->
  Plug.Conn.send_resp(conn, 404, "not found")
end)

Req.Test.expect(MyApp.S3, fn conn when conn.method == "PUT" ->
  {:ok, body, conn} = Plug.Conn.read_body(conn)
  assert body == "foo"
  Plug.Conn.send_resp(conn, 200, "")
end)

Req.Test.expect(MyApp.S3, fn conn when conn.method == "GET" ->
  Plug.Conn.send_resp(conn, 200, "foo")
end)
The important part is that request expectations are meant to run in order (and fail if they don’t).
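Concretely, driving the fake in that order could look like the sketch below (assuming the three expectations above were just set):

# 1st expectation: GET before the upload returns 404
Req.get!(plug: {Req.Test, MyApp.S3}).status
#=> 404

# 2nd expectation: the PUT succeeds
Req.put!(plug: {Req.Test, MyApp.S3}, body: "foo").status
#=> 200

# 3rd expectation: GET now returns the uploaded data
Req.get!(plug: {Req.Test, MyApp.S3}).body
#=> "foo"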
In this release we’re also adding Req.Test.transport_error/2, a way to simulate network errors.
Here is another example using both of the new features. Let’s simulate a server that is having issues: on the first request it is not responding, and on the following two requests it returns an HTTP 500. Only on the fourth request does it return an HTTP 200. Req by default automatically retries transient errors (using the retry step), so it will make multiple requests, exercising all of our request expectations:
iex> Req.Test.expect(MyApp.S3, &Req.Test.transport_error(&1, :econnrefused))
iex> Req.Test.expect(MyApp.S3, 2, &Plug.Conn.send_resp(&1, 500, "internal server error"))
iex> Req.Test.expect(MyApp.S3, &Plug.Conn.send_resp(&1, 200, "ok"))
iex> Req.get!(plug: {Req.Test, MyApp.S3}).body
# 15:57:06.309 [error] retry: got exception, will retry in 1000ms, 3 attempts left
# 15:57:06.309 [error] ** (Req.TransportError) connection refused
# 15:57:07.310 [error] retry: got response with status 500, will retry in 2000ms, 2 attempts left
# 15:57:09.311 [error] retry: got response with status 500, will retry in 4000ms, 1 attempt left
"ok"
Finally, for parity with Mox, we add functions for setting the ownership mode:
Req.Test.set_req_test_from_context/1
Req.Test.set_req_test_to_private/1
Req.Test.set_req_test_to_shared/1
And for verifying expectations:
Req.Test.verify!/0
Req.Test.verify!/1
Req.Test.verify_on_exit!/1
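In an ExUnit module the setup mirrors the usual Mox pattern; a sketch (the module name is hypothetical):

defmodule MyApp.WeatherTest do
  use ExUnit.Case, async: true

  import Req.Test

  setup :set_req_test_from_context
  setup :verify_on_exit!

  # tests using Req.Test.stub/2 and Req.Test.expect/3 go here
end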
Thanks to Andrea Leopardi for driving the testing improvements.
In previous releases, when using the default adapter, Finch, Req could return these exceptions on network/protocol errors: Mint.TransportError, Mint.HTTPError, and Finch.Error. They have now been standardized into Req.TransportError and Req.HTTPError for a more consistent experience. In fact, this standardization was the prerequisite for adding Req.Test.transport_error/2!
Two additional exception structs have been added: Req.ArchiveError for zip/tar/etc errors in decode_body, and Req.DecompressError for gzip/br/zstd/etc errors in decompress_body. Additionally, decode_body now returns Jason.DecodeError instead of raising it.
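In practice this means error handling can pattern match on just a couple of exception structs; a minimal sketch using the non-bang API (req is any request built with Req.new/1):

case Req.get(req) do
  {:ok, resp} ->
    {:ok, resp.body}

  {:error, %Req.TransportError{reason: reason}} ->
    # network-level failures, e.g. :timeout, :econnrefused, :closed
    {:error, reason}

  {:error, %Req.HTTPError{} = error} ->
    # protocol-level failures
    {:error, error}
end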
%Req.Response.Async{}
In previous releases we added the ability to stream response body chunks into the current process mailbox using the into: :self option. When it is used, response.body is now set to a Req.Response.Async struct which implements the Enumerable protocol.
Here’s a quick example:
resp = Req.get!("http://httpbin.org/stream/2", into: :self)
resp.body
#=> #Req.Response.Async<...>
Enum.each(resp.body, &IO.puts/1)
# {"url": "http://httpbin.org/stream/2", ..., "id": 0}
# {"url": "http://httpbin.org/stream/2", ..., "id": 1}
Here is another example where we use Req to talk to two different servers. The first server produces some test data: the strings "foo", "bar", and "baz". The second one is an “echo” server: it simply responds with the request body it received. We then stream data from one server, transform it, and stream it to the other one:
Mix.install([
  {:req, "~> 0.5"},
  {:bandit, "~> 1.0"}
])

{:ok, _} =
  Bandit.start_link(
    scheme: :http,
    port: 4000,
    plug: fn conn, _ ->
      conn = Plug.Conn.send_chunked(conn, 200)
      {:ok, conn} = Plug.Conn.chunk(conn, "foo")
      {:ok, conn} = Plug.Conn.chunk(conn, "bar")
      {:ok, conn} = Plug.Conn.chunk(conn, "baz")
      conn
    end
  )

{:ok, _} =
  Bandit.start_link(
    scheme: :http,
    port: 4001,
    plug: fn conn, _ ->
      {:ok, body, conn} = Plug.Conn.read_body(conn)
      Plug.Conn.send_resp(conn, 200, body)
    end
  )
resp = Req.get!("http://localhost:4000", into: :self)
stream = resp.body |> Stream.with_index() |> Stream.map(fn {data, idx} -> "[#{idx}]#{data}" end)
Req.put!("http://localhost:4001", body: stream).body
#=> "[0]foo[1]bar[2]baz"
Req.Response.Async is an experimental feature which may change in the future.
The existing caveats to into: :self still apply, that is:
If the request is sent using HTTP/1, an extra process is spawned to consume messages from the underlying socket.
On both HTTP/1 and HTTP/2 the messages are sent to the current process as soon as they arrive, as a firehose with no back-pressure.
If you wish to maximize request rate or have more control over how messages are streamed, use into: fun or into: collectable instead.
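For comparison, here is a sketch of the into: fun form, which processes chunks as they arrive instead of enumerating them later (callback shape as documented for the :into option):

resp =
  Req.get!(
    "http://httpbin.org/stream/2",
    into: fn {:data, chunk}, {req, resp} ->
      # handle each chunk as it arrives; return {:halt, acc} to stop early
      IO.write(chunk)
      {:cont, {req, resp}}
    end
  )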
Req: Deprecate setting :headers to values other than string/integer/DateTime. This is to potentially allow special handling of atom values in the future.
Req: Add Req.run/2 and Req.run!/2.
Req: into: :self now sets response.body to Req.Response.Async, which implements the Enumerable protocol.
Req.Request: Deprecate setting :redact_auth. It now has no effect. Instead of allowing users to opt out of redaction, Req now gives an idea of what the secret was without revealing it fully:
iex> Req.new(auth: {:basic, "foobar:baz"})
%Req.Request{
  options: %{auth: {:basic, "foo*******"}},
  ...
}

iex> Req.new(headers: [authorization: "bearer foobarbaz"])
%Req.Request{
  headers: %{"authorization" => ["bearer foo******"]},
  ...
}
Req.Request: Deprecate halt/1 in favour of Req.Request.halt/2.
Req.Test: Add Req.Test.transport_error/2 to simulate transport errors.
Req.Test: Add Req.Test.expect/3.
Req.Test: Add functions for setting ownership mode: Req.Test.set_req_test_from_context/1, Req.Test.set_req_test_to_private/1, Req.Test.set_req_test_to_shared/1, and for verifying expectations: Req.Test.verify!/0, Req.Test.verify!/1, and Req.Test.verify_on_exit!/1.
Req.Test: Add Req.Test.html/2.
Req.Test: Add Req.Test.text/2.
Req.Test: Drop :nimble_ownership dependency.
Req.Test: Deprecate Req.Test.stub/1; the intended use case is to only work with plug stubs/mocks.
decode_body: Return Jason.DecodeError on JSON errors instead of raising it.
decode_body: Return Req.ArchiveError on tar/zip errors.
decompress_body: Return Req.DecompressError.
put_aws_sigv4: Drop :aws_signature dependency.
retry: (BREAKING CHANGE) Consider %Req.TransportError{reason: :closed | :econnrefused | :timeout} as transient. Previously, any exception with those reason values was considered as such.
retry: (BREAKING CHANGE) Consider %Req.HTTPError{protocol: :http2, reason: :unprocessed} as transient.
run_finch: (BREAKING CHANGE) Return Req.HTTPError instead of Mint.HTTPError.
run_finch: (BREAKING CHANGE) Return Req.TransportError instead of Mint.TransportError.
run_finch: Set inet6: true if the URL looks like an IPv6 address.
run_plug: Make public.
run_plug: Add support for simulating network issues using Req.Test.transport_error/2.
run_plug: Support passing 2-arity functions as plugs.
run_plug: Automatically fetch query params.
verify_checksum: Fix handling of compressed responses.
Hello,
Does Req support encryption/decryption of the payload according to the JWE standard?
I am trying to use it to connect to a highly secured service in a sandbox, and they mandate OAuth 1.0 using RSA keys besides full encryption of the payload. I did look at the documents and the forum, and it seems it is still not supported (I am new to Elixir so I could be wrong).
Nope, not supported.
Hey folks, just a quick update on Req. Recently I published an article, SDKs with Req: S3 - Dashbit Blog, and the recent Req focus was on all things S3. The biggest change in Req is the addition of the :form_multipart option:
iex> Mix.install([{:req, "~> 0.5.6"}])
iex> File.write!("c.txt", "ccc")
iex>
iex> resp =
...> Req.post!(
...> "https://httpbin.org/anything",
...> form_multipart: [
...> a: "aaa",
...> b: {"bbb", filename: "b.txt"},
...> c: File.stream!("c.txt", 2048)
...> ]
...> )
iex>
iex> resp.body |> Map.take(["form", "files"])
%{"files" => %{"b" => "bbb", "c" => "ccc"}, "form" => %{"a" => "aaa"}}```
ReqS3 also got updated: it now automatically decodes all XML responses, reads common AWS_* system env variables out of the box, and has improved support for S3-compatible services. Here’s an example using MinIO:
$ docker run -p 9000:9000 minio/minio server /data
Mix.install([
  {:req_s3, "~> 0.2.3"}
])

req =
  Req.new()
  |> ReqS3.attach()
  |> Req.merge(
    aws_sigv4: [
      access_key_id: "minioadmin",
      secret_access_key: "minioadmin"
    ],
    aws_endpoint_url_s3: "http://localhost:9000"
  )
# create bucket
%{status: status} = Req.put!(req, url: "s3://bucket1")
true = status in [200, 409]
# create object
%{status: 200} = Req.put!(req, url: "s3://bucket1/object1", body: "value1")
# list objects
%{status: 200, body: body} = Req.get!(req, url: "s3://bucket1")
dbg(body)
See the changelogs for more information.
Happy hacking!
I’ve noticed that Req returns iodata when the response is streamed/chunked from the server. The automatic decoding step doesn’t happen, although the content type (CSV in my case, text/csv) is set appropriately. Is this expected behavior, or could this be improved?
In this particular case I’m not using Req’s body response streaming feature. Getting the full response body in one go would be ideal. I see there is an :into option for the response body, but I don’t know how to set its value to collect the response into a binary (and it still wouldn’t decode automatically).
Never mind, I totally misinterpreted something I was seeing. What I saw was not iodata, but was the decoded csv after all! Even chunked responses get decoded perfectly. Sorry for the confusion!