Even though I use ReqS3.attach(), I still get the reply in XML

I tried to fetch the titles of objects using Req with the req_s3 plugin.

def new_req(options \\ []) when is_list(options) do
  access_key = System.fetch_env!("MINIO_ACCESS_KEY")
  secret_access_key = System.fetch_env!("MINIO_SECRET_KEY")
  endpoint_url = System.fetch_env!("MINIO_URL")
  Req.new(
    base_url: "#{endpoint_url}/justrunit",
    aws_sigv4: [service: :s3, access_key_id: access_key, secret_access_key: secret_access_key],
    retry: :transient
  )
  |> ReqS3.attach()
  |> Req.merge(options)
end

def list_objects(prefix) do
  req = new_req() # attach/1 is already called inside new_req/1
    
  case Req.get!(req, params: [prefix: prefix]) do
    %Req.Response{status: 200, body: body} -> {:ok, body}
    response -> {:error, response}
  end
end

And it returned XML instead of Elixir data structures.

I printed out the request steps after attaching ReqS3, and they seem fine.

web-1            | Step: {:put_user_agent, &Req.Steps.put_user_agent/1}
web-1            | Step: {:compressed, &Req.Steps.compressed/1}
web-1            | Step: {:encode_body, &Req.Steps.encode_body/1}
web-1            | Step: {:put_base_url, &Req.Steps.put_base_url/1}
web-1            | Step: {:auth, &Req.Steps.auth/1}
web-1            | Step: {:put_params, &Req.Steps.put_params/1}
web-1            | Step: {:put_path_params, &Req.Steps.put_path_params/1}
web-1            | Step: {:put_range, &Req.Steps.put_range/1}
web-1            | Step: {:cache, &Req.Steps.cache/1}
web-1            | Step: {:put_plug, &Req.Steps.put_plug/1}
web-1            | Step: {:compress_body, &Req.Steps.compress_body/1}
web-1            | Step: {:checksum, &Req.Steps.checksum/1}
web-1            | Step: {:s3_handle_url, #Function<2.113709267/1 in ReqS3.handle_s3_url>}
web-1            | Step: {:put_aws_sigv4, &Req.Steps.put_aws_sigv4/1}

Putting the XML into ReqS3.XML.parse_s3 works as it should, but I’d rather use ReqS3.attach().
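For reference, the manual workaround mentioned above might look roughly like this (a sketch only — I’m assuming ReqS3.XML.parse_s3/1 accepts the raw XML body that list_objects/1 currently returns):

```elixir
# Sketch of the manual decoding workaround (not the preferred fix):
# feed the raw ListObjects XML body into ReqS3.XML.parse_s3/1.
with {:ok, xml} <- list_objects("some/prefix") do
  {:ok, ReqS3.XML.parse_s3(xml)}
end
```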

My project uses MinIO as a local S3 store, so you can get it from GitHub and run:

git clone https://github.com/justrundotit/justrunit
docker-compose -f docker-compose.yml -f docker-compose.dev.yml up

Thanks for your time, all questions are welcome :slight_smile:

@Wojciech, just to confirm: your endpoint URL starts with “s3://”, correct?


As @jswanner said (thanks!), XML is decoded only when you use the s3:// scheme. I just released v0.2.2 that adds an :aws_endpoint_url_s3 option so you can use s3:// with a custom URL. This should now work:

def new_req(options \\ []) when is_list(options) do
  access_key = System.fetch_env!("MINIO_ACCESS_KEY")
  secret_access_key = System.fetch_env!("MINIO_SECRET_KEY")
  endpoint_url = System.fetch_env!("MINIO_URL")
  Req.new(
    base_url: "s3://justrunit",
    aws_sigv4: [access_key_id: access_key, secret_access_key: secret_access_key],
    aws_endpoint_url_s3: endpoint_url,
    retry: :transient
  )
  |> ReqS3.attach()
  |> Req.merge(options)
end

(I assume justrunit is the name of the bucket.)

Btw, if you set AWS_ENDPOINT_URL_S3, AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, the following would work too:

def new_req(options \\ []) when is_list(options) do
  Req.new(
    base_url: "s3://justrunit",
    retry: :transient
  )
  |> ReqS3.attach()
  |> Req.merge(options)
end

Well… I didn’t use the s3:// scheme, but now that I have changed it, I get this error when I try to upload a file:

web-1            | [warning] retry: got exception, will retry in 1000ms, 3 attempts left
web-1            | [warning] ** (Req.TransportError) connection refused
web-1            | [warning] retry: got exception, will retry in 2000ms, 2 attempts left
web-1            | [warning] ** (Req.TransportError) connection refused
web-1            | [warning] retry: got exception, will retry in 4000ms, 1 attempt left
web-1            | [warning] ** (Req.TransportError) connection refused

Even though after doing:

full_url = "#{req.options[:base_url]}/#{key}"
IO.puts("Debug: Request URL: #{full_url}")

I get this output:

Debug: Request URL: s3://justrunit/1/44r4r/tekst2

So it should have been fine. Here’s the code:

def new_req(options \\ []) when is_list(options) do
  Req.new(
    base_url: "s3://justrunit",
    retry: :transient
  )
  |> ReqS3.attach()
  |> Req.merge(options)
end

def put_object(key, content, opts \\ []) do
  put_if_exists = Keyword.get(opts, :put_if_exists, false)

  if put_if_exists && object_exists?(key) do
    {:error, :already_exists}
  else
    req = new_req()

    full_url = "#{req.options[:base_url]}/#{key}"
    IO.puts("Debug: Request URL: #{full_url}")

    case Req.put!(req, url: key, body: content) do
      %Req.Response{status: status} when status in 200..299 ->
        {:ok, :created}

      response ->
        IO.inspect(response)
        {:error, response}
    end
  end
end

I have also set the environment variables inside my Dockerfile to:

- AWS_ENDPOINT_URL_S3=s3://minio:9000
- AWS_ACCESS_KEY_ID=minioadmin
- AWS_SECRET_ACCESS_KEY=minioadmin

So that’s why some options have been removed from the code above.

Thank you guys for your replies!

When using the ReqS3 plugin, the s3:// scheme gets special treatment in request URLs, but here (in AWS_ENDPOINT_URL_S3) you should use http://. Apologies if this is confusing; I’ll try to improve the docs.

It seems you are getting econnrefused, so perhaps port 9000 is not bound, there are connectivity problems between the containers, etc.
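Concretely, the endpoint variable should point at MinIO’s HTTP port (a sketch, assuming the MinIO service is named minio as in the snippet above):

```yaml
# docker-compose.dev.yml (sketch): the endpoint must be an http:// URL;
# the s3:// scheme is only used in request URLs passed to Req.
- AWS_ENDPOINT_URL_S3=http://minio:9000
- AWS_ACCESS_KEY_ID=minioadmin
- AWS_SECRET_ACCESS_KEY=minioadmin
```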


Could you create a minimal reproduction, e.g. a command to run MinIO and a single .exs file that uses Mix.install?


I tried to do that, but to be honest there are so many steps that I don’t trust myself to write a minimal reproduction without getting at least one thing wrong.

You can do:

git clone https://github.com/justrundotit/justrunit
docker-compose -f docker-compose.yml -f docker-compose.dev.yml up

It takes about three minutes from start to finish.

Relevant files:

  • lib/justrunit/s3.ex for main logic
  • docker-compose.dev.yml for env variables

To test if it works, open http://localhost:4000 and click “new account” in the top right.

I’m sorry you have to do so much, but the alternative isn’t any better :disguised_face:

OK, there was another req_s3 bug which is now fixed on main. This now works:

# docker run -d -p 9000:9000 -p 9001:9001 minio/minio server /data --console-address ":9001"
Mix.install([
  {:req_s3, github: "wojtekmach/req_s3", ref: "81ea8f6"}
])

System.put_env("AWS_ENDPOINT_URL_S3", "http://localhost:9000")
System.put_env("AWS_ACCESS_KEY_ID", "minioadmin")
System.put_env("AWS_SECRET_ACCESS_KEY", "minioadmin")

req =
  Req.new(base_url: "s3://bucket1")
  |> ReqS3.attach()

# create bucket
%{status: status} = Req.put!(req)
true = status in [200, 409]

%{status: 200} = Req.put!(req, url: "/object1", body: "value1")

# list objects
%{status: 200, body: body} = Req.get!(req, url: "/")
dbg(body)

And here’s a version that doesn’t rely on global state:

# docker run -d -p 9000:9000 -p 9001:9001 minio/minio server /data --console-address :9001
Mix.install([
  {:req_s3, github: "wojtekmach/req_s3", ref: "81ea8f6"}
])

req =
  Req.new()
  |> ReqS3.attach()
  |> Req.merge(
    aws_sigv4: [
      access_key_id: "minioadmin",
      secret_access_key: "minioadmin"
    ],
    aws_endpoint_url_s3: "http://localhost:9000"
  )

# create bucket
%{status: status} = Req.put!(req, url: "s3://bucket1")
true = status in [200, 409]

# create object
%{status: 200} = Req.put!(req, url: "s3://bucket1/object1", body: "value1")

# list objects
%{status: 200, body: body} = Req.get!(req, url: "s3://bucket1")
dbg(body)

Thanks! That was the solution.
