Ex_aws: duplicates and timeouts


I’m using ex_aws with Firehose, and sometimes duplicate records get inserted. I’ve been debugging it and have narrowed it down to either ex_aws or Kinesis Firehose. I enabled debugging, and the only error I see is `ExAws: HTTP ERROR: :closed`. I’ve already extended the recv_timeout to 300_000 and I still get these errors =( It’s not easy to reproduce, but there is an outlier somewhere. =( The file size is pretty low, x KB.

Is there some option I’m just missing? I’m using :ex_aws, "1.1.5".


ExAws automatically retries if it does not get a successful response from AWS. It looks like the HTTP connection is getting interrupted periodically, which leads to the :closed error. Despite the error, Kinesis did in fact receive the record, so when ExAws retries, the record is placed twice.

Likely your best bet is to tag each record you place with some kind of uniquely identifying piece of information, so that you can simply de-duplicate on the other end.
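For example, one way to do this (a minimal sketch; the `RecordDedup` module, the `"id"` field name, and the ids below are my own illustrations, not part of ExAws or Firehose) is to stamp each record with a unique id before putting it, and drop repeats by that id wherever the records ultimately land:

```elixir
# Hypothetical sketch, not ExAws API: tag records with a unique id on the
# way in, and de-duplicate by that id on the consuming side.
defmodule RecordDedup do
  # Stamp a record map with a caller-supplied unique id.
  def tag(record, id), do: Map.put(record, "id", id)

  # Keep only the first occurrence of each id, preserving order.
  def dedupe(records), do: Enum.uniq_by(records, & &1["id"])
end

records = [
  RecordDedup.tag(%{"ticker_symbol" => "hello"}, "abc-1"),
  # A retry after a :closed error re-sends the same tagged record:
  RecordDedup.tag(%{"ticker_symbol" => "hello"}, "abc-1"),
  RecordDedup.tag(%{"ticker_symbol" => "world"}, "abc-2")
]

IO.inspect(length(RecordDedup.dedupe(records))) # 2
```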


thanks! will check it out. I was reading the PutRecord API docs for Firehose and I didn’t seem to find a way to tag records. =(

Well, you can always include the tag inside the record itself, right?

I thought there was a tag key or tag action to mark it, because I do have a unique key in the JSON I’m passing. The next time it retries, the same blob will be generated. For de-duplicating, did you mean I have to set up a data transformation for it in Firehose? Thanks for the help! =)

%{data: "{\"ticker_symbol\": \"hello\", \"sector\": \"world\", \"id\": 1}"}

[debug] Request BODY: "{"Record":{"Data":"eyJicFfdada9jcmVhdGVkX2F0IjogIjIwMTctMTItMjEgMTU6MTQ6MjAuODMwMDQ0IiwiYnBfdXNlcl9pZCI6IDI4NzIzOTcsICJicF9jb2lucyI6IDAsImJwX3RpdGxlIjogIkRyYXdpbmdzIFBvdWNoIiwgImJwX251bV9kcmF3cyI6IDMsICJicF9zZXRfaWQiOiAxOTExLCAiYnBfaWQiOiAzOTAwOTQ2MTN9"},…}"
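The Data value in that debug output is just the JSON payload base64-encoded, which is how Firehose’s PutRecord carries the blob on the wire. A small sketch of that round trip, using the earlier example payload (the variable names are mine):

```elixir
# The Data blob in a PutRecord request body is the raw payload, base64-encoded.
json = ~s({"ticker_symbol": "hello", "sector": "world", "id": 1})

encoded = Base.encode64(json)
decoded = Base.decode64!(encoded)

# Decoding recovers the original JSON, unique id included, so a retry
# re-sends a byte-identical blob.
IO.puts(decoded)
```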

You could definitely try a data transformation function in Firehose, although I confess I don’t have a lot of experience with them.

I was mostly just imagining that whatever service is ultimately handling this data would be able to use the ids within the records to avoid duplicate processing.


thanks for the help! =) I also bought your book. Cant wait to read it :slight_smile:

Oh thanks! Be sure to create a post if you have any questions!


thanks! and thanks for making ex_aws :slight_smile:

Hi there!

I’m having some HTTP timeouts with ex_aws. I didn’t want to spam the posts, so I thought to continue on from here. The Elixir app is causing the BEAM to crash with out-of-memory errors. =( I’m using ex_aws 1.1.5. I’ve tried the following and I feel lost. :frowning:

1. I’ve already extended the timeout to a huge amount =p, and when I set the debug flag there is no output in the log.

```elixir
  debug_requests: true,
  access_key_id: [{:system, "AWS_ACCESS_KEY_ID"}, :instance_role],
  secret_access_key: [{:system, "AWS_SECRET_ACCESS_KEY"}, :instance_role],
  recv_timeout: 600_000, # increase the HTTP recv timeout to 600 seconds
  hackney: [recv_timeout: 600_000, pool: false]
```

2. If I run locally, there are no errors using the same AWS_ACCESS_KEY_ID.

3. If I disable the application’s connections to each delivery system via iex, there are no timeouts.

I’m not sure what I’m missing. It used to run fine, but recently it’s getting all these timeouts, and they’re not consistent either. Thanks! I’m stumped :expressionless:

Hey @demem123.

Unless you’ve changed things, :ex_aws switched its default HTTP client from HTTPoison to bare hackney, so any HTTP config opts need to go under :hackney_opts, not :httpoison_opts.

Also notably, the debug_requests, access_key_id, and secret_access_key config options all need to be directly under :ex_aws, not under :ex_aws, :hackney_opts.


I haven’t changed things. I did enable debugging, but nothing was written about the reason for the timeout. It’s really bizarre. =(


What I’m saying is that the configuration you’ve shown does not enable debugging. The config should look like:

```elixir
config :ex_aws,
  debug_requests: true,
  access_key_id: [{:system, "AWS_ACCESS_KEY_ID"}, :instance_role],
  secret_access_key: [{:system, "AWS_SECRET_ACCESS_KEY"}, :instance_role]

config :ex_aws, :hackney_opts,
  recv_timeout: 600_000,
  pool: false
```

thanks! I did move it out :slight_smile: but it still didn’t say why it’s timing out. Hmm… I do see that it’s running a lot of beam.smp stuff. Memory is increasing, then it dies, and it only happens when it’s using ex_aws. :frowning: Not sure if it’s a memory leak, but it’s odd that I don’t get the same problem locally.

Still investigating. :fearful:

How large is the file you’re downloading?


it’s not even a lot, since the box has 1 GB of RAM and it dies within minutes, < 5 minutes. :frowning: I see the memory flash before my eyes. hehe, well, gotta have humor at least. :stuck_out_tongue: