@tanweerdev The connection could be closed for various reasons, more or less related to the client. However, if you haven't hit some edge case, I think you're running into a timeout. Did you try changing the hackney options? Did you try the retries configuration?
I guess you have just started using this library, or some of your configuration is still at its defaults (ATTEMPT: 1, so most probably no retries configuration). For bigger files, and especially on a slow internet connection, the chance of a timeout can be really high, so it's important to use a proper download/upload strategy regardless of the service/software, as everything has limits.
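For example, something like this in your config (just a sketch; the timeout and retry values are only illustrative, so tune them to your case):

```elixir
import Config

# Sketch: values are illustrative, not recommendations.
# :hackney_opts are passed to the default ex_aws HTTP client (hackney);
# :retries enables ex_aws' built-in retries with exponential backoff.
config :ex_aws, :hackney_opts,
  recv_timeout: 300_000,
  connect_timeout: 30_000

config :ex_aws, :retries,
  max_attempts: 5,
  base_backoff_in_ms: 10,
  max_backoff_in_ms: 10_000
```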
Also, is your internet connection stable? I don't think that's it, but it's a good example … Some time ago the fiber network I'm using was overloaded (people staying at home because of covid restrictions, read: "facebook") and I had many connection problems at that time.
I see; in some cases it retried up to 5 times. I have added the 25-minute timeout option to Task.async_stream, but even before 25 minutes pass, the request is closed and control is never returned.
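Roughly, this is how I pass the timeout (upload_file/1 stands in for our actual upload function):

```elixir
# Roughly what I have now; upload_file/1 is a placeholder for our real
# upload code. Note that :timeout applies per task, not to the whole stream.
files
|> Task.async_stream(&upload_file/1,
  max_concurrency: 4,
  timeout: 25 * 60 * 1000
)
|> Enum.to_list()
```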
What do you think about playing a bit with multipart_chunksize? Maybe it would be good for you to set it to something like 10 MB or maybe even smaller … It's more of a workaround for sure, but it could also optimize your project. Again, I have not done such things before, but it should work, as ex_aws provides a way to read configuration from the Amazon CLI files.
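I have not tried it myself, but on the Elixir side the part size should simply be the chunk size of the stream you feed into S3.upload, so something like this (the bucket and file names are made up):

```elixir
# Sketch (untested by me): each chunk of the input stream becomes one part
# of the multipart upload, so the chunk size controls the part size.
# S3 requires every part except the last to be at least 5 MB.
"big_file.zip"
|> File.stream!([], 10 * 1024 * 1024)
|> ExAws.S3.upload("my-bucket", "big_file.zip")
|> ExAws.request()
```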
Could it be related to the socket or request processing? The reason I think so is that as soon as I see the message below,
```
Could not get response
Error: socket hang up
```
all the processing stops for sure, even though there is no warning or anything else.
Note: last time the POST request hung up after 6-7 minutes, although I have the 25-minute timeout option set almost everywhere.
I'm of course not sure, but I asked myself: if there is a problem with a 20 MB file, then how would it go if we sent exactly the same file in smaller chunks?
ok, I see that most probably I would not be able to reproduce this … I have 100/100 Mbps fiber internet, so sending a 20 MB file would definitely not take that long
I saw similar problems in other libraries (in other languages), and there, with exactly the same error, people found that their configuration was incorrect, but that does not look like your use case …
We upload a 50 MB zipped file for testing purposes. Potentially the file can be as big as 3 GB, at least in production, according to the requirements. We have to unzip the file and upload each individual image/document, which normally isn't bigger than a few MB.
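Simplified, the flow looks roughly like this (the bucket name, temp dir, and concurrency settings are placeholders for our actual code):

```elixir
# Simplified version of our flow; bucket, temp dir and concurrency
# settings are placeholders.
defmodule Uploader do
  alias ExAws.S3

  def upload_archive(zip_path, bucket) do
    # :zip works with charlists and returns the extracted paths as charlists
    {:ok, files} =
      :zip.unzip(String.to_charlist(zip_path), cwd: String.to_charlist("/tmp/unzipped"))

    files
    |> Task.async_stream(
      fn path ->
        path = to_string(path)

        bucket
        |> S3.put_object(Path.basename(path), File.read!(path))
        |> ExAws.request()
      end,
      max_concurrency: 4,
      timeout: 25 * 60 * 1000
    )
    |> Enum.to_list()
  end
end
```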
people are talking about changing request headers …
I don't have an AWS-based project right now. Do you think you can try those suggestions? Maybe AWS does not support some headers like those mentioned in the issue …
Maybe it would be easier to compare a working request (like one from curl) with the current one?
If I set inactivity_timeout: :infinity, it processes all the files but does not send back a response even after the processing is done. I found this article helpful, but even it does not give details about the possible protocol options. As soon as all tasks are processed via async_stream (I have also tried chunking into groups of 4 and then doing Task.await, or Task.async_stream), I need to insert a record in the db and send back the response. So in my opinion the solution lies in the HTTP protocol options, but since the documentation is not very clear, I can't figure out how to achieve this. Thanks in advance for all the effort and help @Eiji
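For context, this is where I set it (the app/endpoint names are placeholders):

```elixir
import Config

# Placeholder app/endpoint names; idle_timeout and inactivity_timeout
# are Cowboy 2 HTTP protocol options.
config :my_app, MyAppWeb.Endpoint,
  http: [
    port: 4000,
    protocol_options: [
      idle_timeout: 25 * 60 * 1000,
      inactivity_timeout: :infinity
    ]
  ]
```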
ok, so most probably it's not about the content size, unless there is some limit on S3, but I don't think that's it, as I believe it would return a proper error response …
I have one more idea … Can you please compare the result of:
If it's not that, then I don't think it's possible to debug it properly, at least from your app … Maybe we would need to debug some values in the ex_aws code to have more information about the request …
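One thing that may help as a start: if I remember correctly, ex_aws can log every outgoing request, which would at least show what is actually sent to S3:

```elixir
import Config

# Log each outgoing ex_aws request at the :debug level
config :ex_aws, debug_requests: true
```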