Is Phoenix well suited to handle large file uploads?

Hi all!

I am currently investigating different options in order to handle large file uploads.
Since our application needs to run additional logic after a file has been uploaded successfully, I don't think a plain FTP server is well suited to this use case. Granted, many FTP servers offer post-upload hooks, but spinning up a bunch of small helper applications is not what we want.

As we're already using Elixir in a couple of applications, I'm inclined to write an upload server using Phoenix. But since the uploaded files may vary from 200MB to 2GB or more, I'm not fully convinced that Phoenix is the right tool for this type of task.

What do you think, is Phoenix well suited for that kind of use case?

Many thanks!


There were some problems with cowboy, but I guess they have been resolved by now. Here’s an example.


You may also want to consider using presigned S3 upload URLs. You'll likely want to store these files on S3 anyway, and by uploading there directly you save yourself the bandwidth.
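If you do go the presigned-URL route, a minimal sketch (assuming the `ex_aws` and `ex_aws_s3` packages; the bucket and key names are made up) might look like:

```elixir
# Sketch assuming ex_aws / ex_aws_s3; bucket and key are placeholders.
config = ExAws.Config.new(:s3)

{:ok, upload_url} =
  ExAws.S3.presigned_url(config, :put, "my-uploads-bucket", "incoming/video.mp4",
    expires_in: 3600
  )

# Hand upload_url to the client, which PUTs the file directly to S3,
# bypassing the Phoenix server entirely.
```

The client then uploads straight to S3 and only notifies your Phoenix app once the upload finishes.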


I think that this should not be the responsibility of your application.

Let it be handled by Nginx; after saving the file, Nginx calls the app back.

This applies to any framework, and it also frees up your app server's connections.


Many thanks for your reply, but S3 is not an option for us, because we have a contractual agreement with our customer that only allows us to store the uploaded files within our own (local) infrastructure.


I've written a small prototype Phoenix application and have done a couple of tests, e.g. uploading 500MB files in parallel. The results look promising. The load is (as expected with Elixir :grin:) really low, and the performance is only limited by the network connection and the attached disk(s) of the server.

As a first result, I’m happy with that. I think I’ll do a stress test over the weekend…
The problems mentioned by @idi527 really do seem to be resolved by now :+1:


@railsmechanic please keep us updated …


I too am curious about this. We are in the same position in that we will be dealing with file sizes of up to a few GB, which have to be stored on our local servers, plus additional logic after upload.

I’m working with an Ember front / Phoenix back, so any thoughts / advice would be appreciated.


How about chunked upload?


For those generally wondering about file uploads: by default, Phoenix streams file uploads to disk. That is done by Plug.Parsers. The file is never fully loaded into memory, and Plug.Parsers lets you precisely control the maximum file size, the upload speed, etc. The file is removed from disk after the request finishes (unless you move it elsewhere). It should suit most needs without having to put nginx in front.
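For reference, a sketch of what that configuration could look like in an endpoint; the limits below are illustrative, not recommendations:

```elixir
# In your Phoenix endpoint (or a router pipeline); values are illustrative.
plug Plug.Parsers,
  parsers: [:urlencoded, :multipart, :json],
  pass: ["*/*"],
  json_decoder: Jason,
  # Allow multipart bodies up to ~2 GB instead of the default limit.
  length: 2_000_000_000,
  # Read the socket in 1 MB chunks, waiting up to 15s for each chunk.
  read_length: 1_000_000,
  read_timeout: 15_000
```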

If you want to perform other forms of streaming, for example, working as a proxy, then you can dump Plug.Parsers and roll your own. But generally speaking, all of the tools are there and Plug.Parsers should suit the majority of cases.
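A minimal sketch of rolling your own streaming with `Plug.Conn.read_body/2` (the module name, target path, and chunk size are made up for illustration):

```elixir
# Sketch: stream a raw request body to disk, bypassing Plug.Parsers.
defmodule MyApp.UploadPlug do
  import Plug.Conn

  def init(opts), do: opts

  def call(conn, _opts) do
    path = Path.join(System.tmp_dir!(), "upload-#{System.unique_integer([:positive])}")
    file = File.open!(path, [:write, :binary])

    conn = stream_body(conn, file)
    File.close(file)

    send_resp(conn, 201, "stored at #{path}")
  end

  # read_body/2 returns {:more, chunk, conn} until the body is exhausted,
  # then {:ok, final_chunk, conn}.
  defp stream_body(conn, file) do
    case read_body(conn, length: 1_000_000) do
      {:ok, chunk, conn} ->
        IO.binwrite(file, chunk)
        conn

      {:more, chunk, conn} ->
        IO.binwrite(file, chunk)
        stream_body(conn, file)
    end
  end
end
```

Because each chunk is written out as it arrives, memory usage stays flat regardless of the file size.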


@josevalim Plug is amazing and should get better marketing. I was scrambling to find docs on maru (everything API/deployment-related seems to be about Phoenix nowadays), and finally found out I just needed an Application with a worker Router using plugs to get the job done!


Hey @Dephora. Have you managed to get file uploads working with Ember and Elixir? Curious on your approach. Have been stuck for a while. :frowning:


I actually haven’t made it there yet, outside of coming to the conclusion that I’m definitely going to use Arc.

On the Ember side of things, I spent some time with the various popular file upload addons, but I think I ultimately decided on ember-file-upload, if I remember right.

The other piece of the puzzle on my end is using GraphQL for the upload if it’s not too difficult since I’m currently using GraphQL everywhere else.

@Dephora I see. Thanks for the reply! I am looking into direct upload to S3 as one option w/ a pre-signed url.

For the frontend side there's this one:
I'm sure it wouldn't be too hard to use it with a Phoenix backend.



Today I encountered this big-file-upload issue. Here is the situation:

The flow is: client → HAProxy → Elixir (Phoenix)

  • uploading a 6.5GB file failed; the HAProxy error was SH, which indicates a server-side (Cowboy) timeout
  • uploading a 4GB file failed
  • uploading a 1.1GB file failed
  • uploading a 500MB file succeeded

I have already changed the config:

 transport_options: [socket_opts: [:inet6], idle_timeout: 5_000_000]

In the Elixir logs, I can see the 4GB file was actually uploaded completely, and no error shows. Does this indicate that Cowboy returned an error while the process kept running?
Any ideas? What else can I check?
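One thing that may be worth double-checking (the sketch below uses placeholder app and endpoint names): with Cowboy 2 under Plug.Cowboy, `idle_timeout` is a *protocol* option rather than a transport option, so placing it under `transport_options` may have no effect. HAProxy's own `timeout server` may need to be raised as well.

```elixir
# config/prod.exs — sketch; names and values are illustrative.
config :my_app, MyAppWeb.Endpoint,
  http: [
    port: 4000,
    transport_options: [socket_opts: [:inet6]],
    # idle_timeout belongs in protocol_options for Cowboy 2:
    protocol_options: [idle_timeout: 3_600_000]
  ]
```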

Does Plug.Parsers also stream payload to disk on handle_in in Phoenix Channels?

I need to handle a big file through handle_in as a stream.

No, it does not. But Phoenix.Channels do support a streaming binary format, which is used by Phoenix.LiveView for streaming to disk. So you can either use LiveView or re-implement it in your channel.
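A hedged sketch of what the channel-based approach could look like (the module, topic, and event names are invented for illustration; in channels, binary frames arrive as `{:binary, data}` payloads):

```elixir
# Sketch: accumulate binary chunks sent over a channel into a temp file.
defmodule MyAppWeb.UploadChannel do
  use Phoenix.Channel

  def join("upload:" <> _id, _params, socket) do
    path = Path.join(System.tmp_dir!(), "chunked-#{System.unique_integer([:positive])}")
    {:ok, file} = File.open(path, [:write, :binary])
    {:ok, assign(socket, file: file, path: path)}
  end

  # Binary frames from the client arrive as {:binary, data} payloads.
  def handle_in("chunk", {:binary, data}, socket) do
    IO.binwrite(socket.assigns.file, data)
    {:reply, :ok, socket}
  end

  def handle_in("done", _params, socket) do
    File.close(socket.assigns.file)
    {:reply, {:ok, %{path: socket.assigns.path}}, socket}
  end
end
```

The client slices the file and pushes one "chunk" message per slice, then sends "done" so the server can run its post-upload logic.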