My approach was uploading them to /var/www/app/static and letting nginx serve them, but I’m not sure how to do this in my dev environment. I’m guessing there might be an easier way to do this.
I use an environment variable for the path to save them to and let nginx serve that directory.
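For reference, a minimal nginx setup for that approach might look like this (the paths, port, and server name are assumptions for illustration, not from the thread):

```nginx
# Serve user uploads directly from disk; proxy everything else to Phoenix.
server {
    listen 80;
    server_name example.com;

    location /uploads/ {
        # The directory the app writes uploads to (e.g. from the env variable).
        alias /var/www/app/static/;
    }

    location / {
        proxy_pass http://127.0.0.1:4000;  # the Phoenix endpoint
        proxy_set_header Host $host;
    }
}
```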
Do you do the same thing in your dev environment? That’s my current approach as well, but I’m not sure how to set up my dev environment (how to serve the files, in particular).
Here’s something but no clear answer yet.
In dev I just let Phoenix itself host things via Plug.Static. It is plenty fast at that for dev use (even for most prod use).
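As a sketch, serving an uploads directory through the endpoint in dev could look like this (the `/uploads` mount point and the directory path are assumptions; the `Mix.env()` check is evaluated at compile time, which is fine for dev but won’t work in a release):

```elixir
# lib/my_app_web/endpoint.ex (module name assumed)
if Mix.env() == :dev do
  plug Plug.Static,
    at: "/uploads",
    from: "/var/www/app/static",  # or read from an env variable at compile time
    gzip: false
end
```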
We are talking about user uploaded files here right?
Plug.Static is not well suited to handling dynamic files (as the name suggests). It assumes many things, the most important being that files do not change. This means it will not work properly when you update or delete files.
That said, this limitation may be acceptable for a dev environment if you’re aware of it.
Is it possible to use Plug.Static in the dev environment and use nginx to serve the files in prod? If so, could you give me some hints on how to do that? (How to set up Plug.Static?)
My temporary solution will be using another local server to serve the uploaded files in my dev environment. `python -m SimpleHTTPServer 4001` should do the trick for now. (On Python 3, the equivalent is `python3 -m http.server 4001`.)
https://github.com/phoenixframework/phoenix_guides/issues/552
Will work on improving the Phoenix File Upload guide after I get everything sorted out.
Plug.Static is not well suited to handling dynamic files (as the name suggests). It assumes many things - the most important being that files do not change. This means it will not work properly when you update or delete files.
Plug.Static works for those files. The name static is not because the set of files is static; it is because the file on disk is served as is. We use etags for caching if you don’t specify a digest, so it should work fine for updating and removing files as well.
Thank you for clearing it up. I think it’s a common misconception; I heard it over on IRC a couple of days ago. Inspecting the code, there’s indeed no cache for the files. For some reason I thought Plug.Static was storing the filenames in ETS.
Why don’t you put them into a (Postgres) database? I mean, save the binary and the content type and serve them through a file controller or so. Or is that somehow bad performance-wise?
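For what it’s worth, the controller approach being described would look roughly like this (the `Upload` schema, its fields, and the module names are made up for illustration):

```elixir
defmodule MyAppWeb.FileController do
  use MyAppWeb, :controller

  # Assumes an Upload schema with :content_type and :data (binary) fields.
  def show(conn, %{"id" => id}) do
    upload = MyApp.Repo.get!(MyApp.Upload, id)

    conn
    |> put_resp_content_type(upload.content_type)
    |> send_resp(200, upload.data)
  end
end
```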
It’s slow and doesn’t scale; you’d have added complexity with pretty much no benefits.
A more common approach is to put the data itself on S3 and then simply save a database record with the S3 path. This also has the benefit of making the files easily available across multiple nodes.
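A sketch of that pattern with ExAws (the bucket name, schema, and function names are assumptions; requires Elixir 1.12+ for `then/2`):

```elixir
# Upload the binary to S3, then persist only the key in Postgres.
def store_upload(%Plug.Upload{} = upload) do
  key = "uploads/#{Ecto.UUID.generate()}-#{upload.filename}"

  upload.path
  |> File.read!()
  |> then(&ExAws.S3.put_object("my-bucket", key, &1))
  |> ExAws.request!()

  MyApp.Repo.insert!(%MyApp.Upload{s3_key: key, content_type: upload.content_type})
end
```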
That’s what Firebase recommended back a few years ago (only because there were no other choices besides using an external service) lol
You would be putting a lot of stress on your database and your server to transfer the binary data.
This exploded since I was last here. ^.^
You can always cache those calls in ETS or so, though (Cachex, for example, with no timeout could be useful, since it can do the DB lookup too and act as a central access point). Externally hosted files should never change, so that would be good to do.
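A minimal read-through ETS cache for such lookups might be sketched like this (the table name, schema, and function names are made up):

```elixir
defmodule MyApp.FileCache do
  @table :file_cache

  # Create the table once at application start.
  def init do
    :ets.new(@table, [:named_table, :set, :public, read_concurrency: true])
  end

  # Read-through: hit ETS first, fall back to the DB and cache the result.
  def fetch(id) do
    case :ets.lookup(@table, id) do
      [{^id, record}] ->
        record

      [] ->
        record = MyApp.Repo.get!(MyApp.Upload, id)
        :ets.insert(@table, {id, record})
        record
    end
  end
end
```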