Hi Jetmush,
I really didn’t expect anyone else to look at this code after a year ^^’ Looking at it again, the version you saw only supported one file at a time. I quickly adjusted this (it really isn’t much work! Just add the ‘multiple: true’ option to the file input in your template and you’ll receive a list of Plug.Upload structs.)
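For reference, this is roughly what that looks like in the template. Just a sketch: the route, the upload/files names and the form options are placeholders I made up, not what’s in the repo. The [] suffix on the input name is what makes the parser collect all selected files into one list:

  <%= form_for @conn, "/uploads", [as: :upload, multipart: true], fn f -> %>
    <%= file_input f, :files, multiple: true, name: "upload[files][]" %>
    <%= submit "Upload" %>
  <% end %>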
After uploading 2 files, this is what you should see in an IEx.pry’d controller action:
pry(1)> up
[
  %Plug.Upload{
    content_type: "application/octet-stream",
    filename: "bcm2708-rpi-b.dtb",
    path: "/tmp/plug-1595/multipart-1595601713-270993472630691-4"
  },
  %Plug.Upload{
    content_type: "application/octet-stream",
    filename: "bcm2708-rpi-b-plus.dtb",
    path: "/tmp/plug-1595/multipart-1595601713-367099312190116-4"
  }
]
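In case it helps, here’s a minimal sketch of a controller action that consumes that list (the param keys and destination directory are assumptions on my end, matching the template sketch above):

  def create(conn, %{"upload" => %{"files" => uploads}}) do
    # Each element is a %Plug.Upload{}. Copy the temp file somewhere
    # persistent before the request finishes, because Plug removes it
    # once the process that owns it dies.
    for %Plug.Upload{filename: name, path: tmp_path} <- uploads do
      File.cp!(tmp_path, Path.join("priv/uploads", name))
    end

    send_resp(conn, 200, "ok")
  end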
I left the pry in the repo so that you can clone, compile, run and see what happens. Now specifically for your question regarding:
multiple images in one database field.
I prefer storing the images in AWS S3, or on the file system, and storing the path in the database. Since an upload is usually a single file, I’d implement uploading multiple files as multiple single uploads (and thus multiple database entries). Is it important that you keep track of which files were uploaded in a single batch? If so, I’d personally suggest one of the following:
- Add a new table, e.g. “batch_uploads”. Then each of your uploads gets a foreign key to the “batch upload” it belongs to (sketched below).
- I’d personally rather not suggest this, though I can’t quite put my finger on why: you could join the file names with a unique separator and store them in one field, then split on that separator when reading the entry to get the list of filenames back. Saving it as a JSON array would be a cleaner variant of the same idea. But then what happens when you e.g. periodically check the integrity of the files and one of them is corrupt / no longer present?
With the second approach it seems to me a lot can go wrong and has to be solved with code (e.g. removing a single file means parsing, filtering and re-serializing the field, while in essence it could be just one small table-row deletion). Hence my preference for the first solution, though I’d recommend it with URLs to AWS S3 (definitely more scalable).
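To make that first option concrete, a rough Ecto migration could look like this (table and column names are my assumptions, not an existing schema):

  defmodule MyApp.Repo.Migrations.AddBatchUploads do
    use Ecto.Migration

    def change do
      create table(:batch_uploads) do
        timestamps()
      end

      # Each upload row points to the batch it arrived in. Nullable,
      # so single uploads without a batch keep working.
      alter table(:uploads) do
        add :batch_upload_id, references(:batch_uploads)
      end
    end
  end

Removing one file from a batch then really is just deleting one row in uploads; the batch grouping stays intact.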
Note: the whole rant above only applies if you want to keep track of batch uploads. If that’s not necessary, I’d suggest not adding unnecessary complexity and treating each file as a separate upload in its respective table.
In case other people read this and find this absolute blasphemy, please do tell me! Always eager to learn!!!
Hope this helped!