Phoenix + Amazon S3 (app assets, not user uploads)

I need to provide some context before asking the question:

CONTEXT

I have a Phoenix application that is deployed to Heroku. By default, Brunch is used to compile the static assets (.js, .css, and images).

  • Those assets are stored in ./assets (Phoenix 1.3).
  • Those assets are compiled to ./priv/static/.

The compilation process generates a cache_manifest.json after the assets are digested using MD5 fingerprinting.
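For reference, that manifest is a JSON map from logical paths to their digested equivalents (in Phoenix 1.3 the mapping lives under a "latest" key). A quick way to inspect it from IEx, assuming Poison (which Phoenix 1.3 already depends on); the hash below is invented:

```elixir
# In IEx, after running `mix phx.digest`:
"priv/static/cache_manifest.json"
|> File.read!()
|> Poison.decode!()
|> Map.get("latest")
#=> %{"js/app.js" => "js/app-6d2d30857caf06d329199458e59f2c5b.js", ...}
```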

It may be important to note that I’m using CloudFlare’s free tier as a CDN.

I’m not concerned about user-uploaded assets; I’m talking about the app’s own assets.

Relevant part of the app’s config/prod.exs:

```elixir
config :bespoke_work, BespokeWork.Web.Endpoint,
  on_init: {BespokeWork.Web.Endpoint, :load_from_system_env, []},
  http: [port: {:system, "PORT"}],
  url: [scheme: "https", host: System.get_env("HEROKU_HOST"), port: System.get_env("HEROKU_PORT")],
  static_url: [scheme: "https", host: System.get_env("STATIC_ASSETS"), port: 443],
  force_ssl: [rewrite_on: [:x_forwarded_proto]],
  cache_static_manifest: "priv/static/cache_manifest.json",
  secret_key_base: System.get_env("SECRET_KEY_BASE")
```
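With static_url set like that, the generated Router.Helpers build asset URLs against the STATIC_ASSETS host instead of the app’s own host. Roughly (digest hash invented):

```elixir
# In a view/template, via BespokeWork.Web.Router.Helpers:
static_url(conn, "/js/app.js")
#=> "https://<STATIC_ASSETS>/js/app-6d2d30857caf06d329199458e59f2c5b.js"
```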

QUESTION

  1. How can I prevent Heroku from building the assets and, instead, automatically upload the digested assets to an Amazon S3 bucket during deploy?

  2. Will that make Heroku’s slug smaller?


POSSIBLE SOLUTION

1. Reduce Heroku’s slug size:

• In the Procfile, redirect mix phx.digest to output the digested files to /dev/null.

or

• Redefine mix deps.compile for prod so that it does not generate the assets.

2. Generate the assets locally.

3. Upload them to S3, either manually or with a shell script (see the sketch after this list).

4. Use static_url to generate paths pointing to the S3 bucket.
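For step 3, here is a minimal sketch of that upload step written as an Elixir mix task instead of a shell script. It assumes ex_aws + ex_aws_s3 (plus hackney and mime) as dependencies, AWS credentials in the environment, and a hypothetical ASSETS_BUCKET variable:

```elixir
defmodule Mix.Tasks.Assets.Upload do
  use Mix.Task

  @shortdoc "Uploads the digested assets in priv/static to S3"

  # Hypothetical task: run after `mix phx.digest` so the digested
  # files and cache_manifest.json already exist in priv/static.
  def run(_args) do
    # ex_aws (and hackney) must be started inside a mix task.
    {:ok, _} = Application.ensure_all_started(:ex_aws)

    bucket = System.get_env("ASSETS_BUCKET") || Mix.raise("ASSETS_BUCKET is not set")

    "priv/static/**/*"
    |> Path.wildcard()
    |> Enum.reject(&File.dir?/1)
    |> Enum.each(fn file ->
      key = Path.relative_to(file, "priv/static")

      ExAws.S3.put_object(bucket, key, File.read!(file),
        content_type: MIME.from_path(file),
        # Digested filenames never change, so they can be cached aggressively.
        cache_control: "public, max-age=31536000"
      )
      |> ExAws.request!()

      Mix.shell().info("uploaded #{key}")
    end)
  end
end
```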


Is there any simpler way to accomplish this?


The biggest question is: why? What problem are you trying to solve?

That said, you can create a custom compile script for your assets, where I suppose you can do the normal asset compilation/digesting, then upload to S3, and then rm the compiled assets dir: https://github.com/gjaldon/heroku-buildpack-phoenix-static#compile

But I would warn against this: it accomplishes close to nothing and adds a lot of complexity.

Also, don’t use S3 as a CDN; it’s not really optimized for that.

static_url + CloudFront takes no time to set up, and that is what I would recommend.
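Concretely, that just means pointing the static_url the original config already has at a CloudFront distribution; a sketch (the cloudfront.net domain is a placeholder):

```elixir
# config/prod.exs — serve digested assets through CloudFront instead of the dyno.
config :bespoke_work, BespokeWork.Web.Endpoint,
  static_url: [scheme: "https", host: "d1234abcdef8.cloudfront.net", port: 443]
```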

BTW, you are already using the CloudFlare CDN, and it is working fine (look for the cf-cache-status: HIT response header).

Problem: Heroku’s slug size limit (300 MB).

I have to upload a lot of big images (>5 MB each) to this project, and this limit is really low.

Regarding the CDN: yes, I’m already using CloudFlare; the question mentions that.

This is what I’m considering right now:

• One possible solution is to use a CI/CD environment like Codeship to automate the deployment process, including the Amazon S3 part (and setup scripts, and tests, and what have you).

• This doesn’t fix the Heroku slug size limitation (the slug is already big). It may be interesting to consider an alternative way of storing those assets, such as a database.

OK, that is a very special use case that goes beyond ‘normal’ assets.

Without knowing anything about these images (can the app server start with no images, how does it reference them, what is their life cycle: create/update/delete?), my best guess is to go for ‘admin-uploaded’ images, i.e. store them on S3 with references to them in a DB table (most likely using https://hex.pm/packages/arc + ex_aws).
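For what it’s worth, a minimal sketch of that arc + ex_aws route (module, bucket, and scope names are hypothetical):

```elixir
# config/prod.exs — tell Arc to store files on S3 via ExAws.
config :arc,
  storage: Arc.Storage.S3,
  bucket: "bespoke-work-images"

# An Arc uploader definition; attach it to a DB record with arc_ecto.
defmodule BespokeWork.ImageUploader do
  use Arc.Definition

  @versions [:original]

  # Keep each image under a per-record prefix in the bucket.
  def storage_dir(_version, {_file, scope}) do
    "images/#{scope.id}"
  end
end
```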

Also note that Heroku has a 1 GB limit on your Git repo size.


So I just talked to the really nice guys @ Heroku and here is the deal:

and / or

create a custom mix task that uploads your assets to S3, then rm them from the slug… call the mix task in your custom compile file: https://github.com/gjaldon/heroku-buildpack-phoenix-static/blob/master/compile
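Building on the upload task sketched earlier in the thread, that could look roughly like this (hypothetical; note that cache_manifest.json must survive the rm, because the endpoint reads it at runtime to resolve digested paths):

```elixir
# Hypothetical mix task combining both steps, for use in the custom compile file.
defmodule Mix.Tasks.Assets.Release do
  use Mix.Task

  @shortdoc "Uploads digested assets to S3, then prunes them from the slug"
  def run(_args) do
    Mix.Task.run("assets.upload")  # the upload task sketched earlier

    # Delete everything except cache_manifest.json: the endpoint still needs
    # the manifest at runtime to rewrite /js/app.js -> /js/app-<digest>.js.
    "priv/static/**/*"
    |> Path.wildcard()
    |> Enum.reject(&File.dir?/1)
    |> Enum.reject(&(Path.basename(&1) == "cache_manifest.json"))
    |> Enum.each(&File.rm!/1)
  end
end
```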

Just remember the 1 GB limit that Heroku has: https://devcenter.heroku.com/articles/limits