Flexible Dockerized Phoenix Deployments (1.2 & 1.3)



That was it, thanks. I guess I didn’t really understand what webpack was doing.

Got it! Deploying Phoenix 1.3 + webpack via Docker! Thanks everyone!

The separate HAProxy deploy Ansible playbook I was talking about earlier is working too. I just need to make a separate Ansible role to run ./build.sh and docker-compose up and I’m good.
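That role could be as simple as two tasks; a minimal sketch, assuming the checkout lives at /opt/dynt on the target host (the path and role name are hypothetical):

```yaml
# tasks/main.yml of a hypothetical "dynt-deploy" role
- name: Build the release images
  command: ./build.sh
  args:
    chdir: /opt/dynt

- name: Start the containers in the background
  command: docker-compose up -d
  args:
    chdir: /opt/dynt
```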


Hi all. Using Elixir 1.6.4/Phoenix 1.4.0-dev, I’m having trouble in build.sh when it runs Dockerfile.build:

RUN npm i

gives me:

npm ERR! write after end
npm ERR! write after end
npm ERR! write after end
npm ERR! write after end
npm ERR! write after end

npm ERR! A complete log of this run can be found in:
npm ERR!     /opt/app/.npm/_logs/2018-07-11T01_15_54_198Z-debug.log
The command '/bin/sh -c npm i' returned a non-zero code: 1
Unable to find image 'dynt-build:latest' locally

So I tried fetching the latest npm, which works, but then the build fails at the npm deploy step:

RUN npm i npm@latest -g

which now complains that webpack is missing (when it runs the scripts in package.json):

  "scripts": {
    "deploy": "webpack --mode production",
    "watch": "webpack --mode development --watch"
Step 7/13 : RUN npm i npm@latest -g
 ---> Running in bc37bb5a75b6
/usr/bin/npm -> /usr/lib/node_modules/npm/bin/npm-cli.js
/usr/bin/npx -> /usr/lib/node_modules/npm/bin/npx-cli.js
+ npm@6.1.0
added 228 packages from 48 contributors, removed 79 packages and updated 91 packages in 40.484s
Removing intermediate container bc37bb5a75b6
 ---> 55d96a8fc918
Step 8/13 : RUN npm run deploy
 ---> Running in b03b51c1621e

> @ deploy /opt/app/assets
> webpack --mode production

sh: webpack: not found
npm ERR! file sh
npm ERR! errno ENOENT
npm ERR! syscall spawn
npm ERR! @ deploy: `webpack --mode production`
npm ERR! spawn ENOENT
npm ERR! 
npm ERR! Failed at the @ deploy script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm WARN Local package.json exists, but node_modules missing, did you mean to install?

npm ERR! A complete log of this run can be found in:
npm ERR!     /opt/app/.npm/_logs/2018-07-11T01_19_24_208Z-debug.log
The command '/bin/sh -c npm run deploy' returned a non-zero code: 1
Unable to find image 'dynt-build:latest' locally

But why? Shouldn’t webpack be present, since I’m deploying via Phoenix 1.4.0-dev, which has webpack enabled by default?

I could skip npm run deploy, but then there would be no input path for RUN mix phx.digest:

 ---> Running in c4c306bea664
The input path "priv/static" does not exist

which in turn breaks Docker when the container needs to be built from docker-compose up.



Yeah, the latest Phoenix uses webpack instead of Brunch, which probably requires some modifications. I haven’t really worked with the new Phoenix at all, so I’m not sure how it works yet, but switching from Brunch to webpack should be easy enough. It looks like webpack isn’t getting into your $PATH, which is why it can’t be found. I don’t think you can just change the RUN npm i line to RUN npm i npm@latest -g, because then you won’t install anything except the latest npm. I think you should do both lines, like so:

RUN npm i npm@latest -g
RUN npm i

I’m not sure if that’s what you are doing already, but try it out. If that doesn’t work, please post your Dockerfile.build and your package.json and I’ll see if I can spot what’s wrong.
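Put together, the asset steps in Dockerfile.build would run in roughly this order; a sketch only, with the /opt/app paths assumed from the logs above:

```dockerfile
# Sketch of the asset-related steps in Dockerfile.build (paths assumed).
WORKDIR /opt/app/assets

# Upgrade npm first, then install the project's dependencies,
# which puts webpack into node_modules/.bin.
RUN npm i npm@latest -g
RUN npm i

# npm run prepends node_modules/.bin to PATH, so webpack is found here.
RUN npm run deploy

# Back at the project root, digest the assets webpack wrote to priv/static.
WORKDIR /opt/app
RUN mix phx.digest
```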


Hey, thanks, your suggestions worked; I successfully built.

One thing, though: I’m seeing the same issue from here:

This is the log:

dynt-db   | 2018-07-12 01:05:45.491 UTC [47] LOG:  database system was shut down at 2018-07-12 01:05:45 UTC
dynt-db   | 2018-07-12 01:05:45.496 UTC [1] LOG:  database system is ready to accept connections
dynt-admin | Loading ..
dynt-admin | {"init terminating in do_boot",{{badmatch,{error,{"no such file or directory","nil.app"}}},[{'Elixir.Dynt.ReleaseTasks',seed,0,[{file,"lib/dynt/release_tasks.ex"},{line,19}]},{init,start_em,1,[]},{init,do_boot,3,[]}]}}
dynt-admin | init terminating in do_boot ({{badmatch,{error,{[_],[_]}}},[{Elixir.Dynt.ReleaseTasks,seed,0,[{_},{_}]},{init,start_em,1,[]},{init,do_boot,3,[]}]})
dynt-admin | 
dynt-admin | Crash dump is being written to: erl_crash.dump...done
dynt-admin exited with code 1

It’s not clear to me why this is happening in my case because

  def repos, do: Application.get_env(dynt(), :ecto_repos, [])

is a list of applications, not repos.

I also noticed that #{me} returns an empty string in the log, i.e. "Loading .." should instead say "Loading dynt..".

I will paste the entire release_tasks.ex below; the load is failing on this line (19):

    :ok = Application.load(me)

full release_tasks.ex:

defmodule Dynt.ReleaseTasks do

  @start_apps [:postgrex, :ecto] # assumed; the list was cut off in the post

  def dynt, do: Application.get_application(__MODULE__)

  def repos, do: Application.get_env(dynt(), :ecto_repos, [])

  def seed do
    me = dynt()

    IO.puts "Loading #{me}.."
    # Load the code for dynt, but don't start it
    :ok = Application.load(me)

    IO.puts "Starting dependencies.."
    # Start apps necessary for executing migrations
    Enum.each(@start_apps, &Application.ensure_all_started/1)

    # Start the Repo(s) for dynt
    IO.puts "Starting repos.."
    Enum.each(repos(), &(&1.start_link(pool_size: 1)))

    # Run migrations
    migrate()

    # Run seed script
    Enum.each(repos(), &run_seeds_for/1)

    # Signal shutdown
    IO.puts "Success!"
    :init.stop()
  end

  def migrate, do: Enum.each(repos(), &run_migrations_for/1)

  def priv_dir(app), do: "#{:code.priv_dir(app)}"

  defp run_migrations_for(repo) do
    app = Keyword.get(repo.config, :otp_app)
    IO.puts "Running migrations for #{app}"
    Ecto.Migrator.run(repo, migrations_path(repo), :up, all: true)
  end

  def run_seeds_for(repo) do
    # Run the seed script if it exists
    seed_script = seeds_path(repo)
    if File.exists?(seed_script) do
      IO.puts "Running seed script.."
      Code.eval_file(seed_script)
    end
  end

  def migrations_path(repo), do: priv_path_for(repo, "migrations")

  def seeds_path(repo), do: priv_path_for(repo, "seeds.exs")

  def priv_path_for(repo, filename) do
    app = Keyword.get(repo.config, :otp_app)
    repo_underscore = repo |> Module.split |> List.last |> Macro.underscore
    Path.join([priv_dir(app), repo_underscore, filename])
  end
end


Awesome! I actually know exactly what this error is. For some reason, Application.get_application/1 often doesn’t work in production. It has been discussed a number of times and, as far as I know, there isn’t any fix for it besides hardcoding your app name. So, replace the call to that function with the name of your application, like :myapp, and that should work. It’s actually in the guide, but it’s not very clear right now, so I will try to update it to be clearer about this specific error message :slight_smile:

Edit: I just updated the “Troubleshooting” section of the guide to reflect this. Take a look at that if you need more specifics :slight_smile:
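In other words, the workaround in release_tasks.ex is a one-line change (assuming the OTP app is named :dynt, as elsewhere in this thread):

```elixir
defmodule Dynt.ReleaseTasks do
  # Application.get_application/1 can return nil in a release,
  # so hardcode the OTP app name (assumed to be :dynt here)
  # instead of: def dynt, do: Application.get_application(__MODULE__)
  def dynt, do: :dynt
end
```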


That helped, thanks.
I’m almost there! docker-compose up fires up dynt-db, dynt-admin and dynt-server, but dynt-server/Phoenix doesn’t enter a state where it accepts connections.

~ ᐅ http localhost:5000

http: error: ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response')) while doing GET request to URL: http://localhost:5000/

This is the log:

~/webapp/phx/1.4.0-dev/dynt (master ✘)✹ ᐅ docker-compose up        
Creating network "dynt_default" with the default driver
Pulling db (postgres:10.2-alpine)...
10.2-alpine: Pulling from library/postgres
ff3a5c916c92: Pull complete
a503b44e1ce0: Pull complete
211706713093: Pull complete
8df57d533e71: Pull complete
7858f71c02fb: Pull complete
55a8ef17ba59: Pull complete
3fb44f23d323: Pull complete
65cad41156b3: Pull complete
5492a5bead70: Pull complete
Digest: sha256:2cdf8430d7c1f59b3d3808f672944491aabf0fdf15da5b5e158e9fa4162453bf
Status: Downloaded newer image for postgres:10.2-alpine
Building admin
Step 1/5 : FROM bitwalker/alpine-erlang:20.3.2
20.3.2: Pulling from bitwalker/alpine-erlang
2fdfe1cd78c2: Already exists
9ebca238abc6: Pull complete
8b81f5dbc183: Pull complete
Digest: sha256:de3f1a537acbc9c6d920104369eb15b5043f7316d661785bf52987288b6aae54
Status: Downloaded newer image for bitwalker/alpine-erlang:20.3.2
 ---> d8328b810c7c
Step 2/5 : ENV MIX_ENV=prod
 ---> Running in 025cf746c098
Removing intermediate container 025cf746c098
 ---> e48fe25e859b
Step 3/5 : ADD _build/prod/rel/dynt/releases/0.1.0/dynt.tar.gz ./
 ---> 192fd6f99c99
Step 4/5 : USER default
 ---> Running in aaa59b902d2a
Removing intermediate container aaa59b902d2a
 ---> 1b92cfc388d2
Step 5/5 : ENTRYPOINT ["./bin/dynt"]
 ---> Running in 0ee9f8e32921
Removing intermediate container 0ee9f8e32921
 ---> 7b7fcc8f6ca8
Successfully built 7b7fcc8f6ca8
Successfully tagged dynt-release:latest
WARNING: Image for service admin was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
Creating dynt-db ... done
Creating dynt-admin ... done
Creating dynt-server ... done
Attaching to dynt-db, dynt-admin, dynt-server
dynt-db   | The files belonging to this database system will be owned by user "postgres".
dynt-db   | This user must also own the server process.
dynt-db   | 
dynt-db   | The database cluster will be initialized with locale "en_US.utf8".
dynt-db   | The default database encoding has accordingly been set to "UTF8".
dynt-db   | The default text search configuration will be set to "english".
dynt-db   | 
dynt-db   | Data page checksums are disabled.
dynt-db   | 
dynt-db   | fixing permissions on existing directory /var/lib/postgresql/data ... ok
dynt-db   | creating subdirectories ... ok
dynt-db   | selecting default max_connections ... 100
dynt-db   | selecting default shared_buffers ... 128MB
dynt-db   | selecting dynamic shared memory implementation ... posix
dynt-db   | creating configuration files ... ok
dynt-db   | running bootstrap script ... ok
dynt-db   | performing post-bootstrap initialization ... sh: locale: not found
dynt-db   | 2018-07-12 02:24:34.389 UTC [28] WARNING:  no usable system locales were found
dynt-db   | ok
dynt-db   | syncing data to disk ... 
dynt-db   | WARNING: enabling "trust" authentication for local connections
dynt-db   | You can change this by editing pg_hba.conf or using the option -A, or
dynt-db   | --auth-local and --auth-host, the next time you run initdb.
dynt-db   | ok
dynt-db   | 
dynt-db   | Success. You can now start the database server using:
dynt-db   | 
dynt-db   |     pg_ctl -D /var/lib/postgresql/data -l logfile start
dynt-db   | 
dynt-db   | waiting for server to start....2018-07-12 02:24:37.101 UTC [33] LOG:  listening on IPv4 address "0.0.0.0", port 5432
dynt-db   | 2018-07-12 02:24:37.101 UTC [33] LOG:  could not bind IPv6 address "::1": Address not available
dynt-db   | 2018-07-12 02:24:37.101 UTC [33] HINT:  Is another postmaster already running on port 5432? If not, wait a few seconds and retry.
dynt-db   | 2018-07-12 02:24:37.107 UTC [33] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
dynt-db   | 2018-07-12 02:24:37.144 UTC [34] LOG:  database system was shut down at 2018-07-12 02:24:34 UTC
dynt-db   | 2018-07-12 02:24:37.148 UTC [33] LOG:  database system is ready to accept connections
dynt-db   |  done
dynt-db   | server started
dynt-db   | 
dynt-db   | ALTER ROLE
dynt-db   | 
dynt-db   | 
dynt-db   | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
dynt-db   | 
dynt-db   | 2018-07-12 02:24:37.627 UTC [33] LOG:  received fast shutdown request
dynt-db   | waiting for server to shut down....2018-07-12 02:24:37.683 UTC [33] LOG:  aborting any active transactions
dynt-db   | 2018-07-12 02:24:37.685 UTC [33] LOG:  worker process: logical replication launcher (PID 40) exited with exit code 1
dynt-db   | 2018-07-12 02:24:37.685 UTC [35] LOG:  shutting down
dynt-db   | 2018-07-12 02:24:37.751 UTC [33] LOG:  database system is shut down
dynt-db   |  done
dynt-db   | server stopped
dynt-db   | 
dynt-db   | PostgreSQL init process complete; ready for start up.
dynt-db   | 
dynt-db   | 2018-07-12 02:24:37.872 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
dynt-db   | 2018-07-12 02:24:37.872 UTC [1] LOG:  listening on IPv6 address "::", port 5432
dynt-db   | 2018-07-12 02:24:37.894 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
dynt-db   | 2018-07-12 02:24:37.941 UTC [46] LOG:  database system was shut down at 2018-07-12 02:24:37 UTC
dynt-db   | 2018-07-12 02:24:37.947 UTC [1] LOG:  database system is ready to accept connections
dynt-admin | Loading ..
dynt-admin | Starting dependencies..
dynt-admin | Starting repos..
dynt-admin | Running migrations for dynt
dynt-admin | 02:24:45.214 [info] Already up
dynt-admin | Running seed script..
dynt-admin | Success!
dynt-admin exited with code 0




That line is the culprit. The entrypoint has to be ["./bin/dynt", "start"] (or ["./bin/dynt", "foreground"]). When starting, there should be a log from dynt-server telling you that you did not specify a command for your application and that it doesn’t know what to do.


@jswn thank you for sharing i will have look at it for my phoenix app :slight_smile: :+1:


I’m not sure it is. The ENTRYPOINT just specifies what the base command should be. Then, as I have it set up for the guide, you can provide a different CMD to the ENTRYPOINT to specify which arguments should be passed in. For instance, his server should run with foreground, as seen in his repo.
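In the compose file that looks something like this; a sketch with the image and service names from this thread, everything else assumed:

```yaml
# docker-compose.yml (excerpt); port and image values are assumptions
services:
  server:
    image: dynt-release:latest
    container_name: dynt-server
    # The image's ENTRYPOINT is ["./bin/dynt"]; this command is appended,
    # so the container effectively runs "./bin/dynt foreground".
    command: foreground
    ports:
      - "5000:5000"
```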

Can you run docker logs dynt-server to see the output from your server container? I can’t even see it doing anything in the log you posted, which is very odd. It might not be starting or something.


Yes, you are right. The command gets appended to the entrypoint, and I had overlooked that in the compose file.

~/webapp/phx/1.4.0-dev/dynt (master ✔) ᐅ docker logs dynt-server
02:28:54.816 [info] SIGTERM received - shutting down

That’s it… It’s four minutes later than the previous logs because I ran it a couple of times after recording the other log.

Here’s a more complete log:

It’s interesting that when I shut down by hitting Ctrl-C (twice), only dynt-admin and dynt-db are shut down, not dynt-server:

^CGracefully stopping... (press Ctrl+C again to force)
Stopping dynt-server ... done
Stopping dynt-db ... done

but earlier on, after docker-compose up, all three are fired up:

Creating dynt-db ... done
Creating dynt-admin ... done
Creating dynt-server ... done
Attaching to dynt-db, dynt-admin, dynt-server


This is weird, I tried another docker-compose up and dynt-server is getting a SIGTERM:

dynt-server | 02:09:39.013 [info] SIGTERM received - shutting down
dynt-server | 
dynt-server | 02:18:22.816 [info] SIGTERM received - shutting down
dynt-server | 

I think it has something to do with the PostgreSQL db failing to launch. Here is the whole log of ‘run 2’:

It could be me forgetting to do docker-compose down before up, but I just tried that and I’m back to square one, where Ctrl-C will shut down db and admin but not server. I have yet to see the HTTP server return a page.


This is very odd; I’m honestly not sure what could be sending the server container a SIGTERM :confused: Try the following, then re-run everything and see if it helps:

  1. docker-compose down
  2. Remove the containers completely by running docker ps -a and removing all of the relevant containers with docker rm <container>
  3. Remove all the relevant images with docker rmi <image>

That will make sure that everything is fresh, including the containers, which are often cached by Docker and not always updated when you change something. Let me know if that helps!


To make sure everything was squeaky clean, I would blow away the com.docker.docker dirs from both Caches and Containers in my home dir on my Mac, every build. But I will try what you suggested.

Maybe try it on a Linux VPS or a Raspberry Pi running Ubuntu?


What would be the best way to isolate dynt-server? Just comment out the dynt-db and dynt-admin sections from docker-compose.yml? Is there anything in build.sh or release_tasks.ex I’ll need to remove or comment out to ensure that dynt-server runs alone?


I think I know what is wrong. I’m developing with Phoenix 1.4-dev, but I use FROM bitwalker/alpine-elixir-phoenix:1.6.4, which must ship Phoenix 1.3.x, hence the weird behavior…


Ah yes, I didn’t realize you were on 1.4. That could definitely be it. You could try going to the repository for the image and grabbing the Dockerfile so that you can modify it for 1.4 if you want!


Thank you for the great guide.

I have one issue when starting the containers. I get the following error:

ssp-admin | Running migrations for ssp
ssp-admin | {"init terminating in do_boot",{{badmatch,{error,eacces}},[{elixir_compiler,file,2,[{file,"src/elixir_compiler.erl"},{line,41}]},{'Elixir.Code',load_file,2,[{file,"lib/code.ex"},{line,629}]},{'Elixir.Ecto.Migrator',extract_module,2,[{file,"lib/ecto/migrator.ex"},{line,291}]},{'Elixir.Ecto.Migrator','-migrate/4-fun-0-',4,[{file,"lib/ecto/migrator.ex"},{line,259}]},{'Elixir.Enum','-map/2-lists^map/1-0-',2,[{file,"lib/enum.ex"},{line,1294}]},{'Elixir.Enum','-each/2-lists^foreach/1-0-',2,[{file,"lib/enum.ex"},{line,737}]},{'Elixir.Enum',each,2,[{file,"lib/enum.ex"},{line,737}]},{'Elixir.Ssp.ReleaseTasks',seed,0,[{file,"lib/ssp/release_tasks.ex"},{line,31}]}]}}

It seems that it requires different permissions for the migration files. The current permissions on the container are the following:

$ ls -al lib/ssp-0.0.1/priv/repo/migrations/
-rw------- 1 root root 351 Aug 6 09:02 20180723124131_create_unit.exs
-rw------- 1 root root 910 Aug 6 09:02 20180723124132_create_users.exs

Any idea on what I might have done wrong?


The files have to be readable by whoever runs the database migration.

You haven’t shown the necessary bits to tell whether that’s root or not. But chances are high that you either need to chown the files to the proper user, or that you need a chmod o+r priv/repo/migrations/*.exs (read access is what the migration runner needs). Of course, the full path also needs to be accessible by the user in question, so you might need to add some r’s and x’s on the directories the whole path up for "other".
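To illustrate on a scratch directory (the demo/ tree is made up; the file name is copied from the listing above):

```shell
# Recreate the problem: a migration file readable only by its owner.
mkdir -p demo/priv/repo/migrations
touch demo/priv/repo/migrations/20180723124131_create_unit.exs
chmod 600 demo/priv/repo/migrations/20180723124131_create_unit.exs

# Give "other" read access to the file, and execute (search) permission
# on every directory along the path so a non-owner can reach it.
chmod o+r demo/priv/repo/migrations/20180723124131_create_unit.exs
chmod o+x demo demo/priv demo/priv/repo demo/priv/repo/migrations

# GNU stat; prints 604 (owner rw-, other r--)
stat -c '%a' demo/priv/repo/migrations/20180723124131_create_unit.exs
```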


I followed the guide and it seems the executing user is default.

I added two calls with chown to change the owner to default.

RUN chown default lib/ssp-*/priv/repo/migrations/*.exs
RUN chown default lib/ssp-*/priv/repo/seeds.exs

That fixed the problem. Though I thought I might have done something wrong following the guide, as this step wasn’t mentioned.


Docker uses the permissions as they are on the host when copying (though not the owners). If you are on a Windows host, Docker will default to 600, as far as I remember. So odds are high you are using Windows :wink: