What are the best recommendations for deploying a Phoenix/Elixir app? Are there any good tutorials that any of you would recommend?
It really depends on your project and your needs, but I would start here:
Generally, the official documentation is the best and most up-to-date source of information.
I’ve just recently gotten my deployments to a state I’m semi-happy with. I’d love feedback from more experienced people.
First, it is totally worth it to fully automate your deployments, with simple config changes for dev and prod. So worth it.
I went with Erlang releases as my deployable, because shipping dev tools to prod gives me the willies. This was a bit tricky, despite the help of `mix release`. My release build script has to unset `MIX_ENV`, generate a secret key, then set `MIX_ENV=prod`. I was having trouble with `phx.gen.secret` complaining about… not having a `secret_key_base`, but only if `MIX_ENV` was set (maybe from other commands in my shell session). Go figure.
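A minimal sketch of that build-script dance, based on the description above. The `/dev/urandom` fallback is my own stand-in so the sketch runs even where `mix` isn't installed; in a real project you'd use `mix phx.gen.secret` directly. (Also note a reply further down makes a good point: `SECRET_KEY_BASE` should really be generated once and stored, not regenerated per build.)

```shell
#!/usr/bin/env bash
set -euo pipefail

# phx.gen.secret balks if MIX_ENV is already set, so clear it first
unset MIX_ENV

# generate the secret (urandom fallback keeps this sketch self-contained)
SECRET_KEY_BASE="$(mix phx.gen.secret 2>/dev/null || head -c 48 /dev/urandom | base64 | tr -d '\n')"
export SECRET_KEY_BASE

# only now switch to prod and build the release
export MIX_ENV=prod
if command -v mix >/dev/null 2>&1 && [ -f mix.exs ]; then
  mix deps.get --only prod
  mix release --overwrite
fi
```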
I also wanted to read my config from environment variables at runtime (to set these in a systemd unit, look up environment files). This was surprisingly tricky, especially `DATABASE_URL`. When I read `DATABASE_URL` from `config/runtime.exs`, it was ignored when running migrations from the release. Everything worked when I read it from the `init` callback of my repo. At that point, setting up for a dev environment run is just a matter of sourcing `dev_env.sh`; the tricky part was pulling the credentials out of ansible-inventory (Ansible is my source of truth for credentials).
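For reference, reading `DATABASE_URL` in the repo's `init/2` callback looks roughly like this; the module names are assumed from the post's `Testapp`, so treat it as a sketch rather than the poster's exact code:

```elixir
defmodule Testapp.Repo do
  use Ecto.Repo, otp_app: :testapp, adapter: Ecto.Adapters.Postgres

  # Runs every time the repo (or a migration task) starts, so the env var
  # is picked up at runtime rather than baked in at compile time.
  @impl true
  def init(_context, config) do
    case System.get_env("DATABASE_URL") do
      nil -> {:ok, config}
      url -> {:ok, Keyword.put(config, :url, url)}
    end
  end
end
```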
Getting `seeds.exs` running in prod with a release (no `mix`) turned out to be too big of a pain to be worth it. I replaced the seeds script with a helper function, next to my in-prod db migrations (which were also tricky to do without `mix`). Helper module below, lightly redacted:
defmodule Testapp.Helper do
  alias Testapp.Support.Priority

  def eval_db_migrate() do
    for repo <- repos() do
      # TODO consider Ecto.Migrator.up/4? Why didn't I use this?
      {:ok, _wat, _idk} =
        Ecto.Migrator.with_repo(repo, &Ecto.Migrator.run(&1, :up, all: true))
    end
  end

  # for some reason, running this through with_repo works in a release
  # where I can't figure out how to start the apps so just doing Repo.insert works
  defp do_seed(repo) do
    priorities = [
      {"Apocalyptic", "Everything is on fire"},
      {"Serious", "Ongoing damage"},
      {"Medium", "Pretty noticeable problem"},
      {"Mild", "Not a huge deal"},
      {"Minimal", "Whenever you get time"}
    ]

    for {{name, desc}, id} <- Enum.with_index(priorities) do
      repo.get_by(Priority, name: name) ||
        repo.insert!(%Priority{id: id, name: name, desc: desc})
    end
  end

  def eval_seeds() do
    [repo] = repos()
    Ecto.Migrator.with_repo(repo, &do_seed/1)
  end

  defp repos do
    # the application atom must be lowercase (:testapp, not :Testapp)
    Application.load(:testapp)
    Application.fetch_env!(:testapp, :ecto_repos)
  end
end
I also created some fairly trivial wrapper shell scripts that source my environment file (for config/creds) before calling one of these functions with the release’s `eval` command.
My next planned major change to how I do deployments will be treating the database deployments more like backup restores, in the spirit of exercising error handling by making it the common case. As it is, I’ve created my prod db manually.
Again, I post this more as hints from a fellow noob and grist for conversation than accepted recommendations. I especially overanalyzed handling state and secrets, before deciding to YOLO it and just ship something before I died of old age. Feedback welcome.
For deployment with Docker, I’d put `mix deps.get` before `COPY config config`, but leave `mix deps.compile` after `COPY config config`. This saves the time of downloading dependencies (a huge headache in China) when rebuilding the Docker image, as long as `mix.exs` doesn’t change.
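A minimal sketch of that layer ordering; the base image, `MIX_ENV` handling, and directory layout are assumptions, not the poster's actual Dockerfile:

```dockerfile
FROM elixir:1.14-alpine
WORKDIR /app
ENV MIX_ENV=prod
RUN mix local.hex --force && mix local.rebar --force

# copying only the manifests means deps are re-fetched only when they change
COPY mix.exs mix.lock ./
RUN mix deps.get --only prod

# config changes invalidate compilation, but not the download layer above
COPY config config
RUN mix deps.compile

COPY lib lib
RUN mix compile
```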
I recently worked on putting a Phoenix app in a container for my side project. In the end I got it compiling without much issue, and I’m also able to run some tasks without `mix` from the container. `MIX_ENV` is the important Mix variable for deciding the environment for your builds. You can also set your own variables to control runtime behavior; I’ve found it useful to call mine `RELEASE_LEVEL`, which I keep to `production`, `staging`, and `test`, while `MIX_ENV` is `prod`, `dev`, or `test`. Together they work well.
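As a sketch of how that split can look in runtime config; `RELEASE_LEVEL` is the poster's own convention, and the app and flag names here are placeholders:

```elixir
# config/runtime.exs
import Config

release_level = System.get_env("RELEASE_LEVEL", "production")

config :testapp, :release_level, String.to_atom(release_level)

# e.g. staging runs a prod-compiled build but with relaxed integrations
if release_level == "staging" do
  config :testapp, send_real_emails: false
end
```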
For my own project, Guildflow, I use Linode. I build a release on a dedicated build “node” and then run the app on a second app “node” with a dedicated database “node”. It works okay and was a great learning experience.
That said, if the goal of your project is to validate the business concerns of the application, I’d probably try to lean on Gigalixir as it will just make your life simple.
I would recommend relying on a service like render.com or gigalixir. They have the tooling built in as well, and it will save you a lot of time.
How do you transfer the release between your build and production nodes? I am using rsync to do updates, but it breaks the connection with the running instance, so the bin/ commands don’t work anymore.
It’s manual `scp`, and on the app node there are folders for each version. After the copy is made from the build node to the app node, I ssh into the app node, cd into the running version, and use the release command `./bin/guildflow stop`. I then cd into the new version, run `./bin/guildflow daemon`, and if needed `./bin/guildflow remote` to run the migrations via `iex`: `iex> Guildflow.Release.migrate()`.
Hope this info helps. Let me know if you have any follow up questions.
I don’t think you are supposed to generate the secret key each time you deploy. Your users would get logged out after each deploy because the server would not be able to read the session cookies any more.
Create the production secret locally, store it, and set it in the `SECRET_KEY_BASE` environment variable on the server.
Personally, I would do an Elixir release with a systemd service behind NGINX keeping it straight-forward and simple. Ideally on Fedora/CentOS with SELinux. All deployed with a simple Bash script or in a git-push fashion.
Without any further requirements, that’s what I would do and see if it’s sufficient.
I came across this post:
Is anyone using this? Does it still work?
Speaking of systemd services, here’s my systemd unit file:
[Unit]
Description=Erlang release startup
[Service]
Type=simple
User=webserver
Group=webserver
ExecStart=/opt/webserver/web/bin/web start
ExecStop=/opt/webserver/web/bin/web stop
# read environment variables from this file, including secrets
EnvironmentFile=/opt/webserver/web_env
# shouldn't matter. whatever
WorkingDirectory=/opt/webserver
[Install]
WantedBy=multi-user.target
Just dump it in `/etc/systemd/system/webserver.service` (after adjusting paths, of course), then run `systemctl daemon-reload` and `systemctl enable --now webserver`. IIRC this was mostly copy-pasta from somewhere else, so I can’t take credit for anything but the EnvironmentFile line.
Is anyone using libcluster for their Phoenix apps and can share tutorials?
I use Distillery and Edeliver to help deploy on an Ubuntu VPS, it’s worked pretty well so far. This guide was really useful for me: https://medium.com/@zek/deploy-early-and-often-deploying-phoenix-with-edeliver-and-distillery-part-one-5e91cac8d4bd.
This guide (https://dreamconception.com/tech/phoenix-automated-build-and-deploy-made-simple/) has a more involved setup but is probably more robust as it uses Ansible and handles other aspects of managing deployments.
There are also managed services, like Heroku or Gigalixir, worth exploring.
I am using libcluster in GCP Kubernetes. I don’t recall any specific tutorials, but I know there are a few that cover it. If you hit a dead-end, let me know and I can share how I do it.
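For reference, a minimal libcluster topology using the Kubernetes DNS strategy looks roughly like this; the service and application names are placeholders, and it assumes a headless Service pointing at your pods:

```elixir
# config/runtime.exs
import Config

config :libcluster,
  topologies: [
    k8s_dns: [
      strategy: Cluster.Strategy.Kubernetes.DNS,
      config: [
        service: "myapp-headless",
        application_name: "myapp"
      ]
    ]
  ]
```

You then start `{Cluster.Supervisor, [Application.fetch_env!(:libcluster, :topologies), [name: MyApp.ClusterSupervisor]]}` in your application's supervision tree.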
I have two deployment strategies:

- Quick and dirty: deploy to Heroku. Many downsides, but the upside is how fast and simple it is. You can always use ngrok for something really quick and dirty, but Heroku is pretty convenient.
- For everything else: I deploy to GCP Kubernetes. A much more powerful setup, with each Elixir container connected on a private network via libcluster k8s DNS. Kubernetes and Elixir go VERY well together. In some respects, k8s container orchestration feels similar to how the BEAM / actor model works, but higher-level. I have two apps currently in production on this setup, both with essentially zero downtime thanks to rolling updates (no need for hot code upgrades). Receiving a SIGTERM from Kubernetes can be caught in Elixir, giving you the ability to gracefully kill off processes in the reverse order they were created. This gives you time to stash things in cache, wait for any HTTP responses, and hand off processes to other containers.
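A sketch of the graceful-shutdown hook: when Kubernetes sends SIGTERM, the BEAM begins an ordered shutdown, and any process that traps exits gets its `terminate/2` callback invoked before the VM stops. Module and function names here are illustrative, not from the poster's apps:

```elixir
defmodule Myapp.Drainer do
  use GenServer

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)

  @impl true
  def init(opts) do
    # without trap_exit, terminate/2 is NOT called during shutdown
    Process.flag(:trap_exit, true)
    {:ok, opts}
  end

  @impl true
  def terminate(_reason, state) do
    # finish in-flight work, flush caches, hand off state, etc.,
    # before the supervisor proceeds to kill the next process
    state
  end
end
```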
For what it’s worth, I don’t think I have come across a deployment / devops solution that checks as many boxes as Google hosted Kubernetes.