Deploying Elixir - does it get easier?

I am still doing something similar to this:

https://medium.com/@zek/deploy-early-and-often-deploying-phoenix-with-edeliver-and-distillery-part-one-5e91cac8d4bd
https://medium.com/@zek/deploy-early-and-often-deploying-phoenix-with-edeliver-and-distillery-part-two-f361ef36aa10

I’ve deployed 10+ apps to production and it still always takes me at least 1-2 hours. Most of the time goes to installing different dependencies for different distros (and different versions of them) and solving cryptic Erlang errors (once I spent 4 hours on a single DB migration error).

Building/deploying further releases/upgrades on small VPSes is also very, very slow (think DO’s $5 droplet).

This is the biggest reason I started using Node and Go for simple to medium-complexity projects.

A while ago I also had Ansible scripts to set up everything, but these got outdated and created enough additional complexity that I stopped using them. Using hosting services like Heroku just for this is also a no-go for me and my clients.

2 Likes

I usually make sure my build machine and target server are the same: same OS, same versions of libs.

Then I make a release package. From there it’s a straight scp + untar and it’s up and running.
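
A minimal sketch of that flow, assuming a Distillery release for an app called myapp built on a machine matching the target (names, version and paths are illustrative):

# copy the release tarball built on the (matching) build machine
scp _build/prod/rel/myapp/releases/0.1.0/myapp.tar.gz deploy@myserver:/opt/myapp/
# unpack and start it on the target
ssh deploy@myserver 'cd /opt/myapp && tar xzf myapp.tar.gz && bin/myapp start'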

5 Likes

Same boat. It takes a bit to deploy an Elixir app. That said, once everything is set up it only takes a minute or so after running the edeliver command. It would be nice if there were more options for deploying. I’m aware of Gigalixir and Heroku as the only two quick/easy options (although Gigalixir didn’t feel quick - it was a bit of a headache trying to get it to work, whereas Heroku was just an add-your-buildpacks-and-go experience. Perhaps Gigalixir is easier now). Supposedly there is an easy deployment tool in the works that will be part of the core framework. Not sure where things are at with that, though.

1 Like

I used this guide to set up our current deployment: Docker / Kubernetes on GCP.
It is really easy and fast to deploy a new build.
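
For anyone curious about the shape of that deploy loop, a hypothetical version (project, image and deployment names are made up) looks roughly like:

docker build -t gcr.io/my-project/myapp:v42 .
docker push gcr.io/my-project/myapp:v42
# roll the Kubernetes deployment over to the new image
kubectl set image deployment/myapp myapp=gcr.io/my-project/myapp:v42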

3 Likes

Yes, once you build a release it is pretty easy. I have an email course you can get (it’s free) on how to do it:

https://elixirtraining.org/release_email_course.html

4 Likes

This is my current strategy as well: same build and target machine + Distillery releases and hot-upgrades.

Tried Edeliver at some point, but had some issues along the way. I also felt it added extra complexity to my setup.

Look over the Distillery documentation: Home - Distillery Documentation. It’s just as good as you would expect in the Elixir ecosystem :love_you_gesture: and has multiple guides for different scenarios.
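
If I remember the Distillery flow right, it boils down to roughly this (app name myapp assumed; in older Distillery versions the task was plain mix release):

MIX_ENV=prod mix distillery.release                # full release: bin/myapp start
MIX_ENV=prod mix distillery.release --upgrade      # upgrade tarball for hot-upgrades
# on the target, after copying the upgrade tarball to releases/0.2.0/myapp.tar.gz:
bin/myapp upgrade 0.2.0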

1 Like

Similar path:

  • build a Distillery release using a base Docker image with the same system as the target host: elixir:1.8.1-slim for a Stretch host.
  • make a Debian package.
  • install the package on the host over SSH (a rough sketch is below).

Hot upgrade is not implemented (I do not need it).
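
A very rough sketch of that pipeline, with a made-up package name, version and host:

# build the release inside a Docker image matching the target (Debian Stretch)
docker run --rm -v "$PWD":/app -w /app elixir:1.8.1-slim \
  sh -c "mix local.hex --force && mix local.rebar --force && \
         mix deps.get && MIX_ENV=prod mix distillery.release"
# build the .deb from the debian/ packaging files, then ship and install it
dpkg-buildpackage -us -uc
scp ../myapp_1.0.0_amd64.deb deploy@host:/tmp/
ssh deploy@host "sudo dpkg -i /tmp/myapp_1.0.0_amd64.deb"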

Woah. Never heard about the Debian package approach before.

Are you using https://github.com/johnhamelink/exrm_deb? What happens after you install the .deb package? What do you start, and how?

1 Like

I used dh_make as suggested in https://debian-handbook.info/browse/stable/sect.building-first-package.html.

I did not know about exrm_deb…

I found dh_make off-putting at first and ran away from it a couple of times in the past, but it now fits the bill and I can reuse it in other contexts.
Going into the file details, I have:

debian/.gitignore
debian/changelog
debian/compat
debian/control
debian/copyright
debian/lift.env
debian/lift.ex.dirs
debian/lift.ex.install
debian/lift.ex.logrotate
debian/lift.ex.postrm
debian/lift.ex.preinst
debian/lift.ex.prerm
debian/lift.ex.service
debian/rules
debian/source/format

The approach I chose is a long-running process owned by a dedicated user, with the environment in a dedicated env file, /etc/lift/lift.env.
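
For illustration, a hypothetical /etc/lift/lift.env could look like this (variable names and values are made up, not the real ones):

# environment loaded by systemd via EnvironmentFile
PORT=4000
DATABASE_URL=ecto://lift:secret@localhost/lift_prod
SECRET_KEY_BASE=some-long-random-string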

service: systemd, excerpt:

[Service]
EnvironmentFile=/etc/lift/lift.env
ExecStart=/bin/sh -c '/usr/lib/lift.ex/bin/lift foreground >> /var/log/lift/lift.log 2>&1'
Restart=on-failure
User=lift

That is, I start the release much like it would run in a Docker container, which is what systemd expects for a service of Type=simple.

The package is built with a makefile (33 lines, 982 bytes).

2 Likes

This is nice. How does updating work? Simply reinstall the package and tell systemd to restart?

Just install the new package.

If it’s really just simple services, I use simple bash scripts to set up the machine and run updates/upgrades via SSH.

I wrote a small post on this here: https://medium.com/@mcsonique/deploying-elixir-phoenix-projects-to-production-44a236c643c

The gist, with less explanation, is here:

I use it in conjunction with Hetzner Cloud, which has small VMs starting at 3 EUR/month (2 GB RAM, 20 GB disk, 1 vCPU), and run it as a cloud-init script (yes, cloud-init eats bash scripts).

It’s nothing fancy or super cool, but it does the job for simple deployments.
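
The general shape of such a cloud-init bash script is roughly this (package names, the user and the unit name are illustrative, not the actual script):

# install basic dependencies and create a dedicated user for the app
apt-get update && apt-get install -y git curl build-essential
useradd --system --create-home app
# (fetching or building the release and writing a systemd unit would go here)
# finally, enable the service on boot and start it right away
systemctl enable --now myapp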

1 Like

This is great! Got to try that out. Thanks for the advice!

2 posts were split to a new topic: Deploying elixir on windows 10

Still using Docker + some simple bash scripts around it for deployment; I’d say it’s an okay experience.
Distillery etc. don’t work well for me, mostly due to config stuff (REPLACE_OS_VARS etc. is not enough). Using a different OS doesn’t help either.

It’s sad there is still no ready-to-go solution; something like 90% of devs are individually reinventing the wheel.

Have you taken a look at the config providers in Distillery 2?

Also, what exactly is not enough about it? Perhaps you are binding config values too late/early? Or is it because of the limitation that REPLACE_OS_VARS values have to be strings?

1 Like

has to be strings?

Yes. Also, in some cases I need more complex logic, e.g. passing a binary blob; at the moment that is done via base64 encode/decode.
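
The encode side of that workaround can be a one-liner in the deploy script; a tiny hypothetical example (file and variable names are made up):

# encode a binary blob into an env var; the app decodes it again with Base.decode64/1
export MY_BLOB_B64="$(base64 -w0 priv/secret.blob)"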

config_providers

I’ll take a look, thanks. Still, having custom code in prod to do the same logic that regular dev.exs/test.exs does is not the best possible approach. The Docker version “just works”, more or less hassle-free. Third-party code can’t change much here without modifying how config works for everyone.

Watch this:

The short answer is that it’s coming in the future - they’re working on it.

4 Likes

Hi there

The first time I went to production, it took me some time to take care of everything.
Deploying Elixir/Phoenix apps requires a certain amount of OS knowledge.

I have a Vagrant box for each project locally. Sure, you can use Docker or any other containerization software if you fancy; however, there are some hidden costs to that approach.

I make sure that the VPS OS is the same as the OS I’ll be using in Vagrant. For most projects it’s just copy-pasting the Vagrant setup file.

Then I created a simple bash script which compiles everything from within the virtual machine with Distillery, copies it via SCP, and extracts & runs it on the host. (You can compare it to edeliver… however it’s simpler and does exactly what I need.)

The whole compilation shouldn’t take more than 10 minutes. If you use Distillery you won’t need to install anything else on the VPS, since you’ll get a self-contained release.
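
A stripped-down sketch of that kind of script, with a made-up app name, host and paths:

# build the release inside the Vagrant box (project shared at /vagrant)
vagrant ssh -c "cd /vagrant && MIX_ENV=prod mix distillery.release"
# ship it to the VPS, unpack and (re)start
scp _build/prod/rel/myapp/releases/0.1.0/myapp.tar.gz deploy@vps:/opt/myapp/
ssh deploy@vps "cd /opt/myapp && tar xzf myapp.tar.gz && bin/myapp restart"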

I currently run multiple Phoenix/Elixir apps on a single Linode VPS. Phoenix gets routed via NGINX on a per-domain basis. My use of NGINX is purely down to the fact that I also run some PHP scripts on the same server.
For high-performance apps I’d recommend HAProxy, since it performs better AND has more statistics. This way you don’t need to set up 10 VPSes for 10 apps :slight_smile:

The only downside is that, yes, I need to configure a systemd service manually for each app, and the NGINX setup also needs to be defined manually. These happen only the first time; after that it’s just reloading the Vagrant box and deploying.

I’d also recommend something else if you ever have issues with DB migrations, or if you want to work directly on the remote DB in a secure way, at least for migrating data.
This may seem like questionable practice from a security standpoint, but if you use public-key authentication it’s all good: set up SSH port forwarding from your production PostgreSQL (or other DB) to your localhost.
This way you can use the remote database locally, all over an encrypted connection, and you don’t need to modify anything in PostgreSQL to allow external IPs since everything is done via localhost.

ssh -L 40001:localhost:5432 user@remotehost.net -p 2887 

Here 40001 is the local port you’ll use, and localhost:5432 is where PostgreSQL listens as seen from the remote host.
The -p 2887 indicates that I’m using that port for SSH instead of the default SSH port.
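
With the tunnel up, anything pointed at localhost:40001 reaches the remote database, e.g. (user and database names are placeholders):

psql -h localhost -p 40001 -U myuser mydb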

The above can be used for everything else as well, including observing your app locally or migrating big data, all from your own box.

Good luck!

3 Likes