I've deployed 10+ apps to production and it still always takes me at least 1-2 hours. Most of the time goes to installing different dependencies for different distros (and versions of them), plus solving cryptic Erlang errors (once I spent 4 hours on a single DB migration error).
Building/deploying further releases/upgrades on small VPSes is also very, very slow (think DO's $5 droplet).
This is the biggest reason I started using Node and Go for simple to medium-complexity projects.
A while ago I also had Ansible scripts to set everything up, but they got outdated and created enough additional complexity that I stopped using them. Using hosting services like Heroku just for this is also a no-go for me and my clients.
Same boat. It takes a while to deploy an Elixir app. That said, once everything is set up it only takes a minute or so after running the edeliver command. It would be nice if there were more options for deploying. I'm aware of Gigalixir and Heroku as the only two quick/easy options, although Gigalixir didn't feel quick: it was a bit of a headache trying to get it to work, whereas Heroku was just an add-your-buildpacks-and-go experience. Perhaps Gigalixir is easier now. Supposedly there is an easy deployment tool in the works that will be part of the core framework, though I'm not sure where things are at with that.
This is my current strategy as well: same build and target machine + Distillery releases and hot-upgrades.
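For context, the Distillery 2.x flow for that strategy looks roughly like the following. This is a sketch only: it must run inside a Mix project with Distillery set up, and the app name (my_app), install path, and version are placeholders.

```shell
# Build a full release on the build machine (same OS/arch as the target):
MIX_ENV=prod mix distillery.release

# Later, build a hot upgrade against the previous release:
MIX_ENV=prod mix distillery.release --upgrade

# On the target, apply the upgrade without restarting the node:
/opt/my_app/bin/my_app upgrade "0.2.0"
```

The hot-upgrade path only works when the new release was built with `--upgrade` against the currently running version.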
Tried Edeliver at some point, but had some issues along the way. I also felt it added extra complexity to my setup.
Look over the Distillery documentation. It's just as good as you would expect in the Elixir ecosystem, and it has multiple guides for different scenarios.
I found dh_make upsetting at first approach, and I ran away from it a couple of times in the past, but it now fits the bill and I can use it in other contexts too.
Going into the file details, I have
I use it in conjunction with Hetzner Cloud, which has small VMs starting at 3 EUR/month (2 GB RAM, 20 GB disk, 1 vCPU), and run it as a cloud-init script (yes, cloud-init eats plain bash scripts).
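As an illustration, a cloud-init user-data script for such a VM might look like this. It's a sketch under assumptions: an Ubuntu image, and the Erlang Solutions package repo (the .deb URL may have changed since).

```shell
#!/bin/bash
# Cloud-init user-data sketch: provision a small Elixir VPS.
set -euo pipefail

apt-get update
apt-get install -y build-essential git postgresql

# Erlang/Elixir from the Erlang Solutions repo (URL is an assumption,
# check their site for the current package):
wget https://packages.erlang-solutions.com/erlang-solutions_2.0_all.deb
dpkg -i erlang-solutions_2.0_all.deb
apt-get update
apt-get install -y esl-erlang elixir
```

Hetzner (and most providers) just pastes this into the VM's first boot, so the box comes up ready to receive a release.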
It's nowhere near fancy or super cool, but it does the job for simple deployments.
Still using Docker plus some simple bash scripts around it for deployment; I'd say it's an okay experience.
Distillery etc. don't work well for me, mostly due to config issues (REPLACE_OS_VARS etc. is not enough). Using a different OS for development and production doesn't help either.
It's sad there is still no ready-to-go solution; something like 90% of devs are each individually reinventing the wheel.
Also, what exactly is not enough about it? Perhaps you are binding config values too late/early? Or is it the limitation that REPLACE_OS_VARS values have to be strings?
Yes. Also, in some cases I need more complex logic, e.g. passing a binary blob; at the moment that's done via base64 encode/decode.
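For the binary-blob case, the workaround mentioned is just base64 on both sides of the environment variable. A minimal sketch (file paths and the variable name are made up):

```shell
# Demo: round-trip a binary blob through an environment variable.
printf 'some binary\001data' > /tmp/blob.bin

# Before starting the release, encode the blob into an env var:
export MY_BLOB_B64="$(base64 < /tmp/blob.bin)"

# Inside the release's pre-start hook, decode it back to a file:
printf '%s' "$MY_BLOB_B64" | base64 -d > /tmp/blob.decoded

# Verify the round trip preserved the bytes:
cmp -s /tmp/blob.bin /tmp/blob.decoded && echo "round-trip ok"
```

Since REPLACE_OS_VARS only substitutes strings, the app (or a boot hook) has to do the decoding step itself.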
config_providers
I'll take a look, thanks. Still, having custom code for prod that does the same logic the regular dev.exs/test.exs do is not the best possible approach. The Docker version "just works", more or less hassle-free. And third-party code can't change much here without modifying how config works for everyone.
The first time I went to production I can say that it took me some time to take care of everything.
Deploying Elixir/Phoenix apps requires certain OS knowledge.
I have a Vagrant box for each project locally. Sure, you can use Docker or any other containerization software if you fancy it; however, there are some hidden costs to that approach.
I make sure that the VPS OS is the same as the OS I'll be using in Vagrant. For most projects it's just copy-pasting the Vagrant setup file.
Then I created a simple bash script which compiles everything with Distillery from within the virtual machine, copies it over via SCP, and extracts & runs it on the host. (You can compare it to edeliver; however, it's simpler and does exactly what I need.)
The whole compilation shouldn't take more than 10 minutes. If you use Distillery you won't need to install anything else on the VPS, since you get a self-contained release.
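The script described above might look roughly like this. Everything here is a placeholder sketch: the app name, version, host, paths, and the assumption that the project is mounted at /vagrant inside the box.

```shell
#!/bin/bash
# Sketch of the build-in-Vagrant + scp + run flow described above.
set -euo pipefail

APP=my_app            # placeholder app name
VERSION=0.1.0         # placeholder release version
TARGET=deploy@my-vps.example.com

# 1. Build the release inside the Vagrant box (same OS as the VPS):
vagrant ssh -c "cd /vagrant && MIX_ENV=prod mix distillery.release"

# 2. Copy the release tarball to the VPS (path is Distillery's default):
scp "_build/prod/rel/$APP/releases/$VERSION/$APP.tar.gz" "$TARGET:/opt/$APP/"

# 3. Extract and (re)start it on the host:
ssh "$TARGET" "cd /opt/$APP && tar xzf $APP.tar.gz && bin/$APP restart"
```

Because the Vagrant box matches the VPS OS, the release (with ERTS bundled) runs on the target without installing Erlang there.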
I currently run multiple Phoenix/Elixir apps on a single Linode VPS. Phoenix gets routed via NGINX on a per-domain basis. My usage of NGINX is purely based on the fact that I also run some PHP scripts on the same server.
For high-performance apps I'd recommend HAProxy instead, since it performs better and has more statistics. This way you don't need to set up 10 VPSes for 10 apps.
The only downside is that, yes, I need to configure a systemd service manually for each app, and the NGINX setup also needs to be defined manually. But that happens only the first time; after that it's just reloading the Vagrant box and deploying.
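For reference, installing such a per-app systemd service can be sketched like this. The unit contents, user, paths, and port are assumptions, not the poster's actual config.

```shell
# Sketch: install a systemd unit for a Distillery release (paths assumed).
sudo tee /etc/systemd/system/my_app.service > /dev/null <<'EOF'
[Unit]
Description=my_app Phoenix release
After=network.target postgresql.service

[Service]
User=deploy
WorkingDirectory=/opt/my_app
ExecStart=/opt/my_app/bin/my_app foreground
Restart=on-failure
Environment=PORT=4000

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now my_app
```

`foreground` keeps the release in the foreground so systemd can supervise it and restart it on failure.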
I'd also recommend something else if you ever have issues with DB migrations, or if you want to work directly on the remote DB in a secure way, at least for the data-migration part.
This may seem like bad practice from a security standpoint; however, if you use public-key authentication it's all good. Set up port forwarding from your production PostgreSQL (or other DB) to your localhost.
This way you can use the remote database locally, all over an encrypted connection, and you don't need to modify anything in PostgreSQL to allow external IPs, since everything goes through localhost.
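Concretely, the forwarding command would be something like the following (Postgres's default port 5432 and the user/host are assumptions; 40001 and 2887 come from the description):

```shell
# Forward remote Postgres (5432) to local port 40001, over SSH listening on 2887:
ssh -N -L 40001:localhost:5432 -p 2887 deploy@my-vps.example.com
```

`-N` means no remote command is run; the session exists only to hold the tunnel open while you point your local tools at localhost:40001.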
Here localhost is the hostname and 40001 is the local port you'll use.
The -p 2887 indicates that SSH connects on that port instead of the default SSH port.
The above can be used for everything else as well, including observing your app locally and migrating large amounts of data, all from your own box.