Using LetsEncrypt with Phoenix (behind Nginx). How to make this more optimal?

Recently I set up Phoenix on an online application.

Below I will detail the exact steps I’ve used (and link to the guides I’ve followed), as well as the steps that felt ‘weird’. I’m making this topic mainly for the following reason:

I have Phoenix (with hot upgrades) working on an Ubuntu webserver behind Nginx, and using LetsEncrypt for SSL, but:

  • The services are all run as the root user. This might not be necessary?
  • As it is set up now, LetsEncrypt will not be able to auto-renew the certificate. This is of course very suboptimal.

1. Setting up the VPS

I bought a new VPS that runs Ubuntu (Xenial, 16.04). After following the basic setup procedure, I installed the following programs:

sudo apt-get install openssh-server vim nginx letsencrypt

(I believe that was everything necessary)

On the VPS I created a directory /var/www/my_app (and made sure that this directory is owned by the current user rather than the root user, by using sudo chown myuser:myuser /var/www/my_app).

Also important: locally, use ssh-copy-id yourname@yourhostname to make sure you don’t need to enter the SSH password every time you connect to the VPS. Here, yourhostname is the IP address of the VPS, although I definitely recommend setting up a record for it in your local /etc/hosts file.
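Such an /etc/hosts entry on your local machine would look like this (the IP address here is a placeholder from the documentation range; use your VPS’s actual address):

```
203.0.113.10    yourhostname
```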

2. The first release.

Now, locally, I had a Phoenix project. For the sake of this post, let’s call it MyApp. I added Distillery to the mix.exs file, and, following the Distillery+Phoenix guide, I ran:

mix release.init

After this you can edit rel/config.exs to your liking, but it is fine to keep it as-is.
I altered config/prod.exs to the following:

config :my_app, MyApp.Endpoint,
  http: [port: {:system, "PORT"}],
  url: [host: "localhost", port: {:system, "PORT"}],
  server: true,
  root: ".",
  version: Mix.Project.config[:version],
  cache_static_manifest: "priv/static/manifest.json"

Important changes are:

  • server: true makes sure that Cowboy starts.
  • the {:system, "PORT"} tuples mean that the actual port is read from a system environment variable at runtime.
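Conceptually, the tuple is resolved when the endpoint boots, roughly like this (a sketch for illustration, not Phoenix’s actual implementation):

```elixir
# Sketch: how a {:system, "PORT"} setting is turned into a port number at runtime
port =
  case Application.get_env(:my_app, MyApp.Endpoint)[:http][:port] do
    {:system, var} -> var |> System.get_env() |> String.to_integer()
    port when is_integer(port) -> port
  end
```

This is also why the service needs an environment file later on: the variable must be present in the environment of the running release, not just at compile time.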

I then generated the first release using the command

./node_modules/brunch/bin/brunch b -p && MIX_ENV=prod mix do phoenix.digest, release --env=prod

The important file it creates is called _build/prod/rel/my_app/releases/0.0.1/my_app.tar.gz.

Now, the first release could be pushed to the webserver using scp or, alternatively, rsync. One minor issue with scp is that it does not create directories that are missing on the remote host; rsync, on the other hand, does not show upload progress unless you pass it the --progress flag.

# From within the directory of my_app
scp _build/prod/rel/my_app/releases/0.0.1/my_app.tar.gz yourusername@yourhostname:/var/www/my_app/

After this, it’s time to test it out! Go to the /var/www/my_app folder on the VPS, and extract the my_app.tar.gz file: tar -xzf my_app.tar.gz.
Now, you can run it: sudo PORT=4000 ./bin/my_app console.
This should work without problems (if it does not, either the port you specified is already taken by another app, or your Phoenix configuration settings are off). You can test whether it actually works by going to http://yourhostname:4000 in a browser.

3. Setting up Phoenix on the VPS as service that auto-starts on reboot.

(This follows the Phoenix In Production with Systemd guide, with some alterations for Ubuntu.)

Create a new file on the VPS, which contains Environment Variables for the to-be-created service:

sudo vim /etc/default/my_app.env

Enter the environment variables the service needs here; at minimum, the port:

PORT=4000

All right!
Now, create a new file on the VPS, which contains info about our service:

sudo vim /etc/systemd/system/my_app.service

Fill in the following details here:

[Unit]
Description=Runner for MyApp
After=network.target # Ensures network is up

[Service]
Type=forking
EnvironmentFile=/etc/default/my_app.env
ExecStart=/var/www/my_app/bin/my_app start
ExecStop=/var/www/my_app/bin/my_app stop
Restart=on-failure

[Install]
WantedBy=multi-user.target

After making this file, run sudo systemctl daemon-reload to ensure the unit is known to the system.
Now you can start/stop the Phoenix server by calling sudo systemctl start my_app.service and sudo systemctl stop my_app.service, and check its status with sudo systemctl status my_app.service.

At this stage, you can check again whether everything works by starting/restarting the server, visiting http://yourhostname:4000 in a browser, and checking its status with the above command if something is wrong.

If you’re happy, run sudo systemctl enable my_app.service to ensure that the process is auto-started at boot.

4. Setting up Nginx.

Now the Phoenix app works fine, but it does not yet use Nginx. Using Nginx is a good idea if you’re planning to host multiple sites on this same VPS, or might want to route traffic differently in the future.

If everything was set up correctly during Step 1, if you go to http://yourhostname, you’ll see the nginx default page.

To add the phoenix website, go to /etc/nginx/sites-available/ and run sudo vim my_app here. Insert the following details:

upstream my_app {
    server 127.0.0.1:4000;
}

map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

server {
    listen 80;
    server_name .yourhostname;

    location / {
        try_files $uri @proxy;
    }

    location @proxy {
        include proxy_params;
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_pass http://my_app;
        # The following two headers need to be set in order
        # to keep the websocket connection open. Otherwise you'll see
        # HTTP 400's being returned from websocket connections.
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}

After you’re happy, run sudo ln -s /etc/nginx/sites-available/my_app /etc/nginx/sites-enabled to ensure that nginx will find it (nginx auto-includes config files in the sites-enabled folder, and the above command creates a symlink there to the file we just created in the sites-available folder).

Now, run sudo service nginx restart to ensure the changes are found. Nginx will let you know if there are errors. Now, if you go to http://yourhostname, you should see your Phoenix application in its full glory!

5. Performing a Hot Upgrade.

Now it’s time to move back to the local machine.

Change something in your application. Then, in the mix.exs file, bump the version number. For the sake of this guide, let’s change it from 0.0.1 to 0.0.2. (Of course, you follow Semantic Versioning, right?)
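In mix.exs, the bump is just the version field of project/0 (a sketch; keep your project’s existing keys):

```elixir
# mix.exs
def project do
  [app: :my_app,
   version: "0.0.2",  # bumped from "0.0.1"
   deps: deps()]
end
```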

After you’re happy (and have committed the changes into version control), run:

./node_modules/brunch/bin/brunch b -p && MIX_ENV=prod mix do phoenix.digest, release --env=prod --upgrade

Note the --upgrade flag. It ensures that a so-called appup file is created to perform a hot upgrade.

After running this, copy the new file to the remote location. Note that the location where we put it is slightly different from before (it now is in a subfolder):

scp _build/prod/rel/my_app/releases/0.0.2/my_app.tar.gz yourusername@yourhostname:/var/www/my_app/releases/0.0.2

(Note that scp might complain here about the 0.0.2 folder not existing remotely. Make sure it exists there, or copy the file using rsync)

After the copy has finished, move to the VPS and execute /var/www/my_app/bin/my_app upgrade 0.0.2. This should automatically extract the uploaded archive, and upgrade the running service to it. Amazing!

(Check if it worked successfully in the browser.)

This step should be repeated every time you change something in your application :slight_smile: .

6. SSL using LetsEncrypt

(This follows the Setting up Phoenix Elixir with Nginx and Letsencrypt guide, but has some important changes as to nginx’s workings on Ubuntu, and some better settings based on the comments of that guide.)

All right! Time for LetsEncrypt!
(Make sure that you have a domain name in DNS that points to your VPS first.)

On your VPS, ask LetsEncrypt to generate a certificate for you:

letsencrypt certonly -a manual --rsa-key-size 4096 --email -d

During this process, you will be asked to host a challenge response on your webserver. The easiest (but most horrible) way to do this is to open a second window with an SSH connection, and run:

sudo service nginx stop
sudo su root # Yes, root access is necessary to start a webserver on port 80

followed by the mini-python-server script that the letsencrypt process will ask you to copy and run, then pressing ENTER in the letsencrypt setup procedure, and finally exit in the root-mode tab to leave root mode.

(NOTE: This step above is the one that feels the most hacky. There must be a nicer way to do this?)

All right! Now alter your nginx configuration file: (sudo vim /etc/nginx/sites-available/my_app)

upstream my_app {
    server 127.0.0.1:4000;
}

map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

server {
  listen 80;
  server_name .yourhostname;
  return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl;
    server_name .yourhostname;

    ssl_certificate /etc/letsencrypt/live/;
    ssl_certificate_key /etc/letsencrypt/live/;

    location / {
        try_files $uri @proxy;
    }

    location @proxy {
        include proxy_params;
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_pass http://my_app;
        # The following two headers need to be set in order
        # to keep the websocket connection open. Otherwise you'll see
        # HTTP 400's being returned from websocket connections.
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}

This will ensure that nginx will be able to find the certificate files, and will automatically redirect all http traffic to https.
Now, run sudo service nginx start to start nginx again and load the file.

Try it out in your browser. Hopefully, everything should work. :slight_smile:

All right! You’ve made it!

Now, here are questions for people who’ve either followed along with this procedure, or have done similar setups in the past:

  • Can this be simplified?
  • How can I limit the number of things that need to be done as superuser?
  • How can I install LetsEncrypt certificates without stopping nginx (breaking availability in the meantime)?
  • How can I make sure that the LetsEncrypt certificates are auto-renewed? (Again, preferably without stopping the server.)

Oh, and also, please let me know if there are mistakes in the guide or if something is not clear enough, of course :wink: .


This is a nice write-up!

For your questions, based on my experience:

Running the Phoenix app shouldn’t need sudo or root, since you’re binding to port 4000. I’m not really familiar with systemd, but I believe you should be okay specifying User=myuser in my_app.service.
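That would be these lines in the [Service] section of /etc/systemd/system/my_app.service (a sketch; myuser must own /var/www/my_app for this to work):

```
[Service]
User=myuser
Group=myuser
```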

If your nginx binds to port 80, it must start as root. But thankfully only the nginx parent process runs as root; it spawns worker processes that run as a less-privileged user (nginx or www-data, I forget which).

For letsencrypt, if you’re using nginx, you can use webrootauth. Just search for “letsencrypt nginx webrootauth reverse proxy”. It should be simpler (no need to run the mini-python-server), though with more setting up to do for the automatic challenge; but that is a one-time process, and after it the renewal can be automated with cron. You still need to reload nginx (sudo nginx -s reload) after the certificates are renewed, but reloading the config doesn’t stop nginx, so availability is preserved.
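A sketch of that webroot setup, assuming /var/www/letsencrypt as a (hypothetical) challenge directory: add a location block to the port-80 server block, so nginx serves the challenge files itself without touching Phoenix:

```
location /.well-known/acme-challenge/ {
    root /var/www/letsencrypt;
}
```

After obtaining the certificate once with letsencrypt certonly --webroot -w /var/www/letsencrypt -d yourhostname, a cron job running letsencrypt renew followed by nginx -s reload keeps it fresh without downtime.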

I read an article written shortly after letsencrypt was launched last year and it helped me set it up (can’t find it now though sadly), but since then there has been loads of articles that might be helpful.


I also have a doubt…
What about the case where I want the SSL certificate to be handled by Phoenix itself?

Well, this thread is about Nginx, so you probably want a new thread. However, you can just use an ACME library then. :slight_smile:


With our new project, I’ve been updating this guide a little.
The following is mostly for my own remembering: :sweat_smile:

One important thing I only learned today is the possibility of restricting access to port 4000, so that all external traffic really has to go through nginx.
To do this, we can set up a firewall using the tool ufw, which is installable on Ubuntu servers (and other Linux distros as well):

ufw default deny incoming
ufw default allow outgoing
ufw allow ssh
ufw allow http
ufw allow https
ufw allow in on lo to any port 4000 # This line makes sure that Nginx is able to connect to the running app.
ufw enable

More info in this guide on DigitalOcean


Or change Phoenix to only bind to specific interfaces.

If you could elaborate on this, that would be super helpful! :smiley:

Assuming nginx is on the same host and you’re proxying to 127.0.0.1, you can tell Phoenix/cowboy to bind only to 127.0.0.1 rather than all addresses, meaning it’s not directly accessible from outside. The cowboy plug offers an ip option to allow you to do this.

Now, with that said, you should still have a firewall set up to prevent mistakes, but this is another way to help secure your setup.
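Concretely, that could look like this in config/prod.exs (a sketch; the ip option is passed through to cowboy’s listener):

```elixir
config :my_app, MyApp.Endpoint,
  http: [port: {:system, "PORT"}, ip: {127, 0, 0, 1}]
```

With this in place, the app only accepts connections arriving via the loopback interface, i.e. through nginx.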


I have another update to make to this:

If you want to use RabbitMQ (securely!) you can either fiddle for a long time with setting up the RabbitMQ SSL configuration, or you can forward the AMQP stream through Nginx and use it as a TLS terminating proxy, allowing you to re-use the Let’s Encrypt certificates:

  • Install RabbitMQ

  • Make sure the RabbitMQ server is started using sudo service rabbitmq-server start

  • Make sure it is started on system restart using sudo systemctl enable rabbitmq-server

  • Configure ufw with the following extra ports:

sudo ufw allow 5671/tcp

(This opens the default amqps port to the outside world)

  • Enable the RabbitMQ management plugin:
sudo rabbitmq-plugins enable rabbitmq_management
  • Add a section to your server config to expose the RabbitMQ management API to the outside world over HTTPS:
    location ~* /rabbitmq/api/(.*?)/(.*) {
        # 15672 is the management plugin's default HTTP port
        proxy_pass                         http://127.0.0.1:15672/api/$1/$2?$query_string;
        proxy_buffering                    off;
        proxy_set_header Host              $http_host;
        proxy_set_header X-Real-IP         $remote_addr;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    location ~* /rabbitmq/(.*) {
        rewrite ^/rabbitmq/(.*)$ /$1 break;
        proxy_pass                         http://127.0.0.1:15672;
        proxy_buffering                    off;
        proxy_set_header Host              $http_host;
        proxy_set_header X-Real-IP         $remote_addr;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
(Put this before the existing location {...} blocks that are more general.)

  • To allow access to RabbitMQ remotely, add the following to your main nginx.conf (It unfortunately cannot live inside the site-specific configuration because that one is included inside the http {...} block and therefore does not allow you to write the stream {...} block.)
stream {
    upstream rabbitmq_backend {
        server localhost:5672;
    }

    server {
        listen 5671 ssl;

        ssl_certificate /etc/letsencrypt/live/; # managed by Certbot
        ssl_certificate_key /etc/letsencrypt/live/; # managed by Certbot

        # The following settings were copied from /etc/letsencrypt/options-ssl-nginx.conf; be sure to keep them up to date!
        # (We cannot include that file since it contains an 'ssl_session_cache' that cannot be reused.)
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_prefer_server_ciphers on;
        # End of copied settings

        ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

        proxy_connect_timeout 1s;
        proxy_pass rabbitmq_backend;
    }
}
  • Make sure the nginx config is correct by testing it using sudo nginx -t (It will give you readable error messages if it fails).
  • Reload nginx with sudo service nginx reload (reloading does not interrupt running connections).
  • Try connecting from e.g. a local AMQP client using the url amqps://, which should now work!
  • For extra security, you might want to remove the default guest user from RabbitMQ, or at least give it a non-default password; since RabbitMQ believes all connections come from localhost, its ‘guest can only connect from localhost’ protection is no longer useful.
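For example, from Elixir with the amqp hex package, connecting over the TLS-terminated endpoint looks roughly like this (hostname and credentials are placeholders):

```elixir
# amqps:// targets the nginx stream listener on port 5671
{:ok, conn} = AMQP.Connection.open("amqps://myuser:mypassword@yourhostname:5671")
{:ok, chan} = AMQP.Channel.open(conn)
```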

:slight_smile: All in all, this was a relative breeze to set up! RabbitMQ is an amazing piece of software. I’m very happy not needing to manually configure all of this.

(However client certificates are something I still need to look into)


So… RabbitMQ is still open to unencrypted access?

@sribe as far as I can see (please prove me wrong!), in the above configuration RabbitMQ is only open for encrypted access from the outside world (it is open for unencrypted access from localhost). What client certificates would add is making man-in-the-middle attacks impossible, which can still happen in a system that uses TLS encryption.

Perhaps I misunderstood, do you need to access RMQ from the outside world? Or are you just directing internal traffic to your gateway?

Ah! Good question; I need to make this more clear in the guide:

In our case, we have one ‘coordination server’ running Ruby, and one or more ‘consumer servers’ running Elixir (there might be multiple of these in the future). RabbitMQ runs on the ‘coordination server’, and the Elixir servers connect to it remotely (some of these might at some point be at different geographical locations).

But of course, these are trusted servers also operated by me and my colleagues; not ‘just anyone’ from the outside world. :slight_smile:

Another option is Traefik, which fetches and manages your letsencrypt certs for you.

Did you have to install nginx stream-core separately?

As far as I can remember, I did not have to install it separately. But that might very well depend on which operating system (and thus which package manager) your VPS is running.