Is Cowboy enough for most Phoenix deployments? Is it on par with nginx or Apache?


They aren’t really the same kind of thing. But to answer your first question: Cowboy is indeed enough for most Phoenix deployments. You will run into a dozen other bottlenecks long before you are limited by Cowboy.


How is Cowboy different from Nginx or Apache? And why do many people say it’s hard to deploy Phoenix? It appears easier than WordPress, RoR, or Django.

nginx and apache are webservers with a lot of extra features; cowboy is more of an application server.

Deploying WordPress is trivial: just drop the PHP files directly into the webroot of the server, open a certain URL after that to initialise the installation, and then optionally remove that entrypoint.

RoR (and probably Django as well) is a bit more complicated, as you need an application server, which potentially needs to be restarted after deployment.

Phoenix, or BEAM applications in general, are considered harder to deploy, as you need to precompile them on a host that matches the target’s architecture and operating system (it’s even more complicated than that if you are on a system with rolling releases).

Those at least are the native deployment strategies. You can of course also use Docker as a deployment target for each of them, but that’s another story.


I just deployed Phoenix on my VPS, and it took me longer to deploy my Django app with its Nginx + Gunicorn configuration, so I wondered whether I was missing something, since Phoenix with Cowboy needed almost no configuration. Both run on a rolling release of Debian.

As I really have no clue about your deployments, I can’t tell you much. But for Elixir deployments, the different levels of configuration (compile time, runtime, boot time) have always caused confusion and probably will continue to do so, as configuration bits are often not clearly documented by the libraries.

Also, precompilation is something not many people properly understand when they come from a PHP or Ruby background, where they can just upload source code and it will work just like that…


@Nobbz, I think I’m missing something in my Phoenix deployment. I did not do any precompilation, but my app works; it can’t be this simple. Any good tutorial you can recommend for me to read?

You can just use source code and run things using mix if you have erlang/elixir installed on the deployment target. This is not the common way to deploy code though. Usually you’ll bundle everything up in a “release” and deploy that.
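As a sketch of that flow, assuming the built-in release tooling of Elixir 1.9+ and an app named `my_app` (both assumptions, not from this thread):

```shell
# Build a production release on a machine matching the target OS/architecture
MIX_ENV=prod mix deps.get --only prod
MIX_ENV=prod mix release

# Copy _build/prod/rel/my_app to the server, then start it there:
_build/prod/rel/my_app/bin/my_app start
```

The release bundles the BEAM and your compiled code, which is why the build host needs to match the deployment target.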


Do I run these releases as a cron job?

No, usually by means of your operating system’s init process (PID 1), which is usually systemd nowadays.

Or if you dare, just stick them in a tmux session :smiley:
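As a minimal sketch of the systemd route (the app name, user, and paths are placeholders, and `foreground` assumes a Distillery-style release):

```ini
# /etc/systemd/system/my_app.service  (names and paths are examples)
[Unit]
Description=My Phoenix app
After=network.target

[Service]
User=deploy
WorkingDirectory=/opt/my_app
ExecStart=/opt/my_app/bin/my_app foreground
Restart=on-failure
Environment=PORT=4000

[Install]
WantedBy=multi-user.target
```

Then `systemctl enable --now my_app` starts the app and keeps it starting across reboots.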


I never ran apps via systemd or tmux. I found some tutorial on using systemd to run a Node.js app. Is there anything Phoenix-specific I should keep in mind?

So what keeps your nginx and gunicorn running then?


You’re right, I got mixed up; my socket and service files run there. I’ll check this Phoenix deployment out… thanks a lot.

I very much prefer to use nginx but the reason isn’t due to performance.

The reason is that you can configure nginx to handle SSL, serve and cache your assets, handle things like redirecting http to https (and www to your apex domain), and also set up subdomains.

Now, the real win here is you only have to do this in 1 place with nginx. Your Phoenix app doesn’t need to know anything about any of the above things. So if you release 2 Phoenix apps, you can generally reuse all of your nginx configuration on both apps, and also host 2 apps on the same server on different subdomains.

Plus, if you grow in the future and decide to use a CDN for your assets, it’s nice to have all of that stuff (caching headers, etc.) out of your app to begin with.

And lastly, all of that nginx configuration can be used for Django, Rails or whatever other framework you want to use with pretty much no changes. I really like when I can reuse what I learn for any stack.
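As a sketch of that reusable configuration (the domain names, certificate paths, and app port are placeholders):

```nginx
# /etc/nginx/sites-available/my_app  (names and paths are examples)
server {
    listen 80;
    server_name example.com www.example.com;
    # redirect http (and www) to the https apex domain
    return 301 https://example.com$request_uri;
}

server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:4000;
        proxy_set_header Host $host;
        # needed for Phoenix channels / LiveView websockets
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```

A second app is then just another `server` block proxying to a different local port.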


Same here.

I’m thinking of using docker to deploy Phoenix, but what/how is the best way to start the Phoenix/Nginx docker container on my VPS?

You could use systemd to start and monitor the container with your phoenix app.

The nice thing about systemd is that it will restart the container if it crashes; it can also capture stdout and forward it to syslog so it will appear in /var/log/messages.

You will need to create a unit file for your container.

If you also want nginx, I think the best way is to create a separate systemd service for the nginx container, create a dependency between the units so that the two containers start in the correct order and link them using docker’s linking capability.
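A sketch of such a unit for the app container (the container and image names are examples):

```ini
# /etc/systemd/system/phoenix-app.service  (names are examples)
[Unit]
Description=Phoenix app container
Requires=docker.service
After=docker.service

[Service]
# remove any stale container before starting a fresh one
ExecStartPre=-/usr/bin/docker rm -f phoenix-app
ExecStart=/usr/bin/docker run --name phoenix-app -p 4000:4000 myphoenixapp
ExecStop=/usr/bin/docker stop phoenix-app
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

An nginx unit could then declare `Requires=` and `After=` on this service to enforce the start order.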

You could also use something like Docker Compose to start and link the two containers, and I believe you could still use systemd to start Compose in much the same way as you’d use it to start a standalone container.


I would simply use, e.g., docker run --restart=always myphoenixapp.
If you go down the Docker route and start simple (with only a few containers), there is nothing wrong with using Docker’s own tools like restart handling and docker logs.

as soon as it gets more complicated you can think about throwing docker-compose into the mix… :wink:
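A minimal sketch of that Compose setup (the image names are placeholders):

```yaml
# docker-compose.yml sketch; image names are examples
version: "3"
services:
  app:
    image: myphoenixapp
    restart: always
    expose:
      - "4000"
  nginx:
    image: nginx:alpine
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
    depends_on:
      - app
```

`docker-compose up -d` then brings both up with restart handling, and `depends_on` replaces the manual unit ordering.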


I deploy my project into Docker Swarm using a GitLab pipeline. I have a build step and a deploy step. In case it is helpful for you, I’ve pasted below my .gitlab-ci.yml and my Dockerfile.

# NOTE: the original paste lost the job names and several top-level keys;
# they are reconstructed here (job names are guesses) to make the file valid.
stages:
  - test
  - build
  - deploy

.runner-tags: &runner-tags
  tags:
    - deploy

build:
  <<: *runner-tags
  image: docker:stable
  variables:
    DOCKER_DRIVER: overlay2
    DOCKER_HOST: "tcp://docker:2375/"
  before_script:
    - docker login -u $NEXUS_USER -p $NEXUS_PASSWORD
  stage: build
  script:
    - docker build -t ${IMAGE_NAME}:${CI_COMMIT_REF_NAME} -f Dockerfile-prod .
    - docker push ${IMAGE_NAME}:${CI_COMMIT_REF_NAME}
  only:
    - tags
    - master

test:
  <<: *runner-tags
  stage: test
  image: elixir:1.7-alpine
  before_script:
    - mix local.hex --force
    - mix local.rebar --force
    - mix deps.get --only test
  services:
    - name: mongo:latest
      alias: mongodb
  script: mix test
  only:
    - merge_request
    - master

.deploy-common: &deploy-common
  <<: *runner-tags
  stage: deploy
  before_script:
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add - > /dev/null
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - echo "$SSH_KNOWN_HOSTS" > ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts

deploy-staging:
  <<: *deploy-common
  script:
    - ssh $SWARM_MANAGER_DEV docker stack deploy -c services/docker-compose.yml $DOCKER_SWARM_GROUP_NAME --with-registry-auth
  only:
    - master
  environment:
    name: staging
    url: https://host-of-my-project-for-staging/

deploy-prod:
  <<: *deploy-common
  script:
    - ssh $SWARM_MANAGER_PROD docker stack deploy -c services/docker-compose.yml $DOCKER_SWARM_GROUP_NAME --with-registry-auth
  only:
    - tags
  environment:
    name: prod
    url: https://host-of-my-project-for-production

And my Dockerfile:

#Build Stage
FROM bitwalker/alpine-elixir:1.7.4 as build

#NOTE: APP_NAME, the second stage's ENV keyword, WORKDIR, and EXPOSE were
#lost in the paste; they are reconstructed here (APP_NAME inferred from the
#ENTRYPOINT path, port 4000 assumed as the Phoenix default).
ENV MIX_ENV="prod" \
    APP_NAME="content_proxy"

#Copy the source folder into the Docker image
COPY . .
RUN mix deps.get && \
    mix deps.compile && \
    mix release && \
#Extract Release archive to /export for copying in next stage
    RELEASE_DIR=`ls -d _build/prod/rel/${APP_NAME}/releases/*/` && \
    mkdir /export && \
    tar -xf "${RELEASE_DIR}/${APP_NAME}.tar.gz" -C /export

#Deployment Stage
FROM alpine:3.8

#Set environment variables and expose port
ENV LANG="en_US.UTF-8" \
    HOME="/opt/app" \
    TERM="xterm" \
    DEPS="ncurses-libs zlib openssl bash"

EXPOSE 4000

#Container default workdir
WORKDIR ${HOME}

#Install Dependencies for Erlang and Distillery
RUN apk --update add --no-cache --upgrade \
    ${DEPS} && \
    adduser -s /bin/sh -u 1001 -G root -h ${HOME} -S -D default && \
    chown -R 1001:0 ${HOME} && \
    rm -rf /var/cache/apk/*

#Copy and extract .tar.gz Release file from the previous stage
COPY --from=build /export/ .

#Change user
USER default

#Set default entrypoint and command
ENTRYPOINT ["/opt/app/bin/content_proxy"]
CMD ["foreground"]
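To build and run the resulting image locally (the tag is an example, and port 4000 is assumed as the Phoenix default):

```shell
docker build -f Dockerfile-prod -t myphoenixapp .
docker run -d -p 4000:4000 myphoenixapp
```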


My usual way of deploying Phoenix apps in production is behind Nginx. I have done one deployment with Cowboy only, and I would not recommend it. The downsides:


  • Setting up SSL certificates is way harder. With Nginx you can use certbot without taking care of anything. With Cowboy, you can still use certbot for Let’s Encrypt certificates and auto-renewal, but you will have to teach Cowboy to pick up the renewed certificate at runtime.
  • Not many tutorials. Seriously, you are on your own.
  • Static asset serving speed (probably not to a degree your regular user will notice though)
  • Not many people use Cowboy only, so you will have to understand what is going on and figure it out on your own, instead of copy-pasting a solution
  • You can’t easily run multiple apps under different URLs on the same server
  • Cowboy cannot listen on ports below 1024 on Linux unless you run it as root. Since HTTP is port 80 and HTTPS is port 443, you will have to do some workarounds.

Is it worth it though?


  • Cuts out one moving part of your system (nginx)
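For the privileged-port limitation, two common workarounds (the release path here is hypothetical) are granting the BEAM the capability to bind low ports, or redirecting the port in the kernel:

```shell
# Allow the Erlang VM to bind ports below 1024 without running as root
# (the path to the release's beam.smp binary is an example)
sudo setcap 'cap_net_bind_service=+ep' /opt/my_app/erts-10.0/bin/beam.smp

# Alternatively, redirect port 80 to the app's port with iptables
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 4000
```

The iptables route leaves the app itself unprivileged and listening on its usual high port.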

Please correct me if I am wrong, or add anything you have to these points.

EDIT: I wrote down how to deploy a Phoenix app with Cowboy in our company wiki, if anyone insists (warning: no one has ever followed this tutorial, I might have missed stuff, and I think it does not account for SSL certificate renewal).