Does anyone here deploy using Ansible and HAProxy on bare metal?

I am going to attempt to deploy to a cheap instance on Vultr tomorrow.

So I was wondering if anyone does deployments the vanilla way?

Here’s one blog post I’m going to study and deploy tomorrow:

Other resources would help tremendously.

P.S. I have deployed using Render before, but I want to do it the old-school way.

  1. No Docker.
  2. No Postgres, just SQLite.
  3. HAProxy.
  4. Ansible (because I want bare-minimum automation for securing and configuring the server).

Will figure out blue-green deploys later.


Here is how I have done it in the past.

  1. Create a mix release and assemble a tar.
  2. Rsync the tar to the server.
  3. Extract the tar and add the release to systemd.
  4. Put localhost:4000 behind nginx (or HAProxy, in your case).
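The first three steps above can be sketched as a small shell script. This is a minimal sketch, assuming a release configured with the :tar step; APP, HOST, DEPLOY_DIR, and the tarball path are hypothetical placeholders (the real tarball name includes your app version), and DRY_RUN=1 is the default so the script only prints what it would do:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical placeholders: adjust APP, HOST, and DEPLOY_DIR for your setup.
APP=${APP:-my_app}
HOST=${HOST:-deployer@203.0.113.10}
DEPLOY_DIR=${DEPLOY_DIR:-/opt/apps/$APP}

# DRY_RUN=1 (the default here) only prints each command; set DRY_RUN=0 to execute.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "$*"; else "$@"; fi; }

# 1. Assemble a release and tar it (needs steps: [:assemble, :tar] in mix.exs).
run env MIX_ENV=prod mix release --overwrite

# 2. Rsync the tarball to the server (actual filename includes the version).
run rsync -avz "_build/prod/${APP}.tar.gz" "$HOST:/tmp/${APP}.tar.gz"

# 3. Extract it on the server and restart the systemd unit.
run ssh "$HOST" "tar xzf /tmp/${APP}.tar.gz -C $DEPLOY_DIR && sudo systemctl restart $APP"
```

Run it with DRY_RUN=0 once the names match your setup; a real script would also handle migrations and rollback.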

Will add links to those later in the day (on my phone at the moment).


Yes please.

Found one that uses a Bash script:

Another excellent series of posts can be found on the Cogini blog. It predates Fly and is the reference I used last year to build my own scripts with Ansible. I also use Ansible to routinely deploy my blog articles on the fly.

1 Like

There is no school like the old school. :grinning:

1 Like

I agree about the old school approach.

I was suffering from analysis paralysis because of the deploy story that has become the norm.

So many unnecessary hoops to jump through. It felt like I was back in the Node.js ecosystem, which I’m desperately trying to get away from.

  1. Docker has hidden complexity beneath its facade of simplicity.
  2. Kubernetes takes it up a few notches.
  3. AWS is like a vulture.
  4. Fly seems unreliable.
  5. Heroku disappointed everyone, and Render followed suit with its pricing.

Everything seems so fallible, brittle.

One way, companies can pull the rug out from under us and make us pay for measly hardware specs.

The other way has tooling that’s just inefficiency piled on top of inefficiency.

I am done with complexities.

Just want to do bare minimum.

1 Like

I think that Docker has improved significantly (at least in terms of documentation) over the last 5 years. I use it for all deploys, be it a simple deploy with just a server and a database, or a production-grade deploy with a lot of services.

The biggest advantage of using Docker is the ability to configure your application using environment variables (yes, there are other ways, but this is the best one I have found so far), freeze local dependencies in an isolated image, and have built images that always work across different host OS flavors. It is true, however, that the complexity is a little higher, especially if you are new to Docker.

When I work with Docker, I stay away from extra tools that you don’t need. In most of the production deploys I’ve done on small projects, we would just use a docker-compose file from a git source, with versioned images; no need for any third-party tools or other abominations that make the deploy harder.
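A minimal sketch of that approach, assuming a single app service; the registry, image tag, and port are illustrative:

```yaml
# docker-compose.yml kept in a git repo; the image is pinned to a version,
# never :latest, so a deploy is just "git pull && docker compose up -d".
services:
  app:
    image: registry.example.com/my_app:1.4.2  # versioned image (illustrative)
    env_file: .env                            # configuration via environment variables
    ports:
      - "4000:4000"
    restart: unless-stopped
```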

Here’s my take on Docker:

I’m not a fan of these articles that bash a technology without any arguments; the author clearly has no experience with any kind of deployment and didn’t seem to invest time in learning even the basic functionality of Docker.

On toy projects you can go with whatever you want, but for production products, copying files to the server manually is just plain stupid: should something go wrong, you will be stuck with an outage, not to mention the lack of automation.

Lol I’m the author.

And I have linked to as many articles as I could find to back my claim.


Pieter Levels has many websites running from a single VPS, which costs him 500 dollars per month.

It’s PHP, SQLite, Nginx. All done manually.

I would have linked to a tweet or two where he mentions his tech stack and how much he makes from that stack.


I have no intention of making a Google size company or running micro services with dozen services.

Just one server, and will do it manually if I have to.

I won’t touch Docker or Cloud ever again.

Sorry to say this, but topic aside, I consider these kinds of articles toxic. Just because you’ve had a bad experience doesn’t mean you should make it that black and white; the truth is always somewhere in the middle…

1 Like

Ignore it or downvote it.

I had a bad experience and then found that others have experienced the same.

So I wrote about it in a single post, so I can revisit it and renew my hatred for Docker. :sweat_smile:

And no, I don’t consider criticism to be toxic.

I like to read articles about others’ experiences.

For instance, I would like to write my experience with JavaScript / Node and hope others find it relatable.


"Truth is somewhere in the middle. Just that I am not there yet."

+1 for this perspective. In tech, it is a lot easier to get carried away. :slight_smile:

1 Like
  aliases: aliases(),
  deps: deps(),
  releases: [
    razor_new: [
      steps: [:assemble, :tar]
    ]
  ]
Add this section to def project do in mix.exs.

Read more about releases here.

Rsync to the server is simple.

Adding to systemd:

[Unit]
Description=razor_new service
After=postgresql.service

[Service]
ExecStart=/home/ubuntu/deploy/bin/server start
ExecStop=/home/ubuntu/deploy/bin/server stop



Tweak this one as you like. It starts the service and keeps it running in the background; that’s what I use systemd for.

At the moment, my nginx is bloated with lots of ports and services it is serving as a frontend to.

However, I will post the blogs I used to configure it initially for Elixir.

1 Like

I deploy anything on “bare metal”, Elixir included, mainly via Ansible. Erlang and Elixir are, thanks to releases, among the most painless things to ship to a server, in my experience. To be fair, I am used to Python, and Python’s packaging story is, well, let’s say BEAM land is a nice place to be.

I wrote a blog post a while ago about deploying Elixir releases with Ansible, and the setup takes great care to follow Ansible best practices, like not reporting “xxx changed” when nothing did. I’ve been using it for years, and except for moving from ESL to the Debian stable repos, plus some minor changes I made recently so it works with hot upgrades via the excellent castle, it is still unchanged and works perfectly. Since I don’t want to just blogspam, here is my previous deployment, which will:

  • set up the postgresql database and database user for the app
  • set up a deployment directory (it keeps the previous 5 releases around via deploy_helper; note that just making a new release for every commit is a lot more efficient than you might think! check this commit for what I changed)
  • check out the git release, download prod dependencies, assemble a release, run migrations
  • make the new release the “current” one (started by systemd)
  • template service and systemd config
  • start and enable the service (and restart if something changed).

Since you mentioned no PostgreSQL, you can pretty much just delete the postgres lines. Configuring HAProxy with Ansible is straightforward: make a role with two tasks, one to install it and one to template the configuration; if the config changed, have your service manager reload it.
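That two-task role might look roughly like this (the module names are real Ansible builtins, but the file layout and handler name are illustrative):

```yaml
# roles/haproxy/tasks/main.yml
- name: Install haproxy
  ansible.builtin.apt:
    name: haproxy
    state: present

- name: Template haproxy configuration
  ansible.builtin.template:
    src: haproxy.cfg.j2
    dest: /etc/haproxy/haproxy.cfg
    validate: haproxy -c -f %s   # refuse to install a broken config
  notify: Reload haproxy

# roles/haproxy/handlers/main.yml
- name: Reload haproxy
  ansible.builtin.service:
    name: haproxy
    state: reloaded
```

The handler only fires when the template task reports a change, so repeated runs stay idempotent and don’t needlessly reload HAProxy.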

I have no intention of making a Google size company or running micro services with dozen services.

Just one server, and will do it manually if I have to.

I won’t touch Docker or Cloud ever again.

You do not need Docker or Cloud for running a company with dozens of services. I’ve worked at a place that had 3 digits of (mostly bare metal) servers on prem, mostly without Docker, deployed (mostly) via questionable shell scripts.
I am not a fan of the Cloud or Docker either, and love having a standard bare metal server to do what I want with. But I do not think it is right to say that they have no place or they will ruin your company due to costs / complexity / … . Each technology has its uses. I don’t think shell scripts are a good way to deploy software to 5000 hosts. I don’t think Kubernetes is a good way to deploy software to 5 hosts. But if it works for people, by all means, go for it. I’m happy with my setup :slight_smile:

About blue-green deploys, the simplest thing I can suggest: make a systemd template unit (myapp@.service instead of myapp.service) that takes the port on the command line or in an environment variable, then use the part after the @ to specify it: systemctl enable --now myapp@400{1,2}.service with Environment=PORT=%i will expand to one instance with PORT=4001 and another with PORT=4002. Or use hot code upgrades, if you want to get really fancy. I’m sure you can work with that.
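A sketch of such a template unit, with all paths and names hypothetical:

```ini
# /etc/systemd/system/myapp@.service (%i expands to the part after the "@")
[Unit]
Description=myapp on port %i
After=network.target

[Service]
Environment=PORT=%i
ExecStart=/opt/apps/myapp/bin/myapp start
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Then systemctl enable --now myapp@4001.service myapp@4002.service gives you two instances on different ports, and HAProxy can drain one while the other serves traffic. (If the two BEAM instances are clustered, they will also need distinct node names.)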

Keep us posted what you end up with :slight_smile:


I’m so glad I asked for help in this community.

Thank you so much everyone, and especially @jchrist for sharing your insights and your Ansible file.

I was stuck because of decision fatigue and analysis paralysis; now that I have gotten so many helpful responses, I can finally take that step.

I will update about my progress tomorrow.

Haha, yes I know each tech has its place, and people do use them.

It’s just that I believe startups / indies / newbies can learn from the failings of the mega corps and not choose “Java” again and again. (Here I’m using “Java” as shorthand for Node / Docker / …any other tool we now know to be a pain down the line.)

And criticism comes when we suffer thoroughly in our day to day usage of said technologies.

For instance, I haven’t used Wordpress, but I dabbled with PHP in the last year of my college project. I can’t dismiss other people when they say Wordpress is terrible or PHP is bad just because, in my limited experience, it was good.

Every criticism comes from somewhere, and I just shared my experience with Docker.

It’s my skepticism of the current state of the web that made me come to Elixir & Phoenix. If I were content deploying SPAs in Docker containers, I would never have jumped to this platform.

If I hadn’t used PM2 to make single-threaded Node simulate multi-threading, I wouldn’t have appreciated the BEAM.

If I hadn’t pulled my hair out trying to debug Node.js, I wouldn’t have appreciated the observability of the BEAM.

If I hadn’t experienced how tough distributed server communication over Redis is, I wouldn’t have appreciated the concurrency model of the BEAM.

If I hadn’t experienced how fickle socket connections are, I wouldn’t have appreciated Phoenix Channels and LiveView.

If I hadn’t experienced how terrible development with React is, I wouldn’t have learnt LiveView.

If I hadn’t seen what mutability does in JavaScript, or how OOP garrotes a project by the throat, I wouldn’t have appreciated functional programming in Elixir.

I’m sure companies will continue to use React, Node, and other tools that I found too painful to use. But in my naivety, I just wish they didn’t.

1 Like

This is my full deploy script:

set -e

# stash any uncommitted changes so the build runs from a clean tree
old_sha=$(git rev-parse -q --verify refs/stash)
git stash save -q
new_sha=$(git rev-parse -q --verify refs/stash)
if [ "$old_sha" = "$new_sha" ]; then
  made_stash_entry=false
else
  made_stash_entry=true
fi

# copy the tarball to each host and restart the service
# (hosts is defined elsewhere as the list of target servers)
for host in "${hosts[@]}"; do

  scp tmp/app.tar.gz deployer@"$host":

  ssh deployer@"$host" 'sudo systemctl stop phoenix'
  ssh deployer@"$host" 'tar xzf app.tar.gz -C /opt/apps/phoenix_app'
  ssh deployer@"$host" 'sudo systemctl start phoenix'

done

# pop the stash only if we actually created one
if $made_stash_entry; then git stash pop; fi

./ is just building the release.


Couldn’t be simpler!

1 Like

@derpycoder I wrote about this a couple years back: It was fun to play around with it.

1 Like

I underestimated the complexity of DevOps.

Found this discussion: