Where do you host **small** or even tiny Elixir projects?

I have been running Dokku on a Hetzner VPS for a couple of years for several architectural spikes, one of them Phoenix. Once it was set up I found it easy to maintain (but I adopted containers before Docker, YMMV). Setup involved a bit of trial and error.

What I like about Dokku is that it makes it very easy to develop and launch gradually. E.g. start a new site, add Let's Encrypt for it, start developing without a login, just add Dokku basic auth, see if the idea works, and only later add logins, access control, etc. And I get effortless remote private Git repositories for every experiment.

A possible downside is that you need a bigger VPS than for hosting alone, since Dokku also builds before running (mine was already oversized because of a Haskell deployment; I don’t know if this is an issue with Elixir). It can also pull images, but I quite like the tight push-build-deploy flow.

I was not aware of Coolify. Might look into it to see how it compares.
(And nice to see @artem around, I believe we met eons ago at a conference in Helsinki).

4 Likes

I’ve been increasingly reaching for Livebook apps for this exact use case! Just set up a single Livebook instance on your favourite hosting provider, develop your app using Kino or PhoenixPlayground, hit “Deploy” and enjoy!
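
For anyone who hasn’t tried it, a deployable app can be as small as a single Kino cell. Here is a minimal sketch (the greeting form is just an illustration of mine, assuming Kino is pulled in via the notebook’s setup cell):

```elixir
# Minimal Livebook app cell: a form plus a frame that re-renders on submit.
# Assumes something like {:kino, "~> 0.12"} in the notebook's setup cell.
form =
  Kino.Control.form(
    [name: Kino.Input.text("Your name")],
    submit: "Greet"
  )

frame = Kino.Frame.new()

# Re-render the frame every time the form is submitted.
Kino.listen(form, fn %{data: %{name: name}} ->
  Kino.Frame.render(frame, Kino.Markdown.new("Hello, **#{name}**!"))
end)

Kino.Layout.grid([form, frame], boxed: true)
```

Hitting “Deploy” then publishes the notebook as a standalone app under /apps on the instance.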

The biggest drawbacks right now are that you can’t host anything at the root path (everything is nested under /apps), you don’t get domain-based routing, and you can’t easily design the homepage without running your own fork of Livebook.

Even so, the freedom to just jump into a Livebook, code something up and deploy it in minutes, at zero additional cost is beyond cool!

Also, with Livebook Teams you can actually have separate environments, e.g. dev/staging/prod, without compromising the workflow.

As an example, I’ve set this up on https://speedrun.dev. No apps deployed right now, but you get the gist :slight_smile:

1 Like

Kamal and Hetzner do the trick.

8 Likes

You should look at Docker Watchtower for CD: GitHub - containrrr/watchtower (a tool for automating Docker container base image updates).

You just let your CI pipeline push a new image and Watchtower will automatically pull and run it (I do not know if this is zero-downtime, though).

Another party trick that Elixir can do well, but which would be a terrible idea with other runtimes, is GitHub - sasa1977/site_encrypt: Integrated certification via Let's Encrypt for Elixir-powered sites, so that Elixir handles the Let's Encrypt certs itself. What I think you might lose here is that you can’t have one nginx reverse-proxying different URLs to different ports on the box.
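
Roughly, the integration looks like this, a sketch paraphrased from memory of the site_encrypt README (double-check option names and values against the current docs before relying on it):

```elixir
# Rough sketch based on the site_encrypt README; verify against the current docs.
# The endpoint is started under SiteEncrypt.Phoenix in your supervision tree
# rather than directly; the README shows the exact child spec.
defmodule MyApp.Endpoint do
  use Phoenix.Endpoint, otp_app: :my_app
  use SiteEncrypt.Phoenix

  @impl SiteEncrypt
  def certification do
    SiteEncrypt.configure(
      # Use the built-in ACME client, no certbot needed on the box.
      client: :native,
      domains: ["example.com", "www.example.com"],
      emails: ["admin@example.com"],
      db_folder: "/var/lib/my_app/site_encrypt",
      # Point at Let's Encrypt staging while experimenting, production when live.
      directory_url: "https://acme-v02.api.letsencrypt.org/directory"
    )
  end
end
```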

I used Watchtower and site_encrypt on a VPS before I decided to just use Fly.io, but if the cost savings were worth it to me I would totally go back to a VPS, because it worked very well.

One way to limit the cost on Fly.io is to have a shared Postgres on Fly.

2 Likes

The fact that they recommend not using it in production and reaching for k8s instead is honestly quite questionable; it might point to potential issues with the technology.

From what is written in the README, it seems that the old container is stopped and then the new one is started. This is not ideal, but for a project where you can afford downtime and don’t update often, it should be good enough.

I have a small second-hand computer in my electrical cupboard that I deploy OCI containers to using a small shell script, podman, and systemd. It’s great!

3 Likes

I’m using a mix of Kamal and Dokku. I’m transitioning more to Kamal since I like its architectural design more (Dokku insists on owning more of the host).

This is self-hosted on my rack of servers.

Kamal + Phoenix worked right out of the box; the only adjustment I had to make was to nginx, once I put it in front of Kamal (for wildcard SSL certs).

I found Elixir/Phoenix to be reasonably conservative on RAM, with my latest app consuming ~150 MB.

4 Likes

Thanks a lot for so many great answers! I don’t dare to mark any one of them as the solution, because all of these options are solutions. I will be studying what works best for my case and will maybe post back here with what worked.

3 Likes

Virtual Machine on my PC.

I have a non-standard deployment that works well for my needs. I run a Nerves instance on https://www.vultr.com by using nerves_system_vultr. It costs me $5/mo. I have an SQLite db I keep on the same host.

I use it to run DepViz and a few other tiny sites. If you’re interested in how it works, you can find the code at GitHub - axelson/vps.

For me the biggest downsides of hosting web apps this way are that all the dependencies have to be in sync and that it’s hard to add external dependencies (since there’s no apt-get install).

2 Likes

In case anybody cares: eventually I decided to install Dokku on a cheap 2 GB RAM machine. I had to mess with the initial configuration a bit; unfortunately it doesn’t support modern Elixir out of the box (the “herokuish” buildpacks do not know how to install recent versions), but I figured I could ask it to use the Gigalixir buildpacks, which are probably maintained by the Gigalixir folks. Figuring out how to keep app logs persistent also took some time (Dokku kind of supports it, but without a clear guide).

After that, deployment became pretty much “push to Dokku”; I don’t even need to have a Git repo hosted anywhere else, since everything can be developed on my machine.

It still has some issues; e.g. the machine once ran out of space because of accumulated Docker artifacts, so I had to google the proper Docker cleanup commands.

Yet despite all these things, I really like the ease of adding yet another app with a few commands and a git push. It feels right for tiny projects that I do not really care much about, and tons of them can now be hosted on a cheap machine easily.

5 Likes

Thanks for reporting back. If you ever write a tutorial about it I would be interested in reading it; please tag me.


Someone already mentioned Coolify in the thread. I really like it. I host it on Hetzner for some 14 EUR/month and throw all my idea-spaghetti on it. They haven’t yet surprised me with overages, even though this stuff is happening.

I think it’s something I’m doing wrong with Oban in one of the apps.


Another idea that I would love to explore: https://www.youtube.com/watch?v=fuZoxuBiL9o

ZFS snapshots. That’s how I manage it.

2 Likes

Might seem a bit crazy, but I do it all in Livebook these days. I set up speedrun.dev as a deployment target using Livebook Teams, and I deploy apps to it both from my local instance and from a cloud instance I host.

It’s pretty magical to be able to have an idea, jump into Livebook (even on my phone!), code it up, hit a button, and have it deployed instantly.

2 Likes

I also use Livebook (with Livebook Teams for easier deployments) for some R&D(ish) work, sometimes even in production, but to me it works for really tiny apps only. Once it gains even a tiny ambition of becoming a real service (with, oh, user accounts maybe), it becomes not very comfortable to run everything in a single notebook. And I can’t quite figure out debugging in Livebook.

Or do you somehow write and debug modules somewhere outside of Livebook, and the final notebook is just glue between modules?

The fact that you can’t write tests for Livebook notebooks is also not very encouraging for more complex things.

PS: Never mind, looks like you can: livebook/lib/livebook/notebook/learn/intro_to_livebook.livemd at main · livebook-dev/livebook · GitHub

Yeah, writing larger systems in Livebook gets unwieldy, partly because of the linear top-to-bottom format. Scrolling through 1000-line cells is somehow much less pleasant than scrolling through 1000-line files.

My best advice is to keep Livebook apps small. Aside from extracting reusable libraries, I’ve found it helpful to think of them a bit like microservices. You can deploy different application concerns as their own notebooks and link them up either via standard APIs or through clustering.

In fact, you can even build backend service apps that serve multiple logical “frontend” apps. I do this with my KinoReverseProxy library, which I deploy once and then configure to do host-based routing to many independent apps on the same Livebook instance.
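
I won’t reproduce KinoReverseProxy’s actual API here; the snippet below is just a generic Plug sketch of the host-based routing idea, with made-up hostnames and Req as an arbitrary HTTP client:

```elixir
# Generic host-based routing sketch (NOT the KinoReverseProxy API).
# Picks an upstream per Host header and forwards a simple GET with Req;
# a real reverse proxy would also forward method, headers and body.
defmodule HostRouter do
  @behaviour Plug
  import Plug.Conn

  # Hypothetical host -> upstream mapping
  @upstreams %{
    "app-one.example.com" => "http://localhost:4001",
    "app-two.example.com" => "http://localhost:4002"
  }

  @impl true
  def init(opts), do: opts

  @impl true
  def call(conn, _opts) do
    case Map.fetch(@upstreams, conn.host) do
      {:ok, upstream} ->
        resp = Req.get!(upstream <> conn.request_path, decode_body: false)
        send_resp(conn, resp.status, resp.body)

      :error ->
        send_resp(conn, 404, "unknown host")
    end
  end
end
```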

Finally, re: debugging, I find dbg/1 much more useful in Livebook than in normal IDEs. I miss step-through debuggers from working with other languages, but I don’t think I’ve ever managed to get one working with Elixir.
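
For example, tacking dbg() onto the end of a pipeline in a cell gives you an interactive view of every intermediate value:

```elixir
# In Livebook, dbg/1 on a pipeline renders each step's output with toggles,
# which covers a lot of what a step-through debugger would otherwise do.
"10 5 3"
|> String.split()
|> Enum.map(&String.to_integer/1)
|> Enum.sum()
|> dbg()
```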

Do I get it correctly, @elepedus, that most of the time you do not care about user accounts and billing, or “billing” (as in: I let people play with AI for free, but no more than X calls a month, so I don’t go bankrupt because of my hobby project)? I think most of the time it’s exactly user accounts and storing users’ settings/artifacts/whatever that forces me to write at least a simple Phoenix app.

May I ask you to describe (or sketch?) an example of how your Livebook microservices collaborate in some simpl(ish) project? I like the idea, but imagining how it could all work together is a bit tough for me.

There’s nothing inherent to running in Livebook that prevents us from having user accounts or billing.

For example, we could have a notebook that deploys an Accounts app, which stores data in a local SQLite or a remote managed database and either exposes an API that other notebooks can call or clusters with them to provide auth services. Or you could use something like Auth0.
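
A rough sketch of the clustering variant (module and node names are made up, and an Agent stands in for what would really be Ecto + SQLite):

```elixir
# Toy stand-in for an accounts service living in its own deployed notebook.
# In practice this would sit on top of Ecto + SQLite (or a managed database).
defmodule Accounts do
  use Agent

  def start_link(_opts), do: Agent.start_link(fn -> %{} end, name: __MODULE__)

  def register(email), do: Agent.update(__MODULE__, &Map.put(&1, email, %{email: email}))

  def fetch(email), do: Agent.get(__MODULE__, &Map.fetch(&1, email))
end

{:ok, _pid} = Accounts.start_link([])
Accounts.register("ada@example.com")
Accounts.fetch("ada@example.com")
#=> {:ok, %{email: "ada@example.com"}}

# From another notebook clustered with this one (node name is hypothetical):
# :erpc.call(:"accounts_app@livebook-host", Accounts, :fetch, ["ada@example.com"])
```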

A lot of the time it’s precisely these table stakes that eat up a lot of time on small projects, so I’ve been slowly working towards being able to share solutions between multiple apps.

Let me know if the above still leaves too many unanswered questions and I’ll see about putting together a sketch or something :slight_smile:

1 Like