Where do you host **small** or even tiny Elixir projects?

I like creating small projects open to the whole world. Something in the style of my_general_domain.com/great_mortgage_calc_in_live_view

These would never be very popular; they would always have a tiny audience, if any. If one of them ever happened to become popular, I'd deploy it on render.com or fly.io, but until then many of them aren't even worth a domain.

If these were done in JS (even with some server-side Next.js/Nuxt.js), I'd post an endless number of them on netlify.com or some competitor for free or very cheaply.

What would be a place to host such projects when they're done in Elixir? I don't mind paying some $5-10 or even $20 a month for hosting, but not per site.

Or how do you host your tiny hobby projects and exercises?

9 Likes

You can also check Tiers & Pricing | Gigalixir

1 Like

I'm a big fan of fly.io. They don't bill you if you spend less than $5/mo. Now that I have an Elixir project and a Ghost blog hosted there, I think I'm paying about $11/mo.

2 Likes

Hmm, I can see how fly.io and Gigalixir are good for one project (I even use render.com in a similar case for one project); however, I am looking for a way to run a dozen or two projects cheaply.

Possibly on a slow machine, with very limited traffic, but running all the time (or at least with a fast start, not a 50-second-long cold start).

1 Like

I have an old MacBook Pro (with a broken screen), plugged into the internet in the office, on which I run a Cloudflare Tunnel for free. I run multiple Phoenix apps which people access via a domain name, in the usual way. (In my case, these are not public sites … they are for our internal users.) If you have a suitable MacBook (or PC), then this is all quite simple (even for me; ChatGPT is my friend) and free except for electricity. This has been running successfully, faultlessly one could say, for over 18 months.

(I also have cloudflared on my dev macbook for development - same idea).

6 Likes

An OVH dev sandbox is good for hosting small projects if you do not need performance.

But you are the admin of the machine.

1 Like

Hmm, indeed a single cheap VPS can probably run many Phoenix instances for just a few bucks a month.

Is anybody actually doing this? Do you put everything into Docker and set up nginx to serve different projects at different /paths? How do you plan the amount of RAM needed per project? And does deployment happen manually via SCP file copying?

Or is there a better way? Like making projects share the same RAM somehow (it is extremely unlikely that several tiny projects would become popular, or even be used by more than a couple of visitors simultaneously)?

2 Likes

If you own a domain (e.g. artem.xyz) you can put a Phoenix site on each subdomain (e.g. project-1.artem.xyz, project-2.artem.xyz), or you can use a reverse proxy like nginx to put each project in a different path for a given origin (e.g. artem.xyz/project-a, artem.xyz/project-b).
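
For the path-based option, the Phoenix app also needs to know it lives under that prefix so generated URLs point to the right place. A minimal sketch of the endpoint config, assuming nginx forwards artem.xyz/project-a to a local port (the app name project_a, ProjectAWeb.Endpoint, and port 4001 are illustrative):

# config/runtime.exs (illustrative names; adjust to your project)
config :project_a, ProjectAWeb.Endpoint,
  http: [ip: {127, 0, 0, 1}, port: 4001],
  url: [host: "artem.xyz", path: "/project-a", scheme: "https", port: 443]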

I put my toy projects on a cheap VPS (Hetzner). It’s good practice to run something like that because it also teaches you the basics of DevOps. Sure, you can pay someone like Netlify or Render or fly.io to hide the difficult stuff away, but learning how to do it yourself is a great skill to have. And it’s really not that difficult once you understand what is going on.

So my setup for these toys is to create the server, then use Ansible to set up the server (create users, disable root access, install and configure services, etc.). Then I have a playbook that pulls my Git repo, builds a Docker image, then restarts the server with the new container.

First, you set things up manually, then automate them. There are tons of great tutorials online that will show you how to set this stuff up.

9 Likes

You can take a look at main_proxy, which runs all projects in a single BEAM instance and so removes the per-VM RAM overhead. Though I'd only go there if that really becomes a problem; e.g. you could also use swap space to raise your RAM ceiling if you don't expect traffic to all projects at the same time.
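
From memory of its README, the configuration is roughly along these lines; treat the exact option names as an assumption and check the main_proxy docs before relying on it. Each project's endpoint then runs with server: false so that only the proxy opens a port:

# Rough sketch based on my recollection of main_proxy's README (verify the
# exact option names in its docs); app names and hosts are illustrative.
config :main_proxy,
  http: [port: 4080],
  backends: [
    %{host: ~r/app1\.example\.com/, phoenix_endpoint: App1Web.Endpoint},
    %{host: ~r/app2\.example\.com/, phoenix_endpoint: App2Web.Endpoint}
  ]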

2 Likes

I run a number of small Amazon EC2 VPSes where I have nginx as a reverse proxy into several Elixir/Phoenix applications. I deploy them as Elixir releases. I have a bash script that scps the release to the server, unpacks it, and restarts the systemd service. In addition, I have a tiny acme script renewing the certificates for them.

They use the same MySQL/Postgres instance, which is running on the server.

They are low traffic, used for administration purposes. They are currently sitting at around 60 MB of RAM usage. I don't think it would be worth running multiple applications in the same BEAM to save RAM; it is quite affordable to just increase the RAM slightly if needed.
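
If you want to sanity-check a number like that from inside the VM, the BEAM can report its own allocations from a remote console on the release (bin/my_app remote, where my_app stands for your release name); note this is the VM's internal view, so OS-level RSS will be somewhat higher:

# Rough per-category memory report in MB, run in a remote IEx session
:erlang.memory()
|> Enum.map(fn {type, bytes} -> {type, Float.round(bytes / 1_048_576, 1)} end)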

1 Like

The stack

I've personally deployed directly on VMs for years using docker-compose; that includes commercial products, some of them pretty beefy. Not only is this painless once you have your sample configurations and a workflow, but you are never limited to one cloud provider/platform.

I used to run the following services by default:

  1. Nginx - running in reverse proxy mode, also responsible for HTTPS traffic;
  2. Certbot - responsible for fetching and refreshing Let's Encrypt certificates;
  3. Postgres - in 95% of cases, having a self-managed Postgres instance on the same VM is better than the alternatives in terms of reliability, resource usage, and latency.

I've updated my setup recently and now I'm using Nginx Proxy Manager instead of nginx + certbot. You lose the declarative approach; however, it's that much easier to get HTTPS up and running, which for me is always painful when deploying a brand new project.

The only caveat with this approach is that your project needs to be dockerized. I mainly use GitLab + custom runners to achieve this; however, it's clear that this setup is more involved out of the box because of Docker. Maybe it could be improved by injecting the release into the container instead of building immutable containers.

Deploying new versions

The setup I use to deploy new versions is pretty primitive: I basically have a git repo called my_project_infra that contains the docker-compose.yml and all the additional configuration required to deploy that product, plus some shell scripts to make the work easier. I rely on git to be able to revert to old configurations if something goes wrong; I've worked with this setup in production with great success.

I can do zero-downtime deploys. The only big thing this is missing is continuous delivery, which I want for all projects these days. I will definitely invest some time in the future to make this work independently of any cloud provider on the market.

Self-hosted vs VPS

Until recently, I was using Hetzner (usually for dev envs + runners) and AWS (for prod) VMs to deploy, but I finally configured my own server at home.

I want to say that having your own infrastructure makes things 100x easier, especially if there is networking involved. I now have practically unlimited resources at the cost of the $20 I pay for electricity (I have a beefy HP ProLiant server, so you might easily get away with $5 for electricity if you have a more power-efficient system).

The home server is also many times more reliable, as I currently have it running in RAID 1, and I plan on setting up a NAS for periodic backups/snapshots; this kind of reliability comes at a premium if you offload it to a cloud provider.

As for networking, as others mentioned above, if your provider blocks inbound connections, using Cloudflare Tunnels is the way to go; their free plan is very generous and it's a turnkey solution that I've used reliably in the past. The only caveat is that you should take some time to understand the security implications of passing unencrypted traffic through their systems.

17 Likes

I love the fact that people go to great lengths to optimize code to shave off a millisecond but never tweak Postgres or check the latency of an external (cloud) Postgres instance. Let alone check their indexes once a year :sweat_smile:

5 Likes

I have recently taken a liking to Coolify - I self-host it on Hetzner and deploy toy projects to it. The Dockerfile generated by Phoenix just works, if I remember correctly. It's also wicked easy to deploy Postgres and a slew of other OSS dependencies. I highly recommend checking it out if you're comfortable managing a VPS and are aiming for a good balance of cost and developer experience.

3 Likes

If those small projects are all written by you, and you can ensure there is no library version conflict, you can run them all in the same Erlang release. There is main_proxy, but something as simple as this will work, with LiveView and all that:

defmodule MySuperProject.Plug do
  def init(options) do
    options
  end

  def call(%Plug.Conn{host: "app1.host.domain"} = conn, opts) do
    App1Web.Endpoint.call(conn, opts)
  end

  def call(%Plug.Conn{host: "app2.host.domain"} = conn, opts) do
    App2Web.Endpoint.call(conn, opts)
  end

  def call(conn, opts) do
    DefaultAppWeb.Endpoint.call(conn, opts)
  end
end

13 Likes

Yeah, because WebSockets since Phoenix 1.7 are also Plug-based. It doesn't work with earlier versions of Phoenix.

3 Likes

Yes. I forgot to mention that I use Phoenix 1.7+ and Bandit: N small projects, one release, one TCP port, one LiveDashboard.
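
Concretely, the wiring can be as small as one Bandit listener in front of the dispatcher shown above. A sketch, assuming each project's endpoint is configured with server: false so that only Bandit opens the shared port (module names are the illustrative ones from the earlier post):

# In the release's top-level application supervision tree
children = [
  # each endpoint is started but, with `server: false`, opens no port of its own
  App1Web.Endpoint,
  App2Web.Endpoint,
  DefaultAppWeb.Endpoint,
  # the single shared listener that hands every request to the dispatcher plug
  {Bandit, plug: MySuperProject.Plug, scheme: :http, port: 4000}
]

Supervisor.start_link(children, strategy: :one_for_one)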

3 Likes

[how to cheaply host many small projects with likely low traffic]

I'm also trying to figure out this exact same thing. A Hetzner VPS (or similar) with some ergonomic way to cram multiple projects into a single server seems like the way to go.

A couple of options that you likely already know about are Kamal and Dokku. I got excited about Kamal, but I don't like that it currently requires a container registry. Dokku seems promising and I tried to set it up but ran into skill issues. I like the idea of just being able to use Dokku as a git remote, push code, and have it automatically run all the build and deployment steps.

It could be cool to have one of these OSS PaaSes written in Elixir, maybe designed specifically to fit the needs of Elixir apps. I looked into main_proxy, but I'm put off by the constraint that all projects have to avoid dependency conflicts. It seems like it would add a maintenance burden that I'd rather not deal with.

I'm also thinking that I might try to skip the whole Docker part and just SCP mix releases onto a machine with Postgres and a proxy (nginx, Caddy, Traefik) running. Maybe not that crazy if you just gradually build up a couple of shell scripts to automate things a bit?
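
For the release side of that, the only extra piece is telling mix to also produce a tarball you can SCP over. A minimal sketch, assuming a hypothetical :my_toy project: build with MIX_ENV=prod mix release, copy the tarball from _build/prod, unpack it on the server, and run bin/my_toy start.

# mix.exs (illustrative project; deps and app config omitted)
defmodule MyToy.MixProject do
  use Mix.Project

  def project do
    [
      app: :my_toy,
      version: "0.1.0",
      elixir: "~> 1.15",
      releases: [
        # :tar adds a step that packages the assembled release as a tarball
        my_toy: [include_executables_for: [:unix], steps: [:assemble, :tar]]
      ]
    ]
  end
end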

2 Likes

Just out of curiosity, for those setting up their own Postgres instance: how do you manage backups? Do you set them up at all? Since the stakes are low, I totally get it if this gets skipped :slight_smile:

3 Likes

I think the easiest solution is to just enable snapshot backups on your VPS. I'm not sure how reliable that is for cheap providers like Hetzner; however, on AWS I've used it successfully a few times, the main benefit being that you can restore the snapshot almost instantly.

Otherwise, I think there are 100 more ways to do it, starting from a cron script that uses the Postgres CLI (pg_dump) to write a backup to a file and copy it somewhere, up to fancier tools like pgAgent.

You can also use Postgres replication. This is more involved, as there are database design considerations when using it; however, it allows you to have redundant topologies in case of catastrophic failures.

4 Likes

I'm using docker compose and Traefik. The CI pipeline for a given project logs in with SSH, does a git pull, and restarts the container for the project :slight_smile:

Quick and dirty since it’s just demos.

1 Like