For context, my quest was, and remains, to design, implement and self-host a new LiveView app in such a way that, whether it grows slowly or rapidly, I can scale both horizontally and vertically using whatever cloud or bare-metal resources the app needs and its revenue can pay for.
I’ll happily elaborate on my journey on request, but it’s a long and complicated tale of many mistakes and aha-moments. With some notable exceptions, there are ample tutorials, how-tos and developer documentation out there covering the details of what I tried and failed with, as well as what I eventually succeeded with.
In this discussion I’m hoping to maintain a mile-high, inch-deep perspective on the viability of the various approaches, platforms and packages required for a fully operational Phoenix app exposed to the world, and on the mesh of interrelated challenges, solutions, opportunities and risks that comes with the territory.
In short, let’s discuss what are the right things to do rather than how to get them done.
Though I started out chasing self-hosted OpenStack, and then Charmed Kubernetes using Juju and MAAS with Ceph for storage management, I recently found that all of the elements of Charmed Kubernetes I actually figured on using were available to me by running microk8s in clustered mode. My pfSense firewalls allowed me to set up the Calico CNI with MetalLB in BGP mode. The database is a “production grade” PostgreSQL HA cluster run by Percona’s version of CrunchyData’s Postgres Operator, and the entire cluster is monitored using Percona Monitoring and Management (PMM) running in an off-cluster Docker container.
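For anyone curious what MetalLB in BGP mode looks like against a pfSense peer, here is a minimal sketch using MetalLB’s CRD-based configuration (v0.13+; the copy bundled with older microk8s releases may still use the legacy ConfigMap format). All ASNs, addresses and names below are placeholders for illustration, not my actual setup:

```yaml
# BGPPeer: MetalLB speakers peer with the pfSense router (placeholder values).
apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: pfsense
  namespace: metallb-system
spec:
  myASN: 64512              # private ASN used by the MetalLB speakers
  peerASN: 64513            # private ASN configured on pfSense (FRR/OpenBGPD)
  peerAddress: 192.168.1.1  # pfSense LAN address
---
# IPAddressPool: the range MetalLB may assign to LoadBalancer Services.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: public-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.100.0/24
---
# BGPAdvertisement: announce addresses from the pool to the BGP peer.
apiVersion: metallb.io/v1beta1
kind: BGPAdvertisement
metadata:
  name: public-adv
  namespace: metallb-system
spec:
  ipAddressPools:
    - public-pool
```

With this in place, any `Service` of type `LoadBalancer` gets an address from the pool, and pfSense learns a route to it via BGP.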
It’s all working very nicely, including (eventually) getting cert-manager to issue and renew Let’s Encrypt certificates for the site. But all the how-tos and documentation on cert-manager focus on using it in conjunction with an Ingress controller such as nginx-ingress.
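To make the contrast concrete, this is the Ingress-centric pattern those guides describe (names are hypothetical): the `cert-manager.io/cluster-issuer` annotation tells cert-manager’s ingress-shim to create a `Certificate` behind the scenes, with HTTP01 challenges answered through the Ingress controller:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  annotations:
    # cert-manager watches for this annotation and manages the TLS Secret
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - app.example.com
      secretName: myapp-tls   # Secret cert-manager will create/renew
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp
                port:
                  number: 80
```

This is exactly the layer I ended up questioning, since the app behind it can terminate TLS and route requests itself.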
I really struggled to get cert-manager to work for me, and in the end I discovered that the version that ships with microk8s is quite old and buggy, so I had to upgrade it myself. But that leg of the journey, working through many different guides, examples and how-tos, made me realise that the generic ingress-controller concept in Kubernetes has a massive functional overlap with what a standard Phoenix app running on Cowboy does anyway. Especially if you’re already using a load balancer such as MetalLB (or whatever the cloud provider offers), I began to suspect that I could cut out nginx-ingress entirely. That’s what I did, with very pleasing results I daresay. Perhaps someone, or some event, will make me regret that choice, but so far so good. To get there, though, I had to dig into parts of cert-manager the how-to guides don’t cover: deploying a DNS01 solver with a delegated zone hosted at AWS Route53, and mounting the issued certificate as a volume in the pod manifest so that the certificate and key files become available to the pod.
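A sketch of that ingress-free arrangement, using cert-manager’s documented Route53 DNS01 solver and a plain Secret volume mount. Every name, domain, region and credential reference here is a placeholder, not my actual configuration:

```yaml
# ClusterIssuer: ACME with DNS01 challenges solved via a Route53 zone.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-dns01
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com            # placeholder contact address
    privateKeySecretRef:
      name: letsencrypt-account-key
    solvers:
      - dns01:
          route53:
            region: us-east-1
            accessKeyID: AKIAEXAMPLE    # placeholder IAM access key
            secretAccessKeySecretRef:   # secret key kept in a Secret
              name: route53-credentials
              key: secret-access-key
---
# Certificate: cert-manager keeps tls.crt / tls.key current in this Secret.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: myapp-tls
  namespace: default
spec:
  secretName: myapp-tls
  issuerRef:
    name: letsencrypt-dns01
    kind: ClusterIssuer
  dnsNames:
    - app.example.com
---
# Deployment excerpt: mount the Secret so the Phoenix endpoint (Cowboy)
# can read the certificate and key files directly and terminate TLS.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  selector:
    matchLabels: { app: myapp }
  template:
    metadata:
      labels: { app: myapp }
    spec:
      containers:
        - name: myapp
          image: myapp:latest
          volumeMounts:
            - name: tls
              mountPath: /etc/tls       # tls.crt and tls.key appear here
              readOnly: true
      volumes:
        - name: tls
          secret:
            secretName: myapp-tls
```

The Phoenix endpoint then just points its `:keyfile` and `:certfile` at `/etc/tls`, and the LoadBalancer Service sends port 443 straight to the pod; renewals land in the Secret automatically, though the app does need to pick up the refreshed files (a pod restart being the blunt instrument).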