Do we need Kubernetes as our application grows?

I have read about containers, clusters, and Kubernetes. We have an API written in Elixir/Phoenix.

The application deals with live camera views (live streams, FFmpeg, timelapses, and JPEGs).

There are almost 500 cameras in working state, to which Evercam Server sends requests at different frequencies, e.g.

1 JPEG per second, or one every 2 secs, 5 secs, 10 secs, 60 secs, 5 min, or 10 min.

All the cameras' snapshot workers run under a Supervisor.

Right now the application is deployed on a Hetzner server:

- CPU: Intel® Core™ i7-6700 quad-core, incl. Hyper-Threading Technology
- Hard drive: 2 × 500 GB SATA 6 Gb/s SSD (software RAID 1)
- Additional graphics card: GeForce® GTX 1080
- Connection: 1 Gbit/s port, guaranteed bandwidth 1 Gbit/s
- Backup space: 100 GB
- Traffic: unlimited *

These are the problems I want to discuss at the moment.

The application is growing with camera recordings and many other features, like syncing recordings from local NVRs/DVRs to cloud recordings and handling RTSP streams. The application is pretty stable overall, but sometimes we run into issues at the server level (and by server level I mean Ubuntu, the internet connection, RAID, etc.), the system goes down, and we may even lose the Ubuntu server as a whole.

Every camera has IP restrictions: when we start our application on any Hetzner Ubuntu server, we place that server's IP in the cameras' firewalls to allow the application to operate on them, at both the software and hardware level. So if we suddenly have to set up a new server, we are not in a position to handle the cameras. (Two years ago this was a real problem; now Hetzner has a floating IP feature, where we can assign an IP to a server and, if that server is lost, set up a new one and assign the same IP to it.) But is this the best way to handle it? What we actually want is to switch to a multi-server architecture, where not only does another server come to the rescue if we lose one, but the application load is also divided between servers.

In the Elixir Mini-Documentary, I saw José Valim attach two IEx terminals to each other and then call, from terminal 2, a function that was defined in terminal 1.
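That demo relies on distributed Erlang, which works out of the box with named nodes. A minimal sketch of the same idea (the node names `one`/`two` and the hostname `myhost` are placeholders, not from the documentary):

```elixir
# Terminal 1: start a named node and define a function there.
# $ iex --sname one
defmodule Greeter do
  def hello(name), do: "Hello, #{name}!"
end

# Terminal 2: start a second named node on the same machine.
# $ iex --sname two
Node.connect(:"one@myhost")   # replace "myhost" with your actual hostname

# Call Greeter.hello/1 on the remote node, where the module is defined:
:erpc.call(:"one@myhost", Greeter, :hello, ["world"])
# => "Hello, world!"
```

The same mechanism works across machines (with a shared Erlang cookie and open EPMD/distribution ports), which is the BEAM-native starting point for spreading workers over several servers without Kubernetes.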

The solution I am looking for right now is how we can divide the load of the application. As I described, there are snapshot recording workers, and there will also be a camera user interface attached to the API, which will send requests as well. Is there any solution to my problem where:

There will be one IP to which the user interface connects, and behind that IP we can attach as many servers as we want, so that the 500 snapshot workers get divided between the servers, and the API's read and write requests to the DB are divided between servers as well? Is there no solution but K8s?

(My questions may look ambiguous to you; sorry for that.)


I don’t think you need to go all the way to Kubernetes. Is AWS an option? You could use an AWS network load balancer with a static IP and an auto scaling group of EC2 instances.


There are lots of solutions besides Kubernetes. Spin up another server the same as your current one. Then spin up a third, smaller server with just HAProxy on it, pointed at the two, and point all your clients at it. Configuring HAProxy is pretty easy.
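For reference, a minimal HAProxy config fragment for that two-backend setup might look like the following. The IPs, ports, and names are placeholders; it assumes each Phoenix app listens on port 4000:

```
# /etc/haproxy/haproxy.cfg (fragment)
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend phoenix_front
    bind *:80
    default_backend phoenix_back

backend phoenix_back
    balance roundrobin
    # Placeholder addresses for the two app servers; "check" enables health checks
    server app1 10.0.0.1:4000 check
    server app2 10.0.0.2:4000 check
```

Clients then only ever see the HAProxy machine's IP (which could be the Hetzner floating IP), so backends can be added, replaced, or drained without touching the cameras' firewall rules.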