Off topic posts from Virtualization with Phoenix

You always need to reason about why you need a tool, not why you don't need it. And in my experience, in 99% of situations k8s and containers are a needless layer of abstraction. If you are asking why you should run it directly on a VM instead of why you should run it in Docker, then something is absolutely wrong with our industry.

And I am leaving out all the reasons why I think Docker is the worst possible container implementation (fortunately, k8s is migrating away from it to better solutions).


I understand your point. However, please be open to circumstances different from your experience - such as environments where running apps in containers is the default, and using a VM is what requires a reason. The default could simply be different!

Don’t get me wrong - we have to talk about pros and cons, how to handle those needs, and promote “best practices” - but we shouldn’t criticize other choices just because “that isn’t necessary”.

And please note that I already said don’t use Docker for local dev unless needed :wink:


But you shouldn’t, because you raised it; now it is only fair that you enlighten us as to why Docker is the worst possible container implementation.

  • A client-server architecture isn’t the best solution for such a tool
  • For a long time it wasn’t possible to run the Docker daemon as non-root, so if someone escaped the container in some way you were screwed
  • It doesn’t integrate well with the system supervisor due to the client-server architecture (for example, the supervisor cannot monitor a running container and can sometimes mark the system as healthy even when the service is down)
  • Poor integration with SELinux and AppArmor; it is getting better, fortunately, but it took a hell of a lot of time
  • Lack of an init system within containers; it got better with the --init flag, but a lot of tools still do not support it (for example, k8s does not support it AFAIK)
  • Docker Hub seems like a good idea, but it only seems so. All abstractions are leaky, and running a CentOS container on a Debian-based host can be a bad idea (many distributions carry kernel patches, which can cause screwups when they conflict)
  • The --privileged flag combined with Docker’s client-server architecture, where the server runs as root - a recipe for disaster
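To make the --init point above concrete, here is a minimal command-line sketch (it assumes a local Docker daemon and uses the alpine image for illustration; this is not how any particular production setup looks):

```shell
# Without --init, the command itself runs as PID 1 inside the container.
# PID 1 is expected to reap orphaned children and forward signals,
# which most applications do not do, so zombies can accumulate:
docker run --rm alpine sh -c 'echo "my shell pid: $$"'

# With --init, Docker injects a small init process (tini) as PID 1,
# which reaps zombies and forwards signals to the real command:
docker run --rm --init alpine sh -c 'echo "my shell pid: $$"'
```

In the second invocation the shell is no longer PID 1, because tini sits above it.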

And as I said earlier - containers aren’t the solution, because right now the stack looks like:

  • Physical Hardware
  • Host OS
  • VM
  • VM OS
  • Docker runtime
  • Container
  • Your app

For me it seems like there is some duplication there - why run Docker on top of a real VM? A bug in any of these elements needs to be tracked down and fixed, and handling bugs isn’t something I like or want to do.

I am still sad that the idea of unikernels never took off, as IMHO it would be much better to have it like:

  • Hardware
  • Host OS
  • VM
  • Your app

Fewer layers, fewer abstractions, fewer leaks, fewer potential bugs.


Well, you can take out the VM and the VM OS; they aren’t required.

If you want to be fair with your Docker example, then you need to add the VM OS and the virtualization technology there as well.

For me Docker has been a life saver in my developer workflow :slight_smile:

For production I don’t have any experience at all, but I remember not being able to release Elixir the way you do in Rust or Golang, where you build the release for the target, copy it over, and run it, because Elixir still needed some dependencies to be present on the target.

Is this solved with Elixir releases? If so, then I see why some prefer not to use Docker to deploy Elixir in a production system.

But I understand that some are resistant to change or just prefer alternative ways, and there is nothing wrong with that either.

It is, but it is operating system virtualisation as opposed to hardware virtualisation. Instead of lying to the OS about the hardware it’s running on, you lie to the application about the OS it’s running on.

I agree that this is mostly what we end up with, and it’s pretty ridiculous. I don’t think Docker or a Linux container runtime really adds much overhead, but it does add complexity. I think it’s sad that SmartOS never took off, with containers running on the metal with proper isolation, as opposed to glued-together Linux kernel components.

I think they are the solution, just not the way they are implemented right now, and Linux will probably never get a really good implementation. That would look something like BSD jails plus resource management, or Solaris zones.


Not really, as in unikernels the OS is “part of your application”.

You can, but you need to pack it with ERTS for that platform. It isn’t straightforward, but it is perfectly possible - just like with Java or virtually any other technology that runs on an external VM.
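As a sketch, that flow could look like the following (assuming an Elixir ≥ 1.9 project; the app name `my_app`, host names, and paths are all hypothetical):

```shell
# Build on a machine with the same OS and architecture as the target.
# By default `mix release` bundles ERTS into the release, so the target
# machine needs no Erlang or Elixir installation of its own:
MIX_ENV=prod mix release

# Copy the self-contained release directory to the target and start it:
scp -r _build/prod/rel/my_app deploy@target:/opt/my_app
ssh deploy@target /opt/my_app/bin/my_app daemon
```

The same-OS/same-arch constraint on the build machine is exactly the “for that platform” caveat above.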

Not really - you only lie about the available system capabilities. The most important thing is that the kernel is still shared, so it is not totally hidden (as I said earlier, kernel patches will be shared, and probably kernel modules or eBPF code as well, if you give such capabilities to the container).

Yes, please… And jails on Darwin.


Probably better to split this into a separate discussion. (Is it possible to do that with existing comments? If so, please do, moderators…)

Some notes:

  • container != Docker. For example, you don’t have the client-server issue in other runtimes such as podman
  • some runtimes try to mitigate the security/isolation issues in non-traditional ways - see gVisor
  • big cloud providers are having issues with Docker, and that’s why they build their own runtimes with a compatibility layer (podman from Red Hat and gVisor from Google) instead of dumping the idea of containers :wink:
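As an illustration of the daemonless point (a sketch assuming podman is installed; the container name `web` is made up):

```shell
# podman has no central daemon: the container is a child process of the
# podman invocation, runs rootless by default, and can be watched by a
# system supervisor directly:
podman run -d --rm --name web docker.io/library/nginx:alpine

# Generate a systemd unit so the init system supervises the container -
# this addresses the supervisor-integration complaint about Docker:
podman generate systemd --name web
```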

SmartOS would be great for the Ops story, but that’s not a dev’s main concern. As a dev, I just need a way to build and run a container image locally, and right now Docker is the only viable option (e.g. a lot of effort has gone into Mac support; there is no podman on Mac).

I believe the benefit of Docker (not containers themselves) is purely the reproducibility and the same interface across targets (dev, deploy server), with good (though not perfect) isolation and thus some security (again, not perfect). And the value of that is much larger than the shortcomings, which can be risk-managed or contained, sort of.

We can criticize the design choices of Docker, but let’s avoid calling the choice to use Docker unreasonable. Software engineering is a series of compromises, isn’t it? :slight_smile: