Deploy with Edeliver in an autoscaling environment


Has anyone had any success with deploying dynamic hosts, for example an autoscaling server environment on AWS?

AFAIK edeliver requires that you specify your deployment hosts upfront.
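For context, this is what "specifying hosts upfront" looks like in a typical `.deliver/config`. A minimal sketch; the app name and host names are placeholders:

```shell
#!/usr/bin/env bash
# .deliver/config — hosts are fixed at configuration time.

APP="my_app"  # hypothetical app name

# Build host: where the release tarball is compiled.
BUILD_HOST="build.example.com"
BUILD_USER="build"
BUILD_AT="/tmp/edeliver/my_app/builds"

# Deploy targets: a static, space-separated list.
PRODUCTION_HOSTS="app1.example.com app2.example.com"
PRODUCTION_USER="deploy"
DELIVER_TO="/opt/my_app"
```

With autoscaling, the problem is that `PRODUCTION_HOSTS` is stale the moment an instance is added or replaced.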


Most likely, if you need to deploy a service on an orchestration platform like Kubernetes, you need to package it in a container. Kubernetes then deploys services automatically when needed, or restarts them when they die.

AWS and Google both have managed offerings for this, but I don’t have any experience with them.

Clustering Elixir nodes on Kubernetes


The simplest version is to hit the AWS EC2 API when you deploy and get a list of the relevant instances, then use that to populate the hosts list.
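Since `.deliver/config` is just a bash script, you can do this inline with the AWS CLI. A sketch, assuming your instances are tagged (the `Role=my_app_prod` tag and app name are hypothetical) and the CLI is configured on the machine running edeliver:

```shell
#!/usr/bin/env bash
# .deliver/config — populate the hosts list dynamically at deploy time.

APP="my_app"  # hypothetical app name

# Query running instances by tag, collect their private IPs, and
# flatten the result into the space-separated string edeliver expects.
PRODUCTION_HOSTS="$(aws ec2 describe-instances \
  --filters "Name=tag:Role,Values=my_app_prod" \
            "Name=instance-state-name,Values=running" \
  --query "Reservations[].Instances[].PrivateIpAddress" \
  --output text | tr '\t\n' '  ' | sed 's/ *$//')"

PRODUCTION_USER="deploy"
DELIVER_TO="/opt/my_app"
```

Note this only covers hosts that exist at deploy time; instances the autoscaler launches afterwards still need some other mechanism (e.g. pulling the current release from a launch script) to get the code.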


We package our projects in Docker containers, push them to Docker Hub and then deploy them to a Kubernetes cluster. The cluster itself can be scaled via the k8s Horizontal Pod Autoscaler combined with AWS Auto Scaling Groups.
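The moving parts of that workflow can be sketched as a few commands; the image name, deployment name, and thresholds here are all hypothetical:

```shell
# Build and publish the container image.
docker build -t myorg/my_app:1.0.0 .
docker push myorg/my_app:1.0.0

# Roll the new image out to an existing Kubernetes deployment.
kubectl set image deployment/my-app my-app=myorg/my_app:1.0.0

# Let the Horizontal Pod Autoscaler scale replicas on CPU load.
kubectl autoscale deployment my-app --min=2 --max=10 --cpu-percent=80
```

The HPA scales pods within the cluster; the AWS Auto Scaling Group is what adds or removes the underlying EC2 nodes when the cluster itself runs out of capacity.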

Also, we have a generator for our projects; you can use it to extract configuration samples: Nebo15/renew.


I do the same in our environment at work. We have an additional layer (OpenShift Origin) in there, but really it’s just automating some things that Kubernetes does not provide out of the box (namely source-to-image builds, an image registry, etc.). We push code to GitHub, it triggers a build in OpenShift, which pushes the container image to the internal registry, which in turn triggers automatic deployments to dev. We currently manually tag images into staging/prod environments due to our QA+release process, but in theory we could do full CI with this setup.

We use the autoscaling built into Kubernetes to control scaling up replicas for pods as needed, but we do not currently have our OpenShift hosts in an AWS autoscaling group, for…reasons, I guess. I set things up initially but handed control of that stuff over to our ops team a while ago, so it’s more or less not my call anymore.

Puppet/Chef/Ansible/any orchestration tool can autoscale and deploy. You do not have to use Docker/containers to get autoscaling and co if you or your organisation are not ready for it.

Erlang releases come with a lot of what you need anyway.

I’m not sure I agree that those tools can handle concerns like autoscaling or A/B or blue/green deployments out of the box. Are they even, strictly speaking, orchestration tools? My understanding is that they are configuration management tools, where part of your configuration is the state of your deployment environment (what tools are installed, etc.). Orchestration implies constant awareness of the environment’s current state, and reacting to both that state and external events (shifting load around, spinning up new instances, replacing crashed instances, scaling instances, etc.).

That said, containers don’t necessarily help or hurt you with any of the above. Releases don’t solve any of those problems either, but they aren’t meant to anyway.