What would you like to see in a book about Elixir Deployments on AWS ECS?

I’m trying to find a hosting setup that will let me deploy a Phoenix application which talks to a Bitcoin daemon running on the same machine. As I understand it, cloud services like AWS recreate the file system every time you redeploy your app, which makes it impractical for the app I am trying to build. Does anyone have an idea of a solution to this problem? Thanks.

2 Likes

There are actually many ways to manage persistent storage with Docker. If you are just planning on deploying to AWS EC2, you should be able to use persistent EBS volumes out of the box (just don’t pick ephemeral instance storage when you spin up your instance). And if you are looking at ECS and Docker, you can easily mount directories from the underlying EC2 host into your containers.

It sounds like you need storage as a service; this comparison of Amazon S3 and Amazon EBS is a good starting point: https://www.cloudberrylab.com/blog/amazon-s3-vs-amazon-ebs/.
As for the blockchain side, AWS also recently added blockchain support: https://aws.amazon.com/blockchain/ and https://aws.amazon.com/blockchain/templates/.

Thanks. I’ll check out storage as a service, and the AWS blockchain support.

An open-source library that automatically provisions servers on AWS from a minimal configuration file. It would make the book more valuable and would give you additional exposure and respect.

2 Likes

I’d definitely be interested in this, particularly the Docker and CI layer. We’re using Docker containers on AWS for everything but Elixir, actually. For Elixir, we’re pushing to Heroku (the nightly reset doesn’t impact us). I think the key is simplicity; I shy away from complex, multi-dependency, fragile devops setups.

1 Like

You mean something like https://www.terraform.io/ ?

Yes. Thanks for pointing it out; I had completely forgotten about it.

Can you elaborate? Are there any costs that are not on my radar?

Actually, I think what is lacking at this point is a good approach that works for 95% of people; right now everyone has to reinvent the wheel.

A good multi-stage Docker image that builds a Phoenix application with sensible defaults would help (I have a good one and am looking to open-source it).

From there it’s general ECS knowledge, since everything applies in much the same way to any Docker image (whether it’s Python, Ruby, or Elixir).

The Elixir-specific part is then making distributed Erlang work. Part of this should be included in the Docker image. Another part is the mechanics of setting it up with ECS (VPC configuration, setting up service discovery); a sketch of the service-discovery side follows.
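
For the service-discovery piece, one common approach is libcluster with a DNS-based strategy pointed at the name published by ECS Service Discovery. The snippet below is only a minimal sketch under assumed names (myapp as the node basename, myapp.local as the assumed Cloud Map DNS name), not a drop-in config:

    # config/runtime.exs — minimal libcluster sketch for ECS service discovery.
    # Assumes ECS Service Discovery (Cloud Map) publishes the tasks under
    # "myapp.local" and that each release boots its node as myapp@<task-ip>.
    import Config

    config :libcluster,
      topologies: [
        ecs_dns: [
          strategy: Cluster.Strategy.DNSPoll,
          config: [
            polling_interval: 5_000,   # re-resolve the DNS name every 5 seconds
            query: "myapp.local",      # assumed Cloud Map service name
            node_basename: "myapp"     # must match the node name the release uses
          ]
        ]
      ]

The topology also needs a Cluster.Supervisor child in the application’s supervision tree, and the node name and cookie still have to be set when the release boots.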

Definitely interested in buying your book. I wish it were available now :wink:

1 Like

With AWS Fargate, if you end up using cumulative resources of 2 vCPU/16 GB for a month, you end up with:

2 (vCPU) x $0.0506 (vCPU cost per hour) x 720 (hours in a month) = $72.8640
16 (GB)  x $0.0127 (GB cost per hour)   x 720 (hours in a month) = $146.3040
--
Total: $219.1680

An On-Demand instance for the same configuration (r4.large) would cost you $95.760, which
would go down further if you use Reserved Instances (a 1-year, no-upfront RI would cost you $61.32).

So, if you have a service that needs to be up all the time, I’d just spin up a cluster and use ECS without Fargate. However, if you need containers for one-off tasks that don’t need to run continuously, I think Fargate is perfectly suited for that.

2 Likes

Holywow… My dedicated server is substantially better than those stats at ~$30/month… o.O

1 Like

But can you easily scale beyond this one server? :slight_smile:

1 Like

I have a few of these around: one in a California datacenter, one in Canada, and one in Germany. It’s been pretty trivial so far. As for scaling ‘on’ any given server, I haven’t needed to yet and haven’t hit their limits at this point, but upgrading them is pretty trivial, and the SLA guarantees the upgrade within 15 minutes of putting in the request and payment, without downtime except in some rare cases where a restart is needed (or the OS needs a reboot, such as when hot-swapping the CPU).

@minhajuddin,

I just signed up on LeanPub to be notified when your book is published.

Regarding the content, since you are targeting Elixir and ECS, why not generalize that to Elixir and containers? AWS users will certainly look at EKS (Kubernetes) once they pass a certain infrastructure size.

With that perspective, I would very much like to know more on how a BEAM runtime behaves within a container. How to configure your container deployment (ECS/Fargate/Kubernetes) with regards to CPU and memory limitations. Are additional BEAM settings needed to have these limitations respected? E.g. in the Java world, you need to pass additional parameters when starting the JVM or it would calculate its heap size based on the amount of host memory rather than the container constraints.
Further on, how to debug CPU and memory bottlenecks for Elixir in a container runtime?
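
As a first step for the “how does the BEAM behave in a container” question, a few standard introspection calls (nothing ECS-specific; a sketch, not a full answer) show what the VM actually detected at boot, which may reflect the host rather than the container’s limits unless you pin it explicitly (e.g. via the +S emulator flag for scheduler count):

    # Run inside a remote iex session attached to the release in the container.
    System.schedulers_online()
    |> IO.inspect(label: "scheduler threads the BEAM started")

    :erlang.system_info(:logical_processors_available)
    |> IO.inspect(label: "logical processors visible to the VM")

    :erlang.memory(:total)
    |> IO.inspect(label: "bytes currently allocated by the VM")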

Second, once we go to Distributed Elixir, what are the gotchas regarding inter-container communication? How is the container network to be configured? Can it be easily integrated with a service mesh for secure inter-service communication?

Other stuff I’m interested in for Elixir & containers:

  • Logging: how to configure the new Erlang logger and integrate it with, e.g., log rotation, a sidecar log shipper, or CloudWatch Logs for structured logging (a minimal console-logging sketch follows this list).
  • Metrics: how to expose service level metrics and integrate that with, e.g. CloudWatch metrics.
  • Message Queues
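
On the logging bullet above, the usual baseline before any rotation or shipping concerns is to log to stdout in a predictable single-line format so the container log driver (e.g. the awslogs driver forwarding to CloudWatch Logs) can pick it up. A minimal sketch using Elixir’s Logger console backend, with everything else assumed to be handled outside the container:

    # config/prod.exs — log to stdout only; the awslogs driver is assumed to
    # forward stdout to CloudWatch Logs, so no file rotation happens in-container.
    import Config

    config :logger, :console,
      format: "$time $metadata[$level] $message\n",
      metadata: [:request_id],
      level: :info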

And there are many more integrations you could pick that sit on the boundary between the Elixir runtime (BEAM) and cloud technology.

Ringo

3 Likes

@minhajuddin Any news here?

I had done a lot of writing on the book before Elixir’s built-in releases came out, and then a lot of things changed. I am planning to do a rewrite and release it by July of this year.

1 Like

I’m sorry if this sounds rude or stupid, but if I were deploying to AWS ECS, I wouldn’t be using Elixir; in that case I’d use Sinatra, Flask, some Go micro-framework, or Crystal.
I’d love to read a book about how to deploy and scale Elixir (Phoenix) apps on a bare-metal network of servers or cheap VPSes.

Two things I’m interested in, ECS-wise: 1) the pros/cons versus, say, just using bare AWS VMs, and 2) how to set up (or not set up) pg2 clustering.

It doesn’t sound rude :slight_smile:

ECS provides a good deployment platform if you want to use containers on AWS. It has excellent support throughout the AWS stack. The benefits of the Erlang VM don’t go away when you use containers and containers are very convenient.

3 Likes