Elixir API deployment with SSL

Hello Team,

I have developed an Elixir API and am preparing to deploy it to production.

Can I get some help on how I can enable TLSv1.2 with all the other accompanying options?
Do I necessarily have to use Nginx in order to achieve a complete TLS configuration?

Thanks in advance.



Are you using Phoenix? If so, setting up TLS is relatively straightforward. Take a look at the Phoenix guide on how to do it:



Hi @wmnnd,

I am not using Phoenix.

Just the Elixir language and Plugs.

Then you’re probably using Plug with Cowboy. You can configure HTTPS for Cowboy through Plug.Cowboy: https://hexdocs.pm/plug_cowboy/Plug.Cowboy.html#https/3
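For anyone landing here later, a minimal sketch of what that Plug.Cowboy configuration can look like. The module name, port, and certificate paths are placeholders; `:versions` pins TLS 1.2 as the original poster asked, and `cipher_suite: :strong` picks Plug's curated strong-cipher set:

```elixir
# Hypothetical supervision-tree entry for a plain Plug + Cowboy app.
# MyApp.Router and the certificate paths are placeholders.
children = [
  {Plug.Cowboy,
   scheme: :https,
   plug: MyApp.Router,
   options: [
     port: 8443,
     cipher_suite: :strong,               # Plug's curated strong cipher set
     certfile: "/etc/ssl/my_app/cert.pem",
     keyfile: "/etc/ssl/my_app/key.pem",
     versions: [:"tlsv1.2"]               # restrict to TLS 1.2
   ]}
]

Supervisor.start_link(children, strategy: :one_for_one)
```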


Many thanks, @wmnnd

I’m going through the documentation and will try it out shortly.
Will feedback on the outcome.



You don’t have to, but it’s easy to use NGINX as a proxy together with Let’s Encrypt certs.


FWIW, I would never recommend running raw Erlang/OTP on the internet except for hobby apps; I always use a dedicated load balancer (haproxy, nginx, traefik, …):

  • crypto is expensive: the LB provides very efficient TLS termination without clogging up your OTP apps
  • crypto is hard: the LB has this as a core priority. If you read the last 3 years of OTP code related to TLS, you will see quality, valiant efforts to keep up, but it’s always trailing what the LB can provide. If there is a security issue with a mainstream LB, you can be certain it will be fixed within a handful of days.
  • LBs are really good at load balancing. Simple things like handling blue/green deployments, switching nodes for maintenance, dealing with malicious traffic, and rate limiting are most easily done right at the point of contact with your infrastructure. If your LB squashes a bad user, your app can perform properly for all other users without constraints; if not, your rate-limiting activities have to compete with your valid users for resources.
  • when you need to update TLS certificates or change connection parameters, you won’t have to take your app down to do so.

At the end of the day, these 4 reasons are really just 1 reason: separation of concerns is a good thing. Use the right tool for the job. Do TLS termination with a process that is designed for it. Same for load balancing of end-user requests.


I use haproxy even for my hobby projects – this way I can “bind” as many hobby websites as I want on port 443 of the same server. I use haproxy’s dynamic configuration https://www.haproxy.com/blog/dynamic-configuration-haproxy-runtime-api/ on each app’s startup, via a helper along these lines:

defmodule HAProxyHelper do
  # Connect to haproxy's runtime API through its Unix domain socket.
  def connect(unix_socket_path) do
    :gen_tcp.connect({:local, unix_socket_path}, 0, [:binary, :local, active: false])
  end

  # Send a command and wait for haproxy's reply.
  def query(sock, query, timeout \\ 1_000) do
    :ok = exec(sock, query)
    :gen_tcp.recv(sock, 0, timeout)
  end

  # Fire-and-forget a command.
  def exec(sock, query) do
    :gen_tcp.send(sock, <<"#{query}\n\r", 0>>)
  end
end
to edit the backends / domain names. Each app uses 0 as its port so that :gen_tcp assigns a randomly available one; then I can use something like :ranch.get_port(MyApp.Endpoint.HTTP) (for Phoenix apps on Cowboy) to get the assigned port, which I then pass to haproxy via the helper above.
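The port-0 trick itself is easy to demonstrate with plain :gen_tcp, no haproxy required – :inet.port/1 reads back the port the OS actually assigned, which is the value you would hand to the LB:

```elixir
# Ask the OS for any free port by listening on port 0,
# then read back the port that was actually assigned.
{:ok, listen_socket} = :gen_tcp.listen(0, [:binary, active: false])
{:ok, port} = :inet.port(listen_socket)

IO.puts("OS assigned port #{port}")
true = port > 0

:gen_tcp.close(listen_socket)
```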


I wonder how this consideration of trade-offs changes when you start considering HTTP/2 and the fact that many popular LB/proxy solutions (Nginx and AWS ELB, for example) don’t forward HTTP/2 - they downgrade the traffic between the balancer and the server to HTTP/1.1.


Even though haproxy is said to be adding HTTP/2 support between itself and the backends “soon”, some people say that there isn’t much benefit to be had from it …

From https://stackoverflow.com/questions/41637076/http2-with-node-js-behind-nginx-proxy/41647983

There is almost no sense in implementing it, as the main HTTP/2 benefit is that it allows multiplexing many requests within a single connection, thus [almost] removing the limit on the number of simultaneous requests - and there is no such limit when talking to your own backends. Moreover, things may even become worse when using HTTP/2 to backends, due to a single TCP connection being used instead of multiple ones.

What I’m waiting for from haproxy is the ability to reuse HTTP/1.1 connections when proxying HTTP/2. It currently opens a new connection each time.


It’s very much a case of measuring and optimising your deployment. Simply pouring HTTP/2 everywhere and waiting for a significant improvement is unlikely to help much.

For example:

If you have your LB configured correctly, then you already have multiple persistent open HTTP/1.1 connections between the LB and your app server, that will remain open across many request/response cycles. There is very little gain to be had from multiplexed backend connections, and you have no TLS nor TCP setup/teardown cost.

However, prioritised streams, compressed headers, and server push can make a significant difference on the client side, and I am not at all clear how a LB impacts this yet. Some functionality appears to require end-to-end HTTP/2.

http://blog.kazuhooku.com/2015/06/http2-and-h2o-improves-user-experience.html has some old but useful data - a 30% improvement in time to first paint, with substantial time spent configuring the server, including attempting to work around bugs in the prioritisation logic on the client side. While one hopes this has improved substantially since 2015, the inconsistent results between browsers are unlikely to have changed.

In projects like https://h2o.examp1e.net/ there are quite a few settings available to tune these, but I don’t think we have this sort of flexibility yet in Phoenix and Cowboy. It would be an interesting project for some organisation to fund.