Serving Phoenix application - rev proxy?

How do you typically serve Phoenix applications? In many environments it’s common to put a reverse proxy (a plain HTTP server) in front as the “first line of defence”, typically also serving static files. Do you use this kind of setup for Phoenix applications too? If not, why is it not needed?

I run my apps on fly.io, so I guess they technically have a proxy in front of them acting as a load balancer (but I’m not in control of it, and I don’t think it does any “defence”, just load balancing). For protection I use Cloudflare.

The “defence” part was in quotes; I don’t mean it in the literal sense of protection against threats. I have typically run e.g. Rails or Laravel applications behind a reverse proxy, and I know many others who do the same. An Nginx server, for example, can cope with much higher traffic and deliver much higher throughput when serving static assets. I’m wondering how Elixir-based servers compare, and whether running a reverse proxy is worth the configuration and maintenance overhead for Phoenix applications.
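For context on the static-file side: Phoenix itself serves static assets through Plug.Static in the endpoint, so the question is largely whether that plug running on the BEAM is fast enough for your traffic. This is roughly what a Phoenix 1.7 generator puts in the endpoint (the app and module names are the generator defaults, not from this thread, and `gzip: true` is what you would set for a production build):

```elixir
# In the generated endpoint (lib/my_app_web/endpoint.ex):
# serve files from priv/static at the site root, preferring
# pre-compressed .gz variants when the client accepts gzip.
plug Plug.Static,
  at: "/",
  from: :my_app,
  gzip: true,
  only: MyAppWeb.static_paths()
```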

In my Traefik setup I put HTTPS and certificate management completely on the shoulders of Traefik. The Phoenix app just continues to listen on port 4000 via the Docker port mapping. The only unusual change on the Phoenix side is that in runtime.exs I do:

  config :markably, MarkablyWeb.Endpoint,
    ...
    # https://elixirforum.com/t/remove-port-from-router-url-helpers/18780/2?u=maz
    url: [host: host, scheme: "https", port: 443], # needed to avoid rendering port 4000 in routes
    ...
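Since the proxy terminates TLS in this setup, it’s also worth noting that Plug ships Plug.RewriteOn, which rewrites the connection’s scheme, host, and port from the x-forwarded-* headers the proxy sets. A minimal sketch (module and app names follow the snippet above; only enable this behind a trusted proxy, since clients could otherwise spoof these headers):

```elixir
# lib/markably_web/endpoint.ex (sketch)
defmodule MarkablyWeb.Endpoint do
  use Phoenix.Endpoint, otp_app: :markably

  # Rewrite conn.scheme, host, and port from the reverse proxy's
  # x-forwarded-* headers; Plug.RewriteOn ships with Plug itself.
  plug Plug.RewriteOn, [:x_forwarded_proto, :x_forwarded_host, :x_forwarded_port]

  # ... the usual plugs and the router follow ...
end
```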

I used to reverse proxy with Nginx, but lately I just serve directly from Phoenix with site_encrypt.
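In case it helps anyone, here’s a minimal site_encrypt sketch going from the library’s README (the domain, email, and folder are placeholders, and the exact API may differ between versions):

```elixir
defmodule MyAppWeb.Endpoint do
  use Phoenix.Endpoint, otp_app: :my_app
  use SiteEncrypt.Phoenix

  @impl SiteEncrypt
  def certification do
    SiteEncrypt.configure(
      # :native uses the built-in Elixir ACME client
      client: :native,
      domains: ["example.com"],
      emails: ["admin@example.com"],
      db_folder: "/var/lib/site_encrypt",
      directory_url: "https://acme-v02.api.letsencrypt.org/directory"
    )
  end
end
```

The endpoint then obtains and renews the Let’s Encrypt certificates itself, with no proxy in front.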


I used Caddy for local development.

Will deploy it someday. It’s super easy.

It automatically fetches and renews Letsencrypt certs for your main domain and subdomains.

Also, it’s 4x faster than Nginx and super simple to use.

www.derpycoder.site {
	redir https://derpycoder.site permanent
}

derpycoder.site {
	tls certs/caddy/cert.pem certs/caddy/key.pem       # For local development I used mkcert; remove this in production
	encode zstd gzip

	reverse_proxy localhost:4000 {
		header_up Host {host}
		header_up Origin {host}
		header_up X-Real-IP {remote}
		header_up X-Forwarded-Host {host}
		header_up X-Forwarded-Server {host}
		header_up X-Forwarded-Port {port}
		header_up X-Forwarded-For {remote}
		header_up X-Forwarded-Proto {scheme}
		header_down Access-Control-Allow-Origin https://derpycoder.site
		header_down Access-Control-Allow-Credentials true
	}
}

docs.derpycoder.site {
	encode zstd gzip

	root * doc
	file_server browse
}

I can also add basic auth to protect subdomains, using Caddy Auth, if you have apps that are admin-only.

It’s really useful beyond security as well; for instance, you can do rolling or canary deployments, and Caddy will route traffic to one instance or the other.


I don’t know how to do Active-Active load balancing using Keepalived; perhaps others have experience with that.
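For the rolling/canary part, Caddy’s reverse_proxy accepts multiple upstreams plus a load-balancing policy and active health checks, so a sketch along these lines should route around an instance while it’s being replaced (the upstream names and health endpoint here are made up):

```caddyfile
derpycoder.site {
	reverse_proxy app-blue:4000 app-green:4000 {
		lb_policy round_robin
		# take an upstream out of rotation while it's down
		health_uri /healthz
		health_interval 10s
	}
}
```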

Uh… that’s a bold claim. In what use cases? Any data to back that up?

Not that I want to discredit Caddy or anything, but I take such claims with a huge grain of salt, especially as its own website says it’s written in Go :wink:

The former founder of Dgraph, a hardcore techie, wrote a blog post:

Here’s another, which doesn’t claim Caddy is 4x faster than Nginx:


Haha.

In the blog post you cited, he was running Nginx single-threaded and Caddy multi-threaded. It’s also a synthetic benchmark with very little processing on the proxied application server. Single-threaded Nginx is a valid configuration for real-world workloads, because you want to leave as much CPU as possible to the application server that does the heavy lifting.

I use nginx. It has served me well for the last 10+ years across many different projects and web stacks, and I see no reason to switch.

I have personally used HAProxy and Caddy.

I found Caddy easy to use: HTTPS is easy, HTTP/3 is enabled by default, and it can reload its config easily.

It has sensible defaults. That said, I felt HAProxy was a tad faster when I tried both of them locally, and I didn’t even enable its multi-threading. But the HAProxy config took me a while, I got no HTTP/3, and HTTPS was not easy; in production, HTTPS would be more work. It does have a good admin panel and can be connected to Grafana as well.

Nginx I was reluctant to try, because the paid version is costly and many people from Y Combinator were against it. (I guess the paid version has multi-threading or something? I don’t know.)

Perhaps I will write a blog post detailing my experience with each of them.

And as José Valim said, performance is not the only dimension we should be looking at.

I use nginx - I have a gist for my config here: default.nginx · GitHub

Works well enough for me. I do have very low traffic so I can’t speak to scaling :slight_smile: