For the sake of providing just another data point: we run all our services using HTTP and put Nginx in front to manage HTTPS. Since we use a multitude of technologies, this setup allows the DevOps team to easily manage certificates on their own.
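For reference, the shape of that setup is roughly the following (a minimal sketch; the domain, certificate paths, and backend port are placeholders):

```nginx
server {
    listen 443 ssl;
    server_name example.com;                  # placeholder domain

    # Certificates managed by the DevOps team, e.g. via certbot:
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        # The app itself only speaks plain HTTP on a local port:
        proxy_pass http://127.0.0.1:4000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

The X-Forwarded-Proto header lets the app behind the proxy know the original request came in over HTTPS.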
I can see how maybe I conflated the implementation with the general use of HTTPS. It’s quite possible this is the de facto approach these days, and I’d say it’s also how I commonly do it myself. At the moment, though, I’m looking at possibly consolidating on Cowboy, so I’m looking for alternatives to Nginx on the front.
As a newbie in deployment, here is my experience. I have deployed a single Phoenix project so far, and I thought it would be simpler to let Phoenix manage the SSL files. I had some difficulties but succeeded in configuring HTTPS in both the production and dev environments. What I found a lot more unfriendly, though, is the renewal of certificates. I gave up on learning that and just chose to renew them manually every three months, until I configure something like HAProxy/Traefik/Nginx to handle it and forget about all the HTTPS stuff in Phoenix. I should mention that my project is an umbrella with two web apps: the public web app and the admin web app.
If you know a really easy way to handle SSL cert renewal in Phoenix, though, I’d be interested.
I normally use certbot with Let’s Encrypt and nginx. I bet you can use certbot the same way without the nginx integration; just make sure your paths are right.
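Something like certbot’s webroot mode should work for that without touching any nginx config (a sketch; the webroot path and domain are placeholders, and it assumes whatever serves the site exposes that directory under /.well-known/acme-challenge/):

```shell
# Obtain a cert without the nginx plugin; certbot only needs a
# directory the web server serves for the ACME HTTP-01 challenge:
certbot certonly --webroot \
  -w /var/www/myapp/static \
  -d example.com

# Renewal is then just this, typically run from a cron job or timer:
certbot renew
```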
I also agree with HTTPS as the default, because it makes it easy to detect browser issues around serving insecure content, as you mention.
A word of caution about Erlang’s
:httpc is due, because it ignores HTTPS certificates unless configured to validate them. This is an insecure default, but no one seems willing to fix it in core. If some library doesn’t configure
:httpc properly, you end up in a bad position, because you are effectively no better off than using plain HTTP. Also, if you try to customize any of these libraries, you will need to provide the full configuration, not only the part you want to customize, because the configuration you provide is not merged with the one already present.
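For illustration, here is a minimal sketch of what turning verification on looks like (assumes OTP 25+ for :public_key.cacerts_get/0; the URL in the usage comment is a placeholder):

```elixir
# :httpc does NOT verify certificates unless you pass ssl options.
# These options enable verification against the OS trust store
# (requires OTP 25+ for :public_key.cacerts_get/0):
ssl_opts = [
  verify: :verify_peer,
  cacerts: :public_key.cacerts_get(),
  depth: 3,
  # Hostname matching per the HTTPS rules (RFC 6125 wildcards):
  customize_hostname_check: [
    match_fun: :public_key.pkix_verify_hostname_match_fun(:https)
  ]
]

# Usage sketch, passing the options per request:
#
#   :ok = :ssl.start()
#   {:ok, {{_, status, _}, _headers, _body}} =
#     :httpc.request(:get, {~c"https://example.com/", []}, [ssl: ssl_opts], [])
```

Without the ssl option list, :httpc happily completes the TLS handshake against any certificate, which is exactly the silent downgrade described above.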
Now, running insecure connections between the load balancer and the backend, even on a private network, is like having a prison with security guards only at the main gate.
Why, you may ask?
I think you can easily picture how some of the criminals would be able to escape if they had only one layer of defense to defeat.
What happens in real life is that attackers just love this habit ops teams have of only securing the edge of the network, like terminating SSL at the load balancer.
Once attackers get inside a private network, they can move freely because it’s all open: they can MitM everything they want, extract data, tamper with it, etc. They can stay hidden for months or years without the ops team noticing them. More often than not, the ops team will only know about it when some security researcher notifies them that their data is being sold on the dark web, and sometimes the breach is from months or years earlier.
Unfortunately, the IT ecosystem leans more toward convenience than toward being secure by default. This is cargo-cult behavior from the past that will take decades to change.
At the end of the day, for better and for worse, these insecure IT practices are keeping all of us in the security ecosystem employed…
I don’t know if you’re using Distillery for releases, but perhaps this article, which includes a section on renewal, helps?
I absolutely hate working with https locally. Especially when building APIs for mobile apps.
We have this one Golang project where the previous developer decided that the app handles the SSL certs itself and strictly forces HTTPS. It took me a long time to set it all up so that the mobile app could talk to the API.
I had to use a real domain with an A record pointing to my MacBook’s local network address.
Then I had to create a CA cert which was used to sign the SSL certs of the app.
The CA cert had to be imported into the Android phone in the right file format (big pain point).
And recently I discovered that I could not send Postman requests to my local API anymore, while curl works fine. If I use localhost instead of the custom domain, the request reaches the API, but the app denies it due to a failed SSL handshake.
If the goal is to annoy the hell out of devs, then yes, use https locally. I just don’t see the point.
That does not sound like a typical use case; I don’t think that’s a fair comparison versus just using
mix phx.gen.cert and adding the cert to your local keychain.
Anyone can literally have HTTPS locally within 5 minutes on my project with Phoenix. Your case sounds more like an abuse of the tech than a best practice.
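For reference, that flow is roughly the following (the keychain path is the macOS default; other platforms trust certs differently):

```shell
# Generate a self-signed certificate for local HTTPS (built-in Phoenix task):
mix phx.gen.cert
# This writes priv/cert/selfsigned.pem and priv/cert/selfsigned_key.pem
# and prints the https config to add to config/dev.exs.

# On macOS, trust the cert so the browser stops warning:
security add-trusted-cert \
  -k ~/Library/Keychains/login.keychain-db priv/cert/selfsigned.pem
```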
The only real issues I see with defaulting to HTTPS are in the browsers. But seeing as most browsers require HTTPS for HTTP/2, I assume they will make it easier as time goes on.
Your application will work the same whether it gets hit by HTTP or HTTPS requests.
- If you set the secure flag on a cookie, the cookie will not be sent via HTTP.
- WebRTC does not work on an HTTP site.
- If my web service is a public JSON API service, other websites served via HTTPS will not be able to call the APIs.
- RTMP does not work on an HTTP site.
Sorry for being off-topic, but do you imply that rtmp works over https? If so, how does that happen?
Depends on what browser you are using.
- Chrome requires localhost or https.
- Safari requires https even on localhost.
- IE/Edge? I don’t know.
I tried to google but couldn’t find any information on Chrome supporting RTMP without extensions; could you point me to some specs/docs about it?
Sorry, I can’t find any documentation either. Maybe it’s my memory playing tricks on me. But I’m pretty sure about WebRTC.
As for WebRTC, as a spec it’s fine with insecure origins; the secure-origin requirement is mostly for media capture.
I recently set up a Phoenix app with Nginx in front of it, using HTTP/2. So, Nginx can reverse proxy HTTP/2. The Phoenix app keeps it simple, running a plain HTTP port; that’s also my default locally.
Nginx speaks HTTP/1.1 to the backend even when it’s accepting HTTP/2 from the internet.
This is perhaps unlikely to noticeably affect performance directly, but I think it prevents using HTTP/2 Server Push from Phoenix.
Oh right. I was referring to the client side part of the connection.
When you use Nginx as a reverse proxy in front of a Phoenix app, you use it to serve the static assets as well. That’s where it makes the difference.
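A sketch of that split (the domain, certificate paths, deploy path, and backend port are placeholders):

```nginx
server {
    listen 443 ssl http2;              # HTTP/2 only on the client side
    server_name example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # Serve the compiled Phoenix assets directly from Nginx:
    location /assets/ {
        root /opt/myapp/priv/static;   # hypothetical deploy path
        expires max;
    }

    location / {
        # The proxied connection to Phoenix is plain HTTP/1.1:
        proxy_pass http://127.0.0.1:4000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

Since root appends the request URI, a request for /assets/app.css is served from /opt/myapp/priv/static/assets/app.css without ever touching the Phoenix app.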
I see Nginx can do Server Push, too.