For the love of everything, I can’t seem to get this to work… I have a Docker container with a Phoenix app running in it. It’s hosted on a subdomain, “sub.example.com”, and I need to serve it over HTTPS.
So I got myself a wildcard SSL certificate, installed it, configured the production config file, and exposed ports 80 and 443 of the Docker container.
Tried it out: port 80 works fine, but 443 always returns an “ERR_CONNECTION_RESET”, and the logs show nothing for 443.
I’ve been trying for a while now, and I need your help. Any idea what’s wrong? Check the code below:
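For reference, the HTTPS part of the prod config is shaped roughly like this (a sketch; the app name, endpoint module, and env var names are placeholders):
config/prod.exs
# sketch; :my_app / MyApp.Endpoint and the env var names are placeholders
config :my_app, MyApp.Endpoint,
  http: [port: 80],
  https: [port: 443,
          keyfile: System.get_env("SSL_KEY_PATH"),
          certfile: System.get_env("SSL_CERT_PATH")],
  url: [host: "sub.example.com"]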
Dockerfile
FROM bitwalker/alpine-elixir-phoenix:latest
# create app folder
RUN mkdir /app
WORKDIR /app
COPY . .
# set the environment (prod = PRODUCTION!) and expose the ports
ENV MIX_ENV=prod
EXPOSE 80
EXPOSE 443
# install dependencies and compile (production only)
RUN mix local.rebar --force
RUN mix deps.get --only prod
RUN mix compile
# start the Phoenix server when the container runs
CMD ["mix", "phx.server"]
How are you running the container? EXPOSE doesn’t do anything, really:
EXPOSE
EXPOSE <port> [<port>/<protocol>...]
The EXPOSE instruction informs Docker that the container listens on the specified network ports at runtime. You can specify whether the port listens on TCP or UDP, and the default is TCP if the protocol is not specified.
The EXPOSE instruction does not actually publish the port. It functions as a type of documentation between the person who builds the image and the person who runs the container, about which ports are intended to be published. To actually publish the port when running the container, use the -p flag on docker run to publish and map one or more ports, or the -P flag to publish all exposed ports and map them to high-order ports.
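So exposing the ports in the Dockerfile is not enough; they also have to be published when the container is started, e.g. (the image name is a placeholder):
# publish both ports to the host
docker run -d -p 80:80 -p 443:443 my_phoenix_app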
This suggests that the TCP connection to the Phoenix app was established, but it was closed before the server sent any TLS handshake messages. This usually means there is a problem with your certificate/key files: Erlang’s :ssl does not verify the file locations at startup, only once the files are actually needed during the handshake. If it can’t read the files, the SSL socket crashes and the client reports the connection reset you’re seeing.
So check that the env vars are correct, that the files are present and readable, and that the private key is not encrypted (password-protected).
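You can verify the files from a shell inside the container, for example (paths are placeholders):
# does the private key parse, and is it unencrypted? (an encrypted key prompts for a passphrase)
openssl rsa -in /etc/ssl/private/example.com.key -check -noout
# does the certificate parse, and is it the one you expect?
openssl x509 -in /etc/ssl/certs/example.com.pem -noout -subject -dates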
Something else to consider is letting Nginx or another web server handle the SSL part and proxy_pass requests to the upstream. Then you can just run your Phoenix app on port 4000 and not worry about configuring SSL in Phoenix.
This is how I run my Phoenix site, with nginx in between.
@voltone The ENV variables are being read; I’ve tested the paths by making a typo, and then it crashes immediately. I’m not sure how else to test this, as I’m not getting any errors to guide me.
@egze This is interesting. I’ve been searching the web and have seen this approach pop up several times, and I’m thinking about resorting to it. But then I ask myself: why even have the option to add SSL via Phoenix?
Because you can also host your site with pure Erlang/Elixir if you want; SSL can certainly be done with pure Phoenix. It’s just not practical for me: I host multiple sites on one box, and I only have one port 443.
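Multiple sites can share that single port 443 because nginx picks the server block by server_name (SNI). A sketch, with placeholder names, ports and paths:
server {
    listen 443 ssl;
    server_name site-a.example.com;
    ssl_certificate     /etc/ssl/certs/site-a.pem;
    ssl_certificate_key /etc/ssl/private/site-a.key;
    location / { proxy_pass http://127.0.0.1:4000; }
}
server {
    listen 443 ssl;
    server_name site-b.example.com;
    ssl_certificate     /etc/ssl/certs/site-b.pem;
    ssl_certificate_key /etc/ssl/private/site-b.key;
    location / { proxy_pass http://127.0.0.1:4001; }
}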
So I have installed nginx and configured it to match the Phoenix needs as well as I could, but now I keep getting a 400 Bad Request (“Request Header Or Cookie Too Large”). I’ve searched the internet and tried all kinds of settings, and I just can’t get it to work. (Excuse my skills, I am still new to devops-related things.)
My current configs are as follows (without SSL for now; I’m trying to make it work over plain HTTP first, to understand it):
@egze Found the problem! Got it working for port 80 now; the next step is 443. Not sure why it gave the cookie error, but I had the wrong configuration. The one that works is:
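In shape it’s just a plain proxy block like this (a sketch; the server_name and upstream port are placeholders):
server {
    listen 80;
    server_name sub.example.com;

    location / {
        # hand everything over to the Phoenix app
        proxy_pass http://127.0.0.1:4000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}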
You don’t need any HTTPS in Phoenix at all; it can run plain HTTP. HTTPS is nginx’s job, and the internal traffic between nginx and Phoenix doesn’t need to be encrypted any more. You can certainly encrypt it if you want, but what’s the point?
To summarize:
internet -----[https]-----> nginx -----[http]-----> phoenix
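With that layout the Phoenix endpoint only needs plain HTTP; the url option just makes generated links use https. A sketch (the app name and endpoint module are placeholders):
# config/prod.exs; :my_app / MyApp.Endpoint are placeholders
config :my_app, MyApp.Endpoint,
  http: [port: 4000],
  url: [host: "sub.example.com", port: 443, scheme: "https"]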
Consider a cloud environment where the load balancer / reverse proxy is not on the same host as your application. It’s quite common to use HTTPS to the backend then (see the sketch after the list), to:
Ensure we are talking with a trusted backend server
Ensure that user data isn’t sniffable by the cloud provider or anyone else who might have access to one of the network layers (virtual, physical, over-/underlay, whatever).
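In nginx, verifying the backend like that can look as follows (a sketch; the hostnames and paths are placeholders):
location / {
    proxy_pass https://backend.internal.example.com:4443;
    # verify the backend certificate against a CA you trust
    proxy_ssl_verify on;
    proxy_ssl_trusted_certificate /etc/ssl/certs/internal-ca.pem;
    # check that the name in the backend certificate matches
    proxy_ssl_name backend.internal.example.com;
    proxy_ssl_server_name on;
}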
Alright, after days of searching, trial and error, and the discussion in this post, it finally works with the following configuration! Thanks to everyone, and especially @egze, for helping out and sparring!
Btw, both nginx and Phoenix are in the same Docker container.
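The nginx side ended up shaped roughly like this (a sketch; the domain and certificate paths are placeholders):
# redirect plain HTTP to HTTPS
server {
    listen 80;
    server_name sub.example.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name sub.example.com;

    # the wildcard certificate and its (unencrypted) key
    ssl_certificate     /etc/ssl/certs/wildcard.example.com.pem;
    ssl_certificate_key /etc/ssl/private/wildcard.example.com.key;

    location / {
        proxy_pass http://127.0.0.1:4000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        # needed for Phoenix channels / WebSockets
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}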