Unable to connect to local Postgres from Docker


I’m trying to learn Docker to containerize my basic API.
The steps I followed are:

  • Create the API and test it with Postman, etc.
  • mix phx.gen.secret (let’s call the resulting value SECRET below)
  • As my API has no assets, I didn’t have to compile them.
  • mix phx.gen.release --docker
  • Removed the lines from the Dockerfile which interact with assets
  • docker build
  • Ran the image:

docker run -e DATABASE_URL="ecto://postgres:postgres@localhost/database_dev" \
  -e SECRET_KEY_BASE=SECRET -p 4000:4000 imageID

I tried multiple variations with my IP instead of localhost, the HOST argument, --network=host, etc., but the error is still:

16:36:50.497 [error] Postgrex.Protocol (#PID<0.1887.0>)
failed to connect: ** (DBConnection.ConnectionError) tcp connect
(my_ip:5432): connection refused - :econnrefused 

After looking around for a bit, I found out that the problem may come from Docker permissions, but I couldn’t find anything more.
I feel like I messed up some really basic step, but I can’t find it alone.

Can someone tell me where I went wrong?


You need to share your Dockerfile(s) and Elixir config files for us to be able to understand what is wrong.


The problem is localhost in DATABASE_URL. The URL is resolved inside the Docker container, and there is no Postgres running there.
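To make a container reach a Postgres running on the host, a common fix is the `host.docker.internal` hostname. An untested sketch, reusing the `docker run` invocation from the question (image ID, credentials, and database name are the asker’s placeholders):

```shell
# Inside a container, "localhost" is the container itself, not the host.
# Docker Desktop (Mac/Windows) resolves this special name to the host:
docker run \
  -e DATABASE_URL="ecto://postgres:postgres@host.docker.internal/database_dev" \
  -e SECRET_KEY_BASE=SECRET \
  -p 4000:4000 imageID

# On Linux the name must be mapped to the host gateway explicitly:
docker run \
  --add-host=host.docker.internal:host-gateway \
  -e DATABASE_URL="ecto://postgres:postgres@host.docker.internal/database_dev" \
  -e SECRET_KEY_BASE=SECRET \
  -p 4000:4000 imageID
```

Note that the host’s Postgres must also accept non-loopback connections (`listen_addresses` in postgresql.conf plus a matching pg_hba.conf entry), otherwise the connection is still refused.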


First, the Dockerfile is the following, directly generated by Phoenix. I commented out the assets-related steps (compile and deploy):

# Find eligible builder and runner images on Docker Hub. We use Ubuntu/Debian instead of
# Alpine to avoid DNS resolution issues in production.
# https://hub.docker.com/r/hexpm/elixir/tags?page=1&name=ubuntu
# https://hub.docker.com/_/ubuntu?tab=tags
# This file is based on these images:
#   - https://hub.docker.com/r/hexpm/elixir/tags - for the build image
#   - https://hub.docker.com/_/debian?tab=tags&page=1&name=bullseye-20210902-slim - for the release image
#   - https://pkgs.org/ - resource for finding needed packages
#   - Ex: hexpm/elixir:1.13.3-erlang-24.2.1-debian-bullseye-20210902-slim
ARG BUILDER_IMAGE="hexpm/elixir:1.13.3-erlang-24.2.1-debian-bullseye-20210902-slim"
ARG RUNNER_IMAGE="debian:bullseye-20210902-slim"

FROM ${BUILDER_IMAGE} as builder

# install build dependencies
RUN apt-get update -y && apt-get install -y build-essential git \
    && apt-get clean && rm -f /var/lib/apt/lists/*_*

# prepare build dir
WORKDIR /app

# install hex + rebar
RUN mix local.hex --force && \
    mix local.rebar --force

# set build ENV
ENV MIX_ENV="prod"

# install mix dependencies
COPY mix.exs mix.lock ./
RUN mix deps.get --only $MIX_ENV
RUN mkdir config

# copy compile-time config files before we compile dependencies
# to ensure any relevant config change will trigger the dependencies
# to be re-compiled.
COPY config/config.exs config/${MIX_ENV}.exs config/
RUN mix deps.compile

COPY priv priv

# note: if your project uses a tool like https://purgecss.com/,
# which customizes asset compilation based on what it finds in
# your Elixir templates, you will need to move the asset compilation
# step down so that `lib` is available.
# COPY assets assets # My app has no-assets

# compile assets
# RUN mix assets.deploy

# Compile the release
COPY lib lib

RUN mix compile

# Changes to config/runtime.exs don't require recompiling the code
COPY config/runtime.exs config/

COPY rel rel
RUN mix release

# start a new build stage so that the final image will only contain
# the compiled release and other runtime necessities
FROM ${RUNNER_IMAGE}

RUN apt-get update -y && apt-get install -y libstdc++6 openssl libncurses5 locales \
  && apt-get clean && rm -f /var/lib/apt/lists/*_*

# Set the locale
RUN sed -i '/en_US.UTF-8/s/^# //g' /etc/locale.gen && locale-gen


WORKDIR "/app"
RUN chown nobody /app

# Only copy the final release from the build stage
COPY --from=builder --chown=nobody:root /app/_build/prod/rel/worklist ./

USER nobody

CMD ["/app/bin/server"]
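For completeness, the image above would be built and run along these lines (a sketch; the `worklist` tag and the env values are illustrative, not taken from the thread):

```shell
# Build the release image from the Dockerfile above
docker build -t worklist .

# Run it, pointing DATABASE_URL at wherever Postgres actually lives
docker run \
  -e DATABASE_URL="ecto://postgres:postgres@host.docker.internal/database_dev" \
  -e SECRET_KEY_BASE="$SECRET" \
  -p 4000:4000 worklist
```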

Second, I did try with my IP in DATABASE_URL, but nothing changed.

My end goal is to use Docker Compose with my app and Postgres as services, but trying to use Compose when I can’t even make a single container run correctly seems a bit premature.


Are you trying to connect from the Docker container to the DB that’s running locally? That won’t work by default; you’ll need to open up some ports (sorry, you’ll need to peek into the Docker docs).

Doing this with Docker Compose might actually be easier as it will set some things up for you. Here’s an example (not tested):

version: '3'

services:
  db:
    image: postgres:11.12
    environment:
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: myapp_db

  web:
    build:
      context: .
      dockerfile: Dockerfile
    depends_on:
      - db
    environment:
      DATABASE_URL: "postgresql://postgres:postgres@db/myapp_db"
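With a compose file like that, a single command builds the app image and starts both services on a shared network where each service is reachable by its name (here `db`). A sketch:

```shell
# Start both services; Compose builds the app image and creates a
# network in which the hostname "db" resolves to the postgres container.
docker compose up --build
```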

I’m on another project right now, but many thanks; I’ll try this as soon as possible.

You can connect to services running on the host machine from a Docker container directly using the special hostname:

host.docker.internal
See: Networking features in Docker Desktop for Windows | Docker Documentation

(the link is about Windows, but it also works on Mac)


On Linux you will need to add the following lines to the service that needs to access localhost:

    extra_hosts:
      - "host.docker.internal:host-gateway"

Using the nobody user is not advised from a security perspective, and root shouldn’t really be used at all.

And was then updated here:

I read about this issue. But wouldn’t it become rather complex to set up another user (let’s say phoenix) before shipping your Docker + Compose setup?
Doesn’t it defeat the whole point of the “deploy everywhere” thing?

Ok, so after trying your suggestion and many similar ones I found across the week, my compose.yml looks like this right now:

services:
  db:
    image: postgres
    restart: always
    container_name: database
    volumes:
      - pg-data:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: phoenix
      POSTGRES_PASSWORD: phx_passwrd
      POSTGRES_DB: worklist

  web:
    container_name: worklist-api
    restart: always
    build:
      context: .
      dockerfile: Dockerfile
    depends_on:
      - db
    environment:
      DATABASE_URL: "ecto://phoenix:phx_passwrd@db/worklist"

volumes:
  pg-data:
    external: true # Must run "docker volume create --name=pg-data" beforehand
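Since the `pg-data` volume is declared external, it has to exist before the first run. The startup sequence would roughly be (untested sketch):

```shell
# Create the external volume once, then start the stack
docker volume create --name=pg-data
docker compose up -d --build

# Tail the logs of both services
docker compose logs -f
```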

Now it seems I can connect to the DB, because I have these logs for the db service:

PostgreSQL Database directory appears to contain a database; Skipping initialization

2022-03-30 09:15:35.262 UTC [1] LOG:  starting PostgreSQL 14.2 (Debian 14.2-1.pgdg110+1) on aarch64-unknown-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
2022-03-30 09:15:35.262 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
2022-03-30 09:15:35.262 UTC [1] LOG:  listening on IPv6 address "::", port 5432
2022-03-30 09:15:35.269 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2022-03-30 09:15:35.275 UTC [26] LOG:  database system was interrupted; last known up at 2022-03-29 12:38:09 UTC
2022-03-30 09:15:35.451 UTC [26] LOG:  database system was not properly shut down; automatic recovery in progress
2022-03-30 09:15:35.453 UTC [26] LOG:  invalid record length at 0/16FBA88: wanted 24, got 0
2022-03-30 09:15:35.453 UTC [26] LOG:  redo is not required
2022-03-30 09:15:35.463 UTC [1] LOG:  database system is ready to accept connections

along with those for the phoenix app:

09:15:35.910 [info] Running WorklistWeb.Endpoint with cowboy 2.9.0 at :::4000 (http)
09:15:35.911 [error] Could not find static manifest at "/app/lib/worklist-0.1.0/priv/static/cache_manifest.json". Run "mix phx.digest" after building your static files or remove the configuration from "config/prod.exs".
09:15:35.911 [info] Access WorklistWeb.Endpoint at http://example.com:443

For anyone stumbling upon this thread: what I didn’t get was that the service name “db” should be used instead of the IP in the URL.

I’ll mark your answer as solution.
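For the record, the reason the service name works: containers on the same Compose network get each service’s name as a DNS entry from Docker’s embedded resolver. That can be checked from inside the running app container, e.g. (untested; `worklist-api` is the container_name from the compose file):

```shell
# From inside the app container, the compose service name "db"
# resolves via Docker's embedded DNS (127.0.0.11):
docker exec worklist-api getent hosts db
```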

I don’t understand your question.

The setup of another user is already done in the Dockerfile; you are not required to do anything regarding the user when using the Docker image.

No, it doesn’t at all. The unprivileged user is inside the docker container, not on the host machine running the container.

I don’t think I get it right, but your proposition is to replace “USER nobody” in the Dockerfile with, let’s say, “USER phoenix”?

And you are saying this simple change would set up a new user in the Dockerfile which isn’t “nobody”, so it doesn’t have permissions that could cause damage outside the container. Hence I don’t need to do anything more when using the image?

In short, just changing the USER (and doing nothing more) closes a vulnerability?

The proposition is already merged into the current Dockerfile in the official docs as per the pull request I linked, but I see now that it was reverted by what looks like an accidental change, as I mention in my comment.

Please read my report on the issue and the link on it.

Basically, the nobody user cannot be trusted to run any application, inside or outside the container; nor should the root user be used to run services, inside or outside a Docker container, that are exposed to the internet.
