Node name and cookie in mix.release inside Docker container

Hi All
I create a mix release inside a Docker container and run the container in production, but I cannot connect to the running node via a remote shell.

Inside the running Docker container, when I run bin/my_app eval 'IO.puts("#{node()}")', I get nonode@nohost.

Similarly, when I run eval Node.get_cookie, I get nocookie. I have already set the RELEASE_DISTRIBUTION and RELEASE_COOKIE environment variables, but it seems like the release does not read them.

I also created a .erlang.cookie file inside the home directory, but nothing changed.

When I try to restart the running release with bin/my_app restart, I get a "--rpc-eval : RPC failed with reason :nodedown" error.

I would appreciate any help.
Thank you

Kind regards

The most important environment variable in this case is RELEASE_DISTRIBUTION, because it defines the mode in which a node is started. However, if it is untouched (which is what I expect in your case), it defaults to sname.
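For reference, here is a minimal sketch of where these variables are usually pinned in an Elixir v1.9+ release. The release name my_app, the host, and the cookie value are placeholders, not taken from this thread. The file rel/env.sh.eex (created by mix release.init) is sourced by bin/my_app on every boot:

```shell
#!/bin/sh
# rel/env.sh.eex (sketch) -- sourced on every boot of the release.
# Placeholders: adjust my_app, the host, and the cookie for your setup.
export RELEASE_DISTRIBUTION=name          # or sname for short host names
export RELEASE_NODE="my_app@$(hostname -f 2>/dev/null || hostname)"
export RELEASE_COOKIE="thisissecret"      # must match on every node
```

Note that changes to this file require rebuilding the release; a plain runtime RELEASE_COOKIE=... environment variable also works, but only if nothing in this file overrides it.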

May I ask you to run the following code on your node:

System.get_env()
|> Enum.map(fn {k, v} -> "#{k}=#{v}" end)
|> Enum.filter(&String.starts_with?(&1, "RELEASE_"))

I expect to see the following result (not all values will necessarily be the same):

 "RELEASE_COMMAND=start_iex", "RELEASE_MODE=embedded", "RELEASE_NAME=test",

Thank you for the reply.

When I run that code, I get the following:


But when I run:

./my_app eval "node() |> IO.puts"

I get the following:

When I run:

./my_app eval "Node.get_cookie |> IO.puts"

I get the following:

When I run:

./my_app restart

I get:

--rpc-eval : RPC failed with reason :nodedown

But if you want to run a command on the existing node, you should use rpc instead of eval. Please check the following description:

% ./_build/dev/rel/test/bin/test
Usage: test COMMAND [ARGS]

The known commands are:

    start          Starts the system
    start_iex      Starts the system with IEx attached
    daemon         Starts the system as a daemon
    daemon_iex     Starts the system as a daemon with IEx attached
    eval "EXPR"    Executes the given expression on a new, non-booted system
    rpc "EXPR"     Executes the given expression remotely on the running system
    remote         Connects to the running system via a remote shell
    restart        Restarts the running system via a remote command
    stop           Stops the running system via a remote command
    pid            Prints the operating system PID of the running system via a remote command
    version        Prints the release name and version to be booted

As you can see, eval starts a new instance and runs the given command, while rpc connects to the existing node and runs the given command on it.

When I run:

~/bin # ./my_app rpc "Node.get_cookie |> IO.puts"

I get:

--rpc-eval : RPC failed with reason :nodedown

I want to connect to the running system from other nodes.

Even inside the Docker container, when I run:

~/bin # ./my_app remote
Erlang/OTP 22 [erts-] [source] [64-bit] [smp:6:6] [ds:6:6:10] [async-threads:1]

Could not contact remote node my_app@, reason: :nodedown. Aborting…

There is a related topic here:

I still don't have a solution though.

My server is Ubuntu as well.

I guess this is related as well?

If you ask me, it looks like your server is just down. It is really hard to help you unless you provide some code that allows us to reproduce the issue.

Generally, the thing that bothers me the most is that you run this in Docker, so the primary process must be present, otherwise your container would be killed. Can you tell us how you start your container and the application inside it?

No, the server is not down; it is running in production. The only problem is that I cannot connect to the running node from other nodes.

I start it with bin/my_app start.

Okay, then it must be up and running, because this command starts your application in the foreground.

Can you run the following commands and paste the results back?

$ ./erts-*/bin/epmd -names
$ ping -c3 -w3
$ nc -zv 4369; echo $?
$ nc -zv <port_returned_by_epmd>; echo $?

What a coincidence, I was writing a guide on this subject - for docker and k8s (with libcluster).

Here, I used a “cluster_demo” image with phx + mix release.

First of all, to use automatic DNS with FQDN, you should use a docker network. Without it, you cannot access a container from another container by container name.

Otherwise, you have to use an IP address inside RELEASE_NODE - which is doable, but needs a wrapper, and the other containers need to know the IP address (which does not stay the same!).

Here are examples of using a docker network for DNS with the container name.

# this will create <container name>.my-net DNS entry.
docker network create my-net

By default, RELEASE_DISTRIBUTION is sname (allowing non-FQDN hostnames), and RELEASE_NODE is derived automatically from the release name and the host ID.
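As an illustrative sketch of how those defaults compose the node name (the release name "my_app" is an assumption; inside a container the hostname is the container ID unless --hostname is set):

```shell
# Sketch: node names as composed under each distribution mode.
APP_NAME="my_app"                                   # assumed release name
SHORT_HOST="$(hostname -s 2>/dev/null || hostname)" # short host name
FQ_HOST="$(hostname -f 2>/dev/null || hostname)"    # fully-qualified name
SNAME_NODE="${APP_NAME}@${SHORT_HOST}"   # RELEASE_DISTRIBUTION=sname
NAME_NODE="${APP_NAME}@${FQ_HOST}"       # RELEASE_DISTRIBUTION=name
echo "$SNAME_NODE"
echo "$NAME_NODE"
```

This is also the exact string another node (or bin/app remote) must use to reach it, which is why DNS for the container name matters.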

docker run --rm \
  --name snamenode \
  --network my-net \
  --env RELEASE_COOKIE=thisissecret \
  --env SECRET_KEY_BASE=+y5AreV1firmKw+kB9idUb0gp3lxi3Y5qhMntozh8P9xHS/+iq2wN1LH3ZALFLo7 \
  -p 4000:4000 \

To use name (with FQDN)

docker run --rm \
  --name namenode \
  --network my-net \
  --env RELEASE_COOKIE=thisissecret \
  --env RELEASE_NODE="" \
  --env SECRET_KEY_BASE=+y5AreV1firmKw+kB9idUb0gp3lxi3Y5qhMntozh8P9xHS/+iq2wN1LH3ZALFLo7 \
  -p 4001:4001 \

To run remote from the same container (docker exec), you don't need to set anything since everything is already there:

docker exec -it snamenode bin/app remote
docker exec -it namenode bin/app remote

rpc works well from the same container

docker exec -it snamenode bin/app rpc "%{cookie: Node.get_cookie(), self: Node.self(), list: Node.list()} |> IO.inspect()"
# %{cookie: :thisissecret, list: [], self: :app@2b0c8dcd9e01}

docker exec -it namenode bin/app rpc "%{cookie: Node.get_cookie(), self: Node.self(), list: Node.list()} |> IO.inspect()"
# %{cookie: :thisissecret, list: [], self: :""}

To connect to the container from another container, you have to set the required RELEASE_* info:

docker run --rm -it \
  --name nameremote \
  --network my-net \
  --env RELEASE_COOKIE=thisissecret \
  --env RELEASE_NODE="" \
  cluster-demo \
  bin/app remote

docker run --rm -it \
  --name remote \
  --network my-net \
  --env RELEASE_COOKIE=thisissecret \
  --env RELEASE_NODE="app@2b0c8dcd9e01" \
  cluster-demo \
  bin/app remote

If you don't run both the server and remote containers in the same docker network (running without --network uses the default bridge), then the connection between them won't work, even though docker exec (in the same container) works.

~ #  ./erts-*/bin/epmd -names
epmd: up and running on port 4369 with data:
name my_app at port 41389

~ # ping -c3 -w3
PING ( 56 data bytes
64 bytes from seq=0 ttl=64 time=0.079 ms
64 bytes from seq=1 ttl=64 time=0.111 ms
64 bytes from seq=2 ttl=64 time=0.098 ms

--- ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.079/0.096/0.111 ms

~ # nc -zv 4369; echo $?

~ # nc -zv 41389; echo $?

This means epmd is not reachable via the host IP/port.

My result (using elixir:1.10.4-slim docker image)

nc -zv 4369; echo $?
localhost [] 4369 (?) open

nc -zv 4369; echo $?
8011844e9d1a [] 4369 (?) open
  • How is erlang/elixir installed?
  • Could you test for epmd? Or check netstat -tulpn | grep LISTEN to confirm epmd is running and listening on the right interfaces.
netstat -tulpn | grep LISTEN
tcp        0      0  *               LISTEN      -
tcp        0      0  *               LISTEN      -
tcp        0      0 *               LISTEN      -
tcp        0      0*               LISTEN      -
tcp6       0      0 :::4369                 :::*                    LISTEN      -

This means epmd is not reachable via the host IP/port.

Thank you very much. This helped me see my mistake.

Hello, I'm also having the same issue. I get:

/app # nc -zv 4369; echo $?
/app # nc -zv 24031; echo $? ( open
/app # nc -zv 14031; echo $? ( open
/app # netstat -tulpn | grep LISTEN
tcp        0      0 *               LISTEN      9/beam.smp
tcp        0      0  *               LISTEN      9/beam.smp
tcp        0      0 *               LISTEN      40/epmd
tcp        0      0 :::24031                :::*                    LISTEN      40/epmd

I see epmd is started on a different port than the default one; maybe that is the problem? How do I make sure epmd is reachable via the host IP/port?
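For context (hedged, since your exact setup isn't shown): epmd itself listens on TCP 4369 unless ERL_EPMD_PORT overrides it, and the high ports in the netstat output are normally the BEAM distribution listeners, which epmd advertises and which are random unless pinned. A sketch of making both predictable, e.g. in rel/env.sh.eex (the port values are placeholders; rel/vm.args.eex with -kernel flags is an alternative):

```shell
# Sketch (placeholders): make the Erlang distribution ports predictable
# so they can be published and firewalled.
export ERL_EPMD_PORT=4369   # epmd's default; must match on all nodes
# Pin the otherwise-random distribution listener to a single port:
export ELIXIR_ERL_OPTIONS="-kernel inet_dist_listen_min 9100 -kernel inet_dist_listen_max 9100"
```

With both pinned, exposing 4369 and 9100 from the container should be enough for remote nodes to reach it.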