LiveBook remote execution within Kubernetes

Hi,

I’ve deployed my application and Livebook within Kubernetes. I’m now trying to get Livebook’s remote execution working against my running application, and I’m struggling to get the connection up.

For the application:

  • the deployment has the EPMD port configured
  • there are services for both the web endpoints and EPMD
  • the release cookie is set as an env var in the Dockerfile
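For illustration, a sketch of what that EPMD service wiring looks like (names and ports here are placeholders, simplified from the actual manifests). Note that besides EPMD’s well-known port 4369, the Erlang distribution listener port itself also has to be reachable:

```yaml
# Illustrative sketch only -- names and ports are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: my-app-epmd
spec:
  clusterIP: None        # headless: DNS returns pod IPs directly
  selector:
    app: my-app
  ports:
    - name: epmd
      port: 4369         # EPMD's well-known port
    - name: dist
      port: 9000         # Erlang distribution port (pin it with inet_dist_listen_min/max)
```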

In Livebook, I’m trying this out.

Testing DNS resolution - these both return IPs for the services:

DNS.resolve("my-app.default.svc.cluster.local")
DNS.resolve("my-app-epmd.default.svc.cluster.local")

Testing node connections - these all fail with :pang:

node = :"my-app.prod.svc.cluster.local"
Node.ping(node)

node = :"my-app-epmd.prod.svc.cluster.local"
Node.ping(node)

node = :"my-app@my-app.prod.svc.cluster.local"
Node.ping(node)

node = :"my-app@my-app-epmd.prod.svc.cluster.local"
Node.ping(node)

Naturally, actually running Kino.RPC then fails with {:erpc, :noconnection}.

I’m pretty new to distributed Elixir, so I’m sure I’m missing something basic :crossed_fingers:

thanks

Hey @duncanphillips, what RELEASE_NODE do you set in rel/env.sh.eex?

:wave: hi

I don’t have that file, but if I shell onto the node and look at the RELEASE_* env vars, I see this:

  {"RELEASE_COOKIE", "..."},
  {"RELEASE_REMOTE_VM_ARGS", "/app/releases/0.1.0/remote.vm.args"},
  {"RELEASE_BOOT_SCRIPT", "start"},
  {"RELEASE_VSN", "0.1.0"},
  {"RELEASE_ENVIRONMENT", "production"},
  {"RELEASE_TMP", "/app/tmp"},
  {"RELEASE_SYS_CONFIG", "/app/releases/0.1.0/sys"},
  {"RELEASE_VM_ARGS", "/app/releases/0.1.0/vm.args"},
  {"RELEASE_NAME", "my-app"},
  {"RELEASE_BOOT_SCRIPT_CLEAN", "start_clean"},
  {"RELEASE_NODE", "my-app"},
  {"RELEASE_PROG", "my-app"},
  {"RELEASE_MODE", "embedded"},
  {"RELEASE_COMMAND", "start"},
  {"RELEASE_DISTRIBUTION", "sname"},
  {"RELEASE_ROOT", "/app"}

A note on where I’m trying to get: I’m keen to get Livebook working first, and then move on to figuring out libcluster with the Kubernetes DNS strategy. I thought this would be the easier initial path, but I’m new to clustering and still figuring out how it works in the context of Kubernetes.

I think your comment has started pointing me in the right direction…

I didn’t think I needed to follow all the clustering setup guides I’d found so far, but it seems I do need changes like the ones below to make the nodes connectable:

{"RELEASE_NODE", "myapplication@[pod-ip]"},
{"RELEASE_DISTRIBUTION", "name"},

Found these, which I’ll work through.

Exactly! You need RELEASE_DISTRIBUTION=name to use full node names with an IP or domain, and that’s necessary to connect across hosts. Then, when you do Node.ping from Livebook, you want to use exactly the same node name as specified in RELEASE_NODE.
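Concretely, something like this from Livebook (the pod IP and cookie here are placeholders; they must match the target’s RELEASE_NODE and RELEASE_COOKIE):

```elixir
# Placeholders: use your pod's actual IP and release cookie.
target = :"my-app@10.42.0.17"
Node.set_cookie(target, :my_release_cookie)
Node.ping(target)  # :pong once the name, cookie, and ports all line up
```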

Great, thanks for the confirmation.

One question I have, specific to the Kubernetes setup: it would be much easier to connect from Livebook based on a service name rather than a pod name. I.e. with the setup from the last comment, each node will start up with a unique name like app-name-123-456-678@something.internal or app-name@[pod-ip-addr], but it would be easier to configure Livebook with a consistent service name like app-name@something.internal.

I’m not sure whether this is easy to do, or whether I need to do a DNS lookup in Livebook to find a pod to connect to?

thanks!

Yeah, I think generally it would be app-name@[pod-ip-addr], and then you do a DNS lookup.
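For example, something like this in Livebook (the headless service name and app name are placeholders; it assumes the pods run with RELEASE_DISTRIBUTION=name and RELEASE_NODE=my-app@<pod-ip>):

```elixir
# Resolve the headless service to pod IPs, then build full node names.
host = ~c"my-app-epmd.default.svc.cluster.local"
{:ok, ips} = :inet.getaddrs(host, :inet)

nodes = for ip <- ips, do: :"my-app@#{:inet.ntoa(ip)}"
Enum.map(nodes, &Node.ping/1)  # :pong for each reachable pod
```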

Though, if you are running a single node, then RELEASE_NODE=app-name@something.internal should also work, as long as something.internal is resolvable from the Livebook pod : )
