Creating a cluster with libcluster and Kubernetes

I’m having problems creating a cluster with libcluster and K8S…

The error I’m having is:

`[warning] [libcluster:k8s] unable to connect to :"foo@10.244.2.106": not part of network`

These are my kubernetes configs:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: foo
spec:
  replicas: 4
  selector:
    matchLabels:
      app: foo
  template:
    metadata:
      labels:
        app: foo
    spec:
      serviceAccountName: foo-account
      containers:
        - name: foo
          image: full_node:latest
          args: ["--name", "foo@$(MY_POD_IP)", "--cookie", "foo-cookie"]
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 4000
          env:
            - name: MY_POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP

---

apiVersion: v1
kind: Service
metadata:
  name: foo-service
  namespace: default
spec:
  selector:
    app: foo
  ports:
    - protocol: TCP
      name: api
      port: 4000
      targetPort: 4000
    - protocol: TCP
      name: web
      port: 4001
      targetPort: 4001
  type: LoadBalancer

---
apiVersion: v1
kind: Service
metadata:
  name: foo-nodes
  namespace: default
spec:
  type: ClusterIP
  clusterIP: None
  selector:
    app: foo
  ports:
    - name: epmd
      port: 4369

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: pod-reader
rules:
  - apiGroups: [""]
    resources: ["pods", "endpoints"]
    verbs: ["get", "watch", "list"]

---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-pods
  namespace: default
subjects:
  - kind: ServiceAccount
    name: foo-account
    namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io

---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: foo-account
  namespace: default

The libcluster config:

topologies = [
  k8s: [
    strategy: Cluster.Strategy.Kubernetes.DNS,
    config: [
      service: "foo-nodes",
      application_name: "foo"
    ]
  ]
]
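
For reference, a topology list like this is normally handed to Cluster.Supervisor in the application's supervision tree; a minimal sketch, with placeholder Foo module names:

defmodule Foo.Application do
  use Application

  @impl true
  def start(_type, _args) do
    topologies = [
      k8s: [
        strategy: Cluster.Strategy.Kubernetes.DNS,
        config: [service: "foo-nodes", application_name: "foo"]
      ]
    ]

    children = [
      # libcluster supervisor: resolves the headless service and connects the pods
      {Cluster.Supervisor, [topologies, [name: Foo.ClusterSupervisor]]}
      # ...the rest of the application's children
    ]

    Supervisor.start_link(children, strategy: :one_for_one, name: Foo.Supervisor)
  end
end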

Communication between pods is fine. Endpoints:

kubectl get endpoints
NAME           ENDPOINTS                                                           AGE
foo-nodes     10.244.2.100:4369,10.244.2.101:4369,10.244.2.102:4369 + 1 more...   15m
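
For context, the Kubernetes.DNS strategy resolves the headless service to exactly these pod IPs and then tries to connect to `<application_name>@<pod IP>`, which is where the foo@10.244.2.106 in the warning comes from. The lookup can be reproduced by hand from an IEx session inside a pod (the fully qualified name below assumes the default namespace); it should return the same pod IPs as four-element tuples:

:inet_res.lookup(~c"foo-nodes.default.svc.cluster.local", :in, :a)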

Services:

kubectl get services
NAME           TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                         AGE
foo-nodes     ClusterIP      None            <none>        4369/TCP                        37m
foo-service   LoadBalancer   10.101.190.19   <pending>     4000:31343/TCP,4001:32461/TCP   4h53m

I followed several tutorials, like "Connecting Elixir Nodes with libcluster, locally and on Kubernetes", and read the docs for Cluster.Strategy.Kubernetes.DNS (libcluster v3.3.3). I don't know what else I can try…

Any thoughts?

What is the command that is executed in the container? `--name` and `--cookie` are arguments to the `elixir` or `iex` command. If you built a release, try working with environment variables instead of args:

          env:
            - name: MY_POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: RELEASE_DISTRIBUTION
              value: name
            - name: RELEASE_NODE
              value: foo@$(MY_POD_IP)
            - name: RELEASE_COOKIE
              value: foo-cookie
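
A release reads RELEASE_DISTRIBUTION, RELEASE_NODE and RELEASE_COOKIE at boot, so the node then starts as foo@<pod IP>, which is exactly the name the DNS strategy tries to connect to. Once the pods restart, you can check that the cluster formed by opening a remote console on one of them (for example `kubectl exec -it <pod> -- bin/foo remote`, assuming the release is named foo) and listing the connected nodes; the IPs below are only illustrative:

iex(foo@10.244.2.100)1> Node.list()
[:"foo@10.244.2.101", :"foo@10.244.2.102", :"foo@10.244.2.103"]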

Thanks, it worked!