Kubernetes + libcluster help (GKE)

Hi,

I want to get my Elixir app to run with libcluster on Google Kubernetes Engine.

I found an article that's somewhat related. It describes creating a “headless service” that returns all the pod IPs, which the author then passes into Peerage. You can search for myapp-service-headless to find his Kubernetes YAML file.

It looks like libcluster provides the same functionality as Peerage:

https://hexdocs.pm/libcluster/Cluster.Strategy.Kubernetes.html
https://hexdocs.pm/libcluster/Cluster.Strategy.Kubernetes.DNS.html#content (I think this is the one I need)

Unfortunately, I don’t really understand the exact configuration for GKE + libcluster.

I want to use the DNS A record method they mention in this doc (for some reason it doesn’t match up with the docs for the module of the same name… maybe it’s out of date?). So far I did get POD_IP and NAMESPACE properly passed into the Docker container, so that step is good.
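In case it helps anyone following along, that's done with Kubernetes' Downward API in the Deployment's pod spec. A sketch (the surrounding container layout is omitted):

```yaml
# Illustrative pod-spec fragment: expose the pod IP and namespace
# to the container as environment variables via the Downward API.
env:
  - name: POD_IP
    valueFrom:
      fieldRef:
        fieldPath: status.podIP
  - name: NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
```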

Here are my questions:

  1. I need a headless service, right? It seems like the “DNS” method (the second link) is preferable. I tried making a headless service matching the config in the article above, but it just says 0 pods and doesn’t appear to be working.
  2. It says I should be able to run kubectl get endpoints -l app=myapp. I assume this refers to the headless service’s endpoints? Is that needed for both strategies?
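For reference, here's how I've been checking whether the headless service actually selects any pods (the service and label names match my YAML below; adjust as needed):

```shell
# A working headless service should list the pod IPs under ENDPOINTS,
# not <none>.
kubectl get endpoints -l app=xyz-headless

# Double-check which pods the service's selector is supposed to match:
kubectl get pods -l app=xyz-web -o wide
```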

Here is my headless service YAML (that’s not working):

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2018-12-24T06:22:20Z
  labels:
    app: xyz-headless
  name: xyz-headless
  namespace: default
  resourceVersion: "251868"
  selfLink: /api/v1/namespaces/default/services/xyz-headless
  uid: 12345
spec:
  clusterIP: None
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: xyz-web # this is actually my "deployment"... wasn't sure what to put here
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

I’m doing a similar setup and this article helped me a lot https://engineering.dollarshaveclub.com/elixir-otp-applications-on-kubernetes-9944636b8609

repo: https://github.com/dollarshaveclub/ex_cluster


Thanks! I’ll check it out

I haven’t done this in a long time but found an old app that used it.

config/prod.exs

config :libcluster,
  topologies: [
    k8s: [
      strategy: Cluster.Strategy.Kubernetes,
      config: [
        kubernetes_selector: "app=${KUBE_APP}",
        kubernetes_node_basename: "my_app"
      ]
    ]
  ]

vm.args

-name my_app@${MY_POD_IP}
-setcookie <thecookie>
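One gotcha worth flagging: the `${MY_POD_IP}` / `${KUBE_APP}` placeholders are only substituted at boot if the release is set up for it. If this was a distillery release (I believe that `${VAR}` syntax comes from distillery's REPLACE_OS_VARS substitution), the env var needs to be set when the container starts. An illustrative entrypoint (the install path is hypothetical):

```shell
# ${MY_POD_IP} in vm.args is only replaced at boot when REPLACE_OS_VARS
# is set (distillery-specific behavior).
export REPLACE_OS_VARS=true
exec /opt/app/bin/my_app foreground
```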

Looking through my service files, I don’t see anything interesting, so I’m not sure they’re relevant other than setting the app name.

deployment

kind: Deployment
metadata:
  name: my_app-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: my_app
    spec:

headless-service

apiVersion: v1
kind: Service
metadata:
  name: my_app-service-headless
  labels:
    app: my_app
spec:
  ports:
    - port: 8000
  selector:
    app: my_app
  clusterIP: None
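The bit that trips people up (and likely the “0 pods” issue above): the service’s `selector` has to match the labels on the pods themselves, i.e. the Deployment’s `template.metadata.labels`, not the Deployment’s name. A minimal sketch (names illustrative):

```yaml
# Pods get their labels from the Deployment's pod template:
#   template.metadata.labels -> app: my_app
# ...and the headless service must select exactly those labels:
spec:
  clusterIP: None
  selector:
    app: my_app   # matches the pod template labels, not the Deployment name
```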

service

apiVersion: v1
kind: Service
metadata:
  name: my_app-service
spec:
  ports:
    - port: 8080
      targetPort: 8000
      protocol: TCP
      name: http
  selector:
    app: my_app

dude awesome!!! THANK YOU!

Just realised I forgot the deployment one.

metadata:
  name: my_app-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: my_app
    spec:
      imagePullSecrets:
        - name: regsecret
      containers:
        - name: my_app
          image: <my registry>
          ports:
            - containerPort: 8000
          args: ["foreground"]
          env:
            - name: HOST
              value: "app.myapp.com"

ingress (I think this has changed in newer versions, though), so probably not helpful.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my_app-appback-ingress
spec:
  rules:
  - host: appback.myapp.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my_app-service
          servicePort: 8080

Let me know if you get it working. If not I’ll see if any other config seems relevant.


Hi all, this post was incredibly helpful for me (THANK YOU!) and I wanted to share our final configuration in case it helps anyone else. We’re using Google Kubernetes Engine with libcluster. The Elixir apps are reachable from the web via a load balancer that uses a Google-managed SSL certificate, and they can reach each other via the headless service.

env.sh.eex

#!/bin/sh
export RELEASE_DISTRIBUTION=name
export RELEASE_NODE=<%= @release.name %>@${POD_IP}

config/prod.exs

config :libcluster,
  topologies: [
    k8s: [
      strategy: Elixir.Cluster.Strategy.Kubernetes.DNS,
      config: [
        service: "myapp-service-headless",
        application_name: "my_app",
        polling_interval: 10_000
      ]
    ]
  ]
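For anyone debugging this: with the Kubernetes.DNS strategy, nodes connect using names of the form `<application_name>@<pod-ip>`, where the IPs come from DNS A-record lookups on the headless service. A rough sketch of the name construction (not libcluster's actual code, just the shape of it):

```elixir
# Rough sketch (not libcluster's implementation): given a pod IP resolved
# from the headless service's DNS A records, the expected node name is
# "<application_name>@<ip>" as an atom.
defmodule NodeNameSketch do
  def node_name(app_name, ip_tuple) do
    :"#{app_name}@#{:inet.ntoa(ip_tuple)}"
  end
end

# NodeNameSketch.node_name("my_app", {10, 8, 0, 5})
# → :"my_app@10.8.0.5"
```

Note that `application_name` here needs to line up with the node name set in env.sh.eex (`RELEASE_NODE=<%= @release.name %>@${POD_IP}`), otherwise the connect attempts go to names that don't exist.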

my_app.ex

  def start(_type, _args) do
    ...
    topologies = Application.get_env(:libcluster, :topologies) || []

    children = [
      {Cluster.Supervisor, [topologies, [name: MyApp.ClusterSupervisor]]},
      supervisor(MyApp.Repo, []),
      supervisor(MyAppWeb.Endpoint, []),
      ...
    ]

    opts = [strategy: :one_for_one, name: MyApp.Supervisor]

    Supervisor.start_link(children, opts)
  end

kubernetes/all-together-now.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - image: gcr.io/my-gcp-project-123/my_app_otp:develop-HEAD
        name: myapp
        lifecycle:
          preStop:
            exec:
              command: ["./prod/rel/my_app/bin/my_app","stop"]
        ports:
        - containerPort: 4000
          protocol: TCP
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        envFrom:
        - configMapRef:
            name: staging-env-file
        command: ["./prod/rel/my_app/bin/my_app"]
        args: ["start"]
      terminationGracePeriodSeconds: 60   
---
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
  namespace: default
  annotations:
    cloud.google.com/app-protocols: '{"service-https-port":"HTTPS"}'
    cloud.google.com/neg: '{"ingress": true}' # Creates a NEG after an Ingress is created
spec:
  ports:
  - name: service-https
    port: 443
    protocol: TCP
    targetPort: 4000
  selector:
    app: myapp
  type: NodePort
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: myapp-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: staging-managed-ip
    ingress.gcp.kubernetes.io/pre-shared-cert: "staging"
    kubernetes.io/ingress.allow-http: "false"
spec:
  backend:
    serviceName: myapp-service
    servicePort: service-https
---
apiVersion: v1
kind: Service
metadata:
  name: myapp-service-headless
spec:
  ports:
    - port: 8000
  selector:
    app: myapp
  clusterIP: None

Then how do you run it to deploy?

Yeah, fair question 🙂. I count just 4 moving pieces in that Kubernetes YAML. The file assumes that:

  1. you’ve built your image and it’s hosted at gcr.io/my-gcp-project-123/my_app_otp:develop-HEAD,
  2. you’re using #mix-release and not distillery,
  3. you’ve got a static external IP address (staging-managed-ip in this example),
  4. and you’ve got an SSL certificate set up on Google Cloud Platform (staging in this example).

If that’s the case, you should be able to set up a cluster with some node pools and run kubectl apply -f ./kubernetes/all-together-now.yaml. It will create a deployment, a service, an ingress, and a headless service (which is the key bit to getting #libcluster’s Kubernetes.DNS strategy to work). Hope this helps.
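Once it’s applied, a quick way to confirm the cluster actually formed (paths match the release layout above; deployment name from the YAML):

```shell
# Open a remote IEx console on one of the pods; with replicas: 2 you'd
# expect to see the other pod's node name when you list connections.
kubectl exec -it deploy/myapp-deployment -- ./prod/rel/my_app/bin/my_app remote
# then inside IEx:
#   Node.list()
```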