Difficult debugging problem

Yes, it's always the same domains. OK, I will re-enable those for sure then.

Here is my Dockerfile (I'm including it anyway, just in case it helps with debugging). Thanks, good point. Maybe I can just add a temporary line where I add apk --no-cache --update add net-tools iproute2 and comment it out when I'm not using it.

# https://github.com/GoogleCloudPlatform/community/blob/master/tutorials/elixir-phoenix-on-kubernetes-google-container-engine/Dockerfile

# REPLACE_OS_VARS is used only for POD_IP passed in from GKE yaml

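# Build stage: compile the app and assemble the release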
FROM elixir:1.7.4-alpine

ARG APP_NAME=appname
ARG PHOENIX_SUBDIR=.

ENV MIX_ENV=prod \
    REPLACE_OS_VARS=true \
    TERM=xterm

WORKDIR /opt/app

RUN apk update \
    && apk --no-cache --update add git \
    && mix local.rebar --force \
    && mix local.hex --force

COPY . .

RUN mix do deps.get, deps.compile, compile

RUN mix release --env=prod --verbose \
    && mv _build/prod/rel/${APP_NAME} /opt/release \
    && mv /opt/release/bin/${APP_NAME} /opt/release/bin/start_server

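# Runtime stage: minimal image containing only the compiled release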
FROM alpine:latest

RUN apk update \
    && apk --no-cache --update add bash openssl-dev

ENV PORT=8080 \
    REPLACE_OS_VARS=true \
    MIX_ENV=prod

WORKDIR /opt/app

EXPOSE ${PORT}

COPY --from=0 /opt/release .

CMD ["/opt/app/bin/start_server", "foreground"]

Sure, I could try, but it was kind of a nightmare to deploy this. It might take me all day to deploy it without k8s; I will try, though. I did see another Google tutorial; maybe I'll try this: https://cloud.google.com/community/tutorials/elixir-phoenix-on-google-compute-engine

I could probably do this fairly easily

Do you think I should try adding that DNS thing first before deploying to an instance?

Oh nice link, thanks

You don’t necessarily need to deploy outside k8s, just find a way to disable k8s DNS lookups from those specific pods. That’s achieved by making sure that the resolv.conf file that ends up in each pod uses a normal DNS server instead of your pod-local one. I don’t recall the exact steps off the top of my head though, sorry.
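That said, the pod's dnsPolicy field looks like the relevant knob: Default makes a pod inherit the node's own resolv.conf instead of pointing at kube-dns. Something like this could be a starting point (untested; the pod name and image are just placeholders):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dns-test
spec:
  dnsPolicy: Default   # use the node's resolv.conf, bypassing kube-dns
  containers:
  - name: app
    image: alpine:latest
    command: ["sleep", "3600"]
EOF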

You don’t need to build new images just for debugging; what I suggest is connecting to an existing container and installing the debugging packages there.
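For example, something like this would get the tools into a pod that's already running (the pod name is a placeholder, and apk assumes an Alpine-based image):

# open a shell in the running pod
kubectl exec -it <your-pod-name> -- sh
# then, inside the container
apk add --no-cache net-tools iproute2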

Based on what I’m reading, it appears GKE still uses kube-dns:

kubectl -n kube-system get pods

NAME                                                           READY   STATUS    RESTARTS   AGE
event-exporter-v0.2.3-54f94754f4-q6svn                         2/2     Running   0          1d
fluentd-gcp-scaler-697b966945-6lslx                            1/1     Running   0          1d
fluentd-gcp-v3.1.0-wr2lc                                       2/2     Running   0          1d
heapster-v1.6.0-beta.1-7988d7c5b5-9tfrs                        3/3     Running   0          1d
kube-dns-548976df6c-bvxgg                                      4/4     Running   0          1d
kube-dns-autoscaler-67c97c87fb-x4n86                           1/1     Running   0          1d
kube-proxy-gke-app-cluster-highcpu-64-ssd-afc9f592-zm9g        1/1     Running   0          1d
l7-default-backend-5bc54cfb57-hw4gv                            1/1     Running   0          1d
metrics-server-v0.2.1-fd596d746-px56s                          2/2     Running   0          1d

OK, I got ss -s working. Here’s what it looks like when there are no problems:

bash-4.4# ss -s
Total: 2782 (kernel 88998)
TCP:   2683 (estab 511, closed 2167, orphaned 0, synrecv 0, timewait 2094/0), ports 0

Transport Total     IP        IPv6
*	  88998     -         -
RAW	  0         0         0
UDP	  0         0         0
TCP	  516       76        440
INET	  516       76        440
FRAG	  0         0         0

I’m trying to get it to break so I can show you what it looks like then, but that is easier said than done.

This is what it looks like if I load it with more traffic than it can handle:

bash-4.4# netstat -tpan | grep TIME_WAIT | wc -l
21750

and

bash-4.4# ss -s
Total: 3307 (kernel 90342)
TCP:   31988 (estab 993, closed 30990, orphaned 0, synrecv 0, timewait 30874/0), ports 0

Transport Total     IP        IPv6
*	  90342     -         -
RAW	  0         0         0
UDP	  0         0         0
TCP	  998       16        982
INET	  998       16        982
FRAG	  0         0         0

what does

ulimit -a

say?

Hey hey, I wrote them in the 2nd or 3rd reply if you scroll up.

I managed to deploy to a plain 64-CPU instance in Amazon. I hand-installed everything and ran it in the foreground in production mode via SSH.

Still the same problem, although it got to about 50K total before dying; that is ~5-10% better, but I can see that when it starts to fail it looks like this:

OK, did you try testing the API you are calling? Is it possible that it is not able to handle the load you are putting on it? I would use an outside tool like ab, for example, to verify that you are not getting the same timeouts.
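Something along these lines, run from a machine outside your cluster, would show whether the API itself starts timing out under similar load (the URL and numbers are just placeholders):

# hammer the upstream API directly and watch the failed/timed-out request counts
ab -n 20000 -c 200 https://api.example.com/some/endpoint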

I’m sure it’s not them, because of a few things:

a) they can handle millions of requests a second (they’re data APIs)
b) if it were the API, then simply multiplying the number of running app instances on a single node wouldn’t get wildly improved results

AFAIK this should not be this low for a high-concurrency server…

Can you echo 1048576 > /proc/sys/fs/aio-max-nr (or change it in sysctl.conf) and report back?
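i.e. either of these (the second form survives a reboot):

# one-off, takes effect immediately
echo 1048576 > /proc/sys/fs/aio-max-nr

# persistent across reboots
echo "fs.aio-max-nr = 1048576" >> /etc/sysctl.conf
sysctl -p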

When you do not count these as timeouts, does it change your stats significantly?

I will try. I just realized that Tesla uses a pool by default even though hackney by default doesn’t use a pool.

So this entire time it’s been using a pool, which has thrown off my mental model. Now that I know it IS using a pool and failing, I decided to test the pool settings.

The default is 50 connections with a timeout of 15 seconds, I believe… I tried setting it to 200 connections with a timeout of 1 second.

You can see the pool definitely helped. I suppose I’ll try to find the maximum pool size I can get on a single app instance. When I did this before, I remember performance decreasing past a certain number; I thought it was around 200.

That’s suspiciously close to 32K. What does cat /proc/sys/net/ipv4/ip_local_port_range show, and have you tried increasing it?
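If it’s still the stock 32768-60999 range, widening it (and optionally letting the kernel reuse TIME_WAIT sockets for new outgoing connections) is worth a try, for example:

# give outgoing connections more ephemeral ports to work with
sysctl -w net.ipv4.ip_local_port_range="10240 65000"
# allow sockets stuck in TIME_WAIT to be reused for new outbound connections
sysctl -w net.ipv4.tcp_tw_reuse=1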

On the 64-CPU server that’s not on Kubernetes, it is:

root@testcrap2:/home/info# cat /proc/sys/net/ipv4/ip_local_port_range
32768   60999

I don’t really see a change at all, but I will remember to change these when I go back to Kubernetes too, thank you.

If this is for filesystem IO, I believe my app hardly uses the disk, if that matters. In Google Cloud the disk IO is several MB with something like 20-100 operations a second.