ingress logs acceptance and troubleshooting

check ingress config

    kubectl get ingress
    pod=`kubectl get pods -n ingress-nginx | grep controller | awk '{print $1}'`

check the ConfigMap has been applied

    kubectl -n ingress-nginx exec -ti $pod -- bash

    grep log_format nginx.conf
    grep access_log nginx.conf
    grep error_log nginx.conf
    grep gzip nginx.conf
    #grep brotli nginx.conf
    ^D
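
one could also dump the ConfigMap directly, without entering the pod; a quick sketch, assuming the default ConfigMap name ingress-nginx-controller

    kubectl -n ingress-nginx get configmap ingress-nginx-controller -o yaml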

notice that access and error logs go to the standard outputs

    kubectl exec -ti $pod -n ingress-nginx -- bash

    ls -lF /var/log/nginx/
    ^D
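
those files are expected to be symlinks pointing at the container's standard outputs; a quick check from within the pod, assuming that layout

    readlink /var/log/nginx/access.log   # expecting /dev/stdout
    readlink /var/log/nginx/error.log    # expecting /dev/stderr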

proceed with a test

generate a few access logs with gzip enabled

    curl --compressed -i --resolve hello-world.info:80:192.168.49.2 http://hello-world.info/
    # or request it explicitly with -H "Accept-Encoding: gzip,deflate"
    curl --compressed -i --resolve hello-world.info:80:192.168.49.2 http://hello-world.info/NO-EXIST
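
to confirm that the response actually came back gzip'ed, check the Content-Encoding response header; a quick sketch reusing the same resolve trick

    curl --compressed -s -o /dev/null -D - --resolve hello-world.info:80:192.168.49.2 http://hello-world.info/ | grep -i content-encoding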

note that nginx silently discards requests for vhosts it does not know about, WITHOUT ANY LOG

    #curl --compressed -i http://192.168.49.2/

one could try something nasty, e.g. talking SSL on 80/tcp, but that does not generate an error log either

    #curl --compressed -i --resolve hello-world.info:80:192.168.49.2 https://hello-world.info:80/

in the end there is no easy way to generate error logs on ingress-nginx. on a plain nginx server you would only get errors when requesting a file that does not exist or a file with bad permissions.
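
what can still be done is filtering the controller output for whatever warnings and errors do show up; a quick sketch against the controller pod identified above

    kubectl logs $pod -n ingress-nginx | grep -Ei 'error|warn' | tail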

ingress controller logs - within k8s node

let’s get root into minikube (minikube ssh isn’t enough)

    docker exec -ti minikube bash

now let’s check the raw logs

    cd /var/log/containers/
    tail -F ingress-nginx-controller*log
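
note that those files are typically just symlinks into /var/log/pods/ (and possibly further down into the container runtime's own storage, depending on the runtime); a quick check

    ls -l ingress-nginx-controller*log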

ingress controller logs - casual pod logs

    kubectl get ingress
    pod=`kubectl get pods -n ingress-nginx | grep controller | awk '{print $1}'`

access logs should show up there as usual

    kubectl logs $pod -n ingress-nginx | grep GET
    kubectl logs -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx | grep GET
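
to watch requests land in real time, one can follow the controller logs from one terminal while replaying the curl commands above from another; a quick sketch using the same label selector

    kubectl logs -f -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx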

fluent-bit

check that fluent-bit picks up that log file and can reach the log server just fine (otherwise it prints an error)

    kubectl get pods | grep ^fluent
    pod=`kubectl get pods | grep ^fluent | awk '{print $1}'`
    kubectl logs $pod --tail=5
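
it may also help to double-check the fluent-bit configuration for the tail input path and the output destination; a sketch, assuming the ConfigMap is named fluent-bit-config (adjust to your deployment)

    kubectl get configmap fluent-bit-config -o yaml | grep -E 'Path|Host|Port'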

first let’s check local logs

    kubectl exec -ti $pod -- bash

    tail -F /var/log/fluent-bit.std.log

then check the pod's logs

    kubectl logs $pod

and finally check the indexing dashboard (==> Discover) for the newly created index (the index template needs to exist beforehand). you might also check the dynamic mappings that got generated on-the-fly for that index
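
the same checks can also be run against the log server itself from the command line; a sketch, assuming an Elasticsearch-compatible API reachable at http://logserver:9200 and index names starting with fluent-bit (both being assumptions, adjust to your setup)

    curl -s 'http://logserver:9200/_cat/indices?v' | grep fluent
    curl -s 'http://logserver:9200/fluent-bit*/_mapping?pretty' | less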

