k8s // ingress as DaemonSet + hostNetwork

assuming a three-node k8s or Minikube cluster

descr

with the k8s ingress controller running as a DaemonSet instead of a Deployment, you can still limit the number of nodes the ingress works on: simply use a node label, e.g. run the DS only on the worker nodes.
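
for instance, assuming node2 and node3 are the workers (node names are illustrative), label them so the DaemonSet lands only there -- this is the label the nodeSelector further below relies on

    kubectl label node node2 kubernetes.io/role=worker
    kubectl label node node3 kubernetes.io/role=worker
    kubectl get nodes -L kubernetes.io/role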

warning / lessons learned

The ingress-nginx controller is exposed through a LoadBalancer service by default. This is not what you want on a Kind, Minikube or bare-metal setup, since there is no cloud load balancer to provision it.
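
with the stock chart values (controller.service.type: LoadBalancer) you would typically see the controller service stuck waiting for an external address -- namespace below is the quick-start default, adjust to your install

    kubectl get svc -n ingress-nginx
    # EXTERNAL-IP stays <pending> forever without a cloud provider or MetalLB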

the hostPort setup offered by the ingress-nginx Helm chart does exactly what we need, namely serving 80,443/tcp on every node where the DS lives, as it skips the overlay network altogether.
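
under the hood the chart simply adds hostPort entries to the controller container, so the rendered DaemonSet pod spec ends up roughly as follows (a trimmed-down sketch, not the full manifest)

    containers:
    - name: controller
      ports:
      - containerPort: 80
        hostPort: 80
        protocol: TCP
      - containerPort: 443
        hostPort: 443
        protocol: TCP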

an alternative would be to mix a NodePort service with externalTrafficPolicy so that only 30080,30443/tcp answer on those specific nodes (a NodePort listens across the whole cluster by default), but that is less efficient than just using the solution above.
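
for reference, that alternative would look roughly like these chart values (a sketch only; the fixed node ports and the traffic policy are the point)

    controller:
      kind: DaemonSet
      service:
        enabled: true
        type: NodePort
        externalTrafficPolicy: Local
        nodePorts:
          http: 30080
          https: 30443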

controller install & setup

    git clone https://github.com/kubernetes/ingress-nginx.git
    cd ingress-nginx/charts/ingress-nginx/
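
alternatively, instead of the git checkout, the same chart can be fetched from the official Helm repository (see the quick-start link in the resources below)

    helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
    helm repo update
    #helm show values ingress-nginx/ingress-nginx | less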

deploy controllers on every worker node

cat > values-custom.yaml <<EOF
controller:
  kind: DaemonSet

  # https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/
  config:
    enable-brotli: 'true'
    enable-real-ip: 'true'
    use-gzip: 'true'
    use-http2: 'true'

  containerPort:
    http: 80
    https: 443

  hostPort:
    enabled: true
    ports:
      http: 80
      https: 443

  service:
    enabled: false

  nodeSelector:
    kubernetes.io/role: worker
    #app_type: ingress

  tolerations:
  - key: "kubernetes.io/role"
    #key: app_type
    operator: "Equal"
    value: "worker"
    #value: ingress
    effect: "NoSchedule"

defaultBackend:
  enabled: false
EOF
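
note the toleration above only matters if the worker nodes actually carry a matching taint; it is harmless otherwise. quick way to check what the nodeSelector and toleration will match against (node name illustrative)

    # verify the worker label the nodeSelector relies on
    kubectl get nodes -L kubernetes.io/role
    # verify whether a matching taint exists (the toleration is a no-op otherwise)
    kubectl describe node node2 | grep -i taint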

ready to go

helm dependency build
#helm uninstall lbs
helm template lbs ./ --values=values-custom.yaml | grep -i brotli
helm install --dry-run lbs ./ --values=values-custom.yaml | less
helm install lbs ./ --values=values-custom.yaml

acceptance

check where the ingress-nginx controllers live

kubens -c           # default
kubectl get pods -o wide    # only worker nodes
kubectl get svc         # nothing for controllers

check that worker nodes now listen on 80,443/tcp

node2=192.168.49.3
node3=192.168.49.4

nmap -p 80,443 $node2
nmap -p 80,443 $node3
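
to go one step further than the port scan, a throw-away backend plus a minimal Ingress confirms end-to-end routing through a worker node (deployment name, image and host are illustrative)

    kubectl create deployment hello --image=nginxdemos/hello --port=80
    kubectl expose deployment hello --port=80
    kubectl create ingress hello --class=nginx --rule="hello.example.com/*=hello:80"

    curl -H 'Host: hello.example.com' http://$node2/
    curl -kH 'Host: hello.example.com' https://$node3/

    #kubectl delete ingress,svc,deployment hello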

resources

https://kubernetes.github.io/ingress-nginx/deploy/#quick-start

https://stackoverflow.com/questions/61004408/is-it-necessary-to-deploy-the-ingress-controller-using-daemonset

https://kubernetes.github.io/ingress-nginx/

class

https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/

host network

https://stackoverflow.com/questions/77110555/what-is-hostnetwork-in-kubernetes

https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy

https://stackoverflow.com/questions/63691946/kubernetes-what-is-hostport-and-hostip-used-for

