Assuming a three-node k8s or minikube cluster; optionally prepare the list of app types (node labels) first.
With the k8s ingress controller as a DaemonSet instead of a Deployment, you can still limit the set of nodes the ingress runs on – simply use a label, e.g. run the DS only on worker nodes.
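For example, assuming the worker nodes are named node2 and node3 (adjust to your cluster), the label and the matching NoSchedule taint used by values-custom.yaml below could be applied like this:
for n in node2 node3; do
  kubectl label node "$n" kubernetes.io/role=worker              # matched by nodeSelector
  kubectl taint node "$n" kubernetes.io/role=worker:NoSchedule   # matched by the toleration
done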
The ingress-nginx controller is exposed via a LoadBalancer service by default. This is not what you want with Kind, Minikube, or a bare-metal setup, where no cloud load balancer is available to provision it.
The hostPort setup offered by the ingress-nginx Helm chart does exactly what we need, namely serving 80,443/tcp on every node where the DaemonSet runs, since it bypasses the Service/overlay networking altogether.
An alternative would be to combine a NodePort service with externalTrafficPolicy: Local so that only the nodes actually running a controller pod serve traffic on 30080,30443/tcp (a plain NodePort is opened on every node in the cluster), but that's less efficient than the hostPort solution above.
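For reference, a sketch of that alternative with the same chart (the values-nodeport.yaml name and the ports are illustrative; the chart exposes the ports as controller.service.nodePorts):
cat > values-nodeport.yaml <<EOF
controller:
  kind: DaemonSet
  service:
    enabled: true
    type: NodePort
    # only nodes with a local controller pod serve traffic
    externalTrafficPolicy: Local
    nodePorts:
      http: 30080
      https: 30443
EOF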
git clone https://github.com/kubernetes/ingress-nginx.git
cd ingress-nginx/charts/ingress-nginx/
Deploy a controller on every worker node:
cat > values-custom.yaml <<EOF
controller:
  kind: DaemonSet
  # https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/
  config:
    enable-brotli: 'true'
    enable-real-ip: 'true'
    use-gzip: 'true'
    use-http2: 'true'
  containerPort:
    http: 80
    https: 443
  hostPort:
    enabled: true
    ports:
      http: 80
      https: 443
  service:
    enabled: false
  nodeSelector:
    kubernetes.io/role: worker
    #app_type: ingress
  tolerations:
    - key: "kubernetes.io/role"
      #key: app_type
      operator: "Equal"
      value: "worker"
      #value: ingress
      effect: "NoSchedule"
# https://kubernetes.github.io/ingress-nginx/user-guide/default-backend/
defaultBackend:
  enabled: false
EOF
helm dependency build
#helm uninstall lbs
helm template lbs ./ --values=values-custom.yaml | grep -i brotli
helm install lbs ./ --values=values-custom.yaml --dry-run
helm install lbs ./ --values=values-custom.yaml
Check where the ingress-nginx controllers live:
kubens -c                  # default
kubectl get pods -o wide   # only worker nodes
kubectl get svc            # nothing for controllers
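The DaemonSet view shows the same thing at a glance (the lbs-ingress-nginx-controller name follows from the lbs release name):
kubectl get ds lbs-ingress-nginx-controller -o wide   # DESIRED/READY should equal the number of worker nodes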
Optionally look at the generated configuration inside a controller pod (the pod name will differ in your cluster):
kubectl exec lbs-ingress-nginx-controller-4rj9m -- cat nginx.conf
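To confirm the ConfigMap options from values-custom.yaml were picked up, grep the rendered config (same pod name as above):
kubectl exec lbs-ingress-nginx-controller-4rj9m -- grep -iE 'brotli|http2|real_ip' nginx.conf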
Check that the worker nodes now listen on 80,443/tcp:
node2=192.168.49.3
node3=192.168.49.4
nmap -p 80,443 $node2
nmap -p 80,443 $node3
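As a quick end-to-end check, hitting either node on port 80 should reach nginx directly; with no Ingress resources defined yet, expect the controller's built-in 404:
curl -i http://$node2/   # expect a 404 until an Ingress resource is created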
https://kubernetes.github.io/ingress-nginx/deploy/#quick-start
https://kubernetes.github.io/ingress-nginx/
https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/
https://stackoverflow.com/questions/77110555/what-is-hostnetwork-in-kubernetes
https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy
https://stackoverflow.com/questions/63691946/kubernetes-what-is-hostport-and-hostip-used-for