assuming a three-node k8s or minikube cluster
svc: we need to expose the service anyway (ClusterIP is fine)
ingress: take good care of the ingress class name
ingress: take good care of the destination service port
we use a full-blown setup to have the ingress controller listen on the nodes' hostNetwork.
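quick sanity check that all nodes are Ready and where the controller pods landed (a sketch, assuming the usual ingress-nginx namespace):
kubectl get nodes -o wide
kubectl get pods -n ingress-nginx -o wide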
sample hello world app on 8080/tcp
cat > test-lbs.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-lbs
  labels:
    app: test-lbs
spec:
  replicas: 2
  selector:
    matchLabels:
      app: test-lbs
  template:
    metadata:
      labels:
        app: test-lbs
    spec:
      containers:
        - name: test-lbs
          image: gcr.io/google-samples/hello-app:1.0
          # listens on 8080 anyhow
          #ports:
          #- containerPort: 8080
EOF
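optionally apply and sanity-check the app directly, before wiring the service (a sketch; port-forward bypasses both service and ingress):
kubectl apply -f test-lbs.yaml
kubectl wait --for=condition=available deploy/test-lbs --timeout=60s
kubectl port-forward deploy/test-lbs 8080:8080 &
curl -i http://127.0.0.1:8080/
kill %1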
ClusterIP service on 80/tcp
cat > test-lbs-svc.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: test-lbs
  labels:
    app: test-lbs
spec:
  #type: NodePort
  ports:
    - protocol: TCP
      targetPort: 8080
      port: 80
      # no need to force it (30000-32767)
      #nodePort: 30000
  selector:
    app: test-lbs
EOF
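optionally verify the 80 -> 8080 mapping; the endpoints should list the pod IPs on 8080 (a sketch):
kubectl apply -f test-lbs-svc.yaml
kubectl get endpoints test-lbs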
ingress for vhost hello.local pointing to the service; beware of the ingress class you are using.
domain=hello.local
class=nginx
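double-check that this class actually exists on the cluster, otherwise the ingress stays unserved:
kubectl get ingressclass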
cat > test-lbs-ingress.yaml <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-lbs
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: $class
  rules:
    - host: $domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: test-lbs
                port:
                  number: 80
EOF
kubectl apply -f test-lbs.yaml
kubectl apply -f test-lbs-svc.yaml
kubectl apply -f test-lbs-ingress.yaml
kubectl get deploy test-lbs
kubectl get pods | grep ^test-lbs
kubectl get svc test-lbs
kubectl get ingress test-lbs
kubectl get pods -n ingress-nginx
kubectl get pods -n ingress-nginx -o wide
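if the ingress gets no address or answers 503, describing it and tailing the controller logs usually tells why (controller deployment name assumed to be the ingress-nginx default):
kubectl describe ingress test-lbs
kubectl -n ingress-nginx logs deploy/ingress-nginx-controller --tail=20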
check the ingress through HTTP
assuming the full-blown ingress setup (hostNetwork, hence ports 80/443 directly on the nodes)
node2=192.168.49.3
node3=192.168.49.4
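note: instead of hardcoding the IPs, they can be pulled from the API (node names assumed to be minikube's multi-node defaults):
node2=$(kubectl get node minikube-m02 -o jsonpath='{.status.addresses[?(@.type=="InternalIP")].address}')
node3=$(kubectl get node minikube-m03 -o jsonpath='{.status.addresses[?(@.type=="InternalIP")].address}')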
nmap -p 80,443 $node2
nmap -p 80,443 $node3
curl -i --resolve $domain:80:$node2 $domain
curl -i --resolve $domain:80:$node3 $domain
# ingress-nodeport alternative
curl -i --resolve hello.local:30080:$node2 http://hello.local:30080/
curl -i --resolve hello.local:30080:$node3 http://hello.local:30080/
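note: 30080 is an assumption; with a NodePort controller service the real ports can be read from the service itself (service and port names assumed to be the ingress-nginx defaults):
kubectl -n ingress-nginx get svc ingress-nginx-controller
http_nodeport=$(kubectl -n ingress-nginx get svc ingress-nginx-controller -o jsonpath='{.spec.ports[?(@.name=="http")].nodePort}')
curl -i --resolve $domain:$http_nodeport:$node2 http://$domain:$http_nodeport/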
ingress controller replica pending
Warning FailedScheduling 2m24s default-scheduler 0/2 nodes are available: 1 node(s) didn't have free ports for the requested pod ports, 1 node(s) didn't match Pod's node affinity/selector. preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling..
==> missing required label on nodes (primary=true on minikube vs. ingress-ready=true on kind?)
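to see which nodeSelector the controller expects vs. what labels the nodes carry (deployment name assumed; the last command is just an example for a kind-style selector and node name):
kubectl -n ingress-nginx get deploy ingress-nginx-controller -o jsonpath='{.spec.template.spec.nodeSelector}{"\n"}'
kubectl get nodes --show-labels
kubectl label node minikube-m02 ingress-ready=true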
https://www.appvia.io/blog/tutorial-deploy-kubernetes-cluster ==> sample yaml
https://dev.to/pavanbelagatti/deploying-an-application-on-kubernetes-a-complete-guide-1cj6
https://stackoverflow.com/questions/68449554/ingress-rule-using-host
https://www.baeldung.com/ops/kubernetes-k8s-service-targetport-vs-port
https://github.com/kubernetes/ingress-nginx/issues/4853
https://komodor.com/learn/how-to-fix-kubernetes-service-503-service-unavailable-error/
https://stackoverflow.com/questions/77580790/unable-to-assign-pods-to-nodes