minikube // rebalance pods across the cluster farm

three nodes

#minikube delete --all
minikube start --driver=docker --nodes 3
minikube addons enable metrics-server
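
metrics-server needs a minute before kubectl top works; a quick check, assuming the addon's deployment is named metrics-server:

kubectl -n kube-system rollout status deploy/metrics-server
kubectl top nodes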

fix the role labels for the workers

kubectl get nodes
kubectl label nodes minikube-m02 kubernetes.io/role=worker
kubectl label nodes minikube-m03 kubernetes.io/role=worker
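
verify the labels stuck; -L adds a column per label key, and the value should also show up under ROLES:

kubectl get nodes -L kubernetes.io/role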

three replicas

#kubectl delete deploy web
vi example-web.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  labels:
    app: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: gcr.io/google-samples/hello-app:1.0
        ports:
        - containerPort: 8080
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          #preferredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                - minikube
                - minikube-m02
                - minikube-m03
kubectl apply -f example-web.yml
kubectl get deploy -o wide
kubectl get pods -o wide
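
the hostname affinity only restricts which nodes are eligible; the default scheduler tends to spread replicas but does not guarantee one per node. a sketch of the same pod template using topologySpreadConstraints for a hard spread instead (kubernetes.io/hostname is a well-known node label; maxSkew: 1 caps the pod-count difference between nodes at one):

    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: web
      containers:
      - name: web
        image: gcr.io/google-samples/hello-app:1.0
        ports:
        - containerPort: 8080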

see what happens when shutting down a node

minikube node list
minikube node delete minikube-m03
kubectl get pods -o wide

==> the third replica goes to minikube-m02
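
to watch the rescheduling live while the node goes down:

kubectl get pods -o wide --watch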

restore the node and rebalance

minikube node add
minikube node list
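
the restored node comes back without the worker label, and depending on the minikube version it may get a fresh name like minikube-m04 instead of reusing minikube-m03; if so, label it and add its hostname to the affinity values list, e.g.:

kubectl get nodes
kubectl label nodes minikube-m04 kubernetes.io/role=worker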

git clone https://github.com/kubernetes-sigs/descheduler.git
cd descheduler/
kubectl create -f kubernetes/base/rbac.yaml
kubectl create -f kubernetes/base/configmap.yaml
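
the stock configmap carries the DeschedulerPolicy; a minimal v1alpha1 policy tuned for this case might look like the sketch below (name/namespace assumed to match what the stock deployment mounts; thresholds are arbitrary; RemoveDuplicates evicts replicas of the same deployment that pile up on one node):

apiVersion: v1
kind: ConfigMap
metadata:
  name: descheduler-policy-configmap
  namespace: kube-system
data:
  policy.yaml: |
    apiVersion: "descheduler/v1alpha1"
    kind: "DeschedulerPolicy"
    strategies:
      "RemoveDuplicates":
        enabled: true
      "LowNodeUtilization":
        enabled: true
        params:
          nodeResourceUtilizationThresholds:
            thresholds:
              "cpu": 20
              "memory": 20
              "pods": 20
            targetThresholds:
              "cpu": 50
              "memory": 50
              "pods": 50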

#kubectl delete deploy descheduler -n kube-system
kubectl create -f kubernetes/deployment/deployment.yaml
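
the descheduler should evict the duplicate replica on its next run; check its log (deployment name as in the delete command above):

kubectl -n kube-system logs deploy/descheduler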

kubectl get pods -o wide

==> back to normal

troubleshooting

Warning  FailedScheduling  4m4s (x2 over 9m30s)  default-scheduler  0/3 nodes are available: 3 node(s) didn't match Pod's node affinity/selector. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..

==> that was with nodeSelector: role: worker in the pod spec; the nodes were labeled kubernetes.io/role=worker, so the bare role key matched nothing
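
with the labels applied earlier, the selector needs the full key; a sketch of the fixed pod spec fragment (only the two labeled workers match, so the control-plane node is excluded):

      nodeSelector:
        kubernetes.io/role: worker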

resources

multi-node

https://medium.com/womenintechnology/create-a-3-node-kubernetes-cluster-with-minikube-8e3dc57d6df2

https://www.digihunch.com/2021/09/single-node-kubernetes-cluster-minikube/

affinity

https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/

https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity

rebalance

https://github.com/kubernetes-sigs/descheduler

https://itnext.io/keep-you-kubernetes-cluster-balanced-the-secret-to-high-availability-17edf60d9cb7

https://itnext.io/pod-rebalancing-and-allocations-in-kubernetes-df3dbfb1e2f9

https://stackoverflow.com/questions/44041965/redistribute-pods-after-adding-a-node-in-kubernetes

https://stackoverflow.com/questions/39092090/how-can-i-distribute-a-deployment-across-nodes/64958458#64958458

https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/

https://stackoverflow.com/questions/41159843/kubernetes-pod-distribution-amongst-nodes

moar / nodeSelector

https://stackoverflow.com/questions/37415617/can-we-mention-more-than-one-node-label-in-single-nodeselector-in-kubernetes

https://docs.openshift.com/container-platform/4.8/nodes/scheduling/nodes-scheduler-node-selectors.html

https://stackoverflow.com/questions/60870978/how-to-use-nodeselector-in-kubernetes

https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/

