kubernetes

20210908 (Wed) pod scheduler

gusalstm 2021. 9. 8. 15:45


Pod Scheduler

node name : pin pods to a node manually by setting nodeName in the manifest
 - the scheduler is not involved
 - scaling out with the scale command still places every replica on the same node
   (because of the nodeName fixed in the ReplicaSet template)

node selector : place pods only on nodes that carry the given label
Removing a node label (append a minus (-) to the label key, here the label "node"):
$ kubectl label node kube-node2 node-
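The counterpart operations (a sketch, not from the session; gpu is one of the labels used later on this cluster) all use kubectl label as well:
$ kubectl label node kube-node2 gpu=midrange              # add a label
$ kubectl label node kube-node2 gpu=highend --overwrite   # change an existing label value
$ kubectl label node kube-node2 gpu-                      # remove the label (trailing minus)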

affinity : declares which nodes a pod prefers (or requires) to be scheduled onto

node affinity     : prefer placing a pod on particular nodes
pod affinity      : prefer placing pods on the same node as certain other pods
pod anti-affinity : prefer placing pods on different nodes from certain other pods

 - node affinity : similar to node selector, it can steer a pod onto specific nodes,
   but unlike node selector (which only places pods on nodes carrying the exact label),
   node affinity can also express a soft preference instead of a hard requirement.
   If you specify several nodeSelectorTerms under nodeAffinity, the pod can be scheduled
   onto any node that satisfies at least one of the nodeSelectorTerms.
   If you specify several matchExpressions within one nodeSelectorTerm, the pod is scheduled
   only onto nodes that satisfy all of the matchExpressions. (See the sketch after this list.)
 - pod affinity
 - pod anti-affinity
All of these rules are also honored on scale in / scale out.
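A minimal node affinity sketch of the OR/AND semantics above, assuming the gpu and gpu-model node labels shown later in this session:
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:              # terms are ORed: matching any one term is enough
        - matchExpressions:             # expressions inside a term are ANDed: all must match
          - key: gpu
            operator: In
            values: ['highend']
          - key: gpu-model
            operator: In
            values: ['3080']
        - matchExpressions:
          - key: gpu
            operator: In
            values: ['midrange']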
taints & tolerations
taints : keep the scheduler from placing additional pods on a specific node.
  → without a matching toleration, no new pods are scheduled onto the node
  → format : "KEY[=VALUE]:EFFECT"

Taint effects (adding and removing taints is sketched just below):
 - NoSchedule  : pods without a matching toleration are not scheduled onto the node
 - PreferNoSchedule : same as NoSchedule, but the scheduler may still place pods there in
        exceptional situations such as a resource shortage
        (neither effect applies to pods already running on the node)
 - NoExecute  : pods without a matching toleration are not scheduled onto the node
        (existing pods on the node are affected as well, i.e. evicted)
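Adding and removing taints (a sketch; only the env=production:NoSchedule taint is actually applied later in this session, the NoExecute line is hypothetical):
$ kubectl taint node kube-node3 env=production:NoSchedule    # add a NoSchedule taint
$ kubectl taint node kube-node3 env=production:NoSchedule-   # remove it again (trailing minus)
$ kubectl taint node kube-node3 maintenance:NoExecute        # hypothetical key-only NoExecute taint (evicts pods lacking a toleration)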

tolerations : allow a pod to be scheduled onto a node that carries a matching taint; declared in the pod spec, e.g. the sketch below
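A minimal toleration sketch matching the "KEY[=VALUE]:EFFECT" form above (the env=production values match the taint applied later in this session):
spec:
  tolerations:
  - key: env
    operator: Equal        # "Exists" would match any value of the key
    value: production
    effect: NoSchedule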

vagrant@kube-control1:~/work/20210908$ kubectl describe nodes |grep Taints
Taints:             node-role.kubernetes.io/master:NoSchedule  → the control plane node already has this taint applied
Taints:             <none>
Taints:             <none>
Taints:             <none>
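An alternative to grepping the describe output (not part of the session) is to print the taints as a column:
$ kubectl get nodes -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints'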
cordon & drain : cordon marks a node unschedulable, drain additionally evicts the pods already running on it (commands shown at the end of this post)

### nodeName : created only on kube-node1 ###
vagrant@kube-control1:~/work/20210908$ cat myapp-rs-nodename.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-rs-nodename
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp-rs-nodename
  template:
    metadata:
      labels:
        app: myapp-rs-nodename
    spec:
      nodeName: kube-node1
      containers:
      - name: myapp
        image: devops2341/go-myweb:latest
        
### nodeSelector : created only on nodes labeled gpu=highend ###
vagrant@kube-control1:~/work/20210908$ cat myapp-rs-nodeselector.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-rs-nodeselector
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp-rs-nodeselector
  template:
    metadata:
      labels:
        app: myapp-rs-nodeselector
    spec:
      nodeSelector:
        gpu: highend
      containers:
      - name: myapp
        image: devops2341/go-myweb:latest
vagrant@kube-control1:~/work/20210908$ kubectl get nodes --show-labels
NAME            STATUS   ROLES    AGE   VERSION    LABELS
kube-control1   Ready    master   15d   v1.19.11   kubernetes.io
kube-node1      Ready    <none>   15d   v1.19.11   gpu=highend  # other labels omitted
kube-node2      Ready    <none>   15d   v1.19.11   gpu=midrange # other labels omitted
kube-node3      Ready    <none>   15d   v1.19.11   gpu=lowends  # other labels omitted
vagrant@kube-control1:~/work/20210908$ kubectl get pods -o wide
NAME                          READY   STATUS    RESTARTS   AGE       NODE       
myapp-rs-nodeselector-mfz5f   1/1     Running   0          10s     kube-node1 
myapp-rs-nodeselector-rqvrn   1/1     Running   0          10s     kube-node1 
# both pods created on kube-node1 (gpu=highend) only #
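The gpu-related labels can also be shown directly as columns (not from the session):
$ kubectl get nodes -L gpu,gpu-model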

### node affinity : basic syntax ###
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution: # condition must be met (hard requirement)
        nodeSelectorTerms:
        - matchExpressions:
          - key: gpu-model   # label name
            operator: In     # comparison operator
            values:
            - '3080'         # label value
            - '2080'         # label value
      preferredDuringSchedulingIgnoredDuringExecution: # preferred condition (soft, may be ignored)
      - weight: 10           # weight (1 - 100)
        preference:
          matchExpressions:
          - key: gpu-model
            operator: In
            values:
            - titan
            
### pod affinity : basic syntax ###
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: KEY
            operator: In
            values:
            - 'val1'
            - 'val2'
        topologyKey: TOPOLOGY_KEY  
# topologyKey: the node label that pod affinity / pod anti-affinity use as the unit for
# colocating / separating pods; usually the node's hostname label is referenced
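topologyKey can be any node label: kubernetes.io/hostname (used in the examples below) makes the individual node the unit, while a zone label such as topology.kubernetes.io/zone would colocate or separate pods per zone — a sketch assuming such a label existed on the nodes (it is not set in this cluster):
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: tier
            operator: In
            values: ['cache']
        topologyKey: topology.kubernetes.io/zone   # hypothetical: at most one cache pod per zone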

### pod anti-affinity : basic syntax ###
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: KEY
            operator: In
            values:
            - 'val1'
        topologyKey: TOPOLOGY_KEY
### pod affinity / anti-affinity example manifests, plus a ReplicaSet with no tolerations ###
vagrant@kube-control1:~/work/20210908$ cat myapp-rs-podaff-cache.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-rs-aff-cache
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp-rs-aff-cache
      tier: cache
  template:
    metadata:
      labels:
        app: myapp-rs-aff-cache
        tier: cache
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: tier
                operator: In
                values:
                - cache
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: myapp
        image: devops2341/go-myweb:latest


vagrant@kube-control1:~/work/20210908$ cat myapp-rs-podaff-front.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-rs-aff-front
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp-rs-aff-front
      tier: frontend
  template:
    metadata:
      labels:
        app: myapp-rs-aff-front
        tier: frontend
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: tier
                operator: In
                values:
                - frontend
            topologyKey: "kubernetes.io/hostname"
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: tier
                operator: In
                values:
                - cache
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: myapp
        image: devops2341/go-myweb:latest


vagrant@kube-control1:~/work/20210908$

vagrant@kube-control1:~/work/20210908$ cat myapp-rs-notoleration.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-rs-notol
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp-rs-notol
      tier: backend
  template:
    metadata:
      labels:
        app: myapp-rs-notol
        tier: backend
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: tier
                operator: In
                values:
                - cache
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: myapp
        image: devops2341/go-myweb:latest

vagrant@kube-control1:~/work/20210908$


vagrant@kube-control1:~/work/20210908$ kubectl scale replicaset myapp-rs-aff-cache --replicas 3
replicaset.apps/myapp-rs-aff-cache scaled
vagrant@kube-control1:~/work/20210908$ kubectl scale replicaset myapp-rs-aff-front --replicas 3
replicaset.apps/myapp-rs-aff-front scaled
vagrant@kube-control1:~/work/20210908$ kubectl get pods -o wide
NAME                          READY   STATUS    RESTARTS   AGE     IP                NODE         NOMINATED NODE   READINESS GATES
myapp-rs-aff-cache-2rr5x      1/1     Running   0          23s     192.168.233.253   kube-node2   <none>           <none>
myapp-rs-aff-cache-5g8kv      1/1     Running   0          3m36s   192.168.9.112     kube-node1   <none>           <none>
myapp-rs-aff-cache-79n69      1/1     Running   0          3m36s   192.168.119.135   kube-node3   <none>           <none>
myapp-rs-aff-front-dcpfl      1/1     Running   0          119s    192.168.119.138   kube-node3   <none>           <none>
myapp-rs-aff-front-qts4s      1/1     Running   0          119s    192.168.9.116     kube-node1   <none>           <none>
myapp-rs-aff-front-zq65m      1/1     Running   0          6s      192.168.233.251   kube-node2   <none>           <none>
myapp-rs-nodeselector-mfz5f   1/1     Running   0          162m    192.168.9.101     kube-node1   <none>           <none>
myapp-rs-nodeselector-rqvrn   1/1     Running   0          162m    192.168.9.98      kube-node1   <none>           <none>
vagrant@kube-control1:~/work/20210908$ kubectl scale replicaset myapp-rs-aff-cache --replicas 2
replicaset.apps/myapp-rs-aff-cache scaled
vagrant@kube-control1:~/work/20210908$ kubectl scale replicaset myapp-rs-aff-front --replicas 2
replicaset.apps/myapp-rs-aff-front scaled
vagrant@kube-control1:~/work/20210908$ kubectl get pods -o wide
NAME                          READY   STATUS    RESTARTS   AGE     IP                NODE         NOMINATED NODE   READINESS GATES
myapp-rs-aff-cache-5g8kv      1/1     Running   0          5m33s   192.168.9.112     kube-node1   <none>           <none>
myapp-rs-aff-cache-79n69      1/1     Running   0          5m33s   192.168.119.135   kube-node3   <none>           <none>
myapp-rs-aff-front-dcpfl      1/1     Running   0          3m56s   192.168.119.138   kube-node3   <none>           <none>
myapp-rs-aff-front-qts4s      1/1     Running   0          3m56s   192.168.9.116     kube-node1   <none>           <none>
myapp-rs-nodeselector-mfz5f   1/1     Running   0          164m    192.168.9.101     kube-node1   <none>           <none>
myapp-rs-nodeselector-rqvrn   1/1     Running   0          164m    192.168.9.98      kube-node1   <none>           <none>
vagrant@kube-control1:~/work/20210908$ kubectl get nodes --show-labels
NAME            STATUS   ROLES    AGE   VERSION    LABELS
kube-control1   Ready    master   15d   v1.19.11   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=kube-control1,kubernetes.io/os=linux,node-role.kubernetes.io/master=
kube-node1      Ready    <none>   15d   v1.19.11   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,gpu-model=3080,gpu=highend,kubernetes.io/arch=amd64,kubernetes.io/hostname=kube-node1,kubernetes.io/os=linux
kube-node2      Ready    <none>   15d   v1.19.11   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,gpu-model=2080,gpu=midrange,kubernetes.io/arch=amd64,kubernetes.io/hostname=kube-node2,kubernetes.io/os=linux
kube-node3      Ready    <none>   15d   v1.19.11   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,gpu-model=1660,gpu=lowends,kubernetes.io/arch=amd64,kubernetes.io/hostname=kube-node3,kubernetes.io/os=linux
vagrant@kube-control1:~/work/20210908$ kubectl describe nodes kube-nodes |grep Taints
Error from server (NotFound): nodes "kube-nodes" not found
vagrant@kube-control1:~/work/20210908$ kubectl describe nodes kube-nodes |grep -iname Taints
grep: invalid max count
Error from server (NotFound): nodes "kube-nodes" not found
vagrant@kube-control1:~/work/20210908$ kubectl describe nodes |grep -iname Taints
grep: invalid max count
vagrant@kube-control1:~/work/20210908$ kubectl describe nodes |grep taints
vagrant@kube-control1:~/work/20210908$ kubectl describe nodes |grep Taints
Taints:             node-role.kubernetes.io/master:NoSchedule
Taints:             <none>
Taints:             <none>
Taints:             <none>
vagrant@kube-control1:~/work/20210908$ kubectl taint NODE NODE-NAME KEY=[VALUE]:EFFECT ^C
vagrant@kube-control1:~/work/20210908$
vagrant@kube-control1:~/work/20210908$ kubectl get node --show-labels
NAME            STATUS   ROLES    AGE   VERSION    LABELS
kube-control1   Ready    master   15d   v1.19.11   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=kube-control1,kubernetes.io/os=linux,node-role.kubernetes.io/master=
kube-node1      Ready    <none>   15d   v1.19.11   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,gpu-model=3080,gpu=highend,kubernetes.io/arch=amd64,kubernetes.io/hostname=kube-node1,kubernetes.io/os=linux
kube-node2      Ready    <none>   15d   v1.19.11   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,gpu-model=2080,gpu=midrange,kubernetes.io/arch=amd64,kubernetes.io/hostname=kube-node2,kubernetes.io/os=linux
kube-node3      Ready    <none>   15d   v1.19.11   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,gpu-model=1660,gpu=lowends,kubernetes.io/arch=amd64,kubernetes.io/hostname=kube-node3,kubernetes.io/os=linux
vagrant@kube-control1:~/work/20210908$ kubectl taint node kube-node3 env=production:NoSchedule
node/kube-node3 tainted
vagrant@kube-control1:~/work/20210908$ kubectl describe node kube-node3 |grep taint
vagrant@kube-control1:~/work/20210908$ kubectl describe node kube-node3
Name:               kube-node3
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    gpu=lowends
                    gpu-model=1660
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=kube-node3
                    kubernetes.io/os=linux
Annotations:        csi.volume.kubernetes.io/nodeid: {"rook-ceph.cephfs.csi.ceph.com":"kube-node3","rook-ceph.rbd.csi.ceph.com":"kube-node3"}
                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    projectcalico.org/IPv4Address: 192.168.200.23/24
                    projectcalico.org/IPv4IPIPTunnelAddr: 192.168.119.128
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 23 Aug 2021 07:52:59 +0000
Taints:             env=production:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  kube-node3
  AcquireTime:     <unset>
  RenewTime:       Wed, 08 Sep 2021 05:33:55 +0000
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Wed, 08 Sep 2021 00:40:38 +0000   Wed, 08 Sep 2021 00:40:38 +0000   CalicoIsUp                   Calico is running on this node
  MemoryPressure       False   Wed, 08 Sep 2021 05:31:30 +0000   Wed, 08 Sep 2021 00:39:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Wed, 08 Sep 2021 05:31:30 +0000   Wed, 08 Sep 2021 00:39:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Wed, 08 Sep 2021 05:31:30 +0000   Wed, 08 Sep 2021 00:39:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Wed, 08 Sep 2021 05:31:30 +0000   Wed, 08 Sep 2021 00:39:36 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  192.168.200.23
  Hostname:    kube-node3
Capacity:
  cpu:                2
  ephemeral-storage:  40593612Ki
  hugepages-2Mi:      0
  memory:             3064356Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  37411072758
  hugepages-2Mi:      0
  memory:             2961956Ki
  pods:               110
System Info:
  Machine ID:                 6f1ad46bead544d1b5bcb11fd9fa3de9
  System UUID:                3ee51e77-8110-6a4e-9a2d-52e44968fbd9
  Boot ID:                    7940543d-9dd5-4562-aff3-375558009495
  Kernel Version:             5.4.0-81-generic
  OS Image:                   Ubuntu 20.04.2 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://20.10.8
  Kubelet Version:            v1.19.11
  Kube-Proxy Version:         v1.19.11
PodCIDR:                      192.168.3.0/24
PodCIDRs:                     192.168.3.0/24
Non-terminated Pods:          (14 in total)
  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
  default                     myapp-rs-aff-cache-79n69                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         93m
  default                     myapp-rs-aff-front-dcpfl                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         91m
  kube-system                 calico-node-4hg7w                                      250m (12%)    0 (0%)      0 (0%)           0 (0%)         15d
  kube-system                 kube-proxy-pcpsn                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         15d
  kube-system                 metrics-server-766c9b8df-2wwfc                         100m (5%)     0 (0%)      200Mi (6%)       0 (0%)         28h
  metallb-system              speaker-bvsnw                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         14d
  rook-ceph                   csi-cephfsplugin-8fnps                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5d4h
  rook-ceph                   csi-cephfsplugin-provisioner-78d66674d8-c2xrl          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5d4h
  rook-ceph                   csi-rbdplugin-provisioner-687cf777ff-q9srr             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5d4h
  rook-ceph                   csi-rbdplugin-t586m                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5d4h
  rook-ceph                   rook-ceph-crashcollector-kube-node3-fd9f9f6bd-hqtr4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         29h
  rook-ceph                   rook-ceph-mon-b-7dcc58fcc6-998r4                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5d4h
  rook-ceph                   rook-ceph-operator-fd9ff6bf5-8gsxf                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5d4h
  rook-ceph                   rook-ceph-osd-1-69c579789-r28h9                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5d3h
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                350m (17%)  0 (0%)
  memory             200Mi (6%)  0 (0%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:              <none>
vagrant@kube-control1:~/work/20210908$ kubectl describe node kube-node3 |grep Taint
Taints:             env=production:NoSchedule
vagrant@kube-control1:~/work/20210908$ vim myapp-rs-notoleration.yaml
 [New] 30L, 652C written
vagrant@kube-control1:~/work/20210908$ vim myapp-rs-notoleration.yaml
 30L, 657C written
vagrant@kube-control1:~/work/20210908$ kubectl create -f myapp-rs-notoleration.yaml
replicaset.apps/myapp-rs-notol created
vagrant@kube-control1:~/work/20210908$ kubectl get replicasets
NAME                    DESIRED   CURRENT   READY   AGE
myapp-rs-aff-cache      2         2         2       106m
myapp-rs-aff-front      2         2         2       105m
myapp-rs-nodeselector   2         2         2       4h25m
myapp-rs-notol          1         1         1       20s
vagrant@kube-control1:~/work/20210908$ kubectl get pods -o wide
NAME                          READY   STATUS    RESTARTS   AGE     IP                NODE         NOMINATED NODE   READINESS GATES
myapp-rs-aff-cache-5g8kv      1/1     Running   0          107m    192.168.9.112     kube-node1   <none>           <none>
myapp-rs-aff-cache-79n69      1/1     Running   0          107m    192.168.119.135   kube-node3   <none>           <none>
myapp-rs-aff-front-dcpfl      1/1     Running   0          105m    192.168.119.138   kube-node3   <none>           <none>
myapp-rs-aff-front-qts4s      1/1     Running   0          105m    192.168.9.116     kube-node1   <none>           <none>
myapp-rs-nodeselector-mfz5f   1/1     Running   0          4h26m   192.168.9.101     kube-node1   <none>           <none>
myapp-rs-nodeselector-rqvrn   1/1     Running   0          4h26m   192.168.9.98      kube-node1   <none>           <none>
myapp-rs-notol-77kfw          1/1     Running   0          28s     192.168.233.250   kube-node2   <none>           <none>
vagrant@kube-control1:~/work/20210908$ kubectl delete -f myapp-rs-notoleration.yaml
replicaset.apps "myapp-rs-notol" deleted
vagrant@kube-control1:~/work/20210908$ kubectl delete -f myapp-rs-
myapp-rs-nodeaff.yaml       myapp-rs-nodeselector.yaml  myapp-rs-podaff-cache.yaml
myapp-rs-nodename.yaml      myapp-rs-notoleration.yaml  myapp-rs-podaff-front.yaml
vagrant@kube-control1:~/work/20210908$ kubectl delete -f myapp-rs-podaff-cache.yaml
replicaset.apps "myapp-rs-aff-cache" deleted
vagrant@kube-control1:~/work/20210908$ kubectl delete -f myapp-rs-podaff-front.yaml
replicaset.apps "myapp-rs-aff-front" deleted
vagrant@kube-control1:~/work/20210908$ kubectl get all
NAME                              READY   STATUS    RESTARTS   AGE
pod/myapp-rs-nodeselector-mfz5f   1/1     Running   0          4h29m
pod/myapp-rs-nodeselector-rqvrn   1/1     Running   0          4h29m

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   15d

NAME                                    DESIRED   CURRENT   READY   AGE
replicaset.apps/myapp-rs-nodeselector   2         2         2       4h29m
vagrant@kube-control1:~/work/20210908$ kubectl create -f myapp-rs-notoleration.yaml
replicaset.apps/myapp-rs-notol created
vagrant@kube-control1:~/work/20210908$ kubectl delete -f myapp-rs-notoleration.yaml
replicaset.apps "myapp-rs-notol" deleted
vagrant@kube-control1:~/work/20210908$ kubectl create -f myapp-rs-
myapp-rs-nodeaff.yaml       myapp-rs-nodeselector.yaml  myapp-rs-podaff-cache.yaml
myapp-rs-nodename.yaml      myapp-rs-notoleration.yaml  myapp-rs-podaff-front.yaml
vagrant@kube-control1:~/work/20210908$ kubectl create -f myapp-rs-podaff-cache.yaml
replicaset.apps/myapp-rs-aff-cache created
vagrant@kube-control1:~/work/20210908$ kubectl create -f myapp-rs-podaff-front.yaml
replicaset.apps/myapp-rs-aff-front created
vagrant@kube-control1:~/work/20210908$ kubectl get pods -o wide
NAME                          READY   STATUS              RESTARTS   AGE     IP                NODE         NOMINATED NODE   READINESS GATES
myapp-rs-aff-cache-p7f8p      1/1     Running             0          7s      192.168.9.121     kube-node1   <none>           <none>
myapp-rs-aff-cache-zmqdx      1/1     Running             0          7s      192.168.233.255   kube-node2   <none>           <none>
myapp-rs-aff-front-7hdzp      0/1     ContainerCreating   0          4s      <none>            kube-node2   <none>           <none>
myapp-rs-aff-front-rwqwf      0/1     ContainerCreating   0          4s      <none>            kube-node1   <none>           <none>
myapp-rs-nodeselector-mfz5f   1/1     Running             0          4h29m   192.168.9.101     kube-node1   <none>           <none>
myapp-rs-nodeselector-rqvrn   1/1     Running             0          4h29m   192.168.9.98      kube-node1   <none>           <none>
vagrant@kube-control1:~/work/20210908$ kubectl create -f myapp-rs-notoleration.yaml
replicaset.apps/myapp-rs-notol created
vagrant@kube-control1:~/work/20210908$ kubectl get replicasets
NAME                    DESIRED   CURRENT   READY   AGE
myapp-rs-aff-cache      2         2         2       25s
myapp-rs-aff-front      2         2         2       22s
myapp-rs-nodeselector   2         2         2       4h30m
myapp-rs-notol          1         1         0       4s
vagrant@kube-control1:~/work/20210908$ kubectl get pods -o wide
NAME                          READY   STATUS    RESTARTS   AGE     IP                NODE         NOMINATED NODE   READINESS GATES
myapp-rs-aff-cache-p7f8p      1/1     Running   0          34s     192.168.9.121     kube-node1   <none>           <none>
myapp-rs-aff-cache-zmqdx      1/1     Running   0          34s     192.168.233.255   kube-node2   <none>           <none>
myapp-rs-aff-front-7hdzp      1/1     Running   0          31s     192.168.233.194   kube-node2   <none>           <none>
myapp-rs-aff-front-rwqwf      1/1     Running   0          31s     192.168.9.109     kube-node1   <none>           <none>
myapp-rs-nodeselector-mfz5f   1/1     Running   0          4h30m   192.168.9.101     kube-node1   <none>           <none>
myapp-rs-nodeselector-rqvrn   1/1     Running   0          4h30m   192.168.9.98      kube-node1   <none>           <none>
myapp-rs-notol-tl9vg          0/1     Pending   0          13s     <none>            <none>       <none>           <none>
vagrant@kube-control1:~/work/20210908$ kubectl describe pods myapp-rs-notol-tl9vg
Name:           myapp-rs-notol-tl9vg
Namespace:      default
Priority:       0
Node:           <none>
Labels:         app=myapp-rs-notol
                tier=backend
Annotations:    <none>
Status:         Pending
IP:
IPs:            <none>
Controlled By:  ReplicaSet/myapp-rs-notol
Containers:
  myapp:
    Image:        devops2341/go-myweb:latest
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-w26v7 (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  default-token-w26v7:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-w26v7
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  63s (x3 over 2m7s)  default-scheduler  0/4 nodes are available: 1 node(s) had taint {env: production}, that the pod didn't tolerate, 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't match pod anti-affinity rules.
vagrant@kube-control1:~/work/20210908$
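One way to unblock the Pending pod, sketched here but not performed in the session: tolerate the env=production taint in the pod template of myapp-rs-notoleration.yaml so kube-node3 becomes eligible again (removing the taint with the trailing minus, as shown earlier, would also work):
    spec:
      tolerations:               # hypothetical addition to myapp-rs-notoleration.yaml
      - key: env
        operator: Equal
        value: production
        effect: NoSchedule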

cordon & uncordon commands (cordon marks the node unschedulable, uncordon makes it schedulable again)
kubectl cordon NODE
kubectl uncordon NODE

vagrant@kube-control1:~/work/20210908$ kubectl cordon kube-node1
node/kube-node1 cordoned
vagrant@kube-control1:~/work/20210908$ kubectl cordon kube-node2
node/kube-node2 cordoned

vagrant@kube-control1:~/work/20210908$ kubectl get nodes
NAME            STATUS                     ROLES    AGE   VERSION
kube-control1   Ready                      master   15d   v1.19.11
kube-node1      Ready,SchedulingDisabled   <none>   15d   v1.19.11
kube-node2      Ready,SchedulingDisabled   <none>   15d   v1.19.11
kube-node3      Ready                      <none>   15d   v1.19.11
 
vagrant@kube-control1:~/work/20210908$ kubectl drain kube-node3 --ignore-daemonsets=true --delete-local-data=true
node/kube-node3 already cordoned
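To bring the cordoned/drained nodes back into scheduling afterwards (assumed follow-up, not captured above):
$ kubectl uncordon kube-node1
$ kubectl uncordon kube-node2
$ kubectl uncordon kube-node3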
