becool
20210906 (Mon) deployment strategies, statefulset
Deployment
A higher-level controller that wraps a ReplicaSet and keeps a specified number of replicas running.
Deployment strategies
RollingUpdate: gradually replaces old pods with new ones (the default)
Recreate: terminates all existing pods before creating the new ones
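The manifest used in this post declares RollingUpdate; switching to Recreate only changes the strategy stanza. A minimal sketch (not one of the files used in this session):

```yaml
spec:
  strategy:
    type: Recreate   # all old pods are terminated before any new pod starts
```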
### Recording revisions with the --record option from the moment the Deployment is created ###

```
vagrant@kube-control1:~/work/20210906$ cat myapp-deploy-v1.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  labels:
    app: myapp-deploy
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 2
  minReadySeconds: 15
  replicas: 3
  selector:
    matchLabels:
      app: myapp-deploy
  template:
    metadata:
      labels:
        app: myapp-deploy
    spec:
      containers:
      - name: myapp
        image: devops2341/go-myweb:v1
        ports:
        - containerPort: 8080
          protocol: TCP
vagrant@kube-control1:~/work/20210906$ kubectl create -f myapp-deploy-v1.yaml --record
deployment.apps/myapp-deploy created
vagrant@kube-control1:~/work/20210906$ kubectl rollout history deployment
deployment.apps/myapp-deploy
REVISION  CHANGE-CAUSE
1         kubectl create --filename=myapp-deploy-v1.yaml --record=true
```
### Replacing the image ###

```
vagrant@kube-control1:~/work/20210906$ kubectl set image deployment myapp-deploy myapp=devops2341/go-myweb:v2 --record
deployment.apps/myapp-deploy image updated
```

### Checking the image update history (--record option) ###

```
vagrant@kube-control1:~/work/20210906$ kubectl rollout history deployments
deployment.apps/myapp-deploy
REVISION  CHANGE-CAUSE
1         kubectl create --filename=myapp-deploy-v1.yaml --record=true
2         kubectl set image deployment myapp-deploy myapp=devops2341/go-myweb:v2 --record=true
```
### If the image update fails (e.g. because of a typo in the image name), the rollout stalls:
### apart from the single pod allowed by maxUnavailable, the old pods keep running normally.
```
vagrant@kube-control1:~/work/20210906$ kubectl get pods
NAME                            READY   STATUS             RESTARTS   AGE
myapp-deploy-65bdb76f48-9k4vs   1/1     Running            0          3m6s
myapp-deploy-65bdb76f48-vhzb6   1/1     Running            0          3m6s
myapp-deploy-7b88d989c8-2jlk5   0/1     ImagePullBackOff   0          44s
myapp-deploy-7b88d989c8-bw2mp   0/1     ImagePullBackOff   0          44s
myapp-deploy-7b88d989c8-w6bfn   0/1     ImagePullBackOff   0          44s
vagrant@kube-control1:~/work/20210906$ kubectl rollout history deployments
deployment.apps/myapp-deploy
REVISION  CHANGE-CAUSE
1         kubectl create --filename=myapp-deploy-v1.yaml --record=true
2         kubectl set image deployment myapp-deploy myapp=devops2341/go-myweb:v2 --record=true
3         kubectl set image deployment myapp-deploy myapp=devops2341/v3 --record=true
vagrant@kube-control1:~/work/20210906$ kubectl rollout undo deployment myapp-deploy --to-revision 2
deployment.apps/myapp-deploy rolled back
vagrant@kube-control1:~/work/20210906$ kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
myapp-deploy-65bdb76f48-47fkt   1/1     Running   0          9s
myapp-deploy-65bdb76f48-9k4vs   1/1     Running   0          7m9s
myapp-deploy-65bdb76f48-vhzb6   1/1     Running   0          7m9s
```
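The pod counts seen during the stalled rollout follow directly from the strategy settings in myapp-deploy-v1.yaml (replicas=3, maxUnavailable=1, maxSurge=2): ready pods never drop below replicas - maxUnavailable, and total pods never exceed replicas + maxSurge, which is why two old pods stayed Running while three new ones sat in ImagePullBackOff. A quick arithmetic sketch (the helper name is mine, not a kubectl command):

```shell
# Compute the rollout bounds implied by a RollingUpdate strategy.
rollout_bounds() {
  replicas=$1; max_unavailable=$2; max_surge=$3
  echo "min ready: $((replicas - max_unavailable)), max total: $((replicas + max_surge))"
}

rollout_bounds 3 1 2   # the values from myapp-deploy-v1.yaml
# → min ready: 2, max total: 5
```

2 running old pods plus 3 stuck new pods is exactly the upper bound of 5.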
StatefulSet
Manages pod replicas while preserving:
each pod's uniqueness (name, network, storage)
the pods' order (created and deleted in sequence)
StatefulSet caveats
Storage for the pods can only be allocated through PVCs
(→ PVs must be created in advance, or dynamic volume provisioning must be available.)
Since each pod is unique, a headless Service is used to reach individual pods.
Pod names follow the pattern STATEFULSET_NAME-# (# assigned sequentially from 0).
Each pod's in-cluster DNS name is POD_NAME.SERVICE_NAME.NAMESPACE.svc.cluster.local.
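That DNS name can be composed mechanically; the full form is POD_NAME.SERVICE_NAME.NAMESPACE.svc.cluster.local, which matches the host lookups later in this post. A small sketch (the helper name is mine):

```shell
# Build the in-cluster FQDN of a StatefulSet pod behind a headless Service.
sts_pod_fqdn() {
  pod=$1; svc=$2; ns=${3:-default}
  echo "${pod}.${svc}.${ns}.svc.cluster.local"
}

sts_pod_fqdn myapp-sts-0 myapp-svc-headless
# → myapp-sts-0.myapp-svc-headless.default.svc.cluster.local
```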
```
vagrant@kube-control1:~/work/20210906$ cat myapp-sts.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp-sts
spec:
  selector:
    matchLabels:
      app: myapp-sts
  serviceName: myapp-svc-headless
  replicas: 2
  template:
    metadata:
      labels:
        app: myapp-sts
    spec:
      containers:
      - name: myapp
        image: devops2341/go-myweb:latest
        ports:
        - containerPort: 8080
          protocol: TCP
```
```
vagrant@kube-control1:~/work/20210906$ cat myapp-svc-headless.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-svc-headless
  labels:
    app: myapp-svc-headless
spec:
  clusterIP: None
  ports:
  - name: http
    port: 8080
    targetPort: 8080
  selector:
    app: myapp-sts
```
```
vagrant@kube-control1:~/work/20210906$ kubectl get statefulsets
NAME        READY   AGE
myapp-sts   2/2     48s
vagrant@kube-control1:~/work/20210906$ kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
myapp-deploy-678f76c6d7-dk4sl   1/1     Running   0          136m
myapp-deploy-678f76c6d7-f5qz6   1/1     Running   0          136m
myapp-deploy-678f76c6d7-pn9kt   1/1     Running   0          136m
myapp-sts-0                     1/1     Running   0          52s
myapp-sts-1                     1/1     Running   0          20s
vagrant@kube-control1:~/work/20210906$ kubectl describe statefulset myapp-sts
Name:               myapp-sts
Namespace:          default
CreationTimestamp:  Mon, 06 Sep 2021 06:19:19 +0000
Selector:           app=myapp-sts
Labels:             <none>
Annotations:        <none>
Replicas:           2 desired | 2 total
Update Strategy:    RollingUpdate
  Partition:        0
Pods Status:        2 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=myapp-sts
  Containers:
   myapp:
    Image:        devops2341/go-myweb:latest
    Port:         8080/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Volume Claims:    <none>
Events:
  Type    Reason            Age   From                    Message
  ----    ------            ----  ----                    -------
  Normal  SuccessfulCreate  95s   statefulset-controller  create Pod myapp-sts-0 in StatefulSet myapp-sts successful
  Normal  SuccessfulCreate  63s   statefulset-controller  create Pod myapp-sts-1 in StatefulSet myapp-sts successful
vagrant@kube-control1:~/work/20210906$ kubectl get services
NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)        AGE
kubernetes           ClusterIP      10.96.0.1       <none>           443/TCP        13d
myapp-svc-deploy     LoadBalancer   10.111.36.106   192.168.200.11   80:30834/TCP   152m
myapp-svc-headless   ClusterIP      None            <none>           8080/TCP       2m1s
vagrant@kube-control1:~/work/20210906$ kubectl get pods -o wide
NAME                            READY   STATUS    RESTARTS   AGE     IP                NODE         NOMINATED NODE   READINESS GATES
myapp-deploy-678f76c6d7-dk4sl   1/1     Running   0          137m    192.168.119.169   kube-node3   <none>           <none>
myapp-deploy-678f76c6d7-f5qz6   1/1     Running   0          137m    192.168.233.217   kube-node2   <none>           <none>
myapp-deploy-678f76c6d7-pn9kt   1/1     Running   0          137m    192.168.9.70      kube-node1   <none>           <none>
myapp-sts-0                     1/1     Running   0          2m29s   192.168.9.69      kube-node1   <none>           <none>
myapp-sts-1                     1/1     Running   0          117s    192.168.119.168   kube-node3   <none>           <none>
vagrant@kube-control1:~/work/20210906$ curl http://192.168.9.69
curl: (7) Failed to connect to 192.168.9.69 port 80: Connection refused
vagrant@kube-control1:~/work/20210906$ curl http://192.168.9.69:8080
Hello World! myapp-sts-0
vagrant@kube-control1:~/work/20210906$ kubectl scale statefulset myapp-sts --replicas 4
statefulset.apps/myapp-sts scaled
vagrant@kube-control1:~/work/20210906$ kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
myapp-deploy-678f76c6d7-dk4sl   1/1     Running   0          139m
myapp-deploy-678f76c6d7-f5qz6   1/1     Running   0          139m
myapp-deploy-678f76c6d7-pn9kt   1/1     Running   0          139m
myapp-sts-0                     1/1     Running   0          4m3s
myapp-sts-1                     1/1     Running   0          3m31s
myapp-sts-2                     1/1     Running   0          19s
myapp-sts-3                     1/1     Running   0          14s
vagrant@kube-control1:~/work/20210906$ kubectl scale statefulset myapp-sts --replicas 2
statefulset.apps/myapp-sts scaled
vagrant@kube-control1:~/work/20210906$ kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
myapp-deploy-678f76c6d7-dk4sl   1/1     Running   0          140m
myapp-deploy-678f76c6d7-f5qz6   1/1     Running   0          140m
myapp-deploy-678f76c6d7-pn9kt   1/1     Running   0          140m
myapp-sts-0                     1/1     Running   0          4m54s
myapp-sts-1                     1/1     Running   0          4m22s
```
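The scaling behavior above is deterministic: pods are named STATEFULSET_NAME-0 through STATEFULSET_NAME-(N-1), scaling up appends the next ordinals in order, and scaling down removes the highest ordinals first (myapp-sts-3, then myapp-sts-2). A sketch of the naming rule (the helper name is mine):

```shell
# List the pod names a StatefulSet produces for a given replica count.
sts_pod_names() {
  name=$1; replicas=$2; i=0; out=""
  while [ "$i" -lt "$replicas" ]; do
    out="${out:+$out }${name}-${i}"
    i=$((i + 1))
  done
  echo "$out"
}

sts_pod_names myapp-sts 4
# → myapp-sts-0 myapp-sts-1 myapp-sts-2 myapp-sts-3
```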
```
vagrant@kube-control1:~/work/20210906$ kubectl run nettool -it --image devops2341/network-multitool:v1 --rm bash
If you don't see a command prompt, try pressing enter.
bash-5.1# host myapp-sts-0.myapp-svc-headless
myapp-sts-0.myapp-svc-headless.default.svc.cluster.local has address 192.168.9.69
bash-5.1# host myapp-sts-1.myapp-svc-headless
myapp-sts-1.myapp-svc-headless.default.svc.cluster.local has address 192.168.119.168
bash-5.1# host myapp-svc-headless
myapp-svc-headless.default.svc.cluster.local has address 192.168.119.168
myapp-svc-headless.default.svc.cluster.local has address 192.168.9.69
myapp-svc-headless.default.svc.cluster.local has address 192.168.233.227
bash-5.1# curl http://myapp-sts-0.myapp-svc-headless:8080
Hello World! myapp-sts-0
bash-5.1# curl http://myapp-sts-1.myapp-svc-headless:8080
Hello World! myapp-sts-1
```
```
vagrant@kube-control1:~/work/20210906$ cat myapp-svc-headless2.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-svc-headless2
  labels:
    app: myapp-svc-headless2
spec:
  clusterIP: None
  ports:
  - name: http
    port: 8080
    targetPort: 8080
  selector:
    app: myapp-sts-vol
```
```
vagrant@kube-control1:~/work/20210906$ cat myapp-sts-volume.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp-sts-vol
spec:
  selector:
    matchLabels:
      app: myapp-sts-vol
  serviceName: myapp-svc-headless2
  replicas: 2
  template:
    metadata:
      labels:
        app: myapp-sts-vol
    spec:
      containers:
      - name: myapp
        image: devops2341/go-myweb:alpine
        ports:
        - containerPort: 8080
          protocol: TCP
        volumeMounts:
        - name: myapp-data
          mountPath: /data
  volumeClaimTemplates:
  - metadata:
      name: myapp-data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
      storageClassName: rook-ceph-block
```
```
vagrant@kube-control1:~/work/20210906$ kubectl get pods
NAME                            READY   STATUS              RESTARTS   AGE
myapp-deploy-678f76c6d7-dk4sl   1/1     Running             0          173m
myapp-deploy-678f76c6d7-f5qz6   1/1     Running             0          173m
myapp-deploy-678f76c6d7-pn9kt   1/1     Running             0          173m
myapp-sts-0                     1/1     Running             0          38m
myapp-sts-1                     1/1     Running             0          37m
myapp-sts-2                     1/1     Running             0          33m
myapp-sts-vol-0                 1/1     Running             0          43s
myapp-sts-vol-1                 0/1     ContainerCreating   0          15s
nettool                         1/1     Running             0          31m
vagrant@kube-control1:~/work/20210906$ kubectl get persistentvolumeclaims
NAME                         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
myapp-data-myapp-sts-vol-0   Bound    pvc-d0a53c0b-135f-4226-a2e0-89c961bb6a37   1Gi        RWO            rook-ceph-block   57s
myapp-data-myapp-sts-vol-1   Bound    pvc-e8f04ef5-26f5-4a95-a15e-0defc3b6fd9d   1Gi        RWO            rook-ceph-block   29s
vagrant@kube-control1:~/work/20210906$ kubectl describe pods myapp-sts-vol-0
Name:         myapp-sts-vol-0
Namespace:    default
Priority:     0
Node:         kube-node1/192.168.200.21
Start Time:   Mon, 06 Sep 2021 06:57:00 +0000
Labels:       app=myapp-sts-vol
              controller-revision-hash=myapp-sts-vol-c87b47645
              statefulset.kubernetes.io/pod-name=myapp-sts-vol-0
Annotations:  cni.projectcalico.org/containerID: 3f1cebedc8fa4bdd2216b5480a754c65fcf9b8167d97c1ba27b203fdf412e2bf
              cni.projectcalico.org/podIP: 192.168.9.110/32
              cni.projectcalico.org/podIPs: 192.168.9.110/32
Status:       Running
IP:           192.168.9.110
IPs:
  IP:           192.168.9.110
Controlled By:  StatefulSet/myapp-sts-vol
Containers:
  myapp:
    Container ID:   docker://c2859df65977a15e6b6554d87090a92b41931d1fc901ee0f2aae0f45db8cfee8
    Image:          devops2341/go-myweb:alpine
    Image ID:       docker-pullable://devops2341/go-myweb@sha256:6ee032cbdc1e9537a2d80f40197836c633b5a03b429504b5b41a814a9185829a
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Mon, 06 Sep 2021 06:57:25 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /data from myapp-data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-w26v7 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  myapp-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myapp-data-myapp-sts-vol-0
    ReadOnly:   false
  default-token-w26v7:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-w26v7
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason                  Age                    From                     Message
  ----     ------                  ----                   ----                     -------
  Warning  FailedScheduling        2m23s (x2 over 2m23s)  default-scheduler        0/4 nodes are available: 4 pod has unbound immediate PersistentVolumeClaims.
  Normal   Scheduled               2m20s                  default-scheduler        Successfully assigned default/myapp-sts-vol-0 to kube-node1
  Normal   SuccessfulAttachVolume  2m20s                  attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-d0a53c0b-135f-4226-a2e0-89c961bb6a37"
  Normal   Pulling                 2m2s                   kubelet                  Pulling image "devops2341/go-myweb:alpine"
  Normal   Pulled                  116s                   kubelet                  Successfully pulled image "devops2341/go-myweb:alpine" in 6.108087982s
  Normal   Created                 116s                   kubelet                  Created container myapp
  Normal   Started                 115s                   kubelet                  Started container myapp
```
```
vagrant@kube-control1:~/work/20210906$ kubectl describe pods myapp-sts-vol-0
Volumes:
  myapp-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myapp-data-myapp-sts-vol-0
    ReadOnly:   false
  default-token-w26v7:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-w26v7
    Optional:    false
vagrant@kube-control1:~/work/20210906$ kubectl describe pods myapp-sts-vol-1
Volumes:
  myapp-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myapp-data-myapp-sts-vol-1
    ReadOnly:   false
  default-token-w26v7:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-w26v7
    Optional:    false
bash-5.1# host myapp-sts-vol-1.myapp-svc-headless2
myapp-sts-vol-1.myapp-svc-headless2.default.svc.cluster.local has address 192.168.119.177
bash-5.1# host myapp-sts-vol-0.myapp-svc-headless2
myapp-sts-vol-0.myapp-svc-headless2.default.svc.cluster.local has address 192.168.9.79
vagrant@kube-control1:~/work/20210906$ kubectl exec myapp-sts-vol-0 -it -- sh
/ # cd /data/
/data # ls
lost+found           myapp-sts-vol-0.txt  myapp-sts-vol-0_dir
/data # cat myapp-sts-vol-0.txt
myapp-sts-vol-0
/data # exit
vagrant@kube-control1:~/work/20210906$ kubectl exec myapp-sts-vol-1 -it -- sh
/ # cat /data/myapp-sts-vol-1.txt
myapp-sts-vol-1
```
A StatefulSet can guarantee each pod its own identity when the pods are reached directly through the headless Service.
Likewise, through its (dynamically provisioned) PVC, each pod keeps its own independent storage.
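That per-pod storage follows a fixed naming rule: volumeClaimTemplates produce one PVC per replica, named CLAIM_TEMPLATE-STATEFULSET_NAME-ORDINAL, which matches the myapp-data-myapp-sts-vol-0 / -1 claims listed earlier. A sketch of the rule (the helper name is mine):

```shell
# Compose the PVC name a StatefulSet creates for one replica:
# <volumeClaimTemplate name>-<statefulset name>-<pod ordinal>
sts_pvc_name() {
  echo "${1}-${2}-${3}"
}

sts_pvc_name myapp-data myapp-sts-vol 0
# → myapp-data-myapp-sts-vol-0
```

Because the name is stable, a pod that is rescheduled or recreated rebinds to the same PVC and finds its old data, as the /data contents above show.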