In this example we will build an nginx workload in K8S served under the domain name test.tomy168.com. Whenever the pods' CPU usage exceeds 3%, new pods are created automatically to absorb a sudden surge of client traffic.
The design philosophy of K8S revolves around building and composing microservices. The scenario above is a very common one, yet in K8S it takes quite a few components working together. When a user accesses a service hosted on the K8S cluster from outside, the ingress component first matches the requested domain name; once the domain checks out, it mounts the secret component to provide HTTPS, then reverse-proxies the request to the corresponding service component. You can loosely think of ingress as the nginx upstream feature we have long relied on.
Once past the ingress, all network traffic between the Worker Nodes and the pods is routed by the service component. In the service named service-clusterip, the selector labels determine which pods the requests are ultimately forwarded to.
So when does the pod serving the nginx pages need to scale automatically? First, the pod definition sets the container's resource request to cpu: "100m" (i.e. 0.1 of a CPU core), and the pod is deployed through a deployment component. On top of that deployment sits an hpa component that monitors resource usage: whenever the pods created by the deployment rise above or fall below the thresholds configured in the hpa, the hpa progressively rewrites the replicas parameter defined in the deployment, staying within its configured minimum and maximum replica counts (never going beyond maxReplicas). The deployment in turn continuously reconciles against its own replicas parameter, creating or terminating pods as needed. That is the whole logic behind the hpa's horizontal scale-out and scale-in.
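For reference, the core calculation the hpa controller runs (per the official Kubernetes documentation) is:

desiredReplicas = ceil( currentReplicas × currentMetricValue / desiredMetricValue )

With our settings, a single replica measured at 5% CPU against the 3% target yields ceil(1 × 5 / 3) = 2 replicas, which is exactly the scale-out we will observe in the watch output later in this post.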
All machines in the lab environment run CentOS 7.9. The rancher server was set up standalone as a docker container, and the K8S-cluster (RKE) was created through the rancher server's custom cluster option.
A non-RKE K8S environment must have metrics-server installed first (RKE deploys it by default); without it the hpa has no resource metrics to act on.
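A quick sanity check that metrics are flowing (both commands should return data rather than errors):

[root@rke-master ~]# kubectl get deployment metrics-server -n kube-system
[root@rke-master ~]# kubectl top nodes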
Hands-on: defining each component
Create a namespace dedicated to this test
[root@rke-master ~]# kubectl create namespace dev
Create the deployment component, which spawns an nginx pod per our spec
[root@rke-master ~]# nano /root/k8s-yaml/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pc-deployment
  namespace: dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.1
        resources:
          requests:
            cpu: "100m"
        ports:
        - containerPort: 80
[root@rke-master ~]# kubectl apply -f /root/k8s-yaml/deployment.yaml
[root@rke-master ~]# kubectl get pod -n dev
NAME                             READY   STATUS    RESTARTS   AGE
pc-deployment-5f5b97d69c-x8njm   1/1     Running   0          2h
[root@rke-master k8s-yaml]# kubectl get deployment -n dev
NAME            READY   UP-TO-DATE   AVAILABLE   AGE
pc-deployment   1/1     1            1           2h
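Before wiring up the hpa, it is worth confirming that pod-level metrics are actually being collected; if the following command errors out, revisit metrics-server first:

[root@rke-master ~]# kubectl top pod -n dev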
Create the hpa component above it. To exercise the autoscaling quickly, the CPU threshold is set to a deliberately low 3%.
[root@rke-master ~]# nano /root/k8s-yaml/pc-hpa.yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: pc-hpa
  namespace: dev
spec:
  minReplicas: 1                        # minimum number of pods
  maxReplicas: 4                        # maximum number of pods
  targetCPUUtilizationPercentage: 3     # target CPU utilization
  scaleTargetRef:                       # the deployment this hpa controls
    apiVersion: apps/v1
    kind: Deployment
    name: pc-deployment
[root@rke-master ~]# kubectl apply -f /root/k8s-yaml/pc-hpa.yaml
[root@rke-master ~]# kubectl get hpa -n dev
NAME     REFERENCE                  TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
pc-hpa   Deployment/pc-deployment   0%/3%     1         4         1          2m
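If TARGETS shows <unknown>/3% instead of a percentage, the hpa is not receiving metrics; the Events section of the following command usually says why:

[root@rke-master ~]# kubectl describe hpa pc-hpa -n dev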
Create the service component so the pods can be reached. We skip NodePort because we will shortly build the ingress component, which forwards external requests into the K8S Cluster for us.
[root@rke-master ~]# nano /root/k8s-yaml/service-clusterip.yaml
apiVersion: v1
kind: Service
metadata:
  name: service-clusterip
  namespace: dev
spec:
  selector:
    app: nginx-pod
  clusterIP:
  type: ClusterIP
  ports:
  - port: 80         # port exposed on the service's clusterIP
    targetPort: 80   # port on the backing pods
[root@rke-master ~]# kubectl apply -f /root/k8s-yaml/service-clusterip.yaml
[root@rke-master ~]# kubectl get service -n dev
NAME                TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service-clusterip   ClusterIP   10.43.30.240   <none>        80/TCP    2h
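A ClusterIP service is only reachable from inside the cluster, so a quick smoke test can be run from a throwaway pod (assuming the busybox image can be pulled):

[root@rke-master ~]# kubectl run curl-test -n dev --rm -it --image=busybox -- wget -qO- http://service-clusterip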
Create the secret component the ingress will need shortly
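The command below assumes the key and certificate files for test.tomy168.com already exist; for a pure lab setup, a self-signed pair could be generated first with something like:

[root@rke-master ~]# openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout /root/k8s-yaml/secret/test.tomy168.com.key \
    -out /root/k8s-yaml/secret/private.crt \
    -subj "/CN=test.tomy168.com"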
[root@rke-master ~]# kubectl create secret tls ingress-secret \
    --key /root/k8s-yaml/secret/test.tomy168.com.key \
    --cert /root/k8s-yaml/secret/private.crt \
    -n dev
Create the ingress component; its nginx-upstream-style behavior lets external https requests reach the inside of the cluster
[root@rke-master ~]# nano /root/k8s-yaml/ingress-https.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-https
  namespace: dev
spec:
  tls:
  - hosts:
    - test.tomy168.com
    secretName: ingress-secret            # the secret holding the certificate and private key
  rules:
  - host: test.tomy168.com
    http:
      paths:
      - path: /
        backend:
          serviceName: service-clusterip  # the backend service to reverse-proxy to
          servicePort: 80
[root@rke-master ~]# kubectl apply -f /root/k8s-yaml/ingress-https.yaml
Check the ingress and secret components we just created
[root@rke-master ~]# kubectl get secret -n dev
NAME                  TYPE                                  DATA   AGE
default-token-85whq   kubernetes.io/service-account-token   3      2h
ingress-secret        kubernetes.io/tls                     2      1h54m
[root@rke-master ~]# kubectl get ingress -n dev
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
NAME            CLASS    HOSTS              ADDRESS                       PORTS     AGE
ingress-https   <none>   test.tomy168.com   192.168.0.152,192.168.0.153   80, 443   1h52m
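As the warning notes, extensions/v1beta1 Ingress is removed in v1.22+. On a newer cluster the equivalent manifest would look roughly like this (pathType and the restructured backend are mandatory in the v1 API; an ingressClassName may also be required depending on the controller setup):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-https
  namespace: dev
spec:
  tls:
  - hosts:
    - test.tomy168.com
    secretName: ingress-secret
  rules:
  - host: test.tomy168.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service-clusterip
            port:
              number: 80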
Generate traffic to watch the pods scale out and back in
From a Windows client, download postman and send ten thousand requests to test.tomy168.com to generate load (in a lab without public DNS, the client must resolve test.tomy168.com to the ingress nodes, e.g. 192.168.0.152, via its hosts file).
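If no Windows client is at hand, comparable load could be generated from inside the cluster with a simple busybox loop (the pattern used in the official HPA walkthrough):

[root@rke-master ~]# kubectl run load-generator -n dev --rm -it --image=busybox -- \
    /bin/sh -c "while true; do wget -q -O- http://service-clusterip; done"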
Watch the state of the hpa, the deployment and the pods
[root@rke-master ~]# kubectl get deploy -n dev -w
NAME            READY   UP-TO-DATE   AVAILABLE   AGE
pc-deployment   1/1     1            1           4m46s
pc-deployment   1/2     1            1           7m17s
pc-deployment   1/2     1            1           7m17s
pc-deployment   1/2     1            1           7m17s
pc-deployment   1/2     2            1           7m17s
pc-deployment   2/2     2            2           7m20s
pc-deployment   2/1     2            2           25m
pc-deployment   2/1     2            2           25m
pc-deployment   1/1     1            1           25m

[root@rke-master ~]# kubectl get pod -n dev -w
NAME                             READY   STATUS              RESTARTS   AGE
pc-deployment-5f5b97d69c-x8njm   1/1     Running             0          5m3s
pc-deployment-5f5b97d69c-jn6wt   0/1     Pending             0          0s
pc-deployment-5f5b97d69c-jn6wt   0/1     Pending             0          0s
pc-deployment-5f5b97d69c-jn6wt   0/1     ContainerCreating   0          0s
pc-deployment-5f5b97d69c-jn6wt   0/1     ContainerCreating   0          2s
pc-deployment-5f5b97d69c-jn6wt   1/1     Running             0          3s
pc-deployment-5f5b97d69c-jn6wt   1/1     Terminating         0          17m
pc-deployment-5f5b97d69c-jn6wt   1/1     Terminating         0          17m
pc-deployment-5f5b97d69c-jn6wt   0/1     Terminating         0          17m
pc-deployment-5f5b97d69c-jn6wt   0/1     Terminating         0          17m
pc-deployment-5f5b97d69c-jn6wt   0/1     Terminating         0          17m

[root@rke-master ~]# kubectl get hpa pc-hpa -n dev -w
NAME     REFERENCE                  TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
pc-hpa   Deployment/pc-deployment   0%/3%     1         4         1          2m50s
pc-hpa   Deployment/pc-deployment   5%/3%     1         4         1          3m51s
pc-hpa   Deployment/pc-deployment   5%/3%     1         4         2          4m6s
pc-hpa   Deployment/pc-deployment   2%/3%     1         4         2          4m53s
pc-hpa   Deployment/pc-deployment   3%/3%     1         4         2          5m38s
pc-hpa   Deployment/pc-deployment   2%/3%     1         4         2          6m40s
pc-hpa   Deployment/pc-deployment   0%/3%     1         4         2          16m
pc-hpa   Deployment/pc-deployment   0%/3%     1         4         2          21m
pc-hpa   Deployment/pc-deployment   0%/3%     1         4         1          21m
As the watch output shows, once the load pushed the pods' CPU usage past 3%, the hpa kept rewriting the replicas parameter until the pods were sufficient to bring usage back under the configured targetCPUUtilizationPercentage. Conversely, when the overall load across the replicas dropped back down (after the ten thousand requests completed), the hpa waited a while and then terminated the now-redundant pods. Note the gap in the hpa output between usage hitting 0% (at 16m) and the scale-in to 1 replica (at 21m): by default the controller requires roughly five minutes of sustained low usage before scaling down, precisely to avoid flapping. This completes the Auto Scaling picture.
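On clusters that have the autoscaling/v2 API, that scale-in delay is tunable per hpa. A minimal sketch of the same hpa rewritten for v2, with an explicit stabilization window (the behavior block is the only functional addition):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: pc-hpa
  namespace: dev
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: pc-deployment
  minReplicas: 1
  maxReplicas: 4
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 3
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 60   # wait 60s of low usage before scaling in (default is 300)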