Learning Kubernetes: HPA Introduction and Walkthrough

HorizontalPodAutoscaler Walkthrough

How it works

A HorizontalPodAutoscaler (HPA) automatically updates a workload resource (such as a Deployment or StatefulSet), with the aim of automatically scaling the workload to match demand.

Horizontal scaling means that the response to increased load is to deploy more Pods. This is different from "vertical" scaling, which for Kubernetes means assigning more resources (for example, memory or CPU) to the Pods that are already running for the workload.

If the load decreases, and the number of Pods is above the configured minimum, the HorizontalPodAutoscaler instructs the workload resource (the Deployment, StatefulSet, or other similar resource) to scale back down.

HPA scales horizontally: it automatically adds or removes Pod replicas so that the workload has enough capacity to serve its current load.

VPA scales vertically: it automatically raises or lowers the CPU and memory requests of each Pod so that the resources assigned to a Pod match what it actually uses.
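The difference can be illustrated with plain imperative kubectl commands against the php-apache Deployment used later in this walkthrough (a sketch only; the replica count and resource values are arbitrary, and HPA/VPA automate exactly these kinds of changes):

# Horizontal scaling: change how many replicas run (what HPA automates)
kubectl scale deployment php-apache --replicas=5

# Vertical scaling: change how much CPU/memory each Pod requests (what VPA automates)
kubectl set resources deployment php-apache --requests=cpu=200m,memory=256Mi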

Environment

One master: 192.168.220.104

Three worker nodes:

node1:192.168.220.105

node2:192.168.220.106

node3:192.168.220.107

Procedure

1. Install metrics-server (the resource metrics pipeline)

(1) Use the aliyun-components.yaml manifest (metrics-server with its image pulled from the Aliyun mirror registry)
10:02:01[root@master ~]# cd yaml/metrics/
10:02:13[root@master ~/yaml/metrics]# ls
aliyun-components.yaml  components.yaml  zjx-components.yaml
10:02:15[root@master ~/yaml/metrics]# cat aliyun-components.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - nodes/metrics
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --kubelet-insecure-tls
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalDNS,InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        image: registry.aliyuncs.com/google_containers/metrics-server:v0.6.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          initialDelaySeconds: 20
          periodSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100
(2) Apply the manifest
10:02:15[root@master ~/yaml/metrics]# kubectl apply -f aliyun-components.yaml 
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
(3) Check the result
10:18:37[root@master ~/yaml/metrics]# kubectl get pod -n kube-system
NAME                             READY   STATUS    RESTARTS       AGE
coredns-6d8c4cb4d-djzfx          1/1     Running   6 (9h ago)     5d15h
coredns-6d8c4cb4d-tzzlb          1/1     Running   6 (9h ago)     5d15h
etcd-master                      1/1     Running   7 (9h ago)     5d15h
kube-apiserver-master            1/1     Running   7 (9h ago)     5d15h
kube-controller-manager-master   1/1     Running   7 (9h ago)     5d15h
kube-flannel-ds-4cm4l            1/1     Running   6 (9h ago)     5d15h
kube-flannel-ds-874tn            1/1     Running   6 (9h ago)     5d15h
kube-flannel-ds-jsp6s            1/1     Running   6 (113m ago)   5d15h
kube-flannel-ds-pmrxz            1/1     Running   6 (9h ago)     5d15h
kube-proxy-b4x2f                 1/1     Running   6 (9h ago)     5d15h
kube-proxy-fs4l2                 1/1     Running   6 (9h ago)     5d15h
kube-proxy-llj96                 1/1     Running   6 (113m ago)   5d15h
kube-proxy-rrmlr                 1/1     Running   6 (9h ago)     5d15h
kube-scheduler-master            1/1     Running   7 (9h ago)     5d15h
metrics-server-b9f7b695f-ct7tx   1/1     Running   2 (9h ago)     42h
10:19:02[root@master ~/yaml/metrics]# kubectl top pod
NAME                          CPU(cores)   MEMORY(bytes)   
nginx                         1m           10Mi            
php-apache-65685cdf7b-hkc28   1m           7Mi             
10:19:11[root@master ~/yaml/metrics]# kubectl top node
NAME     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
master   90m          4%     864Mi           45%       
node-1   23m          2%     436Mi           23%       
node-2   27m          2%     420Mi           24%       
node-3   18m          1%     365Mi           21% 
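
Besides kubectl top, you can also confirm that the Metrics API itself has been registered and is being served through the apiserver aggregation layer (a sketch; the JSON output is omitted here):

# The APIService created by the manifest above should report Available=True
kubectl get apiservice v1beta1.metrics.k8s.io

# Query the Metrics API directly
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"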

 

2. Build the php-apache image and distribute it to every node

(1) Build the image on the master

10:27:51[root@master ~/yaml/hpa/dockerfile]# ls
Dockerfile  index.php
10:27:54[root@master ~/yaml/hpa/dockerfile]# cat Dockerfile 
FROM php:5-apache
COPY index.php /var/www/html/index.php
RUN chmod a+rx index.php
10:28:01[root@master ~/yaml/hpa/dockerfile]# cat index.php 
<?php
 $x = 0.0001;
 for($i = 0; $i <= 1000000; $i++)
 {
	 $x += sqrt($x);
 }
 echo "OK!";
?>

23:20:48[root@master ~/yaml/hpa/dockerfile]# docker build -t zjx-hpa:1.0 .
23:20:48[root@master ~/yaml/hpa/dockerfile]# docker images
REPOSITORY                                                        TAG        IMAGE ID       CREATED         SIZE
zjx-hpa                                                           1.0        a76ae4ee00ba   11 hours ago    355MB
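
Before shipping the image to the nodes, it can be smoke-tested locally (a sketch; the host port 8080 and the container name are arbitrary):

# Run the image, request index.php once (it should print OK!), then remove the container
docker run -d --name hpa-smoke-test -p 8080:80 zjx-hpa:1.0
curl http://localhost:8080/index.php
docker rm -f hpa-smoke-test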

(2) Save the image to a tar archive on the master

23:31:19[root@master ~/yaml/hpa/dockerfile]# docker save > zjx-hpa.tar zjx-hpa:1.0
23:31:39[root@master ~/yaml/hpa/dockerfile]# ll
total 354948
-rw-r--r-- 1 root root        82 Mar 18 23:20 Dockerfile
-rw-r--r-- 1 root root        94 Mar 18 21:51 index.php
-rw-r--r-- 1 root root 363456000 Mar 18 23:31 zjx-hpa.tar

(3) Copy the image archive to node1, node2 and node3, load it, and verify

# If you have not connected to the target host before, you will be prompted for its password
23:31:46[root@master ~/yaml/hpa/dockerfile]# scp zjx-hpa.tar 192.168.220.105:/root/hpa
zjx-hpa.tar
23:31:46[root@master ~/yaml/hpa/dockerfile]# scp zjx-hpa.tar 192.168.220.106:/root/hpa
zjx-hpa.tar
23:31:46[root@master ~/yaml/hpa/dockerfile]# scp zjx-hpa.tar 192.168.220.107:/root/hpa
zjx-hpa.tar
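
After copying, the archive still has to be loaded into Docker on each node before the kubelet can start containers from it; a sketch, run on every node, assuming the /root/hpa path used in the scp commands above:

# On node1, node2 and node3
docker load -i /root/hpa/zjx-hpa.tar

The image should then be visible on every node: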

[root@node-1 ~]# docker images
REPOSITORY                                               TAG        IMAGE ID       CREATED         SIZE
zjx-hpa                                                  1.0        a76ae4ee00ba   11 hours ago    355MB

[root@node-2 ~]# docker images
REPOSITORY                                           TAG        IMAGE ID       CREATED         SIZE
zjx-hpa                                              1.0        a76ae4ee00ba   11 hours ago    355MB

[root@node-3 ~]# docker images
REPOSITORY                                           TAG        IMAGE ID       CREATED         SIZE
zjx-hpa                                              1.0        a76ae4ee00ba   19 hours ago    355MB

3. Run the php-apache server and expose it as a Service

First create a Deployment that runs a container from the zjx-hpa:1.0 image, then expose it as a Service using the following manifest:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: php-apache
spec:
  selector:
    matchLabels:
      run: php-apache
  template:
    metadata:
      labels:
        run: php-apache
    spec:
      containers:
      - name: php-apache
        image: zjx-hpa:1.0
        ports:
        - containerPort: 80
        resources:
          limits:
            cpu: 200m
          requests:
            cpu: 100m
---
apiVersion: v1
kind: Service
metadata:
  name: php-apache
  labels:
    run: php-apache
spec:
  ports:
  - port: 80
  selector:
    run: php-apache
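
Apply the manifest (a sketch; php-apache.yaml is simply the file name the manifest above is assumed to be saved as):

kubectl apply -f php-apache.yaml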

Check the Deployment, Pods, and Service

11:17:25[root@master ~/yaml/hpa]# kubectl get deploy
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
php-apache   1/1     1            1           11h

11:14:16[root@master ~/yaml/hpa]# kubectl get pod
NAME                          READY   STATUS    RESTARTS       AGE
php-apache-65685cdf7b-hkc28   1/1     Running   1 (10h ago)    11h

11:17:57[root@master ~/yaml/hpa]# kubectl get svc
NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.1.0.1      <none>        443/TCP   5d16h
php-apache   ClusterIP   10.1.51.167   <none>        80/TCP    11h
# A ClusterIP Service is only reachable from inside the cluster; a NodePort Service also allows external access. ClusterIP is the default type.
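
Because the Service is ClusterIP only, a quick one-off check has to be made from inside the cluster (a sketch; the Pod name curl-test is arbitrary, and the response should be the OK! printed by index.php):

kubectl run -i --rm curl-test --image=busybox:1.28 --restart=Never -- wget -q -O- http://php-apache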

4. Create the HorizontalPodAutoscaler

23:42:49[root@master ~/yaml/hpa]# kubectl autoscale deployment php-apache --cpu-percent=20 --min=1 --max=10
horizontalpodautoscaler.autoscaling/php-apache autoscaled

# Check the current state of the newly created HorizontalPodAutoscaler with the following command
23:44:59[root@master ~/yaml/hpa]# kubectl get hpa
NAME         REFERENCE               TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   1%/20%    1         10        1          76s
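
kubectl autoscale is the imperative shortcut; the equivalent declarative object looks roughly like the following (a sketch assuming the autoscaling/v2 API; it could be applied with kubectl apply -f instead of running kubectl autoscale):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 20   # same threshold as --cpu-percent=20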

5. Generate load

# Run this in a separate terminal,
# so that load generation keeps running while you carry out the remaining steps
kubectl run -i --tty load-generator --rm --image=busybox:1.28 --restart=Never -- /bin/sh -c "while sleep 0.01; do wget -q -O- http://php-apache; done"

# After the experiment has run for a while, stop this Pod to remove the load
# (with --rm it is also deleted automatically when you exit the load-generator shell)
kubectl delete pod load-generator

Results

# Press Ctrl+C to stop watching when you are ready
23:47:05[root@master ~/yaml/hpa]# kubectl get hpa php-apache --watch
NAME         REFERENCE               TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   1%/20%     1         10        1          2m14s
php-apache   Deployment/php-apache   128%/20%   1         10        1          5m30s
php-apache   Deployment/php-apache   193%/20%   1         10        4          5m45s
php-apache   Deployment/php-apache   48%/20%    1         10        8          6m
php-apache   Deployment/php-apache   27%/20%    1         10        10         6m15s
php-apache   Deployment/php-apache   31%/20%    1         10        10         6m30s
php-apache   Deployment/php-apache   26%/20%    1         10        10         6m45s
php-apache   Deployment/php-apache   27%/20%    1         10        10         7m
php-apache   Deployment/php-apache   28%/20%    1         10        10         7m30s
php-apache   Deployment/php-apache   23%/20%    1         10        10         7m45s
php-apache   Deployment/php-apache   32%/20%    1         10        10         8m
php-apache   Deployment/php-apache   25%/20%    1         10        10         8m16s
php-apache   Deployment/php-apache   26%/20%    1         10        10         8m31s
php-apache   Deployment/php-apache   24%/20%    1         10        10         9m1s
php-apache   Deployment/php-apache   26%/20%    1         10        10         9m16s
php-apache   Deployment/php-apache   25%/20%    1         10        10         9m31s
php-apache   Deployment/php-apache   29%/20%    1         10        10         9m46s
php-apache   Deployment/php-apache   27%/20%    1         10        10         10m
php-apache   Deployment/php-apache   8%/20%     1         10        10         11m
php-apache   Deployment/php-apache   1%/20%     1         10        10         12m
php-apache   Deployment/php-apache   1%/20%     1         10        10         13m
php-apache   Deployment/php-apache   1%/20%     1         10        4          14m
php-apache   Deployment/php-apache   1%/20%     1         10        1          15m

# The number of replicas grows when traffic and CPU utilization rise, and shrinks again once traffic drops and CPU utilization falls.

From this we can conclude that HPA horizontal scaling automatically and effectively adjusts the workload's capacity so that it keeps up with the demand on the service.
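
Once the experiment is over, the resources it created can be removed (a sketch; php-apache.yaml is the manifest file assumed earlier):

kubectl delete hpa php-apache
kubectl delete -f php-apache.yaml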

Official walkthrough: https://kubernetes.io/zh-cn/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/
