Default CPU-Based HPA
Before implementing HPA, make sure the OpenShift cluster has the Metrics capability enabled.
- Create a project and deploy the test application.
$ oc new-project my-hpa
$ oc new-app quay.io/gpte-devops-automation/pod-autoscale-lab:rc0 --name=pod-autoscale
$ oc expose svc pod-autoscale
- Record the Route URL (referred to below as ROUTE-URL).
$ oc get route pod-autoscale --template={{.spec.host}}
- Set the CPU resource requests and limits for the containers managed by the Deployment.
$ oc set resources deployment pod-autoscale --requests=cpu=200m --limits=cpu=500m
- Configure autoscaling for the Deployment, then inspect the automatically created HPA object.
$ oc autoscale deploy/pod-autoscale --min 1 --max 5 --cpu-percent=40
$ oc get hpa pod-autoscale
NAME            REFERENCE                  TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
pod-autoscale   Deployment/pod-autoscale   <unknown>/40%   1         5         0          7s
- Open a shell in one of the Pods.
$ oc rsh $(oc get ep pod-autoscale -o jsonpath='{ .subsets[].addresses[0].targetRef.name }')
- Run the following command inside the pod to generate CPU load. Note: replace <ROUTE-URL> with the Route URL recorded earlier.
# while true;do curl <ROUTE-URL>;done
- Watch the status of the HPA object created above and confirm that "REPLICAS" increases.
$ oc get hpa -w
NAME            REFERENCE                  TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
pod-autoscale   Deployment/pod-autoscale   150%/40%   1         5         1          38m
pod-autoscale   Deployment/pod-autoscale   150%/40%   1         5         4          38m
pod-autoscale   Deployment/pod-autoscale   229%/40%   1         5         4          38m
pod-autoscale   Deployment/pod-autoscale   229%/40%   1         5         5          38m
pod-autoscale   Deployment/pod-autoscale   211%/40%   1         5         5          39m
- In the OpenShift console, confirm that the project now contains multiple pod-autoscale-xxxxx pods.
- Stop the curl loop started earlier and exit the pod shell.
- Wait a while and observe that only one Pod is left running.
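The replica counts in the watch output above follow the standard HPA scaling rule, desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric), clamped between --min and --max. A minimal sketch of that rule, using the numbers from the output:

```python
import math

def desired_replicas(current_replicas, current_utilization, target_utilization,
                     min_replicas=1, max_replicas=5):
    """Sketch of the HPA scaling rule:
    desired = ceil(current * currentMetric / targetMetric), clamped to [min, max]."""
    desired = math.ceil(current_replicas * current_utilization / target_utilization)
    return max(min_replicas, min(max_replicas, desired))

# With the settings above (--cpu-percent=40, --max 5):
print(desired_replicas(1, 150, 40))  # 1 pod at 150% CPU -> scale to 4
print(desired_replicas(4, 229, 40))  # 4 pods at 229% CPU -> capped at max, 5
```

This is why the watch output jumps from 1 replica straight to 4, then to the maximum of 5.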
Custom HPA Based on HTTP Requests
Note: Complete the first two steps of the previous section before starting, and make sure any existing HPA object in the project has been deleted.
$ oc delete hpa pod-autoscale
Example 1: HPA Based on HTTP Requests
- Create a project to run Prometheus.
$ oc new-project my-prometheus
- From OperatorHub, install the Prometheus Operator into the my-prometheus project.
- Using the Prometheus Operator, create a ServiceMonitor instance with the following content:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: pod-autoscale
  namespace: my-prometheus
  labels:
    lab: custom-hpa
spec:
  namespaceSelector:
    matchNames:
    - my-prometheus
    - my-hpa
  selector:
    matchLabels:
      app: pod-autoscale
  endpoints:
  - port: 8080-tcp
    interval: 30s
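The ServiceMonitor above picks scrape targets by namespace and by label. A minimal sketch of that selection logic (hypothetical data structures, not the Operator's actual code):

```python
def matches(service, namespace_names, match_labels):
    """Sketch of ServiceMonitor target selection: a Service is scraped when its
    namespace is listed in namespaceSelector.matchNames and its labels contain
    every key/value pair in selector.matchLabels."""
    return (service["namespace"] in namespace_names and
            all(service["labels"].get(k) == v for k, v in match_labels.items()))

# Hypothetical services, mirroring the manifest above:
svc_a = {"namespace": "my-hpa", "labels": {"app": "pod-autoscale"}}
svc_b = {"namespace": "other",  "labels": {"app": "pod-autoscale"}}
print(matches(svc_a, ["my-prometheus", "my-hpa"], {"app": "pod-autoscale"}))  # True
print(matches(svc_b, ["my-prometheus", "my-hpa"], {"app": "pod-autoscale"}))  # False
```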
- Using the Prometheus Operator, create a Prometheus instance with the following content:
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: my-prometheus
  namespace: my-prometheus
  labels:
    prometheus: my-prometheus
spec:
  replicas: 2
  serviceAccountName: prometheus-k8s
  securityContext: {}
  serviceMonitorSelector:
    matchLabels:
      lab: custom-hpa
- Create a RoleBinding in the my-hpa project so that the my-prometheus project has view access to it. The ServiceAccount named prometheus-k8s comes from the Prometheus resource created in the my-prometheus project in the previous step.
$ echo "---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: my-prometheus-hpa
  namespace: my-hpa
subjects:
- kind: ServiceAccount
  name: prometheus-k8s
  namespace: my-prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view" | oc create -f -
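The key point of this RoleBinding is that a binding grants its ClusterRole's permissions only inside the binding's own namespace. A small sketch of that semantics (hypothetical data model, not the Kubernetes authorizer):

```python
def can_view(subject, namespace, bindings):
    """Sketch of namespaced RBAC: a RoleBinding referencing the 'view'
    ClusterRole grants read access only within the binding's own namespace."""
    return any(b["subject"] == subject and b["namespace"] == namespace and
               b["role"] == "view" for b in bindings)

# The binding above: prometheus-k8s (from my-prometheus) may view my-hpa only.
bindings = [{"subject": ("ServiceAccount", "prometheus-k8s", "my-prometheus"),
             "namespace": "my-hpa", "role": "view"}]
sa = ("ServiceAccount", "prometheus-k8s", "my-prometheus")
print(can_view(sa, "my-hpa", bindings))   # True
print(can_view(sa, "default", bindings))  # False
```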
- Create a Route, then get its URL.
$ oc expose svc prometheus-operated -n my-prometheus
$ oc get route prometheus-operated -o jsonpath='{.spec.host}{"\n"}' -n my-prometheus
- Open the Prometheus URL above in a browser and query the http_requests_total metric. Live monitoring data should appear shortly, which confirms that Prometheus is already collecting the application's runtime metrics.
- Run the following command to configure the permissions the HPA needs in order to retrieve custom metric data.
$ oc create -f https://raw.githubusercontent.com/liuxiaoyu-git/ocp_advanced_deployment_resources/master/ocp4_adv_deploy_lab/custom_hpa/custom_adapter_kube_objects.yaml
- Check the APIService:
$ oc get apiservice v1beta1.custom.metrics.k8s.io
NAME                            SERVICE                              AVAILABLE   AGE
v1beta1.custom.metrics.k8s.io   my-prometheus/my-metrics-apiserver   True        45m
$ oc get --raw /apis/custom.metrics.k8s.io/v1beta1/ | jq -r '.resources[] | select(.name | contains("pods/http"))'
{
  "name": "pods/http_requests",
  "singularName": "",
  "namespaced": true,
  "kind": "MetricValueList",
  "verbs": [
    "get"
  ]
}
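Custom metric targets such as the 500m used in the next step are written in Kubernetes quantity notation, where a trailing "m" means milli-units. A minimal parser covering only the suffix used in this lab:

```python
def parse_quantity(q):
    """Minimal parser for the Kubernetes quantity notation used in this lab:
    a trailing 'm' means milli-units (1/1000 of a unit)."""
    if q.endswith("m"):
        return int(q[:-1]) / 1000
    return float(q)

print(parse_quantity("500m"))    # 0.5  requests/sec per pod
print(parse_quantity("50000m"))  # 50.0 requests/sec per pod
```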
- Create an HPA object driven by the http_requests metric; it scales out the Pods when the per-pod average exceeds 500m (0.5 requests per second).
$ echo "---
kind: HorizontalPodAutoscaler
apiVersion: autoscaling/v2beta1
metadata:
  name: pod-autoscale-custom
  namespace: my-hpa
spec:
  scaleTargetRef:
    kind: Deployment
    name: pod-autoscale
    apiVersion: apps/v1
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Pods
    pods:
      metricName: http_requests
      targetAverageValue: 500m" | oc create -f -
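For a Pods-type metric like this, the HPA compares the observed per-pod average against targetAverageValue and applies the same ceiling rule as the CPU case. A sketch with hypothetical per-pod request rates:

```python
import math

def desired_from_pods_metric(per_pod_values, target_average, min_r=1, max_r=5):
    """Sketch of HPA v2 Pods-metric scaling: the ratio of the observed per-pod
    average to targetAverageValue, applied to the current replica count."""
    current = len(per_pod_values)
    avg = sum(per_pod_values) / current
    desired = math.ceil(current * avg / target_average)
    return max(min_r, min(max_r, desired))

# Hypothetical: 2 pods each seeing ~1.2 req/s against the 0.5 req/s (500m) target:
print(desired_from_pods_metric([1.2, 1.2], 0.5))  # ceil(2 * 1.2 / 0.5) = 5
```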
- Put load on the application in the my-hpa project.
$ AUTOSCALE_ROUTE=$(oc get route pod-autoscale -n my-hpa -o jsonpath='{ .spec.host}')
$ while true;do curl http://$AUTOSCALE_ROUTE;sleep .5;done
- In the OpenShift console, watch the pod-autoscale workload in the my-hpa project and confirm that the number of pods increases.
- The monitoring data for the my-hpa application can be seen on the Prometheus page.
- Stop the load requests and, after a while, confirm that the number of application pods in my-hpa drops back to 1.
Example 2: HPA Based on HTTP Requests
- Create another project and deploy the instrumented_app application.
$ oc new-project my-new-hpa
$ oc new-app quay.io/gpte-devops-automation/instrumented_app:rc0 --name=instrumentedapp -n my-new-hpa
$ oc expose svc instrumentedapp -n my-new-hpa
- Using the Prometheus Operator, create a new ServiceMonitor with the following content:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-servicemonitor
  namespace: my-prometheus
  labels:
    lab: custom-hpa
spec:
  endpoints:
  - interval: 30s
    port: 8080-tcp
  namespaceSelector:
    matchNames:
    - my-new-hpa
  selector:
    matchLabels:
      app: instrumentedapp
- Create a RoleBinding so that Prometheus can view resources in the my-new-hpa project.
$ echo "---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: my-new-hpa
  namespace: my-new-hpa
subjects:
- kind: ServiceAccount
  name: prometheus-k8s
  namespace: my-prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view" | oc create -f -
- Check the available monitoring metrics:
$ oc get --raw /apis/custom.metrics.k8s.io/v1beta1/ | jq -r '.resources[] | select(.name | contains("pods/http"))'
{
  "name": "pods/http_requests",
  "singularName": "",
  "namespaced": true,
  "kind": "MetricValueList",
  "verbs": [
    "get"
  ]
}
- Create an HPA for the instrumentedapp application, using http_requests as the metric that drives scaling out and in.
$ echo "---
kind: HorizontalPodAutoscaler
apiVersion: autoscaling/v2beta1
metadata:
  name: pod-autoscale-custom
  namespace: my-new-hpa
spec:
  scaleTargetRef:
    kind: Deployment
    name: instrumentedapp
    apiVersion: apps/v1
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Pods
    pods:
      metricName: http_requests
      targetAverageValue: 50000m" | oc create -f -
- The instrumentedapp application has a built-in load generator, so simply watch its pod count in the console and confirm that it gradually increases to 5.