Kubernetes Bible, Section 5

Helm charts

  • Ubuntu has apt
  • Windows has Chocolatey
  • JavaScript has npm
  • Kubernetes has Helm

Concepts

chart: the package, i.e. the actual piece of software that gets installed

        Most of its settings are already parameterized.

repository

        Artifact Hub is the recommended place to search for charts.

       GitHub - helm/charts: ⚠️(OBSOLETE) Curated applications for Kubernetes

release: a named, installed instance of a chart in the cluster; every install (and every upgrade) produces a new release revision

Why use Helm

  • Deploy a popular application in seconds.
  • Get dependency management.
  • Share your own applications.
  • Make sure applications receive proper upgrades (see the release lifecycle sketch below).
  • Configure your software conveniently without digging deep into the YAML.
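A minimal sketch of the release lifecycle behind these points; the release name my-release and the chart bitnami/nginx are placeholders, not from the original notes (the bitnami repo itself is added in the WordPress walkthrough below):

helm install my-release bitnami/nginx                        # create a release from a chart
helm list                                                    # list releases and their revisions
helm upgrade my-release bitnami/nginx --set replicaCount=3   # upgrade with an overridden value
helm rollback my-release 1                                   # roll back to revision 1
helm uninstall my-release                                    # remove the release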

Install on Ubuntu

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh

Installing WordPress with Helm

# add repos (the install below uses the bitnami repo)
helm repo add stable https://charts.helm.sh/stable
helm repo add bitnami https://charts.bitnami.com/bitnami
#helm search repo stable

helm search hub

helm search hub wordpress

helm install wordpress-test-release bitnami/wordpress

 kubectl get svc --namespace default -w wordpress-test-release
# if you are using minikube:
minikube service list 
| default     | my-release-wordpress           | http/80      | http://192.168.49.2:32202 |
|             |                                | https/443    | http://192.168.49.2:30607 |
| default     | wordpress-test-release         | http/80      | http://192.168.49.2:32476 |
|             |                                | https/443    | http://192.168.49.2:30749 |
# open the corresponding URL for wordpress-test-release
http://192.168.49.2:32476

Chart anatomy

charts/bitnami/wordpress at master · bitnami/charts · GitHub

Chart.yaml: metadata about the chart

values.yaml: the default configuration values

values.schema.json: an optional JSON Schema used to validate the structure of values.yaml

charts/: an optional directory of bundled dependent charts

crds/: optional Custom Resource Definitions

templates/: the most important directory; the YAML templates here are combined with the values and rendered into the manifests applied to the cluster
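A minimal sketch of how these files fit together, using a hypothetical chart named mychart (not part of the notes); the template pulls values in via .Values and the release name via .Release.Name:

# mychart/Chart.yaml
apiVersion: v2
name: mychart
version: 0.1.0

# mychart/values.yaml
replicaCount: 2
image: nginx:1.17

# mychart/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-nginx
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}-nginx
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-nginx
    spec:
      containers:
        - name: nginx
          image: {{ .Values.image }}

# render the manifests locally without installing anything
helm template test-release ./mychart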

Kubernetes Dashboard

helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/

helm install kubernetes-dashboard-test kubernetes-dashboard/kubernetes-dashboard

# check that the dashboard pod is ready
kubectl get pods -n default -l "app.kubernetes.io/name=kubernetes-dashboard,app.kubernetes.io/instance=kubernetes-dashboard-test"

export POD_NAME=$(kubectl get pods -n default -l "app.kubernetes.io/name=kubernetes-dashboard,app.kubernetes.io/instance=kubernetes-dashboard-test" -o jsonpath="{.items[0].metadata.name}")

kubectl -n default port-forward $POD_NAME 8443:8443




#create admin-user
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: default
EOF

cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: admin-user
    namespace: default
EOF

 kubectl -n default get secret $(kubectl -n default get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"
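Note: on Kubernetes 1.24 and newer, a long-lived token Secret is no longer created automatically for a ServiceAccount, so the command above may return nothing; in that case request a short-lived token instead:

kubectl -n default create token admin-user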

Log in at https://localhost:8443/#/login using the token.

Prometheus with Grafana

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts

helm install prometheus-stack-test prometheus-community/kube-prometheus-stack

# once the installation has finished:
kubectl port-forward prometheus-prometheus-stack-test-kube-prometheus-0 9090

kubectl port-forward prometheus-stack-test-grafana-97b7cd8c4-fx56l 3000

# Grafana address and default credentials
127.0.0.1:3000/
admin
prom-operator

Authentication and Authorization

By default, requests that are not authenticated are treated as the system:anonymous user, which belongs to the system:unauthenticated group.

User types: normal users have fixed credentials (e.g. username/password) managed outside the cluster; use them as little as possible.

ServiceAccounts are managed by Kubernetes itself. Their tokens are usually stored as Secrets and mounted into Pods so that containers can use them to talk to the API server. Their identity is system:serviceaccount:<namespace>:<serviceAccountName>.

Access for these accounts is governed by RBAC.

Static token files

Typically a .csv file with lines of the form:

token,user,uid,"group1,group2" 

The file is passed to the API server via the --token-auth-file flag; clients then authenticate with the header Authorization: Bearer <token>.

To use it with kubectl, the kubeconfig needs to be updated:

kubectl config set-credentials <contextUser> --token=<token>
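A quick sketch of calling the API server directly with such a static token; the token value and API server address are placeholders:

TOKEN=<token-from-the-csv-file>
# -k skips TLS verification, for a quick test only
curl -k -H "Authorization: Bearer $TOKEN" https://<api-server-address>:6443/api/v1/namespaces/default/pods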

Advantages:

  • Easy to understand
  • Easy to configure

Disadvantages:

  • Not secure.
  • Must be managed manually.
  • Adding or removing users, or rotating tokens, requires restarting the API server.

ServiceAccount tokens

ServiceAccount tokens are JSON Web Tokens (JWTs). A ServiceAccount is usually assigned when defining a Pod via .spec.serviceAccountName; the token is injected into the container and presented to the API server as an HTTP Bearer token.

Step by step:

# create a ServiceAccount named example-account in the default namespace
vim example-account-serviceaccount.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: example-account
  namespace: default

kubectl apply -f example-account-serviceaccount.yaml

# create a Role object with concrete permissions: only get/watch/list on pods
vim pod-reader-role.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "watch", "list"]

kubectl apply -f pod-reader-role.yaml

# bind the Role to the ServiceAccount
vim read-pods-rolebinding.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
  - kind: ServiceAccount
    name: example-account
    namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io

kubectl apply -f read-pods-rolebinding.yaml

# the token can be decoded at jwt.io
 kubectl -n default get secret $(kubectl -n default get sa/example-account -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"

# add a credential to kubectl
kubectl config set-credentials example-account --token=<jwtToken>

# create a context (on minikube the cluster name is minikube)
kubectl config set-context example-account-context --user=example-account --cluster=<clusterName>

# show the current context (minikube by default) and switch to the new one
kubectl config current-context
kubectl config use-context example-account-context

# try it out
kubectl get pods                  # allowed: the pod-reader Role covers pods in the default namespace
kubectl get pods -n kube-system   # forbidden: the Role is scoped to the default namespace

kubectl get svc                   # forbidden: the Role only covers pods

Advantages

  • Easy to configure and use
  • Managed by the Kubernetes cluster itself; no external identity provider is needed
  • ServiceAccounts are namespace-scoped

Disadvantages:

X.509 client certificates

  • How it works: the API server is started with the --client-ca-file flag, which provides the certificate authority (CA) used to validate client certificates.
  • The certificate subject's common name (the CN attribute of the subject) is used as the username after successful authentication. Since version 1.19 the certificates API can be used to manage signing requests.
  • Users must present the certificate whenever they access the API server; a setup sketch follows below.
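A minimal sketch of creating such a user with openssl and wiring it into kubectl. The user name jane, the group developers, and the CA file paths are assumptions; on a managed cluster you would normally go through the CertificateSigningRequest API instead of signing with the CA key directly:

# generate a key and a CSR; the CN becomes the username, O becomes the group
openssl genrsa -out jane.key 2048
openssl req -new -key jane.key -out jane.csr -subj "/CN=jane/O=developers"

# sign the CSR with the cluster CA (paths are cluster-specific assumptions)
openssl x509 -req -in jane.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out jane.crt -days 365

# register the credentials and a context in kubectl
kubectl config set-credentials jane --client-certificate=jane.crt --client-key=jane.key --embed-certs=true
kubectl config set-context jane-context --user=jane --cluster=<clusterName>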

Advantages

  • More secure than ServiceAccounts or static tokens
  • Certificates are not stored in the cluster, so compromising the cluster does not compromise every certificate
  • Certificates can be revoked on demand

Disadvantages

  • Certificates have an expiration date.
  • Monitoring certificate expiration, revocation, and replacement has to be handled yourself.
  • The native API has some limitations.
  • Using client certificates from a browser is cumbersome.

OpenID Connect tokens

Based on the OAuth 2.0 specification; the major cloud providers each have their own integration.

Advantages

  • SSO-like experience
  • First-tier cloud providers integrate their own OpenID services with their managed Kubernetes offerings
  • Can also be used for non-cloud deployments
  • Secure and scalable

Disadvantages

  • Tokens cannot be revoked; they simply expire, so they may need to be refreshed frequently.
  • Kubernetes has no web interface that can trigger the authentication flow; you have to trigger it yourself, though some Kubernetes setup tools can help.

Other methods

Authenticating proxy: see Authenticating | Kubernetes.

Webhook token authentication: see Authenticating | Kubernetes.

Authorization and RBAC (role-based access control)

When a Pod specifies a serviceAccountName, the token is mounted by default at /var/run/secrets/kubernetes.io/serviceaccount/token and the CA certificate at /var/run/secrets/kubernetes.io/serviceaccount/ca.crt.

A Role on its own is just a set of permissions.

A RoleBinding ties a Role to a ServiceAccount, i.e. it grants that Role to the ServiceAccount.

  • Role and ClusterRole define a set of permissions: which verbs are allowed on which resources. A Role is namespace-scoped; a ClusterRole is cluster-wide.
  • RoleBinding and ClusterRoleBinding bind those permissions to users, groups, or ServiceAccounts. A RoleBinding is namespace-scoped; a ClusterRoleBinding is cluster-wide (a cluster-scoped sketch follows below).
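The walkthrough below only uses the namespaced kinds; a minimal sketch of the cluster-scoped equivalent, with made-up names pod-reader-cluster and read-pods-cluster, reusing the pod-logger ServiceAccount created below:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-reader-cluster
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-pods-cluster
subjects:
  - kind: ServiceAccount
    name: pod-logger
    namespace: default
roleRef:
  kind: ClusterRole
  name: pod-reader-cluster
  apiGroup: rbac.authorization.k8s.io

This would let pod-logger list Pods in every namespace rather than only in default.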
vim  pod-logger-serviceaccount.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: pod-logger
  namespace: default

kubectl apply -f ./pod-logger-serviceaccount.yaml

# create a Role named pod-reader that only allows get/watch/list on pods (/api/v1/namespaces/default/pods); an empty apiGroups entry means the core API group

vim  pod-reader-role.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "watch", "list"]

kubectl apply -f pod-reader-role.yaml

# create a Pod that runs under the pod-logger ServiceAccount
vim   pod-logger-static-pod.yaml 


apiVersion: v1
kind: Pod
metadata:
  name: pod-logger-static
spec:
# serviceAccountName assigns the pod-logger ServiceAccount to this Pod, which causes its token Secret to be mounted at /var/run/secrets/kubernetes.io/serviceaccount/token inside the container. The command below reads that token in a loop and sends it as a Bearer token; --cacert points at the CA certificate injected at /var/run/secrets/kubernetes.io/serviceaccount/ca.crt, used to verify the remote API server.
  serviceAccountName: pod-logger
  containers:
    - name: logger
      image: radial/busyboxplus:curl
      command:
        - /bin/sh
        - -c
        - |
          SERVICEACCOUNT=/var/run/secrets/kubernetes.io/serviceaccount
          TOKEN=$(cat ${SERVICEACCOUNT}/token)
          while true
          do
            echo "Querying Kubernetes API Server for Pods in default namespace..."
            curl --cacert $SERVICEACCOUNT/ca.crt --header "Authorization: Bearer $TOKEN" -X GET https://kubernetes/api/v1/namespaces/default/pods
            sleep 10
          done


kubectl apply -f ./pod-logger-static-pod.yaml

# this returns 403 Forbidden (no RoleBinding yet)
kubectl logs pod-logger-static -f

# in a new console
vim read-pods-rolebinding.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
  - kind: ServiceAccount
    name: pod-logger
    namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io

kubectl apply -f ./read-pods-rolebinding.yaml

# now the pod list is returned successfully
kubectl logs pod-logger-static -f

# remove the binding
kubectl delete rolebinding read-pods

# 403 again
kubectl logs pod-logger-static -f

Advanced Pod scheduling techniques

Scheduling is handled by kube-scheduler in the control plane, which decides, based on a set of rules, on which nodes Pods get started.

Key concepts: affinity, taints, tolerations, and scheduling policies.

Concepts

Pods are the most basic units behind Deployments and StatefulSets.

When first submitted, a Pod is unscheduled; kube-scheduler periodically looks for Pods that have not been scheduled yet.

The Pod object stored in etcd has a nodeName field holding the worker node the Pod runs on. Once it is set, the Pod counts as scheduled; until then it stays in the Pending state.

Scheduling happens in two phases:

Filtering: determine which nodes can run the Pod at all (resources, rules). If no node qualifies, the Pod stays Pending.

Scoring: rank the remaining nodes according to a set of scheduling policies and pick the node with the highest score. A quick way to inspect the outcome is shown below.
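A quick check for a Pod that is stuck Pending; the pod name is a placeholder, and the event message will vary:

kubectl describe pod <pending-pod-name>
# the Events section typically contains something like:
#   Warning  FailedScheduling  ...  0/3 nodes are available: 3 node(s) didn't match node selector.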

Node name and node selector: static placement

Node affinity and inter-pod affinity/anti-affinity

Taints and tolerations

Node affinity

Pod nodeName / nodeSelector

Pins Pods to specific nodes; if the condition cannot be met, the Pods stay Pending.

# list the nodes to get their names
kubectl get nodes

vim designated-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-example
spec:

  replicas: 5
  selector:
    matchLabels:
      app: nginx
      environment: test
  template:
    metadata:
      labels:
        app: nginx
        environment: test
    spec:
      # pin scheduling to this node
      nodeName: aks-nodepool1-77120516-vmss000000
      containers:
        - name: nginx
          image: nginx:1.17
          ports:
            - containerPort: 80

kubectl apply -f  designated-deployment.yaml

# or select nodes by label

kubectl label nodes aks-nodepool1-77120516-vmss000001 node-type=superfast


vim designated-node-label-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-example
spec:
  replicas: 5
  selector:
    matchLabels:
      app: nginx
      environment: test
  template:
    metadata:
      labels:
        app: nginx
        environment: test
    spec:
      # if no node carries this label, the Pods will stay Pending!
      nodeSelector:
        node-type: superfast
      containers:
        - name: nginx
          image: nginx:1.17
          ports:
            - containerPort: 80

kubectl apply -f designated-node-label-deployment.yaml

Node affinity configuration

Describes how strongly a Pod is attracted to particular nodes.

Configured under .spec.affinity.nodeAffinity using nodeSelectorTerms.

  • Label operators: In, NotIn, Exists, DoesNotExist, Gt, Lt.
  • Inter-pod affinity and anti-affinity can additionally place Pods relative to nodes that already run certain Pods.
  • Soft affinity goes under preferredDuringSchedulingIgnoredDuringExecution, hard affinity under requiredDuringSchedulingIgnoredDuringExecution; the soft form supports weights.

Practical example: prefer fast and superfast nodes, and never use extremelyslow ones.

Soft node affinity (preferredDuringSchedulingIgnoredDuringExecution): prefer fast and superfast.

Hard node affinity (requiredDuringSchedulingIgnoredDuringExecution): exclude extremelyslow.

# label the node
kubectl label nodes --overwrite aks-nodepool1-77120516-vmss000000 node-type=slow

#affinity
vim nginx-deployment-affinity.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-example
spec:
  replicas: 5
  selector:
    matchLabels:
      app: nginx
      environment: test
  template:
    metadata:
      labels:
        app: nginx
        environment: test
    spec:
      # hard requirement: never use extremelyslow nodes; soft preference: prefer fast/superfast
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: node-type
                    operator: NotIn
                    values:
                      - extremelyslow
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 1
              preference:
                matchExpressions:
                  - key: node-type
                    operator: In
                    values:
                      - fast
                      - superfast
      containers:
        - name: nginx
          image: nginx:1.17
          ports:
            - containerPort: 80

kubectl apply -f nginx-deployment-affinity.yaml


kubectl get pods --namespace default --output=custom-columns="NAME:.metadata.name,STATUS:.status.phase,NODE:.spec.nodeName"

# relabel the node as extremelyslow; after the rollout restart the new Pods stay Pending because of the hard affinity rule
kubectl label nodes --overwrite aks-nodepool1-77120516-vmss000000 node-type=extremelyslow

kubectl rollout restart deploy nginx-deployment-example

Node taints and tolerations

Pods are scheduled onto a tainted node only if they tolerate the taint.

Taint format:

<key>=<value>:<effect>, for example machine-check-exception=memory. The taint should semantically describe the problem the node is experiencing.

effects:

  • NoSchedule: no new Pods are scheduled onto the node (similar to a hard affinity); Pods already running there are left alone.
  • PreferNoSchedule: the scheduler tries to avoid the node (similar to a soft affinity).
  • NoExecute: Pods without a matching toleration are not scheduled and already-running Pods are evicted. If a toleration includes tolerationSeconds, the Pod is tolerated for that long before eviction. Note that such a Pod can still be scheduled onto the node and then evicted again after tolerationSeconds, so NoExecute is often combined with NoSchedule.

Some taints applied automatically:

  • node.kubernetes.io/not-ready: the node condition Ready is false.
  • node.kubernetes.io/unreachable: Ready is Unknown, usually because the node controller cannot reach the node.
  • node.kubernetes.io/out-of-disk: the node has run out of disk space.
  • node.kubernetes.io/memory-pressure: the node is under memory pressure.
  • node.kubernetes.io/disk-pressure: the node is experiencing disk pressure.
  • node.kubernetes.io/network-unavailable: the node's network is currently down.
  • node.kubernetes.io/unschedulable: the node is currently in an unschedulable state.
  • node.cloudprovider.kubernetes.io/uninitialized: the node is not ready yet; the taint is removed once the cloud-controller-manager has initialized it. Inspecting a node's taints is shown below.
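A quick way to inspect the taints currently set on a node; the node name minikube matches the examples that follow:

kubectl get node minikube -o jsonpath='{.spec.taints}'
# or, more readable:
kubectl describe node minikube | grep -A3 -i taints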

Try it:

kubectl taint node aks-nodepool1-77120516-vmss000001 machine-check-exception=memory:NoExecute

Tolerations are defined under spec.template.spec.tolerations:

tolerations:
  - key: machine-check-exception
    operator: Equal
    value: memory
    effect: NoExecute
    tolerationSeconds: 60
# with NoExecute plus tolerationSeconds, Pods get scheduled, tolerated for a while, evicted, and rescheduled in a loop
# first, taint the node
kubectl taint node minikube machine-check-exception=memory:NoExecute

# to remove the taint later:
kubectl taint node minikube machine-check-exception-



vim nginx-tolerations.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-example
spec:
  replicas: 5
  selector:
    matchLabels:
      app: nginx
      environment: test
  template:
    metadata:
      labels:
        app: nginx
        environment: test
    spec:
      tolerations:
        - key: machine-check-exception
          operator: Equal
          value: memory
          effect: NoExecute
          # tolerationSeconds still counts as tolerating the taint; combined with NoExecute this causes the schedule/evict loop described above
          tolerationSeconds: 60
      containers:
        - name: nginx
          image: nginx:1.17
          ports:
            - containerPort: 80

kubectl apply -f nginx-tolerations.yaml

# remove the taint first
kubectl taint node minikube machine-check-exception-

# NoSchedule: existing Pods stay where they are and only new Pods avoid the node, so delete/apply again or trigger a rollout restart to see the effect
kubectl taint node minikube machine-check-exception=memory:NoSchedule

kubectl rollout restart deploy nginx-deployment-example

# set both taints; after about one minute all Pods are evicted and end up Pending again
kubectl taint node minikube machine-check-exception=memory:NoSchedule
kubectl taint node minikube machine-check-exception=memory:NoExecute


Scheduling policies

They drive filtering and scoring.

--policy-config-file <filename>

--policy-configmap <configMap>

Managed cloud offerings usually do not allow changing these, but the commonly used policies are listed under Scheduling Policies | Kubernetes.

Predicates: used for filtering

Priorities: used for scoring

        For example SelectorSpreadPriority.

        NodeAffinityPriority and InterPodAffinityPriority implement soft node affinity and inter-pod affinity.

        ImageLocalityPriority prefers nodes that already have the image, reducing network traffic.

        ServiceSpreadingPriority spreads Pods belonging to the same Service across nodes as much as possible.

Autoscaling

  • Vertical, for Pods: adjust the resource requests/limits of a single Pod
  • Horizontal, for Pods: scale out by increasing the number of replicas
  • Horizontal, for nodes: scale the node pool itself (Cluster Autoscaler)

Pod resource requests and limits

Resources are either compressible (e.g. CPU) or incompressible (e.g. memory). Running short of a compressible resource at worst slows the container down; running out of an incompressible one gets the container killed and restarted.

Requests drive scheduling decisions; limits are enforced by operating-system mechanisms (cgroups).

Setting requests lower than limits makes it possible to overcommit node resources.

# try out requests/limits; kubectl describe node also shows the scheduled resource consumption
vim request-limit-nginx.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-example
spec:
  replicas: 5
  selector:
    matchLabels:
      app: nginx
      environment: test
  template:
    metadata:
      labels:
        app: nginx
        environment: test
    spec:
      containers:
        - name: nginx
          image: nginx:1.17
          ports:
            - containerPort: 80
          resources:
            limits:
              cpu: 200m
              memory: 60Mi
            requests:
              cpu: 100m
              memory: 50Mi


kubectl apply -f  request-limit-nginx.yaml

kubectl describe node minikube

Vertical autoscaling

The Vertical Pod Autoscaler (VPA) is installed as a Custom Resource Definition (CRD) named VerticalPodAutoscaler.

VPA has three components:

  • Recommender: computes recommended values based on historical consumption
  • Updater: if a Pod's resources are off, it deletes the Pod so that it gets recreated
  • Admission plugin: sets the correct requests and limits on Pods that are (re)created by their controllers

In-place updates are proposed in https://github.com/kubernetes/enhancements/pull/1883; until then VPA works by deleting and recreating Pods.

vim hamster-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hamster
spec:
  selector:
    matchLabels:
      app: hamster
  replicas: 5
  template:
    metadata:
      labels:
        app: hamster
    spec:
      containers:
        - name: hamster
          image: ubuntu:20.04
          resources:
            requests:
              cpu: 100m
              memory: 50Mi
          command:
            - /bin/sh
            - -c
            - while true; do timeout 0.5s yes >/dev/null; sleep 0.5s; done

kubectl apply -f ./hamster-deployment.yaml

Install the VPA first:

git clone https://github.com/kubernetes/autoscaler

cd autoscaler/vertical-pod-autoscaler

./hack/vpa-up.sh

kubectl get pods -n kube-system
# three new components appear:
vpa-admission-controller-bb59bb7c7-lv7zk 
vpa-recommender-5555d76bfd-8stts         
vpa-updater-8b7f687dc-gmkzm              

VPA modes

  • Recreate: Pods are recreated when their limits/requests change
  • Auto: currently equivalent to Recreate, but may move to in-place updates (see the KEP linked above)
  • Initial: resources are only assigned when the Pod is created
  • Off: recommendation-only mode

vim hamster-vpa.yaml

apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: hamster-vpa
spec:
#targetRef selects the workload the VPA acts on
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hamster
#update policy: one of Off, Auto, Recreate, Initial (see above)
  updatePolicy:
    updateMode: "Off"
#per-container resource policies
  resourcePolicy:
    containerPolicies:
      - containerName: '*'
        minAllowed:
          cpu: 100m
          memory: 50Mi
        maxAllowed:
          cpu: 1
          memory: 500Mi
        controlledResources:
          - cpu
          - memory

kubectl apply -f hamster-vpa.yaml


kubectl describe vpa hamster-vpa

#change updateMode to Auto and re-apply
vim hamster-vpa.yaml

kubectl apply -f hamster-vpa.yaml

kubectl get pod

#describe one of the pods (the name will differ in your run)
kubectl describe pod hamster-779cfd69b4-9tqfx

Horizontal Pod autoscaling

The exact algorithm is described at Horizontal Pod Autoscaling | Kubernetes.

It is highly configurable and implemented inside Kubernetes itself.

For example, with a 50% target utilization, if utilization climbs to 80%, the replica count is increased until the average drops back to 50%.
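The documented scaling rule behind this behaviour, with the numbers from the example plugged in:

desiredReplicas = ceil( currentReplicas * currentMetricValue / desiredMetricValue )
# e.g. 5 replicas at 80% average CPU with a 50% target:
# desiredReplicas = ceil(5 * 80 / 50) = 8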

vim elastic-hamster-serviceaccount.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: elastic-hamster
  namespace: default

kubectl apply -f elastic-hamster-serviceaccount.yaml

#role

vim deployment-reader-role.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: deployment-reader
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "watch", "list"]

kubectl apply -f deployment-reader-role.yaml

#role binding
vim read-deployments-rolebinding.yaml

#the RoleBinding lets the workload read Deployments via the API (to fetch .spec.replicas)
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-deployments
  namespace: default
subjects:
  - kind: ServiceAccount
    name: elastic-hamster
    namespace: default
roleRef:
  kind: Role
  name: deployment-reader
  apiGroup: rbac.authorization.k8s.io

kubectl apply -f read-deployments-rolebinding.yaml


#deployment
vim elastic-hamster-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: elastic-hamster
spec:
  selector:
    matchLabels:
      app: elastic-hamster
  replicas: 5
  template:
    metadata:
      labels:
        app: elastic-hamster
    spec:
      serviceAccountName: elastic-hamster
      containers:
        - name: hamster
          image: ubuntu:20.04
          resources:
            requests:
              cpu: 200m
              memory: 50Mi
          env:
            - name: TOTAL_HAMSTER_USAGE
              value: "1.0"
          command:
            - /bin/sh
            - -c
            - |
              # Install curl and jq
              apt-get update && apt-get install -y curl jq || exit 1
              SERVICEACCOUNT=/var/run/secrets/kubernetes.io/serviceaccount
              TOKEN=$(cat ${SERVICEACCOUNT}/token)
              while true
              do
                # Calculate CPU usage by hamster. This will dynamically adjust to be 1.0 / num_replicas. So for initial 5 replicas, it will be 0.2
                HAMSTER_USAGE=$(curl -s --cacert $SERVICEACCOUNT/ca.crt --header "Authorization: Bearer $TOKEN" -X GET https://kubernetes/apis/apps/v1/namespaces/default/deployments/elastic-hamster | jq ${TOTAL_HAMSTER_USAGE}/'.spec.replicas')
                # Hamster sleeps for the rest of the time, with a small adjustment factor
                HAMSTER_SLEEP=$(jq -n 1.2-$HAMSTER_USAGE)
                echo "Hamster uses $HAMSTER_USAGE and sleeps $HAMSTER_SLEEP"
                timeout ${HAMSTER_USAGE}s yes >/dev/null
                sleep ${HAMSTER_SLEEP}s
              done

kubectl apply -f elastic-hamster-deployment.yaml

kubectl get pods

kubectl logs elastic-hamster-

kubectl top pods
# if this fails on minikube, enable the metrics-server addon
# list the available addons
minikube addons list

# enable metrics-server
minikube addons enable metrics-server

kubectl scale deploy elastic-hamster --replicas=2


vim elastic-hmaster-hpa.yaml

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: elastic-hamster-hpa
spec:
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 75
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: elastic-hamster

kubectl apply -f  elastic-hmaster-hpa.yaml

  kubectl describe hpa elastic-hamster-hpa

Cluster Autoscaler

Usually provided as part of the cloud vendor's managed offering.

Advanced traffic routing based on Ingress

Kubernetes Services revisited

First create a basic Deployment:

vim ingress-nginx-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-example
spec:
  replicas: 3
  selector:
    matchLabels:
      environment: test
  template:
    metadata:
      labels:
        environment: test
    spec:
      containers:
        - name: nginx
          image: nginx:1.17
          ports:
            - containerPort: 80

kubectl apply -f ingress-nginx-deployment.yaml

ClusterIP service


vim clusterip.yaml

apiVersion: v1
kind: Service
metadata:
  name: nginx-deployment-example-clusterip
spec:
  selector:
    environment: test
  type: ClusterIP
  ports:
    - port: 8080
      protocol: TCP
      targetPort: 80

kubectl apply -f clusterip.yaml

A ClusterIP service is only reachable from inside the cluster via its cluster-internal IP (visible with kubectl get svc).

NodePort service

vim nodepod-svc.yaml

apiVersion: v1
kind: Service
metadata:
  name: nginx-deployment-example-nodeport
spec:
  selector:
    environment: test
  type: NodePort
  ports:
    - port: 8080
      nodePort: 31001
      protocol: TCP
      targetPort: 80

kubectl apply -f nodepod-svc.yaml

External traffic enters through the nodePort on any cluster node and is forwarded internally to the service port and then to the targetPort; a quick check is sketched below.
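A minimal sketch of reaching the service from outside; the node address is a placeholder (on minikube, `minikube ip` prints it):

# find a node address
kubectl get nodes -o wide
# hit the nodePort defined above (31001); traffic ends up on container port 80
curl http://<node-ip>:31001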

LoadBalancer service

Provisions an external load-balancer IP in addition to the cluster-internal IP; a node port is allocated at random, the service port is the one you specify, and traffic can be routed to Pods on any node.

vim loadBalancer-svc.yaml

apiVersion: v1
kind: Service
metadata:
  name: nginx-deployment-example-lb
spec:
  selector:
    environment: test
  type: LoadBalancer
  ports:
    - port: 8080
      protocol: TCP
      targetPort: 80

kubectl apply -f  loadBalancer-svc.yaml
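To find the externally reachable address once the cloud provider has provisioned the load balancer (on minikube the EXTERNAL-IP stays <pending> unless `minikube tunnel` is running):

kubectl get svc nginx-deployment-example-lb
# the EXTERNAL-IP column holds the load balancer address; then:
curl http://<external-ip>:8080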

  

Ingress

The LoadBalancer service above does L4 load balancing, which cannot cover everything:

  • HTTP and HTTPS need L7 load balancing.
  • An L4 load balancer cannot do TLS offloading and termination.
  • Name-based virtual hosting across different domains is not possible with L4.
  • Path-based routing requires an L7 load balancer, e.g. https://<loadBalancerIp>/service1 goes to service1 and https://<loadBalancerIp>/service2 goes to service2; L4 cannot do this.
  • Sticky sessions or cookie affinity also require an L7 load balancer.

Ingress solves these problems.

vim ingress-ex.yaml 
#since Kubernetes 1.19, Ingress is GA as networking.k8s.io/v1; this manifest still uses the older v1beta1 schema: https://v1-19.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#ingressbackend-v1-networking-k8s-io
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - http:
        paths:
          - path: /service1
            pathType: Prefix
            backend:
              serviceName: example-service1
              servicePort: 80
          - path: /service2
            pathType: Prefix
            backend:
              serviceName: example-service2
              servicePort: 80

kubectl apply -f ingress-ex.yaml

Each rule can have an optional host field that matches a specific destination host (wildcards are supported).

The list of path routings maps concrete paths to backend services, as in the sketch below.
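A minimal sketch of a host rule with a wildcard, in the same networking.k8s.io/v1beta1 style as the example above; the host name is a placeholder and example-service1 is reused from that example:

spec:
  rules:
    - host: "*.example.com"
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              serviceName: example-service1
              servicePort: 80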

Using NGINX as the Ingress controller

NGINX Ingress Controller - Production-Grade Kubernetes

Installation: see Installation Guide - NGINX Ingress Controller

The commands below may need minor adjustments.

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/cloud/deploy.yaml

 kubectl apply -f https://raw.githubusercontent.com/PacktPublishing/The-Kubernetes-Bible/master/Chapter21/02_ingress/example-services.yaml

kubectl apply -f ./example-ingress.yaml

kubectl describe svc -n ingress-nginx ingress-nginx-controller
#the output includes: LoadBalancer Ingress:     137.117.227.83


wget  http://137.117.227.83/service1
wget  http://137.117.227.83/service2
