k8s Cluster Setup (Part 4) — Deploying the Node

Deploying the node components

Once the master apiserver has TLS authentication enabled, a node's kubelet must use a certificate signed by the cluster CA to communicate with the apiserver. When there are many nodes, signing certificates by hand becomes tedious, which is what the TLS Bootstrapping mechanism is for: the kubelet authenticates as a low-privilege user and automatically requests a certificate from the apiserver, which then signs the kubelet's certificate dynamically.
The rough authentication workflow is shown in the figure:
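For reference, the kubelet-bootstrap user below corresponds to a line in the token.csv generated on the master in the previous part; a line in that file looks roughly like this (the uid and group columns here are illustrative):

2d6586cf697ee3c8d7d5c97310a20230,kubelet-bootstrap,10001,"system:kubelet-bootstrap"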

On the master node, prepare the kubelet bootstrap configuration for the node

Create the role binding

kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap

Create the kubelet bootstrapping kubeconfig file

# Set the cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=https://10.61.66.216:6443 \
  --kubeconfig=bootstrap.kubeconfig

# Set the client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

# Set the context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

Copy the bootstrap.kubeconfig file to the node

That is, onto the node set up below, under /opt/worker/kubelet/kubeconfig.
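For example (10.61.66.216 is the node used in this walkthrough; the target directory must already exist on the node):

scp bootstrap.kubeconfig root@10.61.66.216:/opt/worker/kubelet/kubeconfig/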

 

Preparation

On the node:

1. Create the directories

Put the kubectl binary into /opt/worker/kubectl/bin.
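The full layout used in the rest of this part can be created in one go; this is a sketch based on the paths referenced below:

mkdir -p /opt/worker/kubectl/bin
mkdir -p /opt/worker/kubelet/{bin,config/template,kubeconfig,ssl,service,log}
mkdir -p /opt/worker/script
mkdir -p /opt/cni/net.d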

2. Go into the kubelet directory

Put the kubelet binary into its bin directory.

Create a template directory under /opt/worker/kubelet/config:

[root@zoutt-node2 template]# vi kubelet.cfg.template
HOSTNAME_OVERRIDE=HOSTNAME_OVERRIDE_VALUE
KUBECONFIG="/opt/worker/kubelet/kubeconfig/kubelet.kubeconfig"
BOOTSTRAP_KUBECONFIG="/opt/worker/kubelet/kubeconfig/bootstrap.kubeconfig"
CERT_DIR="/opt/worker/kubelet/ssl"
POD_INFRA_CONTAINER_IMAGE="registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
CLUSTER_DNS="172.20.0.2"
CLUSTER_DOMAIN="cluster.local."

Create kubelet.cfg in /opt/worker/kubelet/config

Here HOSTNAME_OVERRIDE is the node's internal IP.

[root@zoutt-node2 config]# vi kubelet.cfg

HOSTNAME_OVERRIDE="10.61.66.216"
KUBECONFIG="/opt/worker/kubelet/kubeconfig/kubelet.kubeconfig"
BOOTSTRAP_KUBECONFIG="/opt/worker/kubelet/kubeconfig/bootstrap.kubeconfig"
CERT_DIR="/opt/worker/kubelet/ssl"
POD_INFRA_CONTAINER_IMAGE="registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
CLUSTER_DNS="172.20.0.2"
CLUSTER_DOMAIN="cluster.local."

Go to /opt/worker/kubelet and put the bootstrap.kubeconfig file into the kubeconfig directory.

Go to /opt/worker/kubelet/service/ and create the kubelet.service file.

[root@zoutt-node2 service]# vi kubelet.service 


[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=-/opt/worker/kubelet/config/kubelet.cfg
ExecStart=/opt/worker/kubelet/bin/kubelet \
--logtostderr=false \
--v=4 \
--log-dir=/opt/worker/kubelet/log \
--hostname-override=${HOSTNAME_OVERRIDE} \
--kubeconfig=${KUBECONFIG} \
--bootstrap-kubeconfig=${BOOTSTRAP_KUBECONFIG} \
--cert-dir=${CERT_DIR} \
--pod-infra-container-image=${POD_INFRA_CONTAINER_IMAGE} \
--allow-privileged=true \
--cluster-dns=${CLUSTER_DNS} \
--cluster-domain=${CLUSTER_DOMAIN} \
--network-plugin=cni \
--feature-gates=RotateKubeletClientCertificate=true,RotateKubeletServerCertificate=true
Restart=on-failure

[Install]
WantedBy=multi-user.target

Go to /opt/worker and create the initialization script init.sh inside the script directory.

[root@zoutt-node2 script]# vi init.sh 


# Disable SELinux
setenforce  0
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/sysconfig/selinux
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/sysconfig/selinux
sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/selinux/config
getenforce

# Disable swap
swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab
cat /etc/fstab

# Kernel parameters
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward=1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf

# Gather information
#[ip]
IP=$(ifconfig $NETWORK_CARD | grep 'inet ' | awk '{print $2}')

#[ssl]
mkdir /opt/ssl
mkdir /opt/ssl/ca
mkdir /opt/worker/kubelet/log

cat > /opt/ssl/ca/ca.pem <<EOF
$CA_DATA
EOF

# kubelet service
#[config]
cp -f /opt/worker/kubelet/config/template/kubelet.cfg.template /opt/worker/kubelet/config/kubelet.cfg
sed -i "s/^HOSTNAME_OVERRIDE=HOSTNAME_OVERRIDE_VALUE/HOSTNAME_OVERRIDE=\"$IP\"/g" /opt/worker/kubelet/config/kubelet.cfg
cat /opt/worker/kubelet/config/kubelet.cfg

#[service]
chmod +x /opt/worker/kubelet/bin/kubelet
mkdir -p /etc/cni
cp -r /opt/cni/net.d /etc/cni/net.d
cp /opt/worker/kubelet/service/kubelet.service /usr/lib/systemd/system/kubelet.service

#[kubeconfig]
chmod +x /opt/worker/kubectl/bin/kubectl
cp /opt/worker/kubectl/bin/kubectl /usr/local/bin

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/ssl/ca/ca.pem \
  --embed-certs=true \
  --server=$MASTER_URI \
  --kubeconfig=/opt/worker/kubelet/kubeconfig/bootstrap.kubeconfig

kubectl config set-credentials kubelet-bootstrap \
  --token=$BOOTSTRAP_TOKEN \
  --kubeconfig=/opt/worker/kubelet/kubeconfig/bootstrap.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=/opt/worker/kubelet/kubeconfig/bootstrap.kubeconfig

kubectl config use-context default --kubeconfig=/opt/worker/kubelet/kubeconfig/bootstrap.kubeconfig

systemctl daemon-reload
systemctl restart docker
systemctl status docker
systemctl restart kubelet
systemctl status kubelet

Go to the /opt directory and create the start.sh file.

Change MASTER_URI to your own master's internal IP, set CA_DATA to the contents of ca.pem, and set the token to the one in the token.csv file generated on the master node.
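On the master, the two values can be pulled out roughly like this (the token.csv path is an assumption; use wherever it was written when the master was set up):

cat /opt/kubernetes/ssl/ca.pem                        # paste into CA_DATA
awk -F',' '{print $1}' /opt/kubernetes/cfg/token.csv  # paste into BOOTSTRAP_TOKEN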

[root@zoutt-node2 opt]# vi start.sh 

#!/bin/bash

NETWORK_CARD="eth0"

MASTER_URI="https://10.61.66.202:6443"

CA_DATA="-----BEGIN CERTIFICATE-----
MIIDvjCCAqagAwIBAgIUfBBe76WNWY+8E6a+0G2YYDWwZEwwDQYJKoZIhvcNAQEL
BQAwZTELMAkGA1UEBhMCQ04xEDAOBgNVBAgTB0JlaUppbmcxEDAOBgNVBAcTB0Jl
aUppbmcxDDAKBgNVBAoTA2s4czEPMA0GA1UECxMGU3lzdGVtMRMwEQYDVQQDEwpr
dWJlcm5ldGVzMB4XDTE5MDYyMDA2NTQwMFoXDTI0MDYxODA2NTQwMFowZTELMAkG
A1UEBhMCQ04xEDAOBgNVBAgTB0JlaUppbmcxEDAOBgNVBAcTB0JlaUppbmcxDDAK
BgNVBAoTA2s4czEPMA0GA1UECxMGU3lzdGVtMRMwEQYDVQQDEwprdWJlcm5ldGVz
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAuZ4glpBZNmXmJVnhrQ1K
3Igl52RCndp8CAfrNMeTLWVTAlmdU1QOXNVUx7ThqXjq6W4vNfwoHREKYrB2449j
xwmEoSHSjABnL/OrGZ6O97dLdsHjfuh1kK0zUG75BDLCsRPRhDaoTydKNq8VNviu
YSDoEoqIiAowJis3nJSZShoGzztBy9s0RLvPqnoCBFjR3o4CidCxHiffNtjHb2PZ
AEqE33tT54nuM5bakYCe5flKF6tIORP7TMnBAd/EQZ7XfnB97+9Clk7XTsS5aTsC
eHCmQV0u4+WlUbdLOgR8JQ6iFSY+IffLlu3hs9tOCcqRs8m6jqDM5zcKuG5A8I8b
qwIDAQABo2YwZDAOBgNVHQ8BAf8EBAMCAQYwEgYDVR0TAQH/BAgwBgEB/wIBAjAd
BgNVHQ4EFgQUdoCatOC2nx9Ye771F4Y0KeXasN4wHwYDVR0jBBgwFoAUdoCatOC2
nx9Ye771F4Y0KeXasN4wDQYJKoZIhvcNAQELBQADggEBACHXeDQMSoQACu1ZBguC
oHbnC6L6ovBIElnW9Pnb1R+fmmBahwWWpxj37RXe8ELj8VrgWPESj7o/AkDZYEqu
eBAnW/pj40uHtIB0lRIj1+f8gm3xmPJyat2yvolo/oUgRO5bN4BhPQKL36cy5Mz3
bMPTpkB/VUZqvyvyHOlqxf1sM79/MiVoQr0AMo3EJwWGkoL8U75o0uxNdCtop3O3
XvW/ON+OTTpxHuP/uRqB0h5hG1c+2LJw/0ZE0/6pZUViGsuezDJA8k1NQ19FOF3G
SwgmlwftcZkjT1KGTCLpXGI2BKyIZ/3Ok600PHJ8i8oUKnpp1SQ1yX+oqS2Dw3tb
aGY=
-----END CERTIFICATE-----"

BOOTSTRAP_TOKEN="2d6586cf697ee3c8d7d5c97310a20230"

source /opt/worker/script/init.sh

Start it

chmod 777 start.sh
./start.sh

On the master, check the CSR requests

[root@zoutt-master ssl]# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-YD7Qc3yrTvPGMCGbLwFmn6RYVHmgceejrnhLPKpPD2I   75m   kubelet-bootstrap   Approved,Issued

Approve the kubelet TLS certificate requests

[root@k8s-master1 ~]# kubectl get csr|grep 'Pending' | awk 'NR>0{print $1}'| xargs kubectl certificate approve

Verify that the node has joined the cluster

[root@zoutt-master ssl]# kubectl get node


On the master, deploy flannel, kube-router, and CoreDNS as containerized add-ons

On the master, create the /opt/kubernetes/bin/master/plugin/ directory.

Under it, create core-dns, flannel, ingress-nginx, and kube-router subdirectories.
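For example:

mkdir -p /opt/kubernetes/bin/master/plugin/{core-dns,flannel,ingress-nginx,kube-router}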

1. Deploying kube-router

Go into the kube-router directory

1. Create the kuberoute.yaml file

[root@zoutt-master kube-router]# vi kuberoute.yaml 

apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-router-cfg
  namespace: kube-system
  labels:
    tier: node
    k8s-app: kube-router
data:
  cni-conf.json: |
    {
      "name":"kubernetes",
      "type":"bridge",
      "bridge":"kube-bridge",
      "isDefaultGateway":true,
      "ipam": {
        "type":"host-local"
      }
    }
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    k8s-app: kube-router
    tier: node
  name: kube-router
  namespace: kube-system
spec:
  template:
    metadata:
      labels:
        k8s-app: kube-router
        tier: node
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      containers:
      - name: kube-router
        image: cloudnativelabs/kube-router
        args:
        - --run-router=false
        - --run-firewall=true
        - --run-service-proxy=true
        - --kubeconfig=/var/lib/kube-router/kubeconfig
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        resources:
          requests:
            cpu: 250m
            memory: 250Mi
        securityContext:
          privileged: true
        volumeMounts:
        - name: lib-modules
          mountPath: /lib/modules
          readOnly: true
        - name: cni-conf-dir
          mountPath: /etc/cni/net.d
        - name: kubeconfig
          mountPath: /var/lib/kube-router
        - name: run
          mountPath: /var/run/docker.sock
          readOnly: true
      initContainers:
      - name: install-cni
        image: busybox
        command:
        - /bin/sh
        - -c
        - set -e -x;
          if [ ! -f /etc/cni/net.d/10-kuberouter.conf ]; then
            TMP=/etc/cni/net.d/.tmp-kuberouter-cfg;
            cp /etc/kube-router/cni-conf.json ${TMP};
            mv ${TMP} /etc/cni/net.d/10-kuberouter.conf;
          fi
        volumeMounts:
        - name: cni-conf-dir
          mountPath: /etc/cni/net.d
        - name: kube-router-cfg
          mountPath: /etc/kube-router
      hostNetwork: true
      tolerations:
      - key: CriticalAddonsOnly
        operator: Exists
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
        operator: Exists
      volumes:
      - name: lib-modules
        hostPath:
          path: /lib/modules
      - name: cni-conf-dir
        hostPath:
          path: /etc/cni/net.d
      - name: run
        hostPath:
          path: /var/run/docker.sock
      - name: kube-router-cfg
        configMap:
          name: kube-router-cfg
      - name: kubeconfig
        configMap:
          name: kube-router-kubeconfig
          items:
          - key: kube-router.kubeconfig
            path: kubeconfig
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: kube-router
rules:
- apiGroups:
  - ""
  resources:
  - namespaces
  - pods
  - services
  - nodes
  - endpoints
  verbs:
  - list
  - get
  - watch
- apiGroups:
  - "networking.k8s.io"
  resources:
  - networkpolicies
  verbs:
  - list
  - get
  - watch
- apiGroups:
  - "extensions"
  resources:
  - networkpolicies
  verbs:
  - get
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: kube-router
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kube-router
subjects:
- kind: User
  name: kube-router


2. Create the initialization script init.sh (change the IP to your own master's internal IP)

[root@zoutt-master kube-router]# vi init.sh 

#!/bin/bash

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=https://10.61.66.202:6443 \
  --kubeconfig=kube-router.kubeconfig

kubectl config set-credentials kube-router \
  --client-certificate=/opt/kubernetes/ssl/kube-router.pem \
  --client-key=/opt/kubernetes/ssl/kube-router-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-router.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-router \
  --kubeconfig=kube-router.kubeconfig

kubectl config use-context default --kubeconfig=kube-router.kubeconfig

kubectl create configmap -n kube-system kube-router-kubeconfig --from-file=./kube-router.kubeconfig

kubectl apply -f kuberoute.yaml
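
Note that the kubeconfig above references a kube-router client certificate pair (/opt/kubernetes/ssl/kube-router.pem and kube-router-key.pem), and the CN must be kube-router to match the ClusterRoleBinding subject. If the pair does not exist yet, a minimal sketch with cfssl, assuming the CA key and the ca-config.json/kubernetes profile from the master setup in the earlier parts, looks like this:

cat > kube-router-csr.json <<EOF
{
  "CN": "kube-router",
  "key": { "algo": "rsa", "size": 2048 },
  "names": [{ "C": "CN", "ST": "BeiJing", "L": "BeiJing", "O": "k8s", "OU": "System" }]
}
EOF
cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem -ca-key=/opt/kubernetes/ssl/ca-key.pem \
  -config=/opt/kubernetes/ssl/ca-config.json -profile=kubernetes \
  kube-router-csr.json | cfssljson -bare kube-router
cp kube-router.pem kube-router-key.pem /opt/kubernetes/ssl/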

3. Run the script

chmod 777 init.sh
./init.sh

4. Check that the pods were created
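For example:

kubectl get pods -n kube-system -l k8s-app=kube-router -o wide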


2. Deploying flannel

Go into the flannel directory

1. Create the kube-flannel-legacy.yml file (replace the image pull address with a registry you can reach)

[root@zoutt-master flannel]# vi kube-flannel-legacy.yml 

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    }
  net-conf.json: |
    {
      "Network": "192.168.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      serviceAccountName: flannel
      containers:
      - name: kube-flannel
        image: mec-hub.21cn.com/coreos/flannel:v0.10.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=eth0
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      - name: install-cni
        image: mec-hub.21cn.com/coreos/flannel:v0.10.0-amd64
        command: [ "/bin/sh", "-c", "set -e -x; cp -f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conf; while true; do sleep 3600; done" ]
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg

2. Create the kube-flannel-rbac.yml file


# Create the clusterrole and clusterrolebinding:
# $ kubectl create -f kube-flannel-rbac.yml
# Create the pod using the same namespace used by the flannel serviceaccount:
# $ kubectl create --namespace kube-system -f kube-flannel-legacy.yml
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
  - kind: ServiceAccount
    name: flannel
    namespace: kube-system

Apply the YAML files

kubectl apply -f kube-flannel-rbac.yml
kubectl apply -f kube-flannel-legacy.yml
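
Then check that the flannel pods come up on every node:

kubectl get pods -n kube-system -l app=flannel -o wide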

 

3. Deploying CoreDNS

Create the coredns.yaml file

[root@zoutt-master core-dns]# vi coredns.yaml 

apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
      kubernetes.io/cluster-service: "true"
      addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        log
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa 172.20.0.0/16 {
          pods insecure
          upstream
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
    }
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: coredns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  replicas: 2
  selector:
    matchLabels:
      k8s-app: coredns
  template:
    metadata:
      labels:
        k8s-app: coredns
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
    spec:
      serviceAccountName: coredns
      containers:
      - name: coredns
        image: coredns/coredns:latest
        # imagePullPolicy: Always
        imagePullPolicy: IfNotPresent
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: coredns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: coredns
  clusterIP: 172.20.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP

Apply the coredns.yaml file

kubectl apply -f coredns.yaml
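
A quick sanity check once the CoreDNS pods are running (busybox:1.28 is just a convenient image that ships nslookup):

kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default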

4. Setting up CNI (on the node)

Go to /opt and create the cni directory

Go into net.d and create 10-flannel.conf

[root@zoutt-node2 net.d]# vi 10-flannel.conf 

{
  "name": "cbr0",
  "type": "flannel",
  "delegate": {
    "hairpinMode": true,
    "isDefaultGateway": true
  }
}

5. Deploying ingress-nginx (optional)

1. Create the configmap.yaml file

[root@zoutt-master ingress-nginx]# vi configmap.yaml 

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
data:
  proxy-next-upstream: "off"
  proxy-body-size: "2048m"
  log-format-upstream: '{ "@timestamp": "$time_iso8601", "@fields": {"remote_addr": "$remote_addr","remote_user": "$remote_user","body_bytes_sent": "$body_bytes_sent", "request_time": "$request_time", "status": "$status", "request": "$request", "request_method": "$request_method", "http_referrer": "$http_referer", "http_x_forwarded_for": "$http_x_forwarded_for", "http_user_agent": "$http_user_agent" } }'
  access-log-path: "/var/log/nginx/access.log"
  error-log-path: "/var/log/nginx/error.log"
  ssl-redirect: "true"

2. Create the default-backend.yaml file

[root@zoutt-master ingress-nginx]# vi default-backend.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app: default-http-backend
  namespace: ingress-nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        # Any image is permissible as long as:
        # 1. It serves a 404 page at /
        # 2. It serves 200 on a /healthz endpoint
        #image: gcr.io/google_containers/defaultbackend:1.4
        #image: index.tenxcloud.com/google_containers/defaultbackend:1.0
        image: registry.cn-hangzhou.aliyuncs.com/google-containers/defaultbackend:1.4
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
---

apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: ingress-nginx
  labels:
    app: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: default-http-backend

3. Create default-server-secret.yaml

[root@zoutt-master ingress-nginx]# vi default-server-secret.yaml 

apiVersion: v1
kind: Secret
metadata:
  name: default-server-secret
  namespace: ingress-nginx
type: Opaque
data:
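
The certificate data is not shown above. Assuming the secret should carry a default TLS certificate as base64-encoded tls.crt / tls.key entries, a self-signed pair can be generated and encoded like this (the CN is illustrative):

openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout default.key -out default.crt -subj "/CN=ingress.local"
base64 -w0 default.crt   # paste as the tls.crt value under data:
base64 -w0 default.key   # paste as the tls.key value under data: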
 

4. Create ingress-controller-rabc.yaml

[root@zoutt-master ingress-nginx]# vi ingress-controller-rabc.yaml 

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
        - events
    verbs:
        - create
        - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

5. Create ingress-controller-with-rabc.yaml

[root@zoutt-master ingress-nginx]# vi ingress-controller-with-rabc.yaml


apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  #replicas: 2
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
      annotations:
        prometheus.io/port: '10254'
        prometheus.io/scrape: 'true'
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      hostNetwork: true
      containers:
        - name: nginx-ingress-controller
          # image: hub.tech.21cn.com/k8s/nginx-ingress-controller:0.20.0
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.21.0
          args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
            - --default-ssl-certificate=$(POD_NAMESPACE)/cn21
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
          securityContext:
            runAsUser: 0
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
          - name: http
            containerPort: 80
            hostPort: 80
          - name: https
            containerPort: 443
            hostPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          volumeMounts:
           - name: nginx-logs
             mountPath: /var/log/nginx
      volumes:
      - name: nginx-logs
        hostPath:
         path: /data/log/nginx

6. Create ingress-service.yaml

[root@zoutt-master ingress-nginx]# vi  ingress-service.yaml

kind: Service
apiVersion: v1
metadata:
  name: ingress-service
  namespace: ingress-nginx
spec:
  selector:
    app: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
  type: ClusterIP

7. Create namespace.yaml

[root@zoutt-master ingress-nginx]# vi namespace.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx

8. Create tcp-services-configmap.yaml

[root@zoutt-master ingress-nginx]# vi tcp-services-configmap.yaml

kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx

9. Create udp-services-configmap.yaml

[root@zoutt-master ingress-nginx]# vi udp-services-configmap.yaml

kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx

10. Create the start.sh script

[root@zoutt-master ingress-nginx]# vi start.sh 

# nginx log paths:
# configmap.yaml                       configures the nginx log file paths
# ingress-controller-with-rabc.yaml    mountPath is the path inside the container
# ingress-controller-with-rabc.yaml    hostPath is the path on the host

kubectl apply -f namespace.yaml
kubectl apply -f default-backend.yaml
kubectl apply -f default-server-secret.yaml
kubectl apply -f configmap.yaml
kubectl apply -f tcp-services-configmap.yaml
kubectl apply -f udp-services-configmap.yaml
kubectl apply -f ingress-controller-rabc.yaml
kubectl apply -f ingress-controller-with-rabc.yaml
kubectl apply -f ingress-service.yaml

11. Create the delete.sh script

[root@zoutt-master ingress-nginx]# vi delete.sh 


kubectl delete -f default-backend.yaml
kubectl delete -f default-server-secret.yaml
kubectl delete -f configmap.yaml
kubectl delete -f tcp-services-configmap.yaml
kubectl delete -f udp-services-configmap.yaml
kubectl delete -f ingress-controller-rabc.yaml
kubectl delete -f ingress-controller-with-rabc.yaml
kubectl delete -f ingress-service.yaml
kubectl delete -f namespace.yaml

12. Run the start.sh script
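For example:

chmod 777 start.sh
./start.sh

Then check that the controller pods are running:

kubectl get pods -n ingress-nginx -o wide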
