Kubernetes v1.12.3 single-node installation with kubeadm (historical notes)

1 - System initialization

Disable SELinux and firewalld, and switch to the Aliyun yum mirrors

sed -i 's/#UseDNS yes/UseDNS no/g'   /etc/ssh/sshd_config
systemctl   restart sshd
grep DNS               /etc/ssh/sshd_config
setenforce 0                                                  # disable SELinux for the running system
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config  # and persist it across reboots
grep SELINUX=disabled  /etc/selinux/config
systemctl  disable firewalld  NetworkManager
systemctl  stop    firewalld    NetworkManager
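kubeadm's preflight checks fail while swap is enabled, so turn it off now and comment out the swap entry so it stays off after a reboot (the sed below comments out every fstab line containing " swap "):

swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab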
mv /etc/yum.repos.d/CentOS-Base.repo      /etc/yum.repos.d/CentOS-Base.repo.backup
curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
curl  -o /etc/yum.repos.d/epel.repo       http://mirrors.aliyun.com/repo/epel-7.repo
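After replacing the repo files, rebuild the yum metadata cache:

yum clean all
yum makecache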

Time synchronization

timedatectl set-timezone Asia/Shanghai
yum -y install ntpdate
ntpdate ntp1.aliyun.com
timedatectl set-local-rtc 0
systemctl restart rsyslog
systemctl restart crond
yum -y install ntp
cat >   /etc/ntp.conf  << EOF
driftfile /var/lib/ntp/drift
restrict default nomodify notrap nopeer noquery
restrict 127.0.0.1 
restrict ::1
server ntp1.aliyun.com iburst
logfile /var/log/ntp.log
includefile /etc/ntp/crypto/pw
keys /etc/ntp/keys
disable monitor
EOF
systemctl  enable  --now ntpd
ntpq -p

Configure the required kernel parameters

cat <<EOF | tee /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
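modprobe only loads br_netfilter until the next reboot; to make it persistent, one option is a modules-load.d entry (the standard systemd mechanism, available on CentOS 7):

echo br_netfilter > /etc/modules-load.d/br_netfilter.conf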

Install and configure Docker

yum -y install yum-utils
yum-config-manager --add-repo \
https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
mkdir  /etc/docker
cat > /etc/docker/daemon.json  << EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://2xdz2l32.mirror.aliyuncs.com"]
}
EOF
yum -y install docker-ce-18.06.1.ce-3.el7
systemctl  enable --now docker
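Confirm that Docker picked up the systemd cgroup driver, which must match the kubelet's:

docker info | grep -i 'cgroup driver'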

Install the base Kubernetes packages

cat > /etc/yum.repos.d/kubernetes.repo <<  EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum  -y install   kubelet-1.12.3
yum  -y install   kubectl-1.12.3
yum  -y install   kubeadm-1.12.3
systemctl enable kubelet.service
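A quick sanity check that all three components are at 1.12.3:

kubeadm version
kubectl version --client
kubelet --version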

kubectl command completion

yum   -y install bash-completion
source /usr/share/bash-completion/bash_completion
echo 'source <(kubectl completion bash)'     >>  /root/.bashrc
source  /root/.bashrc
kubectl desc   # press Tab here: completion should expand this to "kubectl describe"

Configure cluster parameters

cat  > /opt/kube-1.12.3-config.yaml  << EOF
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.12.3
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
api:
  advertiseAddress: 0.0.0.0
  controlPlaneEndpoint: 172.16.99.100:6443
controllerManagerExtraArgs:
  node-monitor-grace-period: 10s
  pod-eviction-timeout: 10s

apiServerCertSANs:
- 172.16.99.100
etcd:
networking:
  podSubnet: "10.244.0.0/16"
nodeRegistration:
  name: 172.16.99.100
EOF

Pre-pull the required images

kubeadm config images  list --config  /opt/kube-1.12.3-config.yaml
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.12.3
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.12.3
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.12.3
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.12.3
registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.2.24
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.2.2
kubeadm config images pull --config  /opt/kube-1.12.3-config.yaml

Configure the kubelet

cat > /etc/sysconfig/kubelet  << EOF 
KUBELET_EXTRA_ARGS="--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1" 
EOF
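The kubeadm RPM ships a systemd drop-in that normally sources this file (EnvironmentFile=-/etc/sysconfig/kubelet); this can be confirmed with:

systemctl cat kubelet | grep -i sysconfig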

Start the installation

kubeadm  init --config /opt/kube-1.12.3-config.yaml
[init] using Kubernetes version: v1.12.3
[preflight] running pre-flight checks
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [172.16.99.100 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [172.16.99.100 localhost] and IPs [172.16.99.100 127.0.0.1 ::1]
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [172.16.99.100 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.16.99.100 172.16.99.100 172.16.99.100]
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests" 
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 29.007559 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node 172.16.99.100 as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node 172.16.99.100 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "172.16.99.100" as an annotation
[bootstraptoken] using token: n8gghb.dteod2x0rkiz9xn1
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 172.16.99.100:6443 --token n8gghb.dteod2x0rkiz9xn1 --discovery-token-ca-cert-hash sha256:035bda68b4b25071424bd0428bf7e698ebeb9df8b7c42ba30796caf61a95fc4d
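The bootstrap token above expires after 24 hours by default; a fresh join command can be generated later with:

  kubeadm token create --print-join-command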

mkdir -p /root/.kube
cp /etc/kubernetes/admin.conf  /root/.kube/config
kubectl  get node
NAME            STATUS     ROLES    AGE   VERSION
172.16.99.100   NotReady   master   94s   v1.12.3
kubectl  get pod -n kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
coredns-6c66ffc55b-5ff2z                0/1     Pending   0          94s
coredns-6c66ffc55b-h6x6n                0/1     Pending   0          94s
etcd-172.16.99.100                      1/1     Running   0          52s
kube-apiserver-172.16.99.100            1/1     Running   0          65s
kube-controller-manager-172.16.99.100   1/1     Running   0          40s
kube-proxy-hhj2b                        1/1     Running   0          94s
kube-scheduler-172.16.99.100            1/1     Running   0          51s

Install flannel

The node reports NotReady and the CoreDNS pods stay Pending until a pod network add-on is deployed, so install flannel now.

docker pull quay.io/coreos/flannel:v0.10.0-amd64
cat  > /opt/flannel-v0.10.0-amd64.yaml  << EOF
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.10.0-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.10.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
EOF
kubectl  create -f /opt/flannel-v0.10.0-amd64.yaml
kubectl  get node
NAME            STATUS   ROLES    AGE     VERSION
172.16.99.100   Ready    master   8m45s   v1.12.3
kubectl  get pod -n kube-system 
NAME                                    READY   STATUS    RESTARTS   AGE
coredns-6c66ffc55b-5ff2z                1/1     Running   0          8m54s
coredns-6c66ffc55b-h6x6n                1/1     Running   0          8m54s
etcd-172.16.99.100                      1/1     Running   0          8m12s
kube-apiserver-172.16.99.100            1/1     Running   0          8m25s
kube-controller-manager-172.16.99.100   1/1     Running   0          8m
kube-flannel-ds-amd64-8qsks             1/1     Running   0          43s
kube-proxy-hhj2b                        1/1     Running   0          8m54s
kube-scheduler-172.16.99.100            1/1     Running   0          8m11s

Install metrics-server

yum -y install git
git clone https://github.com/yimtun/metrics-yaml.git   /opt/metrics-yaml
docker pull yimtune/metrics-server-amd64:v0.3.6
docker tag  yimtune/metrics-server-amd64:v0.3.6  k8s.gcr.io/metrics-server-amd64:v0.3.6
kubectl  create -f /opt/metrics-yaml/1.8+/
kubectl  get pod -n kube-system 
NAME                                    READY   STATUS    RESTARTS   AGE
coredns-6c66ffc55b-5ff2z                1/1     Running   0          12m
coredns-6c66ffc55b-h6x6n                1/1     Running   0          12m
etcd-172.16.99.100                      1/1     Running   0          11m
kube-apiserver-172.16.99.100            1/1     Running   0          11m
kube-controller-manager-172.16.99.100   1/1     Running   0          11m
kube-flannel-ds-amd64-8qsks             1/1     Running   0          3m50s
kube-proxy-hhj2b                        1/1     Running   0          12m
kube-scheduler-172.16.99.100            1/1     Running   0          11m
metrics-server-5df586cb87-mvxqk         0/1     Pending   0          22s
kubectl  get event -n kube-system
8s          Warning   FailedScheduling    Pod          0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
34s         Normal    SuccessfulCreate    ReplicaSet   Created pod: metrics-server-5df586cb87-mvxqk
34s         Normal    ScalingReplicaSet   Deployment   Scaled up replica set metrics-server-5df586cb87 to 1
The metrics-server pod cannot schedule because the single node carries the master taint; remove it:
kubectl taint node   172.16.99.100   node-role.kubernetes.io/master:NoSchedule-
kubectl  get event -n kube-system
2s          Normal    Scheduled           Pod          Successfully assigned kube-system/metrics-server-5df586cb87-mvxqk to 172.16.99.100
0s          Normal    Pulled              Pod          Container image "k8s.gcr.io/metrics-server-amd64:v0.3.6" already present on machine
0s          Normal    Created             Pod          Created container
kubectl  top node
NAME            CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
172.16.99.100   513m         6%     1218Mi          7%
kubectl  top  pod -n kube-system
NAME                                    CPU(cores)   MEMORY(bytes)   
coredns-6c66ffc55b-5ppxm                5m           11Mi            
coredns-6c66ffc55b-v2dw9                6m           13Mi            
etcd-172.16.99.100                      48m          33Mi            
kube-apiserver-172.16.99.100            75m          402Mi           
kube-controller-manager-172.16.99.100   115m         53Mi            
kube-flannel-ds-amd64-zfq62             7m           11Mi            
kube-proxy-6wghm                        7m           9Mi             
kube-scheduler-172.16.99.100            35m          13Mi            
metrics-server-5df586cb87-sl2qr         3m           11Mi 

Install the dashboard

cat >  /opt/dashboard.yaml << EOF
# ------------------- Dashboard Secret ------------------- #

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque

---
# ------------------- Dashboard Service Account ------------------- #

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Role & Role Binding ------------------- #

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Deployment ------------------- #

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule

---
# ------------------- Dashboard Service ------------------- #
#
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000
  selector:
    k8s-app: kubernetes-dashboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding 
metadata: 
  name: admin-user
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
EOF
docker pull  registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1
docker tag   registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1  k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
kubectl  create -f /opt/dashboard.yaml 
mkdir  /opt/key
openssl genrsa -out /opt/key/dashboard.key 2048
openssl req -new -out /opt/key/dashboard.csr -key /opt/key/dashboard.key -subj '/CN=my-k8s'
openssl x509 -req -in /opt/key/dashboard.csr -signkey /opt/key/dashboard.key -out  /opt/key/dashboard.crt
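Optionally inspect the self-signed certificate before loading it into the secret:

openssl x509 -in /opt/key/dashboard.crt -noout -subject -dates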
kubectl delete secret kubernetes-dashboard-certs -n kube-system
kubectl create secret generic kubernetes-dashboard-certs --from-file=/opt/key/dashboard.key --from-file=/opt/key/dashboard.crt -n kube-system
kubectl  delete  pod -n kube-system -l k8s-app=kubernetes-dashboard
kubectl  get pod -n kube-system -l k8s-app=kubernetes-dashboard
kubectl  -n kube-system  get secrets  | grep admin-user | awk '{print $1}'
admin-user-token-kxrds
kubectl  -n kube-system describe  secrets admin-user-token-kxrds
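The two lookups can be combined into one line; the token: field in the describe output is what the dashboard's Token login prompt expects:

kubectl -n kube-system describe secret $(kubectl -n kube-system get secrets | awk '/admin-user/{print $1}')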

https://172.16.99.100:30000/

(screenshot: dashboard login page)

Undo (tear the cluster down)

kubeadm  reset -f
rm -rf /etc/kubernetes
rm -rf /var/lib/kubelet
rm -rf /var/lib/etcd/
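kubeadm reset leaves CNI state behind; if flannel was deployed, also remove its interfaces and config (the names below are flannel's defaults):

ip link delete cni0
ip link delete flannel.1
rm -rf /etc/cni/net.d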

External etcd

http

cat  > /opt/kube-1.12.3-config.yaml   << EOF
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.12.3
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
api:
  advertiseAddress: 0.0.0.0
  controlPlaneEndpoint: 172.16.99.200:6443
controllerManagerExtraArgs:
  node-monitor-grace-period: 10s
  pod-eviction-timeout: 10s

apiServerCertSANs:
- 172.16.99.200
etcd:
    external:
        endpoints:
        - http://172.16.99.201:2379
networking:
  podSubnet: "10.244.0.0/16"
nodeRegistration:
  name: 172.16.99.200
EOF

https

cat  > /opt/kube-1.12.3-config.yaml  << EOF
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.12.3
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
api:
  advertiseAddress: 0.0.0.0
  controlPlaneEndpoint: 172.16.99.200:6443
controllerManagerExtraArgs:
  node-monitor-grace-period: 10s
  pod-eviction-timeout: 10s

apiServerCertSANs:
- 172.16.99.200
etcd:
    external:
        endpoints:
        - https://172.16.99.201:2379
        caFile: /etc/etcd/certs/etcd-ca.crt
        certFile: /etc/etcd/certs/etcd-client.crt
        keyFile: /etc/etcd/certs/etcd-client.key
networking:
  podSubnet: "10.244.0.0/16"
nodeRegistration:
  name: 172.16.99.200
EOF
mkdir /etc/etcd/certs/   -p
scp 172.16.99.201:/etcd-certs/{etcd-client.crt,etcd-client.key,etcd-ca.crt}   /etc/etcd/certs/
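Before running kubeadm init, verify that etcd answers over TLS with these client certs (standard etcdctl v3 flags):

ETCDCTL_API=3 etcdctl --endpoints=https://172.16.99.201:2379 \
  --cacert=/etc/etcd/certs/etcd-ca.crt \
  --cert=/etc/etcd/certs/etcd-client.crt \
  --key=/etc/etcd/certs/etcd-client.key \
  endpoint health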
To inspect or wipe what kubeadm wrote into etcd:
ETCDCTL_API=3 etcdctl --endpoints=http://172.16.99.201:2379  get / --prefix --keys-only
ETCDCTL_API=3 etcdctl --endpoints=http://172.16.99.201:2379  del / --prefix