Installing a Multi-Master, Highly Available Kubernetes Cluster with kubeadm

Table of Contents

Version Information

Node Information

Pre-Installation Preparation

1. Configure hosts resolution

2. Install Docker

3. Install kubeadm, kubelet and kubectl

4. Configure system parameters

5. Configure haproxy and keepalived static pod manifests

6. Configure kubelet

Configuring the master nodes

1. Configure the first master node

2. Configure the second master node

3. Configure the third master node

Configuring the worker nodes

Step-by-step breakdown of the kubeadm init process

Deploying kubernetes-dashboard

1. Generate the kubernetes-dashboard.yaml file

2. Deploy the dashboard

3. Create an admin user

4. Log in to the dashboard

Guestbook deployment example

Accessing the cluster via the Kubernetes API

1. Authenticating with a token

2. Authenticating with a client certificate



Version Information

OS: CentOS 7

Kubernetes: v1.11.2

Docker: v17.03.2-ce

 

Node Information

role    | ip            | hostname | components deployed
--------|---------------|----------|--------------------
master  | 192.168.6.131 | k8s-m1   | haproxy, keepalived, kubelet, etcd, kube-controller-manager, kube-scheduler, kube-proxy, flannel
master  | 192.168.6.132 | k8s-m2   | haproxy, keepalived, kubelet, etcd, kube-controller-manager, kube-scheduler, kube-proxy, flannel
master  | 192.168.6.133 | k8s-m3   | haproxy, keepalived, kubelet, etcd, kube-controller-manager, kube-scheduler, kube-proxy, flannel
node    | 192.168.6.144 | k8s-n1   | kubelet, kube-proxy, flannel
node    | 192.168.6.145 | k8s-n2   | kubelet, kube-proxy, flannel

In addition, the VIP (load balancer IP) is 192.168.6.130.

Of the deployed components, haproxy, keepalived, etcd, kube-apiserver, kube-controller-manager and kube-scheduler all run as static pods (the kubelet itself runs as a systemd service and manages them).

 

Pre-Installation Preparation

Before starting, make sure all nodes have working networking and can reach the public internet. Most operations are carried out on the k8s-m1 node, so set up passwordless SSH from k8s-m1 to the other nodes (a sketch follows below). All commands are run as the root user.
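A minimal sketch of the passwordless SSH setup, assuming the standard ssh-keygen/ssh-copy-id workflow and root logins on every node:

# On k8s-m1: generate a key pair and push the public key to every other node
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
for ip in 192.168.6.132 192.168.6.133 192.168.6.144 192.168.6.145; do
  ssh-copy-id root@$ip
done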

1. Configure hosts resolution

Run the following on all nodes.

Configure each node according to its own IP; for example, on node 192.168.6.131 run:

echo "192.168.6.131 k8s-m1" >> /etc/hosts

After configuring, reboot the system with reboot.
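Optionally, if you also want every node to resolve all the other nodes by hostname, a sketch that appends the full set of entries looks like this (not required by the steps below, which use IPs):

cat >> /etc/hosts << EOF
192.168.6.131 k8s-m1
192.168.6.132 k8s-m2
192.168.6.133 k8s-m3
192.168.6.144 k8s-n1
192.168.6.145 k8s-n2
EOF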

2. Install Docker

Run the following on all nodes.

Docker downloads are slow from mainland China, so install Docker from the DaoCloud mirror; see https://download.daocloud.io/Docker_Mirror/Docker/17.09.1-ce for details.

Kubernetes v1.11 supports Docker versions up to 17.03; with a newer Docker, kubeadm prints a warning during initialization.

yum install -y yum-utils
yum-config-manager \
    --add-repo \
    https://download.daocloud.io/docker/linux/centos/docker-ce.repo
yum install -y -q --setopt=obsoletes=0 docker-ce-17.03.2.ce* docker-ce-selinux-17.03.2.ce*
systemctl enable docker
systemctl start docker
systemctl status docker

3. Install kubeadm, kubelet and kubectl

Run the following on all nodes.

Install from the Alibaba Cloud mirror:

# Configure the repository
cat << EOF > /etc/yum.repos.d/kubernetes.repo 
[kubernetes] 
name=Kubernetes 
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64 
enabled=1 
gpgcheck=1 
repo_gpgcheck=1 
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg 
EOF

# Install
yum install -y kubelet kubeadm kubectl

Besides these three components, the repository also pulls in a few dependency packages.

After installation, run the following command:

systemctl cat kubelet

You can see that the kubelet has been registered as a systemd service, with two files generated: kubelet.service and the 10-kubeadm.conf drop-in.

4. Configure system parameters

Run the following on all nodes.

# Temporarily disable SELinux
setenforce 0
# Permanently disable SELinux: edit /etc/sysconfig/selinux and set SELINUX=disabled
vim /etc/sysconfig/selinux


# Temporarily disable swap
swapoff -a
# Permanently disable swap: comment out the swap line(s) in /etc/fstab
vim /etc/fstab
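# Alternatively, comment out the swap entry non-interactively (a sketch; assumes the
# fstab entry for swap contains the word "swap")
sed -i '/\sswap\s/s/^/#/' /etc/fstab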

# Stop the firewall
systemctl stop firewalld
# Disable it at boot
systemctl disable firewalld

# Enable forwarding
# Starting with Docker 1.13 the default firewall rules changed:
# the FORWARD chain in the iptables filter table is set to DROP,
# which breaks cross-node pod communication in a Kubernetes cluster

iptables -P FORWARD ACCEPT

# Configure forwarding-related kernel parameters; without them errors may occur later
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
EOF
sysctl --system

# Load the IPVS-related kernel modules
# They must be reloaded after a reboot (see the persistence sketch below)
modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
modprobe nf_conntrack_ipv4
lsmod | grep ip_vs

kubeadm checks the settings above during initialization; if they are missing it prints warnings or fails to initialize.
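The modprobe commands above do not survive a reboot. One way to load the modules automatically at boot is a systemd modules-load drop-in, sketched here (the file name ipvs.conf is an arbitrary choice):

cat > /etc/modules-load.d/ipvs.conf << EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF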

5. Configure haproxy and keepalived static pod manifests

Run the following on the master nodes.

# Set node environment variables; IPs and hostnames below are referenced through these variables
CP0_IP="192.168.6.131"
CP0_HOSTNAME="k8s-m1"
CP1_IP="192.168.6.132"
CP1_HOSTNAME="k8s-m2"
CP2_IP="192.168.6.133"
CP2_HOSTNAME="k8s-m3"
ADVERTISE_VIP="192.168.6.130"

# Pull the haproxy image
docker pull haproxy:1.7.8-alpine

# Generate the haproxy configuration file
mkdir /etc/haproxy
cat > /etc/haproxy/haproxy.cfg <<EOF
global
  log 127.0.0.1 local0 err
  maxconn 50000
  uid 99
  gid 99
  #daemon
  nbproc 1
  pidfile haproxy.pid

defaults
  mode http
  log 127.0.0.1 local0 err
  maxconn 50000
  retries 3
  timeout connect 5s
  timeout client 30s
  timeout server 30s
  timeout check 2s

listen admin_stats
  mode http
  bind 0.0.0.0:1080
  log 127.0.0.1 local0 err
  stats refresh 30s
  stats uri     /haproxy-status
  stats realm   Haproxy\ Statistics
  stats auth    will:will
  stats hide-version
  stats admin if TRUE

frontend k8s-https
  bind 0.0.0.0:8443
  mode tcp
  #maxconn 50000
  default_backend k8s-https

backend k8s-https
  mode tcp
  balance roundrobin
  server $CP0_HOSTNAME $CP0_IP:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3
  server $CP1_HOSTNAME $CP1_IP:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3
  server $CP2_HOSTNAME $CP2_IP:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3
EOF


# Create the haproxy static pod manifest
mkdir -p /etc/kubernetes/manifests
cat > /etc/kubernetes/manifests/haproxy.yaml << EOF
kind: Pod
apiVersion: v1
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  labels:
    component: haproxy
    tier: control-plane
  name: kube-haproxy
  namespace: kube-system
spec:
  hostNetwork: true
  priorityClassName: system-cluster-critical
  containers:
  - name: kube-haproxy
    image: haproxy:1.7.8-alpine
    resources:
      requests:
        cpu: 100m
    volumeMounts:
    - name: haproxy-cfg
      readOnly: true
      mountPath: /usr/local/etc/haproxy/haproxy.cfg
  volumes:
  - name: haproxy-cfg
    hostPath:
      path: /etc/haproxy/haproxy.cfg
      type: FileOrCreate
EOF



# Pull the keepalived image
docker pull osixia/keepalived:1.4.4

# Create the keepalived static pod manifest
cat > /etc/kubernetes/manifests/keepalived.yaml << EOF
kind: Pod
apiVersion: v1
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  labels:
    component: keepalived
    tier: control-plane
  name: kube-keepalived
  namespace: kube-system
spec:
  hostNetwork: true
  priorityClassName: system-cluster-critical
  containers:
  - name: kube-keepalived
    image: osixia/keepalived:1.4.4
    env:
    - name: KEEPALIVED_VIRTUAL_IPS
      value: "#PYTHON2BASH:['$ADVERTISE_VIP']"
    - name: KEEPALIVED_INTERFACE
      # This is the network interface name; check it with ifconfig
      value: ens33
    - name: KEEPALIVED_UNICAST_PEERS
      value: "#PYTHON2BASH:['$CP0_IP','$CP1_IP','$CP2_IP']"
    - name: KEEPALIVED_PASSWORD
      value: hello
    resources:
      requests:
        cpu: 500m
    securityContext:
      privileged: true
      capabilities:
        add:
        - NET_ADMIN
EOF

Note: set the KEEPALIVED_INTERFACE value in keepalived.yaml to the interface name reported by ifconfig on your host; a lookup sketch follows below.
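If you are not sure which interface carries the node IP, a quick way to look it up (assuming the iproute2 tools are installed):

# Shows which device would be used to reach another master, e.g. "... dev ens33 ..."
ip route get 192.168.6.132
# Or list all IPv4 addresses per interface and look for the node's 192.168.6.x address
ip -o -4 addr show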

6. Configure kubelet

Run the following on all nodes.

# Configure kubelet to use the Alibaba Cloud pause image; the official image is blocked in mainland China and kubelet cannot start without it
cat > /etc/sysconfig/kubelet <<EOF
KUBELET_EXTRA_ARGS="--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1"
EOF

# Reload the systemd configuration
systemctl daemon-reload
# Enable kubelet at boot, but do not start it yet
systemctl enable kubelet

Configuring the master nodes

1. Configure the first master node

# Set node environment variables; IPs and hostnames below are referenced through these variables
CP0_IP="192.168.6.131"
CP0_HOSTNAME="k8s-m1"
CP1_IP="192.168.6.132"
CP1_HOSTNAME="k8s-m2"
CP2_IP="192.168.6.133"
CP2_HOSTNAME="k8s-m3"
ADVERTISE_VIP="192.168.6.130"

# Generate the kubeadm configuration file
cat > kubeadm-master.config <<EOF
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
# Kubernetes version
kubernetesVersion: v1.11.2
# Use the Alibaba Cloud image mirror
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers

apiServerCertSANs:
- "$CP0_HOSTNAME"
- "$CP0_IP"
- "$ADVERTISE_VIP"
- "127.0.0.1"

api:
  advertiseAddress: $CP0_IP
  controlPlaneEndpoint: $ADVERTISE_VIP:8443

etcd:
  local:
    extraArgs:
      listen-client-urls: "https://127.0.0.1:2379,https://$CP0_IP:2379"
      advertise-client-urls: "https://$CP0_IP:2379"
      listen-peer-urls: "https://$CP0_IP:2380"
      initial-advertise-peer-urls: "https://$CP0_IP:2380"
      initial-cluster: "$CP0_HOSTNAME=https://$CP0_IP:2380"
    serverCertSANs:
      - $CP0_HOSTNAME
      - $CP0_IP
    peerCertSANs:
      - $CP0_HOSTNAME
      - $CP0_IP

controllerManagerExtraArgs:
  node-monitor-grace-period: 10s
  pod-eviction-timeout: 10s

networking:
  podSubnet: 10.244.0.0/16
  
kubeProxy:
  config:
    # mode: ipvs
    mode: iptables
EOF

# Pull the images in advance
kubeadm config images pull --config kubeadm-master.config

The command output lists the images that were pulled.

Run the following command to initialize:

# Initialize
kubeadm init --config kubeadm-master.config

Wait for the command to finish. Under normal circumstances a success message is printed, the kubelet process starts, and the master components come up in Docker as static pods. Run the three commands from the output (shown below for reference) to set up the kubectl configuration file, and save the kubeadm join command printed at the end for adding nodes later.
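For reference, the three kubectl setup commands that kubeadm prints are normally:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config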

At this point, checking the node status shows the node is NotReady:

kubectl get nodes

Checking the pods shows that the coredns pods are not running yet:

# --all-namespaces lists pods in all namespaces
kubectl get pods --all-namespaces

Check the kubelet logs:

# View the systemd service logs; -u selects the service, -f follows new output
journalctl -fu kubelet

The log repeatedly prints "No networks found in /etc/cni/net.d" because no network plugin is installed yet; install flannel now:

# Generate the flannel manifest; flannel runs as a DaemonSet
cat > kube-flannel.yaml << EOF
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: registry.cn-shanghai.aliyuncs.com/gcr-k8s/flannel:v0.10.0-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: registry.cn-shanghai.aliyuncs.com/gcr-k8s/flannel:v0.10.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        #- --iface=eth1
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
EOF

Create the flannel DaemonSet:

kubectl apply -f kube-flannel.yaml

After it is created, run the commands above again to check:

The k8s-m1 node is now Ready.

flannel and coredns are now running, and the kubelet no longer prints the message above.

You can also run docker ps to see the containers currently running:

Kubernetes creates a pause container for every pod; it is the basis for sharing Linux namespaces, acts as PID 1 for the pod, and reaps zombie processes. See https://blog.csdn.net/M2l0ZgSsVc7r69eFdTj/article/details/78238863 for details.

Check the keepalived logs:

# kube-keepalived-k8s-m1 is the name of the keepalived pod (as shown by kubectl get pods); -n selects the namespace
kubectl logs kube-keepalived-k8s-m1 -n kube-system

The log shows that this node has become the keepalived MASTER node.

Open http://192.168.6.130:1080/haproxy-status in a browser to view the haproxy status page.

Copy the certificate files to the other master nodes:

# Pack the CA-related files and upload them to the other master nodes
cd /etc/kubernetes && tar cvzf k8s-key.tgz pki/ca.* pki/sa.* pki/front-proxy-ca.* pki/etcd/ca.*
scp /etc/kubernetes/k8s-key.tgz $CP1_IP:/etc/kubernetes
ssh $CP1_IP 'tar xf /etc/kubernetes/k8s-key.tgz -C /etc/kubernetes/'
scp /etc/kubernetes/k8s-key.tgz $CP2_IP:/etc/kubernetes
ssh $CP2_IP 'tar xf /etc/kubernetes/k8s-key.tgz -C /etc/kubernetes/'

2. Configure the second master node

# Set node environment variables; IPs and hostnames below are referenced through these variables
CP0_IP="192.168.6.131"
CP0_HOSTNAME="k8s-m1"
CP1_IP="192.168.6.132"
CP1_HOSTNAME="k8s-m2"
CP2_IP="192.168.6.133"
CP2_HOSTNAME="k8s-m3"
ADVERTISE_VIP="192.168.6.130"


# Generate the kubeadm configuration file; apart from the IPs, the main difference from the first master is the etcd member-join configuration
cat >kubeadm-master.config<<EOF
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.2
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers

apiServerCertSANs:
- "$CP1_HOSTNAME"
- "$CP1_IP"
- "$ADVERTISE_VIP"
- "127.0.0.1"

api:
  advertiseAddress: $CP1_IP
  controlPlaneEndpoint: $ADVERTISE_VIP:8443

etcd:
  local:
    extraArgs:
      listen-client-urls: "https://127.0.0.1:2379,https://$CP1_IP:2379"
      advertise-client-urls: "https://$CP1_IP:2379"
      listen-peer-urls: "https://$CP1_IP:2380"
      initial-advertise-peer-urls: "https://$CP1_IP:2380"
      initial-cluster: "$CP0_HOSTNAME=https://$CP0_IP:2380,$CP1_HOSTNAME=https://$CP1_IP:2380"
      initial-cluster-state: existing
    serverCertSANs:
      - $CP1_HOSTNAME
      - $CP1_IP
    peerCertSANs:
      - $CP1_HOSTNAME
      - $CP1_IP

controllerManagerExtraArgs:
  node-monitor-grace-period: 10s
  pod-eviction-timeout: 10s

networking:
  podSubnet: 10.244.0.0/16
  
kubeProxy:
  config:
    # mode: ipvs
    mode: iptables
EOF

# Pull the images in advance
kubeadm config images pull --config kubeadm-master.config

# Configure kubelet
# Generate certificates
kubeadm alpha phase certs all --config kubeadm-master.config
# Generate the kubelet configuration files
kubeadm alpha phase kubelet config write-to-disk --config kubeadm-master.config
kubeadm alpha phase kubelet write-env-file --config kubeadm-master.config
kubeadm alpha phase kubeconfig kubelet --config kubeadm-master.config
# Start kubelet
systemctl restart kubelet

# Deploy the control plane, i.e. kube-apiserver, kube-controller-manager and kube-scheduler
# Generate the control-plane kubeconfig files
kubeadm alpha phase kubeconfig all --config kubeadm-master.config

# Set up the default kubectl config file
mkdir ~/.kube
cp /etc/kubernetes/admin.conf ~/.kube/config

At this point you can check the node status:

k8s-m2 now shows up as a (worker) node.

Check the pods running on this node:

Besides the haproxy and keepalived pods we configured ourselves, the flannel DaemonSet pod created on the first master has been started automatically, and a kube-proxy pod runs on every node.

# Add this etcd member to the cluster
# Add a member to the etcd cluster. After this, kubectl commands fail with "Unable to connect to the server: unexpected EOF"; this is caused by how etcd behaves while the second member is being added, and it recovers as soon as the new etcd instance starts
kubectl exec -n kube-system etcd-${CP0_HOSTNAME} -- etcdctl --ca-file /etc/kubernetes/pki/etcd/ca.crt --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --endpoints=https://${CP0_IP}:2379 member add ${CP1_HOSTNAME} https://${CP1_IP}:2380
# Deploy the etcd static pod
kubeadm alpha phase etcd local --config kubeadm-master.config

# List the etcd members
kubectl exec -n kube-system etcd-${CP0_HOSTNAME} -- etcdctl --ca-file /etc/kubernetes/pki/etcd/ca.crt --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --endpoints=https://${CP0_IP}:2379 member list


# Deploy the control-plane static pod manifests; the kubelet starts the components automatically
kubeadm alpha phase controlplane all --config kubeadm-master.config

Now check the nodes and pods:

All components on the k8s-m2 node are running.

# Mark this node as a master; this only adds the master label and taint to the node
kubeadm alpha phase mark-master --config kubeadm-master.config

Check the nodes again; k8s-m2 is now a master node.

3. Configure the third master node

The steps are essentially the same as for the second master node.

# Set node environment variables; IPs and hostnames below are referenced through these variables
CP0_IP="192.168.6.131"
CP0_HOSTNAME="k8s-m1"
CP1_IP="192.168.6.132"
CP1_HOSTNAME="k8s-m2"
CP2_IP="192.168.6.133"
CP2_HOSTNAME="k8s-m3"
ADVERTISE_VIP="192.168.6.130"


# Generate the kubeadm configuration file; apart from the IPs, the main difference from the first master is the etcd member-join configuration
cat >kubeadm-master.config<<EOF
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.2
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers

apiServerCertSANs:
- "$CP2_HOSTNAME"
- "$CP2_IP"
- "$ADVERTISE_VIP"
- "127.0.0.1"

api:
  advertiseAddress: $CP2_IP
  controlPlaneEndpoint: $ADVERTISE_VIP:8443

etcd:
  local:
    extraArgs:
      listen-client-urls: "https://127.0.0.1:2379,https://$CP2_IP:2379"
      advertise-client-urls: "https://$CP2_IP:2379"
      listen-peer-urls: "https://$CP2_IP:2380"
      initial-advertise-peer-urls: "https://$CP2_IP:2380"
      initial-cluster: "$CP0_HOSTNAME=https://$CP0_IP:2380,$CP1_HOSTNAME=https://$CP1_IP:2380,$CP2_HOSTNAME=https://$CP2_IP:2380"
      initial-cluster-state: existing
    serverCertSANs:
      - $CP2_HOSTNAME
      - $CP2_IP
    peerCertSANs:
      - $CP2_HOSTNAME
      - $CP2_IP

controllerManagerExtraArgs:
  node-monitor-grace-period: 10s
  pod-eviction-timeout: 10s

networking:
  podSubnet: 10.244.0.0/16
  
kubeProxy:
  config:
    # mode: ipvs
    mode: iptables
EOF

# Pull the images in advance
kubeadm config images pull --config kubeadm-master.config

# Configure kubelet
# Generate certificates
kubeadm alpha phase certs all --config kubeadm-master.config
# Generate the kubelet configuration files
kubeadm alpha phase kubelet config write-to-disk --config kubeadm-master.config
kubeadm alpha phase kubelet write-env-file --config kubeadm-master.config
kubeadm alpha phase kubeconfig kubelet --config kubeadm-master.config
# Start kubelet
systemctl restart kubelet

# Deploy the control plane, i.e. kube-apiserver, kube-controller-manager and kube-scheduler
# Generate the control-plane kubeconfig files
kubeadm alpha phase kubeconfig all --config kubeadm-master.config

# Set up the default kubectl config file
mkdir ~/.kube
cp /etc/kubernetes/admin.conf ~/.kube/config

# Add this etcd member to the cluster
# Add a member to the etcd cluster; this time kubectl commands keep working, because the existing two-member cluster retains quorum while the third member joins
kubectl exec -n kube-system etcd-${CP0_HOSTNAME} -- etcdctl --ca-file /etc/kubernetes/pki/etcd/ca.crt --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --endpoints=https://${CP0_IP}:2379 member add ${CP2_HOSTNAME} https://${CP2_IP}:2380
# Deploy the etcd static pod
kubeadm alpha phase etcd local --config kubeadm-master.config

# List the etcd members
kubectl exec -n kube-system etcd-${CP0_HOSTNAME} -- etcdctl --ca-file /etc/kubernetes/pki/etcd/ca.crt --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --endpoints=https://${CP0_IP}:2379 member list


# Deploy the control-plane static pod manifests; the kubelet starts the components automatically
kubeadm alpha phase controlplane all --config kubeadm-master.config

# Mark this node as a master; this only adds the master label and taint to the node
kubeadm alpha phase mark-master --config kubeadm-master.config

Check the nodes and pods:

All master nodes are now up and running.

Configuring the worker nodes

On k8s-n1 and k8s-n2, run the kubeadm join command saved from the kubeadm init output on k8s-m1:

kubeadm join 192.168.6.130:8443 --token fsncx5.4i778tb8oz6qog3q --discovery-token-ca-cert-hash sha256:a483bfe73b21728b56edbc43444dce00d51bfbcb988ca371d98b45efd04d10fd

The token in this command is valid for 24 hours by default. If it has expired, or if you have lost the kubeadm join command, create a new token with:

kubeadm alpha phase bootstrap-token create --config kubeadm-master.config

List the current tokens on a master node:

kubeadm token list
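If you would rather regenerate the complete join command in one step, kubeadm can also print it directly; this flag should be available in this release, but verify with kubeadm token create --help:

kubeadm token create --print-join-command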

Finally, check the nodes and pods on a master node.

Step-by-step breakdown of the kubeadm init process

The kubeadm init command is actually composed of a series of atomic phases; see http://docs.kubernetes.org.cn/829.html for details.

# Configure kubelet
# Generate certificates
kubeadm alpha phase certs all --config kubeadm-master.config
# Generate the kubelet configuration files
kubeadm alpha phase kubelet config write-to-disk --config kubeadm-master.config
kubeadm alpha phase kubelet write-env-file --config kubeadm-master.config
kubeadm alpha phase kubeconfig kubelet --config kubeadm-master.config
# Start kubelet
systemctl restart kubelet
 
# Deploy the control plane, i.e. kube-apiserver, kube-controller-manager and kube-scheduler
# Generate the control-plane kubeconfig files
kubeadm alpha phase kubeconfig all --config kubeadm-master.config
 
# Set up the default kubectl config file
mkdir ~/.kube
cp /etc/kubernetes/admin.conf ~/.kube/config
 
# Add this etcd member to the cluster
# Add a member to the etcd cluster; kubectl commands keep working at this point
kubectl exec -n kube-system etcd-${CP0_HOSTNAME} -- etcdctl --ca-file /etc/kubernetes/pki/etcd/ca.crt --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --endpoints=https://${CP0_IP}:2379 member add ${CP2_HOSTNAME} https://${CP2_IP}:2380
# Deploy the etcd static pod
kubeadm alpha phase etcd local --config kubeadm-master.config
 
# List the etcd members
kubectl exec -n kube-system etcd-${CP0_HOSTNAME} -- etcdctl --ca-file /etc/kubernetes/pki/etcd/ca.crt --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --endpoints=https://${CP0_IP}:2379 member list
 
# Deploy the control-plane static pod manifests; the kubelet starts the components automatically
kubeadm alpha phase controlplane all --config kubeadm-master.config

# Generate the join command
# Upload the configuration to a ConfigMap
kubeadm alpha phase upload-config --config kubeadm-master.config
kubeadm alpha phase kubelet config upload --config kubeadm-master.config
# Configure TLS bootstrapping for new nodes
kubeadm alpha phase bootstrap-token all --config kubeadm-master.config

# Create a token separately
kubeadm alpha phase bootstrap-token create --config kubeadm-master.config

# Install the add-ons separately: the in-cluster CoreDNS service and the kube-proxy component, via the API server
kubeadm alpha phase addon all --config kubeadm-master.config

# Mark this node as a master; this only adds the master label and taint to the node
kubeadm alpha phase mark-master --config kubeadm-master.config



# Clean up Docker containers and state, useful when re-running experiments
docker ps -a | awk 'NR!=1{print $1}' | xargs docker rm -f
umount /var/lib/kubelet/pods/*/volumes/kubernetes.io~secret/*
rm -rf /var/lib/kubelet/
rm -rf /var/lib/etcd/
rm -rf /var/log/pods/
# Regenerate the configuration files deleted above
kubeadm alpha phase kubelet config write-to-disk --config kubeadm-master.config
kubeadm alpha phase kubelet write-env-file --config kubeadm-master.config

# The client-certificate-data field in the kubeconfig is the base64-encoded certificate text; decode it back into certificate form
cat /etc/kubernetes/admin.conf | grep client-certificate-data | awk -F ': ' '{print $2}' | base64 -d > /etc/kubernetes/pki/client.crt

Deploying kubernetes-dashboard

Run the following on a master node.

1. Generate the kubernetes-dashboard.yaml file

It can also be downloaded from:

https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

# Generate kubernetes-dashboard.yaml; the image is switched to the Alibaba Cloud mirror, and the Service type is set to NodePort with nodePort 30001 so the dashboard is reachable from outside the cluster
cat > kubernetes-dashboard.yaml << EOF
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# ------------------- Dashboard Secret ------------------- #

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque

---
# ------------------- Dashboard Service Account ------------------- #

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Role & Role Binding ------------------- #

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Deployment ------------------- #

kind: Deployment
apiVersion: apps/v1beta2
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.0
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=https://192.168.6.130:8443
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule

---
# ------------------- Dashboard Service ------------------- #

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard

EOF

2. Deploy the dashboard

kubectl create -f kubernetes-dashboard.yaml

Check the pod and Service status:
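For example, the dashboard pod and its NodePort Service can be checked with standard kubectl queries:

kubectl get pods -n kube-system -l k8s-app=kubernetes-dashboard
kubectl get svc kubernetes-dashboard -n kube-system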

3. Create an admin user

# Generate the manifest
cat > admin-user.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding 
metadata: 
  name: admin-user
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
EOF


# Create the user
kubectl create -f admin-user.yaml

4. Log in to the dashboard

Open https://192.168.6.130:30001 in a browser. Any node's IP plus port 30001 also works, because a Service of type NodePort listens on the specified port on every node. Add a certificate exception in the browser to reach the dashboard.

List the secrets:

# -n selects the namespace
kubectl get secrets -n kube-system

The first one is the admin-user secret; show its details (the name suffix is random, so use the name from your own listing):

kubectl describe secret admin-user-token-fxbxt -n kube-system
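Since the secret name suffix is random, a sketch of a one-liner that looks it up automatically:

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') | grep '^token'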

On the dashboard login page, choose "Token", paste the token value into the token input box, and log in.

Guestbook deployment example

Run the following on a master node.

Deployment architecture diagram

This example uses three Docker images, available from https://hub.docker.com/u/kubeguide/:

  • redis-master: the Redis service that the front-end web application writes guestbook entries to; it already contains one entry with the content "Hello World!".
  • guestbook-redis-slave: the Redis service that the front-end web application reads entries from, kept in sync with the redis-master data.
  • guestbook-php-frontend: a PHP web service that displays the entries on a web page and provides a text box for visitors to add new entries.

Kubernetes deployment architecture diagram

Create the redis-master RC and Service

# First create a test namespace; the system will be deployed into it
kubectl create namespace test

# Generate the deployment manifest, using the test namespace
cat > redis-master.yaml << EOF
# -------------------redis-master ReplicationController ------------------- #

kind: ReplicationController
apiVersion: v1
metadata:
  name: redis-master
  namespace: test
  labels:
    name: redis-master
spec:
  replicas: 1
  selector:
    name: redis-master
  template:
    metadata:
      labels:
        name: redis-master
    spec:
      containers: 
      - name: master
        image: kubeguide/redis-master
        ports:
        - containerPort: 6379


---
# ------------------- redis-master Service ------------------- #

kind: Service
apiVersion: v1
metadata:
  name: redis-master
  namespace: test
  labels:
    name: redis-master
spec:
  ports:
    - port: 6379
      targetPort: 6379
  selector:
    name: redis-master
EOF


# Create the RC and Service
kubectl create -f redis-master.yaml

Check the status:
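For example, a quick way to check everything in the test namespace (a standard kubectl query, shown here for convenience):

kubectl get rc,pods,svc -n test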

Because the virtual IP is assigned automatically by Kubernetes when a Service is created, other pods cannot know a Service's cluster IP in advance. To solve this, Kubernetes injects a set of Service-related environment variables into the containers of every pod, recording the mapping from service name to virtual IP. Taking the redis-master service as an example, the following two variables appear in the container environment:

REDIS_MASTER_SERVICE_HOST=10.110.69.237

REDIS_MASTER_SERVICE_PORT=6379

The applications in the redis-slave and frontend pods can therefore obtain the virtual IP and port of the redis-master service from these environment variables; a quick way to verify this is sketched below.
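Once the redis-slave or frontend pods created below are running, one way to confirm that the variables were injected (the pod name is a placeholder; use a real name from kubectl get pods -n test):

kubectl exec -n test redis-slave-xxxxx -- env | grep REDIS_MASTER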

Create the redis-slave RC and Service

# Generate the deployment manifest
cat > redis-slave.yaml << EOF
# -------------------redis-slave ReplicationController ------------------- #

kind: ReplicationController
apiVersion: v1
metadata:
  name: redis-slave
  namespace: test
  labels:
    name: redis-slave
spec:
  replicas: 2
  selector:
    name: redis-slave
  template:
    metadata:
      labels:
        name: redis-slave
    spec:
      containers: 
      - name: slave
        image: kubeguide/guestbook-redis-slave
        env:
        - name: GET_HOSTS_FROM
          value: env
        ports:
        - containerPort: 6379


---
# ------------------- redis-slave Service ------------------- #

kind: Service
apiVersion: v1
metadata:
  name: redis-slave
  namespace: test
  labels:
    name: redis-slave
spec:
  ports:
    - port: 6379
  selector:
    name: redis-slave
EOF


# Create the RC and Service
kubectl create -f redis-slave.yaml

In the container spec, the environment variable GET_HOSTS_FROM=env tells the application to obtain the redis-master service address from environment variables.

Check the status (same command as above):

Create the frontend RC and Service

# Generate the deployment manifest
cat > frontend.yaml << EOF
# -------------------frontend ReplicationController ------------------- #

kind: ReplicationController
apiVersion: v1
metadata:
  name: frontend
  namespace: test
  labels:
    name: frontend
spec:
  replicas: 3
  selector:
    name: frontend
  template:
    metadata:
      labels:
        name: frontend
    spec:
      containers: 
      - name: frontend
        image: kubeguide/guestbook-php-frontend
        env:
        - name: GET_HOSTS_FROM
          value: env
        ports:
        - containerPort: 80


---
# ------------------- frontend Service ------------------- #

kind: Service
apiVersion: v1
metadata:
  name: frontend
  namespace: test
  labels:
    name: frontend
spec:
  type: NodePort
  ports:
    - port: 80
      nodePort: 30002
  selector:
    name: frontend
EOF


# Create the RC and Service
kubectl create -f frontend.yaml

Access the frontend page from a browser

Open http://192.168.6.130:30002 in a browser.

Accessing the cluster via the Kubernetes API

kubeadm disables insecure HTTP access to the API server by default; HTTPS access requires client authentication.

The following steps are performed with Postman.

1. Authenticating with a token

In Postman, send a GET request to https://192.168.6.130:8443/api/v1/namespaces/test/pods to list the pods in the test namespace. For Authorization choose "Bearer Token", paste the token used earlier to log in to kubernetes-dashboard into the input box, and send the request, as shown in the figure. An equivalent curl request is sketched below.
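The same request can be made with curl, which makes the mechanics explicit (a sketch; -k skips server certificate verification, and TOKEN is the token copied above):

TOKEN=<paste the admin-user token here>
curl -k -H "Authorization: Bearer $TOKEN" https://192.168.6.130:8443/api/v1/namespaces/test/pods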

2. Authenticating with a client certificate

The kubectl config file embeds the client certificate data, base64-encoded, as shown in the figure:

client-certificate-data: the client certificate data

client-key-data: the client private key data

Decode the text back into certificate files:

# The client-certificate-data field in the kubeconfig is the base64-encoded certificate text; decode it back into certificate form
mkdir -p /opt/kubernetes
cat /etc/kubernetes/admin.conf | grep client-certificate-data | awk -F ': ' '{print $2}' | base64 -d > /opt/kubernetes/client.crt
cat /etc/kubernetes/admin.conf | grep client-key-data | awk -F ': ' '{print $2}' | base64 -d > /opt/kubernetes/client.key
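For comparison, the decoded certificate and key can also be used directly with curl (a sketch; --cacert points at the cluster CA so the server certificate is verified):

curl --cacert /etc/kubernetes/pki/ca.crt \
     --cert /opt/kubernetes/client.crt \
     --key /opt/kubernetes/client.key \
     https://192.168.6.130:8443/api/v1/namespaces/test/pods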

Add the certificate and key files to Postman: File --> Settings --> Certificates --> Add Certificate, fill in the cluster host and port, and add the files generated above, as shown in the figure:

Once added, subsequent API requests succeed without any additional authentication information, as shown in the figure:

 

 

 

 

References:

1. https://www.kubernetes.org.cn/4256.html

2. http://docs.kubernetes.org.cn/829.html

3. Kubernetes权威指南 (The Definitive Guide to Kubernetes), 2nd edition

 

 
