Lesson 3 - Production-Grade Kubernetes Practice: Binary Installation with Containerd

tags:

  • k8s
  • imooc

categories:

  • binary installation
  • kubernetes-the-hard-way
  • containerd

Section 1: Basic Environment Preparation

1.1 Introduction to kubernetes-the-hard-way

  1. Built for learning: we follow the flow of the well-known GitHub project kubernetes-the-hard-way and deploy everything by hand, to gain a deep understanding of each cluster component.
  2. Production-grade high availability: on top of kubernetes-the-hard-way we add an HA scheme for each component, to meet the requirements of a production cluster.
  3. 99-year certificates, so certificate expiry is no longer a worry.
  4. Installation does not depend on third-party tools such as Ansible.
  5. High availability does not rely on haproxy or keepalived; it uses a local proxy instead, which is simple and elegant.
  6. GitHub repo: https://github.com/kelseyhightower/kubernetes-the-hard-way

1.2 Server System Setup

  1. We use five CentOS 7.8 virtual machines: three master nodes and two worker nodes, as listed below:
    | OS | IP Address | Role | CPU | Memory | Hostname |
    | :------: | :--------: | :-------: | :-----: | :---------: | :-----: |
    | centos-7.8 | 192.168.242.149 | master | >=2 | >=2G | k8s-master01 |
    | centos-7.8 | 192.168.242.156 | master | >=2 | >=2G | k8s-master02 |
    | centos-7.8 | 192.168.242.157 | master | >=2 | >=2G | k8s-master03 |
    | centos-7.8 | 192.168.242.158 | worker | >=2 | >=2G | k8s-node01 |
    | centos-7.8 | 192.168.242.159 | worker | >=2 | >=2G | k8s-node02 |

  2. Set the hostname. Every node's hostname must be unique, and all nodes must be able to reach one another by hostname.

# show the current hostname
hostname
# change the hostname
hostnamectl set-hostname <your_hostname>
# add host entries (format: <node-ip> <node-hostname>) so that all nodes can
# reach each other by hostname; edit /etc/hosts manually or append in one shot
vi /etc/hosts
echo -e "\n192.168.242.149 k8s-master01\n192.168.242.156 k8s-master02\n192.168.242.157 k8s-master03\n192.168.242.158 k8s-node01\n192.168.242.159 k8s-node02" >> /etc/hosts
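The host entries above can also be generated from two parallel arrays so the node list lives in one place; a small sketch (the array values mirror the node table in 1.2):

```shell
# build the /etc/hosts entries from parallel name/IP arrays, so adding
# a node means editing a single place
NODE_NAMES=(k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02)
NODE_IPS=(192.168.242.149 192.168.242.156 192.168.242.157 192.168.242.158 192.168.242.159)
HOSTS_BLOCK=""
for ((i = 0; i < ${#NODE_NAMES[@]}; i++)); do
  HOSTS_BLOCK+="${NODE_IPS[$i]} ${NODE_NAMES[$i]}"$'\n'
done
printf '%s' "$HOSTS_BLOCK"
# on each node, append the generated block:
# printf '%s' "$HOSTS_BLOCK" >> /etc/hosts
```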
  3. Install dependency packages
# update yum
yum update
# install the dependencies
yum install -y socat conntrack ipvsadm ipset jq sysstat curl iptables libseccomp net-tools git vim wget
  4. Turn off the firewall and swap, and reset iptables
# stop and disable the firewall
systemctl stop firewalld && systemctl disable firewalld
# reset iptables
iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && iptables -P FORWARD ACCEPT
# disable swap
# the sed below comments out the swap line in /etc/fstab, e.g. "/dev/mapper/centos_master-swap  swap  swap  defaults  0 0"
swapoff -a && sed -i 's/.*swap.*/#&/' /etc/fstab && free -h
# disable selinux
setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
# check selinux status
sestatus
# stop dnsmasq (otherwise containers may fail to resolve domain names)
service dnsmasq stop && systemctl disable dnsmasq
  5. Set kernel parameters for Kubernetes
# kernel parameters for Kubernetes
cat > /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
vm.overcommit_memory = 0
EOF
# apply the file
sysctl -p /etc/sysctl.d/kubernetes.conf
# sysctl -p may fail with errors like the following:
# sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory
# sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
# fix: run modprobe br_netfilter, then re-run sysctl -p /etc/sysctl.d/kubernetes.conf
modprobe br_netfilter
# verify
ls /proc/sys/net/bridge
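Note that modprobe alone does not survive a reboot. If you want br_netfilter loaded automatically on boot, one option (an assumption, not part of the original steps) is a modules-load.d entry:

```shell
# optional: load br_netfilter automatically at boot
# (modprobe by itself does not persist across restarts)
cat > /etc/modules-load.d/kubernetes.conf <<EOF
br_netfilter
EOF
```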

1.3 Configure Passwordless SSH

  1. To make copying files around easier, pick one staging node (any node, inside or outside the cluster) and set up passwordless SSH from it to all other nodes.
# check whether an rsa public key already exists
cat ~/.ssh/id_rsa.pub
# if not, generate a new one
ssh-keygen -t rsa
# copy the key to each node in turn
ssh-copy-id root@k8s-master01

1.4 Prepare the Kubernetes Packages

  1. Download: grab the packages on any one node, then copy them to all nodes.
    • master components: kube-apiserver, kube-controller-manager, kube-scheduler, kubectl
    • worker components: kubelet, kube-proxy
# version combinations used in this course:
#   k8s 1.18.5 + etcd 3.2.18 + containerd 1.4.3
#   k8s 1.20.2 + etcd 3.4.10 + containerd 1.4.3
export VERSION=v1.18.5

# download the master components
wget https://storage.googleapis.com/kubernetes-release/release/${VERSION}/bin/linux/amd64/kube-apiserver
wget https://storage.googleapis.com/kubernetes-release/release/${VERSION}/bin/linux/amd64/kube-controller-manager
wget https://storage.googleapis.com/kubernetes-release/release/${VERSION}/bin/linux/amd64/kube-scheduler
wget https://storage.googleapis.com/kubernetes-release/release/${VERSION}/bin/linux/amd64/kubectl

# download the worker components
wget https://storage.googleapis.com/kubernetes-release/release/${VERSION}/bin/linux/amd64/kube-proxy
wget https://storage.googleapis.com/kubernetes-release/release/${VERSION}/bin/linux/amd64/kubelet

# download etcd (v3.2.18 for k8s 1.18.5, v3.4.10 for k8s 1.20.2)
wget https://github.com/etcd-io/etcd/releases/download/v3.2.18/etcd-v3.2.18-linux-amd64.tar.gz
tar -xvf etcd-v3.2.18-linux-amd64.tar.gz
mv etcd-v3.2.18-linux-amd64/etcd* .
rm -rf etcd-v3.2.18-linux-amd64*

# make all the binaries executable
chmod +x kube*
  2. Distribution: once the downloads are done, scp each node the files it needs.
# distribute the master components to the master nodes
MASTERS=(k8s-master01 k8s-master02 k8s-master03)
for instance in ${MASTERS[@]}; do
  scp kube-apiserver kube-controller-manager kube-scheduler kubectl root@${instance}:/usr/local/bin/
done

# distribute the worker components to the worker nodes
WORKERS=(k8s-node01 k8s-node02)
for instance in ${WORKERS[@]}; do
  scp kubelet kube-proxy root@${instance}:/usr/local/bin/
done

# distribute the etcd components to the etcd nodes
ETCDS=(k8s-master01 k8s-master02 k8s-master03)
for instance in ${ETCDS[@]}; do
  scp etcd etcdctl root@${instance}:/usr/local/bin/
done

Section 2: Generating Certificates

2.1 Install cfssl

  1. cfssl is a very handy CA tool; we use it to generate certificates and keys. Installation is simple:
# installing on master01 only is enough
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -O /usr/local/bin/cfssl
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -O /usr/local/bin/cfssljson

# make them executable
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson

# verify
cfssl version

2.2 Root Certificate (CA)

  1. The root certificate is shared by all nodes in the cluster: we create a single CA certificate, and every certificate created afterwards is signed by it. On any node that can SSH to the others without a password, create a dedicated certificate directory, e.g. mkdir pki && cd pki
  2. CA configuration files
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "876000h"
    },
    "profiles": {
      "kubernetes": {
        "usages": ["signing", "key encipherment", "server auth", "client auth"],
        "expiry": "876000h"
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
  "CN": "Kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "Kubernetes",
      "OU": "CA",
      "ST": "Oregon"
    }
  ]
}
EOF
  3. Generate the CA certificate and key
# generate the certificate and private key
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
# afterwards you should see the files below (what we ultimately want are ca-key.pem and ca.pem: one key, one certificate)
ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem
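To sanity-check what was generated, openssl can print a certificate's subject and validity (a quick check, not part of the original flow; the same command works on every .pem produced later):

```shell
# print the subject and expiry date of the CA certificate, if present
if [ -f ca.pem ]; then
  openssl x509 -in ca.pem -noout -subject -enddate
fi
```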

2.3 admin Client Certificate

  1. admin client certificate configuration file
cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "seven"
    }
  ]
}
EOF
  2. Generate the admin client certificate and private key
# signed by the root CA via -ca-key=ca-key.pem
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  admin-csr.json | cfssljson -bare admin

2.4 kubelet Client Certificates

  1. Kubernetes uses a dedicated authorization mode called Node Authorizer to authorize API requests made by kubelets.
  2. A kubelet identifies itself with credentials in the system:nodes group and a username of system:node:<nodeName>. Because the name depends on nodeName, every node's certificate is different, so next we generate one for each worker node.
  3. Generate the kubelet client certificates and private keys
# set your worker node list
WORKERS=(k8s-node01 k8s-node02)
WORKER_IPS=(192.168.242.158 192.168.242.159)
# generate certificate configs for all worker nodes
for ((i=0;i<${#WORKERS[@]};i++)); do
cat > ${WORKERS[$i]}-csr.json <<EOF
{
  "CN": "system:node:${WORKERS[$i]}",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "O": "system:nodes",
      "OU": "seven",
      "ST": "Beijing"
    }
  ]
}
EOF
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -hostname=${WORKERS[$i]},${WORKER_IPS[$i]} \
  -profile=kubernetes \
  ${WORKERS[$i]}-csr.json | cfssljson -bare ${WORKERS[$i]}
done

2.5 kube-controller-manager Client Certificate

  1. kube-controller-manager client certificate configuration file
cat > kube-controller-manager-csr.json <<EOF
{
    "CN": "system:kube-controller-manager",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "BeiJing",
        "L": "BeiJing",
        "O": "system:kube-controller-manager",
        "OU": "seven"
      }
    ]
}
EOF
  2. Generate the kube-controller-manager client certificate
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

2.6 kube-proxy Client Certificate

  1. kube-proxy client certificate configuration file
cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "seven"
    }
  ]
}
EOF
  2. Generate the kube-proxy client certificate
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-proxy-csr.json | cfssljson -bare kube-proxy

2.7 kube-scheduler Client Certificate

  1. kube-scheduler client certificate configuration file
cat > kube-scheduler-csr.json <<EOF
{
    "CN": "system:kube-scheduler",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "BeiJing",
        "L": "BeiJing",
        "O": "system:kube-scheduler",
        "OU": "seven"
      }
    ]
}
EOF
  2. Generate the kube-scheduler client certificate
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-scheduler-csr.json | cfssljson -bare kube-scheduler

2.8 kube-apiserver Server Certificate

  1. kube-apiserver server certificate configuration file
cat > kubernetes-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "seven"
    }
  ]
}
EOF
  2. Generate the kube-apiserver server certificate
  3. A server certificate differs slightly from a client one: clients reach the server by a name or an IP, so the certificate must contain every name and IP clients will use, for verification on the client side
# the apiserver service IP (usually the first IP of the service CIDR)
KUBERNETES_SVC_IP=10.233.0.1
# internal IPs of all masters, comma separated (in the cloud, add the masters' public IPs to allow access over public IPs)
MASTER_IPS=192.168.242.149,192.168.242.156,192.168.242.157,192.168.242.150
# generate the certificate; it's fine to reserve extra IPs here (192.168.242.150 is a spare), and you could also list a public IP for external access
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -hostname=${KUBERNETES_SVC_IP},${MASTER_IPS},127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.svc.cluster.local \
  -profile=kubernetes \
  kubernetes-csr.json | cfssljson -bare kubernetes
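The -hostname list above is easy to get wrong when typed by hand; a sketch (assuming bash arrays) that assembles the same comma-separated SAN list from its pieces:

```shell
# assemble the apiserver SAN list: service IP, master IPs (incl. the spare),
# loopback, and the in-cluster DNS names, in the same order as above
KUBERNETES_SVC_IP=10.233.0.1
MASTER_IPS=(192.168.242.149 192.168.242.156 192.168.242.157 192.168.242.150)
DNS_NAMES=(kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster kubernetes.svc.cluster.local)
SANS=${KUBERNETES_SVC_IP}
for entry in "${MASTER_IPS[@]}" 127.0.0.1 "${DNS_NAMES[@]}"; do
  SANS+=",${entry}"
done
echo "${SANS}"
# then pass it to cfssl: -hostname=${SANS}
```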

2.9 Service Account Certificate

  1. Configuration file
cat > service-account-csr.json <<EOF
{
  "CN": "service-accounts",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "seven"
    }
  ]
}
EOF
  2. Generate the certificate
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  service-account-csr.json | cfssljson -bare service-account

2.10 proxy-client Certificate

  1. Configuration file. This certificate is dedicated to API aggregation, which lets users develop their own API servers without modifying the Kubernetes source.
cat > proxy-client-csr.json <<EOF
{
  "CN": "aggregator",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "seven"
    }
  ]
}
EOF
  2. Generate the certificate
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  proxy-client-csr.json | cfssljson -bare proxy-client

2.11 Distribute the Client and Server Certificates

  1. Distribute the certificates and private keys the worker nodes need
WORKERS="k8s-node01 k8s-node02"
for instance in ${WORKERS[@]}; do
  scp ca.pem ${instance}-key.pem ${instance}.pem root@${instance}:~/
done
  2. Distribute the certificates and private keys the master nodes need

Note: the certificates distributed below cover both etcd and the Kubernetes masters,
so MASTER_IPS must include every master node as well as every etcd node. If the certificate does not cover all etcd nodes, redefine the list (comma separated) and regenerate.

MASTER_IPS=192.168.242.149,192.168.242.156,192.168.242.157
OIFS=$IFS
IFS=','
for instance in ${MASTER_IPS}; do
  scp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
    service-account-key.pem service-account.pem proxy-client.pem proxy-client-key.pem root@${instance}:~/
done
IFS=$OIFS

Section 3: Authentication Configuration for the Kubernetes Components

3.1 kubeconfigs

  1. Kubernetes authentication configuration files, known as kubeconfigs, let Kubernetes clients locate the kube-apiserver and pass its authentication.
  2. Next we generate a kubeconfig for each component: controller-manager, kubelet, kube-proxy, scheduler, and the admin user. Run the commands below in the same directory used in the previous section, "Generating Certificates".

3.2 Generate the kubeconfigs

  1. kubelet
# set your worker list (hostnames), space separated
WORKERS="k8s-node01 k8s-node02"
for instance in ${WORKERS}; do
  kubectl config set-cluster kubernetes \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://127.0.0.1:6443 \
    --kubeconfig=${instance}.kubeconfig

  kubectl config set-credentials system:node:${instance} \
    --client-certificate=${instance}.pem \
    --client-key=${instance}-key.pem \
    --embed-certs=true \
    --kubeconfig=${instance}.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes \
    --user=system:node:${instance} \
    --kubeconfig=${instance}.kubeconfig

  kubectl config use-context default --kubeconfig=${instance}.kubeconfig
done
  2. kube-proxy
kubectl config set-cluster kubernetes \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://127.0.0.1:6443 \
    --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials system:kube-proxy \
   --client-certificate=kube-proxy.pem \
   --client-key=kube-proxy-key.pem \
   --embed-certs=true \
   --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
   --cluster=kubernetes \
   --user=system:kube-proxy \
   --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
  3. kube-controller-manager
kubectl config set-cluster kubernetes \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=kube-controller-manager.pem \
  --client-key=kube-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=system:kube-controller-manager \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig
  4. kube-scheduler
kubectl config set-cluster kubernetes \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=kube-scheduler.kubeconfig

kubectl config set-credentials system:kube-scheduler \
  --client-certificate=kube-scheduler.pem \
  --client-key=kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-scheduler.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=system:kube-scheduler \
  --kubeconfig=kube-scheduler.kubeconfig

kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig
  5. admin user: generate a kubeconfig for the admin user
kubectl config set-cluster kubernetes \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=admin.kubeconfig

kubectl config set-credentials admin \
  --client-certificate=admin.pem \
  --client-key=admin-key.pem \
  --embed-certs=true \
  --kubeconfig=admin.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=admin \
  --kubeconfig=admin.kubeconfig

kubectl config use-context default --kubeconfig=admin.kubeconfig

3.3 Distribute the kubeconfigs

  1. Copy the kubeconfigs needed by kubelet and kube-proxy to every worker node
WORKERS="k8s-node01 k8s-node02"
for instance in ${WORKERS}; do
    scp ${instance}.kubeconfig kube-proxy.kubeconfig ${instance}:~/
done
  2. Copy the kubeconfigs needed by kube-controller-manager and kube-scheduler to the master nodes
MASTERS="k8s-master01 k8s-master02 k8s-master03"
for instance in ${MASTERS}; do
    scp admin.kubeconfig kube-controller-manager.kubeconfig kube-scheduler.kubeconfig ${instance}:~/
done

Section 4: Deploying the Kubernetes Control Plane (Masters)

4.1 Deploy a Highly Available etcd Cluster

  1. Kubernetes components are stateless and store all cluster state in etcd. We deploy a three-node etcd cluster configured for high availability and secure remote access. All commands in this section run on every master node.
  2. Copy the required certificate files
# copy the required certificate files
mkdir -p /etc/etcd /var/lib/etcd
chmod 700 /var/lib/etcd
cp ca.pem kubernetes-key.pem kubernetes.pem /etc/etcd/
  3. Create the etcd.service unit file
ETCD_NAME=$(hostname -s)
# change this variable on each node
ETCD_IP=192.168.242.149
# names and IP addresses of all etcd nodes
ETCD_NAMES=(k8s-master01 k8s-master02 k8s-master03)
ETCD_IPS=(192.168.242.149 192.168.242.156 192.168.242.157)
cat <<EOF > /etc/systemd/system/etcd.service
[Unit]
Description=etcd
Documentation=https://github.com/coreos

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd \\
  --name ${ETCD_NAME} \\
  --cert-file=/etc/etcd/kubernetes.pem \\
  --key-file=/etc/etcd/kubernetes-key.pem \\
  --peer-cert-file=/etc/etcd/kubernetes.pem \\
  --peer-key-file=/etc/etcd/kubernetes-key.pem \\
  --trusted-ca-file=/etc/etcd/ca.pem \\
  --peer-trusted-ca-file=/etc/etcd/ca.pem \\
  --peer-client-cert-auth \\
  --client-cert-auth \\
  --initial-advertise-peer-urls https://${ETCD_IP}:2380 \\
  --listen-peer-urls https://${ETCD_IP}:2380 \\
  --listen-client-urls https://${ETCD_IP}:2379,https://127.0.0.1:2379 \\
  --advertise-client-urls https://${ETCD_IP}:2379 \\
  --initial-cluster-token etcd-cluster-0 \\
  --initial-cluster ${ETCD_NAMES[0]}=https://${ETCD_IPS[0]}:2380,${ETCD_NAMES[1]}=https://${ETCD_IPS[1]}:2380,${ETCD_NAMES[2]}=https://${ETCD_IPS[2]}:2380 \\
  --initial-cluster-state new \\
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
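Hardcoding three --initial-cluster entries gets brittle if your member count ever changes; a sketch that derives the value from the same two arrays (redefined here for self-containment):

```shell
# build the --initial-cluster value "name1=https://ip1:2380,..." from the
# ETCD_NAMES/ETCD_IPS arrays, whatever the member count
ETCD_NAMES=(k8s-master01 k8s-master02 k8s-master03)
ETCD_IPS=(192.168.242.149 192.168.242.156 192.168.242.157)
INITIAL_CLUSTER=""
for ((i = 0; i < ${#ETCD_NAMES[@]}; i++)); do
  INITIAL_CLUSTER+="${ETCD_NAMES[$i]}=https://${ETCD_IPS[$i]}:2380,"
done
INITIAL_CLUSTER=${INITIAL_CLUSTER%,}  # drop the trailing comma
echo "${INITIAL_CLUSTER}"
# use in the unit file as:  --initial-cluster ${INITIAL_CLUSTER}
```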
  4. After etcd.service is configured on every etcd node, start the cluster
# start the cluster once every node's unit is in place
systemctl daemon-reload && systemctl enable etcd && systemctl restart etcd
  5. Verify the etcd cluster
ETCDCTL_API=3 etcdctl member list \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem

4.2 Configure the API Server

  1. All commands run on every master node
# create the necessary kubernetes directories
mkdir -p /etc/kubernetes/ssl
# put the certificate files in place
mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
    service-account-key.pem service-account.pem \
    proxy-client.pem proxy-client-key.pem \
    /etc/kubernetes/ssl

# configure kube-apiserver.service
# this node's internal IP
IP=192.168.242.149
# number of apiserver instances; set this to your own count
APISERVER_COUNT=3
# etcd nodes
ETCD_ENDPOINTS=(192.168.242.149 192.168.242.156 192.168.242.157)
# create the apiserver service unit
cat <<EOF > /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
  --advertise-address=${IP} \\
  --allow-privileged=true \\
  --apiserver-count=${APISERVER_COUNT} \\
  --audit-log-maxage=30 \\
  --audit-log-maxbackup=3 \\
  --audit-log-maxsize=100 \\
  --audit-log-path=/var/log/audit.log \\
  --authorization-mode=Node,RBAC \\
  --bind-address=0.0.0.0 \\
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
  --etcd-cafile=/etc/kubernetes/ssl/ca.pem \\
  --etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem \\
  --etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem \\
  --etcd-servers=https://${ETCD_ENDPOINTS[0]}:2379,https://${ETCD_ENDPOINTS[1]}:2379,https://${ETCD_ENDPOINTS[2]}:2379 \\
  --event-ttl=1h \\
  --kubelet-certificate-authority=/etc/kubernetes/ssl/ca.pem \\
  --kubelet-client-certificate=/etc/kubernetes/ssl/kubernetes.pem \\
  --kubelet-client-key=/etc/kubernetes/ssl/kubernetes-key.pem \\
  --service-account-issuer=api \\
  --service-account-key-file=/etc/kubernetes/ssl/service-account.pem \\
  --service-account-signing-key-file=/etc/kubernetes/ssl/service-account-key.pem \\
  --api-audiences=api,vault,factors \\
  --service-cluster-ip-range=10.233.0.0/16 \\
  --service-node-port-range=30000-32767 \\
  --proxy-client-cert-file=/etc/kubernetes/ssl/proxy-client.pem \\
  --proxy-client-key-file=/etc/kubernetes/ssl/proxy-client-key.pem \\
  --runtime-config=api/all=true \\
  --requestheader-client-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --requestheader-allowed-names=aggregator \\
  --requestheader-extra-headers-prefix=X-Remote-Extra- \\
  --requestheader-group-headers=X-Remote-Group \\
  --requestheader-username-headers=X-Remote-User \\
  --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \\
  --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \\
  --v=1
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
  2. Start and verify
systemctl daemon-reload && systemctl enable kube-apiserver && systemctl restart kube-apiserver
systemctl status kube-apiserver
netstat -ntlp
journalctl -f

4.3 Configure kube-controller-manager

# put the kubeconfig in place
mv kube-controller-manager.kubeconfig /etc/kubernetes/

# create kube-controller-manager.service
#  --experimental-cluster-signing-duration=87600h is for 1.18.5
#  --cluster-signing-duration=876000h0m0s is for 1.20.2
cat <<EOF > /etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
  --bind-address=0.0.0.0 \\
  --cluster-cidr=10.200.0.0/16 \\
  --cluster-name=kubernetes \\
  --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \\
  --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \\
  --experimental-cluster-signing-duration=87600h \\
  --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \\
  --leader-elect=true \\
  --root-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --service-account-private-key-file=/etc/kubernetes/ssl/service-account-key.pem \\
  --service-cluster-ip-range=10.233.0.0/16 \\
  --use-service-account-credentials=true \\
  --v=1
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
  1. Start and verify
systemctl daemon-reload && systemctl enable kube-controller-manager && systemctl restart kube-controller-manager
# should show the service running
systemctl status kube-controller-manager
netstat -ntlp
journalctl -f

4.4 Configure kube-scheduler

# put the kubeconfig in place
mv kube-scheduler.kubeconfig /etc/kubernetes

# create the scheduler service unit
cat <<EOF > /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
  --authentication-kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \\
  --authorization-kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \\
  --kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \\
  --leader-elect=true \\
  --bind-address=0.0.0.0 \\
  --port=0 \\
  --v=1
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
  1. Start and verify
systemctl daemon-reload && systemctl enable kube-scheduler && systemctl restart kube-scheduler
systemctl status kube-scheduler
netstat -ntlp
journalctl -f

4.5 Configure kubectl

  1. kubectl is the client tool for managing a Kubernetes cluster; we have already downloaded it to all master nodes.
# create kubectl's config directory
mkdir -p ~/.kube/
# move the admin kubeconfig to kubectl's default location
mv ~/admin.kubeconfig ~/.kube/config
# test
kubectl get nodes
  2. When running commands such as kubectl exec, run, or logs, the apiserver forwards the request to the kubelet. The RBAC rule below authorizes the apiserver to call the kubelet API.
kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes

Section 5: Deploying the Kubernetes Worker Nodes

5.1 Install the Container Runtime

  1. We'll end up on containerd sooner or later: upstream deprecated Docker (dockershim) as a container runtime, with removal landing in release 1.24.
# set the containerd version
VERSION=1.4.3
# download the tarball
wget https://github.com/containerd/containerd/releases/download/v${VERSION}/cri-containerd-cni-${VERSION}-linux-amd64.tar.gz
# extract it
tar -xvf cri-containerd-cni-${VERSION}-linux-amd64.tar.gz
# copy the files we need
cp etc/crictl.yaml /etc/
cp etc/systemd/system/containerd.service /etc/systemd/system/
cp -r usr /
# configure containerd
mkdir -p /etc/containerd
# generate a config file from the defaults
containerd config default > /etc/containerd/config.toml
# customize if needed (optional); the defaults are mostly fine
vi /etc/containerd/config.toml
# enable and start
systemctl enable containerd
systemctl restart containerd
# check the status
systemctl status containerd
crictl pull docker.io/library/nginx:1.19
crictl pull registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/pause:3.2
ctr -n k8s.io i tag  registry.cn-hangzhou.aliyuncs.com/kubernetes-kubespray/pause:3.2 k8s.gcr.io/pause:3.2
crictl ps

5.2 Configure the kubelet

  1. Run the following on every worker node
mkdir -p /etc/kubernetes/ssl/
mv ${HOSTNAME}-key.pem ${HOSTNAME}.pem ca.pem ca-key.pem /etc/kubernetes/ssl/
mv ${HOSTNAME}.kubeconfig /etc/kubernetes/kubeconfig
# this node's IP; change it on each node
IP=192.168.242.158
# write the kubelet config file
cat <<EOF > /etc/kubernetes/kubelet-config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/etc/kubernetes/ssl/ca.pem"
authorization:
  mode: Webhook
clusterDomain: "cluster.local"
clusterDNS:
  - "169.254.25.10"
podCIDR: "10.200.0.0/16"
address: ${IP}
readOnlyPort: 0
staticPodPath: /etc/kubernetes/manifests
healthzPort: 10248
healthzBindAddress: 127.0.0.1
kubeletCgroups: /systemd/system.slice
resolvConf: "/etc/resolv.conf"
runtimeRequestTimeout: "15m"
kubeReserved:
  cpu: 200m
  memory: 512M
tlsCertFile: "/etc/kubernetes/ssl/${HOSTNAME}.pem"
tlsPrivateKeyFile: "/etc/kubernetes/ssl/${HOSTNAME}-key.pem"
EOF
  2. Configure the kubelet service (containerd version)
cat <<EOF > /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service

[Service]
ExecStart=/usr/local/bin/kubelet \\
  --config=/etc/kubernetes/kubelet-config.yaml \\
  --container-runtime=remote \\
  --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock \\
  --image-pull-progress-deadline=2m \\
  --kubeconfig=/etc/kubernetes/kubeconfig \\
  --network-plugin=cni \\
  --node-ip=${IP} \\
  --register-node=true \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
  3. Start and verify
systemctl daemon-reload && systemctl enable kubelet && systemctl restart kubelet
# some errors here are fine: the pod network is not up yet; just continue with the next steps
systemctl status kubelet
journalctl -f

5.3 Configure nginx-proxy

  1. nginx-proxy is a proxy that worker nodes use to reach the apiserver, and an elegant high-availability scheme for the apiserver. It is started as a kubelet static pod, lets every node balance requests across all apiserver instances, and neatly replaces the virtual-IP approach.
  2. nginx-proxy only needs to be deployed on nodes without an apiserver, since it provides a local proxy on port 6443.
  3. nginx configuration file
mkdir -p /etc/nginx
# master IP list
MASTER_IPS=(192.168.242.149 192.168.242.156 192.168.242.157)
# make a copy of this file first and adjust the 'server' entries in the upstream block before running
cat <<EOF > /etc/nginx/nginx.conf
error_log stderr notice;

worker_processes 2;
worker_rlimit_nofile 130048;
worker_shutdown_timeout 10s;

events {
  multi_accept on;
  use epoll;
  worker_connections 16384;
}

stream {
  upstream kube_apiserver {
    least_conn;
    server ${MASTER_IPS[0]}:6443;
    server ${MASTER_IPS[1]}:6443;
    server ${MASTER_IPS[2]}:6443;
  }

  server {
    listen        127.0.0.1:6443;
    proxy_pass    kube_apiserver;
    proxy_timeout 10m;
    proxy_connect_timeout 1s;
  }
}

http {
  aio threads;
  aio_write on;
  tcp_nopush on;
  tcp_nodelay on;

  keepalive_timeout 5m;
  keepalive_requests 100;
  reset_timedout_connection on;
  server_tokens off;
  autoindex off;

  server {
    listen 8081;
    location /healthz {
      access_log off;
      return 200;
    }
    location /stub_status {
      stub_status on;
      access_log off;
    }
  }
}
EOF
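If your master list differs, the upstream block can be generated from MASTER_IPS instead of editing the heredoc by hand; a small sketch:

```shell
# emit one "server ip:6443;" line per master for the nginx upstream block
MASTER_IPS=(192.168.242.149 192.168.242.156 192.168.242.157)
UPSTREAM_SERVERS=""
for ip in "${MASTER_IPS[@]}"; do
  UPSTREAM_SERVERS+="    server ${ip}:6443;"$'\n'
done
printf '%s' "$UPSTREAM_SERVERS"
# paste the output into the "upstream kube_apiserver { ... }" block
```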
  4. nginx-proxy static pod manifest
mkdir -p /etc/kubernetes/manifests/
cat <<EOF > /etc/kubernetes/manifests/nginx-proxy.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-proxy
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    k8s-app: kube-nginx
spec:
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet
  nodeSelector:
    kubernetes.io/os: linux
  priorityClassName: system-node-critical
  containers:
  - name: nginx-proxy
    image: docker.io/library/nginx:1.19
    imagePullPolicy: IfNotPresent
    resources:
      requests:
        cpu: 25m
        memory: 32M
    securityContext:
      privileged: true
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8081
    readinessProbe:
      httpGet:
        path: /healthz
        port: 8081
    volumeMounts:
    - mountPath: /etc/nginx
      name: etc-nginx
      readOnly: true
  volumes:
  - name: etc-nginx
    hostPath:
      path: /etc/nginx
EOF

5.4 Configure kube-proxy

  1. Configuration file
mv kube-proxy.kubeconfig /etc/kubernetes/
# create kube-proxy-config.yaml
cat <<EOF > /etc/kubernetes/kube-proxy-config.yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
bindAddress: 0.0.0.0
clientConnection:
  kubeconfig: "/etc/kubernetes/kube-proxy.kubeconfig"
clusterCIDR: "10.200.0.0/16"
mode: ipvs
EOF
  2. kube-proxy service unit file
cat <<EOF > /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-proxy \\
  --config=/etc/kubernetes/kube-proxy-config.yaml
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
  3. Start and verify
systemctl daemon-reload && systemctl enable kube-proxy && systemctl restart kube-proxy
systemctl status kube-proxy

Section 6: The Calico Network Plugin and the CoreDNS DNS Plugin

6.1 Network Plugin: Calico

  1. Official docs: https://docs.projectcalico.org/getting-started/kubernetes/self-managed-onprem/onpremises
  2. The docs offer two manifests, one for clusters of fewer than 50 nodes and one for more; the main difference is typha:
    • With many nodes, Calico's Felix component can exchange data directly with etcd through Typha instead of going through the kube-apiserver, reducing the load on the kube-apiserver.
    • Pick the one that matches your situation. The download is an all-in-one YAML file; we only need a few small changes.
  3. Fix IP autodetection
    • When kubelet is started with --node-ip, the status.hostIP field of pods running in host-network mode is automatically filled with the IP given to kubelet.
  4. On master01
# download the <50-node manifest
curl https://docs.projectcalico.org/manifests/calico.yaml -O
# edit the values in the manifest
vi calico.yaml
# IP autodetection, before (with stray virtual NICs the pod network may break):
- name: IP
  value: "autodetect"
# IP autodetection, after:
- name: IP
  valueFrom:
    fieldRef:
      fieldPath: status.hostIP
# CIDR, before:
# - name: CALICO_IPV4POOL_CIDR
#   value: "192.168.0.0/16"
# CIDR, after (use your own value; mine is 10.200.0.0/16)
- name: CALICO_IPV4POOL_CIDR
  value: "10.200.0.0/16"
# finally, apply the manifest
kubectl apply -f calico.yaml

6.2 DNS Plugin: CoreDNS

  1. In earlier versions, the DNS component ran as standalone pods providing DNS for the whole cluster, and every pod queried the same DNS service.
  2. Starting with Kubernetes 1.18, the NodeLocal DNSCache feature is stable.
  3. NodeLocal DNSCache runs as a DaemonSet on every worker node and acts as a DNS caching proxy for that node's pods, avoiding iptables DNAT rules and connection tracking and greatly improving DNS performance.
  4. Deploy CoreDNS
# set the coredns cluster IP
COREDNS_CLUSTER_IP=10.233.0.10
# download the all-in-one coredns manifest (addons/coredns.yaml)
# substitute the cluster IP
sed -i "s/\${COREDNS_CLUSTER_IP}/${COREDNS_CLUSTER_IP}/g" coredns.yaml
# create coredns
kubectl apply -f coredns.yaml
  5. Deploy NodeLocal DNSCache
# set the coredns cluster IP
COREDNS_CLUSTER_IP=10.233.0.10
# download the all-in-one nodelocaldns manifest (addons/nodelocaldns.yaml)
# substitute the cluster IP
sed -i "s/\${COREDNS_CLUSTER_IP}/${COREDNS_CLUSTER_IP}/g" nodelocaldns.yaml
# create nodelocaldns
kubectl apply -f nodelocaldns.yaml

Section 7: Cluster Smoke Tests

7.1 Create an nginx DaemonSet

cat > nginx-ds.yml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx-ds
  labels:
    app: nginx-ds
spec:
  type: NodePort
  selector:
    app: nginx-ds
  ports:
  - name: http
    port: 80
    targetPort: 80
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ds
spec:
  selector:
    matchLabels:
      app: nginx-ds
  template:
    metadata:
      labels:
        app: nginx-ds
    spec:
      containers:
      - name: my-nginx
        image: nginx:1.19
        ports:
        - containerPort: 80
EOF

# create the DaemonSet
kubectl apply -f nginx-ds.yml

7.2 Check IP Connectivity

# check pod IP connectivity across nodes
kubectl get pods  -o wide

# ping each pod IP from every worker node
ping <pod-ip>

# check service reachability
kubectl get svc

# access the service from every worker node
curl <service-ip>:<port>
curl 10.233.120.200:80

# check NodePort availability on every node
curl <node-ip>:<port>
netstat -ntlp | grep 30278
curl 192.168.242.158:30278

7.3 Check DNS

# create an nginx pod
cat > pod-nginx.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: docker.io/library/nginx:1.19
    ports:
    - containerPort: 80
EOF

# create the pod
kubectl apply -f pod-nginx.yaml

# enter the pod to check dns
kubectl exec nginx -it -- /bin/bash

# view the dns configuration
root@nginx:/# cat /etc/resolv.conf

# check that the service name resolves correctly
root@nginx:/# curl nginx-ds

7.4 Handy Commands

kubectl get pods
kubectl logs <pod-name>
kubectl get pods -l app=nginx-ds
kubectl exec -it <nginx-pod-name> -- nginx -v