Installing Kubernetes (k8s) from binaries (complete guide, continuously updated)

Environment setup

1. Set the hostname on each node

hostnamectl  set-hostname master
hostnamectl  set-hostname node01
hostnamectl  set-hostname node02

2. Configure /etc/hosts on every host

cat <<EOF >>/etc/hosts
192.168.122.10 master
192.168.122.11 node01
192.168.122.12 node02
EOF

3. Stop and disable the firewall

systemctl stop firewalld  && systemctl disable firewalld

4. Disable swap

swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

5. Disable SELinux

setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

6. Pass bridged IPv4 traffic to the iptables chains

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

Apply the settings:

sysctl --system 
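The net.bridge.* keys only exist once the br_netfilter kernel module is loaded, so if sysctl complains about them, load the module first and then verify (an optional check; the module and file names below are the standard CentOS ones):

modprobe br_netfilter
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf   #persist across reboots
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables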

7. Set up passwordless SSH to simplify deployment

ssh-keygen 
ssh-copy-id  192.168.122.11
ssh-copy-id  192.168.122.12
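A quick sanity check that key-based login works (the hostnames resolve through the /etc/hosts entries added above):

ssh node01 hostname
ssh node02 hostname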

Generate etcd certificates

There are several ways to generate certificates; see the official documentation on generating certificates manually.

1. Create a self-signed certificate authority (CA)

1. Download the CFSSL TLS toolkit
curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
 
#make the binaries executable
chmod +x /usr/local/bin/cfssl*
2. Generate the default CA config and CSR template files
cfssl print-defaults config > ca-config.json
cfssl print-defaults csr > ca-csr.json
3. Create the CA config file

Define the signing policy, i.e. which kinds of certificates this CA may issue.
Edit the CA config file ca-config.json and change the certificate lifetime to 10 years (the default is 1 year).

cat >ca-config.json<<EOF
{
    "signing": {
        "default": {
            "expiry": "87600h"
        },
        "profiles": {
            "kubernetes": {
                "expiry": "87600h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            }
        }
    }
}
EOF
4. Define the etcd self-signed root certificate request
cat > ca-csr.json<< EOF
{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Xi'an",
            "ST": "Shaanxi"
        }
    ]
}
EOF
5. Generate the CA certificate

This produces the files the CA needs: ca-key.pem (private key) and ca.pem (certificate), plus ca.csr (a certificate signing request used for cross-signing or re-signing).

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
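Optionally confirm the files were produced and that the 10-year validity took effect (assumes openssl is available):

ls ca.pem ca-key.pem ca.csr
openssl x509 -in ca.pem -noout -subject -dates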

2. Issue the etcd HTTPS certificate with the self-signed CA

1. Create the certificate signing request file

Add the server certificate request template:

#The IPs in the hosts field below are the addresses used for communication inside the etcd cluster; we are building a 3-node etcd cluster
cat > server-csr.json<< EOF
{
    "CN": "etcd",
    "hosts": [
    "192.168.122.10",
    "192.168.122.11",
    "192.168.122.12"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "L": "Xi'an",
        "ST": "Shaanxi"
    }
  ]
}
EOF 
2. Generate the etcd server certificate from the request file
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
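Optionally verify that all three etcd IPs ended up in the certificate's Subject Alternative Names before distributing it:

openssl x509 -in server.pem -noout -text | grep -A1 "Subject Alternative Name"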

Deploy etcd

1. Download the etcd binary package

#download URL
https://github.com/etcd-io/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz
 
#create the install directories
mkdir /data/etcd/{bin,cfg,ssl} -p
 
#copy the certificates generated earlier into the ssl directory
cp /data/tls/etcd/ca*pem /data/tls/etcd/server*pem /data/etcd/ssl/
 
#unpack and install
tar zxvf etcd-v3.4.9-linux-amd64.tar.gz
cp etcd-v3.4.9-linux-amd64/{etcd,etcdctl} /data/etcd/bin/

2. Add the main etcd config file

#note: adjust the etcd IP addresses for each node
cat > /data/etcd/cfg/etcd.conf << EOF
#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.122.10:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.122.10:2379"
 
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.122.10:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.122.10:2379"
 
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.122.10:2380,etcd-2=https://192.168.122.11:2380,etcd-3=https://192.168.122.12:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

Parameter notes:
ETCD_NAME                         #node name, unique within the cluster
ETCD_DATA_DIR                     #data directory
ETCD_LISTEN_PEER_URLS             #listen address for cluster (peer) communication
ETCD_LISTEN_CLIENT_URLS           #listen address for client access
ETCD_INITIAL_ADVERTISE_PEER_URLS  #peer address advertised to the cluster
ETCD_ADVERTISE_CLIENT_URLS        #client address advertised to the cluster
ETCD_INITIAL_CLUSTER              #addresses of the cluster members
ETCD_INITIAL_CLUSTER_TOKEN        #cluster token
ETCD_INITIAL_CLUSTER_STATE        #state when joining: new for a new cluster, existing to join an existing one

3. Manage etcd with systemd

cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
 
[Service]
Type=notify
EnvironmentFile=/data/etcd/cfg/etcd.conf
ExecStart=/data/etcd/bin/etcd \
--cert-file=/data/etcd/ssl/server.pem \
--key-file=/data/etcd/ssl/server-key.pem \
--peer-cert-file=/data/etcd/ssl/server.pem \
--peer-key-file=/data/etcd/ssl/server-key.pem \
--trusted-ca-file=/data/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/data/etcd/ssl/ca.pem \
--logger=zap
Restart=on-failure
LimitNOFILE=65536
 
[Install]
WantedBy=multi-user.target
EOF

4. Push etcd to the other nodes (.11/.12)

scp /usr/lib/systemd/system/etcd.service node01:/usr/lib/systemd/system/etcd.service
scp /usr/lib/systemd/system/etcd.service node02:/usr/lib/systemd/system/etcd.service
scp  -r /data/etcd node01:/data
scp  -r /data/etcd node02:/data

5. Adjust the etcd config on the node machines

node01:

vi /data/etcd/cfg/etcd.conf
    
ETCD_NAME="etcd-2"    #must match the member name used in ETCD_INITIAL_CLUSTER below
 
ETCD_LISTEN_PEER_URLS="https://192.168.122.11:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.122.11:2379"
 
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.122.11:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.122.11:2379"
#change the four URLs above to this host's IP address
 
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.122.10:2380,etcd-2=https://192.168.122.11:2380,etcd-3=https://192.168.122.12:2380"

node02 is modified the same way as node01, using 192.168.122.12.

6. Start etcd

#The first member started will appear to hang while it waits for the other members to join; ideally start all three at roughly the same time (see the sketch after the commands below)

systemctl daemon-reload
systemctl start etcd
systemctl enable etcd
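One possible way to bring all three members up at roughly the same time from master (a sketch; it assumes passwordless SSH to node01/node02 and that the config has already been pushed to them):

for h in node01 node02; do
  ssh "$h" "systemctl daemon-reload && systemctl enable --now etcd" &
done
systemctl daemon-reload && systemctl enable --now etcd
wait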

7. Check etcd cluster health

ETCDCTL_API=3 /data/etcd/bin/etcdctl --cacert=/data/etcd/ssl/ca.pem --cert=/data/etcd/ssl/server.pem --key=/data/etcd/ssl/server-key.pem --endpoints="https://192.168.122.10:2379,https://192.168.122.11:2379,https://192.168.122.12:2379" endpoint health
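etcdctl can also print a per-member status table, which shows which member is currently the leader:

ETCDCTL_API=3 /data/etcd/bin/etcdctl --cacert=/data/etcd/ssl/ca.pem --cert=/data/etcd/ssl/server.pem --key=/data/etcd/ssl/server-key.pem --endpoints="https://192.168.122.10:2379,https://192.168.122.11:2379,https://192.168.122.12:2379" endpoint status --write-out=table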


Deploy Docker

Install and deploy Docker on all three servers.

1. Configure the yum repository

 sudo yum install -y yum-utils
 sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo

2. Install Docker with yum

yum install  docker-ce-19.03.9

3. Add the Alibaba Cloud Docker registry mirror

mkdir /etc/docker
cat > /etc/docker/daemon.json << EOF
{
"registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF

4. Start Docker

systemctl start docker 
systemctl enable docker 
systemctl status docker

Create the Kubernetes CA certificates

1. Create a self-signed certificate authority (CA)

cd /data/tls/k8s
1. Add the CA config
cat > ca-config.json<< EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
     },
      "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
2. Kubernetes CA root certificate request file
cat > ca-csr.json<< EOF
{
    "CN": "kubernetes",
    "key": {
      "algo": "rsa",
      "size": 2048
  },    
  "names": [
      {
        "C": "CN",
        "L": "Xi'an",
        "ST": "Shaanxi",
        "O": "k8s",
        "OU": "System"
       }
   ]    
}
EOF
3. Generate the CA certificate
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

2. Issue the kube-apiserver HTTPS certificate with the self-signed CA

1. Create the certificate signing request file
#list all of your cluster's host IPs in the hosts field below
cat > server-csr.json<< EOF
{
    "CN": "kubernetes",
    "hosts": [
      "10.0.0.1",
      "127.0.0.1",
      "192.168.122.10",
      "192.168.122.11",
      "192.168.122.12",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
],
"key": {
  "algo": "rsa",
  "size": 2048
},
"names": [
    {
      "C": "CN",
      "L": "Xi'an",
      "ST": "Shaanxi",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

2. Generate the apiserver certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

Deploy the master node

Download:

https://dl.k8s.io/v1.19.0/kubernetes-server-linux-amd64.tar.gz
or: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.19.md#v1190

Set up the install directories:

#prepare directories
cd
mkdir -p /data/kubernetes/{bin,cfg,ssl,logs}
#unpack the package
tar zxvf kubernetes-server-linux-amd64.tar.gz

#copy the binaries needed by the master components
cd kubernetes/server/bin
cp kube-apiserver kube-scheduler kube-controller-manager /data/kubernetes/bin

#copy the kubectl CLI
cp kubectl /usr/bin/
  
#copy the certificate files
cp /data/tls/k8s/*.pem   /data/kubernetes/ssl/

1. Deploy kube-apiserver

1. Create the config file
#note: adjust this host's IP and the etcd addresses
cat > /data/kubernetes/cfg/kube-apiserver.conf << EOF
KUBE_APISERVER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/data/kubernetes/logs \\
--etcd-servers=https://192.168.122.10:2379,https://192.168.122.11:2379,https://192.168.122.12:2379 \\
--bind-address=0.0.0.0 \\
--secure-port=6443 \\
--advertise-address=192.168.122.10 \\
--allow-privileged=true \\
--service-cluster-ip-range=10.0.0.0/24 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--enable-bootstrap-token-auth=true \\
--token-auth-file=/data/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-32767 \\
--kubelet-client-certificate=/data/kubernetes/ssl/server.pem \\
--kubelet-client-key=/data/kubernetes/ssl/server-key.pem \\
--tls-cert-file=/data/kubernetes/ssl/server.pem \\
--tls-private-key-file=/data/kubernetes/ssl/server-key.pem \\
--client-ca-file=/data/kubernetes/ssl/ca.pem \\
--service-account-key-file=/data/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/data/etcd/ssl/ca.pem \\
--etcd-certfile=/data/etcd/ssl/server.pem \\
--etcd-keyfile=/data/etcd/ssl/server-key.pem \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/data/kubernetes/logs/k8s-audit.log"
EOF
 
#About the doubled \\ above: the first backslash escapes the second, so the heredoc writes a literal \ followed by a newline into the file; a single \ before the newline would be consumed as a line continuation and the newline would be lost
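A minimal illustration of the difference, unrelated to Kubernetes itself (plain bash heredoc behaviour):

cat > /tmp/escape-demo << EOF
line1 \\
line2 \
line3
EOF
cat /tmp/escape-demo
#prints:
#line1 \
#line2 line3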

Parameter notes:

--logtostderr                  #false means write logs to files instead of stderr
--v                            #log verbosity level
--log-dir                      #log output directory
--etcd-servers                 #etcd cluster endpoints
--bind-address                 #listen address (this host)
--secure-port                  #HTTPS secure port (default 6443)
--advertise-address            #address advertised to the cluster
--allow-privileged             #allow containers to run in privileged mode
--service-cluster-ip-range     #Service virtual IP range
--enable-admission-plugins     #admission control plugins, applied in order
--authorization-mode           #authorization modes; enable RBAC and Node authorization
--enable-bootstrap-token-auth  #enable bootstrap token authentication
      #once enabled, a bootstrap token can be used as a bearer-token credential to authenticate requests against the API server (per the official docs)
      #the point of enabling it here: when new nodes join later, they are authorized automatically as long as they belong to the right group
--token-auth-file              #bootstrap token file
--service-node-port-range      #port range allocated to NodePort Services
--kubelet-client-xxx           #client certificate the apiserver uses to reach the kubelet
--tls-xxx-file                 #apiserver HTTPS certificates
--etcd-xxxfile                 #certificates for connecting to the etcd cluster
--audit-log-xxx                #audit log settings
2. Create the token file
cat > /data/kubernetes/cfg/token.csv << EOF
19c2834350a42f92912196a841ceecad,kubelet-bootstrap,10001,"system:node-bootstrapper"
EOF
 
#format: token,username,UID,group
#you can also generate your own token and substitute it:
#head -c 16 /dev/urandom | od -An -t x | tr -d ' '
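For example, to generate a fresh token and rewrite token.csv in one step (optional; if you do this, the same token value must be reused later when generating bootstrap.kubeconfig):

TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "${TOKEN},kubelet-bootstrap,10001,\"system:node-bootstrapper\"" > /data/kubernetes/cfg/token.csv
cat /data/kubernetes/cfg/token.csv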
3. Manage kube-apiserver with systemd
cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
 
[Service]
EnvironmentFile=/data/kubernetes/cfg/kube-apiserver.conf
ExecStart=/data/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure
 
[Install]
WantedBy=multi-user.target
EOF
4. Start kube-apiserver
systemctl daemon-reload
systemctl start kube-apiserver
systemctl enable kube-apiserver
5. Authorize the kubelet-bootstrap user to request certificates

To avoid the complexity of issuing kubelet certificates by hand, Kubernetes provides the bootstrap mechanism, which automatically issues kubelet certificates to nodes about to join the cluster; everything that connects to the apiserver needs a certificate.

Put simply, kubelet bootstrapping is the mechanism that establishes communication between the Kubernetes apiserver and worker nodes. Enabling Bootstrap Token Authentication on the apiserver (--enable-bootstrap-token-auth) lets the kubelet present a special token to authenticate against the API server. After the kubelet connects with this low-privilege bootstrap token, the bootstrap process lets it automatically request its own certificate from the API server, and the API server can approve and issue that certificate automatically.
Reference: https://kubernetes.io/zh-cn/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/#bootstrap-tokens

Create the kubelet-bootstrap cluster role binding:

kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap
 
#output
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created

2. Deploy kube-controller-manager

1. Create the config file
cat > /data/kubernetes/cfg/kube-controller-manager.conf << EOF
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/data/kubernetes/logs \\
--leader-elect=true \\
--kubeconfig=/data/kubernetes/cfg/kube-controller-manager.kubeconfig \\
--bind-address=127.0.0.1 \\
--allocate-node-cidrs=true \\
--cluster-cidr=10.244.0.0/16 \\
--service-cluster-ip-range=10.0.0.0/24 \\
--cluster-signing-cert-file=/data/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/data/kubernetes/ssl/ca-key.pem \\
--root-ca-file=/data/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/data/kubernetes/ssl/ca-key.pem \\
--experimental-cluster-signing-duration=87600h0m0s"
EOF

Parameter notes

--kubeconfig                  #kubeconfig used to connect to the apiserver (generated below; replaces the deprecated --master=127.0.0.1:8080 insecure local port)
--leader-elect                #elect a leader automatically when several instances of this component run (HA)
--cluster-signing-cert-file   #CA certificate used to sign kubelet certificates
--cluster-signing-key-file    #CA private key
#the CA that automatically issues kubelet certificates must be the same CA the apiserver uses

2. Generate the kubeconfig file

Generate the kube-controller-manager certificate:
cd /data/tls/k8s

cat > kube-controller-manager-csr.json << EOF
{
  "CN": "system:kube-controller-manager",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Xi'an", 
      "ST": "Shaanxi",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

Generate the certificate:

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

Generate the kubeconfig file (these are shell commands; run them directly in the terminal):

KUBE_CONFIG="/data/kubernetes/cfg/kube-controller-manager.kubeconfig"
KUBE_APISERVER="https://192.168.122.10:6443"

kubectl config set-cluster kubernetes \
  --certificate-authority=/data/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials kube-controller-manager \
  --client-certificate=./kube-controller-manager.pem \
  --client-key=./kube-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-controller-manager \
  --kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
3. Manage kube-controller-manager with systemd
cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
 
[Service]
EnvironmentFile=/data/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/data/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
 
[Install]
WantedBy=multi-user.target
EOF
4. Start the kube-controller-manager service
systemctl daemon-reload
systemctl start kube-controller-manager
systemctl enable kube-controller-manager

3. Deploy kube-scheduler

1. Add the config file
cat > /data/kubernetes/cfg/kube-scheduler.conf << EOF
KUBE_SCHEDULER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/data/kubernetes/logs \\
--leader-elect \\
--kubeconfig=/data/kubernetes/cfg/kube-scheduler.kubeconfig \\
--bind-address=127.0.0.1"
EOF

Parameter notes

--kubeconfig    #kubeconfig used to connect to the apiserver (generated below; replaces the deprecated --master=127.0.0.1:8080 insecure local port)
--leader-elect  #elect a leader automatically when several instances of this component run (HA)
2. Generate the certificate

cd /data/tls/k8s

cat > kube-scheduler-csr.json << EOF
{
  "CN": "system:kube-scheduler",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Xi'an",
      "ST": "Shaanxi",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
# generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler

Generate the kubeconfig file (these are shell commands; run them directly in the terminal):

KUBE_CONFIG="/data/kubernetes/cfg/kube-scheduler.kubeconfig"
KUBE_APISERVER="https://192.168.122.10:6443"

kubectl config set-cluster kubernetes \
  --certificate-authority=/data/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials kube-scheduler \
  --client-certificate=./kube-scheduler.pem \
  --client-key=./kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-scheduler \
  --kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
3. Manage kube-scheduler with systemd
cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/data/kubernetes/cfg/kube-scheduler.conf
ExecStart=/data/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
4. Start the kube-scheduler service
systemctl daemon-reload
systemctl start kube-scheduler
systemctl enable kube-scheduler

4. Check cluster status

Generate the certificate kubectl uses to connect to the cluster:

cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Xi'an",
      "ST": "Shaanxi",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

Generate the kubeconfig file:

mkdir /root/.kube
KUBE_CONFIG="/root/.kube/config"
KUBE_APISERVER="https://192.168.122.10:6443"

kubectl config set-cluster kubernetes \
  --certificate-authority=/data/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials cluster-admin \
  --client-certificate=./admin.pem \
  --client-key=./admin-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
  --cluster=kubernetes \
  --user=cluster-admin \
  --kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
[root@master k8s]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-1               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"} 

Deploy the node components

We deploy the node components on master first as an example; afterwards the same files can simply be copied to node01 and node02.

1. Deploy kubelet

#all of the following is executed on master
 
#copy the binaries needed by the node components (from the kubernetes-server package extracted earlier)
cd ~/kubernetes/server/bin
cp kubelet kube-proxy /data/kubernetes/bin

1. Add the kubelet config
#note: change --hostname-override= to the current host's IP
cat > /data/kubernetes/cfg/kubelet.conf << EOF
KUBELET_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/data/kubernetes/logs \\
--hostname-override=192.168.122.10 \\
--network-plugin=cni \\
--kubeconfig=/data/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/data/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/data/kubernetes/cfg/kubelet-config.yml \\
--cert-dir=/data/kubernetes/ssl \\
--pod-infra-container-image=k8s.gcr.io/pause-amd64:3.2"
EOF

Parameter notes

--hostname-override           #display name, unique within the cluster
--network-plugin              #network plugin to use, normally cni (the alternative is kubenet)
--kubeconfig                  #empty path; generated automatically and used later to connect to the apiserver
--bootstrap-kubeconfig        #used on first start to request a certificate from the apiserver
--config                      #configuration parameter file
--cert-dir                    #directory where kubelet certificates are generated
--pod-infra-container-image   #image of the container that manages the Pod network (pause)

2. Add the kubelet configuration parameters
cat > /data/kubernetes/cfg/kubelet-config.yml << EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /data/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
EOF
3. Generate the bootstrap kubeconfig file (bootstrap.kubeconfig)

Option 1:
Create a kubeconfig file directly, with contents similar to the following:

apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority: /var/lib/kubernetes/ca.pem
    server: https://my.server.example.com:6443
  name: bootstrap
contexts:
- context:
    cluster: bootstrap
    user: kubelet-bootstrap
  name: bootstrap
current-context: bootstrap
preferences: {}
users:
- name: kubelet-bootstrap
  user:
    token: 07401b.f395accd246ae52d

Option 2: because the bootstrap kubeconfig is a standard kubeconfig file, kubectl can be used to generate it. To produce a file like the example above,
see the official reference on kubelet bootstrap configuration.

#temporary variables
KUBE_APISERVER="https://192.168.122.10:6443" # apiserver IP:PORT
TOKEN="19c2834350a42f92912196a841ceecad" # must match the token in /data/kubernetes/cfg/token.csv
 
# generate the kubelet bootstrap kubeconfig file
# set the cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=/data/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=bootstrap.kubeconfig
 # set the client credential parameters
kubectl config set-credentials "kubelet-bootstrap" \
--token=${TOKEN} \
--kubeconfig=bootstrap.kubeconfig
 # set the context parameters
kubectl config set-context default \
--cluster=kubernetes \
--user="kubelet-bootstrap" \
--kubeconfig=bootstrap.kubeconfig
 # set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
 
#copy the kubeconfig into the cfg directory
cp bootstrap.kubeconfig /data/kubernetes/cfg
4. Manage kubelet with systemd
cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service
 
[Service]
EnvironmentFile=/data/kubernetes/cfg/kubelet.conf
ExecStart=/data/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536
 
[Install]
WantedBy=multi-user.target
EOF
5. Start the kubelet service
systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet

2. Approve the kubelet certificate request and join the cluster

1. Check the certificate signing request status
[root@master tls]# kubectl get csr

#the condition at the end is Pending

NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-X6f2ogMt5n3_fzpD1rfG5_jJqn4unxYEwnDd7cmP0sc   46s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
2. Approve the certificate request
kubectl certificate approve node-csr-X6f2ogMt5n3_fzpD1rfG5_jJqn4unxYEwnDd7cmP0sc
3. Check the CSR status again
[root@master tls]# kubectl get csr
NAME                                                   AGE     SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-X6f2ogMt5n3_fzpD1rfG5_jJqn4unxYEwnDd7cmP0sc   6m20s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued

4. Check node status
[root@master tls]# kubectl get node
NAME             STATUS     ROLES    AGE     VERSION
192.168.122.10   NotReady   <none>   7m23s   v1.19.0

Normally the node would be Ready; it shows NotReady here because of a pitfall I ran into.

The error reported was: "E0402 17:54:42.335761 2124 kubelet.go:2103] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
It recovers once the CNI plugin is installed.
--network-plugin: the name of the network plugin. For CNI plugins set it to cni (there is no need to touch --network-plugin-dir). The other option is kubenet, which currently only implements a simple cbr0 Linux bridge.
When --network-plugin="cni" is set, the kubelet also honours the following two parameters (both have defaults and can be omitted if the paths are unchanged):
--cni-conf-dir: directory for CNI plugin configuration files, default /etc/cni/net.d. The files in this directory must conform to the CNI spec.
--cni-bin-dir: directory for CNI plugin binaries, default /opt/cni/bin.
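Until the CNI plugin is installed, the kubelet keeps logging this complaint; since kubelet runs under systemd here, it can be watched with journalctl (optional check):

journalctl -u kubelet --no-pager | grep -i "network plugin"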

3. Deploy kube-proxy

1. Create the kube-proxy config file
cat > /data/kubernetes/cfg/kube-proxy.conf << EOF
KUBE_PROXY_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/data/kubernetes/logs \\
--config=/data/kubernetes/cfg/kube-proxy-config.yml"
EOF
2. Add the configuration parameters
cat > /data/kubernetes/cfg/kube-proxy-config.yml << EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /data/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: 192.168.122.10       #note: change this to the current host's IP
clusterCIDR: 10.0.0.0/24
EOF
3. Generate the kube-proxy.kubeconfig file

Create the certificate signing request file

cd  /data/tls/k8s
 
cat > kube-proxy-csr.json<< EOF
{
    "CN": "system:kube-proxy",
    "hosts": [],
    "key": {
      "algo": "rsa",
      "size": 2048
},
"names": [
    {
      "C": "CN",
      "L": "Xi'an",
      "ST": "Shaanxi",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
4. Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
5. Generate the kube-proxy.kubeconfig file
KUBE_APISERVER="https://192.168.122.10:6443"
 # set the cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=/data/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig
 # set the client credential parameters
kubectl config set-credentials kube-proxy \
--client-certificate=./kube-proxy.pem \
--client-key=./kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
 # set the context parameters
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
 # switch to the default context
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
6. Copy the kubeconfig file into the cfg directory

cp kube-proxy.kubeconfig /data/kubernetes/cfg/

7. Manage kube-proxy with systemd
cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Proxy
After=network.target
 
[Service]
EnvironmentFile=/data/kubernetes/cfg/kube-proxy.conf
ExecStart=/data/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536
 
[Install]
WantedBy=multi-user.target
EOF
8. Start the kube-proxy service
systemctl daemon-reload
systemctl start kube-proxy
systemctl enable kube-proxy
systemctl status kube-proxy

Authorize the apiserver to access the kubelet

Create a ClusterRole and bind it to the kubernetes user with a ClusterRoleBinding.
YAML file:

cat > apiserver-to-kubelet-rbac.yaml<< EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
      - pods/log
    verbs:
      - "*"
 
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF

Apply it:

kubectl apply -f apiserver-to-kubelet-rbac.yaml


Deploy the CNI network

Install the CNI plugins

#download URL
https://github.com/containernetworking/plugins/releases/download/v0.8.6/cni-plugins-linux-amd64-v0.8.6.tgz

#unpack

mkdir /opt/cni/bin -p
tar zxvf cni-plugins-linux-amd64-v0.8.6.tgz -C /opt/cni/bin

No CNI config file is created under /etc/cni/net.d/ here, because installing flannel mounts one automatically via a ConfigMap (Calico works the same way), so no manual configuration is needed.

We install the Flannel network plugin from its YAML manifest.
Fetch the flannel manifest:
https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
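Before applying it, check that the Network field in the manifest's net-conf.json matches the --cluster-cidr given to kube-controller-manager (10.244.0.0/16 in this guide). The stock manifest carries a ConfigMap entry roughly like this:

  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }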

Deploy flannel

kubectl create -f kube-flannel.yml

PS: if the pod logs show the error:

Error from server (Forbidden): Forbidden (user=kubernetes, verb=get, resource=nodes, subresource=proxy) ( pods/log kube-flannel-ds-zv4bj) 

the likely cause is that the apiserver has not been authorized to access the kubelet (see the previous section).

Add new nodes

Make sure Docker is installed and running on the node machines.

1. Copy the node-component files from master to the other nodes

scp -r /data/kubernetes root@192.168.122.11:/data/
scp -r /data/kubernetes root@192.168.122.12:/data/
scp -r /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@192.168.122.11:/usr/lib/systemd/system
scp -r /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@192.168.122.12:/usr/lib/systemd/system
scp -r /opt/cni/ root@192.168.122.11:/opt/
scp -r /opt/cni/ root@192.168.122.12:/opt/

2. Adjust the configuration on the nodes

1. Delete the kubelet certificate and kubeconfig files
#these files were generated automatically when the certificate request was approved; they differ per node and must be deleted and regenerated
rm -f /data/kubernetes/cfg/kubelet.kubeconfig
rm -f /data/kubernetes/ssl/kubelet*
2. Modify the configs
#192.168.122.11
vi /data/kubernetes/cfg/kubelet.conf
--hostname-override=192.168.122.11     #change to the current host's IP
 
#192.168.122.11
vi /data/kubernetes/cfg/kube-proxy-config.yml
hostnameOverride: 192.168.122.11       #change to the current host's IP


Repeat the same changes on 192.168.122.12.

3. Start kubelet

systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet

4. Approve the kubelet certificate requests

On master:

kubectl get csr
kubectl certificate approve [csr-name]
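If several nodes join at once, every pending request can be approved in one go (an optional convenience one-liner):

kubectl get csr --no-headers | awk '/Pending/ {print $1}' | xargs -r kubectl certificate approve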


5. Start kube-proxy
systemctl start kube-proxy
systemctl enable  kube-proxy
systemctl status   kube-proxy
6. Check node status

Run these wherever kubectl has been configured (master in this guide):

#check the flannel pods
kubectl get pod -n kube-system
#check node status
kubectl get node

PS:
If the flannel pods are stuck in "Init:0/1", inspect the pod events with kubectl describe pod kube-flannel-ds-hl98j -n kube-system.
If the k8s.gcr.io/pause-amd64:3.2 image fails to pull, there are two fixes:

  1. Pull the image manually, tag it, then delete the pods so they are recreated
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.2 k8s.gcr.io/pause-amd64:3.2
kubectl delete pod kube-flannel-ds-hl98j kube-flannel-ds-zvnmg  -n kube-system
  2. Or change the pause-amd64:3.2 image in /data/kubernetes/cfg/kubelet.conf (--pod-infra-container-image) to a reachable registry address, e.g. registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.2