Deploying a Kubernetes Cluster from Binary Packages

1. Prepare the Environment

Virtual machine OS: CentOS 7

Role    IP               Components
Master  192.168.150.20   kube-apiserver, kube-controller-manager, kube-scheduler, etcd
Node1   192.168.150.21   kubelet, kube-proxy, docker, etcd

2. System Initialization

Perform the following steps on all of the virtual machines:

2.1 Disable the Firewall

$ systemctl stop firewalld
$ systemctl disable firewalld

2.2 Disable SELinux

$ sed -i 's/enforcing/disabled/' /etc/selinux/config  
$ setenforce 0  

2.3 Disable Swap

$ swapoff -a 
$ sed -ri 's/.*swap.*/#&/' /etc/fstab 

2.4 Set the Hostname

$ hostnamectl set-hostname <hostname>

Note: reboot the machine after setting the hostname so the new name takes effect everywhere.

2.5 Add hosts Entries on Master

$ cat >> /etc/hosts << EOF
192.168.150.20 master
192.168.150.21 node1
EOF

2.6 Pass Bridged IPv4 Traffic to the iptables Chains

$ cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
$ sysctl --system
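
If sysctl --system reports that the net.bridge keys are unknown, the br_netfilter kernel module is probably not loaded yet. A small sketch (an extra step not in the original) that loads it now and on every boot via systemd's modules-load.d:

$ modprobe br_netfilter
$ echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf
$ sysctl --system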

2.7 Set Up Time Synchronization

$ yum install ntpdate -y 
$ ntpdate time.windows.com

3. Deploy the Etcd Cluster

Node name   IP
etcd-1      192.168.150.20
etcd-2      192.168.150.21

Note: to save machines, etcd is co-located here with the K8s node machines.

3.1 Prepare the cfssl Certificate Tooling

$ wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 
$ wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 
$ wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 
$ chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64 
$ mv cfssl_linux-amd64 /usr/local/bin/cfssl 
$ mv cfssljson_linux-amd64 /usr/local/bin/cfssljson 
$ mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo

3.2 Generate Etcd Certificates

(1) Self-signed certificate authority (CA)

Create the working directories:

$ mkdir -p ~/TLS/{etcd,k8s}
$ cd ~/TLS/etcd

Self-sign the CA:

$ cat > ca-config.json << EOF
{
    "signing": {
        "default": {
            "expiry": "87600h"
        },
        "profiles": {
            "www": {
                "expiry": "87600h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            }
        }
    }
}
EOF
$ cat > ca-csr.json << EOF
{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
EOF
Generate the certificate:
$ cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

Verify the files were generated:

$ ls *pem
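If the CA was generated successfully, the listing should look like this (a ca.csr file is also produced but is not needed afterwards):

ca-key.pem  ca.pem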
(2) Issue the Etcd HTTPS certificate with the self-signed CA

Create the certificate signing request file:

$ cat > server-csr.json << EOF
{
    "CN": "etcd",
    "hosts": [
        "192.168.150.20",
        "192.168.150.21"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing"
        }
    ]
}
EOF
Note: the hosts field above must contain the internal cluster-communication IP of every etcd node; not a single one may be missing!

Generate the certificate:
$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server

Verify the files were generated:

$ ls server*
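Expected listing (a server.csr file is also produced):

server.csr  server-key.pem  server.pem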

3.3 Download the Binaries from GitHub

$ wget https://github.com/etcd-io/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz

3.4 Deploy the Etcd Cluster

Work on the master node first; all files generated on master will be copied to node1 afterwards.

(1) Create the working directory and extract the binary package

$ mkdir -p /opt/etcd/{bin,cfg,ssl}
$ tar zxvf etcd-v3.4.9-linux-amd64.tar.gz 
$ mv etcd-v3.4.9-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/

(2) Create the etcd configuration file

$ cat > /opt/etcd/cfg/etcd.conf << EOF
#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://<master IP>:2380"
ETCD_LISTEN_CLIENT_URLS="https://<master IP>:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://<master IP>:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://<master IP>:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://<master IP>:2380,etcd-2=https://<node1 IP>:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

# Notes:
ETCD_NAME: node name, unique within the cluster
ETCD_DATA_DIR: data directory
ETCD_LISTEN_PEER_URLS: listen address for cluster communication
ETCD_LISTEN_CLIENT_URLS: listen address for client access
ETCD_INITIAL_ADVERTISE_PEER_URLS: advertised address for cluster communication
ETCD_ADVERTISE_CLIENT_URLS: advertised address for clients
ETCD_INITIAL_CLUSTER: addresses of the cluster nodes
ETCD_INITIAL_CLUSTER_TOKEN: cluster token
ETCD_INITIAL_CLUSTER_STATE: state when joining the cluster; new for a new cluster, existing to join an existing one

(3) Manage etcd with systemd

$ cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd.conf
ExecStart=/opt/etcd/bin/etcd \\
--cert-file=/opt/etcd/ssl/server.pem \\
--key-file=/opt/etcd/ssl/server-key.pem \\
--peer-cert-file=/opt/etcd/ssl/server.pem \\
--peer-key-file=/opt/etcd/ssl/server-key.pem \\
--trusted-ca-file=/opt/etcd/ssl/ca.pem \\
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

(4) Copy the certificates generated earlier

$ cp ~/TLS/etcd/ca*pem ~/TLS/etcd/server*pem /opt/etcd/ssl/

(5) Copy all the files generated on master to node1:

$ scp -r /opt/etcd/ root@<node1 IP>:/opt/
$ scp /usr/lib/systemd/system/etcd.service root@<node1 IP>:/usr/lib/systemd/system/

On node1, edit etcd.conf and change the node name and the IPs to node1's own:

$ vi /opt/etcd/cfg/etcd.conf
#[Member]
ETCD_NAME="etcd-2"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://node1的IP:2380"
ETCD_LISTEN_CLIENT_URLS="https://node1的IP:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://node1的IP:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://node1的IP:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.150.20:2380,etcd-2=https://192.168.150.21:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

3.5 Check the etcd Cluster Status

Start etcd on the master and node1 nodes at the same time:

$ systemctl daemon-reload
$ systemctl start etcd

Note: etcd.service points at the other etcd hosts, so a single instance started on its own cannot reach its peers and will fail to start. Configure every host in the etcd cluster first, then start them all at the same time!

Enable start on boot:

$ systemctl enable etcd

Check the etcd cluster status:

$ systemctl status etcd

# Expected output:
master node:
● etcd.service - Etcd Server
   Loaded: loaded (/usr/lib/systemd/system/etcd.service; disabled; vendor preset: disabled)
   Active: active (running) since Fri 2022-10-14 11:46:11 EDT; 15s ago
 Main PID: 17401 (etcd)
   CGroup: /system.slice/etcd.service
           └─17401 /opt/etcd/bin/etcd --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --peer-cert-file=/opt/etcd/ssl/server.pem --peer-key-file=/opt/etcd/ssl/serv...

Oct 14 11:46:11 master etcd[17401]: raft2022/10/14 11:46:11 INFO: 871053d560e41f62 has received 2 MsgVoteResp votes and 0 vote rejections
Oct 14 11:46:11 master etcd[17401]: raft2022/10/14 11:46:11 INFO: 871053d560e41f62 became leader at term 12
Oct 14 11:46:11 master etcd[17401]: raft2022/10/14 11:46:11 INFO: raft.node: 871053d560e41f62 elected leader 871053d560e41f62 at term 12
Oct 14 11:46:11 master etcd[17401]: published {Name:etcd-1 ClientURLs:[https://192.168.150.20:2379]} to cluster 8488de0f79375062
Oct 14 11:46:11 master etcd[17401]: ready to serve client requests
Oct 14 11:46:11 master systemd[1]: Started Etcd Server.
Oct 14 11:46:11 master etcd[17401]: serving client requests on 192.168.150.20:2379
Oct 14 11:46:11 master etcd[17401]: setting up the initial cluster version to 3.4
Oct 14 11:46:11 master etcd[17401]: set the initial cluster version to 3.4
Oct 14 11:46:11 master etcd[17401]: enabled capabilities for version 3.4

node1 node:
● etcd.service - Etcd Server
   Loaded: loaded (/usr/lib/systemd/system/etcd.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2022-10-14 11:46:11 EDT; 19min ago
 Main PID: 16958 (etcd)
   CGroup: /system.slice/etcd.service
           └─16958 /opt/etcd/bin/etcd --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --peer-cert-file=/opt/etcd/ssl/server.pem --peer-key-file=/opt/etcd/ssl/server-key.pem -...

Oct 14 11:46:11 node1 etcd[16958]: raft2022/10/14 11:46:11 INFO: 7b86f805bf06433c [term: 1] received a MsgVote message with higher term from 871053d560e41f62 [term: 12]
Oct 14 11:46:11 node1 etcd[16958]: raft2022/10/14 11:46:11 INFO: 7b86f805bf06433c became follower at term 12
Oct 14 11:46:11 node1 etcd[16958]: raft2022/10/14 11:46:11 INFO: 7b86f805bf06433c [logterm: 1, index: 2, vote: 0] cast MsgVote for 871053d560e41f62 [logterm: 1, index: 2] at term 12
Oct 14 11:46:11 node1 etcd[16958]: raft2022/10/14 11:46:11 INFO: raft.node: 7b86f805bf06433c elected leader 871053d560e41f62 at term 12
Oct 14 11:46:11 node1 etcd[16958]: published {Name:etcd-2 ClientURLs:[https://192.168.150.21:2379]} to cluster 8488de0f79375062
Oct 14 11:46:11 node1 etcd[16958]: ready to serve client requests
Oct 14 11:46:11 node1 etcd[16958]: serving client requests on 192.168.150.21:2379
Oct 14 11:46:11 node1 systemd[1]: Started Etcd Server.
Oct 14 11:46:11 node1 etcd[16958]: set the initial cluster version to 3.4
Oct 14 11:46:11 node1 etcd[16958]: enabled capabilities for version 3.4

The etcd cluster has started successfully.
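
Optionally, probe cluster health with the bundled etcdctl as well (a hedged extra check, not in the original; etcdctl v3.4 defaults to the v3 API):

$ /opt/etcd/bin/etcdctl \
  --cacert=/opt/etcd/ssl/ca.pem \
  --cert=/opt/etcd/ssl/server.pem \
  --key=/opt/etcd/ssl/server-key.pem \
  --endpoints="https://192.168.150.20:2379,https://192.168.150.21:2379" \
  endpoint health

Both endpoints should report "is healthy".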

4. Deploy the Master Node

4.1 Self-signed Certificate Authority (CA)

Work in the k8s TLS directory created in section 3.2:

$ cd ~/TLS/k8s

$ cat > ca-config.json << EOF
{
    "signing": {
        "default": {
            "expiry": "87600h"
        },
        "profiles": {
            "kubernetes": {
                "expiry": "87600h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            }
        }
    }
}
EOF
$ cat > ca-csr.json << EOF
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

4.2 Generate the Certificate

$ cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

Verify the files were generated:

$ ls *pem
ca-key.pem  ca.pem

4.3 Issue the kube-apiserver HTTPS Certificate with the Self-signed CA

Create the certificate signing request file:

$ cat > server-csr.json << EOF
{
    "CN": "kubernetes",
    "hosts": [
        "10.0.0.1",
        "127.0.0.1",
        "master节点IP",
        "node1节点",
        "其他后期需要加入的节点IP",
        "其他后期需要加入的节点IP",
        "其他后期需要加入的节点IP",
        "kubernetes",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

Generate the certificate and verify:

$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
$ ls server*.pem
 server-key.pem  server.pem

5. Deploy the Master Components: kube-apiserver, kube-controller-manager, and kube-scheduler

5.1 Deploy kube-apiserver

(1) Download the binaries from GitHub

$ wget https://dl.k8s.io/v1.19.16/kubernetes-server-linux-amd64.tar.gz

(2) Extract the binary package:

$ mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs} 
$ tar zxvf kubernetes-server-linux-amd64.tar.gz 
$ cd kubernetes/server/bin 
$ cp kube-apiserver kube-scheduler kube-controller-manager /opt/kubernetes/bin 
$ cp kubectl /usr/bin/

(3) Deploy kube-apiserver:

Create the configuration file:

cat > /opt/kubernetes/cfg/kube-apiserver.conf << EOF
KUBE_APISERVER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--etcd-servers=https://192.168.150.20:2379,https://192.168.150.21:2379 \\
--bind-address=192.168.150.20 \\
--secure-port=6443 \\
--advertise-address=192.168.150.20 \\
--allow-privileged=true \\
--service-cluster-ip-range=10.0.0.0/24 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--enable-bootstrap-token-auth=true \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-32767 \\
--kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \\
--kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/opt/etcd/ssl/ca.pem \\
--etcd-certfile=/opt/etcd/ssl/server.pem \\
--etcd-keyfile=/opt/etcd/ssl/server-key.pem \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/opt/kubernetes/logs/k8s-audit.log"
EOF


# Parameter notes:
--logtostderr: logging switch (false here, so logs go to the --log-dir files)
--v: log level
--log-dir: log directory
--etcd-servers: etcd cluster addresses
--bind-address: listen address
--secure-port: https secure port
--advertise-address: advertised cluster address
--allow-privileged: allow privileged containers
--service-cluster-ip-range: Service virtual IP range
--enable-admission-plugins: admission control plugins
--authorization-mode: authorization modes; enables RBAC and Node self-management
--enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
--token-auth-file: bootstrap token file
--service-node-port-range: default port range for NodePort Services
--kubelet-client-xxx: client certificates for apiserver access to kubelet
--tls-xxx-file: apiserver https certificates
--etcd-xxxfile: certificates for connecting to the etcd cluster
--audit-log-xxx: audit log settings

Copy the certificates generated earlier:

$ cp ~/TLS/k8s/ca*pem ~/TLS/k8s/server*pem /opt/kubernetes/ssl/

Enable the TLS Bootstrapping mechanism:

$ export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
$ cat > /opt/kubernetes/cfg/token.csv << EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:node-bootstrapper"
EOF
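
The file format is token,user,uid,"group". Keep this token handy: it is needed again in section 6.2 when generating bootstrap.kubeconfig on the node:

$ cat /opt/kubernetes/cfg/token.csv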

Manage the apiserver with systemd. Note the quoted 'EOF': it keeps the shell from expanding $KUBE_APISERVER_OPTS while the unit file is written:

$ cat > /usr/lib/systemd/system/kube-apiserver.service << 'EOF'
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF

5.2 Deploy kube-controller-manager

(1) Create the configuration file:

$ cat > /opt/kubernetes/cfg/kube-controller-manager.conf << EOF
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--leader-elect=true \\
--master=127.0.0.1:8080 \\
--bind-address=127.0.0.1 \\
--allocate-node-cidrs=true \\
--cluster-cidr=10.244.0.0/16 \\
--service-cluster-ip-range=10.0.0.0/24 \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--experimental-cluster-signing-duration=87600h0m0s"
EOF

# Notes:
--master: connect to the apiserver through the local insecure port 8080.
--leader-elect: automatic leader election when multiple instances of this component run (HA).
--cluster-signing-cert-file / --cluster-signing-key-file: the CA used to automatically issue certificates for kubelets; must match the apiserver's CA.

(2) Manage controller-manager with systemd:

$ cat > /usr/lib/systemd/system/kube-controller-manager.service << 'EOF'
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/opt/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF

5.3 Deploy kube-scheduler

(1) Create the configuration file:

cat > /opt/kubernetes/cfg/kube-scheduler.conf << EOF
KUBE_SCHEDULER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--leader-elect \
--master=127.0.0.1:8080 \
--bind-address=127.0.0.1"
EOF

(2) Manage the scheduler with systemd:

$ cat > /usr/lib/systemd/system/kube-scheduler.service << 'EOF'
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler.conf
ExecStart=/opt/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF

5.4 Start the Three Components

$ systemctl daemon-reload
$ systemctl start kube-apiserver
$ systemctl start kube-controller-manager
$ systemctl start kube-scheduler
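
Optionally enable all three to start on boot as well (an addition not in the original, which only enables etcd and docker):

$ systemctl enable kube-apiserver kube-controller-manager kube-scheduler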

5.5 Verify the Master Components

# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-1               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}

The master components have been deployed successfully.

5.6 Authorize the kubelet-bootstrap User to Request Certificates

$ kubectl create clusterrolebinding kubelet-bootstrap  --clusterrole=system:node-bootstrapper  --user=kubelet-bootstrap

6. Deploy the Node Components: docker, kubelet, and kube-proxy

6.1 Install Docker

Official installation guide: https://docs.docker.com/engine/install/centos/

Installation steps:

Install gcc via yum:

$ yum -y install gcc
$ yum -y install gcc-c++

Install the required packages:

$ yum install -y yum-utils		
$ yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Refresh the yum package index:

$ yum makecache fast

Install docker-ce:

$ yum install docker-ce docker-ce-cli containerd.io docker-compose-plugin

Verify that docker installed successfully:

$ docker version

Start docker:

$ systemctl daemon-reload
$ systemctl start docker
$ systemctl enable docker

Configure the Aliyun registry mirror:

$ sudo mkdir -p /etc/docker
$ sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://fxt824bw.mirror.aliyuncs.com"]
}
EOF

Restart docker:

$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
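
docker info lists configured mirrors under "Registry Mirrors"; a quick check (not in the original):

$ docker info | grep -A 1 -i "registry mirrors"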

6.2 Deploy kubelet

Create the configuration file (on node1):

cat > /opt/kubernetes/cfg/kubelet.conf << EOF 
KUBELET_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--hostname-override=node1 \
--network-plugin=cni \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet-config.yml \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=lizhenliang/pause-amd64:3.0"
EOF


# Parameter notes:
--hostname-override: display name, unique within the cluster
--network-plugin: enable CNI
--kubeconfig: empty path; the file is generated automatically and later used to connect to the apiserver
--bootstrap-kubeconfig: used on first start to request a certificate from the apiserver
--config: configuration parameter file
--cert-dir: directory for generated kubelet certificates
--pod-infra-container-image: image of the container that manages the Pod network

Configuration parameter file:

cat > /opt/kubernetes/cfg/kubelet-config.yml << EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local 
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem 
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
EOF

Copy the certificate files from the master node to node1:

$ scp -r /opt/kubernetes/ssl root@<node1 IP>:/opt/kubernetes
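
The original glosses over this, but node1 also needs the /opt/kubernetes directory layout and the kubelet and kube-proxy binaries from the kubernetes/server/bin directory extracted on master in section 5.1 (plus kubectl if you run the kubeconfig commands below on the node). A hedged sketch, run on master from the directory where the tarball was extracted:

$ ssh root@<node1 IP> "mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}"
$ scp kubernetes/server/bin/{kubelet,kube-proxy} root@<node1 IP>:/opt/kubernetes/bin/
$ scp kubernetes/server/bin/kubectl root@<node1 IP>:/usr/bin/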

Generate the bootstrap.kubeconfig file (on the node):

KUBE_APISERVER="https://master节点IP:6443" 
TOKEN="master节点中token.csv的token值" 

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig
kubectl config set-credentials "kubelet-bootstrap" \
  --token=${TOKEN} \
  --kubeconfig=bootstrap.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user="kubelet-bootstrap" \
  --kubeconfig=bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
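
A quick sanity check that the file was assembled correctly (kubectl config view redacts the embedded certificate data):

$ kubectl config view --kubeconfig=bootstrap.kubeconfig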

Copy it to the configuration directory:

$ cp bootstrap.kubeconfig /opt/kubernetes/cfg/

Manage kubelet with systemd:

$ cat > /usr/lib/systemd/system/kubelet.service << 'EOF'
[Unit]
Description=Kubernetes Kubelet
After=docker.service
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF

6.3 Deploy kube-proxy

Create the configuration file:

cat > /opt/kubernetes/cfg/kube-proxy.conf << EOF
KUBE_PROXY_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--config=/opt/kubernetes/cfg/kube-proxy-config.yml"
EOF

Configuration parameter file:

cat > /opt/kubernetes/cfg/kube-proxy-config.yml << EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: node1
clusterCIDR: 10.0.0.0/24
EOF

Generate the certificate request file (on the master node):

$ cd ~/TLS/k8s
$ cat > kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

Generate the certificate:

$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

Verify the files were generated:

$ ls kube-proxy*pem

Send them to node1 (on the master node):

$ scp /root/TLS/k8s/kube-proxy*.pem root@192.168.150.21:/opt/kubernetes/ssl

Generate the kubeconfig file (on the node):

KUBE_APISERVER="https://<master IP>:6443"
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
  --client-certificate=/opt/kubernetes/ssl/kube-proxy.pem \
  --client-key=/opt/kubernetes/ssl/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

Copy it to the configuration directory:

$ cp kube-proxy.kubeconfig /opt/kubernetes/cfg/

Manage kube-proxy with systemd:

$ cat > /usr/lib/systemd/system/kube-proxy.service << 'EOF'
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF

6.4 Start and Enable on Boot

$ systemctl daemon-reload
$ systemctl start kubelet
$ systemctl start kube-proxy
$ systemctl enable kubelet kube-proxy

Check the status:

$ systemctl status kubelet
$ systemctl status kube-proxy


# Expected output:
● kubelet.service - Kubernetes Kubelet
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2022-10-16 08:41:09 EDT; 3h 23min ago
 Main PID: 18629 (kubelet)
   CGroup: /system.slice/kubelet.service
           └─18629 /opt/kubernetes/bin/kubelet

● kube-proxy.service - Kubernetes Proxy
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2022-10-16 11:40:31 EDT; 24min ago
 Main PID: 58125 (kube-proxy)
   CGroup: /system.slice/kube-proxy.service
           └─58125 /opt/kubernetes/bin/kube-proxy --logtostderr=false --v=2 --log-dir=/opt/kubernetes/logs --config=/opt/kubernetes/cfg/kube-proxy-config.yml

The node1 components have been deployed successfully.

6.5 Join node1 to the Cluster

View the kubelet certificate request:

$ kubectl get csr

Approve the request (on the master node):

$ kubectl certificate approve <csr-name-from-previous-command>
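
After approval the node registers itself; it will typically show NotReady until the CNI plugin from section 7 is installed. A quick check (not in the original):

$ kubectl get node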

After this, if you need to add new Node machines to the cluster, simply repeat the steps above.

7. Install the CNI Network Plugin

Download the CNI binaries:

$ wget https://github.com/containernetworking/plugins/releases/download/v0.8.6/cni-plugins-linux-amd64-v0.8.6.tgz

Extract the binary package into the default working directory:

$ mkdir -p /opt/cni/bin
$ tar zxvf cni-plugins-linux-amd64-v0.8.6.tgz -C /opt/cni/bin

Deploy the CNI network:

$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Errors may occur; in that case, modify the yml file as follows:

---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
    - configMap
    - secret
    - emptyDir
    - hostPath
  allowedHostPaths:
    - pathPrefix: "/etc/cni/net.d"
    - pathPrefix: "/etc/kube-flannel"
    - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups: ['extensions']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames: ['psp.flannel.unprivileged']
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "cniVersion": "0.2.0",
      "name": "cbr0",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - amd64
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: lizhenliang/flannel:v0.11.0-amd64 
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: lizhenliang/flannel:v0.11.0-amd64 
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg

Upload this yml file to the master node, then run:

$ kubectl apply -f kube-flannel.yml
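
Once applied, the flannel DaemonSet pod should come up in the kube-system namespace and node1 should switch to Ready (verification commands, not in the original):

$ kubectl get pods -n kube-system
$ kubectl get node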

8. Test the Kubernetes Cluster

Deploy an nginx test (on the master node):

$ kubectl create deployment nginx --image=nginx
$ kubectl expose deployment nginx --port=80 --type=NodePort
$ kubectl get pods,svc

Access nginx at http://<any node IP>:<NodePort>.
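
For example, assuming kubectl get pods,svc reported a NodePort of 30001 for the nginx Service (the real port is assigned from the 30000-32767 range configured in section 5.1):

$ curl http://192.168.150.21:30001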

       

        
