Building a Kubernetes Cluster from Binaries


1. Preparation

Role                      IP              Components
ubuntu-server1 (master)   192.168.71.11   kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, docker
ubuntu-server2 (node)     192.168.71.12   kubelet, kube-proxy, docker, etcd
ubuntu-server3 (node)     192.168.71.13   kubelet, kube-proxy, docker, etcd

Per machine: CPU >= 2 cores, RAM >= 2 GB, disk >= 35 GB

1.1 OS: Ubuntu 20.04 (official site: https://ubuntu.com/)

1.2 Disable the firewall

sudo ufw status   # check the firewall; it must report "Status: inactive"
sudo ufw disable  # disable the firewall permanently
sudo ufw enable   # (for reference: re-enables the firewall)
###########
# CentOS
systemctl stop firewalld
systemctl disable firewalld

1.3 Disable SELinux

## Can be skipped on Ubuntu 20.04
######################
# CentOS
setenforce 0                                         # temporary
sed -i 's/enforcing/disabled/' /etc/selinux/config   # permanent

1.4 Disable swap (permanently, ideally)

free -h # check swap usage


swapoff -a      # temporary
vim /etc/fstab  # permanent: comment out the swap line
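
If you'd rather not edit the file by hand, a sed one-liner along these lines should comment out the swap entry (an optional shortcut; keep the backup in case the pattern misses a nonstandard entry):

cp /etc/fstab /etc/fstab.bak
sed -ri 's/^([^#].*\sswap\s.*)$/#\1/' /etc/fstab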


1.5 Sync the system time

Check the current time:

date
sudo apt install ntpdate
sudo ntpdate time.windows.com
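
Ubuntu 20.04 ships with systemd-timesyncd enabled by default, so as an alternative it is usually enough to verify that the clock is already synchronized:

timedatectl status   # look for "System clock synchronized: yes"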

1.6 Set the hostname

Change it according to your own plan:

vim /etc/hostname

1.7 Update hosts

vim /etc/hosts
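
Based on the host plan in section 1, the entries should look like this:

192.168.71.11 ubuntu-server1
192.168.71.12 ubuntu-server2
192.168.71.13 ubuntu-server3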


1.8 Pass bridged IPv4 traffic to the iptables chains and enable IP forwarding

vim /etc/sysctl.conf
####################### append the following
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
#########################
sysctl -p # reload the configuration
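
If sysctl -p complains that the net.bridge.* keys are unknown, the br_netfilter module is most likely not loaded yet; loading it (and persisting it across reboots) should fix that:

modprobe br_netfilter
echo br_netfilter >> /etc/modules   # loaded automatically on boot
sysctl -p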
1.9 Install ipvsadm and ipset

sudo apt install ipvsadm ipset
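
kube-proxy runs in ipvs mode later on (section 4.5), which also needs the IPVS kernel modules; loading them up front avoids a silent fallback to iptables mode. A minimal sketch, assuming the standard module names on Ubuntu 20.04's 5.4 kernel:

for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack; do
  modprobe $m && echo $m >> /etc/modules   # load now and on every boot
done
lsmod | grep ip_vs   # verify the modules are present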

1.10 Run every step above on each machine, then reboot.


2. Deploying the etcd Cluster

Work on the master node first; once the configuration is complete, copy the files straight to the other nodes.

2.1 Install the cfssl tools

curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo

You can also download a newer release directly from GitHub: https://github.com/cloudflare/cfssl
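
Whichever source you use, a quick sanity check that the tools are installed and executable:

cfssl version
which cfssljson cfssl-certinfo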

2.2 Generate the etcd certificates

2.2.1 Create the directories
mkdir /usr/local/kubernetes/{k8s-cert,etcd-cert} -p
cd /usr/local/kubernetes/etcd-cert/
2.2.2 Create the JSON config used to generate the CA files
vim ca-config.json

ca-config.json

{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
2.2.3 Create the JSON config for the CA certificate signing request (CSR)
vim ca-csr.json

ca-csr.json

{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
2.2.4 Generate the CA certificate (ca.pem) and key (ca-key.pem)
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
ls

ls should now show ca.csr, ca-key.pem, and ca.pem alongside the two JSON files.

2.2.5 Create the etcd certificate signing request
vim server-csr.json

server-csr.json

{
    "CN": "etcd",
    "hosts": [
        "192.168.71.11",
        "192.168.71.12",
        "192.168.71.13",
        "192.168.71.14",
        "192.168.71.15"
        ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing"
        }
    ]
}

Note: the hosts field above must list the internal cluster-communication IP of every etcd node; not a single one may be missing! To make later expansion easier, you can also list a few reserved IPs.

2.2.6 Generate the etcd certificate and private key
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server

This produces server.csr, server-key.pem, and server.pem in the current directory.

2.3 Deploy etcd

2.3.1 Download the etcd binary package

Download address: https://github.com/etcd-io/etcd/releases. This guide uses etcd-v3.5.0-linux-amd64.tar.gz.

cd
tar -zxvf etcd-v3.5.0-linux-amd64.tar.gz
mkdir /opt/etcd/{bin,cfg,ssl} -p
cd etcd-v3.5.0-linux-amd64/
cp ./{etcd,etcdctl} /opt/etcd/bin/


2.3.2 Create the etcd configuration file
cd /opt/etcd/cfg
vim /opt/etcd/cfg/etcd.conf

# NAME is your choice, but it must match this node's entry in INITIAL_CLUSTER
# INITIAL_CLUSTER lists the peer URLs of all nodes
# LISTEN_PEER_URLS, LISTEN_CLIENT_URLS, INITIAL_ADVERTISE_PEER_URLS, and ADVERTISE_CLIENT_URLS use this machine's own IP

#[Member]
NAME="etcd-1"
DATA_DIR="/var/lib/etcd/default.etcd"
LISTEN_PEER_URLS="https://192.168.71.11:2380"
LISTEN_CLIENT_URLS="https://192.168.71.11:2379"

#[Clustering]
INITIAL_ADVERTISE_PEER_URLS="https://192.168.71.11:2380"
ADVERTISE_CLIENT_URLS="https://192.168.71.11:2379"
INITIAL_CLUSTER="etcd-1=https://192.168.71.11:2380,etcd-2=https://192.168.71.12:2380,etcd-3=https://192.168.71.13:2380"
INITIAL_CLUSTER_TOKEN="etcd-cluster"
INITIAL_CLUSTER_STATE="new"

Key reference:
NAME                         node name
DATA_DIR                     data directory
LISTEN_PEER_URLS             cluster (peer) listen address
LISTEN_CLIENT_URLS           client listen address
INITIAL_ADVERTISE_PEER_URLS  advertised peer address
ADVERTISE_CLIENT_URLS        advertised client address
INITIAL_CLUSTER              addresses of all cluster members
INITIAL_CLUSTER_TOKEN        cluster token
INITIAL_CLUSTER_STATE        join state: "new" for a brand-new cluster, "existing" when joining an existing one

2.3.3 Create the etcd systemd service
vim /usr/lib/systemd/system/etcd.service

etcd.service

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd.conf
ExecStart=/opt/etcd/bin/etcd \
      --name=${NAME} \
      --data-dir=${DATA_DIR} \
      --listen-peer-urls=${LISTEN_PEER_URLS} \
      --listen-client-urls=${LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
      --advertise-client-urls=${ADVERTISE_CLIENT_URLS} \
      --initial-advertise-peer-urls=${INITIAL_ADVERTISE_PEER_URLS} \
      --initial-cluster=${INITIAL_CLUSTER} \
      --initial-cluster-token=${INITIAL_CLUSTER_TOKEN} \
      --initial-cluster-state=${INITIAL_CLUSTER_STATE} \
      --cert-file=/opt/etcd/ssl/server.pem \
      --key-file=/opt/etcd/ssl/server-key.pem \
      --peer-cert-file=/opt/etcd/ssl/server.pem \
      --peer-key-file=/opt/etcd/ssl/server-key.pem \
      --trusted-ca-file=/opt/etcd/ssl/ca.pem \
      --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
2.3.4 Copy the certificates
cd /usr/local/kubernetes/etcd-cert/
cp ./{ca,server-key,server}.pem /opt/etcd/ssl/
2.3.5 Copy the configuration to the other nodes and adjust it
scp -r /opt/etcd/ root@192.168.71.12:/opt/
scp -r /opt/etcd/ root@192.168.71.13:/opt/
scp /usr/lib/systemd/system/etcd.service root@192.168.71.12:/usr/lib/systemd/system
scp /usr/lib/systemd/system/etcd.service root@192.168.71.13:/usr/lib/systemd/system

On each node, change NAME to that node's name and the IP in each URL to that node's own address.

192.168.71.12 — etcd.conf:
#[Member]
NAME="etcd-2"
DATA_DIR="/var/lib/etcd/default.etcd"
LISTEN_PEER_URLS="https://192.168.71.12:2380"
LISTEN_CLIENT_URLS="https://192.168.71.12:2379"

#[Clustering]
INITIAL_ADVERTISE_PEER_URLS="https://192.168.71.12:2380"
ADVERTISE_CLIENT_URLS="https://192.168.71.12:2379"
INITIAL_CLUSTER="etcd-1=https://192.168.71.11:2380,etcd-2=https://192.168.71.12:2380,etcd-3=https://192.168.71.13:2380"
INITIAL_CLUSTER_TOKEN="etcd-cluster"
INITIAL_CLUSTER_STATE="new"
192.168.71.13 — etcd.conf:
#[Member]
NAME="etcd-3"
DATA_DIR="/var/lib/etcd/default.etcd"
LISTEN_PEER_URLS="https://192.168.71.13:2380"
LISTEN_CLIENT_URLS="https://192.168.71.13:2379"

#[Clustering]
INITIAL_ADVERTISE_PEER_URLS="https://192.168.71.13:2380"
ADVERTISE_CLIENT_URLS="https://192.168.71.13:2379"
INITIAL_CLUSTER="etcd-1=https://192.168.71.11:2380,etcd-2=https://192.168.71.12:2380,etcd-3=https://192.168.71.13:2380"
INITIAL_CLUSTER_TOKEN="etcd-cluster"
INITIAL_CLUSTER_STATE="new"
2.3.6 Start and verify

Start etcd on every node and enable it at boot. The first node you start will appear to hang until the other members come up and the cluster reaches quorum; that is expected.

systemctl daemon-reload 
systemctl start etcd.service 
systemctl enable etcd.service

Check the cluster health:

/opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.71.11:2379,https://192.168.71.12:2379,https://192.168.71.13:2379" endpoint health

A healthy cluster reports something like:

https://192.168.71.11:2379 is healthy: successfully committed proposal: took = ...
https://192.168.71.12:2379 is healthy: successfully committed proposal: took = ...
https://192.168.71.13:2379 is healthy: successfully committed proposal: took = ...
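
As an additional check (same TLS flags, different subcommand), etcdctl can list the cluster members; all three should show up as started:

/opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.71.11:2379" member list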

3. Deploying the Master Components

3.1 Generate the certificates

3.1.1 Create the JSON config used to generate the CA files
cd /usr/local/kubernetes/k8s-cert/
vim ca-config.json

ca-config.json

{
 "signing": {
   "default": {
     "expiry": "87600h"
   },
   "profiles": {
     "kubernetes": {
        "expiry": "87600h",
        "usages": [
           "signing",
           "key encipherment",
           "server auth",
           "client auth"
       ]
     }
   }
 }
}
3.1.2 Create the JSON config for the CA certificate signing request (CSR)
vim ca-csr.json
############
{
   "CN": "kubernetes",
   "key": {
       "algo": "rsa",
       "size": 2048
   },
   "names": [
       {
           "C": "CN",
           "L": "Beijing",
           "ST": "Beijing",
     	    "O": "k8s",
           "OU": "System"
       }
   ]
}
3.1.3 Generate the CA certificate (ca.pem) and key (ca-key.pem)
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -


3.1.4 Generate the api-server certificate

Plan a few extra IPs in the hosts field of the signing request so that nodes can be added later without re-issuing the certificate.

vim server-csr.json
################
{
   "CN": "kubernetes",
   "hosts": [
     "10.0.0.1",
     "127.0.0.1",
     "kubernetes",
     "kubernetes.default",
     "kubernetes.default.svc",
     "kubernetes.default.svc.cluster",
     "kubernetes.default.svc.cluster.local",
     "192.168.71.11",
     "192.168.71.12",
     "192.168.71.13",
     "192.168.71.14",
     "192.168.71.15"
   ],
   "key": {
       "algo": "rsa",
       "size": 2048
   },
   "names": [
       {
           "C": "CN",
           "L": "BeiJing",
           "ST": "BeiJing",
           "O": "k8s",
           "OU": "System"
       }
   ]
}
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server


3.1.5 Generate the kube-proxy certificate
vim kube-proxy-csr.json
############
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy


3.1.6 Generate the admin certificate
vim admin-csr.json
#######
{
 "CN": "admin",
 "hosts": [],
 "key": {
   "algo": "rsa",
   "size": 2048
 },
 "names": [
   {
     "C": "CN",
     "L": "BeiJing",
     "ST": "BeiJing",
     "O": "system:masters",
     "OU": "System"
   }
 ]
}
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
3.1.7 Copy the certificates
mkdir /opt/kubernetes/{bin,cfg,ssl,logs} -p
cp ca.pem ca-key.pem server.pem server-key.pem /opt/kubernetes/ssl/

3.2 Create the TLS Bootstrapping token

cd /opt/kubernetes/cfg/
head -c 16 /dev/urandom | od -An -t x | tr -d ' '

Output: 432ace6c388f4317b8d4083809d22eda (random; yours will differ)

vim token.csv

token.csv — the format is token,user,uid,"group":

432ace6c388f4317b8d4083809d22eda,kubelet-bootstrap,10001,"system:node-bootstrapper"

3.3 Prepare the Kubernetes binary package

Download address: https://github.com/kubernetes/kubernetes/tree/master/CHANGELOG
Pick a version; this guide uses 1.19 (open CHANGELOG-1.19.md and follow its server-binaries download link). Do not pick 1.20 or later: the startup flags changed, and the configuration in this article will not work with them.
# scp the server tarball (kubernetes-server-linux-amd64.tar.gz) to the master and each node under /root and extract it there

mkdir /opt/kubernetes/{bin,cfg,ssl,logs} -p
cd kubernetes/server/bin/
cp kube-apiserver kube-scheduler kube-controller-manager kubectl /opt/kubernetes/bin/

3.4 Deploy kube-apiserver

3.4.1 Create the apiserver configuration file
cd /opt/kubernetes/cfg/
vim kube-apiserver.conf

kube-apiserver.conf

KUBE_APISERVER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--etcd-servers=https://192.168.71.11:2379,https://192.168.71.12:2379,https://192.168.71.13:2379 \
--bind-address=192.168.71.11 \
--secure-port=6443 \
--advertise-address=192.168.71.11 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth=true \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-32767 \
--kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \
--kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \
--tls-cert-file=/opt/kubernetes/ssl/server.pem  \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/opt/kubernetes/logs/k8s-audit.log"

3.4.2 Manage apiserver with systemd

vim /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
##########

Start apiserver:

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver
systemctl status kube-apiserver
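
As a quick liveness check: the default RBAC bindings allow anonymous access to /version, so the apiserver should answer this even without a client certificate:

curl -k https://192.168.71.11:6443/version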

3.5 Deploy kube-controller-manager

3.5.1 Create the controller-manager configuration file
vim kube-controller-manager.conf
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--leader-elect=true \
--master=127.0.0.1:8080 \
--bind-address=127.0.0.1 \
--allocate-node-cidrs=true \
--cluster-cidr=10.244.0.0/16 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem  \
--root-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
--cluster-signing-duration=87600h0m0s"
3.5.2 Manage controller-manager with systemd
vim /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/opt/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

Start controller-manager:

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager
systemctl status kube-controller-manager

3.6 Deploy kube-scheduler

3.6.1 Create the kube-scheduler configuration file
vim kube-scheduler.conf
KUBE_SCHEDULER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--leader-elect \
--master=127.0.0.1:8080 \
--bind-address=127.0.0.1"
3.6.2 Manage kube-scheduler with systemd
vim /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler.conf
ExecStart=/opt/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

Start kube-scheduler:

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler
systemctl status kube-scheduler

3.7 Authorize kubelet-bootstrap

Once the master apiserver has TLS authentication enabled, a node's kubelet must present a valid CA-signed certificate before it can join the cluster and talk to the apiserver. Signing certificates by hand becomes tedious as the node count grows, which is what the TLS Bootstrapping mechanism is for: the kubelet requests a certificate from the apiserver automatically as a low-privileged user, and the apiserver signs the kubelet's certificate dynamically.

cp /opt/kubernetes/bin/kubectl /usr/bin   # absolute path, so the current directory does not matter
kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap
kubectl get cs

All components should report Healthy, for example:

NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}

4. Deploying the Node Components

4.1 Install docker

See: https://blog.csdn.net/ripper821/article/details/118165134

4.2 Prepare the binary package

Extract and copy the node binaries:

mkdir /opt/kubernetes/{bin,cfg,ssl,logs} -p
cd kubernetes/server/bin/
cp kubelet kube-proxy /opt/kubernetes/bin/

4.3 Copy the certificates from the master to the nodes

cd /usr/local/kubernetes/k8s-cert/
scp ca.pem kube-proxy.pem kube-proxy-key.pem root@192.168.71.12:/opt/kubernetes/ssl/
scp ca.pem kube-proxy.pem kube-proxy-key.pem root@192.168.71.13:/opt/kubernetes/ssl/

These three certificates must also be copied into /opt/kubernetes/ssl/ on the master node.

4.4 Deploy kubelet on the nodes

kubelet must also be installed on the master; follow the same steps as for the nodes.

4.4.1 Create the kubelet configuration file
vim /opt/kubernetes/cfg/kubelet.conf
ubuntu-server2 — kubelet.conf:

KUBELET_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--hostname-override=ubuntu-server2 \
--network-plugin=cni \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet-config.yml \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

ubuntu-server3 — kubelet.conf (identical apart from --hostname-override):

KUBELET_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--hostname-override=ubuntu-server3 \
--network-plugin=cni \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet-config.yml \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
4.4.2 Create the kubelet-config.yml file
vim /opt/kubernetes/cfg/kubelet-config.yml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local 
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem 
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
4.4.3 Generate the bootstrap.kubeconfig file
cd /opt/kubernetes/ssl
cp /root/kubernetes/server/bin/kubectl /usr/bin
KUBE_APISERVER="https://192.168.71.11:6443"
TOKEN="432ace6c388f4317b8d4083809d22eda" # must match the token written into token.csv
kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=bootstrap.kubeconfig
kubectl config set-credentials "kubelet-bootstrap" \
--token=${TOKEN} \
--kubeconfig=bootstrap.kubeconfig
kubectl config set-context default \
--cluster=kubernetes \
--user="kubelet-bootstrap" \
--kubeconfig=bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
cp bootstrap.kubeconfig /opt/kubernetes/cfg
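
Optionally, inspect the generated file; the apiserver address, the embedded CA, and the kubelet-bootstrap user should all be visible (credentials are redacted by default):

kubectl config view --kubeconfig=bootstrap.kubeconfig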
4.4.4 Manage kubelet with systemd
vim /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet
systemctl status kubelet


4.4.5 On the master, approve the kubelet certificate requests and join the nodes

Check the pending requests:

kubectl get csr

Approve each request, passing the NAME shown by kubectl get csr (the name below is a placeholder):

kubectl certificate approve <csr-name>

Check the nodes:

kubectl get node

All three machines (1 master, 2 nodes) should be visible with STATUS NotReady; they will turn Ready once the CNI plugin is deployed in section 4.6.

4.5 Deploy kube-proxy on the nodes

kube-proxy must also be installed on the master; follow the same steps as for the nodes.

4.5.1 Create the kube-proxy configuration file
vim /opt/kubernetes/cfg/kube-proxy.conf
KUBE_PROXY_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--config=/opt/kubernetes/cfg/kube-proxy-config.yml"
4.5.2 Create the kube-proxy-config.yml file
vim /opt/kubernetes/cfg/kube-proxy-config.yml

ubuntu-server2 — kube-proxy-config.yml:

kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
address: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: ubuntu-server2
clusterCIDR: 10.244.0.0/16   # pod network CIDR; must match controller-manager's --cluster-cidr
mode: ipvs
ipvs:
  scheduler: "rr"
iptables:
  masqueradeAll: true

ubuntu-server3 — kube-proxy-config.yml (identical apart from hostnameOverride):

kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
address: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: ubuntu-server3
clusterCIDR: 10.244.0.0/16   # pod network CIDR; must match controller-manager's --cluster-cidr
mode: ipvs
ipvs:
  scheduler: "rr"
iptables:
  masqueradeAll: true
4.5.3 Create the kube-proxy.kubeconfig file

Run this in the directory holding the kube-proxy certificates (/opt/kubernetes/ssl, where they were copied in section 4.3):

cd /opt/kubernetes/ssl
KUBE_APISERVER="https://192.168.71.11:6443"
kubectl config set-cluster kubernetes  --certificate-authority=/opt/kubernetes/ssl/ca.pem  --embed-certs=true  --server=${KUBE_APISERVER} --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy  --client-certificate=./kube-proxy.pem  --client-key=./kube-proxy-key.pem  --embed-certs=true  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default  --cluster=kubernetes  --user=kube-proxy  --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
cp kube-proxy.kubeconfig /opt/kubernetes/cfg/
4.5.4 Manage kube-proxy with systemd
vim /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
systemctl daemon-reload 
systemctl start kube-proxy 
systemctl enable kube-proxy
systemctl status kube-proxy

4.6 Deploy the CNI network plugin

4.6.1 Prepare the binary package (install on every machine)

Link: https://github.com/containernetworking/plugins/releases

mkdir /opt/cni/bin -p
mkdir /etc/cni/net.d -p
tar -zxvf cni-plugins-linux-amd64-v0.9.1.tgz -C /opt/cni/bin/
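
flannel depends on the flannel CNI binary, which is still bundled in plugins v0.9.1 (it was dropped from later releases of this bundle), so it is worth confirming that the extraction delivered it:

ls /opt/cni/bin | grep -E 'flannel|bridge|host-local|portmap'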
4.6.2 Deploy the cluster network on the master

yaml download address: https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

vim kube-flannel.yml
kubectl apply -f kube-flannel.yml
4.6.3 Check the pods; be patient, this takes a minute or so
kubectl get pods -n kube-system

The flannel pod on every node should reach Running, and kubectl get node should now show all nodes Ready; that means the network plugin was installed successfully.

4.6.4 Create a role binding that authorizes viewing logs
kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes
kubectl -n kube-system logs -f kube-flannel-ds-amd64-6z2f8   # substitute the name of your own flannel pod
4.6.5 Set node labels
kubectl label nodes ubuntu-server1 node-role.kubernetes.io/master=true

My ubuntu-server1 is the master, so it gets the "master" label; kubectl get nodes will now show the role.

4.7 Authorize apiserver access to kubelet

vim apiserver-to-kubelet-rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
      - pods/log
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
kubectl apply -f apiserver-to-kubelet-rbac.yaml

4.8 Taint the master node

This keeps ordinary pods from being scheduled onto the master:

kubectl taint nodes ubuntu-server1 node-role.kubernetes.io/master=:NoSchedule
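
To confirm the taint took effect (pods without a matching toleration will now avoid the master):

kubectl describe node ubuntu-server1 | grep Taints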

4.9 Deploy DNS

Template: https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dns/coredns
Deploy CoreDNS by taking the official yaml and adjusting the image address, the cluster domain (cluster.local), and the DNS clusterIP.
My yaml file is below for reference; in particular, the forward line in the Corefile must point at real upstream resolvers, exactly as written below.
coredns.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
      kubernetes.io/cluster-service: "true"
      addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
      addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus :9153
        forward . 114.114.114.114 8.8.8.8
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                  - key: k8s-app
                    operator: In
                    values: ["kube-dns"]
              topologyKey: kubernetes.io/hostname
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      containers:
      - name: coredns
        image: k8s.gcr.io/coredns/coredns:v1.8.0
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.0.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
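
Assuming the manifest above is saved as coredns.yaml, apply it and confirm that the pod and the kube-dns service come up:

kubectl apply -f coredns.yaml
kubectl get pods -n kube-system -l k8s-app=kube-dns
kubectl get svc -n kube-system kube-dns   # CLUSTER-IP should be 10.0.0.2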

4.10 Create a test pod on the master to check that the cluster works

kubectl create deployment test-nginx --image=nginx
# expose the port
kubectl expose deployment test-nginx --port=80 --type=NodePort

Check with the commands below; pulling the image takes time, so wait patiently until READY goes from 0/1 to 1/1:

kubectl get pods -o wide
kubectl get svc

Ways to access it:
1) Via any node IP plus the NodePort, e.g. http://192.168.71.12:30884 (your NodePort will differ; read it from kubectl get svc).
2) From the master node, via the service or pod IP:

curl 10.0.0.149:80   # the service ClusterIP from kubectl get svc
curl 10.244.1.2:80   # the pod IP from kubectl get pods -o wide


4.11 Verify that DNS was deployed successfully

vim bs.yaml
kubectl apply -f bs.yaml

bs.yaml

apiVersion: v1
kind: Pod
metadata: 
    name: busybox
    namespace: default
spec:
    containers:
      - image: busybox:1.28.4
        command:
          - sleep
          - "3600"
        imagePullPolicy: IfNotPresent
        name: busybox
    restartPolicy: Always
kubectl exec -it busybox -- sh
nslookup kubernetes

Inside the pod, nslookup should answer from the cluster DNS at 10.0.0.2 and resolve kubernetes to 10.0.0.1 (the ClusterIP of the kubernetes service); if it does, DNS is working.
