Kubernetes cluster built with binaries

## Building a single-master Kubernetes cluster from binaries (v1.20.13)

@比基尼海滩威猛先生

The required files have been uploaded to Baidu Netdisk
Link: https://pan.baidu.com/s/1FR5UhqgtXvFCjJKKzRLhTA
Extraction code: d29h

Machine preparation:
One or more CentOS 7.4-7.8 x86_64 machines
Hardware: at least 2 GB of RAM, 2 CPUs and 30 GB of disk
Internet access is needed to pull images, and the machines must be able to reach each other
Disable the swap partition (swapoff -a)

Components:
master: kube-apiserver, kube-scheduler, kube-controller-manager, etcd
node: kubelet, kube-proxy
I. (Apart from the /etc/hosts in step 4, which is configured only on the master, everything below must be done on every node)
1) Disable the firewall (systemctl disable firewalld --now)
2) Disable SELinux (sed -i 's/enforcing/disabled/' /etc/selinux/config)
3) Disable swap (comment out the swap mount line in /etc/fstab; swapoff -a only disables it until reboot)
4) Set the hostname (on all nodes) and configure /etc/hosts (master only)
5) Pass bridged IPv4 traffic to the iptables chains (recommended, prevents packet loss and other issues)
vim /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
# apply with: sysctl --system
6) Time synchronization
You can use Xshell's "execute on all sessions" option and run ntpdate time.windows.com to sync the clocks
7) Install Docker (a sketch is given below)
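A minimal Docker install sketch for CentOS 7, assuming the upstream docker-ce yum repository is reachable (switch to a local mirror if it is not):

yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce docker-ce-cli containerd.io
systemctl enable docker --now
docker info | grep -i 'cgroup driver'   # kubelet-config.yml further down assumes cgroupfs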
II. Install the cfssl certificate-generation tools
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo

################################################################## etcd cluster #######################################################################

I. Generate the etcd certificates
1. Create a working directory
mkdir -p ~/TLS/{etcd,k8s}
cd ~/TLS/etcd/
2. Generate the default config file and CSR file for the self-signed CA, then edit them
cfssl print-defaults config > ca-config.json
cfssl print-defaults csr > ca-csr.json

cat ca-config.json 
{
    "signing": {
        "default": {
            "expiry": "87600h"
        },
        "profiles": {
            "www": {
                "expiry": "87600h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            }
        }
    }
}

cat ca-csr.json

{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}

3. Generate the CA certificate and key
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
4. Issue the etcd HTTPS certificate with the self-signed CA
1) Create the certificate request file and edit it
cfssl print-defaults csr > server-csr.json
cat server-csr.json

{
    "CN": "etcd",
    "hosts": [
        "192.168.43.15",
        "192.168.43.14",
        "192.168.43.16"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}

2) Generate the server certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server

5. Download the etcd binary package
cd /root/
wget https://github.com/etcd-io/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz
6. Create the working directories and extract the binary package
mkdir -p /opt/etcd/{bin,cfg,ssl}
tar -xf etcd-v3.4.9-linux-amd64.tar.gz
mv etcd-v3.4.9-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/

7. Create the etcd configuration file
cat /opt/etcd/cfg/etcd.conf

#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.43.15:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.43.15:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.43.15:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.43.15:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.43.15:2380,etcd-2=https://192.168.43.14:2380,etcd-3=https://192.168.43.16:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

ETCD_NAME: node name, unique within the cluster
ETCD_DATA_DIR: data directory
ETCD_LISTEN_PEER_URLS: peer (cluster) listen address
ETCD_LISTEN_CLIENT_URLS: client listen address
ETCD_INITIAL_ADVERTISE_PEER_URLS: peer address advertised to the cluster
ETCD_ADVERTISE_CLIENT_URLS: client address advertised to clients
ETCD_INITIAL_CLUSTER: addresses of all cluster members
ETCD_INITIAL_CLUSTER_TOKEN: cluster token
ETCD_INITIAL_CLUSTER_STATE: state when joining the cluster; new for a brand-new cluster, existing when joining an existing one

8. Manage etcd with systemd; create the unit file
cat /usr/lib/systemd/system/etcd.service

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd.conf
ExecStart=/opt/etcd/bin/etcd \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--peer-cert-file=/opt/etcd/ssl/server.pem \
--peer-key-file=/opt/etcd/ssl/server-key.pem \
--trusted-ca-file=/opt/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem \
--logger=zap
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

9. Copy the generated certificates to the paths referenced in the unit file
cp ~/TLS/etcd/ca*.pem ~/TLS/etcd/server*.pem /opt/etcd/ssl/

10. Copy the configuration and the extracted etcd files to the other cluster nodes
scp -r /opt/etcd 192.168.43.14:/opt/
scp -r /usr/lib/systemd/system/etcd.service 192.168.43.14:/usr/lib/systemd/system/
scp -r /opt/etcd 192.168.43.16:/opt/
scp -r /usr/lib/systemd/system/etcd.service 192.168.43.16:/usr/lib/systemd/system/
11. Edit /opt/etcd/cfg/etcd.conf on the other two nodes (a sketch follows), then start etcd on all three nodes and enable it at boot: systemctl enable etcd.service --now
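For example, on 192.168.43.14 (etcd-2) the edits can be scripted roughly as below; the same idea applies to 192.168.43.16 (etcd-3). ETCD_INITIAL_CLUSTER stays unchanged because it lists all three members:

sed -i '/^ETCD_NAME=/s/etcd-1/etcd-2/' /opt/etcd/cfg/etcd.conf
sed -i '/LISTEN\|ADVERTISE/s/192.168.43.15/192.168.43.14/' /opt/etcd/cfg/etcd.conf
grep -v '^#' /opt/etcd/cfg/etcd.conf    # review the result before starting etcd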
12. Check the cluster status

ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.43.14:2379,https://192.168.43.15:2379,https://192.168.43.16:2379" endpoint health --write-out=table

###################################################################################### Master node ##############################################################################

I. Generate the kube-apiserver certificates
1. You can copy the config and CSR files from /root/TLS/etcd/ and modify them
cd /root/TLS/k8s

cat ca-config.json

{
   "signing": {
       "default": {
           "expiry": "87600h"
       },
       "profiles": {
           "kubernetes": {
               "expiry": "87600h",
               "usages": [
                   "signing",
                   "key encipherment",
                   "server auth",
                   "client auth"
               ]
           }
       }
   }
}

cat ca-csr.json

{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O":"k8s",
            "OU":"System"
        }
    ]
}

Generate the CA certificate:
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

2. Issue the kube-apiserver HTTPS certificate with the self-signed CA
cat server-csr.json

{
    "CN": "kubernetes",
    "hosts": [
        "10.0.0.1",
        "127.0.0.1",
        "192.168.43.15",
        "192.168.43.14",
        "192.168.43.16",
        "192.168.43.17",
        "192.168.43.100",
        "kubernetes",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.local"

    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}

The hosts list above contains the VIP, the IPs of all master nodes (there can be more than one master) and the node IPs
3. Generate the server certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

4. Download the binaries needed by the master and worker nodes (a download sketch follows the link)
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md
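For example (the URL below follows the standard download pattern for v1.20.13; verify it against the CHANGELOG page above). The kubelet/kube-proxy copy step further down assumes the tarball is extracted under /root/TLS/k8s:

cd /root/TLS/k8s
wget https://dl.k8s.io/v1.20.13/kubernetes-server-linux-amd64.tar.gz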
1. Extract the binary package
mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}
tar -xf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin/
cp kube-apiserver kube-scheduler kube-controller-manager /opt/kubernetes/bin/
cp kubectl /usr/bin/

5. Create the kube-apiserver.conf configuration file
cd /opt/kubernetes/cfg
cat kube-apiserver.conf

KUBE_APISERVER_OPTS="--logtostderr=false \
  --v=2 \
  --log-dir=/opt/kubernetes/logs \
  --bind-address=192.168.43.15 \
  --secure-port=6443 \
  --advertise-address=192.168.43.15 \
  --allow-privileged=true \
  --service-cluster-ip-range=10.0.0.0/24 \
  --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
  --authorization-mode=Node,RBAC \
  --enable-bootstrap-token-auth=true \
  --token-auth-file=/opt/kubernetes/cfg/token.csv \
  --service-node-port-range=30000-32767 \
  --kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \
  --kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \
  --tls-cert-file=/opt/kubernetes/ssl/server.pem \
  --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
  --client-ca-file=/opt/kubernetes/ssl/ca.pem \
  --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
  --service-account-issuer=api \
  --service-account-signing-key-file=/opt/kubernetes/ssl/server-key.pem \
  --etcd-cafile=/opt/etcd/ssl/ca.pem \
  --etcd-certfile=/opt/etcd/ssl/server.pem \
  --etcd-keyfile=/opt/etcd/ssl/server-key.pem \
  --etcd-servers=https://192.168.43.15:2379,https://192.168.43.14:2379,https://192.168.43.16:2379 \
  --requestheader-client-ca-file=/opt/kubernetes/ssl/ca.pem \
  --proxy-client-cert-file=/opt/kubernetes/ssl/server.pem \
  --proxy-client-key-file=/opt/kubernetes/ssl/server-key.pem \
  --requestheader-allowed-names=kubernetes \
  --requestheader-extra-headers-prefix=X-Remote-Extra- \
  --requestheader-group-headers=X-Remote-Group \
  --requestheader-username-headers=X-Remote-User \
  --enable-aggregator-routing=true \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/opt/kubernetes/logs/k8s-audit.log"

Notes:
--logtostderr: logging switch (false here means write to the log directory)
--v: log level
--log-dir: log directory
--etcd-servers: etcd cluster addresses
--bind-address: listen address
--secure-port: HTTPS secure port
--advertise-address: address advertised to the cluster
--allow-privileged: allow privileged containers
--service-cluster-ip-range: Service virtual IP range
--enable-admission-plugins: admission control plugins
--authorization-mode: authorization modes; enables RBAC authorization and Node self-management
--enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
--token-auth-file: bootstrap token file
--service-node-port-range: default port range for NodePort Services
--kubelet-client-xxx: client certificate the apiserver uses to access kubelet
--tls-xxx-file: apiserver HTTPS certificates
Parameters that must be added in 1.20: --service-account-issuer, --service-account-signing-key-file
--etcd-xxxfile: certificates for connecting to the etcd cluster
--audit-log-xxx: audit logging

Settings that enable the aggregation layer:

--requestheader-client-ca-file,
--proxy-client-cert-file,
--proxy-client-key-file,
--requestheader-allowed-names,
--requestheader-extra-headers-prefix,
--requestheader-group-headers,
--requestheader-username-headers,
--enable-aggregator-routing

6. Copy the certificates generated earlier to the paths referenced in the config file
cp ~/TLS/k8s/ca*.pem ~/TLS/k8s/server*pem /opt/kubernetes/ssl/
7. Generate a token and put it in the token file referenced in the config above
head -c 16 /dev/urandom | od -An -t x | tr -d ' '
cat /opt/kubernetes/cfg/token.csv
fa9a4b49d4dc899b4f2c2a93dd2eef67,kubelet-bootstrap,10001,"system:node-bootstrapper"
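A small sketch that generates a fresh token and writes token.csv in the format above in one go; whichever token ends up in this file must also be used as TOKEN in the kubelet bootstrap kubeconfig later:

TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "${TOKEN},kubelet-bootstrap,10001,\"system:node-bootstrapper\"" > /opt/kubernetes/cfg/token.csv
cat /opt/kubernetes/cfg/token.csv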

8. Manage kube-apiserver with systemd
cat /usr/lib/systemd/system/kube-apiserver.service

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target

9. Start the service and enable it at boot
systemctl enable kube-apiserver.service --now
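To confirm the apiserver came up, for example:

systemctl status kube-apiserver --no-pager
ss -lntp | grep 6443              # the --secure-port configured above
ls /opt/kubernetes/logs/          # log files land here because of --log-dir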

II. Deploy kube-controller-manager

1. Create the configuration file
cat kube-controller-manager.conf

KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--leader-elect=true \
--kubeconfig=/opt/kubernetes/cfg/kube-controller-manager.kubeconfig \
--bind-address=127.0.0.1 \
--allocate-node-cidrs=true \
--cluster-cidr=10.244.0.0/16 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
--root-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
--cluster-signing-duration=87600h0m0s"

--kubeconfig: kubeconfig file used to connect to the apiserver

--leader-elect: enable leader election when several instances of the component run (HA)

--cluster-signing-cert-file / --cluster-signing-key-file: the CA used to automatically issue certificates for kubelets; must be the same CA the apiserver uses

2. Configure kube-controller-manager and start it
1) Generate the kube-controller-manager certificate
cd ~/TLS/k8s/
cp server-csr.json kube-controller-manager-csr.json
cat kube-controller-manager-csr.json

{
    "CN": "system:kube-controller-manager",
    "hosts": [],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "system:masters",
            "OU": "System"
        }
    ]
}

2) Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

3) Generate the kubeconfig file (run directly in the shell)

KUBE_CONFIG="/opt/kubernetes/cfg/kube-controller-manager.kubeconfig"
KUBE_APISERVER="https://192.168.43.15:6443"
kubectl config set-cluster kubernetes \
   --certificate-authority=/opt/kubernetes/ssl/ca.pem \
   --embed-certs=true \
   --server=${KUBE_APISERVER} \
   --kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials kube-controller-manager \
   --client-certificate=/root/TLS/k8s/kube-controller-manager.pem \
   --client-key=/root/TLS/k8s/kube-controller-manager-key.pem \
   --embed-certs=true \
   --kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
   --cluster=kubernetes \
   --user=kube-controller-manager \
   --kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

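The kube-controller-manager unit file is not shown above but is needed here (the scheduler step later copies it); a sketch that follows the same pattern as kube-apiserver.service:

cat > /usr/lib/systemd/system/kube-controller-manager.service <<'EOF'
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/opt/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF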
4) Start kube-controller-manager and enable it at boot
systemctl enable kube-controller-manager.service --now

III. Configure kube-scheduler and enable it at boot
1. Create the configuration file
cd /opt/kubernetes/cfg/
cat kube-scheduler.conf

KUBE_SCHEDULER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--leader-elect \
--kubeconfig=/opt/kubernetes/cfg/kube-scheduler.kubeconfig \
--bind-address=127.0.0.1"

--kubeconfig: kubeconfig file used to connect to the apiserver
--leader-elect: enable leader election when several instances run (HA)

2. Generate the kube-scheduler certificate
1) Create the certificate request file
cd ~/TLS/k8s/
cp kube-controller-manager-csr.json kube-scheduler-csr.json
cat kube-scheduler-csr.json

{
    "CN": "system:kube-scheduler",
    "hosts": [],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "system:masters",
            "OU": "System"
        }
    ]
}

2) Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler

3. Generate the kubeconfig file

KUBE_CONFIG="/opt/kubernetes/cfg/kube-scheduler.kubeconfig"
KUBE_APISERVER="https://192.168.43.15:6443"
kubectl config set-cluster kubernetes \
   --certificate-authority=/opt/kubernetes/ssl/ca.pem \
   --embed-certs=true \
   --server=${KUBE_APISERVER} \
   --kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials kube-scheduler \
   --client-certificate=/root/TLS/k8s/kube-scheduler.pem \
   --client-key=/root/TLS/k8s/kube-scheduler-key.pem \
   --embed-certs=true \
   --kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
   --cluster=kubernetes \
   --user=kube-scheduler \
   --kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

4. Manage kube-scheduler with systemd
cd /usr/lib/systemd/system
cp kube-controller-manager.service kube-scheduler.service
cat kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler.conf
ExecStart=/opt/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target

5. Start and enable at boot
systemctl enable kube-scheduler.service --now

IV. Configure kubectl access to the cluster
1. Generate the admin certificate used to connect to the cluster
cd /root/TLS/k8s
cp kube-scheduler-csr.json admin-csr.json
cat admin-csr.json

{
    "CN": "admin",
    "hosts": [],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "system:masters",
            "OU": "System"
        }
    ]
}

Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

2. Generate the kubeconfig file
mkdir /root/.kube
cd /root/TLS/k8s

KUBE_CONFIG="/root/.kube/config"
KUBE_APISERVER="https://192.168.43.15:6443"
kubectl config set-cluster kubernetes \
   --certificate-authority=/opt/kubernetes/ssl/ca.pem \
   --embed-certs=true \
   --server=${KUBE_APISERVER} \
   --kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials cluster-admin \
   --client-certificate=/root/TLS/k8s/admin.pem \
   --client-key=/root/TLS/k8s/admin-key.pem \
   --embed-certs=true \
   --kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
   --cluster=kubernetes \
   --user=cluster-admin \
   --kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

Run kubectl get cs to check that the cluster components are healthy

3. Authorize the kubelet-bootstrap user to request certificates

kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap

############################################################################### Worker nodes #############################################################################

I. Deploy the master as a worker node as well
cd /root/TLS/k8s/kubernetes/server/bin
cp kubelet kube-proxy /opt/kubernetes/bin/

1. Prepare kubelet

1) Create the kubelet configuration file
cd /opt/kubernetes/cfg

cat kubelet.conf

KUBELET_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--hostname-override=k8s-master1 \
--network-plugin=cni \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet-config.yml \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=lizhenliang/pause-amd64:3.0"

--hostname-override: display name, must be unique within the cluster
--network-plugin: enable CNI
--kubeconfig: empty path; generated automatically and later used to connect to the apiserver
--bootstrap-kubeconfig: used on first start to request a certificate from the apiserver
--config: configuration parameters file
--cert-dir: directory where kubelet certificates are generated
--pod-infra-container-image: image of the container that manages the Pod network (pause)

2) Create the kubelet parameters file
cd /opt/kubernetes/cfg

cat kubelet-config.yml

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110

3) Generate the bootstrap kubeconfig used by kubelet to join the cluster for the first time

KUBE_CONFIG="/opt/kubernetes/cfg/bootstrap.kubeconfig"
KUBE_APISERVER="https://192.168.43.15:6443"
TOKEN="fa9a4b49d4dc899b4f2c2a93dd2eef67" # must match the token in token.csv

Generate the kubelet bootstrap kubeconfig:

kubectl config set-cluster kubernetes --certificate-authority=/opt/kubernetes/ssl/ca.pem --embed-certs=true --server=${KUBE_APISERVER} --kubeconfig=${KUBE_CONFIG}

kubectl config set-credentials "kubelet-bootstrap" \
 --token=${TOKEN} \
 --kubeconfig=${KUBE_CONFIG}

kubectl config set-context default \
 --cluster=kubernetes \
 --user="kubelet-bootstrap" \
 --kubeconfig=${KUBE_CONFIG}

kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

4) Manage kubelet with systemd

cat /usr/lib/systemd/system/kubelet.service 
[Unit]
Description=Kubernetes Kubelet
After=docker.service

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target

5) Start kubelet and enable it at boot
systemctl daemon-reload
systemctl start kubelet.service
systemctl enable kubelet.service

6) Check the kubelet certificate request (kubelet sends a CSR to the apiserver)

kubectl get csr
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-u7YXV0rb3E0Ruq5OLVpLmj49_94CdPuWmgaO7LERiv0   35s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending

7) Approve the request on the apiserver
kubectl certificate approve node-csr-u7YXV0rb3E0Ruq5OLVpLmj49_94CdPuWmgaO7LERiv0   # use the NAME shown by kubectl get csr above

2. Deploy kube-proxy
1) Create the configuration file
cat kube-proxy.conf

KUBE_PROXY_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--config=/opt/kubernetes/cfg/kube-proxy-config.yml"

2) Create the kube-proxy parameters file
cat kube-proxy-config.yml

kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: k8s-master1
clusterCIDR: 10.0.0.0/24

3) Generate the kube-proxy certificate
cd /root/TLS/k8s

cat kube-proxy-csr.json

{
    "CN": "system:kube-proxy",
    "hosts": [],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}

Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

4) Generate the kubeconfig file

KUBE_CONFIG="/opt/kubernetes/cfg/kube-proxy.kubeconfig"
KUBE_APISERVER="https://192.168.43.15:6443"

kubectl config set-cluster kubernetes --certificate-authority=/opt/kubernetes/ssl/ca.pem --embed-certs=true --server=${KUBE_APISERVER} --kubeconfig=${KUBE_CONFIG}

kubectl config set-credentials kube-proxy \
 --client-certificate=/root/TLS/k8s/kube-proxy.pem \
 --client-key=/root/TLS/k8s/kube-proxy-key.pem \
 --embed-certs=true \
 --kubeconfig=${KUBE_CONFIG}

kubectl config set-context default \
 --cluster=kubernetes \
 --user=kube-proxy \
 --kubeconfig=${KUBE_CONFIG}

kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

5) Manage kube-proxy with systemd
cat /usr/lib/systemd/system/kube-proxy.service

[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target

6) Start the service and enable it at boot
systemctl daemon-reload
systemctl start kube-proxy.service
systemctl enable kube-proxy.service

3. Deploy the network plugin
1) Prepare the calico.yaml file (see the note after this step if you do not have it)
cd /root/
kubectl apply -f calico.yaml
kubectl get pods -n kube-system
Once all calico pods are Running, the node will become Ready
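If calico.yaml is not already at hand from the netdisk archive, it could typically be fetched from the Calico project (the URL is an assumption for the manifest current at that time; verify it, and make sure CALICO_IPV4POOL_CIDR matches --cluster-cidr=10.244.0.0/16 from kube-controller-manager.conf):

curl -O https://docs.projectcalico.org/manifests/calico.yaml
grep -n -A1 CALICO_IPV4POOL_CIDR calico.yaml   # uncomment/set to 10.244.0.0/16 if needed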

4. Authorize the apiserver to access kubelet

cat apiserver-to-kubelet-rbac.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata: 
 annotations: 
   rbac.authorization.kubernetes.io/autoupdate: "true"
 labels: 
   kubernetes.io/bootstrapping: rbac-defaults
 name: system:kube-apiserver-to-kubelet
rules: 
 - apiGroups: 
     - ""
   resources: 
     - nodes/proxy
     - nodes/stats
     - nodes/log
     - nodes/spec
     - nodes/metrics
     - pods/log
   verbs: 
     - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata: 
 name: system:kube-apiserver
 namespace: ""
roleRef:
 apiGroup: rbac.authorization.k8s.io
 kind: ClusterRole
 name: system:kube-apiserver-to-kubelet
subjects: 
 - apiGroup: rbac.authorization.k8s.io
   kind: User
   name: kubernetes

kubectl apply -f apiserver-to-kubelet-rbac.yaml

5. Add worker nodes

1) On the master, copy the files a worker node needs to the new node
scp -r /opt/kubernetes 192.168.43.14:/opt/
scp /usr/lib/systemd/system/{kubelet,kube-proxy}.service 192.168.43.14:/usr/lib/systemd/system/
scp /opt/kubernetes/ssl/ca.pem 192.168.43.14:/opt/kubernetes/ssl/

2) On the new node, delete the kubelet certificate and kubeconfig (they are generated automatically when the CSR is approved and differ per node, so they must be removed)
rm -f /opt/kubernetes/cfg/kubelet.kubeconfig
rm -f /opt/kubernetes/ssl/kubelet*

3) Edit kubelet.conf and kube-proxy-config.yml, changing the hostname to that of the new node (a sketch follows)
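For example, on the new node (k8s-node1 is a placeholder; use that machine's real hostname):

NODE_NAME=k8s-node1    # hypothetical hostname of the new worker node
sed -i "s/--hostname-override=k8s-master1/--hostname-override=${NODE_NAME}/" /opt/kubernetes/cfg/kubelet.conf
sed -i "s/hostnameOverride: k8s-master1/hostnameOverride: ${NODE_NAME}/" /opt/kubernetes/cfg/kube-proxy-config.yml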

4) Start kubelet and kube-proxy and enable them at boot
systemctl start kubelet kube-proxy
systemctl enable kubelet kube-proxy

5) On the master, approve the new node's kubelet certificate request

kubectl get csr
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-t4ioEaGVwtTI3tKZvNQ1C-eECVfi9FvO9z5PdM8zUgQ   2m    kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending

kubectl certificate approve node-csr-t4ioEaGVwtTI3tKZvNQ1C-eECVfi9FvO9z5PdM8zUgQ
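When several nodes join at once, all listed requests can also be approved in one go (a convenience, not part of the original steps; only use it when every pending CSR is expected):

kubectl get csr -o name | xargs kubectl certificate approve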

6) On the master, list the nodes
kubectl get nodes

7) Any additional worker nodes are added in exactly the same way.

############################################################################# Dashboard and CoreDNS ######################################################################

I. Create coredns.yaml
cat coredns.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
rules:
  - apiGroups:
    - ""
    resources:
    - endpoints
    - services
    - pods
    - namespaces
    verbs:
    - list
    - watch
  - apiGroups:
    - discovery.k8s.io
    resources:
    - endpointslices
    verbs:
    - list
    - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
          lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf {
          max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: "CoreDNS"
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      affinity:
         podAntiAffinity:
           preferredDuringSchedulingIgnoredDuringExecution:
           - weight: 100
             podAffinityTerm:
               labelSelector:
                 matchExpressions:
                   - key: k8s-app
                     operator: In
                     values: ["kube-dns"]
               topologyKey: kubernetes.io/hostname
      containers:
      - name: coredns
        image: coredns/coredns:1.7.1
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.0.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP

This clusterIP must match the clusterDNS address configured in kubelet-config.yml (10.0.0.2)

II. Apply the yaml file
kubectl apply -f coredns.yaml
III. Check the running pods
kubectl get pods -n kube-system
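A quick way to verify that Service DNS works once CoreDNS is Running (assumes the node can pull busybox:1.28):

kubectl run dns-test --image=busybox:1.28 --restart=Never --rm -it -- nslookup kubernetes.default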
