Kubernetes Binary Deployment (Single Node)

I. Lab Environment

  • Environment: VMware 15.5, Xshell 6, CentOS 7.6, flannel v0.10.0, etcd v3.3.10, kubernetes-server v1.12
  • Node IP assignments:

Master:192.168.50.133

Node1:192.168.50.134

Node2:192.168.50.135

II. Deployment Steps

  • Prerequisite: install Docker CE on every node first
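The tutorial does not show the Docker installation itself; the following is a minimal sketch for CentOS 7 using Docker's standard yum repository (the repo URL and package names follow Docker's own CentOS instructions, not this tutorial):

```shell
#!/bin/bash
# Install Docker CE on a CentOS 7 node; run as root on master, node1, and node2.
install_docker_ce() {
    yum install -y yum-utils device-mapper-persistent-data lvm2
    yum-config-manager --add-repo \
        https://download.docker.com/linux/centos/docker-ce.repo
    yum install -y docker-ce docker-ce-cli containerd.io
    systemctl enable --now docker
}

# Call on each machine:
# install_docker_ce
```

Run the function on all three machines before continuing.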

// Generate the CA root certificate

1. On the master node, create the CA certificate config file

cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"     
        ]  
      } 
    }         
  }
}
EOF

2. Create the CA certificate signing request (CSR) file

cat > ca-csr.json <<EOF 
{   
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
EOF

3. Generate the CA certificate

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

4. Create the CSR config for TLS communication between the three etcd nodes (the extra IPs below are spares for future expansion)

cat > server-csr.json <<EOF
{
    "CN": "etcd",
    "hosts": [
    "192.168.50.133",
    "192.168.50.134",
    "192.168.50.135"
    "192.168.50.136"
    "192.168.50.137"
    "192.168.50.138"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing"
        }
    ]
}
EOF

5. Generate the etcd server certificate

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server

// Deploy etcd

6. Upload the etcd package to the master node and unpack it

tar zxvf etcd-v3.3.10-linux-amd64.tar.gz

7. Create the etcd working directory

mkdir /opt/etcd/{cfg,bin,ssl} -p

8. Move the unpacked binaries into the working directory

mv etcd-v3.3.10-linux-amd64/etcd etcd-v3.3.10-linux-amd64/etcdctl /opt/etcd/bin/

9. Copy the previously generated certificates into the working directory

cp etcd-cert/*.pem /opt/etcd/ssl/
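Step 10 below distributes /opt/etcd/cfg/etcd and etcd.service without showing how they were created (the original builds them with an etcd.sh setup script). A sketch of generating the config file; the helper name and the exact variable set are assumptions inferred from the values edited in step 11:

```shell
#!/bin/bash
# Sketch: generate the etcd config file that step 10 distributes.
# Variable set is inferred from the fields edited in step 11.
write_etcd_conf() {
    local name=$1 ip=$2 cluster=$3 out=$4
    cat > "$out" <<EOF
#[Member]
ETCD_NAME="$name"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://$ip:2380"
ETCD_LISTEN_CLIENT_URLS="https://$ip:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://$ip:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://$ip:2379"
ETCD_INITIAL_CLUSTER="$cluster"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
}

# Example: the master's config (on a real node the output path is /opt/etcd/cfg/etcd).
write_etcd_conf etcd01 192.168.50.133 \
    "etcd01=https://192.168.50.133:2380,etcd02=https://192.168.50.134:2380,etcd03=https://192.168.50.135:2380" \
    /tmp/etcd.conf
```

The same helper, called with etcd02/etcd03 and the node IPs, produces the files that steps 11 and on assume exist.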

10. Push the whole etcd working directory and its systemd unit file to the node machines

scp -r /opt/etcd/ root@192.168.50.134:/opt/
scp -r /opt/etcd/ root@192.168.50.135:/opt/
scp /usr/lib/systemd/system/etcd.service root@192.168.50.134:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/etcd.service root@192.168.50.135:/usr/lib/systemd/system/

11. On node1, edit the config file

vim /opt/etcd/cfg/etcd

Change the following values:

ETCD_NAME="etcd02"     ## 修改ETCD节点名称
ETCD_LISTEN_PEER_URLS="https://192.168.50.135:2380"    ## 修改ETCD监听地址
ETCD_LISTEN_CLIENT_URLS="https://192.168.50.135:2379"  ## 修改ETCD客户端监听地址
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.50.135:2380"  ## 集群通告地址
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.50.135:2379"  ## 客户端通告地址

node2 (192.168.50.135) is modified the same way, with ETCD_NAME="etcd03"

12. After the edits, start the etcd service on master, node1, and node2

systemctl start etcd

13. Check the etcd cluster health

/opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.50.133:2379,https://192.168.50.134:2379,https://192.168.50.135:2379" cluster-health

If you see output like the following, the etcd cluster is up:
member 3eae9a550e2e3ec is healthy: got healthy result from https://192.168.50.133:2379
member 26cd4dcf17bc5cbd is healthy: got healthy result from https://192.168.50.134:2379
member 2fcd2df8a9411750 is healthy: got healthy result from https://192.168.50.135:2379
cluster is healthy

// Deploy the flannel network plugin

14. On the master, write the Pod subnet configuration into etcd for flannel to use

/opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.50.133:2379,https://192.168.50.134:2379,https://192.168.50.135:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
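Flanneld fails quietly if the value stored under /coreos.com/network/config is malformed, so it is worth validating the JSON payload before writing it and reading it back afterwards (the read-back command is shown as a comment because it needs the live etcd cluster):

```shell
# Validate the network config payload before handing it to etcdctl set.
NET_CONFIG='{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
echo "$NET_CONFIG" | python3 -m json.tool > /dev/null && echo "JSON OK"

# After writing, read it back to confirm:
# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem \
#   --key-file=server-key.pem \
#   --endpoints="https://192.168.50.133:2379" get /coreos.com/network/config
```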

15. Upload the flannel package to both node machines and unpack it

tar zxvf flannel-v0.10.0-linux-amd64.tar.gz

16. Create the k8s working directory on each node

mkdir -p /opt/kubernetes/{cfg,bin,ssl}

17. Move the needed binaries from the package into the k8s working directory

mv mk-docker-opts.sh flanneld /opt/kubernetes/bin/

18. Use the following script to configure the flannel component

vim flannel.sh

#!/bin/bash

ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}

cat <<EOF >/opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
-etcd-cafile=/opt/etcd/ssl/ca.pem \
-etcd-certfile=/opt/etcd/ssl/server.pem \
-etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF

Create the flanneld service unit file:

cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target

EOF

19. Reload systemd and enable flanneld at boot

systemctl daemon-reload
systemctl enable flanneld

20. Run the config script with the etcd endpoints, then start flanneld

bash flannel.sh https://192.168.50.133:2379,https://192.168.50.134:2379,https://192.168.50.135:2379
systemctl start flanneld

21. Configure Docker to use the flannel network

vim /usr/lib/systemd/system/docker.service

Insert this line in the [Service] section:
EnvironmentFile=/run/flannel/subnet.env

Modify the ExecStart line as follows:
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock

After the change, restart the Docker service

systemctl daemon-reload
systemctl restart docker
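What connects the two units is the subnet.env file that mk-docker-opts.sh writes. A sketch of what /run/flannel/subnet.env typically contains and how docker.service consumes it (the subnet values here are illustrative assumptions; flannel assigns each node its own /24 out of 172.17.0.0/16):

```shell
# Illustrative contents of /run/flannel/subnet.env:
cat > /tmp/subnet.env <<'EOF'
DOCKER_OPT_BIP="--bip=172.17.42.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.42.1/24 --ip-masq=false --mtu=1450"
EOF

# EnvironmentFile= in docker.service does essentially this, making
# $DOCKER_NETWORK_OPTIONS available to the ExecStart line:
. /tmp/subnet.env
echo "dockerd would start with:$DOCKER_NETWORK_OPTIONS"
```

If docker0 does not come up in the flannel subnet after the restart, check this file first.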

// Deploy the master components

22. On the master, prepare the api-server certificates

# Upload the master setup script package and unpack it (you can configure with the scripts directly, or follow the step-by-step commands below)
unzip master.zip

23. Generate the Kubernetes CA certificate

The following commands come from k8s-cert.sh:

cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
      	    "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

Generate the CA certificate:

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

24. Generate the api-server server certificate

Note: the hosts list must contain every IP that will reach the apiserver over TLS, including the addresses needed later for the multi-master setup (node IPs are not required). It can only err on the side of too many, never too few; leaving a few spares for future use is wise. Here .133 is master1, .134 master2, .100 the VIP, .138 the primary load balancer, and .139 the backup load balancer; JSON does not allow inline comments, so these annotations cannot go in the file itself.

cat > server-csr.json <<EOF
{
    "CN": "kubernetes",
    "hosts": [
      "10.0.0.1",
      "127.0.0.1",
      "192.168.50.133",
      "192.168.50.134",
      "192.168.50.100",
      "192.168.50.138",
      "192.168.50.139",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

Generate the server certificate:

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

25. Generate the admin and kube-proxy client certificates

cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

Generate the kube-proxy certificate:

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

26. Upload the kubernetes server package to the master and unpack it

tar zxvf kubernetes-server-linux-amd64.tar.gz

Enter the unpacked directory

cd /root/k8s/kubernetes/server/bin

Copy the relevant binaries into the k8s working directory

cp kube-apiserver kubectl kube-controller-manager kube-scheduler /opt/kubernetes/bin/

27. Create the token authentication file

vim /opt/kubernetes/cfg/token.csv

Write the following content (format: token, user name, UID, role):
0fb61c46f8991b718eb38d27b605b008,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

# The token can be generated directly with: head -c 16 /dev/urandom | od -An -t x | tr -d ' '
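The two lines above can be combined so that a fresh token and the token.csv file are produced in one go (the sketch writes under /tmp so it can be tried anywhere; on the real master the path is /opt/kubernetes/cfg/token.csv):

```shell
# Generate a 32-hex-character bootstrap token and write token.csv in one step.
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
mkdir -p /tmp/kubernetes/cfg   # use /opt/kubernetes/cfg on a real master
cat > /tmp/kubernetes/cfg/token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
echo "token: ${BOOTSTRAP_TOKEN}"
```

Keep the generated token handy; steps 34 and 35 must use the same value.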

28. Start the api-server service

bash apiserver.sh 192.168.50.133 https://192.168.50.133:2379,https://192.168.50.134:2379,https://192.168.50.135:2379

29. Use the script to start the scheduler service

./scheduler.sh 127.0.0.1

30. Use the script to start the controller-manager service

./controller-manager.sh 127.0.0.1

31. Check the status of the master components

kubectl get cs

NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-2               Healthy   {"health":"true"}   
etcd-1               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}   

// Deploy the node components

32. On the master, copy the kubelet and kube-proxy binaries to the node machines

scp kubelet kube-proxy root@192.168.50.134:/opt/kubernetes/bin/
scp kubelet kube-proxy root@192.168.50.135:/opt/kubernetes/bin/

33. On each node, upload node.zip and unpack it

unzip node.zip

34. On the master:

mkdir kubeconfig
cd kubeconfig/
mv kubeconfig.sh kubeconfig

vim kubeconfig

Comment out the random token generation and hard-code the token created in step 27:
# Create TLS Bootstrapping Token
#BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
BOOTSTRAP_TOKEN=0fb61c46f8991b718eb38d27b605b008

cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

35. Set the client authentication parameters

kubectl config set-credentials kubelet-bootstrap \
  --token=6351d652249951f79c33acdab329e4c4 \    ## replace with the token from your token.csv
  --kubeconfig=bootstrap.kubeconfig
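kubeconfig.sh does more than set-credentials; a sketch of the full bootstrap.kubeconfig sequence it typically performs (the helper name and argument order are assumptions; kubectl's config subcommands only write a local file, so no running cluster is needed):

```shell
#!/bin/bash
# Sketch of the bootstrap.kubeconfig generation performed by kubeconfig.sh.
gen_bootstrap_kubeconfig() {
    local apiserver=$1 ssl_dir=$2 token=$3 out=bootstrap.kubeconfig
    kubectl config set-cluster kubernetes \
      --certificate-authority="$ssl_dir/ca.pem" \
      --embed-certs=true \
      --server="https://$apiserver:6443" \
      --kubeconfig=$out
    kubectl config set-credentials kubelet-bootstrap \
      --token="$token" \
      --kubeconfig=$out
    kubectl config set-context default \
      --cluster=kubernetes \
      --user=kubelet-bootstrap \
      --kubeconfig=$out
    kubectl config use-context default --kubeconfig=$out
}

# Example call matching step 37 below:
# gen_bootstrap_kubeconfig 192.168.50.133 /root/k8s/k8s-cert 0fb61c46f8991b718eb38d27b605b008
```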

36. Add the k8s bin directory to PATH

export PATH=$PATH:/opt/kubernetes/bin/

37. Run the script to generate the config files

bash kubeconfig 192.168.50.133 /root/k8s/k8s-cert/

38. Copy the config files to the node machines

scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.50.134:/opt/kubernetes/cfg/
scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.50.135:/opt/kubernetes/cfg/

39. Create the bootstrap cluster role binding that lets kubelets request certificate signing from the apiserver

kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap

40. On node01, run the script to configure kubelet

bash kubelet.sh 192.168.50.134

41. On the master, check for node01's certificate signing request

kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-NOI-7vkfTLIqJgOWq4fHPNPHKbjBXlDDHptj7FpTa8A   3m18s   kubelet-bootstrap   Pending   (waiting for the cluster to issue this node a certificate)

42. Approve and issue the certificate

kubectl certificate approve node-csr-NOI-7vkfTLIqJgOWq4fHPNPHKbjBXlDDHptj7FpTa8A

43. On node01, run the script to start the kube-proxy service

bash proxy.sh 192.168.50.134

// Deploy node02

44. On node01, copy the prepared /opt/kubernetes directory to node02 for modification

scp -r /opt/kubernetes/ root@192.168.50.135:/opt/

45. Copy the kubelet and kube-proxy service unit files to node02

scp /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@192.168.50.135:/usr/lib/systemd/system/

46. On node02, delete the copied certificates; node02 will request its own certificate shortly

cd /opt/kubernetes/ssl/
rm -rf *

47. Modify the three config files: kubelet, kubelet.config, and kube-proxy

cd /opt/kubernetes/cfg/

vim kubelet
Change the following:
--hostname-override=192.168.50.135 \

vim kubelet.config
address: 192.168.50.135

vim kube-proxy
--hostname-override=192.168.50.135 \

48. Start both services and enable them at boot

systemctl start kubelet.service
systemctl enable kubelet.service 
systemctl start kube-proxy.service
systemctl enable kube-proxy.service

49. On the master, handle node02's certificate request

1. Check the pending requests:
kubectl get csr

2. Approve the request:
kubectl certificate approve <csr-name>

50. On the master, list the cluster nodes

kubectl get node

NAME              STATUS   ROLES    AGE   VERSION
192.168.50.134   Ready    <none>   21h   v1.12.3
192.168.50.135   Ready    <none>   37s   v1.12.3

This completes the single-node k8s deployment.
