| Role | IP | Components |
| --- | --- | --- |
| k8s-master | 192.168.73.71 | kube-apiserver, kube-controller-manager, kube-scheduler, etcd |
| k8s-node1 | 192.168.73.72 | kubelet, kube-proxy, docker, flannel, etcd |
| k8s-node2 | 192.168.73.73 | kubelet, kube-proxy, docker, flannel, etcd |
Base environment:
On every machine: disable the firewall, disable SELinux, set the hostname, and write the local hosts file.
I. Deploy the etcd Cluster:
@k8s-master
Enable IP forwarding (a sketch of the likely commands follows):
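A minimal sketch, assuming the standard sysctl workflow:
cat >> /etc/sysctl.conf <<EOF
net.ipv4.ip_forward = 1
EOF
sysctl -p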
1. Generate certificates:
1) Use cfssl to generate self-signed certificates; download the cfssl tools first:
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
2) Create a directory to hold the etcd certificate-generation files, and create three files in it: ca-config.json, ca-csr.json, and server-csr.json (a sketch of the likely commands follows the file contents).
ca-config.json:
---------------------------------------------
{
"signing" : {
"default" : {
"expiry" : "87600h"
} ,
"profiles" : {
"www" : {
"expiry" : "87600h" ,
"usages" : [
"signing" ,
"key encipherment" ,
"server auth" ,
"client auth"
]
}
}
}
}
---------------------------------------------
ca-csr.json:
---------------------------------------------
{
"CN" : "etcd CA" ,
"key" : {
"algo" : "rsa" ,
"size" : 2048
} ,
"names" : [
{
"C" : "CN" ,
"L" : "Beijing" ,
"ST" : "Beijing"
}
]
}
---------------------------------------------
server-csr.json — the hosts field must list the client-facing IPs of all etcd cluster members (127.0.0.1 can also be added for local access); here they are the fixed cluster IPs:
---------------------------------------------
{
"CN" : "etcd" ,
"hosts" : [
"192.168.73.71" ,
"192.168.73.72" ,
"192.168.73.73"
] ,
"key" : {
"algo" : "rsa" ,
"size" : 2048
} ,
"names" : [
{
"C" : "CN" ,
"L" : "BeiJing" ,
"ST" : "BeiJing"
}
]
}
---------------------------------------------
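A minimal sketch of the likely commands, assuming a working directory named ssl:
mkdir ssl && cd ssl
vim ca-config.json      # paste the first block above
vim ca-csr.json         # paste the second block
vim server-csr.json     # paste the third block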
3) Generate the certificates and list them (a hedged sketch of the cfssl invocations follows):
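A minimal sketch, assuming the standard cfssl workflow and the www profile defined in ca-config.json above:
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json \
-profile=www server-csr.json | cfssljson -bare server
ls *.pem
# ca-key.pem  ca.pem  server-key.pem  server.pem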
2. Deploy etcd
The deployment steps below are identical on all three planned etcd nodes; the only difference is that the IPs in each node's etcd configuration file must be its own.
@k8s-master k8s-node1 k8s-node2
Download the etcd-v3.2.12-linux-amd64.tar.gz package to each machine.
1) Create the etcd working directory, extract the binary package, and move the needed binaries into the working directory (sketch below):
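A minimal sketch of the likely commands:
mkdir -p /opt/etcd/{bin,cfg,ssl}
tar zxvf etcd-v3.2.12-linux-amd64.tar.gz
mv etcd-v3.2.12-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/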
2) Create the etcd configuration file /opt/etcd/cfg/etcd (the path referenced by the systemd unit below):
---------------------------------------------
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.73.71:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.73.71:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.73.71:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.73.71:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.73.71:2380,etcd02=https://192.168.73.72:2380,etcd03=https://192.168.73.73:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
-------------------------
ETCD_NAME                          node name
ETCD_DATA_DIR                      data directory
ETCD_LISTEN_PEER_URLS              cluster peer listen address
ETCD_LISTEN_CLIENT_URLS            client listen address
ETCD_INITIAL_ADVERTISE_PEER_URLS   advertised peer address
ETCD_ADVERTISE_CLIENT_URLS         advertised client address
ETCD_INITIAL_CLUSTER               addresses of all cluster members
ETCD_INITIAL_CLUSTER_TOKEN         cluster token
ETCD_INITIAL_CLUSTER_STATE         state when joining: "new" for a new cluster, "existing" to join one that already exists
---------------------------------------------
3) Register etcd as a system service managed by systemd (unit file typically /usr/lib/systemd/system/etcd.service):
---------------------------------------------
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd
ExecStart=/opt/etcd/bin/etcd \
--name=${ETCD_NAME} \
--data-dir=${ETCD_DATA_DIR} \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--peer-cert-file=/opt/etcd/ssl/server.pem \
--peer-key-file=/opt/etcd/ssl/server-key.pem \
--trusted-ca-file=/opt/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
---------------------------------------------
4) Copy the certificates generated earlier to the locations referenced in the configuration (sketch below):
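A minimal sketch, assuming the certificates live in ~/ssl as generated above:
cp ~/ssl/{ca,ca-key,server,server-key}.pem /opt/etcd/ssl/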
5) Start etcd and enable it at boot:
systemctl daemon-reload
systemctl start etcd.service
systemctl enable etcd
systemctl status etcd.service
Note: the start will not succeed at this point, because only the master's etcd is up and etcd blocks waiting for the other cluster members; terminate it for now.
II. Install Docker and etcd on the Two Node Machines
1. Install docker on each node
1) Install the prerequisites and add the Docker repository:
yum -y install yum-utils
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
2) Install Docker (docker-ce, from the repository added above) and enable it at boot:
yum -y install docker-ce
systemctl start docker
systemctl enable docker
3) Verify the Docker service by checking the version, and make sure the two nodes run the same version:
docker version
4) Docker pulls images from foreign registries by default; if that is slow, add a mirror registry and restart docker:
vim /etc/docker/daemon.json
---------------------------------------------
{
"registry-mirrors" : [ "https://ung2thfc.mirror.aliyuncs.com" ]
}
---------------------------------------------
systemctl restart docker
2. Install etcd on each node
1) Create the working directory, exactly as on the master: mkdir -p /opt/etcd/{bin,cfg,ssl}, then extract the tarball and move etcd and etcdctl into /opt/etcd/bin.
2) From the master, copy the files each node needs (a hedged scp sketch follows):
@k8s-master
a) the etcd configuration file
b) the systemd unit that manages etcd
c) the generated etcd certificates
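A minimal scp sketch, assuming the unit file lives at /usr/lib/systemd/system/etcd.service:
scp /opt/etcd/cfg/etcd root@192.168.73.72:/opt/etcd/cfg/
scp /opt/etcd/cfg/etcd root@192.168.73.73:/opt/etcd/cfg/
scp /usr/lib/systemd/system/etcd.service root@192.168.73.72:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/etcd.service root@192.168.73.73:/usr/lib/systemd/system/
scp /opt/etcd/ssl/* root@192.168.73.72:/opt/etcd/ssl/
scp /opt/etcd/ssl/* root@192.168.73.73:/opt/etcd/ssl/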
3) Edit the configuration file on each node.
@k8s-node1
Edit /opt/etcd/cfg/etcd:
---------------------------------------------
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.73.72:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.73.72:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.73.72:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.73.72:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.73.71:2380,etcd02=https://192.168.73.72:2380,etcd03=https://192.168.73.73:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
---------------------------------------------
Start it, enable it at boot, then check its status:
systemctl daemon-reload
systemctl start etcd.service
systemctl enable etcd
systemctl status etcd.service
@k8s-node2
Edit /opt/etcd/cfg/etcd:
---------------------------------------------
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.73.73:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.73.73:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.73.73:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.73.73:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.73.71:2380,etcd02=https://192.168.73.72:2380,etcd03=https://192.168.73.73:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
---------------------------------------------
Start it, enable it at boot, then check its status:
systemctl daemon-reload
systemctl start etcd.service
systemctl enable etcd
systemctl status etcd.service
3. Once every node is deployed, start the etcd service on each node and check the etcd cluster health (the likely health-check invocation is sketched below):
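A minimal sketch of the health check, using etcdctl's v2 syntax (the default for etcd v3.2):
cd /opt/etcd/ssl
/opt/etcd/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints="https://192.168.73.71:2379,https://192.168.73.72:2379,https://192.168.73.73:2379" \
cluster-health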
---------------------------------------------
member 350160273f5f96c4 is healthy: got healthy result from https://192.168.73.72:2379
member 38eaefc54f18a232 is healthy: got healthy result from https://192.168.73.73:2379
member a8796da733cd7114 is healthy: got healthy result from https://192.168.73.71:2379
cluster is healthy
---------------------------------------------
III. Deploy the Flannel Network
1. Flannel stores its own subnet information in etcd, so make sure etcd is reachable, then write the predefined subnet into etcd (sketch below):
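A minimal sketch, assuming the conventional flannel key and a 172.17.0.0/16 pod network (consistent with the docker0 addresses seen later):
cd /opt/etcd/ssl
/opt/etcd/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints="https://192.168.73.71:2379,https://192.168.73.72:2379,https://192.168.73.73:2379" \
set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'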
2. Deploy the Flannel network on each node
1) Download flannel-v0.10.0-linux-amd64.tar.gz (v0.11.0 also works) or copy it onto each machine:
wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
wget https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz
2) Extract it, create the working directory, and move the needed binaries into the working directory (sketch below):
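A minimal sketch of the likely commands, assuming v0.10.0:
tar zxvf flannel-v0.10.0-linux-amd64.tar.gz
mkdir -p /opt/kubernetes/{bin,cfg,ssl}
mv flanneld mk-docker-opts.sh /opt/kubernetes/bin/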
3) On each node, configure Flannel, manage Flannel with systemd, and configure Docker to start on the flannel subnet.
a) Configure Flannel — create /opt/kubernetes/cfg/flanneld (the path referenced by the unit file below):
---------------------------------------------
FLANNEL_OPTIONS="--etcd-endpoints=https://192.168.73.71:2379,https://192.168.73.72:2379,https://192.168.73.73:2379 -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem"
---------------------------------------------
b) Manage Flannel with systemd (typically /usr/lib/systemd/system/flanneld.service):
---------------------------------------------
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service
[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure
[Install]
WantedBy=multi-user.target
---------------------------------------------
c) Configure Docker to start on the flannel subnet — back up and edit the docker unit file (/usr/lib/systemd/system/docker.service):
---------------------------------------------
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
---------------------------------------------
d) Restart flannel and docker on node1 and check that the change took effect (sketch below):
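A minimal sketch of the likely restart-and-verify sequence:
systemctl daemon-reload
systemctl start flanneld
systemctl enable flanneld
systemctl restart docker
ps -ef | grep dockerd    # dockerd should now carry the network options injected by flannel
ifconfig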
[root@node1 k8s]# ifconfig docker0
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.58.1  netmask 255.255.255.0  broadcast 172.17.58.255
        ether 02:42:2f:c5:af:c1  txqueuelen 0  (Ethernet)
4) Copy the needed files over to node2 (a hedged scp sketch follows):
a) the Flannel configuration file
b) the systemd unit that manages Flannel
c) the docker unit configured for the flannel subnet
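A minimal scp sketch, assuming the unit files live under /usr/lib/systemd/system:
scp /opt/kubernetes/cfg/flanneld root@192.168.73.73:/opt/kubernetes/cfg/
scp /usr/lib/systemd/system/flanneld.service root@192.168.73.73:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/docker.service root@192.168.73.73:/usr/lib/systemd/system/
scp /opt/kubernetes/bin/{flanneld,mk-docker-opts.sh} root@192.168.73.73:/opt/kubernetes/bin/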
5) Restart flannel and docker on node2 (same sequence as on node1) and check that the change took effect; then use ping to test connectivity between the two docker bridges:
[root@node2 k8s]# ifconfig docker0
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.89.1  netmask 255.255.255.0  broadcast 172.17.89.255
6) Make sure docker0 and flannel.1 are in the same subnet on each node.
To test cross-node connectivity, ping the other node's docker0 IP from the current node, e.g. from node2 ping node1's docker0 (sketch below):
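A minimal sketch, using node1's docker0 address from the ifconfig output above:
ping 172.17.58.1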
IV. Deploy the Master Components
1. Before deploying Kubernetes, make sure etcd, flannel, and docker are all working properly; fix any problems before continuing.
Check that the services are running on every machine, master and nodes alike (sketch below):
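A minimal sketch (etcd runs on all three machines; flanneld and docker on the nodes):
systemctl status etcd
systemctl status flanneld
systemctl status docker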
2. Generate certificates:
1) Create a directory to hold the certificates (named k8s, judging by the prompts below).
2) Create the files that drive certificate generation.
ca-config.json:
---------------------------------------------
{
"signing" : {
"default" : {
"expiry" : "87600h"
} ,
"profiles" : {
"kubernetes" : {
"expiry" : "87600h" ,
"usages" : [
"signing" ,
"key encipherment" ,
"server auth" ,
"client auth"
]
}
}
}
}
---------------------------------------------
ca-csr.json:
---------------------------------------------
{
"CN" : "kubernetes" ,
"key" : {
"algo" : "rsa" ,
"size" : 2048
} ,
"names" : [
{
"C" : "CN" ,
"L" : "Beijing" ,
"ST" : "Beijing" ,
"O" : "k8s" ,
"OU" : "System"
}
]
}
---------------------------------------------
server-csr.json — hosts must cover every address the apiserver is reached by, including the first Service IP (10.0.0.1), localhost, the master IP, and the standard kubernetes service names:
---------------------------------------------
{
"CN" : "kubernetes" ,
"hosts" : [
"10.0.0.1" ,
"127.0.0.1" ,
"192.168.73.71" ,
"kubernetes" ,
"kubernetes.default" ,
"kubernetes.default.svc" ,
"kubernetes.default.svc.cluster" ,
"kubernetes.default.svc.cluster.local"
] ,
"key" : {
"algo" : "rsa" ,
"size" : 2048
} ,
"names" : [
{
"C" : "CN" ,
"L" : "BeiJing" ,
"ST" : "BeiJing" ,
"O" : "k8s" ,
"OU" : "System"
}
]
}
---------------------------------------------
kube-proxy-csr.json:
---------------------------------------------
{
"CN" : "system:kube-proxy" ,
"hosts" : [ ] ,
"key" : {
"algo" : "rsa" ,
"size" : 2048
} ,
"names" : [
{
"C" : "CN" ,
"L" : "BeiJing" ,
"ST" : "BeiJing" ,
"O" : "k8s" ,
"OU" : "System"
}
]
}
---------------------------------------------
3) Generate the certificates (a hedged sketch of the cfssl commands follows):
a. the CA certificate
b. the apiserver certificate
c. the kube-proxy certificate
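A minimal sketch, assuming the standard cfssl workflow and the kubernetes profile defined in ca-config.json above:
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json \
-profile=kubernetes server-csr.json | cfssljson -bare server
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json \
-profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy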
d. List the generated certificate files (six in total):
[root@k8s-master k8s]# ls *.pem
ca-key.pem ca.pem kube-proxy-key.pem kube-proxy.pem server-key.pem server.pem
3. Deploy the apiserver component:
1) Copy kubernetes-server-linux-amd64.tar.gz to the machine; it contains all the components we need.
Release notes for each version: https://kubernetes.io/zh/docs/setup/release/notes/
wget https://dl.k8s.io/v1.18.0/kubernetes-server-linux-amd64.tar.gz
2) Create the working directory, extract the package, and move the needed binaries into the working directory (sketch below):
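A minimal sketch of the likely commands:
mkdir -p /opt/kubernetes/{bin,cfg,ssl}
tar zxvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin
cp kube-apiserver kube-scheduler kube-controller-manager kubectl /opt/kubernetes/bin/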
3) Create the token file, noting the generated token (a sketch of the likely commands follows the note below):
Generated token: 9465881e2449a5427cf4b2da9e9da2ef
/opt/kubernetes/cfg/token.csv:
9465881e2449a5427cf4b2da9e9da2ef,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
-----------------------------------
Note (not written into the config file):
column 1: a random string, which you can generate yourself
column 2: user name
column 3: UID
column 4: user group
-----------------------------------
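A minimal sketch of the likely token generation:
head -c 16 /dev/urandom | od -An -t x | tr -d ' '
# 9465881e2449a5427cf4b2da9e9da2ef   (the value recorded above)
vim /opt/kubernetes/cfg/token.csv    # paste the csv line shown above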
4) Create the apiserver configuration file /opt/kubernetes/cfg/kube-apiserver (the path referenced by the unit file below):
---------------------------------------------
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.73.71:2379,https://192.168.73.72:2379,https://192.168.73.73:2379 \
--bind-address=192.168.73.71 \
--secure-port=6443 \
--advertise-address=192.168.73.71 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
-----------------------
Point these options at the certificates generated earlier and make sure apiserver can reach etcd.
Parameter notes:
--logtostderr                  log to stderr
--v                            log level
--etcd-servers                 etcd cluster endpoints
--bind-address                 listen address
--secure-port                  https secure port
--advertise-address            cluster advertise address
--allow-privileged             allow privileged containers
--service-cluster-ip-range     Service virtual IP range
--enable-admission-plugins     admission control plugins
--authorization-mode           authorization mode; enables RBAC and Node self-management
--enable-bootstrap-token-auth  enable the TLS bootstrap mechanism covered later
--token-auth-file              token file
--service-node-port-range      NodePort allocation range for Services
---------------------------------------------
5) Manage apiserver with systemd (typically /usr/lib/systemd/system/kube-apiserver.service):
---------------------------------------------
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
---------------------------------------------
6) Copy the certificate files into the working directory (sketch below):
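A minimal sketch, assuming the certificates were generated in ~/k8s as above:
cp ~/k8s/{ca,ca-key,server,server-key}.pem /opt/kubernetes/ssl/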
7) Start kube-apiserver and enable it at boot (sketch below):
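A minimal sketch:
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver
systemctl status kube-apiserver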
4. Deploy the scheduler component:
1) Create the scheduler configuration file /opt/kubernetes/cfg/kube-scheduler:
---------------------------------------------
KUBE_SCHEDULER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect"
Parameter notes:
--master        connect to the local apiserver
--leader-elect  elect a leader automatically when several instances of this component run (HA)
---------------------------------------------
2) Manage the scheduler with systemd:
---------------------------------------------
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
---------------------------------------------
3) Start it and enable it at boot:
systemctl daemon-reload
systemctl enable kube-scheduler.service
systemctl restart kube-scheduler.service
systemctl status kube-scheduler.service
5. Deploy the controller-manager component:
1) Create the controller-manager configuration file /opt/kubernetes/cfg/kube-controller-manager:
---------------------------------------------
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
--root-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem"
---------------------------------------------
2) Manage controller-manager with systemd:
---------------------------------------------
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
---------------------------------------------
3) Start it and enable it at boot:
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager
systemctl status kube-controller-manager
4) All components have now started successfully; use kubectl to view the current cluster component status:
[root@k8s-master ~]# /opt/kubernetes/bin/kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
V. Deploy the Node Components
Once the master apiserver enables TLS authentication, a Node's kubelet can only talk to the apiserver with a valid certificate signed by the CA. Signing certificates by hand becomes tedious with many Nodes, hence the TLS bootstrapping mechanism: the kubelet contacts the apiserver as a low-privilege user to request a certificate, and the apiserver signs the kubelet's certificate dynamically.
The rough authentication workflow is shown in the figure (omitted here).
1. Bind the kubelet-bootstrap user to the system cluster role:
/opt/kubernetes/bin/kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap
2. Create the kubeconfig files
Run the following in the directory where the kubernetes certificates were generated; the individual commands are sketched after this list.
1) Create the kubelet bootstrap kubeconfig. The bootstrap token in use (the line in /opt/kubernetes/cfg/token.csv):
b6036f0c394f940686b7b7ee444b7d4b,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
2) Set the cluster parameters.
3) Set the client authentication parameters.
4) Set the context parameters.
5) Set the default context.
6) Create the kube-proxy kubeconfig file.
7)-9) Repeat the set-cluster, set-credentials, and set-context steps for kube-proxy, then switch to its default context.
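A minimal sketch of the usual kubectl config sequence; kubectl here stands for /opt/kubernetes/bin/kubectl, and the variable names and apiserver URL are this sketch's own assumptions:
BOOTSTRAP_TOKEN=b6036f0c394f940686b7b7ee444b7d4b
KUBE_APISERVER="https://192.168.73.71:6443"
# bootstrap.kubeconfig (steps 2-5)
kubectl config set-cluster kubernetes \
--certificate-authority=./ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=bootstrap.kubeconfig
kubectl config set-credentials kubelet-bootstrap \
--token=${BOOTSTRAP_TOKEN} \
--kubeconfig=bootstrap.kubeconfig
kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
# kube-proxy.kubeconfig (steps 6-9)
kubectl config set-cluster kubernetes \
--certificate-authority=./ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
--client-certificate=./kube-proxy.pem \
--client-key=./kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig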
10) Copy the bootstrap.kubeconfig and kube-proxy.kubeconfig files into /opt/kubernetes/cfg on each Node (sketch below):
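A minimal scp sketch:
scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.73.72:/opt/kubernetes/cfg/
scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.73.73:/opt/kubernetes/cfg/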
3. Deploy the kubelet component
1) Copy the kubelet and kube-proxy binaries from the server tarball downloaded earlier into /opt/kubernetes/bin on each node (sketch below):
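A minimal sketch, assuming the tarball was extracted to kubernetes/server/bin on the master:
cd kubernetes/server/bin
scp kubelet kube-proxy root@192.168.73.72:/opt/kubernetes/bin/
scp kubelet kube-proxy root@192.168.73.73:/opt/kubernetes/bin/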
2) Create the kubelet configuration file /opt/kubernetes/cfg/kubelet:
---------------------------------------------
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.73.72 \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet.config \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
------------------------
Parameter notes:
--hostname-override           the hostname shown in the cluster
--kubeconfig                  kubeconfig location; generated automatically when the node joins
--bootstrap-kubeconfig        the bootstrap.kubeconfig generated above
--cert-dir                    where issued certificates are stored
--pod-infra-container-image   the image that manages the Pod network namespace
---------------------------------------------
3) Create the kubelet.config file /opt/kubernetes/cfg/kubelet.config as follows:
---------------------------------------------
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.73.72
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.0.0.2"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true
---------------------------------------------
4) Manage kubelet with systemd:
---------------------------------------------
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet
ExecStart=/opt/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process
[Install]
WantedBy=multi-user.target
---------------------------------------------
5) Start kubelet and enable it at boot:
systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet
systemctl status kubelet
6) Configure node2; the steps are the same as for node1.
Create the kubelet configuration file /opt/kubernetes/cfg/kubelet:
---------------------------------------------
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.73.73 \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet.config \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
---------------------------------------------
Create the kubelet.config file /opt/kubernetes/cfg/kubelet.config:
---------------------------------------------
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.73.73
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.0.0.2"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true
---------------------------------------------
Manage kubelet with systemd:
---------------------------------------------
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet
ExecStart=/opt/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process
[Install]
WantedBy=multi-user.target
---------------------------------------------
systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet
systemctl status kubelet
7) On the master, approve the Node joining the cluster:
After starting, a node has not actually joined the cluster yet; it must be approved manually.
On the master, list the Nodes requesting certificate signing:
[root@k8s-master ~]# /opt/kubernetes/bin/kubectl get csr
NAME                                                   AGE    REQUESTOR           CONDITION
node-csr-Bj4m02iAVLfY4lsLMDFGBfG1sfgPBnHTOAneF5DH1cA   116s   kubelet-bootstrap   Pending
Approve the node, then view the nodes that have joined (sketch below):
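A minimal sketch, using the CSR name shown above (each pending node gets its own approve):
/opt/kubernetes/bin/kubectl certificate approve node-csr-Bj4m02iAVLfY4lsLMDFGBfG1sfgPBnHTOAneF5DH1cA
/opt/kubernetes/bin/kubectl get node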
4. Deploy the kube-proxy component:
1) Create the kube-proxy configuration file /opt/kubernetes/cfg/kube-proxy:
---------------------------------------------
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.73.72 \
--cluster-cidr=10.0.0.0/24 \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"
---------------------------------------------
2) Manage kube-proxy with systemd:
---------------------------------------------
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
---------------------------------------------
3) Start it and enable it at boot:
systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy
systemctl status kube-proxy
4) Configure node2; the steps are the same as for node1.
Create the kube-proxy configuration file:
---------------------------------------------
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.73.73 \
--cluster-cidr=10.0.0.0/24 \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"
---------------------------------------------
Manage kube-proxy with systemd:
---------------------------------------------
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
---------------------------------------------
Start it and enable it at boot:
systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy
systemctl status kube-proxy.service
VI. Check the Cluster State
On the master, check component health and node status (sketch below):
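A minimal sketch:
/opt/kubernetes/bin/kubectl get cs
/opt/kubernetes/bin/kubectl get node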
VII. Run a Test Example: Nginx
1. Create an Nginx web Deployment and expose it, to test that the cluster works properly (sketch below):
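A minimal sketch; the port numbers are this sketch's assumptions, and a Deployment is created explicitly since kubectl run no longer creates Deployments in recent versions:
/opt/kubernetes/bin/kubectl create deployment nginx --image=nginx
/opt/kubernetes/bin/kubectl expose deployment nginx --port=88 --target-port=80 --type=NodePort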
2. View the Pods and the Service (sketch below):
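A minimal sketch:
/opt/kubernetes/bin/kubectl get pods
/opt/kubernetes/bin/kubectl get svc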
3. Verify: access the Nginx deployed in the cluster by opening a browser at http://192.168.73.71:38393 (38393 being the NodePort reported by kubectl get svc).
Reminder:
If typing the full binary path for every command gets tedious, create a symlink (sketch below):
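A minimal sketch:
ln -s /opt/kubernetes/bin/kubectl /usr/local/bin/kubectl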