Lab environment:
Lab topology:
Recommended steps:
This continues from the previously configured single-master setup.
I. Configure master02 first
1. Prepare master02 (SELinux, iptables, hostname), then on master01 copy the kubernetes directory over to it
[root@localhost ~]# setenforce 0
[root@localhost ~]# iptables -F
[root@localhost ~]# iptables -t nat -F
[root@localhost ~]# hostnamectl set-hostname master02
[root@localhost ~]# su
[root@master ~]# scp -r /opt/kubernetes/ root@192.168.148.137:/opt/ ## copy the whole kubernetes directory tree to master02 (run on master01)
2. Also copy the systemd unit files for the three master components over to master02
[root@master ~]# scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service root@192.168.148.137:/usr/lib/systemd/system/
3. On master02, update the IP addresses in the kube-apiserver config file
[root@master02 kubernetes]# cd /opt/kubernetes/cfg/
[root@master02 cfg]# vim kube-apiserver
....
--etcd-servers=https://192.168.148.138:2379,https://192.168.148.139:2379,https://192.168.148.140:2379 \
--bind-address=192.168.148.137 \ ## change this IP to master02's own address
--secure-port=6443 \
--advertise-address=192.168.148.137 \ ## change in the same way
Note: the etcd certificates on master01 must also be copied to master02, because etcd certificate verification is configured at the bottom of this file, so the etcd certs are required.
[root@master ~]# scp -r /opt/etcd/ root@192.168.148.137:/opt/
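Instead of editing the two address lines by hand, they can be rewritten with sed. A minimal sketch, assuming master01's IP (192.168.148.138) is the value being replaced; adjust the path and addresses to your environment:

```shell
# Rewrite the bind/advertise addresses in kube-apiserver's config.
# CFG defaults to this walkthrough's path; the IPs are assumptions
# (master01 = .138, master02 = .137) -- adjust to your environment.
CFG=${CFG:-/opt/kubernetes/cfg/kube-apiserver}
sed -i \
    -e 's/--bind-address=192\.168\.148\.138/--bind-address=192.168.148.137/' \
    -e 's/--advertise-address=192\.168\.148\.138/--advertise-address=192.168.148.137/' \
    "$CFG"
grep -E -- '--(bind|advertise)-address' "$CFG"   # both lines should now show .137
```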
4. Start the three component services on master02
[root@master02 cfg]# systemctl start kube-apiserver.service
[root@master02 cfg]# systemctl start kube-controller-manager.service
[root@master02 cfg]# systemctl start kube-scheduler.service
[root@master02 cfg]# vim /etc/profile ## add an environment variable
....
unset -f pathmunge
export PATH=$PATH:/opt/kubernetes/bin/ ## append this line at the end of the file
[root@master02 cfg]# source /etc/profile ## reload to apply the new PATH
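Before calling kubectl, it is worth confirming the PATH change actually took effect in the current shell; a minimal standalone check:

```shell
# Verify /opt/kubernetes/bin/ is on PATH after sourcing /etc/profile.
export PATH=$PATH:/opt/kubernetes/bin/
case ":$PATH:" in
    *:/opt/kubernetes/bin/:*) echo "PATH ok" ;;
    *)                        echo "PATH missing /opt/kubernetes/bin/" ;;
esac
```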
[root@master02 cfg]# kubectl get node
That completes the master02 configuration.
II. Configure nginx01
Recommended steps:
1. Install the nginx service, and copy the nginx.sh and keepalived.conf scripts to the home directory
[root@localhost ~]# setenforce 0
[root@localhost ~]# systemctl stop firewalld.service
[root@localhost ~]# hostnamectl set-hostname nginx01
[root@localhost ~]# su
[root@nginx01 ~]# vim /etc/yum.repos.d/nginx.repo ## configure the nginx yum repo
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
[root@nginx01 ~]# yum -y install nginx ## install nginx (the official nginx.org packages are built with the stream module)
2. Configure nginx as a layer-4 reverse proxy
[root@nginx01 ~]# vim /etc/nginx/nginx.conf ## edit the main config
events {
    worker_connections 1024;
}
stream {    ## add this entire block
    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';    ## log format
    access_log /var/log/nginx/k8s-access.log main;    ## log file location
    upstream k8s-apiserver {    ## layer-4 load-balancing pool
        server 192.168.148.138:6443;    ## master01's address
        server 192.168.148.137:6443;    ## master02's address
    }
    server {
        listen 6443;    ## nginx listens on port 6443
        proxy_pass k8s-apiserver;
    }
}
http {
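Note that the stream block sits at the top level of /etc/nginx/nginx.conf, as a sibling of events and http, not inside http. A sketch of the overall file layout (the stock http block is left as shipped):

```nginx
user  nginx;
worker_processes  1;
error_log  /var/log/nginx/error.log;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

# layer-4 (TCP) proxying for the two apiservers
stream {
    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log  /var/log/nginx/k8s-access.log  main;
    upstream k8s-apiserver {
        server 192.168.148.138:6443;   # master01
        server 192.168.148.137:6443;   # master02
    }
    server {
        listen 6443;
        proxy_pass k8s-apiserver;
    }
}

http {
    # ... stock http block unchanged ...
}
```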
[root@nginx01 ~]# systemctl start nginx ## start the service
Repeat the same installation and proxy configuration on nginx02.
3. Deploy the keepalived service (install keepalived on both nginx servers)
[root@nginx01 ~]# yum -y install keepalived
4. Modify the configuration file
Use the keepalived.conf that was copied to the home directory earlier:
[root@nginx01 ~]# cp keepalived.conf /etc/keepalived/keepalived.conf ## replace the stock config with the prepared one
cp: overwrite "/etc/keepalived/keepalived.conf"? y
[root@nginx01 ~]# cd /etc/keepalived/
[root@nginx01 keepalived]# vim keepalived.conf ## nginx01 is configured as MASTER
! Configuration File for keepalived
global_defs {
# email addresses to notify
notification_email {
acassen@firewall.loc
failover@firewall.loc
sysadmin@firewall.loc
}
notification_email_from Alexandre.Cassen@firewall.loc
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id NGINX_MASTER
}
vrrp_script check_nginx {
script "/etc/nginx/check_nginx.sh"
}
vrrp_instance VI_1 {
state MASTER
interface ens33
virtual_router_id 51 ## VRRP instance route ID; unique per instance
priority 100 ## priority; set 90 on the backup server
advert_int 1 ## VRRP advertisement interval; default 1 second
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.148.100/24 ## the configured virtual IP
}
track_script {
check_nginx
}
}
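The nginx02 copy shown next differs from the master config only in state and priority, so it can be derived mechanically from the master file; a sketch, assuming the master config is already in place:

```shell
# Generate nginx02's BACKUP variant from the MASTER keepalived.conf.
# SRC defaults to this walkthrough's path; adjust as needed.
SRC=${SRC:-/etc/keepalived/keepalived.conf}
sed -e 's/state MASTER/state BACKUP/' \
    -e 's/priority 100/priority 90/' \
    "$SRC" > keepalived-backup.conf
grep -E 'state|priority' keepalived-backup.conf   # should show BACKUP / 90
```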
On nginx02, configure it as BACKUP:
! Configuration File for keepalived
global_defs {
notification_email {
acassen@firewall.loc
failover@firewall.loc
sysadmin@firewall.loc
}
notification_email_from Alexandre.Cassen@firewall.loc
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id NGINX_MASTER
}
vrrp_script check_nginx {
script "/etc/nginx/check_nginx.sh"
}
vrrp_instance VI_1 {
state BACKUP ## state set to BACKUP
interface ens33
virtual_router_id 51
priority 90 ## priority must be lower than the master's
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.148.100/24 ## virtual IP
}
track_script {
check_nginx
}
}
5. Create the check_nginx.sh script (needed on both nginx servers; nginx01 shown here)
[root@nginx01 keepalived]# vim /etc/nginx/check_nginx.sh
#!/bin/bash
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")
if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi
[root@nginx01 keepalived]# chmod +x /etc/nginx/check_nginx.sh ## make it executable
[root@nginx01 keepalived]# systemctl start keepalived ## start the keepalived service
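The counting pipeline inside check_nginx.sh can be sanity-checked offline against a captured ps -ef snapshot (the sample lines below are made up). The real script additionally excludes its own shell's PID via "grep|$$":

```shell
# Count nginx processes in a saved `ps -ef` snapshot, excluding the
# grep line itself -- the same filtering idea used by check_nginx.sh.
sample='root  1001     1  0 10:00 ?  00:00:00 nginx: master process
nginx 1002  1001  0 10:00 ?  00:00:00 nginx: worker process
root  2003  1999  0 10:01 pts/0 00:00:00 grep nginx'
count=$(printf '%s\n' "$sample" | grep nginx | grep -cv grep)
echo "$count"   # 2 -- the master and worker processes, not the grep line
```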
Check with ip a: the VIP is on nginx01. nginx02 does not hold it, because the virtual IP only floats over when the master goes down.
6. Verify the keepalived failover
[root@nginx01 keepalived]# pkill nginx ## kill nginx on nginx01
[root@nginx02 keepalived]# ip a ## on nginx02, the virtual IP should now appear
To move the floating IP back to nginx01:
[root@nginx01 keepalived]# systemctl start nginx ## start nginx on nginx01 first
[root@nginx01 keepalived]# systemctl start keepalived.service ## then start keepalived (check_nginx.sh stopped it when nginx died)
7. Point the node kubeconfig files (bootstrap.kubeconfig, kubelet.kubeconfig, kube-proxy.kubeconfig) at the unified VIP. Both node servers need this change; node01 is shown here.
[root@node01 ~]# vim /opt/kubernetes/cfg/bootstrap.kubeconfig
[root@node01 ~]# vim /opt/kubernetes/cfg/kubelet.kubeconfig
[root@node01 ~]# vim /opt/kubernetes/cfg/kube-proxy.kubeconfig
## in all three files, change the server address to the VIP
- cluster:
server: https://192.168.148.100:6443 ## every server address now points at the virtual IP
name: kubernetes
[root@node01 ~]# systemctl restart kubelet.service ## restart the node-side services
[root@node01 ~]# systemctl restart kube-proxy.service
[root@node01 cfg]# cd /opt/kubernetes/cfg/ ## after the replacement, self-check from here
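The three files can also be repointed in one pass with sed; a sketch, assuming this walkthrough's config directory and VIP:

```shell
# Point every kubeconfig's server: line at the VIP in one pass.
# CFG_DIR and VIP are this walkthrough's values; adjust as needed.
CFG_DIR=${CFG_DIR:-/opt/kubernetes/cfg}
VIP=${VIP:-192.168.148.100}
for f in bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig; do
    sed -i "s#\(server: https://\)[0-9.]*:6443#\1${VIP}:6443#" "$CFG_DIR/$f"
done
grep 'server:' "$CFG_DIR"/*.kubeconfig   # every file should now show the VIP
```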
Repeat the same changes on node02.
8. On nginx01, inspect the nginx k8s access log
[root@nginx01 keepalived]# tail /var/log/nginx/k8s-access.log
9. On master01, test creating a pod
[root@master ~]# kubectl run nginx --image=nginx ## launch an nginx pod
Back on node01, the nginx image has now been pulled,
and the container is in the Up state.
[root@master ~]# kubectl get pods ## check pod status on the master
It shows Running.
node02 has nothing, because the scheduler placed the single replica on only one node.
10. View the pod logs
[root@master ~]# kubectl logs nginx-dbddb74b8-886sj ## view the logs
Error from server (Forbidden): Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=proxy) ( pods/log nginx-dbddb74b8-886sj) ## the request fails because no identity is bound for the anonymous user to read logs
[root@master ~]# kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous ## bind a role so logs can be read (granting cluster-admin to anonymous is insecure; lab use only)
clusterrolebinding.rbac.authorization.k8s.io/cluster-system-anonymous created
[root@master ~]# kubectl get pods -o wide ## get the pod's IP address
[root@node01 cfg]# curl 172.17.35.2 ## from a node on the matching subnet, the pod can be reached directly
[root@master ~]# kubectl logs nginx-dbddb74b8-886sj ## back on the master: an access entry now appears, originating from the pod network gateway