Low-Code Platform Cluster Deployment: Keepalived + Nginx
1. About This Document
Clustered deployment of the low-code platform.
Note: all sorts of problems can crop up during setup; treat this document only as a reference.
Feedback and discussion are welcome.
2. Deployment Guide
2.1. Installing nginx and keepalived
2.1.1. Environment
OS: CentOS 7.6, 64-bit
master machine (master-node): 192.168.xxx.60  VIP1: 192.168.xxx.160
slave machine (slave-node): 192.168.xxx.61  VIP2: 192.168.xxx.161
2.1.2. Installation
Install the nginx and keepalived services (the steps are identical on master-node and slave-node)
1. Install dependencies
[root@localhost ~]# yum -y install gcc
[root@localhost ~]# yum -y install openssl-devel
[root@localhost ~]# yum -y install libnl libnl-devel
[root@localhost ~]# yum -y install libnfnetlink-devel
[root@localhost ~]# yum -y install net-tools
[root@localhost ~]# yum -y install vim
[root@localhost ~]# yum -y install psmisc
2. Install nginx
Copy the nginx source tarball to /usr/local/src
[root@localhost ~]# cd /usr/local/src
[root@localhost src]# tar -zvxf nginx-1.20.1.tar.gz
[root@localhost src]# cd nginx-1.20.1
[root@localhost nginx-1.20.1]# ./configure --prefix=/usr/local/nginx --user=www --group=www --with-http_ssl_module --with-http_flv_module --with-http_stub_status_module --with-http_gzip_static_module --with-pcre --with-http_realip_module
[root@localhost nginx-1.20.1]# make && make install
[root@localhost nginx-1.20.1]# /usr/sbin/groupadd -f www
[root@localhost nginx-1.20.1]# /usr/sbin/useradd -g www www
3. Install keepalived (skip if already installed)
Copy the keepalived source tarball to /usr/local/src
[root@localhost src]# tar -zvxf keepalived-2.2.7.tar.gz
[root@localhost src]# cd keepalived-2.2.7
[root@localhost keepalived-2.2.7]# ./configure
[root@localhost keepalived-2.2.7]# make && make install
[root@localhost keepalived-2.2.7]# mkdir /etc/keepalived
[root@localhost keepalived-2.2.7]# cp /usr/local/src/keepalived-2.2.7/keepalived/etc/keepalived/keepalived.conf.sample /etc/keepalived/
[root@localhost keepalived-2.2.7]# cp /usr/local/src/keepalived-2.2.7/keepalived/etc/init.d/keepalived /etc/rc.d/init.d/
[root@localhost keepalived-2.2.7]# cp /usr/local/src/keepalived-2.2.7/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
[root@localhost keepalived-2.2.7]# cp /usr/local/sbin/keepalived /usr/sbin/
Copy the configuration file keepalived.conf_1 below to /etc/keepalived/keepalived.conf (on the slave, mirror it: swap the MASTER/BACKUP states and the priorities)
keepalived.conf_1
! Configuration File for keepalived
global_defs {
    notification_email {
        acassen@firewall.loc
        failover@firewall.loc
        sysadmin@firewall.loc
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 192.168.xxx.60
    smtp_connect_timeout 30
    router_id LVS_DEVEL
    vrrp_skip_check_adv_addr
    #vrrp_strict
    vrrp_garp_interval 0
    vrrp_gna_interval 0
}
# run the nginx health check installed below (without this wiring,
# check_nginx.sh would never be invoked by keepalived)
vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
    interval 2
}
vrrp_instance master1 {
    state MASTER
    interface ens192
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.xxx.200
    }
    track_script {
        check_nginx
    }
}
vrrp_instance master2 {
    state BACKUP
    interface ens192
    virtual_router_id 52
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.xxx.201
    }
    track_script {
        check_nginx
    }
}
Copy the health-check script check_nginx.sh below to /etc/keepalived and make it executable
chmod +x /etc/keepalived/check_nginx.sh
#!/bin/bash
# Try to restart nginx when it is down; if it still is not running after the
# restart attempt, kill keepalived so the VIP fails over to the other node.
run=$(ps -C nginx --no-header | wc -l)
if [ "$run" -eq 0 ]
then
    /usr/local/nginx/sbin/nginx
    sleep 3
    run=$(ps -C nginx --no-header | wc -l)   # re-check after the restart attempt
    if [ "$run" -eq 0 ]
    then
        killall keepalived
    fi
fi
2.1.3. Configuring the services
- First disable SELinux and the firewall
[root@localhost keepalived-2.2.7]# vim /etc/sysconfig/selinux
#SELINUX=enforcing    # comment out
#SELINUXTYPE=targeted    # comment out
SELINUX=disabled    # add this line
[root@localhost keepalived-2.2.7]# setenforce 0
[root@localhost keepalived-2.2.7]# systemctl stop firewalld
- Start nginx and keepalived first on the master, then on the slave, and make sure both services come up
nginx:
cd /usr/local/nginx/sbin
./nginx
keepalived:
service keepalived start
service keepalived stop    # stop the service
service keepalived status    # check service status
- On the master and slave servers, check whether the virtual IPs are bound
- Stop keepalived on the master server
[root@localhost ~]# /etc/init.d/keepalived stop
- Check the virtual IPs again; the addresses held by the master should have failed over to the slave
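The binding checks above can be done with `ip addr` (ens192 is the interface named in keepalived.conf):

```shell
# On the master before the failover test the VIPs show up as extra "inet"
# entries; after keepalived is stopped they should appear on the slave instead.
ip addr show ens192
```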
2.2. nacos
2.2.1. Environment
Make sure the environment provides: 64-bit JDK 1.8+ and Maven 3.2.x+
Three machines; this walkthrough uses 192.168.xxx.64, 192.168.xxx.65, 192.168.xxx.66
2.2.2. Download the source or release package
1. Copy the nacos package to /usr/local/src on 192.168.xxx.64
[root@localhost src]# tar -zvxf nacos-server-1.4.2-SNAPSHOT.tar.gz
[root@localhost src]# cd nacos/conf
2. In the conf directory of the extracted nacos/ directory there is a cluster.conf file; configure one node per line in ip:port form
[root@localhost conf]# vim cluster.conf
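A minimal cluster.conf for the three machines in this walkthrough, assuming the default nacos port 8848:

```
192.168.xxx.64:8848
192.168.xxx.65:8848
192.168.xxx.66:8848
```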
[root@localhost conf]# vim application.properties
3. Add the database configuration to application.properties
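For nacos 1.4.x, an external MySQL store is configured in application.properties roughly as below; the database host, schema name, user, and password are placeholders to adjust:

```properties
spring.datasource.platform=mysql
db.num=1
db.url.0=jdbc:mysql://192.168.xxx.64:3306/nacos?characterEncoding=utf8&connectTimeout=1000&socketTimeout=3000&autoReconnect=true
db.user=nacos
db.password=nacos
```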
4. Run the matching database schema script
5. Start nacos
[root@localhost conf]# cd ../bin
[root@localhost bin]# sh startup.sh
6. Configure and start the other two machines
Copy the nacos directory to the other two machines and start them.
7. Configure nginx to load-balance nacos
On the nginx machines 192.168.xxx.60 and 192.168.xxx.61, add the following to nginx.conf
[root@localhost ~]# cd /usr/local/nginx/conf
[root@localhost ~]# vim nginx.conf
Add the following inside the http block:
upstream nacosServer {
    server 192.168.xxx.64:8848;
    server 192.168.xxx.65:8848;
    server 192.168.xxx.66:8848;
}
server {
    listen 8848;
    server_name localhost;
    location /nacos {
        proxy_pass http://nacosServer/nacos;
        index index.html index.htm;
    }
}
Reload the configuration
[root@localhost conf]# cd ../sbin/
[root@localhost sbin]# ./nginx -s reload
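After the reload, the nacos console should be reachable through either nginx machine, since the /nacos path is proxied to the upstream cluster:

```shell
# expect an HTTP 200 or redirect from the nacos console via nginx
curl -I http://192.168.xxx.60:8848/nacos/
```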
2.3. seata
1. Copy the seata package to /home/lcp on 192.168.xxx.64 and extract it
Edit the file.conf configuration file
Edit the registry.conf configuration file
Start seata
[root@localhost seata]# nohup sh bin/seata-server.sh -p 8091 -n 1 &
2. Copy the seata directory to the other two servers and start it there
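A quick way to confirm each seata server came up (assuming it was started from the seata directory, so nohup.out lives there):

```shell
# the server should be listening on 8091; the log shows startup errors if not
ss -lntp | grep 8091
tail -n 50 nohup.out
```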
2.4. minio
1. Prepare four servers:
Node | Data directories |
---|---|
192.168.xxx.61 | /opt/minio/data1, /opt/minio/data2 |
192.168.xxx.64 | /opt/minio/data1, /opt/minio/data2 |
192.168.xxx.65 | /opt/minio/data1, /opt/minio/data2 |
192.168.xxx.66 | /opt/minio/data1, /opt/minio/data2 |
2. Create the directories (all machines)
[root@localhost ~]# mkdir -p /opt/minio/{run,data1,data2} && mkdir -p /etc/minio
3. Upload the minio binary to /opt/minio/run
4. Write the cluster startup script (all machines)
[root@localhost ~]# vim /opt/minio/run/run.sh
#!/bin/bash
# The credentials below are placeholders -- change them. MinIO requires the
# secret key to be at least 8 characters long.
export MINIO_ACCESS_KEY=admin
export MINIO_SECRET_KEY=admin12345
# Listen on all interfaces (binding 127.0.0.1 would stop the nodes from
# reaching each other) and list both data directories of every node.
/opt/minio/run/minio server --config-dir /etc/minio \
--address ":9000" \
http://192.168.xxx.61:9000/opt/minio/data1 http://192.168.xxx.61:9000/opt/minio/data2 \
http://192.168.xxx.64:9000/opt/minio/data1 http://192.168.xxx.64:9000/opt/minio/data2 \
http://192.168.xxx.65:9000/opt/minio/data1 http://192.168.xxx.65:9000/opt/minio/data2 \
http://192.168.xxx.66:9000/opt/minio/data1 http://192.168.xxx.66:9000/opt/minio/data2
5. Create the minio.service unit (all machines)
[root@localhost ~]# vim /usr/lib/systemd/system/minio.service
minio.service
[Unit]
Description=Minio service
Documentation=https://docs.minio.io/
[Service]
WorkingDirectory=/opt/minio/run/
ExecStart=/opt/minio/run/run.sh
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
6. Fix permissions (all machines)
chmod +x /usr/lib/systemd/system/minio.service && chmod +x /opt/minio/run/minio && chmod +x /opt/minio/run/run.sh
7. Start the cluster (all machines)
systemctl daemon-reload
systemctl start minio
systemctl enable minio
Check the cluster status
systemctl status minio.service
Note: make sure the firewall is down (or the relevant ports are open) before starting the cluster, otherwise startup will fail.
8. Check the logs
[root@localhost log]# tail -100f /var/log/messages
9. Verify the setup
Open any node's IP and port in a browser and try creating a bucket; if bucket creation fails, some step of the cluster setup did not succeed.
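Besides the browser test, each node exposes MinIO's liveness endpoint, which can be probed from the shell:

```shell
# an HTTP 200 means this node is up and serving
curl -i http://192.168.xxx.61:9000/minio/health/live
```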
10. Configure nginx to load-balance minio
On the nginx machines 192.168.xxx.60 and 192.168.xxx.61, add the following to nginx.conf
[root@localhost ~]# cd /usr/local/nginx/conf
[root@localhost ~]# vim nginx.conf
Add the following inside the http block:
upstream minioServer {
    server 192.168.xxx.61:9000;
    server 192.168.xxx.64:9000;
    server 192.168.xxx.65:9000;
    server 192.168.xxx.66:9000;
}
server {
    listen 9000;
    server_name localhost;
    location /minio {
        proxy_pass http://minioServer/minio;
        index index.html index.htm;
    }
}
Note: on 192.168.xxx.61 the local MinIO node already occupies port 9000, so pick a different listen port for nginx on that machine.
Reload the configuration
[root@localhost conf]# cd ../sbin/
[root@localhost sbin]# ./nginx -s reload
2.5. redis
Approach: Redis Cluster sharding
1. A Redis cluster needs at least six nodes, which can sit on one host or be spread across several
This walkthrough builds the cluster across multiple hosts, with a different port for each redis node:
master nodes: 192.168.xxx.64:7001 192.168.xxx.65:7002 192.168.xxx.66:7003
slave nodes: 192.168.xxx.60:7004 192.168.xxx.61:7005 192.168.xxx.72:7006
2. Install dependencies
[root@localhost local]# yum -y install ruby
3. Copy redis-3.0.0-rc2.tar.gz to /usr/local, then extract and build:
[root@localhost local]# tar -zxvf redis-3.0.0-rc2.tar.gz
[root@localhost local]# mv redis-3.0.0-rc2 redis
[root@localhost local]# cd redis
[root@localhost redis]# make && make install
If make fails, run the following instead:
[root@localhost redis]# make MALLOC=libc
[root@localhost redis]# make install
4. Under /usr/local/redis create a redis-cluster directory, and inside it the directories 7001, 7002, …, 7006:
[root@localhost redis]# mkdir redis-cluster
[root@localhost redis]# cd redis-cluster
[root@localhost redis-cluster]# mkdir 7001 7002 7003 7004 7005 7006
5. Copy redis.conf from the redis source tree into the 7001 directory and edit it:
[root@localhost redis-cluster]# cp /usr/local/redis/redis.conf /usr/local/redis/redis-cluster/7001
Settings to change:
port 7001
bind 192.168.xxx.64
cluster-enabled yes
daemonize yes
logfile /usr/local/redis/redis-cluster/7001/node.log
Copy redis.conf from 7001 into each of the other directories and adjust its IP and port
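The copy-and-edit in this step can be scripted. The helper below is a sketch, not part of the original docs: it rewrites every occurrence of 7001, which covers both the port and the log path. The bind address still has to be edited by hand on whichever machine each directory ends up on.

```shell
#!/bin/bash
# Derive the 7002..7006 configs from the 7001 config by rewriting every
# occurrence of "7001" (port and logfile path).
clone_redis_conf() {
    local base="$1"   # e.g. /usr/local/redis/redis-cluster
    local port
    for port in 7002 7003 7004 7005 7006; do
        cp "$base/7001/redis.conf" "$base/$port/redis.conf"
        sed -i "s/7001/$port/g" "$base/$port/redis.conf"
    done
}
```

Run it as `clone_redis_conf /usr/local/redis/redis-cluster` once the directories from step 4 exist.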
6. Repeat steps 2-3 on the other machines, then copy the redis-cluster directory to them
7. Start the redis service on every node
Start each node from the redis.conf in its 700X directory (the redis.conf file must be passed explicitly).
[root@localhost redis]# /usr/local/bin/redis-server /usr/local/redis/redis-cluster/7001/redis.conf
8. Check that redis is running
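On each machine, the running nodes can be listed from the process table:

```shell
# each local node should show one redis-server process
ps -ef | grep redis-server | grep -v grep
```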
9. Create the cluster
Go to the redis source directory /usr/local/redis/src and run redis-trib.rb; it is a Ruby script and depends on the Ruby environment installed in step 2
[root@localhost src]# cd /usr/local/redis/src
[root@localhost src]# ./redis-trib.rb create --replicas 1 192.168.xxx.64:7001 192.168.xxx.65:7002 192.168.xxx.66:7003 192.168.xxx.60:7004 192.168.xxx.61:7005 192.168.xxx.72:7006
10. Inspect the cluster
Once the cluster has been created, log in to any redis node to see the state of the cluster's nodes
[root@localhost src]# ./redis-cli -c -h 192.168.xxx.64 -p 7001
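redis-cli can also run the cluster commands non-interactively, which is handy for checking state from a script:

```shell
# cluster_state:ok with all 16384 slots assigned means the cluster is healthy
/usr/local/bin/redis-cli -c -h 192.168.xxx.64 -p 7001 cluster info
# lists every node with its role (master/slave) and slot ranges
/usr/local/bin/redis-cli -c -h 192.168.xxx.64 -p 7001 cluster nodes
```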
2.6. Front-end / back-end services
2.6.1. Front end
1. On the nginx machines 192.168.xxx.60 and 192.168.xxx.61, create lcp_client under /usr/local/nginx/html and copy the front-end assets into it
2. Add the following to the nginx configuration file /usr/local/nginx/conf/nginx.conf:
server {
    listen 3303;
    server_name localhost;
    #set_real_ip_from 220.248.78.106;
    real_ip_header X-Forwarded-For;
    real_ip_recursive on;
    client_max_body_size 1024m;
    location /lcp_client {
        alias /usr/local/nginx/html/lcp_client;
    }
}
2.6.2. Back end
1. On each machine that will host the back-end services, create /home/lcp/jars
2. Upload the back-end configuration files and jar packages
3. Edit the corresponding configuration files under configs
4. Copy the jars directory to the other machines and adjust the IPs in the configs
5. Start the back-end services on each of these machines
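Step 5 can be scripted. The sketch below assumes the services are Spring Boot jars under the /home/lcp/jars directory of step 1, reading their configuration from the configs directory of step 3; the config flag and one-log-per-jar layout are assumptions to adapt to the real services:

```shell
#!/bin/bash
# Start every back-end jar in the background, one log file per service.
JAR_DIR=/home/lcp/jars
for jar in "$JAR_DIR"/*.jar; do
    name=$(basename "$jar" .jar)
    nohup java -jar "$jar" --spring.config.additional-location="$JAR_DIR/configs/" \
        > "$JAR_DIR/$name.log" 2>&1 &
    echo "started $name (pid $!)"
done
```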
2.7. Database
Omitted.
2.8. Monitoring
Omitted.