Offline Cluster Deployment of APISIX

1. LuaJIT Installation

1. Upload LuaJIT-2.1.0-beta3.tar.gz to the /home/dean directory

2. Extract: tar -zxvf LuaJIT-2.1.0-beta3.tar.gz

3. cd LuaJIT-2.1.0-beta3

4. Build and install: make && make install

5. Create a symlink: ln -sf luajit-2.1.0-beta3 /usr/local/bin/luajit

6. Verify the installation: luajit -v

2. readline Library Installation

1. Upload readline-6.2.tar.gz to the /home/dean directory and extract: tar -zxvf readline-6.2.tar.gz

2.cd readline-6.2

3. Configure, build, and install:

(1)./configure --prefix=/usr/local

(2)make

(3)make install

(4) ldconfig

3. Lua Installation

1. Upload lua-5.1.5.tar.gz to the /home/dean directory

2. Extract: tar -xzvf lua-5.1.5.tar.gz

3. Enter the source directory and build/install:

(1)cd lua-5.1.5

(2) make linux && make install

4. Verify the installation: lua -v

4. LuaRocks Installation

1. Upload luarocks-3.9.2.tar.gz to the /home/dean directory

2. Extract: tar -zxvf luarocks-3.9.2.tar.gz

3. Enter the source directory: cd luarocks-3.9.2

4. Configure, build, and install:

(1)./configure

(2)make

(3)make install

5. Verify the installation: luarocks --version


5. OpenResty Installation

1. Upload openresty-1.21.4.1.tar.gz to the /home/dean directory

2. Extract: tar -xzvf openresty-1.21.4.1.tar.gz

3. Configure and build: cd openresty-1.21.4.1/

(1) ./configure --with-http_ssl_module --with-luajit --with-http_stub_status_module --with-http_realip_module --with-http_v2_module --with-openssl=/usr/local/openssl-1.1.1i/openssl-1.1.1i


(2)make

(3)make install

4. Verify the installation:

Check the bundled nginx version: /usr/local/openresty/nginx/sbin/nginx -v

Check the OpenResty version: openresty -v

5. Configure environment variables: vi /etc/profile

export PATH=$PATH:/usr/local/openresty/bin
export PATH=/usr/local/openresty/nginx/sbin:$PATH

Apply the changes: source /etc/profile

6. etcd Cluster Installation

1. Upload the following package to the /home/de directory:

etcd-v3.5.9-linux-amd64.tar.gz

2. Extract and install the binaries

tar -zxf etcd-v3.5.9-linux-amd64.tar.gz
cd etcd-v3.5.9-linux-amd64/
mv etcd* /usr/local/bin

3. Configure environment variables

vi /etc/profile

export PATH=$PATH:/usr/local/bin
export ETCDCTL_API=3

Press Esc, then type :wq to save and exit. Apply the changes:
source /etc/profile

4. Check the version

etcdctl version

5. Create the etcd data directory

mkdir -p /var/lib/etcd/

6. Create the systemd service file

cat <<EOF | sudo tee /etc/systemd/system/etcd.service
 
[Unit]
Description=Etcd Server
Documentation=https://github.com/coreos/etcd
After=network.target
 
[Service]
User=root
Type=notify
# This file is critical: etcd reads all of its environment variables from it
EnvironmentFile=-/etc/etcd.conf
ExecStart=/usr/local/bin/etcd
Restart=on-failure
RestartSec=10s
LimitNOFILE=40000
 
[Install]
WantedBy=multi-user.target
EOF
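
After writing the unit file, reload systemd and (optionally) enable etcd to start at boot; a minimal sketch:

systemctl daemon-reload
systemctl enable etcd
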
7. Set up a three-node cluster

192.167.14.223
192.167.14.222
192.167.14.241

(1) Node 192.167.14.223

cat <<EOF | sudo tee /etc/etcd.conf
# Node name
ETCD_NAME=etcd0
# Data directory
ETCD_DATA_DIR=/var/lib/etcd/data.etcd
# URL used for peer (cluster-internal) communication
ETCD_LISTEN_PEER_URLS="http://192.167.14.223:2380"
# URLs this node listens on for client requests
ETCD_LISTEN_CLIENT_URLS="http://192.167.14.223:2379,http://127.0.0.1:2379"
# Peer URL advertised to the other cluster members
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.167.14.223:2380"
# Client URLs advertised to external clients
ETCD_ADVERTISE_CLIENT_URLS="http://192.167.14.223:2379,http://127.0.0.1:2379"
# Initial cluster membership
ETCD_INITIAL_CLUSTER="etcd0=http://192.167.14.223:2380,etcd1=http://192.167.14.222:2380,etcd2=http://192.167.14.241:2380"
# Cluster token (must be identical on all nodes)
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
# Initial cluster state; "new" creates a brand-new cluster
ETCD_INITIAL_CLUSTER_STATE="new"
EOF


Start: systemctl start etcd

(2) Node 192.167.14.222

cat <<EOF | sudo tee /etc/etcd.conf
# Node name
ETCD_NAME=etcd1
# Data directory
ETCD_DATA_DIR=/var/lib/etcd/data.etcd
# URL used for peer (cluster-internal) communication
ETCD_LISTEN_PEER_URLS="http://192.167.14.222:2380"
# URLs this node listens on for client requests
ETCD_LISTEN_CLIENT_URLS="http://192.167.14.222:2379,http://127.0.0.1:2379"
# Peer URL advertised to the other cluster members
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.167.14.222:2380"
# Client URLs advertised to external clients
ETCD_ADVERTISE_CLIENT_URLS="http://192.167.14.222:2379,http://127.0.0.1:2379"
# Initial cluster membership
ETCD_INITIAL_CLUSTER="etcd0=http://192.167.14.223:2380,etcd1=http://192.167.14.222:2380,etcd2=http://192.167.14.241:2380"
# Cluster token (must be identical on all nodes)
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
# Initial cluster state; "new" creates a brand-new cluster
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

Start: systemctl start etcd

(3) Node 192.167.14.241

cat <<EOF | sudo tee /etc/etcd.conf
# Node name
ETCD_NAME=etcd2
# Data directory
ETCD_DATA_DIR=/var/lib/etcd/data.etcd
# URL used for peer (cluster-internal) communication
ETCD_LISTEN_PEER_URLS="http://192.167.14.241:2380"
# URLs this node listens on for client requests
ETCD_LISTEN_CLIENT_URLS="http://192.167.14.241:2379,http://127.0.0.1:2379"
# Peer URL advertised to the other cluster members
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.167.14.241:2380"
# Client URLs advertised to external clients
ETCD_ADVERTISE_CLIENT_URLS="http://192.167.14.241:2379,http://127.0.0.1:2379"
# Initial cluster membership
ETCD_INITIAL_CLUSTER="etcd0=http://192.167.14.223:2380,etcd1=http://192.167.14.222:2380,etcd2=http://192.167.14.241:2380"
# Cluster token (must be identical on all nodes)
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
# Initial cluster state; "new" creates a brand-new cluster
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

Start: systemctl start etcd

8. List the cluster members

etcdctl member list
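
To check that all three members are reachable, the endpoints can also be queried explicitly (addresses taken from the node configs above):

etcdctl endpoint health --endpoints=http://192.167.14.223:2379,http://192.167.14.222:2379,http://192.167.14.241:2379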

9. Test

etcdctl put mykey "this is test"

etcdctl get mykey

7. APISIX Gateway Installation

1. Upload apisix-9.0.0.tar to the /home/de directory

2. Extract: tar -xf apisix-9.0.0.tar -C /home/apisix/apisix-3.2.1

3. Enter the extracted directory: cd /home/apisix/apisix-3.2.1

4. Install the dependencies: make deps

5. Install: make install

6. Start: apisix start (point APISIX at the etcd cluster first; see Gateway Configuration below)

8. Dashboard Installation

1. Upload apisix-dashboard-3.0.1-0.el7.x86_64.rpm to the /home/dean directory
2. Install the RPM: rpm -ivh apisix-dashboard-3.0.1-0.el7.x86_64.rpm
3.manager-api -p /usr/local/apisix/dashboard/

4. Start: systemctl start apisix-dashboard

5. Edit the configuration: /usr/local/apisix/dashboard/conf/conf.yaml


Configure remote access:
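
A sketch of the relevant part of conf.yaml (the exact layout may differ slightly between dashboard versions); listening on 0.0.0.0 exposes the UI to remote hosts, and the etcd endpoints should point at the cluster built above:

conf:
  listen:
    host: 0.0.0.0     # listen on all interfaces instead of 127.0.0.1
    port: 9000
  etcd:
    endpoints:
      - 192.167.14.223:2379
      - 192.167.14.222:2379
      - 192.167.14.241:2379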

6. Restart: systemctl restart apisix-dashboard

7. Verify the installation: open http://<server-ip>:9000 in a browser


8. Open the firewall ports
firewall-cmd --zone=public --add-port=9080/tcp --permanent
firewall-cmd --zone=public --add-port=9000/tcp --permanent
firewall-cmd --reload


9. PCRE Package Installation

(1) Upload pcre-devel-8.32-17.el7.x86_64.rpm to the /home/dean directory

(2) Install: rpm -ivh pcre-devel-8.32-17.el7.x86_64.rpm

10. Gateway Configuration
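
A minimal sketch of the etcd block in APISIX's conf/config.yaml so the gateway uses the etcd cluster built above (APISIX 3.x deployment layout assumed; adjust addresses to your environment):

deployment:
  role: traditional
  role_traditional:
    config_provider: etcd
  etcd:
    host:
      - "http://192.167.14.223:2379"
      - "http://192.167.14.222:2379"
      - "http://192.167.14.241:2379"
    prefix: /apisix
    timeout: 30

After changing config.yaml, restart the gateway with apisix restart.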

11. ClickHouse Cluster Installation

Three-node cluster:
192.167.14.223
192.167.14.222
192.167.14.241

1. Install ClickHouse on all three nodes

(1) Upload the packages


(2) Install the RPMs:
rpm -ivh *.rpm

(3) Check that the packages are installed:
rpm -qa | grep clickhouse

(4) Set permissions on the config file:
chmod 755 /etc/clickhouse-server/config.xml

(5) Edit the config file:
vim /etc/clickhouse-server/config.xml

Uncomment the <listen_host>::</listen_host> tag so the server listens on all interfaces, then start clickhouse-server; it should start successfully.
(6) Start and check the status:
Start: systemctl start clickhouse-server.service
Status: systemctl status clickhouse-server.service
(7) Check that the service is listening:

ss -untlp|grep 8123


(8) Connect with the client:
clickhouse-client --port=9000

use default

Create a table:
CREATE TABLE default.test(
  `host` String comment 'column1_comment',
  `client_ip` String,
  `url` String,
  `timestamp` String,
   PRIMARY KEY(`timestamp`)
) ENGINE = MergeTree();
Check that the table was created:
show tables

2. Cluster Setup

1. Upload the package and set permissions:
apache-zookeeper-3.9.1.tar.gz
chmod 755 apache-zookeeper-3.9.1.tar.gz

2. Extract:
tar zxvf apache-zookeeper-3.9.1.tar.gz

3. Create the target directory:
cd /usr/local
mkdir zookeeper


4. Move the files into /usr/local/zookeeper:

cd apache-zookeeper-3.9.1
mv * /usr/local/zookeeper

5. Create the data directory and copy the sample config:
cd /usr/local/zookeeper
mkdir data
cd conf
cp zoo_sample.cfg zoo.cfg


6. Edit the ZooKeeper configuration zoo.cfg:
vim zoo.cfg

dataDir=/usr/local/zookeeper/data
server.1=192.167.14.223:2888:3888
server.2=192.167.14.222:2888:3888
server.3=192.167.14.241:2888:3888

7. Create the myid file

cd /usr/local/zookeeper/data
touch myid
echo "3">>myid


The value must match the digit after server. for this node in zoo.cfg; see the per-node sketch below.
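
A per-node sketch (node-to-ID mapping assumed from the zoo.cfg above):

echo "1" > /usr/local/zookeeper/data/myid   # on 192.167.14.223
echo "2" > /usr/local/zookeeper/data/myid   # on 192.167.14.222
echo "3" > /usr/local/zookeeper/data/myid   # on 192.167.14.241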


8. Start the ZooKeeper cluster (run on each node)


/usr/local/zookeeper/bin/zkServer.sh start
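
To confirm each node's role (leader or follower) after startup:

/usr/local/zookeeper/bin/zkServer.sh status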


9. Configure the ClickHouse cluster in metrika.xml


vim /etc/clickhouse-server/config.d/metrika.xml


<?xml version="1.0"?>
<clickhouse>
<clickhouse_remote_servers>
<!-- Cluster name; ClickHouse supports multiple clusters -->
    <clickhouse_cluster>
    <!-- Shard definitions: 3 shards here, each with a single replica (itself) -->
        <shard>
             <internal_replication>true</internal_replication>
            <replica>
                <host>192.167.14.223</host>
                <port>9800</port>
                <user>default</user>
                <password>123456</password>
            </replica>
        </shard>
        <shard>
            <internal_replication>true</internal_replication>
            <replica>
                <host>192.167.14.222</host>
                <port>9800</port>
                <user>default</user>
                <password>123456</password>
            </replica>
        </shard>
        <shard>
            <internal_replication>true</internal_replication>
            <replica>
                <host>192.167.14.241</host>
                <port>9800</port>
                <user>default</user>
                <password>123456</password>
            </replica>
        </shard>
    </clickhouse_cluster>
</clickhouse_remote_servers>
<!-- ZooKeeper cluster connection info -->
<zookeeper-servers>
  <node index="1">
    <host>192.167.14.223</host>
    <port>2181</port>
  </node>

  <node index="2">
    <host>192.167.14.222</host>
    <port>2181</port>
  </node>
  <node index="3">
    <host>192.167.14.241</host>
    <port>2181</port>
  </node>
</zookeeper-servers>
<!-- Macro definitions, used later -->
<macros>
    <replica>ck3</replica>
</macros>
</clickhouse>

The macros value corresponds to the host and must be different on every node (e.g. ck1, ck2, ck3).


10. Edit the ClickHouse config file on all three machines

vim /etc/clickhouse-server/config.xml


Find the <remote_servers> tag, delete that block, and add the four lines below.

If you do not delete <remote_servers>, place the four lines before the original tag.


<remote_servers incl="clickhouse_remote_servers" />
<zookeeper incl="zookeeper-servers" optional="true" />
<macros incl="macros" optional="true" />
<include_from>/etc/clickhouse-server/config.d/metrika.xml</include_from>
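
After editing the configuration on each node, restart the server so the cluster settings take effect:

systemctl restart clickhouse-server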


11. Connect to ClickHouse with the client and check the cluster
clickhouse-client --port=9000

select * from system.clusters
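
As an optional end-to-end check, a distributed table over the local default.test table can be created; this is a sketch that assumes default.test exists on every node and uses the cluster name from the metrika.xml above:

CREATE TABLE default.test_all ON CLUSTER clickhouse_cluster
(
  `host` String,
  `client_ip` String,
  `url` String,
  `timestamp` String
)
ENGINE = Distributed(clickhouse_cluster, default, test, rand());

SELECT count() FROM default.test_all;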

12. OpenSSL Installation

(1) Upload openssl-1.1.1i.tar.gz to the /home/dean directory and extract:

tar -zxvf openssl-1.1.1i.tar.gz

(2) Build and install openssl-1.1.1i:

cd openssl-1.1.1i

./config --prefix=/usr/local/openssl

make -j4

make install

(3) Configure environment variables: vi /etc/profile

export PATH=$PATH:/usr/local/openssl/bin

source /etc/profile

sudo ldconfig
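
If programs linked against the new OpenSSL cannot find its shared libraries, registering the library directory before ldconfig usually resolves it, and openssl version confirms which build is active; a sketch assuming the prefix used above:

echo "/usr/local/openssl/lib" > /etc/ld.so.conf.d/openssl-1.1.1i.conf
ldconfig
openssl version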


If the two versions conflict, remove the old system package:
sudo rpm -qa | grep openssl
sudo yum remove openssl-1.0.2k-19.el7.x86_64

13. JDK Installation
(1) Upload jdk-8u191-linux-x64.tar.gz to the /home/dean directory
(2) Extract: tar -zxvf jdk-8u191-linux-x64.tar.gz
(3) Configure environment variables
vi /etc/profile
export JAVA_HOME=/home/dean/jdk1.8.0_191
export CLASSPATH=$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export PATH=$PATH:$JAVA_HOME/bin:$JAVA_HOME/jre/bin

Apply the changes immediately:
source /etc/profile
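
Verify that the JDK is picked up from the new environment:

java -version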

14. Unzip Installation
(1) Upload unzip-6.0-21.el7.x86_64.rpm to the /home/dean directory

(2) Install: rpm -ivh unzip-6.0-21.el7.x86_64.rpm


15. DNS Configuration

Edit /etc/resolv.conf and add:
nameserver 8.8.8.8

16. keepalived VIP Setup

192.167.14.223
192.167.14.222

Download page: https://www.keepalived.org/download.html


(1) Upload and extract:
tar xvf keepalived-2.2.8.tar.gz

(2) Build and install:

./configure --prefix=/usr/local/keepalived --with-ssl=/usr/local/openssl
make && make install


After installation the following files are generated under the prefix:
/usr/local/keepalived/etc/keepalived/keepalived.conf
/usr/local/keepalived/etc/sysconfig/keepalived
/usr/local/keepalived/sbin/keepalived

(3) Set it up as a service


# Variables file sourced by the keepalived init script; the default path is /etc/sysconfig/. Alternatively, skip the copy and edit the path in the init script (file comes from the install prefix).
cp /usr/local/keepalived/etc/sysconfig/keepalived  /etc/sysconfig/keepalived 
 
# Put the keepalived binary on the PATH (from the install prefix)
cp /usr/local/keepalived/sbin/keepalived /usr/sbin/keepalived
 
# keepalived init script (from the source tree); placing it in /etc/init.d/ lets you use the service command
cp /home/apisix/keepalived-2.2.8/keepalived/etc/init.d/keepalived  /etc/init.d/keepalived
 
# Put the configuration file in the default location
mkdir /etc/keepalived
cp /usr/local/keepalived/etc/keepalived/keepalived.conf.sample /etc/keepalived/keepalived.conf
 
Register as a system service: chkconfig --add keepalived
Enable at boot: chkconfig keepalived on
Start, stop, restart:
systemctl start keepalived
systemctl stop keepalived
systemctl restart keepalived
 
(4) Edit the configuration file

Edit the default configuration file /etc/keepalived/keepalived.conf:

vim /etc/keepalived/keepalived.conf

Check the NIC name with ifconfig; in this environment it is eno3.

Node A
! Configuration File for keepalived
 
global_defs {
    notification_email {
     root@localhost
   }
   notification_email_from root@localhost
   smtp_server localhost
   smtp_connect_timeout 30
   router_id NodeA
}
vrrp_script chk_server {
    script "/usr/local/bin/check_server.sh"
    interval 2
    weight 2
}

vrrp_instance VI_1 {
    track_script {
        chk_server
    }
    state MASTER   # node A is the MASTER; set BACKUP on the standby node
    interface eno3   # network interface that carries the virtual IP
    virtual_router_id 51   # VRRP group ID; must be identical on both nodes
    priority 100   # master priority (1-254); the backup node must use a lower value
    advert_int 1   # advertisement interval; must be identical on both nodes
    authentication {   # authentication settings; must be identical on both nodes
        auth_type PASS
        auth_pass 6666
    }
    virtual_ipaddress {   # the virtual IP; must be identical on both nodes
        192.167.14.100
    }
}


Node B


! Configuration File for keepalived
 
global_defs {
    notification_email {
     root@localhost
   }
   notification_email_from root@localhost
   smtp_server localhost
   smtp_connect_timeout 30
   router_id NodeB
}
vrrp_script chk_server {
    script "/usr/local/bin/check_server.sh"
    interval 2
    weight 2
}

vrrp_instance VI_1 {
    track_script {
        chk_server
    }
    state BACKUP   # this is the standby node; the master uses MASTER
    interface eno3   # network interface that carries the virtual IP
    virtual_router_id 51   # VRRP group ID; must be identical on both nodes
    priority 99   # lower than the master's priority
    advert_int 1   # advertisement interval; must be identical on both nodes
    authentication {   # authentication settings; must be identical on both nodes
        auth_type PASS
        auth_pass 6666
    }
    virtual_ipaddress {   # the virtual IP; must be identical on both nodes
        192.167.14.100
    }
}

Adapt this script to your environment; this version assumes the keepalived node and the business service run on the same host.

check_server.sh


#!/bin/bash

# Check whether the local business service port 8088 is reachable
nc -z localhost 8088

if [ $? -ne 0 ]; then
   echo 'server not running, stop keepalived!'
   systemctl stop keepalived
fi
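
The script must sit at the path referenced by vrrp_script on both nodes and be executable; a minimal sketch (source location assumed):

cp check_server.sh /usr/local/bin/check_server.sh
chmod +x /usr/local/bin/check_server.sh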

Node B differs from Node A in only three places:

router_id  NodeB
state   BACKUP
priority   99


(5) Start

systemctl start keepalived

systemctl status keepalived

systemctl stop keepalived
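
To confirm which node currently holds the VIP (interface and address from the configuration above):

ip addr show eno3 | grep 192.167.14.100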
