Hadoop + Spark and RabbitMQ High-Availability Cluster Deployment

Contents

I. Node and Component Architecture

II. Server and Network Requirements

III. Basic Environment Configuration

IV. Installation Packages

V. ZooKeeper Deployment

VI. Hadoop Deployment

VII. Spark Deployment

VIII. RabbitMQ Deployment

I. Node and Component Architecture

        1. Node and component distribution:

| Node   | Zookeeper | JournalNode | NameNode | ZKFC | ResourceManager | NodeManager | DataNode | Spark | RabbitMQ |
| ------ | --------- | ----------- | -------- | ---- | --------------- | ----------- | -------- | ----- | -------- |
| Master | 1         | 1           | 1 (A)    | 1    | 1 (B)           | 1           | 1        | 1 (A) | 1        |
| Slave1 | 1         | 1           | 1 (B)    | 1    | 1 (A)           | 1           | 1        | 1 (B) | 1        |
| Slave2 | 1         | 1           |          |      |                 | 1           | 1        | 1     | 1        |

Note: A = Active, B = Standby

        2. Component roles:

        Zookeeper: keeps coordination data consistent; one of its roles here is to track the state of the NameNode, ResourceManager and Spark master, so that when the active node fails, a standby node is switched to active.

        ZKFC: monitors the health of the NameNode. If the active NameNode fails, ZKFC tells ZooKeeper (deletes the znode lock), and ZooKeeper has the standby NameNode take over (acquires the znode lock).

        JournalNode: the active NameNode writes its edit log to the JournalNodes, and the standby NameNode reads the edit log from them, so when the active NameNode fails, the standby can take over quickly.
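Once the Hadoop cluster is up (Section VI), the current active/standby roles can be queried directly; a quick check, using the nn1/nn2 and rm1/rm2 IDs configured later in hdfs-site.xml and yarn-site.xml:

#Which NameNode is active (nn1/nn2 are defined in hdfs-site.xml)
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
#Which ResourceManager is active (rm1/rm2 are defined in yarn-site.xml)
yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2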

II. Server and Network Requirements

(For reference only; apart from the minimum node count, the sudo configuration and unrestricted intranet access between the servers, everything can be adjusted to your own needs.)

| Item              | Requirement |
| ----------------- | ----------- |
| Number of servers | 3 |
| OS                | CentOS Linux 7.6 |
| Spec              | 4 cores / 8 GB memory / 100 GB disk / 10 Mb bandwidth |
| User              | user: app, user group: apps (the app user must be able to run sudo su root without a password) |
| Network           | the servers can reach one another on the intranet |

III. Basic Environment Configuration

        1. Change the hostnames and map them

        1.1 On 10.200.207.1, run as the root user

hostnamectl set-hostname VM-0-1-centos-Master

        1.2 On 10.200.207.2, run as the root user

hostnamectl set-hostname VM-0-2-centos-Slave-01

        1.3 On 10.200.207.3, run as the root user

hostnamectl set-hostname VM-0-3-centos-Slave-02

        1.4 The extra hostname mappings are needed later for starting Spark and FATE 1.9.0. On 10.200.207.1, 10.200.207.2 and 10.200.207.3, run as the root user

vim /etc/hosts
10.200.207.1 VM-0-1-centos-Master fate-cluster
10.200.207.2 VM-0-2-centos-Slave-01 fate-cluster
10.200.207.3 VM-0-3-centos-Slave-02

        2. Disable SELinux. On 10.200.207.1, 10.200.207.2 and 10.200.207.3, run as the root user

sed -i '/^SELINUX/s/=.*/=disabled/' /etc/selinux/config
setenforce 0

        3. Raise the server resource limits. On 10.200.207.1, 10.200.207.2 and 10.200.207.3, run as the root user

vim /etc/security/limits.conf
* soft nofile 65536
* hard nofile 65536
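limits.conf only applies to new sessions; to confirm it took effect, log in again as the app user and check:

ulimit -n    #should print 65536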

        4. Disable the local firewall. On 10.200.207.1, 10.200.207.2 and 10.200.207.3, run as the root user

systemctl disable firewalld.service
systemctl stop firewalld.service
systemctl status firewalld.service

        5. Initialize the servers. On 10.200.207.1, 10.200.207.2 and 10.200.207.3, run as the root user

groupadd -g 6000 apps
useradd -s /bin/bash -G apps -m app
passwd app
mkdir -p /data/projects/common/jdk
chown -R app:apps /data/projects

        6. Configure sudo. On 10.200.207.1, 10.200.207.2 and 10.200.207.3, run as the root user

vim /etc/sudoers.d/app
app ALL=(ALL) ALL
app ALL=(ALL) NOPASSWD: ALL
Defaults !env_reset
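A syntax error in a sudoers drop-in can break sudo entirely, so it is worth validating the file; a quick check:

visudo -c -f /etc/sudoers.d/app    #checks the syntax without applying anything
sudo -l -U app                     #lists the privileges granted to the app user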

        7. Set up passwordless SSH between the servers

        7.1 Switch to the app user and generate an RSA key pair. On 10.200.207.1, 10.200.207.2 and 10.200.207.3, run as the root user

su - app
ssh-keygen -t rsa

        7.2 Append the public key to authorized_keys and send the file to slave1. On 10.200.207.1, run as the app user

cat ~/.ssh/id_rsa.pub >> /home/app/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
scp ~/.ssh/authorized_keys app@10.200.207.2:/home/app/.ssh

        7.3 Append the public key to authorized_keys and send the file to slave2. On 10.200.207.2, run as the app user

cat ~/.ssh/id_rsa.pub >> /home/app/.ssh/authorized_keys
scp ~/.ssh/authorized_keys app@10.200.207.3:/home/app/.ssh

        7.4 Append the public key to authorized_keys; the file now contains the public keys of all three servers. Send it back to the master and slave1. On 10.200.207.3, run as the app user

cat ~/.ssh/id_rsa.pub >> /home/app/.ssh/authorized_keys
scp ~/.ssh/authorized_keys app@10.200.207.1:/home/app/.ssh
scp ~/.ssh/authorized_keys app@10.200.207.2:/home/app/.ssh

        7.5 Verify that passwordless login works. On 10.200.207.1, 10.200.207.2 and 10.200.207.3, run as the app user

ssh app@10.200.207.1
ssh app@10.200.207.2
ssh app@10.200.207.3

IV. Installation Packages

On servers inside a corporate intranet, using wget usually means applying for a forward proxy, and the approval process is cumbersome. It is easier to download the packages on your own machine and upload them to the servers with an intranet transfer tool (a plain scp sketch is given after the list below).

        1. Package download list (for reference only, but Hadoop must be 3.3 or later, otherwise FATE cannot load libhdfs.so, and ZooKeeper must be 3.6 or later, otherwise YARN cannot connect to ZooKeeper; change the mirrors or versions as needed). Place the packages on all three servers, as the app user

        1) https://webank-ai-1251170195.cos.ap-guangzhou.myqcloud.com/resources/jdk-8u192.tar.gz

        2) https://archive.apache.org/dist/hadoop/common/hadoop-3.3.1/hadoop-3.3.1.tar.gz

        3) https://downloads.lightbend.com/scala/2.12.10/scala-2.12.10.tgz

        4) https://archive.apache.org/dist/spark/spark-3.1.2/spark-3.1.2-bin-hadoop3.2.tgz

        5) https://archive.apache.org/dist/zookeeper/zookeeper-3.6.3/apache-zookeeper-3.6.3-bin.tar.gz

        6) https://webank-ai-1251170195.cos.ap-guangzhou.myqcloud.com/resources/Miniconda3-py38_4.12.0-Linux-x86_64.sh

        7) https://github.com/rabbitmq/rabbitmq-server/releases/download/rabbitmq_v3_6_15/rabbitmq-server-generic-unix-3.6.15.tar.xz

        8) http://www.erlang.org/download/otp_src_19.3.tar.gz

        9) https://mirrors.aliyun.com/gnu/ncurses/ncurses-6.0.tar.gz
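If no dedicated intranet transfer tool is at hand, plain scp from your own machine works too. A minimal sketch, assuming the packages were downloaded to ~/Downloads and staged under /data/projects/install on each server (any app-writable directory will do); repeat for 10.200.207.2 and 10.200.207.3:

ssh app@10.200.207.1 "mkdir -p /data/projects/install"
scp ~/Downloads/jdk-8u192.tar.gz ~/Downloads/hadoop-3.3.1.tar.gz \
    ~/Downloads/scala-2.12.10.tgz ~/Downloads/spark-3.1.2-bin-hadoop3.2.tgz \
    ~/Downloads/apache-zookeeper-3.6.3-bin.tar.gz \
    app@10.200.207.1:/data/projects/install/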

        2. Extract the packages and rename the directories. Run on all three servers as the app user

tar xvf hadoop-3.3.1.tar.gz -C /data/projects/common
tar xvf scala-2.12.10.tgz -C /data/projects/common
tar xvf spark-3.1.2-bin-hadoop3.2.tgz -C /data/projects/common
tar xvf apache-zookeeper-3.6.3-bin.tar.gz -C /data/projects/common
tar xvf jdk-8u192.tar.gz -C /data/projects/common/jdk
cd /data/projects/common
mv hadoop-3.3.1 hadoop
mv scala-2.12.10 scala
mv spark-3.1.2-bin-hadoop3.2 spark
mv apache-zookeeper-3.6.3-bin zookeeper

        3. Set the environment variables (system-wide via /etc/profile, or per user via ~/.bashrc). Run on all three servers as the app user

#sudo vi /etc/profile or vi ~/.bashrc
sudo vi /etc/profile
export JAVA_HOME=/data/projects/common/jdk/jdk-8u192
export PATH=$JAVA_HOME/bin:$PATH
export HADOOP_HOME=/data/projects/common/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export SPARK_HOME=/data/projects/common/spark
export PATH=$SPARK_HOME/bin:$PATH
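The new variables only take effect in a new shell unless the profile is re-read; a quick check:

source /etc/profile
java -version      #should report 1.8.0_192
hadoop version     #should report Hadoop 3.3.1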

V. ZooKeeper Deployment

        1. Configure the parameters. Run on all three servers as the app user

cd /data/projects/common/zookeeper/conf
cat >> zoo.cfg << EOF
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/projects/common/zookeeper/data/zookeeper
dataLogDir=/data/projects/common/zookeeper/logs
clientPort=2181
maxClientCnxns=1000
server.1=10.200.207.1:2888:3888
server.2=10.200.207.2:2888:3888
server.3=10.200.207.3:2888:3888
EOF

        2. Under dataDir, create the myid file whose content is the number after server. in zoo.cfg for that host. Run on the matching server as the app user

#On 10.200.207.1
mkdir -p /data/projects/common/zookeeper/data/zookeeper /data/projects/common/zookeeper/logs
echo 1 > /data/projects/common/zookeeper/data/zookeeper/myid
#On 10.200.207.2
mkdir -p /data/projects/common/zookeeper/data/zookeeper /data/projects/common/zookeeper/logs
echo 2 > /data/projects/common/zookeeper/data/zookeeper/myid
#On 10.200.207.3
mkdir -p /data/projects/common/zookeeper/data/zookeeper /data/projects/common/zookeeper/logs
echo 3 > /data/projects/common/zookeeper/data/zookeeper/myid

        3. Start the service. Run on all three servers as the app user

nohup /data/projects/common/zookeeper/bin/zkServer.sh start &

        4. Verify ZooKeeper

/data/projects/common/zookeeper/bin/zkServer.sh status
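With all three servers started, zkServer.sh status should report Mode: leader on one node and Mode: follower on the other two. Connectivity can also be checked with the bundled CLI; a quick check:

/data/projects/common/zookeeper/bin/zkCli.sh -server 10.200.207.1:2181
#inside the CLI:
#  ls /
#  quit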

VI. Hadoop Deployment

        1. Configure the JDK path. Run on all three servers as the app user

cd /data/projects/common/hadoop/etc/hadoop
vi hadoop-env.sh
export JAVA_HOME=/data/projects/common/jdk/jdk-8u192
vi yarn-env.sh
export JAVA_HOME=/data/projects/common/jdk/jdk-8u192

        2. Configure core-site.xml, hdfs-site.xml, mapred-site.xml and yarn-site.xml

        2.1 core-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/data/projects/common/hadoop/tmp</value>
    </property>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://fate-cluster</value>
    </property>
    <property>
        <name>io.compression.codecs</name>
        <value>org.apache.hadoop.io.compress.GzipCodec,
            org.apache.hadoop.io.compress.DefaultCodec,
            org.apache.hadoop.io.compress.BZip2Codec,
            org.apache.hadoop.io.compress.SnappyCodec
        </value>
    </property>
    <property>
        <name>hadoop.proxyuser.root.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.root.groups</name>
        <value>*</value>
    </property>
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>10.200.207.1:2181,10.200.207.2:2181,10.200.207.3:2181</value>
    </property>
    <!-- Authentication for Hadoop HTTP web-consoles -->
        <property>
                <name>hadoop.http.filter.initializers</name>
                <value>org.apache.hadoop.security.AuthenticationFilterInitializer</value>
        </property>
        <property>
                <name>hadoop.http.authentication.type</name>
                <value>simple</value>
        </property>
        <property>
                <name>hadoop.http.authentication.token.validity</name>
                <value>3600</value>
        </property>
        <property>
                <name>hadoop.http.authentication.signature.secret.file</name>
                <value>/data/projects/common/hadoop/etc/hadoop/hadoop-http-auth-signature-secret</value>
        </property>
        <property>
                <name>hadoop.http.authentication.cookie.domain</name>
                <value></value>
        </property>
        <property>
                <name>hadoop.http.authentication.simple.anonymous.allowed</name>
                <value>true</value>
        </property>
</configuration>
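Note that core-site.xml above points hadoop.http.authentication.signature.secret.file at a file this guide never creates, so create it before starting the daemons. A minimal sketch (the secret value is arbitrary); run on all three servers as the app user:

openssl rand -hex 16 > /data/projects/common/hadoop/etc/hadoop/hadoop-http-auth-signature-secret
chmod 600 /data/projects/common/hadoop/etc/hadoop/hadoop-http-auth-signature-secret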

        2.2 hdfs-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>
    <property>
        <name>dfs.nameservices</name>
        <value>fate-cluster</value>
    </property>
    <property>
        <name>dfs.ha.namenodes.fate-cluster</name>
        <value>nn1,nn2</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.fate-cluster.nn1</name>
        <value>10.200.207.1:9000</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.fate-cluster.nn1</name>
        <value>10.200.207.1:50070</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.fate-cluster.nn2</name>
        <value>10.200.207.2:9000</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.fate-cluster.nn2</name>
        <value>10.200.207.2:50070</value>
    </property>
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://10.200.207.1:8485;10.200.207.2:8485;10.200.207.3:8485/fate-cluster</value>
    </property>
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/data/projects/common/hadoop/data/journaldata</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:///data/projects/common/hadoop/data/dfs/nn/local</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/data/projects/common/hadoop/data/dfs/dn/local</value>
    </property>
    <property>
        <name>dfs.client.failover.proxy.provider.fate-cluster</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>shell(/bin/true)</value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/home/app/.ssh/id_rsa</value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.connect-timeout</name>
        <value>10000</value>
    </property>
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
        <value>NEVER</value>
    </property>
</configuration>

        2.3 mapred-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

        2.4 yarn-site.xml

<?xml version="1.0"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>rmCluster</value>
    </property>
    <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>10.200.207.1:8088</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm1</name>
        <value>10.200.207.1</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm2</name>
        <value>10.200.207.2</value>
    </property>
    <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>10.200.207.1:2181,10.200.207.2:2181,10.200.207.3:2181</value>
    </property>
    <property>
        <name>yarn.resourcemanager.recovery.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.resourcemanager.store.class</name>
        <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.nodemanager.pmem-check-enabled</name>
        <value>false</value>
    </property>

    <property>
        <name>yarn.nodemanager.vmem-check-enabled</name>
        <value>false</value>
    </property>

    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>20480</value>
    </property>
    <property>
        <name>yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage</name>
        <value>97.0</value>
    </property>
</configuration>

        3. Create the directories. The directories referenced in the configuration must be created in advance; likewise, when Hadoop needs to be reformatted, delete them first. Run on all three servers as the app user

cd  /data/projects/common/hadoop
mkdir ./tmp
mkdir -p ./data/dfs/nn/local
mkdir -p ./data/dfs/dn/local
mkdir -p ./data/journaldata

        4. Start the components

        4.1 The JournalNodes must be started first. On 10.200.207.1, 10.200.207.2 and 10.200.207.3, run as the app user

hadoop-daemon.sh start journalnode

        4.2 Format and then start the NameNode; this generates metadata files under the hadoop.tmp.dir path configured in core-site.xml. On 10.200.207.1, run as the app user

hdfs namenode -format
hadoop-daemon.sh start namenode

        4.3 Pull the metadata generated in the previous step over to the standby master. On 10.200.207.2, run as the app user

hdfs namenode -bootstrapStandby

        4.4 Format the HA state in ZooKeeper through ZKFC: it connects to the ZooKeeper ensemble and creates the znode used for HA. On 10.200.207.1, run as the app user

hdfs zkfc -formatZK
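To confirm that the znode was created, a quick look with the ZooKeeper CLI:

/data/projects/common/zookeeper/bin/zkCli.sh -server 10.200.207.1:2181
#inside the CLI, ls /hadoop-ha should now show the fate-cluster nameservice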

        4.5 Start the NameNode on the standby master. On 10.200.207.2, run as the app user

hadoop-daemon.sh start namenode

        4.6 Start ZKFC; the NameNode whose ZKFC starts first becomes active. On 10.200.207.1, run as the app user

hadoop-daemon.sh start zkfc

        4.7 Start ZKFC. On 10.200.207.2, run as the app user

hadoop-daemon.sh start zkfc

        4.8 Start the ResourceManager; the first one started becomes active. It is started on the node opposite the active NameNode so that the load is spread across the servers. On 10.200.207.2, run as the app user

yarn-daemon.sh start resourcemanager

        4.9 Start the ResourceManager. On 10.200.207.1, run as the app user

yarn-daemon.sh start resourcemanager

        4.10 Start the NodeManager. Run on all three servers as the app user

yarn-daemon.sh start nodemanager

        4.11 Start the DataNode. Run on all three servers as the app user

hadoop-daemon.sh start datanode

        4.12 Check whether every component started successfully: first check jps, then check the status pages at http://10.200.207.1:50070 and http://10.200.207.1:8088. A fuller check is sketched below
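A minimal verification sketch, combining jps with the HDFS and YARN reports (the expected process list follows the role layout from Section I):

#On 10.200.207.1 and 10.200.207.2, jps should list: NameNode, DFSZKFailoverController,
#ResourceManager, JournalNode, QuorumPeerMain, NodeManager, DataNode
#On 10.200.207.3, jps should list: JournalNode, QuorumPeerMain, NodeManager, DataNode
jps
#Cluster-wide health
hdfs dfsadmin -report    #all three DataNodes should be reported
yarn node -list          #all three NodeManagers should be RUNNING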

VII. Spark Deployment

        1. Rename the configuration templates. Run on all three servers as the app user

cd /data/projects/common/spark/conf
mv workers.template workers
mv spark-defaults.conf.template spark-defaults.conf
mv spark-env.sh.template spark-env.sh

        2. Add the following to workers. Run on all three servers as the app user

VM-0-1-centos-Master
VM-0-2-centos-Slave-01
VM-0-3-centos-Slave-02

        3. Add the following to spark-defaults.conf. spark.files ships the Hadoop configuration files so that the cluster nameservice, fate-cluster, can be resolved (see the note after the block). Run on all three servers as the app user

spark.master yarn
spark.eventLog.enabled true
spark.eventLog.dir hdfs://fate-cluster/tmp/spark/event
# spark.serializer org.apache.spark.serializer.KryoSerializer
# spark.driver.memory 5g
# spark.executor.extraJavaOptions -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
spark.yarn.jars hdfs://fate-cluster/tmp/spark/jars/*.jar
spark.files file:///data/projects/common/spark/conf/hdfs-site.xml,file:///data/projects/common/spark/conf/core-site.xml
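Note: spark.files above references hdfs-site.xml and core-site.xml under spark/conf, but this guide never places them there. Assuming the intent is simply to reuse the Hadoop configuration, a minimal step, run on all three servers as the app user:

cp /data/projects/common/hadoop/etc/hadoop/core-site.xml \
   /data/projects/common/hadoop/etc/hadoop/hdfs-site.xml \
   /data/projects/common/spark/conf/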

        4. Add the following to spark-env.sh. Run on all three servers as the app user

export JAVA_HOME=/data/projects/common/jdk/jdk-8u192
export SCALA_HOME=/data/projects/common/scala
export HADOOP_HOME=/data/projects/common/hadoop
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export SPARK_HISTORY_OPTS="-Dspark.history.fs.logDirectory=hdfs://fate-cluster/tmp/spark/event"
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib/native"
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:${HADOOP_HOME}/lib/native
export PYSPARK_PYTHON=/data/projects/fate/common/python/venv/bin/python
export PYSPARK_DRIVER_PYTHON=/data/projects/fate/common/python/venv/bin/python

SPARK_MASTER_WEBUI_PORT=8089
export SPARK_DAEMON_JAVA_OPTS="
-Dspark.deploy.recoveryMode=ZOOKEEPER
-Dspark.deploy.zookeeper.url=VM-0-1-centos-Master,VM-0-2-centos-Slave-01,VM-0-3-centos-Slave-02
-Dspark.deploy.zookeeper.dir=/spark"

        5. Set up the Python environment. Run on all three servers as the app user

cd /data/projects/install
sh Miniconda3-py38_4.12.0-Linux-x86_64.sh -b -p /data/projects/fate/common/miniconda3

#Create a virtual environment
/data/projects/fate/common/miniconda3/bin/python3.8 -m venv /data/projects/fate/common/python/venv

        6. Verify that spark-shell and pyspark start correctly. Upload the Spark jars referenced by spark.yarn.jars to HDFS once, then run the shells on each of the three servers as the app user

cd /data/projects/common/spark/jars
hdfs dfs -mkdir -p /tmp/spark/jars
hdfs dfs -mkdir -p /tmp/spark/event
hdfs dfs -put *jar /tmp/spark/jars

/data/projects/common/spark/bin/pyspark --master yarn --deploy-mode client
# or
/data/projects/common/spark/bin/spark-shell --master yarn --deploy-mode client 
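Besides the interactive shells, a quick end-to-end test is to submit the SparkPi example bundled with Spark to YARN; a sketch, assuming the example jar name shipped with Spark 3.1.2 (check the exact filename under examples/jars):

/data/projects/common/spark/bin/spark-submit \
  --master yarn --deploy-mode client \
  --class org.apache.spark.examples.SparkPi \
  /data/projects/common/spark/examples/jars/spark-examples_2.12-3.1.2.jar 100
#The driver output should contain a line like "Pi is roughly 3.14"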

          7. View the web UIs
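The guide does not say which UIs to open. With the configuration above, the YARN ResourceManager UI is at http://10.200.207.1:8088 and the HDFS NameNode UI at http://10.200.207.1:50070. If the ZooKeeper-backed standalone Spark master configured in spark-env.sh is also wanted (web UI port 8089), a minimal sketch for starting it:

#On 10.200.207.1 (primary master): starts a master plus the workers listed in conf/workers
/data/projects/common/spark/sbin/start-all.sh
#On 10.200.207.2 (standby master):
/data/projects/common/spark/sbin/start-master.sh
#Spark master UI: http://10.200.207.1:8089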

VIII. RabbitMQ Deployment

        1. Set up Erlang

        1.1 Extract the package. Run on all three servers as the app user

cd /tmp
tar -zxvf otp_src_19.3.tar.gz  -C /data/projects/common

        1.2 Install the build dependencies. Run on all three servers as the app user

sudo yum install gcc-c++ automake cmake ncurses-devel openssl-devel wxGTK-devel fop java-1.8.0-openjdk-devel unixODBC-devel libssh2-devel perl

tar -zxvf ncurses-6.0.tar.gz
cd ncurses-6.0
./configure --with-shared --without-debug --without-ada --enable-overwrite  
make
sudo make install

        1.3 Set a variable for the current session. Run on all three servers as the app user

cd  /data/projects/common/otp_src_19.3/
export ERL_TOP=`pwd`

        1.4 Build and install. Run on all three servers as the app user

./configure --prefix=/data/projects/common/erlang
make
make install

       1.5 Configure system-wide environment variables. Run on all three servers as the app user

sudo vi /etc/profile
export ERL_HOME=/data/projects/common/erlang
export PATH=$PATH:/data/projects/common/erlang/bin

        1.6 Verify the setup: running erl should open the Erlang shell. Run on all three servers as the app user; a quick check is sketched below
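source /etc/profile
erl
#The banner should read "Erlang/OTP 19"; type halt(). to exit the shell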

        2. Set up RabbitMQ

        2.1 Extract the package. Run on all three servers as the app user

xz -d rabbitmq-server-generic-unix-3.6.15.tar.xz
tar xvf rabbitmq-server-generic-unix-3.6.15.tar  -C /data/projects/common

        2.2 Start a standalone RabbitMQ instance; this generates the Erlang cookie. Run on all three servers as the app user

cd /data/projects/common/rabbitmq_server-3.6.15 && ./sbin/rabbitmq-server -detached

        2.3 Set the cookie permissions to 400 so that the file is read-only. Run on all three servers as the app user

cd /home/app
chmod 400 .erlang.cookie 

        2.4 Using VM-0-1-centos-Master as the reference node, copy its cookie to the other two servers. The RabbitMQ service on those two servers must be stopped first and their .erlang.cookie deleted, otherwise later operations will report authentication errors because the cookie has changed. Run as the app user

#run on 10.200.207.2 and 10.200.207.3
/data/projects/common/rabbitmq_server-3.6.15/sbin/rabbitmqctl stop
cd /home/app
sudo rm .erlang.cookie

#run on 10.200.207.1
scp /home/app/.erlang.cookie app@10.200.207.2:/home/app/.erlang.cookie
scp /home/app/.erlang.cookie app@10.200.207.3:/home/app/.erlang.cookie

        2.5 Add RabbitMQ to the PATH. Run on all three servers as the app user

sudo vi /etc/profile
export RABBITMQ=/data/projects/common/rabbitmq_server-3.6.15
export PATH=$RABBITMQ/sbin:$PATH
source /etc/profile

        2.6 Start the RabbitMQ server again, stop the application, join the cluster at VM-0-1-centos-Master, and restart the application. On 10.200.207.2 and 10.200.207.3, run as the app user

rabbitmq-server -detached
rabbitmqctl stop_app 
rabbitmqctl reset
rabbitmqctl join_cluster rabbit@VM-0-1-centos-Master
rabbitmqctl start_app

        2.7 Check the cluster status and confirm that the nodes are no longer standalone RabbitMQ instances. On any server, run as the app user

rabbitmqctl cluster_status

        2.8 Configure a policy to turn the cluster into mirrored-queue mode. On any server, run as the app user

rabbitmqctl set_policy ha-all "#" '{"ha-mode":"all"}'
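To confirm the policy was applied:

rabbitmqctl list_policies
#ha-all should be listed with pattern "#" and definition {"ha-mode":"all"}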

        2.9 Enable the management (web UI) and federation plugins. Run on all three servers as the app user

 rabbitmq-plugins enable rabbitmq_management
 rabbitmq-plugins enable rabbitmq_federation
 rabbitmq-plugins enable rabbitmq_federation_management

        2.10 Add a user, role and permissions. On any server, run as the app user

rabbitmqctl add_user fate fate
rabbitmqctl set_user_tags fate administrator
rabbitmqctl set_permissions -p / fate ".*" ".*" ".*" 
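To confirm the user and its permissions:

rabbitmqctl list_users                 #fate should be tagged administrator
rabbitmqctl list_permissions -p /      #fate should have ".*" ".*" ".*"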

        2.11 Log in to the management UI to view the data: http://10.200.207.1:15672
