ActiveMQ Cluster Deployment Plan

I. The ZooKeeper + LevelDB Approach

This approach requires ActiveMQ 5.9 or later; our version is 5.14.4, which meets the requirement. It uses ZooKeeper to control master/slave election among the brokers. Each broker keeps its own set of LevelDB store files; with three replicas, a send or consume operation only counts as successful once two of them have completed the update, and when one broker goes down, the broker with the most recent updates is promoted to master.

1. Configure the ZooKeeper cluster first
  1. Unpack ZooKeeper

    tar -xvf zookeeper-3.4.9.tar.gz
    
  2. Edit the configuration

    Rename the sample configuration file:
    mv zoo_sample.cfg  zoo.cfg
    
    

vim zoo.cfg; the contents are as follows (the lines marked "Note" are the key differences):

  # 1 x tickTime is the heartbeat interval between clients and the ZK server;
  # 2 x tickTime is the client session timeout. The default tickTime is 2000 ms.
  # A lower tickTime detects timeouts faster, but costs more network traffic
  # (heartbeat messages) and more CPU (session tracking).
  tickTime=2000
  # The time, expressed as a multiple of tickTime, allowed for a follower to
  # connect and sync to the leader during initialization; beyond
  # initLimit * tickTime the connection fails.
  initLimit=10
  # The maximum interval, in ticks, allowed for leader/follower
  # synchronization; beyond this the follower is disconnected from the leader.
  syncLimit=5
  # The directory where the snapshot is stored.
  # Do not use /tmp for storage; /tmp here is just an example.

  # Note!!!
  # No default value; must be configured. Directory for snapshot files.
  # If dataLogDir is not configured, transaction logs are also stored here.
  dataDir=/home/raptor/runtime/mqtestdata/activemq-1/zkdir/data
  dataLogDir=/home/raptor/runtime/mqtestdata/activemq-1/zkdir/log
  # The TCP port the ZK server process listens on; defaults to 2181. In
  # production, when the ZK instances are deployed on different IPs, this can
  # be the same on every node.
  clientPort=2181

  # The maximum number of client connections to the ZooKeeper server.
  #maxClientCnxns=60
  #
  # Be sure to read the maintenance section of the
  # administrator guide before turning on autopurge.
  #
  # http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
  #
  # The number of snapshots to retain in dataDir
  #autopurge.snapRetainCount=3
  # Purge task interval in hours
  # Set to "0" to disable auto purge feature
  #autopurge.purgeInterval=1

  # Note!!
  # The three servers of this cluster. The following is a pseudo-cluster setup
  # (if you configure three entries, all three ZooKeeper instances must be
  # started); in production the IPs would differ.
  server.1=192.168.199.23:2888:3888
  server.2=192.168.199.23:2888:3889
  server.3=192.168.199.23:2888:3890
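
In the pseudo-cluster, each of the three ZooKeeper instances needs its own copy of zoo.cfg. A sketch of the lines that must differ per instance, assuming the activemq-1/2/3 directory naming used in dataDir above (the values are illustrative; on one host the clientPort must also be unique per instance):

    # zoo.cfg of the second instance
    dataDir=/home/raptor/runtime/mqtestdata/activemq-2/zkdir/data
    dataLogDir=/home/raptor/runtime/mqtestdata/activemq-2/zkdir/log
    clientPort=2182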

3. Create zkdir at the location configured in the file above

    mkdir zkdir

4. Under zkdir, create log and data

    mkdir log
    mkdir data

5. Under zkdir/data, create a myid file and write into it the server number matching this instance's IP, e.g. 1. It holds the ID of the corresponding ZK instance and has nothing to do with MQ.

    echo 1 > myid
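
For the pseudo-cluster, steps 3-5 have to be repeated once per instance. A minimal bash sketch, assuming the three instance directories follow the activemq-1/2/3 naming used in dataDir earlier (the paths are illustrative):

    # one zkdir per ZooKeeper instance, server IDs 1..3
    for i in 1 2 3; do
        mkdir -p /home/raptor/runtime/mqtestdata/activemq-$i/zkdir/{data,log}
        echo $i > /home/raptor/runtime/mqtestdata/activemq-$i/zkdir/data/myid
    done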

6. Start ZooKeeper

    ./zkServer.sh start
    # check that ZK started
    tail -f zookeeper.out
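
Once all three instances are up, zkServer.sh status (run from each instance's bin directory) reports that instance's role; expect one leader and two followers:

    ./zkServer.sh status
    # Mode: follower    (one of the three will report "Mode: leader")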

Configuration caveats

1. zoo.cfg must not have stray whitespace at the start or end of the file

2. myid must not contain leading or trailing newlines or spaces

3. The ZooKeeper process name is QuorumPeerMain (see the check below)

4. Be careful writing zoo.cfg: the same variable must not be defined twice
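
A quick way to confirm all instances are running, assuming the JDK's jps tool is on the PATH:

    jps | grep QuorumPeerMain
    # expect one line per running ZooKeeper instance, e.g.
    # 4327 QuorumPeerMain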

2. Configure the ActiveMQ cluster

1. The LevelDB persistenceAdapter configuration in conf/activemq.xml (comment out all of the original JDBC persistence settings):

<persistenceAdapter>
    <replicatedLevelDB
      directory="${activemq.data}/leveldb"
      replicas="3"
      bind="tcp://0.0.0.0:0"
      zkAddress="ip1:2181,ip2:2181,ip3:2181"
      hostname="192.168.199.23"
      sync="local_disk"
      zkPath="/activemq/leveldb-stores"
      />
</persistenceAdapter>

Notes:

  • replicas: the number of nodes in the cluster; three are configured here, so 3. (With replicas=3 the quorum size is (3/2)+1 = 2. The master stores and updates, then waits for (2-1)=1 slave to finish storing and updating before reporting success. As for why 2-1: as anyone familiar with ZooKeeper knows, one node acts as an observer. When a new master is elected, at least a quorum of nodes must be online so that the node with the latest state can be found and become the new master. Hence running at least 3 replica nodes is recommended, so the service survives a single node failure.)

  • bind: once this node becomes master, it binds this IP and port by default (bind specifies the endpoint through which this node, as master, replicates messages to the other slave nodes. Note that the address and port configured here must not also appear in the transportConnectors section, or the node will fail on startup). tcp://0.0.0.0:0 picks a random port.

  • zkAddress: the service IPs and ports of the three ZooKeeper instances, comma-separated.

  • zkPath: the ZooKeeper node under which the MQ master/slave group registers; the default is "/default". All MQ instances joining the same master/slave group must use exactly the same zkPath.

  • hostname: an address that all three ZooKeeper instances can reach, i.e. the current machine's IP. Since I am running on a single machine, I wrote 192.168.199.23; if you deploy across three machines on an internal network, write each machine's own internal IP here.

2. Attributes on the broker tag in activemq.xml

    <broker xmlns="http://activemq.apache.org/schema/core"
            brokerName="activemq-1"
            useJmx="true"
            advisorySupport="true"
            dataDirectory="${activemq.data}">

Note:

brokerName: the brokers in one master/slave group must all be given the same name!

useJmx: true, exposes JMX.

advisorySupport: previously false; it must be true for the static-connector cluster approach to work.

3. Storage-space limits in activemq.xml

    <systemUsage>
        <systemUsage sendFailIfNoSpace="true">
            <memoryUsage>
                <memoryUsage percentOfJvmHeap="70" />
            </memoryUsage>
            <storeUsage>
                <storeUsage limit="100 gb"/>
            </storeUsage>
            <tempUsage>
                <tempUsage limit="50 gb"/>
            </tempUsage>
        </systemUsage>
    </systemUsage>

Note:

This section caps the amount of disk space the persistent store may use:

    <storeUsage>
        <storeUsage limit="100 gb"/>
    </storeUsage>

4. transportConnectors in activemq.xml

   <transportConnectors>
       <transportConnector name="openwire" uri="nio://0.0.0.0:61636?maximumConnections=1000&amp;wireFormat.maxFrameSize=524288000"/>
   </transportConnectors>

Notes:

In the uri, 61636 is the broker's service port, which clients use to connect to MQ. maximumConnections is the maximum number of connections; wireFormat.maxFrameSize is the maximum size of a single frame.

5. Other ActiveMQ configuration

The port in jetty.xml, which is mainly the login port of the AMQ web console.

The JMX port in the env file must be changed accordingly for each instance; it must not be duplicated.
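
A minimal sketch of both settings, assuming the stock 5.14.x conf/jetty.xml and bin/env layout (the port values are illustrative and must differ for each broker on the same host):

    <!-- conf/jetty.xml: web console port -->
    <bean id="jettyPort" class="org.apache.activemq.web.WebConsolePort" init-method="start">
        <property name="host" value="0.0.0.0"/>
        <property name="port" value="8161"/>    <!-- e.g. 8162, 8163 for the other brokers -->
    </bean>

    # bin/env: JMX remote port, unique per broker
    ACTIVEMQ_SUNJMX_START="-Dcom.sun.management.jmxremote.port=11099 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false"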

Then configure two more standby brokers the same way; at that point one master/slave/slave group is complete.

6. Static network-connector configuration

The master/slave/slave group guarantees high availability; static network connectors additionally let the brokers talk to each other.

In activemq.xml, configure networkConnectors; it must be placed before transportConnectors.


<networkConnectors>
    <networkConnector name="network1to4" uri="static:(nio://192.168.199.23:61639)" duplex="false" conduitSubscriptions="false" prefetchSize="1"/>

    <networkConnector name="network1to5" uri="static:(nio://192.168.199.23:61640)" duplex="false" conduitSubscriptions="false" prefetchSize="1"/>

    <networkConnector name="network1to6" uri="static:(nio://192.168.199.23:61641)" duplex="false" conduitSubscriptions="false" prefetchSize="1"/>
</networkConnectors>

The above is the configuration inside broker1 after setting up two master/slave/slave groups, six brokers in total; all six brokers must be configured the same way, using this as the template.

Notes:

duplex: whether the bridge is duplex; the default is false. We use false to avoid confusion once many brokers are configured.

conduitSubscriptions: default true; controls whether all consumers connected through a broker are treated as a single consumer. We choose false because we use selectors, which true would interfere with, and true also works against spreading consumption evenly.

prefetchSize: the prefetch value, default 1000. We choose 1 because our existing requirement was 1; 1000 makes little sense here and would mean that whenever fewer than 1000 messages are queued, one consumer takes them all while the others sit idle.

Below is the official description of the network connector properties; those we leave at their defaults are not configured explicitly:

| property | default | description |
| --- | --- | --- |
| name | bridge | name of the network - for more than one network connector between the same two brokers - use different names |
| dynamicOnly | false | if true, only activate a networked durable subscription when a corresponding durable subscription reactivates; by default they are activated on startup |
| decreaseNetworkConsumerPriority | false | if true, starting at priority -5, decrease the priority for dispatching to a network Queue consumer the further away it is (in network hops) from the producer. When false, all network consumers use the same default priority (0) as local consumers |
| networkTTL | 1 | the number of brokers in the network that messages and subscriptions can pass through (sets both message and consumer TTL) |
| messageTTL | 1 | (version 5.9) the number of brokers in the network that messages can pass through |
| consumerTTL | 1 | (version 5.9) the number of brokers in the network that subscriptions can pass through (keep to 1 in a mesh) |
| conduitSubscriptions | true | multiple consumers subscribing to the same destination are treated as one consumer by the network |
| excludedDestinations | empty | destinations matching this list won't be forwarded across the network (this only applies to dynamicallyIncludedDestinations) |
| dynamicallyIncludedDestinations | empty | destinations that match this list will be forwarded across the network; n.b. an empty list means all destinations not in the excluded list will be forwarded |
| useVirtualDestSubs | false | if true, the network connection will listen to advisory messages for virtual destination consumers |
| staticallyIncludedDestinations | empty | destinations that match will always be passed across the network - even if no consumers have ever registered an interest |
| duplex | false | if true, a network connection will be used to both produce AND consume messages. This is useful for hub-and-spoke scenarios when the hub is behind a firewall etc. |
| prefetchSize | 1000 | sets the prefetch size on the network connector's consumer. It must be > 0 because network consumers do not poll for messages |
| suppressDuplicateQueueSubscriptions | false | (from 5.3) if true, duplicate subscriptions in the network that arise from network intermediaries will be suppressed. For example, given brokers A, B and C, networked via multicast discovery: a consumer on A gives rise to a networked consumer on B and C. In addition, C networks to B (based on the network consumer from A) and B networks to C. When true, the network bridges between C and B (being duplicates of their existing network subscriptions to A) are suppressed. Reducing the routing choices in this way provides determinism when producers or consumers migrate across the network, as the potential for dead routes (stuck messages) is eliminated. networkTTL needs to match or exceed the broker count for this intervention to be required |
| bridgeTempDestinations | true | whether to broadcast advisory messages for temp destinations created in the network of brokers. Temp destinations are typically created for request-reply messages. Broadcasting information about temp destinations is on by default so that the consumer of a request-reply message can be connected to another broker in the network and still send back the reply on the temporary destination specified in the JMSReplyTo header. In an application where most/all messages use the request-reply pattern, this generates additional traffic on the broker network, as every message typically sets a unique JMSReplyTo address (which causes a new temp destination to be created and broadcast via an advisory message). Disabling this feature reduces that network traffic, but then the producer and consumer of a request-reply message must connect to the same broker; remote consumers (i.e. connected via another broker in your network) won't be able to send the reply message and instead raise a "temp destination does not exist" exception |
| alwaysSyncSend | false | (version 5.6) when true, non-persistent messages are sent to the remote broker using request/reply in place of a oneway. This setting treats both persistent and non-persistent messages the same |
| staticBridge | false | (version 5.6) if set to true, the broker will not dynamically respond to new consumers; it will only use staticallyIncludedDestinations to create demand subscriptions |
| userName | | the username to authenticate against the remote broker |
| password | | the password for the username to authenticate against the remote broker |

7. Replay configuration

With static network connectors you must configure replay; otherwise a message sent to broker1 that travels over the network to broker4 will never return to broker1 if all consumers on broker4 then die.

The relevant configuration:

<policyEntry queue=">" usePrefetchExtension="false" enableAudit="true">

    <networkBridgeFilterFactory>
        <conditionalNetworkBridgeFilterFactory replayWhenNoConsumers="true"/>
    </networkBridgeFilterFactory>

    <deadLetterStrategy>
        <individualDeadLetterStrategy processExpired="false" queuePrefix="DLQ." useQueueForQueueMessages="true"/>
    </deadLetterStrategy>

</policyEntry>

Notes:

The key replay setting allows messages to flow back to their original broker: set replayWhenNoConsumers="true" on the conditionalNetworkBridgeFilterFactory tag. On ActiveMQ versions below 5.9 you must also disable cursor duplicate detection with enableAudit="false". (Our version is 5.14.4, so we do not need to disable it.)
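
For context, a policyEntry like the one above lives inside the broker's destinationPolicy; a minimal sketch of the enclosing structure:

    <destinationPolicy>
        <policyMap>
            <policyEntries>
                <!-- the policyEntry shown above goes here -->
            </policyEntries>
        </policyMap>
    </destinationPolicy>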

8. On the client side you still need the failover protocol, with all of the brokers listed; this lives in the configuration center, not in AMQ itself.
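
A minimal sketch of such a client broker URL, assuming six brokers on one host with service ports 61636-61641 (the ports beyond the 61636 configured above are illustrative):

    failover:(tcp://192.168.199.23:61636,tcp://192.168.199.23:61637,tcp://192.168.199.23:61638,tcp://192.168.199.23:61639,tcp://192.168.199.23:61640,tcp://192.168.199.23:61641)?randomize=true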

Note the required ordering inside activemq.xml (see the skeleton below):

  1. Networks: must be created before the message store
  2. Message store: must be configured before the transports
  3. Transports: must come last in the broker configuration
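
A skeletal view of that ordering inside the broker element (bodies elided, shown only to illustrate element order):

    <broker xmlns="http://activemq.apache.org/schema/core" brokerName="activemq-1">
        <networkConnectors>
            <!-- 1. networks first -->
        </networkConnectors>
        <persistenceAdapter>
            <!-- 2. message store second -->
        </persistenceAdapter>
        <transportConnectors>
            <!-- 3. transports last -->
        </transportConnectors>
    </broker>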

II. Configuring an ActiveMQ Cluster with KahaDB and a Shared File Lock

This also achieves the master-slave + broker-cluster combination. The difference is that it uses KahaDB for persistence: master and slaves share a single store, and failover works by competing for the file lock. KahaDB has been the default persistence plugin since ActiveMQ 5.4; its persistence mechanism is based on log files, an index, and a cache. The LevelDB persistence engine arrived after ActiveMQ 5.6 and persists data much like KahaDB; ActiveMQ 5.9 then added the LevelDB + ZooKeeper replication scheme, the preferred data-replication option for master/slave setups.

The KahaDB approach is configured largely the same way as LevelDB + ZooKeeper; the main differences are the persistence configuration, and that no ZooKeeper is needed.

To share the KahaDB files between different servers, you can mount an NFS share.
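
A minimal sketch of the corresponding persistenceAdapter, assuming an NFS share mounted at /mnt/nfs/activemq (the path is illustrative); every broker in the group points at the same directory, and whichever one grabs the file lock becomes master:

    <persistenceAdapter>
        <!-- all brokers in the group share this directory; the file lock elects the master -->
        <kahaDB directory="/mnt/nfs/activemq/kahadb"/>
    </persistenceAdapter>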

Repost with attribution: https://blog.csdn.net/renhuan28/article/details/79769758
