ActiveMQ Master-Slave High Availability (ZooKeeper Failover) + Persistence Strategy (Replicated LevelDB Stores)

Preparation:

    apache-activemq-5.15.9-bin.tar.gz, downloaded from the official site activemq.apache.org

    JDK 1.8

    ZooKeeper cluster: three ZooKeeper nodes

    Brokers: six ActiveMQ instances started on one machine (three per HA cluster)

Note: avoid combining networkConnectors with an nginx load balancer. Suppose cluster A's queue aqueue holds 1000 messages and 600 of them are consumed through cluster A. If you then consume aqueue through cluster B, cluster B first pulls the remaining 400 messages from cluster A into its local store and only then forwards them to the consumer, so all 1000 messages still flow through cluster A. This does nothing to spread load or raise throughput; seen from the level of the whole cluster it actually adds pressure, because cluster B's forwarding step is pure overhead.

So nginx / LVS dynamic proxying combined with networkConnectors for load balancing is not demonstrated here, because it is essentially useless.

HA1     my-activemq-clust

1. Extract apache-activemq-5.15.9-bin.tar.gz into /opt.
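
For example (a minimal sketch, assuming the archive sits in the current directory):

tar -xzf apache-activemq-5.15.9-bin.tar.gz -C /opt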

2. Edit activemq.xml in /opt/apache-activemq-5.15.9/conf:

      Set brokerName to the same value on every node of the cluster.
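
A quick way to confirm the name matches on every node (a sketch; the host names follow the zkAddress entries used below and may differ in your environment):

for h in job-dangdai-node-2 job-dangdai-node-3 job-dangdai-node-4; do
    ssh "$h" grep brokerName /opt/apache-activemq-5.15.9/conf/activemq.xml
done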

3. Configure the persistence strategy: replicated LevelDB.

Create the leveldb directory ahead of time: /opt/apache-activemq-5.15.9/data/leveldb
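
For example, on every broker node:

mkdir -p /opt/apache-activemq-5.15.9/data/leveldb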

<!-- Persistence strategy -->
<persistenceAdapter>

    <!-- Replicated LevelDB for high availability -->
    <replicatedLevelDB
          directory="${activemq.data}/leveldb"
          replicas="3"
          bind="tcp://0.0.0.0:63631"
          zkAddress="job-dangdai-node-2:2181,job-dangdai-node-3:2181,job-dangdai-node-4:2181"
          zkPath="/activemq/leveldb-stores"
          sync="local_disk"
          hostname="my-activemq-clust"/>
</persistenceAdapter>

A few notes on the attributes: replicas="3" replicates the store across three nodes, and a write is acknowledged once a quorum (replicas/2 + 1, i.e. 2 here) has stored it; bind is the replication port, so every instance running on the same machine needs its own bind port; hostname is the address other members use to reach this node's replication port, so in practice it should be set per node to something that resolves to that node (or left unset for auto-detection) rather than to one shared name.

4. The complete activemq.xml:

<!--
    Licensed to the Apache Software Foundation (ASF) under one or more
    contributor license agreements.  See the NOTICE file distributed with
    this work for additional information regarding copyright ownership.
    The ASF licenses this file to You under the Apache License, Version 2.0
    (the "License"); you may not use this file except in compliance with
    the License.  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software
    distributed under the License is distributed on an "AS IS" BASIS,
    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    See the License for the specific language governing permissions and
    limitations under the License.
-->
<!-- START SNIPPET: example -->
<beans
  xmlns="http://www.springframework.org/schema/beans"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
  http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd">

    <!-- Allows us to use system properties as variables in this configuration file -->
    <bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
        <property name="locations">
            <value>file:${activemq.conf}/credentials.properties</value>
        </property>
    </bean>

   <!-- Allows accessing the server log -->
    <bean id="logQuery" class="io.fabric8.insight.log.log4j.Log4jLogQuery"
          lazy-init="false" scope="singleton"
          init-method="start" destroy-method="stop">
    </bean>

    <!--
        The <broker> element is used to configure the ActiveMQ broker.
    -->
	<!-- persistent="true" means messages are persisted; used together with the persistenceAdapter child element -->
	<!-- dataDirectory is the default directory for persisted data -->
	<!-- brokerName sets the broker name; all members of one replicated cluster share it, and it must be unique among distinct brokers on the network -->
    <broker xmlns="http://activemq.apache.org/schema/core" brokerName="my-activemq-clust" dataDirectory="${activemq.data}" schedulerSupport="true">

        <destinationPolicy>
            <policyMap>
              <policyEntries>
                <policyEntry topic=">" >
                    <!-- The constantPendingMessageLimitStrategy is used to prevent
                         slow topic consumers to block producers and affect other consumers
                         by limiting the number of messages that are retained
                         For more information, see:

                         http://activemq.apache.org/slow-consumer-handling.html

                    -->
                  <pendingMessageLimitStrategy>
                    <constantPendingMessageLimitStrategy limit="1000"/>
                  </pendingMessageLimitStrategy>
                </policyEntry>
              </policyEntries>
            </policyMap>
        </destinationPolicy>


        <!--
            The managementContext is used to configure how ActiveMQ is exposed in
            JMX. By default, ActiveMQ uses the MBean server that is started by
            the JVM. For more information, see:

            http://activemq.apache.org/jmx.html
        -->
        <managementContext>
            <managementContext createConnector="false"/>
        </managementContext>

        <!--
            Configure message persistence for the broker. The default persistence
            mechanism is the KahaDB store (identified by the kahaDB tag).
            For more information, see:

            http://activemq.apache.org/persistence.html
        -->
        <!-- Persistence strategy -->
        <persistenceAdapter>
            <!-- Default: KahaDB   <kahaDB directory="${activemq.data}/kahadb"/> -->
            <!-- JDBC:   <jdbcPersistenceAdapter dataSource="#mysql-ds" createTablesOnStartup="false"/> -->
            <!-- LevelDB:   <levelDB directory="${activemq.data}/leveldb"/> -->

            <!-- (officially recommended) JDBC with journal: a fast journal cache in front of JDBC
            <journalPersistenceAdapterFactory
                    journalLogFiles="4"
                    journalLogFileSize="32768"
                    useJournal="true"
                    useQuickJournal="true"
                    dataSource="#mysql-ds"
                    dataDirectory="activemq-data"/>
            -->

            <!-- Replicated LevelDB for high availability -->
            <replicatedLevelDB
                    directory="${activemq.data}/leveldb"
                    replicas="3"
                    bind="tcp://0.0.0.0:63631"
                    zkAddress="job-dangdai-node-2:2181,job-dangdai-node-3:2181,job-dangdai-node-4:2181"
                    zkPath="/activemq/leveldb-stores"
                    sync="local_disk"
                    hostname="my-activemq-clust"/>

        </persistenceAdapter>


          <!--
            The systemUsage controls the maximum amount of space the broker will
            use before disabling caching and/or slowing down producers. For more information, see:
            http://activemq.apache.org/producer-flow-control.html
          -->
          <systemUsage>
            <systemUsage>
                <memoryUsage>
                    <memoryUsage percentOfJvmHeap="70" />
                </memoryUsage>
                <storeUsage>
                    <storeUsage limit="100 gb"/>
                </storeUsage>
                <tempUsage>
                    <tempUsage limit="50 gb"/>
                </tempUsage>
            </systemUsage>
        </systemUsage>

        <!--
            The transport connectors expose ActiveMQ over a given protocol to
            clients and other brokers. For more information, see:

            http://activemq.apache.org/configuring-transports.html
        -->
        <transportConnectors>
            <!-- DOS protection, limit concurrent connections to 1000 and frame size to 100MB -->
            <transportConnector name="openwire" uri="tcp://0.0.0.0:61616?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
            <transportConnector name="amqp" uri="amqp://0.0.0.0:5672?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
            <transportConnector name="stomp" uri="stomp://0.0.0.0:61613?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
            <transportConnector name="mqtt" uri="mqtt://0.0.0.0:1883?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
            <transportConnector name="ws" uri="ws://0.0.0.0:61614?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
<!--<transportConnector name="nio" uri="nio://0.0.0.0:61618?trace=true"/>-->
<!-- 消息访问协议 优化方案 -->
<transportConnector name="auto+nio" uri="auto+nio://0.0.0.0:61608?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600&amp;org.apache.activemq.transport.nio.SelectorManager.corePoolSize=20&amp;org.apache.activemq.transport.nio.SelectorManager.maximumPoolSize=50" />
        </transportConnectors>

        <!-- destroy the spring context on shutdown to stop jetty -->
        <shutdownHooks>
            <bean xmlns="http://www.springframework.org/schema/beans" class="org.apache.activemq.hooks.SpringContextHook" />
        </shutdownHooks>

    </broker>
    <!-- The datasource below is only needed when using JDBC persistence -->
    <bean id="mysql-ds" class="org.apache.commons.dbcp2.BasicDataSource" destroy-method="close">
        <property name="driverClassName" value="com.mysql.jdbc.Driver"/>
        <property name="url" value="jdbc:mysql://192.168.164.25:3306/activemq?relaxAutoCommit=true"/>
        <property name="username" value="root"/>
        <property name="password" value="123456"/>
        <property name="poolPreparedStatements" value="true"/>
    </bean>
    <!--
        Enable web consoles, REST and Ajax APIs and demos
        The web consoles requires by default login, you can disable this in the jetty.xml file

        Take a look at ${ACTIVEMQ_HOME}/conf/jetty.xml for more details
    -->
    <import resource="jetty.xml"/>

</beans>
<!-- END SNIPPET: example -->

5. Copy the ActiveMQ installation to the other two nodes.
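
For example (a sketch; the target host names mirror the zkAddress entries above and may differ in your environment):

scp -r /opt/apache-activemq-5.15.9 job-dangdai-node-3:/opt/
scp -r /opt/apache-activemq-5.15.9 job-dangdai-node-4:/opt/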

6. Start the ZooKeeper cluster first, then start the ActiveMQ instances one after another.

Directory: /opt/apache-activemq-5.15.9/bin

Start command: ./activemq start
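
To tell which instance won the master election, note that only the elected master binds the client-facing transport ports; slaves hold back until failover. A quick check (a sketch):

./activemq status               # confirms the process is running (master or slave)
netstat -tln | grep 61616       # a listener appears only on the current master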

Once everything is up, inspect the data registered under the ZooKeeper path.
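
For example, with the zkCli.sh shipped with ZooKeeper (a sketch; the ZooKeeper install path is an assumption):

/opt/zookeeper/bin/zkCli.sh -server job-dangdai-node-2:2181
# inside the zkCli shell:
ls /activemq/leveldb-stores
# each broker registers an ephemeral sequential znode under this path; the znode
# data (JSON) lists the member addresses and shows which one is the elected master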

 

HA2     my-activemq-clust2

Repeat the HA1 steps, using brokerName my-activemq-clust2 and this cluster's own ports and zkPath so the two clusters stay independent.

Load balancing between the two clusters

In HA1, list all master/slave nodes of cluster HA2:

 <!-- load-balancing networkConnector to HA2 -->
<networkConnectors>
    <networkConnector uri="static:(tcp://192.168.164.255:61608,tcp://192.168.164.255:51515,tcp://192.168.164.255:51516)" duplex="false"/>
</networkConnectors>

In HA2, list all master/slave nodes of cluster HA1:

 <!-- load-balancing networkConnector to HA1 -->
<networkConnectors>
    <networkConnector uri="static:(tcp://192.168.164.255:61605,tcp://192.168.164.255:61604,tcp://192.168.164.255:61603)" duplex="false"/>
</networkConnectors>

Note: as explained at the top of this article, this networkConnectors-based setup cannot actually balance load or raise throughput; the second cluster merely drains the first cluster's messages and forwards them, adding a redundant hop.

The final cluster scheme therefore drops the networkConnectors: each application talks to a single ZooKeeper-coordinated master-slave cluster, and clients reach the elected master through the failover transport.
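
For instance, with the producer/consumer tools bundled with ActiveMQ (a sketch; the broker host names are assumed to match the ZooKeeper hosts above):

./activemq producer \
    --brokerUrl "failover:(tcp://job-dangdai-node-2:61616,tcp://job-dangdai-node-3:61616,tcp://job-dangdai-node-4:61616)" \
    --destination queue://aqueue --messageCount 1000

./activemq consumer \
    --brokerUrl "failover:(tcp://job-dangdai-node-2:61616,tcp://job-dangdai-node-3:61616,tcp://job-dangdai-node-4:61616)" \
    --destination queue://aqueue --messageCount 1000

The failover: URL retries across the listed addresses, so when the master dies and a slave is promoted, clients reconnect automatically without any configuration change.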
