ActiveMQ Cluster Configuration



Version Compatibility

Available as of ActiveMQ 5.9.0.

How it works

It uses Apache ZooKeeper to coordinate which node in the cluster becomes the master. The elected master broker node starts and accepts client connections. The other nodes go into slave mode, connect to the master, and synchronize their persistent state with it. The slave nodes do not accept client connections. All persistent operations are replicated to the connected slaves. If the master dies, the slave with the latest updates gets promoted to become the master. The failed node can then be brought back online and it will go into slave mode.

All messaging operations which require a sync to disk will wait for the update to be replicated to a quorum of the nodes before completing. So if you configure the store with replicas="3", then the quorum size is (3/2)+1 = 2 (integer division). The master will store the update locally and wait for 1 other slave to store the update before reporting success. Another way to think about it is that the store does synchronous replication to a quorum of the replication nodes and asynchronous replication to any additional nodes.
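For illustration only (this helper is not part of ActiveMQ), the quorum arithmetic can be expressed as a small Java sketch; integer division is what makes (3/2)+1 equal 2:

public class QuorumSize {
    // Quorum size for a replicated store: (replicas / 2) + 1, using integer division.
    static int quorum(int replicas) {
        return replicas / 2 + 1;
    }

    public static void main(String[] args) {
        System.out.println(quorum(3)); // 2: the master plus 1 slave must store each update
        System.out.println(quorum(5)); // 3: the master plus 2 slaves must store each update
    }
}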

When a new master is elected, you also need at least a quorum of nodes online to be able to find a node with the latest updates. The node with the latest updates will become the new master. Therefore, it is recommended that you run with at least 3 replica nodes so that you can take one down without suffering a service outage.

Deployment Tips

Clients should use the Failover Transport to connect to the broker nodes in the replication cluster, e.g. using a URL like the following:

failover:(tcp://broker1:61616,tcp://broker2:61616,tcp://broker3:61616)
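For example, a minimal JMS producer connecting through the failover transport might look like the sketch below. The broker hostnames match the URL above; the queue name TEST.QUEUE is a placeholder.

import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class FailoverProducer {
    public static void main(String[] args) throws Exception {
        // The failover transport transparently reconnects to whichever node is the current master.
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                "failover:(tcp://broker1:61616,tcp://broker2:61616,tcp://broker3:61616)");
        Connection connection = factory.createConnection();
        connection.start();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("TEST.QUEUE"); // placeholder queue name
            MessageProducer producer = session.createProducer(queue);
            producer.send(session.createTextMessage("hello"));
        } finally {
            connection.close();
        }
    }
}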

You should run at least 3 ZooKeeper server nodes so that the ZooKeeper service is highly available. Don't overcommit your ZooKeeper servers. An overworked ZooKeeper might start thinking live replication nodes have gone offline due to delays in processing their 'keep-alive' messages.

For best results, make sure you explicitly configure the hostname attribute with a hostname or IP address that the other cluster members can use to reach the machine. The automatically determined hostname is not always accessible by the other cluster members, which results in slaves not being able to establish a replication session with the master.

Configuration

You can configure ActiveMQ to use a replicated LevelDB store for its persistence adapter, as shown below:

<broker brokerName="broker" ... >
  ...
  <persistenceAdapter>
    <replicatedLevelDB
      directory="activemq-data"
      replicas="3"
      bind="tcp://0.0.0.0:0"
      zkAddress="zoo1.example.org:2181,zoo2.example.org:2181,zoo3.example.org:2181"
      zkPassword="password"
      zkPath="/activemq/leveldb-stores"
      hostname="broker1.example.org"
      />
  </persistenceAdapter>
  ...
</broker>
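As an aside, if you prefer to boot the broker from Java rather than the activemq launch script, a minimal sketch could use BrokerFactory. This assumes the XML above is saved as activemq.xml on the classpath and that activemq-spring is available so the xbean: URI can be resolved.

import java.net.URI;
import org.apache.activemq.broker.BrokerFactory;
import org.apache.activemq.broker.BrokerService;

public class EmbeddedReplicatedBroker {
    public static void main(String[] args) throws Exception {
        // Load the XML configuration shown above and start the broker.
        BrokerService broker = BrokerFactory.createBroker(new URI("xbean:activemq.xml"));
        broker.start();
        broker.waitUntilStopped(); // block until the broker shuts down
    }
}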

Replicated LevelDB Store Properties

All the broker nodes that are part of the same replication set should have matching brokerName XML attributes. The following configuration properties should be the same on all the broker nodes that are part of the same replication set:

replicas (default: 3)
The number of nodes that will exist in the cluster. At least (replicas/2)+1 nodes must be online to avoid a service outage.

securityToken (no default)
A security token which must match on all replication nodes for them to accept each other's replication requests.

zkAddress (default: 127.0.0.1:2181)
A comma separated list of ZooKeeper servers.

zkPassword (no default)
The password to use when connecting to the ZooKeeper server.

zkPath (default: /default)
The path to the ZooKeeper directory where Master/Slave election information will be exchanged.

zkSessionTimeout (default: 2s)
How quickly a node failure will be detected by ZooKeeper. (Prior to 5.11 this attribute was misspelled zkSessionTmeout.)

sync (default: quorum_mem)
Controls where updates must reside before being considered complete. This setting is a comma separated list of the following options: local_mem, local_disk, remote_mem, remote_disk, quorum_mem, quorum_disk. If you combine two settings for a target, the stronger guarantee is used. For example, configuring local_mem, local_disk is the same as just using local_disk. quorum_mem is the same as local_mem, remote_mem, and quorum_disk is the same as local_disk, remote_disk.

Different replication sets can share the same zkPath as long as they have different brokerName values.

The following configuration properties can be unique per node:

bind (default: tcp://0.0.0.0:61619)
When this node becomes a master, it will bind the configured address and port to service the replication protocol. Using dynamic ports is also supported; just configure with tcp://0.0.0.0:0.

hostname (no default)
The host name used to advertise the replication service when this node becomes the master. If not set, it will be automatically determined.

weight (default: 1)
The replication node that has the latest update with the highest weight will become the master. Used to give preference to some nodes towards becoming master.

The store also supports the same configuration properties as a standard LevelDB store, but it does not support the pluggable storage lockers:

Standard LevelDB Store Properties

directory (default: LevelDB)
The directory which the store will use to hold its data files. The store will create the directory if it does not already exist.

readThreads (default: 10)
The number of concurrent IO read threads allowed.

logSize (default: 104857600 bytes, i.e. 100 MB)
The max size (in bytes) of each data log file before log file rotation occurs.

verifyChecksums (default: false)
Set to true to force checksum verification of all data that is read from the file system.

paranoidChecks (default: false)
Make the store error out as soon as possible if it detects internal corruption.

indexFactory (default: org.fusesource.leveldbjni.JniDBFactory, org.iq80.leveldb.impl.Iq80DBFactory)
The factory classes to use when creating the LevelDB indexes.

indexMaxOpenFiles (default: 1000)
Number of open files that can be used by the index.

indexBlockRestartInterval (default: 16)
Number of keys between restart points for delta encoding of keys.

indexWriteBufferSize (default: 6291456 bytes, i.e. 6 MB)
Amount of index data to build up in memory before converting to a sorted on-disk file.

indexBlockSize (default: 4096 bytes, i.e. 4 KB)
The size of index data packed per block.

indexCacheSize (default: 268435456 bytes, i.e. 256 MB)
The maximum amount of off-heap memory to use to cache index blocks.

indexCompression (default: snappy)
The type of compression to apply to the index blocks. Can be snappy or none.

logCompression (default: none)
The type of compression to apply to the log records. Can be snappy or none.

Caveats

The LevelDB store does not yet support storing data associated with Delay and Schedule Message Delivery. That data is stored in separate, non-replicated KahaDB data files. Unexpected results will occur if you use Delay and Schedule Message Delivery with the replicated LevelDB store, since that data will not be there when the master fails over to a slave.
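For reference, the feature this caveat covers is the broker's message scheduler, driven by the AMQ_SCHEDULED_DELAY message property. The sketch below (placeholder broker URL and queue name, and assuming schedulerSupport is enabled on the broker) shows the kind of send whose schedule data would not survive a failover:

import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.ScheduledMessage;

public class DelayedSend {
    public static void main(String[] args) throws Exception {
        Connection connection = new ActiveMQConnectionFactory(
                "failover:(tcp://broker1:61616)").createConnection(); // placeholder URL
        connection.start();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("TEST.QUEUE"); // placeholder queue name
            MessageProducer producer = session.createProducer(queue);
            TextMessage message = session.createTextMessage("delayed");
            // Deliver after 60 seconds; the schedule is held in the non-replicated KahaDB scheduler store.
            message.setLongProperty(ScheduledMessage.AMQ_SCHEDULED_DELAY, 60000L);
            producer.send(message);
        } finally {
            connection.close();
        }
    }
}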

