1. Preparation
① A working Zookeeper cluster is required.
② Pull the ActiveMQ image:
docker pull webcenter/activemq
③ Port plan:
Host           | Zookeeper port | AMQ cluster bind port | AMQ OpenWire (tcp) port | Web console port
192.168.16.106 | 2181           | tcp://0.0.0.0:63631   | 61616                   | 8161
192.168.16.106 | 2182           | tcp://0.0.0.0:63632   | 61617                   | 8162
192.168.16.106 | 2183           | tcp://0.0.0.0:63633   | 61618                   | 8163
2. Start three ActiveMQ containers with Docker
docker run -d --name activemq_01 -p 61616:61616 -p 8161:8161 webcenter/activemq
docker run -d --name activemq_02 -p 61617:61616 -p 8162:8161 webcenter/activemq
docker run -d --name activemq_03 -p 61618:61616 -p 8163:8161 webcenter/activemq
3. Hostname mapping (if you skip this step, set the hostname in each MQ configuration file to the host's IP instead)
vim /etc/hosts
# Add the following entry
192.168.16.106 cyfuse   # use the hostname configured in activemq.xml (here cyfuse)
4. ActiveMQ cluster configuration
① The brokerName must be identical in all three configuration files
# Enter the container
docker exec -it activemq_01 /bin/bash
# Edit the configuration file
cd conf
vim activemq.xml
# Set brokerName to the same value in all three activemq.xml files to avoid confusion
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="activemq_cluster" dataDirectory="${activemq.data}">
② Persistence configuration
Find the persistenceAdapter node, comment out the kahaDB entry, and add the following. Only the bind value differs between the three nodes; everything else is identical.
directory : directory where the store files are created
replicas : number of nodes in the ActiveMQ cluster
bind : cluster replication port
zkAddress : Zookeeper connection addresses
hostname : host name this node advertises to the others
sync : how writes are synced (here, to the local disk)
zkPath : the Zookeeper path under which ActiveMQ registers its nodes
#activemq_01
<persistenceAdapter>
<replicatedLevelDB
directory="${activemq.data}/leveldb"
replicas="3"
bind="tcp://0.0.0.0:63631"
zkAddress="192.168.16.106:2181,192.168.16.106:2182,192.168.16.106:2183"
hostname="cyfuse"
sync="local_disk"
zkPath="/activemq/leveldb-stores"
/>
</persistenceAdapter>
#activemq_02
<persistenceAdapter>
<replicatedLevelDB
directory="${activemq.data}/leveldb"
replicas="3"
bind="tcp://0.0.0.0:63632"
zkAddress="192.168.16.106:2181,192.168.16.106:2182,192.168.16.106:2183"
hostname="cyfuse"
sync="local_disk"
zkPath="/activemq/leveldb-stores"
/>
</persistenceAdapter>
#activemq_03
<persistenceAdapter>
<replicatedLevelDB
directory="${activemq.data}/leveldb"
replicas="3"
bind="tcp://0.0.0.0:63633"
zkAddress="192.168.16.106:2181,192.168.16.106:2182,192.168.16.106:2183"
hostname="cyfuse"
sync="local_disk"
zkPath="/activemq/leveldb-stores"
/>
</persistenceAdapter>
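With replicas="3" in the blocks above, the replicated store needs a majority of nodes alive before it will elect a master. The quorum arithmetic (a plain sketch, not ActiveMQ API) is simply:

```java
public class QuorumCheck {
    // Smallest number of live nodes that forms a majority of the replicas.
    static int quorum(int replicas) {
        return replicas / 2 + 1;
    }

    public static void main(String[] args) {
        // replicas="3": two live nodes can still elect a master, one cannot
        System.out.println(quorum(3)); // prints 2
        System.out.println(quorum(5)); // prints 3
    }
}
```

This is why, in the availability test below, the cluster stops working once only one broker remains.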
③ Set each node's client message (OpenWire) port to the port that was mapped when the container was created. Note that inside an XML attribute the & separating URI options must be escaped as &amp;. Because this changes the port inside the container, make sure the container's -p mapping exposes that same port (e.g. 61617 for activemq_02).
#activemq_01
<transportConnector name="openwire" uri="tcp://0.0.0.0:61616?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
#activemq_02
<transportConnector name="openwire" uri="tcp://0.0.0.0:61617?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
#activemq_03
<transportConnector name="openwire" uri="tcp://0.0.0.0:61618?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
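The two query options cap each connector: maximumConnections=1000 limits concurrent clients, and wireFormat.maxFrameSize=104857600 caps a single frame at 100 MB (100 × 1024 × 1024 bytes). A small illustrative parser (not part of the ActiveMQ API) shows how such a URI's options break down:

```java
import java.util.HashMap;
import java.util.Map;

public class ConnectorUriOptions {
    // Split the query part of a transport connector URI into an option map.
    static Map<String, String> parseOptions(String uri) {
        Map<String, String> opts = new HashMap<>();
        int q = uri.indexOf('?');
        if (q < 0) return opts;
        for (String pair : uri.substring(q + 1).split("&")) {
            String[] kv = pair.split("=", 2);
            opts.put(kv[0], kv.length > 1 ? kv[1] : "");
        }
        return opts;
    }

    public static void main(String[] args) {
        Map<String, String> opts = parseOptions(
            "tcp://0.0.0.0:61616?maximumConnections=1000&wireFormat.maxFrameSize=104857600");
        System.out.println(opts.get("maximumConnections"));      // prints 1000
        System.out.println(opts.get("wireFormat.maxFrameSize")); // prints 104857600
    }
}
```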
5. Restart the ActiveMQ cluster
① Start the Zookeeper cluster first
docker start zookeeper1 zookeeper2 zookeeper3
② Then start the ActiveMQ cluster
docker restart activemq_01 activemq_02 activemq_03
6. Inspect the Zookeeper nodes
Three ephemeral nodes now exist under the activemq path.
Inspect each node's data: the node whose elected field is non-null is the master; the other two are slaves.
{"id":"localhost","container":null,"address":"tcp://cyfuse:63631","position":-1,"weight":1,"elected":"0000000000"}
{"id":"localhost","container":null,"address":null,"position":-1,"weight":1,"elected":null}
{"id":"localhost","container":null,"address":null,"position":-1,"weight":1,"elected":null}
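The same master check can be scripted: the node whose elected field is non-null is the master. A minimal sketch using plain string matching on payloads like those above (a JSON library would be more robust):

```java
public class MasterFinder {
    // A node's JSON payload marks the master with a non-null "elected" value.
    static boolean isMaster(String nodeJson) {
        return !nodeJson.contains("\"elected\":null");
    }

    public static void main(String[] args) {
        String master = "{\"id\":\"localhost\",\"elected\":\"0000000000\"}";
        String slave  = "{\"id\":\"localhost\",\"elected\":null}";
        System.out.println(isMaster(master)); // prints true
        System.out.println(isMaster(slave));  // prints false
    }
}
```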
7. Cluster availability test
ActiveMQ clients can only talk to the master broker; the slave brokers do not accept connections. Clients should therefore connect with the failover transport, which automatically fails over across the broker list.
If one ActiveMQ node or one Zookeeper node goes down, the service keeps running. With only one ActiveMQ node left, no master can be elected, so ActiveMQ stops working. Likewise, if only one Zookeeper node survives, ActiveMQ cannot provide service no matter how many ActiveMQ nodes are alive. (The ActiveMQ cluster's availability depends on the Zookeeper cluster's availability.)
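The failover URL used by the clients lists all three brokers, and the transport also accepts tuning options such as randomize and initialReconnectDelay after the closing parenthesis. A small sketch assembling such a URL:

```java
import java.util.List;

public class FailoverUrl {
    // Join broker URIs into a failover: URL; options after ')' tune reconnection.
    static String build(List<String> brokers, String options) {
        String joined = String.join(",", brokers);
        return "failover:(" + joined + ")" + (options.isEmpty() ? "" : "?" + options);
    }

    public static void main(String[] args) {
        String url = build(
            List.of("tcp://192.168.16.106:61616",
                    "tcp://192.168.16.106:61617",
                    "tcp://192.168.16.106:61618"),
            "randomize=false&initialReconnectDelay=100");
        System.out.println(url);
    }
}
```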
Producer code
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class JmsProduce {
    public static final String ACTIVEMQ_URL = "failover:(tcp://192.168.16.106:61616,tcp://192.168.16.106:61617,tcp://192.168.16.106:61618)";
    public static final String QUEUE_NAME = "queue_cluster";

    public static void main(String[] args) throws JMSException {
        // 1. Create the connection factory for the given URL, with the default username and password
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(ACTIVEMQ_URL);
        // 2. Obtain a connection and start it
        Connection connection = factory.createConnection();
        connection.start();
        // 3. Create a session; the two arguments are: ① transacted? ② acknowledge mode
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        // 4. Create the destination (a queue here; could also be a topic)
        Queue queue = session.createQueue(QUEUE_NAME);
        // 5. Create the message producer
        MessageProducer messageProducer = session.createProducer(queue);
        // Make the queue messages this producer sends persistent
        messageProducer.setDeliveryMode(DeliveryMode.PERSISTENT);
        // 6. Send three messages to the MQ queue
        for (int i = 0; i < 3; i++) {
            // 7. Create a text message
            TextMessage textMessage = session.createTextMessage("msg---" + i);
            // 8. Send it via messageProducer
            messageProducer.send(textMessage);
        }
        // 9. Release resources
        messageProducer.close();
        session.close();
        connection.close();
        System.out.println("---messages published to MQ---");
    }
}
Consumer code
import java.io.IOException;
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class JmsConsumer {
    public static final String ACTIVEMQ_URL = "failover:(tcp://192.168.16.106:61616,tcp://192.168.16.106:61617,tcp://192.168.16.106:61618)";
    public static final String QUEUE_NAME = "queue_cluster";

    public static void main(String[] args) throws JMSException, IOException {
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(ACTIVEMQ_URL);
        Connection connection = factory.createConnection();
        connection.start();
        // 3. Create a session; the two arguments are: ① transacted? ② acknowledge mode
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        // 4. Create the destination (a queue here; could also be a topic)
        Queue queue = session.createQueue(QUEUE_NAME);
        // 5. Create the consumer
        MessageConsumer messageConsumer = session.createConsumer(queue);
        messageConsumer.setMessageListener(new MessageListener() {
            @Override
            public void onMessage(Message message) {
                if (message instanceof TextMessage) {
                    TextMessage textMessage = (TextMessage) message;
                    try {
                        System.out.println("received text message: " + textMessage.getText());
                    } catch (JMSException e) {
                        e.printStackTrace();
                    }
                }
            }
        });
        System.in.read(); // keep the JVM alive; otherwise the program exits before any message is handled
        messageConsumer.close();
        session.close();
        connection.close();
    }
}
Start the producer; console output like the following indicates the connection succeeded:
INFO | Successfully connected to tcp://192.168.16.106:61616
---messages published to MQ---
Stop the cluster's current master and check whether a new master is elected:
docker stop activemq_01
Because this cluster mode (the replicated LevelDB store) has been deprecated upstream, a new master does get elected, but producers and consumers can no longer connect to it; the only visible effect is that Zookeeper shows which node is now the master.
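If you need persistence without the deprecated store, the straightforward option is to restore the default KahaDB adapter that step 4 commented out (this is the fragment as it ships in activemq.xml) and build high availability on shared storage instead:

```xml
<persistenceAdapter>
    <kahaDB directory="${activemq.data}/kahadb"/>
</persistenceAdapter>
```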