Purpose
Verify whether messages are lost when queues are mirrored on a RAM node.
Environment
A 3-server cluster (IPs ending in 64, 65, 66), with no mirroring policy configured.
Procedure
1 Check the cluster status; if there is no RAM node, convert one of the nodes to a RAM node with the change_cluster_node_type command
$ rabbitmqctl cluster_status
[{nodes,[{disc,['rabbit@TK-RABBITMQ1','rabbit@TK-RABBITMQ2']},
{ram,['rabbit@TK-RABBITMQ3']}]},
{running_nodes,['rabbit@TK-RABBITMQ2','rabbit@TK-RABBITMQ3',
'rabbit@TK-RABBITMQ1']},
{cluster_name,<<"rabbit@TK-RABBITMQ1">>},
{partitions,[]},
{alarms,[{'rabbit@TK-RABBITMQ2',[]},
{'rabbit@TK-RABBITMQ3',[]},
{'rabbit@TK-RABBITMQ1',[]}]}]
2 Create a policy named test_policy using "nodes" mode, mirroring matching queues to node TK-RABBITMQ3
$ rabbitmqctl set_policy test_policy "^test\." '{"ha-mode":"nodes","ha-params":["rabbit@TK-RABBITMQ3"]}'
Setting policy "test_policy" for pattern "^test\\." to "{\"ha-mode\":\"nodes\",\"ha-params\":[\"rabbit@TK-RABBITMQ3\"]}" with priority "0" ...
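The policy pattern `^test\.` is a regular expression matched against queue names. As a quick sanity check (a stand-alone sketch; RabbitMQ does this matching server-side, Python's `re` merely stands in for it here), the following shows which names fall under test_policy:

```python
import re

# The policy pattern from the rabbitmqctl set_policy command above.
# Note the escaped dot: "test." must match literally, so "testone" is excluded.
pattern = re.compile(r"^test\.")

queues = ["test.one", "test.two", "test.three", "testone", "other.test"]
matched = [q for q in queues if pattern.match(q)]

print(matched)  # → ['test.one', 'test.two', 'test.three']
```

Only the three queues declared in step 3 below are covered by the policy; names without the literal `test.` prefix are not.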
3 Using client code, declare a queue named test.one on node rabbit@TK-RABBITMQ1,
a queue named test.two on node rabbit@TK-RABBITMQ2,
and a queue named test.three on node rabbit@TK-RABBITMQ3
4 Inspect the queues
$ rabbitmqctl list_queues name policy messages_ready pid slave_pids
test.one test_policy 10 <rabbit@TK-RABBITMQ3.3.27052.123> []
test.three test_policy 10 <rabbit@TK-RABBITMQ3.3.27041.123> []
test.two test_policy 10 <rabbit@TK-RABBITMQ3.3.27038.123> []
Because the policy places the queues on TK-RABBITMQ3, all three masters ended up on that node even though the queues were declared on other nodes. Since TK-RABBITMQ3 is the only node listed in ha-params, there are no additional replicas, so slave_pids is empty.
5 Stop the application on node TK-RABBITMQ3 with rabbitmqctl stop_app, then start it again with rabbitmqctl start_app
6 Inspect the queues again
$ rabbitmqctl list_queues name policy messages_ready pid slave_pids
test.one test_policy 10 <rabbit@TK-RABBITMQ3.3.27052.123> []
test.three test_policy 10 <rabbit@TK-RABBITMQ3.3.27041.123> []
test.two test_policy 10 <rabbit@TK-RABBITMQ3.3.27038.123> []
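`rabbitmqctl list_queues` prints one tab-separated line per queue, with columns in the order requested on the command line. A minimal sketch (sample string copied from the output above; the helper function is hypothetical, not part of RabbitMQ) that verifies the restart lost no messages:

```python
# Sample output copied from step 6 above; real output is tab-separated,
# one line per queue: name, policy, messages_ready, pid, slave_pids.
sample = (
    "test.one\ttest_policy\t10\t<rabbit@TK-RABBITMQ3.3.27052.123>\t[]\n"
    "test.three\ttest_policy\t10\t<rabbit@TK-RABBITMQ3.3.27041.123>\t[]\n"
    "test.two\ttest_policy\t10\t<rabbit@TK-RABBITMQ3.3.27038.123>\t[]\n"
)

def ready_counts(output: str) -> dict:
    """Map queue name -> messages_ready parsed from list_queues output."""
    counts = {}
    for line in output.strip().splitlines():
        name, _policy, ready, _pid, _slaves = line.split("\t")
        counts[name] = int(ready)
    return counts

counts = ready_counts(sample)
# All three queues still hold their 10 ready messages after stop_app/start_app.
print(counts)  # → {'test.one': 10, 'test.three': 10, 'test.two': 10}
```

The output is identical to step 4: same masters, same message counts, so nothing was lost across the application restart.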
7 Stop nodes TK-RABBITMQ1 and TK-RABBITMQ2
8 Stop node TK-RABBITMQ3 with rabbitmqctl stop_app
9 Start node TK-RABBITMQ3
$ rabbitmqctl start_app
Starting node 'rabbit@TK-RABBITMQ3' ...
BOOT FAILED
===========
Error description:
{could_not_start,rabbit,
{{failed_to_cluster_with,
['rabbit@TK-RABBITMQ1','rabbit@TK-RABBITMQ2'],
"Mnesia could not connect to any nodes."},
{rabbit,start,[normal,[]]}}}
Log files (may contain more information):
/home/rabbitmq/rmq/rabbitmq_server-3.6.2/var/log/rabbitmq/rabbit@TK-RABBITMQ3.log
/home/rabbitmq/rmq/rabbitmq_server-3.6.2/var/log/rabbitmq/rabbit@TK-RABBITMQ3-sasl.log
Error: {could_not_start,rabbit,
{{failed_to_cluster_with,
['rabbit@TK-RABBITMQ1','rabbit@TK-RABBITMQ2'],
"Mnesia could not connect to any nodes."},
{rabbit,start,[normal,[]]}}}
10 Start node TK-RABBITMQ1 (a disc node) first, then start node TK-RABBITMQ3; the RAM node can only boot once it can sync its metadata from a running disc node
11 Inspect the queues
$ rabbitmqctl list_queues name policy messages_ready pid slave_pids
test.one test_policy 10 <rabbit@TK-RABBITMQ3.3.27052.123> []
test.three test_policy 10 <rabbit@TK-RABBITMQ3.3.27041.123> []
test.two test_policy 10 <rabbit@TK-RABBITMQ3.3.27038.123> []
Conclusion
- At least one disc node must be running at all times; a RAM node cannot boot the cluster on its own (see the BOOT FAILED error in step 9).
- Mirroring on a RAM node does not lose messages.
For more experiments, see: RabbitMQ experiments