Python: Kombu + RabbitMQ deadlock - queues blocked or blocking

The problem

I have a RabbitMQ server that serves as a queue hub for one of my systems. For the last week or so, its producers have been coming to a complete halt every few hours.

What have I tried

Brute force

Stopping the consumers releases the lock for a few minutes, but then blocking returns.

Restarting RabbitMQ solves the problem for a few hours.

I have an automatic script that performs these ugly restarts, but it's obviously far from a proper solution.

Allocating more memory

Following cantSleepNow's answer, I have increased the memory allocated to RabbitMQ to 90%. The server has a whopping 16GB of memory and the message count is not very high (millions per day), so that does not seem to be the problem.

From the command line:

sudo rabbitmqctl set_vm_memory_high_watermark 0.9

And with /etc/rabbitmq/rabbitmq.config:

[
  {rabbit,
    [
      {loopback_users, []},
      {vm_memory_high_watermark, 0.9}
    ]
  }
].
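
The configuration change only takes effect after the broker is restarted. Assuming a systemd-based install (an assumption about this setup, not stated above), the new watermark can be applied and then checked with:

sudo systemctl restart rabbitmq-server

sudo rabbitmqctl status | grep vm_memory_high_watermark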

Code & Design

I use Python for all consumers and producers.

Producers

The producers are API servers that serve calls. Whenever a call arrives, a connection is opened, a message is sent, and the connection is closed.

from kombu import Connection

def send_message_to_queue(host, port, queue_name, message):
    """Sends a single message to the queue."""
    with Connection('amqp://guest:guest@%s:%s//' % (host, port)) as conn:
        simple_queue = conn.SimpleQueue(name=queue_name, no_ack=True)
        simple_queue.put(message)
        simple_queue.close()
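
As an illustration, a producer call looks something like this (the host, port, queue name and payload below are placeholders, not values from the real system; the dict relies on kombu's default JSON serialization):

send_message_to_queue('rabbitmq.example.com', 5672, 'api_events', {'event': 'call_received', 'id': 42})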

Consumers

The consumers differ slightly from each other, but generally follow the same pattern: open a connection and wait on it until a message arrives. The connection can stay open for long periods of time (say, days).

with Connection('amqp://whatever:whatever@whatever:whatever//') as conn:
    while True:
        queue = conn.SimpleQueue(queue_name)
        message = queue.get(block=True)
        message.ack()
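
A more defensive variant of this loop (a sketch only, not the code currently in production) creates the SimpleQueue once, polls with a timeout, and uses the idle periods to check heartbeats, so a silently dead connection is noticed instead of blocking forever:

from queue import Empty  # Python 3; kombu's SimpleQueue raises this on timeout

from kombu import Connection

with Connection('amqp://whatever:whatever@whatever:whatever//', heartbeat=60) as conn:
    queue = conn.SimpleQueue(queue_name)  # create the queue once, not on every iteration
    while True:
        try:
            # Wake up every few seconds instead of blocking indefinitely
            message = queue.get(block=True, timeout=5)
        except Empty:
            conn.heartbeat_check()  # raises if the broker has stopped answering
            continue
        message.ack()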

Design reasoning

Consumers always need to keep an open connection with the queue server

The Producer session should only live during the lifespan of the API call

This design had caused no problems till about one week ago.

Web view dashboard

The web console shows that the consumers in 127.0.0.1 and 172.31.38.50 block the consumers from 172.31.38.50, 172.31.39.120, 172.31.41.38 and 172.31.41.38.
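
The same blocked/blocking state is also visible from the command line, which is useful when the web console itself is sluggish (the exact columns available may vary slightly between RabbitMQ versions):

sudo rabbitmqctl list_connections name peer_host state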

System metrics

Just to be on the safe side, I checked the server load. As expected, the load average and CPU utilization metrics are low.

Why does RabbitMQ reach such a deadlock?

Solution

This is most likely caused by a memory leak in the management module for RabbitMQ 3.6.2. This has now been fixed in RabbitMQ 3.6.3, and is available here.

The issue itself is described here, but it is also discussed extensively on the RabbitMQ message boards; for example here and here. It has also been known to cause a lot of weird issues; a good example is the issue reported here.

As a temporary fix, you can either upgrade to the newest build, downgrade to 3.6.1, or completely disable the management module.
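
If you go with the last option, the management plugin can be turned off with the standard plugins tool (and re-enabled after upgrading):

sudo rabbitmq-plugins disable rabbitmq_management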
