ZeroMQ, or ZMQ for short, is a simple, pleasant-to-use transport layer: a socket library that behaves like a framework and makes socket programming simpler, more concise, and faster. Official guide: http://zguide.zeromq.org/page:all
Compared with RabbitMQ, ZMQ is not a message-queue server in the traditional sense; in fact, it is not a server at all. It is closer to a low-level networking library that wraps the socket API, abstracting network communication, inter-process communication, and inter-thread communication behind one unified API.
Installation and configuration:
Like most other open-source software: ./configure && make && make install. Or specify an install directory: ./configure --prefix=/path/to/install
Then set ZMQ_HOME=/path/to/install and LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ZMQ_HOME/lib, and you are done.
1. Hello world
Figure 2 - Request-Reply
#include <zmq.h>
#include <stdio.h>
#include <unistd.h>
#include <string.h>
#include <assert.h>
int main (void)
{
    /* Socket to talk to clients */
    void *context = zmq_ctx_new ();
    void *responder = zmq_socket (context, ZMQ_REP);
    int rc = zmq_bind (responder, "tcp://*:5555");
    assert (rc == 0);

    while (1) {
        char buffer [10];
        memset (buffer, 0x00, sizeof (buffer));
        rc = zmq_recv (responder, buffer, 10, 0);
        printf ("server Received [%s]rc[%d]\n", buffer, rc);
        sleep (1);      // Do some 'work'
        zmq_send (responder, "World1234", 9, 0);
        printf ("server send [World1234]\n");
    }
    return 0;
}
Notes:
a) zmq_ctx_new creates the context
b) zmq_socket creates the classic reply-side (REP) server socket
c) zmq_bind binds the port and starts listening
d) The while loop receives client requests and sends back replies
Client code:
#include <zmq.h>
#include <string.h>
#include <stdio.h>
#include <unistd.h>
int main (void)
{
    printf ("Connecting to hello world server\n");
    void *context = zmq_ctx_new ();
    void *client = zmq_socket (context, ZMQ_REQ);
    zmq_connect (client, "tcp://localhost:5555");

    char buffer [10] = {0};
    int request_nbr = 0;
    for (request_nbr = 0; request_nbr < 1; request_nbr++)
    {
        zmq_send (client, "hello", 5, 0);   // length must match the payload
        printf ("client send [hello]\n");
        memset (buffer, 0x00, 10);
        zmq_recv (client, buffer, 10, 0);
        printf ("client recv [%s]\n", buffer);
    }
    zmq_close (client);
    zmq_ctx_destroy (context);
    return 0;
}
Notes:
a) zmq_ctx_new creates the context
b) zmq_socket creates the classic request-side (REQ) client socket
c) zmq_connect connects to the server
d) Requests and replies can then be exchanged repeatedly
Build and run:
gcc -o hwserver hwserver.c -I$ZMQ_HOME/include -L$ZMQ_HOME/lib -lzmq
gcc -o hwclient hwclient.c -I$ZMQ_HOME/include -L$ZMQ_HOME/lib -lzmq
./hwserver
./hwclient
The client and the server now exchange messages.
2. ZMQ socket types
Syntax:
void *zmq_socket (void *context, int type);
2.1 Client-server pattern
Suited to a single server serving one or more clients. The newer ZMQ_CLIENT/ZMQ_SERVER sockets will gradually replace the REQ/REP (still perfectly usable, of course) and ROUTER/DEALER request-reply models.
Interaction between client and server is asynchronous and non-blocking.
Client: ZMQ_CLIENT/ZMQ_REQ; server: ZMQ_SERVER/ZMQ_REP
2.2 Publish-subscribe pattern
Suited to one publisher distributing messages to multiple receivers at once.
Client: ZMQ_SUB; server: ZMQ_PUB
2.3 Pipeline pattern
Suited to passing messages between the nodes of a pipeline, with data flowing along it. If a node has several downstream nodes attached, the downstream nodes receive data in round-robin fashion, so no message is delivered twice.
Client: ZMQ_PULL/ZMQ_PUSH; server: ZMQ_PUSH/ZMQ_PULL
The pushing end can be the server, and so can the PULL end; it depends on the role each node plays.
Typically the server pushes data and the clients pull it.
3. Classic ZMQ patterns in practice
3.1 The classic request-reply pattern
The hello world example above uses exactly this pattern.
This pattern already covers the vast majority of scenarios, because the underlying communication is asynchronous: it works well when there are many connections but only a fraction are active at any moment. For example, if the server handles at most 100 transactions per second, then at most about 100 connections are active at once, and the total number of connections can safely reach 1000 or more.
3.2 Publish-subscribe pattern
Server example:
// Pubsub envelope publisher
// Note that the zhelpers.h file also provides s_sendmore
#include "zhelpers.h"
#include <unistd.h>
int main (void)
{
    // Prepare our context and publisher
    void *context = zmq_ctx_new ();
    void *publisher = zmq_socket (context, ZMQ_PUB);
    zmq_bind (publisher, "tcp://*:5563");

    while (1) {
        // Write two messages, each with an envelope and content
        s_sendmore (publisher, "A");
        s_send (publisher, "We don't want to see this");
        s_sendmore (publisher, "B");
        s_send (publisher, "We would like to see this");
        sleep (1);
    }
    // We never get here, but clean up anyhow
    zmq_close (publisher);
    zmq_ctx_destroy (context);
    return 0;
}
Note: s_sendmore, s_send, and friends are convenience helpers provided by zhelpers.h.
static int
s_sendmore (void *socket, char *string) {
    int size = zmq_send (socket, string, strlen (string), ZMQ_SNDMORE);
    return size;
}
The full code is available at
https://github.com/booksbyus/zguide/tree/master/examples under C/zhelpers.h
The publisher publishes two kinds of messages, labeled A and B. The client subscribes only to the B messages, and zmq filters the A messages out for it automatically.
Note that the client first receives an address (the envelope) and only then the content. What does that mean? The PUB side sent B and its content as two separate frames, so the SUB side naturally has to receive twice, ending up with [B] We would like to see this.
Publish-subscribe is a natural fit for information distribution, or for a home-grown configuration center: when a configuration changes, all subscribers are notified to update.
For example, suppose every process keeps a local cache, and changing something means that cache must be refreshed. The configuration center (the PUB side) publishes an update instruction, and each process starts a thread at startup that listens to the configuration center and applies the corresponding update whenever a message arrives. Also remember to guard the cache with a lock, so the updating thread has exclusive access to it during the update.
3.3 Push-pull task model
Task ventilator example:
// Task ventilator
// Binds PUSH socket to tcp://localhost:5557
// Sends batch of tasks to workers via that socket
#include "zhelpers.h"
int main (void)
{
    void *context = zmq_ctx_new ();

    // Socket to send messages on
    void *sender = zmq_socket (context, ZMQ_PUSH);
    zmq_bind (sender, "tcp://*:5557");

    // Socket to send start of batch message on
    void *sink = zmq_socket (context, ZMQ_PUSH);
    zmq_connect (sink, "tcp://localhost:5558");

    printf ("Press Enter when the workers are ready: ");
    getchar ();
    printf ("Sending tasks to workers…\n");

    // The first message is "0" and signals start of batch
    s_send (sink, "0");

    // Initialize random number generator
    srandom ((unsigned) time (NULL));

    // Send 100 tasks
    int task_nbr;
    int total_msec = 0;     // Total expected cost in msecs
    for (task_nbr = 0; task_nbr < 100; task_nbr++) {
        int workload;
        // Random workload from 1 to 100 msecs
        workload = randof (100) + 1;
        total_msec += workload;
        char string [10];
        sprintf (string, "%d", workload);
        s_send (sender, string);
    }
    printf ("Total expected cost: %d msec\n", total_msec);

    zmq_close (sink);
    zmq_close (sender);
    zmq_ctx_destroy (context);
    return 0;
}
Task worker:
// Task worker
// Connects PULL socket to tcp://localhost:5557
// Collects workloads from ventilator via that socket
// Connects PUSH socket to tcp://localhost:5558
// Sends results to sink via that socket
#include "zhelpers.h"
int main (void)
{
    // Socket to receive messages on
    void *context = zmq_ctx_new ();
    void *receiver = zmq_socket (context, ZMQ_PULL);
    zmq_connect (receiver, "tcp://localhost:5557");

    // Socket to send messages to
    void *sender = zmq_socket (context, ZMQ_PUSH);
    zmq_connect (sender, "tcp://localhost:5558");

    // Process tasks forever
    while (1) {
        char *string = s_recv (receiver);
        printf ("%s.", string);     // Show progress
        fflush (stdout);
        s_sleep (atoi (string));    // Do the work
        free (string);
        s_send (sender, "");        // Send results to sink
    }
    zmq_close (receiver);
    zmq_close (sender);
    zmq_ctx_destroy (context);
    return 0;
}
Task sink (result collector):
// Task sink
// Binds PULL socket to tcp://localhost:5558
// Collects results from workers via that socket
#include "zhelpers.h"
int main (void)
{
    // Prepare our context and socket
    void *context = zmq_ctx_new ();
    void *receiver = zmq_socket (context, ZMQ_PULL);
    zmq_bind (receiver, "tcp://*:5558");

    // Wait for start of batch
    char *string = s_recv (receiver);
    free (string);

    // Start our clock now
    int64_t start_time = s_clock ();

    // Process 100 confirmations
    int task_nbr;
    for (task_nbr = 0; task_nbr < 100; task_nbr++) {
        char *string = s_recv (receiver);
        free (string);
        if ((task_nbr / 10) * 10 == task_nbr)
            printf (":");
        else
            printf (".");
        fflush (stdout);
    }
    // Calculate and report duration of batch
    printf ("Total elapsed time: %d msec\n",
            (int) (s_clock () - start_time));
    zmq_close (receiver);
    zmq_ctx_destroy (context);
    return 0;
}
Analysis:
a) The ventilator is a PUSH side; it hands tasks to the workers in round-robin fashion
b) Each worker is also a PUSH side, pushing each task's result to the sink
c) The sink collects the results of every task, so if any task failed we can follow up, for example by redistributing it
This example also shows that the PUSH side is not necessarily the server.
Applicable scenario: suppose we generate many mutually independent tasks, periodically or on some condition, and need something like a process pool to run them. We let the task publisher dispatch the work and start as many workers as needed; the workers can even live on different machines (a cluster). If results must be collected, we also run a result-collection service for follow-up processing.
3.4 Frontend/backend separation model
Figure 16 - Extended Request-Reply
The broker that accepts connections and dispatches messages:
// Simple request-reply broker
#include "zhelpers.h"
int main (void)
{
    // Prepare our context and sockets
    void *context = zmq_ctx_new ();
    void *frontend = zmq_socket (context, ZMQ_ROUTER);
    void *backend = zmq_socket (context, ZMQ_DEALER);
    zmq_bind (frontend, "tcp://*:5559");
    zmq_bind (backend, "tcp://*:5560");

    // Initialize poll set
    zmq_pollitem_t items [] = {
        { frontend, 0, ZMQ_POLLIN, 0 },
        { backend, 0, ZMQ_POLLIN, 0 }
    };
    // Switch messages between sockets
    while (1) {
        zmq_msg_t message;
        zmq_poll (items, 2, -1);
        if (items [0].revents & ZMQ_POLLIN) {
            while (1) {
                // Process all parts of the message
                zmq_msg_init (&message);
                zmq_msg_recv (&message, frontend, 0);
                int more = zmq_msg_more (&message);
                zmq_msg_send (&message, backend, more? ZMQ_SNDMORE: 0);
                zmq_msg_close (&message);
                if (!more)
                    break;      // Last message part
            }
        }
        if (items [1].revents & ZMQ_POLLIN) {
            while (1) {
                // Process all parts of the message
                zmq_msg_init (&message);
                zmq_msg_recv (&message, backend, 0);
                int more = zmq_msg_more (&message);
                zmq_msg_send (&message, frontend, more? ZMQ_SNDMORE: 0);
                zmq_msg_close (&message);
                if (!more)
                    break;      // Last message part
            }
        }
    }
    // We never get here, but clean up anyhow
    zmq_close (frontend);
    zmq_close (backend);
    zmq_ctx_destroy (context);
    return 0;
}
The real server, a worker:
// Hello World worker
// Connects REP socket to tcp://localhost:5560
// Expects "Hello" from client, replies with "World"
#include "zhelpers.h"
#include <unistd.h>
int main (void)
{
    void *context = zmq_ctx_new ();

    // Socket to talk to clients
    void *responder = zmq_socket (context, ZMQ_REP);
    zmq_connect (responder, "tcp://localhost:5560");

    while (1) {
        // Wait for next request from client
        char *string = s_recv (responder);
        printf ("Received request: [%s]\n", string);
        free (string);
        // Do some 'work'
        sleep (1);
        // Send reply back to client
        s_send (responder, "World");
    }
    // We never get here, but clean up anyhow
    zmq_close (responder);
    zmq_ctx_destroy (context);
    return 0;
}
The client:
// Hello World client
// Connects REQ socket to tcp://localhost:5559
// Sends "Hello" to server, expects "World" back
#include "zhelpers.h"
int main (void)
{
    void *context = zmq_ctx_new ();

    // Socket to talk to server
    void *requester = zmq_socket (context, ZMQ_REQ);
    zmq_connect (requester, "tcp://localhost:5559");

    int request_nbr;
    for (request_nbr = 0; request_nbr != 10; request_nbr++) {
        s_send (requester, "Hello");
        char *string = s_recv (requester);
        printf ("Received reply %d [%s]\n", request_nbr, string);
        free (string);
    }
    zmq_close (requester);
    zmq_ctx_destroy (context);
    return 0;
}
Notes:
a) The broker is the key piece: it accepts client connections and matches clients with the real servers
b) The real server is a REP responder; it connects to the broker's backend, and the broker dispatches requests to the workers round-robin
c) The client is a REQ socket; it connects to the broker's frontend
d) The broker bridges clients and servers through zmq's poll loop
This model scales well: when the backend is under load, start more workers; they can be spread across different machines and brought up on demand.
4. Other topics
The above is only what I distilled from the official zmq documentation and example code; zmq contains much more. For example, there are further socket types such as ZMQ_PAIR, ZMQ_RADIO, and ZMQ_DISH, and zmq_bind accepts further transports such as ipc, inproc, pgm, and epgm, which give multi-process development an alternative toolkit. Developers should study and apply them as their needs dictate.