memcached's multithreading model is simple: one listening main thread plus N worker threads (event workers). Each thread is effectively its own libevent instance, with its own event_base.
a. The main thread listens, accepts connections, and dispatches CQ_ITEMs to the workers.
b. The worker threads handle read/write events on the connections registered with them.
[Diagram of the main-thread/worker-thread model, found online; original source unknown.]
1. Worker thread initialization:
bool MyService::thread_init(int thread_num, struct event_base *main_base)
{
    threads = (LIBEVENT_THREAD *)malloc(sizeof(LIBEVENT_THREAD) * thread_num);
    if (!threads)
    {
        return false;
    }
    memset(threads, 0, sizeof(LIBEVENT_THREAD) * thread_num);
    dispatcher_thread.base = main_base;        // main thread's event_base
    dispatcher_thread.thread_id = pthread_self();
    for (int i = 0; i < thread_num; i++)
    {
        int fds[2];
        if (pipe(fds))
        {
            perror("pipe error");
            exit(1);
        }
        threads[i].notify_receive_fd = fds[0]; // worker reads notifications here
        threads[i].notify_send_fd = fds[1];    // main thread writes here
        setup_thread(&threads[i]);
    }
    for (int i = 0; i < thread_num; i++)
    {
        pthread_t th;
        int ret = pthread_create(&th, NULL, ThreadWorkers, &threads[i]);
        assert(ret == 0);
    }
    return true;
}
a. A pipe is created per worker so the main thread can notify it after accepting a new connection; the worker registers a read event on the pipe.
b. The worker threads are created; each starts its own event loop via event_base_dispatch(me->base).
The pipe event is registered as follows:
void MyService::setup_thread(LIBEVENT_THREAD *me)
{
    if (!me->base)
    {
        me->base = event_base_new();
    }
    // Listen for notifications from other threads
    event_set(&me->notify_event, me->notify_receive_fd, EV_READ | EV_PERSIST,
              thread_libevent_process, me);
    event_base_set(me->base, &me->notify_event);
    if (event_add(&me->notify_event, 0) == -1)
    {
        printf("Can't monitor libevent notify pipe\n");
        exit(1);
    }
    me->new_conn_queue = new CConnQueueLock(); // this worker's connection queue
}
2. The main thread dispatches a new connection:
void MyService::dispatch_conn_new(evutil_socket_t sfd)
{
    CQ_ITEM *item = free_item_list.cqi_new();
    if (item == NULL)
    {
        return;
    }
    // pick the next worker round-robin
    int tid = (last_thread + 1) % THREAD_NUM;
    LIBEVENT_THREAD *thread = threads + tid;
    last_thread = tid;
    item->sfd = sfd;
    thread->new_conn_queue->cq_push(item);
    // wake the worker: one 'c' ("new connection") byte down its pipe
    char buf[1] = {'c'};
    if (write(thread->notify_send_fd, buf, 1) != 1)
    {
        printf("write pipe error\n");
    }
}
a. Take a CQ_ITEM from free_item_list and push it onto the chosen worker's connection queue.
b. Write a single 'c' character into that worker's pipe to signal that a new connection has arrived.
3. The worker's pipe-read event fires and invokes thread_libevent_process:
void MyService::thread_libevent_process(int fd, short which, void *arg)
{
    LIBEVENT_THREAD *me = (LIBEVENT_THREAD *)arg;
    CQ_ITEM *item = NULL;
    char buf[1];
    if (read(fd, buf, 1) != 1)
    {
        printf("read from pipe error\n");
    }
    if (buf[0] == 'c')
        item = me->new_conn_queue->cq_pop();
    if (NULL != item)
    {
        // register the new connection with this worker's event_base
        bufferevent *bev = bufferevent_socket_new(me->base, item->sfd,
                                                  BEV_OPT_CLOSE_ON_FREE);
        bufferevent_setcb(bev, socket_read_cb, NULL, socket_event_cb, NULL);
        int ret = bufferevent_enable(bev, EV_READ); // bufferevents are persistent;
                                                    // EV_PERSIST is meaningless here
        assert(ret != -1);
        // (memcached proper also returns item to the free list at this point)
    }
}
a. The worker pops a CQ_ITEM off its connection queue.
b. It registers the new socket on its own event_base.
4. free_item_list creates CQ_ITEMs:
CQ_ITEM *CFreeListLock::cqi_new()
{
#define ITEMS_PER_ALLOC 64
    CQ_ITEM *item = NULL;
    _lock.Enter();   // the pop must be locked too, or it races with the push below
    if (cqi_freelist)
    {
        item = cqi_freelist;
        cqi_freelist = item->next;
    }
    _lock.Leave();
    if (NULL == item)
    {
        item = (CQ_ITEM *)malloc(sizeof(CQ_ITEM) * ITEMS_PER_ALLOC);
        if (item == NULL)
            return NULL;
        // chain item[1]..item[63] into a singly linked list; cqi_freelist is the head
        for (int i = 2; i < ITEMS_PER_ALLOC; i++)
        {
            item[i - 1].next = &item[i];
        }
        _lock.Enter();
        item[ITEMS_PER_ALLOC - 1].next = cqi_freelist;
        cqi_freelist = &item[1];
        _lock.Leave();
    }
    return item;     // item[0] goes to the caller
}
As you can see, when the free list runs dry, ITEMS_PER_ALLOC (64) CQ_ITEMs are allocated in a single malloc rather than one allocation per item.
The memcached version referenced is 1.4.24; the code above was extracted from it and then modified and simplified.