High-Concurrency System Design

Author: 周顺利

Note: most of the ideas and code in this article were collected from the web and from open-source projects. Even so, sorting and organizing them into this article took considerable effort, so out of respect for that work please credit the author and the source when reposting.

I. Introduction

I have recently been between jobs with time on my hands, so I used it to study the design of high-concurrency Internet systems, drawing on material found online and on open-source code. The discussion below covers two levels: the internal design of a single server, and the design of the overall system. The perspective is mainly that of large Internet sites; I have not studied high-performance-computing systems.

II. Server-Internal Design

Server design involves blocking versus non-blocking sockets, synchronous versus asynchronous operating-system I/O, event demultiplexers, thread pools, and so on. (A side story: I was asked about this twice in interviews. The first time I was asked what network models I knew; I answered "the ISO model and the TCP/IP model" and was promptly dismissed. It turned out the interviewer meant things like Linux epoll, which I only learned after looking it up afterwards. The second time I was asked about threading models: did I know half-sync/half-async, or leader/followers? I had never heard the terms. After another round of cramming I found they come up constantly around the ACE framework, where Reactor is associated with the half-sync/half-async style and Proactor with the leader/followers style. If these names are unfamiliar, don't worry; they are unpacked step by step below.) The goal here is to give a simple design for each building block; with further combination and polishing on your part, they can be assembled into a basic high-concurrency server.

1. Java high-concurrency servers

Designing a high-concurrency server in Java is comparatively simple: use ServerSocket directly, or Channel plus a Selector. The former is synchronous I/O; the latter is simulated asynchronous I/O. Why "simulated"? An analysis of Java's Selector that circulated online found that on Windows the wakeup mechanism is implemented with a loopback connection from 127.0.0.1 to 127.0.0.1, and on Linux with a pipe. Given the demands of high-concurrency systems, the limits of this simulated asynchronous I/O (operating systems cap the number of files open at once), and its efficiency, Java server design is not analyzed further here; the same design ideas as in the C analysis below can be applied.

2. C high-concurrency server design

1) Basic concepts

• Blocking and non-blocking sockets

With a blocking socket, a call does not allow the program to proceed until the requested task has completed (on Windows it can also block delivery of messages to the calling thread). With a non-blocking socket, once an operation is started the call returns immediately: if the result is already available it is returned; otherwise an error is returned indicating that the result is still pending, and the function does not wait for the task to finish. An interesting question is whether the socket returned by accept is blocking or non-blocking. MSDN says: "The accept function extracts the first connection on the queue of pending connections on socket s. It then creates and returns a handle to the new socket. The newly created socket is the socket that will handle the actual connection; it has the same properties as socket s, including the asynchronous events registered with the WSAAsyncSelect or WSAEventSelect functions."
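As a small illustration (a minimal sketch assuming a POSIX system; the helper names are mine, not from any quoted source), this is the usual way to put a socket into non-blocking mode and to recognize the "result pending" error. Note that on Linux, unlike in the Windows behavior quoted above, an accepted socket does not inherit O_NONBLOCK from the listening socket, so it must be set again after accept():

#include <fcntl.h>
#include <errno.h>
#include <sys/socket.h>

/* put an existing socket (or any fd) into non-blocking mode */
int set_nonblock( int fd )
{
    int flags = fcntl( fd, F_GETFL, 0 );
    if( flags == -1 ) return -1;
    return fcntl( fd, F_SETFL, flags | O_NONBLOCK );
}

/* recv() on a non-blocking socket returns -1 with errno set to
   EAGAIN/EWOULDBLOCK when no data is ready -- that is not a real
   error, it is the "call back later" signal described above */
int would_block( int ret )
{
    return ret == -1 && ( errno == EAGAIN || errno == EWOULDBLOCK );
}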

• Synchronous and asynchronous I/O

There are two kinds of file I/O: synchronous and asynchronous. Asynchronous file I/O is also called overlapped I/O.
      With synchronous file I/O, the thread starts an I/O operation and immediately goes to sleep, waking up to continue only after the operation has completed. With asynchronous file I/O, the thread submits an I/O request to the kernel and goes on with other work; when the kernel finishes the request, it notifies the thread that the I/O has completed.
      If an I/O request takes a long time to execute, asynchronous I/O can raise efficiency significantly, because while the request is in flight the CPU can schedule other threads; if no other thread needs to run, that time is simply wasted (the system may schedule its zero-page thread). If I/O requests complete quickly, asynchronous I/O can actually be less efficient than synchronous I/O.
      Synchronous I/O allows only one operation at a time on a given file handle; operations on the same handle are serialized, so even two threads cannot issue simultaneous reads and writes on one handle. Overlapped I/O lets one or more threads issue multiple I/O requests at once. When a request completes, asynchronous I/O notifies the application by setting the file handle to the signaled state; alternatively, the application can check completion with GetOverlappedResult, or be notified through an event object. High-concurrency systems usually adopt asynchronous I/O to improve performance.
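To make the overlapped cycle concrete, here is a minimal Windows sketch (an illustration only; it assumes a file "test.dat" exists, and error handling is pared down) showing submit, keep working, then collect the result with GetOverlappedResult:

#include <windows.h>
#include <stdio.h>
#include <string.h>

int main( void )
{
    char buf[4096];
    OVERLAPPED ov;
    DWORD got;

    HANDLE h = CreateFileA( "test.dat", GENERIC_READ, 0, NULL,
                            OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL );
    if( h == INVALID_HANDLE_VALUE ) return 1;

    memset( &ov, 0, sizeof(ov) );              /* read from offset 0 */
    if( !ReadFile( h, buf, sizeof(buf), NULL, &ov ) ) {
        if( GetLastError() != ERROR_IO_PENDING ) return 1;
        /* ... the thread is free to do other work here ... */
    }
    /* last argument TRUE: block until the request completes */
    if( GetOverlappedResult( h, &ov, &got, TRUE ) )
        printf( "read %lu bytes\n", (unsigned long)got );

    CloseHandle( h );
    return 0;
}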

• Event demultiplexer

The notion of an event demultiplexer belongs to asynchronous I/O. With synchronous I/O, the caller performs an operation and waits for its result, so no demultiplexer is needed. With asynchronous I/O, a request is submitted and the result arrives later as an event notification; this is what creates the need for an event demultiplexer. Its main job is to manage and separate the events occurring on different file descriptors, then signal the corresponding event and dispatch the corresponding action. Below is lighttpd's event demultiplexer definition:

/**
 * fd-event handler for select(), poll() and rt-signals on Linux 2.4
 *
 */
typedef struct fdevents {
         fdevent_handler_t type;
 
         fdnode **fdarray;
         size_t maxfds;
 
#ifdef USE_LINUX_SIGIO
         int in_sigio;
         int signum;
         sigset_t sigset;
         siginfo_t siginfo;
         bitset *sigbset;
#endif
#ifdef USE_LINUX_EPOLL
         int epoll_fd;
         struct epoll_event *epoll_events;
#endif
#ifdef USE_POLL
         struct pollfd *pollfds;
 
         size_t size;
         size_t used;
 
         buffer_int unused;
#endif
#ifdef USE_SELECT
         fd_set select_read;
         fd_set select_write;
         fd_set select_error;
 
         fd_set select_set_read;
         fd_set select_set_write;
         fd_set select_set_error;
 
         int select_max_fd;
#endif
#ifdef USE_SOLARIS_DEVPOLL
         int devpoll_fd;
         struct pollfd *devpollfds;
#endif
#ifdef USE_FREEBSD_KQUEUE
         int kq_fd;
         struct kevent *kq_results;
         bitset *kq_bevents;
#endif
#ifdef USE_SOLARIS_PORT
         int port_fd;
#endif
         int (*reset)(struct fdevents *ev);
         void (*free)(struct fdevents *ev);
 
         int (*event_add)(struct fdevents *ev, int fde_ndx, int fd, int events);
         int (*event_del)(struct fdevents *ev, int fde_ndx, int fd);
         int (*event_get_revent)(struct fdevents *ev, size_t ndx);
         int (*event_get_fd)(struct fdevents *ev, size_t ndx);
 
         int (*event_next_fdndx)(struct fdevents *ev, int ndx);
 
         int (*poll)(struct fdevents *ev, int timeout_ms);
 
         int (*fcntl_set)(struct fdevents *ev, int fd);
} fdevents;
 
fdevents *fdevent_init(size_t maxfds, fdevent_handler_t type);
int fdevent_reset(fdevents *ev);
void fdevent_free(fdevents *ev);
 
int fdevent_event_add(fdevents *ev, int *fde_ndx, int fd, int events);
int fdevent_event_del(fdevents *ev, int *fde_ndx, int fd);
int fdevent_event_get_revent(fdevents *ev, size_t ndx);
int fdevent_event_get_fd(fdevents *ev, size_t ndx);
fdevent_handler fdevent_get_handler(fdevents *ev, int fd);
void *fdevent_get_context(fdevents *ev, int fd);
 
int fdevent_event_next_fdndx(fdevents *ev, int ndx);
 
int fdevent_poll(fdevents *ev, int timeout_ms);
 
int fdevent_register(fdevents *ev, int fd, fdevent_handler handler, void *ctx);
int fdevent_unregister(fdevents *ev, int fd);
 
int fdevent_fcntl_set(fdevents *ev, int fd);
 
int fdevent_select_init(fdevents *ev);
int fdevent_poll_init(fdevents *ev);
int fdevent_linux_rtsig_init(fdevents *ev);
int fdevent_linux_sysepoll_init(fdevents *ev);
int fdevent_solaris_devpoll_init(fdevents *ev);
int fdevent_freebsd_kqueue_init(fdevents *ev);

The event operations for each specific platform are implemented by the following files:

fdevent_freebsd_kqueue.c

fdevent_linux_rtsig.c

fdevent_linux_sysepoll.c

fdevent_poll.c

fdevent_select.c

fdevent_solaris_devpoll.c

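The choice among these backends is made once, at startup: fdevent_init() calls the *_init() function for the configured backend, and that function fills in the function pointers (poll, event_add, and so on) in the fdevents struct above. A simplified sketch of that dispatch, written against the declarations above (the enum constant names are my assumption for illustration, not verified against lighttpd's source):

#include <stdlib.h>

/* simplified illustration of the dispatch idea -- not lighttpd's actual code */
fdevents *fdevent_init( size_t maxfds, fdevent_handler_t type )
{
    fdevents *ev = calloc( 1, sizeof( *ev ) );
    ev->fdarray = calloc( maxfds, sizeof( *ev->fdarray ) );
    ev->maxfds = maxfds;

    switch( type ) {                         /* pick the backend compiled into this build */
#ifdef USE_LINUX_EPOLL
    case FDEVENT_HANDLER_LINUX_SYSEPOLL:     /* assumed enum name */
        fdevent_linux_sysepoll_init( ev );   /* installs the epoll-based pointers */
        break;
#endif
#ifdef USE_SELECT
    case FDEVENT_HANDLER_SELECT:             /* assumed enum name */
        fdevent_select_init( ev );           /* installs the select-based pointers */
        break;
#endif
    default:
        free( ev->fdarray );
        free( ev );
        return NULL;
    }
    return ev;
}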

• Thread pool

A thread pool is conceptually simple: it manages the lending and returning, creation and destruction, of threads. Ideally, an event can trigger an idle thread to start working. Below is a simple thread-pool model that runs on both Linux and Windows; it does not implement event-driven triggering:

spthread.h
#ifndef __spthread_hpp__
#define __spthread_hpp__
 
#ifndef WIN32
 
/// pthread
 
#include <pthread.h>
#include <unistd.h>
 
typedef void *sp_thread_result_t;
typedef pthread_mutex_t sp_thread_mutex_t;
typedef pthread_cond_t  sp_thread_cond_t;
typedef pthread_t       sp_thread_t;
typedef pthread_attr_t  sp_thread_attr_t;
 
#define sp_thread_mutex_init(m,a)  pthread_mutex_init(m,a)
#define sp_thread_mutex_destroy(m) pthread_mutex_destroy(m)
#define sp_thread_mutex_lock(m)    pthread_mutex_lock(m)
#define sp_thread_mutex_unlock(m)  pthread_mutex_unlock(m)
 
#define sp_thread_cond_init(c,a)   pthread_cond_init(c,a)
#define sp_thread_cond_destroy(c)  pthread_cond_destroy(c)
#define sp_thread_cond_wait(c,m)   pthread_cond_wait(c,m)
#define sp_thread_cond_signal(c)   pthread_cond_signal(c)
 
#define sp_thread_attr_init(a)        pthread_attr_init(a)
#define sp_thread_attr_setdetachstate pthread_attr_setdetachstate
#define SP_THREAD_CREATE_DETACHED     PTHREAD_CREATE_DETACHED
 
#define sp_thread_self    pthread_self
#define sp_thread_create  pthread_create
 
#define SP_THREAD_CALL
typedef sp_thread_result_t ( *sp_thread_func_t )( void * args );
 
#define sp_sleep(x) sleep(x)
 
#else
 
// win32 thread
 
#include <winsock2.h>
#include <process.h>
 
typedef unsigned sp_thread_t;
 
typedef unsigned sp_thread_result_t;
#define SP_THREAD_CALL __stdcall
typedef sp_thread_result_t (__stdcall * sp_thread_func_t )( void * args );
 
typedef HANDLE  sp_thread_mutex_t;
typedef HANDLE  sp_thread_cond_t;
typedef DWORD   sp_thread_attr_t;
 
#define SP_THREAD_CREATE_DETACHED 1
#define sp_sleep(x) Sleep(1000*x)
 
int sp_thread_mutex_init(sp_thread_mutex_t * mutex, void * attr )
{
         *mutex = CreateMutex( NULL, FALSE, NULL );
         return NULL == * mutex ? GetLastError() : 0;
}
 
int sp_thread_mutex_destroy(sp_thread_mutex_t * mutex )
{
         int ret = CloseHandle( *mutex );
 
         return 0 == ret ? GetLastError() : 0;
}
 
int sp_thread_mutex_lock(sp_thread_mutex_t * mutex )
{
         int ret = WaitForSingleObject( *mutex, INFINITE );
         return WAIT_OBJECT_0 == ret ? 0 : GetLastError();
}
 
int sp_thread_mutex_unlock(sp_thread_mutex_t * mutex )
{
         int ret = ReleaseMutex( *mutex );
         return 0 != ret ? 0 : GetLastError();
}
 
int sp_thread_cond_init(sp_thread_cond_t * cond, void * attr )
{
         *cond = CreateEvent( NULL, FALSE, FALSE, NULL );
         return NULL == *cond ? GetLastError() : 0;
}
 
int sp_thread_cond_destroy(sp_thread_cond_t * cond )
{
         int ret = CloseHandle( *cond );
         return 0 == ret ? GetLastError() : 0;
}
 
/*
Caller MUST be holding the mutex lock; the
lock is released and the caller is blocked waiting
on 'cond'. When 'cond' is signaled, the mutex
is re-acquired before returning to the caller.
*/
int sp_thread_cond_wait(sp_thread_cond_t * cond, sp_thread_mutex_t * mutex )
{
         int ret = 0;
 
         sp_thread_mutex_unlock( mutex );
 
         ret = WaitForSingleObject( *cond, INFINITE );
 
         sp_thread_mutex_lock( mutex );
 
         return WAIT_OBJECT_0 == ret ? 0 : GetLastError();
}
 
int sp_thread_cond_signal(sp_thread_cond_t * cond )
{
         int ret = SetEvent( *cond );
         return 0 == ret ? GetLastError() : 0;
}
 
sp_thread_t sp_thread_self()
{
         return GetCurrentThreadId();
}
 
int sp_thread_attr_init(sp_thread_attr_t * attr )
{
         *attr = 0;
         return 0;
}
 
intsp_thread_attr_setdetachstate( sp_thread_attr_t * attr, int detachstate )
{
         *attr |= detachstate;
         return 0;
}
 
int sp_thread_create(sp_thread_t * thread, sp_thread_attr_t * attr,
                   sp_thread_func_t myfunc, void * args )
{
         // _beginthreadex returns 0 on an error
         HANDLE h = (HANDLE)_beginthreadex( NULL, 0, myfunc, args, 0, thread );
         return h > 0 ? 0 : GetLastError();
}
 
#endif
 
#endif
threadpool.h
/**
 * threadpool.h
 *
 * This file declares the functionality associated with
 * your implementation of a threadpool.
 */
 
#ifndef __threadpool_h__
#define __threadpool_h__
 
#ifdef __cplusplus
extern "C" {
#endif
 
// maximum number of threads allowed in a pool
#define MAXT_IN_POOL 200
 
// You must hide the internal details of the threadpool
// structure from callers, thus declare threadpool of type "void".
// In threadpool.c, you will use type conversion to coerce
// variables of type "threadpool" back and forth to a
// richer, internal type.  (See threadpool.c for details.)
 
typedef void *threadpool;
 
// "dispatch_fn"declares a typed function pointer.  A
// variable of type"dispatch_fn" points to a function
// with the followingsignature:
//
//     void dispatch_function(void *arg);
 
typedef void (*dispatch_fn)(void *);
 
/**
 * create_threadpool creates a fixed-sized thread
 * pool. If the function succeeds, it returns a (non-NULL)
 * "threadpool", else it returns NULL.
 */
threadpool create_threadpool(int num_threads_in_pool);
 
 
/**
 * dispatch sends a thread off to do some work.  If
 * all threads in the pool are busy, dispatch will
 * block until a thread becomes free and is dispatched.
 *
 * Once a thread is dispatched, this function returns
 * immediately.
 *
 * The dispatched thread calls into the function
 * "dispatch_to_here" with argument "arg".
 */
int dispatch_threadpool(threadpool from_me, dispatch_fn dispatch_to_here,
               void *arg);
 
/**
 * destroy_threadpool kills the threadpool, causing
 * all threads in it to commit suicide, and then
 * frees all the memory associated with the threadpool.
 */
void destroy_threadpool(threadpool destroyme);
 
#ifdef __cplusplus
}
#endif
 
#endif
threadpool.c
/**
 * threadpool.c
 *
 * This file will contain your implementation of a threadpool.
 */
 
#include <stdio.h>
#include <stdlib.h>
//#include <unistd.h>
//#include <sp_thread.h>
#include <string.h>
 
#include"threadpool.h"
#include "spthread.h"
 
typedef struct _thread_st {
         sp_thread_t id;
         sp_thread_mutex_t mutex;
         sp_thread_cond_t cond;
         dispatch_fn fn;
         void *arg;
         threadpool parent;
} _thread;
 
// _threadpool is the internal threadpool structure that is
// cast to type "threadpool" before it is given out to callers
typedef struct _threadpool_st {
         // you should fill in this structure with whatever you need
         sp_thread_mutex_t tp_mutex;
         sp_thread_cond_t tp_idle;
         sp_thread_cond_t tp_full;
         sp_thread_cond_t tp_empty;
         _thread ** tp_list;
         int tp_index;
         int tp_max_index;
         int tp_stop;
 
         int tp_total;
} _threadpool;
 
threadpool create_threadpool(int num_threads_in_pool)
{
         _threadpool *pool;
 
         // sanity check the argument
          if ((num_threads_in_pool <= 0) || (num_threads_in_pool > MAXT_IN_POOL))
                   return NULL;
 
         pool = (_threadpool *) malloc(sizeof(_threadpool));
         if (pool == NULL) {
                   fprintf(stderr, "Out of memory creating a newthreadpool!\n");
                   return NULL;
         }
 
          // add your code here to initialize the newly created threadpool
         sp_thread_mutex_init( &pool->tp_mutex, NULL );
         sp_thread_cond_init( &pool->tp_idle, NULL );
         sp_thread_cond_init( &pool->tp_full, NULL );
         sp_thread_cond_init( &pool->tp_empty, NULL );
         pool->tp_max_index = num_threads_in_pool;
         pool->tp_index = 0;
         pool->tp_stop = 0;
         pool->tp_total = 0;
          pool->tp_list = ( _thread ** )malloc( sizeof( void * ) * MAXT_IN_POOL );
          memset( pool->tp_list, 0, sizeof( void * ) * MAXT_IN_POOL );
 
         return (threadpool) pool;
}
 
int save_thread( _threadpool *pool, _thread * thread )
{
         int ret = -1;
 
         sp_thread_mutex_lock( &pool->tp_mutex );
 
         if( pool->tp_index < pool->tp_max_index ) {
                   pool->tp_list[ pool->tp_index ] = thread;
                   pool->tp_index++;
                   ret = 0;
 
                   sp_thread_cond_signal( &pool->tp_idle );
 
                   if( pool->tp_index >= pool->tp_total ) {
                            sp_thread_cond_signal(&pool->tp_full );
                   }
         }
 
         sp_thread_mutex_unlock( &pool->tp_mutex );
 
         return ret;
}
 
sp_thread_result_t SP_THREAD_CALL wrapper_fn( void * arg )
{
         _thread * thread = (_thread*)arg;
         _threadpool * pool = (_threadpool*)thread->parent;
 
         for( ; 0 == ((_threadpool*)thread->parent)->tp_stop; ){
                   thread->fn( thread->arg );
 
                    if( 0 != ((_threadpool*)thread->parent)->tp_stop ) break;
 
                   sp_thread_mutex_lock( &thread->mutex );
                   if( 0 == save_thread( thread->parent, thread )) {
                            sp_thread_cond_wait(&thread->cond, &thread->mutex );
                            sp_thread_mutex_unlock(&thread->mutex );
                   } else {
                            sp_thread_mutex_unlock(&thread->mutex );
                            sp_thread_cond_destroy(&thread->cond );
                            sp_thread_mutex_destroy(&thread->mutex );
 
                            free( thread );
                            break;
                   }
         }
 
         sp_thread_mutex_lock( &pool->tp_mutex );
         pool->tp_total--;
         if( pool->tp_total <= 0 ) sp_thread_cond_signal(&pool->tp_empty );
         sp_thread_mutex_unlock( &pool->tp_mutex );
 
         return 0;
}
 
int dispatch_threadpool(threadpool from_me, dispatch_fn dispatch_to_here, void *arg)
{
         int ret = 0;
 
         _threadpool *pool = (_threadpool *) from_me;
         sp_thread_attr_t attr;
         _thread * thread = NULL;
 
         // add your code here to dispatch a thread
         sp_thread_mutex_lock( &pool->tp_mutex );
 
          if( pool->tp_index <= 0 && pool->tp_total >= pool->tp_max_index ) {
                    sp_thread_cond_wait( &pool->tp_idle, &pool->tp_mutex );
         }
 
         if( pool->tp_index <= 0 ) {
                   _thread * thread = ( _thread * )malloc( sizeof(_thread ) );
                   memset( &( thread->id ), 0, sizeof(thread->id ) );
                   sp_thread_mutex_init( &thread->mutex, NULL);
                   sp_thread_cond_init( &thread->cond, NULL );
                   thread->fn = dispatch_to_here;
                   thread->arg = arg;
                   thread->parent = pool;
 
                   sp_thread_attr_init( &attr );
                   sp_thread_attr_setdetachstate( &attr,SP_THREAD_CREATE_DETACHED );
 
                   if( 0 == sp_thread_create( &thread->id,&attr, wrapper_fn, thread ) ) {
                            pool->tp_total++;
                            printf( "create thread#%ld\n",thread->id );
                   } else {
                            ret = -1;
                            printf( "cannot createthread\n" );
                            sp_thread_mutex_destroy(&thread->mutex );
                            sp_thread_cond_destroy(&thread->cond );
                            free( thread );
                   }
         } else {
                   pool->tp_index--;
                   thread = pool->tp_list[ pool->tp_index ];
                   pool->tp_list[ pool->tp_index ] = NULL;
 
                   thread->fn = dispatch_to_here;
                   thread->arg = arg;
                   thread->parent = pool;
 
                   sp_thread_mutex_lock( &thread->mutex );
                   sp_thread_cond_signal( &thread->cond ) ;
                   sp_thread_mutex_unlock ( &thread->mutex );
         }
 
         sp_thread_mutex_unlock( &pool->tp_mutex );
 
         return ret;
}
 
void destroy_threadpool(threadpool destroyme)
{
         _threadpool *pool = (_threadpool *) destroyme;
 
         // add your code here to kill a threadpool
         int i = 0;
 
         sp_thread_mutex_lock( &pool->tp_mutex );
 
         if( pool->tp_index < pool->tp_total ) {
                   printf( "waiting for %d thread(s) tofinish\n", pool->tp_total - pool->tp_index );
                   sp_thread_cond_wait( &pool->tp_full,&pool->tp_mutex );
         }
 
         pool->tp_stop = 1;
 
         for( i = 0; i < pool->tp_index; i++ ) {
                   _thread * thread = pool->tp_list[ i ];
 
                   sp_thread_mutex_lock( &thread->mutex );
                   sp_thread_cond_signal( &thread->cond ) ;
                   sp_thread_mutex_unlock ( &thread->mutex );
         }
 
         if( pool->tp_total > 0 ) {
                   printf( "waiting for %d thread(s) toexit\n", pool->tp_total );
                   sp_thread_cond_wait( &pool->tp_empty,&pool->tp_mutex );
         }
 
         for( i = 0; i < pool->tp_index; i++ ) {
                   free( pool->tp_list[ i ] );
                   pool->tp_list[ i ] = NULL;
         }
 
         sp_thread_mutex_unlock( &pool->tp_mutex );
 
         pool->tp_index = 0;
 
         sp_thread_mutex_destroy( &pool->tp_mutex );
         sp_thread_cond_destroy( &pool->tp_idle );
         sp_thread_cond_destroy( &pool->tp_full );
         sp_thread_cond_destroy( &pool->tp_empty );
 
         free( pool->tp_list );
         free( pool );
}
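Using the pool is then straightforward. A minimal caller, assuming the two files above are compiled together (the job function and its output format are mine, for illustration):

#include <stdio.h>
#include "threadpool.h"
#include "spthread.h"

/* a job matching the dispatch_fn signature */
void print_job( void *arg )
{
    /* the cast of sp_thread_self() works on typical Linux pthreads */
    printf( "job %d running in thread %ld\n", *(int *)arg, (long)sp_thread_self() );
}

int main( void )
{
    int ids[4] = { 0, 1, 2, 3 };
    int i;

    threadpool pool = create_threadpool( 4 );
    if( pool == NULL ) return 1;

    for( i = 0; i < 4; i++ )
        dispatch_threadpool( pool, print_job, &ids[i] );

    sp_sleep( 1 );               /* give the jobs time to run */
    destroy_threadpool( pool );
    return 0;
}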

2) Common design patterns

Combining blocking or non-blocking sockets with synchronous or asynchronous I/O gives four cases:

 blocking + synchronous    |  blocking + asynchronous

___________________________|____________________________

non-blocking + synchronous |  non-blocking + asynchronous

Blocking + synchronous is the primitive mode, and the one most textbooks introduce, because sockets and I/O default to blocking and synchronous. The basic flow is:

listen_fd = socket( AF_INET,SOCK_STREAM,0 )
bind( listen_fd, (struct sockaddr*)&my_addr, sizeof(struct sockaddr_in))
listen( listen_fd,1 )
accept( listen_fd,  (struct sockaddr*)&remote_addr,&addr_len )
recv( accept_fd ,&in_buf ,1024 ,0 )
close(accept_fd)

Blocking + asynchronous improves on this somewhat: it simply adds select, or some other asynchronous I/O mechanism, on top of the blocking-synchronous flow above (strictly speaking, select is an I/O multiplexing technique; since Linux does not yet have a complete asynchronous I/O implementation, and Winsock's socket model is less transparent than its Linux counterpart, no strict distinction is drawn here for convenience). But because the sockets still block, the next connection cannot be accepted until the previous one has been processed, which is unacceptable for a high-concurrency server.

Non-blocking + synchronous sets the socket's NONBLOCK option, so connections can be accepted quickly; but since the processing still uses synchronous I/O, the server's throughput remains poor, as the sketch below illustrates.
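A minimal sketch (assuming listen_fd has already been created, bound, and set O_NONBLOCK as in the earlier fragments):

for( ;; ){
    int accept_fd = accept( listen_fd, NULL, NULL );
    if( accept_fd == -1 ){
        if( errno == EAGAIN || errno == EWOULDBLOCK ){
            /* no pending connection yet -- poll again (this burns CPU,
               which is part of why this mode performs poorly) */
            continue;
        }
        break;   /* real error */
    }
    /* the connection itself is still handled with blocking,
       synchronous recv()/send() calls */
    recv( accept_fd, in_buf, 1024, 0 );
    close( accept_fd );
}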

The three modes above are not treated in depth; the rest of this section focuses on the non-blocking + asynchronous mode.

In the non-blocking + asynchronous mode, asynchronous I/O may have several implementations on a single system, and different systems implement it differently, so several common I/O mechanisms and their server skeletons are introduced below.

• select

select polls the set of registered file descriptors. It is a comparatively old I/O multiplexing mechanism and relatively inefficient, but it is supported on both Windows and Linux.

The basic skeleton is:

socket( AF_INET,SOCK_STREAM,0 )
fcntl(listen_fd, F_SETFL,flags|O_NONBLOCK);
bind( listen_fd, (structsockaddr *)&my_addr,sizeof(struct sockaddr_in))
listen( listen_fd,1 )
FD_ZERO( &fd_sets );
FD_SET(listen_fd,&fd_sets);
for(k=0; k<=i; k++){
         FD_SET(accept_fds[k],&fd_sets);
}
events = select( max_fd + 1,&fd_sets, NULL, NULL, NULL );
if(FD_ISSET(listen_fd,&fd_sets) ){
accept_fd = accept( listen_fd, (structsockaddr *)&remote_addr,&addr_len );
}
for( j=0; j<=i; j++ ){
         if( FD_ISSET(accept_fds[j],&fd_sets) ){
                   recv( accept_fds[j] ,&in_buf ,1024 ,0 );
         }
}
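Two details the skeleton glosses over: select() modifies the fd_set in place, so the set must be rebuilt before every call; and select() cannot watch descriptors numbered FD_SETSIZE (commonly 1024) or higher, which is one reason it scales poorly. The properly rebuilt loop looks roughly like this:

for( ;; ){
    int k, max_fd = listen_fd;

    FD_ZERO( &fd_sets );                    /* rebuild the set on every iteration */
    FD_SET( listen_fd, &fd_sets );
    for( k = 0; k <= i; k++ ){
        FD_SET( accept_fds[k], &fd_sets );
        if( accept_fds[k] > max_fd ) max_fd = accept_fds[k];
    }
    events = select( max_fd + 1, &fd_sets, NULL, NULL, NULL );
    /* ... FD_ISSET() checks as in the skeleton above ... */
}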

• epoll

epoll is a high-performance I/O multiplexing mechanism available on Linux since kernel 2.6. The server skeleton is:

socket( AF_INET,SOCK_STREAM,0 )
fcntl(listen_fd, F_SETFL,flags|O_NONBLOCK);
bind( listen_fd, (structsockaddr *)&my_addr,sizeof(struct sockaddr_in))
listen( listen_fd,1 )
epoll_ctl(epfd,EPOLL_CTL_ADD,listen_fd,&ev);
ev_s = epoll_wait(epfd,events,20,500 );
for(i=0; i<ev_s;i++){
                   if(events[i].data.fd==listen_fd){
                            accept_fd = accept( listen_fd,(structsockaddr *)&remote_addr,&addr_len );
                            fcntl(accept_fd, F_SETFL,flags|O_NONBLOCK);
                            epoll_ctl(epfd,EPOLL_CTL_ADD,accept_fd,&ev);
                   }
                   else if(events[i].events&EPOLLIN){
                            recv( events[i].data.fd ,&in_buf,1024 ,0 );
                   }
}
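The skeleton leaves out the setup of epfd and ev; filled in, it is roughly (a sketch):

struct epoll_event ev, events[20];
int epfd = epoll_create( 256 );   /* the size argument is only a hint on modern kernels */

ev.data.fd = listen_fd;
ev.events = EPOLLIN;              /* level-triggered readability by default */
epoll_ctl( epfd, EPOLL_CTL_ADD, listen_fd, &ev );

If EPOLLIN|EPOLLET (edge-triggered mode) is used instead, the descriptor must be non-blocking and must be read until recv() returns EAGAIN, since notifications are then delivered only on state changes.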

• AIO

On Windows, Microsoft provides true asynchronous I/O; with AIO (I/O completion ports), a high-concurrency server is straightforward to build. The skeleton is:

WSAStartup( 0x0202, &wsaData )
CompletionPort = CreateIoCompletionPort( INVALID_HANDLE_VALUE, NULL, 0, 0 )
WSASocket( AF_INET, SOCK_STREAM, 0, NULL, 0, WSA_FLAG_OVERLAPPED )
bind( Listen, (PSOCKADDR)&InternetAddr, sizeof(InternetAddr) )
listen( Listen, 5 )
Accept = WSAAccept( Listen, NULL, NULL, NULL, 0 )
PerHandleData = (LPPER_HANDLE_DATA)GlobalAlloc( GPTR, sizeof(PER_HANDLE_DATA) )
CreateIoCompletionPort( (HANDLE)Accept, CompletionPort, (DWORD)PerHandleData, 0 )
PerIoData = (LPPER_IO_OPERATION_DATA)GlobalAlloc( GPTR, sizeof(PER_IO_OPERATION_DATA) )
WSARecv( Accept, &(PerIoData->DataBuf), 1, &RecvBytes, &Flags, &(PerIoData->Overlapped), NULL )
GetQueuedCompletionStatus( CompletionPort, &BytesTransferred,
         (LPDWORD)&PerHandleData, (LPOVERLAPPED *)&PerIoData, INFINITE )
if( PerIoData->BytesRECV > PerIoData->BytesSEND ){
WSASend( PerHandleData->Socket, &(PerIoData->DataBuf), 1, &SendBytes, 0,
             &(PerIoData->Overlapped), NULL )
}
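In a real IOCP server, GetQueuedCompletionStatus() is not called inline like this but from a small pool of worker threads (commonly one or two per CPU), each looping on the completion port. A sketch of one worker (the convention of posting a NULL overlapped pointer as a shutdown signal is a common idiom, assumed here):

DWORD WINAPI worker_thread( LPVOID arg )
{
    HANDLE port = (HANDLE)arg;
    DWORD bytes;
    ULONG_PTR key;
    LPOVERLAPPED ov;

    for( ;; ){
        /* blocks until some overlapped operation on the port completes */
        BOOL ok = GetQueuedCompletionStatus( port, &bytes, &key, &ov, INFINITE );
        if( ov == NULL ) break;   /* port closed, or a NULL packet posted as shutdown */
        if( !ok ){
            /* the operation identified by ov failed -- close that connection */
            continue;
        }
        /* key carries the per-handle data and ov the per-I/O data, exactly as
           registered with CreateIoCompletionPort() and WSARecv() above */
    }
    return 0;
}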

3) Adding a thread pool and an event demultiplexer

The approaches above only use non-blocking sockets and asynchronous I/O, which speeds up accepting and processing connections but still cannot serve two clients truly simultaneously; for that, multithreading must be introduced, and with it a choice of strategies. On Linux the classic pattern is a master process that accepts connections and forks a child process to handle each one; Windows usually uses a thread pool to avoid the cost of creating and destroying threads (a thread pool works on Linux too, of course). Once multiple processes or threads are in play, event handling can be optimized further: define a simple event dispatcher that puts all events into a queue, and let the threads take events from the queue and do the work themselves. That is the half-sync/half-async pattern mentioned earlier. If instead the thread that accepts a connection goes on to handle the subsequent sends and receives itself, while another thread is elected leader to keep accepting connections and the remaining threads wait as followers, that is the leader/followers pattern. The concrete implementations of ACE's Reactor and Proactor are good references, and half-sync/half-async is widely discussed online if you want to dig deeper. Full code for those patterns is too long to include here; below is first a minimal sketch of the shared event queue at the heart of half-sync/half-async, and then a similar but relatively simple Linux implementation using forked worker processes plus epoll:
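First the queue sketch, written with the sp_thread wrappers from the thread-pool section (the queue layout itself is my illustration of the pattern, not code from ACE):

#define QUEUE_SIZE 1024

typedef struct event_queue {
    void              *items[QUEUE_SIZE];   /* ring buffer of pending events */
    int                head, tail, count;
    sp_thread_mutex_t  mutex;
    sp_thread_cond_t   not_empty;
} event_queue;

/* the asynchronous half (the event demultiplexer thread) pushes events */
void queue_push( event_queue *q, void *ev )
{
    sp_thread_mutex_lock( &q->mutex );
    if( q->count < QUEUE_SIZE ){
        q->items[q->tail] = ev;
        q->tail = ( q->tail + 1 ) % QUEUE_SIZE;
        q->count++;
        sp_thread_cond_signal( &q->not_empty );
    }
    sp_thread_mutex_unlock( &q->mutex );
}

/* the synchronous half (worker threads) blocks here waiting for work */
void *queue_pop( event_queue *q )
{
    void *ev;
    sp_thread_mutex_lock( &q->mutex );
    while( q->count == 0 )
        sp_thread_cond_wait( &q->not_empty, &q->mutex );
    ev = q->items[q->head];
    q->head = ( q->head + 1 ) % QUEUE_SIZE;
    q->count--;
    sp_thread_mutex_unlock( &q->mutex );
    return ev;
}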

#include <sys/socket.h>
#include <sys/wait.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/epoll.h>
#include <sys/sendfile.h>
#include <sys/stat.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <strings.h>
#include <fcntl.h>
#include <errno.h>

#define HANDLE_INFO   1
#define HANDLE_SEND   2
#define HANDLE_DEL    3
#define HANDLE_CLOSE  4

#define MAX_REQLEN         1024
#define MAX_PROCESS_CONN    3
#define FIN_CHAR           0x00
#define SUCCESS  0
#define ERROR   -1

typedef struct event_handle{
    int socket_fd;
    int file_fd;
    off_t file_pos;        /* off_t so it can be handed to sendfile() directly */
    int epoll_fd;
    char request[MAX_REQLEN];
    int request_len;
    int ( * read_handle )( struct event_handle * ev );
    int ( * write_handle )( struct event_handle * ev );
    int handle_method;
} EV, * EH;
typedef int ( * EVENT_HANDLE )( struct event_handle * ev );

/* forward declarations so the handlers can call one another */
int parse_request( EH ev );
int handle_request( EH ev );
int finish_request( EH ev );
int clean_request( EH ev );

int create_listen_fd( int port ){
    int listen_fd;
    struct sockaddr_in my_addr;
    if( ( listen_fd = socket( AF_INET, SOCK_STREAM, 0 ) ) == -1 ){
        perror( "create socket error" );
        exit( 1 );
    }
    int flag = 1;                    /* must be initialized before setsockopt() */
    int olen = sizeof(int);
    if( setsockopt( listen_fd, SOL_SOCKET, SO_REUSEADDR,
                    (const void *)&flag, olen ) == -1 ){
        perror( "setsockopt error" );
    }
    flag = 5;
    if( setsockopt( listen_fd, IPPROTO_TCP, TCP_DEFER_ACCEPT, &flag, olen ) == -1 ){
        perror( "setsockopt error" );
    }
    flag = 1;
    if( setsockopt( listen_fd, IPPROTO_TCP, TCP_CORK, &flag, olen ) == -1 ){
        perror( "setsockopt error" );
    }
    int flags = fcntl( listen_fd, F_GETFL, 0 );
    fcntl( listen_fd, F_SETFL, flags|O_NONBLOCK );
    my_addr.sin_family = AF_INET;
    my_addr.sin_port = htons( port );
    my_addr.sin_addr.s_addr = INADDR_ANY;
    bzero( &( my_addr.sin_zero ), 8 );
    if( bind( listen_fd, ( struct sockaddr * )&my_addr,
              sizeof( struct sockaddr_in ) ) == -1 ) {
        perror( "bind error" );
        exit( 1 );
    }
    if( listen( listen_fd, 1 ) == -1 ){
        perror( "listen error" );
        exit( 1 );
    }
    return listen_fd;
}

int create_accept_fd( int listen_fd ){
    socklen_t addr_len = sizeof( struct sockaddr_in );
    struct sockaddr_in remote_addr;
    int accept_fd = accept( listen_fd,
        ( struct sockaddr * )&remote_addr, &addr_len );
    int flags = fcntl( accept_fd, F_GETFL, 0 );
    fcntl( accept_fd, F_SETFL, flags|O_NONBLOCK );
    return accept_fd;
}

/* fork process_num children; returns 0 in each child, a child pid in the parent */
int fork_process( int process_num ){
    int i;
    int pid = -1;
    for( i = 0; i < process_num; i++ ){
        if( pid != 0 ){
            pid = fork();
        }
    }
    return pid;
}

void init_evhandle( EH ev, int socket_fd, int epoll_fd,
                    EVENT_HANDLE r_handle, EVENT_HANDLE w_handle ){
    ev->epoll_fd = epoll_fd;
    ev->socket_fd = socket_fd;
    ev->read_handle = r_handle;
    ev->write_handle = w_handle;
    ev->file_pos = 0;
    ev->request_len = 0;
    ev->handle_method = 0;
    memset( ev->request, 0, MAX_REQLEN );
}

//accept->accept_queue->request->request_queue->output->output_queue
//multi process sendfile
int parse_request( EH ev ){
    ev->request_len--;
    *( ev->request + ev->request_len - 1 ) = 0x00;   /* strip the trailing "\r\n" */
    int i;
    for( i = 0; i < ev->request_len; i++ ){
        if( ev->request[i] == ':' ){                 /* request format: "method:filename" */
            ev->request_len = ev->request_len - i - 1;
            char temp[MAX_REQLEN];
            memcpy( temp, ev->request, i );
            temp[i] = 0x00;                          /* terminate before atoi() */
            ev->handle_method = atoi( temp );
            memcpy( temp, ev->request + i + 1, ev->request_len );
            memcpy( ev->request, temp, ev->request_len );
            break;
        }
    }
    //handle_request( ev );
    // re-register the fd with epoll for EPOLLOUT
    struct epoll_event ev_temp;
    ev_temp.data.ptr = ev;
    ev_temp.events = EPOLLOUT|EPOLLET;
    epoll_ctl( ev->epoll_fd, EPOLL_CTL_MOD, ev->socket_fd, &ev_temp );
    return SUCCESS;
}

int handle_request( EH ev ){
    struct stat file_info;
    switch( ev->handle_method ){
        case HANDLE_INFO:
            ev->file_fd = open( ev->request, O_RDONLY );
            if( ev->file_fd == -1 ){
                send( ev->socket_fd, "open file failed\n", strlen("open file failed\n"), 0 );
                return -1;
            }
            fstat( ev->file_fd, &file_info );
            char info[MAX_REQLEN];
            sprintf( info, "filelen:%ld\n", (long)file_info.st_size );
            send( ev->socket_fd, info, strlen( info ), 0 );
            break;
        case HANDLE_SEND:
            ev->file_fd = open( ev->request, O_RDONLY );
            if( ev->file_fd == -1 ){
                send( ev->socket_fd, "open file failed\n", strlen("open file failed\n"), 0 );
                return -1;
            }
            fstat( ev->file_fd, &file_info );
            sendfile( ev->socket_fd, ev->file_fd, 0, file_info.st_size );
            break;
        case HANDLE_DEL:
            break;
        case HANDLE_CLOSE:
            break;
    }
    finish_request( ev );
    return SUCCESS;
}

int finish_request( EH ev ){
    close( ev->socket_fd );
    close( ev->file_fd );
    ev->handle_method = -1;
    clean_request( ev );
    return SUCCESS;
}

int clean_request( EH ev ){
    memset( ev->request, 0, MAX_REQLEN );
    ev->request_len = 0;
    return SUCCESS;
}

int read_hook_v2( EH ev ){
    char in_buf[MAX_REQLEN];
    memset( in_buf, 0, MAX_REQLEN );
    int recv_num = recv( ev->socket_fd, in_buf, MAX_REQLEN, 0 );
    if( recv_num == 0 ){
        close( ev->socket_fd );
        return ERROR;
    }
    else{
        // check for overflow before appending to the request buffer
        if( ev->request_len > MAX_REQLEN - recv_num ){
            close( ev->socket_fd );
            clean_request( ev );
            return ERROR;
        }
        memcpy( ev->request + ev->request_len, in_buf, recv_num );
        ev->request_len += recv_num;
        if( recv_num == 2 && ( !memcmp( &in_buf[recv_num-2], "\r\n", 2 ) ) ){
            parse_request( ev );
        }
    }
    return recv_num;
}

int write_hook_v1( EH ev ){
    struct stat file_info;
    ev->file_fd = open( ev->request, O_RDONLY );
    if( ev->file_fd == ERROR ){
        send( ev->socket_fd, "open file failed\n", strlen("open file failed\n"), 0 );
        return ERROR;
    }
    fstat( ev->file_fd, &file_info );
    int write_num;
    while(1){
        /* sendfile() advances ev->file_pos through the offset pointer itself */
        write_num = sendfile( ev->socket_fd, ev->file_fd, &ev->file_pos, 10240 );
        if( write_num == ERROR ){
            if( errno == EAGAIN ){
                break;          /* socket buffer full; wait for the next EPOLLOUT */
            }
            break;              /* real error */
        }
        else if( write_num == 0 ){
            printf( "written:%ld\n", (long)ev->file_pos );
            //finish_request( ev );
            break;
        }
    }
    return SUCCESS;
}

int main(){
    int listen_fd = create_listen_fd( 3389 );
    int pid = fork_process( 3 );
    if( pid == 0 ){
        int accept_handles = 0;
        struct epoll_event ev, events[20];
        int epfd = epoll_create( 256 );
        int ev_s = 0;

        ev.data.fd = listen_fd;
        ev.events = EPOLLIN|EPOLLET;
        epoll_ctl( epfd, EPOLL_CTL_ADD, listen_fd, &ev );
        struct event_handle ev_handles[256];
        for( ;; ){
            ev_s = epoll_wait( epfd, events, 20, 500 );
            int i = 0;
            for( i = 0; i < ev_s; i++ ){
                if( events[i].data.fd == listen_fd ){
                    if( accept_handles < MAX_PROCESS_CONN ){
                        accept_handles++;
                        int accept_fd = create_accept_fd( listen_fd );
                        init_evhandle( &ev_handles[accept_handles], accept_fd, epfd,
                                       read_hook_v2, write_hook_v1 );
                        ev.data.ptr = &ev_handles[accept_handles];
                        ev.events = EPOLLIN|EPOLLET;
                        epoll_ctl( epfd, EPOLL_CTL_ADD, accept_fd, &ev );
                    }
                }
                else if( events[i].events&EPOLLIN ){
                    EVENT_HANDLE current_handle = ( ( EH )( events[i].data.ptr ) )->read_handle;
                    EH current_event = ( EH )( events[i].data.ptr );
                    ( *current_handle )( current_event );
                }
                else if( events[i].events&EPOLLOUT ){
                    EVENT_HANDLE current_handle = ( ( EH )( events[i].data.ptr ) )->write_handle;
                    EH current_event = ( EH )( events[i].data.ptr );
                    if( ( *current_handle )( current_event ) == 0 ){
                        accept_handles--;
                    }
                }
            }
        }
    }
    else{
        // manage the child processes
        int child_process_status;
        wait( &child_process_status );
    }

    return SUCCESS;
}
#include<sys/socket.h>
#include <sys/wait.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/epoll.h>
#include <sys/sendfile.h>
#include <sys/stat.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <strings.h>
#include <fcntl.h>
#include <errno.h>

#define HANDLE_INFO   1
#define HANDLE_SEND   2
#define HANDLE_DEL    3
#define HANDLE_CLOSE  4

#define MAX_REQLEN         1024
#define MAX_PROCESS_CONN    3
#define FIN_CHAR           0x00
#define SUCCESS  0
#define ERROR   -1

typedef struct event_handle{
    int socket_fd;
    int file_fd;
    int file_pos;
    int epoll_fd;
    char request[MAX_REQLEN];
    int request_len;
    int ( * read_handle )( struct event_handle * ev );
    int ( * write_handle )( struct event_handle * ev );
    int handle_method;
} EV,* EH;
typedef int ( * EVENT_HANDLE )( struct event_handle * ev );

int create_listen_fd( int port ){
    int listen_fd;
    struct sockaddr_inmy_addr;
    if( ( listen_fd = socket( AF_INET, SOCK_STREAM, 0 ) ) == -1 ){
        perror( "create socket error" );
        exit( 1 );
    }
    int flag;
    int olen = sizeof(int);
    if( setsockopt( listen_fd, SOL_SOCKET, SO_REUSEADDR
                       , (const void *)&flag, olen ) == -1 ){
        perror( "setsockopt error" );
    }
    flag = 5;
    if( setsockopt( listen_fd, IPPROTO_TCP, TCP_DEFER_ACCEPT, &flag, olen ) == -1 ){
        perror( "setsockopt error" );
    }
    flag = 1;
    if( setsockopt( listen_fd, IPPROTO_TCP, TCP_CORK, &flag, olen ) == -1 ){
        perror( "setsockopt error" );
    }
    int flags = fcntl( listen_fd, F_GETFL, 0 );
    fcntl( listen_fd, F_SETFL, flags|O_NONBLOCK );
    my_addr.sin_family = AF_INET;
    my_addr.sin_port = htons( port );
    my_addr.sin_addr.s_addr = INADDR_ANY;
    bzero( &( my_addr.sin_zero ), 8 );
    if( bind( listen_fd, ( struct sockaddr * )&my_addr,
    sizeof( struct sockaddr_in ) ) == -1 ) {
        perror( "bind error" );
        exit( 1 );
    }
    if( listen( listen_fd, 1 ) == -1 ){
        perror( "listen error" );
        exit( 1 );
    }
    return listen_fd;
}

int create_accept_fd( int listen_fd ){
    socklen_t addr_len = sizeof( struct sockaddr_in );
    struct sockaddr_in remote_addr;
    int accept_fd = accept( listen_fd,
        ( struct sockaddr * )&remote_addr, &addr_len );
    int flags = fcntl( accept_fd, F_GETFL, 0 );
    fcntl( accept_fd, F_SETFL, flags|O_NONBLOCK );
    return accept_fd;
}

int fork_process( int process_num ){
    int i;
    int pid=-1;
    for( i = 0; i < process_num; i++ ){
        if( pid != 0 ){
            pid = fork();
        }
    }
    return pid;
}

int init_evhandle(EH ev,int socket_fd,int epoll_fd,EVENT_HANDLE r_handle,EVENT_HANDLE w_handle){
    ev->epoll_fd = epoll_fd;
    ev->socket_fd = socket_fd;
    ev->read_handle = r_handle;
    ev->write_handle = w_handle;
    ev->file_pos = 0;
    ev->request_len = 0;
    ev->handle_method = 0;
    memset( ev->request, 0, MAX_REQLEN );
    return SUCCESS;
}
// accept -> accept_queue -> request -> request_queue -> output -> output_queue
// multi-process sendfile
int parse_request(EH ev){
    ev->request_len--;
    *( ev->request + ev->request_len - 1 ) = 0x00;
    int i;
    for( i=0; i<ev->request_len; i++ ){
        if( ev->request[i] == ':' ){
            ev->request_len = ev->request_len-i-1;
            char temp[MAX_REQLEN];
            memcpy( temp, ev->request, i );
            ev->handle_method = atoi( temp );
            memcpy( temp, ev->request+i+1, ev->request_len );
            memcpy( ev->request, temp, ev->request_len );
            break;
        }
    }
    //handle_request(ev );
    // register with epoll for EPOLLOUT

    struct epoll_event ev_temp;
    ev_temp.data.ptr = ev;
    ev_temp.events = EPOLLOUT|EPOLLET;
    epoll_ctl( ev->epoll_fd, EPOLL_CTL_MOD, ev->socket_fd, &ev_temp );
    return SUCCESS;
}

int handle_request(EH ev){
    struct stat file_info;
    switch( ev->handle_method ){
        case HANDLE_INFO:
            ev->file_fd = open( ev->request, O_RDONLY );
            if( ev->file_fd == -1 ){
               send( ev->socket_fd, "open file failed\n", strlen("open file failed\n"), 0 );
               return -1;
            }
            fstat(ev->file_fd, &file_info);
            char info[MAX_REQLEN];
            sprintf( info, "file len:%ld\n", (long)file_info.st_size );
            send( ev->socket_fd, info, strlen( info ), 0 );
            break;
        case HANDLE_SEND:
            ev->file_fd = open( ev->request, O_RDONLY );
            if( ev->file_fd == -1 ){
               send( ev->socket_fd, "open file failed\n", strlen("open file failed\n"), 0 );
               return -1;
            }
            fstat(ev->file_fd, &file_info);
            sendfile( ev->socket_fd, ev->file_fd, 0, file_info.st_size );
            break;
        case HANDLE_DEL:
            break;
        case HANDLE_CLOSE:
            break;
    }
    finish_request( ev );
    return SUCCESS;
}

int finish_request(EH ev){
    close(ev->socket_fd);
    close(ev->file_fd);
    ev->handle_method = -1;
    clean_request( ev );
    return SUCCESS;
}

int clean_request(EH ev){
    memset( ev->request, 0, MAX_REQLEN );
    ev->request_len = 0;
    return SUCCESS;
}

int read_hook_v2( EH ev ){
    char in_buf[MAX_REQLEN];
    memset( in_buf, 0, MAX_REQLEN );
    int recv_num = recv( ev->socket_fd, in_buf, MAX_REQLEN, 0 );
    if( recv_num == 0 ){
        close( ev->socket_fd );
        return ERROR;
    }
    else if( recv_num < 0 ){
        // nothing to read right now (e.g. EAGAIN under edge-triggered epoll)
        return ERROR;
    }
    else{
        // check whether the accumulated request would overflow the buffer
        if( ev->request_len > MAX_REQLEN-recv_num ){
            close( ev->socket_fd );
            clean_request( ev );
            return ERROR;
        }
        memcpy( ev->request + ev->request_len, in_buf, recv_num );
        ev->request_len += recv_num;
        // the request is complete once the buffered data ends with "\r\n"
        if( ev->request_len >= 2 && ( !memcmp( &ev->request[ev->request_len-2], "\r\n", 2 ) ) ){
           parse_request(ev);
        }
    }
    return recv_num;
}

int write_hook_v1( EH ev ){
    struct stat file_info;
    ev->file_fd = open( ev->request, O_RDONLY );
    if( ev->file_fd == ERROR ){
        send( ev->socket_fd, "openfile failed\n", strlen("openfile failed\n"), 0 );
        return ERROR;
    }
    fstat(ev->file_fd, &file_info);
    int write_num;
    while(1){
        // sendfile() advances ev->file_pos through the offset pointer itself;
        // adding write_num on top of that would double-count the position
        write_num = sendfile( ev->socket_fd, ev->file_fd, &ev->file_pos, 10240 );
        if( write_num == ERROR ){
            if( errno == EAGAIN ){
                break;   // socket buffer full: wait for the next EPOLLOUT
            }
            perror( "sendfile error" );
            break;       // real error: give up instead of spinning forever
        }
        else if( write_num == 0 ){
            printf( "written:%ld\n", (long)ev->file_pos );
            //finish_request(ev );
            break;
        }
    }
    return SUCCESS;
}

int main(){
    int listen_fd = create_listen_fd( 3389 );
    int pid = fork_process( 3 );
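    // fork_process(3) created three children; each child (pid == 0) runs its
    // own epoll loop on the shared listening fd, while the parent only waits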
    if( pid == 0 ){
        int accept_handles = 0;
        struct epoll_event ev, events[20];
        int epfd = epoll_create( 256 );
        int ev_s = 0;

        ev.data.fd = listen_fd;
        ev.events = EPOLLIN|EPOLLET;
        epoll_ctl( epfd, EPOLL_CTL_ADD, listen_fd, &ev );
        struct event_handle ev_handles[256];
        for( ;; ){
            ev_s = epoll_wait( epfd, events, 20, 500 );
            int i = 0;
            for( i = 0; i<ev_s; i++ ){
               if( events[i].data.fd == listen_fd ){
                   if( accept_handles < MAX_PROCESS_CONN ){
                       accept_handles++;
                       int accept_fd = create_accept_fd( listen_fd );
                       init_evhandle(&ev_handles[accept_handles],accept_fd,epfd,read_hook_v2,write_hook_v1);
                       ev.data.ptr = &ev_handles[accept_handles];
                       ev.events = EPOLLIN|EPOLLET;
                       epoll_ctl( epfd, EPOLL_CTL_ADD, accept_fd, &ev );
                   }
               }
               else if( events[i].events&EPOLLIN ){
                   EVENT_HANDLE current_handle = ( ( EH )( events[i].data.ptr ) )->read_handle;
                   EH current_event = ( EH )( events[i].data.ptr );
                   ( *current_handle )( current_event );
               }
               else if( events[i].events&EPOLLOUT ){
                   EVENT_HANDLE current_handle = ( ( EH )( events[i].data.ptr ) )->write_handle;
                   EH current_event = ( EH )( events[i].data.ptr );
                   if( ( *current_handle )( current_event )  == 0 ){
                       accept_handles--;
                   }
               }
            }
        }
    }
    else{
        // parent: manage the child processes
        int child_process_status;
        wait( &child_process_status );
    }

    return SUCCESS;
}
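
A quick way to exercise the server: a request on the wire is "method:path\r\n"; parse_request stores the number before the colon in handle_method, although the handle_request dispatch that would interpret it is commented out above, so write_hook_v1 simply streams the named file back on EPOLLOUT. Below is a small Java test client along those lines (the address, port and file path are just the values hardcoded above and an example file):

import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.net.SocketTimeoutException;

// Minimal test client: sends one "method:path\r\n" request and prints the
// bytes the server streams back.
public class FileClient {
    public static void main(String[] args) throws Exception {
        Socket s = new Socket("127.0.0.1", 3389);
        OutputStream out = s.getOutputStream();
        out.write("2:/etc/hostname\r\n".getBytes("US-ASCII"));
        out.flush();

        // the server leaves the connection open (finish_request is commented
        // out), so stop after a two-second lull instead of waiting for EOF
        s.setSoTimeout(2000);
        InputStream in = s.getInputStream();
        byte[] buf = new byte[1024];
        try {
            int n;
            while ((n = in.read(buf)) != -1) {
                System.out.write(buf, 0, n);
            }
        } catch (SocketTimeoutException e) {
            // no more data
        }
        System.out.flush();
        s.close();
    }
}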

三、     Distributed System Design

The previous sections described the implementation of the core servers of a distributed system, which can be the internals of an HTTP server, a cache server, a distributed file system and so on. This section starts from a large high-concurrency website and looks at the design of a high-concurrency system as a whole. Below is the logical structure of such a system:
[figure omitted: logical structure of a high-concurrency system]
It mainly draws on the article at http://www.chinaz.com/web/2010/0310/108211.shtml. Below I want to go through the implementation of each part of that architecture.

1.     Cache Systems

Caching is an indispensable module of every high-concurrency, high-availability system. Several common cache systems are introduced below.

Squid

Squid is a front-end cache, usually deployed at the point in the network closest to the users. By caching the site's pages it spares users a trip to the origin server on every request, improving responsiveness and system performance. The implementation should be fairly simple: a proxy with storage. It answers the user's page request and stores the result; on the next request it checks whether the page needs updating, fetches fresh data from the server if so, and otherwise returns the cached page directly.

Ehcache

Ehcache is an object cache, usually used together with Hibernate in J2EE (forgive the author, who used to do J2EE development and does not know its other uses well). When an application queries the database, data that is read frequently but updated rarely can be put into the Ehcache cache to speed up access. Ehcache supports both in-memory and on-disk storage, as well as distributed caching. The basic principle of data caching: build a map for the objects to be cached, put them into the map, and on a query look in the map first, going to the database only on a miss (a sketch follows below). On shutdown the cache can be serialized to disk. I have not looked into the distributed mode.
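
A minimal cache-aside sketch of that map idea in plain Java (this is not the Ehcache API; Loader stands in for whatever database query the application has):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Cache-aside: look in the map first, fall back to the slow source (the
// database) only on a miss, then remember the result for later queries.
public class SimpleCache<K, V> {
    public interface Loader<K, V> { V load(K key); }   // e.g. a DB query

    private final Map<K, V> map = new ConcurrentHashMap<K, V>();
    private final Loader<K, V> loader;

    public SimpleCache(Loader<K, V> loader) { this.loader = loader; }

    public V get(K key) {
        V value = map.get(key);
        if (value == null) {          // miss: hit the database once
            value = loader.load(key);
            if (value != null) {
                map.put(key, value);  // later queries are served from memory
            }
        }
        return value;
    }

    // call this on every update, otherwise readers keep seeing stale data
    public void invalidate(K key) {
        map.remove(key);
    }
}

Invalidation on update is the main cost of the pattern: the cache only stays correct if every writer remembers to evict.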

Page Caching and Static Generation of Dynamic Pages

A caching technique often used on large websites is the caching of dynamic pages. Since dynamic pages are updated frequently, the caches above stop helping there, so techniques such as SSI (Server Side Include) are usually employed to cache dynamic pages or page fragments.

The other approach is to render dynamic pages to static files. Below is an example, from a book on Spring, of making dynamic pages static in J2EE (the helper classes it references, such as AbstractCacheFilter, CachedResponseWrapper, FileUtil, GZipUtil and HttpServletRequestFactory, come from the book and are not listed here):

/**
 * A Filter that makes dynamic content static: dynamic pages that change very
 * slowly are written out as static files.
 */
package com.zsl.cache.filter;
 
import java.io.File;
import java.io.IOException;
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;
import java.util.Map;
 
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
 
import org.springframework.core.io.Resource;
 
 
/**
 * @author zsl
 *
 */
public class FileCacheFilter extends AbstractCacheFilter {
         private String root;
        
         private final String SUFFIX = ".html";
        
         public final void setFileDir(Resource dir){
                   try {
                            File f = dir.getFile();
                            f.mkdirs();
                            if(!f.isDirectory()){
                                     throw newIllegalArgumentException("Invalid directory: "+f.getPath());
                            }
                            if(!f.canWrite())
                                     throw newIllegalArgumentException("Cannot write to directory: "+f.getPath());
                            root = f.getPath();
                           
                             if(!root.endsWith("/") && !root.endsWith("\\"))
                                     root = root+"/";
                   } catch (IOException e) {
                            // TODO Auto-generated catch block
                            throw new IllegalArgumentException(e);
                   }
         }
        
         public void afterPropertiesSet() throws Exception {
                   super.afterPropertiesSet();
                   if(!new File(root).isDirectory()){
                            throw newIllegalArgumentException("No directory: "+root);
                   }
         }
        
         public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) throws IOException, ServletException {
                   HttpServletRequest httpRequest =(HttpServletRequest)request;
                   String key = getKey(httpRequest);
                   if(key == null){
                            chain.doFilter(request,response);
                   }else{
                            File file = key2File(key);
                            if(file.isFile()){
                                     HttpServletResponse httpResponse= (HttpServletResponse)response;
                                     httpResponse.setContentType(getContentType());
                                     httpResponse.setHeader("Content-Encoding","gzip");
                                     httpResponse.setContentLength((int)file.length());
                                      FileUtil.readFile(file, httpResponse.getOutputStream());
                            }else{
                                      // cache miss: generate the page, then store it
                                     HttpServletResponse httpResponse= (HttpServletResponse)response;
                                     CachedResponseWrapper wrapper =new CachedResponseWrapper(httpResponse);
                                      chain.doFilter(request,wrapper);
                                     if(wrapper.getStatus() ==HttpServletResponse.SC_OK){
                                               byte[] data =GZipUtil.gzip(wrapper.getResponseData());
                                               FileUtil.writeFile(file,data);
                                               httpResponse.setContentType(getContentType());
                                               httpResponse.setHeader("Content-Encoding","gzip");
                                               httpResponse.setContentLength(data.length);
                                               httpResponse.getOutputStream().write(data);
                                     }
                            }
                   }
                  
         }
        
         private File key2File(String key){
                   int  hash =key.hashCode();
                   int dir1 = (hash &0xff00)>>8;
                   int dir2 = hash & 0xff;
                   String  dir= root+dir1+"/"+dir2;
                   File fdir = new File(dir);
                    if(!fdir.isDirectory()){
                            if(!fdir.mkdirs()){
                                     return null;
                            }
                   }
                    return new File(dir+"/"+encode(key)+SUFFIX);
         }
        
         private String encode(String key){
                   try {
                            return URLEncoder.encode(key,"UTF-8");
                   } catch (UnsupportedEncodingException e) {
                            throw new RuntimeException(e);
                   }
                  
         }
        
         public void remove(String url,Map<String,String>parameters){
                   String key =getKey(HttpServletRequestFactory.create(url,parameters));
                   if(key != null){
                             FileUtil.removeFile(key2File(key));
                   }
         }
}
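
CachedResponseWrapper above comes from the book and is not listed there; a minimal stand-in, assuming the Servlet 2.5 API, might look like the following (it buffers only getOutputStream and setStatus, ignoring getWriter and sendError):

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import javax.servlet.ServletOutputStream;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpServletResponseWrapper;

// Buffers the body written by the rest of the filter chain and records the
// status code, so FileCacheFilter can decide whether to write a cache file.
public class CachedResponseWrapper extends HttpServletResponseWrapper {

    private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();
    private int status = SC_OK;

    public CachedResponseWrapper(HttpServletResponse response) {
        super(response);
    }

    public void setStatus(int sc) {
        status = sc;
        super.setStatus(sc);
    }

    public ServletOutputStream getOutputStream() {
        // capture the bytes instead of sending them on; FileCacheFilter
        // re-sends them itself after gzipping
        return new ServletOutputStream() {
            public void write(int b) throws IOException {
                buffer.write(b);
            }
        };
    }

    public int getStatus() {
        return status;
    }

    public byte[] getResponseData() {
        return buffer.toByteArray();
    }
}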

The book gives one more cache, a client-side one; I am not sure which category it belongs in:

/**
 * A Filter that sets an expiry time on static web resources (gif images,
 * css files and the like), so that browsers do not re-request them constantly.
 */
package com.zsl.cache.filter;
 
import java.io.IOException;
import java.util.Enumeration;
import java.util.Map;
 
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.Filter;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
 
import org.apache.commons.collections.map.HashedMap;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
 
 
/**
 * @author zsl
 *
 */
public class ExpireFilter implements Filter {
 
         private Log log = LogFactory.getLog(ExpireFilter.class);
        
         private Map<String, Long> map = new HashedMap();
 
         @Override
         public void destroy() {
                   log.info("destory ExpiredFilter");
         }
 
         @Override
         public void doFilter(ServletRequest request, ServletResponse response,
                            FilterChain chain) throws IOException, ServletException {
                            String uriString =((HttpServletRequest)request).getRequestURI();
                            int n = uriString.lastIndexOf('.');
                            if(n!= -1){
                                     String ext =uriString.substring(n);
                                     Long exp = map.get(ext);
                                     if(exp != null){
                                                HttpServletResponse resp = (HttpServletResponse)response;
                                                // Expires must be an HTTP date, not raw millis; setDateHeader formats it
                                                resp.setDateHeader("Expires", System.currentTimeMillis() + exp*1000);
                                     }
                            }
                            chain.doFilter(request,response);
         }
 
         @Override
         public void init(FilterConfig config) throws ServletException {
                   Enumeration em = config.getInitParameterNames();
                   while(em.hasMoreElements()){
                            String paramName =em.nextElement().toString();
                            String paramValue = config.getInitParameter(paramName);
                            try {
                                     int time =Integer.valueOf(paramValue);
                                     if(time>0){
                                               log.info("set"+paramName + " expired seconds: "+time);
                                               map.put(paramName, newLong(time));
                                     }
                            } catch (Exception e) {
                                     log.warn("Exception ininitilizing ExpiredFilter.",e);
                            }
                   }
         }
}

2.     Load Balancing Systems

Ø  Load-balancing strategies

Load-balancing strategies include random assignment, even (round-robin) assignment, and distributed consistent hashing. Random assignment picks a server at random for each request. Round-robin hands requests to the servers in turn, one per cycle. The distributed consistent hashing algorithm is more involved: it maps both resources and nodes onto a ring, then assigns each resource to a node by a fixed rule, which makes adding and removing servers very easy and limits the impact on the other servers. A very famous algorithm, said to be the foundation of P2P. I do not understand it deeply, so I will not go into detail lest I give myself away; a small sketch follows below.
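
A minimal sketch of such a ring in plain Java (String.hashCode is used only for illustration; real implementations use a stronger hash and many virtual nodes per server):

import java.util.SortedMap;
import java.util.TreeMap;

// Both servers and resource keys are hashed onto the same int "ring"; a key
// belongs to the first server at or after its own position, wrapping around.
public class ConsistentHashRing {

    private final SortedMap<Integer, String> ring = new TreeMap<Integer, String>();

    public void addNode(String node) {
        ring.put(node.hashCode(), node);
    }

    public void removeNode(String node) {
        ring.remove(node.hashCode());
    }

    public String nodeFor(String key) {
        if (ring.isEmpty()) return null;
        SortedMap<Integer, String> tail = ring.tailMap(key.hashCode());
        Integer slot = tail.isEmpty() ? ring.firstKey() : tail.firstKey();
        return ring.get(slot);
    }
}

Removing a node only reassigns the keys that were mapped to it, whereas naive hash(key) % N addressing moves almost every key whenever N changes; that locality is the whole point of the algorithm.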

Ø  Software load balancing

Software load balancing can be done with many schemes; a few common ones are:

DNS-based load balancing: by configuring the DNS forward zone, one domain name is resolved to multiple IP addresses according to some policy, which spreads the load. This needs the cooperation of the DNS server.
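
Seen from the client, such a name simply resolves to several A records; a small sketch of picking one with the standard java.net API (www.example.com is a placeholder):

import java.net.InetAddress;
import java.util.Random;

// A domain with several A records already spreads load at resolution time;
// this just shows what a client ends up doing with such a name.
public class DnsPick {
    public static void main(String[] args) throws Exception {
        // getAllByName returns every address the resolver handed back
        InetAddress[] addrs = InetAddress.getAllByName("www.example.com");
        InetAddress chosen = addrs[new Random().nextInt(addrs.length)];
        System.out.println("using " + chosen.getHostAddress());
    }
}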

LVS-based load balancing: LVS can combine several Linux servers into one virtual server that serves the outside world, balancing the load among them.

Iptables-based load balancing: iptables can do NAT, exposing one virtual IP externally and mapping it to several internal servers. This is basically on par with the hardware schemes; the Linux server here acts as a router.

Ø  Hardware load balancing

Router-based load balancing: configure NAT on the router, with one virtual IP facing the outside mapped to several internal IPs.

Some network equipment vendors also offer dedicated load-balancing devices, such as F5, though they are far from cheap.

Database load balancing

Database load balancing can use the clustering solutions offered by the database vendors.

――――――――――――――――――――――――――――――――――――――

That is all for today; this topic is too big and there is too much material. What follows is what I intend to write later; it has not been organized yet.

Many things have not been expanded on either. There is simply too much.

――――――――――――――――――――――――――――――――――――――

Distributed file systems

Gfs

hfs

Map Reduce systems

Cloud computing



高并发系统设计

作者:周顺利

注:本文大多数观点和代码都是从网上或者开源代码中抄来的,为了疏理和组织这片文章,作者也费了不少心血,为了表示对我劳动的尊重,请转载时注明作者和出处。

一、     引子

最近失业在家,闲来无事。通过网上查找资料和查看开源代码,研究了一下互联网高并发系统的一些设计。这里主要从服务器内部设计和整个系统设计两个方面讨论,更多的是从互联网大型网站设计方面考虑,高性能计算之类系统没有研究过。

二、     服务器内部设计

服务器设计涉及Socket的阻塞/非阻塞,操作系统IO的同步和异步(之前被人问到过两次。第一次让我说说知道的网络模型,我说ISO模型和TCP/IP模型,结果被鄙视了。最后人说了解linux epoll吗?不了解呀!汉,回去查资料才知道是这回事。第二次让我说说知道线程模型,汉!这个名词感觉没有听说过,线程?模型?半同步/半异步,领导者/跟随者知道吗。再汉,我知道同步/异步,还有半同步/半异步?啥呀?领导者/跟随者,我现在没有领导。回去一顿恶补,原来是ACE框架里边经常有这样的提法,Reactor属于同步/半同步,PREACTOR属于领导者/跟随者模式。瀑布汗。小插曲一段,这些不懂没关系,下边我慢慢分解),事件分离器,线程池等。内部设计希望通过各个模块的给出一个简单设计,经过您的进一步的组合和打磨,就可以实现一个基本的高并发服务器。

1.     Java高并发服务器

Java设计高并发服务器相对比较简单。直接是用ServerSocket或者Channel+selector实现。前者属于同步IO设计,后者采用了模拟的异步IO。为什么说模拟的异步IO呢?记得网上看到一篇文章分析了java的selector。在windows上通过建立一个127.0.0.1到127.0.0.1的连接实现IO的异步通知。在linux上通过建立一个管道实现IO的异步通知。考虑到高并并发系统的要求和java上边的异步IO的限制(通常操作系统同时打开的文件数是有限制的)和效率问题,java的高并发服务器设计不做展开深入的分析,可以参考C高并发服务器的分析做同样的设计。

2.     C高并发服务器设计

1)    基本概念

Ø  阻塞和非阻塞socket

所谓阻塞Socket,是指其完成指定的任务之前不允许程序调用另一个函数,在Windows下还会阻塞本线程消息的发送。所谓非阻塞Socket,是指操作启动之后,如果可以立即得到结果就返回结果,否则返回表示结果需要等待的错误信息,不等待任务完成函数就返回。一个比较有意思的问题是accept的Socket是阻塞的还是非阻塞的呢?下边是MSDN上边的一段话:The accept function extracts thefirst connection on the queue of pending connections on socket s. It thencreates and returns a handle to the new socket. The newly created socket is thesocket that will handle the actual connection; it has the same properties assocket s, including the asynchronous events registered with the WSAAsyncSelector WSAEventSelect functions.

Ø  同步/异步IO

有两种类型的文件IO同步:同步文件IO和异步文件IO。异步文件IO也就是重叠IO。
      在同步文件IO中,线程启动一个IO操作然后就立即进入等待状态,直到IO操作完成后才醒来继续执行。而异步文件IO方式中,线程发送一个IO请求到内核,然后继续处理其他的事情,内核完成IO请求后,将会通知线程IO操作完成了。
      如果IO请求需要大量时间执行的话,异步文件IO方式可以显著提高效率,因为在线程等待的这段时间内,CPU将会调度其他线程进行执行,如果没有其他线程需要执行的话,这段时间将会浪费掉(可能会调度操作系统的零页线程)。如果IO请求操作很快,用异步IO方式反而还低效,还不如用同步IO方式。
      同步IO在同一时刻只允许一个IO操作,也就是说对于同一个文件句柄的IO操作是序列化的,即使使用两个线程也不能同时对同一个文件句柄同时发出读写操作。重叠IO允许一个或多个线程同时发出IO请求。异步IO在请求完成时,通过将文件句柄设为有信号状态来通知应用程序,或者应用程序通过GetOverlappedResult察看IO请求是否完成,也可以通过一个事件对象来通知应用程序。高并发系统通常采用异步IO方式提高系统性能。

Ø  事件分离器

事件分离器的概念是针对异步IO来说的。在同步IO的情况下,执行操作等待返回结果,不要事件分离器。异步IO的时候,发送请求后,结果是通过事件通知的。这是产生了事件分离器的需求。事件分离器主要任务是管理和分离不同文件描述符上的所发生的事件,让后通知相应的事件,派发相应的动作。下边是lighthttpd事件分离器定义:

  1. /**
  2. * fd-event handler for select(), poll() andrt-signals on Linux 2.4
  3. *
  4. */ 
  5. typedef struct fdevents { 
  6.          fdevent_handler_t type; 
  7.   
  8.          fdnode **fdarray; 
  9.          size_t maxfds; 
  10.   
  11. #ifdef USE_LINUX_SIGIO 
  12.          int in_sigio; 
  13.          int signum; 
  14.          sigset_t sigset; 
  15.          siginfo_t siginfo; 
  16.          bitset *sigbset; 
  17. #endif 
  18. #ifdef USE_LINUX_EPOLL 
  19.          int epoll_fd; 
  20.          struct epoll_event *epoll_events; 
  21. #endif 
  22. #ifdef USE_POLL 
  23.          struct pollfd *pollfds; 
  24.   
  25.          size_t size; 
  26.          size_t used; 
  27.   
  28.          buffer_int unused; 
  29. #endif 
  30. #ifdef USE_SELECT 
  31.          fd_set select_read; 
  32.          fd_set select_write; 
  33.          fd_set select_error; 
  34.   
  35.          fd_set select_set_read; 
  36.          fd_set select_set_write; 
  37.          fd_set select_set_error; 
  38.   
  39.          int select_max_fd; 
  40. #endif 
  41. #ifdef USE_SOLARIS_DEVPOLL 
  42.          int devpoll_fd; 
  43.          struct pollfd *devpollfds; 
  44. #endif 
  45. #ifdef USE_FREEBSD_KQUEUE 
  46.          int kq_fd; 
  47.          struct kevent *kq_results; 
  48.          bitset *kq_bevents; 
  49. #endif 
  50. #ifdef USE_SOLARIS_PORT 
  51.          int port_fd; 
  52. #endif 
  53.          int (*reset)(struct fdevents *ev); 
  54.          void (*free)(struct fdevents *ev); 
  55.   
  56.          int (*event_add)(struct fdevents *ev, int fde_ndx, int fd,int events); 
  57.          int (*event_del)(struct fdevents *ev, int fde_ndx, int fd); 
  58.          int (*event_get_revent)(struct fdevents *ev, size_t ndx); 
  59.          int (*event_get_fd)(struct fdevents *ev, size_t ndx); 
  60.   
  61.          int (*event_next_fdndx)(struct fdevents *ev, int ndx); 
  62.   
  63.          int (*poll)(struct fdevents *ev, int timeout_ms); 
  64.   
  65.          int (*fcntl_set)(struct fdevents *ev, int fd); 
  66. } fdevents; 
  67.   
  68. fdevents *fdevent_init(size_tmaxfds, fdevent_handler_t type); 
  69. int fdevent_reset(fdevents*ev); 
  70. void fdevent_free(fdevents*ev); 
  71.   
  72. int fdevent_event_add(fdevents*ev, int *fde_ndx, int fd, int events); 
  73. int fdevent_event_del(fdevents*ev, int *fde_ndx, int fd); 
  74. intfdevent_event_get_revent(fdevents *ev, size_t ndx); 
  75. intfdevent_event_get_fd(fdevents *ev, size_t ndx); 
  76. fdevent_handlerfdevent_get_handler(fdevents *ev, int fd); 
  77. void *fdevent_get_context(fdevents *ev, int fd); 
  78.   
  79. int fdevent_event_next_fdndx(fdevents*ev, int ndx); 
  80.   
  81. int fdevent_poll(fdevents *ev,int timeout_ms); 
  82.   
  83. int fdevent_register(fdevents*ev, int fd, fdevent_handler handler, void *ctx); 
  84. int fdevent_unregister(fdevents*ev, int fd); 
  85.   
  86. int fdevent_fcntl_set(fdevents*ev, int fd); 
  87.   
  88. intfdevent_select_init(fdevents *ev); 
  89. int fdevent_poll_init(fdevents*ev); 
  90. intfdevent_linux_rtsig_init(fdevents *ev); 
  91. intfdevent_linux_sysepoll_init(fdevents *ev); 
  92. intfdevent_solaris_devpoll_init(fdevents *ev); 
  93. intfdevent_freebsd_kqueue_init(fdevents *ev); 
/**
 * fd-event handler for select(), poll() andrt-signals on Linux 2.4
 *
 */
typedef struct fdevents {
         fdevent_handler_t type;
 
         fdnode **fdarray;
         size_t maxfds;
 
#ifdef USE_LINUX_SIGIO
         int in_sigio;
         int signum;
         sigset_t sigset;
         siginfo_t siginfo;
         bitset *sigbset;
#endif
#ifdef USE_LINUX_EPOLL
         int epoll_fd;
         struct epoll_event *epoll_events;
#endif
#ifdef USE_POLL
         struct pollfd *pollfds;
 
         size_t size;
         size_t used;
 
         buffer_int unused;
#endif
#ifdef USE_SELECT
         fd_set select_read;
         fd_set select_write;
         fd_set select_error;
 
         fd_set select_set_read;
         fd_set select_set_write;
         fd_set select_set_error;
 
         int select_max_fd;
#endif
#ifdef USE_SOLARIS_DEVPOLL
         int devpoll_fd;
         struct pollfd *devpollfds;
#endif
#ifdef USE_FREEBSD_KQUEUE
         int kq_fd;
         struct kevent *kq_results;
         bitset *kq_bevents;
#endif
#ifdef USE_SOLARIS_PORT
         int port_fd;
#endif
         int (*reset)(struct fdevents *ev);
         void (*free)(struct fdevents *ev);
 
         int (*event_add)(struct fdevents *ev, int fde_ndx, int fd,int events);
         int (*event_del)(struct fdevents *ev, int fde_ndx, int fd);
         int (*event_get_revent)(struct fdevents *ev, size_t ndx);
         int (*event_get_fd)(struct fdevents *ev, size_t ndx);
 
         int (*event_next_fdndx)(struct fdevents *ev, int ndx);
 
         int (*poll)(struct fdevents *ev, int timeout_ms);
 
         int (*fcntl_set)(struct fdevents *ev, int fd);
} fdevents;
 
fdevents *fdevent_init(size_tmaxfds, fdevent_handler_t type);
int fdevent_reset(fdevents*ev);
void fdevent_free(fdevents*ev);
 
int fdevent_event_add(fdevents*ev, int *fde_ndx, int fd, int events);
int fdevent_event_del(fdevents*ev, int *fde_ndx, int fd);
intfdevent_event_get_revent(fdevents *ev, size_t ndx);
intfdevent_event_get_fd(fdevents *ev, size_t ndx);
fdevent_handlerfdevent_get_handler(fdevents *ev, int fd);
void *fdevent_get_context(fdevents *ev, int fd);
 
int fdevent_event_next_fdndx(fdevents*ev, int ndx);
 
int fdevent_poll(fdevents *ev,int timeout_ms);
 
int fdevent_register(fdevents*ev, int fd, fdevent_handler handler, void *ctx);
int fdevent_unregister(fdevents*ev, int fd);
 
int fdevent_fcntl_set(fdevents*ev, int fd);
 
intfdevent_select_init(fdevents *ev);
int fdevent_poll_init(fdevents*ev);
intfdevent_linux_rtsig_init(fdevents *ev);
intfdevent_linux_sysepoll_init(fdevents *ev);
intfdevent_solaris_devpoll_init(fdevents *ev);
intfdevent_freebsd_kqueue_init(fdevents *ev);


 

具体系统的事件操作通过:

fdevent_freebsd_kqueue.c

fdevent_linux_rtsig.c

fdevent_linux_sysepoll.c

fdevent_poll.c

fdevent_select.c

fdevent_solaris_devpoll.c

几个文件实现。

Ø  线程池

线程池基本上比较简单,实现线程的借入和借出,创建和销毁。最完好可以做到通过一个事件触发一个线程开始工作。下边给出一个简单的,没有实现根据事件触发的,linux和windows通用的线程池模型:

  1. spthread.h 
  2. #ifndef __spthread_hpp__ 
  3. #define __spthread_hpp__ 
  4.   
  5. #ifndef WIN32 
  6.   
  7. /// pthread 
  8.   
  9. #include <pthread.h> 
  10. #include <unistd.h> 
  11.   
  12. typedef void *sp_thread_result_t; 
  13. typedef pthread_mutex_tsp_thread_mutex_t; 
  14. typedef pthread_cond_t  sp_thread_cond_t; 
  15. typedef pthread_t       sp_thread_t; 
  16. typedef pthread_attr_t  sp_thread_attr_t; 
  17.   
  18. #definesp_thread_mutex_init(m,a)  pthread_mutex_init(m,a) 
  19. #definesp_thread_mutex_destroy(m) pthread_mutex_destroy(m) 
  20. #definesp_thread_mutex_lock(m)    pthread_mutex_lock(m) 
  21. #define sp_thread_mutex_unlock(m)   pthread_mutex_unlock(m) 
  22.   
  23. #definesp_thread_cond_init(c,a)   pthread_cond_init(c,a) 
  24. #definesp_thread_cond_destroy(c)  pthread_cond_destroy(c) 
  25. #definesp_thread_cond_wait(c,m)   pthread_cond_wait(c,m) 
  26. #definesp_thread_cond_signal(c)   pthread_cond_signal(c) 
  27.   
  28. #definesp_thread_attr_init(a)       pthread_attr_init(a) 
  29. #definesp_thread_attr_setdetachstate pthread_attr_setdetachstate 
  30. #defineSP_THREAD_CREATE_DETACHED    PTHREAD_CREATE_DETACHED 
  31.   
  32. #define sp_thread_self    pthread_self 
  33. #define sp_thread_create  pthread_create 
  34.   
  35. #define SP_THREAD_CALL 
  36. typedef sp_thread_result_t ( *sp_thread_func_t )( void * args ); 
  37.   
  38. #define sp_sleep(x) sleep(x) 
  39.   
  40. #else/// 
  41.   
  42. // win32 thread 
  43.   
  44. #include <winsock2.h> 
  45. #include <process.h> 
  46.   
  47. typedef unsigned sp_thread_t; 
  48.   
  49. typedef unsignedsp_thread_result_t; 
  50. #define SP_THREAD_CALL__stdcall 
  51. typedef sp_thread_result_t (__stdcall * sp_thread_func_t )( void * args ); 
  52.   
  53. typedef HANDLE  sp_thread_mutex_t; 
  54. typedef HANDLE  sp_thread_cond_t; 
  55. typedef DWORD   sp_thread_attr_t; 
  56.   
  57. #defineSP_THREAD_CREATE_DETACHED 1 
  58. #define sp_sleep(x)Sleep(1000*x) 
  59.   
  60. int sp_thread_mutex_init(sp_thread_mutex_t * mutex, void * attr ) 
  61.          *mutex = CreateMutex( NULL, FALSE, NULL ); 
  62.          return NULL == * mutex ? GetLastError() : 0; 
  63.   
  64. int sp_thread_mutex_destroy(sp_thread_mutex_t * mutex ) 
  65.          int ret = CloseHandle( *mutex ); 
  66.   
  67.          return 0 == ret ? GetLastError() : 0; 
  68.   
  69. int sp_thread_mutex_lock(sp_thread_mutex_t * mutex ) 
  70.          int ret = WaitForSingleObject( *mutex, INFINITE ); 
  71.          return WAIT_OBJECT_0 == ret ? 0 : GetLastError(); 
  72.   
  73. int sp_thread_mutex_unlock(sp_thread_mutex_t * mutex ) 
  74.          int ret = ReleaseMutex( *mutex ); 
  75.          return 0 != ret ? 0 : GetLastError(); 
  76.   
  77. int sp_thread_cond_init(sp_thread_cond_t * cond, void * attr ) 
  78.          *cond = CreateEvent( NULL, FALSE, FALSE, NULL ); 
  79.          return NULL == *cond ? GetLastError() : 0; 
  80.   
  81. int sp_thread_cond_destroy(sp_thread_cond_t * cond ) 
  82.          int ret = CloseHandle( *cond ); 
  83.          return 0 == ret ? GetLastError() : 0; 
  84.   
  85. /*
  86. Caller MUST be holding themutex lock; the
  87. lock is released and the calleris blocked waiting
  88. on 'cond'. When 'cond' issignaled, the mutex
  89. is re-acquired before returningto the caller.
  90. */ 
  91. int sp_thread_cond_wait(sp_thread_cond_t * cond, sp_thread_mutex_t * mutex ) 
  92.          int ret = 0; 
  93.   
  94.          sp_thread_mutex_unlock( mutex ); 
  95.   
  96.          ret = WaitForSingleObject( *cond, INFINITE ); 
  97.   
  98.          sp_thread_mutex_lock( mutex ); 
  99.   
  100.          return WAIT_OBJECT_0 == ret ? 0 : GetLastError(); 
  101.   
  102. int sp_thread_cond_signal(sp_thread_cond_t * cond ) 
  103.          int ret = SetEvent( *cond ); 
  104.          return 0 == ret ? GetLastError() : 0; 
  105.   
  106. sp_thread_t sp_thread_self() 
  107.          return GetCurrentThreadId(); 
  108.   
  109. int sp_thread_attr_init(sp_thread_attr_t * attr ) 
  110.          *attr = 0; 
  111.          return 0; 
  112.   
  113. intsp_thread_attr_setdetachstate( sp_thread_attr_t * attr, int detachstate ) 
  114.          *attr |= detachstate; 
  115.          return 0; 
  116.   
  117. int sp_thread_create(sp_thread_t * thread, sp_thread_attr_t * attr, 
  118.                    sp_thread_func_t myfunc, void * args ) 
  119.          // _beginthreadex returns 0 on an error 
  120.          HANDLE h = (HANDLE)_beginthreadex( NULL, 0, myfunc, args, 0,thread ); 
  121.          return h > 0 ? 0 : GetLastError(); 
  122.   
  123. #endif 
  124.   
  125. #endif 
  126. threadpool.h 
  127. /**
  128. * threadpool.h
  129. *
  130. * This file declares the functionalityassociated with
  131. * your implementation of a threadpool.
  132. */ 
  133.   
  134. #ifndef __threadpool_h__ 
  135. #define __threadpool_h__ 
  136.   
  137. #ifdef __cplusplus 
  138. extern "C"
  139. #endif 
  140.   
  141. // maximum number of threadsallowed in a pool 
  142. #define MAXT_IN_POOL 200 
  143.   
  144. // You must hide the internaldetails of the threadpool 
  145. // structure from callers, thusdeclare threadpool of type "void". 
  146. // In threadpool.c, you willuse type conversion to coerce 
  147. // variables of type"threadpool" back and forth to a 
  148. // richer, internal type.  (See threadpool.c for details.) 
  149.   
  150. typedef void *threadpool; 
  151.   
  152. // "dispatch_fn"declares a typed function pointer.  A 
  153. // variable of type"dispatch_fn" points to a function 
  154. // with the followingsignature: 
  155. // 
  156. //     void dispatch_function(void *arg); 
  157.   
  158. typedef void(*dispatch_fn)(void *); 
  159.   
  160. /**
  161. * create_threadpool creates a fixed-sizedthread
  162. * pool. If the function succeeds, it returns a (non-NULL)
  163. * "threadpool", else it returnsNULL.
  164. */ 
  165. threadpoolcreate_threadpool(int num_threads_in_pool); 
  166.   
  167.   
  168. /**
  169. * dispatch sends a thread off to do somework.  If
  170. * all threads in the pool are busy, dispatchwill
  171. * block until a thread becomes free and isdispatched.
  172. *
  173. * Once a thread is dispatched, this functionreturns
  174. * immediately.
  175. *
  176. * The dispatched thread calls into thefunction
  177. * "dispatch_to_here" with argument"arg".
  178. */ 
  179. intdispatch_threadpool(threadpool from_me, dispatch_fn dispatch_to_here, 
  180.                void *arg); 
  181.   
  182. /**
  183. * destroy_threadpool kills the threadpool,causing
  184. * all threads in it to commit suicide, andthen
  185. * frees all the memory associated with thethreadpool.
  186. */ 
  187. voiddestroy_threadpool(threadpool destroyme); 
  188.   
  189. #ifdef __cplusplus 
  190. #endif 
  191.   
  192. #endif 
  193. threadpool.c 
  194. /**
  195. * threadpool.c
  196. *
  197. * This file will contain your implementation ofa threadpool.
  198. */ 
  199.   
  200. #include <stdio.h> 
  201. #include <stdlib.h> 
  202. //#include <unistd.h> 
  203. //#include <sp_thread.h> 
  204. #include <string.h> 
  205.   
  206. #include"threadpool.h" 
  207. #include "spthread.h" 
  208.   
  209. typedef struct _thread_st { 
  210.          sp_thread_t id; 
  211.          sp_thread_mutex_t mutex; 
  212.          sp_thread_cond_t cond; 
  213.          dispatch_fn fn; 
  214.          void *arg; 
  215.          threadpool parent; 
  216. } _thread; 
  217.   
  218. // _threadpool is the internalthreadpool structure that is 
  219. // cast to type"threadpool" before it given out to callers 
  220. typedef struct _threadpool_st { 
  221.          // you should fill in this structure with whatever you need 
  222.          sp_thread_mutex_t tp_mutex; 
  223.          sp_thread_cond_t tp_idle; 
  224.          sp_thread_cond_t tp_full; 
  225.          sp_thread_cond_t tp_empty; 
  226.          _thread ** tp_list; 
  227.          int tp_index; 
  228.          int tp_max_index; 
  229.          int tp_stop; 
  230.   
  231.          int tp_total; 
  232. } _threadpool; 
  233.   
  234. threadpoolcreate_threadpool(int num_threads_in_pool) 
  235.          _threadpool *pool; 
  236.   
  237.          // sanity check the argument 
  238.          if ((num_threads_in_pool <= 0) || (num_threads_in_pool> MAXT_IN_POOL)) 
  239.                    return NULL; 
  240.   
  241.          pool = (_threadpool *) malloc(sizeof(_threadpool)); 
  242.          if (pool == NULL) { 
  243.                    fprintf(stderr, "Out of memory creating a newthreadpool!\n"); 
  244.                    return NULL; 
  245.          } 
  246.   
  247.          // add your code here to initialize the newly createdthreadpool 
  248.          sp_thread_mutex_init( &pool->tp_mutex, NULL ); 
  249.          sp_thread_cond_init( &pool->tp_idle, NULL ); 
  250.          sp_thread_cond_init( &pool->tp_full, NULL ); 
  251.          sp_thread_cond_init( &pool->tp_empty, NULL ); 
  252.          pool->tp_max_index = num_threads_in_pool; 
  253.          pool->tp_index = 0; 
  254.          pool->tp_stop = 0; 
  255.          pool->tp_total = 0; 
  256.          pool->tp_list = ( _thread ** )malloc( sizeof( void * ) *MAXT_IN_POOL ); 
  257.          memset( pool->tp_list, 0, sizeof( void * ) * MAXT_IN_POOL); 
  258.   
  259.          return (threadpool) pool; 
  260.   
  261. int save_thread( _threadpool *pool, _thread * thread
  262.          int ret = -1; 
  263.   
  264.          sp_thread_mutex_lock( &pool->tp_mutex ); 
  265.   
  266.          if( pool->tp_index < pool->tp_max_index ) { 
  267.                    pool->tp_list[ pool->tp_index ] = thread
  268.                    pool->tp_index++; 
  269.                    ret = 0; 
  270.   
  271.                    sp_thread_cond_signal( &pool->tp_idle ); 
  272.   
  273.                    if( pool->tp_index >= pool->tp_total ) { 
  274.                             sp_thread_cond_signal(&pool->tp_full ); 
  275.                    } 
  276.          } 
  277.   
  278.          sp_thread_mutex_unlock( &pool->tp_mutex ); 
  279.   
  280.          return ret; 
  281.   
  282. sp_thread_result_tSP_THREAD_CALL wrapper_fn( void * arg ) 
  283.          _thread * thread = (_thread*)arg; 
  284.          _threadpool * pool = (_threadpool*)thread->parent; 
  285.   
  286.          for( ; 0 == ((_threadpool*)thread->parent)->tp_stop; ){ 
  287.                    thread->fn( thread->arg ); 
  288.   
  289.                    if( 0 !=((_threadpool*)thread->parent)->tp_stop ) break
  290.   
  291.                    sp_thread_mutex_lock( &thread->mutex ); 
  292.                    if( 0 == save_thread( thread->parent, thread )) { 
  293.                             sp_thread_cond_wait(&thread->cond, &thread->mutex ); 
  294.                             sp_thread_mutex_unlock(&thread->mutex ); 
  295.                    } else
  296.                             sp_thread_mutex_unlock(&thread->mutex ); 
  297.                             sp_thread_cond_destroy(&thread->cond ); 
  298.                             sp_thread_mutex_destroy(&thread->mutex ); 
  299.   
  300.                             free( thread ); 
  301.                             break
  302.                    } 
  303.          } 
  304.   
  305.          sp_thread_mutex_lock( &pool->tp_mutex ); 
  306.          pool->tp_total--; 
  307.          if( pool->tp_total <= 0 ) sp_thread_cond_signal(&pool->tp_empty ); 
  308.          sp_thread_mutex_unlock( &pool->tp_mutex ); 
  309.   
  310.          return 0; 
  311.   
  312. intdispatch_threadpool(threadpool from_me, dispatch_fn dispatch_to_here, void*arg) 
  313.          int ret = 0; 
  314.   
  315.          _threadpool *pool = (_threadpool *) from_me; 
  316.          sp_thread_attr_t attr; 
  317.          _thread * thread = NULL; 
  318.   
  319.          // add your code here to dispatch a thread 
  320.          sp_thread_mutex_lock( &pool->tp_mutex ); 
  321.   
  322.          if( pool->tp_index <= 0 && pool->tp_total>= pool->tp_max_index ) { 
  323.                    sp_thread_cond_wait( &pool->tp_idle,&pool->tp_mutex ); 
  324.          } 
  325.   
  326.          if( pool->tp_index <= 0 ) { 
  327.                    _thread * thread = ( _thread * )malloc( sizeof(_thread ) ); 
  328.                    memset( &( thread->id ), 0, sizeof(thread->id ) ); 
  329.                    sp_thread_mutex_init( &thread->mutex, NULL); 
  330.                    sp_thread_cond_init( &thread->cond, NULL ); 
  331.                    thread->fn = dispatch_to_here; 
  332.                    thread->arg = arg; 
  333.                    thread->parent = pool; 
  334.   
  335.                    sp_thread_attr_init( &attr ); 
  336.                    sp_thread_attr_setdetachstate( &attr,SP_THREAD_CREATE_DETACHED ); 
  337.   
  338.                    if( 0 == sp_thread_create( &thread->id,&attr, wrapper_fn, thread ) ) { 
  339.                             pool->tp_total++; 
  340.                             printf( "create thread#%ld\n",thread->id ); 
  341.                    } else
  342.                             ret = -1; 
  343.                             printf( "cannot createthread\n" ); 
  344.                             sp_thread_mutex_destroy(&thread->mutex ); 
  345.                             sp_thread_cond_destroy(&thread->cond ); 
  346.                             free( thread ); 
  347.                    } 
  348.          } else
  349.                    pool->tp_index--; 
  350.                    thread = pool->tp_list[ pool->tp_index ]; 
  351.                    pool->tp_list[ pool->tp_index ] = NULL; 
  352.   
  353.                    thread->fn = dispatch_to_here; 
  354.                    thread->arg = arg; 
  355.                    thread->parent = pool; 
  356.   
  357.                    sp_thread_mutex_lock( &thread->mutex ); 
  358.                    sp_thread_cond_signal( &thread->cond ) ; 
  359.                    sp_thread_mutex_unlock ( &thread->mutex ); 
  360.          } 
  361.   
  362.          sp_thread_mutex_unlock( &pool->tp_mutex ); 
  363.   
  364.          return ret; 
  365.   
  366. void destroy_threadpool(threadpooldestroyme) 
  367.          _threadpool *pool = (_threadpool *) destroyme; 
  368.   
  369.          // add your code here to kill a threadpool 
  370.          int i = 0; 
  371.   
  372.          sp_thread_mutex_lock( &pool->tp_mutex ); 
  373.   
  374.          if( pool->tp_index < pool->tp_total ) { 
  375.                    printf( "waiting for %d thread(s) tofinish\n", pool->tp_total - pool->tp_index ); 
  376.                    sp_thread_cond_wait( &pool->tp_full,&pool->tp_mutex ); 
  377.          } 
  378.   
  379.          pool->tp_stop = 1; 
  380.   
  381.          for( i = 0; i < pool->tp_index; i++ ) { 
  382.                    _thread * thread = pool->tp_list[ i ]; 
  383.   
  384.                    sp_thread_mutex_lock( &thread->mutex ); 
  385.                    sp_thread_cond_signal( &thread->cond ) ; 
  386.                    sp_thread_mutex_unlock ( &thread->mutex ); 
  387.          } 
  388.   
  389.          if( pool->tp_total > 0 ) { 
  390.                    printf( "waiting for %d thread(s) toexit\n", pool->tp_total ); 
  391.                    sp_thread_cond_wait( &pool->tp_empty,&pool->tp_mutex ); 
  392.          } 
  393.   
  394.          for( i = 0; i < pool->tp_index; i++ ) { 
  395.                    free( pool->tp_list[ i ] ); 
  396.                    pool->tp_list[ i ] = NULL; 
  397.          } 
  398.   
  399.          sp_thread_mutex_unlock( &pool->tp_mutex ); 
  400.   
  401.          pool->tp_index = 0; 
  402.   
  403.          sp_thread_mutex_destroy( &pool->tp_mutex ); 
  404.          sp_thread_cond_destroy( &pool->tp_idle ); 
  405.          sp_thread_cond_destroy( &pool->tp_full ); 
  406.          sp_thread_cond_destroy( &pool->tp_empty ); 
  407.   
  408.          free( pool->tp_list ); 
  409.          free( pool ); 
spthread.h
#ifndef __spthread_hpp__
#define __spthread_hpp__
 
#ifndef WIN32
 
/// pthread
 
#include <pthread.h>
#include <unistd.h>
 
typedef void *sp_thread_result_t;
typedef pthread_mutex_tsp_thread_mutex_t;
typedef pthread_cond_t  sp_thread_cond_t;
typedef pthread_t       sp_thread_t;
typedef pthread_attr_t  sp_thread_attr_t;
 
#definesp_thread_mutex_init(m,a)  pthread_mutex_init(m,a)
#definesp_thread_mutex_destroy(m) pthread_mutex_destroy(m)
#definesp_thread_mutex_lock(m)    pthread_mutex_lock(m)
#define sp_thread_mutex_unlock(m)   pthread_mutex_unlock(m)
 
#definesp_thread_cond_init(c,a)   pthread_cond_init(c,a)
#definesp_thread_cond_destroy(c)  pthread_cond_destroy(c)
#definesp_thread_cond_wait(c,m)   pthread_cond_wait(c,m)
#definesp_thread_cond_signal(c)   pthread_cond_signal(c)
 
#definesp_thread_attr_init(a)       pthread_attr_init(a)
#definesp_thread_attr_setdetachstate pthread_attr_setdetachstate
#defineSP_THREAD_CREATE_DETACHED    PTHREAD_CREATE_DETACHED
 
#define sp_thread_self    pthread_self
#define sp_thread_create  pthread_create
 
#define SP_THREAD_CALL
typedef sp_thread_result_t ( *sp_thread_func_t )( void * args );
 
#define sp_sleep(x) sleep(x)
 
#else///
 
// win32 thread
 
#include <winsock2.h>
#include <process.h>
 
typedef unsigned sp_thread_t;
 
typedef unsignedsp_thread_result_t;
#define SP_THREAD_CALL__stdcall
typedef sp_thread_result_t (__stdcall * sp_thread_func_t )( void * args );
 
typedef HANDLE  sp_thread_mutex_t;
typedef HANDLE  sp_thread_cond_t;
typedef DWORD   sp_thread_attr_t;
 
#defineSP_THREAD_CREATE_DETACHED 1
#define sp_sleep(x)Sleep(1000*x)
 
int sp_thread_mutex_init(sp_thread_mutex_t * mutex, void * attr )
{
         *mutex = CreateMutex( NULL, FALSE, NULL );
         return NULL == * mutex ? GetLastError() : 0;
}
 
int sp_thread_mutex_destroy(sp_thread_mutex_t * mutex )
{
         int ret = CloseHandle( *mutex );
 
         return 0 == ret ? GetLastError() : 0;
}
 
int sp_thread_mutex_lock(sp_thread_mutex_t * mutex )
{
         int ret = WaitForSingleObject( *mutex, INFINITE );
         return WAIT_OBJECT_0 == ret ? 0 : GetLastError();
}
 
int sp_thread_mutex_unlock(sp_thread_mutex_t * mutex )
{
         int ret = ReleaseMutex( *mutex );
         return 0 != ret ? 0 : GetLastError();
}
 
int sp_thread_cond_init(sp_thread_cond_t * cond, void * attr )
{
         *cond = CreateEvent( NULL, FALSE, FALSE, NULL );
         return NULL == *cond ? GetLastError() : 0;
}
 
int sp_thread_cond_destroy(sp_thread_cond_t * cond )
{
         int ret = CloseHandle( *cond );
         return 0 == ret ? GetLastError() : 0;
}
 
/*
Caller MUST be holding themutex lock; the
lock is released and the calleris blocked waiting
on 'cond'. When 'cond' issignaled, the mutex
is re-acquired before returningto the caller.
*/
int sp_thread_cond_wait(sp_thread_cond_t * cond, sp_thread_mutex_t * mutex )
{
         int ret = 0;
 
         sp_thread_mutex_unlock( mutex );
 
         ret = WaitForSingleObject( *cond, INFINITE );
 
         sp_thread_mutex_lock( mutex );
 
         return WAIT_OBJECT_0 == ret ? 0 : GetLastError();
}
 
int sp_thread_cond_signal(sp_thread_cond_t * cond )
{
         int ret = SetEvent( *cond );
         return 0 == ret ? GetLastError() : 0;
}
 
sp_thread_t sp_thread_self()
{
         return GetCurrentThreadId();
}
 
int sp_thread_attr_init(sp_thread_attr_t * attr )
{
         *attr = 0;
         return 0;
}
 
intsp_thread_attr_setdetachstate( sp_thread_attr_t * attr, int detachstate )
{
         *attr |= detachstate;
         return 0;
}
 
int sp_thread_create(sp_thread_t * thread, sp_thread_attr_t * attr,
                   sp_thread_func_t myfunc, void * args )
{
         // _beginthreadex returns 0 on an error
         HANDLE h = (HANDLE)_beginthreadex( NULL, 0, myfunc, args, 0,thread );
         return h > 0 ? 0 : GetLastError();
}
 
#endif
 
#endif
threadpool.h
/**
 * threadpool.h
 *
 * This file declares the functionalityassociated with
 * your implementation of a threadpool.
 */
 
#ifndef __threadpool_h__
#define __threadpool_h__
 
#ifdef __cplusplus
extern "C" {
#endif
 
// maximum number of threadsallowed in a pool
#define MAXT_IN_POOL 200
 
// You must hide the internaldetails of the threadpool
// structure from callers, thusdeclare threadpool of type "void".
// In threadpool.c, you willuse type conversion to coerce
// variables of type"threadpool" back and forth to a
// richer, internal type.  (See threadpool.c for details.)
 
typedef void *threadpool;
 
// "dispatch_fn"declares a typed function pointer.  A
// variable of type"dispatch_fn" points to a function
// with the followingsignature:
//
//     void dispatch_function(void *arg);
 
typedef void(*dispatch_fn)(void *);
 
/**
 * create_threadpool creates a fixed-sizedthread
 * pool. If the function succeeds, it returns a (non-NULL)
 * "threadpool", else it returnsNULL.
 */
threadpoolcreate_threadpool(int num_threads_in_pool);
 
 
/**
 * dispatch sends a thread off to do somework.  If
 * all threads in the pool are busy, dispatchwill
 * block until a thread becomes free and isdispatched.
 *
 * Once a thread is dispatched, this functionreturns
 * immediately.
 *
 * The dispatched thread calls into thefunction
 * "dispatch_to_here" with argument"arg".
 */
intdispatch_threadpool(threadpool from_me, dispatch_fn dispatch_to_here,
               void *arg);
 
/**
 * destroy_threadpool kills the threadpool,causing
 * all threads in it to commit suicide, andthen
 * frees all the memory associated with thethreadpool.
 */
voiddestroy_threadpool(threadpool destroyme);
 
#ifdef __cplusplus
}
#endif
 
#endif
threadpool.c
/**
 * threadpool.c
 *
 * This file will contain your implementation ofa threadpool.
 */
 
#include <stdio.h>
#include <stdlib.h>
//#include <unistd.h>
//#include <sp_thread.h>
#include <string.h>
 
#include"threadpool.h"
#include "spthread.h"
 
typedef struct _thread_st {
         sp_thread_t id;
         sp_thread_mutex_t mutex;
         sp_thread_cond_t cond;
         dispatch_fn fn;
         void *arg;
         threadpool parent;
} _thread;
 
// _threadpool is the internalthreadpool structure that is
// cast to type"threadpool" before it given out to callers
typedef struct _threadpool_st {
         // you should fill in this structure with whatever you need
         sp_thread_mutex_t tp_mutex;
         sp_thread_cond_t tp_idle;
         sp_thread_cond_t tp_full;
         sp_thread_cond_t tp_empty;
         _thread ** tp_list;
         int tp_index;
         int tp_max_index;
         int tp_stop;
 
         int tp_total;
} _threadpool;
 
threadpool create_threadpool(int num_threads_in_pool)
{
         _threadpool *pool;
 
         // sanity check the argument
         if ((num_threads_in_pool <= 0) || (num_threads_in_pool > MAXT_IN_POOL))
                   return NULL;
 
         pool = (_threadpool *) malloc(sizeof(_threadpool));
         if (pool == NULL) {
                   fprintf(stderr, "Out of memory creating a new threadpool!\n");
                   return NULL;
         }
 
         // add your code here to initialize the newly createdthreadpool
         sp_thread_mutex_init( &pool->tp_mutex, NULL );
         sp_thread_cond_init( &pool->tp_idle, NULL );
         sp_thread_cond_init( &pool->tp_full, NULL );
         sp_thread_cond_init( &pool->tp_empty, NULL );
         pool->tp_max_index = num_threads_in_pool;
         pool->tp_index = 0;
         pool->tp_stop = 0;
         pool->tp_total = 0;
         pool->tp_list = ( _thread ** )malloc( sizeof( void * ) * MAXT_IN_POOL );
         memset( pool->tp_list, 0, sizeof( void * ) * MAXT_IN_POOL );
 
         return (threadpool) pool;
}
 
int save_thread( _threadpool *pool, _thread * thread )
{
         int ret = -1;
 
         sp_thread_mutex_lock( &pool->tp_mutex );
 
         if( pool->tp_index < pool->tp_max_index ) {
                   pool->tp_list[ pool->tp_index ] = thread;
                   pool->tp_index++;
                   ret = 0;
 
                   sp_thread_cond_signal( &pool->tp_idle );
 
                   if( pool->tp_index >= pool->tp_total ) {
                            sp_thread_cond_signal(&pool->tp_full );
                   }
         }
 
         sp_thread_mutex_unlock( &pool->tp_mutex );
 
         return ret;
}
 
sp_thread_result_t SP_THREAD_CALL wrapper_fn( void * arg )
{
         _thread * thread = (_thread*)arg;
         _threadpool * pool = (_threadpool*)thread->parent;
 
         for( ; 0 == pool->tp_stop; ){
                   thread->fn( thread->arg );
 
                   if( 0 != pool->tp_stop ) break;
 
                   sp_thread_mutex_lock( &thread->mutex );
                   if( 0 == save_thread( pool, thread ) ) {
                            sp_thread_cond_wait(&thread->cond, &thread->mutex );
                            sp_thread_mutex_unlock(&thread->mutex );
                   } else {
                            sp_thread_mutex_unlock(&thread->mutex );
                            sp_thread_cond_destroy(&thread->cond );
                            sp_thread_mutex_destroy(&thread->mutex );
 
                            free( thread );
                            break;
                   }
         }
 
         sp_thread_mutex_lock( &pool->tp_mutex );
         pool->tp_total--;
         if( pool->tp_total <= 0 ) sp_thread_cond_signal(&pool->tp_empty );
         sp_thread_mutex_unlock( &pool->tp_mutex );
 
         return 0;
}
 
int dispatch_threadpool(threadpool from_me, dispatch_fn dispatch_to_here, void *arg)
{
         int ret = 0;
 
         _threadpool *pool = (_threadpool *) from_me;
         sp_thread_attr_t attr;
         _thread * thread = NULL;
 
         // add your code here to dispatch a thread
         sp_thread_mutex_lock( &pool->tp_mutex );
 
         // wait in a loop: the condition must be rechecked after every wakeup
         while( pool->tp_index <= 0 && pool->tp_total >= pool->tp_max_index ) {
                   sp_thread_cond_wait( &pool->tp_idle, &pool->tp_mutex );
         }
 
         if( pool->tp_index <= 0 ) {
                   thread = ( _thread * )malloc( sizeof( _thread ) );
                   memset( &( thread->id ), 0, sizeof(thread->id ) );
                   sp_thread_mutex_init( &thread->mutex, NULL);
                   sp_thread_cond_init( &thread->cond, NULL );
                   thread->fn = dispatch_to_here;
                   thread->arg = arg;
                   thread->parent = pool;
 
                   sp_thread_attr_init( &attr );
                   sp_thread_attr_setdetachstate( &attr, SP_THREAD_CREATE_DETACHED );
 
                   if( 0 == sp_thread_create( &thread->id, &attr, wrapper_fn, thread ) ) {
                            pool->tp_total++;
                            printf( "create thread#%ld\n", (long)thread->id );
                   } else {
                            ret = -1;
                            printf( "cannot create thread\n" );
                            sp_thread_mutex_destroy(&thread->mutex );
                            sp_thread_cond_destroy(&thread->cond );
                            free( thread );
                   }
         } else {
                   pool->tp_index--;
                   thread = pool->tp_list[ pool->tp_index ];
                   pool->tp_list[ pool->tp_index ] = NULL;
 
                   thread->fn = dispatch_to_here;
                   thread->arg = arg;
                   thread->parent = pool;
 
                   sp_thread_mutex_lock( &thread->mutex );
                   sp_thread_cond_signal( &thread->cond ) ;
                   sp_thread_mutex_unlock ( &thread->mutex );
         }
 
         sp_thread_mutex_unlock( &pool->tp_mutex );
 
         return ret;
}
 
void destroy_threadpool( threadpool destroyme )
{
         _threadpool *pool = (_threadpool *) destroyme;
 
         // add your code here to kill a threadpool
         int i = 0;
 
         sp_thread_mutex_lock( &pool->tp_mutex );
 
         while( pool->tp_index < pool->tp_total ) {
                   printf( "waiting for %d thread(s) to finish\n", pool->tp_total - pool->tp_index );
                   sp_thread_cond_wait( &pool->tp_full, &pool->tp_mutex );
         }
 
         pool->tp_stop = 1;
 
         for( i = 0; i < pool->tp_index; i++ ) {
                   _thread * thread = pool->tp_list[ i ];
 
                   sp_thread_mutex_lock( &thread->mutex );
                   sp_thread_cond_signal( &thread->cond ) ;
                   sp_thread_mutex_unlock ( &thread->mutex );
         }
 
         while( pool->tp_total > 0 ) {
                   printf( "waiting for %d thread(s) to exit\n", pool->tp_total );
                   sp_thread_cond_wait( &pool->tp_empty, &pool->tp_mutex );
         }
 
         for( i = 0; i < pool->tp_index; i++ ) {
                   free( pool->tp_list[ i ] );
                   pool->tp_list[ i ] = NULL;
         }
 
         sp_thread_mutex_unlock( &pool->tp_mutex );
 
         pool->tp_index = 0;
 
         sp_thread_mutex_destroy( &pool->tp_mutex );
         sp_thread_cond_destroy( &pool->tp_idle );
         sp_thread_cond_destroy( &pool->tp_full );
         sp_thread_cond_destroy( &pool->tp_empty );
 
         free( pool->tp_list );
         free( pool );
}
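
To make the interface concrete, here is a minimal usage sketch of the thread pool above; the task function, the pool size, and the ids array are made up for illustration:

#include <stdio.h>
#include "threadpool.h"

/* trivial task matching the dispatch_fn signature */
static void print_task( void * arg )
{
    printf( "task %d running\n", *(int *)arg );
}

int main( void )
{
    int ids[4] = { 0, 1, 2, 3 };
    int i;

    threadpool pool = create_threadpool( 4 );
    if( pool == NULL ) return 1;

    for( i = 0; i < 4; i++ )
        dispatch_threadpool( pool, print_task, &ids[i] );

    destroy_threadpool( pool );   /* waits for the workers, then frees everything */
    return 0;
}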

2)    Common design patterns

Depending on whether the socket is blocking or non-blocking and whether the IO is synchronous or asynchronous, there are four combinations:

blocking synchronous      |    blocking asynchronous

__________________________|______________________________

non-blocking synchronous  |    non-blocking asynchronous

Blocking synchronous is the primitive approach, and the one most textbooks introduce, since sockets and IO are blocking and synchronous by default. The basic flow is as follows:

listen_fd = socket( AF_INET, SOCK_STREAM, 0 );
bind( listen_fd, (struct sockaddr *)&my_addr, sizeof( struct sockaddr_in ) );
listen( listen_fd, 1 );
accept_fd = accept( listen_fd, (struct sockaddr *)&remote_addr, &addr_len );
recv( accept_fd, &in_buf, 1024, 0 );
close( accept_fd );

The blocking asynchronous approach is an improvement, but the socket is still blocking: the next connection cannot be accepted until the previous one has been handled, which is unacceptable for a high-concurrency server. It merely adds select on top of the blocking synchronous flow above (strictly speaking, select is an IO multiplexing technique; since Linux does not yet have a complete asynchronous IO implementation, and Winsock is less straightforward to reason about than sockets on Linux, no strict distinction is drawn here for convenience), or some other asynchronous IO mechanism.

The non-blocking synchronous approach, obtained by setting the NONBLOCK socket option, can accept connections quickly, but the processing still uses synchronous IO, so server performance remains poor.

The three approaches above are not discussed further. The focus below is on the non-blocking asynchronous approach.

With non-blocking asynchronous IO, asynchronous IO may have several implementations on one system and different implementations across systems, so several common IO mechanisms and server frameworks are introduced below.

Ø  Select

Select works by polling the registered fds. It is a relatively old IO multiplexing mechanism and comparatively inefficient, but it is supported on both Windows and Linux.

The basic framework is as follows:

listen_fd = socket( AF_INET, SOCK_STREAM, 0 );
fcntl( listen_fd, F_SETFL, flags|O_NONBLOCK );
bind( listen_fd, (struct sockaddr *)&my_addr, sizeof( struct sockaddr_in ) );
listen( listen_fd, 1 );
FD_ZERO( &fd_sets );
FD_SET( listen_fd, &fd_sets );
for( k = 0; k <= i; k++ ){              /* register every accepted fd */
         FD_SET( accept_fds[k], &fd_sets );
}
events = select( max_fd + 1, &fd_sets, NULL, NULL, NULL );
if( FD_ISSET( listen_fd, &fd_sets ) ){  /* new connection pending */
         accept_fd = accept( listen_fd, (struct sockaddr *)&remote_addr, &addr_len );
}
for( j = 0; j <= i; j++ ){              /* read from every ready connection */
         if( FD_ISSET( accept_fds[j], &fd_sets ) ){
                   recv( accept_fds[j], &in_buf, 1024, 0 );
         }
}

Ø  Epoll

Epoll is a high-performance IO multiplexing mechanism supported by Linux kernels 2.6 and later. The server framework is as follows:

listen_fd = socket( AF_INET, SOCK_STREAM, 0 );
fcntl( listen_fd, F_SETFL, flags|O_NONBLOCK );
bind( listen_fd, (struct sockaddr *)&my_addr, sizeof( struct sockaddr_in ) );
listen( listen_fd, 1 );
epoll_ctl( epfd, EPOLL_CTL_ADD, listen_fd, &ev );
ev_s = epoll_wait( epfd, events, 20, 500 );
for( i = 0; i < ev_s; i++ ){
         if( events[i].data.fd == listen_fd ){   /* new connection: accept and register it */
                  accept_fd = accept( listen_fd, (struct sockaddr *)&remote_addr, &addr_len );
                  fcntl( accept_fd, F_SETFL, flags|O_NONBLOCK );
                  epoll_ctl( epfd, EPOLL_CTL_ADD, accept_fd, &ev );
         }
         else if( events[i].events & EPOLLIN ){  /* data ready: read it */
                  recv( events[i].data.fd, &in_buf, 1024, 0 );
         }
}

Ø  AIO

On Windows, Microsoft provides true asynchronous IO (overlapped IO with completion ports), which makes it straightforward to build a high-concurrency server. The framework is as follows:

WSAStartup( 0x0202, &wsaData );
CreateIoCompletionPort( INVALID_HANDLE_VALUE, NULL, 0, 0 );
WSASocket( AF_INET, SOCK_STREAM, 0, NULL, 0, WSA_FLAG_OVERLAPPED );
bind( Listen, (PSOCKADDR)&InternetAddr, sizeof( InternetAddr ) );
listen( Listen, 5 );
WSAAccept( Listen, NULL, NULL, NULL, 0 );
PerHandleData = (LPPER_HANDLE_DATA)GlobalAlloc( GPTR, sizeof( PER_HANDLE_DATA ) );
CreateIoCompletionPort( (HANDLE)Accept, CompletionPort, (DWORD)PerHandleData, 0 );
PerIoData = (LPPER_IO_OPERATION_DATA)GlobalAlloc( GPTR, sizeof( PER_IO_OPERATION_DATA ) );
WSARecv( Accept, &(PerIoData->DataBuf), 1, &RecvBytes, &Flags, &(PerIoData->Overlapped), NULL );
GetQueuedCompletionStatus( CompletionPort, &BytesTransferred,
         (LPDWORD)&PerHandleData, (LPOVERLAPPED *)&PerIoData, INFINITE );
if( PerIoData->BytesRECV > PerIoData->BytesSEND ){
         WSASend( PerHandleData->Socket, &(PerIoData->DataBuf), 1, &SendBytes, 0,
             &(PerIoData->Overlapped), NULL );
}

3)    Adding a thread pool and an event demultiplexer

The approaches above only use non-blocking sockets and asynchronous IO, which speeds up accepting and handling connections but still cannot serve two clients at the same time. For that, multithreading must be introduced, and with it come several strategies. On Linux, a common one is to have the main process accept connections and fork child processes to handle them; on Windows, a thread pool is usually used to avoid the cost of creating and destroying threads (thread pools work on Linux too, of course). Once multiple processes or threads are in play, event handling can be optimized further: define a simple event handler, put all events into a queue, and let the threads fetch events from the queue and do their work. That is the half-sync/half-async pattern mentioned earlier. If instead the thread that accepts a connection goes on to handle the subsequent sends and receives itself, while another thread is elected leader to keep accepting connections and the remaining threads follow, that is the leader/followers pattern. See the concrete Reactor and Proactor implementations in ACE for details; half-sync/half-async is also widely discussed online and worth studying on your own. The code gets fairly involved, so it is not given here (a minimal event-queue sketch of the half-sync/half-async idea appears after the example below). Instead, here is a similar but simpler Linux implementation using forked child processes plus epoll:

#include <sys/socket.h>
#include <sys/wait.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/epoll.h>
#include <sys/sendfile.h>
#include <sys/stat.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <strings.h>
#include <fcntl.h>
#include <errno.h>

#define HANDLE_INFO   1
#define HANDLE_SEND   2
#define HANDLE_DEL    3
#define HANDLE_CLOSE  4

#define MAX_REQLEN         1024
#define MAX_PROCESS_CONN    3
#define FIN_CHAR           0x00
#define SUCCESS  0
#define ERROR   -1

typedef struct event_handle{
    int socket_fd;
    int file_fd;
    off_t file_pos;    /* sendfile offset, so &file_pos can be passed to sendfile directly */
    int epoll_fd;
    char request[MAX_REQLEN];
    int request_len;
    int ( * read_handle )( struct event_handle * ev );
    int ( * write_handle )( struct event_handle * ev );
    int handle_method;
} EV,* EH;
typedef int ( * EVENT_HANDLE )( struct event_handle * ev );

int create_listen_fd( int port ){
    int listen_fd;
    struct sockaddr_in my_addr;
    if( ( listen_fd = socket( AF_INET, SOCK_STREAM, 0 ) ) == -1 ){
        perror( "create socket error" );
        exit( 1 );
    }
    int flag = 1;    /* enable SO_REUSEADDR */
    int olen = sizeof(int);
    if( setsockopt( listen_fd, SOL_SOCKET, SO_REUSEADDR
                       , (const void *)&flag, olen ) == -1 ){
        perror( "setsockopt error" );
    }
    flag = 5;
    if( setsockopt( listen_fd, IPPROTO_TCP, TCP_DEFER_ACCEPT, &flag, olen ) == -1 ){
        perror( "setsockopt error" );
    }
    flag = 1;
    if( setsockopt( listen_fd, IPPROTO_TCP, TCP_CORK, &flag, olen ) == -1 ){
        perror( "setsockopt error" );
    }
    int flags = fcntl( listen_fd, F_GETFL, 0 );
    fcntl( listen_fd, F_SETFL, flags|O_NONBLOCK );
    my_addr.sin_family = AF_INET;
    my_addr.sin_port = htons( port );
    my_addr.sin_addr.s_addr = INADDR_ANY;
    bzero( &( my_addr.sin_zero ), 8 );
    if( bind( listen_fd, ( struct sockaddr * )&my_addr,
    sizeof( struct sockaddr_in ) ) == -1 ) {
        perror( "bind error" );
        exit( 1 );
    }
    if( listen( listen_fd, 1 ) == -1 ){
        perror( "listen error" );
        exit( 1 );
    }
    return listen_fd;
}

int create_accept_fd( int listen_fd ){
    socklen_t addr_len = sizeof( struct sockaddr_in );
    struct sockaddr_in remote_addr;
    int accept_fd = accept( listen_fd,
        ( struct sockaddr * )&remote_addr, &addr_len );
    int flags = fcntl( accept_fd, F_GETFL, 0 );
    fcntl( accept_fd, F_SETFL, flags|O_NONBLOCK );
    return accept_fd;
}

int fork_process( int process_num ){
    int i;
    int pid=-1;
    for( i = 0; i < process_num; i++ ){
        if( pid != 0 ){
            pid = fork();
        }
    }
    return pid;
}

int init_evhandle(EH ev,int socket_fd,int epoll_fd,EVENT_HANDLE r_handle,EVENT_HANDLE w_handle){
    ev->epoll_fd = epoll_fd;
    ev->socket_fd = socket_fd;
    ev->read_handle = r_handle;
    ev->write_handle = w_handle;
    ev->file_pos = 0;
    ev->request_len = 0;
    ev->handle_method = 0;
    memset( ev->request, 0, MAX_REQLEN );
    return SUCCESS;
}
//accept->accept_queue->request->request_queue->output->output_queue
//multi process sendfile
int parse_request(EH ev){
    ev->request_len--;                               /* drop the trailing '\n' */
    *( ev->request + ev->request_len - 1 ) = 0x00;   /* and terminate at the '\r' */
    int i;
    for( i=0; i<ev->request_len; i++ ){
        if( ev->request[i] == ':' ){
            ev->request_len = ev->request_len-i-1;
            char temp[MAX_REQLEN];
            memcpy( temp, ev->request, i );
            ev->handle_method = atoi( temp );
            memcpy( temp, ev->request+i+1, ev->request_len );
            memcpy( ev->request, temp, ev->request_len );
            break;
        }
    }
    //handle_request(ev );
    //registerto epoll EPOLLOUT

    struct epoll_event ev_temp;
    ev_temp.data.ptr = ev;
    ev_temp.events = EPOLLOUT|EPOLLET;
    epoll_ctl( ev->epoll_fd, EPOLL_CTL_MOD, ev->socket_fd, &ev_temp );
    return SUCCESS;
}

int handle_request(EH ev){
    struct stat file_info;
    switch( ev->handle_method ){
        case HANDLE_INFO:
            ev->file_fd = open( ev->request, O_RDONLY );
            if( ev->file_fd == -1 ){
               send( ev->socket_fd, "open file failed\n", strlen("open file failed\n"), 0 );
               return -1;
            }
            fstat(ev->file_fd, &file_info);
            char info[MAX_REQLEN];
            sprintf(info,"filelen:%d\n",file_info.st_size);
            send( ev->socket_fd, info, strlen( info ), 0 );
            break;
        case HANDLE_SEND:
            ev->file_fd = open( ev->request, O_RDONLY );
            if( ev->file_fd == -1 ){
               send( ev->socket_fd, "open file failed\n", strlen("open file failed\n"), 0 );
               return -1;
            }
            fstat(ev->file_fd, &file_info);
            sendfile( ev->socket_fd, ev->file_fd, 0, file_info.st_size );
            break;
        case HANDLE_DEL:
            break;
        case HANDLE_CLOSE:
            break;
    }
    finish_request( ev );
    return SUCCESS;
}

int finish_request(EH ev){
    close(ev->socket_fd);
    close(ev->file_fd);
    ev->handle_method = -1;
    clean_request( ev );
    return SUCCESS;
}

int clean_request(EH ev){
    memset( ev->request, 0, MAX_REQLEN );
    ev->request_len = 0;
    return SUCCESS;
}

int read_hook_v2( EH ev ){
    char in_buf[MAX_REQLEN];
    memset( in_buf, 0, MAX_REQLEN );
    int recv_num = recv( ev->socket_fd, in_buf, MAX_REQLEN, 0 );
    if( recv_num <= 0 ){    /* peer closed the connection, or recv failed */
        close( ev->socket_fd );
        return ERROR;
    }
    else{
        // check for overflow
        if( ev->request_len > MAX_REQLEN - recv_num ){
            close( ev->socket_fd );
            clean_request( ev );
            return ERROR;
        }
        memcpy( ev->request + ev->request_len, in_buf, recv_num );
        ev->request_len += recv_num;
        if( recv_num == 2 && ( !memcmp( &in_buf[recv_num-2], "\r\n", 2 ) ) ){
           parse_request(ev);
        }
    }
    return recv_num;
}

int write_hook_v1( EH ev ){
    struct stat file_info;
    ev->file_fd = open( ev->request, O_RDONLY );
    if( ev->file_fd == ERROR ){
        send( ev->socket_fd, "openfile failed\n", strlen("openfile failed\n"), 0 );
        return ERROR;
    }
    fstat(ev->file_fd, &file_info);
    int write_num;
    while(1){
        /* sendfile advances ev->file_pos itself through the offset pointer */
        write_num = sendfile( ev->socket_fd, ev->file_fd, &ev->file_pos, 10240 );
        if( write_num == ERROR ){
            if( errno == EAGAIN ){
               break;
            }
        }
        else if( write_num == 0 ){
            printf( "written:%ld\n", (long)ev->file_pos );
            //finish_request( ev );
            break;
        }
    }
    return SUCCESS;
}

int main(){
    int listen_fd = create_listen_fd( 3389 );
    int pid = fork_process( 3 );
    if( pid == 0 ){
        int accept_handles = 0;
        struct epoll_event ev, events[20];
        int epfd = epoll_create( 256 );
        int ev_s = 0;

        ev.data.fd = listen_fd;
        ev.events = EPOLLIN|EPOLLET;
        epoll_ctl( epfd, EPOLL_CTL_ADD, listen_fd, &ev );
        struct event_handle ev_handles[256];
        for( ;; ){
            ev_s = epoll_wait( epfd, events, 20, 500 );
            int i = 0;
            for( i = 0; i<ev_s; i++ ){
               if( events[i].data.fd == listen_fd ){
                   if( accept_handles < MAX_PROCESS_CONN ){
                       accept_handles++;
                       int accept_fd = create_accept_fd( listen_fd );
                       init_evhandle(&ev_handles[accept_handles],accept_fd,epfd,read_hook_v2,write_hook_v1);
                       ev.data.ptr = &ev_handles[accept_handles];
                       ev.events = EPOLLIN|EPOLLET;
                       epoll_ctl( epfd, EPOLL_CTL_ADD, accept_fd, &ev );
                   }
               }
               else if( events[i].events&EPOLLIN ){
                   EVENT_HANDLE current_handle = ( ( EH )( events[i].data.ptr ) )->read_handle;
                   EH current_event = ( EH )( events[i].data.ptr );
                   ( *current_handle )( current_event );
               }
               else if( events[i].events&EPOLLOUT ){
                   EVENT_HANDLE current_handle = ( ( EH )( events[i].data.ptr ) )->write_handle;
                   EH current_event = ( EH )( events[i].data.ptr );
                   if( ( *current_handle )( current_event )  == 0 ){
                       accept_handles--;
                   }
               }
            }
        }
    }
    else{
        // manage the child processes
        int child_process_status;
        wait( &child_process_status );
    }

    return SUCCESS;
}
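
As promised above, here is a minimal sketch of the event queue at the heart of the half-sync/half-async pattern: the asynchronous side (for example the epoll loop) pushes events into a mutex-protected queue, and the synchronous worker threads block on queue_pop and process whatever comes out. The event_t record, the queue size, and the drop-when-full policy are all invented for illustration:

#include <stdio.h>
#include <unistd.h>
#include <pthread.h>

#define MAX_QUEUE 1024

typedef struct event_t { int fd; int type; } event_t;   /* hypothetical event record */

static event_t ev_queue[MAX_QUEUE];
static int q_head = 0, q_tail = 0, q_len = 0;
static pthread_mutex_t q_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  q_nonempty = PTHREAD_COND_INITIALIZER;

/* the async side (e.g. the epoll loop) publishes an event */
void queue_push( event_t ev )
{
    pthread_mutex_lock( &q_mutex );
    if( q_len < MAX_QUEUE ){                /* drop events when full, for brevity */
        ev_queue[q_tail] = ev;
        q_tail = ( q_tail + 1 ) % MAX_QUEUE;
        q_len++;
        pthread_cond_signal( &q_nonempty );
    }
    pthread_mutex_unlock( &q_mutex );
}

/* each synchronous worker blocks here until an event arrives */
event_t queue_pop( void )
{
    event_t ev;
    pthread_mutex_lock( &q_mutex );
    while( q_len == 0 )                     /* loop guards against spurious wakeups */
        pthread_cond_wait( &q_nonempty, &q_mutex );
    ev = ev_queue[q_head];
    q_head = ( q_head + 1 ) % MAX_QUEUE;
    q_len--;
    pthread_mutex_unlock( &q_mutex );
    return ev;
}

static void * worker( void * arg )
{
    (void)arg;
    for( ;; ){
        event_t ev = queue_pop();
        printf( "worker got fd %d, type %d\n", ev.fd, ev.type );
    }
    return NULL;
}

int main( void )
{
    pthread_t tid[2];
    int i;
    for( i = 0; i < 2; i++ )
        pthread_create( &tid[i], NULL, worker, NULL );
    for( i = 0; i < 4; i++ ){
        event_t ev = { i, 1 };
        queue_push( ev );
    }
    sleep( 1 );   /* let the workers drain the queue before exiting */
    return 0;
}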

三、     Distributed system design

The previous part covered the implementation of the core server in a distributed system, which may be an HTTP server, a cache server, a distributed file system, and so on. This part starts from a large high-concurrency website and looks at the design of the system as a whole. The logical structure of such a high-concurrency system was shown here as a figure (not preserved in this copy).

The discussion mainly draws on this article: http://www.chinaz.com/web/2010/0310/108211.shtml, and walks through how each part of that architecture is implemented.

1.     Cache system

A cache is an indispensable module of every high-concurrency, high-availability system. Several common cache systems are introduced below.

Squid

Squid serves as a front-end cache, usually deployed at the point in the network closest to users. By caching a site's pages it spares users a round trip to the origin server on every request, improving responsiveness and performance. The implementation should be fairly simple: a proxy with storage. When a user requests a page, the proxy answers on the server's behalf and stores the result; on the next visit it checks whether the cached copy needs updating, fetching fresh data from the server if so and otherwise returning the cached page directly.

Ehcache

Ehcache is an object cache, typically used together with Hibernate in J2EE (the author's background is in J2EE development, so other usage patterns are less familiar to me). When an application queries the database, data that is read often but updated rarely can be put into the Ehcache cache to speed up access. Ehcache supports both in-memory and on-disk caching, as well as distributed caching (which has not been studied here). The basic principle of data caching is: build a map for the objects to be cached, put the objects into the map, and on each query look in the map first, going to the database only on a miss; at shutdown the map can be serialized to disk. A sketch of this cache-aside lookup follows.
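
The map-first idea just described can be sketched as a cache-aside lookup, in C for consistency with the earlier examples; the fixed-size table, the string keys, and the db_load stand-in for the real database query are all invented for illustration:

#include <stdio.h>
#include <string.h>

#define CACHE_SLOTS 256

typedef struct cache_entry {
    char key[64];
    char value[256];
    int  used;
} cache_entry;

static cache_entry cache[CACHE_SLOTS];

/* stand-in for the real database query */
static void db_load( const char * key, char * out, size_t outlen )
{
    snprintf( out, outlen, "value-of-%s", key );
}

static unsigned hash_key( const char * key )
{
    unsigned h = 5381;
    while( *key ) h = h * 33 + (unsigned char)*key++;
    return h % CACHE_SLOTS;
}

/* cache-aside: consult the map first, fall back to the database on a miss */
const char * cache_get( const char * key )
{
    cache_entry * e = &cache[ hash_key( key ) ];
    if( !e->used || strcmp( e->key, key ) != 0 ){
        db_load( key, e->value, sizeof( e->value ) );   /* miss: load and remember */
        snprintf( e->key, sizeof( e->key ), "%s", key );
        e->used = 1;
    }
    return e->value;
}

int main( void )
{
    printf( "%s\n", cache_get( "user:42" ) );   /* miss: loaded from the "database" */
    printf( "%s\n", cache_get( "user:42" ) );   /* hit: served from the cache */
    return 0;
}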

Page caching and static generation of dynamic pages

A caching technique often used on large websites is caching of dynamic pages. Since dynamic pages change frequently, the caches above stop helping; instead, techniques such as SSI (Server Side Include) are commonly used to cache dynamic pages or page fragments.

The other approach is turning dynamic pages into static ones. Below is an example of static generation of dynamic pages in J2EE, from a book on Spring:

/**
 * Filter that generates static files from dynamic content that changes very slowly.
 */
package com.zsl.cache.filter;
 
import java.io.File;
import java.io.IOException;
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;
import java.util.Map;
 
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
 
import org.springframework.core.io.Resource;
 
 
 
/**
 * @author zsl
 *
 */
public class FileCacheFilter extends AbstractCacheFilter {
         private String root;
        
         private final String SUFFIX = ".html";
        
         public final void setFileDir(Resource dir){
                   try {
                            File f = dir.getFile();
                            f.mkdirs();
                            if(!f.isDirectory()){
                                     throw new IllegalArgumentException("Invalid directory: "+f.getPath());
                            }
                            if(!f.canWrite())
                                     throw new IllegalArgumentException("Cannot write to directory: "+f.getPath());
                            root = f.getPath();
                            
                            if(!root.endsWith("/") && !root.endsWith("//"))
                                     root = root+"/";
                   } catch (IOException e) {
                            throw new IllegalArgumentException(e);
                   }
         }
        
         public void afterPropertiesSet() throws Exception {
                   super.afterPropertiesSet();
                   if(!new File(root).isDirectory()){
                            throw new IllegalArgumentException("No directory: "+root);
                   }
         }
        
         public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) throws IOException, ServletException {
                   HttpServletRequest httpRequest = (HttpServletRequest)request;
                   String key = getKey(httpRequest);
                   if(key == null){
                            chain.doFilter(request, response);
                   }else{
                            File file = key2File(key);
                            if(file.isFile()){
                                     // cache hit: stream the pre-generated gzipped file directly
                                     HttpServletResponse httpResponse = (HttpServletResponse)response;
                                     httpResponse.setContentType(getContentType());
                                     httpResponse.setHeader("Content-Encoding","gzip");
                                     httpResponse.setContentLength((int)file.length());
                                     FileUtil.readFile(file, httpResponse.getOutputStream());
                            }else{
                                     // cache miss: run the chain against a wrapper that captures the response body
                                     HttpServletResponse httpResponse = (HttpServletResponse)response;
                                     CachedResponseWrapper wrapper = new CachedResponseWrapper(httpResponse);
                                     chain.doFilter(request, wrapper);
                                     if(wrapper.getStatus() == HttpServletResponse.SC_OK){
                                              byte[] data = GZipUtil.gzip(wrapper.getResponseData());
                                              FileUtil.writeFile(file, data);
                                              httpResponse.setContentType(getContentType());
                                              httpResponse.setHeader("Content-Encoding","gzip");
                                              httpResponse.setContentLength(data.length);
                                              httpResponse.getOutputStream().write(data);
                                     }
                            }
                   }
         }
        
         private File key2File(String key){
                   int hash = key.hashCode();
                   int dir1 = (hash & 0xff00) >> 8;
                   int dir2 = hash & 0xff;
                   String dir = root+dir1+"/"+dir2;
                   File fdir = new File(dir);
                   if(!fdir.isDirectory()){
                            // create the two-level cache directory on first use
                            if(!fdir.mkdirs()){
                                     return null;
                            }
                   }
                   return new File(dir+"/"+encode(key)+SUFFIX);
         }
        
         private String encode(String key){
                   try {
                            return URLEncoder.encode(key,"UTF-8");
                   } catch (UnsupportedEncodingException e) {
                            throw new RuntimeException(e);
                   }
                  
         }
        
         public void remove(String url, Map<String,String> parameters){
                   String key = getKey(HttpServletRequestFactory.create(url, parameters));
                   if(key != null){
                            FileUtil.removeFile(key2File(key));
                   }
         }
}

Another cache given in the book is a client-side cache (it is not obvious which category it belongs to):

/**
 * Filter that sets cache-expiry headers for static page resources (gif images, css, etc.) to avoid repeated requests for them.
 */
package com.zsl.cache.filter;
 
import java.io.IOException;
import java.util.Enumeration;
import java.util.Map;
 
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.Filter;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
 
import org.apache.commons.collections.map.HashedMap;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
 
 
/**
 * @author zsl
 *
 */
public class ExpireFilter implements Filter {
 
         private Log log = LogFactory.getLog(ExpireFilter.class);
        
         private Map<String, Long> map = new HashedMap();
 
         @Override
         public void destroy() {
                   log.info("destroy ExpireFilter");
         }
 
         @Override
         @Override
         public void doFilter(ServletRequest request, ServletResponse response,
                            FilterChain chain) throws IOException, ServletException {
                            String uriString = ((HttpServletRequest)request).getRequestURI();
                            int n = uriString.lastIndexOf('.');
                            if(n != -1){
                                     String ext = uriString.substring(n);
                                     Long exp = map.get(ext);
                                     if(exp != null){
                                              HttpServletResponse resp = (HttpServletResponse)response;
                                              // Expires must carry an HTTP date, so use setDateHeader
                                              resp.setDateHeader("Expires", System.currentTimeMillis()+exp*1000);
                                     }
                            }
                            chain.doFilter(request, response);
         }
 
         @Override
         public void init(FilterConfig config) throws ServletException {
                   Enumeration em = config.getInitParameterNames();
                   while(em.hasMoreElements()){
                            String paramName = em.nextElement().toString();
                            String paramValue = config.getInitParameter(paramName);
                            try {
                                     int time = Integer.valueOf(paramValue);
                                     if(time > 0){
                                              log.info("set "+paramName+" expired seconds: "+time);
                                              map.put(paramName, new Long(time));
                                     }
                            } catch (Exception e) {
                                     log.warn("Exception in initializing ExpireFilter.", e);
                            }
                   }
         }
}

2.     Load balancing system

Ø  Load balancing strategies

Load balancing strategies include random assignment, round robin, and distributed consistent hashing. Random assignment picks the serving server by a random number; round robin hands each request to the next server in turn. Distributed consistent hashing is more involved: resources and nodes are mapped onto a ring, and a fixed rule assigns each resource to a node, which makes adding and removing servers very easy and limits the impact on the remaining servers. It is a famous algorithm, said to be a foundation of P2P; the author has not studied it deeply, so no more is claimed here. A minimal hash-ring sketch is given below.
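
Below is a minimal hash-ring sketch of that idea: each node is hashed onto a 32-bit ring, and a key is served by the first node clockwise from the key's own hash. The node names and the toy FNV-1a hash are invented for illustration; a production ring would also place multiple virtual nodes per server to smooth the distribution:

#include <stdio.h>

#define NODES 4

/* toy FNV-1a hash; a real ring would use a stronger hash */
static unsigned hash32( const char * s )
{
    unsigned h = 2166136261u;
    while( *s ){ h ^= (unsigned char)*s++; h *= 16777619u; }
    return h;
}

static const char * node_names[NODES] = { "cache-a", "cache-b", "cache-c", "cache-d" };
static unsigned node_pos[NODES];    /* each node's position on the ring */

/* walk clockwise from the key's position to the first node at or after it */
static const char * locate( const char * key )
{
    unsigned k = hash32( key );
    unsigned best_dist = 0xffffffffu;
    int best = 0, i;
    for( i = 0; i < NODES; i++ ){
        unsigned dist = node_pos[i] - k;    /* unsigned wrap-around = clockwise distance */
        if( dist < best_dist ){ best_dist = dist; best = i; }
    }
    return node_names[best];
}

int main( void )
{
    int i;
    for( i = 0; i < NODES; i++ )
        node_pos[i] = hash32( node_names[i] );
    printf( "key1 -> %s\n", locate( "key1" ) );
    printf( "key2 -> %s\n", locate( "key2" ) );
    return 0;
}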

Ø  Software load balancing

Software load balancing can be done in many ways. A few common schemes:

DNS-based load balancing: by configuring the DNS forward zone, one domain name resolves to multiple IP addresses according to some policy, balancing the load; this requires the cooperation of the DNS server.

LVS-based load balancing: LVS combines several Linux servers into one virtual server that provides service to the outside, spreading the load across the real servers.

Iptables-based load balancing: with NAT, iptables can expose one virtual IP externally and map it to several internal servers. This is essentially equivalent to a hardware solution, with the Linux box acting as a router.

Ø  Hardware load balancing

Router-based load balancing: configure NAT on the router, with one virtual IP facing the external network mapped to several internal IPs.

Some network equipment vendors also sell dedicated load balancing appliances, such as F5, though they are far from cheap.

Database load balancing

Database load balancing can rely on the clustering solutions provided by the database vendors.

――――――――――――――――――――――――――――――――――――――

That is all for today. The topic is too big and there is too much material; the items below are still to be written and have not been organized yet.

Many things above have not been fully expanded either. There is simply too much.

――――――――――――――――――――――――――――――――――――――

Distributed file systems

Gfs

hfs

Map Reduce systems

Cloud computing



 
