Thread Pool Code Notes

1. Thread Pool Design
The main idea behind a thread pool is to reduce the system-call overhead of allocating resources on demand: rather than creating a thread each time a client connects, a batch of threads is created in advance and left waiting to handle client-connection tasks. The number of threads is not fixed, either. It is managed dynamically based on the number of tasks in the task queue, the number of live threads, the number of busy threads, and so on, so the pool grows or shrinks as conditions warrant.

Task handling follows the producer-consumer model: there is a shared task-queue area; clients produce tasks, the server places them into the queue, and the pool's threads take tasks off the queue and process them. Because the queue's contents change dynamically, a mutex and condition variables are used to synchronize the threads and prevent data races.

Since the number of threads, their states, and the number of queued tasks all change at runtime, a struct threadpool_t is defined to manage all of this pool state in one place, and a second struct, threadpool_task_t, describes the attributes of each task in the queue.

2. What Each Function Does
(1) threadpool_create:
Allocates the pool structure, initializes its thread pool parameters, and allocates memory for the task queue and for storing the thread IDs.

(2) threadpool_add:
Adds a task to the task queue. Each queue element is a threadpool_task_t, which stores the task's handler function and the argument to pass to it. Because adding a task changes the queue, the queue-related members of struct threadpool_t (the rear index, the current task count, and so on) must be updated as well. After the task is enqueued and the bookkeeping is done, the function signals any idle threads that the queue now holds work, so a blocked thread can wake up and claim a task; this handshake is implemented with a condition variable paired with the mutex.

(3) threadpool_thread:
The code each worker thread executes. While the task queue is empty, the thread blocks until it is notified (the queue_not_empty condition variable is signaled, meaning the queue is no longer empty), then takes a task from the head of the queue; as with enqueuing, dequeuing requires updating the related bookkeeping. The function also handles thread retirement: when threads are waiting to exit because too many are idle (wait_exit_thr_num > 0), a woken thread exits on its own and the thread-count members of struct threadpool_t are updated.

(4)adjust_thread:
此函数为管理线程所要执行的代码段,主要功能为根据相关参数,设计算法,以便适时对线程进行管理,即适当的添加或者删除,其达到的数量的要求是要满足结构体 struct threadpool_t 中定义的最大线程数量和最小线程数量所形成的范围。此处注意其删除线程的方式,即发送 “虚假” 通知满足条件变量queue_not_empty,引导线程自行退出,见函数(3)。

(5) Remaining functions:
The rest are comparatively simple: they reclaim the threads, mutexes, and condition variables, and then free the dynamically allocated memory. They are not covered further here.

3. Complete Code

#include <cstdio>
#include <cstdlib>
#include <unistd.h>
#include <cstring>
#include <signal.h>
#include <errno.h>
#include <pthread.h>
//#include "pthreadpool.h"

#define DEFAULT_TIME 10          /* manager thread's adjustment interval (seconds) */
#define MIN_WAIT_TASK_NUM 10     /* queued tasks above this may trigger thread creation */
#define DEFAULT_THREAD_VARY 10   /* how many threads to add or remove per adjustment */

typedef struct {
    void *(*function)(void *);   /* task handler */
    void *arg;                   /* argument passed to the handler */
} threadpool_task_t;

struct threadpool_t {
    pthread_mutex_t lock;           /* protects the pool state and task queue */
    pthread_mutex_t thread_counter; /* protects busy_thr_num */
    pthread_cond_t queue_not_full;
    pthread_cond_t queue_not_empty;

    pthread_t *threads;             /* worker thread IDs */
    pthread_t adjust_tid;           /* manager thread ID */
    threadpool_task_t *task_queue;  /* circular task queue */

    int min_thr_num;
    int max_thr_num;
    int live_thr_num;               /* threads currently alive */
    int busy_thr_num;               /* threads currently running a task */
    int wait_exit_thr_num;          /* threads asked to exit */

    int queue_front;
    int queue_rear;
    int queue_size;
    int queue_max_size;

    int shutdown;                   /* nonzero once the pool is being destroyed */
};

void *adjust_thread(void *threadpool);

int is_thread_alive(pthread_t tid);
int threadpool_free(threadpool_t *pool);
void *threadpool_thread(void *threadpool);

threadpool_t *threadpool_create(int min_thr_num, int max_thr_num, int queue_max_size)
{
    int i;
    threadpool_t *pool = nullptr;
    do {
        if((pool = (threadpool_t *)malloc(sizeof(threadpool_t))) == nullptr) {
            printf("malloc threadpool fail\n");
            break;
        }

        pool->min_thr_num = min_thr_num;
        pool->max_thr_num = max_thr_num;
        pool->busy_thr_num = 0;
        pool->live_thr_num = min_thr_num;
        pool->wait_exit_thr_num = 0;    /* no threads waiting to exit yet */
        pool->queue_size = 0;
        pool->queue_max_size = queue_max_size;
        pool->queue_front = 0;
        pool->queue_rear = 0;
        pool->shutdown = false;

        pool->threads = (pthread_t *)malloc(sizeof(pthread_t)*max_thr_num);
        if(pool->threads == nullptr) {
            printf("malloc threads error\n");
            break;
        }
        memset(pool->threads, 0, sizeof(pthread_t)*max_thr_num);

        pool->task_queue = (threadpool_task_t *)malloc(sizeof(threadpool_task_t)*queue_max_size);
        if(pool->task_queue == nullptr) {
            printf("malloc task_queue error\n");
            break;
        }
        /* zero the slots so threadpool_add's arg check never sees garbage */
        memset(pool->task_queue, 0, sizeof(threadpool_task_t)*queue_max_size);

        if(pthread_mutex_init(&(pool->lock), NULL) != 0
            || pthread_mutex_init(&(pool->thread_counter), NULL) != 0
            || pthread_cond_init(&(pool->queue_not_empty), NULL) != 0
            || pthread_cond_init(&(pool->queue_not_full), NULL) != 0)
        {
            printf("init the lock or cond fail\n");
            break;
        }

        for(i = 0; i < min_thr_num; i++) {
            pthread_create(&(pool->threads[i]), NULL, threadpool_thread, (void *)pool);
            printf("start thread 0x%lx...\n", (unsigned long)pool->threads[i]);
        }
        pthread_create(&(pool->adjust_tid), NULL, adjust_thread, (void *)pool);

        return pool;
    } while(0);

    threadpool_free(pool);
    return nullptr;
}

int threadpool_add(threadpool_t *pool, void *(*function)(void *arg), void *arg)
{
    pthread_mutex_lock(&(pool->lock));

    /* block while the queue is full and the pool is still running */
    while((pool->queue_size == pool->queue_max_size) && (!pool->shutdown)) {
        pthread_cond_wait(&(pool->queue_not_full), &(pool->lock));
    }
    if(pool->shutdown) {
        pthread_mutex_unlock(&(pool->lock));
        return -1;    /* refuse new tasks once shutdown has begun */
    }

    /* free a leftover argument in this slot (assumes heap-allocated args) */
    if(pool->task_queue[pool->queue_rear].arg != nullptr) {
        free(pool->task_queue[pool->queue_rear].arg);
        pool->task_queue[pool->queue_rear].arg = nullptr;
    }

    pool->task_queue[pool->queue_rear].function = function;
    pool->task_queue[pool->queue_rear].arg = arg;
    pool->queue_rear = (pool->queue_rear+1) % pool->queue_max_size;
    pool->queue_size++;

    /* wake one idle worker: the queue is no longer empty */
    pthread_cond_signal(&(pool->queue_not_empty));
    pthread_mutex_unlock(&(pool->lock));

    return 0;
}

void *threadpool_thread(void *threadpool)
{
    threadpool_t *pool = (threadpool_t *)threadpool;
    threadpool_task_t task;

    while(true) {
        pthread_mutex_lock(&(pool->lock));

        /* wait while the queue is empty and the pool is running */
        while((pool->queue_size == 0) && (!pool->shutdown)) {
            printf("thread 0x%lx is waiting\n", (unsigned long)pthread_self());
            pthread_cond_wait(&(pool->queue_not_empty), &(pool->lock));

            /* the manager asked some idle threads to retire */
            if(pool->wait_exit_thr_num > 0) {
                pool->wait_exit_thr_num--;

                if(pool->live_thr_num > pool->min_thr_num) {
                    printf("thread 0x%lx is exiting\n", (unsigned long)pthread_self());
                    pool->live_thr_num--;
                    pthread_mutex_unlock(&(pool->lock));
                    pthread_exit(NULL);
                }
            }
        }

        if(pool->shutdown) {
            pthread_mutex_unlock(&(pool->lock));
            printf("thread 0x%lx is exiting\n", (unsigned long)pthread_self());
            pthread_exit(NULL);
        }

        /* take a task from the head of the queue */
        task.function = pool->task_queue[pool->queue_front].function;
        task.arg = pool->task_queue[pool->queue_front].arg;

        pool->queue_front = (pool->queue_front + 1) % pool->queue_max_size;
        pool->queue_size--;

        /* a slot opened up: wake producers blocked on a full queue */
        pthread_cond_broadcast(&(pool->queue_not_full));

        pthread_mutex_unlock(&(pool->lock));

        printf("thread 0x%lx start working\n", (unsigned long)pthread_self());
        pthread_mutex_lock(&(pool->thread_counter));
        pool->busy_thr_num++;
        pthread_mutex_unlock(&(pool->thread_counter));
        (*(task.function))(task.arg);

        printf("thread 0x%lx end working\n", (unsigned long)pthread_self());
        pthread_mutex_lock(&(pool->thread_counter));
        pool->busy_thr_num--;
        pthread_mutex_unlock(&(pool->thread_counter));
    }

    pthread_exit(NULL);
}

void *adjust_thread(void *threadpool)
{
    int i;
    threadpool_t *pool = (threadpool_t *)threadpool;

    while(!pool->shutdown) {

        sleep(DEFAULT_TIME);    /* adjust periodically */

        pthread_mutex_lock(&(pool->lock));
        int queue_size = pool->queue_size;
        int live_thr_num = pool->live_thr_num;
        pthread_mutex_unlock(&(pool->lock));

        pthread_mutex_lock(&(pool->thread_counter));
        int busy_thr_num = pool->busy_thr_num;
        pthread_mutex_unlock(&(pool->thread_counter));

        /* grow: many queued tasks and room for more threads */
        if(queue_size >= MIN_WAIT_TASK_NUM && live_thr_num < pool->max_thr_num) {
            pthread_mutex_lock(&(pool->lock));
            int add = 0;

            for(i = 0; i < pool->max_thr_num && add < DEFAULT_THREAD_VARY
                    && pool->live_thr_num < pool->max_thr_num; i++) {
                /* reuse an empty or dead slot in the thread array */
                if(pool->threads[i] == 0 || !is_thread_alive(pool->threads[i])) {
                    pthread_create(&(pool->threads[i]), NULL, threadpool_thread, (void *)pool);
                    add++;
                    pool->live_thr_num++;
                }
            }

            pthread_mutex_unlock(&(pool->lock));
        }

        /* shrink: most threads are idle and we are above the minimum */
        if(busy_thr_num * 2 < live_thr_num && live_thr_num > pool->min_thr_num) {

            pthread_mutex_lock(&(pool->lock));
            pool->wait_exit_thr_num = DEFAULT_THREAD_VARY;
            pthread_mutex_unlock(&(pool->lock));

            for(i = 0; i < DEFAULT_THREAD_VARY; i++) {
                /* "fake" notification: a woken idle worker sees
                   wait_exit_thr_num > 0 and exits on its own */
                pthread_cond_signal(&(pool->queue_not_empty));
            }
        }
    }

    return NULL;
}

int threadpool_destroy(threadpool_t *pool)
{
    int i;
    if(pool == nullptr) {
        return -1;
    }
    pool->shutdown = true;

    /* the manager thread exits once it sees shutdown */
    pthread_join(pool->adjust_tid, NULL);

    /* wake every blocked worker so it can observe shutdown and exit */
    for(i = 0; i < pool->live_thr_num; i++) {
        pthread_cond_broadcast(&(pool->queue_not_empty));
    }
    for(i = 0; i < pool->live_thr_num; i++) {
        pthread_join(pool->threads[i], NULL);
    }
    threadpool_free(pool);

    return 0;
}

int threadpool_free(threadpool_t *pool)
{
    if(pool == nullptr) {
        return -1;
    }

    if(pool->task_queue) {
        free(pool->task_queue);
    }

    if(pool->threads) {
        free(pool->threads);
        /* destroying a locked mutex is undefined behavior, so the mutexes are
           destroyed directly; no thread uses them once destruction begins */
        pthread_mutex_destroy(&(pool->lock));
        pthread_mutex_destroy(&(pool->thread_counter));
        pthread_cond_destroy(&(pool->queue_not_empty));
        pthread_cond_destroy(&(pool->queue_not_full));
    }
    free(pool);
    pool = nullptr;

    return 0;
}

int threadpool_all_threadnum(threadpool_t *pool)
{
    int all_threadnum = -1;
    pthread_mutex_lock(&(pool->lock));
    all_threadnum = pool->live_thr_num;
    pthread_mutex_unlock(&(pool->lock));
    return all_threadnum;
}

int threadpool_busy_threadnum(threadpool_t *pool)
{
    int busy_threadnum = -1;
    pthread_mutex_lock(&(pool->thread_counter));
    busy_threadnum = pool->busy_thr_num;
    pthread_mutex_unlock(&(pool->thread_counter));
    return busy_threadnum;
}

int is_thread_alive(pthread_t tid)
{
    /* signal 0 performs error checking only; ESRCH means no such thread */
    int kill_rc = pthread_kill(tid, 0);
    if(kill_rc == ESRCH) {
        return false;
    }

    return true;
}

/* test */

void *process(void *arg)
{
    printf("thread 0x%lx working on task %d\n", (unsigned long)pthread_self(), *(int *)arg);
    sleep(1);
    printf("task %d is end\n", *(int *)arg);

    return NULL;
}

int main(int argc, char *argv[])
{
    threadpool_t *thp = threadpool_create(3, 100, 100);
    printf("pool inited\n");

    /* num lives on main's stack, so the tasks must finish before main returns */
    int num[20], i;
    for(i = 0; i < 20; i++) {
        num[i] = i;
        printf("add task %d\n", i);
        threadpool_add(thp, process, (void *)&num[i]);
    }
    sleep(10);    /* give the workers time to drain the queue */
    threadpool_destroy(thp);

    return 0;
}
