Java fixed-size thread pool: best way to create a fixed-size thread pool in Java using Executors

I am using the Executors framework in Java to create thread pools for a multi-threaded application, and I have a question related to performance.

I have an application which can work in realtime or non-realtime mode. In case it's realtime, I'm simply using the following:

THREAD_POOL = Executors.newCachedThreadPool();

But in case it's not realtime, I want the ability to control the size of my thread pool.

To do this, I'm thinking about 2 options, but I don't really understand the difference, and which one would perform better.

Option 1 is to use the simple way:

THREAD_POOL = Executors.newFixedThreadPool(threadPoolSize);

Option 2 is to create my own ThreadPoolExecutor like this:

RejectedExecutionHandler rejectHandler = new RejectedExecutionHandler() {
    @Override
    public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
        try {
            executor.getQueue().put(r);
        } catch (Exception e) {}
    }
};

THREAD_POOL = new ThreadPoolExecutor(threadPoolSize, threadPoolSize, 0, TimeUnit.SECONDS,
        new LinkedBlockingQueue(10000), rejectHandler);

I would like to understand what the advantage of the more complex option 2 is, and also whether I should use a data structure other than LinkedBlockingQueue. Any help would be appreciated.

Solution

Looking at the source code you'll realize that:

Executors.newFixedThreadPool(threadPoolSize);

is equivalent to:

return new ThreadPoolExecutor(threadPoolSize, threadPoolSize, 0L, TimeUnit.MILLISECONDS,
        new LinkedBlockingQueue<Runnable>());

Since it doesn't provide an explicit RejectedExecutionHandler, the default AbortPolicy is used. It basically throws a RejectedExecutionException once the queue is full. But the queue is unbounded, so it will never be full. Thus this executor accepts an infinite¹ number of tasks.
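
To make the default behaviour concrete, here is a minimal, self-contained sketch (the tiny pool and queue sizes are mine, purely for illustration): with a bounded queue and no custom handler, AbortPolicy throws RejectedExecutionException as soon as the queue fills up.

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class AbortPolicyDemo {
    public static void main(String[] args) {
        // One worker thread and a queue of capacity 1; no handler given,
        // so the default AbortPolicy applies.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<Runnable>(1));

        Runnable sleepy = () -> {
            try { Thread.sleep(1000); } catch (InterruptedException ignored) {}
        };

        pool.execute(sleepy); // taken by the single worker thread
        pool.execute(sleepy); // fills the queue
        try {
            pool.execute(sleepy); // queue full -> AbortPolicy throws
        } catch (RejectedExecutionException e) {
            System.out.println("Rejected: " + e);
        }
        pool.shutdown();
    }
}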

Your declaration is much more complex and quite different:

new LinkedBlockingQueue(10000) will cause the thread pool to reject tasks (i.e. call your handler) if more than 10000 are awaiting.

I don't understand what your RejectedExecutionHandler is doing. If the pool discovers it cannot put any more runnables into the queue, it calls your handler. In this handler you... try to put that Runnable into the queue again (which will block in like 99% of the cases). Finally you swallow the exception. It seems like ThreadPoolExecutor.DiscardPolicy is what you are after.
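
If silently dropping work when the queue is full really is the intent, a sketch using that built-in policy could look like this (reusing the bounds from your option 2; the class and method names are mine, purely illustrative):

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class DiscardingPool {
    // Sketch only: same bounds as option 2, but the built-in DiscardPolicy
    // silently drops a task when the queue is full, no custom handler needed.
    static ThreadPoolExecutor create(int threadPoolSize) {
        return new ThreadPoolExecutor(
                threadPoolSize, threadPoolSize,
                0L, TimeUnit.SECONDS,
                new LinkedBlockingQueue<Runnable>(10000),
                new ThreadPoolExecutor.DiscardPolicy());
    }
}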

Looking at your comments below, it seems like you are trying to block or somehow throttle clients if the task queue grows too large. I don't think blocking inside a RejectedExecutionHandler is a good idea. Instead, consider the CallerRunsPolicy rejection policy. It's not entirely the same, but close enough.
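
For completeness, a sketch of that variant (again reusing the bounds from your option 2; this is my illustration, not a drop-in recommendation): when the queue is full, CallerRunsPolicy executes the task on the submitting thread, which naturally slows the producer down.

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ThrottlingPool {
    // Sketch only: when the queue is full, CallerRunsPolicy runs the task on
    // the caller's thread instead of rejecting or blocking inside a handler.
    static ThreadPoolExecutor create(int threadPoolSize) {
        return new ThreadPoolExecutor(
                threadPoolSize, threadPoolSize,
                0L, TimeUnit.SECONDS,
                new LinkedBlockingQueue<Runnable>(10000),
                new ThreadPoolExecutor.CallerRunsPolicy());
    }
}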

To wrap up: if you want to limit the number of pending tasks, your approach is almost good. If you want to limit the number of concurrent threads, the first one-liner is enough.

¹ assuming 2^31 is infinity
