- 1. Opening too many threads consumes CPU.
- 2. A single-core CPU can execute only one thread at any instant; a multi-core CPU can execute several threads at the same time.
- 3. The operating system gives each runnable thread a slice of CPU time (a `time slice`) and hands out slices in a round-robin fashion. Each thread runs only within its own slice, and because slices are extremely short and threads switch very frequently, it looks to the user as if many threads are running simultaneously. On a machine with multiple CPUs, threads really can run at the same time.
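A minimal sketch of that interleaving (the thread names are arbitrary; the exact ordering of lines varies from run to run, because the scheduler decides who gets each time slice):

```java
public class TimeSliceDemo {
    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            String name = Thread.currentThread().getName();
            for (int i = 0; i < 3; i++) {
                // output from the two threads may interleave arbitrarily
                System.out.println(name + " step " + i);
            }
        };
        Thread t1 = new Thread(task, "worker-1");
        Thread t2 = new Thread(task, "worker-2");
        t1.start();
        t2.start();
        t1.join(); // wait for both threads to finish
        t2.join();
        System.out.println("done");
    }
}
```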
- 1. A thread pool is a technique for creating threads in advance: before any task arrives, the pool creates a number of threads, keeps them in an idle queue, and then reuses those resources.
- 2. It avoids frequently creating and destroying objects.
- 3. Frequently creating and destroying threads wastes both resources and time.
- 4. The time spent creating and destroying a thread can exceed the time the task itself takes to run, so paying that cost per task is wasteful.
- * Executor: the top-level thread pool interface in Java.
- * ExecutorService: the real thread pool interface.
- * ScheduledExecutorService: similar to Timer/TimerTask; solves the problem of tasks that need to run repeatedly.
- * ThreadPoolExecutor (important): the default implementation of ExecutorService.
- * ScheduledThreadPoolExecutor: extends ThreadPoolExecutor and implements the ScheduledExecutorService interface; it is the implementation class for periodic task scheduling.
1. newSingleThreadExecutor
2. newFixedThreadPool
3. newCachedThreadPool
4. newScheduledThreadPool
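A quick look at constructing each of the four (the pool sizes here are arbitrary example values):

```java
import java.util.concurrent.*;

public class ExecutorsDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService single = Executors.newSingleThreadExecutor(); // one worker, unbounded queue
        ExecutorService fixed  = Executors.newFixedThreadPool(4);     // exactly 4 workers
        ExecutorService cached = Executors.newCachedThreadPool();     // grows on demand, 60s idle timeout
        ScheduledExecutorService sched = Executors.newScheduledThreadPool(2); // delayed/periodic tasks

        // any of them can run a Callable and hand back a Future
        Future<Integer> f = fixed.submit(() -> 21 * 2);
        System.out.println("result = " + f.get()); // prints "result = 42"

        single.shutdown(); fixed.shutdown(); cached.shutdown(); sched.shutdown();
    }
}
```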
public ThreadPoolExecutor(int corePoolSize,
int maximumPoolSize,
long keepAliveTime,
TimeUnit unit,
BlockingQueue<Runnable> workQueue,
ThreadFactory threadFactory,
RejectedExecutionHandler handler)
Looking at these parameters, it is easy to assume the pool keeps corePoolSize threads, adds threads up to maximumPoolSize when those are not enough, then puts overflow into workQueue, and finally falls back to RejectedExecutionHandler when even the queue is full.
That is not what actually happens. The real flow is:
1) While the pool holds fewer than corePoolSize threads, a new thread is created to handle each incoming task.
2) Once the pool reaches corePoolSize, new tasks are put into workQueue, and idle threads in the pool take tasks from workQueue and run them.
3) When workQueue cannot accept another task, a new thread is added to the pool to handle it; if the pool has already grown to maximumPoolSize, the task is handed to RejectedExecutionHandler for rejection instead.
4) Additionally, while the pool holds more than corePoolSize threads, surplus idle threads wait up to keepAliveTime and then terminate themselves if no task arrives.
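The four steps above can be observed directly with a deliberately tiny pool (the sizes and the counting rejection handler below are chosen purely for illustration):

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class PoolGrowthDemo {
    public static void main(String[] args) throws Exception {
        CountDownLatch gate = new CountDownLatch(1);
        AtomicInteger rejected = new AtomicInteger();
        // core = 1, max = 2, queue holds 1 task; the handler just counts rejections
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 2, 0L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(1),
                (r, e) -> rejected.incrementAndGet());
        Runnable blocker = () -> { try { gate.await(); } catch (InterruptedException ignored) {} };

        pool.execute(blocker); // task 1: below core -> new thread created
        pool.execute(blocker); // task 2: core full -> goes into the queue
        pool.execute(blocker); // task 3: queue full -> second thread created (up to max)
        pool.execute(blocker); // task 4: queue and max both full -> rejected

        System.out.println("poolSize = " + pool.getPoolSize());     // poolSize = 2
        System.out.println("queued   = " + pool.getQueue().size()); // queued   = 1
        System.out.println("rejected = " + rejected.get());         // rejected = 1
        gate.countDown(); // release the blocked tasks
        pool.shutdown();
    }
}
```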
Internally, ThreadPoolExecutor relies on BlockingQueue's blocking behavior to keep the pool alive: when a thread in the pool has nothing to do, it blocks in workQueue.take().
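That blocking hand-off can be seen in isolation with a bare BlockingQueue (the queue contents here are just strings standing in for tasks):

```java
import java.util.concurrent.*;

public class BlockingQueueDemo {
    public static void main(String[] args) throws Exception {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();

        // poll with a timeout returns null when nothing arrives;
        // take() in the same situation would block indefinitely
        String none = queue.poll(100, TimeUnit.MILLISECONDS);
        System.out.println("empty poll -> " + none); // empty poll -> null

        // a producer thread hands an item over; the blocked consumer wakes up
        new Thread(() -> queue.offer("task-A")).start();
        String taken = queue.take(); // blocks until the producer's offer lands
        System.out.println("took " + taken); // took task-A
    }
}
```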
In fact, you can learn how the special flavors of ThreadPoolExecutor are built by reading the Executors factory methods.
public static ExecutorService newFixedThreadPool(int nThreads) {
return new ThreadPoolExecutor(nThreads, nThreads,
0L, TimeUnit.MILLISECONDS,
new LinkedBlockingQueue<Runnable>());
}
newFixedThreadPool is simply a fixed-size ThreadPool: corePoolSize equals maximumPoolSize, and the unbounded LinkedBlockingQueue absorbs any backlog.
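That fixed behavior is easy to observe: with two workers and ten tasks blocked on a latch, the pool never grows past two and the remaining tasks simply queue up (the sizes are arbitrary):

```java
import java.util.concurrent.*;

public class FixedPoolDemo {
    public static void main(String[] args) throws Exception {
        ThreadPoolExecutor pool = (ThreadPoolExecutor) Executors.newFixedThreadPool(2);
        CountDownLatch gate = new CountDownLatch(1);
        for (int i = 0; i < 10; i++) {
            pool.execute(() -> { try { gate.await(); } catch (InterruptedException ignored) {} });
        }
        // only 2 workers exist; the other 8 tasks wait in the unbounded LinkedBlockingQueue
        System.out.println("poolSize = " + pool.getPoolSize());     // poolSize = 2
        System.out.println("queued   = " + pool.getQueue().size()); // queued   = 8
        gate.countDown();
        pool.shutdown();
    }
}
```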
public static ExecutorService newCachedThreadPool() {
return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
60L, TimeUnit.SECONDS,
new SynchronousQueue<Runnable>());
}
newCachedThreadPool suits workloads of small tasks that have no fixed concurrency level and finish quickly, where maintaining a fixed Pool is unnecessary; its advantage over handling each task with a direct new Thread is that a thread created within the last 60 seconds can be reused.
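A sketch of that reuse: submit one task, let it finish, then submit another. The short sleep gives the idle worker time to return to polling the SynchronousQueue, so the second task is handed to the same 60-second-cached thread rather than a new one (reuse is how the pool normally behaves here, though the JDK does not strictly guarantee it):

```java
import java.util.concurrent.*;

public class CachedPoolDemo {
    public static void main(String[] args) throws Exception {
        ThreadPoolExecutor pool = (ThreadPoolExecutor) Executors.newCachedThreadPool();
        Callable<String> whoAmI = () -> Thread.currentThread().getName();

        String first = pool.submit(whoAmI).get(); // first task creates a worker
        Thread.sleep(100);                        // let the worker go idle (it stays alive 60s)
        String second = pool.submit(whoAmI).get(); // hand-off goes to the cached idle worker

        System.out.println("same thread reused: " + first.equals(second));
        System.out.println("threads ever created: " + pool.getLargestPoolSize());
        pool.shutdown();
    }
}
```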
For the other ThreadPool variants, looking at their construction parameters together with the behavior described above is enough to work out their characteristics.
public class ThreadPoolFactory {
    // volatile is required for double-checked locking to be safe
    private static volatile ThreadPoolProxy normalThreadPool;
    public static final int NORMAL_COREPOOLSIZE = 5;
    public static final int NORMAL_MAXIMUMPOOLSIZE = 5;
    public static final long NORMAL_KEEPALIVETIME = 60;

    public static ThreadPoolProxy getNormalThreadPool() {
        // double-checked locking
        if (normalThreadPool == null) {
            synchronized (ThreadPoolFactory.class) {
                if (normalThreadPool == null) {
                    normalThreadPool = new ThreadPoolProxy(NORMAL_COREPOOLSIZE,
                            NORMAL_MAXIMUMPOOLSIZE, NORMAL_KEEPALIVETIME);
                }
            }
        }
        return normalThreadPool;
    }
}
public class ThreadPoolProxy {
    // volatile is required for double-checked locking to be safe
    volatile ThreadPoolExecutor executor;
    int corePoolSize;
    int maximumPoolSize;
    long keepAliveTime;

    public ThreadPoolProxy(int corePoolSize, int maximumPoolSize, long keepAliveTime) {
        this.corePoolSize = corePoolSize;
        this.maximumPoolSize = maximumPoolSize;
        this.keepAliveTime = keepAliveTime;
    }

    public void initThreadPoolProxy() {
        // double-checked locking; lock on this proxy instance, not on a globally shared class
        if (executor == null) {
            synchronized (this) {
                if (executor == null) {
                    BlockingQueue<Runnable> workQueue = new LinkedBlockingDeque<>();
                    ThreadFactory threadFactory = Executors.defaultThreadFactory();
                    RejectedExecutionHandler handler = new ThreadPoolExecutor.AbortPolicy();
                    executor = new ThreadPoolExecutor(corePoolSize, maximumPoolSize,
                            keepAliveTime, TimeUnit.SECONDS, workQueue, threadFactory, handler);
                }
            }
        }
    }
    public void execute(Runnable runnable) {
        initThreadPoolProxy();
        executor.execute(runnable);
    }

    public void remove(Runnable runnable) {
        initThreadPoolProxy();
        executor.remove(runnable);
    }
}
public class ThreadPoolUtils {
    // run on the UI (main) thread; bind the Handler to the main Looper explicitly
    static Handler handler = new Handler(Looper.getMainLooper());

    public static void runTaskOnMainThread(Runnable runnable) {
        handler.post(runnable);
    }

    // run off the UI thread
    public static void runTaskOnThread(Runnable runnable) {
        ThreadPoolFactory.getNormalThreadPool().execute(runnable);
    }
}
If you found this useful, please give it a like. Thanks!