Multithreading Study Notes (8): Thread Pools

References:

线程池你真不来了解一下吗?
线程池之ThreadPoolExecutor概述
Java线程池—ShutDown以及ShutDownNow解析

1. Introduction to Thread Pools

A thread pool can be seen as a collection of threads. When there is nothing to do, its threads sit idle; when a request arrives, the pool assigns it to an idle thread, and when the task finishes the thread returns to the pool to wait for the next one instead of being destroyed. This is how thread reuse is achieved.

Without a thread pool, we would start a brand-new thread for every request:

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class Test {

    public static void main(String[] args) throws IOException {
        ServerSocket socket = new ServerSocket(80);
        while (true) {
            final Socket connection = socket.accept();
            Runnable task = () -> deal(connection);
            new Thread(task).start();
        }
    }

    private static void deal(Socket connection) {
        System.out.println("do something...");
    }
}

Starting a new thread for every request works in principle, but it has drawbacks:

  • The overhead of the thread lifecycle is high. Every thread has its own lifecycle; creating and destroying a thread can cost more time and resources than actually handling the client's request, and idle threads still consume resources.
  • Stability and robustness suffer. With one thread per request, a malicious attack or a burst of requests can exhaust memory and crash the program.

Thread management is best delegated to a thread pool: it removes most of the lifecycle bookkeeping and improves performance to a degree, as sketched below.
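
For comparison, here is a minimal sketch of the same server loop backed by a fixed-size pool; the class name ThreadPoolServer and the pool size of 100 are arbitrary choices made for illustration:

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadPoolServer {

    // an arbitrary pool size; threads are reused across requests
    private static final ExecutorService POOL = Executors.newFixedThreadPool(100);

    public static void main(String[] args) throws IOException {
        ServerSocket socket = new ServerSocket(80);
        while (true) {
            final Socket connection = socket.accept();
            // submit the task to the pool instead of starting a new thread per request
            POOL.execute(() -> deal(connection));
        }
    }

    private static void deal(Socket connection) {
        System.out.println("do something...");
    }
}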

2. Thread Pool APIs Provided by the JDK

The JDK provides the Executor framework for working with thread pools; it is the foundation of everything that follows.

Executor provides a mechanism that decouples task submission from task execution.
The Executor interface:

public interface Executor {

    /**
     * Executes the given command at some time in the future.  The command
     * may execute in a new thread, in a pooled thread, or in the calling
     * thread, at the discretion of the {@code Executor} implementation.
     *
     * @param command the runnable task
     * @throws RejectedExecutionException if this task cannot be
     * accepted for execution
     * @throws NullPointerException if command is null
     */
    void execute(Runnable command);
}

Its main implementation classes are ThreadPoolExecutor and ScheduledThreadPoolExecutor.

2.1 ForkJoinPool

Besides ScheduledThreadPoolExecutor and ThreadPoolExecutor, JDK 1.7 added another pool: ForkJoinPool.
Like ThreadPoolExecutor, it extends AbstractExecutorService. ForkJoinPool is one of the two core classes of the Fork/Join framework. Compared with other ExecutorService implementations, its main difference is the work-stealing algorithm: every thread in the pool tries to find and execute tasks submitted to the pool or created by other active tasks, so threads are rarely idle and the pool is very efficient. This makes it well suited to workloads where most tasks spawn many subtasks, and to large numbers of small tasks submitted by external clients.
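
As a rough illustration (this sketch is not taken from the referenced article), the code below splits a summation into RecursiveTask subtasks so that idle pool threads can steal and run the halves; the class names and the threshold of 1,000 are arbitrary:

import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// sums the range [from, to] by recursively splitting it into subtasks
class SumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 1_000;
    private final long from;
    private final long to;

    SumTask(long from, long to) { this.from = from; this.to = to; }

    @Override
    protected Long compute() {
        if (to - from <= THRESHOLD) {
            long sum = 0;
            for (long i = from; i <= to; i++) sum += i;
            return sum;
        }
        long mid = (from + to) / 2;
        SumTask left = new SumTask(from, mid);
        SumTask right = new SumTask(mid + 1, to);
        left.fork();                              // submit the left half asynchronously
        return right.compute() + left.join();     // compute the right half, then join the left
    }
}

public class ForkJoinDemo {
    public static void main(String[] args) {
        ForkJoinPool pool = new ForkJoinPool();   // parallelism defaults to the number of CPU cores
        long result = pool.invoke(new SumTask(1, 1_000_000));
        System.out.println(result);               // 500000500000
    }
}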

Source: https://blog.csdn.net/panweiwei1994/article/details/78969238

2.2 Callable and Future

You can think of Callable simply as an extension of Runnable.

Runnable returns nothing and cannot throw checked exceptions; Callable can do both.
When a task needs to return a result, use Callable.

@FunctionalInterface
public interface Callable<V> {
    /**
     * Computes a result, or throws an exception if unable to do so.
     *
     * @return computed result
     * @throws Exception if unable to compute a result
     */
    V call() throws Exception;
}

Future is usually thought of as "the return value of a Callable", but it really represents the lifecycle of a task (and, of course, it can retrieve the Callable's result).

Chapters 3 and 4 of the earlier post Multithreading Study Notes (2): Creating Threads already showed this usage:

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ThreadPoolTest implements Callable<String> {

    public static void main(String[] args) {
        // create a thread pool with 5 threads
        ExecutorService e = Executors.newFixedThreadPool(5);
        ThreadPoolTest t = new ThreadPoolTest();
        for (int i = 0; i < 3; i++) {
            Future<String> submit = e.submit(t);
            try {
                // Future.get() blocks until the task has produced its result
                System.out.println(submit.get());
            } catch (InterruptedException | ExecutionException exception) {
                exception.printStackTrace();
            }
        }
        e.shutdown();
    }

    @Override
    public String call() throws Exception {
        return Thread.currentThread().getName();
    }
}

3. ThreadPoolExecutor in Detail

Let's start with the source. Below is the class-level JavaDoc, with notes interleaved after each part:

/**
 * An {@link ExecutorService} that executes each submitted task using
 * one of possibly several pooled threads, normally configured
 * using {@link Executors} factory methods.
 *

An ExecutorService that executes each submitted task using one of its pooled threads; it is normally configured through the Executors factory methods.

 * <p>Thread pools address two different problems: they usually
 * provide improved performance when executing large numbers of
 * asynchronous tasks, due to reduced per-task invocation overhead,
 * and they provide a means of bounding and managing the resources,
 * including threads, consumed when executing a collection of tasks.
 * Each {@code ThreadPoolExecutor} also maintains some basic
 * statistics, such as the number of completed tasks.
 *

Thread pools address two different problems:

  1. Better performance: when executing large numbers of asynchronous tasks they reduce per-task invocation overhead, and they provide a way to bound and manage the resources (including threads) consumed while executing a collection of tasks.
  2. Statistics: each ThreadPoolExecutor also maintains some basic statistics, such as the number of completed tasks.
 * <p>To be useful across a wide range of contexts, this class
 * provides many adjustable parameters and extensibility
 * hooks. However, programmers are urged to use the more convenient
 * {@link Executors} factory methods {@link
 * Executors#newCachedThreadPool} (unbounded thread pool, with
 * automatic thread reclamation), {@link Executors#newFixedThreadPool}
 * (fixed size thread pool) and {@link
 * Executors#newSingleThreadExecutor} (single background thread), that
 * preconfigure settings for the most common usage
 * scenarios. Otherwise, use the following guide when manually
 * configuring and tuning this class:
 *

To be useful across a wide range of contexts, this class provides many adjustable parameters and extensibility hooks. For the most common scenarios, however, several pools come preconfigured and programmers are urged to use the more convenient Executors factory methods directly:

  • Executors.newCachedThreadPool (an unbounded thread pool with automatic thread reclamation)
  • Executors.newFixedThreadPool (a fixed-size thread pool)
  • Executors.newSingleThreadExecutor (a single background thread)
 * <dl>
 *
 * <dt>Core and maximum pool sizes</dt>
 *
 * <dd>A {@code ThreadPoolExecutor} will automatically adjust the
 * pool size (see {@link #getPoolSize})
 * according to the bounds set by
 * corePoolSize (see {@link #getCorePoolSize}) and
 * maximumPoolSize (see {@link #getMaximumPoolSize}).
 *

Core and maximum pool sizes

 * When a new task is submitted in method {@link #execute(Runnable)},
 * and fewer than corePoolSize threads are running, a new thread is
 * created to handle the request, even if other worker threads are
 * idle.  If there are more than corePoolSize but less than
 * maximumPoolSize threads running, a new thread will be created only
 * if the queue is full.  By setting corePoolSize and maximumPoolSize
 * the same, you create a fixed-size thread pool. By setting
 * maximumPoolSize to an essentially unbounded value such as {@code
 * Integer.MAX_VALUE}, you allow the pool to accommodate an arbitrary
 * number of concurrent tasks. Most typically, core and maximum pool
 * sizes are set only upon construction, but they may also be changed
 * dynamically using {@link #setCorePoolSize} and {@link
 * #setMaximumPoolSize}. </dd>
 *
  1. If fewer than corePoolSize threads are running, a new thread is created to handle the request, even if other worker threads are idle.
  2. If more than corePoolSize but fewer than maximumPoolSize threads are running, a new thread is created only when the queue is full.
  3. Setting corePoolSize and maximumPoolSize to the same value creates a fixed-size thread pool.
  4. Setting maximumPoolSize to an essentially unbounded value such as Integer.MAX_VALUE allows the pool to accommodate an arbitrary number of concurrent tasks (a sketch of these rules follows this list).
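
A minimal sketch of these rules with arbitrary sizes (core 2, max 4, queue capacity 2): the first two tasks start core threads, the next two wait in the queue, and the last two force extra threads up to maximumPoolSize; a seventh task submitted at this point would be rejected.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolSizeDemo {
    public static void main(String[] args) {
        // core = 2, max = 4, queue capacity = 2 (arbitrary values, for illustration only)
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 60L, TimeUnit.SECONDS, new ArrayBlockingQueue<>(2));

        // tasks 1-2 start core threads, tasks 3-4 queue up, tasks 5-6 start extra threads
        for (int i = 1; i <= 6; i++) {
            final int id = i;
            pool.execute(() -> {
                try { TimeUnit.SECONDS.sleep(5); } catch (InterruptedException ignored) { }
                System.out.println("task " + id + " ran on " + Thread.currentThread().getName());
            });
        }
        pool.shutdown();
    }
}
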
 * <dt>On-demand construction</dt>
 *

On-demand construction

 * <dd>By default, even core threads are initially created and
 * started only when new tasks arrive, but this can be overridden
 * dynamically using method {@link #prestartCoreThread} or {@link
 * #prestartAllCoreThreads}.  You probably want to prestart threads if
 * you construct the pool with a non-empty queue. </dd>
 *

By default, even core threads are created and started only when new tasks arrive.
This default can be overridden by calling prestartCoreThread() or prestartAllCoreThreads(), which is useful when the pool is constructed with a non-empty queue.
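
A minimal sketch of pre-starting core threads (the pool sizes are arbitrary):

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PrestartDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                4, 8, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<>());

        // by default no thread exists yet; this creates all core threads immediately
        int started = pool.prestartAllCoreThreads();
        System.out.println(started + " core threads started"); // prints: 4 core threads started
        pool.shutdown();
    }
}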

 * <dt>Creating new threads</dt>
 *

Creating new threads

 * <dd>New threads are created using a {@link ThreadFactory}.  If not
 * otherwise specified, a {@link Executors#defaultThreadFactory} is
 * used, that creates threads to all be in the same {@link
 * ThreadGroup} and with the same {@code NORM_PRIORITY} priority and
 * non-daemon status. By supplying a different ThreadFactory, you can
 * alter the thread's name, thread group, priority, daemon status,
 * etc. If a {@code ThreadFactory} fails to create a thread when asked
 * by returning null from {@code newThread}, the executor will
 * continue, but might not be able to execute any tasks. Threads
 * should possess the "modifyThread" {@code RuntimePermission}. If
 * worker threads or other threads using the pool do not possess this
 * permission, service may be degraded: configuration changes may not
 * take effect in a timely manner, and a shutdown pool may remain in a
 * state in which termination is possible but not completed.</dd>
 *

New threads are created by a ThreadFactory. By default they all belong to the same ThreadGroup, run at NORM_PRIORITY, and are non-daemon; all of this can be customized by supplying your own ThreadFactory (thread name, group, priority, daemon status, and so on).
If the factory fails to create a thread (newThread returns null), the executor continues, but it may not be able to execute any tasks.
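
A minimal sketch of a custom ThreadFactory that only changes the thread name; NamedThreadFactory and the "order-worker" prefix are made-up names for illustration:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

public class NamedThreadFactory implements ThreadFactory {
    private final AtomicInteger counter = new AtomicInteger(1);
    private final String prefix;

    public NamedThreadFactory(String prefix) { this.prefix = prefix; }

    @Override
    public Thread newThread(Runnable r) {
        Thread t = new Thread(r, prefix + "-" + counter.getAndIncrement());
        t.setDaemon(false);                  // same defaults as Executors.defaultThreadFactory()
        t.setPriority(Thread.NORM_PRIORITY);
        return t;
    }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(2, new NamedThreadFactory("order-worker"));
        pool.execute(() -> System.out.println(Thread.currentThread().getName())); // order-worker-1
        pool.shutdown();
    }
}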

 * <dt>Keep-alive times</dt>
 *

Keep-alive times

 * <dd>If the pool currently has more than corePoolSize threads,
 * excess threads will be terminated if they have been idle for more
 * than the keepAliveTime (see {@link #getKeepAliveTime(TimeUnit)}).
 * This provides a means of reducing resource consumption when the
 * pool is not being actively used. If the pool becomes more active
 * later, new threads will be constructed. This parameter can also be
 * changed dynamically using method {@link #setKeepAliveTime(long,
 * TimeUnit)}.  Using a value of {@code Long.MAX_VALUE} {@link
 * TimeUnit#NANOSECONDS} effectively disables idle threads from ever
 * terminating prior to shut down. By default, the keep-alive policy
 * applies only when there are more than corePoolSize threads. But
 * method {@link #allowCoreThreadTimeOut(boolean)} can be used to
 * apply this time-out policy to core threads as well, so long as the
 * keepAliveTime value is non-zero. </dd>
 *

If the pool currently has more than corePoolSize threads, excess threads are terminated once they have been idle for longer than keepAliveTime. This reduces resource consumption when the pool is not actively used.

If the pool becomes more active later, new threads will be constructed. The timeout can also be changed dynamically with setKeepAliveTime(long, TimeUnit).

To keep idle threads from ever terminating before shutdown, use:
setKeepAliveTime(Long.MAX_VALUE, TimeUnit.NANOSECONDS);

By default the keep-alive policy applies only when there are more than corePoolSize threads, but allowCoreThreadTimeOut(boolean) can apply the same timeout to core threads as well, as long as keepAliveTime is non-zero.
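
A minimal sketch of these keep-alive settings; the sizes and timeouts are arbitrary and the pool is only used to demonstrate the API calls:

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class KeepAliveDemo {
    public static void main(String[] args) {
        // threads beyond the 2 core threads are reclaimed after 30 seconds of idleness
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 10, 30L, TimeUnit.SECONDS, new LinkedBlockingQueue<>());

        // adjust the timeout at runtime
        pool.setKeepAliveTime(10L, TimeUnit.SECONDS);

        // let the core threads time out as well (keepAliveTime must be non-zero)
        pool.allowCoreThreadTimeOut(true);

        pool.shutdown();
    }
}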

 * <dt>Queuing</dt>
 *
 * <dd>Any {@link BlockingQueue} may be used to transfer and hold
 * submitted tasks.  The use of this queue interacts with pool sizing:
 *

Queuing

 * <ul>
 *
 * <li> If fewer than corePoolSize threads are running, the Executor
 * always prefers adding a new thread
 * rather than queuing.</li>
 *
 * <li> If corePoolSize or more threads are running, the Executor
 * always prefers queuing a request rather than adding a new
 * thread.</li>
 *
 * <li> If a request cannot be queued, a new thread is created unless
 * this would exceed maximumPoolSize, in which case, the task will be
 * rejected.</li>
 *
 * </ul>
 *

  • If fewer than corePoolSize threads are running, the Executor always prefers starting a new thread over queuing the request.
  • If corePoolSize or more threads are running, the Executor prefers queuing the request over starting a new thread.
  • If the request cannot be queued (the queue is full), a new thread is created unless that would exceed maximumPoolSize, in which case the task is rejected.

 * There are three general strategies for queuing:
 * <ol>
 *

There are three general queuing strategies:

 * <li> <em> Direct handoffs.</em> A good default choice for a work
 * queue is a {@link SynchronousQueue} that hands off tasks to threads
 * without otherwise holding them. Here, an attempt to queue a task
 * will fail if no threads are immediately available to run it, so a
 * new thread will be constructed. This policy avoids lockups when
 * handling sets of requests that might have internal dependencies.
 * Direct handoffs generally require unbounded maximumPoolSizes to
 * avoid rejection of new submitted tasks. This in turn admits the
 * possibility of unbounded thread growth when commands continue to
 * arrive on average faster than they can be processed.  </li>
 *

Direct handoffs:

  1. Tasks are not placed in a queue; each task is handed directly to the thread that will run it.
  2. If no thread is immediately available to run it, a new thread is constructed.
  3. This policy generally requires an effectively unbounded maximumPoolSize so that new tasks are not rejected (see the sketch after this list).
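
A minimal sketch of direct handoff with a SynchronousQueue; this is essentially how Executors.newCachedThreadPool() is configured (see section 3.2.2):

import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class HandoffDemo {
    public static void main(String[] args) {
        // the SynchronousQueue holds no tasks: each execute() either hands the task to an
        // idle thread or, if none is available, creates a new one (up to maximumPoolSize)
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                0, Integer.MAX_VALUE, 60L, TimeUnit.SECONDS, new SynchronousQueue<>());

        for (int i = 0; i < 3; i++) {
            pool.execute(() -> System.out.println(Thread.currentThread().getName()));
        }
        pool.shutdown();
    }
}
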
 * <li><em> Unbounded queues.</em> Using an unbounded queue (for
 * example a {@link LinkedBlockingQueue} without a predefined
 * capacity) will cause new tasks to wait in the queue when all
 * corePoolSize threads are busy. Thus, no more than corePoolSize
 * threads will ever be created. (And the value of the maximumPoolSize
 * therefore doesn't have any effect.)  This may be appropriate when
 * each task is completely independent of others, so tasks cannot
 * affect each others execution; for example, in a web page server.
 * While this style of queuing can be useful in smoothing out
 * transient bursts of requests, it admits the possibility of
 * unbounded work queue growth when commands continue to arrive on
 * average faster than they can be processed.  </li>
 *

Unbounded queues:

  1. When all corePoolSize threads are busy, new tasks wait in the queue (for example a LinkedBlockingQueue without a predefined capacity).
  2. Therefore no more than corePoolSize threads are ever created, and maximumPoolSize has no effect.
 * <li><em>Bounded queues.</em> A bounded queue (for example, an
 * {@link ArrayBlockingQueue}) helps prevent resource exhaustion when
 * used with finite maximumPoolSizes, but can be more difficult to
 * tune and control.  Queue sizes and maximum pool sizes may be traded
 * off for each other: Using large queues and small pools minimizes
 * CPU usage, OS resources, and context-switching overhead, but can
 * lead to artificially low throughput.  If tasks frequently block (for
 * example if they are I/O bound), a system may be able to schedule
 * time for more threads than you otherwise allow. Use of small queues
 * generally requires larger pool sizes, which keeps CPUs busier but
 * may encounter unacceptable scheduling overhead, which also
 * decreases throughput.  </li>
 *

Bounded queues:

  1. A bounded queue (for example an ArrayBlockingQueue) used with a finite maximumPoolSize helps prevent resource exhaustion.
  2. Large queues with small pools minimize CPU usage, OS resources, and context-switching overhead, but can lead to artificially low throughput; small queues generally need larger pools, which keeps CPUs busier but adds scheduling overhead (see the sketch after this list).
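
A minimal sketch of a bounded configuration; with these arbitrary numbers at most 4 threads plus 100 queued tasks can be pending before further submissions are rejected:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedQueueDemo {
    public static void main(String[] args) {
        // a bounded queue plus a finite maximumPoolSize caps both queued tasks and threads
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 60L, TimeUnit.SECONDS, new ArrayBlockingQueue<>(100));

        pool.execute(() -> System.out.println("bounded queue in use"));
        pool.shutdown();
    }
}
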
 * </ol>
 *
 * </dd>
 *
 * <dt>Rejected tasks</dt>
 *

Rejected tasks

 * <dd>New tasks submitted in method {@link #execute(Runnable)} will be
 * <em>rejected</em> when the Executor has been shut down, and also when
 * the Executor uses finite bounds for both maximum threads and work queue
 * capacity, and is saturated.  In either case, the {@code execute} method
 * invokes the {@link
 * RejectedExecutionHandler#rejectedExecution(Runnable, ThreadPoolExecutor)}
 * method of its {@link RejectedExecutionHandler}.  Four predefined handler
 * policies are provided:
 *
A task submitted via execute(Runnable) is rejected when:

  1. the pool has been shut down, or
  2. the pool uses finite bounds for both threads and queue capacity and is saturated.

In either case, execute hands the task to the pool's RejectedExecutionHandler rather than running it.

Four predefined rejection policies are provided:

 * <ol>
 *
 * <li> In the default {@link ThreadPoolExecutor.AbortPolicy}, the
 * handler throws a runtime {@link RejectedExecutionException} upon
 * rejection. </li>
 *

AbortPolicy (the default): the handler throws a RejectedExecutionException.

 * <li> In {@link ThreadPoolExecutor.CallerRunsPolicy}, the thread
 * that invokes {@code execute} itself runs the task. This provides a
 * simple feedback control mechanism that will slow down the rate that
 * new tasks are submitted. </li>
 *

CallerRunsPolicy: the thread that invoked execute runs the task itself, a simple feedback mechanism that slows down the rate of new submissions.

 * <li> In {@link ThreadPoolExecutor.DiscardPolicy}, a task that
 * cannot be executed is simply dropped.  </li>
 *

DiscardPolicy: the rejected task is silently dropped.

 * <li>In {@link ThreadPoolExecutor.DiscardOldestPolicy}, if the
 * executor is not shut down, the task at the head of the work queue
 * is dropped, and then execution is retried (which can fail again,
 * causing this to be repeated.) </li>
 *

DiscardOldestPolicy: if the executor is not shut down, the task at the head of the work queue is dropped and the submission is retried (which may fail again).

 * </ol>
 *
 * It is possible to define and use other kinds of {@link
 * RejectedExecutionHandler} classes. Doing so requires some care
 * especially when policies are designed to work only under particular
 * capacity or queuing policies. </dd>
 *
 * <dt>Hook methods</dt>
 *

Hook methods

 * <dd>This class provides {@code protected} overridable
 * {@link #beforeExecute(Thread, Runnable)} and
 * {@link #afterExecute(Runnable, Throwable)} methods that are called
 * before and after execution of each task.  These can be used to
 * manipulate the execution environment; for example, reinitializing
 * ThreadLocals, gathering statistics, or adding log entries.
 * Additionally, method {@link #terminated} can be overridden to perform
 * any special processing that needs to be done once the Executor has
 * fully terminated.
 *
 * <p>If hook or callback methods throw exceptions, internal worker
 * threads may in turn fail and abruptly terminate.</dd>
 *

ThreadPoolExecutor provides protected hook methods, beforeExecute(Thread, Runnable) and afterExecute(Runnable, Throwable), that are called before and after each task executes. Override them to manipulate the execution environment, for example to reinitialize ThreadLocals, gather statistics, or add log entries. In addition, terminated() is invoked once the Executor has fully terminated; override it to perform any special processing needed at that point.

Note: if a hook or callback method throws an exception, the internal worker thread may in turn fail and terminate abruptly.

 * <dt>Queue maintenance</dt>
 *

Queue maintenance

 * <dd>Method {@link #getQueue()} allows access to the work queue
 * for purposes of monitoring and debugging.  Use of this method for
 * any other purpose is strongly discouraged.  Two supplied methods,
 * {@link #remove(Runnable)} and {@link #purge} are available to
 * assist in storage reclamation when large numbers of queued tasks
 * become cancelled.</dd>
 *

The getQueue() method gives access to the work queue for monitoring and debugging; using it for any other purpose is strongly discouraged. Two helper methods, remove(Runnable) and purge(), assist in reclaiming storage when large numbers of queued tasks have been cancelled.

 * <dt>Finalization</dt>
 *

Finalization

 * <dd>A pool that is no longer referenced in a program <em>AND</em>
 * has no remaining threads will be {@code shutdown} automatically. If
 * you would like to ensure that unreferenced pools are reclaimed even
 * if users forget to call {@link #shutdown}, then you must arrange
 * that unused threads eventually die, by setting appropriate
 * keep-alive times, using a lower bound of zero core threads and/or
 * setting {@link #allowCoreThreadTimeOut(boolean)}.  </dd>
 *
 * </dl>
 *
 * <p><b>Extension example</b>. Most extensions of this class
 * override one or more of the protected hook methods. For example,
 * here is a subclass that adds a simple pause/resume feature:
 *

A pool that is no longer referenced by the program and has no remaining threads will be shut down automatically. If you want unreferenced pools to be reclaimed even when users forget to call shutdown(), you must make sure unused threads eventually die: set an appropriate keep-alive time, use a lower bound of zero core threads, and/or call allowCoreThreadTimeOut(true).
In practice it is best to call shutdown() explicitly once the pool is no longer needed.

 *  <pre> {@code
 * class PausableThreadPoolExecutor extends ThreadPoolExecutor {
 *   private boolean isPaused;
 *   private ReentrantLock pauseLock = new ReentrantLock();
 *   private Condition unpaused = pauseLock.newCondition();
 *
 *   public PausableThreadPoolExecutor(...) { super(...); }
 *
 *   protected void beforeExecute(Thread t, Runnable r) {
 *     super.beforeExecute(t, r);
 *     pauseLock.lock();
 *     try {
 *       while (isPaused) unpaused.await();
 *     } catch (InterruptedException ie) {
 *       t.interrupt();
 *     } finally {
 *       pauseLock.unlock();
 *     }
 *   }
 *
 *   public void pause() {
 *     pauseLock.lock();
 *     try {
 *       isPaused = true;
 *     } finally {
 *       pauseLock.unlock();
 *     }
 *   }
 *
 *   public void resume() {
 *     pauseLock.lock();
 *     try {
 *       isPaused = false;
 *       unpaused.signalAll();
 *     } finally {
 *       pauseLock.unlock();
 *     }
 *   }
 * }}</pre>
 *
 * @since 1.5
 * @author Doug Lea
 */

3.1 Internal State

/**
     * The main pool control state, ctl, is an atomic integer packing
     * two conceptual fields
     *   workerCount, indicating the effective number of threads
     *   runState,    indicating whether running, shutting down etc
     *
     * In order to pack them into one int, we limit workerCount to
     * (2^29)-1 (about 500 million) threads rather than (2^31)-1 (2
     * billion) otherwise representable. If this is ever an issue in
     * the future, the variable can be changed to be an AtomicLong,
     * and the shift/mask constants below adjusted. But until the need
     * arises, this code is a bit faster and simpler using an int.
     *
     * The workerCount is the number of workers that have been
     * permitted to start and not permitted to stop.  The value may be
     * transiently different from the actual number of live threads,
     * for example when a ThreadFactory fails to create a thread when
     * asked, and when exiting threads are still performing
     * bookkeeping before terminating. The user-visible pool size is
     * reported as the current size of the workers set.
     *
     * The runState provides the main lifecycle control, taking on values:
     *
     *   RUNNING:  Accept new tasks and process queued tasks
     *   SHUTDOWN: Don't accept new tasks, but process queued tasks
     *   STOP:     Don't accept new tasks, don't process queued tasks,
     *             and interrupt in-progress tasks
     *   TIDYING:  All tasks have terminated, workerCount is zero,
     *             the thread transitioning to state TIDYING
     *             will run the terminated() hook method
     *   TERMINATED: terminated() has completed
     *
     * The numerical order among these values matters, to allow
     * ordered comparisons. The runState monotonically increases over
     * time, but need not hit each state. The transitions are:
     *
     * RUNNING -> SHUTDOWN
     *    On invocation of shutdown(), perhaps implicitly in finalize()
     * (RUNNING or SHUTDOWN) -> STOP
     *    On invocation of shutdownNow()
     * SHUTDOWN -> TIDYING
     *    When both queue and pool are empty
     * STOP -> TIDYING
     *    When pool is empty
     * TIDYING -> TERMINATED
     *    When the terminated() hook method has completed
     *
     * Threads waiting in awaitTermination() will return when the
     * state reaches TERMINATED.
     *
     * Detecting the transition from SHUTDOWN to TIDYING is less
     * straightforward than you'd like because the queue may become
     * empty after non-empty and vice versa during SHUTDOWN state, but
     * we can only terminate if, after seeing that it is empty, we see
     * that workerCount is 0 (which sometimes entails a recheck -- see
     * below).
     */

The ctl variable is declared as an AtomicInteger and packs two pieces of information into a single value: the pool's run state and the worker count.

	private final AtomicInteger ctl = new AtomicInteger(ctlOf(RUNNING, 0));
    private static final int COUNT_BITS = Integer.SIZE - 3;
    private static final int CAPACITY   = (1 << COUNT_BITS) - 1;

    // runState is stored in the high-order bits
    private static final int RUNNING    = -1 << COUNT_BITS;
    private static final int SHUTDOWN   =  0 << COUNT_BITS;
    private static final int STOP       =  1 << COUNT_BITS;
    private static final int TIDYING    =  2 << COUNT_BITS;
    private static final int TERMINATED =  3 << COUNT_BITS;

    // Packing and unpacking ctl
    private static int runStateOf(int c)     { return c & ~CAPACITY; }
    private static int workerCountOf(int c)  { return c & CAPACITY; }
    private static int ctlOf(int rs, int wc) { return rs | wc; }

The high 3 bits hold the pool's run state; the low 29 bits hold the worker count.
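
A small self-contained sketch that copies the constants above to show how a single int carries both fields:

public class CtlDemo {
    private static final int COUNT_BITS = Integer.SIZE - 3;             // 29
    private static final int CAPACITY   = (1 << COUNT_BITS) - 1;        // low 29 bits set
    private static final int RUNNING    = -1 << COUNT_BITS;             // 111 in the high 3 bits

    private static int runStateOf(int c)     { return c & ~CAPACITY; }  // keep the high 3 bits
    private static int workerCountOf(int c)  { return c & CAPACITY; }   // keep the low 29 bits
    private static int ctlOf(int rs, int wc) { return rs | wc; }

    public static void main(String[] args) {
        int c = ctlOf(RUNNING, 5);                     // RUNNING with 5 worker threads
        System.out.println(runStateOf(c) == RUNNING);  // true
        System.out.println(workerCountOf(c));          // 5
    }
}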

The pool's run states:

  1. RUNNING: accepts new tasks and processes queued tasks.
  2. SHUTDOWN: does not accept new tasks, but still processes queued tasks.
  3. STOP: does not accept new tasks, does not process queued tasks, and interrupts tasks in progress.
  4. TIDYING: all tasks have terminated and the worker count recorded in ctl is 0. On transitioning to TIDYING the pool runs the terminated() hook, which is empty in ThreadPoolExecutor; override it if you need to react to this transition.
  5. TERMINATED: terminated() has completed; the pool is fully terminated.

(State-transition diagrams omitted; the possible transitions, e.g. RUNNING -> SHUTDOWN -> TIDYING -> TERMINATED, are listed in the JavaDoc above.)

3.2 Predefined Pools

  • newFixedThreadPool
  • newCachedThreadPool
  • newSingleThreadExecutor

3.2.1 newFixedThreadPool

A pool with a fixed number of threads: it returns a ThreadPoolExecutor whose corePoolSize and maximumPoolSize are equal, backed by an unbounded LinkedBlockingQueue.

	public static ExecutorService newFixedThreadPool(int nThreads) {
        return new ThreadPoolExecutor(nThreads, nThreads,
                                      0L, TimeUnit.MILLISECONDS,
                                      new LinkedBlockingQueue<Runnable>());
    }

3.2.2 newCachedThreadPool

A very elastic pool: if no idle thread is available when a new task arrives, the pool immediately creates a new thread to handle it; idle threads are reclaimed after 60 seconds.

	public static ExecutorService newCachedThreadPool() {
        return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                                      60L, TimeUnit.SECONDS,
                                      new SynchronousQueue<Runnable>());
    }

3.2.3 newSingleThreadExecutor

An Executor that uses a single worker thread.

	public static ExecutorService newSingleThreadExecutor() {
        return new FinalizableDelegatedExecutorService
            (new ThreadPoolExecutor(1, 1,
                                    0L, TimeUnit.MILLISECONDS,
                                    new LinkedBlockingQueue<Runnable>()));
    }

3.3 Constructor

	public ThreadPoolExecutor(int corePoolSize,
                              int maximumPoolSize,
                              long keepAliveTime,
                              TimeUnit unit,
                              BlockingQueue<Runnable> workQueue,
                              ThreadFactory threadFactory,
                              RejectedExecutionHandler handler) {
        if (corePoolSize < 0 ||
            maximumPoolSize <= 0 ||
            maximumPoolSize < corePoolSize ||
            keepAliveTime < 0)
            throw new IllegalArgumentException();
        if (workQueue == null || threadFactory == null || handler == null)
            throw new NullPointerException();
        this.acc = System.getSecurityManager() == null ?
                null :
                AccessController.getContext();
        this.corePoolSize = corePoolSize;
        this.maximumPoolSize = maximumPoolSize;
        this.workQueue = workQueue;
        this.keepAliveTime = unit.toNanos(keepAliveTime);
        this.threadFactory = threadFactory;
        this.handler = handler;
    }
  1. corePoolSize: the core number of threads
  2. maximumPoolSize: the maximum number of threads
  3. keepAliveTime: how long excess idle threads may stay alive
  4. unit: the time unit for keepAliveTime
  5. workQueue: the blocking queue holding waiting tasks
  6. threadFactory: the factory used to create new threads
  7. handler: the policy for rejected tasks (a sketch passing all seven parameters follows this list)
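
A minimal sketch that passes all seven parameters explicitly; every value below is arbitrary and only meant to label the arguments:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ManualPoolDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                4,                                       // 1. corePoolSize
                8,                                       // 2. maximumPoolSize
                60L,                                     // 3. keepAliveTime
                TimeUnit.SECONDS,                        // 4. time unit
                new ArrayBlockingQueue<>(100),           // 5. blocking queue
                Executors.defaultThreadFactory(),        // 6. thread factory
                new ThreadPoolExecutor.AbortPolicy());   // 7. rejection policy

        pool.execute(() -> System.out.println(Thread.currentThread().getName()));
        pool.shutdown();
    }
}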

Key points about pool sizes:

  • If fewer than corePoolSize threads are running, a new thread is created to handle the request.
  • If more than corePoolSize but fewer than maximumPoolSize threads are running, a new thread is created only when the queue is full.
  • If corePoolSize equals maximumPoolSize, the pool has a fixed size.
  • If maximumPoolSize is effectively unbounded, the pool can accommodate an arbitrary number of concurrent tasks.

Key points about keep-alive time:

  • When the thread count exceeds corePoolSize, a thread whose idle time exceeds keepAliveTime is destroyed.

Key points about queuing strategies:

  • Direct handoff: the task is not queued but handed straight to a thread; if no thread is free, a new one is typically created.
  • Unbounded queue: if all core threads are busy, new tasks wait in the queue, so the thread count never exceeds corePoolSize.
  • Bounded queue: avoids resource exhaustion, at some cost in throughput.

When the pool has been shut down, or the thread count has reached its limit and the queue is saturated, tasks are rejected:

Rejection policies:

  • throw an exception directly (the default)
  • run the task on the caller's thread
  • silently drop the task
  • drop the oldest queued task and retry (a sketch triggering a rejection follows this list)
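
A minimal sketch that deliberately triggers a rejection with a tiny pool (1 thread, 1 queue slot); CallerRunsPolicy makes the submitting thread run the third task instead of throwing:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class RejectionDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(1),
                new ThreadPoolExecutor.CallerRunsPolicy());

        // task 1 occupies the single worker, task 2 fills the queue,
        // task 3 is rejected and therefore runs on the main thread
        for (int i = 0; i < 3; i++) {
            pool.execute(() -> {
                try { TimeUnit.SECONDS.sleep(1); } catch (InterruptedException ignored) { }
                System.out.println("ran on " + Thread.currentThread().getName());
            });
        }
        pool.shutdown();
    }
}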

4. The execute Method

	public void execute(Runnable command) {
        if (command == null)
            throw new NullPointerException();
        int c = ctl.get();
        // If fewer than corePoolSize threads are running, create a new thread for this
        // request, even if other worker threads are idle.
        if (workerCountOf(c) < corePoolSize) {
            if (addWorker(command, true))
                return;
            c = ctl.get();
        }

        // If at least corePoolSize threads are running, the pool is RUNNING, and the task
        // was successfully queued, re-check the pool state:
        // 1. if the pool is no longer RUNNING and the task can be removed from the queue,
        //    hand it to the RejectedExecutionHandler;
        // 2. otherwise, if no worker thread is running, start one via addWorker(null, false)
        //    (the new worker has no first task and will pull from the queue).
        if (isRunning(c) && workQueue.offer(command)) {
            int recheck = ctl.get();
            if (! isRunning(recheck) && remove(command))
                reject(command);
            else if (workerCountOf(recheck) == 0)
                addWorker(null, false);
        }
        // If neither case applies (the task could not be queued) and addWorker fails to
        // create a new thread, hand the task to the RejectedExecutionHandler.
        else if (!addWorker(command, false))
            reject(command);
    }

5. Shutting Down the Pool

ThreadPoolExecutor provides two methods for shutting down a pool: shutdown() and shutdownNow().

5.1 shutdown()

 /**
     * Initiates an orderly shutdown in which previously submitted
     * tasks are executed, but no new tasks will be accepted.
     * Invocation has no additional effect if already shut down.
     *
     * <p>This method does not wait for previously submitted tasks to
     * complete execution.  Use {@link #awaitTermination awaitTermination}
     * to do that.
     *
     * @throws SecurityException {@inheritDoc}
     */

Initiates an orderly shutdown in which previously submitted tasks are executed, but no new tasks are accepted; invoking it again has no additional effect.
The method does not wait for previously submitted tasks to complete; call awaitTermination if you need to wait.

	public void shutdown() {
        final ReentrantLock mainLock = this.mainLock;
        // acquire the pool's main lock
        mainLock.lock();
        try {
            checkShutdownAccess();
            // advance the run state to SHUTDOWN
            advanceRunState(SHUTDOWN);
            // interrupt idle worker threads
            interruptIdleWorkers();
            onShutdown(); // hook for ScheduledThreadPoolExecutor
        } finally {
            mainLock.unlock();
        }
        // attempt to move the pool to TERMINATED
        tryTerminate();
    }
  1. Acquire the pool's main lock and call checkShutdownAccess to verify that the caller has permission to shut down each worker thread.
  2. Call advanceRunState, which uses a CAS loop to change the state stored in ctl to SHUTDOWN.
  3. Call interruptIdleWorkers to interrupt every idle worker. Idleness is detected by calling tryLock on the Worker's internal lock: a worker that is running a task already holds its lock, so tryLock returns false and that worker is left alone.
  4. Call onShutdown() so that subclasses (ScheduledThreadPoolExecutor) know the pool is entering the SHUTDOWN state.
  5. After releasing the lock, call tryTerminate() to try to terminate the pool.

5.2 shutdownNow()

    /**
     * Attempts to stop all actively executing tasks, halts the
     * processing of waiting tasks, and returns a list of the tasks
     * that were awaiting execution. These tasks are drained (removed)
     * from the task queue upon return from this method.
     *
     * <p>This method does not wait for actively executing tasks to
     * terminate.  Use {@link #awaitTermination awaitTermination} to
     * do that.
     *
     * <p>There are no guarantees beyond best-effort attempts to stop
     * processing actively executing tasks.  This implementation
     * cancels tasks via {@link Thread#interrupt}, so any task that
     * fails to respond to interrupts may never terminate.
     *
     * @throws SecurityException {@inheritDoc}
     */

shutdownNow attempts to stop all actively executing tasks, halts the processing of waiting tasks, and returns a list of the tasks that were awaiting execution.
Tasks that are already running are also asked to stop, via Thread.interrupt, so a task that never responds to interruption may never terminate.

 	public List<Runnable> shutdownNow() {
        List<Runnable> tasks;
        final ReentrantLock mainLock = this.mainLock;
        mainLock.lock();
        try {
            checkShutdownAccess();
            advanceRunState(STOP);
            interruptWorkers();
            tasks = drainQueue();
        } finally {
            mainLock.unlock();
        }
        tryTerminate();
        return tasks;
    }
  1. Acquire the pool's main lock and call checkShutdownAccess, as in shutdown().
  2. Call advanceRunState to move the state stored in ctl to STOP.
  3. Call interruptWorkers(), which interrupts every started worker, including those currently running tasks (shutdown() only interrupts idle workers).
  4. Call drainQueue() to remove the tasks still waiting in the queue and collect them into a list.
  5. Release the lock, call tryTerminate(), and return the list of tasks that never ran.

5.3 tryTerminate

 final void tryTerminate() {
        for (;;) {
            int c = ctl.get();
            if (isRunning(c) ||
                runStateAtLeast(c, TIDYING) ||
                (runStateOf(c) == SHUTDOWN && ! workQueue.isEmpty()))
                return;
            if (workerCountOf(c) != 0) { // Eligible to terminate
                interruptIdleWorkers(ONLY_ONE);
                return;
            }

            final ReentrantLock mainLock = this.mainLock;
            mainLock.lock();
            try {
                if (ctl.compareAndSet(c, ctlOf(TIDYING, 0))) {
                    try {
                        terminated();
                    } finally {
                        ctl.set(ctlOf(TERMINATED, 0));
                        termination.signalAll();
                    }
                    return;
                }
            } finally {
                mainLock.unlock();
            }
            // else retry on failed CAS
        }
    }

Inside tryTerminate, a pool in the SHUTDOWN state can only proceed towards termination if the work queue is empty; if tasks remain, the method returns immediately. The last worker to finish calls tryTerminate again from processWorkerExit, and that is when termination really completes.
If the subsequent check workerCountOf(c) != 0 is true, the queue is empty but workers remain, so at most one idle worker is interrupted (interruptIdleWorkers(ONLY_ONE)) to propagate the shutdown signal.

5.4 Differences Between shutdown and shutdownNow

  • After shutdown() the pool state immediately becomes SHUTDOWN; after shutdownNow() it immediately becomes STOP.
  • shutdown() lets already-submitted tasks finish and only interrupts idle threads, whereas shutdownNow() interrupts all worker threads without waiting for running tasks to finish.
  • shutdownNow() returns the list of tasks that were still waiting in the queue (a common pattern combining the two follows this list).
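
In practice the two are often combined. The sketch below is adapted from the graceful-shutdown pattern shown in the ExecutorService JavaDoc; the 60-second timeouts are arbitrary:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

public class ShutdownHelper {
    // stop accepting new tasks, wait for queued tasks, then force-cancel whatever is left
    static void shutdownGracefully(ExecutorService pool) {
        pool.shutdown();                         // no new tasks; queued tasks still run
        try {
            if (!pool.awaitTermination(60, TimeUnit.SECONDS)) {
                pool.shutdownNow();              // interrupt running tasks
                if (!pool.awaitTermination(60, TimeUnit.SECONDS)) {
                    System.err.println("Pool did not terminate");
                }
            }
        } catch (InterruptedException ie) {
            pool.shutdownNow();                  // re-cancel if the current thread was interrupted
            Thread.currentThread().interrupt();
        }
    }
}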