1. Blocking queue use cases
The most common use case for blocking queues is the producer-consumer pattern.
A producer is a thread that produces data; a consumer is a thread that consumes it. In multithreaded code, if the producer is fast and the consumer is slow, the producer must wait for the consumer to catch up before producing more data. Likewise, if the consumer outpaces the producer, the consumer must wait for the producer. The producer-consumer pattern exists to balance this mismatch in processing capacity. It decouples producers and consumers through a shared container: the two sides never communicate directly, but only through the blocking queue. After producing an item, the producer does not wait for the consumer; it simply drops the item into the queue. The consumer does not ask the producer for data; it takes items directly from the queue. The blocking queue thus acts as a buffer that balances the processing capacity of producers and consumers.
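To see the pattern end to end, here is a minimal sketch (the class name ProducerConsumerDemo and the buffer size of 2 are arbitrary choices for illustration): a small bounded buffer forces the fast side to block while the slow side catches up.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProducerConsumerDemo {
    // Runs one producer and one consumer against a small bounded buffer
    // and returns everything the consumer saw, in order.
    static List<Integer> run() throws InterruptedException {
        BlockingQueue<Integer> buffer = new ArrayBlockingQueue<>(2); // small buffer forces blocking
        List<Integer> consumed = new ArrayList<>();
        Thread producer = new Thread(() -> {
            for (int i = 0; i < 5; i++) {
                try {
                    buffer.put(i); // blocks while the buffer is full
                } catch (InterruptedException ex) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        Thread consumer = new Thread(() -> {
            for (int i = 0; i < 5; i++) {
                try {
                    consumed.add(buffer.take()); // blocks while the buffer is empty
                } catch (InterruptedException ex) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
        return consumed;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // [0, 1, 2, 3, 4]
    }
}
```

Neither thread ever touches the other directly; the queue alone mediates the hand-off.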
2. Common blocking queues
2.1 ArrayBlockingQueue
A bounded blocking queue backed by an array. Elements are ordered first-in, first-out, and an initial capacity must be specified. Internally it uses a single ReentrantLock.
(1) Constructor
public ArrayBlockingQueue(int capacity, boolean fair) {
    if (capacity <= 0)
        throw new IllegalArgumentException();
    this.items = new Object[capacity];
    // the single ReentrantLock
    lock = new ReentrantLock(fair);
    // two Condition wait queues
    notEmpty = lock.newCondition();
    notFull = lock.newCondition();
}
(2) The add method
public boolean add(E e) {
    if (offer(e))
        return true;
    else
        throw new IllegalStateException("Queue full");
}
(3) The offer method
public boolean offer(E e) {
    // null elements are not allowed
    checkNotNull(e);
    final ReentrantLock lock = this.lock;
    // lock
    lock.lock();
    try {
        // queue full: fail without blocking
        if (count == items.length)
            return false;
        else {
            enqueue(e);
            return true;
        }
    } finally {
        // unlock
        lock.unlock();
    }
}
(4) The enqueue method
private void enqueue(E x) {
    // assert lock.getHoldCount() == 1;
    // assert items[putIndex] == null;
    // store the element in the circular array
    final Object[] items = this.items;
    items[putIndex] = x;
    if (++putIndex == items.length)
        putIndex = 0;
    count++;
    // wake a thread waiting on the notEmpty condition
    notEmpty.signal();
}
(5) The put method
The difference between put and add is that when the queue is full, put waits on the notFull condition, whereas add throws an exception.
public void put(E e) throws InterruptedException {
    checkNotNull(e);
    final ReentrantLock lock = this.lock;
    lock.lockInterruptibly();
    try {
        while (count == items.length)
            notFull.await();
        enqueue(e);
    } finally {
        lock.unlock();
    }
}
(6) The poll method
poll returns immediately (with null) when the queue is empty.
public E poll() {
    final ReentrantLock lock = this.lock;
    lock.lock();
    try {
        return (count == 0) ? null : dequeue();
    } finally {
        lock.unlock();
    }
}

private E dequeue() {
    // assert lock.getHoldCount() == 1;
    // assert items[takeIndex] != null;
    final Object[] items = this.items;
    @SuppressWarnings("unchecked")
    E x = (E) items[takeIndex];
    items[takeIndex] = null;
    if (++takeIndex == items.length)
        takeIndex = 0;
    count--;
    if (itrs != null)
        itrs.elementDequeued();
    notFull.signal();
    return x;
}
(7) The take method
Unlike poll, take waits when the queue is empty.
public E take() throws InterruptedException {
    final ReentrantLock lock = this.lock;
    lock.lockInterruptibly();
    try {
        while (count == 0)
            notEmpty.await();
        return dequeue();
    } finally {
        lock.unlock();
    }
}
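The contrast between add, offer, poll, and take can be seen in a short sketch (the class name is arbitrary; the behavior shown follows from the source above):

```java
import java.util.concurrent.ArrayBlockingQueue;

public class ArrayBlockingQueueDemo {
    public static void main(String[] args) {
        ArrayBlockingQueue<String> q = new ArrayBlockingQueue<>(1);
        System.out.println(q.offer("a")); // true: the queue had room
        System.out.println(q.offer("b")); // false: full, but no exception
        try {
            q.add("b"); // add delegates to offer, then throws when full
        } catch (IllegalStateException ex) {
            System.out.println("add: " + ex.getMessage());
        }
        System.out.println(q.poll()); // "a"
        System.out.println(q.poll()); // null: empty, returns immediately
    }
}
```

Only put and take block; the other four methods return at once.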
2.2 LinkedBlockingQueue
An optionally bounded blocking queue backed by a linked list. Elements are ordered first-in, first-out; a capacity need not be specified, in which case it defaults to Integer.MAX_VALUE.
public LinkedBlockingQueue() {
    this(Integer.MAX_VALUE);
}

private final ReentrantLock takeLock = new ReentrantLock();
/** Wait queue for waiting takes */
private final Condition notEmpty = takeLock.newCondition();
/** Lock held by put, offer, etc */
private final ReentrantLock putLock = new ReentrantLock();
/** Wait queue for waiting puts */
private final Condition notFull = putLock.newCondition();
(1) The put method
public void put(E e) throws InterruptedException {
    // null elements are not allowed
    if (e == null) throw new NullPointerException();
    // Note: convention in all put/take/etc is to preset local var
    // holding count negative to indicate failure unless set.
    int c = -1;
    Node<E> node = new Node<E>(e);
    // take the put lock
    final ReentrantLock putLock = this.putLock;
    final AtomicInteger count = this.count;
    putLock.lockInterruptibly();
    try {
        /*
         * Note that count is used in wait guard even though it is
         * not protected by lock. This works because count can
         * only decrease at this point (all other puts are shut
         * out by lock), and we (or some other waiting put) are
         * signalled if it ever changes from capacity. Similarly
         * for all other uses of count in other wait guards.
         */
        // queue full: wait
        while (count.get() == capacity) {
            notFull.await();
        }
        // link the node into the list
        enqueue(node);
        c = count.getAndIncrement();
        if (c + 1 < capacity)
            // still room: tell a thread waiting on notFull it may insert
            notFull.signal();
    } finally {
        putLock.unlock();
    }
    if (c == 0)
        // the queue was empty before this put: tell threads waiting on notEmpty they can take
        signalNotEmpty();
}
(2) The enqueue method
private void enqueue(Node<E> node) {
    // assert putLock.isHeldByCurrentThread();
    // assert last.next == null;
    last = last.next = node;
}
(3) The signalNotEmpty method
private void signalNotEmpty() {
    final ReentrantLock takeLock = this.takeLock;
    takeLock.lock();
    try {
        notEmpty.signal();
    } finally {
        takeLock.unlock();
    }
}
(4) The offer method
The difference between offer and put is that offer does not wait when the queue is full.
public boolean offer(E e) {
    if (e == null) throw new NullPointerException();
    final AtomicInteger count = this.count;
    if (count.get() == capacity)
        return false;
    int c = -1;
    Node<E> node = new Node<E>(e);
    final ReentrantLock putLock = this.putLock;
    putLock.lock();
    try {
        if (count.get() < capacity) {
            enqueue(node);
            c = count.getAndIncrement();
            if (c + 1 < capacity)
                notFull.signal();
        }
    } finally {
        putLock.unlock();
    }
    if (c == 0)
        signalNotEmpty();
    return c >= 0;
}
(5) The take method
public E take() throws InterruptedException {
    E x;
    int c = -1;
    final AtomicInteger count = this.count;
    final ReentrantLock takeLock = this.takeLock;
    takeLock.lockInterruptibly();
    try {
        while (count.get() == 0) {
            // queue empty: wait
            notEmpty.await();
        }
        x = dequeue();
        c = count.getAndDecrement();
        if (c > 1)
            // queue still non-empty: tell a thread waiting on notEmpty it may take
            notEmpty.signal();
    } finally {
        takeLock.unlock();
    }
    if (c == capacity)
        // the queue was full before this take: tell threads waiting on notFull they can insert
        signalNotFull();
    return x;
}
(6) The poll method
poll returns immediately when the queue is empty; it does not wait.
Summary: the key difference from ArrayBlockingQueue is that LinkedBlockingQueue has two locks (putLock and takeLock), so inserting and removing data can proceed concurrently. ArrayBlockingQueue, with its single lock, cannot.
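The signalNotEmpty hand-off between the two locks can be exercised directly. A minimal sketch (class and method names are arbitrary): a taker parks on notEmpty, and a put from another thread, holding only putLock, wakes it.

```java
import java.util.concurrent.LinkedBlockingQueue;

public class LinkedBlockingQueueDemo {
    // A take() blocks on notEmpty until a put() from another thread
    // (which holds only putLock) calls signalNotEmpty.
    static int blockedTake() throws InterruptedException {
        LinkedBlockingQueue<Integer> q = new LinkedBlockingQueue<>(); // capacity Integer.MAX_VALUE
        final int[] result = new int[1];
        Thread taker = new Thread(() -> {
            try {
                result[0] = q.take(); // blocks: the queue is empty
            } catch (InterruptedException ex) {
                Thread.currentThread().interrupt();
            }
        });
        taker.start();
        Thread.sleep(50); // give the taker time to park on notEmpty
        q.put(42);        // runs under putLock; wakes the taker via signalNotEmpty
        taker.join();
        return result[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(blockedTake()); // 42
    }
}
```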
2.3 PriorityBlockingQueue
An unbounded blocking queue that orders elements by priority (it is array-backed internally; it is unbounded because the array grows by copying). By default elements are sorted in natural order: either the elements implement compareTo(), or a Comparator is passed to the constructor.
2.3.1 PriorityQueue
PriorityQueue is a priority queue: although it stores its data in an array, it is logically a binary heap. A binary heap comes in two kinds: a max-heap, where each parent's key is greater than or equal to either child's, and a min-heap, where each parent's key is less than or equal to either child's.
When PriorityQueue lays the heap out in an array, the element at index n has its left child at index 2n+1, its right child at index 2(n+1), and its parent at index (n-1)/2.
PriorityQueue is not used that often in day-to-day code, and its source is fairly short; once the heap structure is clear, the code is easy to follow.
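The min-heap behavior can be checked with a short sketch (the class name is arbitrary): whatever order elements go in, poll() always removes the root, i.e. the smallest element under natural ordering.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.PriorityQueue;

public class PriorityQueueDemo {
    // Inserts the given values, then drains the queue in heap order.
    static List<Integer> drain(int... values) {
        PriorityQueue<Integer> pq = new PriorityQueue<>(); // min-heap under natural ordering
        for (int v : values)
            pq.offer(v); // sift-up keeps the heap property
        List<Integer> out = new ArrayList<>();
        while (!pq.isEmpty())
            out.add(pq.poll()); // sift-down restores the heap after each removal
        return out;
    }

    public static void main(String[] args) {
        System.out.println(drain(5, 1, 3)); // [1, 3, 5]
    }
}
```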
(1) Adding an element
A new element is appended at the tail of the array and then sifted up until its parent is no larger than it, restoring the heap property.
public boolean offer(E e) {
    if (e == null)
        throw new NullPointerException();
    modCount++;
    int i = size;
    if (i >= queue.length)
        grow(i + 1);
    size = i + 1;
    if (i == 0)
        queue[0] = e;
    else
        siftUp(i, e);
    return true;
}

private void siftUp(int k, E x) {
    if (comparator != null)
        siftUpUsingComparator(k, x);
    else
        siftUpComparable(k, x);
}

private void siftUpUsingComparator(int k, E x) {
    while (k > 0) {
        int parent = (k - 1) >>> 1;
        Object e = queue[parent];
        if (comparator.compare(x, (E) e) >= 0)
            break;
        queue[k] = e;
        k = parent;
    }
    queue[k] = x;
}
(2) Removing an element
Removal always takes the root; the last element is then moved to the root and sifted down, swapping with the smaller child until the heap property holds again.
private void siftDownComparable(int k, E x) {
    // comparator is null, so the element must implement Comparable for ordering
    Comparable<? super E> key = (Comparable<? super E>) x;
    // size/2 is the index of the first leaf; only non-leaf positions need sifting
    int half = size >>> 1; // loop while a non-leaf
    // while k < half, the element at position k still has at least one child
    while (k < half) {
        // left child of k is at 2k+1
        int child = (k << 1) + 1; // assume left child is least
        Object c = queue[child];
        // right child of k is at 2k+2
        int right = child + 1;
        // if the left child is larger than the right, take the right:
        // c ends up holding the smaller of the two children
        if (right < size &&
            ((Comparable<? super E>) c).compareTo((E) queue[right]) > 0)
            c = queue[child = right];
        // if the sifted element is no larger than the smaller child, stop
        if (key.compareTo((E) c) <= 0)
            break;
        // otherwise "move down": swap the smaller child up into position k
        queue[k] = c;
        // and continue sifting from the child's old position
        k = child;
    }
    // place the element at its final position k
    queue[k] = key;
}
2.3.2 PriorityBlockingQueue
(1) Constructor
public PriorityBlockingQueue(int initialCapacity,
                             Comparator<? super E> comparator) {
    if (initialCapacity < 1)
        throw new IllegalArgumentException();
    this.lock = new ReentrantLock();
    this.notEmpty = lock.newCondition();
    this.comparator = comparator;
    this.queue = new Object[initialCapacity];
}
(2) The offer method
public boolean offer(E e) {
    if (e == null)
        throw new NullPointerException();
    final ReentrantLock lock = this.lock;
    // lock
    lock.lock();
    int n, cap;
    Object[] array;
    while ((n = size) >= (cap = (array = queue).length))
        // grow the array, via System.arraycopy
        tryGrow(array, cap);
    try {
        Comparator<? super E> cmp = comparator;
        if (cmp == null)
            // sift up, same principle as PriorityQueue
            siftUpComparable(n, e, array);
        else
            siftUpUsingComparator(n, e, array, cmp);
        size = n + 1;
        // tell threads waiting on notEmpty that data is available
        notEmpty.signal();
    } finally {
        lock.unlock();
    }
    return true;
}
(3) The take method
public E take() throws InterruptedException {
    // lock, interruptibly
    final ReentrantLock lock = this.lock;
    lock.lockInterruptibly();
    E result;
    try {
        while ((result = dequeue()) == null)
            // queue empty: wait on notEmpty
            notEmpty.await();
    } finally {
        lock.unlock();
    }
    return result;
}
Summary: once the priority-heap implementation is understood, and with the earlier source analysis as background, the code is quite clear.
2.4 SynchronousQueue
A blocking queue that stores no elements: every put operation must wait for a matching take. In fair mode it uses a first-in, first-out queue; in non-fair mode it uses a last-in, first-out stack.
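The "stores no elements" behavior shows up immediately in a sketch (the class and method names here are arbitrary): a non-blocking offer with no waiting consumer fails, while put blocks until a take arrives to match it.

```java
import java.util.concurrent.SynchronousQueue;

public class SynchronousQueueDemo {
    static String handOff() throws InterruptedException {
        SynchronousQueue<String> q = new SynchronousQueue<>(true); // fair: FIFO TransferQueue
        // With no consumer waiting there is nowhere to store the element,
        // so a non-blocking offer simply fails.
        boolean stored = q.offer("x");
        final String[] got = new String[1];
        Thread consumer = new Thread(() -> {
            try {
                got[0] = q.take();
            } catch (InterruptedException ex) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();
        q.put("y"); // blocks until the consumer's take matches it
        consumer.join();
        return stored + ":" + got[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(handOff()); // false:y
    }
}
```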
2.4.1 The non-fair TransferStack
1. If the stack is empty, or already holds nodes of the same mode as the requesting node, push the requesting node as the new top and wait for a later node to match it; the call eventually returns the matched node's data, or null if the wait was cancelled.
2. If the stack is non-empty and the requesting node's mode is complementary to the top node's, mark the requesting node FULFILLING, push it, match it with the complementary node, pop both nodes together once the exchange completes, and return the exchanged data.
3. If the top node is already in FULFILLING mode, it is in the middle of a match; help it finish quickly, then continue.
After a node is pushed onto the stack, it spin-waits for a match in awaitFulfill().
(1) The transfer method
E transfer(E e, boolean timed, long nanos) {
    /*
     * Basic algorithm is to loop trying one of three actions:
     *
     * 1. If apparently empty or already containing nodes of same
     *    mode, try to push node on stack and wait for a match,
     *    returning it, or null if cancelled.
     *
     * 2. If apparently containing node of complementary mode,
     *    try to push a fulfilling node on to stack, match
     *    with corresponding waiting node, pop both from
     *    stack, and return matched item. The matching or
     *    unlinking might not actually be necessary because of
     *    other threads performing action 3:
     *
     * 3. If top of stack already holds another fulfilling node,
     *    help it out by doing its match and/or pop
     *    operations, and then continue. The code for helping
     *    is essentially the same as for fulfilling, except
     *    that it doesn't return the item.
     */
    SNode s = null; // constructed/reused as needed
    // if e is null the caller is a consumer (REQUEST), otherwise a producer (DATA)
    int mode = (e == null) ? REQUEST : DATA;
    for (;;) {
        SNode h = head;
        // head is null, or head has the same mode as this transfer
        if (h == null || h.mode == mode) { // empty or same-mode
            // are we allowed to wait?
            if (timed && nanos <= 0) { // can't wait
                // if head is non-null and cancelled, replace it; otherwise return null
                if (h != null && h.isCancelled())
                    casHead(h, h.next); // pop cancelled node
                else
                    return null;
            // build an SNode whose next is the old head, and CAS it in as the new head
            } else if (casHead(h, s = snode(s, e, h, mode))) {
                // wait for and return the matching node
                SNode m = awaitFulfill(s, timed, nanos);
                if (m == s) { // wait was cancelled
                    // unlink node s
                    clean(s);
                    return null;
                }
                if ((h = head) != null && h.next == s)
                    // on a successful match at the head, pop both matched nodes
                    casHead(h, s.next); // help s's fulfiller
                return (E) ((mode == REQUEST) ? m.item : s.item);
            }
        } else if (!isFulfilling(h.mode)) { // try to fulfill: is head already matching?
            // if head was cancelled, replace it
            if (h.isCancelled()) // already cancelled
                casHead(h, h.next); // pop and retry
            else if (casHead(h, s = snode(s, e, h, FULFILLING|mode))) {
                for (;;) { // loop until matched or waiters disappear
                    SNode m = s.next; // m is s's match
                    if (m == null) { // all waiters are gone
                        casHead(s, null); // pop fulfill node
                        s = null; // use new node next time
                        break; // restart main loop
                    }
                    // return the matched data
                    SNode mn = m.next;
                    if (m.tryMatch(s)) {
                        casHead(s, mn); // pop both s and m
                        return (E) ((mode == REQUEST) ? m.item : s.item);
                    } else // lost match
                        s.casNext(m, mn); // help unlink
                }
            }
        } else { // help a fulfiller
            // help the in-progress match complete
            SNode m = h.next; // m is h's match
            if (m == null) // waiter is gone
                casHead(h, null); // pop fulfilling node
            else {
                SNode mn = m.next;
                if (m.tryMatch(h)) // help match
                    casHead(h, mn); // pop both h and m
                else // lost match
                    h.casNext(m, mn); // help unlink
            }
        }
    }
}
(2) The awaitFulfill method
SNode awaitFulfill(SNode s, boolean timed, long nanos) {
    /*
     * When a node/thread is about to block, it sets its waiter
     * field and then rechecks state at least one more time
     * before actually parking, thus covering race vs
     * fulfiller noticing that waiter is non-null so should be
     * woken.
     *
     * When invoked by nodes that appear at the point of call
     * to be at the head of the stack, calls to park are
     * preceded by spins to avoid blocking when producers and
     * consumers are arriving very close in time. This can
     * happen enough to bother only on multiprocessors.
     *
     * The order of checks for returning out of main loop
     * reflects fact that interrupts have precedence over
     * normal returns, which have precedence over
     * timeouts. (So, on timeout, one last check for match is
     * done before giving up.) Except that calls from untimed
     * SynchronousQueue.{poll/offer} don't check interrupts
     * and don't wait at all, so are trapped in transfer
     * method rather than calling awaitFulfill.
     */
    final long deadline = timed ? System.nanoTime() + nanos : 0L;
    Thread w = Thread.currentThread();
    // work out how many times to spin before parking
    int spins = (shouldSpin(s) ?
                 (timed ? maxTimedSpins : maxUntimedSpins) : 0);
    for (;;) {
        if (w.isInterrupted())
            // the thread was interrupted: cancel node s
            s.tryCancel();
        // the node matched to this one, if any
        SNode m = s.match;
        if (m != null)
            return m;
        if (timed) {
            nanos = deadline - System.nanoTime();
            if (nanos <= 0L) {
                // timed out: cancel the node
                s.tryCancel();
                continue;
            }
        }
        if (spins > 0)
            // spin down
            spins = shouldSpin(s) ? (spins - 1) : 0;
        else if (s.waiter == null)
            // record the current thread so it can be unparked later
            s.waiter = w; // establish waiter so can park next iter
        else if (!timed)
            // park the thread
            LockSupport.park(this);
        else if (nanos > spinForTimeoutThreshold)
            LockSupport.parkNanos(this, nanos);
    }
}
(3) The shouldSpin method
boolean shouldSpin(SNode s) {
    SNode h = head;
    // spin if s is at the head, the stack is empty, or the head is mid-match
    return (h == s || h == null || isFulfilling(h.mode));
}
(4) The tryMatch method
boolean tryMatch(SNode s) {
    // CAS the matching node into this node's match field
    if (match == null &&
        UNSAFE.compareAndSwapObject(this, matchOffset, null, s)) {
        Thread w = waiter;
        // wake the waiting thread
        if (w != null) { // waiters need at most one unpark
            waiter = null;
            LockSupport.unpark(w);
        }
        return true;
    }
    return match == s;
}
2.4.2 The fair TransferQueue
1. If the queue is empty, or the requesting node has the same mode as the nodes already queued, append the requesting node at the tail and wait there until it is matched or cancelled.
2. If the queue contains waiting nodes and the requesting node is complementary to them, match and complete the exchange. When a node has to wait, it enqueues and then spins in awaitFulfill(), which spin-waits or blocks until the node is matched, cancelled, or interrupted.
(1) The transfer method
E transfer(E e, boolean timed, long nanos) {
    /* Basic algorithm is to loop trying to take either of
     * two actions:
     *
     * 1. If queue apparently empty or holding same-mode nodes,
     *    try to add node to queue of waiters, wait to be
     *    fulfilled (or cancelled) and return matching item.
     *
     * 2. If queue apparently contains waiting items, and this
     *    call is of complementary mode, try to fulfill by CAS'ing
     *    item field of waiting node and dequeuing it, and then
     *    returning matching item.
     *
     * In each case, along the way, check for and try to help
     * advance head and tail on behalf of other stalled/slow
     * threads.
     *
     * The loop starts off with a null check guarding against
     * seeing uninitialized head or tail values. This never
     * happens in current SynchronousQueue, but could if
     * callers held non-volatile/final ref to the
     * transferer. The check is here anyway because it places
     * null checks at top of loop, which is usually faster
     * than having them implicitly interspersed.
     */
    QNode s = null; // constructed/reused as needed
    // a non-null e marks a data (producer) node
    boolean isData = (e != null);
    for (;;) {
        QNode t = tail;
        QNode h = head;
        // the queue is not initialized yet; try again
        if (t == null || h == null) // saw uninitialized value
            continue; // spin
        // queue empty, or the tail has the same mode as the new node
        if (h == t || t.isData == isData) { // empty or same-mode
            QNode tn = t.next;
            // the queue changed underneath us
            if (t != tail) // inconsistent read
                continue;
            // help advance a lagging tail
            if (tn != null) { // lagging tail
                advanceTail(t, tn);
                continue;
            }
            // not allowed to wait: return
            if (timed && nanos <= 0) // can't wait
                return null;
            if (s == null)
                s = new QNode(e, isData);
            // failed to link the node after the tail; retry
            if (!t.casNext(null, s)) // failed to link in
                continue;
            advanceTail(t, s); // swing tail and wait
            Object x = awaitFulfill(s, e, timed, nanos);
            if (x == s) { // wait was cancelled
                clean(t, s);
                return null;
            }
            if (!s.isOffList()) { // not already unlinked
                advanceHead(t, s); // unlink if head
                if (x != null) // and forget fields
                    s.item = s;
                s.waiter = null;
            }
            return (x != null) ? (E) x : e;
        } else { // complementary-mode
            QNode m = h.next; // node to fulfill
            if (t != tail || m == null || h != head)
                continue; // inconsistent read
            Object x = m.item;
            if (isData == (x != null) || // m already fulfilled
                x == m || // m cancelled
                !m.casItem(x, e)) { // lost CAS
                advanceHead(h, m); // dequeue and retry
                continue;
            }
            advanceHead(h, m); // successfully fulfilled
            LockSupport.unpark(m.waiter);
            return (x != null) ? (E) x : e;
        }
    }
}
(2) The awaitFulfill method
Object awaitFulfill(QNode s, E e, boolean timed, long nanos) {
    /* Same idea as TransferStack.awaitFulfill */
    final long deadline = timed ? System.nanoTime() + nanos : 0L;
    Thread w = Thread.currentThread();
    // number of spins before parking
    int spins = ((head.next == s) ?
                 (timed ? maxTimedSpins : maxUntimedSpins) : 0);
    for (;;) {
        if (w.isInterrupted())
            // the thread was interrupted: cancel the node
            s.tryCancel(e);
        Object x = s.item;
        if (x != e)
            // the node was matched or cancelled: return
            return x;
        if (timed) {
            // cancel the node once the deadline passes
            nanos = deadline - System.nanoTime();
            if (nanos <= 0L) {
                s.tryCancel(e);
                continue;
            }
        }
        if (spins > 0)
            --spins;
        else if (s.waiter == null)
            s.waiter = w;
        else if (!timed)
            LockSupport.park(this);
        else if (nanos > spinForTimeoutThreshold)
            LockSupport.parkNanos(this, nanos);
    }
}
2.5 LinkedTransferQueue
An unbounded blocking queue backed by a linked list. It closely resembles SynchronousQueue; in fact, LinkedTransferQueue combines the ideas of SynchronousQueue and LinkedBlockingQueue.
The difference: SynchronousQueue is a blocking queue that stores no elements. That may sound odd at this point, since the source analysis clearly showed an internal queue.
"Stores no elements" is relative to the earlier blocking queues. Putting an element into LinkedBlockingQueue, for example, returns immediately regardless of whether anyone consumes it, whereas putting into SynchronousQueue suspends the calling thread until a consumer arrives. Moreover, the other blocking queues provide methods to return the first element without removing it, or to test whether an element is present; SynchronousQueue offers none of these.
LinkedTransferQueue behaves essentially like a fair SynchronousQueue, but goes further: its internal linked list can actually store elements, insertion can be blocking or non-blocking, and it also has methods that return the first element without removing it.
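Both behaviors coexist in one object, as this minimal sketch shows (the class and method names are arbitrary): offer stores the element asynchronously like LinkedBlockingQueue, while transfer blocks until a consumer receives the element, like SynchronousQueue.put.

```java
import java.util.concurrent.LinkedTransferQueue;

public class LinkedTransferQueueDemo {
    static String demo() throws InterruptedException {
        LinkedTransferQueue<String> q = new LinkedTransferQueue<>();
        q.offer("a");             // ASYNC: stored in the linked list, returns immediately
        String peeked = q.peek(); // unlike SynchronousQueue, inspection is supported
        String taken = q.take();
        final String[] got = new String[1];
        Thread consumer = new Thread(() -> {
            try {
                got[0] = q.take();
            } catch (InterruptedException ex) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();
        q.transfer("b"); // SynchronousQueue-style: blocks until a consumer receives it
        consumer.join();
        return peeked + ":" + taken + ":" + got[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(demo()); // a:a:b
    }
}
```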
(1) The offer method
public boolean offer(E e) {
    xfer(e, true, ASYNC, 0);
    return true;
}
(2) The xfer method
private E xfer(E e, boolean haveData, int how, long nanos) {
    if (haveData && (e == null))
        throw new NullPointerException();
    Node s = null; // the node to append, if needed
    retry:
    for (;;) { // restart on append race
        // look for a node to match
        for (Node h = head, p = h; p != null;) { // find & match first node
            boolean isData = p.isData;
            Object item = p.item;
            // the head node is still unmatched
            if (item != p && (item != null) == isData) { // unmatched
                // same mode: cannot match, break out to append at the tail
                if (isData == haveData) // can't match
                    break;
                // match succeeded: CAS our value into the waiting node
                if (p.casItem(item, e)) { // match
                    for (Node q = p; q != h;) {
                        Node n = q.next; // update by 2 unless singleton
                        // after a successful match, reset the head
                        if (head == h && casHead(h, n == null ? q : n)) {
                            h.forgetNext();
                            break;
                        } // advance and retry
                        // the head changed; retry
                        if ((h = head) == null ||
                            (q = h.next) == null || !q.isMatched())
                            break; // unless slack < 2
                    }
                    LockSupport.unpark(p.waiter);
                    return LinkedTransferQueue.<E>cast(item);
                }
            }
            // this node is already matched; move to the next one
            Node n = p.next;
            p = (p != n) ? n : (h = head); // use head if p off-list
        }
        // do we need to wait?
        if (how != NOW) { // no matches available
            if (s == null)
                s = new Node(e, haveData);
            // append at the tail; returns the predecessor node (s itself if it
            // is the first node), or null if the append lost a race against a
            // node of the opposite mode, in which case we retry
            Node pred = tryAppend(s, haveData);
            if (pred == null)
                continue retry; // lost race vs opposite mode
            // for synchronous modes, park the thread and wait for a match
            if (how != ASYNC)
                return awaitMatch(s, pred, e, (how == TIMED), nanos);
        }
        return e; // not waiting
    }
}
(3) The tryAppend method
private Node tryAppend(Node s, boolean haveData) {
    for (Node t = tail, p = t;;) { // move p to last node and append
        Node n, u; // temps for reads of next & tail
        if (p == null && (p = head) == null) {
            if (casHead(null, s))
                return s; // initialize
        }
        else if (p.cannotPrecede(haveData))
            return null; // lost race vs opposite mode
        else if ((n = p.next) != null) // not last; keep traversing
            p = p != t && t != (u = tail) ? (t = u) : // stale tail
                (p != n) ? n : null; // restart if off list
        else if (!p.casNext(null, s))
            p = p.next; // re-read on CAS failure
        else {
            if (p != t) { // update if slack now >= 2
                while ((tail != t || !casTail(t, s)) &&
                       (t = tail) != null &&
                       (s = t.next) != null && // advance and retry
                       (s = s.next) != null && s != t);
            }
            return p;
        }
    }
}
(4) The awaitMatch method
private E awaitMatch(Node s, Node pred, E e, boolean timed, long nanos) {
    final long deadline = timed ? System.nanoTime() + nanos : 0L;
    Thread w = Thread.currentThread();
    int spins = -1; // initialized after first item and cancel checks
    ThreadLocalRandom randomYields = null; // bound if needed
    for (;;) {
        Object item = s.item;
        if (item != e) { // matched
            // assert item != s;
            s.forgetContents(); // avoid garbage
            return LinkedTransferQueue.<E>cast(item);
        }
        if ((w.isInterrupted() || (timed && nanos <= 0)) &&
            s.casItem(e, s)) { // cancel
            unsplice(pred, s);
            return e;
        }
        if (spins < 0) { // establish spins at/near front
            if ((spins = spinsFor(pred, s.isData)) > 0)
                randomYields = ThreadLocalRandom.current();
        }
        else if (spins > 0) { // spin
            --spins;
            if (randomYields.nextInt(CHAINED_SPINS) == 0)
                Thread.yield(); // occasionally yield
        }
        else if (s.waiter == null) {
            s.waiter = w; // request unpark then recheck
        }
        else if (timed) {
            nanos = deadline - System.nanoTime();
            if (nanos > 0L)
                LockSupport.parkNanos(this, nanos);
        }
        else {
            LockSupport.park(this);
        }
    }
}
2.6 LinkedBlockingDeque
LinkedBlockingDeque is a doubly-linked blocking deque: elements can be inserted and removed at both the head and the tail, which makes work stealing possible. Methods ending in First operate on the head and those ending in Last on the tail; in addition, add = addLast, remove = removeFirst, and take = takeFirst.
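The work-stealing idea above can be sketched briefly (class and method names here are illustrative, not from the original text): the owning worker consumes tasks from the head while an idle worker steals from the tail, so the two rarely contend for the same element.

```java
import java.util.concurrent.LinkedBlockingDeque;

public class LinkedBlockingDequeDemo {
    static String workStealing() {
        LinkedBlockingDeque<String> deque = new LinkedBlockingDeque<>();
        deque.addLast("t1");
        deque.addLast("t2");
        deque.addLast("t3");
        // The owning worker consumes from the head...
        String own = deque.pollFirst();   // "t1"
        // ...while an idle worker steals from the tail.
        String stolen = deque.pollLast(); // "t3"
        return own + ":" + stolen;
    }

    public static void main(String[] args) {
        System.out.println(workStealing()); // t1:t3
    }
}
```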
(1) The offerFirst method
public boolean offerFirst(E e) {
    if (e == null) throw new NullPointerException();
    Node<E> node = new Node<E>(e);
    final ReentrantLock lock = this.lock;
    lock.lock();
    try {
        return linkFirst(node);
    } finally {
        lock.unlock();
    }
}
(2) The putFirst method
public void putFirst(E e) throws InterruptedException {
    if (e == null) throw new NullPointerException();
    Node<E> node = new Node<E>(e);
    final ReentrantLock lock = this.lock;
    lock.lock();
    try {
        while (!linkFirst(node))
            notFull.await();
    } finally {
        lock.unlock();
    }
}