Common Queue Classes: PriorityQueue
Common Queue Classes: ConcurrentLinkedQueue
Common Queue Classes: BlockingQueue (1): PriorityBlockingQueue, DelayQueue, and DelayedWorkQueue
Common Queue Classes: BlockingQueue (2): ArrayBlockingQueue
Common Queue Classes: BlockingQueue (3): LinkedBlockingQueue
Continuing the introduction to BlockingQueue from the previous article.
VII. SynchronousQueue
SynchronousQueue is known as the synchronous queue. It has no capacity for storing elements and can be thought of as an empty queue: size() always returns 0, and peek() and iterator operations are meaningless. An element cannot be inserted unless another thread is waiting to remove one.
SynchronousQueue supports two policies, a fair mode and an unfair mode, backed by the TransferQueue (queue) and TransferStack (stack) structures respectively. The fair mode is first-in-first-out and preserves thread order: the thread that requested first is served first. The unfair mode is last-in-first-out: the most recent requester is served first.
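The difference between the two modes can be observed directly. Below is a small runnable sketch (the class name FairnessDemo and its method are invented here for illustration): two producers block on put in a known order, then a consumer drains the queue. In fair mode the earlier producer is served first; in unfair mode the later one is.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.SynchronousQueue;

public class FairnessDemo {
    // Blocks two producers on the queue in a known order, then drains it
    // and returns the order in which their elements were handed off.
    static List<String> handoffOrder(boolean fair) {
        SynchronousQueue<String> q = new SynchronousQueue<>(fair);
        List<String> order = new ArrayList<>();
        try {
            for (String s : new String[] {"first", "second"}) {
                Thread t = new Thread(() -> {
                    try { q.put(s); } catch (InterruptedException ignored) { }
                });
                t.start();
                Thread.sleep(200); // let this producer block before starting the next
            }
            order.add(q.take());
            order.add(q.take());
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return order;
    }

    public static void main(String[] args) {
        // Fair mode (TransferQueue): FIFO, the earlier producer is served first.
        System.out.println("fair:   " + handoffOrder(true));
        // Unfair mode (TransferStack): LIFO, the later producer is served first.
        System.out.println("unfair: " + handoffOrder(false));
    }
}
```

The sleeps only make the producers block in a known order for the demo; they are not a synchronization guarantee.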
1. Fields
(1) Spin constants
static final int NCPUS = Runtime.getRuntime().availableProcessors();
static final int maxTimedSpins = (NCPUS < 2) ? 0 : 32;
static final int maxUntimedSpins = maxTimedSpins * 16;
static final long spinForTimeoutThreshold = 1000L;
In SynchronousQueue, a thread spins before blocking, to reduce the cost of parking and unparking.
(2) transferer
private transient volatile Transferer<E> transferer;
The core field: every SynchronousQueue operation is carried out through transferer.transfer(...).
(3) Serialization fields
private ReentrantLock qlock;
private WaitQueue waitingProducers;
private WaitQueue waitingConsumers;
These fields exist solely for compatibility with the serialized form of earlier JDK versions; the current implementation does not otherwise use them.
2. Transferer
Transferer is an inner abstract class of SynchronousQueue with a single abstract method.
/**
* Performs a put or take.
*
* @param e if non-null, the item to be handed to a consumer;
* if null, requests that transfer return an item
* offered by producer.
* @param timed if this operation should timeout
* @param nanos the timeout, in nanoseconds
* @return if non-null, the item provided or received; if null,
* the operation failed due to timeout or interrupt --
* the caller can distinguish which of these occurred
* by checking Thread.interrupted.
*/
abstract E transfer(E e, boolean timed, long nanos);
Both the enqueue and dequeue operations of SynchronousQueue go through this method.
For a dequeue, the parameter e is null; for an enqueue, e is the element being inserted.
timed indicates whether the operation has a timeout, and nanos is the timeout in nanoseconds.
For take and put, timed is false and nanos is 0.
For the timed poll and offer overloads (which block for a bounded time), timed is true and nanos is the timeout.
For the non-blocking poll() and offer(e), timed is true and nanos is 0.
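These parameter combinations are easy to see from the public API. A small runnable sketch (SyncQueueDemo is an invented name): offer(e) with no waiting consumer corresponds to timed = true, nanos = 0 and fails immediately, while a timed poll corresponds to timed = true with a positive nanos and waits for a producer.

```java
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.TimeUnit;

public class SyncQueueDemo {
    // Non-blocking offer: transfer(e, true, 0). No consumer waits, so it fails.
    public static boolean offerNoConsumer() {
        SynchronousQueue<Integer> q = new SynchronousQueue<>();
        return q.offer(1); // returns false immediately
    }

    // Timed poll: transfer(null, true, nanos). A producer arrives, so it succeeds.
    public static Integer timedHandoff() {
        SynchronousQueue<Integer> q = new SynchronousQueue<>();
        Thread producer = new Thread(() -> {
            try { q.put(42); } catch (InterruptedException ignored) { }
        });
        producer.start();
        try {
            Integer v = q.poll(1, TimeUnit.SECONDS); // waits up to 1s for the producer
            producer.join();
            return v;
        } catch (InterruptedException e) {
            return null;
        }
    }

    public static void main(String[] args) {
        System.out.println("size = " + new SynchronousQueue<Integer>().size()); // always 0
        System.out.println("offer without consumer: " + offerNoConsumer());     // false
        System.out.println("timed handoff: " + timedHandoff());                 // 42
    }
}
```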
3. TransferStack
TransferStack implements SynchronousQueue's unfair mode; the underlying data structure is a stack.
3.1 Fields
volatile SNode head; // the head (top) of the stack
// the three node modes
/** Node represents an unfulfilled consumer */
static final int REQUEST = 0; // a dequeue (consumer)
/** Node represents an unfulfilled producer */
static final int DATA = 1; // an enqueue (producer)
/** Node is fulfilling another unfulfilled DATA or REQUEST */
static final int FULFILLING = 2; // in the middle of matching
3.2 SNode
volatile SNode next; // next node in stack
volatile SNode match; // the node matched to this
volatile Thread waiter; // to control park/unpark; the thread owning this node
Object item; // data; null for a REQUEST, the enqueued object for a DATA node
int mode; // node mode
3.3 TransferStack#transfer(Object, boolean, long)
E transfer(E e, boolean timed, long nanos) {
/*
* Basic algorithm is to loop trying one of three actions:
*
* 1. If apparently empty or already containing nodes of same
* mode, try to push node on stack and wait for a match,
* returning it, or null if cancelled.
*
* 2. If apparently containing node of complementary mode,
* try to push a fulfilling node on to stack, match
* with corresponding waiting node, pop both from
* stack, and return matched item. The matching or
* unlinking might not actually be necessary because of
* other threads performing action 3:
*
* 3. If top of stack already holds another fulfilling node,
* help it out by doing its match and/or pop
* operations, and then continue. The code for helping
* is essentially the same as for fulfilling, except
* that it doesn't return the item.
*/
SNode s = null; // constructed/reused as needed
int mode = (e == null) ? REQUEST : DATA;
for (;;) {
SNode h = head;
if (h == null || h.mode == mode) { // empty or same-mode
if (timed && nanos <= 0) { // can't wait
if (h != null && h.isCancelled())
casHead(h, h.next); // pop cancelled node
else
return null;
} else if (casHead(h, s = snode(s, e, h, mode))) {
SNode m = awaitFulfill(s, timed, nanos);
if (m == s) { // wait was cancelled
clean(s);
return null;
}
// (1) corresponds to step (4) in the fulfilling thread
if ((h = head) != null && h.next == s)
casHead(h, s.next); // help s's fulfiller
return (E) ((mode == REQUEST) ? m.item : s.item);
}
} else if (!isFulfilling(h.mode)) { // try to fulfill
if (h.isCancelled()) // already cancelled
casHead(h, h.next); // pop and retry
else if (casHead(h, s=snode(s, e, h, FULFILLING|mode))) {
for (;;) { // loop until matched or waiters disappear
SNode m = s.next; // m is s's match
//(2)
if (m == null) { // all waiters are gone
casHead(s, null); // pop fulfill node
s = null; // use new node next time
break; // restart main loop
}
SNode mn = m.next;
//(3)
if (m.tryMatch(s)) {
//(4)
casHead(s, mn); // pop both s and m
return (E) ((mode == REQUEST) ? m.item : s.item);
} else // lost match
//(5)
s.casNext(m, mn); // help unlink
}
}
} else { // help a fulfiller
SNode m = h.next; // m is h's match
if (m == null) // waiter is gone; corresponds to (2) in the fulfilling thread
casHead(h, null); // pop fulfilling node
else {
SNode mn = m.next;
if (m.tryMatch(h)) // help match; corresponds to (3) in the fulfilling thread
casHead(h, mn); // pop both h and m
else // lost match
h.casNext(m, mn); // help unlink; corresponds to (5) in the fulfilling thread
}
}
}
}
The loop handles three cases based on the state of the top of the stack:
(1) The stack is empty, or the top node has the same mode
Push a node and wait for a later node to match it; if the wait is cancelled (the thread is interrupted or times out), pop the node and return null.
(2) The top node has the complementary mode
Push a FULFILLING node, perform the match, pop both nodes off the stack, and wake up the matched node's thread.
(3) The top node is already in fulfilling mode
Help the top node complete its fulfillment, then retry.
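Stripped of CAS, helping, and blocking, the core matching rule of cases (1) and (2) can be sketched single-threaded (all names below are invented for illustration; this is not the JDK code):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Single-threaded sketch of the TransferStack matching rule: same-mode
// callers stack up and "wait"; a complementary caller pops and matches
// the most recent waiter (LIFO).
public class StackMatchModel {
    static class Node {
        final boolean isData; final Object item;
        Node(boolean isData, Object item) { this.isData = isData; this.item = item; }
    }
    private final Deque<Node> stack = new ArrayDeque<>();

    // Returns the handed-off item on a match, or null when the caller must wait.
    public Object transfer(Object e) {
        boolean isData = (e != null);                 // DATA if e != null, REQUEST otherwise
        if (stack.isEmpty() || stack.peek().isData == isData) {
            stack.push(new Node(isData, e));          // case (1): empty or same mode -> wait
            return null;
        }
        Node m = stack.pop();                         // case (2): complementary mode -> fulfill
        return isData ? e : m.item;                   // a REQUEST receives the DATA node's item
    }
}
```

Running two producers and then two consumers against this model shows the LIFO property: the later producer's item is handed off first.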
3.4 TransferStack#awaitFulfill(SNode, boolean, long)
SNode awaitFulfill(SNode s, boolean timed, long nanos) {
/*
* When a node/thread is about to block, it sets its waiter
* field and then rechecks state at least one more time
* before actually parking, thus covering race vs
* fulfiller noticing that waiter is non-null so should be
* woken.
*
* When invoked by nodes that appear at the point of call
* to be at the head of the stack, calls to park are
* preceded by spins to avoid blocking when producers and
* consumers are arriving very close in time. This can
* happen enough to bother only on multiprocessors.
*
* The order of checks for returning out of main loop
* reflects fact that interrupts have precedence over
* normal returns, which have precedence over
* timeouts. (So, on timeout, one last check for match is
* done before giving up.) Except that calls from untimed
* SynchronousQueue.{poll/offer} don't check interrupts
* and don't wait at all, so are trapped in transfer
* method rather than calling awaitFulfill.
*/
final long deadline = timed ? System.nanoTime() + nanos : 0L;
Thread w = Thread.currentThread();
int spins = (shouldSpin(s) ?
(timed ? maxTimedSpins : maxUntimedSpins) : 0);
for (;;) {
if (w.isInterrupted())
s.tryCancel();
SNode m = s.match;
if (m != null)
return m;
if (timed) {
nanos = deadline - System.nanoTime();
if (nanos <= 0L) {
s.tryCancel();
continue;
}
}
if (spins > 0)
spins = shouldSpin(s) ? (spins-1) : 0;
else if (s.waiter == null)
s.waiter = w; // establish waiter so can park next iter
else if (!timed)
LockSupport.park(this);
else if (nanos > spinForTimeoutThreshold)
LockSupport.parkNanos(this, nanos);
}
}
Spin first, then park and wait; the method returns the match node. On timeout or interrupt, the node's match is set to point to itself. Before parking, the current thread must first be recorded in the waiter field.
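The spin-then-park pattern, including publishing the waiter before parking, can be sketched outside the JDK as follows (SpinThenPark and its methods are invented names; a minimal model, not the JDK implementation):

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.concurrent.locks.LockSupport;

public class SpinThenPark {
    static final int MAX_SPINS = 32;

    // Waits for a value to appear in slot: spin a bounded number of times,
    // publish the waiter thread, recheck once more, then park.
    static <T> T await(AtomicReference<T> slot, AtomicReference<Thread> waiter) {
        int spins = MAX_SPINS;
        for (;;) {
            T v = slot.get();
            if (v != null) return v;                    // matched
            if (spins > 0)
                spins--;                                // spin before blocking
            else if (waiter.get() == null)
                waiter.set(Thread.currentThread());     // publish waiter, then recheck
            else
                LockSupport.park();                     // block until the producer unparks us
        }
    }

    // Drives one handoff between two threads and returns the received value.
    public static String demo() {
        AtomicReference<String> slot = new AtomicReference<>();
        AtomicReference<Thread> waiter = new AtomicReference<>();
        String[] result = new String[1];
        Thread consumer = new Thread(() -> result[0] = await(slot, waiter));
        consumer.start();
        try { Thread.sleep(100); } catch (InterruptedException ignored) { }
        slot.set("item");                               // make the value visible first...
        Thread w = waiter.get();
        if (w != null) LockSupport.unpark(w);           // ...then wake the waiter
        try { consumer.join(2000); } catch (InterruptedException ignored) { }
        return result[0];
    }
}
```

Publishing the waiter before parking, then checking the slot one more time, closes the race against a producer that reads waiter just before the consumer parks, which is exactly the race the JDK comment above describes.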
boolean shouldSpin(SNode s) {
SNode h = head;
return (h == s || h == null || isFulfilling(h.mode));
}
Spinning is worthwhile when s is the head node, when the head is in fulfilling mode, or when h == null (the stack just emptied, which also suggests a fulfilling node has been at work).
In all three cases s is likely to be matched soon.
3.5 SNode#tryMatch(SNode)
boolean tryMatch(SNode s) {
if (match == null &&
UNSAFE.compareAndSwapObject(this, matchOffset, null, s)) {
Thread w = waiter;
if (w != null) { // waiters need at most one unpark
waiter = null;
LockSupport.unpark(w);
}
return true;
}
return match == s;
}
Performs the match and wakes up the waiting thread.
The method ends with return match == s;
rather than return false because, besides the fulfilling thread, helping threads may also call it: another thread may already have matched the two nodes, in which case this thread's CAS fails but the match has nevertheless succeeded.
3.6 TransferStack#clean(SNode)
void clean(SNode s) {
s.item = null; // forget item
s.waiter = null; // forget thread
/*
* At worst we may need to traverse entire stack to unlink
* s. If there are multiple concurrent calls to clean, we
* might not see s if another thread has already removed
* it. But we can stop when we see any node known to
* follow s. We use s.next unless it too is cancelled, in
* which case we try the node one past. We don't check any
* further because we don't want to doubly traverse just to
* find sentinel.
*/
SNode past = s.next;
if (past != null && past.isCancelled())
past = past.next;
// Absorb cancelled nodes at head
SNode p;
//advance head past cancelled nodes: the first non-cancelled node (up to past) becomes the new head
while ((p = head) != null && p != past && p.isCancelled())
casHead(p, p.next);
// Unsplice embedded nodes
//for each cancelled node between head and past, link its predecessor to its successor
while (p != null && p != past) {
SNode n = p.next;
if (n != null && n.isCancelled())
p.casNext(n, n.next);
else
p = n;
}
}
4. TransferQueue
As the name suggests, TransferQueue uses a queue structure and serves waiting threads in first-in-first-out order.
4.1 Fields
/** Head of queue */
transient volatile QNode head;
/** Tail of queue */
transient volatile QNode tail;
/**
* Reference to a cancelled node that might not yet have been
* unlinked from queue because it was the last inserted node
* when it was cancelled.
*/
transient volatile QNode cleanMe;
There are three main fields: the head node head, the tail node tail, and cleanMe, which references a cancelled node that has not yet been unlinked because it was the last node in the queue when it was cancelled.
4.2 QNode
volatile QNode next; // next node in queue
volatile Object item; // CAS'ed to or from null
volatile Thread waiter; // to control park/unpark
final boolean isData;
QNode is broadly similar to SNode; the main differences are described below.
(1) Node mode
SNode has three modes: REQUEST (a dequeuing thread), DATA (an enqueuing thread), and FULFILLING (matching in progress). Marking a node as FULFILLING lets other threads help with the fulfillment and improves throughput, so the mode is represented by an int field mode.
QNode does not need other threads to help fulfill; there are only two modes, enqueue and dequeue, so a boolean field isData suffices.
(2) Matching and cancellation
SNode records its matching node in the match field; on cancellation, match points to the node itself.
QNode does not record the matching node, only the matching node's item value.
While a QNode is waiting to be matched, item holds its own payload (the enqueued object for a producer, null for a consumer); on cancellation, item points to the node itself; on a successful match, item is set to the matching node's item value.
In addition, when a QNode is removed from the queue, its next field is pointed at itself to aid GC.
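These two self-reference encodings can be modeled in a few lines (QNodeModel is an illustrative stand-in, not the JDK class):

```java
// A tiny model of QNode's self-reference encodings: cancellation sets
// item = this, and unlinking sets next = this (which also helps GC).
public class QNodeModel {
    volatile Object item;
    volatile QNodeModel next;

    QNodeModel(Object item) { this.item = item; }

    void cancel()         { item = this; }   // cancelled: item points to the node itself
    boolean isCancelled() { return item == this; }

    void unlink()         { next = this; }   // off the list: next points to itself
    boolean isOffList()   { return next == this; }
}
```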
4.3 TransferQueue#transfer(Object, boolean, long)
E transfer(E e, boolean timed, long nanos) {
/* Basic algorithm is to loop trying to take either of
* two actions:
*
* 1. If queue apparently empty or holding same-mode nodes,
* try to add node to queue of waiters, wait to be
* fulfilled (or cancelled) and return matching item.
*
* 2. If queue apparently contains waiting items, and this
* call is of complementary mode, try to fulfill by CAS'ing
* item field of waiting node and dequeuing it, and then
* returning matching item.
*
* In each case, along the way, check for and try to help
* advance head and tail on behalf of other stalled/slow
* threads.
*
* The loop starts off with a null check guarding against
* seeing uninitialized head or tail values. This never
* happens in current SynchronousQueue, but could if
* callers held non-volatile/final ref to the
* transferer. The check is here anyway because it places
* null checks at top of loop, which is usually faster
* than having them implicitly interspersed.
*/
QNode s = null; // constructed/reused as needed
boolean isData = (e != null);
for (;;) {
QNode t = tail;
QNode h = head;
//not yet initialized; retry
if (t == null || h == null) // saw uninitialized value
continue; // spin
//empty queue, or the tail node has the same mode
if (h == t || t.isData == isData) { // empty or same-mode
QNode tn = t.next;
//inconsistent read; retry
if (t != tail) // inconsistent read
continue;
//the tail pointer lags behind; advance tail and retry
if (tn != null) { // lagging tail
advanceTail(t, tn);
continue;
}
//non-blocking call; return immediately
if (timed && nanos <= 0) // can't wait
return null;
//lazily create the node
if (s == null)
s = new QNode(e, isData);
//link into the queue; retry on failure
if (!t.casNext(null, s)) // failed to link in
continue;
//advance the tail pointer
advanceTail(t, s); // swing tail and wait
//spin or block until cancelled or matched
Object x = awaitFulfill(s, e, timed, nanos);
//the wait was cancelled; clean up and return
if (x == s) { // wait was cancelled
clean(t, s);
return null;
}
//the node has not yet been unlinked from the queue
if (!s.isOffList()) { // not already unlinked
//if s is now the head, advance past it. This mirrors the advanceHead(h, m) call in the matching branch below: the matching thread and the matched thread each attempt it, and exactly one succeeds.
advanceHead(t, s); // unlink if head
if (x != null) // and forget fields
s.item = s;
s.waiter = null;
}
return (x != null) ? (E)x : e;
} else { // complementary-mode
//the tail has the complementary mode; run the matching logic
//fetch the first real node m
QNode m = h.next; // node to fulfill
//inconsistent read; retry
if (t != tail || m == null || h != head)
continue; // inconsistent read
Object x = m.item;
if (isData == (x != null) || // m already fulfilled
x == m || // m cancelled
!m.casItem(x, e)) { // lost CAS; the CAS on item is the actual match
advanceHead(h, m); // dequeue m and retry
continue;
}
//the match succeeded
advanceHead(h, m); // successfully fulfilled; dequeue m
LockSupport.unpark(m.waiter); // wake up the matched thread
return (x != null) ? (E)x : e;
}
}
}
Apart from having no help-fulfill scenario, the remaining two cases are handled much like in TransferStack:
(1) The queue is empty, or the tail node has the same mode: enqueue the node, then spin or block according to the timeout parameters until the wait is cancelled or the node is matched.
(2) The tail node has the complementary mode: perform the match.
Note that the head node is not an actual data or request node but a dummy node; head.next is the first real node in the queue, i.e. the one to be matched.
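As with TransferStack, the core rule can be sketched single-threaded; the only change from the stack sketch is that waiters are appended at the tail while matches come from the head (names invented for illustration, not the JDK code):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Single-threaded counterpart of the stack sketch, mirroring TransferQueue's
// rule: same-mode callers append to the tail; a complementary caller matches
// the node at the head (FIFO).
public class QueueMatchModel {
    static class Node {
        final boolean isData; final Object item;
        Node(boolean isData, Object item) { this.isData = isData; this.item = item; }
    }
    private final Deque<Node> queue = new ArrayDeque<>();

    // Returns the handed-off item on a match, or null when the caller must wait.
    public Object transfer(Object e) {
        boolean isData = (e != null);
        if (queue.isEmpty() || queue.peekLast().isData == isData) {
            queue.addLast(new Node(isData, e));   // empty or same mode -> enqueue and wait
            return null;
        }
        Node m = queue.pollFirst();               // complementary mode -> match oldest waiter
        return isData ? e : m.item;
    }
}
```

Two producers followed by two consumers show the FIFO property: the earlier producer's item is handed off first, the opposite of the stack sketch.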
4.4 TransferQueue#awaitFulfill(QNode, Object, boolean, long)
Object awaitFulfill(QNode s, E e, boolean timed, long nanos) {
/* Same idea as TransferStack.awaitFulfill */
final long deadline = timed ? System.nanoTime() + nanos : 0L;
Thread w = Thread.currentThread();
int spins = ((head.next == s) ?
(timed ? maxTimedSpins : maxUntimedSpins) : 0);
for (;;) {
if (w.isInterrupted())
s.tryCancel(e);
Object x = s.item;
if (x != e)
return x;
if (timed) {
nanos = deadline - System.nanoTime();
if (nanos <= 0L) {
s.tryCancel(e);
continue;
}
}
if (spins > 0)
--spins;
else if (s.waiter == null)
s.waiter = w;
else if (!timed)
LockSupport.park(this);
else if (nanos > spinForTimeoutThreshold)
LockSupport.parkNanos(this, nanos);
}
}
Spin or block while waiting; on timeout or interrupt the node is cancelled.
4.5 TransferQueue#clean(QNode, QNode)
void clean(QNode pred, QNode s) {
s.waiter = null; // forget thread
/*
* At any given time, exactly one node on list cannot be
* deleted -- the last inserted node. To accommodate this,
* if we cannot delete s, we save its predecessor as
* "cleanMe", deleting the previously saved version
* first. At least one of node s or the node previously
* saved can always be deleted, so this always terminates.
*/
while (pred.next == s) { // Return early if already unlinked
QNode h = head;
QNode hn = h.next; // Absorb cancelled first node as head
//advance the head pointer
if (hn != null && hn.isCancelled()) {
advanceHead(h, hn);
continue;
}
QNode t = tail; // Ensure consistent read for tail
//empty queue
if (t == h)
return;
QNode tn = t.next;
//inconsistent read: the tail changed during the loop; retry
if (t != tail)
continue;
//advance the tail pointer
if (tn != null) {
advanceTail(t, tn);
continue;
}
if (s != t) { // If not tail, try to unsplice
//s is not the tail; unlink s from the list
QNode sn = s.next;
//s pointing to itself means it has already been removed; otherwise set pred.next = s.next
if (sn == s || pred.casNext(s, sn))
return;
}
//handle the previously saved cleanMe node
QNode dp = cleanMe;
if (dp != null) { // Try unlinking previous cancelled node
QNode d = dp.next;
QNode dn;
if (d == null || // d is gone, or
d == dp || // d is off the list (dp itself was removed), or
!d.isCancelled() || // d is not cancelled, or
(d != t && // d is not the tail (tail updates may lag behind), and
(dn = d.next) != null && // d has a successor, and
dn != d && // d is still on the list, and
dp.casNext(d, dn))) // d was successfully unspliced
casCleanMe(dp, null); // clear cleanMe
if (dp == pred)
return; // s is already saved node
} else if (casCleanMe(null, pred))
return; // Postpone cleaning s
}
}
As the code shows, apart from clearing the waiter field, the entire cleanup runs inside the while (pred.next == s) loop.
Within the loop, the head and tail pointers are first brought up to date.
If the node to be cleaned is not the tail, it is simply unlinked from the queue.
If it is the tail, its predecessor pred is saved in the cleanMe field instead; before saving, the previously saved cleanMe node is cleaned up first. (The last node in the queue cannot be unlinked safely, because a concurrent enqueue may be appending after it.)