ConcurrentHashMap: Implementation Principles and Source Code Analysis
1. Why use ConcurrentHashMap?
HashMap is not thread-safe. Hashtable is thread-safe, but it achieves this by marking its methods synchronized: inserts and reads all take the same monitor, so while one thread is writing, no other thread can even read (effectively the whole table is locked). Under heavy concurrency every thread competes for this single lock and throughput collapses. To address these pain points of Hashtable, ConcurrentHashMap was introduced in JDK 1.5.
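The contrast can be seen in a small sketch (hypothetical demo class, not from the JDK; timings vary by machine, so only the end state is checked here): four threads write disjoint keys into a shared map. With Hashtable every put contends for the single table-wide monitor; with ConcurrentHashMap writers mostly touch different bins and rarely block each other, yet both end up with the same entries.

```java
import java.util.Hashtable;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical demo: concurrent writers against Hashtable vs. ConcurrentHashMap.
public class MapContentionDemo {
    static Map<Integer, Integer> fillConcurrently(Map<Integer, Integer> map) {
        Thread[] writers = new Thread[4];
        for (int t = 0; t < writers.length; t++) {
            final int base = t * 10_000;
            writers[t] = new Thread(() -> {
                for (int i = 0; i < 10_000; i++)
                    map.put(base + i, i); // Hashtable: whole-table lock; CHM: at most one bin locked
            });
            writers[t].start();
        }
        try {
            for (Thread w : writers) w.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return map;
    }

    public static void main(String[] args) {
        System.out.println(fillConcurrently(new Hashtable<>()).size());         // 40000
        System.out.println(fillConcurrently(new ConcurrentHashMap<>()).size()); // 40000
    }
}
```

Both calls produce a map of 40,000 entries; under contention the ConcurrentHashMap run typically finishes noticeably faster.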
2. Why is ConcurrentHashMap efficient?
Implementation in JDK 1.5
The original ConcurrentHashMap used lock striping (segment locks): the table is split into segments, and each segment is guarded by its own lock (Segment). While one thread holds the lock for one segment, the data in every other segment remains accessible to other threads. By default there are 16 segments, so in theory up to 16 writers can proceed concurrently where Hashtable would allow only one (the often-quoted "16x faster than Hashtable" is this theoretical upper bound, not a measured guarantee).
The structure of the JDK 1.5 ConcurrentHashMap is shown below:
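The segment idea can be sketched with a minimal lock-striped map (a hypothetical illustration; `StripedMap` is not a JDK class, and the real Segment was itself a hash table extending ReentrantLock):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical minimal lock-striping sketch: 16 independent sub-maps,
// each guarded by its own monitor, mirroring the Segment idea.
public class StripedMap<K, V> {
    private static final int STRIPES = 16;
    @SuppressWarnings("unchecked")
    private final Map<K, V>[] segments = new Map[STRIPES];
    private final Object[] locks = new Object[STRIPES];

    public StripedMap() {
        for (int i = 0; i < STRIPES; i++) {
            segments[i] = new HashMap<>();
            locks[i] = new Object();
        }
    }

    private int stripe(Object key) {
        // force non-negative, then pick one of the 16 stripes
        return (key.hashCode() & 0x7fffffff) % STRIPES;
    }

    public V put(K key, V value) {
        int s = stripe(key);
        synchronized (locks[s]) { // only this segment is locked
            return segments[s].put(key, value);
        }
    }

    public V get(Object key) {
        int s = stripe(key);
        synchronized (locks[s]) {
            return segments[s].get(key);
        }
    }

    public static void main(String[] args) {
        StripedMap<String, Integer> m = new StripedMap<>();
        m.put("a", 1);
        System.out.println(m.get("a")); // prints 1
    }
}
```

Two threads writing keys that hash to different stripes never block each other, which is exactly the win over Hashtable's single lock.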
Implementation in JDK 1.8
JDK 1.8 drops the Segment locks entirely and instead relies on CAS plus synchronized to guarantee thread safety. The data structure is the same as JDK 1.8's HashMap: an array of bins, each holding a linked list or a red-black tree.
synchronized locks only the head node of the current bin's list or tree, so as long as hashes do not collide into the same bin there is no lock contention at all, which improves concurrency further.
The structure of the JDK 1.8 ConcurrentHashMap is shown below:
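The JDK 1.8 insertion strategy can be sketched roughly as follows (a simplified, hypothetical illustration; the real putVal also handles resizing, counters and treeification, and uses Unsafe rather than AtomicReferenceArray):

```java
import java.util.concurrent.atomic.AtomicReferenceArray;

// Hypothetical sketch of the JDK 1.8 strategy: CAS into an empty bin,
// otherwise synchronize only on that bin's head node.
public class BinLockSketch {
    static final class Node {
        final int hash; final Object key; volatile Object val; volatile Node next;
        Node(int hash, Object key, Object val, Node next) {
            this.hash = hash; this.key = key; this.val = val; this.next = next;
        }
    }

    final AtomicReferenceArray<Node> table = new AtomicReferenceArray<>(16);

    void put(Object key, Object val) {
        int h = key.hashCode() & 0x7fffffff;       // force non-negative hash
        int i = h & (table.length() - 1);          // bin index
        for (;;) {
            Node head = table.get(i);
            if (head == null) {
                // empty bin: lock-free CAS insert; retry the loop on failure
                if (table.compareAndSet(i, null, new Node(h, key, val, null)))
                    return;
            } else {
                synchronized (head) {              // lock only this bin's head
                    if (table.get(i) == head) {    // head unchanged, safe to link
                        for (Node e = head;; e = e.next) {
                            if (e.hash == h && e.key.equals(key)) { e.val = val; return; }
                            if (e.next == null) { e.next = new Node(h, key, val, null); return; }
                        }
                    } // head changed under us: retry the outer loop
                }
            }
        }
    }

    Object get(Object key) {
        int h = key.hashCode() & 0x7fffffff;
        for (Node e = table.get(h & (table.length() - 1)); e != null; e = e.next)
            if (e.hash == h && e.key.equals(key)) return e.val;
        return null;                               // reads take no lock at all
    }

    public static void main(String[] args) {
        BinLockSketch s = new BinLockSketch();
        s.put("k", "v1");
        s.put("k", "v2");
        System.out.println(s.get("k")); // prints v2
    }
}
```

Note how a read never locks anything: it relies on the volatile `val`/`next` fields, just as the real implementation does.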
3. ConcurrentHashMap Source Code Analysis (JDK 1.8)
(3.1) Class hierarchy of ConcurrentHashMap
public class ConcurrentHashMap<K,V> extends AbstractMap<K,V>
implements ConcurrentMap<K,V>, Serializable
Explanation: ConcurrentHashMap extends the abstract class AbstractMap, which provides a skeletal Map implementation. It implements the ConcurrentMap interface, which declares the atomic compound operations, and it implements Serializable, meaning a ConcurrentHashMap can be serialized.
(3.2) Fields of ConcurrentHashMap
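Because ConcurrentHashMap implements ConcurrentMap, the atomic compound operations declared by that interface are available directly (a small usage demo, not from the article's source):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Demo of the atomic compound operations declared by ConcurrentMap.
public class ConcurrentMapApiDemo {
    public static void main(String[] args) {
        ConcurrentMap<String, Integer> map = new ConcurrentHashMap<>();
        map.putIfAbsent("k", 1);          // inserts: key absent
        map.putIfAbsent("k", 2);          // no-op: key already present
        map.replace("k", 1, 10);          // CAS-style replace: succeeds only if current value is 1
        System.out.println(map.get("k")); // prints 10
    }
}
```

Each of these is a single atomic step; the equivalent check-then-act sequence on a plain HashMap would race under concurrency.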
//maximum capacity of the node array
private static final int MAXIMUM_CAPACITY = 1 << 30;
//default initial table capacity; must be a power of two
private static final int DEFAULT_CAPACITY = 16;
//largest possible array size; tied to the toArray()-related methods
static final int MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8;
//concurrency level; unused, kept only for compatibility with earlier versions
private static final int DEFAULT_CONCURRENCY_LEVEL = 16;
//load factor
private static final float LOAD_FACTOR = 0.75f;
//treeify threshold: a bin list longer than 8 is converted to a red-black tree
static final int TREEIFY_THRESHOLD = 8;
//untreeify threshold: at 6 or fewer nodes a tree reverts to a list
static final int UNTREEIFY_THRESHOLD = 6;
//minimum table capacity before any bin may be treeified
static final int MIN_TREEIFY_CAPACITY = 64;
//minimum number of bins each thread transfers per resize step
private static final int MIN_TRANSFER_STRIDE = 16;
//number of bits used for the generation stamp in sizeCtl
private static int RESIZE_STAMP_BITS = 16;
//maximum number of threads that can help with a resize
private static final int MAX_RESIZERS = (1 << (32 - RESIZE_STAMP_BITS)) - 1;
//bit shift for recording the resize stamp in sizeCtl
private static final int RESIZE_STAMP_SHIFT = 32 - RESIZE_STAMP_BITS;
/*
* Encodings for Node hash fields. See above for explanation.
* A set of special marker values.
*/
static final int MOVED = -1; // hash for forwarding nodes
static final int TREEBIN = -2; // hash for roots of trees
static final int RESERVED = -3; // hash for transient reservations
static final int HASH_BITS = 0x7fffffff; // usable bits of normal node hash
//number of available CPUs
static final int NCPU = Runtime.getRuntime().availableProcessors();
//fields declared for serialization compatibility
private static final ObjectStreamField[] serialPersistentFields = {
new ObjectStreamField("segments", Segment[].class),
new ObjectStreamField("segmentMask", Integer.TYPE),
new ObjectStreamField("segmentShift", Integer.TYPE)
};
//the array of bins: null until the first insertion, always a power-of-two size, accessible to iterators.
transient volatile Node<K,V>[] table;
//the next table to transfer into; non-null only while resizing.
private transient volatile Node<K,V>[] nextTable;
//base counter value, used when there is no contention
private transient volatile long baseCount;
//controls table initialization and resizing. Defaults to 0; if a capacity was passed to the constructor, it holds the next power-of-two size to allocate.
//-1 means the table is being initialized
//a negative value other than -1 means a resize is in progress (strictly, the high bits hold the resize stamp and the low bits the number of helping threads plus one; this is often loosely described as "-N means N-1 threads are resizing")
//when table is null, it holds the initial capacity chosen by the constructor
//essentially a multi-purpose control flag
private transient volatile int sizeCtl;
//the next bin index (plus one) to split while resizing
private transient volatile int transferIndex;
//spinlock (locked via CAS) used when creating or resizing CounterCells
private transient volatile int cellsBusy;
//table of CounterCells for the striped size count
private transient volatile CounterCell[] counterCells;
// views
private transient KeySetView<K,V> keySet;
private transient ValuesView<K,V> values;
private transient EntrySetView<K,V> entrySet;
// Unsafe mechanics
private static final sun.misc.Unsafe U;
private static final long SIZECTL;
private static final long TRANSFERINDEX;
private static final long BASECOUNT;
private static final long CELLSBUSY;
private static final long CELLVALUE;
private static final long ABASE;
private static final int ASHIFT;
static {
try {
U = sun.misc.Unsafe.getUnsafe();
Class<?> k = ConcurrentHashMap.class;
SIZECTL = U.objectFieldOffset
(k.getDeclaredField("sizeCtl"));
TRANSFERINDEX = U.objectFieldOffset
(k.getDeclaredField("transferIndex"));
BASECOUNT = U.objectFieldOffset
(k.getDeclaredField("baseCount"));
CELLSBUSY = U.objectFieldOffset
(k.getDeclaredField("cellsBusy"));
Class<?> ck = CounterCell.class;
CELLVALUE = U.objectFieldOffset
(ck.getDeclaredField("value"));
Class<?> ak = Node[].class;
ABASE = U.arrayBaseOffset(ak);
int scale = U.arrayIndexScale(ak);
if ((scale & (scale - 1)) != 0)
throw new Error("data type scale not a power of two");
ASHIFT = 31 - Integer.numberOfLeadingZeros(scale);
} catch (Exception e) {
throw new Error(e);
}
}
(3.3) Inner classes
ConcurrentHashMap defines many inner classes; the main ones are shown in the diagram below:
(3.3.1) The Segment class
The role of Segment in JDK 1.8 differs greatly from earlier versions: it no longer participates in ordinary ConcurrentHashMap operations and is retained only so that serialization and deserialization stay compatible with older releases.
static class Segment<K,V> extends ReentrantLock implements Serializable {
private static final long serialVersionUID = 2249069246763182397L;
final float loadFactor;
Segment(float lf) { this.loadFactor = lf; }
}
ReentrantLock is a reentrant mutual-exclusion lock: the thread holding it may acquire it again without blocking, which is why the original Segment could simply extend it. Separately, Java cannot touch the underlying operating system directly and must go through native methods; the JDK does, however, leave a back door in the form of the Unsafe class, which exposes hardware-level atomic operations such as CAS.
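Reentrancy is easy to demonstrate (a minimal demo class of my own, not JDK source): the same thread can acquire a ReentrantLock it already holds without deadlocking.

```java
import java.util.concurrent.locks.ReentrantLock;

// Minimal ReentrantLock demo: the holding thread may re-acquire the lock
// (the hold count goes to 2), which plain synchronized also allows.
public class ReentrantDemo {
    static final ReentrantLock lock = new ReentrantLock();
    static int counter = 0;

    static void outer() {
        lock.lock();
        try {
            inner(); // re-enters the lock this thread already holds
        } finally {
            lock.unlock();
        }
    }

    static void inner() {
        lock.lock(); // hold count becomes 2; no deadlock
        try {
            counter++;
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) {
        outer();
        System.out.println(counter); // prints 1
    }
}
```

If the lock were not reentrant, `inner()` would block forever waiting for a lock its own thread holds.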
(3.3.2) The Node class
Node stores an actual key-value pair. It has four subclasses: ForwardingNode, ReservationNode, TreeNode and TreeBin.
Source of the Node class:
//a basic node holding one key-value pair
static class Node<K,V> implements Map.Entry<K,V> {
//hash value
final int hash;
//key
final K key;
//value
volatile V val;
//next node in the bin's list
volatile Node<K,V> next;
//constructor
Node(int hash, K key, V val, Node<K,V> next) {
this.hash = hash;
this.key = key;
this.val = val;
this.next = next;
}
public final K getKey() { return key; }
public final V getValue() { return val; }
//hash code of the entry
public final int hashCode() { return key.hashCode() ^ val.hashCode(); }
public final String toString(){ return key + "=" + val; }
public final V setValue(V value) {
throw new UnsupportedOperationException();
}
//checks whether two entries are equal
public final boolean equals(Object o) {
Object k, v, u; Map.Entry<?,?> e;
return ((o instanceof Map.Entry) &&
(k = (e = (Map.Entry<?,?>)o).getKey()) != null &&
(v = e.getValue()) != null &&
(k == key || k.equals(key)) &&
(v == (u = val) || v.equals(u)));
}
/**
* Virtualized support for map.get(); overridden in subclasses.
*/
Node<K,V> find(int h, Object k) {
Node<K,V> e = this;
if (k != null) {
do {
K ek;
if (e.hash == h &&
((ek = e.key) == k || (ek != null && k.equals(ek))))
return e;
} while ((e = e.next) != null);
}
return null;
}
}
ForwardingNode: forwarding node
A ForwardingNode is a temporary node that appears only during a resize; its hash is fixed at MOVED (-1) and it stores no actual data. Once every node in an old-table bin has been migrated to the new array, a ForwardingNode is placed in that old bin. A read (or iteration) that encounters a ForwardingNode is forwarded to the resized table; a write that encounters one tries to help with the resize instead.
Source:
static final class ForwardingNode<K,V> extends Node<K,V> {
final Node<K,V>[] nextTable;
ForwardingNode(Node<K,V>[] tab) {
super(MOVED, null, null, null);
this.nextTable = tab;
}
//ForwardingNode's find: the lookup is performed directly on the new array nextTable.
Node<K,V> find(int h, Object k) {
// loop to avoid arbitrarily deep recursion when forwarding nodes are hit repeatedly
outer: for (Node<K,V>[] tab = nextTable;;) {
Node<K,V> e; int n;
if (k == null || tab == null || (n = tab.length) == 0 ||
(e = tabAt(tab, (n - 1) & h)) == null)
return null;
for (;;) {
int eh; K ek;
//the first node is the one we are looking for; return it directly
if ((eh = e.hash) == h &&
((ek = e.key) == k || (ek != null && k.equals(ek))))
return e;
if (eh < 0) {
//hit another ForwardingNode; this is equivalent to a recursive call of this method
if (e instanceof ForwardingNode) {
tab = ((ForwardingNode<K,V>)e).nextTable;
continue outer;
}
else
//hit a special node; delegate to its own find method
return e.find(h, k);
}
//ordinary node: just walk the linked list
if ((e = e.next) == null)
return null;
}
}
}
}
ReservationNode: a reservation (placeholder) node. Its hash is fixed at RESERVED (-3), it never stores actual data, and it never appears during ordinary operations; it is used only by the two JDK 1.8 functional-style methods computeIfAbsent and compute.
Why is this node needed? Normal write operations lock the first node of the target bin, but null cannot be locked. So for an empty bin a placeholder is created, installed as the bin's first node, and used as the monitor object, which makes it possible to lock the whole bin. put and remove do not need a ReservationNode because their empty-bin cases are handled specially and are in fact simpler: put inserts with a plain CAS, and remove on an empty bin simply does nothing; neither requires a lock. computeIfAbsent and compute, however, must run a user-supplied function in that situation, which is too involved to handle without locking, so they use a ReservationNode to lock the empty bin.
static final class ReservationNode<K,V> extends Node<K,V> {
ReservationNode() {
super(RESERVED, null, null, null);
}
//a reservation node means this bin is logically empty, so no "equal" node can ever be found here
Node<K,V> find(int h, Object k) {
return null;
}
}
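The two methods that rely on ReservationNode can be seen from the caller's side with a simple cache (usage demo only; the placeholder node itself is invisible to callers):

```java
import java.util.concurrent.ConcurrentHashMap;

// computeIfAbsent runs the mapping function at most once per absent key,
// atomically; internally, an empty bin is locked via a ReservationNode
// while the function runs.
public class ComputeIfAbsentDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> cache = new ConcurrentHashMap<>();
        int v1 = cache.computeIfAbsent("answer", k -> 42); // function runs, inserts 42
        int v2 = cache.computeIfAbsent("answer", k -> 99); // function skipped, returns existing 42
        System.out.println(v1 + " " + v2); // prints "42 42"
    }
}
```

Note the documented caveat: the mapping function runs while the bin is locked, so it must be short and must not touch the same map.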
TreeNode: red-black tree node
TreeNode exists only while a bin is stored in red-black-tree form, and it does hold actual data. Its structure matches JDK 1.8 HashMap's TreeNode, and many method implementations are essentially identical; in ConcurrentHashMap, however, all operations on TreeNodes are delegated through a TreeBin.
A TreeNode keeps all the fields of a plain Node (including next), so a tree bin can be read either as a tree or as a linked list.
Source analysis:
static final class TreeNode<K,V> extends Node<K,V> {
TreeNode<K,V> parent; // red-black tree links
TreeNode<K,V> left;
TreeNode<K,V> right;
//the added prev pointer makes deletion easier: unlinking a non-head node of the list requires its predecessor, so it is kept directly
TreeNode<K,V> prev; // needed to unlink next upon deletion
boolean red;
TreeNode(int hash, K key, V val, Node<K,V> next,
TreeNode<K,V> parent) {
super(hash, key, val, next);
this.parent = parent;
}
Node<K,V> find(int h, Object k) {
return findTreeNode(h, k, null);
}
/**
* Returns the TreeNode (or null if not found) for the given key
* starting at given root.
*/
//traverses the subtree rooted at this node; same implementation as HashMap.TreeNode.find
final TreeNode<K,V> findTreeNode(int h, Object k, Class<?> kc) {
if (k != null) {
TreeNode<K,V> p = this;
do {
int ph, dir; K pk; TreeNode<K,V> q;
TreeNode<K,V> pl = p.left, pr = p.right;
if ((ph = p.hash) > h)
p = pl;
else if (ph < h)
p = pr;
else if ((pk = p.key) == k || (pk != null && k.equals(pk)))
return p;
else if (pl == null)
p = pr;
else if (pr == null)
p = pl;
else if ((kc != null ||
(kc = comparableClassFor(k)) != null) &&
(dir = compareComparables(kc, k, pk)) != 0)
p = (dir < 0) ? pl : pr;
else if ((q = pr.findTreeNode(h, k, kc)) != null) //recursively search the right subtree
return q;
else
//the right subtree was just searched recursively, so from here the loop only needs to keep going left
p = pl;
} while (p != null);
}
return null;
}
}
TreeBin: the node that proxies operations on TreeNodes
A TreeBin's hash is fixed at TREEBIN (-2). It is the special node ConcurrentHashMap uses to front a tree bin, holding the root of the red-black tree that stores the actual data. Because a write can restructure the tree substantially, which would disturb concurrent readers, TreeBin also maintains a simple read-write lock; compared with HashMap, this is the main reason the class introduces this extra node type.
Source:
//a TreeNode still carries the linked-list next pointer, so a tree bin can also be traversed as a list
//TreeBin maintains its own simple read-write lock, which does not need to handle write-write contention
//not every write takes the write lock; only the put/remove paths that restructure the tree do
//many methods mirror those of JDK 1.8 HashMap.TreeNode and can be read side by side
static final class TreeBin<K,V> extends Node<K,V> {
//root of the red-black tree
TreeNode<K,V> root;
//head of the linked-list view
volatile TreeNode<K,V> first;
//the most recent thread to set the WAITER bit
volatile Thread waiter;
//combined lock state
volatile int lockState;
// values for lockState
//binary 001: the tree's write lock is held
static final int WRITER = 1; // set while holding write lock
//binary 010: a thread is waiting to acquire the write lock (referred to as WAITER below)
static final int WAITER = 2; // set when waiting for write lock
//binary 100: read-lock increment; read locks can stack, i.e. the tree can be read concurrently, and each such reader adds READER to lockState
static final int READER = 4; // increment value for setting read lock
/**
* Tie-breaking utility for ordering insertions when equal
* hashCodes and non-comparable. We don't require a total
* order, just a consistent insertion rule to maintain
* equivalence across rebalancings. Tie-breaking further than
* necessary simplifies testing a bit.
*/
//used to order insertions only when hash codes are equal and the keys are not mutually Comparable
static int tieBreakOrder(Object a, Object b) {
int d;
if (a == null || b == null ||
(d = a.getClass().getName().
compareTo(b.getClass().getName())) == 0)
d = (System.identityHashCode(a) <= System.identityHashCode(b) ?
-1 : 1);
return d;
}
/**
* Creates bin with initial set of nodes headed by b.
*/
//builds a red-black tree from the linked list headed by b
TreeBin(TreeNode<K,V> b) {
super(TREEBIN, null, null, null);
this.first = b;
TreeNode<K,V> r = null;
for (TreeNode<K,V> x = b, next; x != null; x = next) {
next = (TreeNode<K,V>)x.next;
x.left = x.right = null;
if (r == null) {
x.parent = null;
x.red = false;
r = x;
}
else {
K k = x.key;
int h = x.hash;
Class<?> kc = null;
for (TreeNode<K,V> p = r;;) {
int dir, ph;
K pk = p.key;
if ((ph = p.hash) > h)
dir = -1;
else if (ph < h)
dir = 1;
else if ((kc == null &&
(kc = comparableClassFor(k)) == null) ||
(dir = compareComparables(kc, k, pk)) == 0)
dir = tieBreakOrder(k, pk);
TreeNode<K,V> xp = p;
if ((p = (dir <= 0) ? p.left : p.right) == null) {
x.parent = xp;
if (dir <= 0)
xp.left = x;
else
xp.right = x;
r = balanceInsertion(r, x);
break;
}
}
}
}
this.root = r;
assert checkInvariants(root);
}
/**
* Acquires write lock for tree restructuring.
*/
//acquires the write lock on the root; required whenever the tree is restructured
private final void lockRoot() {
//first try to grab the write lock with a single CAS
if (!U.compareAndSwapInt(this, LOCKSTATE, 0, WRITER))
//slow path factored into a separate method; it returns only once the write lock is held
contendedLock(); // offload to separate method
}
/**
* Releases write lock for tree restructuring.
*/
//releases the write lock
private final void unlockRoot() {
lockState = 0;
}
/**
* Possibly blocks awaiting root lock.
*/
//may block the writing thread; returns only once it holds the write lock
//ConcurrentHashMap's put/remove/replace already synchronize on the TreeBin node itself, so write-write contention cannot occur here
//this method is only ever called by a writer, so only read locks can stand in the way of the write lock, never another write lock
//the read-write lock is deliberately simple and could not handle write-write contention
//waiter is either null or the current thread itself
private final void contendedLock() {
boolean waiting = false;
for (int s;;) {
// ~WAITER inverts WAITER's bits; the test is true when no thread holds the read lock (no thread can hold the write lock at this point)
if (((s = lockState) & ~WAITER) == 0) {
//neither the read lock nor the write lock is held by another thread: try to take the write lock for this writer, clearing the WAITER bit at the same time
if (U.compareAndSwapInt(this, LOCKSTATE, s, WRITER)) {
//write lock acquired; if this thread had registered itself as WAITER, clear that registration
if (waiting)
waiter = null;
return;
}
}
//some thread holds the read lock (no write lock is possible) and this thread is not yet in WAITER state: this else-if is true
else if ((s & WAITER) == 0) {
//try to claim the WAITER bit
if (U.compareAndSwapInt(this, LOCKSTATE, s, s | WAITER)) {
//record that this thread is the WAITER, so the next iteration can fall into the final else-if
waiting = true;
waiter = Thread.currentThread();
}
}
//some thread holds the read lock and this thread is already in WAITER state: this else-if is true
else if (waiting)
//park (block) this thread until a reader unparks it
LockSupport.park(this);
}
}
/**
* Returns matching node or null if none. Tries to search
* using tree comparisons from root, but continues linear
* search when lock not available.
*/
//searches from the root and returns the "equal" node, or null if none
//while a writer holds (or waits for) the write lock, the search falls back to the linked list
final Node<K,V> find(int h, Object k) {
if (k != null) {
for (Node<K,V> e = first; e != null; ) {
int s; K ek;
//two special cases fall back to traversing the linked list:
//1. a thread currently holds the write lock; reading the list avoids blocking this reader
//2. the WAITER bit is set; declining to add another read lock lets an already-parked writer resume sooner, or spares a writer from parking at all
if (((s = lockState) & (WAITER|WRITER)) != 0) {
if (e.hash == h &&
((ek = e.key) == k || (ek != null && k.equals(ek))))
return e;
e = e.next;
}
//add one reader: bump the read count in lockState by READER
else if (U.compareAndSwapInt(this, LOCKSTATE, s,
s + READER)) {
TreeNode<K,V> r, p;
try {
p = ((r = root) == null ? null :
r.findTreeNode(h, k, null));
} finally {
Thread w;
//if this is the last reader and a writer is blocked because of the read lock, wake it so it can retry the write lock
//U.getAndAddInt(this, LOCKSTATE, -READER) returns the OLD value of lockState after updating,
//not the new one, so in effect the == comparison happens before the subtraction
if (U.getAndAddInt(this, LOCKSTATE, -READER) ==
(READER|WAITER) && (w = waiter) != null)
//unpark the blocked writer so it can retry acquiring the write lock
LockSupport.unpark(w);
}
return p;
}
}
}
return null;
}
/**
* Finds or adds a node.
* @return null if added
*/
//implements the tree-bin part of ConcurrentHashMap.putVal
final TreeNode<K,V> putTreeVal(int h, K k, V v) {
Class<?> kc = null;
boolean searched = false;
for (TreeNode<K,V> p = root;;) {
int dir, ph; K pk;
if (p == null) {
first = root = new TreeNode<K,V>(h, k, v, null, null);
break;
}
else if ((ph = p.hash) > h)
dir = -1;
else if (ph < h)
dir = 1;
else if ((pk = p.key) == k || (pk != null && k.equals(pk)))
return p;
else if ((kc == null &&
(kc = comparableClassFor(k)) == null) ||
(dir = compareComparables(kc, k, pk)) == 0) {
if (!searched) {
TreeNode<K,V> q, ch;
searched = true;
if (((ch = p.left) != null &&
(q = ch.findTreeNode(h, k, kc)) != null) ||
((ch = p.right) != null &&
(q = ch.findTreeNode(h, k, kc)) != null))
return q;
}
dir = tieBreakOrder(k, pk);
}
TreeNode<K,V> xp = p;
if ((p = (dir <= 0) ? p.left : p.right) == null) {
TreeNode<K,V> x, f = first;
first = x = new TreeNode<K,V>(h, k, v, f, xp);
if (f != null)
f.prev = x;
if (dir <= 0)
xp.left = x;
else
xp.right = x;
//what follows is the write-lock handling for put
//a newly added BST node always takes the place of a NIL (null) leaf
//xp is the new node's parent; if xp is black, adding a red child keeps every red-black property on this path intact: the simplest insertion recoloring case of all
if (!xp.red)
//this case only appends at a leaf without restructuring the tree, so a concurrent tree-based read path is unaffected
x.red = true;
//any other case may rotate the tree; a concurrent tree-based reader could then traverse too many or too few nodes, corrupting the result of find
else {
//so except for that simplest case, take the write lock and make readers traverse via the linked list
lockRoot();
try {
root = balanceInsertion(root, x);
} finally {
unlockRoot();
}
}
break;
}
}
assert checkInvariants(root);
return null;
}
/**
* Removes the given node, that must be present before this
* call. This is messier than typical red-black deletion code
* because we cannot swap the contents of an interior node
* with a leaf successor that is pinned by "next" pointers
* that are accessible independently of lock. So instead we
* swap the tree linkages.
*
* @return true if now too small, so should be untreeified
*/
//largely the same as JDK 1.8 HashMap.TreeNode.removeTreeNode: the node is removed from both the linked list and the tree
//two differences: 1. the return value is true when the tree has become too small, and the caller then converts it back to a list; 2. when the tree stays large enough, the actual red-black deletion is performed under the write lock
final boolean removeTreeNode(TreeNode<K,V> p) {
TreeNode<K,V> next = (TreeNode<K,V>)p.next;
TreeNode<K,V> pred = p.prev; // unlink traversal pointers
TreeNode<K,V> r, rl;
if (pred == null)
first = next;
else
pred.next = next;
if (next != null)
next.prev = pred;
if (first == null) {
root = null;
return true;
}
if ((r = root) == null || r.right == null || // too small
(rl = r.left) == null || rl.left == null)
return true;
lockRoot();
try {
TreeNode<K,V> replacement;
TreeNode<K,V> pl = p.left;
TreeNode<K,V> pr = p.right;
if (pl != null && pr != null) {
TreeNode<K,V> s = pr, sl;
while ((sl = s.left) != null) // find successor
s = sl;
boolean c = s.red; s.red = p.red; p.red = c; // swap colors
TreeNode<K,V> sr = s.right;
TreeNode<K,V> pp = p.parent;
if (s == pr) { // p was s's direct parent
p.parent = s;
s.right = p;
}
else {
TreeNode<K,V> sp = s.parent;
if ((p.parent = sp) != null) {
if (s == sp.left)
sp.left = p;
else
sp.right = p;
}
if ((s.right = pr) != null)
pr.parent = s;
}
p.left = null;
if ((p.right = sr) != null)
sr.parent = p;
if ((s.left = pl) != null)
pl.parent = s;
if ((s.parent = pp) == null)
r = s;
else if (p == pp.left)
pp.left = s;
else
pp.right = s;
if (sr != null)
replacement = sr;
else
replacement = p;
}
else if (pl != null)
replacement = pl;
else if (pr != null)
replacement = pr;
else
replacement = p;
if (replacement != p) {
TreeNode<K,V> pp = replacement.parent = p.parent;
if (pp == null)
r = replacement;
else if (p == pp.left)
pp.left = replacement;
else
pp.right = replacement;
p.left = p.right = p.parent = null;
}
root = (p.red) ? r : balanceDeletion(r, replacement);
if (p == replacement) { // detach pointers
TreeNode<K,V> pp;
if ((pp = p.parent) != null) {
if (p == pp.left)
pp.left = null;
else if (p == pp.right)
pp.right = null;
p.parent = null;
}
}
} finally {
unlockRoot();
}
assert checkInvariants(root);
return false;
}
/* ------------------------------------------------------------ */
// Red-black tree methods, all adapted from CLR
static <K,V> TreeNode<K,V> rotateLeft(TreeNode<K,V> root,
TreeNode<K,V> p) {
TreeNode<K,V> r, pp, rl;
if (p != null && (r = p.right) != null) {
if ((rl = p.right = r.left) != null)
rl.parent = p;
if ((pp = r.parent = p.parent) == null)
(root = r).red = false;
else if (pp.left == p)
pp.left = r;
else
pp.right = r;
r.left = p;
p.parent = r;
}
return root;
}
static <K,V> TreeNode<K,V> rotateRight(TreeNode<K,V> root,
TreeNode<K,V> p) {
TreeNode<K,V> l, pp, lr;
if (p != null && (l = p.left) != null) {
if ((lr = p.left = l.right) != null)
lr.parent = p;
if ((pp = l.parent = p.parent) == null)
(root = l).red = false;
else if (pp.right == p)
pp.right = l;
else
pp.left = l;
l.right = p;
p.parent = l;
}
return root;
}
static <K,V> TreeNode<K,V> balanceInsertion(TreeNode<K,V> root,
TreeNode<K,V> x) {
x.red = true;
for (TreeNode<K,V> xp, xpp, xppl, xppr;;) {
if ((xp = x.parent) == null) {
x.red = false;
return x;
}
else if (!xp.red || (xpp = xp.parent) == null)
return root;
if (xp == (xppl = xpp.left)) {
if ((xppr = xpp.right) != null && xppr.red) {
xppr.red = false;
xp.red = false;
xpp.red = true;
x = xpp;
}
else {
if (x == xp.right) {
root = rotateLeft(root, x = xp);
xpp = (xp = x.parent) == null ? null : xp.parent;
}
if (xp != null) {
xp.red = false;
if (xpp != null) {
xpp.red = true;
root = rotateRight(root, xpp);
}
}
}
}
else {
if (xppl != null && xppl.red) {
xppl.red = false;
xp.red = false;
xpp.red = true;
x = xpp;
}
else {
if (x == xp.left) {
root = rotateRight(root, x = xp);
xpp = (xp = x.parent) == null ? null : xp.parent;
}
if (xp != null) {
xp.red = false;
if (xpp != null) {
xpp.red = true;
root = rotateLeft(root, xpp);
}
}
}
}
}
}
static <K,V> TreeNode<K,V> balanceDeletion(TreeNode<K,V> root,
TreeNode<K,V> x) {
for (TreeNode<K,V> xp, xpl, xpr;;) {
if (x == null || x == root)
return root;
else if ((xp = x.parent) == null) {
x.red = false;
return x;
}
else if (x.red) {
x.red = false;
return root;
}
else if ((xpl = xp.left) == x) {
if ((xpr = xp.right) != null && xpr.red) {
xpr.red = false;
xp.red = true;
root = rotateLeft(root, xp);
xpr = (xp = x.parent) == null ? null : xp.right;
}
if (xpr == null)
x = xp;
else {
TreeNode<K,V> sl = xpr.left, sr = xpr.right;
if ((sr == null || !sr.red) &&
(sl == null || !sl.red)) {
xpr.red = true;
x = xp;
}
else {
if (sr == null || !sr.red) {
if (sl != null)
sl.red = false;
xpr.red = true;
root = rotateRight(root, xpr);
xpr = (xp = x.parent) == null ?
null : xp.right;
}
if (xpr != null) {
xpr.red = (xp == null) ? false : xp.red;
if ((sr = xpr.right) != null)
sr.red = false;
}
if (xp != null) {
xp.red = false;
root = rotateLeft(root, xp);
}
x = root;
}
}
}
else { // symmetric
if (xpl != null && xpl.red) {
xpl.red = false;
xp.red = true;
root = rotateRight(root, xp);
xpl = (xp = x.parent) == null ? null : xp.left;
}
if (xpl == null)
x = xp;
else {
TreeNode<K,V> sl = xpl.left, sr = xpl.right;
if ((sl == null || !sl.red) &&
(sr == null || !sr.red)) {
xpl.red = true;
x = xp;
}
else {
if (sl == null || !sl.red) {
if (sr != null)
sr.red = false;
xpl.red = true;
root = rotateLeft(root, xpl);
xpl = (xp = x.parent) == null ?
null : xp.left;
}
if (xpl != null) {
xpl.red = (xp == null) ? false : xp.red;
if ((sl = xpl.left) != null)
sl.red = false;
}
if (xp != null) {
xp.red = false;
root = rotateRight(root, xp);
}
x = root;
}
}
}
}
}
/**
* Recursive invariant check
*/
static <K,V> boolean checkInvariants(TreeNode<K,V> t) {
TreeNode<K,V> tp = t.parent, tl = t.left, tr = t.right,
tb = t.prev, tn = (TreeNode<K,V>)t.next;
if (tb != null && tb.next != t)
return false;
if (tn != null && tn.prev != t)
return false;
if (tp != null && t != tp.left && t != tp.right)
return false;
if (tl != null && (tl.parent != t || tl.hash > t.hash))
return false;
if (tr != null && (tr.parent != t || tr.hash < t.hash))
return false;
if (t.red && tl != null && tl.red && tr != null && tr.red)
return false;
if (tl != null && !checkInvariants(tl))
return false;
if (tr != null && !checkInvariants(tr))
return false;
return true;
}
private static final sun.misc.Unsafe U;
private static final long LOCKSTATE;
static {
try {
U = sun.misc.Unsafe.getUnsafe();
Class<?> k = TreeBin.class;
LOCKSTATE = U.objectFieldOffset
(k.getDeclaredField("lockState"));
} catch (Exception e) {
throw new Error(e);
}
}
}
Reference: https://blog.csdn.net/u011392897/article/details/60479937