Understanding the ConcurrentHashMap Source, Part One
There is a lot to ConcurrentHashMap, so this first part covers its data structure and the process of putting in an element.
The source is from JDK 1.8.
1. Data Structure
The conclusion up front: array + linked list + red-black tree.
First let's look at the fields, to make the later discussion easier to follow.
/* ---------------- Fields -------------- */
/**
* The array of bins. Lazily initialized upon first insertion.
* Size is always a power of two. Accessed directly by iterators.
* This is the array that holds the data. Note that it is only initialized when the
* first element is inserted, and its size is always a power of two.
* So we can already see the array and the linked list. Where is the red-black tree?
*/
transient volatile Node<K,V>[] table;
/**
* The next table to use; non-null only while resizing.
* Used during resizing; non-null only while a resize is in progress.
*/
private transient volatile Node<K,V>[] nextTable;
/**
* Base counter value, used mainly when there is no contention,
* but also as a fallback during table initialization
* races. Updated via CAS.
*/
private transient volatile long baseCount;
/**
* Table initialization and resizing control. When negative, the
* table is being initialized or resized: -1 for initialization,
* else -(1 + the number of active resizing threads). Otherwise,
* when table is null, holds the initial table size to use upon
* creation, or 0 for default. After initialization, holds the
* next element count value upon which to resize the table.
* Controls table initialization and resizing: -1 means the table is being initialized,
* and other negative values mean a resize is in progress (-(1 + N) for N active resizing threads).
* Otherwise, while table is still null, it holds the initial capacity passed to the
* constructor (or 0 for the default). After initialization it holds the element count
* at which the next resize will be triggered.
*/
private transient volatile int sizeCtl;
/**
* The next table index (plus one) to split while resizing.
* During a resize, the boundary below which bin ranges have not yet been claimed;
* threads claim ranges from the high end of the old array downward.
*/
private transient volatile int transferIndex;
/**
* Spinlock (locked via CAS) used when resizing and/or creating CounterCells.
*/
private transient volatile int cellsBusy;
/**
* Table of counter cells. When non-null, size is a power of 2.
*
*/
private transient volatile CounterCell[] counterCells;
Some constants will also be used:
/* ---------------- Constants -------------- */
/**
* The largest possible table capacity. This value must be
* exactly 1<<30 to stay within Java array allocation and indexing
* bounds for power of two table sizes, and is further required
* because the top two bits of 32bit hash fields are used for
* control purposes.
* The maximum table capacity.
*/
private static final int MAXIMUM_CAPACITY = 1 << 30;
/**
* The default initial table capacity. Must be a power of 2
* (i.e., at least 1) and at most MAXIMUM_CAPACITY.
* The default initial table size.
*/
private static final int DEFAULT_CAPACITY = 16;
/**
* The largest possible (non-power of two) array size.
* Needed by toArray and related methods.
* The largest array size, used when converting the map to an array (toArray and friends).
* The 8 is subtracted as headroom for per-array metadata: a VM stores a few words of
* header information (including the length) in every array object, so requesting the
* full Integer.MAX_VALUE elements can fail with OutOfMemoryError even when enough
* heap is available.
*/
static final int MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8;
/**
* The default concurrency level for this table. Unused but
* defined for compatibility with previous versions of this class.
* The concurrency level from the pre-JDK 8 segment-based design. It no longer affects
* behavior and is kept only for serialization compatibility.
*/
private static final int DEFAULT_CONCURRENCY_LEVEL = 16;
/**
* The load factor for this table. Overrides of this value in
* constructors affect only the initial table capacity. The
* actual floating point value isn't normally used -- it is
* simpler to use expressions such as {@code n - (n >>> 2)} for
* the associated resizing threshold.
* The load factor: a resize is triggered once the element count reaches 0.75 of the table size.
*/
private static final float LOAD_FACTOR = 0.75f;
/**
* The bin count threshold for using a tree rather than list for a
* bin. Bins are converted to trees when adding an element to a
* bin with at least this many nodes. The value must be greater
* than 2, and should be at least 8 to mesh with assumptions in
* tree removal about conversion back to plain bins upon
* shrinkage.
* Treeify threshold: a bin's linked list is converted to a red-black tree once its
* length reaches 8 (and the table is at least MIN_TREEIFY_CAPACITY).
*/
static final int TREEIFY_THRESHOLD = 8;
/**
* The bin count threshold for untreeifying a (split) bin during a
* resize operation. Should be less than TREEIFY_THRESHOLD, and at
* most 6 to mesh with shrinkage detection under removal.
* Untreeify threshold: when a resize split leaves a tree bin with 6 or fewer nodes,
* it is converted back to a plain linked list.
*/
static final int UNTREEIFY_THRESHOLD = 6;
/**
* The smallest table capacity for which bins may be treeified.
* (Otherwise the table is resized if too many nodes in a bin.)
* The value should be at least 4 * TREEIFY_THRESHOLD to avoid
* conflicts between resizing and treeification thresholds.
* The smallest table capacity at which bins may be treeified; below it, an overlong
* bin triggers a resize instead of treeification.
*/
static final int MIN_TREEIFY_CAPACITY = 64;
/**
* Minimum number of rebinnings per transfer step. Ranges are
* subdivided to allow multiple resizer threads. This value
* serves as a lower bound to avoid resizers encountering
* excessive memory contention. The value should be at least
* DEFAULT_CAPACITY.
* The minimum number of bins a resizing thread claims per transfer step.
*/
private static final int MIN_TRANSFER_STRIDE = 16;
/**
* The number of bits used for generation stamp in sizeCtl.
* Must be at least 6 for 32bit arrays.
* The number of bits of sizeCtl used for the resize generation stamp; the stamp
* identifies which resize (from which table size) helper threads are joining.
*/
private static int RESIZE_STAMP_BITS = 16;
/**
* The maximum number of threads that can help resize.
* Must fit in 32 - RESIZE_STAMP_BITS bits.
* The maximum number of threads that can help resize; must fit in
* (32 - RESIZE_STAMP_BITS) bits.
*/
private static final int MAX_RESIZERS = (1 << (32 - RESIZE_STAMP_BITS)) - 1;
/**
* The bit shift for recording size stamp in sizeCtl.
* The bit shift that moves the resize stamp into the high half of sizeCtl.
*/
private static final int RESIZE_STAMP_SHIFT = 32 - RESIZE_STAMP_BITS;
/*
* Encodings for Node hash fields. See above for explanation.
*/
static final int MOVED = -1; // hash for forwarding nodes: marks a bin whose contents have been migrated
static final int TREEBIN = -2; // hash for roots of trees
static final int RESERVED = -3; // hash for transient reservations
static final int HASH_BITS = 0x7fffffff; // usable bits of normal node hash
/** Number of CPUS, to place bounds on some sizings */
static final int NCPU = Runtime.getRuntime().availableProcessors();
/** For serialization compatibility with the segment-based locking of pre-JDK 8 versions. */
private static final ObjectStreamField[] serialPersistentFields = {
new ObjectStreamField("segments", Segment[].class),
new ObjectStreamField("segmentMask", Integer.TYPE),
new ObjectStreamField("segmentShift", Integer.TYPE)
};
2. The put Process
Let's start from the following piece of code:
ConcurrentHashMap<Integer, Integer> concurrentHashMap = new ConcurrentHashMap<>(10);
for (int i = 0; i < 12; i++) {
concurrentHashMap.put(i, i);
}
new ConcurrentHashMap<>(10): the source shows that whenever an initial capacity is specified, the actual initial capacity ends up as a power of two, computed as tableSizeFor(initialCapacity + initialCapacity/2 + 1), unless the requested capacity is at least half of MAXIMUM_CAPACITY, in which case MAXIMUM_CAPACITY is used directly. For example, specifying 10 yields an actual initial capacity of 2^4 = 16.
Here is the source (comments removed to keep it short):
public ConcurrentHashMap() {
}
public ConcurrentHashMap(int initialCapacity) {
if (initialCapacity < 0)
throw new IllegalArgumentException();
int cap = ((initialCapacity >= (MAXIMUM_CAPACITY >>> 1)) ?
MAXIMUM_CAPACITY :
tableSizeFor(initialCapacity + (initialCapacity >>> 1) + 1));
this.sizeCtl = cap;
}
public ConcurrentHashMap(Map<? extends K, ? extends V> m) {
this.sizeCtl = DEFAULT_CAPACITY;
putAll(m);
}
public ConcurrentHashMap(int initialCapacity, float loadFactor) {
this(initialCapacity, loadFactor, 1);
}
public ConcurrentHashMap(int initialCapacity,
float loadFactor, int concurrencyLevel) {
if (!(loadFactor > 0.0f) || initialCapacity < 0 || concurrencyLevel <= 0)
throw new IllegalArgumentException();
if (initialCapacity < concurrencyLevel) // Use at least as many bins
initialCapacity = concurrencyLevel; // as estimated threads
long size = (long)(1.0 + (long)initialCapacity / loadFactor);
int cap = (size >= (long)MAXIMUM_CAPACITY) ?
MAXIMUM_CAPACITY : tableSizeFor((int)size);
this.sizeCtl = cap;
}
/**
* Returns a power of two table size for the given desired capacity.
* In other words: round the desired capacity up to the next power of two.
* See Hacker's Delight, sec 3.2
*/
private static final int tableSizeFor(int c) {
int n = c - 1;
n |= n >>> 1;
n |= n >>> 2;
n |= n >>> 4;
n |= n >>> 8;
n |= n >>> 16;
return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
}
The code above also shows that sizeCtl is initially set to the computed capacity.
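The rounding behavior is easy to verify by lifting tableSizeFor into a standalone sketch (the class name here is just for illustration):

```java
public class TableSizeForDemo {
    static final int MAXIMUM_CAPACITY = 1 << 30;

    // copy of tableSizeFor: smear the highest set bit into every lower bit, then add 1
    static int tableSizeFor(int c) {
        int n = c - 1;
        n |= n >>> 1;
        n |= n >>> 2;
        n |= n >>> 4;
        n |= n >>> 8;
        n |= n >>> 16;
        return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
    }

    public static void main(String[] args) {
        // the int-arg constructor passes c + c/2 + 1, so 10 becomes tableSizeFor(16) = 16
        int initialCapacity = 10;
        System.out.println(tableSizeFor(initialCapacity + (initialCapacity >>> 1) + 1)); // 16
        System.out.println(tableSizeFor(17)); // 32
    }
}
```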
Next, the first put.
Let's get a rough picture of the overall flow of a put first, then come back for the details.
Here is the source:
public V put(K key, V value) {
return putVal(key, value, false);
}
final V putVal(K key, V value, boolean onlyIfAbsent) {
// ConcurrentHashMap cannot store null keys or null values
if (key == null || value == null) throw new NullPointerException();
// the spread() perturbation rehashes once more to reduce hash collisions
int hash = spread(key.hashCode());
//number of nodes in the bin
int binCount = 0;
// loop until the insertion succeeds; each pass may make only partial progress
for (Node<K,V>[] tab = table;;) {
Node<K,V> f; int n, i, fh;
// if the table has not been initialized yet, initialize it
if (tab == null || (n = tab.length) == 0)
// see the notes on initTable() below
tab = initTable();
// (n - 1) & hash gives the bin index; if bin i is empty, CAS the new node in
else if ((f = tabAt(tab, i = (n - 1) & hash)) == null) {
if (casTabAt(tab, i, null,
new Node<K,V>(hash, key, value, null)))
break;
}
// if the head node's hash is MOVED (-1), a resize is in progress, so help migrate
else if ((fh = f.hash) == MOVED)
tab = helpTransfer(tab, f);
else {
V oldVal = null;
// hash collision: lock the bin's head node
synchronized (f) {
if (tabAt(tab, i) == f) {
// fh >= 0 means a plain linked-list bin: append in list fashion
if (fh >= 0) {
binCount = 1;
for (Node<K,V> e = f;; ++binCount) {
K ek;
// if the key already exists, skip it or overwrite the value
if (e.hash == hash &&
((ek = e.key) == key ||
(ek != null && key.equals(ek)))) {
oldVal = e.val;
//overwrite the old value unless onlyIfAbsent
if (!onlyIfAbsent)
e.val = value;
break;
}
Node<K,V> pred = e;
// reached the tail: append the new node
if ((e = e.next) == null) {
pred.next = new Node<K,V>(hash, key,
value, null);
break;
}
}
}
// tree bin
else if (f instanceof TreeBin) {
Node<K,V> p;
binCount = 2;
// putTreeVal returns null if it created a new node, otherwise the node already mapped to the key
if ((p = ((TreeBin<K,V>)f).putTreeVal(hash, key,
value)) != null) {
oldVal = p.val;
if (!onlyIfAbsent)
p.val = value;
}
}
}
}
if (binCount != 0) {
// if the bin length reached the threshold, try to treeify it
if (binCount >= TREEIFY_THRESHOLD)
treeifyBin(tab, i);
// if an old value was replaced, the size is unchanged: return without counting or resizing
if (oldVal != null)
return oldVal;
break;
}
}
}
// bump the element count (this may trigger a resize)
addCount(1L, binCount);
return null;
}
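For reference, the spread() called at the top of putVal is defined in the JDK 8 source as (h ^ (h >>> 16)) & HASH_BITS, using the HASH_BITS constant listed earlier. A standalone sketch (the demo class name is mine):

```java
public class SpreadDemo {
    static final int HASH_BITS = 0x7fffffff; // usable bits of a normal node hash

    // JDK 8 spread(): fold the high 16 bits into the low 16, then clear the sign bit
    static int spread(int h) {
        return (h ^ (h >>> 16)) & HASH_BITS;
    }

    public static void main(String[] args) {
        // always non-negative, so negative hashes stay free for the control
        // values MOVED (-1), TREEBIN (-2) and RESERVED (-3)
        System.out.println(spread(-1) >= 0); // true
        // high bits now influence the bin index even when the table is small:
        // without spread, 0x12340000 & 15 would be 0 for every such hash
        System.out.println(spread(0x12340000) & 15); // 4
    }
}
```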
private final Node<K,V>[] initTable() {
Node<K,V>[] tab; int sc;
// only one thread performs the initialization; the others spin-wait
while ((tab = table) == null || tab.length == 0) {
// sc < 0 means some thread is already initializing: yield the CPU
if ((sc = sizeCtl) < 0)
Thread.yield();
// take the init "lock" by CASing sizeCtl to -1
else if (U.compareAndSwapInt(this, SIZECTL, sc, -1)) {
try {
// double-check after winning the CAS
if ((tab = table) == null || tab.length == 0) {
int n = (sc > 0) ? sc : DEFAULT_CAPACITY;
@SuppressWarnings("unchecked")
Node<K,V>[] nt = (Node<K,V>[])new Node<?,?>[n];
table = tab = nt;
// next resize threshold: n - (n >>> 2) == 0.75 * n
sc = n - (n >>> 2);
}
} finally {
sizeCtl = sc;
}
break;
}
}
return tab;
}
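The pattern in initTable(), spin, claim the "lock" by CASing a control word negative, double-check, publish the result, can be sketched in isolation. The sketch below stands AtomicInteger in for the Unsafe-based CAS on sizeCtl; all names are illustrative, not the real implementation:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class LazyInitDemo {
    volatile Object[] table;
    final AtomicInteger sizeCtl = new AtomicInteger(16); // initial capacity, as set by the constructor

    Object[] initTable() {
        Object[] tab;
        while ((tab = table) == null) {
            int sc = sizeCtl.get();
            if (sc < 0) {
                Thread.yield();                         // someone else is initializing: back off
            } else if (sizeCtl.compareAndSet(sc, -1)) { // take the init "lock" via CAS
                try {
                    if ((tab = table) == null) {        // double-check under the lock
                        int n = (sc > 0) ? sc : 16;
                        table = tab = new Object[n];
                        sc = n - (n >>> 2);             // next resize threshold: 0.75 * n
                    }
                } finally {
                    sizeCtl.set(sc);                    // release: publish the threshold
                }
                break;
            }
        }
        return tab;
    }

    public static void main(String[] args) {
        LazyInitDemo d = new LazyInitDemo();
        System.out.println(d.initTable().length + " / threshold " + d.sizeCtl.get()); // 16 / threshold 12
    }
}
```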
private final void addCount(long x, int check) {
CounterCell[] as; long b, s;
// striped counting, the same idea as LongAdder: under contention the count is
// spread across CounterCells instead of hammering a single baseCount with CAS
if ((as = counterCells) != null ||
// fast path: try to CAS the incremented count onto baseCount
!U.compareAndSwapLong(this, BASECOUNT, b = baseCount, s = b + x)) {
CounterCell a; long v; int m;
boolean uncontended = true;
if (as == null || (m = as.length - 1) < 0 ||
(a = as[ThreadLocalRandom.getProbe() & m]) == null ||
!(uncontended =
U.compareAndSwapLong(a, CELLVALUE, v = a.value, v + x))) {
fullAddCount(x, uncontended);
return;
}
if (check <= 1)
return;
s = sumCount();
}
//check whether a resize is needed
if (check >= 0) {
Node<K,V>[] tab, nt; int n, sc;
//resize while the count has reached the threshold and the table can still grow
while (s >= (long)(sc = sizeCtl) && (tab = table) != null &&
(n = tab.length) < MAXIMUM_CAPACITY) {
int rs = resizeStamp(n);
if (sc < 0) {
if ((sc >>> RESIZE_STAMP_SHIFT) != rs || sc == rs + 1 ||
sc == rs + MAX_RESIZERS || (nt = nextTable) == null ||
transferIndex <= 0)
break;
if (U.compareAndSwapInt(this, SIZECTL, sc, sc + 1))
transfer(tab, nt);
}
//sc >= 0: no resize is running yet, so try to start one by CASing the stamped value into sizeCtl
else if (U.compareAndSwapInt(this, SIZECTL, sc,
(rs << RESIZE_STAMP_SHIFT) + 2))
// begin the transfer as the first resizing thread
transfer(tab, null);
//sumCount() totals baseCount plus all CounterCells to get the current element count
s = sumCount();
}
}
}
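The counterCells logic above is the same striped-counting technique that java.util.concurrent.atomic.LongAdder implements: a base value updated by CAS, plus per-cell slots used when the CAS is contended, all summed on read. A sketch using LongAdder directly to illustrate the idea (the demo class is mine):

```java
import java.util.concurrent.atomic.LongAdder;

public class StripedCountDemo {
    // count perThread increments from each of nThreads threads via a LongAdder
    static long countWith(int nThreads, int perThread) {
        LongAdder size = new LongAdder();
        Thread[] ts = new Thread[nThreads];
        for (int i = 0; i < nThreads; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) {
                    size.increment(); // contended updates land in separate cells
                }
            });
            ts[i].start();
        }
        for (Thread t : ts) {
            try {
                t.join();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        return size.sum(); // base + all cells, analogous to sumCount()
    }

    public static void main(String[] args) {
        System.out.println(countWith(4, 10_000)); // 40000
    }
}
```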
private final void transfer(Node<K,V>[] tab, Node<K,V>[] nextTab) {
int n = tab.length, stride;
// each thread migrates a range of bins; stride is the size of that range
if ((stride = (NCPU > 1) ? (n >>> 3) / NCPU : n) < MIN_TRANSFER_STRIDE)
stride = MIN_TRANSFER_STRIDE;
// the initiating thread creates the new table
if (nextTab == null) {
try {
@SuppressWarnings("unchecked")
//double the capacity
Node<K,V>[] nt = (Node<K,V>[])new Node<?,?>[n << 1];
nextTab = nt;
} catch (Throwable ex) { // try to cope with OOME
sizeCtl = Integer.MAX_VALUE;
return;
}
//publish the new table so helper threads can find it
nextTable = nextTab;
//ranges are claimed starting from the end of the old table
transferIndex = n;
}
int nextn = nextTab.length;
//forwarding node: an operation that reaches it is redirected to the new table
ForwardingNode<K,V> fwd = new ForwardingNode<K,V>(nextTab);
boolean advance = true;
boolean finishing = false;
//walk the claimed bins: i is the current index, bound the low end of the claimed range
for (int i = 0, bound = 0;;) {
Node<K,V> f; int fh;
//advance to the next bin to migrate
while (advance) {
int nextIndex, nextBound;
// bins are migrated in reverse index order
if (--i >= bound || finishing)
advance = false;
else if ((nextIndex = transferIndex) <= 0) {
i = -1;
advance = false;
}
//CAS on transferIndex to claim the next range of bins
else if (U.compareAndSwapInt
(this, TRANSFERINDEX, nextIndex,
nextBound = (nextIndex > stride ?
nextIndex - stride : 0))) {
bound = nextBound;
i = nextIndex - 1;
advance = false;
}
}
if (i < 0 || i >= n || i + n >= nextn) {
int sc;
if (finishing) {
nextTable = null;
table = nextTab;
sizeCtl = (n << 1) - (n >>> 1);
return;
}
if (U.compareAndSwapInt(this, SIZECTL, sc = sizeCtl, sc - 1)) {
if ((sc - 2) != resizeStamp(n) << RESIZE_STAMP_SHIFT)
return;
finishing = advance = true;
i = n; // recheck before commit
}
}
//empty bin: just install the forwarding node so later operations go to the new table
else if ((f = tabAt(tab, i)) == null)
advance = casTabAt(tab, i, null, fwd);
//the head is already a forwarding node (hash == MOVED), meaning this bin has been
//processed; a thread that arrives at such a bin during put helps migrate instead
else if ((fh = f.hash) == MOVED)
advance = true;
else {
//lock the bin that is about to be migrated
synchronized (f) {
if (tabAt(tab, i) == f) {
Node<K,V> ln, hn;
//linked-list migration; see the detailed explanation in Part 3
if (fh >= 0) {
int runBit = fh & n;
Node<K,V> lastRun = f;
for (Node<K,V> p = f.next; p != null; p = p.next) {
// h & n tests the extra bit: 0 keeps the node at index i, n moves it to i + n
int b = p.hash & n;
if (b != runBit) {
runBit = b;
lastRun = p;
}
}
if (runBit == 0) {
ln = lastRun;
hn = null;
}
else {
hn = lastRun;
ln = null;
}
for (Node<K,V> p = f; p != lastRun; p = p.next) {
int ph = p.hash; K pk = p.key; V pv = p.val;
if ((ph & n) == 0)
ln = new Node<K,V>(ph, pk, pv, ln);
else
hn = new Node<K,V>(ph, pk, pv, hn);
}
setTabAt(nextTab, i, ln);
setTabAt(nextTab, i + n, hn);
setTabAt(tab, i, fwd);
advance = true;
}
//tree-bin migration: split into low and high lists, then untreeify or rebuild
else if (f instanceof TreeBin) {
TreeBin<K,V> t = (TreeBin<K,V>)f;
TreeNode<K,V> lo = null, loTail = null;
TreeNode<K,V> hi = null, hiTail = null;
int lc = 0, hc = 0;
for (Node<K,V> e = t.first; e != null; e = e.next) {
int h = e.hash;
TreeNode<K,V> p = new TreeNode<K,V>
(h, e.key, e.val, null, null);
if ((h & n) == 0) {
if ((p.prev = loTail) == null)
lo = p;
else
loTail.next = p;
loTail = p;
++lc;
}
else {
if ((p.prev = hiTail) == null)
hi = p;
else
hiTail.next = p;
hiTail = p;
++hc;
}
}
ln = (lc <= UNTREEIFY_THRESHOLD) ? untreeify(lo) :
(hc != 0) ? new TreeBin<K,V>(lo) : t;
hn = (hc <= UNTREEIFY_THRESHOLD) ? untreeify(hi) :
(lc != 0) ? new TreeBin<K,V>(hi) : t;
setTabAt(nextTab, i, ln);
setTabAt(nextTab, i + n, hn);
setTabAt(tab, i, fwd);
advance = true;
}
}
}
}
}
}
3. Question & Answer
-
Why does HashMap treeify into a red-black tree specifically?
A red-black tree is a self-balancing binary search tree: it guarantees that the longest root-to-leaf path is no more than twice the shortest one (height here is the usual tree notion: it grows by one per level of children). Because this balance requirement is looser than in a strictly height-balanced tree, rotations are needed less often, which saves time on insertion while still keeping lookups logarithmic. In short, it fits HashMap's workload well.
-
Why can't ConcurrentHashMap's keys and values be null, while HashMap's can?
Ambiguity. Suppose nulls were allowed; then a null returned by get(key) could mean two different things:
- the value stored for the key might genuinely be null
- the map might never have contained a mapping for the key at all, which also returns null
HashMap can resolve the ambiguity with containsKey(key). But ConcurrentHashMap is a concurrent container: between thread A calling containsKey(key) and using the result, thread B could slip in a put or remove, so A would be acting on a map that no longer matches what it just checked.
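A minimal demonstration of the asymmetry (class and method names are mine):

```java
import java.util.HashMap;
import java.util.concurrent.ConcurrentHashMap;

public class NullKeyDemo {
    static boolean chmRejectsNullKey() {
        try {
            new ConcurrentHashMap<String, String>().put(null, "v");
            return false;
        } catch (NullPointerException expected) {
            return true; // ConcurrentHashMap refuses null keys (and null values)
        }
    }

    public static void main(String[] args) {
        HashMap<String, String> hm = new HashMap<>();
        hm.put("k", null);                       // HashMap happily stores a null value
        System.out.println(hm.get("k"));         // null: stored null, or absent key?
        System.out.println(hm.containsKey("k")); // true: containsKey disambiguates
        System.out.println(chmRejectsNullKey()); // true
    }
}
```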
-
Why must HashMap's capacity be a power of two?
The defining strength of a hash table is lookup performance, so a good indexing scheme is essential. HashMap uses division hashing to map a hash code to a bucket.
Bitwise operations are the fastest a CPU offers, so replacing the modulo with one is a clear win: when n is a power of two, h(k) = k mod n is equivalent to h(k) = k & (n - 1), where k is the key's hash value and n the array capacity. Keeping n a power of two is exactly what makes that substitution valid. -
An explanation of ConcurrentHashMap's linked-list data migration
The other reason the capacity stays a power of two is that it makes data migration cheap. During migration the old array's elements are redistributed across the new array, so the new array doesn't end up with one crowded half and one empty half. Rehashing every element from scratch would be costly, so JDK 8 splits each bin into two groups: one that keeps its index and one that moves. How? After the table doubles, the new index uses exactly one more bit of the hash than the old index did (given, again, that the capacity is always a power of two). If that extra bit is 0 the position is unchanged; if it is 1 the position moves up by the old capacity. Let's check with an example:
Suppose a resize grows the capacity from 16 to 32, and position 13 of the old array holds 3 nodes with these hashes:
h1: 1101 1100 1110 1011 0001 0110 0011 1101, new index: h1 & (32-1) = h1 & 11111 = 11101 = 29
h2: 1000 1110 0111 1001 1011 0110 0010 1101, new index: h2 & (32-1) = h2 & 11111 = 01101 = 13
h3: 1111 1010 1000 1011 0001 1010 0101 1101, new index: h3 & (32-1) = h3 & 11111 = 11101 = 29
So deciding whether a node's position changes only requires computing h & n, where n is the old array length:
h1 & 16 = h1 & 1 0000 = 16; h2 & 16 = h2 & 1 0000 = 0. A result of 0 means the index stays put; a nonzero result is always n, meaning the index moves to i + n.
In the source: int runBit = fh & n
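We can confirm the shortcut with the three hashes above (the demo class is illustrative):

```java
public class NewIndexDemo {
    // new index after the table doubles from n to 2n: the extra bit h & n
    // decides whether the node stays at its old index or moves up by n
    static int newIndex(int h, int n) {
        int oldIndex = h & (n - 1);
        return (h & n) == 0 ? oldIndex : oldIndex + n;
    }

    public static void main(String[] args) {
        int n = 16; // old capacity
        int h1 = (int) Long.parseLong("11011100111010110001011000111101", 2);
        int h2 = (int) Long.parseLong("10001110011110011011011000101101", 2);
        int h3 = (int) Long.parseLong("11111010100010110001101001011101", 2);
        for (int h : new int[] { h1, h2, h3 }) {
            // the shortcut agrees with a full recompute against the doubled mask
            System.out.println(newIndex(h, n) + " == " + (h & (2 * n - 1)));
        }
    }
}
```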
Next, the data migration itself:
As the old array is migrated into the new one, each bin is split into two linked lists, the "0" group and the "1" group, and each finished list is dropped into the new array in one step instead of moving nodes one at a time.
Node<K,V> ln, hn;
........ some code omitted ........
//this loop finds lastRun: the first node of the final stretch of the list whose
//nodes all go to the same destination; see note 1 below
for (Node<K,V> p = f.next; p != null; p = p.next) {
int b = p.hash & n;
if (b != runBit) {
runBit = b;
lastRun = p;
}
}
//seed the two lists with the lastRun segment
if (runBit == 0) {
ln = lastRun;
hn = null;
}
else {
hn = lastRun;
ln = null;
}
//prepend each node before lastRun onto the matching list to complete the two lists
for (ConcurrentHashMap.Node<K,V> p = f; p != lastRun; p = p.next) {
int ph = p.hash; K pk = p.key; V pv = p.val;
if ((ph & n) == 0)
ln = new ConcurrentHashMap.Node<K,V>(ph, pk, pv, ln);
else
hn = new ConcurrentHashMap.Node<K,V>(ph, pk, pv, hn);
}
setTabAt(nextTab, i, ln);
setTabAt(nextTab, i + n, hn);
//install the fwd node over the old bin so later operations, put, get and so on, are routed to the new array
setTabAt(tab, i, fwd);
advance = true;
Note 1
Suppose position 1 of the array holds 7 nodes (the original post shows a figure here: green nodes keep the same index in the new array, blue nodes move). The for loop's only job is to locate lastRun, the start of the longest same-destination stretch at the tail of the list, because that whole tail segment can be reused as-is without copying any nodes.
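The lastRun split can be simulated outside the map with a stripped-down Node, a sketch rather than the real inner class:

```java
public class LastRunDemo {
    static class Node {
        final int hash;
        Node next;
        Node(int hash, Node next) { this.hash = hash; this.next = next; }
    }

    // split bin f by bit n, mirroring the fh >= 0 branch of transfer():
    // returns { lowListHead, highListHead }
    static Node[] split(Node f, int n) {
        int runBit = f.hash & n;
        Node lastRun = f;
        // find the first node of the longest trailing run going to one destination
        for (Node p = f.next; p != null; p = p.next) {
            int b = p.hash & n;
            if (b != runBit) { runBit = b; lastRun = p; }
        }
        Node ln, hn;
        if (runBit == 0) { ln = lastRun; hn = null; }
        else { hn = lastRun; ln = null; }
        // nodes before lastRun are copied, prepended onto the matching list
        for (Node p = f; p != lastRun; p = p.next) {
            if ((p.hash & n) == 0) ln = new Node(p.hash, ln);
            else hn = new Node(p.hash, hn);
        }
        return new Node[] { ln, hn };
    }

    static int length(Node h) {
        int c = 0;
        for (; h != null; h = h.next) c++;
        return c;
    }

    // convenience: build a list from hashes, split it, return both lengths
    static int[] splitLengths(int[] hashes, int n) {
        Node f = null;
        for (int i = hashes.length - 1; i >= 0; i--) f = new Node(hashes[i], f);
        Node[] parts = split(f, n);
        return new int[] { length(parts[0]), length(parts[1]) };
    }

    public static void main(String[] args) {
        // seven nodes in old bin 13 (n = 16): hash 13 stays low, hash 29 moves high
        int[] lens = splitLengths(new int[] { 13, 29, 29, 13, 13, 29, 29 }, 16);
        System.out.println(lens[0] + " nodes stay at i, " + lens[1] + " move to i + n");
    }
}
```

Note that the two trailing 29-nodes form the lastRun segment, so they are reused directly; only the five nodes before lastRun are copied.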
I'm not sure how well this reads or how easy it is to follow. Feel free to raise any issues. Thanks!