The rehash process of ConcurrentHashMap

        void rehash() {
            HashEntry<K,V>[] oldTable = table;
            int oldCapacity = oldTable.length;
            if (oldCapacity >= MAXIMUM_CAPACITY)
                return;

            /*
             * Reclassify nodes in each list to new Map.  Because we are
             * using power-of-two expansion, the elements from each bin
             * must either stay at same index, or move with a power of two
             * offset. We eliminate unnecessary node creation by catching
             * cases where old nodes can be reused because their next
             * fields won't change. Statistically, at the default
             * threshold, only about one-sixth of them need cloning when
             * a table doubles. The nodes they replace will be garbage
             * collectable as soon as they are no longer referenced by any
             * reader thread that may be in the midst of traversing table
             * right now.
             */

            HashEntry<K,V>[] newTable = HashEntry.newArray(oldCapacity<<1);
            threshold = (int)(newTable.length * loadFactor);
            int sizeMask = newTable.length - 1;
            for (int i = 0; i < oldCapacity ; i++) {
                // We need to guarantee that any existing reads of old Map can
                //  proceed. So we cannot yet null out each bin.
                HashEntry<K,V> e = oldTable[i];

                if (e != null) {
                    HashEntry<K,V> next = e.next;
                    int idx = e.hash & sizeMask;

                    //  Single node on list
                    if (next == null)
                        newTable[idx] = e;

                    else {
                        // Reuse trailing consecutive sequence at same slot
                        HashEntry<K,V> lastRun = e;     // as the JDK comment above says, the goal is to reuse the existing nodes on the list
                        int lastIdx = idx;
                        for (HashEntry<K,V> last = next;  // this loop finds the point in the list after which the new bucket index no longer changes
                             last != null;
                             last = last.next) {
                            int k = last.hash & sizeMask;
                            if (k != lastIdx) {          // track the head of the reusable tail of the list
                                lastIdx = k;
                                lastRun = last;
                            }
                        }
                        newTable[lastIdx] = lastRun;

                        // Clone all remaining nodes
                        for (HashEntry<K,V> p = e; p != lastRun; p = p.next) { // only the nodes before lastRun need to be rehashed
                            int k = p.hash & sizeMask;
                            HashEntry<K,V> n = newTable[k];
                            newTable[k] = new HashEntry<K,V>(p.key, p.hash,   // every non-reusable node is newly allocated,
                                                             n, p.value);     // so the original list structure is never modified
                        }
                    }
                }
            }
            table = newTable;
        }
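
To see why a whole tail of the old chain can be moved as-is, note that with power-of-two expansion each node's new index is decided by a single extra hash bit: the node either stays at its old index or moves by exactly oldCapacity. The following minimal sketch is my own illustration of that property, not part of the JDK source; the capacities and sample hashes are arbitrary.

    public class PowerOfTwoRehashDemo {
        public static void main(String[] args) {
            int oldCapacity = 16;
            int newCapacity = oldCapacity << 1;   // table doubles: 32
            int oldMask = oldCapacity - 1;        // 0x0F
            int newMask = newCapacity - 1;        // 0x1F

            int[] sampleHashes = { 5, 21, 37, 100, 116 };
            for (int h : sampleHashes) {
                int oldIdx = h & oldMask;
                int newIdx = h & newMask;
                // newIdx is either oldIdx (the extra bit is 0)
                // or oldIdx + oldCapacity (the extra bit is 1)
                System.out.printf("hash=%d oldIdx=%d newIdx=%d%n", h, oldIdx, newIdx);
            }
        }
    }

Because only that one extra bit matters, a run of consecutive nodes that happen to share the bit all land in the same new bucket, which is exactly what the lastRun optimization exploits.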

I recently read the analysis of ConcurrentHashMap at http://www.ibm.com/developerworks/cn/java/java-lo-concurrenthashmap/ and also went through the source myself; it still left me somewhat confused, so here is a brief summary. The main idea of the rehash process is to find the "reuse point", i.e. the red node in the figure; the nodes inside the dashed box in the figure form the reusable part of the list.
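
To make the "reuse point" idea concrete, here is a simplified, self-contained sketch of the lastRun scan on a toy singly-linked list. Node is a hypothetical class used purely for illustration, not the JDK's HashEntry, and the hashes are chosen by hand.

    public class LastRunDemo {
        static class Node {
            final int hash;
            final Node next;
            Node(int hash, Node next) { this.hash = hash; this.next = next; }
        }

        // Returns the head of the trailing run whose nodes all map to the same
        // new index; those nodes could be reused without cloning.
        static Node findLastRun(Node head, int sizeMask) {
            Node lastRun = head;
            int lastIdx = head.hash & sizeMask;
            for (Node p = head.next; p != null; p = p.next) {
                int k = p.hash & sizeMask;
                if (k != lastIdx) {        // index changes: start a new candidate run
                    lastIdx = k;
                    lastRun = p;
                }
            }
            return lastRun;
        }

        public static void main(String[] args) {
            // Old bucket chain with hashes 5 -> 21 -> 37 -> 53; new sizeMask = 31
            Node chain = new Node(5, new Node(21, new Node(37, new Node(53, null))));
            Node lastRun = findLastRun(chain, 31);
            // 37 & 31 = 5 but 53 & 31 = 21, so the reusable run starts at hash 53
            System.out.println("lastRun starts at hash " + lastRun.hash);
        }
    }

In this toy chain the reusable run is only the final node; in practice, the JDK comment above estimates that at the default load factor only about one-sixth of the nodes need cloning when the table doubles.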
