Java Basics -- Java Core Packages: Map

See:
- https://blog.csdn.net/ZJQ990909/article/details/126965624

Map

Basics

Sign-magnitude (原码), ones' complement (反码), and two's complement (补码)

The two's complement of a positive number is identical to its sign-magnitude form; negative numbers are stored internally in two's complement.

Bitwise operations

Java language operators

See:
- Java language operators

Java bit-shift operators <<, >> and >>>

  • << left shift: regardless of sign, the low bits are filled with 0;
  • >> (arithmetic) right shift: for a positive number the high bits are filled with 0, for a negative number with 1;
  • >>> unsigned (logical) right shift: regardless of sign, the high bits are filled with 0;

About >>

Shifting a binary number right by N bits works as follows:
all bits are shifted right by N positions, and
if the number is non-negative, the leftmost N bits are filled with 0;
if the number is negative, the leftmost N bits are filled with 1.
In other words, a positive number gets N zeros prepended after the shift, and a negative number gets N ones.

Examples (8-bit values):
0000 0010 >> 1 = 0000 0001
0000 1010 >> 2 = 0000 0010
1000 0010 >> 1 = 1100 0001
1000 1010 >> 3 = 1111 0001

In the examples above the positive cases are easy; why do the negative ones get N ones filled in? The explanation:
negative numbers are stored internally in two's complement, and
1000 0010 is a negative number whose sign-magnitude form is 1111 1110; that is, it represents -126, not -2.
To verify that 1000 0010 shifted right by 1 is 1100 0001:
the sign-magnitude form of 1000 0010 is 1111 1110; shifting that right by 1 (keeping the sign bit) gives 1011 1111, i.e. -63, whose two's complement is 1100 0001.
So 1000 0010 >> 1 is indeed 1100 0001. In other words:

when shifting a sign-magnitude value (a positive number), the vacated bits are filled with 0;

when shifting a two's-complement value (a negative number), the vacated bits are filled with the sign bit; this also holds for positive numbers, whose two's complement is identical to their sign-magnitude form.

Hence a negative number shifted right by N gets N ones filled in on the left.
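
A minimal sketch to check the rules above (the class name and values are illustrative):

public class ShiftDemo {
    public static void main(String[] args) {
        int a = 0b0000_1010;  // 10
        int b = -126;         // stored as two's complement ...1111 1000 0010
        System.out.println(a << 1);   // 20: low bits filled with 0
        System.out.println(a >> 2);   // 2: positive, high bits filled with 0
        System.out.println(b >> 1);   // -63: negative, high bits filled with 1
        System.out.println(b >>> 1);  // 2147483585: unsigned shift fills with 0
        System.out.println(Integer.toBinaryString(b >> 1));   // ends in 1100 0001, high bits all 1
        System.out.println(Integer.toBinaryString(b >>> 1));  // ends in 1100 0001, high bits 0111...
    }
}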

Linked list

Red-black tree

HashMap

JDK 1.7


public class HashMap<K,V>
    extends AbstractMap<K,V>
    implements Map<K,V>, Cloneable, Serializable
{

    /**
     * The main storage of HashMap: a hash table built from an array of Entry
     * chains (Entry is a linked-list node; array + linked lists = hash table).
     * The array capacity is variable and must be a power of two (see the
     * interview question below: why must the table capacity be a power of two).
     *
     * Initial value = DEFAULT_INITIAL_CAPACITY
     * Maximum value = MAXIMUM_CAPACITY
     */
    transient Entry<K,V>[] table;
    
    /**
     * Default initial capacity of the table array: 16
     */
    static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; // aka 16

    /**
     * Maximum capacity of the table array
     */
    static final int MAXIMUM_CAPACITY = 1 << 30;

    /**
     * Number of key-value mappings currently stored in this HashMap.
     */
    transient int size;
    
	/**
	 * Resize threshold of the table array.
	 *
	 * Formula: threshold = table capacity * loadFactor
	 * Initial value = DEFAULT_INITIAL_CAPACITY * DEFAULT_LOAD_FACTOR = 16 * 0.75 = 12
	 *
	 * The table array is resizable. Before an insertion (put, putAll, ...), if
	 * the number of stored elements (size) has reached threshold and
	 * table[bucketIndex] is non-null (bucketIndex being the table index computed
	 * for the new entry), resize(newCapacity: 2 * table.length) is called
	 * (call chain: put -> addEntry -> resize), so each resize doubles the table.
	 * Afterwards, threshold is recomputed from the new capacity with the same
	 * formula.
	 */
    int threshold;

    /**
     * Load factor (resize factor).
     * Initial value = DEFAULT_LOAD_FACTOR
     */
    final float loadFactor;
    
    /**
     * Default load factor: 0.75
     */
    static final float DEFAULT_LOAD_FACTOR = 0.75f;

	/**
	 * Element type of the table array. Entry is a linked-list node holding the
	 * key, the value, the key's hash, and a pointer to the next Entry.
	 */
    static class Entry<K,V> implements Map.Entry<K,V> {
        final K key;
        V value;
        Entry<K,V> next;
        int hash;
    }
    
    public HashMap() {
        this(DEFAULT_INITIAL_CAPACITY, DEFAULT_LOAD_FACTOR);
    }


    public HashMap(int initialCapacity) {
        this(initialCapacity, DEFAULT_LOAD_FACTOR);
    }
    
    public HashMap(int initialCapacity, float loadFactor) {
        if (initialCapacity < 0)
            throw new IllegalArgumentException("Illegal initial capacity: " +
                                               initialCapacity);
        if (initialCapacity > MAXIMUM_CAPACITY)
            initialCapacity = MAXIMUM_CAPACITY;
        if (loadFactor <= 0 || Float.isNaN(loadFactor))
            throw new IllegalArgumentException("Illegal load factor: " +
                                               loadFactor);

        this.loadFactor = loadFactor;
        threshold = initialCapacity;
        inflateTable(threshold);
    }
	
	/**
	 * Associates the specified value with the specified key; returns the
	 * previous value, or null if there was none.
	 */
    public V put(K key, V value) {
    	// On the first put, initialize the table: determine the array capacity,
    	// the threshold, and so on.
        if (table == EMPTY_TABLE) {
            inflateTable(threshold);
        }
        // HashMap allows null keys and null values; a null key is stored at table[0].
        // ConcurrentHashMap allows neither null keys nor null values.
        if (key == null)
            return putForNullKey(value);


        /*
         * Compute the key's hash; see hash().
         * 1. What is a hash?
         * Input of arbitrary length mapped to output of fixed length.
         *
         * 2. What is a hash collision?
         * Two different keys whose computed positions in the table array are
         * the same.
         *
         * 3. How does HashMap store colliding elements?
         * It links the colliding elements into a chain (whose elements are
         * Entry nodes) stored at the shared table index.
         *
         * 4. What problem do collisions cause?
         * Linked-list lookup is slow, so the more collisions occur, the slower
         * HashMap lookups (get) become. Collisions should therefore be kept
         * rare and elements spread as evenly as possible across the table to
         * keep lookups fast. The hash() implementation minimizes collisions.
         *
         * 5. How does hash() minimize collisions?
         * It first rehashes key.hashCode() with XOR and then perturbs the
         * result, increasing its dispersion so that keys spread evenly over
         * the HashMap's table array, improving lookup performance.
         */
        int hash = hash(key);
        
        //Compute the key's index in the table array from its hash value.
        //Internally this is h & (length-1), equivalent to h % length but
        //faster, being a single bitwise AND.
        int i = indexFor(hash, table.length);
        
        //If the table may already contain this key, traverse the chain at
        //table[i]; if an equal key exists, replace its value and return the
        //value it replaced.
        //If the key is absent, table[i] is null and the loop is skipped.
        for (Entry<K,V> e = table[i]; e != null; e = e.next) {
            Object k;
            /*
             * On a hash collision (e.hash == hash):
             *  1. First test whether the keys are the same object; if so, the
             *     map already contains the entry being put, and equals() need
             *     not be called.
             *  2. Otherwise compare with equals(); if equal, the map likewise
             *     already contains the entry being put.
             *
             * If the entry is judged to exist, replace its value and return
             * the old value.
             */
            if (e.hash == hash && ((k = e.key) == key || key.equals(k))) {
            	// Reaching this branch means the entry already exists:
            	// replace the value and return the old one
                V oldValue = e.value;
                e.value = value;
                e.recordAccess(this);
                return oldValue;
            }
        }

		//No such key in the table: add the new entry
        modCount++;
        addEntry(hash, key, value, i);
        return null;
    }

    final int hash(Object k) {
        int h = hashSeed;
        if (0 != h && k instanceof String) {
            return sun.misc.Hashing.stringHash32((String) k);
        }
		//Rehash key.hashCode() with bitwise XOR
        h ^= k.hashCode();

        // This function ensures that hashCodes that differ only by
        // constant multiples at each bit position have a bounded
        // number of collisions (approximately 8 at default load factor).
        /* Then perturb the result (four shift/XOR steps) to reduce collisions
         * (using key.hashCode() directly easily gives two different keys the
         * same hash). The rehash plus the perturbation function increase the
         * dispersion, spreading keys evenly over the HashMap's table array
         */
        h ^= (h >>> 20) ^ (h >>> 12);
        return h ^ (h >>> 7) ^ (h >>> 4);
    }

	//Compute the element's index in the table array from the key's hash value
	static int indexFor(int h, int length) {
        // assert Integer.bitCount(length) == 1 : "length must be a non-zero power of 2";
        //h & (length-1) is equivalent to h % length; the bitwise AND is faster.
        return h & (length-1);
    }
    
	/**
	 * Add a new element to the HashMap.
	 */
    void addEntry(int hash, K key, V value, int bucketIndex) {
    	/*
    	 * 1. Resize condition: when the HashMap's element count (size) has
    	 *    reached threshold and table[bucketIndex] is non-null, resize is
    	 *    called.
    	 *
    	 * 2. Inside resize(): a new table array is created at the computed
    	 *    capacity, and every element of the old table is copied into it.
    	 */
        if ((size >= threshold) && (null != table[bucketIndex])) {
            // double the table capacity
            resize(2 * table.length);
            // After resizing, recompute the hash and the table index
            // bucketIndex for the entry being inserted.
            // HashMap allows null keys and values; a null key goes to table[0].
            hash = (null != key) ? hash(key) : 0;
            bucketIndex = indexFor(hash, table.length);
        }
        // With the (pre- or post-resize) bucket index computed, create an
		// Entry holding hash, key and value, insert it at the head of the
		// chain at table[bucketIndex] (head insertion),
		// and increment size
        createEntry(hash, key, value, bucketIndex);
    }

	/**
	 * Resize the HashMap.
	 * Inside the resize:
	 * 1. Create a new table array of the given size newCapacity.
	 * 2. Copy every element of the old table into the new one; see transfer().
	 * 3. Recompute the threshold for the next resize.
	 */
    void resize(int newCapacity) {
        Entry[] oldTable = table;
        int oldCapacity = oldTable.length;
        if (oldCapacity == MAXIMUM_CAPACITY) {
            threshold = Integer.MAX_VALUE;
            return;
        }

        Entry[] newTable = new Entry[newCapacity];
        transfer(newTable, initHashSeedAsNeeded(newCapacity));
        table = newTable;
        threshold = (int)Math.min(newCapacity * loadFactor, MAXIMUM_CAPACITY + 1);
    }

	
	/**
	 * Create an Entry holding hash, key and value, insert it at the head of
	 * the chain at table[bucketIndex] (head insertion), and increment size.
	 */
    void createEntry(int hash, K key, V value, int bucketIndex) {
        Entry<K,V> e = table[bucketIndex];
        table[bucketIndex] = new Entry<>(hash, key, value, e);
        size++;
    }
}
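
As a quick check of the indexFor equivalence, a minimal sketch (the class name is illustrative). Note that for a negative h, Java's % operator would return a negative remainder, which is one more reason the mask form is used:

public class IndexForDemo {
    static int indexFor(int h, int length) { return h & (length - 1); }

    public static void main(String[] args) {
        int length = 16; // must be a power of two
        for (int h : new int[]{5, 21, 37, -7}) {
            // for power-of-two lengths the mask equals the non-negative remainder
            System.out.printf("h=%d -> index=%d, floorMod=%d%n",
                    h, indexFor(h, length), Math.floorMod(h, length));
        }
    }
}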


JDK 1.8+ (code taken from JDK 11)

Differences from JDK 1.7
  1. Starting with JDK 1.8, the structure changed from an Entry array with singly linked buckets to a Node array whose buckets are singly linked lists or red-black trees; a bucket converts between list and tree once the list length and array capacity meet certain conditions. See the interview question below: conditions for converting between a linked list and a red-black tree in JDK 1.8.
Code and comments
public class HashMap<K,V> extends AbstractMap<K,V>
    implements Map<K,V>, Cloneable, Serializable {
    
    /**
     * 1. Node<K,V>[] table is the main storage of HashMap: an array whose
     *    elements are linked lists or red-black trees.
     * 	  a. The element type is Node; a bucket may be a linked list (whose
     *       elements are Node itself) or a red-black tree (whose elements are
     *       Node's subclass TreeNode), and the two convert into each other
     *       once certain conditions are met. See the interview question below:
     *       conditions for converting between a linked list and a red-black
     *       tree in JDK 1.8.
     *    b. Node holds the key, the value, the key's hash, and a pointer to
     *       the next Node.
     *    c. The tree element type TreeNode also defines TreeNode<K,V> prev, so
     *       a TreeNode is part of a red-black tree and of a doubly linked list
     *       at the same time.
     *    d. A table index is also called a slot; the data stored there is
     *       called a bucket or bin.
     *
     * 2. The table capacity is variable and must be a power of two (see the
     *    interview question below: why must the table capacity be a power of two).
     *
     * 3. Values of the table field:
     *    initial value = null; it is lazily initialized by resize() on the
     *    first put
     *    maximum capacity = MAXIMUM_CAPACITY
     */
    //transient Entry<K,V>[] table; // JDK 1.7
    transient Node<K,V>[] table;
    
    /**
     * Default capacity of the table array: 16
     */
    static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; // aka 16

    /**
     * Maximum capacity of the table array
     */
    static final int MAXIMUM_CAPACITY = 1 << 30;

    /**
     * Number of key-value mappings currently stored in this HashMap.
     */
    transient int size;
    
	/**
	 * Resize threshold of the table array.
	 *
	 * Formula: threshold = table capacity * loadFactor
	 * Initial value = DEFAULT_INITIAL_CAPACITY * DEFAULT_LOAD_FACTOR = 16 * 0.75 = 12
	 *
	 * The table array is resizable. During an insertion (put, putAll, ...),
	 * once the number of stored elements (size) exceeds threshold, resize()
	 * is called (call chain: put -> putVal -> resize) and the table capacity
	 * is doubled. Afterwards, threshold is recomputed from the new capacity
	 * with the same formula.
	 */
    int threshold;

    /**
     * Load factor (resize factor).
     * Initial value = DEFAULT_LOAD_FACTOR
     */
    final float loadFactor;
    
    /**
     * Default load factor: 0.75
     */
    static final float DEFAULT_LOAD_FACTOR = 0.75f;

    /**
     * The bin count threshold for using a tree rather than list for a
     * bin.  Bins are converted to trees when adding an element to a
     * bin with at least this many nodes. The value must be greater
     * than 2 and should be at least 8 to mesh with assumptions in
     * tree removal about conversion back to plain bins upon
     * shrinkage.
     *
     * After an element is put into a list bucket, the list is converted into
     * a red-black tree only when both of the following hold:
     * 1. after the put, the list at the current table slot is longer than
     *    TREEIFY_THRESHOLD:
     *      putVal:
     * 		if (binCount >= TREEIFY_THRESHOLD - 1) // here binCount = list length - 2
     * 			treeifyBin(tab, hash);
     * 2. and the table capacity is >= MIN_TREEIFY_CAPACITY:
     *      treeifyBin:
     * 		if (tab == null || (n = tab.length) < MIN_TREEIFY_CAPACITY)
     * 			resize();
     *
     * Only when both conditions hold is the list converted into a red-black
     * tree. If only the first holds, resize() is called to grow the table instead.
     */
    static final int TREEIFY_THRESHOLD = 8;
    
    /**
     * See the comment on TREEIFY_THRESHOLD.
     * 
     * The smallest table capacity for which bins may be treeified.
     * (Otherwise the table is resized if too many nodes in a bin.)
     * Should be at least 4 * TREEIFY_THRESHOLD to avoid conflicts
     * between resizing and treeification thresholds.
     */
    static final int MIN_TREEIFY_CAPACITY = 64;

    /**
     * Threshold for converting a red-black tree back into a linked list.
     *
     * When the number of elements in a bin drops to <= this threshold, the
     * tree is untreeified.
     * 
     * The bin count threshold for untreeifying a (split) bin during a
     * resize operation. Should be less than TREEIFY_THRESHOLD, and at
     * most 6 to mesh with shrinkage detection under removal.
     */
    static final int UNTREEIFY_THRESHOLD = 6;
    

	/**
	 * Element type of the table array. A Node is a singly-linked-list node;
	 * its subclass TreeNode is a red-black tree node.
	 * Node holds the key, the value, the key's hash, and a pointer to the
	 * next Node.
	 * A TreeNode additionally stores the previous node in prev, so it is part
	 * of a red-black tree and of a doubly linked list at the same time.
	 * Questions:
	 * 1. How are the next and prev links of the TreeNode doubly linked list built?
	 * 2. Under what conditions do lists and trees convert into each other?
	 */
	static class Node<K,V> implements Map.Entry<K,V> {
        final int hash;
        final K key;
        V value;
        Node<K,V> next;
	}
	
	/**
	 * TreeNode is the red-black tree element type (a subclass of Node via
	 * LinkedHashMap.Entry). Like Node it holds key, value, hash and next;
	 * prev stores the previous node, so a TreeNode participates in both the
	 * tree and a doubly linked list.
	 *
	 * Question: how are the next and prev links of the TreeNode doubly linked
	 * list built?
	 * Answer: before a list is converted into a red-black tree, it is first
	 * turned into a doubly linked list and only then into a tree. The doubly
	 * linked list makes it possible to traverse the elements backwards while
	 * working with the tree.
	 */
    static final class TreeNode<K,V> extends LinkedHashMap.Entry<K,V> {
        TreeNode<K,V> parent;  // red-black tree links
        TreeNode<K,V> left;
        TreeNode<K,V> right;
        TreeNode<K,V> prev;    // needed to unlink next upon deletion
        boolean red;


        /**
         * Splits one tree into two; if a resulting piece is too small and
         * meets the untreeify condition, it is converted back into a list.
         * See the comment above the call to this method in resize() below.
         *
         * Steps:
         * 1. Split the tree into two doubly linked lists of TreeNode:
         * 	loHead and loTail point to the list for the low position L1,
         * 	hiHead and hiTail point to the list for the high position L2.
         * 	The for loop below builds the two sub-lists (not sub-trees),
         * 	which are then placed at positions L1 and L2 of the new array.
         *
         * 2. Check each resulting list's length and convert accordingly (both
         *    lists are handled identically):
         *   	a. if the length is <= UNTREEIFY_THRESHOLD, untreeify: the
         *   	   untreeify call simply changes the element type from TreeNode
         *   	   back to Node;
         *   	b. otherwise build a new red-black tree from the list's elements.
         *
         * 3. Store the resulting structure (list or tree) at position L1 or
         *    L2 of the new array.
         *
         * Splits nodes in a tree bin into lower and upper tree bins,
         * or untreeifies if now too small. Called only from resize;
         * see above discussion about split bits and indices.
         *
         * @param map the map
         * @param tab the table for recording bin heads
         * @param index the index of the table being split
         * @param bit the bit of hash to split on
         */
        final void split(HashMap<K,V> map, Node<K,V>[] tab, int index, int bit) {
            TreeNode<K,V> b = this;
            // Relink into lo and hi lists, preserving order
            TreeNode<K,V> loHead = null, loTail = null;
            TreeNode<K,V> hiHead = null, hiTail = null;
            int lc = 0, hc = 0;
            for (TreeNode<K,V> e = b, next; e != null; e = next) {
                next = (TreeNode<K,V>)e.next;
                e.next = null;
                if ((e.hash & bit) == 0) {
                    if ((e.prev = loTail) == null)
                        loHead = e;
                    else
                        loTail.next = e;
                    loTail = e;
                    ++lc;
                }
                else {
                    if ((e.prev = hiTail) == null)
                        hiHead = e;
                    else
                        hiTail.next = e;
                    hiTail = e;
                    ++hc;
                }
            }

            if (loHead != null) {
                if (lc <= UNTREEIFY_THRESHOLD)
                	//tree back to list
                    tab[index] = loHead.untreeify(map);
                else {
                    tab[index] = loHead;
                   
                   /**
                     * If the high list is null, everything stayed in the low
                     * list: the loop above did not actually split the original
                     * chain, which is still exactly the pre-resize red-black
                     * tree. Just reattach that tree here; there is no need to
                     * rebuild it (it already is a red-black tree).
                     *
                     * If the high list is non-null, the original chain was
                     * split, and the resulting doubly linked list must be
                     * converted into a red-black tree.
                     */
                    if (hiHead != null) // (else is already treeified)
                    	//rebuild a red-black tree from the list
                        loHead.treeify(tab);
                }
            }
            if (hiHead != null) {
                if (hc <= UNTREEIFY_THRESHOLD)
                    tab[index + bit] = hiHead.untreeify(map);
                else {
                    tab[index + bit] = hiHead;
                    if (loHead != null)
                        hiHead.treeify(tab);
                }
            }
        }
       
        /**
         * Red-black tree insertion rebalancing.
         * Reference: https://www.yuque.com/renyong-jmovm/ufz328/erw9ri (password: ggmc)
         */
        static <K,V> TreeNode<K,V> balanceInsertion(TreeNode<K,V> root,
                                                    TreeNode<K,V> x) {
            //the node being inserted is red by default
            x.red = true;
            // xp: parent; xpp: grandparent; xppl/xppr: grandparent's left/right child
            for (TreeNode<K,V> xp, xpp, xppl, xppr;;) {
            	// If x has no parent, it is the first node, hence the root; the root is black
                if ((xp = x.parent) == null) {
                    x.red = false;
                    return x;
                }
		        // If the parent is black, or x has no grandparent (x is then on
		        // the second level and its parent is the root), nothing needs
		        // to be done (no rotation, no recoloring)
                else if (!xp.red || (xpp = xp.parent) == null)
                    return root;
                    
                // Reaching here, x's parent is red.
        		// a. x's parent is the grandparent's left child
                if (xp == (xppl = xpp.left)) {
                	 // the grandparent's right child, i.e. x's uncle, is non-null and red
                    if ((xppr = xpp.right) != null && xppr.red) {// red uncle
                    	// Parent and uncle are both red: just recolor (parent
                    	// and uncle black, grandparent red) and continue the
                    	// loop with x replaced by the grandparent
                        xppr.red = false;
                        xp.red = false;
                        xpp.red = true;
                        x = xpp;
                    }
                    else {// the uncle is null or black
                        if (x == xp.right) {// x is its parent's right child
                        	// Rotate left first and continue with x = xp; the
                        	// rotation may produce a new root
                        	// (black uncle; x a right child of a left-child parent)
                            root = rotateLeft(root, x = xp);
                            // after replacing x, refresh xp and xpp
                            xpp = (xp = x.parent) == null ? null : xp.parent;
                        }
                         // If x was a left child to begin with, or after the
                         // left rotation above: recolor and rotate right
                        if (xp != null) {
                            xp.red = false;
                            if (xpp != null) {
                                xpp.red = true;
                                root = rotateRight(root, xpp);
                            }
                        }
                    }
                }
                // b. x's parent is the grandparent's right child
                else {
                    if (xppl != null && xppl.red) {
                        xppl.red = false;
                        xp.red = false;
                        xpp.red = true;
                        x = xpp;
                    }
                    else {
                        if (x == xp.left) {
                            root = rotateRight(root, x = xp);
                            xpp = (xp = x.parent) == null ? null : xp.parent;
                        }
                        if (xp != null) {
                            xp.red = false;
                            if (xpp != null) {
                                xpp.red = true;
                                root = rotateLeft(root, xpp);
                            }
                        }
                    }
                }
            }
        }
        
        /**
         * Red-black tree left rotation.
         * Reference: https://www.yuque.com/renyong-jmovm/ufz328/erw9ri (password: ggmc)
         */
        static <K,V> TreeNode<K,V> rotateLeft(TreeNode<K,V> root,
                                              TreeNode<K,V> p) {
		    // pp is p's parent node
		    // p is the node being rotated
		    // r is p's right child
		    // rl is r's left child
            TreeNode<K,V> r, pp, rl;
            if (p != null && (r = p.right) != null) {
        		// if rl is non-null, set p.right = rl
                if ((rl = p.right = r.left) != null)
                    rl.parent = p;
        		// if p had no parent, r becomes the root after the rotation and is colored black
                if ((pp = r.parent = p.parent) == null)
                    (root = r).red = false;
        		// if the rotated node was a left child
                else if (pp.left == p)
                    pp.left = r;
        		// if the rotated node was a right child
                else
                    pp.right = r;
                r.left = p;
                p.parent = r;
            }
            return root;
        }

        static <K,V> TreeNode<K,V> rotateRight(TreeNode<K,V> root,
                                               TreeNode<K,V> p) {
            TreeNode<K,V> l, pp, lr;
            if (p != null && (l = p.left) != null) {
                if ((lr = p.left = l.right) != null)
                    lr.parent = p;
                if ((pp = l.parent = p.parent) == null)
                    (root = l).red = false;
                else if (pp.right == p)
                    pp.right = l;
                else
                    pp.left = l;
                l.right = p;
                p.parent = l;
            }
            return root;
        }

        /**
         * Tree version of putVal.
         */
        final TreeNode<K,V> putTreeVal(HashMap<K,V> map, Node<K,V>[] tab,
                                       int h, K k, V v) {
            Class<?> kc = null;
            boolean searched = false;
            TreeNode<K,V> root = (parent != null) ? root() : this;
            for (TreeNode<K,V> p = root;;) {
                int dir, ph; K pk;
                if ((ph = p.hash) > h)
                    dir = -1;
                else if (ph < h)
                    dir = 1;
                else if ((pk = p.key) == k || (k != null && k.equals(pk)))
                    return p;
                else if ((kc == null &&
                          (kc = comparableClassFor(k)) == null) ||
                         (dir = compareComparables(kc, k, pk)) == 0) {
                    if (!searched) {
                        TreeNode<K,V> q, ch;
                        searched = true;
                        if (((ch = p.left) != null &&
                             (q = ch.find(h, k, kc)) != null) ||
                            ((ch = p.right) != null &&
                             (q = ch.find(h, k, kc)) != null))
                            return q;
                    }
                    dir = tieBreakOrder(k, pk);
                }

                TreeNode<K,V> xp = p;
                if ((p = (dir <= 0) ? p.left : p.right) == null) {
                    Node<K,V> xpn = xp.next;
                    TreeNode<K,V> x = map.newTreeNode(h, k, v, xpn);
                    if (dir <= 0)
                        xp.left = x;
                    else
                        xp.right = x;
                    xp.next = x;
                    x.parent = x.prev = xp;
                    if (xpn != null)
                        ((TreeNode<K,V>)xpn).prev = x;
                    moveRootToFront(tab, balanceInsertion(root, x));
                    return null;
                }
            }
        }
	}
	
	/*********************** Constructors ****************************/
    public HashMap() {
        this.loadFactor = DEFAULT_LOAD_FACTOR; // all other fields defaulted
    }
    public HashMap(int initialCapacity) {
        this(initialCapacity, DEFAULT_LOAD_FACTOR);
    }

	public HashMap(int initialCapacity, float loadFactor) {
        if (initialCapacity < 0)
            throw new IllegalArgumentException("Illegal initial capacity: " +
                                               initialCapacity);
        if (initialCapacity > MAXIMUM_CAPACITY)
            initialCapacity = MAXIMUM_CAPACITY;
        if (loadFactor <= 0 || Float.isNaN(loadFactor))
            throw new IllegalArgumentException("Illegal load factor: " +
                                               loadFactor);
        this.loadFactor = loadFactor;
        // tableSizeFor: round the requested capacity up to the next power of
        // two; the result becomes the eventual table capacity
        this.threshold = tableSizeFor(initialCapacity);
    }
	/*********************** put ****************************/
	
	public V put(K key, V value) {
        return putVal(hash(key), key, value, false, true);
    }

    /**
     * Compute the hash: XOR the high 16 bits of hashCode into the low 16
     * bits, so that the high bits take part in the scattering when the table
     * index is computed as h & (table.length - 1). This spreads elements more
     * evenly over the table and reduces collisions, which in turn shortens
     * collision chains and speeds up later lookups (put, get and remove all
     * involve a list search, which is O(N) and degrades sharply as chains grow).
     */
    static final int hash(Object key) {
        int h;
        return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }
    
    /**
     * Implements Map.put and related methods.
     *
     * @param hash hash for key
     * @param key the key
     * @param value the value to put
     * @param onlyIfAbsent if true, don't change existing value
     * @param evict if false, the table is in creation mode.
     * @return previous value, or null if none
     */
    final V putVal(int hash, K key, V value, boolean onlyIfAbsent,
                   boolean evict) {
        Node<K,V>[] tab; Node<K,V> p; int n, i;
        //A null table means this is the first write into this HashMap:
        //initialize lazily via resize()
        if ((tab = table) == null || (n = tab.length) == 0)
            n = (tab = resize()).length;//first put: lazy initialization via resize()
        //No hash collision: store a Node at the computed table slot.
        if ((p = tab[i = (n - 1) & hash]) == null)
        	//newNode: wrap the new mapping in a Node object
            tab[i] = newNode(hash, key, value, null);
        else { //hash collision
            Node<K,V> e; K k;
            /*
             * Check whether the head element of the slot equals the element
             * being put: on a hash collision (p.hash == hash), the two are the
             * same element if the keys are identical ((k = p.key) == key) or
             * equal via equals().
             * If so, the code below at "if (e != null) { // existing mapping
             * for key" replaces the value and returns the old one.
             */
            if (p.hash == hash &&
                ((k = p.key) == key || (key != null && key.equals(k))))
                e = p;
            /*
             * The bucket (bin) is a red-black tree: insert into the tree; see
             * putTreeVal for the algorithm.
             */
            else if (p instanceof TreeNode)
                e = ((TreeNode<K,V>)p).putTreeVal(this, tab, hash, key, value);
            /*
             * The bucket (bin) is a linked list: traverse it.
             * 1. If some list element equals the element being put (same test
             *    as for the slot's head element above), the code below at
             *    "if (e != null) { // existing mapping for key" replaces the
             *    value and returns the old one.
             *
             * 2. Otherwise walk to the tail and append the element there. Then
             *    check whether the list length has reached TREEIFY_THRESHOLD;
             *    if so, call treeifyBin, which either resizes the table (while
             *    the array capacity is < MIN_TREEIFY_CAPACITY, i.e. 64) or
             *    converts the list into a red-black tree (once the capacity is
             *    >= MIN_TREEIFY_CAPACITY, i.e. 64).
             */
            else {
            	// tail insertion: walk the list with for+if to the tail
                for (int binCount = 0; ; ++binCount) {
                    if ((e = p.next) == null) {//reached the tail of the list
                    	//append the new element at the tail
                        p.next = newNode(hash, key, value, null);
                        /*
                         * After appending, check whether the list now exceeds
                         * TREEIFY_THRESHOLD. binCount starts at 0 and the line
                         * above appended the element, so binCount =
                         * list length - 2. Hence only once the list reaches 9
                         * elements does binCount (7) >= TREEIFY_THRESHOLD - 1 (7)
                         * hold, and only then is treeifyBin called.
                         */
                        if (binCount >= TREEIFY_THRESHOLD - 1) // -1 for 1st
                            /*
                             * The list is longer than TREEIFY_THRESHOLD: call
                             * treeifyBin, which further checks whether the table
                             * capacity has reached MIN_TREEIFY_CAPACITY (64):
                             *    if so, the list is converted into a red-black tree;
                             *    otherwise the list is kept and resize() grows the table.
                             */
                            treeifyBin(tab, hash);
                        break;
                    }
                    //the element being put equals the current list element
                    if (e.hash == hash &&
                        ((k = e.key) == key || (key != null && key.equals(k))))
                        break;
                    p = e;
                }
            }
            //the key already exists: replace the value and return the old one
            if (e != null) { // existing mapping for key
                V oldValue = e.value;
                if (!onlyIfAbsent || oldValue == null)
                    e.value = value;
                afterNodeAccess(e);
                return oldValue;
            }
        }
        
        ++modCount;
        //the element count exceeded the resize threshold: call resize
        if (++size > threshold)
            resize();
        afterNodeInsertion(evict);
        return null;
    }
    
    // Create a regular (non-tree) node
    Node<K,V> newNode(int hash, K key, V value, Node<K,V> next) {
        return new Node<>(hash, key, value, next);
    }
    
    /**
     * Replaces all linked nodes in bin at index for given hash unless
     * table is too small, in which case resizes instead.
     *
     * Called when a list has grown past TREEIFY_THRESHOLD (8).
     * This method further checks whether the table capacity has reached
     * MIN_TREEIFY_CAPACITY:
     *    if so, the list is converted into a red-black tree;
     *    otherwise the list is kept and resize() grows the table.
     */
    final void treeifyBin(Node<K,V>[] tab, int hash) {
        int n, index; Node<K,V> e;
        // The list at this hash's table slot has reached TREEIFY_THRESHOLD (8),
        // but the array is still shorter than MIN_TREEIFY_CAPACITY (64): grow
        // the table first. The resize may split this list over slots n and
        // n + table.length, possibly bringing each resulting list back below
        // TREEIFY_THRESHOLD (8).
        if (tab == null || (n = tab.length) < MIN_TREEIFY_CAPACITY)
            resize();
        // The table capacity has reached MIN_TREEIFY_CAPACITY (64): convert
        // the list into a red-black tree
        else if ((e = tab[index = (n - 1) & hash]) != null) {
        	// list-to-tree conversion
            TreeNode<K,V> hd = null, tl = null;
            // do-while: before treeifying, turn the singly linked list into a
            // doubly linked list of TreeNode
            do {
                TreeNode<K,V> p = replacementTreeNode(e, null);
                if (tl == null)
                    hd = p;
                else {
                    p.prev = tl;
                    tl.next = p;
                }
                tl = p;
            } while ((e = e.next) != null);
            
            if ((tab[index] = hd) != null)
            	//perform the actual list-to-tree conversion
                hd.treeify(tab);
        }
    }
    
    /**
     * Initializes or doubles table size.  If null, allocates in
     * accord with initial capacity target held in field threshold.
     * Otherwise, because we are using power-of-two expansion, the
     * elements from each bin must either stay at same index, or move
     * with a power of two offset in the new table.
     *
     * Performs the lazy initialization of the HashMap, or doubles the table
     * capacity.
     * @return the table
     */
     */
    final Node<K,V>[] resize() {
        Node<K,V>[] oldTab = table;
        int oldCap = (oldTab == null) ? 0 : oldTab.length;
        int oldThr = threshold;
        int newCap, newThr = 0;
        
        /*
         * oldCap > 0: the table already holds data and a resize was triggered
         * either because:
         * 	1. the treeify conditions were only partially met (capacity < 64), or
         *  2. the element count reached the resize threshold.
         *
         * This branch computes the new capacity and the new threshold.
         */
        if (oldCap > 0) {
        	//once the table has reached MAXIMUM_CAPACITY, it is never grown again
            if (oldCap >= MAXIMUM_CAPACITY) {
            	//setting threshold to Integer.MAX_VALUE ensures later inserts
            	//never reach the threshold again, so resize() is not re-entered
                threshold = Integer.MAX_VALUE;
                return oldTab;
            }
            
		    /**
		     * Compute the new capacity newCap and the new threshold newThr
		     * (both doubled, while keeping the capacity within MAXIMUM_CAPACITY)
		     */
            else if ((newCap = oldCap << 1) < MAXIMUM_CAPACITY &&
                     oldCap >= DEFAULT_INITIAL_CAPACITY)
                newThr = oldThr << 1; // double threshold
        }
	    /**
	     * capacity == 0 && threshold > 0:
	     * the HashMap was created with a non-default constructor (lazy loading)
	     */
        else if (oldThr > 0) // initial capacity was placed in threshold
            newCap = oldThr;
	    /**
	     * capacity == 0 && threshold == 0:
	     * the HashMap was created with the default constructor (lazy loading)
	     */
        else {  // zero initial threshold signifies using defaults
            newCap = DEFAULT_INITIAL_CAPACITY;
            newThr = (int)(DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY);
        }
        
        if (newThr == 0) {
            float ft = (float)newCap * loadFactor;
            newThr = (newCap < MAXIMUM_CAPACITY && ft < (float)MAXIMUM_CAPACITY ?
                      (int)ft : Integer.MAX_VALUE);
        }
      
        threshold = newThr;
        @SuppressWarnings({"rawtypes","unchecked"})
		//Create the array at the grown (or initial) capacity and assign it to
		//table (the old array stays reachable via oldTab).
        Node<K,V>[] newTab = (Node<K,V>[])new Node[newCap];
        table = newTab;
        /*
         * If the old table held data, migrate it into the new table;
         * otherwise (oldTab == null, lazy initialization) the copy step is
         * skipped
         */
        if (oldTab != null) {
            for (int j = 0; j < oldCap; ++j) {//iterate over the old table, migrating its elements to the new one
                Node<K,V> e;
                if ((e = oldTab[j]) != null) {//this slot holds data; empty slots are skipped
                    oldTab[j] = null; 
                    if (e.next == null) //a single element: move it to its slot in the new table
                        newTab[e.hash & (newCap - 1)] = e;
                    else if (e instanceof TreeNode)//the slot holds a red-black tree
                    	
                    	/**
                    	 * The slot holds a red-black tree.
                    	 *
                    	 * Per the interview question "After a resize, do
                    	 * elements that shared an array index stay at the same
                    	 * index? What is the distribution rule?", elements that
                    	 * shared an index before the resize may be spread over
                    	 * two positions of the new array:
                    	 *  1. Some keep their pre-resize index L1 (the low
                    	 *     position). For these, bit n of the hash (where
                    	 *     2^n = old capacity) is 0, i.e.
                    	 *     (e.hash & oldCap) == 0.
                    	 *  2. The others move to L2 = L1 + old capacity (the
                    	 *     high position). For these, bit n of the hash is 1,
                    	 *     i.e. (e.hash & oldCap) != 0.
                    	 *
                    	 * A resize may therefore split one tree into two; if a
                    	 * resulting tree is small enough to meet the untreeify
                    	 * condition, it is converted back into a list. See
                    	 * TreeNode.split.
                    	 */
                        ((TreeNode<K,V>)e).split(this, newTab, j, oldCap);
                    else { //preserve order
                    	/**
                    	 * The slot holds a linked list.
                    	 * Per the interview question "After a resize, do
                    	 * elements that shared an array index stay at the same
                    	 * index? What is the distribution rule?", elements that
                    	 * shared an index before the resize may end up at two
                    	 * different positions of the new array, forming two
                    	 * lists at different indices.
                    	 *
                    	 * loHead and loTail point to the list for the low
                    	 * position L1; hiHead and hiTail point to the list for
                    	 * the high position L2.
                    	 *
                    	 * The do {} while block below builds the two lists and
                    	 * then places them at L1 and L2 of the new array.
                    	 */
                        Node<K,V> loHead = null, loTail = null;
                        Node<K,V> hiHead = null, hiTail = null;
                        Node<K,V> next;
                        do {
                            next = e.next;
                            if ((e.hash & oldCap) == 0) {
                                if (loTail == null)
                                    loHead = e;
                                else
                                    loTail.next = e;
                                loTail = e;
                            }
                            else {
                                if (hiTail == null)
                                    hiHead = e;
                                else
                                    hiTail.next = e;
                                hiTail = e;
                            }
                        } while ((e = next) != null);
                        
                        if (loTail != null) {
                            loTail.next = null;
                            newTab[j] = loHead;
                        }
                        if (hiTail != null) {
                            hiTail.next = null;
                            newTab[j + oldCap] = hiHead;
                        }
                    }
                }
            }
        }
        return newTab;
    }

    
}

Interview questions

What is a hash algorithm?

It maps input of arbitrary size to output of fixed length.

The problem with hash algorithms

Collisions can occur.

Can a hash algorithm avoid collisions entirely?

No. Put 10 apples into 9 drawers and some drawer must hold two of them (the pigeonhole principle).

How should a hash algorithm be designed?
  1. It must be fast.
  2. Any difference between two inputs, however small, should yield different hash values.
  3. The results should be as dispersed as possible, to reduce collisions.
  4. The original input must not be recoverable from the hash value.
How does HashMap compute its hash values?

With a perturbation (spreading) function.

  • JDK 1.7 uses four perturbation steps:
    final int hash(Object k) {
        int h = hashSeed;
        if (0 != h && k instanceof String) {
            return sun.misc.Hashing.stringHash32((String) k);
        }

        h ^= k.hashCode();

        // This function ensures that hashCodes that differ only by
        // constant multiples at each bit position have a bounded
        // number of collisions (approximately 8 at default load factor).
        h ^= (h >>> 20) ^ (h >>> 12);
        return h ^ (h >>> 7) ^ (h >>> 4);
    }
  • JDK 1.8: XOR the high 16 bits of the key's hashCode into its low 16 bits (XOR: equal bits give 0, different bits give 1). Note that XOR-ing hashCode with itself shifted right by 16 leaves the high 16 bits of the hashCode unchanged.
    static final int hash(Object key) {
        int h;
        return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }
Why does HashMap perturb the hash value?
  1. The table uses a power-of-two mask: table.length - 1 is ANDed with the hash to obtain the element's index in the table (index = (n - 1) & hash). The AND discards the high bits of the hash (the part beyond the array capacity) and keeps only the low bits, whose decimal value becomes the index.

    For example, with table.length = 16 (which must be a power of two; 16 in binary is 10000), 16 - 1 in binary is 01111. ANDing 01111 with the hash clears everything above bit 4 (hashes are 32-bit ints; the cleared part is always divisible by table.length = 16) and keeps only the low part, whose binary range [0, 15] exactly matches the valid table indices.

    In effect, (n - 1) & hash is equivalent to hash % n, but the equivalence only holds when the table capacity n is a power of two.

  2. The problem: if only the low bits of hashCode feed the index computation, many hashCodes with equal low bits but different high bits land at the same index and collide. Possible causes:
    a. the key's own hashCode() implementation disperses poorly, inviting collisions;
    b. many hashCode() implementations disperse well overall yet share low bits, and using only those bits for the index invites collisions.

  3. The fix: let the high bits of hashCode also take part in the index computation (this step is the perturbation); see hash(). It reduces collisions.

    Concretely, the high 16 bits of hashCode are XORed into the low 16 bits (return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16); this is the perturbation function), and (n - 1) & hash then serves as the table index, keeping collisions to a minimum.
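
A minimal sketch of the effect (the class name and hash values are chosen for illustration): two hashCodes that agree in their low 16 bits collide without the spreading step but separate with it:

public class HashSpreadDemo {
    static int spread(int h) { return h ^ (h >>> 16); } // the JDK 8 perturbation

    public static void main(String[] args) {
        int n = 16; // table capacity, a power of two
        int h1 = 0x00010005, h2 = 0x00020005; // equal low bits, different high bits
        System.out.println("raw:    " + (h1 & (n - 1)) + ", " + (h2 & (n - 1))); // 5, 5
        System.out.println("spread: " + (spread(h1) & (n - 1)) + ", "
                + (spread(h2) & (n - 1)));                                       // 4, 7
    }
}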

What is the initial capacity of the HashMap array?

16

Why must the table capacity be a power of two?

Two reasons:
1. The index is computed not as h % table.length but with the faster bitwise form h & (table.length - 1), and the two are equivalent only when the capacity is a power of two.
2. A power-of-two capacity also spreads the results better than other sizes, reducing hash collisions.

What is the load factor for, and what is its default?

The load factor determines the resize threshold: threshold = capacity × load factor.
The default capacity is 16 and the default load factor 0.75, so the default threshold = 16 × 0.75 = 12.

Why is the load factor 0.75?
  • Too high (e.g. 1): space is used very efficiently, but collisions and chains become frequent, making lookups (which underlie get and put) slow. (A time problem.)

  • Too low (e.g. 0.5): collisions become rare and lookups fast, but space is wasted. (A space problem.)

  • 0.75 is a compromise that balances time and space reasonably well.

How does HashMap store its data?

JDK 1.7 and JDK 1.8 differ.

  • JDK 1.7: array + linked lists (both of element type Entry); Entry has key, value, hash and next fields.
  • JDK 1.8: array + linked lists + red-black trees; the element type is Node, and the tree node type is its subclass TreeNode.
Conditions for converting between a linked list and a red-black tree in JDK 1.8
  • List to tree: as puts accumulate, a list is converted into a red-black tree when both of the following hold:
    a. the list length reaches 8
    b. the array capacity reaches 64
  • Tree to list: as removes accumulate, once the treeify conditions no longer hold, the bucket reverts to a linked list. (To be verified.)
Is the HashMap hash table created when new HashMap() runs?

No. It is lazily initialized on the first put.
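
One way to observe the lazy initialization is reflection. A sketch (the class name is illustrative; on JDK 9+ it may require --add-opens java.base/java.util=ALL-UNNAMED):

import java.lang.reflect.Array;
import java.lang.reflect.Field;
import java.util.HashMap;

public class LazyInitDemo {
    public static void main(String[] args) throws Exception {
        HashMap<String, String> map = new HashMap<>();
        Field f = HashMap.class.getDeclaredField("table");
        f.setAccessible(true); // may throw without the --add-opens flag
        System.out.println("after new: " + f.get(map)); // null: no table yet
        map.put("k", "v");
        System.out.println("after put: length = " + Array.getLength(f.get(map))); // 16
    }
}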

After a resize, do elements that shared an array index stay at the same index? What is the distribution rule?

Not necessarily. Suppose the array capacity grows from 16 (2^4) to 32 (2^5).
A key's hash falls into one of two cases:

  1. Bit 5 (the 2^4 place) is 0: after the resize, the element's index in the new array is unchanged.

    For example, hash = 0010 0101:

    • at capacity 16, the computed index is binary 0101 = decimal 5
      hash:                0010 0101
      table.length-1 (15): 0000 1111
      bitwise AND:         0000 0101 = 5

    • after growing to 32, the computed index is still binary 0101 = decimal 5
      hash:                0010 0101
      table.length-1 (31): 0001 1111
      bitwise AND:         0000 0101 = 5

    In this case the element's index in the new array is unchanged.

  2. Bit 5 (the 2^4 place) is 1: after the resize, the element's new index minus its old index equals the old capacity.

    For example, hash = 0011 0101:

    • at capacity 16, the computed index is binary 0101 = decimal 5
      hash:                0011 0101
      table.length-1 (15): 0000 1111
      bitwise AND:         0000 0101 = 5

    • after growing to 32, the computed index is binary 10101 = decimal 21
      hash:                0011 0101
      table.length-1 (31): 0001 1111
      bitwise AND:         0001 0101 = 21

    The new index exceeds the old one by 1 0000, decimal 16: new index - old index = old capacity.

From this we can conclude:

  1. Elements that shared an array index before the resize may sit at different indices afterwards.
  2. After the resize these elements occupy either the same index or at most two distinct indices.
  3. When they do split over two indices, the distance between the two equals the old array capacity.

That is, if several elements sat at index L1 before the resize and at index L2 afterwards, then either L1 == L2, or L2 - L1 = the old table capacity; L1 is then called the low position and L2 the high position.
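
The rule can be checked with the two hash values used above (0x25 = 0010 0101 and 0x35 = 0011 0101); a minimal sketch (the class name is illustrative):

public class ResizeSplitDemo {
    public static void main(String[] args) {
        int oldCap = 16, newCap = 32;
        for (int h : new int[]{0x25, 0x35}) { // 0010 0101 and 0011 0101
            int oldIdx = h & (oldCap - 1);
            int newIdx = h & (newCap - 1);
            // (h & oldCap) == 0  <=>  the element keeps its index after the resize
            String rule = (h & oldCap) == 0 ? "stays" : "moves to old + " + oldCap;
            System.out.printf("hash=0x%02X: old=%d, new=%d (%s)%n", h, oldIdx, newIdx, rule);
        }
    }
}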

Why did JDK 1.8 change put from JDK 1.7's head insertion to tail insertion?

Because JDK 1.7's head-inserting put can produce a circular linked list during a concurrent resize.

JDK 1.7's put inserts new entries at the head of the chain. During a table resize, a chain may end up reversed in its slot of the new table (and entries that shared a slot before the resize may also be split across slots): a chain ordered 3->2->1 is stored in the new table's slot as 1->2->3. Under concurrency, interleaved transfers of the same chain can link its nodes into a cycle, leaving the transfer method spinning in an infinite loop.

See: 大厂面试必问的HashMap扩容死循环问题源码分析问题,20分钟讲清楚!-哔哩哔哩
The fix: since JDK 1.8, put uses tail insertion.

Is HashMap thread-safe? How can thread safety be achieved?

No, it is not.
See: HashMap 在 Java7 ,Java8 的线程安全问题

Ways to get a thread-safe map (see the sketch below):

  1. Map m = Collections.synchronizedMap(new HashMap());
  2. use ConcurrentHashMap
  3. use Hashtable
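
A minimal sketch of the three options (ConcurrentHashMap is usually preferred; the other two serialize every operation on a single lock):

import java.util.Collections;
import java.util.HashMap;
import java.util.Hashtable;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ThreadSafeMaps {
    public static void main(String[] args) {
        Map<String, String> m1 = Collections.synchronizedMap(new HashMap<>()); // wrapper lock
        Map<String, String> m2 = new ConcurrentHashMap<>();                    // finer-grained locking
        Map<String, String> m3 = new Hashtable<>();                            // legacy, fully synchronized
        m1.put("k", "v");
        m2.put("k", "v");
        m3.put("k", "v");
    }
}
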
Why is TREEIFY_THRESHOLD 8, and why is MIN_TREEIFY_CAPACITY 64?

Treeifying at TREEIFY_THRESHOLD = 8 follows from the properties of linked lists and red-black trees; at its core it is a time/space trade-off.

  1. A tree node occupies roughly twice the space of a plain node. While a list is short it takes little space and its lookups are about as fast; as the list grows, lookups slow down, and only then is the list abandoned for a red-black tree, trading space for time. So there is no point in starting with a tree: tree nodes only pay off once a bin holds enough nodes.

  2. Moreover, long chains are very rare: under an ideal random hashCode, bin sizes follow a Poisson distribution, and the probability of a bin reaching 8 elements is about 0.00000006, practically an impossible event. Starting with trees would make every bin pay double the space for nothing; the threshold is again a balancing strategy.

  3. The specific value 8 comes from this probabilistic cost comparison (see the sketch below).
    A red-black tree lookup costs O(log n): at length 8 that is log2(8) = 3 comparisons, while a list lookup averages O(N), i.e. 8/2 = 4 comparisons at length 8, so converting starts to pay off. At length 6 or less, the list averages 6/2 = 3 comparisons versus log2(6) ≈ 2.6: the list is still fast, and building and maintaining the tree would cost more than it saves.

In short: it is a space/time trade-off. Starting with a list preserves both performance and space; once a chain grows long enough to hurt performance, switching to a red-black tree trades space for time.
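
The arithmetic in point 3 can be tabulated with a small sketch (the class name is illustrative; tree cost is approximated by log2(n), average list cost by n/2):

public class TreeifyCost {
    public static void main(String[] args) {
        for (int n : new int[]{6, 7, 8}) {
            double listAvg = n / 2.0;                   // average list comparisons
            double treeAvg = Math.log(n) / Math.log(2); // ~tree comparisons, log2(n)
            System.out.printf("n=%d: list=%.1f, tree=%.1f%n", n, listAvg, treeAvg);
        }
    }
}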

Why is MIN_TREEIFY_CAPACITY 64?

Red-black tree put and get are both O(log N), while list lookup is O(N), so treeifying balances lookup and insertion performance.
While the array capacity is still below MIN_TREEIFY_CAPACITY (64), however, a resize can split one chain into two shorter chains attached at different indices of the new array; halving a chain improves lookups more cheaply than converting it to a red-black tree would.

Once the capacity exceeds 64, converting to a red-black tree yields the larger improvement; 64 is a reasonable cutoff between the two strategies.

Before converting a list into a red-black tree, why turn the singly linked list into a doubly linked list?

A list is first converted into a doubly linked list and only then into a red-black tree.
The doubly linked list makes it possible to traverse the elements backwards while the tree is being processed.
Open question: how exactly does the red-black tree make use of the doubly linked list?

Are HashMap elements ordered? How can ordered storage be achieved?

ConcurrentHashMap

JDK 1.7

  1. A ConcurrentHashMap contains an array of Segments.

  2. Each Segment holds a HashEntry array; the HashEntry class has the same structure as HashMap's Entry, and put ultimately stores elements in a HashEntry array.

  3. Segment extends ReentrantLock, so a ConcurrentHashMap is composed of multiple segment locks; any operation on a Segment's data must first acquire that Segment's lock. (Read/write separation?)

  4. No-arg constructor: ConcurrentHashMap map = new ConcurrentHashMap() -> this(DEFAULT_INITIAL_CAPACITY, DEFAULT_LOAD_FACTOR, DEFAULT_CONCURRENCY_LEVEL) -> this(16, 0.75, 16)

  5. From the logic of the constructor public ConcurrentHashMap(int initialCapacity, float loadFactor, int concurrencyLevel) one can see (a sketch of these rules follows the list below):

    • concurrencyLevel determines the size of the Segment array.
      • The Segment array size must be a power of two. If the concurrencyLevel passed in is not, the smallest power of two above it is used as the actual Segment array size.
        • E.g. concurrencyLevel = 12 yields a Segment array of capacity 16.
    • initialCapacity is the total capacity across all Segments' HashEntry arrays. Each Segment's HashEntry array capacity cap is in principle initialCapacity shared evenly across the Segments, c = initialCapacity / concurrencyLevel, but cap must itself be a power of two and satisfy cap >= MIN_SEGMENT_TABLE_CAPACITY = 2. The detailed rule:
      • compute c = initialCapacity / ssize, rounding up if the division is inexact;
      • cap = the smallest power of two >= c
        • e.g. c = 5 gives cap = 8
      • cap must be at least MIN_SEGMENT_TABLE_CAPACITY (2), so any smaller result becomes 2.
  6. Hence after Map<String,String> map = new ConcurrentHashMap<>(), the map is initialized as follows:
    - a Segment array of capacity 16 is created;
    - a single Segment object is created at index 0 of that array;
    - inside it, a HashEntry array of size 2 is initialized.
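
A sketch of the sizing rules just described, extracted from the constructor logic (the class name and values are illustrative, not the JDK source itself):

public class SegmentSizing {
    public static void main(String[] args) {
        int initialCapacity = 33, concurrencyLevel = 12;
        int ssize = 1;                                // Segment array size
        while (ssize < concurrencyLevel) ssize <<= 1; // -> 16
        int c = initialCapacity / ssize;              // 33 / 16 = 2 (rounded down)
        if (c * ssize < initialCapacity) ++c;         // inexact: round up -> 3
        int cap = 2;                                  // MIN_SEGMENT_TABLE_CAPACITY
        while (cap < c) cap <<= 1;                    // smallest power of two >= c -> 4
        System.out.println("ssize=" + ssize + ", per-segment cap=" + cap);
    }
}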


public class ConcurrentHashMap<K, V> extends AbstractMap<K, V>
        implements ConcurrentMap<K, V>, Serializable {

    /* ---------------- Fields -------------- */
    /**
     * Mask value for indexing into segments. The upper bits of a
     * key's hash code are used to choose the segment.
     */
    final int segmentMask;

    /**
     * Shift value for indexing within segments.
     */
    final int segmentShift;

    /**
     * The segments, each of which is a specialized hash table.
     */
    final Segment<K,V>[] segments;

    transient Set<K> keySet;
    transient Set<Map.Entry<K,V>> entrySet;
    transient Collection<V> values;


    /* ---------------- Constants -------------- */

    /**
     * The default initial capacity for this table,
     * used when not otherwise specified in a constructor.
     */
    static final int DEFAULT_INITIAL_CAPACITY = 16;

    /**
     * The default load factor for this table, used when not
     * otherwise specified in a constructor.
     */
    static final float DEFAULT_LOAD_FACTOR = 0.75f;

    /**
     * The default concurrency level for this table, used when not
     * otherwise specified in a constructor.
     */
    static final int DEFAULT_CONCURRENCY_LEVEL = 16;

    /**
     * The maximum capacity, used if a higher value is implicitly
     * specified by either of the constructors with arguments.  MUST
     * be a power of two <= 1<<30 to ensure that entries are indexable
     * using ints.
     */
    static final int MAXIMUM_CAPACITY = 1 << 30;

    /**
     * The minimum capacity for per-segment tables.  Must be a power
     * of two, at least two to avoid immediate resizing on next use
     * after lazy construction.
     */
    static final int MIN_SEGMENT_TABLE_CAPACITY = 2;

    /**
     * The maximum number of segments to allow; used to bound
     * constructor arguments. Must be power of two less than 1 << 24.
     */
    static final int MAX_SEGMENTS = 1 << 16; // slightly conservative

    /**
     * Number of unsynchronized retries in size and containsValue
     * methods before resorting to locking. This is used to avoid
     * unbounded retries if tables undergo continuous modification
     * which would make it impossible to obtain an accurate result.
     */
    static final int RETRIES_BEFORE_LOCK = 2;

    static final class HashEntry<K,V> {
        final int hash;
        final K key;
        volatile V value;
        volatile HashEntry<K,V> next;
	}

    public ConcurrentHashMap() {
        this(DEFAULT_INITIAL_CAPACITY, DEFAULT_LOAD_FACTOR, DEFAULT_CONCURRENCY_LEVEL);
    }


    /**
     * Creates a new, empty map with the specified initial
     * capacity, load factor and concurrency level.
     *
     * @param initialCapacity the initial capacity. The implementation
     * performs internal sizing to accommodate this many elements.
     * @param loadFactor  the load factor threshold, used to control resizing.
     * Resizing may be performed when the average number of elements per
     * bin exceeds this threshold.
     * @param concurrencyLevel the estimated number of concurrently
     * updating threads. The implementation performs internal sizing
     * to try to accommodate this many threads.
     * @throws IllegalArgumentException if the initial capacity is
     * negative or the load factor or concurrencyLevel are
     * nonpositive.
     */
    @SuppressWarnings("unchecked")
    public ConcurrentHashMap(int initialCapacity,
                             float loadFactor, int concurrencyLevel) {
        if (!(loadFactor > 0) || initialCapacity < 0 || concurrencyLevel <= 0)
            throw new IllegalArgumentException();
        /**
         * The Segment array size must not exceed MAX_SEGMENTS
         */
        if (concurrencyLevel > MAX_SEGMENTS)
            concurrencyLevel = MAX_SEGMENTS;
        // Find power-of-two sizes best matching arguments
        int sshift = 0;
        int ssize = 1;
        
        /**
         * Compute the Segment array size ssize: it must be a power of two; if
         * the given concurrencyLevel is not, ssize is the smallest power of
         * two above it
         */
        while (ssize < concurrencyLevel) {
            ++sshift;
            ssize <<= 1;
        }
        
       	// segmentShift: 32 - sshift, 28 by default; shifts the hash's top
       	// bits down to the low end
	    // segmentMask: the mask, 15 by default, i.e. 0000 0000 0000 1111
		// Purpose: on put/get, the key's hashCode is shifted right by
		// segmentShift and ANDed with segmentMask, giving the key's index in
		// the segment array
        this.segmentShift = 32 - sshift;
        this.segmentMask = ssize - 1;
        
        /**
         * Compute each Segment's HashEntry array capacity cap: in principle it
         * is the per-Segment share c = initialCapacity / ssize, but cap must
         * itself be a power of two and satisfy
         * cap >= MIN_SEGMENT_TABLE_CAPACITY = 2.
         */
        
        if (initialCapacity > MAXIMUM_CAPACITY)
            initialCapacity = MAXIMUM_CAPACITY;
            
        //integer division truncates, i.e. rounds down when initialCapacity is not divisible by ssize
        int c = initialCapacity / ssize; 
        if (c * ssize < initialCapacity) //the division was inexact: round up
            ++c;//round up
        //cap must be a power of two
        int cap = MIN_SEGMENT_TABLE_CAPACITY;
        while (cap < c)
            cap <<= 1;

		//Initialization: create the Segment array and a single Segment object, placed at index 0 of the array
        // create segments and segments[0]
        Segment<K,V> s0 =
            new Segment<K,V>(loadFactor, (int)(cap * loadFactor),
                             (HashEntry<K,V>[])new HashEntry[cap]);
        Segment<K,V>[] ss = (Segment<K,V>[])new Segment[ssize];
        UNSAFE.putOrderedObject(ss, SBASE, s0); // ordered write of segments[0]
        this.segments = ss;
    }


    /**
     * Returns the number of key-value mappings in this map.  If the
     * map contains more than <tt>Integer.MAX_VALUE</tt> elements, returns
     * <tt>Integer.MAX_VALUE</tt>.
     *
     * @return the number of key-value mappings in this map
     */
	// size = sum(segment.count)
	// While counting, other threads may still be putting/removing (which
	// changes seg.modCount). To stay accurate, size() counts more than once,
	// recording sum(seg.modCount) alongside each pass.
	// Between two consecutive passes:
	// 1. If sum(seg.modCount) is unchanged, nothing was added or removed in
	//    between (or additions equalled removals, which does not affect the
	//    result), so the count is accurate and is returned.
	// 2. If sum(seg.modCount) keeps changing, the count is not reliable:
	//    count once more, up to RETRIES_BEFORE_LOCK extra passes.
	// 3. If after RETRIES_BEFORE_LOCK passes sum(seg.modCount) still has not
	//    stabilized, lock every Segment to block concurrent put/remove, then
	//    compute the exact size and return it.
    public int size() {
        // Try a few times to get accurate count. On failure due to
        // continuous async changes in table, resort to locking.
        final Segment<K,V>[] segments = this.segments;
        int size;
        boolean overflow; // true if size overflows 32 bits
        long sum;         // sum of modCounts
        long last = 0L;   // previous sum
        int retries = -1; // first iteration isn't retry
        try {
            for (;;) {
                if (retries++ == RETRIES_BEFORE_LOCK) {
                    for (int j = 0; j < segments.length; ++j)
                        ensureSegment(j).lock(); // force creation
                }
                sum = 0L;
                size = 0;
                overflow = false;
                for (int j = 0; j < segments.length; ++j) {
                    Segment<K,V> seg = segmentAt(segments, j);
                    if (seg != null) {
                        sum += seg.modCount;
                        int c = seg.count;
                        if (c < 0 || (size += c) < 0)
                            overflow = true;
                    }
                }
                if (sum == last)
                    break;
                last = sum;
            }
        } finally {
            if (retries > RETRIES_BEFORE_LOCK) {
                for (int j = 0; j < segments.length; ++j)
                    segmentAt(segments, j).unlock();
            }
        }
        return overflow ? Integer.MAX_VALUE : size;
    }

    /**
     * Returns the value to which the specified key is mapped,
     * or {@code null} if this map contains no mapping for the key.
     *
     * <p>More formally, if this map contains a mapping from a key
     * {@code k} to a value {@code v} such that {@code key.equals(k)},
     * then this method returns {@code v}; otherwise it returns
     * {@code null}.  (There can be at most one such mapping.)
     *
     * @throws NullPointerException if the specified key is null
	 * 
     */
	 
	// get does not lock; Unsafe volatile reads provide visibility. During a
	// resize, a get that happens first reads from the old table, and one that
	// happens afterwards reads from the new table
    public V get(Object key) {
        Segment<K,V> s; // manually integrate access methods to reduce overhead
        HashEntry<K,V>[] tab;
        int h = hash(key);
		// u is the segment's offset within the segments array
        long u = (((h >>> segmentShift) & segmentMask) << SSHIFT) + SBASE;
		
		// s is the segment object
        if ((s = (Segment<K,V>)UNSAFE.getObjectVolatile(segments, u)) != null &&
            (tab = s.table) != null) {
            for (HashEntry<K,V> e = (HashEntry<K,V>) UNSAFE.getObjectVolatile
                     (tab, ((long)(((tab.length - 1) & h)) << TSHIFT) + TBASE);
                 e != null; e = e.next) {
                K k;
                if ((k = e.key) == key || (e.hash == h && key.equals(k)))
                    return e.value;
            }
        }
        return null;
    }


    @SuppressWarnings("unchecked")
    public V put(K key, V value) {
        Segment<K,V> s;
        if (value == null)
            throw new NullPointerException();
        int hash = hash(key);
		// compute the element's index in the segment array
        int j = (hash >>> segmentShift) & segmentMask;
		// fetch the segment object; if it is null, create it
        if ((s = (Segment<K,V>)UNSAFE.getObject          // nonvolatile; recheck
             (segments, (j << SSHIFT) + SBASE)) == null) //  in ensureSegment
			// It may not actually still be null: other threads may also have
			// seen a null segment, so ensureSegment publishes the segment via
			// CAS to keep this safe
            s = ensureSegment(j);
		//enter the Segment's put flow
        return s.put(key, hash, value, false);
    }


    /**
     * Returns the segment for the given index, creating it and
     * recording in segment table (via CAS) if not already present.
     *
     * @param k the index
     * @return the segment
     */
    @SuppressWarnings("unchecked")
    private Segment<K,V> ensureSegment(int k) {
        final Segment<K,V>[] ss = this.segments;
        long u = (k << SSHIFT) + SBASE; // raw offset
        Segment<K,V> seg;
        if ((seg = (Segment<K,V>)UNSAFE.getObjectVolatile(ss, u)) == null) {
            Segment<K,V> proto = ss[0]; // use segment 0 as prototype
            int cap = proto.table.length;
            float lf = proto.loadFactor;
            int threshold = (int)(cap * lf);
            HashEntry<K,V>[] tab = (HashEntry<K,V>[])new HashEntry[cap];
            if ((seg = (Segment<K,V>)UNSAFE.getObjectVolatile(ss, u))
                == null) { // recheck
                Segment<K,V> s = new Segment<K,V>(lf, threshold, tab);
                while ((seg = (Segment<K,V>)UNSAFE.getObjectVolatile(ss, u))
                       == null) {
                    if (UNSAFE.compareAndSwapObject(ss, u, null, seg = s))
                        break;
                }
            }
        }
        return seg;
    }

 /**
     * Segments are specialized versions of hash tables.  This
     * subclasses from ReentrantLock opportunistically, just to
     * simplify some locking and avoid separate construction.
     */
    static final class Segment<K,V> extends ReentrantLock implements Serializable {


        final V put(K key, int hash, V value, boolean onlyIfAbsent) {
			// try to acquire the lock
            HashEntry<K,V> node = tryLock() ? null :
				// Lock acquisition failed: enter scanAndLockForPut.
				// On a multi-core CPU it retries tryLock up to 64 times and
				// then falls back to lock(). While retrying, it also checks
				// whether the node is already in the chain and, if not,
				// creates it (the returned node is the created one)
                scanAndLockForPut(key, hash, value);
				
			// the segment is now locked; it is safe to proceed
            V oldValue;
            try {
                HashEntry<K,V>[] tab = table;
                int index = (tab.length - 1) & hash;
                HashEntry<K,V> first = entryAt(tab, index);
				// traverse the chain
                for (HashEntry<K,V> e = first;;) {
                    if (e != null) {
					    //update: the new value replaces the old one
                        K k;
                        if ((k = e.key) == key ||
                            (e.hash == hash && key.equals(k))) {
                            oldValue = e.value;
                            if (!onlyIfAbsent) {
                                e.value = value;
                                ++modCount;
                            }
                            break;
                        }
                        e = e.next;
                    }
                    else {
						// insert
						// (1. node may already have been created while waiting
						// for the lock; point its next at the current chain
						// head (head insertion)
                        if (node != null)
                            node.setNext(first);
                        else
							// (2. create the node
                            node = new HashEntry<K,V>(hash, key, value, first);
							
                        int c = count + 1;
						// (3. resize
                        if (c > threshold && tab.length < MAXIMUM_CAPACITY)
                            rehash(node);
                        else
							//make node the new head of the chain
                            setEntryAt(tab, index, node);
                        ++modCount;
                        count = c;
                        oldValue = null;
                        break;
                    }
                }
            } finally {
                unlock();
            }
            return oldValue;
        }

        /**
         * Scans for a node containing given key while trying to
         * acquire lock, creating and returning one if not found. Upon
         * return, guarantees that lock is held. UNlike in most
         * methods, calls to method equals are not screened: Since
         * traversal speed doesn't matter, we might as well help warm
         * up the associated code and accesses as well.
         *
         * @return a new node if key not found, else null
         */
        private HashEntry<K,V> scanAndLockForPut(K key, int hash, V value) {
			// the first element of the chain at this hash's slot in the
			// segment's HashEntry array
            HashEntry<K,V> first = entryForHash(this, hash);
            HashEntry<K,V> e = first;
            HashEntry<K,V> node = null;
            int retries = -1; // negative while locating node
            while (!tryLock()) {
                HashEntry<K,V> f; // to recheck first below
                if (retries < 0) {//still locating the node: if the key is absent, speculatively create the node
                    if (e == null) {
                        if (node == null) // speculatively create node
                            node = new HashEntry<K,V>(hash, key, value, null);
                        retries = 0;
                    }
                    else if (key.equals(e.key))
                        retries = 0;
                    else
                        e = e.next;
                }
				// More than MAX_SCAN_RETRIES (64) retries: block in lock().
                else if (++retries > MAX_SCAN_RETRIES) {
                    lock();
                    break;
                }
				// Every other retry, re-check the bucket head: another thread may have inserted
				// a new element at the same index of this segment's HashEntry array in the
				// meantime, making the cached `first` stale; if so, re-traverse and retry the lock.
                else if ((retries & 1) == 0 &&
                         (f = entryForHash(this, hash)) != first) {
                    e = first = f; // re-traverse if entry changed
                    retries = -1;
                }
            }
            return node;
        }
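
        // Note (added commentary, not in the JDK source): MAX_SCAN_RETRIES used
        // above is defined as
        //     static final int MAX_SCAN_RETRIES =
        //         Runtime.getRuntime().availableProcessors() > 1 ? 64 : 1;
        // so the 64-retry bound applies on multi-core machines only; on a single
        // core the thread falls back to lock() after a single failed retry.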

        /**
         * Doubles size of table and repacks entries, also adding the
         * given node to new table
         */
        @SuppressWarnings("unchecked")
        private void rehash(HashEntry<K,V> node) {
            /*
             * Reclassify nodes in each list to new table.  Because we
             * are using power-of-two expansion, the elements from
             * each bin must either stay at same index, or move with a
             * power of two offset. We eliminate unnecessary node
             * creation by catching cases where old nodes can be
             * reused because their next fields won't change.
             * Statistically, at the default threshold, only about
             * one-sixth of them need cloning when a table
             * doubles. The nodes they replace will be garbage
             * collectable as soon as they are no longer referenced by
             * any reader thread that may be in the midst of
             * concurrently traversing table. Entry accesses use plain
             * array indexing because they are followed by volatile
             * table write.
             */
            HashEntry<K,V>[] oldTable = table;
            int oldCapacity = oldTable.length;
            int newCapacity = oldCapacity << 1;
            threshold = (int)(newCapacity * loadFactor);
            HashEntry<K,V>[] newTable =
                (HashEntry<K,V>[]) new HashEntry[newCapacity];
            int sizeMask = newCapacity - 1;
            for (int i = 0; i < oldCapacity ; i++) {
                HashEntry<K,V> e = oldTable[i];
                if (e != null) {
                    HashEntry<K,V> next = e.next;
                    int idx = e.hash & sizeMask;
					// Only one node in this bucket: move it straight to its slot in the new table.
                    if (next == null)   //  Single node on list
                        newTable[idx] = e;
                    else {
						// Hash collision: walk the list.
						// Reuse consecutive sequence at same slot
                        HashEntry<K,V> lastRun = e;
                        int lastIdx = idx;
						// Say the bucket at old index idx holds 5 elements. After the scan
						// below, lastRun points at the Nth element, the start of the trailing
						// run whose nodes all map to the same new index (tracked in lastIdx).
						// The list is then split at lastRun: setting newTable[lastIdx] = lastRun
						// moves the whole suffix to its new slot in one step, with no copying.
						
                        for (HashEntry<K,V> last = next;
                             last != null;
                             last = last.next) {
                            int k = last.hash & sizeMask;
                            if (k != lastIdx) {
                                lastIdx = k;
                                lastRun = last;
                            }
                        }
                        newTable[lastIdx] = lastRun;
						
						// The prefix before lastRun is copied, HashEntry by HashEntry, into newTable.
                        // Clone remaining nodes
                        for (HashEntry<K,V> p = e; p != lastRun; p = p.next) {
                            V v = p.value;
                            int h = p.hash;
                            int k = h & sizeMask;
                            HashEntry<K,V> n = newTable[k];
                            newTable[k] = new HashEntry<K,V>(h, p.key, v, n);
                        }
                    }
                }
            }
			
			// After the transfer, insert the node being put at its slot (head insertion).
            int nodeIndex = node.hash & sizeMask; // add the new node
            node.setNext(newTable[nodeIndex]);
            newTable[nodeIndex] = node;

			// Publish the new Entry table.
            table = newTable;
        }



	}


}
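
To make the lastRun optimization in rehash above concrete, here is a minimal standalone sketch (all class and method names are illustrative, not from the JDK): it finds the trailing run of nodes that all map to the same new index, moves that suffix in one step, and clones only the nodes in front of it.

public class LastRunDemo {
    static final class Node {
        final int hash; final String key; Node next;
        Node(int hash, String key, Node next) { this.hash = hash; this.key = key; this.next = next; }
    }

    // Split one old-table bucket into a table of twice the capacity,
    // reusing the trailing run whose nodes all land on the same new index.
    static void transferBucket(Node head, Node[] newTable) {
        int sizeMask = newTable.length - 1;
        Node lastRun = head;
        int lastIdx = head.hash & sizeMask;
        for (Node p = head.next; p != null; p = p.next) {
            int k = p.hash & sizeMask;
            if (k != lastIdx) { lastIdx = k; lastRun = p; }
        }
        newTable[lastIdx] = lastRun;               // reuse the suffix as-is
        for (Node p = head; p != lastRun; p = p.next) {
            int k = p.hash & sizeMask;             // clone the prefix node-by-node
            newTable[k] = new Node(p.hash, p.key, newTable[k]);
        }
    }

    public static void main(String[] args) {
        // Old capacity 4, new capacity 8; hashes 1, 5, 1, 1 all collide at old index 1.
        Node head = new Node(1, "a", new Node(5, "b", new Node(1, "c", new Node(1, "d", null))));
        Node[] newTable = new Node[8];
        transferBucket(head, newTable);
        for (int i = 0; i < newTable.length; i++)
            for (Node p = newTable[i]; p != null; p = p.next)
                System.out.println(i + " -> " + p.key); // 1->a, 1->c, 1->d, 5->b
    }
}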

JDK1.8+

Drawbacks of the JDK 1.7 ConcurrentHashMap:
the Segment array size is fixed in the constructor (default 16) and can never change afterwards, and it is not lazily initialized.
In JDK 1.8 the table array simply keeps growing as the map is resized.

See:

Locking mechanism

In JDK 1.7 and earlier, ConcurrentHashMap was implemented with segment locking, so its maximum concurrency was bounded by the number of Segments. JDK 1.8 therefore abandoned that design in favor of the same array + linked list + red-black tree layout as HashMap, with updates locked via CAS and synchronized.

Because a JDK 1.8+ ConcurrentHashMap is accessed by many threads at once, with some inserting and some removing elements, the count returned by size() can carry some error.
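
A minimal sketch of this effect (the thread count and key range here are arbitrary): read while writers are still running and size() is only an estimate; for JDK 1.8+ the long-valued mappingCount() is the documented preference over size().

import java.util.concurrent.ConcurrentHashMap;

public class SizeEstimateDemo {
    public static void main(String[] args) throws InterruptedException {
        ConcurrentHashMap<Integer, Integer> map = new ConcurrentHashMap<>();
        Runnable writer = () -> {
            for (int i = 0; i < 100_000; i++)
                map.put((int) (Math.random() * 1_000_000), i);
        };
        Thread t1 = new Thread(writer), t2 = new Thread(writer);
        t1.start(); t2.start();
        // Read while the writers are still running: the value is assembled from
        // baseCount + counterCells without locking, so it is only an estimate.
        System.out.println("size while writing: " + map.size());
        t1.join(); t2.join();
        // Once all writers have finished, size() and mappingCount() agree.
        System.out.println("final count: " + map.mappingCount());
    }
}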

存储结构

The Java 8 ConcurrentHashMap changed considerably from Java 7: the 1.7 layout of Segment array + HashEntry array + linked list became the 1.8 layout of Node array + linked list / red-black tree. When a collision chain grows to a certain length, the list is converted into a red-black tree.

Important parameters
sizeCtl

Its value encodes the table's current initialization/resizing state.

  • -1 means the table is being initialized
  • -N means N-1 threads are currently resizing
  • if the table has not been initialized, it holds the initial table size to use
  • if the table has been initialized, it holds the next resize threshold (element count).
/*
    This field controls the initialization and resizing of table (also called the hash-bucket array).
    A negative sizeCtl means the table is being initialized or resized:
    sizeCtl = -1 means initialization is in progress;
    sizeCtl = -(1 + number of threads currently resizing).
     */
    private transient volatile int sizeCtl;
    
	// Hash value of a node that is being moved (hash for forwarding nodes).
	// During a resize, once a slot of the old table has been migrated, a ForwardingNode
	// with hash MOVED is installed at that slot; it carries a pointer, nextTable, to the
	// post-resize table. A thread that calls get() during the resize and finds a
	// ForwardingNode at its slot follows that pointer and reads from nextTable instead.
	static final int MOVED     = -1; 
	// Hash value of a tree root: the bin has been converted to a red-black tree (hash for roots of trees).
	static final int TREEBIN   = -2; 
	static final int RESERVED  = -3; // hash for transient reservation (placeholder) nodes
	// 0x7fffffff is Integer.MAX_VALUE, i.e. 31 one-bits in binary.
    static final int HASH_BITS = 0x7fffffff; // usable bits of normal node hash

    // Maximum capacity (the table holds at most 2^30 buckets).
    private static final int MAXIMUM_CAPACITY = 1 << 30;

    // Default capacity (the table starts at 16 buckets; capacities are always powers of two).
    private static final int DEFAULT_CAPACITY = 16;

    // Default concurrency level (kept only for compatibility with older versions).
    private static final int DEFAULT_CONCURRENCY_LEVEL = 16;

    // Default load factor 0.75 (a resize is triggered once used capacity >= loadFactor * total capacity).
    private static final float LOAD_FACTOR = 0.75f;

    // Treeify threshold: a bucket's list becomes a red-black tree once its length reaches 8.
    static final int TREEIFY_THRESHOLD = 8;

    // Untreeify threshold: a bucket's tree reverts to a list once it holds at most 6 nodes.
    static final int UNTREEIFY_THRESHOLD = 6;

    // Minimum table capacity for treeification (the table must have at least 64 buckets before
    // a bucket's list is converted to a tree; below that, a list reaching length 8 triggers a resize instead).
    static final int MIN_TREEIFY_CAPACITY = 64;
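
To see TREEIFY_THRESHOLD and MIN_TREEIFY_CAPACITY interact, here is a hedged sketch (the colliding key class is illustrative) that forces every entry into one bucket. The capacity hint matters: with fewer than 64 table slots, a long chain triggers a resize instead of treeification.

import java.util.concurrent.ConcurrentHashMap;

public class TreeifyDemo {
    // Every instance collides: hashCode is constant, equals distinguishes by id.
    static final class CollidingKey {
        final int id;
        CollidingKey(int id) { this.id = id; }
        @Override public int hashCode() { return 42; }
        @Override public boolean equals(Object o) {
            return o instanceof CollidingKey && ((CollidingKey) o).id == id;
        }
    }

    public static void main(String[] args) {
        // Capacity hint large enough that the table has >= 64 slots,
        // so the overlong bucket is treeified rather than resized away.
        ConcurrentHashMap<CollidingKey, Integer> map = new ConcurrentHashMap<>(128);
        for (int i = 0; i < 9; i++)     // 9 > TREEIFY_THRESHOLD (8)
            map.put(new CollidingKey(i), i);
        // The bin is now a TreeBin (red-black tree): lookups in it are
        // O(log n) instead of O(n).
        System.out.println(map.get(new CollidingKey(7))); // prints 7
    }
}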


Constructors

The initialization process differs from JDK 1.7; see the comments for details.

    /**
     Creates a new, empty map with the default initial table size (16).
     */
    public ConcurrentHashMap() {
    }

    /**
     Creates a new, empty map whose initial table size can accommodate the
     given number of elements without dynamic resizing.
     @param initialCapacity the initial capacity
     @param loadFactor the load factor, used only to establish the initial table size
     @param concurrencyLevel the estimated number of concurrently updating threads
     @throws IllegalArgumentException if the initial capacity is negative, or the
             load factor or concurrency level is nonpositive
     */
    public ConcurrentHashMap(int initialCapacity,
                             float loadFactor, int concurrencyLevel) {
        if (!(loadFactor > 0.0f) || initialCapacity < 0 || concurrencyLevel <= 0)
            throw new IllegalArgumentException();
        if (initialCapacity < concurrencyLevel)   // Use at least as many bins
            initialCapacity = concurrencyLevel;   // as estimated threads
        // Unlike JDK 1.7, the table capacity is scaled *up* from the given
        // initialCapacity (not down): capacity = initialCapacity / loadFactor,
        // rounded up to the nearest power of two.
        // E.g. for (16, 0.75f, 1): size = 22, so this.sizeCtl = cap = 32.
        long size = (long)(1.0 + (long)initialCapacity / loadFactor);

		// tableSizeFor: round size up to the next power of two; that becomes the final table capacity.
        int cap = (size >= (long)MAXIMUM_CAPACITY) ?
            MAXIMUM_CAPACITY : tableSizeFor((int)size);
        this.sizeCtl = cap;
    }
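
Working through the (16, 0.75f, 1) example from the comment above (plain arithmetic, matching the constructor code):

// new ConcurrentHashMap<>(16, 0.75f, 1)
// size = (long)(1.0 + 16 / 0.75)   -> (long) 22.33... = 22
// cap  = tableSizeFor(22)          -> next power of two = 32
// this.sizeCtl = 32, so the table that initTable later creates has 32 buckets.
// Contrast with HashMap, where new HashMap<>(16, 0.75f) yields a 16-bucket table.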

Source code
public class ConcurrentHashMap<K,V> extends AbstractMap<K,V>
    implements ConcurrentMap<K,V>, Serializable {
    /**
     * The largest possible table capacity.  This value must be
     * exactly 1<<30 to stay within Java array allocation and indexing
     * bounds for power of two table sizes, and is further required
     * because the top two bits of 32bit hash fields are used for
     * control purposes.
     */
    private static final int MAXIMUM_CAPACITY = 1 << 30;

    /**
     * The default initial table capacity.  Must be a power of 2
     * (i.e., at least 1) and at most MAXIMUM_CAPACITY.
     */
    private static final int DEFAULT_CAPACITY = 16;

    /**
     * The largest possible (non-power of two) array size.
     * Needed by toArray and related methods.
     */
    static final int MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8;

    /**
     * The default concurrency level for this table. Unused but
     * defined for compatibility with previous versions of this class.
     */
    private static final int DEFAULT_CONCURRENCY_LEVEL = 16;

    /**
     * The load factor for this table. Overrides of this value in
     * constructors affect only the initial table capacity.  The
     * actual floating point value isn't normally used -- it is
     * simpler to use expressions such as {@code n - (n >>> 2)} for
     * the associated resizing threshold.
     */
    private static final float LOAD_FACTOR = 0.75f;

    /**
     * The bin count threshold for using a tree rather than list for a
     * bin.  Bins are converted to trees when adding an element to a
     * bin with at least this many nodes. The value must be greater
     * than 2, and should be at least 8 to mesh with assumptions in
     * tree removal about conversion back to plain bins upon
     * shrinkage.
     */
    static final int TREEIFY_THRESHOLD = 8;

    /**
     * The bin count threshold for untreeifying a (split) bin during a
     * resize operation. Should be less than TREEIFY_THRESHOLD, and at
     * most 6 to mesh with shrinkage detection under removal.
     */
    static final int UNTREEIFY_THRESHOLD = 6;

    /**
     * The smallest table capacity for which bins may be treeified.
     * (Otherwise the table is resized if too many nodes in a bin.)
     * The value should be at least 4 * TREEIFY_THRESHOLD to avoid
     * conflicts between resizing and treeification thresholds.
     */
    static final int MIN_TREEIFY_CAPACITY = 64;

    /**
     * Minimum number of rebinnings per transfer step. Ranges are
     * subdivided to allow multiple resizer threads.  This value
     * serves as a lower bound to avoid resizers encountering
     * excessive memory contention.  The value should be at least
     * DEFAULT_CAPACITY.
     */
    private static final int MIN_TRANSFER_STRIDE = 16;

    /**
     * The number of bits used for generation stamp in sizeCtl.
     * Must be at least 6 for 32bit arrays.
     */
    private static final int RESIZE_STAMP_BITS = 16;

    /**
     * The maximum number of threads that can help resize.
     * Must fit in 32 - RESIZE_STAMP_BITS bits.
     */
    private static final int MAX_RESIZERS = (1 << (32 - RESIZE_STAMP_BITS)) - 1;

    /**
     * The bit shift for recording size stamp in sizeCtl.
     */
    private static final int RESIZE_STAMP_SHIFT = 32 - RESIZE_STAMP_BITS;
    

    /* ---------------- Fields -------------- */

    /**
     * The array of bins. Lazily initialized upon first insertion.
     * Size is always a power of two. Accessed directly by iterators.
     */
    transient volatile Node<K,V>[] table;

    /**
     * The next table to use; non-null only while resizing.
     */
    private transient volatile Node<K,V>[] nextTable;

    /**
     * Base counter value, used mainly when there is no contention,
     * but also as a fallback during table initialization
     * races. Updated via CAS.
     */
    private transient volatile long baseCount;

    /**
     * Table initialization and resizing control.  When negative, the
     * table is being initialized or resized: -1 for initialization,
     * else -(1 + the number of active resizing threads).  Otherwise,
     * when table is null, holds the initial table size to use upon
     * creation, or 0 for default. After initialization, holds the
     * next element count value upon which to resize the table.
     */
    private transient volatile int sizeCtl;

    /**
     * The next table index (plus one) to split while resizing.
     */
    private transient volatile int transferIndex;

    /**
     * Spinlock (locked via CAS) used when resizing and/or creating CounterCells.
     */
    private transient volatile int cellsBusy;

    /**
     * Table of counter cells. When non-null, size is a power of 2.
     */
    private transient volatile CounterCell[] counterCells;



    /**
     * Key-value entry.  This class is never exported out as a
     * user-mutable Map.Entry (i.e., one supporting setValue; see
     * MapEntry below), but can be used for read-only traversals used
     * in bulk tasks.  Subclasses of Node with a negative hash field
     * are special, and contain null keys and values (but are never
     * exported).  Otherwise, keys and vals are never null.
     */
    static class Node<K,V> implements Map.Entry<K,V> {
        final int hash;
        final K key;
        volatile V val;
        volatile Node<K,V> next;
	}


    /**
     * Returns the value to which the specified key is mapped,
     * or {@code null} if this map contains no mapping for the key.
     *
     * <p>More formally, if this map contains a mapping from a key
     * {@code k} to a value {@code v} such that {@code key.equals(k)},
     * then this method returns {@code v}; otherwise it returns
     * {@code null}.  (There can be at most one such mapping.)
     *
     * @throws NullPointerException if the specified key is null
     */
    public V get(Object key) {
        Node<K,V>[] tab; Node<K,V> e, p; int n, eh; K ek;
        // Spread the key's hashCode the same way put does.
        int h = spread(key.hashCode());
        if ((tab = table) != null && (n = tab.length) > 0 &&
            (e = tabAt(tab, (n - 1) & h)) != null) {
            // The first node of the bucket matches: return its value directly.
            if ((eh = e.hash) == h) {
                if ((ek = e.key) == key || (ek != null && key.equals(ek)))
                    return e.val;
            }
            // Negative hash: a ForwardingNode (resize in progress) or a TreeBin;
            // delegate to the node's own find().
            else if (eh < 0)
                return (p = e.find(h, key)) != null ? p.val : null;
            // Otherwise walk the bucket's linked list.
            while ((e = e.next) != null) {
                if (e.hash == h &&
                    ((ek = e.key) == key || (ek != null && key.equals(ek))))
                    return e.val;
            }
        }
        return null;
    }

    public V put(K key, V value) {
        return putVal(key, value, false);
    }

    /** Implementation for put and putIfAbsent */
    final V putVal(K key, V value, boolean onlyIfAbsent) {
        if (key == null || value == null) throw new NullPointerException();
        // spread: perturb the hashCode (XOR the high 16 bits into the low 16 bits, then mask with HASH_BITS, i.e. Integer.MAX_VALUE).
        int hash = spread(key.hashCode());
        int binCount = 0;
        for (Node<K,V>[] tab = table;;) {
        	// f is the head node of the bucket's list
        	// fh is the head node's hash
        	// i is the bucket's index within the table array
            Node<K,V> f; int n, i, fh; K fk; V fv;
            // table == null: perform lazy initialization via initTable.
            if (tab == null || (n = tab.length) == 0)
                tab = initTable();
            // No element at this slot yet: CAS the new node directly into the array (the CAS guards against concurrent inserts).
            else if ((f = tabAt(tab, i = (n - 1) & hash)) == null) {
                if (casTabAt(tab, i, null, new Node<K,V>(hash, key, value)))
                    break;                   // no lock when adding to empty bin
            }
            // A resize is in progress. The current thread does not block; it helps with the
            // resize instead (migrating buckets of the old table that have not yet been moved).
            // Once its share of the transfer is done, it loops again and takes one of the
            // other branches to insert the node in the right place.
            else if ((fh = f.hash) == MOVED)
                tab = helpTransfer(tab, f);
            // The head node at this table index already holds the key and no overwrite is wanted: return the head's value.
            else if (onlyIfAbsent // check first node without acquiring lock
                     && fh == hash
                     && ((fk = f.key) == key || (fk != null && key.equals(fk)))
                     && (fv = f.val) != null)
                return fv;
            else {
            	// Hash collision: lock the bucket's head node and update under its monitor.
                V oldVal = null;
                synchronized (f) {
                	// Re-check that the head node has not moved
                	// (a resize, or conversion of this bucket to a red-black tree, in the meantime would make tabAt(tab, i) != f).
                    if (tabAt(tab, i) == f) {
                    	// Head node's hash >= 0: an ordinary list node (< 0 would mean a forwarding node or a tree root).
                        if (fh >= 0) {
                            binCount = 1;
                            // Scan the list.
                            for (Node<K,V> e = f;; ++binCount) {
                                K ek;
                                // Same key found: remember the old value and overwrite unless onlyIfAbsent.
                                if (e.hash == hash &&
                                    ((ek = e.key) == key ||
                                     (ek != null && key.equals(ek)))) {
                                    oldVal = e.val;
                                    if (!onlyIfAbsent)
                                        e.val = value;
                                    break;
                                }
                                Node<K,V> pred = e;
                                // Key not found: append the new node at the tail of the list (tail insertion).
                                if ((e = e.next) == null) {
                                    pred.next = new Node<K,V>(hash, key, value);
                                    break;
                                }
                            }
                        }
                        // The head of this bucket is a tree root: insert the node into the red-black tree.
                        else if (f instanceof TreeBin) {
                            Node<K,V> p;
                            binCount = 2;
                            if ((p = ((TreeBin<K,V>)f).putTreeVal(hash, key,
                                                           value)) != null) {
                                oldVal = p.val;
                                if (!onlyIfAbsent)
                                    p.val = value;
                            }
                        }
                        else if (f instanceof ReservationNode)
                            throw new IllegalStateException("Recursive update");
                    }
                }
                
                // After the insert, if the treeify condition is met (same rule as the JDK 1.8 HashMap), convert the list to a red-black tree.
                if (binCount != 0) {
                    if (binCount >= TREEIFY_THRESHOLD)
                        treeifyBin(tab, i);
                    if (oldVal != null)
                        return oldVal;
                    break;
                }
            }
        }
        // size() bookkeeping (+1) and resizing: counting uses a LongAdder-like scheme; once the
        // count is updated, if binCount (the chain length) > 1 and the resize condition holds,
        // a resize follows.
        addCount(1L, binCount);
        return null;
    }
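
    // Usage note (added commentary, not in the JDK source): putVal backs both
    // put and putIfAbsent, differing only in onlyIfAbsent. For example:
    //   ConcurrentHashMap<String, Integer> m = new ConcurrentHashMap<>();
    //   m.put("k", 1);          // returns null (no previous mapping)
    //   m.putIfAbsent("k", 2);  // returns 1, mapping left unchanged
    //   m.put("k", 3);          // returns 1, mapping is now "k" -> 3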
    
    /**
     * Initializes table, using the size recorded in sizeCtl.
     */
	private final Node<K,V>[] initTable() {
	   Node<K,V>[] tab; int sc;
	
	    // If table is null or empty, enter the loop and initialize it;
	    // otherwise initialization has already completed.
	    while ((tab = table) == null || tab.length == 0) {
	
	        // Copy sizeCtl into sc. sizeCtl < 0 means another thread is already
	        // initializing, so this thread spins, yielding its time slice.
	        if ((sc = sizeCtl) < 0)
	            // Yield and retry.
	            Thread.yield(); // lost initialization race; just spin
	
	        // Try to CAS sizeCtl to -1 ("table is being initialized").
	        // Winning the CAS is equivalent to taking a lock, at CAS cost.
	        else if (U.compareAndSwapInt(this, SIZECTL, sc, -1)) {
	        	// Lock acquired. Other threads keep yielding in the while loop until the
	        	// table is initialized; they then fail the first if below and break out.
	            try {
	                // Re-check the table: if it is now non-empty, another thread finished
	                // the initialization first, so break out and return.
	                if ((tab = table) == null || tab.length == 0) {
	                    // sc holds the old sizeCtl, which after the constructor is the
	                    // requested initial capacity; use it if positive, else the default 16.
	                    int n = (sc > 0) ? sc : DEFAULT_CAPACITY;
	                    @SuppressWarnings("unchecked")
	                    // Allocate a Node array of capacity n...
	                    Node<K,V>[] nt = (Node<K,V>[])new Node<?,?>[n];
	                    // ...and publish it as table.
	                    table = tab = nt;
	
	                    // Recompute sc = n - (n >>> 2) = n * 3/4, the resize threshold.
	                    // The 0.75 load factor is effectively hard-coded by this expression
	                    // rather than read from a field.
	                    // (For n < 4, n >>> 2 is 0, so sc would simply equal n.)
	
	                    // With n = 16 this gives sc = 12,
	                    // i.e. 0.75 * capacity, the default resize threshold.
	                    sc = n - (n >>> 2);
	                }
	            } finally {
	                // Store sc back into sizeCtl, which now holds the next resize threshold.
	                sizeCtl = sc;
	            }
	            break;
	        }
	    }
	    return tab;
	}
   
    /**
     * Spreads (XORs) higher bits of hash to lower and also forces top
     * bit to 0. Because the table uses power-of-two masking, sets of
     * hashes that vary only in bits above the current mask will
     * always collide. (Among known examples are sets of Float keys
     * holding consecutive whole numbers in small tables.)  So we
     * apply a transform that spreads the impact of higher bits
     * downward. There is a tradeoff between speed, utility, and
     * quality of bit-spreading. Because many common sets of hashes
     * are already reasonably distributed (so don't benefit from
     * spreading), and because we use trees to handle large sets of
     * collisions in bins, we just XOR some shifted bits in the
     * cheapest possible way to reduce systematic lossage, as well as
     * to incorporate impact of the highest bits that would otherwise
     * never be used in index calculations because of table bounds.
     */
    static final int spread(int h) {
        return (h ^ (h >>> 16)) & HASH_BITS;
    }
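
    // Worked example (added commentary): hashes that differ only above bit 15
    // would always collide in a small table without spreading.
    //   h1 = 0x00010000 -> spread(h1) = 0x00010000 ^ 0x0001 = 0x00010001
    //   h2 = 0x00020000 -> spread(h2) = 0x00020000 ^ 0x0002 = 0x00020002
    // With n = 16: (n - 1) & h1 == (n - 1) & h2 == 0 before spreading,
    // but (n - 1) & spread(h1) == 1 and (n - 1) & spread(h2) == 2 after.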

    /**
     * Rounds the given size up to the next power of two, which becomes the
     * final table capacity.
     * 
     * Returns a power of two table size for the given desired capacity.
     * See Hacker's Delight, sec 3.2
     * 
     */
    private static final int tableSizeFor(int c) {
        int n = -1 >>> Integer.numberOfLeadingZeros(c - 1);
        return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
    }
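
    // Examples (added commentary): tableSizeFor rounds up to the next power of
    // two: tableSizeFor(16) == 16, tableSizeFor(17) == 32, tableSizeFor(22) == 32
    // (the constructor example above), tableSizeFor(1) == 1.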
    
    /**
     * This method does two things:
	 *  
	 * 1. After an element is added or removed, it updates the size count.
     * It uses the same idea as LongAdder (striped counting, summed on read):
     * baseCount is the base accumulator and counterCells the striped cell array.
	 * map.size() = baseCount + the sum of every CounterCell element.
	 * 
	 * 2. After counting, it also checks whether a resize is needed and, if the
	 * resize condition holds, performs the resize.
	 * counterCells is first created with length 2 (one cell populated); under heavy
	 * contention new cells are added (i.e. counterCells grows) to spread the updates.
	 * 
     * Adds to count, and if table is too small and not already
     * resizing, initiates transfer. If already resizing, helps
     * perform transfer if work is available.  Rechecks occupancy
     * after a transfer to see if another resize is already needed
     * because resizings are lagging additions.
     *
     * @param x the delta to the element count: +1 for a put, -1 for a remove
     * @param check if <0, don't check resize, if <= 1 only check if uncontended
     *   (check is the length of the affected bin's chain)
     */
    private final void addCount(long x, int check) {
        CounterCell[] cs; long b, s;
		// 1. Counting.
        // counterCells already exists: contention has occurred before, so add into a cell.
        if ((cs = counterCells) != null ||
			// Otherwise counterCells does not exist yet (no contention so far): CAS the
			// delta onto baseCount. If that CAS fails, contention just occurred, so also
			// fall into the if-block and add into a cell (the block covers counterCells initialization too).
            !U.compareAndSetLong(this, BASECOUNT, b = baseCount, s = b + x)) {
            CounterCell c; long v; int m;
            boolean uncontended = true;
            	// 1. counterCells not created yet, or
            if (cs == null || (m = cs.length - 1) < 0 ||
            	// 2. this thread's cell not created yet, or
                (c = cs[ThreadLocalRandom.getProbe() & m]) == null ||
                // 3. the CAS add onto the cell failed:
                !(uncontended =
                  U.compareAndSetLong(c, CELLVALUE, v = c.value, v + x))) {
                // All three cases fall back to fullAddCount, which loops and retries:
                // it creates and grows the counterCells array, creates the cells, and adds into them.
                fullAddCount(x, uncontended);
                return;
            }
            if (check <= 1)
                return;
            // Compute the element count (what map.size() would return).
            s = sumCount();
        }
        }
		
		// 2. Resizing, when the condition holds: the chain length check >= 0 and the
		// map's element count has reached the resize threshold sizeCtl.
        if (check >= 0) {
            Node<K,V>[] tab, nt; int n, sc;
			// Element count s reached the threshold sizeCtl (s >= sizeCtl):
			// branch (1) below runs first; later threads then take branch (2) above it.
            while (s >= (long)(sc = sizeCtl) && (tab = table) != null &&
                   (n = tab.length) < MAXIMUM_CAPACITY) {
                int rs = resizeStamp(n);
				// (2) sc < 0 means a resize is already in progress: help with it.
                if (sc < 0) {
                    if ((sc >>> RESIZE_STAMP_SHIFT) != rs || sc == rs + 1 ||
                        sc == rs + MAX_RESIZERS || (nt = nextTable) == null ||
                        transferIndex <= 0)
                        break;
					// nextTable (nt) was already created by the first resizer in branch (1); help transfer.
                    if (U.compareAndSetInt(this, SIZECTL, sc, sc + 1))
                        transfer(tab, nt);
                }
				// (1) Set sizeCtl to -(1 + the number of active resizing threads) to mark the resize as in progress.
                else if (U.compareAndSetInt(this, SIZECTL, sc,
                                             (rs << RESIZE_STAMP_SHIFT) + 2))
					// The first thread starts the resize with nextTable == null; transfer creates it.
					// Other threads then enter branch (2) above to assist.
                    transfer(tab, null);
                s = sumCount();
            }
        }
    }
    
    // See LongAdder version for explanation
    private final void fullAddCount(long x, boolean wasUncontended) {
        int h;
        if ((h = ThreadLocalRandom.getProbe()) == 0) {
            ThreadLocalRandom.localInit();      // force initialization
            h = ThreadLocalRandom.getProbe();
            wasUncontended = true;
        }
        boolean collide = false;                // True if last slot nonempty
        for (;;) {
            CounterCell[] cs; CounterCell c; int n; long v;
			// 1. counterCells array already exists
            if ((cs = counterCells) != null && (n = cs.length) > 0) {
				// 1.1 ...but this thread's cell slot is still empty
                if ((c = cs[(n - 1) & h]) == null) {
					// threads race on cellsBusy (a CAS spinlock) to create the cell, seeded with x
                    if (cellsBusy == 0) {            // Try to attach new Cell
                        CounterCell r = new CounterCell(x); // Optimistic create
                        if (cellsBusy == 0 &&
                            U.compareAndSetInt(this, CELLSBUSY, 0, 1)) {
							// Won cellsBusy: create the cell.
                            boolean created = false;
                            try {               // Recheck under lock
                                CounterCell[] rs; int m, j;
                                if ((rs = counterCells) != null &&
                                    (m = rs.length) > 0 &&
                                    rs[j = (m - 1) & h] == null) {
                                    rs[j] = r;
                                    created = true;
                                }
                            } finally {
                                cellsBusy = 0;
                            }
							// Cell created: break out of the for loop and finish fullAddCount.
                            if (created)
                                break;
                            continue;           // Slot is now non-empty
                        }
                    }
                    collide = false;
                }
                else if (!wasUncontended)       // CAS already known to fail
                    wasUncontended = true;      // Continue after rehash
				// 1.2 The cell exists: CAS-add the delta onto it.
                else if (U.compareAndSetLong(c, CELLVALUE, v = c.value, v + x))
                    break;
                else if (counterCells != cs || n >= NCPU)
                    collide = false;            // At max size or stale
                else if (!collide)
                    collide = true;
				// Under sustained contention (while cells < number of CPUs), grow the
				// counterCells array (i.e. expand it) so updates spread over more cells.
                else if (cellsBusy == 0 &&
                         U.compareAndSetInt(this, CELLSBUSY, 0, 1)) {
                    try {
                        if (counterCells == cs) // Expand table unless stale
                            counterCells = Arrays.copyOf(cs, n << 1);
                    } finally {
                        cellsBusy = 0;
                    }
                    collide = false;
                    continue;                   // Retry with expanded table
                }
                h = ThreadLocalRandom.advanceProbe(h);
            }
			// 2. No counterCells array yet: threads race on cellsBusy (CAS) to create it with length 2 (one cell populated).
            else if (cellsBusy == 0 && counterCells == cs &&
                     U.compareAndSetInt(this, CELLSBUSY, 0, 1)) {
                boolean init = false;
                try {                           // Initialize table
                    if (counterCells == cs) {
                        CounterCell[] rs = new CounterCell[2];
                        rs[h & 1] = new CounterCell(x);
                        counterCells = rs;
                        init = true;
                    }
                } finally {
                    cellsBusy = 0;
                }
				// counterCells is initialized: break out of the for loop and finish fullAddCount.
				// On a later addCount, if this thread's cell slot is still empty, fullAddCount
				// is entered again and branch 1 above creates the cell.
                if (init)
                    break;
            }
            else if (U.compareAndSetLong(this, BASECOUNT, v = baseCount, v + x))
                break;                          // Fall back on using base
        }
    }


    /**
     * Moves and/or copies the nodes in each bin to new table. See
     * above for explanation.
     */
    private final void transfer(Node<K,V>[] tab, Node<K,V>[] nextTab) {
        int n = tab.length, stride;
        if ((stride = (NCPU > 1) ? (n >>> 3) / NCPU : n) < MIN_TRANSFER_STRIDE)
            stride = MIN_TRANSFER_STRIDE; // subdivide range
		// 1. First resizer: nextTab == null, so create nextTable with double the capacity.
        if (nextTab == null) {            // initiating
            try {
                @SuppressWarnings("unchecked")
                Node<K,V>[] nt = (Node<K,V>[])new Node<?,?>[n << 1];
                nextTab = nt;
            } catch (Throwable ex) {      // try to cope with OOME
                sizeCtl = Integer.MAX_VALUE;
                return;
            }
            nextTable = nextTab;
            transferIndex = n;
        }
        int nextn = nextTab.length;
        ForwardingNode<K,V> fwd = new ForwardingNode<K,V>(nextTab);
        boolean advance = true;
        boolean finishing = false; // to ensure sweep before committing nextTab
		
		// Migrate the old table's elements into nextTable, one bucket at a time.
        for (int i = 0, bound = 0;;) {
            Node<K,V> f; int fh;
            while (advance) {
                int nextIndex, nextBound;
                if (--i >= bound || finishing)
                    advance = false;
                else if ((nextIndex = transferIndex) <= 0) {
                    i = -1;
                    advance = false;
                }
                else if (U.compareAndSetInt
                         (this, TRANSFERINDEX, nextIndex,
                          nextBound = (nextIndex > stride ?
                                       nextIndex - stride : 0))) {
                    bound = nextBound;
                    i = nextIndex - 1;
                    advance = false;
                }
            }
            if (i < 0 || i >= n || i + n >= nextn) {
                int sc;
                if (finishing) {
                    nextTable = null;
                    table = nextTab;
                    sizeCtl = (n << 1) - (n >>> 1);
                    return;
                }
                if (U.compareAndSetInt(this, SIZECTL, sc = sizeCtl, sc - 1)) {
                    if ((sc - 2) != resizeStamp(n) << RESIZE_STAMP_SHIFT)
                        return;
                    finishing = advance = true;
                    i = n; // recheck before commit
                }
            }
			// This bucket is empty: CAS a ForwardingNode into it so that any get()
			// arriving here is redirected to look the key up in nextTable.
            else if ((f = tabAt(tab, i)) == null)
                advance = casTabAt(tab, i, null, fwd);
            else if ((fh = f.hash) == MOVED)
                advance = true; // already processed
            else {
				// Non-empty bucket: lock its head node and migrate the chain or tree.
                synchronized (f) {
                    if (tabAt(tab, i) == f) {
                        Node<K,V> ln, hn;
                        if (fh >= 0) {// ordinary list nodes
                            int runBit = fh & n;
                            Node<K,V> lastRun = f;
                            for (Node<K,V> p = f.next; p != null; p = p.next) {
                                int b = p.hash & n;
                                if (b != runBit) {
                                    runBit = b;
                                    lastRun = p;
                                }
                            }
                            if (runBit == 0) {
                                ln = lastRun;
                                hn = null;
                            }
                            else {
                                hn = lastRun;
                                ln = null;
                            }
                            for (Node<K,V> p = f; p != lastRun; p = p.next) {
                                int ph = p.hash; K pk = p.key; V pv = p.val;
                                if ((ph & n) == 0)
                                    ln = new Node<K,V>(ph, pk, pv, ln);
                                else
                                    hn = new Node<K,V>(ph, pk, pv, hn);
                            }
                            setTabAt(nextTab, i, ln);
                            setTabAt(nextTab, i + n, hn);
                            setTabAt(tab, i, fwd);
                            advance = true;
                        }
                        else if (f instanceof TreeBin) {// migrate tree nodes
                            TreeBin<K,V> t = (TreeBin<K,V>)f;
                            TreeNode<K,V> lo = null, loTail = null;
                            TreeNode<K,V> hi = null, hiTail = null;
                            int lc = 0, hc = 0;
                            for (Node<K,V> e = t.first; e != null; e = e.next) {
                                int h = e.hash;
                                TreeNode<K,V> p = new TreeNode<K,V>
                                    (h, e.key, e.val, null, null);
                                if ((h & n) == 0) {
                                    if ((p.prev = loTail) == null)
                                        lo = p;
                                    else
                                        loTail.next = p;
                                    loTail = p;
                                    ++lc;
                                }
                                else {
                                    if ((p.prev = hiTail) == null)
                                        hi = p;
                                    else
                                        hiTail.next = p;
                                    hiTail = p;
                                    ++hc;
                                }
                            }
                            ln = (lc <= UNTREEIFY_THRESHOLD) ? untreeify(lo) :
                                (hc != 0) ? new TreeBin<K,V>(lo) : t;
                            hn = (hc <= UNTREEIFY_THRESHOLD) ? untreeify(hi) :
                                (lc != 0) ? new TreeBin<K,V>(hi) : t;
                            setTabAt(nextTab, i, ln);
                            setTabAt(nextTab, i + n, hn);
                            setTabAt(tab, i, fwd);
                            advance = true;
                        }
                    }
                }
            }
        }
    }
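
    // Worked example (added commentary): with old capacity n = 16, the bit tested
    // by (ph & n) picks each node's destination:
    //   hash 5  = 0b00101, 5 & 16 == 0  -> low list ln,  stays at index 5
    //   hash 21 = 0b10101, 21 & 16 != 0 -> high list hn, moves to index 5 + 16 = 21
    // so one old bucket splits into at most two buckets of the new table.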

    /**
     * Helps transfer if a resize is in progress.
     */
    final Node<K,V>[] helpTransfer(Node<K,V>[] tab, Node<K,V> f) {
        Node<K,V>[] nextTab; int sc;
        if (tab != null && (f instanceof ForwardingNode) &&
            (nextTab = ((ForwardingNode<K,V>)f).nextTable) != null) {
            int rs = resizeStamp(tab.length);
            while (nextTab == nextTable && table == tab &&
                   (sc = sizeCtl) < 0) {
                if ((sc >>> RESIZE_STAMP_SHIFT) != rs || sc == rs + 1 ||
                    sc == rs + MAX_RESIZERS || transferIndex <= 0)
                    break;
                if (U.compareAndSetInt(this, SIZECTL, sc, sc + 1)) {
                    transfer(tab, nextTab);
                    break;
                }
            }
            return nextTab;
        }
        return table;
    }


	// map.size() = baseCount + the sum of every CounterCell element.
    public int size() {
        long n = sumCount();
        return ((n < 0L) ? 0 :
                (n > (long)Integer.MAX_VALUE) ? Integer.MAX_VALUE :
                (int)n);
    }

    final long sumCount() {
        CounterCell[] cs = counterCells;
        long sum = baseCount;
        if (cs != null) {
            for (CounterCell c : cs)
                if (c != null)
                    sum += c.value;
        }
        return sum;
    }
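
    // Note (added commentary): sumCount() reads baseCount and the cells without
    // locking, so the result is not an atomic snapshot; while updates are in
    // flight it is only an estimate, which is why size() above is approximate.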

   /* ---------------- Conversion from/to TreeBins -------------- */

    /**
     * Replaces all linked nodes in bin at given index unless table is
     * too small, in which case resizes instead.
     */
    private final void treeifyBin(Node<K,V>[] tab, int index) {
        Node<K,V> b; int n;
        if (tab != null) {
            if ((n = tab.length) < MIN_TREEIFY_CAPACITY)
                tryPresize(n << 1);
            else if ((b = tabAt(tab, index)) != null && b.hash >= 0) {
                synchronized (b) {
                    if (tabAt(tab, index) == b) {
                        TreeNode<K,V> hd = null, tl = null;
                        for (Node<K,V> e = b; e != null; e = e.next) {
                            TreeNode<K,V> p =
                                new TreeNode<K,V>(e.hash, e.key, e.val,
                                                  null, null);
                            if ((p.prev = tl) == null)
                                hd = p;
                            else
                                tl.next = p;
                            tl = p;
                        }
                        setTabAt(tab, index, new TreeBin<K,V>(hd));
                    }
                }
            }
        }
    }

}