Java Collections (4): ConcurrentHashMap

References:

    https://javadoop.com/post/hashmap
    https://blog.csdn.net/lsgqjh/article/details/54867107
    https://www.baidu.com/link?url=EdjLYTP4UFbVMXPGGe76qAfAFuqEIshx2_xFwZkkIcYrMXs1P5qV0Hhmx6FzJRKPuOfIRG_56hj87sttsTIK1_&wd=&eqid=8955176e000a4b31000000065b93f440

Before reading this, it is recommended to first read the HashMap article.

Key Points

1. ConcurrentHashMap does not allow null keys or null values.

2. ConcurrentHashMap is thread-safe and, compared with Hashtable, more efficient.

3. In JDK 1.8 ConcurrentHashMap was substantially reworked: it achieves concurrency mainly through CAS operations plus limited use of synchronized, abandoning the earlier Segment-based lock striping.

4. The underlying storage structure is the same as JDK 1.8's HashMap: an array of bins, each holding a linked list or a red-black tree.

5. The index at which an element is stored is computed as (n - 1) & hash, where n is the current capacity of the table array and hash is derived from the element's key by a dedicated hash function (see the sketch after this list).

6. In JDK 8 (i.e. JDK 1.8), once a bin's linked list reaches a length of 8 it is converted to a red-black tree to speed up lookups and insertions (provided the table length is at least 64; otherwise the table is resized instead).

7. Resizing works like HashMap: once the element count reaches 0.75 of the current capacity, the table doubles in size and the data is migrated.

8. The load factor is fixed at 0.75 and cannot be changed.
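
Point 5 can be illustrated with a minimal sketch (a hypothetical demo class, not JDK source) that mirrors the spread() method and HASH_BITS constant shown later in this article, assuming a power-of-two table capacity n:

    // Hypothetical demo: ConcurrentHashMap-style index calculation.
    public class IndexDemo {
        static final int HASH_BITS = 0x7fffffff; // usable bits of a normal node hash (same constant as the JDK source)

        // Same idea as ConcurrentHashMap.spread(): mix the high bits into the low bits
        // and force the sign bit to 0 so normal nodes always have hash >= 0.
        static int spread(int h) {
            return (h ^ (h >>> 16)) & HASH_BITS;
        }

        public static void main(String[] args) {
            int n = 16;                               // table capacity, always a power of two
            int hash = spread("someKey".hashCode());
            int index = (n - 1) & hash;               // equivalent to hash % n when n is a power of two
            System.out.println("bin index = " + index);
        }
    }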

Source Code Analysis

Important Fields
    /** The default initial capacity is 16. Must be a power of two
     * The default initial table capacity.  Must be a power of 2
     * (i.e., at least 1) and at most MAXIMUM_CAPACITY.
     */
    private static final int DEFAULT_CAPACITY = 16;
    
    
    /** 
     *   Holds the initial table capacity and later serves as the resize threshold
     * Table initialization and resizing control.  When negative, the
     * table is being initialized or resized: -1 for initialization,
     * else -(1 + the number of active resizing threads).  Otherwise,
     * when table is null, holds the initial table size to use upon
     * creation, or 0 for default. After initialization, holds the
     * next element count value upon which to resize the table.
     *
     *   1. When positive or 0, the table array has not been initialized yet. At initialization time,
     *   if sizeCtl == 0 the default capacity DEFAULT_CAPACITY = 16 is used as the table size n;
     *   if sizeCtl > 0 (a constructor with an explicit initial capacity was used), sizeCtl is used as
     *   the table size n. Afterwards sizeCtl is set to 0.75n, the threshold for the next resize.
     *
     *   2. sizeCtl == -1 means the table array is currently being initialized; this prevents several
     *   threads from initializing it at the same time (a thread that sees -1 knows another thread is
     *   already initializing and does not repeat the work).
     *
     *   3. Other negative values mean a resize is in progress: -N means N-1 threads are resizing.
     *
     *   4. After initialization and after each resize, sizeCtl is set to 0.75n as the new threshold.
     */
    private transient volatile int sizeCtl;
    
    /**  The table array: each index holds that bin's elements (a linked list or a red-black tree)
     * The array of bins. Lazily initialized upon first insertion.
     * Size is always a power of two. Accessed directly by iterators.
     */
    transient volatile Node<K,V>[] table;

    /** The table used during migration: a temporary array with twice the capacity of the original
     * The next table to use; non-null only while resizing.
     */
    private transient volatile Node<K,V>[] nextTable;
Constructors

ConcurrentHashMap offers four constructors similar to HashMap's (1-4), plus a fifth constructor of its own:

    //1. The most commonly used constructor
    ConcurrentHashMap<String,String> map = new ConcurrentHashMap<String,String>();
    
    //2. Constructor specifying an initial capacity
    ConcurrentHashMap<String,String> map2 = new ConcurrentHashMap<String,String>(16);
    
    //3. Constructor specifying an initial capacity and a load factor
    ConcurrentHashMap<String,String> map3 = new ConcurrentHashMap<String,String>(16,0.5f);
    
    //4. Constructor that copies the contents of an existing Map
    ConcurrentHashMap<String,String> map4 = new ConcurrentHashMap<String,String>(new ConcurrentHashMap<String,String>());
    
    //5. Constructor specifying an initial capacity, a load factor and an estimated concurrency level
    ConcurrentHashMap<String,String> map5 = new ConcurrentHashMap<String,String>(16,0.5f,32);

The constructor-related source is shown below:

   /**
     * Creates a new, empty map with an initial table size based on
     * the given number of elements ({@code initialCapacity}), table
     * density ({@code loadFactor}), and number of concurrently
     * updating threads ({@code concurrencyLevel}).
     *
     * @param initialCapacity the initial capacity. The implementation
     * performs internal sizing to accommodate this many elements,
     * given the specified load factor.
     * @param loadFactor the load factor (table density) for
     * establishing the initial table size
     * @param concurrencyLevel the estimated number of concurrently
     * updating threads. The implementation may use this value as
     * a sizing hint.
     * @throws IllegalArgumentException if the initial capacity is
     * negative or the load factor or concurrencyLevel are
     * nonpositive
     */
    public ConcurrentHashMap(int initialCapacity,
                             float loadFactor, int concurrencyLevel) {
        if (!(loadFactor > 0.0f) || initialCapacity < 0 || concurrencyLevel <= 0)
            throw new IllegalArgumentException();
        if (initialCapacity < concurrencyLevel)   // Use at least as many bins
            initialCapacity = concurrencyLevel;   // as estimated threads
            
        // In short: compute initialCapacity / loadFactor and round it up to the nearest power of two (capped at MAXIMUM_CAPACITY); the result is assigned to sizeCtl
        long size = (long)(1.0 + (long)initialCapacity / loadFactor);
        int cap = (size >= (long)MAXIMUM_CAPACITY) ?
            MAXIMUM_CAPACITY : tableSizeFor((int)size);
        // The threshold derived from the arguments, similar to HashMap's threshold field
        this.sizeCtl = cap;
    }
The Node class implements the Map.Entry<K,V> interface and stores the map's entries
    /**
     * Key-value entry.  This class is never exported out as a
     * user-mutable Map.Entry (i.e., one supporting setValue; see
     * MapEntry below), but can be used for read-only traversals used
     * in bulk tasks.  Subclasses of Node with a negative hash field
     * are special, and contain null keys and values (but are never
     * exported).  Otherwise, keys and vals are never null.
     */
    static class Node<K,V> implements Map.Entry<K,V> {
        final int hash;
        final K key;
        volatile V val;
        volatile Node<K,V> next;

        Node(int hash, K key, V val, Node<K,V> next) {
            this.hash = hash;
            this.key = key;
            this.val = val;
            this.next = next;
        }

        public final K getKey()     { return key; }
        public final V getValue()   { return val; }
        public final int hashCode() { return key.hashCode() ^ val.hashCode(); }
        public final String toString() {
            return Helpers.mapEntryToString(key, val);
        }
        public final V setValue(V value) {
            throw new UnsupportedOperationException();
        }

        public final boolean equals(Object o) {
            Object k, v, u; Map.Entry<?,?> e;
            return ((o instanceof Map.Entry) &&
                    (k = (e = (Map.Entry<?,?>)o).getKey()) != null &&
                    (v = e.getValue()) != null &&
                    (k == key || k.equals(key)) &&
                    (v == (u = val) || v.equals(u)));
        }

        /**
         * Virtualized support for map.get(); overridden in subclasses.
         */
        Node<K,V> find(int h, Object k) {
            Node<K,V> e = this;
            if (k != null) {
                do {
                    K ek;
                    if (e.hash == h &&
                        ((ek = e.key) == k || (ek != null && k.equals(ek))))
                        return e;
                } while ((e = e.next) != null);
            }
            return null;
        }
    }
The tableSizeFor method: computes the table size to use for a desired capacity
    /**
     * The bit operations below produce the smallest power of two that is >= the requested capacity cap;
     * the result is then clamped: values < 0 become 1, and values >= the maximum capacity 2^30 are capped at MAXIMUM_CAPACITY
     * Returns a power of two size for the given target capacity.
     */
    static final int tableSizeFor(int cap) {
     // After the OR and shift operations below, every bit of n below its highest set bit is set to 1.
        int n = cap - 1;
        n |= n >>> 1;
        n |= n >>> 2;
        n |= n >>> 4;
        n |= n >>> 8;
        n |= n >>> 16;
        return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
    }
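
To make the rounding concrete, here is a small hypothetical demo (not part of the JDK) that copies the same bit-twiddling and prints the result for a few inputs:

    // Hypothetical demo: same rounding logic as tableSizeFor.
    public class TableSizeForDemo {
        static final int MAXIMUM_CAPACITY = 1 << 30;

        static int tableSizeFor(int cap) {
            int n = cap - 1;
            n |= n >>> 1;
            n |= n >>> 2;
            n |= n >>> 4;
            n |= n >>> 8;
            n |= n >>> 16;
            return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
        }

        public static void main(String[] args) {
            System.out.println(tableSizeFor(10)); // 16
            System.out.println(tableSizeFor(16)); // 16
            System.out.println(tableSizeFor(17)); // 32
        }
    }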
Resizing
tryPresize in detail
    /**     Tries to presize the table; when called from treeifyBin the size passed in is already doubled
     * Tries to presize table to accommodate the given number of elements.
     *
     * @param size number of elements (doesn't need to be perfectly accurate)
     *  size: the target capacity to reach (does not need to be perfectly accurate)
     */
    
    private final void tryPresize(int size) {
        // c is the smallest power of two >= size * 1.5 + 1 (capped at MAXIMUM_CAPACITY)
        int c = (size >= (MAXIMUM_CAPACITY >>> 1)) ? MAXIMUM_CAPACITY :
            tableSizeFor(size + (size >>> 1) + 1);
        int sc;
        // Read sizeCtl; if it is < 0 another thread is already initializing or resizing, so stop looping
        while ((sc = sizeCtl) >= 0) {
            Node<K,V>[] tab = table; int n;
            // If the table array is still empty, it has not been initialized yet
            if (tab == null || (n = tab.length) == 0) {
                n = (sc > c) ? sc : c;
                // CAS sizeCtl to -1, indicating this thread is initializing the table
                if (U.compareAndSwapInt(this, SIZECTL, sc, -1)) {
                    try {
                        // Initialize the table array
                        if (table == tab) {
                            @SuppressWarnings("unchecked")
                            Node<K,V>[] nt = (Node<K,V>[])new Node<?,?>[n];
                            table = nt;
                            // sc = n - n/4 = 0.75n
                            sc = n - (n >>> 2);
                        }
                    } finally {
                        // Set the new threshold
                        sizeCtl = sc;
                    }
                }
            }
            // If the target is no larger than the current threshold, or the table is already at its maximum size, do nothing
            else if (c <= sc || n >= MAXIMUM_CAPACITY)
                break;
            // Otherwise resize from the current table; the tab == table check guards against another thread having replaced the table in the meantime
            else if (tab == table) {
                // resizeStamp(n) produces a stamp for this resize; shifted left by RESIZE_STAMP_SHIFT it becomes a large negative number (see https://blog.csdn.net/u011392897/article/details/60479937)
                int rs = resizeStamp(n);
                // CAS sizeCtl to a negative value, (stamp << RESIZE_STAMP_SHIFT) + 2, and start the transfer
                if (U.compareAndSwapInt(this, SIZECTL, sc,
                                        (rs << RESIZE_STAMP_SHIFT) + 2))
                    transfer(tab, null);
            }
        }
    }

How the table initialization/resize is prevented from being started by multiple threads:

A thread proceeds only while sizeCtl >= 0, which means no other thread is initializing or resizing; it then uses CAS to set sizeCtl to -1 (or to a negative resize stamp). Any thread that subsequently reads a negative sizeCtl knows the work is already in progress and does not repeat it.
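
The same CAS-guard idiom can be shown in isolation. Below is a minimal, self-contained sketch (a hypothetical class, not JDK code) that guards a lazy one-time initialization with a control value and compareAndSet, in the spirit of sizeCtl (AtomicInteger stands in for the Unsafe-based CAS the JDK uses):

    import java.util.concurrent.atomic.AtomicInteger;

    // Hypothetical sketch of the sizeCtl idiom: a control value guarded by CAS
    // so that only one thread performs the one-time initialization.
    public class CasInitDemo {
        private volatile Object[] table;
        private final AtomicInteger sizeCtl = new AtomicInteger(0); // 0 = not yet initialized

        Object[] initTable(int capacity) {
            while (table == null) {
                int sc = sizeCtl.get();
                if (sc < 0) {
                    Thread.yield();                           // someone else is initializing; back off
                } else if (sizeCtl.compareAndSet(sc, -1)) {   // we won the race
                    try {
                        if (table == null) {
                            table = new Object[capacity];
                            sc = capacity - (capacity >>> 2); // 0.75 * capacity, the next threshold
                        }
                    } finally {
                        sizeCtl.set(sc);                      // publish the threshold (and release the "lock")
                    }
                }
            }
            return table;
        }
    }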

transfer in detail
    /**             Data migration after a resize
     * Moves and/or copies the nodes in each bin to new table. See
     * above for explanation.
     *
     *  tab: the original table array before the resize
     *  nextTab: the new array created for the resize, twice the size of the original; the first resizing thread passes null
     */
    private final void transfer(Node<K,V>[] tab, Node<K,V>[] nextTab) {
        int n = tab.length, stride;
        // Compute the stride, i.e. how many bins this thread claims per batch
        if ((stride = (NCPU > 1) ? (n >>> 3) / NCPU : n) < MIN_TRANSFER_STRIDE)
            stride = MIN_TRANSFER_STRIDE; // subdivide range
        if (nextTab == null) {            // initiating
            // Only the first thread to start the migration executes this block; later threads come in with nextTab != null
            try {
                @SuppressWarnings("unchecked")
                Node<K,V>[] nt = (Node<K,V>[])new Node<?,?>[n << 1];
                nextTab = nt;
            } catch (Throwable ex) {      // try to cope with OOME
                sizeCtl = Integer.MAX_VALUE;
                return;
            }
            // Publish the newly created, doubled (2n) array through the nextTable field
            nextTable = nextTab;
            // Set transferIndex to the original table capacity n
            transferIndex = n;
        }
        // Capacity of the new table, 2n
        int nextn = nextTab.length;
        // Create a ForwardingNode used as a placeholder in the old table; it holds a reference to the new table
        ForwardingNode<K,V> fwd = new ForwardingNode<K,V>(nextTab);
        boolean advance = true; // true means this thread may move on to the next bin
        boolean finishing = false; // whether the whole migration has finished
        for (int i = 0, bound = 0;;) {
            Node<K,V> f; int fh;
            /**
            * Slightly tricky: migration walks the original table from its tail towards its head.
            *
            * i: the table index currently being migrated; in the while loop below it starts out at n-1
            * bound: the lower bound of this thread's claimed range; migration of the range stops there
            *
            **/
            while (advance) {
                int nextIndex, nextBound;
                // Not taken on the first pass; afterwards it is taken until this thread's range (or the whole migration) is done
                if (--i >= bound || finishing)
                    advance = false;
                //  nextIndex =   transferIndex = n
                else if ((nextIndex = transferIndex) <= 0) {
                    i = -1;
                    advance = false;
                }
                // Usually taken on the first pass: claim a range by setting i to nextIndex - 1 and computing the lower bound nextBound;
                // this is why migration proceeds from the tail of the table towards the head
                else if (U.compareAndSwapInt
                         (this, TRANSFERINDEX, nextIndex,
                          nextBound = (nextIndex > stride ?
                                       nextIndex - stride : 0))) {
                    bound = nextBound;
                    i = nextIndex - 1;
                    advance = false;
                }
            }
            // Check whether the migration work is done (for this thread, or entirely)
            if (i < 0 || i >= n || i + n >= nextn) {
                int sc;
                if (finishing) {
                    // All bins have been copied: publish nextTab as the new table and
                    // clear nextTable so it does not affect the next resize
                    nextTable = null;
                    table = nextTab;
                    // Set the threshold to 1.5x the old capacity, which is 0.75x the new capacity (1.5n / 2n = 0.75)
                    sizeCtl = (n << 1) - (n >>> 1);
                    return;
                }
                // CAS sizeCtl down by one: this thread has finished its share of the transfer and is leaving the resize
                if (U.compareAndSwapInt(this, SIZECTL, sc = sizeCtl, sc - 1)) {
                    if ((sc - 2) != resizeStamp(n) << RESIZE_STAMP_SHIFT)
                        return;
                    finishing = advance = true;
                    i = n; // recheck before commit
                }
            }
            // If slot i is empty, CAS the ForwardingNode placeholder into it
            else if ((f = tabAt(tab, i)) == null)
                advance = casTabAt(tab, i, null, fwd);
            // If the slot already holds a ForwardingNode (hash == MOVED), it has been migrated already
            else if ((fh = f.hash) == MOVED)
                advance = true; // already processed
            else {
                // Lock the head node of the bin
                synchronized (f) {
                    if (tabAt(tab, i) == f) { // double-check, similar to double-checked locking in the singleton pattern
                        Node<K,V> ln, hn;
                        // Head node hash >= 0 means this bin is a linked list
                        if (fh >= 0) {
                            /**
                              * This part is much like the migration in JDK 1.7's ConcurrentHashMap:
                              * the list is split in two.
                              * First find lastRun in the original list; lastRun and every node after it
                              * share the same destination and are migrated as a whole,
                              * so there is no need to recompute a new position node by node for that tail.
                              * ln and hn are the heads of the "low" list (stays at index i) and the "high" list (goes to index i + n); in practice this is efficient.
                              **/
                            int runBit = fh & n;
                            Node<K,V> lastRun = f;
                            for (Node<K,V> p = f.next; p != null; p = p.next) {
                                int b = p.hash & n;
                                if (b != runBit) {
                                    runBit = b;
                                    lastRun = p;
                                }
                            }
                            if (runBit == 0) {
                                ln = lastRun;
                                hn = null;
                            }
                            else {
                                hn = lastRun;
                                ln = null;
                            }
                            for (Node<K,V> p = f; p != lastRun; p = p.next) {
                                int ph = p.hash; K pk = p.key; V pv = p.val;
                                if ((ph & n) == 0)
                                    ln = new Node<K,V>(ph, pk, pv, ln);
                                else
                                    hn = new Node<K,V>(ph, pk, pv, hn);
                            }
                            // Migrate the original list into the new table
                            // (nodes from index i end up at either index i or index i + n of the new table; see the HashMap resize notes)
                            setTabAt(nextTab, i, ln);
                            setTabAt(nextTab, i + n, hn);
                            // Put the ForwardingNode placeholder at index i of the old table
                            setTabAt(tab, i, fwd);
                            advance = true;
                        }
                        // Migrate a red-black tree bin into the new table
                        else if (f instanceof TreeBin) {
                            TreeBin<K,V> t = (TreeBin<K,V>)f;
                            TreeNode<K,V> lo = null, loTail = null;
                            TreeNode<K,V> hi = null, hiTail = null;
                            int lc = 0, hc = 0;
                            for (Node<K,V> e = t.first; e != null; e = e.next) {
                                int h = e.hash;
                                TreeNode<K,V> p = new TreeNode<K,V>
                                    (h, e.key, e.val, null, null);
                                if ((h & n) == 0) {
                                    if ((p.prev = loTail) == null)
                                        lo = p;
                                    else
                                        loTail.next = p;
                                    loTail = p;
                                    ++lc;
                                }
                                else {
                                    if ((p.prev = hiTail) == null)
                                        hi = p;
                                    else
                                        hiTail.next = p;
                                    hiTail = p;
                                    ++hc;
                                }
                            }
                            ln = (lc <= UNTREEIFY_THRESHOLD) ? untreeify(lo) :
                                (hc != 0) ? new TreeBin<K,V>(lo) : t;
                            hn = (hc <= UNTREEIFY_THRESHOLD) ? untreeify(hi) :
                                (lc != 0) ? new TreeBin<K,V>(hi) : t;
                            setTabAt(nextTab, i, ln);
                            setTabAt(nextTab, i + n, hn);
                            setTabAt(tab, i, fwd);
                            advance = true;
                        }
                    }
                }
            }
        }
    }

How multiple threads cooperate to complete the data migration: transfer contains the check if ((fh = f.hash) == MOVED); when a thread reaches a bin whose head is a ForwardingNode it simply moves on to the next bin. Combined with locking the head node of each bin being migrated, this is all the coordination needed: once a thread finishes a bin it sets that slot to the ForwardingNode, and any other thread that sees the ForwardingNode skips ahead. The threads interleave across different bins to complete the copy, and thread safety is preserved.
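
The cooperation pattern can be sketched in a deliberately simplified form. The hypothetical class below is not the real algorithm (no stride claiming, no ln/hn list splitting); it only shows the MOVED-placeholder idea: a worker locks a bin's head, migrates it, marks the slot as moved, and any other worker that sees the marker skips ahead:

    import java.util.concurrent.atomic.AtomicReferenceArray;

    // Hypothetical, simplified sketch of the transfer cooperation pattern.
    public class TransferSketch {
        static final Object MOVED = new Object(); // plays the role of the ForwardingNode

        static void migrateBins(AtomicReferenceArray<Object> oldTab,
                                AtomicReferenceArray<Object> newTab) {
            for (int i = oldTab.length() - 1; i >= 0; i--) { // tail to head, as in transfer()
                Object head = oldTab.get(i);
                if (head == MOVED)
                    continue;                                // already handled by another thread
                if (head == null) {
                    oldTab.compareAndSet(i, null, MOVED);    // empty bin: just mark it
                    continue;
                }
                synchronized (head) {                        // lock the bin head, like the real code
                    if (oldTab.get(i) == head) {             // double-check after locking
                        newTab.set(i, head);                 // "migrate" the bin (the real code splits it into ln/hn)
                        oldTab.set(i, MOVED);                // publish the forwarding marker
                    }
                }
            }
        }
    }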

helpTransfer in detail
    /**
     * Helps transfer if a resize is in progress.
     */
    final Node<K,V>[] helpTransfer(Node<K,V>[] tab, Node<K,V> f) {
        Node<K,V>[] nextTab; int sc;
        if (tab != null && (f instanceof ForwardingNode) &&
            (nextTab = ((ForwardingNode<K,V>)f).nextTable) != null) {
            int rs = resizeStamp(tab.length);
            while (nextTab == nextTable && table == tab &&
                   (sc = sizeCtl) < 0) {
                if ((sc >>> RESIZE_STAMP_SHIFT) != rs || sc == rs + 1 ||
                    sc == rs + MAX_RESIZERS || transferIndex <= 0)
                    break;
                if (U.compareAndSwapInt(this, SIZECTL, sc, sc + 1)) {
                    transfer(tab, nextTab);
                    break;
                }
            }
            return nextTab;
        }
        return table;
    }

This is a method that helps with an ongoing resize. By the time it is called, the ConcurrentHashMap is guaranteed to already have a nextTable; the method takes that nextTable and calls transfer with it. Looking back at the transfer method above, a thread entering the resize this way goes straight into the copying phase.

Insertion and Update

1. put(k, v): adds an element for the given key, or overwrites the existing one; implemented via putVal.

public V put(K key, V value) {
    return putVal(key, value, false); // putVal here takes fewer parameters than HashMap's putVal
}

2. putIfAbsent(k, v): adds a new element for the key; if an element already exists for the key, its value is not updated. Implemented via putVal.

public V putIfAbsent(K key, V value) {
    return putVal(key, value, true);
}
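
A quick usage example contrasting put and putIfAbsent (hypothetical keys and values; the expected results are noted in the comments):

    import java.util.concurrent.ConcurrentHashMap;

    public class PutDemo {
        public static void main(String[] args) {
            ConcurrentHashMap<String, String> map = new ConcurrentHashMap<>();

            map.put("k", "v1");                 // returns null and inserts k -> v1
            map.put("k", "v2");                 // returns "v1" and overwrites to k -> v2
            map.putIfAbsent("k", "v3");         // returns "v2" and leaves k -> v2 unchanged

            System.out.println(map.get("k"));   // prints v2
            // map.put(null, "x");              // would throw NullPointerException: null keys/values are not allowed
        }
    }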

3. replace(k, v): replaces the value of the element for the given key; if the key is absent nothing is replaced (internally implemented via replaceNode).

    /**
     * {@inheritDoc}
     *
     * @return the previous value associated with the specified key,
     *         or {@code null} if there was no mapping for the key
     * @throws NullPointerException if the specified key or value is null
     */
    public V replace(K key, V value) {
        if (key == null || value == null)
            throw new NullPointerException();
        return replaceNode(key, value, null);
    }

4. replace(K key, V oldValue, V newValue): if an element exists for the key and its value equals oldValue, replace it with newValue (also implemented via replaceNode).

    /**
     * {@inheritDoc}
     *
     * @throws NullPointerException if any of the arguments are null
     */
    public boolean replace(K key, V oldValue, V newValue) {
        if (key == null || oldValue == null || newValue == null)
            throw new NullPointerException();
        return replaceNode(key, newValue, oldValue) != null;
    }
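
The three-argument replace behaves like a compare-and-set on the value. A small hypothetical example:

    import java.util.concurrent.ConcurrentHashMap;

    public class ReplaceDemo {
        public static void main(String[] args) {
            ConcurrentHashMap<String, Integer> counters = new ConcurrentHashMap<>();
            counters.put("hits", 1);

            boolean ok1 = counters.replace("hits", 1, 2); // true: old value matched, hits -> 2
            boolean ok2 = counters.replace("hits", 1, 3); // false: current value is 2, nothing changes
            Integer prev = counters.replace("misses", 5); // null: key absent, nothing is inserted

            System.out.println(ok1 + " " + ok2 + " " + counters.get("hits") + " " + prev);
            // prints: true false 2 null
        }
    }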

5. putAll(Map<? extends K, ? extends V> m): adds every mapping from m into this map.

/**
 * Copies all of the mappings from the specified map to this one.
 * These mappings replace any mappings that this map had for any of the
 * keys currently in the specified map.
 *
 * @param m mappings to be stored in this map
 */
public void putAll(Map<? extends K, ? extends V> m) {
    tryPresize(m.size());
    for (Map.Entry<? extends K, ? extends V> e : m.entrySet())
        putVal(e.getKey(), e.getValue(), false);
}
putVal in detail

    // The actual implementation behind put and putIfAbsent
    /** Implementation for put and putIfAbsent */
    final V putVal(K key, V value, boolean onlyIfAbsent) {
        // Null keys and null values are not allowed
        if (key == null || value == null) throw new NullPointerException();
        // Compute the hash of the key
        int hash = spread(key.hashCode());
        // Records the length of the list at this bin
        int binCount = 0;
        for (Node<K,V>[] tab = table;;) {
            Node<K,V> f; int n, i, fh;
            // If the table is still empty, initialize it
            if (tab == null || (n = tab.length) == 0)
                tab = initTable();
            // Compute the table index for this key's hash, (n - 1) & hash,
            // and read the first node f at that slot
            else if ((f = tabAt(tab, i = (n - 1) & hash)) == null) {
                // If the slot is empty, CAS a new node into it
                if (casTabAt(tab, i, null,
                             new Node<K,V>(hash, key, value, null)))
                    break;                   // no lock when adding to empty bin
            }
            // If a resize is in progress (during migration the bin head's hash is set to MOVED), help with the transfer
            else if ((fh = f.hash) == MOVED)
                tab = helpTransfer(tab, f);
            // Otherwise the first node f at this slot is non-null
            else {
                V oldVal = null;
                synchronized (f) { // acquire the monitor lock of this bin's head node
                    if (tabAt(tab, i) == f) { // double-check after locking
                        if (fh >= 0) { // head node hash >= 0 means this bin is a linked list
                            binCount = 1;
                            // Traverse the list at this bin
                            for (Node<K,V> e = f;; ++binCount) {
                                // If this node's key equals the key being inserted, decide whether to update and exit the loop
                                K ek;
                                if (e.hash == hash &&
                                    ((ek = e.key) == key ||
                                     (ek != null && key.equals(ek)))) {
                                    oldVal = e.val;
                                    // onlyIfAbsent == true means the value is only set when no mapping existed, so do not overwrite here
                                    if (!onlyIfAbsent)
                                        e.val = value;
                                    break;
                                }
                                
                                Node<K,V> pred = e;
                                // Move to the next node in the list
                                if ((e = e.next) == null) {
                                // If the end of the list has been reached, the key is new:
                                // append a node at the tail of the list and exit the loop
                                    pred.next = new Node<K,V>(hash, key,
                                                              value, null);
                                    break;
                                }
                            }
                        }
                        else if (f instanceof TreeBin) { // this bin is a red-black tree
                            Node<K,V> p;
                            binCount = 2;
                            // Insert the node using the red-black tree's put method
                            if ((p = ((TreeBin<K,V>)f).putTreeVal(hash, key,
                                                           value)) != null) {
                                oldVal = p.val;
                                if (!onlyIfAbsent)
                                    p.val = value;
                            }
                        }
                        else if (f instanceof ReservationNode) // reservation (placeholder) node
                            throw new IllegalStateException("Recursive update");
                    }
                }
                // Check how many elements were counted in this bin
                if (binCount != 0) {
                    // Decide whether to convert the list to a red-black tree; the threshold is 8, the same as in HashMap
                    if (binCount >= TREEIFY_THRESHOLD)
                        // Slightly different from HashMap: this call does not always treeify;
                        // if the table length is less than 64 it resizes the table instead of converting to a tree
                        treeifyBin(tab, i); 
                    if (oldVal != null)
                        return oldVal; // put returns the old value
                    break;
                }
            }
        }
        addCount(1L, binCount);
        return null;
    }
spread in detail
    /*
     * Encodings for Node hash fields. See above for explanation.
     */
    static final int MOVED     = -1; // hash for forwarding nodes
    static final int TREEBIN   = -2; // hash for roots of trees
    static final int RESERVED  = -3; // hash for transient reservations
    static final int HASH_BITS = 0x7fffffff; // usable bits of normal node hash
    
    /**
     * Spreads (XORs) higher bits of hash to lower and also forces top
     * bit to 0. Because the table uses power-of-two masking, sets of
     * hashes that vary only in bits above the current mask will
     * always collide. (Among known examples are sets of Float keys
     * holding consecutive whole numbers in small tables.)  So we
     * apply a transform that spreads the impact of higher bits
     * downward. There is a tradeoff between speed, utility, and
     * quality of bit-spreading. Because many common sets of hashes
     * are already reasonably distributed (so don't benefit from
     * spreading), and because we use trees to handle large sets of
     * collisions in bins, we just XOR some shifted bits in the
     * cheapest possible way to reduce systematic lossage, as well as
     * to incorporate impact of the highest bits that would otherwise
     * never be used in index calculations because of table bounds.
     */
     static final int spread(int h) {
        return (h ^ (h >>> 16)) & HASH_BITS;
     }
initTable in detail
/** Creates and initializes the table, using the size recorded in sizeCtl
 * Initializes table, using the size recorded in sizeCtl.
 */
private final Node<K,V>[] initTable() {
    Node<K,V>[] tab; int sc;
    while ((tab = table) == null || tab.length == 0) {
        // Read sizeCtl into sc; a negative value means another thread is already initializing the table
        if ((sc = sizeCtl) < 0)
            Thread.yield(); // lost initialization race; just spin - give up the CPU time slice so other threads can run
        // CAS (a hardware-level atomic operation) sizeCtl to -1, meaning this thread has won the "lock"
        else if (U.compareAndSwapInt(this, SIZECTL, sc, -1)) {
            try {
                if ((tab = table) == null || tab.length == 0) {
                    // If no constructor set an initial capacity, use DEFAULT_CAPACITY == 16; otherwise use the precomputed sizeCtl
                    int n = (sc > 0) ? sc : DEFAULT_CAPACITY;
                    // Create the table array with that capacity
                    @SuppressWarnings("unchecked")
                    Node<K,V>[] nt = (Node<K,V>[])new Node<?,?>[n];
                    table = tab = nt;
                    // Equivalent to sc = n - n/4 = 0.75n
                    sc = n - (n >>> 2);
                }
            } finally {
                // Store the new sizeCtl; the next resize happens once the element count reaches 0.75 of the capacity
                sizeCtl = sc;
            }
            break;
        }
    }
    return tab;
}
treeifyBin in detail
/**     Converts a bin's linked list to a red-black tree
 * Replaces all linked nodes in bin at given index unless table is
 * too small, in which case resizes instead.
 */
 // tab: the table array holding all the data; index: the bin index to consider for treeification
private final void treeifyBin(Node<K,V>[] tab, int index) {
    Node<K,V> b; int n;
    if (tab != null) {
        // If the table length is less than MIN_TREEIFY_CAPACITY (64), resize the table instead of treeifying
        if ((n = tab.length) < MIN_TREEIFY_CAPACITY)
            tryPresize(n << 1);
        else if ((b = tabAt(tab, index)) != null && b.hash >= 0) {
            // Lock the head node of the bin
            synchronized (b) {
                if (tabAt(tab, index) == b) {
                    TreeNode<K,V> hd = null, tl = null;
                    // Build the TreeNode chain that will back the red-black tree
                    for (Node<K,V> e = b; e != null; e = e.next) {
                        TreeNode<K,V> p =
                            new TreeNode<K,V>(e.hash, e.key, e.val,
                                              null, null);
                        if ((p.prev = tl) == null)
                            hd = p;
                        else
                            tl.next = p;
                        tl = p;
                    }
                    // Store the red-black tree (wrapped in a TreeBin) at this index
                    setTabAt(tab, index, new TreeBin<K,V>(hd));
                }
            }
        }
    }
}
addCount in detail
/**  Updates the element count (baseCount / counter cells) and checks whether a resize is needed.
 * Adds to count, and if table is too small and not already
 * resizing, initiates transfer. If already resizing, helps
 * perform transfer if work is available.  Rechecks occupancy
 * after a transfer to see if another resize is already needed
 * because resizings are lagging additions.
 *
 * @param x the count to add
 * @param check if <0, don't check resize, if <= 1 only check if uncontended
 */
private final void addCount(long x, int check) {
    CounterCell[] as; long b, s;
    // Try to update baseCount with CAS; fall back to the counter cells on contention
    if ((as = counterCells) != null ||
        !U.compareAndSwapLong(this, BASECOUNT, b = baseCount, s = b + x)) {
        CounterCell a; long v; int m;
        boolean uncontended = true;
        if (as == null || (m = as.length - 1) < 0 ||
            (a = as[ThreadLocalRandom.getProbe() & m]) == null ||
            !(uncontended =
              U.compareAndSwapLong(a, CELLVALUE, v = a.value, v + x))) {
            fullAddCount(x, uncontended);
            return;
        }
        if (check <= 1)
            return;
        s = sumCount();
    }
    // If check >= 0, test whether a resize is needed
    if (check >= 0) {
        Node<K,V>[] tab, nt; int n, sc;
        while (s >= (long)(sc = sizeCtl) && (tab = table) != null &&
               (n = tab.length) < MAXIMUM_CAPACITY) {
            int rs = resizeStamp(n);
            if (sc < 0) {
                if ((sc >>> RESIZE_STAMP_SHIFT) != rs || sc == rs + 1 ||
                    sc == rs + MAX_RESIZERS || (nt = nextTable) == null ||
                    transferIndex <= 0)
                    break;
                // Another thread is already resizing: join in and help with the transfer
                if (U.compareAndSwapInt(this, SIZECTL, sc, sc + 1))
                    transfer(tab, nt);
            }
            // This thread is the only/first one to start the resize; nextTable is still null at this point
            else if (U.compareAndSwapInt(this, SIZECTL, sc,
                                         (rs << RESIZE_STAMP_SHIFT) + 2))
                transfer(tab, null);
            s = sumCount();
        }
    }
}
replaceNode in detail
    /**
     * Implementation for the four public remove/replace methods:
     * Replaces node value with v, conditional upon match of cv if
     * non-null.  If resulting value is null, delete.
     */
    final V replaceNode(Object key, V value, Object cv) {
        // Compute the hash for this key
        int hash = spread(key.hashCode());
        for (Node<K,V>[] tab = table;;) {
            Node<K,V> f; int n, i, fh;
            // If the table is empty or the bin for this key is empty, there is nothing to do
            if (tab == null || (n = tab.length) == 0 ||
                (f = tabAt(tab, i = (n - 1) & hash)) == null)
                break;
            // A resize is in progress: help with the transfer first
            else if ((fh = f.hash) == MOVED)
                tab = helpTransfer(tab, f);
            else {
                V oldVal = null;
                boolean validated = false;
                synchronized (f) {
                    if (tabAt(tab, i) == f) {
                        // Linked-list bin
                        if (fh >= 0) {
                            validated = true;
                            for (Node<K,V> e = f, pred = null;;) {
                                K ek;
                                // Found the node for this key
                                if (e.hash == hash &&
                                    ((ek = e.key) == key ||
                                     (ek != null && key.equals(ek)))) {
                                    V ev = e.val;
                                    // Proceed when cv == null (unconditional) or cv matches the node's current value
                                    if (cv == null || cv == ev ||
                                        (ev != null && cv.equals(ev))) {
                                        oldVal = ev;
                                        if (value != null)
                                            e.val = value;
                                        // Removing a non-head node: link the previous node to the next node
                                        else if (pred != null)
                                            pred.next = e.next;
                                        // If the head node f itself is being removed, point slot i at f.next
                                        else
                                            setTabAt(tab, i, e.next);
                                    }
                                    break;
                                }
                                pred = e;
                                if ((e = e.next) == null)
                                    break;
                            }
                        }
                        // Removal/replacement in a red-black tree bin
                        else if (f instanceof TreeBin) {
                            validated = true;
                            TreeBin<K,V> t = (TreeBin<K,V>)f;
                            TreeNode<K,V> r, p;
                            if ((r = t.root) != null &&
                                (p = r.findTreeNode(hash, key, null)) != null) {
                                V pv = p.val;
                                if (cv == null || cv == pv ||
                                    (pv != null && cv.equals(pv))) {
                                    oldVal = pv;
                                    if (value != null)
                                        p.val = value;
                                    else if (t.removeTreeNode(p))
                                        setTabAt(tab, i, untreeify(t.first));
                                }
                            }
                        }
                        else if (f instanceof ReservationNode)
                            throw new IllegalStateException("Recursive update");
                    }
                }
                if (validated) {
                    if (oldVal != null) {
                        if (value == null)
                            addCount(-1L, -1); // a node was removed, so decrement the count
                        // On success return the old value
                        return oldVal;
                    }
                    break;
                }
            }
        }
        return null;
    }
Removal (see the replaceNode method above)

1. remove(k): removes the element for the given key.

    public V remove(Object key) {
        return replaceNode(key, null, null);
    }

2. remove(k, v): removes the element for the given key only if its current value equals v.

    public boolean remove(Object key, Object value) {
        if (key == null)
            throw new NullPointerException();
        return value != null && replaceNode(key, null, value) != null;
    }


3. clear(): removes all elements from the map.

    public void clear() {
        long delta = 0L; // negative number of deletions
        int i = 0;
        Node<K,V>[] tab = table;
        while (tab != null && i < tab.length) {
            int fh;
            Node<K,V> f = tabAt(tab, i);
            if (f == null)
                ++i;
            else if ((fh = f.hash) == MOVED) {
                tab = helpTransfer(tab, f);
                i = 0; // restart
            }
            else {
                synchronized (f) {
                    if (tabAt(tab, i) == f) {
                        Node<K,V> p = (fh >= 0 ? f :
                                       (f instanceof TreeBin) ?
                                       ((TreeBin<K,V>)f).first : null);
                        while (p != null) {
                            --delta;
                            p = p.next;
                        }
                        setTabAt(tab, i++, null);
                    }
                }
            }
        }
        if (delta != 0L)
            addCount(delta, -1);
    }
Lookup

1. get(key): returns the value of the element for the given key if present, otherwise null.

The get operation proceeds as follows:

1. Compute the hash for the key and determine its index in the table array: (n - 1) & h

2. Read the head node at that index; if it is non-null, continue, otherwise return null

3. Check whether the head node itself is the node being looked for; if so, return its value and stop

4. If the head node's hash is negative, the bin is a red-black tree (or is being migrated); delegate to the node's find method

5. Otherwise the bin is a linked list; traverse it looking for a matching node

    public V get(Object key) {
        Node<K,V>[] tab; Node<K,V> e, p; int n, eh; K ek;
        // Compute the hash for the key
        int h = spread(key.hashCode());
        // Check that the table is non-empty and that the bin for this key has a head node
        if ((tab = table) != null && (n = tab.length) > 0 &&
            (e = tabAt(tab, (n - 1) & h)) != null) {
            // If the head node is the node being looked for, return its value
            if ((eh = e.hash) == h) {
                if ((ek = e.key) == key || (ek != null && key.equals(ek)))
                    return e.val;
            }
            // If the head node's hash is < 0, the bin is a red-black tree or is being migrated; use find
            else if (eh < 0)
                return (p = e.find(h, key)) != null ? p.val : null;
            // Otherwise traverse the linked list looking for a node with this key
            while ((e = e.next) != null) {
                if (e.hash == h &&
                    ((ek = e.key) == key || (ek != null && key.equals(ek))))
                    return e.val;
            }
        }
        return null;
    }

2. getOrDefault(K key, V defaultValue): provided in JDK 8; returns the value for the given key if present, otherwise defaultValue.

    public V getOrDefault(Object key, V defaultValue) {
        V v;
        return (v = get(key)) == null ? defaultValue : v;
    }

3. containsKey(key): returns true if the map contains a mapping for the given key, false otherwise.

    public boolean containsKey(Object key) {
        return get(key) != null;
    }

4. containsValue(value): returns true if any mapping in the map has this value, false otherwise. As the source shows, it simply traverses all nodes and stops as soon as it finds an equal value.

    public boolean containsValue(Object value) {
        if (value == null)
            throw new NullPointerException();
        Node<K,V>[] t;
        if ((t = table) != null) {
            Traverser<K,V> it = new Traverser<K,V>(t, t.length, 0, t.length);
            for (Node<K,V> p; (p = it.advance()) != null; ) {
                V v;
                if ((v = p.val) == value || (v != null && value.equals(v)))
                    return true;
            }
        }
        return false;
    }
