j.u.c Collections, Part 2: ConcurrentHashMap (JDK 1.8)

Why write another article about ConcurrentHashMap? A few reasons:

1. The JDK 1.8 implementation differs greatly from the 1.6 version I wrote about last time; I only discovered this today. At an interview last year I was asked about exactly this topic, and I confidently answered questions about synchronization, locking, and efficiency. Looking at the 1.8 source today, almost everything I said was wrong.

2. My previous article was too general. It only walked through the code flow without getting to the root of the problems; this time I want to settle the unanswered questions one by one.

 

So this article will answer the following questions:

1. What does the basic structure look like, and how do the key operations (insert, update, delete) affect it?

2. Traversal does not throw an exception the way Hashtable's can. How is that achieved?

3. It is also faster than Hashtable. Where exactly is it faster, and why?

 

I. Basic structure & basic operations

We all know that HashMap's data structure is an array plus linked lists. I used to think the fancier name for this should be an adjacency list, but searching online I found very few people actually call it that.

// the table array
/**
     * The array of bins. Lazily initialized upon first insertion.
     * Size is always a power of two. Accessed directly by iterators.
     */
    transient volatile Node<K,V>[] table;

    /**
     * The next table to use; non-null only while resizing.
     */
    private transient volatile Node<K,V>[] nextTable;

// the linked-list node
static class Node<K,V> implements Map.Entry<K,V> {
        final int hash;
        final K key;
        volatile V val;
        volatile Node<K,V> next;
        ......
}

Why are there two arrays here? The comment explains it: the second table is used while the map is being resized. For example, with an array of 16 slots and the default load factor of 0.75, the threshold is 16 * 0.75 = 12, so inserting the 13th element triggers a resize, creating a new array of size 16 << 1 = 32; that array's threshold is 32 * 0.75 = 24, so the next resize happens on the 25th insert. If you want to see how the JDK grows the table, look at the code below.

 

// call chain: putVal() -> addCount() -> transfer()

private final void transfer(Node<K,V>[] tab, Node<K,V>[] nextTab) {
        int n = tab.length, stride;
        if ((stride = (NCPU > 1) ? (n >>> 3) / NCPU : n) < MIN_TRANSFER_STRIDE)
            stride = MIN_TRANSFER_STRIDE; // subdivide range
        if (nextTab == null) {            // initiating
            try {
                @SuppressWarnings("unchecked")
                Node<K,V>[] nt = (Node<K,V>[])new Node<?,?>[n << 1];// note: the new table is double the size
......
}
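The threshold arithmetic above can be sketched as follows. This is a standalone illustration, not JDK code; the class and method names are made up, and the JDK actually computes the threshold as n - (n >>> 2), which equals n * 0.75 for power-of-two sizes:

```java
public class ResizeThresholds {
    static final float LOAD_FACTOR = 0.75f; // the default load factor

    // a resize is triggered once the element count exceeds capacity * LOAD_FACTOR
    static int threshold(int capacity) {
        return (int) (capacity * LOAD_FACTOR);
    }

    // the table always doubles, matching n << 1 in transfer()
    static int grow(int capacity) {
        return capacity << 1;
    }

    public static void main(String[] args) {
        int cap = 16;
        System.out.println(threshold(cap));        // 12 -> the 13th insert triggers a resize
        System.out.println(grow(cap));             // 32
        System.out.println(threshold(grow(cap)));  // 24 -> the 25th insert triggers the next one
    }
}
```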

 

So on insertion, the key's hash is mapped to one element of the current array, i.e., a Node; we then walk that Node's chain looking for the key being added. If it is found, the key already exists and the value is updated; otherwise a new node is appended. After the insert, the size is checked against the threshold, and if it has been reached the table-growing operation described above runs. Note that the function mapping a hash to an array slot has changed in this version.

/**
     * Spreads (XORs) higher bits of hash to lower and also forces top
     * bit to 0. Because the table uses power-of-two masking, sets of
     * hashes that vary only in bits above the current mask will
     * always collide. (Among known examples are sets of Float keys
     * holding consecutive whole numbers in small tables.)  So we
     * apply a transform that spreads the impact of higher bits
     * downward. There is a tradeoff between speed, utility, and
     * quality of bit-spreading. Because many common sets of hashes
     * are already reasonably distributed (so don't benefit from
     * spreading), and because we use trees to handle large sets of
     * collisions in bins, we just XOR some shifted bits in the
     * cheapest possible way to reduce systematic lossage, as well as
     * to incorporate impact of the highest bits that would otherwise
     * never be used in index calculations because of table bounds.
     */
static final int spread(int h) {
        return (h ^ (h >>> 16)) & HASH_BITS;
}
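A quick sketch of how spread() plus power-of-two masking yields a bin index. The class and method names here are mine; HASH_BITS is 0x7fffffff in the JDK source, and the masking step (n - 1) & hash is the same one used in putVal() below:

```java
public class SpreadDemo {
    static final int HASH_BITS = 0x7fffffff; // usable bits of a normal node hash

    // mirrors ConcurrentHashMap.spread(): mix high bits down, force the sign bit to 0
    static int spread(int h) {
        return (h ^ (h >>> 16)) & HASH_BITS;
    }

    // bin index for a table of length n, where n is a power of two
    static int indexFor(int h, int n) {
        return (n - 1) & spread(h);
    }

    public static void main(String[] args) {
        int h = "hello".hashCode();
        System.out.println(indexFor(h, 16)); // always within [0, 15]
    }
}
```

Forcing the sign bit to zero matters because negative hashes are reserved for special nodes such as MOVED (-1).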

final V putVal(K key, V value, boolean onlyIfAbsent) {
        if (key == null || value == null) throw new NullPointerException();
        int hash = spread(key.hashCode()); // the new spread() function
        int binCount = 0;
        for (Node<K,V>[] tab = table;;) {
            Node<K,V> f; int n, i, fh;
            if (tab == null || (n = tab.length) == 0)
                tab = initTable();
            else if ((f = tabAt(tab, i = (n - 1) & hash)) == null) { // tabAt is also new
                if (casTabAt(tab, i, null,
                             new Node<K,V>(hash, key, value, null)))
                    break;                   // no lock when adding to empty bin
            }
            else if ((fh = f.hash) == MOVED)
                tab = helpTransfer(tab, f);
            else {
                V oldVal = null;
                synchronized (f) {
                    if (tabAt(tab, i) == f) {
                        if (fh >= 0) {
                            binCount = 1;
                            for (Node<K,V> e = f;; ++binCount) {
                                K ek;
                                if (e.hash == hash &&
                                    ((ek = e.key) == key ||
                                     (ek != null && key.equals(ek)))) {
                                    oldVal = e.val;
                                    if (!onlyIfAbsent)
                                        e.val = value;
                                    break;
                                }
                                Node<K,V> pred = e;
                                if ((e = e.next) == null) {
                                    pred.next = new Node<K,V>(hash, key,
                                                              value, null);
                                    break;
                                }
                            }
                        }
                        else if (f instanceof TreeBin) {
                            Node<K,V> p;
                            binCount = 2;
                            if ((p = ((TreeBin<K,V>)f).putTreeVal(hash, key,
                                                           value)) != null) {
                                oldVal = p.val;
                                if (!onlyIfAbsent)
                                    p.val = value;
                            }
                        }
                    }
                }
                if (binCount != 0) {
                    if (binCount >= TREEIFY_THRESHOLD)
                        treeifyBin(tab, i);
                    if (oldVal != null)
                        return oldVal;
                    break;
                }
            }
        }
        addCount(1L, binCount);
        return null;
    }
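The onlyIfAbsent flag in putVal() above is exactly what separates put() from putIfAbsent(). A quick usage example of that public API:

```java
import java.util.concurrent.ConcurrentHashMap;

public class PutValDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        map.put("a", 1);                       // putVal(..., onlyIfAbsent = false)
        Integer old = map.putIfAbsent("a", 2); // putVal(..., onlyIfAbsent = true): no overwrite
        System.out.println(old + " " + map.get("a")); // the existing value wins: 1 1
    }
}
```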

tabAt is the method that locates a slot from the hash; its source is as follows:

/*
     * Volatile access methods are used for table elements as well as
     * elements of in-progress next table while resizing.  All uses of
     * the tab arguments must be null checked by callers.  All callers
     * also paranoically precheck that tab's length is not zero (or an
     * equivalent check), thus ensuring that any index argument taking
     * the form of a hash value anded with (length - 1) is a valid
     * index.  Note that, to be correct wrt arbitrary concurrency
     * errors by users, these checks must operate on local variables,
     * which accounts for some odd-looking inline assignments below.
     * Note that calls to setTabAt always occur within locked regions,
     * and so in principle require only release ordering, not
     * full volatile semantics, but are currently coded as volatile
     * writes to be conservative.
     */

    @SuppressWarnings("unchecked")
    static final <K,V> Node<K,V> tabAt(Node<K,V>[] tab, int i) {
        return (Node<K,V>)U.getObjectVolatile(tab, ((long)i << ASHIFT) + ABASE);
    }

U is an instance of sun.misc.Unsafe, a low-level, unsafe collection of mostly native methods, used mainly for optimizations in multi-threaded code.
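At the application level, the same "volatile read/CAS on an array element" effect can be approximated with AtomicReferenceArray. This is only a sketch of the idea, not how the JDK itself does it; the JDK goes through Unsafe directly for speed:

```java
import java.util.concurrent.atomic.AtomicReferenceArray;

public class VolatileArrayDemo {
    public static void main(String[] args) {
        // plain Java arrays give no per-element volatile semantics;
        // AtomicReferenceArray provides get (a volatile read) and
        // compareAndSet (a CAS), mirroring what tabAt()/casTabAt() do
        AtomicReferenceArray<String> table = new AtomicReferenceArray<>(16);
        boolean installed = table.compareAndSet(3, null, "first"); // like casTabAt
        System.out.println(installed + " " + table.get(3));        // like tabAt
    }
}
```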

 

II. How safe traversal is achieved

When it comes to traversal, the pattern we use most often looks like this:

ConcurrentHashMap<K,V> map;
Iterator<Map.Entry<K,V>> iterator = map.entrySet().iterator();
while (iterator.hasNext()) {
    iterator.next();
}

Next, let's look at the entrySet() method:

public Set<Map.Entry<K,V>> entrySet() {
        EntrySetView<K,V> es;
        return (es = entrySet) != null ? es : (entrySet = new EntrySetView<K,V>(this));
    }

The method wraps everything in an EntrySetView. Note the this argument: the current map is passed along, quite simply so the view can reach the map's array and linked lists, i.e., the data. Now let's look at EntrySetView:

static final class EntrySetView<K,V> extends CollectionView<K,V,Map.Entry<K,V>>
        implements Set<Map.Entry<K,V>>, java.io.Serializable {
        private static final long serialVersionUID = 2249069246763182397L;
        EntrySetView(ConcurrentHashMap<K,V> map) { super(map); }
......
// the iterator method
public Iterator<Map.Entry<K,V>> iterator() {
            ConcurrentHashMap<K,V> m = map;
            Node<K,V>[] t;
            int f = (t = m.table) == null ? 0 : t.length;
            return new EntryIterator<K,V>(t, f, 0, f, m);
        }
......
}

The iterator is likewise produced by a wrapper class, EntryIterator. Note that its constructor takes five arguments: (the map's node array, the array length, 0, the array length, a reference to the map itself). Let's see how this seemingly simple class works.

static final class EntryIterator<K,V> extends BaseIterator<K,V>
        implements Iterator<Map.Entry<K,V>> {
        EntryIterator(Node<K,V>[] tab, int index, int size, int limit,
                      ConcurrentHashMap<K,V> map) {
            super(tab, index, size, limit, map);
        }

        public final Map.Entry<K,V> next() {
            Node<K,V> p;
            if ((p = next) == null)
                throw new NoSuchElementException();
            K k = p.key;
            V v = p.val;
            lastReturned = p;
            advance();
            return new MapEntry<K,V>(k, v, map);
        }
    }

Here next() just takes the next node's key and value, wraps them in a MapEntry, and returns it. How the next node is actually found is the crucial part; look at the advance() method.

/**
         * Advances if possible, returning next valid node, or null if none.
         */
        final Node<K,V> advance() {
            Node<K,V> e;
            if ((e = next) != null)
                e = e.next;
            for (;;) {
                Node<K,V>[] t; int i, n;  // must use locals in checks
                if (e != null)
                    return next = e;
                if (baseIndex >= baseLimit || (t = tab) == null ||
                    (n = t.length) <= (i = index) || i < 0)
                    return next = null;
                if ((e = tabAt(t, i)) != null && e.hash < 0) {// special node types all have negative hash values
                    if (e instanceof ForwardingNode) {
                        tab = ((ForwardingNode<K,V>)e).nextTable;
                        e = null;
                        pushState(t, i, n);
                        continue;
                    }
                    else if (e instanceof TreeBin)
                        e = ((TreeBin<K,V>)e).first;
                    else
                        e = null;
                }
                if (stack != null)
                    recoverState(n);
                else if ((index = i + baseSize) >= n)
                    index = ++baseIndex; // visit upper slots if present
            }
        }

In the code above, pay attention to this part:

if (e instanceof ForwardingNode) {
                        tab = ((ForwardingNode<K,V>)e).nextTable;
                        e = null;
                        pushState(t, i, n);
                        continue;
                    }

The intent is clear: when the current Node is a ForwardingNode, traversal switches to nextTable as its current table. In the data-structure section at the start we saw that there are two tables, the second of which is used only while the array is being resized; the resize method transfer(tab, nextTab) was also shown earlier. Here is a bit more of it, showing where ForwardingNode comes from.

private final void transfer(Node<K,V>[] tab, Node<K,V>[] nextTab) {
        int n = tab.length, stride;
        if ((stride = (NCPU > 1) ? (n >>> 3) / NCPU : n) < MIN_TRANSFER_STRIDE)
            stride = MIN_TRANSFER_STRIDE; // subdivide range
        if (nextTab == null) {            // initiating
            try {
                @SuppressWarnings("unchecked")
                Node<K,V>[] nt = (Node<K,V>[])new Node<?,?>[n << 1];
                nextTab = nt;
            } catch (Throwable ex) {      // try to cope with OOME
                sizeCtl = Integer.MAX_VALUE;
                return;
            }
            nextTable = nextTab;
            transferIndex = n;
        }
        int nextn = nextTab.length;
        ForwardingNode<K,V> fwd = new ForwardingNode<K,V>(nextTab);// here: planted in old bins as they are moved
......
}

A ForwardingNode marks a bin whose contents are being moved to the new table during a resize, and the hash of such a node is always MOVED (-1).

By now the picture is roughly clear: whenever the array is being resized there are two tables, one serving all outside operations and one serving the internal resize; only when the adjustment is completely finished does the new table take over as the main one.
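The practical consequence is a weakly consistent iterator: unlike the fail-fast iterators of HashMap and Hashtable, modifying the map mid-iteration never throws ConcurrentModificationException. A small single-threaded demonstration (class name is mine):

```java
import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class WeaklyConsistentDemo {
    // iterate while inserting: a fail-fast iterator would throw
    // ConcurrentModificationException here; ConcurrentHashMap's does not
    static int run() {
        ConcurrentHashMap<Integer, String> map = new ConcurrentHashMap<>();
        for (int i = 0; i < 8; i++) map.put(i, "v" + i);

        Iterator<Map.Entry<Integer, String>> it = map.entrySet().iterator();
        while (it.hasNext()) {
            it.next();
            map.put(999, "added-mid-iteration"); // structural change, no exception
        }
        return map.size();
    }

    public static void main(String[] args) {
        System.out.println(run()); // 9: the 8 originals plus key 999
    }
}
```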

 

III. Where the speed comes from

To ask where it is faster, you have to specify the conditions (concurrent access or not) and the baseline (HashMap or Hashtable; since we are discussing concurrency, the comparison here is naturally with Hashtable).

The point of the paragraph above is that every data structure has its limits. In many interviews I have asked why ConcurrentHashMap is better, and the answer I got was "because it's newer", followed by the retort "if it weren't better, why would they write a new one?", which left me momentarily speechless.

Back to business. From the walkthrough above, ConcurrentHashMap beats Hashtable in two places:

 

1. Finer-grained locking

Hashtable achieves synchronization by marking every method synchronized, whereas ConcurrentHashMap locks a single bin (one array slot together with the chain hanging off it). This multiplies the number of locks and therefore the achievable concurrency, but it also makes lock management more complex and lock overhead larger, which is exactly why it does not go further and put a lock on every individual node within a chain.
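The per-bin locking idea can be sketched with a toy striped-lock structure. This is a deliberate simplification with made-up names; the real map does not keep a separate lock array but synchronizes on the first node of the bin itself:

```java
public class StripedCounterMap {
    private final Object[] locks; // one lock per bin, standing in for the bin-head node
    private final int[] counts;

    StripedCounterMap(int bins) { // bins must be a power of two for the masking below
        locks = new Object[bins];
        counts = new int[bins];
        for (int i = 0; i < bins; i++) locks[i] = new Object();
    }

    void increment(int key) {
        int bin = key & (locks.length - 1); // power-of-two masking, as in the map
        synchronized (locks[bin]) {         // only this bin blocks, not the whole map
            counts[bin]++;
        }
    }

    int get(int key) {
        int bin = key & (locks.length - 1);
        synchronized (locks[bin]) {
            return counts[bin];
        }
    }
}
```

Two threads hitting different bins never contend, which is the whole point; under Hashtable's single lock they always would.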

 

2. Trading space for time

ConcurrentHashMap maintains an extra table during resizing to avoid stalling on that most expensive operation. This brings two benefits: the costly transfer work can be shared with other threads, and traversal at any moment still sees a complete set of bins, all without taking an extra lock on the whole map.

 

 

 
