ConcurrentHashMap

ConcurrentHashMap extends AbstractMap, implements the ConcurrentMap interface, and also implements the Serializable marker interface.

Why use ConcurrentHashMap

  1. HashMap is not thread-safe; under concurrent modification it can even end up in an infinite loop. See the article 疫苗:JAVA HASHMAP的死循环 for a detailed analysis.

  2. Hashtable is thread-safe, but it synchronizes every method on the same lock, which makes it inefficient, so it is not recommended.

ConcurrentHashMap is a thread-safe and efficient HashMap, and its efficiency comes from reducing lock granularity. A ConcurrentHashMap is first of all an array of Segments: final Segment<K,V>[] segments;. Each Segment in turn holds an array of HashEntry nodes: transient volatile HashEntry<K,V>[] table;. Every key-value pair we store lives in a HashEntry node.
(Figure: the Segment array, where each Segment holds its own HashEntry[] table.)
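
To make that two-level layout concrete, here is a minimal compilable sketch (not the JDK source; the class and field names below are made up purely for illustration) of how each Segment owns its own HashEntry-style bucket array:

    // Illustrative only: segments[] -> per-segment table[] -> chained entry nodes.
    class LayoutSketch<K, V> {
        // Simplified stand-in for HashEntry: key plus a volatile value and collision chain.
        static final class EntrySketch<K, V> {
            final K key;
            volatile V value;                    // volatile so readers see the latest write
            volatile EntrySketch<K, V> next;     // next node in the same bucket
            EntrySketch(K key, V value, EntrySketch<K, V> next) {
                this.key = key; this.value = value; this.next = next;
            }
        }
        // Simplified stand-in for Segment: one lockable unit owning one bucket array.
        static final class SegmentSketch<K, V> {
            volatile EntrySketch<K, V>[] table;  // per-segment bucket array
        }
        final SegmentSketch<K, V>[] segments;    // top-level array; writers only contend per segment
        @SuppressWarnings("unchecked")
        LayoutSketch(int segmentCount) {
            segments = (SegmentSketch<K, V>[]) new SegmentSketch[segmentCount];
        }
    }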

Constants

    /**
     * Default initial capacity of the whole table, used when not otherwise
     * specified in a constructor; it is split across the Segments' HashEntry[] tables.
     */
    static final int DEFAULT_INITIAL_CAPACITY = 16;

    /**
     * Default load factor, used when not otherwise specified in a constructor.
     */
    static final float DEFAULT_LOAD_FACTOR = 0.75f;

    /**
     * Default concurrency level. The number of segments (ssize) is derived from it:
     * the smallest power of two that is not less than concurrencyLevel.
     */
    static final int DEFAULT_CONCURRENCY_LEVEL = 16;

    /**
     * Maximum capacity of the whole table (2^30); it must be a power of two
     * so that entries stay indexable with ints.
     */
    static final int MAXIMUM_CAPACITY = 1 << 30;

    /**
     * Minimum capacity of each Segment's HashEntry[] table (2), so that a lazily
     * constructed segment does not need an immediate resize on first use.
     */
    static final int MIN_SEGMENT_TABLE_CAPACITY = 2;

    /**
     * Maximum number of segments, i.e. the maximum length of the Segment array (2^16).
     */
    static final int MAX_SEGMENTS = 1 << 16; // slightly conservative

    /**
     * Number of unsynchronized retries in size and containsValue
     * methods before resorting to locking. This is used to avoid
     * unbounded retries if tables undergo continuous modification
     * which would make it impossible to obtain an accurate result.
     */
    static final int RETRIES_BEFORE_LOCK = 2;

Fields

    /**
     * Mask used to map a given key's hash to a Segment; the high bits of the
     * hash select the segment index.
     */
    final int segmentMask;

    /**
     * Shift value used for indexing into the Segment array.
     */
    final int segmentShift;

    /**
     * The top-level structure: a ConcurrentHashMap is first of all an array of Segments.
     */
    final Segment<K,V>[] segments;

    transient Set<K> keySet;
    transient Set<Map.Entry<K,V>> entrySet;
    transient Collection<V> values;

Constructors

ConcurrentHashMap provides five constructors:
(1) ConcurrentHashMap(): constructs an empty ConcurrentHashMap with the default initial capacity (16), the default load factor (0.75) and the default concurrency level (16).

(2) ConcurrentHashMap(int initialCapacity): constructs an empty ConcurrentHashMap with the specified initial capacity, the default load factor (0.75) and the default concurrency level (16).

(3) ConcurrentHashMap(int initialCapacity, float loadFactor): constructs an empty ConcurrentHashMap with the specified initial capacity, the specified load factor and the default concurrency level (16).

(4) ConcurrentHashMap(int initialCapacity, float loadFactor, int concurrencyLevel): constructs an empty ConcurrentHashMap with the specified initial capacity, load factor and concurrency level.

    /**
     * Creates a new, empty map with the specified initial
     * capacity, load factor and concurrency level.
     */
    @SuppressWarnings("unchecked")
    public ConcurrentHashMap(int initialCapacity,
                             float loadFactor, int concurrencyLevel) {
        // Validate the arguments
        if (!(loadFactor > 0) || initialCapacity < 0 || concurrencyLevel <= 0)
            throw new IllegalArgumentException();
        if (concurrencyLevel > MAX_SEGMENTS)
            concurrencyLevel = MAX_SEGMENTS;
        // Compute the number of segments and record it in ssize
        int sshift = 0;
        int ssize = 1;
        // Make ssize the smallest power of two that is not less than concurrencyLevel.
        // With the default concurrencyLevel of 16 the Segment array gets 16 slots; in the ideal
        // case each concurrently writing thread lands on its own segment and writers never contend.
        while (ssize < concurrencyLevel) {
            ++sshift;
            ssize <<= 1;
        }
        this.segmentShift = 32 - sshift;
        // segmentMask is all ones in binary (ssize - 1)
        this.segmentMask = ssize - 1;
        if (initialCapacity > MAXIMUM_CAPACITY)
            initialCapacity = MAXIMUM_CAPACITY;
        int c = initialCapacity / ssize;
        if (c * ssize < initialCapacity)
            ++c;
        int cap = MIN_SEGMENT_TABLE_CAPACITY;
        // cap is the capacity of each segment's table: always a power of two, and at least MIN_SEGMENT_TABLE_CAPACITY (2)
        while (cap < c)
            cap <<= 1;
        Segment<K,V> s0 =
            new Segment<K,V>(loadFactor, (int)(cap * loadFactor),
                             (HashEntry<K,V>[])new HashEntry[cap]);
        // With the default parameters this creates a Segment array of length 16
        Segment<K,V>[] ss = (Segment<K,V>[])new Segment[ssize];
        // Eagerly publish s0 as segments[0]: with the defaults, a 2-slot HashEntry array, load factor 0.75, threshold (int)(2 * 0.75) = 1
        UNSAFE.putOrderedObject(ss, SBASE, s0); // ordered write of segments[0]
        this.segments = ss;
    }
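
As a sanity check on the sizing arithmetic, the following standalone sketch (a hypothetical helper, with variable names chosen to mirror the constructor above) reproduces what the default parameters initialCapacity = 16, loadFactor = 0.75f and concurrencyLevel = 16 work out to:

    // Replays the constructor's sizing math for the default parameters.
    public class SizingSketch {
        public static void main(String[] args) {
            int initialCapacity = 16; float loadFactor = 0.75f; int concurrencyLevel = 16;
            int sshift = 0, ssize = 1;
            while (ssize < concurrencyLevel) { ++sshift; ssize <<= 1; }
            System.out.println("ssize        = " + ssize);               // 16 segments
            System.out.println("segmentShift = " + (32 - sshift));       // 28
            System.out.println("segmentMask  = " + (ssize - 1));         // 15 (binary 1111)
            int c = initialCapacity / ssize;                             // 1
            if (c * ssize < initialCapacity) ++c;
            int cap = 2;                                                 // MIN_SEGMENT_TABLE_CAPACITY
            while (cap < c) cap <<= 1;
            System.out.println("per-segment cap = " + cap);              // 2
            System.out.println("threshold       = " + (int) (cap * loadFactor)); // (int) 1.5 = 1
        }
    }

So with the defaults there are 16 segments, segmentShift is 28, segmentMask is 15, and segments[0] starts as a 2-slot table whose resize threshold is 1.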

    /**
     * Segments are specialized versions of hash tables.  This
     * subclasses from ReentrantLock opportunistically, just to
     * simplify some locking and avoid separate construction.
     */
    static final class Segment<K,V> extends ReentrantLock implements Serializable {
        /**
         * The maximum number of times to tryLock in a prescan before
         * possibly blocking on acquire in preparation for a locked
         * segment operation. On multiprocessors, using a bounded
         * number of retries maintains cache acquired while locating
         * nodes.
         */
        static final int MAX_SCAN_RETRIES =
            Runtime.getRuntime().availableProcessors() > 1 ? 64 : 1;

        /**
         * The per-segment hash table: an array of HashEntry buckets.
         */
        transient volatile HashEntry<K,V>[] table;

        /**
         * The number of elements. Accessed only either within locks
         * or among other volatile reads that maintain visibility.
         */
        transient int count;

        /**
         * The total number of mutative operations in this segment.
         * Even though this may overflow 32 bits, it provides
         * sufficient accuracy for stability checks in CHM isEmpty()
         * and size() methods.  Accessed only either within locks or
         * among other volatile reads that maintain visibility.
         */
        transient int modCount;

        /**
         * When the number of elements in this segment exceeds the threshold, the
         * segment's HashEntry table is rehashed (resized).
         */
        transient int threshold;

        /**
         * The load factor for this segment's table. Even though the value is the same for
         * all segments, it is replicated here to avoid needing a link to the outer object.
         */
        final float loadFactor;

        Segment(float lf, int threshold, HashEntry<K,V>[] tab) {
            this.loadFactor = lf;
            this.threshold = threshold;
            this.table = tab;
        }
        // remaining methods omitted
    }
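
Because Segment itself extends ReentrantLock, an update only needs to lock the single segment that owns the key's bucket; writers working on other segments are unaffected, and reads are mostly lock-free. A minimal sketch of that per-segment locking idiom (an illustrative class, not the JDK code):

    import java.util.concurrent.locks.ReentrantLock;

    // Illustrative only: the lock-the-segment-you-touch idiom that Segment enables.
    final class SegmentLockSketch extends ReentrantLock {
        private int count;          // segment-local state, guarded by this segment's lock
        void writeOne() {
            lock();                 // only writers targeting this segment contend here
            try {
                count++;            // mutate under the segment's own lock
            } finally {
                unlock();           // other segments were never blocked
            }
        }
    }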

(5) ConcurrentHashMap(Map<? extends K, ? extends V> m): constructs a ConcurrentHashMap containing all the mappings of the given map, with an initial capacity large enough to hold them, the default load factor (0.75) and the default concurrency level (16).

    /**
     * Creates a new map with the same mappings as the given map.
     * The map is created with a capacity of 1.5 times the number
     * of mappings in the given map or 16 (whichever is greater),
     * and a default load factor (0.75) and concurrencyLevel (16).
     */
    public ConcurrentHashMap(Map<? extends K, ? extends V> m) {
        this(Math.max((int) (m.size() / DEFAULT_LOAD_FACTOR) + 1,
                      DEFAULT_INITIAL_CAPACITY),
             DEFAULT_LOAD_FACTOR, DEFAULT_CONCURRENCY_LEVEL);
        putAll(m);
    }
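
As a usage sketch (with made-up keys): copying a 100-entry map asks for an initial capacity of max((int)(100 / 0.75f) + 1, 16) = 134, keeps the default load factor and concurrency level, and then putAll copies the entries:

    import java.util.HashMap;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class CopyConstructorSketch {
        public static void main(String[] args) {
            Map<String, Integer> src = new HashMap<String, Integer>();
            for (int i = 0; i < 100; i++)
                src.put("k" + i, i);                 // 100 source mappings
            // Sized as max((int)(100 / 0.75f) + 1, 16) = 134, loadFactor 0.75, concurrencyLevel 16.
            ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<String, Integer>(src);
            System.out.println(map.size());          // 100
        }
    }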

    /**
     * Copies all of the mappings from the specified map to this one.
     * These mappings replace any mappings that this map had for any of the
     * keys currently in the specified map.
     */
    public void putAll(Map<? extends K, ? extends V> m) {
        for (Map.Entry<? extends K, ? extends V> e : m.entrySet())
            put(e.getKey(), e.getValue());
    }

    /**
     * Maps the specified key to the specified value in this table.
     * Neither the key nor the value can be null.
     */
    @SuppressWarnings("unchecked")
    public V put(K key, V value) {
        Segment<K,V> s;
        if (value == null)
            throw new NullPointerException();
        // Re-hash the key's hashCode to reduce collisions
        int hash = hash(key);
        // Compute the segment index j from the high bits of the hash
        int j = (hash >>> segmentShift) & segmentMask;
        // If segments[j] has not been created yet, create it lazily via ensureSegment(j)
        if ((s = (Segment<K,V>)UNSAFE.getObject          // nonvolatile; recheck
             (segments, (j << SSHIFT) + SBASE)) == null) //  in ensureSegment
            s = ensureSegment(j);
        // Delegate to the segment: insert the key-value pair into its HashEntry[] table
        return s.put(key, hash, value, false);
    }
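
To see how j selects a segment, the following standalone sketch (the sample hash value is arbitrary; the real hash() additionally mixes the bits of the key's hashCode) replays the shift-and-mask step with the default sizing:

    // Replays j = (hash >>> segmentShift) & segmentMask for the default 16 segments.
    public class SegmentIndexSketch {
        public static void main(String[] args) {
            int concurrencyLevel = 16;
            int sshift = 0, ssize = 1;
            while (ssize < concurrencyLevel) { ++sshift; ssize <<= 1; }   // ssize = 16, sshift = 4
            int segmentShift = 32 - sshift;                               // 28
            int segmentMask = ssize - 1;                                  // binary 1111
            int hash = 0xCAFEBABE;                                        // a sample, already-mixed hash
            int j = (hash >>> segmentShift) & segmentMask;                // top 4 bits pick the segment
            System.out.println("segment index j = " + j);                 // 12 (hex C)
        }
    }

In other words, the top sshift bits of the re-hashed key choose the segment, and the segment's own put then chooses the bucket inside its HashEntry[] table.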

(To be continued)
