HashMap Source Code Revisited (JDK 1.8 Source Analysis)

Before reading this article, I recommend first reading this one on how HashMap is implemented in JDK 7:
HashMap Source Code Revisited (JDK 1.7 Source Analysis)

The HashMap class hierarchy

[Figure: HashMap class diagram]
From the class diagram we can see that HashMap implements the Cloneable and Serializable interfaces, which support object cloning and serialization respectively.
We can also see that HashMap extends AbstractMap, and AbstractMap already implements the Map interface, so why does HashMap implement Map again?

According to Josh Bloch, the creator of the Java collections framework, this was simply a mistake. There are many similar cases in the collections framework: when he first wrote it, he thought the pattern might be valuable in some places, until he realized he was wrong. Apparently the JDK maintainers never considered this small slip worth fixing, so it has stayed that way.

AbstractMap provides a skeletal implementation of the Map interface, so subclasses do not have to implement every method Map defines, and yet HashMap still declares `implements Map` on top of that anyway...

The HashMap data structure

In JDK 1.8, HashMap is built from an array + linked lists + red-black trees; the red-black tree is newly added as an underlying data structure. The structure became more complex, but also more efficient. When a value is stored into the map, the key's hash is computed, and that hash determines the array slot. On a hash collision, entries in the same slot are stored as a linked list (hash collisions were explained in the earlier Object source-code analysis). If such a list grows too long, HashMap converts it into a red-black tree, and after that conversion lookup speed improves dramatically.

Because bucket elements are initially stored as a linked list, lookup within a bucket is O(n), while a tree structure improves it to O(log n). When the list is short, even a full traversal is very fast, but as the list keeps growing, query performance inevitably suffers, which is why the conversion to a tree is needed. As for why the threshold is 8, feel free to discuss in the comments.

[Figure: array + linked list + red-black tree layout]
If you want to learn about red-black trees, see this article: 五分钟搞懂什么是红黑树 (Understand Red-Black Trees in Five Minutes).

Fields and constants

    /**
     * The default initial capacity - MUST be a power of two.
     */
    static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; // aka 16

The default initial capacity is 16; it must be a power of two.
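Why a power of two? Because the table index is then a cheap bitmask instead of a modulo. A minimal sketch (the class and method names here are mine, not the JDK's):

```java
// When the capacity n is a power of two, (n - 1) is an all-ones bitmask,
// so (n - 1) & hash selects the low bits of the hash as the bucket index.
// For non-negative hashes this equals hash % n, but is cheaper.
public class PowerOfTwoIndexDemo {
    static int indexFor(int hash, int capacity) {
        return (capacity - 1) & hash; // valid only for power-of-two capacities
    }

    public static void main(String[] args) {
        int capacity = 16; // DEFAULT_INITIAL_CAPACITY
        System.out.println(indexFor(5, capacity));  // 5
        System.out.println(indexFor(21, capacity)); // 5 (21 % 16 == 5)
        System.out.println(indexFor(37, capacity)); // 5 (37 % 16 == 5)
    }
}
```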

    /**
     * The maximum capacity, used if a higher value is implicitly specified
     * by either of the constructors with arguments.
     * MUST be a power of two <= 1<<30.
     */
    static final int MAXIMUM_CAPACITY = 1 << 30;

The maximum table capacity; it must also be a power of two (at most 1 << 30).

    /**
     * The load factor used when none specified in constructor.
     */
    static final float DEFAULT_LOAD_FACTOR = 0.75f;

The default load factor is 0.75: when the number of elements exceeds loadFactor * table length, the table is resized. As for why 0.75 specifically, feel free to weigh in in the comments.

    /**
     * The bin count threshold for using a tree rather than list for a
     * bin.  Bins are converted to trees when adding an element to a
     * bin with at least this many nodes. The value must be greater
     * than 2 and should be at least 8 to mesh with assumptions in
     * tree removal about conversion back to plain bins upon
     * shrinkage.
     */
    static final int TREEIFY_THRESHOLD = 8;

The treeify threshold: a bucket's linked list is converted to a red-black tree when a new element is added to a bucket that already holds at least this many nodes (provided the table is large enough; see MIN_TREEIFY_CAPACITY below).

    /**
     * The bin count threshold for untreeifying a (split) bin during a
     * resize operation. Should be less than TREEIFY_THRESHOLD, and at
     * most 6 to mesh with shrinkage detection under removal.
     */
    static final int UNTREEIFY_THRESHOLD = 6;

The untreeify threshold: during a resize, a split tree bin is converted back to a linked list when it holds 6 or fewer nodes.

    /**
     * The smallest table capacity for which bins may be treeified.
     * (Otherwise the table is resized if too many nodes in a bin.)
     * Should be at least 4 * TREEIFY_THRESHOLD to avoid conflicts
     * between resizing and treeification thresholds.
     */
    static final int MIN_TREEIFY_CAPACITY = 64;

Buckets may be treeified only once the table capacity reaches this value; below it, an overfull bucket triggers a resize instead of treeification. To avoid conflicts between the resize and treeify thresholds, this value must be at least 4 * TREEIFY_THRESHOLD.

    /**
     * The table, initialized on first use, and resized as
     * necessary. When allocated, length is always a power of two.
     * (We also tolerate length zero in some operations to allow
     * bootstrapping mechanics that are currently not needed.)
     */
    transient Node<K,V>[] table;

The node array: the hash table itself, allocated lazily on first use.

    /**
     * Holds cached entrySet(). Note that AbstractMap fields are used
     * for keySet() and values().
     */
    transient Set<Map.Entry<K,V>> entrySet;

Caches the entry set returned by entrySet() for iteration.

    /**
     * The number of key-value mappings contained in this map.
     */
    transient int size;

The number of key-value mappings in the map.

/**
 * The number of times this HashMap has been structurally modified
 * Structural modifications are those that change the number of mappings in
 * the HashMap or otherwise modify its internal structure (e.g.,
 * rehash).  This field is used to make iterators on Collection-views of
 * the HashMap fail-fast.  (See ConcurrentModificationException).
 */
transient int modCount;

The number of structural modifications, used to make iterators over the map's collection views fail-fast.

/**
 * The next size value at which to resize (capacity * load factor).
 *
 * @serial
 */
// (The javadoc description is true upon serialization.
// Additionally, if the table array has not been allocated, this
// field holds the initial array capacity, or zero signifying
// DEFAULT_INITIAL_CAPACITY.)
int threshold;  // the next size at which to resize, computed as capacity * load factor

/**
 * The load factor for the hash table.
 *
 * @serial
 */
final float loadFactor; // the load factor of this hash table

Constructors

[Figure: the four HashMap constructors]
HashMap provides four constructors in total.

/**
 * Constructs an empty <tt>HashMap</tt> with the default initial capacity
 * (16) and the default load factor (0.75).
 */
public HashMap() {
    this.loadFactor = DEFAULT_LOAD_FACTOR; // all other fields defaulted
}

Constructs an empty HashMap with the default initial capacity (16) and the default load factor (0.75). The backing array is allocated lazily, on the first put.

    /**
     * Constructs an empty <tt>HashMap</tt> with the specified initial
     * capacity and the default load factor (0.75).
     *
     * @param  initialCapacity the initial capacity.
     * @throws IllegalArgumentException if the initial capacity is negative.
     */
    public HashMap(int initialCapacity) {
        this(initialCapacity, DEFAULT_LOAD_FACTOR);
    }

Constructs an empty HashMap with the specified initial capacity and the default load factor (0.75).

    /**
     * Constructs an empty <tt>HashMap</tt> with the specified initial
     * capacity and load factor.
     *
     * @param  initialCapacity the initial capacity
     * @param  loadFactor      the load factor
     * @throws IllegalArgumentException if the initial capacity is negative
     *         or the load factor is nonpositive
     */
    public HashMap(int initialCapacity, float loadFactor) {
        if (initialCapacity < 0) // a negative initial capacity is an error
            throw new IllegalArgumentException("Illegal initial capacity: " +
                                               initialCapacity);
        if (initialCapacity > MAXIMUM_CAPACITY) // cap the initial capacity at the maximum
            initialCapacity = MAXIMUM_CAPACITY;
        if (loadFactor <= 0 || Float.isNaN(loadFactor)) // the load factor must be a positive number
            throw new IllegalArgumentException("Illegal load factor: " +
                                               loadFactor);
        this.loadFactor = loadFactor;
        this.threshold = tableSizeFor(initialCapacity); // round the capacity up to a power of two
    }

    /**
     * Returns a power of two size for the given target capacity.
     * Finds the smallest power of two >= cap. (People who write bit tricks like this are wizards, ha.)
     */
    static final int tableSizeFor(int cap) {
        int n = cap - 1;
        n |= n >>> 1;
        n |= n >>> 2;
        n |= n >>> 4;
        n |= n >>> 8;
        n |= n >>> 16;
        return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
    }

At first sight this method can be baffling; the bit operations above are not easy to read, and I did not understand what it was doing on my first pass either. After tracing the operations by hand, though, its purpose becomes clear. In one sentence: it finds the smallest power of two greater than or equal to cap. Here is an illustration of tableSizeFor:
[Figure: step-by-step illustration of tableSizeFor]
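You can verify this behavior by running the method standalone. Below is a copy of the same logic quoted above, wrapped in a demo class of my own (MAXIMUM_CAPACITY inlined):

```java
// A standalone copy of JDK 8's tableSizeFor for experimentation.
// The cascade of ORs smears the highest set bit of (cap - 1) into every
// lower bit position, so n + 1 becomes the next power of two.
public class TableSizeForDemo {
    static final int MAXIMUM_CAPACITY = 1 << 30;

    static int tableSizeFor(int cap) {
        int n = cap - 1;
        n |= n >>> 1;
        n |= n >>> 2;
        n |= n >>> 4;
        n |= n >>> 8;
        n |= n >>> 16;
        return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
    }

    public static void main(String[] args) {
        // Each result is the smallest power of two >= cap
        System.out.println(tableSizeFor(1));   // 1
        System.out.println(tableSizeFor(10));  // 16
        System.out.println(tableSizeFor(16));  // 16
        System.out.println(tableSizeFor(17));  // 32
    }
}
```

Note the initial `cap - 1`: without it, an input that is already a power of two (like 16) would be rounded up to the next one (32).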

Now we come to the most important method: put.

/**
 * Associates the specified value with the specified key in this map.
 * If the map previously contained a mapping for the key, the old
 * value is replaced.
 *
 * @param key key with which the specified value is to be associated
 * @param value value to be associated with the specified key
 * @return the previous value associated with <tt>key</tt>, or
 *         <tt>null</tt> if there was no mapping for <tt>key</tt>.
 *         (A <tt>null</tt> return can also indicate that the map
 *         previously associated <tt>null</tt> with <tt>key</tt>.)
 */
public V put(K key, V value) {
    return putVal(hash(key), key, value, false, true);
}

put simply delegates to putVal, computing the key's hash first.

/**
 * Computes key.hashCode() and spreads (XORs) higher bits of hash
 * to lower.  Because the table uses power-of-two masking, sets of
 * hashes that vary only in bits above the current mask will
 * always collide. (Among known examples are sets of Float keys
 * holding consecutive whole numbers in small tables.)  So we
 * apply a transform that spreads the impact of higher bits
 * downward. There is a tradeoff between speed, utility, and
 * quality of bit-spreading. Because many common sets of hashes
 * are already reasonably distributed (so don't benefit from
 * spreading), and because we use trees to handle large sets of
 * collisions in bins, we just XOR some shifted bits in the
 * cheapest possible way to reduce systematic lossage, as well as
 * to incorporate impact of the highest bits that would otherwise
 * never be used in index calculations because of table bounds.
 */
static final int hash(Object key) {
    int h;
    return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
}

When the key is null the hash is 0; otherwise the key's hashCode() is XORed with itself shifted right (unsigned) by 16 bits.

But why do it this way? Why not use the value produced by the key's hashCode method directly?

As the JDK comment above explains, the table index is computed as (n - 1) & hash, so only the low bits of the hash participate; hash codes that differ only in the high bits would always collide. XORing the high half into the low half spreads that information downward and cheaply reduces collisions. If you have a better explanation, feel free to share it in the comments.
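A small sketch makes the effect concrete. The two hash codes below differ only above bit 16, so in a 16-slot table they would collide without the spreading step (the demo class is mine, not the JDK's):

```java
// Demonstrates how h ^ (h >>> 16) folds the high 16 bits of a hash code
// into the low 16 bits that the table mask actually uses.
public class HashSpreadDemo {
    static int spread(int h) { // same transform as HashMap.hash for non-null keys
        return h ^ (h >>> 16);
    }

    public static void main(String[] args) {
        int a = 0x10005; // differs from b only in bits above 16
        int b = 0x20005;
        int mask = 16 - 1; // table of capacity 16

        // Raw hash codes: both map to bucket 5 -> guaranteed collision
        System.out.println((a & mask) + " vs " + (b & mask));             // 5 vs 5
        // Spread hash codes: the high bits now influence the index
        System.out.println((spread(a) & mask) + " vs " + (spread(b) & mask)); // 4 vs 7
    }
}
```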

Next, putVal's parameters:
hash: the key's hash value
key: the original key
value: the value to store
onlyIfAbsent: if true, do not overwrite an existing value
evict: if false, the table is in creation mode
[Figure: putVal source, part 1]
[Figure: putVal source, part 2]
One difference from JDK 7 worth noting, for the case where the bucket is still a linked list: JDK 7 inserts a new node at the head of the list to avoid a traversal, while JDK 8 walks the list anyway to count its length (so it can treeify once the chain passes 8 nodes), and therefore appends the new node at the tail.

putVal does roughly the following:

If the table is empty, initialize it via the resize path
Check whether the key is already present; if it is, decide by condition whether to replace the old value with the new one
If it is not, link the new key-value pair into the bucket's list, and treeify the list if its length warrants it
If the number of mappings exceeds the threshold, resize
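The steps above can be sketched as a stripped-down putVal. This is not the JDK's code: it does chaining only, with no tree conversion, no actual resize, and no onlyIfAbsent handling, and all names here are mine:

```java
import java.util.Objects;

// A much-simplified putVal: lazy table allocation, replace-on-match,
// tail insertion, and a threshold check.
public class MiniHashMap<K, V> {
    static class Node<K, V> {
        final int hash; final K key; V value; Node<K, V> next;
        Node(int hash, K key, V value) { this.hash = hash; this.key = key; this.value = value; }
    }

    Node<K, V>[] table;
    int size;
    int threshold = 12; // 16 * 0.75, fixed here since this sketch never resizes

    @SuppressWarnings("unchecked")
    public V put(K key, V value) {
        if (table == null)                       // step 1: allocate the table lazily
            table = (Node<K, V>[]) new Node[16];
        int h = (key == null) ? 0 : key.hashCode() ^ (key.hashCode() >>> 16);
        int i = (table.length - 1) & h;
        Node<K, V> prev = null;
        for (Node<K, V> e = table[i]; e != null; prev = e, e = e.next) {
            if (e.hash == h && Objects.equals(e.key, key)) {
                V old = e.value;                 // step 2: key found, replace the value
                e.value = value;
                return old;
            }
        }
        Node<K, V> node = new Node<>(h, key, value);
        if (prev == null) table[i] = node;       // step 3: empty bucket,
        else prev.next = node;                   // ...or append at the tail (JDK 8 style);
                                                 // the real code treeifies long chains here
        if (++size > threshold) {
            // step 4: the real code calls resize() here; omitted in this sketch
        }
        return null;
    }
}
```

Like the real HashMap, put returns the previous value for the key, or null if the key was absent.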
References:
https://segmentfault.com/a/1190000012926722
https://blog.csdn.net/v123411739/article/details/78996181
