Learning the JDK Source: HashMap

HashMap Class Diagram

(class diagram image omitted)

HashMap Overview

  • Permits null keys and null values.
  • Roughly equivalent to Hashtable, except that it is unsynchronized and permits nulls.
  • Makes no guarantee about iteration order; in particular, it does not guarantee that the order will remain constant over time.
  • Provides constant-time get and put operations, assuming the hash function disperses elements reasonably well among the buckets.
  • If iteration performance matters, it is important not to set the initial capacity too high or the load factor too low. A higher load factor reduces space overhead but increases lookup cost. So when setting a HashMap's initial capacity, take into account the expected number of entries and the load factor, so as to minimize the number of rehash operations.
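The last point can be made concrete. The sketch below pre-sizes a HashMap for an expected number of entries; `capacityFor` is a hypothetical helper, not a JDK method (Guava's `Maps.newHashMapWithExpectedSize` applies the same idea):

```java
import java.util.HashMap;
import java.util.Map;

public class PresizedMap {
    // Hypothetical helper (not a JDK method): choose an initial capacity
    // so that `expected` entries fit without a resize under the default
    // load factor of 0.75.
    static int capacityFor(int expected) {
        return (int) Math.ceil(expected / 0.75);
    }

    public static void main(String[] args) {
        int expected = 1000;
        // 1000 / 0.75 = 1334; HashMap rounds this up internally to 2048,
        // whose resize threshold (2048 * 0.75 = 1536) holds all 1000 entries.
        Map<String, Integer> map = new HashMap<>(capacityFor(expected));
        for (int i = 0; i < expected; i++) {
            map.put("key" + i, i);
        }
        System.out.println(map.size()); // 1000
    }
}
```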

How HashMap Works

Implementation details

  1. The constructor only records the initial capacity and load factor; it does not allocate the backing array. Creation of the array is deferred to the first put. (Space is only created when it is actually needed.)
  2. table, entrySet, size, and modCount are all declared transient.
    Q: Why are these four fields transient?
    A: HashMaps are frequently serialized for data transfer. What we care about on the wire is the data being received, not the map's internal structure (bucket layout, etc.). So how does the receiver rebuild the structure from the data? HashMap implements the serialization methods writeObject and readObject by hand: it writes only the entries and reconstructs the table on deserialization.
    **Note:** marking a field transient carries two implications:
    First, the field genuinely does not need to be serialized, e.g. internal data structures.
    Second, when a class explicitly does its own serialization and deserialization via writeObject/readObject, its internal fields should all be transient.
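The pattern is easy to reproduce outside the JDK. Below is a minimal sketch of it: IntBag is an invented class, whose backing array is transient and whose writeObject/readObject serialize only the logical contents, rebuilding the structure on the receiving side:

```java
import java.io.*;

// Minimal sketch of the pattern HashMap uses: the backing array is
// transient, and writeObject/readObject serialize only the logical
// contents, rebuilding the structure on deserialization.
// (IntBag and its fields are illustrative, not part of the JDK.)
public class IntBag implements Serializable {
    private transient int[] table = new int[8]; // structure: not serialized
    private transient int size = 0;

    public void add(int v) {
        if (size == table.length) {
            int[] bigger = new int[table.length * 2];
            System.arraycopy(table, 0, bigger, 0, size);
            table = bigger;
        }
        table[size++] = v;
    }

    public int size() { return size; }

    private void writeObject(ObjectOutputStream out) throws IOException {
        out.defaultWriteObject();
        out.writeInt(size);               // write only the logical data
        for (int i = 0; i < size; i++) out.writeInt(table[i]);
    }

    private void readObject(ObjectInputStream in)
            throws IOException, ClassNotFoundException {
        in.defaultReadObject();
        int n = in.readInt();
        table = new int[Math.max(8, n)];  // rebuild the structure locally
        for (int i = 0; i < n; i++) table[size++] = in.readInt();
    }
}
```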
  3. HashMap subtracts 1 from the requested capacity before the bit operations. For any input, the cap - 1 makes the binary arithmetic come out right when the input is already an exact power of two.
	/**
	 * Returns a power of two size for the given target capacity.
	 * The shift distances 1, 2, 4, 8, 16 smear the highest set bit into
	 * every lower position (1 + 2 + 4 + 8 + 16 = 31 covers all 32 bits
	 * of an int), turning n into a string of ones, so n + 1 is the next
	 * power of two. This keeps every HashMap capacity a power of two.
     */
    static final int tableSizeFor(int cap) {
        int n = cap - 1;
        n |= n >>> 1;
        n |= n >>> 2;
        n |= n >>> 4;
        n |= n >>> 8;
        n |= n >>> 16;
        return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
    }
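A quick way to see what the bit-smearing produces is to run the method standalone (TableSizeDemo is just a driver, not JDK code):

```java
// Standalone driver that copies tableSizeFor from the JDK source above
// to show what the bit-smearing produces for a few inputs.
public class TableSizeDemo {
    static final int MAXIMUM_CAPACITY = 1 << 30;

    static int tableSizeFor(int cap) {
        int n = cap - 1;
        n |= n >>> 1;
        n |= n >>> 2;
        n |= n >>> 4;
        n |= n >>> 8;
        n |= n >>> 16;
        return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
    }

    public static void main(String[] args) {
        // Any input is rounded up to the next power of two; an exact
        // power of two maps to itself (that is what the cap - 1 is for).
        System.out.println(tableSizeFor(13)); // 16
        System.out.println(tableSizeFor(16)); // 16
        System.out.println(tableSizeFor(17)); // 32
    }
}
```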
  4. The hash value is likewise computed with bit operations.
	/**
     * Computes key.hashCode() and spreads (XORs) higher bits of hash
     * to lower.  Because the table uses power-of-two masking, sets of
     * hashes that vary only in bits above the current mask will
     * always collide. (Among known examples are sets of Float keys
     * holding consecutive whole numbers in small tables.)  So we
     * apply a transform that spreads the impact of higher bits
     * downward. There is a tradeoff between speed, utility, and
     * quality of bit-spreading. Because many common sets of hashes
     * are already reasonably distributed (so don't benefit from
     * spreading), and because we use trees to handle large sets of
     * collisions in bins, we just XOR some shifted bits in the
     * cheapest possible way to reduce systematic lossage, as well as
     * to incorporate impact of the highest bits that would otherwise
     * never be used in index calculations because of table bounds.
     */
    static final int hash(Object key) {
        int h;
        // XOR in the top 16 bits so that high-order bits, which the index mask would otherwise never see, still affect the bucket index
        return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }
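The effect of the XOR can be seen with made-up hash codes that differ only above the index mask; `spread` below copies the JDK formula:

```java
// Demonstrates why HashMap XORs the high half of the hash into the low
// half before masking: hashes that differ only above the mask would
// otherwise all land in the same bucket. The raw int "hash codes" here
// are made up for illustration.
public class HashSpreadDemo {
    static int spread(int h) {
        return h ^ (h >>> 16);   // same transform as HashMap.hash(key)
    }

    public static void main(String[] args) {
        int mask = 16 - 1;                          // table of capacity 16
        int[] hashes = {0x10000, 0x20000, 0x30000}; // differ only in high bits
        for (int h : hashes) {
            // Without spreading, every hash masks to bucket 0;
            // with spreading, the high bits reach the index.
            System.out.println((h & mask) + " -> " + (spread(h) & mask));
        }
    }
}
```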
  5. To deal with a bin whose chain has grown too long, HashMap falls back to a red-black tree.
	/**
     * The bin count threshold for using a tree rather than list for a
     * bin.  Bins are converted to trees when adding an element to a
     * bin with at least this many nodes. The value must be greater
     * than 2 and should be at least 8 to mesh with assumptions in
     * tree removal about conversion back to plain bins upon
     * shrinkage.
     * Reaching this count is the trigger for treeification, but it is
     * not sufficient on its own: the table length must also be at least
     * MIN_TREEIFY_CAPACITY (64).
     */
    static final int TREEIFY_THRESHOLD = 8;

    /**
     * The bin count threshold for untreeifying a (split) bin during a
     * resize operation. Should be less than TREEIFY_THRESHOLD, and at
     * most 6 to mesh with shrinkage detection under removal.
     */
    static final int UNTREEIFY_THRESHOLD = 6;

    /**
     * The smallest table capacity for which bins may be treeified.
     * (Otherwise the table is resized if too many nodes in a bin.)
     * Should be at least 4 * TREEIFY_THRESHOLD to avoid conflicts
     * between resizing and treeification thresholds.
     * If the table length is below 64, a bin whose chain reaches
     * TREEIFY_THRESHOLD is still not treeified; a resize is triggered instead.
     */
    static final int MIN_TREEIFY_CAPACITY = 64;

Example:

	// In a HashMap with capacity 16, all of these values are stored in the bucket at index 3.
	// Putting 147 (the 9th node of that chain) hits the TREEIFY_THRESHOLD condition.
	// At that point the current capacity is checked against MIN_TREEIFY_CAPACITY:
	// if it is smaller, a resize of the HashMap is triggered instead;
	// only at capacity >= 64 is the chain in bucket 3 actually treeified.
	int[] data = new int[] {19, 35, 51, 67, 83, 99, 115, 131, 147};
	Map<Integer, Integer> t1 = new HashMap<>();
	for (int i : data) {
		t1.put(i, i);
	}
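The claim that every key lands in bucket 3 can be checked by computing the index the same way HashMap does (`Integer.hashCode()` returns the value itself, and for values this small the spreading XOR is a no-op):

```java
// Computes the bucket index for each key exactly as HashMap would
// at the default capacity of 16: index = hash(key) & (capacity - 1).
public class BucketIndexDemo {
    public static void main(String[] args) {
        int[] data = {19, 35, 51, 67, 83, 99, 115, 131, 147};
        int mask = 16 - 1;                  // default capacity
        for (int key : data) {
            int h = Integer.hashCode(key);  // the value itself
            int hash = h ^ (h >>> 16);      // HashMap.hash: no-op for small ints
            System.out.println(key + " -> bucket " + (hash & mask));
        }
    }
}
```

Every line prints bucket 3, because each key is of the form 19 + 16k and therefore congruent to 3 mod 16.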

Q: How does HashMap use resizing to deal with an overly long chain in a single bin?
A: When a chain reaches the threshold of 8 but the table is not yet eligible for treeification, the map resizes instead. Take the default capacity 16 as an example.
The index is computed as index = hash & (16 - 1), which for these small integer keys is simply data[i] % 16, so every value in the data array lands in the bin at index 3. When that chain reaches 8 nodes and the capacity is still below the treeify minimum of 64, a resize is triggered.
After doubling, the single overlong chain is split in two. Part of the HashMap before the resize:
bucket idx | chain (capacity = 16)
3 | 19 -> 35 -> 51 -> 67 -> 83 -> 99 -> 115 -> 131 -> 147
Putting 147 into the map triggers the doubling, capacity = 32:
bucket idx | chain (capacity = 32)
3 | 35 -> 67 -> 99 -> 131
19 | 19 -> 51 -> 83 -> 115 -> 147
This is also part of why HashMap always doubles its capacity: it disperses hash collisions while letting each old bin split cleanly into exactly two new bins. The relinking from bucket 3 into buckets 3 and 19 is the "preserve order" portion of resize().
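The split can be reproduced outside the JDK with the same single-bit test resize() uses; ResizeSplitDemo below is illustrative code, not part of HashMap:

```java
import java.util.ArrayList;
import java.util.List;

// Reproduces the lo/hi split resize() performs on bucket 3 when the table
// grows from 16 to 32: the single bit (hash & oldCap) decides whether an
// entry stays at index j or moves to index j + oldCap.
public class ResizeSplitDemo {
    public static void main(String[] args) {
        int oldCap = 16;
        int[] chain = {19, 35, 51, 67, 83, 99, 115, 131, 147}; // all in bucket 3
        List<Integer> lo = new ArrayList<>(); // stays at index 3
        List<Integer> hi = new ArrayList<>(); // moves to index 3 + 16 = 19
        for (int h : chain) {
            if ((h & oldCap) == 0) lo.add(h); else hi.add(h);
        }
        System.out.println("bucket 3:  " + lo); // [35, 67, 99, 131]
        System.out.println("bucket 19: " + hi); // [19, 51, 83, 115, 147]
    }
}
```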

 	/**
     * Initializes or doubles table size.  If null, allocates in
     * accord with initial capacity target held in field threshold.
     * Otherwise, because we are using power-of-two expansion, the
     * elements from each bin must either stay at same index, or move
     * with a power of two offset in the new table.
     *
     * @return the table
     */
    final Node<K,V>[] resize() {
        Node<K,V>[] oldTab = table;
        int oldCap = (oldTab == null) ? 0 : oldTab.length;
        int oldThr = threshold;
        int newCap, newThr = 0;
        if (oldCap > 0) {
            if (oldCap >= MAXIMUM_CAPACITY) {
                threshold = Integer.MAX_VALUE;
                return oldTab;
            }
            else if ((newCap = oldCap << 1) < MAXIMUM_CAPACITY &&
                     oldCap >= DEFAULT_INITIAL_CAPACITY)
                newThr = oldThr << 1; // double threshold
        }
        else if (oldThr > 0) // initial capacity was placed in threshold
            newCap = oldThr;
        else {               // zero initial threshold signifies using defaults
            newCap = DEFAULT_INITIAL_CAPACITY;
            newThr = (int)(DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY);
        }
        if (newThr == 0) {
            float ft = (float)newCap * loadFactor;
            newThr = (newCap < MAXIMUM_CAPACITY && ft < (float)MAXIMUM_CAPACITY ?
                      (int)ft : Integer.MAX_VALUE);
        }
        threshold = newThr;
        @SuppressWarnings({"rawtypes","unchecked"})
        Node<K,V>[] newTab = (Node<K,V>[])new Node[newCap];
        table = newTab;
        // rehash: move each old bucket's entries into the new table
        if (oldTab != null) {
            for (int j = 0; j < oldCap; ++j) {
                Node<K,V> e;
                if ((e = oldTab[j]) != null) {
                    oldTab[j] = null;
                    if (e.next == null)
                        newTab[e.hash & (newCap - 1)] = e;
                    else if (e instanceof TreeNode)
                        ((TreeNode<K,V>)e).split(this, newTab, j, oldCap);
                    else { // preserve order
                        Node<K,V> loHead = null, loTail = null;
                        Node<K,V> hiHead = null, hiTail = null;
                        Node<K,V> next;
                        do {
                            next = e.next;
                            if ((e.hash & oldCap) == 0) {
                                if (loTail == null)
                                    loHead = e;
                                else
                                    loTail.next = e;
                                loTail = e;
                            }
                            else {
                                if (hiTail == null)
                                    hiHead = e;
                                else
                                    hiTail.next = e;
                                hiTail = e;
                            }
                        } while ((e = next) != null);
                        if (loTail != null) {
                            loTail.next = null;
                            // Example: if 8 entries collided into one bucket in the old
                            // table, after doubling they are redistributed across exactly
                            // two buckets (j and j + oldCap) by testing one extra hash bit.
                            // In general, k colliding buckets before the resize spread
                            // across at most 2k buckets afterwards.
                            newTab[j] = loHead;
                        }
                        if (hiTail != null) {
                            hiTail.next = null;
                            // entries whose extra hash bit is set move up by oldCap in the doubled table
                            newTab[j + oldCap] = hiHead;
                        }
                    }
                }
            }
        }
        return newTab;
    }
  6. How does HashMap treeify a bin?
    Using the same data set again: putting 147 triggered the first resize, 16 -> 32. To reach treeification the table must be resized once more, from 32 to 64. So extend the data array to:
    int[] data = new int[] {19, 35, 51, 67, 83, 99, 115, 131, 147, 163, 179, 195, 211, 227, 243, 259, 275, 291, 307, 323, 339, 355, 371, 387, 403, 419, 435, 451, 467, 483, 499, 515, 531}
    Putting 275 triggers that second resize (the chain at bucket 19 reaches 9 nodes at capacity 32); continuing to add elements up to 531 finally triggers treeification:
    bucket idx table[] list capacity = 64
    3 67 -> 131 -> 195 -> 259 -> 323 -> 387 -> 451 -> 515
    19 19 -> 83 -> 147 -> 211 -> 275 -> 339 -> 403 -> 467 -> 531
    35 35 -> 99 -> 163 -> 227 -> 291 -> 355 -> 419 -> 483
    51 51 -> 115 -> 179 -> 243 -> 307 -> 371 -> 435 -> 499
		/**
         * Forms tree of the nodes linked from this node.
         */
        final void treeify(Node<K,V>[] tab) {
            TreeNode<K,V> root = null;
            for (TreeNode<K,V> x = this, next; x != null; x = next) {
                next = (TreeNode<K,V>)x.next;
                x.left = x.right = null;
                if (root == null) {
                    x.parent = null;
                    x.red = false;
                    root = x;
                }
                else {
                    K k = x.key;
                    int h = x.hash;
                    Class<?> kc = null;
                    for (TreeNode<K,V> p = root;;) {
                        int dir, ph;
                        K pk = p.key;
                        if ((ph = p.hash) > h)
                            dir = -1;
                        else if (ph < h)
                            dir = 1;
                        else if ((kc == null &&
                                  (kc = comparableClassFor(k)) == null) ||
                                 (dir = compareComparables(kc, k, pk)) == 0)
                            dir = tieBreakOrder(k, pk);

                        TreeNode<K,V> xp = p;
                        if ((p = (dir <= 0) ? p.left : p.right) == null) {
                        	// insert the node as in a plain binary search tree first, then rebalance to restore the red-black properties
                            x.parent = xp;
                            if (dir <= 0)
                                xp.left = x;
                            else
                                xp.right = x;
                            root = balanceInsertion(root, x);
                            break;
                        }
                    }
                }
            }
            // make sure the (possibly rotated) root node is the one stored in the tab slot
            moveRootToFront(tab, root);
        }
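The bucket layout above can be reproduced with a small simulation (TreeifyLayoutDemo is illustrative, not JDK code); it also shows that the chain at index 19 reaches 9 nodes exactly when 531 is added, which is what finally triggers treeification:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Groups the extended data set (19, 35, ..., 531) by bucket index at
// capacity 64 to reproduce the four chains listed above.
public class TreeifyLayoutDemo {
    public static void main(String[] args) {
        Map<Integer, List<Integer>> buckets = new TreeMap<>();
        for (int k = 19; k <= 531; k += 16) {
            // index = hash & (capacity - 1); the spreading XOR is a no-op here
            buckets.computeIfAbsent(k & 63, i -> new ArrayList<>()).add(k);
        }
        // Prints chains for buckets 3, 19, 35, 51; bucket 19 holds 9 nodes.
        buckets.forEach((idx, chain) ->
            System.out.println(idx + ": " + chain));
    }
}
```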