HashMap Source Code Analysis


Preface

I have recently wanted to write some posts on the collection framework's source code. Collections are used constantly in everyday programming, yet many developers stop at the usage level and never look at how they are implemented. The goal of this article is to read through the source myself, lay out the technical details, reinforce my own understanding, and hopefully help others along the way. Corrections are welcome wherever my explanations fall short.

What Is HashMap

HashMap is a collection that stores key-value pairs.
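Before diving into the internals, a quick reminder of how it is used day to day (variable names here are just for illustration):

```java
import java.util.HashMap;
import java.util.Map;

public class HashMapBasics {
    public static void main(String[] args) {
        Map<String, Integer> ages = new HashMap<>();
        ages.put("alice", 30);
        ages.put("bob", 25);
        ages.put("alice", 31);                 // same key: the old value is replaced
        System.out.println(ages.get("alice")); // 31
        System.out.println(ages.get("carol")); // null, no such key
        System.out.println(ages.size());       // 2
    }
}
```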

Class Hierarchy

HashMap's class hierarchy looks like this:
[HashMap class hierarchy diagram]

Specifically:

  • It implements Cloneable, so it supports cloning;
  • It implements Serializable, so its instances can be serialized;
  • It implements Map: HashMap is just one kind of Map, with its own implementation of the interface;
  • It extends AbstractMap: the Map interface has some generic logic, which lives in AbstractMap so that concrete Map implementations can inherit those common methods directly.

Structure Overview

Let's start with a diagram of HashMap's structure:
[HashMap structure diagram]

The diagram reveals three key implementation facts up front:

  1. HashMap is backed by an array;
  2. On a hash collision, the nodes in a bucket form a singly linked list;
  3. When the number of colliding nodes in a bucket reaches a threshold, the linked list is converted into a red-black tree.

Inner Classes

Node

/**
 * Basic hash bin node, used for most entries.  (See below for
 * TreeNode subclass, and in LinkedHashMap for its Entry subclass.)
 */
static class Node<K,V> implements Map.Entry<K,V> {
    final int hash;
    final K key;
    V value;
    Node<K,V> next;
	...
}

It is a generic class implementing Map.Entry<K,V>, and it is where HashMap stores its data. It has four fields:

  • hash: the hash computed from the key;
  • key: the node's key;
  • value: the node's value;
  • next: a reference to another Node. As mentioned in the structure section, when a hash collision occurs the colliding nodes form a singly linked list, and next points to the next node in that list.

TreeNode

/**
 * Entry for Tree bins. Extends LinkedHashMap.Entry (which in turn
 * extends Node) so can be used as extension of either regular or
 * linked node.
 */
static final class TreeNode<K,V> extends LinkedHashMap.Entry<K,V> {
    TreeNode<K,V> parent;  // red-black tree links
    TreeNode<K,V> left;
    TreeNode<K,V> right;
    TreeNode<K,V> prev;    // needed to unlink next upon deletion
    boolean red;
    ...
}

It is a generic class extending LinkedHashMap.Entry<K,V>, which in turn extends HashMap.Node<K,V>, so Node is also an ancestor of TreeNode. As noted above, when a bucket's linked list reaches a certain length it is converted into a red-black tree, and TreeNode is the data structure that tree is built from.

Member Variables

table

/**
 * The table, initialized on first use, and resized as
 * necessary. When allocated, length is always a power of two.
 * (We also tolerate length zero in some operations to allow
 * bootstrapping mechanics that are currently not needed.)
 */
transient Node<K,V>[] table;

Not much to add here: as the structure diagram showed, HashMap is essentially an array of Node objects.

entrySet

/**
 * Holds cached entrySet(). Note that AbstractMap fields are used
 * for keySet() and values().
 */
transient Set<Map.Entry<K,V>> entrySet;

Entry is an inner interface of Map. HashMap keeps a cached Entry set to make iterating over keys and values convenient (because of hash collisions, we cannot simply walk the table array one slot at a time and expect one entry per slot). Remember that Node itself implements Map.Entry, so both key and value are easily reachable through an Entry.

size

/**
 * The number of key-value mappings contained in this map.
 */
transient int size;

Nothing surprising here: it is the number of key-value mappings currently held in the HashMap.

modCount

/**
  * The number of times this HashMap has been structurally modified
  * Structural modifications are those that change the number of mappings in
  * the HashMap or otherwise modify its internal structure (e.g.,
  * rehash).  This field is used to make iterators on Collection-views of
  * the HashMap fail-fast.  (See ConcurrentModificationException).
  */
 transient int modCount;

Readers familiar with concurrent programming will recognize this field; it appears in many non-thread-safe collections. It counts the structural modifications made to the HashMap. Iterators record it when created and compare it on each step; if the map has been structurally modified mid-iteration (for example, by another thread), they throw a ConcurrentModificationException. This is the fail-fast behavior.

threshold

/**
 * The next size value at which to resize (capacity * load factor).
 *
 * @serial
 */
// (The javadoc description is true upon serialization.
// Additionally, if the table array has not been allocated, this
// field holds the initial array capacity, or zero signifying
// DEFAULT_INITIAL_CAPACITY.)
int threshold;

It records how many key-value pairs the HashMap must reach before the next resize; it equals capacity * load factor.

loadFactor

/**
 * The load factor for the hash table.
 *
 * @serial
 */
final float loadFactor;

This is the load factor itself.

Constants

Good variable names convey their meaning directly, and the JDK is a model of this. The JDK also makes heavy use of bit-shift operations: a shift is a single cheap CPU instruction, typically faster than an equivalent multiplication or division. Left-shifting a number by n bits multiplies it by 2^n; right-shifting it by n bits divides it by 2^n.
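A quick sanity check of these shift identities (for non-negative ints), using the two capacities that appear below:

```java
public class ShiftDemo {
    // x << n multiplies by 2^n; x >> n divides by 2^n (for non-negative x)
    public static void main(String[] args) {
        System.out.println(1 << 4);   // 16, HashMap's default initial capacity
        System.out.println(1 << 30);  // 1073741824, HashMap's maximum capacity
        System.out.println(3 << 2);   // 12 == 3 * 2^2
        System.out.println(20 >> 2);  // 5  == 20 / 2^2
    }
}
```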

DEFAULT_INITIAL_CAPACITY

/**
 * The default initial capacity - MUST be a power of two.
 */
static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; // aka 16

The default initial capacity: a HashMap created with the no-argument constructor has a capacity of 16.

MAXIMUM_CAPACITY

/**
 * The maximum capacity, used if a higher value is implicitly specified
 * by either of the constructors with arguments.
 * MUST be a power of two <= 1<<30.
 */
static final int MAXIMUM_CAPACITY = 1 << 30;

The maximum capacity. Few people ever look at this constant, or even know HashMap has a maximum capacity at all. It is 2^30, i.e. 1073741824, which is plenty.

DEFAULT_LOAD_FACTOR

/**
 * The load factor used when none specified in constructor.
 */
static final float DEFAULT_LOAD_FACTOR = 0.75f;

The default load factor is 0.75. How should we read it? When size / capacity reaches the load factor, the HashMap must perform a resize.
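The arithmetic is simple enough to sketch (the helper name `threshold` is mine, not the JDK's, but the formula matches the `capacity * load factor` definition above):

```java
public class LoadFactorDemo {
    // threshold = capacity * loadFactor: exceeding it triggers a resize
    static int threshold(int capacity, float loadFactor) {
        return (int) (capacity * loadFactor);
    }

    public static void main(String[] args) {
        System.out.println(threshold(16, 0.75f)); // 12: the 13th entry triggers the first resize
        System.out.println(threshold(32, 0.75f)); // 24
    }
}
```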

Why is the default load factor 0.75 rather than some other value? Interviewers love to press on this one; here is a good article on it:

Why is HashMap's loadFactor 0.75?

TREEIFY_THRESHOLD

/**
 * The bin count threshold for using a tree rather than list for a
 * bin.  Bins are converted to trees when adding an element to a
 * bin with at least this many nodes. The value must be greater
 * than 2 and should be at least 8 to mesh with assumptions in
 * tree removal about conversion back to plain bins upon
 * shrinkage.
 */
static final int TREEIFY_THRESHOLD = 8;

This is the threshold for converting a bucket's collision list into a red-black tree: when a bucket's linked list reaches a length of 8, it becomes a candidate for treeification.

UNTREEIFY_THRESHOLD

/**
 * The bin count threshold for untreeifying a (split) bin during a
 * resize operation. Should be less than TREEIFY_THRESHOLD, and at
 * most 6 to mesh with shrinkage detection under removal.
 */
static final int UNTREEIFY_THRESHOLD = 6;

When a tree bucket shrinks to 6 or fewer nodes (checked when a tree is split during a resize), it is converted back into a singly linked list.

MIN_TREEIFY_CAPACITY

/**
 * The smallest table capacity for which bins may be treeified.
 * (Otherwise the table is resized if too many nodes in a bin.)
 * Should be at least 4 * TREEIFY_THRESHOLD to avoid conflicts
 * between resizing and treeification thresholds.
 */
static final int MIN_TREEIFY_CAPACITY = 64;

The minimum table capacity required before any bucket may be treeified. Combined with the previous constants: a linked list only turns into a red-black tree when its length reaches 8 AND the table capacity is at least 64; otherwise the table is resized instead.
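The two conditions can be combined into a hypothetical helper (the method `wouldTreeify` is my own name for illustration; in the real code the decision is spread across putVal and treeifyBin):

```java
public class TreeifyDemo {
    static final int TREEIFY_THRESHOLD = 8;
    static final int MIN_TREEIFY_CAPACITY = 64;

    // hypothetical helper: a bin only becomes a tree when BOTH conditions hold;
    // if the table is still small, HashMap resizes instead of treeifying
    static boolean wouldTreeify(int binLength, int tableCapacity) {
        return binLength >= TREEIFY_THRESHOLD && tableCapacity >= MIN_TREEIFY_CAPACITY;
    }

    public static void main(String[] args) {
        System.out.println(wouldTreeify(8, 64)); // true
        System.out.println(wouldTreeify(9, 32)); // false: table too small, resize instead
        System.out.println(wouldTreeify(5, 64)); // false: bin too short
    }
}
```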

Key Methods

hash(Object key)

/**
 * Computes key.hashCode() and spreads (XORs) higher bits of hash
 * to lower.  Because the table uses power-of-two masking, sets of
 * hashes that vary only in bits above the current mask will
 * always collide. (Among known examples are sets of Float keys
 * holding consecutive whole numbers in small tables.)  So we
 * apply a transform that spreads the impact of higher bits
 * downward. There is a tradeoff between speed, utility, and
 * quality of bit-spreading. Because many common sets of hashes
 * are already reasonably distributed (so don't benefit from
 * spreading), and because we use trees to handle large sets of
 * collisions in bins, we just XOR some shifted bits in the
 * cheapest possible way to reduce systematic lossage, as well as
 * to incorporate impact of the highest bits that would otherwise
 * never be used in index calculations because of table bounds.
 */
static final int hash(Object key) {
    int h;
    return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
}

What is this method for? A Node's hash is computed by it, and the main purpose of that hash is to compute the array index, as the put and get sections below explain in detail.

From the code: a key's hash is its hashCode XORed with that same hashCode shifted right (unsigned) by 16 bits. Why shift and XOR? As the Javadoc explains, a hashCode is 32 bits, and folding the high bits into the low bits is a deliberate trade-off: it reduces collisions for hash codes that differ only in their upper bits, while remaining very cheap to compute.
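The method above is small enough to exercise standalone (the body is copied verbatim from the source):

```java
public class HashDemo {
    // verbatim from HashMap: XOR the high 16 bits into the low 16 bits
    static int hash(Object key) {
        int h;
        return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }

    public static void main(String[] args) {
        System.out.println(hash(null));  // 0: a null key always hashes to 0
        int h = "example".hashCode();
        System.out.println(hash("example") == (h ^ (h >>> 16))); // true
    }
}
```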

put(K key, V value)

/**
 * Associates the specified value with the specified key in this map.
 * If the map previously contained a mapping for the key, the old
 * value is replaced.
 *
 * @param key key with which the specified value is to be associated
 * @param value value to be associated with the specified key
 * @return the previous value associated with <tt>key</tt>, or
 *         <tt>null</tt> if there was no mapping for <tt>key</tt>.
 *         (A <tt>null</tt> return can also indicate that the map
 *         previously associated <tt>null</tt> with <tt>key</tt>.)
 */
public V put(K key, V value) {
    return putVal(hash(key), key, value, false, true);
}

/**
 * Implements Map.put and related methods
 *
 * @param hash hash for key
 * @param key the key
 * @param value the value to put
 * @param onlyIfAbsent if true, don't change existing value
 * @param evict if false, the table is in creation mode.
 * @return previous value, or null if none
 */
final V putVal(int hash, K key, V value, boolean onlyIfAbsent,
               boolean evict) {
    Node<K,V>[] tab; Node<K,V> p; int n, i;
    if ((tab = table) == null || (n = tab.length) == 0)
        n = (tab = resize()).length;
    if ((p = tab[i = (n - 1) & hash]) == null)
        tab[i] = newNode(hash, key, value, null);
    else {
        Node<K,V> e; K k;
        if (p.hash == hash &&
            ((k = p.key) == key || (key != null && key.equals(k))))
            e = p;
        else if (p instanceof TreeNode)
            e = ((TreeNode<K,V>)p).putTreeVal(this, tab, hash, key, value);
        else {
            for (int binCount = 0; ; ++binCount) {
                if ((e = p.next) == null) {
                    p.next = newNode(hash, key, value, null);
                    if (binCount >= TREEIFY_THRESHOLD - 1) // -1 for 1st
                        treeifyBin(tab, hash);
                    break;
                }
                if (e.hash == hash &&
                    ((k = e.key) == key || (key != null && key.equals(k))))
                    break;
                p = e;
            }
        }
        if (e != null) { // existing mapping for key
            V oldValue = e.value;
            if (!onlyIfAbsent || oldValue == null)
                e.value = value;
            afterNodeAccess(e);
            return oldValue;
        }
    }
    ++modCount;
    if (++size > threshold)
        resize();
    afterNodeInsertion(evict);
    return null;
}

Key code walkthrough:

Node<K,V>[] tab; Node<K,V> p; int n, i;
if ((tab = table) == null || (n = tab.length) == 0)
	n = (tab = resize()).length;

If the table is null or its length is 0, call resize() to allocate it, and set n to the resulting table length.

if ((p = tab[i = (n - 1) & hash]) == null)
	tab[i] = newNode(hash, key, value, null);

The crucial expression here is (n - 1) & hash, which computes the array index for the new Node. n, as noted, is the table length; clearly, if n = 16 then (n - 1) & hash falls in [0, 15], a valid range of array indices. The node at that index is assigned to p, so the whole check means: if that slot is unoccupied, create a new Node and place it there.

With this in mind, the design of the hash method looks very clever.
Since HashMap's default length is 16, n - 1 is 15, which in binary is:
0000000000000000 0000000000001111
If hash simply returned hashCode unchanged, any two hash codes with the same low four bits would map to the same index, making collisions very likely. XORing the high 16 bits with the low 16 bits greatly reduces that risk.
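This effect is easy to demonstrate: two hash codes that differ only in their high bits collide under raw masking but separate after spreading (helper names `spread` and `index` are mine; `spread` matches HashMap's hash for non-null keys):

```java
public class SpreadDemo {
    static int spread(int h) { return h ^ (h >>> 16); } // HashMap.hash for non-null keys
    static int index(int h, int cap) { return (cap - 1) & h; }

    public static void main(String[] args) {
        int h1 = 1 << 16, h2 = 2 << 16;  // differ only above bit 15
        // raw hash codes: both mask down to bucket 0 of a 16-slot table
        System.out.println(index(h1, 16));          // 0
        System.out.println(index(h2, 16));          // 0
        // spread hashes: the high bits now influence the bucket choice
        System.out.println(index(spread(h1), 16));  // 1
        System.out.println(index(spread(h2), 16));  // 2
    }
}
```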

Node<K,V> e; K k;
if (p.hash == hash &&
    ((k = p.key) == key || (key != null && key.equals(k))))
    e = p;

Reaching this check means p is non-null, i.e. the slot in the array is already occupied.
This introduces the concept of a hash collision.

If this condition holds, the structure of the bucket does not matter yet: the incoming key equals the key already stored there. p might be a lone node, the head of a linked list, or the root of a red-black tree; either way it is assigned to e. (From everyday HashMap usage we know that when keys are equal the new value replaces the old one, which the code further down confirms.)

How are two keys judged equal?

  1. Their hashes are equal; and
  2. key1 == key2, or key1.equals(key2).

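Those two rules can be sketched as a hypothetical helper mirroring the condition in putVal (the method `sameKey` is my own name):

```java
public class KeyEqualityDemo {
    // mirrors: p.hash == hash && (p.key == key || (key != null && key.equals(p.key)))
    static boolean sameKey(int storedHash, Object storedKey, int hash, Object key) {
        return storedHash == hash
                && (storedKey == key || (key != null && key.equals(storedKey)));
    }

    public static void main(String[] args) {
        String a = "key";
        String b = new String("key");  // equal but not the same reference
        System.out.println(sameKey(a.hashCode(), a, b.hashCode(), b)); // true
        System.out.println(sameKey(a.hashCode(), a, 42, "other"));     // false
    }
}
```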
else if (p instanceof TreeNode)
	e = ((TreeNode<K,V>)p).putTreeVal(this, tab, hash, key, value);

This checks whether p is a TreeNode instance. If so, the collisions at this index have already reached the treeify threshold and the bucket is a red-black tree, so putTreeVal is called to put the data into the tree.

There are two outcomes. Either the incoming key equals some node's key in the tree (other than the root, since a root match would have taken the previous branch), in which case that node is assigned to e; or it matches no node in the tree, and a new tree node must be added. Since a red-black tree is a self-balancing binary search tree, insertion raises the question of how balance is restored, which we will not expand on here: red-black trees alone deserve a very long post of their own.

else {
    for (int binCount = 0; ; ++binCount) {
        if ((e = p.next) == null) {
            p.next = newNode(hash, key, value, null);
            if (binCount >= TREEIFY_THRESHOLD - 1) // -1 for 1st
                treeifyBin(tab, hash);
            break;
        }
        if (e.hash == hash &&
            ((k = e.key) == key || (key != null && key.equals(k))))
            break;
        p = e;
    }
}

Reaching this branch means the bucket is still a linked list (fewer collisions than the treeify threshold). If the incoming key equals some node in the list (other than the head, since a head match takes the first branch and never reaches here), that node is recorded in e. Otherwise the new Node is appended to the tail of the list, and if the list length has now reached 8, treeifyBin is called to convert the list into a red-black tree.

if (e != null) { // existing mapping for key
    V oldValue = e.value;
    if (!onlyIfAbsent || oldValue == null)
        e.value = value;
    afterNodeAccess(e);
    return oldValue;
}

This part is now obvious: the node whose key equals the incoming key was stored in e above. No new node needs to be created; e.value = value simply overwrites the old value with the new one. Note the onlyIfAbsent parameter, which lets callers suppress the overwrite.

++modCount;
if (++size > threshold)
    resize();
afterNodeInsertion(evict);

modCount, covered in the member-variable section, tracks structural modifications. Then size is checked against threshold; if it has been exceeded, resize() grows the table. afterNodeInsertion is an empty hook in HashMap but has a concrete implementation in LinkedHashMap.

get(Object key)

/**
 * Returns the value to which the specified key is mapped,
 * or {@code null} if this map contains no mapping for the key.
 *
 * <p>More formally, if this map contains a mapping from a key
 * {@code k} to a value {@code v} such that {@code (key==null ? k==null :
 * key.equals(k))}, then this method returns {@code v}; otherwise
 * it returns {@code null}.  (There can be at most one such mapping.)
 *
 * <p>A return value of {@code null} does not <i>necessarily</i>
 * indicate that the map contains no mapping for the key; it's also
 * possible that the map explicitly maps the key to {@code null}.
 * The {@link #containsKey containsKey} operation may be used to
 * distinguish these two cases.
 *
 * @see #put(Object, Object)
 */
public V get(Object key) {
    Node<K,V> e;
    return (e = getNode(hash(key), key)) == null ? null : e.value;
}

/**
 * Implements Map.get and related methods
 *
 * @param hash hash for key
 * @param key the key
 * @return the node, or null if none
 */
final Node<K,V> getNode(int hash, Object key) {
    Node<K,V>[] tab; Node<K,V> first, e; int n; K k;
    if ((tab = table) != null && (n = tab.length) > 0 &&
        (first = tab[(n - 1) & hash]) != null) {
        if (first.hash == hash && // always check first node
            ((k = first.key) == key || (key != null && key.equals(k))))
            return first;
        if ((e = first.next) != null) {
            if (first instanceof TreeNode)
                return ((TreeNode<K,V>)first).getTreeNode(hash, key);
            do {
                if (e.hash == hash &&
                    ((k = e.key) == key || (key != null && key.equals(k))))
                    return e;
            } while ((e = e.next) != null);
        }
    }
    return null;
}

Key code walkthrough:

Node<K,V>[] tab; Node<K,V> first, e; int n; K k;
if ((tab = table) != null && (n = tab.length) > 0 && 
	(first = tab[(n - 1) & hash]) != null)

Fetch the first element at array index (n - 1) & hash (provided the table is non-null and non-empty); hash is again computed from the key. What does "first element" mean here? It could be a lone node, the head of a linked list, or the root of a red-black tree. Having read put, following get is simply a matter of tracing the same paths.

if (first.hash == hash && // always check first node
	((k = first.key) == key || (key != null && key.equals(k))))
	return first;

Note the comment always check first node. Why?
Linked-list lookup is O(n) and red-black-tree lookup is O(log n); if the key we want is the head (or root) itself, there is no need to search at all.

if ((e = first.next) != null) {
    if (first instanceof TreeNode)
        return ((TreeNode<K,V>)first).getTreeNode(hash, key);
    do {
        if (e.hash == hash &&
            ((k = e.key) == key || (key != null && key.equals(k))))
            return e;
    } while ((e = e.next) != null);
}

Reaching this check means the key is not first. Plainly, if the bucket is a red-black tree, getTreeNode fetches the node; if it is a linked list, we simply traverse it.

resize()

Before looking at resize, consider HashMap's no-argument constructor:

/**
 * Constructs an empty <tt>HashMap</tt> with the default initial capacity
 * (16) and the default load factor (0.75).
 */
public HashMap() {
    this.loadFactor = DEFAULT_LOAD_FACTOR; // all other fields defaulted
}

The no-arg constructor assigns only the load factor. In other words, after new HashMap() the table has not actually been instantiated yet. So where does the table array get allocated?

The answer is resize(). Here is its source:

/**
 * Initializes or doubles table size.  If null, allocates in
 * accord with initial capacity target held in field threshold.
 * Otherwise, because we are using power-of-two expansion, the
 * elements from each bin must either stay at same index, or move
 * with a power of two offset in the new table.
 *
 * @return the table
 */
final Node<K,V>[] resize() {
    Node<K,V>[] oldTab = table;
    int oldCap = (oldTab == null) ? 0 : oldTab.length;
    int oldThr = threshold;
    int newCap, newThr = 0;
    if (oldCap > 0) {
        if (oldCap >= MAXIMUM_CAPACITY) {
            threshold = Integer.MAX_VALUE;
            return oldTab;
        }
        else if ((newCap = oldCap << 1) < MAXIMUM_CAPACITY &&
                 oldCap >= DEFAULT_INITIAL_CAPACITY)
            newThr = oldThr << 1; // double threshold
    }
    else if (oldThr > 0) // initial capacity was placed in threshold
        newCap = oldThr;
    else {               // zero initial threshold signifies using defaults
        newCap = DEFAULT_INITIAL_CAPACITY;
        newThr = (int)(DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY);
    }
    if (newThr == 0) {
        float ft = (float)newCap * loadFactor;
        newThr = (newCap < MAXIMUM_CAPACITY && ft < (float)MAXIMUM_CAPACITY ?
                  (int)ft : Integer.MAX_VALUE);
    }
    threshold = newThr;
    @SuppressWarnings({"rawtypes","unchecked"})
        Node<K,V>[] newTab = (Node<K,V>[])new Node[newCap];
    table = newTab;
    if (oldTab != null) {
        for (int j = 0; j < oldCap; ++j) {
            Node<K,V> e;
            if ((e = oldTab[j]) != null) {
                oldTab[j] = null;
                if (e.next == null)
                    newTab[e.hash & (newCap - 1)] = e;
                else if (e instanceof TreeNode)
                    ((TreeNode<K,V>)e).split(this, newTab, j, oldCap);
                else { // preserve order
                    Node<K,V> loHead = null, loTail = null;
                    Node<K,V> hiHead = null, hiTail = null;
                    Node<K,V> next;
                    do {
                        next = e.next;
                        if ((e.hash & oldCap) == 0) {
                            if (loTail == null)
                                loHead = e;
                            else
                                loTail.next = e;
                            loTail = e;
                        }
                        else {
                            if (hiTail == null)
                                hiHead = e;
                            else
                                hiTail.next = e;
                            hiTail = e;
                        }
                    } while ((e = next) != null);
                    if (loTail != null) {
                        loTail.next = null;
                        newTab[j] = loHead;
                    }
                    if (hiTail != null) {
                        hiTail.next = null;
                        newTab[j + oldCap] = hiHead;
                    }
                }
            }
        }
    }
    return newTab;
}

Key code walkthrough:

Node<K,V>[] oldTab = table;
int oldCap = (oldTab == null) ? 0 : oldTab.length;
int oldThr = threshold;
int newCap, newThr = 0;
if (oldCap > 0) {
   if (oldCap >= MAXIMUM_CAPACITY) {
        threshold = Integer.MAX_VALUE;
        return oldTab;
    }
    else if ((newCap = oldCap << 1) < MAXIMUM_CAPACITY &&
             oldCap >= DEFAULT_INITIAL_CAPACITY)
        newThr = oldThr << 1; // double threshold
}

When resize() is called on a map whose capacity is at least the default initial capacity and below the maximum, newCap is set to twice the old capacity and newThr to twice the old threshold. (newCap is what later allocates the new array; in other words, each resize doubles the capacity.)

else if (oldThr > 0) // initial capacity was placed in threshold
    newCap = oldThr;
else {               // zero initial threshold signifies using defaults
    newCap = DEFAULT_INITIAL_CAPACITY;
    newThr = (int)(DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY);
}

Reaching here means the table has not been allocated yet, and there are two cases:

  • The HashMap was created with a parameterized constructor:
    note the interesting comment initial capacity was placed in threshold. The constructor does not necessarily create an array of exactly the size you requested: it computes tableSizeFor(initialCapacity) and parks the result temporarily in threshold;
  • The HashMap was created with the no-arg constructor:
    newCap gets the default initial capacity, and newThr is computed as the first resize threshold.

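For reference, tableSizeFor in JDK 8 rounds the requested capacity up to the nearest power of two by smearing the highest set bit downward (reproduced here from memory; treat the exact body as indicative):

```java
public class TableSizeDemo {
    static final int MAXIMUM_CAPACITY = 1 << 30;

    // JDK 8's HashMap.tableSizeFor: smallest power of two >= cap
    static int tableSizeFor(int cap) {
        int n = cap - 1;
        n |= n >>> 1;
        n |= n >>> 2;
        n |= n >>> 4;
        n |= n >>> 8;
        n |= n >>> 16;
        return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
    }

    public static void main(String[] args) {
        System.out.println(tableSizeFor(10)); // 16: new HashMap<>(10) actually gets a 16-slot table
        System.out.println(tableSizeFor(16)); // 16
        System.out.println(tableSizeFor(17)); // 32
    }
}
```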
if (newThr == 0) {
    float ft = (float)newCap * loadFactor;
    newThr = (newCap < MAXIMUM_CAPACITY && ft < (float)MAXIMUM_CAPACITY ?
              (int)ft : Integer.MAX_VALUE);
}

As noted above, for the parameterized constructors threshold temporarily held the array size rather than a real threshold, so the real threshold is recomputed here.

threshold = newThr;
@SuppressWarnings({"rawtypes","unchecked"})
    Node<K,V>[] newTab = (Node<K,V>[])new Node[newCap];
table = newTab;

Store the new threshold and allocate the new array with newCap.

if (oldTab != null) {
    for (int j = 0; j < oldCap; ++j) {
        Node<K,V> e;
        if ((e = oldTab[j]) != null) {
            oldTab[j] = null;
            if (e.next == null)
                newTab[e.hash & (newCap - 1)] = e;
            else if (e instanceof TreeNode)
                ((TreeNode<K,V>)e).split(this, newTab, j, oldCap);
            else { // preserve order
                Node<K,V> loHead = null, loTail = null;
                Node<K,V> hiHead = null, hiTail = null;
                Node<K,V> next;
                do {
                    next = e.next;
                    if ((e.hash & oldCap) == 0) {
                        if (loTail == null)
                            loHead = e;
                        else
                            loTail.next = e;
                        loTail = e;
                    }
                    else {
                        if (hiTail == null)
                            hiHead = e;
                        else
                            hiTail.next = e;
                        hiTail = e;
                    }
                } while ((e = next) != null);
                if (loTail != null) {
                    loTail.next = null;
                    newTab[j] = loHead;
                }
                if (hiTail != null) {
                    hiTail.next = null;
                    newTab[j + oldCap] = hiHead;
                }
            }
        }
    }
}

This whole block handles the case where the old table held data: after growing, every node's array index must be recomputed, because with (n - 1) & hash the result can easily change once n changes. Thanks to the power-of-two doubling, the check (e.hash & oldCap) splits each bucket into a "lo" list that stays at index j and a "hi" list that moves to index j + oldCap.
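The split rule can be checked numerically (the helper `newIndex` is my own name; the decision mirrors the `(e.hash & oldCap) == 0` branch above):

```java
public class ResizeSplitDemo {
    // after doubling, a node either stays at its old index j ("lo" list)
    // or moves to j + oldCap ("hi" list), decided by one extra hash bit
    static int newIndex(int hash, int oldCap, int oldIndex) {
        return (hash & oldCap) == 0 ? oldIndex : oldIndex + oldCap;
    }

    public static void main(String[] args) {
        int oldCap = 16;
        // hash 5  (binary 00101): bit 4 is 0 -> stays at index 5
        System.out.println(newIndex(5, oldCap, 5 & (oldCap - 1)));   // 5
        // hash 21 (binary 10101): bit 4 is 1 -> moves to 5 + 16 = 21
        System.out.println(newIndex(21, oldCap, 21 & (oldCap - 1))); // 21
        // sanity check: recomputing from scratch with the doubled capacity agrees
        System.out.println(21 & (2 * oldCap - 1));                   // 21
    }
}
```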

References:

  • JDK 1.8 source code