A Rigorous Walkthrough of the JDK 1.8 HashMap Source
Common interview questions, with answers
What does HashMap's data structure look like?
- Array + linked list + red-black tree
- Data structures: the red-black tree
How are hash collisions resolved? Why does HashMap convert its linked lists to red-black trees?
- Depending on how collisions are handled, hash tables come in two flavors: open addressing, and separate chaining with linked lists. Java's HashMap uses separate chaining.
- The conversion exists mainly to keep lookups fast when collisions are severe (i.e. a chain grows long): searching a linked list is O(n), searching a red-black tree is O(log n).
So when is a linked list used, and when a red-black tree?
- Inserts use plain list nodes by default. A bucket is converted to a red-black tree only when its node count reaches 9 (threshold 8, see TREEIFY_THRESHOLD) and the table length is at least 64; the conversion happens in treeifyBin, called from putVal. If the table length is below 64, the map resizes instead, since the overall data volume is still small.
- Shrinking: during a resize, a tree bin that is split and left with 6 or fewer nodes (UNTREEIFY_THRESHOLD) is converted back to a linked list via untreeify. Removal can also convert a tree bin back once the tree becomes too small, though there the check is structural rather than an exact count.
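The graceful degradation under heavy collisions can be seen directly. Below is a minimal sketch with a hypothetical key class `BadKey` (not from the JDK) whose hashCode is constant, so every entry lands in one bucket; because the key is also Comparable, the tree bin keeps lookups fast:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical key type whose hashCode always collides, forcing every
// entry into the same bucket. Implementing Comparable lets the tree bin
// order the tied hashes, as the class comment describes.
class BadKey implements Comparable<BadKey> {
    final int id;
    BadKey(int id) { this.id = id; }
    @Override public int hashCode() { return 42; } // constant: every key collides
    @Override public boolean equals(Object o) {
        return o instanceof BadKey && ((BadKey) o).id == id;
    }
    @Override public int compareTo(BadKey o) { return Integer.compare(id, o.id); }
}

public class TreeBinDemo {
    public static void main(String[] args) {
        Map<BadKey, Integer> map = new HashMap<>();
        for (int i = 0; i < 1000; i++)
            map.put(new BadKey(i), i);
        // All 1000 entries share one bucket, yet lookups stay fast because
        // the bucket was treeified once it grew past the threshold.
        System.out.println(map.get(new BadKey(567))); // prints 567
    }
}
```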
What is HashMap's resizing mechanism? When is a resize triggered?
- See the resize method source below.
- As the map holds more and more elements, the probability of hash collisions rises, because the table length is fixed. To keep operations efficient, the table is enlarged.
- The mechanism: threshold = loadFactor * capacity; once the size exceeds this threshold, the table resizes by doubling both values: newCap = oldCap << 1 and newThr = oldThr << 1.
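The doubling arithmetic from the bullet above can be sketched in a few lines (a standalone illustration, assuming the default load factor of 0.75, not the actual resize() code):

```java
// Sketch of the capacity/threshold doubling performed by resize(),
// assuming the default load factor of 0.75.
public class ResizeMath {
    static final float LOAD_FACTOR = 0.75f;

    // Returns {newCap, newThr} for a table that currently has oldCap slots.
    static int[] grow(int oldCap) {
        int oldThr = (int) (oldCap * LOAD_FACTOR); // threshold before growing
        int newCap = oldCap << 1;                  // double the capacity
        int newThr = oldThr << 1;                  // double the threshold
        return new int[] { newCap, newThr };
    }

    public static void main(String[] args) {
        int[] r = grow(16);
        System.out.println(r[0] + " " + r[1]); // 32 24
    }
}
```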
How does resizing avoid rehashing?
- Strictly speaking this question does not apply to JDK 1.8, but interviewers like to muddy the waters to see whether you really understand it, so they will ask about "rehashing in 1.8" anyway.
- Java 8 does not recompute each key's hash and remap it during migration. It uses a very neat trick based on newCap = oldCap << 1: each entry either stays at its old index or moves to oldIndex + oldCap, decided by a single bit test, e.hash & oldCap.
Why could concurrent operations on a HashMap cause an infinite loop before JDK 1.8?
- The answer lies in JDK 1.7's resize, whose head-insertion migration can link nodes into a cycle under concurrent puts; a link to the 1.7 source analysis will follow later.
Why must HashMap's table length be a power of two?
- See the put method source below.
- The index is computed as (length - 1) & hash. When length is a power of two, every low bit of length - 1 is 1, so the AND has exactly the same effect as a modulo and distributes keys evenly. If length were not a power of two, the probability of hash collisions would rise noticeably.
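The mask-equals-modulo claim is easy to verify. A minimal sketch (the helper name `indexFor` is mine, echoing the 1.7 method of that name; 1.8 inlines the expression):

```java
public class IndexDemo {
    // The bucket index as HashMap computes it: (length - 1) & hash.
    static int indexFor(int hash, int length) {
        return (length - 1) & hash;
    }

    public static void main(String[] args) {
        int length = 16; // power of two, so length - 1 = 0b1111
        for (int hash : new int[] { 37, 123456789, -3 }) {
            // For power-of-two lengths the mask is equivalent to a
            // sign-safe modulo, so even negative hashes map into range.
            System.out.println(indexFor(hash, length) == Math.floorMod(hash, length));
        }
    }
}
```

Note that `Math.floorMod` is used rather than `%`, because `%` would go negative for negative hashes while the mask never does.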
Why is the load factor 0.75?
- For example: with a factor of 1 the resize threshold equals the table size, which saves space but raises the probability of hash collisions and therefore the cost of lookups. With 0.5, collisions drop but space is wasted. 0.75 is the compromise between the two.
Why is the treeify threshold 8?
-
From the source comments we can read the probability of a bucket holding 8 nodes: 0.00000006, which is extremely low. The value is computed from a Poisson distribution, so 8 is the threshold chosen as the trade-off between time and space.
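The probability table from the source comment can be reproduced directly from the formula exp(-0.5) * 0.5^k / k! it quotes. A minimal sketch:

```java
public class PoissonDemo {
    // P(a bin holds exactly k nodes) under Poisson with parameter 0.5,
    // as given in the HashMap implementation notes:
    // exp(-0.5) * 0.5^k / k!
    static double p(int k) {
        double factorial = 1;
        for (int i = 2; i <= k; i++) factorial *= i;
        return Math.exp(-0.5) * Math.pow(0.5, k) / factorial;
    }

    public static void main(String[] args) {
        System.out.printf("%.8f%n", p(0)); // 0.60653066
        System.out.printf("%.8f%n", p(8)); // 0.00000006
    }
}
```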
-
The class diagram below is very simple: HashMap extends the AbstractMap abstract class.
A close read of the class comment
-
Open the HashMap source and you will find a large block of English right below serialVersionUID. It is quite interesting, so let's walk through what it actually says; with translation tools as good as they are, you can also just machine-translate this part of the comment.
/*
 * Implementation notes.
 *
 * This map usually acts as a binned (bucketed) hash table, but
 * when bins get too large, they are transformed into bins of
 * TreeNodes, each structured similarly to those in
 * java.util.TreeMap. Most methods try to use normal bins, but
 * relay to TreeNode methods when applicable (simply by checking
 * instanceof a node). Bins of TreeNodes may be traversed and
 * used like any others, but additionally support faster lookup
 * when overpopulated. However, since the vast majority of bins in
 * normal use are not overpopulated, checking for existence of
 * tree bins may be delayed in the course of table methods.
 * Explanation: the map is a hash table made up of buckets (bins).
 * When a single bucket grows too large, plain-list performance
 * suffers, so its nodes are converted to TreeNodes, i.e. the
 * bucket is treeified. Hence the overall shape: array + linked
 * list + red-black tree.
 *
 * Tree bins (i.e., bins whose elements are all TreeNodes) are
 * ordered primarily by hashCode, but in the case of ties, if two
 * elements are of the same "class C implements Comparable<C>",
 * type then their compareTo method is used for ordering. (We
 * conservatively check generic types via reflection to validate
 * this -- see method comparableClassFor). The added complexity
 * of tree bins is worthwhile in providing worst-case O(log n)
 * operations when keys either have distinct hashes or are
 * orderable, Thus, performance degrades gracefully under
 * accidental or malicious usages in which hashCode() methods
 * return values that are poorly distributed, as well as those in
 * which many keys share a hashCode, so long as they are also
 * Comparable. (If neither of these apply, we may waste about a
 * factor of two in time and space compared to taking no
 * precautions. But the only known cases stem from poor user
 * programming practices that are already so slow that this makes
 * little difference.)
 * Explanation: poorly written (or malicious) hashCode
 * implementations can degrade performance, but only gracefully,
 * down to O(log n), as long as the keys are also Comparable.
 *
 * Because TreeNodes are about twice the size of regular nodes, we
 * use them only when bins contain enough nodes to warrant use
 * (see TREEIFY_THRESHOLD). And when they become too small (due to
 * removal or resizing) they are converted back to plain bins. In
 * usages with well-distributed user hashCodes, tree bins are
 * rarely used. Ideally, under random hashCodes, the frequency of
 * nodes in bins follows a Poisson distribution
 * (http://en.wikipedia.org/wiki/Poisson_distribution) with a
 * parameter of about 0.5 on average for the default resizing
 * threshold of 0.75, although with a large variance because of
 * resizing granularity. Ignoring variance, the expected
 * occurrences of list size k are (exp(-0.5) * pow(0.5, k) /
 * factorial(k)). The first values are:
 *
 * 0:    0.60653066
 * 1:    0.30326533
 * 2:    0.07581633
 * 3:    0.01263606
 * 4:    0.00157952
 * 5:    0.00015795
 * 6:    0.00001316
 * 7:    0.00000094
 * 8:    0.00000006
 * more: less than 1 in ten million
 * Explanation: a TreeNode is roughly twice the size of a regular
 * node, so trees are used only when a bin is full enough to be
 * worth it; when a bin shrinks again (through removal or
 * resizing) it is converted back to a plain list. With
 * well-distributed hashCodes, tree bins are rarely needed:
 * ideally, under random hashCodes, bin sizes follow a Poisson
 * distribution, and with the default load factor of 0.75 the
 * probability of a bin reaching 8 nodes is about 0.00000006,
 * which is where the treeify threshold of 8 comes from.
 *
 * The root of a tree bin is normally its first node. However,
 * sometimes (currently only upon Iterator.remove), the root might
 * be elsewhere, but can be recovered following parent links
 * (method TreeNode.root()).
 *
 * All applicable internal methods accept a hash code as an
 * argument (as normally supplied from a public method), allowing
 * them to call each other without recomputing user hashCodes.
 * Most internal methods also accept a "tab" argument, that is
 * normally the current table, but may be a new or old one when
 * resizing or converting.
 *
 * When bin lists are treeified, split, or untreeified, we keep
 * them in the same relative access/traversal order (i.e., field
 * Node.next) to better preserve locality, and to slightly
 * simplify handling of splits and traversals that invoke
 * iterator.remove. When using comparators on insertion, to keep a
 * total ordering (or as close as is required here) across
 * rebalancings, we compare classes and identityHashCodes as
 * tie-breakers.
 *
 * The use and transitions among plain vs tree modes is
 * complicated by the existence of subclass LinkedHashMap. See
 * below for hook methods defined to be invoked upon insertion,
 * removal and access that allow LinkedHashMap internals to
 * otherwise remain independent of these mechanics. (This also
 * requires that a map instance be passed to some utility methods
 * that may create new nodes.)
 *
 * The concurrent-programming-like SSA-based coding style helps
 * avoid aliasing errors amid all of the twisty pointer operations.
 */
Basic fields
-
The default initial capacity; it must be a power of two. The author uses a bit shift here, and bitwise operations are everywhere in the HashMap source.
/**
 * The default initial capacity - MUST be a power of two.
 */
static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; // aka 16
-
The maximum capacity, i.e. 2 to the 30th power. If a constructor is called with a larger value, this value is used as the capacity instead.
/**
 * The maximum capacity, used if a higher value is implicitly specified
 * by either of the constructors with arguments.
 * MUST be a power of two <= 1<<30.
 */
static final int MAXIMUM_CAPACITY = 1 << 30;
-
The load factor defaults to 0.75.
/**
 * The load factor used when none specified in constructor.
 */
static final float DEFAULT_LOAD_FACTOR = 0.75f;
-
The list-to-tree threshold: when an element is added to a bucket that already holds at least 8 nodes, the bucket is converted to a red-black tree (table length permitting, see MIN_TREEIFY_CAPACITY below).
/**
 * The bin count threshold for using a tree rather than list for a
 * bin. Bins are converted to trees when adding an element to a
 * bin with at least this many nodes. The value must be greater
 * than 2 and should be at least 8 to mesh with assumptions in
 * tree removal about conversion back to plain bins upon
 * shrinkage.
 */
static final int TREEIFY_THRESHOLD = 8;
-
The tree-to-list threshold; it must be smaller than TREEIFY_THRESHOLD. When a (split) tree bin holds 6 or fewer nodes, the red-black tree is converted back to a linked list.
/**
 * The bin count threshold for untreeifying a (split) bin during a
 * resize operation. Should be less than TREEIFY_THRESHOLD, and at
 * most 6 to mesh with shrinkage detection under removal.
 */
static final int UNTREEIFY_THRESHOLD = 6;
-
The minimum table capacity for treeification: a bucket's list may be converted to a red-black tree only when the table capacity is at least this value; otherwise an overfull bucket triggers a resize instead of treeification. To avoid a conflict between the resizing and treeification thresholds, the value must be at least 4 * TREEIFY_THRESHOLD.
/**
 * The smallest table capacity for which bins may be treeified.
 * (Otherwise the table is resized if too many nodes in a bin.)
 * Should be at least 4 * TREEIFY_THRESHOLD to avoid conflicts
 * between resizing and treeification thresholds.
 */
static final int MIN_TREEIFY_CAPACITY = 64;
Constructors
-
Now for the source analysis proper; by convention, we start with the constructors.
/**
 * Constructs an empty <tt>HashMap</tt> with the specified initial
 * capacity and load factor.
 *
 * @param  initialCapacity the initial capacity
 * @param  loadFactor      the load factor
 * @throws IllegalArgumentException if the initial capacity is negative
 *         or the load factor is nonpositive
 */
public HashMap(int initialCapacity, float loadFactor) {
    // A negative capacity makes no sense: reject it.
    if (initialCapacity < 0)
        throw new IllegalArgumentException("Illegal initial capacity: " +
                                           initialCapacity);
    // MAXIMUM_CAPACITY is the field defined above. A requested capacity
    // beyond 2^30 is clamped to MAXIMUM_CAPACITY; the capacity is never
    // allowed to exceed it.
    if (initialCapacity > MAXIMUM_CAPACITY)
        initialCapacity = MAXIMUM_CAPACITY;
    if (loadFactor <= 0 || Float.isNaN(loadFactor))
        throw new IllegalArgumentException("Illegal load factor: " +
                                           loadFactor);
    // Store the supplied load factor in the instance field.
    this.loadFactor = loadFactor;
    // Many articles (and the threshold field's own comment) describe
    // threshold as capacity * loadFactor, the size beyond which the table
    // resizes. That is its eventual meaning, but at construction time the
    // load factor is NOT applied: threshold temporarily holds the initial
    // capacity rounded up to a power of two, and resize() later turns it
    // into a real threshold.
    this.threshold = tableSizeFor(initialCapacity);
}

/**
 * Returns a power of two size for the given target capacity.
 */
// Returns the smallest power of two greater than or equal to cap.
// The unsigned right shifts combined with OR (1|0 = 1; 1|1 = 1; 0|0 = 0)
// smear the highest set bit into every lower position, so that n + 1
// is a power of two.
static final int tableSizeFor(int cap) {
    int n = cap - 1;
    n |= n >>> 1;
    n |= n >>> 2;
    n |= n >>> 4;
    n |= n >>> 8;
    n |= n >>> 16;
    return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
}

/**
 * Constructs an empty <tt>HashMap</tt> with the specified initial
 * capacity and the default load factor (0.75).
 *
 * @param  initialCapacity the initial capacity.
 * @throws IllegalArgumentException if the initial capacity is negative.
 */
public HashMap(int initialCapacity) {
    this(initialCapacity, DEFAULT_LOAD_FACTOR);
}

/**
 * Constructs an empty <tt>HashMap</tt> with the default initial capacity
 * (16) and the default load factor (0.75).
 */
public HashMap() {
    this.loadFactor = DEFAULT_LOAD_FACTOR; // all other fields defaulted
}

/**
 * Constructs a new <tt>HashMap</tt> with the same mappings as the
 * specified <tt>Map</tt>.  The <tt>HashMap</tt> is created with
 * default load factor (0.75) and an initial capacity sufficient to
 * hold the mappings in the specified <tt>Map</tt>.
 *
 * @param   m the map whose mappings are to be placed in this map
 * @throws  NullPointerException if the specified map is null
 */
public HashMap(Map<? extends K, ? extends V> m) {
    this.loadFactor = DEFAULT_LOAD_FACTOR;
    // The heavy lifting is done by HashMap's put machinery.
    putMapEntries(m, false);
}

/**
 * Implements Map.putAll and Map constructor
 *
 * @param m the map
 * @param evict false when initially constructing this map, else
 * true (relayed to method afterNodeInsertion).
 */
final void putMapEntries(Map<? extends K, ? extends V> m, boolean evict) {
    int s = m.size();
    // An empty source map means there is nothing to do.
    if (s > 0) {
        if (table == null) { // pre-size
            // E.g. if m.size() is 8, ft = 8 / 0.75 + 1.0 = 11.6666667
            float ft = ((float)s / loadFactor) + 1.0F;
            // Truncated to int, the fraction is lost: t = 11
            int t = ((ft < (float)MAXIMUM_CAPACITY) ?
                     (int)ft : MAXIMUM_CAPACITY);
            // threshold is still 0 when this is reached via the constructor
            if (t > threshold)
                threshold = tableSizeFor(t);
        }
        else if (s > threshold)
            // resize() either initializes or doubles the table; covered later
            resize();
        for (Map.Entry<? extends K, ? extends V> e : m.entrySet()) {
            K key = e.getKey();
            V value = e.getValue();
            putVal(hash(key), key, value, false, evict);
        }
    }
}
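To see what the constructor actually stores in threshold, tableSizeFor is easy to test in isolation. The sketch below copies the method from the source verbatim, including the pre-size case discussed above (8 entries → t = 11 → threshold 16):

```java
// Standalone copy of HashMap.tableSizeFor, to observe what the
// constructor stores in threshold.
public class TableSizeForDemo {
    static final int MAXIMUM_CAPACITY = 1 << 30;

    static int tableSizeFor(int cap) {
        int n = cap - 1;
        n |= n >>> 1;
        n |= n >>> 2;
        n |= n >>> 4;
        n |= n >>> 8;
        n |= n >>> 16;
        return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
    }

    public static void main(String[] args) {
        // The pre-size case above: 8 / 0.75 + 1 = 11.67 -> 11 -> 16
        System.out.println(tableSizeFor(11)); // 16
        System.out.println(tableSizeFor(16)); // 16 (already a power of two)
        System.out.println(tableSizeFor(17)); // 32
    }
}
```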
The hash(key) method
-
Let's look at the hash(key) source.
/**
 * Computes key.hashCode() and spreads (XORs) higher bits of hash
 * to lower.  Because the table uses power-of-two masking, sets of
 * hashes that vary only in bits above the current mask will
 * always collide. (Among known examples are sets of Float keys
 * holding consecutive whole numbers in small tables.)  So we
 * apply a transform that spreads the impact of higher bits
 * downward. There is a tradeoff between speed, utility, and
 * quality of bit-spreading. Because many common sets of hashes
 * are already reasonably distributed (so don't benefit from
 * spreading), and because we use trees to handle large sets of
 * collisions in bins, we just XOR some shifted bits in the
 * cheapest possible way to reduce systematic lossage, as well as
 * to incorporate impact of the highest bits that would otherwise
 * never be used in index calculations because of table bounds.
 */
static final int hash(Object key) {
    int h;
    // key.hashCode() returns an int: 4 bytes of 8 bits each, i.e. 32 bits.
    // The spread hash is the hashCode XORed with itself shifted right
    // (unsigned) by 16 bits, folding the high half into the low half.
    // (The bucket index is computed later from this value as (n - 1) & hash.)
    // XOR truth table: 0^0=0; 0^1=1; 1^0=1; 1^1=0
    return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
}
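The point of the spread is visible with two hashes that differ only above bit 16: in a small table they would otherwise always collide. A minimal sketch (the hash values are arbitrary examples of mine):

```java
public class HashSpreadDemo {
    // Same spreading as HashMap.hash(): fold the high 16 bits into the low 16.
    static int spread(int h) {
        return h ^ (h >>> 16);
    }

    public static void main(String[] args) {
        int n = 16;                               // a small table
        int h1 = 0x0001_0004, h2 = 0x0002_0004;   // differ only in the high bits
        // Without spreading, both land in the same bucket:
        System.out.println(((n - 1) & h1) == ((n - 1) & h2)); // true
        // With spreading, they do not:
        System.out.println(((n - 1) & spread(h1)) == ((n - 1) & spread(h2))); // false
    }
}
```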
The put(K key, V value) method
-
Looking at the put(K key, V value) source, note that put does have a return value of type V: the value previously associated with the key, or null if there was no mapping. (A claim you sometimes hear, that 1.8 always returns null, is wrong: the trailing return null in putVal covers only the case where the key did not exist before, and 1.7 behaves the same way.)
/**
 * Associates the specified value with the specified key in this map.
 * If the map previously contained a mapping for the key, the old
 * value is replaced.
 *
 * @param key key with which the specified value is to be associated
 * @param value value to be associated with the specified key
 * @return the previous value associated with <tt>key</tt>, or
 *         <tt>null</tt> if there was no mapping for <tt>key</tt>.
 *         (A <tt>null</tt> return can also indicate that the map
 *         previously associated <tt>null</tt> with <tt>key</tt>.)
 */
public V put(K key, V value) {
    return putVal(hash(key), key, value, false, true);
}

/**
 * Implements Map.put and related methods
 *
 * @param hash         the spread hash, used to compute the bucket index
 * @param key          the key
 * @param value        the value to put
 * @param onlyIfAbsent if true, don't change existing value
 * @param evict        if false, the table is in creation mode.
 * @return previous value, or null if none
 */
final V putVal(int hash, K key, V value, boolean onlyIfAbsent,
               boolean evict) {
    // Two Node references: tab is the table itself; p is the node already
    // sitting at the bucket where this key/value pair belongs.
    Node<K,V>[] tab; Node<K,V> p;
    // n is the table length; i is the bucket index, i = (n - 1) & hash
    int n, i;
    // table starts out null and is allocated lazily on first use; once
    // allocated, its length is always a power of two.
    if ((tab = table) == null || (n = tab.length) == 0)
        // HashMap<String, String> map = new HashMap<>(16);
        // HashMap<String, String> copy = new HashMap<>(map);
        // HashMap<String, String> map1 = new HashMap<>();
        // The first put into map1 or copy reaches this resize(), i.e. the
        // initialization path. On first use resize() just assigns threshold
        // and creates the table; the full source is analyzed below.
        n = (tab = resize()).length; // n is the table length, i.e. the capacity
    // i = (n - 1) & hash computes the index at which to store the entry.
    // This also explains why the table length must be a power of two: the
    // index formula is designed around it.
    if ((p = tab[i = (n - 1) & hash]) == null)
        // Empty bucket: drop the node straight in.
        // newNode is new Node<>(hash, key, value, next)
        tab[i] = newNode(hash, key, value, null);
    else {
        Node<K,V> e; K k;
        // The first node's hash matches and its key is equal to the one
        // being put: remember it so its value can be replaced below.
        if (p.hash == hash &&
            ((k = p.key) == key || (key != null && key.equals(k))))
            e = p;
        // Tree bin: put the entry into the red-black tree.
        else if (p instanceof TreeNode)
            e = ((TreeNode<K,V>)p).putTreeVal(this, tab, hash, key, value);
        // Linked-list bin
        else {
            for (int binCount = 0; ; ++binCount) {
                // Tail insertion: append the new node at the end of the list.
                if ((e = p.next) == null) {
                    p.next = newNode(hash, key, value, null);
                    if (binCount >= TREEIFY_THRESHOLD - 1) // -1 for 1st
                        // Convert to a red-black tree. binCount reaches
                        // TREEIFY_THRESHOLD - 1 just as the ninth node is
                        // appended. Saying "it converts to a tree" is not
                        // quite rigorous: inside treeifyBin, if the table
                        // capacity is below 64, the map resizes instead.
                        treeifyBin(tab, hash);
                    break;
                }
                if (e.hash == hash &&
                    ((k = e.key) == key || (key != null && key.equals(k))))
                    break;
                // Advance: together with e = p.next this walks the whole list.
                p = e;
            }
        }
        if (e != null) { // existing mapping for key
            V oldValue = e.value;
            if (!onlyIfAbsent || oldValue == null)
                e.value = value;
            // Callback hook for LinkedHashMap; an empty method in HashMap.
            afterNodeAccess(e);
            // When the key already existed, the old value is returned.
            return oldValue;
        }
    }
    ++modCount;
    if (++size > threshold)
        resize();
    afterNodeInsertion(evict);
    return null;
}
The resize() method
-
The resize() method kept appearing above; it both initializes the table and doubles it. Let's analyze its source now.
/**
 * Initializes or doubles table size.  If null, allocates in
 * accord with initial capacity target held in field threshold.
 * Otherwise, because we are using power-of-two expansion, the
 * elements from each bin must either stay at same index, or move
 * with a power of two offset in the new table.
 *
 * @return the table
 */
// As the javadoc says: initialize the table, or grow it. If table is null,
// allocate it from the target held in threshold (or the defaults).
// Otherwise, thanks to power-of-two expansion, every element in a bin
// either stays at its old index or moves by a power-of-two offset in the
// new table.
final Node<K,V>[] resize() {
    Node<K,V>[] oldTab = table;
    int oldCap = (oldTab == null) ? 0 : oldTab.length;
    int oldThr = threshold;
    int newCap, newThr = 0;
    if (oldCap > 0) {
        if (oldCap >= MAXIMUM_CAPACITY) {
            threshold = Integer.MAX_VALUE;
            return oldTab;
        }
        else if ((newCap = oldCap << 1) < MAXIMUM_CAPACITY &&
                 oldCap >= DEFAULT_INITIAL_CAPACITY)
            newThr = oldThr << 1; // double threshold
    }
    else if (oldThr > 0) // initial capacity was placed in threshold
        newCap = oldThr;
    else {               // zero initial threshold signifies using defaults
        newCap = DEFAULT_INITIAL_CAPACITY;
        newThr = (int)(DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY);
    }
    if (newThr == 0) {
        float ft = (float)newCap * loadFactor;
        newThr = (newCap < MAXIMUM_CAPACITY && ft < (float)MAXIMUM_CAPACITY ?
                  (int)ft : Integer.MAX_VALUE);
    }
    threshold = newThr;
    @SuppressWarnings({"rawtypes","unchecked"})
    // For pure initialization, everything up to this point is all that runs.
    Node<K,V>[] newTab = (Node<K,V>[])new Node[newCap];
    table = newTab;
    // The core migration logic
    if (oldTab != null) {
        for (int j = 0; j < oldCap; ++j) {
            Node<K,V> e;
            if ((e = oldTab[j]) != null) {
                oldTab[j] = null;
                if (e.next == null)
                    newTab[e.hash & (newCap - 1)] = e;
                else if (e instanceof TreeNode)
                    // Tree-bin migration; it contains the same lo/hi idea as
                    // the do-while below and is covered in the next section.
                    ((TreeNode<K,V>)e).split(this, newTab, j, oldCap);
                else { // preserve order
                    // The bucket is split into two regions, a lo region and a
                    // hi region. After the capacity doubles, the deciding
                    // test is e.hash & oldCap:
                    //   e.hash & oldCap == 0 -> lo (index stays j)
                    //   e.hash & oldCap != 0 -> hi (index becomes j + oldCap)
                    // The list is first partitioned into the two chains, which
                    // are then dropped into the new table in one step each.
                    Node<K,V> loHead = null, loTail = null;
                    Node<K,V> hiHead = null, hiTail = null;
                    Node<K,V> next;
                    do {
                        next = e.next;
                        if ((e.hash & oldCap) == 0) {
                            if (loTail == null)
                                loHead = e;
                            else
                                loTail.next = e;
                            loTail = e;
                        }
                        else {
                            if (hiTail == null)
                                hiHead = e;
                            else
                                hiTail.next = e;
                            hiTail = e;
                        }
                    } while ((e = next) != null);
                    if (loTail != null) {
                        loTail.next = null;
                        newTab[j] = loHead;
                    }
                    if (hiTail != null) {
                        hiTail.next = null;
                        // hi index = lo index + oldCap
                        newTab[j + oldCap] = hiHead;
                    }
                }
            }
        }
    }
    return newTab;
}
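The lo/hi split test can be checked in isolation. A minimal sketch (the helper `newIndex` is mine, condensing what the migration loop does per node):

```java
public class SplitDemo {
    // During resize, each node's new slot is decided by one bit of its hash:
    // (hash & oldCap) == 0 keeps it at index j, otherwise it moves to j + oldCap.
    static int newIndex(int hash, int oldCap) {
        int j = hash & (oldCap - 1);             // index in the old table
        return (hash & oldCap) == 0 ? j : j + oldCap;
    }

    public static void main(String[] args) {
        int oldCap = 16;
        int hashLo = 0b0000_0101; // bit 4 clear -> lo chain
        int hashHi = 0b0001_0101; // bit 4 set   -> hi chain
        System.out.println(newIndex(hashLo, oldCap)); // 5
        System.out.println(newIndex(hashHi, oldCap)); // 21 = 5 + 16
        // Same answer as recomputing the index against the doubled table,
        // which is exactly why no per-key rehash is needed:
        System.out.println(newIndex(hashHi, oldCap) == (hashHi & (2 * oldCap - 1))); // true
    }
}
```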
Splitting a tree bin during resize
- ((TreeNode<K,V>)e).split(this, newTab, j, oldCap) is the tree-bin side of resizing. The red-black tree is split into a low and a high doubly-linked list, which are then processed.
/**
* Splits nodes in a tree bin into lower and upper tree bins,
* or untreeifies if now too small. Called only from resize;
* see above discussion about split bits and indices.
*
* @param map the map
* @param tab the table for recording bin heads
* @param index the index of the table being split
* @param bit the bit of hash to split on
*/
final void split(HashMap<K,V> map, Node<K,V>[] tab, int index, int bit) {
TreeNode<K,V> b = this;
// Relink into lo and hi lists, preserving order
// Define a lo (low) list and a hi (high) list
TreeNode<K,V> loHead = null, loTail = null;
TreeNode<K,V> hiHead = null, hiTail = null;
// Node counts for the lo and hi lists
int lc = 0, hc = 0;
// At first this spot is puzzling: we are splitting a red-black tree, so
// why does the code look so much like the linked-list resize above?
// The reason: TreeNode extends LinkedHashMap.Entry<K,V>, which extends
// HashMap.Node<K,V> — every tree bin keeps a linked list hidden inside it.
for (TreeNode<K,V> e = b, next; e != null; e = next) {
// The code below mirrors the linked-list resize, apart from ++lc and ++hc
next = (TreeNode<K,V>)e.next;
e.next = null;
if ((e.hash & bit) == 0) {
if ((e.prev = loTail) == null)
loHead = e;
else
loTail.next = e;
loTail = e;
// Count the length of the lo list
++lc;
}
else {
if ((e.prev = hiTail) == null)
hiHead = e;
else
hiTail.next = e;
hiTail = e;
// Count the length of the hi list
++hc;
}
}
// static final int UNTREEIFY_THRESHOLD = 6;
if (loHead != null) {
// The lo list is non-empty; if its length is at most 6, untreeify it back into a plain list
if (lc <= UNTREEIFY_THRESHOLD)
tab[index] = loHead.untreeify(map);
// Longer than 6: it stays a tree. If the hi list is empty, every node
// stayed in lo, so the existing tree is intact and can be placed at the
// current index as-is; otherwise lo must be re-treeified.
else {
tab[index] = loHead;
if (hiHead != null) // (else is already treeified)
loHead.treeify(tab);
}
}
// index and index + bit match exactly what hash & (newLength - 1) would give:
//   1100 0010
// & 0011 1111   (64 - 1)
// -----------
//   0000 0010 = 2            old index = 2
//   1100 0010
// & 0111 1111   (128 - 1)
// -----------
//   0100 0010 = 66 = 64 + 2  new index = index + bit
if (hiHead != null) {
if (hc <= UNTREEIFY_THRESHOLD)
tab[index + bit] = hiHead.untreeify(map);
else {
tab[index + bit] = hiHead;
if (loHead != null)
hiHead.treeify(tab);
}
}
}
The get(Object key) method
-
Fetching a value by its key.
/**
 * Returns the value to which the specified key is mapped,
 * or {@code null} if this map contains no mapping for the key.
 *
 * <p>More formally, if this map contains a mapping from a key
 * {@code k} to a value {@code v} such that {@code (key==null ? k==null :
 * key.equals(k))}, then this method returns {@code v}; otherwise
 * it returns {@code null}.  (There can be at most one such mapping.)
 *
 * <p>A return value of {@code null} does not <i>necessarily</i>
 * indicate that the map contains no mapping for the key; it's also
 * possible that the map explicitly maps the key to {@code null}.
 * The {@link #containsKey containsKey} operation may be used to
 * distinguish these two cases.
 *
 * @see #put(Object, Object)
 */
public V get(Object key) {
    Node<K,V> e;
    // The real work happens in getNode()
    return (e = getNode(hash(key), key)) == null ? null : e.value;
}

/**
 * Implements Map.get and related methods
 *
 * @param hash hash for key
 * @param key the key
 * @return the node, or null if none
 */
final Node<K,V> getNode(int hash, Object key) {
    Node<K,V>[] tab; Node<K,V> first, e; int n; K k;
    if ((tab = table) != null && (n = tab.length) > 0 &&
        (first = tab[(n - 1) & hash]) != null) {
        if (first.hash == hash && // always check first node
            ((k = first.key) == key || (key != null && key.equals(k))))
            return first;
        if ((e = first.next) != null) {
            if (first instanceof TreeNode)
                return ((TreeNode<K,V>)first).getTreeNode(hash, key);
            do {
                if (e.hash == hash &&
                    ((k = e.key) == key || (key != null && key.equals(k))))
                    return e;
            } while ((e = e.next) != null);
        }
    }
    return null;
}
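The get/put contracts discussed above, including the ambiguity of a null result that the javadoc warns about, can be exercised in a short sketch:

```java
import java.util.HashMap;

public class GetPutDemo {
    public static void main(String[] args) {
        HashMap<String, Integer> map = new HashMap<>();
        System.out.println(map.put("a", 1));    // null (no previous mapping)
        System.out.println(map.put("a", 2));    // 1    (the old value is returned)
        System.out.println(map.get("a"));       // 2
        System.out.println(map.get("missing")); // null
        map.put("b", null);
        // A null result is ambiguous: containsKey distinguishes the two cases.
        System.out.println(map.get("b") == null && map.containsKey("b")); // true
    }
}
```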