1. How does HashMap work, and what is its internal data structure?
Underneath it is a hash table (an array of buckets, each a linked list). In JDK 1.8, when a bucket's list reaches 8 nodes it is converted to a red-black tree, so lookups within that bucket take O(log n) instead of O(n).
2. Walk through HashMap's put method
a. Compute the key's hash and derive the table index from it (the hash algorithm, see below)
b. If the slot at that index is empty, place the node directly into the array
c. If the slot is occupied, append the node to the end of that bucket's linked list
d. If the list length reaches the threshold of 8, convert the list into a red-black tree
e. If a node with the same key already exists, replace the old value
f. If the number of mappings exceeds the resize threshold, grow the table
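The externally visible effect of these steps (in particular the replace-old-value behavior of step e) can be checked with a few calls; a minimal sketch:

```java
import java.util.HashMap;

public class PutDemo {
    public static void main(String[] args) {
        HashMap<String, Integer> map = new HashMap<>();

        // The first put for a key returns null (no previous mapping)...
        System.out.println(map.put("a", 1)); // null
        // ...and a second put for the same key replaces the old value
        // and returns it (step e above).
        System.out.println(map.put("a", 2)); // 1
        System.out.println(map.get("a"));    // 2
    }
}
```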
3. How is HashMap's hash function implemented?
a. Spread the hash code:
int h;
return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
b. Mask it into an index: (n - 1) & hash
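Putting the two pieces together, here is a small sketch of how a key's bucket index is derived (the key and table length are just illustrative):

```java
public class HashDemo {
    // The same spreading step as HashMap.hash(): XOR the high 16 bits
    // of the hash code into the low 16 bits.
    static int hash(Object key) {
        int h;
        return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }

    public static void main(String[] args) {
        int n = 16; // table length, always a power of two
        int h = hash("hello");
        int index = (n - 1) & h; // equals h % n because n is a power of two
        System.out.println("bucket index: " + index); // always in [0, 15]
        // A null key always maps to bucket 0.
        System.out.println((n - 1) & hash(null)); // 0
    }
}
```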
4. How does HashMap resolve collisions? Describe the resize process. If a value sat at some index in the old array, its position in the new array may change, so what determines where it lands?
a. Colliding nodes are appended to the bucket's linked list (separate chaining)
b. The capacity is doubled, then every node's position in the new table is recomputed
There are four cases:
Case 1: the slot is empty, so there is nothing to rehash
Case 2: the slot holds a single node (neither a list nor a tree): its new index is hash & (newLength - 1)
Case 3: the slot holds a red-black tree: ((TreeNode<K,V>)e).split(this, newTab, j, oldCap);
Case 4: the slot holds a linked list: each node can land in only one of two places, the original index j or j + oldCap.
Which of the two it is depends on the single new bit exposed by the doubled mask: if (e.hash & oldCap) == 0 the node stays at index j, otherwise it moves to j + oldCap (this is what the lo/hi lists in resize() implement).
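The two-destination split can be verified with plain bit arithmetic; a minimal sketch, using made-up hash values:

```java
public class ResizeSplitDemo {
    public static void main(String[] args) {
        int oldCap = 16;
        // Two made-up hashes that both landed in bucket 5 (j = 5)
        // of the old 16-slot table:
        int h1 = 0b0_0101; //  5: the bit at position oldCap is 0
        int h2 = 0b1_0101; // 21: the bit at position oldCap is 1
        System.out.println((oldCap - 1) & h1); // 5
        System.out.println((oldCap - 1) & h2); // 5

        // After doubling to 32, the new mask exposes exactly one more bit.
        int newCap = oldCap << 1;
        System.out.println((newCap - 1) & h1); // 5  -> stays at j
        System.out.println((newCap - 1) & h2); // 21 -> j + oldCap

        // HashMap tests that single bit directly instead of re-masking:
        System.out.println((h1 & oldCap) == 0); // true  -> lo list, index j
        System.out.println((h2 & oldCap) == 0); // false -> hi list, index j + oldCap
    }
}
```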
5. What are the general ways to resolve hash collisions?
Separate chaining (a linked list per bucket, as HashMap does); other common techniques are open addressing (linear or quadratic probing), double hashing, and rehashing into a larger table.
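To contrast with HashMap's chaining, here is a toy sketch of the open-addressing alternative (linear probing): on collision, walk forward to the next free slot instead of hanging a list off the bucket. This is not how HashMap works; it only illustrates the other family of techniques.

```java
public class LinearProbingDemo {
    static int[] keys = new int[8];
    static boolean[] used = new boolean[8];

    // Insert a key, probing forward past occupied slots;
    // returns the slot actually used.
    static int put(int key) {
        int i = key & (keys.length - 1);
        while (used[i] && keys[i] != key) {
            i = (i + 1) & (keys.length - 1); // probe the next slot, wrapping around
        }
        keys[i] = key;
        used[i] = true;
        return i;
    }

    public static void main(String[] args) {
        // 1, 9, and 17 all hash to bucket 1 in an 8-slot table:
        System.out.println(put(1));  // 1
        System.out.println(put(9));  // slot 1 taken -> 2
        System.out.println(put(17)); // slots 1, 2 taken -> 3
    }
}
```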
6. A bucket's list in HashMap has grown long and lookup has degraded to O(N). How do you optimize?
Convert the list to a red-black tree; JDK 1.8 already does this once a bin reaches 8 nodes (and the table holds at least 64 slots).
How to study the HashMap source
Start from what it does: it stores key-value pairs, combining the fast lookups of an array with the fast insertions and removals of a linked list.
Array: ArrayList — what the structure looks like, what the source looks like
Linked list: LinkedList — what the structure looks like, what the source looks like (singly vs. doubly linked)
Arrays
ArrayList's data structure
/**
* The array buffer into which the elements of the ArrayList are stored.
* The capacity of the ArrayList is the length of this array buffer. Any
* empty ArrayList with elementData == DEFAULTCAPACITY_EMPTY_ELEMENTDATA
* will be expanded to DEFAULT_CAPACITY when the first element is added.
*/
transient Object[] elementData; // non-private to simplify nested class access
Singly linked list
Doubly linked list
LinkedList's data structure
private static class Node<E> {
    E item;
    Node<E> next;
    Node<E> prev;

    Node(Node<E> prev, E element, Node<E> next) {
        this.item = element;
        this.next = next;
        this.prev = prev;
    }
}
HashMap's data structure, illustrated
It combines the strengths of the array and the singly linked list
HashMap's underlying storage structure
/**
* Basic hash bin node, used for most entries. (See below for
* TreeNode subclass, and in LinkedHashMap for its Entry subclass.)
*/
static class Node<K,V> implements Map.Entry<K,V> {
    final int hash; // records which table slot this node hashed to
    final K key;
    V value;
    Node<K,V> next;

    Node(int hash, K key, V value, Node<K,V> next) {
        this.hash = hash;
        this.key = key;
        this.value = value;
        this.next = next;
    }

    public final K getKey()        { return key; }
    public final V getValue()      { return value; }
    public final String toString() { return key + "=" + value; }

    public final int hashCode() {
        return Objects.hashCode(key) ^ Objects.hashCode(value);
    }

    public final V setValue(V newValue) {
        V oldValue = value;
        value = newValue;
        return oldValue;
    }

    public final boolean equals(Object o) {
        if (o == this)
            return true;
        if (o instanceof Map.Entry) {
            Map.Entry<?,?> e = (Map.Entry<?,?>)o;
            if (Objects.equals(key, e.getKey()) &&
                Objects.equals(value, e.getValue()))
                return true;
        }
        return false;
    }
}
Dissecting HashMap's put method
/**
* Associates the specified value with the specified key in this map.
* If the map previously contained a mapping for the key, the old
* value is replaced.
*
* @param key key with which the specified value is to be associated
* @param value value to be associated with the specified key
* @return the previous value associated with <tt>key</tt>, or
* <tt>null</tt> if there was no mapping for <tt>key</tt>.
* (A <tt>null</tt> return can also indicate that the map
* previously associated <tt>null</tt> with <tt>key</tt>.)
*/
public V put(K key, V value) {
    return putVal(hash(key), key, value, false, true);
}
/**
* The table, initialized on first use, and resized as
* necessary. When allocated, length is always a power of two.
* (We also tolerate length zero in some operations to allow
* bootstrapping mechanics that are currently not needed.)
* This field is where HashMap's array-plus-linked-list structure lives.
*/
transient Node<K,V>[] table;
/**
* The number of times this HashMap has been structurally modified
* Structural modifications are those that change the number of mappings in
* the HashMap or otherwise modify its internal structure (e.g.,
* rehash). This field is used to make iterators on Collection-views of
* the HashMap fail-fast. (See ConcurrentModificationException).
*/
transient int modCount;
/**
* Implements Map.put and related methods
*
* @param hash hash for key
* @param key the key
* @param value the value to put
* @param onlyIfAbsent if true, don't change existing value
* @param evict if false, the table is in creation mode.
* @return previous value, or null if none
*/
final V putVal(int hash, K key, V value, boolean onlyIfAbsent,
               boolean evict) {
    Node<K,V>[] tab;   // the table
    Node<K,V> p;       // the node already sitting at the target slot
    int n, i;          // n is the table length, i the target index
    if ((tab = table) == null || (n = tab.length) == 0)
        // the table is empty: allocate it via resize()
        n = (tab = resize()).length;
    if ((p = tab[i = (n - 1) & hash]) == null)
        // compute the node's slot from the hash (see the index algorithm
        // below); the slot is empty, so place the new node directly
        tab[i] = newNode(hash, key, value, null);
    else {
        // the slot is already occupied
        Node<K,V> e; K k;
        if (p.hash == hash &&
            ((k = p.key) == key || (key != null && key.equals(k))))
            // the first node has the same key: remember it so its value
            // can be replaced below
            e = p;
        else if (p instanceof TreeNode)
            // a tree bin: insert by red-black tree rules
            e = ((TreeNode<K,V>)p).putTreeVal(this, tab, hash, key, value);
        else {
            // a linked list
            for (int binCount = 0; ; ++binCount) {
                if ((e = p.next) == null) {
                    // reached the tail: append the new node
                    p.next = newNode(hash, key, value, null);
                    if (binCount >= TREEIFY_THRESHOLD - 1) // -1 for 1st: the bin now holds 8 nodes, treeify
                        treeifyBin(tab, hash);
                    break;
                }
                // a node with the same key already exists: stop
                if (e.hash == hash &&
                    ((k = e.key) == key || (key != null && key.equals(k))))
                    break;
                p = e;
            }
        }
        if (e != null) { // existing mapping for key
            V oldValue = e.value;
            if (!onlyIfAbsent || oldValue == null)
                e.value = value;
            afterNodeAccess(e);
            return oldValue;
        }
    }
    ++modCount;
    if (++size > threshold) // time to resize
        resize();
    afterNodeInsertion(evict);
    return null;
}
/**
* The default initial capacity - MUST be a power of two.
* The initial table size is 16.
* Why the size must be a power of two is explained below.
*/
static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; // aka 16
/**
* The load factor used when none specified in constructor.
* When the constructor does not specify a load factor, resizing kicks in once the table is 75% full; after resizing, all entries must be rehashed.
*/
static final float DEFAULT_LOAD_FACTOR = 0.75f;
/**
* The maximum capacity, used if a higher value is implicitly specified
* by either of the constructors with arguments.
* MUST be a power of two <= 1<<30.
* The maximum table length is 2 to the 30th power.
*/
static final int MAXIMUM_CAPACITY = 1 << 30;
/**
* The bin count threshold for using a tree rather than list for a
* bin. Bins are converted to trees when adding an element to a
* bin with at least this many nodes. The value must be greater
* than 2 and should be at least 8 to mesh with assumptions in
* tree removal about conversion back to plain bins upon
* shrinkage.
* A bucket's list cannot be allowed to grow without bound, or both get and put slow down (put must walk the chain to find the tail), so once a bin reaches 8 nodes it is converted to a red-black tree (provided the table holds at least 64 slots; smaller tables are resized instead).
*/
static final int TREEIFY_THRESHOLD = 8;
/**
* The bin count threshold for untreeifying a (split) bin during a
* resize operation. Should be less than TREEIFY_THRESHOLD, and at
* most 6 to mesh with shrinkage detection under removal.
* When a tree bin shrinks to 6 or fewer nodes during a resize split, it is converted back to a linked list.
*/
static final int UNTREEIFY_THRESHOLD = 6;
/**
* The number of key-value mappings contained in this map.
* That is, the number of stored entries, not the number of occupied table slots.
*/
transient int size;
/**
* The next size value at which to resize (capacity * load factor).
* Once this many entries are stored, the table is resized.
* @serial
*/
// (The javadoc description is true upon serialization.
// Additionally, if the table array has not been allocated, this
// field holds the initial array capacity, or zero signifying
// DEFAULT_INITIAL_CAPACITY.)
int threshold;
/**
* Initializes or doubles table size. If null, allocates in
* accord with initial capacity target held in field threshold.
* Otherwise, because we are using power-of-two expansion, the
* elements from each bin must either stay at same index, or move
* with a power of two offset in the new table.
*
* @return the table
* Initializes the table, or doubles its size when it already exists.
*/
final Node<K,V>[] resize() {
    Node<K,V>[] oldTab = table;
    int oldCap = (oldTab == null) ? 0 : oldTab.length; // old table length
    int oldThr = threshold;
    int newCap, newThr = 0; // new capacity and new resize threshold
    if (oldCap > 0) {
        // already at the maximum: give up on resizing
        if (oldCap >= MAXIMUM_CAPACITY) {
            threshold = Integer.MAX_VALUE;
            return oldTab;
        }
        // double the capacity; if the doubled capacity is still below the
        // maximum and the old capacity is at least the default (16),
        // the threshold simply doubles as well
        else if ((newCap = oldCap << 1) < MAXIMUM_CAPACITY &&
                 oldCap >= DEFAULT_INITIAL_CAPACITY)
            newThr = oldThr << 1; // double threshold
    }
    else if (oldThr > 0) // initial capacity was placed in threshold
        newCap = oldThr;
    else {               // zero initial threshold signifies using defaults
        newCap = DEFAULT_INITIAL_CAPACITY;
        newThr = (int)(DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY);
    }
    if (newThr == 0) {
        float ft = (float)newCap * loadFactor;
        newThr = (newCap < MAXIMUM_CAPACITY && ft < (float)MAXIMUM_CAPACITY ?
                  (int)ft : Integer.MAX_VALUE);
    }
    threshold = newThr;
    @SuppressWarnings({"rawtypes","unchecked"})
    Node<K,V>[] newTab = (Node<K,V>[])new Node[newCap];
    table = newTab;
    if (oldTab != null) {
        // after growing, every bucket of the old table must be rehashed
        for (int j = 0; j < oldCap; ++j) {
            Node<K,V> e;
            if ((e = oldTab[j]) != null) {
                oldTab[j] = null; // clear the old slot
                if (e.next == null) // a single node: just re-index it
                    newTab[e.hash & (newCap - 1)] = e;
                else if (e instanceof TreeNode) // a tree bin: split it
                    ((TreeNode<K,V>)e).split(this, newTab, j, oldCap);
                else { // preserve order: a linked list
                    // "lo" keeps the nodes that stay at index j,
                    // "hi" collects the nodes that move to index j + oldCap
                    Node<K,V> loHead = null, loTail = null;
                    Node<K,V> hiHead = null, hiTail = null;
                    Node<K,V> next;
                    do {
                        next = e.next;
                        if ((e.hash & oldCap) == 0) {
                            if (loTail == null)
                                loHead = e;
                            else
                                loTail.next = e;
                            loTail = e;
                        }
                        else {
                            if (hiTail == null)
                                hiHead = e;
                            else
                                hiTail.next = e;
                            hiTail = e;
                        }
                    } while ((e = next) != null);
                    if (loTail != null) {
                        loTail.next = null;
                        newTab[j] = loHead;
                    }
                    if (hiTail != null) {
                        hiTail.next = null;
                        newTab[j + oldCap] = hiHead;
                    }
                }
            }
        }
    }
    return newTab;
}
/**
* Computes key.hashCode() and spreads (XORs) higher bits of hash
* to lower. Because the table uses power-of-two masking, sets of
* hashes that vary only in bits above the current mask will
* always collide. (Among known examples are sets of Float keys
* holding consecutive whole numbers in small tables.) So we
* apply a transform that spreads the impact of higher bits
* downward. There is a tradeoff between speed, utility, and
* quality of bit-spreading. Because many common sets of hashes
* are already reasonably distributed (so don't benefit from
* spreading), and because we use trees to handle large sets of
* collisions in bins, we just XOR some shifted bits in the
* cheapest possible way to reduce systematic lossage, as well as
* to incorporate impact of the highest bits that would otherwise
* never be used in index calculations because of table bounds.
*/
static final int hash(Object key) {
    int h;
    return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
}
Computing a node's slot, a.k.a. the index (hash) algorithm: p = tab[i = (n - 1) & hash]
The index must satisfy:
1. It is an int, derived from key.hashCode()
2. It falls within [0, table.length - 1]
Approach 1:
hash % n
The simplest way to map a hash code into [0, n - 1] is plain modulo. The flaw: when n is a power of two, hash % n keeps only the low bits, so hash codes that differ only in their high bits all collide.
Approach 2: the JDK 1.8 algorithm
(n - 1) & hash
hash = key.hashCode() ^ (key.hashCode() >>> 16)
The hash XORs the hash code's high 16 bits into its low 16 bits so the high bits also influence the index, which serves point 3 below (using every slot of the array as evenly as possible).
Both approaches produce results in [0, n - 1].
When the table length is 16, the mask is n - 1 = 15 (binary 1111).
3. Use every slot of the array as fully and evenly as possible
Why must the table size be a power of two, and why double it on resize?
Otherwise n - 1 would not be an all-ones mask, and (n - 1) & hash could no longer reach every index in [0, table.length - 1].
A power-of-two mask spreads entries evenly across the table, reducing hash collisions and the long chains that would degrade lookup.
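The mask argument above can be checked directly; a small sketch:

```java
public class MaskDemo {
    public static void main(String[] args) {
        // For a power-of-two length n, (n - 1) is an all-ones mask,
        // so (n - 1) & hash equals hash % n for non-negative hashes.
        int n = 16; // binary 10000, n - 1 = 01111
        System.out.println(((n - 1) & 37) == (37 % n)); // true

        // For a non-power-of-two length, n - 1 has zero bits, so some
        // indices can never be produced and those slots go unused.
        int m = 10; // binary 1010, m - 1 = 1001: only 0, 1, 8, 9 reachable
        System.out.println((m - 1) & 3); // 1, index 3 itself is unreachable
    }
}
```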