Inheritance hierarchy
HashMap's inheritance hierarchy is fairly simple: it extends AbstractMap and implements three interfaces: Map, Cloneable, and Serializable.
Constants, static methods, and fields
Before JDK 1.8, HashMap's underlying data structure was an array plus linked lists; since JDK 1.8 it is an array plus linked lists/red-black trees.
HashMap's constants:
16, 2^30, 0.75f, 8, 6, 64
/**
* 1 shifted left by 4 bits is 16, the default initial capacity; the capacity must be a power of two
* The default initial capacity - MUST be a power of two.
*/
static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; // aka 16
/**
* The maximum capacity, used if a higher value is implicitly specified
* by either of the constructors with arguments.
* MUST be a power of two <= 1<<30.
* The maximum capacity is 2^30; if a constructor is given a larger capacity, it is capped at this value, and the actual table size is always a power of two
*/
static final int MAXIMUM_CAPACITY = 1 << 30;
/**
* The load factor used when none specified in constructor.
* The default load factor, 0.75f: the table is resized once the number of entries exceeds capacity * 0.75, because beyond that point the probability of hash collisions becomes significant
*/
static final float DEFAULT_LOAD_FACTOR = 0.75f;
/**
* The bin count threshold for using a tree rather than list for a
* bin. Bins are converted to trees when adding an element to a
* bin with at least this many nodes. The value must be greater
* than 2 and should be at least 8 to mesh with assumptions in
* tree removal about conversion back to plain bins upon
* shrinkage.
* The list-to-tree threshold is 8: when a bin's linked list reaches 8 nodes (and the table capacity is at least MIN_TREEIFY_CAPACITY), the list is converted to a red-black tree
*/
static final int TREEIFY_THRESHOLD = 8;
/**
* The bin count threshold for untreeifying a (split) bin during a
* resize operation. Should be less than TREEIFY_THRESHOLD, and at
* most 6 to mesh with shrinkage detection under removal.
* When a tree bin shrinks to 6 nodes or fewer during a resize, it is converted back to a linked list
*/
static final int UNTREEIFY_THRESHOLD = 6;
/**
* The smallest table capacity for which bins may be treeified.
* (Otherwise the table is resized if too many nodes in a bin.)
* Should be at least 4 * TREEIFY_THRESHOLD to avoid conflicts
* between resizing and treeification thresholds.
* The minimum table capacity for treeification: bins are treeified only when the table length is at least this value; otherwise the table is resized instead
*/
static final int MIN_TREEIFY_CAPACITY = 64;
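How the capacity and load factor interact can be sketched as follows. This is a standalone illustration, not HashMap's own code: the class name ThresholdSketch and method thresholdFor are made up here; the constants mirror HashMap's private fields, which are not accessible outside java.util.

```java
public class ThresholdSketch {
    static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; // 16
    static final float DEFAULT_LOAD_FACTOR = 0.75f;

    // threshold = capacity * loadFactor: a resize is triggered
    // once the number of entries exceeds this value
    static int thresholdFor(int capacity, float loadFactor) {
        return (int) (capacity * loadFactor);
    }

    public static void main(String[] args) {
        // With the defaults, the 13th insertion triggers the first resize (16 * 0.75 = 12)
        System.out.println(thresholdFor(DEFAULT_INITIAL_CAPACITY, DEFAULT_LOAD_FACTOR)); // 12
    }
}
```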
/**
* Basic hash bin node, used for most entries. (See below for
* TreeNode subclass, and in LinkedHashMap for its Entry subclass.)
* Singly linked list node
*/
static class Node<K,V> implements Map.Entry<K,V> {
//the key's hash value
final int hash;
//the entry's key
final K key;
//the entry's value
V value;
//the next node in the bin
Node<K,V> next;
Node(int hash, K key, V value, Node<K,V> next) {
this.hash = hash;
this.key = key;
this.value = value;
this.next = next;
}
public final K getKey() { return key; }
public final V getValue() { return value; }
public final String toString() { return key + "=" + value; }
//XOR the hash codes of the key and the value
public final int hashCode() {
return Objects.hashCode(key) ^ Objects.hashCode(value);
}
public final V setValue(V newValue) {
V oldValue = value;
value = newValue;
return oldValue;
}
public final boolean equals(Object o) {
if (o == this)
return true;
if (o instanceof Map.Entry) {
Map.Entry<?,?> e = (Map.Entry<?,?>)o;
if (Objects.equals(key, e.getKey()) &&
Objects.equals(value, e.getValue()))
return true;
}
return false;
}
}
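Node.hashCode() above implements exactly the Map.Entry contract (hash of key XOR hash of value), which can be observed through the public API without touching the package-private Node class. A small sketch; the class name NodeHashDemo and method entryHashMatches are illustrative only:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

public class NodeHashDemo {
    // Map.Entry.hashCode() is specified as hash(key) ^ hash(value),
    // which is what Node.hashCode() computes
    public static boolean entryHashMatches(String key, String value) {
        Map<String, String> map = new HashMap<>();
        map.put(key, value);
        Map.Entry<String, String> e = map.entrySet().iterator().next();
        return e.hashCode() == (Objects.hashCode(key) ^ Objects.hashCode(value));
    }

    public static void main(String[] args) {
        System.out.println(entryHashMatches("a", "b")); // true
    }
}
```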
HashMap's static methods:
hash(), comparableClassFor(), compareComparables(), tableSizeFor()
/**
* Computes key.hashCode() and spreads (XORs) higher bits of hash
* to lower. Because the table uses power-of-two masking, sets of
* hashes that vary only in bits above the current mask will
* always collide. (Among known examples are sets of Float keys
* holding consecutive whole numbers in small tables.) So we
* apply a transform that spreads the impact of higher bits
* downward. There is a tradeoff between speed, utility, and
* quality of bit-spreading. Because many common sets of hashes
* are already reasonably distributed (so don't benefit from
* spreading), and because we use trees to handle large sets of
* collisions in bins, we just XOR some shifted bits in the
* cheapest possible way to reduce systematic lossage, as well as
* to incorporate impact of the highest bits that would otherwise
* never be used in index calculations because of table bounds.
* Called when putting a value into the HashMap to compute the hash of the key
*/
static final int hash(Object key) {
int h;
//unsigned-right-shift the key's hashCode by 16 bits and XOR it back in, mixing the high and low 16 bits: the result's high 16 bits are unchanged, while its low 16 bits are the XOR of the original high and low halves
return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
}
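The point of the spreading becomes clear once you see how the bucket index is computed. Below is a sketch replicating hash() outside java.util; the class name HashSpreadDemo and method names spread/indexFor are made up for illustration, and indexFor mirrors the (n - 1) & hash expression used in putVal():

```java
public class HashSpreadDemo {
    // Replica of HashMap.hash(): XOR the high 16 bits into the low 16
    static int spread(Object key) {
        int h;
        return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }

    // Because the table length n is a power of two, (n - 1) & hash is
    // equivalent to hash % n but only looks at the low bits --
    // which is why the high bits must be mixed in first
    static int indexFor(int hash, int tableLength) {
        return (tableLength - 1) & hash;
    }

    public static void main(String[] args) {
        System.out.println(spread(null)); // 0: a null key always lands in bucket 0
        System.out.println(indexFor(spread("a"), 16)); // "a".hashCode() is 97, so bucket 97 & 15 = 1
    }
}
```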
/**
* Returns x's Class if it is of the form "class C implements
* Comparable<C>", else null.
* Only when the pattern is "class Test implements Comparable<Test>", with the type argument matching the class itself, is the Test class returned
*/
static Class<?> comparableClassFor(Object x) {
if (x instanceof Comparable) {
//x is a Comparable instance
Class<?> c; Type[] ts, as; ParameterizedType p;
if ((c = x.getClass()) == String.class) // bypass checks
return c;
if ((ts = c.getGenericInterfaces()) != null) {
for (Type t : ts) {
if ((t instanceof ParameterizedType) &&
((p = (ParameterizedType) t).getRawType() ==
Comparable.class) &&
(as = p.getActualTypeArguments()) != null &&
as.length == 1 && as[0] == c) // type arg is c
return c;
}
}
}
return null;
}
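Because comparableClassFor() is package-private, experimenting with it requires a standalone replica. The sketch below copies its logic verbatim into an illustrative class (the name ComparableClassForDemo is made up) so you can see which types pass the check:

```java
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;

public class ComparableClassForDemo {
    // Standalone replica of the package-private HashMap.comparableClassFor()
    static Class<?> comparableClassFor(Object x) {
        if (x instanceof Comparable) {
            Class<?> c; Type[] ts, as; ParameterizedType p;
            if ((c = x.getClass()) == String.class) // bypass checks
                return c;
            if ((ts = c.getGenericInterfaces()) != null) {
                for (Type t : ts) {
                    if ((t instanceof ParameterizedType) &&
                        ((p = (ParameterizedType) t).getRawType() == Comparable.class) &&
                        (as = p.getActualTypeArguments()) != null &&
                        as.length == 1 && as[0] == c) // type arg matches the class itself
                        return c;
                }
            }
        }
        return null;
    }

    public static void main(String[] args) {
        System.out.println(comparableClassFor("s"));          // class java.lang.String
        System.out.println(comparableClassFor(42));           // class java.lang.Integer
        System.out.println(comparableClassFor(new Object())); // null: Object is not Comparable
    }
}
```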
/**
* Returns a power of two size for the given target capacity.
* Returns the smallest power of two greater than or equal to the given capacity
*/
static final int tableSizeFor(int cap) {
int n = -1 >>> Integer.numberOfLeadingZeros(cap - 1);
return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
}
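The rounding behavior is easiest to see with a few sample inputs. The sketch below replicates tableSizeFor() (the JDK 11+ form shown above) in an illustrative class named TableSizeForDemo:

```java
public class TableSizeForDemo {
    static final int MAXIMUM_CAPACITY = 1 << 30;

    // Replica of HashMap.tableSizeFor(): the smallest power of two >= cap.
    // For cap <= 1, numberOfLeadingZeros(cap - 1) is 32, the shift is a
    // no-op on -1, and the n < 0 branch returns 1.
    static int tableSizeFor(int cap) {
        int n = -1 >>> Integer.numberOfLeadingZeros(cap - 1);
        return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
    }

    public static void main(String[] args) {
        System.out.println(tableSizeFor(10)); // 16
        System.out.println(tableSizeFor(16)); // 16: an exact power of two maps to itself
        System.out.println(tableSizeFor(17)); // 32
    }
}
```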
Fields
/**
* The table, initialized on first use, and resized as
* necessary. When allocated, length is always a power of two.
* (We also tolerate length zero in some operations to allow
* bootstrapping mechanics that are currently not needed.)
* The hash table (an array of bins, each a linked list or a tree), initialized on first use and resized as necessary; when allocated, its length is always a power of two. It is transient, so it is not written out directly during serialization
*/
transient Node<K,V>[] table;
/**
* Holds cached entrySet(). Note that AbstractMap fields are used
* for keySet() and values().
* Cache of the entry set
*/
transient Set<Map.Entry<K,V>> entrySet;
/**
* The number of key-value mappings contained in this map.
* The number of key-value pairs
*/
transient int size;
/**
* The number of times this HashMap has been structurally modified
* Structural modifications are those that change the number of mappings in
* the HashMap or otherwise modify its internal structure (e.g.,
* rehash). This field is used to make iterators on Collection-views of
* the HashMap fail-fast. (See ConcurrentModificationException).
* The number of structural modifications to the table; used to make iterators fail-fast
*/
transient int modCount;
/**
* The next size value at which to resize (capacity * load factor).
* The entry-count threshold that triggers the next resize (capacity * load factor)
* @serial
*/
int threshold;
/**
* The load factor for the hash table.
* The load factor of the hash table; defaults to 0.75f
* @serial
*/
final float loadFactor;
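The fail-fast behavior that modCount enables is observable through the public API: modifying the map structurally while an iterator is live makes the iterator's modCount check fail on the next call. A small sketch (the class name FailFastDemo and its method are illustrative):

```java
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Map;

public class FailFastDemo {
    // Structurally modifying the map mid-iteration bumps modCount,
    // and the iterator detects the mismatch and throws
    public static boolean iteratorFailsFast() {
        Map<String, Integer> map = new HashMap<>();
        map.put("a", 1);
        map.put("b", 2);
        try {
            for (String key : map.keySet()) {
                map.remove(key); // structural modification during iteration
            }
            return false;
        } catch (ConcurrentModificationException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(iteratorFailsFast()); // true
    }
}
```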
Public methods
Constructors
/**
* Constructs an empty {@code HashMap} with the specified initial
* capacity and load factor.
* A constructor that takes an initial capacity and a load factor
* @param initialCapacity the initial capacity
* @param loadFactor the load factor
* @throws IllegalArgumentException if the initial capacity is negative
* or the load factor is nonpositive
*/
public HashMap(int initialCapacity, float loadFactor) {
if (initialCapacity < 0)
throw new IllegalArgumentException("Illegal initial capacity: " +
initialCapacity);
if (initialCapacity > MAXIMUM_CAPACITY)
//cap at the maximum capacity, 2^30
initialCapacity = MAXIMUM_CAPACITY;
if (loadFactor <= 0 || Float.isNaN(loadFactor))
throw new IllegalArgumentException("Illegal load factor: " +
loadFactor);
this.loadFactor = loadFactor;
//compute the size of the next resize, rounding the requested capacity up to a power of two; note that the backing array is not allocated in the constructor
this.threshold = tableSizeFor(initialCapacity);
}
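The argument checks above are easy to exercise from the outside. The sketch below (illustrative class name HashMapCtorDemo) shows which inputs are rejected with an IllegalArgumentException:

```java
import java.util.HashMap;

public class HashMapCtorDemo {
    // A negative capacity, or a non-positive or NaN load factor,
    // is rejected immediately by the constructor
    public static boolean rejects(int capacity, float loadFactor) {
        try {
            new HashMap<String, String>(capacity, loadFactor);
            return false;
        } catch (IllegalArgumentException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(rejects(-1, 0.75f));      // true: negative capacity
        System.out.println(rejects(16, 0f));         // true: load factor must be positive
        System.out.println(rejects(16, Float.NaN));  // true: NaN is rejected
        System.out.println(rejects(16, 0.75f));      // false: valid arguments
    }
}
```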
/**
* Constructs an empty {@code HashMap} with the specified initial
* capacity and the default load factor (0.75).
* A constructor that takes only an initial capacity
*
* @param initialCapacity the initial capacity.
* @throws IllegalArgumentException if the initial capacity is negative.
*/
public HashMap(int initialCapacity) {
//delegates to the previous constructor, using the default load factor of 0.75
this(initialCapacity, DEFAULT_LOAD_FACTOR);
}
/**
* Constructs a new {@code HashMap} with the same mappings as the
* specified {@code Map}. The {@code HashMap} is created with
* default load factor (0.75) and an initial capacity sufficient to
* hold the mappings in the specified {@code Map}.
* Loads another Map into this HashMap: every mapping in m is copied into the newly created map
* @param m the map whose mappings are to be placed in this map
* @throws NullPointerException if the specified map is null
*/
public HashMap(Map<? extends K, ? extends V> m) {
this.loadFactor = DEFAULT_LOAD_FACTOR;
putMapEntries(m, false);
}
To summarize: of the three constructors, only HashMap(Map<? extends K, ? extends V> m) loads data from the supplied map right away; the other two merely record the size of the next resize in threshold, and the backing array is allocated lazily on the first insertion.
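The eager-copy behavior of the Map constructor can be confirmed from the outside: the new map contains the same mappings, but later changes to the source are not reflected. A sketch, with the illustrative class name CopyCtorDemo:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

public class CopyCtorDemo {
    // The Map constructor copies every mapping from the source immediately
    public static Map<String, Integer> copyOf(Map<String, Integer> src) {
        return new HashMap<>(src);
    }

    public static void main(String[] args) {
        Map<String, Integer> src = new TreeMap<>();
        src.put("a", 1);
        src.put("b", 2);
        Map<String, Integer> copy = copyOf(src);
        System.out.println(copy.equals(src));      // true: same mappings
        src.put("c", 3);
        System.out.println(copy.containsKey("c")); // false: a copy, not a view of src
    }
}
```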