When data needs to be stored as one-to-one key-value pairs, we can use a Map. A Map stores "key-value" pairs and identifies each entry uniquely by its key; keys are unique and may not repeat.
Map is an interface; classes that implement it include HashMap and Hashtable.
public interface Map<K, V>{}
Let's take a closer look at HashMap and Hashtable.
Definition of the HashMap class:
public class HashMap<K,V> extends AbstractMap<K,V>
implements Map<K,V>, Cloneable, Serializable {}
Definition of the Hashtable class:
public class Hashtable<K,V>
extends Dictionary<K,V>
implements Map<K,V>, Cloneable, java.io.Serializable {}
From the two class definitions above, what HashMap and Hashtable have in common is:
Both HashMap and Hashtable implement the Map, Cloneable (supports cloning), and Serializable (supports serialization) interfaces.
Where HashMap and Hashtable differ:
1. HashMap extends AbstractMap, while Hashtable extends Dictionary. The Dictionary class is obsolete: new implementations should implement the Map interface rather than extend Dictionary (this is noted in the Dictionary source, quoted below).
/**
* The <code>Dictionary</code> class is the abstract parent of any
* class, such as <code>Hashtable</code>, which maps keys to values.
* Every key and every value is an object. In any one <tt>Dictionary</tt>
* object, every key is associated with at most one value. Given a
* <tt>Dictionary</tt> and a key, the associated element can be looked up.
* Any non-<code>null</code> object can be used as a key and as a value.
* <p>
* As a rule, the <code>equals</code> method should be used by
* implementations of this class to decide if two keys are the same.
* <p>
* <strong>NOTE: This class is obsolete. New implementations should
* implement the Map interface, rather than extending this class.</strong>
*
* @author unascribed
* @see java.util.Map
* @see java.lang.Object#equals(java.lang.Object)
* @see java.lang.Object#hashCode()
* @see java.util.Hashtable
* @since JDK1.0
*/
public abstract
class Dictionary<K,V> {}
2. In HashMap, both the key and the value may be null; in Hashtable, neither the key nor the value may be null, otherwise a NullPointerException is thrown.
HashMap: a hash-table-based implementation of the Map interface. It provides all of the optional map operations and permits null values and the null key. (HashMap is roughly equivalent to Hashtable, except that it is unsynchronized and permits nulls.) The class makes no guarantees as to the order of the map; in particular, it does not guarantee that the order will remain constant over time.
/**
* Hash table based implementation of the <tt>Map</tt> interface. This
* implementation provides all of the optional map operations, and permits
* <tt>null</tt> values and the <tt>null</tt> key. (The <tt>HashMap</tt>
* class is roughly equivalent to <tt>Hashtable</tt>, except that it is
* unsynchronized and permits nulls.) This class makes no guarantees as to
* the order of the map; in particular, it does not guarantee that the order
* will remain constant over time.
*/
Hashtable: implements a hash table that maps keys to values. Any non-null object can be used as a key or as a value.
/**
* This class implements a hash table, which maps keys to values. Any
* non-<code>null</code> object can be used as a key or as a value.
*/
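The null-handling difference is easy to verify directly. A minimal sketch (the class name NullDemo is mine):

```java
import java.util.HashMap;
import java.util.Hashtable;
import java.util.Map;

public class NullDemo {
    public static void main(String[] args) {
        Map<String, String> hashMap = new HashMap<>();
        hashMap.put(null, "a");  // one null key is allowed
        hashMap.put("k", null);  // null values are allowed
        System.out.println(hashMap.get(null)); // a

        Map<String, String> hashtable = new Hashtable<>();
        try {
            hashtable.put(null, "a"); // throws NullPointerException
        } catch (NullPointerException e) {
            System.out.println("Hashtable rejects null keys");
        }
        try {
            hashtable.put("k", null); // throws NullPointerException
        } catch (NullPointerException e) {
            System.out.println("Hashtable rejects null values");
        }
    }
}
```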
3. HashMap and Hashtable have different initial capacities: HashMap's default initial capacity is 16, while Hashtable's is 11. Their default load factors are the same: 0.75.
(1) Constructs an empty HashMap with the default initial capacity (16) and the default load factor (0.75):
/**
* Constructs an empty <tt>HashMap</tt> with the default initial capacity
* (16) and the default load factor (0.75).
*/
public HashMap() {
    this.loadFactor = DEFAULT_LOAD_FACTOR; // all other fields defaulted
}
(2) Constructs a new, empty Hashtable with the default initial capacity (11) and load factor (0.75):
/**
* Constructs a new, empty hashtable with a default initial capacity (11)
* and load factor (0.75).
*/
public Hashtable() {
    this(11, 0.75f);
}
4. When capacity runs out the table must grow, and the two classes grow differently.
When the number of entries exceeds capacity × load factor, HashMap grows to current capacity × 2, while Hashtable grows to current capacity × 2 + 1.
(1) HashMap resizing:
Initializes or doubles the table size. If the table is null, it is allocated according to the initial capacity target held in the threshold field. Otherwise, because power-of-two expansion is used, the elements in each bin must either stay at the same index or move by a power-of-two offset in the new table.
/**
* Initializes or doubles table size. If null, allocates in
* accord with initial capacity target held in field threshold.
* Otherwise, because we are using power-of-two expansion, the
* elements from each bin must either stay at same index, or move
* with a power of two offset in the new table.
*
* @return the table
*/
final Node<K,V>[] resize() {
    Node<K,V>[] oldTab = table;
    int oldCap = (oldTab == null) ? 0 : oldTab.length;
    int oldThr = threshold;
    int newCap, newThr = 0;
    if (oldCap > 0) {
        if (oldCap >= MAXIMUM_CAPACITY) {
            threshold = Integer.MAX_VALUE;
            return oldTab;
        }
        else if ((newCap = oldCap << 1) < MAXIMUM_CAPACITY &&
                 oldCap >= DEFAULT_INITIAL_CAPACITY)
            newThr = oldThr << 1; // double threshold
    }
    else if (oldThr > 0) // initial capacity was placed in threshold
        newCap = oldThr;
    else {               // zero initial threshold signifies using defaults
        newCap = DEFAULT_INITIAL_CAPACITY;
        newThr = (int)(DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY);
    }
    if (newThr == 0) {
        float ft = (float)newCap * loadFactor;
        newThr = (newCap < MAXIMUM_CAPACITY && ft < (float)MAXIMUM_CAPACITY ?
                  (int)ft : Integer.MAX_VALUE);
    }
    threshold = newThr;
    @SuppressWarnings({"rawtypes","unchecked"})
    Node<K,V>[] newTab = (Node<K,V>[])new Node[newCap];
    table = newTab;
    if (oldTab != null) {
        for (int j = 0; j < oldCap; ++j) {
            Node<K,V> e;
            if ((e = oldTab[j]) != null) {
                oldTab[j] = null;
                if (e.next == null)
                    newTab[e.hash & (newCap - 1)] = e;
                else if (e instanceof TreeNode)
                    ((TreeNode<K,V>)e).split(this, newTab, j, oldCap);
                else { // preserve order
                    Node<K,V> loHead = null, loTail = null;
                    Node<K,V> hiHead = null, hiTail = null;
                    Node<K,V> next;
                    do {
                        next = e.next;
                        if ((e.hash & oldCap) == 0) {
                            if (loTail == null)
                                loHead = e;
                            else
                                loTail.next = e;
                            loTail = e;
                        }
                        else {
                            if (hiTail == null)
                                hiHead = e;
                            else
                                hiTail.next = e;
                            hiTail = e;
                        }
                    } while ((e = next) != null);
                    if (loTail != null) {
                        loTail.next = null;
                        newTab[j] = loHead;
                    }
                    if (hiTail != null) {
                        hiTail.next = null;
                        newTab[j + oldCap] = hiHead;
                    }
                }
            }
        }
    }
    return newTab;
}
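The lo/hi split in resize() hinges on the bit test (e.hash & oldCap): with power-of-two capacities the bucket index is hash & (capacity - 1), so after doubling, an entry either stays at index j or moves to j + oldCap depending on one extra hash bit. A small sketch of that arithmetic (the sample hash values are mine):

```java
public class SplitDemo {
    public static void main(String[] args) {
        int oldCap = 16, newCap = 32;
        int[] hashes = {5, 21, 37}; // 5=0b000101, 21=0b010101, 37=0b100101
        for (int h : hashes) {
            int oldIndex = h & (oldCap - 1);   // bucket before doubling
            int newIndex = h & (newCap - 1);   // bucket after doubling
            boolean stays = (h & oldCap) == 0; // the same test resize() uses
            // stays -> newIndex == oldIndex; otherwise newIndex == oldIndex + oldCap
            System.out.println(h + ": " + oldIndex + " -> " + newIndex
                               + " (stays=" + stays + ")");
        }
    }
}
```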
(2) Hashtable resizing:
Increases the capacity of, and internally reorganizes, this hashtable in order to accommodate and access its entries more efficiently. This method is called automatically when the number of keys in the hashtable exceeds the hashtable's capacity times its load factor.
/**
* Increases the capacity of and internally reorganizes this
* hashtable, in order to accommodate and access its entries more
* efficiently. This method is called automatically when the
* number of keys in the hashtable exceeds this hashtable's capacity
* and load factor.
*/
@SuppressWarnings("unchecked")
protected void rehash() {
    int oldCapacity = table.length;
    HashtableEntry<?,?>[] oldMap = table;

    // overflow-conscious code
    int newCapacity = (oldCapacity << 1) + 1;
    if (newCapacity - MAX_ARRAY_SIZE > 0) {
        if (oldCapacity == MAX_ARRAY_SIZE)
            // Keep running with MAX_ARRAY_SIZE buckets
            return;
        newCapacity = MAX_ARRAY_SIZE;
    }
    HashtableEntry<?,?>[] newMap = new HashtableEntry<?,?>[newCapacity];

    modCount++;
    threshold = (int)Math.min(newCapacity * loadFactor, MAX_ARRAY_SIZE + 1);
    table = newMap;

    for (int i = oldCapacity ; i-- > 0 ;) {
        for (HashtableEntry<K,V> old = (HashtableEntry<K,V>)oldMap[i] ; old != null ; ) {
            HashtableEntry<K,V> e = old;
            old = old.next;

            int index = (e.hash & 0x7FFFFFFF) % newCapacity;
            e.next = (HashtableEntry<K,V>)newMap[index];
            newMap[index] = e;
        }
    }
}
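Since rehash() computes newCapacity = oldCapacity * 2 + 1, starting from the default 11 the capacity sequence is 11, 23, 47, 95, and so on. A quick sketch (the class name GrowthDemo is mine):

```java
public class GrowthDemo {
    public static void main(String[] args) {
        int capacity = 11; // Hashtable's default initial capacity
        StringBuilder seq = new StringBuilder();
        for (int i = 0; i < 4; i++) {
            seq.append(capacity).append(' ');
            capacity = (capacity << 1) + 1; // same growth rule as rehash()
        }
        System.out.println(seq.toString().trim()); // 11 23 47 95
    }
}
```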
5. HashMap is not thread-safe; Hashtable is thread-safe.
Hashtable is synchronized: its methods use the synchronized keyword, which makes it thread-safe and suitable for multi-threaded environments.
HashMap is not synchronized and is suitable for single-threaded use.
Because Hashtable is synchronized and thread-safe, it is slower than HashMap in a single-threaded environment. If no synchronization is needed and only a single thread is involved, HashMap performs better than Hashtable.
Note:
synchronized means that only one thread at a time can modify the Hashtable. Any thread that wants to update it must first acquire the lock, and other threads must wait until the lock is released before they can acquire it and update the Hashtable in turn. This applies to Hashtable's main methods (see the signatures below), such as put(), get(), contains(), and remove(). Two threads can never operate on the data at the same time, which guarantees thread safety but also greatly reduces throughput.
In Java, the synchronized keyword can mark a method or a code block. When a thread calls an object's synchronized method or enters a synchronized block on that object, it acquires the object's lock; other threads cannot enter until the method or block finishes and the lock is released.
public synchronized V put(K key, V value){}
public synchronized V get(Object key) {}
public synchronized void putAll(Map<? extends K, ? extends V> t) {}
public synchronized boolean contains(Object value) {}
public synchronized boolean containsKey(Object key) {}
public synchronized V remove(Object key) {}
public synchronized void clear() {}
public synchronized String toString() {}
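Because every one of these methods locks the table, concurrent writers cannot corrupt a Hashtable. A minimal sketch with two threads writing disjoint key ranges (with a plain HashMap the same code could lose entries or worse; the class name SyncDemo is mine):

```java
import java.util.Hashtable;
import java.util.Map;

public class SyncDemo {
    public static void main(String[] args) throws InterruptedException {
        Map<Integer, Integer> table = new Hashtable<>();
        Thread t1 = new Thread(() -> {
            for (int i = 0; i < 1000; i++) table.put(i, i);
        });
        Thread t2 = new Thread(() -> {
            for (int i = 1000; i < 2000; i++) table.put(i, i);
        });
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(table.size()); // 2000 -- no lost updates
    }
}
```

In modern code, Collections.synchronizedMap or java.util.concurrent.ConcurrentHashMap is generally preferred when a thread-safe map is needed.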
Finally, let's look at the basic usage of HashMap and Hashtable.
// key and value types are both declared as String
HashMap<String,String> stringMap = new HashMap<>();
stringMap.put("","");
stringMap.get("");
stringMap.putAll(new HashMap<String,String>());
stringMap.containsKey("");
stringMap.containsValue("");
stringMap.remove("");
stringMap.clear();
stringMap.values();
stringMap.toString();
// raw types, with no type parameters declared
Map map = new HashMap();
map.put("","");
HashMap hashMap = new HashMap();
hashMap.put(1,1);
// the type parameters can be changed to suit actual use
Hashtable<String,String> hashtable = new Hashtable<>();
hashtable.put("1","1");
hashtable.get("");
hashtable.putAll(new HashMap<String,String>());
hashtable.contains("");
hashtable.containsKey("");
hashtable.containsValue("");
hashtable.remove("");
hashtable.clear();
hashtable.values();
hashtable.toString();
A follow-up article will compare each of the methods listed above for HashMap and Hashtable at the source level, along with some other differences. To be continued…