Today let's take a look at Hashtable. Having already gone through the HashMap and ConcurrentHashMap source code, Hashtable looks much simpler by comparison.
public Hashtable(int initialCapacity, float loadFactor) {
    // The default table size is 11, and loadFactor is still 0.75f
    if (initialCapacity < 0)
        throw new IllegalArgumentException("Illegal Capacity: "+
                                           initialCapacity);
    if (loadFactor <= 0 || Float.isNaN(loadFactor))
        throw new IllegalArgumentException("Illegal Load: "+loadFactor);
    if (initialCapacity==0)
        initialCapacity = 1;
    this.loadFactor = loadFactor;
    table = new Entry[initialCapacity];
    threshold = (int)Math.min(initialCapacity * loadFactor, MAX_ARRAY_SIZE + 1);
    useAltHashing = sun.misc.VM.isBooted() &&
            (initialCapacity >= Holder.ALTERNATIVE_HASHING_THRESHOLD);
}
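Unlike HashMap, whose table length is always a power of two so it can index with a bitmask, Hashtable clears the sign bit and takes a plain modulo of the table length (default 11). A minimal sketch of that indexing, with a hypothetical class name `IndexDemo`:

```java
// Hypothetical sketch of Hashtable's bucket indexing: mask off the sign
// bit, then take the remainder modulo the table length (default 11).
public class IndexDemo {
    static int bucketIndex(int hash, int tableLength) {
        // 0x7FFFFFFF clears the sign bit so the remainder is never negative
        return (hash & 0x7FFFFFFF) % tableLength;
    }

    public static void main(String[] args) {
        System.out.println(bucketIndex("key".hashCode(), 11));
        // A negative hash code still maps to a valid, non-negative bucket:
        System.out.println(bucketIndex(-7, 11));
    }
}
```

The mask matters because `hashCode()` may return a negative int, and `%` in Java preserves the sign of the dividend; without the mask a negative hash would produce a negative index.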
As you can see, Hashtable's synchronization is applied at the method level: both put and get are synchronized. This means no thread can read while another is writing, and vice versa, which is quite costly.
public synchronized V put(K key, V value) {
    // Make sure the value is not null
    if (value == null) {
        throw new NullPointerException();
    }
    // Makes sure the key is not already in the hashtable.
    Entry tab[] = table;
    int hash = hash(key);
    // Each collection computes the bucket index differently, but the goal is the same
    int index = (hash & 0x7FFFFFFF) % tab.length;
    // Once the bucket is located, walk its entry chain until we hit null
    for (Entry<K,V> e = tab[index] ; e != null ; e = e.next) {
        if ((e.hash == hash) && e.key.equals(key)) {
            V old = e.value;
            e.value = value;
            return old; // return the old value
        }
    }
    modCount++;
    if (count >= threshold) {
        // Rehash the table if the threshold is exceeded; the details of
        // resizing are not our concern here
        rehash();
        tab = table;
        hash = hash(key);
        index = (hash & 0x7FFFFFFF) % tab.length;
    }
    // Creates the new entry, inserted at the head of the bucket's chain
    Entry<K,V> e = tab[index];
    tab[index] = new Entry<>(hash, key, value, e);
    count++;
    return null;
}
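The behavior of put described above, returning the old value (or null) and rejecting nulls, can be observed directly. A short demo against the real java.util.Hashtable API (the class name `PutDemo` is mine):

```java
import java.util.Hashtable;

public class PutDemo {
    public static void main(String[] args) {
        Hashtable<String, Integer> table = new Hashtable<>();
        System.out.println(table.put("a", 1)); // null: no previous mapping
        System.out.println(table.put("a", 2)); // 1: the old value is returned
        try {
            table.put("a", null);              // Hashtable forbids null values
        } catch (NullPointerException e) {
            System.out.println("null value rejected");
        }
        try {
            table.put(null, 3);                // ...and null keys (hash(key) throws)
        } catch (NullPointerException e) {
            System.out.println("null key rejected");
        }
    }
}
```

This is one of the classic interview contrasts with HashMap, which permits one null key and any number of null values.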
public synchronized V get(Object key) {
    Entry tab[] = table;
    int hash = hash(key);
    int index = (hash & 0x7FFFFFFF) % tab.length;
    for (Entry<K,V> e = tab[index] ; e != null ; e = e.next) {
        if ((e.hash == hash) && e.key.equals(key)) {
            return e.value;
        }
    }
    return null;
}
If you have already studied HashMap and ConcurrentHashMap in detail, Hashtable is transparent at a glance; there is really not much new in it. The one thing to keep in mind is that its lock covers the entire table, whereas ConcurrentHashMap locks at the Segment level, which makes it faster under contention.
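Because every synchronized method locks on the Hashtable instance itself, a compound check-then-act across two calls is not atomic on its own; you must hold the same monitor externally. A small sketch (class name `LockDemo` is mine) relying on the fact that Java's intrinsic locks are re-entrant:

```java
import java.util.Hashtable;

public class LockDemo {
    public static void main(String[] args) {
        Hashtable<String, Integer> table = new Hashtable<>();
        table.put("count", 0);
        // get and put are each atomic, but get-then-put is not.
        // Synchronizing on the table uses the same monitor the methods use,
        // so no other thread can slip in between the two calls.
        synchronized (table) {
            Integer v = table.get("count"); // re-entrant: we already hold the lock
            table.put("count", v + 1);
        }
        System.out.println(table.get("count")); // 1
    }
}
```

This single coarse lock is exactly why Hashtable scales poorly compared to ConcurrentHashMap, which splits the lock across Segments so independent buckets can be updated concurrently.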