Does anyone know why the Java JDK implementation of Hashtable does not rehash the table upon remove?
What if space usage becomes too low? Isn't that a reason to reduce the size and rehash?
Just like the load factor of 0.75 that triggers a rehash on put, we could have a lower bound, say 0.25 (of course, analysis could determine the best value here), on the density of the table and trigger a rehash to shrink it, provided the size of the table is greater than the initialCapacity.
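To make the proposed rule concrete, here is a minimal sketch of a wrapper that applies it on top of `HashMap`. Everything here is hypothetical: `ShrinkingMap` is not a JDK class, and since `HashMap` does not expose its internal capacity, the sketch tracks an approximate capacity itself, mirroring the default growth rule (double when size exceeds 0.75 × capacity) and adding the suggested shrink rule (rebuild when density falls below 0.25, never below the initial capacity).

```java
import java.util.HashMap;
import java.util.Map;

/** Hypothetical wrapper illustrating the shrink-on-remove rule from the question. */
public class ShrinkingMap<K, V> {
    private static final int INITIAL_CAPACITY = 16;   // HashMap's default
    private static final double MAX_DENSITY = 0.75;   // grow threshold (as in HashMap)
    private static final double MIN_DENSITY = 0.25;   // proposed shrink threshold

    private Map<K, V> map = new HashMap<>(INITIAL_CAPACITY);
    private int capacity = INITIAL_CAPACITY; // tracked here; HashMap does not expose it

    public void put(K key, V value) {
        map.put(key, value);
        // mirror HashMap's growth rule: double while size exceeds 0.75 * capacity
        while (map.size() > MAX_DENSITY * capacity) capacity *= 2;
    }

    public V remove(K key) {
        V v = map.remove(key);
        // the proposed rule: once density drops below 0.25, halve the capacity
        // (never below the initial capacity) and rebuild -- this copy is the "rehash"
        while (capacity > INITIAL_CAPACITY && map.size() < MIN_DENSITY * capacity) {
            capacity = Math.max(INITIAL_CAPACITY, capacity / 2);
            map = new HashMap<>(map);
        }
        return v;
    }

    public V get(K key) { return map.get(key); }
    public int size()   { return map.size(); }
    public int capacity() { return capacity; }
}
```

Note that every shrink pays the full cost of copying the remaining entries, which is exactly the expense the answer below argues the JDK chose to avoid.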
Solution
Rehashing is an expensive operation, and the Java hash-based data structures try to avoid it. They rehash only when lookup performance would otherwise degrade, because fast lookup is the whole purpose of this type of data structure.
Here is a quote from the HashMap Javadoc:
The expected number of entries in the map and its load factor should be taken into account when setting its initial capacity, so as to minimize the number of rehash operations. If the initial capacity is greater than the maximum number of entries divided by the load factor, no rehash operations will ever occur.
If many mappings are to be stored in a HashMap instance, creating it with a sufficiently large capacity will allow the mappings to be stored more efficiently than letting it perform automatic rehashing as needed to grow the table.
Besides this argument, the Java designers may have reasoned that a table which once held that many elements is likely to hold that many again, so there is no need to rehash the table twice.