A Comparison of Several Collection Implementations

1. Differences between ArrayList and LinkedList

ArrayList and LinkedList are both common implementations under the Collection interface (more precisely, of its List sub-interface). They use different storage strategies and therefore suit different scenarios.

ArrayList stores its elements in an internal array. The main thing to watch is what happens when the element count exceeds the current capacity: the default capacity is 10, the maximum is on the order of Integer.MAX_VALUE (requests beyond that limit result in an OutOfMemoryError), and each expansion grows the backing array to 1.5 times its previous size, copying the old contents across.

LinkedList stores its elements in a doubly linked list built from an internal Node class. Because it is a doubly linked list, LinkedList can also be used as a stack or a queue, but it is comparatively inefficient in that role; Java provides ArrayDeque as the more efficient alternative.
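
For example, a minimal sketch (class and variable names are purely illustrative) of using ArrayDeque instead of LinkedList as a stack and as a queue:

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Queue;

public class ArrayDequeDemo {
    public static void main(String[] args) {
        // As a stack (LIFO): push and pop operate on the head of the deque.
        Deque<Integer> stack = new ArrayDeque<>();
        stack.push(1);
        stack.push(2);
        System.out.println(stack.pop()); // prints 2

        // As a queue (FIFO): offer at the tail, poll from the head.
        Queue<Integer> queue = new ArrayDeque<>();
        queue.offer(1);
        queue.offer(2);
        System.out.println(queue.poll()); // prints 1
    }
}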

For appending at the tail the two are not far apart, but LinkedList also has to maintain the links of its doubly linked list, so it ends up slightly behind ArrayList.

Most of ArrayList's overhead goes into capacity expansion and the data copying it entails; if we set the capacity once, up front, there is still room for improvement.
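
A minimal sketch of that idea (the element count below is an arbitrary illustrative figure): passing the expected size to the constructor allocates the backing array once, so no resize-and-copy occurs while adding:

import java.util.ArrayList;
import java.util.List;

public class PreSizedArrayListDemo {
    public static void main(String[] args) {
        int expected = 1_000_000; // illustrative element count

        // Default constructor: starts at capacity 10 and grows by 1.5x,
        // copying the whole backing array on every expansion.
        List<Integer> growing = new ArrayList<>();

        // Pre-sized: the backing array is allocated once up front,
        // so no resize-and-copy happens during the adds below.
        List<Integer> preSized = new ArrayList<>(expected);

        for (int i = 0; i < expected; i++) {
            growing.add(i);
            preSized.add(i);
        }
    }
}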

LinkedList's strength is insertion at the head: it only needs to update the head node's pointers, whereas the array-based list has to shift every subsequent element, so for head insertion ArrayList is far slower than LinkedList.

For get and set by index, LinkedList has to walk the list node by node (starting from whichever end is closer to the target index), while ArrayList reads the element directly from its array by subscript, so ArrayList is considerably faster for these operations.
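
A rough sketch that makes the gap visible (the list size is arbitrary, timings are machine-dependent, and this is not a rigorous benchmark):

import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

public class RandomAccessDemo {
    // Sums the list via get(i): O(1) per call on ArrayList, O(n) on LinkedList.
    private static void sumByIndex(List<Integer> list) {
        long start = System.nanoTime();
        long sum = 0;
        for (int i = 0; i < list.size(); i++) {
            sum += list.get(i);
        }
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println(list.getClass().getSimpleName() + ": sum=" + sum + " in " + elapsedMs + " ms");
    }

    public static void main(String[] args) {
        List<Integer> arrayList = new ArrayList<>();
        List<Integer> linkedList = new LinkedList<>();
        for (int i = 0; i < 20_000; i++) {
            arrayList.add(i);
            linkedList.add(i);
        }
        sumByIndex(arrayList);   // fast: direct index into the backing array
        sumByIndex(linkedList);  // slow: each get(i) walks from the nearer end
    }
}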

 

Summary:

If we are simply storing data and iterating over it, ArrayList is generally the right choice.

 

If the workload involves frequent insertion and removal of elements, particularly at the head or through an iterator, LinkedList is the better choice.

2. Differences between HashMap and Hashtable

HashMap is not thread-safe. HashMap is a class, not an interface: it implements the Map interface and maps keys to values, where both keys and values are objects; duplicate keys are not allowed, but duplicate values are. HashMap permits a null key and null values, whereas Hashtable permits neither.

Hashtable is the thread-safe legacy counterpart: it is a Map implementation (not a Collection) whose public methods are all synchronized.

 

HashMap can be regarded as a lightweight, non-thread-safe version of Hashtable. Both implement the Map interface. The main differences: HashMap allows a null key and null values, which Hashtable does not; because it does no locking, HashMap is generally faster than Hashtable. HashMap also dropped Hashtable's contains method, replacing it with containsValue and containsKey, because a bare contains is easy to misread. Hashtable extends the legacy Dictionary class, whereas HashMap is an implementation of the Map interface introduced in Java 1.2. The biggest practical difference is that Hashtable's methods are synchronized and HashMap's are not: multiple threads can call a Hashtable without extra synchronization, while a HashMap must be synchronized externally. The hash/rehash algorithms of the two are roughly the same, so their raw performance does not differ much.
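
A minimal sketch of the null-key/null-value difference (the class name is purely illustrative):

import java.util.HashMap;
import java.util.Hashtable;
import java.util.Map;

public class NullKeyDemo {
    public static void main(String[] args) {
        Map<String, String> hashMap = new HashMap<>();
        hashMap.put(null, "value"); // allowed: at most one null key
        hashMap.put("key", null);   // allowed: null values

        Map<String, String> hashtable = new Hashtable<>();
        try {
            hashtable.put(null, "value");
        } catch (NullPointerException e) {
            System.out.println("Hashtable rejects null keys");
        }
        try {
            hashtable.put("key", null);
        } catch (NullPointerException e) {
            System.out.println("Hashtable rejects null values");
        }
    }
}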

 

LinkedHashMap (often seen in ListView caches) is a subclass of HashMap that preserves insertion order: when you iterate a LinkedHashMap, the entries come back in the order they were inserted. It can also be constructed with a flag so that iteration follows access order (least recently accessed first), which is what makes it useful as an LRU cache. Iterating it is usually a bit slower than iterating a HashMap, with one exception: when a HashMap has a very large capacity but few actual entries, it can be the slower one to traverse, because LinkedHashMap's iteration cost depends only on the number of entries while HashMap's depends on its capacity.
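
A minimal sketch of the access-order constructor used as a tiny LRU cache (the capacity of 3 is arbitrary; removeEldestEntry is LinkedHashMap's standard eviction hook):

import java.util.LinkedHashMap;
import java.util.Map;

public class LruCacheDemo {
    public static void main(String[] args) {
        final int maxEntries = 3; // arbitrary cache size for illustration

        // accessOrder = true: iteration order is least-recently-accessed first.
        Map<String, String> cache =
                new LinkedHashMap<String, String>(16, 0.75f, true) {
                    @Override
                    protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
                        return size() > maxEntries; // evict the oldest entry once full
                    }
                };

        cache.put("a", "1");
        cache.put("b", "2");
        cache.put("c", "3");
        cache.get("a");          // touch "a" so it becomes most recently used
        cache.put("d", "4");     // evicts "b", the least recently used entry
        System.out.println(cache.keySet()); // [c, a, d]
    }
}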

 

 

 

 

 

3. Differences between HashMap and ConcurrentHashMap

 

 

HashMap is not thread-safe, so multi-threaded use needs particular care.
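
A small sketch of the hazard (the key range and thread setup are only illustrative; the HashMap result can vary from run to run, while ConcurrentHashMap stays consistent):

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class UnsafeHashMapDemo {
    public static void main(String[] args) throws InterruptedException {
        Map<Integer, Integer> unsafe = new HashMap<>();
        Map<Integer, Integer> safe = new ConcurrentHashMap<>();

        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                unsafe.put(i, i); // unsynchronized concurrent writes: a data race
                safe.put(i, i);   // ConcurrentHashMap handles concurrent writes correctly
            }
        };

        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();

        // Both maps should end up with 10000 entries; on some runs the plain
        // HashMap reports fewer (entries lost during a racy resize), while the
        // ConcurrentHashMap count is always 10000.
        System.out.println("HashMap size: " + unsafe.size());
        System.out.println("ConcurrentHashMap size: " + safe.size());
    }
}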

 

ConcurrentHashMap's API, as seen in an SDK stub (every method body is just throw new RuntimeException("Stub!"), so the listing below shows the public methods rather than the real implementation):

public class ConcurrentHashMap<K, V> extends AbstractMap<K, V> implements ConcurrentMap<K, V>, Serializable {
    public ConcurrentHashMap() {
        throw new RuntimeException("Stub!");
    }

    public ConcurrentHashMap(int initialCapacity) {
        throw new RuntimeException("Stub!");
    }

    public ConcurrentHashMap(Map<? extends K, ? extends V> m) {
        throw new RuntimeException("Stub!");
    }

    public ConcurrentHashMap(int initialCapacity, float loadFactor) {
        throw new RuntimeException("Stub!");
    }

    public ConcurrentHashMap(int initialCapacity, float loadFactor, int concurrencyLevel) {
        throw new RuntimeException("Stub!");
    }

    public int size() {
        throw new RuntimeException("Stub!");
    }

    public boolean isEmpty() {
        throw new RuntimeException("Stub!");
    }

    public V get(Object key) {
        throw new RuntimeException("Stub!");
    }

    public boolean containsKey(Object key) {
        throw new RuntimeException("Stub!");
    }

    public boolean containsValue(Object value) {
        throw new RuntimeException("Stub!");
    }

    public V put(K key, V value) {
        throw new RuntimeException("Stub!");
    }

    public void putAll(Map<? extends K, ? extends V> m) {
        throw new RuntimeException("Stub!");
    }

    public V remove(Object key) {
        throw new RuntimeException("Stub!");
    }

    public void clear() {
        throw new RuntimeException("Stub!");
    }

    public Set<K> keySet() {
        throw new RuntimeException("Stub!");
    }

    public Collection<V> values() {
        throw new RuntimeException("Stub!");
    }

    public Set<Entry<K, V>> entrySet() {
        throw new RuntimeException("Stub!");
    }

    public int hashCode() {
        throw new RuntimeException("Stub!");
    }

    public String toString() {
        throw new RuntimeException("Stub!");
    }

    public boolean equals(Object o) {
        throw new RuntimeException("Stub!");
    }

    public V putIfAbsent(K key, V value) {
        throw new RuntimeException("Stub!");
    }

    public boolean remove(Object key, Object value) {
        throw new RuntimeException("Stub!");
    }

    public boolean replace(K key, V oldValue, V newValue) {
        throw new RuntimeException("Stub!");
    }

    public V replace(K key, V value) {
        throw new RuntimeException("Stub!");
    }

    public V getOrDefault(Object key, V defaultValue) {
        throw new RuntimeException("Stub!");
    }

    public void forEach(BiConsumer<? super K, ? super V> action) {
        throw new RuntimeException("Stub!");
    }

    public void replaceAll(BiFunction<? super K, ? super V, ? extends V> function) {
        throw new RuntimeException("Stub!");
    }

    public V computeIfAbsent(K key, Function<? super K, ? extends V> mappingFunction) {
        throw new RuntimeException("Stub!");
    }

    public V computeIfPresent(K key, BiFunction<? super K, ? super V, ? extends V> remappingFunction) {
        throw new RuntimeException("Stub!");
    }

    public V compute(K key, BiFunction<? super K, ? super V, ? extends V> remappingFunction) {
        throw new RuntimeException("Stub!");
    }

    public V merge(K key, V value, BiFunction<? super V, ? super V, ? extends V> remappingFunction) {
        throw new RuntimeException("Stub!");
    }

    public boolean contains(Object value) {
        throw new RuntimeException("Stub!");
    }

    public Enumeration<K> keys() {
        throw new RuntimeException("Stub!");
    }

    public Enumeration<V> elements() {
        throw new RuntimeException("Stub!");
    }

    public long mappingCount() {
        throw new RuntimeException("Stub!");
    }

    public static <K> ConcurrentHashMap.KeySetView<K, Boolean> newKeySet() {
        throw new RuntimeException("Stub!");
    }

    public static <K> ConcurrentHashMap.KeySetView<K, Boolean> newKeySet(int initialCapacity) {
        throw new RuntimeException("Stub!");
    }

    public ConcurrentHashMap.KeySetView<K, V> keySet(V mappedValue) {
        throw new RuntimeException("Stub!");
    }

    public void forEach(long parallelismThreshold, BiConsumer<? super K, ? super V> action) {
        throw new RuntimeException("Stub!");
    }

    public <U> void forEach(long parallelismThreshold, BiFunction<? super K, ? super V, ? extends U> transformer, Consumer<? super U> action) {
        throw new RuntimeException("Stub!");
    }

    public <U> U search(long parallelismThreshold, BiFunction<? super K, ? super V, ? extends U> searchFunction) {
        throw new RuntimeException("Stub!");
    }

    public <U> U reduce(long parallelismThreshold, BiFunction<? super K, ? super V, ? extends U> transformer, BiFunction<? super U, ? super U, ? extends U> reducer) {
        throw new RuntimeException("Stub!");
    }

    public double reduceToDouble(long parallelismThreshold, ToDoubleBiFunction<? super K, ? super V> transformer, double basis, DoubleBinaryOperator reducer) {
        throw new RuntimeException("Stub!");
    }

    public long reduceToLong(long parallelismThreshold, ToLongBiFunction<? super K, ? super V> transformer, long basis, LongBinaryOperator reducer) {
        throw new RuntimeException("Stub!");
    }

    public int reduceToInt(long parallelismThreshold, ToIntBiFunction<? super K, ? super V> transformer, int basis, IntBinaryOperator reducer) {
        throw new RuntimeException("Stub!");
    }

    public void forEachKey(long parallelismThreshold, Consumer<? super K> action) {
        throw new RuntimeException("Stub!");
    }

    public <U> void forEachKey(long parallelismThreshold, Function<? super K, ? extends U> transformer, Consumer<? super U> action) {
        throw new RuntimeException("Stub!");
    }

    public <U> U searchKeys(long parallelismThreshold, Function<? super K, ? extends U> searchFunction) {
        throw new RuntimeException("Stub!");
    }

    public K reduceKeys(long parallelismThreshold, BiFunction<? super K, ? super K, ? extends K> reducer) {
        throw new RuntimeException("Stub!");
    }

    public <U> U reduceKeys(long parallelismThreshold, Function<? super K, ? extends U> transformer, BiFunction<? super U, ? super U, ? extends U> reducer) {
        throw new RuntimeException("Stub!");
    }

    public double reduceKeysToDouble(long parallelismThreshold, ToDoubleFunction<? super K> transformer, double basis, DoubleBinaryOperator reducer) {
        throw new RuntimeException("Stub!");
    }

    public long reduceKeysToLong(long parallelismThreshold, ToLongFunction<? super K> transformer, long basis, LongBinaryOperator reducer) {
        throw new RuntimeException("Stub!");
    }

    public int reduceKeysToInt(long parallelismThreshold, ToIntFunction<? super K> transformer, int basis, IntBinaryOperator reducer) {
        throw new RuntimeException("Stub!");
    }

    public void forEachValue(long parallelismThreshold, Consumer<? super V> action) {
        throw new RuntimeException("Stub!");
    }

    public <U> void forEachValue(long parallelismThreshold, Function<? super V, ? extends U> transformer, Consumer<? super U> action) {
        throw new RuntimeException("Stub!");
    }

    public <U> U searchValues(long parallelismThreshold, Function<? super V, ? extends U> searchFunction) {
        throw new RuntimeException("Stub!");
    }

    public V reduceValues(long parallelismThreshold, BiFunction<? super V, ? super V, ? extends V> reducer) {
        throw new RuntimeException("Stub!");
    }

    public <U> U reduceValues(long parallelismThreshold, Function<? super V, ? extends U> transformer, BiFunction<? super U, ? super U, ? extends U> reducer) {
        throw new RuntimeException("Stub!");
    }

    public double reduceValuesToDouble(long parallelismThreshold, ToDoubleFunction<? super V> transformer, double basis, DoubleBinaryOperator reducer) {
        throw new RuntimeException("Stub!");
    }

    public long reduceValuesToLong(long parallelismThreshold, ToLongFunction<? super V> transformer, long basis, LongBinaryOperator reducer) {
        throw new RuntimeException("Stub!");
    }

    public int reduceValuesToInt(long parallelismThreshold, ToIntFunction<? super V> transformer, int basis, IntBinaryOperator reducer) {
        throw new RuntimeException("Stub!");
    }

    public void forEachEntry(long parallelismThreshold, Consumer<? super Entry<K, V>> action) {
        throw new RuntimeException("Stub!");
    }

    public <U> void forEachEntry(long parallelismThreshold, Function<Entry<K, V>, ? extends U> transformer, Consumer<? super U> action) {
        throw new RuntimeException("Stub!");
    }

    public <U> U searchEntries(long parallelismThreshold, Function<Entry<K, V>, ? extends U> searchFunction) {
        throw new RuntimeException("Stub!");
    }

    public Entry<K, V> reduceEntries(long parallelismThreshold, BiFunction<Entry<K, V>, Entry<K, V>, ? extends Entry<K, V>> reducer) {
        throw new RuntimeException("Stub!");
    }

    public <U> U reduceEntries(long parallelismThreshold, Function<Entry<K, V>, ? extends U> transformer, BiFunction<? super U, ? super U, ? extends U> reducer) {
        throw new RuntimeException("Stub!");
    }

    public double reduceEntriesToDouble(long parallelismThreshold, ToDoubleFunction<Entry<K, V>> transformer, double basis, DoubleBinaryOperator reducer) {
        throw new RuntimeException("Stub!");
    }

    public long reduceEntriesToLong(long parallelismThreshold, ToLongFunction<Entry<K, V>> transformer, long basis, LongBinaryOperator reducer) {
        throw new RuntimeException("Stub!");
    }

    public int reduceEntriesToInt(long parallelismThreshold, ToIntFunction<Entry<K, V>> transformer, int basis, IntBinaryOperator reducer) {
        throw new RuntimeException("Stub!");
    }

    public static class KeySetView<K, V> extends ConcurrentHashMap.CollectionView<K, V, K> implements Set<K>, Serializable {
        KeySetView() {
            throw new RuntimeException("Stub!");
        }

        public V getMappedValue() {
            throw new RuntimeException("Stub!");
        }

        public boolean contains(Object o) {
            throw new RuntimeException("Stub!");
        }

        public boolean remove(Object o) {
            throw new RuntimeException("Stub!");
        }

        public Iterator<K> iterator() {
            throw new RuntimeException("Stub!");
        }

        public boolean add(K e) {
            throw new RuntimeException("Stub!");
        }

        public boolean addAll(Collection<? extends K> c) {
            throw new RuntimeException("Stub!");
        }

        public int hashCode() {
            throw new RuntimeException("Stub!");
        }

        public boolean equals(Object o) {
            throw new RuntimeException("Stub!");
        }

        public Spliterator<K> spliterator() {
            throw new RuntimeException("Stub!");
        }

        public void forEach(Consumer<? super K> action) {
            throw new RuntimeException("Stub!");
        }
    }

    abstract static class CollectionView<K, V, E> implements Collection<E>, Serializable {
        CollectionView() {
            throw new RuntimeException("Stub!");
        }

        public ConcurrentHashMap<K, V> getMap() {
            throw new RuntimeException("Stub!");
        }

        public final void clear() {
            throw new RuntimeException("Stub!");
        }

        public final int size() {
            throw new RuntimeException("Stub!");
        }

        public final boolean isEmpty() {
            throw new RuntimeException("Stub!");
        }

        public abstract Iterator<E> iterator();

        public abstract boolean contains(Object var1);

        public abstract boolean remove(Object var1);

        public final Object[] toArray() {
            throw new RuntimeException("Stub!");
        }

        public final <T> T[] toArray(T[] a) {
            throw new RuntimeException("Stub!");
        }

        public final String toString() {
            throw new RuntimeException("Stub!");
        }

        public final boolean containsAll(Collection<?> c) {
            throw new RuntimeException("Stub!");
        }

        public final boolean removeAll(Collection<?> c) {
            throw new RuntimeException("Stub!");
        }

        public final boolean retainAll(Collection<?> c) {
            throw new RuntimeException("Stub!");
        }
    }
}

 

How exactly does ConcurrentHashMap achieve thread safety, then? Certainly not by adding synchronized to every method, which would just turn it back into Hashtable.

 

Looking at the ConcurrentHashMap design (the description below applies to the segment-based implementation used before Java 8), it introduces the idea of a "segment lock": in effect, one big Map is split into N small hash tables, and key.hashCode() determines which of those small tables a given key is stored in.

 

In ConcurrentHashMap this means the map is divided into N Segments; on every put and get, key.hashCode() is first used to work out which Segment the key belongs to, and only that Segment needs to be locked.
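
A conceptual sketch of that segment idea, assuming nothing about the real JDK internals (the class, field, and method names below are invented for illustration; the actual implementation is far more sophisticated):

import java.util.HashMap;
import java.util.Map;

// A toy "lock striping" map: one big map split into N small hash tables,
// each guarded by its own lock, so writers to different segments never
// contend. This only illustrates the idea, it is not the JDK code.
public class SegmentedMapSketch<K, V> {
    private static final int SEGMENTS = 16; // mirrors the default concurrency level

    private final Map<K, V>[] tables;
    private final Object[] locks = new Object[SEGMENTS];

    @SuppressWarnings("unchecked")
    public SegmentedMapSketch() {
        tables = (Map<K, V>[]) new Map[SEGMENTS];
        for (int i = 0; i < SEGMENTS; i++) {
            tables[i] = new HashMap<>();
            locks[i] = new Object();
        }
    }

    // Pick a segment from the key's hash, in the spirit of segmentFor(key.hashCode()).
    private int segmentFor(Object key) {
        return (key.hashCode() & 0x7fffffff) % SEGMENTS;
    }

    public V put(K key, V value) {
        int s = segmentFor(key);
        synchronized (locks[s]) { // only this one segment is locked
            return tables[s].put(key, value);
        }
    }

    public V get(K key) {
        int s = segmentFor(key);
        synchronized (locks[s]) {
            return tables[s].get(key);
        }
    }
}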

 

Test program:

import java.util.concurrent.ConcurrentHashMap;

public class ConcurrentHashMapTest {

    private static ConcurrentHashMap<Integer, Integer> map = new ConcurrentHashMap<Integer, Integer>();

    public static void main(String[] args) throws InterruptedException {
        // The threads must be started and joined; otherwise nothing runs
        // before the map is printed.
        Thread t1 = new Thread("Thread1") {
            @Override
            public void run() {
                map.put(3, 33);
            }
        };

        Thread t2 = new Thread("Thread2") {
            @Override
            public void run() {
                map.put(4, 44);
            }
        };

        Thread t3 = new Thread("Thread3") {
            @Override
            public void run() {
                map.put(7, 77);
            }
        };

        t1.start();
        t2.start();
        t3.start();
        t1.join();
        t2.join();
        t3.join();

        System.out.println(map);
    }
}

 

By default, ConcurrentHashMap initializes its segments array to a length of 16.

 

According to ConcurrentHashMap.segmentFor, keys 3 and 4 both map to segments[1], while key 7 maps to segments[12].

(1) When Thread1 and Thread2 enter Segment.put one after the other, Thread1 acquires the segment lock first and proceeds, while Thread2 blocks on that lock.

(2) When execution switches to Thread3, it also reaches Segment.put, but because key 7 is stored in a different Segment from keys 3 and 4, it does not block on the lock.

 

That, in outline, is how ConcurrentHashMap works: by splitting the whole map into N Segments (each of which behaves much like a small Hashtable), it provides the same thread safety while allowing up to N writers, 16 by default, to proceed in parallel, so throughput can improve by up to that factor in the best case.
