Collections

Which sorting algorithm does Java's Arrays.sort() actually use?

For primitive arrays (java.util.DualPivotQuicksort):
Array length >= 286: merge sort (chosen when the array is detected to be largely sorted; otherwise dual-pivot quicksort is still used)
47 <= array length < 286: dual-pivot quicksort
Array length < 47: insertion sort
(Object arrays are sorted with TimSort instead.)

Hashtable

/**
 * This class implements a hash table, which maps keys to values. Any
 * non-<code>null</code> object can be used as a key or as a value. <p>
 *	(This class implements a hash table; any non-null object can be used as a key or a value.)
 * To successfully store and retrieve objects from a hashtable, the
 * objects used as keys must implement the <code>hashCode</code>
 * method and the <code>equals</code> method. <p>
 * (Keys must implement the hashCode() and equals() methods.)
 * An instance of <code>Hashtable</code> has two parameters that affect its
 * performance: <i>initial capacity</i> and <i>load factor</i>.  The
 * <i>capacity</i> is the number of <i>buckets</i> in the hash table, and the
 * <i>initial capacity</i> is simply the capacity at the time the hash table
	(capacity is the number of buckets)
 * is created.  Note that the hash table is <i>open</i>: in the case of a "hash
 * collision", a single bucket stores multiple entries, which must be searched
 * sequentially.  The <i>load factor</i> is a measure of how full the hash
 * table is allowed to get before its capacity is automatically increased.
 * The initial capacity and load factor parameters are merely hints to
 * the implementation.  The exact details as to when and whether the rehash
 * method is invoked are implementation-dependent.<p>
 (When and whether rehash is invoked depends on the implementation.)
 *
 * Generally, the default load factor (.75) offers a good tradeoff between
 * time and space costs.  Higher values decrease the space overhead but
 * increase the time cost to look up an entry (which is reflected in most
 * <tt>Hashtable</tt> operations, including <tt>get</tt> and <tt>put</tt>).<p>
	(The default load factor of 0.75 gives a good trade-off between time and space; a higher value decreases the space overhead but increases lookup time.)
 *
 * The initial capacity controls a tradeoff between wasted space and the
 * need for <code>rehash</code> operations, which are time-consuming.
 * No <code>rehash</code> operations will <i>ever</i> occur if the initial
 * capacity is greater than the maximum number of entries the
 * <tt>Hashtable</tt> will contain divided by its load factor.  However,
 * setting the initial capacity too high can waste space.<p>
	(Choose an initial capacity that avoids rehash operations as much as possible without wasting too much space.)
 *
 * If many entries are to be made into a <code>Hashtable</code>,
 * creating it with a sufficiently large capacity may allow the
 * entries to be inserted more efficiently than letting it perform
 * automatic rehashing as needed to grow the table. <p>
 (If many entries are to be inserted, creating the Hashtable with a sufficiently large capacity is more efficient than relying on automatic rehashing.)
 *
 * This example creates a hashtable of numbers. It uses the names of
 * the numbers as keys:
 * <pre>   {@code
 *   Hashtable<String, Integer> numbers
 *     = new Hashtable<String, Integer>();
 *   numbers.put("one", 1);
 *   numbers.put("two", 2);
 *   numbers.put("three", 3);}</pre>
 *
 * <p>To retrieve a number, use the following code:
 * <pre>   {@code
 *   Integer n = numbers.get("two");
 *   if (n != null) {
 *     System.out.println("two = " + n);
 *   }}</pre>
 *
 * <p>The iterators returned by the <tt>iterator</tt> method of the collections
 * returned by all of this class's "collection view methods" are
 * <em>fail-fast</em>: if the Hashtable is structurally modified at any time
 * after the iterator is created, in any way except through the iterator's own
 * <tt>remove</tt> method, the iterator will throw a {@link
 * ConcurrentModificationException}.  Thus, in the face of concurrent
 * modification, the iterator fails quickly and cleanly, rather than risking
 * arbitrary, non-deterministic behavior at an undetermined time in the future.
 * The Enumerations returned by Hashtable's keys and elements methods are
 * <em>not</em> fail-fast.
 *	(The iterators returned by the collection views are fail-fast; the Enumerations returned by keys() and elements() are not.)
 * <p>Note that the fail-fast behavior of an iterator cannot be guaranteed
 * as it is, generally speaking, impossible to make any hard guarantees in the
 * presence of unsynchronized concurrent modification.  Fail-fast iterators
 * throw <tt>ConcurrentModificationException</tt> on a best-effort basis.
 * Therefore, it would be wrong to write a program that depended on this
 * exception for its correctness: <i>the fail-fast behavior of iterators
 * should be used only to detect bugs.</i>
 *
 * <p>As of the Java 2 platform v1.2, this class was retrofitted to
 * implement the {@link Map} interface, making it a member of the
 * <a href="{@docRoot}/../technotes/guides/collections/index.html">
 * Java Collections Framework</a>.  Unlike the new collection
 * implementations, {@code Hashtable} is synchronized.  If a
 * thread-safe implementation is not needed, it is recommended to use
 * {@link HashMap} in place of {@code Hashtable}.  If a thread-safe
 * highly-concurrent implementation is desired, then it is recommended
 * to use {@link java.util.concurrent.ConcurrentHashMap} in place of
 * {@code Hashtable}.
 *
 * @author  Arthur van Hoff
 * @author  Josh Bloch
 * @author  Neal Gafter
 * @see     Object#equals(java.lang.Object)
 * @see     Object#hashCode()
 * @see     Hashtable#rehash()
 * @see     Collection
 * @see     Map
 * @see     HashMap
 * @see     TreeMap
 * @since JDK1.0
 */

If the initial capacity is specified as 0, it is bumped to 1:

if (initialCapacity == 0)
    initialCapacity = 1;

If no initial capacity is specified, it defaults to 11 (with a load factor of 0.75):

public Hashtable() {
        this(11, 0.75f);
    }

Maximum capacity (the largest array size the table will grow to):

private static final int MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8;
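
A small sketch of the constructor behavior described above (my own illustrative example; the capacity and load-factor values come from the JDK source quoted here):

import java.util.Hashtable;

public class HashtableCapacityDemo {
    public static void main(String[] args) {
        // An initial capacity of 0 is silently bumped to 1 by the constructor.
        Hashtable<String, Integer> tiny = new Hashtable<>(0);
        tiny.put("one", 1);
        tiny.put("two", 2);                   // the table rehashes as needed while it grows
        System.out.println(tiny.size());      // 2

        // No-arg constructor: 11 buckets, load factor 0.75.
        Hashtable<String, Integer> defaults = new Hashtable<>();
        defaults.put("three", 3);
        System.out.println(defaults.size());  // 1
    }
}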

https://www.cnblogs.com/jameszheng/p/10229112.html

Differences between HashMap and Hashtable

HashMap and Hashtable both implement the Map interface, but before deciding which one to use you should understand how they differ. The main differences are thread safety, synchronization, and speed.

Hashtable extends Dictionary, while HashMap extends AbstractMap.
HashMap is almost equivalent to Hashtable, except that HashMap is not synchronized and accepts null: a HashMap may contain a null key and null values, while a Hashtable may not (see the sketch after this list).
HashMap is not synchronized, whereas Hashtable is, which means Hashtable is thread-safe and can be shared by multiple threads; a HashMap cannot be shared across threads without correct external synchronization. Java 5 introduced ConcurrentHashMap, a replacement for Hashtable that scales much better.
Another difference is that the Iterator of a HashMap is fail-fast, while the Enumeration of a Hashtable is not. So if another thread structurally modifies the HashMap (adds or removes elements) during iteration, a ConcurrentModificationException is thrown, whereas removing elements through the iterator's own remove() method does not throw. This behavior is not guaranteed, though, and depends on the JVM. The same point is also a difference between Enumeration and Iterator.
Because Hashtable is synchronized, it is slower than HashMap in a single-threaded environment. If you do not need synchronization and only use a single thread, HashMap performs better than Hashtable.
HashMap does not guarantee that the order of its elements stays constant over time.
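
A minimal sketch of the null-handling difference above (plain JDK, my own example):

import java.util.HashMap;
import java.util.Hashtable;
import java.util.Map;

public class NullHandlingDemo {
    public static void main(String[] args) {
        Map<String, String> hashMap = new HashMap<>();
        hashMap.put(null, "value");      // allowed: HashMap permits one null key
        hashMap.put("key", null);        // allowed: HashMap permits null values
        System.out.println(hashMap);     // e.g. {null=value, key=null} (order not guaranteed)

        Map<String, String> hashtable = new Hashtable<>();
        try {
            hashtable.put(null, "value");   // Hashtable throws NullPointerException
        } catch (NullPointerException e) {
            System.out.println("Hashtable rejects null keys (and null values)");
        }
    }
}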

ArrayList

/**
 * Resizable-array implementation of the <tt>List</tt> interface.  Implements
 * all optional list operations, and permits all elements, including
 * <tt>null</tt>. (ArrayList permits any element, including null.)  In addition to implementing the <tt>List</tt> interface,
 * this class provides methods to manipulate the size of the array that is
 * used internally to store the list.  (This class is roughly equivalent to
 * <tt>Vector</tt>, except that it is unsynchronized.)
 (Besides implementing the List interface, it also provides methods to manage the size of the backing array.)
 * (Apart from being unsynchronized, it is roughly equivalent to Vector.)
 * <p>The <tt>size</tt>, <tt>isEmpty</tt>, <tt>get</tt>, <tt>set</tt>,
 * <tt>iterator</tt>, and <tt>listIterator</tt> operations run in constant (the operations above all run in constant time)
 * time.  The <tt>add</tt> operation runs in <i>amortized constant time</i>, (add runs in amortized constant time: adding n elements takes O(n) time,
 but a single add may trigger an array resize, so the cost per call is not a stable constant.)
 * that is, adding n elements requires O(n) time.  All of the other operations
 * run in linear time (roughly speaking).  The constant factor is low compared
 * to that for the <tt>LinkedList</tt> implementation.
 *
 * <p>Each <tt>ArrayList</tt> instance has a <i>capacity</i>.  The capacity is
 * the size of the array used to store the elements in the list.  It is always
 * at least as large as the list size.  As elements are added to an ArrayList, (the capacity grows automatically as elements are added)
 * its capacity grows automatically.  The details of the growth policy are not
 * specified beyond the fact that adding an element has constant amortized
 * time cost. (Beyond the fact that adding an element has constant amortized cost, the details of the growth policy are not specified.)
 *
 * <p>An application can increase the capacity of an <tt>ArrayList</tt> instance
 * before adding a large number of elements using the <tt>ensureCapacity</tt>
 * operation.  This may reduce the amount of incremental reallocation.
 *	(ensureCapacity can be used to pre-size the list when the number of elements it will hold is known in advance,
	which reduces the cost of incremental reallocation.)
 * <p><strong>Note that this implementation is not synchronized.</strong>
 * If multiple threads access an <tt>ArrayList</tt> instance concurrently,
 * and at least one of the threads modifies the list structurally, it
 * <i>must</i> be synchronized externally.  (A structural modification is
 * any operation that adds or deletes one or more elements, or explicitly
 * resizes the backing array; merely setting the value of an element is not
 * a structural modification.)  This is typically accomplished by
 * synchronizing on some object that naturally encapsulates the list.
 *	(The list is not thread-safe: structural operations such as adding, removing, or resizing must be done inside synchronized blocks; merely setting an element's value need not be.)
 * If no such object exists, the list should be "wrapped" using the
 * {@link Collections#synchronizedList Collections.synchronizedList}
 * method.  This is best done at creation time, to prevent accidental
 * unsynchronized access to the list:<pre>
 *   List list = Collections.synchronizedList(new ArrayList(...));</pre>
 * (Collections.synchronizedList can be used to wrap the list and make it thread-safe.)
 * <p><a name="fail-fast">
 * The iterators returned by this class's {@link #iterator() iterator} and
 * {@link #listIterator(int) listIterator} methods are <em>fail-fast</em>:</a>
 * if the list is structurally modified at any time after the iterator is
 * created, in any way except through the iterator's own
 * {@link ListIterator#remove() remove} or      (changes made through the iterator's own remove and add methods do not trigger the fail-fast mechanism)
 * {@link ListIterator#add(Object) add} methods, the iterator will throw a
 * {@link ConcurrentModificationException}.  Thus, in the face of
 * concurrent modification, the iterator fails quickly and cleanly, rather
 * than risking arbitrary, non-deterministic behavior at an undetermined
 * time in the future.
 *	(The above describes the fail-fast mechanism.)
 * <p>Note that the fail-fast behavior of an iterator cannot be guaranteed
 * as it is, generally speaking, impossible to make any hard guarantees in the
 * presence of unsynchronized concurrent modification.  Fail-fast iterators
 * throw {@code ConcurrentModificationException} on a best-effort basis.
 * Therefore, it would be wrong to write a program that depended on this
 * exception for its correctness:  <i>the fail-fast behavior of iterators
 * should be used only to detect bugs.</i>
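
As a quick illustration of the ensureCapacity note above (a sketch of my own, not taken from the JDK docs):

import java.util.ArrayList;
import java.util.List;

public class EnsureCapacityDemo {
    public static void main(String[] args) {
        int n = 1_000_000;

        // Pre-sizing avoids repeated grow()/array-copy calls during the bulk insert.
        ArrayList<Integer> presized = new ArrayList<>();
        presized.ensureCapacity(n);
        for (int i = 0; i < n; i++) {
            presized.add(i);
        }

        // Same result, but the backing array is reallocated many times along the way.
        List<Integer> grown = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            grown.add(i);
        }
        System.out.println(presized.size() + " " + grown.size());
    }
}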

Growth (capacity expansion)

Internally, ArrayList stores its values in an object array and grows that array dynamically when needed.

The growth rule can be summarized as: when the required number of elements exceeds the current capacity, the capacity is expanded to roughly 1.5 times the old capacity (e.g. the default capacity of 10 grows to 15, then 22, 33, ...).

private void grow(int minCapacity) {
    // overflow-conscious code
    int oldCapacity = elementData.length;   // old capacity
    int newCapacity = oldCapacity + (oldCapacity >> 1);  // new capacity: 1.5 times the old one
    if (newCapacity - minCapacity < 0)
        newCapacity = minCapacity;
    if (newCapacity - MAX_ARRAY_SIZE > 0)  // if it exceeds the maximum allowed array size, fall back to hugeCapacity()
        newCapacity = hugeCapacity(minCapacity);
    // minCapacity is usually close to size, so this is a win:
    elementData = Arrays.copyOf(elementData, newCapacity);  // finally, Arrays.copyOf allocates the larger array and copies the elements over
}

Removing during foreach vs. iterator traversal of an ArrayList

https://baijiahao.baidu.com/s?id=1679036972015630210&wfr=spider&for=pc
When traversing with a plain index loop, for (int i = 0; i < n; i++), removing elements does not trigger the fail-fast mechanism (although elements can be skipped if the index is not stepped back after a removal).
Removing inside a foreach loop, by contrast, throws ConcurrentModificationException, and LinkedList shows the same problem when traversed with foreach.
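
A small sketch of the three removal styles (my own example, not from the linked article):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.ConcurrentModificationException;
import java.util.Iterator;
import java.util.List;

public class RemoveWhileIteratingDemo {
    public static void main(String[] args) {
        // 1. Removing inside a foreach loop: the hidden iterator detects the
        //    structural modification and throws ConcurrentModificationException.
        List<Integer> foreachList = new ArrayList<>(Arrays.asList(1, 2, 3, 4));
        try {
            for (Integer value : foreachList) {
                if (value == 2) {
                    foreachList.remove(value);
                }
            }
        } catch (ConcurrentModificationException e) {
            System.out.println("foreach removal -> ConcurrentModificationException");
        }

        // 2. Removing through the iterator's own remove(): safe, no exception.
        List<Integer> iteratorList = new ArrayList<>(Arrays.asList(1, 2, 3, 4));
        for (Iterator<Integer> it = iteratorList.iterator(); it.hasNext(); ) {
            if (it.next() == 2) {
                it.remove();
            }
        }
        System.out.println(iteratorList);   // [1, 3, 4]

        // 3. Index-based removal: no fail-fast check is involved, but step the
        //    index back after a removal or elements will be skipped.
        List<Integer> indexList = new ArrayList<>(Arrays.asList(1, 2, 2, 3));
        for (int i = 0; i < indexList.size(); i++) {
            if (indexList.get(i) == 2) {
                indexList.remove(i);
                i--;
            }
        }
        System.out.println(indexList);      // [1, 3]
    }
}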

LinkedList

/**
 * Doubly-linked list implementation of the {@code List} and {@code Deque}
 * interfaces.  Implements all optional list operations, and permits all
 * elements (including {@code null}).
 *	(A doubly-linked list; it implements all List operations and permits all elements, including null.)
 * <p>All of the operations perform as could be expected for a doubly-linked
 * list.  Operations that index into the list will traverse the list from
 * the beginning or the end, whichever is closer to the specified index.
 *	(Index-based operations traverse the list from whichever end is closer to the specified index.)
 * <p><strong>Note that this implementation is not synchronized.</strong>
 * If multiple threads access a linked list concurrently, and at least
 * one of the threads modifies the list structurally, it <i>must</i> be
 * synchronized externally.  (A structural modification is any operation
 * that adds or deletes one or more elements; merely setting the value of
 * an element is not a structural modification.)  This is typically
 * accomplished by synchronizing on some object that naturally
 * encapsulates the list.
 *
 * If no such object exists, the list should be "wrapped" using the
 * {@link Collections#synchronizedList Collections.synchronizedList}
 * method.  This is best done at creation time, to prevent accidental
 * unsynchronized access to the list:<pre>
 *   List list = Collections.synchronizedList(new LinkedList(...));</pre>
 *		(Use the Collections utility class to wrap it as a synchronized list, preferably at creation time.)
 * <p>The iterators returned by this class's {@code iterator} and
 * {@code listIterator} methods are <i>fail-fast</i>: if the list is
 * structurally modified at any time after the iterator is created, in
 * any way except through the Iterator's own {@code remove} or
 * {@code add} methods, the iterator will throw a {@link
 * ConcurrentModificationException}.  Thus, in the face of concurrent
 * modification, the iterator fails quickly and cleanly, rather than
 * risking arbitrary, non-deterministic behavior at an undetermined
 * time in the future.
		(fail-fast mechanism)
 *
 * <p>Note that the fail-fast behavior of an iterator cannot be guaranteed
 * as it is, generally speaking, impossible to make any hard guarantees in the
 * presence of unsynchronized concurrent modification.  Fail-fast iterators
 * throw {@code ConcurrentModificationException} on a best-effort basis.
 * Therefore, it would be wrong to write a program that depended on this
 * exception for its correctness:   <i>the fail-fast behavior of iterators
 * should be used only to detect bugs.</i>
 *
 * <p>This class is a member of the
 * <a href="{@docRoot}/../technotes/guides/collections/index.html">
 * Java Collections Framework</a>.
 *
 * @author  Josh Bloch
 * @see     List
 * @see     ArrayList
 * @since 1.2
 * @param <E> the type of elements held in this collection
 */

Meituan interview question: What are the differences between ArrayList and LinkedList? How do you traverse them? Why is traversing a LinkedList with an index-based for loop inefficient? Can a LinkedList be accessed by index, and what about using an iterator?

https://blog.csdn.net/qq_36520235/article/details/82535044
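
A sketch of the two traversal styles (my own example; the timings are only indicative and depend on the machine):

import java.util.LinkedList;
import java.util.List;

public class LinkedListTraversalDemo {
    public static void main(String[] args) {
        List<Integer> list = new LinkedList<>();
        for (int i = 0; i < 50_000; i++) {
            list.add(i);
        }

        // Index-based loop: each get(i) walks from the nearer end of the list,
        // so the whole traversal is O(n^2).
        long start = System.nanoTime();
        long sum1 = 0;
        for (int i = 0; i < list.size(); i++) {
            sum1 += list.get(i);
        }
        System.out.println("get(i) loop: " + (System.nanoTime() - start) / 1_000_000 + " ms");

        // Iterator (what foreach uses): follows the next pointers, O(n) overall.
        start = System.nanoTime();
        long sum2 = 0;
        for (int value : list) {
            sum2 += value;
        }
        System.out.println("iterator:    " + (System.nanoTime() - start) / 1_000_000 + " ms");
        System.out.println(sum1 == sum2);
    }
}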

What happens if data keeps being added to an ArrayList?

private static final int DEFAULT_CAPACITY = 10;
/**
     * The maximum size of array to allocate.
     * Some VMs reserve some header words in an array.
     * Attempts to allocate larger arrays may result in
     * OutOfMemoryError: Requested array size exceeds VM limit
     */
    private static final int MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8;

ArrayList does not shrink its capacity automatically when elements are removed. If we need to shrink the capacity, ArrayList provides a method for that:

/**
     * Trims the capacity of this <tt>ArrayList</tt> instance to be the
     * list's current size.  An application can use this operation to minimize
     * the storage of an <tt>ArrayList</tt> instance.
     */
    public void trimToSize() {
        modCount++;
        if (size < elementData.length) {
            elementData = Arrays.copyOf(elementData, size);
        }
    }
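
A short usage sketch of trimToSize (my own example):

import java.util.ArrayList;

public class TrimToSizeDemo {
    public static void main(String[] args) {
        ArrayList<byte[]> buffers = new ArrayList<>();
        for (int i = 0; i < 10_000; i++) {
            buffers.add(new byte[1024]);
        }

        // Remove most of the elements; the backing array keeps its grown length.
        buffers.subList(100, buffers.size()).clear();

        // Shrink the backing array down to the current size (100 slots).
        buffers.trimToSize();
        System.out.println(buffers.size());   // 100
    }
}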

How to use a HashMap in a thread-safe way

Hashtable
ConcurrentHashMap
Collections.synchronizedMap
synchronized blocks around a plain HashMap (see the sketch below)
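
A minimal sketch of these options (plain JDK, no extra libraries assumed):

import java.util.Collections;
import java.util.HashMap;
import java.util.Hashtable;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ThreadSafeMapOptions {
    public static void main(String[] args) {
        // 1. Hashtable: every method is synchronized on the table itself.
        Map<String, Integer> table = new Hashtable<>();

        // 2. ConcurrentHashMap: fine-grained locking/CAS, much better under contention.
        Map<String, Integer> concurrent = new ConcurrentHashMap<>();

        // 3. Collections.synchronizedMap: wraps a HashMap, synchronizing every call on the wrapper.
        Map<String, Integer> wrapped = Collections.synchronizedMap(new HashMap<>());

        // 4. Guarding a plain HashMap with synchronized blocks on an agreed lock object.
        Map<String, Integer> plain = new HashMap<>();
        Object lock = new Object();
        synchronized (lock) {
            plain.put("key", 1);
        }

        table.put("a", 1);
        concurrent.put("b", 2);
        wrapped.put("c", 3);
        System.out.println(table.size() + concurrent.size() + wrapped.size() + plain.size());
    }
}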

Why does HashMap use tail insertion in JDK 1.8?

To avoid the reversed node order and the linked-list infinite loop: in JDK 1.7, HashMap used head insertion when moving entries during a resize, and a resize performed concurrently by two threads could link a bucket's nodes into a cycle (after which get() spins forever). JDK 1.8 inserts at the tail, so the original node order is preserved.
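
A self-contained sketch of why head insertion is the problem (my own toy Node class, not the HashMap source): head insertion reverses the bucket chain on every resize, which is the precondition for the JDK 1.7 concurrent-resize cycle, while tail insertion keeps the original order.

import java.util.Arrays;
import java.util.List;

public class InsertionOrderSketch {

    // Minimal singly-linked node, standing in for HashMap's bucket entries (illustrative only).
    static final class Node {
        final String key;
        Node next;
        Node(String key) { this.key = key; }
    }

    public static void main(String[] args) {
        List<String> keys = Arrays.asList("a", "b", "c");

        // Head insertion (JDK 1.7 transfer): each moved node becomes the new head,
        // so the chain ends up reversed -- and when two threads resize at once,
        // the re-linked nodes can form a cycle.
        Node head = null;
        for (String k : keys) {
            Node n = new Node(k);
            n.next = head;
            head = n;
        }
        print("head insertion: ", head);   // c -> b -> a

        // Tail insertion (JDK 1.8): nodes keep their original order, avoiding that failure mode.
        Node first = null, tail = null;
        for (String k : keys) {
            Node n = new Node(k);
            if (tail == null) { first = n; } else { tail.next = n; }
            tail = n;
        }
        print("tail insertion: ", first);  // a -> b -> c
    }

    static void print(String label, Node node) {
        StringBuilder sb = new StringBuilder(label);
        for (Node n = node; n != null; n = n.next) {
            sb.append(n.key).append(n.next != null ? " -> " : "");
        }
        System.out.println(sb);
    }
}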

Why is the table array in HashMap declared transient?

In Java, a field marked transient is skipped by the default serialization mechanism. Going back to the source, this raises a question: the bucket array table is the core data structure underlying HashMap, so if it is not serialized, how can anyone restore the map?

The short answer is that HashMap does not use the default serialization mechanism at all; it customizes its serialized form by implementing the readObject/writeObject methods. There is a good reason for this. What does a HashMap store? Key-value pairs, of course. So as long as the key-value pairs are serialized, the HashMap can be rebuilt from them. You might think that serializing table directly would get everything done in one step and make restoring trivial, and at first glance that seems reasonable. But serializing table has two problems:

1) table is usually far from full, so serializing the unused slots wastes space.
2) The same key-value pair may land in different bucket positions on different JVMs, so deserializing table on another JVM could produce a broken map.

The first problem is easy to understand; the second needs a short explanation. The first step of HashMap's get/put/remove methods is to locate the key's bucket from its hash. If the key's class does not override hashCode, the hash is ultimately computed by Object.hashCode, which is a native method; different JVMs may implement it differently and produce different hash values. In other words, the same key may hash differently on different platforms, and continuing to operate on the serialized table there would then go wrong.
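
A minimal sketch of the idea behind writeObject/readObject here (my own simplified illustration, not the actual HashMap source): only the size and the key-value pairs are written, and the table is rebuilt by re-inserting them on the receiving side.

import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;

// Illustrative wrapper showing the serialize-the-entries idea; HashMap itself does this internally.
public class EntrySerializedMap<K, V> implements Serializable {

    private transient Map<K, V> map = new HashMap<>();   // not serialized directly

    public void put(K key, V value) { map.put(key, value); }
    public V get(K key) { return map.get(key); }

    private void writeObject(ObjectOutputStream out) throws IOException {
        out.defaultWriteObject();
        out.writeInt(map.size());                 // write the number of entries ...
        for (Map.Entry<K, V> e : map.entrySet()) {
            out.writeObject(e.getKey());          // ... then each key and value
            out.writeObject(e.getValue());
        }
    }

    @SuppressWarnings("unchecked")
    private void readObject(ObjectInputStream in) throws IOException, ClassNotFoundException {
        in.defaultReadObject();
        map = new HashMap<>();
        int size = in.readInt();
        for (int i = 0; i < size; i++) {          // re-insert every pair: the bucket positions
            K key = (K) in.readObject();          // are recomputed with this JVM's hash codes
            V value = (V) in.readObject();
            map.put(key, value);
        }
    }
}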

Why use a red-black tree rather than an AVL tree?

Because a plain binary search tree (BST) can become skewed by its input (in the worst case it degenerates into a linked list), balanced binary search trees were introduced. A balanced tree keeps its height at O(log N) by rotating on insertion and deletion. Two representative balanced trees are the AVL tree and the red-black tree. The AVL tree's stricter balancing makes it more complex to implement and more expensive on insertion and deletion (more rotations), so in practice the red-black tree is used far more often.
https://blog.csdn.net/21aspnet/article/details/88939297

What happens when a HashMap reaches its maximum capacity and can no longer grow?

        Node<K,V>[] oldTab = table;
        int oldCap = (oldTab == null) ? 0 : oldTab.length;
        int oldThr = threshold;
        int newCap, newThr = 0;
        if (oldCap > 0) {
            if (oldCap >= MAXIMUM_CAPACITY) {
                threshold = Integer.MAX_VALUE;
                return oldTab;
            }
            else if ((newCap = oldCap << 1) < MAXIMUM_CAPACITY &&
                     oldCap >= DEFAULT_INITIAL_CAPACITY)
                newThr = oldThr << 1; // double threshold
        }
        else if (oldThr > 0) // initial capacity was placed in threshold
            newCap = oldThr;
        else {               // zero initial threshold signifies using defaults
            newCap = DEFAULT_INITIAL_CAPACITY;
            newThr = (int)(DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY);
        }
        if (newThr == 0) {
            float ft = (float)newCap * loadFactor;
            newThr = (newCap < MAXIMUM_CAPACITY && ft < (float)MAXIMUM_CAPACITY ?
                      (int)ft : Integer.MAX_VALUE);
        }
        threshold = newThr;

If the old table's capacity has already reached MAXIMUM_CAPACITY, resize() sets threshold to Integer.MAX_VALUE and returns the old table directly. Entries can still be added to the HashMap after that; every later resize call simply returns the old table without growing it.
  If the new capacity reaches the maximum, threshold is likewise set to Integer.MAX_VALUE.
  Once threshold has been set to Integer.MAX_VALUE, resize() is never triggered again. Why?

if (++size > threshold)
            resize();

size is an int, so the condition ++size > threshold (with threshold now Integer.MAX_VALUE) can never hold: before overflowing, size is at most Integer.MAX_VALUE, and after overflowing it wraps around to a negative value, which is still not greater than Integer.MAX_VALUE, so resize() is not entered again.
  In practice size never gets anywhere near Integer.MAX_VALUE anyway: in the original article's test, continuously putting Integer entries into a HashMap threw an OutOfMemoryError once size reached 50331649.
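
A sketch of that experiment (my own version; the size at which the OutOfMemoryError occurs depends entirely on the heap settings):

import java.util.HashMap;
import java.util.Map;

public class HashMapOomSketch {
    public static void main(String[] args) {
        Map<Integer, Integer> map = new HashMap<>();
        int i = 0;
        try {
            while (true) {
                map.put(i, i);
                i++;
            }
        } catch (OutOfMemoryError e) {
            // In the cited test this happened around size == 50331649;
            // the exact number depends on -Xmx and the JVM.
            System.out.println("OOM at size = " + map.size());
        }
    }
}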
