ConcurrentHashMap Analysis
ConcurrentHashMap Definition
/**
 * A hash table supporting full concurrency of retrievals and high expected
 * concurrency for updates. This class obeys the same functional specification as
 * Hashtable, and includes versions of methods corresponding to each method of
 * Hashtable. However, even though all operations are thread-safe, retrieval
 * operations do not entail locking, and there is no support for locking the
 * entire table in a way that prevents all access. This class is fully
 * interoperable with Hashtable in programs that rely on its thread safety but
 * not on its synchronization details.
 *
 * Retrieval operations (including get) generally do not block, so they may
 * overlap with update operations (including put and remove). Retrievals reflect
 * the results of the most recently completed update operations (more formally,
 * an update operation for a given key bears a happens-before relation with any
 * (non-null) retrieval for that key reporting the updated value). For aggregate
 * operations such as putAll and clear, concurrent retrievals may reflect
 * insertion or removal of only some entries. Similarly, Iterators, Spliterators
 * and Enumerations return elements reflecting the state of the hash table at
 * some point at or since the creation of the iterator/enumeration. They do not
 * throw ConcurrentModificationException. However, iterators are designed to be
 * used by only one thread at a time. Bear in mind that the results of aggregate
 * status methods (size, isEmpty, containsValue) are typically useful only when
 * a map is not undergoing concurrent updates in other threads. Otherwise the
 * results of these methods reflect transient states that may be adequate for
 * monitoring or estimation purposes, but not for program control.
 *
 * The table is dynamically expanded when there are too many collisions (i.e.,
 * keys that have distinct hash codes but fall into the same slot modulo the
 * table size), with the expected average effect of maintaining roughly two bins
 * per mapping (corresponding to a 0.75 load factor threshold for resizing).
 * There may be much variance around this average as mappings are added and
 * removed, but overall, this maintains a commonly accepted time/space tradeoff
 * for hash tables. However, resizing this or any other kind of hash table may
 * be a relatively slow operation. When possible, it is a good idea to provide
 * a size estimate as an optional initialCapacity constructor argument. An
 * additional optional loadFactor constructor argument provides a further means
 * of customizing initial table capacity by specifying the table density to be
 * used in calculating the amount of space to allocate for the given number of
 * elements. Also, for compatibility with previous versions of this class,
 * constructors may optionally specify an expected concurrencyLevel as an
 * additional hint for internal sizing.
 * Note that using many keys with exactly the same hashCode() is a sure way to
 * slow down performance of any hash table. To ameliorate impact, when keys are
 * Comparable, this class may use comparison order among keys to help break ties.
 *
 * A Set projection of a ConcurrentHashMap may be created (using newKeySet() or
 * newKeySet(int)), or viewed (using keySet(V)), when only keys are of interest
 * and the mapped values are (perhaps transiently) not used or all take the same
 * mapping value.
 *
 * A ConcurrentHashMap can be used as a scalable frequency map (a form of
 * histogram or multiset) by using LongAdder values and initializing via
 * computeIfAbsent. For example, to add a count to a
 * ConcurrentHashMap<String,LongAdder> freqs, you can use
 * freqs.computeIfAbsent(key, k -> new LongAdder()).increment();
 *
 * This class and its views and iterators implement all of the optional methods
 * of the Map and Iterator interfaces.
 *
 * Like Hashtable but unlike HashMap, this class does not allow null to be used
 * as a key or value.
 *
 * ConcurrentHashMaps support a set of sequential and parallel bulk operations
 * that, unlike most Stream methods, are designed to be safely, and often
 * sensibly, applied even with maps that are being concurrently updated by other
 * threads; for example, when computing a snapshot summary of the values in a
 * shared registry. There are three kinds of operation, each with four forms,
 * accepting functions with keys, values, entries, and (key, value) pairs as
 * arguments and/or return values. Because the elements of a ConcurrentHashMap
 * are not ordered in any particular way, and may be processed in different
 * orders in different parallel executions, the correctness of supplied
 * functions should not depend on any ordering, or on any other objects or
 * values that may transiently change while computation is in progress; and
 * except for forEach actions, should ideally be side-effect-free. Bulk
 * operations on Map.Entry objects do not support method setValue.
 *
 * 1. forEach: performs a given action on each element. A variant form applies a
 * given transformation on each element before performing the action.
 *
 * 2. search: returns the first available non-null result of applying a given
 * function on each element; skipping further search when a result is found.
 *
 * 3. reduce: accumulates each element. The supplied reduction function cannot
 * rely on ordering (more formally, it should be both associative and
 * commutative). There are variants:
 * 1. Plain reductions. (There is no form of this method for (key, value)
 * function arguments since there is no corresponding return type.)
 * 2. Mapped reductions that accumulate the results of a given function applied
 * to each element.
 * 3. Reductions to scalar doubles, longs, and ints, using a given basis value.
 *
 * These bulk operations accept a parallelismThreshold argument. Methods proceed
 * sequentially if the current map size is estimated to be less than the given
 * threshold. Using a value of Long.MAX_VALUE suppresses all parallelism. Using
 * a value of 1 results in maximal parallelism by partitioning into enough
 * subtasks to fully utilize the ForkJoinPool.commonPool() that is used for all
 * parallel computations. Normally, you would initially choose one of these
 * extreme values, and then measure performance of using in-between values that
 * trade off overhead versus throughput.
 *
 * The concurrency properties of bulk operations follow from those of
 * ConcurrentHashMap: any non-null result returned from get(key) and related
 * access methods bears a happens-before relation with the associated insertion
 * or update. The result of any bulk operation reflects the composition of these
 * per-element relations (but is not necessarily atomic with respect to the map
 * as a whole unless it is somehow known to be quiescent). Conversely, because
 * keys and values in the map are never null, null serves as a reliable atomic
 * indicator of the current lack of any result. To maintain this property, null
 * serves as an implicit basis for all non-scalar reduction operations (more
 * formally, it is the identity element of the reduction). Most common
 * reductions have these properties; for example, computing a sum with basis 0
 * or a minimum with basis MAX_VALUE.
 *
 * Search and transformation functions provided as arguments should similarly
 * return null to indicate the lack of any result (in which case it is not
 * used). In the case of mapped reductions, this also enables transformations to
 * serve as filters, returning null (or, in the case of primitive
 * specializations, the identity basis) if the element should not be combined.
 * You can create compound transformations and filterings by composing them
 * yourself under this "null means there is nothing there now" rule before using
 * them in search or reduce operations.
 *
 * Methods accepting and/or returning entry arguments maintain key-value
 * associations. They may be useful, for example, when finding the key for the
 * greatest value. Note that "plain" Entry arguments can be supplied using
 * new AbstractMap.SimpleEntry(k,v).
 *
 * Bulk operations may complete abruptly, throwing an exception encountered in
 * the application of a supplied function. Bear in mind when handling such
 * exceptions that other concurrently executing functions could also have thrown
 * exceptions, or would have done so if the first exception had not occurred.
 *
 * Speedups for parallel compared to sequential forms are common but not
 * guaranteed. Parallel operations involving brief functions on small maps may
 * execute more slowly than sequential forms if the underlying work to
 * parallelize the computation is more expensive than the computation itself.
 * Similarly, parallelization may not lead to much actual parallelism if all
 * processors are busy performing unrelated tasks.
 *
 * All arguments to all task methods must be non-null.
 *
 * This class is a member of the Java Collections Framework.
 *
 * @since 1.5
 * @author Doug Lea
 * @param <K> the type of keys maintained by this map
 * @param <V> the type of mapped values
 */
public class ConcurrentHashMap<K,V> extends AbstractMap<K,V>
implements ConcurrentMap<K,V>, Serializable {
private static final long serialVersionUID = 7249069246763182397L;
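The javadoc above recommends using the map as a scalable frequency map with `LongAdder` values installed via `computeIfAbsent`. A minimal runnable sketch of that pattern (class name `FreqMapDemo` is my own, for illustration):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

public class FreqMapDemo {
    // Count word occurrences: computeIfAbsent installs a LongAdder at most once
    // per key, and increment() is then a low-contention update.
    static ConcurrentHashMap<String, LongAdder> count(String... words) {
        ConcurrentHashMap<String, LongAdder> freqs = new ConcurrentHashMap<>();
        for (String w : words)
            freqs.computeIfAbsent(w, k -> new LongAdder()).increment();
        return freqs;
    }

    public static void main(String[] args) {
        ConcurrentHashMap<String, LongAdder> freqs = count("a", "b", "a");
        System.out.println(freqs.get("a").sum()); // 2
    }
}
```

Unlike `merge(key, 1L, Long::sum)`, this never replaces the mapped value, so concurrent counting does not contend on the map entry itself.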
ConcurrentHashMap implementation:
Overview
/*
 * Overview:
 *
 * The primary design goal of this hash table is to maintain concurrent
 * readability (typically method get(), but also iterators and related methods)
 * while minimizing update contention. Secondary goals are to keep space
 * consumption about the same or better than HashMap, and to support high
 * initial insertion rates on an empty table by many threads.
 *
 * This map usually acts as a binned (bucketed) hash table. Each key-value
 * mapping is held in a Node. Most nodes are instances of the basic Node class
 * with hash, key, value, and next fields. However, various subclasses exist:
 * TreeNode: arranged in balanced trees, not lists.
 * TreeBin: holds the root of a set of TreeNodes.
 * ForwardingNode: placed at the heads of bins during resizing.
 * ReservationNode: used as a placeholder while establishing values in
 * computeIfAbsent and related methods.
 * TreeBin, ForwardingNode, and ReservationNode do not hold normal user keys,
 * values, or hashes, and are readily distinguishable during search because
 * they have negative hash fields and null key and value fields.
 * (These special nodes are either uncommon or transient, so the impact of
 * carrying around some unused fields is insignificant.)
 *
 * The table is lazily initialized to a power-of-two size upon the first
 * insertion. Each bin in the table normally contains a list of Nodes (most
 * often, the list has only zero or one Node). Table accesses require
 * volatile/atomic reads, writes, and CASes. Because there is no other way to
 * arrange this without adding further indirections, we use intrinsics
 * (Unsafe) operations.
 *
 * We use the top (sign) bit of Node hash fields for control purposes -- it is
 * available anyway because of addressing constraints. Nodes with negative hash
 * fields are specially handled or ignored in map methods.
 *
 * Insertion (via put or its variants) of the first node in an empty bin is
 * performed by just CASing it to the bin. This is by far the most common case
 * for put operations under most key/hash distributions. Other update
 * operations (insert, delete, and replace) require locks. We do not want to
 * waste the space required to associate a distinct lock object with each bin,
 * so instead use the first node of a bin list itself as the lock. Locking
 * support for these locks relies on the builtin "synchronized" monitors.
 *
 * Using the first node of a list as a lock does not by itself suffice though:
 * when a node is locked, any update must first validate that it is still the
 * first node after locking it, and retry if not. Because new nodes are always
 * appended to lists, once a node is first in a bin, it remains first until
 * deleted or the bin becomes invalidated (upon resizing).
 *
 * The main disadvantage of per-bin locks is that other update operations on
 * other nodes in a bin list protected by the same lock can stall, for example
 * when user equals() or mapping functions take a long time. However,
 * statistically, under random hash codes, this is not a common problem.
 * Ideally, the frequency of nodes in bins follows a Poisson distribution
 * (http://en.wikipedia.org/wiki/Poisson_distribution) with a parameter of
 * about 0.5 on average, given the resizing threshold of 0.75, although with a
 * large variance because of resizing granularity. Ignoring variance, the
 * expected occurrences of list size k are
 * (exp(-0.5) * pow(0.5, k) / factorial(k)). The first values are:
 *
 * 0: 0.60653066
 * 1: 0.30326533
 * 2: 0.07581633
 * 3: 0.01263606
 * 4: 0.00157952
 * 5: 0.00015795
 * 6: 0.00001316
 * 7: 0.00000094
 * 8: 0.00000006
 * more: less than 1 in ten million
 *
 * Under random hashes, the lock contention probability for two threads
 * accessing distinct elements is roughly 1 / (8 * #elements).
 *
 * Actual hash code distributions encountered in practice sometimes deviate
 * significantly from uniform randomness. This includes the case when
 * N > (1<<30), so some keys MUST collide. Similarly for dumb or hostile
 * usages in which multiple keys are designed to have identical hash codes or
 * ones that differ only in masked-out high bits. So we use a secondary
 * strategy that applies when the number of nodes in a bin exceeds a
 * threshold. These TreeBins use a balanced tree to hold nodes (a specialized
 * form of red-black trees), bounding search time to O(log N). Each search
 * step in a TreeBin is at least twice as slow as in a regular list, but given
 * that N cannot exceed (1<<64) (before running out of addresses) this bounds
 * search steps, lock hold times, etc, to reasonable constants (roughly 100
 * nodes inspected per operation worst case) as long as keys are Comparable
 * (which is very common -- String, Long, etc). TreeBin nodes (TreeNodes)
 * also maintain the same "next" traversal pointers as regular nodes, so can
 * be traversed in iterators in the same way.
 *
 * The table is resized when occupancy exceeds a percentage threshold
 * (nominally, 0.75, but see below). Any thread noticing an overfull bin may
 * assist in resizing after the initiating thread allocates and sets up the
 * replacement array. However, rather than stalling, these other threads may
 * proceed with insertions etc. The use of TreeBins shields us from the worst
 * case effects of overfilling while resizes are in progress. Resizing
 * proceeds by transferring bins, one by one, from the table to the next
 * table. However, threads claim small blocks of indices to transfer (via
 * field transferIndex) before doing so, reducing contention. A generation
 * stamp in field sizeCtl ensures that resizings do not overlap. Because we
 * are using power-of-two expansion, the elements from each bin must either
 * stay at the same index, or move with a power-of-two offset. We eliminate
 * unnecessary node creation by catching cases where old nodes can be reused
 * because their next fields won't change. On average, only about one-sixth of
 * them need cloning when a table doubles. The nodes they replace will be
 * garbage collectible as soon as they are no longer referenced by any reader
 * thread that may be in the midst of concurrently traversing the table. Upon
 * transfer, the old table bin contains only a special forwarding node (with
 * hash field "MOVED") that holds the next table as its key. On encountering a
 * forwarding node, access and update operations restart, using the new table.
 *
 * Each bin transfer requires its bin lock, which can stall waiting for locks
 * while resizing. However, because other threads can join in and help resize
 * rather than contend for locks, average aggregate waits become shorter as
 * resizing progresses. The transfer operation must also ensure that all
 * accessible bins in both the old and new table are usable by any traversal.
 * This is arranged in part by proceeding from the last bin (table.length - 1)
 * up towards the first. Upon seeing a forwarding node, traversals (see class
 * Traverser) arrange to move to the new table without revisiting nodes. To
 * ensure that no intervening nodes are skipped even when moved out of order,
 * a stack (see class TableStack) is created on first encounter of a
 * forwarding node during a traversal, to maintain its place if later
 * processing the current table. The need for these save/restore mechanics is
 * relatively rare, but when one forwarding node is encountered, typically
 * many more will be. So Traversers use a simple caching scheme to avoid
 * creating so many new TableStack nodes.
 *
 * The traversal scheme also applies to partial traversals of ranges of bins
 * (via an alternate Traverser constructor) to support partitioned aggregate
 * operations. Also, read-only operations give up if ever forwarded to a null
 * table, which provides support for shutdown-style clearing, which is
 * currently not implemented.
 *
 * Lazy table initialization minimizes footprint until first use, and also
 * avoids resizings when the first operation is from a putAll, a constructor
 * with a map argument, or deserialization. These cases attempt to override
 * the initial capacity settings, but harmlessly fail to take effect in cases
 * of races.
 *
 * The element count is maintained using a specialization of LongAdder. We
 * need to incorporate a specialization rather than just use a LongAdder in
 * order to access implicit contention-sensing that leads to the creation of
 * multiple CounterCells. The counter mechanics avoid contention on updates
 * but can encounter cache thrashing if read too frequently during concurrent
 * access. To avoid reading so often, resizing under contention is attempted
 * only upon adding to a bin already holding two or more nodes. Under uniform
 * hash distributions, the probability of this occurring at threshold is
 * around 13%, meaning that only about 1 in 8 puts check the threshold (and
 * after resizing, many fewer do so).
 *
 * TreeBins use a special form of comparison for search and related
 * operations (which is the main reason we cannot use existing collections
 * such as TreeMap). TreeBins contain Comparable elements, but may contain
 * others, as well as elements that are Comparable but not necessarily
 * Comparable for the same T, so we cannot invoke compareTo among them. To
 * handle this, the tree is ordered primarily by hash value, then by
 * Comparable.compareTo order if applicable. On lookup at a node, if elements
 * are not comparable or compare as 0, then both left and right children may
 * need to be searched in the case of tied hash values. (This corresponds to
 * the full list search that would be necessary if all elements were
 * non-Comparable and had tied hashes.) On insertion, to keep a total
 * ordering (or as close as is required here) across rebalancings, we compare
 * classes and identityHashCodes as tie-breakers. The red-black balancing
 * code is updated from pre-jdk-collections
 * (http://gee.cs.oswego.edu/dl/classes/collections/RBCell.java), based in
 * turn on the CLR "Introduction to Algorithms" presentation.
 *
 * TreeBins also require an additional locking mechanism. While list traversal
 * is always possible by readers even during updates, tree traversal is not,
 * mainly because of tree rotations that may change the root node and/or its
 * linkages. TreeBins include a simple read-write lock mechanism parasitic on
 * the main bin-synchronization strategy: structural adjustments associated
 * with an insertion or removal are already bin-locked (and so cannot conflict
 * with other writers), but must wait for ongoing readers to finish. Since
 * there can be only one such waiter, we use a simple scheme with a single
 * "waiter" field to block writers. However, readers need never block. If the
 * root lock is held, they proceed along the slow traversal path (via
 * next-pointers) until the lock becomes available or the list is exhausted,
 * whichever comes first. These cases are not fast, but they maximize
 * aggregate expected throughput.
 *
 * Maintaining API and serialization compatibility with previous versions of
 * this class introduces several oddities. Mainly: we leave untouched but
 * unused constructor arguments referring to concurrencyLevel. We accept a
 * loadFactor constructor argument, but apply it only to the initial table
 * capacity (the only time that we can guarantee to honor it). We also declare
 * an unused "Segment" class that is instantiated in minimal form only when
 * serializing.
 *
 * Also, solely for compatibility with previous versions of this class, it
 * extends AbstractMap, even though all of its methods are overridden, so it
 * is just useless baggage.
 *
 * This file is organized to make things a little easier to follow while
 * reading than they might otherwise:
 * 1. The main static declarations and utilities
 * 2. Fields and main public methods (with a few factorings of multiple
 *    public methods into internal ones)
 * 3. Sizing methods, trees, traversers, and bulk operations
 */
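The Poisson figures quoted in the overview can be reproduced directly. This standalone sketch (class name `PoissonBins` is my own) evaluates `exp(-0.5) * pow(0.5, k) / factorial(k)`:

```java
public class PoissonBins {
    // Expected frequency of bins holding exactly k nodes under a Poisson(0.5)
    // model, matching the table in the overview comment.
    static double expected(int k) {
        double fact = 1.0;
        for (int i = 2; i <= k; i++) fact *= i; // k!
        return Math.exp(-0.5) * Math.pow(0.5, k) / fact;
    }

    public static void main(String[] args) {
        for (int k = 0; k <= 8; k++)
            System.out.printf("%d: %.8f%n", k, expected(k));
    }
}
```

Running it prints the same values as the comment (0.60653066 for k = 0, tailing off to about 6e-8 for k = 8), which is why `TREEIFY_THRESHOLD = 8` is essentially never reached with well-distributed hashes.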
Constants
/**
 * The largest possible table capacity. This value must be exactly 1<<30 to
 * stay within Java array allocation and indexing bounds for power-of-two table
 * sizes, and is further required because the top two bits of 32-bit hash
 * fields are used for control purposes.
 */
private static final int MAXIMUM_CAPACITY = 1 << 30;
/**
 * The default initial table capacity. Must be a power of 2 (i.e., at least 1)
 * and at most MAXIMUM_CAPACITY.
 */
private static final int DEFAULT_CAPACITY = 16;
/**
 * The largest possible (non-power-of-two) array size. Needed by toArray and
 * related methods.
 */
static final int MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8;
/**
 * The default concurrency level. Unused, but defined for compatibility with
 * previous versions of this class.
 */
private static final int DEFAULT_CONCURRENCY_LEVEL = 16;
/**
 * The load factor for this table. Overrides of this value in constructors
 * affect only the initial table capacity. The actual floating point value
 * isn't normally used -- it is simpler to use expressions such as
 * n - (n >>> 2) for the associated resizing threshold.
 */
private static final float LOAD_FACTOR = 0.75f;
/**
 * The bin count threshold for using a tree rather than a list for a bin.
 * Bins are converted to trees when adding an element to a bin with at least
 * this many nodes. The value must be greater than 2, and should be at least 8
 * to mesh with assumptions in tree removal about conversion back to plain
 * bins upon shrinkage.
 */
static final int TREEIFY_THRESHOLD = 8;
/**
 * The bin count threshold for untreeifying a (split) bin during a resize
 * operation. Should be less than TREEIFY_THRESHOLD, and at most 6 to mesh
 * with shrinkage detection under removal.
 */
static final int UNTREEIFY_THRESHOLD = 6;
/**
 * The smallest table capacity for which bins may be treeified. (Otherwise the
 * table is resized if too many nodes are in a bin.) The value should be at
 * least 4 * TREEIFY_THRESHOLD to avoid conflicts between resizing and
 * treeification thresholds.
 */
static final int MIN_TREEIFY_CAPACITY = 64;
/**
 * The minimum number of rebinnings per transfer step. Ranges are subdivided
 * to allow multiple resizer threads; this value serves as a lower bound to
 * avoid resizers encountering excessive memory contention. The value should
 * be at least DEFAULT_CAPACITY.
 */
private static final int MIN_TRANSFER_STRIDE = 16;
/**
 * The number of bits used for the generation stamp in sizeCtl. Must be at
 * least 6 for 32-bit arrays.
 */
private static final int RESIZE_STAMP_BITS = 16;
/**
 * The maximum number of threads that can help resize.
 * Must fit in 32 - RESIZE_STAMP_BITS bits.
 */
private static final int MAX_RESIZERS = (1 << (32 - RESIZE_STAMP_BITS)) - 1;
/**
 * The bit shift for recording the size stamp in sizeCtl.
 */
private static final int RESIZE_STAMP_SHIFT = 32 - RESIZE_STAMP_BITS;
/*
 * Encodings for Node hash fields. See above for explanation.
 */
static final int MOVED = -1; // hash for forwarding nodes
static final int TREEBIN = -2; // hash for roots of trees
static final int RESERVED = -3; // hash for transient reservations
static final int HASH_BITS = 0x7fffffff; // usable bits of a normal node hash
/** Number of CPUs, to place bounds on some sizings */
static final int NCPU = Runtime.getRuntime().availableProcessors();
/**
 * Serialized pseudo-fields, provided only for jdk7 compatibility.
 * @serialField segments Segment[]
 * The segments, each of which is a specialized hash table.
 * @serialField segmentMask int
 * Mask value for indexing into segments. The upper bits of a key's hash code
 * are used to choose the segment.
 * @serialField segmentShift int
 * Shift value for indexing within segments.
 */
private static final ObjectStreamField[] serialPersistentFields = {
new ObjectStreamField("segments", Segment[].class),
new ObjectStreamField("segmentMask", Integer.TYPE),
new ObjectStreamField("segmentShift", Integer.TYPE),
};
Nodes
/**
 * Key-value entry. This class is never exported out as a user-mutable
 * Map.Entry (i.e., one supporting setValue; see MapEntry below), but can be
 * used for read-only traversals in bulk tasks. Subclasses of Node with a
 * negative hash field are special, and contain null keys and values (but are
 * never exported). Otherwise, keys and values are never null.
 */
static class Node<K,V> implements Map.Entry<K,V> {
final int hash; // the entry's hash field
final K key; // the key; never null for normal nodes
volatile V val; // the value; never null for normal nodes
volatile Node<K,V> next; // link to the next node in the bin
Node(int hash, K key, V val) {
this.hash = hash;
this.key = key;
this.val = val;
}
Node(int hash, K key, V val, Node<K,V> next) {
this(hash, key, val);
this.next = next;
}
public final K getKey() { return key; }
public final V getValue() { return val; }
public final int hashCode() { return key.hashCode() ^ val.hashCode(); }
public final String toString() {
return Helpers.mapEntryToString(key, val);
}
public final V setValue(V value) {
throw new UnsupportedOperationException();
}
public final boolean equals(Object o) {
Object k, v, u; Map.Entry<?,?> e;
return ((o instanceof Map.Entry) &&
(k = (e = (Map.Entry<?,?>)o).getKey()) != null &&
(v = e.getValue()) != null &&
(k == key || k.equals(key)) &&
(v == (u = val) || v.equals(u)));
}
/**
 * Virtualized support for map.get(); overridden in subclasses.
 */
Node<K,V> find(int h, Object k) {
Node<K,V> e = this;
if (k != null) {
do { // check whether the current node e's key matches the given k
K ek;
if (e.hash == h &&
((ek = e.key) == k || (ek != null && k.equals(ek))))
return e;
} while ((e = e.next) != null); // follow next links until exhausted
}
return null;
}
}
Static utilities
/**
 * Spreads (XORs) higher bits of hash to lower and also forces the top bit to
 * 0. Because the table uses power-of-two masking, sets of hashes that vary
 * only in bits above the current mask will always collide. (Among known
 * examples are sets of Float keys holding consecutive whole numbers in small
 * tables.) So we apply a transform that spreads the impact of higher bits
 * downward. There is a tradeoff between speed, utility, and quality of
 * bit-spreading. Because many common sets of hashes are already reasonably
 * distributed (so don't benefit from spreading), and because we use trees to
 * handle large sets of collisions in bins, we just XOR some shifted bits in
 * the cheapest possible way to reduce systematic lossage, as well as to
 * incorporate the impact of the highest bits that would otherwise never be
 * used in index calculations because of table bounds.
 */
static final int spread(int h) {
return (h ^ (h >>> 16)) & HASH_BITS;
}
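To see why the spread step matters, take two hashes that differ only above bit 16: against a 16-slot table they index the same bin, but after spreading they do not. A standalone copy of the same expression (class name `SpreadDemo` is mine, for illustration):

```java
public class SpreadDemo {
    static final int HASH_BITS = 0x7fffffff;

    // Same expression as ConcurrentHashMap.spread: fold the high 16 bits into
    // the low 16 with XOR, then clear the sign bit.
    static int spread(int h) {
        return (h ^ (h >>> 16)) & HASH_BITS;
    }

    public static void main(String[] args) {
        int n = 16;                      // power-of-two table size
        int h1 = 0x10000, h2 = 0x20000;  // differ only in the high bits
        System.out.println(((n - 1) & h1) + " " + ((n - 1) & h2));                 // 0 0: collide
        System.out.println(((n - 1) & spread(h1)) + " " + ((n - 1) & spread(h2))); // 1 2: distinct
    }
}
```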
/**
 * Returns a power of two table size for the given desired capacity.
 * See Hacker's Delight, sec 3.2.
 */
private static final int tableSizeFor(int c) {
int n = -1 >>> Integer.numberOfLeadingZeros(c - 1);
return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
}
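The `numberOfLeadingZeros` trick rounds `c` up to the next power of two: it builds a mask of ones below the highest set bit of `c - 1`, then adds one. A standalone copy with sample values (class name `TableSizeForDemo` is mine):

```java
public class TableSizeForDemo {
    static final int MAXIMUM_CAPACITY = 1 << 30;

    // Same logic as the private ConcurrentHashMap.tableSizeFor.
    static int tableSizeFor(int c) {
        int n = -1 >>> Integer.numberOfLeadingZeros(c - 1);
        return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
    }

    public static void main(String[] args) {
        System.out.println(tableSizeFor(16)); // 16: already a power of two, unchanged
        System.out.println(tableSizeFor(17)); // 32: rounded up
    }
}
```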
/**
 * Returns x's Class if it is of the form "class C implements Comparable<C>",
 * else null.
 */
static Class<?> comparableClassFor(Object x) {
if (x instanceof Comparable) {
Class<?> c; Type[] ts, as; ParameterizedType p;
if ((c = x.getClass()) == String.class) // bypass checks for the common String case
return c;
if ((ts = c.getGenericInterfaces()) != null) {
for (Type t : ts) {
if ((t instanceof ParameterizedType) &&
((p = (ParameterizedType)t).getRawType() ==
Comparable.class) &&
(as = p.getActualTypeArguments()) != null &&
as.length == 1 && as[0] == c) // type arg is c
return c;
}
}
}
return null;
}
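A standalone copy of the method shows the two outcomes: `Integer` is of the form `class C implements Comparable<C>`, while a plain `Object` is not (class name `ComparableClassForDemo` is mine):

```java
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;

public class ComparableClassForDemo {
    // Same reflection walk as ConcurrentHashMap.comparableClassFor: accept x
    // only if its class directly implements Comparable<itself>.
    static Class<?> comparableClassFor(Object x) {
        if (x instanceof Comparable) {
            Class<?> c; Type[] ts, as; ParameterizedType p;
            if ((c = x.getClass()) == String.class) // bypass checks
                return c;
            if ((ts = c.getGenericInterfaces()) != null) {
                for (Type t : ts) {
                    if ((t instanceof ParameterizedType) &&
                        ((p = (ParameterizedType) t).getRawType() == Comparable.class) &&
                        (as = p.getActualTypeArguments()) != null &&
                        as.length == 1 && as[0] == c) // type arg is c
                        return c;
                }
            }
        }
        return null;
    }

    public static void main(String[] args) {
        System.out.println(comparableClassFor(42));           // class java.lang.Integer
        System.out.println(comparableClassFor(new Object())); // null
    }
}
```

This screening is what allows TreeBins to fall back on `compareTo` only when both keys are comparable to each other's exact class.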
/**
 * Returns k.compareTo(x) if x matches kc (k's screened comparable class),
 * else 0.
 */
@SuppressWarnings({"rawtypes","unchecked"}) // for cast to Comparable
static int compareComparables(Class<?> kc, Object k, Object x) {
return (x == null || x.getClass() != kc ? 0 :
((Comparable)k).compareTo(x));
}
/* ---------------- Table element access -------------- */
/*
 * Atomic access methods are used for table elements as well as elements of
 * the in-progress next table while resizing. All callers must null-check the
 * tab argument. All callers also paranoically precheck that tab's length is
 * non-zero (or an equivalent check), thus ensuring that any index argument
 * taking the form of a hash value anded with (length - 1) is a valid index.
 * Note that, to be correct wrt arbitrary concurrency errors by users, these
 * checks must operate on local variables, which accounts for some
 * odd-looking inline assignments below.
 * Note that calls to setTabAt always occur within locked regions, and so
 * require only release ordering.
 */
@SuppressWarnings("unchecked")
// atomic (acquire-ordered) read of the element at index i
static final <K,V> Node<K,V> tabAt(Node<K,V>[] tab, int i) {
return (Node<K,V>)U.getObjectAcquire(tab, ((long)i << ASHIFT) + ABASE);
}
// atomic compare-and-set of the element at index i
static final <K,V> boolean casTabAt(Node<K,V>[] tab, int i,
Node<K,V> c, Node<K,V> v) {
return U.compareAndSetObject(tab, ((long)i << ASHIFT) + ABASE, c, v);
}
// release-ordered write of the element at index i
static final <K,V> void setTabAt(Node<K,V>[] tab, int i, Node<K,V> v) {
U.putObjectRelease(tab, ((long)i << ASHIFT) + ABASE, v);
}
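The accessors above go through the JDK-internal `Unsafe`. Outside the JDK, the same acquire/release/CAS access pattern on an array can be sketched with the public `VarHandle` API; this is an illustrative stand-in I wrote, not the actual implementation:

```java
import java.lang.invoke.MethodHandles;
import java.lang.invoke.VarHandle;

public class TabAccessSketch {
    // Public-API analogue of tabAt/casTabAt/setTabAt using an array VarHandle.
    static final VarHandle AA = MethodHandles.arrayElementVarHandle(Object[].class);

    static Object tabAt(Object[] tab, int i) {
        return AA.getAcquire(tab, i);                 // acquire-ordered read
    }
    static boolean casTabAt(Object[] tab, int i, Object c, Object v) {
        return AA.compareAndSet(tab, i, c, v);        // full CAS
    }
    static void setTabAt(Object[] tab, int i, Object v) {
        AA.setRelease(tab, i, v);                     // release write; used under the bin lock
    }

    public static void main(String[] args) {
        Object[] tab = new Object[4];
        System.out.println(casTabAt(tab, 0, null, "first"));  // true: the bin was empty
        System.out.println(casTabAt(tab, 0, null, "second")); // false: lost the race
        System.out.println(tabAt(tab, 0));                    // first
    }
}
```

The `casTabAt(tab, i, null, node)` idiom is exactly how the first node of an empty bin is installed without locking.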
Fields & Public Methods
/* ---------------- Fields -------------- */
/**
 * The array of bins. Lazily initialized upon the first insertion.
 * Size is always a power of two. Accessed directly by iterators.
 */
transient volatile Node<K,V>[] table;
/**
 * The next table to use; non-null only while resizing.
 */
private transient volatile Node<K,V>[] nextTable;
/**
 * Base counter value, used mainly when there is no contention, but also as a
 * fallback during table initialization races. Updated via CAS.
 */
private transient volatile long baseCount;
/**
 * Table initialization and resizing control. When negative, the table is
 * being initialized or resized: -1 for initialization, else
 * -(1 + the number of active resizing threads). Otherwise, when table is
 * null, holds the initial table size to use upon creation, or 0 for default.
 * After initialization, holds the next element count value upon which to
 * resize the table.
 */
private transient volatile int sizeCtl;
/**
 * The next table index (plus one) to split while resizing.
 */
private transient volatile int transferIndex;
/**
* Spinlock (locked via CAS) used when resizing and/or creating CounterCells.
*/
private transient volatile int cellsBusy;
/**
* Table of counter cells. When non-null, size is a power of 2.
*/
private transient volatile CounterCell[] counterCells;
// views
private transient KeySetView<K,V> keySet;
private transient ValuesView<K,V> values;
private transient EntrySetView<K,V> entrySet;
/* ---------------- Public operations -------------- */
/**
* Creates a new, empty map with the default initial table size (16).
*/
public ConcurrentHashMap() {
}
/**
* Creates a new, empty map with an initial table size
* accommodating the specified number of elements without the need
* to dynamically resize.
*
* @param initialCapacity The implementation performs internal
* sizing to accommodate this many elements.
* @throws IllegalArgumentException if the initial capacity of
* elements is negative
*/
public ConcurrentHashMap(int initialCapacity) {
this(initialCapacity, LOAD_FACTOR, 1);
}
/**
* Creates a new map with the same mappings as the given map.
*
* @param m the map
*/
public ConcurrentHashMap(Map<? extends K, ? extends V> m) {
this.sizeCtl = DEFAULT_CAPACITY;
putAll(m);
}
/**
* Creates a new, empty map with an initial table size based on
* the given number of elements ({@code initialCapacity}) and
* initial table density ({@code loadFactor}).
*
* @param initialCapacity the initial capacity. The implementation
* performs internal sizing to accommodate this many elements,
* given the specified load factor.
* @param loadFactor the load factor (table density) for
* establishing the initial table size
* @throws IllegalArgumentException if the initial capacity of
* elements is negative or the load factor is nonpositive
*
* @since 1.6
*/
public ConcurrentHashMap(int initialCapacity, float loadFactor) {
this(initialCapacity, loadFactor, 1);
}
/**
* Creates a new, empty map with an initial table size based on
* the given number of elements ({@code initialCapacity}), initial
* table density ({@code loadFactor}), and number of concurrently
* updating threads ({@code concurrencyLevel}).
*
* @param initialCapacity the initial capacity. The implementation
* performs internal sizing to accommodate this many elements,
* given the specified load factor.
* @param loadFactor the load factor (table density) for
* establishing the initial table size
* @param concurrencyLevel the estimated number of concurrently
* updating threads. The implementation may use this value as
* a sizing hint.
* @throws IllegalArgumentException if the initial capacity is
* negative or the load factor or concurrencyLevel are
* nonpositive
*/
public ConcurrentHashMap(int initialCapacity,
float loadFactor, int concurrencyLevel) {
if (!(loadFactor > 0.0f) || initialCapacity < 0 || concurrencyLevel <= 0)
throw new IllegalArgumentException();
if (initialCapacity < concurrencyLevel) // Use at least as many bins
initialCapacity = concurrencyLevel; // as estimated threads
long size = (long)(1.0 + (long)initialCapacity / loadFactor);
int cap = (size >= (long)MAXIMUM_CAPACITY) ?
MAXIMUM_CAPACITY : tableSizeFor((int)size);
this.sizeCtl = cap;
}
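The three-argument constructor's sizing arithmetic can be traced in isolation: for example `initialCapacity = 16` with `loadFactor = 0.75f` yields `size = 22` and therefore a table capacity of 32, not 16, leaving headroom below the resize threshold. A sketch of the same computation (class name `CtorSizingDemo` is mine):

```java
public class CtorSizingDemo {
    static final int MAXIMUM_CAPACITY = 1 << 30;

    static int tableSizeFor(int c) { // as in ConcurrentHashMap
        int n = -1 >>> Integer.numberOfLeadingZeros(c - 1);
        return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
    }

    // Same arithmetic as the constructor's sizeCtl computation.
    static int initialCap(int initialCapacity, float loadFactor, int concurrencyLevel) {
        if (initialCapacity < concurrencyLevel)  // use at least as many bins as estimated threads
            initialCapacity = concurrencyLevel;
        long size = (long) (1.0 + (long) initialCapacity / loadFactor);
        return (size >= (long) MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : tableSizeFor((int) size);
    }

    public static void main(String[] args) {
        System.out.println(initialCap(16, 0.75f, 1)); // 32
    }
}
```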
// Original (since JDK1.2) Map methods
/**
* {@inheritDoc}
*/
public int size() {
long n = sumCount();
return ((n < 0L) ? 0 :
(n > (long)Integer.MAX_VALUE) ? Integer.MAX_VALUE :
(int)n);
}
/**
* {@inheritDoc}
*/
public boolean isEmpty() {
return sumCount() <= 0L; // ignore transient negative values
}
/**
* Returns the value to which the specified key is mapped,
* or {@code null} if this map contains no mapping for the key.
*
* <p>More formally, if this map contains a mapping from a key
* {@code k} to a value {@code v} such that {@code key.equals(k)},
* then this method returns {@code v}; otherwise it returns
* {@code null}. (There can be at most one such mapping.)
*
* @throws NullPointerException if the specified key is null
*/
public V get(Object key) {
Node<K,V>[] tab; Node<K,V> e, p; int n, eh; K ek;
int h = spread(key.hashCode());
if ((tab = table) != null && (n = tab.length) > 0 &&
(e = tabAt(tab, (n - 1) & h)) != null) {
if ((eh = e.hash) == h) {
if ((ek = e.key) == key || (ek != null && key.equals(ek)))
return e.val;
}
else if (eh < 0)
return (p = e.find(h, key)) != null ? p.val : null;
while ((e = e.next) != null) {
if (e.hash == h &&
((ek = e.key) == key || (ek != null && key.equals(ek))))
return e.val;
}
}
return null;
}
/**
* Tests if the specified object is a key in this table.
*
* @param key possible key
* @return {@code true} if and only if the specified object
* is a key in this table, as determined by the
* {@code equals} method; {@code false} otherwise
* @throws NullPointerException if the specified key is null
*/
public boolean containsKey(Object key) {
return get(key) != null;
}
/**
* Returns {@code true} if this map maps one or more keys to the
* specified value. Note: This method may require a full traversal
* of the map, and is much slower than method {@code containsKey}.
*
* @param value value whose presence in this map is to be tested
* @return {@code true} if this map maps one or more keys to the
* specified value
* @throws NullPointerException if the specified value is null
*/
public boolean containsValue(Object value) {
if (value == null)
throw new NullPointerException();
Node<K,V>[] t;
if ((t = table) != null) {
Traverser<K,V> it = new Traverser<K,V>(t, t.length, 0, t.length);
for (Node<K,V> p; (p = it.advance()) != null; ) {
V v;
if ((v = p.val) == value || (v != null && value.equals(v)))
return true;
}
}
return false;
}
/**
* Maps the specified key to the specified value in this table.
* Neither the key nor the value can be null.
*
* <p>The value can be retrieved by calling the {@code get} method
* with a key that is equal to the original key.
*
* @param key key with which the specified value is to be associated
* @param value value to be associated with the specified key
* @return the previous value associated with {@code key}, or
* {@code null} if there was no mapping for {@code key}
* @throws NullPointerException if the specified key or value is null
*/
public V put(K key, V value) {
return putVal(key, value, false);
}
/** Implementation for put and putIfAbsent */
final V putVal(K key, V value, boolean onlyIfAbsent) {
if (key == null || value == null) throw new NullPointerException();
int hash = spread(key.hashCode());
int binCount = 0;
for (Node<K,V>[] tab = table;;) {
Node<K,V> f; int n, i, fh; K fk; V fv;
if (tab == null || (n = tab.length) == 0)
tab = initTable();
else if ((f = tabAt(tab, i = (n - 1) & hash)) == null) {
if (casTabAt(tab, i, null, new Node<K,V>(hash, key, value)))
break; // no lock when adding to empty bin
}
else if ((fh = f.hash) == MOVED)
tab = helpTransfer(tab, f);
else if (onlyIfAbsent // check first node without acquiring lock
&& fh == hash
&& ((fk = f.key) == key || (fk != null && key.equals(fk)))
&& (fv = f.val) != null)
return fv;
else {
V oldVal = null;
synchronized (f) {
if (tabAt(tab, i) == f) {
if (fh >= 0) {
binCount = 1;
for (Node<K,V> e = f;; ++binCount) {
K ek;
if (e.hash == hash &&
((ek = e.key) == key ||
(ek != null && key.equals(ek)))) {
oldVal = e.val;
if (!onlyIfAbsent)
e.val = value;
break;
}
Node<K,V> pred = e;
if ((e = e.next) == null) {
pred.next = new Node<K,V>(hash, key, value);
break;
}
}
}
else if (f instanceof TreeBin) {
Node<K,V> p;
binCount = 2;
if ((p = ((TreeBin<K,V>)f).putTreeVal(hash, key,
value)) != null) {
oldVal = p.val;
if (!onlyIfAbsent)
p.val = value;
}
}
else if (f instanceof ReservationNode)
throw new IllegalStateException("Recursive update");
}
}
if (binCount != 0) {
if (binCount >= TREEIFY_THRESHOLD)
treeifyBin(tab, i);
if (oldVal != null)
return oldVal;
break;
}
}
}
addCount(1L, binCount);
return null;
}
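The return-value contract implemented by putVal (return the previous value, or null if the key was absent; overwrite only when onlyIfAbsent is false) is observable through the public API:

```java
import java.util.concurrent.ConcurrentHashMap;

public class PutValContractDemo {
    // Collect the return values of a put/put/putIfAbsent/get sequence.
    static Integer[] trace() {
        ConcurrentHashMap<String, Integer> m = new ConcurrentHashMap<>();
        return new Integer[] {
            m.put("k", 1),          // null: key was absent
            m.put("k", 2),          // 1: previous value, now overwritten
            m.putIfAbsent("k", 3),  // 2: present, so NOT overwritten (onlyIfAbsent path)
            m.get("k")              // 2
        };
    }

    public static void main(String[] args) {
        for (Integer v : PutValContractDemo.trace())
            System.out.println(v);
    }
}
```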
/**
* Copies all of the mappings from the specified map to this one.
* These mappings replace any mappings that this map had for any of the
* keys currently in the specified map.
*
* @param m mappings to be stored in this map
*/
public void putAll(Map<? extends K, ? extends V> m) {
tryPresize(m.size());
for (Map.Entry<? extends K, ? extends V> e : m.entrySet())
putVal(e.getKey(), e.getValue(), false);
}
/**
* Removes the key (and its corresponding value) from this map.
* This method does nothing if the key is not in the map.
*
* @param key the key that needs to be removed
* @return the previous value associated with {@code key}, or
* {@code null} if there was no mapping for {@code key}
* @throws NullPointerException if the specified key is null
*/
public V remove(Object key) {
return replaceNode(key, null, null);
}
/**
* Implementation for the four public remove/replace methods:
* Replaces node value with v, conditional upon match of cv if
* non-null. If resulting value is null, delete.
*/
final V replaceNode(Object key, V value, Object cv) {
int hash = spread(key.hashCode());
for (Node<K,V>[] tab = table;;) {
Node<K,V> f; int n, i, fh;
if (tab == null || (n = tab.length) == 0 ||
(f = tabAt(tab, i = (n - 1) & hash)) == null)
break;
else if ((fh = f.hash) == MOVED)
tab = helpTransfer(tab, f);
else {
V oldVal = null;
boolean validated = false;
synchronized (f) {
if (tabAt(tab, i) == f) {
if (fh >= 0) {
validated = true;
for (Node<K,V> e = f, pred = null;;) {
K ek;
if (e.hash == hash &&
((ek = e.key) == key ||
(ek != null && key.equals(ek)))) {
V ev = e.val;
if (cv == null || cv == ev ||
(ev != null && cv.equals(ev))) {
oldVal = ev;
if (value != null)
e.val = value;
else if (pred != null)
pred.next = e.next;
else
setTabAt(tab, i, e.next);
}
break;
}
pred = e;
if ((e = e.next) == null)
break;
}
}
else if (f instanceof TreeBin) {
validated = true;
TreeBin<K,V> t = (TreeBin<K,V>)f;
TreeNode<K,V> r, p;
if ((r = t.root) != null &&
(p = r.findTreeNode(hash, key, null)) != null) {
V pv = p.val;
if (cv == null || cv == pv ||
(pv != null && cv.equals(pv))) {
oldVal = pv;
if (value != null)
p.val = value;
else if (t.removeTreeNode(p))
setTabAt(tab, i, untreeify(t.first));
}
}
}
else if (f instanceof ReservationNode)
throw new IllegalStateException("Recursive update");
}
}
if (validated) {
if (oldVal != null) {
if (value == null)
addCount(-1L, -1);
return oldVal;
}
break;
}
}
}
return null;
}
/**
* Removes all of the mappings from this map.
*/
public void clear() {
long delta = 0L; // negative number of deletions
int i = 0;
Node<K,V>[] tab = table;
while (tab != null && i < tab.length) {
int fh;
Node<K,V> f = tabAt(tab, i);
if (f == null)
++i;
else if ((fh = f.hash) == MOVED) {
tab = helpTransfer(tab, f);
i = 0; // restart
}
else {
synchronized (f) {
if (tabAt(tab, i) == f) {
Node<K,V> p = (fh >= 0 ? f :
(f instanceof TreeBin) ?
((TreeBin<K,V>)f).first : null);
while (p != null) {
--delta;
p = p.next;
}
setTabAt(tab, i++, null);
}
}
}
}
if (delta != 0L)
addCount(delta, -1);
}
/**
* Returns a {@link Set} view of the keys contained in this map.
* The set is backed by the map, so changes to the map are
* reflected in the set, and vice-versa. The set supports element
* removal, which removes the corresponding mapping from this map,
* via the {@code Iterator.remove}, {@code Set.remove},
* {@code removeAll}, {@code retainAll}, and {@code clear}
* operations. It does not support the {@code add} or
* {@code addAll} operations.
*
* <p>The view's iterators and spliterators are
* <a href="package-summary.html#Weakly"><i>weakly consistent</i></a>.
*
* <p>The view's {@code spliterator} reports {@link Spliterator#CONCURRENT},
* {@link Spliterator#DISTINCT}, and {@link Spliterator#NONNULL}.
*
* @return the set view
*/
public KeySetView<K,V> keySet() {
KeySetView<K,V> ks;
if ((ks = keySet) != null) return ks;
return keySet = new KeySetView<K,V>(this, null);
}
/**
* Returns a {@link Collection} view of the values contained in this map.
* The collection is backed by the map, so changes to the map are
* reflected in the collection, and vice-versa. The collection
* supports element removal, which removes the corresponding
* mapping from this map, via the {@code Iterator.remove},
* {@code Collection.remove}, {@code removeAll},
* {@code retainAll}, and {@code clear} operations. It does not
* support the {@code add} or {@code addAll} operations.
*
* <p>The view's iterators and spliterators are
* <a href="package-summary.html#Weakly"><i>weakly consistent</i></a>.
*
* <p>The view's {@code spliterator} reports {@link Spliterator#CONCURRENT}
* and {@link Spliterator#NONNULL}.
*
* @return the collection view
*/
public Collection<V> values() {
ValuesView<K,V> vs;
if ((vs = values) != null) return vs;
return values = new ValuesView<K,V>(this);
}
/**
* Returns a {@link Set} view of the mappings contained in this map.
* The set is backed by the map, so changes to the map are
* reflected in the set, and vice-versa. The set supports element
* removal, which removes the corresponding mapping from the map,
* via the {@code Iterator.remove}, {@code Set.remove},
* {@code removeAll}, {@code retainAll}, and {@code clear}
* operations.
*
* <p>The view's iterators and spliterators are
* <a href="package-summary.html#Weakly"><i>weakly consistent</i></a>.
*
* <p>The view's {@code spliterator} reports {@link Spliterator#CONCURRENT},
* {@link Spliterator#DISTINCT}, and {@link Spliterator#NONNULL}.
*
* @return the set view
*/
public Set<Map.Entry<K,V>> entrySet() {
EntrySetView<K,V> es;
if ((es = entrySet) != null) return es;
return entrySet = new EntrySetView<K,V>(this);
}
/**
* Returns the hash code value for this {@link Map}, i.e.,
* the sum of, for each key-value pair in the map,
* {@code key.hashCode() ^ value.hashCode()}.
*
* @return the hash code value for this map
*/
public int hashCode() {
int h = 0;
Node<K,V>[] t;
if ((t = table) != null) {
Traverser<K,V> it = new Traverser<K,V>(t, t.length, 0, t.length);
for (Node<K,V> p; (p = it.advance()) != null; )
h += p.key.hashCode() ^ p.val.hashCode();
}
return h;
}
/**
* Returns a string representation of this map. The string
* representation consists of a list of key-value mappings (in no
* particular order) enclosed in braces ("{@code {}}"). Adjacent
* mappings are separated by the characters {@code ", "} (comma
* and space). Each key-value mapping is rendered as the key
* followed by an equals sign ("{@code =}") followed by the
* associated value.
*
* @return a string representation of this map
*/
public String toString() {
Node<K,V>[] t;
int f = (t = table) == null ? 0 : t.length;
Traverser<K,V> it = new Traverser<K,V>(t, f, 0, f);
StringBuilder sb = new StringBuilder();
sb.append('{');
Node<K,V> p;
if ((p = it.advance()) != null) {
for (;;) {
K k = p.key;
V v = p.val;
sb.append(k == this ? "(this Map)" : k);
sb.append('=');
sb.append(v == this ? "(this Map)" : v);
if ((p = it.advance()) == null)
break;
sb.append(',').append(' ');
}
}
return sb.append('}').toString();
}
/**
* Compares the specified object with this map for equality.
* Returns {@code true} if the given object is a map with the same
* mappings as this map. This operation may return misleading
* results if either map is concurrently modified during execution
* of this method.
*
* @param o object to be compared for equality with this map
* @return {@code true} if the specified object is equal to this map
*/
public boolean equals(Object o) {
if (o != this) {
if (!(o instanceof Map))
return false;
Map<?,?> m = (Map<?,?>) o;
Node<K,V>[] t;
int f = (t = table) == null ? 0 : t.length;
Traverser<K,V> it = new Traverser<K,V>(t, f, 0, f);
for (Node<K,V> p; (p = it.advance()) != null; ) {
V val = p.val;
Object v = m.get(p.key);
if (v == null || (v != val && !v.equals(val)))
return false;
}
for (Map.Entry<?,?> e : m.entrySet()) {
Object mk, mv, v;
if ((mk = e.getKey()) == null ||
(mv = e.getValue()) == null ||
(v = get(mk)) == null ||
(mv != v && !mv.equals(v)))
return false;
}
}
return true;
}
/**
* Stripped-down version of helper class used in previous version,
* declared for the sake of serialization compatibility.
*/
static class Segment<K,V> extends ReentrantLock implements Serializable {
private static final long serialVersionUID = 2249069246763182397L;
final float loadFactor;
Segment(float lf) { this.loadFactor = lf; }
}
/**
* Saves this map to a stream (that is, serializes it).
*
* @param s the stream
* @throws java.io.IOException if an I/O error occurs
* @serialData
* the serialized fields, followed by the key (Object) and value
* (Object) for each key-value mapping, followed by a null pair.
* The key-value mappings are emitted in no particular order.
*/
private void writeObject(java.io.ObjectOutputStream s)
throws java.io.IOException {
// For serialization compatibility
// Emulate segment calculation from previous version of this class
int sshift = 0;
int ssize = 1;
while (ssize < DEFAULT_CONCURRENCY_LEVEL) {
++sshift;
ssize <<= 1;
}
int segmentShift = 32 - sshift;
int segmentMask = ssize - 1;
@SuppressWarnings("unchecked")
Segment<K,V>[] segments = (Segment<K,V>[])
new Segment<?,?>[DEFAULT_CONCURRENCY_LEVEL];
for (int i = 0; i < segments.length; ++i)
segments[i] = new Segment<K,V>(LOAD_FACTOR);
java.io.ObjectOutputStream.PutField streamFields = s.putFields();
streamFields.put("segments", segments);
streamFields.put("segmentShift", segmentShift);
streamFields.put("segmentMask", segmentMask);
s.writeFields();
Node<K,V>[] t;
if ((t = table) != null) {
Traverser<K,V> it = new Traverser<K,V>(t, t.length, 0, t.length);
for (Node<K,V> p; (p = it.advance()) != null; ) {
s.writeObject(p.key);
s.writeObject(p.val);
}
}
s.writeObject(null);
s.writeObject(null);
}
/**
* Reconstitutes this map from a stream (that is, deserializes it).
* @param s the stream
* @throws ClassNotFoundException if the class of a serialized object
* could not be found
* @throws java.io.IOException if an I/O error occurs
*/
private void readObject(java.io.ObjectInputStream s)
throws java.io.IOException, ClassNotFoundException {
/*
* To improve performance in typical cases, we create nodes
* while reading, then place in table once size is known.
* However, we must also validate uniqueness and deal with
* overpopulated bins while doing so, which requires
* specialized versions of putVal mechanics.
*/
sizeCtl = -1; // force exclusion for table construction
s.defaultReadObject();
long size = 0L;
Node<K,V> p = null;
for (;;) {
@SuppressWarnings("unchecked")
K k = (K) s.readObject();
@SuppressWarnings("unchecked")
V v = (V) s.readObject();
if (k != null && v != null) {
p = new Node<K,V>(spread(k.hashCode()), k, v, p);
++size;
}
else
break;
}
if (size == 0L)
sizeCtl = 0;
else {
long ts = (long)(1.0 + size / LOAD_FACTOR);
int n = (ts >= (long)MAXIMUM_CAPACITY) ?
MAXIMUM_CAPACITY : tableSizeFor((int)ts);
@SuppressWarnings("unchecked")
Node<K,V>[] tab = (Node<K,V>[])new Node<?,?>[n];
int mask = n - 1;
long added = 0L;
while (p != null) {
boolean insertAtFront;
Node<K,V> next = p.next, first;
int h = p.hash, j = h & mask;
if ((first = tabAt(tab, j)) == null)
insertAtFront = true;
else {
K k = p.key;
if (first.hash < 0) {
TreeBin<K,V> t = (TreeBin<K,V>)first;
if (t.putTreeVal(h, k, p.val) == null)
++added;
insertAtFront = false;
}
else {
int binCount = 0;
insertAtFront = true;
Node<K,V> q; K qk;
for (q = first; q != null; q = q.next) {
if (q.hash == h &&
((qk = q.key) == k ||
(qk != null && k.equals(qk)))) {
insertAtFront = false;
break;
}
++binCount;
}
if (insertAtFront && binCount >= TREEIFY_THRESHOLD) {
insertAtFront = false;
++added;
p.next = first;
TreeNode<K,V> hd = null, tl = null;
for (q = p; q != null; q = q.next) {
TreeNode<K,V> t = new TreeNode<K,V>
(q.hash, q.key, q.val, null, null);
if ((t.prev = tl) == null)
hd = t;
else
tl.next = t;
tl = t;
}
setTabAt(tab, j, new TreeBin<K,V>(hd));
}
}
}
if (insertAtFront) {
++added;
p.next = first;
setTabAt(tab, j, p);
}
p = next;
}
table = tab;
sizeCtl = n - (n >>> 2);
baseCount = added;
}
}
// ConcurrentMap methods
/**
* {@inheritDoc}
*
* @return the previous value associated with the specified key,
* or {@code null} if there was no mapping for the key
* @throws NullPointerException if the specified key or value is null
*/
public V putIfAbsent(K key, V value) {
return putVal(key, value, true);
}
/**
* {@inheritDoc}
*
* @throws NullPointerException if the specified key is null
*/
public boolean remove(Object key, Object value) {
if (key == null)
throw new NullPointerException();
return value != null && replaceNode(key, null, value) != null;
}
/**
* {@inheritDoc}
*
* @throws NullPointerException if any of the arguments are null
*/
public boolean replace(K key, V oldValue, V newValue) {
if (key == null || oldValue == null || newValue == null)
throw new NullPointerException();
return replaceNode(key, newValue, oldValue) != null;
}
/**
* {@inheritDoc}
*
* @return the previous value associated with the specified key,
* or {@code null} if there was no mapping for the key
* @throws NullPointerException if the specified key or value is null
*/
public V replace(K key, V value) {
if (key == null || value == null)
throw new NullPointerException();
return replaceNode(key, value, null);
}
// Overrides of JDK8+ Map extension method defaults
/**
* Returns the value to which the specified key is mapped, or the
* given default value if this map contains no mapping for the
* key.
*
* @param key the key whose associated value is to be returned
* @param defaultValue the value to return if this map contains
* no mapping for the given key
* @return the mapping for the key, if present; else the default value
* @throws NullPointerException if the specified key is null
*/
public V getOrDefault(Object key, V defaultValue) {
V v;
return (v = get(key)) == null ? defaultValue : v;
}
public void forEach(BiConsumer<? super K, ? super V> action) {
if (action == null) throw new NullPointerException();
Node<K,V>[] t;
if ((t = table) != null) {
Traverser<K,V> it = new Traverser<K,V>(t, t.length, 0, t.length);
for (Node<K,V> p; (p = it.advance()) != null; ) {
action.accept(p.key, p.val);
}
}
}
public void replaceAll(BiFunction<? super K, ? super V, ? extends V> function) {
if (function == null) throw new NullPointerException();
Node<K,V>[] t;
if ((t = table) != null) {
Traverser<K,V> it = new Traverser<K,V>(t, t.length, 0, t.length);
for (Node<K,V> p; (p = it.advance()) != null; ) {
V oldValue = p.val;
for (K key = p.key;;) {
V newValue = function.apply(key, oldValue);
if (newValue == null)
throw new NullPointerException();
if (replaceNode(key, newValue, oldValue) != null ||
(oldValue = get(key)) == null)
break;
}
}
}
}
/**
* Helper method for EntrySetView.removeIf.
*/
boolean removeEntryIf(Predicate<? super Entry<K,V>> function) {
if (function == null) throw new NullPointerException();
Node<K,V>[] t;
boolean removed = false;
if ((t = table) != null) {
Traverser<K,V> it = new Traverser<K,V>(t, t.length, 0, t.length);
for (Node<K,V> p; (p = it.advance()) != null; ) {
K k = p.key;
V v = p.val;
Map.Entry<K,V> e = new AbstractMap.SimpleImmutableEntry<>(k, v);
if (function.test(e) && replaceNode(k, null, v) != null)
removed = true;
}
}
return removed;
}
/**
* Helper method for ValuesView.removeIf.
*/
boolean removeValueIf(Predicate<? super V> function) {
if (function == null) throw new NullPointerException();
Node<K,V>[] t;
boolean removed = false;
if ((t = table) != null) {
Traverser<K,V> it = new Traverser<K,V>(t, t.length, 0, t.length);
for (Node<K,V> p; (p = it.advance()) != null; ) {
K k = p.key;
V v = p.val;
if (function.test(v) && replaceNode(k, null, v) != null)
removed = true;
}
}
return removed;
}
/**
* If the specified key is not already associated with a value,
* attempts to compute its value using the given mapping function
* and enters it into this map unless {@code null}. The entire
* method invocation is performed atomically, so the function is
* applied at most once per key. Some attempted update operations
* on this map by other threads may be blocked while computation
* is in progress, so the computation should be short and simple,
* and must not attempt to update any other mappings of this map.
*
* @param key key with which the specified value is to be associated
* @param mappingFunction the function to compute a value
* @return the current (existing or computed) value associated with
* the specified key, or null if the computed value is null
* @throws NullPointerException if the specified key or mappingFunction
* is null
* @throws IllegalStateException if the computation detectably
* attempts a recursive update to this map that would
* otherwise never complete
* @throws RuntimeException or Error if the mappingFunction does so,
* in which case the mapping is left unestablished
*/
public V computeIfAbsent(K key, Function<? super K, ? extends V> mappingFunction) {
if (key == null || mappingFunction == null)
throw new NullPointerException();
int h = spread(key.hashCode());
V val = null;
int binCount = 0;
for (Node<K,V>[] tab = table;;) {
Node<K,V> f; int n, i, fh; K fk; V fv;
if (tab == null || (n = tab.length) == 0)
tab = initTable();
else if ((f = tabAt(tab, i = (n - 1) & h)) == null) {
Node<K,V> r = new ReservationNode<K,V>();
synchronized (r) {
if (casTabAt(tab, i, null, r)) {
binCount = 1;
Node<K,V> node = null;
try {
if ((val = mappingFunction.apply(key)) != null)
node = new Node<K,V>(h, key, val);
} finally {
setTabAt(tab, i, node);
}
}
}
if (binCount != 0)
break;
}
else if ((fh = f.hash) == MOVED)
tab = helpTransfer(tab, f);
else if (fh == h // check first node without acquiring lock
&& ((fk = f.key) == key || (fk != null && key.equals(fk)))
&& (fv = f.val) != null)
return fv;
else {
boolean added = false;
synchronized (f) {
if (tabAt(tab, i) == f) {
if (fh >= 0) {
binCount = 1;
for (Node<K,V> e = f;; ++binCount) {
K ek;
if (e.hash == h &&
((ek = e.key) == key ||
(ek != null && key.equals(ek)))) {
val = e.val;
break;
}
Node<K,V> pred = e;
if ((e = e.next) == null) {
if ((val = mappingFunction.apply(key)) != null) {
if (pred.next != null)
throw new IllegalStateException("Recursive update");
added = true;
pred.next = new Node<K,V>(h, key, val);
}
break;
}
}
}
else if (f instanceof TreeBin) {
binCount = 2;
TreeBin<K,V> t = (TreeBin<K,V>)f;
TreeNode<K,V> r, p;
if ((r = t.root) != null &&
(p = r.findTreeNode(h, key, null)) != null)
val = p.val;
else if ((val = mappingFunction.apply(key)) != null) {
added = true;
t.putTreeVal(h, key, val);
}
}
else if (f instanceof ReservationNode)
throw new IllegalStateException("Recursive update");
}
}
if (binCount != 0) {
if (binCount >= TREEIFY_THRESHOLD)
treeifyBin(tab, i);
if (!added)
return val;
break;
}
}
}
if (val != null)
addCount(1L, binCount);
return val;
}
/**
* If the value for the specified key is present, attempts to
* compute a new mapping given the key and its current mapped
* value. The entire method invocation is performed atomically.
* Some attempted update operations on this map by other threads
* may be blocked while computation is in progress, so the
* computation should be short and simple, and must not attempt to
* update any other mappings of this map.
*
* @param key key with which a value may be associated
* @param remappingFunction the function to compute a value
* @return the new value associated with the specified key, or null if none
* @throws NullPointerException if the specified key or remappingFunction
* is null
* @throws IllegalStateException if the computation detectably
* attempts a recursive update to this map that would
* otherwise never complete
* @throws RuntimeException or Error if the remappingFunction does so,
* in which case the mapping is unchanged
*/
public V computeIfPresent(K key, BiFunction<? super K, ? super V, ? extends V> remappingFunction) {
if (key == null || remappingFunction == null)
throw new NullPointerException();
int h = spread(key.hashCode());
V val = null;
int delta = 0;
int binCount = 0;
for (Node<K,V>[] tab = table;;) {
Node<K,V> f; int n, i, fh;
if (tab == null || (n = tab.length) == 0)
tab = initTable();
else if ((f = tabAt(tab, i = (n - 1) & h)) == null)
break;
else if ((fh = f.hash) == MOVED)
tab = helpTransfer(tab, f);
else {
synchronized (f) {
if (tabAt(tab, i) == f) {
if (fh >= 0) {
binCount = 1;
for (Node<K,V> e = f, pred = null;; ++binCount) {
K ek;
if (e.hash == h &&
((ek = e.key) == key ||
(ek != null && key.equals(ek)))) {
val = remappingFunction.apply(key, e.val);
if (val != null)
e.val = val;
else {
delta = -1;
Node<K,V> en = e.next;
if (pred != null)
pred.next = en;
else
setTabAt(tab, i, en);
}
break;
}
pred = e;
if ((e = e.next) == null)
break;
}
}
else if (f instanceof TreeBin) {
binCount = 2;
TreeBin<K,V> t = (TreeBin<K,V>)f;
TreeNode<K,V> r, p;
if ((r = t.root) != null &&
(p = r.findTreeNode(h, key, null)) != null) {
val = remappingFunction.apply(key, p.val);
if (val != null)
p.val = val;
else {
delta = -1;
if (t.removeTreeNode(p))
setTabAt(tab, i, untreeify(t.first));
}
}
}
else if (f instanceof ReservationNode)
throw new IllegalStateException("Recursive update");
}
}
if (binCount != 0)
break;
}
}
if (delta != 0)
addCount((long)delta, binCount);
return val;
}
/**
* Attempts to compute a mapping for the specified key and its
* current mapped value (or {@code null} if there is no current
* mapping). The entire method invocation is performed atomically.
* Some attempted update operations on this map by other threads
* may be blocked while computation is in progress, so the
* computation should be short and simple, and must not attempt to
* update any other mappings of this Map.
*
* @param key key with which the specified value is to be associated
* @param remappingFunction the function to compute a value
* @return the new value associated with the specified key, or null if none
* @throws NullPointerException if the specified key or remappingFunction
* is null
* @throws IllegalStateException if the computation detectably
* attempts a recursive update to this map that would
* otherwise never complete
* @throws RuntimeException or Error if the remappingFunction does so,
* in which case the mapping is unchanged
*/
public V compute(K key,
BiFunction<? super K, ? super V, ? extends V> remappingFunction) {
if (key == null || remappingFunction == null)
throw new NullPointerException();
int h = spread(key.hashCode());
V val = null;
int delta = 0;
int binCount = 0;
for (Node<K,V>[] tab = table;;) {
Node<K,V> f; int n, i, fh;
if (tab == null || (n = tab.length) == 0)
tab = initTable();
else if ((f = tabAt(tab, i = (n - 1) & h)) == null) {
Node<K,V> r = new ReservationNode<K,V>();
synchronized (r) {
if (casTabAt(tab, i, null, r)) {
binCount = 1;
Node<K,V> node = null;
try {
if ((val = remappingFunction.apply(key, null)) != null) {
delta = 1;
node = new Node<K,V>(h, key, val);
}
} finally {
setTabAt(tab, i, node);
}
}
}
if (binCount != 0)
break;
}
else if ((fh = f.hash) == MOVED)
tab = helpTransfer(tab, f);
else {
synchronized (f) {
if (tabAt(tab, i) == f) {
if (fh >= 0) {
binCount = 1;
for (Node<K,V> e = f, pred = null;; ++binCount) {
K ek;
if (e.hash == h &&
((ek = e.key) == key ||
(ek != null && key.equals(ek)))) {
val = remappingFunction.apply(key, e.val);
if (val != null)
e.val = val;
else {
delta = -1;
Node<K,V> en = e.next;
if (pred != null)
pred.next = en;
else
setTabAt(tab, i, en);
}
break;
}
pred = e;
if ((e = e.next) == null) {
val = remappingFunction.apply(key, null);
if (val != null) {
if (pred.next != null)
throw new IllegalStateException("Recursive update");
delta = 1;
pred.next = new Node<K,V>(h, key, val);
}
break;
}
}
}
else if (f instanceof TreeBin) {
binCount = 1;
TreeBin<K,V> t = (TreeBin<K,V>)f;
TreeNode<K,V> r, p;
if ((r = t.root) != null)
p = r.findTreeNode(h, key, null);
else
p = null;
V pv = (p == null) ? null : p.val;
val = remappingFunction.apply(key, pv);
if (val != null) {
if (p != null)
p.val = val;
else {
delta = 1;
t.putTreeVal(h, key, val);
}
}
else if (p != null) {
delta = -1;
if (t.removeTreeNode(p))
setTabAt(tab, i, untreeify(t.first));
}
}
else if (f instanceof ReservationNode)
throw new IllegalStateException("Recursive update");
}
}
if (binCount != 0) {
if (binCount >= TREEIFY_THRESHOLD)
treeifyBin(tab, i);
break;
}
}
}
if (delta != 0)
addCount((long)delta, binCount);
return val;
}
/**
* If the specified key is not already associated with a
* (non-null) value, associates it with the given value.
* Otherwise, replaces the value with the results of the given
* remapping function, or removes if {@code null}. The entire
* method invocation is performed atomically. Some attempted
* update operations on this map by other threads may be blocked
* while computation is in progress, so the computation should be
* short and simple, and must not attempt to update any other
* mappings of this Map.
*
* @param key key with which the specified value is to be associated
* @param value the value to use if absent
* @param remappingFunction the function to recompute a value if present
* @return the new value associated with the specified key, or null if none
* @throws NullPointerException if the specified key or the
* remappingFunction is null
* @throws RuntimeException or Error if the remappingFunction does so,
* in which case the mapping is unchanged
*/
public V merge(K key, V value, BiFunction<? super V, ? super V, ? extends V> remappingFunction) {
if (key == null || value == null || remappingFunction == null)
throw new NullPointerException();
int h = spread(key.hashCode());
V val = null;
int delta = 0;
int binCount = 0;
for (Node<K,V>[] tab = table;;) {
Node<K,V> f; int n, i, fh;
if (tab == null || (n = tab.length) == 0)
tab = initTable();
else if ((f = tabAt(tab, i = (n - 1) & h)) == null) {
if (casTabAt(tab, i, null, new Node<K,V>(h, key, value))) {
delta = 1;
val = value;
break;
}
}
else if ((fh = f.hash) == MOVED)
tab = helpTransfer(tab, f);
else {
synchronized (f) {
if (tabAt(tab, i) == f) {
if (fh >= 0) {
binCount = 1;
for (Node<K,V> e = f, pred = null;; ++binCount) {
K ek;
if (e.hash == h &&
((ek = e.key) == key ||
(ek != null && key.equals(ek)))) {
val = remappingFunction.apply(e.val, value);
if (val != null)
e.val = val;
else {
delta = -1;
Node<K,V> en = e.next;
if (pred != null)
pred.next = en;
else
setTabAt(tab, i, en);
}
break;
}
pred = e;
if ((e = e.next) == null) {
delta = 1;
val = value;
pred.next = new Node<K,V>(h, key, val);
break;
}
}
}
else if (f instanceof TreeBin) {
binCount = 2;
TreeBin<K,V> t = (TreeBin<K,V>)f;
TreeNode<K,V> r = t.root;
TreeNode<K,V> p = (r == null) ? null :
r.findTreeNode(h, key, null);
val = (p == null) ? value :
remappingFunction.apply(p.val, value);
if (val != null) {
if (p != null)
p.val = val;
else {
delta = 1;
t.putTreeVal(h, key, val);
}
}
else if (p != null) {
delta = -1;
if (t.removeTreeNode(p))
setTabAt(tab, i, untreeify(t.first));
}
}
else if (f instanceof ReservationNode)
throw new IllegalStateException("Recursive update");
}
}
if (binCount != 0) {
if (binCount >= TREEIFY_THRESHOLD)
treeifyBin(tab, i);
break;
}
}
}
if (delta != 0)
addCount((long)delta, binCount);
return val;
}
// Hashtable legacy methods
/**
* Tests if some key maps into the specified value in this table.
*
* <p>Note that this method is identical in functionality to
* {@link #containsValue(Object)}, and exists solely to ensure
* full compatibility with class {@link java.util.Hashtable},
* which supported this method prior to introduction of the
* Java Collections Framework.
*
* @param value a value to search for
* @return {@code true} if and only if some key maps to the
* {@code value} argument in this table as
* determined by the {@code equals} method;
* {@code false} otherwise
* @throws NullPointerException if the specified value is null
*/
public boolean contains(Object value) {
return containsValue(value);
}
/**
* Returns an enumeration of the keys in this table.
*
* @return an enumeration of the keys in this table
* @see #keySet()
*/
public Enumeration<K> keys() {
Node<K,V>[] t;
int f = (t = table) == null ? 0 : t.length;
return new KeyIterator<K,V>(t, f, 0, f, this);
}
/**
* Returns an enumeration of the values in this table.
*
* @return an enumeration of the values in this table
* @see #values()
*/
public Enumeration<V> elements() {
Node<K,V>[] t;
int f = (t = table) == null ? 0 : t.length;
return new ValueIterator<K,V>(t, f, 0, f, this);
}
// ConcurrentHashMap-only methods
/**
* Returns the number of mappings. This method should be used
* instead of {@link #size} because a ConcurrentHashMap may
* contain more mappings than can be represented as an int. The
* value returned is an estimate; the actual count may differ if
* there are concurrent insertions or removals.
*
* @return the number of mappings
* @since 1.8
*/
public long mappingCount() {
long n = sumCount();
return (n < 0L) ? 0L : n; // ignore transient negative values
}
/**
* Creates a new {@link Set} backed by a ConcurrentHashMap
* from the given type to {@code Boolean.TRUE}.
*
* @param <K> the element type of the returned set
* @return the new set
* @since 1.8
*/
public static <K> KeySetView<K,Boolean> newKeySet() {
return new KeySetView<K,Boolean>
(new ConcurrentHashMap<K,Boolean>(), Boolean.TRUE);
}
/**
* Creates a new {@link Set} backed by a ConcurrentHashMap
* from the given type to {@code Boolean.TRUE}.
*
* @param initialCapacity The implementation performs internal
* sizing to accommodate this many elements.
* @param <K> the element type of the returned set
* @return the new set
* @throws IllegalArgumentException if the initial capacity of
* elements is negative
* @since 1.8
*/
public static <K> KeySetView<K,Boolean> newKeySet(int initialCapacity) {
return new KeySetView<K,Boolean>
(new ConcurrentHashMap<K,Boolean>(initialCapacity), Boolean.TRUE);
}
/**
* Returns a {@link Set} view of the keys in this map, using the
* given common mapped value for any additions (i.e., {@link
* Collection#add} and {@link Collection#addAll(Collection)}).
* This is of course only appropriate if it is acceptable to use
* the same value for all additions from this view.
*
* @param mappedValue the mapped value to use for any additions
* @return the set view
* @throws NullPointerException if the mappedValue is null
*/
public KeySetView<K,V> keySet(V mappedValue) {
if (mappedValue == null)
throw new NullPointerException();
return new KeySetView<K,V>(this, mappedValue);
}