Code review checklist: how to tackle issues with Java concurrency

by Roman Leventov

At the Apache Druid community, we are currently preparing a detailed checklist to be used during code reviews. I decided to publish parts of the checklist as posts on Medium to gather more ideas for checklist items. Hopefully, somebody will find it useful in practice.

By the way, it seems to me that creating project-specific checklists for code reviews should be a powerful idea, yet I don't see any existing examples among large open source projects.

This post contains checklist items about problems that arise with the multithreaded Java code.

Thanks to Marko Topolnik, Matko Medenjak, Chris Vest, Simon Willnauer, Ben Manes, Gleb Smirnov, Andrey Satarin, Benedict Jin, and Petr Janeček for reviews and contributions to this post. The checklist is not considered complete, comments and suggestions are welcome!

Update: this checklist is now available on Github.

1. Design

1.1. If the patch introduces a new subsystem with concurrent code, is the necessity for concurrency rationalized in the patch description? Is there a discussion of alternative design approaches that could simplify the concurrency model of the code (see the next item)?

1.2. Is it possible to apply one or several design patterns (some of them are listed below) to significantly simplify the concurrency model of the code, while not considerably compromising other quality aspects, such as overall simplicity, efficiency, testability, extensibility, etc?

Immutability/Snapshotting. When some state should be updated, a new immutable object (or a snapshot within a mutable object) is created, published and used, while some concurrent threads may still use older copies or snapshots. See [EJ Item 17], [JCIP 3.4], items 4.5 and 9.2 in this checklist, CopyOnWriteArrayList, CopyOnWriteArraySet, persistent data structures.

Divide and conquer. Work is split into several parts that are processed independently, each part in a single thread. Then the results of processing are combined. Parallel Streams (see section 14) or ForkJoinPool (see items 10.4 and 10.5) can be used to apply this pattern.

Producer-consumer. Pieces of work are transmitted between worker threads via queues. See [JCIP 5.3], item 6.1 in this checklist, CSP, SEDA.
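
For illustration, here is a minimal sketch of the producer-consumer pattern built on a bounded BlockingQueue from java.util.concurrent; the class name, the queue capacity and the use of Runnable as the unit of work are assumptions made for this example:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

class ProducerConsumerSketch {
  // A bounded queue provides backpressure: producers block when consumers fall behind.
  private final BlockingQueue<Runnable> queue = new ArrayBlockingQueue<>(1024);

  void produce(Runnable task) throws InterruptedException {
    queue.put(task); // blocks while the queue is full
  }

  void startConsumer() {
    Thread consumer = new Thread(() -> {
      try {
        while (!Thread.currentThread().isInterrupted()) {
          queue.take().run(); // blocks until a task is available
        }
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt(); // restore the interruption status and exit
      }
    }, "example-consumer");
    consumer.start();
  }
}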

Instance confinement. Objects of some root type encapsulate some complex hierarchical child state. Root objects are solitarily responsible for the safety of accesses and modifications to the child state from multiple threads. In other words, composed objects are synchronized rather than synchronized objects are composed. See [JCIP 4.2, 10.1.3, 10.1.4].

Thread/Task/Serial thread confinement. Some state is made local to a thread using top-down pass-through parameters or ThreadLocal. See [JCIP 3.3]. Task confinement is a variation of the idea of thread confinement that is used in conjunction with the divide-and-conquer pattern. It usually comes in the form of lambda-captured “context” parameters or fields in the per-thread task objects. Serial thread confinement is an extension of the idea of thread confinement for the producer-consumer pattern, see [JCIP 5.3.2].

2. Documentation

2.1. For every class, method, and field that has signs of being thread-safe, such as the synchronized keyword, volatile modifiers on fields, use of any classes from java.util.concurrent.*, or third-party concurrency primitives, or concurrent collections: do their Javadoc comments include

  • The justification for thread safety: is it explained why a particular class, method or field has to be thread-safe?

  • Concurrent control flow documentation: is it enumerated from what methods and in contexts of what threads (executors, thread pools) each specific method of a thread-safe class is called?

Wherever some logic is parallelized or the execution is delegated to another thread, are there comments explaining why it’s worse or inappropriate to execute the logic sequentially or in the same thread? See also item 14.1 in this checklist about parallel Stream use.

2.2. If the patch introduces a new subsystem that uses threads or thread pools, are there high-level descriptions of the threading model, the concurrent control flow (or the data flow) of the subsystem somewhere, e. g. in the Javadoc comment for the package in package-info.java or for the main class of the subsystem? Are these descriptions kept up-to-date when new threads or thread pools are added or some old ones deleted from the system?

Description of the threading model includes the enumeration of threads and thread pools created and managed in the subsystem, and external pools used in the subsystem (such as ForkJoinPool.commonPool()), their sizes and other important characteristics such as thread priorities, and the lifecycle of the managed threads and thread pools.

A high-level description of concurrent control flow should be an overview and tie together concurrent control flow documentation for individual classes, see the previous item. If the producer-consumer pattern is used, the concurrent control flow is trivial and the data flow should be documented instead.

Describing threading models and control/data flow greatly improves the maintainability of the system, because in the absence of descriptions or diagrams developers spend a lot of time and effort to create and refresh these models in their minds. Putting the models down also helps to discover bottlenecks and the ways to simplify the design (see item 1.2).

2.3. For classes and methods that are parts of the public API or the extensions API of the project: is it specified in their Javadoc comments whether they are (or in case of interfaces and abstract classes designed for subclassing in extensions, should they be implemented as) immutable, thread-safe or not thread-safe? For classes and methods that are (or should be implemented as) thread-safe, is it documented precisely with what other methods (or themselves) they may be called concurrently from multiple threads? See also [EJ Item 82] and [JCIP 4.5].

If the @com.google.errorprone.annotations.Immutable annotation is used to mark immutable classes, Error Prone static analysis tool is capable to detect when a class is not actually immutable (see the relevant bug pattern).

如果将@com.google.errorprone.annotations.Immutable批注用于标记不可变的类,则易错静态分析工具能够检测出何时类实际上不是不可变的(请参见相关的错误模式 )。

2.4. For subsystems, classes, methods, and fields that use some concurrency design patterns, either high-level (such as those mentioned in item 1.2 in this checklist) or low-level (such as double-checked locking, see section 8 in this checklist): are the used concurrency patterns pronounced in the design or implementation comments for the respective subsystems, classes, methods, and fields? This helps readers to make sense out of the code quicker.

2.4。 对于使用某些并发设计模式的子系统,类,方法和字段,高级(例如,此检查清单中第1.2条中提到的)或低级别(例如双重检查的锁定),请参见此清单中的第8节):在相应子系统,类,方法和字段的设计或实现注释中 ,是否声明了已使用的并发模式 ? 这有助于读者更快地理解代码。

2.5. Are ConcurrentHashMap and ConcurrentSkipListMap objects stored in fields and variables of ConcurrentHashMap or ConcurrentSkipListMap or ConcurrentMap type, but not just Map?

2.5。 ConcurrentHashMapConcurrentSkipListMap对象是否存储在ConcurrentHashMapConcurrentSkipListMapConcurrentMap 类型的字段和变量中,而不仅仅是Map

This is important, because in code like the following:

ConcurrentMap<String, Entity> entities = getEntities();
if (!entities.containsKey(key)) {
  entities.put(key, entity);
} else {
  ...
}

It should be pretty obvious that there might be a race condition because an entity may be put into the map by a concurrent thread between the calls to containsKey() and put() (see item 4.1 about this type of race conditions). While if the type of the entities variable was just Map<String, Entity> it would be less obvious and readers might think this is only slightly suboptimal code and pass by.
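
A race-free version of the snippet above relies on a single atomic map operation instead of the containsKey()/put() pair; a sketch reusing getEntities(), key and entity from the example (putIfAbsent() is one option, computeIfAbsent() is another when the value is expensive to create):

ConcurrentMap<String, Entity> entities = getEntities();
Entity previous = entities.putIfAbsent(key, entity);
if (previous != null) {
  // The key was already mapped by this or another thread; handle the "else" branch here.
}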

It’s possible to turn this advice into an inspection in IntelliJ IDEA.

2.6. An extension of the previous item: are ConcurrentHashMaps on which compute(), computeIfAbsent(), computeIfPresent(), or merge() methods are called stored in fields and variables of ConcurrentHashMap type rather than ConcurrentMap? This is because ConcurrentHashMap (unlike the generic ConcurrentMap interface) guarantees that the lambdas passed into compute()-like methods are performed atomically per key, and the thread safety of the class may depend on that guarantee.

This advice may seem to be overly pedantic, but if used in conjunction with a static analysis rule that prohibits calling compute()-like methods on ConcurrentMap-typed objects that are not ConcurrentHashMaps (it's possible to create such an inspection in IntelliJ IDEA too) it could prevent some bugs: e. g. calling compute() on a ConcurrentSkipListMap might be a race condition and it's easy to overlook that for somebody who is used to relying on the strong semantics of compute() in ConcurrentHashMap.

2.7. Is @GuardedBy annotation used? If accesses to some fields should be protected by some lock, are those fields annotated with @GuardedBy? Are private methods that are called from within critical sections in other methods annotated with @GuardedBy? If the project doesn’t depend on any library containing this annotation (it’s provided by jcip-annotations, error_prone_annotations, jsr305 and other libraries) and for some reason it’s undesirable to add such dependency, it should be mentioned in Javadoc comments for the respective fields and methods that accesses and calls to them should be protected by some specified locks.

See [JCIP 2.4] for more information about @GuardedBy.

Using @GuardedBy is especially beneficial together with Error Prone, which is able to statically check for unguarded accesses to fields and methods annotated with @GuardedBy.
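
As an illustration, a minimal sketch of @GuardedBy usage; the class, lock and field names are invented, and the annotation used here is the one from error_prone_annotations:

import com.google.errorprone.annotations.concurrent.GuardedBy;

class Totals {
  private final Object lock = new Object();

  @GuardedBy("lock")
  private long total;

  void add(long delta) {
    synchronized (lock) {
      addLocked(delta);
    }
  }

  @GuardedBy("lock") // only called from within critical sections on 'lock'
  private void addLocked(long delta) {
    total += delta;
  }
}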

2.8. If in a thread-safe class some fields are accessed both from within critical sections and outside of critical sections, is it explained in comments why this is safe? For example, unprotected read-only access to a reference to an immutable object might be benignly racy (see item 4.5).

2.9. Regarding every field with a volatile modifier: does it really need to be volatile? Does the Javadoc comment for the field explain why the semantics of volatile field reads and writes (as defined in the Java Memory Model) are required for the field?

2.10. Is it explained in the Javadoc comment for each mutable field in a thread-safe class that is neither volatile nor annotated with @GuardedBy, why that is safe? Perhaps, the field is only accessed and mutated from a single method or a set of methods that are specified to be called only from a single thread sequentially (as described in item 2.1). This recommendation also applies to final fields that store objects of non-thread-safe classes when those objects could be mutated from the methods of the enclosing thread-safe class. See items 4.2–4.4 in this checklist about what could go wrong with such code.

3. Excessive thread safety

3.1. An example of excessive thread safety is a class where every modifiable field is volatile or an AtomicReference or other atomic, and every collection field stores a concurrent collection (e. g. ConcurrentHashMap), although all accesses to those fields are synchronized.

There shouldn’t be any “extra” thread safety in code, there should be just enough of it. Duplication of thread safety confuses readers because they might think the extra thread safety precautions are (or used to be) needed for something but will fail to find the purpose.

The exception from this principle is the volatile modifier on the lazily initialized field in the safe local double-checked locking pattern which is the recommended way to implement double-checked locking, despite that volatile is excessive for correctness when the lazily initialized object has all final fields*. Without that volatile modifier the thread safety of the double-checked locking could easily be broken by a change (addition of a non-final field) in the class of lazily initialized objects, though that class should not be aware of subtle concurrency implications. If the class of lazily initialized objects is specified to be immutable (see item 2.3) the volatile is still unnecessary and the UnsafeLocalDCL pattern could be used safely, but the fact that some class has all final fields doesn’t necessarily mean that it’s immutable.

See also section 8 in this post about double-checked locking.

3.2. Aren’t there AtomicReference, AtomicBoolean, AtomicInteger or AtomicLong fields on which only get() and set() methods are called? Simple fields with volatile modifiers can be used instead, but volatile might not be needed too; see item 2.9.
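
For example, in a hypothetical class where a flag is only ever read and written (never updated atomically), the atomic wrapper adds nothing over a plain volatile field; a sketch:

import java.util.concurrent.atomic.AtomicBoolean;

class Lifecycle {
  // Excessive if only get() and set() are ever called on it:
  private final AtomicBoolean closedAtomic = new AtomicBoolean();

  // Sufficient for plain reads and writes (and even volatile may be unneeded, see item 2.9):
  private volatile boolean closed;

  void close() { closed = true; }

  boolean isClosed() { return closed; }
}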

4. Race conditions

4.1. Aren’t ConcurrentHashMaps updated with multiple separate containsKey(), get(), put() and remove() calls instead of a single call to compute()/computeIfAbsent()/computeIfPresent()/replace()?

4.2. Aren’t there point read accesses such as Map.get(), containsKey() or List.get() outside of critical sections to a non-thread-safe collection such as HashMap or ArrayList, while new entries can be added to the collection concurrently, even though there is a happens-before edge between the moment when some entry is put into the collection and the moment when the same entry is point-queried outside of a critical section?

The problem is that when new entries can be added to a collection, it grows and changes its internal structure from time to time (HashMap rehashes the hash table, ArrayList reallocates the internal array). At such moments races might happen and unprotected point read accesses might fail with NullPointerException, ArrayIndexOutOfBoundsException, or return null or some random entry.

Note that this concern applies to ArrayList even when elements are only added to the end of the list. However, a small change in ArrayList’s implementation in OpenJDK could have disallowed data races in such cases at very little cost. If you are subscribed to the concurrency-interest mailing list, you could help to bring attention to this problem by reviving this thread.

4.3. A variation of the previous item: isn’t a non-thread-safe collection such as HashMap or ArrayList iterated outside of a critical section, while it can be modified concurrently? This could happen by accident when an Iterable, Iterator or Stream over a collection is returned from a method of a thread-safe class, even though the iterator or stream is created within a critical section.

Like the previous item, this one applies to growing ArrayLists too.

4.4. More generally, aren’t non-trivial objects that can be mutated concurrently returned from getters on a thread-safe class?

4.5. If there are multiple variables in a thread-safe class that are updated at once but have individual getters, isn’t there a race condition in the code that calls those getters? If there is, the variables should be made final fields in a dedicated POJO, that serves as a snapshot of the updated state. The POJO is stored in a field of the thread-safe class, directly or as an AtomicReference. Multiple getters to individual fields should be replaced with a single getter that returns the POJO. This allows avoiding a race condition in the client code by reading a consistent snapshot of the state at once.

This pattern is also very useful for crafting safe and reasonably simple non-blocking code: see item 9.2 in this checklist and [JCIP 15.3.1].
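
A minimal sketch of the snapshot approach, with invented names: two values that are always updated together are exposed through a single getter returning an immutable POJO rather than through two racy getters:

class ConnectionStats {
  // Immutable snapshot of values that are always updated together.
  static final class Snapshot {
    final long bytesSent;
    final long bytesReceived;

    Snapshot(long bytesSent, long bytesReceived) {
      this.bytesSent = bytesSent;
      this.bytesReceived = bytesReceived;
    }
  }

  private volatile Snapshot snapshot = new Snapshot(0, 0);

  synchronized void record(long sent, long received) {
    Snapshot s = snapshot;
    snapshot = new Snapshot(s.bytesSent + sent, s.bytesReceived + received);
  }

  // A single getter hands out a consistent pair; separate getters for the two counters would be racy.
  Snapshot snapshot() {
    return snapshot;
  }
}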

4.6. If some logic within some critical section depends on some data that principally is part of the internal mutable state of the class, but was read outside of the critical section or in a different critical section, isn’t there a race condition because the local copy of the data may become out of sync with the internal state by the time when the critical section is entered? This is a typical variant of check-then-act race condition, see [JCIP 2.2.1].

4.7. Aren’t there race conditions between the code (i. e. program runtime actions) and some actions in the outside world or actions performed by some other programs running on the machine? For example, if some configurations or credentials are hot reloaded from some file or external registry, reading separate configuration parameters or separate credentials (such as username and password) in separate transactions with the file or the registry may be racing with a system operator updating those configurations or credentials.

Another example is checking that a file exists (or doesn't exist) and then reading, deleting, or creating it, respectively, while another program or a user may delete or create the file between the check and the act. It's not always possible to cope with such race conditions, but it's useful to keep such possibilities in mind. Prefer static methods from the java.nio.file.Files class instead of methods from the old java.io.File for file system operations. Methods from Files are more sensitive to file system race conditions and tend to throw exceptions in adverse cases, while methods on File swallow errors and make it hard even to detect race conditions.
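
For instance, instead of checking Files.exists() and then creating the file (a check-then-act race with the outside world), the creation itself can be attempted and the failure handled; a sketch with an invented method name:

import java.io.IOException;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.Path;

class MarkerFiles {
  static void createMarkerFile(Path path) throws IOException {
    try {
      Files.createFile(path); // fails atomically if the file already exists
    } catch (FileAlreadyExistsException e) {
      // Another program or thread created the file between our intent and the call.
    }
  }
}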

5. Replacing locks with concurrency utilities

5.1. Is it possible to use concurrent collections and/or utilities from java.util.concurrent.* and avoid using locks with Object.wait()/notify()/notifyAll()? Code redesigned around concurrent collections and utilities is often both clearer and less error-prone than code implementing the equivalent logic with intrinsic locks, Object.wait() and notify() (Lock objects with await() and signal() are not different in this regard). See [EJ Item 81] for more information.

5.2. Is it possible to simplify code that uses intrinsic locks or Lock objects with conditional waits by using Guava’s Monitor instead?

6. Avoiding deadlocks

6.1. If a thread-safe class is implemented so that there are nested critical sections protected by different locks, is it possible to redesign the code to get rid of nested critical sections? Sometimes a class could be split into several distinct classes, or some work that is done within a single thread could be split between several threads or tasks which communicate via concurrent queues. See [JCIP 5.3] for more information about the producer-consumer pattern.

6.2. If restructuring a thread-safe class to avoid nested critical sections is not reasonable, was it deliberately checked that the locks are acquired in the same order throughout the code of the class? Is the locking order documented in the Javadoc comments for the fields where the lock objects are stored?

6.3. If there are nested critical sections protected by several (potentially different) dynamically determined locks (for example, associated with some business logic entities), are the locks ordered before the acquisition? See [JCIP 10.1.2] for more information.

6.4. Aren’t there calls to some callbacks (listeners, etc.) that can be configured through public API or extension interface calls within critical sections of a class? With such calls, the system might be inherently prone to deadlocks because the external logic executed within a critical section might be unaware of the locking considerations and call back into the logic of the project, where some more locks may be acquired, potentially forming a locking cycle that might lead to deadlock. Let alone the external logic could just perform some time-consuming operation and by that harm the efficiency of the system (see the next item). See [JCIP 10.1.3] and [EJ Item 79] for more information.

7. Improving scalability

7.1. Are critical sections as small as possible? For every critical section: can't some statements in the beginning and the end of the section be moved out of it? Minimizing critical sections not only improves scalability, but also makes it easier to review them and spot race conditions and deadlocks.

This advice equally applies to lambdas passed into ConcurrentHashMap’s compute()-like methods.

See also [JCIP 11.4.1] and [EJ Item 79].

7.2. Is it possible to increase locking granularity? If a thread-safe class encapsulates accesses to map, is it possible to turn critical sections into lambdas passed into ConcurrentHashMap.compute() or computeIfAbsent() or computeIfPresent() methods to enjoy effective per-key locking granularity? Otherwise, is it possible to use Guava’s Striped or an equivalent? See [JCIP 11.4.3] for more information about lock striping.
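
A sketch of per-key granularity with ConcurrentHashMap.compute(); the counter semantics are invented for the example:

import java.util.concurrent.ConcurrentHashMap;

class PerKeyCounters {
  private final ConcurrentHashMap<String, Long> counts = new ConcurrentHashMap<>();

  void increment(String key) {
    // The remapping function runs atomically per key, so no class-wide lock is needed.
    counts.compute(key, (k, v) -> v == null ? 1L : v + 1L);
  }
}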

7.3. Is it possible to use non-blocking collections instead of blocking ones? Here are some possible replacements within JDK:

  • Collections.synchronizedMap(HashMap), Hashtable → ConcurrentHashMap

  • Collections.synchronizedSet(HashSet) → ConcurrentHashMap.newKeySet()

  • Collections.synchronizedMap(TreeMap) → ConcurrentSkipListMap. By the way, ConcurrentSkipListMap is not the state of the art concurrent sorted dictionary implementation. SnapTree is more efficient than ConcurrentSkipListMap and there have been some research papers presenting algorithms that are claimed to be more efficient than SnapTree.

  • Collections.synchronizedSet(TreeSet) → ConcurrentSkipListSet

  • Collections.synchronizedList(ArrayList), Vector → CopyOnWriteArrayList

  • LinkedBlockingQueue → ConcurrentLinkedQueue

  • LinkedBlockingDeque → ConcurrentLinkedDeque

Was it considered to use one of the array-based queues from the JCTools library instead of ArrayBlockingQueue? Those queues from JCTools are classified as blocking, but they avoid lock acquisition in many cases and are generally much faster than ArrayBlockingQueue.

7.4. Is it possible to use ClassValue instead of ConcurrentHashMap<Class, ...>? Note, however, that unlike ConcurrentHashMap with its computeIfAbsent() method ClassValue doesn’t guarantee that per-class value is computed only once, i. e. ClassValue.computeValue() might be executed by multiple concurrent threads. So if the computation inside computeValue() is not thread-safe, it should be synchronized separately. On the other hand, ClassValue does guarantee that the same value is always returned from ClassValue.get() (unless remove() is called).
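
A sketch of per-class caching with ClassValue; the cached value (the number of declared fields) is chosen arbitrarily for the example:

class DeclaredFieldCounts {
  private static final ClassValue<Integer> FIELD_COUNT = new ClassValue<Integer>() {
    @Override
    protected Integer computeValue(Class<?> type) {
      // May run in several threads for the same class, but get() below always
      // returns the same winning value for a given class.
      return type.getDeclaredFields().length;
    }
  };

  static int fieldCount(Class<?> type) {
    return FIELD_COUNT.get(type);
  }
}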

7.5. Was it considered to replace a simple lock with a ReadWriteLock? Beware, however, that it’s more expensive to acquire and release a ReentrantReadWriteLock than a simple intrinsic lock, so the increase in scalability comes at the cost of reduced throughput. If the operations to be performed under a lock are short, or if a lock is already striped (see item 7.2) and therefore very lightly contended, replacing a simple lock with a ReadWriteLock might have a net negative effect on the application performance. See this comment for more details.

7.6. Is it possible to use a StampedLock instead of a ReentrantReadWriteLock when reentrancy is not needed?

7.7. Is it possible to use LongAdder for “hot fields” (see [JCIP 11.4.4]) instead of AtomicLong or AtomicInteger on which only methods like incrementAndGet(), decrementAndGet(), addAndGet() and (rarely) get() are called, but not set() and compareAndSet()?
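
A sketch of a “hot” counter kept in a LongAdder; the class and method names are invented:

import java.util.concurrent.atomic.LongAdder;

class RequestCounter {
  private final LongAdder requests = new LongAdder();

  void onRequest() {
    requests.increment(); // scales better under contention than AtomicLong.incrementAndGet()
  }

  long total() {
    return requests.sum(); // a statistical snapshot, not an atomic read
  }
}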

8. Lazy initialization and double-checked locking

8.1. For every lazily initialized field: is the initialization code thread-safe and might it be called from multiple threads concurrently? If the answers are “no” and “yes”, either double-checked locking should be used or the initialization should be eager.

8.2. If a field is initialized lazily under a simple lock, is it possible to use double-checked locking instead to improve performance?

8.3. Does double-checked locking follow the SafeLocalDCL pattern, as noted in item 3.1 in this checklist?
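
For reference, a sketch of the safe local double-checked locking shape this item refers to; MyClass stands in for the lazily initialized type, as in the example below:

class Holder {
  private volatile MyClass lazilyInitializedField;

  MyClass get() {
    MyClass local = lazilyInitializedField; // one volatile read on the fast path
    if (local == null) {
      synchronized (this) {
        local = lazilyInitializedField;
        if (local == null) {
          local = new MyClass();
          lazilyInitializedField = local;
        }
      }
    }
    return local;
  }
}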

If the initialized objects are immutable a more efficient UnsafeLocalDCL pattern might also be used. However, if the lazily-initialized field is not volatile and there are accesses to the field that bypass the initialization path, the value of the field must be carefully cached in a local variable. For example, the following code is buggy:

private MyClass lazilyInitializedField;

void foo() {
  if (lazilyInitializedField != null) { // (1)
    // Can throw NPE!
    lazilyInitializedField.bar();       // (2)
  }
}

It might result in a NullPointerException, because although a non-null value is observed when the field is read the first time at line 1, the second read at line 2 could observe null.

The above code could be fixed as follows:

void foo() {
  MyClass lazilyInitialized = this.lazilyInitializedField;
  if (lazilyInitialized != null) {
    lazilyInitialized.bar();
  }
}

See “Wishful Thinking: Happens-Before Is The Actual Ordering” for more information.

8.4. In each particular case, doesn't the net impact of double-checked locking and lazy field initialization on performance and complexity outweigh the benefits of lazy initialization? Isn't it ultimately better to initialize the field eagerly?

8.5. If a field is initialized lazily under a simple lock or using double-checked locking, does it really need locking? If nothing bad may happen if two threads do the initialization at the same time and use different copies of the initialized state, a benign race could be allowed. The initialized field should still be volatile (unless the initialized objects are immutable) to ensure there is a happens-before edge between threads doing the initialization and reading the field.
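
A sketch of such racy single-check initialization, assuming the computed value is immutable and the computation is idempotent and cheap enough to be repeated occasionally:

class RacyCache {
  private volatile String cached; // volatile gives readers a happens-before edge

  String get() {
    String local = cached;
    if (local == null) {
      local = computeValue(); // several threads may compute concurrently; that is acceptable here
      cached = local;         // last writer wins, all computed copies are equivalent
    }
    return local;
  }

  private String computeValue() {
    return "idempotent-result"; // placeholder for the real computation
  }
}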

See also [EJ Item 83] and “Safe Publication this and Safe Initialization in Java”.

9. Non-blocking and partially blocking code

9.1. If there is some non-blocking or semi-symmetrically blocking code that mutates the state of a thread-safe class, was it deliberately checked that if a thread on a non-blocking mutation path is preempted after each statement, the object is still in a valid state? Are there enough comments, perhaps before almost every statement where the state is changed, to make it relatively easy for readers of the code to repeat and verify the check?

9.2. Is it possible to simplify some non-blocking code by confining all mutable state in an immutable POJO and update it via compare-and-swap operations? This pattern is also mentioned in item 4.5. Instead of a POJO, a single long value could be used if all parts of the state are integers that can together fit 64 bits. See also [JCIP 15.3.1].

10. Threads and Executors

10.1. Are Threads given names when created? Are ExecutorServices created with thread factories that name threads?

It appears that different projects have different policies regarding other aspects of Thread creation: whether to make them daemon threads with setDaemon(), whether to set thread priorities and whether a ThreadGroup should be specified. Many such rules can be effectively enforced with forbidden-apis.
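
A sketch of naming threads directly and through a ThreadFactory; the name prefix and pool size are arbitrary:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

class NamedThreads {
  static ThreadFactory namedFactory(String prefix) {
    AtomicInteger index = new AtomicInteger();
    return runnable -> new Thread(runnable, prefix + "-" + index.incrementAndGet());
  }

  static ExecutorService newWorkerPool() {
    // Names like "worker-1", "worker-2" make thread dumps and logs much easier to read.
    return Executors.newFixedThreadPool(4, namedFactory("worker"));
  }
}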

10.2. Aren’t there threads created and started, but not stored in fields, a-la new Thread(...).start(), in some methods that may be called repeatedly? Is it possible to delegate the work to a cached or a shared ExecutorService instead?

10.3. Aren’t some network I/O operations performed in an Executors.newCachedThreadPool()-created ExecutorService? If a machine that runs the application has network problems or the network bandwidth is exhausted due to increased load, CachedThreadPools that perform network I/O might begin to create new threads uncontrollably.

10.4. Aren’t there blocking or I/O operations performed in tasks scheduled to a ForkJoinPool (except those performed via a managedBlock() call)? Parallel Stream operations are executed in the common ForkJoinPool implicitly, as well as the lambdas passed into CompletableFuture’s methods whose names end with “Async”.

This advice should not be taken too far: occasional transient IO (such as that may happen during logging) and operations that may rarely block (such as ConcurrentHashMap.put() calls) usually shouldn’t disqualify all their callers from execution in a ForkJoinPool or in a parallel Stream. See Parallel Stream Guidance for the more detailed discussion of those tradeoffs.

See also section 14 in this checklist about parallel Streams.

10.5. Opposite of the previous item: can non-blocking computations be parallelized or executed asynchronously by submitting tasks to ForkJoinPool.commonPool() or via parallel Streams instead of using a custom thread pool (e. g. created by one of the static factory methods of Executors)? Unless the custom thread pool is configured with a ThreadFactory that specifies a non-default priority for threads or a custom exception handler (see item 10.1) there is little reason to create more threads in the system instead of reusing threads of the common ForkJoinPool.

11. Thread interruption and Future cancellation

11.1. If some code propagates InterruptedException wrapped into another exception (e. g. RuntimeException), is the interruption status of the current thread restored before the wrapping exception is thrown?

Propagating InterruptedException wrapped into another exception is a controversial practice (especially in libraries) and it may be prohibited in some projects completely, or in specific subsystems.
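
A sketch of restoring the interruption status before wrapping; the queue-backed sender is invented, and plain RuntimeException is used as the wrapping exception:

import java.util.concurrent.BlockingQueue;

class QueueSender {
  private final BlockingQueue<String> queue;

  QueueSender(BlockingQueue<String> queue) {
    this.queue = queue;
  }

  void send(String message) {
    try {
      queue.put(message);
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt(); // restore the status before throwing the wrapper
      throw new RuntimeException("Interrupted while enqueueing a message", e);
    }
  }
}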

InterruptedException传播到另一个异常中是有争议的做法(尤其是在库中),在某些项目中或在特定子系统中可能完全禁止这样做。

11.2. If some method returns normally after catching an InterruptedException, is this coherent with the (documented) semantics of the method? Returning normally after catching an InterruptedException usually makes sense only in two types of methods:

  • Runnable.run() or Callable.call() themselves, or methods that are intended to be submitted as tasks to some Executors as method references. Thread.currentThread().interrupt() should still be called before returning from the method, assuming that the interruption policy of the threads in the Executor is unknown.

    Runnable.run()Callable.call()本身,或打算作为任务提交给某些执行器的方法,作为方法引用。 假定Executor线程的中断策略未知,则在从方法返回之前,仍应调用Thread.currentThread().interrupt()

  • Methods with “try” or “best effort” semantics. Documentation for such methods should be clear that they stop attempting to do something when the thread is interrupted, restore the interruption status of the thread and return.

If a method doesn’t fall into either of these categories, it should propagate InterruptedException directly or wrapped into another exception (see the previous item), or it should not return normally after catching an InterruptedException, but rather continue execution in some sort of retry loop, saving the interruption status and restoring it before returning (see an example from JCIP). Fortunately, in most situations, it’s not needed to write such boilerplate code: one of the methods from Uninterruptibles utility class from Guava can be used.

11.3. If an InterruptedException or a TimeoutException is caught on a Future.get() call and the task behind the future doesn’t have side effects, i. e. get() is called only to obtain and use the result in the context of the current thread rather than achieve some side effect, is the future canceled?

See [JCIP 7.1] for more information about thread interruption and task cancellation.

12. Time

12.1. Are values returned from System.nanoTime() compared in an overflow-aware manner, as described in the documentation for this method?
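
A sketch of an overflow-aware deadline check: compare differences of nanoTime() values, never the values themselves (the Deadline class is invented for the example):

class Deadline {
  private final long deadlineNanos;

  Deadline(long timeoutNanos) {
    this.deadlineNanos = System.nanoTime() + timeoutNanos;
  }

  boolean isExpired() {
    // Correct even across numeric overflow of nanoTime().
    return System.nanoTime() - deadlineNanos >= 0;
    // Incorrect variant for contrast: System.nanoTime() >= deadlineNanos
  }
}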

12.2. Does the code that compares values returned from System.currentTimeMillis() have precautions against “time going backward”? This might happen due to time correction on a server. Values that are returned from currentTimeMillis() that are less than some other values that have already been seen should be ignored. Otherwise, there should be comments explaining why this issue is not relevant for the code.

Alternatively, System.nanoTime() could be used instead of currentTimeMillis(). Values returned from nanoTime() never decrease (but may overflow — see the previous item). Warning: nanoTime() didn't always uphold this guarantee in OpenJDK until 8u192 (see JDK-8184271). Make sure to use the freshest distribution.

In distributed systems, the leap second adjustment causes similar issues.

12.3. Do variables that store time limits and periods have suffixes identifying their units, for example, “timeoutMillis” (also -Seconds, -Micros, -Nanos) rather than just “timeout”? In method and constructor parameters, an alternative is providing a TimeUnit parameter next to a “timeout” parameter. This is the preferred option for public APIs.

12.4. Do methods that have “timeout” and “delay” parameters treat negative arguments as zeros? This is to obey the principle of least astonishment because all timed blocking methods in classes from java.util.concurrent.* follow this convention.

13. Thread safety of Cleaners and native code

13.1. If a class manages native resources and employs java.lang.ref.Cleaner (or sun.misc.Cleaner; or overrides Object.finalize()) to ensure that resources are freed when objects of the class are garbage collected, and the class implements Closeable with the same cleanup logic executed from close() directly rather than through Cleanable.clean() (or sun.misc.Cleaner.clean()) to be able to distinguish between explicit close() and cleanup through a cleaner (for example, clean() can log a warning about the object not being closed explicitly before freeing the resources), is it ensured that even if the cleanup logic is called concurrently from multiple threads, the actual cleanup is performed only once? The cleanup logic in such classes should obviously be idempotent because it’s usually expected to be called twice: the first time from the close() method and the second time from the cleaner or finalize(). The catch is that the cleanup must be concurrently idempotent, even if close() is never called concurrently on objects of the class. That’s because the garbage collector may consider the object to become unreachable before the end of a close() call and initiate cleanup through the cleaner or finalize() while close() is still being executed.

Alternatively, close() could simply delegate to Cleanable.clean() (sun.misc.Cleaner.clean()) which is thread-safe. But then it’s impossible to distinguish between explicit and automatic cleanup.

See also JLS 12.6.2.

13.2. In a class with some native state that has a cleaner or overrides finalize(), are bodies of all methods that interact with the native state wrapped with try { ... } finally { Reference.reachabilityFence(this); }, including constructors and the close() method, but excluding finalize()? This is needed because an object could become unreachable and the native memory might be freed from the cleaner while the method that interacts with the native state is being executed, which might lead to use-after-free or JVM memory corruption.
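
A sketch of that shape, assuming JDK 9+; NativeBuffer, its address field, allocateNative(), readNative() and freeNative() are placeholders invented for the example, and the concurrent idempotence of the cleanup discussed in item 13.1 is not shown:

import java.lang.ref.Reference;

class NativeBuffer implements AutoCloseable {
  private final long address = allocateNative(); // handle to native memory, freed by a Cleaner when unreachable

  long read(long offset) {
    try {
      return readNative(address + offset);
    } finally {
      // Keep 'this' reachable until the native access completes, so the cleaner
      // cannot free the memory in the middle of the call.
      Reference.reachabilityFence(this);
    }
  }

  @Override
  public void close() {
    try {
      freeNative(address);
    } finally {
      Reference.reachabilityFence(this);
    }
  }

  private static long allocateNative() { return 0; }      // placeholder
  private static long readNative(long addr) { return 0; } // placeholder
  private static void freeNative(long addr) { }           // placeholder
}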

reachabilityFence() in close() also eliminates the race between close() and the cleanup executed through the cleaner or finalize() (see the previous item), but it may be a good idea to retain the thread safety precautions in the cleanup procedure, especially if the class in question belongs to the public API of the project. Otherwise, if close() is accidentally or maliciously called concurrently from multiple threads, the JVM might crash due to double memory free or, worse, memory might be silently corrupted, while the promise of the Java platform is that however buggy some code is, as long as it passes bytecode verification, thrown exceptions should be the worst possible outcome and the virtual machine shouldn't crash. Item 13.4 also stresses this principle.

Reference.reachabilityFence() has been added in JDK 9. If the project targets JDK 8 and Hotspot JVM, any method with an empty body is an effective emulation of reachabilityFence().

See the documentation for Reference.reachabilityFence() and this discussion in the concurrency-interest mailing list for more information.

13.3. Aren’t there classes that have cleaners or override finalize() not to free native resources, but merely to return heap objects to some pools, or merely to report that some heap objects are not returned to some pools? This is an antipattern because of the tremendous complexity of using cleaners and finalize() correctly (see the previous two items) and the negative impact on performance (especially of finalize()), that might be even larger than the impact of not returning objects back to some pool and thus slightly increasing the garbage allocation rate in the application. If the latter issue arises to be any important, it should better be diagnosed with async-profiler in the allocation profiling mode (-e alloc) than by registering cleaners or overriding finalize().

This advice also applies when pooled objects are direct ByteBuffers or other Java wrappers of native memory chunks. async-profiler -e malloc could be used in such cases to detect direct memory leaks.

13.4. If some classes have some state in native memory and are used actively in concurrent code, or belong to the public API of the project, was it considered making them thread-safe? As described in item 13.2, if objects of such classes are inadvertently accessed from multiple threads without proper synchronization, memory corruption and JVM crashes might result. This is why classes in the JDK such as java.util.zip.Deflater use synchronization internally even though Deflater objects are not intended to be used concurrently from multiple threads.

Note that making classes with some state in native memory thread-safe also implies that the native state should be safely published in constructors. This means that either the native state should be stored exclusively in final fields, or VarHandle.storeStoreFence() should be called in constructors after full initialization of the native state. If the project targets JDK 9 and VarHandle is not available, the same effect could be achieved by wrapping constructors’ bodies in synchronized (this) { ... }.

14. Parallel Streams

14.1. For every use of parallel Streams via Collection.parallelStream() or Stream.parallel(): is it explained why parallel Stream is used in a comment preceding the stream operation? Are there back-of-the-envelope calculations or references to benchmarks showing that the total CPU time cost of the parallelized computation exceeds 100 microseconds?

Is there a note in the comment that parallelized operations are generally I/O-free and non-blocking, as per item 10.4? The latter might be obvious at the moment, but as the codebase evolves the logic that is called from the parallel stream operation might accidentally become blocking. Without a comment, it's harder to notice the discrepancy and the fact that the computation is no longer a good fit for parallel Streams. It can be fixed by making the logic non-blocking again or by using a simple sequential Stream instead of a parallel Stream.

Bonus: is forbidden-apis configured for the project, and are java.lang.StringBuffer, java.util.Random and Math.random() prohibited? StringBuffer and Random are thread-safe and all their methods are synchronized, which is never useful in practice and only inhibits performance. In OpenJDK, Math.random() delegates to a global static Random instance. StringBuilder should be used instead of StringBuffer; ThreadLocalRandom or SplittableRandom should be used instead of Random.

Reading List

The bracketed references used throughout this checklist:

[JCIP] Brian Goetz et al. Java Concurrency in Practice.
[EJ] Joshua Bloch. Effective Java, Third Edition.

Translated from: https://www.freecodecamp.org/news/code-review-checklist-java-concurrency-49398c326154/
