Problems with ConcurrentHashMap's Aggregate Methods under Concurrency, and How to Fix Them
Declare the static fields:
private static final int THREAD_COUNT = 10;
private static final int ITEM_COUNT = 1000;
Next, define a helper that builds a ConcurrentHashMap of a given size:
//build a ConcurrentHashMap with the given number of entries
private static ConcurrentHashMap<String, Long> getConcurrentHashMapData(int count) {
    return LongStream.rangeClosed(1, count)
            //box the primitive longs so collect() can be used
            .boxed()
            //collect the stream 1..count into a ConcurrentHashMap keyed by random UUIDs
            .collect(Collectors.toConcurrentMap(i -> UUID.randomUUID().toString(), Function.identity(),
                    (o1, o2) -> o1, ConcurrentHashMap::new));
}
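A quick standalone sanity check of this helper (the class name here is just for illustration; System.out replaces the lombok logger):

```java
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;
import java.util.stream.Collectors;
import java.util.stream.LongStream;

public class DataHelperDemo {
    // Same helper as above: `count` entries, random-UUID keys, values 1..count
    static ConcurrentHashMap<String, Long> getConcurrentHashMapData(int count) {
        return LongStream.rangeClosed(1, count)
                .boxed()
                .collect(Collectors.toConcurrentMap(i -> UUID.randomUUID().toString(),
                        Function.identity(), (o1, o2) -> o1, ConcurrentHashMap::new));
    }

    public static void main(String[] args) {
        // UUID keys are effectively collision-free, so the size matches the request
        System.out.println(getConcurrentHashMapData(10).size()); // prints 10
    }
}
```

Note that a non-positive count yields an empty map, because rangeClosed(1, 0) is an empty stream; this matters later when the computed gap can reach zero.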
Now reproduce the problem scenario:
public static void errorTest() throws InterruptedException {
    //start from a ConcurrentHashMap holding 900 items
    ConcurrentHashMap<String, Long> concurrentHashMap = getConcurrentHashMapData(ITEM_COUNT - 100);
    log.info("concurrentHashMap.size = {}", concurrentHashMap.size());
    ForkJoinPool forkJoinPool = new ForkJoinPool(THREAD_COUNT);
    forkJoinPool.execute(() -> IntStream.rangeClosed(1, 12).parallel().forEach(i -> {
        //check-then-act: read the current size, then try to top the map up to ITEM_COUNT
        int needAddNumber = ITEM_COUNT - concurrentHashMap.size();
        log.info("concurrentHashMap.size = {}", concurrentHashMap.size());
        concurrentHashMap.putAll(getConcurrentHashMapData(needAddNumber));
    }));
    forkJoinPool.shutdown();
    forkJoinPool.awaitTermination(10, TimeUnit.MINUTES);
    log.info("after add, concurrentHashMap.size = {}", concurrentHashMap.size());
}
The result: the final size comes out larger than the expected 1000.
Analysis
Clearly something is wrong. Multiple threads use int needAddNumber = ITEM_COUNT - concurrentHashMap.size(); as a check-then-act condition. Although ConcurrentHashMap's individual reads and writes are atomic, its aggregate methods (size, isEmpty, containsValue, and so on) return only an estimate under concurrency and must not be used for flow control. While one thread is in the middle of putAll, another thread reads size() and acts on a value that is already stale, so every thread tries to top the map up to 1000 on its own and the map ends up overfilled.
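The race described above can be packaged into a self-contained sketch (class and method names are illustrative; System.out replaces the lombok logger). Because every key is a fresh UUID, each putAll really does add its full batch, so interleaved threads usually push the final size past ITEM_COUNT:

```java
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.TimeUnit;
import java.util.function.Function;
import java.util.stream.Collectors;
import java.util.stream.IntStream;
import java.util.stream.LongStream;

public class SizeRaceDemo {
    static final int THREAD_COUNT = 10;
    static final int ITEM_COUNT = 1000;

    // Build a map with `count` entries keyed by random UUID strings
    static ConcurrentHashMap<String, Long> data(int count) {
        return LongStream.rangeClosed(1, count)
                .boxed()
                .collect(Collectors.toConcurrentMap(i -> UUID.randomUUID().toString(),
                        Function.identity(), (o1, o2) -> o1, ConcurrentHashMap::new));
    }

    // Run the racy fill once and return the final size
    static int runOnce() throws InterruptedException {
        ConcurrentHashMap<String, Long> map = data(ITEM_COUNT - 100);
        ForkJoinPool pool = new ForkJoinPool(THREAD_COUNT);
        pool.execute(() -> IntStream.rangeClosed(1, 12).parallel().forEach(i -> {
            // check-then-act on a possibly stale size(): several threads can
            // observe the same gap and each fill it independently
            int gap = ITEM_COUNT - map.size();
            if (gap > 0) {
                map.putAll(data(gap));
            }
        }));
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.MINUTES);
        return map.size();
    }

    public static void main(String[] args) throws InterruptedException {
        // At least one thread fills its observed gap completely, so the final
        // size is never below ITEM_COUNT; under contention it overshoots
        System.out.println("final size = " + runOnce());
    }
}
```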
To avoid this problem, serialize the check-then-act sequence with synchronized:
public static void rightTest() throws InterruptedException {
    ConcurrentHashMap<String, Long> concurrentHashMap = getConcurrentHashMapData(ITEM_COUNT - 100);
    log.info("concurrentHashMap.size = {}", concurrentHashMap.size());
    ForkJoinPool forkJoinPool = new ForkJoinPool(THREAD_COUNT);
    forkJoinPool.execute(() -> IntStream.rangeClosed(1, 12).parallel().forEach(i -> {
        //lock the map so only one thread at a time runs the check-then-act sequence
        synchronized (concurrentHashMap) {
            int gap = ITEM_COUNT - concurrentHashMap.size();
            log.info("concurrentHashMap.size = {}", concurrentHashMap.size());
            concurrentHashMap.putAll(getConcurrentHashMapData(gap));
        }
    }));
    forkJoinPool.shutdown();
    forkJoinPool.awaitTermination(10, TimeUnit.MINUTES);
    log.info("after add, concurrentHashMap.size = {}", concurrentHashMap.size());
}
The result: the final size is exactly the expected 1000.
Analysis
Locking with synchronized solves the problem: the size check and the putAll now execute as one atomic step, so no thread ever acts on a stale gap. The trade-off is that the lock serializes all the worker threads, giving up the concurrency ConcurrentHashMap was chosen for.
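The synchronized version can be verified end to end with the same self-contained setup as before (names are again illustrative). Under the lock, each thread computes the gap against an up-to-date size, so the map is filled to exactly ITEM_COUNT and later threads see a gap of zero:

```java
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.TimeUnit;
import java.util.function.Function;
import java.util.stream.Collectors;
import java.util.stream.IntStream;
import java.util.stream.LongStream;

public class SynchronizedFillDemo {
    static final int THREAD_COUNT = 10;
    static final int ITEM_COUNT = 1000;

    static ConcurrentHashMap<String, Long> data(int count) {
        return LongStream.rangeClosed(1, count)
                .boxed()
                .collect(Collectors.toConcurrentMap(i -> UUID.randomUUID().toString(),
                        Function.identity(), (o1, o2) -> o1, ConcurrentHashMap::new));
    }

    // Run the synchronized fill once and return the final size
    static int runOnce() throws InterruptedException {
        ConcurrentHashMap<String, Long> map = data(ITEM_COUNT - 100);
        ForkJoinPool pool = new ForkJoinPool(THREAD_COUNT);
        pool.execute(() -> IntStream.rangeClosed(1, 12).parallel().forEach(i -> {
            // the lock makes "read size, then putAll" one atomic step,
            // so no thread ever acts on a stale gap
            synchronized (map) {
                int gap = ITEM_COUNT - map.size();
                if (gap > 0) {
                    map.putAll(data(gap));
                }
            }
        }));
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.MINUTES);
        return map.size();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("final size = " + runOnce()); // prints final size = 1000
    }
}
```

Locking on the map instance itself is the simplest choice here; any shared monitor object would work, as long as every thread that performs the check-then-act uses the same one.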
Imports used:
import lombok.extern.slf4j.Slf4j;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.TimeUnit;
import java.util.function.Function;
import java.util.stream.Collectors;
import java.util.stream.IntStream;
import java.util.stream.LongStream;