Recently I have been building some recommendation-related interfaces. The core processing flow is prepare => process => post, and everything is programmed against interfaces.
The prepare stage, however, tends to involve a lot of data stitching. I want the outer processing structure to stay smooth, and I don't want colleagues to sneak ad-hoc logic into it later; hence this post.
To restate my scenario:
1. The prepare stage fetches data from two sources. Both return Video records, but each source holds a different subset of the fields; stitched together they form a complete Video. Both fetches run on multiple threads.
2. The process stage must only ever see complete data.
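Before diving into the real code, here is a minimal, self-contained sketch of the contract this flow implies. The real VideoHandler / VideoHandlerContext below carry much more state; every name and signature in this sketch is an assumption for illustration only.

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Future;

public class PipelineSketch {

    // Assumed shape of the handler contract: prepare is async, merge stitches
    // partial records, process only ever sees complete data.
    interface Handler<V> {
        List<Future<List<V>>> prepare(List<Long> ids); // async, must not block
        List<V> merge(List<V> partials);               // stitch partial records
        String process(V video);                       // only sees complete data
        default void post(V video) { }                 // metrics/logging hook
    }

    // Drive one handler through the whole pipeline.
    static <V> List<String> run(Handler<V> h, List<Long> ids) throws Exception {
        List<V> partials = new java.util.ArrayList<>();
        for (Future<List<V>> f : h.prepare(ids)) {
            partials.addAll(f.get()); // the real code bounds this wait with a deadline
        }
        List<String> out = new java.util.ArrayList<>();
        for (V v : h.merge(partials)) {
            out.add(h.process(v));
            h.post(v);
        }
        return out;
    }

    public static void main(String[] args) throws Exception {
        // Toy handler: "videos" are just strings keyed by id.
        Handler<String> toy = new Handler<>() {
            public List<Future<List<String>>> prepare(List<Long> ids) {
                return List.of(CompletableFuture.completedFuture(
                        ids.stream().map(id -> "video-" + id).toList()));
            }
            public List<String> merge(List<String> partials) { return partials; }
            public String process(String video) { return "processed:" + video; }
        };
        System.out.println(run(toy, List.of(1L, 2L)));
    }
}
```

The point of the split is that prepare returns futures instead of data, so the outer flow decides how long to wait and where to merge, and handlers cannot smuggle blocking calls into the main path.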
1. Main flow
public static List<Video> prepare(VideoHandlerContext context, Map<VideoHandler, List<Long>> prepareHandlerInfoIdListMap) {
    List<Video> result = Lists.newArrayList();
    try {
        Map<VideoType, List<Future<List<Video>>>> videoTypeFuture = new HashMap<>();
        // Kick off each handler's preparation work (async, returns futures)
        for (Map.Entry<VideoHandler, List<Long>> entry : prepareHandlerInfoIdListMap.entrySet()) {
            VideoHandler handler = entry.getKey();
            List<Long> handlerInfoIdList = entry.getValue();
            List<Future<List<Video>>> handlerTaskList = handler.prepare(context, handlerInfoIdList);
            if (handlerTaskList != null && !handlerTaskList.isEmpty()) {
                videoTypeFuture.put(handler.type(), handlerTaskList);
            }
        }
        // Wait for the tasks to finish, all sharing one overall deadline
        Table<VideoType, Long, Video> videoInfoTable = HashBasedTable.create();
        long deadLine = System.currentTimeMillis() + FEED_TIMEOUT - 1;
        videoTypeFuture.forEach((videoType, taskList) -> {
            List<Video> prepareResult = Lists.newArrayList();
            for (Future<List<Video>> future : taskList) {
                try {
                    // Each wait only gets whatever is left of the shared budget
                    long timeout = Math.max(deadLine - System.currentTimeMillis(), 1L);
                    List<Video> videoList = future.get(timeout, TimeUnit.MILLISECONDS);
                    if (CollectionUtils.isNotEmpty(videoList)) {
                        prepareResult.addAll(videoList);
                    }
                } catch (Exception e) {
                    // Keep the throwable as the last argument (no placeholder) so the stack trace is logged
                    log.error("video prepare task failed, traceId {}", context.getTraceId(), e);
                    if (!future.isCancelled()) {
                        future.cancel(true);
                    }
                }
            }
            if (CollectionUtils.isNotEmpty(prepareResult)) {
                result.addAll(HANDLER_MAP.get(videoType).merge(prepareResult));
            }
            for (Video video : prepareResult) {
                videoInfoTable.put(videoType, video.getId(), video);
            }
        });
    } catch (Exception e) {
        log.error("video feed prepare exception, traceId {}", context.getTraceId(), e);
        throw e;
    }
    return result;
}
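The timeout handling above is worth calling out: rather than giving each future its own fixed timeout (worst case N × timeout in total), every wait draws from one shared budget measured against a single deadline. A runnable sketch of that pattern, with hypothetical names:

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class DeadlineDemo {

    // Wait for all futures under one overall deadline; stragglers are cancelled.
    static List<String> awaitAll(List<Future<String>> futures, long deadLine) {
        List<String> results = new java.util.ArrayList<>();
        for (Future<String> f : futures) {
            try {
                // Only whatever is left of the shared budget, never less than 1ms
                long timeout = Math.max(deadLine - System.currentTimeMillis(), 1L);
                results.add(f.get(timeout, TimeUnit.MILLISECONDS));
            } catch (Exception e) {
                f.cancel(true); // give up on this task, keep the rest of the loop fast
            }
        }
        return results;
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        long deadLine = System.currentTimeMillis() + 500; // the whole batch gets 500ms
        List<Future<String>> futures = List.of(
                pool.submit(() -> { Thread.sleep(50); return "a"; }),
                pool.submit(() -> { Thread.sleep(50); return "b"; }));
        System.out.println(awaitAll(futures, deadLine)); // both finish well in time
        pool.shutdown();
    }
}
```

Because the tasks run concurrently, waiting on an early future also lets the later ones make progress, so the total wall time stays close to the slowest task rather than the sum of all of them.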
2. A sample handler implementation
@Override
public List<Future<List<ErShouFangVideo>>> prepare(VideoHandlerContext context, List<Long> ids) {
    if (CollectionUtils.isEmpty(ids)) {
        return Collections.emptyList();
    }
    CompletionService<List<ErShouFangVideo>> completionService = new ExecutorCompletionService<>(ThreadPoolUtils.getVideoQueryPoolExecutor());
    Integer batchSize = ContentFeedpostConfig.batchErShouFangLimit;
    // Source 1: user-related fields of the videos
    List<Future<List<ErShouFangVideo>>> userFutures = ThreadPoolUtils.batchTaskPrepare(batchSize, ids,
            list -> () -> ErShouFangVideoService.userDetailBatch(context.getTraceId(), Long.parseLong(context.getUserId()), context.getImei(), (List<Long>) list),
            completionService);
    // Source 2: detail fields of the same videos
    List<Future<List<ErShouFangVideo>>> detailFuture = ThreadPoolUtils.batchTaskPrepare(batchSize, ids,
            list -> () -> ErShouFangVideoService.videoDetailBatch(context.getTraceId(), Long.parseLong(context.getUserId()), context.getImei(), (List<Long>) list),
            completionService);
    userFutures.addAll(detailFuture);
    return userFutures;
}
@Override
public JSONObject process(VideoHandlerContext context, ErShouFangVideo video, Map<String, String> request) {
    // By the time we get here, the video must be complete (merged from both sources)
    JSONObject result = new JSONObject();
    return result;
}

@Override
public void post(VideoHandlerContext context, ErShouFangVideo video, Map<String, String> request) {
    // do nothing
}
@Override
public List<ErShouFangVideo> merge(List<ErShouFangVideo> videoList) {
    int originSize = videoList.size();
    List<ErShouFangVideo> mergedVideos = BeanConvertUtil.mergeListsWithUniqueKey(
            ErShouFangVideo::getId,
            (v1, v2) -> {
                // Field by field, keep whichever partial record has the value
                v1.setId(v1.getId() != null ? v1.getId() : v2.getId());
                v1.setAuthorId(v1.getAuthorId() != null ? v1.getAuthorId() : v2.getAuthorId());
                v1.setAvater(v1.getAvater() != null ? v1.getAvater() : v2.getAvater());
                v1.setName(v1.getName() != null ? v1.getName() : v2.getName());
                v1.setPlayJumpUrl(v1.getPlayJumpUrl() != null ? v1.getPlayJumpUrl() : v2.getPlayJumpUrl());
                v1.setPlayNum(v1.getPlayNum() != null ? v1.getPlayNum() : v2.getPlayNum());
                v1.setType(v1.getType() != null ? v1.getType() : v2.getType());
                // Only set when two partial records actually met
                v1.setMerge(true);
                return v1;
            },
            videoList
    ).stream().filter(Objects::nonNull).filter(Video::isMerge).toList();
    log.info("ershoufang video merge {}=>{}", originSize, mergedVideos.size());
    return mergedVideos;
}
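ThreadPoolUtils.batchTaskPrepare used in prepare above is project-internal, so here is only a guess at its shape: slice the ids into batches of batchSize, turn each batch into a task via the factory, and submit it to the CompletionService. The name and signature below are assumptions inferred from the call sites.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.function.Function;

public class BatchDemo {

    // Hypothetical re-implementation of ThreadPoolUtils.batchTaskPrepare:
    // one task per batch of at most batchSize ids.
    static <T, R> List<Future<R>> batchTaskPrepare(int batchSize, List<T> ids,
            Function<List<T>, Callable<R>> taskFactory, CompletionService<R> cs) {
        List<Future<R>> futures = new ArrayList<>();
        for (int i = 0; i < ids.size(); i += batchSize) {
            // subList is a view; copy it so the task owns its batch
            List<T> batch = new ArrayList<>(ids.subList(i, Math.min(i + batchSize, ids.size())));
            futures.add(cs.submit(taskFactory.apply(batch)));
        }
        return futures;
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        CompletionService<Integer> cs = new ExecutorCompletionService<>(pool);
        // 5 ids, batch size 2 => 3 tasks, each reporting its batch size
        List<Future<Integer>> futures = batchTaskPrepare(2, List.of(1L, 2L, 3L, 4L, 5L),
                batch -> () -> batch.size(), cs);
        int total = 0;
        for (Future<Integer> f : futures) {
            total += f.get();
        }
        System.out.println(futures.size() + " tasks covering " + total + " ids");
        pool.shutdown();
    }
}
```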
3. The merge utility
@SafeVarargs
public static <T, K> List<T> mergeListsWithUniqueKey(Function<T, K> keyExtractor, BinaryOperator<T> mergeFunction, List<T>... lists) {
    // Flatten all lists, then collapse records sharing a key via mergeFunction;
    // LinkedHashMap preserves the original encounter order
    return new ArrayList<>(Arrays.stream(lists)
            .flatMap(List::stream)
            .collect(Collectors.toMap(
                    keyExtractor,
                    Function.identity(),
                    mergeFunction,
                    LinkedHashMap::new
            )).values());
}
The core logic is simply: wait for the prepare tasks to finish, collect the resulting lists, merge them, and filter out incomplete records after the merge. Timeouts can of course still happen, but how to react to them is a business-level decision and remains fully under control. The main flow stays elegant throughout.