Data Processing
Data Storage
In IoT scenarios, devices upload large volumes of data. Inserting records one at a time consumes too much database capacity, so batch insertion is needed.
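To make the difference concrete, a single `PreparedStatement` can carry many rows per database round trip. Below is a minimal JDBC sketch; the `point_value` table and its columns are hypothetical and only used for illustration:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class BatchInsertExample {

    // Insert all rows in one round trip instead of issuing one statement per row.
    public void insertBatch(Connection conn, long[] pointIds, double[] values) throws SQLException {
        String sql = "INSERT INTO point_value (point_id, value) VALUES (?, ?)"; // hypothetical table
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            for (int i = 0; i < pointIds.length; i++) {
                ps.setLong(1, pointIds[i]);
                ps.setDouble(2, values[i]);
                ps.addBatch();      // queue the row on the client side
            }
            ps.executeBatch();      // send the whole batch to the database at once
        }
    }
}
```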
Option 1: Batch inserts via a scheduled job
- Use Quartz to run the scheduled job (a sketch of the underlying Quartz calls follows the snippet below).
@Resource
private QuartzService quartzService;

@Override
public void initial() {
    quartzService.createJobWithInterval(ScheduleConstant.DATA_SCHEDULE_GROUP, "data-point-value-schedule-job", interval, DateBuilder.IntervalUnit.SECOND, PointValueJob.class);
}
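For context, a helper like `createJobWithInterval` can be built on the plain Quartz API roughly as follows. This is a sketch assuming an injected `Scheduler`, not the project's actual implementation:

```java
import org.quartz.*;

public class QuartzServiceSketch {

    private final Scheduler scheduler;

    public QuartzServiceSketch(Scheduler scheduler) {
        this.scheduler = scheduler;
    }

    // Register a job that fires repeatedly at the given interval.
    public void createJobWithInterval(String group, String name, int interval,
                                      DateBuilder.IntervalUnit unit,
                                      Class<? extends Job> jobClass) throws SchedulerException {
        JobDetail jobDetail = JobBuilder.newJob(jobClass)
                .withIdentity(name, group)
                .build();
        Trigger trigger = TriggerBuilder.newTrigger()
                .withIdentity(name + "-trigger", group)
                .startNow()
                .withSchedule(CalendarIntervalScheduleBuilder.calendarIntervalSchedule()
                        .withInterval(interval, unit))
                .build();
        scheduler.scheduleJob(jobDetail, trigger);
    }
}
```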
- Within the scheduled job, buffer incoming values in a list, write the list to the database on each run, and then clear it. A lock is required to keep the buffer consistent across threads.
@Override
protected void executeInternal(@NotNull JobExecutionContext jobExecutionContext) throws JobExecutionException {
    // Statistical point value receive rate
    long speed = VALUE_COUNT.getAndSet(0);
    VALUE_SPEED.set(speed);
    speed /= interval;
    if (speed >= batchSpeed) {
        log.debug("Point value receiver speed: {} /s, value size: {}, interval: {}", speed, getPointValuesSize(), interval);
    }

    // Save point value array to Redis & MongoDB
    threadPoolExecutor.execute(() -> {
        VALUE_LOCK.writeLock().lock();
        if (!POINT_VALUE_LIST.isEmpty()) {
            pointValueService.save(POINT_VALUE_LIST);
            clearPointValues();
        }
        VALUE_LOCK.writeLock().unlock();
    });
}
- On the consumer side, if the incoming data rate has not reached the batch threshold, the value can be saved directly.
if (PointValueJob.VALUE_SPEED.get() < batchSpeed) {
    threadPoolExecutor.execute(() ->
            // Save point value to Redis & MongoDB
            pointValueService.save(pointValueBO)
    );
} else {
    // Save point value to schedule
    PointValueJob.VALUE_LOCK.writeLock().lock();
    PointValueJob.addPointValues(pointValueBO);
    PointValueJob.VALUE_LOCK.writeLock().unlock();
}
Source:
https://gitee.com/pnoker/iot-dc3
io.github.pnoker.center.data.job.PointValueJob
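Both snippets above also depend on static members of `PointValueJob` that are not shown (`POINT_VALUE_LIST`, `VALUE_LOCK`, `VALUE_COUNT`, `VALUE_SPEED` and the helper methods). The following is a minimal sketch of what they might look like; the types and initialization are assumptions, not the project's actual declarations:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class PointValueJobState {

    // Incremented wherever a point value is received; reset on every job run via getAndSet(0).
    public static final AtomicLong VALUE_COUNT = new AtomicLong(0);
    // Snapshot of the last run's count, used to choose between direct and buffered saves.
    public static final AtomicLong VALUE_SPEED = new AtomicLong(0);
    // Guards POINT_VALUE_LIST, which is touched by consumer threads and the job thread.
    public static final ReentrantReadWriteLock VALUE_LOCK = new ReentrantReadWriteLock();

    // Element type would be the project's PointValueBO; Object keeps this sketch self-contained.
    private static final List<Object> POINT_VALUE_LIST = new ArrayList<>();

    public static void addPointValues(Object pointValueBO) {
        POINT_VALUE_LIST.add(pointValueBO);
    }

    public static int getPointValuesSize() {
        return POINT_VALUE_LIST.size();
    }

    public static void clearPointValues() {
        POINT_VALUE_LIST.clear();
    }
}
```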
Option 2: Blocking queue
Put the data to be inserted into a queue, then start a consumer thread on demand based on the actual load.
private final BlockingQueue<HistoryDataSharding> saveQueue = new LinkedBlockingQueue<>();
// volatile so the double-checked "is a consumer running" flag is visible across threads
private volatile boolean run = false;

public int insert(HistoryDataSharding historyData) {
    saveQueue.add(historyData);
    // Start the consumer thread only when none is running
    if (!run) {
        synchronized (this) {
            if (!run) {
                run = true;
                fixedThreadPool.submit(new InsertTask());
            }
        }
    }
    return 0;
}
To avoid creating threads over and over, the consumer thread can wait: if no new data arrives within a certain time, the thread ends.
class InsertTask implements Runnable {
    @Override
    public void run() {
        log.info("Batch insert thread started");
        try {
            List<HistoryDataSharding> list = new ArrayList<>();
            boolean insert = true;
            boolean sleep = true;
            while (insert) {
                HistoryDataSharding historyData = saveQueue.poll();
                if (null == historyData) {
                    insert = false;
                }
                if (insert) {
                    // Data arrived within the last second
                    sleep = false;
                    historyData.setId(IdUtil.getSnowflake().nextId());
                    list.add(historyData);
                    if (list.size() > dataSize) {
                        historyDataShardingService.insertBatch(list);
                        list.clear();
                    }
                } else if (sleep) {
                    // Queue is still empty after the one-second pause: flush the remainder and exit
                    if (!list.isEmpty()) {
                        historyDataShardingService.insertBatch(list);
                    }
                } else {
                    // Queue is empty: pause for one second, then check again
                    Thread.sleep(1000);
                    insert = true;
                    sleep = true;
                }
            }
            log.info("Batch insert thread finished");
        } catch (Exception e) {
            log.error("Batch insert thread failed", e);
        }
        run = false;
    }
}
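The manual sleep/flag handling above can also be written with the queue's timed `poll`, which blocks for up to a timeout while waiting for the next element. Below is a sketch of the same idea, assuming the same `saveQueue`, `run`, `dataSize` and `historyDataShardingService` members as in the code above:

```java
// Additional import needed: java.util.concurrent.TimeUnit
class InsertTask implements Runnable {
    @Override
    public void run() {
        log.info("Batch insert thread started");
        List<HistoryDataSharding> list = new ArrayList<>();
        try {
            while (true) {
                // Wait up to one second for the next element; null means the queue stayed empty.
                HistoryDataSharding historyData = saveQueue.poll(1, TimeUnit.SECONDS);
                if (historyData == null) {
                    break; // idle for a full second: flush what is left and exit
                }
                historyData.setId(IdUtil.getSnowflake().nextId());
                list.add(historyData);
                if (list.size() >= dataSize) {
                    historyDataShardingService.insertBatch(list);
                    list.clear();
                }
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } finally {
            if (!list.isEmpty()) {
                historyDataShardingService.insertBatch(list);
            }
            run = false;
        }
        log.info("Batch insert thread finished");
    }
}
```

This removes the need for the `insert`/`sleep` flags, at the cost of keeping the thread blocked for up to one second before it exits.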