Business scenario:
Many concurrent requests hit a callback endpoint; on each successful callback a counter is incremented and the count is persisted to the database.
Test case 1:
Single thread, no concurrency, 1000 requests
public static void main(String[] args) {
    for (int i = 0; i < 1000; i++) {
        String s = CustomerHttpClient.doGet("http://localhost:8080/testCount");
        System.out.println(s);
    }
}

@RequestMapping("testCount")
public String testCount() {
    TestCount testCount = testCountMapper.selectByPrimaryKey(1L);
    testCount.setCount(testCount.getCount() + 1);
    int i = testCountMapper.updateByPrimaryKey(testCount);
    return String.valueOf(i);
}
Result: count is 1000, no increments were lost.
Elapsed time: 162s
Test case 2:
Multithreaded, 1000 concurrent requests
private static AtomicInteger atomicInteger = new AtomicInteger();

@RequestMapping("testCount")
public String testCount() {
    TestCount testCount = testCountMapper.selectByPrimaryKey(1L);
    testCount.setCount(testCount.getCount() + 1);
    int i = testCountMapper.updateByPrimaryKey(testCount);
    int i1 = atomicInteger.incrementAndGet();
    System.out.println(1);
    return String.valueOf(i1);
}
public static void main(String[] args) throws InterruptedException {
    long startTime = System.currentTimeMillis();
    ExecutorService executorService = Executors.newFixedThreadPool(20);
    for (int i = 0; i < 1000; i++) {
        executorService.submit(() -> CustomerHttpClient.doGet("http://localhost:8080/testCount"));
    }
    // Wait for all submitted requests to finish; otherwise only submission time is measured
    executorService.shutdown();
    executorService.awaitTermination(10, TimeUnit.MINUTES);
    long time = System.currentTimeMillis() - startTime;
    System.out.println(time);
}
Result: the controller really was invoked 1000 times, but the count stored in the database was only 83, a huge discrepancy.
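The gap comes from the classic lost update: each request does read -> +1 -> write, so concurrent requests read the same value and overwrite each other's increment. A minimal in-process sketch of the same race (the plain field plays the role of the database row; names are illustrative, not from the project):

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class LostUpdateDemo {
    static int plainCount = 0;                              // unguarded read-modify-write, like the DB row
    static final AtomicInteger atomicCount = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(20);
        for (int i = 0; i < 1000; i++) {
            pool.submit(() -> {
                plainCount = plainCount + 1;                // race: stale read, then write overwrites
                atomicCount.incrementAndGet();              // CAS loop, never loses an increment
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        System.out.println("plain=" + plainCount + " atomic=" + atomicCount.get());
    }
}
```

The atomic counter always reaches 1000; the plain counter often ends up lower, just like the database count of 83.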
Test case 3:
Multithreaded, 1000 concurrent requests, with synchronized (around the update only)
@Resource
private TestCountMapper testCountMapper;
private static AtomicInteger atomicInteger = new AtomicInteger();

@RequestMapping("testCount")
public String testCount() {
    TestCount testCount = testCountMapper.selectByPrimaryKey(1L);
    synchronized (TestController.class) {
        testCount.setCount(testCount.getCount() + 1);
        int i = testCountMapper.updateByPrimaryKey(testCount);
    }
    int i1 = atomicInteger.incrementAndGet();
    System.out.println(1);
    return String.valueOf(i1);
}
public static void main(String[] args) throws InterruptedException {
    long startTime = System.currentTimeMillis();
    ExecutorService executorService = Executors.newFixedThreadPool(20);
    for (int i = 0; i < 1000; i++) {
        executorService.submit(() -> CustomerHttpClient.doGet("http://localhost:8080/testCount"));
    }
    // Wait for all submitted requests to finish; otherwise only submission time is measured
    executorService.shutdown();
    executorService.awaitTermination(10, TimeUnit.MINUTES);
    long time = System.currentTimeMillis() - startTime;
    System.out.println(time);
}
Even with synchronized added, the database count was only 51. The reason: the select runs before the synchronized block, so every thread still increments a stale snapshot; the lock only serializes the writes, not the read-modify-write as a whole.
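A minimal in-process sketch of this flaw (the sleep stands in for the database round-trip; names are illustrative): the value is read outside the lock, so the locked write still stores a stale result.

```java
import java.util.concurrent.*;

public class StaleReadDemo {
    static volatile int count = 0;
    static final Object lock = new Object();

    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(20);
        for (int i = 0; i < 1000; i++) {
            pool.submit(() -> {
                int snapshot = count;                   // read OUTSIDE the lock -> may be stale
                try {
                    Thread.sleep(1);                    // stands in for the DB select latency
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                synchronized (lock) {
                    count = snapshot + 1;               // locked write of a stale value
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        System.out.println("count=" + count);           // far below 1000 in practice
    }
}
```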
Test case 4:
Multithreaded, 1000 concurrent requests, with synchronized around the whole read-modify-write. This does not work for distributed (multi-instance) deployments.
@Resource
private TestCountMapper testCountMapper;
private static AtomicInteger atomicInteger = new AtomicInteger();

@RequestMapping("testCount")
public String testCount() {
    synchronized (this) {
        TestCount testCount = testCountMapper.selectByPrimaryKey(1L);
        testCount.setCount(testCount.getCount() + 1);
        int i = testCountMapper.updateByPrimaryKey(testCount);
    }
    int i1 = atomicInteger.incrementAndGet();
    System.out.println(1);
    return String.valueOf(i1);
}
public static void main(String[] args) throws InterruptedException {
    long startTime = System.currentTimeMillis();
    ExecutorService executorService = Executors.newFixedThreadPool(20);
    for (int i = 0; i < 1000; i++) {
        executorService.submit(() -> CustomerHttpClient.doGet("http://localhost:8080/testCount"));
    }
    // Wait for all submitted requests to finish; otherwise only submission time is measured
    executorService.shutdown();
    executorService.awaitTermination(10, TimeUnit.MINUTES);
    long time = System.currentTimeMillis() - startTime;
    System.out.println(time);
}
The database count is 1000, which shows that blocked requests were not dropped. However, the total execution time was even slower than the single-threaded run, because the lock forces all requests through the read-modify-write one at a time.
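Why this version is correct can be shown with a minimal in-process sketch (names are illustrative): when the read and the write both sit inside one synchronized block, each thread sees the previous thread's result, so no increment is lost.

```java
import java.util.concurrent.*;

public class SyncCounterDemo {
    static int count = 0;
    static final Object lock = new Object();

    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(20);
        for (int i = 0; i < 1000; i++) {
            pool.submit(() -> {
                synchronized (lock) {
                    count = count + 1;      // read AND write under one lock: no lost updates
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        System.out.println("count=" + count);
    }
}
```

The trade-off is exactly what the test shows: correctness is bought by full serialization, so concurrency gains nothing.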
Test case 5:
Multithreaded, 1000 concurrent requests, incrementing at the database level. Works in a distributed deployment.
@RequestMapping("dbTestCount")
public String dbTestCount() {
    int ret = testCountMapper.incrementCount();
    return String.valueOf(ret);
}
public static void main(String[] args) throws InterruptedException {
    long startTime = System.currentTimeMillis();
    ExecutorService executorService = Executors.newFixedThreadPool(20);
    for (int i = 0; i < 1000; i++) {
        executorService.submit(() -> CustomerHttpClient.doGet("http://localhost:8080/dbTestCount"));
    }
    // Wait for all submitted requests to finish; otherwise only submission time is measured
    executorService.shutdown();
    executorService.awaitTermination(10, TimeUnit.MINUTES);
    long time = System.currentTimeMillis() - startTime;
    System.out.println(time);
}
db:
    update test_count set count = count + 1 where id = 1
The database count is 1000 and execution is fast. This approach is usable in production.
Under concurrency, MySQL (InnoDB) serializes `set count = count + 1` by taking a row lock on the updated row (it would escalate to locking many rows only if the WHERE clause had no usable index), so concurrent increments queue up instead of overwriting each other.
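The article does not show how `incrementCount()` is mapped; a hypothetical MyBatis declaration (an assumption, annotation style shown, XML mapping works the same way) would look like:

```java
import org.apache.ibatis.annotations.Mapper;
import org.apache.ibatis.annotations.Update;

@Mapper
public interface TestCountMapper {
    // Atomic at the database level: the read and the write happen in one statement,
    // so the application never holds a stale value.
    @Update("update test_count set count = count + 1 where id = 1")
    int incrementCount();
}
```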
Test case 6:
Use a Redis lock. When the lock cannot be acquired, put the request back on a queue and try again, effectively a keep-retrying-until-locked loop. Works in a distributed deployment.
@Resource
private SRedis sRedis;
private static final ExecutorService executorService = Executors.newFixedThreadPool(20);

@RequestMapping("redisTestCount")
public String redisTestCount() {
    int i1 = 0;
    try {
        String lockKey = "redisTestCount";
        boolean lock = sRedis.tryLock(lockKey);
        if (lock) {
            try {
                TestCount testCount = testCountMapper.selectByPrimaryKey(1L);
                testCount.setCount(testCount.getCount() + 1);
                int i = testCountMapper.updateByPrimaryKey(testCount);
                i1 = atomicInteger.incrementAndGet();
                System.out.println(1);
            } finally {
                sRedis.unlock(lockKey);
            }
        } else {
            executorService.submit(() -> redisTestCount());
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
    return String.valueOf(i1);
}
public static void main(String[] args) throws InterruptedException {
    long startTime = System.currentTimeMillis();
    ExecutorService executorService = Executors.newFixedThreadPool(20);
    for (int i = 0; i < 1000; i++) {
        executorService.submit(() -> CustomerHttpClient.doGet("http://localhost:8080/redisTestCount"));
    }
    // Wait for all submitted requests to finish; otherwise only submission time is measured
    executorService.shutdown();
    executorService.awaitTermination(10, TimeUnit.MINUTES);
    long time = System.currentTimeMillis() - startTime;
    System.out.println(time);
}
The database count is 1000, but execution is slower than the database-level increment: every contended request costs an extra round trip through the retry queue.
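The try-lock-or-requeue pattern can be sketched in-process (a sketch, not the real SRedis: an AtomicBoolean plays the role of the Redis lock, and a plain int plays the database row; all names are illustrative). The key property is that a failed lock attempt re-submits the task instead of dropping it, so every increment eventually lands.

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicBoolean;

public class RetryLockDemo {
    static final AtomicBoolean lock = new AtomicBoolean(false);
    static final ExecutorService pool = Executors.newFixedThreadPool(20);
    static int count = 0;

    static void increment(CountDownLatch done) {
        if (lock.compareAndSet(false, true)) {          // "tryLock" succeeded
            try {
                count = count + 1;                      // critical section: read-modify-write
            } finally {
                lock.set(false);                        // "unlock"
            }
            done.countDown();
        } else {
            pool.submit(() -> increment(done));         // lock busy: queue a retry, don't drop it
        }
    }

    public static void main(String[] args) throws InterruptedException {
        CountDownLatch done = new CountDownLatch(1000);
        for (int i = 0; i < 1000; i++) {
            pool.submit(() -> increment(done));
        }
        done.await();                                   // block until all 1000 increments succeed
        pool.shutdown();
        System.out.println("count=" + count);
    }
}
```

The retries explain the measured slowdown: under contention each request may cycle through the queue several times before it wins the lock, whereas the database-level `count = count + 1` pays that serialization cost once, inside MySQL.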