MySQL pagination: replacing and optimizing LIMIT for large offsets (experiment)

Reference: https://my.oschina.net/cart/blog/354999

-- Baseline: plain LIMIT with a large offset
select SQL_NO_CACHE
       u.id, u.user_id, u.user_name, u.user_name_index, u.email, u.pwd, u.email_token, u.email_active_date,
       u.real_name, u.real_name_index, u.identity_card, u.identity_card_index
  from t_user u
 order by u.id
 limit 100000, 100;

-- Method 1: filter with IN over a derived table of ids
-- (the extra derived table t is needed because MySQL does not allow
--  LIMIT directly inside an IN subquery)
select SQL_NO_CACHE
       u.id, u.user_id, u.user_name, u.user_name_index, u.email, u.pwd, u.email_token, u.email_active_date,
       u.real_name, u.real_name_index, u.identity_card, u.identity_card_index
  from t_user u
 where u.id in (
       select t.id from (select id from t_user_basic_info order by id limit 100000, 100) t
 );

-- Method 2: inner join against a derived table that pages over ids only
-- EXPLAIN
select SQL_NO_CACHE
       u.id, u.user_id, u.user_name, u.user_name_index, u.email, u.pwd, u.email_token, u.email_active_date,
       u.real_name, u.real_name_index, u.identity_card, u.identity_card_index
  from t_user u
 inner join (select id from t_user_basic_info order by id limit 100000, 100) as t USING(id);

-- Method 3: seek to the boundary id with a subquery, then LIMIT
-- EXPLAIN
select SQL_NO_CACHE
       u.id, u.user_id, u.user_name, u.user_name_index, u.email, u.pwd, u.email_token, u.email_active_date,
       u.real_name, u.real_name_index, u.identity_card, u.identity_card_index
  from t_user u
 where u.id >= (select id from t_user_basic_info order by id limit 100000, 1)
 order by id
 limit 100;
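The three rewrites can be sanity-checked outside MySQL. Below is a minimal sketch using Python's sqlite3 with a made-up t_user table (names follow the article, the data is invented, and the subqueries page over t_user itself rather than a separate t_user_basic_info). SQLite's planner will not reproduce the MySQL timings; this only confirms that all three queries return the same page of rows.

```python
import sqlite3

# Toy stand-in for the article's setup: a small t_user table with only
# (id, user_name); real schema and data differ.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t_user (id INTEGER PRIMARY KEY, user_name TEXT)")
conn.executemany("INSERT INTO t_user VALUES (?, ?)",
                 [(i, f"user{i}") for i in range(1, 1001)])

offset, page = 500, 10

# Baseline: plain LIMIT with an offset.
baseline = conn.execute(
    "SELECT id, user_name FROM t_user ORDER BY id LIMIT ? OFFSET ?",
    (page, offset)).fetchall()

# Method 2: join against a derived table that pages over ids only.
joined = conn.execute(
    """SELECT u.id, u.user_name
         FROM t_user u
         JOIN (SELECT id FROM t_user ORDER BY id LIMIT ? OFFSET ?) t
           USING (id)
        ORDER BY u.id""",
    (page, offset)).fetchall()

# Method 3: seek to the boundary id, then LIMIT.
seek = conn.execute(
    """SELECT id, user_name FROM t_user
        WHERE id >= (SELECT id FROM t_user ORDER BY id LIMIT 1 OFFSET ?)
        ORDER BY id LIMIT ?""",
    (offset, page)).fetchall()

# All three page the same rows: ids 501..510.
assert baseline == joined == seek
```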


Results (each query returned 100 rows):

Baseline (plain LIMIT):        0.069 s
Method 1 (IN + derived table): 0.119 s
Method 2 (inner join):         0.034 s
Method 3 (seek + LIMIT):       0.099 s


Surprisingly, methods 1 and 3 are both slower than the plain LIMIT here. The test table holds only a bit over a hundred thousand rows, though, so the comparison should be rerun at the million and ten-million row scale.
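Method 3 can be pushed further when pages are read sequentially: instead of recomputing the boundary id with a subquery on every page, carry the last id of the previous page and seek directly (keyset, a.k.a. cursor, pagination), so the cost per page no longer grows with depth. A sketch under the same toy SQLite setup as above, with invented names and data:

```python
import sqlite3

# Toy table; names and data are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t_user (id INTEGER PRIMARY KEY, user_name TEXT)")
conn.executemany("INSERT INTO t_user VALUES (?, ?)",
                 [(i, f"user{i}") for i in range(1, 101)])

def next_page(last_id, size=10):
    # WHERE id > ? replaces LIMIT offset, size: one index seek,
    # regardless of how deep into the result set we are.
    return conn.execute(
        "SELECT id, user_name FROM t_user WHERE id > ? ORDER BY id LIMIT ?",
        (last_id, size)).fetchall()

page1 = next_page(0)
page2 = next_page(page1[-1][0])  # resume from the last id seen
```

The trade-off is that keyset pagination cannot jump to an arbitrary page number; it only supports "next page" navigation over a unique, ordered key.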
