Django is powerful, and I recently discovered a very useful feature: `django.db.connection.queries`, which reports every SQL statement a view ran and how long each one took.
I want to explore two questions:
1. How long does the count() query take compared with a slice (LIMIT) query?
2. Is fetching 15 rows one by one through the unique id index faster than a single LIMIT query?
from django.core.paginator import Paginator
from django.db import connection

if stype == 'b_other':
    counts = ArticleMmOther.objects.all().count()
    # Verification 1: fetch one page with a slice, which becomes a LIMIT/OFFSET query
    arts = ArticleMmOther.objects.values("id", "title")[start:end]
    # Verification 2: fetch the same rows one by one through the unique id index
    # arts = [ArticleMmOther.objects.values("id", "title").get(id=i) for i in range(start, end)]
    print(connection.queries)  # every SQL statement with its execution time (only populated when DEBUG=True)
    print(arts, type(arts))
    paginator = Paginator(range(counts), 15)
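Django only fills `connection.queries` when `settings.DEBUG` is True. To show the mechanism outside a Django project, here is a stdlib-only sketch of roughly what Django's debug cursor wrapper does: wrap `execute()`, time each statement, and append a `{'sql', 'time'}` dict to a list. The table name and row data are made up for the illustration.

```python
import sqlite3
import time

queries = []  # plays the role of django.db.connection.queries


def timed_execute(cursor, sql, params=()):
    """Run one statement and log it the way Django's debug cursor does."""
    start = time.perf_counter()
    cursor.execute(sql, params)
    queries.append({"sql": sql, "time": f"{time.perf_counter() - start:.3f}"})
    return cursor


conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE articlemmother (id INTEGER PRIMARY KEY, title TEXT)")
cur.executemany("INSERT INTO articlemmother (id, title) VALUES (?, ?)",
                [(i, f"title {i}") for i in range(1, 16)])

timed_execute(cur, "SELECT COUNT(*) FROM articlemmother")
timed_execute(cur, "SELECT id, title FROM articlemmother LIMIT 15")
print(queries)
```

Each logged entry has the same shape as the dicts in the report below, so the same analysis applies to either source.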
Below is the printed query report:
Verification 1:
count query: 11.340s; slice (LIMIT) query: 0.009s
[{'sql': 'SELECT @@SQL_AUTO_IS_NULL', 'time': '0.000'},
{'sql': 'SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED',
'time': '0.000'},
# COUNT query over the whole table, execution time 11.340s
{'sql': 'SELECT COUNT(*) AS `__count`
FROM `articlemmother`', 'time': '11.340'},
# LIMIT query for 15 rows, execution time 0.009s
{'sql': 'SELECT `articlemmother`.`id`, `articlemmother`.`title`
FROM `articlemmother` ORDER BY `articlemmother`.`like` DESC,
`articlemmother`.`read` DESC LIMIT 15 OFFSET 2',
'time': '0.009'}]
Verification 2:
count query: 0.030s; unique-index id queries: 0.000s each
[{'time': '0.000', 'sql': 'SELECT @@SQL_AUTO_IS_NULL'},
{'time': '0.000', 'sql': 'SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED'},
# count query took 0.030s
{'time': '0.030', 'sql': 'SELECT COUNT(*) AS `__count`
FROM `articlemmother`'},
# all 15 rows fetched through the unique id index; each reported as 0.000s
{'time': '0.000', 'sql': 'SELECT VERSION()'},
{'time': '0.000', 'sql': 'SELECT `articlemmother`.`id`, `articlemmother`.`title` FROM `articlemmother` WHERE `articlemmother`.`id` = 1'},
{'time': '0.000', 'sql': 'SELECT `articlemmother`.`id`, `articlemmother`.`title` FROM `articlemmother` WHERE `articlemmother`.`id` = 2'},
{'time': '0.000', 'sql': 'SELECT `articlemmother`.`id`, `articlemmother`.`title` FROM `articlemmother` WHERE `articlemmother`.`id` = 3'},
{'time': '0.000', 'sql': 'SELECT `articlemmother`.`id`, `articlemmother`.`title` FROM `articlemmother` WHERE `articlemmother`.`id` = 4'},
{'time': '0.000', 'sql': 'SELECT `articlemmother`.`id`, `articlemmother`.`title` FROM `articlemmother` WHERE `articlemmother`.`id` = 5'},
{'time': '0.000', 'sql': 'SELECT `articlemmother`.`id`, `articlemmother`.`title` FROM `articlemmother` WHERE `articlemmother`.`id` = 6'},
{'time': '0.000', 'sql': 'SELECT `articlemmother`.`id`, `articlemmother`.`title` FROM `articlemmother` WHERE `articlemmother`.`id` = 7'},
{'time': '0.000', 'sql': 'SELECT `articlemmother`.`id`, `articlemmother`.`title` FROM `articlemmother` WHERE `articlemmother`.`id` = 8'},
{'time': '0.000', 'sql': 'SELECT `articlemmother`.`id`, `articlemmother`.`title` FROM `articlemmother` WHERE `articlemmother`.`id` = 9'},
{'time': '0.000', 'sql': 'SELECT `articlemmother`.`id`, `articlemmother`.`title` FROM `articlemmother` WHERE `articlemmother`.`id` = 10'},
{'time': '0.000', 'sql': 'SELECT `articlemmother`.`id`, `articlemmother`.`title` FROM `articlemmother` WHERE `articlemmother`.`id` = 11'},
{'time': '0.000', 'sql': 'SELECT `articlemmother`.`id`, `articlemmother`.`title` FROM `articlemmother` WHERE `articlemmother`.`id` = 12'},
{'time': '0.000', 'sql': 'SELECT `articlemmother`.`id`, `articlemmother`.`title` FROM `articlemmother` WHERE `articlemmother`.`id` = 13'},
{'time': '0.000', 'sql': 'SELECT `articlemmother`.`id`, `articlemmother`.`title` FROM `articlemmother` WHERE `articlemmother`.`id` = 14'},
{'time': '0.001', 'sql': 'SELECT `articlemmother`.`id`, `articlemmother`.`title` FROM `articlemmother` WHERE `articlemmother`.`id` = 15'}]
Comparative analysis and conclusions:
1. The COUNT(*) query took 11.340s in Verification 1 but only 0.030s in Verification 2; the full-table count is by far the most expensive query here.
2. Verification 1's slice (LIMIT) query fetched 15 rows in 0.009s; in Verification 2, each of the 15 unique-index id lookups reported 0.000s, almost free. By that comparison, querying through the unique id index looks faster, but it cannot be ruled out that each lookup's time was rounded down, so the true sum of the 15 id lookups may be close to the LIMIT query's time after all.
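The rounding caveat above can be checked directly: each entry in `connection.queries` is a plain dict with 'sql' and 'time' keys, where 'time' is a string in seconds, so summing the reported times takes one line. A minimal sketch over a hand-built list shaped like the report above (the SQL strings are abbreviated placeholders, not real queries):

```python
# Entries mimic django.db.connection.queries; 'time' is a string in seconds.
# Fifteen entries each shown as '0.000' could still hide up to ~0.0075s in total.
report = [{"sql": f"SELECT ... WHERE id = {i}", "time": "0.000"} for i in range(1, 15)]
report.append({"sql": "SELECT ... WHERE id = 15", "time": "0.001"})


def total_time(entries):
    """Sum the reported execution times, in seconds."""
    return sum(float(e["time"]) for e in entries)


print(f"{len(report)} queries, {total_time(report):.3f}s total")
```

Summing only recovers what the report recorded, of course: times already rounded to 0.000 stay invisible, which is exactly why the comparison in point 2 stays inconclusive at millisecond resolution.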
Overall, with the method above you get a grip on each query's execution time during Django performance tuning, so you can optimize against specific targets instead of guessing blindly. More handy Django features that save you from reinventing the wheel are welcome in the comments, so everyone can learn together.