As mentioned earlier, pagination over a large collection becomes slow when skip has to jump over a large number of documents. Here we optimize pagination for that case.
Here is the code:
/**
 * Optimized pagination for large collections
 * @param page     page number
 * @param pageSize number of records per page
 * @param lastId   the largest id on the previous page
 * @return the users on the requested page
 */
public List<User> largePageList(int page, int pageSize, int lastId) {
    DB myMongo = MongoManager.getDB("myMongo");
    DBCollection userCollection = myMongo.getCollection("user");
    DBCursor cursor = null;
    if (page == 1) {
        // First page: no lower bound, just sort by id and limit
        cursor = userCollection.find()
                .sort(new BasicDBObject("id", 1)).limit(pageSize);
    } else {
        // Subsequent pages: range query id > lastId instead of skip
        cursor = userCollection
                .find(new BasicDBObject("id", new BasicDBObject(
                        QueryOperators.GT, lastId)))
                .sort(new BasicDBObject("id", 1)).limit(pageSize);
    }
    List<User> userList = new ArrayList<User>();
    while (cursor.hasNext()) {
        User user = new User();
        user.parse(cursor.next());
        userList.add(user);
    }
    return userList;
}
public static void main(String[] args) {
    UserDao userDao = new UserDao();
    List<User> largePageList = userDao.largePageList(1, 5, 0); // page 1
    print(largePageList);
    System.out.println("============");
    // Page 2: must pass the largest id seen on the previous page
    List<User> largePageList2 = userDao.largePageList(2, 5, 5);
    print(largePageList2);
    System.out.println("============");
    // Page 3: must pass the largest id seen on the previous page
    List<User> largePageList3 = userDao.largePageList(3, 5, 11);
    print(largePageList3);
}

public static void print(List<User> largePageList) {
    for (User user : largePageList) {
        System.out.println(user);
    }
}
The output is as follows:
id:1,name:jinhui,address:beijing
id:2,name:manman,address:beijing
id:3,name:3,address:3
id:4,name:4,address:4
id:5,name:5,address:5
============
id:6,name:6,address:6
id:7,name:7,address:7
id:8,name:8,address:8
id:9,name:9,address:9
id:11,name:11,address:11
============
id:12,name:12,address:12
We rely on sorting by a column and recording the largest value of that sort column on the previous page; the next page is then fetched with a simple range query, which avoids the skip operation entirely.
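This keyset (seek) pagination idea can be illustrated without a running MongoDB instance. The sketch below is an assumption-laden simplification: a hypothetical in-memory `nextPage` helper that pages through an ascending list of ids using only "id greater than lastId" plus a limit, mirroring the `find({id: {$gt: lastId}}).sort({id: 1}).limit(pageSize)` query above:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class KeysetPaging {
    // Return at most pageSize ids strictly greater than lastId.
    // Mirrors find({id: {$gt: lastId}}).sort({id: 1}).limit(pageSize):
    // the database walks the id index from lastId forward, never skipping.
    static List<Integer> nextPage(List<Integer> sortedIds, int lastId, int pageSize) {
        List<Integer> page = new ArrayList<>();
        for (int id : sortedIds) {               // sortedIds is ascending, like an index scan
            if (id > lastId) {
                page.add(id);
                if (page.size() == pageSize) {
                    break;                       // limit(pageSize)
                }
            }
        }
        return page;
    }

    public static void main(String[] args) {
        // ids 1..9, 11, 12 — note 10 is absent, as in the sample output above
        List<Integer> ids = Arrays.asList(1, 2, 3, 4, 5, 6, 7, 8, 9, 11, 12);
        List<Integer> page1 = nextPage(ids, 0, 5);                              // first page
        List<Integer> page2 = nextPage(ids, page1.get(page1.size() - 1), 5);    // lastId = 5
        List<Integer> page3 = nextPage(ids, page2.get(page2.size() - 1), 5);    // lastId = 11
        System.out.println(page1 + " " + page2 + " " + page3);
    }
}
```

The caller only ever passes forward the last id it saw, so the cost of fetching page N is independent of N, whereas `skip((N - 1) * pageSize)` still scans and discards every earlier document.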