Pagination here is implemented with skip + limit. Straight to the code (the supporting classes are in the previous post):
/**
 * Paged query
 * @param page     1-based page number
 * @param pageSize number of documents per page
 * @return the users on the requested page
 */
public List<User> pageList(int page, int pageSize) {
    DB myMongo = MongoManager.getDB("myMongo");
    DBCollection userCollection = myMongo.getCollection("user");
    // Skip the previous pages, then take one page. An empty sort document
    // means natural order; use a real sort key (e.g. _id) if you need
    // stable, repeatable pages.
    DBCursor cursor = userCollection.find()
            .skip((page - 1) * pageSize)
            .sort(new BasicDBObject())
            .limit(pageSize);
    List<User> userList = new ArrayList<User>();
    while (cursor.hasNext()) {
        User user = new User();
        user.parse(cursor.next());
        userList.add(user);
    }
    return userList;
}
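A pagination UI usually also needs the total number of pages. That is just a ceiling division of the document count (which `DBCollection.count()` would supply) by the page size; a minimal sketch, with the class and method names being hypothetical:

```java
public class PageMath {
    // Ceiling division: how many pages of size pageSize are needed to
    // cover `count` documents. `count` would come from userCollection.count().
    static int totalPages(long count, int pageSize) {
        return (int) ((count + pageSize - 1) / pageSize);
    }

    public static void main(String[] args) {
        System.out.println(totalPages(13, 10)); // 13 users -> prints 2
    }
}
```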
List<User> list = userAction.pageList(1, 10);
for (User user : list) {
    System.out.println(user);
}
System.out.println("=======================");
list = userAction.pageList(2, 10);
for (User user : list) {
    System.out.println(user);
}
The output:
id:2,name:manman,address:beijing
id:1,name:jinhui,address:beijing
id:3,name:3,address:3
id:4,name:4,address:4
id:5,name:5,address:5
id:6,name:6,address:6
id:7,name:7,address:7
id:8,name:8,address:8
id:9,name:9,address:9
id:10,name:10,address:10
=======================
id:11,name:11,address:11
id:12,name:12,address:12
id:13,name:13,address:13
Note:
skip is a fine choice for skipping a small number of documents, but it becomes slow over large offsets, because the server still has to walk past every skipped document before returning results. Avoid skip for deep pagination; a later post will cover how to optimize paging over large data sets.
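One common optimization (not from this post, but the usual fix for the skip problem above) is seek pagination: remember the last _id of the current page and query for documents greater than it, e.g. `find(new BasicDBObject("_id", new BasicDBObject("$gt", lastId))).limit(pageSize)`, so the server uses the index instead of scanning skipped documents. A minimal in-memory sketch of the idea, with hypothetical names and plain lists standing in for the collection:

```java
import java.util.ArrayList;
import java.util.List;

public class SeekPagination {
    // Skip-style paging: offset into the full result set.
    // The server does O(offset) work to reach the page.
    static List<Integer> pageBySkip(List<Integer> ids, int page, int pageSize) {
        int from = (page - 1) * pageSize;
        if (from >= ids.size()) {
            return new ArrayList<Integer>();
        }
        int to = Math.min(from + pageSize, ids.size());
        return new ArrayList<Integer>(ids.subList(from, to));
    }

    // Seek-style paging: resume from the last id seen on the previous page,
    // analogous to a {"_id": {"$gt": lastId}} query with a limit.
    static List<Integer> pageAfter(List<Integer> ids, int lastId, int pageSize) {
        List<Integer> out = new ArrayList<Integer>();
        for (int id : ids) {
            if (id > lastId) {
                out.add(id);
                if (out.size() == pageSize) {
                    break;
                }
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<Integer> ids = new ArrayList<Integer>();
        for (int i = 1; i <= 25; i++) {
            ids.add(i);
        }
        // Page 2 via skip and via "seek after id 10" return the same ids 11..20.
        System.out.println(pageBySkip(ids, 2, 10).equals(pageAfter(ids, 10, 10))); // prints true
    }
}
```

The two methods return the same pages as long as the sort key is unique and monotonic; the difference is that the seek variant never pays for the skipped rows.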