This article shows how to crawl the data of the Chinese edition of "Elasticsearch: The Definitive Guide" with WebMagic.
The complete code is available on GitHub.

Dependencies

Create a new Maven project and add the following dependencies (WebMagic plus a few helpers):
<dependency>
    <groupId>us.codecraft</groupId>
    <artifactId>webmagic-core</artifactId>
    <version>0.7.3</version>
</dependency>
<dependency>
    <groupId>us.codecraft</groupId>
    <artifactId>webmagic-extension</artifactId>
    <version>0.7.3</version>
</dependency>
<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-lang3</artifactId>
    <version>3.8</version>
</dependency>
<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-to-slf4j</artifactId>
    <version>2.11.2</version>
</dependency>
<dependency>
    <groupId>org.projectlombok</groupId>
    <artifactId>lombok</artifactId>
    <version>1.18.8</version>
</dependency>
Create the PageProcessor
package app.processor;

import lombok.extern.slf4j.Slf4j;
import org.apache.commons.lang3.StringUtils;
import us.codecraft.webmagic.Page;
import us.codecraft.webmagic.Site;
import us.codecraft.webmagic.processor.PageProcessor;

import java.time.LocalDateTime;
import java.util.List;
import java.util.stream.Collectors;

/**
 * Crawl logic
 *
 * @author faith.huan 2019-10-13 14:37:01
 */
@Slf4j
public class EsDocPageProcessor implements PageProcessor {

    /**
     * Part 1: crawl configuration for the target site, including charset, crawl interval, retry count, etc.
     */
    private Site site = Site.me().setRetryTimes(3).setSleepTime(1000).setTimeOut(10000);

    @Override
    public void process(Page page) {
        String currentUrl = page.getUrl().toString();
        if (isIndexPage(currentUrl)) {
            // On the index page, add all links found on it to the crawl queue
            List<String> subUrls = page.getHtml().links().all().stream()
                    // Only keep links to Chinese pages
                    .filter(EsDocPageProcessor::isCnPage)
                    .collect(Collectors.toList());
            page.addTargetRequests(subUrls);
        } else {
            /*
             * Extract the title and content via XPath
             */
            String title = page.getHtml().xpath("//*[@class='title']/text()").toString();
            String content = String.join(" ", page.getHtml().xpath("//p/text()").all());
            if (StringUtils.isAnyBlank(title, content)) {
                // If either the title or the content is blank, do not save the page
                page.setSkip(true);
            } else {
                page.putField("title", title);
                page.putField("content", content);
                page.putField("url", currentUrl);
                page.putField("crawlDate", LocalDateTime.now().toString());
            }
        }
    }

    /**
     * Check whether the URL is the index page
     */
    private boolean isIndexPage(String url) {
        return StringUtils.endsWith(url, "current/index.html");
    }

    /**
     * Check whether the URL points to a Chinese page, i.e. whether it contains /cn/
     */
    private static boolean isCnPage(String url) {
        return StringUtils.contains(url, "/cn/");
    }

    @Override
    public Site getSite() {
        return site;
    }
}
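The comment on the site field mentions the charset among the configurable items, although the code only sets retries, sleep time and timeout. If the default encoding detection or user agent is not what you want, Site exposes setters for those as well. A minimal sketch with illustrative values (the user-agent string is just an example):

// Optional: pin the page charset and identify the crawler explicitly (illustrative values)
private Site site = Site.me()
        .setCharset("utf-8")
        .setUserAgent("Mozilla/5.0 (compatible; es-doc-crawler)")
        .setRetryTimes(3)
        .setSleepTime(1000)
        .setTimeOut(10000);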
Create the main method
package app;

import app.processor.EsDocPageProcessor;
import lombok.extern.slf4j.Slf4j;
import us.codecraft.webmagic.Spider;
import us.codecraft.webmagic.pipeline.JsonFilePipeline;

import java.io.File;

/**
 * Crawls the data of the Chinese edition of "Elasticsearch: The Definitive Guide"
 * https://www.elastic.co/guide/cn/elasticsearch/guide/current/index.html
 *
 * @author faith.huan 2019-07-21 04:00:54
 */
@Slf4j
public class CrawlingData {

    public static void main(String[] args) {
        // Start URL of the crawl
        String beginUrl = "https://www.elastic.co/guide/cn/elasticsearch/guide/current/index.html";
        // Directory where the crawl results are stored
        String dir = "D:/es-doc/权威指南";
        try {
            File file = new File(dir);
            if (!file.exists()) {
                // mkdirs() also creates any missing parent directories
                boolean res = file.mkdirs();
                if (res) {
                    log.info("Output directory created");
                }
            }
            Spider.create(new EsDocPageProcessor())
                    // Start crawling from this URL
                    .addUrl(beginUrl)
                    // Optionally set a Scheduler that uses Redis to manage the URL queue
                    //.setScheduler(new RedisScheduler("localhost"))
                    // Add a Pipeline that saves the results to files as JSON
                    .addPipeline(new JsonFilePipeline(dir))
                    // Run with 5 threads
                    .thread(5)
                    // Start the crawler
                    .run();
        } catch (Exception e) {
            log.error("Exception while starting the crawler", e);
        }
    }
}
Simply run the main method and the guide data will be crawled.
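JsonFilePipeline stores the fields collected in the processor (title, content, url, crawlDate) as JSON files under the configured directory, one file per crawled page. For a quick dry run that only prints the extracted fields instead of writing files, WebMagic's ConsolePipeline can be swapped in; a minimal sketch, reusing beginUrl from above:

// ConsolePipeline (us.codecraft.webmagic.pipeline.ConsolePipeline) prints each page's fields to stdout
Spider.create(new EsDocPageProcessor())
        .addUrl(beginUrl)
        .addPipeline(new ConsolePipeline())
        .thread(1)
        .run();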