This post implements a crawler in Java: HttpClient downloads the page, and Jsoup parses the data out of it. The analysis of the target page itself is not belabored here; readers can work through it on their own.
1. On the home page, search for "手机" (mobile phone) and inspect the URL parameters; move the "&page=" parameter to the end of the URL so the page number is easy to concatenate;
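The URL step above can be sketched as a tiny helper. Note the base URL below is an assumption modeled on a typical JD search URL; the real parameters on your page may differ:

```java
public class SearchUrlBuilder {
    // Base search URL with "&page=" moved to the end so the page
    // number can simply be concatenated ("手机" = mobile phone).
    // The exact parameters here are an assumption, not taken from the page.
    private static final String BASE =
            "https://search.jd.com/Search?keyword=手机&enc=utf-8&page=";

    // Build the request URL for the given page number
    public static String buildUrl( int page ) {
        return BASE + page;
    }

    public static void main( String[] args ) {
        System.out.println( buildUrl( 1 ) );
        System.out.println( buildUrl( 3 ) );
    }
}
```

Keeping "&page=" last means the crawl loop only ever appends a number, with no string surgery in the middle of the query string.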
2. View the page source and locate the product-list markup, then extract the data in four steps:
Step 1, get the product list: Elements spuEles = doc.select( "div#J_goodsList>ul>li" );
Step 2, get the spu: long spu = Long.parseLong( spuEle.attr( "data-spu" ) );
Step 3, get the sku list: Elements skuEles = spuEle.select( "li.ps-item" );
Step 4, get each individual sku: long sku = Long.parseLong( skuEle.select( "[data-sku]" ).attr( "data-sku" ) );
everything after this is driven by that sku;
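The four steps above can be exercised against a minimal HTML fragment. The fragment below only mimics the nesting described above (it is not the real page markup), and the ids 100001/200001 are made-up sample values:

```java
import java.util.ArrayList;
import java.util.List;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;

public class SpuSkuDemo {
    // Extract every sku id following steps 1-4 above
    public static List<Long> parseSkus( String html ) {
        List<Long> skus = new ArrayList<>();
        Document doc = Jsoup.parse( html );
        // Step 1: the product (spu) list
        Elements spuEles = doc.select( "div#J_goodsList>ul>li" );
        for ( Element spuEle : spuEles ) {
            // Step 2: the spu id
            long spu = Long.parseLong( spuEle.attr( "data-spu" ) );
            // Step 3: the sku list under this spu
            Elements skuEles = spuEle.select( "li.ps-item" );
            for ( Element skuEle : skuEles ) {
                // Step 4: each individual sku id
                long sku = Long.parseLong( skuEle.select( "[data-sku]" ).attr( "data-sku" ) );
                skus.add( sku );
            }
        }
        return skus;
    }

    public static void main( String[] args ) {
        // Stand-in markup that mirrors the structure described above
        String html = "<div id=\"J_goodsList\"><ul>"
                + "<li data-spu=\"100001\"><ul class=\"ps\">"
                + "<li class=\"ps-item\"><a data-sku=\"200001\"></a></li>"
                + "</ul></li></ul></div>";
        System.out.println( parseSkus( html ) ); // [200001]
    }
}
```

Note the child combinator in "div#J_goodsList>ul>li": it matches only the outer spu <li> elements, not the nested sku <li class="ps-item"> items, which is what lets steps 2 and 3 be scoped per spu.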
3. The code:
(1) Add the dependencies
<!-- https://mvnrepository.com/artifact/org.apache.httpcomponents/httpclient -->
<dependency>
    <groupId>org.apache.httpcomponents</groupId>
    <artifactId>httpclient</artifactId>
</dependency>
<!-- https://mvnrepository.com/artifact/org.jsoup/jsoup -->
<dependency>
    <groupId>org.jsoup</groupId>
    <artifactId>jsoup</artifactId>
    <version>1.11.3</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.commons/commons-lang3 -->
<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-lang3</artifactId>
    <version>3.4</version>
</dependency>
<!-- jpa -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
The database driver and other dependencies are omitted here;
(2) Wrap a utility class
@Component
public class HttpUtils {

    private PoolingHttpClientConnectionManager cm;

    public HttpUtils() {
        this.cm = new PoolingHttpClientConnectionManager();
        // Maximum total connections in the pool
        this.cm.setMaxTotal( 100 );
        // Maximum connections per host (route)
        this.cm.setDefaultMaxPerRoute( 10 );
    }

    // Download the page HTML for the given request URL
    public String doGetHtml( String url ) {
        // Get an httpclient backed by the shared connection pool
        CloseableHttpClient httpClient = HttpClients.custom()
                .setConnectionManager( this.cm )
                .build();
        HttpGet httpGet = new HttpGet( url );
        try ( CloseableHttpResponse response = httpClient.execute( httpGet ) ) {
            // Only read the body on a successful response
            if ( response.getStatusLine().getStatusCode() == 200 && response.getEntity() != null ) {
                return EntityUtils.toString( response.getEntity(), "utf8" );
            }
        } catch ( IOException e ) {
            e.printStackTrace();
        }
        return "";
    }
}