I've also only just started with Java crawlers, so this post begins with the basics.
First, some benefits of crawlers:
- They can be used to build search engines
- In the big-data era, they give us access to more data sources
- They can help with search engine optimization (SEO) (a less common use)
- They're a useful skill for employment
A crawler consists of three main parts: collection, processing, and storage.
Let's start with a simple crawler example.
Create a Maven project in IDEA.
Add HttpClient and log4j as dependencies in pom.xml:
```xml
<!-- https://mvnrepository.com/artifact/org.apache.httpcomponents/httpclient -->
<dependency>
    <groupId>org.apache.httpcomponents</groupId>
    <artifactId>httpclient</artifactId>
    <version>4.5.2</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.slf4j/slf4j-log4j12 -->
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-log4j12</artifactId>
    <version>1.7.25</version>
    <!--<scope>test</scope>-->
</dependency>
```
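Both snippets go inside the project's `<dependencies>` element. A minimal pom.xml skeleton might look like the following (the `groupId`, `artifactId`, and `version` here are placeholders, not from the original post):

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
    <modelVersion>4.0.0</modelVersion>
    <groupId>cn.itcast</groupId>
    <artifactId>crawler-demo</artifactId>
    <version>1.0-SNAPSHOT</version>
    <dependencies>
        <!-- the httpclient and slf4j-log4j12 dependencies above go here -->
    </dependencies>
</project>
```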
Create a log4j.properties file under resources:
```properties
### Settings ###
log4j.rootLogger = DEBUG,A1
log4j.logger.cn.itcast = DEBUG
log4j.appender.A1 = org.apache.log4j.ConsoleAppender
log4j.appender.A1.layout = org.apache.log4j.PatternLayout
log4j.appender.A1.layout.ConversionPattern = %-d{yyyy-MM-dd HH:mm:ss} [ %t:%r ] - [ %p ] %m%n
```
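For reference, the conversion characters used in that pattern (per log4j 1.x's PatternLayout) mean:

```properties
# %d - date/time of the logging event (here formatted as yyyy-MM-dd HH:mm:ss)
# %t - name of the thread that generated the event
# %r - milliseconds elapsed since the application started
# %p - priority (level) of the event, e.g. DEBUG, INFO
# %m - the log message itself
# %n - platform-dependent line separator
```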
Then the main code:
```java
package cn.itcast.crawler.test;

import org.apache.http.HttpEntity;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

import java.io.IOException;

public class CrawlerFirst {
    public static void main(String[] args) throws IOException {
        // "Open the browser": create an HttpClient instance
        CloseableHttpClient httpClient = HttpClients.createDefault();
        // "Type in the address": create a GET request
        HttpGet httpGet = new HttpGet("https://data.sh.gov.cn/");
        // "Press Enter": send the request with the HttpClient and get the response
        CloseableHttpResponse response = httpClient.execute(httpGet);
        try {
            // Parse the response and extract the data,
            // but only if the status code is 200 (OK)
            if (response.getStatusLine().getStatusCode() == 200) {
                HttpEntity httpEntity = response.getEntity();
                String content = EntityUtils.toString(httpEntity, "UTF-8");
                System.out.println(content);
            }
        } finally {
            // Release the connection and the client
            response.close();
            httpClient.close();
        }
    }
}
```
The console will print the HTML content of the page.
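In a real crawler the URL usually carries query parameters, and those must be percent-encoded before the request is sent. A minimal sketch using the JDK's `URLEncoder` (the `buildUrl` helper, the search path, and the parameter are made up for illustration):

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class UrlBuilder {
    // Hypothetical helper: append one URL-encoded query parameter to a base URL
    public static String buildUrl(String base, String key, String value)
            throws UnsupportedEncodingException {
        return base + "?" + URLEncoder.encode(key, "UTF-8")
                + "=" + URLEncoder.encode(value, "UTF-8");
    }

    public static void main(String[] args) throws UnsupportedEncodingException {
        // Spaces become '+', and non-ASCII characters become %XX escapes
        System.out.println(buildUrl("https://data.sh.gov.cn/search", "q", "java crawler"));
        // prints: https://data.sh.gov.cn/search?q=java+crawler
    }
}
```

The resulting string can then be passed straight to `new HttpGet(...)` as in the example above.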