A crawler generally breaks down into three parts: data collection, processing, and storage.
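To make the three parts concrete, here is a minimal skeleton (a sketch of my own; the class and method names are hypothetical, not from this tutorial). Steps 1-4 below implement the collection stage with HttpClient:

public class CrawlerOutline {

    // Collection: download the raw page (HttpClient does this in step 4 below)
    static String fetch(String url) {
        return "<html>...</html>"; // placeholder for the downloaded HTML
    }

    // Processing: pull the data you care about out of the raw HTML
    static String process(String html) {
        return html; // e.g. extract titles and links with an HTML parser
    }

    // Storage: persist the extracted data
    static void store(String data) {
        System.out.println(data); // e.g. write to a file or a database
    }

    public static void main(String[] args) {
        store(process(fetch("http://weibo.com/login.php")));
    }
}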
1. Create a Maven project
2. Add the Maven coordinates
Search the Maven repository for HttpClient and slf4j (for logging):
https://mvnrepository.com  <-- the Maven repository site
Copy the coordinates directly into the pom.xml file.
As with HttpClient, pick the most widely used version of slf4j.
I have pasted both dependencies here:
<dependency>
    <groupId>org.apache.httpcomponents</groupId>
    <artifactId>httpclient</artifactId>
    <version>4.5.2</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.slf4j/slf4j-log4j12 -->
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-log4j12</artifactId>
    <version>1.7.25</version>
</dependency>
Note: mvnrepository suggests <scope>test</scope> for slf4j-log4j12; drop it here so that logging also works for classes under src/main/java.
3. Create a configuration file for the logger
Create a file named log4j.properties under src/main/resources and paste in the following:
log4j.rootLogger=DEBUG,A1
log4j.logger.cn.itcast = DEBUG
log4j.appender.A1=org.apache.log4j.ConsoleAppender
log4j.appender.A1.layout=org.apache.log4j.PatternLayout
log4j.appender.A1.layout.ConversionPattern=%-d{yyyy-MM-dd HH:mm:ss,SSS} [%t] [%c]-[%p] %m%n
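To verify the setup, a quick sketch (the LogTest class is my own check, not part of this walkthrough): since both the root logger and cn.itcast are at DEBUG, this message should appear on the console in the pattern configured above.

package cn.itcast.crawler.test;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LogTest {

    private static final Logger logger = LoggerFactory.getLogger(LogTest.class);

    public static void main(String[] args) {
        // Printed by the A1 console appender defined in log4j.properties
        logger.debug("log4j configuration works");
    }
}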
4. Create a test class that crawls the content of the Weibo main page
package cn.itcast.crawler.test;

import org.apache.http.HttpEntity;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

public class CrawlerFirst {

    public static void main(String[] args) throws Exception {
        // 1. "Open the browser": create an HttpClient instance
        CloseableHttpClient httpClient = HttpClients.createDefault();

        // 2. "Enter the URL": create an HttpGet request object
        HttpGet httpGet = new HttpGet("http://weibo.com/login.php");

        // 3. Send the request with the HttpClient and receive the response
        CloseableHttpResponse response = httpClient.execute(httpGet);

        // 4. Parse the response and extract the data
        // 4.1 Check that the status code is 200
        if (response.getStatusLine().getStatusCode() == 200) {
            // 4.2 Read the response body as a string
            HttpEntity httpEntity = response.getEntity();
            String content = EntityUtils.toString(httpEntity, "UTF-8");
            System.out.println(content);
        }

        // 5. Release the connection resources
        response.close();
        httpClient.close();
    }
}
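If everything is wired up correctly, the console prints the raw HTML of the page, preceded by DEBUG output from HttpClient itself (the root logger is set to DEBUG). As a follow-up, here is the same request rewritten with try-with-resources, a sketch of my own rather than the tutorial's code, so the client and response are closed even if parsing throws:

package cn.itcast.crawler.test;

import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

public class CrawlerFirstClosing {

    public static void main(String[] args) throws Exception {
        // Both resources are closed automatically, in reverse order
        try (CloseableHttpClient httpClient = HttpClients.createDefault();
             CloseableHttpResponse response = httpClient.execute(new HttpGet("http://weibo.com/login.php"))) {
            if (response.getStatusLine().getStatusCode() == 200) {
                System.out.println(EntityUtils.toString(response.getEntity(), "UTF-8"));
            }
        }
    }
}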