1.0 Introduction
A web crawler is a program or script that automatically fetches information from the World Wide Web according to a set of rules.
1.1 Getting Started with Java Crawling
1.1.1 Environment Preparation
- JDK (link: how to check your version)
- IntelliJ IDEA
- The Maven bundled with IDEA
Versions used (IntelliJ IDEA Ultimate + JDK 11.0.11 + bundled Maven)
- JetBrains makes it convenient to manage multiple IDEA installations
- Top left: File -> Project Structure -> Project Settings -> Project (where the project SDK is selected)
- Manage the build with the bundled Maven; the main code and resources live under main
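To confirm which JDK the code actually runs on (complementing the version-check link above), you can print the JVM's own system properties. A minimal sketch, using only standard properties; the class name JdkCheck is illustrative:

```java
public class JdkCheck {
    public static void main(String[] args) {
        // Version string reported by the running JVM, e.g. "11.0.11"
        System.out.println("java.version = " + System.getProperty("java.version"));
        // Installation directory of the JDK in use
        System.out.println("java.home    = " + System.getProperty("java.home"));
        // Vendor, useful when several JDKs are installed side by side
        System.out.println("java.vendor  = " + System.getProperty("java.vendor"));
    }
}
```

Alternatively, run java -version in a terminal. If the version printed here differs from the SDK selected under Project Structure, IDEA is running the program on a different runtime than the one the project compiles against.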
JDK vs. SDK
- The JDK is the Java Development Kit, used for writing Java programs; in other words, to use the Java language you must install a JDK
- An SDK is a software development kit, a broader concept: almost any set of programming tools can be considered an SDK
- "SDK" on its own is too broad to mean much; the Android SDK, for instance, is the specific kit of tools and libraries for building apps that run on Android devices
- In short, the JDK is one particular kind of SDK!
1.1.2 Environment Configuration
- Create the project skeleton: New Module -> Maven -> GroupId & ArtifactId, etc.
- Declare the dependencies in pom.xml: Apache HttpClient and the SLF4J LOG4J 12 Binding (link: Maven Repository)
- Create the log configuration file log4j.properties under resources
pom.xml configuration
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>cn.itcast</groupId>
    <artifactId>itcast-crawler</artifactId>
    <version>1.0-SNAPSHOT</version>

    <properties>
        <maven.compiler.source>11</maven.compiler.source>
        <maven.compiler.target>11</maven.compiler.target>
    </properties>

    <dependencies>
        <!-- https://mvnrepository.com/artifact/org.apache.httpcomponents/httpclient -->
        <dependency>
            <groupId>org.apache.httpcomponents</groupId>
            <artifactId>httpclient</artifactId>
            <version>4.5.13</version>
        </dependency>
        <!-- Logging -->
        <!-- https://mvnrepository.com/artifact/org.slf4j/slf4j-log4j12 -->
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-log4j12</artifactId>
            <version>1.7.25</version>
            <!--<scope>test</scope>-->
        </dependency>
    </dependencies>
</project>
log4j.properties configuration
# A1: print to the console
log4j.rootLogger=DEBUG,A1
log4j.logger.cn.itcast=DEBUG
log4j.appender.A1=org.apache.log4j.ConsoleAppender
log4j.appender.A1.layout=org.apache.log4j.PatternLayout
# %d date, %t thread, %c logger name, %p level, %m message, %n newline
log4j.appender.A1.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS} [%t] [%c]-[%p] %m%n
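With the slf4j-log4j12 binding from the pom.xml and this properties file on the classpath, any SLF4J logger is routed to the A1 console appender automatically. A minimal usage sketch (the class name CrawlerLogDemo is illustrative, not part of the original notes):

```java
package cn.itcast.crawler.test;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class CrawlerLogDemo {
    // Logger named after the class; it falls under the cn.itcast prefix
    // configured at DEBUG level in log4j.properties
    private static final Logger logger = LoggerFactory.getLogger(CrawlerLogDemo.class);

    public static void main(String[] args) {
        // Each call is rendered by A1's ConversionPattern, e.g.
        // [main] [cn.itcast.crawler.test.CrawlerLogDemo]-[DEBUG] starting crawl
        logger.debug("starting crawl");
        // SLF4J fills {} placeholders lazily, avoiding string
        // concatenation when the level is disabled
        logger.info("fetched {} bytes", 1024);
    }
}
```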
1.1.3 Writing the Program
- Create a new Java class under main, and you can start coding
Code
package cn.itcast.crawler.test;

import org.apache.http.HttpEntity;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

public class CrawlerFirst {
    public static void main(String[] args) throws Exception {
        // 1. "Open a browser": create an HttpClient object
        CloseableHttpClient httpClient = HttpClients.createDefault();
        // 2. "Type in the URL": create an HttpGet object for the request
        HttpGet httpGet = new HttpGet("https://www.itcast.cn");
        // 3. "Press Enter": execute the request with HttpClient and get the response
        CloseableHttpResponse response = httpClient.execute(httpGet);
        try {
            // 4. Parse the response and extract the data,
            //    but only if the status code is 200 (OK)
            if (response.getStatusLine().getStatusCode() == 200) {
                HttpEntity httpEntity = response.getEntity();
                String content = EntityUtils.toString(httpEntity, "utf8");
                System.out.println(content);
            }
        } finally {
            // Release the connection and client resources
            response.close();
            httpClient.close();
        }
    }
}
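Since the pom.xml above targets Java 11, the same page can also be fetched with the JDK's built-in java.net.http.HttpClient, with no extra dependency. A minimal sketch for comparison (not part of the original tutorial; the class name CrawlerJdk11 is illustrative):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CrawlerJdk11 {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // GET is the default method when none is specified on the builder
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://www.itcast.cn"))
                .build();
        // Read the response body into a String
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        if (response.statusCode() == 200) {
            System.out.println(response.body());
        }
    }
}
```

Unlike CloseableHttpClient, this client holds no closeable resources, so there is no close() to call.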