Every web response returned for a request has a type, expressed in the Content-Type header.
We can read it through the HttpClient API.
Example:
package com.gcx.demo.HelloWorld2;

import org.apache.http.HttpEntity;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

public class App2 {
    public static void main(String[] args) throws Exception {
        CloseableHttpClient httpClient = HttpClients.createDefault(); // create the HttpClient instance
        HttpGet httpGet = new HttpGet("https://www.baidu.com"); // create the HttpGet instance
        httpGet.setHeader("User-Agent", "Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:50.0) Gecko/20100101 Firefox/50.0"); // set the User-Agent request header
        CloseableHttpResponse response = httpClient.execute(httpGet); // execute the HTTP GET request
        HttpEntity entity = response.getEntity(); // get the response entity
        System.out.println("Content-Type:" + entity.getContentType().getValue());
        response.close(); // close the response
        httpClient.close(); // close the HttpClient
    }
}
Sample output:
Content-Type:text/html; charset=utf-8
If you point the HttpGet at a JavaScript resource instead (a .js URL), the same code prints a different type, for example:
Content-Type:application/javascript
Some readers may wonder what use Content-Type is for a crawler. When crawling, we can use Content-Type to pick out the pages we actually want to parse, or to filter out responses we want to skip.
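The filtering idea above can be sketched as a small helper that inspects the Content-Type value before deciding whether to parse a response. Note that `ContentTypeFilter` and `isHtml` are hypothetical names introduced here for illustration, not part of the HttpClient API:

```java
// Minimal sketch of filtering by Content-Type.
// ContentTypeFilter and isHtml are hypothetical names, not HttpClient API.
public class ContentTypeFilter {

    // Return true when the Content-Type header value denotes an HTML page,
    // e.g. "text/html" or "text/html; charset=utf-8".
    public static boolean isHtml(String contentType) {
        if (contentType == null) {
            return false;
        }
        // Strip any parameters such as "; charset=utf-8" before comparing
        String mimeType = contentType.split(";")[0].trim().toLowerCase();
        return mimeType.equals("text/html");
    }

    public static void main(String[] args) {
        System.out.println(isHtml("text/html; charset=utf-8")); // true: parse this page
        System.out.println(isHtml("application/javascript"));   // false: skip this response
    }
}
```

In a crawler loop, you would call such a check on `entity.getContentType().getValue()` right after executing the request, and only hand the body to your HTML parser when it passes.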