Setting the HTTP headers matters: it can convince some servers that they are talking to a browser, which avoids the risk of being denied access.
If no headers are set at all, as below, some sites that load fine in a browser will return a 403 error when fetched by a crawler:
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

CloseableHttpClient httpclient = HttpClients.createDefault();
HttpGet httpget = new HttpGet(myURL);
CloseableHttpResponse response = httpclient.execute(httpget);
System.out.println(response.getStatusLine().getStatusCode()); // 403 error
So the headers should be set as follows:
httpget.addHeader("Accept", "text/html");
httpget.addHeader("Accept-Charset", "utf-8");
httpget.addHeader("Accept-Encoding", "gzip");
httpget.addHeader("Accept-Language", "en-US,en");
httpget.addHeader("User-Agent", "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.22 (KHTML, like Gecko) Chrome/25.0.1364.160 Safari/537.22");
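The same idea can be sketched with the JDK's built-in java.net.http client (available since Java 11), without the Apache HttpClient dependency. The class name `HeaderDemo` and the URL below are illustrative placeholders; the request is only built and inspected here, not sent:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class HeaderDemo {
    // Build a GET request carrying the same browser-like headers as above.
    static HttpRequest buildRequest(String url) {
        return HttpRequest.newBuilder(URI.create(url))
                .header("Accept", "text/html")
                .header("Accept-Charset", "utf-8")
                .header("Accept-Encoding", "gzip")
                .header("Accept-Language", "en-US,en")
                .header("User-Agent", "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.22 (KHTML, like Gecko) Chrome/25.0.1364.160 Safari/537.22")
                .GET()
                .build();
    }

    public static void main(String[] args) {
        HttpRequest req = buildRequest("https://example.com/");
        // Inspect a header on the built request without sending it.
        System.out.println(req.headers().firstValue("User-Agent").orElse("(none)"));
    }
}
```

Sending it with `HttpClient.newHttpClient().send(req, HttpResponse.BodyHandlers.ofString())` would then carry these headers to the server.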
Header reference: http://kb.cnblogs.com/page/92320/