Python Crawler Basics: A Comparison with Java Crawlers

Compared with Java, Python has a much richer set of HTTP libraries. A task that takes dozens of lines of Java code often needs only a dozen or so lines of Python, for example opening a web page and saving it to disk.

Java code:

import java.io.BufferedReader;
import java.io.FileWriter;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.MalformedURLException;
import java.net.URL;
import java.net.URLConnection;

public class HeaderTest {
    public static void main(String[] args) {
        new HeaderTest().downloadHtml("http://www.bilibili.com");
    }

    public void downloadHtml(String link) {
        BufferedReader reader = null;
        FileWriter writer = null;
        try {
            URL url = new URL(link);
            URLConnection con = url.openConnection();
            reader = new BufferedReader(new InputStreamReader(con.getInputStream()));
            writer = new FileWriter("f://test1.txt");
            String buff;
            while ((buff = reader.readLine()) != null) {
                // readLine() strips the line terminator, so add it back
                writer.write(buff);
                writer.write(System.lineSeparator());
            }
        } catch (MalformedURLException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            try {
                // Guard against NullPointerException if opening the connection failed
                if (reader != null) {
                    reader.close();
                }
                if (writer != null) {
                    writer.close();
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}

Python code:

import urllib.request
import urllib.error

def downloadHtml(link):
    try:
        data = urllib.request.urlopen(link).read()
        text = data.decode('utf-8', 'ignore')
        # Write with an explicit encoding so non-ASCII pages save cleanly
        with open('f:\\test2.txt', 'w', encoding='utf-8') as file:
            file.write(text)
    except urllib.error.URLError as e:
        print(e.reason)

downloadHtml("http://www.bilibili.com")
However, crawling a site this way is not robust. First, a site may block access based on the type of visitor; second, a crawler hits the site very frequently, and the site may ban our IP.
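
For the second problem, the simplest mitigation is to pause between requests so the visits do not arrive in a burst. Below is a minimal Python sketch; the 1-3 second range and the URL list are illustrative choices, not tuned values:

import time
import random
import urllib.request

def politeDownload(links):
    for link in links:
        data = urllib.request.urlopen(link).read()
        print(len(data), "bytes from", link)
        # Sleep a random 1-3 seconds between requests so the site
        # does not see a rapid burst of hits from our IP
        # (the range is an arbitrary choice for illustration)
        time.sleep(random.uniform(1, 3))

politeDownload(["http://www.bilibili.com"])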

We need to disguise our requests from the site. In Java, a common way to do this is with the HttpClient jar.

The Java code is as follows:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

import org.apache.http.HttpEntity;
import org.apache.http.HttpHost;
import org.apache.http.client.ClientProtocolException;
import org.apache.http.client.config.RequestConfig;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

public class HttpclientTest {
    public static void main(String[] args) {
        System.out.println(new HttpclientTest().gethtml("http://www.bilibili.com"));
    }

    public String gethtml(String url) {
        // Create the client; CloseableHttpClient comes from the HttpClient jar
        CloseableHttpClient httpclient = HttpClients.createDefault();
        // Reader for the HTML stream
        BufferedReader reader = null;
        // Build the GET request
        HttpGet getmethod = new HttpGet(url);
        // Proxy IP to route the request through
        HttpHost proxy = new HttpHost("124.88.67.81", 80);
        // Set the proxy and the timeouts
        RequestConfig config = RequestConfig.custom()
                .setProxy(proxy)
                .setConnectTimeout(5000)
                .setConnectionRequestTimeout(5000)
                .setSocketTimeout(5000)
                .build();
        // Request header: pretend to be the Chrome browser
        getmethod.addHeader("User-Agent", "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36");
        getmethod.setConfig(config);
        try {
            // Execute the request and get the response
            CloseableHttpResponse response = httpclient.execute(getmethod);
            // Response body
            HttpEntity entity = response.getEntity();
            // Wrap the body in a reader
            reader = new BufferedReader(new InputStreamReader(entity.getContent()));
            // Read the HTML line by line
            String buff;
            StringBuilder sb = new StringBuilder();
            while ((buff = reader.readLine()) != null) {
                sb.append(buff);
            }
            return sb.toString();
        } catch (ClientProtocolException e) {
            e.printStackTrace();
        } catch (IOException e) {
            System.out.println("Read failed");
        } finally {
            try {
                // Guard against NullPointerException if the request failed
                if (reader != null) {
                    reader.close();
                }
                httpclient.close();
            } catch (IOException e) {
                System.out.println("Failed to close resources");
            }
        }
        return null;
    }
}

In this code we set a proxy IP, and we set the User-Agent request header to a Chrome browser string, so the site will take us for a browser.

Python code:

import urllib.request
import urllib.error

def downloadHtml(link):
    try:
        # Route the request through a proxy IP
        proxy = {"http": "123.179.128.170:8080"}
        proxy_support = urllib.request.ProxyHandler(proxy)
        # Pretend to be Chrome via the User-Agent header
        headers = ("User-Agent", "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36")
        opener = urllib.request.build_opener(proxy_support)
        opener.addheaders = [headers]
        data = opener.open(link).read()
        text = data.decode('utf-8', 'ignore')
        print(text)
    except urllib.error.URLError as e:
        print(e.reason)

def downloadHtml2(link):
    # Alternative: set the header on a Request object instead of an opener
    req = urllib.request.Request(link)
    req.add_header("User-Agent", "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36")
    data = urllib.request.urlopen(req, timeout=1).read()
    text = data.decode("utf-8", "ignore")
    print(text)

downloadHtml("http://blog.csdn.net/weiwei_pig/article/details/51178226")
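
A single proxy or User-Agent string can itself get banned, so a natural next step is to pick a different one for each request. Below is a minimal sketch in the same style, assuming hypothetical pools; the proxy addresses are placeholders, not verified working proxies:

import random
import urllib.request
import urllib.error

# Hypothetical pools; real proxy addresses would have to be found and verified
proxies = ["123.179.128.170:8080", "124.88.67.81:80"]
user_agents = [
    "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36",
    "Mozilla/5.0 (Windows NT 6.1; rv:52.0) Gecko/20100101 Firefox/52.0",
]

def downloadHtml3(link):
    # Pick a random proxy and User-Agent for each request
    proxy_support = urllib.request.ProxyHandler({"http": random.choice(proxies)})
    opener = urllib.request.build_opener(proxy_support)
    opener.addheaders = [("User-Agent", random.choice(user_agents))]
    try:
        data = opener.open(link, timeout=5).read()
        print(data.decode("utf-8", "ignore"))
    except urllib.error.URLError as e:
        print(e.reason)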
