Crawling Web Pages with HttpClient 4.5 + Jsoup

As in my previous post: while learning to build a web crawler with HttpClient, I searched for material online, but most of it targets the older 3.1 or 4.3 releases. Having worked through HttpClient 4.5 on my own, I'd like to share the HttpClient + Jsoup page-crawling code I wrote.

Fetching pages with HttpClient + Jsoup is straightforward. First use HttpClient's GET method (see my previous post on GET if anything is unclear) to download the full content of a page, then parse that content with Jsoup to extract every link on the page, and finally issue GET requests for each of those links. This way we obtain the content of every page the starting page links to: if the starting page contains 50 links, for example, we end up with 50 pages. Here is the code.
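The fetch → extract links → fetch-each-link flow described above can be sketched with plain JDK classes. This is only an illustration of the control flow, not the real crawler: the page fetch is stubbed with an in-memory map standing in for HttpClient, a simple regex stands in for Jsoup's link extraction, and the class and method names (`CrawlSketch`, `crawl`, `extractLinks`) are made up for this sketch:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class CrawlSketch {

    // stand-in for HttpClient: a fake "web" mapping url -> html
    static Map<String, String> fakeWeb = new HashMap<>();

    static {
        fakeWeb.put("http://example.com/",
                "<a href=\"http://example.com/a\">a</a><a href=\"http://example.com/b\">b</a>");
        fakeWeb.put("http://example.com/a", "page a, no links");
        fakeWeb.put("http://example.com/b", "page b, no links");
    }

    // stand-in for Jsoup: pull href attribute values out with a regex
    static List<String> extractLinks(String html) {
        List<String> links = new ArrayList<>();
        Matcher m = Pattern.compile("href=\"([^\"]+)\"").matcher(html);
        while (m.find()) {
            links.add(m.group(1));
        }
        return links;
    }

    // the crawl loop: fetch the start page, then fetch every page it links to,
    // skipping links we have already visited
    static List<String> crawl(String start) {
        List<String> visited = new ArrayList<>();
        String html = fakeWeb.get(start);
        for (String url : extractLinks(html)) {
            if (!visited.contains(url)) {
                String page = fakeWeb.get(url); // GetPage.getPage(url) in the real code
                System.out.println(url + " -> " + page);
                visited.add(url);
            }
        }
        return visited;
    }

    public static void main(String[] args) {
        System.out.println(crawl("http://example.com/").size()); // prints 2
    }
}
```

In the real code below, `fakeWeb.get` becomes an HttpClient GET, `extractLinks` becomes a Jsoup `Document` query for `<a>` tags, and the visited list prevents fetching the same URL twice.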

Extracting the URLs

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.http.HttpEntity;
import org.apache.http.client.config.RequestConfig;
import org.apache.http.client.config.RequestConfig.Builder;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;

public class GetUrl {
	public static List<String> getUrl(String ur) {

		// create the default HttpClient instance
		CloseableHttpClient client = HttpClients.createDefault();

		// list of pages that have already been visited
		List<String> urllist = new ArrayList<String>();

		// create the GET request
		HttpGet get = new HttpGet(ur);

		// configure connection timeouts
		Builder custom = RequestConfig.custom();
		RequestConfig config = custom.setConnectTimeout(5000)
				.setConnectionRequestTimeout(1000)
				.setSocketTimeout(5000).build();
		get.setConfig(config);

		// set request headers to mimic a browser
		get.setHeader("Accept", "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8");
		get.setHeader("User-Agent", "Mozilla/5.0 (Windows NT 6.1; rv:6.0.2) Gecko/20100101 Firefox/6.0.2");

		try {
			// execute the GET request
			CloseableHttpResponse response = client.execute(get);
			try {
				// get the response entity and read it into a String
				HttpEntity entity = response.getEntity();
				String html = EntityUtils.toString(entity);

				// parse the String into a Document that Jsoup can work with
				Document doc = Jsoup.parse(html);

				// find all <a> tags on the page
				Elements links = doc.getElementsByTag("a");

				for (Element element : links) {
					// read the content of the href attribute
					String url = element.attr("href");
					// check that the href value is a URL on the target site
					// and that we have not visited it yet
					if (url.startsWith("http://blog.csdn.net/") && !urllist.contains(url)) {
						GetPage.getPage(url);
						System.out.println(url);
						urllist.add(url);
					}
				}
			} finally {
				response.close();
			}
		} catch (IOException e) {
			e.printStackTrace();
		} finally {
			try {
				client.close();
			} catch (IOException e) {
				e.printStackTrace();
			}
		}
		return urllist;
	}
}

Saving a page for each extracted URL

import java.io.BufferedReader;
import java.io.FileWriter;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.util.UUID;

import org.apache.http.HttpEntity;
import org.apache.http.client.config.RequestConfig;
import org.apache.http.client.config.RequestConfig.Builder;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

public class GetPage {

    public static boolean getPage(String url) {

        // create the default HttpClient instance
        CloseableHttpClient client = HttpClients.createDefault();

        // BufferedReader used to read the URL's response
        BufferedReader br = null;

        // configure connection timeouts
        Builder custom = RequestConfig.custom();
        RequestConfig config = custom.setConnectTimeout(5000)
                .setConnectionRequestTimeout(1000)
                .setSocketTimeout(5000).build();

        // create the GET request
        HttpGet get = new HttpGet(url);
        get.setConfig(config);

        // set request headers to mimic a browser
        get.setHeader("Accept", "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8");
        get.setHeader("User-Agent", "Mozilla/5.0 (Windows NT 6.1; rv:6.0.2) Gecko/20100101 Firefox/6.0.2");

        try {
            // execute the GET request
            CloseableHttpResponse response = client.execute(get);

            // get the response entity
            HttpEntity entity = response.getEntity();

            br = new BufferedReader(new InputStreamReader(entity.getContent(), "UTF-8"));

            // read the response line by line
            StringBuilder page = new StringBuilder();
            String line;
            while ((line = br.readLine()) != null) {
                page.append(line).append("\n");
            }

            // set the output path for the fetched page
            FileWriter writer = new FileWriter("D:/html/" + UUID.randomUUID() + ".html");
            // create a character output stream
            PrintWriter fout = new PrintWriter(writer);

            fout.print(page.toString());
            fout.close();
            br.close();
            response.close();
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            try {
                client.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }

        return true;
    }
}
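In GetPage above, the reader, writer, and response are only closed on the happy path; if an exception is thrown mid-read, they leak. Since Java 7 the same read-and-save step can use try-with-resources so every stream is closed even on error. Below is a stdlib-only sketch of that pattern, with a StringReader standing in for the HTTP entity stream and `savePage` a made-up helper name for this illustration:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.PrintWriter;
import java.io.StringReader;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.UUID;

public class SavePageSketch {

    // read "response" content line by line and save it to a randomly named file;
    // try-with-resources closes both streams even if an exception is thrown
    static Path savePage(String content, Path dir) throws IOException {
        Path out = dir.resolve(UUID.randomUUID() + ".html");
        try (BufferedReader br = new BufferedReader(new StringReader(content));
             PrintWriter fout = new PrintWriter(Files.newBufferedWriter(out))) {
            String line;
            while ((line = br.readLine()) != null) {
                fout.println(line);
            }
        }
        return out;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("html");
        Path saved = savePage("<html>\n<body>hi</body>\n</html>", dir);
        System.out.println(Files.readAllLines(saved).size()); // prints 3
    }
}
```

In the real GetPage, `new StringReader(content)` would be `new InputStreamReader(entity.getContent(), "UTF-8")`, and the `CloseableHttpResponse` itself can join the same try-with-resources list.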

Finally, the test class:

import java.util.List;

public class Test {

	public static void main(String[] args) {
		List<String> list = GetUrl.getUrl("http://blog.csdn.net/");
		System.out.println("Now crawling the pages behind every extracted link:");
		int i = 1;
		for (String url : list) {
			GetUrl.getUrl(url);
			System.out.println("Finished link #" + i);
			i++;
		}
	}
}

That is how to crawl web pages with HttpClient + Jsoup. I hope it helps.
