Fun with a Little Web Crawler — A Few Small Details When Scraping

      In this post, let's talk about a few issues you should watch out for when scraping pages.

1: Page Updates

     We know that the information on a typical web page is constantly being refreshed, which means we need to re-crawl it periodically. But how should we interpret "periodically", that is, how often do we actually need to re-fetch a given page? In fact, this period is simply the page's cache lifetime: re-fetching the page within its cache window gains us nothing and only puts extra load on the target server.
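The idea of "cache lifetime as re-crawl interval" can be sketched as a simple freshness check. This is a minimal Python illustration; the helper name `should_fetch` and the 2-minute TTL are assumptions made up for this post, not part of any crawler framework:

```python
from datetime import datetime, timedelta

# Hypothetical helper: only re-fetch a URL once its cache window has elapsed.
def should_fetch(last_fetched, now, ttl=timedelta(minutes=2)):
    """Return True if the page's cache lifetime (ttl) has expired."""
    if last_fetched is None:      # never fetched before
        return True
    return now - last_fetched >= ttl

now = datetime(2024, 1, 1, 12, 0, 0)
print(should_fetch(None, now))                          # first crawl
print(should_fetch(now - timedelta(seconds=30), now))   # still inside cache window
print(should_fetch(now - timedelta(minutes=3), now))    # cache expired
```

A real scheduler would keep `last_fetched` per URL and read the TTL from the site's cache headers rather than hard-coding it.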

For example, say I want to crawl the cnblogs.com home page. First, clear the page cache:

From Last-Modified and Expires we can see that cnblogs.com's cache lifetime is 2 minutes, and we can also see the current server time in the Date header. If I refresh the page again, this Date value is sent back to the server as If-Modified-Since (shown in the figure below), so the server can decide whether the browser's cached copy has expired. If the server finds If-Modified-Since >= Last-Modified, it simply returns 304. (There sure are a lot of cookies in these headers, by the way...)
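The server-side decision just described, comparing If-Modified-Since against Last-Modified and answering 304 or 200, can be sketched like this (a minimal Python illustration; the function name `respond` is made up for this post):

```python
from datetime import datetime

# Hypothetical sketch of the server's freshness check:
# 304 if the client's cached copy is still current, otherwise 200.
def respond(if_modified_since, last_modified):
    if if_modified_since is not None and if_modified_since >= last_modified:
        return 304   # Not Modified: the client's cache is still valid
    return 200       # OK: send the full page again

last_modified = datetime(2024, 1, 1, 12, 0, 0)
print(respond(datetime(2024, 1, 1, 12, 1, 0), last_modified))  # prints 304
print(respond(datetime(2024, 1, 1, 11, 0, 0), last_modified))  # prints 200
```

A 304 response carries no body, which is exactly why it saves the crawler bandwidth.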

In practice, once we know a site's caching policy, we can simply have the crawler fetch it every 2 minutes; of course, these intervals can be configured and maintained by the data team.

All right, let's simulate this with a crawler.

using System;
using System.Net;

namespace ConsoleApplication2
{
    public class Program
    {
        static void Main(string[] args)
        {
            DateTime prevDateTime = DateTime.MinValue;

            for (int i = 0; i < 10; i++)
            {
                try
                {
                    var url = "http://cnblogs.com";

                    var request = (HttpWebRequest)WebRequest.Create(url);

                    // A HEAD request fetches only the headers, not the body
                    request.Method = "HEAD";

                    // From the second iteration on, make the request conditional
                    if (i > 0)
                    {
                        request.IfModifiedSince = prevDateTime;
                    }

                    request.Timeout = 3000;

                    var response = (HttpWebResponse)request.GetResponse();

                    var code = response.StatusCode;

                    // If the server returns 200, the page has been updated;
                    // remember the server time for the next conditional request
                    if (code == HttpStatusCode.OK)
                    {
                        prevDateTime = Convert.ToDateTime(response.Headers[HttpResponseHeader.Date]);
                    }

                    Console.WriteLine("Server status code: {0}", code);
                }
                catch (WebException ex)
                {
                    // .NET surfaces a 304 Not Modified as a WebException
                    if (ex.Response != null)
                    {
                        var code = (ex.Response as HttpWebResponse).StatusCode;

                        Console.WriteLine("Server status code: {0}", code);
                    }
                }
            }
        }
    }
}

 

2: Page Encoding Issues

     Sometimes we've already fetched the page and are about to parse it, when, damn it, the whole thing turns out to be mojibake. Infuriating. For example:

You may vaguely recall that HTML has a charset attribute in its meta tag that records the encoding, and another key point is that the response.CharacterSet property also records the encoding. Let's try again with that.

Argh, still garbled. This time we need to look at the actual HTTP headers to see what is really being exchanged, and figure out why the browser can render the page correctly while our crawler's copy is broken.

After inspecting the HTTP headers, we finally get it: the browser declares that it can decode the gzip, deflate, and sdch compression formats, and the server responds with gzip-compressed content. At this point we have also stumbled onto one of the most common web performance optimizations.
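The fix applied in the C# code below, decompress first and then decode with the right charset, can be illustrated on its own. This Python sketch simulates the server side locally (the gb2312 sample string is an assumption made for illustration):

```python
import gzip

# Simulate a server response: gb2312-encoded HTML, then gzip-compressed,
# as a real server sending "Content-Encoding: gzip" would deliver it.
html = "<html><head><title>搜狐</title></head></html>"
body = gzip.compress(html.encode("gb2312"))

# Reading the raw bytes directly yields garbage; the correct order is:
# gunzip the body first, then decode with the declared charset.
decoded = gzip.decompress(body).decode("gb2312")
print(decoded == html)   # prints True
```

Decoding in the wrong order (or with the wrong charset) is exactly what produces the mojibake described above.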

using System;
using System.Text;
using System.Net;
using System.IO;
using System.IO.Compression;

namespace ConsoleApplication2
{
    public class Program
    {
        static void Main(string[] args)
        {
            //var currentUrl = "http://www.mm5mm.com/";

            var currentUrl = "http://www.sohu.com/";

            var request = WebRequest.Create(currentUrl) as HttpWebRequest;

            var response = request.GetResponse() as HttpWebResponse;

            // If the server doesn't declare a charset, .NET falls back to
            // ISO-8859-1; many Chinese sites actually use gb2312
            var encode = string.Empty;

            if (response.CharacterSet == "ISO-8859-1")
                encode = "gb2312";
            else
                encode = response.CharacterSet;

            Stream stream;

            // Decompress the body if the server sent it gzip-compressed
            if (response.ContentEncoding.ToLower() == "gzip")
            {
                stream = new GZipStream(response.GetResponseStream(), CompressionMode.Decompress);
            }
            else
            {
                stream = response.GetResponseStream();
            }

            var sr = new StreamReader(stream, Encoding.GetEncoding(encode));

            var html = sr.ReadToEnd();

            sr.Close();
        }
    }
}

 

3: Page Parsing

Now that we've gone through untold hardships to get the page, the next step is parsing it. Regular expressions are certainly one option, but the workload is fairly heavy, and the community tends to favor HtmlAgilityPack, a parsing tool that turns HTML into an XML-like DOM so you can extract content with XPath. That greatly speeds up development, and performance isn't bad either; after all, "Agility" does mean agile. As for XPath, understanding the two diagrams from W3School is all you need.
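To see what those two XPath queries do in isolation, here is a tiny standalone sketch using Python's standard-library ElementTree, which supports a subset of XPath (note this is only an illustration: ElementTree needs well-formed XML, whereas HtmlAgilityPack, used in the C# code below, tolerates real-world HTML; the sample page content is made up):

```python
import xml.etree.ElementTree as ET

# A tiny well-formed document standing in for a fetched page.
page = """<html>
  <head>
    <title>Sohu</title>
    <meta name="Keywords" content="news,portal" />
  </head>
</html>"""

root = ET.fromstring(page)

# The same two queries the C# code uses: grab the <title> text,
# and the content attribute of the Keywords meta tag.
title = root.find(".//title").text
keywords = root.find(".//meta[@name='Keywords']").get("content")

print(title)     # prints Sohu
print(keywords)  # prints news,portal
```

Note that XPath attribute matching is case-sensitive, so `@name='Keywords'` will not match a page that writes `name="keywords"`.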

using System;
using System.Text;
using System.Net;
using System.IO;
using System.IO.Compression;
using HtmlAgilityPack;

namespace ConsoleApplication2
{
    public class Program
    {
        static void Main(string[] args)
        {
            //var currentUrl = "http://www.mm5mm.com/";

            var currentUrl = "http://www.sohu.com/";

            var request = WebRequest.Create(currentUrl) as HttpWebRequest;

            var response = request.GetResponse() as HttpWebResponse;

            // Fall back to gb2312 when .NET reports the ISO-8859-1 default
            var encode = string.Empty;

            if (response.CharacterSet == "ISO-8859-1")
                encode = "gb2312";
            else
                encode = response.CharacterSet;

            Stream stream;

            // Decompress the body if the server sent it gzip-compressed
            if (response.ContentEncoding.ToLower() == "gzip")
            {
                stream = new GZipStream(response.GetResponseStream(), CompressionMode.Decompress);
            }
            else
            {
                stream = response.GetResponseStream();
            }

            var sr = new StreamReader(stream, Encoding.GetEncoding(encode));

            var html = sr.ReadToEnd();

            sr.Close();

            HtmlDocument document = new HtmlDocument();

            document.LoadHtml(html);

            // Extract the <title> text
            var title = document.DocumentNode.SelectSingleNode("//title").InnerText;

            // Extract the content of the Keywords meta tag
            var keywords = document.DocumentNode.SelectSingleNode("//meta[@name='Keywords']").Attributes["content"].Value;
        }
    }
}

All right, that's a wrap. Time for bed...
