Reposted September 17, 2007, 09:51
The Easy Way to Extract Useful Text from Arbitrary HTML
Translator's introduction: This article presents a broadly applicable method for extracting the genuinely useful body text from many different kinds of HTML files. Its function is similar to CSDN's recently launched "剪影" feature: it strips out irrelevant headers, footers and sidebars, which makes it very practical. The method is simple, effective, and surprisingly so; after reading it you may well exclaim that it could be done this way! The writing is clear and easy to follow, and although it applies an algorithm as fancy as an artificial neural network, FANN's clean encapsulation means readers are not required to understand ANNs. All the examples are written in Python, which makes them very readable; the piece has a popular-science flavor and is well worth reading.
You’ve finally got your hands on the diverse collection of HTML documents you needed. But the content you’re interested in is hidden amidst adverts, layout tables or formatting markup, and other various links. Even worse, there’s visible text in the menus, headers and footers that you want to filter out. If you don’t want to write a complex scraping program for each type of HTML file, there is a solution.
This article shows you how to write a relatively simple script to extract text paragraphs from large chunks of HTML code, without knowing its structure or the tags used. It works on news articles and blogs pages with worthwhile text content, among others…
Do you want to find out how statistics and machine learning can save you time and effort mining text?

The concept is rather simple: use information about the density of text vs. HTML code to work out if a line of text is worth outputting. (This isn’t a novel idea, but it works!) The basic process works as follows:

  1. Parse the HTML code and keep track of the number of bytes processed.
  2. Store the text output on a per-line, or per-paragraph basis.
  3. Associate with each text line the number of bytes of HTML required to describe it.
  4. Compute the text density of each line by calculating the ratio of text to bytes.
  5. Then decide if the line is part of the content by using a neural network.
You can get pretty good results just by checking if the line’s density is above a fixed threshold (or the average), but the system makes fewer mistakes if you use machine learning — not to mention that it’s easier to implement!
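The two baseline rules are easy to sketch in a few lines; the density values below are made up purely for illustration:

```python
# Made-up per-line densities (ratio of text length to HTML bytes).
densities = [0.05, 0.51, 0.85, 0.90, 0.78, 0.12]
average = sum(densities) / len(densities)   # about 0.535 here

# Rule 1: keep lines above a fixed threshold.
keep_fixed = [d > 0.5 for d in densities]
# Rule 2: keep lines above the average density.
keep_avg = [d > average for d in densities]

print(keep_fixed)
print(keep_avg)
```

Note how the two rules already disagree on the borderline 0.51 line; machine learning is what settles those borderline cases.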
Let’s take it from the top…
Converting the HTML to Text
What you need is the core of a text-mode browser, which is already set up to read files with HTML markup and display raw text. By reusing existing code, you won’t have to spend too much time handling invalid XML documents, which are very common, as you’ll realise quickly.
As a quick example, we’ll be using Python along with a few built-in modules: htmllib for the parsing and formatter for outputting formatted text. This is what the top-level function looks like:
def extract_text(html):
    # Derive from formatter.AbstractWriter to store paragraphs.
    writer = LineWriter()
    # Default formatter sends commands to our writer.
    formatter = AbstractFormatter(writer)
    # Derive from htmllib.HTMLParser to track parsed bytes.
    parser = TrackingParser(writer, formatter)
    # Give the parser the raw HTML data.
    parser.feed(html)
    parser.close()
    # Filter the paragraphs stored and output them.
    return writer.output()
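Note that htmllib and formatter are Python 2 modules, and both were removed in Python 3. As a rough sketch of the same shell on Python 3, html.parser can stand in; this skeleton (class and function names are my own) only collects text and does none of the byte accounting yet:

```python
from html.parser import HTMLParser

class TextCollector(HTMLParser):
    """Collect the raw text chunks that appear between tags."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        self.chunks.append(data)

def extract_text(html):
    parser = TextCollector()
    parser.feed(html)
    parser.close()
    return ''.join(parser.chunks)
```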
The TrackingParser itself overrides the callback functions for parsing start and end tags, as they are given the current parse index in the buffer. You don’t have access to that normally, unless you start diving into frames in the call stack — which isn’t the best approach! Here’s what the class looks like:
class TrackingParser(htmllib.HTMLParser):
    """Try to keep accurate pointer of parsing location."""
    def __init__(self, writer, *args):
        htmllib.HTMLParser.__init__(self, *args)
        self.writer = writer
    def parse_starttag(self, i):
        index = htmllib.HTMLParser.parse_starttag(self, i)
        self.writer.index = index
        return index
    def parse_endtag(self, i):
        self.writer.index = i
        return htmllib.HTMLParser.parse_endtag(self, i)
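On Python 3 there is no parse_starttag hook that hands you a byte index, but HTMLParser.getpos() reports the (line, offset) of the current token, which can be converted into an absolute index. A sketch, with class and helper names of my own invention:

```python
from html.parser import HTMLParser

class PositionTracker(HTMLParser):
    """Track the absolute character offset of each text chunk."""
    def __init__(self, html):
        super().__init__()
        # Cumulative start offset of every line, so that getpos()'s
        # 1-based (lineno, offset) pair can be made absolute.
        self.line_starts = [0]
        for line in html.split('\n'):
            self.line_starts.append(self.line_starts[-1] + len(line) + 1)
        self.index = 0

    def handle_data(self, data):
        lineno, offset = self.getpos()
        self.index = self.line_starts[lineno - 1] + offset

html = '<p>hi</p>'
p = PositionTracker(html)
p.feed(html)
p.close()
print(p.index)   # offset where the text chunk starts
```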
The LineWriter class does the bulk of the work when called by the default formatter. If you have any improvements or changes to make, most likely they’ll go here. This is where we’ll put our machine learning code in later. But you can keep the implementation rather simple and still get good results. Here’s the simplest possible code:
class Paragraph:
    def __init__(self):
        self.text = ''
        self.bytes = 0
        self.density = 0.0
class LineWriter(formatter.AbstractWriter):
    def __init__(self, *args):
        self.last_index = 0
        self.lines = [Paragraph()]
        formatter.AbstractWriter.__init__(self)
    def send_flowing_data(self, data):
        # Work out the length of this text chunk.
        t = len(data)
        # We've parsed more text, so increment index.
        self.index += t
        # Calculate the number of bytes since last time.
        b = self.index - self.last_index
        self.last_index = self.index
        # Accumulate this information in current line.
        l = self.lines[-1]
        l.text += data
        l.bytes += b
    def send_paragraph(self, blankline):
        """Create a new paragraph if necessary."""
        if self.lines[-1].text == '':
            return
        self.lines[-1].text += '\n' * (blankline+1)
        self.lines[-1].bytes += 2 * (blankline+1)
        self.lines.append(Paragraph())
    def send_literal_data(self, data):
        self.send_flowing_data(data)
    def send_line_break(self):
        self.send_paragraph(0)
This code doesn’t do any outputting yet; it just gathers the data. We now have a bunch of paragraphs in an array, we know their length, and we know roughly how many bytes of HTML were necessary to create them. Let’s see what emerges from our statistics.
Examining the Data
Luckily, there are some patterns in the data. In the raw output, you’ll notice there are definite spikes in the number of HTML bytes required to encode lines of text, notably around the title, both sidebars, headers and footers.
While the number of HTML bytes spikes in places, it remains below average for quite a few lines. On these lines, the text output is rather high. Calculating the density of text to HTML bytes gives us a better understanding of this relationship.
The patterns are more obvious in this density value, so it gives us something concrete to work with.
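The effect is easy to reproduce on a toy document with a crude regex tokenizer; this stands in for the real parser, purely to show the density gap between navigation markup and content:

```python
import re

def line_densities(html):
    """Ratio of visible text length to total bytes, per line."""
    densities = []
    for line in html.splitlines():
        text = re.sub(r'<[^>]*>', '', line)   # crudely strip tags
        densities.append(len(text) / len(line) if line else 0.0)
    return densities

html = ('<div id="nav"><a href="/">Home</a> | <a href="/about">About</a></div>\n'
        '<p>This paragraph carries the actual content of the page.</p>')
for d in line_densities(html):
    print('%.2f' % d)
```

The navigation line sits far below 0.5, while the content line sits far above it.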
Filtering the Lines
The simplest way we can filter lines now is by comparing the density to a fixed threshold, such as 50% or the average density. Finishing the LineWriter class:
    def compute_density(self):
        """Calculate the density for each line, and the average."""
        total = 0.0
        for l in self.lines:
            l.density = len(l.text) / float(l.bytes)
            total += l.density
        # Store for optional use by the neural network.
        self.average = total / float(len(self.lines))
    def output(self):
        """Return a string with the useless lines filtered out."""
        self.compute_density()
        output = StringIO.StringIO()
        for l in self.lines:
            # Check density against threshold.
            # Custom filter extensions go here.
            if l.density > 0.5:
                output.write(l.text)
        return output.getvalue()
This rough filter typically gets most of the lines right. All the header, footer and sidebar text is usually stripped, as long as it’s not too long. However, if there are long copyright notices, comments, or descriptions of other stories, then those are output too. Also, short lines of content around inline graphics or adverts are mistakenly filtered out.
To fix this, we need a more complex filtering heuristic. But instead of spending days working out the logic manually, we’ll just grab loads of information about each line and use machine learning to find patterns for us.
Supervised Machine Learning
First we need training data: lines of text hand-tagged as content or not, produced with a simple tagging interface.
The idea of supervised learning is to provide examples for an algorithm to learn from. In our case, we give it a set of documents that were tagged by humans, so we know which lines must be output and which must be filtered out. For this we’ll use a simple neural network known as a perceptron. It takes floating point inputs, filters the information through weighted connections between “neurons”, and outputs another floating point number. Roughly speaking, the number of neurons and layers affects the ability to approximate functions precisely; we’ll use both single-layer perceptrons (SLP) and multi-layer perceptrons (MLP) for prototyping.
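If FANN isn’t available, the single-layer perceptron itself is small enough to sketch from scratch; the training data and function names below are made up for illustration, not the article’s:

```python
def train_perceptron(samples, epochs=100, lr=0.1):
    """Train a single-layer perceptron; samples are (inputs, target) pairs."""
    n = len(samples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, t in samples:
            # Step activation: fire if the weighted sum clears the bias.
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = t - y
            # Classic perceptron update rule.
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Toy training set: a single input (the line's density) -> keep the line?
data = [((0.1,), 0), ((0.2,), 0), ((0.7,), 1), ((0.9,), 1)]
w, b = train_perceptron(data)
```

A real run would feed it several attributes per line rather than the density alone.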
To get the neural network to learn, we need to gather some data. This is where the earlier LineWriter.output() function comes in handy; it gives us a central point to process all the lines at once and make a global decision about which lines to output. Starting with intuition and experimenting a bit, we discover that the following data is useful to decide how to filter a line:
  • Density of the current line.
  • Number of HTML bytes of the line.
  • Length of output text for this line.
  • These three values for the previous line,
  • … and the same for the next line.
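Gathering those values per line might look like the following hypothetical helper (the zero-padding at the first and last line is my own convention):

```python
def build_features(lines):
    """lines: (density, bytes, text_length) triples, one per line of output.

    Returns one 9-tuple per line: the current line's three values,
    then the previous line's, then the next line's.
    """
    pad = (0.0, 0, 0)
    features = []
    for i, cur in enumerate(lines):
        prev = lines[i - 1] if i > 0 else pad
        nxt = lines[i + 1] if i + 1 < len(lines) else pad
        features.append(cur + prev + nxt)
    return features
```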
For the implementation, we’ll be using Python to interface with FANN, the Fast Artificial Neural Network Library. The essence of the learning code goes like this:
from pyfann import fann, libfann
# This creates a new single-layer perceptron with 1 output and 3 inputs.
obj = libfann.fann_create_standard_array(2, (3, 1))
ann = fann.fann_class(obj)
# Load the data we described above.
patterns = fann.read_train_from_file('training.txt')
ann.train_on_data(patterns, 1000, 1, 0.0)
# Then test it with different data.
for datin, datout in validation_data:
    result = ann.run(datin)
    print 'Got:', result, ' Expected:', datout
Trying out different data and different network structures is a rather mechanical process. Don’t have too many neurons or you may train too well for the set of documents you have (overfitting); conversely, try to have enough to solve the problem well. Here are the results, varying the number of lines used (1L-3L) and the number of attributes per line (1A-3A):
The interesting thing to note is that 0.5 is already a pretty good guess at a fixed threshold (see the first set of columns). The learning algorithm cannot find a much better solution when comparing the density alone (1 Attribute in the second column). With 3 Attributes, the next SLP does better overall, though it gets more false negatives. Using multiple lines also increases the performance of the single-layer perceptron (fourth set of columns). And finally, using a more complex neural network structure works best overall, making 80% fewer errors in filtering the lines.
Note that you can tweak how the error is calculated if you want to punish false positives more than false negatives.
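One way to make that tweak, with made-up weights that punish false positives twice as hard:

```python
def weighted_error(predictions, targets, fp_weight=2.0, fn_weight=1.0):
    """Sum the penalties for wrong predictions (1 = keep line, 0 = drop)."""
    err = 0.0
    for p, t in zip(predictions, targets):
        if p == 1 and t == 0:      # kept a junk line (false positive)
            err += fp_weight
        elif p == 0 and t == 1:    # dropped real content (false negative)
            err += fn_weight
    return err
```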
Extracting text from arbitrary HTML files doesn’t necessarily require scraping the file with custom code. You can use statistics to get pretty amazing results, and machine learning to get even better ones. By tweaking the threshold, you can avoid the worst false positives that pollute your text output. But it’s not so bad in practice: where the neural network makes mistakes, even humans have trouble classifying those lines as “content” or not.
Now all you have to figure out is what to do with that clean text content!

Posted by 恋花蝶 on August 13, 2007, 19:09