Saving Web Pages with a Python Script

1. Requirements

For various reasons, I want to automatically back up all of my CSDN blog posts to my local machine.


This involves two aspects:

  • Saving a web page: saving the content of a given page to a local file. (This is simplified here: we save the entire page rather than extracting only the actual blog content.)
  • Extracting all blog URLs: these URLs are the input of the previous step; to export every post, we need to automatically discover the URLs of all of my blog posts.


Following the Scrum approach, we first do a technique story, i.e. a preliminary study of the key technologies. This step is optional: if you are already very familiar with this technical area, you naturally do not need a technique story.

2. Saving the content of one web page

Python tutorials usually introduce the following approach, the urllib.urlretrieve() method:

>>> import urllib
>>> filename="/home/flying-bird/test.html"
>>> addr="http://blog.csdn.net/a_flying_bird/article/details/38780569"
>>> urllib.urlretrieve(addr, filename)
('/home/flying-bird/test.html', <httplib.HTTPMessage instance at 0xb6ea4c4c>)
>>> f = open(filename,"r")
>>> f.read()
'<html>\r\n<head><title>403 Forbidden</title></head>\r\n<body bgcolor="white">\r\n<center><h1>403 Forbidden</h1></center>\r\n<hr><center>nginx</center>\r\n</body>\r\n</html>\r\n'
>>> f.close()
>>> 

But when we open the saved local file, we find that fetching the page content failed: the file contains a 403 Forbidden message.


Referring to "Python抓取中文网页" (fetching Chinese web pages with Python, http://blog.csdn.net/nevasun/article/details/7331644), we can solve the problem above. The corresponding code is as follows:

>>> import sys, urllib2
>>> addr="http://blog.csdn.net/a_flying_bird/article/details/38780569"
>>> filename="/home/flying-bird/test.html"
>>> headers = {'User-Agent':'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.6) Gecko/20091201 Firefox/3.5.6'}
>>> req = urllib2.Request(addr, headers = headers)
>>> content = urllib2.urlopen(req).read()
>>> type = sys.getfilesystemencoding()
>>> data = content.decode("UTF-8").encode(type)
>>> f = open(filename, "wb")
>>> f.write(data)
>>> f.close()
>>> 

Here urllib2 is an improved version of urllib; see the official documentation of each module for details.
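
As an aside for newer interpreters: in Python 3, urllib and urllib2 are merged into urllib.request, and a minimal sketch of the same request (not used in the rest of this article, which targets Python 2) looks like this:

import urllib.request

addr = "http://blog.csdn.net/a_flying_bird/article/details/38780569"
headers = {'User-Agent': 'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.6) Gecko/20091201 Firefox/3.5.6'}
req = urllib.request.Request(addr, headers=headers)
content = urllib.request.urlopen(req).read()  # raw bytes of the page
with open("/home/flying-bird/test.html", "wb") as f:
    f.write(content)  # write the bytes as-is; no re-encoding needed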

An aside: for a while the Python website could not be reached normally, but access has recently been restored. I hope Google recovers soon as well.


3. Parsing all blog URLs

Now that we can save the page at a given URL, we need to automatically discover the addresses of all blog posts.


To do this, we first look at the source of the blog entry page, http://blog.csdn.net/a_flying_bird, and find sections like the following:

<div class="list_item article_item">
    <div class="article_title">   
        <span class="ico ico_type_Original"></span>

        <h1>
            <span class="link_title">
                <a href="/a_flying_bird/article/details/38930901">用Python脚本保存网页</a>
            </span>
        </h1>
    </div>

as well as:

<div id="papelist" class="pagelist">
    <span> 92条数据  共7页</span>

    <strong>1</strong> 
    <a href="/u0/article/list/2">2</a> 
    <a href="/u0/article/list/3">3</a> 
    <a href="/u0/article/list/4">4</a> 
    <a href="/u0/article/list/5">5</a> 
    <a href="/u0/article/list/6">...</a> 

    <a href="/u0/article/list/2">下一页</a> 

    <a href="/u0/article/list/7">尾页</a> 
</div>

So, by parsing the HTML, we can obtain the list of posts (titles and URLs) on the entry page, as well as the pages that hold the remaining posts. From this, the titles and URLs of all posts can be extracted automatically.

If you have XML-parsing experience, you might think of parsing the HTML source above as XML. But when we analyze it, we find that it does not follow strict XML syntax. This is one of the differences between XML and HTML: although XHTML was proposed many years ago, the reality is still far from ideal. We therefore have to parse it according to HTML rules.
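
To see the problem concretely, here is a minimal check (the fragment below is a simplified, hypothetical stand-in for the real page source) showing that a strict XML parser rejects such markup:

import xml.dom.minidom as minidom
from xml.parsers import expat

# The unclosed <br> is perfectly normal HTML but not well-formed XML.
fragment = '<div class="pagelist"><span>92 items, 7 pages</span><br></div>'
try:
    minidom.parseString(fragment)
except expat.ExpatError as e:
    print "not well-formed XML:", e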


Python provides HTMLParser for this. After studying the characteristics of the blog page's HTML source, we use the following code:

#!/usr/bin/env python
#encoding: utf-8

import htmllib
import re
import formatter


'''
Example:

<span class="link_title">
    <a href="/a_flying_bird/article/details/38930901">
            用Python脚本保存网页
    </a>
</span>
'''

blogUrls = []
blogNames = []

class BlogItemsParser(htmllib.HTMLParser):
    def __init__(self):
        htmllib.HTMLParser.__init__(self, formatter.NullFormatter())
        self.isArticleItem = False
        self.pattern = "^http://blog.csdn.net/a_flying_bird/article/details/[\\d]+$"

    def start_a(self, attributes):
        #print "len=", len(attributes), ": ", attributes
        if len(attributes) != 1:
            return

        attr_name, attr_value = attributes[0]
        #print "name: %s, value: %s" % (attr_name, attr_value)
        if attr_name != "href":
            return

        if re.match(self.pattern, attr_value):
            blogUrls.append(attr_value)
            self.isArticleItem = True

    def end_a(self):
        self.isArticleItem = False

    def handle_data(self, text):
        if self.isArticleItem:
            blogNames.append(text.strip())


def getContent():
    '''We export a blog page manually.'''
    filename = "/home/flying-bird/examples/python/export_csdn_blog/blog.html"
    f = open(filename, "r")
    content = f.read()
    f.close()
    return content

if (__name__ == "__main__"):
    content = getContent()
    parser = BlogItemsParser()
    parser.feed(content)
    parser.close()

    count = len(blogUrls) # avoid shadowing the built-in len()
    for i in range(0, count):
        print blogUrls[i], blogNames[i]

Running it produces:

flying-bird@flyingbird:~/examples/python/export_csdn_blog$ ./export-csdn-blog.py 
http://blog.csdn.net/a_flying_bird/article/details/38930901 用Python脚本保存网页
http://blog.csdn.net/a_flying_bird/article/details/38876477 ListView的例子
http://blog.csdn.net/a_flying_bird/article/details/38780569 滑动页面的一个例子
http://blog.csdn.net/a_flying_bird/article/details/38776553 转  提问的智慧
http://blog.csdn.net/a_flying_bird/article/details/38711751 C语言的一个正则表达式pcre
http://blog.csdn.net/a_flying_bird/article/details/38690025 一个C语言小程序
http://blog.csdn.net/a_flying_bird/article/details/38666177 查看Android虚拟机上的文件
http://blog.csdn.net/a_flying_bird/article/details/38665897 数据存储之SharedPreferences
http://blog.csdn.net/a_flying_bird/article/details/38665387 获取RunningTaskInfo
http://blog.csdn.net/a_flying_bird/article/details/38590093 Android EditText的使用方法
http://blog.csdn.net/a_flying_bird/article/details/38563305 Android TextView的使用方法
http://blog.csdn.net/a_flying_bird/article/details/38542253 谈谈Java的对象(续)
http://blog.csdn.net/a_flying_bird/article/details/38541855 谈谈Java的对象
http://blog.csdn.net/a_flying_bird/article/details/38519965 搭建gtest环境
http://blog.csdn.net/a_flying_bird/article/details/38497919 Ubuntu频繁报错
flying-bird@flyingbird:~/examples/python/export_csdn_blog$ 

4. Getting the number of blog pages

The code above parses the blog list within one page. Of course, when there are many posts, the list is split across multiple pages. By inspecting the URLs, we find that they take the following form:

http://blog.csdn.net/u0/article/list/2

So we could construct URLs of this form starting from 2 and keep going until we hit an invalid page; for our use case, that way of handling it would be acceptable.

In practice, however, this lazy approach turns out not to work. Fortunately, the bottom of the first blog page, where it links to the other pages, contains the following:

<div id="papelist" class="pagelist">
    <span> 92条数据  共7页</span>
    <strong>1</strong> 
    <a href="http://blog.csdn.net/u0/article/list/2">2</a> 
    <a href="http://blog.csdn.net/u0/article/list/3">3</a> 
    <a href="http://blog.csdn.net/u0/article/list/4">4</a> 
    <a href="http://blog.csdn.net/u0/article/list/5">5</a> 
    <a href="http://blog.csdn.net/u0/article/list/6">...</a> 
    <a href="http://blog.csdn.net/u0/article/list/2">下一页</a> 
    <a href="http://blog.csdn.net/u0/article/list/7">尾页</a> 
</div>

All we need is the "尾页" (last page) link, from which we can read the index of the last page. So the scope narrows down to analyzing just this one line:

<a href="http://blog.csdn.net/u0/article/list/7">尾页</a> 

We could add extra handling to the class in the previous code to obtain the page count, but in terms of readability, cramming multiple responsibilities into one class is probably not a good idea. So we create another class dedicated to getting the number of blog pages, even though this increases the amount of code and arguably duplicates some of it. Performance is not a concern here.

The refactored code is below; pay attention to class BlogPagesParser and getBlogPagesCount().

#!/usr/bin/env python
#encoding: utf-8

import htmllib
import re
import formatter


'''
Example:

<span class="link_title">
    <a href="/a_flying_bird/article/details/38930901">
            用Python脚本保存网页
    </a>
</span>
'''

blogUrls = []
blogNames = []

class BlogItemsParser(htmllib.HTMLParser):
    def __init__(self):
        htmllib.HTMLParser.__init__(self, formatter.NullFormatter())
        self.isArticleItem = False
        self.pattern = "^http://blog.csdn.net/a_flying_bird/article/details/[\\d]+$"

    def start_a(self, attributes):
        #print "len=", len(attributes), ": ", attributes
        if len(attributes) != 1:
            return

        attr_name, attr_value = attributes[0]
        #print "name: %s, value: %s" % (attr_name, attr_value)
        if attr_name != "href":
            return

        if re.match(self.pattern, attr_value):
            blogUrls.append(attr_value)
            self.isArticleItem = True

    def end_a(self):
        self.isArticleItem = False

    def handle_data(self, text):
        if self.isArticleItem:
            blogNames.append(text.strip())

class BlogPagesParser(htmllib.HTMLParser):
    def __init__(self):
        htmllib.HTMLParser.__init__(self, formatter.NullFormatter())
        self.isBlogPageNumber = False
        self.pattern = "^http://blog.csdn.net/u0/article/list/([\\d]+)$"
        self.tempPageCount = 1 # Only 1 blog page by default

    def start_a(self, attributes):
        #print "len=", len(attributes), ": ", attributes
        if len(attributes) != 1:
            return

        attr_name, attr_value = attributes[0]
        #print "name: %s, value: %s" % (attr_name, attr_value)
        if attr_name != "href":
            return

        match = re.match(self.pattern, attr_value)
        if not match:
            return

        self.isBlogPageNumber = True
        self.tempPageCount = match.group(1)

    def end_a(self):
        self.isBlogPageNumber = False

    def handle_data(self, text):
        if not self.isBlogPageNumber:
            return

        text = text.strip()
        #print "text: ", text
        if text == "尾页":
            print "got it: ", self.tempPageCount
        #else:
        #    print "fail :("

    def get_page_count(self):
        return self.tempPageCount

def getContent():
    '''We export a blog page manually.'''
    filename = "/home/flying-bird/examples/python/export_csdn_blog/blog.html"
    f = open(filename, "r")
    content = f.read()
    f.close()
    return content

def getBlogItems():
    content = getContent()
    parser = BlogItemsParser()
    parser.feed(content)
    parser.close()

    count = len(blogUrls) # avoid shadowing the built-in len()
    for i in range(0, count):
        print blogUrls[i], blogNames[i]

def getBlogPagesCount():
    content = getContent()
    parser = BlogPagesParser()
    parser.feed(content)
    pageCount = parser.get_page_count()
    parser.close()
    print "blog pages: ", pageCount


if (__name__ == "__main__"):
    getBlogPagesCount()

5. Putting it all together

At this point, all the technical pieces have been verified, and it is time to string everything together. To recap the approach:

  • Start from the URL of the personal blog entry page;
  • read the content of that page;
  • parse the blog list (URL and title of each post) contained in that page;
  • get the total number of blog pages;
  • iterate over the blog pages starting from page 2 and collect each page's blog list; at this point, we have the complete list of posts;
  • fetch the content of each post's page and save it to the specified local directory.

The final code is as follows:

#!/usr/bin/env python
#encoding: utf-8

'''
Save all of my csdn blogs, for backup only.

Input: 
    1. blogUrl: URL of my blog.
    2. userName: the registered user name of csdn, such as "a_flying_bird".
    3. userId: User id corresponding to userName, such as "u0", which is allocated by csdn.

The steps:
    1. Read the content of blogUrl.
    2. Parse all the blog items in blogUrl.
    3. Get the page number (N) from blogUrl.
    4. Parse all the blog items in page 2..N
       And now we get all the blog items.
    5. Read each blog, and save to local.

TODO:
1. Read only the blog's content, not the page's content.
2. Update/Replace all of the hyper-links in the blogs.

Examples:
1. Blog item in a blog page:
    <span class="link_title">
        <a href="/a_flying_bird/article/details/38930901">
                用Python脚本保存网页
        </a>
    </span>
2. Blog page number in blogUrl:
    <div id="papelist" class="pagelist">
        <span> 92条数据  共7页</span>
        <strong>1</strong> 
        <a href="http://blog.csdn.net/u0/article/list/2">2</a> 
        <a href="http://blog.csdn.net/u0/article/list/3">3</a> 
        <a href="http://blog.csdn.net/u0/article/list/4">4</a> 
        <a href="http://blog.csdn.net/u0/article/list/5">5</a> 
        <a href="http://blog.csdn.net/u0/article/list/6">...</a> 
        <a href="http://blog.csdn.net/u0/article/list/2">下一页</a> 
        <a href="http://blog.csdn.net/u0/article/list/7">尾页</a> 
    </div>
3. blogUrl, i.e., URL of one person's blog entrance:
    http://blog.csdn.net/a_flying_bird
   Here, we consider "a_flying_bird" as the user name.
4. Blog URLs from second page:
    http://blog.csdn.net/u0/article/list/2
   Here, we consider "u0" as the user id.

Exception:
  Q: urllib2.HTTPError: HTTP Error 502: Bad Gateway
  A: Add some sleep between each reading
'''

import htmllib
import urllib2
import sys
import re
import formatter
import string
import time

def readContentFromUrl(url, filename=None):
    '''
    If filename is not null, save the content of url to this file.
    url: for example, http://blog.csdn.net/a_flying_bird/article/details/38780569
    '''
    print "readContentFromUrl(), url: ", url
    headers = {'User-Agent':'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.6) Gecko/20091201 Firefox/3.5.6'}  
    req = urllib2.Request(url, headers = headers)  
    content = urllib2.urlopen(req).read()  
    type = sys.getfilesystemencoding()  
    data = content.decode("UTF-8").encode(type)  

    if filename is not None:
        f = open(filename, "wb")  
        f.write(data)  
        f.close()  
    
    return data

class BlogItemsParser(htmllib.HTMLParser):
    def __init__(self):
        htmllib.HTMLParser.__init__(self, formatter.NullFormatter())
        self.isArticleItem = False
        self.pattern = None # before feed(), must call set_pattern() first
        self.blogUrls = []
        self.blogNames = []

    def set_pattern(self, blogItemPattern):
        self.pattern = blogItemPattern

    def start_a(self, attributes):
        #print "len=", len(attributes), ": ", attributes
        if len(attributes) != 1:
            return

        attr_name, attr_value = attributes[0]
        if attr_name != "href":
            return

        #print "name: %s, value: %s, pattern: %s" % (attr_name, attr_value, self.pattern)
        if re.match(self.pattern, attr_value):
            self.blogUrls.append(attr_value)
            self.isArticleItem = True

    def end_a(self):
        self.isArticleItem = False

    def handle_data(self, text):
        if self.isArticleItem:
            #s = text.strip() # debug the title with specifial '&'
            #print "title: BEGIN--<" + s + ">--END"
            self.blogNames.append(text.strip())

    def get_blog_urls(self):
        return self.blogUrls

    def get_blog_names(self):
        return self.blogNames

class BlogPagesParser(htmllib.HTMLParser):
    def __init__(self):
        htmllib.HTMLParser.__init__(self, formatter.NullFormatter())
        self.isBlogPageNumber = False
        self.pattern = None # before feed, must call set_pattern() first.
        self.pageCount = 1 # Only 1 blog page by default

    def set_pattern(self, moreUrlPattern):
        self.pattern = moreUrlPattern

    def start_a(self, attributes):
        #print "len=", len(attributes), ": ", attributes
        if len(attributes) != 1:
            return

        attr_name, attr_value = attributes[0]
        #print "name: %s, value: %s" % (attr_name, attr_value)
        if attr_name != "href":
            return

        match = re.match(self.pattern, attr_value)
        if not match:
            return

        self.isBlogPageNumber = True
        self.pageCount = match.group(1)

    def end_a(self):
        self.isBlogPageNumber = False

    def handle_data(self, text):
        if not self.isBlogPageNumber:
            return

        text = text.strip()
        #print "text: ", text
        
        #TODO CHS now, you can change it manually.
        if text == "尾页":
            print "got the page count: ", self.pageCount
        #else:
        #    print "fail :("

    def get_page_count(self):
        return int(self.pageCount) # int() also handles the default value 1 when no page link matched

# title is used for debugging.
def getBlogItems(title, content, blogItemRefPattern):
    parser = BlogItemsParser()
    parser.set_pattern(blogItemRefPattern)
    parser.feed(content)
    blogUrls = parser.get_blog_urls()
    blogNames = parser.get_blog_names()
    parser.close()

    #print "blog items for ", title
    #count = len(blogUrls)
    #for i in range(0, count):
    #    print blogUrls[i], blogNames[i]

    return blogUrls, blogNames

def getBlogPagesCount(moreBlogUrl, content, blogPagesPattern):
    parser = BlogPagesParser()
    parser.set_pattern(blogPagesPattern)
    parser.feed(content)
    pageCount = parser.get_page_count()
    parser.close()
    return pageCount

def export_csdn_blog(userName, userId, savedDirectory):
    blogUrls = []
    blogNames = []

    if savedDirectory[-1] != '/':
        savedDirectory = savedDirectory + '/'

    blogUrl = "http://blog.csdn.net/" + userName
    moreBlogUrl = "http://blog.csdn.net/" + userId + "/article/list/"
    blogItemHrefPattern = "^/" + userName + "/article/details/[\\d]+$"
    blogPagesPattern = "^/" + userId + "/article/list/([\\d]+)$"

    # Read the content of blogUrl.
    filename = None # "/home/flying-bird/examples/python/export_csdn_blog/blog.html" # for debugging only
    content = readContentFromUrl(blogUrl, filename)
    #print content

    #Parse all the blog items in blogUrl.
    tmpBlogUrls, tmpBlogNames = getBlogItems(blogUrl, content, blogItemHrefPattern)
    blogUrls = blogUrls + tmpBlogUrls
    blogNames = blogNames + tmpBlogNames

    # Get the page number (N) from blogUrl.
    pageCount = getBlogPagesCount(moreBlogUrl, content, blogPagesPattern)

    # Parse all the blog items in page 2..N
    for i in range(2, pageCount + 1): # pages 2..N inclusive
        url = moreBlogUrl + ("%d" % i)
        print "i = %d, url = %s" % (i, url)
        content = readContentFromUrl(url)
        tmpBlogUrls, tmpBlogNames = getBlogItems(url, content, blogItemHrefPattern)
        blogUrls = blogUrls + tmpBlogUrls
        blogNames = blogNames + tmpBlogNames

    # Read each blog, and save to local.
    count = len(blogUrls)
    for i in range(0, count):
        url = "http://blog.csdn.net" + blogUrls[i]
        filename = savedDirectory + blogNames[i] + ".html"
        print "url=%s, filename=%s" % (url, filename)
        readContentFromUrl(url, filename)
        time.sleep(30) # unit: seconds

    print "DONE"

def usage(processName):
    print "Usage: %s userName userId savedDirectory" % (processName,)
    print "For example:"
    print "    userName: a_flying_bird"
    print "    userId: u0"
    print "    savedDirectory: /home/csdn/"

if (__name__ == "__main__"):
    argc = len(sys.argv)
    if argc == 1: # for debugging only.
        userName = "a_flying_bird"
        userId = "u0"
        savedDirectory = "/home/flying-bird/csdn/"
    elif argc == 4:
        userName = sys.argv[1]
        userId = sys.argv[2]
        savedDirectory = sys.argv[3]
    else:
        usage(sys.argv[0])
        sys.exit(-1)

    #TODO check the directory, or mkdir if neccessary

    export_csdn_blog(userName, userId, savedDirectory)

Result screenshot: (image not included here)


6. Afterword

That 30-second delay is obviously too long; I suspect 10 seconds would work as well.


The code is functional for now, but its quality is not great; I will refactor it when I get a chance.


There are also a few TODOs that affect usability, such as automatically creating the save directory (see the sketch below); I also have not yet checked whether the current code handles pinned ("sticky") blog posts.
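
For the directory TODO mentioned above, a minimal sketch (the helper name ensureDirectory is just an example, not part of the script above) based on os.makedirs could look like this, called at the top of export_csdn_blog() before any pages are saved:

import os

def ensureDirectory(savedDirectory):
    # Hypothetical helper: create the save directory if it does not exist yet.
    if not os.path.isdir(savedDirectory):
        os.makedirs(savedDirectory)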


Once all the posts are exported, it will be easy to put the content onto my own website when I build one later. Of course, the real content still has to be stripped out of each page and then automatically inserted into the right spot in my site's page templates, producing pages that fit my own site.

