Python Parallel Web Crawler

Parallelization in Python

  • Introduction to parallelization
  • Using map

1) Introduction to parallelization

  • [x] Multiple threads process tasks at the same time
  • [x] Efficient: threads overlap the time otherwise spent waiting on network I/O
  • [x] Fast: for an I/O-bound crawl, wall-clock time drops roughly in proportion to the pool size (see the sketch below)
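Why do threads help at all under CPython's GIL? Because the interpreter releases the GIL while a thread blocks on I/O, so the waits overlap even though Python bytecode never runs in parallel. A minimal sketch, using time.sleep as a stand-in for a network request (the 0.5-second delay, 8 URLs, and pool size of 4 are arbitrary choices for illustration):

# -*- coding: utf-8 -*-
from multiprocessing.dummy import Pool as ThreadPool
import time

def fake_fetch(url):
    time.sleep(0.5)  # stand-in for a blocking network request; the GIL is released here
    return url

urls = ['http://example.com/%d' % i for i in range(8)]

start = time.time()
for u in urls:
    fake_fetch(u)
print('sequential: %.1fs' % (time.time() - start))  # ~4.0s (8 waits, one after another)

pool = ThreadPool(4)
start = time.time()
pool.map(fake_fetch, urls)
pool.close()
pool.join()
print('threaded:   %.1fs' % (time.time() - start))  # ~1.0s (8 waits overlapped 4-way)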

2) Using map

  • The map function takes care of iterating over the sequence, passing each element to the worker function, and collecting the results, all in one call.
  • from multiprocessing.dummy import Pool
  • pool = Pool(number_of_cores)
  • results = pool.map(crawl_function, url_list)

Despite its name, multiprocessing.dummy exposes the multiprocessing API on top of threads rather than processes, which is exactly what a network-bound crawler wants. The benchmark below fetches the first 20 pages of a Tieba thread, first sequentially and then with a pool of two threads.
# -*- coding: utf-8 -*-
from multiprocessing.dummy import Pool as ThreadPool
import requests
import time

def getsource(url):
    return requests.get(url)  # fetch one page; the response itself is ignored in this timing test

# Build the list of the first 20 pages of the thread
urls = []
for i in range(1, 21):
    urls.append('http://tieba.baidu.com/p/3522395718?pn=' + str(i))

# Single-threaded baseline
time1 = time.time()
for url in urls:
    print(url)
    getsource(url)
time2 = time.time()
print('Single-threaded time: ' + str(time2 - time1))

# A pool of two worker threads fetching the same pages
pool = ThreadPool(2)
time3 = time.time()
results = pool.map(getsource, urls)
pool.close()   # no more tasks will be submitted
pool.join()    # wait for all workers to finish
time4 = time.time()
print('Parallel time: ' + str(time4 - time3))
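The Pool(number_of_cores) advice above can be made concrete with multiprocessing.cpu_count(). Note that in an I/O-bound crawl the threads mostly wait, so pools larger than the core count are common; the multiplier in the commented line is a rough heuristic, not part of the original tutorial:

# -*- coding: utf-8 -*-
from multiprocessing import cpu_count
from multiprocessing.dummy import Pool as ThreadPool

pool = ThreadPool(cpu_count())          # "number of cores", as in the bullet above
# pool = ThreadPool(cpu_count() * 4)    # larger pools often pay off for I/O-bound work (heuristic)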
 
 

5. Hands-on: a Baidu Tieba crawler

  • Target site: http://tieba.baidu.com/p/3522395718
  • Target content: usernames, reply text, and reply times of the posts on the first 20 pages
  • Techniques involved: Requests to fetch pages, XPath to extract content, map for the multi-threaded crawl (a short XPath warm-up follows this list)
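Before the full spider, here is a minimal warm-up showing the two XPath patterns it relies on: matching a post container by its exact class string (including the trailing space), and reading a node's data-field attribute with the @ prefix. The HTML snippet below only mimics the shape of a real Tieba post; its content is made up:

# -*- coding: utf-8 -*-
from lxml import etree
import json

html = '''
<div class="l_post l_post_bright " data-field='{"author": {"user_name": "someone"},
                                                "content": {"date": "2015-01-01 12:00"}}'>
  <div class="d_post_content_main">
    <div><cc><div class="d_post_content j_d_post_content ">hello tieba</div></cc></div>
  </div>
</div>
'''

selector = etree.HTML(html)
posts = selector.xpath('//div[@class="l_post l_post_bright "]')
for post in posts:
    info = json.loads(post.xpath('@data-field')[0])   # attribute lookup: note the @ prefix
    text = post.xpath('.//div[@class="d_post_content j_d_post_content "]/text()')[0]
    print(info['author']['user_name'])                # -> someone
    print(text)                                       # -> hello tieba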

tiebaspider.py

# -*- coding: utf-8 -*-
from lxml import etree
from multiprocessing.dummy import Pool as ThreadPool
import requests
import json  # to parse the JSON stored in each post's data-field attribute

import sys  # set the default encoding to utf-8 (Python 2 only)
reload(sys)
sys.setdefaultencoding('utf-8')


def towrite(contentdict):
    # Write each post as one string so records from different threads do not interleave
    f.write(u'Reply time: ' + str(contentdict['topic_reply_time']) + '\n'
            + u'Reply content: ' + unicode(contentdict['topic_reply_content']) + '\n'
            + u'Replier: ' + str(contentdict['user_name']) + '\n\n')

def spider(url):
    html = requests.get(url)
    selector = etree.HTML(html.text)
    content_field = selector.xpath('//div[@class="l_post l_post_bright "]')
    item = {}
    for each in content_field:
        # data-field is an attribute of the node we already matched, so query it with @
        reply_info = json.loads(each.xpath('@data-field')[0].replace('&quot', ''))
        author = reply_info['author']['user_name']
        content = each.xpath('div[@class="d_post_content_main"]/div/cc/div[@class="d_post_content j_d_post_content "]/text()')[0]
        reply_time = reply_info['content']['date']
        print(content)
        print(reply_time)
        print(author)
        item['user_name'] = author
        item['topic_reply_content'] = content
        item['topic_reply_time'] = reply_time
        towrite(item)


if __name__ == "__main__":
    pool = ThreadPool(2)
    f = open('content.txt', 'a')
    page = []
    for i in range(1, 21):
        newpage = 'http://tieba.baidu.com/p/3522395718?pn=' + str(i)
        page.append(newpage)

    results = pool.map(spider, page)
    pool.close()
    pool.join()
    f.close()
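The towrite above packs each post into a single write call so that output from the two worker threads cannot interleave mid-record. An alternative is to guard the shared file handle with an explicit lock; a sketch in the same Python 2 style, assuming the same global f as the spider:

# -*- coding: utf-8 -*-
import threading

write_lock = threading.Lock()

def towrite_locked(contentdict):
    # Only one thread at a time may touch the shared file handle
    with write_lock:
        f.write(u'Reply time: ' + str(contentdict['topic_reply_time']) + '\n')
        f.write(u'Reply content: ' + unicode(contentdict['topic_reply_content']) + '\n')
        f.write(u'Replier: ' + str(contentdict['user_name']) + '\n\n')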