Python Distributed Programming: Python Security Tool Development (Part 1): A First Look at Distributed Crawlers

This article was originally written by 江sir and submitted to 一叶知安; the author has received a reward for the contribution.

For submission details, see the 一叶知安 call for contributions (submissions welcome).

Preface

This is a Baidu crawler and collection tool I wrote. I normally rely on Windows .exe tools for this kind of work, but they are inconvenient on my Linux system, so I built my own Baidu crawler; it comes in very handy later for batch scanning and testing.

There is not much to say about the multithreading class; I have written it so many times that it can be reused as-is. The spider() function is explained in detail below.

Page Analysis

When analyzing the pages, the main thing to check is the URL format of each results page. There is usually a fixed pattern; for Baidu, keeping only two parameters in the search URL gives results like the link below, with 10 entries per page:

https://www.baidu.com/s?wd=ctf&pn=10
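The pn parameter steps in units of 10, so result pages can be generated in a simple loop. A minimal sketch (the keyword 'ctf' here is just an example):

word = 'ctf'
for i in range(0, 50, 10):  # first 5 result pages: pn = 0, 10, 20, 30, 40
    print 'https://www.baidu.com/s?wd=%s&pn=%s' % (word, i)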

Then press F12 and inspect the links in the results. Baidu is a bit of a pain here: every result is reached through a long 302 redirect link, and picking a link at random gives something like this:

http://www.secbox.cn/tag/ctf 

The distinguishing feature is the class="c-showurl" attribute, so BeautifulSoup can collect every tag that carries it: res = soup.find_all(name="a", attrs={'class':'c-showurl'})

Requesting each redirect link then yields the real site URL, its title, and so on.

That is everything a Baidu crawler needs to do; there really is not much to it.

Crawler in Practice

I like to start with a simple script to test the crawler: grab all the site URLs from the first results page, and only move the logic into the multithreaded version once that works. Here is the simple test script:

import requests
import re
from bs4 import BeautifulSoup as bs

urls = []

target = 'https://www.baidu.com/s?wd=%s&pn=%s' % ('ctf', 10)
pn = int(target.split('=')[-1]) / 10 + 1  # page number derived from the pn parameter
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64; rv:50.0) Gecko/20100101 Firefox/50.0'}
html = requests.get(target, headers=headers)
soup = bs(html.text, "lxml")
res = soup.find_all(name="a", attrs={'class': 'c-showurl'})
# print res
for r in res:
    try:
        h = requests.get(r['href'], headers=headers, timeout=3)
        if h.status_code == 200:
            url = h.url
            title = re.findall(r'<title>(.*?)</title>', h.content)[0]
            title = title.decode('utf-8')  # decode to unicode, otherwise add_row fails to convert later
            print url, title
            urls.append((pn, url, title))
            print urls
        else:
            continue
    except:
        continue

Below is the full multithreaded Baidu crawler class:

#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Date    : 2017-03-19 15:06:48
# @Author  : 江sir (qq:2461805286)
# @Link    : http://www.blogsir.com
# @Version : v1.0

import requests
from bs4 import BeautifulSoup as bs
import threading
import re
from Queue import Queue
from prettytable import PrettyTable
import argparse
import time
import sys

thread_count = 3  # number of threads, adjust as needed
page = 5          # number of result pages to crawl, adjust as needed
urls = []

x = PrettyTable(['page', 'url', 'title'])
x.align["title"] = "l"  # left-align the title column
x.padding_width = 1
page = (page + 1) * 10


class ichunqiu(threading.Thread):
    def __init__(self, queue):
        threading.Thread.__init__(self)
        self.Q = queue
        self.headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64; rv:50.0) Gecko/20100101 Firefox/50.0'}

    def run(self):
        while 1:
            try:
                t = self.Q.get(True, 1)
                # print t
                self.spider(t)
            except Exception, e:  # print the error while debugging, otherwise failures inside spider() are impossible to locate
                print e
                break

    def spider(self, target):
        pn = int(target.split('=')[-1]) / 10 + 1
        # print pn
        # print target
        html = requests.get(target, headers=self.headers)
        # print html
        soup = bs(html.text, "lxml")
        res = soup.find_all(name="a", attrs={'class': 'c-showurl'})
        # print res
        for r in res:
            try:
                h = requests.get(r['href'], headers=self.headers, timeout=3)
                if h.status_code == 200:
                    url = h.url
                    title = re.findall(r'<title>(.*?)</title>', h.content)[0]
                    title = title.decode('utf-8')  # decode to unicode, otherwise add_row fails to convert
                    print url, title
                    urls.append((pn, url, title))
                else:
                    continue
            except:
                continue


def Load_Thread(queue):
    return [ichunqiu(queue) for i in range(thread_count)]


def Start_Thread(threads):
    print 'thread is start...'
    for t in threads:
        t.setDaemon(True)
        t.start()
    for t in threads:
        t.join()
    print 'thread is end...'


def main():
    start = time.time()
    parser = argparse.ArgumentParser()
    parser.add_argument('-s')
    parser.add_argument('-f')
    arg = parser.parse_args()
    # print arg
    word = arg.s
    output = arg.f
    # word = 'inurl:login.action'
    # output = 'test.txt'

    queue = Queue()
    for i in range(0, page, 10):
        target = 'https://www.baidu.com/s?wd=%s&pn=%s' % (word, i)
        queue.put(target)

    thread_list = Load_Thread(queue)
    Start_Thread(thread_list)

    if output:
        with open(output, 'a') as f:
            for record in urls:
                f.write(record[1] + '\n')

    print urls, len(urls)
    for record in urls:
        x.add_row(list(record))
    print x
    print '共爬取数据%s条' % len(urls)
    print time.time() - start


if __name__ == '__main__':
    main()
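For reference, the script takes the search keyword via -s and an optional output file via -f; assuming it is saved as, say, baidu_spider.py (a filename of my own choosing), a run would look like python baidu_spider.py -s 'inurl:login.action' -f test.txt.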

The results of a scan look like the table output above. Searching for 'inurl:login.action' still turns up a few servers, and at most the crawler pulls in 500+ records. Writing it surfaced quite a few problems again; every time I write a crawler I learn a lot. This time the problems were mostly about encodings: PrettyTable raised an encoding error in add_row, which was odd, and another encoding error appeared when saving to the MySQL database, so I spent half a day going back over how Python encodes and decodes strings. I considered writing that up as a separate post on Python encoding, but decided to just fold it in here.

The Ins and Outs of Character Encoding and Decoding

Python's default string encoding is ascii, as shown below:

python -c "import sys; print sys.getdefaultencoding()"

ascii

You can declare the source file encoding as utf-8 with #coding:utf-8 (note: the interpreter's system default is still ascii).
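To be clear, the coding declaration only tells the interpreter how to decode the source file itself; the interpreter-wide default stays ascii. A quick check (a sketch, run under Python 2):

#coding:utf-8
import sys
print sys.getdefaultencoding()   # still prints 'ascii' despite the declaration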

String encoding and decoding always go through unicode as the intermediate form; there is no direct conversion between two byte encodings. Python automatically decodes to unicode using the system default encoding first, and then encodes into the target encoding.

For example:

#coding:utf-8

s = '中文'

print s.decode('gbk')

This fails with: UnicodeDecodeError: 'gbk' codec can't decode bytes in position 2-3: illegal multibyte sequence

That is, the source file is declared utf-8, but decode('gbk') interprets the (actually utf-8) bytes as gbk while converting them to unicode, so it fails with a UnicodeDecodeError. If the #coding:utf-8 line is removed, the error becomes: SyntaxError: Non-ASCII character '\xe4' in file /home/sublime/test/exception/2.py on line 2, but no encoding declared; see PEP 263 -- Defining Python Source Code Encodings for details

The problem is that your code is trying to use the ASCII encoding, but the pound symbol is not an ASCII character. Try using UTF-8 encoding. You can start by putting # -*- coding: utf-8 -*- at the top of your .py file. (In short: the file is being read as ascii, but it contains non-ascii characters, hence the SyntaxError.) If instead the file encoding is declared as ascii:

#!/usr/bin/python

# -*- coding: ascii -*-

s = '中文'

then the error becomes SyntaxError: 'ascii' codec can't decode byte 0xe4 in position 5: ordinal not in range(128), i.e. ascii cannot handle bytes outside the 0-127 range. It looks similar to the previous case, but the error is not the same.

Another interesting observation:

>>> s='中文'

>>> s

'\xe4\xb8\xad\xe6\x96\x87'

>>> s.decode('utf8')

u'\u4e2d\u6587'

In an interactive Python session on a Linux terminal the default is also utf-8, so Chinese characters can be used without a #coding:utf-8 declaration. Linux terminals and Sublime default to unicode-capable display and can show both utf-8 and unicode text, but the Windows console uses gbk, so utf-8 and unicode come out as mojibake there. Keep the distinction in mind: an exception means an encode/decode conversion failed, whereas mojibake means the terminal or display does not support that encoding and simply cannot render it.
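A quick way to see what your own environment will do is to print both the interpreter default and the terminal encoding (a sketch; the second value depends on the terminal and is None when output is piped):

# -*- coding: utf-8 -*-
import sys
print sys.getdefaultencoding()   # interpreter default, normally 'ascii' on Python 2
print sys.stdout.encoding        # terminal encoding: often UTF-8 on Linux, cp936 (gbk) on the Windows console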

Tips

1. If you want to know what encoding a string is in, print [the string] and look at the raw escapes. You will generally see one of these two forms: [u'\u76ee\u6807\u533a\u670d'] is unicode, and ['\xe7\x9b\xae\xe6\xa0\x87\xe5\x8c\xba\xe6\x9c\x8d'] is utf-8.

Another example:

#!/usr/bin/python

# -*- coding: utf-8 -*-

import sys

# reload(sys)

# sys.setdefaultencoding('utf-8')

s = '中文'

print [s]

print s.encode('gbk')

This fails with UnicodeDecodeError: 'ascii' codec can't decode byte 0xe4 in position 0: ordinal not in range(128). My guess: even though the file is declared utf-8, calling encode('gbk') directly makes Python first decode the string to unicode using the system default encoding (which is still ascii here), and only then encode it to gbk; since s actually holds utf-8 bytes, that implicit ascii decode blows up.

There are two ways to fix this:

1. Decode explicitly: print s.decode('utf-8').encode('gbk')

2. Change the system default encoding by adding these two lines:

reload(sys)

sys.setdefaultencoding('utf-8')
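Putting the two fixes side by side (a sketch; printing the list shows the raw bytes, which sidesteps the terminal-display issue mentioned above):

#coding:utf-8
import sys
reload(sys)
sys.setdefaultencoding('utf-8')   # fix 2: make the implicit decode use utf-8

s = '中文'
print [s.decode('utf-8').encode('gbk')]   # fix 1: decode explicitly, then encode
print [s.encode('gbk')]                   # with fix 2 the implicit decode no longer raises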

Distributed Crawler

Next, the multithreaded version is rewritten as a distributed Baidu crawler. The key piece is the cross-platform BaseManager class from multiprocessing.managers. Its job here is to register the task_queue and result_queue as callables exposed over the network: the Master node listens on a port and the Worker nodes connect to it, so different hosts can share and synchronize these resources through the registered functions. The Master is responsible for handing out tasks and collecting results; each Worker pulls tasks from the task queue, runs them, and sends what it finds back, and the Master stores the collected results in a database.
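Stripped to its core, the handshake between the two sides looks roughly like this (a sketch only; the host, port and authkey values simply mirror the full scripts below):

# master side: owns the real queue and listens
from multiprocessing.managers import BaseManager
from Queue import Queue

task_queue = Queue()
BaseManager.register('get_task_queue', callable=lambda: task_queue)
m = BaseManager(address=('127.0.0.1', 500), authkey='sir')
m.start()                      # spawn the manager server process
task = m.get_task_queue()      # proxy to the shared queue

# worker side (run in a separate process or on another host):
# BaseManager.register('get_task_queue')   # register the name only, no callable
# m = BaseManager(address=('127.0.0.1', 500), authkey='sir')
# m.connect()                              # connect instead of start
# task = m.get_task_queue()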

The spider_Master file: the comments in the code explain most of it. The Master node creates the task_queue and result_queue, talks to the Worker nodes through BaseManager, and stores the collected results in the database.

Some commonly used BaseManager methods (from its help output):

connect(self)
    Connect manager object to the server process

get_server(self)
    Return server object with serve_forever() method and address attribute

join(self, timeout=None)
    Join the manager process (if it has been spawned)

start(self, initializer=None, initargs=())
    Spawn a server process for this manager object

Class methods defined here:

register(cls, typeid, callable=None, proxytype=None, exposed=None, method_to_typeid=None, create_method=True)
    Register a typeid with the manager type

The spider_Worker node mainly calls its spider() function to process each task; the approach is the same as before, and every link a worker resolves is sent back to the Master. Note that only one Master instance should run, but any number of Worker nodes can run at the same time, consuming the task queue in parallel.

spider_Master.py

#coding:utf-8
from multiprocessing.managers import BaseManager
from Queue import Queue
import time
import argparse
import MySQLdb
import sys

page = 2
word = 'inurl:login.action'
output = 'test.txt'
page = (page + 1) * 10
host = '127.0.0.1'
port = 500
urls = []


class Master():
    def __init__(self):
        # the server side must create the two shared queues; the worker side does not
        self.task_queue = Queue()
        self.result_queue = Queue()

    def start(self):
        # register a get_task_queue function on the network, i.e. expose the two queues;
        # the worker side registers without the callable argument
        BaseManager.register('get_task_queue', callable=lambda: self.task_queue)
        BaseManager.register('get_result_queue', callable=lambda: self.result_queue)
        manager = BaseManager(address=(host, port), authkey='sir')
        manager.start()  # the master calls start() to listen on the port; the worker calls connect()
        # both master and worker must fetch the task/result queues through the manager,
        # not use the locally created Queue objects directly
        task = manager.get_task_queue()
        result = manager.get_result_queue()

        print 'put task'
        for i in range(0, page, 10):
            target = 'https://www.baidu.com/s?wd=%s&pn=%s' % (word, i)
            print 'put task %s' % target
            task.put(target)

        print 'try get result'
        while True:
            try:
                url = result.get(True, 5)  # use a longer timeout when fetching results
                print url
                urls.append(url)
            except:
                break
        manager.shutdown()


if __name__ == '__main__':
    start = time.time()
    server = Master()
    server.start()
    print '共爬取数据%s条' % len(urls)
    print time.time() - start

    with open(output, 'a') as f:
        for url in urls:
            f.write(url[1] + '\n')

    conn = MySQLdb.connect('localhost', 'root', 'root', 'Struct', charset='utf8')
    cursor = conn.cursor()
    for record in urls:
        # the title is kept as unicode so the utf8 connection can encode it;
        # note the string interpolation will break if a value contains a quote
        sql = "insert into s045 values('%s','%s','%s')" % (record[0], record[1], record[2])
        cursor.execute(sql)
    conn.commit()
    conn.close()

spider_Worker.py

#coding:utf-8
import re
import Queue
import time
import requests
from multiprocessing.managers import BaseManager
from bs4 import BeautifulSoup as bs

host = '127.0.0.1'
port = 500


class Worker():
    def __init__(self):
        self.headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64; rv:50.0) Gecko/20100101 Firefox/50.0'}

    def spider(self, target, result):
        urls = []
        pn = int(target.split('=')[-1]) / 10 + 1
        # print pn
        # print target
        html = requests.get(target, headers=self.headers)
        soup = bs(html.text, "lxml")
        res = soup.find_all(name="a", attrs={'class': 'c-showurl'})
        for r in res:
            try:
                h = requests.get(r['href'], headers=self.headers, timeout=3)
                if h.status_code == 200:
                    url = h.url
                    # print url
                    time.sleep(1)
                    title = re.findall(r'<title>(.*?)</title>', h.content)[0]
                    # print url,title
                    title = title.decode('utf-8')
                    print 'send spider url:', url
                    result.put((pn, url, title))
                else:
                    continue
            except:
                continue
        # return urls

    def start(self):
        BaseManager.register('get_task_queue')
        BaseManager.register('get_result_queue')
        print 'Connect to server %s' % host
        m = BaseManager(address=(host, port), authkey='sir')
        m.connect()
        task = m.get_task_queue()
        result = m.get_result_queue()
        print 'try get queue'
        while True:
            try:
                target = task.get(True, 1)
                print 'run pages %s' % target
                res = self.spider(target, result)
                # print res
            except:
                break


if __name__ == '__main__':
    w = Worker()
    w.start()

This distributed crawler still needs optimization; it actually runs slower than the multithreaded version for now. I will keep updating and optimizing my Python crawler material.
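To try it out, start spider_Master.py first so that it is listening on port 500, then launch one or more spider_Worker.py processes; when a worker runs on a different machine, change host from 127.0.0.1 to the master's address and keep authkey the same on both sides.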
