Python3 Web Scraping Notes, Part 1

1. Extracting the number inside '[ ]': when scraping the 煎蛋网 girl-pic gallery, the current page number comes wrapped in '[ ]', so the brackets have to be stripped to get the digits. This uses the `sub` method of Python's `re` module.

span_tag = sou.find_all('span', attrs={'class': 'current-comment-page'})[0].text
max_page = int(re.sub(r'\[|\]', '', span_tag))
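As a standalone illustration of that bracket-stripping step ("[371]" below is a made-up sample of the scraped span text, not taken from the site):

```python
import re

# Hypothetical sample of the scraped span text, e.g. "[371]"
span_text = "[371]"

# re.sub replaces every '[' or ']' with the empty string, leaving only the digits
max_page = int(re.sub(r'\[|\]', '', span_text))
print(max_page)  # 371
```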

The image URLs on 煎蛋网 can also be grabbed with a regular expression (ugly, but it works).

pic_orgin = sou.find_all('a', {'href': re.compile(r'//wx\d{1,2}\.sinaimg\.cn/large/.*?\.jpg')})
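A quick check of that pattern in isolation (the href below is a made-up example shaped like the site's large-image links):

```python
import re

pattern = re.compile(r'//wx\d{1,2}\.sinaimg\.cn/large/.*?\.jpg')

# Hypothetical href in the shape of the gallery's large-image links
sample = '//wx3.sinaimg.cn/large/0123abcd.jpg'
print(bool(pattern.match(sample)))  # True
```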

2. Generic request code:

user_agent_list = [
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6",
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1090.0 Safari/536.6",
    "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/19.77.34.5 Safari/537.1",
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5",
    "Mozilla/5.0 (Windows NT 6.0) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.36 Safari/536.5",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
    "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.0 Safari/536.3",
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
    "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0.2454.101 Safari/537.36"
]
UA = random.choice(user_agent_list)
header = {
    'User-Agent': UA,
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
    # 'Host': 'jandan.net',  # the domain of the site you are visiting
    'Accept-Encoding': 'gzip, deflate, sdch',
    'Accept-Language': 'zh-CN,zh;q=0.8',
    'Connection': 'keep-alive',
}
url = 'xxx'  # xxx: the URL you want to visit
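The same idea packaged as a helper — a sketch only; the two UA strings are a trimmed-down pool standing in for the full list above:

```python
import random

# Trimmed-down UA pool for illustration; in practice use the full list above
UA_POOL = [
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0.2454.101 Safari/537.36",
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5",
]

def build_headers(ua_pool):
    """Pick a random User-Agent and assemble a browser-like header dict."""
    return {
        'User-Agent': random.choice(ua_pool),
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
        'Accept-Encoding': 'gzip, deflate',
        'Accept-Language': 'zh-CN,zh;q=0.8',
        'Connection': 'keep-alive',
    }

headers = build_headers(UA_POOL)
```

Rotating the User-Agent per request makes the scraper look less like a single automated client.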
3. Extracting the digits from a string of Chinese text, again with `re.sub`:

page = soup.select('body > div.wrapper > div.photo > div.wrapper.clearfix.imgtitle > div.pages > ul > li > a')[0].text
max_page = re.sub(r'\D', '', page)
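For example, with a made-up page label such as '共5页' ("5 pages in total"):

```python
import re

page_text = '共5页'  # hypothetical sample; \D drops every non-digit character
max_page = int(re.sub(r'\D', '', page_text))
print(max_page)  # 5
```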
4. When downloaded images come out at 1 KB, or open as "damaged or cannot be opened", the following approach can help:

img = pic.attrs['src']
try:
    # r = requests.get(img, headers=header)
    s = requests.Session()
    s.headers['User-Agent'] = UA
    r = s.get(img)
except requests.RequestException:
    print('sorry! Request pictures url fail.')
else:
    file_name = img.split('/')[-1]
    with open(file_name, 'wb') as f:
        f.write(r.content)
5. Loops:

i = 0
while i < 10:
    url = mmurl + str(i)
    print(url)
    i += 1
Or, equivalently:

for n in range(1, int(page)+1):
    each_page = url + 'list_10_' + str(n) + '.html'

Or:

for n in range(1, int(page)+1):
    same_url = url + '/p{}.html'.format(str(n))

The results are the same.
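A quick sanity check that the while-loop and for-loop styles generate identical URL sequences (the base URL is a placeholder, not a real site):

```python
# 'http://example.com/p' is a placeholder base URL
base = 'http://example.com/p'
page = 3

# while-loop style
urls_while = []
i = 1
while i <= page:
    urls_while.append('{}{}.html'.format(base, i))
    i += 1

# for-loop style
urls_for = ['{}{}.html'.format(base, n) for n in range(1, page + 1)]

print(urls_while == urls_for)  # True
```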






