I found this in a video on Bilibili. It is short, only 40 minutes, and great for beginners: simple, easy to follow, and it sparks real interest (when I saw my code pull down that many pictures of girls, my enthusiasm for web scraping shot way up). My code is shared below; it is short, commented, and very simple. The video link is here if you are interested: https://www.bilibili.com/video/av75562300?from=search&seid=16725157051954348830.
Pictures or it didn't happen: the screenshots below show the girls I scraped. In the code I set it to crawl 11 girls, so only these folders appear.
![](https://i-blog.csdnimg.cn/blog_migrate/a63be6292c2916d580a29999e8525768.png)
![](https://i-blog.csdnimg.cn/blog_migrate/e85667375e87bc0ff58435163c395674.png)
Here is my code, slightly modified from the video's version.
```python
"""Request the page"""
import requests
import re
import time
import os

headers = {
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36'
}

# Fetch the index page listing all the girls
response = requests.get("https://www.vmgirls.com/", headers=headers)
html = response.text

"""Parse one girl's page"""
def jiexi(html):
    # The post title becomes the folder name
    dir_name = re.findall('<h1 class="post-title h3">(.*?)</h1>', html)[-1]
    if not os.path.exists(dir_name):
        os.mkdir(dir_name)
    # Extract the image URLs
    urls = re.findall('<a href="(.*?)" alt=".*?" title=".*?" .*?></a>', html)
    return dir_name, urls

"""Save the images"""
def saveImg(urls, dir_name):
    for url in urls:
        # time.sleep(1)  # wait 1 second between requests so we don't overload the site
        file_name = url.split('/')[-1]
        response = requests.get(url, headers=headers, timeout=10)
        with open(dir_name + '/' + file_name, 'wb') as f:
            print(file_name)
            f.write(response.content)

# Start crawling
urls = re.findall('<a href="(.*?)" .*?>.*?</a>', html)  # links to each girl's page

number_mei = 11  # how many girls to crawl; change as you like
cur_number = 0   # how many have been crawled so far
for url in urls:
    if cur_number < number_mei:
        try:
            cur_number += 1
            print('------------- girl #%d -------------' % cur_number)
            response = requests.get(url, headers=headers)
            html = response.text
            dir_one_meizi, urls_one_meizi = jiexi(html)
            saveImg(urls_one_meizi, dir_one_meizi)
        except Exception:
            # a failed page does not count toward the total
            cur_number -= 1
    else:
        break
```
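The parsing in `jiexi` relies entirely on `re.findall` with non-greedy `(.*?)` groups. A minimal sketch of how those two patterns behave, using made-up HTML (the snippet and its attribute layout below are my own assumption, not the real vmgirls.com markup):

```python
import re

# Hypothetical HTML imitating the structure the patterns expect
html = '''
<h1 class="post-title h3">sample-girl</h1>
<a href="https://example.com/img/1.jpeg" alt="pic" title="pic" class="x"></a>
<a href="https://example.com/img/2.jpeg" alt="pic" title="pic" class="x"></a>
'''

# Same patterns as in jiexi: non-greedy (.*?) captures the shortest match,
# and since '.' does not match newlines, each <a> line matches separately
title = re.findall('<h1 class="post-title h3">(.*?)</h1>', html)[-1]
img_urls = re.findall('<a href="(.*?)" alt=".*?" title=".*?" .*?></a>', html)

print(title)     # sample-girl
print(img_urls)  # ['https://example.com/img/1.jpeg', 'https://example.com/img/2.jpeg']
```

Because `findall` returns only the captured group when the pattern has exactly one group, each match is just the URL string, not the whole `<a>` tag.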
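Network requests fail now and then, and the loop above simply decrements the counter and skips the girl whose page errored. One way to harden it (a sketch of my own, not from the video; `retry` and its parameters are names I made up) is a small retry-with-backoff wrapper around the request:

```python
import time

def retry(func, attempts=3, delay=1.0):
    """Call func(); on any exception, wait and retry, doubling the delay each time."""
    for i in range(attempts):
        try:
            return func()
        except Exception:
            if i == attempts - 1:
                raise          # out of attempts: let the caller decide what to do
            time.sleep(delay)
            delay *= 2         # exponential backoff between retries

# Inside the crawl loop it would be used like:
# response = retry(lambda: requests.get(url, headers=headers, timeout=10))
```

With this, a page is only skipped after several failed attempts instead of on the first hiccup.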