This example crawls the Sina photo site using the requests and re libraries.
Site: http://photo.sina.com.cn/newyouth/
The implementation, step by step:
1. A function that opens the url and returns the page source: open_url(url)
def open_url(url):
    headers = {'User-Agent': 'Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.142 Mobile Safari/537.36'}
    response = requests.get(url, headers=headers)
    return response.text
2. A function that finds the image links in the page source and saves the images: find_inf(html)
def find_inf(html):
    # The original pattern was truncated; this is a plausible placeholder
    # that captures jpg URLs from <img> tags.
    p = r'<img src="(http[^"]*?\.jpg)"'
    imglist = re.findall(p, html)
    i = 1
    for each in imglist:
        res = requests.get(each)
        # The with statement closes the file automatically; no f.close() needed.
        with open('C:/picture/%s.jpg' % i, 'wb') as f:
            f.write(res.content)
        i += 1
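To illustrate the extraction step on its own, here is a minimal sketch of re.findall with a capturing group, run against made-up sample HTML (the URLs and pattern are illustrative, not taken from the live site):

```python
import re

# Made-up sample HTML: two <img> tags pointing at jpg files
html = '<img src="http://example.com/a.jpg"><img src="http://example.com/b.jpg">'

# The capturing group (...) means findall returns only the URLs,
# not the full <img src="..."> matches
p = r'<img src="(http[^"]*?\.jpg)"'
print(re.findall(p, html))  # ['http://example.com/a.jpg', 'http://example.com/b.jpg']
```

Because the pattern contains one capturing group, re.findall returns a list of the captured substrings, which is exactly what the download loop iterates over.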
3. Define a main function that ties the two together: main()
def main():
    url = "http://photo.sina.com.cn/newyouth/"
    html = open_url(url)
    find_inf(html)
4. Use the if __name__ == '__main__': guard to call main():
if __name__ == '__main__':
    main()
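A minimal sketch of what the guard does: the function runs only when the file is executed directly, not when it is imported as a module by another script.

```python
# When this file is run directly, __name__ is '__main__', so main() is called.
# When the file is imported, __name__ is the module's name and main() is skipped.
def main():
    return "running main"

if __name__ == '__main__':
    print(main())  # prints "running main"
```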
The full source code:
import requests
import re

def open_url(url):
    headers = {'User-Agent': 'Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.142 Mobile Safari/537.36'}
    response = requests.get(url, headers=headers)
    return response.text

def find_inf(html):
    # The original pattern was truncated; this is a plausible placeholder
    # that captures jpg URLs from <img> tags.
    p = r'<img src="(http[^"]*?\.jpg)"'
    imglist = re.findall(p, html)
    i = 1
    for each in imglist:
        res = requests.get(each)
        with open('C:/picture/%s.jpg' % i, 'wb') as f:
            f.write(res.content)
        i += 1

def main():
    url = "http://photo.sina.com.cn/newyouth/"
    html = open_url(url)
    find_inf(html)

if __name__ == '__main__':
    main()
With this, the script crawls the Sina photo page and saves the images locally.