Open Sina Weibo, log in, and open yz's photo album.
Open the Chrome developer tools, go to Sources, and create a "+ New snippet" with the following auto-scroll script:
var timeout = Number(prompt("Set timeout (seconds):"));
var count = 0;
var current = location.href;
if (timeout > 0) {
    setTimeout(reload, 1000 * timeout);
} else {
    location.replace(current);
}
// Scroll to the bottom every `timeout` seconds so the album keeps lazy-loading more photos.
function reload() {
    setTimeout(reload, 1000 * timeout);
    count++;
    console.log('Auto-scrolling every ' + timeout + ' s, run count: ' + count);
    window.scrollTo(0, document.body.scrollHeight);
}
Right-click the snippet and choose Run, wait until the page stops loading new photos, then in the Elements panel right-click the body element and choose Copy element. (Reloading the page stops the snippet's timer.)
Save the copied markup as yz.txt.
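Before running the full downloader, it may be worth a quick sanity check that the saved file parses; a minimal sketch using the same lxml calls as the script below:

from lxml import etree

html = etree.parse('yz.txt', etree.HTMLParser(encoding='utf-8'))
print(len(html.xpath('//ul/@group_id')), 'album groups')
print(len(html.xpath('//img/@src')), 'image thumbnails')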
Then run the following script:
import datetime
import os

import requests
import urllib3
from lxml import etree

# Weibo's image CDN can trip certificate verification, hence verify=False below;
# silence the resulting InsecureRequestWarning.
urllib3.disable_warnings()

html = etree.parse('yz.txt', etree.HTMLParser(encoding='utf-8'))

# Each album group is a <ul> whose group_id names the month, e.g. "2015年06月".
ust = html.xpath('//ul/@group_id')
curr_time = datetime.datetime.now()

for iul in ust:
    print(iul)
    path = str(iul)
    # Groups from the current year omit the year, so prepend it ("年" = year).
    if "年" not in path:
        path = str(curr_time.year) + "年" + path
    if not os.path.exists(path):
        os.makedirs(path)

    # Collect every thumbnail URL inside this group.
    lst = html.xpath('//ul[@group_id="' + str(iul) + '"]//img/@src')
    for ili in lst:
        link = str(ili)
        if not link.startswith('https:'):
            link = 'https:' + link
        # Swap the 300px thumbnail path for the full-size original.
        link = link.replace("/thumb300/", "/large/")
        print(link)
        response = requests.get(link, verify=False)
        fn = link[link.rfind('/') + 1:]
        # Very old uploads (2009/2010) often lack a file extension.
        if path.startswith('2010') or path.startswith('2009'):
            if ".jpg" not in fn:
                fn += ".jpg"
        with open(path + '/' + fn, "wb") as f:
            f.write(response.content)
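If individual downloads fail intermittently, swapping requests.get for a shared session with automatic retries may help. A sketch, not part of the original script; the retry counts and timeout are arbitrary:

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
session.mount('https://', HTTPAdapter(max_retries=Retry(total=3, backoff_factor=1)))
# Then replace requests.get(link, verify=False) in the loop with:
response = session.get(link, verify=False, timeout=30)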
For now the script only saves images; saving videos can be added later. A rough starting point is sketched below.
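This sketch assumes the saved page exposes plain <video> tags with a src attribute, which is unverified; Weibo's actual markup may differ or load videos dynamically, in which case this finds nothing. html and path come from the script above.

# Assumption: videos appear as <video src=...> in the saved markup.
video_links = html.xpath('//video/@src')
for v in video_links:
    v = str(v)
    if not v.startswith('https:'):
        v = 'https:' + v
    data = requests.get(v, verify=False).content
    with open(path + '/' + v[v.rfind('/') + 1:], 'wb') as f:
        f.write(data)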
Surprisingly, some very old images have no file extension at all; the script above just hard-codes .jpg for those years. A more general approach is sketched below.
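Instead of hard-coding .jpg, the extension could be derived from the response's Content-Type header; a sketch that would replace the 2009/2010 special case, falling back to .jpg when the header is missing or unrecognized:

import mimetypes

if '.' not in fn:
    ctype = response.headers.get('Content-Type', '')
    # guess_extension('image/jpeg') may return '.jpe' on some Python versions.
    ext = mimetypes.guess_extension(ctype.split(';')[0].strip())
    fn += ext if ext else '.jpg'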
I also tried the approach from the article below; it works, but it cannot fetch everything: it only downloads up to 200 pages.
Reference: Python爬虫——批量爬取微博图片(不使用cookie) ("Python crawler: batch-downloading Weibo images, no cookie required").