Analysis
Readers, please drop a like~ I really need your support.
For the request headers (Cookie, User-Agent), just press F12 to open DevTools and copy them;
For the URL, the only thing to note is that the offset grows by 30 with each new page;
The page is parsed with BeautifulSoup;
The fields are extracted with re;
The results are saved with pandas;
Everything else is in the code; it is very simple, so try typing it out yourself.
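Before diving into the full script, the offset pagination mentioned above can be sketched on its own (just building the URLs, no requests sent):

```python
# Each page advances the `offset` query parameter by 30;
# offsets 0 through 300 cover 11 pages.
base = 'https://maoyan.com/films?showType=3&offset='
urls = [base + str(i) for i in range(0, 301, 30)]

print(len(urls))  # 11 pages
print(urls[1])    # second page ends with offset=30
```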
import requests
from bs4 import BeautifulSoup
import time
import re
import pandas as pd

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.25 Safari/537.36 Core/1.70.3861.400 QQBrowser/10.7.4313.400',
    'Cookie': 'uuid_n_v=v1; uuid=055F5ED0A11A11EB86F59B2D0E11C57E6BAEEE9B823D4249B71B2B27076732CE; _csrf=b6c54997c1adfe4e61c95989483d4e89c831b8b174ccd2dd1b49800d54d5f5ec; Hm_lvt_703e94591e87be68cc8da0da7cbd0be2=1618841870; _lx_utm=utm_source%3Dwww.sogou%26utm_medium%3Dorganic; _lxsdk_cuid=178ea7e9e7cc8-01d6904c32c98a-33524d7c-144000-178ea7e9e7ec8; _lxsdk=055F5ED0A11A11EB86F59B2D0E11C57E6BAEEE9B823D4249B71B2B27076732CE; Hm_lpvt_703e94591e87be68cc8da0da7cbd0be2=1618841898; __mta=49656345.1618841870051.1618841893975.1618841898032.6; _lxsdk_s=178ea7e9e7f-c6b-f0c-e79%7C%7C15'
}

target = []
for i in range(0, 301, 30):  # 11 pages; offset grows by 30 per page
    url = 'https://maoyan.com/films?showType=3&offset=' + str(i)
    html = requests.get(url, headers=headers).text
    soup = BeautifulSoup(html, 'html.parser')
    items = soup.find_all('dd')  # each <dd> holds one movie card
    for item in items:
        movieName = re.findall(
            r'div class="channel-detail movie-item-title" title="(.*?)"', str(item))[0]
        # the score is split across two <i> tags: integer part and fraction part
        integer = re.findall(r'<i class="integer">(.*?)</i>', str(item))
        fraction = re.findall(r'<i class="fraction">(.*?)</i>', str(item))
        # some films have no score yet, so guard against empty matches
        score = integer[0] + fraction[0] if integer and fraction else ''
        target.append([movieName, score])
    print('Scraped page {}'.format(int(i / 30) + 1))
    time.sleep(1)  # be polite: pause between requests

out = pd.DataFrame(target)
out.columns = ['movieName', 'score']
out.to_csv('猫眼电影.csv')
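To see what the two regexes are doing without hitting the site, here is a made-up `<dd>` snippet that mirrors Maoyan's markup (the film and link are illustrative, not fetched):

```python
import re

# Hypothetical <dd> fragment in the same shape the scraper sees.
item = ('<dd><div class="channel-detail movie-item-title" title="你好,李焕英">'
        '<a href="/films/0000000">你好,李焕英</a></div>'
        '<div class="channel-detail channel-detail-orange">'
        '<i class="integer">9.</i><i class="fraction">5</i></div></dd>')

name = re.findall(r'div class="channel-detail movie-item-title" title="(.*?)"', item)[0]
integer = re.findall(r'<i class="integer">(.*?)</i>', item)[0]
fraction = re.findall(r'<i class="fraction">(.*?)</i>', item)[0]

print(name, integer + fraction)  # 你好,李焕英 9.5
```

Because the score is split across an `integer` tag ("9.") and a `fraction` tag ("5"), concatenating the two matches rebuilds the full "9.5".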
The scraped data looks like this (screenshot omitted here):
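As a quick check of the saved file, here is a round trip through pandas using two made-up sample rows in the same shape the scraper produces:

```python
import pandas as pd

# Made-up sample rows mirroring [movieName, score] pairs from the scraper.
target = [['你好,李焕英', '9.5'], ['熊出没·狂野大陆', '9.3']]
out = pd.DataFrame(target, columns=['movieName', 'score'])
out.to_csv('猫眼电影.csv', index=False)

check = pd.read_csv('猫眼电影.csv')
print(check.shape)  # (2, 2)
```

Note that the scraper's `to_csv('猫眼电影.csv')` call keeps pandas' default row index as an extra unnamed column; passing `index=False`, as here, drops it.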


