Without further ado, let's get straight to the point.
Based on an analysis of the 热点精选 (hot picks) section of https://news.qq.com/, extracting each entry's headline text and URL is fairly straightforward: drive the browser with Selenium, scroll to trigger the page's AJAX lazy loading, and parse the resulting HTML with bs4. The code is as follows:
import time
import csv
from bs4 import BeautifulSoup
from selenium import webdriver

driver = webdriver.Chrome(executable_path="../chrome/chromedriver.exe")
driver.get("https://news.qq.com")

# The feed is loaded via AJAX as the page scrolls, so scroll down in small steps
# and pause briefly each time so the newly requested items have a chance to load
for i in range(1, 50):
    time.sleep(1)
    driver.execute_script("window.scrollTo(window.scrollX, %d);" % (i * 200))
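# Note: scrolling a fixed 50 steps of 200px is a simple heuristic; the step count
# and offset are not values dictated by the site. A (hypothetical) more robust
# variant keeps scrolling until document.body.scrollHeight stops growing, e.g.:
#
#   last_height = driver.execute_script("return document.body.scrollHeight")
#   while True:
#       driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
#       time.sleep(1)
#       new_height = driver.execute_script("return document.body.scrollHeight")
#       if new_height == last_height:
#           break
#       last_height = new_height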
# Parse the fully rendered page: the hot-picks list is the sibling element that
# follows the <div class="jx-tit"> header, and each <li> inside it is one entry
html = driver.page_source
bsObj = BeautifulSoup(html, "lxml")
jxtits = bsObj.find_all("div", {"class": "jx-tit"})[0].find_next_sibling().find_all("li")

# print("index", ",", "title", ",", "url")
f = open('information.csv', 'w', encoding='GB2312', newline='')  # create the output file object