Python Selenium Crawler

This post shows how to use Python's Selenium library to automate clicking year and issue links on a web page, in order to scrape a journal's tables of contents across a range of years. The code iterates over every year from 1985 to 1990 and, for each year, over its 12 issues, fetching and printing the table of contents. The scraped data can be used for later data analysis or organization.

1. WebDriver (chromedriver) download address:

https://registry.npmmirror.com/binary.html?path=chromedriver/
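The issue links on the journal detail page use anchor ids that concatenate the year with a zero-based issue index — the script below clicks, for example, `//*[@id="qihao_20220"]/a` for the first issue of 2022. A minimal sketch of that id construction (`issue_anchor_id` is a hypothetical helper name, not part of the script):

```python
def issue_anchor_id(year: int, index: int) -> str:
    # Anchor ids on the page look like "qihao_<year><index>",
    # e.g. year 2022, issue index 0 -> "qihao_20220".
    return f"qihao_{year}{index}"

print(issue_anchor_id(2022, 0))
```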

from selenium.webdriver import Chrome
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException
import time

driver = Chrome()
driver.maximize_window()
driver.get("https://jour.duxiu.com/magDetail.jsp?magid=320910034129&d=3305845747873E9BCB4DD9EDA6053C20")

# for year in range(1990, 2022):
for year in range(1985, 1990):
    driver.find_element(By.XPATH, f'//*[@id="y{year}"]/a').click()
    time.sleep(2)
    # Issue anchor ids concatenate year and a zero-based index,
    # e.g. //*[@id="qihao_20220"]/a for 2022, issue 0.
    for qihao in range(0, 12):
        time.sleep(4)
        try:
            driver.find_element(By.XPATH, f'//*[@id="qihao_{year}{qihao}"]/a').click()
        except NoSuchElementException:
            # This issue does not exist for this year; skip it.
            continue
        # Scrape the table-of-contents details for this issue
        year_content = f"{year}-{qihao + 1}:"
        with open('content.txt', 'a') as file:
            file.write(year_content + '\n')
            # The list holds at most 15 entries: //*[@id="jourlist"]/ul/li[15]
            for i in range(1, 16):
                time.sleep(2)
                # find_elements returns an empty list when nothing matches,
                # so no try/except is needed here.
                titles = driver.find_elements(By.XPATH, f'//*[@id="jourlist"]/ul/li[{i}]')
                for title in titles:
                    print(year_content)
                    print(title.text)
                    file.write(title.text + '\n')
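Since each issue is written to content.txt as a `YYYY-M:` header line followed by one title per line, the file can later be read back for analysis. A rough sketch of such a parser, assuming exactly that output format (`parse_catalog` is a hypothetical helper, not part of the crawler):

```python
import re

def parse_catalog(text: str) -> dict:
    # Group title lines under their "YYYY-M:" header lines,
    # matching the format written by the crawler above.
    catalog, current = {}, None
    for line in text.splitlines():
        m = re.fullmatch(r"(\d{4})-(\d{1,2}):", line.strip())
        if m:
            current = (int(m.group(1)), int(m.group(2)))
            catalog[current] = []
        elif current is not None and line.strip():
            catalog[current].append(line.strip())
    return catalog

sample = "1985-1:\nTitle A\nTitle B\n1985-2:\nTitle C\n"
print(parse_catalog(sample))
```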