Python: clicking the next-page button still returns the same data – unable to go on to the next page with a POST request

I've written a script in Python to scrape the links leading to the different articles on a webpage. When I run it, I get those links perfectly. The problem is that the article links span multiple pages, since there are too many to fit on a single one. When I click the next-page button, I can see in the developer tools the information that gets attached, which is in fact sent as a POST request that produces an AJAX call. Since no link is attached to the next-page button, I can't find any way to go on to the next page and parse the links from there. I've tried a POST request with that form data, but it doesn't seem to work. Where am I going wrong?

This is the information I get from the Chrome dev tools when I click the next-page button:

GENERAL

=======================================================

Request URL: https://www.ncbi.nlm.nih.gov/pubmed/

Request Method: POST

Status Code: 200 OK

Remote Address: 130.14.29.110:443

Referrer Policy: origin-when-cross-origin

RESPONSE HEADERS

=======================================================

Cache-Control: private

Connection: Keep-Alive

Content-Encoding: gzip

Content-Security-Policy: upgrade-insecure-requests

Content-Type: text/html; charset=UTF-8

Date: Fri, 29 Jun 2018 10:27:42 GMT

Keep-Alive: timeout=1, max=9

NCBI-PHID: 396E3400B36089610000000000C6005E.m_12.03.m_8

NCBI-SID: CE8C479DB3510951_0083SID

Referrer-Policy: origin-when-cross-origin

Server: Apache

Set-Cookie: ncbi_sid=CE8C479DB3510951_0083SID; domain=.nih.gov; path=/; expires=Sat, 29 Jun 2019 10:27:42 GMT

Set-Cookie: WebEnv=1Jqk9ZOlyZSMGjHikFxNDsJ_ObuK0OxHkidgMrx8vWy2g9zqu8wopb8_D9qXGsLJQ9mdylAaDMA_T-tvHJ40Sq_FODOo33__T-tAH%40CE8C479DB3510951_0083SID; domain=.nlm.nih.gov; path=/; expires=Fri, 29 Jun 2018 18:27:42 GMT

Strict-Transport-Security: max-age=31536000; includeSubDomains; preload

Transfer-Encoding: chunked

Vary: Accept-Encoding

X-UA-Compatible: IE=Edge

X-XSS-Protection: 1; mode=block

REQUEST HEADERS

========================================================

Accept: text/html, */*; q=0.01

Accept-Encoding: gzip, deflate, br

Accept-Language: en-US,en;q=0.9

Connection: keep-alive

Content-Length: 395

Content-Type: application/x-www-form-urlencoded; charset=UTF-8

Cookie: ncbi_sid=CE8C479DB3510951_0083SID; _ga=GA1.2.1222765292.1530204312; _gid=GA1.2.739858891.1530204312; _gat=1; WebEnv=18Kcapkr72VVldfGaODQIbB2bzuU50uUwU7wrUi-x-bNDgwH73vW0M9dVXA_JOyukBSscTE8Qmd1BmLAi2nDUz7DRBZpKj1wuA_QB%40CE8C479DB3510951_0083SID; starnext=MYGwlsDWB2CmAeAXAXAbgA4CdYDcDOsAhpsABZoCu0IA9oQCZxLJA===

Host: www.ncbi.nlm.nih.gov

NCBI-PHID: 396E3400B36089610000000000C6005E.m_12.03

Origin: https://www.ncbi.nlm.nih.gov

Referer: https://www.ncbi.nlm.nih.gov/pubmed

User-Agent: Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36

X-Requested-With: XMLHttpRequest

FORM DATA

========================================================

p$l: AjaxServer

portlets: id=relevancesortad:sort=;id=timelinead:blobid=NCID_1_120519284_130.14.22.215_9001_1530267709_1070655576_0MetA0_S_MegaStore_F_1:yr=:term=%222015%22%5BDate%20-%20Publication%5D%20%3A%20%223000%22%5BDate%20-%20Publication%5D;id=reldata:db=pubmed:querykey=1;id=searchdetails;id=recentactivity

load: yes
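For reference, decoding the term fragment embedded in that portlets value (a quick check with urllib.parse.unquote) shows it is just the same date-range query that my GET URL uses:

from urllib.parse import unquote

# The percent-encoded term carried inside the portlets blob above
encoded_term = "%222015%22%5BDate%20-%20Publication%5D%20%3A%20%223000%22%5BDate%20-%20Publication%5D"
print(unquote(encoded_term))
# -> "2015"[Date - Publication] : "3000"[Date - Publication]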

This is my script so far (if uncommented, the GET request works perfectly, but only for the first page):

import requests
from urllib.parse import urljoin
from bs4 import BeautifulSoup

geturl = "https://www.ncbi.nlm.nih.gov/pubmed/?term=%222015%22%5BDate+-+Publication%5D+%3A+%223000%22%5BDate+-+Publication%5D"
posturl = "https://www.ncbi.nlm.nih.gov/pubmed/"

# res = requests.get(geturl, headers={"User-Agent": "Mozilla/5.0"})
# soup = BeautifulSoup(res.text, "lxml")
# for items in soup.select("div.rslt p.title a"):
#     print(items.get("href"))

FormData = {
    'p$l': 'AjaxServer',
    'portlets': 'id=relevancesortad:sort=;id=timelinead:blobid=NCID_1_120519284_130.14.22.215_9001_1530267709_1070655576_0MetA0_S_MegaStore_F_1:yr=:term=%222015%22%5BDate%20-%20Publication%5D%20%3A%20%223000%22%5BDate%20-%20Publication%5D;id=reldata:db=pubmed:querykey=1;id=searchdetails;id=recentactivity',
    'load': 'yes'
}

req = requests.post(posturl, data=FormData, headers={"User-Agent": "Mozilla/5.0"})
soup = BeautifulSoup(req.text, "lxml")
for items in soup.select("div.rslt p.title a"):
    print(items.get("href"))
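One thing I notice in the dev-tools dump above is that the browser's POST carries the ncbi_sid / WebEnv cookies plus Referer and X-Requested-With: XMLHttpRequest headers, none of which my bare requests.post call sends. Below is a minimal, untested sketch of replaying the call through a requests.Session so the cookies from the first GET are reused; whether the captured portlets string (with its session-specific blobid) can be resent verbatim is an assumption on my part:

import requests
from bs4 import BeautifulSoup

geturl = "https://www.ncbi.nlm.nih.gov/pubmed/?term=%222015%22%5BDate+-+Publication%5D+%3A+%223000%22%5BDate+-+Publication%5D"
posturl = "https://www.ncbi.nlm.nih.gov/pubmed/"

with requests.Session() as session:
    session.headers.update({"User-Agent": "Mozilla/5.0"})

    # Load the search page first so the session picks up the ncbi_sid /
    # WebEnv cookies that the browser's POST carries (this also yields page 1).
    res = session.get(geturl)
    soup = BeautifulSoup(res.text, "lxml")
    for item in soup.select("div.rslt p.title a"):
        print(item.get("href"))

    # Replay the next-page AJAX request with the extra headers shown in the
    # dev tools. Reusing the captured portlets string verbatim (it contains a
    # session-specific blobid) is an assumption, not a confirmed fix.
    form_data = {
        "p$l": "AjaxServer",
        "portlets": "id=relevancesortad:sort=;id=timelinead:blobid=NCID_1_120519284_130.14.22.215_9001_1530267709_1070655576_0MetA0_S_MegaStore_F_1:yr=:term=%222015%22%5BDate%20-%20Publication%5D%20%3A%20%223000%22%5BDate%20-%20Publication%5D;id=reldata:db=pubmed:querykey=1;id=searchdetails;id=recentactivity",
        "load": "yes",
    }
    req = session.post(
        posturl,
        data=form_data,
        headers={
            "Referer": "https://www.ncbi.nlm.nih.gov/pubmed",
            "X-Requested-With": "XMLHttpRequest",
        },
    )
    soup = BeautifulSoup(req.text, "lxml")
    for item in soup.select("div.rslt p.title a"):
        print(item.get("href"))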

I'm not looking for any solution that involves a browser simulator. Thanks in advance.
