Fetching AJAX pages in Python: scraping AJAX content with the requests library

You only need to compare the two POST payloads to see that they are almost identical apart from a few parameters (draw=page..., start=xx). This means you can fetch the AJAX data simply by modifying draw and start.
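The comparison step can be sketched with the standard library: parse both captured request bodies with urllib.parse.parse_qsl and diff the resulting dicts. The two body strings below are abbreviated, hypothetical captures for illustration, not the site's full payloads:

```python
from urllib.parse import parse_qsl

# Two hypothetical request bodies captured from the browser's network tab
body_page1 = "draw=1&start=0&length=30&search[value]="
body_page2 = "draw=2&start=30&length=30&search[value]="

d1 = dict(parse_qsl(body_page1, keep_blank_values=True))
d2 = dict(parse_qsl(body_page2, keep_blank_values=True))

# Keys whose values differ between the two requests
changed = {k for k in d1 if d1.get(k) != d2.get(k)}
print(changed)  # {'draw', 'start'}
```

Only draw and start change between pages, so those are the two parameters to vary in the script.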

Edit: the data is passed as a dict, so we don't need urlencode, and no cookie is required (I tested this).

import requests
import json

headers = {
    "Accept": "application/json, text/javascript, */*; q=0.01",
    "Origin": "https://cafe.bithumb.com",
    "X-Requested-With": "XMLHttpRequest",
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.92 Safari/537.36",
    "DNT": "1",
    "Content-Type": "application/x-www-form-urlencoded; charset=UTF-8",
    "Referer": "https://cafe.bithumb.com/view/boards/43",
    "Accept-Encoding": "gzip, deflate, br"
}

string = """columns[0][data]=0&columns[0][name]=&columns[0][searchable]=true&columns[0][orderable]=false&columns[0][search][value]=&columns[0][search][regex]=false&columns[1][data]=1&columns[1][name]=&columns[1][searchable]=true&columns[1][orderable]=false&columns[1][search][value]=&columns[1][search][regex]=false&columns[2][data]=2&columns[2][name]=&columns[2][searchable]=true&columns[2][orderable]=false&columns[2][search][value]=&columns[2][search][regex]=false&columns[3][data]=3&columns[3][name]=&columns[3][searchable]=true&columns[3][orderable]=false&columns[3][search][value]=&columns[3][search][regex]=false&columns[4][data]=4&columns[4][name]=&columns[4][searchable]=true&columns[4][orderable]=false&columns[4][search][value]=&columns[4][search][regex]=false&start=30&length=30&search[value]=&search[regex]=false"""

article_root = "https://cafe.bithumb.com/view/board-contents/{}"

for page in range(1, 4):
    with requests.Session() as s:
        s.headers.update(headers)
        data = {"draw": page}
        # Split the captured query string into key/value pairs
        data.update({ele[:ele.find("=")]: ele[ele.find("=") + 1:] for ele in string.split("&")})
        data["start"] = 30 * (page - 1)  # 30 rows per page
        r = s.post('https://cafe.bithumb.com/boards/43/contents', data=data, verify=False)  # set verify=False only while proxying through Fiddler
        json_data = json.loads(r.text).get("data")  # parse the JSON response so the fields are easy to extract
        for each in json_data:
            url = article_root.format(each[0])
            print(url)
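As an aside, the manual string.split("&") dictionary comprehension above can be replaced by urllib.parse.parse_qsl from the standard library, which performs the same splitting more robustly; keep_blank_values=True preserves the empty parameters the server expects. A minimal sketch with an abbreviated query string (the real one lists every column):

```python
from urllib.parse import parse_qsl

# Abbreviated stand-in for the full captured query string
string = "columns[0][data]=0&columns[0][search][value]=&start=30&length=30&search[value]="

page = 2
data = dict(parse_qsl(string, keep_blank_values=True))
data["draw"] = page
data["start"] = 30 * (page - 1)  # 30 rows per page, same pagination as above
```

Unlike the manual split, parse_qsl also URL-decodes any percent-encoded values for free.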
