How to Get Past a Login and Scrape JS-Dynamically-Loaded Page Data [Python]

Date: 2018-11-24

Views: 1,545

Tags: development, programming, experience

After a fair bit of fiddling today, I managed to scrape every piece of data from a site that requires a login and loads its data dynamically with JavaScript.

First, the login. The site sets a cookie, which we need to save and add to the header information when building the request with urllib2. One extra touch: I also faked the browser information, so the server takes the request for one made by a normal browser. This gets past simple anti-scraping measures.

With the cookie taking care of the login, I discovered the page is rendered dynamically by JavaScript, and the scrape failed!

A bit of searching turned up two methods:

1. Use a tool: build a webdriver and drive Chrome or Firefox to open the page. Drawback: efficiency is far too low. (A minimal sketch of this route follows the list.)

2. Analyze the page-load process: dig through the response information to find the API or service address the page calls while loading. This is the more tedious and fiddly route, but once found it is a one-time effort, and it makes scraping everything from the backend with Python possible.
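
For reference, here is roughly what the webdriver route in option 1 looks like. This is only a minimal sketch using Selenium; the URL is a placeholder, it assumes chromedriver is on your PATH, and none of it appears in the original code:

# approach 1 (sketch): let a real browser execute the js, then read the DOM
from selenium import webdriver

driver = webdriver.Chrome()  # or webdriver.Firefox()
driver.get("https://example.com/some-js-page")  # placeholder URL
html = driver.page_source    # the DOM after the js has run
driver.quit()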

Among several dozen requests I finally located the backend data service address and spotted the pattern: concatenating the item id yields the complete address, from which the request can be constructed.

To my surprise, the server returned XML rather than JSON, so I had to do some quick homework on XML parsing. I picked minidom, whose style reads comfortably to me.

Then came the empty-tag problem: when the page has no comments and the tag is empty, indexing directly into the node list raises an index-out-of-range error. My quick-and-dirty fix was a try with a skip.
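
If you would rather not lean on try/skip, you can test the node list before indexing. A small helper along these lines would do it (the helper name is mine, not from the original code):

# return the text of the first <tag> child, or a default when the element
# is missing or empty -- avoids the index-out-of-range on empty tags
def get_node_text(parent, tag, default=''):
    nodes = parent.getElementsByTagName(tag)
    if nodes and nodes[0].childNodes:
        return nodes[0].childNodes[0].data
    return default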

The scraped data is written to a CSV file, and downloaded attachments are saved into small folders named after the item id.

Timestamps are formatted as long date strings with milliseconds, consistent with the output messages of the Robot test tool.

Finally, the intermediate temporary XML file gets deleted. Ha!
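
As an aside, the temporary file is avoidable altogether: minidom can parse the response string directly, so there would be nothing to delete at the end. A two-line sketch:

# parse the response in memory -- no temporary xml file needed
DOMTree = xml.dom.minidom.parseString(xmlResult)
xmlDom = DOMTree.documentElement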

The code:

#!/usr/bin/python
# -*- coding:utf-8 -*-
import urllib2
import xml.dom.minidom
import os
import csv
import time


def get_timestamp():
    # format the current time as "YYYYMMDD HH:MM:SS.mmm"
    ct = time.time()
    local_time = time.localtime(ct)
    time_head = time.strftime("%Y%m%d %H:%M:%S", local_time)
    time_secs = (ct - long(ct)) * 1000
    timestamp = "%s.%03d" % (time_head, time_secs)
    return timestamp


# fake a browser with a user-agent header and attach the login cookie
userAgent = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.110 Safari/537.36'
cookie = '...0000Y3Pwq2s9BdZn8e0zpTDmkVv:-1...'
uaHeaders = {'User-Agent': userAgent, 'Cookie': cookie}

# build the item url by concatenating the id into the service address
itemUrlHead = "https://api.amkevin.com:8888/ccm/service/com.amkevin.team.workitem.common.internal.rest.IWorkItemRestService/workItemDT88?includeHistory=false&id="
itemUrlId = "99999"
itemUrlTail = "&projectAreaItemId=_hnbI8sMnEeSExMMeBFetbg"
itemUrl = itemUrlHead + itemUrlId + itemUrlTail

# send the request with the headers above
request = urllib2.Request(itemUrl, headers=uaHeaders)
response = urllib2.urlopen(request)
xmlResult = response.read()

# save the raw response to a temporary xml file
curPath = os.getcwd()
curXml = curPath + '/' + itemUrlId + '.xml'
if os.path.exists(curXml):
    os.remove(curXml)
curAttObj = open(curXml, 'w')
curAttObj.write(xmlResult)
curAttObj.close()

# parse the response xml file
DOMTree = xml.dom.minidom.parse(curXml)
xmlDom = DOMTree.documentElement

# prepare the shared csv file, writing the header row once
csvHeader = ["ID", "Creator", "Creator UserID", "Comment Content", "Creation Date"]
csvRow = []
csvCmtOneFile = curPath + '/rtcComment.csv'
if not os.path.exists(csvCmtOneFile):
    csvObj = open(csvCmtOneFile, 'w')
    csvWriter = csv.writer(csvObj)
    csvWriter.writerow(csvHeader)
    csvObj.close()

# get comments & write them to the csv files
items = xmlDom.getElementsByTagName("items")
for item in items:
    try:
        if item.hasAttribute("xsi:type"):
            curItem = item.getAttribute("xsi:type")
            if curItem == "workitem.restDTO:CommentDTO":
                curCommentContent = item.getElementsByTagName("content")[0].childNodes[0].data
                # strip unwanted substrings from the comment text
                # (fill in the strings to remove)
                curCommentContent = curCommentContent.replace('', '')
                curCommentContent = curCommentContent.replace('', '')
                curCommentCreationDate = item.getElementsByTagName("creationDate")[0].childNodes[0].data
                curCommentCreator = item.getElementsByTagName("creator")[0]
                curCommentCreatorName = curCommentCreator.getElementsByTagName("name")[0].childNodes[0].data
                curCommentCreatorId = curCommentCreator.getElementsByTagName("userId")[0].childNodes[0].data
                csvRow = []
                csvRow.append(itemUrlId)
                csvRow.append(curCommentCreatorName)
                csvRow.append(curCommentCreatorId)
                csvRow.append(curCommentContent)
                csvRow.append(curCommentCreationDate)
                # append the row to the shared csv
                csvObj = open(csvCmtOneFile, 'a')
                csvWriter = csv.writer(csvObj)
                csvWriter.writerow(csvRow)
                csvObj.close()
                # also write the row to a per-id csv in a folder named after the id
                curAttFolder = curPath + '/' + itemUrlId
                if not os.path.exists(curAttFolder):
                    os.mkdir(curAttFolder)
                curAttFile = curAttFolder + '/' + itemUrlId + '.csv'
                # mode 'a' creates the file if it does not exist yet
                curCsvObj = open(curAttFile, 'a')
                curCsvWriter = csv.writer(curCsvObj)
                curCsvWriter.writerow(csvRow)
                curCsvObj.close()
                print get_timestamp() + " :" + " INFO :" + " write comments to csv success."
    except IndexError:
        # empty element: there is no child node to index into
        print get_timestamp() + " :" + " INFO :" + " parse xml encountered empty element, skipped."
        continue

# get attachments
linkDTOs = xmlDom.getElementsByTagName("linkDTOs")
for linkDTO in linkDTOs:
    try:
        if linkDTO.getElementsByTagName("target")[0].hasAttribute("xsi:type"):
            curAtt = linkDTO.getElementsByTagName("target")[0].getAttribute("xsi:type")
            if curAtt == "workitem.restDTO:AttachmentDTO":
                curAttUrl = linkDTO.getElementsByTagName("url")[0].childNodes[0].data
                curTarget = linkDTO.getElementsByTagName("target")[0]
                curAttName = curTarget.getElementsByTagName("name")[0].childNodes[0].data
                # save the attachment into the per-id folder
                curAttFolder = curPath + '/' + itemUrlId
                if not os.path.exists(curAttFolder):
                    os.mkdir(curAttFolder)
                curAttFile = curAttFolder + '/' + curAttName
                curRequest = urllib2.Request(curAttUrl, headers=uaHeaders)
                curResponse = urllib2.urlopen(curRequest)
                curAttRes = curResponse.read()
                if os.path.exists(curAttFile):
                    os.remove(curAttFile)
                # write in binary mode: attachments are not necessarily text
                curAttObj = open(curAttFile, 'wb')
                curAttObj.write(curAttRes)
                curAttObj.close()
                print get_timestamp() + " :" + " INFO :" + " download attachment [" + curAttName + "] success."
    except IndexError:
        print get_timestamp() + " :" + " INFO :" + " parse xml encountered empty element, skipped."
        continue

# delete the temporary xml file
if os.path.exists(curXml):
    os.remove(curXml)
