Scraping data with Python requests

This short script POSTs to a search results page on www.bjda.gov.cn to fetch one page of the result list, collects the detail-page links from it, then GETs each detail page and reads the rows of its data table.

import requests
from lxml import etree
import time
import pymysql   # not used below; presumably intended for storing the results in MySQL
import json      # not used below
# Headers for the POST to the search/list page.
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36',
    'Content-Type': 'application/x-www-form-urlencoded',
    'Pragma': 'no-cache',
    'Upgrade-Insecure-Requests': '1',
    # Content-Length is computed by requests automatically; hard-coding it can break the request.
    'Host': 'www.bjda.gov.cn'
}

# Headers for the GET requests to the detail pages.
headers_xiangqing = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36',
    'Pragma': 'no-cache',
    'Upgrade-Insecure-Requests': '1',
    'Host': 'www.bjda.gov.cn'
}

# Form data for the list page: 20 results per page.
dd = {
    'pageSize': '20'
}

temp = []   # collected detail-page links

# Fetch one page of the result list (page 10 here).
dd['currentPage'] = '10'
print(dd)
response = requests.post('http://www.bjda.gov.cn/eportal/ui?pageId=348736', headers=headers, data=dd)
selector = etree.HTML(response.text)
# Collect the detail-page links from the result table, de-duplicated with set().
item_spider = list(set(selector.xpath('//tr[@class="chaxun_con"]//a/@href')))
temp.extend(item_spider)

# Visit each detail page and pull out the rows of its data table.
for i in temp:
    print('http://www.bjda.gov.cn/eportal/ui?pageId=348738&' + i[1:])
    response = requests.get('http://www.bjda.gov.cn/eportal/ui?pageId=348738&' + i[1:], headers=headers_xiangqing)
    print(response.status_code)
    selector = etree.HTML(response.text)
    tr = selector.xpath('//table[@class="table_sjcx"]//tr')
    print(tr)
    time.sleep(1)  # be polite between requests
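
Since pymysql and json are imported but never used, the parsed rows were presumably meant to be saved somewhere. Below is a minimal sketch of one way to do that: turn each tr element into a list of cell texts and insert it as a JSON string into a local MySQL table. The connection parameters and the table/column names (spider_db, record_rows, page_url, row_json) are assumptions for illustration, not part of the original script.

# Minimal storage sketch (assumed schema and credentials, adjust to your setup):
#   CREATE TABLE record_rows (
#       id INT AUTO_INCREMENT PRIMARY KEY,
#       page_url VARCHAR(255),
#       row_json TEXT
#   );
import json
import pymysql
from lxml import etree

def save_rows(html, page_url, conn):
    """Parse the detail-page table and insert one record per <tr> as JSON."""
    selector = etree.HTML(html)
    with conn.cursor() as cursor:
        for row in selector.xpath('//table[@class="table_sjcx"]//tr'):
            # Join the text fragments of each cell into a clean string.
            cells = [''.join(td.xpath('.//text()')).strip() for td in row.xpath('./td')]
            if not cells:
                continue  # skip header-only or empty rows
            cursor.execute(
                'INSERT INTO record_rows (page_url, row_json) VALUES (%s, %s)',
                (page_url, json.dumps(cells, ensure_ascii=False)),
            )
    conn.commit()

# Example wiring into the loop above (credentials are placeholders):
# conn = pymysql.connect(host='localhost', user='root', password='***',
#                        database='spider_db', charset='utf8mb4')
# save_rows(response.text, 'http://www.bjda.gov.cn/eportal/ui?pageId=348738&' + i[1:], conn)
# conn.close()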


Posted on 2018-04-18 11:15 by 秦瑞It行程实录.

Reposted from: https://www.cnblogs.com/ruiy/p/8872962.html
