Web Scraping Practice (1): Simulating Login and Scraping Table Data (Submitting Form Data)

Covered in this post:

  • Simulating a form submission with the requests library
  • Extracting web tables with the pandas library

Analyzing the Target

The URL is: https://www.ctic.org/crm?tdsourcetag=s_pctim_aiomsg

Opening it looks like this:
(screenshot of the search form)
After clicking View Summary, the target page looks like this:
(screenshot of the results table)

The target data lives at https://www.ctic.org/crm/?action=result, yet none of the parameters just selected show up as URL parameters! Both the URL and the page changed, so it isn't an AJAX request either.

Trying to Fetch the Target Page

Let's see what actually happens when the View Summary button is clicked. Right-click View Summary and choose Inspect:
(screenshot of the button's markup in DevTools)

It is submitted as a POST request. Click View Summary and look at the first entry under the Network tab in DevTools:
(screenshot of the captured POST request)
Without further ado, let's just try that POST:

import requests

url = 'https://www.ctic.org/crm?tdsourcetag=s_pctim_aiomsg'
# headers and form data copied straight from the request captured in DevTools
headers = {'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
           'AppleWebKit/537.36 (KHTML, like Gecko) '
           'Chrome/74.0.3729.131 Safari/537.36',
           'Host': 'www.ctic.org'}
data = {'_csrf': 'SjFKLWxVVkkaSRBYQWYYCA1TMG8iYR8ReUYcSj04Jh4EBzIdBGwmLw==',
        'CRMSearchForm[year]': '2011',
        'CRMSearchForm[format]': 'Acres',
        'CRMSearchForm[area]': 'County',
        'CRMSearchForm[region]': 'Midwest',
        'CRMSearchForm[state]': 'IL',
        'CRMSearchForm[county]': 'Adams',
        'CRMSearchForm[crop_type]': 'All',
        'summary': 'county'}
response = requests.post(url, data=data, headers=headers)
print(response.status_code)

Sure enough, it prints 400…

Trying Cookies

First off, I'm not entirely sure what cookies are exactly; I only know they are used to maintain a session and should come from the first GET. Let's dig them out and have a look:

response1 = requests.get(url, headers=headers)
if response1.status_code == 200:
    cookies = response1.cookies
    print(cookies)

Output:

<RequestsCookieJar[<Cookie PHPSESSID=52asgghnqsntitqd7c8dqesgh6 for www.ctic.org/>, <Cookie _csrf=2571c72a4ca9699915ea4037b967827150715252de98ea2173b162fa376bad33s%3A32%3A%22TAhjwgNo5ElZzV55k3DMeFoc5TWrEmXj%22%3B for www.ctic.org/>]>

Let's just pass them straight into the POST and see:

response2 = requests.post(url, data=data, headers=headers, cookies=cookies)
print(response2.status_code)

Still 400.

That _csrf in the POST data, which looked fishy from the start

There is a _csrf in that inscrutable cookie jar too! But the two _csrf values clearly have different structures, and swapping the _csrf in data for the one from the cookies turns out not to work either, as sketched below.
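
A minimal sketch of that failed swap, assuming the cookie value is read straight out of the jar returned by the first GET:

# reuse the _csrf *cookie* value as the _csrf *form field* -- this also fails
data['_csrf'] = cookies.get('_csrf')
response3 = requests.post(url, data=data, headers=headers, cookies=cookies)
print(response3.status_code)  # still 400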

These two _csrf values aren't equal, but they must be a matched pair; the data came from the browser while the cookies came from the Python program, so of course they don't match!
Open the browser's DevTools, hit Ctrl+F and search for _csrf, and, hey, there it is:
(screenshot of the three search hits in DevTools)

Three hits.

The csrf_token on the line right below the first hit is obviously the _csrf carried in the POST data; the other two sit inside JS functions, and even with my shaky JS it's clear those two fetch the state and county names via POST requests. Bingo! Two problems solved at once.

To verify the guess, the plan is: use requests to fetch the HTML and cookies of the page before View Summary is clicked, take the csrf_token extracted from that HTML as the _csrf in the POST data for the View Summary request, and send the cookies along with it. The two _csrf values should then be a matching pair:

from lxml import etree
response1 = requests.get(url, headers=headers)
cookies = response1.cookies
html = etree.HTML(response1.text)
# the csrf token lives in a <meta> tag in the page's <head> (here the third one)
csrf_token = html.xpath('/html/head/meta[3]/@content')[0]
data.update({'_csrf': csrf_token})
response2 = requests.post(url, data=data, headers=headers, cookies=cookies)
print(response2.status_code)

It prints 200. Chrome showed 302, but that still means success: requests follows the redirect automatically, so what comes back is the final result page.
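
To confirm the redirect really happened, the response keeps the redirect chain in its history; a quick check might look like this:

# response.history holds the intermediate redirect response(s); response.url is the final page
print([r.status_code for r in response2.history])  # e.g. [302]
print(response2.url)  # should end with ?action=result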

Trying pandas to Extract the Web Table

Now that the target page's HTML is in hand, before collecting all the years, regions, state names and county names, let's first test pandas.read_html's table extraction on it:

import pandas as pd
df = pd.read_html(response2.text)[0]
print(df)
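
The final script later drops all-NaN rows from this table with dropna(how='all'), which suggests read_html picks up a few empty spacer rows here; a small cleanup sketch:

# keep only rows that have at least one non-NaN cell
df = pd.read_html(response2.text)[0].dropna(how='all')
print(df.shape)
print(df.head())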

Preparing All the Parameters

Next, collect every year, region, state name and county name. Years and regions are hard-coded in the HTML, so plain XPath gets them:
(screenshot of the year and region <option> elements in the HTML)
State and county names, per the two JS functions found earlier, have to be requested via POST: states depend on the region and counties depend on the state, so two nested loops take care of it.

def new():
    # start a fresh session and fetch the search page
    session = requests.Session()
    response = session.get(url=url, headers=headers)
    html = etree.HTML(response.text)
    return session, html

session, html = new()
# years and regions are hard-coded in the page's <option> elements
years = html.xpath('//*[@id="crmsearchform-year"]/option/text()')
regions = html.xpath('//*[@id="crmsearchform-region"]/option/text()')
_csrf = html.xpath('/html/head/meta[3]/@content')[0]
region_state = {}
state_county = {}
# url_state / url_county are the two endpoints called by the JS functions found
# earlier; their exact paths come from that JS (definitions not shown here)
for region in regions:
    data = {'region': region, '_csrf': _csrf}
    response = session.post(url_state, data=data)
    # each endpoint returns a JSON-encoded HTML fragment of <option> elements
    html = etree.HTML(response.json())
    region_state[region] = {x: y for x, y in
                            zip(html.xpath('//option/@value'),
                                html.xpath('//option/text()'))}
    for state in region_state[region]:
        data = {'state': state, '_csrf': _csrf}
        response = session.post(url_county, data=data)
        html = etree.HTML(response.json())
        state_county[state] = html.xpath('//option/@value')

Using requests.Session means there's no need to manage cookies by hand at all. Convenient!
Next, write every possible combination of year, region, state and county out to a CSV file, so the POST data dict can later be built straight from it:

remain = [[str(year), str(region), str(state), str(county)]
          for year in years for region in regions
          for state in region_state[region] for county in state_county[state]]
remain = pd.DataFrame(remain, columns=['CRMSearchForm[year]',
                                       'CRMSearchForm[region]',
                                       'CRMSearchForm[state]',
                                       'CRMSearchForm[county]'])
remain.to_csv('remain.csv', index=False)
# state names come in both abbreviated and full forms, so keep the mapping locally too
import json
with open('region_state.json', 'w') as json_file:
    json.dump(region_state, json_file, indent=4)
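
A quick sanity check of what was just built. This is only a sketch: the 'Midwest', 'IL' and 'Adams' keys reuse the values from the earlier example, and the full state name as option text is inferred from the abbreviation/full-name remark above.

# region_state maps region -> {state abbreviation: full state name};
# state_county maps state abbreviation -> list of county names
print(list(region_state.get('Midwest', {}).items())[:3])  # e.g. [('IL', 'Illinois'), ...]
print(state_county.get('IL', [])[:3])                      # e.g. ['Adams', ...]
print(len(remain), 'combinations written to remain.csv')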

The Real Run

import time
import pyodbc

with open("region_state.json") as json_file:
    region_state = json.load(json_file)
data = pd.read_csv('remain.csv')
# load the records that have already been scraped, so the run can resume
cnxn = pyodbc.connect('DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};'
                      'DBQ=./ctic_crm.accdb')
crsr = cnxn.cursor()
crsr.execute('select Year_, Region, State, County from ctic_crm')
done = crsr.fetchall()
done = pd.DataFrame([list(x) for x in done], columns=['CRMSearchForm[year]',
                                                      'CRMSearchForm[region]',
                                                      'CRMSearchForm[state]',
                                                      'CRMSearchForm[county]'])
done['CRMSearchForm[year]'] = done['CRMSearchForm[year]'].astype('int64')
# the database stores full state names; map them back to abbreviations
state2st = {y: x for z in region_state.values() for x, y in z.items()}
done['CRMSearchForm[state]'] = [state2st[x]
                                for x in done['CRMSearchForm[state]']]
# exclude the combinations that have already been scraped
remain = data.append(done)
remain = remain.drop_duplicates(keep=False)
total = len(remain)
print(f'{total} left.\n')
del data

# %%
remain['CRMSearchForm[year]'] = remain['CRMSearchForm[year]'].astype('str')
columns = ['Crop',
           'Total_Planted_Acres',
           'Conservation_Tillage_No_Till',
           'Conservation_Tillage_Ridge_Till',
           'Conservation_Tillage_Mulch_Till',
           'Conservation_Tillage_Total',
           'Other_Tillage_Practices_Reduced_Till15_30_Residue',
           'Other_Tillage_Practices_Conventional_Till0_15_Residue']
fields = ['Year_', 'Units', 'Area', 'Region', 'State', 'County'] + columns
data = {'CRMSearchForm[format]': 'Acres',
        'CRMSearchForm[area]': 'County',
        'CRMSearchForm[crop_type]': 'All',
        'summary': 'county'}
headers = {'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
           'AppleWebKit/537.36 (KHTML, like Gecko) '
           'Chrome/74.0.3729.131 Safari/537.36',
           'Host': 'www.ctic.org',
           'Upgrade-Insecure-Requests': '1',
           'DNT': '1',
           'Connection': 'keep-alive'}
url = 'https://www.ctic.org/crm?tdsourcetag=s_pctim_aiomsg'
headers2 = headers.copy()
# dict.update() returns None, so don't reassign its result to headers2
headers2.update({'Referer': url,
                 'Origin': 'https://www.ctic.org'})
def new():
    session = requests.Session()
    response = session.get(url=url, headers=headers)
    html = etree.HTML(response.text)
    _csrf = html.xpath('/html/head/meta[3]/@content')[0]
    return session, _csrf
session, _csrf = new()
for _, row in remain.iterrows():
    temp = dict(row)
    data.update(temp)
    data.update({'_csrf': _csrf})
    while True:
        try:
            response = session.post(url, data=data, headers=headers2, timeout=15)
            break
        except Exception as e:
            session.close()
            print(e)
            print('\nSleep 30s.\n')
            time.sleep(30)
            session, _csrf = new()
            data.update({'_csrf': _csrf})

    df = pd.read_html(response.text)[0].dropna(how='all')
    df.columns = columns
    df['Year_'] = int(temp['CRMSearchForm[year]'])
    df['Units'] = 'Acres'
    df['Area'] = 'County'
    df['Region'] = temp['CRMSearchForm[region]']
    df['State'] = region_state[temp['CRMSearchForm[region]']][temp['CRMSearchForm[state]']]
    df['County'] = temp['CRMSearchForm[county]']
    df = df.reindex(columns=fields)
    for record in df.itertuples(index=False):
        tuple_record = tuple(record)
        # build a literal INSERT; the replace() crudely turns NaN cells into SQL NULLs
        sql_insert = f'INSERT INTO ctic_crm VALUES {tuple_record}'
        sql_insert = sql_insert.replace(', nan,', ', null,')
        crsr.execute(sql_insert)
        crsr.commit()
    print(total, row.to_list())
    total -= 1
else:  # for-else: runs once the loop has processed every remaining row
    print('Done!')
    crsr.close()
    cnxn.close()