Python crawler: scraping CNKI paper abstracts by keyword and saving them to a MySQL database [beginner tutorial]

Using a Python crawler built on the requests and BeautifulSoup libraries, this post scrapes paper abstracts from CNKI (China National Knowledge Infrastructure) for a set of search keywords (e.g. E. coli, total bacterial count) and saves the results into a MySQL database via pymysql.

# -*- coding: utf-8 -*-

import time
import random
import requests
from bs4 import BeautifulSoup
import pymysql

# fill in your own MySQL connection details
connection = pymysql.connect(host='',
                             user='',
                             password='',
                             db='',
                             port=3306,
                             charset='utf8')  # note: pymysql expects 'utf8', not 'utf-8'

# get a cursor
cursor = connection.cursor()
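
The INSERT statement in get_data below assumes a table named cnki whose first column is auto-generated, followed by five text columns in the order (keyword type, title, author, abstract, keywords). The original post never shows the schema, so the column names in this sketch are assumptions; only the column order is implied by the code:

# Assumed schema for the cnki table. Column names are guesses;
# only the column order is implied by the INSERT statement used later.
cursor.execute("""
    CREATE TABLE IF NOT EXISTS cnki (
        id INT AUTO_INCREMENT PRIMARY KEY,
        word_type VARCHAR(100),
        title VARCHAR(500),
        author VARCHAR(500),
        abstract TEXT,
        keywords VARCHAR(500)
    ) DEFAULT CHARSET=utf8
""")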

# url = 'http://epub.cnki.net/grid2008/brief/detailj.aspx?filename=RLGY201806014&dbname=CJFDLAST2018'

# These headers must be included, otherwise the site redirects the
# request to another page.
headers = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
    'Accept-Encoding': 'gzip, deflate, sdch',
    'Accept-Language': 'zh-CN,zh;q=0.8',
    'Connection': 'keep-alive',
    'Host': 'www.cnki.net',
    'Referer': 'http://search.cnki.net/search.aspx?q=%E4%BD%9C%E8%80%85%E5%8D%95%E4%BD%8D%3a%E6%AD%A6%E6%B1%89%E5%A4%A7%E5%AD%A6&rank=relevant&cluster=zyk&val=CDFDTOTAL',
    'Upgrade-Insecure-Requests': '1',
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36'
}

# a lighter header set used for the search-result pages
headers1 = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.143 Safari/537.36'
}
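
To verify that these headers are actually accepted rather than silently redirected, requests records any redirect hops in response.history. A quick check (a sketch, not part of the original script):

# Sanity check: an empty response.history means no redirect happened.
resp = requests.get('http://www.cnki.net', headers=headers)
if resp.history:
    print('Request was redirected via:', [r.url for r in resp.history])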

def get_url_list(start_url):
    depth = 20          # number of result pages to walk
    url_list = []
    for i in range(depth):
        try:
            # the old CNKI search paginates 15 results per page via the &p= offset
            url = start_url + "&p=" + str(i * 15)
            search = requests.get(url.replace('\n', ''), headers=headers1)
            soup = BeautifulSoup(search.text, 'html.parser')
            for art in soup.find_all('div', class_='wz_tab'):
                print(art.find('a')['href'])
                if art.find('a')['href'] not in url_list:
                    url_list.append(art.find('a')['href'])
            print("Fetched results page " + str(i) + " successfully!")
            time.sleep(random.randint(1, 3))   # polite random delay between pages
        except Exception:
            print("Failed to fetch results page " + str(i) + "!")
    return url_list
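
get_url_list only returns the collected links in memory. Persisting them to a file such as url_results.txt (a file name the original comments hint at but never write) lets a later run skip the search phase. A minimal sketch; the helper itself is an addition:

# Hypothetical helper: save the collected links to url_results.txt so a
# later run can reload them instead of re-crawling the search pages.
def save_url_list(url_list, path='url_results.txt'):
    with open(path, 'w', encoding='utf-8') as f:
        for url in url_list:
            f.write(url + '\n')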

def get_data(url_list, wordType):
    try:
        # visit each collected link (optionally read back from
        # url_results.txt, see the sketch above)
        i = 1
        for url in url_list:
            if not url:
                continue
            soup = None
            try:
                html = requests.get(url.replace('\n', ''), headers=headers)
                soup = BeautifulSoup(html.text, 'html.parser')
            except Exception:
                print("Failed to fetch the page")
            if soup is None:
                continue
            title = author = abstract = key = ''
            try:
                print(url)
                # title: the part of the <title> tag before the first '-'
                title = soup.find('title').get_text().split('-')[0]
                # authors
                for a in soup.find('div', class_='summary pad10').find('p').find_all('a', class_='KnowledgeNetLink'):
                    author += (a.get_text() + ' ')
                # abstract
                abstract = soup.find('span', id='ChDivSummary').get_text()
            except Exception:
                print("Some fields could not be extracted")
            # keywords; some papers have none
            try:
                for k in soup.find('span', id='ChDivKeyWord').find_all('a', class_='KnowledgeNetLink'):
                    key += (k.get_text() + ' ')
            except Exception:
                pass
            print("URL #" + str(i))
            print("[Title]: " + title)
            print("[Author]: " + author)
            print("[Abstract]: " + abstract)
            print("[Keywords]: " + key)
            # run the SQL statement and commit the record
            cursor.execute('INSERT INTO cnki VALUES (NULL, %s, %s, %s, %s, %s)',
                           (wordType, title, author, abstract, key))
            connection.commit()
            print()
            i += 1
        print("Crawl finished")
    finally:
        print()

if __name__ == '__main__':
    try:
        # search terms, kept in Chinese because they are the actual CNKI
        # queries: E. coli, total bacterial count, carmine, sunset yellow
        for wordType in ["大肠杆菌", "菌群总落", "胭脂红", "日落黄"]:
            wordType = "肉+" + wordType   # prefix each query with "meat +"
            start_url = "http://search.cnki.net/search.aspx?q=%s&rank=relevant&cluster=zyk&val=" % wordType
            url_list = get_url_list(start_url)
            print("Starting crawl")
            get_data(url_list, wordType)
            print("Finished one keyword")
        print("All keywords finished")
    finally:
        connection.close()
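
Note that start_url interpolates the raw Chinese keyword into the query string and relies on requests to percent-encode it on the way out. Making the encoding explicit, either through the params argument or urllib.parse.quote, avoids depending on that behavior; a sketch under that assumption:

# Alternative: let requests build and percent-encode the query string.
from urllib.parse import quote

params = {'q': wordType, 'rank': 'relevant', 'cluster': 'zyk', 'val': ''}
search = requests.get('http://search.cnki.net/search.aspx',
                      params=params, headers=headers1)
print(search.url)  # the percent-encoded URL that was actually requested

# Or encode by hand before string formatting:
start_url = "http://search.cnki.net/search.aspx?q=%s&rank=relevant&cluster=zyk&val=" % quote(wordType)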
