Scrape the first 10 pages of novel listings from Douban Books (page URL: https://book.douban.com/tag/小说), collecting the following information for every novel: author, title, publisher, publication date, price, rating, number of ratings, and synopsis. Store the results in a database (not limited to SQLite), one record per novel. Finally, implement a feature to query the information of a given novel.
1. Import the libraries
import requests
from bs4 import BeautifulSoup as bs
import sqlite3
2. Define a function to fetch a page's HTML source
def getHTML(url):
    # Douban blocks requests without a browser-like User-Agent
    headers = {
        'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.72 Safari/537.36 Edg/90.0.818.42'
    }
    r = requests.get(url, headers=headers, timeout=10)
    r.raise_for_status()              # fail fast on HTTP errors
    r.encoding = r.apparent_encoding  # guess the correct character encoding
    return r.text
3. Scrape each page and save the results
# Scrape each page and accumulate the results
contents = []                                      # one flattened list per page
i = 0
f = open('xiaoshuo.txt', 'w+', encoding='utf-8')   # also keep a plain-text copy
while i < 200:                                     # first 10 pages, 20 books each
    root = 'https://book.douban.com/tag/%E5%B0%8F%E8%AF%B4?start={}&type=T'.format(i)
    i = i + 20
    html = getHTML(root)
    soup = bs(html, 'html.parser')                 # parse each page only once
    poet_titles = [item.text.strip() for item in soup.select('h2')]
    # div.pub holds "author / publisher / date / price" as a single string
    poet_authors = [item.text.strip() for item in soup.select('div.pub')]
    poet_nums = [item.text.strip() for item in soup.select('span.rating_nums')]
    poet_pl = [item.text.strip() for item in soup.select('span.pl')]
    poet_contents = [item.text.strip() for item in soup.select('p')]
    # Flatten the five 20-item lists into one 100-item list for this page
    contents.append(poet_titles + poet_authors + poet_nums + poet_pl + poet_contents)
    f.write('\n'.join(poet_titles) + '\n')
f.close()
4. Create the database
conn = sqlite3.connect('xiaoshuo.db')
sql_tables = "create table if not exists xiao(id integer primary key autoincrement,name text,author text,grade text,people text,content text)"
conn.execute(sql_tables)
conn.commit()
print("Starting to write to the database....")
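The CSS selectors used above (`h2`, `div.pub`, `span.rating_nums`) can be sanity-checked without hitting the network. Below is a minimal sketch against a hypothetical HTML snippet that imitates the structure of Douban's list page; the markup and sample values are made up for illustration, not copied from the real site:

```python
from bs4 import BeautifulSoup as bs

# Hypothetical snippet imitating one entry of Douban's list page
html = '''
<li class="subject-item">
  <div class="info">
    <h2><a href="#">活着</a></h2>
    <div class="pub">余华 / 作家出版社 / 2012-8 / 20.00元</div>
    <span class="rating_nums">9.4</span>
    <span class="pl">(786553人评价)</span>
    <p>讲述一位老人的一生。</p>
  </div>
</li>
'''
soup = bs(html, 'html.parser')
titles = [item.text.strip() for item in soup.select('h2')]
pubs = [item.text.strip() for item in soup.select('div.pub')]
nums = [item.text.strip() for item in soup.select('span.rating_nums')]
print(titles, pubs, nums)
```

Running the selectors against a fixed snippet like this makes it easy to see which text each one extracts before looping over the live pages.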
5. Write to the database
for index, page in enumerate(contents):
    print("Writing the novels from page {}".format(index + 1))
    for i in range(20):
        # Each page list holds 100 items: 20 titles, then 20 pub strings,
        # 20 ratings, 20 rating counts and 20 synopses
        name = page[i]
        author = page[i + 20]
        grade = page[i + 40]
        people = page[i + 60]
        content = page[i + 80]
        # Use a parameterized query rather than string formatting, so quotes
        # in titles cannot break the SQL and injection is impossible
        sql = "insert into xiao values(null,?,?,?,?,?)"
        conn.execute(sql, (name, author, grade, people, content))
    conn.commit()
    print("Page {} has been saved, moving on to the next page".format(index + 1))
conn.close()
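The task also calls for querying the information of a single novel. A minimal sketch of such a lookup against the `xiao` table is shown below; to keep it runnable on its own, the demo uses an in-memory database seeded with one made-up record instead of the scraped `xiaoshuo.db`:

```python
import sqlite3

def query_novel(conn, name):
    """Return all records whose title contains `name` (empty list if none)."""
    cur = conn.execute(
        "select name, author, grade, people, content from xiao where name like ?",
        ('%' + name + '%',))
    return cur.fetchall()

# Standalone demo: in-memory database with one made-up record
conn = sqlite3.connect(':memory:')
conn.execute("create table xiao(id integer primary key autoincrement,"
             "name text,author text,grade text,people text,content text)")
conn.execute("insert into xiao values(null,?,?,?,?,?)",
             ('活着', '余华 / 作家出版社 / 2012-8 / 20.00元',
              '9.4', '(786553人评价)', '讲述一位老人的一生。'))
rows = query_novel(conn, '活着')
for name, author, grade, people, content in rows:
    print(name, author, grade, people)
```

Pointing `sqlite3.connect` at `xiaoshuo.db` instead of `:memory:` (and dropping the seeding lines) turns the same function into the lookup feature required by the assignment.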
Finally, a screenshot of the query visualization is attached: