Reading *Practical Web Scraping for Data Science*: "Cannot operate on a closed database" error in the code on p. 161

Problem description

I recently read a book on web scraping1 and typed the code from page 161 into my computer verbatim to build a crawler spider, but running it produced the following error:

Error closing cursor
Traceback (most recent call last):
File "E:\StudyCard\BigData\WebScrape\PWSfDScode.pwsenv\lib\site-packages\sqlalchemy\engine\result.py", line 1324, in fetchone
row = self._fetchone_impl()
File "E:\StudyCard\BigData\WebScrape\PWSfDScode.pwsenv\lib\site-packages\sqlalchemy\engine\result.py", line 1204, in _fetchone_impl
return self.cursor.fetchone()
sqlite3.ProgrammingError: Cannot operate on a closed database.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "E:\StudyCard\BigData\WebScrape\PWSfDScode.pwsenv\lib\site-packages\sqlalchemy\engine\base.py", line 1339, in _safe_close_cursor
cursor.close()
sqlite3.ProgrammingError: Cannot operate on a closed database.
Traceback (most recent call last):
File "E:\StudyCard\BigData\WebScrape\PWSfDScode.pwsenv\lib\site-packages\sqlalchemy\engine\result.py", line 1324, in fetchone
row = self._fetchone_impl()
File "E:\StudyCard\BigData\WebScrape\PWSfDScode.pwsenv\lib\site-packages\sqlalchemy\engine\result.py", line 1204, in _fetchone_impl
return self.cursor.fetchone()
sqlite3.ProgrammingError: Cannot operate on a closed database.

I remember running into this same error the last time I read this book. It appears to be caused by the records library: the SQLite connection is closed before the lazily fetched query results have been read, so the later fetch operates on a closed database.
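The underlying failure can be reproduced with the standard library alone, without records or SQLAlchemy. This is a minimal sketch (the table and data are made up for illustration): fetching from a cursor after its connection has been closed raises exactly the `sqlite3.ProgrammingError` seen in the traceback above.

```python
import sqlite3

# Create an in-memory database with one row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pages (url TEXT)")
conn.execute("INSERT INTO pages VALUES ('http://example.com')")

# Start a query, then close the connection before fetching --
# this mimics a lazy result being consumed after the connection is gone.
cursor = conn.execute("SELECT url FROM pages")
conn.close()

try:
    cursor.fetchone()
except sqlite3.ProgrammingError as exc:
    print(exc)  # Cannot operate on a closed database.
```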

Solution

Add the following line right after the code that creates the database:

import requests
import records
from bs4 import BeautifulSoup
from urllib.parse import urljoin
from sqlalchemy.exc import IntegrityError

db = records.Database('sqlite:///crawler_database.db')
db = db.get_connection() # newly added

With that change, the code runs normally: all queries now go through one explicitly held connection, which stays open until the results have been consumed.
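The same principle, shown here with the standard library as an analogue (the table and data are invented for illustration): keep a single connection open for the whole session and fetch results while it is still alive, closing it only at the end.

```python
import sqlite3

# Hold one connection for the entire session, like the fix above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pages (url TEXT)")
conn.execute("INSERT INTO pages VALUES ('http://example.com')")

# Fetch while the connection is still open.
cursor = conn.execute("SELECT url FROM pages")
row = cursor.fetchone()
print(row[0])  # http://example.com

conn.close()  # close only after all results are consumed
```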


  1. Seppe vanden Broucke and Bart Baesens. *Practical Web Scraping for Data Science*. Apress, 2018. ↩︎
