Python Novel Scraper Training Report: A Novel Crawler Even Beginners Can Follow

Came back in the evening to study some web scraping. Keep in mind that most sites are beyond a beginner's reach, so here is a simple one. Read on:

import urllib.request
from bs4 import BeautifulSoup  # I use PyCharm; this package has to be installed manually
import lxml  # same as above; BeautifulSoup uses it as the HTML parser below

def getHtml(url, headers):
    # fetch the page and return the raw bytes
    req = urllib.request.Request(url=url, headers=headers)
    res = urllib.request.urlopen(req)
    html = res.read()
    return html

def saveTxt(path, html):
    # save raw page bytes to disk (helper; not used in the loop below)
    f = open(path, 'wb')
    f.write(html)
    f.close()

def praseHtml(currentURL, headers, path):
    # html = html.decode('utf-8')
    chapter = 0
    flag = 1
    while flag:
        chapter = chapter + 1
        if chapter >= 30:  # cap the number of chapters downloaded; too much data and the machine will choke
            flag = 0  # stop downloading after this chapter
        html = getHtml(currentURL, headers)
        savePath = path + "\\" + str(chapter) + ".txt"
        soup = BeautifulSoup(html, "lxml")  # note the parser is "lxml"; the first time I wrote "html" here, an easy slip that will bite you
        nameText = soup.find('h3', attrs={'class': 'j_chapterName'})
        contentText = soup.find('div', attrs={'class': 'read-content j_readContent'})
        result = nameText.getText() + '\n' + contentText.getText()
        result = result.replace(' ', '\n ')  # break the flattened text back into lines at the indent spaces
        f = open(savePath, "w", encoding="utf-8")  # open once (the original opened the file a second time, leaking a handle)
        f.write(result)
        f.close()
        nextpage = soup.find('a', attrs={'id': 'j_chapterNext'})
        if nextpage:  # the original tested the builtin `next`, which is always truthy, so the loop never ended here
            currentURL = "http:" + nextpage['href']
        else:
            currentURL = None
            flag = 0

def main():
    url = "https://www.readnovel.com/chapter/22160402000540402/107513768840595159"
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.115 Safari/537.36'}  # you can inspect the request headers in the browser yourself (F12 -> Network -> refresh)
    path = "D:\\novel"
    praseHtml(url, headers, path)

main()
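One gotcha worth noting: open() raises FileNotFoundError if D:\novel does not exist, and the script also stops dead on any network hiccup. Below is a minimal hardened sketch of the same download loop, assuming the same readnovel.com page structure (j_chapterName / j_readContent / j_chapterNext); the requests library is swapped in for urllib here and is an extra install (pip install requests), and download_novel is just an illustrative name.

import os
import requests  # assumption: substituted for urllib; install with `pip install requests`
from bs4 import BeautifulSoup

def download_novel(start_url, headers, path, max_chapters=30):
    os.makedirs(path, exist_ok=True)  # create the save directory if it is missing
    url = start_url
    for chapter in range(1, max_chapters + 1):
        res = requests.get(url, headers=headers, timeout=10)
        res.raise_for_status()  # fail loudly on HTTP errors instead of saving a garbage page
        soup = BeautifulSoup(res.content, "lxml")
        name = soup.find('h3', attrs={'class': 'j_chapterName'})
        content = soup.find('div', attrs={'class': 'read-content j_readContent'})
        if name is None or content is None:  # layout changed or we were blocked
            break
        with open(os.path.join(path, str(chapter) + ".txt"), "w", encoding="utf-8") as f:
            f.write(name.getText() + '\n' + content.getText())  # `with` closes the file for us
        nextpage = soup.find('a', attrs={'id': 'j_chapterNext'})
        if not (nextpage and nextpage.get('href')):
            break  # last chapter reached
        url = "http:" + nextpage['href']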
