Web Scraper Mojibake Problem

Over the weekend I scraped a novel. Here's the code:

import requests
from bs4 import BeautifulSoup
# Goal: scrape all chapter titles and chapter bodies of Romance of the Three Kingdoms from http://www.shicimingju.com/book/sanguoyanyi.html
if __name__ == "__main__":
    # Fetch the table-of-contents page
    headers = {
        'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.121 Safari/537.36'
    }
    url = 'http://www.shicimingju.com/book/sanguoyanyi.html'
    page_text = requests.get(url=url, headers=headers).text

    # Parse the chapter titles and detail-page URLs out of the TOC page
    # 1. Instantiate a BeautifulSoup object and load the page source into it
    soup = BeautifulSoup(page_text, 'lxml')
    # Extract the chapter titles and detail-page URLs
    li_list = soup.select('.book-mulu > ul > li')
    fp = open('./sanguo.txt', 'w', encoding='utf-8')
    for li in li_list:
        title = li.a.string
        detail_url = 'http://www.shicimingju.com' + li.a['href']
        # Request the detail page and parse out the chapter body
        detail_page_text = requests.get(url=detail_url, headers=headers).text
        # Parse the chapter body out of the detail page
        detail_soup = BeautifulSoup(detail_page_text, 'lxml')
        div_tag = detail_soup.find('div', class_='chapter_content')
        # Got the chapter body
        content = div_tag.text
        fp.write(title + ':' + content + '\n')
        print(title, 'scraped successfully!!!')

The result came back garbled:

I checked the page source, and its declared encoding is utf-8.

The PyCharm editor also uses UTF-8. If everything is UTF-8, why the mojibake? At that point I thought to print the encoding requests was actually using for the page:

page_text = requests.get(url=url, headers=headers)
print(page_text.encoding)

To my surprise, it printed: ISO-8859-1

I later found the cause in this post: 如何解决python爬虫乱码问题_giun的博客-CSDN博客_python爬虫乱码怎么解决. In short: the server's Content-Type header carries no charset, and in that case requests falls back to ISO-8859-1 (the old HTTP/1.1 default) rather than reading the charset declared inside the HTML, so .text decodes the UTF-8 bytes as Latin-1 and produces mojibake.
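You can verify this straight from the Response object. A quick check (apparent_encoding asks requests' bundled charset detector to guess from the raw body bytes):

resp = requests.get(url=url, headers=headers)
print(resp.headers.get('Content-Type'))   # e.g. 'text/html' with no charset parameter
print(resp.encoding)                      # ISO-8859-1, the header-based fallback
print(resp.apparent_encoding)             # utf-8, detected from the body bytes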

So how do you resolve this encoding mismatch?

page_text = requests.get(url=url, headers=headers).text

# encode: turn the mis-decoded str back into the original bytes, using the
# same ISO-8859-1 that requests used to decode them
page_text = page_text.encode('ISO-8859-1')

# decode: turn those bytes into a str again, this time with the correct encoding, UTF-8
page_text = page_text.decode('utf-8')
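The round trip works because ISO-8859-1 maps every byte value 0–255 to a character, so encoding back to ISO-8859-1 losslessly recovers the original UTF-8 bytes. That said, requests offers two cleaner ways to get the same result (both standard Response attributes, shown here as a sketch):

resp = requests.get(url=url, headers=headers)

# Option 1: tell requests the real charset before touching .text
resp.encoding = 'utf-8'
page_text = resp.text

# Option 2: skip requests' guess entirely and decode the raw bytes yourself
page_text = resp.content.decode('utf-8')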

The updated code:

import requests
from bs4 import BeautifulSoup

if __name__ == '__main__':
    # 1. Fetch the table-of-contents page
    url = 'http://www.shicimingju.com/book/sanguoyanyi.html'
    headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.55 Safari/537.36'}
    page_text = requests.get(url=url, headers=headers).text
    # requests decoded the body as ISO-8859-1 (see above), so round-trip it back to UTF-8
    page_text = page_text.encode('ISO-8859-1')
    page_text = page_text.decode('utf-8')

    # 2. Parse the chapter titles and detail-page URLs out of the TOC page
    # (1) Instantiate a BeautifulSoup object and load the page source into it
    soup = BeautifulSoup(page_text, 'lxml')
    # (2) Select the chapter list items
    li_list = soup.select('.book-mulu > ul > li')
    fp = open('./sanguo.txt', 'w', encoding='utf-8')
    for li in li_list:
        title = li.a.string
        detail_url = 'https://www.shicimingju.com' + li.a['href']
        # Request the detail page and parse out the chapter body
        detail_page_text = requests.get(url=detail_url, headers=headers).text
        # Apply the same fix to each detail page
        detail_page_text = detail_page_text.encode('ISO-8859-1')
        detail_page_text = detail_page_text.decode('utf-8')

        # Parse the chapter body out of the detail page
        detail_soup = BeautifulSoup(detail_page_text, 'lxml')
        div_tag = detail_soup.find('div', class_='chapter_content')
        content = div_tag.text
        # Save the data
        fp.write(title + ':' + content + '\n')
        print(title, 'scraped successfully')
    fp.close()

And this time everything scraped successfully!!!

Is there an even better way? Feel free to share 👏
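For what it's worth, one tidier option is to let requests' charset detection choose the encoding, so nothing is hard-coded and the same code works on sites with other charsets; a minimal sketch:

resp = requests.get(url=url, headers=headers)
# apparent_encoding is guessed from the body bytes, so this works even when
# the Content-Type header omits the charset
resp.encoding = resp.apparent_encoding
page_text = resp.text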
