Handling garbled characters when scraping web pages
1. How to check a page's original encoding
First, let's see how to check a page's encoding in the browser, using iQIYI as an example. Open the iQIYI page, then right-click → Inspect → switch to the Console tab → type document.charset, and the page's encoding is displayed, as shown below:
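If you would rather check from Python than from the browser console, requests exposes roughly the same information. A minimal sketch (the request header simply reuses the one from the examples below):

import requests

url = "https://www.iqiyi.com/"
header = {"user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36"}
resp = requests.get(url, headers=header)
# Encoding requests inferred from the HTTP Content-Type header (it falls back to ISO-8859-1 if the header names no charset)
print(resp.encoding)
# Encoding guessed from the raw bytes of the body; this often matches what document.charset shows in the browser
print(resp.apparent_encoding)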
2. A page encoded as "utf-8" can still come out garbled
From the step above we know that iQIYI's page encoding is "utf-8". Now let's fetch its HTML:
import requests

url = "https://www.iqiyi.com/"
header = {"user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36"}
# Let requests decode the body with whatever encoding it infers on its own
html = requests.get(url, headers=header).text
print(html)
The output:
Well, it is clearly garbled, even though the page declares utf-8. This usually means the server's Content-Type response header carried no charset, so requests fell back to its default (ISO-8859-1) when decoding the body. The fix is to tell requests the encoding explicitly before reading .text:
import requests

url = "https://www.iqiyi.com/"
header = {"user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36"}
html = requests.get(url, headers=header)
# Override the encoding requests chose so that .text decodes the bytes as utf-8
html.encoding = "utf-8"
MyHtml = html.text
print(MyHtml)
After running, the result is:
OK, that did it! Now let's look at NetEase Cloud Music, whose page encoding is also "utf-8":
import requests

url = "https://music.163.com/"
header = {"user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.122 Safari/537.36"}
html = requests.get(url, headers=header).text
print(html)
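Whether a utf-8 page needs the manual override mostly comes down to whether the server's Content-Type header already names a charset. Here is a quick way to check both of the sites above at once (this diagnostic loop is my own addition, not part of the original walkthrough):

import requests

header = {"user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.122 Safari/537.36"}
for url in ("https://www.iqiyi.com/", "https://music.163.com/"):
    resp = requests.get(url, headers=header)
    # If the Content-Type header names a charset, requests adopts it automatically
    # and .text comes out readable without setting resp.encoding by hand
    print(url, resp.headers.get("Content-Type"), resp.encoding)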
Now let's look at another page: 51job (前程无忧).
Its page encoding, checked the same way, is "GBK".
Let's fetch its source code:
import requests

url = "https://search.51job.com/list/150300,000000,0000,32,9,99,%E5%A4%A7%E6%95%B0%E6%8D%AE%E5%BC%80%E5%8F%91%E5%B7%A5%E7%A8%8B%E5%B8%88,2,1.html"
header = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.122 Safari/537.36"}
html = requests.get(url, headers=header).text
print(html)
The result:
OK, another problem, but we're not afraid; there is always a way. Let's see how to handle this case, starting with the same fix that worked before:
import requests

url = "https://search.51job.com/list/150300,000000,0000,32,9,99,%E5%A4%A7%E6%95%B0%E6%8D%AE%E5%BC%80%E5%8F%91%E5%B7%A5%E7%A8%8B%E5%B8%88,2,1.html"
header = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.122 Safari/537.36"}
html = requests.get(url, headers=header)
# Force utf-8, the same fix that worked for iQIYI (but this page is actually GBK)
html.encoding = "utf-8"
MyHtml = html.text
print(MyHtml)
The result:
And what on earth is this now? Still garbled, which makes sense: the page's bytes are GBK, so decoding them as utf-8 only mangles them in a different way.
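Here is a tiny sketch of why forcing the wrong encoding cannot work (the sample string is just an illustration, not output from the site):

# "大数据开发工程师" encoded as GBK, then decoded as utf-8 with errors replaced,
# which is roughly what happens when we force html.encoding = "utf-8" on a GBK page
raw = "大数据开发工程师".encode("gbk")
print(raw.decode("utf-8", errors="replace"))

By contrast, if we leave html.encoding alone, requests decodes the body with its ISO-8859-1 fallback, which preserves the original bytes one-to-one. So we can re-encode the text to get those bytes back and decode them as GBK: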
import requests

url = "https://search.51job.com/list/150300,000000,0000,32,9,99,%E5%A4%A7%E6%95%B0%E6%8D%AE%E5%BC%80%E5%8F%91%E5%B7%A5%E7%A8%8B%E5%B8%88,2,1.html"
header = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.122 Safari/537.36"}
html = requests.get(url, headers=header)
# .text was decoded with the ISO-8859-1 fallback; re-encode to recover the original
# bytes, then decode them as GBK, which is what the page really uses
MyHtml = html.text.encode('iso-8859-1').decode('gbk')
print(MyHtml)
The result:
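For reference, an equivalent and arguably simpler fix is to tell requests the right encoding up front instead of round-tripping through ISO-8859-1. This is just a sketch; whether apparent_encoding comes back as exactly "GBK" here is an assumption (hardcoding html.encoding = "gbk" would work as well):

import requests

url = "https://search.51job.com/list/150300,000000,0000,32,9,99,%E5%A4%A7%E6%95%B0%E6%8D%AE%E5%BC%80%E5%8F%91%E5%B7%A5%E7%A8%8B%E5%B8%88,2,1.html"
header = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.122 Safari/537.36"}
html = requests.get(url, headers=header)
# Set the encoding before reading .text; apparent_encoding guesses it from the bytes
html.encoding = html.apparent_encoding
MyHtml = html.text
print(MyHtml)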
3. Summary
This post dealt with garbled Chinese text that shows up when crawling web pages. If I run into other encoding problems, I'll keep writing them down in my little notebook; it's all part of the learning process! 😊
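As a takeaway, the two fixes in this post can be folded into one small helper. This is only a sketch; the function name fetch_text and the fallback rule are my own, not from the original post:

import requests

def fetch_text(url, headers=None):
    resp = requests.get(url, headers=headers)
    # When the Content-Type header carries no charset, requests falls back to
    # ISO-8859-1; in that case trust the encoding guessed from the body instead
    if resp.encoding is None or resp.encoding.lower() == "iso-8859-1":
        resp.encoding = resp.apparent_encoding
    return resp.text

# Usage example, reusing one of the URLs above
print(fetch_text("https://www.iqiyi.com/")[:200])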