Getting (Copying) Text from a Web Page
Today, while searching for a translation of a classical Chinese passage from a history textbook, the page I found had right-clicking blocked: the text could not be selected, let alone copied. For sites like this, the following methods can be used to get at the text, using Chrome as the example browser.
1. Save the page as...
After saving the page locally, simply open the saved file with a text-editing program such as Word.
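If you would rather pull the text out of the saved copy in Python instead of opening it in Word, a minimal sketch with BeautifulSoup is shown below; the filename page.html is just a placeholder for whatever you saved the page as.

from bs4 import BeautifulSoup

# Open the locally saved copy of the page (placeholder filename)
with open("page.html", "r", encoding="utf-8") as f:
    soup = BeautifulSoup(f.read(), "lxml")

# get_text() drops all tags; the separator keeps paragraphs on separate lines
print(soup.get_text(separator="\n", strip=True))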
2. WeChat screenshot text recognition
After logging in to WeChat, press the Alt + A shortcut to take a screenshot, then click the text recognition (文字识别) button to extract the text.
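WeChat's OCR runs inside the client itself; if you would rather do the same screenshot-to-text step in Python, a rough substitute using pytesseract is sketched below. It assumes Tesseract and its chi_sim language data are installed, and screenshot.png is a placeholder filename; none of this is part of the WeChat method itself.

from PIL import Image
import pytesseract

# Run OCR on the saved screenshot (placeholder filename);
# lang="chi_sim" selects simplified Chinese recognition
text = pytesseract.image_to_string(Image.open("screenshot.png"), lang="chi_sim")
print(text)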
3. Install a browser extension
Install Toggle JavaScript 2.0 (extension description: "Enable or disable JavaScript without the hassle.") and use it to disable JavaScript on the page. Most right-click and selection blockers are implemented in JavaScript, so turning it off usually restores normal copying.
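If you prefer to stay in Python rather than install an extension, Selenium can load the page with JavaScript switched off through a Chrome preference. This is a sketch of that alternative, not of the extension itself; the URL is the same one used in method 6 below.

from selenium import webdriver
from selenium.webdriver.common.by import By

options = webdriver.ChromeOptions()
# Content-setting value 2 means "block": JavaScript is disabled for the session
options.add_experimental_option(
    "prefs", {"profile.managed_default_content_settings.javascript": 2}
)

driver = webdriver.Chrome(options=options)
driver.get("https://wbblishi.com/post/161.html")
# With scripts disabled, the right-click / selection blockers never run
print(driver.find_element(By.TAG_NAME, "body").text)
driver.quit()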
4. Open the page on a phone with an online editor
Send the page URL to WeChat's File Transfer Assistant, open the link, tap the three dots in the upper-right corner, choose "更多打开方式…" (more ways to open), and open the page with an online editor mini program.
5. Use the developer tools
When you are looking for the underlying requests for a crawler, some sites also block the right-click > Inspect route. In that case, open the Chrome menu (the three dots), then choose More tools > Developer tools (or press F12 / Ctrl + Shift + I). The Network panel there is also a convenient place to copy the request headers and cookies that a script like the one in method 6 needs.
6. Use a crawler to parse the page
The script below fetches the page directly with requests and extracts the target paragraphs with an XPath expression:
import requests
from bs4 import BeautifulSoup
from lxml import etree

# Request headers and cookies captured from a browser session
headers = {
    "accept": "image/avif,image/webp,image/apng,image/svg+xml,image/*,*/*;q=0.8",
    "accept-language": "zh-CN,zh;q=0.9",
    "cache-control": "no-cache",
    "dnt": "1",
    "pragma": "no-cache",
    "priority": "u=1, i",
    "sec-ch-ua": "\"Google Chrome\";v=\"105\", \"Not)A;Brand\";v=\"8\", \"Chromium\";v=\"105\"",
    "sec-ch-ua-mobile": "?0",
    "sec-ch-ua-platform": "\"Windows\"",
    "sec-fetch-dest": "image",
    "sec-fetch-mode": "no-cors",
    "sec-fetch-site": "same-origin",
    "sec-fetch-user": "?1",
    "upgrade-insecure-requests": "1",
    "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/125.0.0.0 Safari/537.36",
    "referer": "https://wbblishi.com/post/161.html",
    "Origin": "https://wbblishi.com",
    "x-requested-with": "XMLHttpRequest",
}
cookies = {
    "PHPSESSID": "lodm1klkvlh1ft2nreaq1olpde",
    "timezone": "8",
    "mochu_us_notice_alert": "1"
}
url = "https://wbblishi.com/post/161.html"

# Send the GET request
response = requests.get(url, headers=headers, cookies=cookies)

# Make sure the request succeeded
if response.status_code == 200:
    # Try to read the character encoding from the response headers
    if 'Content-Type' in response.headers:
        content_type = response.headers['Content-Type']
        if 'charset=' in content_type:
            encoding = content_type.split('charset=')[-1]
            response.encoding = encoding
        else:
            response.encoding = 'utf-8'
    # Get the HTML source of the page
    html_content = response.text
    # Parse the HTML with BeautifulSoup
    soup = BeautifulSoup(html_content, 'lxml')
    # Build an lxml etree for XPath queries
    parser = etree.HTMLParser()
    tree = etree.fromstring(str(soup), parser)
    # Use XPath to extract the target range of paragraphs
    elements = tree.xpath('//*[@id="post-161"]/div/div[1]/p[position() >= 3 and position() <= 124]/span')
    # Print the non-empty text content
    for element in elements:
        text = element.text
        if text:  # filter out None and empty strings
            print(text)
else:
    print(f"Failed to fetch the page. Status code: {response.status_code}")
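If you want to keep the result instead of just printing it, the loop can be followed by a couple of lines that collect the paragraphs and write them to a file. The filename translation.txt is arbitrary, and the snippet has to sit inside the same status_code == 200 branch, since it reuses the elements list from above.

# Collect the non-empty paragraph texts and save them (placeholder filename)
texts = [element.text for element in elements if element.text]
with open("translation.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(texts))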
Final result