A Crawler's Journey: Combining Neural Networks with Web Crawlers
Preface
I recently took on a small project: crawling all the articles on a certain website. An interesting quirk of this site is that when you visit it too frequently, it bans your IP (ha, no different from any ordinary site, right?). The absurd part is that after the ban, opening the page again prompts you for a captcha, and passing it lifts the ban. And so a neural-network image-recognition technique, OCR, found its way into the crawler.
一、What is OCR
OCR is short for Optical Character Recognition. In a nutshell, OCR takes text that exists only as an image (a scan or a screenshot, say) and, using character-recognition techniques, converts it into editable, machine-usable text.
What OCR can do:
1、It can automatically detect, split, recognize, and reconstruct all kinds of common printed tables, and delivers satisfying practical results in table understanding.
2、It can automatically analyze a document's layout, splitting columns and identifying headings, rules, images, tables, and other elements, determine the recognition order, and reproduce the result as a new document whose layout matches the scanned original.
3、It also supports automatic form entry: it can recognize the printed or typed Chinese characters, letters, and digits in a given form, as well as handwritten Chinese characters, letters, digits, and various handwritten symbols, and output them in the form's own format, which greatly speeds up form entry and saves a lot of manual labor. (Copied from Baidu Baike.)
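In practice you rarely implement OCR from scratch. The captcha-solving step later in this post uses the ddddocr library, which bundles a pretrained recognition model; a minimal sketch of its API (with "captcha.png" as a placeholder file name) looks like this:

import ddddocr

# DdddOcr ships with a pretrained model, so no training is needed
ocr = ddddocr.DdddOcr()

# Feed it the raw bytes of a captcha image and get back the recognized string
with open("captcha.png", "rb") as f:  # "captcha.png" is a placeholder file name
    print(ocr.classification(f.read()))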
二、The problem
1. Normal access
2. IP temporarily banned
The site opens normally and hosts plenty of articles of all kinds; once you access it too frequently, it forces you to enter a captcha.
三、Solution and source code
1. The function that lifts the IP ban
The idea: when the IP gets banned, use the Selenium framework to simulate the click that loads the captcha. The captcha image is embedded in the page as Base64, so it is saved temporarily to 1.png and recognized with a pretrained OCR model; the recognized text is then typed into the form, and a keyword in the resulting page source tells us whether the captcha was solved.
import base64

import ddddocr
import requests
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.edge.service import Service

def unlock(url):
    def base64_to_img(bstr, file_path):
        # Decode the Base64 payload and write it out as an image file
        imgdata = base64.b64decode(bstr)
        with open(file_path, 'wb') as file:
            file.write(imgdata)

    while True:
        # Absolute path to the Edge driver (Selenium 4 Service API)
        driver = webdriver.Edge(service=Service(
            r'C:\Program Files (x86)\Microsoft\Edge\Application\msedgedriver.exe'))
        driver.get(url)
        # Click the button that loads/refreshes the captcha image
        reload_btn = driver.find_element(By.XPATH, '/html/body/div[2]/form/input[1]')
        reload_btn.click()
        data = BeautifulSoup(driver.page_source, "lxml")
        # The captcha is an inline Base64 data URI; keep only the payload after the comma
        img = data.find('img')["src"].split(',')[1]
        base64_to_img(img, "1.png")
        # Recognize the captcha with ddddocr's pretrained model
        ocr = ddddocr.DdddOcr()
        with open("1.png", 'rb') as f:
            img_bytes = f.read()
        res = ocr.classification(img_bytes)
        print('Recognized captcha: ' + res)
        captcha_input = driver.find_element(By.XPATH, '//*[@id="code"]')  # captcha input box
        captcha_input.send_keys(res)
        driver.find_element(By.XPATH, '/html/body/div[2]/form/input[5]').click()  # submit button
        driver.quit()
        # Re-request the page; if the inline captcha image is gone, the ban is lifted
        headers = {
            'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.111 Safari/537.36'}
        response = requests.get(url, headers=headers).content
        if """src="data:image/jpg;base64""" not in str(response):
            print("Unlocked")
            break
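One remark on the design: the same captcha check (look for an inline Base64 image in the response, call unlock, fetch again) recurs in every crawler function below. A small helper could centralize it; the following is only a sketch of that refactoring (fetch_html is a name I'm introducing here, not part of the original script):

CAPTCHA_MARK = 'src="data:image/jpg;base64'
HEADERS = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.111 Safari/537.36'}

def fetch_html(url):
    # Fetch a page; if the captcha page comes back, lift the ban and retry once
    response = requests.get(url, headers=HEADERS).content
    if CAPTCHA_MARK in str(response):
        unlock(url)
        response = requests.get(url, headers=HEADERS).content
    return BeautifulSoup(response, "lxml")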
2. A run-of-the-mill crawler
The usual routine: collect each article's sub-links, analyze the page structure, clean up the page source, extract the article text, and save it to disk.
def get_dad_url():
    # Collect the category links from the site's home page
    url = 'https://doc.wendoc.com/'
    # Fake a browser User-Agent to get past the basic anti-crawling check
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.63 Safari/537.36'}
    # Request the page source
    response = requests.get(url, headers=headers).content
    # An inline Base64 image means we hit the captcha page: unlock, then fetch again
    if """src="data:image/jpg;base64""" in str(response):
        unlock(url)
        response = requests.get(url, headers=headers).content
    # Parse the HTML with lxml
    data = BeautifulSoup(response, "lxml")
    book_url = data.find('ul')
    url_items = book_url.find_all('a')
    result = []
    for i in url_items:
        if i['href'] != "#":
            result.append(i['href'])
    ans = []
    for i in result:
        if "html" in i:
            ans.append("https://doc.wendoc.com" + i)
    return ans
def get_content_url(url):
    # The listing pages are paginated as <stem>0.html, <stem>25.html, <stem>50.html, ...
    base_url = url[:-6]  # drop the trailing "0.html" to get the pagination stem
    urls = [url]
    for i in range(25, 10000, 25):
        urls.append(base_url + str(i) + ".html")
    # Fake a browser User-Agent to get past the basic anti-crawling check
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.63 Safari/537.36'}
    res = []
    limit = 5000  # stop once we have collected enough article links
    for url in urls:
        try:
            # Request the page source
            response = requests.get(url, headers=headers).content
            # Captcha page detected: unlock, then fetch again
            if """src="data:image/jpg;base64""" in str(response):
                unlock(url)
                response = requests.get(url, headers=headers).content
            # Parse the HTML with lxml
            data = BeautifulSoup(response, "lxml")
            book_url = data.find('ul', {'class': "list"})
            for i in book_url.find_all("li"):
                if "doc" in str(i):
                    res.append(i.find('a')['href'])
            if limit <= len(res):
                break
        except Exception:
            # Log the page that failed and move on
            print(url)
    return res
def get_content(url):
    # Fake a browser User-Agent to get past the basic anti-crawling check
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.111 Safari/537.36'}
    # Request the page source
    response = requests.get(url, headers=headers).content
    # Captcha page detected: unlock, then fetch again
    if """src="data:image/jpg;base64""" in str(response):
        unlock(url)
        response = requests.get(url, headers=headers).content
    # Parse the HTML with lxml
    data = BeautifulSoup(response, "lxml")
    try:
        # The pager link's title ends with the total page count; the <h1> holds the article title
        sum_page = data.find("a", {'class': "current"})["title"][-2]
        title = data.find("h1", {'class': "title"}).text
    except Exception:
        print(url)
        return  # page layout not as expected; skip this article
    content = ""
    for i in data.find_all("p"):
        if "页" in i.text:  # the pager paragraph contains "页" ("page"); stop there
            break
        if len(str(i.text)) > 0:
            content += str(i) + "\n"
    # Fetch the remaining pages of the article: <id>-2.html, <id>-3.html, ...
    for i in range(2, int(sum_page) + 1):
        signal_url = url[:-5] + "-" + str(i) + ".html"
        response = requests.get(signal_url, headers=headers).content
        if """src="data:image/jpg;base64""" in str(response):
            unlock(signal_url)
            response = requests.get(signal_url, headers=headers).content
        data = BeautifulSoup(response, "lxml")
        for j in data.find_all("p"):
            if "页" in j.text:  # stop at the pager paragraph
                break
            if len(str(j.text)) > 0:
                content += str(j) + "\n"
    # Save the article to disk (the output directory is the author's own)
    with open(r"D:\桌面\result" + "\\" + title + '.txt', 'a', encoding='utf-8') as f:
        f.write(title + '\n')
        f.write(content + '\n')
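The script's entry point isn't shown above; one plausible way to wire the three functions together is sketched below (assuming, as with the category links, that get_content_url returns site-relative hrefs; the one-second delay is my own addition for politeness):

import time

if __name__ == '__main__':
    for category_url in get_dad_url():                       # listing pages from the home page
        for article_href in get_content_url(category_url):   # article links on each listing page
            get_content("https://doc.wendoc.com" + article_href)
            time.sleep(1)                                    # pause between articles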