A simple example: writing data fetched by a crawler to files.
Task: a urls.txt file contains a list of URLs. For each URL, fetch the site's title; if the crawl succeeds, write the URL and title to accuracy_data.txt; if it fails, write the URL to error_data.txt.
urls.txt:
http://www.baidu.com
https://www.baidu.com
http://www.baidus.com
https://fanyi.baidu.com/
https://www.zhihu.com/
https://www.aixuexi.com/
https://www.aixuexiaaa.com/
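Both examples below extract the page title with the same regular expression. A minimal sketch of just that step (the helper name and sample HTML are made up for illustration; the examples index with [0], which raises IndexError on pages without a title, whereas this variant returns None):

```python
import re

def extract_title(html):
    # Non-greedy match on the first <title>...</title> pair;
    # returns None when the page has no title instead of raising IndexError
    matches = re.findall(r'<title>(.*?)</title>', html)
    return matches[0] if matches else None
```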
Sample code 1:
import requests
import re

# Fetch a page and extract its <title>
def crawl_data(url):
    try:
        response = requests.get(url)
        data = response.content.decode()
        title = re.findall(r'<title>(.*?)</title>', data)[0]
    except Exception as e:
        return False, url, e.args
    return True, url, title

with open('urls.txt', 'r', encoding='utf-8') as r_datas, \
        open('accuracy_data.txt', 'a', encoding='utf-8') as accuracy_data, \
        open('error_data.txt', 'a', encoding='utf-8') as error_data:
    for url in r_datas.readlines():
        status, url, title = crawl_data(url.replace('\n', ''))
        print(status, url, title)
        if status:
            accuracy_data.write(url + '\t' + title + '\n')
        else:
            error_data.write(url + '\t' + str(title) + '\n')
Run result:
Sample code 2 (the only difference: the error write is moved inside crawl_data by passing in the error-file handle):
import requests
import re

# Fetch a page and extract its <title>; write failures to the error file directly
def crawl_data(url, error_data):
    try:
        response = requests.get(url)
        data = response.content.decode()
        title = re.findall(r'<title>(.*?)</title>', data)[0]
    except Exception as e:
        error_data.write(url + '\t' + str(e.args) + '\n')
        return False, url, e.args
    return True, url, title

with open('urls.txt', 'r', encoding='utf-8') as r_datas, \
        open('accuracy_data.txt', 'a', encoding='utf-8') as accuracy_data, \
        open('error_data.txt', 'a', encoding='utf-8') as error_data:
    for url in r_datas.readlines():
        status, url, title = crawl_data(url.replace('\n', ''), error_data)
        print(status, url, title)
        if status:
            accuracy_data.write(url + '\t' + title + '\n')
Run result:
Same as sample code 1!
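One caveat with both examples: requests.get has no timeout, so a single hung URL stalls the whole loop, and content.decode() assumes UTF-8. A hedged variant of crawl_data addressing both points (the 5-second timeout and the raise_for_status call are my additions, not part of the original examples):

```python
import re
import requests

def crawl_data(url):
    """Same contract as the examples' crawl_data, but with a timeout so a
    hung server ends up in error_data.txt instead of blocking the loop."""
    try:
        # timeout value is an assumption; tune it for your URLs
        response = requests.get(url, timeout=5)
        response.raise_for_status()  # treat HTTP 4xx/5xx as failures too
        # response.text decodes with the encoding requests detects,
        # rather than assuming UTF-8 like content.decode()
        title = re.findall(r'<title>(.*?)</title>', response.text)[0]
    except Exception as e:
        return False, url, e.args
    return True, url, title
```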