In today's data-driven world, being able to collect and process web data is a valuable skill. This article walks through a Python program that automatically fetches and processes financial news from NetEase Finance (money.163.com). It not only lets us pull the latest finance headlines quickly, but also provides raw material for later analysis and research.
Environment Setup
First, make sure the following libraries are installed in your Python environment:
```bash
pip install requests beautifulsoup4 lxml tqdm
```
Note that `concurrent.futures` is part of the standard library and does not need to be installed, and `lxml` is required as the BeautifulSoup parser. The code below also uses a helper module called `bag` for reading and writing JSON; it is not on PyPI, and a standard-library stand-in is sketched later in this article.
Core Code Walkthrough
We will go through the key parts of the implementation step by step.
1. Setting the Request Headers and Session
To mimic a real browser, we configure a session with appropriate request headers:
```python
import requests

session = requests.session()
session.headers['User-Agent'] = (
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 '
    '(KHTML, like Gecko) Chrome/113.0.0.0 Safari/537.36'
)
session.headers['Referer'] = 'https://money.163.com/'
session.headers['Accept-Language'] = 'zh-CN,zh;q=0.9'
```
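Before scraping in bulk, it can be worth a quick smoke test of the session against one of the feed endpoints used below. This is a minimal sketch; the expected `data_callback(` prefix is an assumption based on the `callback` query parameter, not something the feed guarantees:

```python
# Fetch one list endpoint and confirm we get a JSONP-style payload back
resp = session.get(
    'https://money.163.com/special/00259BVP/news_flow_index.js?callback=data_callback'
)
print(resp.status_code)   # expect 200
print(resp.text[:40])     # should start with something like "data_callback("
```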
2. The Main Function and Data-Fetching Logic
The main function orchestrates the whole workflow:
```python
import os
import re

import bag  # helper module for JSON I/O (see the standard-library stand-in below)
from bs4 import BeautifulSoup
from concurrent.futures import ThreadPoolExecutor
from tqdm import tqdm


def main():
    base_url = [
        'https://money.163.com/special/00259BVP/news_flow_index.js?callback=data_callback',
        'https://money.163.com/special/00259BVP/news_flow_biz.js?callback=data_callback',
        'https://money.163.com/special/00259BVP/news_flow_fund.js?callback=data_callback',
        'https://money.163.com/special/00259BVP/news_flow_house.js?callback=data_callback',
        'https://money.163.com/special/00259BVP/news_flow_licai.js?callback=data_callback'
    ]
    # Category names: stocks, business, funds, real estate, wealth management
    kind = ['股票', '商业', '基金', '房产', '理财']
    path = r'./财经(根数据).json'   # link index
    save_path = r'./财经.json'      # extracted articles

    # Load previously collected links, if any
    if os.path.isfile(path):
        source_ls = bag.Bag.read_json(path)
    else:
        source_ls = []

    # Collect links from every category feed
    urls = []
    for index, url in enumerate(base_url):
        urls += get_url(url, kind[index])

    # Keep only links we have not seen before, deduplicating on the article URL
    if len(source_ls) == 0:
        newly_added = urls
    else:
        seen = {i[1] for i in source_ls}
        newly_added = [link for link in urls if link[1] not in seen]

    if len(newly_added) == 0:
        print('No new data')
        return

    bag.Bag.save_json(newly_added + source_ls, path)

    if os.path.isfile(save_path):
        data_result = bag.Bag.read_json(save_path)
    else:
        data_result = []

    # Fetch the article bodies concurrently
    with ThreadPoolExecutor(max_workers=20) as t:
        tasks = [t.submit(get_data, url) for url in newly_added]
        end = []
        for task in tqdm(tasks, desc='NetEase Finance'):
            end.append(task.result())
    bag.Bag.save_json(end + data_result, save_path)
```
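The `bag.Bag.read_json` and `bag.Bag.save_json` calls come from a helper module that is not on PyPI. If you don't have it, a standard-library stand-in along these lines should work; it is a minimal sketch that mirrors only the two methods used above. Save it as `bag.py` next to the script and `import bag` will pick it up:

```python
import json


class Bag:
    """Drop-in stand-in for the `bag.Bag` JSON helpers used in this article."""

    @staticmethod
    def read_json(path):
        # Load and return the JSON file's contents (a list of records here)
        with open(path, 'r', encoding='utf-8') as f:
            return json.load(f)

    @staticmethod
    def save_json(data, path):
        # Write the records as human-readable UTF-8 JSON
        with open(path, 'w', encoding='utf-8') as f:
            json.dump(data, f, ensure_ascii=False, indent=2)
```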
3. Fetching the URLs and the Data
The `get_url` function collects article links and related metadata from a given feed endpoint:
```python
def get_url(url, kind):
    num = 1
    result = []
    while True:
        # Page 1 uses the bare endpoint; later pages append _02, _03, ..., _10, ...
        if num == 1:
            resp = session.get(url)
        elif num < 10:
            resp = session.get(
                url.replace('.js?callback=data_callback', '')
                + f'_0{num}.js?callback=data_callback')
        else:
            resp = session.get(
                url.replace('.js?callback=data_callback', '')
                + f'_{num}.js?callback=data_callback')
        if resp.status_code == 404:
            break  # no more pages in this feed
        num += 1

        # The feed is JSONP; pull the fields out with regular expressions
        title = re.findall(r'"title":"(.*?)"', resp.text, re.S)
        docurl = re.findall(r'"docurl":"(.*?)"', resp.text, re.S)
        label = re.findall(r'"label":"(.*?)"', resp.text, re.S)
        keyword = re.findall(r'"keywords":\[(.*?)]', resp.text, re.S)

        # Join each article's keyword names into one comma-separated string
        mid = []
        for k in keyword:
            names = re.findall(r'"keyname":"(.*?)"', str(k), re.S)
            mid.append(','.join(name.strip() for name in names))

        for i in range(len(title)):
            result.append([
                title[i],   # headline
                docurl[i],  # article URL
                label[i],   # source label
                kind,       # category passed in by the caller
                mid[i]      # keywords
            ])
    return result
```
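The regex approach works, but it can break silently if the feed format shifts. Since the endpoint returns JSONP, `data_callback([...])`, one alternative is to strip the wrapper and parse the payload properly. This is a sketch that assumes the wrapped payload is valid JSON; in practice such feeds sometimes need minor cleanup (e.g. removing trailing commas) before `json.loads` accepts them:

```python
import json
import re


def parse_jsonp(text):
    # Capture everything between the outermost data_callback( ... ) pair
    match = re.search(r'data_callback\((.*)\)\s*$', text.strip(), re.S)
    if not match:
        return None
    try:
        return json.loads(match.group(1))
    except json.JSONDecodeError:
        return None  # fall back to the regex extraction above
```

With a parsed list in hand, each record's fields become plain dictionary lookups (`item['title']`, `item['docurl']`) instead of separate regex passes that must stay in sync with one another.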
The `get_data` function extracts the article body from each collected link:
```python
def get_data(ls):
    # ls = [title, docurl, label, kind, keywords]
    resp = session.get(ls[1])
    resp.encoding = 'utf8'
    html = BeautifulSoup(resp.text, 'lxml')
    resp.close()

    # Collect every <p> inside the article body, then strip any remaining tags
    content = []
    p = re.compile(r'<p.*?>(.*?)</p>', re.S)
    contents = html.find_all('div', class_='post_body')
    for info in re.findall(p, str(contents)):
        content.append(re.sub('<.*?>', '', info))

    # Returned layout: [keywords, title, body text, category, URL]
    return [ls[-1], ls[0], '\n'.join(content), ls[-2], ls[1]]
```
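As a quick check, `get_data` can be run on a single record in the `[title, docurl, label, kind, keywords]` layout that `get_url` produces. The record below is a hypothetical placeholder, not a real article:

```python
# Hypothetical record for illustration only
record = ['Example headline', 'https://money.163.com/example.html',
          'label', '股票', 'keyword1,keyword2']
row = get_data(record)
print(row[1])        # headline
print(row[2][:200])  # first 200 characters of the extracted body
```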
Running the Program
Finally, invoke the main function from the script's entry point:
```python
if __name__ == '__main__':
    main()
```
Summary
In this tutorial we built an automated data collection and processing program in Python. It fetches financial news from the specified NetEase Finance feeds, deduplicates against what has already been collected, and saves the results to local JSON files, making it easy to gather and manage large amounts of financial information for later analysis and research.
I hope this article was helpful. If you have any questions or suggestions, feel free to leave a comment.