1. Requirements:
I am planning to build a system for analyzing and predicting financial events. Training the model requires a large corpus of news articles, so I wrote a crawler to scrape Reuters financial news.
2. Approach:
- Observations:
i. Inspecting the source of the Reuters financial-news listing page shows that each page displays ten articles, and paging forward increments the 'page' parameter in the URL by 1, so automatic paging can be implemented by adding 1 to page on each loop iteration.
ii. For each of its ten articles, a listing page shows only the headline, the lead, the publication time, and a photo; reading the full text requires clicking through to a second-level page, whose link is in the href attribute of an a tag.
iii. On the second-level page, the article body sits in p tags.
- Implementation:
i. Use the requests and BeautifulSoup libraries to fetch the HTML of the first-level page:
import requests
from bs4 import BeautifulSoup

headers = {
    'Accept': '*/*',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36',
    # send a Referer that matches the site being scraped
    'Referer': 'https://www.reuters.com/news/archive/businessnews'}
url = 'https://www.reuters.com/news/archive/businessnews?view=page&page={}&pageSize=10'.format(page)
# verify=False skips TLS certificate verification; drop it if certificates validate normally
r = requests.get(url, headers=headers, verify=False)
soup = BeautifulSoup(r.content, 'lxml')
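To connect this with observation i, here is a minimal sketch of the paging loop; the get_page helper and the ten-page range are illustrative assumptions (pages are assumed to be numbered from 1), not part of the original code:
# minimal paging-loop sketch; get_page and the ten-page range are illustrative assumptions
def get_page(page):
    url = 'https://www.reuters.com/news/archive/businessnews?view=page&page={}&pageSize=10'.format(page)
    r = requests.get(url, headers=headers, verify=False)
    return BeautifulSoup(r.content, 'lxml')

for page in range(1, 11):
    soup = get_page(page)
    # extract this page's ten article links here (step ii below)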
ii. Extract the a tags from the HTML to get the second-level page links. Because each article is linked twice, once from its text and once from its picture, the links are duplicated and must be deduplicated, leaving the list of ten article links:
# get the link to each news article from the listing page
def get_linklists(soup):
    # list of article links before deduplication
    link_raw = []
    # list of deduplicated links
    link_lists = []
    # walk all <a> tags
    for k in soup.find_all('a'):
        # read the href attribute (may be absent, i.e. None)
        link = k.get('href')
        # links containing '/article' point to articles
        if link and '/article' in link:
            link_raw.append(link)
    # each article has both a picture link and a text link, so the list holds
    # duplicates; remove them while preserving order ('not in' also catches
    # non-adjacent repeats and keeps the final link)
    for link in link_raw:
        if link not in link_lists:
            link_lists.append(link)
    return link_lists
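A quick usage sketch under the same assumptions (get_page is the hypothetical helper above); note that the hrefs are typically site-relative, so they need the https://www.reuters.com prefix before they can be fetched in step iii:
soup = get_page(1)
links = get_linklists(soup)
# expect ten relative paths such as '/article/...'
print(len(links), links[:2])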
iii. Fetch the HTML of each second-level page through its link, select the article body from the p tags, strip the irrelevant parts with a regular expression, and end up with a list of the ten article bodies:
# the network is not stable, so use retry() to re-run this function after a failure;
# the decorator is assumed to come from the 'retry' package (pip install retry)
from retry import retry

# get the content of each news article
@retry()
def get_newscontent(link_lists):
    # create a list to contain the news bodies of each page