Network Programming Comprehensive Lab: crawler code for scraping Baidu News (School of Information Management, Beijing Information Science and Technology University)

Implement a web crawler with the following requirements:

Crawl the result pages returned by searching Baidu News for the keyword “徐念沙”, saving the information of the 30 most recent news items;

Store the information in a database with the following fields: title, url, date, summary, image url, content. For news items that contain images, save the images to a local folder. (The assignment names SQL Server, but the implementation below actually uses MySQL via pymysql; see the schema sketch after this paragraph.)
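
The assignment never gives the table definition; below is a minimal sketch of a schema matching the six fields above, assuming the database aa, table b, and credentials used by the crawler code further down (the column types are guesses, not part of the assignment):

# Hypothetical schema for the target table; database name 'aa', table name 'b',
# and credentials are taken from the crawler code below; column types are assumptions.
import pymysql

conn = pymysql.connect(host='localhost', user='root', password='123456', charset='utf8mb4')
with conn.cursor() as cur:
    cur.execute("CREATE DATABASE IF NOT EXISTS aa DEFAULT CHARACTER SET utf8mb4")
    cur.execute("""
        CREATE TABLE IF NOT EXISTS aa.b (
            标题 VARCHAR(512),
            url VARCHAR(1024),
            日期 VARCHAR(64),
            摘要 TEXT,
            图片url TEXT,
            内容 LONGTEXT
        )""")
conn.commit()
conn.close()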

The code proceeds as follows:

1) Import the required libraries: requests, lxml, os, urllib3, pymysql.

2) Define the list of the three result-page URLs to crawl.

3) Set the request headers.

4) Define the lists that will hold the news information.

5) For each page URL: a. send an HTTP request to fetch the page; b. parse the HTML with lxml; c. extract the title, URL, date, and summary and append them to the corresponding lists; d. write the page source to a local file.

6) For each news URL: a. send an HTTP request to fetch the second-level page; b. parse its HTML with lxml; c. extract the article text and the image URLs and append them to the corresponding lists; d. download the images and save them locally.

7) Connect to the database.

8) For each news item: a. build the SQL statements for inserting its information; b. execute them to write the data to the database.

9) Commit the transaction and close the database connection.

import requests
from lxml import etree
import os
import urllib3
import pymysql

# -------------------------------------------URLs of the three result pages, used to collect 30 news items------------------------------------
url_list = [
    'https://www.baidu.com/s?ie=utf-8&medium=0&rtt=4&bsst=1&rsv_dl=news_t_sk&cl=2&wd=%E5%BE%90%E5%BF%B5%E6%B2%99&tn=news&rsv_bp=1&rsv_sug3=1&oq=&rsv_btype=t&f=8&rsv_sug4=1943',
    'https://www.baidu.com/s?ie=utf-8&medium=0&rtt=4&bsst=1&rsv_dl=news_b_pn&cl=2&wd=%E5%BE%90%E5%BF%B5%E6%B2%99&tn=news&rsv_bp=1&rsv_sug3=1&oq=&rsv_btype=t&f=8&rsv_sug4=1943&x_bfe_rqs=032000000000000000000000000000000000000000000008&x_bfe_tjscore=0.080000&tngroupname=organic_news&newVideo=12&goods_entry_switch=1&pn=10',
    'https://www.baidu.com/s?ie=utf-8&medium=0&rtt=4&bsst=1&rsv_dl=news_b_pn&cl=2&wd=%E5%BE%90%E5%BF%B5%E6%B2%99&tn=news&rsv_bp=1&rsv_sug3=1&oq=&rsv_btype=t&f=8&rsv_sug4=1943&x_bfe_rqs=032000000000000000000000000000000000000000000008&x_bfe_tjscore=0.080000&tngroupname=organic_news&newVideo=12&goods_entry_switch=1&pn=20'
    ]
# --------------------------------------------------------Request headers--------------------------------------------------
headers = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
    'Accept-Language': 'zh-CN,zh;q=0.9,en;q=0.8,en-GB;q=0.7,en-US;q=0.6',
    'Cookie': 'MCITY=-131:; BDUSS=UpUWENzdXFJcWJUQ3VBSmR5eDlkeTU0Yzl-cTY3Vm1lbDlNY2FPMGV0TXhZM0JqRVFBQUFBJCQAAAAAAAAAAAEAAAAxqJrNsKbO97XDwrcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADHWSGMx1khjT; BDUSS_BFESS=UpUWENzdXFJcWJUQ3VBSmR5eDlkeTU0Yzl-cTY3Vm1lbDlNY2FPMGV0TXhZM0JqRVFBQUFBJCQAAAAAAAAAAAEAAAAxqJrNsKbO97XDwrcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADHWSGMx1khjT; BIDUPSID=5EADF9B19B0334270FDE3C2877210223; PSTM=1666017974; ZFY=g:BDO611V0dVy3o2q33lvGY9:AvO1:B0yTkybgAgNCVuV0:C; __bid_n=183fa9640c38d91fb54207; newlogin=1; BAIDUID=4C7E7017E15BB384AFA60B72B372AF08:FG=1; BAIDU_WISE_UID=wapp_1667457007392_4; BAIDUID_BFESS=4C7E7017E15BB384AFA60B72B372AF08:FG=1; RT="z=1&dm=baidu.com&si=0d3f86a8-cea8-48f4-b69e-514071e7c23e&ss=lanpwf8p&sl=0&tt=0&bcn=https://fclog.baidu.com/log/weirwood?type=perf&ul=1mz&hd=1ox"; BD_UPN=12314753; BDRCVFR[C0p6oIjvx-c]=mbxnW11j9Dfmh7GuZR8mvqV; delPer=0; BD_CK_SAM=1; PSINO=2; H_PS_PSSID=36544_37551_37691_37767_34812_37777_37728_37801_36802_37533_37674_37785; ab_sr=1.0.1_MDliZGNkM2RjMGEwYjI2Y2YxYTI0NTM2MjEwMzRiMDYxYmIwZDFkOTA4NjkyYzVkNWFmOTAwMzBjODcwZDAxOWQzMDlhNGY5OGY3M2VlODVmOTZlNzQxZTZlMjBkNTY3ZGIzN2UyNjk4MjgzMGYyNTM4YjNmNDhlZTY3MjRjMWNmNDA3ODc3OTY5YzRjNWRhN2NmOWUyZDA0NGIzMjA4ZA==; BDSVRTM=247',
    'Referer': 'https://www.baidu.com/s?tn=news&rtt=4&bsst=1&cl=2&wd=%E5%BE%90%E5%BF%B5%E6%B2%99&medium=0&x_bfe_rqs=032000000000000000000000000000000000000000000008&x_bfe_tjscore=0.080000&tngroupname=organic_news&newVideo=12&goods_entry_switch=1&rsv_dl=news_b_pn&pn=20',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36 Edg/107.0.1418.52'
}
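
# Note: the Cookie above is tied to a single Baidu login session and will expire;
# when it is stale, Baidu usually serves a verification page instead of results,
# and every XPath below silently matches nothing. A small sanity check, as a
# sketch ('content_left' is the id of the results container queried below; its
# absence is a heuristic sign of a blocked response, not an official flag):
def looks_blocked(page_text):
    return 'content_left' not in page_text
# e.g. call looks_blocked(response.text) after each requests.get and stop early.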
title_list = []      # news titles
nurl_list = []       # news-page URLs
date_list = []       # publication dates
summary_list = []    # summaries
picturl_list = []    # one list of image URLs per article
text_list = []       # one list of text paragraphs per article
n = 1                # running counter used to name the downloaded images
# --------------Crawl the source of each result page-----------------------------------------------------------------
for url in url_list:
    response = requests.get(url=url, headers=headers)
    # infer the response encoding from the content itself
    response.encoding = response.apparent_encoding
    page_text = response.text    # text of the HTTP response
    tree = etree.HTML(page_text)   # parse the HTML into an element tree
    # append the page source to a local HTML file (mode 'a': all three pages end up in one file)
    with open('./xuniansha.html','a',encoding='utf-8') as fp:
        fp.write(page_text)

    # --------------------------------Extract the titles-------------------------------------------------
    title_div_list = tree.xpath('//div[@id="content_left"]/div[@class="result-op c-container xpath-log new-pmd"]')
    for title_div in title_div_list:
        title = title_div.xpath('./div/h3/a/@aria-label')[0]
        title_list.append(title)
    #print(title_list)

    # --------------------------------Extract the URLs-------------------------------------------------
    url_div_list = tree.xpath('//div[@id="content_left"]/div[@class="result-op c-container xpath-log new-pmd"]')
    for url_div in url_div_list:
        nurl = url_div.xpath('./div/h3/a/@href')[0]   # new name so the outer loop's url is not shadowed
        nurl_list.append(nurl)
    #print(len(nurl_list))

    # -------------------------------Extract the dates-------------------------------------------------------
    date_div_list = tree.xpath('//div[@id="content_left"]/div[@class="result-op c-container xpath-log new-pmd"]')
    for date_div in date_div_list:
        try:
            date = date_div.xpath('./div//span[@class="c-color-gray2 c-font-normal c-gap-right-xsmall"]/text()')[0]
            date_list.append(date)
        except IndexError:
            date_list.append('null')   # placeholder keeps the lists aligned when a result has no date span
    # print(date_list)

    # ----------------------------------Extract the summaries---------------------------------------------------------
    summary_div_list = tree.xpath('//div[@id="content_left"]/div[@class="result-op c-container xpath-log new-pmd"]')
    for summary_div in summary_div_list:
        try:
            summary = summary_div.xpath('./div//span[@class="c-font-normal c-color-text"]/@aria-label')[0]
            summary_list.append(summary)
        except IndexError:
            summary_list.append('null')   # keep the lists aligned when a result has no summary span
# print(summary_list)
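
# The four blocks above scan the same result containers four times with the same
# container XPath. An equivalent single-pass version of the loop body (same XPaths,
# same lists), shown commented out so it does not run in addition to the blocks above:
# for div in tree.xpath('//div[@id="content_left"]/div[@class="result-op c-container xpath-log new-pmd"]'):
#     title_list.append(div.xpath('./div/h3/a/@aria-label')[0])
#     nurl_list.append(div.xpath('./div/h3/a/@href')[0])
#     date = div.xpath('./div//span[@class="c-color-gray2 c-font-normal c-gap-right-xsmall"]/text()')
#     date_list.append(date[0] if date else 'null')
#     summary = div.xpath('./div//span[@class="c-font-normal c-color-text"]/@aria-label')
#     summary_list.append(summary[0] if summary else 'null')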


# ------------------------Crawl the second-level pages---------------------------------------------
for url_2 in nurl_list:
    urllib3.disable_warnings()   # silence the InsecureRequestWarning raised by verify=False below
    response = requests.get(url=url_2, headers=headers, verify=False)
    # set the response encoding manually
    response.encoding = response.apparent_encoding
    page_2_text = response.text
    tree_2 = etree.HTML(page_2_text)
    # append the second-level page source to a local file for inspection
    with open('./xu.html','a',encoding='utf-8') as fp:
        fp.write(page_2_text)
    # print('ok!')

    # ------------------Extract the article text-----------------------------------------------
    content_div_list = tree_2.xpath('//body//p/text()')   # all <p> text nodes on the page
    text_list.append(content_div_list)

    # -------------------Extract the image URLs---------------------------------------------
    img_url_list = tree_2.xpath('//img/@src')   # src of every <img> on the page
    picturl_list.append(img_url_list)
# print(picturl_list)
#print(text_list)
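
# The src values collected above are fetched verbatim by the download loop below,
# but on real pages many of them are relative, protocol-relative, or data: URIs,
# which that loop's except then silently skips. A sketch (commented out) that would
# replace the two image-URL lines inside the loop above, resolving each src against
# the page URL url_2 while it is still in scope:
# from urllib.parse import urljoin
# img_url_list = [urljoin(url_2, src)
#                 for src in tree_2.xpath('//img/@src')
#                 if not src.startswith('data:')]
# picturl_list.append(img_url_list)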
# --------------------------Download the images--------------------------------
if not os.path.exists('./p'):
    os.makedirs('./p')   # local folder for the downloaded images
urllib3.disable_warnings()
for img_urls in picturl_list:
    for img_url in img_urls:
        try:
            r = requests.get(img_url, headers=headers, verify=False).content
            photo = 'p/' + str(n) + '.jpg'
            with open(photo, 'wb') as fp:
                fp.write(r)
            n = n + 1
        except Exception:
            continue   # skip URLs that fail to download
        #print(photo, 'ok!')

# ---------------------------------------------Insert the data into the database----------------------------------------------
# connect to the local MySQL database (the assignment names SQL Server; this code uses MySQL via pymysql)
conn = pymysql.connect(
    host='localhost',
    user='root',
    password='123456',
    db='aa',
    charset='utf8mb4'
)
cursor = conn.cursor()
for i in range(len(nurl_list)):
    sql = "INSERT INTO b(标题,url,日期,摘要) VALUES (%s,%s,%s,%s)"
    sql_1 = "update b set 图片url=%s where url=%s"
    sql_2 = "update b set 内容=%s where url=%s"
    try:
        cursor.execute(sql,
                       (str(title_list[i]), str(nurl_list[i]), str(date_list[i]), str(summary_list[i])))
        cursor.execute(sql_1, (str(picturl_list[i]), str(nurl_list[i])))
        cursor.execute(sql_2, (str(text_list[i]), str(nurl_list[i])))
        #print("Inserted:", title_list[i], nurl_list[i], date_list[i], summary_list[i])
    except Exception as e:
        # report the error instead of swallowing it
        print('Insert failed for', nurl_list[i], ':', e)
        continue
conn.commit()
print('Database import finished!')
cursor.close()
conn.close()
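
The insert-then-update pattern above stores str(list) values in the 图片url and 内容 columns. A sketch of an alternative that would replace the loop above (before the cursor is closed): one parameterized INSERT covering all six columns, with each article's image URLs and paragraphs joined into plain strings first:

sql_all = "INSERT INTO b(标题,url,日期,摘要,图片url,内容) VALUES (%s,%s,%s,%s,%s,%s)"
for i in range(len(nurl_list)):
    cursor.execute(sql_all, (
        title_list[i],
        nurl_list[i],
        date_list[i],
        summary_list[i],
        ','.join(picturl_list[i]),   # comma-separated image URLs instead of str(list)
        '\n'.join(text_list[i]),     # paragraphs joined into one text block
    ))
conn.commit()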
