Scraping Qiushibaike titles and content with Python

This article is written for Python 3 and uses XPath to parse the page. Because Qiushibaike has anti-scraping measures, we need to send header information with the request. In my view the core of the task is the parsing step, and I recommend the blog post at https://www.cnblogs.com/gaochsh/p/6757475.html, where the author explains step by step how to use XPath to select the nodes we need. An XPath query returns elements, and we can run further XPath queries against each returned element. Since this is only a brief introduction to the method, we extract the content of a single page.
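To make the "query an element again" idea concrete, here is a minimal self-contained sketch; the sample HTML is invented purely for illustration:

from lxml import etree

# invented sample markup, mimicking the structure we scrape below
doc = etree.HTML('<div id="content-left">'
                 '<div><h2>author</h2><span>joke text</span></div>'
                 '</div>')
# an absolute query returns a list of Element objects ...
items = doc.xpath('//div[@id="content-left"]/div')
for item in items:
    # ... and each Element supports further, relative XPath queries
    print(item.xpath('h2/text()'))    # ['author']
    print(item.xpath('span/text()'))  # ['joke text']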



import requests
from lxml import etree

page = 1
url = 'http://www.qiushibaike.com/hot/page/' + str(page)
headers = {
    # session cookie copied from the browser; replace it with your own
    'Cookie': 'BIDUPSID=944A139885EC4A8CDAC4B9278AAA9E23; PSTM=1514534756; BAIDUID=9A3C4DD0DB98A4A48B7720E055B378E6:FG=1; BDUSS=XpsRERJeUJCaFVWQWZxTXZ2c3lRMTQ3RlVwa3h4UHV6TG14Y3RKd3QzM0M4SmhhQUFBQUFBJCQAAAAAAAAAAAEAAAAiyDo1aG9tZdDEu6jFrbfFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAMJjcVrCY3Fac; MCITY=-75%3A; H_PS_645EC=290aIE41J1QwHtZSUPAQjr16T%2Bz3Un20Ght9uT4Ww7CXkDceENpmFfbfob0ApxI0oZbX; BD_CK_SAM=1; PSINO=2; BDORZ=B490B5EBF6F3CD402E515D22BCDA1598; BD_HOME=1; H_PS_PSSID=25314_1443_24565_21097_17001_20928; BD_UPN=1a314353; sugstore=1; H_WISE_SIDS=121076_110316_108267_122159_102431_100098_120139_110772_120009_118888_118869_118854_118837_118805_122187_107314_121255_121533_121924_121215_117331_121862_117437_121666_121561_120943_121042_122169_122138_121617_120852_121465_121307_120262_116407_110085_122021; bd_traffictrace=031431; BDSVRTM=402; plus_lsv=90c10fea240be0ef; plus_cv=1::m:11f40515; Hm_lvt_12423ecbc0e2ca965d84259063d35238=1517623328,1517636910,1517639481; Hm_lpvt_12423ecbc0e2ca965d84259063d35238=1517639481; SE_LAUNCH=5%3A25293991_0%3A25293991_3%3A25293991',
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36',
}
html = requests.get(url=url, headers=headers).text
selector = etree.HTML(html)
# every joke on the page sits in its own div inside the content-left container
items = selector.xpath('//div[@id="content-left"]/div')
for item in items:
    # the author name is under div/a/h2; join the text nodes and strip newlines
    name = ''.join(item.xpath('div/a/h2/text()')).strip()
    # the joke body is under a/div/span
    content = ''.join(item.xpath('a/div/span/text()')).strip()
    print('name: ' + name)
    print('content: ' + content)


Python can also be used to scrape Baidu Baike; a hands-on guide offers detailed instructions [1]. A reasonable goal is to scrape, say, 1000 entries, print each entry's URL, and write each entry's information (URL, title, summary) to a file named output.htm. When scraping entries, it helps to organize the results as a dictionary, pairing each entry's title with its content. For example:

elem_dict = dict(zip(elem_name, elem_value))  # elem_name / elem_value are element lists from earlier XPath queries (not shown here)
dict_1 = {}
for key in elem_dict:
    print(key.text, elem_dict[key].text)
    dict_1.update({key.text: elem_dict[key].text})

This saves the scraped titles and contents as a dictionary, which makes later processing straightforward.

References:
[1] Python web-scraping practice notes: stock-scraping example. https://download.csdn.net/download/weixin_52057528/88258593
[2] Scraping Baidu Baike entries with a Python crawler. https://blog.csdn.net/DongChengRong/article/details/77924695
[3] Infobox scraping with Python's selenium library. https://blog.csdn.net/poorlytechnology/article/details/109574110