Python Web Scraping in Practice

Example 1: Scrape the Baidu search results page for a given keyword (a simple web page collector)

import requests

# step 1: specify the URL and request headers (the User-Agent makes the request look like a normal browser)
headers = {
    'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/77.0.3865.90 Safari/537.36'
}
url = 'https://www.baidu.com/s?wd='
wd = '王锐'
# step 2: send the request
# requests.get returns a Response object
response = requests.get(url=url+wd, headers=headers)
print(response.status_code)
# step 3: get the response data
page_text = response.text  # the response body as text (requests guesses the encoding)
print(response.content.decode('utf-8'))  # decode the raw bytes explicitly as UTF-8
# step 4: persist the page to disk
with open('./baidu.html','w',encoding='utf-8') as fp:
    fp.write(response.content.decode('utf-8'))

print('Scraping finished!')
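
In the example above the keyword is concatenated straight onto the query string. As a variant, requests can also take the keyword through its params argument, which percent-encodes the value automatically. A minimal sketch, with an illustrative function name and default filename:

import requests

def collect_baidu_page(keyword, save_path=None):
    """Fetch the Baidu result page for `keyword` and save it as an HTML file."""
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 '
                      '(KHTML, like Gecko) Chrome/77.0.3865.90 Safari/537.36'
    }
    # requests builds the query string and percent-encodes the keyword for us
    response = requests.get('https://www.baidu.com/s',
                            params={'wd': keyword},
                            headers=headers)
    response.raise_for_status()  # fail loudly on HTTP errors
    save_path = save_path or './' + keyword + '.html'  # illustrative default filename
    with open(save_path, 'w', encoding='utf-8') as fp:
        fp.write(response.content.decode('utf-8'))
    return save_path

# Usage (keyword is illustrative):
# collect_baidu_page('python')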

Example 2: Scrape the football topic section of the Hupu community (https://bbs.hupu.com/topic/)

import requests
from lxml import etree

url = 'https://bbs.hupu.com/topic-2'
headers = {
    'User-Agent':'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36'
}

# Send the request and use the encoding requests detects from the page content
response = requests.get(url=url, headers=headers)
response.encoding = response.apparent_encoding

# Parse the HTML and pull out every post title and its link via XPath
html = etree.HTML(response.text)
titles = html.xpath('//ul[@class="for-list"]/li//div[@class="titlelink box"]/a/text()')
hrefs = html.xpath('//ul[@class="for-list"]/li//div[@class="titlelink box"]/a/@href')
print(len(titles))

# Pair each title with its link
data = []
for title, href in zip(titles, hrefs):
    data.append({
        'header': title,
        'links': href
    })

for item in data:
    print('Title: ' + item['header'] + '\t' + 'Link: ' + item['links'] + '\n')

print('Scraping finished!')
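
The loop above only prints the results. To keep them, the standard-library csv module can write the collected list of dictionaries to a file. A minimal sketch, with an illustrative output filename, reusing the data list built above:

import csv

# Write the collected posts to a CSV file (the filename is illustrative)
with open('./hupu_topics.csv', 'w', newline='', encoding='utf-8-sig') as f:
    writer = csv.DictWriter(f, fieldnames=['header', 'links'])
    writer.writeheader()    # header row: header, links
    writer.writerows(data)  # one row per post

The utf-8-sig encoding adds a BOM so spreadsheet programs such as Excel detect the UTF-8 encoding and display Chinese titles correctly.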