After writing Scrapy for a while, my hand-written crawling skills have gotten rusty, so today I'm revisiting how to scrape a static page by hand, as notes for later. The target is the Autohome (汽车之家) news page.
Step 1: import requests and bs4
import requests
from bs4 import BeautifulSoup
Step 2: fetch the page. We send a GET request. encoding is the codec used to decode the response body into text, and apparent_encoding is the codec detected from the page bytes themselves; assigning the latter to the former avoids garbled text when the server's declared encoding is wrong or missing.
response = requests.get(url='https://www.autohome.com.cn/news/')
response.encoding = response.apparent_encoding  # decode response.text with the detected page encoding
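Why bother with apparent_encoding at all? If a response is decoded with the wrong codec, you get mojibake rather than an error. A minimal offline sketch (the GBK-encoded sample string is just an illustration, not data from the site):

```python
# Simulate a page body encoded in GBK, as many older Chinese sites are.
raw = '汽车之家'.encode('gbk')

# Decoding with the right codec recovers the text.
print(raw.decode('gbk'))      # 汽车之家

# Decoding with the wrong codec silently yields mojibake.
print(raw.decode('latin-1'))
```

requests guesses apparent_encoding from the raw bytes, which is why assigning it back to response.encoding before reading response.text sidesteps this problem.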
Step 3: parse the page. Here we use BeautifulSoup with the lxml parser. First build a soup object, then use find to locate elements. The code is as follows:
soup = BeautifulSoup(response.text, 'lxml')
div = soup.find(name='div', id='auto-channel-lazyload-article')
li_list = div.find_all(name='li')
for li in li_list:
    h3 = li.find(name='h3')
    if not h3:        # skip li tags that are not article entries
        continue
    # print(h3.text)  # article title
    p = li.find(name='p')
    a = li.find('a')
    print(a.get('href'))  # article link
    print(p.text)         # article summary
    print('-' * 30)
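To check the parsing logic without hitting the live site (whose markup may well have changed since this was written), the same find / find_all calls can be run against a small inline HTML snippet shaped like the news list. The sample markup below is hypothetical, and html.parser is used so the example runs without lxml installed:

```python
from bs4 import BeautifulSoup

# A minimal stand-in for the Autohome news-list markup (hypothetical sample).
html = '''
<div id="auto-channel-lazyload-article">
  <li><h3>Title A</h3><p>Summary A</p><a href="//www.autohome.com.cn/news/1.html">x</a></li>
  <li>an ad block with no h3</li>
  <li><h3>Title B</h3><p>Summary B</p><a href="//www.autohome.com.cn/news/2.html">x</a></li>
</div>
'''

soup = BeautifulSoup(html, 'html.parser')
div = soup.find(name='div', id='auto-channel-lazyload-article')

items = []
for li in div.find_all(name='li'):
    h3 = li.find(name='h3')
    if not h3:  # the same guard as above: skip li tags without a title
        continue
    items.append((h3.text, li.find('a').get('href')))

print(items)
# → [('Title A', '//www.autohome.com.cn/news/1.html'), ('Title B', '//www.autohome.com.cn/news/2.html')]
```

The guard on h3 matters: the real list interleaves ad blocks with article entries, and only the latter carry an h3 title.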