URL:
http://product.auto.163.com/#DQ2001
Analysis
Inspecting the page shows that clicking a brand in the left-hand panel changes the data shown on the right, so we first need to collect the links for every brand in the left panel, and then crawl each brand's car-series images one by one.
Looking at the source of the left-hand area: each link differs only in its last segment, and the number in that segment matches the id attribute of the parent div.
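To make that rule concrete, here is a minimal sketch of the link construction (the `brand_url` helper is my own name, not something from the page):

```python
BASE = "http://product.auto.163.com/new_daquan/brand/{}.html"

def brand_url(div_id: str) -> str:
    # the numeric id taken from the parent div becomes the last path segment
    return BASE.format(div_id)

print(brand_url("1685"))
# http://product.auto.163.com/new_daquan/brand/1685.html
```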
Implementation
1. Fetch all the links (a standalone script, independent of Scrapy)
import requests
from bs4 import BeautifulSoup

request = requests.get("http://product.auto.163.com/")
request.encoding = "GBK"  # the page is GBK-encoded; this makes request.text decode correctly
# choose a parser
soup = BeautifulSoup(request.text, 'html.parser')
lists = soup.select(".brand_cont .brand_name")
l = []
for brand in lists:
    data = "http://product.auto.163.com/new_daquan/brand/" + brand["id"] + ".html"
    l.append(data)
print(l)
# print the links one per line to check the result
for x in l:
    print(x)
Results:
2. Write the Scrapy code
1) Item is the container that holds the scraped data: items.py
import scrapy

class CarseriesItem(scrapy.Item):
    name = scrapy.Field()         # car-series name
    image_urls = scrapy.Field()   # car-series image URLs
    brand = scrapy.Field()        # car brand
2) settings.py
Enable the item pipeline, turn off ROBOTSTXT_OBEY, and set the request headers.
ROBOTSTXT_OBEY = False

DEFAULT_REQUEST_HEADERS = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
    'Accept-Encoding': 'gzip, deflate',
    'Accept-Language': 'zh-CN,zh;q=0.9',
    'Host': 'product.auto.163.com',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.132 Safari/537.36'
}

# pipeline that processes the results
ITEM_PIPELINES = {
    'carseries.pipelines.CarseriesPipeline': 200,
}

# directory where the images are saved
IMAGES_STORE = "C:/Users/Lance/Desktop/carSeries/"
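As an aside: Scrapy ships a built-in image pipeline that also reads the `image_urls` field and honors `IMAGES_STORE`, so it could replace the custom pipeline below if you don't need per-brand folders (it requires Pillow to be installed); a settings sketch:

```python
# alternative: Scrapy's built-in image pipeline instead of the custom one
ITEM_PIPELINES = {
    'scrapy.pipelines.images.ImagesPipeline': 1,
}
IMAGES_STORE = "C:/Users/Lance/Desktop/carSeries/"
```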
3) The spider (parsing) class: itcast.py
I won't walk through the page analysis here; work through it yourself, and feel free to message me if you'd like to discuss it.
import scrapy

from carseries.items import CarseriesItem

class ItcastSpider(scrapy.Spider):
    # spider name; must be unique
    name = 'itcast'
    # allowed domains (domain names only, not full URLs)
    allowed_domains = ['product.auto.163.com']
    # fill this with the link list collected in step 1
    start_urls = ['http://product.auto.163.com/new_daquan/brand/1685.html']

    def parse(self, response):
        groups = response.css("div[class='item-cont cur'] .item")
        for t in groups:
            # brand name for this group
            brand = t.css(".brand-c-title::text").extract()[0]
            lis = t.css("li")
            for li in lis:
                if li.css("img"):
                    item = CarseriesItem()
                    name = li.css("img::attr(title)").extract()[0]
                    img = li.css("img::attr(src)").extract()[0]
                    item['image_urls'] = [img]  # must be a list, or the pipeline will fail during the crawl
                    item['brand'] = brand
                    item['name'] = name
                    yield item
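To see the shape of the items that parse() yields, here is a plain-Python sketch of the same nested-loop logic, using made-up sample data (BrandA, Series1, and the URLs are all invented):

```python
# made-up sample standing in for one "item-cont cur" brand group
sample_groups = [
    {"brand": "BrandA",
     "cars": [("Series1", "http://example.com/1.jpg"),
              ("Series2", "http://example.com/2.jpg")]},
]

def build_items(groups):
    # mirrors parse(): one dict per car series, with image_urls wrapped in a list
    items = []
    for g in groups:
        for name, img in g["cars"]:
            items.append({"brand": g["brand"], "name": name, "image_urls": [img]})
    return items
```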
4) Pipeline: process the items and download the images in pipelines.py
import os

import requests
from scrapy.utils.project import get_project_settings  # reads the project's settings.py

IMAGES_STORE = get_project_settings().get("IMAGES_STORE")

class CarseriesPipeline(object):
    def process_item(self, item, spider):
        for img in item['image_urls']:
            path = IMAGES_STORE + item['brand'] + "/"
            if not os.path.exists(path):  # create the directory if it does not exist
                os.makedirs(path)
            path = path + item['name'] + ".jpg"
            response = requests.get(img)
            with open(path, 'wb') as f:
                f.write(response.content)  # the with-block closes the file automatically
        return item
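One caveat with this pipeline: `brand` and `name` come straight from page content, so they may contain characters that are illegal in Windows file names. A small sanitizer you could run them through before building the path (`safe_filename` is my own helper, not part of the project):

```python
import re

def safe_filename(name: str) -> str:
    # replace characters that Windows forbids in file names with underscores
    return re.sub(r'[\\/:*?"<>|]', "_", name).strip()
```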
3. Run the spider
scrapy crawl itcast
4. Results (only part of the output is shown):