Scraping news data from the China section of the European Times (oushinet) with Scrapy

1. Creating the project

Command to create a Scrapy project: scrapy startproject <project name>
Example:
scrapy startproject myspider


Command to generate a spider inside the project: scrapy genspider <spider name> <allowed domain>
cd myspider
scrapy genspider itcast itcast.cn
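For reference, scrapy startproject produces the standard layout below (shown for the myspider example; the actual project used in this article is called chinanews and its spider is named news):

myspider/
    scrapy.cfg
    myspider/
        __init__.py
        items.py
        middlewares.py
        pipelines.py
        settings.py
        spiders/
            __init__.py
            itcast.py        # created by "scrapy genspider itcast itcast.cn"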

2. Writing the spider

import time
import scrapy
from ..items import ChinanewsItem

# Page counter for the list API; incremented after each list page is parsed.
s = 1

class NewsSpider(scrapy.Spider):
    name = 'news'
    allowed_domains = []  # left empty so requests to the detail-page domain are not filtered out
    start_urls = ['https://cms.offshoremedia.net/front/list/latest?pageNum=1&pageSize=15&siteId=694841922577108992&channelId=780811183157682176']

    def parse(self, response):
        global s
        res = response.json()
        # Every entry in the list is one article; request its static detail page.
        for i in res["info"]["list"]:
            try:
                newurl = i["contentStaticPage"]
                yield scrapy.Request(url=newurl, callback=self.datas,
                                     cb_kwargs={'newurl': newurl}, dont_filter=True)
            except Exception as e:
                print(f"Failed to process URL {i.get('contentStaticPage')}: {e}")
        # Advance the page counter and request the next list page (up to page 1000).
        s = s + 1
        if s < 1000:
            print(s)  # progress indicator
            url = f'https://cms.offshoremedia.net/front/list/latest?pageNum={s}&pageSize=15&siteId=694841922577108992&channelId=780811183157682176'
            yield scrapy.Request(url=url, callback=self.parse)
    def datas(self, response, newurl):
        print(response)  # progress indicator for each detail page fetched
        item = ChinanewsItem()
        # The article id and the category segment are both encoded in the static page URL.
        id = newurl.split('/')[-1].split('.')[0]
        clas = newurl.split('/')[5]
        # The title element's id attribute equals the article id.
        title = response.xpath(f'//*[@id="{id}"]/text()')[0]
        title = title.get()
        # Publish time is a millisecond timestamp; convert it to YYYY-MM-DD.
        timee = response.xpath('/html/body/div[1]/div[2]/div/div[1]/div[1]/div[1]/div[3]/span[1]/i/text()')[0]
        now = int(timee.get())
        timeArray = time.localtime(now / 1000)
        otherStyleTime = time.strftime("%Y-%m-%d", timeArray)
        Released = "发布时间:" + otherStyleTime
        # All images inside the article container.
        imgurl = response.xpath('/html/body/div[1]/div[2]/div/div[1]/div[1]/div[1]//img/@src')
        imgurl = imgurl.getall()
        if imgurl == []:
            imgurl = "无图片"
        Imageannotations = response.xpath('/html/body/div[1]/div[2]/div/div[1]/div[1]/div[1]/div[4]/div/p/b/text()')  # the <b> tags carry the image-source captions
        Imageannotations = Imageannotations.getall()
        # Keep only the <b> texts that really are image-source captions.
        Imageannotationstrue = []
        for i in Imageannotations:
            if "图片来源" in i:
                Imageannotationstrue.append(i)
        if Imageannotationstrue == []:
            Imageannotationstrue = "无图片注释"

        # Paragraphs styled with text-indent:2em hold the article body.
        texts = response.xpath(
            '/html/body/div[1]/div[2]/div/div[1]/div[1]/div[1]/div[4]/div/p[@style="text-indent:2em;"]//text()')
        texts = texts.getall()
        text = [t for t in texts if t.strip()]  # drop whitespace-only fragments

        # The first indented paragraph is treated as the summary; the rest form the body.
        if len(text) > 1:
            summary = text[0]
            del text[0]
            body = ""
            for i in text:
                body = body + '\n' + i
        else:
            summary = []
            body = []
        if body != []:
            item["list"] = [id, clas, title, otherStyleTime, Released, str(imgurl), str(Imageannotationstrue), summary,
                            body, newurl]
            yield item
        else:
            # Fallback layout: the body sits in <span> tags inside the content <div>
            # rather than in indented <p> paragraphs, so re-extract with other XPaths.
            id = newurl.split('/')[-1].split('.')[0]
            clas = newurl.split('/')[5]
            title = response.xpath(f'//*[@id="{id}"]/text()')[0]
            title = title.get()
            timee = response.xpath('/html/body/div[1]/div[2]/div/div[1]/div[1]/div[1]/div[3]/span[1]/i/text()')[0]
            now = int(timee.get())
            timeArray = time.localtime(now / 1000)
            otherStyleTime = time.strftime("%Y-%m-%d", timeArray)
            Released = "发布时间:" + otherStyleTime
            imgurl = response.xpath('/html/body/div[1]/div[2]/div/div[1]/div[1]/div[1]//img/@src')
            imgurl = imgurl.getall()
            if imgurl == []:
                imgurl = "无图片"
            Imageannotations = response.xpath(
                '/html/body/div[1]/div[2]/div/div[1]/div[1]/div[1]/div[4]/div/p/b/text()')  # the <b> tags carry the image-source captions
            Imageannotations = Imageannotations.getall()
            Imageannotationstrue = []
            for i in Imageannotations:
                if "图片来源" in i:
                    Imageannotationstrue.append(i)
            if Imageannotationstrue == []:
                Imageannotationstrue = "无图片注释"
            text = response.xpath('/html/body/div[1]/div[2]/div/div[1]/div[1]/div[1]/div[4]/div/p/span/text()')
            text = text.getall()
            try:
                summary = response.xpath('/html/body/div[1]/div[2]/div/div[1]/div[1]/div[1]/div[4]/p/text()')[0]
                summary = summary.get()
            except Exception:
                # No standalone summary paragraph in this layout.
                summary = []
            item["list"] = [id, clas, title, otherStyleTime, Released, str(imgurl), str(Imageannotationstrue), summary,
                            str(text), newurl]
            yield item

The start URL is 'https://cms.offshoremedia.net/front/list/latest?pageNum=1&pageSize=15&siteId=694841922577108992&channelId=780811183157682176'.
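Judging from the parse() code, the list endpoint returns JSON roughly shaped as below; only the keys the spider actually reads are shown, and the URL value is an illustrative placeholder rather than real data:

{
    "info": {
        "list": [
            { "contentStaticPage": "https://www.oushinet.com/.../<article-id>.html" },
            ...
        ]
    }
}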
1. The response of the start URL is one page of the list API; each entry carries the detail-page information of one news article, and the whole page is handed to parse().
2. parse() extracts the detail-page URL of every article, issues a request for it and hands the response to datas():

yield scrapy.Request(url=newurl,callback=self.datas,cb_kwargs={'newurl': newurl},dont_filter=True)

3. datas() extracts the fields with XPath, fills the item's list field and passes the item to the pipeline (a minimal items.py sketch follows this list).

4. s = s + 1 advances the page counter and builds the next list-page URL, and yield scrapy.Request(url=url, callback=self.parse) feeds it back into parse(), so the spider pages through the list automatically (up to page 1000).
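The spider stores all extracted fields in a single list field on ChinanewsItem. The original items.py is not shown in the article; a minimal sketch that matches how the spider and the pipeline use the item (the field name list is taken from the code above) would be:

import scrapy

class ChinanewsItem(scrapy.Item):
    # One flat list holding: id, clas, title, otherStyleTime, Released,
    # imgurl, Imageannotations, summary, body, url
    list = scrapy.Field()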

3. Storing items in the pipeline

A traditional synchronous database call would block the Scrapy crawl while it waits for the database to finish. Twisted's adbapi lets Scrapy talk to the database asynchronously: while a query is still running, the crawler can keep fetching pages and parsing data. This asynchronous handling noticeably improves throughput when a large number of items has to be written.
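For comparison, a fully synchronous pipeline would look like the minimal sketch below (the class name ChinanewsPipelineSync is made up for illustration; connection parameters and table columns are taken from the code in this article). Every insert blocks the crawl until MySQL answers, which is exactly what the adbapi version that follows avoids:

import pymysql

class ChinanewsPipelineSync:
    """Blocking variant, shown only for contrast with the adbapi pipeline below."""

    def open_spider(self, spider):
        self.conn = pymysql.connect(host='127.0.0.1', user='root', password='root',
                                    port=3306, database='news')

    def process_item(self, item, spider):
        with self.conn.cursor() as cur:
            cur.execute(
                "INSERT INTO untitled (id, clas, title, otherStyleTime, Released, imgurl,"
                " Imageannotations, summary, body, url)"
                " VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s)",
                tuple(item["list"]))
        self.conn.commit()  # the crawl waits here on every item
        return item

    def close_spider(self, spider):
        self.conn.close()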

import logging
from twisted.enterprise import adbapi
import pymysql

class ChinanewsPipeline2:

    def __init__(self):
        # adbapi keeps a pool of pymysql connections and runs queries in worker
        # threads, so inserts do not block the Twisted reactor.
        self.dbpool = adbapi.ConnectionPool(
            'pymysql',
            host='127.0.0.1',
            user='root',
            password='root',
            port=3306,
            database='news',
        )

    def process_item(self, item, spider):
        # Schedule the insert asynchronously and hand the item on immediately.
        self.dbpool.runInteraction(self._do_insert, item["list"])
        return item

    def _do_insert(self, txn, list):
        # txn is a transaction cursor supplied by runInteraction.
        sql = """INSERT INTO untitled (id, clas, title, otherStyleTime, Released, imgurl, Imageannotations, summary, body, url)
                 VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s)"""
        try:
            txn.execute(sql, tuple(list))
        except pymysql.err.IntegrityError:
            # Duplicate primary key: log and skip instead of failing the interaction.
            print(f"Duplicate entry for id {list[0]}, skipping...")

4. The settings file

# Scrapy settings for chinanews project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://docs.scrapy.org/en/latest/topics/settings.html
#     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'chinanews'
LOG_LEVEL = 'ERROR'
SPIDER_MODULES = ['chinanews.spiders']
NEWSPIDER_MODULE = 'chinanews.spiders'


# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'chinanews (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
CONCURRENT_REQUESTS = 100

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
CONCURRENT_REQUESTS_PER_DOMAIN = 150
CONCURRENT_REQUESTS_PER_IP = 150

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
DEFAULT_REQUEST_HEADERS = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/126.0.0.0 Safari/537.36 Edg/126.0.0.0',
    'Origin': 'https://www.oushinet.com',
    'Referer': 'https://www.oushinet.com/',
    'Content-Type': 'application/json;charset=UTF-8',
}

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'chinanews.middlewares.ChinanewsSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'chinanews.middlewares.ChinanewsDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    #'chinanews.pipelines.ChinanewsPipeline': 300,
    'chinanews.pipelines.ChinanewsPipeline2': 200,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
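
With the spider, pipeline and settings in place, start the crawl from the project root (the spider's name attribute is news):

scrapy crawl news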

