Some notes on Scrapy

How to save a page's HTML source:

        with open("a.html", "w", encoding="utf-8") as f:
            f.write(response.body.decode())

Downloading an image from a page:

from urllib import request
request.urlretrieve(image_url, "xxx.jpg")  # image_url is the image's address
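urlretrieve needs a local filename as its second argument; one way to derive it from the image URL is to take the last component of the URL path (a small sketch; `filename_from_url` is a name of my own, not a library function):

```python
from os.path import basename
from urllib.parse import urlparse

def filename_from_url(url):
    # take the last path component of the URL as the local filename
    return basename(urlparse(url).path)

print(filename_from_url("https://example.com/img/cat.jpg"))  # cat.jpg
```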

Stripping whitespace from extracted content:

import re

def process_content(self, content):
        content = [re.sub(r'\s|\xa0', '', i) for i in content]  # remove newlines, tabs and non-breaking spaces
        content = [i for i in content if len(i) > 0]  # drop the strings that are now empty
        return content
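The cleaning steps above can be checked on a small sample; this is a standalone version of the same function (without `self`):

```python
import re

def process_content(content):
    # remove all whitespace (including non-breaking spaces), then drop empty strings
    content = [re.sub(r'\s|\xa0', '', i) for i in content]
    return [i for i in content if len(i) > 0]

print(process_content(["  hello \n", "\xa0", " wor ld "]))  # ['hello', 'world']
```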

If a scraped URL is incomplete (relative), join it against the response URL:

import urllib.parse
if item["href"] is not None:
    item["href"] = urllib.parse.urljoin(response.url, item["href"])
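urljoin resolves both absolute-path and relative hrefs correctly; a quick standalone illustration (the URLs are made up):

```python
from urllib.parse import urljoin

base = "https://example.com/list/page1.html"
# an href starting with "/" replaces the whole path
print(urljoin(base, "/detail/42.html"))  # https://example.com/detail/42.html
# a bare filename is resolved relative to the current directory
print(urljoin(base, "page2.html"))       # https://example.com/list/page2.html
```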

Logging in can be done in any of the following ways:

  1. With cookies:
    def start_requests(self):
        cookies = "_s_tentry=-; Apache=1769401044437.5996.1588400454551; SINAGLOBAL=1769401044437.5996.1588400454551; ULV=1588400454765:1:1:1:1769401044437.5996.1588400454551:; login_sid_t=8111d554c0a2d6dcf9228ebc58708bdf; cross_origin_proto=SSL; Ugrow-G0=9ec894e3c5cc0435786b4ee8ec8a55cc; YF-V5-G0=7a7738669dbd9095bf06898e71d6256d; UOR=,,www.baidu.com; wb_view_log=1366*7681; SUB=_2A25zutvVDeRhGeRK7lIS8CfJwj6IHXVQzkodrDV8PUNbmtAKLVjNkW9NU2bIGGUcalCxA0p7h9RIP8zDZ5jkAOnZ; SUBP=0033WrSXqPxfM725Ws9jqgMF55529P9D9W5U.g5UpcseWH5HDGKF_78r5JpX5KzhUgL.FozXSK50eh.f1Kz2dJLoI7yWwJyadJMXSBtt; SUHB=0P19BEYcGR28ZI; ALF=1621090026; SSOLoginState=1589554053; wvr=6; wb_view_log_2450309592=1366*7681; YF-Page-G0=e44a6a701dd9c412116754ca0e3c82c3|1589556569|1589556569; webim_unReadCount=%7B%22time%22%3A1589556679580%2C%22dm_pub_total%22%3A0%2C%22chat_group_client%22%3A0%2C%22chat_group_notice%22%3A0%2C%22allcountNum%22%3A0%2C%22msgbox%22%3A0%7D"
        # turn the browser Cookie header string into a dict; split("; ") avoids
        # leading spaces in the keys, split("=", 1) keeps "=" inside values intact
        cookies = {i.split("=", 1)[0]: i.split("=", 1)[1] for i in cookies.split("; ")}
        yield scrapy.Request(
            self.start_urls[0],
            callback=self.parse,
            cookies=cookies
        )
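Parsing the Cookie header into a dict can be checked in isolation; note that splitting on `"; "` avoids leading spaces in the keys, and `split("=", 1)` preserves any `=` that appears inside a value:

```python
# a made-up Cookie header; "token" has an "=" inside its value on purpose
raw = "a=1; b=2; token=abc=="

cookies = {pair.split("=", 1)[0]: pair.split("=", 1)[1] for pair in raw.split("; ")}
print(cookies)  # {'a': '1', 'b': '2', 'token': 'abc=='}
```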
  1. POST login (here against GitHub's login form; the original snippet stopped at building the dict, so the FormRequest that actually sends it is completed below — GitHub's form posts to /session):
        authenticity_token = response.xpath('//div[@class="auth-form px-3"]/form/input[1]/@value').extract_first()
        ga_id = response.xpath('//div[@class="auth-form px-3"]/form/input[2]/@value').extract_first()
        commit = response.xpath('//input[@name="commit"]/@value').extract_first()
        post_data = dict(
            login='Azhong-github',
            password='WUzhong961028',
            authenticity_token=authenticity_token,
            ga_id=str(ga_id),
            commit=commit
        )
        yield scrapy.FormRequest(
            "https://github.com/session",  # the URL the login form posts to
            formdata=post_data,
            callback=self.after_login
        )
  1. Automatic login with Scrapy's FormRequest.from_response:
    def parse(self, response):
        yield scrapy.FormRequest.from_response(
            response,  # Scrapy locates the form in the response automatically
            formdata={"login": "Azhong-github", "password": "WUzhong961028"},
            callback=self.after_login
        )

GET requests: simulating form input (e.g. the Baidu search box).
Learn to spot the pattern in GET URLs; below, the "?" separates the path from the query string. (The key is "wd", Baidu's search parameter, matching the encoded result in the comment.)

        from urllib import parse

        kw = {"wd": "电子科技大学"}
        kw = parse.urlencode(kw, encoding="utf-8")  # wd=%E7%94%B5%E5%AD%90%E7%A7%91%E6%8A%80%E5%A4%A7%E5%AD%A6
        url = response.url + "?" + kw
        yield scrapy.Request(
            url,
            callback=self.parse_detail
        )
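urlencode does the percent-encoding of the query string for you; a standalone round trip (no Scrapy needed):

```python
from urllib import parse

query = parse.urlencode({"wd": "电子科技大学"}, encoding="utf-8")
print(query)  # wd=%E7%94%B5%E5%AD%90%E7%A7%91%E6%8A%80%E5%A4%A7%E5%AD%A6
# decoding it back confirms the round trip
assert parse.parse_qs(query)["wd"] == ["电子科技大学"]
```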