Lianjia crawler: scraping Lianjia listing pages (source code included)

Lianjia crawler
There is no real tutorial here: it is just a simple crawler plus a few regular expressions. If you run into problems, leave a message in the back end of the public account and I will sort it out.
WeChat public account: python网络小蜘蛛

# -*- coding: utf-8 -*-
# @ModuleName: Lianjia (链家)
# @Function:
# @Author : 苏穆冰白月晨
# @Time : 2021/4/7 10:19
import csv
import re
import time

import requests
from fake_useragent import UserAgent


# Note: this Cookie is a captured session value from the author's browser;
# it expires, so replace it with one from your own session.
headers = {
    'User-Agent': UserAgent().random,
    'Cookie': 'lianjia_uuid=f04bbf1f-5132-45bf-8ac3-a100531e4d4d; Hm_lvt_9152f8221cb6243a53c83b956842be8a=1617159068; UM_distinctid=17886311e4437d-0d1133832a12c7-5771031-144000-17886311e45450; _smt_uid=6063e39e.5f664d99; sensorsdata2015jssdkcross=%7B%22distinct_id%22%3A%2217886312828574-02d1fbeeecf7ff-5771031-1327104-1788631282970c%22%2C%22%24device_id%22%3A%2217886312828574-02d1fbeeecf7ff-5771031-1327104-1788631282970c%22%2C%22props%22%3A%7B%22%24latest_traffic_source_type%22%3A%22%E7%9B%B4%E6%8E%A5%E6%B5%81%E9%87%8F%22%2C%22%24latest_referrer%22%3A%22%22%2C%22%24latest_referrer_host%22%3A%22%22%2C%22%24latest_search_keyword%22%3A%22%E6%9C%AA%E5%8F%96%E5%88%B0%E5%80%BC_%E7%9B%B4%E6%8E%A5%E6%89%93%E5%BC%80%22%7D%7D; _ga=GA1.2.907807975.1617159080; login_ucid=2000000161545349; lianjia_token=2.001030290d700bb864019d003c207cb384; lianjia_token_secure=2.001030290d700bb864019d003c207cb384; security_ticket=C8amt+uFCkgOhn/I6vCdEbvSEjibLtSIKf2aFnAyYOl9ZZAUN2m21h6yrYu1S+/b8+lBNzBeSbLLsH3Zpl1dVXkPMHObtz7EkVLOp0mov1HDDbtw66+9zanNwb6m8Lae3HDRvsYAKPZbjSrYD5nPtAoITG2wI88fZySlyNAN5Ss=; Hm_lpvt_9152f8221cb6243a53c83b956842be8a=1617161206; select_city=610100; lianjia_ssid=d353d143-1a0a-477a-aaad-230bbb5549ed; _gid=GA1.2.860179285.1617761947'
}


def request_cookies():
    # Hit the home page once to pick up fresh session cookies,
    # then walk result pages 0-100.
    url = 'https://xa.fang.lianjia.com/'
    sess = requests.session()
    cookies = sess.get(url, headers=headers).cookies
    for a in range(0, 101):
        request_data(cookies, a)


def request_data(cookies, a):
    # Fetch one page of new-home listings; each page carries 10 listing cards.
    url = 'https://xa.fang.lianjia.com/loupan/pg' + str(a)
    response = requests.get(url, cookies=cookies, headers=headers).text
    for i in range(0, 10):
        response_re(response, i)
    time.sleep(1)  # brief pause between pages to avoid hammering the server


def response_re(response, i):
    # Extract the i-th listing's fields with regular expressions
    # ("guize" = pattern; the other names transliterate the Chinese field names).

    # Property type, e.g. 住宅 (residential)
    guize_leixing = """<span class="resblock-type" style="background:.*?">(.*?)</span>"""
    leixing = re.findall(guize_leixing, response)[i]

    # Sale status; the +2 offset skips the first two sale-status matches on the page
    guize_shoukuang = """<span class="sale-status" style="background: #.*?">(.*?)</span>"""
    shoukuang = re.findall(guize_shoukuang, response)[i + 2]

    # Listing name
    guize_zhuti = """<a href=".*?" class="name " target="_blank" .*?>(.*?)</a>"""
    zhuti = re.findall(guize_zhuti, response)[i]

    # Detail-page link (the captured href is relative, so prepend the site root)
    guize_zilianjie = """<a href="(.*?)" class="name " target="_blank" .*?>.*?</a>"""
    zilianjie = 'https://xa.fang.lianjia.com/' + re.findall(guize_zilianjie, response)[i]

    # Average price per square metre
    guize_junjia = """<span class="number">(.*?)</span>"""
    junjia = re.findall(guize_junjia, response)[i] + '元/㎡(均价)'

    # Floor-area range; these multi-line patterns must reproduce the page's
    # whitespace exactly, which makes them fragile
    guize_mianji = """<div class="resblock-area">
                        <span>(.*?)</span>
                    </div>"""
    mianji = re.findall(guize_mianji, response)[i]

    # Location block: district / area / street, captured in one pass with three groups
    guize_dizhi = """<div class="resblock-location">
                        <span>(.*?)</span>
                        <i class="split">/</i>
                        <span>(.*?)</span>
                        <i class="split">/</i>
                        <.*?>(.*?)</a>
                    </div>"""
    dizhi1, dizhi2, dizhi3 = re.findall(guize_dizhi, response)[i]
    dizhi = "西安" + "," + dizhi1 + "区" + "," + dizhi2 + "," + dizhi3

    data = {
        "主题": zhuti,         # listing name
        "销售状况": shoukuang,  # sale status
        "类型": leixing,       # property type
        "详情地址": zilianjie,  # detail-page URL
        "均价": junjia,        # average price
        "面积": mianji,        # floor area
        "地址": dizhi,         # full address
    }
    csv_writer.writerow([zhuti, shoukuang, leixing, zilianjie, junjia, mianji, dizhi])
    print(data)


if __name__ == '__main__':
    # newline='' avoids blank rows on Windows; use encoding='utf-8-sig'
    # instead if Excel shows garbled Chinese
    with open('lianjia.csv', 'w', encoding='utf-8', newline='') as f:
        csv_writer = csv.writer(f)
        # Header row: name, sale status, type, detail URL, average price, area, address
        csv_writer.writerow(["主题", "销售状况", "类型", "详情地址", "均价", "面积", "地址"])
        request_cookies()
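
A caveat on the extraction step: every field above comes from a separate re.findall() over the whole page, indexed by position, so one unexpected match anywhere on the page silently misaligns every field after it. Below is a minimal sketch of a sturdier variant: cut the page into per-listing blocks first, then match fields inside each block. The card-boundary pattern (an <li class="resblock-list"> element) is my assumption about the page markup, not something taken from the script above.

import re

def parse_cards(response):
    # Assumed markup: each listing card lives in <li class="resblock-list ...">...</li>
    cards = re.findall(r'<li class="resblock-list.*?</li>', response, re.S)
    for card in cards:
        # All fields come from the same card, so they cannot misalign the
        # way separate position-indexed findall() calls can
        name = re.search(r'class="name "[^>]*>(.*?)</a>', card)
        status = re.search(r'class="sale-status"[^>]*>(.*?)</span>', card)
        price = re.search(r'class="number">(.*?)</span>', card)
        yield {
            "主题": name.group(1) if name else "",
            "销售状况": status.group(1) if status else "",
            "均价": price.group(1) + '元/㎡(均价)' if price else "",
        }

Each yielded dict uses the same Chinese field names as above; the remaining fields would follow the same per-card pattern.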

Results
