A First Taste of Python Web Scraping

This post shows how to scrape listing data from the short-term rental site Xiaozhu (小猪短租) using Python's BeautifulSoup and Selenium libraries, collecting key fields such as the listing title, address, price, host avatar, and host gender. By selecting and parsing page elements, it automates the crawl of the listing pages and applies some simple data processing.
from bs4 import BeautifulSoup
from selenium import webdriver
import requests
import time

headers = {
    # Identify the request as a regular browser so the site is less likely to block it
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/13.1.2 Safari/605.1.15'
}

def judgment_sex(class_name):
    # Map the CSS class of the host's avatar icon to a gender label
    if class_name == ['member_ico1']:
        return '女'  # female
    else:
        return '男'  # male
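# Usage: judgment_sex(['member_ico1']) returns '女'; any other class list, e.g.
# ['member_ico'], returns '男'. That 'member_ico1' marked female hosts is an
# assumption about xiaozhu.com's markup at the time this was written.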

def get_links(url):
    chrome_driver = r'C:\Program Files (x86)\Google\Chrome\Application\chromedriver.exe'  # raw string keeps the backslashes literal
    browser = webdriver.Chrome(executable_path=chrome_driver)  # Selenium 3 style; Selenium 4 passes a Service object instead
    browser.get(url)
    time.sleep(10)
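    # A fixed sleep is fragile; an explicit wait is sturdier. Sketch of an
    # alternative (assumes the listing <li> nodes appearing means the page has rendered):
    # from selenium.webdriver.support.ui import WebDriverWait
    # from selenium.webdriver.support import expected_conditions as EC
    # from selenium.webdriver.common.by import By
    # WebDriverWait(browser, 10).until(
    #     EC.presence_of_element_located((By.CSS_SELECTOR, '#page_list > ul > li')))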
    wb_data = browser.page_source  # the rendered page HTML as a string

    #print('===>'+wb_data)

    soup = BeautifulSoup(wb_data, 'lxml')
    links = soup.select('#page_list > ul > li > a')
    for link in links:
        href = link.get("href")
        get_info(href)
    browser.quit()  # close the browser once this page's listings are processed

def get_info(url):
    wb_data = requests.get(url, headers=headers)
    soup = BeautifulSoup(wb_data.text, 'lxml')
    titles = soup.select('div.pho_info > h4')
    addresses = soup.select('span.pr5')
    prices = soup.select('#pricePart > div.day_l > span')
    imgs = soup.select('#floatRightBox > div.js_box.clearfix > div.member_pic > a > img')  # select the <img> itself so .get("src") returns the avatar URL
    names = soup.select('#floatRightBox > div.js_box.clearfix > div.w_240 > h6 > a')
    sexs = soup.select('#floatRightBox > div.js_box.clearfix > div.member_pic > div')
    for title, address, price, img, name, sex in zip(titles, addresses, prices, imgs, names, sexs):
        data = {
            'title': title.get_text().strip(),
            'address': address.get_text().strip(),
            'price': price.get_text(),
            'img': img.get("src"),
            'name': name.get_text(),
            'sex': judgment_sex(sex.get("class"))
        }
        print(data)

if __name__ == '__main__':
    urls = ['https://bj.xiaozhu.com/search-duanzufang-p{}-0/'.format(number) for number in range(1, 14)]  # search result pages 1-13
    for single_url in urls:
        get_links(single_url)
        time.sleep(2)
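
The script above only prints each scraped record. As a minimal sketch of the "simple data processing" step, the helper below writes the collected records to a CSV file; save_to_csv and the results list are hypothetical additions, not part of the original script:

import csv

results = []  # hypothetical: get_info would append each data dict here instead of printing it

def save_to_csv(rows, path='xiaozhu.csv'):
    # Write the list of listing dicts produced by get_info to a CSV file
    fields = ['title', 'address', 'price', 'img', 'name', 'sex']
    with open(path, 'w', newline='', encoding='utf-8-sig') as f:  # utf-8-sig so Excel displays Chinese correctly
        writer = csv.DictWriter(f, fieldnames=fields)
        writer.writeheader()
        writer.writerows(rows)

After the main loop finishes, a single save_to_csv(results) call would persist everything in one pass.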

 
