7 Classic Python Web Scraper Examples (with Source Code)

Preface

In this post, we share seven small Python web-scraping examples to help you learn and better understand the basics of Python crawlers. Each example comes with a brief description and its source code:

 

1. Scraping the Douban Movie Top 250

This example uses the BeautifulSoup library to scrape the title, rating, and number of reviews for each film in the Douban Movie Top 250, and saves the results to a CSV file.

import requests
from bs4 import BeautifulSoup
import csv

# Request URL
url = 'https://movie.douban.com/top250'
# Request headers
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36'
}

# Parse one page of results
def parse_html(html):
    soup = BeautifulSoup(html, 'lxml')
    movie_list = soup.find('ol', class_='grid_view').find_all('li')
    for movie in movie_list:
        title = movie.find('div', class_='hd').find('span', class_='title').get_text()
        rating_num = movie.find('div', class_='star').find('span', class_='rating_num').get_text()
        comment_num = movie.find('div', class_='star').find_all('span')[-1].get_text()
        writer.writerow([title, rating_num, comment_num])

# Fetch all ten pages and save the data
def save_data():
    f = open('douban_movie_top250.csv', 'a', newline='', encoding='utf-8-sig')
    global writer  # shared with parse_html via a module-level global
    writer = csv.writer(f)
    writer.writerow(['电影名称', '评分', '评价人数'])
    for i in range(10):
        url = 'https://movie.douban.com/top250?start=' + str(i * 25) + '&filter='
        response = requests.get(url, headers=headers)
        parse_html(response.text)
    f.close()

if __name__ == '__main__':
    save_data()
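
Douban may throttle clients that hit it too quickly, so in practice it helps to check the status code and pause between pages. A minimal sketch of that idea (the one-second delay is an arbitrary choice, not something the original code prescribes):

import time

for i in range(10):
    url = 'https://movie.douban.com/top250?start=' + str(i * 25) + '&filter='
    response = requests.get(url, headers=headers)
    if response.status_code == 200:
        parse_html(response.text)
    else:
        print('Request failed with status', response.status_code)
    time.sleep(1)  # arbitrary politeness delay between pages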

2. Scraping the Maoyan Movie Top 100

This example uses regular expressions with the requests library to scrape the title, cast, and release date of each film in the Maoyan Top 100, and saves the results to a TXT file.

import requests
import re

# Request URL
url = 'https://maoyan.com/board/4'
# Request headers
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36'
}

# Parse one page of results
def parse_html(html):
    pattern = re.compile(r'<p class="name"><a href=".*?" title="(.*?)" data-act="boarditem-click" data-val="{movieId:\d+}">(.*?)</a></p>.*?<p class="star">(.*?)</p>.*?<p class="releasetime">(.*?)</p>', re.S)
    items = re.findall(pattern, html)
    for item in items:
        yield {
            '电影名称': item[1],
            '主演': item[2].strip(),
            '上映时间': item[3]
        }

# Fetch all ten pages and save the data
def save_data():
    f = open('maoyan_top100.txt', 'w', encoding='utf-8')
    for i in range(10):
        url = 'https://maoyan.com/board/4?offset=' + str(i * 10)
        response = requests.get(url, headers=headers)
        for item in parse_html(response.text):
            f.write(str(item) + '\n')
    f.close()

if __name__ == '__main__':
    save_data()
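
Note that writing `str(item)` produces a Python dict repr, which is awkward to re-parse later. If you want machine-readable output instead, one option is JSON Lines; this is a variation on the original save step, not what the source code does:

import json

with open('maoyan_top100.jsonl', 'w', encoding='utf-8') as f:
    for i in range(10):
        url = 'https://maoyan.com/board/4?offset=' + str(i * 10)
        response = requests.get(url, headers=headers)
        for item in parse_html(response.text):
            # one JSON object per line, keeping Chinese text readable
            f.write(json.dumps(item, ensure_ascii=False) + '\n')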

 

3. Scraping the National University List

This example uses regular expressions with the requests library to scrape a nationwide university ranking list and saves the results to a TXT file.

import requests
import re

# Request URL
url = 'http://www.zuihaodaxue.com/zuihaodaxuepaiming2019.html'
# Request headers
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36'
}

# Parse the ranking table
def parse_html(html):
    pattern = re.compile('<tr class="alt">.*?<td>(.*?)</td>.*?<td><div align="left">.*?<a href="(.*?)" target="_blank">(.*?)</a></div></td>.*?<td>(.*?)</td>.*?<td>(.*?)</td>.*?</tr>', re.S)
    items = re.findall(pattern, html)
    for item in items:
        yield {
            '排名': item[0],
            '学校名称': item[2],
            '省市': item[3],
            '总分': item[4]
        }

# Save the data
def save_data():
    f = open('university_top100.txt', 'w', encoding='utf-8')
    response = requests.get(url, headers=headers)
    for item in parse_html(response.text):
        f.write(str(item) + '\n')
    f.close()

if __name__ == '__main__':
    save_data()
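
Chinese sites sometimes declare their charset only inside the HTML, in which case `response.text` can come back garbled. A defensive tweak before parsing (whether this particular page needs it depends on the server, so treat it as a precaution):

response = requests.get(url, headers=headers)
response.encoding = response.apparent_encoding  # let requests sniff the real charset
for item in parse_html(response.text):
    print(item)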

4. Scraping City Weather from China Weather Network

This example uses XPath with the requests library to scrape city weather from China Weather Network (weather.com.cn) and saves the results to a CSV file.

import requests
from lxml import etree
import csv

# Request URL (101010100 is the city code for Beijing)
url = 'http://www.weather.com.cn/weather1d/101010100.shtml'
# Request headers
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36'
}

# Parse the page
def parse_html(html):
    selector = etree.HTML(html)
    city = selector.xpath('//*[@id="around"]/div/div[1]/div[1]/h1/text()')[0]
    temperature = selector.xpath('//*[@id="around"]/div/div[1]/div[1]/p/i/text()')[0]
    weather = selector.xpath('//*[@id="around"]/div/div[1]/div[1]/p/@title')[0]
    wind = selector.xpath('//*[@id="around"]/div/div[1]/div[1]/p/span/text()')[0]
    return city, temperature, weather, wind

# Save the data (one request is enough for a single snapshot)
def save_data():
    f = open('beijing_weather.csv', 'w', newline='', encoding='utf-8-sig')
    writer = csv.writer(f)
    writer.writerow(['城市', '温度', '天气', '风力'])
    response = requests.get(url, headers=headers)
    city, temperature, weather, wind = parse_html(response.text)
    writer.writerow([city, temperature, weather, wind])
    f.close()

if __name__ == '__main__':
    save_data()
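
The city is encoded in the URL: 101010100 is Beijing's code on weather.com.cn. To cover several cities you can template the URL; the Shanghai and Guangzhou codes below are the commonly cited ones, so verify them on the site before relying on them:

city_codes = {
    '北京': '101010100',
    '上海': '101020100',  # verify these codes on weather.com.cn before use
    '广州': '101280101',
}
for name, code in city_codes.items():
    page = requests.get('http://www.weather.com.cn/weather1d/%s.shtml' % code, headers=headers)
    print(name, parse_html(page.text))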

5. Scraping Book Listings from Dangdang

This example uses XPath with the requests library to scrape book listings from Dangdang and saves the results to a CSV file.

import requests
from lxml import etree
import csv

# Request URL
url = 'http://search.dangdang.com/?key=Python&act=input'
# Request headers
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36'
}

# Parse the search-result page
def parse_html(html):
    selector = etree.HTML(html)
    book_list = selector.xpath('//*[@id="search_nature_rg"]/ul/li')
    for book in book_list:
        title = book.xpath('a/@title')[0]
        link = book.xpath('a/@href')[0]
        price = book.xpath('p[@class="price"]/span[@class="search_now_price"]/text()')[0]
        author = book.xpath('p[@class="search_book_author"]/span[1]/a/@title')[0]
        publish_date = book.xpath('p[@class="search_book_author"]/span[2]/text()')[0]
        publisher = book.xpath('p[@class="search_book_author"]/span[3]/a/@title')[0]
        yield {
            '书名': title,
            '链接': link,
            '价格': price,
            '作者': author,
            '出版日期': publish_date,
            '出版社': publisher
        }

# Save the data
def save_data():
    f = open('dangdang_books.csv', 'w', newline='', encoding='utf-8-sig')
    writer = csv.writer(f)
    writer.writerow(['书名', '链接', '价格', '作者', '出版日期', '出版社'])
    response = requests.get(url, headers=headers)
    for item in parse_html(response.text):
        writer.writerow(item.values())
    f.close()

if __name__ == '__main__':
    save_data()
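
Dangdang listings are not perfectly uniform, so any of the `[0]` lookups in parse_html can raise IndexError when a field is missing. One defensive pattern is a small helper that falls back to an empty string; this is a sketch, not part of the original code:

def first_or_blank(node, path):
    # return the first XPath match, or '' when the field is absent
    result = node.xpath(path)
    return result[0] if result else ''

# used inside parse_html, e.g.:
# title = first_or_blank(book, 'a/@title')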

 

6. Scraping Jokes from Qiushibaike

This example uses XPath with the requests library to scrape text jokes from Qiushibaike and saves them to a TXT file.

import requests
from lxml import etree

# Request URL
url = 'https://www.qiushibaike.com/text/'
# Request headers
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36'
}

# Parse one page of jokes
def parse_html(html):
    selector = etree.HTML(html)
    content_list = selector.xpath('//div[@class="content"]/span/text()')
    for content in content_list:
        yield content

# Fetch three pages and save the data
def save_data():
    f = open('qiushibaike_jokes.txt', 'w', encoding='utf-8')
    for i in range(3):
        url = 'https://www.qiushibaike.com/text/page/' + str(i + 1) + '/'
        response = requests.get(url, headers=headers)
        for content in parse_html(response.text):
            f.write(content + '\n')
    f.close()

if __name__ == '__main__':
    save_data()
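
One subtlety with this selector: `text()` yields each text fragment separately, so a joke containing `<br/>` line breaks arrives as several pieces. If you want one string per post, select the span elements and join their text; this is a variation on the original parse_html:

def parse_html(html):
    selector = etree.HTML(html)
    for span in selector.xpath('//div[@class="content"]/span'):
        # join the fragments of one post into a single string
        yield ''.join(span.xpath('.//text()')).strip()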

7. Scraping Sina Weibo

This example uses Selenium together with the requests library to log in to Sina Weibo, fetch the page, and save it to a TXT file.

import time
from selenium import webdriver
from selenium.webdriver.common.by import By
import requests

# Request URL
url = 'https://weibo.com/'
# Request headers
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36'
}

# Parse the page
def parse_html(html):
    print(html)

# Log in with Selenium, then fetch the page with requests and save it
def save_data():
    f = open('weibo.txt', 'w', encoding='utf-8')
    browser = webdriver.Chrome()
    browser.get(url)
    time.sleep(10)
    # replace 'username' and 'password' with your own credentials
    browser.find_element(By.NAME, 'username').send_keys('username')
    browser.find_element(By.NAME, 'password').send_keys('password')
    browser.find_element(By.CLASS_NAME, 'W_btn_a').click()
    time.sleep(10)
    # requests expects a {name: value} dict, not Selenium's list of cookie dicts
    cookies = {c['name']: c['value'] for c in browser.get_cookies()}
    response = requests.get(url, headers=headers, cookies=cookies)
    parse_html(response.text)
    f.write(response.text)  # save the raw page to the TXT file
    browser.close()
    f.close()

if __name__ == '__main__':
    save_data()
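
An alternative to handing the cookies over to requests is to read the page straight from the logged-in browser session via Selenium's page_source attribute, which avoids the cookie conversion entirely; a minimal sketch:

browser.get(url)
time.sleep(5)               # give the feed time to render
html = browser.page_source  # HTML exactly as the logged-in user sees it
parse_html(html)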

I hope these seven small examples help you get a firmer grip on the basics of Python web scraping!

