A Few Hands-On Python Web Scraping Projects for Beginners

Beginners who have just finished the Python basics often don't know how to push their coding further. Practicing on a few scraper-friendly websites is a good next step: advanced crawling itself touches many topics, so using scraping as an entry point both consolidates your Python fundamentals and can lead you to other areas later, such as multithreading and distributed systems. Why not give it a try?

Below are a few very beginner-friendly scraping projects; hopefully none of them will scare anyone off.

Douban

Douban is a household-name Chinese site that is also very scraper-friendly, with hardly any anti-scraping measures, which makes it an ideal place to practice.

Scraping comments

We'll use the following page as the example:

https://movie.douban.com/subject/3878007/

The comments are paginated; inspecting the requests shows that the comment URL looks like this:

https://movie.douban.com/subject/3878007/comments?start=0&limit=20&sort=new_score&status=P&percent_type=l

Each time you go to the next page, start increases by 20, so the code can be written as follows:



```python
import requests
from bs4 import BeautifulSoup


def get_praise():
    praise_list = []
    for i in range(0, 2000, 20):
        url = 'https://movie.douban.com/subject/3878007/comments?start=%s&limit=20&sort=new_score&status=P&percent_type=h' % str(i)
        req = requests.get(url).text
        content = BeautifulSoup(req, "html.parser")
        check_point = content.title.string
        if check_point != r"没有访问权限":  # page is still accessible, keep collecting
            comment = content.find_all("span", attrs={"class": "short"})
            for k in comment:
                praise_list.append(k.string)
        else:
            break
    return praise_list
```


The range function is used with a step of 20, and the loop ends once the page title equals "没有访问权限" (no access permission), which marks the last page.

Next, let's break the comments down by rating tier.

Douban comments are divided into three tiers; we fetch each tier separately to make the later analysis easier.



```python
def get_ordinary():
    ordinary_list = []
    for i in range(0, 2000, 20):
        url = 'https://movie.douban.com/subject/3878007/comments?start=%s&limit=20&sort=new_score&status=P&percent_type=m' % str(i)
        req = requests.get(url).text
        content = BeautifulSoup(req, "html.parser")
        check_point = content.title.string
        if check_point != r"没有访问权限":
            comment = content.find_all("span", attrs={"class": "short"})
            for k in comment:
                ordinary_list.append(k.string)
        else:
            break
    return ordinary_list


def get_lowest():
    lowest_list = []
    for i in range(0, 2000, 20):
        url = 'https://movie.douban.com/subject/3878007/comments?start=%s&limit=20&sort=new_score&status=P&percent_type=l' % str(i)
        req = requests.get(url).text
        content = BeautifulSoup(req, "html.parser")
        check_point = content.title.string
        if check_point != r"没有访问权限":
            comment = content.find_all("span", attrs={"class": "short"})
            for k in comment:
                lowest_list.append(k.string)
        else:
            break
    return lowest_list
```


As you can see, the three blocks differ only in the request URL: percent_type=h, m and l correspond to Douban's positive, neutral and negative comments respectively. They could also be merged into a single function, as sketched below.
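Since only the percent_type query parameter changes, one parameterized helper covers all three tiers. A minimal sketch, reusing the requests/BeautifulSoup imports from the block above; the get_comments name is my own, not part of the original script:

```python
def get_comments(percent_type):
    """Fetch short comments for one rating tier: 'h' (praise), 'm' (ordinary) or 'l' (lowest)."""
    comment_list = []
    for i in range(0, 2000, 20):
        url = ('https://movie.douban.com/subject/3878007/comments'
               '?start=%s&limit=20&sort=new_score&status=P&percent_type=%s') % (i, percent_type)
        content = BeautifulSoup(requests.get(url).text, "html.parser")
        if content.title.string == r"没有访问权限":  # the "no access" page marks the end
            break
        for k in content.find_all("span", attrs={"class": "short"}):
            comment_list.append(k.string)
    return comment_list
```

With that in place, get_comments('h'), get_comments('m') and get_comments('l') would replace the three near-identical functions.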

Finally, save the collected data to files.



```python
import pandas as pd


if __name__ == "__main__":
    print("Get Praise Comment")
    praise_data = get_praise()
    print("Get Ordinary Comment")
    ordinary_data = get_ordinary()
    print("Get Lowest Comment")
    lowest_data = get_lowest()
    print("Save Praise Comment")
    praise_pd = pd.DataFrame(columns=['praise_comment'], data=praise_data)
    praise_pd.to_csv('praise.csv', encoding='utf-8')
    print("Save Ordinary Comment")
    ordinary_pd = pd.DataFrame(columns=['ordinary_comment'], data=ordinary_data)
    ordinary_pd.to_csv('ordinary.csv', encoding='utf-8')
    print("Save Lowest Comment")
    lowest_pd = pd.DataFrame(columns=['lowest_comment'], data=lowest_data)
    lowest_pd.to_csv('lowest.csv', encoding='utf-8')
    print("THE END!!!")
```



Making word clouds

Here jieba handles the word segmentation and the wordcloud library draws the clouds, again split into the three tiers. A few interference words are removed as well, such as "一部", "一个", "故事" and some other generic nouns (see the note on stopwords after the code). None of this is difficult, so here is the code.



```python
import jieba
import pandas as pd
from wordcloud import WordCloud
import numpy as np
from PIL import Image

font = r'C:\Windows\Fonts\FZSTK.TTF'
STOPWORDS = set(map(str.strip, open('stopwords.txt').readlines()))


def wordcloud_praise():
    df = pd.read_csv('praise.csv', usecols=[1])
    df_list = df.values.tolist()
    comment_after = jieba.cut(str(df_list), cut_all=False)
    words = ' '.join(comment_after)
    img = Image.open('haiwang8.jpg')
    img_array = np.array(img)
    wc = WordCloud(width=2000, height=1800, background_color='white', font_path=font, mask=img_array, stopwords=STOPWORDS)
    wc.generate(words)
    wc.to_file('praise.png')


def wordcloud_ordinary():
    df = pd.read_csv('ordinary.csv', usecols=[1])
    df_list = df.values.tolist()
    comment_after = jieba.cut(str(df_list), cut_all=False)
    words = ' '.join(comment_after)
    img = Image.open('haiwang8.jpg')
    img_array = np.array(img)
    wc = WordCloud(width=2000, height=1800, background_color='white', font_path=font, mask=img_array, stopwords=STOPWORDS)
    wc.generate(words)
    wc.to_file('ordinary.png')


def wordcloud_lowest():
    df = pd.read_csv('lowest.csv', usecols=[1])
    df_list = df.values.tolist()
    comment_after = jieba.cut(str(df_list), cut_all=False)
    words = ' '.join(comment_after)
    img = Image.open('haiwang7.jpg')
    img_array = np.array(img)
    wc = WordCloud(width=2000, height=1800, background_color='white', font_path=font, mask=img_array, stopwords=STOPWORDS)
    wc.generate(words)
    wc.to_file('lowest.png')


if __name__ == "__main__":
    print("Save praise wordcloud")
    wordcloud_praise()
    print("Save ordinary wordcloud")
    wordcloud_ordinary()
    print("Save lowest wordcloud")
    wordcloud_lowest()
    print("THE END!!!")
```



Scraping posters

Scraping the posters is very similar, so here is the code directly (a note about file names follows it).



```python
import requests
import json


def deal_pic(url, name):
    pic = requests.get(url)
    with open(name + '.jpg', 'wb') as f:
        f.write(pic.content)


def get_poster():
    for i in range(0, 10000, 20):
        url = 'https://movie.douban.com/j/new_search_subjects?sort=U&range=0,10&tags=电影&start=%s&genres=爱情' % i
        req = requests.get(url).text
        req_dict = json.loads(req)
        if not req_dict['data']:  # no more results, stop paging
            break
        for j in req_dict['data']:
            name = j['title']
            poster_url = j['cover']
            print(name, poster_url)
            deal_pic(poster_url, name)


if __name__ == "__main__":
    get_poster()
```
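One caveat: movie titles can contain characters that are not legal in file names (a "/" or ":" for example), which would make the open call in deal_pic fail. A small hedged helper, the safe_filename name being my own, that can be applied before saving:

```python
import re


def safe_filename(name):
    # replace characters that are illegal or awkward in file names with underscores
    return re.sub(r'[\\/:*?"<>|]', '_', name)
```

Calling deal_pic(poster_url, safe_filename(name)) inside the loop keeps the downloads from crashing on such titles.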



Rotten Tomatoes

This is an overseas film review site that is also well suited to beginners; the URL is:

https://www.rottentomatoes.com/tv/game_of_thrones

We'll use Game of Thrones as the example.



```python
import requests
from bs4 import BeautifulSoup
from pyecharts.charts import Line
import pyecharts.options as opts
from wordcloud import WordCloud
import jieba


baseurl = 'https://www.rottentomatoes.com'


def get_total_season_content():
    url = 'https://www.rottentomatoes.com/tv/game_of_thrones'
    response = requests.get(url).text
    content = BeautifulSoup(response, "html.parser")
    season_list = []
    div_list = content.find_all('div', attrs={'class': 'bottom_divider media seasonItem '})
    for i in div_list:
        suburl = i.find('a')['href']
        season = i.find('a').text
        rotten = i.find('span', attrs={'class': 'meter-value'}).text
        consensus = i.find('div', attrs={'class': 'consensus'}).text.strip()
        season_list.append([season, suburl, rotten, consensus])
    return season_list


def get_season_content(url):
    # url = 'https://www.rottentomatoes.com/tv/game_of_thrones/s08#audience_reviews'
    response = requests.get(url).text
    content = BeautifulSoup(response, "html.parser")
    episode_list = []
    div_list = content.find_all('div', attrs={'class': 'bottom_divider'})
    for i in div_list:
        suburl = i.find('a')['href']
        fresh = i.find('span', attrs={'class': 'tMeterScore'}).text.strip()
        episode_list.append([suburl, fresh])
    return episode_list[:5]


mylist = [['/tv/game_of_thrones/s08/e01', '92%'],
          ['/tv/game_of_thrones/s08/e02', '88%'],
          ['/tv/game_of_thrones/s08/e03', '74%'],
          ['/tv/game_of_thrones/s08/e04', '58%'],
          ['/tv/game_of_thrones/s08/e05', '48%'],
          ['/tv/game_of_thrones/s08/e06', '49%']]


def get_episode_detail(episode):
    # episode = mylist
    e_list = []
    for i in episode:
        url = baseurl + i[0]
        # print(url)
        response = requests.get(url).text
        content = BeautifulSoup(response, "html.parser")
        critic_consensus = content.find('p', attrs={'class': 'critic_consensus superPageFontColor'}).text.strip().replace(' ', '').replace('\n', '')
        review_list_left = content.find_all('div', attrs={'class': 'quote_bubble top_critic pull-left cl '})
        review_list_right = content.find_all('div', attrs={'class': 'quote_bubble top_critic pull-right  '})
        review_list = []
        for i_left in review_list_left:
            left_review = i_left.find('div', attrs={'class': 'media-body'}).find('p').text.strip()
            review_list.append(left_review)
        for i_right in review_list_right:
            right_review = i_right.find('div', attrs={'class': 'media-body'}).find('p').text.strip()
            review_list.append(right_review)
        e_list.append([critic_consensus, review_list])
    print(e_list)


if __name__ == '__main__':
    total_season_content = get_total_season_content()
```
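pyecharts is imported but never used in the script above. As one possible follow-up, the hard-coded season-8 freshness scores in mylist can be drawn as a line chart; a minimal sketch, assuming a pyecharts v1-style API and the mylist defined above:

```python
from pyecharts.charts import Line
import pyecharts.options as opts


def plot_freshness(episodes):
    # x axis: episode id, y axis: freshness percentage as an integer
    x_data = [e[0].split('/')[-1] for e in episodes]
    y_data = [int(e[1].strip('%')) for e in episodes]
    line = (
        Line()
        .add_xaxis(x_data)
        .add_yaxis("freshness", y_data)
        .set_global_opts(title_opts=opts.TitleOpts(title="Game of Thrones S08"))
    )
    line.render("got_s08_freshness.html")


plot_freshness(mylist)
```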
  



Honor of Kings hero database

The site used here is:

http://db.18183.com/



```python
import requests
from bs4 import BeautifulSoup


def get_hero_url():
    print('start to get hero urls')
    url = 'http://db.18183.com/'
    url_list = []
    res = requests.get(url + 'wzry').text
    content = BeautifulSoup(res, "html.parser")
    ul = content.find('ul', attrs={'class': "mod-iconlist"})
    hero_url = ul.find_all('a')
    for i in hero_url:
        url_list.append(i['href'])
    print('finish get hero urls')
    return url_list


def get_details(url):
    print('start to get details')
    base_url = 'http://db.18183.com/'
    detail_list = []
    for i in url:
        # print(i)
        res = requests.get(base_url + i).text
        content = BeautifulSoup(res, "html.parser")
        name_box = content.find('div', attrs={'class': 'name-box'})
        name = name_box.h1.text
        hero_attr = content.find('div', attrs={'class': 'attr-list'})
        attr_star = hero_attr.find_all('span')
        # each rating value is encoded in the element's second CSS class, split on '-'
        survivability = attr_star[0]['class'][1].split('-')[1]
        attack_damage = attr_star[1]['class'][1].split('-')[1]
        skill_effect = attr_star[2]['class'][1].split('-')[1]
        getting_started = attr_star[3]['class'][1].split('-')[1]
        details = content.find('div', attrs={'class': 'otherinfo-datapanel'})
        # print(details)
        attrs = details.find_all('p')
        attr_list = []
        for attr in attrs:
            attr_list.append(attr.text.split(':')[1].strip())
        detail_list.append([name, survivability, attack_damage,
                            skill_effect, getting_started, attr_list])
    print('finish get details')
    return detail_list


def save_tocsv(details):
    print('start save to csv')
    with open('all_hero_init_attr_new.csv', 'w', encoding='gb18030') as f:
        f.write('英雄名字,生存能力,攻击伤害,技能效果,上手难度,最大生命,最大法力,物理攻击,'
                '法术攻击,物理防御,物理减伤率,法术防御,法术减伤率,移速,物理护甲穿透,法术护甲穿透,攻速加成,暴击几率,'
                '暴击效果,物理吸血,法术吸血,冷却缩减,攻击范围,韧性,生命回复,法力回复\n')
        for i in details:
            try:
                rowcsv = '{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{}'.format(
                    i[0], i[1], i[2], i[3], i[4], i[5][0], i[5][1], i[5][2], i[5][3], i[5][4], i[5][5],
                    i[5][6], i[5][7], i[5][8], i[5][9], i[5][10], i[5][11], i[5][12], i[5][13], i[5][14], i[5][15],
                    i[5][16], i[5][17], i[5][18], i[5][19], i[5][20]
                )
                f.write(rowcsv)
                f.write('\n')
            except:  # skip heroes whose attribute panel is missing some fields
                continue
    print('finish save to csv')


if __name__ == "__main__":
    hero_url = get_hero_url()
    details = get_details(hero_url)
    save_tocsv(details)
```
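The long format string in save_tocsv is easy to get wrong. A hedged alternative using the standard csv module, assuming the same detail_list layout (name, four star ratings, then the attribute list); save_tocsv_v2 is my own name:

```python
import csv


def save_tocsv_v2(details, path='all_hero_init_attr_new.csv'):
    # each record is [name, 4 star ratings, attr_list]; flatten it into one row
    with open(path, 'w', encoding='gb18030', newline='') as f:
        writer = csv.writer(f)
        for i in details:
            writer.writerow(i[:5] + i[5])
```

The Chinese header row from save_tocsv could still be written first with writer.writerow() if it is needed.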



That's it for these three sites today; more good practice sites and hands-on code will follow in later posts!
