Web Scraping, Proxy Pools, and Data Analysis

Scraping Douban comments on Guardian (《镇魂》) and analyzing the data

This is adapted from a friend's well-written blog post; I figured the techniques below would come in handy, so I'm sharing them here. The data-analysis part is still a bit thin, though. If you have better methods or ideas for the analysis, discussion is very welcome!

# Import packages
import requests
from bs4 import BeautifulSoup
import random
import matplotlib.pyplot as plt
import jieba
from wordcloud import WordCloud
import PIL.Image
import numpy as np
from snownlp import SnowNLP
import csv
import codecs
import pandas as pd
# Pool of browser User-Agent strings to rotate through
agents = [
    "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:17.0; Baiduspider-ads) Gecko/17.0 Firefox/17.0",
    "Mozilla/5.0 (Linux; U; Android 2.3.6; en-us; Nexus S Build/GRK39F) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1",
    "Avant Browser/1.2.789rel1 (http://www.avantbrowser.com)",
    "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/532.5 (KHTML, like Gecko) Chrome/4.0.249.0 Safari/532.5",
    "Mozilla/5.0 (Windows; U; Windows NT 5.2; en-US) AppleWebKit/532.9 (KHTML, like Gecko) Chrome/5.0.310.0 Safari/532.9",
    "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/534.7 (KHTML, like Gecko) Chrome/7.0.514.0 Safari/534.7",
    "Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US) AppleWebKit/534.14 (KHTML, like Gecko) Chrome/9.0.601.0 Safari/534.14",
    "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/534.14 (KHTML, like Gecko) Chrome/10.0.601.0 Safari/534.14",
    "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/534.20 (KHTML, like Gecko) Chrome/11.0.672.2 Safari/534.20",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/534.27 (KHTML, like Gecko) Chrome/12.0.712.0 Safari/534.27",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.1 (KHTML, like Gecko) Chrome/13.0.782.24 Safari/535.1",
    "Mozilla/5.0 (Windows NT 6.0) AppleWebKit/535.2 (KHTML, like Gecko) Chrome/15.0.874.120 Safari/535.2",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.7 (KHTML, like Gecko) Chrome/16.0.912.36 Safari/535.7",
    "Mozilla/5.0 (Windows; U; Windows NT 6.0 x64; en-US; rv:1.9pre) Gecko/2008072421 Minefield/3.0.2pre",
    "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN; rv:1.9b4) Gecko/2008030317 Firefox/3.0b4",
    "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.10) Gecko/2009042316 Firefox/3.0.10",
    "Mozilla/5.0 (Windows; U; Windows NT 6.0; en-GB; rv:1.9.0.11) Gecko/2009060215 Firefox/3.0.11 (.NET CLR 3.5.30729)",
    "Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.9.1.6) Gecko/20091201 Firefox/3.5.6 GTB5",
    "Mozilla/5.0 (Windows; U; Windows NT 5.1; tr; rv:1.9.2.8) Gecko/20100722 Firefox/3.6.8 ( .NET CLR 3.5.30729; .NET4.0E)",
    "Mozilla/5.0 (Windows; U; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 2.0.50727; BIDUBrowser 7.6)",
    "Mozilla/5.0 (Windows NT 6.3; WOW64; Trident/7.0; rv:11.0) like Gecko",
    "Mozilla/5.0 (Windows NT 6.3; WOW64; rv:46.0) Gecko/20100101 Firefox/46.0",
    "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0.2454.99 Safari/537.36",
    "Mozilla/5.0 (Windows NT 6.3; Win64; x64; Trident/7.0; Touch; LCJB; rv:11.0) like Gecko",
    "Mozilla/5.0 (Windows NT 6.1; rv:2.0.1) Gecko/20100101 Firefox/4.0.1",
    "Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:2.0.1) Gecko/20100101 Firefox/4.0.1",
    "Mozilla/5.0 (Windows NT 5.1; rv:5.0) Gecko/20100101 Firefox/5.0",
    "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:6.0a2) Gecko/20110622 Firefox/6.0a2",
    "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:7.0.1) Gecko/20100101 Firefox/7.0.1",
    "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:2.0b4pre) Gecko/20100815 Minefield/4.0b4pre",
    "Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 5.0 )",
    "Mozilla/4.0 (compatible; MSIE 5.5; Windows 98; Win 9x 4.90)",
    "Mozilla/5.0 (Windows; U; Windows XP) Gecko MultiZilla/1.6.1.0a",
    "Mozilla/2.02E (Win95; U)",
    "Mozilla/3.01Gold (Win95; I)",
    "Mozilla/4.8 [en] (Windows NT 5.1; U)",
    "Mozilla/5.0 (Windows; U; Win98; en-US; rv:1.4) Gecko Netscape/7.1 (ax)",
    "HTC_Dream Mozilla/5.0 (Linux; U; Android 1.5; en-ca; Build/CUPCAKE) AppleWebKit/528.5  (KHTML, like Gecko) Version/3.1.2 Mobile Safari/525.20.1",
    "Mozilla/5.0 (hp-tablet; Linux; hpwOS/3.0.2; U; de-DE) AppleWebKit/534.6 (KHTML, like Gecko) wOSBrowser/234.40.1 Safari/534.6 TouchPad/1.0",
    "Mozilla/5.0 (Linux; U; Android 1.5; en-us; sdk Build/CUPCAKE) AppleWebkit/528.5  (KHTML, like Gecko) Version/3.1.2 Mobile Safari/525.20.1",
    "Mozilla/5.0 (Linux; U; Android 2.1; en-us; Nexus One Build/ERD62) AppleWebKit/530.17 (KHTML, like Gecko) Version/4.0 Mobile Safari/530.17",
    "Mozilla/5.0 (Linux; U; Android 2.2; en-us; Nexus One Build/FRF91) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1",
    "Mozilla/5.0 (Linux; U; Android 1.5; en-us; htc_bahamas Build/CRB17) AppleWebKit/528.5  (KHTML, like Gecko) Version/3.1.2 Mobile Safari/525.20.1",
    "Mozilla/5.0 (Linux; U; Android 2.1-update1; de-de; HTC Desire 1.19.161.5 Build/ERE27) AppleWebKit/530.17 (KHTML, like Gecko) Version/4.0 Mobile Safari/530.17",
    "Mozilla/5.0 (Linux; U; Android 2.2; en-us; Sprint APA9292KT Build/FRF91) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1",
    "Mozilla/5.0 (Linux; U; Android 1.5; de-ch; HTC Hero Build/CUPCAKE) AppleWebKit/528.5  (KHTML, like Gecko) Version/3.1.2 Mobile Safari/525.20.1",
    "Mozilla/5.0 (Linux; U; Android 2.2; en-us; ADR6300 Build/FRF91) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1",
    "Mozilla/5.0 (Linux; U; Android 2.1; en-us; HTC Legend Build/cupcake) AppleWebKit/530.17 (KHTML, like Gecko) Version/4.0 Mobile Safari/530.17",
    "Mozilla/5.0 (Linux; U; Android 1.5; de-de; HTC Magic Build/PLAT-RC33) AppleWebKit/528.5  (KHTML, like Gecko) Version/3.1.2 Mobile Safari/525.20.1 FirePHP/0.3",
    "Mozilla/5.0 (Linux; U; Android 1.6; en-us; HTC_TATTOO_A3288 Build/DRC79) AppleWebKit/528.5  (KHTML, like Gecko) Version/3.1.2 Mobile Safari/525.20.1",
    "Mozilla/5.0 (Linux; U; Android 1.0; en-us; dream) AppleWebKit/525.10  (KHTML, like Gecko) Version/3.0.4 Mobile Safari/523.12.2",
    "Mozilla/5.0 (Linux; U; Android 1.5; en-us; T-Mobile G1 Build/CRB43) AppleWebKit/528.5  (KHTML, like Gecko) Version/3.1.2 Mobile Safari 525.20.1",
    "Mozilla/5.0 (Linux; U; Android 1.5; en-gb; T-Mobile_G2_Touch Build/CUPCAKE) AppleWebKit/528.5  (KHTML, like Gecko) Version/3.1.2 Mobile Safari/525.20.1",
    "Mozilla/5.0 (Linux; U; Android 2.0; en-us; Droid Build/ESD20) AppleWebKit/530.17 (KHTML, like Gecko) Version/4.0 Mobile Safari/530.17",
    "Mozilla/5.0 (Linux; U; Android 2.2; en-us; Droid Build/FRG22D) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1",
    "Mozilla/5.0 (Linux; U; Android 2.0; en-us; Milestone Build/ SHOLS_U2_01.03.1) AppleWebKit/530.17 (KHTML, like Gecko) Version/4.0 Mobile Safari/530.17",
    "Mozilla/5.0 (Linux; U; Android 2.0.1; de-de; Milestone Build/SHOLS_U2_01.14.0) AppleWebKit/530.17 (KHTML, like Gecko) Version/4.0 Mobile Safari/530.17",
    "Mozilla/5.0 (Linux; U; Android 3.0; en-us; Xoom Build/HRI39) AppleWebKit/525.10  (KHTML, like Gecko) Version/3.0.4 Mobile Safari/523.12.2",
    "Mozilla/5.0 (Linux; U; Android 0.5; en-us) AppleWebKit/522  (KHTML, like Gecko) Safari/419.3",
    "Mozilla/5.0 (Linux; U; Android 1.1; en-gb; dream) AppleWebKit/525.10  (KHTML, like Gecko) Version/3.0.4 Mobile Safari/523.12.2",
    "Mozilla/5.0 (Linux; U; Android 2.0; en-us; Droid Build/ESD20) AppleWebKit/530.17 (KHTML, like Gecko) Version/4.0 Mobile Safari/530.17",
    "Mozilla/5.0 (Linux; U; Android 2.1; en-us; Nexus One Build/ERD62) AppleWebKit/530.17 (KHTML, like Gecko) Version/4.0 Mobile Safari/530.17",
    "Mozilla/5.0 (Linux; U; Android 2.2; en-us; Sprint APA9292KT Build/FRF91) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1",
    "Mozilla/5.0 (Linux; U; Android 2.2; en-us; ADR6300 Build/FRF91) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1",
    "Mozilla/5.0 (Linux; U; Android 2.2; en-ca; GT-P1000M Build/FROYO) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1",
    "Mozilla/5.0 (Linux; U; Android 3.0.1; fr-fr; A500 Build/HRI66) AppleWebKit/534.13 (KHTML, like Gecko) Version/4.0 Safari/534.13",
    "Mozilla/5.0 (Linux; U; Android 3.0; en-us; Xoom Build/HRI39) AppleWebKit/525.10  (KHTML, like Gecko) Version/3.0.4 Mobile Safari/523.12.2",
    "Mozilla/5.0 (Linux; U; Android 1.6; es-es; SonyEricssonX10i Build/R1FA016) AppleWebKit/528.5  (KHTML, like Gecko) Version/3.1.2 Mobile Safari/525.20.1",
    "Mozilla/5.0 (Linux; U; Android 1.6; en-us; SonyEricssonX10i Build/R1AA056) AppleWebKit/528.5  (KHTML, like Gecko) Version/3.1.2 Mobile Safari/525.20.1",
] 
heads = {                                                # headers used when fetching the proxy-IP listing page
        'User-Agent': random.choice(agents),
    }
# Build the proxy-IP pool: scrape anonymous IPs from a free proxy listing
# (xicidaili.com has since gone offline; substitute any comparable listing)
def get_ip_list():
    urlip = 'http://www.xicidaili.com/nn/'
    html = requests.get(urlip, headers=heads).text
    soup = BeautifulSoup(html, 'html.parser')
    ips = soup.find_all('tr')
    ip_list = []
    for i in range(1, len(ips)):
        ip_info = ips[i]
        tds = ip_info.find_all('td')
        ip_list.append(tds[1].text + ':' + tds[2].text)
    return ip_list
# Pick one IP at random from the proxy pool
def get_random_ip():
    ip_list = get_ip_list()
    proxy_list = []
    for ip in ip_list:
        proxy_list.append('http://' + ip)
    proxy_ip = random.choice(proxy_list)
    proxies = {'http': proxy_ip}
    return proxies   # returns a dict in the format requests expects for its proxies argument
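Free proxies are often dead or painfully slow, so it can pay to probe an IP before trusting it. Below is a minimal sketch of such a check; it is my own addition, and the probe URL and 3-second timeout are arbitrary choices, not part of the original code.

# Hedged sketch: verify that a candidate proxy actually responds before using it.
def get_working_ip(max_tries=5):
    for _ in range(max_tries):
        proxies = get_random_ip()
        try:
            requests.get('http://httpbin.org/ip', proxies=proxies, timeout=3)
            return proxies           # proxy answered in time: use it
        except requests.RequestException:
            continue                 # dead or slow proxy: draw another one
    return None                      # give up; the caller can fall back to a direct connection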
# headers and proxy used when fetching the comment pages
heade = {
        'User-Agent': random.choice(agents),
    }
proxies = get_random_ip()            # anti-scraping measure: a random proxy drawn from the IP pool

# Fetch the full HTML of one comment page
def get_html(url):
    response = requests.get(url, headers=heade, proxies=proxies)
    response.encoding = 'utf-8'
    html = response.text
    #print(html)
    return html
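Douban throttles aggressive clients, so a single request can fail even with a working proxy. As a hedged sketch (my own addition; the retry count and timeout are illustrative values), get_html can be wrapped with simple retries that draw a fresh proxy on each attempt:

# Hedged sketch: retry a fetch a few times, rotating in a new proxy each attempt.
def get_html_with_retry(url, retries=3):
    for attempt in range(retries):
        try:
            response = requests.get(url, headers=heade,
                                    proxies=get_random_ip(), timeout=5)
            response.encoding = 'utf-8'
            return response.text
        except requests.RequestException:
            continue   # bad proxy or timeout: try again with a different IP
    return ''          # all attempts failed; the caller should handle the empty page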

## Get the URL of the next page of the comment section
def get_url(html):
    bs = BeautifulSoup(html, 'html.parser')
    url_list = bs.find_all('div', attrs={'id': "paginator"})
    #print(url_list)
    if len(url_list) > 0:
        next_a = url_list[0].find('a', attrs={'class': "next"})
        if next_a is not None:   # the "next" link is absent on the last page
            return 'https://movie.douban.com/subject/26979097/comments' + next_a['href']


# Write the header row first, so the table can be built up below
# (columns: id, comment, star rating, region, date)

comment_columns = ['标号', '评论', '星标', '地区', '时间']
with open("D:\\comment4.csv", 'a', encoding="utf-8", newline='') as f:
    writer = csv.writer(f, delimiter='\t', quotechar='"', quoting=csv.QUOTE_ALL)
    writer.writerow(comment_columns)

# b_comment_list holds one record per comment
b_comment_list=[]

# Scrape one page of comments: the star rating and review text of each comment
def get_star_and_comments(url_comment):
    global b_comment_list
    comment_html = get_html(url_comment)
    bs = BeautifulSoup(comment_html, 'html.parser')
    comment_list = bs.find_all('div', attrs={'class': "comment-item"})
    star_map = {'allstar50': 5, 'allstar40': 4, 'allstar30': 3,
                'allstar20': 2, 'allstar10': 1}
    name = 0   # running counter, used as each commenter's id
    for comment in comment_list:
        name += 1
        comments = (comment.find('p')).text.strip('\n')
        span = (comment.find_all('span')[4])['class']
        star = star_map.get(span[0], 0)   # 0 when the comment carries no rating
        # link to the commenter's profile page
        info_url = (comment.find_all('a')[2])['href']
        info_html = get_html(info_url)
        info_bs = BeautifulSoup(info_html, "html.parser")
        # region
        try:
            place = info_bs.find_all('div', attrs={"class": 'user-info'})[0].find('a').text.strip()
        except Exception:
            place = ''
        # comment date
        date = comment.find('span', attrs={"class": "comment-time"}).text.strip()
        # one record per comment
        b_comment_list.append([name, comments, star, place, date])

# Recursive driver: after finishing one page, automatically fetch the next one.
# The counter i caps how many pages are fetched per run (batches of 5 here).
i = 0
def get_all(url):
    global i
    i += 1
    print(url)
    get_star_and_comments(url)   # append this page's records to b_comment_list
    comment_url = get_url(get_html(url))
    #print(comment_url)
    if comment_url is not None and i % 5 != 0:
        get_all(comment_url)
    else:
        pass
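Recursion works here because each run is capped at a few pages, but Python's default recursion limit (about 1000 frames) would bite on a long crawl. A minimal iterative sketch of the same loop (my own rewrite, not from the original post):

# Hedged sketch: the same page-walking logic as get_all, written as a plain loop
# so an arbitrarily long crawl cannot hit the recursion limit.
def get_all_iterative(url, max_pages=5):
    pages = 0
    while url is not None and pages < max_pages:
        print(url)
        get_star_and_comments(url)
        url = get_url(get_html(url))   # None on the last page, which ends the loop
        pages += 1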
# Main program
if __name__ == '__main__':
    b_comment_list = []
    file_path = 'D:\\comment4.csv'   # writing to C:\ can run into permission problems
    # append the collected records to the file, one row each
    def make_dir(file_path, data_list):
        with open(file_path, 'a', encoding="utf-8", newline='') as f:
            writer = csv.writer(f, delimiter='\t', quotechar='"', quoting=csv.QUOTE_ALL)
            writer.writerows(data_list)   # write multiple rows at once
    url = 'https://movie.douban.com/subject/26979097/comments?start=240&limit=20&sort=new_score&status=P'
    get_all(url)   # the table rows are now assembled
    make_dir(file_path, b_comment_list)
    #print(b_comment_list)

# Read back the generated .csv file
cp = pd.read_csv('D:\\comment4.csv', engine='python', delimiter='\t', quotechar='"', encoding="utf-8")

comment_list = list(cp["评论"])
# file_list is a list of comments loaded from a plain-text dump
with open('haha.txt', "r", encoding='utf-8') as file:
    file_list = [i.strip() for i in file.readlines() if i != '\n']
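Since the analysis side is admittedly thin, one cheap addition is a star-rating distribution taken straight from the DataFrame. This is a hedged sketch of my own, relying only on the '星标' column written by the scraper above:

# Hedged sketch: plot how the star ratings are distributed.
def star_distribution(cp):
    counts = cp['星标'].value_counts().sort_index()   # ratings 0-5, where 0 means unrated
    counts.plot(kind='bar', color='g')
    plt.xlabel('Star rating (0 = no rating)')
    plt.ylabel('Number of comments')
    plt.title('Star-rating distribution of Guardian Douban comments')
    plt.show()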
# Next up: sentiment analysis
def emotion():
    sentimentslist = []
    for i in comment_list:
        try:
            s = SnowNLP(i)
            sentimentslist.append(s.sentiments)   # sentiment score in [0, 1]
        except Exception:
            continue   # skip anything SnowNLP cannot score, rather than aborting
    plt.hist(sentimentslist, bins=np.arange(0, 1, 0.01), facecolor='g')
    plt.xlabel('Value of sentiments')
    plt.ylabel('Quantity')
    plt.title('Sentiments probability of douban')
    plt.show()
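For intuition on what s.sentiments returns: it is the probability that a text is positive, so values near 1 are favorable and values near 0 are critical. A quick illustration (the example sentences are my own, not from the scraped data):

# Hedged example: SnowNLP scores a single text between 0 (negative) and 1 (positive).
print(SnowNLP('这部剧太好看了,演员演技在线').sentiments)   # expected: close to 1
print(SnowNLP('剧情拖沓,特效廉价,很失望').sentiments)     # expected: close to 0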


##### Build separate collections of positive and negative comments, to analyze what drives the positive or negative reviews
def get_pos_neg_all(comment_list):
    pos_text=''
    neg_text=''
    comment_text=''
    with open('pos.txt', 'w', encoding='utf-8') as pos, open('neg.txt', 'w', encoding='utf-8') as neg,open('comment.txt', 'w', encoding='utf-8') as comment:        
        for i in comment_list:

            comment.write(i)
            s=i.strip()
            comment_text+=s
            try:
                if SnowNLP(s).sentiments>0.5:
                    pos_text+=s
                    pos.write(s)
                else:
                    neg_text+=s
                    neg.write(s)
            except Exception:
                continue
    return pos_text,neg_text,comment_text

pos_text,neg_text,comment_text=get_pos_neg_all(comment_list)            
# Use wordcloud to generate a word-cloud image and save it as a .jpg
# (text is one long string of comments)
def wc(text, name):
    path = r'C:\Windows\Fonts\STXINGKA.TTF'
    alien_mask = np.array(PIL.Image.open(r'C:\Users\ChengYiMing\Desktop\kuang.png'))

    wc = WordCloud(font_path=path, background_color='white', margin=5, mask=alien_mask,
                   width=1800, height=800, max_words=2000, max_font_size=60, random_state=42)

    a = []
    words = list(jieba.cut(text))   # segment the Chinese text into words
    for word in words:
        if len(word) > 1:           # keep only tokens longer than one character
            a.append(word)
    txt = r' '.join(a)
    wc = wc.generate(txt)
    wc.to_file(name + '.jpg')


wc(pos_text,"pos") 
wc(pos_text,"pos") 


References: 
python爬取豆瓣《狂暴巨兽》评分影评,matplotlib和wordcloud制作评分图和词云图 (Python: scraping Douban ratings and reviews of Rampage, with rating charts and word clouds built in matplotlib and wordcloud)

Below is the process of scraping comment data from the official Guardian Weibo account. The final analysis step is not written out here, because it proceeds exactly like the Douban comment analysis above; refer to that section.

Scraping Guardian (《镇魂》) Weibo data
This part aims to scrape Guardian-related Weibo comments with a crawler.
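No code survived for this part, so here is a minimal sketch of the usual approach: page through the mobile-site comment API as JSON. The endpoint shape, the WEIBO_ID placeholder, and the pagination scheme are all assumptions on my part (Weibo changes this API often, and a logged-in cookie is typically required); verify against the current mobile site before relying on it.

# Hedged sketch: paging through Weibo comments via the m.weibo.cn JSON API.
import time

WEIBO_ID = '0000000000000000'   # hypothetical placeholder: id of one official-account post

def get_weibo_comments(max_pages=10):
    rows = []
    for page in range(1, max_pages + 1):
        url = 'https://m.weibo.cn/api/comments/show?id={}&page={}'.format(WEIBO_ID, page)
        resp = requests.get(url, headers={'User-Agent': random.choice(agents)}, timeout=5)
        data = resp.json().get('data', {})
        for c in data.get('data', []):
            rows.append([c.get('text', ''), c.get('created_at', '')])
        time.sleep(1)   # be polite; Weibo rate-limits fast clients
    return rows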

 
