Web Crawling and Data Analysis (Dyson V8 Vacuum Review data)

Acknowledgement

This work note is for study use only and may not be reposted or used for commercial purposes without permission from the author.


This work note is an entry-level exercise in web crawling and data analysis, targeting the customer feedback on a Dyson product I am interested in.

Result and conclusion

As a potential customer in the 18-34 age group, it is highly likely that I would be happy after purchasing a Dyson V8 vacuum.




Future Work:

TBC...

Step 1: Crawl the data from the Dyson official website

I crawled the customer review data from the Dyson UK official website, then performed data visualisation on the review ratings and text mining on the review titles to find out what customers think of this product.

The data contain the rating, customer nickname, age, gender, review date, title and content.

The review page:

 https://www.dyson.co.uk/sticks/dyson-v8-reviews.html?productCode=232707-01

The first 20 reviews:

https://dyson.ugc.bazaarvoice.com/8787-en_gb/DYSON-V8-TOTAL-CLEAN/reviews.djs?format=embeddedhtml&page=1&scrollToTop=true

The URL pattern is quite obvious:

'https://dyson.ugc.bazaarvoice.com/8787-en_gb/DYSON-V8-TOTAL-CLEAN/reviews.djs?format=embeddedhtml&page=' + No. page + '&scrollToTop=true'

The first method simply crawls and dumps the raw data (22.9 MB).

# -*- coding: UTF-8 -*-
# Example review page:
# https://dyson.ugc.bazaarvoice.com/8787-en_gb/DYSON-V8-TOTAL-CLEAN/reviews.djs?format=embeddedhtml&page=1&scrollToTop=true

import sys
from bs4 import BeautifulSoup
from PyQt4.QtWebKit import *
from PyQt4.QtGui import *
from PyQt4.QtCore import *

# Python 2 workaround so the unicode review text can be written to the output file
reload(sys)
sys.setdefaultencoding('utf-8')

urls = []

class Client(QWebPage):
    """Headless QtWebKit client: loads each URL in turn and keeps the rendered HTML."""

    new_url = pyqtSignal(['QString'], name='new_url')

    def __init__(self, urls):
        self.app = QApplication(sys.argv)
        self.urls = urls
        self.pages = dict()  # url -> rendered HTML
        QWebPage.__init__(self)
        self.new_url.connect(self.load_url)
        self.loadFinished.connect(self.on_page_load)
        if len(self.urls):
            self.new_url.emit(self.urls.pop())
        self.app.exec_()

    def load_url(self, url):
        self.current_url = url
        print "Loading: {0}".format(url)
        self.mainFrame().load(QUrl(url))

    def on_page_load(self):
        # Store the rendered page, then request the next URL (or quit when the queue is empty)
        print "Retrieved: {0}".format(self.current_url)
        self.pages[self.current_url] = unicode(self.mainFrame().toHtml())
        if len(self.urls):
            self.new_url.emit(self.urls.pop())
        else:
            self.app.quit()

# Build the list of 73 paginated review URLs
for i in range(1, 74):
    url = 'https://dyson.ugc.bazaarvoice.com/8787-en_gb/DYSON-V8-TOTAL-CLEAN/reviews.djs?format=embeddedhtml&page=' + str(i) + '&scrollToTop=true'
    urls.append(url)

out_path = r'C:\Users\home\Documents\CV\Dyson\db2\data.txt'

# Render every page and keep only the visible text
text = []
r = Client(urls)
for (url, page) in r.pages.items():
    soup = BeautifulSoup(page, "html.parser")
    text.append("{0}\t{1}".format(url, soup.get_text()))

# Dump the raw text of all pages into a single file
f = open(out_path, 'w+')
f.write(''.join(text))
f.close()
print 'Done.'

The second method uses the Selenium module (still under development) to extract different parts of the dataset.

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import time

# Skip image loading so the page renders faster
chrome_options = webdriver.ChromeOptions()
prefs = {"profile.managed_default_content_settings.images": 2}
chrome_options.add_experimental_option("prefs", prefs)
driver = webdriver.Chrome(r'C:\Users\home\Documents\chromedriver.exe', chrome_options=chrome_options)  # or webdriver.Firefox()

url1 = 'https://www.dyson.co.uk/sticks/dyson-v8-reviews.html?productCode=232707-01'
str1 = 'span.BVRRReviewText'  # CSS selector for the review body text

comment = []
rate = []  # star ratings (not collected yet)
print('looking for review...')

driver.get(url1)

# Click "Load more reviews" a few times so more reviews are rendered on the page
for i in range(1, 5):
    driver.find_element_by_link_text('Load more reviews').click()
    time.sleep(3)

# Wait until the review text elements are present, then collect them
WebDriverWait(driver, 30).until(EC.presence_of_element_located((By.CSS_SELECTOR, str1)))
for j in driver.find_elements_by_css_selector(str1):
    comment.append(j.text.encode('utf8'))

driver.close()

path = r'C:\Users\home\Documents\CV\Dyson\db3\comment4.txt'

# Save the reviews, separated by newlines
f = open(path, 'w+')
f.write('\n'.join(comment))
f.close()
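The rate list above is declared but not yet filled, since this script is still being developed. Below is a minimal sketch of how the star ratings might be collected in the same session (it would sit just before driver.close()); the selector span.BVRRRatingNumber is an assumption based on the Bazaarvoice BVRR class naming and needs to be verified against the rendered page.

# Sketch (assumption): collect star ratings in the same Selenium session.
# This fragment belongs before driver.close() in the script above.
# 'span.BVRRRatingNumber' is an assumed Bazaarvoice class name; check it in the
# browser developer tools before relying on it.
rating_selector = 'span.BVRRRatingNumber'
for k in driver.find_elements_by_css_selector(rating_selector):
    rate.append(k.text.encode('utf8'))
print('collected {0} comments and {1} ratings'.format(len(comment), len(rate)))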

TBC...

Step 2: Clean the data

The review data are generated dynamically by JavaScript in the *.djs format, rather than the usual *.json format I had worked with before. As a result, crawling and cleaning the data to extract the useful fields took more time than I expected.

At first I cleaned the data with Python's re module, which was extremely inefficient. Eventually I found that the Selenium module is designed for crawling dynamic websites, which saved a lot of effort.

There are 1416 reviews in total for the Dyson V8 vacuum, among which 1355 contain gender and age information; these are the data used for the analysis. The data cover each customer's nickname, overall rating, gender, age, review title and review content.
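A minimal sketch of this filtering step, assuming the cleaned reviews have already been parsed into a CSV (the file name reviews_clean.csv and the column names are hypothetical, matching the fields listed above):

import pandas as pd

# Load the cleaned review table (hypothetical file and column names:
# nickname, rating, gender, age, title, content)
reviews = pd.read_csv('reviews_clean.csv')
print('total reviews: {0}'.format(len(reviews)))  # expected: 1416

# Keep only the reviews where both gender and age are present
reviews_ga = reviews.dropna(subset=['gender', 'age'])
print('reviews with gender and age: {0}'.format(len(reviews_ga)))  # expected: 1355

reviews_ga.to_csv('reviews_with_gender_age.csv', index=False)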

TBC...

Step 3: Text Mining and Visualisation

TBC...
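As a starting point for this step, here is a minimal sketch of the rating visualisation and title word counts mentioned in Step 1. It assumes the filtered table from Step 2 (the hypothetical file reviews_with_gender_age.csv with rating and title columns):

import re
from collections import Counter

import pandas as pd
import matplotlib.pyplot as plt

# Assumes the filtered table from Step 2 (hypothetical file and column names)
reviews = pd.read_csv('reviews_with_gender_age.csv')

# Distribution of the overall ratings (1-5 stars)
reviews['rating'].value_counts().sort_index().plot(kind='bar')
plt.xlabel('Overall rating')
plt.ylabel('Number of reviews')
plt.title('Dyson V8 review ratings')
plt.savefig('rating_distribution.png')

# Simple word frequency over the review titles
words = []
for title in reviews['title'].dropna():
    words.extend(re.findall(r'[a-z]+', title.lower()))
stopwords = set(['the', 'a', 'an', 'and', 'of', 'to', 'it', 'is', 'for', 'my', 'i'])
counts = Counter(w for w in words if w not in stopwords)
print(counts.most_common(20))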

