Python: scraping an AJAX website with BeautifulSoup

I am trying to scrape an e-commerce site that uses AJAX calls to load its next pages.

I can scrape the data on page 1, but page 2 is loaded automatically through an AJAX call when I scroll to the bottom of page 1.

My code:

from bs4 import BeautifulSoup as soup
from urllib.request import urlopen as ureq

my_url = 'http://www.shopclues.com/mobiles-smartphones.html'
page = ureq(my_url).read()
page_soup = soup(page, "html.parser")

containers = page_soup.findAll("div", {"class": "column col3"})
for container in containers:
    name = container.h3.text
    price = container.find("span", {'class': 'p_price'}).text
    print("Name : " + name.replace(",", " "))
    print("Price : " + price)

for i in range(2, 7):
    my_url = "http://www.shopclues.com/ajaxCall/moreProducts?catId=1431&filters=&pageType=c&brandName=&start=" + str(36 * (i - 1)) + "&columns=4&fl_cal=1&page=" + str(i)
    page = ureq(my_url).read()
    print(page)
    page_soup = soup(page, "html.parser")
    containers = page_soup.findAll("div", {"class": "column col3"})
    for container in containers:
        name = container.h3.text
        price = container.find("span", {'class': 'p_price'}).text
        print("Name : " + name.replace(",", " "))
        print("Price : " + price)

I printed the AJAX page read by ureq to check whether I can open it at all, and the output of

print(page)

is just b' ' for every AJAX page, i.e. an effectively empty response.

Please suggest a way to scrape the remaining data.
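An empty b' ' body from an endpoint like /ajaxCall/moreProducts often means the server only answers requests that look like they came from the page's own JavaScript. A quick way to test that theory is to resend the same URL with AJAX-style request headers; a minimal sketch using the third-party requests library, where which headers (if any) this particular server checks is an assumption:

import requests  # third-party: pip install requests

ajax_url = ("http://www.shopclues.com/ajaxCall/moreProducts"
            "?catId=1431&filters=&pageType=c&brandName="
            "&start=36&columns=4&fl_cal=1&page=2")
headers = {
    # mimic the browser's XHR; whether the server checks these is an assumption
    "X-Requested-With": "XMLHttpRequest",
    "Referer": "http://www.shopclues.com/mobiles-smartphones.html",
    "User-Agent": "Mozilla/5.0",
}
resp = requests.get(ajax_url, headers=headers)
# a non-empty body here would suggest the missing headers were the problem
print(resp.status_code, len(resp.content))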

Solution:

from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from bs4 import BeautifulSoup as soup
import random
import time

chrome_options = webdriver.ChromeOptions()
prefs = {"profile.default_content_setting_values.notifications": 2}
chrome_options.add_experimental_option("prefs", prefs)

# A randomizer for the delay
seconds = 5 + (random.random() * 5)

# create a new Chrome session
driver = webdriver.Chrome(chrome_options=chrome_options)
driver.implicitly_wait(30)
# driver.maximize_window()

# navigate to the application home page
driver.get("http://www.shopclues.com/mobiles-smartphones.html")
time.sleep(seconds)
time.sleep(seconds)

# Add more to range for more phones
for i in range(1):
    element = driver.find_element_by_id("moreProduct")
    driver.execute_script("arguments[0].click();", element)
    time.sleep(seconds)
    time.sleep(seconds)

html = driver.page_source
page_soup = soup(html, "html.parser")
containers = page_soup.findAll("div", {"class": "column col3"})
for container in containers:
    # Add error handling
    try:
        name = container.h3.text
        price = container.find("span", {'class': 'p_price'}).text
        print("Name : " + name.replace(",", " "))
        print("Price : " + price)
    except AttributeError:
        continue

driver.quit()

I used Selenium to load the website and click the button that loads more results, then took the resulting HTML and fed it into your parsing code.
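The paired time.sleep(seconds) calls are a blunt way to wait for the AJAX content; the WebDriverWait and expected_conditions imports at the top of the solution support explicit waits instead. A sketch of the same click-and-wait step using an explicit wait, where the chosen condition (waiting until the "load more" button is clickable again) is an assumption about how the page signals that loading has finished:

# wait up to 30s for the "load more" button, click it via JS, then wait again
wait = WebDriverWait(driver, 30)
for i in range(1):
    element = wait.until(EC.element_to_be_clickable((By.ID, "moreProduct")))
    driver.execute_script("arguments[0].click();", element)
    # assume the button becoming clickable again means the AJAX load has settled
    wait.until(EC.element_to_be_clickable((By.ID, "moreProduct")))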
