Python 3 scraping fails: missing web page values when scraping data with BeautifulSoup on Python 3.6

I am using the script below to scrape the "STOCK QUOTE" data from http://fortune.com/fortune500/xcel-energy/, but it returns blank values.

I have also tried the Selenium driver, with the same issue. Please help with this.

import requests
from bs4 import BeautifulSoup as bs
import pandas as pd

r = requests.get('http://fortune.com/fortune500/xcel-energy/')
soup = bs(r.content, 'lxml')  # also tried 'html.parser'

data = pd.DataFrame(columns=['C1', 'C2', 'C3', 'C4'], dtype='object', index=range(0, 11))

for table in soup.find_all('div', {'class': 'stock-quote row'}):
    row_marker = 0
    for row in table.find_all('li'):
        column_marker = 0
        columns = row.find_all('span')
        for column in columns:
            data.iat[row_marker, column_marker] = column.get_text()
            column_marker += 1
        row_marker += 1

print(data)

Output I am getting:

                        C1    C2   C3   C4
0        Previous Close:        NaN  NaN
1            Market Cap:          B  NaN
2    Next Earnings Date:        NaN  NaN
3                  High:        NaN  NaN
4                   Low:        NaN  NaN
5          52 Week High:        NaN  NaN
6           52 Week Low:        NaN  NaN
7      52 Week Change %: 0.00   NaN  NaN
8             P/E Ratio:  n/a   NaN  NaN
9                   EPS:        NaN  NaN
10      Dividend Yield:   n/a   NaN  NaN
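The output above already hints at the cause: the labels sit in the static HTML, but the quote values are filled in by JavaScript after the page loads, so requests never receives them. A minimal diagnostic sketch (assuming the same URL and .stock-quote markup as above) that prints exactly what requests got back:

# Diagnostic: dump the span texts requests actually received.
import requests
from bs4 import BeautifulSoup

r = requests.get('http://fortune.com/fortune500/xcel-energy/')
soup = BeautifulSoup(r.content, 'lxml')

for li in soup.select('div.stock-quote li'):
    # Each li holds a label span and (once rendered) a value span;
    # in the static HTML most value spans come back empty.
    print([span.get_text(strip=True) for span in li.find_all('span')])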

Solution

It looks like the data you are looking for is available at this API endpoint:

import requests

response = requests.get("http://fortune.com/api/v2/company/xel/expand/1")
data = response.json()
print(data['ticker'])
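The exact shape of the JSON payload beyond the 'ticker' key isn't shown here, but assuming 'ticker' is a flat mapping of quote fields to values, a small sketch could flatten it into the same kind of table the question builds by hand:

import requests
import pandas as pd

response = requests.get("http://fortune.com/api/v2/company/xel/expand/1")
response.raise_for_status()
payload = response.json()

# 'ticker' comes from the answer above; treating it as a flat
# field -> value mapping is an assumption about the payload shape.
ticker = payload['ticker']
data = pd.DataFrame(sorted(ticker.items()), columns=['field', 'value'])
print(data)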

FYI, when opening the page in a Selenium-automated browser, you just need to make sure you wait for the desired data to appear before parsing the HTML. Working code:

from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import pandas as pd

url = 'http://fortune.com/fortune500/xcel-energy/'

driver = webdriver.Chrome()
wait = WebDriverWait(driver, 10)

driver.get(url)
wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, ".stock-quote")))

page_source = driver.page_source
driver.close()

# HTML parsing part
soup = BeautifulSoup(page_source, 'lxml')  # also tried 'html.parser'

data = pd.DataFrame(columns=['C1', 'C2', 'C3', 'C4'], dtype='object', index=range(0, 11))

for table in soup.find_all('div', {'class': 'stock-quote'}):
    row_marker = 0
    for row in table.find_all('li'):
        column_marker = 0
        columns = row.find_all('span')
        for column in columns:
            data.iat[row_marker, column_marker] = column.get_text()
            column_marker += 1
        row_marker += 1

print(data)
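As a side note, the same Selenium flow can run without opening a visible browser window, which helps on servers without a display. A sketch using Chrome's headless mode (the option name can vary across Chrome/Selenium versions):

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

options = webdriver.ChromeOptions()
options.add_argument('--headless')  # render pages without a visible window

driver = webdriver.Chrome(options=options)
wait = WebDriverWait(driver, 10)

driver.get('http://fortune.com/fortune500/xcel-energy/')
wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, ".stock-quote")))
page_source = driver.page_source
driver.quit()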
