Python – web scraping with selenium and bs4

I am trying to build a DataFrame by scraping this web page.

First I have selenium click through to the page I chose, then I use XPath and tag elements to build the header and the body, but I don't get the format I want: the result is full of NaN values and duplicated entries.

Here is my script:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException
from bs4 import BeautifulSoup
import pandas as pd

def get_browser(url_selector):
    """Get the browser (a "driver")."""
    # option = webdriver.ChromeOptions()
    # option.add_argument('--incognito')
    path_to_chromedriver = r"C:/Users/xxxxx/Downloads/chromedriver_win32/chromedriver.exe"
    browser = webdriver.Chrome(executable_path=path_to_chromedriver)
    browser.get(url_selector)
    # Try with Italy
    browser.find_element_by_xpath(italie_buton_xpath).click()
    # Close the browser if loading takes more than 45 seconds,
    # using the website logo as the flag that the page has loaded
    timeout = 45
    try:
        WebDriverWait(browser, timeout).until(
            EC.visibility_of_element_located((By.XPATH, '//*[@id="s5_logo_wrap"]/img'))
        )
    except TimeoutException:
        print("Timed out waiting for page to load")
        browser.quit()
    return browser

# url_selector and italie_buton_xpath are defined elsewhere in my code
browser = get_browser(url_selector)

headers = browser.find_element_by_xpath(
    '//*[@id="s5_component_wrap_inner"]/main/div[2]/div[2]/div[3]/table/thead'
).find_elements_by_tag_name('tr')
headings = [i.text.strip() for i in headers]

bs_obj = BeautifulSoup(browser.page_source, 'html.parser')
rows = bs_obj.find_all('table')[0].find('tbody').find_all('tr')[1:]

table = []
for row in rows:
    line = next(td.get_text() for td in row.find_all("td"))
    print(line)
    table.append(line)

browser.quit()

pd.DataFrame(line, columns=headings)

It returns a one-column DataFrame like this:

  School Holiday Region Start date End date Week
0 Easter holidays 2018
1 REMARK: Small differences by region are possi...
2 Summer holiday 2018
3 REMARK: First region through to last region.
4 Christmas holiday 2018

There are three problems: I don't want the REMARK rows, the school holiday names and start/end dates are treated as separate words, and the whole DataFrame ends up unsplit in a single column.

If I split my headings and rows, the two shapes don't match: because of the REMARK rows my list has 9 elements instead of 3, and because of the split words I get 8 elements in the headings instead of 5.
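Part of this comes from the scraping loop itself: next(td.get_text() for td in row.find_all("td")) yields only the first <td> of each row, and the final pd.DataFrame is built from line (the last row's string) rather than from table. A minimal sketch of a corrected extraction, assuming bs_obj and rows as defined above; reading headings per <th> and skipping short REMARK rows are assumptions about the page's markup:

    # read each <th> individually so multi-word headings like
    # "School Holiday" stay intact (5 headings instead of 8 words)
    thead = bs_obj.find_all('table')[0].find('thead')
    headings = [th.get_text(strip=True) for th in thead.find_all('th')]

    table = []
    for row in rows:
        # collect every cell in the row, not just the first one
        cells = [td.get_text(strip=True) for td in row.find_all('td')]
        # REMARK rows span a single merged cell, so their length
        # does not match the headings; skip them
        if len(cells) == len(headings):
            table.append(cells)

    df = pd.DataFrame(table, columns=headings)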

Best answer: You can find all the links on the main page, then iterate over each URL with selenium:

from selenium import webdriver
from bs4 import BeautifulSoup as soup
import re, contextlib, pandas

d = webdriver.Chrome('/Users/jamespetullo/Downloads/chromedriver')
d.get('https://www.schoolholidayseurope.eu/choose-a-country')

# collect (name, href) pairs for every country link, skipping the first <li>
_, *countries = [(lambda x: [x.text, x['href']])(i.find('a'))
                 for i in soup(d.page_source, 'html.parser').find_all('li', {'class': re.compile(r'item\d+$')})]

@contextlib.contextmanager
def get_table(source: str):
    # for each <tr>: the list of <th> texts and the list of <td> texts
    yield [[[i.text for i in c.find_all('th')], [i.text for i in c.find_all('td')]]
           for c in soup(source, 'html.parser').find('table', {'class': 'zebra'}).find_all('tr')]

results = {}
for country, url in countries:
    d.get(f'https://www.schoolholidayseurope.eu{url}')
    with get_table(d.page_source) as source:
        results[country] = source

def clean_results(_data):
    # the first row holds the headers; zip them with each body row's cells
    [headers, _], *data = _data
    return [dict(zip(headers, i)) for _, i in data]

final_countries = {a: clean_results(b) for a, b in results.items()}
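Note that get_table is a context manager mostly as a matter of style; a plain function returning the same list would behave identically. Since the original goal was a DataFrame, here is a minimal follow-up sketch, assuming final_countries as built above; the 'Country' column name is an illustrative choice, not part of the scraped data:

    # flatten {country: [record, ...]} into one row per holiday entry,
    # tagging each record with its country name
    rows = [{'Country': country, **record}
            for country, records in final_countries.items()
            for record in records]

    df = pandas.DataFrame(rows)
    print(df.head())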
