I. Introduction to Selenium
Selenium is a suite of Web-based automation tools that provides a set of testing functions for automated Web testing. The functions are very flexible: they can locate UI elements, switch between windows, and compare actual results against expectations (a short window-switching sketch follows the feature list below). Selenium has the following characteristics:
1. Multi-browser support
Tests can be run against many browsers, such as IE, Firefox, Safari, Chrome, and Android mobile browsers.
2. Multiple language bindings
Such as Java, C#, Python, Ruby, and PHP.
3. Multiple operating systems
Such as Windows, Linux, iOS, and Android.
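Since window switching is mentioned above but never demonstrated later, here is a minimal sketch of how it works: driver.window_handles lists every open tab, and driver.switch_to.window moves the session to one of them. The second URL is just a placeholder for any link that opens a new tab.
from selenium import webdriver

driver = webdriver.Chrome(executable_path=r'chromedriver.exe')
driver.get("https://www.baidu.com/")
# open a second tab via JavaScript (any placeholder URL will do)
driver.execute_script("window.open('https://www.jd.com/');")
# window_handles lists all open tabs; switch to the newest one
driver.switch_to.window(driver.window_handles[-1])
print(driver.title)  # now reports the title of the second tab
driver.quit()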
II. Setting Up the Environment
Install the dependency:
pip install selenium
Then download the driver from the official site; pick the driver that matches the browser you use (for Chrome this is ChromeDriver, and its version must match your installed Chrome).
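As an alternative to downloading the driver by hand, the third-party webdriver-manager package (installed with pip install webdriver-manager) can fetch a matching ChromeDriver automatically; a minimal sketch:
from selenium import webdriver
from webdriver_manager.chrome import ChromeDriverManager

# downloads a ChromeDriver matching the installed Chrome and returns its path
driver = webdriver.Chrome(executable_path=ChromeDriverManager().install())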
Open a command line, run jupyter notebook, and enter the following code to check whether everything is configured correctly:
from selenium import webdriver
driver = webdriver.Chrome(executable_path=r'chromedriver.exe')
driver.get("https://www.baidu.com/")
If it works, a Chrome window will pop up and open Baidu.
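Note that this article uses the Selenium 3 style API throughout. In Selenium 4 the executable_path argument and the find_element_by_* helpers are deprecated (the helpers were removed entirely in 4.3), so on a recent Selenium the snippet above translates to:
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.common.by import By

driver = webdriver.Chrome(service=Service(r'chromedriver.exe'))
driver.get("https://www.baidu.com/")
p_input = driver.find_element(By.ID, 'kw')  # Selenium 4 locator style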
III. Auto-Filling the Baidu Search Box and Running a Search Automatically
Open the developer tools to find the id of the search box: right-click the search box, choose Inspect, and the id (kw) is visible in the highlighted markup.
Find the element by its ID:
from selenium import webdriver
# open a Chrome browser; executable_path is the path to the ChromeDriver binary
driver = webdriver.Chrome(executable_path=r'chromedriver.exe')
driver.get("https://www.baidu.com/")
p_input = driver.find_element_by_id('kw')
print(p_input)           # the WebElement itself
print(p_input.location)  # coordinates of the element on the page
print(p_input.size)      # width and height of the element
p_input.send_keys('aaa') # type the keyword (send_keys returns None, so printing it is pointless)
print(p_input.get_attribute('value'))  # the typed text; .text is empty for <input> elements
Locate the id of the “百度一下” button (su) and use it to complete the search automatically:
p_btn = driver.find_element_by_id('su')
p_btn.click()
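Because the results page loads asynchronously, sleeping for a fixed time is fragile; an explicit wait is a more reliable way to confirm the search has finished. A minimal sketch: the id content_left is the container Baidu used for its result list at the time of writing, so treat that locator as an assumption.
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# wait up to 10 seconds for the result container to appear (assumed id: content_left)
WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.ID, 'content_left'))
)
print(driver.title)  # the title now contains the search keyword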
IV. Scraping Data from a Dynamic Page
The site to scrape is http://quotes.toscrape.com/js/ (the quotes there are rendered by JavaScript, so a plain HTTP request would not see them; that is what makes the page “dynamic”).
import time
import csv
from selenium import webdriver

driver = webdriver.Chrome(executable_path=r'chromedriver.exe')
# the site holding the quotes
driver.get("http://quotes.toscrape.com/js/")

# all rows scraped so far
subjects = []
# CSV header
quote_head = ['名言', '作者', '标签']
# path and name of the CSV file
quote_path = '名人名言.csv'

# write the header plus all collected rows to the CSV file
def write_csv(csv_head, csv_content, csv_path):
    with open(csv_path, 'w', newline='', encoding='utf-8') as file:
        fileWriter = csv.writer(file)
        fileWriter.writerow(csv_head)
        fileWriter.writerows(csv_content)

n = 10  # number of pages to scrape
for i in range(0, n):
    res_list = driver.find_elements_by_class_name("quote")
    # pull the pieces we need out of each quote block
    for tmp in res_list:
        saying = tmp.find_element_by_class_name("text").text
        author = tmp.find_element_by_class_name("author").text
        tags = tmp.find_element_by_class_name("tags").text
        subject = [saying, author, tags]
        print(subject)
        subjects.append(subject)
    write_csv(quote_head, subjects, quote_path)
    print('Scraped page ' + str(i + 1))
    if i == n - 1:
        break
    # the last [aria-hidden] element on the page is the arrow inside the "Next" link
    driver.find_elements_by_css_selector('[aria-hidden]')[-1].click()
    time.sleep(2)
driver.close()
Result: the quotes end up in 名人名言.csv, one row per quote. (If Excel shows the Chinese header garbled, write the file with encoding='utf-8-sig' instead of 'utf-8'.)
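Incidentally, the same extraction can be done by handing driver.page_source to BeautifulSoup instead of calling find_element_* once per field, which keeps all parsing in a single pass. A sketch of the per-page step under that approach, using the CSS classes as they appear on quotes.toscrape.com:
from bs4 import BeautifulSoup as bs

soup = bs(driver.page_source, 'html.parser')
for quote in soup.find_all('div', class_='quote'):
    saying = quote.find('span', class_='text').get_text()
    author = quote.find('small', class_='author').get_text()
    tags = quote.find('div', class_='tags').get_text(' ', strip=True)
    subjects.append([saying, author, tags])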
V. Scraping Information About Books of Interest from JD.com
1. On the JD home page, find the id of the search input (key) and the search button, in the same way as for Baidu.
2. To see the book information to scrape, right-click a book entry on the results page and inspect it; the relevant markup appears in the developer tools.
3. The code:
import time
import csv
from selenium import webdriver
from lxml import etree

driver = webdriver.Chrome(executable_path=r'chromedriver.exe')
# JD home page
driver.get("https://www.jd.com/")
p_input = driver.find_element_by_id('key')  # find the search input
p_input.send_keys('三体')                    # type the keyword to search for
time.sleep(1)
driver.find_element_by_class_name("button").click()  # click the search button
time.sleep(1)

all_book_info = []
num = 200  # number of books to scrape
head = ['书名', '价格', '书店名']
# path and name of the CSV file
path = '网络爬虫书本.csv'

# write the header plus all collected rows to the CSV file
def write_csv(head, all_book_info, path):
    with open(path, 'w', newline='', encoding='utf-8') as file:
        fileWriter = csv.writer(file)
        fileWriter.writerow(head)
        fileWriter.writerows(all_book_info)

# scrape one page; returns how many books are still needed
def get_onePage_info(web, num):
    # scroll to the bottom so lazily loaded items are rendered
    web.execute_script('window.scrollTo(0, document.body.scrollHeight);')
    time.sleep(2)
    page_text = web.page_source
    # parse the rendered HTML
    tree = etree.HTML(page_text)
    li_list = tree.xpath('//li[contains(@class,"gl-item")]')
    for li in li_list:
        num = num - 1
        book_infos = []
        book_name = ''.join(li.xpath('.//div[@class="p-name"]/a/em/text()'))  # title
        book_infos.append(book_name)
        price = '¥' + li.xpath('.//div[@class="p-price"]/strong/i/text()')[0]  # price
        book_infos.append(price)
        store_span = li.xpath('.//div[@class="p-shopnum"]/a/text()')  # bookstore name
        store = store_span[0] if len(store_span) > 0 else '无'
        book_infos.append(store)
        all_book_info.append(book_infos)
        if num == 0:
            break
    return num

while num != 0:
    num = get_onePage_info(driver, num)
    # only page forward if more books are still needed
    if num != 0:
        driver.find_element_by_class_name('pn-next').click()  # click "next page"
        time.sleep(2)
write_csv(head, all_book_info, path)
driver.close()
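Finally, if you do not need to watch the browser work, Chrome can run headless so the whole scrape happens without a visible window; a minimal sketch (--headless is the standard flag, and the explicit window size just gives the page a normal viewport):
from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument('--headless')               # run Chrome without opening a window
options.add_argument('--window-size=1920,1080')  # give the page a normal desktop viewport
driver = webdriver.Chrome(executable_path=r'chromedriver.exe', options=options)
driver.get("https://www.jd.com/")
print(driver.title)
driver.quit()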