Python crawler does not print the page
After adding --headless something goes wrong: the script hangs at driver.get(url).
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

chrome_options = Options()
chrome_options.add_argument("--headless")
driver = webdriver.Chrome(chrome_options=chrome_options,
                          executable_path=r"C:\Program Files (x86)\Google\Chrome\Application\chromedriver.exe")
url = "http://iwebshop.spider.com/index.php?controller=site&action=products&id=96"
driver.get(url)
html = driver.page_source
print(html)
The cause was a system proxy script I had set up for circumventing the firewall; after turning it off, everything worked normally.