ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to

Recently, while writing Python scripts in PyCharm on my Mac, a crawler targeting a specific website failed on every run with the following error:
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1076)

During handling of the above exception, another exception occurred:
After looking it up, I found the cause is SSL certificate verification, so I added the following two lines:
import ssl
ssl._create_default_https_context = ssl._create_unverified_context
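To see why these two lines work, here is a minimal, self-contained sketch (the commented URL is only a placeholder): the assignment replaces urllib's default HTTPS context factory with one that disables certificate checks. Note this turns verification off globally for the whole process, so it is best reserved for testing.

```python
import ssl
from urllib import request

# Replace the default HTTPS context factory with the unverified one,
# so every HTTPS request made through urllib skips certificate validation.
ssl._create_default_https_context = ssl._create_unverified_context

# The context urllib builds from now on has verification disabled:
ctx = ssl._create_default_https_context()
print(ctx.check_hostname)                # False
print(ctx.verify_mode == ssl.CERT_NONE)  # True

# Example request (any https URL now bypasses certificate checks):
# html = request.urlopen('https://example.com').read()
```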

After running again, the problem was solved. The full code is as follows:
from urllib import request
import csv
import time
import re
import random
import ssl
ssl._create_default_https_context = ssl._create_unverified_context
class MaoyanSpider(object):
    def __init__(self):
        self.url = ('https://www.haodf.com/faculty/DE4roiYGYZw0JOrEpjdCy8jrf/'
                    'menzhen{}.htm')
        self.headers = {
            'User-Agent': 'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50'
        }
        # page counter
        self.page = 1

    def get_page(self, url):
        req = request.Request(
            url,
            headers=self.headers
        )
        res = request.urlopen(req)
        html = res.read().decode('gbk', 'ignore')
        # print(html)
        # call the parsing function directly
        self.parse_page(html)

    def parse_page(self, html):
        # names = re.findall('<a href=" " target="_blank">(.*?)</a>', html)
        # pattern = re.compile(r'<a href=".*" target="_blank">(.*?)</a>', re.S)

        names = re.findall(r'<a class="name" target="_blank" href=".*" title=".*">(.*?)</a>\n\s*'
                           r'<a\n\s*href=".*"\n\s*title=".*" target=".*"> <im'
                           r'g\n\s*src=".*" width=".*"\n\s*height=".*" ali'
                           r'gn=".*" id=".*" /></a>\n\s*<p>(.*)</p>', html)

        print(names)

        self.write_csv(names)

        # self.write_csv(film_list)

    def write_csv(self, film_list):
        rows = [list(item) for item in film_list]
        print(rows)
        with open('get_doctrt.csv', 'a+') as f:
            # writer = csv.writer(f)
            for i in range(0, len(rows)):
                line = rows[i][0] + '\t' + rows[i][1]
                f.writelines(line + '\n')
            # no explicit f.close() needed: the with block closes the file

    def main(self):
        for offset in range(1, 5):
            url = self.url.format(str(offset))
            self.get_page(url)
            print('Page %d crawled' % self.page)
            self.page += 1

if __name__ == '__main__':
    spider = MaoyanSpider()
    spider.main()
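If you prefer not to patch the default context globally, a per-request alternative (a sketch of my own, not part of the fix above) is to pass an explicit unverified context to urlopen via its context parameter, leaving certificate verification on for the rest of the program:

```python
import ssl
from urllib import request

# Build one unverified context and pass it only where it is needed.
ctx = ssl._create_unverified_context()
print(ctx.verify_mode == ssl.CERT_NONE)  # True

# Scoped usage (URL and headers are placeholders):
# req = request.Request('https://www.haodf.com/', headers={'User-Agent': 'Mozilla/5.0'})
# html = request.urlopen(req, context=ctx).read().decode('gbk', 'ignore')
```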

Author: 周传伦