requests.exceptions.SSLError: HTTPSConnectionPool(host='item.jd.com', port=443)

This post documents an "SSL module is not available" error encountered when making HTTPS requests with Python's requests library, and how it was resolved by downgrading Python to version 3.7.1.

The requests.get method

import requests
r = requests.get("https://item.jd.com/100004788063.html")

This raises the following error:

Traceback (most recent call last):
  File "D:\Anaconda\envs\scrapy\lib\site-packages\urllib3\connectionpool.py", line 654, in urlopen
    conn = self._get_conn(timeout=pool_timeout)
  File "D:\Anaconda\envs\scrapy\lib\site-packages\urllib3\connectionpool.py", line 274, in _get_conn
    return conn or self._new_conn()
  File "D:\Anaconda\envs\scrapy\lib\site-packages\urllib3\connectionpool.py", line 964, in _new_conn
    "Can't connect to HTTPS URL because the SSL " "module is not available."
urllib3.exceptions.SSLError: Can't connect to HTTPS URL because the SSL module is not available.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\Anaconda\envs\scrapy\lib\site-packages\requests\adapters.py", line 449, in send
    timeout=timeout
  File "D:\Anaconda\envs\scrapy\lib\site-packages\urllib3\connectionpool.py", line 720, in urlopen
    method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
  File "D:\Anaconda\envs\scrapy\lib\site-packages\urllib3\util\retry.py", line 436, in increment
    raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='item.jd.com', port=443): Max retries exceeded with url: /100004788063.html (Caused by SSLError("Can't connect to HTTPS URL because the SSL module is not available."))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\Program Files\JetBrains\PyCharm 2018.1.4\helpers\pydev\pydev_run_in_console.py", line 52, in run_file
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "D:\Program Files\JetBrains\PyCharm 2018.1.4\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "G:/3-project/SCRAPY/main.py", line 5, in <module>
    r = requests.get("https://item.jd.com/100004788063.html")
  File "D:\Anaconda\envs\scrapy\lib\site-packages\requests\api.py", line 75, in get
    return request('get', url, params=params, **kwargs)
  File "D:\Anaconda\envs\scrapy\lib\site-packages\requests\api.py", line 60, in request
    return session.request(method=method, url=url, **kwargs)
  File "D:\Anaconda\envs\scrapy\lib\site-packages\requests\sessions.py", line 533, in request
    resp = self.send(prep, **send_kwargs)
  File "D:\Anaconda\envs\scrapy\lib\site-packages\requests\sessions.py", line 646, in send
    r = adapter.send(request, **kwargs)
  File "D:\Anaconda\envs\scrapy\lib\site-packages\requests\adapters.py", line 514, in send
    raise SSLError(e, request=request)
requests.exceptions.SSLError: HTTPSConnectionPool(host='item.jd.com', port=443): Max retries exceeded with url: /100004788063.html (Caused by SSLError("Can't connect to HTTPS URL because the SSL module is not available."))
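The message to focus on is the innermost urllib3 one: the ssl module itself is unavailable, meaning CPython failed to load its _ssl extension and the OpenSSL libraries it depends on (a common problem with Anaconda environments on Windows when the environment is not properly activated). Before changing anything, a minimal diagnostic is to import ssl directly in the same environment:

# If this import raises, CPython's _ssl extension could not be loaded,
# which is exactly what urllib3 is reporting above.
try:
    import ssl
    print("ssl module OK:", ssl.OPENSSL_VERSION)
except ImportError as exc:
    print("ssl module unavailable:", exc)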

Solution:

Downgrade Python to version 3.7.1:
conda install python=3.7.1
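After the downgrade, the ssl module should import again and the original request should go through. A quick sanity check, assuming the same environment and that the product page is still reachable:

import ssl
import requests

print(ssl.OPENSSL_VERSION)  # confirms the ssl module now loads

r = requests.get("https://item.jd.com/100004788063.html")
print(r.status_code)  # expect 200 once SSL is working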
