Common Python Crawler Libraries (1): urllib and requests

I. urllib
1. Making requests: urllib.request.urlopen(url, data = post_data[, timeout])
(1) GET requests

from urllib import request
response = request.urlopen("http://www.baidu.com")
print(response.read().decode("utf-8"))

(2) POST requests

from urllib import request
from urllib import parse
data = bytes(parse.urlencode({"yan":18}), encoding = ("utf-8"))
response = request.urlopen(url = "http://httpbin.org/post", data = data)
print(response.read())

2. Responses

from urllib import request
response = request.urlopen("http://www.baidu.com")
print(response.read().decode("utf-8"))# response body
print(response.status)# status code
print(response.getheaders())# all response headers
print(response.getheader("Server"))# a specific response header

3. Building requests with request.Request
(1) Extended request parameters (headers, data, method)

from urllib import request
from urllib import parse
data = bytes(parse.urlencode({"name":"Han Yan"}), encoding = "utf-8")
header = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.106 Safari/537.36",
          "Host": "httpbin.org"}
req = request.Request(url = "http://httpbin.org/post", data = data, headers = header, method = "POST")
response = request.urlopen(req, timeout = 5)
print(response.read().decode("utf-8"))

4. Cookies
(1) Print cookies directly

from http import cookiejar
from urllib import request
cookie = cookiejar.CookieJar()
handler = request.HTTPCookieProcessor(cookie)
opener = request.build_opener(handler)
response = opener.open("http://www.baidu.com")
for item in cookie:
    print(item.name + " : " + item.value)

(2) Save cookies to a text file

from http import cookiejar
from urllib import request
file_name = "cookie.txt"
cookie = cookiejar.MozillaCookieJar(file_name)# or: cookiejar.LWPCookieJar(file_name) for the LWP format
handler = request.HTTPCookieProcessor(cookie)
opener = request.build_opener(handler)
response = opener.open("http://www.baidu.com")
cookie.save(ignore_discard = True, ignore_expires = True)# keep session cookies that would otherwise be discarded
with open(file_name, 'r') as f:
    for line in f:
        print(line.rstrip())
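
To reuse the saved cookies in a later run, load the file back into a cookie jar before building the opener. A minimal sketch, assuming the cookie.txt written in the Mozilla format above:

from http import cookiejar
from urllib import request
cookie = cookiejar.MozillaCookieJar()
cookie.load("cookie.txt", ignore_discard = True, ignore_expires = True)# read the cookies back from disk
handler = request.HTTPCookieProcessor(cookie)
opener = request.build_opener(handler)
response = opener.open("http://www.baidu.com")# this request now carries the loaded cookies
print(response.status)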

5. Exception handling
(1) 404 page not found (the parent class error.URLError exposes the reason attribute)

from urllib import request, error
try:
    response = request.urlopen("http://www.bilibili.com/dwdwd")
    print(response.status)
except error.URLError as e:
    print("ERROR REASON: " + e.reason)

(2) Timeouts

from urllib import request, error
import socket

try:
    response = request.urlopen("http://www.bilibili.com/dwdwd", timeout = 0.001)
    print(response.status)
except error.URLError as e:
    if isinstance(e.reason, socket.timeout):
        print("ERROR REASON:", e.reason, sep = " ")

(3) The subclass error.HTTPError has three attributes: code, reason, and headers

from urllib import request, error
import socket

try:
    response = request.urlopen("http://www.bilibili.com/dwdwd")
    print(response.status)
except error.HTTPError as e:# catch the subclass before the parent URLError
    print("ERROR REASON:" , e.reason, sep = " ")
    print("ERROR CODE:", e.code, sep = " ")
    print("ERROR HEADER:", e.headers, sep = "\n*\n", end = "*\n")
except error.URLError as e:
    if isinstance(e.reason, socket.timeout):
        print("ERROR REASON:", e.reason, sep = " ")

6. URL parsing
(1) urlparse

from urllib.parse import urlparse

url = "https://www.bilibili.com/video/av19057145?p=8#comment"
res = urlparse(url, scheme = "http", allow_fragments = True)
# if the url already contains a scheme, the scheme argument is ignored
# allow_fragments = False merges the fragment into the preceding component instead of parsing it out
print(type(res), res, sep = "\n")
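
The result is a ParseResult named tuple; for the URL above the expected output is:

# <class 'urllib.parse.ParseResult'>
# ParseResult(scheme='https', netloc='www.bilibili.com', path='/video/av19057145', params='', query='p=8', fragment='comment')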


(2) urljoin(url1, url2): components present in url2 override those of url1, missing ones are filled in from url1, and the combined URL is returned
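
A quick sketch of this behavior (the second URLs are illustrative):

from urllib.parse import urljoin

print(urljoin("https://www.bilibili.com/video/av19057145", "av170001"))
# https://www.bilibili.com/video/av170001
print(urljoin("https://www.bilibili.com/video/av19057145", "https://httpbin.org/get"))
# https://httpbin.org/get (a complete second URL replaces the base entirely)
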
(3) urlencode(dict) converts a dictionary into a GET-style query string

from urllib import parse
base_url = "https://www.bilibili.com/video/av19057145"
para = {"p":8}
url = base_url + "?" + parse.urlencode(para)
print(url)

II. The requests library
1. GET requests
(1) get(): query parameters live in the URL and can be supplied as a dictionary via params

import requests

argus = {"name":"Han Yan",
         "age":25}
base_url = "http://httpbin.org"
response = requests.get(base_url + "/get", params = argus)# the dict is appended to the URL as a query string
print(response.text)

(2) A GET response can be parsed with the json() method, which gives the same result as calling json.loads() on response.text

import requests, json

response = requests.get("http://httpbin.org/get")
print(type(response.json()), type(json.loads(response.text)))

(3) Saving images
Note: the .text attribute is a str, while .content is bytes (a raw binary stream); open the file with open(file_name, "wb") to write it

import requests

response = requests.get("http://www.zimuxia.cn/wp-content/uploads/2020/03/LadyWP.jpg")
with open("img.jpg", "wb") as img:
    img.write(response.content)
    img.close()

(4) Adding request headers
Some sites respond with status code 400/500 when accessed without request headers

import requests

header = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.106 Safari/537.36"}
response = requests.get("https://www.zhihu.com/", headers = header)
print(response.status_code)

Note: some sites still return 403 even after adding a User-Agent header; consider adding a Referer as well

referer = "http://image.baidu.com/search/index?tn=baiduimage&ipn=r&ct=201326592&cl=2&lm=&st=-1&fm=result&fr=&sf=1&fmq=1586774219869_R&pv=&ic=&nc=1&z=&hd=&latest=&copyright=&se=1&showtab=0&fb=0&width=&height=&face=0&istype=2&ie=utf-8&sid=&word=%E7%8C%AB"
header = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.106 Safari/537.36", "Referer":referer}

print(requests.get("http://img1.imgtn.bdimg.com/it/u=3750061867,603167218&fm=26&gp=0.jpg", headers = header).status_code)

# 200

2. POST requests
(1) Passing form data via the data parameter

import requests

data = {"name":"Han Yan",
        "age":25}
response = requests.post("http://httpbin.org/post", data = data)
print(response.text)
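
Note: post() also accepts a json parameter, which serializes the dict into a JSON request body and sets the Content-Type header automatically:

import requests

data = {"name":"Han Yan",
        "age":25}
response = requests.post("http://httpbin.org/post", json = data)# JSON body instead of form data
print(response.json()["json"])# httpbin echoes the parsed JSON back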

3. Responses
(1) Response attributes

import requests

header = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.106 Safari/537.36"}
response = requests.get("http://www.zhihu.com", headers = header)
print(type(response.status_code))
print(type(response.headers))
print(type(response.url))
print(type(response.cookies))
print(type(response.history))

(2) Checking status codes: requests.codes maps each numeric status code to one or more named aliases (the mapping below is excerpted from the requests source):

requests.codes = {
    # Informational.
    100: ('continue',),
    101: ('switching_protocols',),
    102: ('processing',),
    103: ('checkpoint',),
    122: ('uri_too_long', 'request_uri_too_long'),
    200: ('ok', 'okay', 'all_ok', 'all_okay', 'all_good', '\\o/', '✓'),
    201: ('created',),
    202: ('accepted',),
    203: ('non_authoritative_info', 'non_authoritative_information'),
    204: ('no_content',),
    205: ('reset_content', 'reset'),
    206: ('partial_content', 'partial'),
    207: ('multi_status', 'multiple_status', 'multi_stati', 'multiple_stati'),
    208: ('already_reported',),
    226: ('im_used',),

    # Redirection.
    300: ('multiple_choices',),
    301: ('moved_permanently', 'moved', '\\o-'),
    302: ('found',),
    303: ('see_other', 'other'),
    304: ('not_modified',),
    305: ('use_proxy',),
    306: ('switch_proxy',),
    307: ('temporary_redirect', 'temporary_moved', 'temporary'),
    308: ('permanent_redirect',
          'resume_incomplete', 'resume',),  # These 2 to be removed in 3.0

    # Client Error.
    400: ('bad_request', 'bad'),
    401: ('unauthorized',),
    402: ('payment_required', 'payment'),
    403: ('forbidden',),
    404: ('not_found', '-o-'),
    405: ('method_not_allowed', 'not_allowed'),
    406: ('not_acceptable',),
    407: ('proxy_authentication_required', 'proxy_auth', 'proxy_authentication'),
    408: ('request_timeout', 'timeout'),
    409: ('conflict',),
    410: ('gone',),
    411: ('length_required',),
    412: ('precondition_failed', 'precondition'),
    413: ('request_entity_too_large',),
    414: ('request_uri_too_large',),
    415: ('unsupported_media_type', 'unsupported_media', 'media_type'),
    416: ('requested_range_not_satisfiable', 'requested_range', 'range_not_satisfiable'),
    417: ('expectation_failed',),
    418: ('im_a_teapot', 'teapot', 'i_am_a_teapot'),
    421: ('misdirected_request',),
    422: ('unprocessable_entity', 'unprocessable'),
    423: ('locked',),
    424: ('failed_dependency', 'dependency'),
    425: ('unordered_collection', 'unordered'),
    426: ('upgrade_required', 'upgrade'),
    428: ('precondition_required', 'precondition'),
    429: ('too_many_requests', 'too_many'),
    431: ('header_fields_too_large', 'fields_too_large'),
    444: ('no_response', 'none'),
    449: ('retry_with', 'retry'),
    450: ('blocked_by_windows_parental_controls', 'parental_controls'),
    451: ('unavailable_for_legal_reasons', 'legal_reasons'),
    499: ('client_closed_request',),

    # Server Error.
    500: ('internal_server_error', 'server_error', '/o\\', '✗'),
    501: ('not_implemented',),
    502: ('bad_gateway',),
    503: ('service_unavailable', 'unavailable'),
    504: ('gateway_timeout',),
    505: ('http_version_not_supported', 'http_version'),
    506: ('variant_also_negotiates',),
    507: ('insufficient_storage',),
    509: ('bandwidth_limit_exceeded', 'bandwidth'),
    510: ('not_extended',),
    511: ('network_authentication_required', 'network_auth', 'network_authentication')}
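
Each alias is exposed as an attribute of requests.codes, so status checks read naturally:

import requests

response = requests.get("http://httpbin.org/get")
if response.status_code == requests.codes.ok:# equivalent to == 200
    print("Request OK")
elif response.status_code == requests.codes.not_found:# equivalent to == 404
    print("Not found")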

4. File uploads via the files parameter of post()

import requests

file = {"file":open("img.jpg", "rb")}
response = requests.post(url = "http://httpbin.org/post", files = file)
print(response.text)

5. Session persistence
(1) If you set a cookie and then make another request, the cookie is lost, because the two requests are independent:

import requests

response = requests.get("http://httpbin.org/cookies/set/number/996649")
response = requests.get("http://httpbin.org/cookies")
print(response.text)

(2) Make the requests through requests.session() to maintain the session:

import requests

session = requests.session()
session.get("http://httpbin.org/cookies/set/number/996649")
response = session.get("http://httpbin.org/cookies")
print(response.text)

6. Certificate verification
(1) Disabling certificate verification

import requests

requests.packages.urllib3.disable_warnings()# suppress the InsecureRequestWarning raised by verify = False
response = requests.get(url = "", verify = False)# fill in the URL of the site with the untrusted certificate
print(response.status_code)

(2) Providing a client certificate via the cert parameter
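
A minimal sketch, with placeholder paths for the client certificate and key (cert also accepts a single combined .pem file path):

import requests

# the certificate and key paths below are placeholders for your own files
response = requests.get("https://example.com",
                        cert = ("/path/to/client.crt", "/path/to/client.key"))
print(response.status_code)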

7. Proxy settings

import requests

proxy = {}# e.g. {"http": "http://127.0.0.1:8080", "https": "http://127.0.0.1:8080"}
response = requests.get(url = "https://www.baidu.com", proxies = proxy)
print(response.status_code)

8. Timeout settings

import requests
from requests.exceptions import ConnectTimeout

try:
    response = requests.get("http://httpbin.org/get", timeout = 0.1)
    print(response.status_code)
except ConnectTimeout:
    print("timeout!")

9. Setting authentication for sites that require login

import requests

auth = ("1033", "yanhan")# auth takes a (username, password) tuple, not a dict
response = requests.get(url = "", auth = auth)# fill in the URL of the protected page
print(response.text)

10. Exceptions in the requests package (from the official documentation)

Exceptions

exception requests.RequestException(*args, **kwargs)
There was an ambiguous exception that occurred while handling your request.

exception requests.ConnectionError(*args, **kwargs)
A Connection error occurred.

exception requests.HTTPError(*args, **kwargs)
An HTTP error occurred.

exception requests.URLRequired(*args, **kwargs)
A valid URL is required to make a request.

exception requests.TooManyRedirects(*args, **kwargs)
Too many redirects.

exception requests.ConnectTimeout(*args, **kwargs)
The request timed out while trying to connect to the remote server.
Requests that produced this error are safe to retry.

exception requests.ReadTimeout(*args, **kwargs)
The server did not send any data in the allotted amount of time.

exception requests.Timeout(*args, **kwargs)
The request timed out.
Catching this error will catch both ConnectTimeout and ReadTimeout errors.

A combined example:
import requests
from requests import exceptions

try:
    response = requests.get("http://httpbin.org/get", timeout = 0.5)
    if response.status_code == requests.codes.ok:
        print("Request successfully")
except exceptions.Timeout:
    print("Timeout!")
except exceptions.ConnectionError:
    print("Connectionerror!")
