urllib is Python's built-in HTTP request library, a package for fetching URLs (Uniform Resource Locators). It offers a very simple interface in the form of the urlopen function, which can fetch URLs over different protocols. It also provides a somewhat more complex interface for handling common situations such as basic authentication, cookies, and proxies; these are exposed through handler and opener objects.
urllib.request | request module |
urllib.error | exception handling module |
urllib.parse | URL parsing module |
urllib.robotparser | robots.txt parsing module |
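The table above lists urllib.robotparser, which the rest of this article does not cover. As a minimal sketch (the domain and rules below are made up for illustration), RobotFileParser can also parse robots.txt rules supplied directly as lines of text, with no network access:

```python
import urllib.robotparser

# Feed robots.txt rules to the parser as a list of lines
# instead of fetching them with read()
rp = urllib.robotparser.RobotFileParser()
rp.parse([
    'User-agent: *',
    'Disallow: /private/',
])

print(rp.can_fetch('*', 'http://www.example.com/index.html'))  # True
print(rp.can_fetch('*', 'http://www.example.com/private/x'))   # False
```

In a real crawler you would call `rp.set_url('http://example.com/robots.txt')` followed by `rp.read()` to download the rules first.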
1. The urllib.request module
Functions and classes for opening URLs (mainly HTTP). The urllib.request module defines the following functions by default:
urllib.request.urlopen(url, data=None, [timeout, ]*, cafile=None, capath=None, cadefault=False, context=None): opens a URL, which can be either a string or a Request object.
import urllib.request  # send a GET request
response = urllib.request.urlopen('http://www.baidu.com')
print(response.read().decode('utf-8'))  # read() returns bytes; decode as utf-8 for readability

import urllib.request  # send a POST request
import urllib.parse
data = bytes(urllib.parse.urlencode({'word': 'hello'}), encoding='utf-8')
response = urllib.request.urlopen('http://httpbin.org/post', data=data)  # passing data makes it a POST
print(response.read())

response = urllib.request.urlopen('http://www.baidu.com', timeout=1)  # set a timeout
print(response.read())
2. About the response
(1) The response type
response = urllib.request.urlopen('http://www.baidu.com', timeout=1)
print(response.read())
print(type(response))
Output: <class 'http.client.HTTPResponse'>
(2) Status code and response headers
response = urllib.request.urlopen('http://www.baidu.com', timeout=1)
print(response.status)  # get the status code
print(response.getheaders())  # get all response headers
print(response.getheader('Server'))
Output:
200
[('Bdpagetype', '1'), ('Bdqid', '0xe434a394000062b4'), ('Cache-Control', 'private'), ('Content-Type', 'text/html'), ('Cxy_all', 'baidu+1f0de77a7d36ee32237b3264ef8863b5'), ('Date', 'Tue, 03 Jul 2018 02:28:13 GMT'), ('Expires', 'Tue, 03 Jul 2018 02:28:03 GMT'), ('P3p', 'CP=" OTI DSP COR IVA OUR IND COM "'), ('Server', 'BWS/1.1'), ('Set-Cookie', 'BAIDUID=21324649592794C9F595AF1E445EB4A4:FG=1; expires=Thu, 31-Dec-37 23:55:55 GMT; max-age=2147483647; path=/; domain=.baidu.com'), ('Set-Cookie', 'BIDUPSID=21324649592794C9F595AF1E445EB4A4; expires=Thu, 31-Dec-37 23:55:55 GMT; max-age=2147483647; path=/; domain=.baidu.com'), ('Set-Cookie', 'PSTM=1530584893; expires=Thu, 31-Dec-37 23:55:55 GMT; max-age=2147483647; path=/; domain=.baidu.com'), ('Set-Cookie', 'BDSVRTM=0; path=/'), ('Set-Cookie', 'BD_HOME=0; path=/'), ('Set-Cookie', 'H_PS_PSSID=26654_1446_21125_26350_20927; path=/; domain=.baidu.com'), ('Vary', 'Accept-Encoding'), ('X-Ua-Compatible', 'IE=Edge,chrome=1'), ('Connection', 'close'), ('Transfer-Encoding', 'chunked')]
BWS/1.1
(3) Adding headers to a request (urllib.request.Request)
import urllib.request  # build the URL into a Request object, then pass it to urlopen
request = urllib.request.Request('http://www.python.org')
response = urllib.request.urlopen(request)
print(response.read().decode('utf-8'))

from urllib import request, parse  # a Request with the full set of parameters
url = 'http://httpbin.org/post'
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64)',
           'Host': 'httpbin.org'}
params = {'name': 'Germey'}
data = bytes(parse.urlencode(params), encoding='utf-8')
req = request.Request(url=url, data=data, headers=headers, method='POST')
response = request.urlopen(req)
print(response.read().decode('utf-8'))
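Instead of passing a headers dict to the constructor, headers can also be attached one at a time with Request.add_header. A small sketch, reusing the httpbin URL from above (no request is actually sent, so it runs offline):

```python
import urllib.request
import urllib.parse

# Build the Request first, then attach headers individually
req = urllib.request.Request('http://httpbin.org/post',
                             data=urllib.parse.urlencode({'name': 'Germey'}).encode('utf-8'),
                             method='POST')
req.add_header('User-Agent', 'Mozilla/5.0 (Windows NT 6.1; WOW64)')

print(req.get_header('User-agent'))  # header names are stored capitalized internally
print(req.get_method())              # POST
```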
3. Handlers and cookies
(1) Handlers: proxies
import urllib.request
proxy_handler = urllib.request.ProxyHandler({
    'http': 'http://127.0.0.1:9743',
    'https': 'https://127.0.0.1:9743'  # the proxy addresses
})
opener = urllib.request.build_opener(proxy_handler)
response = opener.open('http://httpbin.org/get')  # the request is sent to the server through the proxy
print(response.read())
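The introduction mentioned basic authentication as another capability of handlers; it follows the same build-a-handler, build-an-opener pattern. A minimal sketch, where the URL and credentials are placeholders rather than a real service (so opener.open is left as a comment):

```python
import urllib.request

# Register credentials for a URL; None means "any realm"
password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
password_mgr.add_password(None, 'http://example.com/', 'user', 'passwd')

# The handler answers HTTP 401 challenges using the password manager
auth_handler = urllib.request.HTTPBasicAuthHandler(password_mgr)
opener = urllib.request.build_opener(auth_handler)

# opener.open('http://example.com/protected') would now retry with credentials
print(type(opener))
```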
(2) Cookies are text files stored on the client side to record the user's identity; they are the mechanism for keeping a login session alive.
a. Getting cookies
import http.cookiejar, urllib.request
cookie = http.cookiejar.CookieJar()
handler = urllib.request.HTTPCookieProcessor(cookie)
opener = urllib.request.build_opener(handler)
response = opener.open('http://www.baidu.com')
for item in cookie:
    print(item.name + "=" + item.value)
Output:
BAIDUID=6B2B67E34BDFEC72582152BE2F91F9F1:FG=1
BIDUPSID=6B2B67E34BDFEC72582152BE2F91F9F1
H_PS_PSSID=1428_21119_26350_26431_22158
PSTM=1530606009
BDSVRTM=0
BD_HOME=0
b. Saving cookies to a file
import http.cookiejar, urllib.request
filename = 'cookie.txt'
cookie = http.cookiejar.MozillaCookieJar(filename)
handler = urllib.request.HTTPCookieProcessor(cookie)
opener = urllib.request.build_opener(handler)
response = opener.open('http://www.baidu.com')
#for item in cookie:
#    print(item.name + "=" + item.value)
cookie.save(ignore_discard=True, ignore_expires=True)
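The saved file can be read back later with MozillaCookieJar.load, restoring the session for new requests. A sketch that runs without any prior request: it first writes a minimal Netscape-format cookie file by hand (the domain and values are made up for illustration), then loads it:

```python
import http.cookiejar
import os, tempfile

# Fabricate a minimal Netscape-format cookie file for the demo;
# normally this file would come from a previous cookie.save() call
path = os.path.join(tempfile.mkdtemp(), 'cookie.txt')
with open(path, 'w') as f:
    f.write('# Netscape HTTP Cookie File\n')
    # fields: domain, domain_specified, path, secure, expires, name, value
    f.write('.baidu.com\tTRUE\t/\tFALSE\t2147483647\tBAIDUID\tABC123:FG=1\n')

cookie = http.cookiejar.MozillaCookieJar()
cookie.load(path, ignore_discard=True, ignore_expires=True)
for item in cookie:
    print(item.name + '=' + item.value)  # BAIDUID=ABC123:FG=1
```

After loading, pass the jar to HTTPCookieProcessor and build an opener as above, and subsequent requests will carry the stored cookies.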
4. The exception handling module
(1) Exception types:
URLError has a single attribute, reason; catching it only lets you print the cause of the error.
HTTPError has three attributes: code, reason, and headers (the headers of the error response).
from urllib import request, error
try:
    response = request.urlopen('http://cuiqingcai.com/index.htm')
except error.HTTPError as e:  # catch the subclass error first
    print(e.reason, e.code, e.headers, sep='\n')
except error.URLError as e:  # then catch the parent class error
    print(e.reason)
else:
    print('Request Successfully')
Output:
Not Found # HTTPError.reason
404 # HTTPError.code
Server: nginx/1.10.3 (Ubuntu) # HTTPError.headers
Date: Tue, 03 Jul 2018 09:07:27 GMT
Content-Type: text/html; charset=UTF-8
Transfer-Encoding: chunked
Connection: close
Vary: Cookie
Expires: Wed, 11 Jan 1984 05:00:00 GMT
Cache-Control: no-cache, must-revalidate, max-age=0
Link: <https://cuiqingcai.com/wp-json/>; rel="https://api.w.org/"
(2) Determining the cause
import socket
import urllib.request
import urllib.error
try:
    response = urllib.request.urlopen('https://www.baidu.com', timeout=0.01)
except urllib.error.URLError as e:
    print(type(e.reason))
    if isinstance(e.reason, socket.timeout):
        print('TIME OUT')
Output:
<class 'socket.timeout'>
TIME OUT
5. The urllib.parse module (parsing)
urllib.parse.urlparse(urlstring, scheme='', allow_fragments=True)
(1) urlparse: splits urlstring into its components
from urllib.parse import urlparse
# first argument: the URL itself, with scheme, domain, path, etc.
result = urlparse('http://www.baidu.com/index.html;user??id=5#comment')
print(type(result), result)
Output:
<class 'urllib.parse.ParseResult'> ParseResult(scheme='http', netloc='www.baidu.com', path='/index.html', params='user', query='?id=5', fragment='comment')
from urllib.parse import urlparse
# second argument: supplies a default scheme; if the URL already carries one, this argument is ignored
result = urlparse('www.baidu.com/index.html;user??id=5#comment', scheme='https')
print(type(result), result)
Output:
<class 'urllib.parse.ParseResult'> ParseResult(scheme='https', netloc='', path='www.baidu.com/index.html', params='user', query='?id=5', fragment='comment')
result = urlparse('http://www.baidu.com/index.html;user??id=5#comment', allow_fragments=False)
print(type(result), result)  # third argument: when False, fragment stays empty and its content is folded into the preceding component
Output:
<class 'urllib.parse.ParseResult'> ParseResult(scheme='http', netloc='www.baidu.com', path='/index.html', params='user', query='?id=5#comment', fragment='')
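The ParseResult shown above is a named tuple, so each component can be read either by attribute name or by position. A small offline sketch (using a single '?' in the URL so the query parses cleanly):

```python
from urllib.parse import urlparse

# ParseResult fields: scheme, netloc, path, params, query, fragment
result = urlparse('http://www.baidu.com/index.html;user?id=5#comment')
print(result.scheme, result[0])  # http http
print(result.netloc, result[1])  # www.baidu.com www.baidu.com
print(result.query)              # id=5
```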
(2) urlunparse: assembles a URL from its components
from urllib.parse import urlunparse
data = ['http','www.baidu.com','index.html','user','a=6','comment']
print(urlunparse(data))
Output:
http://www.baidu.com/index.html;user?a=6#comment
(3) urljoin: resolves the second argument against the first (the base URL); components present in the second argument take precedence, and missing ones are filled in from the base.
from urllib.parse import urljoin
print(urljoin('http://www.baidu.com','FAQ.html'))
print(urljoin('http://www.baidu.com','https://cuiqingcai.com/FAQ.html'))
print(urljoin('http://www.baidu.com/about.html','https://cuiqingcai.com/FAQ.html'))
print(urljoin('http://www.baidu.com/about.html','https://cuiqingcai.com/FAQ.html?question=2'))
print(urljoin('http://www.baidu.com?wd=abc','https://cuiqingcai.com/index.php'))
print(urljoin('http://www.baidu.com','?category=2#comment'))
Output:
http://www.baidu.com/FAQ.html
https://cuiqingcai.com/FAQ.html
https://cuiqingcai.com/FAQ.html
https://cuiqingcai.com/FAQ.html?question=2
https://cuiqingcai.com/index.php
http://www.baidu.com?category=2#comment
(4) urlencode: converts a dict into GET request parameters
from urllib.parse import urlencode
params = {
    'name': 'germey',
    'age': 22
}
base_url = 'http://www.baidu.com?'
url = base_url + urlencode(params)
print(url)
Output:
http://www.baidu.com?name=germey&age=22
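The reverse direction is also available in urllib.parse: parse_qs turns a query string back into a dict (values are lists, since a key may repeat), and parse_qsl returns a list of (key, value) pairs. A short sketch using the query produced above:

```python
from urllib.parse import parse_qs, parse_qsl

# Decode a query string back into Python structures
query = 'name=germey&age=22'
print(parse_qs(query))   # {'name': ['germey'], 'age': ['22']}
print(parse_qsl(query))  # [('name', 'germey'), ('age', '22')]
```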