Python Web Scraping Study Notes (2): Using the Urllib Library

 

 

The Urllib Library in Detail


Urllib is Python's built-in HTTP request library; it provides powerful functions for making and handling requests.

urllib.request # request module
urllib.error # exception handling module
urllib.parse # URL parsing module
urllib.robotparser # robots.txt parsing module

The first three modules are the focus; the fourth is rarely used.


urlopen

urllib.request.urlopen(url, data=None, [timeout, ]*, cafile=None, capath=None, cadefault=False, context=None)

e.g. 1

import urllib.request

response = urllib.request.urlopen('http://www.baidu.com')
print(response.read().decode('utf-8'))

.read() retrieves the response body, and the decode() method decodes it into a string; the content retrieved should match the first request shown in the browser's network panel.

e.g. 2: Sending a POST request

import urllib.parse
import urllib.request

data = bytes(urllib.parse.urlencode({'word': 'hello'}), encoding='utf8')
response = urllib.request.urlopen('http://httpbin.org/post', data=data)
print(response.read())

e.g. 3: Constraining timeout (the maximum wait time)

import urllib.request
import urllib.error
import socket

try:
    response = urllib.request.urlopen('http://httpbin.org/get', timeout=0.1) # a response must arrive within 0.1 s
except urllib.error.URLError as e:
    if isinstance(e.reason, socket.timeout):
        print('TIME OUT')

Output: TIME OUT


Response

The response type:

import urllib.request

response = urllib.request.urlopen('http://www.baidu.com')
print(type(response))

OUTPUT:  <class 'http.client.HTTPResponse'>

Useful information: the status code and the response headers (they tell whether the response succeeded).

To inspect them:

import urllib.request

response = urllib.request.urlopen('http://www.baidu.com')
print(response.status)
print(response.getheaders())
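
If only a single header is needed, the response object also provides getheader(); a minimal sketch (using the 'Server' header as an example):

import urllib.request

response = urllib.request.urlopen('http://www.baidu.com')
# getheader() returns the value of one response header, e.g. the Server header
print(response.getheader('Server'))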

Or use read(), but read() returns a byte stream, which needs to be decoded:

import urllib.request

response = urllib.request.urlopen('http://www.baidu.com')
print(response.read().decode('utf-8'))

Request

Constructing a Request object lets you change the details of the request:

import urllib.request


request = urllib.request.Request('http://www.baidu.com')
response = urllib.request.urlopen(request)
print(response.read().decode('utf-8'))

As you can see, passing a Request object also yields the response normally.

But constructing a Request lets you customize how the request is made, e.g. by adding headers and form data:

from urllib import request, parse

url = 'http://httpbin.org/post'
headers = {
    'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64)',
    'Host':'httpbin.org'
}
dict = {
    'name':'Germey'
}
data = bytes(parse.urlencode(dict), encoding='utf-8')
req = request.Request(url=url, data=data, headers=headers, method='POST')
response = request.urlopen(req)
print(response.read().decode('utf-8'))

Or, equivalently:

from urllib import request, parse

url = 'http://httpbin.org/post'
dict = {
    'name':'Germey'
}
data = bytes(parse.urlencode(dict), encoding='utf-8')
req = request.Request(url=url, data=data, method='POST')
req.add_header('User-Agent','Mozilla/5.0 (Windows NT 10.0; Win64; x64)')
response = request.urlopen(req)
print(response.read().decode('utf-8'))

Both produce the same result:

{
  "args": {}, 
  "data": "", 
  "files": {}, 
  "form": {
    "name": "Germey"
  }, 
  "headers": {
    "Accept-Encoding": "identity", 
    "Connection": "close", 
    "Content-Length": "11", 
    "Content-Type": "application/x-www-form-urlencoded", 
    "Host": "httpbin.org", 
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"
  }, 
  "json": null, 
  "origin": "182.116.195.55", 
  "url": "http://httpbin.org/post"
}

Handler: the use of proxies is skipped here, since it is not involved in what we are doing.
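
For reference anyway, a minimal sketch of routing requests through a proxy with ProxyHandler (the proxy address below is made up; substitute a real one):

import urllib.request

# Hypothetical local proxy address; replace it with a real proxy
proxy_handler = urllib.request.ProxyHandler({
    'http': 'http://127.0.0.1:9743',
    'https': 'https://127.0.0.1:9743'
})
opener = urllib.request.build_opener(proxy_handler)
response = opener.open('http://httpbin.org/get')
print(response.read().decode('utf-8'))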

Cookies: store user information and maintain login state.

You can view them in the browser under F12 → Application → Cookies; cookies keep us logged in / authenticated on a website.

Handling cookies:

import http.cookiejar, urllib.request

cookie = http.cookiejar.CookieJar()
handler = urllib.request.HTTPCookieProcessor(cookie)
opener = urllib.request.build_opener(handler)
response = opener.open('http://www.baidu.com')
for item in cookie:
    print(item.name+"="+item.value)

Output:

BAIDUID=F8954DF46819CDEA7151D26EC87BAB92:FG=1
BIDUPSID=F8954DF46819CDEA7151D26EC87BAB92
H_PS_PSSID=26523_1437_21090_28329_28413_22072
PSTM=1548302451
delPer=0
BDSVRTM=0
BD_HOME=0

To save cookies to a file, use the MozillaCookieJar or LWPCookieJar subclass, which provides a save() method:

import http.cookiejar, urllib.request

filename = 'cookie.txt'
cookie = http.cookiejar.MozillaCookieJar(filename) # or LWPCookieJar
handler = urllib.request.HTTPCookieProcessor(cookie)
opener = urllib.request.build_opener(handler)
response = opener.open('http://www.baidu.com')
cookie.save(ignore_discard=True, ignore_expires=True)

If the cookies have not expired, they can be reused. After saving, use the load() method; note that the cookie jar class used to load must match the one used to save the file, since the two classes use different file formats:

import http.cookiejar, urllib.request

filename = 'cookie.txt'
cookie = http.cookiejar.LWPCookieJar() # must match the class (and file format) used when saving
cookie.load(filename, ignore_discard=True, ignore_expires=True)
handler = urllib.request.HTTPCookieProcessor(cookie)
opener = urllib.request.build_opener(handler)
response = opener.open('http://www.baidu.com')
print(response.read().decode('utf8'))

Exception handling: requesting nonexistent or broken pages

Request status codes, e.g. 404 Not Found.

When an error occurs:

from urllib import request, error

try: 
    response = request.urlopen('http://shaonian.com/index.htm')
except error.URLError as e:
    print(e.reason)

If the exception is not caught, the program may crash.

The specific exception types to catch:

URLError: only has reason, which can be printed as the error message;

HTTPError: has code, headers, and reason; it is a subclass of URLError.

Usage:

from urllib import request, error

try: 
    response = request.urlopen('http://shaonian.com/index.htm')
except error.HTTPError as e:
    print(e.reason, e.code, e.headers, sep='\n')
except error.URLError as e:
    print(e.reason)
else:
    print('Request Successfully')

Output:

Not Found
404
Date: Thu, 24 Jan 2019 04:18:12 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 5045
Connection: close
Cache-Control: private
X-Powered-By: ASP.NET
Server: wts/1.2

e.reason may itself be an instance of a class (such as socket.timeout), and it can be printed directly; we can also use isinstance to determine the cause:

import socket
from urllib import request, error

try: 
    response = request.urlopen('http://www.baidu.com', timeout=0.01)
except error.URLError as e:
    print(type(e.reason))
    if isinstance(e.reason, socket.timeout):
        print('TIME OUT')

# Simply printing the reason also works:
# except error.URLError as e:
#     print(e.reason)

URL parsing: urlparse

Splits a URL into its components and assigns them: scheme, domain, path, ...

urllib.parse.urlparse(urlstring, scheme='', allow_fragments=True) 如:

from urllib.parse import urlparse

result = urlparse('https://www.baidu.com/s?ie=UTF-8&wd=%E4%BD%A0%E5%A5%BD')
print(type(result), result)

output:

<class 'urllib.parse.ParseResult'> ParseResult(scheme='https', netloc='www.baidu.com', path='/s', params='', query='ie=UTF-8&wd=%E4%BD%A0%E5%A5%BD', fragment='')

Note that these components can also be supplied as parameters, but such a default only takes effect when the URL itself does not already contain that component:

from urllib.parse import urlparse

result = urlparse('www.baidu.com/s?ie=UTF-8&wd=%E4%BD%A0%E5%A5%BD', scheme='https')
print(result)

Here the default scheme='https' is filled into the ParseResult; note, however, that because the URL lacks '//', www.baidu.com ends up in path rather than netloc.

allow_fragments=False disables fragment parsing: the fragment is merged into the preceding query/params, and if those are empty it is merged into the path (it merges forward).

from urllib.parse import urlparse

result = urlparse('https://www.baidu.com/s?ie=UTF-8&wd=%E4%BD%A0%E5%A5%BD',allow_fragments=False)
print(result)

output:

ParseResult(scheme='https', netloc='www.baidu.com', path='/s', params='', query='ie=UTF-8&wd=%E4%BD%A0%E5%A5%BD', fragment='')
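
The URL above contains no fragment, so allow_fragments=False changes nothing visible there. A minimal sketch with a made-up URL that does have a fragment shows the forward merging:

from urllib.parse import urlparse

# With a query present, the fragment is merged into the query
result = urlparse('https://www.baidu.com/index.html?a=6#comment', allow_fragments=False)
print(result)
# ParseResult(scheme='https', netloc='www.baidu.com', path='/index.html', params='', query='a=6#comment', fragment='')

# Without a query, the fragment is merged into the path
result = urlparse('https://www.baidu.com/index.html#comment', allow_fragments=False)
print(result)
# ParseResult(scheme='https', netloc='www.baidu.com', path='/index.html#comment', params='', query='', fragment='')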

urlunparse: assembles a URL from its parts

from urllib.parse import urlunparse

data = ['http', 'www.baidu.com', 'index.html', 'user', 'a=6', 'comment']
print(urlunparse(data))

output: http://www.baidu.com/index.html;user?a=6#comment

urljoin: joins two URLs; components of the base URL are overridden by the corresponding components of the second URL.

urljoin('...','...')
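
A minimal sketch with made-up URLs: if the second argument is an absolute URL it takes over entirely, otherwise the missing parts are filled in from the base URL:

from urllib.parse import urljoin

print(urljoin('http://www.baidu.com', 'FAQ.html'))
# http://www.baidu.com/FAQ.html
print(urljoin('http://www.baidu.com/about.html', 'http://example.com/FAQ.html'))
# http://example.com/FAQ.html
print(urljoin('http://www.baidu.com/about.html?wd=abc', 'index.html'))
# http://www.baidu.com/index.html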

urlencode: converts a dict into URL query parameters

from urllib.parse import urlencode

params = {
    'name': 'germey',
    'age': 22
}
base_url = 'http://www.baidu.com?'
url = base_url + urlencode(params)
print(url)

Output: http://www.baidu.com?name=germey&age=22

 

urllib is a very handy toolkit. Note that we have now covered the three modules mentioned at the beginning:

urllib.request # request module
urllib.error # exception handling module
urllib.parse # URL parsing module

For more advanced usage, go and read the official documentation, but these are the features you will use most often!
