[Original] Python: Get Only the Response Headers, Not the Body

Python: get only the response headers, without fetching the content.

1. Use HEAD method

>>> import requests
>>> res = requests.head("http://www.baidu.com/")
>>> res.headers
{'Content-Encoding': 'gzip', 'Server': 'bfe/1.0.8.18', 'Last-Modified': 'Mon, 13 Jun 2016 02:50:08 GMT', 'Connection': 'Keep-Alive', 'Pragma': 'no-cache', 'Cache-Control': 'private, no-cache, no-store, proxy-revalidate, no-transform', 'Date': 'Fri, 13 Oct 2017 04:36:20 GMT', 'Content-Type': 'text/html'}
>>> res.ok
True
>>> res.content
''
# But this runs into problems: some servers do not support HEAD, or simply reject it.
# The request below, for example, is rejected:
#
>>> res = requests.head("https://www.douban.com/subject/1/")
>>> res
<Response [403]>
>>> res.ok
False
>>> res.content
''
>>> res.headers
{'Content-Encoding': 'gzip', 'Keep-Alive': 'timeout=30', 'Server': 'dae', 'Connection': 'keep-alive', 'Date': 'Fri, 13 Oct 2017 04:39:00 GMT', 'Content-Type': 'text/html'}

So HEAD is not very general, because some servers do not support or allow it.
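When HEAD is refused, one workaround (a sketch, not part of the original post) is to issue a normal GET with requests' stream=True, which defers downloading the body until res.content or iter_content() is touched, and then close the response right away:

import requests

def headers_via_get(url):
    # Hypothetical helper: with stream=True, requests reads only the status
    # line and headers; the body stays unread on the connection.
    res = requests.get(url, stream=True)
    headers = res.headers
    res.close()  # close without touching res.content, so the body is never downloaded
    return res.status_code, headers

print(headers_via_get("https://www.douban.com/subject/1/"))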


2. Use urllib

>>> import urllib
>>> res = urllib.urlopen("http://127.0.0.1:8000/git.exe")
>>> res.url
'http://127.0.0.1:8000/git.exe'
>>> res.headers.headers
['Server: SimpleHTTP/0.6 Python/2.7.10\r\n', 'Date: Fri, 13 Oct 2017 06:06:37 GMT\r\n', 'Content-type: application/x-msdownload\r\n', 'Content-Length: 7569408\r\n', 'Last-Modified: Fri, 16 Dec 2016 07:09:32 GMT\r\n']
>>> len(res.read())
7569408
# urllib only reads data from the web server when read/readline/readlines is called.
# The relevant source lives in urllib/httplib.
# urllib.py (simplified call chain):
def urlopen(url, ...):
    opener = FancyURLopener()
    return opener.open(url)
class FancyURLopener(URLopener).open():
    getattr(self, name)(url)
class URLopener.open_http():
    errcode, errmsg, headers = h.getreply()
    if(200 <= errcode < 300):
        return addinfourl(fp, headers, "http:" + url, errcode)
    else:
        if data is None:
            return self.http_error(url, fp, errcode, errmsg, headers)
        else:
            return self.http_error(url, fp, errcode, errmsg, headers, data)
class URLopener.http_error():
    return method(url, fp, errcode, errmsg, headers)
class FancyURLopener.http_error_default():
    return addinfourl(fp, headers, "http:" + url, errcode)
class addinfourl(addbase):
    # fp is never touched here -- no reads and no writes.
class addbase.__init__():
    self.fp = fp
    self.read = self.fp.read
    self.readline = self.fp.readline
    if hasattr(self.fp, "readlines"):
        self.readlines = self.fp.readlines
    if hasattr(self.fp, "fileno"):
        self.fileno = self.fp.fileno
    # ... ...

As you can see, urllib.urlopen ultimately returns an addinfourl (a subclass of addbase). addbase does nothing with the socket, so no data is read or written at that point; the body is only fetched from the web server when read/readline/readlines is explicitly called afterwards.

Figure 1. Network initialized (image: Init Network).

Figure 2. After urlopen() (image: Get Header).

Figure 3. After read() (image: Get Content).
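
Based on that, a minimal sketch (assuming Python 2's urllib, as above) of grabbing only the headers: never call read(), so the body is never pulled off the socket.

import urllib

def get_headers_only(url):
    # urlopen() has already parsed the status line and headers at this point;
    # the body has not been read yet.
    res = urllib.urlopen(url)
    headers = dict(res.info().items())  # copy the header fields
    res.close()  # close without ever calling read()
    return headers

print(get_headers_only("http://127.0.0.1:8000/git.exe"))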


3. Use socket

Having read the urllib source, we can write a small function on top of socket that fetches only the header.

import socket
import ssl


_timeout = 10
socket.setdefaulttimeout(_timeout)

def get_header(host, port=80, uri="/", method="GET", user_ssl=False):
    # This could be extended further to accept custom request headers.
    conn = None
    header = """%s %s HTTP/1.1\r\nHost: %s\r\nConnection: close\r\nUser-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36\r\n\r\n""" % (
        method, uri, host)
    if user_ssl:
        ssl_context = ssl.SSLContext(ssl.PROTOCOL_TLSv1)
        _socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        conn = ssl_context.wrap_socket(_socket, server_hostname=host)
        conn.connect((host, port))
        conn.send(header)
    else:
        conn = socket.create_connection((host, port), _timeout)
        conn.sendall(header)
    text = ""
    while True:
        if "\r\n\r\n" in text:
            break
        buff = conn.recv(10)
        if not buff:
            # connection closed before the headers ended; avoid looping forever
            break
        text += buff
        # print buff
    conn.close()
    return text.split("\r\n\r\n")[0]

if __name__ == '__main__':
    print get_header("www.douban.com", uri="/subject/27076001/")
    print
    print get_header("www.douban.com", uri="/subject/27076001/", port=443, user_ssl=True)
$ python test_header.py
HTTP/1.1 301 Moved Permanently
Date: Fri, 13 Oct 2017 06:48:23 GMT
Content-Type: text/html
Content-Length: 178
Connection: close
Location: https://www.douban.com/subject/27076001/
Server: dae

HTTP/1.1 302 Moved Temporarily
Server: ADSSERVER/45863
Date: Fri, 13 Oct 2017 06:48:23 GMT
Content-Type: text/html
Transfer-Encoding: chunked
Connection: close
Location: https://sec.douban.com/b?r=https%3A%2F%2Fwww.douban.com%2Fsubject%2F27076001%2F
Strict-Transport-Security: max-age=15552000;
Set-Cookie: __ads_session=uY8l3pLW/AjCKJ8Y4wA=; domain=.douban.com; path=/
X-Powered-By-ADS: uni-jnads-1-0
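
get_header() returns the raw header block as one string. Here is a small sketch (assuming the get_header() defined above is in the same module) that splits it into the status line and a dict of fields:

def parse_header_block(raw):
    # Hypothetical helper: split the text returned by get_header() into the
    # status line and a dict of fields (keeps only the last value if a field
    # name repeats, e.g. multiple Set-Cookie headers).
    lines = raw.split("\r\n")
    status_line = lines[0]
    fields = {}
    for line in lines[1:]:
        if ":" in line:
            name, value = line.split(":", 1)
            fields[name.strip()] = value.strip()
    return status_line, fields

status, fields = parse_header_block(get_header("www.douban.com", uri="/subject/27076001/"))
print(status)
print(fields.get("Location"))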

References

"Python socket server handle HTTPS request" (https://stackoverflow.com/questions/32062925/python-socket-server-handle-https-request)

