urllib3

http://urllib3.readthedocs.org/en/latest/


urllib3 Documentation

Highlights

  • Re-use the same socket connection for multiple requests, with optional client-side certificate verification. See: HTTPConnectionPool and HTTPSConnectionPool
  • File posting, as shown in the sketch after this list. See: encode_multipart_formdata()
  • Built-in redirection and retries (optional).
  • Supports gzip and deflate decoding. See: decode_gzip() and decode_deflate()
  • Thread-safe and sanity-safe.
  • Tested on Python 2.6+ and Python 3.2+, 100% unit test coverage.
  • Works with AppEngine, gevent, eventlet, and the standard library io module.
  • Small and easy-to-understand codebase, perfect for extending and building upon. For a more comprehensive solution, have a look at Requests, which is also powered by urllib3.
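
For example, file posting goes through the fields parameter of request(), which encodes the body with encode_multipart_formdata() under the hood. A minimal sketch against httpbin.org (PoolManager is covered in more detail below):

>>> import urllib3
>>> http = urllib3.PoolManager()
>>> r = http.request('POST', 'http://httpbin.org/post',
...                  fields={'attachment': ('example.txt', 'file contents')})
>>> r.status
200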

Getting Started

Installing

pip install urllib3 or fetch the latest source from github.com/shazow/urllib3.

Usage

>>> import urllib3
>>> http = urllib3.PoolManager()
>>> r = http.request('GET', 'http://example.com/')
>>> r.status
200
>>> r.headers['server']
'ECS (iad/182A)'
>>> r.data
'...'

By default, urllib3 does not verify your HTTPS requests. You’ll need to supply a root certificate bundle, or use certifi:

>>> import urllib3, certifi
>>> http = urllib3.PoolManager(cert_reqs='CERT_REQUIRED', ca_certs=certifi.where())
>>> r = http.request('GET', 'https://insecure.com/')
Traceback (most recent call last):
  ...
SSLError: hostname 'insecure.com' doesn't match 'svn.nmap.org'

For more on making secure SSL/TLS HTTPS requests, read the Security section.

urllib3’s responses respect the io framework from Python’s standard library, allowing use of these standard objects for purposes like buffering:

>>> import io
>>> http = urllib3.PoolManager()
>>> r = http.urlopen('GET', 'http://example.com/', preload_content=False)
>>> b = io.BufferedReader(r, 2048)
>>> firstpart = b.read(100)
>>> # ... your internet connection fails momentarily ...
>>> secondpart = b.read()

Upgrading & Versioning

urllib3 uses a compatibility-based versioning scheme (let’s call it compatver). For the user, the version numbers indicate the decision required when upgrading.

Given a version A.B.C:

C. Strictly backwards-compatible, usually a bug-fix. Always upgrade.

B. Possibly partially incompatible, usually a new feature or a minor API improvement. Read the changelog and upgrade when ready.

A. Major rewrite and possibly breaks everything. Not really an upgrade, basically a new library under the same namespace, decide if you want to switch.

For example, when going from urllib3 v1.2.3 to v1.2.4, you should always upgrade without hesitation. When going from v1.2 to v1.3, you should read the changes to make sure they’re not going to affect you.

Components

urllib3 tries to strike a fine balance between power, extendability, and sanity. To achieve this, the codebase is a collection of small reusable utilities and abstractions composed together in a few helpful layers.

PoolManager

The highest level is the PoolManager(...).

The PoolManager will take care of reusing connections for you whenever you request the same host. This should cover most scenarios without significant loss of efficiency, but you can always drop down to a lower level component for more granular control.

>>> import urllib3
>>> http = urllib3.PoolManager(num_pools=10)  # cache up to 10 ConnectionPools
>>> r1 = http.request('GET', 'http://example.com/')
>>> r2 = http.request('GET', 'http://httpbin.org/')
>>> r3 = http.request('GET', 'http://httpbin.org/get')
>>> len(http.pools)
2

PoolManager is a proxy for a collection of ConnectionPool objects. They both inherit from RequestMethods to make sure that their API is similar, so that instances of either can be passed around interchangeably.
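
Because both speak the RequestMethods interface, a helper that only needs to issue requests can accept either one. A minimal sketch (fetch_status is a hypothetical helper, not part of urllib3):

>>> def fetch_status(client, url):
...     # client may be a PoolManager or a ConnectionPool; both
...     # provide .request() via RequestMethods.
...     return client.request('GET', url).status
>>> fetch_status(urllib3.PoolManager(), 'http://httpbin.org/')
200
>>> fetch_status(urllib3.connection_from_url('http://httpbin.org/'), '/get')
200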

ProxyManager

The ProxyManager is an HTTP proxy-aware subclass of PoolManager. It produces a single HTTPConnectionPool instance for all HTTP connections and individual per-server:port HTTPSConnectionPool instances for tunnelled HTTPS connections:

>>> proxy = urllib3.ProxyManager('http://localhost:3128/')
>>> r1 = proxy.request('GET', 'http://google.com/')
>>> r2 = proxy.request('GET', 'http://httpbin.org/')
>>> len(proxy.pools)
1
>>> r3 = proxy.request('GET', 'https://httpbin.org/')
>>> r4 = proxy.request('GET', 'https://twitter.com/')
>>> len(proxy.pools)
3
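
If the proxy requires authentication, credentials can be supplied as proxy headers built with make_headers(). A minimal sketch, assuming a proxy at localhost:3128 that accepts basic auth:

>>> from urllib3 import ProxyManager, make_headers
>>> auth_headers = make_headers(proxy_basic_auth='user:password')
>>> proxy = ProxyManager('http://localhost:3128/', proxy_headers=auth_headers)
>>> r = proxy.request('GET', 'http://httpbin.org/headers')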

ConnectionPool

The next layer is the ConnectionPool(...).

The HTTPConnectionPool and HTTPSConnectionPool classes allow you to define a pool of connections to a single host and make requests against this pool with automatic connection reusing and thread safety.

When the ssl module is available, then HTTPSConnectionPool objects can be configured to check SSL certificates against specific provided certificate authorities.
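
For example, the pool can be pointed at the certifi bundle, mirroring the PoolManager example from the Usage section. A minimal sketch:

>>> import urllib3, certifi
>>> pool = urllib3.HTTPSConnectionPool('httpbin.org', port=443,
...                                    cert_reqs='CERT_REQUIRED',
...                                    ca_certs=certifi.where())
>>> r = pool.request('GET', '/get')
>>> r.status
200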

>>> import urllib3
>>> conn = urllib3.connection_from_url('http://httpbin.org/')
>>> r1 = conn.request('GET', 'http://httpbin.org/')
>>> r2 = conn.request('GET', '/user-agent')
>>> r3 = conn.request('GET', 'http://example.com')
Traceback (most recent call last):
  ...
urllib3.exceptions.HostChangedError: HTTPConnectionPool(host='httpbin.org', port=None): Tried to open a foreign host with url: http://example.com

Again, a ConnectionPool is a pool of connections to a specific host. Trying to access a different host through the same pool will raise a HostChangedError exception unless you specify assert_same_host=False. Do this at your own risk, as the outcome is completely dependent on the behaviour of the host server.

If you need to access multiple hosts and don’t want to manage your own collection of ConnectionPool objects, then you should use a PoolManager.

ConnectionPool is composed of a collection of httplib.HTTPConnection objects.
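
A pool can also be constructed directly rather than via connection_from_url(). A minimal sketch that caps the pool at two connections and blocks when both are in use (the host is arbitrary):

>>> pool = urllib3.HTTPConnectionPool('httpbin.org', maxsize=2, block=True)
>>> r1 = pool.request('GET', '/get')
>>> r2 = pool.request('GET', '/user-agent')
>>> (r1.status, r2.status)
(200, 200)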

Timeout

A timeout can be set to abort socket operations on individual connections after the specified duration. The timeout can be defined as a float or an instance of Timeout, which gives more granular control over how much time is allowed for different stages of the request. This can be set for the entire pool or per-request.

>>> from urllib3 import PoolManager, Timeout

>>> # Manager with 3 seconds combined timeout.
>>> http = PoolManager(timeout=3.0)
>>> r = http.request('GET', 'http://httpbin.org/delay/1')

>>> # Manager with 2 second timeout for the read phase, no limit for the rest.
>>> http = PoolManager(timeout=Timeout(read=2.0))
>>> r = http.request('GET', 'http://httpbin.org/delay/1')

>>> # Manager with no timeout but a request with a timeout of 1 second for
>>> # the connect phase and 2 seconds for the read phase.
>>> http = PoolManager()
>>> r = http.request('GET', 'http://httpbin.org/delay/1', timeout=Timeout(connect=1.0, read=2.0))

>>> # Same Manager but request with a 5 second total timeout.
>>> r = http.request('GET', 'http://httpbin.org/delay/1', timeout=Timeout(total=5.0))

See the Timeout definition for more details.

Retry

Retries can be configured by passing an instance of Retry, or disabled by passing False, to the retries parameter.

Redirects are also considered to be a subset of retries but can be configured or disabled individually.

>>> from urllib3 import PoolManager, Retry

>>> # Allow 3 retries total for all requests in this pool. These are the same:
>>> http = PoolManager(retries=3)
>>> http = PoolManager(retries=Retry(3))
>>> http = PoolManager(retries=Retry(total=3))

>>> r = http.request('GET', 'http://httpbin.org/redirect/2')
>>> # r.status -> 200

>>> # Disable redirects for this request.
>>> r = http.request('GET', 'http://httpbin.org/redirect/2', retries=Retry(3, redirect=False))
>>> # r.status -> 302

>>> # No total limit, but only do 5 connect retries, for this request.
>>> r = http.request('GET', 'http://httpbin.org/', retries=Retry(connect=5))

See the Retry definition for more details.

Stream

You may also stream the response and consume data as it arrives (e.g. when using transfer-encoding: chunked). In this case, the stream() method returns a generator.

>>> import urllib3
>>> http = urllib3.PoolManager()

>>> r = http.request("GET", "http://httpbin.org/stream/3")
>>> r.getheader("transfer-encoding")
'chunked'

>>> for chunk in r.stream():
...     print(chunk)
{"url": "http://httpbin.org/stream/3", ..., "id": 0, ...}
{"url": "http://httpbin.org/stream/3", ..., "id": 1, ...}
{"url": "http://httpbin.org/stream/3", ..., "id": 2, ...}
>>> r.closed
True

Completely consuming the stream will auto-close the response and release the connection back to the pool. If you’re only partially consuming a stream, make sure to manually call r.close() on the response.
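
A minimal sketch of partial consumption, reusing the pool from above (httpbin's /stream/20 endpoint sends 20 chunks):

>>> r = http.request('GET', 'http://httpbin.org/stream/20', preload_content=False)
>>> first = next(r.stream())  # take only the first chunk
>>> r.close()                 # close the response so the socket isn't left dangling
>>> r.closed
True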

