Source file
The file at http://theday.guohongfu.top/letter.txt contains abcdefghijklmnopqrstuvwxyz
Fetching byte 20 and everything after it

import requests
url = 'http://theday.guohongfu.top/letter.txt'
headers1 = {
    'Range': "bytes=20-"  # fetch byte 20 and everything after it
}
response = requests.get(url, headers=headers1)
print('data={}'.format(response.content.decode()))  # uvwxyz
# Result:
# data=uvwxyz
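To see why `bytes=20-` yields `uvwxyz`, here is a minimal sketch of how a server would slice the known 26-byte file for a single-range header. The `apply_range` helper is hypothetical and only illustrates the slicing; it is not part of the requests library.

```python
# Hypothetical helper: apply a simple single-range header such as
# 'bytes=20-' or 'bytes=0-5' to an in-memory byte string.
LETTERS = b'abcdefghijklmnopqrstuvwxyz'

def apply_range(data: bytes, range_header: str) -> bytes:
    """Return the slice of `data` selected by a 'bytes=start-end' header."""
    spec = range_header.split('=', 1)[1]           # e.g. '20-' or '0-5'
    start_s, _, end_s = spec.partition('-')
    start = int(start_s)
    end = int(end_s) + 1 if end_s else len(data)   # HTTP range ends are inclusive
    return data[start:end]

print(apply_range(LETTERS, 'bytes=20-').decode())  # uvwxyz
```

The same helper explains the `bytes=0-5` request later in this post: it selects the six bytes `abcdef`.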
Using If-Match to detect whether the file changed between two requests

import requests
url = 'http://theday.guohongfu.top/letter.txt'
headers1 = {
    'Range': "bytes=0-5"  # fetch bytes 0-5
}
response = requests.get(url, headers=headers1)
print('data={}'.format(response.content.decode()))  # abcdef
# Read the ETag from the response
req_etag = response.headers['ETag']
headers1['If-Match'] = req_etag   # the request fails if the file changed between requests
headers1['Range'] = 'bytes=6-10'  # fetch bytes 6-10
response = requests.get(url, headers=headers1)
print('data={}'.format(response.content.decode())) # ghijk
# Result:
# data=abcdef
# data=ghijk
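What the server does with `If-Match` can be sketched as a simple comparison: if the ETag sent by the client still matches the file's current ETag, the range is served; otherwise the server answers 412 Precondition Failed. The function and ETag values below are hypothetical, shown only to illustrate the decision.

```python
# Sketch of the server-side If-Match check (hypothetical helper and values),
# assuming the server compares ETags by simple string equality.
def check_if_match(current_etag: str, if_match_header: str) -> int:
    """Return the status code an If-Match range request would receive."""
    if if_match_header == '*' or if_match_header == current_etag:
        return 206  # Partial Content: file unchanged, serve the requested range
    return 412      # Precondition Failed: file changed between the two requests

print(check_if_match('"abc123"', '"abc123"'))  # 206
print(check_if_match('"abc123"', '"old999"'))  # 412
```

This is why the two-request pattern above is safe: if the file were replaced after the first request, the second request would fail with 412 instead of silently returning bytes from a different file.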
Downloading a file in chunks with Python

import requests
mp4url = 'https://mp4.vjshi.com/2020-11-20/1c28d06e0278413bf6259ba8b9d26140.mp4'
response = requests.get(mp4url, stream=True)
with open('test.mp4', 'wb') as f:
    for chunk in response.iter_content(chunk_size=512):
        if chunk:
            f.write(chunk)

The data is written 512 bytes at a time, so a large file is never read into memory all at once, which would otherwise risk exhausting memory.
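Range headers and streaming can also be combined for a multi-part (or resumable) download: split the file's total length into several ranges and fetch each one separately. The `split_ranges` helper below is a hypothetical sketch of the splitting step only; each yielded value would be passed as the `Range` header of its own `requests.get(..., stream=True)` call.

```python
# Hypothetical helper: split a file of known length into N 'Range' header
# values so each part can be downloaded (and retried) independently.
def split_ranges(total_size: int, parts: int):
    """Yield 'bytes=start-end' values covering total_size bytes in `parts` pieces."""
    base, extra = divmod(total_size, parts)
    start = 0
    for i in range(parts):
        size = base + (1 if i < extra else 0)
        end = start + size - 1          # HTTP range ends are inclusive
        yield 'bytes={}-{}'.format(start, end)
        start = end + 1

print(list(split_ranges(26, 3)))  # ['bytes=0-8', 'bytes=9-17', 'bytes=18-25']
```

For the 26-byte letter.txt above, three parts cover bytes 0-8, 9-17, and 18-25; writing each part at its own offset reassembles the original file.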