A Roundup of Several Ways to Download Files with Python

In day-to-day research or work, we often need to batch-download material from the web. Downloading everything by hand wastes time and wears out your mouse, and by the time you finish clicking your fingers are numb.

Repetitive batch jobs like this are exactly what Python should handle for us. This article rounds up several ways to download files with Python; let's dive in.

1. Downloading an image

import requests

url = 'https://www.python.org/static/img/python-logo@2x.png'
myfile = requests.get(url)
open('PythonImage.png', 'wb').write(myfile.content)

Or with the wget module:

import wget

url = "https://www.python.org/static/img/python-logo@2x.png"
wget.download(url, 'pythonLogo.png')

requests is a simple, easy-to-use HTTP library implemented in Python. A standard requests[1] template:

import requests

url = "******"
try:
    r = requests.get(url)
    r.raise_for_status()   # raises requests.HTTPError if the status code is not 200
    r.encoding = r.apparent_encoding
    print(r.text)
except requests.RequestException:
    print("Request failed...")
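The same template adapts to binary downloads by writing r.content instead of printing r.text. A minimal sketch (the URL, timeout, and filename here are placeholders, not part of the original template):

import requests

url = "https://www.python.org/static/img/python-logo@2x.png"
try:
    r = requests.get(url, timeout=10)
    r.raise_for_status()                # non-2xx status raises requests.HTTPError
    with open("logo.png", "wb") as f:
        f.write(r.content)              # raw bytes, not the decoded r.text
except requests.RequestException as e:
    print(f"Download failed: {e}")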

2. Downloading a redirected file

import requests

url = 'https://readthedocs.org/projects/python-guide/downloads/pdf/latest/'
myfile = requests.get(url, allow_redirects=True)
open('hello.pdf', 'wb').write(myfile.content)
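To confirm that a redirect was actually followed, you can inspect the response's history; a small sketch using the same requests API:

import requests

url = 'https://readthedocs.org/projects/python-guide/downloads/pdf/latest/'
r = requests.get(url, allow_redirects=True)
print(r.url)                      # the final URL after all redirects
for hop in r.history:             # one response object per intermediate redirect
    print(hop.status_code, hop.url)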

3. Downloading a large file in chunks

import requests

url = 'https://buildmedia.readthedocs.org/media/pdf/python-guide/latest/python-guide.pdf'
r = requests.get(url, stream=True)
with open("PythonBook.pdf", "wb") as Pypdf:
    for chunk in r.iter_content(chunk_size=1024):  # 1024 bytes per chunk
        if chunk:
            Pypdf.write(chunk)
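When the server sends a Content-Length header, the same streaming loop can also report progress; a minimal sketch (if the header is absent, no percentage is printed):

import requests

url = 'https://buildmedia.readthedocs.org/media/pdf/python-guide/latest/python-guide.pdf'
r = requests.get(url, stream=True)
total = int(r.headers.get('Content-Length', 0))  # 0 when the header is absent
done = 0
with open("PythonBook.pdf", "wb") as f:
    for chunk in r.iter_content(chunk_size=1024):
        if chunk:
            f.write(chunk)
            done += len(chunk)
            if total:
                print(f"\r{done * 100 // total}%", end="")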

4. Downloading multiple files in parallel

The non-parallel version:

import requests
from time import time
from multiprocessing.pool import ThreadPool

def url_response(url):
    path, url = url
    r = requests.get(url, stream=True)
    with open(path, 'wb') as f:
        for ch in r:
            f.write(ch)

urls = [("Event1", "https://www.python.org/events/python-events/805/"),
        ("Event2", "https://www.python.org/events/python-events/801/"),
        ("Event3", "https://www.python.org/events/python-events/790/"),
        ("Event4", "https://www.python.org/events/python-events/798/"),
        ("Event5", "https://www.python.org/events/python-events/807/"),
        ("Event6", "https://www.python.org/events/python-events/807/"),
        ("Event7", "https://www.python.org/events/python-events/757/"),
        ("Event8", "https://www.python.org/events/python-user-group/816/")]

start = time()
for x in urls:
    url_response(x)
print(f"Time to download: {time() - start}")
# Time to download: 7.306085824966431

The parallel version changes essentially one line, ThreadPool(9).imap_unordered(url_response, urls), and the wall time drops sharply. One caveat: imap_unordered returns a lazy iterator, so it must be consumed (e.g. wrapped in list(...)); otherwise the timer only measures task submission and the downloads may not have finished when the script exits:

import requests
from time import time
from multiprocessing.pool import ThreadPool

def url_response(url):
    path, url = url
    r = requests.get(url, stream=True)
    with open(path, 'wb') as f:
        for ch in r:
            f.write(ch)

urls = [("Event1", "https://www.python.org/events/python-events/805/"),
        ("Event2", "https://www.python.org/events/python-events/801/"),
        ("Event3", "https://www.python.org/events/python-events/790/"),
        ("Event4", "https://www.python.org/events/python-events/798/"),
        ("Event5", "https://www.python.org/events/python-events/807/"),
        ("Event6", "https://www.python.org/events/python-events/807/"),
        ("Event7", "https://www.python.org/events/python-events/757/"),
        ("Event8", "https://www.python.org/events/python-user-group/816/")]

start = time()
list(ThreadPool(9).imap_unordered(url_response, urls))  # consume the iterator so every download completes
print(f"Time to download: {time() - start}")
# Time to download: 0.0064961910247802734  (the original measurement, taken without consuming the iterator)
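The standard library's concurrent.futures module expresses the same idea; a minimal sketch reusing url_response and urls from above (the with-block waits for all submitted tasks before the timer stops):

from concurrent.futures import ThreadPoolExecutor
from time import time

start = time()
with ThreadPoolExecutor(max_workers=9) as pool:
    pool.map(url_response, urls)  # tasks are submitted immediately; exiting the with-block waits for completion
print(f"Time to download: {time() - start}")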

5. Fetching an HTML page with urllib

import urllib.request

# urllib.request.urlretrieve('url', 'path')
urllib.request.urlretrieve('https://www.python.org/', 'PythonOrganization.html')
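urlretrieve also accepts a reporthook callback, invoked after each block is fetched, which makes a simple progress readout; a minimal sketch:

import urllib.request

def progress(block_num, block_size, total_size):
    # total_size is -1 when the server does not send Content-Length
    if total_size > 0:
        pct = min(block_num * block_size * 100 // total_size, 100)
        print(f"\rDownloaded: {pct}%", end="")

urllib.request.urlretrieve('https://www.python.org/', 'PythonOrganization.html', reporthook=progress)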

6. A killer Python tool for downloading videos

you-get[2] currently supports dozens of sites in China and abroad (YouTube, Twitter, Tencent Video, iQIYI, Youku, bilibili, and so on).

pip install you-get

Test it out:

you-get https://www.bilibili.com/video/av52694584/?spm_id_from=333.334.b_686f6d655f706f70756c6172697a65.3

youtube-dl[3] is a similar tool.

7. A worked example

Batch-download the NOAA-CIRES 20th Century Reanalysis 2m air temperature data[4]. Clicking the files one by one would cripple your hand; this is where Python can batch-download the data for you.

First open the page and press F12 to inspect the page source:

[Screenshot: the browser's developer tools, showing the download links as <a> tags inside the div with id="content"]

As you can see, the download links all sit in <a> tags under a div tag. Collect those links one by one, and the files can then be downloaded in batch.

# -*- coding: utf-8 -*-
import urllib.request
from bs4 import BeautifulSoup

rawurl = 'https://www.esrl.noaa.gov/psd/cgi-bin/db_search/DBListFiles.pl?did=118&tid=40290&vid=2227'
content = urllib.request.urlopen(rawurl).read().decode('ascii')  # fetch the page HTML
soup = BeautifulSoup(content, 'lxml')
url_cand_html = soup.find_all(id='content')  # locate the div with id 'content' that holds the urls
list_urls = url_cand_html[0].find_all("a")   # locate the <a> tags holding the file urls
urls = []
for i in list_urls[1:]:
    urls.append(i.get('href'))               # extract the links
for i, url in enumerate(urls):
    print("This is file " + str(i + 1) + " downloading! You still have " +
          str(len(urls) - i - 1) + " files waiting for downloading!!")
    file_name = "./ncfile/" + url.split('/')[-1]  # save location + filename (the ./ncfile/ directory must exist)
    urllib.request.urlretrieve(url, file_name)

Try writing a parallel version of this download yourself first; you're welcome to leave a comment with your solution.
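For reference, a minimal sketch of one possible approach, reusing the ThreadPool pattern from section 4 (it assumes the urls list and the ./ncfile/ directory from the script above):

from multiprocessing.pool import ThreadPool
import urllib.request

def fetch(url):
    # save under ./ncfile/, named after the last segment of the URL
    urllib.request.urlretrieve(url, "./ncfile/" + url.split('/')[-1])

with ThreadPool(8) as pool:
    pool.map(fetch, urls)  # map blocks until every download has finished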

References

[1] requests: https://2.python-requests.org//zh_CN/latest/index.html
[2] you-get: https://github.com/soimort/you-get
[3] youtube-dl: https://github.com/ytdl-org/youtube-dl
[4] NOAA-CIRES 20th Century Reanalysis 2m air temperature data: https://www.esrl.noaa.gov/psd/cgi-bin/db_search/DBListFiles.pl?did=118&tid=40290&vid=2227
