How do I fix HTTP Error 400: Bad Request when running this Python script on a Linux server?

I'm trying to run this Python code on Linux (it's my second day using Linux and Python). I copied the program from GitHub and had to change the request headers, because the server I'm on uses Firefox 60.2.2. The script is supposed to collect images from the web and save them to a directory, but I'm getting an HTTP Error 400: Bad Request.

If there is a solution, please explain it, because I'm new to this project.
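I'm not sure whether the problem is the header or the URL. One way to narrow it down would be a minimal check that sends only a User-Agent header against a plain search page; the sketch below is just an illustration (the test query is a placeholder, and the User-Agent shown is the standard Firefox 60 string with "Linux i686" and "Gecko/20100101", not the "i986"/"20200101" variant in my copy of the script):

import urllib.request

# Placeholder check: request a plain Google search page with only a
# User-Agent header set, to see whether the header by itself triggers a 400.
headers = {'User-Agent': 'Mozilla/5.0 (X11; Linux i686; rv:60.0) Gecko/20100101 Firefox/60.0'}
req = urllib.request.Request('https://www.google.com/search?q=test', headers=headers)
with urllib.request.urlopen(req) as resp:
    print(resp.status)  # 200 would suggest the header is accepted and the URL is the issue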

Code:

#!/usr/bin/env python
import os
import urllib.request as ulib
from bs4 import BeautifulSoup as Soup
import json

url_a = 'https://www.google.com/search?ei=1m7NWePfFYaGmQG51q7IBg&hl=en&q={}'
url_b = '\&tbm=isch&ved=0ahUKEwjjovnD7sjWAhUGQyYKHTmrC2kQuT0I7gEoAQ&start={}'
url_c = '\&yv=2&vet=10ahUKEwjjovnD7sjWAhUGQyYKHTmrC2kQuT0I7gEoAQ.1m7NWePfFYaGmQG51q7IBg'
url_d = '\.i&ijn=1&asearch=ichunk&async=_id:rg_s,_pms:s'
url_base = ''.join((url_a, url_b, url_c, url_d))
headers = {'User-Agent':'Mozilla/5.0 (X11; Linux i986; rv:60.0) Gecko/20200101 Firefox/60.0'}

def get_links(search_name):
    # Build the image-search URL and collect the src of every <img> tag in the response
    search_name = search_name.replace(' ', '+')
    url = url_base.format(search_name, 0)
    request = ulib.Request(url, None, headers)
    json_string = ulib.urlopen(request).read()
    page = json.loads(json_string)
    new_soup = Soup(page[1][1], 'lxml')
    images = new_soup.find_all('img')
    links = [image['src'] for image in images]
    return links

def save_images(links, search_name):
    # Download each link into a directory named after the search terms
    directory = search_name.replace(' ', '_')
    if not os.path.isdir(directory):
        os.mkdir(directory)
    for i, link in enumerate(links):
        savepath = os.path.join(directory, '{:06}.png'.format(i))
        ulib.urlretrieve(link, savepath)

if __name__ == '__main__':
    search_name = 'auv side scan sonar sunken ship'
    links = get_links(search_name)
    save_images(links, search_name)

(tensorflow) [usr-login2 darkflow-master]$ ./get_jimages.py
Traceback (most recent call last):
  File "./get_jimages.py", line 40, in <module>
    links = get_links(search_name)
  File "./get_jimages.py", line 20, in get_links
    json_string = ulib.urlopen(request).read()
  File "/home/usr/anaconda3/envs/tensorflow/lib/python3.6/urllib/request.py", line 223, in urlopen
    return opener.open(url, data, timeout)
  File "/home/usr/anaconda3/envs/tensorflow/lib/python3.6/urllib/request.py", line 532, in open
    response = meth(req, response)
  File "/home/usr/anaconda3/envs/tensorflow/lib/python3.6/urllib/request.py", line 642, in http_response
    'http', request, response, code, msg, hdrs)
  File "/home/usr/anaconda3/envs/tensorflow/lib/python3.6/urllib/request.py", line 570, in error
    return self._call_chain(*args)
  File "/home/usr/anaconda3/envs/tensorflow/lib/python3.6/urllib/request.py", line 504, in _call_chain
    result = func(*args)
  File "/home/usr/anaconda3/envs/tensorflow/lib/python3.6/urllib/request.py", line 650, in http_error_default
    raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 400: Bad Request
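For context, the literal backslashes in url_b, url_c and url_d end up inside the request URL (Python does not treat \& or \. as escape sequences, so they stay in the string), and stray characters like that in the query string are one plausible reason for a 400 response. A minimal sketch of building a clean, percent-encoded image-search URL to test whether a well-formed URL is accepted — note this drops the asearch/async parameters, so the response would be ordinary HTML rather than the JSON the script parses, and build_image_search_url is just an illustrative helper:

import urllib.parse
import urllib.request

headers = {'User-Agent': 'Mozilla/5.0 (X11; Linux i686; rv:60.0) Gecko/20100101 Firefox/60.0'}

def build_image_search_url(search_name, start=0):
    # Percent-encode the free-text query instead of joining raw fragments,
    # so no backslashes or spaces leak into the final URL.
    query = urllib.parse.quote_plus(search_name)
    return 'https://www.google.com/search?q={}&tbm=isch&start={}'.format(query, start)

url = build_image_search_url('auv side scan sonar sunken ship')
req = urllib.request.Request(url, headers=headers)
with urllib.request.urlopen(req) as resp:
    print(resp.status, len(resp.read()))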
