I want to build an image site. A crawler I wrote earlier scraped a single category from one site and the data alone came to 33GB. Running that in production would cost too much, so I set off on a journey of image compression.
1. Download the official encoder binary: http://developers.google.com/speed/webp
2. Call the cwebp.exe program directly to do the compression. The code is very simple:
# -*- coding: utf-8 -*-
from glob import glob
import os
from threading import Thread
from PIL import Image

def convert_img_type(infile, index):
    encoder_path = "C:/Users/ruonan/libwebp-1.0.0-windows-x64/bin/cwebp.exe"
    output_path = "C:/Test01/20150806205033/webp/"
    if not os.path.exists(output_path):
        os.makedirs(output_path)
    new_size = get_new_size(infile)
    # Quote the paths so the command survives spaces in file names.
    command = ('"%s" -q 80 -resize %d %d "%s" -o "%s%d.webp"'
               % (encoder_path, new_size[0], new_size[1],
                  infile, output_path, index))
    os.system(command)

def get_new_size(infile):
    # Cap the width at 500px (a typical phone screen width)
    # and scale the height to keep the aspect ratio.
    img = Image.open(infile)
    width, height = img.size
    phone_px = 500
    if width <= phone_px:
        return width, height
    scale = float(phone_px) / width
    return phone_px, int(height * scale)

def start():
    index = 1
    input_path = "C:/Test01/20150806205033/*.jpg"
    for infile in glob(input_path):
        print(infile)
        t = Thread(target=convert_img_type, args=(infile, index))
        t.start()
        # join() right after start() makes this effectively sequential;
        # use a thread pool if real concurrency is needed.
        t.join()
        index += 1

if __name__ == "__main__":
    start()
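Building the command by string concatenation is fragile: it breaks as soon as a path contains characters the shell treats specially. A more robust sketch passes subprocess an argument list instead, so no shell quoting is involved (the paths and the helper name below are illustrative, not from the original script):

```python
import subprocess

def build_cwebp_args(encoder, infile, outfile, width, height, quality=80):
    # Argument-list form: each path is one argument, so spaces are harmless.
    return [encoder, "-q", str(quality),
            "-resize", str(width), str(height),
            infile, "-o", outfile]

if __name__ == "__main__":
    args = build_cwebp_args("cwebp", "in dir/photo.jpg", "out/1.webp", 500, 375)
    print(args)
    # subprocess.run(args, check=True)  # uncomment when cwebp is on PATH
```

Separating command construction from execution also makes the code easy to unit-test without the encoder installed.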
Original image: 113KB; after compression: 13KB.
The core compression command boils down to:
cwebp input.png -q 80 -o output.webp
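If your Pillow build was compiled with WebP support, you can skip the external encoder entirely and let Pillow resize and re-encode in one pass. A minimal sketch (the 500px cap mirrors the script above; the function name and paths are illustrative):

```python
from PIL import Image

def compress_to_webp(infile, outfile, max_width=500, quality=80):
    img = Image.open(infile)
    if img.width > max_width:
        # Keep the aspect ratio while capping the width.
        new_height = int(img.height * max_width / img.width)
        img = img.resize((max_width, new_height))
    img.save(outfile, "WEBP", quality=quality)
```

This avoids shelling out per image, though the standalone cwebp encoder exposes more tuning options.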
The current pain point is that iOS and quite a few browsers do not support WebP, nor do mini programs, so compatibility still has to be solved.
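Until every client supports WebP, one common workaround is to keep a JPEG fallback alongside each WebP file and pick the format per request from the HTTP Accept header, since browsers that can decode WebP advertise image/webp there. A hypothetical helper:

```python
def pick_image_format(accept_header):
    # Clients that can decode WebP include "image/webp" in Accept.
    if "image/webp" in (accept_header or ""):
        return "webp"
    return "jpg"

# Example: a Chrome-style Accept header selects WebP,
# anything else falls back to JPEG.
print(pick_image_format("image/webp,image/apng,*/*;q=0.8"))
print(pick_image_format("image/jpeg,*/*"))
```

Reverse proxies such as nginx can do the same negotiation at the edge, which keeps the application code format-agnostic.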
That's programming for you: solving one problem often brings a new challenge. It keeps things interesting.