python utf8 bom - Convert UTF-8 with BOM to UTF-8 without BOM in Python

Two questions here. I have a set of files which are usually UTF-8 with BOM. I'd like to convert them (ideally in place) to UTF-8 with no BOM. It seems like codecs.StreamRecoder(stream, encode, decode, Reader, Writer, errors) would handle this. But I don't really see any good examples on usage. Would this be the best way to handle this?

source files:

    Tue Jan 17$ file brh-m-157.json
    brh-m-157.json: UTF-8 Unicode (with BOM) text

Also, it would be ideal if we could handle different input encodings without explicitly knowing them (I've seen ASCII and UTF-16). It seems like this should all be feasible. Is there a solution that can take any known Python encoding and output as UTF-8 without BOM?

edit 1: proposed solution from below (thanks!)

    fp = open('brh-m-157.json', 'rw')
    s = fp.read()
    u = s.decode('utf-8-sig')
    s = u.encode('utf-8')
    print fp.encoding
    fp.write(s)

This gives me the following error:

IOError: [Errno 9] Bad file descriptor

Newsflash

I'm being told in the comments that the mistake is opening the file with mode 'rw' instead of 'r+'/'r+b', so I should eventually re-edit my question and remove the solved part.
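Putting that comment into practice, here is a minimal corrected sketch of the snippet above: open in 'r+b' rather than 'rw', seek back to the start before writing, and truncate afterwards. (This sketch uses Python 3 syntax and a throwaway temp file in place of brh-m-157.json.)

```python
import codecs, os, tempfile

# Stand-in for brh-m-157.json: a small JSON file with a UTF-8 BOM.
path = os.path.join(tempfile.mkdtemp(), "sample.json")
with open(path, "wb") as f:
    f.write(codecs.BOM_UTF8 + b'{"key": "value"}')

# 'r+b' gives a descriptor that supports both reading and writing;
# 'rw' is not a valid combination and led to the IOError above.
with open(path, "r+b") as fp:
    u = fp.read().decode("utf-8-sig")  # utf-8-sig strips the BOM if present
    fp.seek(0)                          # rewind before overwriting
    fp.write(u.encode("utf-8"))         # plain utf-8 adds no BOM
    fp.truncate()                       # drop the 3 leftover bytes at the end

with open(path, "rb") as f:
    print(f.read())  # b'{"key": "value"}' -- no BOM prefix
```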

Solution:

    fp = open("file.txt")
    s = fp.read()
    u = s.decode("utf-8-sig")

That gives you a unicode string without the BOM. You can then use

    s = u.encode("utf-8")

to get a normal UTF-8 encoded string back in s. If your files are big, then you should avoid reading them all into memory. The BOM is simply three bytes at the beginning of the file, so you can use this code to strip them out of the file:

    import os, sys, codecs

    BUFSIZE = 4096
    BOMLEN = len(codecs.BOM_UTF8)

    path = sys.argv[1]
    with open(path, "r+b") as fp:
        chunk = fp.read(BUFSIZE)
        if chunk.startswith(codecs.BOM_UTF8):
            i = 0
            chunk = chunk[BOMLEN:]
            while chunk:
                fp.seek(i)
                fp.write(chunk)
                i += len(chunk)
                fp.seek(BOMLEN, os.SEEK_CUR)
                chunk = fp.read(BUFSIZE)
            fp.seek(-BOMLEN, os.SEEK_CUR)
            fp.truncate()

It opens the file, reads a chunk, and writes it out to the file 3 bytes earlier than where it read it. The file is rewritten in place. An easier solution is to write the shortened content to a new file, as in newtover's answer. That would be simpler, but it uses twice the disk space for a short period.
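The two-file alternative mentioned above can be sketched like this (the function name strip_bom_copy and the use of tempfile/shutil are my own choices, not from any answer here): stream from the source into a temporary file, skipping the BOM, then replace the original.

```python
import codecs, os, shutil, tempfile

def strip_bom_copy(path, bufsize=4096):
    """Copy `path` to a temp file without its UTF-8 BOM, then replace it."""
    tmp_fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(tmp_fd, "wb") as out, open(path, "rb") as src:
        chunk = src.read(bufsize)
        if chunk.startswith(codecs.BOM_UTF8):
            chunk = chunk[len(codecs.BOM_UTF8):]  # drop the 3 BOM bytes
        while chunk:
            out.write(chunk)
            chunk = src.read(bufsize)
    shutil.move(tmp_path, path)  # replace the original with the clean copy

# Usage: write a BOM-prefixed file, strip it, and inspect the result.
demo = os.path.join(tempfile.mkdtemp(), "demo.txt")
with open(demo, "wb") as f:
    f.write(codecs.BOM_UTF8 + b"hello world")
strip_bom_copy(demo)
```

The temp file is created in the same directory as the source so that shutil.move is a cheap rename rather than a cross-device copy.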

As for guessing the encoding, you can just loop through the encodings from most to least specific:

    def decode(s):
        for encoding in "utf-8-sig", "utf-16":
            try:
                return s.decode(encoding)
            except UnicodeDecodeError:
                continue
        return s.decode("latin-1")  # will always work

A UTF-16 encoded file won't decode as UTF-8, so we try UTF-8 first. If that fails, then we try UTF-16. Finally, we use Latin-1 — this will always work since all 256 byte values are legal in Latin-1. You may want to return None instead in this case, since it's really a fallback and your code might want to handle it more carefully (if it can).
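To see the fallback chain in action, here is the same function exercised on a few inputs (the function is restated so the snippet is self-contained; the third input is an arbitrary byte string I made up that is invalid in both UTF-8 and UTF-16):

```python
def decode(s):
    # Try encodings from most to least specific, as in the answer above.
    for encoding in "utf-8-sig", "utf-16":
        try:
            return s.decode(encoding)
        except UnicodeDecodeError:
            continue
    return s.decode("latin-1")  # always succeeds: every byte is valid Latin-1

print(decode(b"\xef\xbb\xbfhello"))      # BOM stripped by utf-8-sig
print(decode("hello".encode("utf-16")))  # invalid as UTF-8, decoded as UTF-16
print(decode(b"\xff\x00\xfe"))           # odd length: falls back to latin-1
```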
