Skipping exceptions in Python: how can a UnicodeDecodeError be ignored while reading a file, so that processing moves on to the next line?

When Python reads a file that is not valid UTF-8 and contains non-ASCII characters, a UnicodeDecodeError is raised. The fix is to specify an error handler such as 'surrogateescape' when opening the file, which allows the file to be read even when it contains undecodable bytes. If errors need to be detected line by line, a regular expression can be used to find lines containing characters in the surrogate range.

I have to read a text file into Python. The file encoding is:

file -bi test.csv

text/plain; charset=us-ascii

This is a third-party file, and I get a new one every day, so I would rather not change it. The file has non-ASCII characters, such as Ö. I need to read the lines using Python, and I can afford to skip any line that contains a non-ASCII character.

My problem is that when I read the file in Python, a UnicodeDecodeError is raised when the line containing a non-ASCII character is reached, and I cannot read the rest of the file.

Is there a way to avoid this? If I try this:

import codecs

fileHandle = codecs.open("test.csv", encoding='utf-8')

try:
    for line in fileHandle:
        print(line, end="")
except UnicodeDecodeError:
    pass

then when the error is raised the for loop ends and I cannot read the remainder of the file. I want to skip the line that causes the error and carry on. I would rather not make any changes to the input file, if possible.

Is there any way to do this?

Thank you very much.

Solution

Your file doesn't appear to use the UTF-8 encoding. It is important to use the correct codec when opening a file.
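
If the stray bytes come from a single-byte encoding such as Latin-1 or cp1252 (a plausible guess for a character like Ö, though nothing in the question confirms it), opening the file with that codec avoids the errors entirely. A minimal sketch under that assumption:

# Assumes the file is actually cp1252-encoded; this is a guess about the
# third-party file, not something established by the question.
with open("test.csv", encoding="cp1252") as f:
    for line in f:
        print(line, end="")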

You can tell open() how to treat decoding errors, with the errors keyword:

errors is an optional string that specifies how encoding and decoding errors are to be handled; this cannot be used in binary mode. A variety of standard error handlers are available, though any error handling name that has been registered with codecs.register_error() is also valid. The standard names are:

'strict' to raise a ValueError exception if there is an encoding error. The default value of None has the same effect.

'ignore' ignores errors. Note that ignoring encoding errors can lead to data loss.

'replace' causes a replacement marker (such as '?') to be inserted where there is malformed data.

'surrogateescape' will represent any incorrect bytes as low surrogate code points ranging from U+DC80 to U+DCFF. These code points will then be turned back into the same bytes when the surrogateescape error handler is used when writing data. This is useful for processing files in an unknown encoding.

'xmlcharrefreplace' is only supported when writing to a file. Characters not supported by the encoding are replaced with the appropriate XML character reference &#nnn;.

'backslashreplace' (also only supported when writing) replaces unsupported characters with Python’s backslashed escape sequences.

Opening the file with anything other than 'strict' ('ignore', 'replace', etc.) will then let you read the file without exceptions being raised.
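
For example, a minimal sketch using the 'replace' handler (the filename is taken from the question; 'ignore' would drop the offending bytes instead of substituting U+FFFD for them):

# Reads the whole file without raising UnicodeDecodeError; undecodable
# bytes show up as the replacement character U+FFFD.
with open("test.csv", encoding="utf-8", errors="replace") as f:
    for line in f:
        print(line, end="")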

Note that decoding takes place per buffered block of data, not per textual line. If you must detect errors on a line-by-line basis, use the surrogateescape handler and test each line read for codepoints in the surrogate range:

import re

_surrogates = re.compile(r"[\uDC80-\uDCFF]")

def detect_decoding_errors_line(l, _s=_surrogates.finditer):
    """Return decoding errors in a line of text

    Works with text lines decoded with the surrogateescape
    error handler.

    Returns a list of (pos, byte) tuples
    """
    # DC80 - DCFF encode bad bytes 80-FF
    return [(m.start(), bytes([ord(m.group()) - 0xDC00]))
            for m in _s(l)]

E.g.

with open("test.csv", encoding="utf8", errors="surrogateescape") as f:

for i, line in enumerate(f, 1):

errors = detect_decoding_errors_line(line)

if errors:

print(f"Found errors on line {i}:")

for (col, b) in errors:

print(f" {col + 1:2d}: {b[0]:02x}")

Take into account that not all decoding errors can be recovered from gracefully. While UTF-8 is designed to be robust in the face of small errors, other multi-byte encodings such as UTF-16 and UTF-32 cannot cope with dropped or extra bytes, which affects how accurately line separators can be located. The above approach can then result in the remainder of the file being treated as one long line; if that 'line' is large enough, it can in turn lead to a MemoryError exception.
