Recently I wanted to split a very long novel into pieces so I could import it into that popular text-to-speech assistant on 吾爱 and generate audio for listening to the book.
First, the packaged file: TXT分割.exe - 123云盘
I happen to have a ChatGPT account, so I asked the AI about it. It gave me a decent piece of code, which I tweaked a little, resulting in the source below:
import codecs

input_fn = input('Enter the name of the file to split\n(include the extension, or a full path if it is not in this directory; no quotes):\n')
output_fnh = input('Enter the prefix for the output file names (no slashes):\n')
output_fls = int(input('Enter the number of lines per output file:\n'))

def split_file(input_file, output_prefix, num_lines_per_file):
    # Open the input file in UTF-8 encoding
    with codecs.open(input_file, 'r', encoding='utf-8') as f:
        file_number = 1    # index of the current output file
        lines_in_file = 0  # lines written to the current output file
        output_file = None # opened lazily, so no empty trailing file is created
        # Iterate over the lines in the input file
        for line in f:
            # Open the next output file on demand
            if output_file is None:
                output_file = codecs.open(f'{output_prefix}_{file_number}.txt', 'w', encoding='utf-8')
            # Write the line to the current output file
            output_file.write(line)
            lines_in_file += 1
            # Once the current file is full, close it and move to the next index
            if lines_in_file >= num_lines_per_file:
                output_file.close()
                output_file = None
                file_number += 1
                lines_in_file = 0
        # Close the last output file if it is still open
        if output_file is not None:
            output_file.close()

# Split the given input file into chunks of the requested number of lines
split_file(input_fn, output_fnh, output_fls)
Usage is also very simple:
1. Put the .py file and the .txt file in the same folder (or note down the path of the .txt file).
2. Run the .py file with a Python interpreter, type the name or path of the .txt file (including the extension), and press Enter.
3. Type the prefix for the output file names (no slashes here), and press Enter.
4. Type how many lines each file should have, and press Enter.
5. The split files will then appear in the folder containing the .py file.
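If you'd rather skip the interactive prompts, the same splitting logic can be called directly from code. Here is a minimal self-contained sketch; the sample file name novel.txt and the prefix part are just made up for the demo:

```python
import codecs
import os

def split_file(input_file, output_prefix, num_lines_per_file):
    """Split input_file into numbered UTF-8 files of at most num_lines_per_file lines each."""
    with codecs.open(input_file, 'r', encoding='utf-8') as f:
        file_number, lines_in_file, output_file = 1, 0, None
        for line in f:
            # Open the next output file lazily, only when there is a line to write
            if output_file is None:
                output_file = codecs.open(f'{output_prefix}_{file_number}.txt', 'w', encoding='utf-8')
            output_file.write(line)
            lines_in_file += 1
            # Close the file once it reaches the requested size
            if lines_in_file >= num_lines_per_file:
                output_file.close()
                output_file, file_number, lines_in_file = None, file_number + 1, 0
        if output_file is not None:
            output_file.close()

# Create a small sample "novel" of 7 lines, then split it 3 lines per file
with codecs.open('novel.txt', 'w', encoding='utf-8') as f:
    f.write(''.join(f'line {i}\n' for i in range(1, 8)))

split_file('novel.txt', 'part', 3)

# 7 lines at 3 per file -> part_1.txt, part_2.txt, part_3.txt (1 line in the last)
print(sorted(p for p in os.listdir('.') if p.startswith('part_')))
```

Because the output file is opened lazily, no empty extra file is left behind when the total line count is an exact multiple of the chunk size.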