Python 3: How do I write to the same file from multiple processes without messing it up?

If this is a continuation of your project from yesterday, then you already have your download list in memory - just remove entries from the loaded list as the processes finish downloading them, and write the whole list back to the input file only once, after you exit your 'downloader'. There is no reason to keep writing the changes constantly.
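For illustration, here is a minimal sketch of this first approach. The `download_one()` worker and the `input_links.dat` file name are assumptions for the example, not part of the original code:

    from multiprocessing import Pool

    def download_one(link):  # hypothetical worker - replace with your actual download logic
        ...  # perform the download here
        return link  # return the link so the parent knows which entry finished

    if __name__ == "__main__":
        with open("input_links.dat") as f:  # load the download list into memory
            pending = {row.strip() for row in f if row.strip()}

        with Pool(processes=5) as pool:
            # remove each entry from the in-memory list as soon as its process finishes
            for done in pool.imap_unordered(download_one, sorted(pending)):
                pending.discard(done)

        # write the whole (now reduced) list back only once, after the pool is done
        with open("input_links.dat", "w") as f:
            f.writelines(link + "\n" for link in pending)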

If you want to know (say, from an external process) when a URL has been downloaded, even while your 'downloader' is still running, then write a new line to downloaded.dat each time a process returns a successful download.

Of course, in both cases, do the writing from within your main process/thread, so you don't have to worry about mutexes.
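A minimal sketch of that second approach follows - the key point is that only the main process ever touches the file, so no lock is needed (again, `download_one()` and the link list are assumptions for the example):

    from multiprocessing import Pool

    def download_one(link):  # hypothetical worker, runs in a child process
        ...  # perform the download here
        return link  # report back which link succeeded

    if __name__ == "__main__":
        links = ["http://example.com/a", "http://example.com/b"]
        with Pool(processes=5) as pool, open("downloaded.dat", "a") as diff_file:
            # results arrive in the parent as soon as each child finishes,
            # so this loop is the only place that ever writes to the file
            for done in pool.imap_unordered(download_one, links):
                diff_file.write(done + "\n")
                diff_file.flush()  # make the line visible to external readers immediately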

UPDATE - Here's how to do this with an additional file, using the same code base as yesterday:

    from itertools import cycle
    from multiprocessing import Pool

    # `Downloader` is the class from yesterday's code base - import it
    # from wherever it lives in your project, e.g.:
    # from downloader import Downloader

    def init_downloader(params):  # our downloader initializator
        downloader = Downloader(**params[0])  # instantiate our downloader
        downloader.run(params[1])  # run our downloader
        return params  # job finished, return the same params for identification

    if __name__ == "__main__":  # important protection for cross-platform use
        downloader_params = [  # Downloaders will be initialized using these params
            {"port_number": 7751},
            {"port_number": 7851},
            {"port_number": 7951}
        ]
        downloader_cycle = cycle(downloader_params)  # use a cycle for round-robin distribution
        with open("downloaded_links.dat", "a+") as diff_file:  # open your diff file
            diff_file.seek(0)  # rewind the diff file to the beginning to capture all lines
            diff_links = {row.strip() for row in diff_file}  # load downloaded links into a set
            with open("input_links.dat", "r+") as input_file:  # open your input file
                available_links = []
                download_jobs = []  # store our downloader parameters + a link here
                # read our file line by line and filter out downloaded links
                for row in input_file:  # loop through our file
                    link = row.strip()  # remove the extra whitespace to get the link
                    if link not in diff_links:  # make sure the link is not already downloaded
                        available_links.append(row)
                        download_jobs.append([next(downloader_cycle), link])
                input_file.seek(0)  # rewind our input file
                input_file.truncate()  # clear out the input file
                input_file.writelines(available_links)  # store back the available links
            diff_file.seek(0)  # rewind the diff file
            diff_file.truncate()  # blank out the diff file now that the input is updated
            # and now let's get to business...
            if download_jobs:
                download_pool = Pool(processes=5)  # make our pool use 5 processes
                # run asynchronously so we can capture results as soon as they are available
                for response in download_pool.imap_unordered(init_downloader, download_jobs):
                    # since it returns the same parameters, the second item is a link
                    # add the link to our `diff` file so it doesn't get downloaded again
                    diff_file.write(response[1] + "\n")
                download_pool.close()  # no more jobs to submit
                download_pool.join()  # wait for the workers to shut down cleanly
            else:
                print("Nothing left to download...")

As I wrote in the comments, the whole idea is to use a file to store the downloaded links as they get downloaded, and then, on the next run, to filter out the already-downloaded links and update the input file. That way, even if you kill the script forcefully, it will always resume where it left off (except for partial downloads).
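To make the resume behaviour concrete, here is a hypothetical two-run walkthrough (the link names are made up):

    Run 1: input_links.dat holds link-a, link-b, link-c; downloaded_links.dat is empty.
           All three jobs are dispatched; link-a and link-b finish, then the script is killed.
           downloaded_links.dat now contains link-a and link-b.
    Run 2: diff_links = {link-a, link-b}, so only link-c survives the filter;
           input_links.dat is rewritten to contain just link-c, downloaded_links.dat is blanked,
           and once link-c finishes it is appended to downloaded_links.dat again.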
