Python - What is queue.task_done() used for?

I wrote a script that has multiple threads (created with threading.Thread) fetching URLs from a Queue using queue.get_nowait(), and then processing the HTML. I am new to multi-threaded programming, and am having trouble understanding the purpose of the queue.task_done() function.

When the Queue is empty, queue.get_nowait() raises the queue.Empty exception. So I don't understand the need for each thread to call the task_done() function. We know that we're done with the queue when it's empty, so why do we need to notify it that the worker threads have finished their work (which has nothing to do with the queue, after they've gotten the URL from it)?

Could someone provide me with a code example (ideally using urllib, file I/O, or something other than fibonacci numbers and printing "Hello") that shows me how this function would be used in practical applications?

# Answer 1


Queue.task_done is not there for the workers' benefit. It is there to support Queue.join.

If I give you a box of work assignments, do I care about when you've taken everything out of the box?

No. I care about when the work is done. Looking at an empty box doesn't tell me that. You and 5 other guys might still be working on stuff you took out of the box.

Queue.task_done lets workers say when a task is done. Someone waiting for all the work to be done with Queue.join will wait until enough task_done calls have been made, not merely until the queue is empty.
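A minimal sketch of that difference (the worker and queue names here are illustrative, not code from the question's script): join() returns only after every put() has been matched by a task_done(), even if the queue itself looked empty long before.

```python
import queue, threading, time

q = queue.Queue()

def worker():
    while True:
        item = q.get()        # the queue may be empty immediately after this...
        time.sleep(0.2)       # ...but work on the item is still in progress
        print('finished', item)
        q.task_done()         # only now does join() count this item as done

threading.Thread(target=worker, daemon=True).start()

for i in range(3):
    q.put(i)

q.join()                      # unblocks after three task_done() calls, not when the queue empties
print('all work done')
```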

# Answer 2

Could someone provide me with a code example (ideally using urllib, file I/O, or something other than fibonacci numbers and printing "Hello") that shows me how this function would be used in practical applications?

@user2357112's answer nicely explains the purpose of task_done, but lacks the requested example. Here is a function that calculates checksums of an arbitrary number of files and returns a dict mapping each file name to the corresponding checksum. Inside the function, the work is divided among several threads.

The function uses Queue.join to wait until the workers have finished their assigned tasks, so it is safe to return the dictionary to the caller. It is a convenient way to wait for all files to have been processed, rather than merely dequeued.

```python
import threading, queue, hashlib

def _work(q, checksums):
    while True:
        filename = q.get()
        if filename is None:
            # Propagate the sentinel so the other workers exit too.
            q.put(None)
            break
        try:
            sha = hashlib.sha256()
            with open(filename, 'rb') as f:
                for chunk in iter(lambda: f.read(65536), b''):
                    sha.update(chunk)
            checksums[filename] = sha.digest()
        finally:
            q.task_done()

def calc_checksums(files):
    q = queue.Queue()
    checksums = {}
    for _ in range(4):   # several worker threads, as described above
        threading.Thread(target=_work, args=(q, checksums)).start()
    for f in files:
        q.put(f)
    q.join()
    q.put(None)          # tell workers to exit
    return checksums
```

A note on the GIL: since the code in hashlib internally releases the GIL while calculating the checksum, using multiple threads yields a measurable (1.75x-2x depending on Python version) speedup compared to the single-threaded variant.
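For completeness, a hypothetical invocation might look like this (the file names are made up for illustration):

```python
if __name__ == '__main__':
    results = calc_checksums(['report.pdf', 'data.csv', 'archive.zip'])
    for name, digest in results.items():
        print(name, digest.hex())
```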

# Answer 3

.task_done() is used to signal to .join() that the processing of an item is done.

💡 If you use .join() and don't call .task_done() for every processed item, your script will hang forever.

Ain't nothin' like a short example:

```python
import logging
import queue
import threading
import time

items_queue = queue.Queue()
running = False

def items_queue_worker():
    while running:
        try:
            item = items_queue.get(timeout=0.01)
            if item is None:
                continue
            try:
                process_item(item)
            finally:
                items_queue.task_done()
        except queue.Empty:
            pass
        except:
            logging.exception('error while processing item')

def process_item(item):
    print('processing {} started...'.format(item))
    time.sleep(0.5)
    print('processing {} done'.format(item))

if __name__ == '__main__':
    running = True

    # Create 10 items_queue_worker threads
    worker_threads = 10
    for _ in range(worker_threads):
        threading.Thread(target=items_queue_worker).start()

    # Populate your queue with data
    for i in range(100):
        items_queue.put(i)

    # Wait for all items to finish processing
    items_queue.join()

    running = False
```

# Answer 4

"Read the source, Luke!" -- Obi-one Codobi

The source for asyncio.Queue is pretty short.

- The number of unfinished tasks goes up by one when you put something on the queue.
- It goes down by one when you call task_done().
- join() awaits there being no unfinished tasks (see the small counter demo after this list).
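A tiny demonstration of that bookkeeping. Note that unfinished_tasks is an internal attribute of CPython's queue.Queue, peeked at here only in the spirit of reading the source; it is not a documented API.

```python
import queue

q = queue.Queue()
q.put('a')
q.put('b')
print(q.unfinished_tasks)   # 2 -- each put() bumped the counter

q.get()
q.task_done()
print(q.unfinished_tasks)   # 1 -- task_done() decremented it; join() unblocks when it reaches 0
```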

This makes join useful if and only if you are calling task_done(). Using the classic bank analogy:

- People come in the doors and get in line; the door is a producer doing a q.put().
- When a teller is idle and a person is in line, they go to the teller window; the teller does a q.get().
- When the teller has finished helping the person, they are ready for the next one; the teller does a q.task_done().
- At 5 p.m., the doors are locked; the door task finishes.
- You wait until both the line is empty and each teller has finished helping the person in front of them: await q.join().
- Then you send the tellers home, who are now all idling with an empty queue: for teller in tellers: teller.cancel().

Without task_done(), you cannot know when every teller is done with people. You cannot send a teller home while they still have a person at their window.
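A minimal asyncio sketch of the analogy (the teller and line names and the timings are illustrative, not code from the answer):

```python
import asyncio

async def teller(name, line):
    while True:
        person = await line.get()          # take the next person from the line
        try:
            await asyncio.sleep(0.1)       # help them
            print(f'{name} finished with person {person}')
        finally:
            line.task_done()               # ready for the next person

async def main():
    line = asyncio.Queue()
    tellers = [asyncio.create_task(teller(f'teller-{i}', line)) for i in range(3)]

    for person in range(10):               # the door: a producer doing put()
        line.put_nowait(person)

    await line.join()                      # wait until every person has been helped
    for t in tellers:                      # 5 p.m.: send the tellers home
        t.cancel()

asyncio.run(main())
```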
