Bug: using DistributedDataParallel together with logging causes duplicate logger output
When DistributedDataParallel is used, the forward pass runs the following code:
    # in torch/nn/parallel/distributed.py, DistributedDataParallel.forward()
    if torch.is_grad_enabled() and self.reducer._rebuild_buckets():
        logging.info("Reducer buckets have been rebuilt in this iteration.")
        self._has_rebuilt_buckets = True
This logging.info(...) call goes through the module-level logging functions, which call logging.basicConfig() implicitly when the root logger has no handlers yet, attaching a StreamHandler to it. As a result, logger.parent.handlers becomes:

    [<StreamHandler <stderr> (NOTSET)>]

Because a logger propagates its records to its ancestors by default, every record is now also emitted by this root handler, so each message is printed twice.
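The effect can be reproduced without DDP at all. A minimal sketch, assuming a user-side logger named "train" (the name and handler setup are illustrative, not from the original post):

    import logging

    logger = logging.getLogger("train")
    logger.setLevel(logging.INFO)
    logger.addHandler(logging.StreamHandler())

    logger.info("before")   # printed once, by the logger's own handler

    # Simulates the call inside DistributedDataParallel.forward(): the
    # module-level logging.info() sees that the root logger has no handlers
    # and calls logging.basicConfig(), attaching a StreamHandler to it.
    logging.info("Reducer buckets have been rebuilt in this iteration.")

    print(logger.parent.handlers)   # [<StreamHandler <stderr> (NOTSET)>]
    logger.info("after")    # printed twice: own handler, then the root's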
The simplest fix: before emitting any output, add

    logger.parent = None

which detaches the logger from the root logger and resolves the problem.
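A minimal sketch of the fix, using the same illustrative logger as above:

    import logging

    logger = logging.getLogger("train")
    logger.setLevel(logging.INFO)
    logger.addHandler(logging.StreamHandler())

    # Detach the logger from the root logger, so its records no longer
    # reach the StreamHandler that the implicit basicConfig() attached.
    logger.parent = None

    logger.info("logged exactly once")

Note that logger.propagate = False reaches the same result through the documented API; reassigning parent is a blunter workaround that also severs any intentionally configured ancestor handlers.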