DataLoader leaking Semaphores

Following up on this error report: https://github.com/pytorch/pytorch/issues/11727

We concluded that multiprocessing.set_start_method('spawn') had to be called inside the if __name__ == '__main__' guard for the error to go away; however, that no longer holds when the script is launched via torch.multiprocessing.spawn (see the sketch below).
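For reference, a minimal sketch of that earlier workaround (the DataLoader call here just mirrors the repro below):

import torch
from torch import multiprocessing
from torch.utils.data import DataLoader

if __name__ == '__main__':
    # Calling set_start_method inside the guard was enough to avoid the
    # warning in the earlier issue.
    multiprocessing.set_start_method('spawn')
    list(DataLoader([torch.tensor(i) for i in range(10)], num_workers=4))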

🐛 Bug

/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/semaphore_tracker.py:144: UserWarning: semaphore_tracker: There appear to be 1 leaked semaphores to clean up at shutdown
  len(cache))
/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/semaphore_tracker.py:144: UserWarning: semaphore_tracker: There appear to be 1 leaked semaphores to clean up at shutdown
  len(cache))
/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/semaphore_tracker.py:144: UserWarning: semaphore_tracker: There appear to be 1 leaked semaphores to clean up at shutdown
  len(cache))
/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/semaphore_tracker.py:144: UserWarning: semaphore_tracker: There appear to be 1 leaked semaphores to clean up at shutdown
  len(cache))

To Reproduce

from torch import multiprocessing
from torch.utils.data import DataLoader
import torch

# A Lock created at module scope registers a semaphore with the semaphore
# tracker; under 'spawn', every child re-creates it when it re-imports
# this module.
mp_lock = multiprocessing.Lock()

def main(device_index):
    # num_workers=4 matches the four warnings shown above.
    list(DataLoader([torch.tensor(i) for i in range(10)], num_workers=4))


if __name__ == '__main__':
    torch.multiprocessing.spawn(main)
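Running this script prints the four semaphore_tracker warnings shown above, presumably one per DataLoader worker.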

Environment

  • PyTorch Version (e.g., 1.0): 1.1.post2
  • OS (e.g., Linux): macOS
  • How you installed PyTorch (conda, pip, source): pip
  • Python version: 3.7.3

Answer from SsnL

Also, I'm not sure what we can do about this, though. This is a pure-Python (no PyTorch) repro:

import multiprocessing as mp

# Module-level Lock: the 'spawn' child re-imports this module, creates its
# own copy, and thereby registers a semaphore with the semaphore tracker.
mp_lock = mp.Lock()

def w():
    pass

def main():
    # Daemonic children, mirroring DataLoader's worker processes.
    ps = [mp.Process(target=w) for _ in range(4)]
    for p in ps:
        p.daemon = True
        p.start()
    for p in ps:
        p.join()

if __name__ == '__main__':
    p = mp.get_context('spawn').Process(target=main)
    p.start()
    p.join()
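Assuming the module-level Lock is what the tracker ends up reporting, creating it only under the __main__ guard should keep the spawned child from registering a semaphore of its own at import time. An untested sketch, not a fix for DataLoader itself:

import multiprocessing as mp

def w():
    pass

def main():
    ps = [mp.Process(target=w) for _ in range(4)]
    for p in ps:
        p.daemon = True
        p.start()
    for p in ps:
        p.join()

if __name__ == '__main__':
    # Guarded creation: the spawned child re-imports this module but never
    # executes this block, so no extra semaphore gets registered there.
    mp_lock = mp.Lock()
    p = mp.get_context('spawn').Process(target=main)
    p.start()
    p.join()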

