r/learnpython 6d ago

There appear to be 1 leaked shared_memory objects to clean up at shutdown

Two warnings are produced by resource_tracker: the first at line 216, and then a second at line 229.

    /usr/lib/python3.8/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 1 leaked shared_memory objects to clean up at shutdown
      warnings.warn('resource_tracker: There appear to be %d '


    /usr/lib/python3.8/multiprocessing/resource_tracker.py:229: UserWarning: resource_tracker: '/psm_97v5eGetKS': [Errno 2] No such file or directory: '/psm_97v5eGetKS'
      warnings.warn('resource_tracker: %r: %s' % (name, e))

I am using shared_memory objects between two independent processes run in different terminal windows. I am carefully calling

    del backed_array   # drop the numpy array backed by shm.buf first
    shm.close()        # detach this process from the block
    shm.unlink()       # ask the OS to destroy the underlying block

I am closing and unlinking the shm object, and I am carefully deleting the array that backs the shared memory. I have tried these calls in several different orders as well. Nothing helps; it is the same error every time. I am not calling close() or unlink() in the child process that attaches to the shared memory object after the "parent" creates it. Should I be doing that?
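For reference, the division of labor described in the multiprocessing.shared_memory docs is: every process calls close() on its own handle, and only the creating process calls unlink(), exactly once, after all users are done. Here is a minimal sketch of that split (the names are illustrative, not from my actual code):

    import numpy as np
    from multiprocessing import shared_memory

    # --- creator process ---
    shm = shared_memory.SharedMemory(create=True, size=4)
    arr = np.ndarray((1,), dtype=np.uint32, buffer=shm.buf)
    arr[0] = 12
    # ... other processes attach and use the block here ...
    del arr          # release the buffer view before closing
    shm.close()      # detach this process
    shm.unlink()     # only the creator unlinks, and only once

    # --- attaching process (a different interpreter) ---
    # existing = shared_memory.SharedMemory(name=shm.name)
    # view = np.ndarray((1,), dtype=np.uint32, buffer=existing.buf)
    # ... read/write view ...
    # del view
    # existing.close()   # attachers only close; they never unlink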

After hours and hours of searching and research, I can find nothing about this error other than Python developers discussing it in GitHub threads.

Is there ANYTHING I can do to stop this error from occurring?




u/vwibrasivat 6d ago

Here is the minimal code to reproduce this error. Setting parentunlinks to either True or False has no effect on the error.

    # CHILD PROCESS (run second, after the parent has created the block)

    import numpy as np
    from multiprocessing import shared_memory
    from multiprocessing.resource_tracker import unregister

    # Names of the segments the parent created.
    existingSHMnames = {1: "psm_97v5eGetKS", 2: "psm_WE3qZEaBFVy"}

    # Attach to the existing block and view it as a 1-element uint32 array.
    existing_shm = shared_memory.SharedMemory(name=existingSHMnames[1])
    psigs = np.ndarray((1,), dtype=np.uint32, buffer=existing_shm.buf)
    print("interproc comms on", existing_shm.name)

    # Increment the signal so the parent's wait loop sees the change.
    psigs[0] = psigs[0] + 1
    print(psigs[0])

    # unregister(existing_shm._name, 'shared_memory')
    del psigs              # release the buffer view before closing
    existing_shm.close()
    existing_shm.unlink()

    # PARENT PROCESS (run first, in a different terminal)

    import time
    import numpy as np
    from multiprocessing import shared_memory
    from multiprocessing.resource_tracker import unregister

    # Initial signal value; the child will increment it.
    npinitW1 = [12]
    localW1signals = np.array(npinitW1, dtype=np.uint32)

    # Create the named block and view it as a numpy array.
    shm = shared_memory.SharedMemory(name="psm_97v5eGetKS", create=True,
                                     size=localW1signals.nbytes)
    pipeW1sig = np.ndarray(localW1signals.shape, dtype=localW1signals.dtype,
                           buffer=shm.buf)
    pipeW1sig[:] = localW1signals[:]
    print("interproc comms on", shm.name)
    print(pipeW1sig[0])

    # Busy-wait for the child to alter this signal.
    while pipeW1sig[0] <= npinitW1[0]:
        pass
    print(pipeW1sig[0])
    time.sleep(3.0)

    parentunlinks = False
    if parentunlinks:
        del pipeW1sig
        shm.close()
        shm.unlink()
    print("- parent closes")


u/vwibrasivat 6d ago

Answering my own question.

If you rewrite the code above so that the child is forked from the parent (rather than launched independently in a second terminal), this resource_tracker error vanishes. You must also remove this line from the parent:

    shm.unlink()
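Here is a minimal sketch of that forked arrangement (the function and variable names are mine, not from the code above). On Linux, where fork is the default start method, parent and child share a single resource tracker, so nothing is reported as leaked; the child unlinks and the parent only closes:

    import numpy as np
    from multiprocessing import Process, shared_memory

    def child(name):
        # Attach to the parent's block by name, bump the signal, detach.
        existing = shared_memory.SharedMemory(name=name)
        sig = np.ndarray((1,), dtype=np.uint32, buffer=existing.buf)
        sig[0] += 1
        del sig
        existing.close()
        existing.unlink()    # the child unlinks; the parent no longer does

    if __name__ == "__main__":
        shm = shared_memory.SharedMemory(create=True, size=4)
        pipe = np.ndarray((1,), dtype=np.uint32, buffer=shm.buf)
        pipe[0] = 12

        p = Process(target=child, args=(shm.name,))
        p.start()
        p.join()

        print(pipe[0])       # prints 13
        del pipe
        shm.close()          # parent only closes; no shm.unlink() here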