python - Calling os.fsync on the stdout/stderr file descriptors kills a subprocess
After spawning a subprocess using the Python subprocess library, I'm using stderr to pass a message containing serialized data from the child process to the parent process. I want the parent to send back (via stdin) the result of a function applied to that data.

In essence, I have a function inside the subprocess that does this:
    sys.stderr.write("some stuff to write")
    # some time later
    some_var = sys.stdin.read()

However, the parent locks up while waiting for the data on stderr, so I tried calling:

    sys.stderr.flush()
    os.fsync(sys.stderr.fileno())

However, that doesn't work. Nothing after the os.fsync call is executed. In addition, when I call proc.poll() in the parent process, it appears that the child exited with return code 1.
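For context, the parent side is roughly like this (a simplified sketch: the command, the use of readline for framing, and process_message are placeholders, not my actual code):

    import subprocess

    # Spawn the child with stderr and stdin connected to pipes, so the parent
    # can read the serialized message and send a result back on stdin.
    proc = subprocess.Popen(
        ["python", "child_script.py"],   # placeholder command
        stdin=subprocess.PIPE,
        stderr=subprocess.PIPE,
    )

    message = proc.stderr.readline()     # blocks until the child writes and flushes a line
    reply = process_message(message)     # placeholder for the real processing function
    proc.stdin.write(reply)
    proc.stdin.flush()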
What can I do to prevent this? Or should I consider a different approach?
I would consider a different approach: use an independent process (multiprocessing.Process) and two queues (multiprocessing.Queue) to communicate with it, one for input and the other for output. Example of starting a process:
    import multiprocessing

    def processWorker(input, result):
        # Take one item from the input queue, process it,
        # and put the answer on the result queue.
        work = input.get()
        print(work)
        result.put(work * work)

    input = multiprocessing.Queue()
    result = multiprocessing.Queue()

    p = multiprocessing.Process(target=processWorker, args=(input, result))
    p.start()
    input.put(2)
    res = result.get(block=True)
    print(res)

You may then iterate by passing more work to it. Using multiprocessing.Queue is more robust, since you do not need to rely on parsing stdout/stderr and you avoid the related limitations. It also makes it easier to manage several worker processes.
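The worker above handles a single item and then exits; a minimal sketch of one way to make it loop, using a None sentinel (my own convention, not something multiprocessing requires) to tell it when to stop:

    import multiprocessing

    def processWorker(input, result):
        # Keep serving requests until a None sentinel arrives.
        while True:
            work = input.get()
            if work is None:
                break
            result.put(work * work)

    input = multiprocessing.Queue()
    result = multiprocessing.Queue()
    p = multiprocessing.Process(target=processWorker, args=(input, result))
    p.start()

    for item in (2, 3, 4):
        input.put(item)
        print(result.get(block=True))

    input.put(None)   # tell the worker to finish
    p.join()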
Then you may set a timeout on how long you want the call to wait at most, e.g.:

    import queue

    try:
        res = result.get(block=True, timeout=10)
    except queue.Empty:
        print("error")
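If the worker never answers, you will probably also want to clean it up after the timeout. A rough sketch, reusing p and result from above (terminate() and join() are standard multiprocessing.Process methods; the rest is just one possible arrangement):

    import queue

    try:
        res = result.get(block=True, timeout=10)
    except queue.Empty:
        # Give up on the worker: kill it and reclaim the process.
        p.terminate()
        p.join()
        res = None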