Once you create the process, you can read its
stdout until it reaches end-of-file, which indicates that the process has exited. But there are a couple of problems.
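A minimal sketch of that read-until-EOF loop (the inline child command here is my own illustration, not from the article):

```python
import subprocess
import sys

# Iterate over the stdout pipe; the loop ends at EOF, which happens
# when the child exits (or closes its stdout).
process = subprocess.Popen(
    [sys.executable, "-c", "print('hello'); print('world')"],
    stdout=subprocess.PIPE,
)
lines = [line.decode().rstrip("\n") for line in process.stdout]
process.wait()
print(lines)
```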
The first is that the process may fill up
the stderr pipe and block trying to write more. That is solved with a background thread that reads stderr for you. In this example, I just copy it to an in-memory buffer to read after the process exits. There are other options, depending on what you want to do with the data stream.
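One such option is to hand each stderr line to a callback as it arrives instead of buffering everything; here is a hedged sketch (the inline child command and the `drain` helper are my own, not from the article):

```python
import subprocess
import sys
import threading

def drain(stream, callback):
    # Read the child's stderr line by line so the pipe never fills up,
    # passing each decoded line to a callback as it arrives.
    for line in stream:
        callback(line.decode(errors="replace"))
    stream.close()

process = subprocess.Popen(
    [sys.executable, "-c", "import sys; sys.stderr.write('warning\\n')"],
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
)
errors = []
t = threading.Thread(target=drain, args=(process.stderr, errors.append))
t.start()
for line in process.stdout:  # main thread consumes stdout as usual
    pass
process.wait()
t.join()
print(errors)
```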
Then there’s the question of how often the
stdout pipe is flushed. Since it’s a pipe, writes to it are block-buffered. Without flushes coming from the subprocess, you won’t get the output in real time. On Unix-like systems, you can replace the pipe with a pseudo-terminal (see the pty module). But this is Windows, so there isn’t much you can do from the calling process. What ends up happening is that you get the incoming lines in groups, based on when the child’s C library flushes (or you put a lot of flushes in the child’s code).
```python
import subprocess
import sys
import io
import shutil
import threading

def runProcess():
    process = subprocess.Popen(
        [sys.executable, 'subpy.py'],
        stdin=subprocess.PIPE,
        stderr=subprocess.PIPE,
        stdout=subprocess.PIPE,
    )
    process.stdin.close()

    # Drain stderr in the background so the child can never block on it.
    err_buf = io.BytesIO()
    err_thread = threading.Thread(
        target=shutil.copyfileobj, args=(process.stderr, err_buf)
    )
    err_thread.start()

    for line in process.stdout:
        line = line.decode()  # bytes.decode() defaults to UTF-8
        print(line, end='', flush=True)

    process.wait()
    err_thread.join()
    err_buf.seek(0)
    print("Errors:", err_buf.read().decode())

runProcess()
input("press enter to exit.")
```
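The child script 'subpy.py' is not shown in the article. Here is a guessed stand-in that exercises both streams, written to a temporary file and run so you can see the captured output (all names here are my own):

```python
import os
import subprocess
import sys
import tempfile

# A plausible stand-in for 'subpy.py': it writes a few lines to stdout
# (with explicit flushes, so output is not held back by block buffering)
# and one line to stderr.
child_source = """\
import sys, time
for i in range(3):
    print('line', i, flush=True)
    time.sleep(0.1)
sys.stderr.write('one error line\\n')
"""

with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write(child_source)
    script_path = f.name

result = subprocess.run(
    [sys.executable, script_path],
    capture_output=True,
    text=True,
)
os.unlink(script_path)
print(result.stdout)
print("Errors:", result.stderr)
```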