Let’s consider the following scenario: I have two Python processes receiving the same events, and I have to measure the delay between when process A received an event and when process B received it, as precisely as possible (i.e. with an error below 1 ms).
With Python 2.7 on a Unix system you can use time.time, which returns the time in seconds since the Epoch and typically has a resolution of a fraction of a millisecond on Unix. You can call it in different processes and still compare the results, since both processes measure the time since the Epoch, a defined and fixed point in the past.
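As a quick sanity check (my own sketch, not part of the original question), the following script shows that time.time stamps taken in different processes share the same zero point: a stamp taken in a child process falls between two stamps taken in the parent. The helper name compare_stamps is hypothetical.

```python
from multiprocessing import Process, Queue
import time

def child(q):
    # time.time() counts seconds since the Epoch in every process,
    # so stamps from different processes share the same zero point.
    q.put(time.time())

def compare_stamps():
    q = Queue()
    before = time.time()
    p = Process(target=child, args=(q,))
    p.start()
    child_stamp = q.get()
    p.join()
    after = time.time()
    return before, child_stamp, after

if __name__ == '__main__':
    before, child_stamp, after = compare_stamps()
    # The child's stamp is bracketed by the parent's two stamps,
    # which only works if the clocks are directly comparable.
    print(before <= child_stamp <= after)
```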
On Windows, time.time also provides the time since the Epoch, but its resolution is in the range of 10 ms, which is not suitable for my application.
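To see what resolution time.time actually delivers on a given platform, a small probe (again my own sketch; time_resolution is a hypothetical helper, not a stdlib function) can spin until the clock ticks and record the smallest observed step:

```python
import time

# Rough probe of time.time()'s resolution: spin until the returned
# value changes and keep the smallest step seen over several samples.
def time_resolution(samples=10):
    steps = []
    for _ in range(samples):
        t0 = time.time()
        t1 = time.time()
        while t1 == t0:
            t1 = time.time()
        steps.append(t1 - t0)
    return min(steps)

if __name__ == '__main__':
    # On Unix this typically prints a value well below 1 ms;
    # on Windows it tends to land in the 10 ms range.
    print(time_resolution())
```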
There is also time.clock, which is very precise on Windows but much less precise on Unix. The major drawback is that it returns the time since the process started, or since the first call to time.clock within that process. This means you cannot compare the results of time.clock between two processes, as they are not calibrated to a common zero point.
I had high hopes for Python 3.3, where the time module was revamped. The new time.perf_counter looked like it would suit my needs: the documentation says it provides the highest available resolution on the system and is “system-wide”, in contrast to, for example, the new time.process_time, which is “process-wide”. Unfortunately, it turned out that time.perf_counter behaves like time.clock on Python 2.7: it gives you the time since the process started, or since the first call to the method within that process. The results of time.monotonic are comparable between processes, but it is again not precise enough on Windows.
Here is a small script which demonstrates that the times provided by time.perf_counter are not comparable between processes. It starts two processes and lets both of them print the output of the timer to stdout. In the output, the times should be monotonically increasing. Since I let process 2 sleep for one second before calling the timer method for the first time, the output of that process is usually about one second smaller when using clock or perf_counter.
from multiprocessing import Process
import time

timers = ['clock', 'time', 'monotonic', 'perf_counter']

def proc(timername):
    time.sleep(1)  # delay the first timer call in process 2 by one second
    timer = getattr(time, timername)
    for i in range(3):
        print(timername, '(process 2) >>', timer())

if __name__ == '__main__':
    for t in timers:
        timer = getattr(time, t)
        for i in range(3):
            print(t, '(process 1) >>', timer())
        p = Process(target=proc, args=(t,))
        p.start()
        p.join()
The result when running on Windows with Python 3.3:
$ python timertest.py
So as far as I can see, there is no way to get comparable times between two processes on Windows with better than 10 ms precision. Is that correct, or am I missing something?