# Monotonic time and the duration in Telemetry

I’ve just started using Telemetry and I’m a bit confused about duration. Two things I noticed:

1. Duration might be zero at an unspecified point in time
2. Duration = n*1024 (n >= 0)

Duration is just erlang:monotonic_time() - StartTime, where StartTime is also an erlang:monotonic_time() taken prior to execution (source).
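In other words, the pattern is just two monotonic reads and a subtraction. A minimal sketch (the Telemetry call is shown only as a comment, since the event name here is made up for illustration):

```elixir
# Capture the start in native time units, do the work, then subtract.
start_time = :erlang.monotonic_time()

Process.sleep(10)  # stand-in for the instrumented work

duration = :erlang.monotonic_time() - start_time

# A Telemetry event would carry this difference as the :duration
# measurement, still in native units, e.g.:
#   :telemetry.execute([:my_app, :work, :stop], %{duration: duration}, %{})
IO.inspect(duration, label: "duration (native units)")
```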

Where did you get that info about n*1024 from?

I read the source of Telemetry and looked into Erlang monotonic time. From the Erlang docs:

> Erlang monotonic time. This is Erlang’s view of the OS monotonic time if available, or the VM’s own monotonic version of the system time if not available. This is the adjusted clock that is used for events, timers, and so on. Its stability makes it ideal to count a time interval. Do note that this time is monotonic, but not strictly monotonic, meaning that the clock can’t go backwards, but it can return the same value many times!

As mentioned earlier, the Erlang monotonic time is not strictly monotonic: it will possibly return the same number twice if it’s called at the same time on two different cores, for example.

So the duration might be zero at an unspecified point in time.
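One way to see this “non-decreasing but not strictly increasing” behaviour is to sample the clock in a tight loop: consecutive samples never go backwards, but duplicates may show up, depending on the clock’s resolution and how fast the loop runs:

```elixir
samples = for _ <- 1..1_000, do: :erlang.monotonic_time()

# Monotonic: every consecutive pair is non-decreasing.
true =
  samples
  |> Enum.chunk_every(2, 1, :discard)
  |> Enum.all?(fn [a, b] -> b >= a end)

# Not strictly monotonic: duplicates may (or may not) appear.
duplicates? = length(Enum.uniq(samples)) < length(samples)
IO.inspect(duplicates?, label: "saw duplicate timestamps?")
```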

For #2:
I ran some tests, calculated the results, and saw that the durations always fit the formula 1024*n (n >= 0).
I imagine that ERTS advances the monotonic clock in steps of 1024, but I’m not sure and can’t find any documentation about that.

Interesting. I just tried on my machine, and :erlang.monotonic_time() seems to always return a multiple of 1000.
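A quick way to estimate the clock’s effective step on your own machine is to take the GCD of many consecutive deltas. This is only a diagnostic sketch; the number you get depends on your OS and hardware (e.g. 1000 on the macOS machine discussed below):

```elixir
samples = for _ <- 1..1_000, do: :erlang.monotonic_time()

step =
  samples
  |> Enum.chunk_every(2, 1, :discard)
  |> Enum.map(fn [a, b] -> b - a end)
  |> Enum.reject(&(&1 == 0))
  |> Enum.reduce(0, &Integer.gcd/2)

# The GCD of all non-zero deltas is the apparent granularity of
# the clock in native units (0 if the clock never advanced).
IO.inspect(step, label: "apparent clock step (native units)")
```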

To get information about the Erlang runtime system’s source of OS monotonic time, call erlang:system_info(os_monotonic_time_source).

Which gives me:

```elixir
# macOS Big Sur 11.5.1
iex> :erlang.system_info(:os_monotonic_time_source)
[
  function: :clock_gettime,
  clock_id: :CLOCK_MONOTONIC,
  resolution: 1000000,
  extended: :no,
  parallel: :yes,
  time: 60625520085000
]
```

The resolution might be the key. From the same docs:

> Highest possible resolution of current OS monotonic time source as parts per second. If no resolution information can be retrieved from the OS, OsMonotonicTimeResolution is set to the resolution of the time unit of Functions return value. That is, the actual resolution can be lower than OsMonotonicTimeResolution. Notice that the resolution does not say anything about the accuracy or whether the precision aligns with the resolution. You do, however, know that the precision is not better than OsMonotonicTimeResolution.

So if I understand correctly, ERTS on my machine optimistically tries to report monotonic time at a higher resolution than my machine can actually provide. I’m using a MacBook Pro 2017.
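If that’s right, the step between representable values should be the number of native units per OS tick. A rough sanity check (assuming the VM’s native unit is finer than the OS resolution, e.g. 1_000_000_000 native units per second against the resolution of 1_000_000 reported above, which would give steps of 1000):

```elixir
# Number of native time units per second on this VM.
native_per_second = :erlang.convert_time_unit(1, :second, :native)

# Resolution reported by the OS monotonic source (parts per second).
source = :erlang.system_info(:os_monotonic_time_source)
resolution = Keyword.get(source, :resolution, native_per_second)

# Smallest step the OS clock can express, in native units.
step = div(native_per_second, resolution)
IO.inspect(step, label: "expected step per OS tick (native units)")
```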

I did the tests for #2 by pasting these statements into iex:

```elixir
time_start = :erlang.monotonic_time()
time_end = :erlang.monotonic_time()
(time_end - time_start) / 1024
```

I’m using Windows 10.

```elixir
iex(83)> :erlang.system_info(:os_system_time_source)
[
  function: :GetSystemTime,
  resolution: 100,
  parallel: :yes,
  time: 16289886379090000
]
iex(84)> :erlang.system_info(:os_monotonic_time_source)
[
  function: :QueryPerformanceCounter,
  resolution: 10000000,
  extended: :no,
  parallel: :yes,
  time: 23239833146368
]
iex(85)> :erlang.system_info(:time_correction)
true
iex(86)> :erlang.system_info(:time_warp_mode)
:no_time_warp
```

Meanwhile, in our testing environment, where we run the Elixir app on Fedora 33, it looks like I’m getting more precise results!

```elixir
# on Fedora 33
iex> :erlang.system_info(:os_monotonic_time_source)
[
  function: :clock_gettime,
  clock_id: :CLOCK_MONOTONIC,
  resolution: 1000000000,
  extended: :no,
  parallel: :yes,
  time: 3973380287955165
]
```

Not only is the resolution higher, but the monotonic time is also not a multiple of 1000.

Do you get an integer? I tried this and it looks like I’m getting a float with non-zero decimals… (Though on my laptop, since those monotonic times are multiples of 1000, I do get an integer if I divide by 1000.)

I’m getting a float with zero decimals.

```elixir
iex(101)> (time_end - time_start) / 1024
62.0
```

So I still don’t know why. It doesn’t affect my current project much, but I just want to understand monotonic time more deeply so I can use it with more confidence.

As stated before, if you are passing durations in telemetry measurements, you use System.monotonic_time to capture the starting point and the ending point, and take the difference between the two. That gives you a duration in “native” units, which are unique to your system. The convention is to use native units unless specified otherwise, as described here: Telemetry Conventions.

You can convert the duration into a different unit of time like so: System.convert_time_unit(duration, :native, :millisecond).
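For example (a minimal sketch; the sleep just stands in for real work):

```elixir
start_time = System.monotonic_time()
Process.sleep(25)  # stand-in for the instrumented work
duration = System.monotonic_time() - start_time

# Convert the native-unit duration into human-readable units.
ms = System.convert_time_unit(duration, :native, :millisecond)
us = System.convert_time_unit(duration, :native, :microsecond)

IO.inspect({ms, us}, label: "duration {ms, µs}")
```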


Seems like it’s specific to Windows:

Here’s what I found about QueryPerformanceCounter (QPC):

> one QPC tick is 1024 rdtsc ticks

from Stopwatch under the hood
