I just ran the following code to time how long a part of a function takes to execute:
```elixir
time1 = NaiveDateTime.utc_now()
# ...some code or function call...
time2 = NaiveDateTime.utc_now()
time_total = NaiveDateTime.diff(time2, time1, :microsecond)
```
`time_total` came back remarkably low (two-digit microseconds). Quite plausible (and fascinating!), but could I be missing something with this logic, particularly if a function call sits between `time1` and `time2`?
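One thing worth noting about the snippet above: `NaiveDateTime.utc_now/0` reads the wall clock, which can jump (e.g. NTP adjustments). For measuring elapsed time, `System.monotonic_time/1` is the safer choice. A minimal sketch of the same measurement with the monotonic clock (the `Enum.sum/1` call is just a stand-in for the code under test):

```elixir
# Time one run using the monotonic clock, which only moves forward
# and is meant for measuring durations.
t1 = System.monotonic_time(:microsecond)
result = Enum.sum(1..1_000_000)  # stand-in for the code under test
t2 = System.monotonic_time(:microsecond)
elapsed = t2 - t1

IO.puts("took #{elapsed} µs")
```

The shape is the same as the original snippet; only the clock changes.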
If you want to do a microbenchmark (measure how long a function takes), doing one run of the function is not enough: a single measurement is skewed by warmup (code loading, caches) and scheduler noise, so you need many runs and statistics over them.
You can take a look at https://github.com/bencheeorg/benchee to run microbenchmarks.
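For reference, a Benchee run looks roughly like this (a sketch, assuming `{:benchee, "~> 1.0"}` has been added to your `mix.exs` deps; the two functions compared here are just illustrative):

```elixir
# Benchee runs each scenario many times and reports statistics
# (average, median, deviation, ips) rather than a single sample.
Benchee.run(%{
  "flat_map" => fn -> Enum.flat_map(1..1_000, &[&1, &1]) end,
  "map |> flatten" => fn -> 1..1_000 |> Enum.map(&[&1, &1]) |> List.flatten() end
})
```

Benchee also handles warmup for you, which avoids the single-run pitfalls mentioned above.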
…and I'm now also doing my own “quick stats”: running the code multiple times to get a rough sense of the distribution of timings as well as an average
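For a hand-rolled version of those quick stats, Erlang's `:timer.tc/1` (which returns `{microseconds, result}`) makes the loop short. A sketch, where `work` is a hypothetical stand-in for the code being measured:

```elixir
# Run the function many times and summarize the timing distribution.
work = fn -> Enum.sort(Enum.shuffle(1..1_000)) end

times =
  for _ <- 1..100 do
    {micros, _result} = :timer.tc(work)
    micros
  end

sorted = Enum.sort(times)
min_t = hd(sorted)
max_t = List.last(sorted)
avg_t = Enum.sum(times) / length(times)
median_t = Enum.at(sorted, div(length(sorted), 2))

IO.puts("min=#{min_t}µs median=#{median_t}µs avg=#{Float.round(avg_t, 1)}µs max=#{max_t}µs")
```

Looking at the median and spread, not just the average, helps spot outliers from GC pauses or scheduling.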