This removes all mutex- and atomics-based workarounds for non-monotonic clocks and makes the previously panicking methods saturating instead.
Effectively this moves the monotonization from `Instant` construction to the comparisons.
This has some observable effects, especially on platforms without monotonic clocks:
* Incorrectly ordered `Instant` comparisons no longer panic. This may hide some programming errors until someone actually looks at the resulting `Duration`.
* `checked_duration_since` will now return `None` in more cases. Previously that only happened when comparing instants obtained in the wrong order or constructed manually. Now it also happens when the underlying clock slides backwards.
The upside is reduced complexity and lower overhead of `Instant::now`.
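
A minimal sketch of the post-change behavior (the `later` instant is built here by adding a `Duration`, purely to force the wrong-order case deterministically):

```rust
use std::time::{Duration, Instant};

fn main() {
    let earlier = Instant::now();
    let later = earlier + Duration::from_millis(10);

    // Wrong-order comparisons no longer panic; they saturate to zero.
    assert_eq!(earlier.duration_since(later), Duration::ZERO);
    assert_eq!(earlier.saturating_duration_since(later), Duration::ZERO);

    // checked_duration_since still makes the wrong order observable by
    // returning None; with this change that also covers clock backslides
    // on platforms without a monotonic clock.
    assert_eq!(earlier.checked_duration_since(later), None);

    // Correctly ordered comparisons behave as before.
    assert_eq!(later.duration_since(earlier), Duration::from_millis(10));
}
```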
While issues have been seen on arm64 platforms, the Arm architecture requires
that the counter monotonically increases and that it provides a uniform view
of system time (e.g. it must not be possible for a core to receive a message
from another core with a timestamp and observe time going backwards;
ARM DDI 0487G.b D11.1.2). While there have been a few 64-bit SoCs with bugs
(#49281, #56940) that cause time to not increase monotonically, these have
been fixed in the Linux kernel and we shouldn't penalize all Arm SoCs for those
who refuse to update their kernels:
* SUN50I_ERRATUM_UNKNOWN1 - Allwinner A64 / Pine A64 - fixed in 5.1
* FSL_ERRATUM_A008585 - Freescale LS2080A/LS1043A - fixed in 4.10
* HISILICON_ERRATUM_161010101 - Hisilicon 1610 - fixed in 4.11
* ARM64_ERRATUM_858921 - Cortex A73 - fixed in 4.12
Commit 255a3f3e18 (std: Force `Instant::now()` to be monotonic) added a mutex to
work around this problem, and a small test program using glommio showed the
majority of its time being spent acquiring and releasing that mutex.
Commit 3914a7b0da tried to improve on this, but actually makes things worse on
big systems: 128-bit atomics require an ldxp/stxp pair (and a successful retry
loop), which is as expensive as a lock, and because of how load/store-exclusives
scale on large Arm systems the approach is both unfair to threads and tends to
regress in performance.
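
For reference, a rough sketch of what mutex-based monotonization amounts to. This is illustrative only, not the actual std implementation, and `monotonic_now` is a hypothetical helper: every reading is clamped against the latest value handed out so far, which is exactly the per-call lock traffic this change removes.

```rust
use std::sync::Mutex;
use std::time::Instant;

// Latest instant handed out so far; every caller must take this lock.
static LAST_NOW: Mutex<Option<Instant>> = Mutex::new(None);

fn monotonic_now() -> Instant {
    let raw = Instant::now();
    let mut last = LAST_NOW.lock().unwrap();
    let now = match *last {
        // If the raw clock went backwards, hand out the previous value
        // again so observed time never decreases.
        Some(prev) if prev > raw => prev,
        _ => raw,
    };
    *last = Some(now);
    now
}

fn main() {
    let a = monotonic_now();
    let b = monotonic_now();
    // Monotonicity is enforced by the clamp above, not by the clock.
    assert!(b >= a);
}
```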