Added a table containing the system calls used by Instant and SystemTime.
# Description
See #32626 for a discussion on documenting system calls used by Instant and SystemTime.
## Changes
- Added a table containing the system calls used by each platform.
EDIT: If I can format this table better (due to the large links), please let me know.
I'd also be happy to learn a quick command to generate the docs on my host machine! Currently I am using `python x.py doc --stage 0 src/libstd`, but that gives me some `unrecognized intrinsic` errors. Advice is always welcome :)
Closes #32626
Turn duration consts into associated consts
As suggested in https://github.com/rust-lang/rust/issues/57391#issuecomment-459658236, I'm moving the `Duration` constants (`SECOND`, `MILLISECOND`, and so on; currently behind the unstable `duration_constants` feature) into the `impl Duration` block.
cc @frewsxcv @SimonSapin
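For reference, a minimal sketch of what call sites look like after the move, assuming the constants keep their obvious values and remain behind the nightly `duration_constants` feature:

```rust
#![feature(duration_constants)]
use std::time::Duration;

fn main() {
    // Associated constants read as `Duration::SECOND` instead of a free
    // `SECOND` imported from `core::time`.
    let pause = 2 * Duration::SECOND + 500 * Duration::MILLISECOND;
    assert_eq!(pause, Duration::from_millis(2500));
}
```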
Right now we do unit conversions between PerfCounter measurements
and nanoseconds for every add/sub we do between Durations and Instants
on Windows machines. This leads to goofy behavior, like this snippet
failing:
```rust
use std::time::{Duration, Instant};

fn main() {
    let now = Instant::now();
    let offset = Duration::from_millis(5);
    assert_eq!((now + offset) - now, (now - now) + offset);
}
```
with precision problems like this:
```
thread 'main' panicked at 'assertion failed: `(left == right)`
left: `4.999914ms`,
right: `5ms`', src\main.rs:6:5
```
To fix it, this changeset performs the unit conversion once, when we
measure the clock, and does all the subsequent math in u64 nanoseconds.
It also adds an exact associativity test to the `sys/time.rs`
test suite to make sure we don't regress on this in the future.
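To make the approach concrete, here is a self-contained sketch (hypothetical names, not the actual `sys` code) of converting ticks to nanoseconds once at measurement time and keeping all later arithmetic in u64 nanoseconds:

```rust
use std::time::Duration;

const NANOS_PER_SEC: u64 = 1_000_000_000;

// Stand-ins for the raw Windows readings (QueryPerformanceCounter /
// QueryPerformanceFrequency); the values here are made up.
fn query_ticks() -> u64 { 123_456_789 }
fn ticks_per_second() -> u64 { 10_000_000 }

// Widen to u128 so `ticks * 1e9` cannot overflow.
fn mul_div_u64(value: u64, numer: u64, denom: u64) -> u64 {
    (value as u128 * numer as u128 / denom as u128) as u64
}

/// An instant stored directly in nanoseconds, so arithmetic with a
/// `Duration` is plain integer math and stays exact.
#[derive(Clone, Copy, PartialEq, Debug)]
struct NanoInstant(u64);

impl NanoInstant {
    fn now() -> NanoInstant {
        // The only conversion point: ticks -> nanoseconds, once per measurement.
        NanoInstant(mul_div_u64(query_ticks(), NANOS_PER_SEC, ticks_per_second()))
    }
    fn add(self, d: Duration) -> NanoInstant {
        NanoInstant(self.0 + d.as_nanos() as u64)
    }
    fn sub(self, earlier: NanoInstant) -> Duration {
        Duration::from_nanos(self.0 - earlier.0)
    }
}

fn main() {
    let now = NanoInstant::now();
    let offset = Duration::from_millis(5);
    // Exact associativity holds because no further unit conversion happens.
    assert_eq!(now.add(offset).sub(now), offset);
}
```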
std: Force `Instant::now()` to be monotonic
This commit is an attempt to force `Instant::now` to be monotonic
through any means possible. We tried relying on OS/hardware/clock
implementations, but those seem buggy enough that we can't rely on them
in practice. This commit implements the same hammer Firefox recently
implemented (noted in #56612), which is to just keep whatever the latest
`Instant::now()` return value was in memory, returning that instead if
the OS clock looks like it's moving backwards.
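For context, here is a minimal sketch of that hammer, assuming a clock already expressed as u64 nanoseconds and using hypothetical names (the real change is per-platform and more involved):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Last value handed out by `monotonic_now_nanos`, shared by all callers.
static LAST_NOW: AtomicU64 = AtomicU64::new(0);

/// Stand-in for the platform clock read, assumed to return nanoseconds.
fn os_now_nanos() -> u64 {
    0 // placeholder
}

/// Never returns a value smaller than one it has already returned, even if
/// the underlying OS clock appears to step backwards.
fn monotonic_now_nanos() -> u64 {
    let now = os_now_nanos();
    // `fetch_max` stores the larger of the stored value and `now` and
    // returns what was stored before.
    let prev = LAST_NOW.fetch_max(now, Ordering::SeqCst);
    now.max(prev)
}

fn main() {
    println!("{}", monotonic_now_nanos());
}
```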
Closes #48514
Closes #49281
cc #51648
cc #56560
Closes #56612
Closes #56940
Add duration constants
Add constants `SECOND`, `MILLISECOND`, `MICROSECOND`, and `NANOSECOND` to `core::time`.
This will make working with durations more ergonomic. Compare:
```rust
// Convenient, but deprecated function.
thread::sleep_ms(2000);
// The current canonical way to sleep for two seconds.
thread::sleep(Duration::from_secs(2));
// Sleeping using one of the new constants.
thread::sleep(2 * SECOND);
```
Eliminate Receiver::recv_timeout panic
Fixes #54552.
The panic happens because `recv_timeout` uses `Instant::now() + timeout` internally, and this possible panic is not mentioned in the documentation for the method.
Very recently we merged (still unstable) support for checked addition (#56490) of `Instant + Duration`, so it's now finally possible to add these together without risking a panic.
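For illustration, a small sketch of the resulting shape, using `Instant::checked_add` for the deadline computation (the helper name here is hypothetical):

```rust
use std::time::{Duration, Instant};

/// Compute the deadline fallibly; `None` means the deadline is not
/// representable, which a receiver can treat as "wait forever" instead of
/// panicking inside `recv_timeout`.
fn deadline_after(timeout: Duration) -> Option<Instant> {
    Instant::now().checked_add(timeout)
}

fn main() {
    match deadline_after(Duration::from_secs(5)) {
        Some(deadline) => println!("wait until {:?}", deadline),
        None => println!("deadline overflows the clock; block without a timeout"),
    }
}
```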
Implement checked_add_duration for SystemTime
[Original discussion on the rust user forum](https://users.rust-lang.org/t/std-systemtime-misses-a-checked-add-function/21785)
Since `SystemTime` is opaque there is no way to check if the result of an addition will be in bounds. That makes the `Add<Duration>` trait completely unusable with untrusted data. This is a big problem because adding a `Duration` to `UNIX_EPOCH` is the standard way of constructing a `SystemTime` from a unix timestamp.
This PR implements `checked_add_duration(&self, &Duration) -> Option<SystemTime>` for `std::time::SystemTime` and as a prerequisite also for all platform specific time structs. This also led to the refactoring of many `add_duration(&self, &Duration) -> SystemTime` functions to avoid redundancy (they now unwrap the result of `checked_add_duration`).
Some basic unit tests for the newly introduced function were added too.
I wasn't sure which stabilization attribute to add to the newly introduced function, so I just chose `#[stable(feature = "time_checked_add", since = "1.32.0")]` for now to make it compile. Please let me know how I should change it or if I violated any other conventions.
P.S.: I could only test on Linux so far, so I don't necessarily expect it to compile for all platforms.
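To illustrate the motivating use case, here is a sketch written against the checked-add surface as it exists on `SystemTime` today (`SystemTime::checked_add`); the public name may differ from the internal `checked_add_duration` described above:

```rust
use std::time::{Duration, SystemTime, UNIX_EPOCH};

/// Build a `SystemTime` from an untrusted unix timestamp without the panic
/// that `UNIX_EPOCH + Duration::from_secs(secs)` can produce on overflow.
fn from_unix_timestamp(secs: u64) -> Option<SystemTime> {
    UNIX_EPOCH.checked_add(Duration::from_secs(secs))
}

fn main() {
    assert!(from_unix_timestamp(1_550_000_000).is_some());
    // An absurd, attacker-supplied value is rejected instead of aborting;
    // the exact overflow bound is platform specific.
    println!("{:?}", from_unix_timestamp(u64::max_value()));
}
```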
Previously this threshold when testing was 100ns, but the Windows
documentation states:
> which is a high resolution (<1us) time stamp
which presumably means that we could have up to 1us of resolution, so a
100ns threshold doesn't capture "equivalent" time intervals due to
various bits of rounding here and there.
It's hoped that this..
Closes #56034
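As a small numeric illustration of that rounding argument, assuming a 10 MHz performance counter (100ns per tick; the frequency is an assumption for the example, not something the documentation guarantees):

```rust
fn main() {
    let ticks_per_sec: u64 = 10_000_000; // assumed counter frequency
    let nanos_per_sec: u64 = 1_000_000_000;

    // Converting a nanosecond value to counter ticks and back loses up to
    // one tick's worth of precision (here 99ns) in a single round trip...
    let t_ns: u64 = 1_234_567_899;
    let ticks = t_ns * ticks_per_sec / nanos_per_sec; // floor division
    let back = ticks * nanos_per_sec / ticks_per_sec;
    println!("lost {} ns in one round trip", t_ns - back); // prints 99

    // ...so two readings that each convert independently can differ by more
    // than 100ns while still agreeing to well within the documented 1us.
    assert!(t_ns - back < 1_000);
}
```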
Since SystemTime is opaque there is no way to check if the result
of an addition will be in bounds. That makes the Add<Duration>
trait completely unusable with untrusted data. This is a big problem
because adding a Duration to UNIX_EPOCH is the standard way of
constructing a SystemTime from a unix timestamp.
This commit implements checked_add_duration(&self, &Duration) -> Option<SystemTime>
for std::time::SystemTime and as a prerequisite also for all platform
specific time structs. This also led to the refactoring of many
add_duration(&self, &Duration) -> SystemTime functions to avoid
redundancy (they now unwrap the result of checked_add_duration).
Some basic unit tests for the newly introduced function were added
too.
As per issue #43601, types that can change size depending on the
target operating system should say so in their documentation.
I used this template when adding doc comments:
> The size of a(n) <name> struct may vary depending on the target
> operating system, and may change between Rust releases.
For enums, I used "instance" instead of "struct".
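Purely as an illustration of how the template reads when applied (the type below is a made-up stand-in, not one of the actual std types):

```rust
/// The size of an `ExampleTimer` struct may vary depending on the target
/// operating system, and may change between Rust releases.
pub struct ExampleTimer(/* platform-specific representation */);

fn main() {}
```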