Add Condvar APIs not susceptible to spurious wake
Provide `wait_until` and `wait_timeout_until` helper wrappers that aren't susceptible to spurious wakeups.
Additionally, `wait_timeout_until` makes it easier to write code that waits for a fixed amount of time in the face of spurious wakeups; without it, each user would have to recompute the remaining duration after every wakeup.
Implements #47960.
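The exact API is defined by the PR itself; as a hedged illustration, here is a minimal free-function sketch of the idea, assuming a predicate-based interface:
```rust
use std::sync::{Condvar, MutexGuard};

// Sketch: loop around `Condvar::wait` until `condition` holds, so a spurious
// wakeup simply causes another iteration instead of a false return.
fn wait_until<'a, T, F>(
    cvar: &Condvar,
    mut guard: MutexGuard<'a, T>,
    mut condition: F,
) -> MutexGuard<'a, T>
where
    F: FnMut(&mut T) -> bool,
{
    while !condition(&mut *guard) {
        // Re-check the predicate after every wakeup, spurious or not.
        guard = cvar.wait(guard).unwrap();
    }
    guard
}
```
A timed variant follows the same pattern but recomputes the remaining time from a fixed deadline before each `wait_timeout` call, which is exactly the duration math the wrapper saves each caller from writing.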
Add std::sync::mpsc::Receiver::recv_deadline()
Essentially renames `recv_max_until` to `recv_deadline` (mostly copying the `recv_timeout` documentation). This function is useful for avoiding the often unnecessary call to `Instant::now` in `recv_timeout` (e.g. when the user already has a deadline). A concrete example would be something along these lines:
```rust
use std::sync::mpsc::Receiver;
use std::time::{Duration, Instant};

/// Reads a batch of elements
///
/// Returns as soon as `max_size` elements have been received or `timeout` expires.
fn recv_batch_timeout<T>(receiver: &Receiver<T>, timeout: Duration, max_size: usize) -> Vec<T> {
    recv_batch_deadline(receiver, Instant::now() + timeout, max_size)
}

/// Reads a batch of elements
///
/// Returns as soon as `max_size` elements have been received or `deadline` is reached.
fn recv_batch_deadline<T>(receiver: &Receiver<T>, deadline: Instant, max_size: usize) -> Vec<T> {
    let mut result = Vec::new();
    while let Ok(x) = receiver.recv_deadline(deadline) {
        result.push(x);
        if result.len() == max_size {
            break;
        }
    }
    result
}
```
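For completeness, a hedged usage sketch of the helpers above (the channel setup is illustrative, and `recv_deadline` is unstable at the time of this PR):
```rust
use std::sync::mpsc::channel;
use std::thread;
use std::time::Duration;

fn main() {
    let (tx, rx) = channel();
    thread::spawn(move || {
        for i in 0..3 {
            tx.send(i).unwrap();
        }
    });
    // Collect up to 8 elements, waiting at most 100 ms overall regardless
    // of how the waiting is split across individual receives.
    let batch = recv_batch_timeout(&rx, Duration::from_millis(100), 8);
    println!("{:?}", batch);
}
```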
Implement From<RecvError> for TryRecvError and RecvTimeoutError
According to the documentation, `TryRecvError` and `RecvTimeoutError` appear to be strict extensions of `RecvError`. As such, it makes sense to allow conversion from the latter type to the two former types without constraining future developments.
This makes it possible, for example, to write `input.recv()?` and `input.recv_timeout(timeout)?` in the same function.
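A short sketch of what the conversions enable; the function here is illustrative:
```rust
use std::sync::mpsc::{Receiver, RecvTimeoutError};
use std::time::Duration;

// With `From<RecvError> for RecvTimeoutError`, both `recv()` (which fails
// with `RecvError`) and `recv_timeout()` (which fails with
// `RecvTimeoutError`) can use `?` in a function whose error type is
// `RecvTimeoutError`.
fn first_two<T>(input: &Receiver<T>, timeout: Duration) -> Result<(T, T), RecvTimeoutError> {
    let a = input.recv()?; // `RecvError` converts via `From`
    let b = input.recv_timeout(timeout)?;
    Ok((a, b))
}
```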
This commit removes the `rand` crate from the standard library facade as
well as the `__rand` module in the standard library. Neither of these
were used in any meaningful way in the standard library itself. The only
need for randomness in libstd is to initialize the thread-local keys of
a `HashMap`, and that unconditionally used `OsRng` defined in the
standard library anyway.
The cruft of the `rand` crate and the extra `rand` support in the
standard library make libstd slightly more difficult to port to new
platforms, namely WebAssembly, which doesn't have any randomness at all
(without interfacing with JS). The purpose of this commit is to clarify
and streamline randomness in libstd, focusing on the fact that it's only
required in one location: hashmap seeds.
Note that the `rand` crate out of tree has almost always been a drop-in
replacement for the `rand` crate in-tree, so any usage (accidental or
purposeful) of the crate in-tree should switch to the `rand` crate on
crates.io. This then also has the further benefit of avoiding
duplication (mostly) between the two crates!
Currently, the compiler requires `T` to also be `Send`. There is no reason for
that: `&Rw{Read,Write}LockGuard` only provides a shared reference to `T`, and sending
that across threads is safe if `T` is `Sync`.
Remove the `T: Sync` requirement for `RwLock<T>: Send`
That requirement makes sense for containers like `Arc` that don't
uniquely own their contents, but `RwLock` is not one of those.
This restriction was added in 380d23b5d4, but it's not clear why. @hniksic
and I [were discussing this on reddit](https://www.reddit.com/r/rust/comments/763o7r/blog_posts_introducing_lockfree_rust_comparing/dobcvbm/). I might be totally wrong about this change being sound, but I'm super curious to find out :)
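A hedged sketch of the reasoning, using a toy wrapper so the unsafe impls are ours to write (this is not the std source; the real bounds are the subject of the PR):
```rust
use std::cell::UnsafeCell;

// Toy stand-in for RwLock's data layout (the real type also holds lock state).
struct ToyRwLock<T> {
    data: UnsafeCell<T>,
}

// The lock uniquely owns its `T`, so moving the whole lock to another thread
// is just moving a `T`: only `T: Send` is needed, not `T: Sync`.
unsafe impl<T: Send> Send for ToyRwLock<T> {}

// Sharing the lock hands out `&T` to readers and `&mut T` to writers on
// other threads, so `Sync` does require `T: Send + Sync`.
unsafe impl<T: Send + Sync> Sync for ToyRwLock<T> {}
```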
Improve performance of spsc_queue and stream.
This PR makes two main changes:
1. It switches the `spsc_queue` node caching strategy from keeping a shared
counter of the number of nodes in the cache to keeping a consumer-only counter
of the number of nodes eligible to be cached.
2. It separates the consumer and producer fields of `spsc_queue` and `stream` into
a producer cache line and a consumer cache line.
Overall, it speeds up `mpsc` in `spsc` mode by 2-10x.
Variance is higher than I'd like (that 2-10x speedup is on one benchmark); I believe this is due to the drop check in `send` (`fn stream::Queue::send:107`). I think this check can be combined with the sleep-detection code into a version that uses only one shared variable and one atomic access per `send`, but I haven't looked through the select implementation enough to be sure.
The code currently assumes a cache line size of 64 bytes. I added a `CacheAligned` newtype in `mpsc` which I expect to reuse for `shared`. It doesn't really belong there; it would probably be best placed in `core::sync::atomic`, but putting it in `core` would involve making it public, which I thought would require an RFC.
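As a rough sketch, on current Rust such a padding newtype can be written with an alignment attribute (the 64-byte line size is an assumption, as noted above, and the field names below are illustrative):
```rust
use std::sync::atomic::AtomicUsize;

// Force a value onto its own 64-byte cache line so that producer-written and
// consumer-written state don't false-share.
#[repr(align(64))]
struct CacheAligned<T>(T);

// Illustrative layout: each side's index lives on a separate cache line.
struct Indices {
    producer_tail: CacheAligned<AtomicUsize>, // written by the producer
    consumer_head: CacheAligned<AtomicUsize>, // written by the consumer
}
```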
Benchmark runner is [here](3eca46279c/shootout), benchmarks [here](3eca46279c/queue_bench/src/lib.rs (L170-L293)).
Fixes #44512.
Fixed mutable vars being marked used when they weren't
#### NB: bootstrapping is slow on my machine, even with `keep-stage` - fixes for occurrences in the current codebase are ~~in the pipeline~~ done. This PR is being put up for review of the fix for the issue.
Fixes #43526, Fixes #30280, Fixes #25049
### Issue
Whenever the compiler detected a mutable deref being used mutably, it marked an associated value as being used mutably as well. In the case of dereferencing local variables which were mutable references, this incorrectly marked the reference itself as being used mutably instead of its contents - with the consequence that the following code emits no warnings:
```rust
fn do_thing<T>(mut arg: &mut T) {
    // don't touch arg - just deref it to access the T
}
```
### Fix
Make dereferences not be counted as a mutable use, but only when they are of borrows held in local variables.
#### Why not on things other than local variables?
* Whenever you capture a variable in a closure, it gets turned into a hidden reference - when you use it in the closure, it gets dereferenced. If the closure uses the variable mutably, that is actually a mutable use of the thing the reference points to, so it has to be counted.
* If you deref a mutable `Box` to access its contents mutably, you are using the `Box` mutably - so it has to be counted (both cases are sketched below).
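Hedged illustrations of the two cases above, where the `mut` on the binding is genuinely needed:
```rust
fn main() {
    // Closure case: the closure captures a hidden `&mut x`; calling it
    // dereferences that reference mutably, so it counts as a mutable use.
    let mut x = 0;
    let mut bump = || x += 1;
    bump();

    // Box case: mutating through the `Box` mutably uses the `Box` itself.
    let mut boxed = Box::new(5);
    *boxed += 1;

    println!("{} {}", x, boxed);
}
```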