Optimize `AtomicBool` for targets that don't support byte-sized atomics
`AtomicBool` is defined to have the same layout as `bool`, which means that we guarantee it has a size of 1 byte. However, on certain architectures such as RISC-V, LLVM will emulate byte atomics using a masked CAS loop on an aligned word.
We can take advantage of the fact that `bool` only ever has a value of 0 or 1 to replace `swap` operations with `and`/`or` operations that LLVM can lower to word-sized atomic `and`/`or` operations. This takes advantage of the fact that the incoming value to a `swap` or `compare_exchange` for `AtomicBool` is often a compile-time constant.
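In terms of the public API, the rewrite relies on equivalences that hold only because a `bool` is always 0 or 1. A minimal sketch for illustration, not the actual library implementation:
```rust
use std::sync::atomic::{AtomicBool, Ordering};

// swap(true) is equivalent to fetch_or(true): OR-ing in 1 always leaves 1,
// and both operations return the previous value.
fn swap_true_via_or(b: &AtomicBool) -> bool {
    b.fetch_or(true, Ordering::Relaxed)
}

// swap(false) is equivalent to fetch_and(false): AND-ing with 0 leaves 0.
fn swap_false_via_and(b: &AtomicBool) -> bool {
    b.fetch_and(false, Ordering::Relaxed)
}
```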
### Example
```rust
use std::sync::atomic::{AtomicBool, Ordering};

pub fn swap_true(atomic: &AtomicBool) -> bool {
    atomic.swap(true, Ordering::Relaxed)
}
```
### Old
```asm
andi a1, a0, -4
slli a0, a0, 3
li a2, 255
sllw a2, a2, a0
li a3, 1
sllw a3, a3, a0
slli a3, a3, 32
srli a3, a3, 32
.LBB1_1:
lr.w a4, (a1)
mv a5, a3
xor a5, a5, a4
and a5, a5, a2
xor a5, a5, a4
sc.w a5, a5, (a1)
bnez a5, .LBB1_1
srlw a0, a4, a0
andi a0, a0, 255
snez a0, a0
ret
```
### New
```asm
andi a1, a0, -4
slli a0, a0, 3
li a2, 1
sllw a2, a2, a0
amoor.w a1, a2, (a1)
srlw a0, a1, a0
andi a0, a0, 255
snez a0, a0
ret
```
Stabilize `atomic_as_ptr`
Fixes #66893
This stabilizes the `as_ptr` methods for atomics. The stabilization feature gate used here is `atomic_as_ptr` which supersedes `atomic_mut_ptr` to match the change in https://github.com/rust-lang/rust/pull/107736.
This needs FCP.
New stable API:
```rust
impl AtomicBool {
    pub const fn as_ptr(&self) -> *mut bool;
}
impl AtomicI32 {
    pub const fn as_ptr(&self) -> *mut i32;
}
// Includes all other atomic types
impl<T> AtomicPtr<T> {
    pub const fn as_ptr(&self) -> *mut *mut T;
}
```
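As a usage sketch (the `extern` function here is hypothetical), the typical consumer is FFI code that performs the atomic operations itself:
```rust
use std::sync::atomic::AtomicI32;

extern "C" {
    // Hypothetical C function that atomically increments an `int`.
    fn atomic_increment(ptr: *mut i32);
}

fn increment(counter: &AtomicI32) {
    // `as_ptr` exposes the underlying memory; every concurrent access
    // through the pointer must itself be atomic.
    unsafe { atomic_increment(counter.as_ptr()) }
}
```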
r? libs-api
``@rustbot`` label +needs-fcp
Before this change, the following functions and macros were annotated
with `#[cfg(target_has_atomic = "8")]` or
`#[cfg(target_has_atomic_load_store = "8")]`:
* `atomic_int`
* `strongest_failure_ordering`
* `atomic_swap`
* `atomic_add`
* `atomic_sub`
* `atomic_compare_exchange`
* `atomic_compare_exchange_weak`
* `atomic_and`
* `atomic_nand`
* `atomic_or`
* `atomic_xor`
* `atomic_max`
* `atomic_min`
* `atomic_umax`
* `atomic_umin`
However, none of those functions and macros actually depend on 8-bit
width and they are needed for all atomic widths (16-bit, 32-bit, 64-bit
etc.). Some targets might not support 8-bit atomics (e.g. BPF, if we
were to enable atomic CAS for it).
This change fixes that by removing the `"8"` argument from annotations,
which results in accepting the whole variety of widths.
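Concretely, "removing the `"8"` argument" means each annotation changes shape like this:
```diff
- #[cfg(target_has_atomic = "8")]
+ #[cfg(target_has_atomic)]
```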
Fixes #106845
Fixes #106795
Signed-off-by: Michal Rostecki <vadorovsky@gmail.com>
Warn about safety of `fetch_update`
Specifically as it relates to the ABA problem.
`fetch_update` is a useful function, and one that isn't provided by, say, C++. However, this does not mean the function is magic. It is implemented in terms of `compare_exchange_weak`, and in particular, suffers from the ABA problem. See the following code, which is a naive implementation of `pop` in a lock-free queue:
```rust
// Sketch: `self.front` is an `AtomicPtr<Node>`, where `Node` has fields
// `value: i32` and `next: *mut Node`.
fn pop(&self) -> Option<i32> {
    self.front
        .fetch_update(Ordering::Acquire, Ordering::Relaxed, |front| {
            if front.is_null() {
                None
            } else {
                Some(unsafe { (*front).next })
            }
        })
        .ok()
        .map(|front| unsafe { (*front).value })
}
```
This code is unsound if called from multiple threads because of the ABA problem. Specifically, suppose nodes are allocated with `Box`. Suppose the following sequence happens:
```
Initial: Queue is X -> Y.
Thread A: Starts popping, is pre-empted.
Thread B: Pops successfully, twice, leaving the queue empty.
Thread C: Pushes, and `Box` returns X (very common for allocators)
Thread A: Wakes up, sees the head is still X, and stores Y as the new head.
```
But `Y` has already been deallocated. This is undefined behaviour.
Adding a note about this problem to `fetch_update` should hopefully prevent users from being misled, and also, a link to this common problem is, in my opinion, an improvement to our docs on atomics.
Allow arithmetic and certain bitwise ops on AtomicPtr
This is mainly to support migrating from `AtomicUsize`, for the strict provenance experiment.
This is a pretty dubious set of APIs, but it should be sufficient to allow code that's using `AtomicUsize` to manipulate a tagged pointer atomically. It's under a new feature gate, `#![feature(strict_provenance_atomic_ptr)]`, but I'm not sure if it needs its own tracking issue. I'm happy to make one, but it's not clear that it's needed.
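As a sketch of the intended use (the tag layout is invented for illustration), setting and clearing a low tag bit without round-tripping through `AtomicUsize`:
```rust
#![feature(strict_provenance_atomic_ptr)]
use std::sync::atomic::{AtomicPtr, Ordering};

// Assumes the pointee is aligned enough that bit 0 is free to use as a tag.
fn set_tag(p: &AtomicPtr<u64>) {
    // Bitwise ops on `AtomicPtr` act on the address bits while keeping
    // the pointer's provenance.
    p.fetch_or(1, Ordering::Relaxed);
}

fn clear_tag(p: &AtomicPtr<u64>) {
    p.fetch_and(!1, Ordering::Relaxed);
}
```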
I'm unsure if it needs changes in the various non-LLVM backends. Because we just cast things to integers anyway (and were already doing so), I doubt it.
API change proposal: https://github.com/rust-lang/libs-team/issues/60
Fixes #95492
[core] add `Exclusive` to sync
(discussed here: https://rust-lang.zulipchat.com/#narrow/stream/219381-t-libs/topic/Adding.20.60SyncWrapper.60.20to.20std)
`Exclusive` is a wrapper that exclusively allows mutable access to the inner value if you have exclusive access to the wrapper. It acts like a compile-time mutex, and holds an unconditional `Sync` implementation.
## Justification for inclusion into std
- This wrapper unblocks actual problems:
- The example that I hit was a vector of `futures::future::BoxFuture`'s causing a central struct in a script to be non-`Sync`. To work around it, you either write really difficult code, or wrap the futures in a needless mutex.
- Easy to maintain: this struct is as simple as a wrapper can get, and its `Sync` implementation has very clear reasoning
- Fills a gap: `&/&mut` are to `RwLock` as `Exclusive` is to `Mutex`
## Public API
```rust
// core::sync
#[derive(Default)]
struct Exclusive<T: ?Sized> { ... }

unsafe impl<T: ?Sized> Sync for Exclusive<T> {}

impl<T> Exclusive<T> {
    pub const fn new(t: T) -> Self;
    pub const fn into_inner(self) -> T;
}

impl<T: ?Sized> Exclusive<T> {
    pub const fn get_mut(&mut self) -> &mut T;
    pub const fn get_pin_mut(self: Pin<&mut Self>) -> Pin<&mut T>;
    pub const fn from_mut(r: &mut T) -> &mut Exclusive<T>;
    pub const fn from_pin_mut(p: Pin<&mut T>) -> Pin<&mut Exclusive<T>>;
}

impl<T: Future> Future for Exclusive<T> { ... }
impl<T> From<T> for Exclusive<T> { ... }
impl<T: ?Sized> Debug for Exclusive<T> { ... }
```
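To make the `BoxFuture` motivation above concrete, a rough usage sketch (the `Tasks` type is invented; `exclusive_wrapper` is the feature gate this landed under):
```rust
#![feature(exclusive_wrapper)]
use std::future::Future;
use std::pin::Pin;
use std::sync::Exclusive;

// `dyn Future + Send` trait objects are not `Sync`, so a struct holding
// them is not `Sync` either, unless they are wrapped in `Exclusive`.
type BoxFuture = Pin<Box<dyn Future<Output = ()> + Send + 'static>>;

struct Tasks {
    futures: Vec<Exclusive<BoxFuture>>,
}

// Compiles only because `Exclusive<T>: Sync` holds unconditionally.
fn assert_sync<T: Sync>() {}

fn main() {
    assert_sync::<Tasks>();
}
```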
## Naming
This is a big bikeshed, but I felt that `Exclusive` captured its general purpose quite well.
## Stability and location
As this is so simple, it can be in `core`. I feel that it can be stabilized quite soon after it is merged, if the libs team feels it's reasonable to add. Also, I don't really know how unstable features work in std/core's codebases, so I might need help fixing them.
## Tips for review
The docs are probably the thing that most needs review! I tried my best, but I'm sure people have more experience than me writing docs for `core`.
### Implementation:
The API is mostly pulled from https://docs.rs/sync_wrapper/latest/sync_wrapper/struct.SyncWrapper.html (which is Apache-2.0 licensed), and the implementation is trivial:
- a safety justification for the pin projections
- a safety justification for the unsafe `Sync` impl (mostly reasoned about by ``@danielhenrymantilla`` here: https://github.com/Actyx/sync_wrapper/pull/2)
- and forwarding impls, starting with derivable ones and `Future`
Simplify memory ordering intrinsics
This changes the names of the atomic intrinsics to always fully include their memory ordering arguments.
```diff
- atomic_cxchg
+ atomic_cxchg_seqcst_seqcst
- atomic_cxchg_acqrel
+ atomic_cxchg_acqrel_release
- atomic_cxchg_acqrel_failrelaxed
+ atomic_cxchg_acqrel_relaxed
// And so on.
```
- `seqcst` is no longer implied
- The failure ordering on `cxchg` is no longer implied in some cases, but is now always an explicit part of the name.
- `release` is no longer shortened to just `rel`. That was especially confusing, since `relaxed` also starts with `rel`.
- `acquire` is no longer shortened to just `acq`, such that the names now all match the `std::sync::atomic::Ordering` variants exactly.
- This now allows for more combinations on the compare exchange operations, such as `atomic_cxchg_release_acquire`, which is necessary for #68464.
- This PR only exposes the new possibilities through unstable intrinsics, but not yet through the stable API. That's for [a separate PR](https://github.com/rust-lang/rust/pull/98383) that requires an FCP.
Suffixes for operations with a single memory order:
| Order | Before | After |
|---------|--------------|------------|
| Relaxed | `_relaxed` | `_relaxed` |
| Acquire | `_acq` | `_acquire` |
| Release | `_rel` | `_release` |
| AcqRel | `_acqrel` | `_acqrel` |
| SeqCst | (none) | `_seqcst` |
Suffixes for compare-and-exchange operations with two memory orderings:
| Success | Failure | Before | After |
|---------|---------|--------------------------|--------------------|
| Relaxed | Relaxed | `_relaxed` | `_relaxed_relaxed` |
| Relaxed | Acquire | ❌ | `_relaxed_acquire` |
| Relaxed | SeqCst | ❌ | `_relaxed_seqcst` |
| Acquire | Relaxed | `_acq_failrelaxed` | `_acquire_relaxed` |
| Acquire | Acquire | `_acq` | `_acquire_acquire` |
| Acquire | SeqCst | ❌ | `_acquire_seqcst` |
| Release | Relaxed | `_rel` | `_release_relaxed` |
| Release | Acquire | ❌ | `_release_acquire` |
| Release | SeqCst | ❌ | `_release_seqcst` |
| AcqRel | Relaxed | `_acqrel_failrelaxed` | `_acqrel_relaxed` |
| AcqRel | Acquire | `_acqrel` | `_acqrel_acquire` |
| AcqRel | SeqCst | ❌ | `_acqrel_seqcst` |
| SeqCst | Relaxed | `_failrelaxed` | `_seqcst_relaxed` |
| SeqCst | Acquire | `_failacq` | `_seqcst_acquire` |
| SeqCst | SeqCst | (none) | `_seqcst_seqcst` |
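For reference, here is how a stable-API call maps onto the new names (a sketch; per the table above, success `AcqRel` with failure `Acquire` becomes `_acqrel_acquire`):
```rust
use std::sync::atomic::{AtomicUsize, Ordering};

fn example(a: &AtomicUsize) {
    // Success ordering AcqRel, failure ordering Acquire: under the new
    // scheme this lowers to the `atomic_cxchg_acqrel_acquire` intrinsic.
    let _ = a.compare_exchange(0, 1, Ordering::AcqRel, Ordering::Acquire);
}
```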
clarify how Rust atomics correspond to C++ atomics
``@cbeuw`` noted in https://github.com/rust-lang/miri/pull/1963 that the correspondence between C++ atomics and Rust atomics is not quite as obvious as one might think, since in Rust I can use `get_mut` to treat previously non-atomic data as atomic. However, I think using C++20 `atomic_ref`, we can establish a suitable relation between the two -- or do you see problems with that ``@cbeuw?`` (I recall you said there was some issue, but it was deep inside that PR and GitHub makes it impossible to find...)
Cc ``@thomcc;`` not sure whom else to ping for atomic memory model things.
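A minimal sketch of the Rust pattern under discussion, mixing atomic and non-atomic accesses to the same memory via `get_mut` (which C++20 would model by constructing a temporary `std::atomic_ref` over plain data for the atomic accesses):
```rust
use std::sync::atomic::{AtomicU32, Ordering};

fn main() {
    let mut a = AtomicU32::new(0);
    a.store(1, Ordering::Relaxed); // atomic access
    *a.get_mut() += 1;             // non-atomic access to the same memory
    assert_eq!(a.load(Ordering::Relaxed), 2);
}
```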