Commit Graph

119 Commits

Author SHA1 Message Date
David Tolnay
a95f20c9ad Add Exclusive forwarding impls (FnOnce, FnMut, Generator) 2023-09-28 10:22:19 -07:00
est31
33970db8c6 Add #[rustc_never_returns_null_ptr] to std functions
Add the attribute to standard library functions that
are guaranteed to never return null pointers, as their
originating data wouldn't allow it.
2023-08-06 00:20:28 +02:00
bors
e7d6ce3a6f Auto merge of #114034 - Amanieu:riscv-atomicbool, r=thomcc
Optimize `AtomicBool` for targets that don't support byte-sized atomics

`AtomicBool` is defined to have the same layout as `bool`, which means that we guarantee that it has a size of 1 byte. However, on certain architectures such as RISC-V, LLVM will emulate byte atomics using a masked CAS loop on an aligned word.

Since `bool` only ever holds 0 or 1, we can replace `swap` operations with `and`/`or` operations that LLVM can lower to word-sized atomic `and`/`or` operations. This works because the incoming value to a `swap` or `compare_exchange` for `AtomicBool` is often a compile-time constant.

### Example

```rust
pub fn swap_true(atomic: &AtomicBool) -> bool {
    atomic.swap(true, Ordering::Relaxed)
}
```

### Old

```asm
	andi	a1, a0, -4
	slli	a0, a0, 3
	li	a2, 255
	sllw	a2, a2, a0
	li	a3, 1
	sllw	a3, a3, a0
	slli	a3, a3, 32
	srli	a3, a3, 32
.LBB1_1:
	lr.w	a4, (a1)
	mv	a5, a3
	xor	a5, a5, a4
	and	a5, a5, a2
	xor	a5, a5, a4
	sc.w	a5, a5, (a1)
	bnez	a5, .LBB1_1
	srlw	a0, a4, a0
	andi	a0, a0, 255
	snez	a0, a0
	ret
```

### New

```asm
	andi	a1, a0, -4
	slli	a0, a0, 3
	li	a2, 1
	sllw	a2, a2, a0
	amoor.w	a1, a2, (a1)
	srlw	a0, a1, a0
	andi	a0, a0, 255
	snez	a0, a0
	ret
```
2023-07-27 01:00:12 +00:00
Amanieu d'Antras
3dbee5bc71 Optimize AtomicBool for targets that don't support byte-sized atomics
`AtomicBool` is defined to have the same layout as `bool`, which means
that we guarantee that it has a size of 1 byte. However, on certain
architectures such as RISC-V, LLVM will emulate byte atomics using a
masked CAS loop on an aligned word.

Since `bool` only ever holds 0 or 1, we can replace `swap` operations
with `and`/`or` operations that LLVM can lower to word-sized atomic
`and`/`or` operations. This works because the incoming value to a
`swap` or `compare_exchange` for `AtomicBool` is often a compile-time
constant.
2023-07-26 00:09:49 +01:00
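
A sketch of the equivalence being exploited (against the stable API, not code from the commit): because `bool` is only ever 0 or 1, `swap(true)` matches `fetch_or(true)` and `swap(false)` matches `fetch_and(false)`, and the latter forms are what LLVM can lower to single word-sized `amoor.w`/`amoand.w` instructions on RISC-V.

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// `swap(true)` and `fetch_or(true)` both store `true` and return the
// previous value, so the compiler may use the cheaper `or` form.
fn swap_true(b: &AtomicBool) -> bool {
    b.fetch_or(true, Ordering::Relaxed)
}

// Likewise, `swap(false)` behaves like `fetch_and(false)`.
fn swap_false(b: &AtomicBool) -> bool {
    b.fetch_and(false, Ordering::Relaxed)
}

fn main() {
    let b = AtomicBool::new(false);
    assert!(!swap_true(&b));  // previous value was false
    assert!(swap_false(&b));  // previous value was true
}
```
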
KaDiWa
c3462a0676 remove the unstable core::sync::atomic::ATOMIC_*_INIT constants 2023-07-18 09:45:52 +02:00
Pietro Albini
4e04da6183 replace version placeholders 2023-04-28 08:47:55 -07:00
Deadbeef
76dbe29104 rm const traits in libcore 2023-04-16 06:49:27 +00:00
KaDiWa
ad2b34d0e3 remove some unneeded imports 2023-04-12 19:27:18 +02:00
Mark Rousskov
bb8a0ffa23 Bump to latest beta 2023-03-15 08:55:22 -04:00
Matthias Krüger
e670379b57 Rollup merge of #108419 - tgross35:atomic-as-ptr, r=m-ou-se
Stabilize `atomic_as_ptr`

Fixes #66893

This stabilizes the `as_ptr` methods for atomics. The stabilization feature gate used here is `atomic_as_ptr`, which supersedes `atomic_mut_ptr` to match the change in https://github.com/rust-lang/rust/pull/107736.

This needs FCP.

New stable API:

```rust
impl AtomicBool {
    pub const fn as_ptr(&self) -> *mut bool;
}

impl AtomicI32 {
    pub const fn as_ptr(&self) -> *mut i32;
}

// Includes all other atomic types

impl<T> AtomicPtr<T> {
    pub const fn as_ptr(&self) -> *mut *mut T;
}
```

r? libs-api
@rustbot label +needs-fcp
2023-03-13 21:55:35 +01:00
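
A sketch of the newly stable API in use: `as_ptr` itself is safe (and const); only dereferencing the returned raw pointer is unsafe, which is the shape FFI callers need.

```rust
use std::sync::atomic::{AtomicI32, Ordering};

fn main() {
    let a = AtomicI32::new(1);
    // Safe to obtain; `AtomicI32` has the same layout as `i32`.
    let p: *mut i32 = a.as_ptr();
    // SAFETY: nothing else is accessing `a` concurrently here, so a
    // non-atomic write through the pointer is not a data race.
    unsafe { *p = 5 };
    assert_eq!(a.load(Ordering::SeqCst), 5);
}
```
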
Maybe Waffle
a2baba09a2 Fill in tracking issue for feature("atomic_from_ptr") 2023-03-02 12:00:26 +00:00
Maybe Waffle
4a2c555904 Add Atomic*::from_ptr 2023-02-27 19:05:19 +00:00
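
A sketch of the added constructor, assuming a toolchain where `Atomic*::from_ptr` is available (it was gated behind `feature(atomic_from_ptr)` when this landed); it builds an atomic reference over memory that was not declared atomic:

```rust
use std::sync::atomic::{AtomicI32, Ordering};

fn main() {
    let mut x = 0i32;
    let p: *mut i32 = &mut x;
    // SAFETY: `p` is valid and aligned for `AtomicI32`, and `x` is not
    // accessed non-atomically while the returned reference is live.
    let a = unsafe { AtomicI32::from_ptr(p) };
    a.store(42, Ordering::Relaxed);
    assert_eq!(a.load(Ordering::Relaxed), 42);
}
```
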
Trevor Gross
318be2bee9 Stabilize atomic_as_ptr 2023-02-23 23:29:10 -05:00
Trevor Gross
787b1116e8 Rename atomic 'as_mut_ptr' to 'as_ptr' to match Cell (ref #66893) 2023-02-10 20:46:14 -05:00
Trevor Gross
b51d3b9443 Mark 'atomic_mut_ptr' methods const 2023-02-05 17:03:46 -05:00
Michal Rostecki
474ea87943 core: Support a variety of atomic widths in width-agnostic functions
Before this change, the following functions and macros were annotated
with `#[cfg(target_has_atomic = "8")]` or
`#[cfg(target_has_atomic_load_store = "8")]`:

* `atomic_int`
* `strongest_failure_ordering`
* `atomic_swap`
* `atomic_add`
* `atomic_sub`
* `atomic_compare_exchange`
* `atomic_compare_exchange_weak`
* `atomic_and`
* `atomic_nand`
* `atomic_or`
* `atomic_xor`
* `atomic_max`
* `atomic_min`
* `atomic_umax`
* `atomic_umin`

However, none of those functions and macros actually depend on 8-bit
width and they are needed for all atomic widths (16-bit, 32-bit, 64-bit
etc.). Some targets might not support 8-bit atomics (e.g. BPF, if we
were to enable atomic CAS for it).

This change fixes that by removing the `"8"` argument from annotations,
which results in accepting the whole variety of widths.

Fixes #106845
Fixes #106795

Signed-off-by: Michal Rostecki <vadorovsky@gmail.com>
2023-01-25 10:44:03 +08:00
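
For context, a sketch of width-specific gating with the public form of this cfg: code that needs 64-bit read-modify-write atomics can be compiled only where they exist, which is exactly the per-width distinction the `"8"` annotations were erasing.

```rust
// Compiled only on targets with full 64-bit atomic support (including CAS).
#[cfg(target_has_atomic = "64")]
pub fn bump(counter: &std::sync::atomic::AtomicU64) -> u64 {
    counter.fetch_add(1, std::sync::atomic::Ordering::Relaxed)
}

// A fallback for other targets could be gated behind
// #[cfg(not(target_has_atomic = "64"))].
```
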
Maybe Waffle
22b4c68895 Make // SAFETY comment part of the doctest, and not surrounding code 2023-01-12 07:28:43 +00:00
Maybe Waffle
f1a63bc2dd Remove unused mut from a doctest 2023-01-12 07:27:51 +00:00
Maybe Waffle
a513c84a5b Add AtomicPtr::as_mut_ptr 2023-01-12 07:27:36 +00:00
Albert Larsan
736342bb46 Correct typos in core::sync::Exclusive::get_{pin_mut, mut} 2022-12-12 09:19:17 +01:00
clubby789
19bc8fb05a Remove extra spaces 2022-10-19 23:54:00 +01:00
Rageking8
7122abaddf more dupe word typos 2022-10-14 12:57:56 +08:00
Matthias Krüger
d13f7aef70 Rollup merge of #101774 - Riolku:atomic-update-aba, r=m-ou-se
Warn about safety of `fetch_update`

Specifically as it relates to the ABA problem.

`fetch_update` is a useful function, and one that isn't provided by, say, C++. However, this does not mean the function is magic. It is implemented in terms of `compare_exchange_weak`, and in particular, suffers from the ABA problem. See the following code, which is a naive implementation of `pop` in a lock-free queue:

```rust
// Naive, unsound pop, illustrating the ABA problem; `Node` is the
// queue's link type, with a `next: *mut Node` field.
fn pop(&self) -> Option<*mut Node> {
    self.front.fetch_update(Ordering::Relaxed, Ordering::Acquire, |front| {
        if front.is_null() {
            None
        } else {
            Some(unsafe { (*front).next })
        }
    }).ok()
}
```

This code is unsound if called from multiple threads because of the ABA problem. Specifically, suppose nodes are allocated with `Box`. Suppose the following sequence happens:

```
Initial: Queue is X -> Y.

Thread A: Starts popping, is pre-empted.
Thread B: Pops successfully, twice, leaving the queue empty.
Thread C: Pushes, and `Box` returns X (very common for allocators)
Thread A: Wakes up, sees the head is still X, and stores Y as the new head.
```

But `Y` is deallocated. This is undefined behaviour.

Adding a note about this problem to `fetch_update` should hopefully prevent users from being misled, and also, a link to this common problem is, in my opinion, an improvement to our docs on atomics.
2022-10-11 18:59:46 +02:00
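
By contrast, a sketch of a case where `fetch_update` is sound: when the closure's decision depends only on the integer value itself, not on memory it points to, equal values are genuinely interchangeable and ABA is harmless.

```rust
use std::sync::atomic::{AtomicU32, Ordering};

// Saturating increment: returning `None` from the closure makes
// `fetch_update` bail out with `Err(current)`.
fn saturating_bump(c: &AtomicU32) -> Result<u32, u32> {
    c.fetch_update(Ordering::Relaxed, Ordering::Relaxed, |v| v.checked_add(1))
}

fn main() {
    let c = AtomicU32::new(u32::MAX - 1);
    assert_eq!(saturating_bump(&c), Ok(u32::MAX - 1)); // bumped to MAX
    assert_eq!(saturating_bump(&c), Err(u32::MAX));    // saturated
}
```
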
Thom Chiovoloni
29efe8c789 Add #[inline] to trivial functions on core::sync::Exclusive 2022-09-22 22:15:27 -07:00
Keenan Gugeler
3d28a1ad76 Warn about safety of fetch_update
Specifically as it relates to the ABA problem.
2022-09-14 13:25:14 -04:00
Maybe Waffle
b2625e24b9 fix nitpicks from review 2022-08-21 06:36:11 +04:00
Maybe Waffle
3ba393465f Make some docs nicer wrt pointer offsets 2022-08-21 02:22:20 +04:00
Mark Rousskov
154a09dd91 Adjust cfgs 2022-08-12 16:28:15 -04:00
Ralf Jung
2b269cad43 miri: prune some atomic operation details from stacktrace 2022-07-20 16:34:24 -04:00
Yuki Okushi
796bc7cae3 Rollup merge of #98383 - m-ou-se:remove-memory-order-restrictions, r=joshtriplett
Remove restrictions on compare-exchange memory ordering.

We currently don't allow the failure memory ordering of compare-exchange operations to be stronger than the success ordering, as was the case in C++11 when its memory model was copied to Rust. However, this restriction was lifted in C++17 as part of [p0418r2](https://wg21.link/p0418r2). It's time we lift the restriction too.

| Success | Failure | Before | After |
|---------|---------|--------|-------|
| Relaxed | Relaxed | ✔️ | ✔️ |
| Relaxed | Acquire |                 | ✔️ |
| Relaxed | SeqCst  |                 | ✔️ |
| Acquire | Relaxed | ✔️ | ✔️ |
| Acquire | Acquire | ✔️ | ✔️ |
| Acquire | SeqCst  |                 | ✔️ |
| Release | Relaxed | ✔️ | ✔️ |
| Release | Acquire |                 | ✔️ |
| Release | SeqCst  |                 | ✔️ |
| AcqRel  | Relaxed | ✔️ | ✔️ |
| AcqRel  | Acquire | ✔️ | ✔️ |
| AcqRel  | SeqCst  |                 | ✔️ |
| SeqCst  | Relaxed | ✔️ | ✔️ |
| SeqCst  | Acquire | ✔️ | ✔️ |
| SeqCst  | SeqCst  | ✔️ | ✔️ |
| \*      | Release |                 |                 |
| \*      | AcqRel  |                 |                 |

Fixes https://github.com/rust-lang/rust/issues/68464
2022-07-18 08:39:57 +09:00
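
A sketch of a combination the table marks as newly accepted; before this change, passing a failure ordering stronger than the success ordering was rejected at runtime.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

fn main() {
    let a = AtomicUsize::new(0);
    // Success ordering Release, failure ordering Acquire: newly allowed,
    // matching C++17's p0418r2 relaxation.
    let r = a.compare_exchange(0, 1, Ordering::Release, Ordering::Acquire);
    assert_eq!(r, Ok(0));
}
```
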
Matthias Krüger
342b666d59 Rollup merge of #99094 - AldaronLau:atomic-ptr-extra-space, r=Dylan-DPC
Remove extra space in AtomicPtr::new docs
2022-07-11 00:33:48 +02:00
Maybe Waffle
e9292b7652 fill new tracking issue for feature(strict_provenance_atomic_ptr) 2022-07-10 13:17:33 +04:00
Jeron Aldaron Lau
4944b5769b Remove extra space in AtomicPtr::new docs 2022-07-09 14:20:34 -05:00
Dylan DPC
3c35da224b Rollup merge of #99070 - tamird:update-tracking-issue, r=RalfJung
Update integer_atomics tracking issue

Updates #32976.
Updates #99069.

r? @RalfJung
2022-07-09 11:28:09 +05:30
Tamir Duberstein
a491d4582d Update integer_atomics tracking issue
Updates #32976.
Updates #99069.
2022-07-08 17:52:04 -04:00
Guillaume Gomez
4755173cf6 Rollup merge of #96935 - thomcc:atomicptr-strict-prov, r=dtolnay
Allow arithmetic and certain bitwise ops on AtomicPtr

This is mainly to support migrating from `AtomicUsize`, for the strict provenance experiment.

This is a pretty dubious set of APIs, but it should be sufficient to allow code that's using `AtomicUsize` to manipulate a tagged pointer atomically. It's under a new feature gate, `#![feature(strict_provenance_atomic_ptr)]`, but I'm not sure if it needs its own tracking issue. I'm happy to make one, but it's not clear that it's needed.

I'm unsure if it needs changes in the various non-LLVM backends. Because we just cast things to integers anyway (and were already doing so), I doubt it.

API change proposal: https://github.com/rust-lang/libs-team/issues/60

Fixes #95492
2022-07-06 20:43:23 +02:00
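
A nightly sketch of the tagged-pointer use case this enables, assuming `#![feature(strict_provenance_atomic_ptr)]`: a tag bit in the alignment-guaranteed-zero low bits of a pointer is set and cleared atomically.

```rust
#![feature(strict_provenance_atomic_ptr)]

use std::sync::atomic::{AtomicPtr, Ordering};

fn main() {
    let mut slot = 0u32; // 4-byte aligned, so the low two pointer bits are free
    let p = AtomicPtr::new(&mut slot as *mut u32);

    // Atomically set a tag in the low bit; the previous pointer is returned.
    let old = p.fetch_or(1, Ordering::Relaxed);
    assert_eq!(old as usize & 1, 0);

    // Atomically clear the tag again.
    p.fetch_and(!1, Ordering::Relaxed);
    assert_eq!(p.load(Ordering::Relaxed) as usize & 1, 0);
}
```
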
Thom Chiovoloni
e65ecee90e Rename AtomicPtr::fetch_{add,sub}{,_bytes} 2022-07-01 06:21:19 -07:00
Thom Chiovoloni
2f872afdb5 Allow arithmetic and certain bitwise ops on AtomicPtr
This is mainly to support migrating from AtomicUsize, for the strict
provenance experiment.

Fixes #95492
2022-07-01 06:21:18 -07:00
Matthias Krüger
0e71d1f237 Rollup merge of #97629 - guswynn:exclusive_struct, r=m-ou-se
[core] add `Exclusive` to sync

(discussed here: https://rust-lang.zulipchat.com/#narrow/stream/219381-t-libs/topic/Adding.20.60SyncWrapper.60.20to.20std)

`Exclusive` is a wrapper that allows mutable access to the inner value only if you have exclusive access to the wrapper. It acts like a compile-time mutex and holds an unconditional `Sync` implementation.

## Justification for inclusion into std
- This wrapper unblocks actual problems:
  - The example that I hit was a vector of `futures::future::BoxFuture`s causing a central struct in a script to be non-`Sync`. To work around it, you either write really difficult code, or wrap the futures in a needless mutex.
- Easy to maintain: this struct is as simple as a wrapper can get, and its `Sync` implementation has very clear reasoning
- Fills a gap: `&/&mut` are to `RwLock` as `Exclusive` is to `Mutex`

## Public Api
```rust
// core::sync
#[derive(Default)]
struct Exclusive<T: ?Sized> { ... }

unsafe impl<T: ?Sized> Sync for Exclusive<T> {}

impl<T> Exclusive<T> {
    pub const fn new(t: T) -> Self;
    pub const fn into_inner(self) -> T;
}

impl<T: ?Sized> Exclusive<T> {
    pub const fn get_mut(&mut self) -> &mut T;
    pub const fn get_pin_mut(self: Pin<&mut Self>) -> Pin<&mut T>;
    pub const fn from_mut(r: &mut T) -> &mut Exclusive<T>;
    pub const fn from_pin_mut(p: Pin<&mut T>) -> Pin<&mut Exclusive<T>>;
}

impl<T: Future> Future for Exclusive<T> { ... }

impl<T> From<T> for Exclusive<T> { ... }
impl<T: ?Sized> Debug for Exclusive<T> { ... }
```

## Naming
This is a big bikeshed, but I felt that `Exclusive` captured its general purpose quite well.

## Stability and location
As this is so simple, it can be in `core`. I feel that it can be stabilized quite soon after it is merged, if the libs team feels it's reasonable to add. Also, I don't really know how unstable features work in std/core's codebase, so I might need help fixing that

## Tips for review
The docs are probably the thing that most needs review! I tried my best, but I'm sure people have more experience than me writing docs for `core`.

### Implementation:
The API is mostly pulled from https://docs.rs/sync_wrapper/latest/sync_wrapper/struct.SyncWrapper.html (which is Apache 2.0 licensed), and the implementation is trivial:
- an `unsafe` justification for pinning
- an `unsafe` justification for the `Sync` impl (mostly reasoned about by @danielhenrymantilla here: https://github.com/Actyx/sync_wrapper/pull/2)
- and forwarding impls, starting with derivable ones and `Future`
2022-06-30 19:55:50 +02:00
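
A nightly sketch of the wrapper in use, assuming `#![feature(exclusive_wrapper)]`: a `!Sync` value becomes shareable because shared references to the wrapper grant no access at all.

```rust
#![feature(exclusive_wrapper)]

use std::cell::Cell;
use std::sync::Exclusive;

fn require_sync<T: Sync>(_: &T) {}

fn main() {
    // `Cell<i32>` is !Sync, but `Exclusive<Cell<i32>>` is Sync:
    // `&Exclusive<T>` is inert, and access requires `&mut`.
    let mut x = Exclusive::new(Cell::new(0));
    require_sync(&x);
    x.get_mut().set(42);
    assert_eq!(x.into_inner().get(), 42);
}
```
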
Dylan DPC
3f2ba25159 Rollup merge of #98479 - leocth:atomic-bool-fetch-not, r=joshtriplett
Add `fetch_not` method on `AtomicBool`

This PR adds a `fetch_not` method on `AtomicBool` that performs a logical NOT on the inner value.
Internally, this just calls the `fetch_xor` method with the value `true`.

[See this IRLO discussion](https://internals.rust-lang.org/t/could-we-have-fetch-not-for-atomicbool-s/16881)
2022-06-29 17:59:32 +05:30
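
On stable, the operation `fetch_not` wraps can already be spelled directly; a sketch of the stated equivalence:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Toggle the flag and return the previous value, exactly what
// `fetch_not` does internally via `fetch_xor(true, ...)`.
fn toggle(b: &AtomicBool) -> bool {
    b.fetch_xor(true, Ordering::Relaxed)
}

fn main() {
    let b = AtomicBool::new(false);
    assert!(!toggle(&b));
    assert!(b.load(Ordering::Relaxed));
}
```
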
Mara Bos
a898f41379 Only enable new cmpxchg memory orderings in cfg(not(bootstrap)).
(The bootstrap/beta compiler doesn't support them yet.)
2022-06-29 12:00:06 +02:00
Mara Bos
a7434da9be Remove restrictions on compare-exchange memory ordering. 2022-06-29 12:00:06 +02:00
Dylan DPC
45740acd34 Rollup merge of #97423 - m-ou-se:memory-ordering-intrinsics, r=tmiasko
Simplify memory ordering intrinsics

This changes the names of the atomic intrinsics to always fully include their memory ordering arguments.

```diff
- atomic_cxchg
+ atomic_cxchg_seqcst_seqcst

- atomic_cxchg_acqrel
+ atomic_cxchg_acqrel_release

- atomic_cxchg_acqrel_failrelaxed
+ atomic_cxchg_acqrel_relaxed

// And so on.
```

- `seqcst` is no longer implied
- The failure ordering on cxchg is no longer implied in some cases, but is now always explicitly part of the name.
- `release` is no longer shortened to just `rel`. That was especially confusing, since `relaxed` also starts with `rel`.
- `acquire` is no longer shortened to just `acq`, such that the names now all match the `std::sync::atomic::Ordering` variants exactly.
- This now allows for more combinations on the compare exchange operations, such as `atomic_cxchg_release_acquire`, which is necessary for #68464.
- This PR only exposes the new possibilities through unstable intrinsics, but not yet through the stable API. That's for [a separate PR](https://github.com/rust-lang/rust/pull/98383) that requires an FCP.

Suffixes for operations with a single memory order:

| Order   | Before       | After      |
|---------|--------------|------------|
| Relaxed | `_relaxed`   | `_relaxed` |
| Acquire | `_acq`       | `_acquire` |
| Release | `_rel`       | `_release` |
| AcqRel  | `_acqrel`    | `_acqrel`  |
| SeqCst  | (none)       | `_seqcst`  |

Suffixes for compare-and-exchange operations with two memory orderings:

| Success | Failure | Before                   | After              |
|---------|---------|--------------------------|--------------------|
| Relaxed | Relaxed | `_relaxed`               | `_relaxed_relaxed` |
| Relaxed | Acquire |                       | `_relaxed_acquire` |
| Relaxed | SeqCst  |                       | `_relaxed_seqcst`  |
| Acquire | Relaxed | `_acq_failrelaxed`       | `_acquire_relaxed` |
| Acquire | Acquire | `_acq`                   | `_acquire_acquire` |
| Acquire | SeqCst  |                       | `_acquire_seqcst`  |
| Release | Relaxed | `_rel`                   | `_release_relaxed` |
| Release | Acquire |                       | `_release_acquire` |
| Release | SeqCst  |                       | `_release_seqcst`  |
| AcqRel  | Relaxed | `_acqrel_failrelaxed`    | `_acqrel_relaxed`  |
| AcqRel  | Acquire | `_acqrel`                | `_acqrel_acquire`  |
| AcqRel  | SeqCst  |                       | `_acqrel_seqcst`   |
| SeqCst  | Relaxed | `_failrelaxed`           | `_seqcst_relaxed`  |
| SeqCst  | Acquire | `_failacq`               | `_seqcst_acquire`  |
| SeqCst  | SeqCst  | (none)                   | `_seqcst_seqcst`   |
2022-06-29 10:28:18 +05:30
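
To tie the naming to the stable surface, a sketch of a call and the intrinsic it selects under the new scheme (per the tables above):

```rust
use std::sync::atomic::{AtomicU32, Ordering};

fn main() {
    let a = AtomicU32::new(0);
    // Lowers to `atomic_cxchg_acqrel_acquire` after this change; the old
    // name for this pair was the ambiguous bare `_acqrel` suffix.
    let _ = a.compare_exchange(0, 1, Ordering::AcqRel, Ordering::Acquire);
}
```
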
Mara Bos
4982a59986 Rename/restructure memory ordering intrinsics. 2022-06-28 08:58:27 +02:00
leocth
9c5ae20c59 forgot about the feature flag in the doctest 2022-06-26 10:49:05 +08:00
leocth
7d5f236c3d Add feature gate #![feature(atomic_bool_fetch_not)] 2022-06-25 18:31:01 +08:00
leocth
dcfe92e193 add fetch_not method on AtomicBool 2022-06-25 11:19:08 +08:00
Gus Wynn
029f9aa3bf add tracking issue for exclusive 2022-06-23 08:52:13 -07:00
Yuki Okushi
25b84491f7 Rollup merge of #97516 - RalfJung:atomics, r=joshtriplett
clarify how Rust atomics correspond to C++ atomics

@cbeuw noted in https://github.com/rust-lang/miri/pull/1963 that the correspondence between C++ atomics and Rust atomics is not quite as obvious as one might think, since in Rust I can use `get_mut` to treat previously non-atomic data as atomic. However, I think using C++20 `atomic_ref`, we can establish a suitable relation between the two -- or do you see problems with that, @cbeuw? (I recall you said there was some issue, but it was deep inside that PR and GitHub makes it impossible to find...)

Cc @thomcc; not sure whom else to ping for atomic memory model things.
2022-06-22 15:16:11 +09:00
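
A sketch of the `get_mut` behavior under discussion: the same memory is accessed atomically and then non-atomically, with `&mut` proving exclusivity; the C++ analogue needs `std::atomic_ref` over plain data rather than `std::atomic`.

```rust
use std::sync::atomic::{AtomicU32, Ordering};

fn main() {
    let mut a = AtomicU32::new(0);
    a.store(1, Ordering::Relaxed); // atomic access
    // Non-atomic access to the same memory: sound because `&mut a`
    // guarantees no concurrent accesses exist.
    *a.get_mut() += 1;
    assert_eq!(a.into_inner(), 2);
}
```
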
Ralf Jung
4768bfc6ef hedge our bets
Co-authored-by: Josh Triplett <josh@joshtriplett.org>
2022-06-21 16:54:54 -07:00