//! Atomic types
//!
//! Atomic types provide primitive shared-memory communication between
//! threads, and are the building blocks of other concurrent
//! types.
//!
//! Rust atomics currently follow the same rules as [C++20 atomics][cpp], specifically `atomic_ref`.
//! Basically, creating a *shared reference* to one of the Rust atomic types corresponds to creating
//! an `atomic_ref` in C++; the `atomic_ref` is destroyed when the lifetime of the shared reference
//! ends. (A Rust atomic type that is exclusively owned or behind a mutable reference does *not*
//! correspond to an "atomic object" in C++, since it can be accessed via non-atomic operations.)
//!
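//! As an illustration of that distinction, here is a minimal sketch: a value behind
//! a mutable reference can be read and written non-atomically through
//! [`AtomicBool::get_mut`], while a shared reference only permits the atomic API.
//!
//! ```
//! use std::sync::atomic::{AtomicBool, Ordering};
//!
//! let mut flag = AtomicBool::new(false);
//! // Exclusive access: no "atomic object" exists, so a plain write is fine.
//! *flag.get_mut() = true;
//! // Shared access: reads and writes must go through the atomic API.
//! assert_eq!(flag.load(Ordering::Relaxed), true);
//! ```
//!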
//! This module defines atomic versions of a select number of primitive
//! types, including [`AtomicBool`], [`AtomicIsize`], [`AtomicUsize`],
//! [`AtomicI8`], [`AtomicU16`], etc.
//! Atomic types present operations that, when used correctly, synchronize
//! updates between threads.
//!
//! Each method takes an [`Ordering`] which represents the strength of
//! the memory barrier for that operation. These orderings are the
//! same as the [C++20 atomic orderings][1]. For more information see the [nomicon][2].
//!
//! [cpp]: https://en.cppreference.com/w/cpp/atomic
//! [1]: https://en.cppreference.com/w/cpp/atomic/memory_order
//! [2]: ../../../nomicon/atomics.html
//!
//! Atomic variables are safe to share between threads (they implement [`Sync`])
//! but they do not themselves provide the mechanism for sharing and follow the
//! [threading model](../../../std/thread/index.html#the-threading-model) of Rust.
//! The most common way to share an atomic variable is to put it into an [`Arc`][arc] (an
//! atomically-reference-counted shared pointer).
//!
//! [arc]: ../../../std/sync/struct.Arc.html
//!
//! Atomic types may be stored in static variables, initialized using
//! the constant initializers like [`AtomicBool::new`]. Atomic statics
//! are often used for lazy global initialization.
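//!
//! For example, a static flag can gate one-time setup. This is a minimal sketch;
//! the names `INITIALIZED` and `init` are illustrative:
//!
//! ```
//! use std::sync::atomic::{AtomicBool, Ordering};
//!
//! static INITIALIZED: AtomicBool = AtomicBool::new(false);
//!
//! fn init() {
//!     // `swap` returns the previous value, so only the first caller sees `false`.
//!     if !INITIALIZED.swap(true, Ordering::SeqCst) {
//!         // ... perform one-time setup here ...
//!     }
//! }
//!
//! init();
//! assert_eq!(INITIALIZED.load(Ordering::SeqCst), true);
//! ```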
//!
//! # Portability
//!
//! All atomic types in this module are guaranteed to be [lock-free] if they're
//! available. This means they don't internally acquire a global mutex. Atomic
//! types and operations are not guaranteed to be wait-free. This means that
//! operations like `fetch_or` may be implemented with a compare-and-swap loop.
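//!
//! Such a loop could look like the following sketch (illustrative only; the name
//! `emulated_fetch_or` is not part of this module, and real platforms may do this
//! in a single instruction):
//!
//! ```
//! use std::sync::atomic::{AtomicUsize, Ordering};
//!
//! fn emulated_fetch_or(a: &AtomicUsize, val: usize) -> usize {
//!     let mut old = a.load(Ordering::Relaxed);
//!     loop {
//!         // Retry until no other thread has raced in between the load and the store.
//!         match a.compare_exchange_weak(old, old | val, Ordering::SeqCst, Ordering::Relaxed) {
//!             Ok(prev) => return prev,
//!             Err(actual) => old = actual,
//!         }
//!     }
//! }
//!
//! let a = AtomicUsize::new(0b01);
//! assert_eq!(emulated_fetch_or(&a, 0b10), 0b01);
//! assert_eq!(a.load(Ordering::SeqCst), 0b11);
//! ```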
//!
//! Atomic operations may be implemented at the instruction layer with
//! larger-size atomics. For example some platforms use 4-byte atomic
//! instructions to implement `AtomicI8`. Note that this emulation should not
//! have an impact on correctness of code, it's just something to be aware of.
//!
//! The atomic types in this module might not be available on all platforms. The
//! atomic types here are all widely available, however, and can generally be
//! relied upon to exist. Some notable exceptions are:
//!
//! * PowerPC and MIPS platforms with 32-bit pointers do not have `AtomicU64` or
//!   `AtomicI64` types.
//! * ARM platforms like `armv5te` that aren't for Linux only provide `load`
//!   and `store` operations, and do not support Compare and Swap (CAS)
//!   operations, such as `swap`, `fetch_add`, etc. Additionally on Linux,
//!   these CAS operations are implemented via [operating system support], which
//!   may come with a performance penalty.
//! * ARM targets with `thumbv6m` only provide `load` and `store` operations,
//!   and do not support Compare and Swap (CAS) operations, such as `swap`,
//!   `fetch_add`, etc.
//!
//! [operating system support]: https://www.kernel.org/doc/Documentation/arm/kernel_user_helpers.txt
//!
//! Note that future platforms may be added that also do not have support for
//! some atomic operations. Maximally portable code will want to be careful
//! about which atomic types are used. `AtomicUsize` and `AtomicIsize` are
//! generally the most portable, but even then they're not available everywhere.
//! For reference, the `std` library requires `AtomicBool`s and pointer-sized atomics, although
//! `core` does not.
//!
//! The `#[cfg(target_has_atomic)]` attribute can be used to conditionally
//! compile based on the target's supported bit widths. It is a key-value
//! option set for each supported size, with values "8", "16", "32", "64",
//! "128", and "ptr" for pointer-sized atomics.
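//!
//! For example, the following function only compiles on targets with native 64-bit
//! atomics (a minimal sketch; `count` is an illustrative name):
//!
//! ```
//! #[cfg(target_has_atomic = "64")]
//! fn count(c: &std::sync::atomic::AtomicU64) {
//!     c.fetch_add(1, std::sync::atomic::Ordering::Relaxed);
//! }
//! ```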
//!
//! [lock-free]: https://en.wikipedia.org/wiki/Non-blocking_algorithm
//!
//! # Examples
//!
//! A simple spinlock:
//!
//! ```
//! use std::sync::Arc;
//! use std::sync::atomic::{AtomicUsize, Ordering};
//! use std::{hint, thread};
//!
//! fn main() {
//!     let spinlock = Arc::new(AtomicUsize::new(1));
//!
//!     let spinlock_clone = Arc::clone(&spinlock);
//!     let thread = thread::spawn(move || {
//!         spinlock_clone.store(0, Ordering::SeqCst);
//!     });
//!
//!     // Wait for the other thread to release the lock
//!     while spinlock.load(Ordering::SeqCst) != 0 {
//!         hint::spin_loop();
//!     }
//!
//!     if let Err(panic) = thread.join() {
//!         println!("Thread had an error: {panic:?}");
//!     }
//! }
//! ```
//!
//! Keep a global count of live threads:
//!
//! ```
//! use std::sync::atomic::{AtomicUsize, Ordering};
//!
//! static GLOBAL_THREAD_COUNT: AtomicUsize = AtomicUsize::new(0);
//!
//! let old_thread_count = GLOBAL_THREAD_COUNT.fetch_add(1, Ordering::SeqCst);
//! println!("live threads: {}", old_thread_count + 1);
//! ```

#![stable(feature = "rust1", since = "1.0.0")]
#![cfg_attr(not(target_has_atomic_load_store = "8"), allow(dead_code))]
#![cfg_attr(not(target_has_atomic_load_store = "8"), allow(unused_imports))]
#![rustc_diagnostic_item = "atomic_mod"]

use self::Ordering::*;

use crate::cell::UnsafeCell;
use crate::fmt;
use crate::intrinsics;

use crate::hint::spin_loop;

/// A boolean type which can be safely shared between threads.
///
/// This type has the same in-memory representation as a [`bool`].
///
/// **Note**: This type is only available on platforms that support atomic
/// loads and stores of `u8`.
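///
/// As a minimal sketch of the representation claim above (not an exhaustive layout
/// guarantee), the size and alignment line up with `bool`:
///
/// ```
/// use std::mem::{align_of, size_of};
/// use std::sync::atomic::AtomicBool;
///
/// assert_eq!(size_of::<AtomicBool>(), size_of::<bool>());
/// assert_eq!(align_of::<AtomicBool>(), align_of::<bool>());
/// ```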
#[cfg(target_has_atomic_load_store = "8")]
#[stable(feature = "rust1", since = "1.0.0")]
#[rustc_diagnostic_item = "AtomicBool"]
#[repr(C, align(1))]
pub struct AtomicBool {
    v: UnsafeCell<u8>,
}

#[cfg(target_has_atomic_load_store = "8")]
#[stable(feature = "rust1", since = "1.0.0")]
#[rustc_const_unstable(feature = "const_default_impls", issue = "87864")]
impl const Default for AtomicBool {
    /// Creates an `AtomicBool` initialized to `false`.
    #[inline]
    fn default() -> Self {
        Self::new(false)
    }
}

// Send is implicitly implemented for AtomicBool.
#[cfg(target_has_atomic_load_store = "8")]
#[stable(feature = "rust1", since = "1.0.0")]
unsafe impl Sync for AtomicBool {}

/// A raw pointer type which can be safely shared between threads.
///
/// This type has the same in-memory representation as a `*mut T`.
///
/// **Note**: This type is only available on platforms that support atomic
/// loads and stores of pointers. Its size depends on the target pointer's size.
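///
/// A brief usage sketch (the pointer targets here are illustrative):
///
/// ```
/// use std::sync::atomic::{AtomicPtr, Ordering};
///
/// let mut data = 5;
/// let shared = AtomicPtr::new(&mut data);
/// // A new pointer can be published atomically through a shared reference.
/// let mut other = 10;
/// shared.store(&mut other, Ordering::Release);
/// assert_eq!(shared.load(Ordering::Acquire), &mut other as *mut i32);
/// ```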
#[cfg(target_has_atomic_load_store = "ptr")]
#[stable(feature = "rust1", since = "1.0.0")]
#[cfg_attr(not(test), rustc_diagnostic_item = "AtomicPtr")]
#[cfg_attr(target_pointer_width = "16", repr(C, align(2)))]
#[cfg_attr(target_pointer_width = "32", repr(C, align(4)))]
#[cfg_attr(target_pointer_width = "64", repr(C, align(8)))]
pub struct AtomicPtr<T> {
    p: UnsafeCell<*mut T>,
}

#[cfg(target_has_atomic_load_store = "ptr")]
#[stable(feature = "rust1", since = "1.0.0")]
#[rustc_const_unstable(feature = "const_default_impls", issue = "87864")]
impl<T> const Default for AtomicPtr<T> {
    /// Creates a null `AtomicPtr<T>`.
    fn default() -> AtomicPtr<T> {
        AtomicPtr::new(crate::ptr::null_mut())
    }
}

#[cfg(target_has_atomic_load_store = "ptr")]
#[stable(feature = "rust1", since = "1.0.0")]
unsafe impl<T> Send for AtomicPtr<T> {}
#[cfg(target_has_atomic_load_store = "ptr")]
#[stable(feature = "rust1", since = "1.0.0")]
unsafe impl<T> Sync for AtomicPtr<T> {}

/// Atomic memory orderings
///
/// Memory orderings specify the way atomic operations synchronize memory.
/// In its weakest [`Ordering::Relaxed`], only the memory directly touched by the
/// operation is synchronized. On the other hand, a store-load pair of [`Ordering::SeqCst`]
/// operations synchronize other memory while additionally preserving a total order of such
/// operations across all threads.
///
/// Rust's memory orderings are [the same as those of
/// C++20](https://en.cppreference.com/w/cpp/atomic/memory_order).
///
/// For more information see the [nomicon].
///
/// [nomicon]: ../../../nomicon/atomics.html
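///
/// As a brief sketch of the release/acquire pairing (see the [`Release`] and
/// [`Acquire`] variants below), one thread can publish data that another thread
/// then observes; the names `DATA` and `READY` are illustrative:
///
/// ```
/// use std::sync::atomic::{AtomicBool, AtomicUsize, Ordering};
/// use std::thread;
///
/// static DATA: AtomicUsize = AtomicUsize::new(0);
/// static READY: AtomicBool = AtomicBool::new(false);
///
/// let t = thread::spawn(|| {
///     DATA.store(42, Ordering::Relaxed);
///     READY.store(true, Ordering::Release); // everything before is published
/// });
///
/// while !READY.load(Ordering::Acquire) {} // once observed, `DATA` is visible
/// assert_eq!(DATA.load(Ordering::Relaxed), 42);
/// t.join().unwrap();
/// ```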
#[stable(feature = "rust1", since = "1.0.0")]
#[derive(Copy, Clone, Debug, Eq, PartialEq, Hash)]
#[non_exhaustive]
#[rustc_diagnostic_item = "Ordering"]
pub enum Ordering {
    /// No ordering constraints, only atomic operations.
    ///
    /// Corresponds to [`memory_order_relaxed`] in C++20.
    ///
    /// [`memory_order_relaxed`]: https://en.cppreference.com/w/cpp/atomic/memory_order#Relaxed_ordering
    #[stable(feature = "rust1", since = "1.0.0")]
    Relaxed,
    /// When coupled with a store, all previous operations become ordered
    /// before any load of this value with [`Acquire`] (or stronger) ordering.
    /// In particular, all previous writes become visible to all threads
    /// that perform an [`Acquire`] (or stronger) load of this value.
    ///
    /// Notice that using this ordering for an operation that combines loads
    /// and stores leads to a [`Relaxed`] load operation!
    ///
    /// This ordering is only applicable for operations that can perform a store.
    ///
    /// Corresponds to [`memory_order_release`] in C++20.
    ///
    /// [`memory_order_release`]: https://en.cppreference.com/w/cpp/atomic/memory_order#Release-Acquire_ordering
    #[stable(feature = "rust1", since = "1.0.0")]
    Release,
    /// When coupled with a load, if the loaded value was written by a store operation with
    /// [`Release`] (or stronger) ordering, then all subsequent operations
    /// become ordered after that store. In particular, all subsequent loads will see data
    /// written before the store.
    ///
    /// Notice that using this ordering for an operation that combines loads
    /// and stores leads to a [`Relaxed`] store operation!
    ///
    /// This ordering is only applicable for operations that can perform a load.
    ///
    /// Corresponds to [`memory_order_acquire`] in C++20.
    ///
    /// [`memory_order_acquire`]: https://en.cppreference.com/w/cpp/atomic/memory_order#Release-Acquire_ordering
    #[stable(feature = "rust1", since = "1.0.0")]
    Acquire,
    /// Has the effects of both [`Acquire`] and [`Release`] together:
    /// For loads it uses [`Acquire`] ordering. For stores it uses the [`Release`] ordering.
    ///
    /// Notice that in the case of `compare_and_swap`, it is possible that the operation ends up
    /// not performing any store and hence it has just [`Acquire`] ordering. However,
    /// `AcqRel` will never perform [`Relaxed`] accesses.
    ///
    /// This ordering is only applicable for operations that combine both loads and stores.
    ///
    /// Corresponds to [`memory_order_acq_rel`] in C++20.
    ///
    /// [`memory_order_acq_rel`]: https://en.cppreference.com/w/cpp/atomic/memory_order#Release-Acquire_ordering
    #[stable(feature = "rust1", since = "1.0.0")]
    AcqRel,
    /// Like [`Acquire`]/[`Release`]/[`AcqRel`] (for load, store, and load-with-store
    /// operations, respectively) with the additional guarantee that all threads see all
    /// sequentially consistent operations in the same order.
    ///
    /// Corresponds to [`memory_order_seq_cst`] in C++20.
    ///
    /// [`memory_order_seq_cst`]: https://en.cppreference.com/w/cpp/atomic/memory_order#Sequentially-consistent_ordering
    #[stable(feature = "rust1", since = "1.0.0")]
    SeqCst,
}

/// An [`AtomicBool`] initialized to `false`.
#[cfg(target_has_atomic_load_store = "8")]
#[stable(feature = "rust1", since = "1.0.0")]
#[deprecated(
    since = "1.34.0",
    note = "the `new` function is now preferred",
    suggestion = "AtomicBool::new(false)"
)]
pub const ATOMIC_BOOL_INIT: AtomicBool = AtomicBool::new(false);

#[cfg(target_has_atomic_load_store = "8")]
impl AtomicBool {
    /// Creates a new `AtomicBool`.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::AtomicBool;
    ///
    /// let atomic_true = AtomicBool::new(true);
    /// let atomic_false = AtomicBool::new(false);
    /// ```
    #[inline]
    #[stable(feature = "rust1", since = "1.0.0")]
    #[rustc_const_stable(feature = "const_atomic_new", since = "1.24.0")]
    #[must_use]
    pub const fn new(v: bool) -> AtomicBool {
        AtomicBool { v: UnsafeCell::new(v as u8) }
    }

    /// Returns a mutable reference to the underlying [`bool`].
    ///
    /// This is safe because the mutable reference guarantees that no other threads are
    /// concurrently accessing the atomic data.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let mut some_bool = AtomicBool::new(true);
    /// assert_eq!(*some_bool.get_mut(), true);
    /// *some_bool.get_mut() = false;
    /// assert_eq!(some_bool.load(Ordering::SeqCst), false);
    /// ```
    #[inline]
    #[stable(feature = "atomic_access", since = "1.15.0")]
    pub fn get_mut(&mut self) -> &mut bool {
        // SAFETY: the mutable reference guarantees unique ownership.
        unsafe { &mut *(self.v.get() as *mut bool) }
    }

    /// Get atomic access to a `&mut bool`.
    ///
    /// # Examples
    ///
    /// ```
    /// #![feature(atomic_from_mut)]
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let mut some_bool = true;
    /// let a = AtomicBool::from_mut(&mut some_bool);
    /// a.store(false, Ordering::Relaxed);
    /// assert_eq!(some_bool, false);
    /// ```
    #[inline]
    #[cfg(target_has_atomic_equal_alignment = "8")]
    #[unstable(feature = "atomic_from_mut", issue = "76314")]
    pub fn from_mut(v: &mut bool) -> &mut Self {
        // SAFETY: the mutable reference guarantees unique ownership, and
        // alignment of both `bool` and `Self` is 1.
        unsafe { &mut *(v as *mut bool as *mut Self) }
    }

    /// Get non-atomic access to a `&mut [AtomicBool]` slice.
    ///
    /// This is safe because the mutable reference guarantees that no other threads are
    /// concurrently accessing the atomic data.
    ///
    /// # Examples
    ///
    /// ```
    /// #![feature(atomic_from_mut, inline_const)]
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let mut some_bools = [const { AtomicBool::new(false) }; 10];
    ///
    /// let view: &mut [bool] = AtomicBool::get_mut_slice(&mut some_bools);
    /// assert_eq!(view, [false; 10]);
    /// view[..5].copy_from_slice(&[true; 5]);
    ///
    /// std::thread::scope(|s| {
    ///     for t in &some_bools[..5] {
    ///         s.spawn(move || assert_eq!(t.load(Ordering::Relaxed), true));
    ///     }
    ///
    ///     for f in &some_bools[5..] {
    ///         s.spawn(move || assert_eq!(f.load(Ordering::Relaxed), false));
    ///     }
    /// });
    /// ```
    #[inline]
    #[unstable(feature = "atomic_from_mut", issue = "76314")]
    pub fn get_mut_slice(this: &mut [Self]) -> &mut [bool] {
        // SAFETY: the mutable reference guarantees unique ownership.
        unsafe { &mut *(this as *mut [Self] as *mut [bool]) }
    }

    /// Get atomic access to a `&mut [bool]` slice.
    ///
    /// # Examples
    ///
    /// ```
    /// #![feature(atomic_from_mut)]
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let mut some_bools = [false; 10];
    /// let a = &*AtomicBool::from_mut_slice(&mut some_bools);
    /// std::thread::scope(|s| {
    ///     for i in 0..a.len() {
    ///         s.spawn(move || a[i].store(true, Ordering::Relaxed));
    ///     }
    /// });
    /// assert_eq!(some_bools, [true; 10]);
    /// ```
    #[inline]
    #[cfg(target_has_atomic_equal_alignment = "8")]
    #[unstable(feature = "atomic_from_mut", issue = "76314")]
    pub fn from_mut_slice(v: &mut [bool]) -> &mut [Self] {
        // SAFETY: the mutable reference guarantees unique ownership, and
        // alignment of both `bool` and `Self` is 1.
        unsafe { &mut *(v as *mut [bool] as *mut [Self]) }
    }

    /// Consumes the atomic and returns the contained value.
    ///
    /// This is safe because passing `self` by value guarantees that no other threads are
    /// concurrently accessing the atomic data.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::AtomicBool;
    ///
    /// let some_bool = AtomicBool::new(true);
    /// assert_eq!(some_bool.into_inner(), true);
    /// ```
    #[inline]
    #[stable(feature = "atomic_access", since = "1.15.0")]
    #[rustc_const_unstable(feature = "const_cell_into_inner", issue = "78729")]
    pub const fn into_inner(self) -> bool {
        self.v.into_inner() != 0
    }

    /// Loads a value from the bool.
    ///
    /// `load` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
    ///
    /// # Panics
    ///
    /// Panics if `order` is [`Release`] or [`AcqRel`].
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let some_bool = AtomicBool::new(true);
    ///
    /// assert_eq!(some_bool.load(Ordering::Relaxed), true);
    /// ```
    #[inline]
    #[stable(feature = "rust1", since = "1.0.0")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn load(&self, order: Ordering) -> bool {
        // SAFETY: any data races are prevented by atomic intrinsics and the raw
        // pointer passed in is valid because we got it from a reference.
        unsafe { atomic_load(self.v.get(), order) != 0 }
    }

    /// Stores a value into the bool.
    ///
    /// `store` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
    ///
    /// # Panics
    ///
    /// Panics if `order` is [`Acquire`] or [`AcqRel`].
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let some_bool = AtomicBool::new(true);
    ///
    /// some_bool.store(false, Ordering::Relaxed);
    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
    /// ```
    #[inline]
    #[stable(feature = "rust1", since = "1.0.0")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn store(&self, val: bool, order: Ordering) {
        // SAFETY: any data races are prevented by atomic intrinsics and the raw
        // pointer passed in is valid because we got it from a reference.
        unsafe {
            atomic_store(self.v.get(), val as u8, order);
        }
    }

    /// Stores a value into the bool, returning the previous value.
    ///
    /// `swap` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. All ordering modes are possible. Note that using
    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
    /// using [`Release`] makes the load part [`Relaxed`].
    ///
    /// **Note:** This method is only available on platforms that support atomic
    /// operations on `u8`.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let some_bool = AtomicBool::new(true);
    ///
    /// assert_eq!(some_bool.swap(false, Ordering::Relaxed), true);
    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
    /// ```
    #[inline]
    #[stable(feature = "rust1", since = "1.0.0")]
    #[cfg(target_has_atomic = "8")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn swap(&self, val: bool, order: Ordering) -> bool {
        // SAFETY: data races are prevented by atomic intrinsics.
        unsafe { atomic_swap(self.v.get(), val as u8, order) != 0 }
    }

    /// Stores a value into the [`bool`] if the current value is the same as the `current` value.
    ///
    /// The return value is always the previous value. If it is equal to `current`, then the value
    /// was updated.
    ///
    /// `compare_and_swap` also takes an [`Ordering`] argument which describes the memory
    /// ordering of this operation. Notice that even when using [`AcqRel`], the operation
    /// might fail and hence just perform an `Acquire` load, but not have `Release` semantics.
    /// Using [`Acquire`] makes the store part of this operation [`Relaxed`] if it
    /// happens, and using [`Release`] makes the load part [`Relaxed`].
    ///
    /// **Note:** This method is only available on platforms that support atomic
    /// operations on `u8`.
    ///
    /// # Migrating to `compare_exchange` and `compare_exchange_weak`
    ///
    /// `compare_and_swap` is equivalent to `compare_exchange` with the following mapping for
    /// memory orderings:
    ///
    /// Original | Success | Failure
    /// -------- | ------- | -------
    /// Relaxed  | Relaxed | Relaxed
    /// Acquire  | Acquire | Acquire
    /// Release  | Release | Relaxed
    /// AcqRel   | AcqRel  | Acquire
    /// SeqCst   | SeqCst  | SeqCst
    ///
    /// `compare_exchange_weak` is allowed to fail spuriously even when the comparison succeeds,
    /// which allows the compiler to generate better assembly code when the compare and swap
    /// is used in a loop.
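    ///
    /// For example, under that mapping a `compare_and_swap` call with [`AcqRel`] can be
    /// rewritten as the following `compare_exchange` call (a sketch; the result is ignored):
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let a = AtomicBool::new(false);
    /// // Per the table: AcqRel on success, Acquire on failure.
    /// let _ = a.compare_exchange(false, true, Ordering::AcqRel, Ordering::Acquire);
    /// ```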
|
|
|
|
|
///
|
2014-11-19 10:35:47 -05:00
|
|
|
/// # Examples
|
2014-05-12 21:30:48 -07:00
|
|
|
///
|
2014-11-19 10:35:47 -05:00
|
|
|
/// ```
|
|
|
|
|
/// use std::sync::atomic::{AtomicBool, Ordering};
|
2014-05-12 21:30:48 -07:00
|
|
|
///
|
2014-11-19 10:35:47 -05:00
|
|
|
/// let some_bool = AtomicBool::new(true);
|
2014-05-12 21:30:48 -07:00
|
|
|
///
|
2015-06-10 16:53:09 +02:00
|
|
|
/// assert_eq!(some_bool.compare_and_swap(true, false, Ordering::Relaxed), true);
|
|
|
|
|
/// assert_eq!(some_bool.load(Ordering::Relaxed), false);
|
|
|
|
|
///
|
|
|
|
|
/// assert_eq!(some_bool.compare_and_swap(true, true, Ordering::Relaxed), false);
|
|
|
|
|
/// assert_eq!(some_bool.load(Ordering::Relaxed), false);
|
2014-05-12 21:30:48 -07:00
|
|
|
/// ```
|
|
|
|
|
#[inline]
|
2015-01-23 21:48:20 -08:00
|
|
|
#[stable(feature = "rust1", since = "1.0.0")]
|
2022-04-07 21:20:32 -04:00
|
|
|
#[deprecated(
|
2020-11-20 21:45:51 +01:00
|
|
|
since = "1.50.0",
|
2022-04-07 21:20:32 -04:00
|
|
|
note = "Use `compare_exchange` or `compare_exchange_weak` instead"
|
2020-11-20 21:45:51 +01:00
|
|
|
)]
|
2019-10-08 17:09:23 +01:00
|
|
|
#[cfg(target_has_atomic = "8")]
|
2022-07-20 16:34:24 -04:00
|
|
|
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
|
2015-06-29 21:44:40 +03:00
|
|
|
pub fn compare_and_swap(&self, current: bool, new: bool, order: Ordering) -> bool {
|
2016-03-14 11:57:50 +01:00
|
|
|
match self.compare_exchange(current, new, order, strongest_failure_ordering(order)) {
|
|
|
|
|
Ok(x) => x,
|
|
|
|
|
Err(x) => x,
|
|
|
|
|
}
|
2016-01-17 05:11:31 +00:00
|
|
|
}
|
|
|
|
|
|
2017-10-05 23:20:58 +02:00
|
|
|
/// Stores a value into the [`bool`] if the current value is the same as the `current` value.
|
2016-01-17 05:11:31 +00:00
|
|
|
///
|
2016-03-14 11:57:50 +01:00
|
|
|
/// The return value is a result indicating whether the new value was written and containing
|
2016-06-02 19:36:10 +02:00
|
|
|
/// the previous value. On success this value is guaranteed to be equal to `current`.
|
2016-01-17 05:11:31 +00:00
|
|
|
///
|
2017-03-29 00:52:16 -04:00
|
|
|
/// `compare_exchange` takes two [`Ordering`] arguments to describe the memory
|
2020-11-22 20:36:29 +01:00
|
|
|
/// ordering of this operation. `success` describes the required ordering for the
|
|
|
|
|
/// read-modify-write operation that takes place if the comparison with `current` succeeds.
|
|
|
|
|
/// `failure` describes the required ordering for the load operation that takes place when
|
|
|
|
|
/// the comparison fails. Using [`Acquire`] as success ordering makes the store part
|
2018-08-07 11:57:43 +02:00
|
|
|
/// of this operation [`Relaxed`], and using [`Release`] makes the successful load
|
2022-06-22 13:15:03 +02:00
|
|
|
/// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
|
2018-08-07 11:57:43 +02:00
|
|
|
///
|
2020-06-05 19:07:24 +02:00
|
|
|
/// **Note:** This method is only available on platforms that support atomic
|
|
|
|
|
/// operations on `u8`.
|
2016-12-26 10:51:25 +01:00
|
|
|
///
|
2016-01-17 05:11:31 +00:00
|
|
|
/// # Examples
|
|
|
|
|
///
|
|
|
|
|
/// ```
|
|
|
|
|
/// use std::sync::atomic::{AtomicBool, Ordering};
|
|
|
|
|
///
|
|
|
|
|
/// let some_bool = AtomicBool::new(true);
|
|
|
|
|
///
|
|
|
|
|
/// assert_eq!(some_bool.compare_exchange(true,
|
|
|
|
|
/// false,
|
|
|
|
|
/// Ordering::Acquire,
|
|
|
|
|
/// Ordering::Relaxed),
|
2016-03-14 11:57:50 +01:00
|
|
|
/// Ok(true));
|
2016-01-17 05:11:31 +00:00
|
|
|
/// assert_eq!(some_bool.load(Ordering::Relaxed), false);
|
|
|
|
|
///
|
|
|
|
|
/// assert_eq!(some_bool.compare_exchange(true, true,
|
|
|
|
|
/// Ordering::SeqCst,
|
|
|
|
|
/// Ordering::Acquire),
|
2016-03-14 11:57:50 +01:00
|
|
|
/// Err(false));
|
2016-01-17 05:11:31 +00:00
|
|
|
/// assert_eq!(some_bool.load(Ordering::Relaxed), false);
|
|
|
|
|
/// ```
|
|
|
|
|
#[inline]
|
std: Stabilize APIs for the 1.10 release
This commit applies the FCP decisions made by the libs team for the 1.10 cycle,
including both new stabilizations and deprecations. Specifically, the list of
APIs is:
Stabilized:
* `os::windows::fs::OpenOptionsExt::access_mode`
* `os::windows::fs::OpenOptionsExt::share_mode`
* `os::windows::fs::OpenOptionsExt::custom_flags`
* `os::windows::fs::OpenOptionsExt::attributes`
* `os::windows::fs::OpenOptionsExt::security_qos_flags`
* `os::unix::fs::OpenOptionsExt::custom_flags`
* `sync::Weak::new`
* `Default for sync::Weak`
* `panic::set_hook`
* `panic::take_hook`
* `panic::PanicInfo`
* `panic::PanicInfo::payload`
* `panic::PanicInfo::location`
* `panic::Location`
* `panic::Location::file`
* `panic::Location::line`
* `ffi::CStr::from_bytes_with_nul`
* `ffi::CStr::from_bytes_with_nul_unchecked`
* `ffi::FromBytesWithNulError`
* `fs::Metadata::modified`
* `fs::Metadata::accessed`
* `fs::Metadata::created`
* `sync::atomic::Atomic{Usize,Isize,Bool,Ptr}::compare_exchange`
* `sync::atomic::Atomic{Usize,Isize,Bool,Ptr}::compare_exchange_weak`
* `collections::{btree,hash}_map::{Occupied,Vacant,}Entry::key`
* `os::unix::net::{UnixStream, UnixListener, UnixDatagram, SocketAddr}`
* `SocketAddr::is_unnamed`
* `SocketAddr::as_pathname`
* `UnixStream::connect`
* `UnixStream::pair`
* `UnixStream::try_clone`
* `UnixStream::local_addr`
* `UnixStream::peer_addr`
* `UnixStream::set_read_timeout`
* `UnixStream::set_write_timeout`
* `UnixStream::read_timeout`
* `UnixStream::write_Timeout`
* `UnixStream::set_nonblocking`
* `UnixStream::take_error`
* `UnixStream::shutdown`
* Read/Write/RawFd impls for `UnixStream`
* `UnixListener::bind`
* `UnixListener::accept`
* `UnixListener::try_clone`
* `UnixListener::local_addr`
* `UnixListener::set_nonblocking`
* `UnixListener::take_error`
* `UnixListener::incoming`
* RawFd impls for `UnixListener`
* `UnixDatagram::bind`
* `UnixDatagram::unbound`
* `UnixDatagram::pair`
* `UnixDatagram::connect`
* `UnixDatagram::try_clone`
* `UnixDatagram::local_addr`
* `UnixDatagram::peer_addr`
* `UnixDatagram::recv_from`
* `UnixDatagram::recv`
* `UnixDatagram::send_to`
* `UnixDatagram::send`
* `UnixDatagram::set_read_timeout`
* `UnixDatagram::set_write_timeout`
* `UnixDatagram::read_timeout`
* `UnixDatagram::write_timeout`
* `UnixDatagram::set_nonblocking`
* `UnixDatagram::take_error`
* `UnixDatagram::shutdown`
* RawFd impls for `UnixDatagram`
* `{BTree,Hash}Map::values_mut`
* `<[_]>::binary_search_by_key`
Deprecated:
* `StaticCondvar` - this, and all other static synchronization primitives
below, are usable today through the lazy-static crate on
stable Rust today. Additionally, we'd like the non-static
versions to be directly usable in a static context one day,
so they're unlikely to be the final forms of the APIs in any
case.
* `CONDVAR_INIT`
* `StaticMutex`
* `MUTEX_INIT`
* `StaticRwLock`
* `RWLOCK_INIT`
* `iter::Peekable::is_empty`
Closes #27717
Closes #27720
cc #27784 (but encode methods still exist)
Closes #30014
Closes #30425
Closes #30449
Closes #31190
Closes #31399
Closes #31767
Closes #32111
Closes #32281
Closes #32312
Closes #32551
Closes #33018
2016-05-17 11:57:07 -07:00
|
|
|
#[stable(feature = "extended_compare_and_swap", since = "1.10.0")]
|
2020-11-22 18:56:47 +01:00
|
|
|
#[doc(alias = "compare_and_swap")]
|
2019-10-08 17:09:23 +01:00
|
|
|
#[cfg(target_has_atomic = "8")]
|
2022-07-20 16:34:24 -04:00
|
|
|
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
|
2016-01-17 05:11:31 +00:00
|
|
|
pub fn compare_exchange(
|
|
|
|
|
&self,
|
|
|
|
|
current: bool,
|
|
|
|
|
new: bool,
|
|
|
|
|
success: Ordering,
|
2016-10-16 22:11:01 +05:30
|
|
|
failure: Ordering,
|
|
|
|
|
) -> Result<bool, bool> {
|
2019-12-26 12:56:34 -08:00
|
|
|
// SAFETY: data races are prevented by atomic intrinsics.
|
2016-10-16 22:11:01 +05:30
|
|
|
match unsafe {
|
|
|
|
|
atomic_compare_exchange(self.v.get(), current as u8, new as u8, success, failure)
|
|
|
|
|
} {
|
2016-05-12 16:12:05 +01:00
|
|
|
Ok(x) => Ok(x != 0),
|
|
|
|
|
Err(x) => Err(x != 0),
|
2016-03-14 11:57:50 +01:00
|
|
|
}
|
2016-01-17 05:11:31 +00:00
|
|
|
}
|
|
|
|
|
|
2017-10-05 23:20:58 +02:00
|
|
|
/// Stores a value into the [`bool`] if the current value is the same as the `current` value.
|
2016-01-17 05:11:31 +00:00
|
|
|
///
|
2020-08-28 17:24:47 +02:00
|
|
|
/// Unlike [`AtomicBool::compare_exchange`], this function is allowed to spuriously fail even when the
|
2016-01-17 05:11:31 +00:00
|
|
|
/// comparison succeeds, which can result in more efficient code on some platforms. The
|
2016-03-14 11:57:50 +01:00
|
|
|
/// return value is a result indicating whether the new value was written and containing the
|
|
|
|
|
/// previous value.
|
2016-01-17 05:11:31 +00:00
|
|
|
///
|
2017-03-29 00:52:16 -04:00
|
|
|
/// `compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
|
2020-11-22 20:36:29 +01:00
|
|
|
/// ordering of this operation. `success` describes the required ordering for the
|
|
|
|
|
/// read-modify-write operation that takes place if the comparison with `current` succeeds.
|
|
|
|
|
/// `failure` describes the required ordering for the load operation that takes place when
|
|
|
|
|
/// the comparison fails. Using [`Acquire`] as success ordering makes the store part
|
2018-08-07 11:57:43 +02:00
|
|
|
/// of this operation [`Relaxed`], and using [`Release`] makes the successful load
|
2022-06-22 13:15:03 +02:00
|
|
|
/// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
|
2016-12-26 10:51:25 +01:00
|
|
|
///
|
2020-06-05 19:07:24 +02:00
|
|
|
/// **Note:** This method is only available on platforms that support atomic
|
|
|
|
|
/// operations on `u8`.
|
|
|
|
|
///
|
2016-01-17 05:11:31 +00:00
|
|
|
/// # Examples
|
|
|
|
|
///
|
|
|
|
|
/// ```
|
|
|
|
|
/// use std::sync::atomic::{AtomicBool, Ordering};
|
|
|
|
|
///
|
|
|
|
|
/// let val = AtomicBool::new(false);
|
|
|
|
|
///
|
|
|
|
|
/// let new = true;
|
|
|
|
|
/// let mut old = val.load(Ordering::Relaxed);
|
|
|
|
|
/// loop {
|
2016-03-14 11:57:50 +01:00
|
|
|
/// match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
|
|
|
|
|
/// Ok(_) => break,
|
|
|
|
|
/// Err(x) => old = x,
|
2016-01-17 05:11:31 +00:00
|
|
|
/// }
|
|
|
|
|
/// }
|
|
|
|
|
/// ```
|
|
|
|
|
#[inline]
|
std: Stabilize APIs for the 1.10 release
This commit applies the FCP decisions made by the libs team for the 1.10 cycle,
including both new stabilizations and deprecations. Specifically, the list of
APIs is:
Stabilized:
* `os::windows::fs::OpenOptionsExt::access_mode`
* `os::windows::fs::OpenOptionsExt::share_mode`
* `os::windows::fs::OpenOptionsExt::custom_flags`
* `os::windows::fs::OpenOptionsExt::attributes`
* `os::windows::fs::OpenOptionsExt::security_qos_flags`
* `os::unix::fs::OpenOptionsExt::custom_flags`
* `sync::Weak::new`
* `Default for sync::Weak`
* `panic::set_hook`
* `panic::take_hook`
* `panic::PanicInfo`
* `panic::PanicInfo::payload`
* `panic::PanicInfo::location`
* `panic::Location`
* `panic::Location::file`
* `panic::Location::line`
* `ffi::CStr::from_bytes_with_nul`
* `ffi::CStr::from_bytes_with_nul_unchecked`
* `ffi::FromBytesWithNulError`
* `fs::Metadata::modified`
* `fs::Metadata::accessed`
* `fs::Metadata::created`
* `sync::atomic::Atomic{Usize,Isize,Bool,Ptr}::compare_exchange`
* `sync::atomic::Atomic{Usize,Isize,Bool,Ptr}::compare_exchange_weak`
* `collections::{btree,hash}_map::{Occupied,Vacant,}Entry::key`
* `os::unix::net::{UnixStream, UnixListener, UnixDatagram, SocketAddr}`
* `SocketAddr::is_unnamed`
* `SocketAddr::as_pathname`
* `UnixStream::connect`
* `UnixStream::pair`
* `UnixStream::try_clone`
* `UnixStream::local_addr`
* `UnixStream::peer_addr`
* `UnixStream::set_read_timeout`
* `UnixStream::set_write_timeout`
* `UnixStream::read_timeout`
* `UnixStream::write_Timeout`
* `UnixStream::set_nonblocking`
* `UnixStream::take_error`
* `UnixStream::shutdown`
* Read/Write/RawFd impls for `UnixStream`
* `UnixListener::bind`
* `UnixListener::accept`
* `UnixListener::try_clone`
* `UnixListener::local_addr`
* `UnixListener::set_nonblocking`
* `UnixListener::take_error`
* `UnixListener::incoming`
* RawFd impls for `UnixListener`
* `UnixDatagram::bind`
* `UnixDatagram::unbound`
* `UnixDatagram::pair`
* `UnixDatagram::connect`
* `UnixDatagram::try_clone`
* `UnixDatagram::local_addr`
* `UnixDatagram::peer_addr`
* `UnixDatagram::recv_from`
* `UnixDatagram::recv`
* `UnixDatagram::send_to`
* `UnixDatagram::send`
* `UnixDatagram::set_read_timeout`
* `UnixDatagram::set_write_timeout`
* `UnixDatagram::read_timeout`
* `UnixDatagram::write_timeout`
* `UnixDatagram::set_nonblocking`
* `UnixDatagram::take_error`
* `UnixDatagram::shutdown`
* RawFd impls for `UnixDatagram`
* `{BTree,Hash}Map::values_mut`
* `<[_]>::binary_search_by_key`
Deprecated:
* `StaticCondvar` - this, and all other static synchronization primitives
below, are usable today through the lazy-static crate on
stable Rust today. Additionally, we'd like the non-static
versions to be directly usable in a static context one day,
so they're unlikely to be the final forms of the APIs in any
case.
* `CONDVAR_INIT`
* `StaticMutex`
* `MUTEX_INIT`
* `StaticRwLock`
* `RWLOCK_INIT`
* `iter::Peekable::is_empty`
Closes #27717
Closes #27720
cc #27784 (but encode methods still exist)
Closes #30014
Closes #30425
Closes #30449
Closes #31190
Closes #31399
Closes #31767
Closes #32111
Closes #32281
Closes #32312
Closes #32551
Closes #33018
2016-05-17 11:57:07 -07:00
|
|
|
#[stable(feature = "extended_compare_and_swap", since = "1.10.0")]
|
2020-11-22 18:56:47 +01:00
|
|
|
#[doc(alias = "compare_and_swap")]
|
2019-10-08 17:09:23 +01:00
|
|
|
#[cfg(target_has_atomic = "8")]
|
2022-07-20 16:34:24 -04:00
|
|
|
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
|
2016-01-17 05:11:31 +00:00
|
|
|
pub fn compare_exchange_weak(
|
|
|
|
|
&self,
|
|
|
|
|
current: bool,
|
|
|
|
|
new: bool,
|
|
|
|
|
success: Ordering,
|
2016-10-16 22:11:01 +05:30
|
|
|
failure: Ordering,
|
|
|
|
|
) -> Result<bool, bool> {
|
2019-12-26 12:56:34 -08:00
|
|
|
// SAFETY: data races are prevented by atomic intrinsics.
|
2016-10-16 22:11:01 +05:30
|
|
|
match unsafe {
|
|
|
|
|
atomic_compare_exchange_weak(self.v.get(), current as u8, new as u8, success, failure)
|
|
|
|
|
} {
|
2016-05-12 16:12:05 +01:00
|
|
|
Ok(x) => Ok(x != 0),
|
|
|
|
|
Err(x) => Err(x != 0),
|
2016-03-14 11:57:50 +01:00
|
|
|
}
|
2014-05-12 21:30:48 -07:00
|
|
|
}
|
|
|
|
|
|
2014-11-19 10:35:47 -05:00
|
|
|
/// Logical "and" with a boolean value.
|
|
|
|
|
///
|
|
|
|
|
/// Performs a logical "and" operation on the current value and the argument `val`, and sets
|
|
|
|
|
/// the new value to the result.
|
2014-05-12 21:30:48 -07:00
|
|
|
///
|
|
|
|
|
/// Returns the previous value.
|
|
|
|
|
///
|
2018-08-07 11:57:43 +02:00
|
|
|
/// `fetch_and` takes an [`Ordering`] argument which describes the memory ordering
|
|
|
|
|
/// of this operation. All ordering modes are possible. Note that using
|
|
|
|
|
/// [`Acquire`] makes the store part of this operation [`Relaxed`], and
|
|
|
|
|
/// using [`Release`] makes the load part [`Relaxed`].
|
|
|
|
|
///
|
2020-06-05 19:07:24 +02:00
|
|
|
/// **Note:** This method is only available on platforms that support atomic
|
|
|
|
|
/// operations on `u8`.
|
|
|
|
|
///
|
2014-05-12 21:30:48 -07:00
|
|
|
/// # Examples
|
|
|
|
|
///
|
|
|
|
|
/// ```
|
2015-01-01 23:53:35 -08:00
|
|
|
/// use std::sync::atomic::{AtomicBool, Ordering};
|
2014-05-12 21:30:48 -07:00
|
|
|
///
|
|
|
|
|
/// let foo = AtomicBool::new(true);
|
2015-06-10 16:53:09 +02:00
|
|
|
/// assert_eq!(foo.fetch_and(false, Ordering::SeqCst), true);
|
|
|
|
|
/// assert_eq!(foo.load(Ordering::SeqCst), false);
|
2014-05-12 21:30:48 -07:00
|
|
|
///
|
|
|
|
|
/// let foo = AtomicBool::new(true);
|
2015-06-10 16:53:09 +02:00
|
|
|
/// assert_eq!(foo.fetch_and(true, Ordering::SeqCst), true);
|
|
|
|
|
/// assert_eq!(foo.load(Ordering::SeqCst), true);
|
2014-05-12 21:30:48 -07:00
|
|
|
///
|
|
|
|
|
/// let foo = AtomicBool::new(false);
|
2015-06-10 16:53:09 +02:00
|
|
|
/// assert_eq!(foo.fetch_and(false, Ordering::SeqCst), false);
|
|
|
|
|
/// assert_eq!(foo.load(Ordering::SeqCst), false);
|
2014-05-12 21:30:48 -07:00
|
|
|
/// ```
|
|
|
|
|
#[inline]
|
2015-01-23 21:48:20 -08:00
|
|
|
#[stable(feature = "rust1", since = "1.0.0")]
|
2019-10-08 17:09:23 +01:00
|
|
|
#[cfg(target_has_atomic = "8")]
|
2022-07-20 16:34:24 -04:00
|
|
|
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
|
2014-05-12 21:30:48 -07:00
|
|
|
pub fn fetch_and(&self, val: bool, order: Ordering) -> bool {
|
2019-12-26 12:56:34 -08:00
|
|
|
// SAFETY: data races are prevented by atomic intrinsics.
|
2016-05-12 16:12:05 +01:00
|
|
|
unsafe { atomic_and(self.v.get(), val as u8, order) != 0 }
|
2014-05-12 21:30:48 -07:00
|
|
|
}
|
|
|
|
|
|
2014-11-19 10:35:47 -05:00
|
|
|
/// Logical "nand" with a boolean value.
|
|
|
|
|
///
|
|
|
|
|
/// Performs a logical "nand" operation on the current value and the argument `val`, and sets
|
|
|
|
|
/// the new value to the result.
|
2014-05-12 21:30:48 -07:00
|
|
|
///
|
|
|
|
|
/// Returns the previous value.
|
|
|
|
|
///
|
2018-08-07 11:57:43 +02:00
|
|
|
/// `fetch_nand` takes an [`Ordering`] argument which describes the memory ordering
|
|
|
|
|
/// of this operation. All ordering modes are possible. Note that using
|
|
|
|
|
/// [`Acquire`] makes the store part of this operation [`Relaxed`], and
|
|
|
|
|
/// using [`Release`] makes the load part [`Relaxed`].
|
|
|
|
|
///
|
2020-06-05 19:07:24 +02:00
|
|
|
/// **Note:** This method is only available on platforms that support atomic
|
|
|
|
|
/// operations on `u8`.
|
|
|
|
|
///
|
2014-05-12 21:30:48 -07:00
|
|
|
/// # Examples
|
|
|
|
|
///
|
|
|
|
|
/// ```
|
2015-01-01 23:53:35 -08:00
|
|
|
/// use std::sync::atomic::{AtomicBool, Ordering};
|
2014-05-12 21:30:48 -07:00
|
|
|
///
|
|
|
|
|
/// let foo = AtomicBool::new(true);
|
2015-06-10 16:53:09 +02:00
|
|
|
/// assert_eq!(foo.fetch_nand(false, Ordering::SeqCst), true);
|
|
|
|
|
/// assert_eq!(foo.load(Ordering::SeqCst), true);
|
2014-05-12 21:30:48 -07:00
|
|
|
///
|
|
|
|
|
/// let foo = AtomicBool::new(true);
|
2015-06-10 16:53:09 +02:00
|
|
|
/// assert_eq!(foo.fetch_nand(true, Ordering::SeqCst), true);
|
|
|
|
|
/// assert_eq!(foo.load(Ordering::SeqCst) as usize, 0);
|
|
|
|
|
/// assert_eq!(foo.load(Ordering::SeqCst), false);
|
2014-05-12 21:30:48 -07:00
|
|
|
///
|
|
|
|
|
/// let foo = AtomicBool::new(false);
|
2015-06-10 16:53:09 +02:00
|
|
|
/// assert_eq!(foo.fetch_nand(false, Ordering::SeqCst), false);
|
|
|
|
|
/// assert_eq!(foo.load(Ordering::SeqCst), true);
|
2014-05-12 21:30:48 -07:00
|
|
|
/// ```
|
|
|
|
|
#[inline]
|
2015-01-23 21:48:20 -08:00
|
|
|
#[stable(feature = "rust1", since = "1.0.0")]
|
2019-10-08 17:09:23 +01:00
|
|
|
#[cfg(target_has_atomic = "8")]
|
2022-07-20 16:34:24 -04:00
|
|
|
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
|
2014-05-12 21:30:48 -07:00
|
|
|
pub fn fetch_nand(&self, val: bool, order: Ordering) -> bool {
|
2016-05-12 16:12:05 +01:00
|
|
|
// We can't use atomic_nand here because it can result in a bool with
|
|
|
|
|
// an invalid value. This happens because the atomic operation is done
|
|
|
|
|
// with an 8-bit integer internally, which would set the upper 7 bits.
|
2017-04-07 18:04:15 +02:00
|
|
|
// So we just use fetch_xor or swap instead.
|
2017-04-07 17:28:55 +02:00
|
|
|
if val {
|
|
|
|
|
// !(x & true) == !x
|
|
|
|
|
// We must invert the bool.
|
|
|
|
|
self.fetch_xor(true, order)
|
|
|
|
|
} else {
|
|
|
|
|
// !(x & false) == true
|
2017-04-07 18:04:15 +02:00
|
|
|
// We must set the bool to true.
|
|
|
|
|
self.swap(true, order)
|
2016-05-12 16:12:05 +01:00
|
|
|
}
|
2014-05-12 21:30:48 -07:00
|
|
|
}
|
|
|
|
|
|
2014-11-19 10:35:47 -05:00
|
|
|
/// Logical "or" with a boolean value.
|
|
|
|
|
///
|
|
|
|
|
/// Performs a logical "or" operation on the current value and the argument `val`, and sets the
|
|
|
|
|
/// new value to the result.
|
2014-05-12 21:30:48 -07:00
|
|
|
///
|
|
|
|
|
/// Returns the previous value.
|
|
|
|
|
///
|
2018-08-07 11:57:43 +02:00
|
|
|
/// `fetch_or` takes an [`Ordering`] argument which describes the memory ordering
|
|
|
|
|
/// of this operation. All ordering modes are possible. Note that using
|
|
|
|
|
/// [`Acquire`] makes the store part of this operation [`Relaxed`], and
|
|
|
|
|
/// using [`Release`] makes the load part [`Relaxed`].
|
|
|
|
|
///
|
2020-06-05 19:07:24 +02:00
|
|
|
/// **Note:** This method is only available on platforms that support atomic
|
|
|
|
|
/// operations on `u8`.
|
|
|
|
|
///
|
2014-05-12 21:30:48 -07:00
|
|
|
/// # Examples
|
|
|
|
|
///
|
|
|
|
|
/// ```
|
2015-01-01 23:53:35 -08:00
|
|
|
/// use std::sync::atomic::{AtomicBool, Ordering};
|
2014-05-12 21:30:48 -07:00
|
|
|
///
|
|
|
|
|
/// let foo = AtomicBool::new(true);
|
2015-06-10 16:53:09 +02:00
|
|
|
/// assert_eq!(foo.fetch_or(false, Ordering::SeqCst), true);
|
|
|
|
|
/// assert_eq!(foo.load(Ordering::SeqCst), true);
|
2014-05-12 21:30:48 -07:00
|
|
|
///
|
|
|
|
|
/// let foo = AtomicBool::new(true);
|
2015-06-10 16:53:09 +02:00
|
|
|
/// assert_eq!(foo.fetch_or(true, Ordering::SeqCst), true);
|
|
|
|
|
/// assert_eq!(foo.load(Ordering::SeqCst), true);
|
2014-05-12 21:30:48 -07:00
|
|
|
///
|
|
|
|
|
/// let foo = AtomicBool::new(false);
|
2015-06-10 16:53:09 +02:00
|
|
|
/// assert_eq!(foo.fetch_or(false, Ordering::SeqCst), false);
|
|
|
|
|
/// assert_eq!(foo.load(Ordering::SeqCst), false);
|
2014-05-12 21:30:48 -07:00
|
|
|
/// ```
|
|
|
|
|
#[inline]
|
2015-01-23 21:48:20 -08:00
|
|
|
#[stable(feature = "rust1", since = "1.0.0")]
|
2019-10-08 17:09:23 +01:00
|
|
|
#[cfg(target_has_atomic = "8")]
|
2022-07-20 16:34:24 -04:00
|
|
|
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
|
2014-05-12 21:30:48 -07:00
|
|
|
pub fn fetch_or(&self, val: bool, order: Ordering) -> bool {
|
2019-12-26 12:56:34 -08:00
|
|
|
// SAFETY: data races are prevented by atomic intrinsics.
|
2016-05-12 16:12:05 +01:00
|
|
|
unsafe { atomic_or(self.v.get(), val as u8, order) != 0 }
|
2014-05-12 21:30:48 -07:00
|
|
|
}

    /// Logical "xor" with a boolean value.
    ///
    /// Performs a logical "xor" operation on the current value and the argument `val`, and sets
    /// the new value to the result.
    ///
    /// Returns the previous value.
    ///
    /// `fetch_xor` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. All ordering modes are possible. Note that using
    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
    /// using [`Release`] makes the load part [`Relaxed`].
    ///
    /// **Note:** This method is only available on platforms that support atomic
    /// operations on `u8`.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_xor(false, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_xor(true, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    ///
    /// let foo = AtomicBool::new(false);
    /// assert_eq!(foo.fetch_xor(false, Ordering::SeqCst), false);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    /// ```
    #[inline]
    #[stable(feature = "rust1", since = "1.0.0")]
    #[cfg(target_has_atomic = "8")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn fetch_xor(&self, val: bool, order: Ordering) -> bool {
        // SAFETY: data races are prevented by atomic intrinsics.
        unsafe { atomic_xor(self.v.get(), val as u8, order) != 0 }
    }

    /// Logical "not" with a boolean value.
    ///
    /// Performs a logical "not" operation on the current value, and sets
    /// the new value to the result.
    ///
    /// Returns the previous value.
    ///
    /// `fetch_not` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. All ordering modes are possible. Note that using
    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
    /// using [`Release`] makes the load part [`Relaxed`].
    ///
    /// **Note:** This method is only available on platforms that support atomic
    /// operations on `u8`.
    ///
    /// # Examples
    ///
    /// ```
    /// #![feature(atomic_bool_fetch_not)]
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_not(Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    ///
    /// let foo = AtomicBool::new(false);
    /// assert_eq!(foo.fetch_not(Ordering::SeqCst), false);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    /// ```
    #[inline]
    #[unstable(feature = "atomic_bool_fetch_not", issue = "98485")]
    #[cfg(target_has_atomic = "8")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn fetch_not(&self, order: Ordering) -> bool {
        self.fetch_xor(true, order)
    }
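
    // A usage sketch (not part of the API): on toolchains where
    // `atomic_bool_fetch_not` is unavailable, the same toggle can be written
    // with the stable `fetch_xor`, which `fetch_not` delegates to above:
    //
    //     let flag = AtomicBool::new(true);
    //     assert_eq!(flag.fetch_xor(true, Ordering::SeqCst), true); // previous value
    //     assert_eq!(flag.load(Ordering::SeqCst), false);           // now toggled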

    /// Returns a mutable pointer to the underlying [`bool`].
    ///
    /// Doing non-atomic reads and writes on the resulting `bool` can be a data race.
    /// This method is mostly useful for FFI, where the function signature may use
    /// `*mut bool` instead of `&AtomicBool`.
    ///
    /// Returning an `*mut` pointer from a shared reference to this atomic is safe because the
    /// atomic types work with interior mutability. All modifications of an atomic change the value
    /// through a shared reference, and can do so safely as long as they use atomic operations. Any
    /// use of the returned raw pointer requires an `unsafe` block and still has to uphold the same
    /// restriction: operations on it must be atomic.
    ///
    /// # Examples
    ///
    /// ```ignore (extern-declaration)
    /// # fn main() {
    /// use std::sync::atomic::AtomicBool;
    /// extern "C" {
    ///     fn my_atomic_op(arg: *mut bool);
    /// }
    ///
    /// let mut atomic = AtomicBool::new(true);
    /// unsafe {
    ///     my_atomic_op(atomic.as_mut_ptr());
    /// }
    /// # }
    /// ```
    #[inline]
    #[unstable(feature = "atomic_mut_ptr", reason = "recently added", issue = "66893")]
    pub fn as_mut_ptr(&self) -> *mut bool {
        self.v.get() as *mut bool
    }

    /// Fetches the value, and applies a function to it that returns an optional
    /// new value. Returns a `Result` of `Ok(previous_value)` if the function
    /// returned `Some(_)`, else `Err(previous_value)`.
    ///
    /// Note: This may call the function multiple times if the value has been
    /// changed from other threads in the meantime, as long as the function
    /// returns `Some(_)`, but the function will have been applied only once to
    /// the stored value.
    ///
    /// `fetch_update` takes two [`Ordering`] arguments to describe the memory
    /// ordering of this operation. The first describes the required ordering for
    /// when the operation finally succeeds while the second describes the
    /// required ordering for loads. These correspond to the success and failure
    /// orderings of [`AtomicBool::compare_exchange`] respectively.
    ///
    /// Using [`Acquire`] as success ordering makes the store part of this
    /// operation [`Relaxed`], and using [`Release`] makes the final successful
    /// load [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`],
    /// [`Acquire`] or [`Relaxed`].
    ///
    /// **Note:** This method is only available on platforms that support atomic
    /// operations on `u8`.
    ///
    /// # Considerations
    ///
    /// This method is not magic; it is not provided by the hardware.
    /// It is implemented in terms of [`AtomicBool::compare_exchange_weak`], and suffers from the same drawbacks.
    /// In particular, this method will not circumvent the [ABA Problem].
    ///
    /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
    ///
    /// # Examples
    ///
    /// ```rust
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let x = AtomicBool::new(false);
    /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(false));
    /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(!x)), Ok(false));
    /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(!x)), Ok(true));
    /// assert_eq!(x.load(Ordering::SeqCst), false);
    /// ```
    #[inline]
    #[stable(feature = "atomic_fetch_update", since = "1.53.0")]
    #[cfg(target_has_atomic = "8")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn fetch_update<F>(
        &self,
        set_order: Ordering,
        fetch_order: Ordering,
        mut f: F,
    ) -> Result<bool, bool>
    where
        F: FnMut(bool) -> Option<bool>,
    {
        let mut prev = self.load(fetch_order);
        while let Some(next) = f(prev) {
            match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
                x @ Ok(_) => return x,
                Err(next_prev) => prev = next_prev,
            }
        }
        Err(prev)
    }
}

#[cfg(target_has_atomic_load_store = "ptr")]
impl<T> AtomicPtr<T> {
    /// Creates a new `AtomicPtr`.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::AtomicPtr;
    ///
    /// let ptr = &mut 5;
    /// let atomic_ptr = AtomicPtr::new(ptr);
    /// ```
    #[inline]
    #[stable(feature = "rust1", since = "1.0.0")]
    #[rustc_const_stable(feature = "const_atomic_new", since = "1.24.0")]
    pub const fn new(p: *mut T) -> AtomicPtr<T> {
        AtomicPtr { p: UnsafeCell::new(p) }
    }

    /// Returns a mutable reference to the underlying pointer.
    ///
    /// This is safe because the mutable reference guarantees that no other threads are
    /// concurrently accessing the atomic data.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicPtr, Ordering};
    ///
    /// let mut data = 10;
    /// let mut atomic_ptr = AtomicPtr::new(&mut data);
    /// let mut other_data = 5;
    /// *atomic_ptr.get_mut() = &mut other_data;
    /// assert_eq!(unsafe { *atomic_ptr.load(Ordering::SeqCst) }, 5);
    /// ```
    #[inline]
    #[stable(feature = "atomic_access", since = "1.15.0")]
    pub fn get_mut(&mut self) -> &mut *mut T {
        self.p.get_mut()
    }

    /// Get atomic access to a pointer.
    ///
    /// # Examples
    ///
    /// ```
    /// #![feature(atomic_from_mut)]
    /// use std::sync::atomic::{AtomicPtr, Ordering};
    ///
    /// let mut data = 123;
    /// let mut some_ptr = &mut data as *mut i32;
    /// let a = AtomicPtr::from_mut(&mut some_ptr);
    /// let mut other_data = 456;
    /// a.store(&mut other_data, Ordering::Relaxed);
    /// assert_eq!(unsafe { *some_ptr }, 456);
    /// ```
    #[inline]
    #[cfg(target_has_atomic_equal_alignment = "ptr")]
    #[unstable(feature = "atomic_from_mut", issue = "76314")]
    pub fn from_mut(v: &mut *mut T) -> &mut Self {
        use crate::mem::align_of;
        // Compile-time alignment check: the `let []` pattern only matches an
        // empty array, so this line fails to compile unless the two alignments
        // are equal (a larger `AtomicPtr` alignment yields a non-empty array,
        // and a smaller one makes the subtraction underflow).
        let [] = [(); align_of::<AtomicPtr<()>>() - align_of::<*mut ()>()];
        // SAFETY:
        //  - the mutable reference guarantees unique ownership.
        //  - the alignment of `*mut T` and `Self` is the same on all platforms
        //    supported by rust, as verified above.
        unsafe { &mut *(v as *mut *mut T as *mut Self) }
    }

    /// Get non-atomic access to a `&mut [AtomicPtr]` slice.
    ///
    /// This is safe because the mutable reference guarantees that no other threads are
    /// concurrently accessing the atomic data.
    ///
    /// # Examples
    ///
    /// ```
    /// #![feature(atomic_from_mut, inline_const)]
    /// use std::ptr::null_mut;
    /// use std::sync::atomic::{AtomicPtr, Ordering};
    ///
    /// let mut some_ptrs = [const { AtomicPtr::new(null_mut::<String>()) }; 10];
    ///
    /// let view: &mut [*mut String] = AtomicPtr::get_mut_slice(&mut some_ptrs);
    /// assert_eq!(view, [null_mut::<String>(); 10]);
    /// view
    ///     .iter_mut()
    ///     .enumerate()
    ///     .for_each(|(i, ptr)| *ptr = Box::into_raw(Box::new(format!("iteration#{i}"))));
    ///
    /// std::thread::scope(|s| {
    ///     for ptr in &some_ptrs {
    ///         s.spawn(move || {
    ///             let ptr = ptr.load(Ordering::Relaxed);
    ///             assert!(!ptr.is_null());
    ///
    ///             let name = unsafe { Box::from_raw(ptr) };
    ///             println!("Hello, {name}!");
    ///         });
    ///     }
    /// });
    /// ```
    #[inline]
    #[unstable(feature = "atomic_from_mut", issue = "76314")]
    pub fn get_mut_slice(this: &mut [Self]) -> &mut [*mut T] {
        // SAFETY: the mutable reference guarantees unique ownership.
        unsafe { &mut *(this as *mut [Self] as *mut [*mut T]) }
    }

    /// Get atomic access to a slice of pointers.
    ///
    /// # Examples
    ///
    /// ```
    /// #![feature(atomic_from_mut)]
    /// use std::ptr::null_mut;
    /// use std::sync::atomic::{AtomicPtr, Ordering};
    ///
    /// let mut some_ptrs = [null_mut::<String>(); 10];
    /// let a = &*AtomicPtr::from_mut_slice(&mut some_ptrs);
    /// std::thread::scope(|s| {
    ///     for i in 0..a.len() {
    ///         s.spawn(move || {
    ///             let name = Box::new(format!("thread{i}"));
    ///             a[i].store(Box::into_raw(name), Ordering::Relaxed);
    ///         });
    ///     }
    /// });
    /// for p in some_ptrs {
    ///     assert!(!p.is_null());
    ///     let name = unsafe { Box::from_raw(p) };
    ///     println!("Hello, {name}!");
    /// }
    /// ```
    #[inline]
    #[cfg(target_has_atomic_equal_alignment = "ptr")]
    #[unstable(feature = "atomic_from_mut", issue = "76314")]
    pub fn from_mut_slice(v: &mut [*mut T]) -> &mut [Self] {
        // SAFETY:
        //  - the mutable reference guarantees unique ownership.
        //  - the alignment of `*mut T` and `Self` is the same on all platforms
        //    supported by rust, as guaranteed by the `target_has_atomic_equal_alignment` cfg.
        unsafe { &mut *(v as *mut [*mut T] as *mut [Self]) }
    }

    /// Consumes the atomic and returns the contained value.
    ///
    /// This is safe because passing `self` by value guarantees that no other threads are
    /// concurrently accessing the atomic data.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::AtomicPtr;
    ///
    /// let mut data = 5;
    /// let atomic_ptr = AtomicPtr::new(&mut data);
    /// assert_eq!(unsafe { *atomic_ptr.into_inner() }, 5);
    /// ```
    #[inline]
    #[stable(feature = "atomic_access", since = "1.15.0")]
    #[rustc_const_unstable(feature = "const_cell_into_inner", issue = "78729")]
    pub const fn into_inner(self) -> *mut T {
        self.p.into_inner()
    }

    /// Loads a value from the pointer.
    ///
    /// `load` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
    ///
    /// # Panics
    ///
    /// Panics if `order` is [`Release`] or [`AcqRel`].
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicPtr, Ordering};
    ///
    /// let ptr = &mut 5;
    /// let some_ptr = AtomicPtr::new(ptr);
    ///
    /// let value = some_ptr.load(Ordering::Relaxed);
    /// ```
    #[inline]
    #[stable(feature = "rust1", since = "1.0.0")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn load(&self, order: Ordering) -> *mut T {
        // SAFETY: data races are prevented by atomic intrinsics.
        unsafe { atomic_load(self.p.get(), order) }
    }

    /// Stores a value into the pointer.
    ///
    /// `store` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
    ///
    /// # Panics
    ///
    /// Panics if `order` is [`Acquire`] or [`AcqRel`].
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicPtr, Ordering};
    ///
    /// let ptr = &mut 5;
    /// let some_ptr = AtomicPtr::new(ptr);
    ///
    /// let other_ptr = &mut 10;
    ///
    /// some_ptr.store(other_ptr, Ordering::Relaxed);
    /// ```
    #[inline]
    #[stable(feature = "rust1", since = "1.0.0")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn store(&self, ptr: *mut T, order: Ordering) {
        // SAFETY: data races are prevented by atomic intrinsics.
        unsafe {
            atomic_store(self.p.get(), ptr, order);
        }
    }

    /// Stores a value into the pointer, returning the previous value.
    ///
    /// `swap` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. All ordering modes are possible. Note that using
    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
    /// using [`Release`] makes the load part [`Relaxed`].
    ///
    /// **Note:** This method is only available on platforms that support atomic
    /// operations on pointers.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicPtr, Ordering};
    ///
    /// let ptr = &mut 5;
    /// let some_ptr = AtomicPtr::new(ptr);
    ///
    /// let other_ptr = &mut 10;
    ///
    /// let value = some_ptr.swap(other_ptr, Ordering::Relaxed);
    /// ```
    #[inline]
    #[stable(feature = "rust1", since = "1.0.0")]
    #[cfg(target_has_atomic = "ptr")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn swap(&self, ptr: *mut T, order: Ordering) -> *mut T {
        // SAFETY: data races are prevented by atomic intrinsics.
        unsafe { atomic_swap(self.p.get(), ptr, order) }
    }

    /// Stores a value into the pointer if the current value is the same as the `current` value.
    ///
    /// The return value is always the previous value. If it is equal to `current`, then the value
    /// was updated.
    ///
    /// `compare_and_swap` also takes an [`Ordering`] argument which describes the memory
    /// ordering of this operation. Notice that even when using [`AcqRel`], the operation
    /// might fail and hence just perform an `Acquire` load, but not have `Release` semantics.
    /// Using [`Acquire`] makes the store part of this operation [`Relaxed`] if it
    /// happens, and using [`Release`] makes the load part [`Relaxed`].
    ///
    /// **Note:** This method is only available on platforms that support atomic
    /// operations on pointers.
    ///
    /// # Migrating to `compare_exchange` and `compare_exchange_weak`
    ///
    /// `compare_and_swap` is equivalent to `compare_exchange` with the following mapping for
    /// memory orderings:
    ///
    /// Original | Success | Failure
    /// -------- | ------- | -------
    /// Relaxed | Relaxed | Relaxed
    /// Acquire | Acquire | Acquire
    /// Release | Release | Relaxed
    /// AcqRel | AcqRel | Acquire
    /// SeqCst | SeqCst | SeqCst
    ///
    /// `compare_exchange_weak` is allowed to fail spuriously even when the comparison succeeds,
    /// which allows the compiler to generate better assembly code when the compare and swap
    /// is used in a loop.
    ///
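    /// For example, the call in the example below uses `Relaxed`, which the table
    /// maps to `Relaxed`/`Relaxed`; a migration sketch of that call (note that
    /// `compare_exchange` returns a `Result` rather than the bare previous value):
    ///
    /// ```
    /// use std::sync::atomic::{AtomicPtr, Ordering};
    ///
    /// let ptr = &mut 5;
    /// let some_ptr = AtomicPtr::new(ptr);
    /// let other_ptr = &mut 10;
    ///
    /// // Before: some_ptr.compare_and_swap(ptr, other_ptr, Ordering::Relaxed)
    /// let value = match some_ptr.compare_exchange(ptr, other_ptr, Ordering::Relaxed, Ordering::Relaxed) {
    ///     Ok(prev) | Err(prev) => prev,
    /// };
    /// ```
    ///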
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicPtr, Ordering};
    ///
    /// let ptr = &mut 5;
    /// let some_ptr = AtomicPtr::new(ptr);
    ///
    /// let other_ptr = &mut 10;
    ///
    /// let value = some_ptr.compare_and_swap(ptr, other_ptr, Ordering::Relaxed);
    /// ```
    #[inline]
    #[stable(feature = "rust1", since = "1.0.0")]
    #[deprecated(
        since = "1.50.0",
        note = "Use `compare_exchange` or `compare_exchange_weak` instead"
    )]
    #[cfg(target_has_atomic = "ptr")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn compare_and_swap(&self, current: *mut T, new: *mut T, order: Ordering) -> *mut T {
        match self.compare_exchange(current, new, order, strongest_failure_ordering(order)) {
            Ok(x) => x,
            Err(x) => x,
        }
    }

    /// Stores a value into the pointer if the current value is the same as the `current` value.
    ///
    /// The return value is a result indicating whether the new value was written and containing
    /// the previous value. On success this value is guaranteed to be equal to `current`.
    ///
    /// `compare_exchange` takes two [`Ordering`] arguments to describe the memory
    /// ordering of this operation. `success` describes the required ordering for the
    /// read-modify-write operation that takes place if the comparison with `current` succeeds.
    /// `failure` describes the required ordering for the load operation that takes place when
    /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
    /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
    /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
    ///
    /// **Note:** This method is only available on platforms that support atomic
    /// operations on pointers.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicPtr, Ordering};
    ///
    /// let ptr = &mut 5;
    /// let some_ptr = AtomicPtr::new(ptr);
    ///
    /// let other_ptr = &mut 10;
    ///
    /// let value = some_ptr.compare_exchange(ptr, other_ptr,
    ///                                       Ordering::SeqCst, Ordering::Relaxed);
    /// ```
    #[inline]
#[stable(feature = "extended_compare_and_swap", since = "1.10.0")]
|
2019-10-08 17:09:23 +01:00
|
|
|
#[cfg(target_has_atomic = "ptr")]
|
2022-07-20 16:34:24 -04:00
|
|
|
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
|
2016-01-17 05:11:31 +00:00
|
|
|
pub fn compare_exchange(
|
|
|
|
|
&self,
|
|
|
|
|
current: *mut T,
|
|
|
|
|
new: *mut T,
|
|
|
|
|
success: Ordering,
|
2016-10-16 22:11:01 +05:30
|
|
|
failure: Ordering,
|
|
|
|
|
) -> Result<*mut T, *mut T> {
|
2019-12-26 12:56:34 -08:00
|
|
|
// SAFETY: data races are prevented by atomic intrinsics.
|
2020-12-30 08:04:59 -05:00
|
|
|
unsafe { atomic_compare_exchange(self.p.get(), current, new, success, failure) }
|
2014-05-12 21:30:48 -07:00
|
|
|
}

    /// Stores a value into the pointer if the current value is the same as the `current` value.
    ///
    /// Unlike [`AtomicPtr::compare_exchange`], this function is allowed to spuriously fail even when the
    /// comparison succeeds, which can result in more efficient code on some platforms. The
    /// return value is a result indicating whether the new value was written and containing the
    /// previous value.
    ///
    /// `compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
    /// ordering of this operation. `success` describes the required ordering for the
    /// read-modify-write operation that takes place if the comparison with `current` succeeds.
    /// `failure` describes the required ordering for the load operation that takes place when
    /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
    /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
    /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
    ///
    /// **Note:** This method is only available on platforms that support atomic
    /// operations on pointers.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicPtr, Ordering};
    ///
    /// let some_ptr = AtomicPtr::new(&mut 5);
    ///
    /// let new = &mut 10;
    /// let mut old = some_ptr.load(Ordering::Relaxed);
    /// loop {
    ///     match some_ptr.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
    ///         Ok(_) => break,
    ///         Err(x) => old = x,
    ///     }
    /// }
    /// ```
    #[inline]
#[stable(feature = "extended_compare_and_swap", since = "1.10.0")]
|
2019-10-08 17:09:23 +01:00
|
|
|
#[cfg(target_has_atomic = "ptr")]
|
2022-07-20 16:34:24 -04:00
|
|
|
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
|
2016-01-17 05:11:31 +00:00
|
|
|
pub fn compare_exchange_weak(
|
|
|
|
|
&self,
|
|
|
|
|
current: *mut T,
|
|
|
|
|
new: *mut T,
|
|
|
|
|
success: Ordering,
|
2016-10-16 22:11:01 +05:30
|
|
|
failure: Ordering,
|
|
|
|
|
) -> Result<*mut T, *mut T> {
|
2020-11-28 18:12:45 +00:00
|
|
|
// SAFETY: This intrinsic is unsafe because it operates on a raw pointer
|
|
|
|
|
// but we know for sure that the pointer is valid (we just got it from
|
|
|
|
|
// an `UnsafeCell` that we have by reference) and the atomic operation
|
|
|
|
|
// itself allows us to safely mutate the `UnsafeCell` contents.
|
2020-12-30 08:04:59 -05:00
|
|
|
unsafe { atomic_compare_exchange_weak(self.p.get(), current, new, success, failure) }
|
2016-01-17 05:11:31 +00:00
|
|
|
}

    /// Fetches the value, and applies a function to it that returns an optional
    /// new value. Returns a `Result` of `Ok(previous_value)` if the function
    /// returned `Some(_)`, else `Err(previous_value)`.
    ///
    /// Note: This may call the function multiple times if the value has been
    /// changed from other threads in the meantime, as long as the function
    /// returns `Some(_)`, but the function will have been applied only once to
    /// the stored value.
    ///
    /// `fetch_update` takes two [`Ordering`] arguments to describe the memory
    /// ordering of this operation. The first describes the required ordering for
    /// when the operation finally succeeds while the second describes the
    /// required ordering for loads. These correspond to the success and failure
    /// orderings of [`AtomicPtr::compare_exchange`] respectively.
    ///
    /// Using [`Acquire`] as success ordering makes the store part of this
    /// operation [`Relaxed`], and using [`Release`] makes the final successful
    /// load [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`],
    /// [`Acquire`] or [`Relaxed`].
    ///
    /// **Note:** This method is only available on platforms that support atomic
    /// operations on pointers.
    ///
    /// # Considerations
    ///
    /// This method is not magic; it is not provided by the hardware.
    /// It is implemented in terms of [`AtomicPtr::compare_exchange_weak`], and suffers from the same drawbacks.
    /// In particular, this method will not circumvent the [ABA Problem].
    ///
    /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
    ///
    /// # Examples
    ///
    /// ```rust
    /// use std::sync::atomic::{AtomicPtr, Ordering};
    ///
    /// let ptr: *mut _ = &mut 5;
    /// let some_ptr = AtomicPtr::new(ptr);
    ///
    /// let new: *mut _ = &mut 10;
    /// assert_eq!(some_ptr.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(ptr));
    /// let result = some_ptr.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| {
    ///     if x == ptr {
    ///         Some(new)
    ///     } else {
    ///         None
    ///     }
    /// });
    /// assert_eq!(result, Ok(ptr));
    /// assert_eq!(some_ptr.load(Ordering::SeqCst), new);
    /// ```
    #[inline]
    #[stable(feature = "atomic_fetch_update", since = "1.53.0")]
    #[cfg(target_has_atomic = "ptr")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn fetch_update<F>(
        &self,
        set_order: Ordering,
        fetch_order: Ordering,
        mut f: F,
    ) -> Result<*mut T, *mut T>
    where
        F: FnMut(*mut T) -> Option<*mut T>,
    {
        let mut prev = self.load(fetch_order);
        while let Some(next) = f(prev) {
            match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
                x @ Ok(_) => return x,
                Err(next_prev) => prev = next_prev,
            }
        }
        Err(prev)
    }

    /// Offsets the pointer's address by adding `val` (in units of `T`),
    /// returning the previous pointer.
    ///
    /// This is equivalent to using [`wrapping_add`] to atomically perform the
    /// equivalent of `ptr = ptr.wrapping_add(val);`.
    ///
    /// This method operates in units of `T`, which means that it cannot be used
    /// to offset the pointer by an amount which is not a multiple of
    /// `size_of::<T>()`. This can sometimes be inconvenient, as you may want to
    /// work with a deliberately misaligned pointer. In such cases, you may use
    /// the [`fetch_byte_add`](Self::fetch_byte_add) method instead.
    ///
    /// `fetch_ptr_add` takes an [`Ordering`] argument which describes the
    /// memory ordering of this operation. All ordering modes are possible. Note
    /// that using [`Acquire`] makes the store part of this operation
    /// [`Relaxed`], and using [`Release`] makes the load part [`Relaxed`].
    ///
    /// **Note**: This method is only available on platforms that support atomic
    /// operations on [`AtomicPtr`].
    ///
    /// [`wrapping_add`]: pointer::wrapping_add
    ///
    /// # Examples
    ///
    /// ```
    /// #![feature(strict_provenance_atomic_ptr, strict_provenance)]
    /// use core::sync::atomic::{AtomicPtr, Ordering};
    ///
    /// let atom = AtomicPtr::<i64>::new(core::ptr::null_mut());
    /// assert_eq!(atom.fetch_ptr_add(1, Ordering::Relaxed).addr(), 0);
    /// // Note: units of `size_of::<i64>()`.
    /// assert_eq!(atom.load(Ordering::Relaxed).addr(), 8);
    /// ```
    #[inline]
    #[cfg(target_has_atomic = "ptr")]
    #[unstable(feature = "strict_provenance_atomic_ptr", issue = "99108")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn fetch_ptr_add(&self, val: usize, order: Ordering) -> *mut T {
        self.fetch_byte_add(val.wrapping_mul(core::mem::size_of::<T>()), order)
    }

    /// Offsets the pointer's address by subtracting `val` (in units of `T`),
    /// returning the previous pointer.
    ///
    /// This is equivalent to using [`wrapping_sub`] to atomically perform the
    /// equivalent of `ptr = ptr.wrapping_sub(val);`.
    ///
    /// This method operates in units of `T`, which means that it cannot be used
    /// to offset the pointer by an amount which is not a multiple of
    /// `size_of::<T>()`. This can sometimes be inconvenient, as you may want to
    /// work with a deliberately misaligned pointer. In such cases, you may use
    /// the [`fetch_byte_sub`](Self::fetch_byte_sub) method instead.
    ///
    /// `fetch_ptr_sub` takes an [`Ordering`] argument which describes the memory
    /// ordering of this operation. All ordering modes are possible. Note that
    /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
    /// and using [`Release`] makes the load part [`Relaxed`].
    ///
    /// **Note**: This method is only available on platforms that support atomic
    /// operations on [`AtomicPtr`].
    ///
    /// [`wrapping_sub`]: pointer::wrapping_sub
    ///
    /// # Examples
    ///
    /// ```
    /// #![feature(strict_provenance_atomic_ptr)]
    /// use core::sync::atomic::{AtomicPtr, Ordering};
    ///
    /// let array = [1i32, 2i32];
    /// let atom = AtomicPtr::new(array.as_ptr().wrapping_add(1) as *mut _);
    ///
    /// assert!(core::ptr::eq(
    ///     atom.fetch_ptr_sub(1, Ordering::Relaxed),
    ///     &array[1],
    /// ));
    /// assert!(core::ptr::eq(atom.load(Ordering::Relaxed), &array[0]));
    /// ```
    #[inline]
    #[cfg(target_has_atomic = "ptr")]
    #[unstable(feature = "strict_provenance_atomic_ptr", issue = "99108")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn fetch_ptr_sub(&self, val: usize, order: Ordering) -> *mut T {
        self.fetch_byte_sub(val.wrapping_mul(core::mem::size_of::<T>()), order)
    }

    /// Offsets the pointer's address by adding `val` *bytes*, returning the
    /// previous pointer.
    ///
    /// This is equivalent to using [`wrapping_byte_add`] to atomically
    /// perform `ptr = ptr.wrapping_byte_add(val)`.
    ///
    /// `fetch_byte_add` takes an [`Ordering`] argument which describes the
    /// memory ordering of this operation. All ordering modes are possible. Note
    /// that using [`Acquire`] makes the store part of this operation
    /// [`Relaxed`], and using [`Release`] makes the load part [`Relaxed`].
    ///
    /// **Note**: This method is only available on platforms that support atomic
    /// operations on [`AtomicPtr`].
    ///
    /// [`wrapping_byte_add`]: pointer::wrapping_byte_add
    ///
    /// # Examples
    ///
    /// ```
    /// #![feature(strict_provenance_atomic_ptr, strict_provenance)]
    /// use core::sync::atomic::{AtomicPtr, Ordering};
    ///
    /// let atom = AtomicPtr::<i64>::new(core::ptr::null_mut());
    /// assert_eq!(atom.fetch_byte_add(1, Ordering::Relaxed).addr(), 0);
    /// // Note: in units of bytes, not `size_of::<i64>()`.
    /// assert_eq!(atom.load(Ordering::Relaxed).addr(), 1);
    /// ```
    #[inline]
    #[cfg(target_has_atomic = "ptr")]
    #[unstable(feature = "strict_provenance_atomic_ptr", issue = "99108")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn fetch_byte_add(&self, val: usize, order: Ordering) -> *mut T {
        // SAFETY: data races are prevented by atomic intrinsics.
        unsafe { atomic_add(self.p.get(), core::ptr::invalid_mut(val), order).cast() }
    }

    /// Offsets the pointer's address by subtracting `val` *bytes*, returning the
    /// previous pointer.
    ///
    /// This is equivalent to using [`wrapping_byte_sub`] to atomically
    /// perform `ptr = ptr.wrapping_byte_sub(val)`.
    ///
    /// `fetch_byte_sub` takes an [`Ordering`] argument which describes the
    /// memory ordering of this operation. All ordering modes are possible. Note
    /// that using [`Acquire`] makes the store part of this operation
    /// [`Relaxed`], and using [`Release`] makes the load part [`Relaxed`].
    ///
    /// **Note**: This method is only available on platforms that support atomic
    /// operations on [`AtomicPtr`].
    ///
    /// [`wrapping_byte_sub`]: pointer::wrapping_byte_sub
    ///
    /// # Examples
    ///
    /// ```
    /// #![feature(strict_provenance_atomic_ptr, strict_provenance)]
    /// use core::sync::atomic::{AtomicPtr, Ordering};
    ///
    /// let atom = AtomicPtr::<i64>::new(core::ptr::invalid_mut(1));
    /// assert_eq!(atom.fetch_byte_sub(1, Ordering::Relaxed).addr(), 1);
    /// assert_eq!(atom.load(Ordering::Relaxed).addr(), 0);
    /// ```
    #[inline]
    #[cfg(target_has_atomic = "ptr")]
    #[unstable(feature = "strict_provenance_atomic_ptr", issue = "99108")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn fetch_byte_sub(&self, val: usize, order: Ordering) -> *mut T {
        // SAFETY: data races are prevented by atomic intrinsics.
        unsafe { atomic_sub(self.p.get(), core::ptr::invalid_mut(val), order).cast() }
    }

    /// Performs a bitwise "or" operation on the address of the current pointer,
    /// and the argument `val`, and stores a pointer with provenance of the
    /// current pointer and the resulting address.
    ///
    /// This is equivalent to using [`map_addr`] to atomically perform
    /// `ptr = ptr.map_addr(|a| a | val)`. This can be used in tagged
    /// pointer schemes to atomically set tag bits.
    ///
    /// **Caveat**: This operation returns the previous value. To compute the
    /// stored value without losing provenance, you may use [`map_addr`]. For
    /// example: `a.fetch_or(val).map_addr(|a| a | val)`.
    ///
    /// `fetch_or` takes an [`Ordering`] argument which describes the memory
    /// ordering of this operation. All ordering modes are possible. Note that
    /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
    /// and using [`Release`] makes the load part [`Relaxed`].
    ///
    /// **Note**: This method is only available on platforms that support atomic
    /// operations on [`AtomicPtr`].
    ///
    /// This API and its claimed semantics are part of the Strict Provenance
    /// experiment, see the [module documentation for `ptr`][crate::ptr] for
    /// details.
    ///
    /// [`map_addr`]: pointer::map_addr
    ///
    /// # Examples
    ///
    /// ```
    /// #![feature(strict_provenance_atomic_ptr, strict_provenance)]
    /// use core::sync::atomic::{AtomicPtr, Ordering};
    ///
    /// let pointer = &mut 3i64 as *mut i64;
    ///
    /// let atom = AtomicPtr::<i64>::new(pointer);
    /// // Tag the bottom bit of the pointer.
    /// assert_eq!(atom.fetch_or(1, Ordering::Relaxed).addr() & 1, 0);
    /// // Extract and untag.
    /// let tagged = atom.load(Ordering::Relaxed);
    /// assert_eq!(tagged.addr() & 1, 1);
    /// assert_eq!(tagged.map_addr(|p| p & !1), pointer);
    /// ```
    #[inline]
    #[cfg(target_has_atomic = "ptr")]
    #[unstable(feature = "strict_provenance_atomic_ptr", issue = "99108")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn fetch_or(&self, val: usize, order: Ordering) -> *mut T {
        // SAFETY: data races are prevented by atomic intrinsics.
        unsafe { atomic_or(self.p.get(), core::ptr::invalid_mut(val), order).cast() }
    }

    /// Performs a bitwise "and" operation on the address of the current
    /// pointer, and the argument `val`, and stores a pointer with provenance of
    /// the current pointer and the resulting address.
    ///
    /// This is equivalent to using [`map_addr`] to atomically perform
    /// `ptr = ptr.map_addr(|a| a & val)`. This can be used in tagged
    /// pointer schemes to atomically unset tag bits.
    ///
    /// **Caveat**: This operation returns the previous value. To compute the
    /// stored value without losing provenance, you may use [`map_addr`]. For
    /// example: `a.fetch_and(val).map_addr(|a| a & val)`.
    ///
    /// `fetch_and` takes an [`Ordering`] argument which describes the memory
    /// ordering of this operation. All ordering modes are possible. Note that
    /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
    /// and using [`Release`] makes the load part [`Relaxed`].
    ///
    /// **Note**: This method is only available on platforms that support atomic
    /// operations on [`AtomicPtr`].
    ///
    /// This API and its claimed semantics are part of the Strict Provenance
    /// experiment, see the [module documentation for `ptr`][crate::ptr] for
    /// details.
    ///
    /// [`map_addr`]: pointer::map_addr
    ///
    /// # Examples
    ///
    /// ```
    /// #![feature(strict_provenance_atomic_ptr, strict_provenance)]
    /// use core::sync::atomic::{AtomicPtr, Ordering};
    ///
    /// let pointer = &mut 3i64 as *mut i64;
    /// // A tagged pointer
    /// let atom = AtomicPtr::<i64>::new(pointer.map_addr(|a| a | 1));
    /// assert_eq!(atom.fetch_or(1, Ordering::Relaxed).addr() & 1, 1);
    /// // Untag, and extract the previously tagged pointer.
    /// let untagged = atom.fetch_and(!1, Ordering::Relaxed)
    ///     .map_addr(|a| a & !1);
    /// assert_eq!(untagged, pointer);
    /// ```
    #[inline]
    #[cfg(target_has_atomic = "ptr")]
    #[unstable(feature = "strict_provenance_atomic_ptr", issue = "99108")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn fetch_and(&self, val: usize, order: Ordering) -> *mut T {
        // SAFETY: data races are prevented by atomic intrinsics.
        unsafe { atomic_and(self.p.get(), core::ptr::invalid_mut(val), order).cast() }
    }

    /// Performs a bitwise "xor" operation on the address of the current
    /// pointer, and the argument `val`, and stores a pointer with provenance of
    /// the current pointer and the resulting address.
    ///
    /// This is equivalent to using [`map_addr`] to atomically perform
    /// `ptr = ptr.map_addr(|a| a ^ val)`. This can be used in tagged
    /// pointer schemes to atomically toggle tag bits.
    ///
    /// **Caveat**: This operation returns the previous value. To compute the
    /// stored value without losing provenance, you may use [`map_addr`]. For
    /// example: `a.fetch_xor(val).map_addr(|a| a ^ val)`.
    ///
    /// `fetch_xor` takes an [`Ordering`] argument which describes the memory
    /// ordering of this operation. All ordering modes are possible. Note that
    /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
    /// and using [`Release`] makes the load part [`Relaxed`].
    ///
    /// **Note**: This method is only available on platforms that support atomic
    /// operations on [`AtomicPtr`].
    ///
    /// This API and its claimed semantics are part of the Strict Provenance
    /// experiment, see the [module documentation for `ptr`][crate::ptr] for
    /// details.
    ///
    /// [`map_addr`]: pointer::map_addr
    ///
    /// # Examples
    ///
    /// ```
    /// #![feature(strict_provenance_atomic_ptr, strict_provenance)]
    /// use core::sync::atomic::{AtomicPtr, Ordering};
    ///
    /// let pointer = &mut 3i64 as *mut i64;
    /// let atom = AtomicPtr::<i64>::new(pointer);
    ///
    /// // Toggle a tag bit on the pointer.
    /// atom.fetch_xor(1, Ordering::Relaxed);
    /// assert_eq!(atom.load(Ordering::Relaxed).addr() & 1, 1);
    /// ```
    #[inline]
    #[cfg(target_has_atomic = "ptr")]
    #[unstable(feature = "strict_provenance_atomic_ptr", issue = "99108")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn fetch_xor(&self, val: usize, order: Ordering) -> *mut T {
        // SAFETY: data races are prevented by atomic intrinsics.
        unsafe { atomic_xor(self.p.get(), core::ptr::invalid_mut(val), order).cast() }
    }

    /// Returns a mutable pointer to the underlying pointer.
    ///
    /// Doing non-atomic reads and writes on the resulting pointer can be a data race.
    /// This method is mostly useful for FFI, where the function signature may use
    /// `*mut *mut T` instead of `&AtomicPtr<T>`.
    ///
    /// Returning an `*mut` pointer from a shared reference to this atomic is safe because the
    /// atomic types work with interior mutability. All modifications of an atomic change the value
    /// through a shared reference, and can do so safely as long as they use atomic operations. Any
    /// use of the returned raw pointer requires an `unsafe` block and still has to uphold the same
    /// restriction: operations on it must be atomic.
    ///
    /// # Examples
    ///
    /// ```ignore (extern-declaration)
    /// #![feature(atomic_mut_ptr)]
    /// use std::sync::atomic::AtomicPtr;
    ///
    /// extern "C" {
    ///     fn my_atomic_op(arg: *mut *mut u32);
    /// }
    ///
    /// let mut value = 17;
    /// let atomic = AtomicPtr::new(&mut value);
    ///
    /// // SAFETY: Safe as long as `my_atomic_op` is atomic.
    /// unsafe {
    ///     my_atomic_op(atomic.as_mut_ptr());
    /// }
    /// ```
    #[inline]
    #[unstable(feature = "atomic_mut_ptr", reason = "recently added", issue = "66893")]
    pub fn as_mut_ptr(&self) -> *mut *mut T {
        self.p.get()
    }
}

#[cfg(target_has_atomic_load_store = "8")]
#[stable(feature = "atomic_bool_from", since = "1.24.0")]
#[rustc_const_unstable(feature = "const_convert", issue = "88674")]
impl const From<bool> for AtomicBool {
    /// Converts a `bool` into an `AtomicBool`.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::AtomicBool;
    /// let atomic_bool = AtomicBool::from(true);
    /// assert_eq!(format!("{atomic_bool:?}"), "true")
    /// ```
    #[inline]
    fn from(b: bool) -> Self {
        Self::new(b)
    }
}

#[cfg(target_has_atomic_load_store = "ptr")]
#[stable(feature = "atomic_from", since = "1.23.0")]
#[rustc_const_unstable(feature = "const_convert", issue = "88674")]
impl<T> const From<*mut T> for AtomicPtr<T> {
    /// Converts a `*mut T` into an `AtomicPtr<T>`.
    #[inline]
    fn from(p: *mut T) -> Self {
        Self::new(p)
    }
}
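
// `if_not_8_bit!` below expands to nothing for the 8-bit integer types and to its
// arguments unchanged for everything else; it is used to emit doc text (such as the
// alignment note on `from_mut`) that only applies to the wider atomic types.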
#[allow(unused_macros)] // This macro ends up being unused on some architectures.
macro_rules! if_not_8_bit {
    (u8, $($tt:tt)*) => { "" };
    (i8, $($tt:tt)*) => { "" };
    ($_:ident, $($tt:tt)*) => { $($tt)* };
}

#[cfg(target_has_atomic_load_store = "8")]
macro_rules! atomic_int {
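    // Parameters, in order: the `cfg` gate for compare-and-swap support, the `cfg`
    // gate for equal-alignment (`from_mut`) support, stability attributes for the
    // various method groups, the const-stability attribute for `new`, the stability
    // of the deprecated `ATOMIC_*_INIT` constant, the diagnostic item, the integer
    // type name as a doc string, an extra doc prefix for feature gates, the min/max
    // intrinsic functions, the type's alignment, the deprecation suggestion for the
    // legacy constant, and finally the integer type, atomic type, and constant names.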
    ($cfg_cas:meta,
     $cfg_align:meta,
     $stable:meta,
     $stable_cxchg:meta,
     $stable_debug:meta,
     $stable_access:meta,
     $stable_from:meta,
     $stable_nand:meta,
     $const_stable:meta,
     $stable_init_const:meta,
     $diagnostic_item:meta,
     $s_int_type:literal,
     $extra_feature:expr,
     $min_fn:ident, $max_fn:ident,
     $align:expr,
     $atomic_new:expr,
     $int_type:ident $atomic_type:ident $atomic_init:ident) => {
        /// An integer type which can be safely shared between threads.
        ///
        /// This type has the same in-memory representation as the underlying
        /// integer type, [`
        #[doc = $s_int_type]
        /// `]. For more about the differences between atomic types and
        /// non-atomic types as well as information about the portability of
        /// this type, please see the [module-level documentation].
        ///
        /// **Note:** This type is only available on platforms that support
        /// atomic loads and stores of [`
        #[doc = $s_int_type]
        /// `].
        ///
        /// [module-level documentation]: crate::sync::atomic
        #[$stable]
        #[$diagnostic_item]
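        // The explicit alignment matters here: on some targets the plain integer
        // type is aligned to fewer bytes than atomic instructions require, so each
        // `atomic_int!` invocation passes in the required alignment as `$align`.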
        #[repr(C, align($align))]
        pub struct $atomic_type {
            v: UnsafeCell<$int_type>,
        }

        /// An atomic integer initialized to `0`.
        #[$stable_init_const]
        #[deprecated(
            since = "1.34.0",
            note = "the `new` function is now preferred",
            suggestion = $atomic_new,
        )]
        pub const $atomic_init: $atomic_type = $atomic_type::new(0);

        #[$stable]
        #[rustc_const_unstable(feature = "const_default_impls", issue = "87864")]
        impl const Default for $atomic_type {
            #[inline]
            fn default() -> Self {
                Self::new(Default::default())
            }
        }

        #[$stable_from]
        #[rustc_const_unstable(feature = "const_num_from_num", issue = "87852")]
        impl const From<$int_type> for $atomic_type {
            #[doc = concat!("Converts an `", stringify!($int_type), "` into an `", stringify!($atomic_type), "`.")]
            #[inline]
            fn from(v: $int_type) -> Self { Self::new(v) }
        }

        #[$stable_debug]
        impl fmt::Debug for $atomic_type {
            fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
                fmt::Debug::fmt(&self.load(Ordering::Relaxed), f)
            }
        }

        // Send is implicitly implemented.
        #[$stable]
        unsafe impl Sync for $atomic_type {}

        impl $atomic_type {
            /// Creates a new atomic integer.
            ///
            /// # Examples
            ///
            /// ```
            #[doc = concat!($extra_feature, "use std::sync::atomic::", stringify!($atomic_type), ";")]
            ///
            #[doc = concat!("let atomic_forty_two = ", stringify!($atomic_type), "::new(42);")]
            /// ```
            #[inline]
            #[$stable]
            #[$const_stable]
            #[must_use]
            pub const fn new(v: $int_type) -> Self {
                Self { v: UnsafeCell::new(v) }
            }

            /// Returns a mutable reference to the underlying integer.
            ///
            /// This is safe because the mutable reference guarantees that no other threads are
            /// concurrently accessing the atomic data.
            ///
            /// # Examples
            ///
            /// ```
            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
            ///
            #[doc = concat!("let mut some_var = ", stringify!($atomic_type), "::new(10);")]
            /// assert_eq!(*some_var.get_mut(), 10);
            /// *some_var.get_mut() = 5;
            /// assert_eq!(some_var.load(Ordering::SeqCst), 5);
            /// ```
            #[inline]
            #[$stable_access]
            pub fn get_mut(&mut self) -> &mut $int_type {
                self.v.get_mut()
            }

            #[doc = concat!("Get atomic access to a `&mut ", stringify!($int_type), "`.")]
            ///
            #[doc = if_not_8_bit! {
                $int_type,
                concat!(
                    "**Note:** This function is only available on targets where `",
                    stringify!($int_type), "` has an alignment of ", $align, " bytes."
                )
            }]
            ///
            /// # Examples
            ///
            /// ```
            /// #![feature(atomic_from_mut)]
            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
            ///
            /// let mut some_int = 123;
            #[doc = concat!("let a = ", stringify!($atomic_type), "::from_mut(&mut some_int);")]
            /// a.store(100, Ordering::Relaxed);
            /// assert_eq!(some_int, 100);
            /// ```
            #[inline]
            #[$cfg_align]
            #[unstable(feature = "atomic_from_mut", issue = "76314")]
            pub fn from_mut(v: &mut $int_type) -> &mut Self {
                use crate::mem::align_of;
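                // This binding compiles only when the two alignments are equal: a
                // nonzero difference makes the empty-array pattern refutable, and a
                // negative one underflows at compile time, so the line acts as a
                // static assertion backing up the `$cfg_align` gate.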
                let [] = [(); align_of::<Self>() - align_of::<$int_type>()];
                // SAFETY:
                //  - the mutable reference guarantees unique ownership.
                //  - the alignment of `$int_type` and `Self` is the
                //    same, as promised by $cfg_align and verified above.
                unsafe { &mut *(v as *mut $int_type as *mut Self) }
            }

            #[doc = concat!("Get non-atomic access to a `&mut [", stringify!($atomic_type), "]` slice.")]
            ///
            /// This is safe because the mutable reference guarantees that no other threads are
            /// concurrently accessing the atomic data.
            ///
            /// # Examples
            ///
            /// ```
            /// #![feature(atomic_from_mut, inline_const)]
            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
            ///
            #[doc = concat!("let mut some_ints = [const { ", stringify!($atomic_type), "::new(0) }; 10];")]
            ///
            #[doc = concat!("let view: &mut [", stringify!($int_type), "] = ", stringify!($atomic_type), "::get_mut_slice(&mut some_ints);")]
            /// assert_eq!(view, [0; 10]);
            /// view
            ///     .iter_mut()
            ///     .enumerate()
            ///     .for_each(|(idx, int)| *int = idx as _);
            ///
            /// std::thread::scope(|s| {
            ///     some_ints
            ///         .iter()
            ///         .enumerate()
            ///         .for_each(|(idx, int)| {
            ///             s.spawn(move || assert_eq!(int.load(Ordering::Relaxed), idx as _));
            ///         })
            /// });
            /// ```
            #[inline]
            #[unstable(feature = "atomic_from_mut", issue = "76314")]
            pub fn get_mut_slice(this: &mut [Self]) -> &mut [$int_type] {
                // SAFETY: the mutable reference guarantees unique ownership.
                unsafe { &mut *(this as *mut [Self] as *mut [$int_type]) }
            }

            #[doc = concat!("Get atomic access to a `&mut [", stringify!($int_type), "]` slice.")]
            ///
            /// # Examples
            ///
            /// ```
            /// #![feature(atomic_from_mut)]
            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
            ///
            /// let mut some_ints = [0; 10];
            #[doc = concat!("let a = &*", stringify!($atomic_type), "::from_mut_slice(&mut some_ints);")]
            /// std::thread::scope(|s| {
            ///     for i in 0..a.len() {
            ///         s.spawn(move || a[i].store(i as _, Ordering::Relaxed));
            ///     }
            /// });
            /// for (i, n) in some_ints.into_iter().enumerate() {
            ///     assert_eq!(i, n as usize);
            /// }
            /// ```
            #[inline]
            #[$cfg_align]
            #[unstable(feature = "atomic_from_mut", issue = "76314")]
            pub fn from_mut_slice(v: &mut [$int_type]) -> &mut [Self] {
                use crate::mem::align_of;
                let [] = [(); align_of::<Self>() - align_of::<$int_type>()];
                // SAFETY:
                //  - the mutable reference guarantees unique ownership.
                //  - the alignment of `$int_type` and `Self` is the
                //    same, as promised by $cfg_align and verified above.
                unsafe { &mut *(v as *mut [$int_type] as *mut [Self]) }
            }

            /// Consumes the atomic and returns the contained value.
            ///
            /// This is safe because passing `self` by value guarantees that no other threads are
            /// concurrently accessing the atomic data.
            ///
            /// # Examples
            ///
            /// ```
            #[doc = concat!($extra_feature, "use std::sync::atomic::", stringify!($atomic_type), ";")]
            ///
            #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
            /// assert_eq!(some_var.into_inner(), 5);
            /// ```
            #[inline]
            #[$stable_access]
            #[rustc_const_unstable(feature = "const_cell_into_inner", issue = "78729")]
            pub const fn into_inner(self) -> $int_type {
                self.v.into_inner()
            }

            /// Loads a value from the atomic integer.
            ///
            /// `load` takes an [`Ordering`] argument which describes the memory ordering of this operation.
            /// Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
            ///
            /// # Panics
            ///
            /// Panics if `order` is [`Release`] or [`AcqRel`].
            ///
            /// # Examples
            ///
            /// ```
            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
            ///
            #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
            ///
            /// assert_eq!(some_var.load(Ordering::Relaxed), 5);
            /// ```
            #[inline]
            #[$stable]
            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
            pub fn load(&self, order: Ordering) -> $int_type {
                // SAFETY: data races are prevented by atomic intrinsics.
                unsafe { atomic_load(self.v.get(), order) }
            }

            /// Stores a value into the atomic integer.
            ///
            /// `store` takes an [`Ordering`] argument which describes the memory ordering of this operation.
            /// Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
            ///
            /// # Panics
            ///
            /// Panics if `order` is [`Acquire`] or [`AcqRel`].
            ///
            /// # Examples
            ///
            /// ```
            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
            ///
            #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
            ///
            /// some_var.store(10, Ordering::Relaxed);
            /// assert_eq!(some_var.load(Ordering::Relaxed), 10);
            /// ```
            #[inline]
            #[$stable]
            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
            pub fn store(&self, val: $int_type, order: Ordering) {
                // SAFETY: data races are prevented by atomic intrinsics.
                unsafe { atomic_store(self.v.get(), val, order); }
            }

            /// Stores a value into the atomic integer, returning the previous value.
            ///
            /// `swap` takes an [`Ordering`] argument which describes the memory ordering
            /// of this operation. All ordering modes are possible. Note that using
            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
            /// using [`Release`] makes the load part [`Relaxed`].
            ///
            /// **Note**: This method is only available on platforms that support atomic operations on
            #[doc = concat!("[`", $s_int_type, "`].")]
            ///
            /// # Examples
            ///
            /// ```
            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
            ///
            #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
            ///
            /// assert_eq!(some_var.swap(10, Ordering::Relaxed), 5);
            /// ```
            #[inline]
            #[$stable]
            #[$cfg_cas]
            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
            pub fn swap(&self, val: $int_type, order: Ordering) -> $int_type {
                // SAFETY: data races are prevented by atomic intrinsics.
                unsafe { atomic_swap(self.v.get(), val, order) }
            }

            /// Stores a value into the atomic integer if the current value is the same as
            /// the `current` value.
            ///
            /// The return value is always the previous value. If it is equal to `current`, then the
            /// value was updated.
            ///
            /// `compare_and_swap` also takes an [`Ordering`] argument which describes the memory
            /// ordering of this operation. Notice that even when using [`AcqRel`], the operation
            /// might fail and hence just perform an `Acquire` load, but not have `Release` semantics.
            /// Using [`Acquire`] makes the store part of this operation [`Relaxed`] if it
            /// happens, and using [`Release`] makes the load part [`Relaxed`].
            ///
            /// **Note**: This method is only available on platforms that support atomic operations on
            #[doc = concat!("[`", $s_int_type, "`].")]
            ///
            /// # Migrating to `compare_exchange` and `compare_exchange_weak`
            ///
            /// `compare_and_swap` is equivalent to `compare_exchange` with the following mapping for
            /// memory orderings:
            ///
            /// Original | Success | Failure
            /// -------- | ------- | -------
            /// Relaxed  | Relaxed | Relaxed
            /// Acquire  | Acquire | Acquire
            /// Release  | Release | Relaxed
            /// AcqRel   | AcqRel  | Acquire
            /// SeqCst   | SeqCst  | SeqCst
            ///
            /// `compare_exchange_weak` is allowed to fail spuriously even when the comparison succeeds,
            /// which allows the compiler to generate better assembly code when the compare and swap
            /// is used in a loop.
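            ///
            /// As a minimal sketch of the mapping above (the variable names are
            /// illustrative only), a `SeqCst` `compare_and_swap` call can be
            /// rewritten like this:
            ///
            /// ```
            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
            ///
            #[doc = concat!("let a = ", stringify!($atomic_type), "::new(1);")]
            /// // Before: let old = a.compare_and_swap(1, 2, Ordering::SeqCst);
            /// let old = a.compare_exchange(1, 2, Ordering::SeqCst, Ordering::SeqCst)
            ///     .unwrap_or_else(|x| x);
            /// assert_eq!(old, 1);
            /// ```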
            ///
            /// # Examples
            ///
            /// ```
            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
            ///
            #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
            ///
            /// assert_eq!(some_var.compare_and_swap(5, 10, Ordering::Relaxed), 5);
            /// assert_eq!(some_var.load(Ordering::Relaxed), 10);
            ///
            /// assert_eq!(some_var.compare_and_swap(6, 12, Ordering::Relaxed), 10);
            /// assert_eq!(some_var.load(Ordering::Relaxed), 10);
            /// ```
            #[inline]
            #[$stable]
            #[deprecated(
                since = "1.50.0",
                note = "Use `compare_exchange` or `compare_exchange_weak` instead"
            )]
            #[$cfg_cas]
            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
            pub fn compare_and_swap(&self,
                                    current: $int_type,
                                    new: $int_type,
                                    order: Ordering) -> $int_type {
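                // Map the single user-supplied ordering to the success/failure pair
                // described by the migration table above.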
                match self.compare_exchange(current,
                                            new,
                                            order,
                                            strongest_failure_ordering(order)) {
                    Ok(x) => x,
                    Err(x) => x,
                }
            }

            /// Stores a value into the atomic integer if the current value is the same as
            /// the `current` value.
            ///
            /// The return value is a result indicating whether the new value was written and
            /// containing the previous value. On success this value is guaranteed to be equal to
            /// `current`.
            ///
            /// `compare_exchange` takes two [`Ordering`] arguments to describe the memory
            /// ordering of this operation. `success` describes the required ordering for the
            /// read-modify-write operation that takes place if the comparison with `current` succeeds.
            /// `failure` describes the required ordering for the load operation that takes place when
            /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
            /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
            /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
            ///
            /// **Note**: This method is only available on platforms that support atomic operations on
            #[doc = concat!("[`", $s_int_type, "`].")]
            ///
            /// # Examples
            ///
            /// ```
            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
            ///
            #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
            ///
            /// assert_eq!(some_var.compare_exchange(5, 10,
            ///                                      Ordering::Acquire,
            ///                                      Ordering::Relaxed),
            ///            Ok(5));
            /// assert_eq!(some_var.load(Ordering::Relaxed), 10);
            ///
            /// assert_eq!(some_var.compare_exchange(6, 12,
            ///                                      Ordering::SeqCst,
            ///                                      Ordering::Acquire),
            ///            Err(10));
            /// assert_eq!(some_var.load(Ordering::Relaxed), 10);
            /// ```
            #[inline]
            #[$stable_cxchg]
            #[$cfg_cas]
            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
            pub fn compare_exchange(&self,
                                    current: $int_type,
                                    new: $int_type,
                                    success: Ordering,
                                    failure: Ordering) -> Result<$int_type, $int_type> {
                // SAFETY: data races are prevented by atomic intrinsics.
                unsafe { atomic_compare_exchange(self.v.get(), current, new, success, failure) }
            }

            /// Stores a value into the atomic integer if the current value is the same as
            /// the `current` value.
            ///
            #[doc = concat!("Unlike [`", stringify!($atomic_type), "::compare_exchange`],")]
            /// this function is allowed to spuriously fail even
            /// when the comparison succeeds, which can result in more efficient code on some
            /// platforms. The return value is a result indicating whether the new value was
            /// written and containing the previous value.
            ///
            /// `compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
            /// ordering of this operation. `success` describes the required ordering for the
            /// read-modify-write operation that takes place if the comparison with `current` succeeds.
            /// `failure` describes the required ordering for the load operation that takes place when
            /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
            /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
            /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
            ///
            /// **Note**: This method is only available on platforms that support atomic operations on
            #[doc = concat!("[`", $s_int_type, "`].")]
            ///
            /// # Examples
            ///
            /// ```
            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
            ///
            #[doc = concat!("let val = ", stringify!($atomic_type), "::new(4);")]
            ///
            /// let mut old = val.load(Ordering::Relaxed);
            /// loop {
            ///     let new = old * 2;
            ///     match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
            ///         Ok(_) => break,
            ///         Err(x) => old = x,
            ///     }
            /// }
            /// ```
            #[inline]
            #[$stable_cxchg]
            #[$cfg_cas]
            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
            pub fn compare_exchange_weak(&self,
                                         current: $int_type,
                                         new: $int_type,
                                         success: Ordering,
                                         failure: Ordering) -> Result<$int_type, $int_type> {
                // SAFETY: data races are prevented by atomic intrinsics.
                unsafe {
                    atomic_compare_exchange_weak(self.v.get(), current, new, success, failure)
                }
            }

            /// Adds to the current value, returning the previous value.
            ///
            /// This operation wraps around on overflow.
            ///
            /// `fetch_add` takes an [`Ordering`] argument which describes the memory ordering
            /// of this operation. All ordering modes are possible. Note that using
            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
            /// using [`Release`] makes the load part [`Relaxed`].
            ///
            /// **Note**: This method is only available on platforms that support atomic operations on
            #[doc = concat!("[`", $s_int_type, "`].")]
            ///
            /// # Examples
            ///
            /// ```
            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
            ///
            #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(0);")]
            /// assert_eq!(foo.fetch_add(10, Ordering::SeqCst), 0);
            /// assert_eq!(foo.load(Ordering::SeqCst), 10);
            /// ```
            #[inline]
            #[$stable]
            #[$cfg_cas]
            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
            pub fn fetch_add(&self, val: $int_type, order: Ordering) -> $int_type {
                // SAFETY: data races are prevented by atomic intrinsics.
                unsafe { atomic_add(self.v.get(), val, order) }
            }

            /// Subtracts from the current value, returning the previous value.
            ///
            /// This operation wraps around on overflow.
            ///
            /// `fetch_sub` takes an [`Ordering`] argument which describes the memory ordering
            /// of this operation. All ordering modes are possible. Note that using
            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
            /// using [`Release`] makes the load part [`Relaxed`].
            ///
            /// **Note**: This method is only available on platforms that support atomic operations on
            #[doc = concat!("[`", $s_int_type, "`].")]
            ///
            /// # Examples
            ///
            /// ```
            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
            ///
            #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(20);")]
            /// assert_eq!(foo.fetch_sub(10, Ordering::SeqCst), 20);
            /// assert_eq!(foo.load(Ordering::SeqCst), 10);
            /// ```
            #[inline]
            #[$stable]
            #[$cfg_cas]
            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
            pub fn fetch_sub(&self, val: $int_type, order: Ordering) -> $int_type {
                // SAFETY: data races are prevented by atomic intrinsics.
                unsafe { atomic_sub(self.v.get(), val, order) }
            }

            /// Bitwise "and" with the current value.
            ///
            /// Performs a bitwise "and" operation on the current value and the argument `val`, and
            /// sets the new value to the result.
            ///
            /// Returns the previous value.
            ///
            /// `fetch_and` takes an [`Ordering`] argument which describes the memory ordering
            /// of this operation. All ordering modes are possible. Note that using
            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
            /// using [`Release`] makes the load part [`Relaxed`].
            ///
            /// **Note**: This method is only available on platforms that support atomic operations on
            #[doc = concat!("[`", $s_int_type, "`].")]
            ///
            /// # Examples
            ///
            /// ```
            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
            ///
            #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(0b101101);")]
            /// assert_eq!(foo.fetch_and(0b110011, Ordering::SeqCst), 0b101101);
            /// assert_eq!(foo.load(Ordering::SeqCst), 0b100001);
            /// ```
            #[inline]
            #[$stable]
            #[$cfg_cas]
            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
            pub fn fetch_and(&self, val: $int_type, order: Ordering) -> $int_type {
                // SAFETY: data races are prevented by atomic intrinsics.
                unsafe { atomic_and(self.v.get(), val, order) }
            }

            /// Bitwise "nand" with the current value.
            ///
            /// Performs a bitwise "nand" operation on the current value and the argument `val`, and
            /// sets the new value to the result.
            ///
            /// Returns the previous value.
            ///
            /// `fetch_nand` takes an [`Ordering`] argument which describes the memory ordering
            /// of this operation. All ordering modes are possible. Note that using
            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
            /// using [`Release`] makes the load part [`Relaxed`].
            ///
            /// **Note**: This method is only available on platforms that support atomic operations on
            #[doc = concat!("[`", $s_int_type, "`].")]
            ///
            /// # Examples
            ///
            /// ```
            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
            ///
            #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(0x13);")]
            /// assert_eq!(foo.fetch_nand(0x31, Ordering::SeqCst), 0x13);
            /// assert_eq!(foo.load(Ordering::SeqCst), !(0x13 & 0x31));
            /// ```
            #[inline]
            #[$stable_nand]
            #[$cfg_cas]
            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
            pub fn fetch_nand(&self, val: $int_type, order: Ordering) -> $int_type {
                // SAFETY: data races are prevented by atomic intrinsics.
                unsafe { atomic_nand(self.v.get(), val, order) }
            }

            /// Bitwise "or" with the current value.
            ///
            /// Performs a bitwise "or" operation on the current value and the argument `val`, and
            /// sets the new value to the result.
            ///
            /// Returns the previous value.
            ///
            /// `fetch_or` takes an [`Ordering`] argument which describes the memory ordering
            /// of this operation. All ordering modes are possible. Note that using
            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
            /// using [`Release`] makes the load part [`Relaxed`].
            ///
            /// **Note**: This method is only available on platforms that support atomic operations on
            #[doc = concat!("[`", $s_int_type, "`].")]
            ///
            /// # Examples
            ///
            /// ```
            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
            ///
            #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(0b101101);")]
            /// assert_eq!(foo.fetch_or(0b110011, Ordering::SeqCst), 0b101101);
            /// assert_eq!(foo.load(Ordering::SeqCst), 0b111111);
            /// ```
            #[inline]
            #[$stable]
            #[$cfg_cas]
            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
            pub fn fetch_or(&self, val: $int_type, order: Ordering) -> $int_type {
                // SAFETY: data races are prevented by atomic intrinsics.
                unsafe { atomic_or(self.v.get(), val, order) }
            }

            /// Bitwise "xor" with the current value.
            ///
            /// Performs a bitwise "xor" operation on the current value and the argument `val`, and
            /// sets the new value to the result.
            ///
            /// Returns the previous value.
            ///
            /// `fetch_xor` takes an [`Ordering`] argument which describes the memory ordering
            /// of this operation. All ordering modes are possible. Note that using
            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
            /// using [`Release`] makes the load part [`Relaxed`].
            ///
            /// **Note**: This method is only available on platforms that support atomic operations on
            #[doc = concat!("[`", $s_int_type, "`].")]
            ///
            /// # Examples
            ///
            /// ```
            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
            ///
            #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(0b101101);")]
            /// assert_eq!(foo.fetch_xor(0b110011, Ordering::SeqCst), 0b101101);
            /// assert_eq!(foo.load(Ordering::SeqCst), 0b011110);
            /// ```
            #[inline]
            #[$stable]
            #[$cfg_cas]
            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
            pub fn fetch_xor(&self, val: $int_type, order: Ordering) -> $int_type {
                // SAFETY: data races are prevented by atomic intrinsics.
                unsafe { atomic_xor(self.v.get(), val, order) }
            }

            /// Fetches the value, and applies a function to it that returns an optional
            /// new value. Returns a `Result` of `Ok(previous_value)` if the function returned `Some(_)`, else
            /// `Err(previous_value)`.
            ///
            /// Note: This may call the function multiple times if the value has been changed from other threads in
            /// the meantime, as long as the function returns `Some(_)`, but the function will have been applied
            /// only once to the stored value.
            ///
            /// `fetch_update` takes two [`Ordering`] arguments to describe the memory ordering of this operation.
            /// The first describes the required ordering for when the operation finally succeeds while the second
            /// describes the required ordering for loads. These correspond to the success and failure orderings of
            #[doc = concat!("[`", stringify!($atomic_type), "::compare_exchange`]")]
            /// respectively.
            ///
            /// Using [`Acquire`] as success ordering makes the store part
            /// of this operation [`Relaxed`], and using [`Release`] makes the final successful load
            /// [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
            ///
            /// **Note**: This method is only available on platforms that support atomic operations on
            #[doc = concat!("[`", $s_int_type, "`].")]
            ///
            /// # Considerations
            ///
            /// This method is not magic; it is not provided by the hardware.
            /// It is implemented in terms of
            #[doc = concat!("[`", stringify!($atomic_type), "::compare_exchange_weak`],")]
            /// and suffers from the same drawbacks.
            /// In particular, this method will not circumvent the [ABA Problem].
            ///
            /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
            ///
            /// # Examples
            ///
            /// ```rust
            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
            ///
            #[doc = concat!("let x = ", stringify!($atomic_type), "::new(7);")]
            /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(7));
            /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(7));
            /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(8));
            /// assert_eq!(x.load(Ordering::SeqCst), 9);
            /// ```
            #[inline]
            #[stable(feature = "no_more_cas", since = "1.45.0")]
            #[$cfg_cas]
            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
            pub fn fetch_update<F>(&self,
                                   set_order: Ordering,
                                   fetch_order: Ordering,
                                   mut f: F) -> Result<$int_type, $int_type>
            where F: FnMut($int_type) -> Option<$int_type> {
                let mut prev = self.load(fetch_order);
                while let Some(next) = f(prev) {
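                    // `compare_exchange_weak` may fail spuriously; on any failure we
                    // receive the freshly observed value and apply `f` to it again,
                    // which is why the docs above note `f` can run multiple times.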
                    match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
                        x @ Ok(_) => return x,
                        Err(next_prev) => prev = next_prev,
                    }
                }
                Err(prev)
            }

            /// Maximum with the current value.
            ///
            /// Finds the maximum of the current value and the argument `val`, and
            /// sets the new value to the result.
            ///
            /// Returns the previous value.
            ///
            /// `fetch_max` takes an [`Ordering`] argument which describes the memory ordering
            /// of this operation. All ordering modes are possible. Note that using
            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
            /// using [`Release`] makes the load part [`Relaxed`].
            ///
            /// **Note**: This method is only available on platforms that support atomic operations on
            #[doc = concat!("[`", $s_int_type, "`].")]
            ///
            /// # Examples
            ///
            /// ```
            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
            ///
            #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(23);")]
            /// assert_eq!(foo.fetch_max(42, Ordering::SeqCst), 23);
            /// assert_eq!(foo.load(Ordering::SeqCst), 42);
            /// ```
            ///
            /// If you want to obtain the maximum value in one step, you can use the following:
            ///
            /// ```
            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
            ///
            #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(23);")]
            /// let bar = 42;
            /// let max_foo = foo.fetch_max(bar, Ordering::SeqCst).max(bar);
            /// assert_eq!(max_foo, 42);
            /// ```
            #[inline]
            #[stable(feature = "atomic_min_max", since = "1.45.0")]
            #[$cfg_cas]
            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
            pub fn fetch_max(&self, val: $int_type, order: Ordering) -> $int_type {
                // SAFETY: data races are prevented by atomic intrinsics.
                unsafe { $max_fn(self.v.get(), val, order) }
            }

            /// Minimum with the current value.
            ///
            /// Finds the minimum of the current value and the argument `val`, and
            /// sets the new value to the result.
            ///
            /// Returns the previous value.
            ///
            /// `fetch_min` takes an [`Ordering`] argument which describes the memory ordering
            /// of this operation. All ordering modes are possible. Note that using
            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
            /// using [`Release`] makes the load part [`Relaxed`].
            ///
            /// **Note**: This method is only available on platforms that support atomic operations on
            #[doc = concat!("[`", $s_int_type, "`].")]
            ///
            /// # Examples
            ///
            /// ```
            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
            ///
            #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(23);")]
            /// assert_eq!(foo.fetch_min(42, Ordering::Relaxed), 23);
            /// assert_eq!(foo.load(Ordering::Relaxed), 23);
            /// assert_eq!(foo.fetch_min(22, Ordering::Relaxed), 23);
            /// assert_eq!(foo.load(Ordering::Relaxed), 22);
            /// ```
            ///
            /// If you want to obtain the minimum value in one step, you can use the following:
            ///
            /// ```
            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
            ///
            #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(23);")]
            /// let bar = 12;
            /// let min_foo = foo.fetch_min(bar, Ordering::SeqCst).min(bar);
            /// assert_eq!(min_foo, 12);
            /// ```
            #[inline]
            #[stable(feature = "atomic_min_max", since = "1.45.0")]
            #[$cfg_cas]
            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
            pub fn fetch_min(&self, val: $int_type, order: Ordering) -> $int_type {
                // SAFETY: data races are prevented by atomic intrinsics.
                unsafe { $min_fn(self.v.get(), val, order) }
            }

            /// Returns a mutable pointer to the underlying integer.
            ///
            /// Doing non-atomic reads and writes on the resulting integer can be a data race.
            /// This method is mostly useful for FFI, where the function signature may use
            #[doc = concat!("`*mut ", stringify!($int_type), "` instead of `&", stringify!($atomic_type), "`.")]
            ///
            /// Returning an `*mut` pointer from a shared reference to this atomic is safe because the
            /// atomic types work with interior mutability. All modifications of an atomic change the value
            /// through a shared reference, and can do so safely as long as they use atomic operations. Any
            /// use of the returned raw pointer requires an `unsafe` block and still has to uphold the same
            /// restriction: operations on it must be atomic.
            ///
            /// # Examples
            ///
            /// ```ignore (extern-declaration)
            /// # fn main() {
            #[doc = concat!($extra_feature, "use std::sync::atomic::", stringify!($atomic_type), ";")]
            ///
            /// extern "C" {
            #[doc = concat!("    fn my_atomic_op(arg: *mut ", stringify!($int_type), ");")]
            /// }
            ///
            #[doc = concat!("let atomic = ", stringify!($atomic_type), "::new(1);")]
            ///
            /// // SAFETY: Safe as long as `my_atomic_op` is atomic.
            /// unsafe {
            ///     my_atomic_op(atomic.as_mut_ptr());
            /// }
            /// # }
            /// ```
            #[inline]
            #[unstable(feature = "atomic_mut_ptr",
                       reason = "recently added",
                       issue = "66893")]
            pub fn as_mut_ptr(&self) -> *mut $int_type {
                self.v.get()
            }
        }
    }
}

#[cfg(target_has_atomic_load_store = "8")]
atomic_int! {
    cfg(target_has_atomic = "8"),
    cfg(target_has_atomic_equal_alignment = "8"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
    unstable(feature = "integer_atomics", issue = "99069"),
    cfg_attr(not(test), rustc_diagnostic_item = "AtomicI8"),
    "i8",
    "",
    atomic_min, atomic_max,
    1,
    "AtomicI8::new(0)",
    i8 AtomicI8 ATOMIC_I8_INIT
}
#[cfg(target_has_atomic_load_store = "8")]
atomic_int! {
    cfg(target_has_atomic = "8"),
    cfg(target_has_atomic_equal_alignment = "8"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
    unstable(feature = "integer_atomics", issue = "99069"),
    cfg_attr(not(test), rustc_diagnostic_item = "AtomicU8"),
    "u8",
    "",
    atomic_umin, atomic_umax,
    1,
    "AtomicU8::new(0)",
    u8 AtomicU8 ATOMIC_U8_INIT
}
#[cfg(target_has_atomic_load_store = "16")]
atomic_int! {
    cfg(target_has_atomic = "16"),
    cfg(target_has_atomic_equal_alignment = "16"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
    unstable(feature = "integer_atomics", issue = "99069"),
    cfg_attr(not(test), rustc_diagnostic_item = "AtomicI16"),
    "i16",
    "",
    atomic_min, atomic_max,
    2,
    "AtomicI16::new(0)",
    i16 AtomicI16 ATOMIC_I16_INIT
}
#[cfg(target_has_atomic_load_store = "16")]
atomic_int! {
    cfg(target_has_atomic = "16"),
    cfg(target_has_atomic_equal_alignment = "16"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
    unstable(feature = "integer_atomics", issue = "99069"),
    cfg_attr(not(test), rustc_diagnostic_item = "AtomicU16"),
    "u16",
    "",
    atomic_umin, atomic_umax,
    2,
    "AtomicU16::new(0)",
    u16 AtomicU16 ATOMIC_U16_INIT
}
#[cfg(target_has_atomic_load_store = "32")]
atomic_int! {
    cfg(target_has_atomic = "32"),
    cfg(target_has_atomic_equal_alignment = "32"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
    unstable(feature = "integer_atomics", issue = "99069"),
    cfg_attr(not(test), rustc_diagnostic_item = "AtomicI32"),
    "i32",
    "",
    atomic_min, atomic_max,
    4,
    "AtomicI32::new(0)",
    i32 AtomicI32 ATOMIC_I32_INIT
}
#[cfg(target_has_atomic_load_store = "32")]
atomic_int! {
    cfg(target_has_atomic = "32"),
    cfg(target_has_atomic_equal_alignment = "32"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
    unstable(feature = "integer_atomics", issue = "99069"),
    cfg_attr(not(test), rustc_diagnostic_item = "AtomicU32"),
    "u32",
    "",
    atomic_umin, atomic_umax,
    4,
    "AtomicU32::new(0)",
    u32 AtomicU32 ATOMIC_U32_INIT
}
#[cfg(target_has_atomic_load_store = "64")]
atomic_int! {
    cfg(target_has_atomic = "64"),
    cfg(target_has_atomic_equal_alignment = "64"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
    unstable(feature = "integer_atomics", issue = "99069"),
    cfg_attr(not(test), rustc_diagnostic_item = "AtomicI64"),
    "i64",
    "",
    atomic_min, atomic_max,
    8,
    "AtomicI64::new(0)",
    i64 AtomicI64 ATOMIC_I64_INIT
}

#[cfg(target_has_atomic_load_store = "64")]
atomic_int! {
    cfg(target_has_atomic = "64"),
    cfg(target_has_atomic_equal_alignment = "64"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
    unstable(feature = "integer_atomics", issue = "99069"),
    cfg_attr(not(test), rustc_diagnostic_item = "AtomicU64"),
    "u64",
    "",
    atomic_umin, atomic_umax,
    8,
    "AtomicU64::new(0)",
    u64 AtomicU64 ATOMIC_U64_INIT
}

#[cfg(target_has_atomic_load_store = "128")]
atomic_int! {
    cfg(target_has_atomic = "128"),
    cfg(target_has_atomic_equal_alignment = "128"),
    unstable(feature = "integer_atomics", issue = "99069"),
    unstable(feature = "integer_atomics", issue = "99069"),
    unstable(feature = "integer_atomics", issue = "99069"),
    unstable(feature = "integer_atomics", issue = "99069"),
    unstable(feature = "integer_atomics", issue = "99069"),
    unstable(feature = "integer_atomics", issue = "99069"),
    rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
    unstable(feature = "integer_atomics", issue = "99069"),
    cfg_attr(not(test), rustc_diagnostic_item = "AtomicI128"),
    "i128",
    "#![feature(integer_atomics)]\n\n",
    atomic_min, atomic_max,
    16,
    "AtomicI128::new(0)",
    i128 AtomicI128 ATOMIC_I128_INIT
}

#[cfg(target_has_atomic_load_store = "128")]
atomic_int! {
    cfg(target_has_atomic = "128"),
    cfg(target_has_atomic_equal_alignment = "128"),
    unstable(feature = "integer_atomics", issue = "99069"),
    unstable(feature = "integer_atomics", issue = "99069"),
    unstable(feature = "integer_atomics", issue = "99069"),
    unstable(feature = "integer_atomics", issue = "99069"),
    unstable(feature = "integer_atomics", issue = "99069"),
    unstable(feature = "integer_atomics", issue = "99069"),
    rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
    unstable(feature = "integer_atomics", issue = "99069"),
    cfg_attr(not(test), rustc_diagnostic_item = "AtomicU128"),
    "u128",
    "#![feature(integer_atomics)]\n\n",
    atomic_umin, atomic_umax,
    16,
    "AtomicU128::new(0)",
    u128 AtomicU128 ATOMIC_U128_INIT
}
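
// Illustrative sketch (not part of this file): the two 128-bit atomics above
// are nightly-only, gated behind `feature(integer_atomics)`, and only present
// when the target reports `target_has_atomic_load_store = "128"`. Assuming
// such a target and a nightly toolchain, usage mirrors the stable atomics:
//
//     #![feature(integer_atomics)]
//
//     use std::sync::atomic::{AtomicU128, Ordering};
//
//     static BIG: AtomicU128 = AtomicU128::new(0);
//
//     fn roundtrip() {
//         BIG.store(1u128 << 100, Ordering::Relaxed);
//         assert_eq!(BIG.load(Ordering::Relaxed), 1u128 << 100);
//     }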

macro_rules! atomic_int_ptr_sized {
    ( $($target_pointer_width:literal $align:literal)* ) => { $(
        #[cfg(target_has_atomic_load_store = "ptr")]
        #[cfg(target_pointer_width = $target_pointer_width)]
        atomic_int! {
            cfg(target_has_atomic = "ptr"),
            cfg(target_has_atomic_equal_alignment = "ptr"),
            stable(feature = "rust1", since = "1.0.0"),
            stable(feature = "extended_compare_and_swap", since = "1.10.0"),
            stable(feature = "atomic_debug", since = "1.3.0"),
            stable(feature = "atomic_access", since = "1.15.0"),
            stable(feature = "atomic_from", since = "1.23.0"),
            stable(feature = "atomic_nand", since = "1.27.0"),
            rustc_const_stable(feature = "const_ptr_sized_atomics", since = "1.24.0"),
            stable(feature = "rust1", since = "1.0.0"),
            cfg_attr(not(test), rustc_diagnostic_item = "AtomicIsize"),
            "isize",
            "",
            atomic_min, atomic_max,
            $align,
            "AtomicIsize::new(0)",
            isize AtomicIsize ATOMIC_ISIZE_INIT
        }
        #[cfg(target_has_atomic_load_store = "ptr")]
        #[cfg(target_pointer_width = $target_pointer_width)]
        atomic_int! {
            cfg(target_has_atomic = "ptr"),
            cfg(target_has_atomic_equal_alignment = "ptr"),
            stable(feature = "rust1", since = "1.0.0"),
            stable(feature = "extended_compare_and_swap", since = "1.10.0"),
            stable(feature = "atomic_debug", since = "1.3.0"),
            stable(feature = "atomic_access", since = "1.15.0"),
            stable(feature = "atomic_from", since = "1.23.0"),
            stable(feature = "atomic_nand", since = "1.27.0"),
            rustc_const_stable(feature = "const_ptr_sized_atomics", since = "1.24.0"),
            stable(feature = "rust1", since = "1.0.0"),
            cfg_attr(not(test), rustc_diagnostic_item = "AtomicUsize"),
            "usize",
            "",
            atomic_umin, atomic_umax,
            $align,
            "AtomicUsize::new(0)",
            usize AtomicUsize ATOMIC_USIZE_INIT
        }
    )* };
}

atomic_int_ptr_sized! {
    "16" 2
    "32" 4
    "64" 8
}
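
// Illustrative sketch (not part of this file): the invocation above generates
// `AtomicIsize` and `AtomicUsize` with an alignment of 2, 4, or 8 bytes to
// match the target's pointer width. From the user's side they behave like any
// other atomic integer:
//
//     use std::sync::atomic::{AtomicUsize, Ordering};
//
//     static NEXT_ID: AtomicUsize = AtomicUsize::new(0);
//
//     fn new_id() -> usize {
//         // Returns the previous value, so ids start at 0.
//         NEXT_ID.fetch_add(1, Ordering::Relaxed)
//     }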

#[inline]
#[cfg(target_has_atomic = "8")]
fn strongest_failure_ordering(order: Ordering) -> Ordering {
    match order {
        Release => Relaxed,
        Relaxed => Relaxed,
        SeqCst => SeqCst,
        Acquire => Acquire,
        AcqRel => Acquire,
    }
}
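
// Illustrative sketch (not part of this file): `strongest_failure_ordering`
// lets a single-ordering compare-and-swap style API forward to the
// two-ordering `compare_exchange` form. Conceptually, via the public API and
// a hypothetical `cas_like` wrapper:
//
//     use std::sync::atomic::{AtomicUsize, Ordering};
//
//     fn cas_like(a: &AtomicUsize, current: usize, new: usize, order: Ordering) -> usize {
//         // Same mapping as `strongest_failure_ordering`:
//         // Release/Relaxed -> Relaxed, Acquire/AcqRel -> Acquire, SeqCst -> SeqCst.
//         let failure = match order {
//             Ordering::Release | Ordering::Relaxed => Ordering::Relaxed,
//             Ordering::SeqCst => Ordering::SeqCst,
//             _ => Ordering::Acquire,
//         };
//         match a.compare_exchange(current, new, order, failure) {
//             Ok(v) | Err(v) => v,
//         }
//     }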

/// An atomic store: writes `val` to `dst` with the given ordering.
/// Only `Relaxed`, `Release`, and `SeqCst` are valid store orderings.
#[inline]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_store<T: Copy>(dst: *mut T, val: T, order: Ordering) {
    // SAFETY: the caller must uphold the safety contract for `atomic_store`.
    unsafe {
        match order {
            Relaxed => intrinsics::atomic_store_relaxed(dst, val),
            Release => intrinsics::atomic_store_release(dst, val),
            SeqCst => intrinsics::atomic_store_seqcst(dst, val),
            Acquire => panic!("there is no such thing as an acquire store"),
            AcqRel => panic!("there is no such thing as an acquire-release store"),
        }
    }
}
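
// Illustrative sketch (not part of this file): `atomic_store` and the helpers
// below dispatch an `Ordering` to a per-ordering intrinsic, panicking on
// orderings that are meaningless for the operation. Through the public API
// that means only `Relaxed`, `Release`, and `SeqCst` are usable for stores:
//
//     use std::sync::atomic::{AtomicBool, Ordering};
//
//     let flag = AtomicBool::new(false);
//     flag.store(true, Ordering::Release); // fine
//     // flag.store(true, Ordering::Acquire); // would panic: no acquire store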

/// An atomic load: reads the value at `dst` with the given ordering.
/// Only `Relaxed`, `Acquire`, and `SeqCst` are valid load orderings.
#[inline]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_load<T: Copy>(dst: *const T, order: Ordering) -> T {
    // SAFETY: the caller must uphold the safety contract for `atomic_load`.
    unsafe {
        match order {
            Relaxed => intrinsics::atomic_load_relaxed(dst),
            Acquire => intrinsics::atomic_load_acquire(dst),
            SeqCst => intrinsics::atomic_load_seqcst(dst),
            Release => panic!("there is no such thing as a release load"),
            AcqRel => panic!("there is no such thing as an acquire-release load"),
        }
    }
}

/// An atomic exchange: writes `val` to `dst` and returns the previous value.
#[inline]
#[cfg(target_has_atomic = "8")]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_swap<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
    // SAFETY: the caller must uphold the safety contract for `atomic_swap`.
    unsafe {
        match order {
            Relaxed => intrinsics::atomic_xchg_relaxed(dst, val),
            Acquire => intrinsics::atomic_xchg_acquire(dst, val),
            Release => intrinsics::atomic_xchg_release(dst, val),
            AcqRel => intrinsics::atomic_xchg_acqrel(dst, val),
            SeqCst => intrinsics::atomic_xchg_seqcst(dst, val),
        }
    }
}

/// Returns the previous value (like `__sync_fetch_and_add`).
#[inline]
#[cfg(target_has_atomic = "8")]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_add<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
    // SAFETY: the caller must uphold the safety contract for `atomic_add`.
    unsafe {
        match order {
            Relaxed => intrinsics::atomic_xadd_relaxed(dst, val),
            Acquire => intrinsics::atomic_xadd_acquire(dst, val),
            Release => intrinsics::atomic_xadd_release(dst, val),
            AcqRel => intrinsics::atomic_xadd_acqrel(dst, val),
            SeqCst => intrinsics::atomic_xadd_seqcst(dst, val),
        }
    }
}
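
// Illustrative sketch (not part of this file): `atomic_add` backs the public
// `fetch_add`, which returns the previous value and wraps on overflow:
//
//     use std::sync::atomic::{AtomicU8, Ordering};
//
//     let x = AtomicU8::new(255);
//     assert_eq!(x.fetch_add(1, Ordering::Relaxed), 255); // old value returned
//     assert_eq!(x.load(Ordering::Relaxed), 0);           // wrapped around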

/// Returns the previous value (like `__sync_fetch_and_sub`).
#[inline]
#[cfg(target_has_atomic = "8")]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_sub<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
    // SAFETY: the caller must uphold the safety contract for `atomic_sub`.
    unsafe {
        match order {
            Relaxed => intrinsics::atomic_xsub_relaxed(dst, val),
            Acquire => intrinsics::atomic_xsub_acquire(dst, val),
            Release => intrinsics::atomic_xsub_release(dst, val),
            AcqRel => intrinsics::atomic_xsub_acqrel(dst, val),
            SeqCst => intrinsics::atomic_xsub_seqcst(dst, val),
        }
    }
}

/// An atomic compare-and-exchange: if the value at `dst` equals `old`, writes
/// `new` and returns `Ok` with the previous value; otherwise returns `Err`
/// with the value observed.
#[inline]
#[cfg(target_has_atomic = "8")]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_compare_exchange<T: Copy>(
    dst: *mut T,
    old: T,
    new: T,
    success: Ordering,
    failure: Ordering,
) -> Result<T, T> {
    // SAFETY: the caller must uphold the safety contract for `atomic_compare_exchange`.
    let (val, ok) = unsafe {
        match (success, failure) {
            (Relaxed, Relaxed) => intrinsics::atomic_cxchg_relaxed_relaxed(dst, old, new),
            (Relaxed, Acquire) => intrinsics::atomic_cxchg_relaxed_acquire(dst, old, new),
            (Relaxed, SeqCst) => intrinsics::atomic_cxchg_relaxed_seqcst(dst, old, new),
            (Acquire, Relaxed) => intrinsics::atomic_cxchg_acquire_relaxed(dst, old, new),
            (Acquire, Acquire) => intrinsics::atomic_cxchg_acquire_acquire(dst, old, new),
            (Acquire, SeqCst) => intrinsics::atomic_cxchg_acquire_seqcst(dst, old, new),
            (Release, Relaxed) => intrinsics::atomic_cxchg_release_relaxed(dst, old, new),
            (Release, Acquire) => intrinsics::atomic_cxchg_release_acquire(dst, old, new),
            (Release, SeqCst) => intrinsics::atomic_cxchg_release_seqcst(dst, old, new),
            (AcqRel, Relaxed) => intrinsics::atomic_cxchg_acqrel_relaxed(dst, old, new),
            (AcqRel, Acquire) => intrinsics::atomic_cxchg_acqrel_acquire(dst, old, new),
            (AcqRel, SeqCst) => intrinsics::atomic_cxchg_acqrel_seqcst(dst, old, new),
            (SeqCst, Relaxed) => intrinsics::atomic_cxchg_seqcst_relaxed(dst, old, new),
            (SeqCst, Acquire) => intrinsics::atomic_cxchg_seqcst_acquire(dst, old, new),
            (SeqCst, SeqCst) => intrinsics::atomic_cxchg_seqcst_seqcst(dst, old, new),
            (_, AcqRel) => panic!("there is no such thing as an acquire-release failure ordering"),
            (_, Release) => panic!("there is no such thing as a release failure ordering"),
        }
    };
    if ok { Ok(val) } else { Err(val) }
}
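
// Illustrative sketch (not part of this file): the `(success, failure)` pair
// above surfaces in the public `compare_exchange`. The failure ordering
// applies to the load performed when the comparison fails and may not be
// `Release` or `AcqRel`:
//
//     use std::sync::atomic::{AtomicUsize, Ordering};
//
//     let x = AtomicUsize::new(5);
//     assert_eq!(x.compare_exchange(5, 10, Ordering::AcqRel, Ordering::Acquire), Ok(5));
//     assert_eq!(x.compare_exchange(5, 12, Ordering::AcqRel, Ordering::Acquire), Err(10));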

/// Like `atomic_compare_exchange`, but may fail spuriously even when the
/// comparison succeeds (it maps to the weak compare-exchange intrinsics).
#[inline]
#[cfg(target_has_atomic = "8")]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_compare_exchange_weak<T: Copy>(
    dst: *mut T,
    old: T,
    new: T,
    success: Ordering,
    failure: Ordering,
) -> Result<T, T> {
    // SAFETY: the caller must uphold the safety contract for `atomic_compare_exchange_weak`.
    let (val, ok) = unsafe {
        match (success, failure) {
            (Relaxed, Relaxed) => intrinsics::atomic_cxchgweak_relaxed_relaxed(dst, old, new),
            (Relaxed, Acquire) => intrinsics::atomic_cxchgweak_relaxed_acquire(dst, old, new),
            (Relaxed, SeqCst) => intrinsics::atomic_cxchgweak_relaxed_seqcst(dst, old, new),
            (Acquire, Relaxed) => intrinsics::atomic_cxchgweak_acquire_relaxed(dst, old, new),
            (Acquire, Acquire) => intrinsics::atomic_cxchgweak_acquire_acquire(dst, old, new),
            (Acquire, SeqCst) => intrinsics::atomic_cxchgweak_acquire_seqcst(dst, old, new),
            (Release, Relaxed) => intrinsics::atomic_cxchgweak_release_relaxed(dst, old, new),
            (Release, Acquire) => intrinsics::atomic_cxchgweak_release_acquire(dst, old, new),
            (Release, SeqCst) => intrinsics::atomic_cxchgweak_release_seqcst(dst, old, new),
            (AcqRel, Relaxed) => intrinsics::atomic_cxchgweak_acqrel_relaxed(dst, old, new),
            (AcqRel, Acquire) => intrinsics::atomic_cxchgweak_acqrel_acquire(dst, old, new),
            (AcqRel, SeqCst) => intrinsics::atomic_cxchgweak_acqrel_seqcst(dst, old, new),
            (SeqCst, Relaxed) => intrinsics::atomic_cxchgweak_seqcst_relaxed(dst, old, new),
            (SeqCst, Acquire) => intrinsics::atomic_cxchgweak_seqcst_acquire(dst, old, new),
            (SeqCst, SeqCst) => intrinsics::atomic_cxchgweak_seqcst_seqcst(dst, old, new),
            (_, AcqRel) => panic!("there is no such thing as an acquire-release failure ordering"),
            (_, Release) => panic!("there is no such thing as a release failure ordering"),
        }
    };
    if ok { Ok(val) } else { Err(val) }
}

/// Bitwise "and" with the current value; returns the previous value.
#[inline]
#[cfg(target_has_atomic = "8")]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_and<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
    // SAFETY: the caller must uphold the safety contract for `atomic_and`.
    unsafe {
        match order {
            Relaxed => intrinsics::atomic_and_relaxed(dst, val),
            Acquire => intrinsics::atomic_and_acquire(dst, val),
            Release => intrinsics::atomic_and_release(dst, val),
            AcqRel => intrinsics::atomic_and_acqrel(dst, val),
            SeqCst => intrinsics::atomic_and_seqcst(dst, val),
        }
    }
}

/// Bitwise "nand" with the current value; returns the previous value.
#[inline]
#[cfg(target_has_atomic = "8")]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_nand<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
    // SAFETY: the caller must uphold the safety contract for `atomic_nand`.
    unsafe {
        match order {
            Relaxed => intrinsics::atomic_nand_relaxed(dst, val),
            Acquire => intrinsics::atomic_nand_acquire(dst, val),
            Release => intrinsics::atomic_nand_release(dst, val),
            AcqRel => intrinsics::atomic_nand_acqrel(dst, val),
            SeqCst => intrinsics::atomic_nand_seqcst(dst, val),
        }
    }
}

/// Bitwise "or" with the current value; returns the previous value.
#[inline]
#[cfg(target_has_atomic = "8")]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_or<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
    // SAFETY: the caller must uphold the safety contract for `atomic_or`.
    unsafe {
        match order {
            SeqCst => intrinsics::atomic_or_seqcst(dst, val),
            Acquire => intrinsics::atomic_or_acquire(dst, val),
            Release => intrinsics::atomic_or_release(dst, val),
            AcqRel => intrinsics::atomic_or_acqrel(dst, val),
            Relaxed => intrinsics::atomic_or_relaxed(dst, val),
        }
    }
}

/// Bitwise "xor" with the current value; returns the previous value.
#[inline]
#[cfg(target_has_atomic = "8")]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_xor<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
    // SAFETY: the caller must uphold the safety contract for `atomic_xor`.
    unsafe {
        match order {
            SeqCst => intrinsics::atomic_xor_seqcst(dst, val),
            Acquire => intrinsics::atomic_xor_acquire(dst, val),
            Release => intrinsics::atomic_xor_release(dst, val),
            AcqRel => intrinsics::atomic_xor_acqrel(dst, val),
            Relaxed => intrinsics::atomic_xor_relaxed(dst, val),
        }
    }
}

/// Stores the maximum of the current and provided value, returning the
/// previous value (signed comparison).
#[inline]
#[cfg(target_has_atomic = "8")]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_max<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
    // SAFETY: the caller must uphold the safety contract for `atomic_max`.
    unsafe {
        match order {
            Relaxed => intrinsics::atomic_max_relaxed(dst, val),
            Acquire => intrinsics::atomic_max_acquire(dst, val),
            Release => intrinsics::atomic_max_release(dst, val),
            AcqRel => intrinsics::atomic_max_acqrel(dst, val),
            SeqCst => intrinsics::atomic_max_seqcst(dst, val),
        }
    }
}

/// Stores the minimum of the current and provided value, returning the
/// previous value (signed comparison).
#[inline]
#[cfg(target_has_atomic = "8")]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_min<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
    // SAFETY: the caller must uphold the safety contract for `atomic_min`.
    unsafe {
        match order {
            Relaxed => intrinsics::atomic_min_relaxed(dst, val),
            Acquire => intrinsics::atomic_min_acquire(dst, val),
            Release => intrinsics::atomic_min_release(dst, val),
            AcqRel => intrinsics::atomic_min_acqrel(dst, val),
            SeqCst => intrinsics::atomic_min_seqcst(dst, val),
        }
    }
}

/// Stores the maximum of the current and provided value, returning the
/// previous value (unsigned comparison).
#[inline]
#[cfg(target_has_atomic = "8")]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_umax<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
    // SAFETY: the caller must uphold the safety contract for `atomic_umax`.
    unsafe {
        match order {
            Relaxed => intrinsics::atomic_umax_relaxed(dst, val),
            Acquire => intrinsics::atomic_umax_acquire(dst, val),
            Release => intrinsics::atomic_umax_release(dst, val),
            AcqRel => intrinsics::atomic_umax_acqrel(dst, val),
            SeqCst => intrinsics::atomic_umax_seqcst(dst, val),
        }
    }
}

/// Stores the minimum of the current and provided value, returning the
/// previous value (unsigned comparison).
#[inline]
#[cfg(target_has_atomic = "8")]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_umin<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
    // SAFETY: the caller must uphold the safety contract for `atomic_umin`.
    unsafe {
        match order {
            Relaxed => intrinsics::atomic_umin_relaxed(dst, val),
            Acquire => intrinsics::atomic_umin_acquire(dst, val),
            Release => intrinsics::atomic_umin_release(dst, val),
            AcqRel => intrinsics::atomic_umin_acqrel(dst, val),
            SeqCst => intrinsics::atomic_umin_seqcst(dst, val),
        }
    }
}
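
// Illustrative sketch (not part of this file): the four helpers above back the
// public `fetch_max`/`fetch_min`; the signed (`atomic_max`/`atomic_min`) vs
// unsigned (`atomic_umax`/`atomic_umin`) variants match the signedness of the
// atomic integer type:
//
//     use std::sync::atomic::{AtomicI32, Ordering};
//
//     let x = AtomicI32::new(7);
//     assert_eq!(x.fetch_max(23, Ordering::Relaxed), 7); // old value returned
//     assert_eq!(x.load(Ordering::Relaxed), 23);         // now holds the max
//     assert_eq!(x.fetch_min(-5, Ordering::Relaxed), 23);
//     assert_eq!(x.load(Ordering::Relaxed), -5);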

/// An atomic fence.
///
/// Depending on the specified order, a fence prevents the compiler and CPU from
/// reordering certain types of memory operations around it.
/// This creates synchronizes-with relationships between it and atomic operations
/// or fences in other threads.
///
/// A fence 'A' which has (at least) [`Release`] ordering semantics synchronizes
/// with a fence 'B' with (at least) [`Acquire`] semantics, if and only if there
/// exist operations X and Y, both operating on some atomic object 'M', such
/// that A is sequenced before X, Y is sequenced before B, and Y observes
/// the change to M. This provides a happens-before dependence between A and B.
///
/// ```text
///     Thread 1                                          Thread 2
///
/// fence(Release);      A --------------
/// x.store(3, Relaxed); X ---------    |
///                                |    |
///                                |    |
///                                -------------> Y  if x.load(Relaxed) == 3 {
///                                     |-------> B      fence(Acquire);
///                                                      ...
///                                                  }
/// ```
///
/// Atomic operations with [`Release`] or [`Acquire`] semantics can also synchronize
/// with a fence.
///
/// A fence which has [`SeqCst`] ordering, in addition to having both [`Acquire`]
/// and [`Release`] semantics, participates in the global program order of the
/// other [`SeqCst`] operations and/or fences.
///
/// Accepts [`Acquire`], [`Release`], [`AcqRel`], and [`SeqCst`] orderings.
///
/// # Panics
///
/// Panics if `order` is [`Relaxed`].
///
/// # Examples
///
/// ```
/// use std::sync::atomic::AtomicBool;
/// use std::sync::atomic::fence;
/// use std::sync::atomic::Ordering;
///
/// // A mutual exclusion primitive based on a spinlock.
/// pub struct Mutex {
///     flag: AtomicBool,
/// }
///
/// impl Mutex {
///     pub fn new() -> Mutex {
///         Mutex {
///             flag: AtomicBool::new(false),
///         }
///     }
///
///     pub fn lock(&self) {
///         // Wait until the old value is `false`.
///         while self
///             .flag
///             .compare_exchange_weak(false, true, Ordering::Relaxed, Ordering::Relaxed)
///             .is_err()
///         {}
///         // This fence synchronizes-with the store in `unlock`.
///         fence(Ordering::Acquire);
///     }
///
///     pub fn unlock(&self) {
///         self.flag.store(false, Ordering::Release);
///     }
/// }
/// ```
#[inline]
#[stable(feature = "rust1", since = "1.0.0")]
#[rustc_diagnostic_item = "fence"]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
pub fn fence(order: Ordering) {
    // SAFETY: using an atomic fence is safe.
    unsafe {
        match order {
            Acquire => intrinsics::atomic_fence_acquire(),
            Release => intrinsics::atomic_fence_release(),
            AcqRel => intrinsics::atomic_fence_acqrel(),
            SeqCst => intrinsics::atomic_fence_seqcst(),
            Relaxed => panic!("there is no such thing as a relaxed fence"),
        }
    }
}

/// A compiler memory fence.
///
/// `compiler_fence` does not emit any machine code, but restricts the kinds
/// of memory re-ordering the compiler is allowed to do. Specifically, depending on
/// the given [`Ordering`] semantics, the compiler may be disallowed from moving reads
/// or writes from before or after the call to the other side of the call to
/// `compiler_fence`. Note that it does **not** prevent the *hardware*
/// from doing such re-ordering. This is not a problem in a single-threaded
/// execution context, but when other threads may modify memory at the same
/// time, stronger synchronization primitives such as [`fence`] are required.
///
/// The re-orderings prevented by the different ordering semantics are:
///
/// - with [`SeqCst`], no re-ordering of reads and writes across this point is allowed.
/// - with [`Release`], preceding reads and writes cannot be moved past subsequent writes.
/// - with [`Acquire`], subsequent reads and writes cannot be moved ahead of preceding reads.
/// - with [`AcqRel`], both of the above rules are enforced.
///
/// `compiler_fence` is generally only useful for preventing a thread from
/// racing *with itself*. That is, it matters when a given thread is executing one
/// piece of code, is then interrupted, and starts executing code elsewhere
/// (while still in the same thread, and conceptually still on the same
/// core). In traditional programs, this can only occur when a signal
/// handler is registered. In more low-level code, such situations can also
/// arise when handling interrupts, when implementing green threads with
/// pre-emption, etc. Curious readers are encouraged to read the Linux kernel's
/// discussion of [memory barriers].
///
/// # Panics
///
/// Panics if `order` is [`Relaxed`].
///
/// # Examples
///
/// Without `compiler_fence`, the `assert_eq!` in the following code
/// is *not* guaranteed to succeed, despite everything happening in a single thread.
/// To see why, remember that the compiler is free to swap the stores to
/// `IMPORTANT_VARIABLE` and `IS_READY` since they are both
/// `Ordering::Relaxed`. If it does, and the signal handler is invoked right
/// after `IS_READY` is updated, then the signal handler will see
/// `IS_READY=1`, but `IMPORTANT_VARIABLE=0`.
/// Using a `compiler_fence` remedies this situation.
///
/// ```
/// use std::sync::atomic::{AtomicBool, AtomicUsize};
/// use std::sync::atomic::Ordering;
/// use std::sync::atomic::compiler_fence;
///
/// static IMPORTANT_VARIABLE: AtomicUsize = AtomicUsize::new(0);
/// static IS_READY: AtomicBool = AtomicBool::new(false);
///
/// fn main() {
///     IMPORTANT_VARIABLE.store(42, Ordering::Relaxed);
///     // prevent earlier writes from being moved beyond this point
///     compiler_fence(Ordering::Release);
///     IS_READY.store(true, Ordering::Relaxed);
/// }
///
/// fn signal_handler() {
///     if IS_READY.load(Ordering::Relaxed) {
///         assert_eq!(IMPORTANT_VARIABLE.load(Ordering::Relaxed), 42);
///     }
/// }
/// ```
///
/// [memory barriers]: https://www.kernel.org/doc/Documentation/memory-barriers.txt
#[inline]
#[stable(feature = "compiler_fences", since = "1.21.0")]
#[rustc_diagnostic_item = "compiler_fence"]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
pub fn compiler_fence(order: Ordering) {
    // SAFETY: using an atomic fence is safe.
    unsafe {
        match order {
            Acquire => intrinsics::atomic_singlethreadfence_acquire(),
            Release => intrinsics::atomic_singlethreadfence_release(),
            AcqRel => intrinsics::atomic_singlethreadfence_acqrel(),
            SeqCst => intrinsics::atomic_singlethreadfence_seqcst(),
            Relaxed => panic!("there is no such thing as a relaxed compiler fence"),
        }
    }
}

#[cfg(target_has_atomic_load_store = "8")]
#[stable(feature = "atomic_debug", since = "1.3.0")]
impl fmt::Debug for AtomicBool {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        fmt::Debug::fmt(&self.load(Ordering::Relaxed), f)
    }
}

#[cfg(target_has_atomic_load_store = "ptr")]
#[stable(feature = "atomic_debug", since = "1.3.0")]
impl<T> fmt::Debug for AtomicPtr<T> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        fmt::Debug::fmt(&self.load(Ordering::Relaxed), f)
    }
}

#[cfg(target_has_atomic_load_store = "ptr")]
#[stable(feature = "atomic_pointer", since = "1.24.0")]
impl<T> fmt::Pointer for AtomicPtr<T> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        fmt::Pointer::fmt(&self.load(Ordering::SeqCst), f)
    }
}

/// Signals the processor that it is inside a busy-wait spin-loop ("spin lock").
///
/// This function is deprecated in favor of [`hint::spin_loop`].
///
/// [`hint::spin_loop`]: crate::hint::spin_loop
#[inline]
#[stable(feature = "spin_loop_hint", since = "1.24.0")]
#[deprecated(since = "1.51.0", note = "use hint::spin_loop instead")]
pub fn spin_loop_hint() {
    spin_loop()
}