Replace some `mem::forget`'s with `ManuallyDrop`
> but I would like to see a larger effort to replace all uses of `mem::forget`.
_Originally posted by `@saethlin` in https://github.com/rust-lang/rust/issues/127584#issuecomment-2226087767_
So,
r? `@saethlin`
Sorry, I had finished writing all of this before I got your response.
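For context, the pattern being replaced generally looks like this (illustrative `Guard` type, not code from this PR):
```rust
use std::mem::{self, ManuallyDrop};

// Illustrative type; not code from this PR.
struct Guard(Vec<u8>);

fn into_raw_via_forget(g: Guard) -> *const u8 {
    let ptr = g.0.as_ptr();
    // `mem::forget(g)` moves the value *after* `ptr` was derived from
    // it, which may be considered to invalidate `ptr` under aliasing
    // models such as Stacked Borrows.
    mem::forget(g);
    ptr
}

fn into_raw_via_manually_drop(g: Guard) -> *const u8 {
    // Wrapping in `ManuallyDrop` first means the pointer is derived
    // after the last move, so it stays valid; the destructor is
    // still suppressed.
    let g = ManuallyDrop::new(g);
    g.0.as_ptr()
}
```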
Safely enforce thread name requirements
The requirements for the thread name to be both valid UTF-8 and null-terminated are easily enforced by a wrapper type, so let's do that. The fact that this used to be just a bare `CString` has tripped me up before, because it was entirely safe to use a non-UTF-8 `CString`.
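As a rough sketch of the idea (names and details may differ from the actual implementation):
```rust
use std::ffi::{CStr, CString};

// Invariant: the bytes are valid UTF-8 *and* contain no interior NUL.
pub struct ThreadNameString {
    inner: CString,
}

impl ThreadNameString {
    pub fn new(name: String) -> Self {
        Self {
            inner: CString::new(name)
                .expect("thread name may not contain interior null bytes"),
        }
    }

    pub fn as_cstr(&self) -> &CStr {
        &self.inner
    }

    pub fn as_str(&self) -> &str {
        // SAFETY: `self.inner` was constructed from a `String`,
        // so it is guaranteed to be valid UTF-8.
        unsafe { std::str::from_utf8_unchecked(self.inner.to_bytes()) }
    }
}
```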
Use ThreadId instead of TLS-address in `ReentrantLock`
Fixes #123458
`ReentrantLock` currently uses the address of a thread-local variable as an ID that's unique across all currently running threads. This can lead to unintuitive behavior, as in #123458, if TLS blocks get reused. This PR changes `ReentrantLock` to instead use the `ThreadId` provided by `std` as the unique ID. `ThreadId` guarantees uniqueness across the lifetime of the whole process, so we don't need to worry about reusing the IDs of terminated threads. The main appeal of this PR is thus the possibility of changing the `ReentrantLock` API to guarantee that if a thread leaks a lock guard, no other thread may ever acquire that lock again.
This does entail some complications:
- previously, the only way to retrieve the current thread ID was `thread::current().id()`, which creates a temporary `Arc` and isn't available in TLS destructors. As part of this PR, the thread ID instead gets cached in its own thread local, as suggested [here](https://github.com/rust-lang/rust/issues/123458#issuecomment-2038207704).
- `ThreadId` is always 64-bit, whereas the current implementation uses a usize-sized ID. Since this ID needs to be updated atomically, we can't simply use a single atomic variable on 32-bit platforms. Instead, we fall back to using a (sound) seqlock on 32-bit platforms, which works because only one thread at a time can write to the ID (a sketch follows after this list). This seqlock is technically susceptible to the ABA problem, but the attack vector required to create actual unsoundness has to be very specific:
- You would need to be able to lock+unlock the lock exactly 2^31 times (or a multiple thereof) while a thread trying to lock it sleeps
- The sleeping thread would have to suspend after reading one half of the thread id but before reading the other half
- The torn result from combining the halves of the thread ID would have to exactly line up with the sleeping thread's ID
The risk of this occurring seems slim enough to be acceptable to me, but correct me if I'm wrong. It also means that the size of the lock increases by 8 bytes on 32-bit platforms, but that shouldn't be an issue either.
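For illustration, a single-writer seqlock along these lines could look as follows; names and orderings are illustrative, and the real implementation differs in detail:
```rust
use std::sync::atomic::{fence, AtomicU32, Ordering};

// Single-writer seqlock for a 64-bit ID on platforms without 64-bit
// atomics. `set` is only ever called by the thread that has just
// acquired the lock, so writes never race with each other.
struct Tid {
    seq: AtomicU32, // odd while a write is in progress
    low: AtomicU32,
    high: AtomicU32,
}

impl Tid {
    fn contains(&self, owner: u64) -> bool {
        let before = self.seq.load(Ordering::Acquire);
        let low = self.low.load(Ordering::Relaxed);
        let high = self.high.load(Ordering::Relaxed);
        // The acquire fence keeps the data loads above from being
        // reordered after the re-check of the sequence number.
        fence(Ordering::Acquire);
        let after = self.seq.load(Ordering::Relaxed);
        // Valid only if no write was in progress (even) and none
        // completed (unchanged) while we read the two halves.
        before == after
            && before % 2 == 0
            && u64::from(low) | (u64::from(high) << 32) == owner
    }

    fn set(&self, tid: u64) {
        let seq = self.seq.load(Ordering::Relaxed);
        self.seq.store(seq.wrapping_add(1), Ordering::Relaxed); // now odd
        // The release fence keeps the "write in progress" marker from
        // being reordered after the data stores.
        fence(Ordering::Release);
        self.low.store(tid as u32, Ordering::Relaxed);
        self.high.store((tid >> 32) as u32, Ordering::Relaxed);
        self.seq.store(seq.wrapping_add(2), Ordering::Release); // even again
    }
}
```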
Performance-wise, I did some crude testing of the only case where this could lead to real slowdowns: locking a `ReentrantLock` that's already locked by the current thread. On both aarch64 and x86-64, there is (expectedly) pretty much no performance hit. I didn't have any 32-bit platforms to test the seqlock performance on, so I did the next best thing and forced the 64-bit platforms to use the seqlock implementation. There, the performance degraded by ~1-2 ns/(lock+unlock) on x86-64 and ~6-8 ns/(lock+unlock) on aarch64, which is measurable but seems acceptable to me, seeing as 32-bit platforms should be a small minority anyway.
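For reference, a crude benchmark of that case might look like the following (hypothetical harness; `ReentrantLock` is unstable and requires a nightly feature gate):
```rust
#![feature(reentrant_lock)]
use std::sync::ReentrantLock;
use std::time::Instant;

fn main() {
    let lock = ReentrantLock::new(());
    // Hold the lock for the whole run so that every inner lock()
    // takes the reentrant path, the only path whose cost changed.
    let outer = lock.lock();
    let iters = 10_000_000u32;
    let start = Instant::now();
    for _ in 0..iters {
        drop(lock.lock());
    }
    println!("{:?} per lock+unlock", start.elapsed() / iters);
    drop(outer);
}
```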
cc `@joboet` `@RalfJung` `@CAD97`
This changes `ReentrantLock` to use `ThreadId` for the thread ownership check instead of the address of a thread local. Unlike TLS blocks, `ThreadId` is guaranteed to be unique across the lifetime of the process, so if any thread ever terminates while holding a `ReentrantLockGuard`, no other thread may ever acquire that lock again.
On platforms with 64-bit atomics, this is a very simple change. On other platforms, the approach used is slightly more involved, as explained in the module comment.
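As a sketch of the simple path (assuming the `ThreadId` is stored as a `u64`, with 0 meaning "unowned"; names are illustrative):
```rust
use std::sync::atomic::{AtomicU64, Ordering};

struct Owner(AtomicU64);

impl Owner {
    fn contains(&self, current: u64) -> bool {
        // A relaxed load suffices for the ownership check: a stale or
        // foreign value can never equal the current thread's own ID,
        // since only the current thread could have stored that ID.
        self.0.load(Ordering::Relaxed) == current
    }

    fn set(&self, current: u64) {
        self.0.store(current, Ordering::Relaxed);
    }
}
```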
This also adds a `CURRENT_ID` thread local in addition to the already existing `CURRENT`. This allows us to access the current `ThreadId` without the relatively heavy machinery used by `thread::current().id()`.
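A sketch of that caching approach, with illustrative names:
```rust
use std::cell::Cell;
use std::thread::{self, ThreadId};

thread_local! {
    static CURRENT_ID: Cell<Option<ThreadId>> = const { Cell::new(None) };
}

fn current_id() -> ThreadId {
    CURRENT_ID.with(|id| {
        if let Some(id) = id.get() {
            return id;
        }
        // Slow path: initialize the cache from the full thread handle.
        let new = thread::current().id();
        id.set(Some(new));
        new
    })
}
```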
Move thread parking to `sys::sync`
Part of #117276.
I'll leave the platform-specific API abstractions in `sys::pal`, as per the initial proposal. Whether we'll want to keep it that way remains to be seen.
r? `@ChrisDenton` (if you have time)
Reduce code size of `thread::set_current`
#123265 introduced a rather large binary size regression, because it added an `unwrap()` call on a `Result<(), Thread>`, which in turn pulled in the rather heavy `Debug` implementation of `Thread`. This PR fixes the regression by re-adding the `rtassert!` that was removed.
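The pattern looks roughly like this, with a stand-in for the std-internal `rtassert!` macro and a hypothetical fallible setter:
```rust
// Stand-in for the std-internal `rtassert!`, which aborts without
// pulling in any formatting machinery.
macro_rules! rtassert {
    ($e:expr) => {
        if !$e {
            std::process::abort();
        }
    };
}

struct Thread; // stand-in for `std::thread::Thread`

// Hypothetical fallible setter, mirroring the `Result<(), Thread>`
// return type mentioned above.
fn set_current(_thread: Thread) -> Result<(), Thread> {
    Ok(())
}

fn init(thread: Thread) {
    // `set_current(thread).unwrap()` would require `Thread: Debug`
    // and monomorphize its `Debug` impl into the binary, just to
    // format an error that should never occur.
    rtassert!(set_current(thread).is_ok());
}
```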
Add missing `unsafe` to some internal `std` functions
Adds `unsafe` to a few internal functions that have safety requirements but were previously not marked as `unsafe`. Specifically:
- `std::sys::pal::unix::thread::min_stack_size` needs to be `unsafe` as `__pthread_get_minstack` might dereference the passed pointer. All callers currently pass a valid initialised `libc::pthread_attr_t`.
- `std::thread::Thread::new` (and `new_inner`) need to be `unsafe`, as they require the passed thread name to be valid UTF-8; otherwise `Thread::name` will trigger undefined behaviour. I've taken the opportunity to split out the unnamed-thread case into a separate `new_unnamed` function to make the safety requirement clearer (a sketch of this split follows below). All callers meet the safety requirement now that #123505 has been merged.
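A rough sketch of that split (stand-in types, not the actual `std` internals):
```rust
use std::ffi::CString;

// Stand-in for the real `std::thread::Thread` internals.
struct Thread {
    name: Option<CString>,
}

impl Thread {
    /// # Safety
    /// `name` must be valid UTF-8, because `Thread::name` converts it
    /// back to `&str` without checking.
    unsafe fn new(name: CString) -> Thread {
        Thread { name: Some(name) }
    }

    /// The unnamed case has no invariant to uphold, so it can be safe.
    fn new_unnamed() -> Thread {
        Thread { name: None }
    }

    fn name(&self) -> Option<&str> {
        self.name.as_deref().map(|cstr| {
            // SAFETY: guaranteed valid UTF-8 by `Thread::new`'s contract.
            unsafe { std::str::from_utf8_unchecked(cstr.to_bytes()) }
        })
    }
}
```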
Document memory orderings of `thread::{park, unpark}`
Document `thread::park/unpark` as having acquire/release synchronization. Without that guarantee, even the example in the documentation can deadlock:
```rust
use std::sync::Arc;
use std::sync::atomic::{AtomicBool, Ordering};
use std::thread;

let flag = Arc::new(AtomicBool::new(false));
let flag2 = Arc::clone(&flag);
let t2 = thread::spawn(move || {
    while !flag2.load(Ordering::Acquire) {
        thread::park();
    }
});
flag.store(true, Ordering::Release);
t2.thread().unpark();
// Possible interleaving without the synchronization guarantee:
// t1: flag.store(true)
// t1: t2.unpark()
// t2: flag.load() == false
// t2 now parks, is immediately unblocked but never
// acquires the flag, and thus spins forever
```
Multiple calls to `unpark` should also maintain a release sequence to make sure operations released by previous `unpark`s are not lost:
```rust
use std::sync::Arc;
use std::sync::atomic::{AtomicBool, Ordering};
use std::thread;

let a = Arc::new(AtomicBool::new(false));
let b = Arc::new(AtomicBool::new(false));
let (a2, b2) = (Arc::clone(&a), Arc::clone(&b));
let t2 = thread::spawn(move || {
    while !a2.load(Ordering::Acquire) || !b2.load(Ordering::Acquire) {
        thread::park();
    }
});
let t2_handle = t2.thread().clone();
thread::spawn(move || {
    a.store(true, Ordering::Release);
    t2_handle.unpark();
});
b.store(true, Ordering::Release);
t2.thread().unpark();
// Possible interleaving (t1 is the main thread, t3 the second
// spawned thread):
// t3: a.store(true)
// t3: t2.unpark()
// t1: b.store(true)
// t1: t2.unpark()
// t2 now parks, is immediately unblocked but never
// acquires the store of `a`, only the store of `b` which
// was released by the most recent unpark, and thus spins forever
```
These are of course contrived examples, but the guarantees are reasonable to rely upon in real code.
Note that all implementations of park/unpark already comply with these rules; they were just undocumented.