Add `#[must_use]` to some `into_raw*` functions.
cc #121287
r? ``@cuviper``
Adds `#[must_use = "losing the pointer will leak memory"]`[^1] to `Box::into_raw(_with_allocator)`, `Vec::into_raw_parts(_with_alloc)`, `String::into_raw_parts`[^2], and `rc::{Rc, Weak}::into_raw_with_allocator` (Rc's normal `into_raw` and all of `Arc`'s `into_raw*`s are already `must_use`).
Adds `#[must_use = "losing the raw <resource name> may leak resources"]` to `IntoRawFd::into_raw_fd`, `IntoRawSocket::into_raw_socket`, and `IntoRawHandle::into_raw_handle`.
[^1]: "*will* leak memory" may be too-strong wording (since `Box`/`Vec`/`String`/`rc::Weak` might not have a backing allocation), but I left it as-is for simplicity and consistency.
[^2]: `String::into_raw_parts`'s `must_use` message is changed from the previous (possibly misleading) "`self` will be dropped if the result is not used".
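For illustration, this is the kind of caller-side mistake the new attribute flags (a minimal sketch, not part of the PR itself):

```rust
fn leaky(b: Box<i32>) {
    // With the new attribute this line now warns: the returned pointer
    // is the only way to reclaim the allocation, so discarding it
    // leaks the memory.
    Box::into_raw(b);
}
```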
Added SHA512, SM3, SM4 target-features and `sha512_sm_x86` feature gate
This is an effort towards #126624. It adds support for these three target features and introduces the feature gate `sha512_sm_x86`, which gates these target features as well as the yet-to-be-implemented runtime detection and intrinsics in stdarch.
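A hypothetical sketch of what gated code could look like (assuming the lowercase feature names; the stdarch intrinsics don't exist yet, so the body is a placeholder):

```rust
#![feature(sha512_sm_x86)]

// Hypothetical use: the attribute is what the new target features
// enable; the actual SHA512/SM3/SM4 intrinsics are still
// yet-to-be-implemented in stdarch.
#[target_feature(enable = "sha512")]
unsafe fn sha512_block(state: &mut [u64; 8]) {
    // future stdarch intrinsics would be called here
    let _ = state;
}
```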
Rewrite binary search implementation
This PR builds on top of #128250, which should be merged first.
This restores the original binary search implementation from #45333 which has the nice property of having a loop count that only depends on the size of the slice. This, along with explicit conditional moves from #128250, means that the entire binary search loop can be perfectly predicted by the branch predictor.
Additionally, LLVM is able to unroll the loop when the slice length is known at compile-time. This results in a very compact code sequence of 3-4 instructions per binary search step and zero branches.
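For reference, a simplified sketch of the restored algorithm's shape (an illustration, not the exact library code, which uses the caller-supplied comparator and unchecked indexing):

```rust
use std::cmp::Ordering::{Equal, Greater, Less};

// Simplified sketch: the loop runs a number of times that depends only
// on `slice.len()`, and the select on `base` lowers to a conditional
// move, so there are no data-dependent branches inside the loop.
fn binary_search_sketch<T: Ord>(slice: &[T], x: &T) -> Result<usize, usize> {
    let mut size = slice.len();
    if size == 0 {
        return Err(0);
    }
    let mut base = 0;
    while size > 1 {
        let half = size / 2;
        let mid = base + half;
        // Keep `base` or advance to `mid` without branching.
        base = if slice[mid].cmp(x) == Greater { base } else { mid };
        size -= half;
    }
    match slice[base].cmp(x) {
        Equal => Ok(base),
        Less => Err(base + 1),
        Greater => Err(base),
    }
}
```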
Fixes #53823
Fixes #115271
raw_eq: using it on bytes with provenance is not UB (outside const-eval)
The current behavior of raw_eq violates provenance monotonicity. See https://github.com/rust-lang/rust/pull/124921 for an explanation of provenance monotonicity. It is violated in raw_eq because comparing bytes without provenance is well-defined, but adding provenance makes the operation UB.
So remove the no-provenance requirement from raw_eq. However, the requirement stays in place for compile-time invocations of raw_eq, which indeed cannot deal with provenance.
Cc `@rust-lang/opsem`
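A minimal sketch of a runtime use that this change makes well-defined (nightly-only; `raw_eq` is a perma-unstable intrinsic):

```rust
#![feature(core_intrinsics)]

fn main() {
    let x = 42;
    let a: *const i32 = &x;
    let b: *const i32 = &x;
    // The bytes of `a` and `b` carry provenance. Previously this was
    // documented as UB; after this change it is defined at runtime
    // (but still rejected in const-eval).
    let equal = unsafe { core::intrinsics::raw_eq(&a, &b) };
    assert!(equal);
}
```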
android: Remove libstd hacks for unsupported Android APIs
Our minimum supported API level is 21, so remove the hacks that supported older Android APIs.
try-job: arm-android
r? tgross35
Rollup of 7 pull requests
Successful merges:
- #123813 (Add `REDUNDANT_IMPORTS` lint for new redundant import detection)
- #126697 ([RFC] mbe: consider the `_` in 2024 an expression)
- #127159 (match lowering: Hide `Candidate` from outside the lowering algorithm)
- #128244 (Peel off explicit (or implicit) deref before suggesting clone on move error in borrowck, remove some hacks)
- #128431 (Add myself as VxWorks target maintainer for reference)
- #128438 (Add special-case for [T, 0] in dropck_outlives)
- #128457 (Fix docs for OnceLock::get_mut_or_init)
r? `@ghost`
`@rustbot` modify labels: rollup
Cleanup sys module to match house style
This moves a test file out of sys as it's just testing std types. Also cleans up some assorted bits including making the `use` statements match the house style.
std: implement the `once_wait` feature
Tracking issue: #127527
This additionally adds a `wait_force` method to `Once` that doesn't panic on poison.
I also took the opportunity to clean up the code of the queue-based implementation a bit.
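A minimal sketch of the new API (nightly, under the `once_wait` feature):

```rust
#![feature(once_wait)]
use std::sync::Once;
use std::thread;

static INIT: Once = Once::new();

fn main() {
    let waiter = thread::spawn(|| {
        // Blocks until `call_once` below has completed; `wait_force`
        // would additionally not panic if the `Once` were poisoned.
        INIT.wait();
        println!("initialization is visible here");
    });
    INIT.call_once(|| println!("initializing"));
    waiter.join().unwrap();
}
```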
Match LLVM ABI in `extern "C"` functions for `f128` on Windows
As MSVC doesn't support `_Float128`, x86-64 Windows doesn't have a defined ABI for `f128`. Currently, Rust passes and returns `f128` indirectly for `extern "C"` functions. This is inconsistent with LLVM, which passes and returns `f128` in XMM registers, meaning that e.g. the ABI of `extern "C"` compiler builtins won't match. This PR fixes the discrepancy by making the x86-64 Windows `extern "C"` ABI pass `f128` directly through to LLVM, so that Rust follows whatever LLVM does. This still leaves the difference between LLVM and GCC (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=115054), but this PR is still an improvement: Rust is now at least consistent with its primary codegen backend, and compiler builtins from `compiler-builtins` will now work.
I've also fixed the x86-64 Windows `has_reliable_f16` match arm in `std`'s `build.rs` to refer to the correct target, and added an equivalent match arm to `has_reliable_f128`, since the LLVM-GCC ABI difference affects both `f16` and `f128`.
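For illustration only (nightly, `f128` is unstable; `__addtf3` is the `compiler-builtins` symbol for 128-bit float addition), this is the kind of builtin call whose ABI now matches:

```rust
#![feature(f128)]

extern "C" {
    // With this change, on x86-64 Windows the `f128` argument/return
    // values are passed directly (in XMM registers, as LLVM expects)
    // instead of indirectly through memory.
    fn __addtf3(a: f128, b: f128) -> f128;
}

fn add(a: f128, b: f128) -> f128 {
    unsafe { __addtf3(a, b) }
}
```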
Tracking issue: #116909
try-job: x86_64-msvc
try-job: x86_64-mingw
Rollup of 4 pull requests
Successful merges:
- #127574 (elaborate unknowable goals)
- #128141 (Set branch protection function attributes)
- #128315 (Fix vita build of std and forbid unsafe in unsafe in the os/vita module)
- #128339 ([rustdoc] Make the buttons remain when code example is clicked)
r? `@ghost`
`@rustbot` modify labels: rollup
Add `select_unpredictable` to force LLVM to use CMOV
Since https://reviews.llvm.org/D118118, LLVM will no longer turn CMOVs into branches if it comes from a `select` marked with an `unpredictable` metadata attribute.
This PR introduces `core::intrinsics::select_unpredictable` which emits such a `select` and uses it in the implementation of `binary_search_by`.
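Roughly, the new intrinsic looks like this from the caller's side (nightly, `core_intrinsics`; a sketch of the binary-search use, not the exact library code):

```rust
#![feature(core_intrinsics)]
use core::intrinsics::select_unpredictable;

// One search step: the emitted LLVM `select` carries `!unpredictable`
// metadata, so the backend keeps it as a CMOV instead of re-forming a
// branch it would then have to predict.
fn advance(base: usize, mid: usize, go_right: bool) -> usize {
    select_unpredictable(go_right, mid, base)
}
```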
Optimize empty case in Vec::retain
While profiling some code that happens to call Vec::retain() in a tight loop, I noticed more runtime than expected in retain, even in a bench case where the vector was always empty. When I wrapped my call to retain in `if !myvec.is_empty()` I saw faster execution compared with doing retain on an empty vector.
On closer inspection, Vec::retain is doing set_len(0) on itself even when the vector is empty, and then resetting the length again in BackshiftOnDrop::drop.
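The workaround measured while profiling, as a small sketch (the PR moves an equivalent early-out into `Vec::retain` itself):

```rust
// Caller-side workaround observed to be faster before this PR: skip
// `retain` entirely when the vector is empty, avoiding the
// set_len(0)/BackshiftOnDrop round-trip.
fn remove_odds(v: &mut Vec<u32>) {
    if !v.is_empty() {
        v.retain(|&x| x % 2 == 0);
    }
}
```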
Unscientific flamegraph screengrab (not reproduced here) illustrating how we end up spending time in `set_len` and `drop`.
Clean and enable `rustdoc::unescaped_backticks` for `core/alloc/std/test/proc_macro`
I am not sure if the lint is supposed to be "ready enough" (since it is `allow` by default), but it does catch a couple of issues in `core` (`alloc`, `std`, `test`, and `proc_macro` are already clean), so I propose making it `warn` in all the crates rendered on the website.
Cc: `@GuillaumeGomez`
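For reference, the opt-in is the usual crate-level attribute (a sketch of the standard mechanism, not the exact diff):

```rust
// At each affected crate root (core, alloc, std, test, proc_macro):
#![warn(rustdoc::unescaped_backticks)]
```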
Update compiler_builtins to 0.1.114
The `weak-intrinsics` feature was removed from compiler_builtins in https://github.com/rust-lang/compiler-builtins/pull/598, so the `compiler-builtins-weak-intrinsics` feature has been dropped from alloc/std/sysroot.
In https://github.com/rust-lang/compiler-builtins/pull/593, some builtins for f16/f128 were added. These don't work for all compiler backends, so add a `compiler-builtins-no-f16-f128` feature and use it to disable those builtins for the cranelift and gcc backends.