For certain sorts of systems programming, it's deemed essential that
all allocation failures be explicitly handled where they occur. For
example, see Linus Torvalds's opinion in [1]. Merely not calling global
panic handlers, or always calling `try_reserve` first (for vectors), is
not deemed good enough, because the mere presence of the global OOM
handlers burdens static analysis.
One option for these projects to use Rust would be to skip `alloc`
entirely, rolling their own allocation abstractions. But this would, in
my opinion, be a real shame. `alloc` has a few `try_*` methods already,
and we could easily have more. Features like custom allocator support
also demonstrate an existing desire to support diverse use-cases with
the same abstractions.
A natural way to add such a feature flag would be a Cargo feature, but
there are currently uncertainties around how the standard library
crates' Cargo features may or may not be stable, so to avoid any risk
of stabilizing it by mistake we are going with a more low-level "raw
cfg" token, which cannot be interacted with via Cargo alone.
Note also that since there is no notion of "default cfg tokens" outside
of Cargo features, we have to invert the condition from
`global_oom_handling` to `not(no_global_oom_handling)`. This breaks the
monotonicity that would be important for a Cargo feature (i.e. turning
on more features should never break compatibility), but it doesn't
matter for raw cfg tokens, which are not intended to be "constraint
solved" by Cargo or anything else.
To support this use-case we add the raw cfg token
`no_global_oom_handling`, unset by default, and put the global OOM
handler infra and everything else that depends on it behind
`cfg(not(no_global_oom_handling))`. By default, nothing is changed, but
users concerned about global handling can set the token and be
confident that all OOM handling is local and explicit.
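For illustration, the gating pattern looks roughly like this (a minimal
sketch; `Buffer`, `grow`, and `try_grow` are hypothetical stand-ins for
the actual gated `alloc` items):

```rust
pub struct AllocError;

pub struct Buffer {
    len: usize,
}

impl Buffer {
    /// Infallible path: may end up in the global OOM handler, so it is
    /// compiled out entirely when building with
    /// `RUSTFLAGS="--cfg no_global_oom_handling"`.
    #[cfg(not(no_global_oom_handling))]
    pub fn grow(&mut self) {
        if self.try_grow().is_err() {
            panic!("allocation failed"); // stand-in for the global handler
        }
    }

    /// Fallible path: surfaces failure to the caller, so it stays
    /// available even with global OOM handling compiled out.
    pub fn try_grow(&mut self) -> Result<(), AllocError> {
        self.len += 1; // real code would (re)allocate here
        Ok(())
    }
}
```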
For this first iteration, non-flat collections are outright disabled.
`Vec` and `String` don't yet have `try_*` allocation methods, but are
kept anyway since they can be OOM-safely created "from parts" (see the
sketch below), and we hope to add those `try_*` methods in the future.
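For example, rebuilding a `Vec` "from parts" performs no allocation at
all, so it works regardless of the cfg (a sketch adapted from the
`Vec::from_raw_parts` docs):

```rust
use std::mem::ManuallyDrop;

fn main() {
    let mut v = ManuallyDrop::new(vec![1u8, 2, 3]);
    let (ptr, len, cap) = (v.as_mut_ptr(), v.len(), v.capacity());

    // SAFETY: ptr/len/cap describe a live allocation that we no longer
    // touch through `v`; no new memory is allocated here.
    let rebuilt = unsafe { Vec::from_raw_parts(ptr, len, cap) };
    assert_eq!(rebuilt, vec![1, 2, 3]);
}
```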
[1]: https://lore.kernel.org/lkml/CAHk-=wh_sNLoz84AUUzuqXEsYH35u=8HV3vK-jbRbJ_B-JjGrg@mail.gmail.com/
Add strong_count mutation methods to Rc
The corresponding methods were stabilized on `Arc` in #79285 (tracking: #71983). This patch implements and stabilizes the identical methods on `Rc` as well.
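A usage sketch (mirroring the existing `Arc` documentation examples):

```rust
use std::rc::Rc;

fn main() {
    let five = Rc::new(5);

    unsafe {
        let ptr = Rc::into_raw(five);
        // SAFETY: `ptr` came from `into_raw` and the strong count stays
        // at least 1 for as long as we manipulate it.
        Rc::increment_strong_count(ptr);

        let five = Rc::from_raw(ptr);
        assert_eq!(2, Rc::strong_count(&five));

        Rc::decrement_strong_count(ptr);
        assert_eq!(1, Rc::strong_count(&five));
    }
}
```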
The advantage of adding these docs is mostly in pointing out that these
functions all make new allocations and copy/clone/move the source into
them. The docs are on the functions, and not the `impl` block, to avoid
showing the "[+] show undocumented items" button.
CC #51430
Re-stabilize Weak::as_ptr and friends for unsized T
As per [T-lang consensus](https://hackmd.io/7r3_is6uTz-163fsOV8Vfg), this uses a branch to handle the dangling case. The discussed optimization of only doing the branch in the T: ?Sized case is left for a followup patch, as doing so is not trivial (as it requires specialization) and not _obviously_ better (as it requires using `wrapping_offset` rather than `offset` more).
<details><summary>Basically said optimization</summary>
Specialize on `T: Sized`:
```rust
fn as_ptr(&self) -> *const T {
    if [ T is Sized ] || !is_dangling(ptr) {
        (ptr as *mut T).set_ptr_value((ptr as *mut u8).wrapping_offset(data_offset))
    } else {
        ptr::null()
    }
}

fn from_raw(ptr: *const T) -> Self {
    if [ T is Sized ] || !ptr.is_null() {
        let ptr = (ptr as *mut RcBox<T>).set_ptr_value((ptr as *mut u8).wrapping_offset(-data_offset));
        Weak { ptr }
    } else {
        Weak::new()
    }
}
```
(but with more `set_ptr_value` to avoid `Sized` restrictions and maintain metadata.)
Written in this fashion, this is not a correctness-critical specialization (i.e. so long as `[ T is Sized ]` is false for unsized `T`, it can be `rand()` for sized `T` without breaking correctness), but it's still touchy, so I'd rather do it in another PR with separate review.
---
</details>
This effectively reverts #80422 and re-establishes #74160. T-libs [previously signed off](https://github.com/rust-lang/rust/pull/74160#issuecomment-660539373) on this stable API change in #74160.
As we did with `Box`, we can allocate an uninitialized `Rc` or `Arc`
beforehand, giving the optimizer a chance to skip the local value for
regular clones, or avoid any local altogether for `T: Copy`.
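The shape of the trick, sketched with the public `new_uninit` API
rather than the private internals:

```rust
use std::rc::Rc;

fn rc_from_value(value: u32) -> Rc<u32> {
    // Allocate first, then write straight into the allocation, so the
    // optimizer never has to materialize (and then copy) a local value.
    let mut uninit = Rc::new_uninit();
    Rc::get_mut(&mut uninit).unwrap().write(value);
    // SAFETY: the value was fully initialized just above.
    unsafe { uninit.assume_init() }
}

fn main() {
    assert_eq!(*rc_from_value(5), 5);
}
```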
As per T-lang consensus, this uses a branch to handle the dangling case.
The discussed optimization of only doing the branch in the T: ?Sized
case is left for a followup patch, as doing so is not trivial
(as it requires specialization for correctness, not just optimization).
Remove `Box::leak_with_alloc`
Add leak-test for box with allocator
Rename `AllocErr` to `AllocError` in leak-test
Add `Box::alloc` and adjust examples to use the new API
Allow Weak::as_ptr and friends for unsized T
Relaxes `impl<T> Weak<T>` to `impl<T: ?Sized> Weak<T>` for the methods `rc::Weak::as_ptr`, `into_raw`, and `from_raw`.
Follow-up to #73845, which did most of the impl work to make these functions work for `T: ?Sized`.
We still have to adjust the implementation of `Weak::from_raw` here, however, because I missed a use of `ptr.is_null()` previously. This check was necessary when `into_raw`/`from_raw` were first implemented, as `into_raw` returned `ptr::null()` for a dangling weak. However, we now just (wrapping) offset dangling weaks' pointers the same as non-dangling ones, so the null check is no longer necessary (or even hit). (I can submit just 17a928f as a separate PR if desired.)
As a nice side effect, moves the `fn is_dangling` definition closer to `Weak::new`, which creates the dangling weak.
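A quick sketch of what the relaxation enables for an unsized pointee:

```rust
use std::rc::{Rc, Weak};

fn main() {
    let strong: Rc<str> = Rc::from("hello");
    let weak: Weak<str> = Rc::downgrade(&strong);

    // `as_ptr` now works with `T: ?Sized`.
    let ptr: *const str = weak.as_ptr();
    // SAFETY: `strong` keeps the allocation alive while we read `ptr`.
    assert_eq!(unsafe { &*ptr }, "hello");

    // The raw round-trip works for unsized `T` too.
    let raw = weak.into_raw();
    let weak = unsafe { Weak::from_raw(raw) };
    assert_eq!(weak.upgrade().as_deref(), Some("hello"));
}
```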
This technically stabilizes that "something like `align_of_val_raw`" is possible to do. However, I believe the part of the functionality required by these methods here -- specifically, getting the alignment of a pointee from a pointer where it may be dangling iff the pointee is `Sized` -- is uncontroversial enough to stabilize these methods without a way to implement them on stable Rust.
r? `@RalfJung`, who reviewed #73845.
ATTN: This changes (relaxes) the (input) generic bounds on stable fn!
This avoids overlapping a reference covering the data field, which may
be changed concurrently. This fully fixes the UB manifested with
`new_cyclic`.
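A minimal sketch of the idea; the names here are hypothetical stand-ins
for the `Arc` internals, not the actual fix:

```rust
use std::ptr;

struct Inner<T> {
    strong: usize, // refcount field, mutated while weak refs are alive
    data: T,
}

/// Bump the refcount without ever creating a `&mut Inner<T>`, which
/// would (incorrectly) assert unique access to `data` as well.
unsafe fn bump_strong<T>(inner: *mut Inner<T>) {
    let strong = ptr::addr_of_mut!((*inner).strong);
    strong.write(strong.read() + 1);
}

fn main() {
    let mut inner = Inner { strong: 1, data: 5u32 };
    unsafe { bump_strong(&mut inner) };
    assert_eq!((inner.strong, inner.data), (2, 5));
}
```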
- Use intra-doc links for `std::io` in `std::fs`
- Use intra-doc links for File::read in unix/ext/fs.rs
- Remove explicit intra-doc links for `true` in `net/addr.rs`
- Use intra-doc links in alloc/src/sync.rs
- Use intra-doc links in src/ascii.rs
- Switch to intra-doc links in alloc/rc.rs
- Use intra-doc links in core/pin.rs
- Use intra-doc links in std/prelude
- Use shorter links in `std/fs.rs` (`io` is already in scope)
New zeroed slice
Add to #63291 the methods
```rust
impl<T> Box<[T]> { pub fn new_zeroed_slice(len: usize) -> Box<[MaybeUninit<T>]> {…} }
impl<T> Rc<[T]> { pub fn new_zeroed_slice(len: usize) -> Rc<[MaybeUninit<T>]> {…} }
impl<T> Arc<[T]> { pub fn new_zeroed_slice(len: usize) -> Arc<[MaybeUninit<T>]> {…} }
```
as suggested in https://github.com/rust-lang/rust/issues/63291#issuecomment-605511675.
Also optimize `{Rc, Arc}::new_zeroed` to use `alloc_zeroed`; otherwise they are no more efficient than using `new_uninit` and zeroing the memory manually (which was the original implementation).
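A usage sketch, assuming the signatures above:

```rust
use std::rc::Rc;

fn main() {
    let values = Rc::<[u32]>::new_zeroed_slice(3);
    // SAFETY: an all-zero bit pattern is a valid `u32`.
    let values: Rc<[u32]> = unsafe { values.assume_init() };
    assert_eq!(*values, [0, 0, 0]);
}
```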