Commit Graph

2968 Commits

Author SHA1 Message Date
Matthias Krüger
772e80a650 Rollup merge of #119917 - Zalathar:split-off, r=cuviper
Remove special-case handling of `vec.split_off(0)`

#76682 added special handling to `Vec::split_off` for the case where `at == 0`. Instead of copying the vector's contents into a freshly-allocated vector and returning it, the special-case code steals the old vector's allocation, and replaces it with a new (empty) buffer with the same capacity.

That eliminates the need to copy the existing elements, but comes at a surprising cost, as seen in #119913. The returned vector's capacity is no longer determined by the size of its contents (as would be expected for a freshly-allocated vector), and instead uses the full capacity of the old vector.

In cases where the capacity is large but the size is small, that results in a much larger capacity than would be expected from reading the documentation of `split_off`. This is especially bad when `split_off` is called in a loop (to recycle a buffer), and the returned vectors have a wide variety of lengths.

I believe it's better to remove the special-case code, and treat `at == 0` just like any other value:
- The current documentation states that `split_off` returns a “newly allocated vector”, which is not actually true in the current implementation when `at == 0`.
- If the value of `at` could be non-zero at runtime, then the caller has already agreed to the cost of a full memcpy of the taken elements in the general case. Avoiding that copy would be nice if it were close to free, but the different handling of capacity means that it is not.
- If the caller specifically wants to avoid copying in the case where `at == 0`, they can easily implement that behaviour themselves using `mem::replace` (see the sketch below).
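
For illustration, a minimal sketch of that caller-side workaround; the helper function is invented for the example:

```rust
use std::mem;

/// Steal the entire contents of `v`, leaving behind an empty vector with
/// the same capacity -- i.e. the behaviour the removed special case gave
/// to `v.split_off(0)`.
fn take_all<T>(v: &mut Vec<T>) -> Vec<T> {
    let cap = v.capacity();
    mem::replace(v, Vec::with_capacity(cap))
}
```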

Fixes #119913.
2024-01-26 14:43:30 +01:00
Matthias Krüger
a5b60c941e Rollup merge of #117678 - niklasf:stabilize-slice_group_by, r=dtolnay
Stabilize `slice_group_by`

Renamed "group by" to "chunk by" a per #80552.

Newly stable items:

* `core::slice::ChunkBy`
* `core::slice::ChunkByMut`
* `[T]::chunk_by`
* `[T]::chunk_by_mut`

Closes #80552.
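
For reference, a minimal usage sketch of the newly stable `chunk_by` (values invented for the example):

```rust
fn main() {
    let slice = [1, 1, 2, 2, 2, 3];

    // Groups consecutive elements for which the predicate returns true.
    let mut iter = slice.chunk_by(|a, b| a == b);
    assert_eq!(iter.next(), Some(&[1, 1][..]));
    assert_eq!(iter.next(), Some(&[2, 2, 2][..]));
    assert_eq!(iter.next(), Some(&[3][..]));
    assert_eq!(iter.next(), None);
}
```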
2024-01-26 14:43:29 +01:00
bjorn3
1f994c7695 Fix outdated comment on Box 2024-01-26 12:15:46 +00:00
Matthias Krüger
912877d009 Rollup merge of #119466 - Sky9x:str_from_raw_parts, r=dtolnay
Initial implementation of `str::from_raw_parts[_mut]`

ACP (accepted): rust-lang/libs-team#167
Tracking issue: #119206

Thanks to ``@Kixiron`` for previous work on this (#107207)

``@rustbot`` label +T-libs-api -T-libs
r? ``@thomcc``

Closes #107207.
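
For illustration, a minimal sketch of how the new function might be used on nightly; the exact feature-gate name is an assumption here:

```rust
#![feature(str_from_raw_parts)] // nightly; feature name assumed from the tracking issue
use std::str;

fn main() {
    let s = "hello world";
    let (ptr, len) = (s.as_ptr(), s.len());

    // SAFETY: `ptr` points to `len` bytes of valid UTF-8 that outlive the
    // reconstructed reference.
    let rebuilt: &str = unsafe { str::from_raw_parts(ptr, len) };
    assert_eq!(rebuilt, s);
}
```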
2024-01-26 06:36:37 +01:00
David Tolnay
f94a94227c Export core::str::from_raw_parts{,_mut} into alloc::str and std::str 2024-01-25 18:11:54 -08:00
Matthias Krüger
37f02320bc Rollup merge of #118208 - Amanieu:btree_cursor2, r=dtolnay
Rewrite the BTreeMap cursor API using gaps

Tracking issue: #107540

Currently, a `Cursor` points to a single element in the tree, and allows moving to the next or previous element while mutating the tree. However, this was found to be confusing and hard to use.

This PR completely refactors cursors to instead point to a gap between two elements in the tree. This eliminates the need for a "ghost" element that exists after the last element and before the first one. Additionally, `upper_bound` and `lower_bound` now have a much clearer meaning.

The ability to mutate keys is also factored out into a separate `CursorMutKey` type which is unsafe to create. This makes the API easier to use since it avoids duplicated versions of each method with and without key mutation.

API summary:

```rust
impl<K, V> BTreeMap<K, V> {
    fn lower_bound<Q>(&self, bound: Bound<&Q>) -> Cursor<'_, K, V>
    where
        K: Borrow<Q> + Ord,
        Q: Ord;
    fn lower_bound_mut<Q>(&mut self, bound: Bound<&Q>) -> CursorMut<'_, K, V>
    where
        K: Borrow<Q> + Ord,
        Q: Ord;
    fn upper_bound<Q>(&self, bound: Bound<&Q>) -> Cursor<'_, K, V>
    where
        K: Borrow<Q> + Ord,
        Q: Ord;
    fn upper_bound_mut<Q>(&mut self, bound: Bound<&Q>) -> CursorMut<'_, K, V>
    where
        K: Borrow<Q> + Ord,
        Q: Ord;
}

struct Cursor<'a, K: 'a, V: 'a>;

impl<'a, K, V> Cursor<'a, K, V> {
    fn next(&mut self) -> Option<(&'a K, &'a V)>;
    fn prev(&mut self) -> Option<(&'a K, &'a V)>;
    fn peek_next(&self) -> Option<(&'a K, &'a V)>;
    fn peek_prev(&self) -> Option<(&'a K, &'a V)>;
}

struct CursorMut<'a, K: 'a, V: 'a>;

impl<'a, K, V> CursorMut<'a, K, V> {
    fn next(&mut self) -> Option<(&K, &mut V)>;
    fn prev(&mut self) -> Option<(&K, &mut V)>;
    fn peek_next(&mut self) -> Option<(&K, &mut V)>;
    fn peek_prev(&mut self) -> Option<(&K, &mut V)>;

    unsafe fn insert_after_unchecked(&mut self, key: K, value: V);
    unsafe fn insert_before_unchecked(&mut self, key: K, value: V);
    fn insert_after(&mut self, key: K, value: V) -> Result<(), UnorderedKeyError>;
    fn insert_before(&mut self, key: K, value: V) -> Result<(), UnorderedKeyError>;
    fn remove_next(&mut self) -> Option<(K, V)>;
    fn remove_prev(&mut self) -> Option<(K, V)>;

    fn as_cursor(&self) -> Cursor<'_, K, V>;

    unsafe fn with_mutable_key(self) -> CursorMutKey<'a, K, V, A>;
}

struct CursorMutKey<'a, K: 'a, V: 'a>;

impl<'a, K, V> CursorMutKey<'a, K, V> {
    fn next(&mut self) -> Option<(&mut K, &mut V)>;
    fn prev(&mut self) -> Option<(&mut K, &mut V)>;
    fn peek_next(&mut self) -> Option<(&mut K, &mut V)>;
    fn peek_prev(&mut self) -> Option<(&mut K, &mut V)>;

    unsafe fn insert_after_unchecked(&mut self, key: K, value: V);
    unsafe fn insert_before_unchecked(&mut self, key: K, value: V);
    fn insert_after(&mut self, key: K, value: V) -> Result<(), UnorderedKeyError>;
    fn insert_before(&mut self, key: K, value: V) -> Result<(), UnorderedKeyError>;
    fn remove_next(&mut self) -> Option<(K, V)>;
    fn remove_prev(&mut self) -> Option<(K, V)>;

    fn as_cursor(&self) -> Cursor<'_, K, V>;

    unsafe fn with_mutable_key(self) -> CursorMutKey<'a, K, V, A>;
}

struct UnorderedKeyError;
```
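
For illustration, a minimal usage sketch of the gap-based API above (nightly-only; the `btree_cursors` feature gate and the example data are assumptions):

```rust
#![feature(btree_cursors)]
use std::collections::BTreeMap;
use std::ops::Bound;

fn main() {
    let mut map = BTreeMap::from([(1, "a"), (3, "c"), (5, "e")]);

    // A cursor points to the gap before the first key >= 2 ...
    let cursor = map.lower_bound(Bound::Included(&2));
    assert_eq!(cursor.peek_prev(), Some((&1, &"a")));
    assert_eq!(cursor.peek_next(), Some((&3, &"c")));

    // ... and a mutable cursor can insert into that gap, as long as the
    // new key keeps the map ordered.
    let mut cursor = map.lower_bound_mut(Bound::Included(&2));
    cursor.insert_before(2, "b").unwrap();
    assert_eq!(map.get(&2), Some(&"b"));
}
```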
2024-01-25 17:39:26 +01:00
bors
dfe53afaeb Auto merge of #119433 - taiki-e:rc-uninit-ref, r=Nilstrieb
rc,sync: Do not create references to uninitialized values

Closes #119241

r? `@RalfJung`
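
For context, a sketch of the general pattern the title describes -- initializing freshly allocated memory through raw-pointer projections rather than creating references to it first. The `Inner` type is a stand-in invented for the example, not the actual `Rc`/`Arc` internals:

```rust
use std::alloc::{alloc, dealloc, handle_alloc_error, Layout};
use std::ptr::{self, addr_of_mut};

struct Inner {
    strong: usize,
    value: u64,
}

/// Initialize the fields of a freshly allocated `Inner` through raw-pointer
/// projections; no `&mut Inner` to uninitialized memory is ever created.
unsafe fn init_inner() -> *mut Inner {
    let layout = Layout::new::<Inner>();
    let ptr = unsafe { alloc(layout) } as *mut Inner;
    if ptr.is_null() {
        handle_alloc_error(layout);
    }
    unsafe {
        ptr::write(addr_of_mut!((*ptr).strong), 1);
        ptr::write(addr_of_mut!((*ptr).value), 0);
    }
    ptr
}

fn main() {
    let p = unsafe { init_inner() };
    // The value is fully initialized now, so reading through it is fine.
    unsafe {
        assert_eq!((*p).strong, 1);
        assert_eq!((*p).value, 0);
        dealloc(p as *mut u8, Layout::new::<Inner>());
    }
}
```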
2024-01-23 16:43:45 +00:00
Frank Steffahn
ab938b9acb Improve documentation for [A]Rc::into_inner
General improvements, and also aims to better encourage the reader
to actually check out Arc::try_unwrap.
2024-01-23 11:38:15 +01:00
bors
e35a56d96f Auto merge of #119892 - joboet:libs_use_assert_unchecked, r=Nilstrieb,cuviper
Use `assert_unchecked` instead of `assume` intrinsic in the standard library

Now that a public wrapper for the `assume` intrinsic exists, we can use it in the standard library.

CC #119131
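
For illustration, a sketch of the kind of rewrite this enables; the example uses the since-stabilized `std::hint::assert_unchecked` and a made-up helper function:

```rust
use std::hint;

/// Returns the element at `idx`, telling the optimizer that the bounds
/// check in `slice[idx]` can never fail and may be elided.
///
/// SAFETY: the caller must guarantee `idx < slice.len()`.
unsafe fn get_assuming_in_bounds(slice: &[u8], idx: usize) -> u8 {
    // Formerly written as `intrinsics::assume(idx < slice.len())`;
    // the public wrapper carries the same optimizer hint.
    unsafe { hint::assert_unchecked(idx < slice.len()) };
    slice[idx]
}

fn main() {
    let data = [10u8, 20, 30];
    // SAFETY: 1 < data.len().
    let x = unsafe { get_assuming_in_bounds(&data, 1) };
    assert_eq!(x, 20);
}
```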
2024-01-23 06:45:58 +00:00
Matthias Krüger
97bcf0da7c Rollup merge of #119801 - zachs18:zachs18-patch-1, r=steffahn,Nilstrieb
Fix deallocation with wrong allocator in (A)Rc::from_box_in

Deallocate the `Box` with the original allocator (via `&A`), not `Global`.

Fixes #119749

<details> <summary>Example code with error and Miri output</summary>

(Note that this UB is not observable on stable, because the only usable allocator on stable is `Global` anyway.)

Code ([playground link](https://play.rust-lang.org/?version=nightly&mode=debug&edition=2021&gist=96193c2c6a1912d7f669fbbe39174b09)):

```rs
#![feature(allocator_api)]
use std::alloc::System;

// uncomment one of these
use std::rc::Rc;
//use std::sync::Arc as Rc;

fn main() {
    let x: Box<[u32], System> = Box::new_in([1,2,3], System);
    let _: Rc<[u32], System> = Rc::from(x);
}
```

Miri output:

```rs
error: Undefined Behavior: deallocating alloc904, which is C heap memory, using Rust heap deallocation operation
   --> /playground/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/alloc/src/alloc.rs:117:14
    |
117 |     unsafe { __rust_dealloc(ptr, layout.size(), layout.align()) }
    |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ deallocating alloc904, which is C heap memory, using Rust heap deallocation operation
    |
    = help: this indicates a bug in the program: it performed an invalid operation, and caused Undefined Behavior
    = help: see https://doc.rust-lang.org/nightly/reference/behavior-considered-undefined.html for further information
    = note: BACKTRACE:
    = note: inside `std::alloc::dealloc` at /playground/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/alloc/src/alloc.rs:117:14: 117:64
    = note: inside `<std::alloc::Global as std::alloc::Allocator>::deallocate` at /playground/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/alloc/src/alloc.rs:254:22: 254:51
    = note: inside `<std::boxed::Box<std::mem::ManuallyDrop<[u32]>> as std::ops::Drop>::drop` at /playground/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/alloc/src/boxed.rs:1244:17: 1244:66
    = note: inside `std::ptr::drop_in_place::<std::boxed::Box<std::mem::ManuallyDrop<[u32]>>> - shim(Some(std::boxed::Box<std::mem::ManuallyDrop<[u32]>>))` at /playground/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/ptr/mod.rs:507:1: 507:56
    = note: inside `std::mem::drop::<std::boxed::Box<std::mem::ManuallyDrop<[u32]>>>` at /playground/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/mem/mod.rs:992:24: 992:25
    = note: inside `std::rc::Rc::<[u32], std::alloc::System>::from_box_in` at /playground/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/alloc/src/rc.rs:1928:13: 1928:22
    = note: inside `<std::rc::Rc<[u32], std::alloc::System> as std::convert::From<std::boxed::Box<[u32], std::alloc::System>>>::from` at /playground/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/alloc/src/rc.rs:2504:9: 2504:27
note: inside `main`
   --> src/main.rs:10:32
    |
10  |     let _: Rc<[u32], System> = Rc::from(x);
    |                                ^^^^^^^^^^^

note: some details are omitted, run with `MIRIFLAGS=-Zmiri-backtrace=full` for a verbose backtrace

error: aborting due to 1 previous error
```

</details>
2024-01-22 16:54:57 +01:00
Matthias Krüger
3eb7fe32a1 Rollup merge of #120180 - Zalathar:vec-split-off-alternatives, r=dtolnay
Document some alternatives to `Vec::split_off`

One of the discussion points that came up in #119917 is that some people use `Vec::split_off` in cases where they probably shouldn't, because the alternatives (like `mem::take`) are hard to discover.

This PR adds some suggestions to the documentation of `split_off` that should point people towards alternatives that might be more appropriate for their use-case.

I've deliberately tried to keep these changes as simple and uncontroversial as possible, so that they don't depend on how the team decides to handle the concerns raised in #119917. That's why I haven't touched the existing documentation for `split_off`, and haven't added links to `split_off` to the documentation of other methods.
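
A sketch of the kind of alternative those suggestions point to; the helper function is invented for the example:

```rust
use std::mem;

/// Take the whole buffer for processing and leave an empty `Vec` behind,
/// instead of reaching for `buf.split_off(0)`.
fn recycle(buf: &mut Vec<u8>) -> Vec<u8> {
    mem::take(buf)
}

fn main() {
    let mut buf = vec![1, 2, 3];
    let taken = recycle(&mut buf);
    assert_eq!(taken, [1, 2, 3]);
    assert!(buf.is_empty());
}
```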
2024-01-21 12:28:55 +01:00
Matthias Krüger
4a941d384d Rollup merge of #120145 - the8472:fix-inplace-dest-drop, r=cuviper
fix: Drop guard was deallocating with the incorrect size

InPlaceDstBufDrop holds onto the allocation before the shrinking happens, which means it must drop the destination elements but deallocate the source allocation.

Thanks `@cuviper` for spotting this.
2024-01-21 12:28:53 +01:00
Zalathar
6f1944d394 Document some alternatives to Vec::split_off 2024-01-21 11:56:55 +11:00
Guillaume Gomez
b917753d79 Rollup merge of #120116 - the8472:only-same-alignments, r=cuviper
Remove alignment-changing in-place collect

This removes the alignment-changing in-place collect optimization introduced in #110353.
Currently, stable users can't benefit from the optimization because `GlobalAlloc` doesn't support alignment-changing realloc, and neither do most POSIX allocators. So in practice it has a negative impact on performance.

Explanation from https://github.com/rust-lang/rust/issues/120091#issuecomment-1899071681:

> > You mention that in case of alignment mismatch -- when the new alignment is less than the old -- the implementation calls `mremap`.
>
> I was trying to note that this isn't really the case in practice, due to the semantics of Rust's allocator APIs. The only use of the allocator within the `in_place_collect` implementation itself is [a call to `Allocator::shrink()`](db7125f008/library/alloc/src/vec/in_place_collect.rs (L299-L303)), which per its documentation [allows decreasing the required alignment](https://doc.rust-lang.org/1.75.0/core/alloc/trait.Allocator.html). However, in stable Rust, the only available `Allocator` is [`Global`](https://doc.rust-lang.org/1.75.0/alloc/alloc/struct.Global.html), which delegates to the registered `GlobalAlloc`. Since `GlobalAlloc::realloc()` [cannot change the required alignment](https://doc.rust-lang.org/1.75.0/core/alloc/trait.GlobalAlloc.html#method.realloc), the implementation of [`<Global as Allocator>::shrink()`](db7125f008/library/alloc/src/alloc.rs (L280-L321)) must fall back to creating a brand-new allocation, `memcpy`ing the data into it, and freeing the old allocation, whenever the alignment doesn't remain exactly the same.
>
> Therefore, the underlying allocator, provided by libc or some other source, has no opportunity to internally `mremap()` the data when the alignment is changed, since it has no way of knowing that the allocation is the same.
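
A simplified sketch (not the actual library code) of the shrink behaviour described above: the global allocator's `realloc` keeps the original alignment, so an alignment-changing shrink has to allocate, copy, and free instead:

```rust
use std::alloc::{alloc, dealloc, realloc, Layout};
use std::ptr;

/// SAFETY: `ptr` must denote a live allocation with layout `old`, and
/// `new.size() <= old.size()`.
unsafe fn shrink_sketch(ptr: *mut u8, old: Layout, new: Layout) -> *mut u8 {
    if old.align() == new.align() {
        // Same alignment: the allocator may shrink in place (or mremap).
        unsafe { realloc(ptr, old, new.size()) }
    } else {
        // Different alignment: fresh allocation + memcpy + free, so the
        // underlying allocator never sees this as "the same" allocation.
        unsafe {
            let new_ptr = alloc(new);
            if !new_ptr.is_null() {
                ptr::copy_nonoverlapping(ptr, new_ptr, new.size());
                dealloc(ptr, old);
            }
            new_ptr
        }
    }
}

fn main() {
    // Shrink a 16-byte, 8-aligned allocation down to 4 bytes, 4-aligned.
    let old = Layout::from_size_align(16, 8).unwrap();
    let new = Layout::from_size_align(4, 4).unwrap();
    unsafe {
        let p = alloc(old);
        assert!(!p.is_null());
        let q = shrink_sketch(p, old, new);
        assert!(!q.is_null());
        dealloc(q, new);
    }
}
```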
2024-01-20 20:06:35 +01:00
Tomás Vallotton
180c68bef5 doc: fix some doctests after rebase 2024-01-20 10:26:25 -03:00
Tomás Vallotton
038c6e046c refactor: make waker mandatory.
This also removes
* impl From<&Context> for ContextBuilder
* Context::try_waker()

The From implementation is removed because, now that
wakers are always supported, there is less incentive
to override the current context. Before, the incentive
was to add Waker support to a reactor that didn't have
any.
2024-01-20 10:16:09 -03:00
tvallotton
c67a446e72 fix: Apply suggestions from code review
Co-authored-by: Mark Rousskov <mark.simulacrum@gmail.com>
2024-01-20 10:14:25 -03:00
Tomás Vallotton
093f80ba7e chore: fix ci failures 2024-01-20 10:14:25 -03:00
Tomás Vallotton
0cb7a0a90e chore: add tracking issue number to local waker feature 2024-01-20 10:14:25 -03:00
Tomás Vallotton
2012d4b703 fix: make LocalWake available in targets that don't support atomics by removing a #[cfg(target_has_atomic = "ptr")] 2024-01-20 10:14:25 -03:00
Tomás Vallotton
403718b19d feat: add try_waker and From<&mut Context> for ContextBuilder to allow the extension of contexts by futures 2024-01-20 10:14:21 -03:00
Tomás Vallotton
60a08196b6 feat: add LocalWaker type, ContextBuilder type, and LocalWake trait. 2024-01-20 10:13:08 -03:00
The 8472
5796b3c167 fix: Drop guard was deallocating with the incorrect size
InPlaceDstBufDrop holds onto the allocation before the shrinking happens,
which means it must drop the destination elements but deallocate the source
allocation.
2024-01-19 23:05:30 +01:00
Matthias Krüger
7219bd22ed Rollup merge of #120110 - invpt:patch-1, r=the8472
Update documentation for Vec::into_boxed_slice to be more clear about excess capacity

Currently, the documentation for Vec::into_boxed_slice says that "if the vector has excess capacity, its items will be moved into a newly-allocated buffer with exactly the right capacity." This is misleading, as copies do not necessarily occur, depending on whether the allocator supports in-place shrinking. I copied some of the wording from shrink_to_fit, though it could potentially still be worded better than this.
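
For illustration, what the excess-capacity wording is about (a minimal sketch):

```rust
fn main() {
    let mut v = Vec::with_capacity(10);
    v.extend([1, 2, 3]);
    assert!(v.capacity() >= 10);

    // Excess capacity is shed; the boxed slice holds exactly the 3 elements.
    // Whether that needs a copy or an in-place shrink is up to the allocator.
    let b: Box<[i32]> = v.into_boxed_slice();
    assert_eq!(b.len(), 3);
    assert_eq!(b[2], 3);
}
```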
2024-01-19 08:15:05 +01:00
invpt
35a9fc3472 Clarify docs for Vec::into_boxed_slice, Vec::shrink_to_fit 2024-01-18 18:01:36 -05:00
The 8472
85d1787962 remove alignment-changing in-place collect
Currently stable users can't benefit from this because `GlobalAlloc` doesn't support
alignment-changing realloc and neither do most POSIX allocators.
So in practice it always results in an extra memcpy.
2024-01-18 22:50:14 +01:00
The 8472
b28a95391b update internal ASCII art comment for vec specializations 2024-01-18 22:47:20 +01:00
Jake Goulding
fb7762b1c5 Remove no-longer-needed allow(dead_code) from the standard library
`repr(transparent)` now silences the lint.
2024-01-18 13:14:42 -05:00
zetanumbers
656a38830b Add A: 'static bound for Arc/Rc::pin_in 2024-01-18 14:28:39 +03:00
Robert Grosse
db7125f008 Fix typo in comments (in_place_collect) 2024-01-16 20:48:22 -08:00
joboet
fa9a911a57 libs: use assert_unchecked instead of intrinsic 2024-01-13 20:10:00 +01:00
Zalathar
a655558b38 Remove special-case handling of vec.split_off(0) 2024-01-13 17:21:54 +11:00
hi-rustin
784b50cece chore: remove unnecessary blank line
Signed-off-by: hi-rustin <rustin.liu@gmail.com>
2024-01-11 09:45:21 +08:00
zachs18
bfe04e08c0 Fix deallocation with wrong allocator in (A)Rc::from_box_in 2024-01-10 02:17:47 -06:00
Taiki Endo
4e973b0b63 rc,sync: Do not create references to uninitialized values 2024-01-08 00:36:31 +09:00
The 8472
93b34a5ffa mark vec::IntoIter pointers as !nonnull 2024-01-07 03:44:04 +01:00
The 8472
fd8ba7bc3c typo fix 2024-01-07 03:42:45 +01:00
Matthias Krüger
923578e6f9 Rollup merge of #118781 - RalfJung:core-panic-feature, r=the8472
merge core_panic feature into panic_internals

I don't know why those are two separate features, but it does not seem intentional. This merge is useful because with https://github.com/rust-lang/rust/pull/118123, panic_internals is recognized as an internal feature, but core_panic is not, even though it definitely should be internal.
2024-01-06 16:07:46 +01:00
bors
5113ed28ea Auto merge of #118297 - shepmaster:warn-dead-tuple-fields, r=WaffleLapkin
Merge `unused_tuple_struct_fields` into `dead_code`

This implicitly upgrades the lint from `allow` to `warn` and places it into the `unused` lint group.

[Discussion on Zulip](https://rust-lang.zulipchat.com/#narrow/stream/131828-t-compiler/topic/Moving.20.60unused_tuple_struct_fields.60.20from.20allow.20to.20warn)
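
A sketch of code affected by the merge -- a never-read tuple-struct field now warns by default under `dead_code` (the struct is invented for the example):

```rust
// After this change, compiling this warns that field `1` is never read;
// previously only the allow-by-default `unused_tuple_struct_fields` lint
// caught it.
struct Pair(u32, u32);

fn main() {
    let p = Pair(1, 2);
    println!("{}", p.0); // the second field is constructed but never read
}
```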
2024-01-05 04:51:55 +00:00
León Orell Valerian Liehr
34ef194859 Rollup merge of #119434 - taiki-e:rc-is-dangling, r=Mark-Simulacrum
rc: Take *const T in is_dangling

It is not important which one is used since `is_dangling` does not access memory, but `*const` removes the need for `*const T` -> `*mut T` casts in `from_raw_in`.
2024-01-03 16:08:25 +01:00
Jake Goulding
5772818dc8 Adjust library tests for unused_tuple_struct_fields -> dead_code 2024-01-02 15:34:37 -05:00
Matthias Krüger
c67ab2e0b4 Rollup merge of #119158 - JohnTheCoolingFan:arc-weak-clone-pretty, r=cuviper
Clean up alloc::sync::Weak Clone implementation

Since both return points (the tail and the early return) return the same expression, and the only difference is whether `inner` is available, the code that does the atomic operations and checks on `inner` was moved into the `if` body, leaving a single return at the tail. Original comments are preserved.
2023-12-30 11:42:02 +01:00
Taiki Endo
2c23c06c32 rc: Take *const T in is_dangling
It is not important which one is used since `is_dangling` does not access
memory, but `*const` removes the need for `*const T` -> `*mut T` casts
in `from_raw_in`.
2023-12-30 16:28:00 +09:00
Gurinder Singh
e3aca01343 Italicise "bytes" in the docs of some Vec methods
because on a cursory read it's easy to miss that the limit is
in terms of bytes, not the number of elements. The italics should
help with that.
2023-12-29 09:53:29 +05:30
Matthias Krüger
89c3236789 Rollup merge of #119205 - mumbleskates:vecdeque-comment-fix, r=Mark-Simulacrum
fix minor mistake in comments describing VecDeque resizing

This avoids confusion where one of the items in the deque seems to disappear in two of the three cases.
2023-12-24 01:08:09 +01:00
Pietro Albini
c00486c9bb update version placeholders 2023-12-22 11:01:42 +01:00
Kent Ross
f2e711e4c2 fix minor mistake in comments describing VecDeque resizing 2023-12-21 15:20:14 -08:00
JohnTheCoolingFan
0453d5fe6f Cleaned up alloc::sync::Weak Clone implementation
Since both return points (the tail and the early return) return the same
expression, and the only difference is whether inner is available, the
code that does the atomic operations and checks on inner was moved into
the if body, leaving a single return at the tail. Original comments
preserved.
2023-12-20 12:13:34 +03:00
bors
51c0db6a91 Auto merge of #106790 - the8472:rawvec-niche, r=scottmcm
add more niches to rawvec

Previously RawVec only had a single niche in its `NonNull` pointer. With this change it now has `isize::MAX` niches, since half the value-space of the capacity field is never needed: we can't have a capacity larger than `isize::MAX`.
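
For illustration, one way to observe the extra niches; the printed sizes depend on the compiler version and are not guaranteed:

```rust
use std::mem::size_of;

fn main() {
    // With extra niches in RawVec's capacity field, enums wrapped around a
    // Vec have spare bit patterns to use, so nesting Option does not have
    // to grow the size.
    println!("Vec<u8>:                 {}", size_of::<Vec<u8>>());
    println!("Option<Vec<u8>>:         {}", size_of::<Option<Vec<u8>>>());
    println!("Option<Option<Vec<u8>>>: {}", size_of::<Option<Option<Vec<u8>>>>());
}
```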
2023-12-20 02:19:10 +00:00
Daniel Huang
f2f9b1d82a Update c_str.rs 2023-12-14 19:08:36 -05:00