Commit Graph

4487 Commits

Matthias Krüger
b3d491696b Rollup merge of #104634 - RalfJung:core-arch, r=Mark-Simulacrum
move core::arch into separate file

This works around https://github.com/rust-lang/rust/issues/104633 which otherwise leads to warnings in miri-test-libstd.
2022-11-20 23:50:29 +01:00
Matthias Krüger
ff72187b06 Rollup merge of #104632 - RalfJung:core-test-strict-provenance, r=thomcc
avoid non-strict-provenance casts in libcore tests

r? `@thomcc`
2022-11-20 23:50:28 +01:00
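A minimal sketch of the kind of rewrite this commit refers to, assuming the nightly `strict_provenance` APIs of the time (`map_addr`; the tagging scheme here is illustrative):

```rust
#![feature(strict_provenance)] // nightly gate at the time of this commit

fn main() {
    let x = 42u32;
    let p: *const u32 = &x;

    // Non-strict-provenance style: round-tripping through `usize` discards
    // the provenance that Miri and the `fuzzy_provenance_casts` lint track.
    let _tagged_int = (p as usize) | 1;

    // Strict-provenance style: manipulate only the address component,
    // keeping the original pointer's provenance intact.
    let tagged = p.map_addr(|a| a | 1);
    let untagged = tagged.map_addr(|a| a & !1);
    assert_eq!(unsafe { *untagged }, 42);
}
```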
Rune Tynan
8998711d9b Only one feature gate needed 2022-11-20 17:10:47 -05:00
Rune Tynan
07911879d2 Use ? instead of match 2022-11-20 15:01:22 -05:00
Rune Tynan
a5fecc6905 Fix issue number 2022-11-20 15:01:21 -05:00
Rune Tynan
7972b8aa37 Add derive_const feature 2022-11-20 15:01:21 -05:00
Rune Tynan
6f2dcac78b Update with derive_const 2022-11-20 15:01:21 -05:00
Rune Tynan
414e84a2f7 Add stability for alignment 2022-11-20 15:01:21 -05:00
Rune Tynan
9f4b4e46a3 constify remaining layout methods
Remove bad impl for Eq

Update Cargo.lock and fix last ValidAlign
2022-11-20 15:01:21 -05:00
Matthias Krüger
db5f005f35 Rollup merge of #104568 - RalfJung:realloc, r=Amanieu
clarify that realloc refreshes pointer provenance even when the allocation remains in-place

This [matches what C does](https://en.cppreference.com/w/c/memory/realloc):

> The original pointer ptr is invalidated and any access to it is undefined behavior (even if reallocation was in-place).

Cc `@rust-lang/wg-allocators`
2022-11-20 18:21:48 +01:00
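A small illustration of the rule being documented, using the stable `std::alloc` API (the in-place case cannot be forced from user code, so the comments carry the point):

```rust
use std::alloc::{alloc, dealloc, realloc, Layout};

fn main() {
    unsafe {
        let layout = Layout::array::<u8>(16).unwrap();
        let old = alloc(layout);
        assert!(!old.is_null());
        old.write(7);

        // Even if the allocator happens to grow the block in place,
        // `old` is invalidated: only `new` carries valid provenance now.
        let new = realloc(old, layout, 32);
        assert!(!new.is_null());
        assert_eq!(new.read(), 7); // OK: read through the fresh pointer
        // `old.read()` here would be UB, even if `new == old` numerically.

        dealloc(new, Layout::array::<u8>(32).unwrap());
    }
}
```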
Felix S. Klock II
98993af828 Add examples to chunks remainder methods; also fix some links to rchunks remainder methods. 2022-11-20 11:43:23 -05:00
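For context, the (stable) methods in question look like this in use:

```rust
fn main() {
    let slice = [1, 2, 3, 4, 5];

    // `chunks_exact` yields only complete chunks; leftover elements at the
    // end are exposed through `remainder`.
    let mut chunks = slice.chunks_exact(2);
    assert_eq!(chunks.next(), Some(&[1, 2][..]));
    assert_eq!(chunks.next(), Some(&[3, 4][..]));
    assert_eq!(chunks.remainder(), &[5]);

    // `rchunks_exact` walks from the back, so its remainder is at the front.
    assert_eq!(slice.rchunks_exact(2).remainder(), &[1]);
}
```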
Marvin Löbel
3fe37b8c6e Add get_many_mut methods to slice 2022-11-20 11:19:11 -05:00
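A hedged sketch of the new unstable API, assuming the signature as merged here (a `Result` of disjoint mutable references behind the `get_many_mut` gate):

```rust
#![feature(get_many_mut)] // nightly gate at the time of this commit

fn main() {
    let mut v = [1, 2, 3, 4];

    // Borrow two disjoint elements mutably at once; overlapping or
    // out-of-bounds indices are rejected at runtime.
    if let Ok([a, b]) = v.get_many_mut([0, 3]) {
        std::mem::swap(a, b);
    }
    assert_eq!(v, [4, 2, 3, 1]);
}
```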
Ralf Jung
428ab59fb7 enable fuzzy_provenance_casts in libcore+tests 2022-11-20 16:04:16 +01:00
Tethys Svensson
00bf999fcf Incorporate review feedback 2022-11-20 12:30:14 +01:00
Ralf Jung
e19bc6eb80 move core::arch into separate file 2022-11-20 10:28:14 +01:00
Ralf Jung
2bb28c174b avoid non-strict-provenance casts in libcore tests 2022-11-20 09:58:29 +01:00
Yuki Okushi
785237d392 Rollup merge of #104435 - scottmcm:iter-repeat-n, r=thomcc
`VecDeque::resize` should re-use the buffer in the passed-in element

Today it always clones the element for *every* appended slot, but one of those clones is avoidable.

This adds `iter::repeat_n` (https://github.com/rust-lang/rust/issues/104434) as the primitive needed to do this. If this PR is acceptable, I'll also use this in `Vec` rather than its custom `ExtendElement` type and infrastructure, which is harder to share between multiple different containers:

101e1822c3/library/alloc/src/vec/mod.rs (L2479-L2492)
2022-11-20 13:15:59 +09:00
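A quick look at the primitive being added (nightly `iter_repeat_n` gate); the clone-saving trick is that the final iteration hands out the original value:

```rust
#![feature(iter_repeat_n)] // nightly gate at the time of this commit

fn main() {
    // Clones the value for all but the last element; the last iteration
    // moves the original out, which is the clone `VecDeque::resize` saves.
    let v: Vec<String> = std::iter::repeat_n(String::from("x"), 3).collect();
    assert_eq!(v, ["x", "x", "x"]);
}
```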
Yuki Okushi
0858ca97da Rollup merge of #103901 - H4x5:fmt-arguments-as-str-tracking-issue, r=the8472
Add tracking issue for `const_arguments_as_str`

Tracking issue: #103900

The original PR didn't create a tracking issue.
2022-11-20 13:15:58 +09:00
Nilstrieb
6ee0dd97e3 Add unstable type_ascribe macro
This macro serves as a placeholder for future type ascription syntax to
make sure that the semantic implementation keeps working.
2022-11-19 22:16:42 +01:00
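A hedged sketch of the placeholder macro; the gate name and spelling are assumed here to be `type_ascription` and `type_ascribe!` as of this commit:

```rust
#![feature(type_ascription)] // nightly gate at the time of this commit

fn main() {
    // Ascribes a type to the expression without otherwise changing it.
    let n = type_ascribe!(1 + 1, i32);
    assert_eq!(n, 2);
}
```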
Lukas Markeffsky
c9c017dfb5 update provenance test
* fix allocation alignment for 16-bit platforms
* add an edge case where `stride % align != 0` on pointers with provenance
2022-11-19 16:58:02 +01:00
Lukas
e90d15b247 Update comment on pointer-to-usize transmute
Co-authored-by: Ralf Jung <post@ralfj.de>
2022-11-19 16:58:02 +01:00
Lukas Markeffsky
3d7e9c4b7f Revert "don't call align_offset during const eval, ever"
This reverts commit f3a577bfae376c0222e934911865ed14cddd1539.
2022-11-19 16:58:02 +01:00
Lukas Markeffsky
9e5d497b67 fix const align_offset implementation 2022-11-19 16:57:58 +01:00
Lukas Markeffsky
8a6053618f docs cleanup
* Fix doc examples for platforms with underaligned integer primitives.
* Mutable pointer doc examples use mutable pointers.
* Fill out tracking issue.
* Minor formatting changes.
2022-11-19 16:47:42 +01:00
Lukas Markeffsky
daccb8c11a always use align_offset in is_aligned_to + add assembly test 2022-11-19 16:47:42 +01:00
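A sketch of the relationship this commit standardizes on (a free-standing illustration, not the actual libcore implementation):

```rust
fn is_aligned_to(ptr: *const u8, align: usize) -> bool {
    assert!(align.is_power_of_two());
    // `align_offset` reports how many bytes forward the next aligned
    // address is; an offset of zero means the pointer is already aligned.
    ptr.align_offset(align) == 0
}

fn main() {
    let x = 0u64;
    let p = &x as *const u64 as *const u8;
    assert!(is_aligned_to(p, 8));
    assert!(!is_aligned_to(unsafe { p.add(1) }, 8));
}
```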
Lukas Markeffsky
4696e8906d Schrödinger's pointer
It's aligned *and* not aligned!
2022-11-19 16:47:42 +01:00
Lukas Markeffsky
df0bcfe644 address more review comments
* `cfg` only the body of `align_offset`
* put explicit panics back
* explain why `ptr.align_offset(align) == 0` is slow
2022-11-19 16:47:42 +01:00
Lukas Markeffsky
093c02ed46 document is_aligned{,_to} 2022-11-19 16:47:42 +01:00
Lukas Markeffsky
a906f6cb69 don't call align_offset during const eval, ever 2022-11-19 16:47:42 +01:00
Lukas Markeffsky
24e88066dc mark align_offset as #[must_use] 2022-11-19 16:47:42 +01:00
Lukas Markeffsky
2ef9a8ae0f add coretests for is_aligned 2022-11-19 16:47:42 +01:00
Lukas Markeffsky
6f6320a0a9 constify pointer::is_aligned{,_to} 2022-11-19 16:47:42 +01:00
Lukas Markeffsky
8cf6b16185 add coretests for const align_offset 2022-11-19 16:47:38 +01:00
Lukas Markeffsky
211743b2c8 make const align_offset useful 2022-11-19 16:36:08 +01:00
Lukas Markeffsky
f13c4f4d6a constify exact_div intrinsic 2022-11-19 16:36:08 +01:00
Dylan DPC
5caac92dc0 Rollup merge of #104528 - WaffleLapkin:lazy_lock_docfix, r=matklad
Properly link `{Once,Lazy}{Cell,Lock}` in docs

See https://github.com/rust-lang/rust/issues/74465#issuecomment-1317947443
2022-11-19 11:54:44 +05:30
Scott McMurray
71bb200225 Hide the items while waiting for the ACP 2022-11-18 19:46:18 -08:00
Manish Goregaokar
24ee599195 Rollup merge of #104338 - compiler-errors:pointer-sized, r=eholk
Enforce that `dyn*` coercions are actually pointer-sized

Implement a perma-unstable, rudimentary `PointerSized` trait to enforce that `dyn*` casts are `usize`-sized for now, at least to prevent ICEs and weird codegen issues from cropping up after monomorphization, since currently we enforce *nothing*.

This probably can/should be removed in favor of a more sophisticated trait for handling `dyn*` conversions when we decide on one, but I just want to get something up for discussion and experimentation for now.

r? `@eholk` cc `@tmandry` (though feel free to claim/reassign)

Fixes #102141
Fixes #102173
2022-11-18 17:48:18 -05:00
Manish Goregaokar
19efa2599c Rollup merge of #103701 - WaffleLapkin:__points-at-implementation__--this-can-be-simplified, r=scottmcm
Simplify some pointer method implementations

- Make `pointer::with_metadata_of` const (+simplify implementation) (cc #75091)
- Simplify implementation of various pointer methods

r? `@scottmcm`

----

`from_raw_parts::<T>(this, metadata(self))` was annoying me for a while and I've finally figured out how it should _actually_ be done.
2022-11-18 17:48:17 -05:00
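For background, the wide-pointer decomposition these methods are built on, using the nightly `ptr_metadata` APIs (illustrative, not the PR's exact code):

```rust
#![feature(ptr_metadata)] // nightly gate; APIs used for illustration

use std::ptr;

fn main() {
    let arr = [1u8, 2, 3];
    let slice: &[u8] = &arr;

    // A wide pointer is (data address, metadata); for slices the metadata
    // is the length. `from_raw_parts` reassembles the pair into a pointer.
    let data = slice.as_ptr();
    let meta = ptr::metadata(slice as *const [u8]);
    let rebuilt: *const [u8] = ptr::from_raw_parts(data as *const (), meta);
    assert_eq!(unsafe { &*rebuilt }, slice);
}
```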
Manish Goregaokar
e2301154e3 Rollup merge of #103456 - scottmcm:fix-unchecked-shifts, r=scottmcm
`unchecked_{shl|shr}` should use `u32` as the RHS

The other shift methods, such as https://doc.rust-lang.org/nightly/std/primitive.u64.html#method.checked_shr and https://doc.rust-lang.org/nightly/std/primitive.i16.html#method.wrapping_shl, use `u32` for the shift amount.  That's consistent with other things, like `count_ones`, which also always use `u32` for a bit count, regardless of the size of the type.

This PR changes `unchecked_shl` and `unchecked_shr` to also use `u32` for the shift amount (rather than `Self`).

cc #85122, the `unchecked_math` tracking issue
2022-11-18 17:48:17 -05:00
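After the change, the shift amount is `u32` regardless of `Self` (nightly `unchecked_math` gate):

```rust
#![feature(unchecked_math)] // nightly gate at the time of this commit

fn main() {
    let x: u64 = 1;
    // Caller must guarantee the shift is in range (here, 3 < 64);
    // the amount is `u32`, matching `checked_shl` and `wrapping_shl`.
    let shifted = unsafe { x.unchecked_shl(3u32) };
    assert_eq!(shifted, 8);
}
```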
Manish Goregaokar
6b09d60f82 Rollup merge of #103378 - nagisa:fix-infinite-offset, r=scottmcm
Fix mod_inv termination for the last iteration

On usize=u64 platforms, the 4th iteration would overflow the `mod_gate` back to 0. Similarly, for usize=u32 platforms, the 3rd iteration would overflow in much the same way.

I tested various approaches to resolving this, including approaches with `saturating_mul` and `widening_mul` to a double usize. It turns out LLVM likes `mul_with_overflow` the best. In fact, now that LLVM can see the iteration count is limited, it will happily unroll the loop into a nice linear sequence.

You will also notice that the code around the loop got simplified somewhat. Now that LLVM is handling the loop nicely, there is no longer any reason to manually unroll the first iteration out of the loop (though looking at the code today I'm not sure all that complexity was necessary in the first place).

Fixes #103361
2022-11-18 17:48:16 -05:00
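For context, a sketch of the Newton's-method (Hensel lifting) iteration behind `mod_inv` (names illustrative, not the libcore code); each step doubles the number of correct low bits, which is why a fixed iteration count, and hence a correct gate on the last iteration, suffices:

```rust
// Inverse of an odd number modulo 2^64 via x' = x * (2 - a * x):
// starting from 1 correct bit, the precision doubles each round,
// so six rounds reach 64 bits. All arithmetic deliberately wraps.
fn mod_inv_u64(a: u64) -> u64 {
    debug_assert!(a % 2 == 1, "only odd numbers are invertible mod 2^64");
    let mut x: u64 = 1;
    for _ in 0..6 {
        x = x.wrapping_mul(2u64.wrapping_sub(a.wrapping_mul(x)));
    }
    x
}

fn main() {
    let a: u64 = 0x1234_5679; // odd
    assert_eq!(a.wrapping_mul(mod_inv_u64(a)), 1);
}
```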
Manish Goregaokar
8aca6ccedd Rollup merge of #102977 - lukas-code:is-sorted-hrtb, r=m-ou-se
remove HRTB from `[T]::is_sorted_by{,_key}`

Changes the signature of `[T]::is_sorted_by{,_key}` to match `[T]::binary_search_by{,_key}` and make code like https://github.com/rust-lang/rust/issues/53485#issuecomment-885393452 compile.

Tracking issue: https://github.com/rust-lang/rust/issues/53485

~~Do we need an ACP for something like this?~~ Edit: Filed ACP here: https://github.com/rust-lang/libs-team/issues/121
2022-11-18 17:48:16 -05:00
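A hedged sketch of what now compiles, assuming the then-nightly `is_sorted` gate and the `Option<Ordering>`-returning signature of the time:

```rust
#![feature(is_sorted)] // nightly gate at the time of this commit

fn main() {
    let values = [1i32, 2, 3];
    // Without the higher-ranked bound, the closure's argument types unify
    // with the slice borrow, just as they do for `binary_search_by`.
    assert!(values.is_sorted_by(|a, b| a.partial_cmp(b)));
}
```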
Michael Goulet
da3c5397a6 Enforce that dyn* casts are actually pointer-sized 2022-11-18 18:23:48 +00:00
Ralf Jung
d26659d611 clarify that realloc refreshes pointer provenance even when the allocation remains in-place 2022-11-18 10:43:40 +01:00
Philipp Krones
34a14349b7 Readd the matches_macro diag item
This is now used by Clippy
2022-11-17 19:32:28 +01:00
bors
b6097f2e1b Auto merge of #104219 - bryangarza:async-track-caller-dup, r=eholk
Support `#[track_caller]` on async fns

Adds `#[track_caller]` to the generator that is created when we desugar the async fn.

Fixes #78840

Open questions:
- What is the performance impact of adding `#[track_caller]` to every `GenFuture`'s `poll(...)` function, even if it's unused (i.e., the parent span does not set `#[track_caller]`)? We might need to set it only conditionally, if the indirection causes overhead we don't want.
2022-11-17 13:47:03 +00:00
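For reference, the stable behavior on plain fns; what this PR adds is the same caller-reporting for async fns, by placing the attribute on the desugared generator's `poll` (the async form needs a nightly gate, hedged here):

```rust
use std::panic::Location;

#[track_caller]
fn where_am_i() -> &'static Location<'static> {
    // With `#[track_caller]`, this reports the *caller's* location.
    Location::caller()
}

fn main() {
    let loc = where_am_i();
    println!("called from {}:{}", loc.file(), loc.line());
}
```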
Maybe Waffle
57e726108a Properly link {Once,Lazy}{Cell,Lock} in docs 2022-11-17 11:05:56 +00:00
bors
9340e5c1b9 Auto merge of #103779 - the8472:simd-str-contains, r=thomcc
x86_64 SSE2 fast-path for str.contains(&str) and short needles

Based on Wojciech Muła's [SIMD-friendly algorithms for substring searching](http://0x80.pl/articles/simd-strfind.html#sse-avx2)

The two-way algorithm is Big-O efficient, but it needs to preprocess the needle
to find a "critical factorization" of it. This additional work is significant
for short needles. Additionally, it mostly advances `needle.len()` bytes at a time.

The SIMD-based approach used here, on the other hand, can advance based on its
vector width, which can exceed the needle length. There are pathological cases,
but because the fast path is limited to short needles, the worst-case blowup is also small.

Benchmarks taken on a Zen2, compiled with `-Ccodegen-units=1`:

```
OLD:
test str::bench_contains_16b_in_long                     ... bench:         504 ns/iter (+/- 14) = 5061 MB/s
test str::bench_contains_2b_repeated_long                ... bench:         948 ns/iter (+/- 175) = 2690 MB/s
test str::bench_contains_32b_in_long                     ... bench:         445 ns/iter (+/- 6) = 5732 MB/s
test str::bench_contains_bad_naive                       ... bench:         130 ns/iter (+/- 1) = 569 MB/s
test str::bench_contains_bad_simd                        ... bench:          84 ns/iter (+/- 8) = 880 MB/s
test str::bench_contains_equal                           ... bench:         142 ns/iter (+/- 7) = 394 MB/s
test str::bench_contains_short_long                      ... bench:         677 ns/iter (+/- 25) = 3768 MB/s
test str::bench_contains_short_short                     ... bench:          27 ns/iter (+/- 2) = 2074 MB/s

NEW:
test str::bench_contains_16b_in_long                     ... bench:          82 ns/iter (+/- 0) = 31109 MB/s
test str::bench_contains_2b_repeated_long                ... bench:          73 ns/iter (+/- 0) = 34945 MB/s
test str::bench_contains_32b_in_long                     ... bench:          71 ns/iter (+/- 1) = 35929 MB/s
test str::bench_contains_bad_naive                       ... bench:           7 ns/iter (+/- 0) = 10571 MB/s
test str::bench_contains_bad_simd                        ... bench:          97 ns/iter (+/- 41) = 762 MB/s
test str::bench_contains_equal                           ... bench:           4 ns/iter (+/- 0) = 14000 MB/s
test str::bench_contains_short_long                      ... bench:          73 ns/iter (+/- 0) = 34945 MB/s
test str::bench_contains_short_short                     ... bench:          12 ns/iter (+/- 0) = 4666 MB/s
```
2022-11-17 04:47:11 +00:00
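A simplified sketch of the Muła first-and-last-byte screening technique from the linked article (x86_64-only; illustrative, not the libcore implementation):

```rust
#[cfg(target_arch = "x86_64")]
fn contains_sse2(haystack: &[u8], needle: &[u8]) -> bool {
    use std::arch::x86_64::*;
    let (n, k) = (haystack.len(), needle.len());
    if k == 0 { return true; }
    if k > n { return false; }

    unsafe {
        // Broadcast the needle's first and last bytes across 16 lanes.
        let first = _mm_set1_epi8(needle[0] as i8);
        let last = _mm_set1_epi8(needle[k - 1] as i8);
        let mut i = 0;
        // Each round screens 16 candidate start positions at once.
        while i + k - 1 + 16 <= n {
            let block_first = _mm_loadu_si128(haystack.as_ptr().add(i).cast());
            let block_last = _mm_loadu_si128(haystack.as_ptr().add(i + k - 1).cast());
            let eq = _mm_and_si128(
                _mm_cmpeq_epi8(block_first, first),
                _mm_cmpeq_epi8(block_last, last),
            );
            // One mask bit per position where both probe bytes matched;
            // only those few positions get a full scalar comparison.
            let mut mask = _mm_movemask_epi8(eq) as u32;
            while mask != 0 {
                let pos = i + mask.trailing_zeros() as usize;
                if &haystack[pos..pos + k] == needle {
                    return true;
                }
                mask &= mask - 1; // clear the lowest set bit
            }
            i += 16;
        }
        // Scalar fallback for the tail the vector loop can't cover.
        haystack[i..].windows(k).any(|w| w == needle)
    }
}

#[cfg(target_arch = "x86_64")]
fn main() {
    assert!(contains_sse2(b"the quick brown fox", b"brown"));
    assert!(!contains_sse2(b"the quick brown fox", b"browse"));
}

#[cfg(not(target_arch = "x86_64"))]
fn main() {}
```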
bors
63c748ee23 Auto merge of #104481 - matthiaskrgr:rollup-hf8rev0, r=matthiaskrgr
Rollup of 10 pull requests

Successful merges:

 - #103484 (Add `rust` to `let_underscore_lock` example)
 - #103489 (Make `pointer::byte_offset_from` more generic)
 - #104193 (Shift no characters when using raw string literals)
 - #104348 (Respect visibility & stability of inherent associated types)
 - #104401 (avoid memory leak in mpsc test)
 - #104419 (Fix test/ui/issues/issue-30490.rs)
 - #104424 (rustdoc: remove no-op CSS `.popover { font-size: 1rem }`)
 - #104425 (rustdoc: remove no-op CSS `.main-header { justify-content }`)
 - #104450 (Fuchsia test suite script fix)
 - #104471 (Update PROBLEMATIC_CONSTS in style.rs)

Failed merges:

r? `@ghost`
`@rustbot` modify labels: rollup
2022-11-16 10:27:24 +00:00
Matthias Krüger
91963cc244 Rollup merge of #103489 - WaffleLapkin:byte_offset_from_you, r=scottmcm
Make `pointer::byte_offset_from` more generic

As suggested by https://github.com/rust-lang/rust/issues/96283#issuecomment-1288792955 (cc `@scottmcm`), make `pointer::byte_offset_from` work on pointers of different types. `byte_offset_from` really doesn't care about pointer types, so this is totally fine and, for example, allows patterns like this:
```rust
ptr::addr_of!(x.b).byte_offset_from(ptr::addr_of!(x))
```

The only possible downside is that this removes the `T == U` hint to inference, but I don't think this matters much. I don't think there are a lot of cases where you'd want to use `byte_offset_from` with a pointer of unbounded type (and in such cases you can just specify the type).

`@rustbot` label +T-libs-api
2022-11-16 08:36:10 +01:00
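A hedged, runnable version of the pattern from the description (nightly `pointer_byte_offsets` gate at the time; the struct here is illustrative):

```rust
#![feature(pointer_byte_offsets)] // nightly gate at the time of this commit

use std::ptr;

#[repr(C)]
struct S {
    a: u16,
    b: u64,
}

fn main() {
    let x = S { a: 1, b: 2 };
    // With distinct pointee types allowed, this computes the byte offset
    // of field `b` within `x` directly.
    let offset = unsafe { ptr::addr_of!(x.b).byte_offset_from(ptr::addr_of!(x)) };
    assert_eq!(offset, 8); // repr(C): u16 + 6 padding bytes before the u64
}
```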