Commit Graph

3764 Commits

Author SHA1 Message Date
Dylan DPC
9f6a2fde34 Rollup merge of #99335 - Dav1dde:fromstr-docs, r=JohnTitor
Use split_once in FromStr docs

Current implementation:

```rust
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        let coords: Vec<&str> = s.trim_matches(|p| p == '(' || p == ')' )
                                 .split(',')
                                 .collect();

        let x_fromstr = coords[0].parse::<i32>()?;
        let y_fromstr = coords[1].parse::<i32>()?;

        Ok(Point { x: x_fromstr, y: y_fromstr })
    }
```

Creating the vector is not necessary; `split_once` does the job better.
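
For illustration, a minimal sketch of the `split_once`-based body (not necessarily the exact wording that landed in the docs), keeping `trim_matches` and still unwrapping for brevity:

```rust
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        // split_once yields both coordinates directly, no Vec needed.
        let (x, y) = s.trim_matches(|p| p == '(' || p == ')')
                      .split_once(',')
                      .unwrap();

        let x_fromstr = x.parse::<i32>()?;
        let y_fromstr = y.parse::<i32>()?;

        Ok(Point { x: x_fromstr, y: y_fromstr })
    }
```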

Alternatively, we could also replace `trim_matches` with `strip_prefix` and `strip_suffix`:

```rust
        let (x, y) = s
            .strip_prefix('(')
            .and_then(|s| s.strip_suffix(')'))
            .and_then(|s| s.split_once(','))
            .unwrap();
```

The question is how much 'correctness' is too much before it distracts from the example. In a real implementation you would also not unwrap (or, as in the original, index the vector without a bounds check), but implementing a custom error type, adding a `From<ParseIntError>` impl, and implementing the `Error` trait adds a lot of code to the example that is not relevant to the `FromStr` trait.
2022-07-19 11:38:53 +05:30
Maybe Waffle
7163e7ff65 Suggest a fix for NonZero* <- * coercion error 2022-07-19 00:13:29 +04:00
Tim Vermeulen
50c612faef Fix Skip::next for non-fused inner iterators 2022-07-18 21:10:47 +02:00
Dylan DPC
5ccdf1f6f7 Rollup merge of #98839 - 5225225:assert_transmute_copy_size, r=thomcc
Add assertion that `transmute_copy`'s U is not larger than T

This is called out as a safety requirement in the docs, but because this check can be done at compile time and constant-folded (just like the `align_of` branch is removed), we can just panic here.
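
As a rough sketch of the shape such a check could take (an illustration under assumed names, not the code merged in this PR):

```rust
use core::mem::{align_of, size_of};

// Sketch only: the size comparison is between compile-time constants, so the
// optimizer folds it away and either drops the panic entirely or makes it
// unconditional, matching the behaviour described above.
pub unsafe fn transmute_copy_sketch<T, U>(src: &T) -> U {
    assert!(
        size_of::<U>() <= size_of::<T>(),
        "cannot transmute_copy if U is larger than T"
    );
    if align_of::<U>() > align_of::<T>() {
        // src may be insufficiently aligned for U, so do an unaligned read.
        (src as *const T as *const U).read_unaligned()
    } else {
        (src as *const T as *const U).read()
    }
}
```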

I've looked at the asm (using `cargo-asm`) of both a correct and an incorrect call, and the panic is either completely removed or made unconditional, without needing build-std.

I don't expect this to cause much breakage in the wild. I scanned through https://miri.saethlin.dev/ub for issues that would look like this (error: Undefined Behavior: memory access failed: alloc1768 has size 1, so pointer to 8 bytes starting at offset 0 is out-of-bounds), but couldn't find any.

That doesn't rule out it happening in crates that fail earlier for some other reason, but it does indicate that doing this is rare, if it happens at all. A crater run for this would need to be build-and-test, since this is a runtime check.

Also added a few more transmute_copy tests.
2022-07-18 21:14:42 +05:30
Yoshua Wuyts
454313fe83 stabilize core::task::ready! 2022-07-18 16:04:52 +02:00
bors
9ed0bf9f2b Auto merge of #99223 - saethlin:panicless-split-mut, r=Mark-Simulacrum
Rearrange slice::split_mut to remove bounds check

Closes https://github.com/rust-lang/rust/issues/86313

Turns out that all we need to do here is reorder the bounds checks to convince LLVM that they can all be removed. It seems like LLVM just fails to propagate the original length information past the first bounds check and into the second one. With this implementation it doesn't need to: each check can be proven in-bounds from the one immediately before it.
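
A toy illustration of that principle (hypothetical, not the actual `split_mut` code): proving the larger index in bounds first lets LLVM elide the later check.

```rust
// Reading v[1] first establishes len >= 2, so LLVM can prove v[0] is also
// in bounds and emit a single bounds check instead of two.
fn first_two(v: &[u32]) -> (u32, u32) {
    let b = v[1];
    let a = v[0];
    (a, b)
}
```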

I've gradually convinced myself that this implementation is unambiguously better based on the above logic, but maybe this is still deserving of a codegen test?

Also the mentioned borrowck limitation no longer seems to exist.
2022-07-18 10:16:58 +00:00
David Herberth
c1c1abc08a Use split_once in FromStr docs 2022-07-18 08:57:43 +02:00
Yuki Okushi
7c98c92ebc Rollup merge of #99374 - TethysSvensson:patch-1, r=Dylan-DPC
Fix doc for `rchunks_exact`

`rchunks_exact` is not a more optimized version of `chunks`, but of `rchunks`.
2022-07-18 08:40:02 +09:00
Yuki Okushi
796bc7cae3 Rollup merge of #98383 - m-ou-se:remove-memory-order-restrictions, r=joshtriplett
Remove restrictions on compare-exchange memory ordering.

We currently don't allow the failure memory ordering of compare-exchange operations to be stronger than the success ordering, as was the case in C++11 when its memory model was copied to Rust. However, this restriction was lifted in C++17 as part of [p0418r2](https://wg21.link/p0418r2). It's time we lift the restriction too.

| Success | Failure | Before | After |
|---------|---------|--------|-------|
| Relaxed | Relaxed | ✔️ | ✔️ |
| Relaxed | Acquire |                 | ✔️ |
| Relaxed | SeqCst  |                 | ✔️ |
| Acquire | Relaxed | ✔️ | ✔️ |
| Acquire | Acquire | ✔️ | ✔️ |
| Acquire | SeqCst  |                 | ✔️ |
| Release | Relaxed | ✔️ | ✔️ |
| Release | Acquire |                 | ✔️ |
| Release | SeqCst  |                 | ✔️ |
| AcqRel  | Relaxed | ✔️ | ✔️ |
| AcqRel  | Acquire | ✔️ | ✔️ |
| AcqRel  | SeqCst  |                 | ✔️ |
| SeqCst  | Relaxed | ✔️ | ✔️ |
| SeqCst  | Acquire | ✔️ | ✔️ |
| SeqCst  | SeqCst  | ✔️ | ✔️ |
| \*      | Release |                 |                 |
| \*      | AcqRel  |                 |                 |

Fixes https://github.com/rust-lang/rust/issues/68464
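
For example, a failure ordering stronger than the success ordering, which the table marks as newly allowed, now compiles (a minimal sketch):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Relaxed on success, Acquire on failure: rejected before this change,
// accepted after it.
fn try_claim(flag: &AtomicUsize) -> bool {
    flag.compare_exchange(0, 1, Ordering::Relaxed, Ordering::Acquire)
        .is_ok()
}
```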
2022-07-18 08:39:57 +09:00
Michael Howell
1169832f2f rustdoc: extend #[doc(tuple_variadic)] to fn pointers
The attribute is also renamed to `fake_variadic`.
2022-07-17 16:32:06 -07:00
Tethys Svensson
8c58de5e2c Fix for rchunks_exact doc
`rchunks_exact` is not a more optimized version of `chunks`, but of `rchunks`.
2022-07-17 14:18:36 +02:00
Yuki Okushi
50527690e2 Rollup merge of #99306 - JohnTitor:stabilize-future-poll-fn, r=joshtriplett
Stabilize `future_poll_fn`

FCP is done: https://github.com/rust-lang/rust/issues/72302#issuecomment-1179620512
Closes #72302
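
For context, a minimal usage sketch of the stabilized API (illustrative only):

```rust
use std::future::poll_fn;
use std::task::Poll;

// poll_fn turns a closure over the task Context into a Future.
async fn ready_value() -> u32 {
    poll_fn(|_cx| Poll::Ready(42)).await
}
```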

r? `@joshtriplett` as you started FCP

Signed-off-by: Yuki Okushi <jtitor@2k36.org>
2022-07-17 13:08:52 +09:00
bors
db41351753 Auto merge of #98866 - nagisa:nagisa/align-offset-wroom, r=Mark-Simulacrum
Add a special case for align_offset w/ stride != 1

This generalizes the previous `stride == 1` special case to apply to any situation where the requested alignment is divisible by the stride. This in turn allows the test case from #98809 to produce ideal assembly, along the lines of:

    leaq 15(%rdi), %rax
    andq $-16, %rax

This also produces pretty high quality code for situations where the
alignment of the input pointer isn’t known:

    pub unsafe fn ptr_u32(slice: *const u32) -> *const u32 {
        slice.offset(slice.align_offset(16) as isize)
    }

    // =>

    movl %edi, %eax
    andl $3, %eax
    leaq 15(%rdi), %rcx
    andq $-16, %rcx
    subq %rdi, %rcx
    shrq $2, %rcx
    negq %rax
    sbbq %rax, %rax
    orq  %rcx, %rax
    leaq (%rdi,%rax,4), %rax

Here LLVM is smart enough to replace the `usize::MAX` special case with
a branch-less bitwise-OR approach, where the mask is constructed using
the neg and sbb instructions. This appears to work across various
architectures I’ve tried.

This change ends up introducing more branches and code in situations where there is less knowledge of the arguments, for example when the requested alignment is entirely unknown. This use case was never really a focus of this function, so I'm not particularly worried, especially since llvm-mca says the new code is still appreciably faster despite all the new branching.

Fixes #98809.
Sadly, this does not help with #72356.
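
For readers unfamiliar with the `usize::MAX` contract mentioned above, a small illustrative sketch (not code from this PR):

```rust
// align_offset returns usize::MAX when the pointer cannot be brought to the
// requested alignment by stepping forward whole elements.
pub unsafe fn align_or_null(p: *const u32) -> *const u32 {
    let off = p.align_offset(16);
    if off == usize::MAX {
        core::ptr::null()
    } else {
        p.add(off)
    }
}
```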
2022-07-16 23:28:28 +00:00
Simonas Kazlauskas
62a182cf7f Add a special case for align_offset w/ stride != 1
This generalizes the previous `stride == 1` special case to apply to any situation where the requested alignment is divisible by the stride. This in turn allows the test case from #98809 to produce ideal assembly, along the lines of:

    leaq 15(%rdi), %rax
    andq $-16, %rax

This also produces pretty high quality code for situations where the
alignment of the input pointer isn’t known:

    pub unsafe fn ptr_u32(slice: *const u32) -> *const u32 {
        slice.offset(slice.align_offset(16) as isize)
    }

    // =>

    movl %edi, %eax
    andl $3, %eax
    leaq 15(%rdi), %rcx
    andq $-16, %rcx
    subq %rdi, %rcx
    shrq $2, %rcx
    negq %rax
    sbbq %rax, %rax
    orq  %rcx, %rax
    leaq (%rdi,%rax,4), %rax

Here LLVM is smart enough to replace the `usize::MAX` special case with
a branch-less bitwise-OR approach, where the mask is constructed using
the neg and sbb instructions. This appears to work across various
architectures I’ve tried.

This change ends up introducing more branches and code in situations where there is less knowledge of the arguments, for example when the requested alignment is entirely unknown. This use case was never really a focus of this function, so I'm not particularly worried, especially since llvm-mca says the new code is still appreciably faster despite all the new branching.

Fixes #98809.
Sadly, this does not help with #72356.
2022-07-17 01:27:37 +03:00
Ben Kimock
c9373903e7 Rearrange slice::split_mut to remove bounds check 2022-07-16 12:26:37 -04:00
Yuki Okushi
083a253e53 Rollup merge of #99277 - joshtriplett:stabilize-core-cstr-alloc-cstring, r=Mark-Simulacrum
Stabilize `core::ffi::CStr`, `alloc::ffi::CString`, and friends

Stabilize the `core_c_str` and `alloc_c_string` feature gates.

Change `std::ffi` to re-export these types rather than creating type
aliases, since they now have matching stability.
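
A small usage sketch of the newly stable paths (illustrative only):

```rust
use core::ffi::CStr;    // newly stable in core
use std::ffi::CString;  // re-exported; also available as alloc::ffi::CString

fn roundtrip() {
    let owned = CString::new("hello").unwrap();
    let borrowed: &CStr = owned.as_c_str();
    assert_eq!(borrowed.to_bytes(), b"hello");
}
```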
2022-07-16 17:53:04 +09:00
Yuki Okushi
084ad59622 Stabilize future_poll_fn
Signed-off-by: Yuki Okushi <jtitor@2k36.org>
2022-07-16 10:04:14 +09:00
Aaron Hill
ef8e322b14 Mark stabilized intrinsics with rustc_allowed_through_unstable_modules
Fixes #99286

PR #95956 accidentally made these intrinsics unstable when
accessed through the unstable path segment 'std::intrinsics'
2022-07-15 11:18:40 -05:00
Josh Triplett
d6b7480c2a Stabilize core::ffi::CStr, alloc::ffi::CString, and friends
Stabilize the `core_c_str` and `alloc_c_string` feature gates.

Change `std::ffi` to re-export these types rather than creating type
aliases, since they now have matching stability.
2022-07-15 03:10:35 -07:00
bors
24699bcbad Auto merge of #95956 - yaahc:stable-in-unstable, r=cjgillot
Support unstable moves via stable in unstable items

part of https://rust-lang.zulipchat.com/#narrow/stream/131828-t-compiler/topic/moving.20items.20to.20core.20unstably and a blocker of https://github.com/rust-lang/rust/pull/90328.

The libs-api team needs the ability to move an already stable item to a new location unstably, in this case for `Error` in `core`. Otherwise these changes are insta-stable, making them much harder to merge.

This PR attempts to solve the problem by checking the stability of path segments as well as the last item in the path itself, which is currently the only thing checked.
2022-07-14 13:42:09 +00:00
Dylan DPC
103b8602b7 Rollup merge of #98315 - joshtriplett:stabilize-core-ffi-c, r=Mark-Simulacrum
Stabilize `core::ffi::c_*` and re-export in `std::ffi`

This only stabilizes the base types, not the non-zero variants, since
those have their own separate tracking issue and have not gone through
FCP to stabilize.
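
A minimal sketch using the now-stable aliases (illustrative only):

```rust
use core::ffi::{c_char, c_int};

extern "C" {
    fn puts(s: *const c_char) -> c_int;
}

fn main() {
    // Call a libc function through the stabilized C type aliases.
    unsafe { puts(b"hello\0".as_ptr() as *const c_char) };
}
```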
2022-07-14 14:14:20 +05:30
Josh Triplett
d431338b25 Stabilize core::ffi::c_* and re-export in std::ffi
This only stabilizes the base types, not the non-zero variants, since
those have their own separate tracking issue and have not gone through
FCP to stabilize.
2022-07-13 19:28:20 -07:00
Scott McMurray
a32305a80f Re-optimize Layout::array
This way it's one check instead of two, so hopefully it'll be better.

Nightly:
```
layout_array_i32:
	movq	%rdi, %rax
	movl	$4, %ecx
	mulq	%rcx
	jo	.LBB1_2
	movabsq	$9223372036854775805, %rcx
	cmpq	%rcx, %rax
	jae	.LBB1_2
	movl	$4, %edx
	retq
.LBB1_2:
	…
```

This PR:
```
	movq	%rcx, %rax
	shrq	$61, %rax
	jne	.LBB2_1
	shlq	$2, %rcx
	movl	$4, %edx
	movq	%rcx, %rax
	retq
.LBB2_1:
	…
```
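
A hypothetical sketch of the single-check idea for a 4-byte element (ignoring the alignment rounding slack; not the actual `Layout::array` code):

```rust
// One comparison bounds n so that n * 4 can neither overflow usize nor
// exceed isize::MAX, replacing the separate mul-overflow and max-size checks.
fn array_size_i32(n: usize) -> Option<usize> {
    if n > (isize::MAX as usize) / 4 {
        None
    } else {
        Some(n * 4)
    }
}
```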
2022-07-13 17:07:41 -07:00
bors
87588a2afd Auto merge of #99136 - CAD97:layout-faster, r=scottmcm
Take advantage of known-valid-align in layout.rs

An attempt to improve perf using `@nnethercote`'s approach suggested in #99117.
2022-07-13 21:01:20 +00:00
Dylan DPC
1e7d04b23b Rollup merge of #99011 - oli-obk:UnsoundCell, r=eddyb
`UnsafeCell` blocks niches inside its nested type from being available outside

fixes #87341

This implements the plan by `@eddyb` in https://github.com/rust-lang/rust/issues/87341#issuecomment-886083646

Somewhat related PR (not strictly necessary, but that cleanup made this PR simpler): #94527
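
A sketch of the intended observable effect on a toolchain that includes this change (sizes shown for illustration):

```rust
use core::cell::UnsafeCell;
use core::mem::size_of;
use core::num::NonZeroU32;

fn main() {
    // The NonZeroU32 niche is still usable directly...
    assert_eq!(size_of::<Option<NonZeroU32>>(), 4);
    // ...but is no longer exposed through UnsafeCell, so Option needs its own tag.
    assert_eq!(size_of::<Option<UnsafeCell<NonZeroU32>>>(), 8);
}
```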
2022-07-13 19:32:34 +05:30
Ralf Jung
7b4149474b mention mitigation in the docs 2022-07-12 11:56:35 -04:00
Ralf Jung
84ff4da726 mem::uninitialized: mitigate many incorrect uses of this function 2022-07-12 10:05:47 -04:00
Christopher Durham
11694905b4 Remove duplication of layout size check 2022-07-11 17:58:42 -04:00
Ralf Jung
1b3870e427 remove a dubious example 2022-07-11 11:36:18 -04:00
Ralf Jung
eed5df52f6 typo
Co-authored-by: Ben Kimock <kimockb@gmail.com>
2022-07-11 09:49:55 -04:00
SOFe
01a9ff0e85 Clarify that [iu]size bounds were only defined for the target arch 2022-07-11 15:08:38 +08:00
Christopher Durham
079d3eb22f Take advantage of known-valid-align in layout.rs 2022-07-10 20:34:39 -04:00
Matthias Krüger
342b666d59 Rollup merge of #99094 - AldaronLau:atomic-ptr-extra-space, r=Dylan-DPC
Remove extra space in AtomicPtr::new docs
2022-07-11 00:33:48 +02:00
bors
268be96d6d Auto merge of #99112 - matthiaskrgr:rollup-uv2zk4d, r=matthiaskrgr
Rollup of 5 pull requests

Successful merges:

 - #99045 (improve print styles)
 - #99086 (Fix display of search result crate filter dropdown)
 - #99100 (Fix binary name in help message for test binaries)
 - #99103 (Avoid some `&str` to `String` conversions)
 - #99109 (fill new tracking issue for `feature(strict_provenance_atomic_ptr)`)

Failed merges:

r? `@ghost`
`@rustbot` modify labels: rollup
2022-07-10 11:35:12 +00:00
Maybe Waffle
e9292b7652 fill new tracking issue for feature(strict_provenance_atomic_ptr) 2022-07-10 13:17:33 +04:00
bors
4ec97d991b Auto merge of #95295 - CAD97:layout-isize, r=scottmcm
Enforce that layout size fits in isize in Layout

As it turns out, enforcing this _in APIs that already check for `usize` overflow_ is fairly trivial. `Layout::from_size_align_unchecked` continues to "allow" sizes which (when rounded up to the alignment) would overflow `isize`, but these are now declared as library UB for `Layout`, meaning that consumers of `Layout` no longer have to check this before making an allocation.
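
A small sketch of the rule as enforced through the checked constructor (illustrative only):

```rust
use std::alloc::Layout;

fn main() {
    // The size, rounded up to the alignment, must fit in isize.
    assert!(Layout::from_size_align(isize::MAX as usize, 1).is_ok());
    assert!(Layout::from_size_align(isize::MAX as usize, 2).is_err());
    assert!(Layout::from_size_align(isize::MAX as usize + 1, 1).is_err());
}
```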

(Note that this is "immediate library UB;" IOW it is valid for a future release to make this immediate "language UB," and there is an extant patch to do so, to allow Miri to catch this misuse.)

See also #95252, [Zulip discussion](https://rust-lang.zulipchat.com/#narrow/stream/219381-t-libs/topic/Layout.20Isn't.20Enforcing.20The.20isize.3A.3AMAX.20Rule).
Fixes https://github.com/rust-lang/rust/issues/95334

Some relevant quotes:

`@eddyb`, https://github.com/rust-lang/rust/pull/95252#issuecomment-1078513769

> [B]ecause of the non-trivial presence of both of these among code published on e.g. crates.io:
>
>   1. **`Layout` "producers" / `GlobalAlloc` "users"**: smart pointers (including `alloc::rc` copies with small tweaks), collections, etc.
>   2. **`Layout` "consumers" / `GlobalAlloc` "providers"**: perhaps fewer of these, but anything built on top of OS APIs like `mmap` will expose `> isize::MAX` allocations (on 32-bit hosts) if they lack extra checks
>
> IMO the only responsible option is to enforce the `isize::MAX` limit in `Layout`, which:
>
>   * makes `Layout` _sound_ in terms of only ever allowing allocations where `(alloc_base_ptr: *mut u8).offset(size)` is never UB
>   * frees both "producers" and "consumers" of `Layout` from manually reimplementing the checks
>     * manual checks can be risky, e.g. if the final size passed to the allocator isn't the one being checked
>     * this applies retroactively, fixing the overall soundness of existing code with zero transition period or _any_ changes required from users (as long as going through `Layout` is mandatory, making a "choke point")
>
>
> Feel free to quote this comment onto any relevant issue, I might not be able to keep track of developments.

`@Gankra`, https://github.com/rust-lang/rust/pull/95252#issuecomment-1078556371

> As someone who spent way too much time optimizing libcollections checks for this stuff and tried to splatter docs about it everywhere on the belief that it was a reasonable thing for people to manually take care of: I concede the point, it is not reasonable. I am wholly spiritually defeated by the fact that _liballoc_ of all places is getting this stuff wrong. This isn't throwing shade at the folks who implemented these Rc features, but rather a statement of how impractical it is to expect anyone out in the wider ecosystem to enforce them if _some of the most audited rust code in the library that defines the very notion of allocating memory_ can't even reliably do it.
>
> We need the nuclear option of Layout enforcing this rule. Code that breaks this rule is _deeply_ broken and any "regressions" from changing Layout's contract is a _correctness_ fix. Anyone who disagrees and is sufficiently motivated can go around our backs but the standard library should 100% refuse to enable them.

cc also `@RalfJung` `@rust-lang/wg-allocators`. Even though this technically supersedes #95252, those potential failure points should almost certainly still get nicer panics than just "unwrap failed" (which they would get by this PR).

It might additionally be worth recommending to users of the `Layout` API that they ideally use `.and_then`/`?` to complete the entire layout calculation and then `panic!` from a single location at the end of the `Layout` manipulation, to reduce both the overhead of the checks and the optimizer's need to preserve the exact location of each `panic`, when conceptually there is just one failure: allocation too big.

Probably deserves a T-lang and/or T-libs-api FCP (this technically solidifies the [objects must be no larger than `isize::MAX`](https://rust-lang.github.io/unsafe-code-guidelines/layout/scalars.html#isize-and-usize) rule further, and the UCG document says this hasn't been RFCd) and a crater run. Ideally, no code exists that will start failing with this addition; if it does, it was _likely_ (but not certainly) causing UB.

Changes the raw_vec allocation path, thus deserves a perf run as well.

I suggest hiding whitespace-only changes in the diff view.
2022-07-10 08:54:32 +00:00
Konrad Borowski
0753fd117b Partially stabilize const_slice_from_raw_parts
This doesn't stabilize methods working on mutable pointers.
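
A minimal sketch of what the `*const` half enables (illustrative only):

```rust
// slice::from_raw_parts is now callable in const context for *const pointers.
const GREETING: &[u8] = unsafe { core::slice::from_raw_parts(b"hi".as_ptr(), 2) };

fn main() {
    assert_eq!(GREETING, b"hi");
}
```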
2022-07-09 23:20:02 +02:00
Jeron Aldaron Lau
4944b5769b Remove extra space in AtomicPtr::new docs 2022-07-09 14:20:34 -05:00
Ralf Jung
2e0ca9472b add a concrete example 2022-07-09 10:48:43 -04:00
Ralf Jung
f6247ffa5a clarify how write_bytes can lead to UB due to invalid values 2022-07-09 09:38:07 -04:00
Dylan DPC
3c35da224b Rollup merge of #99070 - tamird:update-tracking-issue, r=RalfJung
Update integer_atomics tracking issue

Updates #32976.
Updates #99069.

r? ``@RalfJung``
2022-07-09 11:28:09 +05:30
Tamir Duberstein
a491d4582d Update integer_atomics tracking issue
Updates #32976.
Updates #99069.
2022-07-08 17:52:04 -04:00
Jane Lusby
0715616b51 add rt flag to allowed internal unstable for RustcEncodable/Decodable 2022-07-08 21:18:15 +00:00
Jane Lusby
b55453dbad add opt in attribute for stable-in-unstable items 2022-07-08 21:18:15 +00:00
Jane Losare-Lusby
d68cb1f9a3 revert changes to unicode stability 2022-07-08 21:18:15 +00:00
Jane Lusby
e7fe5456c5 Support unstable moves via stable in unstable items 2022-07-08 21:18:13 +00:00
Matthias Krüger
9c6bcb60f3 Rollup merge of #98718 - yoshuawuyts:stabilize-into-future, r=yaahc
Stabilize `into_future`

https://github.com/rust-lang/rust/issues/67644 has been labeled with [S-tracking-ready-to-stabilize](https://github.com/rust-lang/rust/labels/S-tracking-ready-to-stabilize), which mentions that someone needs to file a stabilization PR. So here it is! Thanks!

Closes https://github.com/rust-lang/rust/issues/67644

r? ``@joshtriplett``
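
For context, a minimal sketch of the stabilized trait in use (illustrative only):

```rust
use std::future::IntoFuture;

// .await desugars through IntoFuture, so any implementor can be awaited.
async fn run<T: IntoFuture<Output = u32>>(t: T) -> u32 {
    t.await
}
```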
2022-07-08 08:00:37 +02:00
Oli Scherer
2a899dc1cf UnsafeCell now has no niches, ever. 2022-07-07 10:46:22 +00:00
Guillaume Gomez
77ec591727 Rollup merge of #98939 - GuillaumeGomez:rustdoc-disamb-impls, r=notriddle
rustdoc: Add more semantic information to impl IDs

Take over of #92745.

I fixed the last remaining issue with the links in the sidebar (mentioned by `@jsha`) and fixed the few broken links in the std/core docs.

cc `@camelid`
r? `@notriddle`
2022-07-06 20:43:27 +02:00
Guillaume Gomez
4755173cf6 Rollup merge of #96935 - thomcc:atomicptr-strict-prov, r=dtolnay
Allow arithmetic and certain bitwise ops on AtomicPtr

This is mainly to support migrating from `AtomicUsize`, for the strict provenance experiment.

This is a pretty dubious set of APIs, but it should be sufficient to allow code that's using `AtomicUsize` to manipulate a tagged pointer atomically. It's under a new feature gate, `#![feature(strict_provenance_atomic_ptr)]`, but I'm not sure if it needs its own tracking issue. I'm happy to make one, but it's not clear that it's needed.

I'm unsure if it needs changes in the various non-LLVM backends. Because we just cast things to integers anyway (and were already doing so), I doubt it.

API change proposal: https://github.com/rust-lang/libs-team/issues/60

Fixes #95492
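
A small nightly-only sketch of the kind of operation this enables, behind the feature gate named above (illustrative only):

```rust
#![feature(strict_provenance_atomic_ptr)]
use std::sync::atomic::{AtomicPtr, Ordering};

// Atomically set a tag bit in the low bits of the stored pointer.
fn set_tag(p: &AtomicPtr<u8>) {
    p.fetch_or(0b1, Ordering::Relaxed);
}
```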
2022-07-06 20:43:23 +02:00