Commit Graph

3764 Commits

Author SHA1 Message Date
Matthias Krüger
a45f69f27d Rollup merge of #100822 - WaffleLapkin:no_offset_question_mark, r=scottmcm
Replace most uses of `pointer::offset` with `add` and `sub`

As the PR title says, this replaces `pointer::offset` in the compiler and standard library with `pointer::add` and `pointer::sub`. This generally makes the code cleaner and easier to grasp, and removes (or, well, hides) integer casts.

This is generally trivially correct: `.offset(-constant)` is just `.sub(constant)`, `.offset(usized as isize)` is just `.add(usized)`, etc. However, in some cases we need to be careful with the signs of things.
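A minimal sketch of the rewrite pattern (illustrative, not lines from the actual diff):

```rust
// Illustrative only: as with `offset`, callers must keep every
// computed pointer in bounds of the same allocation.
unsafe fn demo(ptr: *const u8, index: usize) -> (*const u8, *const u8) {
    // Before: the cast and the sign obscure the intent.
    let a = ptr.offset(index as isize);
    let b = ptr.offset(-1);
    // After: the direction is explicit and the cast disappears.
    let c = ptr.add(index);
    let d = ptr.sub(1);
    debug_assert!(a == c && b == d);
    (c, d)
}

fn main() {
    let v = [0u8; 4];
    // Start one element in so the `sub(1)`/`offset(-1)` stay in bounds.
    unsafe { demo(v.as_ptr().add(1), 2) };
}
```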

r? `@scottmcm`

_split off from #100746_
2022-08-21 16:54:07 +02:00
Matthias Krüger
fd403f5d17 Rollup merge of #100821 - WaffleLapkin:ptr_add_docs, r=scottmcm
Make some docs nicer wrt pointer offsets

This PR replaces `pointer::offset` with `pointer::add` and similarly `.cast().wrapping_add().cast()` with `.wrapping_byte_add()` **in docs**.
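A hedged before/after sketch of the doc-level rewrite (a hypothetical snippet, not taken from the affected docs; `wrapping_byte_add` was unstable under `pointer_byte_offsets` at the time):

```rust
#![feature(pointer_byte_offsets)] // still unstable at the time of this PR

fn main() {
    let x = 0u16;
    let p: *const u16 = &x;
    // Before: the byte offset takes two casts to spell.
    let old = p.cast::<u8>().wrapping_add(3).cast::<u16>();
    // After: one call says the same thing.
    let new = p.wrapping_byte_add(3);
    assert_eq!(old, new);
}
```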

r? `@scottmcm`

_split off from #100746_
2022-08-21 16:54:06 +02:00
Matthias Krüger
1cdcf508bb Rollup merge of #100663 - clarfonthey:const-reverse, r=scottmcm
Make slice::reverse const

I remember this not being doable for some reason before, but decided to try it again and everything worked out in the tests.
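A hedged sketch of what this enables (feature-gate names per that era's nightly; details may differ):

```rust
#![feature(const_reverse)]
// Depending on the nightly in question, const-context mutable
// references may additionally require: #![feature(const_mut_refs)]

const REVERSED: [u8; 4] = {
    let mut a = [1, 2, 3, 4];
    a.reverse(); // `<[T]>::reverse` is now callable in const context
    a
};

fn main() {
    assert_eq!(REVERSED, [4, 3, 2, 1]);
}
```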
2022-08-21 16:54:01 +02:00
Matthias Krüger
a5c16a5381 Rollup merge of #100556 - Alex-Velez:patch-1, r=scottmcm
Clamp Function for f32 and f64

I thought the clamp function could use a small readability improvement. The function now returns early in order to skip the extra bounds checks.

If there was a reason for binding `self` to `x` or if this code is incorrect, please correct me :)
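A sketch of the early-return shape described above (not necessarily the exact merged code):

```rust
// Sketch of the early-return style; the merged implementation
// may differ in detail.
pub fn clamp(x: f64, min: f64, max: f64) -> f64 {
    assert!(min <= max, "min > max, or either was NaN");
    if x < min {
        return min; // early return skips the second comparison
    }
    if x > max {
        return max;
    }
    x // NaN falls through unchanged, matching clamp's semantics
}

fn main() {
    assert_eq!(clamp(3.0, 0.0, 1.0), 1.0);
    assert_eq!(clamp(-3.0, 0.0, 1.0), 0.0);
    assert_eq!(clamp(0.5, 0.0, 1.0), 0.5);
}
```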
2022-08-21 16:54:01 +02:00
Maybe Waffle
efef211876 Make use of pointer::is_aligned[_to] 2022-08-21 15:46:03 +04:00
Tim Vermeulen
db2b4a3a7e Use internal iteration in Iterator::{cmp_by, partial_cmp_by, eq_by} 2022-08-21 12:23:10 +02:00
Maybe Waffle
b2625e24b9 fix nitpicks from review 2022-08-21 06:36:11 +04:00
Maybe Waffle
168a837975 fill in tracking issue for feature(ptr_mask) 2022-08-21 05:27:14 +04:00
Maybe Waffle
10270f4b44 Add pointer masking convenience functions
This commit adds the following functions all of which have a signature
`pointer, usize -> pointer`:
- `<*mut T>::mask`
- `<*const T>::mask`
- `intrinsics::ptr_mask`

These functions are equivalent to `.map_addr(|a| a & mask)` but they
utilize the `llvm.ptrmask` LLVM intrinsic.

*masks your pointers*
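A usage sketch under the new (unstable) `ptr_mask` gate; the tag bits and mask here are illustrative:

```rust
#![feature(ptr_mask)] // unstable gate added by this series

fn main() {
    let buf = [0u8; 64];
    // Pretend the low two address bits carry a tag.
    let tagged: *const u8 = buf.as_ptr().wrapping_add(3);
    // Equivalent to `.map_addr(|a| a & !0b11)`, but lowered via
    // llvm.ptrmask; address bits are cleared, provenance is kept.
    let untagged = tagged.mask(!0b11);
    assert_eq!(untagged as usize % 4, 0);
}
```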
2022-08-21 05:27:14 +04:00
Maybe Waffle
3ba393465f Make some docs nicer wrt pointer offsets 2022-08-21 02:22:20 +04:00
Maybe Waffle
e4720e1cf2 Replace most uses of pointer::offset with add and sub 2022-08-21 02:21:41 +04:00
Cameron Steffen
17ddcb434b Improve primitive/std docs separation and headers 2022-08-20 16:50:29 -05:00
Matthias Krüger
bd4a63cda2 Rollup merge of #100585 - wooorm:patch-1, r=Mark-Simulacrum
Fix trailing space showing up in example

The current text is rendered as: U+005B ..= U+0060 ``[ \ ] ^ _ ` ``, or (**note the final space!**)
This patch changes that to render as: U+005B ..= U+0060 `` [ \ ] ^ _ ` ``, or (**note no final space!**)

The reason for that is that CommonMark has a solution for starting or ending inline code with a backtick/grave accent: padding both sides with a space makes that padding disappear.
2022-08-20 19:32:08 +02:00
Matthias Krüger
d49906519b Rollup merge of #99544 - dylni:expose-utf8lossy, r=Mark-Simulacrum
Expose `Utf8Lossy` as `Utf8Chunks`

This PR changes the feature for `Utf8Lossy` from `str_internals` to `utf8_lossy` and improves the API. This is done to eventually expose the API as stable.

Proposal: rust-lang/libs-team#54
Tracking Issue: #99543
2022-08-20 19:32:07 +02:00
dylni
e8ee0b7b2b Expose Utf8Lossy as Utf8Chunks 2022-08-20 12:49:20 -04:00
ltdk
ae2b1dbc89 Tracking issue for const_reverse 2022-08-19 20:38:32 -04:00
KaDiWa
a297631bdc use <[u8]>::escape_ascii instead of core::ascii::escape_default 2022-08-19 19:00:37 +02:00
bors
6c943bad02 Auto merge of #99541 - timvermeulen:flatten_cleanup, r=the8472
Refactor iteration logic in the `Flatten` and `FlatMap` iterators

The `Flatten` and `FlatMap` iterators both delegate to `FlattenCompat`:
```rust
struct FlattenCompat<I, U> {
    iter: Fuse<I>,
    frontiter: Option<U>,
    backiter: Option<U>,
}
```
Every individual iterator method that `FlattenCompat` implements needs to carefully manage this state, checking whether the `frontiter` and `backiter` are present, and storing the current iterator appropriately if iteration is aborted. This has led to methods such as `next`, `advance_by`, and `try_fold` all having similar code for managing the iterator's state.

I have extracted this common logic of iterating the inner iterators, with the option to exit early, into an `iter_try_fold` method:
```rust
impl<I, U> FlattenCompat<I, U>
where
    I: Iterator<Item: IntoIterator<IntoIter = U>>,
{
    fn iter_try_fold<Acc, Fold, R>(&mut self, acc: Acc, fold: Fold) -> R
    where
        Fold: FnMut(Acc, &mut U) -> R,
        R: Try<Output = Acc>,
    { ... }
}
```
It passes each of the inner iterators to the given function as long as it keeps succeeding. It takes care of managing `FlattenCompat`'s state, so that the actual `Iterator` methods don't need to. The resulting code that makes use of this abstraction is much more straightforward:
```rust
fn next(&mut self) -> Option<U::Item> {
    #[inline]
    fn next<U: Iterator>((): (), iter: &mut U) -> ControlFlow<U::Item> {
        match iter.next() {
            None => ControlFlow::CONTINUE,
            Some(x) => ControlFlow::Break(x),
        }
    }

    self.iter_try_fold((), next).break_value()
}
```
Note that despite being implemented in terms of `iter_try_fold`, `next` is still able to benefit from `U`'s `next` method. It therefore does not take the performance hit that implementing `next` directly in terms of `Self::try_fold` causes (in some benchmarks).

This PR also adds `iter_try_rfold` which captures the shared logic of `try_rfold` and `advance_back_by`, as well as `iter_fold` and `iter_rfold` for folding without early exits (used by `fold`, `rfold`, `count`, and `last`).

Benchmark results:
```
                                             before                after
bench_flat_map_sum                       423,255 ns/iter      414,338 ns/iter
bench_flat_map_ref_sum                 1,942,139 ns/iter    2,216,643 ns/iter
bench_flat_map_chain_sum               1,616,840 ns/iter    1,246,445 ns/iter
bench_flat_map_chain_ref_sum           4,348,110 ns/iter    3,574,775 ns/iter
bench_flat_map_chain_option_sum          780,037 ns/iter      780,679 ns/iter
bench_flat_map_chain_option_ref_sum    2,056,458 ns/iter      834,932 ns/iter
```

I added the last two benchmarks specifically to demonstrate an extreme case where `FlatMap::next` can benefit from custom internal iteration of the outer iterator, so take it with a grain of salt. We should probably do a perf run to see if the changes to `next` are worth it in practice.
2022-08-19 02:34:30 +00:00
Scott McMurray
8118a31e86 Inline <T as From<T>>::from
I noticed in the MIR for <https://play.rust-lang.org/?version=nightly&mode=release&edition=2021&gist=67097e0494363ee27421a4e3bdfaf513> that most things had been inlined:
```
scope 5 (inlined <Result<i32, u32> as Try>::branch)
```
```
scope 8 (inlined <Result<i32, u32> as Try>::from_output)
```

Yet the do-nothing `from` call was still there:
```
_17 = <u32 as From<u32>>::from(move _18) -> bb9;
```

So let's give this a try and see what perf has to say.
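For context, the call resolves to core's blanket identity impl (`impl<T> From<T> for T`); a minimal illustration of the do-nothing conversion this lets MIR inlining erase:

```rust
fn demo(x: u32) -> u32 {
    // Identity conversion via core's blanket `impl<T> From<T> for T`;
    // with `from` marked #[inline], MIR inlining can remove this call.
    u32::from(x)
}

fn main() {
    assert_eq!(demo(7), 7);
}
```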
2022-08-18 16:04:00 -07:00
bors
361c599fee Auto merge of #98655 - nnethercote:dont-derive-PartialEq-ne, r=dtolnay
Don't derive `PartialEq::ne`.

Currently we skip deriving `PartialEq::ne` for C-like (fieldless) enums
and empty structs, thus relying on the default `ne`. This behaviour is
unnecessarily conservative, because the `PartialEq` docs say this:

> Implementations must ensure that eq and ne are consistent with each other:
>
> `a != b` if and only if `!(a == b)` (ensured by the default
> implementation).

This means that the default implementation (`!(a == b)`) is always good
enough. So this commit changes things such that `ne` is never derived.
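A small sketch of the observable behaviour (nothing here is specific to the new expansion; any consistent `PartialEq` behaves the same):

```rust
#[derive(PartialEq)]
enum Color {
    Red,
    Green,
}

fn main() {
    // `!=` goes through PartialEq's default `ne`, i.e. `!(a == b)`,
    // now that `ne` is no longer derived.
    assert!(Color::Red != Color::Green);
    assert!(!(Color::Red != Color::Red));
}
```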

The motivation for this change is that not deriving `ne` reduces compile
times and binary sizes.

Observable behaviour may change if a user has defined a type `A` with an
inconsistent `PartialEq` and then defines a type `B` that contains an
`A` and also derives `PartialEq`. Such code is already buggy and
preserving bug-for-bug compatibility isn't necessary.

Two side-effects of the change:
- There is only one error message produced for types where `PartialEq`
  cannot be derived, instead of two.
- For coverage reports, some warnings about generated `ne` methods not
  being executed have disappeared.

Both side-effects seem fine, and possibly preferable.
2022-08-18 10:11:11 +00:00
David Tolnay
83f081fc01 Remove unstable Result::into_ok_or_err 2022-08-17 17:20:42 -07:00
Matthias Krüger
1199dbdcf5 Rollup merge of #100661 - PunkyMunky64:patch-1, r=thomcc
Fixed a few documentation errors

Quick pull request; IEEE-754, not IEEE-745. May save someone a quick second some time.
2022-08-17 12:33:02 +02:00
ltdk
5e1730fd17 Make slice::reverse const 2022-08-17 02:01:32 -04:00
PunkyMunky64
683b3f4e6e Fixed a few documentation errors
Quick pull request; IEEE-754, not IEEE-745. May save someone a quick second some time.
2022-08-16 22:29:14 -07:00
PunkyMunky64
89d9a35b3e Fixed a few documentation errors
IEEE-754, not IEEE-745. May save someone a second sometime
2022-08-16 22:28:11 -07:00
Alex
0ff8f0b578 Update src/test/assembly/x86_64-floating-point-clamp.rs
Simple Clamp Function for f32 and f64

I thought this was more robust and easier to read. I also allowed the function to return early in order to skip the extra bounds check (I'm sure the difference is negligible). I'm not sure if there was a reason for binding `self` to `x`; if so, please correct me.

Squashed history: floating-point clamp assembly test; f32 and f64 clamp using `mut self`; follow-up updates to library/core/src/num/f32.rs, library/core/src/num/f64.rs, and src/test/assembly/x86_64-floating-point-clamp.rs.

Co-Authored-By: scottmcm <scottmcm@users.noreply.github.com>
2022-08-16 19:45:44 -04:00
Matthias Krüger
0b19a185db Rollup merge of #100460 - cuviper:drop-llvm-12, r=nagisa
Update the minimum external LLVM to 13

With this change, we'll have stable support for LLVM 13 through 15 (pending release).
For reference, the previous increase to LLVM 12 was #90175.

r? `@nagisa`
2022-08-16 06:05:57 +02:00
Titus
8e80c39d2d Fix trailing space showing up in example
The current text is rendered as: U+005B ..= U+0060 ``[ \ ] ^ _ ` ``, or.
This patch changes that to render as: U+005B ..= U+0060 `` [ \ ] ^ _ ` ``, or

The reason for that is that CommonMark has a solution for starting or ending inline code with a backtick/grave accent: padding both sides with a space makes that padding disappear.
2022-08-15 16:18:00 +02:00
Urgau
3f10e6c86d Say that the identity holds only for all finite numbers (aka not NaN) 2022-08-15 12:47:05 +02:00
Orson Peters
712bf2a07a Added tracking issue numbers for float_next_up_down. 2022-08-15 12:33:00 +02:00
Orson Peters
04681898f0 Added next_up and next_down for f32/f64. 2022-08-15 12:32:53 +02:00
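A quick illustration of the new API (a sketch, assuming the unstable `float_next_up_down` gate from the tracking-issue commit above):

```rust
#![feature(float_next_up_down)] // unstable at the time of these commits

fn main() {
    // next_up returns the smallest value strictly greater than self.
    assert_eq!(1.0f32.next_up(), 1.0 + f32::EPSILON);
    // next_down mirrors it; the spacing below 1.0 is half as wide.
    assert_eq!(1.0f32.next_down(), 1.0 - f32::EPSILON / 2.0);
    // Stepping up from zero lands on the smallest positive subnormal.
    assert_eq!(0.0f32.next_up(), f32::from_bits(1));
}
```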
Scott McMurray
7680c8b690 Properly forward ByRefSized::fold to the inner iterator 2022-08-14 22:55:30 -07:00
Josh Stone
2970ad8aee Update the minimum external LLVM to 13 2022-08-14 13:46:51 -07:00
austinabell
00bc9e8ac4 fix(iter::skip): Optimize next and nth implementations of Skip 2022-08-14 13:25:13 -04:00
Dylan DPC
482a6eaf10 Rollup merge of #100026 - WaffleLapkin:array-chunks, r=scottmcm
Add `Iterator::array_chunks` (take N+1)

A revival of https://github.com/rust-lang/rust/pull/92393.

r? `@Mark-Simulacrum`
cc `@rossmacarthur` `@scottmcm` `@the8472`

I've tried to address most of the review comments on the previous attempt. The only thing I didn't address is the `try_fold` implementation; I've left the "custom" one for now, as I'm not sure what exactly it should use.
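A usage sketch under the unstable `iter_array_chunks` gate this PR introduces:

```rust
#![feature(iter_array_chunks)] // unstable gate added by this PR

fn main() {
    let mut chunks = [1, 2, 3, 4, 5].into_iter().array_chunks::<2>();
    assert_eq!(chunks.next(), Some([1, 2]));
    assert_eq!(chunks.next(), Some([3, 4]));
    assert_eq!(chunks.next(), None);
    // The leftover element is exposed separately.
    assert_eq!(chunks.into_remainder().unwrap().as_slice(), &[5]);
}
```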
2022-08-14 17:09:14 +05:30
Ralf Jung
2dc9bf0fa0 nicer Miri backtraces for from_exposed_addr 2022-08-13 12:55:43 -04:00
Mark Rousskov
154a09dd91 Adjust cfgs 2022-08-12 16:28:15 -04:00
Dylan DPC
da3b89d0bf Rollup merge of #100255 - thedanvail:issue-98861-fix, r=joshtriplett
Adding more verbose documentation for `std::fmt::Write`

Attempts to address #98861
2022-08-12 20:39:13 +05:30
Dylan DPC
51eed00ca9 Rollup merge of #100030 - WaffleLapkin:nice_pointer_sis, r=scottmcm
cleanup code w/ pointers in std a little

Use pointer methods (`byte_add`, `null_mut`, etc) to make code in std a little nicer.
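A before/after flavor of the cleanup (illustrative; `byte_add` was still unstable under `pointer_byte_offsets` at the time):

```rust
#![feature(pointer_byte_offsets)] // `byte_add` was unstable at the time

fn main() {
    let x = 0u32;
    let p: *const u32 = &x;

    // Before-style: spelled with casts.
    let _old_null = 0 as *mut u8;
    let _old_off = unsafe { (p as *const u8).add(2) as *const u32 };

    // After-style: named pointer methods.
    let _new_null: *mut u8 = core::ptr::null_mut();
    let _new_off = unsafe { p.byte_add(2) };
}
```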
2022-08-12 20:39:10 +05:30
Maybe Waffle
5fbcde1b55 fill-in tracking issue for feature(iter_array_chunks) 2022-08-12 15:04:29 +04:00
Maybe Waffle
eb6b729545 address review comments 2022-08-12 14:57:15 +04:00
Matthias Krüger
37efd55210 Rollup merge of #99511 - RalfJung:raw_eq, r=wesleywiser
make raw_eq precondition more restrictive

Specifically, don't allow comparing pointers that way. Comparing pointers is subtle because you have to talk about what happens to the provenance.

This matches what [Miri already implements](https://play.rust-lang.org/?version=nightly&mode=debug&edition=2021&gist=9eb1dfb8a61b5a2d4a7cee43df2717af), and all existing users are fine with this.

If raw_eq on pointers is ever desired, we can adjust the intrinsic spec and Miri implementation as needed, but for now that just seems unnecessary. Also, this is a const intrinsic, and in const, comparing pointers this way is *not possible* -- so if we allowed the intrinsic to compare pointers in general, we would need to impose an extra restriction saying that in const-context, pointers are *not* okay.
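A minimal sketch of a use that stays within the tightened precondition (plain integer data, no pointers, no padding):

```rust
#![feature(core_intrinsics)] // raw_eq is a perma-unstable intrinsic

fn main() {
    // u32 arrays contain no padding and no pointers, which is exactly
    // what the tightened precondition requires for a byte-wise compare.
    let a = [1u32, 2, 3];
    let b = [1u32, 2, 3];
    assert!(unsafe { std::intrinsics::raw_eq(&a, &b) });
}
```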
2022-08-11 22:53:01 +02:00
Dylan DPC
d749914f79 Rollup merge of #100184 - Kixunil:stabilize_ptr_const_cast, r=m-ou-se
Stabilize ptr_const_cast

This stabilizes `ptr_const_cast` feature as was decided in a recent
[FCP](https://github.com/rust-lang/rust/issues/92675#issuecomment-1190660233)

Closes #92675
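The stabilized methods in action (a minimal sketch):

```rust
fn main() {
    let mut x = 5;
    let p: *mut i32 = &mut x;
    // Stabilized here: explicit constness conversions on raw pointers,
    // replacing `as *const _` / `as *mut _` casts.
    let q: *const i32 = p.cast_const();
    let r: *mut i32 = q.cast_mut();
    assert_eq!(q as usize, r as usize); // same address either way
}
```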
2022-08-11 22:46:58 +05:30
Ralf Jung
338d7c2fb0 more typos
Co-authored-by: Nicholas Nethercote <n.nethercote@gmail.com>
2022-08-11 07:37:22 -04:00
bors
908fc5b26d Auto merge of #99174 - scottmcm:reoptimize-layout-array, r=joshtriplett
Reoptimize layout array

This way it's one check instead of two, so hopefully (cc #99117) it'll be simpler for rustc perf too 🤞

Quick demonstration:
```rust
pub fn demo(n: usize) -> Option<Layout> {
    Layout::array::<i32>(n).ok()
}
```

Nightly: <https://play.rust-lang.org/?version=nightly&mode=release&edition=2021&gist=e97bf33508aa03f38968101cdeb5322d>
```nasm
	mov	rax, rdi
	mov	ecx, 4
	mul	rcx
	seto	cl
	movabs	rdx, 9223372036854775805
	xor	esi, esi
	cmp	rax, rdx
	setb	sil
	shl	rsi, 2
	xor	edx, edx
	test	cl, cl
	cmove	rdx, rsi
	ret
```

This PR (note no `mul`, in addition to being much shorter):
```nasm
	xor	edx, edx
	lea	rax, [4*rcx]
	shr	rcx, 61
	sete	dl
	shl	rdx, 2
	ret
```

This is built atop `@CAD97`'s #99136; the new changes are in cb8aba66ef6a0e17f08a0574e4820653e31b45a0.

I added a bunch more tests for `Layout::from_size_align` and `Layout::array` too.
2022-08-10 23:50:18 +00:00
Ralf Jung
d1cace5a97 grammar
Co-authored-by: Frank Steffahn <fdsteffahn@gmail.com>
2022-08-10 16:15:21 -04:00
Michael Goulet
eff71b9927 Rollup merge of #100371 - xfix:inline-from-bytes-with-nul-unchecked-rt-impl, r=scottmcm
Inline CStr::from_bytes_with_nul_unchecked::rt_impl

Currently `CStr::from_bytes_with_nul_unchecked::rt_impl` is not being inlined. The following function:

```rust
pub unsafe fn from_bytes_with_nul_unchecked(bytes: &[u8]) {
    CStr::from_bytes_with_nul_unchecked(bytes);
}
```

Outputs the following assembly on current nightly

```asm
example::from_bytes_with_nul_unchecked:
        jmp     qword ptr [rip + _ZN4core3ffi5c_str4CStr29from_bytes_with_nul_unchecked7rt_impl17h026f29f3d6a41333E@GOTPCREL]
```

Meanwhile on beta this provides the following assembly:

```asm
example::from_bytes_with_nul_unchecked:
        ret
```

This pull request adds an `#[inline]` annotation to `rt_impl` to fix a code-generation regression for `CStr::from_bytes_with_nul_unchecked`.
2022-08-10 09:28:25 -07:00
Michael Goulet
efa182f3db Rollup merge of #100353 - theli-ua:master, r=joshtriplett
Fix doc links in core::time::Duration::as_secs
2022-08-10 09:28:23 -07:00
Martin Habovstiak
2a3ce7890c Stabilize ptr_const_cast
This stabilizes `ptr_const_cast` feature as was decided in a recent
[FCP](https://github.com/rust-lang/rust/issues/92675#issuecomment-1190660233)

Closes #92675
2022-08-10 17:22:58 +02:00
Konrad Borowski
de95117ea8 Inline CStr::from_bytes_with_nul_unchecked::rt_impl 2022-08-10 12:21:17 +00:00