Reoptimize layout array
This way it's one check instead of two, so hopefully (cc #99117) it'll be simpler for rustc perf too 🤞
Quick demonstration:
```rust
use std::alloc::Layout;

pub fn demo(n: usize) -> Option<Layout> {
    Layout::array::<i32>(n).ok()
}
```
Nightly: <https://play.rust-lang.org/?version=nightly&mode=release&edition=2021&gist=e97bf33508aa03f38968101cdeb5322d>
```nasm
mov rax, rdi
mov ecx, 4
mul rcx
seto cl
movabs rdx, 9223372036854775805
xor esi, esi
cmp rax, rdx
setb sil
shl rsi, 2
xor edx, edx
test cl, cl
cmove rdx, rsi
ret
```
This PR (note no `mul`, in addition to being much shorter):
```nasm
xor edx, edx
lea rax, [4*rcx]
shr rcx, 61
sete dl
shl rdx, 2
ret
```
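The trick, as a minimal sketch (hypothetical `array_layout` helper, not the exact library code): divide first to precompute the largest valid `n`, so a single comparison rules out both `usize` overflow and sizes above `isize::MAX`, and the multiplication afterwards can no longer fail.
```rust
use std::alloc::Layout;

// Sketch only: one threshold check replaces the separate overflow and
// isize::MAX checks. When `elem_size` is a compile-time constant (as in
// `Layout::array::<i32>`), the division folds into a shift, as in the
// `shr rcx, 61` above.
pub fn array_layout(elem_size: usize, align: usize, n: usize) -> Option<Layout> {
    let max_size = isize::MAX as usize;
    if elem_size != 0 && n > max_size / elem_size {
        return None;
    }
    // Cannot overflow or exceed isize::MAX now.
    Layout::from_size_align(elem_size * n, align).ok()
}
```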
This is built atop `@CAD97`'s #99136; the new changes are in cb8aba66ef6a0e17f08a0574e4820653e31b45a0.
I added a bunch more tests for `Layout::from_size_align` and `Layout::array` too.
Inline CStr::from_bytes_with_nul_unchecked::rt_impl
Currently `CStr::from_bytes_with_nul_unchecked::rt_impl` is not being inlined. The following function:
```rust
use std::ffi::CStr;

pub unsafe fn from_bytes_with_nul_unchecked(bytes: &[u8]) {
    CStr::from_bytes_with_nul_unchecked(bytes);
}
```
outputs the following assembly on current nightly:
```asm
example::from_bytes_with_nul_unchecked:
jmp qword ptr [rip + _ZN4core3ffi5c_str4CStr29from_bytes_with_nul_unchecked7rt_impl17h026f29f3d6a41333E@GOTPCREL]
```
Meanwhile, on beta, it produces the following assembly:
```asm
example::from_bytes_with_nul_unchecked:
ret
```
This pull request adds an `#[inline]` annotation to `rt_impl` to fix a code-generation regression for `CStr::from_bytes_with_nul_unchecked`.
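For context, a generic illustration of the rule at play (hypothetical `helper`, not the `rt_impl` source): non-generic functions are only candidates for cross-crate inlining when marked `#[inline]`, which is why the missing attribute left callers jumping through the GOT.
```rust
// Without `#[inline]`, a non-generic function is codegenned only in its
// defining crate, and downstream crates must emit a real call -- the
// `jmp ... @GOTPCREL` seen above. With it, the body is available to LLVM
// in every crate, so trivial callers collapse to straight-line code.
#[inline]
pub fn helper(bytes: &[u8]) -> usize {
    bytes.len()
}
```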
docs: remove repetition in `is_numeric` function docs
In https://github.com/rust-lang/rust/pull/99628 we introduced new docs for the `is_numeric` function; this is a follow-up PR that removes some unnecessary repetition that may have been introduced during a rebase.
`@rustbot` r? `@joshtriplett`
Add back Send and Sync impls on ChunksMut iterators
Fixes https://github.com/rust-lang/rust/issues/100014
These were accidentally removed in #94247 because the representation was changed from `&mut [T]` to `*mut T`, which has `!Send + !Sync`.
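A minimal model of the restored impls (hypothetical `MyChunksMut`, not the real `core::slice` type): the raw-pointer field suppresses the auto traits, so the impls that the old `&mut [T]` field provided implicitly now have to be written by hand, with the same bounds.
```rust
use std::marker::PhantomData;

// The `*mut T` field makes this `!Send + !Sync` by default, even though
// semantically it is still just an exclusive borrow of a slice.
struct MyChunksMut<'a, T> {
    ptr: *mut T,
    len: usize,
    chunk_size: usize,
    _marker: PhantomData<&'a mut T>,
}

// Restore the impls with the bounds `&mut [T]` would imply.
unsafe impl<'a, T: Send> Send for MyChunksMut<'a, T> {}
unsafe impl<'a, T: Sync> Sync for MyChunksMut<'a, T> {}
```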
do not claim that transmute is like memcpy
Saying transmute is like memcpy is not a well-formed statement, since memcpy is by-ref whereas transmute is by-val. The by-val nature of transmute inherently means that padding is lost along the way. (This is not specific to transmute, this is how all by-value operations work.) So adjust the docs to clarify this aspect.
Cc `@workingjubilee`
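To make the distinction concrete (an illustrative sketch, not taken from the docs change): a by-ref copy moves all the storage bytes, padding included, while a by-value move carries only the value, so the padding byte comes out uninitialized.
```rust
use std::{mem, ptr};

#[repr(C)]
#[derive(Clone, Copy)]
struct Padded {
    a: u8, // one padding byte sits between `a` and `b`
    b: u16,
}

fn main() {
    let p = Padded { a: 1, b: 2 };

    // By-ref (memcpy-style): copies all size_of::<Padded>() == 4 storage
    // bytes, padding included.
    let mut raw = [0u8; 4];
    unsafe {
        ptr::copy_nonoverlapping(
            &p as *const Padded as *const u8,
            raw.as_mut_ptr(),
            mem::size_of::<Padded>(),
        );
    }
    assert_eq!(raw[0], p.a);

    // By-val (transmute-style): only the value moves, not its padding, so
    // the corresponding byte of the result would be uninitialized --
    // materializing it as a `[u8; 4]` is UB, hence left commented out.
    // let bytes: [u8; 4] = unsafe { mem::transmute(p) };
}
```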
Don't derive `PartialEq::ne`
Currently we skip deriving `PartialEq::ne` for C-like (fieldless) enums
and empty structs, thus relying on the default `ne`. This behaviour is
unnecessarily conservative, because the `PartialEq` docs say this:
> Implementations must ensure that eq and ne are consistent with each other:
>
> `a != b` if and only if `!(a == b)` (ensured by the default
> implementation).
This means that the default implementation (`!(a == b)`) is always good
enough. So this commit changes things such that `ne` is never derived.
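A sketch of the effect (hand-written approximation, not the actual derive expansion): for a fieldless enum the derive now emits only `eq`, and `ne` falls back to the trait's provided default.
```rust
enum Flavor { Sweet, Sour }

impl PartialEq for Flavor {
    #[inline]
    fn eq(&self, other: &Self) -> bool {
        std::mem::discriminant(self) == std::mem::discriminant(other)
    }
    // No `ne` here: the provided default `!(self == other)` applies, and
    // the documented contract guarantees that is always correct.
}
```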
The motivation for this change is that not deriving `ne` reduces compile
times and binary sizes.
Observable behaviour may change if a user has defined a type `A` with an
inconsistent `PartialEq` and then defines a type `B` that contains an
`A` and also derives `PartialEq`. Such code is already buggy and
preserving bug-for-bug compatibility isn't necessary.
Two side-effects of the change:
- There is only one error message produced for types where `PartialEq`
cannot be derived, instead of two.
- For coverage reports, some warnings about generated `ne` methods not
being executed have disappeared.
Both side-effects seem fine, and possibly preferable.