Commit Graph

7152 Commits

Author SHA1 Message Date
David Tolnay
441913626d Move is_power_of_two into unsigned part of signedness_dependent_methods 2024-01-14 12:45:44 -08:00
David Tolnay
63256af236 Move nonzero_unsigned_signed_operations methods into the omnibus impl block 2024-01-14 12:45:43 -08:00
David Tolnay
c6d776ef4b Work around rustfmt doc attribute indentation bug 2024-01-14 12:45:41 -08:00
David Tolnay
b21b9cc901 Unindent nonzero_integer_signedness_dependent_methods macro body 2024-01-14 12:45:37 -08:00
David Tolnay
4291b3ff62 Move signedness dependent methods into the omnibus impl block 2024-01-14 12:45:10 -08:00
David Tolnay
757ed25667 Move Neg impl into the macro that generates Div and Rem 2024-01-14 12:45:09 -08:00
Scott McMurray
23483664a2 Split out option::unwrap_failed like we have result::unwrap_failed
...and like `option::expect_failed`
2024-01-14 12:45:01 -08:00
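A minimal sketch of the pattern this commit names (the helper and its panic message below are illustrative, not the actual libcore items): the panicking path is pulled out of `unwrap` into a separate `#[cold]`, `#[inline(never)]` helper so the happy path stays small and cheap to inline.

```rust
// Illustrative sketch only; names and the panic message are assumptions.
#[cold]
#[inline(never)]
fn unwrap_failed() -> ! {
    panic!("called `unwrap()` on a `None` value");
}

#[inline]
fn my_unwrap<T>(opt: Option<T>) -> T {
    match opt {
        // Fast path: no panic machinery is inlined here.
        Some(v) => v,
        None => unwrap_failed(),
    }
}

fn main() {
    assert_eq!(my_unwrap(Some(7)), 7);
}
```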
David Tolnay
f846ed53e4 Move leading_zeros and trailing_zeros methods into nonzero_integer macro 2024-01-14 12:45:00 -08:00
David Tolnay
a78d9a6de1 Unindent nonzero_integer_impl_div_rem macro body 2024-01-14 12:43:51 -08:00
David Tolnay
81e1a7c6b5 Move impl Div and Rem into nonzero_integer macro 2024-01-14 12:43:50 -08:00
David Tolnay
3de0af1a4d Move 'impl FromStr for NonZero' into nonzero_integer macro 2024-01-14 12:43:49 -08:00
David Tolnay
a6152cdd9a Format nonzero_integer macro calls same way we do the primitive int impls
The `key = $value` style will be beneficial as we introduce some more
macro arguments here in later commits.
2024-01-14 12:43:49 -08:00
David Tolnay
9196d2a552 Unindent nonzero_integer macro body 2024-01-14 12:43:37 -08:00
David Tolnay
54cb822563 Define only a single NonZero type per macro call
Later in this stack, as the nonzero_integers macro is going to be
responsible for producing a larger fraction of the API for the NonZero
integer types, it will need to receive a number of additional arguments
beyond the ones currently seen here.

Additional arguments, especially named arguments across multiple lines,
will turn out clearer if everything in one macro call is for the same
NonZero type.

This commit adopts a similar arrangement to what we do for generating
the API of the integer primitives (`impl u8` etc), which also generates a
single type's API per top-level macro call, rather than generating all
12 impl blocks for the 12 types from one macro call.
2024-01-14 12:40:33 -08:00
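A hedged sketch of the shape this stack moves toward (the macro name, the `Self`/`Primitive` argument names, and the generated items are illustrative assumptions, not the actual libcore macro): one macro call per type, with named `key = value` arguments.

```rust
// Illustrative macro: one invocation generates the API for exactly one type.
macro_rules! nonzero_integer {
    (
        Self = $Ty:ident,
        Primitive = $Int:ty,
    ) => {
        pub struct $Ty($Int);

        impl $Ty {
            /// Returns the wrapped primitive value.
            pub const fn get(self) -> $Int {
                self.0
            }
        }
    };
}

// One call per type, mirroring how the `impl u8`, `impl u16`, ... blocks for
// the primitive integers are generated.
nonzero_integer! {
    Self = MyNonZeroU8,
    Primitive = u8,
}
nonzero_integer! {
    Self = MyNonZeroU16,
    Primitive = u16,
}

fn main() {
    println!("{}", MyNonZeroU8(3).get());
}
```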
David Tolnay
56df3bb70d Move nonzero_integers macro call to bottom of module
This way all the other macros defined in this module, such as
nonzero_leading_trailing_zeros, are available to call within the expansion of
nonzero_integers.

(Macros defined by macro_rules cannot be called from the same module above the
location of the macro_rules.)

In this commit the ability to call things like nonzero_leading_trailing_zeros is
not used yet, but later commits in this stack will consolidate the entire API of
NonZeroT so that it is generated through nonzero_integers, and will need some of
the other macros to do that.
2024-01-14 12:40:18 -08:00
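A tiny self-contained illustration of the scoping rule the commit message describes: a `macro_rules!` macro is only callable from code that appears textually below its definition in the same module.

```rust
// Calling `make_fn!()` up here would fail:
// error: cannot find macro `make_fn` in this scope
// make_fn!(too_early);

macro_rules! make_fn {
    ($name:ident) => {
        fn $name() -> u32 {
            42
        }
    };
}

// Below the definition the call works, which is why the nonzero_integers call
// is moved to the bottom of the module: every helper macro defined above it is
// then in scope inside its expansion.
make_fn!(generated);

fn main() {
    assert_eq!(generated(), 42);
}
```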
clubby789
4ca6342eb3 Add note on SpecOptionPartialEq to newtype_index 2024-01-14 00:24:39 +00:00
joboet
fa9a911a57 libs: use assert_unchecked instead of intrinsic 2024-01-13 20:10:00 +01:00
Matthias Krüger
f53caa1106 Rollup merge of #119902 - asquared31415:patch-1, r=the8472
fix typo in `fn()` docs
2024-01-13 15:10:30 +01:00
asquared31415
46ad13136c update fn pointer trait impl docs 2024-01-12 22:09:38 +00:00
asquared31415
51afc0922c fix typo in fn() docs 2024-01-12 15:51:18 -05:00
bors
2319be8e26 Auto merge of #119452 - AngelicosPhosphoros:make_nonzeroint_get_assume_nonzero, r=scottmcm
Add assume into `NonZeroIntX::get`

LLVM currently doesn't support range metadata for function arguments, so it fails to optimize nonzero integers using their invariant when they are passed as by-value function arguments.

Related to https://github.com/rust-lang/rust/issues/119422
Related to https://github.com/llvm/llvm-project/issues/76628
Related to https://github.com/rust-lang/rust/issues/49572
2024-01-12 20:18:04 +00:00
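A small illustration of the effect described above (the function below is a made-up example, and the exact codegen is of course not guaranteed): with an assume inside `get`, LLVM knows a by-value `NonZeroU32` argument is nonzero, so for instance the zero-divisor check in a division can be dropped.

```rust
use std::num::NonZeroU32;

// `d` arrives as a plain by-value argument, which carries no range metadata;
// the assume inside `NonZeroU32::get` is what tells the optimizer that the
// divisor cannot be zero.
pub fn divide(d: NonZeroU32) -> u32 {
    100 / d.get()
}

fn main() {
    println!("{}", divide(NonZeroU32::new(4).unwrap()));
}
```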
Scott McMurray
b858c591dd Tune the inlinability of Result::unwrap 2024-01-12 10:57:58 -08:00
bors
6029085a6f Auto merge of #119430 - NCGThompson:int-pow-bench, r=cuviper
Add Benchmarks for int_pow Methods.

There is quite a bit of room for improvement in the performance of the `int_pow` family of methods. I added benchmarks for those functions. In particular, there are benchmarks for small compile-time bases to measure the effect of #114390. ~~I added a lot (245), but all but 22 of them are marked with `#[ignore]`. There are a lot of macros, and I would appreciate feedback on how to simplify them.~~

~~To run benches relevant to #114390, use `./x bench core --stage 1 -- pow_base_const --include-ignored`.~~
2024-01-12 03:04:45 +00:00
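A hedged sketch of what one such micro-benchmark could look like (nightly-only `test` crate; the benchmark name and constants are illustrative, not the actual benches added by the PR):

```rust
#![feature(test)]
extern crate test;

use std::hint::black_box;
use test::Bencher;

// Measures `pow` with a small compile-time-known base, the case targeted by
// the `pow_base_const` benchmarks mentioned above.
#[bench]
fn pow_base_const_2_u32(b: &mut Bencher) {
    b.iter(|| {
        let mut acc = 0u32;
        for exp in 0..16 {
            // black_box keeps the compiler from constant-folding the loop away.
            acc = acc.wrapping_add(2u32.wrapping_pow(black_box(exp)));
        }
        acc
    });
}
```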
Nicholas Thompson
c65c35b3ef Reduced the number of int_pow benches
Also simplified the macros
2024-01-11 14:00:01 -05:00
Matthias Krüger
b3d15ebb08 Rollup merge of #119853 - klensy:rustfmt-ignore, r=cuviper
rustfmt.toml: don't ignore just any tests path, only root one

Previously any `tests` path was ignored; now only `/tests` at the repo root is.

For reference, https://git-scm.com/docs/gitignore#_pattern_format
2024-01-11 19:42:53 +01:00
Tomasz Miąsko
9d84589a96 Waker::will_wake: Compare vtable address instead of its content
Optimize will_wake implementation by comparing vtable address instead
of its content.

The existing best practice to avoid false negatives from will_wake is
to define a waker vtable as a static item. That approach continues to
work with the new implementation.

While this potentially changes the observable behaviour, the function is
documented to work on a best-effort basis. The PartialEq impl for
RawWaker remains as it was.
2024-01-11 19:39:49 +01:00
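A sketch of that best practice, using a no-op waker for illustration: because the vtable lives in a `static`, every waker built from it shares one vtable address, and `will_wake` (now comparing addresses) still reports such wakers as equal.

```rust
use std::ptr;
use std::task::{RawWaker, RawWakerVTable, Waker};

unsafe fn noop_clone(data: *const ()) -> RawWaker {
    RawWaker::new(data, &VTABLE)
}
unsafe fn noop(_data: *const ()) {}

// Keeping the vtable in a static is what makes address comparison reliable.
static VTABLE: RawWakerVTable = RawWakerVTable::new(noop_clone, noop, noop, noop);

fn noop_waker() -> Waker {
    // SAFETY: all vtable functions are no-ops, so the RawWaker contract holds.
    unsafe { Waker::from_raw(RawWaker::new(ptr::null(), &VTABLE)) }
}

fn main() {
    let a = noop_waker();
    let b = noop_waker();
    assert!(a.will_wake(&b));
}
```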
Nicholas Thompson
7dcce97686 Edited int_pow micro-benchmarks 2024-01-11 11:30:12 -05:00
Nicholas Thompson
33a47df84a Added int_pow micro-benchmarks 2024-01-11 11:30:12 -05:00
Ralf Jung
6b6f2a5a28 rint: further doc tweaks 2024-01-11 13:33:27 +01:00
klensy
aa696c5a22 apply fmt 2024-01-11 15:04:48 +03:00
Jakub Stasiak
4621357d14 Make is_global/is_unicast_global special address handling complete
IANA explicitly documents 192.0.0.9/32, 192.0.0.10/32 and 2001:30::/28 as
globally reachable[1][2], and the is_global implementations declare that they
follow IANA, so let's make this happen.

In the case of 2002::/16, IANA says N/A, so I think it's safe to say we
shouldn't return true there either.

[1] https://www.iana.org/assignments/iana-ipv4-special-registry/iana-ipv4-special-registry.xhtml
[2] https://www.iana.org/assignments/iana-ipv6-special-registry/iana-ipv6-special-registry.xhtml
2024-01-11 01:03:34 +01:00
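For illustration, a nightly-only snippet exercising the addresses the commit message mentions (`is_global` was still unstable behind the `ip` feature at this point; the comments state what the change intends, and nothing is asserted here):

```rust
#![feature(ip)]

use std::net::{Ipv4Addr, Ipv6Addr};

fn main() {
    // 192.0.0.9/32 (PCP anycast) is listed as globally reachable by IANA,
    // so after this change is_global should return true.
    println!("{}", Ipv4Addr::new(192, 0, 0, 9).is_global());

    // 2001:30::/28 is likewise listed as globally reachable.
    println!("{}", "2001:30::1".parse::<Ipv6Addr>().unwrap().is_global());

    // 2002::/16 (6to4) is marked N/A, so it is not treated as global.
    println!("{}", "2002::1".parse::<Ipv6Addr>().unwrap().is_global());
}
```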
The8472
37d26c719d Implement in-place iteratation markers for iter::{Copied, Cloned} 2024-01-10 19:03:57 +01:00
The8472
3aa73135cf bench trustedrandomaccess specialization in zip 2024-01-10 18:59:44 +01:00
The8472
451a3b1775 implement TrustedRandomAccess and TrustedLen for Skip 2024-01-10 18:59:42 +01:00
The8472
a2a7caacf7 implement TrustedLen for StepBy 2024-01-10 18:55:34 +01:00
Emil Gardström
075f2e0345 Add #[track_caller] to the "From implies Into" impl 2024-01-10 10:30:54 +01:00
Trevor Gross
500d6f6479 Stabilize slice_first_last_chunk
This stabilizes all methods under `slice_first_last_chunk`.

Additionally, it const stabilizes the non-mut functions and moves the `_mut`
functions under `const_slice_first_last_chunk`. These are blocked on
`const_mut_refs`.

As part of this change, `slice_split_at_unchecked` was marked const-stable for
internal use (but not fully stable).
2024-01-10 03:06:49 -05:00
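For reference, a small usage example of the methods being stabilized (the values are just illustrative):

```rust
fn main() {
    let data = [1_u8, 2, 3, 4, 5];

    // first_chunk / last_chunk return Option<&[T; N]>: an array reference
    // whose length is known at compile time.
    assert_eq!(data.first_chunk::<2>(), Some(&[1, 2]));
    assert_eq!(data.last_chunk::<2>(), Some(&[4, 5]));

    // Requesting more elements than the slice holds yields None.
    assert_eq!(data.first_chunk::<6>(), None);

    // The _mut variants work the same way but hand back &mut [T; N].
    let mut buf = [0_u8; 4];
    if let Some(head) = buf.first_chunk_mut::<2>() {
        head[0] = 1;
    }
    assert_eq!(buf, [1, 0, 0, 0]);
}
```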
Matthias Krüger
3fcddf19e7 Rollup merge of #119782 - RalfJung:rint, r=cuviper
rint intrinsics: caution against actually trying to check for floating-point exceptions
2024-01-10 06:28:45 +01:00
Ralf Jung
fa5bef849e rint intrinsics: caution against actually trying to check for floating-point exceptions 2024-01-09 22:19:25 +01:00
bors
190f4c9611 Auto merge of #116846 - krtab:slice_compare_no_memcmp_opt, r=the8472
A more efficient slice comparison implementation for T: !BytewiseEq

(This is a follow up PR on #113654)

This PR changes the implementation for `[T]` slice comparison when `T: !BytewiseEq`. The previous implementation using zip was not optimized properly by the compiler, which didn't leverage the fact that both lengths were equal. The performance improvement is, for example, 20% when testing that `[Some(0_u64); 4096].as_slice() == [Some(0_u64); 4096].as_slice()`.
2024-01-09 20:52:34 +00:00
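A simplified sketch of the approach (illustrative, not the actual libcore specialization): check the lengths once up front, then compare element by element with a single index, so the compiler can see that every access on both sides is in bounds.

```rust
fn slices_eq<T: PartialEq>(a: &[T], b: &[T]) -> bool {
    if a.len() != b.len() {
        return false;
    }
    for i in 0..a.len() {
        // Both accesses use i < a.len() == b.len(), so no extra bounds handling
        // is needed inside the loop.
        if a[i] != b[i] {
            return false;
        }
    }
    true
}

fn main() {
    assert!(slices_eq(&[Some(0_u64); 8], &[Some(0_u64); 8]));
    assert!(!slices_eq(&[1, 2, 3], &[1, 2]));
}
```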
Miguel Ojeda
18a1ca6a17 core: panic: fix broken link
Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
2024-01-09 14:15:45 +01:00
Matthias Krüger
b0c492cd6e Rollup merge of #118979 - ChrisDenton:unwrap-const, r=Nilstrieb,dtolnay
Use `assert_unsafe_precondition` for `char::from_u32_unchecked`

Use `assert_unsafe_precondition` in `char::from_u32_unchecked` so that it can be stabilized as `const`.
2024-01-09 05:33:21 +01:00
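A small usage example of the function in question; the property named in the SAFETY comment is the precondition that the debug-only check verifies.

```rust
fn main() {
    // The checked constructor, for comparison.
    assert_eq!(char::from_u32(0x1F980), Some('🦀'));

    // SAFETY: 0x1F980 is a valid Unicode scalar value (not a surrogate and
    // <= 0x10FFFF), which is exactly the precondition being asserted.
    let c = unsafe { char::from_u32_unchecked(0x1F980) };
    assert_eq!(c, '🦀');
}
```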
Matthias Krüger
668e8b5541 Rollup merge of #119598 - Laura7089:fix/deref-typo, r=Nilstrieb
Fix a typo in core::ops::Deref's doc
2024-01-09 00:19:33 +01:00
Arthur Carcano
5b041abc8c A more efficient slice comparison implementation for T: !BytewiseEq
The previous implementation was not optimized properly by the compiler,
which didn't leverage the fact that both lengths were equal.
2024-01-08 16:36:48 +01:00
Matthias Krüger
a9b6908e7f Rollup merge of #116129 - fu5ha:better-pin-docs-2, r=Amanieu
Rewrite `pin` module documentation to clarify usage and invariants

The documentation of `pin` today does not give a complete treatment of pinning from first principles, nor does it adequately help build intuition and understanding for how the different elements of the pinning story fit together.

This rewrite attempts to address these gaps in a way that makes the concept more approachable while also making the documentation more normative.

This PR picks up where `@mcy` left off in #88500 (thanks to him for the original work and `@Manishearth` for mentioning it such that I originally found it). I've directly incorporated much of the feedback left on the original PR and have rewritten and changed some of the main conceits of the prose to better adhere to the feedback from the reviewers on that PR or just explain something in (hopefully) a better way.
2024-01-08 00:38:33 +01:00
Manish Goregaokar
7fd841c098 link 2024-01-07 08:57:23 -08:00
Manish Goregaokar
df6d44961d Update library/core/src/pin.rs
Co-authored-by: Ralf Jung <post@ralfj.de>
2024-01-07 08:56:25 -08:00
Manish Goregaokar
b1830f130a clean up structural pinning 2024-01-07 08:56:24 -08:00
Manish Goregaokar
a573c7c409 footnote on dropping futures 2024-01-07 08:56:24 -08:00
Manish Goregaokar
6a54ed71c0 valid 2024-01-07 08:56:24 -08:00