This code is copied from
https://github.com/rust-lang/rust/blob/master/library/std/src/f32.rs
and
https://github.com/rust-lang/rust/blob/master/library/std/src/f64.rs
Using `r + rhs.abs()` is faster than computing the result directly.
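For reference, here is a minimal sketch of the two strategies being compared; "directly" is my reading (branching on the sign of `rhs`), and only the first variant is what the benchmark below actually measures:
```rust
// The benchmarked variant: when the remainder is negative, add |rhs|.
fn rem_euclid_abs(a: i32, rhs: i32) -> i32 {
    let r = a % rhs;
    if r < 0 { r + rhs.abs() } else { r }
}

// A "direct" alternative (hypothetical name): branch on the sign of rhs
// instead of taking the absolute value.
fn rem_euclid_branch(a: i32, rhs: i32) -> i32 {
    let r = a % rhs;
    if r < 0 {
        if rhs < 0 { r - rhs } else { r + rhs }
    } else {
        r
    }
}
```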
Bench results:
```
$ cargo bench
Compiling div-euclid v0.1.0 (/me/div-euclid)
Finished bench [optimized] target(s) in 1.01s
Running unittests src/lib.rs (target/release/deps/div_euclid-7a4530ca7817d1ef)
running 7 tests
test tests::it_works ... ignored
test tests::bench_aaabs ... bench: 10,498,793 ns/iter (+/- 104,360)
test tests::bench_aadefault ... bench: 11,061,862 ns/iter (+/- 94,107)
test tests::bench_abs ... bench: 10,477,193 ns/iter (+/- 81,942)
test tests::bench_default ... bench: 10,622,983 ns/iter (+/- 25,119)
test tests::bench_zzabs ... bench: 10,481,971 ns/iter (+/- 43,787)
test tests::bench_zzdefault ... bench: 11,074,976 ns/iter (+/- 29,633)
test result: ok. 0 passed; 0 failed; 1 ignored; 6 measured; 0 filtered out; finished in 19.35s
```
Bench code:
```rust
#![feature(test)]
extern crate test;

// Euclidean remainder using the `abs` trick from the float implementations.
fn rem_euclid(a: i32, rhs: i32) -> i32 {
    let r = a % rhs;
    if r < 0 { r + rhs.abs() } else { r }
}

#[cfg(test)]
mod tests {
    use super::*;
    use test::Bencher;
    use rand::prelude::*;
    use rand::rngs::SmallRng; // requires rand's `small_rng` feature

    const N: i32 = 1000;

    #[test]
    fn it_works() {
        let a: i32 = 7; // or any other integer type
        let b = 4;
        let d: Vec<i32> = (-N..=N).collect();
        let n: Vec<i32> = (-N..0).chain(1..=N).collect();
        for i in &d {
            for j in &n {
                assert_eq!(i.rem_euclid(*j), rem_euclid(*i, *j));
            }
        }
        assert_eq!(rem_euclid(a, b), 3);
        assert_eq!(rem_euclid(-a, b), 1);
        assert_eq!(rem_euclid(a, -b), 3);
        assert_eq!(rem_euclid(-a, -b), 1);
    }

    // The aa*/zz* variants shuffle the inputs with a fixed seed so both
    // implementations see the same values in a randomized order.
    #[bench]
    fn bench_aaabs(b: &mut Bencher) {
        let mut d: Vec<i32> = (-N..=N).collect();
        let mut n: Vec<i32> = (-N..0).chain(1..=N).collect();
        let mut rng = SmallRng::from_seed([
            1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,
            17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 21,
        ]);
        n.shuffle(&mut rng);
        d.shuffle(&mut rng);
        n.shuffle(&mut rng);
        b.iter(|| {
            let mut res = 0;
            for i in &d {
                for j in &n {
                    res += rem_euclid(*i, *j);
                }
            }
            res
        });
    }

    #[bench]
    fn bench_aadefault(b: &mut Bencher) {
        let mut d: Vec<i32> = (-N..=N).collect();
        let mut n: Vec<i32> = (-N..0).chain(1..=N).collect();
        let mut rng = SmallRng::from_seed([
            1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,
            17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 21,
        ]);
        n.shuffle(&mut rng);
        d.shuffle(&mut rng);
        n.shuffle(&mut rng);
        b.iter(|| {
            let mut res = 0;
            for i in &d {
                for j in &n {
                    res += i.rem_euclid(*j);
                }
            }
            res
        });
    }

    #[bench]
    fn bench_abs(b: &mut Bencher) {
        let d: Vec<i32> = (-N..=N).collect();
        let n: Vec<i32> = (-N..0).chain(1..=N).collect();
        b.iter(|| {
            let mut res = 0;
            for i in &d {
                for j in &n {
                    res += rem_euclid(*i, *j);
                }
            }
            res
        });
    }

    #[bench]
    fn bench_default(b: &mut Bencher) {
        let d: Vec<i32> = (-N..=N).collect();
        let n: Vec<i32> = (-N..0).chain(1..=N).collect();
        b.iter(|| {
            let mut res = 0;
            for i in &d {
                for j in &n {
                    res += i.rem_euclid(*j);
                }
            }
            res
        });
    }

    #[bench]
    fn bench_zzabs(b: &mut Bencher) {
        let mut d: Vec<i32> = (-N..=N).collect();
        let mut n: Vec<i32> = (-N..0).chain(1..=N).collect();
        let mut rng = SmallRng::from_seed([
            1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,
            17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 21,
        ]);
        d.shuffle(&mut rng);
        n.shuffle(&mut rng);
        d.shuffle(&mut rng);
        b.iter(|| {
            let mut res = 0;
            for i in &d {
                for j in &n {
                    res += rem_euclid(*i, *j);
                }
            }
            res
        });
    }

    #[bench]
    fn bench_zzdefault(b: &mut Bencher) {
        let mut d: Vec<i32> = (-N..=N).collect();
        let mut n: Vec<i32> = (-N..0).chain(1..=N).collect();
        let mut rng = SmallRng::from_seed([
            1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,
            17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 21,
        ]);
        d.shuffle(&mut rng);
        n.shuffle(&mut rng);
        d.shuffle(&mut rng);
        b.iter(|| {
            let mut res = 0;
            for i in &d {
                for j in &n {
                    res += i.rem_euclid(*j);
                }
            }
            res
        });
    }
}
```
The final return value doesn't need to be `try!`-ed at all -- we can just
return the checked option directly. The optimizer can probably figure
this out anyway, but there's no need to make it do that work here.
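As a hedged illustration of the pattern (with `checked_add` standing in for whichever checked operation the commit actually touched):
```rust
// Before: unwrap the checked result with `?` only to re-wrap it in `Some`.
fn checked_op_before(a: i32, b: i32) -> Option<i32> {
    Some(a.checked_add(b)?)
}

// After: return the checked Option directly.
fn checked_op_after(a: i32, b: i32) -> Option<i32> {
    a.checked_add(b)
}
```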
Stabilize `#![feature(mixed_integer_ops)]`
Tracked and FCP completed in #87840.
@rustbot label +T-libs-api +S-waiting-on-review +relnotes
r? rust-lang/t-libs-api
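For illustration, a few of the mixed-sign operations this stabilizes (method set as tracked in #87840):
```rust
// Unsigned receivers gain *_signed methods, signed receivers *_unsigned ones.
assert_eq!(10u8.checked_add_signed(-3), Some(7));
assert_eq!(10u8.checked_add_signed(-20), None); // would go below zero
assert_eq!(i8::MAX.saturating_add_unsigned(5), i8::MAX);
assert_eq!((-2i8).wrapping_add_unsigned(255), -3); // wraps modulo 2^8
```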
Reimplement `carrying_add` and `borrowing_sub` for signed integers.
As per the discussion in #85532, this PR reimplements `carrying_add` and `borrowing_sub` for signed integers.
It also adds unit tests for both unsigned and signed integers, emphasizing the behaviour of the methods.
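A sketch of the signed behaviour under test (nightly-only; the exact semantics are whatever #85532 settled on, shown here as signed overflow of the three-way sum):
```rust
#![feature(bigint_helper_methods)]

fn main() {
    // The carry-in is added to `self + rhs`; the bool flags signed overflow.
    assert_eq!(5i8.carrying_add(2, false), (7, false));
    assert_eq!(i8::MAX.carrying_add(0, true), (i8::MIN, true));
    // The borrow-in is subtracted; again the bool flags signed overflow.
    assert_eq!(5i8.borrowing_sub(2, true), (2, false));
}
```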
Remove `#[rustc_deprecated]`
This removes `#[rustc_deprecated]` and introduces diagnostics to point users in the right direction (that being `#[deprecated]`). All uses of `#[rustc_deprecated]` have been converted. CI is expected to fail initially; this requires #95958, which includes converting `stdarch`.
I plan to follow up in a short while (maybe a bootstrap cycle?) and remove the diagnostics, as they're only intended to be short-term.
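For context, deprecations in the standard library now go through the regular attribute; an illustrative (entirely hypothetical) item:
```rust
// Plain `#[deprecated]` carries the same information `#[rustc_deprecated]` did.
#[deprecated(since = "1.61.0", note = "use `new_fn` instead")]
pub fn old_fn() {}
```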
Require const stability attribute on all stable functions that are `const`
This PR requires all stable functions (of all kinds) that are `const fn` to have a `#[rustc_const_stable]` or `#[rustc_const_unstable]` attribute. Stability was previously implied if omitted; a follow-up PR is planned to change the fallback to be unstable.
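Illustratively (the attributes are the real staged-API ones; the function is made up), a stable `const fn` inside the standard library now needs both:
```rust
// Only compiles inside a crate with #![feature(staged_api)], e.g. core/std.
#[stable(feature = "example_feature", since = "1.0.0")]
#[rustc_const_stable(feature = "example_feature", since = "1.0.0")]
pub const fn example() -> i32 {
    42
}
```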
These are working well for *unsigned* types, but for the signed ones there are a bunch of open questions about what the semantics and API should be. And for the main use case, helpers for big-integer implementations, there's no need for the signed versions anyway.
And there are plenty of other methods that exist for unsigned types but not signed ones, like `next_power_of_two`, so this isn't unusual.
Fixes #90541
Using short-circuit operators makes it easier to perform some kinds of
source code analysis, like MC/DC code coverage (a requirement in
safety-critical environments). The optimized x86 assembly is the same
between the old and new versions:
```
xor eax, eax
test esi, esi
je .LBB0_1
cmp edi, -2147483648
jne .LBB0_4
cmp esi, -1
jne .LBB0_4
ret
.LBB0_1:
ret
.LBB0_4:
mov eax, edi
cdq
idiv esi
mov eax, 1
ret
```
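A minimal sketch of the shape of such code (mirroring `overflowing_div` semantics, not the exact std source):
```rust
// i32 division overflows only for MIN / -1. With `&&`, each comparison is a
// separate decision that MC/DC tooling can cover individually.
fn overflowing_div_sketch(lhs: i32, rhs: i32) -> (i32, bool) {
    if lhs == i32::MIN && rhs == -1 {
        (lhs, true) // wrapped quotient, overflow flagged
    } else {
        (lhs / rhs, false)
    }
}
```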
Using short-circuit operators makes it easier to perform some kinds of
source code analysis, like MC/DC code coverage (a requirement in
safety-critical environments). The optimized x86 assembly is the same
between the old and new versions:
```
xor eax, eax
test esi, esi
je .LBB0_1
cmp edi, -2147483648
jne .LBB0_4
cmp esi, -1
jne .LBB0_4
ret
.LBB0_1:
ret
.LBB0_4:
mov eax, edi
cdq
idiv esi
mov edx, eax
mov eax, 1
ret
```
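And the remainder counterpart, sketched the same way (mirroring `overflowing_rem` semantics, not the exact std source):
```rust
// MIN % -1 would trap in the intermediate division; the wrapped result is 0.
fn overflowing_rem_sketch(lhs: i32, rhs: i32) -> (i32, bool) {
    if lhs == i32::MIN && rhs == -1 {
        (0, true)
    } else {
        (lhs % rhs, false)
    }
}
```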
Stabilize feature `saturating_div` for rust 1.58.0
The tracking issue is #89381
This seems like a reasonably simple change(?). The feature `saturating_div` was added as part of the ongoing effort to implement a `Saturating` integer type (see #87921). The implementation has been discussed [here](https://github.com/rust-lang/rust/pull/87921#issuecomment-899357720) and [here](https://github.com/rust-lang/rust/pull/87921#discussion_r691888556). It extends the list of saturating operations on integer types (like `saturating_add`, `saturating_sub`, `saturating_mul`, ...) with the function `fn saturating_div(self, rhs: Self) -> Self`.
The stabilization of the feature `saturating_int_impl` (for the `Saturating` type) needs to have this stabilized first.
Closes #89381
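In short, the behaviour being stabilized (the only saturating case for signed division is `MIN / -1`):
```rust
assert_eq!(7i32.saturating_div(2), 3);
assert_eq!(i32::MIN.saturating_div(-1), i32::MAX); // the one overflowing case
```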
Add #[must_use] to from_value conversions
I added two methods to the list myself. Clippy did not flag them because they take `mut` args, but neither modifies its argument.
```rust
core::str const unsafe fn from_utf8_unchecked_mut(v: &mut [u8]) -> &mut str;
std::ffi::CString unsafe fn from_raw(ptr: *mut c_char) -> CString;
```
I put a custom note on `from_raw`:
```rust
#[must_use = "call `drop(from_raw(ptr))` if you intend to drop the `CString`"]
pub unsafe fn from_raw(ptr: *mut c_char) -> CString {
```
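With that note in place, ignoring the return value produces a tailored warning (sketch; `ptr` is a placeholder):
```rust
use std::ffi::CString;
use std::os::raw::c_char;

fn demo(ptr: *mut c_char) {
    // `unsafe { CString::from_raw(ptr); }` would now warn:
    //   call `drop(from_raw(ptr))` if you intend to drop the `CString`
    // Reclaiming and dropping explicitly keeps the intent obvious:
    drop(unsafe { CString::from_raw(ptr) });
}
```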
Parent issue: #89692
r? @joshtriplett
Improve docs for int_log
* Clarify rounding.
* Avoid "wrapping" wording.
* Drop the incorrect claim that 0 is only returned in error cases.
* Fix a typo in `one_less_than_next_power_of_two`.
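For example (shown with the later-stabilized `ilog*` names; this PR predates the rename), the rounding the docs now spell out:
```rust
assert_eq!(999u32.ilog10(), 2); // floor of log10(999), not rounded to nearest
assert_eq!(8u32.ilog2(), 3);
assert_eq!(0u32.checked_ilog2(), None); // 0 is an error case...
assert_eq!(1u32.checked_ilog2(), Some(0)); // ...but Some(0) is a valid result
```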
Since the wrapped remainder is going to be 0 for all cases when the rhs is -1,
there is no need to divide in this case. Comparing the lhs with MIN is only done
for the overflow bool. In particular, this results in better code generation for
wrapping remainder, which discards the overflow bool completely.
For `self == Self::MIN && rhs == -1`, LLVM does not realize that this is the
same check made by `self / rhs`, so the code generated may have some unnecessary
duplication. For `(self == Self::MIN) & (rhs == -1)`, LLVM realizes it is the
same check.
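A sketch of the described structure (hypothetical helpers mirroring the std semantics):
```rust
// When rhs == -1 the wrapped remainder is always 0, so no division happens;
// the MIN comparison exists only to produce the overflow flag.
fn overflowing_rem_no_div(lhs: i32, rhs: i32) -> (i32, bool) {
    if rhs == -1 {
        (0, lhs == i32::MIN)
    } else {
        (lhs % rhs, false)
    }
}

// wrapping_rem discards the bool, so the MIN comparison optimizes away and
// the rhs == -1 path never divides at all.
fn wrapping_rem_no_div(lhs: i32, rhs: i32) -> i32 {
    overflowing_rem_no_div(lhs, rhs).0
}
```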