Commit 95e096d6 changed a bunch of size checks already, but more have
been added, so this fixes the new ones the same way: the various size
checks that are conditional on target_arch = "x86_64" were not intended
to apply to x86_64-unknown-linux-gnux32, so add
target_pointer_width = "64" to the conditions.
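For illustration, the shape of the change is roughly the following (the assertion below is a stand-in, not one of the actual checks touched by this commit):

```rust
// Before: the check also applied to x86_64-unknown-linux-gnux32, which has
// 32-bit pointers, so size expectations based on 64-bit pointers were wrong.
#[cfg(target_arch = "x86_64")]
const _: () = assert!(std::mem::size_of::<usize>() == 8);

// After: restricted to x86_64 targets that actually have 64-bit pointers.
#[cfg(all(target_arch = "x86_64", target_pointer_width = "64"))]
const _: () = assert!(std::mem::size_of::<usize>() == 8);
```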
lazily "compute" anon const default substs
Continuing the work of #83086, this implements the discussed solution for the [unused substs problem](https://github.com/rust-lang/project-const-generics/blob/master/design-docs/anon-const-substs.md#unused-substs). As of now, anonymous constants inherit all of their parent's generics, even if they do not use them, e.g. in `fn foo<T, const N: usize>() -> [T; N + 1]`, the array length has `T` as a generic parameter even though it doesn't use it. These *unused substs* cause some backwards-incompatible and, imo, incorrect behavior, e.g. #78369.
---
We do not actually filter out any generic parameters here, and the `default_anon_const_substs` query is still a dummy which only checks that
- we now prevent the previously existing query cycles and are able to call `predicates_of(parent)` when computing the substs of anonymous constants
- the default anon const substs only include the type flags we assume they do.
Implementing that filtering will be left as future work.
---
The idea of this PR is to delay the creation of the anon const substs until after we've computed `predicates_of` for the parent of the anon const. However, as the predicates of the parent can themselves contain the anon const, we still have to create a `ty::Const` for it.
We do this by changing the substs field of `ty::Unevaluated` to an option and modifying accesses to instead call the method `unevaluated.substs(tcx)` which returns the substs as before. If the substs - now `substs_` - of `ty::Unevaluated` are `None`, it means that the anon const currently has its default substs, i.e. the substs it has when first constructed, which are the generic parameters it has available. To be able to call `unevaluated.substs(tcx)` in a `TypeVisitor`, we add the non-defaulted method `fn tcx_for_anon_const_substs(&self) -> Option<TyCtxt<'tcx>>`. In case `tcx_for_anon_const_substs` returns `None`, unknown anon const default substs are skipped entirely.
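A minimal, self-contained sketch of that accessor pattern (the types here are stand-ins for the rustc-internal ones, not the actual definitions):

```rust
// Stand-ins for the rustc-internal types.
#[derive(Clone, Copy)]
struct SubstsRef;
#[derive(Clone, Copy)]
struct TyCtxt;

impl TyCtxt {
    // Stand-in for the `default_anon_const_substs` query.
    fn default_anon_const_substs(self) -> SubstsRef {
        SubstsRef
    }
}

struct Unevaluated {
    // `None` means the anon const still has its default substs, i.e. the
    // substs it has when first constructed.
    substs_: Option<SubstsRef>,
}

impl Unevaluated {
    // Accesses go through this method instead of reading the field directly.
    fn substs(&self, tcx: TyCtxt) -> SubstsRef {
        self.substs_
            .unwrap_or_else(|| tcx.default_anon_const_substs())
    }
}
```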
Even when `substs_` is `None` we still have to treat the constant as if it has its default substs. To do this, `TypeFlags` are modified so that it is clear whether they can still change when *exposing* any anon const default substs. A new flag, `HAS_UNKNOWN_DEFAULT_CONST_SUBSTS`, is added in case some default flags are missing.
The rest of this PR consists of smaller changes, either to avoid causing cycles by accessing the default anon const substs too early, or to make the `tcx` accessible in locations where it previously wasn't available.
cc `@rust-lang/project-const-generics`
r? `@nikomatsakis`
Use undef for uninitialized bytes in constants
Fixes #83657
This generates good code when the const is fully uninit, e.g.
```rust
use std::mem::MaybeUninit;

#[no_mangle]
pub const fn fully_uninit() -> MaybeUninit<[u8; 10]> {
    const M: MaybeUninit<[u8; 10]> = MaybeUninit::uninit();
    M
}
```
generates
```asm
fully_uninit:
        ret
```
as you would expect.
There is no improvement, however, when it's partially uninit, e.g.
```rust
use std::mem::MaybeUninit;

pub struct PartiallyUninit {
    x: u64,
    y: MaybeUninit<[u8; 10]>,
}

#[no_mangle]
pub const fn partially_uninit() -> PartiallyUninit {
    const X: PartiallyUninit = PartiallyUninit { x: 0xdeadbeefcafe, y: MaybeUninit::uninit() };
    X
}
```
generates
```asm
partially_uninit:
        mov rax, rdi
        mov rcx, qword ptr [rip + .L__unnamed_1+16]
        mov qword ptr [rdi + 16], rcx
        movups xmm0, xmmword ptr [rip + .L__unnamed_1]
        movups xmmword ptr [rdi], xmm0
        ret
.L__unnamed_1:
        .asciz "\376\312\357\276\255\336\000"
        .zero 16
        .size .L__unnamed_1, 24
```
which copies a bunch of zeros in place of the undef bytes, the same as before this change.
Edit: generating partially-undef constants isn't viable at the moment anyway due to #84565, so it's disabled.
This commit intends to fill out some of the remaining pieces of the
C-unwind ABI. It comes with a number of other changes to move this
design space forward a bit. Notably contained within here:
* On `panic=unwind`, the `extern "C"` ABI is now considered as "may
unwind". This fixes a longstanding soundness issue: if a `panic!()`
unwinds out of an `extern "C"` function defined in Rust, that is
actually UB, because the LLVM representation of the function has the
`nounwind` attribute.
* Whether or not a function unwinds now mainly considers the ABI of the
function instead of first checking the panic strategy. This fixes a
miscompile of `extern "C-unwind"` with `panic=abort` because that ABI
can still unwind.
* The aborting stub for non-unwinding ABIs with `panic=unwind` has been
reimplemented. Previously this was done as a small tweak during MIR
generation, but this has been moved to a separate and dedicated MIR
pass. This new pass will, for appropriate functions and function
calls, insert a `cleanup` landing pad for any function call that may
unwind within a function that is itself not allowed to unwind. Note
that this subtly changes behavior: previously, an unwind that was
caught-to-abort would run the function's active destructors before
aborting, whereas now the process aborts immediately.
* The `#[unwind]` attribute has been removed and all users in tests and
such are now using `C-unwind` and `#![feature(c_unwind)]`.
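As a small illustration of the surface syntax involved (nightly-only, behind the feature gate; the function bodies here are invented for the example):

```rust
#![feature(c_unwind)]

// "C-unwind" explicitly allows a Rust panic to unwind across the FFI boundary.
pub extern "C-unwind" fn may_unwind(x: i32) -> i32 {
    if x < 0 {
        panic!("negative input");
    }
    x
}

// Plain "C" is considered non-unwinding: with the changes in this commit, a
// panic escaping this function is turned into an abort by the new MIR pass
// instead of being undefined behavior.
pub extern "C" fn never_unwinds(x: i32) -> i32 {
    x.wrapping_add(1)
}
```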
I think this is largely the last piece of the RFC to implement.
Unfortunately I believe this is still not stabilizable as-is because
activating the feature gate changes the behavior of the existing `extern
"C"` ABI in a way that has no replacement. My thinking for how to enable
this is that we add support for the `C-unwind` ABI on stable Rust first,
and then after it hits stable we change the behavior of the `C` ABI.
That way anyone straddling stable/beta/nightly can switch to `C-unwind`
safely.
CTFE/Miri engine Pointer type overhaul
This fixes the long-standing problem that we are using `Scalar` as a type to represent pointers that might be integer values (since they point to a ZST). The main problem is that with int-to-ptr casts, there are multiple ways to represent the same pointer as a `Scalar` and it is unclear if "normalization" (i.e., the cast) already happened or not. This leads to ugly methods like `force_mplace_ptr` and `force_op_ptr`.
Another problem this solves is that in Miri, it would make a lot more sense to have the `Pointer::offset` field represent the full absolute address (instead of being relative to the `AllocId`). This means we can do ptr-to-int casts without access to any machine state, and it means that the overflow checks on pointer arithmetic are (finally!) accurate.
To solve this, the `Pointer` type is made entirely parametric over the provenance, so that we can use `Pointer<AllocId>` inside `Scalar` but use `Pointer<Option<AllocId>>` when accessing memory (where `None` represents the case that we could not figure out an `AllocId`; in that case the `offset` is an absolute address). Moreover, the `Provenance` trait determines if a pointer with a given provenance can be cast to an integer by simply dropping the provenance.
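Roughly, the new shape is the following (a simplified sketch with stand-in names, not the exact rustc declarations):

```rust
// Stand-ins for the interpreter's types.
#[derive(Clone, Copy)]
struct Size(u64);
#[derive(Clone, Copy)]
struct AllocId(u64);

// Implemented by the provenance type; among other things it decides whether
// a pointer can be cast to an integer by simply dropping the provenance.
trait Provenance {
    const PTR_TO_INT_BY_DROPPING_PROVENANCE: bool;
}

// The pointer type is now fully parametric over its provenance.
struct Pointer<Prov = AllocId> {
    offset: Size,
    provenance: Prov,
}

// `Scalar` keeps using `Pointer<AllocId>`, while memory accesses use
// `Pointer<Option<AllocId>>`: `None` means we could not figure out an
// `AllocId`, and `offset` is then an absolute address.
type ScalarPtr = Pointer<AllocId>;
type MemoryPtr = Pointer<Option<AllocId>>;
```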
I hope this can be read commit-by-commit, but the first commit does the bulk of the work. It introduces some FIXMEs that are resolved later.
Fixes https://github.com/rust-lang/miri/issues/841
Miri PR: https://github.com/rust-lang/miri/pull/1851
r? `@oli-obk`
Update Rust Float-Parsing Algorithms to use the Eisel-Lemire algorithm.
# Summary
Rust, although it implements a correct float parser, has major performance issues in float parsing. Even for common floats, parsing can be 3-10x [slower](https://arxiv.org/pdf/2101.11408.pdf) than with external libraries such as [lexical](https://github.com/Alexhuszagh/rust-lexical) and [fast-float-rust](https://github.com/aldanor/fast-float-rust).
Recently, major advances in float-parsing algorithms have been made by Daniel Lemire and others, producing a fast and correct float parser with speeds up to 1200 MiB/s on Apple's M1 architecture for the [canada](0e2b5d163d/data/canada.txt) dataset, 10x faster than Rust's 130 MiB/s.
In addition, [edge-cases](https://github.com/rust-lang/rust/issues/85234) in Rust's [dec2flt](868c702d0c/library/core/src/num/dec2flt) algorithm can lead to over a 1600x slowdown relative to efficient algorithms. This is due to the use of Clinger's correct but slow [AlgorithmM and Bellerophon](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.45.4152&rep=rep1&type=pdf) algorithms, which have been improved upon by faster big-integer algorithms and the Eisel-Lemire algorithm, respectively.
Finally, this algorithm provides substantial improvements in the number of floats the Rust core library can parse. Denormal floats with a large number of digits currently cannot be parsed, due to the use of the `Big32x40` big integer, which simply does not have enough digits to round such a float correctly. Using a custom decimal class with much simpler logic, we can parse all valid decimal strings of any digit count.
```rust
// Issue in Rust's dec2flt.
"2.47032822920623272088284396434110686182e-324".parse::<f64>(); // Err(ParseFloatError { kind: Invalid })
```
# Solution
This pull request implements the Eisel-Lemire algorithm, adapted from [fast-float-rust](https://github.com/aldanor/fast-float-rust) (which is licensed under Apache 2.0/MIT), along with numerous modifications to make it more amenable to inclusion in the Rust core library. The following describes both the features taken from fast-float-rust and the improvements made for inclusion in core.
**Documentation**
Extensive documentation has been added to ensure the code base can be maintained by others; it explains the algorithms as well as the various associated constants and routines. For example, two seemingly magical constants include documentation describing how they were derived:
```rust
// Round-to-even only happens for negative values of q
// when q ≥ −4 in the 64-bit case and when q ≥ −17 in
// the 32-bit case.
//
// When q ≥ 0, we have that 5^q ≤ 2m+1. In the 64-bit case, we
// have 5^q ≤ 2m+1 ≤ 2^54 or q ≤ 23. In the 32-bit case, we have
// 5^q ≤ 2m+1 ≤ 2^25 or q ≤ 10.
//
// When q < 0, we have w ≥ (2m+1)×5^−q. We must have that w < 2^64
// so (2m+1)×5^−q < 2^64. We have that 2m+1 > 2^53 (64-bit case)
// or 2m+1 > 2^24 (32-bit case). Hence, we must have 2^53×5^−q < 2^64
// (64-bit) and 2^24×5^−q < 2^64 (32-bit). Hence we have 5^−q < 2^11
// or q ≥ −4 (64-bit case) and 5^−q < 2^40 or q ≥ −17 (32-bit case).
//
// Thus we have that we only need to round ties to even when
// we have that q ∈ [−4, 23] (in the 64-bit case) or q ∈ [−17, 10]
// (in the 32-bit case). In both cases, the power of five (5^|q|)
// fits in a 64-bit word.
const MIN_EXPONENT_ROUND_TO_EVEN: i32;
const MAX_EXPONENT_ROUND_TO_EVEN: i32;
```
This ensures maintainability of the code base.
**Improvements for Disguised Fast-Path Cases**
The fast path in float-parsing algorithms attempts to use native machine floats to represent both the significant digits and the exponent, which is only possible if both can be represented exactly without rounding. In practice, this means that the significant digits must fit in 53 bits and the exponent must be in the range `[-22, 22]` (for an f64). This is similar to the existing dec2flt implementation.
However, disguised fast-path cases exist, where there are few significant digits and an exponent above the valid range, such as `1.23e25`. In this case, powers-of-10 may be shifted from the exponent to the significant digits, discussed at length in https://github.com/rust-lang/rust/issues/85198.
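A hedged sketch of this idea for `f64` (the names and structure are illustrative, not the PR's exact code):

```rust
/// Powers of ten that are exactly representable in an f64.
const POW10_EXACT: [f64; 23] = [
    1e0, 1e1, 1e2, 1e3, 1e4, 1e5, 1e6, 1e7, 1e8, 1e9, 1e10, 1e11, 1e12, 1e13,
    1e14, 1e15, 1e16, 1e17, 1e18, 1e19, 1e20, 1e21, 1e22,
];

/// Try the disguised fast path: shift excess powers of ten from the exponent
/// into the mantissa, e.g. 1.23e25 is mantissa 123 with exponent 23, which
/// becomes mantissa 1230 with exponent 22.
fn disguised_fast_path(mantissa: u64, exp10: i32) -> Option<f64> {
    if exp10 <= 22 {
        return None; // in range for the regular fast path
    }
    let shift = (exp10 - 22) as u32;
    let shifted = mantissa.checked_mul(10u64.checked_pow(shift)?)?;
    if shifted >= (1u64 << 53) {
        return None; // mantissa no longer exactly representable
    }
    // Both factors are exact, so a single rounded multiply is correct.
    Some(shifted as f64 * POW10_EXACT[22])
}
```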
**Digit Parsing Improvements**
Typically, integers are parsed from a string one digit at a time, requiring unnecessary multiplications which slow down parsing. An approach to parse 8 digits at a time using only 3 multiplications is described at length [here](https://johnnylee-sde.github.io/Fast-numeric-string-to-int/). This leads to significant performance improvements and is implemented for both big- and little-endian systems, as sketched below.
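A sketch of that technique (the well-known SWAR formulation from the linked write-up; the function name and exact formulation here are illustrative, not necessarily the PR's code):

```rust
/// Parse 8 ASCII digits using 3 multiplications. `chunk` holds the digits
/// loaded little-endian, i.e. the first digit is in the lowest byte.
fn parse_8_digits(mut chunk: u64) -> u64 {
    const MASK: u64 = 0x0000_00FF_0000_00FF;
    const MUL1: u64 = 0x000F_4240_0000_0064; // 100 + (1_000_000 << 32)
    const MUL2: u64 = 0x0000_2710_0000_0001; // 1 + (10_000 << 32)
    chunk -= 0x3030_3030_3030_3030; // ASCII '0' -> 0 in every byte
    // Combine adjacent digits into 2-digit groups (multiplication #1), then
    // gather the four groups into one value with two more multiplications.
    chunk = chunk.wrapping_mul(10).wrapping_add(chunk >> 8);
    let lo = (chunk & MASK).wrapping_mul(MUL1);
    let hi = ((chunk >> 16) & MASK).wrapping_mul(MUL2);
    lo.wrapping_add(hi) >> 32
}

fn main() {
    let chunk = u64::from_le_bytes(*b"12345678");
    assert_eq!(parse_8_digits(chunk), 12_345_678);
}
```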
**Unsafe Changes**
Relative to fast-float-rust, this implementation makes less use of unsafe functionality and clearly documents what remains. This includes refactoring and documenting numerous unsafe methods that were undesirably marked as safe. The original code looked something like this, where inherently unsafe functionality is deceptively exposed as safe:
```rust
impl AsciiStr {
    #[inline]
    pub fn step_by(&mut self, n: usize) -> &mut Self {
        unsafe { self.ptr = self.ptr.add(n) };
        self
    }
}

...

#[inline]
fn parse_scientific(s: &mut AsciiStr<'_>) -> i64 {
    // the first character is 'e'/'E' and scientific mode is enabled
    let start = *s;
    s.step();
    ...
}
```
The new code clearly documents safety concerns, and does not mark unsafe functionality as safe, leading to better safety guarantees.
```rust
impl AsciiStr {
    /// Advance the view by n, advancing it in-place to (n..).
    pub unsafe fn step_by(&mut self, n: usize) -> &mut Self {
        // SAFETY: same as step_by, safe as long as n is less than the buffer length
        self.ptr = unsafe { self.ptr.add(n) };
        self
    }
}

...

/// Parse the scientific notation component of a float.
fn parse_scientific(s: &mut AsciiStr<'_>) -> i64 {
    let start = *s;
    // SAFETY: the first character is 'e'/'E' and scientific mode is enabled
    unsafe {
        s.step();
    }
    ...
}
```
This allows us to trivially demonstrate the new implementation of dec2flt is safe.
**Inline Annotations Have Been Removed**
The previous implementation of dec2flt used inline annotations practically nowhere in the module, so the `#[inline]` annotations carried over from fast-float-rust have been removed; this mostly does not impact [performance](https://github.com/aldanor/fast-float-rust/issues/15#issuecomment-864485157).
**Fixed Correctness Tests**
Numerous errors were present in `src/etc/test-float-parse`, due to the deprecation of `time.clock()` as well as issues with the crate's `rand` dependency. The tests have therefore been reworked into a [crate](https://github.com/Alexhuszagh/rust/tree/master/src/etc/test-float-parse), and the errors in `runtests.py` have been patched.
**Undefined Behavior**
An implementation of `check_len` which relied on undefined behavior (in fast-float-rust) has been refactored, to ensure that the behavior is well-defined. The original code is as follows:
```rust
#[inline]
pub fn check_len(&self, n: usize) -> bool {
    unsafe { self.ptr.add(n) <= self.end }
}
```
And the new implementation is as follows:
```rust
/// Check if the slice is at least `n` elements long.
fn check_len(&self, n: usize) -> bool {
    n <= self.as_ref().len()
}
```
Note that this has since been fixed in [fast-float-rust](https://github.com/aldanor/fast-float-rust/pull/29).
**Inferring Binary Exponents**
Rather than explicitly storing binary exponents, this new implementation infers them from the decimal exponent, reducing the amount of static storage required. This removes the requirement to store [611 i16s](868c702d0c/library/core/src/num/dec2flt/table.rs (L8)).
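As a hedged illustration of the inference (the fixed-point constant below is an assumption chosen for this sketch, not necessarily the implementation's exact constant): for the exponents dec2flt cares about, `floor(q * log2(5))` can be computed with a single multiply and shift, so only the significands of the powers of five need to be stored.

```rust
/// Approximate floor(q * log2(5)) for the exponent range used by dec2flt.
/// 152_170 / 2^16 is a fixed-point approximation of log2(5) ~= 2.321928.
fn binary_exponent_of_pow5(q: i64) -> i64 {
    (q * 152_170) >> 16
}

fn main() {
    // 5^3 = 125, so floor(log2(5^3)) = 6.
    assert_eq!(binary_exponent_of_pow5(3), 6);
    // 5^-2 = 0.04, so floor(log2(5^-2)) = -5.
    assert_eq!(binary_exponent_of_pow5(-2), -5);
}
```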
# Code Size
For stripped builds, the code size at all optimization levels does not change considerably relative to before; however, the binaries are **significantly** smaller prior to stripping. These binary sizes were calculated on x86_64-unknown-linux-gnu.
**new**
Using rustc version 1.55.0-dev.
|opt-level|size|size (stripped)|
|:-:|:-:|:-:|
|0|400k|300K|
|1|396k|292K|
|2|392k|292K|
|3|392k|296K|
|s|396k|292K|
|z|396k|292K|
**old**
Using rustc version 1.53.0-nightly.
|opt-level|size|size (stripped)|
|:-:|:-:|:-:|
|0|3.2M|304K|
|1|3.2M|292K|
|2|3.1M|284K|
|3|3.1M|284K|
|s|3.1M|284K|
|z|3.1M|284K|
# Correctness
The dec2flt implementation passes all of Rust's unit tests and comprehensive float-parsing tests, along with numerous other suites such as Nigel Tao's comprehensive float [tests](https://github.com/nigeltao/parse-number-fxx-test-data) and Hrvoje Abraham's [strtod_tests](https://github.com/ahrvoje/numerics/blob/master/strtod/strtod_tests.toml). Therefore, it is unlikely that this algorithm will incorrectly round parsed floats.
# Issues Addressed
This will fix and close the following issues:
- resolves #85198
- resolves #85214
- resolves #85234
- fixes #31407
- fixes #31109
- fixes #53015
- resolves #68396
- closes https://github.com/aldanor/fast-float-rust/issues/15
The implementation is based on fast-float-rust, with a few notable changes.
- Some unsafe methods have been removed.
- Safe methods with inherently unsafe functionality have been removed.
- All unsafe functionality is documented and provably safe.
- Extensive documentation has been added for simpler maintenance.
- Inline annotations on internal routines have been removed.
- Fixed Python errors in src/etc/test-float-parse/runtests.py.
- Updated test-float-parse to be a library, to avoid the missing `rand` dependency.
- Added regression tests for #31109 and #31407 in core tests.
- Added regression tests for #31109 and #31407 in ui tests.
- Use the existing slice primitive to simplify shared dec2flt methods.
- Remove Miri ignores from dec2flt, due to faster parsing times.
This resolves all the problems we had around "normalizing" the representation of a Scalar in case it carries a Pointer value: we can just use Pointer if we want to have a value that we are sure is already normalized.