This commit updates the `MatchIndices` and `RMatchIndices` iterators to follow
the same pattern as the `chars` and `char_indices` iterators. The `matches`
iterator currently yields `&str` elements, so the `MatchIndices` iterator now
yields the index of the match as well as the `&str` that matched (instead of
start/end indices).
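For example, a quick sketch of the new behavior:
```
let s = "abcXXXabcYYYabc";

// Each item is now (byte index of the match, the matched &str):
let v: Vec<(usize, &str)> = s.match_indices("abc").collect();
assert_eq!(v, [(0, "abc"), (6, "abc"), (12, "abc")]);
```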
cc #27743
[breaking-change]
`FixedSizeArray` is meant to be implemented for arrays of fixed size only, but can be implemented for anything at the moment. Marking the trait unsafe would make it more reasonable to write unsafe code which operates on fixed size arrays of any size.
For example, using `uninitialized` to create a fixed size array and immediately filling it with a fixed value is externally safe:
```
use core::array::FixedSizeArray; // unstable: #![feature(fixed_size_array)]
use core::mem;

pub fn init_with_nones<T, A: FixedSizeArray<Option<T>>>() -> A {
    let mut res = unsafe { mem::uninitialized() };
    for elm in res.as_mut_slice().iter_mut() {
        *elm = None;
    }
    res
}
```
But the same code is not safe if `FixedSizeArray` is implemented for other types:
```
struct Foo { foo: usize }

impl FixedSizeArray<Option<usize>> for Foo {
    fn as_slice(&self) -> &[Option<usize>] { &[] }
    fn as_mut_slice(&mut self) -> &mut [Option<usize>] { &mut [] }
}
```
Now `init_with_nones::<usize, Foo>()` returns a `Foo` whose field `foo` holds an undefined value.
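For reference, a minimal sketch of what the unsafe marking looks like (the trait lives in the unstable `core::array` module):
```
// Implementing this trait now asserts that the type really is a
// fixed size array, so both the trait and its impls are unsafe.
pub unsafe trait FixedSizeArray<T> {
    fn as_slice(&self) -> &[T];
    fn as_mut_slice(&mut self) -> &mut [T];
}

unsafe impl<T> FixedSizeArray<T> for [T; 16] {
    fn as_slice(&self) -> &[T] { self }
    fn as_mut_slice(&mut self) -> &mut [T] { self }
}
```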
Move the private bignum module to core::num, because it is no longer used only in flt2dec.
Extract the private 80-bit soft-float into a new core::num module for the same reason.
This branch improves the performance of the `Ord` and `PartialOrd` methods for slices compared to the iter-based implementation.
Based on the approach used in #26884.
In order to get rid of all range checks, the compiler needs to
explicitly see that the slices it iterates over are as long as the
loop variable upper bound.
This further improves the performance of slice comparison:
```
test u8_cmp ... bench: 4,761 ns/iter (+/- 1,203)
test u8_lt ... bench: 4,579 ns/iter (+/- 649)
test u8_partial_cmp ... bench: 4,768 ns/iter (+/- 761)
test u16_cmp ... bench: 4,607 ns/iter (+/- 580)
test u16_lt ... bench: 4,681 ns/iter (+/- 567)
test u16_partial_cmp ... bench: 4,607 ns/iter (+/- 967)
test u32_cmp ... bench: 4,448 ns/iter (+/- 891)
test u32_lt ... bench: 4,546 ns/iter (+/- 992)
test u32_partial_cmp ... bench: 4,415 ns/iter (+/- 646)
test u64_cmp ... bench: 4,380 ns/iter (+/- 1,184)
test u64_lt ... bench: 5,684 ns/iter (+/- 602)
test u64_partial_cmp ... bench: 4,663 ns/iter (+/- 1,158)
```
Reusing the same idea as in #26884, we can exploit the fact that the
length of slices is known, hence we can use a counted loop instead of
iterators, which means that we only need a single counter, instead of
having to increment and check one pointer for each iterator.
Using the generic implementation of the boolean comparison operators
(`lt`, `le`, `gt`, `ge`) provides further speedup for simple
types. This happens because the loop scans elements checking for
equality and dispatches to element comparison or length comparison
depending on the result of the prefix comparison.
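A minimal sketch of the idea (a hypothetical free function, not the actual libcore code), combining the counted loop with the explicit reslicing described above:
```
use std::cmp::{self, Ordering};

fn cmp_slices<T: Ord>(lhs: &[T], rhs: &[T]) -> Ordering {
    let len = cmp::min(lhs.len(), rhs.len());
    // Reslice so the compiler can see that indexing with i < len
    // is always in bounds, eliminating the range checks.
    let (a, b) = (&lhs[..len], &rhs[..len]);
    for i in 0..len {
        match a[i].cmp(&b[i]) {
            Ordering::Equal => (),
            non_eq => return non_eq,
        }
    }
    // The common prefix is equal: fall back to comparing lengths.
    lhs.len().cmp(&rhs.len())
}
```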
Before this change:
```
test u8_cmp ... bench: 14,043 ns/iter (+/- 1,732)
test u8_lt ... bench: 16,156 ns/iter (+/- 1,864)
test u8_partial_cmp ... bench: 16,250 ns/iter (+/- 2,608)
test u16_cmp ... bench: 15,764 ns/iter (+/- 1,420)
test u16_lt ... bench: 19,833 ns/iter (+/- 2,826)
test u16_partial_cmp ... bench: 19,811 ns/iter (+/- 2,240)
test u32_cmp ... bench: 15,792 ns/iter (+/- 3,409)
test u32_lt ... bench: 18,577 ns/iter (+/- 2,075)
test u32_partial_cmp ... bench: 18,603 ns/iter (+/- 5,666)
test u64_cmp ... bench: 16,337 ns/iter (+/- 2,511)
test u64_lt ... bench: 18,074 ns/iter (+/- 7,914)
test u64_partial_cmp ... bench: 17,909 ns/iter (+/- 1,105)
```
After:
```
test u8_cmp ... bench: 6,511 ns/iter (+/- 982)
test u8_lt ... bench: 6,671 ns/iter (+/- 919)
test u8_partial_cmp ... bench: 7,118 ns/iter (+/- 1,623)
test u16_cmp ... bench: 6,689 ns/iter (+/- 921)
test u16_lt ... bench: 6,712 ns/iter (+/- 947)
test u16_partial_cmp ... bench: 6,725 ns/iter (+/- 780)
test u32_cmp ... bench: 7,704 ns/iter (+/- 1,294)
test u32_lt ... bench: 7,611 ns/iter (+/- 3,062)
test u32_partial_cmp ... bench: 7,640 ns/iter (+/- 1,149)
test u64_cmp ... bench: 7,517 ns/iter (+/- 2,164)
test u64_lt ... bench: 7,579 ns/iter (+/- 1,048)
test u64_partial_cmp ... bench: 7,629 ns/iter (+/- 1,195)
```
Knowing the result of equality comparison can enable additional
optimizations in LLVM.
Additionally, this makes it obvious that `partial_cmp` on totally
ordered types cannot return `None`.
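As a sketch, for a totally ordered type (a hypothetical `MyKey`), `partial_cmp` can simply wrap `cmp`, so the `None` case never arises:
```
use std::cmp::Ordering;

#[derive(PartialEq, Eq)]
struct MyKey(u32);

impl Ord for MyKey {
    fn cmp(&self, other: &Self) -> Ordering {
        self.0.cmp(&other.0)
    }
}

impl PartialOrd for MyKey {
    // Total order: the result is always Some(...).
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}
```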
Overflows in integer pow() computations would be missed if they
preceded a 0 bit of the exponent being processed. This made
calls such as 2i32.pow(1024) not trigger an overflow.
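For illustration, a sketch of overflow-checked exponentiation by squaring (not the actual library code): the base squarings performed while the current exponent bit is 0 must be overflow-checked too, otherwise exactly these overflows are missed:
```
fn checked_pow(mut base: i32, mut exp: u32) -> Option<i32> {
    let mut acc: i32 = 1;
    while exp > 1 {
        if exp & 1 == 1 {
            acc = acc.checked_mul(base)?;
        }
        // This squaring runs even when the current bit is 0,
        // so it must be checked as well.
        base = base.checked_mul(base)?;
        exp /= 2;
    }
    if exp == 1 {
        acc = acc.checked_mul(base)?;
    }
    Some(acc)
}
```
With this, `checked_pow(2, 1024)` returns `None` instead of silently wrapping.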
Fixes #28012
This allows us to skip codegen for all the unneeded landing pads, reducing code size across the board by about 2-5%, depending on the crate. Compile times seem to be pretty unaffected, though :-/
Unwinding across an FFI boundary is undefined behaviour, so we can mark
all external functions as nounwind. The obvious exceptions are those
functions that actually perform the unwinding.
LLVM seems to be having some trouble optimizing the iterator-based
string comparison method into something equivalent to memcmp. This
explicitly calls out to the memcmp intrinsic in order to allow
LLVM to generate better code. In some manual benchmarking, this
memcmp-based approach is 20 times faster than the iterator approach.
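A sketch of the idea (declaring the C `memcmp` directly here; the real change calls the compiler intrinsic with the same semantics). UTF-8 guarantees that byte-wise lexicographic order matches character-wise order, so the bytes can be compared directly:
```
use std::cmp::{self, Ordering};
use std::os::raw::{c_int, c_void};

extern "C" {
    fn memcmp(s1: *const c_void, s2: *const c_void, n: usize) -> c_int;
}

fn str_cmp(a: &str, b: &str) -> Ordering {
    let n = cmp::min(a.len(), b.len());
    let r = unsafe {
        memcmp(a.as_ptr() as *const c_void, b.as_ptr() as *const c_void, n)
    };
    match r {
        0 => a.len().cmp(&b.len()),
        r if r < 0 => Ordering::Less,
        _ => Ordering::Greater,
    }
}
```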
Generally, it is good style to include everything that makes an unsafe
block safe inside the block itself. Since the assert! is what makes this
safe, it should go inside the block. I also added a few bits of whitespace.
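A hypothetical example of the preferred style:
```
// The assert! that justifies the unsafe operation lives inside
// the unsafe block it makes safe.
fn read_at(buf: &[u8], idx: usize) -> u8 {
    unsafe {
        assert!(idx < buf.len());
        *buf.as_ptr().add(idx)
    }
}
```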
This commit is an implementation of [RFC 1212][rfc] which tweaks the behavior of
the `str::lines` and `BufRead::lines` iterators. Both iterators now account for
`\r\n` sequences in addition to `\n`, allowing for less surprising behavior
across platforms (especially in the `BufRead` case). Splitting *only* on the
`\n` character can still be achieved with `split('\n')` in both cases.
The `str::lines_any` function is also now deprecated as `str::lines` is a
drop-in replacement for it.
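A quick sketch of the new behavior:
```
let text = "two\r\nlines\n";

// `lines` now strips both `\n` and `\r\n` terminators:
assert_eq!(text.lines().collect::<Vec<_>>(), ["two", "lines"]);

// Splitting only on `\n` keeps the `\r` (and a trailing empty piece):
assert_eq!(text.split('\n').collect::<Vec<_>>(), ["two\r", "lines", ""]);
```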
[rfc]: https://github.com/rust-lang/rfcs/blob/master/text/1212-line-endings.md

Closes #28032
Nothing too big: a few needless returns removed and a few closures eliminated (the latter may improve performance in some cases; at the least, compilation should be a bit faster).