Commit Graph

62 Commits

Author SHA1 Message Date
Corey Farwell
b1a6c8bdd3 Stabilize [T]::rotate_{left,right}
https://github.com/rust-lang/rust/issues/41891
2018-02-22 20:12:38 -05:00
kennytm
ec36e7e972 Partially revert #47333.
Removed the `assume()` which we assume is the cause of the misoptimization in
issue #48116.
2018-02-15 00:04:18 +08:00
bors
0119b44270 Auto merge of #47772 - arthurprs:iter-position-bounds-check, r=dtolnay
Use the slice length to hint the optimizer about iter.position result

Using the iterator's own len doesn't give the same result; that's also why
we can't generalize this to all TrustedLen iterators.

Problem demo: https://godbolt.org/g/MXg2ae
Fix demo: https://godbolt.org/g/P8q5aZ

Second attempt of #47333
Third attempt of #45501
Fixes #45964
2018-01-28 10:41:34 +00:00
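A call-site sketch of the hint described above, assuming the reader wants the shape rather than the real patch: the actual change lives inside the slice iterator's `position`, and `std::hint::unreachable_unchecked` stands in here for the unstable `assume` intrinsic the PR uses.

```rust
// `position` on a slice iterator only returns indices below the slice's
// length, so telling LLVM exactly that lets the later `haystack[pos]`
// compile without a bounds check.
fn find_and_read(haystack: &[u8], needle: u8) -> Option<u8> {
    let pos = haystack.iter().position(|&b| b == needle)?;
    if pos >= haystack.len() {
        // SAFETY: unreachable, per the invariant stated above.
        unsafe { std::hint::unreachable_unchecked() }
    }
    Some(haystack[pos])
}

fn main() {
    assert_eq!(find_and_read(b"abcd", b'c'), Some(b'c'));
    assert_eq!(find_and_read(b"abcd", b'z'), None);
}
```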
arthurprs
4f7109a424 Use the slice length to hint the optimizer
Using the iterator's own len doesn't give the same result; that's also why
we can't generalize this to all TrustedLen iterators.
2018-01-26 12:49:14 +01:00
kennytm
e7087f0f4f Rollup merge of #47404 - integer32llc:reexport-to-re-export, r=steveklabnik
Standardize on "re-export" rather than "reexport"

While working on the book with our editors, we noticed that we're not consistent about when we use "re-export" versus "reexport". For the book, we've decided (with our editors) to go with "re-export"; in prose, I think that looks better. In code, I'm fine with "reexport".

However, the rustdoc generated section is currently "Reexports", so when we have a screenshot of generated documentation with the prose where we use "re-export", it's inconsistent.

It's too late to fix this for the book because we're using 1.21.0 for its output, and it's really only one spot, so it's not a huge deal. Still, I'd like to advocate for changing the documentation header so that a future edition of the book can be consistent.

The first commit here only changes the documentation section heading text and rustdoc documentation that references it. This is the commit that's most important to me.

The second commit changes error messages and associated tests to also be consistent with the use of re-export. This is the next most important commit to me, but I could be argued out of it, since it won't match code like the `macro_reexports` feature name, which ostensibly should change to `macro_re_exports` for full consistency; I didn't want to change code.

The last commit changes re-export anywhere else in prose: either in documentation comments or regular comments. This is least important as most of them aren't user-visible. Instances like these will likely sneak back in over time. I'm totally fine dropping this commit if anyone wants, but [the hobgoblins made me do it](http://www.bartleby.com/100/420.47.html) and it sets a good example.

r? @steveklabnik
2018-01-18 01:57:15 +08:00
kennytm
175dd84ed8 Rollup merge of #47333 - arthurprs:iter-position-bounds-check, r=dtolnay
Optimize slice.{r}position result bounds check

Second attempt of https://github.com/rust-lang/rust/pull/45501
Fixes https://github.com/rust-lang/rust/issues/45964

Demo: https://godbolt.org/g/N4mBHp
2018-01-18 01:57:13 +08:00
Carol (Nichols || Goulding)
e168aa385b Reexport -> re-export in prose and documentation comments 2018-01-15 13:36:53 -05:00
kennytm
5d0474ad73 Rollup merge of #47126 - sdroege:exact-chunks, r=bluss
Add slice::ExactChunks and ::ExactChunksMut iterators

These guarantee that the returned chunks always have exactly the requested
size, and any leftover elements at the end are ignored. This allows LLVM
to get rid of bounds checks in code using the iterator.

This is inspired by the same iterators provided by ndarray.

Fixes https://github.com/rust-lang/rust/issues/47115

I'll add unit tests for all this if the general idea and behaviour makes sense for everybody.
Also see https://github.com/rust-lang/rust/issues/47115#issuecomment-354715511 for an example what this improves.
2018-01-15 18:49:31 +08:00
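For reference, the behavior described above through the later-stabilized API: the iterators landed here as the unstable `exact_chunks`/`exact_chunks_mut` and were eventually renamed `chunks_exact`/`chunks_exact_mut`.

```rust
fn main() {
    let data = [1u8, 2, 3, 4, 5, 6, 7];
    let mut it = data.chunks_exact(3);
    // Every yielded chunk has exactly the requested length...
    assert_eq!(it.next(), Some(&[1, 2, 3][..]));
    assert_eq!(it.next(), Some(&[4, 5, 6][..]));
    assert_eq!(it.next(), None);
    // ...and the too-short tail is exposed separately.
    assert_eq!(it.remainder(), &[7]);
}
```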
Sebastian Dröge
6bf1dfdd76 Implement TrustedRandomAccess for slice::{ExactChunks, ExactChunksMut} 2018-01-13 12:18:57 +02:00
Sebastian Dröge
cea36f447e Remove useless assertion 2018-01-13 12:18:56 +02:00
Sebastian Dröge
aa0c08a22c Apply review comments from @bluss
- Simplify nth() by making use of the fact that the slice is evenly
  divisible by the chunk size, and calling next() instead of
  duplicating it
- Call next_back() in last(), they are equivalent
- Implement ExactSizeIterator::is_empty()
2018-01-13 12:18:55 +02:00
Sebastian Dröge
51d546f4aa Add slice::ExactChunks and ::ExactChunksMut iterators
These guarantee that the returned chunks always have exactly the requested
size, and any leftover elements at the end are ignored. This allows LLVM
to get rid of bounds checks in code using the iterator.

This is inspired by the same iterators provided by ndarray.

See https://github.com/rust-lang/rust/issues/47115
2018-01-13 12:18:46 +02:00
arthurprs
0b56ab0f7b Optimize slice.{r}position result bounds check 2018-01-12 22:58:25 +01:00
Corey Farwell
e2e8cd3d14 Rollup merge of #46777 - frewsxcv:frewsxcv-rotate, r=alexcrichton
Deprecate [T]::rotate in favor of [T]::rotate_{left,right}.

Background
==========

Slices currently have an **unstable** [`rotate`] method which rotates
elements in the slice to the _left_ N positions. [Here][tracking] is the
tracking issue for this unstable feature.

```rust
let mut a = ['a', 'b', 'c', 'd', 'e', 'f'];
a.rotate(2);
assert_eq!(a, ['c', 'd', 'e', 'f', 'a', 'b']);
```

Proposal
========

Deprecate the [`rotate`] method and introduce `rotate_left` and
`rotate_right` methods.

```rust
let mut a = ['a', 'b', 'c', 'd', 'e', 'f'];
a.rotate_left(2);
assert_eq!(a, ['c', 'd', 'e', 'f', 'a', 'b']);
```

```rust
let mut a = ['a', 'b', 'c', 'd', 'e', 'f'];
a.rotate_right(2);
assert_eq!(a, ['e', 'f', 'a', 'b', 'c', 'd']);
```

Justification
=============

I used this method today for the first time and (probably because I’m a
naive westerner who reads LTR) was surprised when the docs mentioned that
elements get rotated in a left-ward direction. I was in a situation
where I needed to shift elements in a right-ward direction and had to
context-switch away from the main problem I was working on to figure out
how far to rotate left in order to accomplish the right-ward rotation I
needed.

Ruby’s `Array.rotate` shifts left-ward; Python’s `deque.rotate` shifts
right-ward. Both implementations allow passing negative numbers
to shift in the opposite direction. The current `rotate`
implementation takes an unsigned integer argument, which doesn't allow
the negative-number behavior.

Introducing `rotate_left` and `rotate_right` would:

- remove ambiguity about direction (alleviating the need to read docs 😉)
- make it easier for people who need to rotate right

[`rotate`]: https://doc.rust-lang.org/std/primitive.slice.html#method.rotate
[tracking]: https://github.com/rust-lang/rust/issues/41891
2018-01-09 22:28:23 -05:00
Sebastian Dröge
b69c124255 Fix potential overflow in TrustedRandomAccess impl for slice::{Chunks,ChunksMut} 2018-01-04 11:34:05 +02:00
Sebastian Dröge
3f29e2b495 Fix compilation of TrustedRandomAccess impl for slice::Chunks
https://github.com/rust-lang/rust/pull/47113 renamed the private size
field to chunk_size for consistency.
2018-01-03 15:05:18 +02:00
Sebastian Dröge
48f2f71185 Implement TrustedRandomAccess for slice::{Chunks, ChunksMut, Windows} 2018-01-03 15:05:18 +02:00
Sebastian Dröge
c096b3a451 Use assert!(chunk_size != 0) instead of > 0 for usize value 2018-01-02 10:00:58 +02:00
Sebastian Dröge
9957cf6023 Consistently use chunk_size as the field name for Chunks and ChunksMut
Previously `Chunks` used `size` and `ChunksMut` used `chunk_size`
2018-01-02 10:00:58 +02:00
bors
8c59418962 Auto merge of #46713 - Manishearth:memchr, r=bluss
Use memchr to speed up [u8]::contains 3x

None
2017-12-31 16:38:10 +00:00
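A simplified sketch of the word-at-a-time scan a memchr implementation performs; core's real version works on raw pointers and handles alignment, so this only illustrates the zero-byte bit trick, not the actual code.

```rust
use std::convert::TryInto;

fn contains_zero_byte(x: u64) -> bool {
    // Classic bit trick: nonzero iff some byte of `x` is zero.
    const LO: u64 = 0x0101_0101_0101_0101;
    const HI: u64 = 0x8080_8080_8080_8080;
    x.wrapping_sub(LO) & !x & HI != 0
}

fn memchr_swar(needle: u8, haystack: &[u8]) -> Option<usize> {
    let pattern = u64::from_ne_bytes([needle; 8]);
    let mut offset = 0;
    let mut chunks = haystack.chunks_exact(8);
    for chunk in &mut chunks {
        let word = u64::from_ne_bytes(chunk.try_into().unwrap());
        // XOR zeroes out exactly the bytes equal to `needle`.
        if contains_zero_byte(word ^ pattern) {
            return chunk.iter().position(|&b| b == needle).map(|p| offset + p);
        }
        offset += 8;
    }
    // Fall back to a plain byte scan for the short tail.
    chunks
        .remainder()
        .iter()
        .position(|&b| b == needle)
        .map(|p| offset + p)
}

fn main() {
    assert_eq!(memchr_swar(b'w', b"hello world, hello"), Some(6));
    assert_eq!(memchr_swar(b'z', b"hello world, hello"), None);
}
```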
Manish Goregaokar
4ef6847d4d Use memchr for [i8]::contains as well 2017-12-31 20:35:39 +05:30
Corey Farwell
66ef6b9c09 Deprecate [T]::rotate in favor of [T]::rotate_{left,right}.
Background
==========

Slices currently have an unstable [`rotate`] method which rotates
elements in the slice to the _left_ N positions. [Here][tracking] is the
tracking issue for this unstable feature.

```rust
let mut a = ['a', 'b', 'c', 'd', 'e', 'f'];
a.rotate(2);
assert_eq!(a, ['c', 'd', 'e', 'f', 'a', 'b']);
```

Proposal
========

Deprecate the [`rotate`] method and introduce `rotate_left` and
`rotate_right` methods.

```rust
let mut a = ['a', 'b', 'c', 'd', 'e', 'f'];
a.rotate_left(2);
assert_eq!(a, ['c', 'd', 'e', 'f', 'a', 'b']);
```

```rust
let mut a = ['a', 'b', 'c', 'd', 'e', 'f'];
a.rotate_right(2);
assert_eq!(a, ['e', 'f', 'a', 'b', 'c', 'd']);
```

Justification
=============

I used this method today for the first time and (probably because I’m a
naive westerner who reads LTR) was surprised when the docs mentioned that
elements get rotated in a left-ward direction. I was in a situation
where I needed to shift elements in a right-ward direction and had to
context-switch away from the main problem I was working on to figure out
how far to rotate left in order to accomplish the right-ward rotation I
needed.

Ruby’s `Array.rotate` shifts left-ward; Python’s `deque.rotate` shifts
right-ward. Both implementations allow passing negative numbers
to shift in the opposite direction.

Introducing `rotate_left` and `rotate_right` would:

- remove ambiguity about direction (alleviating the need to read docs 😉)
- make it easier for people who need to rotate right

[`rotate`]: https://doc.rust-lang.org/std/primitive.slice.html#method.rotate
[tracking]: https://github.com/rust-lang/rust/issues/41891
2017-12-24 23:01:24 -08:00
Florian Keller
f6ab79d1aa docs(slice): Clarify half-open interval 2017-12-20 11:43:49 +01:00
Manish Goregaokar
f8f28886e0 Use memchr in [u8]::contains 2017-12-13 09:11:42 -06:00
Manish Goregaokar
2bf0df777b Move rust memchr impl to libcore 2017-12-13 01:15:18 -06:00
bors
b32267f2c1 Auto merge of #45595 - scottmcm:iter-try-fold, r=dtolnay
Short-circuiting internal iteration with Iterator::try_fold & try_rfold

These are the core methods in terms of which the other methods (`fold`, `all`, `any`, `find`, `position`, `nth`, ...) can be implemented, allowing Iterator implementors to get the full goodness of internal iteration by only overriding one method (per direction).

Based on the `Try` trait, so it works with both `Result` and `Option` (🎉 https://github.com/rust-lang/rust/pull/42526).  The `try_fold` rustdoc examples use `Option` and the `try_rfold` ones use `Result`.

AKA continuing in the vein of PRs https://github.com/rust-lang/rust/pull/44682 & https://github.com/rust-lang/rust/pull/44856 for more of `Iterator`.

New bench following the pattern from the latter of those:
```
test iter::bench_take_while_chain_ref_sum          ... bench:   1,130,843 ns/iter (+/- 25,110)
test iter::bench_take_while_chain_sum              ... bench:     362,530 ns/iter (+/- 391)
```

I also ran the benches without the `fold` & `rfold` overrides to test their new default impls, with basically no change.  I left them there, though, to take advantage of existing overrides and because `AlwaysOk` has some sub-optimality due to https://github.com/rust-lang/rust/issues/43278 (which 45225 should fix).

If you're wondering why there are three type parameters, see issue https://github.com/rust-lang/rust/issues/45462

Thanks to @bluss for the [original IRLO thread](https://internals.rust-lang.org/t/pre-rfc-fold-ok-is-composable-internal-iteration/4434) and the rfold PR, and to @cuviper for adding so many folds, [encouraging me](https://github.com/rust-lang/rust/pull/45379#issuecomment-339424670) to make this PR, and finding a catastrophic bug in a pre-review.
2017-11-17 07:43:08 +00:00
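A minimal sketch of the claim that the other consumers reduce to `try_fold`; the free function below mirrors the shape of the default implementations, not the std source.

```rust
// `all` as a short-circuiting fold: stop at the first `Err`.
fn all_via_try_fold<I>(iter: &mut I, mut pred: impl FnMut(I::Item) -> bool) -> bool
where
    I: Iterator,
{
    iter.try_fold((), |(), item| if pred(item) { Ok(()) } else { Err(()) })
        .is_ok()
}

fn main() {
    assert!(all_via_try_fold(&mut [1, 2, 3].iter(), |&x| x > 0));
    assert!(!all_via_try_fold(&mut [1, -2, 3].iter(), |&x| x > 0));
}
```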
bors
24bb4d1e75 Auto merge of #45333 - alkis:master, r=bluss
Improve SliceExt::binary_search performance

Improve the performance of binary_search by reducing the number of unpredictable conditional branches in the loop. In addition, improve the benchmarks to test performance in the L1, L2, and L3 caches on sorted arrays with or without dups.

Before:

```
test slice::binary_search_l1                               ... bench:          48 ns/iter (+/- 1)
test slice::binary_search_l2                               ... bench:          63 ns/iter (+/- 0)
test slice::binary_search_l3                               ... bench:         152 ns/iter (+/- 12)
test slice::binary_search_l1_with_dups                     ... bench:          36 ns/iter (+/- 0)
test slice::binary_search_l2_with_dups                     ... bench:          64 ns/iter (+/- 1)
test slice::binary_search_l3_with_dups                     ... bench:         153 ns/iter (+/- 6)
```

After:

```
test slice::binary_search_l1                               ... bench:          15 ns/iter (+/- 0)
test slice::binary_search_l2                               ... bench:          23 ns/iter (+/- 0)
test slice::binary_search_l3                               ... bench:         100 ns/iter (+/- 17)
test slice::binary_search_l1_with_dups                     ... bench:          15 ns/iter (+/- 0)
test slice::binary_search_l2_with_dups                     ... bench:          23 ns/iter (+/- 0)
test slice::binary_search_l3_with_dups                     ... bench:          98 ns/iter (+/- 14)
```
2017-11-11 18:17:14 +00:00
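A hedged sketch of the branch-reduction idea, monomorphic over `u32` for brevity (the real version is generic and uses unchecked indexing): the comparison feeds a select that LLVM can lower to a conditional move rather than a branch the predictor must guess.

```rust
use std::cmp::Ordering::{Equal, Greater, Less};

fn binary_search_cmov(s: &[u32], x: u32) -> Result<usize, usize> {
    let mut size = s.len();
    if size == 0 {
        return Err(0);
    }
    let mut base = 0usize;
    while size > 1 {
        let half = size / 2;
        let mid = base + half;
        // Select, not branch: `base` either stays or jumps to `mid`.
        base = if s[mid].cmp(&x) == Greater { base } else { mid };
        size -= half;
    }
    match s[base].cmp(&x) {
        Equal => Ok(base),
        Less => Err(base + 1),
        Greater => Err(base),
    }
}

fn main() {
    let v = [1, 3, 5, 7, 9];
    assert_eq!(binary_search_cmov(&v, 7), Ok(3));
    assert_eq!(binary_search_cmov(&v, 4), Err(2));
    assert_eq!(binary_search_cmov(&v, 0), Err(0));
}
```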
Alkis Evlogimenos
2ca111b6b9 Improve the performance of binary_search by reducing the number of
unpredictable conditional branches in the loop. In addition, improve the
benchmarks to test performance in the L1, L2, and L3 caches on sorted arrays
with or without dups.

Before:

```
test slice::binary_search_l1                               ... bench:  48 ns/iter (+/- 1)
test slice::binary_search_l2                               ... bench:  63 ns/iter (+/- 0)
test slice::binary_search_l3                               ... bench: 152 ns/iter (+/- 12)
test slice::binary_search_l1_with_dups                     ... bench:  36 ns/iter (+/- 0)
test slice::binary_search_l2_with_dups                     ... bench:  64 ns/iter (+/- 1)
test slice::binary_search_l3_with_dups                     ... bench: 153 ns/iter (+/- 6)
```

After:

```
test slice::binary_search_l1                               ... bench:  15 ns/iter (+/- 0)
test slice::binary_search_l2                               ... bench:  23 ns/iter (+/- 0)
test slice::binary_search_l3                               ... bench: 100 ns/iter (+/- 17)
test slice::binary_search_l1_with_dups                     ... bench:  15 ns/iter (+/- 0)
test slice::binary_search_l2_with_dups                     ... bench:  23 ns/iter (+/- 0)
test slice::binary_search_l3_with_dups                     ... bench:  98 ns/iter (+/- 14)
```
2017-11-11 16:00:26 +01:00
Badel2
4bd6be9dc6 Inclusive range updated to ..= syntax 2017-11-06 13:43:59 +01:00
whitequark
1cc88be2eb De-stabilize core::slice::{from_ref, from_ref_mut}. 2017-11-01 22:21:29 +00:00
Scott McMurray
eef4d42a3f Fundamental internal iteration with try_fold
This is the core method in terms of which the other methods (fold, all, any, find, position, nth, ...) can be implemented, allowing Iterator implementors to get the full goodness of internal iteration by only overriding one method (per direction).
2017-10-29 15:45:20 -07:00
whitequark
8431811728 Bring back slice::ref_slice as slice::from_ref.
These functions were deprecated and removed in 1.5, but such simple
functionality shouldn't require using unsafe code, and it isn't
cluttering libstd too much.
2017-10-23 22:53:31 +00:00
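The API as later stabilized, for reference; the mutable counterpart mentioned in the de-stabilization commit above landed as `slice::from_mut`.

```rust
fn main() {
    let x = 42i32;
    // A one-element slice view of a single value, no unsafe needed.
    let s: &[i32] = std::slice::from_ref(&x);
    assert_eq!(s, &[42]);

    let mut y = 7i32;
    let m: &mut [i32] = std::slice::from_mut(&mut y);
    m[0] = 8;
    assert_eq!(y, 8);
}
```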
Niv Kaminer
ff99111f48 address some FIXMEs whose associated issues were marked as closed
remove FIXME(#13101) since `assert_receiver_is_total_eq` stays.
remove FIXME(#19649) now that stability markers render.
remove FIXME(#13642) now that the benchmarks were moved.
remove FIXME(#6220) now that floating points can be formatted.
remove FIXME(#18248) and write tests for `Rc<str>` and `Rc<[u8]>`
remove references to irrelevant issues in FIXME(#1697, #2178...)
update FIXME(#5516) to point to getopts issue 7
update FIXME(#7771) to point to RFC 628
update FIXME(#19839) to point to issue 26925
2017-09-30 11:33:47 +03:00
Badel2
4737c5a068 Substitute ... with the expanded form
`RangeInclusive { start, end }`; this way we suppress the warnings about `...` in expressions being deprecated until `..=` is available in the compiler
2017-09-22 22:05:18 +02:00
Alex Burka
e64efc91f4 Add support for ..= syntax
Add ..= to the parser

Add ..= to libproc_macro

Add ..= to ICH

Highlight ..= in rustdoc

Update impl Debug for RangeInclusive to ..=

Replace `...` to `..=` in range docs

Make the dotdoteq warning point to the `...`

Add warning for ... in expressions

Updated more tests to the ..= syntax

Updated even more tests to the ..= syntax

Updated the inclusive_range entry in unstable book
2017-09-22 22:05:18 +02:00
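For reference, the syntax as it works on stable Rust today:

```rust
fn main() {
    // `..=` in expressions: an inclusive range.
    let v: Vec<u32> = (1..=5).collect();
    assert_eq!(v, [1, 2, 3, 4, 5]);

    // `..=` in patterns, too.
    match 3u32 {
        1..=5 => println!("in range"),
        _ => println!("out of range"),
    }
}
```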
Chris Stankus
e15a07ad64 Remove Borrow bound from SliceExt::binary_search 2017-08-30 12:02:25 -05:00
Scott McMurray
c4cb2d1f2e Add [T]::swap_with_slice
The safe version of a method from ptr, like [T]::copy_from_slice
2017-08-21 22:20:00 -07:00
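Usage of the method on today's stable Rust, for reference:

```rust
fn main() {
    let mut a = [1, 2, 3];
    let mut b = [4, 5, 6];
    // Swap the contents of two equal-length slices in place.
    a.swap_with_slice(&mut b);
    assert_eq!(a, [4, 5, 6]);
    assert_eq!(b, [1, 2, 3]);
}
```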
Zack M. Davis
1b6c9605e4 use field init shorthand EVERYWHERE
Like #43008 (f668999), but _much more aggressive_.
2017-08-15 15:29:17 -07:00
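The shorthand in question, for reference:

```rust
struct Point {
    x: i32,
    y: i32,
}

fn new_point(x: i32, y: i32) -> Point {
    // Field init shorthand: `x` instead of `x: x`.
    Point { x, y }
}

fn main() {
    let p = new_point(1, 2);
    assert_eq!((p.x, p.y), (1, 2));
}
```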
Alex Crichton
53d8b1d051 std: Cut down #[inline] annotations where not necessary
This PR cuts down on a large number of `#[inline(always)]` and `#[inline]`
annotations in libcore for various core functions. The `#[inline(always)]`
annotation is almost never needed and is detrimental to debug build times as it
forces LLVM to perform inlining when it otherwise wouldn't need to in debug
builds. Additionally `#[inline]` is an unnecessary annotation on almost all
generic functions because the function will already be monomorphized into other
codegen units and otherwise rarely needs the extra "help" from us to tell LLVM
to inline something.

Overall this PR cut the compile time of a [microbenchmark][1] by 30% from 1s to
0.7s.

[1]: https://gist.github.com/alexcrichton/a7d70319a45aa60cf36a6a7bf540dd3a
2017-07-20 12:01:32 -07:00
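A small sketch of the reasoning above; the function names are illustrative, not from libcore.

```rust
// No #[inline] needed: a generic function is monomorphized into each
// caller's codegen unit, where LLVM can already see and inline it.
pub fn first_elem<T>(s: &[T]) -> Option<&T> {
    s.first()
}

// #[inline] can still pay off on small, hot, non-generic functions
// that would otherwise be opaque across crate boundaries.
#[inline]
pub fn is_ascii_byte(b: u8) -> bool {
    b < 128
}

fn main() {
    assert_eq!(first_elem(&[1, 2, 3]), Some(&1));
    assert!(is_ascii_byte(b'a'));
}
```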
Stjepan Glavina
bfbe4039f8 Fix tidy errors 2017-07-02 11:16:37 +02:00
Alex Crichton
be7ebdd512 Bump version and stage0 compiler 2017-06-19 22:25:05 -07:00
bors
558cd1e393 Auto merge of #41670 - scottmcm:slice-rotate, r=alexcrichton
Add an in-place rotate method for slices to libcore

A helpful primitive for moving chunks of data around inside a slice.

For example, if you have a range selected and are drag-and-dropping it somewhere else (Example from [Sean Parent's talk](https://youtu.be/qH6sSOr-yk8?t=560)).

(If this should be an RFC instead of a PR, please let me know.)

Edit: changed example
2017-06-02 07:51:20 +00:00
Mark Simulacrum
00c87a6486 Rollup merge of #42134 - scottmcm:rangeinclusive-struct, r=aturon
Make RangeInclusive just a two-field struct

Not being an enum improves ergonomics and consistency, especially since the NonEmpty variant wasn't prevented from actually being empty. It can still be iterable without an extra "done" bit by making an exhausted range satisfy !(start <= end), which is even possible without changing the Step trait.

Implements merged https://github.com/rust-lang/rfcs/pull/1980; tracking issue https://github.com/rust-lang/rust/issues/28237.

This is definitely a breaking change to anything consuming `RangeInclusive` directly (not as an Iterator) or constructing it without using the sugar.  Is there some change that would make sense before this so compilation failures could be compatibly fixed ahead of time?

r? @aturon (as FCP proposer on the RFC)
2017-05-24 19:50:01 -06:00
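A hedged sketch of the design point above: iteration can mark exhaustion by breaking the start <= end invariant instead of carrying a "done" flag. The type below is illustrative; the real RangeInclusive is generic over Step and later grew an explicit flag.

```rust
struct TwoFieldRange {
    start: u32,
    end: u32, // inclusive
}

impl Iterator for TwoFieldRange {
    type Item = u32;
    fn next(&mut self) -> Option<u32> {
        if self.start > self.end {
            return None; // exhausted: !(start <= end)
        }
        let n = self.start;
        if n < self.end {
            self.start += 1;
        } else if self.end > 0 {
            self.end -= 1; // last element: break the invariant downward
        } else {
            self.start = 1; // start == end == 0: break it upward
        }
        Some(n)
    }
}

fn main() {
    let r = TwoFieldRange { start: 1, end: 4 };
    assert_eq!(r.collect::<Vec<_>>(), [1, 2, 3, 4]);
}
```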
Scott McMurray
094d61f079 Stop returning k from [T]::rotate 2017-05-21 03:05:19 -07:00
Scott McMurray
a92ad5e52a Update slice_rotate to a real tracking number 2017-05-21 01:55:43 -07:00
Scott McMurray
c05676b97f Add an in-place rotate method for slices to libcore
A helpful primitive for moving chunks of data around inside a slice.
In particular, adding elements to the end of a Vec then moving them
somewhere else, as a way to do efficient multiple-insert.  (There's
drain for efficient block-remove, but no easy way to block-insert.)

Talk with another example: <https://youtu.be/qH6sSOr-yk8?t=560>
2017-05-21 01:55:43 -07:00
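The multiple-insert pattern the message describes, written here with the later-stabilized `rotate_right` (this commit's method was the left-rotating `rotate`):

```rust
// Push the new elements onto the end, then rotate them into place.
fn insert_block(v: &mut Vec<u32>, at: usize, block: &[u32]) {
    v.extend_from_slice(block);
    v[at..].rotate_right(block.len());
}

fn main() {
    let mut v = vec![1, 2, 5, 6];
    insert_block(&mut v, 2, &[3, 4]);
    assert_eq!(v, [1, 2, 3, 4, 5, 6]);
}
```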
Scott McMurray
f166bd9857 Make RangeInclusive just a two-field struct
Not being an enum improves ergonomics, especially since a NonEmpty variant could still be empty. It can still be iterable without an extra "done" bit by making an exhausted range satisfy !(start <= end), which is even possible without changing the Step trait.

Implements RFC 1980
2017-05-21 01:48:03 -07:00
Oliver Middleton
2f703e4304 Correct some stability versions
These were found by running tidy on stable versions of rust and finding
features stabilised with the wrong version numbers.
2017-05-20 08:38:39 +01:00
Scott McMurray
1f891d11f5 Improve implementation approach comments in [T]::reverse() 2017-05-05 18:54:47 -07:00
Scott McMurray
e8fad325fe Make [u8]::reverse() 5x faster
Since LLVM doesn't vectorize the loop for us, do unaligned reads
of a larger type and use LLVM's bswap intrinsic to reverse the
actual bytes.  cfg!-restricted to x86 and x86_64, as I assume it
wouldn't help on things like ARMv5.

Also makes [u16]::reverse() a more modest 1.5x faster by
loading/storing u32 and swapping the u16s with ROT16.

Thank you ptr::*_unaligned for making this easy :)
2017-05-04 20:28:34 -07:00
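A sketch of the trick on today's stable Rust: `swap_bytes` compiles down to the bswap instruction the message mentions. The real change reads via `ptr::read_unaligned` and also covers the u16/ROT16 variant; this version only shows the byte case.

```rust
use std::convert::TryInto;

fn reverse_bytes_fast(s: &mut [u8]) {
    let (mut i, mut j) = (0, s.len());
    // Need at least 16 bytes left so the two 8-byte windows don't overlap.
    while j - i >= 16 {
        let a = u64::from_ne_bytes(s[i..i + 8].try_into().unwrap());
        let b = u64::from_ne_bytes(s[j - 8..j].try_into().unwrap());
        // swap_bytes (bswap) reverses the 8 bytes within each word.
        s[i..i + 8].copy_from_slice(&b.swap_bytes().to_ne_bytes());
        s[j - 8..j].copy_from_slice(&a.swap_bytes().to_ne_bytes());
        i += 8;
        j -= 8;
    }
    s[i..j].reverse(); // plain reverse for the short middle
}

fn main() {
    let mut v: Vec<u8> = (0..=20).collect();
    let mut expected = v.clone();
    expected.reverse();
    reverse_bytes_fast(&mut v);
    assert_eq!(v, expected);
}
```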