Commit Graph

135338 Commits

LeSeulArtichaut
17b395d296 Use TypeVisitor::BreakTy in structural_match::Search 2020-11-14 21:15:32 +01:00
Christiaan Dirkx
6554086526 Add u128 and i128 integer tests 2020-11-14 20:27:08 +01:00
LeSeulArtichaut
cbb6b1c338 Introduce TypeVisitor::BreakTy 2020-11-14 20:25:27 +01:00
LeSeulArtichaut
e0f3119103 Introduce TypeVisitor::BreakTy 2020-11-14 20:25:27 +01:00
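The `BreakTy` commits above add an associated break type to rustc's `TypeVisitor`. A hedged, generic sketch of that pattern using made-up names and `std::ops::ControlFlow`, not the actual rustc trait:

```rust
use std::ops::ControlFlow;

// Illustrative visitor trait with an associated break type, loosely mirroring
// the idea behind TypeVisitor::BreakTy (names here are invented).
trait Visitor {
    type BreakTy;
    fn visit(&mut self, value: &i32) -> ControlFlow<Self::BreakTy>;
}

// A visitor that stops at the first negative value and reports it.
struct FindNegative;

impl Visitor for FindNegative {
    type BreakTy = i32;
    fn visit(&mut self, value: &i32) -> ControlFlow<i32> {
        if *value < 0 {
            ControlFlow::Break(*value)
        } else {
            ControlFlow::Continue(())
        }
    }
}

fn main() {
    let mut finder = FindNegative;
    let result = [3, 7, -2, 5].iter().try_for_each(|x| finder.visit(x));
    assert_eq!(result, ControlFlow::Break(-2));
}
```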
bors
0c7a48c5f0 Auto merge of #78809 - vn-ki:fix-issue-76064, r=oli-obk
add error_occured field to ConstQualifs,

fix #76064

I wasn't sure what `in_return_place` actually did, or why it returns `ConstQualifs` while its sibling functions return `bool`, so I tried to make as few changes to the structure as possible. Please point out whether I should refactor it.

r? `@oli-obk`
cc `@RalfJung`
2020-11-14 18:03:17 +00:00
bors
98d66340d6 Auto merge of #78809 - vn-ki:fix-issue-76064, r=oli-obk
add error_occured field to ConstQualifs,

fix #76064

I wasn't sure what `in_return_place` actually did, or why it returns `ConstQualifs` while its sibling functions return `bool`, so I tried to make as few changes to the structure as possible. Please point out whether I should refactor it.

r? `@oli-obk`
cc `@RalfJung`
2020-11-14 18:03:17 +00:00
Fabian Zaiser
864e554b9a Add underscore expressions for destructuring assignments
Co-authored-by: varkor <github@varkor.com>
2020-11-14 13:53:12 +00:00
Fabian Zaiser
8cf3564310 Add underscore expressions for destructuring assignments
Co-authored-by: varkor <github@varkor.com>
2020-11-14 13:53:12 +00:00
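The two commits above add `_` in assignment position as part of destructuring assignments. A minimal example of the syntax as it later stabilized:

```rust
fn main() {
    let mut x = 0;
    // The underscore discards the second element of the tuple.
    (x, _) = (1, 2);
    assert_eq!(x, 1);
}
```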
Who? Me?!
b825ae7d28 Style nit
Co-authored-by: matthewjasper <20113453+matthewjasper@users.noreply.github.com>
2020-11-14 07:20:25 -06:00
bors
30e49a9ead Auto merge of #75272 - the8472:spec-copy, r=KodrAus
specialize io::copy to use copy_file_range, splice or sendfile

Fixes #74426.
Also covers #60689 but only as an optimization instead of an official API.

The specialization only covers std-owned structs, so it should avoid the problems with #71091.

Currently Linux-only, but it should be generalizable to other Unix systems that have sendfile/sosplice and similar.

There is a bit of optimization potential around the syscall count. Right now it may end up doing more syscalls than the naive copy loop when doing short (<8KiB) copies between file descriptors.

The test case executes the following:

```
[pid 103776] statx(3, "", AT_STATX_SYNC_AS_STAT|AT_EMPTY_PATH, STATX_ALL, {stx_mask=STATX_ALL|STATX_MNT_ID, stx_attributes=0, stx_mode=S_IFREG|0644, stx_size=17, ...}) = 0
[pid 103776] write(4, "wxyz", 4)        = 4
[pid 103776] write(4, "iklmn", 5)       = 5
[pid 103776] copy_file_range(3, NULL, 4, NULL, 5, 0) = 5

```

- 0-1 `stat` calls to identify the source file type; 0 if the type can be inferred from the struct from which the FD was extracted.
- M `write` calls to drain the `BufReader`/`BufWriter` wrappers; these only happen when buffers are present, and M is at most the number of wrappers. If there is a write buffer it may absorb the read buffer contents first, so only a single write results. Vectored writes would also be an option, but that would require more invasive changes to `BufWriter`.
- N `copy_file_range`/`splice`/`sendfile` calls until the file size, EOF, or the byte limit from `Take` is reached. This should generally be *much* more efficient than the read-write loop and also has other benefits such as DMA offload or extent sharing.

## Benchmarks

```

OLD

test io::tests::bench_file_to_file_copy         ... bench:      21,002 ns/iter (+/- 750) = 6240 MB/s    [ext4]
test io::tests::bench_file_to_file_copy         ... bench:      35,704 ns/iter (+/- 1,108) = 3671 MB/s  [btrfs]
test io::tests::bench_file_to_socket_copy       ... bench:      57,002 ns/iter (+/- 4,205) = 2299 MB/s
test io::tests::bench_socket_pipe_socket_copy   ... bench:     142,640 ns/iter (+/- 77,851) = 918 MB/s

NEW

test io::tests::bench_file_to_file_copy         ... bench:      14,745 ns/iter (+/- 519) = 8889 MB/s    [ext4]
test io::tests::bench_file_to_file_copy         ... bench:       6,128 ns/iter (+/- 227) = 21389 MB/s   [btrfs]
test io::tests::bench_file_to_socket_copy       ... bench:      13,767 ns/iter (+/- 3,767) = 9520 MB/s
test io::tests::bench_socket_pipe_socket_copy   ... bench:      26,471 ns/iter (+/- 6,412) = 4951 MB/s
```
2020-11-14 12:01:55 +00:00
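The PR above changes only how `std::io::copy` works internally; the user-facing call is unchanged. A minimal sketch of a copy that the Linux specialization can serve via `copy_file_range`/`sendfile`/`splice` (file paths are placeholders):

```rust
use std::fs::File;
use std::io;

fn main() -> io::Result<()> {
    let mut src = File::open("input.bin")?;    // placeholder path
    let mut dst = File::create("output.bin")?; // placeholder path
    // On Linux, the specialized io::copy can offload this to the kernel
    // instead of running the generic read/write loop.
    let bytes = io::copy(&mut src, &mut dst)?;
    println!("copied {} bytes", bytes);
    Ok(())
}
```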
Ashley Mannix
74f2941a8e tweak new codegen test to work locally 2020-11-14 21:12:15 +10:00
bors
66c1309446 Auto merge of #78959 - petrochenkov:likeuefi, r=nagisa
rustc_target: Mark UEFI targets as `is_like_windows`/`is_like_msvc`

And document what `is_like_windows` and `is_like_msvc` actually mean in more detail.

Addresses FIXMEs left from https://github.com/rust-lang/rust/pull/71030.
r? `@nagisa`
2020-11-14 09:11:25 +00:00
Joshua Nelson
03cbee84af Rename ItemEnum -> ItemKind, inner -> kind 2020-11-14 03:46:18 -05:00
bors
408b615d34 Auto merge of #6320 - giraffate:fix_suggestion_in_manual_range_contains_using_float, r=llogiq
Fix suggestion in `manual_range_contains` when using float

Fix #6315

changelog: Fix suggestion in `manual_range_contains` when using float
2020-11-14 08:06:00 +00:00
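For context, `manual_range_contains` flags hand-written range checks and suggests `contains`; the fix above concerns the suggestion produced for floating-point bounds. A small illustration (not taken from the PR):

```rust
fn main() {
    let x = 3.5_f64;
    // Manual comparison that the lint flags...
    let manual = x >= 1.0 && x <= 5.0;
    // ...and the `RangeInclusive::contains` form it suggests, which also
    // works for floats.
    let suggested = (1.0..=5.0).contains(&x);
    assert_eq!(manual, suggested);
}
```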
Joshua Nelson
4d44d77c4d Use default git pager instead of hard-coding delta 2020-11-14 02:48:13 -05:00
Joshua Nelson
e6e4a0ab63 Capture stdout and stderr of diff so they'll be printed at the end 2020-11-14 02:48:13 -05:00
Joshua Nelson
619880e554 Fix tests with auxiliary docs 2020-11-14 02:48:13 -05:00
Joshua Nelson
975471ca4d Fall back to diff if delta isn't installed 2020-11-14 02:48:13 -05:00
Joshua Nelson
acd6ce2347 Fix bugs 2020-11-14 02:48:13 -05:00
Joshua Nelson
c0eedc0b6a Address review comments
- remove unused args
- Fix formatting
- Improve naming
- Fix typo
2020-11-14 02:48:13 -05:00
Joshua Nelson
341eb6d6f5 Give a better error when rustdoc tests fail
- Run the default rustdoc against the current rustdoc
- Diff output recursively
- Colorize diff output
2020-11-14 02:48:12 -05:00
bors
1a25580c6c Auto merge of #78951 - petrochenkov:unknown, r=ehuss
rustc_target: Change os and vendor values to "none" and "unknown" for some targets

Closes https://github.com/rust-lang/rust/issues/77730
r? `@ehuss`
2020-11-14 06:44:18 +00:00
bors
50d3c2a3cb Auto merge of #78736 - petrochenkov:lazyenum, r=Aaron1011
rustc_parse: Remove optimization for 0-length streams in `collect_tokens`

The optimization conflates empty token streams with unknown token streams, which is at least suspicious, and removing it doesn't affect performance because 0-length token streams are very rare.

r? `@Aaron1011`
2020-11-14 04:21:56 +00:00
Thom Chiovoloni
55d7f736d8 Tighten the bounds on atomic Ordering in std::sys::unix::weak 2020-11-13 19:15:51 -08:00
Camille GILLOT
41c44b498f Move Steal to rustc_data_structures. 2020-11-14 01:30:56 +01:00
Tomasz Miąsko
6903273339 Lower intrinsic calls: forget, size_of, unreachable, wrapping_*
This allows constant propagation to evaluate `size_of` and `wrapping_*`,
and unreachable propagation to propagate a call to `unreachable`.

The lowering is performed as a MIR optimization, rather than during MIR
building to preserve the special status of intrinsics with respect to
unsafety checks and promotion.
2020-11-14 00:00:00 +00:00
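The intrinsics named above are reached through `std::mem` and the wrapping integer methods; after lowering, constant propagation can fold such calls. A small illustration of the user-facing forms (not the MIR transformation itself):

```rust
fn main() {
    // size_of lowers to a constant the optimizer can propagate.
    let size = std::mem::size_of::<u64>();
    // wrapping_* lowers to the basic arithmetic operation without an overflow check.
    let wrapped = 255u8.wrapping_add(1);
    // forget becomes a no-op: the value is consumed without running Drop.
    std::mem::forget(String::from("never dropped"));
    assert_eq!((size, wrapped), (8, 0));
}
```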
bors
b63d05a908 Auto merge of #78683 - Nemo157:issue-78673, r=lcnr
Check predicates from blanket trait impls while testing if they apply

fixes #78673
2020-11-13 23:12:01 +00:00
The8472
bbfa92c82d Always handle EOVERFLOW by falling back to the generic copy loop
Previously, EOVERFLOW handling was only applied to the io::copy specialization
but not to fs::copy, which shares the same code.

Additionally, we lower the chunk size to 1GB since we have a user report
that older kernels may return EINVAL when passing 0x8000_0000,
while smaller values succeed.
2020-11-13 22:38:27 +01:00
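A hedged sketch of the fallback shape described above (purely illustrative, not the std kernel_copy code; `offload_chunk` and `generic_copy` are stand-ins, and the errno value is an assumption for common Linux targets):

```rust
use std::io;

const EOVERFLOW: i32 = 75;      // errno value on common Linux targets (assumption)
const CHUNK: u64 = 0x4000_0000; // 1GB, below the 0x8000_0000 length that
                                // reportedly triggers EINVAL on older kernels

// Try the kernel offload path chunk by chunk; on EOVERFLOW give up and let a
// caller-provided generic read/write loop finish the job.
fn copy_with_fallback(
    mut offload_chunk: impl FnMut(u64) -> io::Result<u64>,
    mut generic_copy: impl FnMut() -> io::Result<u64>,
) -> io::Result<u64> {
    let mut written = 0u64;
    loop {
        match offload_chunk(CHUNK) {
            Ok(0) => return Ok(written), // EOF
            Ok(n) => written += n,
            Err(ref e) if e.raw_os_error() == Some(EOVERFLOW) => {
                return Ok(written + generic_copy()?);
            }
            Err(e) => return Err(e),
        }
    }
}

fn main() {
    // Toy usage: the offload path fails immediately with EOVERFLOW, so the
    // generic loop supplies the copied byte count instead.
    let copied = copy_with_fallback(
        |_len| Err(io::Error::from_raw_os_error(EOVERFLOW)),
        || Ok(42),
    )
    .unwrap();
    assert_eq!(copied, 42);
}
```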
The8472
4854d418a5 do direct splice syscall and probe availability to get android builds to work
Android builds use feature level 14, and the libc wrapper for splice is gated
on feature level 21+, so we have to invoke the syscall directly.
Additionally, the emulator doesn't seem to support it, so we also have to
add ENOSYS checks.
2020-11-13 22:38:27 +01:00
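A hedged sketch of the ENOSYS probing idea mentioned above, assuming the `libc` crate as a dependency; this is not the actual std implementation, and the caching is deliberately simplistic:

```rust
#[cfg(target_os = "linux")]
fn splice_available() -> bool {
    use std::sync::atomic::{AtomicU8, Ordering};

    // 0 = unknown, 1 = available, 2 = unavailable
    static STATE: AtomicU8 = AtomicU8::new(0);
    match STATE.load(Ordering::Relaxed) {
        1 => return true,
        2 => return false,
        _ => {}
    }
    // Probe with invalid file descriptors: ENOSYS means the kernel (or the
    // emulator) lacks the syscall entirely; any other errno means it exists.
    let ret = unsafe {
        libc::syscall(libc::SYS_splice, -1i32, 0usize, -1i32, 0usize, 0usize, 0u32)
    };
    let missing =
        ret == -1 && std::io::Error::last_os_error().raw_os_error() == Some(libc::ENOSYS);
    STATE.store(if missing { 2 } else { 1 }, Ordering::Relaxed);
    !missing
}

#[cfg(target_os = "linux")]
fn main() {
    println!("splice available: {}", splice_available());
}

#[cfg(not(target_os = "linux"))]
fn main() {}
```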
The8472
3dfc377aa1 move sendfile/splice/copy_file_range into kernel_copy module 2020-11-13 22:38:27 +01:00
The8472
888b1031bc limit visibility of copy offload helpers to sys::unix module 2020-11-13 22:38:27 +01:00
The8472
18bfe2a66b move copy specialization tests to their own module 2020-11-13 22:38:27 +01:00
The8472
7f5d2722af move copy specialization into sys::unix module 2020-11-13 22:38:23 +01:00
Vadim Petrochenkov
ac4c1f58b9 rustc_resolve: Make macro_rules scope chain compression lazy 2020-11-14 00:35:56 +03:00
Aman Arora
c50e57f946 Log closure as well 2020-11-13 16:10:12 -05:00
Camelid
7eb1a1afcf Validate that locals have a corresponding LocalDecl 2020-11-13 12:54:42 -08:00
bors
74f7e32f43 Auto merge of #78888 - richkadel:llvm-coverage-tests, r=tmandry
Fix and re-enable two coverage tests on MacOS

Note: in the coverage-reports test, the comment about MacOS was wrong.
The setting is based on the llvm `optimize` setting in config.toml. There
doesn't appear to be any environment variable I can check, and I
don't think we should add one. Testing the binary itself is a more
reliable way to check anyway.

For the coverage-spanview test, I removed the dependency on sed
altogether, which is much less ugly than trying to work around the
MacOS sed differences.

I tested these changes on Linux, Windows, and Mac.

r? `@tmandry`
FYI `@wesleywiser`
2020-11-13 20:06:46 +00:00
bjorn3
ffd6fdd843 Remove unnecessary paths from LD_LIBRARY_PATH 2020-11-13 19:51:00 +01:00
bjorn3
c982c48579 Use rpath to compile the cg_clif executable 2020-11-13 19:48:49 +01:00
The8472
ad9b07c7e5 add benchmarks 2020-11-13 19:46:37 +01:00
The8472
46e7fbe60b reduce syscalls by inferring FD types based on source struct instead of calling stat()
Also adds handling for edge cases involving large sparse files where sendfile could fail with EOVERFLOW
2020-11-13 19:46:35 +01:00
The8472
0624730d9e add forwarding specializations for &mut variants
The standard library provides `impl Write for &mut T where T: Write`, so the same
should apply to the specialization traits.
2020-11-13 19:45:38 +01:00
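A generic illustration of the forwarding pattern referenced above, using a made-up trait rather than the private copy-specialization traits:

```rust
// Made-up trait standing in for the internal specialization traits.
trait CopyHint {
    fn prefers_kernel_copy(&self) -> bool;
}

// Forward through &mut, mirroring std's `impl Write for &mut T where T: Write`.
impl<T: CopyHint + ?Sized> CopyHint for &mut T {
    fn prefers_kernel_copy(&self) -> bool {
        (**self).prefers_kernel_copy()
    }
}

struct FileLike;

impl CopyHint for FileLike {
    fn prefers_kernel_copy(&self) -> bool {
        true
    }
}

fn main() {
    let mut f = FileLike;
    let by_ref: &mut FileLike = &mut f;
    assert!(by_ref.prefers_kernel_copy());
}
```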
The8472
cd3bddc044 prioritize sendfile over splice since it results in fewer context switches when sending to pipes
splice returns to userspace when the pipe is full, whereas sendfile
just blocks until it's done; this can achieve much higher throughput.
2020-11-13 19:45:38 +01:00
The8472
67a6059aa5 move tests module into separate file 2020-11-13 19:45:38 +01:00
The8472
5eb88fa5c7 hide unused exports on other platforms 2020-11-13 19:45:38 +01:00
The8472
16236470c1 specialize io::copy to use copy_file_range, splice or sendfile
Currently it only applies to Linux systems. It can be extended to make use
of similar syscalls on other Unix systems.
2020-11-13 19:45:27 +01:00
Matt Brubeck
bf6902ca61 Add BTreeMap::retain and BTreeSet::retain 2020-11-13 10:23:50 -08:00
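A small usage example of the new `retain` methods (the call shape mirrors `HashMap::retain`; this sketch is not from the PR):

```rust
use std::collections::BTreeMap;

fn main() {
    let mut map: BTreeMap<i32, i32> = (0..8).map(|x| (x, x * 10)).collect();
    // Keep only the entries whose keys are even.
    map.retain(|&k, _| k % 2 == 0);
    assert_eq!(map.keys().copied().collect::<Vec<_>>(), vec![0, 2, 4, 6]);
}
```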
Bastian Kauschke
06c9c599ed lessen restriction in check_kind_count 2020-11-13 19:12:49 +01:00
bjorn3
bf94b3819c Rustfmt 2020-11-13 19:02:24 +01:00
bjorn3
7ec44711e6 Rustup to rustc 1.49.0-nightly (9722952f0 2020-11-12) 2020-11-13 19:01:40 +01:00