ci: use 20.04 on x86_64-gnu-nopt builder
Switch the `x86_64-gnu-nopt` builder to use Ubuntu 20.04.
Ubuntu 20.04 has a more recent gdb version than Ubuntu 16.04 (9.1 vs 7.11.1), which rust-lang/rust#77177 requires, as 16.04's gdb 7.11.1 crashes in some cases with Split DWARF. `x86_64-gnu-nopt` is chosen because it runs compare modes, which is how Split DWARF testing is implemented in rust-lang/rust#77177.
I've not confirmed that the issue is resolved with gdb 9.1 (Feb 2020), but my system was using gdb 9.2 (May 2020) and that was fine, and it seems more likely to me that the bug was fixed somewhere between gdb 7.11.1 (May 2016) and gdb 9.1.
Updating a builder to use 20.04 was suggested by `@Mark-Simulacrum` in https://github.com/rust-lang/rust/pull/77117#issuecomment-731846170. I'm not sure whether this is the only change required; if more are necessary, I'm happy to make them.
r? `@pietroalbini`
cc `@Mark-Simulacrum`
This commit switches the x86_64-gnu-nopt builder to use Ubuntu 20.04,
which contains a more recent gdb version than Ubuntu 16.04 (newer gdb
versions fix a bug that Split DWARF can trigger, see
rust-lang/rust#77177 for motivation). x86_64-gnu-nopt is chosen because
it runs compare modes, which is how Split DWARF testing is implemented
in rust-lang/rust#77177.
Signed-off-by: David Wood <david@davidtw.co>
rustc_codegen_ssa: use bitcasts instead of type punning for scalar transmutes.
This specifically helps with `f32` <-> `u32` (`from_bits`, `to_bits`) in Rust-GPU (`rustc_codegen_spirv`), where (AFAIK) we don't yet have enough infrastructure to turn type punning memory accesses into SSA bitcasts.
(There may be more instances, but the one I've seen myself is `f32::signum` from `num-traits`, which inspects the sign bit.)
Sadly I've had to make an exception for `transmute`s between pointers and non-pointers, as LLVM disallows using `bitcast` for them.
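For illustration, a minimal sketch (in user-facing Rust, not the codegen internals) of the kind of scalar transmute this affects; the function name is hypothetical:

```rust
// A hypothetical user-level example: this transmute is equivalent to
// `f32::to_bits`. Previously codegen could lower it as a store/load
// pair through a stack slot (type punning); with this change it can
// become a single SSA `bitcast` between the two scalar types.
fn f32_to_bits(x: f32) -> u32 {
    // SAFETY: `f32` and `u32` have the same size, and every bit
    // pattern is a valid `u32`.
    unsafe { std::mem::transmute::<f32, u32>(x) }
}

fn main() {
    assert_eq!(f32_to_bits(1.0), 0x3f80_0000);
}
```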
r? `@nagisa` cc `@khyperia`
Constier maybe uninit
I was playing around trying to make `[T; N]::zip()` in #79451 be `const fn`. One of the things I bumped into was `MaybeUninit::assume_init`. Is there any reason why the intrinsic `assert_inhabited::<T>()`, and therefore `MaybeUninit::assume_init`, cannot be `const`?
---
I have tried, as best I could, to follow the instructions in [library/core/src/intrinsics.rs](https://github.com/rust-lang/rust/blob/master/library/core/src/intrinsics.rs#L11). I don't really know what I am doing, but it seems to compile after some slight changes to the copy-pasted code. Is this anywhere near how it should be done?
Also, any ideas for the name of the feature gate? I guess `const_maybe_assume_init` is quite misleading since I have added some more methods. Should I add tests? If so, what should be tested?
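For context, a minimal sketch of what making `assume_init` a `const fn` enables, assuming the `const_maybe_assume_init` gate name mentioned above (nightly only):

```rust
#![feature(const_maybe_assume_init)] // gate name as proposed above

use std::mem::MaybeUninit;

// With `assume_init` as a `const fn`, initialization through
// `MaybeUninit` becomes usable in constant evaluation.
const ZEROS: [u8; 4] = {
    let buf = MaybeUninit::new([0u8; 4]);
    // SAFETY: `buf` was fully initialized by `MaybeUninit::new`.
    unsafe { buf.assume_init() }
};

fn main() {
    assert_eq!(ZEROS, [0, 0, 0, 0]);
}
```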
The only thing we now cache is cargo-cache, which we only use for
caching. That's a catch-22 if I've ever seen one. And for Clippy itself
we always want to do a clean build and not cache anything.
- Closures now use closure_min_captures to figure out captured paths
- Build upvar_mutbls using closure_min_captures
- Change logic in limit_capture_mutability to differentiate between
  capturing a parent's local variable and capturing a variable that is
  itself captured by the parent (in the case of nested closures), using
  PlaceBase (see the sketch below)
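For illustration, a source-level sketch of what capturing precise paths means, assuming the RFC 2229 feature gate `capture_disjoint_fields`; the example is mine, not part of the change:

```rust
#![feature(capture_disjoint_fields)] // RFC 2229 precise captures

struct Point {
    x: i32,
    y: i32,
}

fn main() {
    let mut p = Point { x: 0, y: 0 };
    // With min-capture analysis the closure records the path `p.x`
    // rather than all of `p`, so the disjoint field `p.y` remains
    // usable while the closure is live.
    let mut inc_x = || p.x += 1;
    p.y += 1; // ok: only `p.x` is captured
    inc_x();
    assert_eq!((p.x, p.y), (1, 1));
}
```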
Co-authored-by: Roxane Fruytier <roxane.fruytier@hotmail.com>
- Use closure_min_captures maps to capture precise paths
- PlaceBuilder now searches for ancestors in min_capture list
- Add API to `Ty` to allow access to the n-th element in a
tuple in O(1) time.
Co-authored-by: Roxane Fruytier <roxane.fruytier@hotmail.com>
implement better availability probing for copy_file_range
Followup to https://github.com/rust-lang/rust/pull/75428#discussion_r469616547
Previously, syscall detection was overly pessimistic: any attempt to copy to an immutable file (EPERM) would disable copy_file_range support for the whole process.
The change instead probes by calling copy_file_range on invalid file descriptors, which can never run into the immutable-file case, so syscall availability can be distinguished unambiguously.
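A minimal sketch of the probing idea, assuming the `libc` crate (the helper name is hypothetical):

```rust
// Probe with deliberately invalid file descriptors: a kernel without
// copy_file_range fails with ENOSYS, while a kernel that has it fails
// with EBADF. Neither path can hit the immutable-file EPERM case, so
// the result reflects syscall availability and nothing else.
fn copy_file_range_available() -> bool {
    let ret = unsafe {
        libc::syscall(
            libc::SYS_copy_file_range,
            -1i32,                       // fd_in: invalid on purpose
            std::ptr::null_mut::<i64>(), // off_in
            -1i32,                       // fd_out: invalid on purpose
            std::ptr::null_mut::<i64>(), // off_out
            1usize,                      // len
            0u32,                        // flags
        )
    };
    debug_assert_eq!(ret, -1);
    // Any error other than ENOSYS means the syscall exists.
    std::io::Error::last_os_error().raw_os_error() != Some(libc::ENOSYS)
}

fn main() {
    println!("copy_file_range available: {}", copy_file_range_available());
}
```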