Simplify `rustc_span` `analyze_source_file`
Simplifies the logic to what the code *actually* does, which is to just record newlines and multibyte characters. Checking for other ASCII control characters is unnecessary because the generic fallback doesn't do anything for those cases.
Also uses a simpler (and more efficient) means of iterating the set bits of the mask.
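For illustration, the usual trick for visiting each set bit is to peel off the lowest set bit on every iteration instead of testing all bit positions. A minimal sketch (not necessarily the exact code this change uses):

```rust
/// Calls `f` with the index of each set bit in `mask`, lowest bit first.
fn for_each_set_bit(mut mask: u32, mut f: impl FnMut(u32)) {
    while mask != 0 {
        f(mask.trailing_zeros()); // index of the lowest set bit
        mask &= mask - 1;         // clear the lowest set bit
    }
}
```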
Because it's only used in `rustc_mir_transform`. (Presumably it is
currently in `rustc_middle` because lots of other MIR-related stuff is,
but that's not a hard requirement.) And because `rustc_middle` is huge
and it's always good to make it smaller.
`rustc_mir_dataflow/src/elaborate_drops.rs` contains some infrastructure
used by a few MIR passes: the `elaborate_drop` function, the
`DropElaborator` trait, etc.
`rustc_mir_transform/src/elaborate_drops.rs` (same file name, different
crate) contains the `ElaborateDrops` pass. It relies on a lot of the
infrastructure from `rustc_mir_dataflow/src/elaborate_drops.rs`.
It turns out that the drop infrastructure is only used in
`rustc_mir_transform`, so this commit moves it there. (The only
exception is the small `DropFlagState` type, which is moved to the
existing `rustc_mir_dataflow/src/drop_flag_effects.rs`.) The file is
renamed from `rustc_mir_dataflow/src/elaborate_drops.rs` to
`rustc_mir_transform/src/elaborate_drop.rs` (with no trailing `s`)
because (a) the `elaborate_drop` function is the most important export,
and (b) `rustc_mir_transform/src/elaborate_drops.rs` already exists.
All the infrastructure pieces that used to be `pub` are now
`pub(crate)`, because they are now only used within
`rustc_mir_transform`.
coverage: Eliminate more counters by giving them to unreachable nodes
When preparing a function's coverage counters and metadata during codegen, any part of the original coverage graph that was removed by MIR optimizations can be treated as having an execution count of zero.
Somewhat counter-intuitively, if we give those unreachable nodes a _higher_ priority for receiving physical counters (instead of counter expressions), that ends up reducing the total number of physical counters needed.
This works because if a node is unreachable, we don't actually create a physical counter for it. Instead that node gets a fixed zero counter, and any other node that would have relied on that physical counter in its counter expression can just ignore that term completely.
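As a hypothetical sketch of why the zero counters help (these are not the actual rustc data structures): a counter expression whose operand is known to be zero can simply drop that term, so the unreachable node never needs a physical counter.

```rust
// Hypothetical model: a node's count is either a constant zero
// (unreachable), a physical counter, or an expression over other counts.
enum Count {
    Zero,                        // node removed by MIR optimizations
    Counter(u32),                // id of a physical counter
    Sub(Box<Count>, Box<Count>), // lhs - rhs counter expression
}

// If an operand simplifies to zero, the surrounding expression collapses
// and the zero term is ignored completely.
fn simplify(c: Count) -> Count {
    match c {
        Count::Sub(lhs, rhs) => match (simplify(*lhs), simplify(*rhs)) {
            // Subtracting a known-zero term: the other side stands alone.
            (l, Count::Zero) => l,
            (l, r) => Count::Sub(Box::new(l), Box::new(r)),
        },
        other => other,
    }
}
```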
debuginfo: Set bitwidth appropriately in enum variant tags
Previously, we unconditionally set the bitwidth to 128 bits, the largest an enum discriminant could possibly be. Then, LLVM would cut the constant down by chopping off leading zeroes before emitting the DWARF. LLVM only supported 64-bit enumerators, so this would also have occasionally resulted in truncated data.
LLVM added support for 128-bit enumerators in llvm/llvm-project#125578. That patchset trusts the constant to describe how wide the variant tag is, so the high 64 bits of zeros are considered potentially load-bearing.
As a result, we went from emitting tags that looked like:

`DW_AT_discr_value (0xfe)`

(because `dwarf::BestForm` selected `data1`) to emitting tags that looked like:

`DW_AT_discr_value (<0x10> fe ff ff ff 00 00 00 00 00 00 00 00 00 00 00 00 )`
This makes the `DW_AT_discr_value` encode at the bitwidth of the tag, which:
1. Is probably closer to our intentions in terms of describing the data.
2. Doesn't invoke the 128-bit support which may not be supported by all debuggers / downstream tools.
3. Will result in smaller debug information.
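For a concrete picture, here is a hypothetical enum whose tag fits in one byte; with this change, each variant's `DW_AT_discr_value` can be encoded at the tag's 8-bit width instead of as a 128-bit constant:

```rust
// Hypothetical example: the tag is a u8, so discriminant values such as
// 0xfe can be encoded in a single byte in the DWARF.
#[repr(u8)]
enum Message {
    Ping(u32) = 0x01,
    Quit = 0xfe,
}
```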
valtree performance tuning
Summary: This PR makes type checking of code with many type-level constants faster.
After https://github.com/rust-lang/rust/pull/136180 was merged, we observed a small perf regression (https://github.com/rust-lang/rust/pull/136318#issuecomment-2635562821). This happened because that PR introduced additional copies in the fast reject code path for consts, which is very hot for certain crates: 6c1d960d88/compiler/rustc_type_ir/src/fast_reject.rs (L486-L487)
This PR improves the performance again by properly interning the valtrees so that copying and comparing them becomes faster. This will become especially useful with `feature(adt_const_params)`, so the fast reject code doesn't have to do a deep compare of the valtrees.
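A minimal sketch of the interning idea (toy types, not rustc's actual interner): structurally equal trees intern to one shared allocation, so equality becomes a pointer comparison instead of a deep tree walk.

```rust
use std::collections::HashSet;
use std::rc::Rc;

// Toy stand-in for a valtree.
#[derive(Hash, PartialEq, Eq)]
enum Tree {
    Leaf(u128),
    Branch(Vec<Rc<Tree>>),
}

#[derive(Default)]
struct Interner {
    set: HashSet<Rc<Tree>>,
}

impl Interner {
    // Returns a shared handle; structurally equal trees get the same Rc.
    fn intern(&mut self, t: Tree) -> Rc<Tree> {
        if let Some(existing) = self.set.get(&t) {
            return Rc::clone(existing);
        }
        let rc = Rc::new(t);
        self.set.insert(Rc::clone(&rc));
        rc
    }
}

fn main() {
    let mut interner = Interner::default();
    let a = interner.intern(Tree::Leaf(1));
    let b = interner.intern(Tree::Leaf(1));
    assert!(Rc::ptr_eq(&a, &b)); // deep comparison replaced by pointer check
}
```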
Note that we can't just compare the interned consts themselves in the fast reject, because sometimes `'static` lifetimes in the type are replaced with inference variables (due to canonicalization) on one side but not the other.
A less invasive alternative that I considered was simply avoiding the copies introduced by https://github.com/rust-lang/rust/pull/136180 and comparing the valtrees in-place (see commit: 9e91e50ac5 / perf results: https://github.com/rust-lang/rust/pull/136593#issuecomment-2642303245); however, that was still measurably slower than interning.
There are some minor regressions in secondary benchmarks; these happen due to changes in memory allocations and seem acceptable to me. The crates that make heavy use of valtrees show no significant changes in memory usage.
Rollup of 8 pull requests
Successful merges:
- #134999 (Add cygwin target.)
- #136559 (Resolve named regions when reporting type test failures in NLL)
- #136660 (Use a trait to enforce field validity for union fields + `unsafe` fields + `unsafe<>` binder types)
- #136858 (Parallel-compiler-related cleanup)
- #136881 (cg_llvm: Reduce visibility of all functions in the llvm module)
- #136888 (Always perform discr read for never pattern in EUV)
- #136948 (Split out the `extern_system_varargs` feature)
- #136949 (Fix import in bench for wasm)
r? `@ghost`
`@rustbot` modify labels: rollup
Split out the `extern_system_varargs` feature
After the stabilization PR was opened, `extern "system"` functions were added to `extended_varargs_abi_support`. This raised a number of questions that had not been discussed and were somewhat surprising. It deserves to be considered as its own feature, separate from `extended_varargs_abi_support`.
Tracking issue:
- https://github.com/rust-lang/rust/issues/136946
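For reference, a hypothetical declaration of the kind now gated by the split-out feature (the function name is made up):

```rust
#![feature(extern_system_varargs)]

// A variadic signature under the "system" ABI; previously gated by
// `extended_varargs_abi_support`, now by `extern_system_varargs`.
extern "system" {
    fn dbg_printf(fmt: *const u8, ...);
}
```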
Always perform discr read for never pattern in EUV
Always perform a read of `!` discriminants to ensure that they are captured by closures in the expression use visitor (EUV).
Fixes #136852
r? Nadrieril or reassign
cg_llvm: Reduce visibility of all functions in the llvm module
Next part of #135502
This reduces the visibility of all functions in the `llvm` module to `pub(crate)` and marks the `enzyme_ffi` modules with `#![expect(dead_code)]` (as previously discussed: <https://github.com/rust-lang/rust/pull/135502#discussion_r1915608085>).
r? ``@Zalathar``
Parallel-compiler-related cleanup
I carefully split changes into commits. Commit messages are self-explanatory. Squashing is not recommended.
cc "Parallel Rustc Front-end" https://github.com/rust-lang/rust/issues/113349
r? SparrowLii
``@rustbot`` label: +WG-compiler-parallel
Use a trait to enforce field validity for union fields + `unsafe` fields + `unsafe<>` binder types
This PR introduces a new, internal-only trait called `BikeshedGuaranteedNoDrop`[^1] to faithfully model the field check that used to be implemented manually by `allowed_union_or_unsafe_field`.
942db6782f/compiler/rustc_hir_analysis/src/check/check.rs (L84-L115)
Copying over the doc comment from the trait:
```rust
/// Marker trait for the types that are allowed in union fields, unsafe fields,
/// and unsafe binder types.
///
/// Implemented for:
/// * `&T`, `&mut T` for all `T`,
/// * `ManuallyDrop<T>` for all `T`,
/// * tuples and arrays whose elements implement `BikeshedGuaranteedNoDrop`,
/// * or otherwise, all types that are `Copy`.
///
/// Notably, this doesn't include all trivially-destructible types for semver
/// reasons.
///
/// Bikeshed name for now.
```
As far as I am aware, there's no new behavior being guaranteed by this trait, since it operates the same as the manually implemented check. We could easily rip out this trait and go back to using the manually implemented check for union fields, however using a trait means that this code can be shared by WF for `unsafe<>` binders too. See the last commit.
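To illustrate the rule the trait models, here is a sketch of the long-standing union-field behavior (no new semantics from this PR):

```rust
use std::mem::ManuallyDrop;

// Accepted: references, `ManuallyDrop<T>`, and `Copy` types never have
// drop glue, so they pass the field-validity check.
union Allowed {
    x: u32,
    r: &'static str,
    s: ManuallyDrop<String>,
}

// Rejected: `String` may have drop glue, so this fails to compile.
// union Disallowed {
//     s: String,
// }
```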
The only diagnostic change is that this now produces false negatives for fields that are ill-formed. I don't consider that to be much of a problem, though.
r? oli-obk
[^1]: Please let's not bikeshed this name lol. There's no good name for `ValidForUnsafeFieldsUnsafeBindersAndUnionFields`.
Resolve named regions when reporting type test failures in NLL
Just an improvement tweak to an error message that I broke out of a bigger PR that I had to close lol
Previously it only did integer-ABI things, but this way it does data pointers too. That gives more information in general to the backend, and allows slightly simplifying one of the helpers in slice iterators.
Fix cycle when debug-printing opaque types from RPITIT
Extend #66594 to opaque types from RPITIT.
Before this PR, enabling debug logging like `RUSTC_LOG="[check_type_bounds]"` for code containing RPITIT produced a query cycle involving `explicit_item_bounds`, because pretty-printing an opaque type calls [it](d9a4a47b8b/compiler/rustc_middle/src/ty/print/pretty.rs (L1001)).
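For reference, a minimal sketch of the RPITIT (return-position `impl Trait` in trait) shape involved:

```rust
// Debug-printing the opaque return type of `make` used to re-invoke
// `explicit_item_bounds`, producing the cycle described above.
trait Factory {
    fn make(&self) -> impl Sized;
}
```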
Mark condition/carry bit as clobbered in C-SKY inline assembly
C-SKY's compare instructions and some arithmetic/logical instructions modify the condition/carry bit (C) in the PSR, but there is currently no way to mark it as clobbered in `asm!`.
This PR marks it as clobbered except when [`options(preserves_flags)`](https://doc.rust-lang.org/reference/inline-assembly.html#r-asm.options.supported-options.preserves_flags) is used.
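As an illustrative sketch (C-SKY inline assembly requires the unstable `asm_experimental_arch` feature, and the instruction choice here is just an example): a compare instruction sets C, so the compiler must assume the flag is clobbered unless `options(preserves_flags)` is given.

```rust
#![feature(asm_experimental_arch)]
use core::arch::asm;

fn check_nonzero(x: u32) {
    // `cmpnei` sets the condition/carry bit (C) from the comparison, so C
    // is treated as clobbered here (no `options(preserves_flags)`).
    unsafe {
        asm!("cmpnei {0}, 0", in(reg) x);
    }
}
```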
Refs:
- Section 1.3 "Programming model" and Section 1.3.5 "Condition/carry bit" in CSKY Architecture user_guide:
9f7121f7d4/CSKY%20Architecture%20user_guide.pdf
> Under user mode, condition/carry bit (C) is located in the lowest bit of PSR, and it can be
> accessed and changed by common user instructions. It is the only data bit that can be visited
> under user mode in PSR.

> Condition or carry bit represents the result after one operation. Condition/carry bit can be
> clearly set according to the results of compare instructions or unclearly set as some
> high-precision arithmetic or logical instructions. In addition, special instructions such as
> DEC[GT,LT,NE] and XTRB[0-3] will influence the value of condition/carry bit.
- Register definition in LLVM:
https://github.com/llvm/llvm-project/blob/llvmorg-19.1.0/llvm/lib/Target/CSKY/CSKYRegisterInfo.td#L88
cc ```@Dirreke``` ([target maintainer](aa6f5ab18e/src/doc/rustc/src/platform-support/csky-unknown-linux-gnuabiv2.md (target-maintainers)))
r? ```@Amanieu```
```@rustbot``` label +O-csky +A-inline-assembly
Reject `?Trait` bounds in various places where we unconditionally warned since 1.0
Fixes #135730, fixes #135809
Also a breaking change, so let's see what crater says.
This has been an unconditional warning since *before* 1.0.
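To illustrate the kind of bound affected (a sketch based on the description above; `?Sized` remains the one meaningful relaxation):

```rust
trait MyTrait {}

// Previously an unconditional warning, now rejected: relaxing a default
// bound does nothing for traits other than `Sized`.
// fn f<T: ?MyTrait>(_t: T) {}

// Still accepted: `?Sized` is the supported relaxation.
fn g<T: ?Sized>(_t: &T) {}
```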
Cast allocas to default address space
Pointers to variables all need to be in the same address space for correct compilation. Therefore, ensure that even if an `alloca` is created in a different address space, it is cast to the default address space before its value is used.
This is necessary for the amdgpu target and others where the default address space for `alloca`s is not 0.
For example, the following code compiles incorrectly when the address space is not cast to the default one:
```rust
// `cond` added as a parameter so the example compiles as written.
fn f(cond: bool, p: *const i8 /* addrspace(0) */) -> *const i8 /* addrspace(0) */ {
    let local = 0i8; /* addrspace(5) */
    let res = if cond { p } else { &raw const local };
    res
}
```
results in
```llvm
%local = alloca i8, addrspace(5)
%res = alloca ptr, addrspace(5)

if:
  ; Store 64-bit flat pointer
  store ptr %p, ptr addrspace(5) %res

else:
  ; Store 32-bit scratch pointer
  store ptr addrspace(5) %local, ptr addrspace(5) %res

ret:
  ; Load and return 64-bit flat pointer
  %res.load = load ptr, ptr addrspace(5) %res
  ret ptr %res.load
```
For amdgpu, `addrspace(0)` are 64-bit pointers, `addrspace(5)` are 32-bit pointers.
The above code may store a 32-bit pointer and read it back as a 64-bit pointer, which is obviously wrong and cannot work. Instead, we need to `addrspacecast` `%local` to the default address space (`addrspace(0)`); then we store and load the correct type, as sketched below.
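A sketch of the fixed IR for the `else` branch (hand-written, not actual compiler output): the scratch pointer is cast to the default (flat) address space before being stored, so both branches store a 64-bit flat pointer.

```llvm
else:
  ; Cast the 32-bit scratch pointer to a 64-bit flat pointer first.
  %local.ascast = addrspacecast ptr addrspace(5) %local to ptr
  store ptr %local.ascast, ptr addrspace(5) %res
```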
Tracking issue: #135024