atomic_load intrinsic: use const generic parameter for ordering
We have a gazillion intrinsics for the atomics because we encode the ordering into the intrinsic name rather than making it a parameter. This is particularly bad for those operations that take two orderings. Let's fix that!
This PR only converts `load`, to see if there's any feedback that would fundamentally change the strategy we pursue for the const generic intrinsics.
The first two commits are preparation and could be a separate PR if you prefer.
`@BoxyUwU` -- I hope this is a use of const generics that is unlikely to explode? All we need is a const generic of enum type. We could funnel it through an integer if we had to, but an enum is obviously nicer...
`@bjorn3` it seems like the Cranelift backend entirely ignores the ordering?
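For concreteness, a minimal sketch of the shape this aims for, assuming a const generic over an ordering enum via `adt_const_params` (the names, variants, and placeholder body here are illustrative, not the final in-tree signature):
```
#![feature(adt_const_params)]
use std::marker::ConstParamTy;

// Illustrative ordering enum; the real set of variants may differ.
#[derive(ConstParamTy, PartialEq, Eq)]
pub enum AtomicOrdering {
    Relaxed,
    Acquire,
    SeqCst,
}

// One intrinsic parameterized by the ordering, instead of a separate
// atomic_load_relaxed / atomic_load_acquire / atomic_load_seqcst family.
pub unsafe fn atomic_load<T: Copy, const ORD: AtomicOrdering>(src: *const T) -> T {
    // Placeholder body; in rustc the backend matches on `ORD` during lowering.
    unsafe { src.read() }
}
```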
Lower BinOp::Cmp to llvm.{s,u}cmp.* intrinsics
Lowers `mir::BinOp::Cmp` (`three_way_compare` intrinsic) to the corresponding LLVM `llvm.{s,u}cmp.i8.*` intrinsics.
These are the intrinsics mentioned in https://github.com/rust-lang/rust/pull/118310, which are now available in LLVM 19.
I couldn't find any follow-up PRs/discussions about this, please let me know if I missed something.
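For reference, a small example of surface code that reaches this lowering (in the intrinsic name, the first width is the return type and the second is the operand type):
```
use std::cmp::Ordering;

// `Ord::cmp` on primitive integers goes through the `three_way_compare`
// intrinsic (`mir::BinOp::Cmp`); with this change the signed case can lower
// to `llvm.scmp.i8.i32` and the unsigned case to `llvm.ucmp.i8.i32`.
pub fn signed(a: i32, b: i32) -> Ordering {
    a.cmp(&b)
}

pub fn unsigned(a: u32, b: u32) -> Ordering {
    a.cmp(&b)
}
```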
r? `@scottmcm`
Emit getelementptr inbounds nuw for pointer::add()
Lower `pointer::add` (via `intrinsics::offset` with an unsigned offset) to `getelementptr inbounds nuw` on LLVM versions that support it. This lets LLVM make use of the precondition that the offset addition does not wrap in an unsigned sense. Together with `inbounds`, this also implies that the offset is non-negative.
Fixes https://github.com/rust-lang/rust/issues/137217.
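A minimal example of the pattern that benefits; with this change the `add` below can emit `getelementptr inbounds nuw` rather than plain `inbounds` (exact IR depends on the LLVM version):
```
// `ptr.add(n)` takes an unsigned offset, so LLVM may now assume the byte
// offset does not wrap in an unsigned sense and hence is non-negative.
unsafe fn nth(base: *const u8, n: usize) -> u8 {
    unsafe { *base.add(n) }
}
```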
- For shifts this shrinks the IR by no longer needing an `assume` while still providing the UB information
- Having this on the `i8`→`i1` truncations will hopefully help with some places that have to load `i8`s or pass those in LLVM structs without range information
improve cold_path()
#120370 added a new intrinsic, `cold_path()`, and used it to fix `likely` and `unlikely`.
However, in order to limit scope, the information about cold code paths is only used in 2-target switch instructions. This is sufficient for `likely` and `unlikely`, but limits the usefulness of `cold_path` for idiomatic Rust. For example, code like this:
```
if let Some(x) = y { ... }
```
may generate a 3-target switch:
```
switch y.discriminator:
0 => true branch
1 => false branch
_ => unreachable
```
and therefore marking a branch as cold will have no effect.
This PR improves `cold_path()` to work with arbitrary switch instructions.
Note that for 2-target switches we can use `llvm.expect`, but for multiple targets we need to manually emit branch weights. I checked Clang, and it also emits weights in this situation. Clang's weight calculation is more complex than this PR's, which I believe is mainly because a `switch` in C/C++ can have multiple cases going to the same target.
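For illustration, a sketch of code that benefits, assuming the unstable `std::hint::cold_path` wrapper over the intrinsic (feature name and path as I understand them; details may differ):
```
#![feature(cold_path)]
use std::hint::cold_path;

fn process(y: Option<u32>) -> u32 {
    if let Some(x) = y {
        x + 1
    } else {
        // With this PR, the hint takes effect even though the discriminant
        // check compiles to a 3-target switch rather than a 2-target branch.
        cold_path();
        0
    }
}
```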
Set both `nuw` and `nsw` in slice size calculation
There's an old note in the code to do this, and now that [LLVM-C has an API for it](f0b8ff1251/llvm/include/llvm-c/Core.h (L4403-L4408)), we might as well. And it's been available since what looks like LLVM 17 (de9b6aa341), so it doesn't even need to be conditional.
(There's other places, like `RawVecInner` or `Layout`, that might want to do things like this too, but I'll leave those for a future PR.)
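In surface terms, the computation in question is the slice byte-size multiply; since a valid slice never spans more than `isize::MAX` bytes, the product overflows neither as unsigned nor as signed arithmetic, which is exactly what `nuw` and `nsw` assert (a sketch of the reasoning, not the in-tree code):
```
use std::mem::size_of;

// Valid `&[T]` never exceeds `isize::MAX` bytes, so this multiply can carry
// both `nuw` (no unsigned wrap) and `nsw` (no signed wrap) in the emitted IR.
fn byte_size<T>(s: &[T]) -> usize {
    s.len() * size_of::<T>()
}
```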
Previously it only did integer-ABI things, but this way it does data pointers too. That gives more information in general to the backend, and allows slightly simplifying one of the helpers in slice iterators.
The initial naming of "Abi" was an awful mistake, conveying wrong ideas
about how psABIs worked and even more about what the enum meant.
It was only meant to represent the way the value would be described to
a codegen backend as it was lowered to that intermediate representation.
It was never meant to mean anything about the actual psABI handling!
The conflation arose because LLVM typically associates a certain form
with a certain ABI, but even that association does not hold in the
special cases that actually arise, and IR annotations can further
modify the ABI.
Reframe `rustc_abi::Abi` as the `BackendRepr` of the type, and rename
`BackendRepr::Aggregate` as `BackendRepr::Memory`. Unfortunately, due to
the persistent misunderstandings, this too is now incorrect:
- Scattered ABI-relevant code is entangled with BackendRepr
- We do not always pre-compute a correct BackendRepr that reflects how
we "actually" want this value to be handled, so we leave the backend
interface to also inject various special-cases here
- In some cases `BackendRepr::Memory` is a "real" aggregate, but in
others it is in fact using memory, and in some cases it is a scalar!
Our rustc-to-backend lowering code handles this sort of thing right now.
That will eventually be addressed by lifting duplicated lowering code
to either rustc_codegen_ssa or rustc_target as appropriate.
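For orientation, the rough shape of the renamed enum (payload types stubbed and any further variants elided; the in-tree definition has more detail):
```
pub struct Scalar; // stand-in for the real `rustc_abi::Scalar`

pub enum BackendRepr {
    Scalar(Scalar),
    ScalarPair(Scalar, Scalar),
    Vector { element: Scalar, count: u64 },
    Memory { sized: bool }, // formerly `Abi::Aggregate`
}
```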
llvm: replace some deprecated functions
`LLVMMDStringInContext` and `LLVMMDNodeInContext` are deprecated, replace them with `LLVMMDStringInContext2` and `LLVMMDNodeInContext2`.
Also replace `Value` with `Metadata` in some function signatures for better consistency.
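The shape of the replacement entry points, sketched as Rust FFI declarations (handle types stubbed out here; the real bindings live in `rustc_codegen_llvm` and differ in detail):
```
use std::ffi::c_char;

// Opaque stand-ins for the LLVM-C handle types.
#[repr(C)]
pub struct Context {
    _opaque: [u8; 0],
}
#[repr(C)]
pub struct Metadata {
    _opaque: [u8; 0],
}

extern "C" {
    // Unlike the deprecated `LLVMMDStringInContext`, this returns a
    // `Metadata` handle rather than a `Value`.
    pub fn LLVMMDStringInContext2(
        c: *mut Context,
        s: *const c_char,
        len: usize,
    ) -> *mut Metadata;

    // Likewise takes and returns `Metadata` handles.
    pub fn LLVMMDNodeInContext2(
        c: *mut Context,
        mds: *mut *mut Metadata,
        count: usize,
    ) -> *mut Metadata;
}
```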
Supertraits of `BuilderMethods` are all called `XyzBuilderMethods`.
Supertraits of `CodegenMethods` are all called `XyzMethods`. This commit
changes the latter to `XyzCodegenMethods`, for consistency.
It's a trait that aggregates five other traits. But consider the places
that use it.
- `BuilderMethods`: requires three of the five traits.
- `CodegenMethods`: requires zero(!) of the five traits.
- `BaseTypeMethods`: requires two of the five traits.
- `LayoutTypeMethods`: requires three of the five traits.
- `TypeMembershipMethods`: requires one of the five traits.
This commit just removes it, which makes everything simpler.
It has `Backend` and `Deref` bounds, plus an associated type
`CodegenCx`, and it has a single use. This commit "inlines" it into
`BuilderMethods`, which makes the complicated backend trait situation a
little simpler.
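A reconstruction of the shape being inlined, with a hypothetical name since the trait isn't named above (bounds as described):
```
use std::ops::Deref;

// Hypothetical stand-in for the real backend trait.
trait Backend<'tcx> {}

// The single-use trait: `Backend` and `Deref` bounds plus a `CodegenCx`
// associated type. After this commit, `BuilderMethods` carries these itself.
trait HasCodegenLike<'tcx>: Backend<'tcx> + Deref<Target = Self::CodegenCx> {
    type CodegenCx;
}
```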
These types are currently rejected for `as` casts by the compiler.
Remove this incorrect check and add codegen tests for all conversions
involving these types.
I added `PlaceValue` in 123775, but kept that one line-by-line simple because it touched so many places.
This goes through to add more helpers & docs, and change some `PlaceRef` to `PlaceValue` where the type didn't need to be included.
No behaviour changes.
Stop using LLVM struct types for alloca
The alloca type has no semantic meaning, only the size (and alignment, but we specify it explicitly) matter. Using `[N x i8]` is a more direct way to specify that we want `N` bytes, and avoids relying on LLVM's struct layout. It is likely that a future LLVM version will change to an untyped alloca representation.
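An illustrative example of what changes in the emitted IR (shown in comments; exact names and layout vary by LLVM version and optimization level):
```
#[repr(C)]
pub struct Pair {
    a: u32,
    b: u64,
}

pub fn make() -> Pair {
    // Before: %x = alloca <LLVM struct type for Pair>, align 8
    // After:  %x = alloca [16 x i8], align 8
    let x = Pair { a: 1, b: 2 };
    x
}
```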
Split out from #121577.
r? `@ghost`
I added this back in 111999, but I no longer think it's a good idea:
- It had to get scaled back to only power-of-two things to not break a bunch of targets
- LLVM seems to be getting better at memcpy removal anyway
- Introducing vector instructions has sometimes seemed to make autovectorization worse (115515)
So this removes it from the codegen crates entirely, and uses <https://doc.rust-lang.org/nightly/nightly-rustc/rustc_codegen_ssa/traits/builder/trait.BuilderMethods.html#method.typed_place_copy> rather than direct `memcpy`, so things will still use load/store for immediates.
We already use `Instance` at declaration sites when available to glean
additional information about possible abstractions of the type in use.
This does the same when possible at callsites as well.
The primary purpose of this change is to allow CFI to alter how it
generates type information for indirect calls through `Virtual`
instances.
Making `libcore` decide this is silly; the backend has so much better information about when it's a good idea.
So introduce a new `typed_swap` intrinsic with a fallback body, but replace that implementation for immediates and scalar pairs.
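A sketch of the intrinsic-with-fallback-body pattern this describes (signature and fallback are my reconstruction, not necessarily the in-tree code):
```
// Overridable intrinsic sketch: backends replace this for immediates and
// scalar pairs; all other types keep the generic fallback below.
pub unsafe fn typed_swap<T>(x: *mut T, y: *mut T) {
    // Fallback: delegate to the untyped nonoverlapping swap.
    unsafe { std::ptr::swap_nonoverlapping(x, y, 1) }
}
```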