Extend attribute deduction to determine whether parameters using
indirect pass mode might have their address captured. As with the
deduction of the `readonly` attribute, this information facilitates
memcpy optimizations.
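To make the effect concrete, here is a hedged sketch (not taken from the change itself) of the kind of function this applies to:

```rust
// Hypothetical example: `Big` is too large to pass in registers, so it is
// passed indirectly (by hidden reference). The callee only reads through
// that pointer and never lets its address escape, so attribute deduction
// can mark the parameter as not captured, and LLVM may then elide or merge
// the caller-side memcpy of the temporary.
#[derive(Clone, Copy)]
struct Big([u64; 32]);

fn sum(b: Big) -> u64 {
    // No `&b` ever escapes this function.
    b.0.iter().sum()
}

fn main() {
    let big = Big([1; 32]);
    println!("{}", sum(big));
}
```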
This was a bit more invasive than I had hoped. An alternate
approach would be to add an extra call_intrinsic_with_attrs() that would
have the new-in-this-change signature for call_intrinsic, but this felt
about equivalent and made it a little easier to audit the relevant
callsites of call_intrinsic().
Offload host2
r? `@oli-obk`
A follow-up to my previous GPU host PR. With this, I can (in theory) run a sufficiently simple Rust function on GPUs. I tested it on AMD, where the amdgcn target of rustc causes issues due to address-space casts, which might not be valid. If I (manually) fix them, I can run the generated IR on an AMD GPU. This should conceptually also work on NVIDIA or Intel. I updated the dev-guide accordingly: https://rustc-dev-guide.rust-lang.org/offload/usage.html
I am unhappy with the number of standalone functions in my offload code, so in my second commit I bundled some of the code around two structs, which are Rust versions of the LLVM/Offload structs they represent. The structs themselves only have doc comments. Since I directly lower everything to LLVM IR, I didn't see much value in modelling the struct member variables.
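To give a sense of that shape (names here are hypothetical, not the ones in the PR), such a wrapper is essentially a documented marker type with the related lowering code hung off it:

```rust
/// Hypothetical sketch of a Rust-side counterpart to one of the LLVM/Offload
/// descriptor structs. Its fields are intentionally not modelled, since the
/// concrete layout is lowered straight to LLVM IR; the type mainly groups the
/// related builder code and documents the layout in one place.
struct OffloadEntryDesc;

impl OffloadEntryDesc {
    /// Emit the IR-level global corresponding to this descriptor.
    fn lower(&self) {
        // In the real code, calls into the LLVM builder would live here.
    }
}

fn main() {
    OffloadEntryDesc.lower();
}
```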
Fix ICE on offsetted ZST pointer
I'm not sure this is the *right* fix, but it's simple enough and does roughly what I'd expect. Like with the previous optimization to codegen usize rather than a zero-sized static, there's no guarantee that we continue returning a particular value from the offsetting.
A grep for `const_usize.*align` found the same code copied to rustc_codegen_gcc and cranelift, but a quick skim didn't find other cases of similar 'optimization'. That said, I'm not convinced I caught everything; it's not trivial to search for this.
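For context, a hedged guess at the shape of code involved (not necessarily the reproducer from the linked issue):

```rust
// Hypothetical illustration: `Z` is a zero-sized static, so codegen may
// represent its address as just a suitably aligned usize instead of a real
// allocation; offsetting a pointer derived from it is the kind of thing
// that previously hit the ICE.
struct Zst;
static Z: Zst = Zst;

fn main() {
    let p: *const Zst = &Z;
    let q = p.wrapping_byte_add(8); // offset a pointer to a ZST
    println!("{q:p}");
}
```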
Closes rust-lang/rust#147516
Add vsx register support for ppc inline asm, and implement preserves_flags option
This should address the last(?) missing pieces of inline asm for ppc:
* Explicit VSX register support. ISA 2.06 (POWER7) added a 64x128b register overlay that extends the FPRs to 128b and unifies them with the VMX (AltiVec) registers. Implementation details within gcc/llvm percolate up and require using the `x` template modifier. I have updated the inline asm to implicitly include this for VSX arguments which do not specify it. ~~Support for the gcc codegen backend is still a todo.~~
* Implement the `preserves_flags` option. All ABIs and all ISAs store their flags in `cr`, and the carry bit lives inside `xer`. The other status registers hold sticky or control bits which do not affect branch instructions.
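As a usage sketch (assuming a powerpc64 target; not code from the PR), an asm block that only uses instructions which leave `cr` and `xer` untouched can now promise this to the compiler:

```rust
use std::arch::asm;

// Plain `add` (unlike `add.` or `addc`) writes neither `cr` nor `xer`, so it
// is sound to pass `preserves_flags`, letting the compiler keep flag state
// live across the asm block.
#[cfg(target_arch = "powerpc64")]
fn add_asm(a: u64, b: u64) -> u64 {
    let out: u64;
    unsafe {
        asm!(
            "add {out}, {a}, {b}",
            a = in(reg) a,
            b = in(reg) b,
            out = out(reg) out,
            options(nomem, nostack, preserves_flags),
        );
    }
    out
}
```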
There is some interest in the e500 (powerpcspe) port. Architecturally, it has a very different FP ISA, and includes a SIMD extension called SPE (which is not IBM's Cell SPE). Notably, it does not have altivec/fpr/vsx registers. It also has an SPE accumulator register which its ABI marks as volatile, but I am not sure if the compiler uses it.
Move computation of allocator shim contents to cg_ssa
In the future this should make it easier to use weak symbols for the allocator shim on platforms that properly support weak symbols. And it would allow reusing the allocator shim code for handling default implementations of the upcoming externally implementable items feature on platforms that don't properly support weak symbols.
In addition, to make this possible, the alloc error handler is now handled in a way that makes it possible to avoid using the allocator shim when liballoc is compiled without `no_global_oom_handling`, provided you use `#[alloc_error_handler]`. Previously this was only possible if you avoided liballoc entirely or compiled it with `no_global_oom_handling`. You still need to avoid libstd and define the symbol that indicates that avoiding the allocator shim is unstable.
Where supported, VSX is a 64x128b register set which encompasses
both the floating point and vector registers.
In the type tests, xvsqrtdp is used as it is the only two-argument
VSX opcode supported by all targets on LLVM. If you need to copy
a VSX register, the preferred way is "xxlor xt, xa, xa".
cg_llvm: Use `LLVMDIBuilderCreateGlobalVariableExpression`
- Part of rust-lang/rust#134001
- Follow-up to rust-lang/rust#146763
---
This PR dismantles the somewhat complicated `LLVMRustDIBuilderCreateStaticVariable` function, and replaces it with equivalent calls to `LLVMDIBuilderCreateGlobalVariableExpression` and `LLVMGlobalSetMetadata`.
A key difference is that the new code does not replicate the attempted downcast of `InitVal`. As far as I can tell, those downcasts were actually dead, because `llvm::ConstantInt` and `llvm::ConstantFP` are not subclasses of `llvm::GlobalVariable`. I tried replacing those code paths with fatal errors, and was unable to induce failure in any of the relevant test suites I ran.
I have also confirmed that if the calls to `create_static_variable` are commented out, debuginfo tests will fail, demonstrating some amount of relevant test coverage.
The new `DIBuilder` methods have been added via an extension trait, not as inherent methods, to avoid impeding rust-lang/rust#142897.
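For readers less familiar with the pattern, this is roughly what an extension trait looks like (a minimal, self-contained sketch with made-up names, not the actual rustc code):

```rust
// The new methods live on a separate trait implemented for the builder type,
// so the builder's inherent API is left untouched (the stated motivation is
// to avoid impeding rust-lang/rust#142897).
struct DIBuilder;

trait DIBuilderExt {
    fn create_global_variable_expression(&self, name: &str);
}

impl DIBuilderExt for DIBuilder {
    fn create_global_variable_expression(&self, name: &str) {
        // In the real code this would go through FFI to
        // LLVMDIBuilderCreateGlobalVariableExpression.
        println!("would attach debuginfo to global `{name}`");
    }
}

fn main() {
    DIBuilder.create_global_variable_expression("MY_STATIC");
}
```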
Note that the code in `LLVMRustDIBuilderCreateStaticVariable` that tried to
downcast `InitVal` appears to have been dead, because `llvm::ConstantInt` and
`llvm::ConstantFP` are not subclasses of `llvm::GlobalVariable`.
In the future this should make it easier to use weak symbols for the
allocator shim on platforms that properly support weak symbols. And it
would allow reusing the allocator shim code for handling default
implementations of the upcoming externally implementable items feature
on platforms that don't properly support weak symbols.
Currently it is possible to avoid linking the allocator shim, when
linking rlibs directly as some build systems need to, by defining
__rust_no_alloc_shim_is_unstable_v2. However, this requires liballoc to
be compiled with --cfg no_global_oom_handling, which places huge
restrictions on which functions you can call and makes it impossible to
use libstd. Alternatively, you have to define
__rust_alloc_error_handler and (when using libstd)
__rust_alloc_error_handler_should_panic
using #[rustc_std_internal_symbol]. With this commit you can either use
libstd and define __rust_alloc_error_handler_should_panic, or not use
libstd and use #[alloc_error_handler] instead. Both options are still
unstable, though.
Eventually the alloc_error_handler may either be removed entirely
(though the PR for that has been stale for years now) or we may start
using weak symbols for it instead. For the latter case this commit is a
prerequisite anyway.
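A minimal sketch of the no-libstd option mentioned above (the attribute and its signature are the existing unstable `alloc_error_handler` feature; the crate shape and handler bodies are illustrative):

```rust
// no_std crate that supplies its own allocation-error handler, so the default
// OOM handling from the allocator shim is not needed.
#![feature(alloc_error_handler)]
#![no_std]

extern crate alloc;

use core::alloc::Layout;
use core::panic::PanicInfo;

#[alloc_error_handler]
fn on_oom(_layout: Layout) -> ! {
    // Real code would abort, log over some platform channel, etc.
    loop {}
}

#[panic_handler]
fn panic(_info: &PanicInfo) -> ! {
    loop {}
}
```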
refactor: Remove `LLVMRustInsertPrivateGlobal` and `define_private_global`
`LLVMRustInsertPrivateGlobal` can easily be implemented using the
existing LLVM C API in terms of `LLVMAddGlobal` and `LLVMSetLinkage`,
and `define_private_global` was only used in one place.
Work towards https://github.com/rust-lang/rust/issues/46437
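Conceptually, the replacement is just two llvm-c calls. A hedged, self-contained sketch (declarations simplified to raw pointers; rustc's actual bindings live in `rustc_codegen_llvm` and may differ in shape):

```rust
use std::ffi::{c_char, c_void};

// Mirrors the llvm-c declarations from Core.h, with the opaque ref types
// flattened to *mut c_void for this sketch.
extern "C" {
    fn LLVMAddGlobal(module: *mut c_void, ty: *mut c_void, name: *const c_char) -> *mut c_void;
    fn LLVMSetLinkage(global: *mut c_void, linkage: u32);
}

/// Position of LLVMPrivateLinkage in llvm-c's LLVMLinkage enum (assumed value).
const LLVM_PRIVATE_LINKAGE: u32 = 9;

/// What `define_private_global` boils down to: add the global, mark it private.
pub unsafe fn define_private_global(
    module: *mut c_void,
    ty: *mut c_void,
    name: *const c_char,
) -> *mut c_void {
    let global = LLVMAddGlobal(module, ty, name);
    LLVMSetLinkage(global, LLVM_PRIVATE_LINKAGE);
    global
}
```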
refactor: replace `LLVMRustAtomicLoad/Store` with LLVM built-in functions
This simplifies the code and reduces the burden of maintaining our own wrappers.
Work towards https://github.com/rust-lang/rust/issues/46437
cg_llvm: Consistently import `llvm::Type` and `llvm::Value`
We already have other modules that import these types and other types from `llvm`, so having the re-exports `type_::Type` and `value::Value` just distracts rust-analyzer and results in messier and less-consistent imports.
No functional change.
Introduce debuginfo to statements in MIR
The PR introduces support for debug information within dead statements. Currently, only the reference statement is supported, which is sufficient to fix rust-lang/rust#128081.
I don't modify Stable MIR, as I don't think we need debug information when using it.
This PR represents the debug information for the dead reference statement via `#dbg_value`. For example, `let _foo_b = &foo.b` becomes `#dbg_value(ptr %foo, !22, !DIExpression(DW_OP_plus_uconst, 4, DW_OP_stack_value), !26)`. You can see this here: https://rust.godbolt.org/z/d43js6adv.
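For reference, a self-contained version of that example (the field layout is an assumption; what matters is that `_foo_b` is otherwise dead):

```rust
// With optimizations, the borrow below becomes a dead statement: nothing ever
// reads `_foo_b`. The #dbg_value above still lets a debugger recover it as the
// address of `foo` plus the (assumed 4-byte) offset of `b`.
struct Foo { a: u32, b: u32 }

fn main() {
    let foo = Foo { a: 1, b: 2 };
    let _foo_b = &foo.b;
    println!("{}", foo.a + foo.b);
}
```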
The general principle for handling debug information is to never provide less debug information than the optimized LLVM IR.
The current rules for dropping debug information in this PR are:
- If the LLVM IR cannot represent a reference address, it's replaced with poison or simply dropped. For example, see: https://rust.godbolt.org/z/shGqPec8W. I'm using poison in all such cases now.
- All debuginfo is dropped when merging multiple successor BBs. An example is available here: https://rust.godbolt.org/z/TE1q3Wq6M.
I don't drop debuginfo in `MatchBranchSimplification`, because LLVM also picks just one branch there.
Fix autodiff empty ret regression
closes https://github.com/rust-lang/rust/issues/147144
The two GSoC summer projects caused a bit of churn, which was to be expected, especially since we don't run autodiff in CI yet.
This adds a void return testcase that we should have had anyway, and fixes the regression.
r? `@Zalathar` (Just guessing since I've seen you in a few LLVM PRs and Oli is probably still busy. Feel free to reroll!)
Rollup of 6 pull requests
Successful merges:
- rust-lang/rust#143069 (Add fast-path for accessing the current thread id)
- rust-lang/rust#146518 (Improve the documentation around `ZERO_AR_DATE`)
- rust-lang/rust#146596 (Add a dummy codegen backend)
- rust-lang/rust#146617 (Don’t suggest foreign `doc(hidden)` types in "the following other types implement trait" diagnostics)
- rust-lang/rust#146635 (cg_llvm: Stop using `as_c_char_ptr` for coverage-related bindings)
- rust-lang/rust#147184 (Fix the bevy implied bounds hack for the next solver)
r? `@ghost`
`@rustbot` modify labels: rollup