intrinsics.fmuladdf{16,32,64,128}: expose llvm.fmuladd.* semantics

Add intrinsics `fmuladd{f16,f32,f64,f128}`. These compute `(a * b) +
c`, fused if the code generator determines that (i) the target
instruction set has support for a fused operation, and (ii) the fused
operation is more efficient than the equivalent, separate pair of
`mul` and `add` instructions.

https://llvm.org/docs/LangRef.html#llvm-fmuladd-intrinsic
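
As a usage sketch (assuming the intrinsics are reachable as
`core::intrinsics::fmuladd*` behind the `core_intrinsics` feature and
callable as `unsafe fn`s, the usual convention for intrinsics; those
details are assumptions, not part of this commit):

```rust
#![feature(core_intrinsics)]

fn main() {
    let (a, b, c) = (0.1_f64, 10.0_f64, -1.0_f64);
    // The backend may emit a single fused multiply-add (one rounding)
    // or a separate mul and add (two roundings), whichever it judges
    // faster for the target, so the low bits of the result can vary.
    let r = unsafe { core::intrinsics::fmuladdf64(a, b, c) };
    println!("fmuladd(0.1, 10.0, -1.0) = {r:e}");
}
```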

Miri support is included for `f32` and `f64`.

codegen_cranelift lowers these intrinsics to the `fma` function from
libc, which is numerically correct but lacks the desired performance
semantics (fuse only when profitable). I think supporting this
properly requires an update to Cranelift to expose a suitable
instruction in its IR.

I have not tested with codegen_gcc, but it should behave the same
way (using `fma` from libc).
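
Both lowerings are acceptable because `llvm.fmuladd.*` permits either
evaluation; fused and unfused forms round differently, so results may
differ in the last bit. A small demonstration of that difference on
stable Rust, using `f64::mul_add` (which always fuses) against the
plain unfused expression:

```rust
fn main() {
    let (a, b, c) = (0.1_f64, 10.0_f64, -1.0_f64);
    // Fused: the exact product a*b feeds the addition, with a single
    // rounding at the end; the result is 2^-54 (about 5.55e-17).
    let fused = a.mul_add(b, c);
    // Unfused: a*b first rounds to exactly 1.0, so the addition
    // yields exactly 0.0.
    let unfused = a * b + c;
    assert_ne!(fused, unfused);
    println!("fused = {fused:e}, unfused = {unfused:e}");
}
```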

Author: Jed Brown
Date:   2024-01-05 21:04:41 -07:00
parent 01e2fff90c
commit 0d8a978e8a
11 changed files with 222 additions and 1 deletions

@@ -357,6 +357,19 @@ pub fn check_intrinsic_type(
                 (0, 0, vec![tcx.types.f128, tcx.types.f128, tcx.types.f128], tcx.types.f128)
             }
+            sym::fmuladdf16 => {
+                (0, 0, vec![tcx.types.f16, tcx.types.f16, tcx.types.f16], tcx.types.f16)
+            }
+            sym::fmuladdf32 => {
+                (0, 0, vec![tcx.types.f32, tcx.types.f32, tcx.types.f32], tcx.types.f32)
+            }
+            sym::fmuladdf64 => {
+                (0, 0, vec![tcx.types.f64, tcx.types.f64, tcx.types.f64], tcx.types.f64)
+            }
+            sym::fmuladdf128 => {
+                (0, 0, vec![tcx.types.f128, tcx.types.f128, tcx.types.f128], tcx.types.f128)
+            }
             sym::fabsf16 => (0, 0, vec![tcx.types.f16], tcx.types.f16),
             sym::fabsf32 => (0, 0, vec![tcx.types.f32], tcx.types.f32),
             sym::fabsf64 => (0, 0, vec![tcx.types.f64], tcx.types.f64),
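
For reference, the library-side declarations would look roughly like
this (a sketch in the `extern "rust-intrinsic"` style that was current
at the time; the exact attributes and module placement in the real
commit may differ):

```rust
#![feature(intrinsics)]

// Hypothetical declarations matching the signatures type-checked
// above; functions in an `extern "rust-intrinsic"` block are
// implicitly unsafe to call.
extern "rust-intrinsic" {
    fn fmuladdf32(a: f32, b: f32, c: f32) -> f32;
    fn fmuladdf64(a: f64, b: f64, c: f64) -> f64;
}
```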