intrinsics.fmuladdf{16,32,64,128}: expose llvm.fmuladd.* semantics
Add intrinsics `fmuladd{f16,f32,f64,f128}`. These compute `(a * b) +
c`, to be fused if the code generator determines that (i) the target
instruction set supports a fused operation, and (ii) the fused
operation is more efficient than the equivalent, separate pair of
`mul` and `add` instructions.
https://llvm.org/docs/LangRef.html#llvm-fmuladd-intrinsic
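As a usage illustration, here is a minimal nightly sketch; the
`core_intrinsics` feature gate and the `unsafe` call follow the usual
conventions for unstable intrinsics and are assumptions of this
sketch, not something the commit itself specifies:

    #![feature(core_intrinsics)]
    use std::intrinsics::fmuladdf32;

    fn main() {
        let (a, b, c) = (2.0_f32, 3.0_f32, 4.0_f32);
        // May lower to a single fused multiply-add (e.g. vfmadd* on
        // x86-64 with FMA) or to a separate fmul/fadd pair, whichever
        // the backend decides is more efficient.
        let r = unsafe { fmuladdf32(a, b, c) };
        assert_eq!(r, 10.0); // 2 * 3 + 4 is exact either way
    }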
Miri support is included for `f32` and `f64`.
codegen_cranelift uses the `fma` function from libc, which is a
correct implementation but lacks the desired performance semantics. I
think this requires an update to Cranelift to expose a suitable
instruction in its IR.

I have not tested with codegen_gcc, but it should behave the same way
(using `fma` from libc).
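Note that `fmuladd` permits either lowering, so an always-fused
fallback like libc's `fma` is a conforming result even though it can
differ from the unfused `mul` + `add` (one rounding step instead of
two). A sketch of that difference, using the stable, always-fused
`f64::mul_add` to stand in for the fused case:

    fn main() {
        let (a, b, c) = (0.1_f64, 10.0_f64, -1.0_f64);
        // Unfused: `a * b` rounds to exactly 1.0, so the sum is 0.0.
        let unfused = a * b + c;
        // Fused (`f64::mul_add` rounds only once): the exact product
        // 1.0000000000000000555... survives the addition, leaving the
        // representation error of 0.1 (about 5.55e-17).
        let fused = a.mul_add(b, c);
        assert_eq!(unfused, 0.0);
        assert_ne!(fused, 0.0); // `fmuladd` may return either value
    }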
@@ -884,6 +884,11 @@ impl<'ll> CodegenCx<'ll, '_> {
         ifn!("llvm.fma.f64", fn(t_f64, t_f64, t_f64) -> t_f64);
         ifn!("llvm.fma.f128", fn(t_f128, t_f128, t_f128) -> t_f128);

+        ifn!("llvm.fmuladd.f16", fn(t_f16, t_f16, t_f16) -> t_f16);
+        ifn!("llvm.fmuladd.f32", fn(t_f32, t_f32, t_f32) -> t_f32);
+        ifn!("llvm.fmuladd.f64", fn(t_f64, t_f64, t_f64) -> t_f64);
+        ifn!("llvm.fmuladd.f128", fn(t_f128, t_f128, t_f128) -> t_f128);
+
         ifn!("llvm.fabs.f16", fn(t_f16) -> t_f16);
         ifn!("llvm.fabs.f32", fn(t_f32) -> t_f32);
         ifn!("llvm.fabs.f64", fn(t_f64) -> t_f64);