Commit Graph

2485 Commits

Author SHA1 Message Date
Jake Goulding
1f96ac2d53 Typos in cmpistr* functions (#357) 2018-03-08 09:53:44 -06:00
Jake Goulding
77f9754f15 Subtract typo 2018-03-08 16:18:49 +01:00
gnzlbg
afca7f8d16 Migrate to rustfmt-preview and require rustfmt builds to pass (#353)
* migrate to rustfmt-preview and require rustfmt to pass

* reformat with rustfmt-preview
2018-03-08 09:09:24 -06:00
gnzlbg
26fd3bb5a9 better error messages for target-feature detection macros (#352)
Better error messages for target-feature detection macros
2018-03-08 09:59:21 +01:00
Alex Crichton
d7b42faaa3 Add cfg! clauses to detection macro (#351)
This way if the feature is statically detected then it'll be expanded to `true`

Closes #349
2018-03-07 10:28:12 -06:00
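The static-detection fast path described in this commit can be modeled with a small sketch (not the real macro — `runtime_probe` below is a hypothetical stand-in for the cached CPUID check): when the feature is enabled at compile time, `cfg!` folds the whole check to `true` and the runtime probe is never consulted.

```rust
// Simplified model of the detection macro's expansion. `runtime_probe`
// is a hypothetical placeholder for the real CPUID-backed cache lookup.
fn runtime_probe() -> bool {
    // placeholder: the real macro reads a cached CPUID result
    false
}

fn sse2_enabled() -> bool {
    if cfg!(target_feature = "sse2") {
        return true; // statically known, zero runtime cost
    }
    runtime_probe()
}

fn main() {
    println!("sse2 enabled: {}", sse2_enabled());
}
```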
Alex Crichton
56af498e9e Rename is_target_feature_detected! (#346)
This commit renames the `is_target_feature_detected!` macro to have different
names depending on the platform. For example:

* `is_x86_feature_detected!`
* `is_arm_feature_detected!`
* `is_aarch64_feature_detected!`
* `is_powerpc64_feature_detected!`

Each macro already has a platform-specific albeit similar interface. Currently,
though, each macro takes a different set of strings so the hope is that like
with the name of the architecture in the module we can signal the dangers of
using the macro in a platform-agnostic context.

One liberty taken with the macro, though, is that on both the x86 and
x86_64 architectures it is named `is_x86_feature_detected!` rather than also
having an `is_x86_64_feature_detected!`. This mirrors how all the
intrinsics are named the same on x86/x86_64.

2018-03-07 09:46:16 -06:00
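The renamed per-architecture macro is typically used for runtime dispatch, along these lines (a hedged sketch — `sum_sse2` is an illustrative helper, not part of the crate's API, and its body is plain scalar code standing in for real SIMD work):

```rust
// Feature-gated path, only meaningful on x86/x86_64.
#[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
#[target_feature(enable = "sse2")]
unsafe fn sum_sse2(xs: &[u32]) -> u32 {
    // stand-in body; a real implementation would use SSE2 intrinsics
    xs.iter().sum()
}

fn sum(xs: &[u32]) -> u32 {
    #[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
    {
        if is_x86_feature_detected!("sse2") {
            // Sound to call: the feature was just verified at runtime.
            return unsafe { sum_sse2(xs) };
        }
    }
    // portable scalar fallback on every other architecture
    xs.iter().sum()
}

fn main() {
    println!("sum = {}", sum(&[1, 2, 3, 4]));
}
```

Because the macro only exists on x86/x86_64, the invocation has to sit behind a `cfg` so the same source compiles on ARM and friends — which is exactly the platform-agnostic danger the rename is meant to signal.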
gnzlbg
be0b7f41fc adds AArch64's {s,u,f}{min,max}{v,p} and ARM's {vmov}{n,l} (#345)
* adds {s,u,f}{min,max}{v,p} AArch64 intrinsics
* adds {vmov}{n,l} ARM intrinsics

Closes #314.
2018-03-07 09:31:14 -06:00
gnzlbg
237ec908f1 remove unnecessary println statements (#343) 2018-03-06 11:51:28 -06:00
gnzlbg
548290b801 Prepare portable packed vector types for RFCs (#338)
* Prepare portable packed SIMD vector types for RFCs

This commit cleans up the implementation of the Portable Packed Vector Types
(PPVT), adds some new features, and makes some breaking changes.

The implementation is moved to `coresimd/src/ppvt` (they are
still exposed via `coresimd::simd`).

As before, the vector types of a certain width are implemented in the `v{width}`
submodules. The `macros.rs` file has been rewritten as an `api` module that
exposes the macros to implement each API.

It should now hopefully be really clear where each API is implemented, and which types
implement these APIs. It should also now be really clear which APIs are tested and how.

- boolean vectors of the form `b{element_size}x{number_of_lanes}`.
- reductions: arithmetic, bitwise, min/max, and boolean - only the facade,
  and a naive working implementation. These need to be implemented
  as `llvm.experimental.vector.reduction.{...}` but this needs rustc support first.
- FromBits trait analogous to `{f32,f64}::from_bits` that performs "safe" transmutes.
  Instead of writing `From::from`/`x.into()` (see below for breaking changes) now you write
  `FromBits::from_bits`/`x.into_bits()`.
- portable vector types implement `Default` and `Hash`
- tests for all portable vector types and all portable operations (~2000 new tests).
- (hopefully) comprehensive implementation of bitwise transmutes and lane-wise
  casts (previously, `From` and the `.as_...` methods were implemented "when they were needed").
- documentation for PPVT (not great yet, but better than nothing)
- conversions/transmutes from/to x86 architecture specific vector types

- `store/load` API has been replaced with `{store,load}_{aligned,unaligned}`
- `eq,ne,lt,le,gt,ge` APIs now return boolean vectors
- The `.as_{...}` methods have been removed. Lane-wise casts are now performed by `From`.
- `From` now perform casts (see above). It used to perform bitwise transmutes.
- `simd` vectors' `replace` method's result is now `#[must_use]`.

* enable backtrace and nocapture

* unalign load/store fail test by 1 byte

* update arm and aarch64 neon modules

* fix arm example

* fmt

* clippy and read example that rustfmt swallowed

* reductions should take self

* rename add/mul -> sum/product; delete other arith reductions

* clean up fmt::LowerHex impl

* revert incorrect doc change

* make Hash equivalent to [T; lanes()]

* use travis_wait to increase timeout limit to 20 minutes

* remove travis_wait; did not help

* implement reductions on top of the llvm.experimental.vector.reduction intrinsics

* implement cmp for boolean vectors

* add missing eq impl file

* implement default

* rename llvm intrinsics

* fix aarch64 example error

* replace #[inline(always)] with #[inline]

* remove cargo clean from run.sh

* workaround broken product in aarch64

* make boolean vector constructors const fn

* fix more reductions on aarch64

* fix min/max reductions on aarch64

* remove whitespace

* remove all boolean vector types except for b8xN

* use a sum reduction fallback on aarch64

* disable llvm add reduction for aarch64

* rename the llvm intrinsics to use llvm names

* remove old macros.rs file
2018-03-05 14:32:35 -06:00
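The `From` vs `FromBits` split introduced above has a scalar analogue in the standard library, which this sketch uses to illustrate the distinction: `From`/`as` performs a value-preserving (lane-wise) cast, while `FromBits`/`into_bits` reinterprets the raw bits, mirroring `{f32,f64}::{to_bits,from_bits}`.

```rust
fn main() {
    let x: f32 = 1.0;
    let cast = x as u32;    // lane-wise cast of the value: 1
    let bits = x.to_bits(); // reinterpretation of the raw bits
    assert_eq!(cast, 1);
    assert_eq!(bits, 0x3f80_0000); // IEEE-754 encoding of 1.0f32
    assert_eq!(f32::from_bits(bits), 1.0);
    println!("cast = {cast}, bits = {bits:#010x}");
}
```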
Vincent Esche
4e74e2e4e2 Fixed typo in docs header 2018-03-05 10:02:42 +01:00
gnzlbg
f1d8a88267 Run-time feature detection for new AArch64 features (#339)
* aarch64 run-time feature detection for latest whitelisted features

* dump new aarch64 features in the run-time detection tests

* add some comments

* remove old code
2018-03-02 21:27:55 -06:00
Alex Crichton
708cc9d9b8 Rename bmi to bmi1
In accordance with rust-lang/rust#48565
2018-03-02 07:02:22 -08:00
Alex Crichton
a6eefb6e29 Remove some dead links 2018-02-27 12:49:48 -08:00
Alex Crichton
87566b578b Another minor fix for libstd tests 2018-02-27 12:47:24 -08:00
Alex Crichton
94d8a193c4 Tweak doctests to pass in libstd as well (#335)
The boilerplate just gets more and more ugly...
2018-02-27 13:13:22 -06:00
Alex Crichton
217f89bc4f Reorganize the x86/x86_64 intrinsic folders (#334)
The public API isn't changing in this commit but the internal organization is
being rejiggered. Instead of `x86/$subtarget/$feature.rs` the folders are
changed to `coresimd/x86/$feature.rs` and `coresimd/x86_64/$feature.rs`. The
`arch::x86_64` then reexports both the contents of the `x86` module and the
`x86_64` module.
2018-02-27 08:41:07 -06:00
Artyom Pavlov
aa4cef7723 Implemented rdrand and rdseed intrinsics (#326)
* implemented rdrand and rdseed intrinsics

* added "unsigned short*" case

* moved rdrand from i686 to x86_64

* 64 bit rdrand functions in x86_64, 16 and 32 in i686
2018-02-27 07:58:08 -06:00
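A hedged usage sketch for the hardware RNG intrinsics this commit adds, written against today's `std::arch` names: RDRAND is not present on all x86_64 CPUs, so the call is guarded by runtime feature detection, and the intrinsic's status return is checked.

```rust
fn hardware_random_u64() -> Option<u64> {
    #[cfg(target_arch = "x86_64")]
    {
        if is_x86_feature_detected!("rdrand") {
            let mut v: u64 = 0;
            // _rdrand64_step returns 1 on success, 0 if no entropy was ready.
            let ok = unsafe { std::arch::x86_64::_rdrand64_step(&mut v) };
            if ok == 1 {
                return Some(v);
            }
        }
    }
    None
}

fn main() {
    println!("hardware random: {:?}", hardware_random_u64());
}
```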
Alex Crichton
5636900b03 Reimplement _xgetbv with inline assembly (#333)
Looks like LLVM 6 may have removed the intrinsic, and this implementation is
modeled after clang's.
2018-02-27 07:52:10 -06:00
Alex Crichton
560fe20b61 Beef up documentation of arch module (#331)
This commit reorganizes some documentation for inclusion into the standard
library, moving the bulk of the docs to the `arch` module and away from the
crate root which won't actually be the end-user interface.
2018-02-27 07:24:59 -06:00
Alex Crichton
3579853e20 Fix the implementation of _mm256_alignr_epi8 (#330)
This seems likely to have mostly just been a copy/paste error, so this
re-reviews the intrinsic and aligns it with the implementation in
clang.

Closes #328
2018-02-25 12:37:15 -06:00
Alex Crichton
746ab07521 Compile examples on CI (#329)
Make sure the top-level `examples` folder is registered with the
`stdsimd` crate!
2018-02-25 12:37:08 -06:00
Alex Crichton
db5648e0e4 Start adding stability attributes (#327)
To integrate into the standard library this crate needs *at least* a
stability attribute on the macro itself, but this commit also begins by
adding unstable attributes to the exported modules as well. This should
help everything be unstable-by-default and we can start iterating from
there in the standard library.

This commit also does away with the `coresimd::vendor` module internal
implementation detail, instead directly creating the `arch` module to
allow easily documenting it in this crate and having the docs show up in
rust-lang/rust.
2018-02-24 14:11:09 +09:00
Artyom Pavlov
145c52dbf9 CLMUL instruction set (#320)
* added pclmul

* added docs

* pclmul -> pclmulqdq

* imm8: u8 -> imm8: i32

* return changes to stdsimd/arch/detect/x86.rs

* error fixes

* added rustc_args_required_const

* fixed assert_instr for _mm_clmulepi64_si128

* fixed pclmul assert_instr tests
2018-02-18 15:55:57 +09:00
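A hedged sketch of the carry-less multiply intrinsic this commit wires up, via today's `std::arch` surface. Carry-less multiplication is polynomial multiplication over GF(2), so `0b10 * 0b11 = 0b110` — that is, 2 clmul 3 = 6, with no carries propagated.

```rust
fn clmul_low(a: u64, b: u64) -> Option<u64> {
    #[cfg(target_arch = "x86_64")]
    {
        if is_x86_feature_detected!("pclmulqdq") {
            use std::arch::x86_64::*;
            unsafe {
                let va = _mm_set_epi64x(0, a as i64);
                let vb = _mm_set_epi64x(0, b as i64);
                // imm8 = 0 selects the low 64-bit halves of both operands
                let r = _mm_clmulepi64_si128::<0>(va, vb);
                let mut out = [0u64; 2];
                _mm_storeu_si128(out.as_mut_ptr() as *mut __m128i, r);
                return Some(out[0]);
            }
        }
    }
    let _ = (a, b);
    None
}

fn main() {
    if let Some(r) = clmul_low(2, 3) {
        assert_eq!(r, 6);
        println!("2 clmul 3 = {r}");
    }
}
```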
Alex Crichton
39b5ec91ae Reorganize and refactor source tree (#324)
With RFC 2325 looking close to being accepted, I took a crack at
reorganizing this repository to being more amenable for inclusion in
libstd/libcore. My current plan is to add stdsimd as a submodule in
rust-lang/rust and then use `#[path]` to include the modules directly
into libstd/libcore.

Before this commit, however, the source code of coresimd/stdsimd
themselves were not quite ready for this. Imports wouldn't compile for
one reason or another, and the organization was also different than the
RFC itself!

In addition to moving a lot of files around, this commit has the
following major changes:

* The `cfg_feature_enabled!` macro is now renamed to
  `is_target_feature_detected!`
* The `vendor` module is now called `arch`.
* Under the `arch` module is a suite of modules like `x86`, `x86_64`,
  etc. One per `cfg!(target_arch)`.
* The `is_target_feature_detected!` macro was removed from coresimd.
  Unfortunately libcore has no ability to export unstable macros, so for
  now all feature detection is canonicalized in stdsimd.

The `coresimd` and `stdsimd` crates have been updated to the planned
organization in RFC 2325 as well. The runtime bits saw the largest
amount of refactoring, seeing a good deal of simplification without the
core/std split.
2018-02-18 10:07:35 +09:00
Alex Crichton
d097221faf Add #[rustc_args_required_const] annotations (#319)
Support isn't quite in nightly to make this work yet, but using a local build
this gets everything passing again! This also implements native verification
that we have the attribute in the right place.
2018-02-11 10:24:33 -06:00
Alex Crichton
354e96ba1b Fix instruction assertions on LLVM 6 (#321)
Looks like some instructions changed here and there, so this updates the
assertions (no behavior appears to have changed though)
2018-02-11 10:04:53 -06:00
Ruud van Asseldonk
ee249f766c Add x86 AES-NI vendor intrinsics (#311)
* Define _mm_aes*_si128 intrinsics

* Add tests for _mm_aes*_si128 intrinsics

These tests are based on the examples in Microsoft's documentation.
Same input should result in the same output in any case.

* Constify imm8 argument of aeskeygenassist

* Do not rely on internal layout of __m128

Use _mm_set_epi64x instead to construct constants.

* Move AES vendor intrinsics from x86_64 to i686

Although i686 does not have the AES New Instructions, making code
compatible across x86 and x86_64 tends to be easier if the intrinsics
are available everywhere.

* Pass constant for assert_instr(aeskeygenassist)

Pass a particular value for the disassembly test, so we end up with one
instruction, instead of the match arm that matches on all 256 values.

* Make aeskeygenassist imm8 argument i32, not u8

Intel documentation specifies it as an "8-bit round constant", but then
goes on to give it a type "const int", which translates to i32 in Rust.
The test that verifies the Rust signatures against Intel documentation
failed on this.

For now we will replicate the C API verbatim. Even when Rust could have
a more accurate type signature that makes passing values more than 8
bits impossible, rather than silently mapping out-of-range values to
255.

* Reflow doc comment as proposed by rustfmt

* Add module doc comment for i686::aes
2018-02-05 11:07:40 -06:00
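A hedged sketch exercising one of the AES-NI intrinsics defined in this commit, through today's `std::arch` names. With an all-zero input and round constant 0, AESKEYGENASSIST reduces to the AES S-box applied to zero bytes, and S(0x00) = 0x63, so every output byte should be 0x63.

```rust
fn keygenassist_zero() -> Option<[u8; 16]> {
    #[cfg(target_arch = "x86_64")]
    {
        if is_x86_feature_detected!("aes") {
            use std::arch::x86_64::*;
            unsafe {
                // imm8 = 0: the round constant xored in is zero
                let r = _mm_aeskeygenassist_si128::<0>(_mm_setzero_si128());
                let mut out = [0u8; 16];
                _mm_storeu_si128(out.as_mut_ptr() as *mut __m128i, r);
                return Some(out);
            }
        }
    }
    None
}

fn main() {
    if let Some(out) = keygenassist_zero() {
        assert_eq!(out, [0x63; 16]);
        println!("ok: {out:02x?}");
    }
}
```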
Alex Crichton
be41ce3369 Remove known exceptions to Intel's signatures (#317)
We had a few lingering intrinsics which were getting some special
treatment for having different types than what Intel specified. This
commit removes all these cases and reverts to precisely what upstream
Intel mentions (even if it doesn't make the most sense in some cases)
2018-02-05 10:04:46 -06:00
Andre Bogus
8b676746f1 remove spurious newline 2018-02-05 10:28:55 +01:00
Andre Bogus
dc650c9c8e move bswap and tsc to i386
This fixes #313
2018-02-05 10:28:55 +01:00
gnzlbg
4d545e713f Run-time feature detection for AES-NI and TSC (#312)
* add runtime detection for aes-ni

* fmtting and fixing some clippy issues

* add runtime-feature detection for tsc

* fix remaining clippy issues

* manually fix some formatting issues

* increase feature cache size

* use 2x AtomicU32 on 32-bit targets as the feature cache

* use the new cache in stdsimd
2018-02-02 09:08:27 -06:00
Alex Crichton
dc587cc46c Comment that the rdtsc intrinsics should be ok
Some more info should be in #308, and otherwise ...

Closes #308
2018-01-29 08:36:10 -08:00
Alex Crichton
0e57eefffe Note that some intrinsics are manually verified
Closes #307
2018-01-29 08:32:13 -08:00
Alex Crichton
d1acec0b39 Refactor the x86 verify implementation
* Support instructions defined multiple times in the XML (just match one of
  them)
* Support AVX-512 in more locations
* Add support for printing lists of missing intrinsics
* Add a few constants to hopefully tweak the program easily
2018-01-29 08:27:46 -08:00
Alex Crichton
912ad06b3b Update CONTRIBUTING.md with recent changes 2018-01-29 07:17:14 -08:00
Alex Crichton
5b445c5cac Update doc generation with recent developments 2018-01-28 22:00:13 -08:00
Alex Crichton
0f5b382dd6 Enable verification of more intrinsics (#309)
Looks like intrinsics that weren't listing a target feature were accidentally
omitted from the verification logic, so this commit fixes that!

Along the way I've ended up filing #307 and #308 for detected inconsistencies.
2018-01-28 23:59:44 -06:00
Alex Crichton
82acb0c953 Move from #[inline(always)] to #[inline] (#306)
* Move from #[inline(always)] to #[inline]

This commit blanket changes all `#[inline(always)]` annotations to `#[inline]`.
Fear not though, this should not be a regression! To clarify, this
change is made for correctness, to ensure that we don't hit stray LLVM errors.

Most of the LLVM intrinsics and various LLVM functions we actually lower down to
only work correctly if they are invoked from a function with an appropriate
target feature set. For example if we were to out-of-the-blue invoke an AVX
intrinsic then we get a [codegen error][avx-error]. This error comes about
because the surrounding function isn't enabling the AVX feature. Now in general
we don't have a lot of control over how this crate is consumed by downstream
crates. It'd be a pretty bad experience if every mistake showed up as a scary,
un-debuggable codegen error in LLVM!

On the other side of this issue *we* as the invokers of these intrinsics are
"doing the right thing". All our functions in this crate are tagged
appropriately with target features to be codegen'd correctly. Indeed we have
plenty of tests asserting that we can codegen everything across multiple
platforms!

The error comes about here because of precisely the `#[inline(always)]`
attribute. Typically LLVM *won't* inline functions across target feature sets.
For example if you have a normal function which calls a function that enables
AVX2, then the callee, no matter how small, won't be inlined into the caller.
This is done for correctness (register preserving and all that) but is also how
these codegen errors are prevented in practice.

Now we as stdsimd, however, are currently tagging all functions with "always
inline this, no matter what". That ends up, apparently, bypassing the logic of
"is this even possible to inline". In turn we start inlining things like AVX
intrinsics into functions that can't actually call AVX intrinsics, creating
codegen errors at compile time.

So with all that motivation, this commit switches to the normal inline hints for
these functions, just `#[inline]`, instead of `#[inline(always)]`. Now for the
stdsimd crate it is absolutely critical that all functions are inlined to have
good performance. Using `#[inline]`, however, shouldn't hamper that!

The compiler will recognize the `#[inline]` attribute and make sure that each of
these functions is *candidate* to being inlined into any and all downstream
codegen units. (aka if we were missing `#[inline]` then LLVM wouldn't even know
the definition to inline most of the time). After that, though, we're relying on
LLVM to naturally inline these functions as opposed to forcing it to do so.
Typically, however, these intrinsics are one-liners and are trivially
inlineable, so I'd imagine that LLVM will go ahead and inline everything all
over the place.

All in all this change is brought about by #253 which noticed various codegen
errors. I originally thought it was due to ABI issues but turned out to be
wrong! (although that was also a bug which has since been resolved). In any case
after this change I was able to get the example in #253 to execute in both
release and debug mode.

Closes #253

[avx-error]: https://play.rust-lang.org/?gist=50cb08f1e2242e22109a6d69318bd112&version=nightly

* Add inline(always) on eflags intrinsics

Their ABI actually relies on it!

* Leave #[inline(always)] on portable types

They're causing test failures on ARM, let's investigate later.
2018-01-28 23:40:39 -06:00
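The pattern this commit settles on can be sketched as follows (a hedged illustration, not the crate's actual code — `double_lanes` and `demo` are hypothetical names): intrinsic wrappers carry `#[target_feature]` plus plain `#[inline]`, leaving LLVM free to decline inlining across mismatched feature sets instead of being forced into an invalid inline by `#[inline(always)]`.

```rust
#[cfg(target_arch = "x86_64")]
#[target_feature(enable = "sse2")]
#[inline] // candidate for cross-crate inlining, but not forced
unsafe fn double_lanes(x: std::arch::x86_64::__m128i) -> std::arch::x86_64::__m128i {
    std::arch::x86_64::_mm_add_epi32(x, x)
}

fn demo() -> Option<[i32; 4]> {
    #[cfg(target_arch = "x86_64")]
    {
        if is_x86_feature_detected!("sse2") {
            use std::arch::x86_64::*;
            unsafe {
                let r = double_lanes(_mm_set1_epi32(3));
                let mut out = [0i32; 4];
                _mm_storeu_si128(out.as_mut_ptr() as *mut __m128i, r);
                return Some(out);
            }
        }
    }
    None
}

fn main() {
    println!("demo: {:?}", demo());
}
```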
Alex Crichton
a2403de290 Don't count nop instructions after functions
Looks like disassemblers will fill this in and/or LLVM inserts them for
alignment, not useful to us in calculations.
2018-01-28 20:35:37 -08:00
Alex Crichton
263937a499 Fix a lint warning on aarch64 2018-01-28 20:32:11 -08:00
Alex Crichton
11f7b7e38e Verify intrinsics don't leak to i686 (#305)
Some intrinsics take `i64` or `u64` arguments which typically means that they're
using 64-bit registers and aren't actually available on x86. This commit adds a
check to stdsimd-verify to assert this and moves around some intrinsics that I
believe should only be available on x86_64.

This commit was checked in many places against gcc/clang/MSVC using godbolt.org
to ensure that we're agreeing with what other compilers are doing.

Closes #304
2018-01-28 22:31:31 -06:00
Alex Crichton
aefb22c51e Don't distinguish between i586/i686 (#301)
This was historically done as the contents of the `i686` module wouldn't
actually compile on i586 for various reasons. I believe I've tracked this down
to #300 where LLVM refuses to compile a function using the `x86_mmx` type
without actually enabling the `mmx` feature (sort of reasonably so!). This
commit will now compile in both the `i586` and `i686` modules of this crate into
the `i586-unknown-linux-gnu` target, and the relevant functions now also enable
the `mmx` feature if they're using the `__m64` type.

I believe this is uncovering a more widespread problem where the `__m64` isn't
usable outside the context of `mmx`-enabled functions. The i686 and x86_64
targets have this feature enabled by default which is why it's worked there, but
they're not enabled for the i586 target. We'll probably want to consider this
when stabilizing!
2018-01-28 22:31:22 -06:00
Alex Crichton
e0aed0fffc Fix tests for a future nightly (#297)
In rust-lang/rust#47743 the SIMD types in the Rust ABI are being switched to
getting passed through memory unconditionally rather than through LLVM's
immediate registers. This means that a bunch of our assert_instr invocations
started breaking as LLVM has more efficient methods of dealing with memory than
the instructions themselves.

This switches `assert_instr` to unconditionally use a shim that is an `extern`
function which should revert back to the previous behavior, using the simd types
as immediates and testing the same.
2018-01-25 12:13:48 -06:00
Alex Crichton
73794e5035 Update stdsimd-verify to print out instruction differences
Too many to deal with for now...
2018-01-20 10:13:26 -08:00
Alex Crichton
30694efc68 Remove PartialEq impls for x86 types (#294)
* Remove `PartialEq for __m64`

This helps to strip the public API of the vendor type for now, although this may
come back at a later date!

* Remove `PartialEq for __m128i`

Like the previous commit, but for another type!

* Remove `PartialEq for __m256i`

Same as previous commit!
2018-01-20 11:24:09 -06:00
Alex Crichton
4b66abaede Move x86-specific types to the vendor module (#293)
I believe we're reserving the `simd` module for exclusively the portable types
and their operations, so this commit moves the various x86-specific types from
the portable modules to the `x86` module. Along the way this also adds some doc
blocks for all the existing x86 types.
2018-01-19 21:20:44 -06:00
Alex Crichton
e19b6d9efd Remove Into/From between x86 and portable types (#292)
This is primarily done to avoid falling into a portability trap by accident,
and in general it moves the vendor types (on x86) toward being as minimal as
they can be. Along the way some tests were cleaned up which were still using
the portable types.
2018-01-19 20:15:07 -06:00
Alex Crichton
54452230a7 Add an example of SIMD-powered hex encoding (#291)
This is lifted from an example elsewhere I found and shows off runtime
dispatching along with a lot of intrinsics being used in a bunch.
2018-01-19 16:53:38 -06:00
Alex Crichton
faf5aea427 Reduce implicit reliance on structure of __m* types (#290)
They need to be structured *somehow* to be the right bit width but ideally we
wouldn't have the intrinsics rely on the particulars about how they're
represented.
2018-01-19 14:44:31 -06:00
Alex Crichton
330a124568 Update stdsimd-verify for vendor types (#289)
This commit provides insurance that intrinsics are only introduced with known
canonical types (`__m128i` and such) instead of also allowing `u8x16` for
example.
2018-01-19 12:11:21 -06:00