OSDN Git Service

[x86, InstCombine] transform x86 AVX masked loads to LLVM intrinsics
author    Sanjay Patel <spatel@rotateright.com>
Mon, 29 Feb 2016 23:16:48 +0000 (23:16 +0000)
committer Sanjay Patel <spatel@rotateright.com>
Mon, 29 Feb 2016 23:16:48 +0000 (23:16 +0000)
commit    3a7e7531706ed43543b1fafe419a95853f36b899
tree      7e36efe181e841f2e478aa9414c5a6a3ab8dbe93
parent    87e4278b8cd22aeb95e206c940a1b675d712b65f
[x86, InstCombine] transform x86 AVX masked loads to LLVM intrinsics

The intended effect of this patch in conjunction with:
http://reviews.llvm.org/rL259392
http://reviews.llvm.org/rL260145

is that customers using the AVX intrinsics in C will benefit from combines when
the load mask is constant:

__m128 mload_zeros(float *f) {
  return _mm_maskload_ps(f, _mm_set1_epi32(0));          // all mask bits clear -> folds to a zero vector, no load
}

__m128 mload_fakeones(float *f) {
  return _mm_maskload_ps(f, _mm_set1_epi32(1));          // sign bit of each lane is still 0 -> also folds to zero
}

__m128 mload_ones(float *f) {
  return _mm_maskload_ps(f, _mm_set1_epi32(0x80000000)); // sign bit set in every lane -> becomes a plain vector load
}

__m128 mload_oneset(float *f) {
  return _mm_maskload_ps(f, _mm_set_epi32(0x80000000, 0, 0, 0)); // only the high lane is live -> load of just that element
}

...so none of the above will actually generate a masked load instruction in optimized code.

This is the masked load counterpart to:
http://reviews.llvm.org/rL262064

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@262269 91177308-0d34-0410-b5e6-96231b3b80d8
lib/Transforms/InstCombine/InstCombineCalls.cpp
test/Transforms/InstCombine/x86-masked-memops.ll