
author    Craig Topper <craig.topper@intel.com>
          Tue, 9 Jul 2019 19:55:28 +0000 (19:55 +0000)
committer Craig Topper <craig.topper@intel.com>
          Tue, 9 Jul 2019 19:55:28 +0000 (19:55 +0000)
commit    d2078b8db6a5a8d3ce1e43cc55ed51fbfc4517e3
tree      e68d77540dee87d04c6870ceb759c14003a6ccee
parent    556a91d011f4d78bdec9c10680358c22a89dda36
[X86][AMDGPU][DAGCombiner] Move call to allowsMemoryAccess into isLoadBitCastBeneficial/isStoreBitCastBeneficial to allow X86 to bypass it

Basically the problem is that X86 doesn't set the Fast flag from
allowsMemoryAccess on certain CPUs due to slow unaligned memory
subtarget features. This prevents bitcasts from being folded into
loads and stores. But on X86, all vector loads and stores of the
same width have the same cost, so the fold is always worthwhile.
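
A minimal standalone sketch (illustrative names only, not the actual LLVM code) of the pre-patch gating in DAGCombiner: the bitcast-of-load fold was only attempted when the target both said the bitcast was beneficial and reported the access as "fast", so subtargets tuned for slow unaligned memory never got the fold.

```cpp
// Standalone illustration; MemAccess, SlowUnalignedMem, etc. are made up.
#include <cstdio>

struct MemAccess {
  unsigned SizeInBits;
  unsigned AlignInBytes;
};

// Stand-in for TargetLowering::allowsMemoryAccess setting the Fast flag.
bool allowsMemoryAccess(const MemAccess &MA, bool SlowUnalignedMem,
                        bool *Fast) {
  // The access is legal either way; "Fast" reflects the subtarget tuning.
  *Fast = !SlowUnalignedMem || (MA.AlignInBytes * 8 >= MA.SizeInBits);
  return true;
}

// Stand-in for the target hook; pre-patch it never saw the memory operand.
bool isLoadBitCastBeneficial() { return true; }

int main() {
  MemAccess MA{128, 1}; // 128-bit vector load with 1-byte alignment
  bool Fast = false;
  bool Legal = allowsMemoryAccess(MA, /*SlowUnalignedMem=*/true, &Fast);

  // Pre-patch combine: both conditions had to hold, so a CPU with a
  // "slow unaligned memory" feature never folded the bitcast.
  if (isLoadBitCastBeneficial() && Legal && Fast)
    std::puts("fold bitcast into load");
  else
    std::puts("fold blocked by !Fast");
}
```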

This patch moves the allowsMemoryAccess call into isLoadBitCastBeneficial/isStoreBitCastBeneficial so that X86 can skip it.
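
A rough sketch of the post-patch shape (illustrative names and signatures, not the exact TargetLowering interface): the allowsMemoryAccess check lives inside the hook's default implementation, and the X86 override ignores the Fast flag because the bitcast does not change the load's cost.

```cpp
// Standalone illustration of the override pattern described above.
#include <cstdio>

struct MemAccess {
  unsigned SizeInBits;
  unsigned AlignInBytes;
};

struct TargetLoweringSketch {
  bool SlowUnalignedMem = false;

  bool allowsMemoryAccess(const MemAccess &MA, bool *Fast) const {
    *Fast = !SlowUnalignedMem || (MA.AlignInBytes * 8 >= MA.SizeInBits);
    return true;
  }

  // Default behaviour: still require the access to be fast.
  virtual bool isLoadBitCastBeneficial(const MemAccess &MA) const {
    bool Fast = false;
    return allowsMemoryAccess(MA, &Fast) && Fast;
  }
  virtual ~TargetLoweringSketch() = default;
};

struct X86LoweringSketch : TargetLoweringSketch {
  // X86: only require the access to be allowed at all; the Fast flag is
  // irrelevant since same-width vector loads/stores cost the same.
  bool isLoadBitCastBeneficial(const MemAccess &MA) const override {
    bool Fast = false;
    return allowsMemoryAccess(MA, &Fast);
  }
};

int main() {
  MemAccess MA{128, 1};
  X86LoweringSketch X86;
  X86.SlowUnalignedMem = true;
  std::puts(X86.isLoadBitCastBeneficial(MA) ? "fold allowed" : "fold blocked");
}
```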

Differential Revision: https://reviews.llvm.org/D64295

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@365549 91177308-0d34-0410-b5e6-96231b3b80d8
include/llvm/CodeGen/TargetLowering.h
lib/CodeGen/SelectionDAG/DAGCombiner.cpp
lib/Target/AMDGPU/AMDGPUISelLowering.cpp
lib/Target/AMDGPU/AMDGPUISelLowering.h
lib/Target/X86/X86ISelLowering.cpp
lib/Target/X86/X86ISelLowering.h
test/CodeGen/X86/merge-consecutive-stores-nt.ll
test/CodeGen/X86/vector-shuffle-128-v4.ll