[TTI] The cost model should not assume vector casts get completely scalarized
authorMichael Kuperstein <mkuper@google.com>
Wed, 6 Jul 2016 17:30:56 +0000 (17:30 +0000)
committerMichael Kuperstein <mkuper@google.com>
Wed, 6 Jul 2016 17:30:56 +0000 (17:30 +0000)
commitc7432f9ad36823e5958e5d56868ca6804f977edd
treeffcd890a325b16c433b473ff6a4b20c149ee8fbc
parenta905a4fd678f8f2601882e7da177fa4bf8afa033
The cost model should not assume vector casts get completely scalarized, since
on targets that have vector support, the common case is a partial split down to
the legal vector size. So, when a vector cast gets split, the resulting casts
end up legal and cheap.

Instead of pessimistically assuming scalarization, base TTI can use the costs
the concrete TTI provides for the split vector, plus a fudge factor to account
for the cost of the split itself. This fudge factor is currently 1 by default,
except on AMDGPU where inserts and extracts are considered free.

Differential Revision: http://reviews.llvm.org/D21251

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@274642 91177308-0d34-0410-b5e6-96231b3b80d8
include/llvm/CodeGen/BasicTTIImpl.h
lib/Target/AMDGPU/AMDGPUTargetTransformInfo.h
test/Analysis/CostModel/ARM/cast.ll
test/Analysis/CostModel/PowerPC/ext.ll
test/Analysis/CostModel/X86/sitofp.ll
test/Analysis/CostModel/X86/uitofp.ll
test/Transforms/LoopVectorize/X86/gather_scatter.ll