[AMDGPU] Fixed incorrect uniform branch condition
author    Tim Renouf <tpr.llvm@botech.co.uk>
          Tue, 9 Jan 2018 21:34:43 +0000 (21:34 +0000)
committer Tim Renouf <tpr.llvm@botech.co.uk>
          Tue, 9 Jan 2018 21:34:43 +0000 (21:34 +0000)
commit    0eaae26af34609f8411c88662024a7eb51fbd105
tree      a53ff7c8cfd84f48fb49983e735780f8512e5f25
parent    b763cd8c4934d662a06bcedc8859b33e95f452b8
[AMDGPU] Fixed incorrect uniform branch condition

Summary:
I had a case where multiple nested uniform ifs resulted in code that did
v_cmp comparisons, combining the results with s_and_b64, s_or_b64 and
s_xor_b64 and using the resulting mask in s_cbranch_vccnz, without first
ensuring that bits for inactive lanes were clear.
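
As an illustration of the failure mode, here is a minimal host-side sketch.
This is not AMDGPU code and not the exact sequence from the bug: lane masks
are simply modelled as uint64_t values, and the xor-with-all-ones negation is
just one way bits for inactive lanes can end up set in the mask that
s_cbranch_vccnz tests.

  #include <cassert>
  #include <cstdint>

  // vcc and exec modelled as 64-bit lane masks, one bit per lane.
  // s_cbranch_vccnz branches when vcc is non-zero.
  static bool branch_vccnz(uint64_t vcc) { return vcc != 0; }

  int main() {
    const uint64_t exec = 0x3;   // only lanes 0 and 1 are active
    const uint64_t cmp  = exec;  // v_cmp: condition true in every active lane

    // Negating the condition by xor-ing with all-ones also sets the bits of
    // the 62 inactive lanes, so the mask is non-zero even though the negated
    // condition holds in no active lane.
    const uint64_t vcc_bad = cmp ^ ~0ULL;
    assert(branch_vccnz(vcc_bad));       // branch taken: wrong

    // "s_and_b64 vcc, exec, vcc" clears the inactive-lane bits first.
    const uint64_t vcc_fixed = exec & vcc_bad;
    assert(!branch_vccnz(vcc_fixed));    // branch not taken: correct
    return 0;
  }

Both asserts hold when this is compiled and run: without the masking, the
branch is taken even though no active lane satisfies the condition.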

There was already code for inserting an "s_and_b64 vcc, exec, vcc" to
clear bits for inactive lanes in the case that the branch is instruction
selected as s_cbranch_scc1 and is then changed to s_cbranch_vccnz in
SIFixSGPRCopies. I have added the same code into SILowerControlFlow for
the case that the branch is instruction selected as s_cbranch_vccnz.
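
For illustration only, here is a rough sketch of what inserting that mask
looks like at the MachineIR level. This is not the actual patch and is not
tied to any particular pass; the AMDGPU opcode and register enums
(AMDGPU::S_AND_B64, AMDGPU::EXEC, AMDGPU::VCC) live in the target's generated
headers, so they are assumed here and passed in as plain parameters.

  #include "llvm/CodeGen/MachineBasicBlock.h"
  #include "llvm/CodeGen/MachineInstr.h"
  #include "llvm/CodeGen/MachineInstrBuilder.h"
  #include "llvm/CodeGen/TargetInstrInfo.h"

  using namespace llvm;

  // Branch is assumed to be an S_CBRANCH_VCCNZ whose implicit VCC use may
  // still carry set bits for inactive lanes. AndOpc/ExecReg/VCCReg stand in
  // for AMDGPU::S_AND_B64, AMDGPU::EXEC and AMDGPU::VCC.
  static void maskBranchCondWithExec(MachineInstr &Branch,
                                     const TargetInstrInfo &TII,
                                     unsigned AndOpc, unsigned ExecReg,
                                     unsigned VCCReg) {
    MachineBasicBlock &MBB = *Branch.getParent();
    const DebugLoc &DL = Branch.getDebugLoc();
    // vcc = s_and_b64 exec, vcc  ; clears condition bits of inactive lanes
    BuildMI(MBB, Branch, DL, TII.get(AndOpc), VCCReg)
        .addReg(ExecReg)
        .addReg(VCCReg);
  }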

This de-optimizes the code in some cases where the s_and is not needed,
because vcc is the result of a v_cmp, or of multiple v_cmp instructions
combined by s_and/s_or: a v_cmp already leaves the bits for inactive lanes
clear, and s_and/s_or of such masks keeps them clear. We should add a pass
to re-optimize those cases.
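
In the same uint64_t lane-mask model as above, that redundancy is easy to see
(an illustration, not a proof about every sequence the selector can emit):
masks whose inactive-lane bits are already clear stay clear under s_and/s_or,
while an xor against all-ones does not.

  #include <cassert>
  #include <cstdint>

  int main() {
    const uint64_t exec = 0xF;   // four active lanes
    const uint64_t a = 0x5;      // v_cmp-style mask: inactive bits clear
    const uint64_t b = 0xC;      // another v_cmp-style mask

    // and/or of such masks cannot set a bit for an inactive lane, so the
    // extra "s_and_b64 vcc, exec, vcc" changes nothing there.
    assert(((a & b) & ~exec) == 0);
    assert(((a | b) & ~exec) == 0);

    // An xor against all-ones is the kind of combination that does set them.
    assert(((a ^ ~0ULL) & ~exec) != 0);
    return 0;
  }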

Reviewers: arsenm, kzhuravl

Subscribers: wdng, yaxunl, t-tye, llvm-commits, dstuttard, timcorringham, nhaehnle

Differential Revision: https://reviews.llvm.org/D41292

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@322119 91177308-0d34-0410-b5e6-96231b3b80d8
lib/Target/AMDGPU/AMDGPUISelDAGToDAG.cpp
test/CodeGen/AMDGPU/branch-relaxation.ll
test/CodeGen/AMDGPU/cf-loop-on-constant.ll
test/CodeGen/AMDGPU/nested-loop-conditions.ll
test/CodeGen/AMDGPU/scalar-branch-missing-and-exec.ll [new file with mode: 0644]
test/CodeGen/AMDGPU/select-opt.ll
test/CodeGen/AMDGPU/skip-if-dead.ll
test/CodeGen/AMDGPU/smrd-vccz-bug.ll
test/CodeGen/AMDGPU/uniform-cfg.ll