===================================
Compiling CUDA C/C++ with LLVM
===================================

This document describes how to compile CUDA C/C++ with LLVM and covers some of
the relevant compiler internals. It is aimed both at users who want to compile
CUDA with LLVM and at developers who want to improve LLVM for GPUs. It assumes
a basic familiarity with CUDA; more information about CUDA programming can be
found in the `CUDA programming guide
<http://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html>`_.

How to Build LLVM with CUDA Support
===================================

CUDA support is still in development and works best in the trunk version of
LLVM. Below is a quick summary of downloading and building the trunk version.
Consult the `Getting Started
<http://llvm.org/docs/GettingStarted.html>`_ page for more details on setting
up LLVM.

#. Checkout LLVM

   .. code-block:: console

     $ cd where-you-want-llvm-to-live
     $ svn co http://llvm.org/svn/llvm-project/llvm/trunk llvm

#. Checkout Clang

   .. code-block:: console

     $ cd where-you-want-llvm-to-live
     $ cd llvm/tools
     $ svn co http://llvm.org/svn/llvm-project/cfe/trunk clang

#. Configure and build LLVM and Clang

   .. code-block:: console

     $ cd where-you-want-llvm-to-live
     $ mkdir build
     $ cd build
     $ cmake [options] ..
     $ make
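
As a concrete illustration of the ``[options]`` placeholder, a Release build
might look like the following (``-DCMAKE_BUILD_TYPE=Release`` and the
parallelism level are common choices, not requirements):

.. code-block:: console

  $ cd where-you-want-llvm-to-live/build
  $ cmake -DCMAKE_BUILD_TYPE=Release ..
  $ make -j8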

How to Compile CUDA C/C++ with LLVM
===================================

We assume you have installed the CUDA driver and runtime. Consult the `NVIDIA
CUDA installation guide
<https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html>`_ if
you have not.

Suppose you want to compile and run the following CUDA program (``axpy.cu``)
which multiplies a ``float`` array by a ``float`` scalar (AXPY).

.. code-block:: c++

  #include <iostream>

  __global__ void axpy(float a, float* x, float* y) {
    y[threadIdx.x] = a * x[threadIdx.x];
  }

  int main(int argc, char* argv[]) {
    const int kDataLen = 4;
    float a = 2.0f;
    float host_x[kDataLen] = {1.0f, 2.0f, 3.0f, 4.0f};
    float host_y[kDataLen];

    // Copy input data to device.
    float* device_x;
    float* device_y;
    cudaMalloc(&device_x, kDataLen * sizeof(float));
    cudaMalloc(&device_y, kDataLen * sizeof(float));
    cudaMemcpy(device_x, host_x, kDataLen * sizeof(float),
               cudaMemcpyHostToDevice);

    // Launch the kernel.
    axpy<<<1, kDataLen>>>(a, device_x, device_y);

    // Copy output data to host.
    cudaDeviceSynchronize();
    cudaMemcpy(host_y, device_y, kDataLen * sizeof(float),
               cudaMemcpyDeviceToHost);

    // Print the results.
    for (int i = 0; i < kDataLen; ++i) {
      std::cout << "y[" << i << "] = " << host_y[i] << "\n";
    }
    return 0;
  }

The command line for compilation is similar to what you would use for C++.

.. code-block:: console

  $ clang++ axpy.cu -o axpy --cuda-gpu-arch=<GPU arch> \
      -L<CUDA install path>/<lib64 or lib> \
      -lcudart_static -ldl -lrt -pthread
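
If compilation and the GPU setup succeed, running the binary should print each
element of ``y`` (the program computes ``y = a * x`` with ``a = 2``):

.. code-block:: console

  $ ./axpy
  y[0] = 2
  y[1] = 4
  y[2] = 6
  y[3] = 8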

``<CUDA install path>`` is the root directory where you installed the CUDA SDK,
typically ``/usr/local/cuda``. ``<GPU arch>`` is `the compute capability of
your GPU <https://developer.nvidia.com/cuda-gpus>`_. For example, if you want
to run your program on a GPU with compute capability 3.5, you should specify
``--cuda-gpu-arch=sm_35``.

Detecting clang vs NVCC
=======================

Although clang's CUDA implementation is largely compatible with NVCC's, you may
still want to detect when you're compiling CUDA code specifically with clang.

This is tricky, because NVCC may invoke clang as part of its own compilation
process! For example, NVCC uses the host compiler's preprocessor when
compiling for device code, and that host compiler may in fact be clang.

When clang is actually compiling CUDA code -- rather than being used as a
subtool of NVCC's -- it defines the ``__CUDA__`` macro. ``__CUDA_ARCH__`` is
defined only in device mode (but will be defined if NVCC is using clang as a
preprocessor). So you can use the following incantations to detect clang CUDA
compilation, in host and device modes:

.. code-block:: c++

  #if defined(__clang__) && defined(__CUDA__) && !defined(__CUDA_ARCH__)
  // clang compiling CUDA code, host mode.
  #endif

  #if defined(__clang__) && defined(__CUDA__) && defined(__CUDA_ARCH__)
  // clang compiling CUDA code, device mode.
  #endif

Both clang and nvcc define ``__CUDACC__`` during CUDA compilation. You can
detect NVCC specifically by looking for ``__NVCC__``.
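
For instance, a minimal sketch (using only the macros described above) that
distinguishes the three cases might look like:

.. code-block:: c++

  #if defined(__NVCC__)
  // NVCC is driving the CUDA compilation (clang may still be involved as
  // NVCC's host preprocessor, so __clang__ can also be defined here).
  #elif defined(__clang__) && defined(__CUDA__)
  // clang itself is compiling the CUDA code.
  #else
  // Not being compiled as CUDA by either compiler.
  #endif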

Flags that control numerical code
=================================

If you're using GPUs, you probably care about making numerical code run fast.
GPU hardware allows for more control over numerical operations than most CPUs,
but this results in more compiler options for you to juggle.

Flags you may wish to tweak include:

* ``-ffp-contract={on,off,fast}`` (defaults to ``fast`` on host and device when
  compiling CUDA) Controls whether the compiler emits fused multiply-add
  operations.

  * ``off``: never emit fma operations, and prevent ptxas from fusing multiply
    and add instructions.
  * ``on``: fuse multiplies and adds within a single statement, but never
    across statements (C11 semantics). Prevent ptxas from fusing other
    multiplies and adds.
  * ``fast``: fuse multiplies and adds wherever profitable, even across
    statements. Doesn't prevent ptxas from fusing additional multiplies and
    adds.

  Fused multiply-add instructions can be much faster than the unfused
  equivalents, but because the intermediate result in an fma is not rounded,
  this flag can affect numerical code.

* ``-fcuda-flush-denormals-to-zero`` (default: off) When this is enabled,
  floating point operations may flush `denormal
  <https://en.wikipedia.org/wiki/Denormal_number>`_ inputs and/or outputs to 0.
  Operations on denormal numbers are often much slower than the same operations
  on normal numbers.

* ``-fcuda-approx-transcendentals`` (default: off) When this is enabled, the
  compiler may emit calls to faster, approximate versions of transcendental
  functions, instead of using the slower, fully IEEE-compliant versions. For
  example, this flag allows clang to emit the ptx ``sin.approx.f32``
  instruction.

  This is implied by ``-ffast-math``.
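
As a rough illustration (reusing the earlier ``axpy.cu`` example, and assuming
a compute capability 3.5 GPU with CUDA installed under ``/usr/local/cuda``),
these flags are simply added to the usual compile command:

.. code-block:: console

  $ clang++ axpy.cu -o axpy --cuda-gpu-arch=sm_35 \
      -ffp-contract=fast \
      -fcuda-flush-denormals-to-zero \
      -fcuda-approx-transcendentals \
      -L/usr/local/cuda/lib64 -lcudart_static -ldl -lrt -pthread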

Optimizations
=============

CPUs and GPUs have different design philosophies and architectures. For
example, a typical CPU has branch prediction, out-of-order execution, and is
superscalar, whereas a typical GPU has none of these. Due to such differences,
an optimization pipeline that is well-tuned for CPUs may not be suitable for
GPUs.

LLVM performs several general and CUDA-specific optimizations for GPUs. The
list below shows some of the more important ones. Most of them have been
upstreamed to ``lib/Transforms/Scalar`` and ``lib/Target/NVPTX``. A few of them
have not been upstreamed due to lack of a customizable target-independent
optimization pipeline.

* **Straight-line scalar optimizations**. These optimizations reduce redundancy
  in straight-line code. Details can be found in the `design document for
  straight-line scalar optimizations <https://goo.gl/4Rb9As>`_.

* **Inferring memory spaces**. `This optimization
  <https://github.com/llvm-mirror/llvm/blob/master/lib/Target/NVPTX/NVPTXInferAddressSpaces.cpp>`_
  infers the memory space of an address so that the backend can emit faster
  special loads and stores from it.

* **Aggressive loop unrolling and function inlining**. Loop unrolling and
  function inlining need to be more aggressive for GPUs than for CPUs because
  control flow transfer on a GPU is more expensive. They also promote other
  optimizations, such as constant propagation and SROA, which sometimes speed
  up code by over 10x. An empirical inline threshold for GPUs is 1100. This
  configuration has yet to be upstreamed with a target-specific optimization
  pipeline. LLVM also provides `loop unrolling pragmas
  <http://clang.llvm.org/docs/AttributeReference.html#pragma-unroll-pragma-nounroll>`_
  and ``__attribute__((always_inline))`` for programmers to force unrolling and
  inlining (see the sketch after this list).

* **Aggressive speculative execution**. `This transformation
  <http://llvm.org/docs/doxygen/html/SpeculativeExecution_8cpp_source.html>`_ is
  mainly for promoting straight-line scalar optimizations, which are most
  effective on code along dominator paths.

* **Memory-space alias analysis**. `This alias analysis
  <http://reviews.llvm.org/D12414>`_ infers that two pointers in different
  special memory spaces do not alias. It has yet to be integrated into the new
  alias analysis infrastructure; the new infrastructure does not run
  target-specific alias analysis.

* **Bypassing 64-bit divides**. `An existing optimization
  <http://llvm.org/docs/doxygen/html/BypassSlowDivision_8cpp_source.html>`_ is
  enabled in the NVPTX backend. 64-bit integer divides are much slower than
  32-bit ones on NVIDIA GPUs due to the lack of a divide unit. Many of the
  64-bit divides in our benchmarks have a divisor and dividend which fit in
  32 bits at runtime. This optimization provides a fast path for this common
  case.
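
As a brief sketch of the manual unrolling and inlining controls mentioned in
the list above (the kernel, helper function, and trip count here are made up
purely for illustration):

.. code-block:: c++

  // Hypothetical device helper; always_inline asks clang to inline it into
  // callers even when the inliner's heuristics would otherwise skip it.
  __attribute__((always_inline)) __device__ float scale(float a, float x) {
    return a * x;
  }

  __global__ void axpy4(float a, float* x, float* y, int n) {
    // Ask the compiler to fully unroll this fixed-trip-count loop.
  #pragma unroll
    for (int i = 0; i < 4; ++i) {
      int idx = 4 * threadIdx.x + i;
      if (idx < n) y[idx] = scale(a, x[idx]);
    }
  }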

Publication
===========

| `gpucc: An Open-Source GPGPU Compiler <http://dl.acm.org/citation.cfm?id=2854041>`_
| Jingyue Wu, Artem Belevich, Eli Bendersky, Mark Heffernan, Chris Leary, Jacques Pienaar, Bjarke Roune, Rob Springer, Xuetian Weng, Robert Hundt
| *Proceedings of the 2016 International Symposium on Code Generation and Optimization (CGO 2016)*
| `Slides for the CGO talk <http://wujingyue.com/docs/gpucc-talk.pdf>`_

Tutorial
========

`CGO 2016 gpucc tutorial <http://wujingyue.com/docs/gpucc-tutorial.pdf>`_

Obtaining Help
==============

To obtain help on LLVM in general and its CUDA support, see `the LLVM
community <http://llvm.org/docs/#mailing-lists>`_.