Introduction

OpenACC is a directive-based API for parallelizing code on accelerators such as NVIDIA GPUs, much as OpenMP is the directive-based API for shared-memory parallel processing on CPUs. OpenACC is designed to provide a simple yet powerful approach to accelerators without significant programming effort: the programmer inserts OpenACC directives before specific code sections, typically loops, and the compiler uses them to target and optimize the parallelism for the GPU. In many cases of GPU computing, the programming effort with OpenACC is much smaller than with NVIDIA's CUDA. For many large existing codes, rewriting in CUDA is impractical if not impossible; for those cases, OpenACC offers a pragmatic alternative.
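
For example, offloading a loop can take a single directive. The following is a minimal sketch; the subroutine name, array names, and argument list are illustrative rather than taken from any particular code:

  ! A SAXPY-style loop offloaded to the GPU with a single OpenACC directive.
  ! The subroutine and its arguments are illustrative.
  subroutine saxpy(n, a, x, y)
    integer, intent(in)    :: n
    real,    intent(in)    :: a, x(n)
    real,    intent(inout) :: y(n)
    integer :: i
    !$acc parallel loop
    do i = 1, n
       y(i) = a*x(i) + y(i)
    end do
  end subroutine saxpy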

What you need to know or do on the SCC

  1. To use OpenACC, compile your Fortran code with the Portland Group Inc. (PGI) compiler, pgfortran (or pgf90, pgf95), which is now distributed as part of the NVIDIA HPC SDK. You will need to load a module in order to use it:
    scc1% module load nvidia-hpc/2023-23.5
  2. After this, you can proceed with compilation. For example:
    scc1% pgfortran -o mycode -acc -Minfo mycode.f90

    In the above example, -acc enables OpenACC directive processing, while -Minfo makes the compiler report what it did during compilation, including which loops it parallelized. For details, see the man page of pgfortran:

    scc1% man pgfortran
  3. To submit your code (with OpenACC directives) to an SCC node with GPUs:
    scc1% qsub -l gpus=1 -b y mycode

    In the above example, one GPU (and, in the absence of a multiprocessor request, one CPU) is requested; the -b y option indicates that mycode is an executable binary rather than a job script.

    Additional examples of GPU batch jobs are available here.
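
    Alternatively, the same request can be made through a job script. Below is a minimal sketch, assuming the SCC's standard Grid Engine conventions; the script name and executable name are illustrative:

      #!/bin/bash -l
      # Request one GPU for this job.
      #$ -l gpus=1
      # Load the same module used at compile time so the runtime libraries are found.
      module load nvidia-hpc/2023-23.5
      ./mycode

    A script saved as, say, mycode.qsub would then be submitted with:

      scc1% qsub mycode.qsub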

Demonstration of Performance

  1. The following examples demonstrate a matrix multiply (C = A * B) using either multi-threaded OpenMP on CPUs or OpenACC on a single GPU; a sketch of such a source file appears after this list.
    • To build the OpenMP version:
      scc1% pgfortran -mp matrix_multiply.f90 -o mm_omp
    • To build the OpenACC version:
      scc1% pgfortran -acc matrix_multiply.f90 -o mm_acc

  2. The following demonstrates a timing comparison of OpenACC, OpenMP, and MPI:

    [Bar chart: timings for single-GPU matrix multiplication using OpenACC, OpenMP, and MPI.]

    The measured times were:

      CPUs   OpenACC (1 GPU)   OpenMP   MPI
      4      19 s              117 s    156 s
      8      19 s              72 s     66 s
      16     19 s              25 s     27 s

    The OpenACC runs use a single GPU regardless of the CPU count, which is why their time stays at 19 seconds across all three rows.


    The figure above compares the timing of a matrix multiply on a single GPU (via OpenACC) against two other parallel methods, OpenMP and MPI. The figure below shows the timings of the same matrix multiply using 1, 2, and 3 GPU devices.

    [Bar chart: matrix multiply timings using 1, 2, and 3 GPU devices.]
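
The matrix_multiply.f90 source is not reproduced on this page. As a rough sketch, the OpenACC version of the core loop nest might look like the following; the matrix size, array names, and data clauses are assumptions, and the OpenMP build would instead carry an !$omp parallel do directive on the outer loop:

  program matrix_multiply
    implicit none
    ! The size n and array names are illustrative assumptions.
    integer, parameter :: n = 1000
    real    :: a(n,n), b(n,n), c(n,n)
    integer :: i, j, k
    call random_number(a)
    call random_number(b)
    ! Offload the loop nest to the GPU; copyin/copyout manage data movement.
    !$acc parallel loop collapse(2) copyin(a, b) copyout(c)
    do j = 1, n
       do i = 1, n
          c(i,j) = 0.0
          do k = 1, n
             c(i,j) = c(i,j) + a(i,k) * b(k,j)
          end do
       end do
    end do
    print *, 'c(1,1) = ', c(1,1)
  end program matrix_multiply

For the multi-GPU timings, one common pattern (an assumption here, not necessarily what produced the figure) is to combine OpenACC with OpenMP host threads, binding each thread to its own device through the OpenACC runtime API; such a code would be compiled with both -acc and -mp:

  program multi_gpu_setup
    use openacc
    use omp_lib
    implicit none
    integer :: ngpus, tid
    ngpus = acc_get_num_devices(acc_device_nvidia)
    ! One host thread per GPU; device numbering is assumed 0-based,
    ! as in the NVIDIA/PGI implementation.
    !$omp parallel private(tid) num_threads(ngpus)
    tid = omp_get_thread_num()
    call acc_set_device_num(tid, acc_device_nvidia)
    ! Each thread would launch its share of the matrix multiply here.
    !$omp end parallel
  end program multi_gpu_setup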

OpenACC Tutorial

Please refer to the RCS tutorial slides for OpenACC programming.

OpenACC Consulting

RCS staff scientific programmers can help you tune your OpenACC code. For assistance, please send email to help@scc.bu.edu.