Implement OpenMP offload for do group update #1782
edoyango wants to merge 14 commits into NOAA-GFDL:main from
Conversation
* add multi gpu support
* address review comments, add helpful comment for the acc/mp runtime call
 do i = is, ie
-  pos = pos + 1
-  field(i,j,k) = buffer(pos)
+  idx = pos + (k-1)*nj*ni + (j-js)*ni + (i-is) + 1
How are these two implementations equivalent? Is new idx = old pos always?
Yes, they're equivalent. For any iteration, idx = pos + (k-1)*nj*ni + (j-js)*ni + (i-is) + 1 produces the same value that pos would have had at that point. The formula accounts for all the iterations that would have occurred in the nested loops up to that (i,j,k) position.
The reason for the change is that each nested iteration is now independent and can be performed in parallel.
To enable this, the sequential `pos = pos + 1` update had to be removed - otherwise segfaults happen on the GPU.
In 0cc2a77, compiler macros are replaced with the runtime OpenMP `if` clause. For example, below are halo communication times doing a self-communication (re-entrant domain) in MOM6 barotropic, where packing and unpacking are on GPU: [timing figure] And with the CPU OpenMP directives not compiled via macros: [timing figure] Hence, 0cc2a77 will be reverted.
This reverts commit 0cc2a77. Having both the CPU and GPU OpenMP directives compiled caused a significant slowdown in GPU packing/unpacking performance - even when parallelism is controlled using the OpenMP "if" clause.
Some very minor changes to the OpenMP target MPI PR:
* use_device_ptr -> use_device_addr. This appears to be the updated form (or at least nvfortran says it is).
* Whitespace added to `!$ use omp_lib`. Does not seem crucial, but from our previous discussion it appears more correct.
* Removal of some trailing whitespace.
This patch refactors several lines to keep within the 121-character line length limit prescribed by the FMS style guidelines.
The no-comm (no MPI) interface has been updated to support the new omp_offload argument.
bensonr left a comment
For some of the target omp statements, the if (use_device_ptr) appears before the target data mapping and at other times after. Does it matter where in the !$omp specification the if-test occurs?
The preference for !$omp is default(none) with full specification of shared and private.
 #ifndef __NVCOMPILER_OPENMP_GPU
 !$OMP parallel do default(none) shared(npack,group,ptr,nvector,ksize,buffer_start_pos) &
 !$OMP private(buffer_pos,pos,m,is,ie,js,je,rotation, &
-!$OMP ptr_field, ptr_fieldx, ptr_fieldy,n,k)
+!$OMP ptr_field, ptr_fieldx, ptr_fieldy,n,k,ni,nj,idx)
 #endif
In the case where __NVCOMPILER_OPENMP_GPU is defined, the compiled library may offload the packing to the device (assuming use_device_ptr). But if use_device_ptr is false, then we get zero parallelism. It would be good to have both options available at the same time for maximum flexibility.
I think the plan is to rename this to something that is set at compile-time (e.g. via cmake or autoconf).
@marshallward - Not sure I understand your comment and I'm sure mine wasn't entirely clear. I think we need both a CPU and GPU version of routines like this to allow offload and traditional execution within the same programmatic unit. It isn't entirely clear we get this with such fine-grained GPU offloading and a mix of macro if-tests.
Sorry for the brief reply above. As you say, the current form would only allow either CPU parallelism or GPU parallelism. I don't think we imagined using both at this stage of development, and assumed one would make this decision at compile-time.
Replacing __NVCOMPILER_OPENMP_GPU with a generic macro would decouple that decision from the choice of compiler, but I agree it doesn't permit the use of both in the same model.
We can create separate versions of group_update_[un]pack.inc rather than rely on preprocessing, if you are OK with the code duplication.
There are also bigger picture ideas here that could consolidate the two versions (e.g. team-parallelism of groups with thread-parallelism of buffer transfer), but AFAIK there are technical issues preventing this, such as the presence of Cray pointers.
@bensonr Is there a preference for how this is implemented? Would something like the below work, to avoid code duplication?
! below include contains openmp offloaded directives but not cpu openmp directives
#include <group_update_pack.inc>
! undefine macro to enable compilation of cpu openmp parallel loop
#ifdef GPU_MACRO
#undef GPU_MACRO
if (.not.omp_offload) then
! contains cpu openmp directives without openmp offload directives
#include <group_update_pack.inc>
endif
#define GPU_MACRO
#endif
@edoyango - macro-driven, templated Fortran is preferable to maintaining different, yet identical versions that differ by compile-time comment lines only. Please implement this - if you haven't already.
Hi @bensonr, I've attempted to implement the suggested macro solution, but nvfortran fails at run time.
The failure happens when an OpenMP offload loop and an OpenMP CPU loop - either one with default(none) - reference the same Cray pointer in the same procedure. For the latest compilers, whichever loop has default(none) either hangs or segfaults at runtime.
The workaround is to use default(shared) in both directives, or to put the loops into isolated subroutines.
reproducer:
test_cray_ptr_omp.txt
Compile with nvfortran -mp=gpu and run with ./a.out 0/1/2/3. Cases 0 and 1 are GPU and CPU (respectively) with default(none); cases 2 and 3 are the same with default(shared).
Could a concession be made to use default(shared)? Otherwise we can't make progress until Cray pointers are removed.
@edoyango - Thanks for explaining the issue with default(none) and why you'd need default(shared) for now. It would be helpful to leave the default(none) logic in place along with extra comments for the people in the process of removing cray pointers.
@bensonr I've:
- added the required shared/private lists to the OpenMP offloaded loops
- swapped from default(none) to default(shared) in the packing/unpacking loops
- added comments to the top of the group_update_un/pack.inc files explaining the restrictions imposed by nvfortran with Cray pointers
- through macro manipulation, ensured OpenMP CPU parallelism is activated when omp_offload==.false.
I hope that's sufficient.
AFAIK clause order is not important. I tend to prefer ending with (Also note that some of the
 end subroutine MPP_SEND_

-subroutine MPP_RECV_SCALAR_( get_data, from_pe, glen, block, tag, request )
+subroutine MPP_RECV_SCALAR_( get_data, from_pe, glen, block, tag, request, omp_offload )
@edoyango - I see you only added the offload option for the scalar version of the send/recv pairs. Is there a reason this is not being done for all of the send/recv routines?
I've tried to keep the scope of this PR to only cover what's required to get do_group_update working and do_group_update only uses MPP_SEND/RECV_SCALAR_. I can add the omp_offload flag to the other implementations, but it would be untested.
This ensures that the (un)packing steps in do_group_update are performed with OpenMP CPU parallelism if omp_offload=.false.. Previously they would only run serially. This is implemented by undefining the GPU macro (currently __NVCOMPILER_OPENMP_GPU) and re-including the (un)packing files. To make this work, default(shared) was used in all the relevant OpenMP directives; if default(none) is used, the loops hang or segfault.

Description
This adds the necessary code to make mpp_do_group_update work with arrays managed by NVIDIA's OpenMP offload runtime. It attempts to be minimally disruptive: by adding macros around the relevant OpenMP directives, non-NVIDIA compilers see the same behaviour as before.
Fixes #1771
How Has This Been Tested?
The OpenMP offload capability is currently tested on the "double gyre" case in MOM6-examples using the nvfortran compiler and a CUDA-aware OpenMPI. We have some notes on how to run the GPU-enabled MOM6, but they are outdated.
Checklist:
- make distcheck passes