CCPP MPI interface #1106
Comments
Would it be possible to add a few examples and/or pseudocode with this issue? Just to make sure we're on the same page as to how this would look to consumers and from a maintenance perspective.
With the caveat that this is a simplistic version that does not address all the CCPP requirements (do not stop the model, etc.):
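The snippet originally attached to this comment is not reproduced in this excerpt. A minimal sketch in the same spirit might look as follows, with the module and subroutine names (`ccpp_mpi_sketch`, `ccpp_mpi_bcast`) invented for illustration and only a rank-1 real broadcast covered:

```fortran
module ccpp_mpi_sketch
   use mpi
   implicit none
   private
   public :: ccpp_mpi_bcast

contains

   ! Broadcast a rank-1 real array and translate any MPI error into the
   ! CCPP errmsg/errflg pair instead of stopping the model.
   subroutine ccpp_mpi_bcast(data, root, comm, errmsg, errflg)
      real,             intent(inout) :: data(:)
      integer,          intent(in)    :: root
      integer,          intent(in)    :: comm
      character(len=*), intent(out)   :: errmsg
      integer,          intent(out)   :: errflg
      integer :: ierr

      errmsg = ''
      errflg = 0

      call mpi_bcast(data, size(data), MPI_REAL, root, comm, ierr)
      if (ierr /= MPI_SUCCESS) then
         errflg = 1
         errmsg = 'ccpp_mpi_bcast: mpi_bcast of rank-1 real array failed'
      end if
   end subroutine ccpp_mpi_bcast

end module ccpp_mpi_sketch
```

The CCPP-specific aspect here is that an MPI failure sets `errflg`/`errmsg` rather than aborting the model.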
Thanks @climbfuji! Not to add more to your plate, but could we add any details about what this would look like to users with the …
I don't have an …
One possible way to provide a generic plus additional, host-specific implementations of … This does not address the issue of making code in the CCPP repo dependent on CCPP, however. For this, there are multiple solutions: ifdefs (…).
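As a hedged illustration of the two ideas mentioned here, a generic name resolving to type- and rank-specific implementations, and preprocessor guards so hosts built without MPI still compile, a sketch could look like the following (the module name, the generic name `ccpp_broadcast`, and the `MPI` macro are placeholders, not taken from the issue):

```fortran
module ccpp_broadcast_mod
#ifdef MPI
   use mpi
#endif
   implicit none
   private
   public :: ccpp_broadcast

   ! One generic name; the compiler picks the type/rank-specific implementation.
   interface ccpp_broadcast
      module procedure ccpp_broadcast_int_0d
      module procedure ccpp_broadcast_real_1d
   end interface ccpp_broadcast

contains

   subroutine ccpp_broadcast_int_0d(val, root, comm, errmsg, errflg)
      integer,          intent(inout) :: val
      integer,          intent(in)    :: root, comm
      character(len=*), intent(out)   :: errmsg
      integer,          intent(out)   :: errflg
      integer :: ierr
      errmsg = ''
      errflg = 0
#ifdef MPI
      call mpi_bcast(val, 1, MPI_INTEGER, root, comm, ierr)
      if (ierr /= MPI_SUCCESS) then
         errflg = 1
         errmsg = 'ccpp_broadcast: broadcast of integer scalar failed'
      end if
#else
      ! Without MPI the call is a no-op, so schemes need no ifdefs of their own.
#endif
   end subroutine ccpp_broadcast_int_0d

   subroutine ccpp_broadcast_real_1d(val, root, comm, errmsg, errflg)
      real,             intent(inout) :: val(:)
      integer,          intent(in)    :: root, comm
      character(len=*), intent(out)   :: errmsg
      integer,          intent(out)   :: errflg
      integer :: ierr
      errmsg = ''
      errflg = 0
#ifdef MPI
      call mpi_bcast(val, size(val), MPI_REAL, root, comm, ierr)
      if (ierr /= MPI_SUCCESS) then
         errflg = 1
         errmsg = 'ccpp_broadcast: broadcast of rank-1 real array failed'
      end if
#endif
   end subroutine ccpp_broadcast_real_1d

end module ccpp_broadcast_mod
```

A host could additionally supply its own specific procedures behind the same generic name if it needs a host-specific communication layer, while the preprocessor guard keeps single-rank builds working.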
@mwaxmonsky Here is an example with a ton of MPI commands embedded within an initialization routine, rrtmgp_lw_gas_optics.F90 (I never removed the PP directives when …).
When we were developing the code, we had the error checking piece, but for brevity we removed it after everything was working. With a CCPP MPI interface like Dom mentioned, this routine would look pretty much the same, but …
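To make the "would look pretty much the same" point concrete, here is a hedged sketch of what the broadcast portion of such an `*_init` routine could look like when it goes through a wrapper like the hypothetical `ccpp_broadcast` above; the routine and variable names are invented and the actual netCDF read is elided:

```fortran
subroutine example_gas_optics_init(mpicomm, mpirank, mpiroot, errmsg, errflg)
   use ccpp_broadcast_mod, only: ccpp_broadcast   ! hypothetical interface sketched above
   implicit none
   integer,          intent(in)  :: mpicomm, mpirank, mpiroot
   character(len=*), intent(out) :: errmsg
   integer,          intent(out) :: errflg
   real :: band_lims(32)   ! stands in for one of the lookup-table arrays

   errmsg = ''
   errflg = 0

   ! Only the root rank touches the file system (the file read is omitted here)
   if (mpirank == mpiroot) then
      band_lims = 0.0
   end if

   ! One call per array; the wrapper does the mpi_bcast plus the error translation
   call ccpp_broadcast(band_lims, mpiroot, mpicomm, errmsg, errflg)
   if (errflg /= 0) return
end subroutine example_gas_optics_init
```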
Note to self. The update for the NRL physics, required for the transition of NEPTUNE to operations, was merged in https://github.nrlmry.navy.mil/NEPTUNE/ccpp-physics/pull/3.
Description
Many schemes perform I/O in their initialization phases, but this I/O is not guarded by MPI commands (i.e. not restricted to the root rank). Adding these MPI commands, and the associated error checking, within each scheme introduces redundancies.
Explanation: This means that these schemes read their input files with every MPI task individually at the same time. This can cause problems on parallel file systems with large core counts, as recently experienced on the DOD HPCMP system Narwhal. Reading the data with the MPI root rank only and then broadcasting it resolves the problem. However, coding up these MPI broadcast calls directly, capturing the errors and reporting them in a CCPP-compliant way, is tedious and results in several lines of code for each MPI call. This boilerplate can be hidden in a CCPP MPI interface that takes care of the CCPP-specific aspects.
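For illustration, the hand-coded pattern described above (root-only read, explicit broadcast, CCPP-compliant error capture) might look roughly like this for a single array; all names are placeholders, and this is the block that would have to be repeated for every variable read from file:

```fortran
subroutine scheme_pre_init(mpicomm, mpirank, mpiroot, errmsg, errflg)
   use mpi
   implicit none
   integer,          intent(in)  :: mpicomm, mpirank, mpiroot
   character(len=*), intent(out) :: errmsg
   integer,          intent(out) :: errflg
   real    :: lookup_table(1024)     ! placeholder for data read from file
   integer :: ierr, ierr2, msglen
   character(len=MPI_MAX_ERROR_STRING) :: mpimsg

   errmsg = ''
   errflg = 0

   ! Root rank alone touches the file system
   if (mpirank == mpiroot) then
      lookup_table = 0.0             ! stands in for the actual file read
   end if

   ! Each broadcast needs this block of boilerplate when coded by hand
   call mpi_bcast(lookup_table, size(lookup_table), MPI_REAL, mpiroot, mpicomm, ierr)
   if (ierr /= MPI_SUCCESS) then
      call mpi_error_string(ierr, mpimsg, msglen, ierr2)
      errflg = 1
      errmsg = 'Error in scheme_pre_init: mpi_bcast of lookup_table failed: '//mpimsg(1:msglen)
      return
   end if
end subroutine scheme_pre_init
```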
Solution
Create a CCPP MPI interface
@DomHeinzeller @nusbaume @peverwhee @cacraigucar