Presently, CRTMv3 (and previous versions) ctests look for an existing reference file for each ctest. If the reference file doesn't exist, the test creates a new reference file and then compares any subsequent runs against it. This is fine for local or short-term development evaluation; however, for longer-term consistency and evaluation of changes, some changes might end up being missed.
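For context, here is a minimal sketch of that existing behavior. The file name (`test.ref`) and the placeholder output values are hypothetical, not actual CRTM test artifacts:

```fortran
! Sketch of the current ctest reference logic: create the reference if it
! is missing, otherwise require bit-for-bit agreement with it.
program check_against_reference
  implicit none
  integer, parameter :: n = 4
  real(8) :: result(n), reference(n)
  logical :: ref_exists
  integer :: unit

  result = [1.0d0, 2.0d0, 3.0d0, 4.0d0]   ! stand-in for real test output

  inquire(file='test.ref', exist=ref_exists)
  if (.not. ref_exists) then
    ! No reference yet: write one and effectively pass (current behavior)
    open(newunit=unit, file='test.ref', form='unformatted', action='write')
    write(unit) result
    close(unit)
    print *, 'Reference created; nothing to compare against.'
  else
    ! Reference exists: compare exactly against it
    open(newunit=unit, file='test.ref', form='unformatted', action='read')
    read(unit) reference
    close(unit)
    if (any(result /= reference)) then
      print *, 'FAIL: result differs from reference.'
      stop 1
    end if
    print *, 'PASS: result matches reference.'
  end if
end program check_against_reference
```

The weakness is visible in the first branch: whatever happens to be computed when no reference exists silently becomes the reference.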
This issue proposes a new standard for CRTM development: the CRTM team identifies an appropriate "fundamental" reference for each ctest, nominally tied to a well-tested release.
A reasonable convention is to develop a new set of reference files for each semi-major (3.x) release, since minor releases should not change the structure of the results output.
In this effort, we will create a reference set for the following releases:
- v2.3.0
- v2.4.0
- v3.0.0
- v3.1.0
- v3.2.0 (not yet created/released)
I will retroactively update ctests in release branches for each of these releases. Since each new release (even a minor one) adds new tests, perhaps we should instead identify a common core of "reference" tests that remain relatively unchanged from version to version. The one breaking change here would be the switch to netCDF as the default format for results storage (in v3.2).
I also noticed that the "reference" values fail numerical matching when a "RELEASE" build is compared against a "DEBUG" build, and vice versa. There are also minor numerical differences (1e-11 or smaller) in ctests between gfortran and ifort with RELEASE builds. This suggests that we either (a) maintain separate reference sets per build type and compiler, or (b) loosen the matching tolerance, as sketched below. In release mode, gfortran signals underflow on the tests that exhibit the differences, so this might be a fixable bug; we just need to identify where the underflow occurs (which should be handled in a new issue).
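As a sketch of option (b), a tolerance-based comparison could replace the exact-equality check. The module and argument names below are illustrative, not existing CRTM code, and the tolerances would need to be chosen to absorb the observed ~1e-11 compiler/build differences without masking real regressions:

```fortran
! Sketch of a tolerant comparison: pass if every element agrees within
! abs_tol + rel_tol*|reference|, so one reference set can serve both
! RELEASE/DEBUG builds and gfortran/ifort.
module tolerant_compare
  implicit none
contains
  logical function values_match(a, b, rel_tol, abs_tol) result(match)
    real(8), intent(in) :: a(:), b(:)      ! test result and reference
    real(8), intent(in) :: rel_tol, abs_tol
    match = all(abs(a - b) <= abs_tol + rel_tol*abs(b))
  end function values_match
end module tolerant_compare
```

A call such as `values_match(result, reference, 1.0d-10, 0.0d0)` would then tolerate the compiler-level noise while still flagging larger departures.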