I have read the sample file
NISAR_L2_PR_GCOV_001_005_A_219_4020_SHNA_A_20081012T060910_20081012T060926_P01101_F_N_J_001.h5
The image in that file is a floating-point 4545 x 6220 array. The incidence angle array, by contrast, is a floating-point 220 x 254 x 21 array. I know that the last dimension represents height above the ellipsoid, that the 220-long dimension of the incidence angle array corresponds to the 4545-long dimension of the image, and that the 254-long dimension corresponds to the 6220-long dimension of the image.
I can appreciate that, since the incidence angle varies slowly, a large array is not needed to represent these data. However, no matter how closely I examined the documentation and the data cubes, it is still not clear to me how the pixels in the incidence angle array map onto the image array.
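For reference, here is a rough sketch of how I would expect the mapping to work if the cube is meant to be interpolated onto the image grid by map coordinate rather than by a simple index ratio. This is only my guess: the HDF5 group paths (science/LSAR/GCOV/grids/frequencyA and science/LSAR/GCOV/metadata/radarGrid) and the coordinate dataset names (xCoordinates, yCoordinates, heightAboveEllipsoid) are assumptions on my part, the axis ordering follows the 220 x 254 x 21 shape I described above, and the tools (h5py, numpy, scipy) are just my choice for illustration.

```python
import h5py
import numpy as np
from scipy.interpolate import RegularGridInterpolator

fname = ("NISAR_L2_PR_GCOV_001_005_A_219_4020_SHNA_A_"
         "20081012T060910_20081012T060926_P01101_F_N_J_001.h5")

with h5py.File(fname, "r") as f:
    # Paths below are my assumption about where the image grid and cubes live.
    grids = f["science/LSAR/GCOV/grids/frequencyA"]
    cube_grp = f["science/LSAR/GCOV/metadata/radarGrid"]

    # Map coordinates of the full-resolution image grid (4545 rows x 6220 columns).
    img_y = grids["yCoordinates"][:]
    img_x = grids["xCoordinates"][:]

    # Incidence-angle cube (220 x 254 x 21, last axis = height above ellipsoid)
    # plus the coordinate vectors of the coarse cube grid.
    inc = cube_grp["incidenceAngle"][:]
    cube_y = cube_grp["yCoordinates"][:]
    cube_x = cube_grp["xCoordinates"][:]
    cube_h = cube_grp["heightAboveEllipsoid"][:]

# Pick one height layer (here the one closest to 0 m) and interpolate it
# bilinearly onto the image grid.  The key idea (again, my assumption) is that
# the cube and the image share the same map projection, so a cube pixel maps
# to image pixels by coordinate value, not by scaling the array indices.
k = int(np.argmin(np.abs(cube_h - 0.0)))
layer = inc[:, :, k]

# RegularGridInterpolator needs ascending coordinates; geocoded y often descends.
if cube_y[0] > cube_y[-1]:
    cube_y, layer = cube_y[::-1], layer[::-1, :]

interp = RegularGridInterpolator((cube_y, cube_x), layer,
                                 bounds_error=False, fill_value=np.nan)
yy, xx = np.meshgrid(img_y, img_x, indexing="ij")
inc_full = interp(np.stack([yy, xx], axis=-1))   # shape (4545, 6220)
```

If the cube stores its spacing and starting coordinates as attributes rather than explicit coordinate vectors, the same bilinear interpolation would apply once those vectors are reconstructed, but I have not been able to confirm which convention this product uses.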