
@kwodzicki
In Python/NumPy, iteration is fastest over the last dimension of an array because, in the default C (row-major) order, data along that dimension are stored sequentially in RAM.

NOTE: This is a breaking change!!!

In previous versions of this package, data were expected to be ordered such that dimensions were [time, y, x], or some variant with time being the left-most, and slowest-varying, dimension.

This update flips how data are processed and assumes that the right-most dimension is time. This enables faster compute as the time series of data for a single location are now sequential in RAM, leading to speedups in both serial and parallel performance.
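The memory-layout argument above can be checked directly in NumPy (a minimal illustration; the array shape here is arbitrary, not the package's actual data):

```python
import numpy as np

# 3-D array ordered [y, x, time]; NumPy defaults to C (row-major) order,
# so the last (time) axis is contiguous in memory.
data = np.zeros((18, 36, 1000))

# The time series at a single location is one contiguous block of RAM...
series = data[0, 0, :]
print(series.flags["C_CONTIGUOUS"])  # True

# ...whereas a fixed-time slice strides across memory and is not contiguous.
step = data[:, :, 0]
print(step.flags["C_CONTIGUOUS"])  # False
```

With the old [time, y, x] ordering the situation is reversed: each location's time series is strided, which is exactly what this change avoids.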

I have also reduced the number of tasks submitted in the parallel case by iterating over only the first dimension. In the previous version, iteration occurred over two dimensions, leading to lots of 'small' chunks of data being passed back and forth to the processing Pool. This communication can be slow and incurs some overhead.

To reduce this, we now pass 'large' chunks of data to a single process and use the map_over_location() static method to iterate down to the time dimension.

Testing indicated a 50% reduction in processing time, with only a slight increase in read time (10-20s) when transposing the data on read via Xarray.
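The transpose-on-read step looks roughly like the following (a sketch with invented variable/dimension names; the dataset here is built in memory for illustration):

```python
import numpy as np
import xarray as xr

# Small in-memory dataset with time as the LEFT-most dimension,
# mirroring the old [time, y, x] ordering.
ds = xr.Dataset(
    {"precip": (("time", "y", "x"), np.random.rand(10, 4, 5))}
)

# On read, move time to the right-most position; the Ellipsis keeps the
# relative order of all remaining dimensions.
da = ds["precip"].transpose(..., "time")
print(da.dims)  # ('y', 'x', 'time')

# ascontiguousarray forces a C-ordered copy, so each location's time
# series really is sequential in RAM; this copy is the modest extra
# read cost mentioned above.
values = np.ascontiguousarray(da.values)
print(values.flags["C_CONTIGUOUS"])  # True
```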

Closes #37

@kwodzicki kwodzicki marked this pull request as draft October 22, 2025 19:53
Modified the dimensionality requirement to check that data are
at least 1-D (i.e., array.ndim > 0). The updates made for speed
improvements also made the code more flexible with respect to
dimensionality, so as long as time is the last (right-most)
dimension and all other dimensions match across the arrays,
everything should work as expected.
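The relaxed check amounts to something like the following (an illustrative sketch; `check_dims` and its error messages are invented here, not the package's actual validator):

```python
import numpy as np


def check_dims(*arrays):
    """Require each array to be at least 1-D, with time as the last
    axis, and require all non-time (leading) dimensions to match."""
    shapes = set()
    for arr in arrays:
        if arr.ndim < 1:
            raise ValueError("arrays must be at least 1-D")
        shapes.add(arr.shape[:-1])  # everything except the time axis
    if len(shapes) > 1:
        raise ValueError(f"non-time dimensions differ: {shapes}")


# Passes: leading (y, x) dimensions agree.
check_dims(np.zeros((4, 5, 10)), np.zeros((4, 5, 10)))

# Raises: leading dimensions differ (4 vs 3 along y).
try:
    check_dims(np.zeros((4, 5, 10)), np.zeros((3, 5, 10)))
except ValueError as err:
    print("rejected:", err)
```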

Closes ecmwf-projects#22

Successfully merging this pull request may close these issues.

Slow Parallel Performance; Transpose Dimension Order

1 participant