The behaviour proposed in PR #19 and #23 is to reduce the cache size, exactly once, so that it fits the shared memory, but this is not properly documented.
The reasoning is that initializing a larger cache takes more time and yields no benefit when there is no additional memory for the larger cache to take advantage of.
Under normal operation this behaviour is not expected to be triggered, but our tests deliberately use small examples to probe edge cases, and in those tests creating oversized caches carries a significant performance penalty.
The behaviour, and the reasoning behind it, should be documented somewhere.
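The clamping behaviour described above can be sketched roughly as follows. This is a minimal illustration, not the project's actual implementation; the function and parameter names are hypothetical.

```python
def effective_cache_size(requested_size: int, shared_memory_size: int) -> int:
    """Clamp the cache size to the available shared memory (hypothetical sketch).

    Initializing a cache larger than shared memory wastes setup time:
    the extra capacity can never be backed by memory, so the requested
    size is reduced to fit, exactly once, up front.
    """
    return min(requested_size, shared_memory_size)


# In small test scenarios the requested cache may far exceed shared memory,
# so the clamp avoids paying for capacity that could never be used:
assert effective_cache_size(requested_size=1 << 20, shared_memory_size=4096) == 4096

# Under normal operation the request fits and is left untouched:
assert effective_cache_size(requested_size=2048, shared_memory_size=4096) == 2048
```

Documenting the behaviour next to such a clamp (e.g. in its docstring) would make the "exactly once" semantics and the performance rationale explicit.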