adds LRU cache with lazy eviction #159
Open
behzadnouri wants to merge 1 commit into jeromefroe:master from
Conversation
The commit implements a variant of LRU cache with lazy eviction:

* Each entry maintains an associated ordinal value representing when the entry was last accessed.
* The cache is allowed to grow up to 2 times the specified capacity with no evictions, at which point the excess entries are evicted based on the LRU policy, resulting in _amortized_ `O(1)` performance.

In many use cases that can allow the cache to store 2 times the capacity and can tolerate the amortized nature of the performance, this results in better average performance, as shown by the added benchmarks.

Additionally, with the existing implementation, `.get` requires a mutable reference `&mut self`. In a multi-threaded setting, this requires an exclusive write lock on the cache even on the read path, which can exacerbate lock contention. With lazy eviction, the ordinal values can be updated using atomic operations, allowing a shared lock for lookups.
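The eviction scheme described above can be sketched roughly as follows. This is an illustrative toy built on a plain `HashMap`, not the PR's actual code; the type and field names (`LazyLruCache`, `ordinal`, `entries`) are assumptions for the sake of the example:

```rust
use std::collections::HashMap;
use std::hash::Hash;

// Illustrative sketch of lazy eviction; not the PR's implementation.
struct LazyLruCache<K, V> {
    capacity: usize,
    ordinal: u64,                  // monotonically increasing access counter
    entries: HashMap<K, (u64, V)>, // key -> (last-access ordinal, value)
}

impl<K: Eq + Hash, V> LazyLruCache<K, V> {
    fn new(capacity: usize) -> Self {
        Self { capacity, ordinal: 0, entries: HashMap::new() }
    }

    fn get(&mut self, key: &K) -> Option<&V> {
        self.ordinal += 1;
        let ordinal = self.ordinal;
        self.entries.get_mut(key).map(|entry| {
            entry.0 = ordinal; // record access time; no list re-linking needed
            &entry.1
        })
    }

    fn put(&mut self, key: K, value: V) {
        self.ordinal += 1;
        self.entries.insert(key, (self.ordinal, value));
        // Evict lazily: only once the cache reaches 2x capacity, drop the
        // entries with the oldest ordinals in a single O(n) pass, which
        // amortizes to O(1) work per insertion.
        if self.entries.len() >= 2 * self.capacity {
            let mut ordinals: Vec<u64> =
                self.entries.values().map(|entry| entry.0).collect();
            ordinals.sort_unstable();
            // Keep only the `capacity` most recently used entries.
            let cutoff = ordinals[ordinals.len() - self.capacity];
            self.entries.retain(|_, entry| entry.0 >= cutoff);
        }
    }
}
```

Between eviction passes, both `get` and `put` touch only the hash map and a counter, which is where the benchmark improvements over the linked-list-based implementation would come from.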
Owner

Hey @behzadnouri 👋 This is awesome! I would definitely be open to merging once the API is finished – then users of the library can choose which implementation suits them best.
The new cache is added in submodule `lru::lazy::LruCache`.

Results from the added benchmarks on my machine (can be reproduced by `cargo bench`): 22% improvement on `get`, 38% improvement on `put`.

I have implemented the primary cache API but not all of it. I wanted to get feedback on how likely you would be to merge this before writing more code. Please let me know if you are open to merging this and I will complete the API.
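The shared-lock read path that lazy eviction enables could look roughly like this. The `RwLock` wrapper and all names here are assumptions for illustration, not the PR's API: the point is that a lookup only performs an atomic store on the entry's ordinal, so it needs no `&mut` access to the map:

```rust
use std::collections::HashMap;
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::RwLock;

// Sketch: lookups update an atomic ordinal, so readers can share
// a read lock instead of serializing on a write lock.
struct ConcurrentLru {
    counter: AtomicU64,
    // key -> (last-access ordinal, value)
    entries: RwLock<HashMap<String, (AtomicU64, u64)>>,
}

impl ConcurrentLru {
    fn new() -> Self {
        Self {
            counter: AtomicU64::new(0),
            entries: RwLock::new(HashMap::new()),
        }
    }

    // Read path: only a shared (read) lock is required, because the
    // last-access ordinal is bumped with an atomic store, not `&mut self`.
    fn get(&self, key: &str) -> Option<u64> {
        let ordinal = self.counter.fetch_add(1, Ordering::Relaxed);
        let map = self.entries.read().unwrap();
        map.get(key).map(|(last_access, value)| {
            last_access.store(ordinal, Ordering::Relaxed);
            *value
        })
    }

    // Write path (insertions, and any eventual eviction pass) still
    // takes the exclusive write lock.
    fn put(&self, key: String, value: u64) {
        let ordinal = self.counter.fetch_add(1, Ordering::Relaxed);
        self.entries
            .write()
            .unwrap()
            .insert(key, (AtomicU64::new(ordinal), value));
    }
}
```

With the existing list-based implementation this is impossible, since `.get` must re-link the entry in the recency list and therefore needs the write lock even for pure reads.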