
Allow per-entry TTL #29

@xcapaldi

Description

This is a low-priority suggestion, but I figured I should offer it to you. I have a use case in which each entry I am caching may have a different TTL. Currently I set a cache-level TTL that is larger than any individual TTL, and I actually cache a struct which holds my cacheable value together with its individual TTL. Because of this, I disable the background process that cleans up expired entries, since entries expire for me much earlier than the cache perceives them as expired. I then need to do some additional manual checking/refreshing based on my own stashed expiration:

```go
entry, err := c.client.GetOrFetch(ctx, key, fetchFn)
if err != nil {
	return nil, err
}

// If the entry has no expiration, return it.
if entry.ExpireAt == nil {
	return &entry, nil
}

// Since sturdyc only supports a global TTL, we need to check the per-ID TTL
// to see if it has expired. If it has, we manually refresh.
// NOTE: sturdyc.Passthrough is not used here because we do not want to fall
// back on the cache if the upstream fails (it would return the expired entry again).
if time.Now().After(*entry.ExpireAt) {
	entry, err = c.refresh(ctx, key, fetchFn)
	if err != nil {
		return nil, err
	}
}
```

```go
// refresh refreshes the cache for the given key by calling the given fetch
// function. If the function returns an error, the key is deleted from the
// cache and the error is returned. This is treated as a cache miss. If the
// function returns an entry, the entry is stored in the cache and returned.
// This is treated as a refresh.
func (c *cache) refresh(ctx context.Context, key string, do Execution) (*User, error) {
	entry, err := do.Call(ctx)
	if err != nil {
		// We still want to purge the expired entry.
		c.client.Delete(key)
		metrics.ResolversCacheMissTotal.Inc()
		return nil, errors.Wrap(err, "do.Call")
	}
	c.client.Set(key, entry)
	metrics.CacheRefreshTotal.Inc()

	return entry, nil
}
```

Describe the solution you'd like
It would be awesome if we could set a distinct TTL per entry. Perhaps the cache would use the global TTL by default unless per-entry TTLs are enabled via an option at cache instantiation. This has side effects for the forced-eviction logic, however. Since all entries currently share the same TTL, the eviction partitioning operates on the age of the entry; i.e. the oldest inserted entries are evicted first. If you allow per-entry TTLs, the eviction logic would instead evict the entries that are earliest to expire. You could insert entry A with a 1-year TTL, wait 6 months, and insert entry B with a 1-month TTL. If you then force an eviction, entry B would be evicted first, even though it was just inserted, because it will expire sooner.

Instead you could add an additional timestamp field accessedAt here:

expiresAt time.Time

This would allow you to set expiresAt on a per-entry basis. You would update accessedAt whenever the entry is inserted, refreshed, or returned from the cache, and then make the forced-eviction logic operate on accessedAt. Forced eviction would then behave like LRU (least recently used) eviction, which is fairly standard.

Labels: enhancement (New feature or request)