Conversation

@nDmitry
Owner

@nDmitry nDmitry commented Dec 20, 2025

Replaces the Redis cache store with an in-memory cache by default, while keeping the option to use Redis if desired.

Summary by CodeRabbit

  • New Features

    • In-memory caching is now available and used by default; Redis can be enabled via environment variables.
  • Documentation

    • README updated to reflect in-memory default and how to enable Redis if desired.
  • Tests

    • Added a race-detection test target.
    • Added comprehensive tests for the in-memory cache.
  • Chores

    • Docker Compose configuration now disables Redis by default (can be re-enabled by uncommenting).


@coderabbitai

coderabbitai bot commented Dec 20, 2025

Warning

Rate limit exceeded

@nDmitry has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 11 minutes and 11 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

📥 Commits

Reviewing files that changed from the base of the PR and between 8bbe78a and 31ced12.

📒 Files selected for processing (3)
  • cmd/tgfeed/main.go (1 hunks)
  • internal/cache/memory.go (1 hunks)
  • internal/cache/memory_test.go (1 hunks)

Walkthrough

Adds an in-memory cache implementation and makes Redis optional: if REDIS_HOST is unset the app uses an in-memory cache; otherwise it uses Redis. Makefile, Docker Compose, README, main initialization, cache package, and tests updated accordingly.

Changes

Build & compose (Makefile, compose.yaml, README.md)
  • Makefile: added a race test target and updated .PHONY.
  • compose.yaml: the Redis service, depends_on, and env lines are commented out by default.
  • README.md: updated startup and Docker Compose instructions to document optional Redis and the in-memory default.

Application entrypoint (cmd/tgfeed/main.go)
  • Conditional cache initialization: cache.NewMemoryClient() is used when REDIS_HOST is empty; otherwise a Redis client is created. A generic cache.Cache is passed to rest.NewServer, and the Redis client lifecycle (creation/Close) is now conditional.

Cache API surface (internal/cache/cache.go)
  • Added the exported sentinel error ErrCacheMiss with the message "cache miss".

In-memory cache implementation & tests (internal/cache/memory.go, internal/cache/memory_test.go)
  • New MemoryCache type with NewMemoryClient(), Get, Set (with TTL and background eviction honoring context), Close, and a snapshot helper. Comprehensive tests cover set/get, TTL eviction, context cancellation, and Close behavior.

Redis cache adjustments (internal/cache/redis.go)
  • Removed the duplicate ErrCacheMiss declaration (moved to cache.go).
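The walkthrough mentions a generic cache.Cache being passed to rest.NewServer. A minimal sketch of what such an interface might look like, given the methods the review describes (the exact method set in the PR may differ; this is an assumption, not the actual code):

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// ErrCacheMiss mirrors the sentinel moved into internal/cache/cache.go.
var ErrCacheMiss = errors.New("cache miss")

// Cache is a hypothetical shape for the generic cache.Cache that the
// entrypoint passes to rest.NewServer; both the Redis client and the
// new MemoryCache would satisfy it.
type Cache interface {
	Get(ctx context.Context, key string) ([]byte, error)
	Set(ctx context.Context, key string, value []byte, ttl time.Duration) error
	Close() error
}

func main() {
	// Both implementations return the same sentinel on a miss,
	// so callers can branch on it uniformly.
	fmt.Println(ErrCacheMiss)
}
```

An interface like this is what lets main.go pick an implementation at startup while the rest of the code stays backend-agnostic.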

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

  • Focus review areas:
    • cmd/tgfeed/main.go: validate conditional initialization, error handling, and deferred Close behavior for both paths.
    • internal/cache/memory.go: verify concurrency correctness (locks), TTL eviction timers/goroutine lifecycle, and context cancellation to avoid leaks.
    • internal/cache/memory_test.go: ensure tests are deterministic (timing/TTL), and cover edge cases.
    • internal/cache/cache.go and internal/cache/redis.go: confirm ErrCacheMiss placement and consistent error semantics across implementations.

Poem

🐇 I hopped from keys to bytes today,
From Redis fields to RAM's soft hay.
TTL clocks tick, goroutines prance,
Optional caching — give it a chance!
A tiny race target joins the play.

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning: docstring coverage is 14.29%, below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.

✅ Passed checks (2 passed)
  • Description Check ✅ Passed: check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check ✅ Passed: the title 'Implement default memory cache store' accurately and concisely summarizes the main objective of the PR: making an in-memory cache the default instead of Redis.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (1)
cmd/tgfeed/main.go (1)

65-79: Memory cache Close() is not called on shutdown.

The Redis client has defer redisClient.Close() (line 77), but when using the memory cache, Close() is never called. While this is less critical for an in-memory cache (memory is released on process exit anyway), it would be more consistent to defer the close for both implementations.

🔎 Proposed fix for consistent cleanup
 	var c cache.Cache
 
 	if redisHost == "" {
 		c = cache.NewMemoryClient()
 	} else {
 		redisClient, err := cache.NewRedisClient(ctx, fmt.Sprintf("%s:6379", redisHost))
 
 		if err != nil {
 			logger.Error("Failed to connect to Redis", "error", err)
 			os.Exit(1)
 		}
 
-		defer redisClient.Close()
 		c = redisClient
 	}
+
+	defer c.Close()
📜 Review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a2c4e28 and d0284d0.

📒 Files selected for processing (8)
  • Makefile (1 hunks)
  • README.md (2 hunks)
  • cmd/tgfeed/main.go (1 hunks)
  • compose.yaml (2 hunks)
  • internal/cache/cache.go (1 hunks)
  • internal/cache/memory.go (1 hunks)
  • internal/cache/memory_test.go (1 hunks)
  • internal/cache/redis.go (0 hunks)
💤 Files with no reviewable changes (1)
  • internal/cache/redis.go
🧰 Additional context used
🧬 Code graph analysis (2)
internal/cache/memory_test.go (2)
internal/cache/memory.go (2)
  • MemoryCache (11-14)
  • NewMemoryClient (17-21)
internal/cache/cache.go (1)
  • ErrCacheMiss (10-10)
internal/cache/memory.go (1)
internal/cache/cache.go (1)
  • ErrCacheMiss (10-10)
🪛 checkmake (0.2.2)
Makefile

[warning] 2-2: Missing required phony target "all"

(minphony)


[warning] 2-2: Missing required phony target "clean"

(minphony)

🪛 LanguageTool
README.md

[style] ~68-~68: You have already used this phrasing in nearby sentences. Consider replacing it to add variety to your writing.
Context: ...comment some config values there if you want to keep cache in Redis. Otherwise it will ...

(REP_WANT_TO_VB)

🔇 Additional comments (6)
internal/cache/cache.go (1)

9-10: LGTM! Good use of a package-level sentinel error.

Centralizing ErrCacheMiss in the cache package allows both memory and Redis implementations to share the same error type, enabling consistent error handling by callers using errors.Is().

README.md (1)

11-11: Documentation updates look good.

The changes accurately reflect the new default behavior where caching is done in RAM unless Redis is explicitly configured.

Also applies to: 68-68

Makefile (1)

10-11: Good addition of the race detection target.

This is particularly valuable given the new concurrent memory cache implementation with goroutines and mutex usage.

compose.yaml (1)

8-9: Configuration changes align well with the new default behavior.

The comments are clear and provide users with easy instructions to re-enable Redis if needed.

Also applies to: 27-29, 31-50

internal/cache/memory_test.go (1)

204-239: Test logic is correct, but there's a subtle timing issue.

The defer cancel() at line 206 is redundant since cancel() is explicitly called at line 232. This is harmless but could be cleaned up.

More importantly, the test verifies that context cancellation stops the TTL goroutine, but note that this behavior means the cache entry will never be evicted (as flagged in memory.go review). Ensure this is the intended behavior.

cmd/tgfeed/main.go (1)

67-68: Good default behavior selection.

The logic correctly defaults to the in-memory cache when REDIS_HOST is not configured, aligning with the PR objective.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (1)
internal/cache/memory.go (1)

47-59: Context cancellation still prevents TTL expiration.

This issue was previously flagged: when the context is cancelled (lines 56-57), the eviction goroutine exits without deleting the cache entry. If request contexts are passed to Set, cancelled requests will leave entries in memory indefinitely, causing unbounded growth.

The previously suggested solutions remain valid—either remove the context cancellation branch so the timer always completes, or use context.Background() for the eviction goroutine.

🧹 Nitpick comments (2)
internal/cache/memory.go (2)

16-21: Consider extracting the initial capacity as a package constant.

The hardcoded capacity of 100 works fine but could be made more maintainable.

🔎 Proposed refactor
+const defaultInitialCapacity = 100
+
 // NewMemoryClient creates a new cache client
 func NewMemoryClient() *MemoryCache {
-	cache := make(map[string][]byte, 100)
+	cache := make(map[string][]byte, defaultInitialCapacity)
 
 	return &MemoryCache{cache: cache}
 }

64-71: Data race fixed, but eviction goroutines aren't stopped.

Good fix—the mutex is now properly acquired before clearing the cache, addressing the previous data race concern.

However, eviction goroutines spawned by Set will continue running and may attempt deletions after Close is called. While the mutex protects against races, consider adding a context or shutdown signal to stop ongoing eviction goroutines when the cache is closed for cleaner resource management.

📜 Review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d0284d0 and 8bbe78a.

📒 Files selected for processing (1)
  • internal/cache/memory.go (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
internal/cache/memory.go (1)
internal/cache/cache.go (1)
  • ErrCacheMiss (10-10)
🔇 Additional comments (3)
internal/cache/memory.go (3)

1-14: LGTM! Clean structure with appropriate concurrency primitives.

The package imports and struct definition are well-designed. Using sync.RWMutex is the right choice for a cache with read-heavy workloads.


23-34: LGTM! Correct concurrent read implementation.

The Get method properly uses read locking and returns the appropriate error for cache misses.


73-83: LGTM! Well-implemented testing helper.

The snapshot method correctly uses read locking and efficiently copies the cache state using maps.Copy. This is a clean testing utility.

@nDmitry nDmitry merged commit d1150d2 into main Dec 20, 2025
3 checks passed
