[Volume 5] Introduce a product-like MV and apply Redis caching to product lookups #209
kimjunyoung90 wants to merge 6 commits into Loopers-dev-lab:kimjunyoung90 from
Conversation
- Remove the likeCount field and its increment/decrement queries from the Product entity
- Add a ProductLikeCount aggregate entity (product_likes_count table)
- Compose like counts in ProductFacade by fetching them through LikeService
- Remove LikeFacade's dependency on ProductService for like-count manipulation
- Remove the separate lookup by LEFT JOINing the product_likes_count table
- Switch from JPQL to QueryDSL for type-safe dynamic queries and sorting
- Split ProductResult and ProductWithLikeCountResult to clarify their roles
📝 Walkthrough
Product's likeCount is normalized into a separate entity (ProductLikeCount), and the repository, cache, service, and facade layers are restructured so reads are served as ProductWithLikeCount results. The logic that mutated the Product entity directly on like/unlike, along with its counter-update methods, has been removed.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as Client
    participant Facade as ProductFacade
    participant Service as ProductService
    participant Cache as ProductCacheRepository
    participant RepoImpl as ProductRepositoryImpl
    participant LikeRepo as ProductLikeCountRepository
    participant DB as Database
    Client->>Facade: getProductsWithLikeCount(pageable, brandId?)
    Facade->>Service: request: products with like count
    Service->>Cache: getProductsWithLikeCount(pageable, brandId)
    alt cache hit
        Cache-->>Service: cached Page<ProductWithLikeCount>
    else cache miss
        Service->>RepoImpl: query products projection (pageable, brandId)
        RepoImpl->>LikeRepo: fetch like counts (join or separate query)
        RepoImpl->>DB: execute (QueryDSL)
        RepoImpl-->>Service: Page<ProductWithLikeCount>
        Service->>Cache: putProductsWithLikeCount(pageable, brandId, page)
    end
    Service-->>Facade: Page<ProductWithLikeCountResult>
    Facade-->>Client: Page<ProductWithBrandResult>
```
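The hit/miss branch in the diagram is the standard cache-aside pattern. A minimal in-memory sketch, with hypothetical names standing in for ProductCacheRepository (backed by Redis in the PR) and the QueryDSL repository call:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Cache-aside sketch of the sequence diagram's hit/miss flow. The Map stands
// in for Redis; the loader stands in for the QueryDSL repository query.
public class CacheAside {
    private final Map<String, String> cache = new HashMap<>();
    int dbHits = 0; // counts how often the "database" loader actually ran

    // On a hit, return the cached page; on a miss, load from the DB and cache it.
    public String getProductsPage(String key, Function<String, String> dbLoader) {
        String cached = cache.get(key);
        if (cached != null) {
            return cached;                   // cache hit: no repository call
        }
        String loaded = dbLoader.apply(key); // cache miss: query the repository
        dbHits++;
        cache.put(key, loaded);              // populate for later readers
        return loaded;
    }

    public static void main(String[] args) {
        CacheAside c = new CacheAside();
        Function<String, String> loader = k -> "page-for-" + k;
        String first = c.getProductsPage("brand=1&page=0", loader);  // miss
        String second = c.getProductsPage("brand=1&page=0", loader); // hit
        System.out.println(first.equals(second) + " dbHits=" + c.dbHits); // true dbHits=1
    }
}
```

In the real flow the cached value would be a serialized Page<ProductWithLikeCount>, and invalidation on like/unlike is a separate concern the PR must also cover.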
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 min
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
apps/commerce-api/src/main/java/com/loopers/application/like/LikeService.java (1)
27-52: ⚠️ Potential issue | 🔴 Critical: like()/unlike()/deleteLikes() never update ProductLikeCount.
Only the ProductLike table is modified; the ProductLikeCount table is never updated, so getLikeCount() either always returns 0 or keeps the value from initial creation. The like feature misbehaves in production, and the problem is directly visible to users.
🐛 Count-update logic must be added to like/unlike
```diff
 @Transactional
 public LikeResult like(Long userId, Long productId) {
     productLikeRepository.findByUserIdAndProductId(userId, productId)
         .ifPresent(like -> {
             throw new CoreException(ErrorType.CONFLICT, "이미 좋아요한 상품입니다.");
         });
     ProductLike productLike = ProductLike.builder()
         .userId(userId)
         .productId(productId)
         .build();
-    return LikeResult.from(productLikeRepository.save(productLike));
+    ProductLike saved = productLikeRepository.save(productLike);
+    productLikesCountRepository.incrementLikeCount(productId);
+    return LikeResult.from(saved);
 }

 @Transactional
 public void unlike(Long userId, Long productId) {
     ProductLike productLike = productLikeRepository.findByUserIdAndProductId(userId, productId)
         .orElseThrow(() -> new CoreException(ErrorType.NOT_FOUND, "좋아요를 찾을 수 없습니다."));
     productLikeRepository.delete(productLike);
+    productLikesCountRepository.decrementLikeCount(productId);
 }
```

For concurrency, use an atomic UPDATE query (SET like_count = like_count + 1), and verify with tests that concurrent requests do not cause a race condition.
🤖 Prompt for AI Agents
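The lost update this comment warns about can be reproduced in memory: a read-modify-write counter drops increments under concurrency, while an atomic increment (the in-process analogue of UPDATE ... SET like_count = like_count + 1) does not. A sketch, not the project's code:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Shows why the like counter needs an atomic update. `unsafeCount` does
// read-modify-write (like SELECT then UPDATE with the computed value);
// `safeCount` increments atomically (like UPDATE ... SET c = c + 1).
public class LikeCounterRace {
    static int unsafeCount = 0;
    static final AtomicInteger safeCount = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[8];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 10_000; j++) {
                    unsafeCount++;               // can lose updates under contention
                    safeCount.incrementAndGet(); // never loses an update
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        // safeCount is always 80000; unsafeCount is typically lower.
        System.out.println("unsafe=" + unsafeCount + " safe=" + safeCount.get());
    }
}
```

The same reasoning applies at the database: two transactions reading the same like_count and writing back count+1 persist only one of the two likes.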
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/application/like/LikeService.java` around lines 27 - 52, The like/unlike/deleteLikes flows update only ProductLike but never update ProductLikeCount, causing getLikeCount() to be stale; modify like(Long userId, Long productId), unlike(Long userId, Long productId) and the deleteLikes flow to atomically update the counter in ProductLikeCount using repository-level atomic updates (e.g., add methods like incrementLikeCountByProductId(productId) and decrementLikeCountByProductId(productId) on ProductLikeCountRepository that execute SET like_count = like_count + 1 / -1), ensure you create or initialize a ProductLikeCount row if missing (upsert) before incrementing, keep the operations in the same `@Transactional` boundary as ProductLike save/delete, and add concurrency tests that send concurrent like/unlike requests to verify no race conditions.
apps/commerce-api/src/main/java/com/loopers/application/product/result/ProductResult.java (1)
7-27: ⚠️ Potential issue | 🔴 Critical: Compilation fails because tests call the removed ProductResult constructor.
Removing likeCount from the ProductResult record fits the normalization, and the response DTOs (ProductCreateResponse, ProductDetailResponse) are correctly built without likeCount. The test code, however, still uses the old constructor signature.
ProductResult now has only 7 fields, but the tests instantiate it with 8 arguments. The sixth argument (0) must be removed from every constructor call.
Affected files:
- ProductAdminV1ControllerTest.java (lines 61, 109, 134, 156)
- OrderFacadeTest.java (lines 50, 144, 171)
- ProductFacadeTest.java (lines 38, 66)
Change every `new ProductResult(1L, 1L, "상품명", price, stock, 0, now, now)` to `new ProductResult(1L, 1L, "상품명", price, stock, now, now)`, then run the test suite and confirm all test cases pass.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/application/product/result/ProductResult.java` around lines 7 - 27, Tests are failing because ProductResult's record signature no longer includes likeCount (it has 7 fields), but several tests still call the old 8-arg constructor; update all instantiations of ProductResult in the test classes (ProductAdminV1ControllerTest, OrderFacadeTest, ProductFacadeTest) by removing the obsolete sixth argument (the 0) so each call uses new ProductResult(1L, 1L, "상품명", price, stock, now, now) matching the ProductResult record, then re-run the test suite to verify all tests pass.
apps/commerce-api/src/main/java/com/loopers/application/product/ProductFacade.java (1)
35-42: ⚠️ Potential issue | 🟠 Major: Looking brands up per item in the list query adds N extra calls per page.
The current implementation fetches a page of products and then calls brandService.getBrand(...) for every item. In production this adds as many service/repository calls as the page size, inflating latency, and even when a brandId filter is present the same brand is fetched repeatedly. When brandId != null, fetch the brand once and reuse it; for the unfiltered list, collect the brand IDs and fetch them in one batch, or project the brand name in the product query itself. Add tests verifying that a brandId-filtered list calls the brand service exactly once, and that a page with mixed brands resolves with a single batch lookup.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/application/product/ProductFacade.java` around lines 35 - 42, The getProducts method in ProductFacade currently calls brandService.getBrand(...) for each product causing N additional calls; change it so when brandId != null you fetch the Brand once (call brandService.getBrand(brandId) and reuse its name), and when brandId is null collect unique brandIds from the Page<ProductWithLikeCountResult> and call a batch method (e.g., brandService.getBrandsByIds or add one) to retrieve all BrandResults in one call and map them by id before calling ProductWithBrandResult.from; update or add tests to assert that for brandId-present queries the brand service is called exactly once and that mixed-brand pages resolve via a single batch call.
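The batching this comment asks for amounts to: collect distinct brand IDs from the page, resolve them in one call, then map. A sketch with simplified types; `fetchBrands` stands in for a hypothetical brandService.getBrandsByIds(...):

```java
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.function.Function;
import java.util.stream.Collectors;

// Resolves brand names with one batch call instead of one call per product.
public class BrandBatchLookup {
    record Product(long id, long brandId, String name) {}
    record ProductWithBrand(long id, String name, String brandName) {}

    static int batchCalls = 0;

    static List<ProductWithBrand> withBrands(List<Product> page,
                                             Function<Set<Long>, Map<Long, String>> fetchBrands) {
        // 1) collect distinct brand ids from the page
        Set<Long> brandIds = page.stream().map(Product::brandId).collect(Collectors.toSet());
        // 2) resolve all brands in a single batch call
        Map<Long, String> names = fetchBrands.apply(brandIds);
        // 3) map each product using the pre-fetched names
        return page.stream()
                .map(p -> new ProductWithBrand(p.id(), p.name(), names.get(p.brandId())))
                .toList();
    }

    public static void main(String[] args) {
        List<Product> page = List.of(new Product(1, 10, "a"), new Product(2, 10, "b"),
                new Product(3, 20, "c"));
        List<ProductWithBrand> out = withBrands(page, ids -> {
            batchCalls++; // the fake brand service records how often it is called
            return ids.stream().collect(Collectors.toMap(id -> id, id -> "brand-" + id));
        });
        System.out.println(out.size() + " products, " + batchCalls + " batch call(s)");
    }
}
```

The assertion the review requests (exactly one brand-service call per page) falls out of this shape naturally, since the only call site is step 2.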
🧹 Nitpick comments (3)
apps/commerce-api/src/main/java/com/loopers/infrastructure/like/ProductLikeCountJpaRepository.java (1)
9-14: Watch the IN-clause size in findByProductIdIn.
When a large productIds list is passed, the IN clause balloons and performance suffers. Oracle caps IN lists at 1,000 elements, and query-parsing cost grows on MySQL/PostgreSQL as well. Consider batching at the call site, or, if writing the query directly with @Query, tune the fetch size via @QueryHints.
🤖 Prompt for AI Agents
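Call-site batching can be as simple as partitioning the id list into chunks of at most 1,000 and merging the results. A sketch (the 1,000 cap is Oracle's; tune the chunk size for your database), where `queryChunk` stands in for findByProductIdIn on one chunk:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// Partitions ids into IN-clause-sized chunks and aggregates the results.
public class InClauseBatcher {
    static <T> List<T> findInBatches(List<Long> ids, int chunkSize,
                                     Function<List<Long>, List<T>> queryChunk) {
        List<T> out = new ArrayList<>();
        for (int i = 0; i < ids.size(); i += chunkSize) {
            List<Long> chunk = ids.subList(i, Math.min(i + chunkSize, ids.size()));
            out.addAll(queryChunk.apply(chunk)); // one query per <= chunkSize ids
        }
        return out;
    }

    public static void main(String[] args) {
        List<Long> ids = new ArrayList<>();
        for (long i = 0; i < 2500; i++) ids.add(i);
        // Fake query that echoes the chunk back: 2500 ids split into 3 queries.
        List<Long> result = findInBatches(ids, 1000, chunk -> chunk);
        System.out.println(result.size()); // 2500
    }
}
```

Note the results of chunked queries are concatenated, so any ordering guarantee must be re-applied after merging.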
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/infrastructure/like/ProductLikeCountJpaRepository.java` around lines 9 - 14, The repository method findByProductIdIn on ProductLikeCountJpaRepository can generate an excessively large SQL IN clause (and hits Oracle's 1000-item limit) when callers pass many productIds; fix by batching the incoming productIds at the call site into smaller chunks (e.g., <=1000) and aggregating results, or replace direct use with a repository query that pages/streams results (implement a custom method or `@Query` with pagination/streaming and `@QueryHints`) to avoid huge IN lists—update callers that currently call findByProductIdIn to use the batching or the new repository method.
apps/commerce-api/src/main/java/com/loopers/application/like/LikeService.java (1)
59-73: The getLikeCount/getLikeCounts read logic is sound.
readOnly = true optimizes the reads, and the Optional/Stream handling is readable. However, findByProductIdIn fires an unnecessary query when handed an empty list, so consider adding a guard.
♻️ Empty-list guard

```diff
 @Transactional(readOnly = true)
 public Map<Long, Integer> getLikeCounts(List<Long> productIds) {
+    if (productIds == null || productIds.isEmpty()) {
+        return Map.of();
+    }
     return productLikesCountRepository.findByProductIdIn(productIds).stream()
         .collect(Collectors.toMap(
             ProductLikeCount::getProductId,
             ProductLikeCount::getLikeCount
         ));
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/application/like/LikeService.java` around lines 59 - 73, The getLikeCounts method should avoid calling productLikesCountRepository.findByProductIdIn when productIds is null or empty to prevent unnecessary queries; update getLikeCounts to first check for null/empty productIds and return an empty Map immediately, otherwise proceed to call productLikesCountRepository.findByProductIdIn(productIds) and collect into a Map using ProductLikeCount::getProductId and ProductLikeCount::getLikeCount (keep getLikeCount unchanged).
apps/commerce-api/src/main/java/com/loopers/domain/product/ProductWithLikeCount.java (1)
5-14: ProductWithLikeCount is safer as a read-only projection than as a domain model.
This type holds the result of joining Product with product_likes_count, i.e. a read model, yet placing it under domain/product blurs the entity/value-object/DTO boundary. A change to a read field then ripples into the domain contract, needlessly widening the regression surface. It is safer to move it into a query-only package or a projection type, keeping domain rules separate from the read model. As a regression test, verify that the service/facade still produces the existing response schema with the new type. As per coding guidelines: keep entity/value-object/DTO boundaries explicit, and separate domain rules from infrastructure concerns where they mix.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/domain/product/ProductWithLikeCount.java` around lines 5 - 14, ProductWithLikeCount is currently living under the domain layer but is a read-only projection (join of Product and product_likes_count); move the record out of the domain package into a dedicated read/projection package (e.g., a new projection or dto package) and update references to ProductWithLikeCount accordingly; ensure service/facade methods that returned the old domain-embedded type are updated to return the new projection type and add a regression test verifying the public response shape remains identical (test the service/facade that produces ProductWithLikeCount results).
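Moved out of the domain package, the read model can stay a plain record. A sketch with a hypothetical package placement (e.g., com.loopers.application.product.query), not the project's actual layout:

```java
// A read-only projection for the Product + product_likes_count join.
// Records are immutable value carriers, which suits query projections:
// no identity, no lifecycle, no domain behavior.
public record ProductWithLikeCountProjection(
        Long id, Long brandId, String name, long price, int stock, int likeCount) {

    public static void main(String[] args) {
        var p = new ProductWithLikeCountProjection(1L, 10L, "shirt", 19_900L, 5, 42);
        System.out.println(p.likeCount()); // 42
    }
}
```

QueryDSL's Projections.constructor(...) can target this record's canonical constructor exactly as it targets the current class.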
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@apps/commerce-api/src/main/java/com/loopers/application/like/LikeFacade.java`:
- Around line 14-22: Removing ProductService left ProductLikeCount update
unimplemented so likes don't change; add explicit increment/decrement of
ProductLikeCount inside the transactional flow: either extend
LikeService.like/unlike to update the ProductLikeCount entity (or add new
methods like incrementLikeCount(Long productId) and decrementLikeCount(Long
productId)) and invoke them from LikeFacade.like/unlike (or call the updated
LikeService methods), ensuring the operations run within the same `@Transactional`
boundary and handle creating the ProductLikeCount row if missing and preventing
negative counts on unlike.
In
`@apps/commerce-api/src/main/java/com/loopers/domain/like/ProductLikeCount.java`:
- Around line 18-42: ProductLikeCount currently lacks methods to mutate
likeCount and is never updated by LikeService, so implement mutators and persist
changes: add increment() and decrement() instance methods to ProductLikeCount
that validate bounds (reusing guard() logic), add a save(ProductLikeCount)
method to ProductLikeCountRepository (or expose JpaRepository.save) so the
service can persist changes, then update LikeService.like() to create
ProductLike and atomically increment the corresponding ProductLikeCount (and
create it if absent) and update LikeService.unlike() to delete ProductLike and
atomically decrement the ProductLikeCount; for concurrency, use optimistic
locking on ProductLikeCount (add a version field/@Version) or implement
repository-level atomic UPDATE queries (e.g., increment/decrement by SQL) to
avoid lost updates.
In
`@apps/commerce-api/src/main/java/com/loopers/domain/like/ProductLikeCountRepository.java`:
- Around line 6-11: ProductLikeCountRepository currently only supports reads
(findByProductId, findByProductIdIn) so callers cannot create or atomically
update ProductLikeCount; add write methods such as save(ProductLikeCount) or
saveOrUpdate(ProductLikeCount) and atomic increment/decrement operations named
incrementLikeCount(Long productId) and decrementLikeCount(Long productId), and
implement those increment/decrement methods in the repository implementation
using a single atomic SQL update (e.g. UPDATE product_likes_count SET like_count
= like_count + 1 WHERE product_id = ?) to prevent race conditions under
concurrent like/unlike requests.
In
`@apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductRepositoryImpl.java`:
- Around line 143-155: getOrderSpecifiers currently returns no stable fallback
and can produce unstable pagination; update getOrderSpecifiers(Pageable
pageable, QProduct p, QProductLikeCount plc) to always include a default sort
(e.g., createdAt DESC) when pageable has no sort and to always append a
deterministic tie-breaker OrderSpecifier using the primary key (e.g., p.id) as
the last specifier; ensure the mapping for each incoming order (like ->
plc.likeCount.coalesce(0), price -> p.price, name -> p.name, createdAt ->
p.createdAt) remains unchanged but that after converting pageable.getSort() to
OrderSpecifier[] you append the default createdAt DESC when needed and then
append p.id (ASC or DESC consistent, typically ASC) so pagination is stable.
- Around line 63-141: ProductRepositoryImpl's methods
(findWithLikeCountByIdAndDeletedAtIsNull, findAllWithLikeCountByDeletedAtIsNull,
findAllWithLikeCountByBrandIdAndDeletedAtIsNull) now read like counts from
product_like_count but LikeService.like()/unlike() only mutates ProductLike and
does not update ProductLikeCountRepository, causing stale/zero counts; either
restore a consistent write path by updating/upserting/incrementing/decrementing
ProductLikeCount inside the same transaction as LikeService.like() and
LikeService.unlike() (use ProductLikeCountRepository within those service
methods and ensure transactional boundaries include both ProductLike and
ProductLikeCount changes), or revert the repository queries to compute likeCount
by counting ProductLike (e.g., join/subquery against ProductLike) until a
write-path is implemented; add integration tests asserting that like()/unlike()
immediately reflect in getProductWithLikeCount() and list sorting to validate
correctness.
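The tie-breaker the getOrderSpecifiers prompt describes matters because pagination is only deterministic when the sort is total. The in-memory equivalent is a comparator with the primary key appended last; QueryDSL's OrderSpecifier chain follows the same idea:

```java
import java.util.Comparator;
import java.util.List;
import java.util.stream.Stream;

// Shows why appending a primary-key tie-breaker makes paging deterministic:
// rows with equal likeCount would otherwise appear in arbitrary order
// and could repeat or vanish across pages.
public class StableOrdering {
    record Row(long id, int likeCount) {}

    // likeCount DESC, then id ASC as the deterministic tie-breaker
    static final Comparator<Row> BY_LIKES_STABLE =
            Comparator.comparingInt(Row::likeCount).reversed()
                      .thenComparingLong(Row::id);

    public static void main(String[] args) {
        List<Row> rows = Stream.of(new Row(3, 5), new Row(1, 5), new Row(2, 9))
                .sorted(BY_LIKES_STABLE)
                .toList();
        // id 2 first (9 likes), then the tie on 5 likes resolved by id: 1, 3
        rows.forEach(r -> System.out.println(r.id()));
    }
}
```

In SQL terms this corresponds to always ending the ORDER BY with the product's primary key, whatever sort the caller requested.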
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 3a92d645-7448-451c-b1e2-76db4ea70a2e
📒 Files selected for processing (16)
- apps/commerce-api/src/main/java/com/loopers/application/like/LikeFacade.java
- apps/commerce-api/src/main/java/com/loopers/application/like/LikeService.java
- apps/commerce-api/src/main/java/com/loopers/application/product/ProductFacade.java
- apps/commerce-api/src/main/java/com/loopers/application/product/ProductService.java
- apps/commerce-api/src/main/java/com/loopers/application/product/result/ProductResult.java
- apps/commerce-api/src/main/java/com/loopers/application/product/result/ProductWithBrandResult.java
- apps/commerce-api/src/main/java/com/loopers/application/product/result/ProductWithLikeCountResult.java
- apps/commerce-api/src/main/java/com/loopers/domain/like/ProductLikeCount.java
- apps/commerce-api/src/main/java/com/loopers/domain/like/ProductLikeCountRepository.java
- apps/commerce-api/src/main/java/com/loopers/domain/product/Product.java
- apps/commerce-api/src/main/java/com/loopers/domain/product/ProductRepository.java
- apps/commerce-api/src/main/java/com/loopers/domain/product/ProductWithLikeCount.java
- apps/commerce-api/src/main/java/com/loopers/infrastructure/like/ProductLikeCountJpaRepository.java
- apps/commerce-api/src/main/java/com/loopers/infrastructure/like/ProductLikeCountRepositoryImpl.java
- apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductJpaRepository.java
- apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductRepositoryImpl.java
💤 Files with no reviewable changes (2)
- apps/commerce-api/src/main/java/com/loopers/domain/product/Product.java
- apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductJpaRepository.java
```diff
 @Transactional
 public LikeResult like(Long userId, Long productId) {
-    LikeResult result = likeService.like(userId, productId);
-    productService.incrementLikeCount(productId);
-    return result;
+    return likeService.like(userId, productId);
 }

 @Transactional
 public void unlike(Long userId, Long productId) {
     likeService.unlike(userId, productId);
-    productService.decrementLikeCount(productId);
 }
```
Removing the ProductService dependency lowers coupling.
However, the like-count update logic that previously went through ProductService was removed without a replacement. There is no increment/decrement logic for the ProductLikeCount entity, so like counts are never updated. The responsibility for updating the count must be made explicit in LikeFacade or LikeService.
```java
public class ProductLikeCount extends BaseEntity {

    @Column(name = "product_id", nullable = false, unique = true)
    private Long productId;

    @Column(name = "like_count", nullable = false)
    private int likeCount;

    @Builder
    private ProductLikeCount(Long productId, int likeCount) {
        this.productId = productId;
        this.likeCount = likeCount;
        guard();
    }

    @Override
    protected void guard() {
        if (productId == null) {
            throw new CoreException(ErrorType.BAD_REQUEST, "상품 ID는 필수입니다.");
        }
        if (likeCount < 0) {
            throw new CoreException(ErrorType.BAD_REQUEST, "좋아요 수는 0 이상이어야 합니다.");
        }
    }
}
```
🧩 Analysis chain
🏁 Scripts executed:

```bash
#!/bin/bash
# Check whether ProductLikeCount is ever created or updated
rg -n "ProductLikeCount" --type java -C 5 | rg -i "save|update|increment|decrement"

# Check whether ProductLikeCountRepository declares a save method
ast-grep --pattern $'interface ProductLikeCountRepository {
  $$$
  save($$$)
  $$$
}'

# Locate the LikeService class and its like/unlike methods
fd -e java | xargs rg -l "LikeService|class.*Service.*Like"

# Find the ProductLikeCountRepository definition
fd -e java | xargs rg -l "ProductLikeCountRepository" | head -5

# Inspect all ProductLikeCount usage with full context
rg -n "ProductLikeCount" --type java -A 10 -B 2 | head -100

# Review LikeService.java in full
cat apps/commerce-api/src/main/java/com/loopers/application/like/LikeService.java

# Review the ProductLikeCountRepository domain interface
cat apps/commerce-api/src/main/java/com/loopers/domain/like/ProductLikeCountRepository.java

# Review ProductLikeCountRepositoryImpl.java
cat apps/commerce-api/src/main/java/com/loopers/infrastructure/like/ProductLikeCountRepositoryImpl.java

# Check whether any event listener or scheduler mutates ProductLikeCount
rg "ProductLikeCount" --type java -l | xargs rg -l "Listener|Event|Scheduled|@PostPersist|@PostUpdate|@PrePersist"

# Re-check whether LikeService touches productLikesCountRepository
rg "productLikesCountRepository\." apps/commerce-api/src/main/java/com/loopers/application/like/LikeService.java

# Check whether ProductLikeCount exposes increment/decrement methods
cat apps/commerce-api/src/main/java/com/loopers/domain/like/ProductLikeCount.java
```
ProductLikeCount has no methods to change likeCount, so the like feature cannot work correctly.
In the current design, ProductLike stores each user's individual like record and ProductLikeCount stores the per-product total. But LikeService.like() only creates a ProductLike and never updates ProductLikeCount, and unlike() only deletes the ProductLike. The ProductLikeCount entity has no increment/decrement methods, and the ProductLikeCountRepository interface does not expose save(), so there is no way to modify the count after initial creation. As a result, the like count shown to users stays at its initial value no matter how many like/unlike operations occur.
Required fixes:
- Add increment()/decrement() methods to ProductLikeCount
- Expose save() on the ProductLikeCountRepository interface
- Increment the count in LikeService.like() and decrement it in unlike()
- Handle concurrency: introduce optimistic locking or atomic UPDATE queries
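The increment()/decrement() mutators this list calls for could look like the following. This is a sketch, not the project's BaseEntity/guard/CoreException contract; plain exceptions are used so it is self-contained, and the bound reuses the same "count >= 0" rule as guard():

```java
// Sketch of the mutators requested for ProductLikeCount.
public class LikeCount {
    private final long productId;
    private int likeCount;

    public LikeCount(long productId, int likeCount) {
        if (likeCount < 0) throw new IllegalArgumentException("like count must be >= 0");
        this.productId = productId;
        this.likeCount = likeCount;
    }

    public void increment() {
        likeCount++;
    }

    public void decrement() {
        if (likeCount == 0) {
            // same bound as guard(): the count may never go negative
            throw new IllegalStateException("like count must be >= 0");
        }
        likeCount--;
    }

    public int getLikeCount() { return likeCount; }

    public static void main(String[] args) {
        LikeCount c = new LikeCount(1L, 0);
        c.increment();
        c.increment();
        c.decrement();
        System.out.println(c.getLikeCount()); // 1
    }
}
```

Entity-level mutators alone do not solve the concurrency point above: they still need optimistic locking (@Version) or a repository-level atomic UPDATE, as the review states.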
```java
public interface ProductLikeCountRepository {

    Optional<ProductLikeCount> findByProductId(Long productId);

    List<ProductLikeCount> findByProductIdIn(List<Long> productIds);
}
```
ProductLikeCountRepository has no write methods.
The interface currently supports reads only, so ProductLikeCount records can be neither created nor modified. To update the count on like/unlike, at minimum the following are needed:
- save(ProductLikeCount) or saveOrUpdate()
- Atomic, concurrency-aware increments: incrementLikeCount(Long productId), decrementLikeCount(Long productId)
🐛 Suggested interface extension

```diff
 public interface ProductLikeCountRepository {
     Optional<ProductLikeCount> findByProductId(Long productId);
     List<ProductLikeCount> findByProductIdIn(List<Long> productIds);
+
+    ProductLikeCount save(ProductLikeCount productLikeCount);
+
+    void incrementLikeCount(Long productId);
+
+    void decrementLikeCount(Long productId);
 }
```

To prevent race conditions under concurrent like requests, use an atomic query of the form UPDATE product_likes_count SET like_count = like_count + 1 WHERE product_id = ?.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```java
public interface ProductLikeCountRepository {
    Optional<ProductLikeCount> findByProductId(Long productId);
    List<ProductLikeCount> findByProductIdIn(List<Long> productIds);
    ProductLikeCount save(ProductLikeCount productLikeCount);
    void incrementLikeCount(Long productId);
    void decrementLikeCount(Long productId);
}
```
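The "create the row if missing, then increment atomically" requirement is an upsert. In-process, ConcurrentHashMap.merge has the same semantics, which makes it a useful mental model for the SQL the review asks for (an INSERT-if-absent plus UPDATE ... SET like_count = like_count + 1). A sketch, not the repository implementation:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// In-memory analogue of the atomic upsert-increment the review asks the
// repository to perform in SQL: merge() creates the entry if absent and
// applies the update atomically.
public class LikeCountStore {
    private final ConcurrentMap<Long, Integer> counts = new ConcurrentHashMap<>();

    public void incrementLikeCount(long productId) {
        counts.merge(productId, 1, Integer::sum); // upsert + atomic increment
    }

    public void decrementLikeCount(long productId) {
        // floor at zero, matching the entity's "never negative" guard
        counts.merge(productId, 0, (cur, zero) -> Math.max(0, cur - 1));
    }

    public int get(long productId) {
        return counts.getOrDefault(productId, 0);
    }

    public static void main(String[] args) {
        LikeCountStore store = new LikeCountStore();
        store.incrementLikeCount(1L); // row created on the first like
        store.incrementLikeCount(1L);
        store.decrementLikeCount(1L);
        System.out.println(store.get(1L)); // 1
    }
}
```

At the database the equivalent is vendor-specific (e.g., MySQL's INSERT ... ON DUPLICATE KEY UPDATE), so the concrete statement belongs in the repository implementation, not the domain interface.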
```java
public Optional<ProductWithLikeCount> findWithLikeCountByIdAndDeletedAtIsNull(Long productId) {
    QProduct p = QProduct.product;
    QProductLikeCount plc = QProductLikeCount.productLikeCount;

    ProductWithLikeCount result = queryFactory
        .select(Projections.constructor(ProductWithLikeCount.class,
            p.id, p.brandId, p.name, p.price, p.stock,
            plc.likeCount.coalesce(0), p.createdAt, p.updatedAt
        ))
        .from(p)
        .leftJoin(plc).on(p.id.eq(plc.productId))
        .where(
            p.id.eq(productId),
            p.deletedAt.isNull()
        )
        .fetchOne();

    return Optional.ofNullable(result);
}

@Override
public Page<ProductWithLikeCount> findAllWithLikeCountByDeletedAtIsNull(Pageable pageable) {
    QProduct p = QProduct.product;
    QProductLikeCount plc = QProductLikeCount.productLikeCount;

    List<ProductWithLikeCount> content = queryFactory
        .select(Projections.constructor(ProductWithLikeCount.class,
            p.id, p.brandId, p.name, p.price, p.stock,
            plc.likeCount.coalesce(0), p.createdAt, p.updatedAt
        ))
        .from(p)
        .leftJoin(plc).on(p.id.eq(plc.productId))
        .where(p.deletedAt.isNull())
        .offset(pageable.getOffset())
        .limit(pageable.getPageSize())
        .orderBy(getOrderSpecifiers(pageable, p, plc))
        .fetch();

    Long total = queryFactory
        .select(p.count())
        .from(p)
        .where(p.deletedAt.isNull())
        .fetchOne();

    return new PageImpl<>(content, pageable, total != null ? total : 0L);
}

@Override
public Page<ProductWithLikeCount> findAllWithLikeCountByBrandIdAndDeletedAtIsNull(Long brandId, Pageable pageable) {
    QProduct p = QProduct.product;
    QProductLikeCount plc = QProductLikeCount.productLikeCount;

    List<ProductWithLikeCount> content = queryFactory
        .select(Projections.constructor(ProductWithLikeCount.class,
            p.id, p.brandId, p.name, p.price, p.stock,
            plc.likeCount.coalesce(0), p.createdAt, p.updatedAt
        ))
        .from(p)
        .leftJoin(plc).on(p.id.eq(plc.productId))
        .where(
            p.brandId.eq(brandId),
            p.deletedAt.isNull()
        )
        .offset(pageable.getOffset())
        .limit(pageable.getPageSize())
        .orderBy(getOrderSpecifiers(pageable, p, plc))
        .fetch();

    Long total = queryFactory
        .select(p.count())
        .from(p)
        .where(
            p.brandId.eq(brandId),
            p.deletedAt.isNull()
        )
        .fetchOne();

    return new PageImpl<>(content, pageable, total != null ? total : 0L);
}
```
The like-count read path changed, but there is no aggregate write path, so the value can stay frozen.
Starting with this implementation, the detail/list likeCount is read via a LEFT JOIN on product_likes_count. However, in the provided apps/commerce-api/src/main/java/com/loopers/application/like/LikeService.java:28-52, like()/unlike() only save and delete ProductLike and never update ProductLikeCountRepository. If this ships as-is, a user's like leaves the detail/list/sort likeCount at 0 or a stale value, so the UI and sort results immediately diverge. Either restore a write path that upserts/increments/decrements ProductLikeCount inside the like/unlike transaction, or, until then, it is safer to keep reading counts from the source likes table. As additional tests, integration tests should verify that getProductWithLikeCount() and list sorting increase immediately after a like and decrease immediately after an unlike.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductRepositoryImpl.java`
around lines 63 - 141, ProductRepositoryImpl's methods
(findWithLikeCountByIdAndDeletedAtIsNull, findAllWithLikeCountByDeletedAtIsNull,
findAllWithLikeCountByBrandIdAndDeletedAtIsNull) now read like counts from
product_like_count but LikeService.like()/unlike() only mutates ProductLike and
does not update ProductLikeCountRepository, causing stale/zero counts; either
restore a consistent write path by updating/upserting/incrementing/decrementing
ProductLikeCount inside the same transaction as LikeService.like() and
LikeService.unlike() (use ProductLikeCountRepository within those service
methods and ensure transactional boundaries include both ProductLike and
ProductLikeCount changes), or revert the repository queries to compute likeCount
by counting ProductLike (e.g., join/subquery against ProductLike) until a
write-path is implemented; add integration tests asserting that like()/unlike()
immediately reflect in getProductWithLikeCount() and list sorting to validate
correctness.
private OrderSpecifier<?>[] getOrderSpecifiers(Pageable pageable, QProduct p, QProductLikeCount plc) {
    return pageable.getSort().stream()
        .map(order -> {
            Order direction = order.isAscending() ? Order.ASC : Order.DESC;
            return switch (order.getProperty()) {
                case "like" -> new OrderSpecifier<>(direction, plc.likeCount.coalesce(0));
                case "price" -> new OrderSpecifier<>(direction, p.price);
                case "name" -> new OrderSpecifier<>(direction, p.name);
                case "createdAt" -> new OrderSpecifier<>(direction, p.createdAt);
                default -> new OrderSpecifier<>(direction, p.createdAt);
            };
        })
        .toArray(OrderSpecifier[]::new);
Pagination boundaries are unstable because there is no secondary sort key.
Currently, if the pageable is unsorted, the ORDER BY clause is omitted entirely, and sorting on tie-prone columns such as like/price/name has no tie-breaker. In production this can cause the same product to reappear on the next page or some products to be skipped. It is safer to always apply a default sort (e.g., createdAt DESC) and to append a stable id secondary key even when the user supplies a sort. Additional tests should verify that there are no duplicates or omissions between pages 1 and 2 for unsorted requests and for datasets with many identical likeCount values.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductRepositoryImpl.java`
around lines 143 - 155, getOrderSpecifiers currently returns no stable fallback
and can produce unstable pagination; update getOrderSpecifiers(Pageable
pageable, QProduct p, QProductLikeCount plc) to always include a default sort
(e.g., createdAt DESC) when pageable has no sort and to always append a
deterministic tie-breaker OrderSpecifier using the primary key (e.g., p.id) as
the last specifier; ensure the mapping for each incoming order (like ->
plc.likeCount.coalesce(0), price -> p.price, name -> p.name, createdAt ->
p.createdAt) remains unchanged but that after converting pageable.getSort() to
OrderSpecifier[] you append the default createdAt DESC when needed and then
append p.id (ASC or DESC consistent, typically ASC) so pagination is stable.
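Under the assumptions in the prompt above, the fix could look like the following sketch. Names mirror the quoted code; the default-sort and tie-breaker choices (createdAt DESC, id ASC) are illustrative.

```java
private OrderSpecifier<?>[] getOrderSpecifiers(Pageable pageable, QProduct p, QProductLikeCount plc) {
    List<OrderSpecifier<?>> specifiers = new ArrayList<>();
    for (Sort.Order order : pageable.getSort()) {
        Order direction = order.isAscending() ? Order.ASC : Order.DESC;
        specifiers.add(switch (order.getProperty()) {
            case "like" -> new OrderSpecifier<>(direction, plc.likeCount.coalesce(0));
            case "price" -> new OrderSpecifier<>(direction, p.price);
            case "name" -> new OrderSpecifier<>(direction, p.name);
            default -> new OrderSpecifier<>(direction, p.createdAt);
        });
    }
    if (specifiers.isEmpty()) {
        specifiers.add(p.createdAt.desc()); // default sort for unsorted requests
    }
    specifiers.add(p.id.asc()); // deterministic tie-breaker keeps page boundaries stable
    return specifiers.toArray(OrderSpecifier[]::new);
}
```

Appending the primary key last means the user-specified order still dominates; the id only decides rows that compare equal on every requested column.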
- Define single-item and list cache methods on the ProductCacheRepository interface
- Implement a Redis-based cache-aside pattern in ProductCacheRepositoryImpl
- Apply cache-first lookups to ProductService read methods and add cache invalidation to mutation methods
- Document the cache-key naming convention and invalidation strategy in CLAUDE.md

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
apps/commerce-api/src/main/java/com/loopers/application/product/ProductService.java (1)
1-155: ⚠️ Potential issue | 🔴 Critical
The logic that updates ProductLikeCount when likes are added or removed in LikeService is missing.
LikeService.like() saves a ProductLike but never increments ProductLikeCount, and unlike() never decrements it when deleting a like. ProductLikeCountRepository has no increment/decrement methods, and nothing in the current code modifies ProductLikeCount, so the like-count metric becomes inaccurate.
Suggested fix: add increment(Long productId) and decrement(Long productId) to the ProductLikeCountRepository interface; in LikeService.like(), call productLikesCountRepository.increment(productId) after saving the ProductLike; in LikeService.unlike(), call productLikesCountRepository.decrement(productId) after deleting it.
Additional tests:
- Verify that ProductLikeCount.likeCount increases after a like is registered.
- Verify that ProductLikeCount.likeCount decreases after a like is cancelled.
- Verify that a ProductLikeCount is created with an initial value of 1 when liking a product that has no row yet.
Also required: update the assertions in LikeFacadeTest. The current verify(productService).incrementLikeCount(productId) check is obsolete and should be replaced with an integration test that verifies the like-count change through LikeService.
LikeFacadeTest의 테스트 어설션을 갱신한다. 현재verify(productService).incrementLikeCount(productId)검증은 불필요하며, 대신LikeService에서 좋아요 수 변경을 검증하는 통합 테스트로 변경해야 한다.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/application/product/ProductService.java` around lines 1 - 155, Like counts are never updated because ProductLikeCountRepository lacks increment/decrement and LikeService.like()/unlike() don't call them; add increment(Long productId) and decrement(Long productId) to ProductLikeCountRepository, implement semantics to create a ProductLikeCount with count=1 if missing on increment, and decrement (or handle floor at 0) on decrement; then call productLikeCountRepository.increment(productId) immediately after persisting a ProductLike in LikeService.like(), and call productLikeCountRepository.decrement(productId) after removing a ProductLike in LikeService.unlike(); finally update LikeFacadeTest assertions to verify the repository/state change (or add integration tests) instead of the removed verify(productService).
🧹 Nitpick comments (3)
apps/commerce-api/src/main/java/com/loopers/application/product/ProductService.java (2)
44-51: The cache lookup-and-store logic is duplicated.
getProduct, getProductWithLikeCount, getProducts, and getProductsWithLikeCount all repeat the same pattern (check cache → on miss, load from DB → store in cache → return). This violates DRY, and any change to the cache logic has to be made in several places.
Suggested fix: extract the cache read-through pattern into a shared helper method, or consider switching to @Cacheable-based caching.
♻️ Shared helper method example
private <T> T getFromCacheOrLoad(
        Supplier<Optional<T>> cacheGetter,
        Supplier<T> dbLoader,
        Consumer<T> cachePutter) {
    return cacheGetter.get()
        .orElseGet(() -> {
            T data = dbLoader.get();
            cachePutter.accept(data);
            return data;
        });
}

Also applies to: 56-64, 68-75, 79-86, 90-97
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/application/product/ProductService.java` around lines 44 - 51, Extract the repeated cache-read / DB-load / cache-write pattern used in getProduct, getProductWithLikeCount, getProducts, and getProductsWithLikeCount into a single reusable helper (e.g., a generic method like getFromCacheOrLoad) that accepts a cache getter Supplier<Optional<T>>, a DB loader Supplier<T> (or Supplier<Optional<T>> if you need exception handling), and a cache putter Consumer<T>; replace each method's inline lambda block with a call to that helper (or alternatively annotate the read methods with `@Cacheable` and remove manual cache logic) so the logic in ProductService is centralized and DRY.
146-154: deleteProducts performs an individual cache eviction for each of N products. When a brand has many products, this issues N DELETE requests to Redis. Using a pipeline or a batched delete would reduce network round trips.
This can be deferred if it is not a problem at the current operating scale, but optimization should be considered if bulk-delete scenarios exist.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/application/product/ProductService.java` around lines 146 - 154, deleteProducts currently calls productRepository.findAllByBrandId and then loops calling product.delete() and productCacheRepository.evictProduct(product.getId()) for each product, causing N separate Redis DELETEs; instead collect the product IDs and call a single batched eviction (e.g., productCacheRepository.evictProductsByIds(Collection<Long> ids) or a pipeline-based evictProducts) to perform a multi-key DEL or pipelined deletes in one roundtrip, then call evictAllProductsCache only if still required; add/implement evictProductsByIds in productCacheRepository and replace the per-item evictProduct calls in deleteProducts with a single batched call while keeping product.delete() logic unchanged.apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductCacheRepositoryImpl.java (1)
171: Storing Product entities directly in the CachedProductPage record can cause problems outside the persistence context.
A Product restored from the cache is a detached object that does not belong to the persistence context: lazy loading and dirty checking do not work on it, and accidentally modifying and saving it can behave unexpectedly.
Since the current code uses it only for reads, there is no immediate problem, but future maintenance needs care. If possible, define a dedicated cache DTO to decouple the cache from the entity.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductCacheRepositoryImpl.java` at line 171, The CachedProductPage record currently stores Product entities directly (see CachedProductPage and ProductCacheRepositoryImpl), which yields detached entities and can break lazy loading/change tracking; change the cache representation to use a separate immutable DTO (e.g., ProductCacheDto) and update CachedProductPage to hold List<ProductCacheDto> and primitive fields, convert Product -> ProductCacheDto when writing to cache and convert back to lightweight read-only domain objects or rebuild entities within a transactional context when reading from cache; adjust methods in ProductCacheRepositoryImpl that serialize/deserialize the cache to use these mapping helpers so entities are never stored directly in the cache.
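The entity/DTO separation suggested above can be sketched in plain Java. Everything here is illustrative: `ProductCacheDto` is a hypothetical name, and the field set and types merely mirror the projection used elsewhere in this PR.

```java
public class ProductCacheDtoDemo {
    // Stand-in for the JPA entity (illustrative fields only).
    record Product(Long id, Long brandId, String name, long price, int stock) {}

    // Immutable cache-only representation; never a managed JPA entity,
    // so deserializing it can never produce a detached entity.
    record ProductCacheDto(Long id, Long brandId, String name, long price, int stock) {
        static ProductCacheDto from(Product p) {
            return new ProductCacheDto(p.id(), p.brandId(), p.name(), p.price(), p.stock());
        }
    }

    public static void main(String[] args) {
        Product entity = new Product(1L, 10L, "keyboard", 50000L, 3);
        ProductCacheDto dto = ProductCacheDto.from(entity);
        System.out.println(dto.id() + ":" + dto.name());
    }
}
```

The mapping runs once at cache-write time, so read paths only ever see the immutable DTO.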
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@apps/commerce-api/src/main/java/com/loopers/application/product/ProductService.java`:
- Around line 112-114: Currently productCacheRepository.evictProduct(...) and
evictAllProductsCache() are called inside transactions (seen in methods
modifyProduct, deductStock, restoreStock, deleteProduct), causing cache/DB
inconsistency on rollback; change each method to register a
TransactionSynchronization via
TransactionSynchronizationManager.registerSynchronization(...) and perform
productCacheRepository.evictProduct(productId) and
productCacheRepository.evictAllProductsCache() inside the afterCommit() callback
so eviction only happens after the transaction successfully commits.
In
`@apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductCacheRepositoryImpl.java`:
- Around line 126-132: The current evictAllProductsCache implementation uses
redisTemplate.keys(PRODUCTS_KEY_PREFIX + "*"), which can block Redis in
production; change evictAllProductsCache to iterate keys using SCAN (e.g.,
RedisConnection/ScanOptions via redisTemplate) to collect and delete matching
keys in small batches (or alternatively maintain a Redis Set of product cache
keys and delete members from that set), replace direct keys() usage (reference:
evictAllProductsCache, redisTemplate.keys, PRODUCTS_KEY_PREFIX) and ensure
deletion is batched to avoid large multi-key operations; also add a performance
test that populates ~100k product keys and verifies eviction latency does not
spike.
- Around line 35-36: The Lettuce client currently has no command timeout
configured, risking threads hanging on redisTemplate calls; update the
LettuceClientConfiguration in the lettuceConnectionFactory() method (in
RedisConfig) to set an explicit commandTimeout (e.g., via
LettuceClientConfiguration.builder().commandTimeout(...)) or wrap Redis calls
made by ProductCacheRepositoryImpl (e.g., methods using
redisTemplate.opsForValue().get/set) with a circuit-breaker/timeout
(Resilience4j) so Redis failures degrade gracefully; pick a timeout value based
on production latency and verify with latency simulation (toxiproxy).
- Line 33: The TTL value (Duration TTL in ProductCacheRepositoryImpl) is hard-coded to 1 hour
while the cache consistency and monitoring requirements are unclear, so do the following: make
the TTL constant in ProductCacheRepositoryImpl configurable (e.g., via application.yml/env);
trace the admin product-update flow to verify that the evict methods (especially
evictAllProductsCache and the per-item evict methods) are actually invoked from the
service/controller (product update handler) path, and add the cache-invalidation calls to that
handler if missing; also agree with the business on the maximum acceptable staleness, adjust
the TTL accordingly, and insert Micrometer metrics (cache hit/miss counters) into
ProductCacheRepositoryImpl and document the monitoring plan.
---
Outside diff comments:
In
`@apps/commerce-api/src/main/java/com/loopers/application/product/ProductService.java`:
- Around line 1-155: Like counts are never updated because
ProductLikeCountRepository lacks increment/decrement and
LikeService.like()/unlike() don't call them; add increment(Long productId) and
decrement(Long productId) to ProductLikeCountRepository, implement semantics to
create a ProductLikeCount with count=1 if missing on increment, and decrement
(or handle floor at 0) on decrement; then call
productLikeCountRepository.increment(productId) immediately after persisting a
ProductLike in LikeService.like(), and call
productLikeCountRepository.decrement(productId) after removing a ProductLike in
LikeService.unlike(); finally update LikeFacadeTest assertions to verify the
repository/state change (or add integration tests) instead of the removed
verify(productService).
---
Nitpick comments:
In
`@apps/commerce-api/src/main/java/com/loopers/application/product/ProductService.java`:
- Around line 44-51: Extract the repeated cache-read / DB-load / cache-write
pattern used in getProduct, getProductWithLikeCount, getProducts, and
getProductsWithLikeCount into a single reusable helper (e.g., a generic method
like getFromCacheOrLoad) that accepts a cache getter Supplier<Optional<T>>, a DB
loader Supplier<T> (or Supplier<Optional<T>> if you need exception handling),
and a cache putter Consumer<T>; replace each method's inline lambda block with a
call to that helper (or alternatively annotate the read methods with `@Cacheable`
and remove manual cache logic) so the logic in ProductService is centralized and
DRY.
- Around line 146-154: deleteProducts currently calls
productRepository.findAllByBrandId and then loops calling product.delete() and
productCacheRepository.evictProduct(product.getId()) for each product, causing N
separate Redis DELETEs; instead collect the product IDs and call a single
batched eviction (e.g.,
productCacheRepository.evictProductsByIds(Collection<Long> ids) or a
pipeline-based evictProducts) to perform a multi-key DEL or pipelined deletes in
one roundtrip, then call evictAllProductsCache only if still required;
add/implement evictProductsByIds in productCacheRepository and replace the
per-item evictProduct calls in deleteProducts with a single batched call while
keeping product.delete() logic unchanged.
In
`@apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductCacheRepositoryImpl.java`:
- Line 171: The CachedProductPage record currently stores Product entities
directly (see CachedProductPage and ProductCacheRepositoryImpl), which yields
detached entities and can break lazy loading/change tracking; change the cache
representation to use a separate immutable DTO (e.g., ProductCacheDto) and
update CachedProductPage to hold List<ProductCacheDto> and primitive fields,
convert Product -> ProductCacheDto when writing to cache and convert back to
lightweight read-only domain objects or rebuild entities within a transactional
context when reading from cache; adjust methods in ProductCacheRepositoryImpl
that serialize/deserialize the cache to use these mapping helpers so entities
are never stored directly in the cache.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: d898fffd-41c1-4f7a-a0a2-7f92a3dcc6d0
⛔ Files ignored due to path filters (1)
CLAUDE.md is excluded by !**/*.md and included by **
📒 Files selected for processing (3)
apps/commerce-api/src/main/java/com/loopers/application/product/ProductService.java
apps/commerce-api/src/main/java/com/loopers/domain/product/ProductCacheRepository.java
apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductCacheRepositoryImpl.java
productCacheRepository.evictProduct(productId);
productCacheRepository.evictAllProductsCache();
Evicting the cache inside the transaction causes cache inconsistency on rollback.
Currently, modifyProduct, deductStock, restoreStock, and deleteProduct evict the cache inside the transaction. If a later step throws and the transaction rolls back, the DB is restored to its original state while the cache entry is already gone. Subsequent cache misses re-read from the DB, so the data stays consistent, but unnecessary cache misses and DB load result.
Worse, if another request queries the DB and repopulates the cache between the cache delete and the rollback, pre-change data can remain in the cache even after the rollback.
Suggested fix: use TransactionSynchronizationManager.registerSynchronization() to evict the cache only after the transaction commits.
♻️ Example: cache eviction after transaction commit
@Transactional
public ProductResult modifyProduct(Long productId, Long brandId, ProductUpdateCommand command) {
    Product product = productRepository.findByIdAndDeletedAtIsNull(productId)
        .orElseThrow(() -> new CoreException(ErrorType.NOT_FOUND, "상품을 찾을 수 없습니다."));
    product.changeInfo(brandId, command.name(), command.price(), command.stock());
    TransactionSynchronizationManager.registerSynchronization(new TransactionSynchronization() {
        @Override
        public void afterCommit() {
            productCacheRepository.evictProduct(productId);
            productCacheRepository.evictAllProductsCache();
        }
    });
    return ProductResult.from(product);
}

Also applies to: 123-125, 133-135, 142-144
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-api/src/main/java/com/loopers/application/product/ProductService.java`
around lines 112 - 114, Currently productCacheRepository.evictProduct(...) and
evictAllProductsCache() are called inside transactions (seen in methods
modifyProduct, deductStock, restoreStock, deleteProduct), causing cache/DB
inconsistency on rollback; change each method to register a
TransactionSynchronization via
TransactionSynchronizationManager.registerSynchronization(...) and perform
productCacheRepository.evictProduct(productId) and
productCacheRepository.evictAllProductsCache() inside the afterCommit() callback
so eviction only happens after the transaction successfully commits.
private static final String PRODUCT_KEY_PREFIX = "product:";
private static final String PRODUCTS_KEY_PREFIX = "products:";
private static final Duration TTL = Duration.ofHours(1);
🧩 Analysis chain
🌐 Web query:
Spring Redis cache best practices TTL configuration
💡 Result:
Spring + Redis cache TTL: recommended approach
1) Set TTL at the cache provider (RedisCacheManager), ideally per cache
Spring’s Redis caching is configured via RedisCacheConfiguration and commonly customized per cache through RedisCacheManager (or a RedisCacheManagerBuilderCustomizer). (docs.spring.io)
@Bean
RedisCacheManagerBuilderCustomizer redisCacheManagerBuilderCustomizer() {
    return (builder) -> builder
        .withCacheConfiguration("users",
            RedisCacheConfiguration.defaultCacheConfig()
                .entryTtl(Duration.ofMinutes(10)))
        .withCacheConfiguration("products",
            RedisCacheConfiguration.defaultCacheConfig()
                .entryTtl(Duration.ofSeconds(30)));
}

(Spring Boot docs show this pattern, including entryTtl(...).) (docs.enterprise.spring.io)
(Per-cache TTL via a map is also a common solution.) (stackoverflow.com)
2) Know the defaults (and override intentionally)
RedisCacheConfiguration.defaultCacheConfig() defaults to eternal TTL (no expiration) and caches null values (among other defaults like key prefixing and serializers). If you don’t set TTL explicitly, you may keep data forever. (docs.spring.io)
3) Prevent “cache avalanche” / stampedes: add TTL jitter
When many keys share the same TTL, they can expire together and cause a load spike. A common best practice is to add small random “jitter” to TTLs. (d1.awsstatic.com)
4) If mixing Spring Cache with other Redis access, don’t assume reads extend TTL
If TTL is set, “accessing” an entry from different Redis access patterns may not reset/extend expiration the way you expect—plan TTL behavior explicitly. (docs.spring.io)
Citations:
- 1: https://docs.spring.io/spring-data/redis/reference/redis/redis-cache.html?utm_source=openai
- 2: https://docs.enterprise.spring.io/spring-boot/reference/io/caching.html?utm_source=openai
- 3: https://stackoverflow.com/questions/51054774/how-to-configure-different-ttl-for-each-redis-cache-when-using-cacheable-in-spr?utm_source=openai
- 4: https://docs.spring.io/spring-data/redis/docs/current/api/org/springframework/data/redis/cache/RedisCacheConfiguration.html?utm_source=openai
- 5: https://d1.awsstatic.com/whitepapers/Database/database-caching-strategies-using-redis.73adbc8708febc9f3e5efc88382ab86f092bda82.pdf?utm_source=openai
- 6: https://docs.spring.io/spring-data/redis/reference/redis/redis-cache.html?utm_source=openai
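The TTL-jitter recommendation in point 3 can be sketched in plain Java. The helper name and the 10% jitter bound are illustrative choices, not from the query result.

```java
import java.time.Duration;
import java.util.concurrent.ThreadLocalRandom;

public class TtlJitter {
    // Adds up to 10% random jitter so cache keys written together
    // do not all expire at the same instant (avoiding a stampede).
    static Duration withJitter(Duration base) {
        long maxJitterMs = base.toMillis() / 10;
        long jitterMs = ThreadLocalRandom.current().nextLong(maxJitterMs + 1);
        return base.plusMillis(jitterMs);
    }

    public static void main(String[] args) {
        Duration base = Duration.ofHours(1);
        Duration ttl = withJitter(base);
        // The jittered TTL always lies in [base, base + 10%].
        boolean inRange = ttl.compareTo(base) >= 0
            && ttl.compareTo(base.plusMillis(base.toMillis() / 10)) <= 0;
        System.out.println(inRange);
    }
}
```

The jittered value would be passed wherever the fixed TTL constant is used today.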
🏁 Script executed:
# Check if ProductCacheRepositoryImpl.java exists and get basic info
fd ProductCacheRepositoryImpl.java
# Check git history to see if this file was recently modified
git log --oneline -n 20 -- '**/ProductCacheRepositoryImpl.java' 2>/dev/null || echo "No git history found"
# Search for ProductService to see if eviction strategy exists
rg "ProductService" --type java -l | head -10
# Search for TTL-related configuration in the codebase
rg "ofHours|ofMinutes" --type java -B 2 -A 2 | head -50
Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 1896
🏁 Script executed:
# Check for cache configuration files
fd -e properties -e yml -e yaml | xargs rg -l "cache|redis|ttl" 2>/dev/null | head -20
Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 318
🏁 Script executed:
# Look for the specific file and surrounding context
cat -n apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductCacheRepositoryImpl.java 2>/dev/null || echo "File path needs verification"
Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 8447
The rationale for the 1-hour TTL and its data-consistency guarantees need verification.
This file is a new caching layer, and the full-cache invalidation strategy (evictAllProductsCache) is retained. However, one hour is a fairly long TTL for a product-data cache. The following should be confirmed:
- whether the cache-invalidation call path on admin product updates is actually wired to the evict methods of this ProductCacheRepositoryImpl
- whether a business requirement permits data that is up to one hour stale
- whether the learning log's plan to "measure cache hit rate with Micrometer metrics" is included
Document the cache-consistency requirements and monitoring strategy, and adjust the TTL if needed.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductCacheRepositoryImpl.java`
at line 33, the TTL value (Duration TTL in ProductCacheRepositoryImpl) is hard-coded to 1 hour
while the cache consistency and monitoring requirements are unclear, so do the following: make
the TTL constant in ProductCacheRepositoryImpl configurable (e.g., via application.yml/env);
trace the admin product-update flow to verify that the evict methods (especially
evictAllProductsCache and the per-item evict methods) are actually invoked from the
service/controller (product update handler) path, and add the cache-invalidation calls to that
handler if missing; also agree with the business on the maximum acceptable staleness, adjust
the TTL accordingly, and insert Micrometer metrics (cache hit/miss counters) into
ProductCacheRepositoryImpl and document the monitoring plan.
private final RedisTemplate<String, String> redisTemplate;
private final ObjectMapper cacheObjectMapper;
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check timeout-related settings in the Redis configuration
rg -n "timeout|commandTimeout|readTimeout" --type=java -g "**/Redis*.java"
rg -n "timeout|commandTimeout|readTimeout" --type=yaml
Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 1093
🏁 Script executed:
# Locate ProductCacheRepositoryImpl.java
find . -name "ProductCacheRepositoryImpl.java" -type f
Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 185
🏁 Script executed:
# Search for Redis configuration classes
find . -name "*Redis*Config*.java" -o -name "*Redis*Configuration*.java" | head -20
Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 252
🏁 Script executed:
# Search for RedisTemplate bean definitions
rg -n "RedisTemplate|LettuceClientConfiguration" --type=java | head -30
Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 2560
🏁 Script executed:
# Check Redis-related settings in application.yml
rg -n "redis:" -A 10 --type=yaml
Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 2923
🏁 Script executed:
# Check the size of RedisConfig.java before reading it in full
wc -l ./modules/redis/src/main/java/com/loopers/config/redis/RedisConfig.java
Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 160
🏁 Script executed:
# Read the full contents of RedisConfig.java
cat -n ./modules/redis/src/main/java/com/loopers/config/redis/RedisConfig.java
Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 4959
🏁 Script executed:
# Check the size of ProductCacheRepositoryImpl.java
wc -l ./apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductCacheRepositoryImpl.java
Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 189
🏁 Script executed:
# Read the relevant parts of ProductCacheRepositoryImpl.java (lines 30-60 and 145-165)
sed -n '30,60p' ./apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductCacheRepositoryImpl.java
Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 1322
🏁 Script executed:
# Lines 150-157 of ProductCacheRepositoryImpl.java
sed -n '145,165p' ./apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductCacheRepositoryImpl.java
Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 822
🏁 Script executed:
# Find the RedisProperties class
find . -name "RedisProperties.java" -type f
Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 160
🏁 Script executed:
# Check the getFromCache() method in ProductCacheRepositoryImpl.java (lines 140-155)
sed -n '135,165p' ./apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductCacheRepositoryImpl.java
Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 1213
🏁 Script executed:
# Full contents of RedisProperties.java
cat -n ./modules/redis/src/main/java/com/loopers/config/redis/RedisProperties.java
Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 483
A commandTimeout must be set on the LettuceClientConfiguration.
Without a timeout on Redis calls such as redisTemplate.opsForValue().get() and set(), request threads can block indefinitely during a Redis outage or network delay. This leads to thread-pool exhaustion and a full service outage.
In the lettuceConnectionFactory() method of RedisConfig.java, set commandTimeout explicitly on the LettuceClientConfiguration builder, or apply a circuit breaker (e.g., Resilience4j) so the service degrades gracefully during Redis failures.
Additionally, verify the timeout behavior by simulating Redis response delays (e.g., with toxiproxy), and choose the timeout value based on the production environment's average response time and network latency.
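A minimal sketch of the suggested change in RedisConfig. The 300 ms value is a placeholder to be tuned from production latency, and the RedisProperties accessors shown are assumptions about that class's shape.

```java
@Bean
public LettuceConnectionFactory lettuceConnectionFactory(RedisProperties redisProperties) {
    // Explicit command timeout: a hung Redis node fails the call fast
    // instead of pinning a request thread indefinitely.
    LettuceClientConfiguration clientConfig = LettuceClientConfiguration.builder()
        .commandTimeout(Duration.ofMillis(300)) // tune from production p99 latency
        .build();
    RedisStandaloneConfiguration serverConfig =
        new RedisStandaloneConfiguration(redisProperties.host(), redisProperties.port());
    return new LettuceConnectionFactory(serverConfig, clientConfig);
}
```

On timeout, Lettuce throws an exception that the cache layer's fail-silent try-catch already converts into a cache miss, so reads fall back to the DB.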
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductCacheRepositoryImpl.java`
around lines 35 - 36, The Lettuce client currently has no command timeout
configured, risking threads hanging on redisTemplate calls; update the
LettuceClientConfiguration in the lettuceConnectionFactory() method (in
RedisConfig) to set an explicit commandTimeout (e.g., via
LettuceClientConfiguration.builder().commandTimeout(...)) or wrap Redis calls
made by ProductCacheRepositoryImpl (e.g., methods using
redisTemplate.opsForValue().get/set) with a circuit-breaker/timeout
(Resilience4j) so Redis failures degrade gracefully; pick a timeout value based
on production latency and verify with latency simulation (toxiproxy).
- Wrap every Redis operation in try-catch to guarantee fallback to DB handling on failure
- getFromCache: handle deserialization failures and Redis connection failures separately
- putToCache, evictProduct, evictAllProductsCache: swallow exceptions
Extracted shared logic (fail-silent exception handling, JSON serialization, EntityCacheMixin, etc.) into a RedisCacheRepository abstract class to make it reusable
Actionable comments posted: 1
🧹 Nitpick comments (1)
apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductCacheRepositoryImpl.java (1)
109-112: Confirm that the full list-cache invalidation strategy is an intentional design.
According to the learning log, having evictAllProductsCache() delete every list cache, including the per-brand ones, is an intended trade-off. However, the learning log's TTL (1 minute) differs from the current implementation's TTL (1 hour), so switching to selective (per-brandId) invalidation should be considered if admin write frequency grows. Based on learnings: "PRODUCT_LIST cache eviction strategy intentionally uses cache.clear() (full cache invalidation)... TTL is 1 minute and admin write frequency is currently low".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductCacheRepositoryImpl.java` around lines 109 - 112, The current evictAllProductsCache() (which calls safeDeleteByPattern(PRODUCTS_KEY_PREFIX + "*")) implements full list cache invalidation but the cache TTL in codebase differs from the documented 1-minute design (currently 1 hour); update the cache TTL used for product list entries to 1 minute to match the intended trade-off, or if you intentionally keep the 1-hour TTL, add an explicit comment in ProductCacheRepositoryImpl documenting the trade-off and add a TODO to switch to selective (brandId) invalidation when admin write frequency increases; reference evictAllProductsCache and PRODUCTS_KEY_PREFIX to locate the cache behavior to change.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 1f727c7e-3126-4061-8ad1-c3d9d63c7b06
📒 Files selected for processing (2)
apps/commerce-api/src/main/java/com/loopers/infrastructure/cache/RedisCacheRepository.java
apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductCacheRepositoryImpl.java
protected void safeDeleteByPattern(String pattern) {
    try {
        Set<String> keys = redisTemplate.keys(pattern);
        if (keys != null && !keys.isEmpty()) {
            redisTemplate.delete(keys);
        }
    } catch (Exception e) {
        log.warn("Redis 패턴 삭제 실패: pattern={}", pattern, e);
    }
}
Using redisTemplate.keys() causes Redis blocking in production.
The KEYS command scans the entire keyspace with O(N) complexity; once keys pile up, the Redis server blocks and other requests are delayed. Because evictAllProductsCache() in ProductCacheRepositoryImpl calls this method, a cache invalidation can turn into an outage.
Suggested fix: use the SCAN command to fetch keys incrementally.
Additional test: run an eviction performance test with a large keyspace (100k+ keys) to verify that Redis response latency does not degrade.
♻️ SCAN-based deletion example
protected void safeDeleteByPattern(String pattern) {
    try {
-        Set<String> keys = redisTemplate.keys(pattern);
-        if (keys != null && !keys.isEmpty()) {
-            redisTemplate.delete(keys);
-        }
+        redisTemplate.execute((RedisCallback<Void>) connection -> {
+            ScanOptions scanOptions = ScanOptions.scanOptions()
+                    .match(pattern)
+                    .count(100)
+                    .build();
+            // try-with-resources so the SCAN cursor is always closed
+            try (Cursor<byte[]> cursor = connection.scan(scanOptions)) {
+                while (cursor.hasNext()) {
+                    connection.del(cursor.next());
+                }
+            }
+            return null;
+        });
    } catch (Exception e) {
        log.warn("Redis 패턴 삭제 실패: pattern={}", pattern, e);
    }
}

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-api/src/main/java/com/loopers/infrastructure/cache/RedisCacheRepository.java`
around lines 73 - 82, safeDeleteByPattern currently calls
redisTemplate.keys(pattern) which blocks Redis in production; replace it with a
SCAN-based iterative deletion: use RedisTemplate.execute or RedisConnection with
ScanOptions.match(pattern) to obtain a Cursor, iterate the cursor collecting
keys into a batch (e.g., 500-1000) and call redisTemplate.delete(batch) per
batch until complete, ensuring the cursor is closed in a finally block and
exceptions are logged as before; update
ProductCacheRepositoryImpl.evictAllProductsCache to call the new SCAN-based
safeDeleteByPattern and add a load test that creates ~100k keys to verify
eviction latency and no Redis blocking.
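The chunked deletion described in the prompt (delete in batches of 500-1000 keys) can be sketched independently of Redis as a pure partitioning helper. The class and method names here are hypothetical; in the actual fix, each batch produced would be passed to `redisTemplate.delete(batch)` while iterating the SCAN cursor.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper for the batched SCAN-based deletion described above:
// keys streamed from the SCAN cursor are grouped into fixed-size chunks so
// each DEL call touches at most `batchSize` keys. Only partitioning is shown.
public class KeyBatcher {
    public static List<List<String>> partition(List<String> keys, int batchSize) {
        List<List<String>> batches = new ArrayList<>();
        for (int i = 0; i < keys.size(); i += batchSize) {
            // Copy the sublist so each batch is independent of the source list.
            batches.add(new ArrayList<>(keys.subList(i, Math.min(i + batchSize, keys.size()))));
        }
        return batches;
    }
}
```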
@coderabbitai Check whether the implementation satisfies the checklist below. 🔖 Index
❤️ Structure
⚡ Cache
Summary
Follow-up is needed on the 2 unmet items (the index DDL plus performance comparison, and the ProductLikeCount synchronization logic). 🧠 Learnings used
Summary
Context & Decision
1. Like count separation: aggregate on Product → separate aggregate table (ProductLikeCount)
Problem: sorting products by like count calls for an index on Product's likeCount column, but every like action triggers an INSERT/DELETE, so keeping that index up to date is expensive. An index added for read optimization ends up degrading write performance, which is the trade-off here.
Options compared:
Decision: separate the like counts into a dedicated ProductLikeCount aggregate table. Likes do not require real-time accuracy (eventual consistency is acceptable), so a periodic refresh looked fine. Oracle/PostgreSQL offer Materialized Views, but MySQL does not, so the aggregate table is implemented directly and the like counts are refreshed by a DB EVENT schedule. The trade-offs: the aggregation logic lives in a DB EVENT, so management points are split between application code and the database, and sorting by likes requires a JOIN between Product and ProductLikeCount.
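The hourly incremental step of this periodic refresh can be illustrated with a pure-Java sketch. All types and names below are hypothetical; the PR implements this inside a MySQL EVENT, not in application code.

```java
import java.time.Instant;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Pure-Java illustration of the incremental refresh the MySQL EVENT performs:
// only likes created at or after `since` (the previous run) are counted and
// merged into the existing product_likes_count values.
public class LikeCountRefresh {
    public record Like(long productId, Instant createdAt) {}

    public static Map<Long, Long> incrementalRefresh(
            Map<Long, Long> currentCounts, List<Like> likes, Instant since) {
        Map<Long, Long> merged = new HashMap<>(currentCounts);
        for (Like like : likes) {
            if (!like.createdAt().isBefore(since)) {
                merged.merge(like.productId(), 1L, Long::sum);
            }
        }
        return merged; // deletions are reconciled by the midnight full refresh
    }
}
```

Deleted likes are intentionally ignored here, mirroring the design: the nightly full refresh restores eventual consistency.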
Aggregation: a DB EVENT schedule aggregates the like counts from the `product_likes` table into the `product_likes_count` table. After the initial aggregation, the DB scheduler runs every hour and incrementally aggregates only the likes created within the past hour, keeping the aggregation load low; a full refresh at midnight picks up deleted likes and guarantees eventual consistency.

2. Read performance comparison (100k rows)
Single-row lookup:

The costs differ, but all three approaches are fast enough for a single-row lookup; there is no practical performance difference.
Multi-row lookup, sorted by like count (LIMIT 20):
For list queries the difference is stark. With the aggregate kept inside Product, an index scan finishes at cost 1.84 in 0.085 ms; with the separate aggregate table, a full scan plus JOIN pushes this to cost 13,617 and 137 ms. This is the trade-off of splitting out the aggregate table, and it is compensated with Redis caching.
3. Caching strategy: RedisTemplate instead of Spring Cache
Why Spring Cache was not chosen:

- it does not support pattern-based eviction (e.g. `products:*`)

Implementation: a `ProductCacheRepository` interface is defined in the domain layer, and `ProductCacheRepositoryImpl` in the infrastructure layer implements it using RedisTemplate.

4. Redis outage handling: cache failure isolation
The cache is a performance optimization, not required infrastructure. So that a Redis outage does not become a service outage, every Redis operation (get, put, delete) in the `RedisCacheRepository` abstract class is wrapped in try-catch and exceptions are absorbed: the repository returns `Optional.empty()`, the service falls back to the DB via `orElseGet()`, and the request still succeeds (slower without the cache, but working).

Exception handling logic
5. Cache refresh strategy
Detail caches (`product:{id}`, `product:{id}:like`) are evicted per key; list caches are invalidated by deleting the `products:*` pattern on product create, update, and delete. The cached product lists are dropped wholesale because there is no way to tell which keys contain the changed product, and the page it lands on varies with the sort order. Clearing the cache all at once can reportedly cause a cache stampede, where every request suddenly hits the DB; that is something to address later.
List cache invalidation uses the `KEYS products:*` command. `KEYS` iterates over every key stored in Redis, so it can block Redis once the keyspace grows. Since product updates are currently a low-frequency, admin-only operation and the number of cache keys is limited, this is fine for now; if keys multiply, switching to the incremental `SCAN` command is worth considering.

6. TTL strategy
Both detail and list reads evict their caches explicitly on product update/delete. The TTL's role is therefore not ordinary cache expiry but a safety net for when a deletion fails or is missed.

TTL is set to 1 hour, matching the aggregate table's refresh interval (1 hour). The exact optimum depends on real traffic patterns, so the plan is to monitor cache hit rate and stale-data exposure in production and adjust.
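As a related safety valve for mass expiry (not part of this PR), a common tweak is to add random jitter to the TTL so that keys cached at the same moment do not all expire in the same instant. A sketch with hypothetical names:

```java
import java.time.Duration;
import java.util.concurrent.ThreadLocalRandom;

// Illustrative TTL jitter (the PR uses a fixed 1-hour TTL): the effective TTL
// lands in [base, base * (1 + jitterRatio)), spreading expiry of keys that
// were written together and softening cache-stampede effects.
public class CacheTtl {
    public static Duration jitteredTtl(Duration base, double jitterRatio) {
        double factor = ThreadLocalRandom.current().nextDouble(); // in [0, 1)
        long extraMillis = (long) (base.toMillis() * jitterRatio * factor);
        return base.plusMillis(extraMillis);
    }
}
```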
Design Overview
Newly added

- `ProductLikeCount` entity: like-count aggregate table (MySQL lacks MV support, so it is implemented directly and refreshed by a DB EVENT schedule)
- `ProductCacheRepository` interface (domain) / `ProductCacheRepositoryImpl` (infrastructure)
- `RedisCacheRepository` abstract class: common cache logic such as failure isolation (try-catch), JSON serialization, and safe deletion

Changed
Cache key design
- Product detail: `product:{id}`
- Product detail with like count: `product:{id}:like`
- Product list: `products:page:{p}:size:{s}:sort:{sort}`
- Product list with like count: `products:like:page:{p}:size:{s}:sort:{sort}`
- Per-brand list with like count: `products:like:brand:{brandId}:page:{p}:size:{s}:sort:{sort}`
- Invalidation: delete the whole `products:*` pattern

Flow Diagram
Product read (Cache-Aside)
```mermaid
sequenceDiagram
    autonumber
    participant Client
    participant ProductService
    participant CacheRepo as ProductCacheRepository
    participant Redis
    participant ProductRepo as ProductRepository
    participant DB
    Client->>ProductService: getProductWithLikeCount(id)
    ProductService->>CacheRepo: getProductWithLikeCount(id)
    CacheRepo->>Redis: GET product:{id}:like
    alt Cache Hit
        Redis-->>CacheRepo: JSON data
        CacheRepo-->>ProductService: Optional.of(data)
        ProductService-->>Client: cached result
    else Cache Miss or Redis Failure
        CacheRepo-->>ProductService: Optional.empty()
        ProductService->>ProductRepo: findWithLikeCountByIdAndDeletedAtIsNull(id)
        ProductRepo->>DB: SELECT p.*, plc.like_count LEFT JOIN ...
        DB-->>ProductRepo: result
        ProductRepo-->>ProductService: ProductWithLikeCount
        ProductService->>CacheRepo: putProductWithLikeCount(id, data)
        CacheRepo->>Redis: SET product:{id}:like (TTL 1h)
        ProductService-->>Client: db result
    end
```

Cache invalidation (on product update/delete)
```mermaid
sequenceDiagram
    autonumber
    participant ProductService
    participant CacheRepo as ProductCacheRepository
    participant Redis
    ProductService->>CacheRepo: evictProduct(productId)
    CacheRepo->>Redis: DEL product:{id}
    CacheRepo->>Redis: DEL product:{id}:like
    ProductService->>CacheRepo: evictAllProductsCache()
    CacheRepo->>Redis: KEYS products:* → DEL matched keys
```

🤖 Generated with Claude Code