
[Volume 5] 인덱스 및 캐시를 사용한 조회 성능 최적화 #226

Open
juoklee wants to merge 15 commits into Loopers-dev-lab:juoklee from juoklee:round5

Conversation


@juoklee juoklee commented Mar 15, 2026

Summary

  • 배경: 100만건 상품 데이터 기준 목록 조회 API 응답시간이 ~295ms로, 트래픽 증가 시 DB 커넥션 풀 병목과 사용자 경험 저하가 우려되는 상황
  • 목표: DB 인덱스 + 로컬 캐시를 단계적으로 적용하여 조회 성능을 개선하고, 좋아요 수 동기화를 배치 방식으로 전환하여 쓰기 경합 해소
  • 결과: 인덱스 적용으로 218ms → <1ms. Caffeine 로컬 캐시 적용으로 목록 14ms → 3ms (78.6%), 상세 11ms → 1ms (90.9%). 좋아요 토글 시 Product 직접 UPDATE 제거 → LikeCountSyncScheduler 5분 주기 배치 동기화로 전환

Context & Decision

문제 정의

  • 현재 동작/제약: 100만건 상품 테이블에서 정렬+필터 조회 시 Full Table Scan + filesort 발생 (EXPLAIN 분석 기준 727K rows 스캔)
  • 문제(또는 리스크):
    • 인덱스 미적용으로 조회 쿼리 ~295ms 소요
    • 동일 데이터를 수백~수천 명이 반복 조회하면 DB 커넥션 풀 병목
    • 좋아요 토글 시 Product 행 Atomic UPDATE + 캐시 eviction이 매번 발생하여 쓰기 경합 + 캐시 miss 폭증
    • 서버 재시작 시 캐시가 비어있어 콜드 스타트 문제 발생
  • 성공 기준(완료 정의):
    • 주요 조회 패턴(브랜드별 정렬, 전체 정렬)에 인덱스 적용하여 Full Table Scan 제거
    • 로컬 캐시(Caffeine) 적용하여 캐시 히트 시 응답시간 대폭 감소
    • 좋아요 토글 시 쓰기 경합 제거

선택지와 결정

1. 인덱스 설계 — 조회 패턴 기반 복합 인덱스

  • A: deleted_at IS NULL을 인덱스 선두 컬럼으로 → 삭제 데이터 비율 낮으면 선택도 낮아 효과 미미
  • B: 동등 조건(brandId) + 정렬 컬럼을 선두에 두고 deleted_at은 후행 조건으로 처리
  • 최종 결정: B 방식
    • Product: idx_product_brand_like(brandId, likeCount DESC), idx_product_brand_price(brandId, price), idx_product_like_count(likeCount DESC), idx_product_created_at(createdAt DESC)
    • Order: idx_order_member_created(member_id, createdAt DESC)
    • OrderItem: idx_orderitem_order_id(order_id)
    • MemberCoupon: idx_membercoupon_member(member_id)
  • 결과: 218ms → <1ms (ALL 396K rows → ref 5K rows)
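위 결정을 DDL로 풀어 쓰면 대략 다음과 같다. product 쪽 인덱스명·컬럼은 본문과 일치하지만, Order/OrderItem/MemberCoupon의 실제 테이블·컬럼명은 엔티티 매핑에서 추정한 것이므로 참고용 스케치다:

```sql
-- Product: 동등 조건(brand_id)을 선두에, 정렬 컬럼을 후행에 배치
CREATE INDEX idx_product_brand_like  ON product (brand_id, like_count DESC);
CREATE INDEX idx_product_brand_price ON product (brand_id, price);
CREATE INDEX idx_product_like_count  ON product (like_count DESC);
CREATE INDEX idx_product_created_at  ON product (created_at DESC);

-- Order / OrderItem / MemberCoupon (테이블명은 추정)
CREATE INDEX idx_order_member_created ON orders (member_id, created_at DESC);
CREATE INDEX idx_orderitem_order_id   ON order_item (order_id);
CREATE INDEX idx_membercoupon_member  ON member_coupon (member_id);
```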

2. 좋아요 수 동기화 — 실시간 Atomic UPDATE vs 배치 동기화

  • A: 좋아요 토글 시 increaseLikeCount/decreaseLikeCount Atomic UPDATE + 캐시 eviction
  • B: 좋아요 토글 시 likes 테이블만 변경. LikeCountSyncScheduler가 5분 주기로 likes 테이블 집계 → Product.likeCount/Brand.likeCount 배치 갱신
  • 최종 결정: B 방식
  • 선택 이유:
    • 좋아요 토글 빈도가 높아 매번 Product 행 UPDATE + 인덱스 갱신 + 캐시 eviction이 발생하면 쓰기 경합과 캐시 miss 폭증
    • likes 테이블이 Single Source of Truth. Product.likeCount는 조회용 비정규화 컬럼
    • 좋아요 토글 응답에는 likeService.countLikes()로 likes 테이블 COUNT를 반환하여 실시간 정확한 값 제공
  • 트레이드오프: 목록의 좋아요순 정렬이 배치 주기(5분)만큼 지연될 수 있으나 UX상 허용 가능한 수준
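배치 동기화의 핵심은 likes 테이블의 GROUP BY 집계를 상품별 카운트 맵으로 바꾸는 단계다. 실제 구현은 JPQL(countAllByTargetType)이지만, 동작만 순수 자바로 단순화하면 아래와 같다 (클래스·메서드명은 설명용 가정):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class LikeCountAggregator {
    // likes의 각 원소는 (member_id, target_id) 쌍.
    // SELECT target_id, COUNT(*) FROM likes GROUP BY target_id 를 흉내 낸다.
    public static Map<Long, Long> countByTarget(List<long[]> likes) {
        return likes.stream()
                .collect(Collectors.groupingBy(row -> row[1], Collectors.counting()));
    }

    public static void main(String[] args) {
        List<long[]> likes = List.of(
                new long[]{1, 100}, new long[]{2, 100}, new long[]{3, 200});
        Map<Long, Long> counts = countByTarget(likes);
        // 스케줄러는 이 맵을 순회하며 updateLikeCount(targetId, count)를 호출한다
        System.out.println(counts.get(100L)); // 2
        System.out.println(counts.get(200L)); // 1
    }
}
```

스케줄러는 이렇게 얻은 맵을 상품/브랜드별로 순회하며 비정규화 컬럼을 갱신하므로, 토글 경로에서는 Product 행을 건드릴 필요가 없어진다.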

3. 캐시 무효화 — allEntries vs 키 단위

  • A: @CacheEvict(allEntries=true) — 구현 단순하지만 캐시 miss 폭증 위험
  • B: ProductCacheManager로 prefix 기반 키 매칭 후 선택적 삭제
  • 최종 결정: B 방식. 상품 수정 시 해당 상품 상세 + 해당 브랜드 캐시만 무효화
  • 트레이드오프: Caffeine 네이티브 캐시 API에 의존하게 되지만, Redis 전환 시 ProductCacheManager 내부만 교체하면 됨
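ProductCacheManager의 prefix 기반 선택적 삭제를 개념만 축약하면 다음과 같다. 실제 구현은 Caffeine 네이티브 캐시 API를 쓰지만, 여기서는 ConcurrentHashMap으로 동작만 보인 스케치이며 키 형식은 본문의 "all:LATEST:0:20" 규칙을 가정한다:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class PrefixEvictor {
    private final Map<String, Object> cache = new ConcurrentHashMap<>();

    public void put(String key, Object value) {
        cache.put(key, value);
    }

    // 주어진 prefix로 시작하는 키만 제거한다 (예: brandId=7이면 "7:")
    public void evictByPrefix(String prefix) {
        cache.keySet().removeIf(k -> k.startsWith(prefix));
    }

    public int size() {
        return cache.size();
    }

    public static void main(String[] args) {
        PrefixEvictor c = new PrefixEvictor();
        c.put("7:LATEST:0:20", "brand7 목록");
        c.put("8:LATEST:0:20", "brand8 목록");
        c.put("all:LATEST:0:20", "전체 목록");
        c.evictByPrefix("7:");   // 수정된 브랜드의 목록만
        c.evictByPrefix("all:"); // 전체 목록 캐시도 함께 무효화
        System.out.println(c.size()); // 1, 다른 브랜드(8) 캐시는 유지
    }
}
```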

4. 캐시 범위 — 대상 선정 및 페이지 제한

  • 캐시 대상: 상품 상세 (TTL 10분/최대 1000건), 상품 목록 (TTL 5분/최대 500건)

  • 캐시 제외: 재고 (실시간 정확성 필요 → 별도 API 분리), keyword 검색 (조합 무한대로 적중률 낮음)

  • 첫 페이지(page=0)만 캐시 → 대부분의 트래픽이 집중되는 첫 페이지만 보호

  • 추후 개선 여지: Redis 분산 캐시 전환 (다중 인스턴스 정합성), 2단계 캐시 (ID 목록 + 상세 분리)
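캐시 키 구성과 첫 페이지(page=0) 조건을 단순화한 예시다. 키 형식과 메서드명은 본문 다이어그램에서 추정한 가정이며, 실제로는 @Cacheable의 key/condition SpEL로 표현된다:

```java
public class ListCacheKey {
    // "all:LATEST:0:20", "7:LIKES_DESC:0:20" 형태의 키를 만든다 (형식은 추정)
    public static String of(Long brandId, String sort, int page, int size) {
        return (brandId == null ? "all" : brandId.toString())
                + ":" + sort + ":" + page + ":" + size;
    }

    // 첫 페이지만 캐시 대상, 그 외 페이지는 항상 DB 조회
    public static boolean cacheable(int page) {
        return page == 0;
    }

    public static void main(String[] args) {
        System.out.println(of(null, "LATEST", 0, 20)); // all:LATEST:0:20
        System.out.println(cacheable(0));              // true
        System.out.println(cacheable(3));              // false
    }
}
```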

5. 캐시 인프라 — Redis vs Caffeine 로컬 캐시

  • A: Redis 분산 캐시 — 다중 인스턴스 간 캐시 정합성 보장, 네트워크 왕복 ~1ms 추가
  • B: Caffeine 로컬 캐시 — JVM 힙 내 캐시로 네트워크 비용 없음, 단일 인스턴스에서 최적
  • 최종 결정: B 방식 (Caffeine)
  • 선택 이유:
    • 현재 서버 1대 운영으로 분산 캐시의 정합성 이점이 불필요
    • 로컬 캐시는 네트워크 왕복이 없어 Redis보다 응답 속도가 빠름 (캐시 히트 시 ~1ms vs ~2ms)
    • Spring @Cacheable 기반으로 구현했기 때문에 Redis 전환 시 CacheConfig와 ProductCacheManager 내부만 교체하면 됨
  • 트레이드오프: 서버가 2대 이상이 되면 인스턴스 간 캐시 불일치 발생. TTL 만료 전까지 서버별로 다른 데이터를 반환할 수 있음 → 스케일아웃 시점에 Redis 전환 필요

Design Overview

변경 범위

  • 영향 받는 모듈/도메인: Product, Like, Brand, Order, Coupon (인덱스)
  • 신규 추가:
    • ProductCacheManager — 키 단위 선택적 캐시 무효화 전담
    • CacheWarmupRunner — 서버 시작 시 캐시 선적재
    • CacheConfig — Caffeine 캐시 설정 (productDetail TTL 10분/1000건, productList TTL 5분/500건)
    • LikeCountSyncScheduler — 5분 주기 좋아요 수 배치 동기화 (Product + Brand)
    • LikeCountProjection — 좋아요 집계 결과 Projection
    • Product 테이블 인덱스 4개, Order/OrderItem/MemberCoupon 인덱스 3개
  • 변경:
    • LikeFacade — increaseLikeCount/decreaseLikeCount 호출과 ProductCacheManager 의존 제거. likeService.countLikes()로 실시간 COUNT 반환
    • ProductFacade — @Cacheable 적용 + CRUD 시 ProductCacheManager로 선택적 eviction
    • ProductService — updateLikeCount(id, likeCount) 배치 갱신 메서드 추가
    • LikeService — countLikes(), countAllLikes() 집계 메서드 추가
    • ProductV1Controller — GET /products/{id}/stock 재고 조회 API 분리
  • 제거/대체: 좋아요 토글 시 Product.increaseLikeCount()/decreaseLikeCount() 실시간 호출 → LikeCountSyncScheduler 배치 방식으로 대체

주요 컴포넌트 책임

  • ProductCacheManager: Caffeine 네이티브 캐시의 키를 prefix 기반으로 탐색하여 선택적 eviction 수행
  • CacheWarmupRunner: ApplicationReadyEvent 시점에 LATEST/LIKES_DESC 정렬 첫 페이지를 선적재하여 콜드 스타트 방지
  • ProductFacade: 상품 CRUD 시 ProductCacheManager를 통해 관련 캐시만 무효화 (register → brandId 캐시, update/delete → detail + brandId 캐시)
  • LikeFacade: 좋아요 토글 시 likes 테이블만 변경. likeService.countLikes()로 실시간 좋아요 수 반환 (Product 테이블/캐시 미접근)
  • LikeCountSyncScheduler: 5분 주기로 likes 테이블 전체 집계 → Product/Brand의 likeCount 배치 갱신 → 캐시 전체 evict
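CacheConfig가 설정하는 TTL + 최대 건수 제한(Caffeine의 expireAfterWrite/maximumSize)의 의미를 표준 라이브러리만으로 흉내 낸 최소 스케치다. Caffeine 자체는 빈도 기반 admission 등 훨씬 정교하므로 의미 설명용 가정으로만 봐야 한다:

```java
import java.util.Iterator;
import java.util.LinkedHashMap;

public class TtlCache<K, V> {
    private record Entry<V>(V value, long expiresAt) {}

    private final LinkedHashMap<K, Entry<V>> map = new LinkedHashMap<>();
    private final long ttlMillis;
    private final int maxSize;

    public TtlCache(long ttlMillis, int maxSize) {
        this.ttlMillis = ttlMillis;
        this.maxSize = maxSize;
    }

    // expireAfterWrite: 쓰기 시점 기준으로 만료 시각을 기록한다
    public synchronized void put(K key, V value, long now) {
        map.remove(key);
        if (map.size() >= maxSize) {  // maximumSize: 가장 오래된 항목부터 제거
            Iterator<K> it = map.keySet().iterator();
            it.next();
            it.remove();
        }
        map.put(key, new Entry<>(value, now + ttlMillis));
    }

    public synchronized V get(K key, long now) {
        Entry<V> e = map.get(key);
        if (e == null) return null;
        if (e.expiresAt() <= now) {   // TTL 경과: miss로 처리하고 제거
            map.remove(key);
            return null;
        }
        return e.value();
    }

    public static void main(String[] args) {
        TtlCache<String, String> cache = new TtlCache<>(1000, 2);
        cache.put("detail:1", "productA", 0);
        System.out.println(cache.get("detail:1", 500));  // productA (TTL 이내)
        System.out.println(cache.get("detail:1", 1500)); // null (TTL 만료)
    }
}
```

productDetail(10분/1000건)과 productList(5분/500건)는 이 두 파라미터만 다르게 준 같은 구조로 볼 수 있다.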

Flow Diagram

Main Flow — 상품 목록 조회 (캐시 히트)

sequenceDiagram
    autonumber
    participant Client
    participant Controller
    participant ProductFacade
    participant Cache as Caffeine Cache

    Client->>Controller: GET /api/v1/products?sort=LATEST&page=0
    Controller->>ProductFacade: getProducts(null, null, "LATEST", 0, 20)
    ProductFacade->>Cache: @Cacheable 키 조회 ("all:LATEST:0:20")
    Cache-->>ProductFacade: 캐시 히트 (~3ms)
    ProductFacade-->>Controller: PagedInfo<ProductInfo>
    Controller-->>Client: 200 OK

Main Flow — 상품 수정 시 선택적 캐시 무효화

sequenceDiagram
    autonumber
    participant Admin
    participant Controller
    participant ProductFacade
    participant ProductCacheManager
    participant Cache as Caffeine Cache
    participant DB

    Admin->>Controller: PUT /api-admin/v1/products/{id}
    Controller->>ProductFacade: updateInfo(id, ...)
    ProductFacade->>DB: getProduct(id) → brandId 확인
    ProductFacade->>DB: updateInfo(id, ...)
    ProductFacade->>ProductCacheManager: evictProductDetail(id)
    ProductCacheManager->>Cache: productDetail 캐시에서 id 키 제거
    ProductFacade->>ProductCacheManager: evictProductListByBrand(brandId)
    ProductCacheManager->>Cache: productList에서 brandId prefix + "all" prefix 키만 제거
    Note over Cache: 다른 브랜드 캐시는 유지됨
    ProductFacade-->>Controller: void
    Controller-->>Admin: 200 OK

Main Flow — 좋아요 토글 + 배치 동기화

sequenceDiagram
    autonumber
    participant Client
    participant LikeFacade
    participant LikeService
    participant LikesDB as DB (likes)

    Client->>LikeFacade: toggleProductLike(loginId, productId)
    LikeFacade->>LikeService: toggleLike(memberId, PRODUCT, productId)
    LikeService->>LikesDB: INSERT or DELETE
    LikeFacade->>LikeService: countLikes(PRODUCT, productId)
    LikeService->>LikesDB: SELECT COUNT(*)
    LikesDB-->>LikeFacade: 실시간 좋아요 수
    LikeFacade-->>Client: LikeToggleInfo(liked, likeCount)
    Note over LikesDB: Product 테이블/캐시 미접근
sequenceDiagram
    autonumber
    participant Scheduler
    participant LikeCountSyncScheduler
    participant LikeService
    participant ProductService
    participant ProductCacheManager
    participant DB

    Scheduler->>LikeCountSyncScheduler: @Scheduled(fixedRate = 5분)
    LikeCountSyncScheduler->>LikeService: countAllLikes(PRODUCT)
    LikeService->>DB: GROUP BY targetId로 전체 집계
    DB-->>LikeCountSyncScheduler: List<LikeCountProjection>
    loop 각 상품별
        LikeCountSyncScheduler->>ProductService: updateLikeCount(id, count)
        ProductService->>DB: UPDATE product SET like_count = ?
    end
    LikeCountSyncScheduler->>ProductCacheManager: evictAllProductList() + evictAllProductDetail()
    Note over ProductCacheManager: 배치 동기화 후 캐시 전체 갱신

Main Flow — 서버 시작 시 캐시 워밍업

sequenceDiagram
    autonumber
    participant Boot as Spring Boot
    participant CacheWarmupRunner
    participant ProductFacade
    participant DB
    participant Cache as Caffeine Cache

    Boot->>CacheWarmupRunner: ApplicationReadyEvent
    CacheWarmupRunner->>ProductFacade: getProducts(null, null, "LATEST", 0, 20)
    ProductFacade->>DB: 조회 (캐시 미스)
    DB-->>ProductFacade: 결과
    ProductFacade->>Cache: "all:LATEST:0:20" 캐시 저장
    CacheWarmupRunner->>ProductFacade: getProducts(null, null, "LIKES_DESC", 0, 20)
    ProductFacade->>DB: 조회 (캐시 미스)
    DB-->>ProductFacade: 결과
    ProductFacade->>Cache: "all:LIKES_DESC:0:20" 캐시 저장
    Note over Cache: 서버 시작 직후부터 캐시 히트 가능

리뷰 포인트

  1. 좋아요 배치 동기화의 지연 허용 범위: 좋아요 수를 5분 주기로 배치 동기화하면서 쓰기 경합은 해소했는데, 목록의 좋아요순 정렬이 최대 5분간 지연됩니다. 실제 서비스에서 이런 eventual consistency의 허용 범위는 어떤 기준으로 판단하나요? 비즈니스 요구에 따라 주기를 더 줄이거나 실시간으로 전환해야 하는 시점의 신호는 무엇인가요?

  2. 로컬 캐시에서 분산 캐시로 전환하는 타이밍: 현재 서버 1대 기준으로 Caffeine 로컬 캐시를 선택했는데, 스케일아웃이 필요해지는 시점에 Redis로 전환하려면 어떤 것들을 미리 준비해두는 게 좋을까요? 지금 구조에서 만약에 redis로 전환한다고 했을때, 전환 비용 관련해서 놓치고 있는 부분이 있을지 궁금합니다.

  3. 캐시 무효화 전략의 복잡도 관리: 현재 상품 수정 시 ProductCacheManager로 키 단위 선택적 eviction을 하고 있는데, 캐시 대상이 늘어나면 무효화 로직이 점점 복잡해질 것 같습니다. 실무에서는 이런 캐시 무효화의 복잡도를 어떻게 관리하나요? 일정 수준을 넘으면 allEntries 방식이 오히려 나은 경우도 있나요?


변경 목적 및 배경: 100만 건 규모에서 목록/상세 조회 응답을 대폭 개선하고 DB 쓰기 경쟁을 완화하기 위해 복합 인덱스 적용, 로컬 Caffeine 캐시 도입 및 좋아요 집계를 배치로 전환해 응답지연·커넥션 풀 병목을 해소합니다.
핵심 변경점: 1) Product/Order/OrderItem/MemberCoupon 등 테이블에 복합 인덱스 추가로 쿼리 성능 개선(예: 전체 스캔(ALL) → 인덱스 참조(ref)). 2) Caffeine 기반 로컬 캐시(productDetail TTL 10분·max 1000, productList TTL 5분·max 500, 첫 페이지만 캐시) 및 CacheWarmupRunner로 기동 시 선적재. 3) 좋아요는 likes 테이블을 SSoT로 유지하고 토글은 likes만 변경 → LikeCountSyncScheduler가 5분 주기로 집계하여 Product/Brand.likeCount를 배치 갱신(배치 후 관련 캐시 무효화). 4) @CacheEvict(allEntries=true) 대신 ProductCacheManager로 브랜드/상세별 선택적 eviction과 재고 조회 API 분리.
리스크·주의사항: 좋아요 기반 정렬은 최대 배치 주기(≈5분)만큼 지연될 수 있고, toggle에서 실시간 countLikes 호출이 증분 부하를 만들 수 있음(트래픽 환경에서 쿼리 비용 확인 필요). Caffeine은 단일 인스턴스에 적합하며, 스케일아웃 시 Redis 전환 및 분산 무효화 설계가 필요합니다.
테스트·검증 방법: ProductCacheE2ETest(캐시 히트·무효화·선택적 제거 검증), ProductReadOptimizationPerformanceTest(인덱스·캐시 성능 비교 및 EXPLAIN), LikeWritePerformanceTest(denormalized vs MV 동시성 쓰기 비교), LikeV1ApiE2ETest/LikeConcurrencyE2ETest(스케줄러 기반 likeCount 동기화 및 캐시 일관성 검증).

juoklee added 7 commits March 12, 2026 21:41
- Caffeine 로컬 캐시 적용 (상품 상세 10분/1000건, 목록 5분/500건)
- 재고는 실시간 정확성이 필요하므로 캐시에서 제외하고 별도 API 분리
- 좋아요 변경 시 상품 상세 캐시 evict, 목록은 TTL로 갱신
- 캐시 hit/miss/eviction 검증 및 성능 측정 테스트 추가
- 재고 분리 API 사용하도록 기존 테스트 수정
- 모든 E2E 테스트에 @AfterEach 캐시 클리어 추가
- ProductIndexPerformanceTest 삭제 (종합 테스트와 중복, @Transactional+DDL 설계 결함)
- ProductCacheE2ETest에서 성능 측정 내부 클래스 제거 (캐시 동작 검증에만 집중)
- ProductReadOptimizationPerformanceTest 재구성:
  1단계 비정규화, 2단계 인덱스(EXPLAIN+COUNT), 3단계 인덱스만 vs 인덱스+캐시 API 비교
- try-finally로 인덱스 복원 안전장치 추가
- 중앙값(median) 통계 추가
- CacheConfig에 recordStats() 추가
- allEntries=true 제거, ProductCacheManager로 키 단위 선택적 eviction
- 목록 캐시 첫 페이지만 캐시 (page==0 condition)
- 좋아요 토글 시 좋아요순 첫 페이지만 선택적 evict
- 서버 시작 시 CacheWarmupRunner로 캐시 선적재
좋아요 toggle 시 product/brand 테이블 직접 UPDATE 제거하고,
스케줄러가 주기적으로 likes 테이블을 집계하여 likeCount를 갱신하는
MV 시뮬레이션 방식으로 전환하여 쓰기 경합 문제 해결

coderabbitai bot commented Mar 15, 2026

Warning

Rate limit exceeded

@juoklee has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 9 minutes and 12 seconds before requesting another review.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 86eadfa5-b1ba-4345-8193-6c4fcc56b2ed

📥 Commits

Reviewing files that changed from the base of the PR and between 1a8cafb and 54fcf1a.

📒 Files selected for processing (2)
  • apps/commerce-api/src/main/java/com/loopers/application/like/LikeCountSyncScheduler.java
  • apps/commerce-api/src/test/java/com/loopers/application/like/LikeCountSyncSchedulerTest.java
📝 Walkthrough

Walkthrough

Caffeine 기반 캐시 인프라와 캐시 매니저를 추가하고 제품 목록/상세에 캐싱을 적용하며, 라이크 집계를 조회 기반으로 수집해 5분 주기로 DB에 동기화하는 스케줄러와 캐시 무효화/워밍업, 관련 인덱스 및 다수의 테스트·퍼포먼스 코드가 추가되었다.

Changes

  • 빌드 의존성 — apps/commerce-api/build.gradle.kts: spring-boot-starter-cache 및 caffeine 의존성 추가
  • 애플리케이션 설정 — apps/commerce-api/src/main/java/com/loopers/CommerceApiApplication.java: 스케줄링 활성화를 위한 @EnableScheduling 추가
  • 캐시 설정 및 유틸 — apps/commerce-api/src/main/java/com/loopers/support/config/CacheConfig.java, apps/commerce-api/src/main/java/com/loopers/support/cache/ProductCacheManager.java: Caffeine 기반 CacheManager 및 productDetail/productList 캐시 정의, 네이티브 캐시 직접 조작을 통한 선택적 무효화 유틸 추가
  • 스케줄러(라이크 동기화) — apps/commerce-api/src/main/java/com/loopers/application/like/LikeCountSyncScheduler.java: Like 집계를 조회해 Product/Brand의 likeCount를 업데이트하고 관련 캐시를 무효화하는 주기적 동기화 로직 추가 (@Scheduled, @Transactional)
  • 제품 캐시 적용·무효화·워밍업 — apps/commerce-api/src/main/java/com/loopers/application/product/ProductFacade.java, apps/commerce-api/src/main/java/com/loopers/application/product/CacheWarmupRunner.java: getProduct/getProducts에 @Cacheable 적용, 등록/수정/삭제 시 ProductCacheManager를 통한 관련 캐시 무효화 호출 및 애플리케이션 기동 시 워밍업 실행 추가
  • 라이크 집계 API 및 프로젝션 — apps/commerce-api/src/main/java/com/loopers/domain/like/LikeReader.java, .../LikeService.java, .../LikeCountProjection.java: 타겟별 단건 카운트와 전체 집계 반환 API 및 LikeCountProjection 레코드 추가
  • 인프라(레포지토리) 집계/업데이트 구현 — apps/commerce-api/src/main/java/com/loopers/infrastructure/like/LikeJpaRepository.java, .../LikeReaderImpl.java, .../product/ProductJpaRepository.java, .../brand/BrandJpaRepository.java: likes 집계용 JPQL 쿼리와 product/brand의 likeCount 직접 설정·리셋용 업데이트 쿼리(@Modifying @Query) 추가 및 구현 위임 메서드 추가
  • 도메인 서비스·리포지토리 확장 — apps/commerce-api/src/main/java/com/loopers/domain/product/ProductService.java, .../brand/BrandService.java, .../ProductRepository.java, .../BrandRepository.java: updateLikeCount 및 resetLikeCountsNotIn/resetAllLikeCounts 등 likeCount 동기화용 서비스·레포지토리 메서드 추가
  • API 계약(재고 분리 및 DTO 변경) — apps/commerce-api/src/main/java/com/loopers/interfaces/api/product/ProductV1Dto.java, .../ProductV1Controller.java, apps/.../interfaces/api/like/LikeV1Dto.java: ProductDto에서 stockQuantity 제거, 별도 엔드포인트 /api/v1/products/{id}/stock(StockResponse) 추가 및 관련 매핑·컨트롤러 변경
  • 엔티티 인덱스 추가 — apps/commerce-api/src/main/java/com/loopers/domain/product/Product.java, .../order/Order.java, .../order/OrderItem.java, .../coupon/MemberCoupon.java, .../like/Like.java: 검색·집계 성능 개선을 위한 다수의 JPA 인덱스 추가 (제품, 주문, 주문항목, 멤버쿠폰, likes 테이블)
  • 테스트·테스트 인프라 대규모 변경 — apps/commerce-api/src/test/.../*, apps/commerce-api/src/test/java/com/loopers/support/BulkDataInitializer.java: E2E/단위 테스트에 CacheManager·LikeCountSyncScheduler 주입 및 캐시 정리 추가, 캐시·스케줄러 연계 테스트·성능 테스트(다수) 및 대용량 데이터 시더(BulkDataInitializer) 추가

Sequence Diagram

sequenceDiagram
    participant Scheduler as LikeCountSyncScheduler
    participant LikeSvc as LikeService
    participant Reader as LikeReader
    participant DB as Database
    participant ProductSvc as ProductService
    participant BrandSvc as BrandService
    participant CacheMgr as ProductCacheManager

    Scheduler->>LikeSvc: countAllLikes(PRODUCT)
    LikeSvc->>Reader: countAllByTargetType(PRODUCT)
    Reader->>DB: SELECT target_id, COUNT(*) GROUP BY target_id
    DB-->>Reader: List<Projection>
    Reader-->>LikeSvc: List<LikeCountProjection>
    loop 제품별
        Scheduler->>ProductSvc: updateLikeCount(productId, count)
        ProductSvc->>DB: UPDATE product.like_count = ?
        ProductSvc-->>Scheduler: 결과(성공/실패)
    end
    Scheduler->>CacheMgr: evictProductListByBrand(brandId) / evictAllProductDetail()
    Scheduler->>LikeSvc: countAllLikes(BRAND)
    LikeSvc->>Reader: countAllByTargetType(BRAND)
    Reader->>DB: SELECT target_id, COUNT(*) GROUP BY target_id
    loop 브랜드별
        Scheduler->>BrandSvc: updateLikeCount(brandId, count)
        BrandSvc->>DB: UPDATE brand.like_count = ?
    end
    CacheMgr-->>CacheMgr: 캐시 키 제거 / 캐시 clear

Estimated Code Review Effort

🎯 4 (Complex) | ⏱️ ~75 minutes

Possibly Related PRs

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning — Docstring coverage is 0.96%, which is insufficient. The required threshold is 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (2 passed)
  • Title check ✅ Passed — 제목은 주요 변경 사항인 '인덱스 및 캐시를 사용한 조회 성능 최적화'를 명확히 설명하고 있다.
  • Description check ✅ Passed — 설명서는 요구되는 모든 섹션(Summary, Context & Decision, Design Overview, Flow Diagram)을 포함하고 있으며, 각 섹션이 충실하게 작성되어 있다.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 16

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
apps/commerce-api/src/main/java/com/loopers/domain/coupon/MemberCoupon.java (1)

10-22: ⚠️ Potential issue | 🟠 Major

복합 유니크 인덱스와 단일 member_id 인덱스가 중복될 가능성이 높다

운영 관점에서 Line 19의 UNIQUE(member_id, coupon_id)가 이미 member_id 선행 조건 조회를 커버하므로, Line 21 인덱스는 INSERT/UPDATE/DELETE 시 인덱스 유지 비용만 추가해 쓰기 지연을 유발할 수 있다.
수정안은 idx_membercoupon_member를 제거하고 유니크 인덱스만 유지하는 것이다. 만약 실제 쿼리에 별도 정렬/필터 요구가 있다면 그 패턴에 맞는 목적형 인덱스로 교체하는 것이 안전하다.
추가 테스트로 member_id 조건 주요 조회 쿼리에 대해 EXPLAIN을 비교해 실행계획 변화가 없는지 확인하고, 쿠폰 발급/사용 시나리오에서 쓰기 p95 지연을 전후 비교하는 것을 권장한다.

제안 수정안
-import jakarta.persistence.Index;
 import jakarta.persistence.Table;
 import jakarta.persistence.UniqueConstraint;
@@
 @Table(name = "member_coupon", uniqueConstraints = {
     @UniqueConstraint(columnNames = {"member_id", "coupon_id"})
-}, indexes = {
-    @Index(name = "idx_membercoupon_member", columnList = "member_id")
 })
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/commerce-api/src/main/java/com/loopers/domain/coupon/MemberCoupon.java`
around lines 10 - 22, The `@Table` definition on MemberCoupon declares a composite
unique constraint uniqueConstraints = {"member_id","coupon_id"} and a separate
index idx_membercoupon_member which is redundant; remove the redundant
`@Index`(name = "idx_membercoupon_member", columnList = "member_id") from the
`@Table`(indexes=...) so only the UNIQUE(member_id,coupon_id) remains, and if you
need a different access pattern replace it with a purpose-built index; after the
change run EXPLAIN on your main member_id queries and compare write latency
(coupon issue/use p95) to ensure no behavioral regressions.
apps/commerce-api/src/test/java/com/loopers/domain/like/LikeServiceTest.java (1)

3-8: ⚠️ Potential issue | 🟡 Minor

중복 import 제거 필요

PageResult import가 Line 3과 Line 8에 중복 선언되어 있다.

🐛 수정안
 package com.loopers.domain.like;

 import com.loopers.domain.PageResult;
 import org.junit.jupiter.api.BeforeEach;
 import org.junit.jupiter.api.DisplayName;
 import org.junit.jupiter.api.Nested;
 import org.junit.jupiter.api.Test;
-import com.loopers.domain.PageResult;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/commerce-api/src/test/java/com/loopers/domain/like/LikeServiceTest.java`
around lines 3 - 8, Remove the duplicate import of PageResult in the
LikeServiceTest.java imports: keep a single import statement for
com.loopers.domain.PageResult and delete the redundant one so there is only one
PageResult import in the LikeServiceTest class.
apps/commerce-api/src/main/java/com/loopers/application/like/LikeFacade.java (1)

32-41: ⚠️ Potential issue | 🟡 Minor

좋아요 토글 시 캐시 무효화 누락 - 특정 정렬 페이지 일관성 영향

toggleProductLike()와 toggleBrandLike()에서 캐시 무효화를 수행하지 않는다. 현재 구조에서는 좋아요 토글 응답은 실시간 COUNT 반환으로 정확하지만, 상품 상세 캐시(productDetail)와 상품 목록 캐시(productList)는 5분 주기의 LikeCountSyncScheduler에 의존하여 최대 5분까지 불일치가 발생한다. 특히 인기순(LIKES_DESC) 정렬 첫 페이지는 자주 조회되는 반면, 존재하는 evictLikesSortFirstPage() 메서드가 토글 후 호출되지 않아 캐시 히트율 저하 및 사용자 경험 악화를 초래한다.

좋아요 토글 직후에 productCacheManager.evictLikesSortFirstPage()를 호출하여 인기순 첫 페이지 캐시를 즉시 무효화하고, 필요시 최근 토글된 상품의 productDetail 캐시도 함께 무효화하는 것을 검토한다.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/commerce-api/src/main/java/com/loopers/application/like/LikeFacade.java`
around lines 32 - 41, toggleProductLike (and likewise toggleBrandLike) updates
like counts but does not invalidate related caches, causing stale productList
(especially LIKES_DESC first page) and productDetail views until the periodic
LikeCountSyncScheduler runs; after successfully toggling (in
LikeFacade.toggleProductLike and toggleBrandLike) call
productCacheManager.evictLikesSortFirstPage() to immediately evict the
popular-sort first-page cache and also evict the specific product detail cache
(e.g., productCacheManager.evictProductDetail(productId) or the equivalent
method) so UI queries reflect the new like state instantly.
🧹 Nitpick comments (14)
apps/commerce-api/src/test/java/com/loopers/infrastructure/product/ProductReadOptimizationPerformanceTest.java (3)

429-433: 인증 헤더에 하드코딩된 값이 사용되고 있다.

X-Loopers-Ldap: loopers.admin 헤더가 하드코딩되어 있다. 테스트 프로파일에서 사용되므로 보안 위험은 낮으나, 테스트 상수나 설정 파일로 분리하면 유지보수성이 향상된다.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/test/java/com/loopers/infrastructure/product/ProductReadOptimizationPerformanceTest.java`
around lines 429 - 433, The adminEntity() helper currently hardcodes the
authentication header ("X-Loopers-Ldap" -> "loopers.admin"); extract the header
key and value into a shared test constant or test configuration (e.g.,
TestConstants.ADMIN_LDAP or application-test property) and update adminEntity()
to read from that constant/config instead of the literal string; this keeps the
test data centralized and easier to change and maintain (refer to adminEntity(),
"X-Loopers-Ldap", and "loopers.admin" when making the change).

405-417: 인덱스 조작 시 예외를 무시하면 테스트 실패 원인 파악이 어렵다.

dropProductIndexes()와 createProductIndexes()에서 모든 예외를 무시하고 있다. 인덱스가 이미 존재하거나 없는 경우는 무시해도 되지만, 권한 오류나 문법 오류 등 다른 예외는 로깅해야 디버깅이 가능하다.

♻️ 리팩터링 제안
 private void dropProductIndexes() {
     String[] indexes = {"idx_product_brand_like", "idx_product_brand_price", "idx_product_like_count", "idx_product_created_at"};
     for (String idx : indexes) {
-        try { jdbcTemplate.execute("DROP INDEX " + idx + " ON product"); } catch (Exception ignored) {}
+        try {
+            jdbcTemplate.execute("DROP INDEX " + idx + " ON product");
+        } catch (Exception e) {
+            if (!e.getMessage().contains("check that it exists")) {
+                log.warn("인덱스 삭제 실패 [{}]: {}", idx, e.getMessage());
+            }
+        }
     }
 }

 private void createProductIndexes() {
-    try { jdbcTemplate.execute("CREATE INDEX idx_product_brand_like ON product (brand_id, like_count DESC)"); } catch (Exception ignored) {}
-    try { jdbcTemplate.execute("CREATE INDEX idx_product_brand_price ON product (brand_id, price)"); } catch (Exception ignored) {}
-    try { jdbcTemplate.execute("CREATE INDEX idx_product_like_count ON product (like_count DESC)"); } catch (Exception ignored) {}
-    try { jdbcTemplate.execute("CREATE INDEX idx_product_created_at ON product (created_at DESC)"); } catch (Exception ignored) {}
+    String[] ddls = {
+        "CREATE INDEX idx_product_brand_like ON product (brand_id, like_count DESC)",
+        "CREATE INDEX idx_product_brand_price ON product (brand_id, price)",
+        "CREATE INDEX idx_product_like_count ON product (like_count DESC)",
+        "CREATE INDEX idx_product_created_at ON product (created_at DESC)"
+    };
+    for (String ddl : ddls) {
+        try {
+            jdbcTemplate.execute(ddl);
+        } catch (Exception e) {
+            if (!e.getMessage().contains("Duplicate key name")) {
+                log.warn("인덱스 생성 실패: {}", e.getMessage());
+            }
+        }
+    }
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/test/java/com/loopers/infrastructure/product/ProductReadOptimizationPerformanceTest.java`
around lines 405 - 417, In dropProductIndexes() and createProductIndexes(), stop
swallowing all exceptions from jdbcTemplate.execute; catch only expected SQL
states for "index does not exist" or "index already exists" and ignore those,
but log and rethrow or fail the test for any other exceptions (e.g., permission
or syntax errors). Locate the jdbcTemplate.execute calls in dropProductIndexes
and createProductIndexes, replace the generic catch(Exception ignored) with
logic that inspects the thrown exception (SQLException or DataAccessException),
filters by SQL state or error code for benign cases, and uses the test logger to
error/throw for unexpected failures so failures are visible during test runs.

34-37: 테스트 격리가 보장되지 않는다.

이 테스트는 DB 인덱스를 삭제/생성하고 likes 테이블에 데이터를 삽입한다. @AfterEach 또는 @AfterAll로 정리하지 않으면 다른 테스트에 영향을 줄 수 있다. 특히 IndexPerformance 테스트가 실패하면 finally 블록에서 인덱스를 복구하지만, 다른 테스트 클래스가 동시에 실행될 경우 인덱스 상태가 불안정해질 수 있다.

운영 관점에서 CI/CD 파이프라인에서 테스트 순서가 달라지거나 병렬 실행 시 플래키 테스트의 원인이 된다.

♻️ 테스트 격리 개선 제안
+import org.junit.jupiter.api.AfterAll;
+import org.junit.jupiter.api.TestInstance;

 @SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
 @ActiveProfiles({"test", "bulk-data"})
 @DisplayName("상품 조회 읽기 최적화 종합 성능 테스트 (100만건)")
+@TestInstance(TestInstance.Lifecycle.PER_CLASS)
 class ProductReadOptimizationPerformanceTest {
+
+    @AfterAll
+    void tearDown() {
+        createProductIndexes(); // 인덱스 복원 보장
+        clearAllCaches();
+    }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/test/java/com/loopers/infrastructure/product/ProductReadOptimizationPerformanceTest.java`
around lines 34 - 37, ProductReadOptimizationPerformanceTest modifies DB indexes
and inserts into the likes table without guaranteed cleanup; add deterministic
teardown to avoid cross-test pollution by implementing an `@AfterEach` (or
`@AfterAll` if shared) method in the ProductReadOptimizationPerformanceTest class
that restores any dropped/created indexes and truncates or deletes test rows
from the likes table (or resets the schema used), and/or annotate the test to
use a disposable test schema/context (e.g., `@DirtiesContext` or dedicated test
datasource) so index changes and inserted data are reliably reverted even if the
test or IndexPerformance fails; operate on the same named index and likes table
symbols used in the test to locate the cleanup logic.
apps/commerce-api/build.gradle.kts (1)

22-24: Caffeine 버전 관리 경로를 파일에서 명시적으로 드러내면 운영 중 캐시 의존성 추적이 용이해진다.

Line 24의 Caffeine은 Spring Boot BOM을 통해 버전이 간접적으로 관리되고 있다(spring-boot-starter-cache의 transitive 의존성). 이 방식은 표준 관행이며 Spring Boot 3.4.4의 dependency-management 플러그인이 버전을 제어하므로 무작정 변동되지는 않는다. 다만 이 모듈의 빌드 파일만 읽어서는 Caffeine 버전의 출처가 명확하지 않아, 운영 중 캐시 동작 변화 원인 파악에 시간이 소요될 수 있다.

개선을 위해 gradle.properties에 명시적 주석을 추가하거나, 최소한 이 모듈의 빌드 파일에 "Caffeine 버전은 Spring Boot BOM 관리"라는 참조를 추가하면 된다. 추가로 CI에 "gradle dependencyReport 또는 의존성 잠금" 단계를 포함하여 실제 해결된 Caffeine 버전을 기록하면 운영 투명성이 향상된다.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/commerce-api/build.gradle.kts` around lines 22 - 24, The Caffeine
dependency implementation("com.github.ben-manes.caffeine:caffeine") lacks a
visible version source; add a short comment next to that declaration in
build.gradle.kts stating that the Caffeine version is managed by the Spring Boot
BOM (spring-boot-starter-cache) and controlled by the dependency-management
plugin for Spring Boot 3.4.4, and also add an entry in gradle.properties (e.g.,
CAFFEINE_VERSION_MANAGED_BY_BOM=true or a note) to surface this fact; finally,
update CI to emit a gradle dependencyReport or dependency lock step so the
resolved Caffeine version is recorded for operational traceability.
apps/commerce-api/src/test/java/com/loopers/support/BulkDataInitializer.java (1)

83-94: 성능 회귀 테스트 신뢰도를 위해 데이터 생성 난수 시드를 고정하는 편이 낫다

Line 83-94는 실행마다 데이터 분포가 달라져 인덱스 효율/캐시 히트율 편차가 생기고, 운영 관점에서 성능 회귀 지표의 비교 가능성이 낮아진다. 수정안은 고정 시드 기반 난수 생성기를 사용해 데이터 분포를 재현 가능하게 만드는 것이다.

수정 예시 diff
 import java.util.List;
+import java.util.Random;
 import java.util.concurrent.ThreadLocalRandom;
@@
 public class BulkDataInitializer implements ApplicationRunner {
+    private static final long DATA_SEED = 20260315L;
@@
-            var random = ThreadLocalRandom.current();
+            var random = new Random(DATA_SEED + currentBrandIdx);
@@
-                    String category = categories[random.nextInt(categories.length)];
-                    String adj = adjectives[random.nextInt(adjectives.length)];
-                    long price = 10000 + random.nextLong(490000);
-                    int likeCount = (int) (Math.pow(random.nextDouble(), 4) * 10000);
-                    int stock = random.nextInt(1000);
-                    int daysAgo = random.nextInt(365);
+                    String category = categories[random.nextInt(categories.length)];
+                    String adj = adjectives[random.nextInt(adjectives.length)];
+                    long price = 10000 + random.nextInt(490000);
+                    int likeCount = (int) (Math.pow(random.nextDouble(), 4) * 10000);
+                    int stock = random.nextInt(1000);
+                    int daysAgo = random.nextInt(365);

추가 테스트는 동일한 빈 DB에 2회 시드 후 샘플 행(예: 각 브랜드 첫 10개 상품)의 핵심 컬럼 해시가 동일한지 검증하는 재현성 테스트를 권장한다.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/commerce-api/src/test/java/com/loopers/support/BulkDataInitializer.java`
around lines 83 - 94, The test data generator uses ThreadLocalRandom.current()
which can't be seeded, so make the data generation deterministic by replacing
ThreadLocalRandom.current() with a seedable generator (e.g., new Random(SEED) or
new SplittableRandom(SEED)) declared as a constant in BulkDataInitializer and
used by the jdbcTemplate.batchUpdate setValues lambda; update calls that use
nextInt/nextLong/nextDouble to the chosen API and keep the variable name random
to minimize diff noise. Also add a constant long SEED and a small
reproducibility test that inserts into an empty DB twice and compares hashes of
sample rows (e.g., first 10 products per brand) to ensure identical
distributions.
apps/commerce-api/src/main/java/com/loopers/domain/like/LikeService.java (1)

37-45: 도메인 서비스의 트랜잭션 어노테이션 재검토 권장

countLikes와 countAllLikes에 @Transactional(readOnly = true)가 선언되어 있다. 그러나:

  1. LikeFacade.toggleProductLike(쓰기 트랜잭션) 내에서 호출 시, 기본 전파 설정(REQUIRED)으로 인해 외부 트랜잭션을 따르므로 readOnly = true는 무시된다.
  2. 프로젝트 학습 내용에 따르면 트랜잭션 선언은 Application 레이어 책임이다.

현재 동작에는 문제가 없으나, 계층 책임 일관성을 위해 도메인 서비스에서는 트랜잭션 어노테이션 제거를 권장한다.

♻️ 선택적 개선안
-    @Transactional(readOnly = true)
     public int countLikes(LikeTargetType targetType, Long targetId) {
         return likeReader.countByTargetTypeAndTargetId(targetType, targetId);
     }

-    @Transactional(readOnly = true)
     public List<LikeCountProjection> countAllLikes(LikeTargetType targetType) {
         return likeReader.countAllByTargetType(targetType);
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/commerce-api/src/main/java/com/loopers/domain/like/LikeService.java`
around lines 37 - 45, Remove the `@Transactional`(readOnly = true) annotations
from the domain methods countLikes and countAllLikes in LikeService so
transaction demarcation is not enforced at the domain layer; instead ensure
callers (e.g., LikeFacade.toggleProductLike) start/own transactions at the
application layer with appropriate propagation and readOnly settings. Locate the
two methods named countLikes(LikeTargetType, Long) and
countAllLikes(LikeTargetType) in LikeService and delete the `@Transactional`
annotations, leaving their logic unchanged.
apps/commerce-api/src/test/java/com/loopers/infrastructure/like/LikeWritePerformanceTest.java (2)

149-186: Risk of ExecutorService resource leak

executor.shutdown() may never run if an exception is thrown. In addition, awaitTermination() is not called after shutdown(), so the next iteration can start before the threads have fully terminated.

♻️ Safe shutdown via a try-finally pattern
     private long runConcurrentLikes(Long productId, int threadCount, boolean withProductUpdate) throws InterruptedException {
         ExecutorService executor = Executors.newFixedThreadPool(threadCount);
-        CountDownLatch ready = new CountDownLatch(threadCount);
-        CountDownLatch start = new CountDownLatch(1);
-        CountDownLatch done = new CountDownLatch(threadCount);
-        AtomicInteger successCount = new AtomicInteger(0);
-        // ... 기존 로직 ...
-        executor.shutdown();
-        return elapsed;
+        try {
+            CountDownLatch ready = new CountDownLatch(threadCount);
+            CountDownLatch start = new CountDownLatch(1);
+            CountDownLatch done = new CountDownLatch(threadCount);
+            AtomicInteger successCount = new AtomicInteger(0);
+            // ... 기존 로직 ...
+            return elapsed;
+        } finally {
+            executor.shutdown();
+            if (!executor.awaitTermination(10, java.util.concurrent.TimeUnit.SECONDS)) {
+                executor.shutdownNow();
+            }
+        }
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/test/java/com/loopers/infrastructure/like/LikeWritePerformanceTest.java`
around lines 149 - 186, The runConcurrentLikes method currently may leak threads
because executor.shutdown() is only called on the normal path; wrap the
ExecutorService lifecycle in a try-finally around the code that submits tasks
and waits (keeping ready.await(), start.countDown(), done.await() inside the
try), call executor.shutdown() in the finally, then call
executor.awaitTermination(...) with a reasonable timeout and, if it times out,
call executor.shutdownNow(); also handle InterruptedException by restoring the
thread interrupt status; update references: runConcurrentLikes, executor,
shutdown, awaitTermination, shutdownNow, ready/start/done latches and
successCount.

145-146: Potentially flaky test in CI environments

The mvMed <= denormMed * 1.2 assertion can fail intermittently in CI due to resource contention, container CPU limits, and so on. Performance tests are best split into a dedicated profile, or configured to be skipped in CI.

@Test
@Tag("performance") // can be run separately via Gradle/Maven
@DisabledInNativeImage // or use @DisabledIfEnvironmentVariable
void comparison() throws InterruptedException {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/test/java/com/loopers/infrastructure/like/LikeWritePerformanceTest.java`
around lines 145 - 146, The assertion comparing mvMed and denormMed is flaky in
CI; update the performance test (the test method named comparison in
LikeWritePerformanceTest that uses mvMed and denormMed) so it is excluded from
normal CI runs: add a performance-only marker (e.g., `@Tag`("performance")) and
either annotate with a conditional disable (e.g., `@DisabledIfEnvironmentVariable`
or `@DisabledInNativeImage`) or move the test into a separate performance
profile/suite so CI can skip it by default; ensure the test runner configuration
(Gradle/Maven) respects the new tag or profile.
apps/commerce-api/src/main/java/com/loopers/infrastructure/like/LikeReaderImpl.java (1)

40-45: Improve type safety of the Object[] casts

The countGroupByTargetType results are cast directly from Object[]. Depending on the DB driver or Hibernate version, COUNT results may come back as Long or BigInteger, creating a ClassCastException risk.

♻️ Cast safely through Number
     @Override
     public List<LikeCountProjection> countAllByTargetType(LikeTargetType targetType) {
         return likeJpaRepository.countGroupByTargetType(targetType).stream()
-            .map(row -> new LikeCountProjection((Long) row[0], (Long) row[1]))
+            .map(row -> new LikeCountProjection(
+                ((Number) row[0]).longValue(),
+                ((Number) row[1]).longValue()))
             .toList();
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/main/java/com/loopers/infrastructure/like/LikeReaderImpl.java`
around lines 40 - 45, countAllByTargetType in LikeReaderImpl currently casts
Object[] elements directly to Long, which can throw ClassCastException when the
DB returns other numeric types; change the mapping to cast to Number (e.g.,
Number cnt = (Number) row[0]; Number id = (Number) row[1];) handle possible
nulls, then call cnt.longValue() / id.longValue() (or Long.valueOf(...)) when
constructing the LikeCountProjection; update the mapping expression that
consumes likeJpaRepository.countGroupByTargetType(targetType) to use Number and
longValue() instead of direct (Long) casts.
apps/commerce-api/src/test/java/com/loopers/domain/like/LikeServiceTest.java (1)

130-146: Missing test cases for the new methods (countLikes, countAllLikes)

Implementations of countByTargetTypeAndTargetId and countAllByTargetType were added to FakeLikeReader, but there are no test cases verifying them.

From an operational standpoint, tests are needed for the following scenarios:

  1. Returns 0 when no likes exist
  2. Aggregation accuracy across multiple targets
  3. Correct filtering by targetType
💚 Example test cases
@DisplayName("좋아요 수를 조회할 때, ")
@Nested
class CountLikes {

    @DisplayName("좋아요가 없으면, 0을 반환한다.")
    @Test
    void returnsZero_whenNoLikes() {
        // Act
        int result = likeService.countLikes(LikeTargetType.PRODUCT, 100L);

        // Assert
        assertThat(result).isZero();
    }

    @DisplayName("좋아요가 있으면, 정확한 수를 반환한다.")
    @Test
    void returnsCount_whenLikesExist() {
        // Arrange
        fakeLikeReader.addLike(Like.create(1L, LikeTargetType.PRODUCT, 100L));
        fakeLikeReader.addLike(Like.create(2L, LikeTargetType.PRODUCT, 100L));

        // Act
        int result = likeService.countLikes(LikeTargetType.PRODUCT, 100L);

        // Assert
        assertThat(result).isEqualTo(2);
    }
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/commerce-api/src/test/java/com/loopers/domain/like/LikeServiceTest.java`
around lines 130 - 146, Add unit tests that validate the new FakeLikeReader
methods: exercise likeService.countLikes(...) (backed by
FakeLikeReader.countByTargetTypeAndTargetId) and likeService.countAllLikes(...)
(backed by countAllByTargetType) with the following scenarios: (1) when no likes
exist assert zero is returned; (2) when multiple likes exist for the same target
assert the exact count is returned; (3) when likes exist for multiple targets
and types assert aggregation and targetType filtering are correct. Use
fakeLikeReader.addLike(Like.create(...)) to arrange likes and assert results via
assertions (e.g., assertThat(...).isEqualTo(...)/isZero()) to cover both
per-target and aggregate behaviors.
apps/commerce-api/src/main/java/com/loopers/support/cache/ProductCacheManager.java (2)

58-62: The CaffeineCache cast can cause runtime errors.

getProductListNativeCache() casts directly to CaffeineCache. In test environments, or if the project later switches to another CacheManager implementation (e.g., Redis), this throws a ClassCastException.

🛡️ Suggested defensive type check
 private Cache<Object, Object> getProductListNativeCache() {
     org.springframework.cache.Cache cache = cacheManager.getCache("productList");
     if (cache == null) return null;
+    if (!(cache instanceof CaffeineCache)) {
+        return null;
+    }
     return ((CaffeineCache) cache).getNativeCache();
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/main/java/com/loopers/support/cache/ProductCacheManager.java`
around lines 58 - 62, getProductListNativeCache() currently casts the Spring
Cache to CaffeineCache directly which can throw ClassCastException in tests or
if another CacheManager (e.g., Redis) is used; instead, check the cache instance
type returned from cacheManager.getCache("productList") (use instanceof
CaffeineCache) before casting, and handle the non-Caffeine case safely (return
null or log/throw a clear exception) so the method returns a valid
Cache<Object,Object> only when the native cache is a CaffeineCache.

29-32: The dependency on the cache-key pattern is implicit.

The keyStr.startsWith(brandPrefix) approach is tightly coupled to the cache-key generation logic; if the key pattern changes, the invalidation logic must change with it.

Making the cache-key patterns explicit as constants (or at least documenting them) would improve maintainability. Current patterns: "{brandId}:{sortType}:{page}:{size}" and "all:{sortType}:{page}:{size}".
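One way to make that coupling explicit is a small key helper. The sketch below assumes the "{brandId}:{sortType}:{page}:{size}" / "all:{sortType}:{page}:{size}" patterns noted above; ProductCacheKey and its method names are hypothetical, not from the PR:

```java
// Hypothetical helper centralizing the cache-key patterns
// "{brandId}:{sortType}:{page}:{size}" and "all:{sortType}:{page}:{size}".
// ProductCacheKey and its method names are illustrative, not PR code.
final class ProductCacheKey {
    private static final String ALL_PREFIX = "all:";

    private ProductCacheKey() {}

    static String forBrand(long brandId, String sortType, int page, int size) {
        return brandId + ":" + sortType + ":" + page + ":" + size;
    }

    static String forAll(String sortType, int page, int size) {
        return ALL_PREFIX + sortType + ":" + page + ":" + size;
    }

    /** Prefix used when evicting every list entry of a single brand. */
    static String brandPrefix(long brandId) {
        return brandId + ":";
    }

    static boolean isForBrand(String key, long brandId) {
        return key.startsWith(brandPrefix(brandId));
    }

    static boolean isForAll(String key) {
        return key.startsWith(ALL_PREFIX);
    }
}
```

The invalidation predicate could then read nativeCache.asMap().keySet().removeIf(k -> ProductCacheKey.isForBrand(k.toString(), brandId)), keeping the pattern in one place.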

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/main/java/com/loopers/support/cache/ProductCacheManager.java`
around lines 29 - 32, The current removal predicate in
nativeCache.asMap().keySet().removeIf(...) relies on implicit string patterns
(keyStr.startsWith(brandPrefix) / allPrefix); extract the key formats into
explicit constants (e.g., BRAND_KEY_PATTERN and ALL_KEY_PATTERN) or provide a
helper method like ProductCacheKey.isProductCacheKey(key) that encapsulates the
parsing logic and the known patterns ("{brandId}:{sortType}:{page}:{size}" and
"all:{sortType}:{page}:{size}"), then replace the inline startsWith checks with
calls to that constant-backed helper (referencing
nativeCache.asMap().keySet().removeIf, brandPrefix, allPrefix, and the new
ProductCacheKey helper) so the patterns are documented and maintained in one
place.
apps/commerce-api/src/main/java/com/loopers/support/config/CacheConfig.java (1)

22-31: recordStats() alone does not enable monitoring.

recordStats() collects statistics inside Caffeine, but exposing them through Actuator/Micrometer requires additional wiring. To monitor cache hit rates in production, register CaffeineCacheMetrics.

📊 Example: exposing Micrometer metrics
@Bean
public CacheManager cacheManager(MeterRegistry meterRegistry) {
    SimpleCacheManager cacheManager = new SimpleCacheManager();
    
    Cache<Object, Object> productDetailCache = Caffeine.newBuilder()
        .expireAfterWrite(Duration.ofMinutes(10))
        .maximumSize(1000)
        .recordStats()
        .build();
    CaffeineCacheMetrics.monitor(meterRegistry, productDetailCache, "productDetail");
    
    // ... handle productList the same way
    
    cacheManager.setCaches(List.of(
        new CaffeineCache("productDetail", productDetailCache),
        // ...
    ));
    return cacheManager;
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/commerce-api/src/main/java/com/loopers/support/config/CacheConfig.java`
around lines 22 - 31, The current caches call recordStats() but do not expose
those metrics to Micrometer/Actuator; update the CacheConfig's cacheManager
setup to build Caffeine Cache instances (use
Caffeine.newBuilder().recordStats().build() into variables for productDetail and
productList), register them with CaffeineCacheMetrics.monitor(meterRegistry,
cacheInstance, "productDetail"/"productList"), and then wrap those instances
with new CaffeineCache(...) when calling cacheManager.setCaches(...); ensure the
method signature for cacheManager accepts a MeterRegistry so metrics can be
registered.
apps/commerce-api/src/main/java/com/loopers/CommerceApiApplication.java (1)

10-10: Risk of duplicate scheduler execution in multi-instance environments.

When multiple instances are deployed in production, LikeCountSyncScheduler runs concurrently on every instance. The like-count sync is idempotent, so data consistency is not at risk, but it creates unnecessary DB load.

Possible future improvements:

  • Apply ShedLock or another distributed lock so the job runs on a single instance only
  • Apply a leader-election pattern
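The ShedLock option can be sketched roughly as below; the dependency coordinates, lock name, and durations are illustrative assumptions, not part of this PR:

```java
// Illustrative sketch only — coordinates, lock name, and durations are assumptions.
// build.gradle.kts:
//   implementation("net.javacrumbs.shedlock:shedlock-spring:<version>")
//   implementation("net.javacrumbs.shedlock:shedlock-provider-jdbc-template:<version>")

@Configuration
@EnableSchedulerLock(defaultLockAtMostFor = "10m")
class SchedulerLockConfig {
    @Bean
    LockProvider lockProvider(DataSource dataSource) {
        return new JdbcTemplateLockProvider(new JdbcTemplate(dataSource));
    }
}

// In LikeCountSyncScheduler — only one instance acquires the lock per run:
@Scheduled(fixedDelay = 300_000)
@SchedulerLock(name = "likeCountSync", lockAtMostFor = "4m", lockAtLeastFor = "1m")
public void syncLikeCounts() { /* existing sync logic */ }
```

JDBC-based locking also requires the shedlock table described in the library's documentation.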
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/commerce-api/src/main/java/com/loopers/CommerceApiApplication.java` at
line 10, In a multi-instance environment LikeCountSyncScheduler currently runs
concurrently on every instance, causing unnecessary DB load, so apply a
distributed lock: add the ShedLock dependency to the project, create a
configuration class that registers a LockProvider bean (JdbcTemplateLockProvider
or DataSource-based), then apply `@SchedulerLock` to the scheduler method of
LikeCountSyncScheduler (e.g., syncLikeCounts or the relevant `@Scheduled` method)
so it executes on a single instance only; keep `@EnableScheduling` on
CommerceApiApplication if needed and have the configuration class reference the
DataSource/transaction beans.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In
`@apps/commerce-api/src/main/java/com/loopers/application/like/LikeCountSyncScheduler.java`:
- Around line 34-53: The current syncProductLikeCounts and syncBrandLikeCounts
only update targets returned by likeService.countAllLikes and never reset
targets whose last like was removed, leaving stale positive likeCount values;
change both methods to first bulk-reset existing rows with likeCount > 0 that
are not present in the current counts to 0 (e.g., via
productService.resetLikeCountsNotIn(Set<Long> ids) and
brandService.resetLikeCountsNotIn(Set<Long> ids)), then apply updates from
counts using productService.updateLikeCount and brandService.updateLikeCount,
and always call productCacheManager.evictAllProductList / evictAllProductDetail
(and equivalent brand cache invalidation) even when counts.isEmpty(); also add
an E2E test that performs like -> sync -> unlike -> sync and asserts
product/brand detail and LIKES_DESC listing show 0.
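The bulk-reset step described above reduces to a set difference between targets that currently have likeCount > 0 and targets present in the fresh aggregation. A minimal sketch (LikeCountSyncSketch and idsToReset are illustrative names, not PR code):

```java
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch of the reset-then-update ordering: ids that had a positive likeCount
// but are absent from the fresh aggregation must be reset to 0, otherwise
// stale counts survive after the last like on a target is removed.
// LikeCountSyncSketch and idsToReset are hypothetical names.
final class LikeCountSyncSketch {

    /** Ids to reset = previously-positive ids minus ids present in the new counts. */
    static Set<Long> idsToReset(Set<Long> positiveIds, Map<Long, Integer> freshCounts) {
        Set<Long> toReset = new HashSet<>(positiveIds);
        toReset.removeAll(freshCounts.keySet());
        return toReset;
    }
}
```

The scheduler would first reset these ids to 0, then apply the fresh counts, and evict caches even when the fresh aggregation is empty.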

In
`@apps/commerce-api/src/main/java/com/loopers/application/product/CacheWarmupRunner.java`:
- Around line 27-28: In CacheWarmupRunner's catch block where you currently call
log.warn("[CacheWarmup] 워밍업 실패 - sort={}, error={}", sort, e.getMessage()),
preserve the exception cause by passing the exception object to the logger
(e.g., include e as the last parameter so the stacktrace is logged) and separate
the user-friendly message from exception details; update the log call to include
sort and a clear message while passing e to log.warn so the stacktrace is
captured. Also add a unit test for CacheWarmupRunner that simulates an exception
on the first sort and verifies the runner continues and attempts the second sort
to prevent regression.

In `@apps/commerce-api/src/main/java/com/loopers/domain/brand/BrandService.java`:
- Around line 64-67: The updateLikeCount(Long id, int likeCount) method
currently ignores the repository's update result; change it to capture the
returned updatedCount from brandRepository.updateLikeCount(id, likeCount), throw
a CoreException (so it will be handled by ApiControllerAdvice) when updatedCount
== 0 to surface synchronization failures (matching the existing behavior in
increaseLikeCount() and decreaseLikeCount()), and add unit tests for
updateLikeCount() to cover both successful update and nonexistent-brand
(0-updates) cases; reference updateLikeCount(),
brandRepository.updateLikeCount(...), increaseLikeCount(), decreaseLikeCount(),
CoreException, and ApiControllerAdvice in the changes.

In
`@apps/commerce-api/src/main/java/com/loopers/domain/product/ProductService.java`:
- Around line 79-82: The updateLikeCount method currently ignores the
repository's updated row count which can hide failed syncs; change
ProductService.updateLikeCount to capture the return value from
productRepository.updateLikeCount(id, likeCount), and if updatedCount == 0 throw
a CoreException (consistent with increaseLikeCount/decreaseLikeCount error
handling) so failures propagate to ApiControllerAdvice; also add
unit/integration tests for updateLikeCount covering both successful update and
nonexistent/deleted product (0-updates) scenarios.

In
`@apps/commerce-api/src/main/java/com/loopers/infrastructure/like/LikeJpaRepository.java`:
- Around line 17-20: The Like repository methods countByTargetTypeAndTargetId
and countGroupByTargetType are causing full-table scans; add a composite index
on (targetType, targetId) in the Like entity to optimize those queries. Update
the Like entity's `@Table` to include indexes = {`@Index`(name =
"idx_like_target_type_id", columnList = "targetType, targetId")} while
preserving the existing unique constraint (uk_like_member_target on member_id,
target_type, target_id); ensure the column names match the entity fields
(targetType, targetId) and run the corresponding DB migration so the new index
is created in the database.

In
`@apps/commerce-api/src/test/java/com/loopers/domain/brand/BrandServiceTest.java`:
- Around line 294-297: The fake repository's updateLikeCount method returns 1
for any existing key but doesn't respect the soft-delete condition (deletedAt IS
NULL) present in BrandJpaRepository; modify the Fake/Inline implementation of
updateLikeCount to verify both existence in the brands map and that the Brand
object's deletedAt is null before returning 1, otherwise return 0, and add unit
tests covering the non-existent and soft-deleted Brand cases to ensure
synchronization failure is detected (reference: updateLikeCount,
BrandJpaRepository, brands map, Brand.deletedAt).

In
`@apps/commerce-api/src/test/java/com/loopers/domain/product/ProductServiceTest.java`:
- Around line 324-327: The fake repository's updateLikeCount(Long id, int
likeCount) in ProductServiceTest.java always returns 1, hiding the 0-update
failure path; change the fake so updateLikeCount returns 1 only for IDs that
exist in the fake store and 0 for non-existent IDs (simulate the real repository
contract), and add two unit tests in ProductServiceTest covering the successful
like-update flow and the not-found (0 rows updated) flow to assert the service
handles both outcomes correctly.

In
`@apps/commerce-api/src/test/java/com/loopers/infrastructure/like/LikeWritePerformanceTest.java`:
- Around line 171-175: In LikeWritePerformanceTest (the try/catch around the
concurrent write task), don't swallow all exceptions; instead catch only
expected exceptions (e.g., duplicate or constraint exceptions) or at minimum log
unexpected exceptions before proceeding so failures are visible; update the
catch to handle specific exception types or rethrow after logging via the test
logger (keeping the done.countDown() in the finally block) so the test can
surface real issues like connection pool exhaustion or transaction errors.

In
`@apps/commerce-api/src/test/java/com/loopers/infrastructure/product/ProductReadOptimizationPerformanceTest.java`:
- Around line 435-438: The average(long[] times) method can divide by zero when
times.length == 0; add defensive handling at the start of average(long[] times)
to check for null or empty input and return a sensible value (e.g., 0) or throw
a clear IllegalArgumentException, and ensure callers expect that behavior;
update the method to return 0 for an empty array (or throw) and avoid performing
sum / times.length when times.length == 0.
- Around line 156-157: The COUNT(*) results are using Integer.class which can
overflow for large datasets; change the jdbcTemplate.queryForObject call(s)
(e.g., the call assigning finalCount) to request Long.class and update the
receiving variable type (finalCount) from Integer to Long so COUNT results are
consistently handled as Long across ProductReadOptimizationPerformanceTest;
ensure any downstream uses (like the log.info call) still work with Long.
- Around line 400-403: The clearAllCaches method risks NPE because
cacheManager.getCache(...) can return null; update clearAllCaches to null-safe
clear the caches by retrieving each cache via
cacheManager.getCache("productDetail") and cacheManager.getCache("productList")
and only call clear() if the returned cache is non-null (e.g., if (cache !=
null) cache.clear()), referencing the clearAllCaches method and
cacheManager.getCache calls to locate where to add the null checks.
- Around line 117-158: In ensureLikesData() change COUNT queries from Integer to
Long (use jdbcTemplate.queryForObject(..., Long.class)) to avoid overflow;
replace Math.random() with a deterministically seeded Random (new
Random(fixedSeed)) to make targetId generation reproducible; and tighten the
exception handling around jdbcTemplate.execute(...) so you only suppress
duplicate-key/constraint violations (check for
SQLIntegrityConstraintViolationException or inspect the SQLState/exception type)
while logging and rethrowing other exceptions (do not swallow all Exceptions).
Reference methods/variables: ensureLikesData, jdbcTemplate.queryForObject,
jdbcTemplate.execute, Math.random, and the try-catch block that currently
ignores exceptions.

In
`@apps/commerce-api/src/test/java/com/loopers/interfaces/api/CouponConcurrencyE2ETest.java`:
- Around line 239-243: The test currently accesses
stockResponse.getBody().data().stockQuantity() directly which can NPE or hide
HTTP failures; update the assertions around the ResponseEntity returned by
testRestTemplate.exchange to first assert status is 200 OK, assert
stockResponse.getBody() is not null and stockResponse.getBody().data() is not
null, then assert ProductV1Dto.StockResponse.stockQuantity() equals 0;
additionally add a separate test that uses testRestTemplate.exchange with a
non-existent productId and asserts a 404 status and appropriate error body
instead of relying on NPEs.
- Around line 52-56: tearDown() currently calls
databaseCleanUp.truncateAllTables() and then iterates
cacheManager.getCacheNames() to clear each cache, but if truncateAllTables()
throws the cache cleanup is skipped and cacheManager.getCache(name) can return
null causing NPEs; fix by running databaseCleanUp.truncateAllTables() inside a
try block and moving the cache cleanup into a finally block, and make the cache
clearing null-safe by checking cacheManager.getCache(name) != null before
calling clear() (keep using cacheManager.getCacheNames() to find caches).

In
`@apps/commerce-api/src/test/java/com/loopers/interfaces/api/LikeV1ApiE2ETest.java`:
- Around line 158-163: The test currently clears caches after calling
likeCountSyncScheduler.syncLikeCounts(), masking whether the scheduler actually
evicts productDetail/productList entries; update LikeV1ApiE2ETest to pre-warm
caches by issuing real requests via testRestTemplate.exchange to productDetail
and the LIKES_DESC productList endpoint before calling
likeCountSyncScheduler.syncLikeCounts(), remove the manual
cacheManager.getCache(...).clear() step, then run syncLikeCounts() and assert
the cached responses are updated (e.g., detail cache 0 -> like -> sync ->
re-fetch sees 1 and list cache ordering/counts update); add separate tests for
the "detail 0->1" flow and a "LIKES_DESC list cache refresh" flow to ensure
scheduler-triggered eviction/refresh is validated.

In
`@apps/commerce-api/src/test/java/com/loopers/support/BulkDataInitializer.java`:
- Around line 35-39: The current seeding checks overall product count
(existingCount via jdbcTemplate.queryForObject and TOTAL_PRODUCTS) and then
generates products across all brands (brand creation/loop logic around the block
at lines ~76-81), which is non-idempotent; change the logic to target a named
seed brand set (e.g., add a seedPrefix or runId used when creating brands and
product records), compute existing count scoped to that seed set (query COUNT(*)
FROM product WHERE brand_name LIKE seedPrefix% or brand_id IN (seedBrandIds))
instead of the global TOTAL_PRODUCTS, and only create brands/products for that
explicit seed set (use the same seedPrefix/runId when inserting in the brand
creation method and when generating products in the product generation loop),
ensuring subsequent runs check the seed-scoped count and only insert the delta
to reach the target so the process is idempotent.

---

Outside diff comments:
In
`@apps/commerce-api/src/main/java/com/loopers/application/like/LikeFacade.java`:
- Around line 32-41: toggleProductLike (and likewise toggleBrandLike) updates
like counts but does not invalidate related caches, causing stale productList
(especially LIKES_DESC first page) and productDetail views until the periodic
LikeCountSyncScheduler runs; after successfully toggling (in
LikeFacade.toggleProductLike and toggleBrandLike) call
productCacheManager.evictLikesSortFirstPage() to immediately evict the
popular-sort first-page cache and also evict the specific product detail cache
(e.g., productCacheManager.evictProductDetail(productId) or the equivalent
method) so UI queries reflect the new like state instantly.

In `@apps/commerce-api/src/main/java/com/loopers/domain/coupon/MemberCoupon.java`:
- Around line 10-22: The `@Table` definition on MemberCoupon declares a composite
unique constraint uniqueConstraints = {"member_id","coupon_id"} and a separate
index idx_membercoupon_member which is redundant; remove the redundant
`@Index`(name = "idx_membercoupon_member", columnList = "member_id") from the
`@Table`(indexes=...) so only the UNIQUE(member_id,coupon_id) remains, and if you
need a different access pattern replace it with a purpose-built index; after the
change run EXPLAIN on your main member_id queries and compare write latency
(coupon issue/use p95) to ensure no behavioral regressions.

In
`@apps/commerce-api/src/test/java/com/loopers/domain/like/LikeServiceTest.java`:
- Around line 3-8: Remove the duplicate import of PageResult in the
LikeServiceTest.java imports: keep a single import statement for
com.loopers.domain.PageResult and delete the redundant one so there is only one
PageResult import in the LikeServiceTest class.

---

Nitpick comments:
In `@apps/commerce-api/build.gradle.kts`:
- Around line 22-24: The Caffeine dependency
implementation("com.github.ben-manes.caffeine:caffeine") lacks a visible version
source; add a short comment next to that declaration in build.gradle.kts stating
that the Caffeine version is managed by the Spring Boot BOM
(spring-boot-starter-cache) and controlled by the dependency-management plugin
for Spring Boot 3.4.4, and also add an entry in gradle.properties (e.g.,
CAFFEINE_VERSION_MANAGED_BY_BOM=true or a note) to surface this fact; finally,
update CI to emit a gradle dependencyReport or dependency lock step so the
resolved Caffeine version is recorded for operational traceability.

In `@apps/commerce-api/src/main/java/com/loopers/CommerceApiApplication.java`:
- Line 10: 현재 다중 인스턴스 환경에서 LikeCountSyncScheduler가 각 인스턴스에서 동시에 실행되어 불필요한 DB 부하가
발생할 수 있으므로 분산 락을 적용하세요: 프로젝트에 ShedLock 의존성을 추가하고,
LockProvider(JdbcTemplateLockProvider 또는 DataSource 기반)를 제공하는 빈을 등록하는 설정 클래스를
생성한 다음 LikeCountSyncScheduler의 스케줄러 메서드(예: 메서드명: syncLikeCounts 또는 해당 `@Scheduled`
메서드)에 `@SchedulerLock을` 적용하여 단일 인스턴스에서만 실행되도록 하세요; 필요시 CommerceApiApplication의
`@EnableScheduling` 유지하고 설정 클래스에서 데이터소스/트랜잭션 빈 등을 참조하도록 구성하세요.

In `@apps/commerce-api/src/main/java/com/loopers/domain/like/LikeService.java`:
- Around line 37-45: Remove the `@Transactional`(readOnly = true) annotations from
the domain methods countLikes and countAllLikes in LikeService so transaction
demarcation is not enforced at the domain layer; instead ensure callers (e.g.,
LikeFacade.toggleProductLike) start/own transactions at the application layer
with appropriate propagation and readOnly settings. Locate the two methods named
countLikes(LikeTargetType, Long) and countAllLikes(LikeTargetType) in
LikeService and delete the `@Transactional` annotations, leaving their logic
unchanged.

In
`@apps/commerce-api/src/main/java/com/loopers/infrastructure/like/LikeReaderImpl.java`:
- Around line 40-45: countAllByTargetType in LikeReaderImpl currently casts
Object[] elements directly to Long, which can throw ClassCastException when the
DB returns other numeric types; change the mapping to cast to Number (e.g.,
Number cnt = (Number) row[0]; Number id = (Number) row[1];) handle possible
nulls, then call cnt.longValue() / id.longValue() (or Long.valueOf(...)) when
constructing the LikeCountProjection; update the mapping expression that
consumes likeJpaRepository.countGroupByTargetType(targetType) to use Number and
longValue() instead of direct (Long) casts.

In
`@apps/commerce-api/src/main/java/com/loopers/support/cache/ProductCacheManager.java`:
- Around line 58-62: getProductListNativeCache() currently casts the Spring
Cache to CaffeineCache directly which can throw ClassCastException in tests or
if another CacheManager (e.g., Redis) is used; instead, check the cache instance
type returned from cacheManager.getCache("productList") (use instanceof
CaffeineCache) before casting, and handle the non-Caffeine case safely (return
null or log/throw a clear exception) so the method returns a valid
Cache<Object,Object> only when the native cache is a CaffeineCache.
- Around line 29-32: The current removal predicate in
nativeCache.asMap().keySet().removeIf(...) relies on implicit string patterns
(keyStr.startsWith(brandPrefix) / allPrefix); extract the key formats into
explicit constants (e.g., BRAND_KEY_PATTERN and ALL_KEY_PATTERN) or provide a
helper method like ProductCacheKey.isProductCacheKey(key) that encapsulates the
parsing logic and the known patterns ("{brandId}:{sortType}:{page}:{size}" and
"all:{sortType}:{page}:{size}"), then replace the inline startsWith checks with
calls to that constant-backed helper (referencing
nativeCache.asMap().keySet().removeIf, brandPrefix, allPrefix, and the new
ProductCacheKey helper) so the patterns are documented and maintained in one
place.

In `@apps/commerce-api/src/main/java/com/loopers/support/config/CacheConfig.java`:
- Around line 22-31: The current caches call recordStats() but do not expose
those metrics to Micrometer/Actuator; update the CacheConfig's cacheManager
setup to build Caffeine Cache instances (use
Caffeine.newBuilder().recordStats().build() into variables for productDetail and
productList), register them with CaffeineCacheMetrics.monitor(meterRegistry,
cacheInstance, "productDetail"/"productList"), and then wrap those instances
with new CaffeineCache(...) when calling cacheManager.setCaches(...); ensure the
method signature for cacheManager accepts a MeterRegistry so metrics can be
registered.

In
`@apps/commerce-api/src/test/java/com/loopers/domain/like/LikeServiceTest.java`:
- Around line 130-146: Add unit tests that validate the new FakeLikeReader
methods: exercise likeService.countLikes(...) (backed by
FakeLikeReader.countByTargetTypeAndTargetId) and likeService.countAllLikes(...)
(backed by countAllByTargetType) with the following scenarios: (1) when no likes
exist assert zero is returned; (2) when multiple likes exist for the same target
assert the exact count is returned; (3) when likes exist for multiple targets
and types assert aggregation and targetType filtering are correct. Use
fakeLikeReader.addLike(Like.create(...)) to arrange likes and assert results via
assertions (e.g., assertThat(...).isEqualTo(...)/isZero()) to cover both
per-target and aggregate behaviors.

In
`@apps/commerce-api/src/test/java/com/loopers/infrastructure/like/LikeWritePerformanceTest.java`:
- Around line 149-186: The runConcurrentLikes method currently may leak threads
because executor.shutdown() is only called on the normal path; wrap the
ExecutorService lifecycle in a try-finally around the code that submits tasks
and waits (keeping ready.await(), start.countDown(), done.await() inside the
try), call executor.shutdown() in the finally, then call
executor.awaitTermination(...) with a reasonable timeout and, if it times out,
call executor.shutdownNow(); also handle InterruptedException by restoring the
thread interrupt status; update references: runConcurrentLikes, executor,
shutdown, awaitTermination, shutdownNow, ready/start/done latches and
successCount.
- Around line 145-146: The assertion comparing mvMed and denormMed is flaky in
CI; update the performance test (the test method named comparison in
LikeWritePerformanceTest that uses mvMed and denormMed) so it is excluded from
normal CI runs: add a performance-only marker (e.g., `@Tag("performance")`) and
either annotate with a conditional disable (e.g., `@DisabledIfEnvironmentVariable`
or `@DisabledInNativeImage`) or move the test into a separate performance
profile/suite so CI can skip it by default; ensure the test runner configuration
(Gradle/Maven) respects the new tag or profile.
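The executor lifecycle the first item describes can be sketched as follows, assuming a generic task runner in place of runConcurrentLikes; the latch names mirror the prompt but the rest is illustrative.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ConcurrentRunSketch {

    // Runs `threads` copies of `task` simultaneously and returns the success count.
    // The try-finally guarantees the pool is shut down even if an await fails.
    public static int runConcurrent(int threads, Runnable task) {
        ExecutorService executor = Executors.newFixedThreadPool(threads);
        AtomicInteger successCount = new AtomicInteger();
        CountDownLatch ready = new CountDownLatch(threads);
        CountDownLatch start = new CountDownLatch(1);
        CountDownLatch done = new CountDownLatch(threads);
        try {
            for (int i = 0; i < threads; i++) {
                executor.submit(() -> {
                    ready.countDown();
                    try {
                        start.await(); // all workers fire together
                        task.run();
                        successCount.incrementAndGet();
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt(); // restore interrupt status
                    } finally {
                        done.countDown();
                    }
                });
            }
            ready.await();
            start.countDown();
            done.await();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException("interrupted while coordinating workers", e);
        } finally {
            executor.shutdown();
            try {
                if (!executor.awaitTermination(10, TimeUnit.SECONDS)) {
                    executor.shutdownNow(); // force-stop stragglers after the timeout
                }
            } catch (InterruptedException e) {
                executor.shutdownNow();
                Thread.currentThread().interrupt();
            }
        }
        return successCount.get();
    }

    public static void main(String[] args) {
        System.out.println(runConcurrent(4, () -> {})); // 4
    }
}
```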

In
`@apps/commerce-api/src/test/java/com/loopers/infrastructure/product/ProductReadOptimizationPerformanceTest.java`:
- Around line 429-433: The adminEntity() helper currently hardcodes the
authentication header ("X-Loopers-Ldap" -> "loopers.admin"); extract the header
key and value into a shared test constant or test configuration (e.g.,
TestConstants.ADMIN_LDAP or application-test property) and update adminEntity()
to read from that constant/config instead of the literal string; this keeps the
test data centralized and easier to change and maintain (refer to adminEntity(),
"X-Loopers-Ldap", and "loopers.admin" when making the change).
- Around line 405-417: In dropProductIndexes() and createProductIndexes(), stop
swallowing all exceptions from jdbcTemplate.execute; catch only expected SQL
states for "index does not exist" or "index already exists" and ignore those,
but log and rethrow or fail the test for any other exceptions (e.g., permission
or syntax errors). Locate the jdbcTemplate.execute calls in dropProductIndexes
and createProductIndexes, replace the generic catch(Exception ignored) with
logic that inspects the thrown exception (SQLException or DataAccessException),
filters by SQL state or error code for benign cases, and uses the test logger to
error/throw for unexpected failures so failures are visible during test runs.
- Around line 34-37: ProductReadOptimizationPerformanceTest modifies DB indexes
and inserts into the likes table without guaranteed cleanup; add deterministic
teardown to avoid cross-test pollution by implementing an `@AfterEach` (or
`@AfterAll` if shared) method in the ProductReadOptimizationPerformanceTest class
that restores any dropped/created indexes and truncates or deletes test rows
from the likes table (or resets the schema used), and/or annotate the test to
use a disposable test schema/context (e.g., `@DirtiesContext` or dedicated test
datasource) so index changes and inserted data are reliably reverted even if the
test or IndexPerformance fails; operate on the same named index and likes table
symbols used in the test to locate the cleanup logic.
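The selective exception handling requested for dropProductIndexes()/createProductIndexes() can be sketched in plain JDBC terms. The MySQL error codes used (1061 duplicate key name, 1091 cannot drop index) are an assumption about the target database; adjust them for the vendor actually in use.

```java
import java.sql.SQLException;

public class IndexDdlErrorFilter {
    // MySQL vendor codes (assumed): 1061 = ER_DUP_KEYNAME, 1091 = ER_CANT_DROP_FIELD_OR_KEY
    private static final int ER_DUP_KEYNAME = 1061;
    private static final int ER_CANT_DROP_FIELD_OR_KEY = 1091;

    @FunctionalInterface
    public interface SqlAction { void run() throws SQLException; }

    // "index already exists" / "index does not exist" are benign for idempotent DDL.
    public static boolean isBenign(SQLException e) {
        int code = e.getErrorCode();
        return code == ER_DUP_KEYNAME || code == ER_CANT_DROP_FIELD_OR_KEY;
    }

    // Swallow only the benign states; rethrow anything else (permissions, syntax)
    // so unexpected failures stay visible in the test run.
    public static void execute(SqlAction action) throws SQLException {
        try {
            action.run();
        } catch (SQLException e) {
            if (!isBenign(e)) {
                throw e;
            }
            // benign: ignore and continue
        }
    }

    public static void main(String[] args) throws SQLException {
        // A benign duplicate-index error is swallowed:
        execute(() -> { throw new SQLException("Duplicate key name", "42000", 1061); });
        System.out.println("benign error ignored");
    }
}
```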

In
`@apps/commerce-api/src/test/java/com/loopers/support/BulkDataInitializer.java`:
- Around line 83-94: The test data generator uses ThreadLocalRandom.current()
which can't be seeded, so make the data generation deterministic by replacing
ThreadLocalRandom.current() with a seedable generator (e.g., new Random(SEED) or
new SplittableRandom(SEED)) declared as a constant in BulkDataInitializer and
used by the jdbcTemplate.batchUpdate setValues lambda; update calls that use
nextInt/nextLong/nextDouble to the chosen API and keep the variable name random
to minimize diff noise. Also add a constant long SEED and a small
reproducibility test that inserts into an empty DB twice and compares hashes of
sample rows (e.g., first 10 products per brand) to ensure identical
distributions.
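The seedable-generator idea can be sketched minimally; the SEED value and field names below are illustrative, not taken from the PR.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.SplittableRandom;

public class SeededProductData {
    // Fixed seed makes repeated runs produce identical rows, unlike
    // ThreadLocalRandom.current(), which cannot be seeded.
    static final long SEED = 20260315L;

    public static List<Long> samplePrices(int count) {
        SplittableRandom random = new SplittableRandom(SEED);
        List<Long> prices = new ArrayList<>(count);
        for (int i = 0; i < count; i++) {
            prices.add(1_000L + random.nextLong(99_000)); // 1,000 .. 99,999
        }
        return prices;
    }

    public static void main(String[] args) {
        // Two independent generations are identical thanks to the fixed seed.
        System.out.println(samplePrices(10).equals(samplePrices(10))); // true
    }
}
```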

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 7ab6c849-2258-4801-aa1f-16930080841b

📥 Commits

Reviewing files that changed from the base of the PR and between 3393c06 and ef18e94.

📒 Files selected for processing (42)
  • apps/commerce-api/build.gradle.kts
  • apps/commerce-api/src/main/java/com/loopers/CommerceApiApplication.java
  • apps/commerce-api/src/main/java/com/loopers/application/like/LikeCountSyncScheduler.java
  • apps/commerce-api/src/main/java/com/loopers/application/like/LikeFacade.java
  • apps/commerce-api/src/main/java/com/loopers/application/product/CacheWarmupRunner.java
  • apps/commerce-api/src/main/java/com/loopers/application/product/ProductFacade.java
  • apps/commerce-api/src/main/java/com/loopers/application/product/ProductInfo.java
  • apps/commerce-api/src/main/java/com/loopers/domain/brand/BrandRepository.java
  • apps/commerce-api/src/main/java/com/loopers/domain/brand/BrandService.java
  • apps/commerce-api/src/main/java/com/loopers/domain/coupon/MemberCoupon.java
  • apps/commerce-api/src/main/java/com/loopers/domain/like/LikeCountProjection.java
  • apps/commerce-api/src/main/java/com/loopers/domain/like/LikeReader.java
  • apps/commerce-api/src/main/java/com/loopers/domain/like/LikeService.java
  • apps/commerce-api/src/main/java/com/loopers/domain/order/Order.java
  • apps/commerce-api/src/main/java/com/loopers/domain/order/OrderItem.java
  • apps/commerce-api/src/main/java/com/loopers/domain/product/Product.java
  • apps/commerce-api/src/main/java/com/loopers/domain/product/ProductRepository.java
  • apps/commerce-api/src/main/java/com/loopers/domain/product/ProductService.java
  • apps/commerce-api/src/main/java/com/loopers/infrastructure/brand/BrandJpaRepository.java
  • apps/commerce-api/src/main/java/com/loopers/infrastructure/brand/BrandRepositoryImpl.java
  • apps/commerce-api/src/main/java/com/loopers/infrastructure/like/LikeJpaRepository.java
  • apps/commerce-api/src/main/java/com/loopers/infrastructure/like/LikeReaderImpl.java
  • apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductJpaRepository.java
  • apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductRepositoryImpl.java
  • apps/commerce-api/src/main/java/com/loopers/interfaces/api/like/LikeV1Dto.java
  • apps/commerce-api/src/main/java/com/loopers/interfaces/api/product/ProductV1Controller.java
  • apps/commerce-api/src/main/java/com/loopers/interfaces/api/product/ProductV1Dto.java
  • apps/commerce-api/src/main/java/com/loopers/support/cache/ProductCacheManager.java
  • apps/commerce-api/src/main/java/com/loopers/support/config/CacheConfig.java
  • apps/commerce-api/src/test/java/com/loopers/domain/brand/BrandServiceTest.java
  • apps/commerce-api/src/test/java/com/loopers/domain/like/LikeServiceTest.java
  • apps/commerce-api/src/test/java/com/loopers/domain/product/ProductServiceTest.java
  • apps/commerce-api/src/test/java/com/loopers/infrastructure/like/LikeWritePerformanceTest.java
  • apps/commerce-api/src/test/java/com/loopers/infrastructure/product/ProductReadOptimizationPerformanceTest.java
  • apps/commerce-api/src/test/java/com/loopers/interfaces/api/CouponConcurrencyE2ETest.java
  • apps/commerce-api/src/test/java/com/loopers/interfaces/api/LikeConcurrencyE2ETest.java
  • apps/commerce-api/src/test/java/com/loopers/interfaces/api/LikeV1ApiE2ETest.java
  • apps/commerce-api/src/test/java/com/loopers/interfaces/api/OrderConcurrencyE2ETest.java
  • apps/commerce-api/src/test/java/com/loopers/interfaces/api/OrderV1ApiE2ETest.java
  • apps/commerce-api/src/test/java/com/loopers/interfaces/api/ProductCacheE2ETest.java
  • apps/commerce-api/src/test/java/com/loopers/interfaces/api/ProductV1ApiE2ETest.java
  • apps/commerce-api/src/test/java/com/loopers/support/BulkDataInitializer.java
💤 Files with no reviewable changes (2)
  • apps/commerce-api/src/main/java/com/loopers/interfaces/api/like/LikeV1Dto.java
  • apps/commerce-api/src/main/java/com/loopers/application/product/ProductInfo.java

Comment on lines +34 to +53
private void syncProductLikeCounts() {
    List<LikeCountProjection> counts = likeService.countAllLikes(LikeTargetType.PRODUCT);
    for (LikeCountProjection projection : counts) {
        productService.updateLikeCount(projection.targetId(), (int) projection.count());
    }
    if (!counts.isEmpty()) {
        productCacheManager.evictAllProductList();
        productCacheManager.evictAllProductDetail();
        log.info("Product like-count sync complete: {} items", counts.size());
    }
}

private void syncBrandLikeCounts() {
    List<LikeCountProjection> counts = likeService.countAllLikes(LikeTargetType.BRAND);
    for (LikeCountProjection projection : counts) {
        brandService.updateLikeCount(projection.targetId(), (int) projection.count());
    }
    if (!counts.isEmpty()) {
        log.info("Brand like-count sync complete: {} items", counts.size());
    }

⚠️ Potential issue | 🔴 Critical

Targets whose last like was removed never get reset to 0.

The aggregation only returns targets with count > 0, so after the last unlike the product/brand ID disappears from counts. The current loop never touches those vanished targets, leaving likeCount at its previous value; in production the detail view, list view, and likes-descending sort keep showing wrong numbers. Worse, when counts is empty globally, the cache is not evicted either, so stale values linger even longer.

The fix is to bulk-reset any existing positive likeCount absent from this batch's results before overwriting with the aggregated values, and to run the cache invalidation even when counts is empty. For safety, also add an E2E test verifying that product/brand detail and the LIKES_DESC listing recover to 0 after like -> sync -> unlike -> sync.

Example fix direction
 private void syncProductLikeCounts() {
     List<LikeCountProjection> counts = likeService.countAllLikes(LikeTargetType.PRODUCT);
+    List<Long> syncedIds = counts.stream().map(LikeCountProjection::targetId).toList();
+    productService.resetMissingLikeCounts(syncedIds);
     for (LikeCountProjection projection : counts) {
         productService.updateLikeCount(projection.targetId(), (int) projection.count());
     }
-    if (!counts.isEmpty()) {
-        productCacheManager.evictAllProductList();
-        productCacheManager.evictAllProductDetail();
-        log.info("Product like-count sync complete: {} items", counts.size());
-    }
+    productCacheManager.evictAllProductList();
+    productCacheManager.evictAllProductDetail();
+    log.info("Product like-count sync complete: {} items", counts.size());
 }

Brand sync needs the same zero-count reset step.
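The reset-then-overwrite shape of this fix can be sketched in plain Java, with maps standing in for the product/brand tables; resetMissingLikeCounts and the repository layer are left out as assumptions.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class LikeCountSyncSketch {
    // likeCounts: current per-id counts (the "table"); aggregated: this batch's result.
    public static void sync(Map<Long, Integer> likeCounts, Map<Long, Integer> aggregated) {
        Set<Long> syncedIds = aggregated.keySet();
        // 1) reset ids whose last like was removed (present before, absent now)
        likeCounts.replaceAll((id, count) -> syncedIds.contains(id) ? count : 0);
        // 2) overwrite with the fresh aggregation
        likeCounts.putAll(aggregated);
    }

    public static void main(String[] args) {
        Map<Long, Integer> table = new HashMap<>(Map.of(1L, 5, 2L, 3));
        sync(table, Map.of(1L, 7)); // product 2's last like was removed
        System.out.println(table);   // id 1 -> 7, id 2 reset to 0
    }
}
```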

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/main/java/com/loopers/application/like/LikeCountSyncScheduler.java`
around lines 34 - 53, The current syncProductLikeCounts and syncBrandLikeCounts
only update targets returned by likeService.countAllLikes and never reset
targets whose last like was removed, leaving stale positive likeCount values;
change both methods to first bulk-reset existing rows with likeCount > 0 that
are not present in the current counts to 0 (e.g., via
productService.resetLikeCountsNotIn(Set<Long> ids) and
brandService.resetLikeCountsNotIn(Set<Long> ids)), then apply updates from
counts using productService.updateLikeCount and brandService.updateLikeCount,
and always call productCacheManager.evictAllProductList / evictAllProductDetail
(and equivalent brand cache invalidation) even when counts.isEmpty(); also add
an E2E test that performs like -> sync -> unlike -> sync and asserts
product/brand detail and LIKES_DESC listing show 0.

Comment on lines +27 to +28
        } catch (Exception e) {
            log.warn("[CacheWarmup] warmup failed - sort={}, error={}", sort, e.getMessage());

⚠️ Potential issue | 🟡 Minor

The warmup failure log loses the exception cause.

Logging only e.getMessage(), as now, drops null messages and nested causes, making it hard to diagnose warmup failures in production. The fix is to pass the exception object itself so the stack trace is logged too. A unit test that verifies the second sort is still attempted when the first sort throws would also guard against regression.

Example fix
-            } catch (Exception e) {
-                log.warn("[CacheWarmup] warmup failed - sort={}, error={}", sort, e.getMessage());
+            } catch (Exception e) {
+                log.warn("[CacheWarmup] warmup failed - sort={}", sort, e);
             }

As per coding guidelines, "preserve the cause when handling exceptions, and keep user-facing messages separate from log messages."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/main/java/com/loopers/application/product/CacheWarmupRunner.java`
around lines 27 - 28, In CacheWarmupRunner's catch block where you currently
call log.warn("[CacheWarmup] 워밍업 실패 - sort={}, error={}", sort, e.getMessage()),
preserve the exception cause by passing the exception object to the logger
(e.g., include e as the last parameter so the stacktrace is logged) and separate
the user-friendly message from exception details; update the log call to include
sort and a clear message while passing e to log.warn so the stacktrace is
captured. Also add a unit test for CacheWarmupRunner that simulates an exception
on the first sort and verifies the runner continues and attempts the second sort
to prevent regression.

Comment on lines +294 to +297
@Override
public int updateLikeCount(Long id, int likeCount) {
    return brands.containsKey(id) ? 1 : 0;
}

⚠️ Potential issue | 🟡 Minor

The fake repository does not honor the soft-delete condition.

The real BrandJpaRepository.updateLikeCount() includes a deletedAt IS NULL condition, so it should return 0 for a soft-deleted brand. This fake returns 1 whenever the key exists in the map, so a sync failure against a deleted brand never surfaces in tests. It is safer to check both existence and deletedAt, and to add cases for missing and soft-deleted brands.

🔧 Suggestion
         @Override
         public int updateLikeCount(Long id, int likeCount) {
-            return brands.containsKey(id) ? 1 : 0;
+            Brand brand = brands.get(id);
+            return brand != null && brand.getDeletedAt() == null ? 1 : 0;
         }

As per coding guidelines **/*Test*.java: check that unit tests cover boundary values, failure cases, and exception flows.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/test/java/com/loopers/domain/brand/BrandServiceTest.java`
around lines 294 - 297, The fake repository's updateLikeCount method returns 1
for any existing key but doesn't respect the soft-delete condition (deletedAt IS
NULL) present in BrandJpaRepository; modify the Fake/Inline implementation of
updateLikeCount to verify both existence in the brands map and that the Brand
object's deletedAt is null before returning 1, otherwise return 0, and add unit
tests covering the non-existent and soft-deleted Brand cases to ensure
synchronization failure is detected (reference: updateLikeCount,
BrandJpaRepository, brands map, Brand.deletedAt).

Comment on lines +435 to +438
private long average(long[] times) {
    long sum = 0;
    for (long t : times) sum += t;
    return sum / times.length;

⚠️ Potential issue | 🟡 Minor

An empty input array can trigger ArithmeticException.

In the average() method, a times.length of 0 causes a divide-by-zero exception. Defensive handling is needed.

🛠️ Suggested fix
 private long average(long[] times) {
+    if (times == null || times.length == 0) return 0;
     long sum = 0;
     for (long t : times) sum += t;
     return sum / times.length;
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/test/java/com/loopers/infrastructure/product/ProductReadOptimizationPerformanceTest.java`
around lines 435 - 438, The average(long[] times) method can divide by zero when
times.length == 0; add defensive handling at the start of average(long[] times)
to check for null or empty input and return a sensible value (e.g., 0) or throw
a clear IllegalArgumentException, and ensure callers expect that behavior;
update the method to return 0 for an empty array (or throw) and avoid performing
sum / times.length when times.length == 0.

Comment on lines +239 to +243
ResponseEntity<ApiResponse<ProductV1Dto.StockResponse>> stockResponse = testRestTemplate.exchange(
        "/api/v1/products/" + productId + "/stock", HttpMethod.GET, null,
        new ParameterizedTypeReference<>() {}
);
assertThat(productResponse.getBody().data().product().stockQuantity()).isEqualTo(0);
assertThat(stockResponse.getBody().data().stockQuantity()).isEqualTo(0);

⚠️ Potential issue | 🟡 Minor

Stock response checks lack status-code and null assertions, obscuring failure causes

Line 243 walks the response chain without first asserting success status and body validity, so a failure surfaces as an NPE that masks the real cause. Operationally this blurs the failure signal and increases debugging time. Assert the status code and body/data first, then verify the stock value. It is also worth adding a failure-case test asserting 404 when a non-existent productId is queried.

Suggested fix (diff)
 ResponseEntity<ApiResponse<ProductV1Dto.StockResponse>> stockResponse = testRestTemplate.exchange(
     "/api/v1/products/" + productId + "/stock", HttpMethod.GET, null,
     new ParameterizedTypeReference<>() {}
 );
+assertThat(stockResponse.getStatusCode()).isEqualTo(HttpStatus.OK);
+assertThat(stockResponse.getBody()).isNotNull();
+assertThat(stockResponse.getBody().data()).isNotNull();
 assertThat(stockResponse.getBody().data().stockQuantity()).isEqualTo(0);

As per coding guidelines: unit/integration tests must verify failure cases and exception flows.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/test/java/com/loopers/interfaces/api/CouponConcurrencyE2ETest.java`
around lines 239 - 243, The test currently accesses
stockResponse.getBody().data().stockQuantity() directly which can NPE or hide
HTTP failures; update the assertions around the ResponseEntity returned by
testRestTemplate.exchange to first assert status is 200 OK, assert
stockResponse.getBody() is not null and stockResponse.getBody().data() is not
null, then assert ProductV1Dto.StockResponse.stockQuantity() equals 0;
additionally add a separate test that uses testRestTemplate.exchange with a
non-existent productId and asserts a 404 status and appropriate error body
instead of relying on NPEs.

Comment on lines +35 to +39
Integer existingCount = jdbcTemplate.queryForObject("SELECT COUNT(*) FROM product", Integer.class);
if (existingCount != null && existingCount >= TOTAL_PRODUCTS) {
    log.info("Already {} product rows exist. Skipping.", existingCount);
    return;
}

⚠️ Potential issue | 🟠 Major

The seed logic has a non-deterministic re-run path, risking over-insertion and failures

Lines 35-39 skip based only on the total product count, and lines 76-81 generate products for every brand, so the target (exactly 1M rows) is not guaranteed when pre-existing brands are present or a previous run was interrupted. Operationally, the performance-test environment becomes non-deterministic and DB load spikes, undermining both benchmark credibility and test stability.
The fix is to identify the "seed target brands" explicitly (e.g., via a prefix/runId), generate products only for that brand set, and base the skip condition on the seed set's product count rather than the total, guaranteeing idempotency.

Example fix diff
 public class BulkDataInitializer implements ApplicationRunner {
+    private static final String SEED_BRAND_PREFIX = "bulk_seed_brand_";

     @Override
     public void run(ApplicationArguments args) {
-        Integer existingCount = jdbcTemplate.queryForObject("SELECT COUNT(*) FROM product", Integer.class);
+        Integer existingCount = jdbcTemplate.queryForObject(
+                "SELECT COUNT(*) " +
+                "FROM product p JOIN brand b ON p.brand_id = b.id " +
+                "WHERE b.name LIKE ?",
+                Integer.class,
+                SEED_BRAND_PREFIX + "%"
+        );
         if (existingCount != null && existingCount >= TOTAL_PRODUCTS) {
             log.info("Already {} product rows exist. Skipping.", existingCount);
             return;
         }
@@
-        List<Long> brandIds = jdbcTemplate.queryForList("SELECT id FROM brand ORDER BY id", Long.class);
+        List<Long> brandIds = jdbcTemplate.queryForList(
+                "SELECT id FROM brand WHERE name LIKE ? ORDER BY id LIMIT ?",
+                Long.class,
+                SEED_BRAND_PREFIX + "%",
+                BRAND_COUNT
+        );

At least two additional tests are recommended:

  1. An integration test verifying that exactly 1M seed rows are created even when ordinary brands/products already exist in the DB.
  2. An idempotency test verifying that re-running from an interrupted state (e.g., 400K rows) converges to the target without duplicates or over-insertion.

Also applies to: 76-81, 111-116

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/commerce-api/src/test/java/com/loopers/support/BulkDataInitializer.java`
around lines 35 - 39, The current seeding checks overall product count
(existingCount via jdbcTemplate.queryForObject and TOTAL_PRODUCTS) and then
generates products across all brands (brand creation/loop logic around the block
at lines ~76-81), which is non-idempotent; change the logic to target a named
seed brand set (e.g., add a seedPrefix or runId used when creating brands and
product records), compute existing count scoped to that seed set (query COUNT(*)
FROM product WHERE brand_name LIKE seedPrefix% or brand_id IN (seedBrandIds))
instead of the global TOTAL_PRODUCTS, and only create brands/products for that
explicit seed set (use the same seedPrefix/runId when inserting in the brand
creation method and when generating products in the product generation loop),
ensuring subsequent runs check the seed-scoped count and only insert the delta
to reach the target so the process is idempotent.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
apps/commerce-api/src/test/java/com/loopers/domain/brand/BrandServiceTest.java (1)

3-10: ⚠️ Potential issue | 🟡 Minor

A duplicate import exists.

com.loopers.domain.PageResult is declared on both Line 3 and Line 10. It still compiles, but the imports need cleanup.

🔧 Suggestion
 package com.loopers.domain.brand;

 import com.loopers.domain.PageResult;
 import com.loopers.support.error.CoreException;
 import com.loopers.support.error.ErrorType;
 import org.junit.jupiter.api.BeforeEach;
 import org.junit.jupiter.api.DisplayName;
 import org.junit.jupiter.api.Nested;
 import org.junit.jupiter.api.Test;
-import com.loopers.domain.PageResult;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/test/java/com/loopers/domain/brand/BrandServiceTest.java`
around lines 3 - 10, In BrandServiceTest remove the duplicate import of
com.loopers.domain.PageResult (it's declared twice); keep a single import line
for PageResult and delete the redundant one so imports are unique and clean.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@apps/commerce-api/src/main/java/com/loopers/domain/brand/BrandService.java`:
- Around line 64-70: LikeCountSyncScheduler currently calls
BrandService.updateLikeCount inside a loop without handling CoreException, so a
NOT_FOUND from a deleted brand breaks the whole batch; wrap each individual
updateLikeCount call in LikeCountSyncScheduler's loop (the calls at the lines
referencing BrandService and ProductService) with a try-catch that catches
CoreException (or Exception), logs the id and error (using the scheduler's
logger) and continues to the next item; also update
ProductService.updateLikeCount to mirror BrandService.updateLikeCount by
validating updatedCount and throwing NOT_FOUND when 0 to keep behavior
consistent; finally add a unit/integration test for LikeCountSyncScheduler that
includes deleted entities in the source set to assert the scheduler logs errors
but completes processing remaining items.

---

Outside diff comments:
In
`@apps/commerce-api/src/test/java/com/loopers/domain/brand/BrandServiceTest.java`:
- Around line 3-10: In BrandServiceTest remove the duplicate import of
com.loopers.domain.PageResult (it's declared twice); keep a single import line
for PageResult and delete the redundant one so imports are unique and clean.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: e2bec78b-fc41-429b-862c-9f3445f3200a

📥 Commits

Reviewing files that changed from the base of the PR and between ef18e94 and 9c90c29.

📒 Files selected for processing (2)
  • apps/commerce-api/src/main/java/com/loopers/domain/brand/BrandService.java
  • apps/commerce-api/src/test/java/com/loopers/domain/brand/BrandServiceTest.java

Comment on lines +64 to +70
@Transactional
public void updateLikeCount(Long id, int likeCount) {
    int updatedCount = brandRepository.updateLikeCount(id, likeCount);
    if (updatedCount == 0) {
        throw new CoreException(ErrorType.NOT_FOUND, "Brand not found.");
    }
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Description: check how exceptions are handled when LikeCountSyncScheduler calls BrandService.updateLikeCount

# Check whether the scheduler aborts the whole batch when an individual brand sync fails, or continues
ast-grep --pattern $'class LikeCountSyncScheduler {
  $$$
}'

# Check the exception-handling pattern
rg -n "updateLikeCount" --type java -A 5 -B 2

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 11263


An individual sync failure in LikeCountSyncScheduler aborts the whole batch, an operational outage risk.

The updateLikeCount method itself reflects the earlier review feedback correctly, but LikeCountSyncScheduler (lines 37, 49) calls it without exception handling. If deleted brands or products still have rows in the likes table, a NOT_FOUND exception breaks the loop and the remaining items are never synced.

Action items:

  1. Wrap each individual updateLikeCount call inside the scheduler's loop in a try-catch that logs the error and continues.
  2. Make ProductService.updateLikeCount validate updatedCount the same way BrandService does, for consistency.
  3. Add a scheduler test case covering behavior when deleted entities exist.
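The per-item error isolation from the first action item can be sketched generically; the updater callback below stands in for the service call and is an assumption, not the scheduler's actual code.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.LongConsumer;

public class BatchSyncSketch {
    // Each update runs in its own try-catch, so one failing id (e.g. a deleted
    // brand throwing NOT_FOUND) is recorded and skipped instead of aborting
    // the whole batch.
    public static List<Long> syncAll(List<Long> ids, LongConsumer updater) {
        List<Long> failed = new ArrayList<>();
        for (long id : ids) {
            try {
                updater.accept(id);
            } catch (RuntimeException e) {
                failed.add(id); // log-and-continue; stand-in for log.warn(...)
            }
        }
        return failed;
    }

    public static void main(String[] args) {
        List<Long> failed = syncAll(List.of(1L, 2L, 3L), id -> {
            if (id == 2L) throw new IllegalStateException("brand not found: " + id);
        });
        System.out.println(failed); // [2]
    }
}
```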
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/commerce-api/src/main/java/com/loopers/domain/brand/BrandService.java`
around lines 64 - 70, LikeCountSyncScheduler currently calls
BrandService.updateLikeCount inside a loop without handling CoreException, so a
NOT_FOUND from a deleted brand breaks the whole batch; wrap each individual
updateLikeCount call in LikeCountSyncScheduler's loop (the calls at the lines
referencing BrandService and ProductService) with a try-catch that catches
CoreException (or Exception), logs the id and error (using the scheduler's
logger) and continues to the next item; also update
ProductService.updateLikeCount to mirror BrandService.updateLikeCount by
validating updatedCount and throwing NOT_FOUND when 0 to keep behavior
consistent; finally add a unit/integration test for LikeCountSyncScheduler that
includes deleted entities in the source set to assert the scheduler logs errors
but completes processing remaining items.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

♻️ Duplicate comments (2)
apps/commerce-api/src/test/java/com/loopers/support/BulkDataInitializer.java (1)

91-93: ⚠️ Potential issue | 🟠 Major

A non-idempotent path remains where re-running after a partial execution over-inserts products

Operationally, the batch insert at Line 101 always inserts 5,000 rows per brand without considering the brand's existing count, so re-running after the seed stopped at 400K rows keeps inflating the data past the 1M target and destroys benchmark credibility.

The fix is to query each brand's current product count and insert only the remaining delta.

Example fix diff
@@
-        for (int brandIdx = 0; brandIdx < brandIds.size(); brandIdx++) {
+        for (int brandIdx = 0; brandIdx < brandIds.size(); brandIdx++) {
             Long brandId = brandIds.get(brandIdx);
             final int currentBrandIdx = brandIdx;
             var random = ThreadLocalRandom.current();
+            Integer existingPerBrand = jdbcTemplate.queryForObject(
+                    "SELECT COUNT(*) FROM product WHERE brand_id = ?",
+                    Integer.class,
+                    brandId
+            );
+            final int already = existingPerBrand == null ? 0 : existingPerBrand;
+            final int toInsert = Math.max(0, PRODUCTS_PER_BRAND - already);
+            if (toInsert == 0) {
+                continue;
+            }
 
             jdbcTemplate.batchUpdate(sql, new BatchPreparedStatementSetter() {
@@
-                    ps.setString(2, adj + " " + category + " #" + (currentBrandIdx * PRODUCTS_PER_BRAND + i + 1));
+                    ps.setString(2, adj + " " + category + " #" + (currentBrandIdx * PRODUCTS_PER_BRAND + already + i + 1));
@@
                 public int getBatchSize() {
-                    return PRODUCTS_PER_BRAND;
+                    return toInsert;
                 }
             });
 
-            totalInserted += PRODUCTS_PER_BRAND;
+            totalInserted += toInsert;

Two additional tests are needed.

  1. An integration test verifying that a re-run from a 400K-row seed state converges to exactly 1M rows.
  2. A test verifying that with brands partially filled (e.g., brand 1 at 5,000 rows, brand 2 at 1,200), a re-run brings each brand to exactly 5,000 rows.

Also applies to: 101-127

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/commerce-api/src/test/java/com/loopers/support/BulkDataInitializer.java`
around lines 91 - 93, BulkDataInitializer currently fetches brandIds via
jdbcTemplate.queryForList(...) and then always batch-inserts a fixed quantity
(5,000 rows) for every brand, leaving a non-idempotent over-insertion path on
re-run; to fix it, right after the brand lookup (List<Long> brandIds /
jdbcTemplate.queryForList), query each brand's current product count (e.g.,
SELECT COUNT(*) FROM product WHERE brand_id = ?), change the batch insert to
generate/insert only the delta between the target (e.g., 5,000) and the current
count, and skip the insert when the delta is 0 or less; also add integration
tests verifying (1) that a re-run from a 400K-row seed state converges to
exactly 1M total rows and (2) that partially filled brands are each topped up
to the target count.
apps/commerce-api/src/test/java/com/loopers/interfaces/api/CouponConcurrencyE2ETest.java (1)

247-251: ⚠️ Potential issue | 🟡 Minor

Assert HTTP success and body validity before verifying stock

Lines 247-251 currently dereference stockResponse.getBody().data().stockQuantity() directly, so when the endpoint returns 4xx/5xx or deserialization breaks, the failure appears as an NPE that hides the real cause. Operationally this makes it hard to separate API regressions from test-data problems, raising the cost of reproducing incidents. Assert the status code and body/data first, then verify the stock value. Additionally, a separate failure-case test should verify 404 and the error body for a non-existent productId.

Suggested fix (diff)
 ResponseEntity<ApiResponse<ProductV1Dto.StockResponse>> stockResponse = testRestTemplate.exchange(
     "/api/v1/products/" + productId + "/stock", HttpMethod.GET, null,
     new ParameterizedTypeReference<>() {}
 );
+assertThat(stockResponse.getStatusCode()).isEqualTo(HttpStatus.OK);
+assertThat(stockResponse.getBody()).isNotNull();
+assertThat(stockResponse.getBody().data()).isNotNull();
 assertThat(stockResponse.getBody().data().stockQuantity()).isEqualTo(0);

As per coding guidelines: **/*Test*.java: check that unit tests cover boundary values, failure cases, and exception flows. For integration tests, check the isolation level, flakiness risk, and test data setup/teardown strategy.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/test/java/com/loopers/interfaces/api/CouponConcurrencyE2ETest.java`
around lines 247 - 251, In CouponConcurrencyE2ETest, avoid dereferencing
stockResponse.getBody() directly: after calling testRestTemplate.exchange
(stockResponse), first assert the HTTP status is 200 (HttpStatus.OK), then
assert stockResponse.getBody() is not null and stockResponse.getBody().data() is
not null, and only then assert stockResponse.getBody().data().stockQuantity() ==
0; update assertions around the ApiResponse<ProductV1Dto.StockResponse>
reference and reuse the existing productId variable. Also add a separate test
that queries a non-existent productId and asserts a 404 response with a proper
error body to cover the failure case.
🧹 Nitpick comments (2)
apps/commerce-api/src/test/java/com/loopers/application/like/LikeCountSyncSchedulerTest.java (2)

121-138: FakeProductRepository does not share state with FakeProductReader, so test isolation needs care.

IDs registered via FakeProductRepository.addExistingId() are not returned by FakeProductReader.findById(). This is harmless in the current scenario, where ProductService.updateLikeCount only calls ProductRepository.updateLikeCount, but it may cause confusion when the tests are extended later.
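One way to share state between the two fakes, sketched as a minimal stand-alone example — the FakeProductStore class and the simplified method shapes here are illustrative assumptions, not the actual fakes in the PR:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Shared backing store so both fakes observe the same IDs.
class FakeProductStore {
    final Map<Long, Integer> likeCounts = new HashMap<>();
}

class FakeProductReader {
    private final FakeProductStore store;
    FakeProductReader(FakeProductStore store) { this.store = store; }

    // Visible for any ID registered through the repository fake.
    Optional<Long> findById(Long id) {
        return store.likeCounts.containsKey(id) ? Optional.of(id) : Optional.empty();
    }
}

class FakeProductRepository {
    private final FakeProductStore store;
    FakeProductRepository(FakeProductStore store) { this.store = store; }

    // Registering an ID here is now also visible to FakeProductReader.
    void addExistingId(Long id) { store.likeCounts.put(id, 0); }

    void updateLikeCount(Long id, int count) { store.likeCounts.put(id, count); }
}
```

Both fakes receive the same FakeProductStore in the test's setUp, so future assertions through either fake see a consistent view.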

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/test/java/com/loopers/application/like/LikeCountSyncSchedulerTest.java`
around lines 121 - 138, FakeProductRepository currently maintains its own
existingIds map separate from FakeProductReader, so IDs added via
FakeProductRepository.addExistingId(Long) won't be returned by
FakeProductReader.findById(Long), breaking test isolation if tests expand; fix
by sharing state between the two fakes — e.g., inject the same Map<Long,?> or a
FakeDataStore into FakeProductRepository (or accept a FakeProductReader
reference) and ensure addExistingId also registers the ID with
FakeProductReader, so ProductService.updateLikeCount ->
ProductRepository.updateLikeCount and any future calls to
FakeProductReader.findById will see the same IDs.

56-81: Behavior for deleted entities is verified, but the actual updates for the successful items are not.

The continuesProcessing_whenSomeEntitiesAreDeleted test verifies that execution completes without exceptions and that cache invalidation is invoked. However, it never verifies that updateLikeCount was actually called for the existing entity (id=1), so it cannot confirm that the scheduler processed the successful items correctly.

🔧 Suggestion — verify invocations with spies
+ import static org.mockito.Mockito.spy;
+ import static org.mockito.Mockito.verify;

  `@BeforeEach`
  void setUp() {
      // ...
      fakeProductRepository = new FakeProductRepository();
-     productService = new ProductService(fakeProductReader, fakeProductRepository);
+     productService = spy(new ProductService(fakeProductReader, fakeProductRepository));

      // ...
-     brandService = new BrandService(fakeBrandReader, fakeBrandRepository);
+     brandService = spy(new BrandService(fakeBrandReader, fakeBrandRepository));
      // ...
  }

  `@Test`
  void continuesProcessing_whenSomeEntitiesAreDeleted() {
      // ... existing arrange/act ...
      
+     // verify update calls for existing entities
+     verify(productService).updateLikeCount(1L, 5);
+     verify(brandService).updateLikeCount(1L, 10);
      
      // verify cache invalidation
      verify(productCacheManager).evictAllProductList();
  }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/test/java/com/loopers/application/like/LikeCountSyncSchedulerTest.java`
around lines 56 - 81, The test continuesProcessing_whenSomeEntitiesAreDeleted
should also assert that existing-entity like counts are actually updated: after
calling scheduler.syncLikeCounts(), verify that updateLikeCount was invoked for
the existing IDs (e.g., id 1 for PRODUCT and id 1 for BRAND) and not for the
deleted IDs (999 and 888). Locate the scheduler.syncLikeCounts() invocation and
add verifications (e.g.,
verify(<component>).updateLikeCount(LikeTargetType.PRODUCT, 1L, 5) and
verify(<component>).updateLikeCount(LikeTargetType.BRAND, 1L, 10)) or use a spy
on the component that performs updates so you can assert those calls occurred
for the existing IDs and were not made for deleted IDs. Ensure you reference the
same update method name updateLikeCount and the likeService.countAllLikes stubs
already in the test.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In
`@apps/commerce-api/src/main/java/com/loopers/application/like/LikeCountSyncScheduler.java`:
- Around line 27-32: syncLikeCounts currently runs product and brand sync inside
one transaction and performs cache eviction inside that transaction, which can
lead to cache/DB mismatch on rollback and causes product sync to be rolled back
if brand sync fails; fix by moving cache invalidation
(evictAllProductList/Detail) out of the transactional method and perform it
after commit using a `@TransactionalEventListener`(phase = AFTER_COMMIT) or by
splitting work into separate transactions: mark syncProductLikeCounts() and
syncBrandLikeCounts() with `@Transactional`(propagation = REQUIRES_NEW) so each
can succeed independently and publish a post-commit event that triggers cache
eviction, and ensure resetLikeCountsNotIn() is executed in its own transaction
to avoid rolling back unrelated changes.
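The split-transaction plus post-commit eviction suggested above can be sketched as follows. This is a minimal sketch assuming Spring's transaction-event infrastructure; the LikeCountsSyncedEvent record and the ProductCacheManager stub interface are illustrative, and only LikeCountSyncScheduler and evictAllProductList() come from the PR:

```java
import org.springframework.context.ApplicationEventPublisher;
import org.springframework.stereotype.Component;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;
import org.springframework.transaction.event.TransactionPhase;
import org.springframework.transaction.event.TransactionalEventListener;

// Illustrative stub; the real ProductCacheManager comes from the PR.
interface ProductCacheManager {
    void evictAllProductList();
}

// Hypothetical marker event published when a sync transaction commits.
record LikeCountsSyncedEvent() {}

@Component
class LikeCountSyncScheduler {
    private final ApplicationEventPublisher events;
    LikeCountSyncScheduler(ApplicationEventPublisher events) { this.events = events; }

    // Each sync runs in its own transaction, so a brand-sync failure
    // cannot roll back an already-successful product sync.
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void syncProductLikeCounts() {
        // ... aggregate likes and update Product.likeCount ...
        events.publishEvent(new LikeCountsSyncedEvent());
    }
}

@Component
class ProductCacheEvictionListener {
    private final ProductCacheManager cacheManager;
    ProductCacheEvictionListener(ProductCacheManager cacheManager) { this.cacheManager = cacheManager; }

    // Fires only after the publishing transaction commits, so the cache is
    // never evicted for changes that end up rolled back.
    @TransactionalEventListener(phase = TransactionPhase.AFTER_COMMIT)
    public void onSynced(LikeCountsSyncedEvent event) {
        cacheManager.evictAllProductList();
    }
}
```

With this wiring, a rollback in the sync transaction silently drops the event, keeping the cache and DB consistent by construction.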
- Around line 37-38: When counts is empty, calling
productService.resetLikeCountsNotIn(targetIds) will generate a problematic "NOT
IN ()" SQL; in LikeCountSyncScheduler detect if counts.isEmpty() (or
targetIds.isEmpty()) and call productService.resetAllLikeCounts() instead of
resetLikeCountsNotIn, otherwise call resetLikeCountsNotIn as before; apply the
same empty-list guard to the second occurrence where resetLikeCountsNotIn is
invoked (the later call in this class).

---

Duplicate comments:
In
`@apps/commerce-api/src/test/java/com/loopers/interfaces/api/CouponConcurrencyE2ETest.java`:
- Around line 247-251: In CouponConcurrencyE2ETest, avoid dereferencing
stockResponse.getBody() directly: after calling testRestTemplate.exchange
(stockResponse), first assert the HTTP status is 200 (HttpStatus.OK), then
assert stockResponse.getBody() is not null and stockResponse.getBody().data() is
not null, and only then assert stockResponse.getBody().data().stockQuantity() ==
0; update assertions around the ApiResponse<ProductV1Dto.StockResponse>
reference and reuse the existing productId variable. Also add a separate test
that queries a non-existent productId and asserts a 404 response with a proper
error body to cover the failure case.

In
`@apps/commerce-api/src/test/java/com/loopers/support/BulkDataInitializer.java`:
- Around line 91-93: 현재 BulkDataInitializer에서 jdbcTemplate.queryForList(...)로
brandIds를 가져온 뒤 모든 브랜드에 대해 고정량(5000건)을 항상 삽입하는 배치 로직이 있어 재실행 시 과삽입 비멱등 경로가
발생합니다; 수정할 때는 BulkDataInitializer의 브랜드 조회(List<Long> brandIds /
jdbcTemplate.queryForList) 직후 각 브랜드별 현재 상품 수를 조회(예: SELECT COUNT(*) FROM product
WHERE brand_id = ?)하여 목표(예: 5000)에서 현재 수를 뺀 delta만 생성/삽입하도록 배치 삽입 루틴을 변경하고,
delta가 0 이하인 경우 삽입을 건너뛰게 하십시오; 또한 통합 테스트를 추가해 (1) 시드 40만건 상태에서 재실행 시 총건수가 정확히
100만건으로 수렴하는지, (2) 브랜드별 일부만 채워진 상태에서 각 브랜드가 목표건수로 맞춰지는지 검증하도록 만드세요.

---

Nitpick comments:
In
`@apps/commerce-api/src/test/java/com/loopers/application/like/LikeCountSyncSchedulerTest.java`:
- Around line 121-138: FakeProductRepository currently maintains its own
existingIds map separate from FakeProductReader, so IDs added via
FakeProductRepository.addExistingId(Long) won't be returned by
FakeProductReader.findById(Long), breaking test isolation if tests expand; fix
by sharing state between the two fakes — e.g., inject the same Map<Long,?> or a
FakeDataStore into FakeProductRepository (or accept a FakeProductReader
reference) and ensure addExistingId also registers the ID with
FakeProductReader, so ProductService.updateLikeCount ->
ProductRepository.updateLikeCount and any future calls to
FakeProductReader.findById will see the same IDs.
- Around line 56-81: The test continuesProcessing_whenSomeEntitiesAreDeleted
should also assert that existing-entity like counts are actually updated: after
calling scheduler.syncLikeCounts(), verify that updateLikeCount was invoked for
the existing IDs (e.g., id 1 for PRODUCT and id 1 for BRAND) and not for the
deleted IDs (999 and 888). Locate the scheduler.syncLikeCounts() invocation and
add verifications (e.g.,
verify(<component>).updateLikeCount(LikeTargetType.PRODUCT, 1L, 5) and
verify(<component>).updateLikeCount(LikeTargetType.BRAND, 1L, 10)) or use a spy
on the component that performs updates so you can assert those calls occurred
for the existing IDs and were not made for deleted IDs. Ensure you reference the
same update method name updateLikeCount and the likeService.countAllLikes stubs
already in the test.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: c477aa71-d0a1-4b51-b230-2f758cf501ed

📥 Commits

Reviewing files that changed from the base of the PR and between 9c90c29 and 1a8cafb.

📒 Files selected for processing (16)
  • apps/commerce-api/src/main/java/com/loopers/application/like/LikeCountSyncScheduler.java
  • apps/commerce-api/src/main/java/com/loopers/domain/brand/BrandRepository.java
  • apps/commerce-api/src/main/java/com/loopers/domain/brand/BrandService.java
  • apps/commerce-api/src/main/java/com/loopers/domain/like/Like.java
  • apps/commerce-api/src/main/java/com/loopers/domain/product/ProductRepository.java
  • apps/commerce-api/src/main/java/com/loopers/domain/product/ProductService.java
  • apps/commerce-api/src/main/java/com/loopers/infrastructure/brand/BrandJpaRepository.java
  • apps/commerce-api/src/main/java/com/loopers/infrastructure/brand/BrandRepositoryImpl.java
  • apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductJpaRepository.java
  • apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductRepositoryImpl.java
  • apps/commerce-api/src/test/java/com/loopers/application/like/LikeCountSyncSchedulerTest.java
  • apps/commerce-api/src/test/java/com/loopers/domain/brand/BrandServiceTest.java
  • apps/commerce-api/src/test/java/com/loopers/domain/product/ProductServiceTest.java
  • apps/commerce-api/src/test/java/com/loopers/interfaces/api/CouponConcurrencyE2ETest.java
  • apps/commerce-api/src/test/java/com/loopers/interfaces/api/LikeV1ApiE2ETest.java
  • apps/commerce-api/src/test/java/com/loopers/support/BulkDataInitializer.java
🚧 Files skipped from review as they are similar to previous changes (2)
  • apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductRepositoryImpl.java
  • apps/commerce-api/src/main/java/com/loopers/domain/product/ProductRepository.java

Comment on lines +37 to +38
List<Long> targetIds = counts.stream().map(LikeCountProjection::targetId).toList();
productService.resetLikeCountsNotIn(targetIds);

⚠️ Potential issue | 🟠 Major

Empty-list handling is needed.

If counts is empty, targetIds is also an empty list and resetLikeCountsNotIn(emptyList) gets called. As noted in the BrandJpaRepository review, a NOT IN () clause can raise an error depending on the database.

For the empty-list case, either call resetAllLikeCounts() or branch in the service layer.

Example: branching on the empty list
 private void syncProductLikeCounts() {
     List<LikeCountProjection> counts = likeService.countAllLikes(LikeTargetType.PRODUCT);
 
     List<Long> targetIds = counts.stream().map(LikeCountProjection::targetId).toList();
-    productService.resetLikeCountsNotIn(targetIds);
+    if (targetIds.isEmpty()) {
+        productService.resetAllLikeCounts();
+    } else {
+        productService.resetLikeCountsNotIn(targetIds);
+    }
 
     for (LikeCountProjection projection : counts) {

Also applies to: 56-57

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/main/java/com/loopers/application/like/LikeCountSyncScheduler.java`
around lines 37 - 38, When counts is empty, calling
productService.resetLikeCountsNotIn(targetIds) will generate a problematic "NOT
IN ()" SQL; in LikeCountSyncScheduler detect if counts.isEmpty() (or
targetIds.isEmpty()) and call productService.resetAllLikeCounts() instead of
resetLikeCountsNotIn, otherwise call resetLikeCountsNotIn as before; apply the
same empty-list guard to the second occurrence where resetLikeCountsNotIn is
invoked (the later call in this class).
