[Volume5] 상품 조회 성능 최적화 — 인덱스, 원자적 좋아요 동기화, Redis 캐시 (엄인국) #203

Open
ukukdin wants to merge 2 commits into Loopers-dev-lab:ukukdin from ukukdin:main

Conversation

@ukukdin

@ukukdin ukukdin commented Mar 13, 2026

📌 Summary

  • Background: On the product list/detail APIs, every query ran as a Seq Scan at 100k rows (10ms+), like synchronization carried a lost-update risk, and with no cache every request hit the DB.
  • Goal: Apply indexes → denormalization (atomic UPDATE) → caching in sequence, comparing AS-IS/TO-BE at each step.
  • Result: 20–131x query improvement from four partial indexes, lost updates eliminated via atomic SQL, consistency preserved with a Redis cache + the AFTER_COMMIT pattern. All 273 tests pass.

🧭 Context & Decision

Problem statement

| Item | Detail |
|---|---|
| Current behavior | Every query is a Seq Scan at 100k rows; likes use read-modify-write with two pessimistic locks; no cache |
| Risk | Lost update when a like and an admin edit occur concurrently; repeated identical requests pile load onto the DB |
| Success criteria | Index Scan confirmed via before/after EXPLAIN ANALYZE; concurrency tests pass; cache HIT/MISS behavior verified |

Options and decisions

① Index strategy

| Option | Description |
|---|---|
| A | Plain composite indexes (deleted_at, price, …) |
| B (chosen) | Partial index (WHERE deleted_at IS NULL) |

Why option B?

We tried a plain composite index first, but deleted_at IS NULL matches about 95% of rows, so the predicate filters out almost nothing and the planner kept choosing a Seq Scan. A partial index bakes the condition into the index itself, steering the planner toward an Index Scan.

| Index | Columns | Purpose |
|---|---|---|
| idx_products_active_brand_likes | (brand_id, like_count DESC) | Top products per brand |
| idx_products_active_price | (price ASC) | Price-ordered list |
| idx_products_active_likes | (like_count DESC) | Global popularity |
| idx_products_active_created | (created_at DESC) | Newest first (default) |

Review point: JPA's @Index does not support a partial index's WHERE clause, so these indexes are managed via SQL migration scripts.

② Like-synchronization strategy

| Option | Description |
|---|---|
| A | Pessimistic lock + read-modify-write (existing) |
| B (chosen) | Atomic SQL UPDATE (like_count = like_count + 1) |

Why option B?

The existing path did SELECT ... FOR UPDATE on the whole Product, incremented likeCount in memory, then wrote back every column. If an admin edited the product concurrently, one side's change was lost (a lost update). The atomic SQL changes only the likeCount column at the DB level, leaving other fields untouched.
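
The lost update can be reproduced deterministically with a small in-memory sketch (illustrative only; the class name, fields, and the fixed interleaving are hypothetical, not the PR's code):

```java
public class LostUpdateDemo {
    static int likeCount = 0;

    // Read-modify-write: each "transaction" works on a stale snapshot.
    static void interleavedReadModifyWrite() {
        int txA = likeCount;   // tx A reads 0
        int txB = likeCount;   // tx B reads 0 before A writes
        likeCount = txA + 1;   // A writes 1
        likeCount = txB + 1;   // B overwrites with 1 -> A's increment is lost
    }

    // Atomic UPDATE analogue: each increment is a single step, no snapshot.
    static void atomicIncrementTwice() {
        likeCount = likeCount + 1;
        likeCount = likeCount + 1;
    }

    public static void main(String[] args) {
        likeCount = 0;
        interleavedReadModifyWrite();
        System.out.println("read-modify-write result: " + likeCount); // 1, one like lost

        likeCount = 0;
        atomicIncrementTwice();
        System.out.println("atomic result: " + likeCount); // 2
    }
}
```

The DB-level atomic UPDATE removes the snapshot entirely, which is why no lock is needed.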

③ Cache-evict timing strategy

| Option | Description |
|---|---|
| A | @CacheEvict annotation (existing — runs before the transaction commits) |
| B (chosen) | @TransactionalEventListener(AFTER_COMMIT) + TransactionSynchronization |

Why option B?

@CacheEvict operates at the AOP proxy level, so the eviction fires before the transaction commits. If another thread repopulates the cache in that window, the stale data stays pinned for the full TTL, which is a structural flaw.
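
The timing flaw can be sketched by modeling the cache as a map and the commit as a variable write (names and the fixed interleaving are hypothetical; this is an analogue, not the Spring wiring):

```java
import java.util.HashMap;
import java.util.Map;

public class EvictTimingDemo {
    static int db = 1;                                // last committed DB value
    static final Map<String, Integer> cache = new HashMap<>();

    // AS-IS: evict fires before the commit; a reader repopulates from the old value.
    static int evictBeforeCommit() {
        cache.clear();            // writer evicts (pre-commit)
        cache.put("p1", db);      // reader misses and caches the stale value (1)
        db = 2;                   // writer's commit lands afterwards
        return cache.get("p1");   // stale 1 stays pinned until TTL expiry
    }

    // TO-BE: evict only after the commit; the reader sees the new value.
    static int evictAfterCommit() {
        db = 2;                   // commit first
        cache.clear();            // then evict
        cache.put("p1", db);      // reader caches the fresh value (2)
        return cache.get("p1");
    }

    public static void main(String[] args) {
        db = 1; System.out.println("before-commit evict caches: " + evictBeforeCommit()); // 1 (stale)
        db = 1; System.out.println("after-commit evict caches: " + evictAfterCommit());   // 2 (fresh)
    }
}
```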

④ List-cache invalidation strategy

| Option | Description |
|---|---|
| A | Full wipe with allEntries=true on every like/order (existing) |
| B (chosen) | Remove list-cache evicts from the like/order paths; let the 1-minute TTL refresh |

Why option B?

One like = the entire list cache wiped → hundreds of requests immediately hit a Cache MISS at once → cache stampede. As traffic grows, this becomes worse than having no cache at all (Redis round trip + DB query). The list's like counts and stock tolerate up to one minute of lag, while product detail is evicted immediately to keep its values exact.
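
The miss amplification behind the stampede can be sketched with an in-memory cache (hypothetical names; sequential rather than truly concurrent, but the count of miss-driven DB hits is the point):

```java
import java.util.HashMap;
import java.util.Map;

public class StampedeDemo {
    static int dbQueries = 0;
    static final Map<String, String> cache = new HashMap<>();

    // Cache-aside read: a miss costs one DB query and repopulates the cache.
    static String get(String key) {
        String v = cache.get(key);
        if (v == null) {
            dbQueries++;          // every miss is a DB round trip
            v = "row";
            cache.put(key, v);
        }
        return v;
    }

    public static void main(String[] args) {
        // Warm 100 list-page entries, then wipe them all (allEntries=true analogue).
        for (int i = 0; i < 100; i++) cache.put("list:" + i, "row");
        cache.clear();            // one like evicts the entire list cache

        // The next wave of requests all miss and all hit the DB.
        for (int i = 0; i < 100; i++) get("list:" + i);
        System.out.println("DB queries after full evict: " + dbQueries); // 100
    }
}
```

With TTL-based refresh instead, only naturally expired keys miss, so the post-like burst never happens.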

Trade-offs

  • The list cache's like counts/stock can be up to one minute stale (within business tolerance).
  • @TransactionalEventListener(AFTER_COMMIT) does not fire without an active transaction (e.g. unit tests), so an isSynchronizationActive() guard is needed.
  • If SafeCacheErrorHandler swallows an evict failure, stale data can survive until TTL expiry → visibility secured via Micrometer metrics.

Review point: please weigh in on whether one minute of eventual consistency for the list cache is acceptable from a business standpoint, and whether AFTER_COMMIT sufficiently guarantees the product-detail evict.

🏗️ Design Overview

Scope of changes

| Category | Files | Content |
|---|---|---|
| Production code | 14 | Indexes + atomic UPDATE + cache layer + AFTER_COMMIT evict |
| Test code | 3 | Mock fixes (LikeServiceTest, OrderServiceTest, ProductServiceTest) |
| Infrastructure | 4 | PostgreSQL migration, Docker, Gradle, migration SQL |
| Data | 1 | 100k-row mock data SQL |

1. Creating the partial indexes — V5__create_product_indexes.sql

scripts/V5__create_product_indexes.sql

-- index active products only (WHERE deleted_at IS NULL)
CREATE INDEX IF NOT EXISTS idx_products_active_brand_likes
    ON products (brand_id, like_count DESC) WHERE deleted_at IS NULL;

CREATE INDEX IF NOT EXISTS idx_products_active_price
    ON products (price ASC) WHERE deleted_at IS NULL;

CREATE INDEX IF NOT EXISTS idx_products_active_likes
    ON products (like_count DESC) WHERE deleted_at IS NULL;

CREATE INDEX IF NOT EXISTS idx_products_active_created
    ON products (created_at DESC) WHERE deleted_at IS NULL;

The partial index's WHERE deleted_at IS NULL predicate matches the query's WHERE clause exactly, so the planner chooses an Index Scan.


2. Atomic like counter — ProductJpaRepository

ProductJpaRepository.java

@Modifying
@Query("UPDATE ProductJpaEntity p SET p.likeCount = p.likeCount + 1 WHERE p.id = :productId")
void incrementLikeCount(@Param("productId") Long productId);

@Modifying
@Query("UPDATE ProductJpaEntity p SET p.likeCount = p.likeCount - 1 WHERE p.id = :productId AND p.likeCount > 0")
void decrementLikeCount(@Param("productId") Long productId);

Executed atomically at the DB level, so it is concurrency-safe without locks. The AND p.likeCount > 0 guard on decrementLikeCount prevents the count from going negative.
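
The guard's semantics mirror this in-memory analogue, which also shows why returning the affected-row count (0 or 1) can be useful to callers (a hypothetical sketch, not the repository code):

```java
public class LikeCountGuard {
    // Mirrors the guarded UPDATE: decrement only while the count is positive,
    // and report how many "rows" were affected (0 or 1), as the SQL would.
    static int decrementIfPositive(int[] likeCount) {
        if (likeCount[0] > 0) {
            likeCount[0]--;
            return 1;             // one row matched and was updated
        }
        return 0;                 // no-op: UPDATE ... WHERE like_count > 0 matched zero rows
    }

    public static void main(String[] args) {
        int[] count = {1};
        System.out.println(decrementIfPositive(count)); // 1 (count drops to 0)
        System.out.println(decrementIfPositive(count)); // 0 (count stays at 0)
    }
}
```

Surfacing that 0 would let callers log or emit a metric on duplicate-unlike no-ops.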


3. Removing the pessimistic lock — LikeService

LikeService.java

// AS-IS: fetch the product under a pessimistic lock
private Product findProductWithLock(Long productId) {
    return productRepository.findActiveByIdWithLock(productId)
            .orElseThrow(() -> new CoreException(ErrorType.PRODUCT_NOT_FOUND));
}

// TO-BE: simple existence check (no lock needed)
private void validateProductExists(Long productId) {
    productRepository.findActiveById(productId)
            .orElseThrow(() -> new CoreException(ErrorType.PRODUCT_NOT_FOUND));
}

With the atomic UPDATE in place, the pessimistic lock is no longer needed. Queries per request: 3 → 2.


4. Separating handler concerns — LikeEventHandler + ProductCacheEvictHandler

LikeEventHandler.java — DB work only

@EventListener
public void handle(ProductLikedEvent event) {
    productRepository.incrementLikeCount(event.productId());
}

ProductCacheEvictHandler.java — cache invalidation after the transaction commits

@TransactionalEventListener(phase = TransactionPhase.AFTER_COMMIT)
public void handleLiked(ProductLikedEvent event) {
    evictProductDetail(event.productId());  // detail cache only; the list cache refreshes via TTL
}

private void evictProductDetail(Long productId) {
    Cache cache = cacheManager.getCache(CacheConfig.PRODUCT_DETAIL);
    if (cache != null) cache.evict(productId);
}

@EventListener does the DB work inside the transaction; @TransactionalEventListener(AFTER_COMMIT) invalidates the cache after the commit. Both handlers receive the same event, so the cache evict is guaranteed to run only after the DB change has committed.


5. Cache layer — CacheConfig + ProductQueryService

CacheConfig.java

@Configuration
@EnableCaching
public class CacheConfig implements CachingConfigurer {

    @Bean
    public CacheManager cacheManager(RedisConnectionFactory connectionFactory) {
        // cache-dedicated ObjectMapper (PROPERTY-style type info)
        ObjectMapper redisObjectMapper = new ObjectMapper();
        redisObjectMapper.activateDefaultTyping(...,
                ObjectMapper.DefaultTyping.EVERYTHING,
                JsonTypeInfo.As.PROPERTY);

        Map<String, RedisCacheConfiguration> cacheConfigs = Map.of(
            PRODUCT_DETAIL, defaultConfig.entryTtl(Duration.ofMinutes(5)),
            PRODUCT_LIST,   defaultConfig.entryTtl(Duration.ofMinutes(1)),
            BRAND_LIST,     defaultConfig.entryTtl(Duration.ofMinutes(10))
        );

        return RedisCacheManager.builder(connectionFactory)
                .withInitialCacheConfigurations(cacheConfigs)
                .enableStatistics()    // expose cache statistics to Micrometer
                .build();
    }
}

ProductQueryService.java

@Cacheable(value = CacheConfig.PRODUCT_DETAIL, key = "#productId")
public ProductDetailInfo getProduct(Long productId) { ... }

@Cacheable(value = CacheConfig.PRODUCT_LIST,
           key = "'brand:' + #brandId + ':sort:' + #sort + ':page:' + #page + ':size:' + #size")
public PageResult<ProductSummaryInfo> getProducts(Long brandId, String sort, int page, int size) { ... }
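
For reference, the SpEL key above composes into a flat string; a hypothetical helper showing the resulting key shape (the sort value likes_desc is an assumed example, not a value defined in the PR):

```java
public class ProductListCacheKey {
    // Mirrors the SpEL expression in @Cacheable:
    // 'brand:' + #brandId + ':sort:' + #sort + ':page:' + #page + ':size:' + #size
    static String of(Long brandId, String sort, int page, int size) {
        return "brand:" + brandId + ":sort:" + sort + ":page:" + page + ":size:" + size;
    }

    public static void main(String[] args) {
        System.out.println(of(7L, "likes_desc", 0, 20)); // brand:7:sort:likes_desc:page:0:size:20
    }
}
```

Because every (brand, sort, page, size) combination gets its own key, a TTL refresh touches only one page at a time instead of the whole list.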

6. Post-commit cache evict on order creation — OrderService

OrderService.java

@Override
public void createOrder(UserId userId, OrderCommand command) {
    // ... deduct stock + create order ...
    orderRepository.save(order);

    // after the commit, invalidate only the affected products' caches individually
    List<Long> affectedProductIds = sortedItems.stream()
            .map(OrderItemCommand::productId).toList();
    registerAfterCommitCacheEviction(affectedProductIds);
}

private void registerAfterCommitCacheEviction(List<Long> productIds) {
    if (!TransactionSynchronizationManager.isSynchronizationActive()) {
        // no active transaction (e.g. unit tests) → evict immediately
        Cache cache = cacheManager.getCache(CacheConfig.PRODUCT_DETAIL);
        if (cache != null) productIds.forEach(cache::evict);
        return;
    }
    TransactionSynchronizationManager.registerSynchronization(new TransactionSynchronization() {
        @Override
        public void afterCommit() {
            Cache cache = cacheManager.getCache(CacheConfig.PRODUCT_DETAIL);
            if (cache != null) productIds.forEach(cache::evict);
        }
    });
}

Previously @CacheEvict(allEntries=true) → now only the affected product IDs are evicted individually. The list cache refreshes via its 1-minute TTL.


7. Redis failure isolation + monitoring — SafeCacheErrorHandler

SafeCacheErrorHandler.java

public class SafeCacheErrorHandler implements CacheErrorHandler {

    private final MeterRegistry meterRegistry;

    @Override
    public void handleCacheGetError(RuntimeException exception, Cache cache, Object key) {
        log.warn("Redis 캐시 조회 실패 - cache: {}, key: {}", cache.getName(), key);
        Counter.builder("cache.errors")
                .tag("cache", cache.getName())
                .tag("operation", "get")
                .register(meterRegistry)
                .increment();
    }
    // put, evict, and clear are handled the same way
}

When Redis fails, the exceptions are swallowed so the DB fallback still serves requests, while a Micrometer counter keeps the failures visible.
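
The handler's contract (count the failure, never rethrow, fall through to the DB) can be sketched as follows; names are hypothetical, and this is an analogue rather than the actual Spring CacheErrorHandler wiring:

```java
public class SwallowingErrorHandler {
    static int errorCount = 0;    // stands in for the Micrometer cache.errors counter

    // A read that tolerates cache failure: any cache exception is counted
    // and swallowed, and the caller transparently falls back to the DB.
    static String getWithFallback(boolean redisUp) {
        try {
            if (!redisUp) throw new RuntimeException("connection refused");
            return "cached";      // normal cache hit path
        } catch (RuntimeException e) {
            errorCount++;         // record the failure for visibility
            return loadFromDb();  // DB fallback keeps the request alive
        }
    }

    static String loadFromDb() {
        return "db";
    }

    public static void main(String[] args) {
        System.out.println(getWithFallback(true));  // cached
        System.out.println(getWithFallback(false)); // db (errorCount becomes 1)
    }
}
```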


8. DatabaseCleanUp PostgreSQL compatibility — DatabaseCleanUp

DatabaseCleanUp.java

// AS-IS (MySQL-only)
entityManager.createNativeQuery("SET FOREIGN_KEY_CHECKS = 0").executeUpdate();
entityManager.createNativeQuery("TRUNCATE TABLE `" + table + "`").executeUpdate();

// TO-BE (PostgreSQL-compatible)
entityManager.createNativeQuery("TRUNCATE TABLE " + table + " RESTART IDENTITY CASCADE").executeUpdate();

📊 Performance comparison

EXPLAIN ANALYZE (100k rows)

| Query | AS-IS (no index) | TO-BE (partial index) | Speedup |
|---|---|---|---|
| Brand + likes order | 10.757ms (Seq Scan) | 0.082ms (Index Scan) | 131x |
| Price order + OFFSET 1000 | 11.092ms (Parallel Seq) | 0.549ms (Index Scan) | 20x |
| Newest first (default) | 11.774ms (Parallel Seq) | 0.143ms (Index Scan) | 82x |
| Likes order (global) | 11.549ms (Parallel Seq) | 0.157ms (Index Scan) | 74x |

Buffer usage (I/O reduction)

| Query | AS-IS (buffers) | TO-BE (buffers) | Reduction |
|---|---|---|---|
| Brand + likes order | 1,980 | 24 | 98.8% |
| Price order | 2,016 | 250 | 87.6% |
| Newest first | 2,016 | 22 | 98.9% |
| Likes order | 2,016 | 22 | 98.9% |

Like-synchronization improvement

| Item | AS-IS (read-modify-write) | TO-BE (atomic UPDATE) |
|---|---|---|
| Queries per request | SELECT FOR UPDATE ×2 + UPDATE (all columns) | SELECT + UPDATE (likeCount only) |
| Pessimistic locks | 2 per request | 0 |
| Lost-update risk | Present | None |
| UPDATE target | All 11 columns | likeCount only |

Cache-evict timing improvement

| Item | AS-IS (@CacheEvict) | TO-BE (AFTER_COMMIT) |
|---|---|---|
| Evict point | After method completion (before commit) | After the transaction commits |
| Stale-cache risk | Another thread can repopulate between evict and commit | Cache repopulated from committed data |
| List-cache invalidation | One like = full wipe (stampede risk) | Natural refresh via 1-minute TTL |

🔁 Flow Diagram

Like → atomic UPDATE + AFTER_COMMIT cache evict

sequenceDiagram
    autonumber
    participant Client
    participant LikeService as LikeService<br/>@Transactional
    participant EventHandler as LikeEventHandler<br/>@EventListener
    participant CacheHandler as ProductCacheEvictHandler<br/>@TransactionalEventListener
    participant DB
    participant Redis

    Client->>LikeService: like(userId, productId)

    rect rgb(255, 235, 235)
        Note over LikeService,DB: runs inside the transaction
        LikeService->>DB: SELECT (existence check, no lock)
        LikeService->>DB: INSERT INTO likes
        LikeService->>EventHandler: publishEvents(ProductLikedEvent)
        EventHandler->>DB: UPDATE products SET like_count = like_count + 1
    end

    Note over DB: transaction COMMIT ✅

    rect rgb(235, 245, 255)
        Note over CacheHandler,Redis: runs after the commit (AFTER_COMMIT)
        CacheHandler->>Redis: DEL product::{productId}
        Note right of Redis: detail cache only<br/>list cache refreshes via 1-min TTL
    end

    LikeService-->>Client: 200 OK

Cache read flow — cache HIT / MISS / Redis failure

flowchart TD
    A[GET /api/v1/products/1] --> B{Redis GET<br/>product::1}

    B -->|HIT| C[deserialize JSON<br/>→ return ProductDetailInfo]

    B -->|MISS| D[DB SELECT<br/>products + brands]
    D --> E[Redis SET product::1<br/>TTL 5 min]
    E --> F[return]

    B -->|Redis failure| G[SafeCacheErrorHandler<br/>warn log + cache.errors metric]
    G --> H[DB SELECT<br/>fallback query]
    H --> F

    style C fill:#d4edda
    style G fill:#fff3cd
    style H fill:#fff3cd

AS-IS vs TO-BE cache-evict timing

sequenceDiagram
    autonumber
    participant T1 as Thread 1 (like)
    participant T2 as Thread 2 (read)
    participant DB
    participant Redis

    Note over T1,Redis: ❌ AS-IS: @CacheEvict (runs before commit)

    rect rgb(255, 235, 235)
        T1->>DB: incrementLikeCount (uncommitted)
        T1->>Redis: DEL product::1 (evicted before commit!)
        T2->>Redis: GET product::1 → MISS
        T2->>DB: SELECT → reads the old value (uncommitted)
        T2->>Redis: SET product::1 (caches the old value!)
        T1->>DB: COMMIT
        Note over Redis: 💥 the old value stays pinned in cache for the TTL
    end

    Note over T1,Redis: ✅ TO-BE: AFTER_COMMIT (runs after commit)

    rect rgb(235, 245, 255)
        T1->>DB: incrementLikeCount
        T1->>DB: COMMIT ✅
        T1->>Redis: DEL product::1 (evict after commit)
        T2->>Redis: GET product::1 → MISS
        T2->>DB: SELECT → reads the latest value ✅
        T2->>Redis: SET product::1 (caches the latest value)
    end

🧪 Tests

Test result: all 273 pass

Four issues surfaced during the change were fixed, and every test now passes.

| # | Failure | Cause | Fix |
|---|---|---|---|
| 1 | StockConcurrencyTest ×2 | Product cache not invalidated on order stock deduction | Added afterCommit evict to OrderService |
| 2 | ProductApiE2ETest ×1 | Cache not invalidated on brand cascade delete | Added a BrandDeletedEvent handler to ProductCacheEvictHandler |
| 3 | LikeServiceTest ×4 | Mocks still used findActiveByIdWithLock | Switched to findActiveById |
| 4 | ProductServiceTest/OrderServiceTest | CacheManager missing from the constructor | Added a mock CacheManager |

✅ Assignment checklist

  • Indexes applied with before/after performance comparison
  • Like structure improved and synchronization fixed
  • Redis cache applied with failure isolation
  • Blog write-up
| Area | Requirement |
|---|---|
| Index | brandId-based search and like-count ordering on the product list API; per-use-case indexes with before/after performance comparison |
| Structure | Structure supporting like-count reads and like-order sorting; like_count stays synchronized on like/unlike |
| Cache | Redis cache with a TTL or invalidation strategy; service works normally on a cache miss |
| Extra | @CacheEvict timing → AFTER_COMMIT pattern; cache-stampede prevention (allEntries evict removed from the list cache); Micrometer cache metrics + enableStatistics |

- Four partial indexes applied (20–131x query speedup)
- Like read-modify-write → atomic SQL UPDATE (lost updates eliminated)
- Pessimistic locks reduced from 2 per request to 0
- Redis @Cacheable applied (product detail 5 min, list 1 min, brand 10 min)
- Cache-evict timing guaranteed via the @TransactionalEventListener(AFTER_COMMIT) pattern
- Cache-stampede prevention: allEntries list-cache evict removed from the like/order paths
- Redis failure isolation and visibility via SafeCacheErrorHandler + Micrometer metrics
- DatabaseCleanUp migrated from MySQL to PostgreSQL-compatible syntax

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@coderabbitai

coderabbitai bot commented Mar 13, 2026

📝 Walkthrough

Like-count updates move from record locking to atomic repository-level increment/decrement methods; Redis-based caching is introduced with event-driven post-commit invalidation; and the project migrates to PostgreSQL with accompanying DB index and mock-data scripts.

Changes

Cohort / File(s) Summary
Like handling
apps/commerce-api/src/main/java/com/loopers/application/like/LikeEventHandler.java, apps/commerce-api/src/main/java/com/loopers/application/like/LikeService.java
Product row locking removed. LikeEventHandler/LikeService now call ProductRepository.incrementLikeCount / decrementLikeCount and use a simple existence check instead of fetching the product.
Repository API and implementation
apps/commerce-api/src/main/java/com/loopers/domain/repository/ProductRepository.java, apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductJpaRepository.java, apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductRepositoryImpl.java
incrementLikeCount/decrementLikeCount added to the interface, implemented as atomic @Modifying queries in the JPA repository and delegated from the implementation class.
Cache configuration and handlers
apps/commerce-api/src/main/java/com/loopers/infrastructure/cache/CacheConfig.java, apps/commerce-api/src/main/java/com/loopers/infrastructure/cache/SafeCacheErrorHandler.java, apps/commerce-api/src/main/java/com/loopers/infrastructure/cache/ProductCacheEvictHandler.java
Redis-backed CacheManager with TTLs (product detail 5 min, list 1 min, brand 10 min) added, along with a SafeCacheErrorHandler that instruments cache errors via Micrometer and an event handler that invalidates caches on transaction AFTER_COMMIT.
Cache integration in query/write services
apps/commerce-api/src/main/java/com/loopers/application/product/ProductQueryService.java, apps/commerce-api/src/main/java/com/loopers/application/product/ProductService.java, apps/commerce-api/src/main/java/com/loopers/application/order/OrderService.java
@Cacheable applied to ProductQueryService. CacheManager injected into ProductService/OrderService; create/update/delete and order creation now invalidate the relevant PRODUCT_DETAIL/PRODUCT_LIST caches after commit (or immediately when no transaction is active).
Brand-deletion event extension
apps/commerce-api/src/main/java/com/loopers/application/brand/BrandDeletedEventHandler.java, apps/commerce-api/src/main/java/com/loopers/domain/model/brand/event/BrandProductsDeletedEvent.java
Brand deletion now removes associated products and publishes a BrandProductsDeletedEvent carrying the deleted product IDs; an ApplicationEventPublisher is injected for this.
Tests and test containers
apps/commerce-api/src/test/..., modules/jpa/src/testFixtures/..., modules/jpa/src/testFixtures/java/com/loopers/testcontainers/PostgreSQLTestContainersConfig.java
Test configuration switched from MySQL to PostgreSQL. CacheManager mocks added to ProductService/OrderService tests; PostgreSQL Testcontainers configuration introduced.
Infrastructure and build
build.gradle.kts, modules/jpa/build.gradle.kts, docker/infra-compose.yml, modules/jpa/src/main/resources/jpa.yml, modules/jpa/src/main/java/com/loopers/config/jpa/DataSourceConfig.java
MySQL driver/configuration/Compose services replaced project-wide with PostgreSQL; Hikari bean and property key names updated.
DB indexes and bulk-data scripts
scripts/V5__create_product_indexes.sql, scripts/mock-data.sql, scripts/mock-data-10k.sql, scripts/mock-data-100k.sql, scripts/mock-users.sql
PostgreSQL partial-index migration plus large mock-data (10k/100k) and user-seeding scripts added. DatabaseCleanUp switched to TRUNCATE ... RESTART IDENTITY CASCADE.
Performance/load-test scripts
scripts/k6-like-concurrency-test.js, scripts/k6-lost-update-test.js, scripts/k6-slow-query-test.js
k6 scenario scripts added for like concurrency, lost-update, and slow-query testing.

Sequence Diagram(s)

sequenceDiagram
    participant User as User
    participant LikeService as Like Service
    participant ProductRepository as Product Repo
    participant EventPublisher as Event Publisher
    participant CacheEvictHandler as Cache Evict Handler
    participant CacheManager as Cache Manager

    User->>LikeService: POST /products/{id}/likes
    LikeService->>ProductRepository: validate existence (exists?)
    alt not found
        LikeService-->>User: 404 PRODUCT_NOT_FOUND
    else found
        LikeService->>ProductRepository: incrementLikeCount(productId)
        ProductRepository-->>LikeService: update result
        LikeService->>EventPublisher: publish ProductLikedEvent
        EventPublisher->>CacheEvictHandler: handleLiked(event) [AFTER_COMMIT]
        CacheEvictHandler->>CacheManager: evict(PRODUCT_DETAIL, productId)
        CacheManager-->>CacheEvictHandler: evicted
        LikeService-->>User: 200 OK
    end
sequenceDiagram
    participant Client as Client
    participant OrderService as Order Service
    participant OrderRepository as Order Repo
    participant TransactionManager as Transaction Manager
    participant CacheManager as Cache Manager

    Client->>OrderService: createOrder(payload)
    OrderService->>OrderRepository: save(order)
    OrderRepository-->>OrderService: order saved
    alt transaction active
        OrderService->>TransactionManager: register after-commit callback
        TransactionManager-->>CacheManager: evict product caches (productIds) [after commit]
    else no transaction
        OrderService->>CacheManager: evict product caches (productIds)
    end
    CacheManager-->>OrderService: evicted
    OrderService-->>Client: 201 Created

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
|---|---|---|---|
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 17.28%, which is insufficient. The required threshold is 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |
✅ Passed checks (2 passed)
| Check name | Status | Explanation |
|---|---|---|
| Title check | ✅ Passed | The PR title accurately and specifically summarizes the main changes (indexes, atomic like synchronization, Redis cache). |
| Description check | ✅ Passed | The description includes all required template sections (Summary, Context & Decision, Design Overview, Flow Diagram) and details the performance gains (20–131x), the technical decision rationale, trade-offs, and test results. |



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 7

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
apps/commerce-api/src/main/java/com/loopers/application/like/LikeService.java (1)

29-55: ⚠️ Potential issue | 🟠 Major

Concurrent requests racing the exists → save pattern fail with a constraint violation.

Two concurrent like() requests for the same user/product can both pass the existence check at line 32. LikeJpaEntity's @UniqueConstraint(columnNames = {"userId", "productId"}) prevents the duplicate row, but because of the check-then-act race the first request succeeds while the second throws a constraint-violation exception. That surfaces as an error response to the client rather than an idempotent success.

Options:

  1. Wrap the likeRepository.save() call in a try-catch and treat the constraint violation as success, or
  2. switch to an insert-if-not-exists pattern (e.g. INSERT IGNORE / ON DUPLICATE KEY UPDATE on MySQL; ON CONFLICT DO NOTHING on PostgreSQL), or
  3. restore the product lock with SELECT FOR UPDATE.

The current tests only cover concurrent requests from different users, not real contention on the same user/product. Add an integration test verifying that concurrent likes from the same user on the same product always succeed or are treated as idempotent successes.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/main/java/com/loopers/application/like/LikeService.java`
around lines 29 - 55, Concurrent like() calls can race between
existsByUserIdAndProductId and save, causing a unique-constraint violation; wrap
the save path to treat constraint-violation as an idempotent success. In the
like(UserId, Long) method (around Like.create(...) and
likeRepository.save(...)), catch the DB uniqueness exception (e.g.,
DataIntegrityViolationException or the JPA/PersistenceException that wraps the
constraint violation caused by LikeJpaEntity's `@UniqueConstraint`) and simply
return (or log debug) so the second concurrent request is treated as success;
alternatively implement an insert-if-not-exists repository method and use that
atomic operation. Also add an integration test that issues concurrent like
requests for the same userId/productId and asserts idempotent success.
🧹 Nitpick comments (12)
modules/jpa/src/main/resources/jpa.yml (1)

24-29: The HikariCP pool settings keepalive-time: 0 and leak-detection-threshold: 0 disable both features and deserve review.

Operational concerns:

  • keepalive-time: 0: idle connections can be dropped by PostgreSQL's idle_in_transaction_session_timeout or a firewall idle timeout. The default idle timeout on AWS RDS PostgreSQL is about 5 minutes.
  • leak-detection-threshold: 0: connection-leak detection is off, making pool exhaustion hard to diagnose in production.

Recommended settings:

keepalive-time: 30000        # keepalive query every 30s
leak-detection-threshold: 60000  # flag connections not returned within 60s
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@modules/jpa/src/main/resources/jpa.yml` around lines 24 - 29, The HikariCP
settings currently disable keepalive and leak detection; update the jpa.yml
entries for the pool by changing the keepalive-time value from 0 to 30000 and
the leak-detection-threshold from 0 to 60000 (milliseconds) so periodic
keepalive queries run and connection leaks are detected; locate the keys
keepalive-time and leak-detection-threshold in
modules/jpa/src/main/resources/jpa.yml, apply the new numeric values, and verify
in a staging run that connections remain stable and leak logs appear when
appropriate.
docker/infra-compose.yml (1)

3-12: The PostgreSQL service is missing a healthcheck.

redis-master, redis-readonly, and kafka all define healthchecks, but postgres does not, so dependent services cannot verify PostgreSQL readiness.

Operational concerns:

  • On docker compose up, the application may attempt to connect before PostgreSQL has fully started.
  • Service start-order issues can produce flaky tests in CI.
🛠️ Suggested healthcheck
   postgres:
     image: postgres:16
     ports:
       - "5432:5432"
     environment:
       - POSTGRES_USER=application
       - POSTGRES_PASSWORD=application
       - POSTGRES_DB=loopers
     volumes:
       - postgres-data:/var/lib/postgresql/data
+    healthcheck:
+      test: ["CMD-SHELL", "pg_isready -U application -d loopers"]
+      interval: 5s
+      timeout: 3s
+      retries: 10
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docker/infra-compose.yml` around lines 3 - 12, Add a Docker healthcheck for
the postgres service so other services can wait for DB readiness: update the
postgres service block to include a healthcheck that runs pg_isready (e.g.,
CMD-SHELL "pg_isready -U application -d loopers"), with sensible parameters like
interval (10s), timeout (5s), retries (5) and start_period (30s); also ensure
services that depend on postgres use depends_on with condition: service_healthy
(or update CI orchestration to wait on the health status) so application
containers don't attempt connections before PostgreSQL is ready.
modules/jpa/src/testFixtures/java/com/loopers/utils/DatabaseCleanUp.java (1)

35-35: Recommend quoting identifiers in the dynamic TRUNCATE SQL for table-name safety

All current entity table names use lowercase snake_case (users, order_items, etc.), which is safe under PostgreSQL's quoting rules. However, if a developer later adds @Table(name = "User") or an identifier containing a reserved word, the integration-test cleanup step will fail with a syntax error. Quoting table names explicitly and fixing the execution order guards against this. Additionally, an integration test confirming that truncateAllTables() handles uppercase or reserved-word table names would help.

Suggested improvement
 `@Transactional`
 public void truncateAllTables() {
     entityManager.flush();
-
-    for (String table : tableNames) {
-        entityManager.createNativeQuery("TRUNCATE TABLE " + table + " RESTART IDENTITY CASCADE").executeUpdate();
-    }
+    String sql = tableNames.stream()
+        .sorted()
+        .map(this::quoteQualifiedIdentifier)
+        .collect(java.util.stream.Collectors.joining(", ", "TRUNCATE TABLE ", " RESTART IDENTITY CASCADE"));
+    entityManager.createNativeQuery(sql).executeUpdate();
 }
+
+private String quoteQualifiedIdentifier(String tableName) {
+    return java.util.Arrays.stream(tableName.split("\\."))
+        .map(part -> "\"" + part.replace("\"", "\"\"") + "\"")
+        .collect(java.util.stream.Collectors.joining("."));
+}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@modules/jpa/src/testFixtures/java/com/loopers/utils/DatabaseCleanUp.java` at
line 35, Wrap table identifiers with SQL identifier quoting and make truncation
deterministic: in DatabaseCleanUp replace the dynamic SQL construction in the
truncateAllTables logic (the line calling
entityManager.createNativeQuery(...).executeUpdate()) to quote the table name
(e.g. use " + "\"" + table + "\"" + " or equivalent identifier-quoting helper)
so names with uppercase letters or reserved words are handled, and ensure the
list of tables is iterated in a fixed order (e.g. sort the table list before
truncating) to make execution deterministic; also add an integration test that
creates a table with a quoted/reserved/uppercase name (e.g. `@Table`(name =
"User") or a reserved word) and verifies truncateAllTables() succeeds.
apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductJpaRepository.java (1)

31-33: decrementLikeCount does not report whether the update took effect.

When the likeCount > 0 condition is not met, 0 rows are updated but the caller has no way to know. That makes duplicate unlike requests or race conditions hard to debug in production.

💡 Optional improvement: change the return type to int
-    void decrementLikeCount(`@Param`("productId") Long productId);
+    int decrementLikeCount(`@Param`("productId") Long productId);

Callers can then log or record a metric when the return value is 0.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductJpaRepository.java`
around lines 31 - 33, The decrementLikeCount method currently returns void so
callers can't tell if the UPDATE affected any rows; change the signature of
ProductJpaRepository.decrementLikeCount to return int (the number of rows
updated) and propagate that return to callers so they can log/emit metrics or
handle 0-result cases; keep the `@Modifying` and `@Query` annotations on
decrementLikeCount and ensure any callers check the returned int to detect
failed/no-op updates.
apps/commerce-api/src/test/java/com/loopers/application/like/LikeServiceTest.java (1)

42-88: Recommend strengthening tests for concurrency scenarios and event publication.

The current unit tests verify the happy path and idempotency, but the following cases are missing:

  1. A test verifying that LikeEventHandler calls incrementLikeCount/decrementLikeCount
  2. An integration test for event ordering/duplicate handling under concurrent like requests
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/test/java/com/loopers/application/like/LikeServiceTest.java`
around lines 42 - 88, Add tests to cover event handler invocation and a
concurrent-like scenario: in LikeServiceTest add a test that stubs/publishes the
Like domain event and verifies LikeEventHandler.incrementLikeCount (and
decrementLikeCount where applicable) is called via the domainEventPublisher
(reference LikeEventHandler, incrementLikeCount, decrementLikeCount,
domainEventPublisher, service.like); also add an integration-style test that
simulates concurrent calls to service.like (use multiple threads or an executor)
for the same user/product and assert idempotency plus correct single event
publication/handler invocation to ensure no duplicate count increments.
scripts/V5__create_product_indexes.sql (1)

7-11: An index for the brandId filter combined with the default sort (created_at DESC) is missing.

idx_products_active_brand_likes covers only brand_id + like_count DESC. GET /api/v1/products?brandId={id} (default sort) has no brand_id + created_at DESC composite index, so it may index-scan and then perform an explicit sort.

If per-brand newest-first queries are frequent in production, this will degrade query performance.

💡 Suggested additional index
-- 5. brand filter + newest-first sort
CREATE INDEX IF NOT EXISTS idx_products_active_brand_created
    ON products (brand_id, created_at DESC)
    WHERE deleted_at IS NULL;

Further verification: check the execution plan of the brandId filter + default-sort query with EXPLAIN ANALYZE to confirm whether it is an actual bottleneck.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/V5__create_product_indexes.sql` around lines 7 - 11, The current
migration only creates idx_products_active_brand_likes (ON products (brand_id,
like_count DESC)) but misses a composite index for the common brandId + default
sort (created_at DESC), so add a new conditional index named
idx_products_active_brand_created on products (brand_id, created_at DESC) WHERE
deleted_at IS NULL to avoid filesorts for GET /api/v1/products?brandId={id};
after adding the CREATE INDEX statement run EXPLAIN ANALYZE for the brandId +
default-sort query to confirm the optimizer uses the new index.
scripts/mock-data-100k.sql (2)

7-10: TRUNCATE ... CASCADE causes data loss if accidentally run against production.

The filename mock-data-100k.sql makes the test-only intent clear, but adding an environment check or warning comment at the top of the script is safer protection against accidental execution on a production DB.

💡 Environment-check example
-- ⚠️ WARNING: this script is for dev/test environments only.
-- Running it in production deletes all data.
DO $$
BEGIN
    IF current_database() NOT LIKE '%test%' AND current_database() NOT LIKE '%dev%' THEN
        RAISE EXCEPTION 'This script must not run in this environment: %', current_database();
    END IF;
END $$;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/mock-data-100k.sql` around lines 7 - 10, Add a safety check/warning
at the top of scripts/mock-data-100k.sql to prevent accidental execution against
production: before the TRUNCATE TABLE statements (the lines truncating likes,
products, brands), either add a clear prominent warning comment and/or a runtime
guard using a DO $$ ... END $$ block that validates current_database() (or
similar) and aborts if the DB name does not match dev/test patterns; ensure this
guard runs before any destructive statements so the TRUNCATEs cannot execute in
non-dev environments.

49-239: The bulk data-generation script is not wrapped in an explicit transaction.

If an error occurs partway through inserting 100,000 rows, the data is left partially inserted. For test-data seeding, all-or-nothing execution is what makes reruns safe.

💡 Transaction-wrapping suggestion
BEGIN;
-- existing DO $$ ... END $$ block
COMMIT;

Alternatively, run psql with the --single-transaction option.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/mock-data-100k.sql` around lines 49 - 239, The script runs many
INSERTs inside a DO $$ ... END $$ block without an explicit transaction, so
partial inserts may persist on error; wrap the entire generation/INSERT work in
a single transaction (BEGIN; ... COMMIT;) or convert the DO block to a plpgsql
block that catches exceptions and issues ROLLBACK on failure (or use psql
--single-transaction). Specifically, ensure the work that includes the DO $$
block and the INSERT INTO products statements is executed inside one transaction
scope so any runtime error rolls back all changes; reference the DO $$ ... END
$$ block and the INSERT INTO products(...) call when applying the wrapping or
exception-handling.
apps/commerce-api/src/test/java/com/loopers/application/product/ProductServiceTest.java (1)

29-37: A CacheManager mock was added, but cache eviction behavior is never verified.

The test setup was updated for the new CacheManager dependency in ProductService. The current tests only verify the business logic (repository calls) and do not verify cache eviction.

If cache eviction is one of ProductService's core responsibilities, consider adding these tests:

  • verify that updating/deleting a product calls CacheManager.getCache(PRODUCT_DETAIL).evict(productId)
  • verify that creating a product evicts the relevant PRODUCT_LIST entries
💡 Cache eviction test example
@Test
@DisplayName("updating a product evicts its detail cache")
void updateProduct_evictsCache() {
    // given
    Product product = createProduct(1L, 1L);
    Cache cache = mock(Cache.class);
    when(productRepository.findActiveById(1L)).thenReturn(Optional.of(product));
    when(cacheManager.getCache(CacheConfig.PRODUCT_DETAIL)).thenReturn(cache);

    // when
    var command = new ProductUpdateCommand(1L, "new name", 60000, null, 200, "description");
    service.updateProduct(command);

    // then
    verify(cache).evict(1L);
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/test/java/com/loopers/application/product/ProductServiceTest.java`
around lines 29 - 37, Tests added a mocked CacheManager but do not assert cache
eviction behavior; add unit tests that verify ProductService invokes cache
eviction via cacheManager.getCache(CacheConfig.PRODUCT_DETAIL).evict(productId)
on updateProduct and deleteProduct, and that product creation evicts
PRODUCT_LIST (cacheManager.getCache(CacheConfig.PRODUCT_LIST).evict(...)). In
each test mock a Cache, stub cacheManager.getCache(...) to return it, stub
repository calls (e.g., productRepository.findActiveById) as needed, call the
service method (updateProduct/deleteProduct/createProduct) and verify the
cache.eviction calls with Mockito.verify(cache).evict(...) using the correct
product id or list key.
apps/commerce-api/src/main/java/com/loopers/infrastructure/cache/ProductCacheEvictHandler.java (1)

35-55: Evicting every product cache on a single brand deletion brings the full cache-warming cost back all at once.

Lines 37-38 and 49-55 turn every unrelated product-detail entry into a miss whenever a single brand is deleted, so one admin action sends the whole catalog back to the DB. It is safer to carry the affected productId list in the event and evict only those keys from the detail cache, keeping clear() for the list cache (narrowed to the brand scope if possible). Also add a test asserting that deleting brand A removes only A's product-detail keys while brand B's keys remain.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/main/java/com/loopers/infrastructure/cache/ProductCacheEvictHandler.java`
around lines 35 - 55, The current handleBrandDeleted(BrandDeletedEvent)
implementation calls evictAllProductCaches() which clears all product detail and
list caches; instead modify BrandDeletedEvent to include the affected productId
list (or brandId) and change handleBrandDeleted to iterate that list calling
evictProductDetail(productId) for each affected product while keeping
evictBrandList() (or narrowing list eviction to the brand scope) rather than
calling evictAllProductCaches(); update evictAllProductCaches() usage elsewhere
or remove it if unused, and add a unit test that deletes brand A and asserts
evicted keys are only A's product detail keys while product detail keys for
brand B remain present.
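A minimal sketch of the narrower handler, assuming BrandDeletedEvent is extended to carry the deleted brand's product ids (the productIds() accessor and the CacheConfig constants are assumptions, not the current API):

```java
// Sketch only: assumes BrandDeletedEvent exposes the affected product ids,
// which the current event does not yet do.
@Component
@RequiredArgsConstructor
public class ProductCacheEvictHandler {

    private final CacheManager cacheManager;

    @TransactionalEventListener(phase = TransactionPhase.AFTER_COMMIT)
    public void handleBrandDeleted(BrandDeletedEvent event) {
        Cache detail = cacheManager.getCache(CacheConfig.PRODUCT_DETAIL);
        if (detail != null) {
            // Evict only the deleted brand's products, not the whole catalog.
            event.productIds().forEach(detail::evict);
        }
        Cache list = cacheManager.getCache(CacheConfig.PRODUCT_LIST);
        if (list != null) {
            // List keys span page/sort combinations; clearing keeps this simple.
            list.clear();
        }
    }
}
```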
apps/commerce-api/src/test/java/com/loopers/application/order/OrderServiceTest.java (1)

41-50: The new cache invalidation branch is never verified by the tests.

Only the CacheManager is injected; nothing checks which cache or key gets evicted, so a cache-name typo or a missing evict would still pass. Wire the CacheManager and a Cache mock into createOrder_success and verify the evict(productId) call on the non-transactional path; ideally also add a Spring transaction integration test asserting that eviction runs only after commit.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/test/java/com/loopers/application/order/OrderServiceTest.java`
around lines 41 - 50, The test currently injects a mocked CacheManager but never
verifies cache eviction; in the createOrder_success test wire a mocked Cache
(mock(Cache.class)) and stub cacheManager.getCache(<cacheName>) to return that
cache, then assert that cache.evict(productId) (or evict(same expected key)) is
invoked once when service.createOrder(...) runs; update the test to reference
the CacheManager and Cache mocks and verify evict(productId) was called, and
optionally add an integration test using Spring transaction support to assert
eviction occurs only after commit.
apps/commerce-api/src/main/java/com/loopers/infrastructure/cache/SafeCacheErrorHandler.java (1)

20-41: Logging only the exception message makes cache failures hard to diagnose.

This handler swallows exceptions, so logs and metrics are the only operational signal; recording just exception.getMessage() makes it hard to tell a connection error from a serialization error. Pass the exception object itself as the last argument to preserve the stack trace, and since this is a global handler, log potentially sensitive keys only for the caches that need them. Also add unit tests asserting that each handler method does not rethrow and increments the metric exactly once.

🛠️ Logging improvement example
-        log.warn("Redis 캐시 조회 실패 - cache: {}, key: {}, error: {}", cache.getName(), key, exception.getMessage());
+        log.warn("Redis 캐시 조회 실패 - cache: {}, key: {}", cache.getName(), key, exception);

-        log.warn("Redis 캐시 저장 실패 - cache: {}, key: {}, error: {}", cache.getName(), key, exception.getMessage());
+        log.warn("Redis 캐시 저장 실패 - cache: {}, key: {}", cache.getName(), key, exception);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/main/java/com/loopers/infrastructure/cache/SafeCacheErrorHandler.java`
around lines 20 - 41, Replace the current log.warn calls in handleCacheGetError,
handleCachePutError, handleCacheEvictError, and handleCacheClearError to pass
the exception object (exception) as the final parameter instead of
exception.getMessage() so stack traces are preserved (update the log.warn calls
that reference cache.getName(), key, exception.getMessage() accordingly); avoid
logging the raw key for all caches—add a small allowlist check on
cache.getName() (or a shouldLogKey(cacheName) helper) to include key only for
specific caches and otherwise omit or redact it; keep
incrementErrorCounter(cache.getName(), "<operation>") calls as-is but ensure
they are invoked exactly once per handler; and add unit tests for each handler
(handleCacheGetError, handleCachePutError, handleCacheEvictError,
handleCacheClearError) that assert the handler does not rethrow and that
incrementErrorCounter is called exactly once.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In
`@apps/commerce-api/src/main/java/com/loopers/application/order/OrderService.java`:
- Around line 101-117: The afterCommit block in registerAfterCommitCacheEviction
can throw from Cache.evict and propagate a 500; wrap the eviction logic in a
safe-evict wrapper or at minimum surround the afterCommit body with a try-catch
that logs the exception and records a metric instead of rethrowing (use the same
Cache retrieval via cacheManager.getCache(CacheConfig.PRODUCT_DETAIL) and
evicting productIds), so failures are swallowed and monitored;
update/registerAfterCommitCacheEviction (and its anonymous
TransactionSynchronization.afterCommit) accordingly and add a unit/integration
test for createOrder that mocks Cache.evict to throw and asserts the order is
persisted while the exception is logged/metricized.

In
`@apps/commerce-api/src/main/java/com/loopers/application/product/ProductQueryService.java`:
- Around line 53-55: The PRODUCT_LIST cache is currently fully cleared by
evictProductListAfterCommit(), causing poor performance; change the strategy to
evict only keys related to the affected brand or use a simpler namespace so
range invalidation is possible: modify evictProductListAfterCommit() to compute
and evict entries for CacheConfig.PRODUCT_LIST scoped to the affected brandId
(or call cache.evict("'brand:' + brandId + ':*'") / use a cache API that
supports pattern eviction) instead of cache.clear(), and update getProducts(...)
and any producers (create/update/delete flows) to supply the brandId to that
evict call similar to how PRODUCT_DETAIL is evicted by id so only relevant list
entries are removed.

In
`@apps/commerce-api/src/main/java/com/loopers/infrastructure/cache/CacheConfig.java`:
- Around line 42-52: The current Redis ObjectMapper configuration uses
activateDefaultTyping with DefaultTyping.EVERYTHING and
allowIfBaseType(Object.class) on ObjectMapper (see activateDefaultTyping /
DefaultTyping.EVERYTHING), which opens unsafe polymorphic deserialization;
replace this by using type-fixed serializers for caches (e.g.
Jackson2JsonRedisSerializer<ProductDetailInfo> for ProductDetailInfo and a
dedicated serializer for PageResult<ProductSummaryInfo>) or, if polymorphism is
required, restrict polymorphic typing to NON_FINAL and whitelist only your
application package using
BasicPolymorphicTypeValidator.builder().allowIfSubType("com.loopers.") instead
of allowIfBaseType(Object.class); after changing, add round-trip tests for
ProductDetailInfo and PageResult<ProductSummaryInfo> to validate
serialization/deserialization.
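A sketch of the restricted validator, assuming the cached DTOs live under com.loopers. and may contain java.util collections (the JavaTimeModule registration is an assumption about the cached types):

```java
// Sketch: allow polymorphic typing only for application types and common
// collections instead of every Object subtype.
BasicPolymorphicTypeValidator ptv = BasicPolymorphicTypeValidator.builder()
        .allowIfSubType("com.loopers.")
        .allowIfSubType("java.util.")
        .build();

ObjectMapper redisObjectMapper = new ObjectMapper()
        .registerModule(new JavaTimeModule())
        .activateDefaultTyping(ptv, ObjectMapper.DefaultTyping.NON_FINAL,
                JsonTypeInfo.As.PROPERTY);
```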

In
`@apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductJpaRepository.java`:
- Around line 27-33: The JPQL update methods incrementLikeCount and
decrementLikeCount use `@Modifying` but omit clearAutomatically=true, which can
leave the JPA persistence context stale (updates applied to DB but not the
first-level cache). Fix by adding clearAutomatically = true to both `@Modifying`
annotations on incrementLikeCount and decrementLikeCount so the EntityManager
clears managed state after the bulk update; also add or update an integration
test that calls incrementLikeCount/decrementLikeCount then findById within the
same transaction to assert likeCount reflects the change.
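A sketch of the two bulk-update methods with the suggested flag; the entity name ProductJpaEntity and the likeCount field are assumptions based on the review text, and returning int also addresses the no-op-update concern raised below:

```java
// Sketch: clearAutomatically flushes the first-level cache after the bulk
// update so a subsequent findById sees the new likeCount.
public interface ProductJpaRepository extends JpaRepository<ProductJpaEntity, Long> {

    @Modifying(clearAutomatically = true)
    @Query("UPDATE ProductJpaEntity p SET p.likeCount = p.likeCount + 1 WHERE p.id = :id")
    int incrementLikeCount(@Param("id") Long id);

    @Modifying(clearAutomatically = true)
    @Query("UPDATE ProductJpaEntity p SET p.likeCount = p.likeCount - 1 "
            + "WHERE p.id = :id AND p.likeCount > 0")
    int decrementLikeCount(@Param("id") Long id);
}
```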

In `@modules/jpa/src/main/resources/jpa.yml`:
- Around line 36-38: The local profile currently sets hibernate.ddl-auto:
validate which fails on a fresh DB because schema migration scripts for entities
(e.g., ProductJpaEntity, UserJpaEntity, OrderJpaEntity, CouponJpaEntity) are
missing; update the local profile's hibernate.ddl-auto from validate to
create-drop to enable automatic schema creation in development, or alternatively
add Flyway (or Liquibase) and provide V1__create_base_tables.sql ... V4__...
migration scripts to provision the required tables so validate will succeed in
CI; modify the jpa.yml hibernate.ddl-auto entry accordingly and/or add the
Flyway dependency and migration SQL files.

In
`@modules/jpa/src/testFixtures/java/com/loopers/testcontainers/MySqlTestContainersConfig.java`:
- Around line 7-8: Rename the configuration class MySqlTestContainersConfig to
PostgreSqlTestContainersConfig (update the class declaration and any internal
references such as `@Configuration`, bean method names or usages that reference
the old class name) and then update the imports/usages in the listed test
classes (UserApiIntegrationTest, UserApiE2ETest, OrderApiIntegrationTest,
OrderApiE2ETest, ProductApiIntegrationTest, ProductApiE2ETest,
BrandApiIntegrationTest, BrandApiE2ETest, LikeApiIntegrationTest,
LikeApiE2ETest, StockConcurrencyTest, LikeConcurrencyTest,
CouponConcurrencyTest) so they import and reference
PostgreSqlTestContainersConfig instead of MySqlTestContainersConfig.

In `@scripts/V5__create_product_indexes.sql`:
- Around line 13-17: The current single-column index idx_products_active_price
ON products(price ASC) WHERE deleted_at IS NULL may be suboptimal for price DESC
queries; add a composite, filtered index that includes the common filter and
sort columns (e.g., include the query's grouping/filter column such as category
or brand together with price in DESC order) and keep the same WHERE deleted_at
IS NULL predicate (mirror the pattern used by idx_products_active_brand_likes).
After creating the composite index (with price declared DESC), validate
performance with EXPLAIN ANALYZE for both price_asc and price_desc queries and
adjust included leading columns to match actual query patterns.

---

Outside diff comments:
In
`@apps/commerce-api/src/main/java/com/loopers/application/like/LikeService.java`:
- Around line 29-55: Concurrent like() calls can race between
existsByUserIdAndProductId and save, causing a unique-constraint violation; wrap
the save path to treat constraint-violation as an idempotent success. In the
like(UserId, Long) method (around Like.create(...) and
likeRepository.save(...)), catch the DB uniqueness exception (e.g.,
DataIntegrityViolationException or the JPA/PersistenceException that wraps the
constraint violation caused by LikeJpaEntity's `@UniqueConstraint`) and simply
return (or log debug) so the second concurrent request is treated as success;
alternatively implement an insert-if-not-exists repository method and use that
atomic operation. Also add an integration test that issues concurrent like
requests for the same userId/productId and asserts idempotent success.
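A sketch of the idempotent like path, assuming Spring's DataIntegrityViolationException wraps the unique-constraint violation here (the LikedEvent type and repository/publisher names are hypothetical):

```java
// Sketch: treat the unique-constraint violation from a concurrent duplicate
// like as an idempotent success rather than an error.
public void like(UserId userId, Long productId) {
    if (likeRepository.existsByUserIdAndProductId(userId, productId)) {
        return; // already liked
    }
    try {
        likeRepository.save(Like.create(userId, productId));
        eventPublisher.publishEvent(new LikedEvent(userId, productId));
    } catch (DataIntegrityViolationException e) {
        // Another thread won the race; the like row already exists.
        log.debug("Duplicate like ignored - userId: {}, productId: {}", userId, productId);
    }
}
```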

---

Nitpick comments:

In
`@apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductJpaRepository.java`:
- Around line 31-33: The decrementLikeCount method currently returns void so
callers can't tell if the UPDATE affected any rows; change the signature of
ProductJpaRepository.decrementLikeCount to return int (the number of rows
updated) and propagate that return to callers so they can log/emit metrics or
handle 0-result cases; keep the `@Modifying` and `@Query` annotations on
decrementLikeCount and ensure any callers check the returned int to detect
failed/no-op updates.

In
`@apps/commerce-api/src/test/java/com/loopers/application/like/LikeServiceTest.java`:
- Around line 42-88: Add tests to cover event handler invocation and a
concurrent-like scenario: in LikeServiceTest add a test that stubs/publishes the
Like domain event and verifies LikeEventHandler.incrementLikeCount (and
decrementLikeCount where applicable) is called via the domainEventPublisher
(reference LikeEventHandler, incrementLikeCount, decrementLikeCount,
domainEventPublisher, service.like); also add an integration-style test that
simulates concurrent calls to service.like (use multiple threads or an executor)
for the same user/product and assert idempotency plus correct single event
publication/handler invocation to ensure no duplicate count increments.


In `@docker/infra-compose.yml`:
- Around line 3-12: Add a Docker healthcheck for the postgres service so other
services can wait for DB readiness: update the postgres service block to include
a healthcheck that runs pg_isready (e.g., CMD-SHELL "pg_isready -U application
-d loopers"), with sensible parameters like interval (10s), timeout (5s),
retries (5) and start_period (30s); also ensure services that depend on postgres
use depends_on with condition: service_healthy (or update CI orchestration to
wait on the health status) so application containers don't attempt connections
before PostgreSQL is ready.
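The described healthcheck can be sketched as the following compose fragment; the image tag and the dependent service name commerce-api are assumptions:

```yaml
# Sketch: gate dependent services on PostgreSQL readiness via pg_isready.
services:
  postgres:
    image: postgres:16
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U application -d loopers"]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 30s

  commerce-api:
    depends_on:
      postgres:
        condition: service_healthy
```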

In `@modules/jpa/src/main/resources/jpa.yml`:
- Around line 24-29: The HikariCP settings currently disable keepalive and leak
detection; update the jpa.yml entries for the pool by changing the
keepalive-time value from 0 to 30000 and the leak-detection-threshold from 0 to
60000 (milliseconds) so periodic keepalive queries run and connection leaks are
detected; locate the keys keepalive-time and leak-detection-threshold in
modules/jpa/src/main/resources/jpa.yml, apply the new numeric values, and verify
in a staging run that connections remain stable and leak logs appear when
appropriate.

In `@modules/jpa/src/testFixtures/java/com/loopers/utils/DatabaseCleanUp.java`:
- Line 35: Wrap table identifiers with SQL identifier quoting and make
truncation deterministic: in DatabaseCleanUp replace the dynamic SQL
construction in the truncateAllTables logic (the line calling
entityManager.createNativeQuery(...).executeUpdate()) to quote the table name
(e.g. use " + "\"" + table + "\"" + " or equivalent identifier-quoting helper)
so names with uppercase letters or reserved words are handled, and ensure the
list of tables is iterated in a fixed order (e.g. sort the table list before
truncating) to make execution deterministic; also add an integration test that
creates a table with a quoted/reserved/uppercase name (e.g. `@Table`(name =
"User") or a reserved word) and verifies truncateAllTables() succeeds.
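The identifier-quoting helper described above can be sketched as a small self-contained method (the class name is hypothetical; DatabaseCleanUp would inline or delegate to it):

```java
public class SqlIdentifierQuoting {

    // Quote a SQL identifier: wrap in double quotes, doubling any embedded
    // quotes, so mixed-case or reserved-word table names survive TRUNCATE.
    static String quoteIdentifier(String name) {
        return "\"" + name.replace("\"", "\"\"") + "\"";
    }

    public static void main(String[] args) {
        System.out.println(quoteIdentifier("User"));  // prints "User" (quotes included)
        System.out.println(quoteIdentifier("order")); // prints "order" (quotes included)
    }
}
```

Usage in the cleanup loop would be entityManager.createNativeQuery("TRUNCATE TABLE " + quoteIdentifier(table) + " CASCADE"), with the table list sorted first for deterministic execution.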


ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 4d25bdfc-da12-4a79-9611-5f1d60b8a769

📥 Commits

Reviewing files that changed from the base of the PR and between 058b0e4 and d9b78de.

📒 Files selected for processing (23)
  • apps/commerce-api/src/main/java/com/loopers/application/like/LikeEventHandler.java
  • apps/commerce-api/src/main/java/com/loopers/application/like/LikeService.java
  • apps/commerce-api/src/main/java/com/loopers/application/order/OrderService.java
  • apps/commerce-api/src/main/java/com/loopers/application/product/ProductQueryService.java
  • apps/commerce-api/src/main/java/com/loopers/application/product/ProductService.java
  • apps/commerce-api/src/main/java/com/loopers/domain/repository/ProductRepository.java
  • apps/commerce-api/src/main/java/com/loopers/infrastructure/cache/CacheConfig.java
  • apps/commerce-api/src/main/java/com/loopers/infrastructure/cache/ProductCacheEvictHandler.java
  • apps/commerce-api/src/main/java/com/loopers/infrastructure/cache/SafeCacheErrorHandler.java
  • apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductJpaRepository.java
  • apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductRepositoryImpl.java
  • apps/commerce-api/src/test/java/com/loopers/application/like/LikeServiceTest.java
  • apps/commerce-api/src/test/java/com/loopers/application/order/OrderServiceTest.java
  • apps/commerce-api/src/test/java/com/loopers/application/product/ProductServiceTest.java
  • build.gradle.kts
  • docker/infra-compose.yml
  • modules/jpa/build.gradle.kts
  • modules/jpa/src/main/java/com/loopers/config/jpa/DataSourceConfig.java
  • modules/jpa/src/main/resources/jpa.yml
  • modules/jpa/src/testFixtures/java/com/loopers/testcontainers/MySqlTestContainersConfig.java
  • modules/jpa/src/testFixtures/java/com/loopers/utils/DatabaseCleanUp.java
  • scripts/V5__create_product_indexes.sql
  • scripts/mock-data-100k.sql

Comment on lines 53 to 55
@Override
@Cacheable(value = CacheConfig.PRODUCT_LIST, key = "'brand:' + #brandId + ':sort:' + #sort + ':page:' + #page + ':size:' + #size")
public PageResult<ProductSummaryInfo> getProducts(Long brandId, String sort, int page, int size) {

@coderabbitai coderabbitai bot Mar 13, 2026


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check the PRODUCT_LIST cache eviction logic
rg -n "PRODUCT_LIST" --type=java -A 5 -B 2

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 6442


🏁 Script executed:

#!/bin/bash
# ProductCacheEvictHandler 호출 위치 확인
rg -n "ProductCacheEvictHandler\|evictAllProductCaches\|evictProductListAfterCommit" --type=java -B 3 -A 3

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 67


🏁 Script executed:

#!/bin/bash
# ProductService의 캐시 무효화 호출 위치 확인
rg -n "evictProductListAfterCommit" --type=java -B 5 -A 2

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 4100


🏁 Script executed:

#!/bin/bash
# Inspect the full ProductService (especially the mutation methods)
cat -n apps/commerce-api/src/main/java/com/loopers/application/product/ProductService.java | head -150

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 5115


The full-invalidation strategy for the PRODUCT_LIST cache needs improvement.

The brandId=null cache-key problem originally flagged is not actually an issue, because evictProductListAfterCommit() invalidates the entire cache via cache.clear(). That strategy, however, has a performance cost.

Every product create/update/delete clears the whole PRODUCT_LIST cache. Since the cache keeps a separate entry per page/size/sort combination, a single product change invalidates every page result, and with a TTL as short as 1 minute the cache's value becomes questionable.

Instead, selectively evict only the keys for the changed product's brandId, or simplify the key design so that range invalidation is possible. PRODUCT_DETAIL already uses selective evict(productId); PRODUCT_LIST can follow the same pattern.

Monitor the performance impact, and if the hit rate is low, revisit the TTL or the caching strategy.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/main/java/com/loopers/application/product/ProductQueryService.java`
around lines 53 - 55, The PRODUCT_LIST cache is currently fully cleared by
evictProductListAfterCommit(), causing poor performance; change the strategy to
evict only keys related to the affected brand or use a simpler namespace so
range invalidation is possible: modify evictProductListAfterCommit() to compute
and evict entries for CacheConfig.PRODUCT_LIST scoped to the affected brandId
(or call cache.evict("'brand:' + brandId + ':*'") / use a cache API that
supports pattern eviction) instead of cache.clear(), and update getProducts(...)
and any producers (create/update/delete flows) to supply the brandId to that
evict call similar to how PRODUCT_DETAIL is evicted by id so only relevant list
entries are removed.

Author


I think this is the right call for now. The point where improvement actually becomes necessary is when admin edits grow frequent or the measured cache hit rate turns out to be low.


@ukukdin, agreed. With a short 1-minute TTL and a currently low write frequency, the simple full-invalidation strategy via cache.clear() guarantees consistency without operational burden.

That said, to judge the right moment to switch strategies later, I recommend collecting these two metrics with Micrometer up front:

  • cache hit rate via the cache.gets + cache.puts tags
  • product create/update/delete frequency (admin traffic)

Switching to brandId-scoped invalidation once those numbers cross a threshold is sufficient.


✏️ Learnings added
Learnt from: ukukdin
Repo: Loopers-dev-lab/loop-pack-be-l2-vol3-java PR: 203
File: apps/commerce-api/src/main/java/com/loopers/application/product/ProductQueryService.java:53-55
Timestamp: 2026-03-13T05:44:03.072Z
Learning: In Loopers-dev-lab/loop-pack-be-l2-vol3-java PR `#203`, the PRODUCT_LIST cache eviction strategy in ProductService.evictProductListAfterCommit() intentionally uses cache.clear() (full cache invalidation) rather than selective eviction by brandId. This is a deliberate trade-off: TTL is 1 minute and admin write frequency is currently low, so full invalidation is considered acceptable. Selective eviction by brandId is deferred until admin modifications become frequent or cache hit rates are measured to be low via Micrometer metrics.



🧠 Learnings used
Learnt from: jsj1215
Repo: Loopers-dev-lab/loop-pack-be-l2-vol3-java PR: 142
File: apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductRepositoryImpl.java:80-94
Timestamp: 2026-03-06T06:41:42.198Z
Learning: In `ProductRepositoryImpl.java` (Loopers-dev-lab/loop-pack-be-l2-vol3-java), `findOptionByIdWithLock()` uses `entityManager.find(ProductOption.class, optionId, LockModeType.PESSIMISTIC_WRITE)` without a lock timeout hint. The call flow is: `OrderService.prepareOrderItems()` → `findProductOnly()` (loads only Product, not ProductOption) → `deductStock()` → `findOptionByIdWithLock()`. Since ProductOption is never loaded into the 1st-level cache before this point, there is no stale read risk. However, the lock timeout hint (`jakarta.persistence.lock.timeout`) is missing, risking infinite wait. Fix: use the 4-parameter `entityManager.find()` overload with `Map.of("jakarta.persistence.lock.timeout", 3000L)`.

Learnt from: Namjin-kimm
Repo: Loopers-dev-lab/loop-pack-be-l2-vol3-java PR: 152
File: apps/commerce-api/src/main/java/com/loopers/application/order/OrderFacade.java:86-91
Timestamp: 2026-03-06T09:00:23.795Z
Learning: In Loopers-dev-lab/loop-pack-be-l2-vol3-java PR `#152`, OrderFacade.java's deadlock prevention for multi-product orders relies on MySQL InnoDB's implicit behavior of scanning IN-clause PK queries in ascending order (B+Tree clustered index traversal). This is a deliberate design decision acknowledged by the author as a trade-off: InnoDB generally locks rows in PK ascending order for IN queries, but this is not a formal SQL standard guarantee. If the query optimizer changes execution plans (e.g., full table scan), lock order may change and deadlocks could occur. The safe alternative is adding explicit ORDER BY id to the SELECT FOR UPDATE query in ProductJpaRepository.findAllByIdsForUpdate(). InnoDB does auto-detect and roll back deadlocked transactions, so hangs are avoided but transaction failures still occur.

Learnt from: madirony
Repo: Loopers-dev-lab/loop-pack-be-l2-vol3-java PR: 0
File: :0-0
Timestamp: 2026-03-06T04:36:52.939Z
Learning: In PR `#148` of Loopers-dev-lab/loop-pack-be-l2-vol3-java, ALL WithLock repository methods — including `OrderJpaRepository.findByIdWithLock` — have `QueryHints(QueryHint(name = "jakarta.persistence.lock.timeout", value = "3000"))` applied consistently. No WithLock method is missing the timeout hint. This was confirmed by reading the full file content of OrderJpaRepository.java.

Learnt from: yoon-yoo-tak
Repo: Loopers-dev-lab/loop-pack-be-l2-vol3-java PR: 6
File: apps/commerce-api/src/main/java/com/loopers/interfaces/api/user/UserV1Dto.java:9-10
Timestamp: 2026-02-04T05:52:09.963Z
Learning: In the loop-pack-be-l2-vol3-java project, enforce unified error handling by routing errors through CoreException to ApiControllerAdvice to ensure a consistent response format. Do not introduce MethodArgumentNotValidException handlers or Bean Validation handling, as that would create inconsistent error handling patterns. This guidance applies to all Java files under the commerce-api module (source code under apps/commerce-api/src/main/java/).

Comment on lines 7 to 8
@Configuration
public class MySqlTestContainersConfig {

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Description: list every file importing MySqlTestContainersConfig

echo "=== Files importing MySqlTestContainersConfig ==="
rg -l "MySqlTestContainersConfig" --type=java

echo "=== Import statements using MySqlTestContainersConfig ==="
rg "@Import.*MySqlTestContainersConfig" --type=java -n

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 3148


🏁 Script executed:

cat -n modules/jpa/src/testFixtures/java/com/loopers/testcontainers/MySqlTestContainersConfig.java

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 1142


🏁 Script executed:

# Check DataSourceConfig to verify prefix consistency
rg -A 5 -B 5 "datasource.postgres-jpa.main" --type=java | head -40

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 2524


The class should be renamed to PostgreSqlTestContainersConfig.

The MySqlTestContainersConfig class actually uses PostgreSQL, which hurts code readability and maintainability.

Operational concerns:

  • New developers may assume the setup is MySQL-based.
  • The PostgreSQL configuration is hard to find via IDE search.
  • It can cause confusion during a database migration.
Suggested fix
  1. Rename the class MySqlTestContainersConfig to PostgreSqlTestContainersConfig
  2. Update the import statements in the following 13 test classes:
    • UserApiIntegrationTest, UserApiE2ETest
    • OrderApiIntegrationTest, OrderApiE2ETest
    • ProductApiIntegrationTest, ProductApiE2ETest
    • BrandApiIntegrationTest, BrandApiE2ETest
    • LikeApiIntegrationTest, LikeApiE2ETest
    • StockConcurrencyTest, LikeConcurrencyTest, CouponConcurrencyTest
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@modules/jpa/src/testFixtures/java/com/loopers/testcontainers/MySqlTestContainersConfig.java`
around lines 7 - 8, Rename the configuration class MySqlTestContainersConfig to
PostgreSqlTestContainersConfig (update the class declaration and any internal
references such as `@Configuration`, bean method names or usages that reference
the old class name) and then update the imports/usages in the listed test
classes (UserApiIntegrationTest, UserApiE2ETest, OrderApiIntegrationTest,
OrderApiE2ETest, ProductApiIntegrationTest, ProductApiE2ETest,
BrandApiIntegrationTest, BrandApiE2ETest, LikeApiIntegrationTest,
LikeApiE2ETest, StockConcurrencyTest, LikeConcurrencyTest,
CouponConcurrencyTest) so they import and reference
PostgreSqlTestContainersConfig instead of MySqlTestContainersConfig.

Comment on lines +13 to +17
-- 2. Price-sorted listing
-- Used by: GET /api/v1/products?sort=price_asc / price_desc
CREATE INDEX IF NOT EXISTS idx_products_active_price
ON products (price ASC)
WHERE deleted_at IS NULL;

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# EXPLAIN queries to check backward-scan support and measure the performance difference
cat << 'EOF'
-- Execution plan for the price_asc query
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM products WHERE deleted_at IS NULL ORDER BY price ASC LIMIT 20;

-- Execution plan for the price_desc query (verifying the backward scan)
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM products WHERE deleted_at IS NULL ORDER BY price DESC LIMIT 20;
EOF

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 351


🌐 Web query:

PostgreSQL B-tree index reverse scan performance DESC ORDER BY LIMIT

💡 Result:

PostgreSQL B-tree indexes can be scanned backward (“Index Scan Backward”) to satisfy ORDER BY ... DESC, and the planner will often do that even if the index was created ASC (a separate DESC index is usually unnecessary unless you need mixed directions across multiple columns). [1][6]

If ORDER BY col DESC LIMIT n is slow even though it uses a backward index scan, it’s almost always due to what happens after finding index entries:

  • Heap fetch cost / poor locality: An index scan (forward or backward) still has to visit the heap for each returned row unless it’s truly index-only. If the scan direction makes the heap access pattern less cache-friendly (common when table physical order/correlation doesn’t match the requested order), the backward scan can look much slower in practice. [4]
  • Not actually index-only (visibility map): Even with a “covering” index, PostgreSQL may still fetch heap pages to check tuple visibility unless the visibility map says the heap pages are all-visible; regular vacuuming/autovacuum improves this. [3][10]
  • Index doesn’t match the filter + order: If the query filters on other columns, scanning an index on only the sort key may have to skip many entries until it finds n rows that pass the filter; for LIMIT, that can be catastrophically input-dependent. The usual fix is a multicolumn index that matches the predicate then the sort key, e.g. (user_id, created_at DESC) for WHERE user_id = ? ORDER BY created_at DESC LIMIT ?. [11]

Practical checklist (what usually fixes DESC + LIMIT performance):

  1. Create the index that matches WHERE first, then ORDER BY (and consider INCLUDE (...) to enable index-only scans).
  2. Ensure VACUUM/autovacuum is keeping the visibility map current (so index-only scans avoid heap visits).
  3. If you need consistent top-N speed and the table is append-heavy, consider improving correlation (e.g., clustering/repacking by the “latest-first” key) to make heap fetches more sequential.

Sources: PostgreSQL docs on backward scans [1], scan direction explanation [6], visibility map/index-only behavior [3][10], and examples/discussion of backward scan slowness and filter-mismatch pitfalls [4][11].


🌐 Web query:

PostgreSQL bidirectional index creation price DESC optimization

💡 Result:

PostgreSQL B-tree indexes are bidirectional, so a plain index on price can be scanned forward for ORDER BY price ASC and backward for ORDER BY price DESC—you typically do not need a separate “DESC index” just to optimize descending sorts. PostgreSQL will just use an Index Scan Backward when appropriate. [1], [2]

When CREATE INDEX ... (price DESC) actually matters:

  • NULL positioning: ASC/DESC interacts with NULLS FIRST/LAST. An index’s defined null ordering can determine whether the index can satisfy your query’s exact ORDER BY ... NULLS ... without extra work. [1], [4]
  • Multi-column ordering (mixed directions): If you need something like ORDER BY category ASC, price DESC, matching the index’s per-column sort directions can be important. [1]
  • Top-N with filters (common “browse by price” pattern): If you filter on something else and then want the highest/lowest prices, a composite index is usually the win, e.g.:
    CREATE INDEX ON items (category, price DESC);
    -- helps: WHERE category = ? ORDER BY price DESC LIMIT ?
    Without the leading filter column, PostgreSQL may still have to scan a large portion of the price index to find matching rows. [3]

Rule of thumb: don’t build both (price) and (price DESC); instead, build one index that matches your WHERE + ORDER BY shape, and only specify DESC/NULLS ... when your query needs that exact ordering. [1], [3]


🏁 Script executed:

# Check the full contents of the migration file
cat scripts/V5__create_product_indexes.sql

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 1021


🏁 Script executed:

# Review other index patterns in the repository
find . -name "*.sql" -type f | head -20 | xargs grep -l "CREATE INDEX" 2>/dev/null | head -5

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 125


The index design needs improvement to optimize price DESC sort queries.

PostgreSQL B-tree indexes support bidirectional scans, so the price ASC index can also serve price DESC queries. However, backward scans can introduce the following performance-degrading factors:

  • Worse cache locality on heap fetches: scanning the index backward may not match the table's physical storage order, making the memory access pattern inefficient.
  • Limited index use for LIMIT queries: if the index covers only the price column without any filter columns, sorting over large data sets may not be sufficiently optimized.

Alternative: consider a composite index that combines the filter condition and the sort key, like the other indexes (idx_products_active_brand_likes). For example, a (category, price DESC) index would yield a substantial improvement if it matches the common query pattern. Validate the performance characteristics of the current design with EXPLAIN ANALYZE in production, then improve as needed.
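As a concrete sketch of that alternative: assuming brand_id is the common filter column (the schema quoted in this PR shows brand_id rather than category), the composite partial index could look like the following. The index name is hypothetical.

```sql
-- Hypothetical composite partial index: filter column first, then the sort key.
-- Mirrors idx_products_active_brand_likes; brand_id as the leading column is an assumption.
CREATE INDEX IF NOT EXISTS idx_products_active_brand_price
    ON products (brand_id, price DESC)
    WHERE deleted_at IS NULL;

-- A query this index serves without a separate sort step:
-- SELECT * FROM products
--  WHERE deleted_at IS NULL AND brand_id = 7
--  ORDER BY price DESC
--  LIMIT 20;
```

Without the leading filter column, the planner may still have to walk a large portion of the price index before collecting 20 matching rows.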

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/V5__create_product_indexes.sql` around lines 13 - 17, The current
single-column index idx_products_active_price ON products(price ASC) WHERE
deleted_at IS NULL may be suboptimal for price DESC queries; add a composite,
filtered index that includes the common filter and sort columns (e.g., include
the query's grouping/filter column such as category or brand together with price
in DESC order) and keep the same WHERE deleted_at IS NULL predicate (mirror the
pattern used by idx_products_active_brand_likes). After creating the composite
index (with price declared DESC), validate performance with EXPLAIN ANALYZE for
both price_asc and price_desc queries and adjust included leading columns to
match actual query patterns.

@ukukdin ukukdin changed the title feat: 상품 조회 성능 최적화 — 인덱스, 원자적 좋아요 동기화, Redis 캐시 (엄인국) [Volume5] 상품 조회 성능 최적화 — 인덱스, 원자적 좋아요 동기화, Redis 캐시 (엄인국) Mar 13, 2026
@ukukdin
Author

ukukdin commented Mar 13, 2026

Q1. Ordering of DB performance optimizations and when to introduce denormalization

The definition of a slow query differs by company, but I'd like to ask about the typical optimization order.
Is the usual sequence: index tuning alone while data is small in the early days, introducing denormalization once traffic grows and JOIN costs rise, and applying caching when the DB itself can no longer keep up?
Or is it better to plan denormalization up front at the initial design stage?

During mentoring I heard that denormalization is considered right away during development, and that indexes are added with full scans in mind (did I misunderstand?).
An AI taught me the order indexing --> denormalization --> cache, so I wanted to double-check.

Q2. Performance optimization criteria in domains where data consistency comes first

In domains where data consistency is the top priority, such as fintech (international remittance), it seems hard to apply caching or denormalization lightly. In such domains, what criteria do you use to decide the scope and methods of performance optimization?

Q3. Designing for scalability at an early-stage startup

At a pre-launch startup, do you estimate metrics like DAU/MAU from the start and design for scalability accordingly? Or do you focus on shipping fast at first and respond once traffic actually grows?

I joined a pre-launch international remittance startup in February, and I'm asking because I'm unsure how to set the initial design direction.

@loopers-alen

- Add try-catch to the afterCommit block to prevent 500 errors on Redis failures
- On brand deletion, selectively evict only the affected products (remove the full cache clear)
- Preserve the stack trace in SafeCacheErrorHandler and stop exposing keys

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 15

🧹 Nitpick comments (13)
scripts/mock-data-10k.sql (1)

124-130: Repeated single-row INSERTs degrade performance for bulk data generation.

The script currently inserts 10,000 rows with individual INSERTs. That is acceptable in a test environment, but execution time grows sharply as the data scales up, e.g., in the 100k script.

If performance improvement becomes necessary, consider an INSERT ... SELECT generate_series() pattern or batch insertion via a temporary table. Left here as a note for now.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/mock-data-10k.sql` around lines 124 - 130, the single-row INSERT inside the loop (INSERT INTO products ... VALUES (...)) is repeated product_count times and degrades bulk-insert performance; to fix this, remove the loop and switch to an INSERT ... SELECT generate_series() pattern or batch insertion via a temporary table (references: INSERT INTO products(...), and the brand_rec, full_name, base_price, sale_p, stock, desc_text, product_count variables used by the loop). Concretely, pre-generate the per-row values in a temporary table or CTE and insert them with a single INSERT ... SELECT, or generate the required number of rows with generate_series() and join against it to insert everything in one statement.
scripts/k6-lost-update-test.js (4)

18-48: No error handling for HTTP failures in the setup step.

If the product lookup (line 20) or the user creation (line 28) fails, either JSON.parse throws an exception or the test proceeds with fewer than 50 users, making the results inaccurate to interpret. When the target server is unstable, diagnosing the cause of a setup failure becomes difficult.

♻️ Suggested fix: add validation logic
 export function setup() {
   const productRes = http.get(`${BASE_URL}/api/v1/products/${TARGET_PRODUCT_ID}`);
+  if (productRes.status !== 200) {
+    throw new Error(`상품 조회 실패: status=${productRes.status}, body=${productRes.body}`);
+  }
   const product = JSON.parse(productRes.body);
   console.log(`초기 상태 - 상품 #${TARGET_PRODUCT_ID}: likeCount=${product.likeCount}, price=${product.price}`);

   const users = [];
   for (let i = 1; i <= 50; i++) {
     const loginId = `lostuser${i}`;
     const res = http.post(
       `${BASE_URL}/api/v1/users`,
       JSON.stringify({
         loginId: loginId,
         password: PASSWORD,
         name: `losttest${i}`,
         birthday: '1990-06-20',
         email: `lost${i}@test.com`,
       }),
       { headers: { 'Content-Type': 'application/json' } }
     );
-    if (res.status === 200) users.push(loginId);
+    if (res.status === 200 || res.status === 409) {
+      // 200: 생성 성공, 409: 이미 존재 (재실행 시)
+      users.push(loginId);
+    } else {
+      console.warn(`유저 생성 실패: ${loginId}, status=${res.status}`);
+    }
   }
-  console.log(`생성된 유저: ${users.length}명`);
+  if (users.length < 50) {
+    console.warn(`⚠️ 유저 생성 불완전: ${users.length}/50명`);
+  }
+  console.log(`준비된 유저: ${users.length}명`);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/k6-lost-update-test.js` around lines 18 - 48, In the setup function,
add robust HTTP failure handling: verify productRes.status is 200 before calling
JSON.parse (wrap parsing in try/catch) and log/throw a descriptive error
including TARGET_PRODUCT_ID and productRes.status/body on failure; likewise
check each http.post response when creating users (referencing the users array
and the http.post call) and retry or record failures, and at the end assert
users.length === 50 (throw or return an error if not) so the test aborts with
clear diagnostic information rather than proceeding with incomplete setup.

179-188: handleSummary is a minimal implementation.

It currently prints only a console banner. If CI/CD integration requires JSON report output, it can be extended as follows.

♻️ Example: JSON report output
export function handleSummary(data) {
  var lines = [
    '',
    '╔══════════════════════════════════════════════╗',
    '║    Lost Update 동시성 테스트 결과              ║',
    '╚══════════════════════════════════════════════╝',
    '',
  ];
  console.log(lines.join('\n'));
  return {
    'stdout': textSummary(data, { indent: ' ', enableColors: true }),
    'reports/lost-update-result.json': JSON.stringify(data, null, 2),
  };
}

When using textSummary, add the import at the top:

import { textSummary } from 'https://jslib.k6.io/k6-summary/0.0.1/index.js';
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/k6-lost-update-test.js` around lines 179 - 188, The handleSummary
function currently only prints a console banner; update it to also emit a
machine-readable JSON report and a human-readable summary for CI by importing
textSummary from 'https://jslib.k6.io/k6-summary/0.0.1/index.js' and modifying
handleSummary to return an object that includes a 'stdout' entry set to
textSummary(data, { indent: ' ', enableColors: true }) and a
'reports/lost-update-result.json' entry set to JSON.stringify(data, null, 2) so
CI can pick up both summary and JSON report.

148-149: teardown also needs HTTP failure handling.

If the final product-state lookup fails, a JSON.parse exception aborts teardown and the test results can no longer be verified.

♻️ Suggested fix
   const productRes = http.get(`${BASE_URL}/api/v1/products/${TARGET_PRODUCT_ID}`);
+  if (productRes.status !== 200) {
+    console.error(`❌ 최종 상품 조회 실패: status=${productRes.status}`);
+    return;
+  }
   const product = JSON.parse(productRes.body);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/k6-lost-update-test.js` around lines 148 - 149, In teardown where you
fetch the final product using productRes and product, add robust HTTP-failure
handling: check productRes.status (or use productRes.ok) before calling
JSON.parse, and if the response is not successful log an error including status
and body and avoid parsing (set product to null or a safe fallback);
additionally wrap JSON.parse in a try/catch to handle malformed JSON and log the
exception. This change should be applied around the existing productRes and
product usage in the teardown block that references BASE_URL and
TARGET_PRODUCT_ID.

108-120: JSON.parse throws when the product lookup fails.

If the GET request on line 109 fails, JSON.parse throws and that VU's iteration aborts. Also, description: product.description || 'updated' (line 119) overwrites even an empty-string description with 'updated', which can confuse interpretation of the test results.

♻️ Suggested fix
   group('어드민 상품 수정', () => {
     const getRes = http.get(`${BASE_URL}/api/v1/products/${TARGET_PRODUCT_ID}`);
+    if (getRes.status !== 200) {
+      console.warn(`상품 조회 실패: status=${getRes.status}`);
+      return;
+    }
     const product = JSON.parse(getRes.body);

     const newPrice = product.price + 1000;
     const updateBody = JSON.stringify({
       name: product.name,
       price: newPrice,
       salePrice: product.salePrice,
       stock: product.stock,
-      description: product.description || 'updated',
+      description: product.description,
     });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/k6-lost-update-test.js` around lines 108 - 120, The GET response body
is parsed unconditionally and will throw if the request fails; also the
description fallback overwrites empty strings. Before calling
JSON.parse(getRes.body) (referencing getRes and product), verify getRes.status
is a successful code and handle non-2xx by aborting the iteration or returning a
controlled fallback; wrap JSON.parse in a try/catch or use a safe parser to
avoid exceptions. For description, preserve empty-string values by checking for
the property presence (e.g., if 'description' in product) and only use 'updated'
when the description property is absent or undefined, instead of the current
product.description || 'updated' logic.
scripts/k6-slow-query-test.js (4)

57-67: No threshold on the slow_queries counter.

Without a threshold on slow query volume, the test passes even when many slow queries occur. Operationally, the test should fail once the slow query ratio exceeds a set level, so performance regressions are detected early.

♻️ Add a ratio-based slow query threshold
   thresholds: {
     // 전체 요청의 95%가 1초 이내
     http_req_duration: ['p(95)<1000'],
+    // slow query ratio: at most 5% of all requests (requires a Rate metric)
+    slow_query_rate: ['rate<0.05'],
     // 시나리오별 임계치
     product_list_duration: ['p(95)<800', 'p(99)<1500'],

Note: a k6 Counter threshold can only compare count against a constant, so expressing a ratio requires a Rate metric (e.g., new Rate('slow_query_rate') with slowQueryRate.add(res.timings.duration > SLOW_THRESHOLD) on every request). Alternatively, compute the slow query ratio in handleSummary and control the exit code there.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/k6-slow-query-test.js` around lines 57 - 67, Add a threshold for the
slow_queries counter in the thresholds block so the test fails when slow query
rate exceeds the acceptable level (e.g., p(95) or rate-based threshold for
slow_queries), by updating the thresholds object (the thresholds variable where
http_req_duration and other *_duration metrics are defined) to include an entry
for slow_queries with the desired percent/rate limits; if k6 cannot express the
exact ratio you want, compute slow query percentage in handleSummary and set
process.exit non-zero based on that calculation (referencing the slow_queries
counter and the handleSummary function).

73-81: No separate tracking of server errors (5xx) or failed responses.

trackResponse records only response times; 5xx errors and abnormal responses are not tracked by a dedicated counter. An error metric is needed to detect error-rate spikes during an incident.

♻️ Add an error counter
 const slowQueryCount = new Counter('slow_queries');
+const errorCount = new Counter('error_responses');

 // ...

 function trackResponse(res, metricTrend, label) {
   metricTrend.add(res.timings.duration);
+  if (res.status >= 400) {
+    errorCount.add(1);
+    if (res.status >= 500) {
+      console.error(`❌ ERROR [${res.status}] ${label}`);
+    }
+  }
   if (res.timings.duration > SLOW_THRESHOLD) {
     slowQueryCount.add(1);
     console.warn(
       `🐢 SLOW [${res.timings.duration.toFixed(0)}ms] ${label} - status: ${res.status}`
     );
   }
 }

Adding an error-rate threshold to thresholds lets CI fail the run automatically: for example, a fixed bound such as error_responses: ['count<100'], or an error_rate Rate metric with ['rate<0.01'] (a Counter threshold only accepts a constant bound).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/k6-slow-query-test.js` around lines 73 - 81, trackResponse currently
only records durations (metricTrend) and slowQueryCount; add an error counter
metric (e.g., errorResponseCount using k6 Counter) and increment it in
trackResponse when responses are 5xx (res.status >= 500) and optionally for any
non-2xx (res.status < 200 || res.status >= 300) depending on desired
sensitivity; update the function to call errorResponseCount.add(1) alongside
slowQueryCount.add(1) and ensure you import/define the Counter at top-level so
it can be referenced by trackResponse, then add a corresponding threshold key
(e.g., error_responses) in thresholds to fail CI when error rate exceeds your
target.

87-91: Hardcoded test-data ranges limit reuse across environments.

The PR goal mentions a 100k-row dataset, but the randomProductId range is set to 1-13000. When it mismatches the actual data size, results such as cache hit rate and 404 ratio become less reliable.

♻️ Make the ranges configurable via environment variables
 const BASE_URL = __ENV.BASE_URL || 'http://localhost:8080';
 const SLOW_THRESHOLD = 500; // 500ms 이상이면 slow query로 판정
+const MAX_BRAND_ID = parseInt(__ENV.MAX_BRAND_ID) || 20;
+const MAX_PRODUCT_ID = parseInt(__ENV.MAX_PRODUCT_ID) || 100000;

 // ...

 export default function () {
-  const brandIds = Array.from({ length: 20 }, (_, i) => i + 1);
+  const brandIds = Array.from({ length: MAX_BRAND_ID }, (_, i) => i + 1);
   const sorts = ['price_asc', 'price_desc', 'likes_desc', null];
   const randomBrandId = brandIds[Math.floor(Math.random() * brandIds.length)];
   const randomSort = sorts[Math.floor(Math.random() * sorts.length)];
-  const randomProductId = Math.floor(Math.random() * 13000) + 1;
+  const randomProductId = Math.floor(Math.random() * MAX_PRODUCT_ID) + 1;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/k6-slow-query-test.js` around lines 87 - 91, The hardcoded test-data
ranges (brandIds, sorts, randomBrandId, randomSort, randomProductId) reduce
reuse across environments and mismatch the claimed 100k dataset; make these
ranges configurable via environment variables (use k6's __ENV or your runtime
env), e.g., expose BRAND_COUNT, SORT_OPTIONS (comma-separated) and PRODUCT_COUNT
(defaulting to 13000 but allow 100000), parse/validate them at test startup,
build brandIds from BRAND_COUNT, build sorts from SORT_OPTIONS, and compute
randomBrandId/randomProductId using PRODUCT_COUNT so the test adapts to
different dataset sizes and preserves current defaults if env vars are absent.

191-213: Missing result-file output for CI/CD integration.

handleSummary returns an empty object, so no JSON/HTML report file is produced. File output is needed to preserve results as pipeline artifacts or ship them to an external dashboard.

♻️ Add JSON and text report output
 export function handleSummary(data) {
   var lines = [
     '',
     '=== Slow Query 부하테스트 결과 ===',
     // ... 기존 코드 ...
   ];

   console.log(lines.join('\n'));

-  return {};
+  return {
+    'stdout': lines.join('\n'),
+    'reports/k6-slow-query-summary.json': JSON.stringify(data, null, 2),
+    'reports/k6-slow-query-summary.txt': lines.join('\n'),
+  };
 }

Add the output directory (reports/) to .gitignore and configure the CI pipeline to collect it as an artifact.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/k6-slow-query-test.js` around lines 191 - 213, handleSummary
currently just logs and returns an empty object so k6 produces no artifact;
change handleSummary to generate and persist a JSON summary and a plain-text
report (using the existing helpers getMetricCount/getMetricP and metric names
like 'http_reqs','slow_queries','http_req_duration','product_list_duration',
etc.), ensure the output directory is created before writing, write both files
to disk and also return an object mapping filenames to their contents (per k6
handleSummary contract) so CI can collect them, and include basic error handling
around file writes.
modules/jpa/src/testFixtures/java/com/loopers/testcontainers/PostgreSQLTestContainersConfig.java (1)

12-22: Missing container-reuse setting can slow local development.

Without .withReuse(true), a new container is created on every test run, adding unnecessary overhead when tests are run repeatedly in local development.

Also, when the container fails to start inside the static block, an ExceptionInInitializerError is thrown whose root cause is hard to trace. Explicit exception handling improves debuggability from an operational standpoint.

♻️ Suggested fix: container reuse and exception handling
 static {
-    postgresContainer = new PostgreSQLContainer<>(DockerImageName.parse("postgres:16"))
-        .withDatabaseName("loopers")
-        .withUsername("test")
-        .withPassword("test");
-    postgresContainer.start();
-
-    System.setProperty("datasource.postgres-jpa.main.jdbc-url", postgresContainer.getJdbcUrl());
-    System.setProperty("datasource.postgres-jpa.main.username", postgresContainer.getUsername());
-    System.setProperty("datasource.postgres-jpa.main.password", postgresContainer.getPassword());
+    try {
+        postgresContainer = new PostgreSQLContainer<>(DockerImageName.parse("postgres:16"))
+            .withDatabaseName("loopers")
+            .withUsername("test")
+            .withPassword("test")
+            .withReuse(true);
+        postgresContainer.start();
+
+        System.setProperty("datasource.postgres-jpa.main.jdbc-url", postgresContainer.getJdbcUrl());
+        System.setProperty("datasource.postgres-jpa.main.username", postgresContainer.getUsername());
+        System.setProperty("datasource.postgres-jpa.main.password", postgresContainer.getPassword());
+    } catch (Exception e) {
+        throw new IllegalStateException("PostgreSQL 테스트 컨테이너 시작 실패", e);
+    }
 }

Additional note: using .withReuse(true) requires testcontainers.reuse.enable=true in ~/.testcontainers.properties. It is common to disable reuse in CI and enable it only locally.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@modules/jpa/src/testFixtures/java/com/loopers/testcontainers/PostgreSQLTestContainersConfig.java`
around lines 12 - 22, when creating postgresContainer in the static initializer of
PostgreSQLTestContainersConfig, add .withReuse(true) conditionally so it is enabled
only for local development (postgresContainer = new
PostgreSQLContainer<>(...).withDatabaseName(...).withUsername(...).withPassword(...).withReuse(true)),
wrap the container start (postgresContainer.start()) in a try-catch that logs the
detailed cause (e.g., Exception e) and rethrows an appropriate runtime exception, so a
clear error surfaces instead of ExceptionInInitializerError; also, when using
.withReuse(true), guard it with an environment-variable or system-property check so it
is active only locally, and note the testcontainers.reuse.enable=true requirement in a comment.
apps/commerce-api/src/main/java/com/loopers/application/brand/BrandDeletedEventHandler.java (1)

26-37: It is safer to make the handler's transaction boundary explicit.

This method saves multiple products and then publishes a secondary event that relies on AFTER_COMMIT. It currently depends on the assumption that callers always publish the event inside a transaction; with a non-transactional call, in tests, or after a future switch to asynchronous events, the product deletions could commit individually while the follow-up cache-invalidation event misses the commit hook. Annotating handle() with @Transactional pins the deletions and the event publication into a single boundary. Additionally, add an integration test that publishes BrandDeletedEvent without an outer transaction and verifies that the product deletions and the BrandProductsDeletedEvent-driven follow-up commit together.

Example change
 import org.springframework.context.event.EventListener;
 import org.springframework.stereotype.Component;
+import org.springframework.transaction.annotation.Transactional;
@@
     `@EventListener`
+    `@Transactional`
     public void handle(BrandDeletedEvent event) {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/main/java/com/loopers/application/brand/BrandDeletedEventHandler.java`
around lines 26 - 37, The handle method in BrandDeletedEventHandler must be
executed in a transactional boundary so productRepository.save(...) calls and
the subsequent eventPublisher.publishEvent(new BrandProductsDeletedEvent(...))
are committed together; add `@Transactional` to the handle(BrandDeletedEvent
event) method (importing
org.springframework.transaction.annotation.Transactional) so deletes and the
BrandProductsDeletedEvent publish occur in the same transaction, and add an
integration test that publishes BrandDeletedEvent without an outer transaction
to verify products are marked deleted and the BrandProductsDeletedEvent is
emitted on commit.
apps/commerce-api/src/main/java/com/loopers/application/product/ProductService.java (1)

78-114: It is safer to unify cache-invalidation failure handling in helpers.

Currently only the afterCommit() branch swallows exceptions; in the non-transactional fallback branch, cache.evict()/clear() exceptions propagate as-is. The warning log also records only e.getMessage(), so the root cause of a Redis failure is lost. Wrapping both the immediate-execution path and the after-commit path in helpers such as safeEvict/safeClear, under one try/catch + log.warn(..., e) policy, is operationally safer. Also add tests verifying, both with and without transaction synchronization, that product create/update/delete succeed with only a warning when the Cache throws.

This applies the coding guideline: "**/*.java: preserve the cause when handling exceptions, and separate user-facing messages from log messages."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/main/java/com/loopers/application/product/ProductService.java`
around lines 78 - 114, Unify cache eviction error handling by extracting helper
methods (e.g., safeEvict(Cache cache, Object key) and safeClear(Cache cache))
and use them from both the immediate path and the TransactionSynchronization
afterCommit() for ProductService.evictProductAfterCommit and
ProductService.evictProductListAfterCommit; each helper should wrap
cache.evict()/cache.clear() in a try/catch that logs a user-level message and
the full exception as the cause (log.warn("상품 캐시 무효화 실패 - productId: {}",
productId, e) / log.warn("상품 목록 캐시 무효화 실패", e)), and ensure the
non-transactional branch calls the same helpers so exceptions are swallowed and
only warnings are logged; also add tests asserting that cache exceptions do not
prevent product create/update/delete success while producing a warning.
apps/commerce-api/src/main/java/com/loopers/domain/model/brand/event/BrandProductsDeletedEvent.java (1)

5-8: The event's List payload should be defensively copied.

The current implementation stores the received deletedProductIds reference as-is. If the publisher or an earlier listener mutates the list, the deletion targets seen by other listeners of the same event change, and cache invalidation becomes non-deterministic. Freezing the event as fully immutable with List.copyOf(...) plus null checks in a compact constructor is safer. Also add a test verifying that mutating the original list does not change the event's contents and that deletedProductIds() is unmodifiable.

Example change
+import java.util.Objects;
+
 public record BrandProductsDeletedEvent(
         Long brandId,
         List<Long> deletedProductIds
 ) {
+    public BrandProductsDeletedEvent {
+        brandId = Objects.requireNonNull(brandId);
+        deletedProductIds = List.copyOf(Objects.requireNonNull(deletedProductIds));
+    }
 }

This applies the coding guidelines: "**/*.java: null handling, defensive copying, immutability" and "**/domain/**/*.java: immutability and encapsulation."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/main/java/com/loopers/domain/model/brand/event/BrandProductsDeletedEvent.java`
around lines 5 - 8, BrandProductsDeletedEvent currently stores the incoming
deletedProductIds reference directly; make the record immutable by adding a
compact constructor in BrandProductsDeletedEvent that null-checks
deletedProductIds (e.g., Objects.requireNonNull) and assigns
this.deletedProductIds = List.copyOf(deletedProductIds) so the internal list is
an unmodifiable defensive copy; also add a unit test that mutates the original
list after construction and asserts that event.deletedProductIds() remains
unchanged and is unmodifiable.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In
`@apps/commerce-api/src/main/java/com/loopers/infrastructure/cache/CacheConfig.java`:
- Around line 41-56: The cache serializer currently enables
ObjectMapper.activateDefaultTyping(ObjectMapper.DefaultTyping.EVERYTHING) in
CacheConfig.cacheManager using a custom redisObjectMapper which will reject JDK
types; replace this by injecting the application's ObjectMapper (or obtain it
and call copy()), change typing to ObjectMapper.DefaultTyping.NON_FINAL (or
avoid default typing entirely and register explicit subtype serializers), and
use that mapper to construct GenericJackson2JsonRedisSerializer so JDK boxed
types (Long/Integer) deserialize correctly; also add an integration test that
round-trips ProductDetailInfo and PageResult<ProductSummaryInfo> through the
GenericJackson2JsonRedisSerializer to verify deserialization succeeds.

In
`@apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductJpaRepository.java`:
- Around line 27-33: the UPDATE queries lack a condition that prevents modifying
soft-deleted products; add deletedAt IS NULL to the WHERE clause so only active
ProductJpaEntity rows are changed. Concretely, add "AND p.deletedAt IS NULL" to both
the incrementLikeCount and decrementLikeCount queries, and since decrementLikeCount
already returns int, add or strengthen a repository integration test that runs it
against a soft-deleted product ID and verifies the affected row count is 0.
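A sketch of the guarded queries that prompt describes (the method names and the deletedAt IS NULL condition come from the review; the entity name, the JPQL shape, and the likeCount > 0 underflow guard are assumptions):

```java
// Atomic, soft-delete-aware like-count updates; each returns the affected row
// count, which is 0 when the product does not exist or is soft-deleted.
@Modifying
@Query("UPDATE ProductJpaEntity p SET p.likeCount = p.likeCount + 1 "
     + "WHERE p.id = :id AND p.deletedAt IS NULL")
int incrementLikeCount(@Param("id") Long id);

@Modifying
@Query("UPDATE ProductJpaEntity p SET p.likeCount = p.likeCount - 1 "
     + "WHERE p.id = :id AND p.deletedAt IS NULL AND p.likeCount > 0")
int decrementLikeCount(@Param("id") Long id);
```

Checking the returned row count lets the caller distinguish a successful update from a no-op on a deleted product.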

In `@modules/jpa/src/main/resources/jpa.yml`:
- Around line 87-94: The prd block in jpa.yml currently pins datasource.postgres-jpa.main.jdbc-url/username/password to defaults such as localhost, which is dangerous if an override is missed. In the prd profile entry (spring.config.activate.on-profile: prd), remove the defaults and force each value through environment variables: instead of referencing jdbc-url, username, and password directly from application properties, switch them to environment variables (e.g., ${DB_JDBC_URL}, ${DB_USERNAME}, ${DB_PASSWORD}) so the application fails immediately when a value is missing (via the framework's fail-fast/required settings or validation during bean initialization). Also add a context-load integration test verifying that starting with the prd profile fails to load the context when the DB environment variables are not injected, and succeeds when they are.

In
`@modules/jpa/src/testFixtures/java/com/loopers/testcontainers/PostgreSQLTestContainersConfig.java`:
- Around line 19-21: PostgreSQLTestContainersConfig currently sets properties
via System.setProperty inside a static block and doesn't manage the container
lifecycle; migrate this to use Spring's `@DynamicPropertySource` to register
datasource.postgres-jpa.* properties via a DynamicPropertyRegistry and change
the container declaration to a Testcontainers-managed `@Container` field (or
integrate into a `@Testcontainers` test base) so Testcontainers controls
start/stop; remove the global System.setProperty side-effect from the static
initializer, ensure the container is either a static `@Container` for class-level
scope or an instance `@Container` for method scope, and register
postgresContainer.getJdbcUrl()/getUsername()/getPassword() into the
DynamicPropertyRegistry in the method referenced (e.g., a static method
annotated with `@DynamicPropertySource`) so Spring test context sees the
properties and the container is stopped automatically.

In `@scripts/k6-like-concurrency-test.js`:
- Line 129: The current random product selection (productId) can pick sentinel
products used by validation steps (the sentinel IDs referenced near lines using
like_load and at the checks around lines 163 and 188), causing test
interference; change the selection so load/test product ranges are separate and
configurable via environment variables (e.g., introduce LOAD_PRODUCT_MIN,
LOAD_PRODUCT_MAX and SENTINEL_PRODUCT_IDS or SENTINEL_PRODUCT_START/END) and
update the code that computes productId to draw only from the LOAD range (and
explicitly exclude any SENTINEL_PRODUCT_IDS), wire these env vars into the
scenario runner (like_load) and add a verification step that when only like_load
runs the likeCount for each sentinel ID does not change. Ensure references to
productId, the like_load scenario, and the sentinel ID variables are updated
consistently.
- Line 18: The test continues even when account preparation fails because the
scenario ignores setup()'s returned users and treats 409 as success; update the
flow so setup() returns data.users and the scenario uses that exact array (do
not create fallback users), immediately abort (throw/return non-zero) from
setup() if data.users.length < USER_COUNT or any user login/signup returned a
non-recoverable error (including treating 409 authentication conflicts as
failure), and change any per-VU user selection to index based on (__VU - 1) to
align mapping; ensure the code paths around USER_COUNT, setup(), data.users, and
__VU perform strict validation so a partial or conflicted setup stops the run
before scenario iterations start.
- Around line 216-237: The test currently only logs failures for
productRes/product2Res and likeCount/likeCount2 mismatches, which prevents CI
from failing; update the checks in the blocks that handle productRes and
product2Res so that any non-200 status or any unexpected likeCount value
immediately fails the run (throw or call the k6 fail() helper) instead of just
console.error/console.warn, ensuring the process exits with a non-zero code;
adjust both the product (`#1`) check (productRes → product, likeCount) and the
product `#2` check (product2Res → product2, likeCount2) and add/verify a negative
test where expected values are intentionally wrong to confirm the script returns
a non-zero exit code.
- Around line 193-205: The race test currently only treats 5xx as a concurrency
conflict; change it so both like and unlike responses are validated with k6
check() against expected status codes and any non-expected 4xx/5xx increments
concurrencyConflicts. Specifically, locate the like/unlike requests where
likeRes and unlikeRes are produced and tracked (trackResponse) and add checks
that assert the expected success status (e.g., 200/201 for like, 200/204 for
unlike or your API's expected codes); if a check fails for either likeRes or
unlikeRes, call concurrencyConflicts.add(1) and log the VU with statuses to
surface 4xx and 5xx as collisions. Ensure existing 5xx branch remains but is
replaced/augmented by these checks so all non-expected responses are counted as
conflicts.

In `@scripts/k6-lost-update-test.js`:
- Line 13: The Counter lostUpdateDetected is declared but never incremented;
update the teardown function to call lostUpdateDetected.add(1) whenever a lost
update is detected so the metric reflects actual occurrences, and also add a k6
threshold for 'lost_updates_detected' in the exported options (e.g., thresholds:
{ 'lost_updates_detected': ['count==0'] }) to enable CI/CD automatic failure on
any detected lost update; locate and modify the teardown function and the
options export to implement these changes.

In `@scripts/k6-slow-query-test.js`:
- Around line 152-157: The admin-brand group "브랜드 목록 조회" is calling the Admin
API without authentication which can produce 401/403 noise or security events;
update the group handler in scripts/k6-slow-query-test.js to include an
Authorization header when available by reading AUTH_TOKEN from environment
(AUTH_TOKEN or __ENV.AUTH_TOKEN) and passing a headers object into the http.get
call used in the group; alternatively, if this scenario should only target
public endpoints, change the URL from `${BASE_URL}/api-admin/v1/brands` to the
public path `${BASE_URL}/api/v1/brands` or remove the group entirely—ensure you
adjust the http.get call that references BASE_URL, trackResponse, and
brandListDuration accordingly.
- Around line 159-170: The composite scenario is reusing
productListSortedDuration which mixes its p95/p99 with the plain sort scenario;
create a dedicated metric (e.g., productListFilteredSortedDuration) and use it
in the group '복합 조회 (필터+정렬+큰사이즈)' instead of productListSortedDuration, then add
that new metric to the thresholds configuration and include it in handleSummary
output so it is measured and reported independently of
productListSortedDuration.

In `@scripts/mock-data-10k.sql`:
- Around line 60-81: The CASE on brand_rec.id that assigns to the models
variable lacks an ELSE branch, so if brands contains id > 20 models remains
unchanged from the previous loop; modify the CASE in the block that references
brand_rec.id to include an ELSE that explicitly resets models (e.g., to an empty
array/null) or raises an error (RAISE EXCEPTION) to avoid silent data
corruption, and ensure the models variable is initialized/reset at the start of
the loop that processes brand_rec so each brand iteration cannot inherit values
from the prior iteration.

In `@scripts/mock-data.sql`:
- Around line 94-115: The CASE on brand_rec.id lacks an ELSE branch so models
can remain uninitialized when id is outside 1–20; update the CASE (the block
assigning to models) to include an ELSE that explicitly initializes models
(e.g., set models to NULL or an empty array/list as appropriate for the
surrounding PL/pgSQL context) to avoid carrying prior loop values or
null-reference errors, and make the same change in the corresponding
mock-data-10k.sql CASE block so both scripts behave consistently.
- Around line 7-8: The current TRUNCATE statements (TRUNCATE TABLE products RESTART IDENTITY CASCADE; and TRUNCATE TABLE brands RESTART IDENTITY CASCADE;) delete all data in referencing tables (likes, order_items, etc.), so add a comment clearly explaining the intended behavior and blast radius, and add an environment check or explicit safeguard (e.g., an ENV check or confirmation flag) so the script cannot be run accidentally in production; the comment should briefly state what gets deleted (with example referencing tables), what RESTART IDENTITY means, and how to verify before running, and the TRUNCATE block should be wrapped in safety-check logic so it only executes in local/test environments.

In `@scripts/mock-users.sql`:
- Around line 17-22: The script uses pgcrypto functions gen_random_bytes() and
digest() (used in the salt and encoded assignments), but the pgcrypto extension
is not guaranteed to be installed; add a safe extension creation statement at
the top of the SQL script to ensure pgcrypto is available before those calls
(use a conditional/if-not-exists style CREATE EXTENSION for pgcrypto) and ensure
it runs with a role that has permission to create extensions so gen_random_bytes
and digest succeed at runtime.

---

Nitpick comments:
In
`@apps/commerce-api/src/main/java/com/loopers/application/brand/BrandDeletedEventHandler.java`:
- Around line 26-37: The handle method in BrandDeletedEventHandler must be
executed in a transactional boundary so productRepository.save(...) calls and
the subsequent eventPublisher.publishEvent(new BrandProductsDeletedEvent(...))
are committed together; add `@Transactional` to the handle(BrandDeletedEvent
event) method (importing
org.springframework.transaction.annotation.Transactional) so deletes and the
BrandProductsDeletedEvent publish occur in the same transaction, and add an
integration test that publishes BrandDeletedEvent without an outer transaction
to verify products are marked deleted and the BrandProductsDeletedEvent is
emitted on commit.

In
`@apps/commerce-api/src/main/java/com/loopers/application/product/ProductService.java`:
- Around line 78-114: Unify cache eviction error handling by extracting helper
methods (e.g., safeEvict(Cache cache, Object key) and safeClear(Cache cache))
and use them from both the immediate path and the TransactionSynchronization
afterCommit() for ProductService.evictProductAfterCommit and
ProductService.evictProductListAfterCommit; each helper should wrap
cache.evict()/cache.clear() in a try/catch that logs a user-level message and
the full exception as the cause (log.warn("상품 캐시 무효화 실패 - productId: {}",
productId, e) / log.warn("상품 목록 캐시 무효화 실패", e)), and ensure the
non-transactional branch calls the same helpers so exceptions are swallowed and
only warnings are logged; also add tests asserting that cache exceptions do not
prevent product create/update/delete success while producing a warning.
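A runnable sketch of the helper extraction being suggested, using a minimal Cache stand-in instead of Spring's org.springframework.cache.Cache (names and log wording are hypothetical):

```java
class SafeCacheOps {
    // Minimal stand-in for Spring's Cache interface.
    interface Cache {
        void evict(Object key);
        void clear();
    }

    // Both the immediate path and the afterCommit() callback call these helpers,
    // so a cache outage is logged as a warning instead of failing the business flow.
    static void safeEvict(Cache cache, Object key) {
        try {
            cache.evict(key);
        } catch (RuntimeException e) {
            System.out.println("WARN product cache eviction failed - key: " + key + " (" + e.getMessage() + ")");
        }
    }

    static void safeClear(Cache cache) {
        try {
            cache.clear();
        } catch (RuntimeException e) {
            System.out.println("WARN product list cache eviction failed (" + e.getMessage() + ")");
        }
    }

    public static void main(String[] args) {
        Cache broken = new Cache() { // simulates Redis being unreachable
            public void evict(Object key) { throw new IllegalStateException("redis down"); }
            public void clear() { throw new IllegalStateException("redis down"); }
        };
        safeEvict(broken, 42L);
        safeClear(broken);
        System.out.println("product update still succeeded");
    }
}
```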

In
`@apps/commerce-api/src/main/java/com/loopers/domain/model/brand/event/BrandProductsDeletedEvent.java`:
- Around line 5-8: BrandProductsDeletedEvent currently stores the incoming
deletedProductIds reference directly; make the record immutable by adding a
compact constructor in BrandProductsDeletedEvent that null-checks
deletedProductIds (e.g., Objects.requireNonNull) and assigns
this.deletedProductIds = List.copyOf(deletedProductIds) so the internal list is
an unmodifiable defensive copy; also add a unit test that mutates the original
list after construction and asserts that event.deletedProductIds() remains
unchanged and is unmodifiable.

In
`@modules/jpa/src/testFixtures/java/com/loopers/testcontainers/PostgreSQLTestContainersConfig.java`:
- Around line 12-22: When creating postgresContainer in PostgreSQLTestContainersConfig's static initializer, add the .withReuse(true) option conditionally so it is enabled only for local development (in the form postgresContainer = new
PostgreSQLContainer<>(...).withDatabaseName(...).withUsername(...).withPassword(...).withReuse(true)),
and wrap the container start (postgresContainer.start()) in a try-catch that logs the detailed cause (e.g., Exception e) and rethrows an appropriate runtime exception, so a clear error is surfaced instead of an ExceptionInInitializerError; also, when using .withReuse(true), gate it behind an environment-variable or system-property check so it is active only locally, and note in the related comment that testcontainers.reuse.enable=true is required.

In `@scripts/k6-lost-update-test.js`:
- Around line 18-48: In the setup function, add robust HTTP failure handling:
verify productRes.status is 200 before calling JSON.parse (wrap parsing in
try/catch) and log/throw a descriptive error including TARGET_PRODUCT_ID and
productRes.status/body on failure; likewise check each http.post response when
creating users (referencing the users array and the http.post call) and retry or
record failures, and at the end assert users.length === 50 (throw or return an
error if not) so the test aborts with clear diagnostic information rather than
proceeding with incomplete setup.
- Around line 179-188: The handleSummary function currently only prints a
console banner; update it to also emit a machine-readable JSON report and a
human-readable summary for CI by importing textSummary from
'https://jslib.k6.io/k6-summary/0.0.1/index.js' and modifying handleSummary to
return an object that includes a 'stdout' entry set to textSummary(data, {
indent: ' ', enableColors: true }) and a 'reports/lost-update-result.json' entry
set to JSON.stringify(data, null, 2) so CI can pick up both summary and JSON
report.
- Around line 148-149: In teardown where you fetch the final product using
productRes and product, add robust HTTP-failure handling: check
productRes.status (or use productRes.ok) before calling JSON.parse, and if the
response is not successful log an error including status and body and avoid
parsing (set product to null or a safe fallback); additionally wrap JSON.parse
in a try/catch to handle malformed JSON and log the exception. This change
should be applied around the existing productRes and product usage in the
teardown block that references BASE_URL and TARGET_PRODUCT_ID.
- Around line 108-120: The GET response body is parsed unconditionally and will
throw if the request fails; also the description fallback overwrites empty
strings. Before calling JSON.parse(getRes.body) (referencing getRes and
product), verify getRes.status is a successful code and handle non-2xx by
aborting the iteration or returning a controlled fallback; wrap JSON.parse in a
try/catch or use a safe parser to avoid exceptions. For description, preserve
empty-string values by checking for the property presence (e.g., if
'description' in product) and only use 'updated' when the description property
is absent or undefined, instead of the current product.description || 'updated'
logic.

In `@scripts/k6-slow-query-test.js`:
- Around line 57-67: Add a threshold for the slow_queries counter in the
thresholds block so the test fails when slow query rate exceeds the acceptable
level (e.g., p(95) or rate-based threshold for slow_queries), by updating the
thresholds object (the thresholds variable where http_req_duration and other
*_duration metrics are defined) to include an entry for slow_queries with the
desired percent/rate limits; if k6 cannot express the exact ratio you want,
compute slow query percentage in handleSummary and set process.exit non-zero
based on that calculation (referencing the slow_queries counter and the
handleSummary function).
- Around line 73-81: trackResponse currently only records durations
(metricTrend) and slowQueryCount; add an error counter metric (e.g.,
errorResponseCount using k6 Counter) and increment it in trackResponse when
responses are 5xx (res.status >= 500) and optionally for any non-2xx (res.status
< 200 || res.status >= 300) depending on desired sensitivity; update the
function to call errorResponseCount.add(1) alongside slowQueryCount.add(1) and
ensure you import/define the Counter at top-level so it can be referenced by
trackResponse, then add a corresponding threshold key (e.g., error_responses) in
thresholds to fail CI when error rate exceeds your target.
- Around line 87-91: The hardcoded test-data ranges (brandIds, sorts,
randomBrandId, randomSort, randomProductId) reduce reuse across environments and
mismatch the claimed 100k dataset; make these ranges configurable via
environment variables (use k6's __ENV or your runtime env), e.g., expose
BRAND_COUNT, SORT_OPTIONS (comma-separated) and PRODUCT_COUNT (defaulting to
13000 but allow 100000), parse/validate them at test startup, build brandIds
from BRAND_COUNT, build sorts from SORT_OPTIONS, and compute
randomBrandId/randomProductId using PRODUCT_COUNT so the test adapts to
different dataset sizes and preserves current defaults if env vars are absent.
- Around line 191-213: handleSummary currently just logs and returns an empty
object so k6 produces no artifact; change handleSummary to generate and persist
a JSON summary and a plain-text report (using the existing helpers
getMetricCount/getMetricP and metric names like
'http_reqs','slow_queries','http_req_duration','product_list_duration', etc.),
ensure the output directory is created before writing, write both files to disk
and also return an object mapping filenames to their contents (per k6
handleSummary contract) so CI can collect them, and include basic error handling
around file writes.

In `@scripts/mock-data-10k.sql`:
- Around line 124-130: The single-row INSERT inside the loop (INSERT INTO products ... VALUES (...)) is repeated through the product_count loop, which degrades bulk-insert performance; to fix this, remove the loop and batch-insert in one statement using the INSERT ... SELECT generate_series() pattern or a temporary table (reference: the loop structure using the INSERT INTO products(...), brand_rec, full_name, base_price, sale_p, stock, desc_text, and product_count variables). Concretely, pre-generate the values produced in the loop in a temporary table or CTE and insert them with a single INSERT ... SELECT, or use generate_series() to produce the required number of rows and join so everything is inserted at once.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: b2156355-c383-416a-be8f-43d12660de4f

📥 Commits

Reviewing files that changed from the base of the PR and between d9b78de and 50a3efe.

📒 Files selected for processing (30)
  • apps/commerce-api/src/main/java/com/loopers/application/brand/BrandDeletedEventHandler.java
  • apps/commerce-api/src/main/java/com/loopers/application/order/OrderService.java
  • apps/commerce-api/src/main/java/com/loopers/application/product/ProductService.java
  • apps/commerce-api/src/main/java/com/loopers/domain/model/brand/event/BrandProductsDeletedEvent.java
  • apps/commerce-api/src/main/java/com/loopers/infrastructure/cache/CacheConfig.java
  • apps/commerce-api/src/main/java/com/loopers/infrastructure/cache/ProductCacheEvictHandler.java
  • apps/commerce-api/src/main/java/com/loopers/infrastructure/cache/SafeCacheErrorHandler.java
  • apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductJpaRepository.java
  • apps/commerce-api/src/test/java/com/loopers/concurrency/CouponConcurrencyTest.java
  • apps/commerce-api/src/test/java/com/loopers/concurrency/LikeConcurrencyTest.java
  • apps/commerce-api/src/test/java/com/loopers/concurrency/StockConcurrencyTest.java
  • apps/commerce-api/src/test/java/com/loopers/interfaces/api/brand/BrandApiE2ETest.java
  • apps/commerce-api/src/test/java/com/loopers/interfaces/api/brand/BrandApiIntegrationTest.java
  • apps/commerce-api/src/test/java/com/loopers/interfaces/api/like/LikeApiE2ETest.java
  • apps/commerce-api/src/test/java/com/loopers/interfaces/api/like/LikeApiIntegrationTest.java
  • apps/commerce-api/src/test/java/com/loopers/interfaces/api/order/OrderApiE2ETest.java
  • apps/commerce-api/src/test/java/com/loopers/interfaces/api/order/OrderApiIntegrationTest.java
  • apps/commerce-api/src/test/java/com/loopers/interfaces/api/product/ProductApiE2ETest.java
  • apps/commerce-api/src/test/java/com/loopers/interfaces/api/product/ProductApiIntegrationTest.java
  • apps/commerce-api/src/test/java/com/loopers/interfaces/api/user/UserApiE2ETest.java
  • apps/commerce-api/src/test/java/com/loopers/interfaces/api/user/UserApiIntegrationTest.java
  • docker/infra-compose.yml
  • modules/jpa/src/main/resources/jpa.yml
  • modules/jpa/src/testFixtures/java/com/loopers/testcontainers/PostgreSQLTestContainersConfig.java
  • scripts/k6-like-concurrency-test.js
  • scripts/k6-lost-update-test.js
  • scripts/k6-slow-query-test.js
  • scripts/mock-data-10k.sql
  • scripts/mock-data.sql
  • scripts/mock-users.sql
🚧 Files skipped from review as they are similar to previous changes (3)
  • apps/commerce-api/src/main/java/com/loopers/infrastructure/cache/SafeCacheErrorHandler.java
  • apps/commerce-api/src/main/java/com/loopers/infrastructure/cache/ProductCacheEvictHandler.java
  • docker/infra-compose.yml

Comment on lines +41 to +56
public CacheManager cacheManager(RedisConnectionFactory connectionFactory) {
    ObjectMapper redisObjectMapper = new ObjectMapper();
    redisObjectMapper.registerModule(new JavaTimeModule());
    redisObjectMapper.disable(SerializationFeature.WRITE_DATES_AS_TIMESTAMPS);
    redisObjectMapper.disable(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES);
    redisObjectMapper.activateDefaultTyping(
            BasicPolymorphicTypeValidator.builder()
                    .allowIfSubType("com.loopers.")
                    .allowIfSubType("java.util.")
                    .build(),
            ObjectMapper.DefaultTyping.EVERYTHING,
            JsonTypeInfo.As.PROPERTY
    );

    GenericJackson2JsonRedisSerializer jsonSerializer =
            new GenericJackson2JsonRedisSerializer(redisObjectMapper);

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

cd apps/commerce-api && find . -name "CacheConfig.java" -type f

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 150


🏁 Script executed:

cat -n apps/commerce-api/src/main/java/com/loopers/infrastructure/cache/CacheConfig.java

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 4312


🏁 Script executed:

# Check the DTO types stored in the cache
rg "ProductDetailInfo|PageResult" apps/commerce-api/src --type java -A 3 -B 1

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 31277


🏁 Script executed:

# Check ObjectMapper injection/configuration
rg "ObjectMapper|@Bean.*ObjectMapper" apps/commerce-api/src --type java -B 2 -A 2

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 9552


🏁 Script executed:

rg "record ProductDetailInfo|record ProductSummaryInfo" apps/commerce-api/src --type java -A 15

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 2777


🏁 Script executed:

rg "java.time|java.lang|java.math" apps/commerce-api/src/main/java/com/loopers/application/product/ProductQueryUseCase.java -A 2

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 67


🏁 Script executed:

# Check for an application ObjectMapper bean
find apps/commerce-api -name "*.java" -type f | xargs grep -l "@Bean.*ObjectMapper\|Bean\(name.*ObjectMapper" 2>/dev/null | head -5

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 67


🏁 Script executed:

cat -n apps/commerce-api/src/main/java/com/loopers/application/product/ProductQueryUseCase.java | head -60

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 1160


🏁 Script executed:

find apps/commerce-api/src/test -name "*Cache*Test.java" -o -name "*Serializ*Test.java" | head -10

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 67


🏁 Script executed:

# Check whether integration tests verify the cache round-trip
rg "@Cacheable|cacheManager" apps/commerce-api/src/test --type java -B 2 -A 10

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 5560


🏁 Script executed:

# Check Jackson DefaultTyping.EVERYTHING behavior - see community docs/code
cat > /tmp/test_typing.md << 'EOF'
DefaultTyping.EVERYTHING with BasicPolymorphicTypeValidator behavior:

1. EVERYTHING means include type metadata for ALL types
2. BasicPolymorphicTypeValidator with allowIfSubType("java.lang.") NOT configured
3. JDK wrapper types: Long, Integer, String, Boolean are in java.lang.* package
4. These types ARE final classes, but with EVERYTHING setting, Jackson may still:
   - Include type metadata for collections containing these types
   - Fail validation if validator checks parent packages strictly

Key problem areas:
- PageResult<T> has List<T> content field
- When deserializing PageResult<ProductDetailInfo>, inner ProductDetailInfo records contain Long, Integer fields
- If type metadata is included for these, and validator rejects java.lang.*, deserialization fails
- Symptom: Silent cache miss + fallback to DB (not obvious until performance degrades)
EOF
cat /tmp/test_typing.md

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 948


🏁 Script executed:

# Verify actual allowIfSubType behavior - confirm only com.loopers and java.util are allowed
rg "allowIfSubType|BasicPolymorphicTypeValidator" apps/commerce-api/src --type java -B 3 -A 3

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 2325


🏁 Script executed:

# Check timestamp/time fields in PageResult
cat -n apps/commerce-api/src/main/java/com/loopers/domain/model/common/PageResult.java

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 615


🏁 Script executed:

# Inspect SafeCacheErrorHandler - how cache errors are handled
cat -n apps/commerce-api/src/main/java/com/loopers/infrastructure/cache/SafeCacheErrorHandler.java

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 2332


The cache serializer's type-validation policy puts production performance at risk.

With an allowlist that only permits com.loopers. and java.util., using DefaultTyping.EVERYTHING means that during deserialization of cached values, JDK types such as Long and Integer fall under java.lang.* and are rejected by the validator. ProductDetailInfo and PageResult<ProductSummaryInfo> currently contain fields such as Long id and Integer salePrice, and when deserialization fails, SafeCacheErrorHandler only logs the exception and silently falls back to re-querying the DB. The result is an operational failure mode where the cache fails quietly while DB load climbs unnoticed.

It is better to inject the application's ObjectMapper, call copy(), and narrow the scope to ObjectMapper.DefaultTyping.NON_FINAL, or to use a fixed serializer per cache. Additionally, add an integration test that round-trips ProductDetailInfo and PageResult<ProductSummaryInfo> through the actual serializer to verify up front that deserialization succeeds.

Minimal change example
-    public CacheManager cacheManager(RedisConnectionFactory connectionFactory) {
-        ObjectMapper redisObjectMapper = new ObjectMapper();
+    public CacheManager cacheManager(RedisConnectionFactory connectionFactory, ObjectMapper objectMapper) {
+        ObjectMapper redisObjectMapper = objectMapper.copy();
@@
-                ObjectMapper.DefaultTyping.EVERYTHING,
+                ObjectMapper.DefaultTyping.NON_FINAL,
                 JsonTypeInfo.As.PROPERTY
         );
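The rejection can be illustrated without Redis or Jackson: an allowlist that admits only the com.loopers. and java.util. prefixes rejects java.lang.Long the moment type metadata is emitted for it. A minimal stand-in for the validator's prefix check (not Jackson's actual API):

```java
class AllowlistDemo {
    // Simplified model of BasicPolymorphicTypeValidator's allowIfSubType(prefix) checks.
    static boolean allowed(Class<?> type, String... prefixes) {
        for (String prefix : prefixes) {
            if (type.getName().startsWith(prefix)) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        String[] allowlist = { "com.loopers.", "java.util." };
        System.out.println(allowed(java.util.ArrayList.class, allowlist)); // true
        System.out.println(allowed(Long.class, allowlist)); // false: java.lang.Long is not covered
    }
}
```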
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/main/java/com/loopers/infrastructure/cache/CacheConfig.java`
around lines 41 - 56, The cache serializer currently enables
ObjectMapper.activateDefaultTyping(ObjectMapper.DefaultTyping.EVERYTHING) in
CacheConfig.cacheManager using a custom redisObjectMapper which will reject JDK
types; replace this by injecting the application's ObjectMapper (or obtain it
and call copy()), change typing to ObjectMapper.DefaultTyping.NON_FINAL (or
avoid default typing entirely and register explicit subtype serializers), and
use that mapper to construct GenericJackson2JsonRedisSerializer so JDK boxed
types (Long/Integer) deserialize correctly; also add an integration test that
round-trips ProductDetailInfo and PageResult<ProductSummaryInfo> through the
GenericJackson2JsonRedisSerializer to verify deserialization succeeds.

Comment on lines +27 to +33
@Modifying(clearAutomatically = true)
@Query("UPDATE ProductJpaEntity p SET p.likeCount = p.likeCount + 1 WHERE p.id = :productId")
void incrementLikeCount(@Param("productId") Long productId);

@Modifying(clearAutomatically = true)
@Query("UPDATE ProductJpaEntity p SET p.likeCount = p.likeCount - 1 WHERE p.id = :productId AND p.likeCount > 0")
int decrementLikeCount(@Param("productId") Long productId);

⚠️ Potential issue | 🟠 Major

A missing guard allows likeCount to be updated even on deleted products.

From an operational standpoint, if an asynchronous like event is processed right after a product is soft-deleted, the deleted entity's aggregate gets corrupted, breaking data consistency and audit traceability. Add deletedAt IS NULL to the UPDATE conditions on Line 28 and Line 32 so only active products are updated. As an additional test, strengthen the repository integration tests to verify that the affected row count is 0 when the increment/decrement queries run against a soft-deleted product ID.

🔧 Suggested fix
-    @Query("UPDATE ProductJpaEntity p SET p.likeCount = p.likeCount + 1 WHERE p.id = :productId")
+    @Query("UPDATE ProductJpaEntity p SET p.likeCount = p.likeCount + 1 " +
+           "WHERE p.id = :productId AND p.deletedAt IS NULL")
     void incrementLikeCount(@Param("productId") Long productId);

-    @Query("UPDATE ProductJpaEntity p SET p.likeCount = p.likeCount - 1 WHERE p.id = :productId AND p.likeCount > 0")
+    @Query("UPDATE ProductJpaEntity p SET p.likeCount = p.likeCount - 1 " +
+           "WHERE p.id = :productId AND p.deletedAt IS NULL AND p.likeCount > 0")
     int decrementLikeCount(@Param("productId") Long productId);

As per coding guidelines "**/*Repository*.java: 쿼리 조건 누락/과다 조회, 정렬/인덱스 활용 가능성, 대량 데이터에서의 병목을 점검한다."
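The behavior the suggested test should pin down — the guarded UPDATE becoming a no-op for soft-deleted rows — can be sketched with an in-memory model (names are hypothetical; the real verification belongs in a repository integration test against the actual query):

```java
import java.util.HashMap;
import java.util.Map;

class LikeCountUpdateDemo {
    // In-memory model of the guarded statement:
    //   UPDATE products SET like_count = like_count + 1
    //   WHERE id = ? AND deleted_at IS NULL
    record Product(long id, int likeCount, boolean deleted) {}

    static int incrementLikeCount(Map<Long, Product> table, long id) {
        Product p = table.get(id);
        if (p == null || p.deleted()) return 0; // WHERE clause filters the row: 0 rows affected
        table.put(id, new Product(id, p.likeCount() + 1, false));
        return 1;
    }

    public static void main(String[] args) {
        Map<Long, Product> table = new HashMap<>();
        table.put(1L, new Product(1L, 0, false));
        table.put(2L, new Product(2L, 7, true)); // soft-deleted

        System.out.println(incrementLikeCount(table, 1L)); // 1 row affected
        System.out.println(incrementLikeCount(table, 2L)); // 0 rows: deleted product untouched
        System.out.println(table.get(2L).likeCount());     // still 7
    }
}
```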

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductJpaRepository.java`
around lines 27 - 33, the UPDATE queries are missing a condition and can modify even soft-deleted products; add deletedAt IS NULL to the WHERE clause so only active ProductJpaEntity rows are changed. Specifically, add an "AND p.deletedAt IS NULL" condition to both the incrementLikeCount and decrementLikeCount query statements, and since decrementLikeCount already returns int, add or strengthen a repository integration test that runs the increment/decrement queries against a soft-deleted product ID and verifies the affected row count is 0.

Comment on lines 87 to 94
 spring.config.activate.on-profile: prd

 datasource:
-  mysql-jpa:
+  postgres-jpa:
     main:
-      jdbc-url: jdbc:mysql://localhost:3306/loopers
+      jdbc-url: jdbc:postgresql://localhost:5432/loopers
       username: application
       password: application

⚠️ Potential issue | 🟠 Major

Hard-coding the production profile's DB connection info to localhost is a serious outage risk.

Operationally, if prd defaults to localhost, a single missed override during deployment leads to connecting to the wrong DB or a startup failure. Force prd values through environment variables, remove the defaults, and fail fast. As an additional test, verify that a prd-profile context load fails when the DB environment variables are not injected and succeeds when they are.

🔧 Suggested fix
 spring.config.activate.on-profile: prd

 datasource:
   postgres-jpa:
     main:
-      jdbc-url: jdbc:postgresql://localhost:5432/loopers
-      username: application
-      password: application
+      jdbc-url: ${POSTGRES_JDBC_URL}
+      username: ${POSTGRES_USER}
+      password: ${POSTGRES_PWD}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@modules/jpa/src/main/resources/jpa.yml` around lines 87 - 94, the prd block in jpa.yml currently pins datasource.postgres-jpa.main.jdbc-url/username/password to defaults such as localhost, which is dangerous if an override is missed. In the prd profile entry (spring.config.activate.on-profile: prd), remove the defaults and force each value through environment variables: instead of referencing jdbc-url, username, and password directly from application properties, switch them to environment variables (e.g., ${DB_JDBC_URL}, ${DB_USERNAME}, ${DB_PASSWORD}) so the application fails immediately when a value is missing (via the framework's fail-fast/required settings or validation during bean initialization). Also add a context-load integration test verifying that starting with the prd profile fails to load the context when the DB environment variables are not injected, and succeeds when they are.

Comment on lines +19 to +21
System.setProperty("datasource.postgres-jpa.main.jdbc-url", postgresContainer.getJdbcUrl());
System.setProperty("datasource.postgres-jpa.main.username", postgresContainer.getUsername());
System.setProperty("datasource.postgres-jpa.main.password", postgresContainer.getPassword());

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

Spring Boot Testcontainers @DynamicPropertySource vs System.setProperty

💡 Result:

What’s the difference?

@DynamicPropertySource (preferred for Testcontainers + Spring tests)

  • Registers properties into the Spring Environment for that test ApplicationContext (via DynamicPropertyRegistry). It’s designed specifically for values you don’t know up front (e.g., container host/port). [1]
  • Values can be supplied lazily (method references/suppliers), so Spring can resolve them at the right time during context initialization. [1]
  • Precedence: properties registered via @DynamicPropertySource have higher precedence than @TestPropertySource, and test property sources override OS env + Java system properties. [2]
  • Plays nicely with Spring’s test context lifecycle/caching (i.e., it’s “in-framework” rather than a global JVM side-effect). [1]

System.setProperty(...) (global JVM side effect; use cautiously)

  • Sets a global Java system property for the entire JVM. That means:
    • It can leak between tests (especially with parallel execution).
    • You must manually clean up (clearProperty / restore previous value) to avoid flakiness.
    • It can be too late if the Spring context has already read/bound the property.
  • In Spring tests, system properties are typically lower precedence than test property mechanisms, so your setProperty might be unexpectedly overridden. [2]

Practical guidance for Testcontainers

  • If you’re wiring container connection info into Spring Boot properties (JDBC URL, Redis host/port, etc.), use @DynamicPropertySource. Spring Boot’s Testcontainers docs also show this pattern. [3]
  • Only use System.setProperty when you truly need a process-wide switch (and then always restore/clear it in test teardown).

🏁 Script executed:

cat -n modules/jpa/src/testFixtures/java/com/loopers/testcontainers/PostgreSQLTestContainersConfig.java

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 1147


🏁 Script executed:

# Search for parallel test execution configuration
find . -type f \( -name "*.properties" -o -name "*.xml" -o -name "pom.xml" -o -name "gradle.properties" -o -name "build.gradle*" \) | head -20

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 966


🏁 Script executed:

# Search for any other usages of `@DynamicPropertySource` in the project
rg "@DynamicPropertySource" --type java

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 67


🏁 Script executed:

# Check if there are other test configuration classes that might use this config
rg "PostgreSQLTestContainersConfig" --type java

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 3975


🏁 Script executed:

# Check gradle.properties for test parallel execution
cat gradle.properties | grep -i "parallel\|test"

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 112


🏁 Script executed:

# Check main build.gradle.kts for test configuration
cat -n build.gradle.kts | head -100

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 4235


🏁 Script executed:

# Check the jpa module's build.gradle.kts
cat -n modules/jpa/build.gradle.kts

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 923


Migrate to @DynamicPropertySource for better container lifecycle management

The current System.setProperty approach works, but it has two problems. First, the container is started in a static {} initializer block with no code that explicitly stops it, so resources are leaked. Second, @DynamicPropertySource is the standard pattern in Spring tests and lets the container lifecycle be managed together with the Spring test context.

Parallel execution is currently disabled (maxParallelForks = 1), but switching to the @DynamicPropertySource pattern is still worthwhile to remove the global property side effect and shut the container down cleanly. Refactoring into a @TestcontainersTest annotation or a dedicated test base class would make this immediately applicable.
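The JVM-global side effect flagged here can be demonstrated with plain JDK code, independent of Spring or Testcontainers. A minimal sketch (the property key mirrors the fixture above; the URL values are placeholders):

```java
public class SystemPropertyLeakDemo {
    public static void main(String[] args) {
        String key = "datasource.postgres-jpa.main.jdbc-url";

        // "Test A" sets the property, e.g. from a static initializer block.
        System.setProperty(key, "jdbc:postgresql://localhost:5432/test_a");

        // "Test B" running later in the same JVM observes A's value
        // unless A explicitly cleaned up after itself.
        System.out.println(System.getProperty(key));

        // Explicit cleanup is required to avoid cross-test leakage.
        System.clearProperty(key);
        System.out.println(System.getProperty(key)); // null once cleared
    }
}
```

With @DynamicPropertySource the values live only inside the test ApplicationContext, so no such cleanup is needed.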

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@modules/jpa/src/testFixtures/java/com/loopers/testcontainers/PostgreSQLTestContainersConfig.java`
around lines 19 - 21, PostgreSQLTestContainersConfig currently sets properties
via System.setProperty inside a static block and doesn't manage the container
lifecycle; migrate this to use Spring's `@DynamicPropertySource` to register
datasource.postgres-jpa.* properties via a DynamicPropertyRegistry and change
the container declaration to a Testcontainers-managed `@Container` field (or
integrate into a `@Testcontainers` test base) so Testcontainers controls
start/stop; remove the global System.setProperty side-effect from the static
initializer, ensure the container is either a static `@Container` for class-level
scope or an instance `@Container` for method scope, and register
postgresContainer.getJdbcUrl()/getUsername()/getPassword() into the
DynamicPropertyRegistry in the method referenced (e.g., a static method
annotated with `@DynamicPropertySource`) so Spring test context sees the
properties and the container is stopped automatically.


const BASE_URL = __ENV.BASE_URL || 'http://localhost:8080';
const SLOW_THRESHOLD = 500;
const USER_COUNT = 99;

⚠️ Potential issue | 🟠 Major

Account preparation failures do not stop the test.

Operationally, partial account-creation failures, password mismatches on pre-existing accounts, and USER_COUNT(99) < max VUs(100) all show up as 401/409 noise that looks like a performance or concurrency regression. Currently Line 42 treats 409 as success, and the later scenarios never use the users array that setup() returns, so the set of prepared accounts and the set of accounts actually exercised diverge. Fix: use the data.users that setup() secured as-is, and abort immediately when the expected count is not met or a 409-conflicted account fails authentication. Line 126 is also safer when indexed on (__VU - 1). Additional tests: verify that setup() fails when a single account creation returns 500, and that the run is blocked before the scenarios start when the same loginId already exists with a different password.

Example fix
-const USER_COUNT = 99;
+const MAX_VUS = 100;
+const USER_COUNT = MAX_VUS;
+
+function currentLoginId(data) {
+  return data.users[(__VU - 1) % data.users.length];
+}

 export function setup() {
   console.log('=== Setup: 테스트 유저 생성 ===');

   const users = [];
   for (let i = 1; i <= USER_COUNT; i++) {
@@
-    if (res.status === 200 || res.status === 409) {
+    if (res.status === 200) {
       users.push(loginId);
+    } else if (res.status === 409) {
+      const authRes = http.get(`${BASE_URL}/api/v1/users/${loginId}/likes`, {
+        headers: authHeaders(loginId),
+      });
+      if (authRes.status === 200) {
+        users.push(loginId);
+      }
     }
   }

+  if (users.length < MAX_VUS) {
+    throw new Error(`setup failed: expected at least ${MAX_VUS} users, got ${users.length}`);
+  }
+
   console.log(`생성된 유저 수: ${users.length}`);
   return { users };
 }

 export function likeLoadTest(data) {
-  const userIndex = (__VU % USER_COUNT) + 1;
-  const loginId = `likeuser${userIndex}`;
+  const loginId = currentLoginId(data);
   const headers = authHeaders(loginId);

Also applies to: 24-49, 61-63, 125-128, 162-166, 187-191
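The index skew behind this fix can be checked outside k6 by simulating its 1-based __VU numbering. A quick sketch (USER_COUNT and MAX_VUS values are taken from the comment above):

```java
import java.util.HashSet;
import java.util.Set;

public class VuMappingDemo {
    public static void main(String[] args) {
        int userCount = 99;   // accounts created by setup() today
        int maxVus = 100;     // virtual users actually running

        Set<Integer> current = new HashSet<>();
        Set<Integer> proposed = new HashSet<>();
        for (int vu = 1; vu <= maxVus; vu++) {     // __VU is 1-based in k6
            current.add((vu % userCount) + 1);     // mapping in the script today
            proposed.add((vu - 1) % maxVus);       // (__VU - 1) with USER_COUNT == max VUs
        }

        // 100 VUs collapse onto 99 distinct accounts: one account is shared,
        // which is exactly the kind of collision that produces 401/409 noise.
        System.out.println(current.size());
        System.out.println(proposed.size());
    }
}
```

With USER_COUNT raised to MAX_VUS and the (__VU - 1) index, every VU maps to a distinct prepared account.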

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/k6-like-concurrency-test.js` at line 18, The test continues even when
account preparation fails because the scenario ignores setup()'s returned users
and treats 409 as success; update the flow so setup() returns data.users and the
scenario uses that exact array (do not create fallback users), immediately abort
(throw/return non-zero) from setup() if data.users.length < USER_COUNT or any
user login/signup returned a non-recoverable error (including treating 409
authentication conflicts as failure), and change any per-VU user selection to
index based on (__VU - 1) to align mapping; ensure the code paths around
USER_COUNT, setup(), data.users, and __VU perform strict validation so a partial
or conflicted setup stops the run before scenario iterations start.

Comment on lines +159 to +170
// ── 7. 복합 시나리오: 브랜드 필터 + 정렬 + 큰 size ──
group('복합 조회 (필터+정렬+큰사이즈)', () => {
const res = http.get(
`${BASE_URL}/api/v1/products?brandId=${randomBrandId}&sort=price_asc&page=0&size=100`
);
check(res, { '200 OK': (r) => r.status === 200 });
trackResponse(
res,
productListSortedDuration,
`GET /products?brandId=${randomBrandId}&sort=price_asc&size=100`
);
});

⚠️ Potential issue | 🟡 Minor

The composite scenario reuses the productListSortedDuration metric, so its results are mixed with the sorted-list scenario.

Using the same metric (productListSortedDuration) as the sorted query on Lines 113-125 turns the p95/p99 values into a blend of the two scenarios. The composite query has a large payload (size=100) and combines filter + sort, so it needs a dedicated metric for accurate performance analysis.

🔧 Suggested fix: add a dedicated metric for the composite query
 const productDeepPageDuration = new Trend('product_deep_page_duration', true);
 const brandListDuration = new Trend('brand_list_duration', true);
+const productCompositeQueryDuration = new Trend('product_composite_query_duration', true);
 const slowQueryCount = new Counter('slow_queries');
   // ── 7. 복합 시나리오: 브랜드 필터 + 정렬 + 큰 size ──
   group('복합 조회 (필터+정렬+큰사이즈)', () => {
     const res = http.get(
       `${BASE_URL}/api/v1/products?brandId=${randomBrandId}&sort=price_asc&page=0&size=100`
     );
     check(res, { '200 OK': (r) => r.status === 200 });
     trackResponse(
       res,
-      productListSortedDuration,
+      productCompositeQueryDuration,
       `GET /products?brandId=${randomBrandId}&sort=price_asc&size=100`
     );
   });

Additionally, the new metric must be reflected in thresholds and in handleSummary.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/k6-slow-query-test.js` around lines 159 - 170, The composite scenario
is reusing productListSortedDuration which mixes its p95/p99 with the plain sort
scenario; create a dedicated metric (e.g., productListFilteredSortedDuration)
and use it in the group '복합 조회 (필터+정렬+큰사이즈)' instead of
productListSortedDuration, then add that new metric to the thresholds
configuration and include it in handleSummary output so it is measured and
reported independently of productListSortedDuration.

Comment on lines +60 to +81
CASE brand_rec.id
WHEN 1 THEN models := nike_models;
WHEN 2 THEN models := adidas_models;
WHEN 3 THEN models := puma_models;
WHEN 4 THEN models := nb_models;
WHEN 5 THEN models := asics_models;
WHEN 6 THEN models := timb_models;
WHEN 7 THEN models := prada_models;
WHEN 8 THEN models := conv_models;
WHEN 9 THEN models := vans_models;
WHEN 10 THEN models := reebok_models;
WHEN 11 THEN models := fila_models;
WHEN 12 THEN models := skechers_models;
WHEN 13 THEN models := ua_models;
WHEN 14 THEN models := salomon_models;
WHEN 15 THEN models := hoka_models;
WHEN 16 THEN models := on_models;
WHEN 17 THEN models := balen_models;
WHEN 18 THEN models := jordan_models;
WHEN 19 THEN models := mizuno_models;
WHEN 20 THEN models := dr_models;
END CASE;

⚠️ Potential issue | 🟡 Minor

The CASE statement is missing an ELSE branch, which makes failures implicit.

If a brand with id > 20 is ever added to the brands table, no WHEN branch matches; a PL/pgSQL CASE statement without an ELSE then raises a CASE_NOT_FOUND exception (it does not silently keep the previous iteration's value), so the data load aborts mid-run with a generic error message.

Operationally, adding a brand later without updating this script causes an implicit failure, so an ELSE branch should be added to handle the case explicitly with a descriptive error.

Suggested fix
             WHEN 19 THEN models := mizuno_models;
             WHEN 20 THEN models := dr_models;
+            ELSE
+                RAISE EXCEPTION 'Unsupported brand_id: %. Add a model array for it.', brand_rec.id;
         END CASE;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/mock-data-10k.sql` around lines 60 - 81, The CASE on brand_rec.id
that assigns to the models variable lacks an ELSE branch, so if brands contains
an id > 20 the PL/pgSQL CASE statement raises an unhandled CASE_NOT_FOUND
error; modify the CASE in the block that references brand_rec.id to include an
ELSE that explicitly resets models (e.g., to an empty array/null) or raises a
descriptive error (RAISE EXCEPTION) so the failure mode is explicit, and ensure
the models variable is initialized/reset at the start of the loop that
processes brand_rec so each brand iteration cannot inherit values from the
prior iteration.

Comment on lines +7 to +8
TRUNCATE TABLE products RESTART IDENTITY CASCADE;
TRUNCATE TABLE brands RESTART IDENTITY CASCADE;

⚠️ Potential issue | 🟡 Minor

TRUNCATE CASCADE deletes all data in referencing tables.

TRUNCATE TABLE products RESTART IDENTITY CASCADE also deletes the data of every table that references products (e.g., likes, order_items). If this is intended, state the blast radius in a script comment so the script is never run against a production environment by mistake.

Suggested fix
 -- 기존 데이터 정리
+-- ⚠️ WARNING: CASCADE also truncates tables that reference products, such as likes and order_items
+-- Never run against a production environment
 TRUNCATE TABLE products RESTART IDENTITY CASCADE;
 TRUNCATE TABLE brands RESTART IDENTITY CASCADE;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/mock-data.sql` around lines 7 - 8, the current TRUNCATE statements
(TRUNCATE TABLE products RESTART IDENTITY CASCADE; and TRUNCATE TABLE brands
RESTART IDENTITY CASCADE;) delete all data in referencing tables (likes,
order_items, etc.), so clearly document the intended behavior and the blast
radius in a comment and add an environment check or explicit safety guard
(e.g., an ENV check or a confirmation flag) that blocks accidental execution
against production; the comment should briefly state what gets deleted
(examples of referencing tables), what RESTART IDENTITY means, and how to
verify before running, and the TRUNCATE block should be wrapped in a safety
check so it only runs in local/test environments.

Comment on lines +94 to +115
CASE brand_rec.id
WHEN 1 THEN models := nike_models;
WHEN 2 THEN models := adidas_models;
WHEN 3 THEN models := puma_models;
WHEN 4 THEN models := nb_models;
WHEN 5 THEN models := asics_models;
WHEN 6 THEN models := timb_models;
WHEN 7 THEN models := prada_models;
WHEN 8 THEN models := conv_models;
WHEN 9 THEN models := vans_models;
WHEN 10 THEN models := reebok_models;
WHEN 11 THEN models := fila_models;
WHEN 12 THEN models := skechers_models;
WHEN 13 THEN models := ua_models;
WHEN 14 THEN models := salomon_models;
WHEN 15 THEN models := hoka_models;
WHEN 16 THEN models := on_models;
WHEN 17 THEN models := balen_models;
WHEN 18 THEN models := jordan_models;
WHEN 19 THEN models := mizuno_models;
WHEN 20 THEN models := dr_models;
END CASE;

⚠️ Potential issue | 🟡 Minor

The CASE statement is missing an ELSE branch.

The same issue exists as in mock-data-10k.sql: when brand_rec.id falls outside the 1-20 range, no WHEN branch matches, and a PL/pgSQL CASE statement without an ELSE raises a CASE_NOT_FOUND exception, aborting the script with a generic error.

Suggested fix
             WHEN 19 THEN models := mizuno_models;
             WHEN 20 THEN models := dr_models;
+            ELSE
+                RAISE EXCEPTION 'Unsupported brand_id: %. Add a model array for it.', brand_rec.id;
         END CASE;
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
CASE brand_rec.id
WHEN 1 THEN models := nike_models;
WHEN 2 THEN models := adidas_models;
WHEN 3 THEN models := puma_models;
WHEN 4 THEN models := nb_models;
WHEN 5 THEN models := asics_models;
WHEN 6 THEN models := timb_models;
WHEN 7 THEN models := prada_models;
WHEN 8 THEN models := conv_models;
WHEN 9 THEN models := vans_models;
WHEN 10 THEN models := reebok_models;
WHEN 11 THEN models := fila_models;
WHEN 12 THEN models := skechers_models;
WHEN 13 THEN models := ua_models;
WHEN 14 THEN models := salomon_models;
WHEN 15 THEN models := hoka_models;
WHEN 16 THEN models := on_models;
WHEN 17 THEN models := balen_models;
WHEN 18 THEN models := jordan_models;
WHEN 19 THEN models := mizuno_models;
WHEN 20 THEN models := dr_models;
END CASE;
CASE brand_rec.id
WHEN 1 THEN models := nike_models;
WHEN 2 THEN models := adidas_models;
WHEN 3 THEN models := puma_models;
WHEN 4 THEN models := nb_models;
WHEN 5 THEN models := asics_models;
WHEN 6 THEN models := timb_models;
WHEN 7 THEN models := prada_models;
WHEN 8 THEN models := conv_models;
WHEN 9 THEN models := vans_models;
WHEN 10 THEN models := reebok_models;
WHEN 11 THEN models := fila_models;
WHEN 12 THEN models := skechers_models;
WHEN 13 THEN models := ua_models;
WHEN 14 THEN models := salomon_models;
WHEN 15 THEN models := hoka_models;
WHEN 16 THEN models := on_models;
WHEN 17 THEN models := balen_models;
WHEN 18 THEN models := jordan_models;
WHEN 19 THEN models := mizuno_models;
WHEN 20 THEN models := dr_models;
ELSE
RAISE EXCEPTION 'Unsupported brand_id: %. Add a model array for it.', brand_rec.id;
END CASE;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/mock-data.sql` around lines 94 - 115, The CASE on brand_rec.id
lacks an ELSE branch, so when id is outside 1–20 the PL/pgSQL CASE statement
raises an unhandled CASE_NOT_FOUND error; update the CASE (the block assigning
to models) to include an ELSE that explicitly initializes models (e.g., set
models to NULL or an empty array/list as appropriate for the surrounding
PL/pgSQL context) or raises a descriptive error, and make the same change in
the corresponding mock-data-10k.sql CASE block so both scripts behave
consistently.

Comment on lines +17 to +22
-- salt 생성 (16바이트 랜덤 → base64)
salt := encode(gen_random_bytes(16), 'base64');

-- SHA-256(rawPassword + salt) → base64
hash_input := raw_password || salt;
encoded := salt || ':' || encode(digest(hash_input, 'sha256'), 'base64');

⚠️ Potential issue | 🟠 Major

The pgcrypto extension dependency is not declared.

The gen_random_bytes() and digest() functions require PostgreSQL's pgcrypto extension. Running this script in an environment where the extension is not installed fails with function gen_random_bytes(integer) does not exist.

Operationally, this can break depending on script execution order or environment, so the extension-creation statement should be added at the top of the script.

Suggested fix
 -- 테스트 유저 100명 생성
 -- 비밀번호: TestPass99! (SHA256 with salt)
 -- Sha256PasswordEncoder 형식: salt:hash
+
+-- Enable the pgcrypto extension (required for gen_random_bytes and digest)
+CREATE EXTENSION IF NOT EXISTS pgcrypto;

 DO $$
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/mock-users.sql` around lines 17 - 22, The script uses pgcrypto
functions gen_random_bytes() and digest() (used in the salt and encoded
assignments), but the pgcrypto extension is not guaranteed to be installed; add
a safe extension creation statement at the top of the SQL script to ensure
pgcrypto is available before those calls (use a conditional/if-not-exists style
CREATE EXTENSION for pgcrypto) and ensure it runs with a role that has
permission to create extensions so gen_random_bytes and digest succeed at
runtime.
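For cross-checking, the salt:hash layout the script produces (described in its header as the Sha256PasswordEncoder format: base64(salt) + ':' + base64(SHA-256(rawPassword + salt))) can be reproduced with plain JDK crypto. A sketch using the TestPass99! password from the script header:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Base64;

public class SaltHashFormatDemo {
    public static void main(String[] args) throws Exception {
        // 16 random bytes, base64-encoded, as in the SQL:
        // encode(gen_random_bytes(16), 'base64')
        byte[] saltBytes = new byte[16];
        new SecureRandom().nextBytes(saltBytes);
        String salt = Base64.getEncoder().encodeToString(saltBytes);

        // SHA-256(rawPassword || salt), base64-encoded
        MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
        byte[] hash = sha256.digest(("TestPass99!" + salt).getBytes(StandardCharsets.UTF_8));
        String encoded = salt + ":" + Base64.getEncoder().encodeToString(hash);

        // base64 output never contains ':', so the stored value always
        // splits into exactly two parts: the salt and the hash.
        String[] parts = encoded.split(":");
        System.out.println(parts.length);
        System.out.println(parts[0].length()); // 16 raw bytes -> 24 base64 chars
    }
}
```

If the SQL-generated rows and the application encoder disagree on this layout, logins for the mock users will fail, so a quick format check like this is worth running after seeding.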
