Conversation
Pull request overview
This pull request addresses multiple production issues discovered through real user feedback, focusing on Random Defense win streak validation, tier synchronization, and service reliability improvements.
Changes:
- Fixed Random Defense validation to exclude failed submissions (negative runtime/memory values) from win streaks and added timestamp verification to prevent counting past solutions
- Implemented lazy tier synchronization with 1-hour throttling when users access their profile page, addressing the issue where Solved.ac tier updates weren't reflected in real-time
- Added read timeout configuration to RestClient to mitigate head-of-line blocking issues
- Introduced a special contributor decoration with animated gradient effects for service contributors
Reviewed changes
Copilot reviewed 8 out of 8 changed files in this pull request and generated 11 comments.
| File | Description |
|---|---|
| backend/src/main/java/com/ssafy/dash/defense/application/DefenseService.java | Enhanced defense verification logic with stricter validation: checks submission timestamp is after defense start and runtime/memory values are non-negative |
| backend/src/main/resources/mapper/algorithm/AlgorithmRecordMapper.xml | Changed success criteria from NULL checks to >= 0 checks for runtime_ms and memory_kb to properly exclude failed submissions with negative values |
| backend/src/main/java/com/ssafy/dash/user/application/UserService.java | Added lazy Solved.ac tier sync in findById method with 1-hour throttling using shouldUpdateSolvedacStats helper |
| backend/src/main/java/com/ssafy/dash/analytics/application/SolvedacSyncService.java | Added lightweight updateTierAndStats method for lazy sync that updates only tier/stats without bio verification |
| backend/src/main/java/com/ssafy/dash/config/RestClientConfig.java | Added 60-second read timeout to prevent indefinite hangs on slow API responses |
| backend/src/main/java/com/ssafy/dash/ai/infrastructure/client/AiServerClientImpl.java | Improved error logging by including endpoint URLs for better debugging of AI server connection failures |
| frontend/src/assets/css/effects.css | Added purple-to-cyan gradient animation effect for contributor decoration |
| backend/src/main/resources/db/migration/V14__add_contributor_decoration.sql | Inserted new SPECIAL type decoration for contributors with 0 price |
```sql
INSERT INTO decorations (name, description, css_class, type, price, is_active) VALUES
('Contributor''s Insight', 'DashHub 발전에 기여해주신 분들을 위한 특별한 선물입니다.', 'effect-contributor', 'SPECIAL', 0, true);
```
The single quote in "Contributor's Insight" is properly escaped using double single quotes (''), which is correct SQL syntax. However, this migration doesn't grant the decoration to any users.
Consider documenting how administrators should grant this decoration to contributors, or add a separate migration/script for assigning this decoration to specific users by inserting into the user_decorations table.
```java
if (latestRecordOpt.isPresent()) {
    var record = latestRecordOpt.get();

    // CRITICAL: the record must have been solved "now", not a past solution
    if (record.getCreatedAt().isBefore(user.getDefenseStartTime())) {
        return null;
    }

    // CRITICAL: runtime/memory must be valid values (0 ms allowed, -1 excluded)
    if (record.getRuntimeMs() < 0 || record.getMemoryKb() < 0) {
        return null;
    }

    LocalDateTime startTime = user.getDefenseStartTime();
    LocalDateTime endTime = record.getCreatedAt() != null ? record.getCreatedAt()
            : LocalDateTime.now();
    long elapsedSeconds = java.time.Duration.between(startTime, endTime).getSeconds();
    record.setElapsedTimeSeconds(elapsedSeconds);
    algorithmRecordRepository.update(record);

    if ("GOLD".equalsIgnoreCase(user.getDefenseType())) {
        user.setGoldStreak(user.getGoldStreak() + 1);
        currentStreak = user.getGoldStreak();
        if (user.getGoldStreak() > user.getMaxGoldStreak()) {
            user.setMaxGoldStreak(user.getGoldStreak());
        }
    } else {
        user.setSilverStreak(user.getSilverStreak() + 1);
        currentStreak = user.getSilverStreak();
        if (user.getSilverStreak() > user.getMaxSilverStreak()) {
            user.setMaxSilverStreak(user.getSilverStreak());
        }
    }

    // reset current state for the next defense
    user.setDefenseProblemId(null);
    user.setDefenseStartTime(null);
    user.setDefenseType(null);

    userRepository.update(user);
    return currentStreak;
}
```
The validation logic now properly checks that records are created after defense start time and have valid runtime/memory values. However, there's a subtle issue: if existsSuccessfulSubmission returns true (line 116) but findLatestSuccessfulByUserAndProblem returns empty (line 124-127), the method returns null without explanation.
This could happen if:
- A record matching the criteria exists but was deleted between the two queries
- There's a race condition where another thread deletes/modifies the record
Consider adding logging when latestRecordOpt.isEmpty() to help diagnose why validation failed despite a successful submission existing.
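As a rough sketch of that suggestion (the class, method, and parameter names below are illustrative, not the actual DefenseService code), the empty case could record both ids before bailing out:

```java
import java.util.Optional;

public class DefenseValidationLoggingSketch {
    // Hypothetical mirror of the verification entry point: instead of silently
    // returning null when the latest-record query comes back empty, log the ids
    // so the discrepancy with existsSuccessfulSubmission can be diagnosed.
    static Long verify(Optional<String> latestRecordOpt, long userId, long problemId) {
        if (latestRecordOpt.isEmpty()) {
            System.Logger log = System.getLogger("DefenseService");
            log.log(System.Logger.Level.WARNING,
                    "Successful submission exists but latest record missing: user={0}, problem={1}",
                    userId, problemId);
            return null;
        }
        return 1L; // continue with the normal streak handling
    }

    public static void main(String[] args) {
        System.out.println(verify(Optional.empty(), 42L, 1000L));      // inconsistent state: logged, null
        System.out.println(verify(Optional.of("record"), 42L, 1000L)); // normal path
    }
}
```

The warning goes to the logger (stderr by default), so the caller's behavior is unchanged; only diagnosability improves.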
```java
solvedacSyncService.updateTierAndStats(id);
u = userRepository.findById(id).orElse(u);
```
There is a potential race condition here. If two requests call findById simultaneously after the 1-hour threshold has passed, both will trigger updateTierAndStats, resulting in duplicate API calls to Solved.ac. While this won't cause data corruption (the last write wins), it wastes API quota and resources.
Consider one of these approaches:
- Add a distributed lock (e.g., Redis-based) around the sync operation keyed by userId
- Move the sync to a background job scheduler that runs periodically
- Check `statsLastSyncedAt` again after acquiring the transaction to ensure another thread hasn't already updated it
Suggested change:
```diff
-solvedacSyncService.updateTierAndStats(id);
-u = userRepository.findById(id).orElse(u);
+// Re-check the condition against a fresh read of the user to narrow the concurrency window
+User latestUser = userRepository.findById(id).orElse(u);
+if (shouldUpdateSolvedacStats(latestUser)) {
+    solvedacSyncService.updateTierAndStats(id);
+    u = userRepository.findById(id).orElse(u);
+}
```
```java
} catch (Exception e) {
    log.warn("Failed to lazy sync Solved.ac tiers for user {}: {}", userId, e.getMessage());
    // log only, without rethrowing, so the profile lookup itself still succeeds
}
```
While swallowing exceptions here is intentional to prevent profile lookup failures, it could mask legitimate issues. The log message only captures e.getMessage(), which may not provide enough context for debugging.
Consider:
- Logging the full stack trace using `log.warn("Failed to lazy sync...", e)` instead of just the message
- Adding metrics/monitoring for failed sync attempts to track whether there is a systemic issue with the Solved.ac API
```java
var requestFactory = new org.springframework.http.client.JdkClientHttpRequestFactory(httpClient);
requestFactory.setReadTimeout(java.time.Duration.ofSeconds(60));
```
While setting a read timeout helps prevent indefinite blocking on slow responses, this alone doesn't fully prevent Head-of-Line blocking in HTTP/1.1. The issue occurs when a single slow request blocks the entire connection, preventing subsequent requests from being processed.
To truly address Head-of-Line blocking, consider:
- Using HTTP/2 multiplexing (though you've explicitly disabled it for FastAPI compatibility)
- Implementing connection pooling with multiple concurrent connections
- Setting a connection pool size on the HttpClient (e.g., using `.executor()` to control thread pool size)
The current fix (adding a read timeout) will at least prevent indefinite hangs, which is an improvement, but may not fully resolve the blocking issue described in the PR.
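To make the pooling suggestion concrete, here is a minimal sketch of a JDK `HttpClient` that combines the timeout with a bounded worker pool; the pool size of 8 and the connect timeout are illustrative assumptions, not values from this PR:

```java
import java.net.http.HttpClient;
import java.time.Duration;
import java.util.concurrent.Executors;

public class RestClientPoolSketch {
    // Hypothetical configuration: pair the read timeout with a fixed executor so a
    // single slow HTTP/1.1 exchange cannot monopolize the only available worker.
    static HttpClient buildClient() {
        return HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_1_1)      // HTTP/2 stays off, per the FastAPI compatibility note
                .connectTimeout(Duration.ofSeconds(10))    // fail fast on unreachable hosts (assumed value)
                .executor(Executors.newFixedThreadPool(8)) // several workers -> several concurrent connections
                .build();
    }

    public static void main(String[] args) {
        HttpClient client = buildClient();
        System.out.println(client.version());
        System.out.println(client.connectTimeout().get());
        System.exit(0); // the pool's non-daemon threads would otherwise keep the JVM alive
    }
}
```

Note that the per-request read timeout (as added in this PR via `setReadTimeout`) is still needed: the executor only governs concurrency, not how long an individual response may stall.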
```diff
 problem_number = #{problemNumber} AND runtime_ms >= 0 AND memory_kb >= 0 </select>

 <select id="selectSolvedProblemNumbersByUserId" resultType="string" parameterType="long">
     SELECT DISTINCT problem_number FROM algorithm_records
     WHERE user_id = #{userId}
-      AND (record_type = 'SOLVED_AC_SYNC' OR (runtime_ms IS NOT NULL AND memory_kb IS NOT NULL))
+      AND (record_type = 'SOLVED_AC_SYNC' OR (runtime_ms >= 0 AND memory_kb >= 0))
 </select>
```
The change from IS NOT NULL to >= 0 is a significant behavioral change. The old query would consider a record with runtime_ms = -1 and memory_kb = -1 as unsuccessful (NULL check fails), while the new query explicitly filters out negative values.
However, this creates a potential edge case: what about NULL values? The new predicate `runtime_ms >= 0` excludes NULL values as well, since a comparison against NULL evaluates to NULL in SQL, which is not true. That maintains backward compatibility, but the behavior is worth documenting explicitly.
Additionally, verify that all code paths that set these values use -1 (or other negative values) to indicate failure, and never use NULL. Otherwise, records with NULL values might be unintentionally excluded from valid solved problems.
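That NULL behavior can be made explicit with a small Java mirror of the mapper's predicate (the helper name is hypothetical; it is not part of the PR):

```java
public class SuccessCriteriaSketch {
    // Hypothetical mirror of the new SQL predicate: a submission counts as successful
    // only when both metrics are present AND non-negative. This matches SQL semantics,
    // where `runtime_ms >= 0` evaluates to NULL (not true) for NULL values.
    static boolean isSuccessful(Integer runtimeMs, Integer memoryKb) {
        return runtimeMs != null && memoryKb != null && runtimeMs >= 0 && memoryKb >= 0;
    }

    public static void main(String[] args) {
        System.out.println(isSuccessful(120, 2048));  // normal accepted submission
        System.out.println(isSuccessful(-1, -1));     // failure sentinel, excluded
        System.out.println(isSuccessful(null, null)); // NULL metrics, also excluded
    }
}
```

If any code path writes NULL rather than -1 for failures, this table shows they land in the same bucket; if NULL is ever supposed to mean "metric unavailable but solved", the predicate would need to change.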
```java
// CRITICAL: runtime/memory must be valid values (0 ms allowed, -1 excluded)
if (record.getRuntimeMs() < 0 || record.getMemoryKb() < 0) {
```
The comment says "-1 등은 실패/오류로 간주하여 제외" (treats -1 and similar as failures/errors), but the actual validation checks for < 0. This means the code would accept 0ms runtime or 0KB memory as valid.
While 0ms runtime is theoretically possible for very simple problems (or due to measurement granularity), 0KB memory is physically impossible for any program execution. Consider:
- If 0KB memory should be considered invalid, change the check to `<= 0` for memoryKb
- If 0KB is acceptable, update the comment to clarify that only negative values are rejected, not zero
- Consider whether 0 values indicate a parsing/measurement error that should also be filtered out
Suggested change:
```diff
-// CRITICAL: runtime/memory must be valid values (0 ms allowed, -1 excluded)
-if (record.getRuntimeMs() < 0 || record.getMemoryKb() < 0) {
+// CRITICAL: runtime/memory must be valid (runtime may be 0 ms; memory must be positive; error values such as -1 excluded)
+if (record.getRuntimeMs() < 0 || record.getMemoryKb() <= 0) {
```
```diff
 public UserService(UserRepository userRepository,
         OnboardingRepository onboardingRepository,
         StudyRepository studyRepository,
         StudyService studyService,
-        LearningPathCacheMapper learningPathCacheMapper) {
+        LearningPathCacheMapper learningPathCacheMapper,
+        SolvedacSyncService solvedacSyncService) {
     this.userRepository = userRepository;
     this.onboardingRepository = onboardingRepository;
     this.studyRepository = studyRepository;
     this.studyService = studyService;
     this.learningPathCacheMapper = learningPathCacheMapper;
+    this.solvedacSyncService = solvedacSyncService;
```
The UserService constructor now requires a SolvedacSyncService parameter, but the existing UserServiceTest class doesn't include a mock for it. This will cause the existing tests to fail at runtime when @InjectMocks tries to construct UserService.
Add a mock field for SolvedacSyncService to the test class:
```java
@Mock
private SolvedacSyncService solvedacSyncService;
```
Additionally, consider adding test cases for the new lazy sync behavior to ensure:
- Sync is triggered when statsLastSyncedAt is null
- Sync is triggered when more than 60 minutes have passed
- Sync is not triggered when less than 60 minutes have passed
- Sync failures don't break the findById operation
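The throttle conditions in that checklist can be sketched as a standalone helper (the method name and the use of an injected "now" are assumptions mirroring the described `shouldUpdateSolvedacStats`, not the actual implementation):

```java
import java.time.LocalDateTime;

public class LazySyncThrottleSketch {
    static final long SYNC_INTERVAL_MINUTES = 60; // assumption: matches the PR's 1-hour throttle

    // Hypothetical mirror of shouldUpdateSolvedacStats: sync when the user has never
    // been synced, or when the last sync is older than the throttle interval.
    static boolean shouldSync(LocalDateTime statsLastSyncedAt, LocalDateTime now) {
        return statsLastSyncedAt == null
                || statsLastSyncedAt.isBefore(now.minusMinutes(SYNC_INTERVAL_MINUTES));
    }

    public static void main(String[] args) {
        LocalDateTime now = LocalDateTime.of(2024, 1, 1, 12, 0);
        System.out.println(shouldSync(null, now));                 // never synced -> sync
        System.out.println(shouldSync(now.minusMinutes(90), now)); // stale -> sync
        System.out.println(shouldSync(now.minusMinutes(10), now)); // fresh -> skip
    }
}
```

Passing the clock in (rather than calling `LocalDateTime.now()` inside the helper) is what makes the three listed test cases deterministic.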
```java
solvedacSyncService.updateTierAndStats(id);
u = userRepository.findById(id).orElse(u);
```
The user entity is refetched from the database after the sync operation, which is good. However, this refetch happens outside the transaction boundary of updateTierAndStats. If another operation modifies the user between the sync and this refetch, those changes could be lost or inconsistent data could be returned.
Consider either:
- Making the entire block (lines 79-83) run within a single transaction
- Returning the updated user from `updateTierAndStats` to avoid the second database call
```java
LocalDateTime endTime = record.getCreatedAt() != null ? record.getCreatedAt()
        : LocalDateTime.now();
```
This null check creates inconsistent behavior. If record.getCreatedAt() is null (which shouldn't happen in normal operation since created_at has a DEFAULT CURRENT_TIMESTAMP), the elapsed time is calculated from defense start to "now" instead of to the actual submission time. However, a few lines above (line 131), the code checks if the record was created before the defense start time, which would fail if createdAt is null since null.isBefore() would throw a NullPointerException.
Either:
- Remove the null check here since line 131 already implies createdAt is not null
- Add an explicit null check before line 131 and return null if createdAt is null
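The second option can be sketched as a single up-front guard (names and return type are illustrative, not the real DefenseService signature), so that both the "solved now" check and the elapsed-time calculation can safely assume a non-null timestamp:

```java
import java.time.LocalDateTime;

public class CreatedAtGuardSketch {
    // Hypothetical sketch: validate createdAt once, up front. Returns the elapsed
    // seconds on success, or null when the record is inconsistent or too old.
    static Long validate(LocalDateTime createdAt, LocalDateTime defenseStart) {
        if (createdAt == null) {
            return null; // inconsistent record: bail out instead of risking an NPE
        }
        if (createdAt.isBefore(defenseStart)) {
            return null; // solved before the defense started
        }
        return java.time.Duration.between(defenseStart, createdAt).getSeconds();
    }

    public static void main(String[] args) {
        LocalDateTime start = LocalDateTime.of(2024, 1, 1, 12, 0);
        System.out.println(validate(null, start));                  // null createdAt -> rejected
        System.out.println(validate(start.minusMinutes(5), start)); // past solution -> rejected
        System.out.println(validate(start.plusMinutes(5), start));  // valid, 5 minutes elapsed
    }
}
```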
🚀 Background
Through real users, we confirmed a Random Defense win-streak validation error and an issue where tiers were not being refreshed.
🛠️ Key changes
🔗 Related issues