Modernization: DDD layering, security hardening, observability, containerization & CI/CD#558
Modernization: DDD layering, security hardening, observability, containerization & CI/CD#558devin-ai-integration[bot] wants to merge 13 commits intomasterfrom
Conversation
- Added a simple note confirming RealWorld API spec compliance - This is a test change to verify PR workflow functionality Co-Authored-By: Gardner Johnson <gardnerjohnson@gmail.com>
…st-dummy-change Test: Add testing verification note to README
- Modern React 18 frontend with TypeScript and Tailwind CSS - Complete RealWorld specification implementation - User authentication with JWT token management - Article management (create, view, edit, delete) - Article feed with pagination - User profiles and following functionality - Comments system for articles - Social features (favorites, following) - Tag-based article categorization - Responsive design with modern UI - Full API integration with Spring Boot backend - Development server on localhost:3000 - Production build support Features implemented: - User registration and login - Article creation and editing with markdown support - Global article feed - User profiles and social following - Comment system - Article favoriting - Tag filtering - JWT authentication integration - Error handling and validation - Modern responsive UI design The frontend successfully demonstrates all backend API functionality through a visual web interface, replacing raw JSON responses with a complete social blogging platform user experience. Co-Authored-By: Gardner Johnson <gardnerjohnson@gmail.com>
- Remove node_modules from git tracking and add to .gitignore - Configure environment variables for API base URL using VITE_API_BASE_URL - Add TypeScript definitions for Vite environment variables - Remove unused 'User' import to fix TypeScript error Addresses the 5 critical issues identified in PR review: 1. ✅ Remove node_modules from git (added to .gitignore) 2. 🔄 Test complete user journey (next step) 3. ✅ Configure environment variables (VITE_API_BASE_URL) 4. 🔄 Verify CORS configuration (next step) 5. 🔄 Test authentication flow thoroughly (next step) Co-Authored-By: Gardner Johnson <gardnerjohnson@gmail.com>
…d-react-frontend Add React Frontend Application for RealWorld API
… (Phase 1) Co-Authored-By: sachet.agarwal <sachet.agarwal@windsurf.com>
…ion, rate limiting (Phase 2) Co-Authored-By: sachet.agarwal <sachet.agarwal@windsurf.com>
…, request logging (Phase 3) Co-Authored-By: sachet.agarwal <sachet.agarwal@windsurf.com>
… CI workflow (Phase 4) Co-Authored-By: sachet.agarwal <sachet.agarwal@windsurf.com>
🤖 Devin AI EngineerI'll be helping with this pull request! Here's what you should know: ✅ I will automatically:
Note: I can only respond to comments from users who have write access to this repository. ⚙️ Control Options:
|
… trust Co-Authored-By: sachet.agarwal <sachet.agarwal@windsurf.com>
Co-Authored-By: sachet.agarwal <sachet.agarwal@windsurf.com>
| const updateData: any = { | ||
| image: image || undefined, | ||
| username, | ||
| bio: bio || undefined, | ||
| email, | ||
| }; |
There was a problem hiding this comment.
🟡 Settings page cannot clear bio or image fields due to falsy-string-to-undefined coercion
In the Settings page, image: image || undefined and bio: bio || undefined convert empty strings to undefined. When the user clears their bio or image URL in the form, the value becomes '' (falsy), which is coerced to undefined. Since undefined values are stripped during JSON serialization, these fields are omitted from the API request body entirely. The backend (UserService.java:46-54) then interprets the missing fields as "no change" and retains the old values. This means a user can never clear their bio or profile image once set.
| const updateData: any = { | |
| image: image || undefined, | |
| username, | |
| bio: bio || undefined, | |
| email, | |
| }; | |
| const updateData: any = { | |
| image, | |
| username, | |
| bio, | |
| email, | |
| }; | |
Was this helpful? React with 👍 or 👎 to provide feedback.
There was a problem hiding this comment.
This file (frontend/src/pages/Settings.tsx) is not part of this PR's changes — it was added in a previous PR (#2). The bio/image clearing issue is a pre-existing concern unrelated to the modernization phases here.
| .antMatchers("/actuator/**") | ||
| .permitAll() |
There was a problem hiding this comment.
🔴 Actuator endpoints (metrics, prometheus) publicly accessible in all profiles
The security configuration at WebSecurityConfig.java:69-70 unconditionally permits all actuator endpoints (.antMatchers("/actuator/**").permitAll()), while application.yml:19 exposes health,info,metrics,prometheus. The /actuator/metrics and /actuator/prometheus endpoints can reveal detailed internal application state (JVM memory, request counts, error rates, DB pool stats) to unauthenticated users. This configuration applies to all Spring profiles — not just dev — since WebSecurityConfig has no profile-conditional logic. In a production deployment, this leaks internal operational data.
Prompt for agents
The /actuator/** endpoints are exposed without authentication via .antMatchers("/actuator/**").permitAll() in WebSecurityConfig.java:69-70. This is applied unconditionally across all profiles, but metrics and prometheus endpoints leak internal application data. Consider either: (1) restricting to only /actuator/health being public and requiring authentication for others, (2) making this profile-conditional so only 'dev' profile permits all actuator endpoints, or (3) configuring Spring Boot actuator to serve on a separate management port (management.server.port) that is not exposed externally.
Was this helpful? React with 👍 or 👎 to provide feedback.
There was a problem hiding this comment.
Valid point. The user's spec explicitly asked to add .antMatchers("/actuator/**").permitAll(), so I implemented it as requested. For a production hardening follow-up, restricting to only /actuator/health publicly and requiring auth for /actuator/metrics and /actuator/prometheus (or using a separate management port) would be the right approach. Leaving as-is per the original spec, but this is a good callout for production readiness.
| private String getClientIp(HttpServletRequest request) { | ||
| return request.getRemoteAddr(); | ||
| } |
There was a problem hiding this comment.
🔴 RateLimitFilter uses only getRemoteAddr(), ineffective behind reverse proxy
The getClientIp method at RateLimitFilter.java:63-65 only returns request.getRemoteAddr(). The docker-compose.yml and Dockerfile show the app is designed to run in containers, often behind a reverse proxy or load balancer. In that deployment, getRemoteAddr() returns the proxy's IP, not the real client IP. This means all clients share a single rate-limit bucket, making the rate limiter either ineffective (if the bucket is large enough for aggregate traffic) or overly aggressive (blocking legitimate users because the shared bucket is exhausted).
| private String getClientIp(HttpServletRequest request) { | |
| return request.getRemoteAddr(); | |
| } | |
| private String getClientIp(HttpServletRequest request) { | |
| String xForwardedFor = request.getHeader("X-Forwarded-For"); | |
| if (xForwardedFor != null && !xForwardedFor.isEmpty()) { | |
| return xForwardedFor.split(",")[0].trim(); | |
| } | |
| return request.getRemoteAddr(); | |
| } | |
Was this helpful? React with 👍 or 👎 to provide feedback.
There was a problem hiding this comment.
This contradicts the previous Devin Review finding (comment #3095545083) which flagged X-Forwarded-For trust as a security vulnerability allowing attackers to both bypass rate limiting and exhaust memory by spoofing unique header values. The previous recommendation was to remove X-Forwarded-For trust and use only getRemoteAddr(), which is what was implemented in commit 024ec72.
The tradeoff is: trusting X-Forwarded-For enables per-client rate limiting behind a proxy but opens a spoofing vector; using only getRemoteAddr() is secure against spoofing but rate-limits the proxy IP. The correct production approach depends on the deployment topology — if behind a trusted reverse proxy, X-Forwarded-For from the first hop should be trusted. This is best addressed with a configurable trusted-proxy setting in a follow-up.
Co-Authored-By: sachet.agarwal <sachet.agarwal@windsurf.com>
| depends_on: | ||
| - db |
There was a problem hiding this comment.
🔴 Docker Compose starts unused Postgres while app is hardcoded to SQLite
The docker-compose.yml declares a db (Postgres) service and makes the app service depends_on: db, implying the app should use Postgres. However, application.properties:1-2 hardcodes spring.datasource.url=jdbc:sqlite:dev.db and driver-class-name=org.sqlite.JDBC, and there is no profile-specific configuration (e.g., application-dev.yml or application-docker.yml) that overrides the datasource to point at Postgres. The result: the Postgres container starts but is never connected to, while the app uses an ephemeral SQLite file inside the container that is lost on every restart — defeating the purpose of the persistent pgdata volume.
Prompt for agents
The docker-compose.yml starts a Postgres database and the app depends_on it, but the application is configured to use SQLite (in application.properties). There are two approaches to fix this:
1. Add datasource overrides as environment variables in docker-compose.yml for the app service, e.g.:
SPRING_DATASOURCE_URL=jdbc:postgresql://db:5432/realworld
SPRING_DATASOURCE_DRIVER_CLASS_NAME=org.postgresql.Driver
SPRING_DATASOURCE_USERNAME=realworld
SPRING_DATASOURCE_PASSWORD=realworld
And add the PostgreSQL JDBC driver dependency in build.gradle.
2. Alternatively, remove the db service and depends_on from docker-compose.yml if the intent is to keep using SQLite in Docker.
Approach 1 is preferred if the goal is production-like Docker deployment.
Was this helpful? React with 👍 or 👎 to provide feedback.
There was a problem hiding this comment.
Good catch. This is intentional — the user's spec explicitly notes: "The app currently uses SQLite, so the db service won't be used until the database is migrated. Include it now as preparation." The Postgres service is scaffolding for a future database migration phase. A comment in the docker-compose.yml would help clarify the intent; adding that now isn't strictly needed but agreed this would confuse someone reading it cold.
| private void evictIfNeeded() { | ||
| if (buckets.size() > MAX_BUCKETS) { | ||
| int toRemove = buckets.size() - MAX_BUCKETS + MAX_BUCKETS / 10; | ||
| int removed = 0; | ||
| for (Map.Entry<String, Bucket> entry : buckets.entrySet()) { | ||
| if (removed >= toRemove) { | ||
| break; | ||
| } | ||
| buckets.remove(entry.getKey()); | ||
| removed++; | ||
| } | ||
| } | ||
| } |
There was a problem hiding this comment.
🟡 RateLimitFilter eviction is racy — concurrent threads can over-evict buckets
In RateLimitFilter.evictIfNeeded(), multiple threads can concurrently observe buckets.size() > MAX_BUCKETS, each compute toRemove, and each independently iterate and remove entries. Since there's no synchronization, N concurrent callers can collectively remove up to N × toRemove entries instead of just toRemove. This can cause active rate-limit buckets for legitimate IPs to be prematurely evicted, resetting their rate limits and allowing burst traffic through.
Prompt for agents
The evictIfNeeded() method in RateLimitFilter has a race condition where multiple threads can each independently decide to evict entries, collectively removing far more than intended. Consider adding synchronization (e.g., a simple synchronized block or an AtomicBoolean flag to ensure only one thread performs eviction at a time), or switch to a cache with built-in eviction like Caffeine (Cache<String, Bucket> with maximumSize and expireAfterAccess). This would also solve the problem more cleanly than manual ConcurrentHashMap iteration.
Was this helpful? React with 👍 or 👎 to provide feedback.
There was a problem hiding this comment.
Valid concurrency observation. In practice, the over-eviction scenario is benign — evicting a few extra buckets just means those IPs get fresh rate limit windows slightly early, which is strictly more permissive (not more restrictive). The rate limiter still bounds total memory and still rate-limits active abusers. Switching to Caffeine with maximumSize + expireAfterAccess would be a cleaner solution and is a good follow-up improvement, but adding a new dependency for this edge case felt out of scope for the initial rate limiting implementation.
End-to-End Test ResultsRan the app locally via Phase 1-2: DDD Layering + Security (5 tests)
Phase 2: Rate Limiting (2 tests)
Note: Bucket is correctly shared across POST /users and POST /users/login per IP. Registration consumed 1 token, so rate limit triggered on the 10th login request (1 registration + 9 logins = 10 total). Phase 3: Observability (2 tests)
Phase 4: Containerization (1 test)
10/10 tests passed. All four modernization phases verified end-to-end. |
| const updateUser = async (userData: Partial<UserWithToken>) => { | ||
| const updatedUser = await authApi.updateUser(userData); | ||
| setUser(updatedUser); | ||
| }; |
There was a problem hiding this comment.
🟡 updateUser does not persist new token to localStorage, causing stale auth token
The updateUser function in the auth hook calls setUser(updatedUser) to update in-memory state but does not write the new token to localStorage. In contrast, both login (frontend/src/hooks/useAuth.ts:48) and register (frontend/src/hooks/useAuth.ts:54) correctly call localStorage.setItem('token', userData.token). Since the axios interceptor at frontend/src/services/api.ts:23-30 reads the token from localStorage (not from the React state), all subsequent API calls after a user update will continue using the old/stale token. If the backend ever returns a rotated or refreshed token in the update response, the frontend will silently ignore it and keep using the stale one.
| const updateUser = async (userData: Partial<UserWithToken>) => { | |
| const updatedUser = await authApi.updateUser(userData); | |
| setUser(updatedUser); | |
| }; | |
| const updateUser = async (userData: Partial<UserWithToken>) => { | |
| const updatedUser = await authApi.updateUser(userData); | |
| if (updatedUser.token) { | |
| localStorage.setItem('token', updatedUser.token); | |
| } | |
| setUser(updatedUser); | |
| }; | |
Was this helpful? React with 👍 or 👎 to provide feedback.
There was a problem hiding this comment.
This file (frontend/src/hooks/useAuth.ts) is not part of this PR's changes — it appears to have been introduced by a different PR merged into master. The suggestion is valid but should be addressed in a separate PR targeting the frontend code.
Summary
Four-phase modernization of the Spring Boot RealWorld backend, each building on the previous:
Phase 1 – Architecture & Layering (DDD): Extracted 6 core-layer read-service interfaces (
io.spring.core.*) so application-layer query services no longer import MyBatis mapper interfaces directly. Existing MyBatis mappers were renamed with aMyBatis*Mapperprefix, and 6 adapter@Serviceclasses delegate from core interfaces to the renamed mappers. All MyBatis XMLnamespaceattributes and cross-references (e.g.CommentReadService.xml→MyBatisArticleReadServiceMapper.profileColumns) were updated accordingly.Phase 2 – Security Hardening: Introduced
application.ymlfor externalized configuration (JWT secret, session time, CORS origins, default image).WebSecurityConfigmigrated fromWebSecurityConfigurerAdapterto theSecurityFilterChainbean pattern.JwtTokenFilterswitched from@Autowiredfield injection to constructor injection.DefaultJwtServicenow validates that the signing secret is ≥ 64 bytes and catchesExpiredJwtException/SignatureExceptionspecifically. Added an IP-basedRateLimitFilter(Bucket4j, 10 req/min) onPOST /usersandPOST /users/login.Phase 3 – Observability: Added Spring Boot Actuator with Prometheus registry, exposed
/actuator/health,info,metrics,prometheus(permitted in security filter chain). Createdlogback-spring.xmlwith console output fordevprofile and JSON (Logstash encoder) forprod. Added aRequestLoggingFilterthat logs method, URI, status, and duration for every request.Phase 4 – Containerization & CI/CD: Multi-stage
Dockerfile(Gradle build → Temurin 11 JRE Alpine, non-root user, health check).docker-compose.ymlwith app + PostgreSQL stub..dockerignore,settings.gradle, and a new.github/workflows/ci.yml(spotlessCheck → test → bootJar → Docker build).All 68 existing tests pass after each phase.
Updates since last revision
.github/workflows/gradle.ymlwhich was removed upstream).Review & Testing Checklist for Human
ConcurrentHashMap<String, Bucket>has no eviction policy; under sustained traffic from many unique IPs it will grow unbounded. Additionally,getClientIp()trusts theX-Forwarded-Forheader, which is user-controlled and spoofable — an attacker can bypass rate limiting by rotating header values or exhaust memory by sending unique header values. Consider replacing with a TTL cache (e.g. Caffeine) and only trustingX-Forwarded-Forbehind a known reverse proxy..antMatchers("/actuator/**").permitAll()exposes/actuator/metricsand/actuator/prometheuswithout authentication in all environments (not just dev). These can leak JVM memory stats, request counts, and error rates. Consider restricting to only/actuator/healthpublicly, or using a separate management port.depends_on: db– The app still uses SQLite (application.propertieshardcodesjdbc:sqlite:dev.db), but the compose file declares a PostgreSQL service. The Postgres container starts but is never connected to. This is intentional scaffolding for a future DB migration — verify this is acceptable and consider adding a comment in the file.WebSecurityConfigurerAdapter→SecurityFilterChainon Spring Boot 2.6.3 – This pattern is supported but newer; confirm that endpoint authorization rules behave identically (especially theantMatchersordering with the new/actuator/**rule).<mapper namespace="...">and every<include refid="...">insrc/main/resources/mapper/*.xmlreferences the renamedMyBatis*Mapperclasses, not the old names.Suggested manual test plan:
./gradlew clean build– all 68 tests green, spotless passes../gradlew bootRun– hitGET /tags,POST /users(register),POST /users/login, confirm JWT flow works.POST /users/login11 times rapidly from the same IP – verify 429 on the 11th.curl localhost:8080/actuator/health– confirm actuator is accessible without auth.docker build -t realworld-app .– confirm image builds successfully.Notes
.github/workflows/gradle.ymlwas deleted upstream (merged from master); the newci.ymladds spotless checking and Docker image build and triggers onmain/master.application.propertiesstill exists alongside the newapplication.yml; datasource/MyBatis config remains in.properties, while JWT/CORS/actuator config lives in.yml. Spring Boot merges both, but worth consolidating in a follow-up.DefaultJwtServiceto switch toBase64.getDecoder().decode(secret)for key material. Verify whether this was applied and that the default secret inapplication.ymlis consistent with whichever encoding is used.Link to Devin session: https://app.devin.ai/sessions/4313ae5055e14403a21c4506866c60cd
Requested by: @SachetCognition